Showing 335 open source projects for "website using python"

  • 1
    Checkov

    Prevent cloud misconfigurations during build-time for Terraform

    ...Checkov uses a common command-line interface to manage and analyze infrastructure-as-code (IaC) scan results across platforms such as Terraform, CloudFormation, Kubernetes, Helm, ARM Templates, and the Serverless Framework. Verify changes to hundreds of supported resource types across all major cloud providers. Checkov supports developers using Terraform, Terraform plan, CloudFormation, Kubernetes, ARM Templates, Serverless, Helm, and AWS CDK. Scan cloud resources at build time for misconfigured attributes with a simple Python policy-as-code framework (see the sketch below). Analyze relationships between cloud resources using Checkov’s graph-based YAML policies. Execute, test, and modify runner parameters in the context of a subject repository’s CI/CD and version control integrations.
    Downloads: 6 This Week
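    As an illustration of the Python policy-as-code framework mentioned above, here is a minimal custom-check sketch based on Checkov’s documented BaseResourceCheck pattern; the policy name, the CKV_CUSTOM_1 ID, and the versioning logic are hypothetical examples, not a built-in policy.

      # custom_s3_versioning.py -- a hypothetical custom Checkov policy (sketch).
      from checkov.common.models.enums import CheckCategories, CheckResult
      from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck

      class S3BucketVersioningEnabled(BaseResourceCheck):
          def __init__(self):
              super().__init__(
                  name="Ensure S3 bucket versioning is enabled",  # illustrative
                  id="CKV_CUSTOM_1",                              # illustrative custom ID
                  categories=[CheckCategories.BACKUP_AND_RECOVERY],
                  supported_resources=["aws_s3_bucket"],
              )

          def scan_resource_conf(self, conf):
              # `conf` is the parsed Terraform resource block (a dict of lists).
              versioning = conf.get("versioning", [{}])[0]
              if isinstance(versioning, dict) and versioning.get("enabled") == [True]:
                  return CheckResult.PASSED
              return CheckResult.FAILED

      check = S3BucketVersioningEnabled()

    A check like this can then be loaded at scan time with Checkov's --external-checks-dir option.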
  • 2
    Selectolax

    Python binding to Modest and Lexbor engines

    A fast HTML5 parser with CSS selectors, using the Modest and Lexbor engines. Selectolax supports two backends: Modest and Lexbor. By default, all examples use the Modest backend. Most features are nearly identical across backends, but there are still some differences. Currently, the Lexbor backend is in beta and missing some features. To use Lexbor, just import its parser and use it the same way as the HTMLParser (see the sketch below).
    Downloads: 2 This Week
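    A minimal sketch of both backends described above; the HTML snippet is made up:

      from selectolax.parser import HTMLParser        # Modest backend (default)
      from selectolax.lexbor import LexborHTMLParser  # Lexbor backend (beta)

      html = "<html><body><p class='intro'>Hello</p><p>World</p></body></html>"

      # Modest backend: query the parsed tree with CSS selectors.
      tree = HTMLParser(html)
      for node in tree.css("p.intro"):
          print(node.text())  # -> Hello

      # Lexbor backend: imported separately, used the same way.
      print(LexborHTMLParser(html).css_first("p.intro").text())  # -> Hello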
  • 3
    nginx-proxy

    Automated nginx proxy for Docker containers using docker-gen

    nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped. The containers being proxied must expose the port to be proxied, either by using the EXPOSE directive in their Dockerfile or by using the --expose flag to docker run or docker create, and must be on the same network. By default, if you don't pass the --net flag when your nginx-proxy container is created, it will only be...
    Downloads: 1 This Week
  • 4
    Grab Framework Project

    Web Scraping Framework

    Grab is a Python framework for building web scrapers. With Grab you can build web scrapers of various complexity, from simple five-line scripts to complex asynchronous website crawlers processing millions of web pages. Grab provides an API for performing network requests and for handling the received content, e.g. interacting with the DOM tree of the HTML document (see the sketch below).
    Downloads: 0 This Week
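    A minimal sketch of the Grab API described above, pairing a network request with DOM-tree handling; the URL and XPath are illustrative:

      from grab import Grab

      g = Grab()
      resp = g.go("https://example.com")  # perform a network request
      print(resp.code)                    # HTTP status code of the response

      # Handle the received content, e.g. query the DOM tree with XPath.
      print(g.doc.select("//title").text())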
  • 5
    Ajenti 2

    Ajenti Core and stock plugins

    ...Does not overwrite your config files, options, and comments; all changes are non-destructive. Includes lots of plugins for system and software configuration, monitoring, and management. Ajenti 2 is easily extensible using Python, and plugin development is quick and pleasant with the Ajenti APIs. Pleasant to look at, satisfying to click, and accessible anywhere from tablets and mobile devices. Small memory footprint and low CPU usage; runs on low-end machines, wall plugs, routers, and so on.
    Downloads: 5 This Week
  • 6
    proxy.py

    Utilize all available CPU cores for accepting new client connections

    proxy.py is made with performance in mind. By default, proxy.py will try to utilize all CPU cores available to it for accepting new client connections. This is achieved by starting an AcceptorPool which listens on the configured server port. The AcceptorPool then starts Acceptor processes (--num-acceptors) to accept incoming client connections. Alongside, if --threadless is enabled, a ThreadlessPool is set up which starts Threadless processes (--num-workers) to handle the incoming client connections....
    Downloads: 1 This Week
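    proxy.py can also be embedded in a Python program; a minimal sketch, assuming the library’s embedded-mode entry point accepts the CLI flags above as keyword arguments (dashes becoming underscores):

      import proxy

      if __name__ == "__main__":
          # Roughly equivalent to the CLI (assumed kwargs mapping):
          #   proxy --port 8899 --num-acceptors 2 --num-workers 4 --threadless
          proxy.main(
              port=8899,
              num_acceptors=2,  # Acceptor processes in the AcceptorPool
              num_workers=4,    # Threadless worker processes
              threadless=True,
          )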
  • 7
    Powerline

    Statusline plugin for vim with prompts for several other applications

    Powerline is a statusline plugin for vim, and provides statuslines and prompts for several other applications, including zsh, bash, tmux, IPython, Awesome, i3 and Qtile. Powerline was completely rewritten in Python to get rid of as much vimscript as possible. This has allowed much better extensibility, leaner and better config files, and a structured, object-oriented codebase with no mandatory third-party dependencies other than a Python interpreter. Using Python has allowed unit testing of all the project code. The code is tested to work in Python 2.6+ and Python 3. ...
    Downloads: 0 This Week
  • 8
    Amazon Braket PennyLane Plugin

    A plugin for allowing Xanadu PennyLane to use Amazon Braket devices

    The Amazon Braket PennyLane plugin offers two Amazon Braket quantum devices to work with PennyLane. The Amazon Braket Python SDK is an open-source library that provides a framework to interact with quantum computing hardware devices and simulators through Amazon Braket. PennyLane is a machine learning library for optimization and automatic differentiation of hybrid quantum-classical computations. Once the PennyLane-Braket plugin is installed, the provided Braket devices can be accessed...
    Downloads: 0 This Week
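    A minimal sketch of driving one of the plugin’s devices from PennyLane; "braket.local.qubit" is the plugin’s local-simulator device (the other device runs on AWS), and the circuit itself is an arbitrary example:

      import pennylane as qml

      # Load a Braket device through PennyLane's device interface.
      dev = qml.device("braket.local.qubit", wires=2)

      @qml.qnode(dev)
      def circuit(theta):
          qml.RX(theta, wires=0)
          qml.CNOT(wires=[0, 1])
          return qml.expval(qml.PauliZ(1))

      print(circuit(0.5))  # expectation value from the local simulator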
  • 9
    HTTPie Desktop

    Cross-platform API testing client for humans

    HTTPie Desktop is a graphical API client built on top of the popular HTTPie terminal tool, offering a user-friendly interface for testing and interacting with APIs. It combines the simplicity of HTTPie’s CLI with a modern desktop and web UI for a more visual workflow. Developers can easily build, send, and preview HTTP requests without needing to memorize commands or write scripts. The platform supports organizing work into spaces, collections, and tabs, making it ideal for managing multiple...
    Downloads: 13 This Week
  • 10
    Changelog CI

    Changelog CI is a GitHub Action that enables a project to automatically generate changelogs

    Changelog CI is a GitHub Action that enables a project to automatically generate changelogs. Changelog CI can be triggered on pull_request, workflow_dispatch, and any other events that can provide the required inputs. Changelog CI uses Python and the GitHub API to generate a changelog for a repository. First, it tries to get the latest release from the repository (if available). Then, it checks all the pull requests/commits merged after the last release using the GitHub API. After that, it parses the data and generates the changelog. It can use either Markdown or reStructuredText to generate the changelog. ...
    Downloads: 0 This Week
  • 11
    bilibili-manga-downloader

    Download and manage Bilibili Manga chapters with GUI downloader

    BiliBili-Manga-Downloader is an open source desktop application designed to download manga chapters from the Bilibili Manga platform for offline reading and local management. It was created to address limitations of the web reading experience, such as intrusive advertisements, inconvenient image zooming, and inconsistent navigation during reading sessions. It provides a graphical user interface that allows users to search for manga titles using keywords, view detailed information about...
    Downloads: 17 This Week
  • 12
    Scrapy-Redis

    Redis-based components for Scrapy

    ...Scheduler + Duplication Filter, Item Pipeline, Base Spiders. The default requests serializer is pickle, but it can be changed to any module with loads and dumps functions. Note that pickle is not compatible between Python versions. Version 0.3 changed the requests serialization from marshal to cPickle, so requests persisted with version 0.2 will not work on 0.3. The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis (see the sketch below). The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis.
    Downloads: 1 This Week
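    A minimal RedisSpider sketch matching the queue-driven flow described above; the spider name, Redis key, and parsing logic are illustrative:

      # myspider.py -- pops start URLs from the Redis list "myspider:start_urls".
      from scrapy_redis.spiders import RedisSpider

      class MySpider(RedisSpider):
          name = "myspider"
          redis_key = "myspider:start_urls"

          def parse(self, response):
              yield {"url": response.url, "title": response.css("title::text").get()}

      # settings.py -- route scheduling and deduplication through Redis:
      # SCHEDULER = "scrapy_redis.scheduler.Scheduler"
      # DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
      # REDIS_URL = "redis://localhost:6379"

    URLs are then queued with, e.g., redis-cli lpush myspider:start_urls https://example.com.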
  • 13
    autocrawler

    Multiprocess Selenium crawler for downloading images by keywords

    AutoCrawler is a Python-based image crawling tool designed to automatically download large numbers of images from search engines using automated browser interaction. It uses Selenium and a Chrome browser driver to navigate image search pages and collect image sources based on keywords provided by the user. AutoCrawler supports multiprocess and multithreaded downloading, which allows it to retrieve images faster by running several tasks simultaneously.
    Downloads: 1 This Week
  • 14
    DjangoBlog

    A blog system based on Python 3.8 and Django 3.0

    Articles, pages, categories, and tags (add, delete, edit), etc. Articles and pages support Markdown with syntax highlighting. Articles support full-text search. Complete comment feature, including posting reply comments and email notifications. Markdown support. Sidebar features: new articles, most-read articles, tags, etc. OAuth login supported, including Google, GitHub, Facebook, Weibo, and QQ. Memcached supported, with automatic cache refresh. Simple SEO features: notify Google and Baidu when there is a new article...
    Downloads: 0 This Week
  • 15
    Crawl4AI

    Open-source LLM Friendly Web Crawler & Scraper

    Crawl4AI is a high-performance, AI‑ready web crawler tailored for LLM data ingestion and RAG pipelines. It supports adaptive crawling heuristics (stopping when enough info is gathered), structured markdown output, and high-speed parallel execution. Designed to operate at scale with optional Docker deployment and framework integrations.
    Downloads: 0 This Week
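    A minimal sketch of the crawl flow, assuming Crawl4AI’s AsyncWebCrawler quick-start interface; the URL is illustrative:

      import asyncio
      from crawl4ai import AsyncWebCrawler

      async def main():
          # Crawl one page and emit LLM-friendly structured markdown.
          async with AsyncWebCrawler() as crawler:
              result = await crawler.arun(url="https://example.com")
              print(result.markdown)

      asyncio.run(main())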
  • 16
    douyin

    Open source Douyin crawler for collecting and downloading public data

    DouyinCrawler is an open source data collection tool designed to gather publicly available information from the Douyin platform. It demonstrates how to build a Python-based web crawler combined with a graphical interface and command line functionality. It allows users to collect data from various types of Douyin content, including user profiles, videos, hashtags, and music pages. DouyinCrawler supports both automated scraping and batch operations to process multiple targets efficiently. It...
    Downloads: 5 This Week
  • 17
    Tweepy

    Twitter for Python

    An easy-to-use Python library for accessing the Twitter API (see the sketch below). You can also use Git to clone the repository from GitHub to install the latest development version. The easiest way to install the latest release from PyPI is by using pip. Twitter requires all requests to use OAuth for authentication. The API class provides access to the entire Twitter RESTful API.
    Downloads: 2 This Week
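    A minimal sketch of the OAuth flow and the API class mentioned above; the four credential strings are placeholders for your own app keys:

      import tweepy

      # Twitter requires OAuth for all requests.
      auth = tweepy.OAuth1UserHandler(
          "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET"
      )
      api = tweepy.API(auth)

      # The API class wraps the Twitter RESTful API methods.
      for tweet in api.home_timeline(count=5):
          print(tweet.user.screen_name, tweet.text)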
  • 18
    AWS SAM CLI

    CLI tool to build, test, debug, and deploy Serverless applications

    The AWS Serverless Application Model (SAM) CLI is an open-source CLI tool that helps you develop serverless applications containing Lambda functions, Step Functions, API Gateway, EventBridge, SQS, SNS and more. The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and...
    Downloads: 3 This Week
  • 19
    Scweet

    Scrape tweets, profiles, followers and following from Twitter/X

    Scweet is a Python-based Twitter/X scraping library and CLI designed to collect tweets, profile timelines, followers, following lists, and user profile data without requiring the official Twitter/X API or a developer account. Instead of depending on deprecated unauthenticated scraping methods, it works by using X’s web GraphQL API together with authenticated browser cookies, which gives it a more current and practical approach for data extraction.
    Downloads: 3 This Week
  • 20
    Linkedin Scraper

    A library that scrapes Linkedin for user data

    Linkedin Scraper is a library that scrapes LinkedIn for user data. Version 2.0.0 and before is called linkedin_user_scraper and can be installed via pip3 install --user linkedin_user_scraper. LinkedIn has recently blocked people from viewing certain profiles without having previously signed in, so by setting scrape=False the library doesn't automatically scrape the profile, but Chrome will still open the LinkedIn page (see the sketch below). You can log in and log out, and the cookie will stay in the...
    Downloads: 4 This Week
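    A minimal sketch of the sign-in-then-scrape flow described above, based on the library’s Person/actions pattern; the profile URL and credentials are placeholders:

      from selenium import webdriver
      from linkedin_scraper import Person, actions

      driver = webdriver.Chrome()
      # Sign in first, since LinkedIn blocks many profiles for anonymous visitors.
      actions.login(driver, "email@example.com", "password")

      # scrape=False opens the page without scraping immediately;
      # call .scrape() once the session is authenticated.
      person = Person("https://www.linkedin.com/in/some-profile",
                      driver=driver, scrape=False)
      person.scrape(close_on_complete=True)
      print(person.name)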
  • 21
    newspaper4k

    Python library for scraping and analyzing online news articles easily

    Newspaper4k is a Python library designed for extracting, processing, and analyzing news articles from websites. It is a continuation and active fork of the original newspaper3k library, which had stopped receiving updates, with the goal of keeping the ecosystem maintained while adding improvements and bug fixes. It provides developers with tools to automatically download web pages, extract the main article content, and collect associated metadata such as titles, authors, images, and...
    Downloads: 0 This Week
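    A minimal download-and-parse sketch, assuming newspaper4k keeps the classic newspaper3k Article interface; the URL is illustrative:

      from newspaper import Article

      article = Article("https://example.com/some-news-story")
      article.download()  # fetch the web page
      article.parse()     # extract the main article content and metadata

      print(article.title)
      print(article.authors)
      print(article.publish_date)
      print(article.text[:200])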
  • 22
    requests-cache

    Persistent HTTP cache for python requests

    requests-cache is a persistent HTTP cache that provides an easy way to get better performance with the Python requests library. Keep using the requests library you’re already familiar with: add caching with a drop-in replacement for requests.Session, or install globally to add transparent caching to all requests functions (see the sketch below). Get sub-millisecond response times for cached responses; when they expire, you still save time with conditional requests.
    Downloads: 0 This Week
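    A minimal sketch of both usage styles described above, the drop-in CachedSession and the global install; the cache name and URL are illustrative:

      import requests_cache

      # Drop-in replacement for requests.Session, with persistent caching.
      session = requests_cache.CachedSession("demo_cache", expire_after=3600)
      session.get("https://httpbin.org/delay/1")  # first call hits the network
      session.get("https://httpbin.org/delay/1")  # served from cache, sub-millisecond

      # Or patch globally so all requests functions are cached transparently.
      requests_cache.install_cache("demo_cache")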
  • 23
    owllook

    Vertical novel search engine with unified reading and tracking tools

    Owllook is an open source vertical search engine designed for discovering and reading online novels from multiple sources. Instead of redirecting users to different sites, the system parses content from many novel platforms and presents it in a unified reading interface. It focuses on providing a simple and comfortable reading experience with features such as searching for books, following updates, bookmarking chapters, and maintaining a personal bookshelf. It aggregates results from...
    Downloads: 1 This Week
  • 24
    OnionShare

    Securely and anonymously share files of any size

    OnionShare is an open source tool that allows you to securely and anonymously share files of any size, host websites, and chat with friends using the Tor network. There's no need for middlemen that could very well violate the privacy and security of the things you share online. With OnionShare, you can share files directly with just an address in Tor Browser. OnionShare works because it is accessible as a Tor Onion Service. All you need to do is open it and drag and drop the files you...
    Downloads: 2 This Week
  • 25
    diskover-community

    Open source file indexing & storage analytics powered by Elasticsearch

    ...Diskover also helps identify outdated or unused files, duplicate data, and inefficient storage usage that can waste resources or increase operational costs. A Python-based indexing engine performs the scanning and indexing tasks.
    Downloads: 1 This Week