OpenAI-style API for open large language models (see the client sketch after this list)
Run local LLMs on any device; open source
A high-throughput and memory-efficient inference and serving engine for LLMs
FlashInfer: Kernel Library for LLM Serving
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Ready-to-use OCR with 80+ supported languages
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Operating LLMs in production
The official Python client for the Hugging Face Hub (a download sketch follows this list)
Uncover insights, surface problems, monitor, and fine-tune your LLM applications
Everything you need to build state-of-the-art foundation models
A library for accelerating Transformer models on NVIDIA GPUs
MII makes low-latency and high-throughput inference possible
An optimizing inference proxy for LLMs
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
Multilingual Automatic Speech Recognition with word-level timestamps
State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list)
Large Language Model Text Generation Inference
The easiest and laziest way to build multi-agent LLM applications
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Official inference library for Mistral models
Replace OpenAI GPT with another LLM in your app
Bring the notion of Model-as-a-Service to life
Library for OCR-related tasks powered by Deep Learning
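
Several entries above expose an OpenAI-compatible REST API (the OpenAI-style API layer and the drop-in OpenAI replacement in particular), so the stock `openai` Python client can target them by overriding its base URL. A minimal sketch, assuming a local server at `http://localhost:8000/v1` and a placeholder model id; substitute whatever your server actually serves:

```python
from openai import OpenAI

# Point the official OpenAI client at a local OpenAI-compatible server.
# The URL, API key, and model id below are assumptions, not real defaults.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="my-model",  # placeholder: use the id your server registers
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Because the wire format is unchanged, switching between these backends is a one-line edit to `base_url`.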
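
The Hub client mentioned above is the `huggingface_hub` package; one of its most common tasks is fetching a single file from a repo on the Hub. A minimal sketch using its `hf_hub_download` helper (the repo and filename here are just examples):

```python
from huggingface_hub import hf_hub_download

# Download (and cache) a single file from a model repo on the Hub.
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(config_path)  # local path inside the Hugging Face cache directory
```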
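
The parameter-efficient fine-tuning entry refers to adapter methods such as LoRA, which freeze the base model and train small low-rank matrices injected into selected layers. A minimal sketch with the `peft` package, assuming GPT-2 purely as a convenient small base model:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Frozen base model; GPT-2 is an arbitrary small example.
base = AutoModelForCausalLM.from_pretrained("gpt2")

# Rank-8 LoRA adapters on GPT-2's fused attention projection layer.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a fraction of a percent is trainable
```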