MII makes low-latency and high-throughput inference possible
A high-performance ML model serving framework that offers dynamic batching
Large Language Model Text Generation Inference
20+ high-performance LLMs with recipes to pretrain and finetune at scale
GPU environment management and cluster orchestration
A Pythonic framework to simplify AI service building
Replace OpenAI GPT with another LLM in your app
PyTorch domain library for recommendation systems
A set of Docker images for training and serving models in TensorFlow
PyTorch extensions for fast R&D prototyping and Kaggle farming
Libraries for applying sparsification recipes to neural networks
Low-latency REST API for serving text embeddings
Standardized Serverless ML Inference Platform on Kubernetes
Simplifies the local serving of AI models from any source
Lightweight Python library for real-time multi-object tracking
Neural Network Compression Framework for enhanced OpenVINO inference
OpenAI-style API for open large language models
Sparsity-aware deep learning inference runtime for CPUs
PyTorch library of curated Transformer models and their components
Library for serving Transformers models on Amazon SageMaker
Efficient few-shot learning with Sentence Transformers
A library for accelerating Transformer models on NVIDIA GPUs
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
Probabilistic reasoning and statistical analysis in TensorFlow
Official inference library for Mistral models