Official inference repo for FLUX.1 models
Qwen3-ASR is an open-source series of ASR models
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Qwen2.5-VL is a multimodal large language model series
A Chinese and English multimodal conversational language model
State-of-the-art image & video CLIP and multimodal large language models
Open-source large language model family from Tencent Hunyuan
Large language models & vision-language models based on linear attention
A Pragmatic VLA Foundation Model
Ling is a MoE LLM developed and open-sourced by InclusionAI
A series of math-specific large language models based on Qwen2
NVIDIA Isaac GR00T N1.5 is the world's first open foundation model for generalized humanoid robot reasoning and skills
A state-of-the-art open visual language model
Official inference repo for FLUX.2 models
Contexts Optical Compression
Repository for Qwen2-Audio chat & pretrained large audio language models
A Family of Open-Source Music Foundation Models
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Ring is a reasoning MoE LLM developed and open-sourced by InclusionAI
Research code artifacts for Code World Model (CWM)
tiktoken is a fast BPE tokeniser for use with OpenAI's models
CLIP: predict the most relevant text snippet given an image
Open-source multi-speaker long-form text-to-speech model
GPT-4V-level open-source multimodal model based on Llama3-8B