Convert codebases into structured prompts optimized for LLM analysis
Local AI coding agent CLI with multi-agent orchestration tools
Generate music based on natural language prompts using LLMs
High-performance inference server for text embedding models with an API layer
157 models, 30 providers, one command to find what runs on your hardware
Renderer for the harmony response format to be used with gpt-oss
All-in-one LLM CLI tool featuring Shell Assistant
Python-free Rust inference server
Fast, flexible LLM inference
Distributed LLM and StableDiffusion inference
A small clipboard reader
Shinkai lets you create advanced local AI agents effortlessly
A computer vision framework to create and deploy apps in minutes
Python Computer Vision & Video Analytics Framework With Batteries Included