The directory
Curated by humans. Honest pricing, real pros & cons, no sponsored rankings.
The most widely adopted AI API — powering GPT-4o, o1, DALL-E, and Whisper.
The Swiss Army knife of local AI — Gradio interface supporting every model format and backend.
Build teams of AI agents that collaborate — role-based multi-agent orchestration framework.
Europe's leading AI lab — open-weight models with frontier performance at competitive prices.
Jamba model family — hybrid SSM-Transformer architecture for efficient long-context AI.
High-throughput LLM serving engine — the production standard for GPU inference at scale.
The fastest AI inference — custom LPU chips delivering up to 10× faster inference on open-source models.
High-performance inference platform — fast, cheap, and optimized for production workloads.
Run LLMs locally with one command — the easiest way to get AI running on your machine.
Beautiful desktop app for running LLMs locally — discover, download, and chat with AI models.
Access Claude models via API — industry-leading performance on coding, analysis, and long-context tasks.
AI gateway with built-in MCP server directory — route, monitor, and extend your AI stack.
13–24 of 156 tools