The directory
Curated by humans. Honest pricing, real pros & cons, no sponsored rankings.
High-throughput LLM serving engine — the production standard for GPU inference at scale.
High-performance inference platform — fast, cheap, and optimized for production workloads.
Access Claude models via API — industry-leading for coding, analysis, and long-context tasks.
Jamba model family — hybrid SSM-Transformer architecture for efficient long-context AI.
Run 200+ open-source models via fast, affordable API — the one-stop shop for open AI.
The fastest AI inference — custom LPU chips delivering up to 10x faster token generation than GPUs for open-source models.
Build teams of AI agents that collaborate — role-based multi-agent orchestration framework.
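To make "role-based multi-agent orchestration" concrete, here is a toy sketch of the pattern — not any framework's actual API: each agent owns a role, and a crew runs tasks in order, handing each result to the next agent as context. All names below are illustrative assumptions.

```python
# Toy sketch of role-based multi-agent orchestration (illustrative only,
# not a real framework's API): agents have roles, and a crew chains them,
# passing each agent's output as context to the next.

class Agent:
    def __init__(self, role, handler):
        self.role = role
        self.handler = handler  # callable: (task, context) -> result

    def run(self, task, context):
        return self.handler(task, context)

class Crew:
    def __init__(self, agents):
        self.agents = agents

    def kickoff(self, tasks):
        context, results = "", []
        for agent, task in zip(self.agents, tasks):
            result = agent.run(task, context)
            results.append((agent.role, result))
            context = result  # hand off to the next agent in the chain
        return results

# Two-role pipeline: a researcher's output feeds the writer.
researcher = Agent("researcher", lambda task, ctx: f"notes on {task}")
writer = Agent("writer", lambda task, ctx: f"draft using {ctx}")
out = Crew([researcher, writer]).kickoff(["topic X", "article"])
```

Real frameworks add tool use, memory, and parallel execution on top of this basic hand-off loop.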
The most widely adopted AI API — powering GPT-4o, o1, DALL-E, and Whisper.
Europe's leading AI lab — open-weight models with frontier performance at competitive prices.
Enterprise AI platform specializing in RAG, embeddings, and text understanding.
Meta's open-source LLM family — the most popular foundation for self-hosted and fine-tuned AI.
One API, every model — unified access to OpenAI, Anthropic, Google, Meta, and 200+ more.
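"One API, every model" gateways typically accept an OpenAI-style chat payload and route on the model string alone. The sketch below builds such a payload without sending it; the endpoint convention and model identifiers are illustrative assumptions, not details from this directory.

```python
# Sketch of how a unified model-routing API is typically used: the request
# shape stays fixed (OpenAI-style chat messages) and only the model string
# changes to target a different provider. Model names here are assumptions.

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Return an OpenAI-style chat-completion payload for a routing gateway."""
    return {
        "model": model,  # e.g. "openai/gpt-4o" vs "meta-llama/llama-3-70b"
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Same prompt, two providers — only the model field differs.
req_a = build_chat_request("openai/gpt-4o", "Summarize RAG in one sentence.")
req_b = build_chat_request("meta-llama/llama-3-70b", "Summarize RAG in one sentence.")
```

Because the payload shape is shared, swapping providers is a one-line change rather than a new client integration.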
1–12 of 156 tools