
Don’t see an integration you need? We’d love to hear from you!
Integration Types
Phoenix offers several types of integrations to support your AI development workflow:
Developer Tools
Integrate Phoenix with AI coding assistants like Claude Code and Cursor for debugging and analysis.
Tracing Integrations
Automatically capture traces from your AI applications built with popular frameworks and LLM providers.
Eval Model Integrations
Use any LLM provider to power Phoenix’s evaluation capabilities for scoring and classifying your traces.
Eval Library Integrations
Integrate with external evaluation libraries like Ragas and Cleanlab to visualize results in Phoenix.
Vector Database Integrations
Connect with vector databases for embedding analysis and retrieval debugging.
Span Processors
Convert traces from other instrumentation libraries to the OpenInference format.
Developer Tools
Integrate Phoenix with AI coding assistants to debug and analyze your LLM applications directly from your development environment.
Coding Agents
Install Phoenix debugging skills and CLI for Claude Code, Cursor, and other AI coding assistants.
Phoenix MCP Server
Connect AI assistants directly to your Phoenix instance via the Model Context Protocol.
Tracing Integrations
Phoenix captures detailed traces from your AI applications, giving you visibility into every step of your LLM pipeline.
By Language
- Python
- TypeScript
- Java
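In Python, for example, setup typically amounts to registering an OpenTelemetry tracer provider that exports spans to a running Phoenix instance. The snippet below is a minimal sketch, assuming the `arize-phoenix-otel` package is installed and Phoenix is running locally on its default endpoint; the project name is illustrative.

```python
# A minimal sketch, assuming arize-phoenix-otel is installed and a Phoenix
# instance is running locally on its default endpoint.
from phoenix.otel import register

# Register a tracer provider that exports spans to Phoenix.
# "my-app" is an illustrative project name; auto_instrument enables any
# OpenInference instrumentors that are installed in the environment.
tracer_provider = register(
    project_name="my-app",
    auto_instrument=True,
)
```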
LLM Providers
Phoenix provides native tracing support for all major LLM providers:
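As an illustration, tracing a provider usually means installing the matching OpenInference instrumentor and attaching it to the tracer provider. The sketch below uses the OpenAI instrumentor (`openinference-instrumentation-openai`) as one example; other providers follow the same pattern, and the project name is illustrative.

```python
# A minimal sketch, assuming openinference-instrumentation-openai and
# arize-phoenix-otel are installed and Phoenix is running locally.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(project_name="my-app")  # illustrative project name

# Attach the instrumentor so each OpenAI SDK call is captured as an
# OpenInference span and exported to Phoenix.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```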
Platforms
Integrate Phoenix with AI development platforms and infrastructure:
Eval Model Integrations
Phoenix’s evaluation library (phoenix-evals) can use any LLM provider to power evaluations. These models score, classify, and analyze your traces.
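As a rough sketch of what this looks like in practice, the example below uses phoenix-evals with an OpenAI judge model to classify the relevance of retrieved documents. The dataframe contents and model name are illustrative, and the same pattern works with any other supported provider.

```python
# A minimal sketch, assuming arize-phoenix-evals is installed and an
# OPENAI_API_KEY is set; the dataframe contents and model name are illustrative.
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    llm_classify,
)

# Each row pairs a user query ("input") with a retrieved document ("reference").
df = pd.DataFrame(
    {
        "input": ["How do I enable tracing in Phoenix?"],
        "reference": ["Call phoenix.otel.register() to export spans to Phoenix."],
    }
)

# Any supported provider model can act as the judge; OpenAI is just one option.
eval_model = OpenAIModel(model="gpt-4o-mini")

relevance = llm_classify(
    dataframe=df,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    model=eval_model,
    rails=list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()),  # e.g. "relevant" / "unrelated"
)
print(relevance["label"])
```

The result is a dataframe with one label per row, which can then be logged back to Phoenix alongside the corresponding traces.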