# Provider Integration
Conduit supports a wide range of LLM providers, allowing you to switch between services without changing your application code.
## Supported Providers
Conduit integrates with many popular LLM providers:
| Provider | Supported Models | Key Features |
|---|---|---|
| OpenAI | GPT-3.5, GPT-4, etc. | Chat, Completions, Embeddings, Images |
| Anthropic | Claude 3 (Opus, Sonnet, Haiku) | Chat with long context windows |
| Azure OpenAI | Same as OpenAI | Azure-specific deployments |
| Google Gemini | Gemini Pro, Ultra | Chat, Embeddings, Vision |
| Cohere | Command, Embed | Chat, Completions, Embeddings |
| Mistral | Mistral Small, Medium, Large | Efficient chat models |
| AWS Bedrock | Claude, Llama 2, etc. | AWS-integrated models |
| Groq | Llama 2, Mixtral, etc. | High-speed inference |
| Replicate | Various open models | Wide range of specialized models |
| Hugging Face | Thousands of models | Open-source model variety |
| Ollama | Local open models | Self-hosted models |
| Vertex AI | Gemini, etc. | Enterprise-grade Google Cloud deployments |
| SageMaker | Custom models | AWS ML deployments |
## Adding Provider Credentials
To add a new provider:
1. Navigate to Configuration > Provider Credentials
2. Click Add Provider Credential
3. Select the provider type
4. Enter your API key and other required credentials
5. Save the configuration
Required credentials vary by provider:
- API Keys (most providers)
- Organization IDs (some providers)
- Project IDs (Google)
- Deployment IDs (Azure)
- Region settings (AWS)
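As a sketch of how these per-provider credential shapes differ, the following assembles illustrative credential records. The field names and provider identifiers here are assumptions for illustration, not Conduit's actual schema; consult the Admin UI for the real fields.

```python
# Illustrative credential records; field names are hypothetical,
# not Conduit's actual schema.
def build_credential(provider: str, api_key: str, **extra) -> dict:
    """Assemble a credential record, attaching provider-specific fields."""
    cred = {"provider": provider, "api_key": api_key}
    cred.update(extra)  # e.g. organization_id, project_id, deployment_id, region
    return cred

openai_cred = build_credential("openai", "sk-placeholder", organization_id="org-123")
azure_cred = build_credential("azure-openai", "azure-key", deployment_id="gpt4-prod")
aws_cred = build_credential("bedrock", "aws-key", region="us-east-1")
```

The point of the extra keyword arguments is that only some providers need fields beyond the API key, matching the list above.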
## Model Mappings
After adding provider credentials, create model mappings to link the model names clients request to actual provider models:
1. Navigate to Configuration > Model Mappings
2. Click Add Model Mapping
3. Define:
   - Virtual Model Name: the name clients will use in requests
   - Provider Model: the actual model identifier at the provider
   - Provider: the provider to route through
   - Priority: used for routing decisions among multiple mappings
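Once a mapping exists, clients reference only the virtual model name and Conduit resolves it to the mapped provider model, which is what makes provider switching transparent to application code. A minimal sketch, assuming an OpenAI-compatible chat endpoint (common for LLM gateways) and an illustrative URL and model name:

```python
import json

# The client request names only the virtual model ("my-chat-model");
# Conduit resolves it to the mapped provider model, so changing providers
# means editing the mapping, not the client. URL and names are illustrative.
payload = {
    "model": "my-chat-model",  # virtual model name from the mapping
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload).encode()
# e.g. POST body to https://conduit.example.com/v1/chat/completions
```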
## Provider-Specific Features
Conduit normalizes provider features where possible, but also exposes provider-specific capabilities:
- Vision Models: Available through multimodal input support
- Function Calling: Supported for providers that offer it
- Streaming: Enabled for all providers that support it
- Long Context: Available for models that support extended contexts
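Streaming responses from OpenAI-compatible gateways are conventionally delivered as server-sent events, with each event carrying a JSON delta. This is a hedged sketch of consuming such a stream, assuming the common `data: {...}` / `data: [DONE]` convention rather than anything Conduit-specific:

```python
import json

def parse_sse_chunks(raw: str):
    """Yield parsed JSON objects from an OpenAI-style SSE stream.

    Assumes each event is a 'data: {...}' line and the stream ends with
    'data: [DONE]' -- the common convention, not Conduit-specific.
    """
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        yield json.loads(data)

sample = 'data: {"choices":[{"delta":{"content":"Hi"}}]}\n\ndata: [DONE]\n'
chunks = list(parse_sse_chunks(sample))
```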
## Custom Provider Integration
For on-premise or custom LLM deployments:
1. Navigate to Configuration > Provider Credentials
2. Select the "Custom Provider" type
3. Configure the endpoint URL and authentication
4. Create model mappings as needed
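To make the endpoint-plus-authentication configuration concrete, here is a sketch of addressing a self-hosted endpoint. The URL, bearer-token scheme, and path are assumptions for illustration; a custom deployment may use a different path or auth header:

```python
import urllib.request

# Sketch of a request to a self-hosted endpoint; URL, path, and
# bearer-token auth are illustrative assumptions.
def build_request(base_url: str, token: str,
                  path: str = "/v1/chat/completions") -> urllib.request.Request:
    """Build a POST request against a custom endpoint with bearer auth."""
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("http://llm.internal:8080/", "local-token")
```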
## Monitoring Provider Health
Conduit includes provider health monitoring:
- Navigate to Provider Health
- View the status of each configured provider
- Configure health check settings
- Set up notifications for provider issues
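The notification behavior described above typically hinges on a threshold of consecutive failed checks. A minimal sketch of that decision logic, where the threshold and the health check itself (commonly an HTTP ping) are assumptions rather than Conduit internals:

```python
# Alert once a provider fails `threshold` consecutive health checks.
# Threshold and check mechanism are illustrative assumptions.
def should_alert(results: list, threshold: int = 3) -> bool:
    """Return True if the most recent `threshold` checks all failed.

    `results` is a chronological list of booleans (True = check passed).
    """
    if len(results) < threshold:
        return False
    return not any(results[-threshold:])
```

Requiring several consecutive failures before alerting is the usual way to avoid paging on a single transient timeout.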
## Next Steps
- Learn about Model Routing to understand how requests are directed to providers
- Explore Multimodal Support for handling images and other media
- See the API Reference for details on provider-specific parameters