Frequently Asked Questions
General Questions
What is Conduit?
Conduit is a unified API gateway for Large Language Models (LLMs) that simplifies the integration of various AI services into your applications. It provides a consistent interface, allowing you to switch between different LLM providers seamlessly without changing your application code.
Is Conduit open source?
Yes, Conduit is an open-source project. The source code is available on GitHub under the MIT license.
What providers does Conduit support?
Conduit supports many providers including:
- OpenAI
- Anthropic
- Azure OpenAI
- Google Gemini
- Cohere
- Mistral
- AWS Bedrock
- Groq
- Replicate
- HuggingFace
- Ollama
- And more
What are the system requirements for running Conduit?
Minimum requirements:
- Docker and Docker Compose (for containerized deployment)
- 2GB RAM
- 20GB disk space
- Internet connectivity to reach LLM providers
Recommended:
- 4GB+ RAM
- 50GB+ SSD storage
- Multi-core CPU
Does Conduit work with my existing OpenAI code?
Yes, Conduit provides an OpenAI-compatible API, allowing you to use it with existing code that works with the OpenAI API. You typically only need to change the base URL and API key.
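For example, a minimal sketch with the official OpenAI Python SDK (the port, key, and model name below are placeholders — use your actual Conduit API endpoint, a virtual key, and a model you have mapped):

```python
from openai import OpenAI

# Point the standard OpenAI client at Conduit instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:5000/v1",   # placeholder: your Conduit API endpoint
    api_key="condt_your_virtual_key",      # a Conduit virtual key, not an OpenAI key
)

response = client.chat.completions.create(
    model="gpt-4o",                        # resolved through your Conduit model mappings
    messages=[{"role": "user", "content": "Hello from Conduit!"}],
)
print(response.choices[0].message.content)
```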
Setup and Configuration
How do I set up Conduit for the first time?
- Clone the repository: `git clone https://github.com/knnlabs/conduit.git`
- Create a `.env` file with your configuration
- Run `docker compose up -d`
- Access the Web UI at `http://localhost:5001`
- Log in with your master key
- Add provider credentials and configure model mappings
For detailed instructions, see the Installation Guide.
How do I add a new provider?
- Navigate to Configuration > Provider Credentials in the Web UI
- Click Add Provider Credential
- Select the provider type
- Enter your API key and other required credentials
- Save the configuration
How do I create a virtual key?
- Navigate to Virtual Keys in the Web UI
- Click Create New Key
- Provide a name and description
- Set permissions and rate limits
- Click Create
- Copy the generated key (it will only be shown once)
Can I use Conduit without Docker?
Yes, you can run Conduit directly on your system:
- Install .NET 8 SDK
- Clone the repository
- Build the solution: `dotnet build`
- Run the API: `dotnet run --project ConduitLLM.Http`
- Run the Web UI: `dotnet run --project ConduitLLM.WebUI`
How do I upgrade Conduit to a newer version?
For Docker deployments:
- Pull the latest code: `git pull`
- Rebuild and restart containers: `docker compose down && docker compose up -d --build`
For direct deployments:
- Pull the latest code: `git pull`
- Rebuild the solution: `dotnet build`
- Restart the services
How do I reset my master key?
- Stop Conduit: `docker compose down`
- Update the `CONDUIT_MASTER_KEY` in your `.env` file
- Restart Conduit: `docker compose up -d`
Usage
How do I switch between different LLM providers?
There are several ways to switch providers:
- Model Mappings: Change the provider in your model mapping configuration
- Routing Strategies: Set up routing rules to automatically select providers
- Request-Level: Specify a provider override in individual requests
How does Conduit handle provider failures?
Conduit includes fallback mechanisms that can automatically route requests to alternative providers when a primary provider fails. This behavior is configurable through the routing settings.
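Conceptually, the fallback behaves like the sketch below (illustrative Python only, not Conduit's implementation; the error type and provider callables are made up for the example):

```python
class ProviderError(Exception):
    """Raised when a single provider fails to serve a request."""

def complete_with_fallback(request, providers):
    # Try each configured provider in priority order; return the first success.
    last_error = None
    for provider in providers:
        try:
            return provider(request)
        except ProviderError as exc:
            last_error = exc   # remember the failure and fall through to the next provider
    raise RuntimeError("All providers failed") from last_error

# Example: the primary fails, the backup answers.
def primary(req):
    raise ProviderError("upstream 503")

def backup(req):
    return {"content": "served by backup"}

print(complete_with_fallback({"prompt": "hi"}, [primary, backup]))
```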
Can I use Conduit with local models?
Yes, Conduit supports local LLM deployments through:
- Ollama integration
- Custom provider configuration for local API endpoints
- Direct integration with local model servers
How does caching work in Conduit?
Conduit can cache responses from LLM providers to improve performance and reduce costs:
- A cache key is generated from the request, so identical requests map to the same key
- If a response exists in the cache, it's returned immediately
- Otherwise, the request is sent to the provider and the response is cached
- Cache TTL (time-to-live) controls how long responses are stored
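Conceptually, that flow looks like the sketch below (illustrative Python only; Conduit's actual cache key scheme, storage backend, and TTL handling may differ):

```python
import hashlib
import json
import time

cache = {}  # key -> (expires_at, response)

def cache_key(request: dict) -> str:
    # Identical requests (same model, messages, parameters) produce the same key.
    canonical = json.dumps(request, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def call_provider(request: dict) -> dict:
    # Placeholder standing in for the real upstream LLM call.
    return {"model": request["model"], "content": "..."}

def get_completion(request: dict, ttl_seconds: int = 300) -> dict:
    key = cache_key(request)
    hit = cache.get(key)
    if hit and hit[0] > time.monotonic():
        return hit[1]                      # cache hit: return immediately
    response = call_provider(request)      # cache miss: forward to the provider
    cache[key] = (time.monotonic() + ttl_seconds, response)
    return response
```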
What is the maximum number of virtual keys I can create?
There is no hard limit on the number of virtual keys. However, for performance reasons, we recommend keeping the number of active keys under a few thousand.
Cost and Performance
Does Conduit add latency to requests?
Conduit adds minimal latency (typically 10-50ms) to requests. This overhead is usually negligible compared to the response time of LLM providers (often 500ms-5s). Enabling caching can significantly reduce latency for repeated requests.
How does Conduit help with cost management?
Conduit provides several cost management features:
- Budget Limits: Set spending caps for virtual keys
- Cost Tracking: Monitor usage across providers
- Least Cost Routing: Automatically select the most economical provider
- Caching: Avoid paying for repeated identical requests
- Usage Analytics: Identify opportunities for optimization
How accurate is the token counting?
Conduit's token counting is highly accurate for most models, typically within 1-2% of the provider's count. For OpenAI models, Conduit uses the same tokenizer (tiktoken) that OpenAI uses.
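If you want to sanity-check counts for OpenAI-family models yourself, the tiktoken library reproduces the same tokenization (the model name here is just an example):

```python
import tiktoken

# Count tokens the same way OpenAI's models do.
encoding = tiktoken.encoding_for_model("gpt-4")
tokens = encoding.encode("How many tokens is this sentence?")
print(len(tokens))
```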
What database does Conduit use?
By default, Conduit uses SQLite for simplicity. For production deployments with high throughput, you can configure it to use PostgreSQL.
Can Conduit handle high traffic?
Yes, Conduit is designed to handle high traffic loads. For improved performance in high-traffic scenarios:
- Use Redis for caching
- Configure PostgreSQL for the database
- Set up multiple API instances behind a load balancer
- Optimize rate limits and routing strategies
Security
Is communication between Conduit and providers encrypted?
Yes, all communication between Conduit and LLM providers uses HTTPS/TLS encryption.
How are provider credentials stored?
Provider credentials are stored in the database with appropriate security measures. For additional security, you can use environment variables instead of storing credentials in the database.
Can I restrict virtual keys to specific IP addresses?
Yes, you can configure IP restrictions for virtual keys in the Web UI under the key's advanced settings.
Does Conduit support rate limiting?
Yes, Conduit supports rate limiting at both the global level and per virtual key. You can configure limits based on requests per minute, hour, or day.
Can I audit usage of virtual keys?
Yes, Conduit provides detailed logging of all requests, including which virtual key was used, the requested model, and usage statistics. These logs are accessible through the Web UI.
Troubleshooting
Why am I getting "Model not found" errors?
This usually happens because:
- The model name in your request doesn't match any configured model mapping
- The virtual key doesn't have permission to access the model
- The provider for the model is not properly configured
Check your model mappings in the Web UI under Configuration > Model Mappings.
Why are my requests timing out?
Request timeouts can occur due to:
- Provider service issues
- Network connectivity problems
- Request complexity (very large prompts)
- Insufficient timeout configuration
Try increasing the timeout settings or using a fallback configuration.
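If the limit is on the client side, you can also raise the per-request timeout there; for example, with the OpenAI Python SDK (endpoint and key are placeholders):

```python
from openai import OpenAI

# Allow up to 120 seconds per request instead of the SDK default.
client = OpenAI(
    base_url="http://localhost:5000/v1",   # placeholder: your Conduit API endpoint
    api_key="condt_your_virtual_key",
    timeout=120.0,
)
```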
How do I fix "Rate limit exceeded" errors?
If you're encountering rate limit errors:
- Check if the virtual key has hit its configured rate limit
- Verify if the provider itself is rate limiting your account
- Implement request batching or throttling in your application (see the example below)
- Consider using multiple provider accounts
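For client-side throttling, a simple exponential backoff might look like this sketch (the error type assumes the OpenAI Python SDK; the endpoint, key, and model are placeholders):

```python
import time
from openai import OpenAI, RateLimitError

client = OpenAI(
    base_url="http://localhost:5000/v1",   # placeholder: your Conduit API endpoint
    api_key="condt_your_virtual_key",
)

def complete_with_backoff(messages, retries: int = 5):
    # Retry on 429 responses with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(retries):
        try:
            return client.chat.completions.create(model="gpt-4o", messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt)
    raise RuntimeError("Rate limit still exceeded after retries")
```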
Why isn't caching working?
If caching isn't working as expected:
- Verify that caching is enabled in Configuration > Caching
- Check the cache provider configuration (Redis or in-memory)
- Ensure the cache key generation is working correctly
- Check if requests include the `no_cache` parameter
How do I fix database migration errors?
For database migration issues:
- Backup your data
- Check database permissions
- Try manually running migrations: `dotnet ef database update --project ConduitLLM.Configuration`
- For persistent issues, consider recreating the database
Extensions and Customization
Can I add custom providers to Conduit?
Yes, you can add custom providers by:
- Implementing the `ILLMClient` interface
- Registering your provider in the `LLMClientFactory`
- Adding appropriate configuration options
Can I modify the routing algorithm?
Yes, you can create custom routing strategies by:
- Implementing the `IModelSelectionStrategy` interface
- Registering your strategy in the `ModelSelectionStrategyFactory`
- Selecting your strategy in the routing configuration
How can I contribute to Conduit?
You can contribute to Conduit by:
- Submitting bug reports and feature requests on GitHub
- Creating pull requests for bug fixes or new features
- Improving documentation
- Sharing your experiences and use cases
See the Contributing Guide for details.
Are there any webhooks or events I can subscribe to?
Conduit supports webhook notifications for various events:
- Provider health status changes
- Budget alerts
- Error notifications
- Request logging (optional)
Configure webhooks in the Web UI under Configuration > Notifications.
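If you want to experiment with receiving these notifications, a minimal HTTP receiver might look like this sketch (the endpoint path and payload fields are assumptions — inspect a real delivery to see exactly what Conduit sends):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/conduit-webhook", methods=["POST"])
def conduit_webhook():
    # The payload shape is not documented here; log it and inspect a real event.
    event = request.get_json(force=True)
    print("Received event:", event)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```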
Can I use Conduit in a microservices architecture?
Yes, Conduit works well in a microservices architecture:
- Deploy Conduit as a standalone service
- Configure multiple instances for high availability
- Use a shared Redis cache for consistency
- Implement authentication and routing as needed