Adding Custom Models in ConsoleX AI
ConsoleX AI allows you to integrate your own AI model endpoints. This guide explains how to set them up and use them.
Supported Model Types
ConsoleX AI supports integrating model endpoints from the following model providers and cloud service providers:
- OpenAI
- Azure OpenAI
- Anthropic
- Google AI Studio
- Google Vertex AI
- AWS Bedrock
- Ollama
- DeepSeek
- xAI
- Groq AI
- Qwen
- OpenRouter
- Mistral
- Any OpenAI API-compatible endpoint
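An "OpenAI API-compatible endpoint" is any server that accepts the standard chat-completions request shape. As a rough illustration (the base URL and model name below are placeholders, not real values), such an endpoint expects a payload like this:

```python
# Minimal chat-completions payload accepted by an OpenAI API-compatible
# endpoint. The base URL and model name are hypothetical placeholders.
base_url = "https://my-model-host.example.com/v1"  # your endpoint's base URL
payload = {
    "model": "my-custom-model",  # model identifier on your server
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}
# A POST to f"{base_url}/chat/completions" with an Authorization header
# carrying your API key completes the call.
```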
Steps to Set Up Custom Models
To integrate your own model endpoint, follow these steps:
1. Click the "Model Hub" link in the bottom-left corner of the page.
2. Click the "Add Custom Model" button in the top-right corner of the page.
3. Select a model provider from the list.
4. Enter the information the endpoint requires, such as the API key and base URL.
5. Click the "Save" button.
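The information collected in the steps above can be pictured as a small record. This is only an illustrative sketch; the field names are hypothetical and not ConsoleX AI's actual schema:

```python
# Hypothetical sketch of the fields the "Add Custom Model" form collects.
# Field names and values are illustrative only.
custom_model = {
    "provider": "OpenAI",                      # chosen from the provider list
    "api_key": "sk-...",                       # your endpoint's credential
    "base_url": "https://api.example.com/v1",  # where requests are sent
    "model_name": "my-custom-model",           # identifier shown in Model Hub
}

def validate(entry: dict) -> bool:
    """Basic sanity check before saving: every required field is non-empty."""
    return all(entry.get(k) for k in ("provider", "api_key", "base_url", "model_name"))

print(validate(custom_model))  # -> True
```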
Usage Limitations
- Calls to custom private model endpoints do not consume your ConsoleX AI balance.
- Free users can integrate up to 3 private models; paid subscribers can integrate an unlimited number of private model endpoints.
Caching
When you call a custom model endpoint, ConsoleX AI tries to include cache request parameters in each call, so that provider-side caching can reduce your calling costs.
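As one concrete example of such a parameter, Anthropic-style endpoints support prompt caching via a `cache_control` field on content blocks. This is a hedged sketch of that style; other providers use different mechanisms, and the exact parameters ConsoleX AI injects may differ:

```python
# Sketch of a cache request parameter, shown in the Anthropic
# prompt-caching style (`cache_control`). Illustrative only; the model
# name and prompt text are placeholders.
request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 256,
    "system": [
        {
            "type": "text",
            "text": "A long, reusable system prompt...",
            "cache_control": {"type": "ephemeral"},  # mark this block cacheable
        }
    ],
    "messages": [{"role": "user", "content": "Summarize the prompt above."}],
}
```

Marking a large, stable prefix (such as a long system prompt) as cacheable lets the provider reuse it across calls instead of reprocessing it each time.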