
Model Configuration

Configure AI providers and models for the AI Assistant.

Step 1: Open Model Settings

  • Go to Options > AI Assistant > Models, or
  • Click the model selector in the chat panel and select “Model Settings”

Step 2: Select and Enable a Provider

Click on any provider card to expand its settings:

  • OpenAI - GPT models
  • Google - Gemini models
  • Anthropic - Claude models
  • DeepSeek - DeepSeek models
  • Qwen - Alibaba Cloud models
  • OpenRouter - Multi-model gateway
  • Chrome Built-in - Browser’s local AI

Toggle the Enable switch to activate the provider.

Step 3: Enter Your API Key

  1. Enter your API key in the API Key field
  2. Click Save Configuration

::: tip API Key Security

  • API keys are encrypted and stored locally only
  • Keys are never sent to our servers
  • Keys are not synced to the cloud

:::

Step 4: Configure Custom Endpoint (Optional)


If you need to use a proxy or custom endpoint:

  1. Enter the endpoint URL in the API Endpoint field (see the sketch below for how such an endpoint is typically called)
  2. Leave it empty to use the default endpoint
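As an illustration of what a custom endpoint changes: an OpenAI-compatible proxy is normally used as a drop-in replacement for the provider’s base URL, while the request path and body stay the same. A minimal sketch, assuming a hypothetical proxy at `https://my-proxy.example.com/v1` (the URL, key placeholder, and model name are examples, not values from VertiTab):

```ts
// Minimal sketch of how an OpenAI-compatible custom endpoint is typically used.
// The proxy URL, key, and model name below are placeholders; substitute your own.
const endpoint = "https://my-proxy.example.com/v1"; // value you would enter in the API Endpoint field
const apiKey = "sk-...";                            // your provider key (load it securely in real code)

async function chat(prompt: string): Promise<string> {
  // Only the host changes when a custom endpoint is configured;
  // the path and body follow the standard chat-completions shape.
  const res = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```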
Getting API Keys

OpenAI

  1. Visit OpenAI Platform
  2. Sign in or create an account
  3. Click “Create new secret key”
  4. Copy the key and paste it into VertiTab

Google

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Click “Create API Key”
  4. Copy the key and paste it into VertiTab

Anthropic

  1. Visit Anthropic Console
  2. Sign in or create an account
  3. Go to the API Keys section
  4. Create a new key and paste it into VertiTab

DeepSeek

  1. Visit DeepSeek Platform
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Generate a new key and paste it into VertiTab

OpenRouter

  1. Visit OpenRouter
  2. Sign in or create an account
  3. Create an API key
  4. Paste it into VertiTab

Some models support a trial experience without configuring your own API key:

  • Limited daily usage quota
  • Usage is tracked per account/device
  • Rate-limited for fair usage

::: warning Quota Limits

  • Guest Users: Limited daily quota
  • Free Users: Extended daily quota (requires sign-in)
  • Premium Users: Higher quota + BYOK support

For unlimited usage, configure your own API key.

:::

Each model has different capabilities (the sketch after the table shows how each one typically appears in an API request):

| Capability | Description |
| --- | --- |
| Streaming | Real-time response output |
| Tool Calling | Browser operation support (Agent mode) |
| JSON Schema | Structured output support |
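For a concrete picture of what these capabilities mean, the sketch below shows how each one typically surfaces as a field in an OpenAI-style chat request. Field names may differ for other providers, and the tool and schema definitions here are hypothetical examples:

```ts
// Sketch of how the three capabilities surface in an OpenAI-style request body.
const request = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Close all tabs from example.com" }],

  // Streaming: tokens come back incrementally instead of one final payload.
  stream: true,

  // Tool Calling: the model may return a tool call that the Agent executes in the browser.
  tools: [
    {
      type: "function",
      function: {
        name: "close_tabs", // hypothetical browser tool
        description: "Close tabs matching a URL pattern",
        parameters: {
          type: "object",
          properties: { pattern: { type: "string" } },
          required: ["pattern"],
        },
      },
    },
  ],

  // JSON Schema: constrain the output to a structured shape.
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "tab_summary", // hypothetical schema
      schema: {
        type: "object",
        properties: { closed: { type: "number" } },
        required: ["closed"],
      },
    },
  },
};
```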

::: info Model Recommendation
For Agent mode, choose models with Tool Calling capability for the best experience.
:::

Adding a Custom Model

  1. Go to provider settings
  2. Click the Add button in the Available Models section
  3. Fill in the model details (an example entry is sketched after this list):
    • Model ID: The API model identifier
    • Display Name: Name shown in the UI
    • Context Window: Maximum token limit
    • Capabilities: Select supported features
  4. Click Add Model
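As a rough illustration of how those fields fit together, a custom model entry could look like the sketch below. The field names mirror the form above and the values are only examples; check your provider’s documentation for the real model ID and context window:

```ts
// Hypothetical custom model entry; field names mirror the Add Model form.
const customModel = {
  modelId: "deepseek-chat",                    // the API model identifier
  displayName: "DeepSeek Chat",                // name shown in the UI
  contextWindow: 64_000,                       // maximum token limit (illustrative)
  capabilities: ["streaming", "toolCalling"],  // supported features
};
```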
Adding a Custom Provider

  1. Scroll to the bottom of the providers list
  2. Click Add Provider
  3. Fill in the provider details (an example entry is sketched after this list):
    • Provider ID: Unique identifier
    • Display Name: Name shown in the UI
    • Provider Type: Select the base type (OpenAI-compatible, etc.)
    • API Endpoint: Custom endpoint URL
  4. Click Add
  5. Configure the API key and add models
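Likewise, a custom provider entry for an OpenAI-compatible service might look like this sketch; the IDs and the local endpoint URL are hypothetical examples:

```ts
// Hypothetical custom provider entry for an OpenAI-compatible service.
const customProvider = {
  providerId: "my-local-llm",                // unique identifier
  displayName: "Local LLM",                  // name shown in the UI
  providerType: "openai-compatible",         // base type
  apiEndpoint: "http://localhost:11434/v1",  // e.g. a local OpenAI-compatible server
};
```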

Set your preferred default model for new sessions:

  1. Go to Options > AI Assistant > Basic Settings
  2. Find Default AI Provider and select a provider
  3. Find Default Model and select a model
Troubleshooting

Invalid or rejected API key

  1. Check that the key was copied correctly (no extra spaces)
  2. Verify the key has the proper permissions
  3. Check whether the provider account has credits/quota remaining
  4. Try regenerating a new API key (a quick way to test a key directly is sketched after this list)
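If the key keeps being rejected inside VertiTab, it can help to test it directly against the provider, outside the extension. A minimal sketch for an OpenAI-style API (the `/v1/models` route assumes an OpenAI-compatible host; adjust for other providers):

```ts
// Quick check: list models with the key; a 401 means the key itself is invalid.
const apiKey = "sk-..."; // your provider key (placeholder)

async function checkKey(): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (res.status === 401) {
    console.error("Key rejected: re-copy or regenerate it.");
  } else if (!res.ok) {
    console.error(`Provider returned ${res.status}: check quota or permissions.`);
  } else {
    console.log("Key accepted.");
  }
}

checkKey();
```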
Model not available

  1. Ensure the provider is enabled
  2. Check that the model ID is correct
  3. Verify your API key has access to the model
Slow or no response

  1. Check your internet connection
  2. Try a different model
  3. Consider using a streaming-enabled model
  4. Check the provider’s status page for outages