Models

Red Station is model-agnostic. It uses a Model Routing layer to direct different types of work to the most appropriate Large Language Model (LLM).

Providers

A provider is an upstream service that hosts models.

  • Ollama: (Default) Runs local models such as llama3 or mistral on your machine. No network round-trips, no API cost, full privacy.
  • OpenAI: Access to the gpt-4 model family.
  • Anthropic: Access to the claude-3 model family.
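
Since Ollama is the default provider, a fully local setup needs only the Ollama runtime itself. A typical first step is to pull the models you plan to route to; note this uses the standalone `ollama` CLI, not `pilot`:

```bash
# Download local models ahead of time so the default
# Ollama provider can serve them (requires the Ollama
# runtime to be installed and running)
ollama pull llama3:8b
ollama pull codellama
```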

Routing & Aliases

Instead of hardcoding "GPT-4" into every agent, Red Station uses Aliases to decouple intent from implementation.

| Alias | Purpose                     | Typical Model                      |
| ----- | --------------------------- | ---------------------------------- |
| fast  | Quick logic, simple chats   | llama3:8b (Local)                  |
| code  | Code generation             | codellama (Local) or claude-3-opus |
| smart | Complex reasoning, planning | gpt-4-turbo or claude-3-opus       |
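
In practice, an agent declares the alias it needs and the router resolves it at runtime. The snippet below is a hypothetical agent definition; the field names are illustrative, not a documented schema:

```yaml
# Hypothetical agent definition (illustrative field names):
# the agent requests the `code` alias, and the routing layer
# resolves it to whichever model that alias currently maps to.
agent:
  name: refactor-bot
  model: code   # an alias, not a concrete model name
```

Repointing `code` from a local codellama to claude-3-opus is then a one-line change in models.yaml, with no edits to the agent itself.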

CLI Commands

```bash
# List active model routes
pilot mcp models

# Check status of configured providers
pilot status engine
```

Configuration (models.yaml)

Defined in ~/.pilot/models.yaml.

```yaml
default: ollama

providers:
  ollama:
    type: ollama
    base_url: http://localhost:11434

  openai:
    type: openai
    api_key: ${secrets.OPENAI_API_KEY}

aliases:
  fast:
    provider: ollama
    model: llama3:8b
```
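
A fuller configuration might wire up all three providers and the aliases from the table above. The anthropic entries and extra aliases below are a sketch extrapolated from the openai example, not verified schema:

```yaml
providers:
  # ...ollama and openai as above...

  # Assumed by analogy with the openai provider
  anthropic:
    type: anthropic
    api_key: ${secrets.ANTHROPIC_API_KEY}

aliases:
  code:
    provider: ollama
    model: codellama
  smart:
    provider: anthropic
    model: claude-3-opus
```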
