# Models
Red Station is model-agnostic. It uses a Model Routing layer to direct different types of work to the most appropriate Large Language Model (LLM).
## Providers
A provider is an upstream service that hosts models.
- **Ollama** (default): Runs local models such as `llama3` or `mistral` on your machine. No network latency, zero API cost, full privacy.
- **OpenAI**: Access to the `gpt-4` family of models.
- **Anthropic**: Access to the `claude-3` family of models.
## Routing & Aliases
Instead of hardcoding "GPT-4" into every agent, Red Station uses Aliases to decouple intent from implementation.
| Alias | Purpose | Typical Model |
|---|---|---|
| `fast` | Quick logic, simple chats | `llama3:8b` (local) |
| `code` | Code generation | `codellama` (local) or `claude-3-opus` |
| `smart` | Complex reasoning, planning | `gpt-4-turbo` or `claude-3-opus` |
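The alias lookup above amounts to a two-step resolution: alias → provider + model. The sketch below illustrates the idea with a hypothetical in-memory `ALIASES` table mirroring the `aliases` section of `models.yaml`; it is not Red Station's actual implementation:

```python
# Hypothetical sketch of alias resolution, mirroring the aliases
# section of models.yaml. Not Red Station's real internals.
ALIASES = {
    "fast":  {"provider": "ollama", "model": "llama3:8b"},
    "code":  {"provider": "ollama", "model": "codellama"},
    "smart": {"provider": "openai", "model": "gpt-4-turbo"},
}

def resolve(alias: str) -> tuple[str, str]:
    """Return the (provider, model) pair behind an intent alias."""
    try:
        entry = ALIASES[alias]
    except KeyError:
        raise ValueError(f"unknown alias: {alias!r}")
    return entry["provider"], entry["model"]

print(resolve("fast"))  # ('ollama', 'llama3:8b')
```

Because agents reference only the alias, swapping `smart` from `gpt-4-turbo` to `claude-3-opus` is a one-line config change rather than an edit to every agent.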
## CLI Commands
```bash
# List active model routes
pilot mcp models

# Check status of configured providers
pilot status engine
```

## Configuration (models.yaml)

Defined in `~/.pilot/models.yaml`.
```yaml
default: ollama

providers:
  ollama:
    type: ollama
    base_url: http://localhost:11434
  openai:
    type: openai
    api_key: ${secrets.OPENAI_API_KEY}

aliases:
  fast:
    provider: ollama
    model: llama3:8b
```
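The `${secrets.OPENAI_API_KEY}` placeholder indicates that secrets are interpolated when the config is loaded rather than stored in the file. A minimal sketch of that pattern, assuming secrets are sourced from environment variables (the actual secret backend is not specified in these docs):

```python
import os
import re

# Hypothetical sketch: expand ${secrets.NAME} placeholders from the
# environment. Red Station's real secret store may differ.
_SECRET = re.compile(r"\$\{secrets\.([A-Za-z0-9_]+)\}")

def expand_secrets(value: str) -> str:
    """Replace each ${secrets.NAME} with the NAME environment variable."""
    return _SECRET.sub(lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENAI_API_KEY"] = "sk-example"
print(expand_secrets("${secrets.OPENAI_API_KEY}"))  # sk-example
```

Keeping the raw key out of `models.yaml` means the file can be committed or shared without leaking credentials.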