Section 6: Selecting Models - Daily Use vs. Coding Tasks
No single model is best at everything. Match the model to the task and you'll get better speed, quality, and cost control.
Four Model Categories
| Category | Example Models | Best For |
|---|---|---|
| Fast & efficient | qwen2.5-7b, gpt-4.1-mini | Daily chat, reminders, quick Q&A |
| Smart & capable | claude-sonnet-4.5, gpt-4.1 | Complex reasoning, writing |
| Coding specialists | deepseek-coder-v2, codestral | Code generation, debugging |
| Vision/image analysis | gpt-4.1, llava | Image descriptions, diagrams |
Default vs. Task-Specific Overrides
OpenClaw uses one default model for most work, but you can override by task. For example:
- Use `claude-sonnet-4.5` for drafting a long email.
- Switch to `deepseek-coder-v2` for debugging a script.
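Expressed in config form, the two overrides above map task names to model IDs. This is an illustrative fragment only; it borrows the `task_overrides` field from the reference snippet later in this section, and the exact schema may differ in your version:

```json
"task_overrides": {
  "writing": "claude-sonnet-4.5",
  "coding": "deepseek-coder-v2"
}
```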
Failover Chains
If your primary model fails (rate limit, outage, timeout), OpenClaw tries the next model in the chain. This keeps workflows moving without manual intervention.
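A failover chain is simply an ordered list of models. As a sketch (reusing the `fallback_models` field from the reference snippet below; the exact schema may differ):

```json
{
  "default_model": "qwen2.5-7b",
  "fallback_models": ["gpt-4.1-mini", "claude-sonnet-4.5"]
}
```

If qwen2.5-7b hits a rate limit or outage, OpenClaw tries gpt-4.1-mini next, then claude-sonnet-4.5.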
Cost Awareness
Long prompts on premium models (for example, gpt-4o) can
get expensive. Reserve them for high-value tasks, and use lower-cost
models for routine work.
Free Downloadable Local Models
If you want to avoid recurring API costs, consider free models you can download and run locally. These models usually run through tools like Ollama or other local inference runtimes.
Good beginner categories:
- Small fast models such as Gemma, Qwen, or small Llama variants for daily chat and utility tasks
- Coding-focused models such as DeepSeek Coder or Codestral-family local options for programming help
- Vision-capable local models such as LLaVA-style models if you want basic image understanding on your own machine
Main tradeoff: downloadable models are free to obtain, but they shift the cost to your hardware. A lightweight laptop can run small models, while larger models often need a stronger desktop or GPU.
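As a minimal sketch of getting started, assuming you use the Ollama runtime: the model tags below come from Ollama's public model library and may change, so check the library for current names.

```shell
# Assumes Ollama (https://ollama.com) is installed and on PATH.
# Model tags are illustrative; verify them against Ollama's model library.
if command -v ollama >/dev/null 2>&1; then
  ollama pull qwen2.5:7b          # small general model for daily chat
  ollama pull deepseek-coder-v2   # coding-focused model
else
  echo "Ollama not installed - see https://ollama.com for setup"
fi
```

Once pulled, the weights stay on disk, so later runs start without re-downloading.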
You can also assign models per agent: Agent A uses claude-sonnet-4.5 for emails, while Agent B uses deepseek-coder-v2 for code reviews.
Starter Config Strategy
- Pick one default model (for example, `qwen2.5-7b` for daily chat).
- Add one fallback model (for example, `gpt-4.1-mini`).
- Add task overrides for specialized work.
Example config snippet:
⚙️ Reference only — do not paste this into any file:
```json
{
  "default_model": "qwen2.5-7b",
  "fallback_models": ["gpt-4.1-mini"],
  "task_overrides": {
    "coding": "deepseek-coder-v2",
    "writing": "claude-sonnet-4.5"
  }
}
```

::: action
Run `openclaw onboard` to set up your first model, then tune config choices as you learn what works best.
:::