Start Here
Decide your path, then follow the guides that match.
Answer a few questions on the home page to get setup suggestions tailored to your budget and goals.
1. Do you want local models?
- Yes — You’ll run Ollama (or similar) on a Mac. See Mac mini guide for RAM/model tiers and Setup Guides for install; a quick smoke test is sketched after this list.
- No — Use cloud APIs (Anthropic, OpenAI, Google, Groq, etc.). Any Mac mini or laptop is fine; focus on OpenClaw + gateway. See Cloud options below.
- Not sure — Start with one Mac mini and Ollama (8B model). You can add cloud later.
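If you go local, it's worth confirming the model actually answers before wiring anything else up. A minimal smoke test against Ollama's HTTP API (TypeScript, Node 18+ for global `fetch`; assumes Ollama's default port 11434 and that you've already run `ollama pull llama3.1:8b`):

```ts
// Smoke test for a local Ollama install (Node 18+, global fetch).
// Assumes the default port 11434 and a pulled llama3.1:8b model;
// swap in whatever model tag you actually pulled.
async function smokeTest(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1:8b",              // the 8B tier from the budget table
      prompt: "Reply with the single word: ready",
      stream: false,                      // one JSON object, not a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as { response: string };
  console.log(data.response.trim());
}

smokeTest().catch((err) => {
  console.error("Ollama not reachable; is `ollama serve` running?", err);
});
```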
2. Budget
| Budget | Hardware | Models |
|---|---|---|
| Tight | M1/M2 8GB or 16GB | 8B Q4 (e.g. `llama3.1:8b`) |
| Moderate | M2 16GB | 14B Q4 or 8B Q8 (Qwen2.5, Llama 3.1) |
| No limit | M2 Pro/Max 32GB+ or M4 Pro 48GB+ | 27B–70B quantized (Gemma 27B fits in 32GB; Llama 70B Q4 wants 48GB+) |
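The tiers above follow a back-of-envelope rule: a quantized model needs roughly params × bits-per-weight ÷ 8 bytes of RAM for the weights, plus headroom (about 20%) for the KV cache and runtime. The sketch below just illustrates that arithmetic; it is an estimate, not a guarantee.

```ts
// Back-of-envelope RAM estimate for a quantized model:
// weights = params * bitsPerWeight / 8 bytes, plus ~20% overhead
// for KV cache and runtime. Illustrative only; real usage varies
// with quantization scheme, context length, and runtime.
function estimateRamGiB(paramsBillions: number, bitsPerWeight: number): number {
  const weightsGiB = (paramsBillions * 1e9 * bitsPerWeight) / 8 / 1024 ** 3;
  return weightsGiB * 1.2; // ~20% headroom
}

console.log(estimateRamGiB(8, 4).toFixed(1));  // ≈ 4.5 GiB — the 8GB tier
console.log(estimateRamGiB(14, 4).toFixed(1)); // ≈ 7.8 GiB — wants 16GB
console.log(estimateRamGiB(70, 4).toFixed(1)); // ≈ 39.1 GiB — needs 48GB+
```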
3. Setup type
- Solo machine — One Mac runs gateway + (optionally) Ollama. Easiest.
- BRAIN + agent machines — One machine runs a backup local model (BRAIN); others run agents and connect to the gateway. See Architecture for diagrams.
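In the BRAIN + agents layout, the first thing to check when an agent machine can't see the BRAIN is plain network reachability. A minimal TCP probe using Node's standard `net` module; the hostname `brain.local` and the gateway port here are placeholders for whatever your own setup uses (11434 is Ollama's default):

```ts
import net from "node:net";

// Probe one host:port with a timeout; resolves true if the TCP
// connection opens. "brain.local" and the gateway port below are
// placeholders — substitute your own machine name and configured port.
function probe(host: string, port: number, timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    const done = (ok: boolean) => { socket.destroy(); resolve(ok); };
    socket.once("connect", () => done(true));
    socket.once("error", () => done(false));
    socket.setTimeout(timeoutMs, () => done(false));
  });
}

const targets: Array<[string, number, string]> = [
  ["brain.local", 11434, "Ollama (default port)"],
  ["brain.local", 3000, "gateway (use your configured port)"],
];

for (const [host, port, label] of targets) {
  probe(host, port).then((ok) =>
    console.log(`${label} ${host}:${port} -> ${ok ? "reachable" : "unreachable"}`));
}
```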
4. Storage
If you need more space for models, use an external SSD. The storage guide explains DRAM vs. DRAM-less drives and lists budget-friendly picks.
Cloud options (API-backed models)
If you don’t run local models, OpenClaw can use cloud APIs. Configure your provider with `openclaw configure`, or edit the OpenClaw config directly.
| Provider | Best for | Notes |
|---|---|---|
| Anthropic (Claude) | Reasoning, long context, API + Claude Pro/Max | OAuth or API key. Default choice for many. |
| OpenAI (GPT) | GPT-4o, fast chat, function calling | API key. Pay per token. |
| Google (Gemini) | Generous free tier, multimodal | Good for trying without heavy spend. |
| Groq | Very fast inference, free tier | OpenClaw-compatible; good for speed. |
| Others (Kimi, etc.) | Various open-weight APIs | Use OpenAI-compatible or provider-specific config. |
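Several of these providers (Groq among them, and many open-weight hosts) expose the OpenAI-compatible chat-completions API, so one request shape covers them all. A minimal sketch, assuming a `GROQ_API_KEY` environment variable and a model name that actually exists on your account:

```ts
// Minimal OpenAI-compatible chat completion against Groq.
// Assumes GROQ_API_KEY is set; the model name is an example and may
// differ on your account. Point baseURL at any other OpenAI-compatible
// provider to reuse the same code.
const baseURL = "https://api.groq.com/openai/v1";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama-3.1-8b-instant", // example; check your provider's model list
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Say hello in five words.").then(console.log).catch(console.error);
```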
You can mix cloud and local: e.g. primary = Anthropic, backup = `ollama/gpt-oss:20b`. See Local Models and Best Practices.
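The fallback pattern in plain code, as an illustration of the idea only; this is not OpenClaw's implementation or config format, and OpenClaw applies the fallback for you once primary and backup models are configured:

```ts
// Illustrative primary/fallback routing: try the cloud provider first,
// fall back to the local model if it fails. A sketch of the pattern,
// not OpenClaw's internals.
type Completer = (prompt: string) => Promise<string>;

async function withFallback(
  primary: Completer,
  backup: Completer,
  prompt: string,
): Promise<string> {
  try {
    return await primary(prompt);
  } catch (err) {
    console.warn("primary failed; falling back to local model:", err);
    return backup(prompt);
  }
}

// Stub completers so the sketch runs standalone.
const cloud: Completer = async () => { throw new Error("quota exceeded"); };
const local: Completer = async (p) => `local answer to: ${p}`;

withFallback(cloud, local, "ping").then(console.log);
```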