Start Here

Decide your path, then follow the guides that match.

Answer the questions below to get a setup suggestion tailored to your budget and goals.

1. Do you want local models?

  • Yes — You’ll run Ollama (or similar) on a Mac. See the Mac mini guide for RAM/model tiers and the Setup Guides for installation.
  • No — Use cloud APIs (Anthropic, OpenAI, Google, Groq, etc.). Any Mac mini or laptop is fine; focus on OpenClaw + gateway. See Cloud options below.
  • Not sure — Start with one Mac mini and Ollama (8B model); see the quickstart sketch after this list. You can add cloud later.
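
If you go the local route, getting a first model running takes two commands. A minimal sketch, assuming Ollama is already installed (installer from ollama.com) and the 8B tier from the budget table below:

```bash
# Pull the starter model (about a 5 GB download)
ollama pull llama3.1:8b

# Chat with it interactively to confirm it runs
ollama run llama3.1:8b

# Ollama's HTTP API listens on localhost:11434 by default;
# listing local models is a quick health check
curl http://localhost:11434/api/tags
```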

2. Budget

| Budget | Hardware | Models |
| --- | --- | --- |
| Tight | M1/M2 8GB or 16GB | 8B Q4 (e.g. Llama 3.1:8b) |
| Moderate | M2 16GB | 14B Q4 or 8B Q8 (Qwen2.5, Llama 3.1) |
| No limit | M2 Pro 32GB or M4 16GB+ | 27B–70B quantized (Gemma 27B, Llama 70B) |
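
A rough rule of thumb behind these tiers: weight size ≈ parameter count × bits per weight ÷ 8. An 8B model at Q4 is roughly 8 × 4 ÷ 8 ≈ 4 GB of weights; add 1–2 GB for the KV cache and runtime overhead and it just fits the tight tier, while a 27B Q4 model (roughly 16 GB loaded) needs the 32GB tier.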

3. Setup type

  • Solo machine — One Mac runs gateway + (optionally) Ollama. Easiest.
  • BRAIN + agent machines — One machine runs a backup local model (BRAIN); others run agents and connect to the gateway. See Architecture for diagrams and the remote-access sketch after this list.
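
A minimal sketch of the remote access the BRAIN layout needs, assuming the BRAIN machine runs Ollama; brain.local is a placeholder hostname:

```bash
# On the BRAIN machine: bind Ollama to all interfaces, not just localhost
OLLAMA_HOST=0.0.0.0 ollama serve

# On an agent machine: confirm the BRAIN is reachable
# (11434 is Ollama's default port)
curl http://brain.local:11434/api/tags
```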

4. Storage

If you need more space for models, use an external SSD. The Storage guide explains DRAM vs DRAM-less drives and budget-friendly picks.
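
One storage sketch: Ollama reads the OLLAMA_MODELS environment variable to locate its model store, so models can live on the external drive. The volume path below is a placeholder:

```bash
# Keep the model store on the external SSD
# (/Volumes/ModelSSD is a placeholder; use your drive's mount point)
export OLLAMA_MODELS=/Volumes/ModelSSD/ollama-models
ollama serve
```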

Cloud options (API-backed models)

If you don’t run local models, OpenClaw can use cloud APIs instead. Configure your provider with openclaw config or openclaw configure.
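
As a sketch, with Anthropic as the provider: ANTHROPIC_API_KEY is the conventional variable name, and the exact prompts openclaw configure walks through depend on your version:

```bash
# Export the provider key, then run the interactive setup
# (sk-ant-... is a placeholder; get a key from console.anthropic.com)
export ANTHROPIC_API_KEY=sk-ant-...
openclaw configure
```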

| Provider | Best for | Notes |
| --- | --- | --- |
| Anthropic (Claude) | Reasoning, long context, API + Claude Pro/Max | OAuth or API key. Default choice for many. |
| OpenAI (GPT) | GPT-4o, fast chat, function calling | API key. Pay per token. |
| Google (Gemini) | Generous free tier, multimodal | Good for trying without heavy spend. |
| Groq | Very fast inference, free tier | OpenClaw-compatible; good for speed. |
| Others (Kimi, etc.) | Various open-weight APIs | Use OpenAI-compatible or provider-specific config. |

You can mix cloud and local: e.g. primary = Anthropic, backup = ollama/gpt-oss:20b. See Local Models and Best Practices.
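
If you use that mix, pull the backup model ahead of time so fallback works when the cloud API is unreachable. A sketch, assuming the model tags above:

```bash
# Pull the local backup model before you need it
ollama pull gpt-oss:20b

# Then set primary and backup in the interactive setup
# (exact options depend on your OpenClaw version)
openclaw configure
```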