OpenClaw hardware — what actually runs well.
The official minimum is 2 vCPU and 4 GB RAM. The real minimum depends on whether you want browser automation, how many agents you run, and what model you call. Use the sizer below for a recommendation grounded in real production setups.
Quick answers
Do I need a Mac mini for OpenClaw?
No. OpenClaw runs on Linux, macOS, Windows (via WSL2), Docker, Raspberry Pi, and any VPS. Mac mini M4 16 GB is excellent because of unified memory (great for local models) and silent operation, but it's not required. Cheapest realistic host is a $5/month Hetzner VPS.
How much RAM does OpenClaw need?
Minimum 4 GB for chat-only with cloud LLMs. 8 GB for browser automation. 16 GB for a local 7B model. 32 GB for local 14B + browser. 64 GB+ for local 70B-class models.
Does OpenClaw need a GPU?
Only for local models via Ollama. With cloud LLMs (Claude, GPT) the inference happens at the provider — your GPU does nothing. With Ollama, GPU vs CPU is the difference between 5 tok/s and 80+ tok/s on a 7B model.
What's the absolute minimum hardware for OpenClaw?
2 vCPU, 4 GB RAM, 20 GB disk. Chat-only with cloud LLMs, no browser. Below 4 GB Docker becomes unstable during skill loading; below 2 vCPU the gateway and active sessions fight for CPU.
Can I run OpenClaw on a laptop?
Yes — fine for tinkering. The catch: when you close the lid, your agent stops receiving messages. Within a week most people move it to a VPS, Mac mini, or Pi for always-on operation.
The honest floor
Real minimums
The official "2 vCPU / 4 GB RAM" minimum is technically true. In practice, that's only enough for chat-only personal use with cloud LLMs and no browser. Most setups need more.
| Use case | vCPU | RAM | Disk | Realistic spec |
|---|---|---|---|---|
| Chat-only, cloud LLM, single agent | 2 | 4 GB | 20 GB | Hetzner CPX11, $5/mo |
| Above + Telegram/Slack channel | 2 | 4 GB | 30 GB | Same |
| Above + occasional browser | 2 | 8 GB | 60 GB | Hetzner CPX21, €7/mo |
| Browser-heavy daily research | 4 | 16 GB | 120 GB | Hetzner CPX31, €18/mo |
| Multi-agent (5+) team setup | 8 | 32 GB | 300 GB | Hetzner AX41 or Mac mini M4 |
| Local 7B model + everything | 8 | 16 GB | 100 GB | Mac mini M4 16 GB or Pi 5 16 GB |
| Local 14B model + browser | 8+ | 32 GB | 200 GB | Mac mini M4 Pro 24 GB |
| Local 70B model | 16+ | 64 GB | 400 GB | Mac Studio or workstation |
Build to spec
Hardware sizer
Tell the tool what you're planning and it'll calculate the minimum spec plus a recommendation.
Example output: 2 vCPU, 8 GB RAM, 45 GB SSD (recommended). A Pi 5 8 GB or any laptop covers it, ~$80–$150 all-in.
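Under the hood, the recommendation tracks the same cutoffs as the minimums table above. Here's a rough sketch of that heuristic in TypeScript; the function name, option flags, and exact thresholds are illustrative assumptions, not the sizer's actual code:

```ts
// Illustrative sizing heuristic mirroring the "Real minimums" table above.
// Names, flags, and thresholds are assumptions, not the sizer's implementation.
type Workload = {
  browser: boolean;          // any headless-Chrome automation
  agents: number;            // concurrent agents
  localModelB?: 7 | 14 | 70; // local Ollama model size, if any
};

type Spec = { vcpu: number; ramGb: number; diskGb: number };

function recommendSpec(w: Workload): Spec {
  // Baseline: chat-only, cloud LLM, single agent.
  let spec: Spec = { vcpu: 2, ramGb: 4, diskGb: 20 };

  if (w.browser) {
    spec = { vcpu: 4, ramGb: Math.max(spec.ramGb, 8), diskGb: 60 };
  }
  if (w.agents >= 5) {
    spec = { vcpu: 8, ramGb: Math.max(spec.ramGb, 32), diskGb: 300 };
  }
  if (w.localModelB === 7) {
    spec = { vcpu: 8, ramGb: Math.max(spec.ramGb, 16), diskGb: Math.max(spec.diskGb, 100) };
  }
  if (w.localModelB === 14) {
    spec = { vcpu: 8, ramGb: Math.max(spec.ramGb, 32), diskGb: Math.max(spec.diskGb, 200) };
  }
  if (w.localModelB === 70) {
    spec = { vcpu: 16, ramGb: Math.max(spec.ramGb, 64), diskGb: Math.max(spec.diskGb, 400) };
  }
  return spec;
}

// Example: browser automation, two agents, no local model
// -> { vcpu: 4, ramGb: 8, diskGb: 60 }
console.log(recommendSpec({ browser: true, agents: 2 }));
```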
Where it goes
RAM in detail
RAM is the resource that bites first. Here's where it actually goes on a typical setup:
| Component | Resident RAM | Notes |
|---|---|---|
| OS overhead | 0.5–1 GB | Linux base; macOS reserves more |
| Node + gateway | 0.5–1 GB | Steady-state for the daemon |
| Active session context | 0.2–0.4 GB per agent | Grows with conversation |
| SQLite memory index | 0.1–0.3 GB | Bigger as memory grows |
| Headless Chrome (idle) | 0.8 GB per instance | Each browser session |
| Headless Chrome (active) | 1.5–2 GB per instance | While rendering pages |
| Local Ollama 7B model | 5–6 GB | Quantized Q4_K_M |
| Local Ollama 14B model | 10–11 GB | Q4_K_M |
| Local Ollama 70B model | 44 GB | Q4_K_M |
Headroom matters
The Linux OOM killer is unforgiving. Stay 25%+ below your hard RAM limit; if you size right at the edge, an agent that occasionally spikes can OOM the host.
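If you run OpenClaw under Docker, you can make that headroom explicit with a container memory limit, so a runaway browser session restarts the container instead of taking down the host. A minimal Compose sketch, assuming a hypothetical service named openclaw on an 8 GB box:

```yaml
# docker-compose.yml (sketch; service and image names are placeholders)
services:
  openclaw:
    image: openclaw/gateway:latest   # placeholder image name
    mem_limit: 6g                    # hard cap at ~75% of an 8 GB host, keeping OS headroom
    restart: unless-stopped
```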
Where CPU matters
CPU + browser
OpenClaw spends most of its time waiting for the LLM API to respond. That's I/O, not CPU. Where CPU genuinely matters:
- Browser rendering. Chrome is CPU-heavy. Single-vCPU boxes get sluggish during page loads.
- Concurrent agents. Each agent's gateway loop and skill execution wants its own thread.
- Local model inference. Without a GPU, this is 100% CPU. A 7B model on 8 cores: 8–15 tok/s. On 2 cores: 1–3 tok/s, painful.
- SQLite vector queries. memory_search on a large index is CPU-bound for the BM25 + similarity ranking.
Practical rule: 2 vCPU for personal, 4 vCPU when you add browser work, 8 vCPU when you go multi-agent or local inference.
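If you're unsure whether you're CPU-bound or just waiting on the API, watch the host while the agent is working. A quick check, assuming a Docker install and a container named openclaw:

```sh
# One-shot snapshot of per-container CPU and memory usage
docker stats --no-stream openclaw
# CPU% pinned near 100% x vCPUs during page loads or local inference means
# you're CPU-bound; low CPU% with long waits means you're waiting on the LLM API.
```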
Plan for years
Disk + memory growth
Memory grows. Skills accumulate. Logs pile up. Plan for what the disk will look like at month 12, not day 1.
| Asset | Day 1 | Month 6 | Year 1 |
|---|---|---|---|
| MEMORY.md + daily notes | 1 KB | 5–20 MB | 30–80 MB |
| SQLite memory index | 0 | 100–500 MB | 300 MB–1.5 GB |
| Installed skills | 0 | 100–300 MB | 200–500 MB |
| Log files (un-rotated) | 0 | 1–5 GB | 5–20 GB |
| Browser cache + screenshots | 0 | 500 MB–2 GB | 2–10 GB |
| Total | 0.5 GB | 2–8 GB | 8–30 GB |
Rotate logs
Set up logrotate or use Docker's log rotation. Otherwise logs eat all your free disk by month 6 and the gateway starts failing in confusing ways.
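A minimal logrotate rule covers the self-hosted case; the log path below is an assumption, so point it at wherever your gateway actually writes:

```conf
# /etc/logrotate.d/openclaw (log path is an assumption)
/var/log/openclaw/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    copytruncate
}
```

For Docker, the json-file logging driver's max-size and max-file options do the same job.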
Local models only
When you need a GPU
If you're running cloud LLMs, GPUs do nothing for OpenClaw. The GPU question only matters for local inference via Ollama.
| GPU | VRAM | Best model fit | Speed |
|---|---|---|---|
| No GPU (CPU only) | — | Up to 7B (slowly) | 5–15 tok/s |
| Apple M-series unified | 16–192 GB | Up to 70B | 20–80 tok/s |
| Nvidia RTX 4060 8 GB | 8 GB | Up to 7B | 40–80 tok/s |
| Nvidia RTX 4090 24 GB | 24 GB | Up to 32B | 80–200 tok/s |
| Nvidia A100 80 GB | 80 GB | Up to 70B | 100+ tok/s |
Apple M-series is a sleeper hit here — unified memory means your "VRAM" is the system RAM. Mac mini M4 16 GB runs 7B models at 50+ tok/s; Mac Studio M4 Max with 128 GB unified memory genuinely runs 70B-class models. Short of a data-center GPU, the M-series is the cheapest path to local inference for OpenClaw.
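To see what your own box actually delivers, Ollama prints throughput when you pass --verbose. The model tag below is just an example of a Q4_K_M quant; substitute whatever you actually run:

```sh
ollama pull llama3.1:8b-instruct-q4_K_M    # example tag; any ~7B Q4_K_M quant works
ollama run llama3.1:8b-instruct-q4_K_M --verbose
# The "eval rate" line printed after the response is the tok/s figure
# the table above refers to.
```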
Provision
Or skip the hardware
The hardware decision matters if you're self-hosting. If you're not, you don't make it. Provision runs OpenClaw on infrastructure sized to your usage — no Pi to babysit, no VPS to update, no Compose file to debug at 2 AM.
Want OpenClaw without the ops?
Provision is the managed OpenClaw cloud — agents, channels, browser, and skills, all running. $99/mo. 48-hour free trial.