Reference guide · 8 min · Updated 2026-05-06

OpenClaw hardware — what actually runs well.

The official minimum is 2 vCPU and 4 GB RAM. The real minimum depends on whether you want browser automation, how many agents you run, and what model you call. Use the sizer below for a recommendation grounded in real production setups.

Quick answers

  • Do I need a Mac mini for OpenClaw?

    No. OpenClaw runs on Linux, macOS, Windows (via WSL2), Docker, Raspberry Pi, and any VPS. Mac mini M4 16 GB is excellent because of unified memory (great for local models) and silent operation, but it's not required. Cheapest realistic host is a $5/month Hetzner VPS.
  • How much RAM does OpenClaw need?

    Minimum 4 GB for chat-only with cloud LLMs. 8 GB for browser automation. 16 GB for a local 7B model. 32 GB for local 14B + browser. 64 GB+ for local 70B-class models.
  • Does OpenClaw need a GPU?

    Only for local models via Ollama. With cloud LLMs (Claude, GPT) the inference happens at the provider — your GPU does nothing. With Ollama, GPU vs CPU is the difference between 5 tok/s and 80+ tok/s on a 7B model.
  • What's the absolute minimum hardware for OpenClaw?

    2 vCPU, 4 GB RAM, 20 GB disk. Chat-only with cloud LLMs, no browser. Below 4 GB Docker becomes unstable during skill loading; below 2 vCPU the gateway and active sessions fight for CPU.
  • Can I run OpenClaw on a laptop?

    Yes — fine for tinkering. The catch: when you close the lid, your agent stops receiving messages. Within a week most people move it to a VPS, Mac mini, or Pi for always-on operation.

The honest floor

Real minimums

The official "2 vCPU / 4 GB RAM" minimum is technically true. In practice, that's only enough for chat-only personal use with cloud LLMs and no browser. Most setups need more.

| Use case | vCPU | RAM | Disk | Realistic spec |
| --- | --- | --- | --- | --- |
| Chat-only, cloud LLM, single agent | 2 | 4 GB | 20 GB | Hetzner CPX11, $5/mo |
| Above + Telegram/Slack channel | 2 | 4 GB | 30 GB | Same |
| Above + occasional browser | 2 | 8 GB | 60 GB | Hetzner CPX21, €7/mo |
| Browser-heavy daily research | 4 | 16 GB | 120 GB | Hetzner CPX31, €18/mo |
| Multi-agent (5+) team setup | 8 | 32 GB | 300 GB | Hetzner AX41 or Mac mini M4 |
| Local 7B model + everything | 8 | 16 GB | 100 GB | Mac mini M4 16 GB or Pi 5 16 GB |
| Local 14B model + browser | 8+ | 32 GB | 200 GB | Mac mini M4 Pro 24 GB |
| Local 70B model | 16+ | 64 GB | 400 GB | Mac Studio or workstation |

Build to spec

Hardware sizer

Tell the tool what you're planning and it'll calculate the minimum spec plus a recommendation.

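The logic behind a sizer like this can be sketched in a few lines. This is a hypothetical simplification, not the tool's actual implementation — the thresholds are lifted from the spec table above and the practical CPU rule later in this guide:

```python
def size_host(agents: int = 1, browser: bool = False,
              local_model_b: int = 0) -> dict:
    """Return a minimum spec for an OpenClaw host.

    agents        -- number of concurrent agents
    browser       -- whether browser automation is needed
    local_model_b -- local model size in billions of params (0 = cloud only)
    """
    vcpu, ram, disk = 2, 4, 20  # chat-only floor: 2 vCPU, 4 GB RAM, 20 GB disk
    if browser:
        vcpu, ram, disk = max(vcpu, 4), max(ram, 8), max(disk, 60)
    if agents >= 5:
        vcpu, ram, disk = max(vcpu, 8), max(ram, 32), max(disk, 300)
    if local_model_b >= 70:
        vcpu, ram, disk = max(vcpu, 16), max(ram, 64), max(disk, 400)
    elif local_model_b >= 14:
        vcpu, ram, disk = max(vcpu, 8), max(ram, 32), max(disk, 200)
    elif local_model_b >= 7:
        vcpu, ram, disk = max(vcpu, 8), max(ram, 16), max(disk, 100)
    return {"vcpu": vcpu, "ram_gb": ram, "disk_gb": disk}

print(size_host(browser=True))  # -> {'vcpu': 4, 'ram_gb': 8, 'disk_gb': 60}
```

Each requirement only ratchets the spec upward via `max()`, so combined workloads (say, a local 7B model plus a browser) always get the larger of the two demands.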

Where it goes

RAM in detail

RAM is the resource that bites first. Here's where it actually goes on a typical setup:

| Component | Resident RAM | Notes |
| --- | --- | --- |
| OS overhead | 0.5–1 GB | Linux base; macOS reserves more |
| Node + gateway | 0.5–1 GB | Steady-state for the daemon |
| Active session context | 0.2–0.4 GB per agent | Grows with conversation |
| SQLite memory index | 0.1–0.3 GB | Bigger as memory grows |
| Headless Chrome (idle) | 0.8 GB per instance | Each browser session |
| Headless Chrome (active) | 1.5–2 GB per instance | While rendering pages |
| Local Ollama 7B model | 5–6 GB | Quantized Q4_K_M |
| Local Ollama 14B model | 10–11 GB | Q4_K_M |
| Local Ollama 70B model | 44 GB | Q4_K_M |

Headroom matters

Linux's OOM killer is unforgiving. Stay at least 25% below your hard RAM limit; if you size right at the edge, an agent that occasionally spikes can get the whole host OOM-killed.
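Putting the table and the headroom rule together, a rough RAM budget looks like this. The figures are the upper ends of the table's ranges; the helper itself is illustrative, not part of OpenClaw:

```python
def ram_budget_gb(agents: int, browsers: int, model_gb: float = 0.0) -> float:
    """Minimum host RAM in GB: worst-case resident usage plus 25% headroom."""
    os_base = 1.0             # OS overhead
    gateway = 1.0             # Node + gateway daemon
    sessions = 0.4 * agents   # active session context per agent
    index = 0.3               # SQLite memory index
    chrome = 2.0 * browsers   # headless Chrome, actively rendering
    resident = os_base + gateway + sessions + index + chrome + model_gb
    return resident / 0.75    # stay 25% below the hard limit

# One agent plus one browser session needs roughly 6.3 GB,
# which is why 8 GB is the realistic browser-automation tier.
print(round(ram_budget_gb(agents=1, browsers=1), 1))  # -> 6.3
```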

Where CPU matters

CPU + browser

OpenClaw spends most of its time waiting for the LLM API to respond. That's I/O, not CPU. Where CPU genuinely matters:

  • Browser rendering. Chrome is CPU-heavy. Single-vCPU boxes get sluggish during page loads.
  • Concurrent agents. Each agent's gateway loop and skill execution wants its own thread.
  • Local model inference. Without a GPU, this is 100% CPU. A 7B model on 8 cores: 8–15 tok/s. On 2 cores: 1–3 tok/s, painful.
  • SQLite vector queries. memory_search on a large index is CPU-bound for the BM25 + similarity ranking.
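To see why that last point is CPU-bound: BM25 scoring runs per term, per candidate document, on every query. A textbook BM25 term score (illustrative — not OpenClaw's actual `memory_search` implementation) looks like:

```python
import math

def bm25_score(tf: int, df: int, n_docs: int, doc_len: int,
               avg_len: float, k1: float = 1.2, b: float = 0.75) -> float:
    """Classic BM25 score for one query term in one document.

    tf      -- term frequency in the document
    df      -- number of documents containing the term
    n_docs  -- total documents in the index
    doc_len -- length of this document (tokens)
    avg_len -- average document length in the index
    """
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
```

Every candidate memory entry pays this arithmetic (plus a vector similarity pass), so query latency scales with index size — pure CPU, no I/O wait to hide behind.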

Practical rule: 2 vCPU for personal, 4 vCPU when you add browser work, 8 vCPU when you go multi-agent or local inference.

Plan for years

Disk + memory growth

Memory grows. Skills accumulate. Logs pile up. Plan for what the disk will look like at month 12, not day 1.

| Asset | Day 1 | Month 6 | Year 1 |
| --- | --- | --- | --- |
| MEMORY.md + daily notes | 1 KB | 5–20 MB | 30–80 MB |
| SQLite memory index | 0 | 100–500 MB | 300 MB–1.5 GB |
| Installed skills | 0 | 100–300 MB | 200–500 MB |
| Log files (un-rotated) | 0 | 1–5 GB | 5–20 GB |
| Browser cache + screenshots | 0 | 500 MB–2 GB | 2–10 GB |
| Total | 0.5 GB | 2–8 GB | 8–30 GB |

Rotate logs

Set up logrotate or use Docker's log rotation. Otherwise logs eat all your free disk by month 6 and the gateway starts failing in confusing ways.
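A minimal logrotate drop-in for a bare-metal install might look like the following. The log path is an assumption — point it at wherever your gateway actually writes logs:

```
# /etc/logrotate.d/openclaw (path to logs is an assumption — adjust to your setup)
/var/log/openclaw/*.log {
    weekly
    maxsize 100M
    rotate 8
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` lets the gateway keep its file handle open across rotations. If you run under Docker instead, the equivalent is the `json-file` logging driver's `max-size` and `max-file` options in `daemon.json` or your compose file.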

Local models only

When you need a GPU

If you're running cloud LLMs, GPUs do nothing for OpenClaw. The GPU question only matters for local inference via Ollama.

| GPU | VRAM | Best model fit | Speed |
| --- | --- | --- | --- |
| No GPU (CPU only) | n/a | Up to 7B (slowly) | 5–15 tok/s |
| Apple M-series unified | 16–192 GB | Up to 70B | 20–80 tok/s |
| Nvidia RTX 4060 8 GB | 8 GB | Up to 7B | 40–80 tok/s |
| Nvidia RTX 4090 24 GB | 24 GB | Up to 32B | 80–200 tok/s |
| Nvidia A100 80 GB | 80 GB | Up to 70B | 100+ tok/s |

Apple M-series is a sleeper hit here — unified memory means your "VRAM" is the system RAM. A Mac mini M4 with 16 GB runs 7B models at 50+ tok/s; a Mac Studio M4 Max with 128 GB of unified memory genuinely runs 70B-class models. Short of a datacenter-grade Nvidia workstation, the M-series is the cheapest path to local inference for OpenClaw.
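As a sanity check on the VRAM column, you can estimate the raw weight footprint of a quantized model — Q4_K_M averages roughly 4.5 bits per weight (an approximation; actual GGUF files vary by layer mix):

```python
def q4_weights_gb(params_b: float, bits_per_weight: float = 4.5) -> float:
    """Approximate on-disk/in-memory weight size of a Q4_K_M model in GB."""
    return params_b * bits_per_weight / 8  # billions of params * bytes per weight

for params in (7, 14, 70):
    print(f"{params}B -> ~{q4_weights_gb(params):.1f} GB of weights")
```

This gives roughly 3.9 GB for 7B, 7.9 GB for 14B, and 39.4 GB for 70B — weights only. The resident figures in the RAM table above run higher because the KV cache and runtime overhead sit on top of the weights.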

Provision

Or skip the hardware

The hardware decision matters if you're self-hosting. If you're not, you don't make it. Provision runs OpenClaw on infrastructure sized to your usage — no Pi to babysit, no VPS to update, no Docker compose to debug at 2 AM.

See how Provision compares →

FAQ

What's the absolute minimum to run OpenClaw?
2 vCPU, 4 GB RAM, 20 GB disk. That gets you chat-only with cloud LLMs, no browser. Below 4 GB Docker becomes unstable during skill loading; below 2 vCPU the gateway and active sessions fight for CPU.
How much RAM does the browser actually need?
Each headless Chrome instance reserves ~800 MB and uses 1–2 GB during active page rendering. Budget 2 GB extra for one concurrent browser session, 4 GB for two, 8 GB+ for parallel browser-heavy automation.
Do I need a GPU?
Only if you're running local models (via Ollama). For cloud LLMs there's no inference happening on your hardware — the GPU does nothing. For local models, GPU vs CPU is the difference between 5 tok/s and 80 tok/s.
What disk speed matters?
SSD is essential. The SQLite memory index is read on every memory_search call, and daily memory writes are frequent. HDD storage causes 10–50x slowdowns on memory-heavy operations. NVMe is nice but not necessary; a SATA SSD is fine.
Will an Apple M-series Mac run OpenClaw well?
Excellent. M2/M3/M4 with 16 GB unified memory runs everything OpenClaw does, including local Ollama models, with no thermal throttling. The Mac mini M4 16 GB at $599 is genuinely the best price/performance bare-metal option for an OpenClaw host in 2026.
Why is RAM more important than CPU?
OpenClaw isn't CPU-bound — most work is waiting on LLM API responses. RAM bottlenecks are real because Chrome, the SQLite index, the Node runtime, and (if local) the model itself all want to be resident. RAM exhaustion causes hard failures; CPU saturation just slows things down.

Want OpenClaw without the ops?

Provision is the managed OpenClaw cloud — agents, channels, browser, and skills, all running. $99/mo. 48-hour free trial.