Open source guide

Self-host an AI agent on your own hardware.
Open source. MIT licensed.

The Provision core (OpenClaw harness, agent runtime, channel adapters, dashboard) is open source under MIT. You can run the same platform that powers the managed cloud on your own servers — for free. This page is the honest guide to doing it well: hardware, setup, the gotchas you'll hit, and the cases where managed cloud is the better call.

Why self-host an AI agent at all

Three reasons usually drive the decision. Compliance — your data has to live on hardware you control, full stop. Cost at scale — once you're running many agents continuously, the fixed-cost-per-month model of managed cloud loses to a server you already own. Curiosity and control — some teams just like owning their stack and don't mind the ops work.

All three are valid. The honest answer for most teams is that self-hosting AI agents is more work than it looks, especially the email and channel layers, and that you should pick self-host only if at least one of those three reasons applies specifically to you.

What you're running

A self-hosted Provision deployment is the same component set as the managed cloud, just on your hardware. Concretely:

OpenClaw harness

The agent runtime — orchestration loop, browser, filesystem, memory, skill system. Open source, MIT licensed. The brain of every agent.

Provision core

Dashboard, channel adapters (Slack/Telegram/Discord/Web Chat), per-agent inbox provisioning, kanban task board, multi-agent coordination. Open source, MIT licensed.

Postgres

Persistent storage for agent state, conversations, memory, and the dashboard. Standard Postgres — no special build.

Redis (or BullMQ)

Job queue for long-running agent tasks and async work. Standard Redis.

Headless Chrome

Each agent's sandboxed browser. Isolated containers per agent so they can't cross-contaminate.
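
A minimal sketch of what per-agent isolation can look like in a compose file. The image name, limits, and volume layout here are assumptions for illustration, not the project's actual manifest:

```yaml
# Illustrative compose service for one agent's browser.
# Image, limits, and volume names are assumptions; swap in
# whatever headless Chrome image you actually run.
services:
  browser-agent-a:
    image: chromedp/headless-shell:latest
    mem_limit: 1g        # cap memory per agent browser
    cpus: 1.0            # cap CPU per agent browser
    volumes:
      - agent-a-profile:/data   # per-agent profile; never shared

volumes:
  agent-a-profile:
```

A quick isolation check: start two such services and confirm a cookie set in agent A's browser never shows up in agent B's.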

Email gateway

SMTP send and IMAP receive. Inbound parsing routes replies back to the right agent. You handle SPF/DKIM/DMARC for your sending domain.

Model gateway

Pluggable LLM adapter — point at OpenAI, Anthropic, Google, or local Ollama. Same interface; switch by config.
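
As a sketch, switching providers can come down to a couple of environment variables. The variable names below are illustrative assumptions, not the project's documented schema:

```
# Illustrative .env fragment; variable names are assumptions.
MODEL_PROVIDER=anthropic            # or: openai, google, ollama
ANTHROPIC_API_KEY=sk-ant-...        # placeholder key
# A fully local setup needs no API key:
# MODEL_PROVIDER=ollama
# OLLAMA_BASE_URL=http://localhost:11434   # Ollama's default port
```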

Reverse proxy

Caddy or nginx fronting the dashboard, channel webhook receivers, and OAuth callbacks.
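
A minimal sketch, assuming Caddy and a hostname of agents.example.com (both placeholders); Caddy provisions TLS certificates automatically for a public hostname:

```shell
# Write a minimal Caddyfile that fronts the core service.
# Hostname and port are assumptions; adjust for your deployment.
cat > Caddyfile <<'EOF'
agents.example.com {
    # Dashboard, channel webhooks, and OAuth callbacks all proxy
    # to the same backend.
    reverse_proxy localhost:8000
}
EOF
```

Run it with caddy run --config Caddyfile. nginx users need the equivalent proxy_pass block plus their own certificate setup.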

The setup, end to end

Plan for a long afternoon if you're experienced, a weekend if you're newer to ops. Below is the actual sequence — skip steps at your peril.

  1. Get the hardware ready (15 min – 2 days)

    Linux server with Docker, or a Mac Mini with Docker Desktop / OrbStack. 16GB+ RAM, 256GB+ SSD. Public hostname or a tunnel (Cloudflare Tunnel, ngrok) for OAuth callbacks. SSH access to the box.

  2. Clone and start the core (~30 min)

    git clone github.com/provision-org/provision-core. Copy .env.example to .env and fill in DATABASE_URL, REDIS_URL, model API keys, and signing secrets. Run docker compose up -d. The dashboard comes up at localhost:8000.

  3. Configure browser sandboxing (~1–2 hours)

    Verify Chrome containers spawn correctly per agent and isolate filesystem and cookies. Set resource limits. Test that agent A can't read agent B's session.

  4. Wire up email (~2–8 hours)

    Pick a sending domain (provisionagents.yourcompany.com works). Add SPF, DKIM, DMARC. Connect a transactional provider for outbound (Postmark, Resend, SES). Set up inbound parsing (Cloudflare Email Routing, Postmark inbound, etc.). Test deliverability with mail-tester.com.

  5. Connect channels (~1–3 hours each)

    Slack: create a Slack app, configure scopes, set redirect URI to your-host/oauth/slack/callback. Telegram: create a bot via @BotFather, paste the token. Discord: create a Discord application, generate a bot token, set OAuth redirect. Provision core handles the runtime side once tokens are in place.

  6. Create your first agent and test

    From the dashboard, create an agent (Buzz, Marketing Lead). Pick channels. Send a test message in Slack. Verify the agent receives, processes, and responds. Send the agent an email; verify the inbound parsing works.

  7. Set up monitoring and backups (~1 hour)

    Tail the agent runtime logs to a log aggregator. Schedule nightly Postgres dumps. Add uptime monitoring on the dashboard. This is the difference between a working setup and a stable one.
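
The DNS records from step 4 look roughly like this in zone-file notation. The SPF include shown is Postmark's; the DKIM selector and public key come from whichever provider you connect, and the DMARC policy is your call:

```
provisionagents.yourcompany.com.                TXT  "v=spf1 include:spf.mtasv.net ~all"
s1._domainkey.provisionagents.yourcompany.com.  TXT  "k=rsa; p=<provider-supplied-key>"
_dmarc.provisionagents.yourcompany.com.         TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourcompany.com"
```

Start DMARC at p=none while you confirm alignment, then tighten to quarantine.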

When self-host wins, when cloud wins

Self-host wins when…

  • Compliance requires your data on your hardware.
  • You're running 50+ agents and the per-month math has flipped.
  • You enjoy the ops work and your time is genuinely free.
  • You need air-gapped local-model deployment.
  • You want full code-level customization beyond skills.

Provision Cloud wins when…

  • You'd rather pay $99/mo than spend a weekend on infra.
  • You don't have an ops team or AI engineer.
  • Email deliverability and channel OAuth are not where you want to spend time.
  • You want continuous updates and security patches handled.
  • You might want to migrate later — both directions are supported.

FAQ

Can I self-host the entire Provision platform?
Yes. The Provision core — OpenClaw harness, agent runtime, channel adapters, dashboard, and database schema — is MIT licensed and on GitHub. Clone it, configure your environment variables, run docker compose up, and you have the same platform that runs the cloud.
What hardware do I need?
For cloud-based models: any modern Linux box or a Mac Mini ($599+) works. CPU and 16GB+ RAM are enough since the heavy lifting happens in the model API. For local models: 24GB+ unified memory recommended (M4 Pro Mac Mini or similar). See the /openclaw-mac-mini guide for the detailed breakdown.
What's harder than expected when self-hosting?
Three things. (1) Email deliverability — getting agent emails to land in inboxes (not spam) requires SPF/DKIM/DMARC, a reputable IP, and warmup. (2) Channel OAuth — Slack/Telegram/Discord each need their own bot apps, scopes, redirect URLs, token handling. (3) Browser sandboxing — running per-agent Chrome safely, with isolation between agents, takes more thought than you'd guess.
Is self-hosting cheaper than $99/mo Provision Cloud?
It depends on how you value your time. Pure infra costs are very low (~$10-20/mo if you already have a server). The hidden cost is setup time (8-16 hours) plus ongoing maintenance (2-4 hours/month). If your time is worth more than $30/hour, $99/mo Provision is cheaper than self-hosting in year one.
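
That break-even can be checked with the midpoints of the numbers quoted: 12 hours of setup, 3 hours/month of maintenance, $15/month of infra, and a $30/hour value of time:

```shell
# Year-one back-of-envelope using the midpoints quoted above.
self_host=$(( (12 + 3*12) * 30 + 15*12 ))   # time cost + infra
cloud=$(( 99 * 12 ))                        # managed subscription
echo "self-host: \$${self_host}/yr, cloud: \$${cloud}/yr"
# → self-host: $1620/yr, cloud: $1188/yr
```

At lower hourly rates the comparison flips, since the pure infra cost is far below $99/month; the whole question is what the hours are worth.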
Can I use my own ChatGPT or Claude subscription?
Yes — that's a primary design goal. The self-hosted Provision core supports BYO ChatGPT Plus/Pro, Claude Pro/Max, OpenAI/Anthropic API keys, and local models via Ollama. Same flexibility as the managed cloud.
What's the migration story between self-host and managed?
Bidirectional. Start on managed Provision Cloud, migrate to self-host later if compliance requires it. Or start self-hosted and move to the cloud when you'd rather pay than maintain. Same code, same agent definitions, same data formats — just different infrastructure.
Is OpenClaw the same as Provision?
OpenClaw is the open-source AI agent harness — the runtime, browser, filesystem, memory, and skill system. Provision is a managed cloud and platform layer on top of OpenClaw, adding the dashboard, channel integrations, email-per-agent, multi-agent UX, and team-of-agents structure. The Provision core is also open source; the cloud is the paid layer.
Do I need OpenClaw to use the Provision core?
Yes — OpenClaw is the agent harness Provision is built on. You install both as part of the self-host setup. They're separate projects with separate repos, but they're designed to work together.

Self-host or managed — both paths are real.

Try managed for 48 hours free. Migrate to self-host whenever you're ready.