Use case · Research Analyst

A research AI agent who actually reads the sources.

Most AI research tools are dressed-up search engines — you type a query, you get a generated summary that hallucinates a third of the citations. A research AI agent that's actually useful in 2026 looks different: a named teammate ("Max, our Research Analyst") who reads dozens of real sources via their browser, builds structured outputs (matrices, briefs, market sizings) over hours of work, and ships the final deliverable to your Slack with every claim cited to a real URL. This page shows what that looks like and how Provision sets one up.

Where research AI sits in 2026

Research is one of the categories where AI agents have started to genuinely outperform humans on speed, while still trailing on judgment. Stanford HAI research and other academic sources have documented that frontier models can synthesize across dozens of sources in minutes — work that used to take a junior analyst days. The honest caveat is that the same models hallucinate citations or miss subtext; they're a powerful assistant, not yet a fully autonomous analyst.

The 2026 generation of research agents is the difference between "AI summarizes a search" and "AI runs a multi-hour research workflow with browser-based source verification." The first is what ChatGPT does. The second is what an agent like Max does: load a list of competitors, browse each company's site, pull pricing pages, read their last earnings call, summarize their public roadmap, build a structured matrix, ship the result.

The right framing for a research AI agent is as the executor for a senior analyst's instructions — much the way a junior analyst at a consulting firm or a sell-side equity desk operates. The senior asks for the report; the junior pulls the sources, runs the analysis, drafts the deck. The senior reviews, edits, and ships. AI agents fill the junior role; the senior judgment still belongs to humans.

What a research AI agent actually does

The work that's a strong fit: competitive matrices, market sizings (top-down and bottom-up), summaries of earnings calls and analyst transcripts, technology landscape scans, regulatory reading, executive bio pulls, M&A target lists, customer interview synthesis. In each case the agent reads many sources, extracts structured information, and delivers a structured output you can act on.

The work that's a weak fit: original primary research (the agent can't conduct interviews or run experiments), highly opinionated thesis writing (the agent is good at synthesizing other people's opinions, weaker at constructing novel ones), and any work that requires non-public access or networking-driven sources.

The honest reframe: a research AI agent is a force-multiplier for a senior researcher, not a replacement. The senior gets to spend their time on the structuring, the strategic interpretation, and the final write-up. The agent does the source-reading and the structured-output drafting that historically ate the bulk of an analyst's time.

A day in the life of Max, your research agent

Research work is project-shaped, not ticket-shaped. A typical day looks more like running multiple parallel research tasks than a steady stream of small jobs.

8:00 AM

Posts in #research: yesterday's deliverables, today's queue, blockers.

9:00 AM

Picks up the morning's largest task — "build a competitive matrix on the voice cloning category." Pulls a list of 12 competitors, opens each company's homepage in their browser, reads pricing/features, captures structured data.

11:00 AM

Reads ElevenLabs' last earnings call transcript via their browser. Pulls 8 key quotes, structures them by theme (growth, product, competition, regulation), drafts a 1-page TL;DR.

12:30 PM

Posts the matrix and the earnings summary in #research with structured TL;DRs and full source citations linked.

2:00 PM

Picks up the afternoon's request from Slack: "@max, can you scan EU regulatory filings on AI voice synthesis?" Browses the relevant agency sites, captures applicable rulings, drafts a 5-bullet summary.

4:00 PM

Triages 6 inbound source-of-truth requests from the team — quick lookups ("what's Tavus's funding history?") that take 5 minutes each via their browser.

5:30 PM

End-of-day digest: 2 deep deliverables shipped, 6 quick lookups handled, 1 deliverable in progress for tomorrow morning.

How Provision delivers a research AI agent

A Provision research agent runs on managed OpenClaw with a sandboxed browser as their primary tool — that's the whole game for research. Their browser handles JavaScript-heavy sites, multi-step navigation, and login-walled sources where you've authorized them. Setup is one OAuth click for Slack and one optional setting for the email inbox.

The skills that matter most for research come pre-loaded: web-search, browse-and-read (renders pages and extracts content), summarize-thread, draft-long-form (structured outputs with headings and citations), competitive-matrix (multi-step skill that takes a list of competitors and produces a structured comparison). Custom skills wrap your internal tools — a CRM lookup, a private database, an analyst access portal.
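To make the custom-skill idea concrete, here is a minimal sketch of what a skill wrapping an internal tool might look like. Everything in it is hypothetical — Provision's actual skill API isn't documented on this page, so the names (`SkillResult`, `crm_funding_lookup`) and the stub CRM data are illustrative only. The shape is what matters: structured input in, structured output with a citable source out.

```python
from dataclasses import dataclass

@dataclass
class SkillResult:
    """Structured output: every field the agent cites carries a source ref."""
    summary: str
    source_url: str

# Hypothetical custom skill: look up a company's funding history in an
# internal CRM. A real deployment would call your CRM's API; here a stub
# dict with a fictional company stands in for it.
_FAKE_CRM = {
    "ExampleCo": {"total_raised": "$25M", "last_round": "Series A, 2024"},
}

def crm_funding_lookup(company: str) -> SkillResult:
    record = _FAKE_CRM.get(company)
    if record is None:
        return SkillResult(summary=f"No CRM record for {company}.",
                           source_url="crm://not-found")
    summary = (f"{company}: {record['total_raised']} raised, "
               f"last round {record['last_round']}.")
    return SkillResult(summary=summary,
                       source_url=f"crm://companies/{company}")
```

The point of the `source_url` field is the citation discipline described above: whether the source is a public webpage or a private CRM record, every claim the agent ships carries a pointer a human can audit.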

  • Sandboxed Chrome — JavaScript-heavy and login-walled sources work the same as for a human.
  • Structured-output skills — matrices, briefs, market sizings, source synthesis with real citations.
  • Long-context model support — bring your own ChatGPT or Claude subscription with extended context for big source sets.
  • Slack-resident — drops deliverables in-channel, takes asks via @-mention.
  • Multi-agent — Buzz (Marketing) can ask Max to dig deeper on a competitor; Max returns a brief in-channel.
  • Open-source MIT core — auditable for compliance contexts.
  • $99/mo flat with a 48-hour free trial.

AI research agent vs adjacent tools

The research AI category has a lot of overlapping products with very different shapes.

Generative search engines (Perplexity, ChatGPT Search, Gemini Search)

What it is: AI-powered search that returns synthesized answers with citations.

vs Provision: Different shape. Useful for one-shot lookups; not built for multi-hour research workflows or structured outputs. A research AI agent uses these underneath but adds the team-resident, multi-step, persistent-output layer.

Deep research products (ChatGPT Deep Research, Gemini Deep Research)

What it is: Multi-step research orchestrators that run for tens of minutes per query.

vs Provision: Closer in shape. Differences: Provision research agents are persistent named teammates with channel handles, not session-scoped queries. They build context over weeks of work and live in your Slack.

Competitive intel platforms (Crayon, Klue)

What it is: Curated competitive monitoring with battle cards and tracking.

vs Provision: Complementary. Provision agents can pull from these tools through their browser, but they're not replacing the platform. They're the analyst who operates it.

Hire a research analyst

What it is: Junior or mid-level researcher at $60-110k/year.

vs Provision: Different category. Humans bring strategic interpretation and primary research capability. AI agents bring fast structured-output drafting from public sources. Best research operations use both — agents do the source-reading, humans do the interpretation.

Cost and ROI

Provision is $99/mo flat. BLS data on market research analysts puts the fully-loaded cost of a junior analyst north of $80k/year. The hard ROI math: a $99/mo Provision research agent typically replaces 8-12 hours/week of a senior analyst's source-reading time, freeing them for interpretation. The senior cost varies widely; the freed-time value is in the same neighborhood as a part-time hire.
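The hard ROI math above can be made concrete. A back-of-envelope sketch, with assumed inputs — the $150k fully-loaded senior cost and 2,000 work hours/year are illustrative figures, not Provision or BLS data:

```python
# Back-of-envelope ROI: value of senior source-reading hours freed per month
# versus the flat $99/mo cost. Salary and hours figures are assumptions
# for illustration only.
SENIOR_FULLY_LOADED = 150_000   # $/year, assumed fully-loaded senior cost
WORK_HOURS_PER_YEAR = 2_000     # ~40 h/week * 50 weeks
HOURLY_RATE = SENIOR_FULLY_LOADED / WORK_HOURS_PER_YEAR  # $75/hour

hours_freed_per_week = 10       # midpoint of the 8-12 h/week range above
WEEKS_PER_MONTH = 52 / 12
monthly_value = hours_freed_per_week * WEEKS_PER_MONTH * HOURLY_RATE
agent_cost = 99                 # $/month flat

roi_multiple = monthly_value / agent_cost
print(f"freed time worth ~${monthly_value:,.0f}/mo "
      f"vs ${agent_cost}/mo (~{roi_multiple:.0f}x)")
```

Under those assumptions the freed senior time is worth roughly $3,250/month against a $99/month cost — the exact multiple shifts with your salary numbers, but the gap stays wide across any realistic inputs.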

The harder-to-measure ROI is decision speed. Strategy decisions that used to wait two weeks for an analyst to compile the deck now happen in two days. HBR research on decision-making has argued that decision velocity matters as much as decision quality at most companies. A research agent that compresses the prep cycle is a strategic investment, not just a cost line.

FAQ

Does it hallucinate citations?
It can, like every LLM-driven system. Provision research agents mitigate this by fetching the actual page contents through their browser before citing — every claim ships with a URL the agent actually visited. You can audit the agent's browsing trail in the Provision dashboard. For published deliverables, we recommend a human spot-check of citations on first use; trust grows with track record.
Can it read paywalled sources we have access to?
Yes — log in to the source through the agent's browser session and the agent inherits the session. Bloomberg Terminal, FactSet, paid academic journals, internal databases — anywhere a human researcher would log in.
How long does a deep research task take?
A 12-source competitive matrix typically takes 20-40 minutes. A 30-source market landscape with structured output takes 2-4 hours. A multi-day project with iterative feedback takes... multiple days. The agent works asynchronously and posts updates in Slack as it progresses.
What about confidential sources?
The agent runs in an isolated, sandboxed runtime per team. We don't train on your data. For highly sensitive research (M&A targets, internal IP analysis), self-host the open-source Provision core on your own hardware — same code, full data residency.
Can it produce slide decks?
It can produce structured outputs that map to deck slides — title, subtitle, bullets, citation. It can drive Google Slides or PowerPoint via their browser to assemble actual decks, but the visual design is uneven. Most teams have the agent produce the structured content and a human (or Buzz, the Marketing agent) handle the deck design step.
How does this compare to ChatGPT Deep Research?
ChatGPT Deep Research is excellent for one-shot deep research queries — faster and often higher-quality on a single ask. A Provision research agent is structurally different: persistent identity, channel-resident, accumulating memory, integrated with your team's other agents. They complement each other; many teams use Deep Research for ad-hoc exploration and Provision for ongoing competitive monitoring.
Will it learn what's important to us?
Yes. After a few weeks the agent learns your team's frameworks ("we always include market timing in competitive analyses"), your domain ("voice cloning, not video synthesis"), and your level of detail ("two-page brief, not 10"). The longer the agent works with the team, the less explicit instruction each request requires.

Hire Max Carter.
48 hours, free.

$99/mo after the trial. Cancel anytime. Open-source core if you ever want to self-host.