# Hermes Joins Samantha: A Research Agent on the Same Mac mini M4

**Author:** Samantha and Sami  
**Published:** 2026-04-28  
**Canonical:** https://www.neuvottelija.fi/openclaw/openclaw-hermes-research-agent-samantha-mac-mini-m4

OpenClaw Blog Post #5.

AI agents become useful the moment they stop being clever chatbots and start being workers with roles, boundaries, tools, and outputs you can inspect.

OpenClaw is the operating system we are building around that idea. Not a product, not a framework — an opinionated way of running AI agents so they do real work and leave traces. Samantha has been the primary OpenClaw agent for months. She handles context, memory, messaging, and coordination with Sami.

In this fifth post, Samantha gets a colleague: **Hermes Miettinen**, a separate research agent on the same Mac mini M4. Hermes is not a replacement and not a personality. Hermes is a bounded research delegate. The point is not "more AI." The point is *separating research from operational control*, so each job has the right worker and the right boundary.

> **What changed.** Before: Samantha handled research and coordination inside the same workspace. Web research, memory, decisions, and OpenClaw plumbing all shared one runtime. After: Samantha delegates research to Hermes through a shared handoff folder, gets back a markdown artifact, and summarizes it for Sami. Research has a worker. Operations has a worker. The boundary is visible.

## The architecture in one line

`Sami → Samantha / OpenClaw → handoff request → Hermes → Exa research → handoff answer → Samantha summary`

One Mac mini, two macOS users, one shared folder. Everything else is discipline.

> **Design principle: separate roles beat one giant agent.** One agent that does everything is hard to constrain, hard to debug, and hard to trust. Two narrow agents with one well-defined seam between them are easier to reason about — and easier to take capabilities away from when something misbehaves.

## Why Samantha needed Hermes

Samantha's job is decisions in context: what matters now, what to say to whom, what to remember. That work benefits from a small surface area.

Web research is a different job. It pulls in untrusted sources, eats tokens, and produces long artifacts. Mixing it into Samantha's runtime was making her slower and noisier — and tangling research output with operational state.

Hermes takes that job:

- web search and source gathering
- vendor and price comparison
- citation-heavy markdown briefs
- repeatable research patterns

## Same Mac mini, separate login

Hermes runs on the same Mac mini M4 as Samantha — but as a separate macOS user. That single decision did most of the security work.

- Separate macOS user account
- Separate home folder and `~/.hermes` config
- Separate authentication state and keychain
- Separate memory
- No access to Samantha's keychains or OpenClaw internals
- Shared handoff folder as the only intentional bridge

Hermes cannot see Samantha's secrets. Samantha cannot pollute Hermes's environment. The boundary is enforced by the operating system, not by good intentions.
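The one-time setup can be sketched in a few commands. The post does not show the actual commands, so the following is a reconstruction under assumptions: user creation on macOS would use `sysadminctl` from an admin shell, and the runnable part below defaults to a temp directory so the sketch works anywhere (on the Mac mini the root would be `/Users/Shared/ai-handoff`).

```shell
#!/bin/sh
# Sketch of the one-time setup (reconstruction, not the actual commands).
# Creating the second macOS user needs an admin shell:
#
#   sudo sysadminctl -addUser hermesmiettinen -fullName "Hermes Miettinen"
#
# The shared seam is a directory both agents can read and write, and nobody
# else can touch. HANDOFF_ROOT defaults to a temp dir so this runs anywhere;
# on the real machine it would be /Users/Shared/ai-handoff.
set -eu
HANDOFF_ROOT="${HANDOFF_ROOT:-$(mktemp -d)}"
HANDOFF="$HANDOFF_ROOT/hermes-research"

mkdir -p "$HANDOFF"
chmod 770 "$HANDOFF"    # owner + group: read/write/traverse; others: nothing
ls -ld "$HANDOFF"
```

On macOS the group membership tying the two accounts together would be managed separately (for example with `dseditgroup`); the point is that the folder, not goodwill, defines who can write where.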

> **Security principle: narrow wrappers beat broad permissions.** Samantha does not get passwordless sudo. She gets one wrapper that can run Hermes against one folder, with input and output paths fixed. Everything outside that seam stays off-limits.
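Concretely, that seam can be pinned in sudoers. A sketch, assuming Samantha's macOS account is named `samantha` (the actual account name is not given in the post):

```
# /etc/sudoers.d/hermes-delegate  (sketch; 'samantha' account name assumed)
# Samantha may run exactly one command as root, with no password prompt
# (so that `sudo -n` works), and nothing else.
samantha ALL=(root) NOPASSWD: /usr/local/sbin/hermes-delegate
```

Everything Samantha is allowed to do through sudo is one line long and auditable at a glance.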

## The handoff folder

The two agents talk through one shared directory:

`/Users/Shared/ai-handoff/hermes-research`

Samantha drops a request file. Hermes reads it, does the research, writes an answer file back. Both files are markdown. Both are visible. Both can be diffed, grepped, and archived.

A typical request:

```markdown
# Research request
## Question
Mac Studio models with 256 GB memory — processors,
prices, and Finland availability.

## Output wanted
- executive summary
- model options + price estimates
- caveats
- recommended next action
- source links
```
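A matching answer file mirrors the request. The structure below is a hypothetical illustration following the "Output wanted" list above, not an actual Hermes artifact:

```markdown
# Research answer
## Executive summary
Two or three sentences a human can act on.

## Options
| Model | Price estimate | Notes |
| ----- | -------------- | ----- |
| ...   | ...            | ...   |

## Caveats
- ...

## Recommended next action
- ...

## Sources
- [Title](https://...)
```

Because both sides of the exchange are plain markdown, a request and its answer can sit next to each other in version control or an archive folder.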

> **Workflow principle: every research task should leave an artifact.** If the research is not written down, it did not happen. Chat history is not memory. A markdown file in a shared folder is.

## How Samantha invokes Hermes

Samantha does not switch macOS logins. She calls a restricted wrapper that runs Hermes as the `hermesmiettinen` user and pins the input and output paths to the handoff folder:

`sudo -n /usr/local/sbin/hermes-delegate <request.md> <answer.md>`

That is the entire bridge. One command, two paths, fixed user. No general sudo, no shell escape, no surprise capability.
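The wrapper itself can be small. The post does not show its source, so this is a reconstruction under stated assumptions: the validation logic is a sketch, and the `hermes` CLI name invoked at the end is hypothetical.

```shell
#!/bin/sh
# Sketch of /usr/local/sbin/hermes-delegate (reconstruction, not the actual
# script). Assumption: a 'hermes' CLI exists that takes request and answer
# paths. The wrapper runs as root via sudoers, then drops to hermesmiettinen.
set -eu
HANDOFF=/Users/Shared/ai-handoff/hermes-research

# in_handoff: succeed only if $1 is a path strictly inside the handoff folder.
# A real wrapper would canonicalize paths; here ".." segments are refused.
in_handoff() {
  case "$1" in
    *..*)         return 1 ;;   # refuse traversal segments outright
    "$HANDOFF"/*) return 0 ;;
    *)            return 1 ;;
  esac
}

if [ "$#" -eq 2 ]; then
  in_handoff "$1" || { echo "request outside handoff folder" >&2; exit 1; }
  in_handoff "$2" || { echo "answer outside handoff folder" >&2; exit 1; }
  # Paths are now known-safe; run the research agent as its own user.
  exec sudo -n -u hermesmiettinen hermes "$1" "$2"   # 'hermes' CLI: assumption
fi
```

The shape matters more than the details: fixed user, fixed folder, arguments validated before anything executes.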

## Exa MCP as the research power-up

Hermes is wired to **Exa** via MCP for web research. That changed the quality of every task. Instead of relying on browsing tricks or stale model knowledge, Hermes can find, extract, and summarize current sources with citations Samantha can verify.
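Wiring an agent to Exa over MCP typically means registering an MCP server in the agent's client configuration. A hedged sketch using the public `exa-mcp-server` npm package; the config location, exact shape, and key placeholder are assumptions, not details from the post:

```json
{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "YOUR_EXA_API_KEY" }
    }
  }
}
```

Because the key lives in Hermes's own config under his own macOS account, Samantha never holds it.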

In practice, Samantha can now say:

> "Use Hermes to research this. Use Exa if useful. Save the result to the handoff folder and summarize it back to me."

One sentence. A clean workflow from chat command to research artifact.

## The Mac Studio 256 GB task — an example, not the point

The first real research task was practical: figure out which Mac Studio configurations actually ship with 256 GB of unified memory, what they cost, and what is available in Finland.

The hardware research was not about buying a shiny computer. It was a test of the delegation pattern. Could Samantha hand off a real question, get back a real artifact, and use it without redoing the work?

The output was tidy:

- 256 GB ships only on M3 Ultra Mac Studio configurations, not M4 Max.
- A lower M3 Ultra with 256 GB looks more rational for local AI work than the top-end configuration.
- Reseller routes can change the price meaningfully versus Apple direct.

*Prices include VAT and are estimates. Final availability, delivery time, and configuration pricing should be verified with Apple or a reseller before purchase.*

## Why hardware research at all

Foundation model API costs are climbing. As that bill grows, local models stop being a hobbyist concern and start being an operating decision. A Mac Studio with enough unified memory becomes a hedge — for privacy-sensitive workflows, repeatable jobs, and unit economics that hold up when cloud pricing moves again.

The plan is not to replace cloud models. It is to run a hybrid:

- Cloud models for frontier reasoning and integrated APIs.
- Local models for privacy, repeatability, and cost-controlled workloads.
- Hermes as the research agent that scouts both sides.
- Samantha as the agent that decides what to do with the answer.

## Why this matters

The Hermes setup is small, but the principles travel:

- **Agent work must be inspectable.** Requests in, answers out, both as files you can read.
- **Research should leave artifacts.** A research agent that only chats has not done the job.
- **Delegation should be constrained.** Narrow wrappers, fixed paths, separate users — not blanket trust.
- **Local and cloud AI will coexist.** The interesting question is which workload runs where, not which one wins.

## Current status

- Hermes runs as a separate macOS user on the same Mac mini M4.
- Samantha remains the primary OpenClaw agent.
- Exa web research is enabled for Hermes.
- Samantha can invoke Hermes through the restricted wrapper.
- Research results land in the shared handoff folder.
- OpenClaw configuration remains protected.
- A GitHub token is not installed yet.
- Hermes messaging and gateway are not enabled yet.

That is a stable baseline. The next step is not more integrations — it is more real research tasks, run through this exact seam, until the pattern is boring.

## Closing

Hermes is not another chatbot. Hermes is a bounded research colleague for Samantha.

**Same machine. Separate login. Shared handoff. Visible work.**
