puppyone for CrewAI: a shared workspace for multi-role crews

April 23, 2026 · puppyone team

TL;DR

  • CrewAI gives you the role / goal / task abstractions. A Researcher, a Writer, an Editor, all collaborating to produce something. It's a beautiful framework for expressing multi-agent work.
  • It does not give you the place where that work actually lives. By default, intermediate outputs are passed through the orchestrator as strings, dumped to local files you have to manage yourself, or shoved into a memory class that doesn't survive process restarts.
  • puppyone is the workspace under the crew. Each role gets its own Access Point. The Researcher writes to /research/, the Writer reads /research/ and writes /drafts/, the Editor reads /drafts/ and writes /reviews/. Every write is a commit. Every role only sees what it should. Tomorrow's crew picks up where today's left off.
  • The crew stays a crew. The state stops being scattered.

What changes when you wire puppyone into CrewAI

CrewAI projects almost always evolve through the same four stages:

  1. Stage 1: A toy crew. Three roles, each with one task, each task returns a string, the orchestrator concatenates them. Beautiful in a demo. Falls over the moment one task needs to reference another's full output instead of a summary.
  2. Stage 2: Local files. You start writing intermediate outputs to ./tmp/research.md so the next role can read them. Works on one machine. Dies when you go to production.
  3. Stage 3: A homegrown shared store. Postgres table, S3 bucket, Redis. You're now writing schema and ACL code instead of crew code.
  4. Stage 4: Multi-agent collaboration that actually works. Each role has its own scoped workspace, version history, and visibility into what the other roles produced. This is where CrewAI was always trying to take you. puppyone is what stage 4 looks like without you building it.

With puppyone wired in, your CrewAI crew gets:

  • A persistent workspace per crew (or per crew run). Survives restarts, deployments, and "we need to rerun yesterday's pipeline".
  • One Tool per role with the right scope. The Researcher gets read+write on /research/, read on /specs/. The Writer gets read on /research/, read+write on /drafts/. The Editor gets read on /drafts/, read+write on /reviews/. The orchestrator doesn't enforce this — the workspace does.
  • Outputs that other agents and humans can read. A draft isn't a string in the orchestrator's prompt; it's /drafts/post-2026-04-23.md that anyone with the right Access Point can open.
  • Automatic version history per write. "Show me what the Writer changed between draft v3 and v4" is a built-in operation, not a feature you build.
  • Cross-crew visibility. The Editor crew can read what last week's Research crew wrote. Crews can hand off work cleanly.
  • No orchestrator string-passing of huge artifacts. Tasks return paths, not 50KB of markdown. The orchestrator stays light; the storage handles the bulk.

CrewAI keeps its job of expressing the collaboration. puppyone takes the job of being the place the collaboration accumulates.

What stays exactly the same

  • All your existing Agent, Task, Crew, and Process definitions. We don't replace any of them.
  • Your model provider. OpenAI, Anthropic, your local model — unchanged.
  • CrewAI's hierarchical / sequential process modes. Unchanged.
  • CrewAI's built-in memory and tools. Add puppyone alongside them. Use them where they're the right fit; use puppyone for the durable, shared, file-shaped state.
  • Your other tools (web search, code interpreter, custom Python). Unchanged.

How to wire it up (the short version)

  1. Spin up a puppyone workspace. Cloud (try.puppyone.ai) or self-hosted Docker.
  2. Define one Access Point per role (or one per role per crew, if you have several crews) with the right path scopes:
    • Researcher: read+write on /research/, read on /specs/, /integrations/.
    • Writer: read on /research/, /specs/, read+write on /drafts/.
    • Editor: read on /drafts/, read+write on /reviews/.
  3. Install the puppyone Python SDK and import the CrewAI tool helpers.
  4. Add the puppyone tool to each Agent's tools=[...], configured with that role's Access Point.
  5. Adjust task descriptions to instruct each role where to write/read in the workspace ("Write your findings to /research/<topic>.md. Read the spec from /specs/<feature>.md first.").
  6. Optionally: have the orchestrator pass run_id so each crew run gets its own subtree (/runs/<run-id>/research/...), making history per-run trivial.

That's the whole integration. No glue code, no schema, no ACL plumbing.
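To make the scoping in step 2 concrete, here is a minimal, self-contained sketch of the per-role rules the workspace enforces. This is an illustrative model only, not the puppyone SDK: the real Access Points are configured in puppyone itself, and the `AccessPoint` class, field names, and write-implies-read rule here are assumptions made for the sketch.

```python
from dataclasses import dataclass

# Illustrative model of the per-role scopes from step 2. The real
# Access Point lives in puppyone; this sketch only shows the rules.

@dataclass
class AccessPoint:
    role: str
    read: tuple = ()    # path prefixes this role may read
    write: tuple = ()   # path prefixes this role may read AND write

    def can_read(self, path: str) -> bool:
        # "read+write" scopes imply read, so check both prefix lists
        return path.startswith(tuple(self.read) + tuple(self.write))

    def can_write(self, path: str) -> bool:
        return path.startswith(tuple(self.write))

researcher = AccessPoint("Researcher", read=("/specs/", "/integrations/"), write=("/research/",))
writer     = AccessPoint("Writer",     read=("/research/", "/specs/"),     write=("/drafts/",))
editor     = AccessPoint("Editor",     read=("/drafts/",),                 write=("/reviews/",))

assert researcher.can_write("/research/topic.md")
assert not researcher.can_write("/specs/feature.md")  # read-only for this role
assert writer.can_read("/research/topic.md")
assert not editor.can_read("/research/topic.md")      # Editor sees drafts and reviews only
```

The point of the model: the orchestrator never has to check any of this, because a role's tool physically cannot reach outside its scopes.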

Workflows that actually get better

1. Research → Write → Edit pipeline

Without puppyone: Researcher returns a 4KB string. Writer's prompt now contains 4KB of input. Editor's prompt now contains the Writer's full draft plus the original research. By the third role, you're paying for all of that accumulated context on every model call.

With puppyone: Researcher writes /research/topic.md and returns the path. Writer reads the file (only when it needs to), writes /drafts/post.md, returns the path. Editor reads what it needs. Orchestrator passes paths, not payloads.
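The paths-not-payloads pattern can be sketched in plain Python. A local temp directory stands in for the puppyone workspace here, and the role functions and file names are illustrative assumptions; the shape to notice is that each role returns a short path, never a body.

```python
import tempfile
from pathlib import Path

# A local directory stands in for the workspace in this sketch.
ws = Path(tempfile.mkdtemp())

def researcher(topic: str) -> str:
    out = ws / "research" / f"{topic}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"# Findings on {topic}\n...")
    return str(out.relative_to(ws))           # a short path, not the findings

def writer(research_path: str) -> str:
    notes = (ws / research_path).read_text()  # read only when actually needed
    out = ws / "drafts" / "post.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"Draft based on:\n{notes}")
    return str(out.relative_to(ws))

research = researcher("agents")
draft = writer(research)
# The orchestrator only ever carried two short paths between roles.
```

The Editor follows the same shape: read the draft path, write a review path.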

2. Plan → Execute → Verify with rollback

Without puppyone: Plan agent produces a plan. Execute agent acts. Verifier finds a bug. You re-run the whole thing because you don't have a clean snapshot of what the plan was when execution started.

With puppyone: Plan agent writes /runs/<run-id>/plan.md. Execute agent reads it, writes /runs/<run-id>/execution.md. Verifier reads both. If something's wrong, you can roll back, re-run just the affected step, and inspect the diff between attempts.
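A tiny helper makes the per-run layout concrete. The specific file names under the run root are assumptions for the sketch; the point is that each attempt gets its own subtree, so comparing two attempts is just comparing two path families.

```python
from posixpath import join
from uuid import uuid4

# Sketch: every artifact of one crew run lives under /runs/<run-id>/.
def run_paths(run_id: str) -> dict:
    root = join("/runs", run_id)
    return {
        "plan":      join(root, "plan.md"),
        "execution": join(root, "execution.md"),
        "verify":    join(root, "verify.md"),   # file name is an assumption
    }

attempt_1 = run_paths(uuid4().hex[:8])
attempt_2 = run_paths(uuid4().hex[:8])
assert attempt_1["plan"] != attempt_2["plan"]   # a clean snapshot per attempt
```

With this layout, "what was the plan when execution started?" is answered by reading `attempt_1["plan"]` exactly as it was written, not by replaying the run.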

3. Long-running crews

Without puppyone: A crew that's supposed to run "for a week, with humans checking in daily" can't do that — there's no place for state to live between runs.

With puppyone: State lives in the workspace. Crew checkpoints into files. Restart the process whenever, the next run reads what the last run wrote, continues. Humans review files between sessions.
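The checkpoint-into-files pattern can be sketched as follows, again with a local directory standing in for the workspace. The checkpoint schema (a JSON list of processed items) is an illustrative assumption; any file-shaped state the next session can read works the same way.

```python
import json
import tempfile
from pathlib import Path

ws = Path(tempfile.mkdtemp())              # stand-in for the workspace
ckpt = ws / "checkpoints" / "crew-state.json"

def run_session(items: list[str]) -> list[str]:
    # Resume from whatever the last session checkpointed, if anything.
    done = json.loads(ckpt.read_text())["done"] if ckpt.exists() else []
    for item in items:
        if item in done:
            continue                       # a previous session handled this
        done.append(item)                  # ...real crew work happens here...
        ckpt.parent.mkdir(parents=True, exist_ok=True)
        ckpt.write_text(json.dumps({"done": done}))
    return done

run_session(["a", "b"])                    # session 1, then the process dies
resumed = run_session(["a", "b", "c"])     # session 2 picks up where 1 left off
assert resumed == ["a", "b", "c"]
```

Because the checkpoint is just a file, a human can open and inspect it between sessions, which is exactly the daily check-in the scenario describes.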

4. Hand-off between specialised crews

Without puppyone: Crew A produces something Crew B needs. You build a serialisation format, a queue, a handoff agent.

With puppyone: Crew A writes to /handoff/. Crew B reads from /handoff/. The "format" is "markdown files with whatever structure makes sense". Each crew has the appropriate Access Point.

5. Human-in-the-loop without separate tooling

Without puppyone: Crew produces output. You build a UI for humans to review. You serialise edits back into the next crew's prompt.

With puppyone: Crew writes /drafts/. Humans open puppyone (or any text editor talking to it) and edit. Next crew reads the edited version. No UI to build. The version history shows exactly what the human changed.

6. Multi-tenant crew SaaS

Without puppyone: Each customer's crew has access to "all the agent state" unless you carefully partition. Easy to build a leak.

With puppyone: Each tenant gets a workspace (or a scoped subtree). Each role has an Access Point scoped to that tenant. Storage-level isolation. Your application code doesn't enforce tenancy — the storage does.

Patterns to avoid

  • Don't pass full artifacts through the orchestrator anymore. Once roles share a workspace, return paths from tasks, not bodies. Saves tokens, latency, and prompt size.
  • Don't give every role the same Access Point. That undoes the per-role permissioning entirely. Distinct scopes per role are the whole point.
  • Don't write to canonical paths from a low-trust role. The Researcher writing to /research/ is fine; the Researcher writing directly to /specs/ is asking for trouble. Have a review step (human or another role) promote into canonical paths.
  • Don't skip the run_id partition for crews you'll rerun a lot. A flat namespace for a daily-recurring crew gets messy fast. Per-run subtrees keep history clean.
  • Don't put per-LLM-call scratch in puppyone. Intermediate reasoning that nobody else needs to read can stay in memory. puppyone is for state another role, another run, or a human will want.

How this fits with the rest of your stack

  • CrewAI expresses the crew — roles, goals, tasks, process.
  • puppyone is the shared workspace those roles read and write. Per-role permissions and version history are the storage's job, not the orchestrator's.
  • Cursor / Claude Code see the same workspace via MCP, so when a developer wants to inspect what the crew did, they just open it in the editor. See puppyone for Cursor and puppyone for Claude Code.
  • n8n / LangChain can hand work into and out of the same workspace, so CrewAI crews integrate cleanly with workflow and chain-based agents you already have. See puppyone for n8n and puppyone for LangChain.
  • Postgres / vector DB / S3 stay in your stack. CrewAI tools that talk to them keep talking to them. puppyone is the file-shaped collaboration layer. See puppyone vs Postgres and puppyone vs Pinecone for where the line sits.

CrewAI describes the crew. puppyone is the room they actually work in.

FAQ

Is this an official CrewAI integration? We ship a Python SDK with CrewAI-compatible tool wrappers. Drop them into your Agent's tools=[...] and configure with the role's Access Point.

Does puppyone replace CrewAI's built-in memory? For long-running, multi-session, multi-crew use — yes, puppyone is the durable layer. CrewAI's built-in memory is fine for short-lived context within a single crew run.

Can I use puppyone with CrewAI's hierarchical process? Yes. The "manager" agent in a hierarchical process can read the workspace to decide who to delegate to next, and each delegated role reads/writes its own scope.

How do I represent "the spec the whole crew is working off" in puppyone? A file (or directory) under /specs/. All roles get read on it; only a designated role (or a human) gets write. Version history shows changes over time.

Can different crews safely share the same workspace? Yes — that's the multi-crew handoff pattern. Use distinct Access Points per crew with non-overlapping write scopes (overlapping read scopes are fine — that's how handoff works).

What happens if two roles try to write the same file at the same time? puppyone uses optimistic concurrency under the hood — last writer wins, but the loser sees the conflict and the version history shows both attempts so you can reconcile. In practice, design crews so two roles don't write the same canonical file simultaneously.
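A generic compare-and-set sketch shows the behaviour described above: each write names the version it was based on, and a write based on a stale version is flagged as a conflict instead of silently clobbering history. This models the idea only; it is not puppyone's actual API or implementation.

```python
# Generic optimistic-concurrency sketch: last writer wins, but the
# stale writer sees the conflict and both attempts stay in history.
class VersionedFile:
    def __init__(self):
        self.versions = []                 # full history, oldest first

    @property
    def head(self) -> int:
        return len(self.versions)

    def write(self, content: str, based_on: int) -> dict:
        conflict = based_on != self.head   # someone wrote since we read
        self.versions.append({"content": content, "based_on": based_on})
        return {"version": self.head, "conflict": conflict}

f = VersionedFile()
base = f.head                              # both roles read at the same version
first  = f.write("writer's draft", based_on=base)
second = f.write("editor's draft", based_on=base)   # stale base
assert not first["conflict"]
assert second["conflict"]                  # the loser sees it immediately
assert len(f.versions) == 2                # nothing is lost; both are kept
```

The design takeaway matches the FAQ answer: conflicts are detectable and reconcilable, but well-designed crews avoid them by not writing the same canonical file from two roles at once.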

Does puppyone work with CrewAI's kickoff_for_each for parallel crews? Yes. Each parallel crew can write to its own /runs/<run-id>/ subtree, then a downstream aggregation crew reads all of them.

I'm doing a 3-agent demo on my laptop. Do I need puppyone? No. For demos and single-machine experiments, local files are fine. You'll want puppyone the moment you have multiple sessions, multiple users, multiple crews handing off, or a need for version history.

TL;DR (again)

CrewAI is the framework that takes "I want a Researcher, a Writer, and an Editor" and turns it into runnable agents. puppyone is the workspace where the Researcher's notes, the Writer's drafts, and the Editor's reviews actually live — durably, version-controlled, scoped per role, visible to humans. Wire them together and your crew stops passing huge strings through the orchestrator and starts collaborating in a real workspace.

Give your crew a real workspace, not a 50KB string in the orchestrator's prompt. Get started.