/research/, the Writer reads /research/ and writes /drafts/, the Editor reads /drafts/ and writes /reviews/. Every write is a commit. Every role only sees what it should. Tomorrow's crew picks up where today's left off.

CrewAI projects almost always evolve through the same four stages:
Roles dump their output to files like ./tmp/research.md so the next role can read them. Works on one machine. Dies when you go to production.

With puppyone wired in, your CrewAI crew gets:
The Researcher gets read+write on /research/, read on /specs/. The Writer gets read on /research/, read+write on /drafts/. The Editor gets read on /drafts/, read+write on /reviews/. The orchestrator doesn't enforce this — the workspace does.

Every draft is a real, addressable file, like /drafts/post-2026-04-23.md, that anyone with the right Access Point can open.

CrewAI keeps its job of expressing the collaboration. puppyone takes the job of being the place the collaboration accumulates.
Keep your Agent, Task, Crew, and Process definitions. We don't replace any of them. Scope each role:

- Researcher: read+write on /research/, read on /specs/ and /integrations/.
- Writer: read on /research/ and /specs/, read+write on /drafts/.
- Editor: read on /drafts/, read+write on /reviews/.

Drop the puppyone tools into each Agent's tools=[...], configured with that role's Access Point. Point task descriptions at paths ("Write your findings to /research/<topic>.md. Read the spec from /specs/<feature>.md first."). Use run_id so each crew run gets its own subtree (/runs/<run-id>/research/...), making history per-run trivial.

That's the whole integration. No glue code, no schema, no ACL plumbing.
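The per-role scoping above can be sketched in a few lines. This is an illustrative model of the idea, not the actual puppyone SDK — `Workspace` and `AccessPoint` are invented stand-ins, with a dict playing the storage layer:

```python
class Workspace:
    """Shared storage standing in for the puppyone workspace (illustrative)."""
    def __init__(self):
        self.files = {}

class AccessPoint:
    """Enforces one role's read/write scopes at the storage layer."""
    def __init__(self, ws, read=(), write=()):
        self.ws = ws
        self.read_scopes = tuple(read)
        self.write_scopes = tuple(write)

    def _allowed(self, path, scopes):
        return any(path.startswith(scope) for scope in scopes)

    def read(self, path):
        # A read+write scope also grants read.
        if not self._allowed(path, self.read_scopes + self.write_scopes):
            raise PermissionError(f"no read access to {path}")
        return self.ws.files[path]

    def write(self, path, text):
        if not self._allowed(path, self.write_scopes):
            raise PermissionError(f"no write access to {path}")
        self.ws.files[path] = text

ws = Workspace()
researcher = AccessPoint(ws, read=("/specs/", "/integrations/"), write=("/research/",))
writer = AccessPoint(ws, read=("/research/", "/specs/"), write=("/drafts/",))

researcher.write("/research/topic.md", "findings")
print(writer.read("/research/topic.md"))   # the Writer can read research...
# ...but writer.write("/research/x.md", "...") raises PermissionError
```

The point of the sketch: the check lives next to storage, not in orchestrator code, so a misbehaving role simply cannot write outside its scope.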
Without puppyone: Researcher returns a 4KB string. Writer's prompt now contains 4KB of input. Editor's prompt now contains the Writer's full draft plus the original research. By the third role, you're paying for all that context on every model call.
With puppyone: Researcher writes /research/topic.md and returns the path. Writer reads the file (only when it needs to), writes /drafts/post.md, returns the path. Editor reads what it needs. Orchestrator passes paths, not payloads.
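The paths-not-payloads pattern can be shown in miniature. A dict stands in for the workspace; the names and file paths are invented for the example:

```python
workspace = {}

def researcher(topic):
    path = f"/research/{topic}.md"
    workspace[path] = "findings... " * 300   # large content stays in storage
    return path                              # only the path travels

def writer(research_path):
    research = workspace[research_path]      # read on demand, when needed
    draft_path = "/drafts/post.md"
    workspace[draft_path] = f"Draft built from {len(research)} chars of research"
    return draft_path

draft = writer(researcher("agent-memory"))
print(draft)   # → /drafts/post.md
```

What the orchestrator shuttles between steps is a short path string, regardless of how large the underlying research or draft grows.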
Without puppyone: Plan agent produces a plan. Execute agent acts. Verifier finds a bug. You re-run the whole thing because you don't have a clean snapshot of what the plan was when execution started.
With puppyone: Plan agent writes /runs/<run-id>/plan.md. Execute agent reads it, writes /runs/<run-id>/execution.md. Verifier reads both. If something's wrong, you can roll back, re-run just the affected step, and inspect the diff between attempts.
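A minimal sketch of the per-run snapshot idea — each run freezes its plan under its own subtree, so "what the plan said when execution started" is always inspectable. Run IDs and paths are invented for the example:

```python
workspace = {}

def run_crew(run_id, plan_text):
    base = f"/runs/{run_id}"
    workspace[f"{base}/plan.md"] = plan_text                  # frozen for this run
    workspace[f"{base}/execution.md"] = f"executed: {plan_text}"
    return base

run_crew("run-001", "ship feature X")
run_crew("run-002", "ship feature X, with the fix")

# A verifier (or a human) can compare what each attempt's plan actually said:
print(workspace["/runs/run-001/plan.md"])
print(workspace["/runs/run-002/plan.md"])
```

Because run-001's subtree is untouched by run-002, re-running a single step against the old plan, or diffing the two attempts, needs no extra machinery.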
Without puppyone: A crew that's supposed to run "for a week, with humans checking in daily" can't do that — there's no place for state to live between runs.
With puppyone: State lives in the workspace. Crew checkpoints into files. Restart the process whenever, the next run reads what the last run wrote, continues. Humans review files between sessions.
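The checkpoint-and-resume loop looks like this in miniature. The checkpoint path and session function are invented for the example; the dict stands in for the durable workspace:

```python
import json

workspace = {}   # durable store: survives between "sessions"

def run_session(workspace):
    """One crew session: load the last checkpoint, do a step, checkpoint again."""
    raw = workspace.get("/state/checkpoint.json")
    state = json.loads(raw) if raw else {"step": 0}
    state["step"] += 1                                  # the session's "work"
    workspace["/state/checkpoint.json"] = json.dumps(state)
    return state["step"]

run_session(workspace)          # day 1
run_session(workspace)          # day 2, a fresh process, same workspace
print(run_session(workspace))   # → 3
```

Nothing about the process has to stay alive between sessions; the next run simply reads what the last one wrote.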
Without puppyone: Crew A produces something Crew B needs. You build a serialisation format, a queue, a handoff agent.
With puppyone: Crew A writes to /handoff/. Crew B reads from /handoff/. The "format" is "markdown files with whatever structure makes sense". Each crew has the appropriate Access Point.
Without puppyone: Crew produces output. You build a UI for humans to review. You serialise edits back into the next crew's prompt.
With puppyone: Crew writes /drafts/. Humans open puppyone (or any text editor talking to it) and edit. Next crew reads the edited version. No UI to build. The version history shows exactly what the human changed.
Without puppyone: Each customer's crew has access to "all the agent state" unless you carefully partition. Easy to build a leak.
With puppyone: Each tenant gets a workspace (or a scoped subtree). Each role has an Access Point scoped to that tenant. Storage-level isolation. Your application code doesn't enforce tenancy — the storage does.
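One way to picture storage-level tenancy: every read and write is confined to a tenant's subtree by construction, so application code has no opportunity to leak. This is an illustrative sketch, not the puppyone API:

```python
workspace = {}

def tenant_scope(tenant_id):
    """Return read/write functions confined to one tenant's subtree."""
    prefix = f"/tenants/{tenant_id}"

    def write(path, text):
        workspace[f"{prefix}{path}"] = text

    def read(path):
        # The prefix is baked in: a caller cannot name another tenant's tree.
        return workspace[f"{prefix}{path}"]

    return read, write

read_acme, write_acme = tenant_scope("acme")
read_globex, write_globex = tenant_scope("globex")

write_acme("/research/notes.md", "acme-only data")
write_globex("/research/notes.md", "globex-only data")
print(read_acme("/research/notes.md"))   # → acme-only data
```

Both tenants use the same logical path, but the scoped handles resolve it to disjoint subtrees.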
The Researcher writing to /research/ is fine; the Researcher writing directly to /specs/ is asking for trouble. Have a review step (human or another role) promote into canonical paths.

Use the run_id partition for crews you'll rerun a lot. A flat namespace for a daily-recurring crew gets messy fast. Per-run subtrees keep history clean.

CrewAI describes the crew. puppyone is the room they actually work in.
Is this an official CrewAI integration?
We ship a Python SDK with CrewAI-compatible tool wrappers. Drop them into your Agent's tools=[...] and configure with the role's Access Point.
Does puppyone replace CrewAI's built-in memory?
For long-running, multi-session, multi-crew use — yes, puppyone is the durable layer. CrewAI's built-in memory is fine for short-lived context within a single crew run.
Can I use puppyone with CrewAI's hierarchical process?
Yes. The "manager" agent in a hierarchical process can read the workspace to decide who to delegate to next, and each delegated role reads/writes its own scope.
How do I represent "the spec the whole crew is working off" in puppyone?
A file (or directory) under /specs/. All roles get read on it; only a designated role (or a human) gets write. Version history shows changes over time.
Can different crews safely share the same workspace?
Yes — that's the multi-crew handoff pattern. Use distinct Access Points per crew with non-overlapping write scopes (overlapping read scopes are fine — that's how handoff works).
What happens if two roles try to write the same file at the same time?
puppyone uses optimistic concurrency under the hood — last writer wins, but the loser sees the conflict and the version history shows both attempts so you can reconcile. In practice, design crews so two roles don't write the same canonical file simultaneously.
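The described last-writer-wins-with-visible-conflict behaviour can be modeled with version numbers: each write declares which version it was based on, and a mismatch flags the conflict. This illustrates the behaviour, not puppyone's actual API:

```python
class VersionedFile:
    """Toy model: every write is recorded; stale writes are flagged."""
    def __init__(self):
        self.version = 0
        self.history = []   # (version, text, had_conflict)

    def write(self, text, based_on):
        conflict = based_on != self.version   # someone else wrote in between
        self.version += 1                     # last writer still wins
        self.history.append((self.version, text, conflict))
        return conflict

f = VersionedFile()
writer_saw = f.version    # both roles read version 0
editor_saw = f.version

f.write("writer's edit", based_on=writer_saw)              # clean write → v1
conflict = f.write("editor's edit", based_on=editor_saw)   # based on v0 → stale
print(conflict)           # → True: the loser sees the conflict
print(len(f.history))     # → 2: both attempts survive in history
```

Both attempts remain in the history, so reconciling is a diff rather than a forensic exercise.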
Does puppyone work with CrewAI's kickoff_for_each for parallel crews?
Yes. Each parallel crew can write to its own /runs/<run-id>/ subtree, then a downstream aggregation crew reads all of them.
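The fan-out/fan-in shape is simple when each parallel run owns a subtree. A sketch with invented run IDs and paths, using a dict as the workspace:

```python
workspace = {}

def run_one(run_id, topic):
    """One parallel crew run: write under its own /runs/<run-id>/ subtree."""
    workspace[f"/runs/{run_id}/research/{topic}.md"] = f"findings on {topic}"

for i, topic in enumerate(["caching", "queues", "retries"]):
    run_one(f"run-{i}", topic)

# A downstream aggregation crew reads every run's output:
all_findings = [text for path, text in sorted(workspace.items())
                if path.startswith("/runs/")]
print(len(all_findings))   # → 3
```

Because the parallel runs write to disjoint subtrees, they never contend; the aggregation step just enumerates /runs/.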
I'm doing a 3-agent demo on my laptop. Do I need puppyone?
No. For demos and single-machine experiments, local files are fine. You'll want puppyone the moment you have multiple sessions, multiple users, multiple crews handing off, or a need for version history.
CrewAI is the framework that takes "I want a Researcher, a Writer, and an Editor" and turns it into runnable agents. puppyone is the workspace where the Researcher's notes, the Writer's drafts, and the Editor's reviews actually live — durably, version-controlled, scoped per role, visible to humans. Wire them together and your crew stops passing huge strings through the orchestrator and starts collaborating in a real workspace.
Give your crew a real workspace, not a 50KB string in the orchestrator's prompt.

Get started