read_file, write_file, list_directory, search — backed by a persistent, version-controlled, per-agent-permissioned file workspace. Tomorrow's run reads what today's run wrote. Other agents (Cursor, Claude Code, n8n, your other Python services) read it too.

Most LangChain projects have the same shape: ConversationBufferMemory, a vector DB, a custom Postgres table, a folder of pickled files. puppyone collapses steps 4–6 into "register one Tool / Retriever and use it":
read_file, write_file, list_directory, search — standard BaseTool shape. Documents LangChain already knows how to consume. DocumentLoaders for your LangChain pipelines.

A research chain writes /research/. A writer chain writes /drafts/. A reviewer chain reads /drafts/ and writes /reviews/. No agent sees more than it needs to.

LangChain stays the orchestration framework. puppyone stops you from rebuilding the persistence layer for the fifth time.
Access Points scope each agent: write /scratch/<agent>/, read /specs, read /integrations/, etc.

Tools: read_file, write_file, list_directory, search, version_history — BaseTool instances. Add them to your AgentExecutor / LCEL chain like any other tool.

Retriever: results come back as Document objects with metadata (path, version, source connector, last modified by which agent).

Loaders: NotionLoader, SlackLoader, PostgresLoader, etc., all backed by puppyone's connectors so you don't write integration code yourself.

Without puppyone: A research agent assembles findings into a string in memory. Process exits. Findings gone unless you wrote a custom serializer.
With puppyone: The agent's tool spec includes write_file. It writes findings to /research/<topic>/<date>.md. Tomorrow's agent reads the directory, builds on it. Full version history. Other agents (review, summarisation) read the same files.
Without puppyone: "Plan" chain produces a plan. "Execute" chain takes it. You serialise the plan into the conversation buffer or a JSON file you wrote yourself.
With puppyone: Plan chain writes /plans/<run-id>.md. Execute chain reads it. Reviewer chain reads /plans/<run-id>.md and /executions/<run-id>.md. Each chain has its own Access Point. The "plan format" is just "a markdown file" — humans can read it and edit it too.
Without puppyone: Each agent in your crew has its own memory. Coordinating context between them means message passing through the orchestrator, which becomes the bottleneck.
With puppyone: Agents share a workspace. Researcher writes /research/. Writer reads /research/ and writes /drafts/. Reviewer reads /drafts/ and writes /reviews/. The orchestrator just routes; the state lives in files all agents can see.
Without puppyone: You stand up a vector DB, write loaders for each source (Notion, Slack, Postgres, GitHub issues), maintain ingestion pipelines, dedupe, sync.
With puppyone: Connectors mirror sources into the workspace. The puppyone Retriever does hybrid search across all of them. The DocumentLoaders return Documents with provenance (which source, which path, what version). Your RAG chain doesn't care that some came from Notion and some from Slack — it just retrieves files.
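What "Documents with provenance" buys you can be sketched with a toy keyword retriever — a stand-in, not the real puppyone Retriever; the `Document` shape here just mirrors LangChain's `page_content`/`metadata` convention:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

class WorkspaceRetriever:
    """Toy keyword search over files mirrored from several sources.
    Each hit carries provenance metadata, so the RAG chain never needs
    to know which connector a file came from."""

    def __init__(self, files):
        # files: list of (path, source, version, text) tuples
        self.files = files

    def get_relevant_documents(self, query):
        hits = []
        for path, source, version, text in self.files:
            if query.lower() in text.lower():
                hits.append(Document(text, {
                    "path": path, "source": source, "version": version,
                }))
        return hits
```

The chain consumes a flat list of Documents; the `source` field is there if you ever need to trace an answer back to Notion versus Slack.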
Without puppyone: Agent produces output, you build a separate UI for humans to review and edit, you serialise edits back into the agent's memory.
With puppyone: Agent writes to /drafts/. Human opens puppyone (or any text editor talking to it) and edits. Next agent reads the edited version. The "review UI" is "a folder of files".
Without puppyone: Your Python LangChain agent and your TypeScript LangChain.js agent each have their own memory. Sharing requires a shared DB and you maintaining the schema.
With puppyone: Both connect to the same workspace. Both read and write the same files. Schema is "files in a directory tree". No coordination code.
RunnableLambda outputs that nobody else needs to see can stay in memory. puppyone is for state you'll want to read later — by another run, another agent, or a human.

LangChain stays the framework you reach for. puppyone stops you from rebuilding the boring infrastructure underneath.
Is this an official LangChain integration?
We ship a Python and TypeScript SDK with BaseTool, BaseRetriever, and BaseLoader implementations following LangChain's standard interfaces. Drop them into your existing chains.
Does puppyone replace ConversationBufferMemory / chat memory?
For long-running, multi-session agents — yes, you'll want puppyone-backed memory so it survives the process. For a 5-turn chat that doesn't need to outlive the request, the in-memory class is fine.
Does puppyone replace my vector DB?
For multi-source, evolving, file-shaped context where you'd otherwise be building loaders and ingestion plumbing — yes. For a high-throughput, single-domain semantic search index — no, keep your dedicated vector DB.
Can I use puppyone with LangGraph?
Yes. Each node in the graph can read/write puppyone via the same Tool. Persistent state across graph executions becomes a property of the workspace, not of the graph runtime.
What about LangSmith / tracing?
Calls into puppyone show up as standard tool calls in your traces — file path, operation, latency, return size. No special integration needed.
Does it work with LangChain.js?
Yes. Same SDK shape, same Access Points, same workspace. A Python agent and a TypeScript agent can share the workspace transparently.
How do permissions work in a multi-tenant SaaS where each customer has their own LangChain agents?
Each tenant gets their own puppyone workspace (or their own scoped tree within one), each agent has its own Access Point, no cross-tenant leakage at the storage layer. Your application logic doesn't need to enforce tenancy — the storage does.
What if I'm using LCEL only, not the older agent abstractions?
Same answer — register the tools as RunnableLambda wrappers (or use the SDK helpers) and compose them into your chain.
LangChain gives you the agents. It doesn't give you a place to put their state — that's been an exercise for the user. puppyone fills the gap as a drop-in Tool, Retriever, and DocumentLoader, so your agents stop forgetting and your team stops maintaining ad-hoc persistence layers. Same chains. Same prompts. Real workspace underneath.
Give your LangChain agents a workspace, not a Python dict that dies when the process exits.

Get started