AI Agent Infrastructure Needs an Agents Filesystem and a Versioned Control Filesystem

April 21, 2026
Lin Ivan

Key takeaways

  • The next bottleneck in AI agent infrastructure is not only model quality. It is how agents receive context, access tools, write files, and recover from mistakes.
  • Cloudflare's internal AI engineering stack shows the shape of production agent systems: access control, MCP servers, sandboxed execution, repository instructions, code review, and organizational knowledge.
  • Most teams do not need to rebuild that entire platform at once. They do need one missing layer early: an Agents Filesystem that turns scattered company context into governed files agents can read and write.
  • A Versioned Control Filesystem matters because agents make changes. Every write should have identity, diff, history, rollback, and audit trails instead of disappearing into an ephemeral session.
  • puppyone is one implementation path for that layer: a file workspace for AI agents with SaaS ingestion, MCP access, scoped permissions, sandbox mounting, version history, and audit logs.

The Cloudflare lesson: AI agents become useful when the stack around them exists

Cloudflare recently published an engineering post, "The AI engineering stack we built internally," about the AI stack it built for its own R&D organization. The article is interesting because it does not treat AI adoption as a prompt-writing problem. It treats it as infrastructure.

The pattern is clear. Cloudflare wired together authentication, centralized LLM routing, MCP servers, sandboxed execution, long-running agent sessions, a service catalog, AGENTS.md files, and AI code review. Each part solves a different production failure mode:

Production problem -> Infrastructure response

  • Engineers use different AI clients and providers -> Centralized routing, provider policy, cost tracking
  • Agents need internal tools -> MCP servers and a portal layer
  • Agents need to execute code safely -> Sandboxed runtime environments
  • Agents need repo-specific instructions -> AGENTS.md and engineering standards
  • Agents need organizational knowledge -> Service catalog and dependency graph
  • Agent output needs review -> AI code review tied to standards

That is what production AI agent infrastructure looks like. It is not one chatbot with a longer context window. It is an operating environment where agents can find the right context, use the right tools, act inside the right boundaries, and leave a trace an engineering team can review.

Most companies will not build the same stack in the same order. But the architectural lesson transfers directly: once agents touch real work, you need a governed substrate for context and files.

Why "just add MCP" is not enough

MCP is a useful protocol layer. It gives agents a standardized way to discover tools and call external systems. The official Model Context Protocol documentation frames MCP as a way to connect models with tools and data sources through a common interface.

That helps, but it does not solve the whole infrastructure problem.

When every tool is exposed as a live API call, agents can still run into predictable issues:

  • They burn context on tool schemas before doing the work.
  • They have no durable place to store intermediate plans, extracted facts, transformed data, or generated files.
  • They cannot easily diff what changed between two runs.
  • They may need separate connectors for every SaaS system, repo, database, and document store.
  • Their outputs often live inside a chat thread, a sandbox temp directory, or an ungoverned local folder.

MCP is the access protocol. It is not automatically the memory, storage, permission, audit, or recovery layer.

That distinction matters for builders. You can have a beautiful MCP setup and still lose agent work when a session ends. You can have ten useful tools and still lack a shared workspace where multiple agents collaborate safely. You can connect agents to GitHub, Notion, Google Drive, Slack, and internal databases, and still not have a coherent context model.

This is where the idea of an Agents Filesystem becomes practical.


What an Agents Filesystem means

An Agents Filesystem is a file workspace designed for AI agents rather than humans.

Traditional file systems assume a human is nearby to decide what is safe, what is current, what should be committed, and what should be ignored. Agents do not work like that. They read whatever is mounted, write wherever they are allowed, and often treat files as part of their reasoning loop.

A production Agents Filesystem should provide five capabilities.

Capability -> Why agents need it

  • Unified context -> Agents should access SaaS data, repo docs, tickets, specs, and prior outputs through one navigable file space.
  • Scoped access -> Each agent should see only the paths it is allowed to read or write. Hidden files should not become accidental prompt context.
  • Native agent interfaces -> Agents should access context through Bash, MCP, REST, CLI, or sandbox mounts depending on how they run.
  • Durable writes -> Plans, scratch files, reports, generated code, and transformed data should survive beyond the agent session.
  • Operational traces -> Reads, writes, diffs, and permission failures should be visible for debugging and governance.
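The scoped-access capability can be made concrete with a small sketch. This is an illustrative default-deny permission check, not any real product API; the names (`ACCESS_POINTS`, `check_access`) and the glob-based rule shape are assumptions for the example.

```python
# Minimal sketch of path-scoped access control for agents.
# ACCESS_POINTS and check_access are illustrative names, not a real API.
from fnmatch import fnmatch

# Each agent identity maps to a list of (path pattern, allowed modes) rules.
# Note: fnmatch's "*" also matches "/" so patterns here cover nested paths.
ACCESS_POINTS = {
    "research-agent": [("context/notion/*", {"read"}),
                       ("workspace/research/*", {"read", "write"})],
    "deploy-agent":   [("workspace/plans/*.md", {"read"})],
}

def check_access(agent: str, path: str, mode: str) -> bool:
    """Return True if `agent` may perform `mode` ('read' or 'write') on `path`."""
    for pattern, modes in ACCESS_POINTS.get(agent, []):
        if fnmatch(path, pattern) and mode in modes:
            return True
    return False  # deny by default: unlisted paths never leak into context
```

The important design choice is the last line: an agent filesystem should deny by default, so a path that was never granted cannot become accidental prompt context.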

This is different from a generic shared folder. Dropbox, S3, Git, local disk, and vector databases each solve part of the problem, but none of them are built around agent identity, per-agent permissions, MCP-native distribution, sandboxed file mounting, and automatic version history in one place.

For a deeper treatment of file-level safety, see our guide to filesystem design for AI agents. The infrastructure question is larger: how does the filesystem become the context backbone for the whole agent stack?

Why version control has to move into the filesystem layer

Agents do not only retrieve context. They change it.

They summarize meetings, generate migration plans, rewrite documentation, transform datasets, open pull requests, create reports, and update task artifacts. Once agents write files, the system needs a recovery model.

This is why the phrase Versioned Control Filesystem is useful, even if the more natural search phrase is "version-controlled filesystem." The point is simple: version control cannot be an optional habit the agent remembers to perform. It has to be part of the storage layer.

In a Versioned Control Filesystem for agents:

  • every write creates a versioned change
  • every change is attributed to an agent, user, access point, or workflow
  • every diff is inspectable
  • every folder can be rolled back to a known state
  • every risky run can start from a checkpoint
  • every permission boundary can be audited

That creates a different operating model. If an agent deletes a folder, overwrites a policy file, or corrupts a generated dataset, the team can inspect and revert. If two agents work in the same context space, the system can distinguish their changes. If a reviewer wants to know why a decision file changed, the history is part of the filesystem, not buried in logs.

Git already taught engineering teams the value of diffs, branches, and rollbacks. Agent infrastructure needs similar semantics for context, not only source code.

A reference architecture for AI agent infrastructure

A practical AI agent infrastructure stack can be designed in layers:

User / workflow request
  -> Agent client or orchestration layer
  -> Model routing and policy layer
  -> MCP and tool access layer
  -> Agents Filesystem
       - synced SaaS context
       - repo and project context
       - generated artifacts
       - per-agent permissions
       - version history and audit logs
  -> Sandbox execution layer
  -> Review, approval, and deployment layer

The Agents Filesystem sits in the middle because it is where context becomes operational. It is upstream of execution because agents need files before they act. It is downstream of connectors because raw SaaS data needs to be normalized before agents can use it. It is adjacent to MCP because the same context should be available through protocol-native clients. It is connected to review because agent output needs a durable artifact to inspect.

Without that layer, teams tend to glue together fragments:

  • one folder for local agent runs
  • one vector index for retrieval
  • one MCP server for each tool
  • one wiki for process docs
  • one object bucket for outputs
  • one spreadsheet for audit notes

That can work for demos. It becomes brittle when multiple agents, teams, tools, and permission boundaries enter the system.

Where puppyone fits

puppyone is built as a file workspace for multi-agent collaboration. In the language of this article, it provides the Agents Filesystem and Versioned Control Filesystem layer inside AI agent infrastructure.

The core idea is straightforward: connect the context your agents need, represent it as files, govern access by path and agent identity, and expose the workspace through the interfaces agents already use.

In practice, that means:

  • SaaS and data connectors bring sources like Notion, GitHub, Google Drive, Gmail, Airtable, Linear, and databases into a unified context space.
  • Context is represented as agent-friendly files such as Markdown, JSON, folders, and raw assets.
  • Access Points scope what each agent can read or write.
  • MCP endpoints let Claude, Cursor, and other MCP-compatible clients access the same governed workspace.
  • Sandbox mounts give execution environments only the files they are authorized to see.
  • Version history and rollback make agent writes inspectable and recoverable.
  • Audit logs show who or what touched which file and when.
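The sandbox-mount idea in the list above can be sketched as follows. This is a hypothetical example, not puppyone's actual API: the workspace contents, rule shape, and `mount_view` function are all invented for illustration. The point is that the execution environment receives a filtered view, so unauthorized files are simply absent rather than merely forbidden.

```python
# Illustrative sketch: build the file view a sandbox would mount for a given
# agent. The workspace data, RULES shape, and mount_view are hypothetical.
from fnmatch import fnmatch

WORKSPACE = {
    "context/linear/tickets.json": '{"open": 3}',
    "context/gmail/inbox.md": "private correspondence",
    "workspace/report.md": "# Draft report",
}

# Per-agent mount rules: only matching paths appear inside the sandbox.
RULES = {"report-agent": ["context/linear/*", "workspace/*"]}

def mount_view(agent: str) -> dict:
    """Return only the files this agent's sandbox is authorized to see."""
    patterns = RULES.get(agent, [])
    return {path: body for path, body in WORKSPACE.items()
            if any(fnmatch(path, p) for p in patterns)}
```

Filtering at mount time is a stronger guarantee than checking at read time: code running inside the sandbox cannot even enumerate paths it was never granted.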

This is not a replacement for every layer Cloudflare described. You still need model policy, runtime isolation, CI, review workflows, and organization-specific standards. But an Agents Filesystem makes those layers easier to compose because the context and artifacts are no longer scattered.

For adjacent implementation patterns, read MCP in Agentic AI and AI audit best practices for secure agent deployments.

How to evaluate your own stack

If you are building internal AI agent infrastructure, do not start with the vendor diagram. Start with the failure modes you need to prevent.

Use this checklist:

Question -> If the answer is no

  • Can every agent find the context it needs without manual copy-paste? -> You need context ingestion and normalization.
  • Can each agent access only the files and tools it is allowed to use? -> You need scoped permissions and policy enforcement.
  • Can agent-generated files survive after the session ends? -> You need durable artifact storage.
  • Can you inspect what changed between two agent runs? -> You need versioned filesystem history.
  • Can you roll back an incorrect agent write? -> You need checkpoints and restore semantics.
  • Can your sandbox mount only authorized context? -> You need filesystem-aware access control.
  • Can humans review agent output as artifacts rather than chat transcripts? -> You need file-based workflows and audit trails.

This checklist keeps the conversation grounded. It turns "we need AI agents" into "we need a system where agents can safely read, write, execute, and be reviewed."

That is the real search intent behind AI agent infrastructure.

The strategic shift: context becomes operational

For years, enterprise AI architecture treated context as retrieval material. Put documents in a vector database, retrieve chunks, add them to a prompt, produce an answer.

Agents push beyond that model. They do not only answer questions. They operate on context.

They need to inspect folders, write plans, create intermediate files, compare versions, call tools, run code, and hand off artifacts to other agents or humans. Context becomes part of the workflow state. That is why AI agent infrastructure needs a file layer, not only a retrieval layer.

The Cloudflare post is useful because it shows what happens when a serious engineering organization treats agents as infrastructure. The puppyone lesson is narrower and immediately actionable: before rebuilding the whole stack, give agents a filesystem that was designed for them.

An Agents Filesystem gives agents a place to work. A Versioned Control Filesystem gives teams a way to trust, inspect, and recover that work.

That is the layer most teams should build first.


FAQs

Q1: Is an Agents Filesystem the same thing as MCP?

No. MCP is a protocol for connecting agents to tools and data sources. An Agents Filesystem is the storage and context layer where files, artifacts, permissions, versions, and audit trails live. The two work well together: MCP can expose the filesystem to compatible clients, while the filesystem provides durable state and governed context.

Q2: Is this different from a vector database?

Yes. A vector database is useful for semantic retrieval. It usually does not give agents a normal read/write file workspace, path-level permissions, rollback, file diffs, sandbox mounts, or Git-style collaboration semantics. Many production stacks will use both: retrieval for finding relevant knowledge and a filesystem for durable agent work.

Q3: Why call it a Versioned Control Filesystem?

The phrase points to a specific need: version control should be native to the filesystem agents use. Agents should not have to remember to commit. Every write should create recoverable history automatically, with identity, timestamp, diff, and rollback.

Q4: When should a team adopt this layer?

Adopt it when agents move from isolated experiments to shared workflows: multiple agents, multiple data sources, sensitive files, long-running tasks, reusable artifacts, or human review. If agent output matters after the chat window closes, you need durable and versioned context.