puppyone vs Postgres / Supabase: relational schemas vs file workspaces for AI agents

April 23, 2026 · puppyone team

TL;DR

  • Postgres / Supabase are relational databases. Tables, columns, types, transactions, SQL — built for structured data with predictable shape.
  • puppyone is a file-shaped, version-controlled context layer for AI agents. Files, paths, commits, per-agent permissions, MCP / Bash / REST — built for unstructured context that an LLM actually wants to read.
  • They're not in the same category. Asking "Postgres vs puppyone" is like asking "MySQL vs Dropbox": the right answer is almost always "both, doing different jobs".
  • The pattern in production: Postgres / Supabase store structured app data; puppyone holds the unstructured context (specs, docs, agent outputs, transformed slices of your DB) that agents actually read and write; agents query Postgres through puppyone-mounted SQL connectors when they need a fresh number.

The honest version

Postgres is the best relational database humans have ever built. Supabase took it and made the developer experience excellent. We use both, our customers use both, every serious team we know uses both.

We don't replace either of them. We don't want to.

What we replace is the missing layer above them: the place agents read and write the unstructured context that doesn't fit in a row. Specs. Long-form research. Generated reports. Multi-step plans. Conversation snapshots. Slack threads turned into markdown. Slices of a Postgres query, materialised as a file, that an agent actually reads with cat instead of running 30 selects.

Trying to put that kind of context into Postgres works until it doesn't. We'll explain when it stops working below.

Side by side

| Dimension | Postgres / Supabase | puppyone |
| --- | --- | --- |
| Data shape | Structured rows in typed columns | Unstructured files (markdown, JSON, CSV, anything) |
| Query model | SQL | cat / ls / grep / MCP read_file |
| Schema | Required, enforced, migrated | Filesystem hierarchy, no schema migrations |
| Primary user | Application code, analysts | AI agents and the humans who configure them |
| Permissions | RLS / roles / per-table grants | Per-agent Access Points with scoped read/write paths |
| Versioning | App-level: append-only tables, history columns, CDC, or none | Built-in: every write is a commit with author, diff, rollback |
| Audit trail | Triggers / pgaudit / external tooling | Native commit log per agent identity |
| What an LLM sees | Whatever your code SELECTs and serialises | Files exactly as written |
| Self-hosted | Yes (both) | Yes (open source, Docker) |
| Best at | Transactions, joins, structured app state, dashboards | Long-form context, agent-readable knowledge, multi-agent collaboration, scratch space |

When you should use Postgres / Supabase (and not puppyone)

If your real job is structured application data, use Postgres or Supabase:

  • Anything with rows, joins, transactions, foreign keys, RLS.
  • App backends. Dashboards. Auth. Billing. Anything where consistency and SQL semantics matter.
  • High-volume reads / writes from application code that already speaks SQL.
  • Vector search via pgvector for production RAG, when your embeddings actually belong next to your structured data.

We have nothing to add to that. Postgres has 30 years of head start and an enormous ecosystem. Use it.

When you should use puppyone (and not Postgres / Supabase)

If your real job is giving an AI agent durable context, use puppyone:

  • Agents that need to read unstructured knowledge — specs, docs, prior research, AGENTS.md-style instructions.
  • Multi-agent workflows where each agent should write into its own scoped path with a clean audit trail.
  • Long-running agents whose outputs need version history and rollback (not just a created_at timestamp on a row).
  • Pipelines where agents need to collaborate on the same workspace — researcher writes /research/, planner reads it, executor writes /output/, all without stepping on each other.
  • Anywhere you've found yourself building a documents table with a content TEXT column, an agent_id column, an updated_at column, a parent_id for revisions, and a permissions JSONB blob — congratulations, you're rebuilding puppyone in Postgres. We did this for you.

When you should use both (the most common answer)

In production it almost always looks like this:

  1. Postgres / Supabase stores your app's structured data. Users, orders, sessions, billing, embeddings — the row-shaped reality.
  2. puppyone stores the agent-readable context. Specs, docs, transformed views, agent scratch, generated reports — the file-shaped reality.
  3. puppyone mounts Postgres as a connector. Specific queries / tables / views show up as folders or files inside the agent's workspace, refreshed on a schedule. The agent reads /db/active-customers.csv instead of writing SQL it might get wrong.
  4. For live queries, agents call out to Postgres via MCP / a query tool exposed through puppyone. They get fresh numbers when they need them, and a stable file view when they don't.
  5. Agent outputs (plans, drafts, research) stay in puppyone with full version history. Only the parts you've decided are "stable" get materialised back into Postgres tables for the app to consume.
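A minimal sketch of what step 3 looks like from the agent's side, assuming a mounted slice like the article's /db/active-customers.csv. The file contents and workspace layout here are invented for illustration; in a real setup the connector writes and refreshes the file, and the agent simply reads it:

```python
import csv
import io

# Hypothetical contents of a puppyone-mounted file such as /db/active-customers.csv.
# In practice the SQL connector materialises this on a schedule; we inline a
# sample so the sketch is runnable.
MOUNTED_CSV = """\
customer_id,name,plan
101,Acme Corp,enterprise
102,Initech,starter
"""

def read_mounted_slice(text: str) -> list[dict]:
    """What the agent effectively does: `cat` the file and parse it,
    instead of composing SQL against the live database."""
    return list(csv.DictReader(io.StringIO(text)))

rows = read_mounted_slice(MOUNTED_CSV)
print(len(rows), rows[0]["name"])
```

The point of the pattern: the agent consumes a stable, pre-shaped file, so a malformed SELECT is simply not a failure mode it can hit.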

This split has a clean rule: rows belong in Postgres, narratives belong in puppyone. If you find yourself shoving narratives into Postgres TEXT columns, the seam is in the wrong place.

"Why not just put agent context in Postgres? It can hold strings."

It can. People do. Here's what tends to break:

  • Diffs aren't free in Postgres. You either store full revisions (and pay the storage / write cost), or write CDC, or accept that you've lost history. puppyone gives you Git-style diffs at the storage layer, no app code.
  • Per-agent permissions in Postgres are RLS gymnastics. Doable, but you'll be writing policies for every table for every agent. puppyone does this with one Access Point per agent and a path scope.
  • LLMs don't natively speak SQL. They speak text and files. Every "agent reads from Postgres" project ends up writing a translation layer (function calling, MCP server, query templates). With puppyone, the agent already speaks the interface — it's a filesystem.
  • Postgres rows are not the agent's working memory. Agents draft, scratch, retry, branch, throw away. Doing all of that in Postgres rows produces either an explosion of versioned rows or a polluted production table. puppyone gives them a workspace where scratch is normal and history is automatic.
  • Materialised files are denser context than schemas. "Here is customers-summary.md" is cheaper for a model than "Here is the schema of 14 tables, please write a SELECT". Fewer tokens, fewer mistakes.
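To make the first bullet concrete, here is a rough sketch of what "a diff at the storage layer" buys you, using Python's difflib as a stand-in. This is not puppyone's actual storage format; it only shows the shape of information a per-write commit carries, which you would otherwise have to reconstruct from full revisions stored in Postgres:

```python
import difflib

# Two revisions of the same agent-written file. Conceptually, each write in a
# version-controlled workspace becomes a commit carrying a diff like this one.
rev1 = "## Plan\n- research competitors\n- draft report\n"
rev2 = "## Plan\n- research competitors\n- draft report\n- schedule review\n"

diff = list(difflib.unified_diff(
    rev1.splitlines(keepends=True),
    rev2.splitlines(keepends=True),
    fromfile="plans/q2.md@commit1",
    tofile="plans/q2.md@commit2",
))
print("".join(diff))
```

With full revisions in a `TEXT` column, this diff exists only if your application computes it; with commit-based storage, it is a property of every write.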

If you're already running Postgres or Supabase, you don't need to move anything. You add a connector in puppyone, and structured data starts showing up in your agent's filesystem the way the agent wants it.

"What about Supabase Vector / pgvector? Isn't that the same thing?"

No, but they're complementary.

pgvector and Supabase Vector are great for embedding-backed similarity search over structured columns. That's a real, useful primitive. It is not a place an agent reads files from. Embeddings are how a system finds the right document; a document still has to live somewhere.

In a typical setup:

  • puppyone stores the actual files (canonical content, full text, version history).
  • Embeddings live in pgvector / Supabase Vector / Pinecone / your favourite vector store, computed from puppyone's content.
  • The agent searches via vectors, then cats the file via puppyone — best of both worlds.

(We have a separate page on puppyone vs vector databases that goes deeper on this distinction.)

How the migration tends to go

There's no migration. Postgres / Supabase do not move.

What does happen:

  1. Stay on whatever you're already running. Postgres, Supabase, AlloyDB, Neon — same story.
  2. Add the SQL connector in puppyone. Pick the queries / tables / views you want agents to see as files. They appear under /db/ (or wherever you mount them), refreshed on a schedule.
  3. Move agent-readable narrative out of Postgres TEXT columns into puppyone files. Specs, docs, plans, prompts, AGENTS.md — these stop being database rows and start being version-controlled files.
  4. Give each agent an Access Point. Researcher reads /db/, /specs, /research. Cursor reads /codebase, /db/schema-snapshot/. They write to their own scoped output folders.
  5. Keep production app data in Postgres. Anything that powers your real app — auth, billing, orders, sessions — keeps living in Postgres exactly as before.
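Step 4's per-agent scoping can be modelled as a small policy check. The scope format below is hypothetical (puppyone's real Access Point configuration may look different); it only illustrates the least-privilege idea of one identity with explicit read/write path globs:

```python
from fnmatch import fnmatch

# Hypothetical Access Point scopes, modelled on step 4 above.
ACCESS_POINTS = {
    "researcher": {"read": ["/db/*", "/specs/*", "/research/*"],
                   "write": ["/research/*"]},
    "executor":   {"read": ["/research/*"],
                   "write": ["/output/*"]},
}

def allowed(agent: str, mode: str, path: str) -> bool:
    """True if the agent's Access Point grants this path in this mode."""
    scopes = ACCESS_POINTS.get(agent, {}).get(mode, [])
    return any(fnmatch(path, pattern) for pattern in scopes)

print(allowed("researcher", "read", "/db/active-customers.csv"))  # → True
print(allowed("executor", "write", "/research/notes.md"))         # → False
```

The equivalent in Postgres is an RLS policy per table per agent; at the file layer, one scope list per agent covers everything under its paths.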

After a month, the pattern is: Postgres holds rows that your app and your dashboards depend on; puppyone holds the long-form, versioned, agent-readable context; agents see both, and you stopped trying to make one tool do both jobs.

FAQ

Does puppyone replace my database? No. We don't store your users, your orders, or your billing. Postgres / Supabase do that. We hold the unstructured, agent-readable context layer that doesn't belong in rows.

Can puppyone connect to Supabase? Yes — both via the Postgres connector and the Supabase REST/Storage APIs. Tables, queries, and storage buckets can show up as folders inside an agent's workspace.

Can puppyone do row-level security like RLS? We don't replace RLS for your app traffic. For agents, our equivalent is Access Points: per-agent identity with explicit read/write path scopes. It's the same idea (least privilege per principal) at the file layer.

Should I store agent prompts in Postgres or in puppyone? If they're versioned, hand-written, and read by humans editing them — puppyone (file with diff history). If they're machine-generated, high-throughput, append-only — Postgres.

What about pgvector for agent memory? Use pgvector for similarity search. Use puppyone for the actual file an agent reads after the search. They're not competitors; they're layers.

TL;DR (again, for the people who scrolled)

Postgres / Supabase are databases for structured app data. puppyone is a file workspace for unstructured agent context. Trying to make one do the other's job ends in either an unholy documents table or an unworkable filesystem-as-database. Run both, with a clean seam between rows and narratives.

Mount your Postgres / Supabase as files your agents can actually read.

Get started