Postgres is the best relational database humans have ever built. Supabase took it and made the developer experience excellent. We use both, our customers use both, every serious team we know uses both.
We don't replace either of them. We don't want to.
What we replace is the missing layer above them: the place agents read and write the unstructured context that doesn't fit in a row. Specs. Long-form research. Generated reports. Multi-step plans. Conversation snapshots. Slack threads turned into markdown. Slices of a Postgres query, materialised as a file, that an agent actually reads with cat instead of running 30 selects.
Trying to put that kind of context into Postgres works until it doesn't. We'll explain when it stops working below.
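The query-slice-as-a-file pattern described above can be sketched in a few lines: run the query once, materialise the result as a CSV, and let the agent read the file. This is illustrative only; sqlite3 stands in for Postgres, and the table and path are made up.

```python
import csv
import sqlite3
from pathlib import Path

def materialise_query(conn, sql, out_path):
    """Run one query and write the result as a CSV file an agent can cat."""
    out = Path(out_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    cur = conn.execute(sql)
    with out.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur)                                 # data rows
    return out

# Toy data: sqlite3 stands in for Postgres here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, active INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Ada", 1), (2, "Bob", 0), (3, "Cleo", 1)])

path = materialise_query(conn,
                         "SELECT id, name FROM customers WHERE active = 1",
                         "db/active-customers.csv")
print(path.read_text())
```

The agent now reads one small file instead of holding schema context and writing its own SQL.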
| Dimension | Postgres / Supabase | puppyone |
|---|---|---|
| Data shape | Structured rows in typed columns | Unstructured files (markdown, JSON, CSV, anything) |
| Query model | SQL | cat / ls / grep / MCP read_file |
| Schema | Required, enforced, migrated | Filesystem hierarchy, no schema migrations |
| Primary user | Application code, analysts | AI agents and the humans who configure them |
| Permissions | RLS / roles / per-table grants | Per-agent Access Points with scoped read/write paths |
| Versioning | App-level: append-only tables, history columns, CDC, or none | Built-in: every write is a commit with author, diff, rollback |
| Audit trail | Triggers / pgaudit / external tooling | Native commit log per agent identity |
| What an LLM sees | Whatever your code SELECTs and serialises | Files exactly as written |
| Self-hosted | Yes (both) | Yes (open source, Docker) |
| Best at | Transactions, joins, structured app state, dashboards | Long-form context, agent-readable knowledge, multi-agent collaboration, scratch space |
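The versioning row in the table above (every write is a commit with author, diff, rollback) can be shown in miniature. This is a toy in-memory model for illustration, not puppyone's actual API.

```python
import difflib
from dataclasses import dataclass, field

@dataclass
class VersionedFile:
    """Toy model: each write is a commit recording author, content, diff."""
    path: str
    revisions: list = field(default_factory=list)  # (author, content, diff)

    def write(self, author: str, content: str) -> None:
        prev = self.revisions[-1][1] if self.revisions else ""
        diff = "".join(difflib.unified_diff(
            prev.splitlines(keepends=True),
            content.splitlines(keepends=True),
            fromfile="before", tofile="after",
        ))
        self.revisions.append((author, content, diff))

    def rollback(self) -> None:
        self.revisions.pop()

    @property
    def content(self) -> str:
        return self.revisions[-1][1] if self.revisions else ""

spec = VersionedFile("specs/checkout.md")
spec.write("planner-agent", "# Checkout spec\nv1\n")
spec.write("reviewer-agent", "# Checkout spec\nv2 (reviewed)\n")
print(spec.revisions[-1][0])  # author of the latest commit: reviewer-agent
spec.rollback()
print(spec.content)           # back to v1
```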
If your real job is structured application data, use Postgres or Supabase:
- pgvector for production RAG, when your embeddings actually belong next to your structured data.

We have nothing to add to that. Postgres has a 30-year head start and an enormous ecosystem. Use it.
If your real job is giving an AI agent durable context, use puppyone:
- Built-in version history: every write is a commit with an author and a diff, not just a created_at timestamp on a row.
- Multi-agent collaboration: the researcher writes /research/, the planner reads it, the executor writes /output/, all without stepping on each other.
- If your schema ends up with a documents table with a content TEXT column, an agent_id column, an updated_at column, a parent_id for revisions, and a permissions JSONB blob — congratulations, you're rebuilding puppyone in Postgres. We did this for you.

In production it almost always looks like this:
- The agent reads /db/active-customers.csv instead of writing SQL it might get wrong.

This split has a clean rule: rows belong in Postgres, narratives belong in puppyone. If you find yourself shoving narratives into Postgres TEXT columns, the seam is in the wrong place.
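The documents-table warning above can be spelled out as actual DDL. A runnable sketch, with sqlite3 standing in for Postgres (so the JSONB column becomes plain TEXT):

```python
import sqlite3

# The anti-pattern schema: a file workspace rebuilt inside a database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE documents (
        id          INTEGER PRIMARY KEY,
        content     TEXT,     -- the file body, minus being a file
        agent_id    TEXT,     -- hand-rolled identity
        updated_at  TEXT,     -- hand-rolled history
        parent_id   INTEGER,  -- hand-rolled revisions
        permissions TEXT      -- hand-rolled access control (JSONB in Postgres)
    )
""")
cols = [row[1] for row in conn.execute("PRAGMA table_info(documents)")]
print(cols)
```

Every column here duplicates something the file layer already gives you for free.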
Can this kind of context just live in Postgres? It can. People do. Here's what tends to break:
- "cat customers-summary.md" is cheaper for a model than "Here is the schema of 14 tables, please write a SELECT". Fewer tokens, fewer mistakes.

If you're already running Postgres or Supabase, you don't need to move anything. You add a connector in puppyone, and structured data starts showing up in your agent's filesystem the way the agent wants it.
Is puppyone competing with pgvector or Supabase Vector? No, but they're complementary.
pgvector and Supabase Vector are great for embedding-backed similarity search over structured columns. That's a real, useful primitive. It is not a place an agent reads files from. Embeddings are how a system finds the right document; a document still has to live somewhere.
In a typical setup:
- Embeddings live in pgvector / Supabase Vector / Pinecone / your favourite vector store, computed from puppyone's content.
- The search returns a path; the agent cats the file via puppyone — best of both worlds.

(We have a separate page on puppyone vs vector databases that goes deeper on this distinction.)
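That layering can be sketched end to end: a similarity index maps a query to a file path, and the winning file is then read whole. The bag-of-words scoring below is a toy stand-in for real embeddings, and the file paths are made up.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts. A real system uses a model + pgvector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical workspace files, indexed by path.
files = {
    "research/churn-analysis.md": "churn drivers for enterprise customers",
    "specs/billing-v2.md": "billing engine rewrite plan and invoicing spec",
}
index = {path: embed(body) for path, body in files.items()}

def retrieve(query):
    q = embed(query)
    best = max(index, key=lambda p: cosine(q, index[p]))
    return best, files[best]  # search finds the path; the file is read whole

path, body = retrieve("why are enterprise customers churning")
print(path)
```

The vector layer only ever returns paths; the document itself still lives in, and is read from, the file layer.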
What does migration look like? There's no migration. Postgres / Supabase do not move.
What does happen:
- Your tables show up as files under /db/ (or wherever you mount them), refreshed on a schedule.
- Long-form TEXT columns move into puppyone files. Specs, docs, plans, prompts, AGENTS.md — these stop being database rows and start being version-controlled files.
- Each agent gets a scoped view: one agent reads /db/, /specs, /research. Cursor reads /codebase, /db/schema-snapshot/. They write to their own scoped output folders.

After a month, the pattern is: Postgres holds rows that your app and your dashboards depend on; puppyone holds the long-form, versioned, agent-readable context; agents see both, and you stopped trying to make one tool do both jobs.
**Does puppyone replace my database?** No. We don't store your users, your orders, or your billing. Postgres / Supabase do that. We hold the unstructured, agent-readable context layer that doesn't belong in rows.
**Can puppyone connect to Supabase?** Yes — both via the Postgres connector and the Supabase REST/Storage APIs. Tables, queries, and storage buckets can show up as folders inside an agent's workspace.
**Can puppyone do row-level security like RLS?** We don't replace RLS for your app traffic. For agents, our equivalent is Access Points: per-agent identity with explicit read/write path scopes. It's the same idea (least privilege per principal) at the file layer.
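The least-privilege idea behind Access Points can be sketched as a path-scope check: each agent identity carries explicit read/write prefixes, and every file operation is tested against them. The agent names and scopes below are illustrative, not puppyone's actual configuration.

```python
from pathlib import PurePosixPath

# Hypothetical per-agent scopes: explicit read/write path prefixes.
SCOPES = {
    "researcher": {"read": ["/db", "/research"], "write": ["/research"]},
    "executor":   {"read": ["/research"],        "write": ["/output"]},
}

def allowed(agent: str, mode: str, path: str) -> bool:
    """True if the agent's scope list for this mode covers the path."""
    target = PurePosixPath(path)
    return any(
        target.is_relative_to(prefix)
        for prefix in SCOPES.get(agent, {}).get(mode, [])
    )

print(allowed("researcher", "read", "/db/active-customers.csv"))  # True
print(allowed("executor", "write", "/research/notes.md"))         # False
```

An unknown agent, or a path outside the listed prefixes, is denied by default; nothing is writable unless a scope says so.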
**Should I store agent prompts in Postgres or in puppyone?** If they're versioned, hand-written, and read by humans editing them — puppyone (file with diff history). If they're machine-generated, high-throughput, append-only — Postgres.
**What about pgvector for agent memory?** Use pgvector for similarity search. Use puppyone for the actual file an agent reads after the search. They're not competitors; they're layers.
Postgres / Supabase are databases for structured app data. puppyone is a file workspace for unstructured agent context. Trying to make one do the other's job ends in either an unholy documents table or an unworkable filesystem-as-database. Run both, with a clean seam between rows and narratives.