puppyone for n8n: durable, version-controlled state across workflow runs

23 April 2026 · puppyone team

TL;DR

  • n8n is excellent at orchestration. It triggers, branches, calls APIs, runs AI nodes, hands data between steps. It is not, by itself, a durable state store across runs — once a workflow finishes, intermediate context is gone unless you wrote it somewhere.
  • puppyone gives n8n a persistent, version-controlled file workspace that any node can read from / write to via REST (or Bash for self-hosted n8n). The next run reads what the last run wrote. AI nodes share context across runs. Humans see the same outputs.
  • The pattern: n8n owns the orchestration; puppyone owns the durable, shared state. Stop stuffing JSON into n8n credentials, sticky variables, or random Postgres tables.

What changes when you wire puppyone into n8n

Without puppyone, n8n workflows have a few classic state-storage problems:

  • Run-local context disappears at the end of the run. If the next run wants to know what the previous run did, you have to write it somewhere — usually a custom database table, a Google Sheet, or n8n's built-in static data store.
  • AI Agent / OpenAI / Claude nodes start cold every time, with whatever context you assemble in that run. Long-running research, multi-step plans, accumulated knowledge — gone unless you stored them externally.
  • Sharing data between workflows / between humans + agents requires picking and maintaining a shared store: spreadsheet? S3 bucket? Notion? Each has its own seams.
  • No version history on intermediate state. A bad run silently overwrites yesterday's good output. No rollback.

With puppyone connected via REST (or Bash for self-hosted), every n8n workflow gets:

  • A persistent file workspace that survives across runs, restarts, and deployments.
  • Per-workflow / per-node Access Points so different parts of your n8n setup can have different read/write scopes.
  • Automatic version history — every write from any node is a commit with author (the workflow), timestamp, diff, rollback.
  • One workspace shared with your other agents — Cursor, Claude Code, custom code can all see what n8n produced.
  • SaaS data already as files — instead of every n8n run hitting Notion / Slack / Postgres APIs separately, those connectors run in puppyone and your workflow reads cached, versioned files.

n8n keeps doing what n8n does. puppyone is the durable layer underneath.

What stays exactly the same

  • Your existing n8n workflows. We don't replace any nodes. puppyone is accessed via standard HTTP Request / cURL nodes (or via a thin community node in the medium term).
  • Your triggers, branching, scheduling, error handling — unchanged.
  • Other databases, spreadsheets, S3 buckets keep working. puppyone replaces the ones you used as ad-hoc state stores; the rest stay where they are.
  • n8n's built-in features (credentials, queue mode and its workers) — unchanged.

How to wire it up (the short version)

  1. Spin up a puppyone workspace. Cloud (try.puppyone.ai) or self-hosted Docker.
  2. Create an Access Point for n8n with the path scopes you want (e.g. read /specs, /integrations; read+write /n8n/<flow-name> for each flow's working area).
  3. Add the puppyone REST endpoint as an HTTP Request credential in n8n. Use the Access Point token as the bearer.
  4. In any node where you'd usually write state, swap that write for an HTTP Request to puppyone: POST /v1/files/<path> to write, GET /v1/files/<path> to read.
  5. For AI nodes, fetch context from puppyone before the AI call (so it has yesterday's plan / prior research / current spec) and write the AI's output back into puppyone after.
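The steps above can be sketched as a small pair of helpers for an n8n Code node. The /v1/files endpoint shapes are the ones named in step 4; the base URL, token, and paths are placeholders you'd replace with your own:

```javascript
// Sketch of puppyone read/write request-building for an n8n Code node.
// PUPPYONE_URL is an assumption — use your workspace's actual endpoint.
const PUPPYONE_URL = "https://your-workspace.puppyone.ai";

function fileRequest(method, path, token, body) {
  return {
    url: `${PUPPYONE_URL}/v1/files/${path}`,
    method,
    headers: {
      Authorization: `Bearer ${token}`, // Access Point token from step 3
      ...(body !== undefined ? { "Content-Type": "text/plain" } : {}),
    },
    ...(body !== undefined ? { body } : {}),
  };
}

// In a Code node you could hand these options to this.helpers.httpRequest,
// or mirror the same fields in a plain HTTP Request node.
const write = fileRequest("POST", "n8n/research/2026-04-23.md", "TOKEN", "# Findings\n");
const read = fileRequest("GET", "n8n/research/2026-04-23.md", "TOKEN");
```

The same helper covers both checkpoint writes and the pre-AI context reads described in step 5.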

Workflows that actually get better

1. Long-running research / data-collection flows

Without puppyone: A nightly research workflow accumulates findings into a Google Sheet, with no version history and no per-row provenance. Re-runs overwrite or duplicate.

With puppyone: Each run writes findings to /research/<date>/<topic>.md. Full history. The next morning's analysis flow reads /research/<date-1>/ and continues. Humans and other agents see the same files.
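The path arithmetic for this layout is trivial to keep in a Code node. A minimal sketch, with illustrative helper names:

```javascript
// Dated research layout from above: research/<date>/<topic>.md,
// where the next morning's flow reads the previous day's folder.
function researchPath(isoDate, topic) {
  return `research/${isoDate}/${topic}.md`;
}

function previousDay(isoDate) {
  const d = new Date(`${isoDate}T00:00:00Z`);
  d.setUTCDate(d.getUTCDate() - 1); // UTC math avoids timezone drift
  return d.toISOString().slice(0, 10);
}

// Tonight's run writes here…
const writeTo = researchPath("2026-04-23", "competitor-pricing");
// …and tomorrow's analysis flow lists everything under:
const readFrom = `research/${previousDay("2026-04-24")}/`;
```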

2. Multi-step plans across runs

Without puppyone: A "plan → wait for human approval → execute" flow has to serialise the plan into n8n static data or a Postgres row, with manual schema upkeep.

With puppyone: Plan is written to /plans/<run-id>.md with full markdown. Approval flow reads it. Execution flow reads the same file. Every step is versioned.
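One way to serialise that plan into the markdown file both the approval and execution flows read. The field names are illustrative, not a puppyone schema:

```javascript
// Render a plan object to the markdown stored at plans/<run-id>.md.
function renderPlan(plan) {
  const steps = plan.steps.map((s, i) => `${i + 1}. ${s}`).join("\n");
  return `# Plan ${plan.runId}\n\nStatus: ${plan.status}\n\n${steps}\n`;
}

const md = renderPlan({
  runId: "run-4821",
  status: "awaiting-approval",
  steps: ["Export CRM contacts", "Dedupe against mailing list", "Queue campaign"],
});
// Write `md` to plans/run-4821.md via POST /v1/files/…; the approval flow
// GETs the same path, flips the status, and writes it back (a new version).
```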

3. AI nodes with shared, growing context

Without puppyone: Each AI node gets prompted with whatever your Set nodes assembled this run. No memory of past runs unless you piped it through somewhere.

With puppyone: Pre-AI step pulls /context/agents/<flow-name>/recent.md from puppyone. AI runs with rich context. Post-AI step writes the output back to /context/agents/<flow-name>/<date>.md. Tomorrow's run inherits everything.
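The pre- and post-AI steps amount to a prompt assembler and a path builder. A sketch, assuming the layout above (the prompt wording is illustrative):

```javascript
// Pre-AI: merge prior-run context (fetched from
// context/agents/<flow>/recent.md) into the prompt.
function buildPrompt(priorContext, task) {
  return [
    "You are continuing work from previous runs.",
    "## Context from earlier runs",
    priorContext.trim() || "(none yet)",
    "## Today's task",
    task,
  ].join("\n\n");
}

// Post-AI: where today's output gets written back.
function contextPathFor(flowName, isoDate) {
  return `context/agents/${flowName}/${isoDate}.md`;
}

const prompt = buildPrompt("Yesterday we shortlisted 3 vendors.", "Compare pricing tiers.");
const writeBack = contextPathFor("vendor-research", "2026-04-23");
```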

4. Pipelines feeding humans + other agents

Without puppyone: n8n workflow output sits in a database. Humans don't read databases. Other agents don't have credentials. You build extra surfaces.

With puppyone: Output is a file. Humans read it via the puppyone UI / CLI. Other agents read it via MCP / Bash / REST. Same file, multiple consumers.

5. Replacing "store JSON in a Postgres table" patterns

Without puppyone: You created a table called agent_state with id, key, value JSONB, updated_at. You're maintaining schema, indexes, GC policy.

With puppyone: You write a JSON file to /agent-state/<key>.json. Read it back with one HTTP call. History is automatic. No schema to maintain.
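The whole agent_state table collapses to two tiny functions. A sketch; the /agent-state path shape is the one above, the helper names are ours:

```javascript
// One JSON file per key under agent-state/. Keys are URL-encoded so
// arbitrary strings stay file-safe.
function statePath(key) {
  return `agent-state/${encodeURIComponent(key)}.json`;
}

function serializeState(value) {
  // updated_at travels inside the file; history itself comes from
  // puppyone's automatic versioning, not from this field.
  return JSON.stringify({ value, updated_at: new Date().toISOString() }, null, 2);
}

const path = statePath("crm-sync cursor");
const body = serializeState({ lastSyncedId: 48213 });
// POST `body` to /v1/files/<path>; next run GETs it and JSON.parses it back.
```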

Patterns to avoid

  • Don't put high-frequency, ephemeral state in puppyone. Per-execution scratch (variables, intermediate transforms) belongs inside the n8n run. puppyone is for state you'd want to read tomorrow.
  • Don't make every n8n step write to puppyone. Pick the meaningful checkpoints — start of run, end of major phase, end of run.
  • Don't put production credentials in puppyone files. Use n8n credentials for secrets; use puppyone files for content.
  • Don't grant one Access Point write access to everything. Per-workflow scopes prevent one runaway flow from corrupting another's state.
  • Don't skip version control for "performance". Commits are cheap; the cost of losing yesterday's data when a run misbehaves is much higher.

How this fits with the rest of your stack

  • n8n orchestrates. It calls APIs, branches, runs AI nodes.
  • puppyone stores. It holds the durable files n8n reads and writes.
  • Cursor / Claude Code see the same puppyone workspace via MCP. See puppyone for Cursor and puppyone for Claude Code.
  • Postgres / Supabase / Pinecone stay where they are for structured data and vectors. n8n still writes to them when appropriate; puppyone covers the file-shaped state in between. See puppyone vs Postgres and puppyone vs Pinecone.
  • GitHub still hosts your code; n8n + puppyone live above it. See puppyone vs GitHub.

n8n is the conductor. puppyone is the score. Everyone else is a player.

FAQ

Do I need a custom n8n node for puppyone? No — the puppyone REST API works with n8n's standard HTTP Request node. A community node is on the roadmap to make common operations (read / write / list) one-click, but it isn't required.

Can I use puppyone with n8n self-hosted? Yes. If both are on the same network you can also mount puppyone via Bash or NFS-style access for higher throughput. Cloud n8n + cloud puppyone works the same way over HTTPS.

What about secrets and credentials? Keep secrets in n8n credentials. puppyone is for content, not secrets. (Self-hosted puppyone in your VPC keeps content data sovereign as well.)

Does puppyone replace the n8n static data store? For state you want versioned, shared with other agents, or auditable — yes. For tiny flow-local config (last-run timestamp, etc.) the n8n static store is fine.

Does puppyone work with n8n's built-in AI nodes? Yes. Use HTTP Request nodes around the AI nodes to pull context from / push outputs to puppyone. The AI node itself stays unchanged.

Can I put my entire workflow's output in one big JSON in puppyone? You can; we'd usually recommend many small files instead — easier to version, easier for humans to read, easier for other agents to consume one at a time.

TL;DR (again)

n8n is the best workflow orchestrator out there. puppyone is the durable, version-controlled file layer it doesn't have. Wire them together over REST, stop using Sheets / Postgres / static data as ad-hoc state stores, and your n8n workflows start having actual memory across runs and across teammates.

Give your n8n flows a memory that survives the next run. Get started.