Choose your Plan

Free

$0 per month

Individuals

Get Started
  • 1 workspace
  • Up to 30 blocks per workspace
  • File uploads up to 5 MB
  • 1 deployed server
  • 50 LLM calls
  • 100 single runs

Pro

$25 per month

Small Teams

Get Started
  • Up to 20 workspaces
  • Unlimited blocks
  • File uploads up to 50 MB
  • Up to 5 deployed servers
  • 200 LLM calls
  • 1,000 single runs

Enterprise

Custom

Large Organizations

Contact Sales
  • Unlimited workspaces
  • File uploads up to 500 MB
  • Unlimited deployed servers
  • Pay as you go

Compare plans

| Feature          | Free ($0/month) | Pro ($25/month) | Enterprise (Custom) |
|------------------|-----------------|-----------------|---------------------|
| Blocks           | 30              | Unlimited       | Unlimited           |
| Workspaces       | 1               | 20              | Unlimited           |
| File upload size | 5 MB            | 50 MB           | 500 MB              |
| Deployed servers | 1               | 5               | Unlimited           |
| LLM calls        | 50              | 500             | Unlimited           |
| Single runs      | 100             | 1,000           | Unlimited           |
| Priority support |                 |                 |                     |

Ask Puppy Questions


What's the difference between Context Base and a Vector Database?

Vector DBs rely on probabilistic similarity, which is fuzzy and prone to hallucinations when dealing with exact numbers, SKUs, or complex logic. Context Base provides deterministic retrieval based on hybrid indexing (Text + Structure), giving you 100% accuracy for mission-critical business context.
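As an illustrative sketch only (not the actual Context Base implementation), the difference can be thought of as exact keyed lookup versus nearest-neighbor similarity. All names and data below are hypothetical:

```python
# Hypothetical sketch: deterministic keyed retrieval vs. probabilistic
# vector similarity. Index names, SKUs, and vectors are illustrative only.
import math

# A structured index maps exact keys (e.g. SKUs) to records.
structured_index = {
    "SKU-1001": {"name": "Widget A", "price": 19.99},
    "SKU-1002": {"name": "Widget B", "price": 24.99},
}

def deterministic_lookup(sku: str):
    # Exact match: either the record exists or it does not.
    return structured_index.get(sku)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# A vector index returns the *nearest* record, even when nothing truly
# matches -- which is where fuzzy, hallucination-prone answers come from.
vector_index = {
    "SKU-1001": [0.9, 0.1],
    "SKU-1002": [0.8, 0.2],
}

def similarity_lookup(query_vec):
    return max(vector_index, key=lambda k: cosine(vector_index[k], query_vec))

print(deterministic_lookup("SKU-1001"))  # the exact record
print(deterministic_lookup("SKU-9999"))  # None: no fuzzy guess
print(similarity_lookup([0.85, 0.15]))   # always returns *something*
```

The deterministic path either returns the precise record or nothing, while the similarity path always produces a nearest match, even for queries that should have no answer.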

Who is this product built for?

It is built for the 'Vibe Coding' generation: FDEs (Forward-Deployed Engineers), Product Engineers, and 'Business Geeks' who are building their own AI tools with Cursor or Claude. If you need a backend for your custom Agent that is more flexible than a SaaS and smarter than a database, this is for you.

Does it support local deployment? My data cannot leave the company.

Yes. We follow a 'Local-First' architecture. You can run the entire kernel via Docker on your own infrastructure. Your proprietary 'Know-How' and sensitive data never leave your VPC, while still empowering your internal Deepwide Research Agents.

What is the pricing model?

We do not charge per 'Seat' like traditional SaaS. We charge based on 'Asset Usage': specifically, the volume of Know-How stored and the API compute used for retrieval and ingestion. This aligns with the 'Infra + Talent' model of the AI era.
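As a rough illustration of how a usage-based bill composes from storage and compute (the rates below are invented placeholders, not actual Context Base prices):

```python
# Hypothetical usage-based bill estimator. The RATE_* values are made-up
# placeholders for illustration; they are not real prices.
RATE_PER_GB_STORED = 0.50      # Know-How storage, per GB per month
RATE_PER_1K_API_CALLS = 0.10   # retrieval/ingestion compute, per 1,000 calls

def estimate_monthly_bill(gb_stored: float, api_calls: int) -> float:
    storage_cost = gb_stored * RATE_PER_GB_STORED
    compute_cost = (api_calls / 1000) * RATE_PER_1K_API_CALLS
    return round(storage_cost + compute_cost, 2)

# Example: 20 GB of stored context and 150,000 API calls in a month.
print(estimate_monthly_bill(20, 150_000))  # 20*0.50 + 150*0.10 = 25.0
```

The point of the sketch is the shape of the formula, storage plus metered compute, rather than any particular rate.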