
Netlify Database, in Practice: Why a Postgres Branch per Deploy Preview Changes the Workflow


Netlify shipped Netlify Database last week as a generally available product. The headline feature is that every Deploy Preview gets its own isolated Postgres branch, automatically, with a copy of production data taken at the moment the preview was created. That sounds technical and it is. The reason it matters is more interesting than the spec sheet.

I want to walk through what changes in your actual day-to-day when this is wired up, with code examples for the common cases. Then I'll talk about where I think this fits and where it doesn't.

The problem this solves, said plainly

Every team that uses a database long enough lands on the same problem. You've got production. You've got "staging" or "QA." Staging has been in a half-broken state since November because three engineers all pushed half-finished work to it before lunch on different Tuesdays. Nobody trusts staging. Half the team is terrified to run a migration on production because they can't preview it against real-shaped data.

The conventional answer is "just spin up a database per branch." Most teams don't do this because the operational cost of provisioning, seeding, and tearing down a Postgres instance per pull request outweighs the bug fix the pull request was meant to ship. Some teams build it themselves. The ones who do tend to write a 2,000-word internal doc about how the database-per-branch system works and which of three engineers can debug it.

Netlify Database makes that the default. You don't write the system. The platform writes it. Every Deploy Preview gets a branch. The branch starts as a snapshot of prod data. Schema migrations run on the branch when the preview deploys. Production stays untouched until you merge.

That's it. The reason this is a workflow shift, not just a feature, is that the cognitive load of thinking about databases collapses from "what state is staging in today" down to "the branch I'm working in is the branch I'm working in."

The minimal setup

Per the Netlify Database docs, provisioning a database takes one dependency added to your project's package.json:

{
  "dependencies": {
    "@netlify/database": "^1.0.0",
    "drizzle-orm": "^0.30.0"
  },
  "devDependencies": {
    "drizzle-kit": "^0.20.0"
  }
}

The @netlify/database package is the optimized driver. It picks the right connection method for whatever context your code is running in (a serverless function, an edge function, a build step, an agent run). You import it like any other Node module:

// netlify/functions/articles.mjs
import { neon } from '@netlify/database';

export default async (request) => {
  const sql = neon();
  const articles = await sql`SELECT id, title, slug FROM articles WHERE published = true ORDER BY published_at DESC LIMIT 20`;
  return Response.json({ articles });
};

That's the read path. No connection-pool config, no env-var wiring, no separate database-URL handling between local and production. The driver picks up the right connection string from the Netlify runtime.
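For symmetry, here is a sketch of the write path. In a deployed function you would obtain `sql` from `neon()` exactly as in the read example; below the client is passed in as a parameter (my construction, not the docs') so the query logic is visible in isolation, and `createArticle` is a hypothetical helper name:

```javascript
// Sketch: insert an article using the tagged-template client.
// In a Netlify function, `sql` would come from neon() as in the
// read example above; it is a parameter here for clarity.
// Values in the template are sent as bound parameters, not
// interpolated strings, so this is safe against SQL injection.
async function createArticle(sql, { title, slug, body }) {
  const rows = await sql`
    INSERT INTO articles (title, slug, body)
    VALUES (${title}, ${slug}, ${body})
    RETURNING id, slug
  `;
  return rows[0]; // the newly created row's id and slug
}
```

The same tagged-template shape covers updates and deletes; the only moving part is which SQL string wraps the bound values.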

A schema migration, end to end

The moment of truth for any database product is what happens when you change the schema. Netlify Database uses migration files in your repository. Drizzle is the recommended ORM but the migration system is independent of it; raw SQL files work too.

Here's a tiny example. You start with one table:

-- migrations/0001_init.sql
CREATE TABLE articles (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  body TEXT NOT NULL,
  published BOOLEAN DEFAULT false,
  published_at TIMESTAMP
);

You commit, push, and let Netlify deploy. Production runs the migration once on first deploy. Done.

A week later you want to add comments. You write the migration:

-- migrations/0002_comments.sql
CREATE TABLE comments (
  id SERIAL PRIMARY KEY,
  article_id INTEGER NOT NULL REFERENCES articles(id) ON DELETE CASCADE,
  author_email TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_comments_article ON comments(article_id);

You push to a feature branch. Netlify creates a Deploy Preview. The preview gets its own database branch with a snapshot of production's article data. Migration 0002 runs against the branch. The new comments table now exists in the preview. Production's database is untouched. Production has no comments table yet. Production keeps serving reads from production data with the production schema.

You test the preview. You write a comment. You delete a comment. You decide the column should be author_name, not author_email. You amend migration 0002 and push. The preview's database branch is reset and the migration re-runs. Production still has no comments table.

When you merge to main, the production deploy runs migrations 0001 (already applied, so skipped) and 0002 (the amended version) against production. The comments table now exists in production with the schema you confirmed in the preview.

The migration never went through a "does this work" test against an out-of-sync staging database. The test was the preview deploy. The test ran on a copy of real production data. The same SQL ran in production a few minutes later.

The agent-collaboration case

This is the part Netlify is leaning into hardest in their announcement, and I think they're right to. If you're using an Agent Runner (Netlify supports Claude Code, Codex, and Gemini at this point) to build a feature, the agent gets its own database branch by default.

Concretely, what changes:

  • The agent can run any SQL it wants against its branch. DROP TABLE articles is fine. The agent's branch is sandboxed. Production has zero exposure.
  • The agent doesn't have credentials to alter production. Even if the model decides production should have a different schema, it can't actually issue the change to prod. The migration files it generates have to be reviewed and merged before production touches them.
  • Multiple agent runs in parallel each get their own branch. Two agents working on two separate features don't step on each other.

That last point is the one that's missing from most existing AI-coding setups. If you give an agent a real database to work against, you spend the rest of the session worrying about what it might do to prod. If you give it a fake mock database, you spend the rest of the session debugging where the mock and the real schema disagreed. The branch-per-agent model removes both anxieties.

What the docs gloss past

A few things that are worth knowing before you adopt this:

Branch creation has a cost. A new branch with a snapshot of production data is fast (seconds to single-digit minutes for typical database sizes) but not free. If your production database is 100 GB, every preview that creates a branch is allocating up to 100 GB of branch state. Netlify scales inactive branches to zero, which limits the cost, but the cost exists. Plan for it.

Migrations have to be forward-only and idempotent. You should not be writing migrations that, when re-run, do something destructive. The migration system applies migrations in order on every deploy; if migration 0002 includes DROP TABLE x, and someone reverts to an old branch where x already doesn't exist, the migration fails. Use DROP TABLE IF EXISTS. Use ADD COLUMN IF NOT EXISTS. Make every migration safe to re-run on any state of the database.
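The idempotent style looks like this. As a sketch, here is migration 0002 from earlier rewritten so it is safe to re-run against any state of the database:

```sql
-- migrations/0002_comments.sql, rewritten to be safe to re-run.
-- IF NOT EXISTS makes each statement a no-op when the object
-- is already present, so replays and resets can't fail.
CREATE TABLE IF NOT EXISTS comments (
  id SERIAL PRIMARY KEY,
  article_id INTEGER NOT NULL REFERENCES articles(id) ON DELETE CASCADE,
  author_email TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_comments_article ON comments(article_id);
```

The habit costs a few extra keywords per statement and removes an entire class of "migration failed on replay" incidents.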

The preview's database is a copy of production data, not a redirected pointer. That means PII in production is now in every preview branch any developer can access. If you have GDPR or HIPAA constraints, you need to scrub or anonymize at branch-creation time. Netlify supports a pre-branch hook for this; use it.
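The scrub itself is plain SQL; the hook's exact shape is in Netlify's docs. Below is a generic example against this post's own schema — not the official hook syntax — using deterministic placeholder addresses so joins on the column still behave:

```sql
-- Generic anonymization pass to run at branch-creation time.
-- Deterministic placeholders (derived from the row id) keep the
-- column non-null and distinct per row while removing real PII.
UPDATE comments
SET author_email = 'user-' || id || '@example.invalid';
```

Run the equivalent statement for every column that carries personal data; the preview then holds production-shaped rows with none of the production identities.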

Connection pooling on serverless still matters. The @netlify/database driver handles a lot of this for you, but if you write a function that opens a new connection on every invocation and never closes it, you'll exhaust the connection pool. The driver's default is reasonable; don't override it without knowing why.
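If you do end up managing a client yourself, the standard serverless pattern is one client per warm instance, cached at module scope. A sketch with the driver factory injected as a parameter (my construction for testability, not the @netlify/database API):

```javascript
// Sketch: cache the database client at module scope so warm
// invocations of the same serverless instance reuse one client
// instead of opening a new connection per request.
// `createClient` stands in for your driver's factory (e.g. neon()).
let cachedClient = null;

function getClient(createClient) {
  if (cachedClient === null) {
    // First invocation on this instance: create the client once.
    cachedClient = createClient();
  }
  // Subsequent invocations on the same warm instance reuse it.
  return cachedClient;
}
```

The cache lives only as long as the instance, so there is nothing to tear down; cold starts simply create a fresh client.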

Local development is its own thing. You can use netlify dev to connect a local environment to a development branch. You can also run a local Postgres via Docker if you want speed. The official path is the dev branch; the unofficial path is local Docker; the wrong path is "everyone develops against the production branch."
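If you take the Docker path, the setup can be as small as a one-service compose file. This is a generic sketch, not from the Netlify docs; the image tag, database name, and password are arbitrary placeholders:

```yaml
# docker-compose.yml — a throwaway local Postgres for fast iteration.
# Credentials are placeholders; never reuse them anywhere real.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: localdev
      POSTGRES_DB: app
    ports:
      - "5432:5432"
```

Point your local connection string at localhost:5432 for speed, and keep the dev-branch path for anything that needs real-shaped data.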

Three patterns I'd reach for

1. Adding a feature that needs a new column. Branch + migration + preview + merge. This is the canonical flow and the docs cover it well. Five minutes, no surprises.

2. Renaming a column in production. This is the hard case. The pattern: add the new column, write data into both, deploy, switch reads to the new column, deploy, drop the old column, deploy. Three migrations, three deploys, never a moment when production is in a half-renamed state. The branch isolation makes this much less stressful: you can run the whole sequence against a branch first to make sure it works.

3. Backfilling data. Run the backfill on a branch with production data, see how long it takes, see what the lock contention looks like, see whether you blow the connection pool. Then schedule the actual production backfill for a low-traffic window with the timing data you collected. The preview branch is the dress rehearsal.
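The rename in pattern 2 can be written down as three migration files. A sketch only — it assumes the comments table from the earlier example and a rename of author_email to author_name, and the file names are illustrative:

```sql
-- migrations/0003_add_author_name.sql  (deploy 1: add + backfill)
-- The application starts writing BOTH columns at this deploy.
ALTER TABLE comments ADD COLUMN IF NOT EXISTS author_name TEXT;
UPDATE comments SET author_name = author_email WHERE author_name IS NULL;

-- migrations/0004_switch_reads.sql  (deploy 2: reads move over)
-- No schema change; the application's SELECTs switch to author_name.

-- migrations/0005_drop_author_email.sql  (deploy 3: remove the old column)
-- Only safe once nothing reads or writes author_email anymore.
ALTER TABLE comments DROP COLUMN IF EXISTS author_email;
```

Each file is idempotent, so a preview branch can replay the whole sequence from any intermediate state without failing.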

What this isn't

Netlify Database is Postgres. It's not Redis, it's not DynamoDB, it's not Mongo, it's not a vector store, it's not a queue. If your application needs any of those, you still need a separate provider for them. The branch-per-preview model is interesting because it pairs deeply with Netlify's deploy primitives; if you're not using Deploy Previews, you don't get the workflow win.

It's also not a replacement for a real database administrator's job at scale. Once you're past 100 GB, once you have actual indexes to tune, once read-replica routing matters, once you need point-in-time recovery to a specific second, you're in real-DBA territory and Netlify's managed surface is going to feel limited. The product is excellent for the under-100-GB, under-100-RPS-on-the-DB case, which is most of what most apps actually are.

The product fits naturally with the launch-fast-then-scale model I covered in The $97 Launch, the first book in the Digital Empire trilogy. Chapter 11 is the case for picking a managed Postgres for the first version of any app and worrying about scale-out later. Netlify Database is now the simplest answer to "which managed Postgres should I use" if you're already on Netlify, because the integration removes a category of operational work that, at small scale, is the actual bottleneck.

Related reading

  • Neon Serverless Postgres covers the underlying Postgres provider that powers Netlify Database, including connection-pool and scale-to-zero details.
  • The Mattel Platform Consolidation Case is what platform-level integration looks like at enterprise scale.
  • The SPA Shell Trap is the related Netlify-deployment gotcha you should know about before you ship any app.
  • The Migration Analyzer covers the broader process of moving between platforms and databases without losing data.
  • The Mega Analyzer is what I use to confirm a deployed app is configured correctly before declaring victory on a migration.

Fact-check notes and sources

  • Product announcement: Netlify Database is now available, from provisioning to integrated Postgres, Elad Rosenheim, April 28, 2026.
  • Documentation: Netlify Database overview (retrieved 2026-05-05).
  • Migration system and workflow details per the linked docs; the multi-step rename pattern (add new, dual-write, switch reads, drop old) is a generic schema-migration practice not specific to Netlify.
  • Underlying database: Postgres on Neon's infrastructure, per the announcement post.
  • Beta-period statistic ("400,000 databases created during beta") per the announcement.
  • Pricing details (compute and bandwidth metering, scale-to-zero on inactive branches, 5-minutes-to-never sleep window control) per the announcement; verify against your current Netlify plan and the billing docs before relying on these.

This post is informational, not infrastructure-consulting advice. I'm not a Netlify employee. The product details are accurate as of the publish date; verify current pricing and plan availability with Netlify before adopting.
