How to Test Your Node.js & Postgres App Using Drizzle & PGlite
Benjamin Daniel


I was recently tasked with writing tests for a large-scale production codebase serving over 2.1 million users. During my conversation with the lead engineer, he mentioned something that stood out: “One of our biggest pain points is our test setup.”

That conversation made me realize that the approach I’ve been using might actually be more useful than I thought. This post walks through that setup.


The Stack

We mostly use PostgreSQL in production and write functional or integration tests, which means no database mocking. In the past, we relied on Testcontainers to spin up real Postgres instances, but that approach was slow and heavy. You can find my other article on Testcontainers here: The death of mocks by Testcontainers

Eventually, we went looking for lighter alternatives.

That’s when we found @electric-sql/pglite, the WASM build of Postgres from the great people at ElectricSQL. It’s a great fit for a huge chunk of our tests and gets the job done with zero Docker overhead.
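
If you haven’t used PGlite before, here’s a minimal standalone sketch (the table and query are made up for illustration): it boots an in-memory Postgres and answers real SQL, with no server or container involved.

import { PGlite } from "@electric-sql/pglite";

const pg = new PGlite(); // in-memory Postgres, compiled to WASM

await pg.exec("create table notes (id serial primary key, body text)");
await pg.query("insert into notes (body) values ($1)", ["hello from WASM Postgres"]);

const { rows } = await pg.query("select id, body from notes");
console.log(rows); // [ { id: 1, body: 'hello from WASM Postgres' } ]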


What We Use

  • @electric-sql/pglite for an in-memory, WASM-compiled Postgres
  • Drizzle ORM, with drizzle-kit to push the schema at test time
  • Vitest as the test runner
  • A dependency-injection container that owns the database binding

The Core Idea

At test time, we dynamically swap the production database with a PGlite instance.

Here’s how.


Swapping the DB Connection in Tests

// test/setup.ts
import { DatabaseConnection } from "@src/app.bind";
import { container } from "../app.container";
import { drizzle } from "drizzle-orm/pglite";
import * as schema from "@src/db/schema"; // adjust to wherever your Drizzle schema lives

container.unbindSync(DatabaseConnection); // remove the production binding

const db = drizzle({ schema }); // Drizzle over an in-memory PGlite instead of Postgres

container
  .bind(DatabaseConnection)
  .toConstantValue(db as unknown as DatabaseConnection); // rebind with the test DB

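With the binding swapped, anything that resolves DatabaseConnection from the container now talks to PGlite. As a quick sanity check, a test along these lines (an illustrative sketch, assuming an Inversify-style container.get and Vitest; the exact result shape depends on your DatabaseConnection type) confirms the wiring:

import { describe, expect, it } from "vitest";
import { sql } from "drizzle-orm";
import { DatabaseConnection } from "@src/app.bind";
import { container } from "../app.container";

describe("test database binding", () => {
  it("resolves a connection backed by PGlite", async () => {
    const db = container.get(DatabaseConnection); // same token the app code resolves
    const result = await db.execute(sql`select 1 as ok`); // runs against the in-memory PGlite
    expect(result.rows).toHaveLength(1);
  });
});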

Running Migrations in Tests

To make sure your schema is applied to the in-memory DB before tests run, we do this:

beforeAll(async () => {
  // drizzle-kit/api is loaded through a real CommonJS require to avoid
  // ESM import issues under Vitest
  const { createRequire } =
    await vi.importActual<typeof import("node:module")>("node:module");
  // @ts-expect-error import.meta isn't allowed under this project's TS module setting
  const require = createRequire(import.meta.url);
  const { pushSchema } =
    require("drizzle-kit/api") as typeof import("drizzle-kit/api");

  // Generate the DDL for the Drizzle schema and apply it to the PGlite instance
  const { apply } = await pushSchema(schema, db as unknown as never);
  await apply();

  container
    .rebindSync(DatabaseConnection)
    .toConstantValue(db as unknown as DatabaseConnection);
});

This pushes your schema to the PGlite DB before any tests run. You can read more about Drizzle's push workflow here: https://orm.drizzle.team/docs/drizzle-kit-push.
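
As an aside, if you keep generated SQL migration files (from drizzle-kit generate) instead of pushing the schema, drizzle also ships a PGlite migrator you could call from the same hook. A rough sketch, reusing the db from the setup above and assuming your migrations live in ./drizzle:

import { migrate } from "drizzle-orm/pglite/migrator";

beforeAll(async () => {
  // apply the generated SQL files in ./drizzle to the in-memory PGlite instance
  await migrate(db, { migrationsFolder: "./drizzle" });
});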


Be Careful

Behavior may diverge silently from production in edge cases. I haven't run into that kind of scenario yet, but it's worth keeping in mind.

If you’re tired of:

  • Over-engineered mocks
  • Docker-heavy CI pipelines
  • Testing strategies that don’t resemble prod

…then this setup might be worth trying.

Want to see how we structure actual tests with this setup? I’ll share more in the next post, so kindly follow me for that.

Or drop a comment if you want the full boilerplate repo.

Comments (2)

  • Colin Kierans, Aug 2, 2025

    I'm using test containers right now so I'm intrigued by this.

    How do you handle mock data in your database for testing?

    We have a bunch of functions that we've created to insert data into the database before we run tests. And we have another strategy where we use snippets of production data that we previously grabbed and anonymized.

    Both have their pros but also their fair share of cons.

    • Benjamin Daniel, Aug 3, 2025

      Hi Colin, great question, and I completely agree: both have their pros and their fair share of cons.

      I’ve used both too, but I generally lean toward creating mock data through factories, especially when I need precise control over edge cases or when testing regressions (like bugs that were fixed and need to stay fixed). It’s also easier to isolate what matters in the test.
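
      For example, a minimal factory sketch (the users table and its fields are made up for illustration; db is the PGlite-backed drizzle instance from the setup):

      import { users } from "@src/db/schema"; // illustrative table

      export async function createUser(overrides: Partial<typeof users.$inferInsert> = {}) {
        const [user] = await db
          .insert(users)
          .values({ email: `user-${Date.now()}@example.com`, name: "Test User", ...overrides })
          .returning(); // return the inserted row so tests can reference its id
        return user;
      }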

      I think you should stick with whichever approach feels right to you, or better still, mix them.

      Thanks for reading and for your comment.
