Postgres is Too Good (And Why That's Actually a Problem)
Shayan


Publish Date: Jun 13

We need to talk about something that's been bothering me for months. I've been watching indie hackers and startup founders frantically cobbling together tech stacks with Redis for caching, RabbitMQ for queues, Elasticsearch for search, and MongoDB for... reasons?

I'm guilty of this too. When I started building UserJot (my feedback and roadmap tool), my first instinct was to plan out a "proper" architecture with separate services for everything. Then I stopped and asked myself: what if I just used Postgres for everything?

Turns out, there's this elephant in the room that nobody wants to acknowledge:

Postgres can do literally all of this.

And it does it better than you think.

The "Postgres Can't Scale" Myth That's Costing You Money

Let me guess - you've been told that Postgres is "just a relational database" and that you need specialized tools for specialized jobs. That's what I thought too, until I learned that Instagram scaled to 14 million users with Postgres at the core of its stack. Discord leaned on Postgres for its core relational data while growing to billions of messages. Notion built their entire product on Postgres.

But here's the kicker: they're not using Postgres like it's 2005.

Queue Systems

Stop paying for Redis and RabbitMQ. Between FOR UPDATE SKIP LOCKED and native LISTEN/NOTIFY support, Postgres can handle job queues as well as most dedicated solutions:

-- Simple job queue in pure Postgres
CREATE TABLE job_queue (
    id SERIAL PRIMARY KEY,
    job_type VARCHAR(50),
    payload JSONB,
    status VARCHAR(20) DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT NOW(),
    processed_at TIMESTAMP
);

-- ACID-compliant job processing
BEGIN;
UPDATE job_queue
SET status = 'processing', processed_at = NOW()
WHERE id = (
    SELECT id FROM job_queue
    WHERE status = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING *;
COMMIT;

This gives you safe concurrent job claiming - each job goes to exactly one worker - with zero additional infrastructure. Try doing that with Redis without pulling your hair out.

In UserJot, I use this exact pattern for processing feedback submissions, sending notifications, and updating roadmap items. One transaction, guaranteed consistency, no message broker complexity.
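
In production you'd want two more pieces that the snippet above leaves out (the index and trigger names below are my own, illustrative choices): a partial index so the pending-job scan stays fast as the table grows, and a NOTIFY trigger so idle workers can wait on LISTEN instead of polling.

```sql
-- Partial index: only 'pending' rows are indexed, so dequeues stay fast
CREATE INDEX idx_job_queue_pending
    ON job_queue (created_at)
    WHERE status = 'pending';

-- Wake sleeping workers whenever a job is enqueued
CREATE OR REPLACE FUNCTION notify_new_job()
RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('new_job', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER job_queue_notify
    AFTER INSERT ON job_queue
    FOR EACH ROW EXECUTE FUNCTION notify_new_job();
```

Each worker runs LISTEN new_job; once at startup and only re-runs the dequeue query when a notification (or a periodic fallback timer) fires.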

Key-Value Storage

Managed Redis starts at around $20/month on most platforms. Postgres JSONB is included in your existing database and does most of what you need:

-- Your Redis alternative
CREATE TABLE kv_store (
    key VARCHAR(255) PRIMARY KEY,
    value JSONB,
    expires_at TIMESTAMP
);

-- GIN index for blazing fast JSON queries
CREATE INDEX idx_kv_value ON kv_store USING GIN (value);

-- Query nested JSON faster than most NoSQL databases
SELECT * FROM kv_store
WHERE value @> '{"user_id": 12345}';

The @> containment operator is Postgres's secret weapon. Backed by a GIN index, it resolves nested-JSON lookups with performance competitive with most NoSQL stores - and your data stays consistent.
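
One thing Redis gives you that this table doesn't do automatically is TTL expiry. A minimal sketch, reusing the kv_store schema above (the key names are just examples):

```sql
-- Upsert with a TTL, SETEX-style
INSERT INTO kv_store (key, value, expires_at)
VALUES ('session:abc', '{"user_id": 12345}', NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
    SET value = EXCLUDED.value,
        expires_at = EXCLUDED.expires_at;

-- Reads treat expired rows as missing
SELECT value FROM kv_store
WHERE key = 'session:abc'
  AND (expires_at IS NULL OR expires_at > NOW());

-- Reap expired keys periodically, e.g. from the job queue or pg_cron
DELETE FROM kv_store WHERE expires_at < NOW();
```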

Full-Text Search

Elasticsearch clusters are expensive and complex. Postgres has built-in full-text search that's shockingly good:

-- Add search to any table
ALTER TABLE posts ADD COLUMN search_vector tsvector;

-- Auto-update search index
CREATE OR REPLACE FUNCTION update_search_vector()
RETURNS trigger AS $$
BEGIN
    NEW.search_vector := to_tsvector('english',
        COALESCE(NEW.title, '') || ' ' ||
        COALESCE(NEW.content, '')
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach the trigger so the vector stays up to date
CREATE TRIGGER posts_search_update
BEFORE INSERT OR UPDATE ON posts
FOR EACH ROW EXECUTE FUNCTION update_search_vector();

-- GIN index makes the @@ searches fast
CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);

-- Ranked search results
SELECT title, ts_rank(search_vector, query) as rank
FROM posts, to_tsquery('startup & postgres') query
WHERE search_vector @@ query
ORDER BY rank DESC;

This handles stemming and relevance ranking out of the box; add the pg_trgm extension when you also need fuzzy matching.

For UserJot's feedback search, this lets users find feature requests instantly across titles, descriptions, and comments. No Elasticsearch cluster needed - just pure Postgres doing what it does best.
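
When you want typo tolerance - which tsvector alone doesn't give you - the pg_trgm extension covers it. A sketch (the index name is my own):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GIN index for similarity searches on titles
CREATE INDEX idx_posts_title_trgm
    ON posts USING GIN (title gin_trgm_ops);

-- % is the similarity operator: matches despite the typo
SELECT title, similarity(title, 'postgers') AS score
FROM posts
WHERE title % 'postgers'
ORDER BY score DESC;
```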

Real-Time Features

Forget standing up a separate pub/sub service. Postgres LISTEN/NOTIFY gives you real-time change events with zero additional services:

-- Notify clients of changes
CREATE OR REPLACE FUNCTION notify_changes()
RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('table_changes',
        json_build_object(
            'table', TG_TABLE_NAME,
            'action', TG_OP,
            'data', row_to_json(NEW)
        )::text
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

Your application listens for these notifications and pushes updates to users. No Redis pub/sub needed.
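
To make the function above actually fire, you still need to attach it to a table and subscribe from a connection. A sketch using a hypothetical feedback table (note that NOTIFY payloads are capped at roughly 8000 bytes, so for wide rows send just the id and re-fetch):

```sql
-- Attach the trigger (table and trigger names are illustrative)
CREATE TRIGGER feedback_changes
    AFTER INSERT OR UPDATE ON feedback
    FOR EACH ROW EXECUTE FUNCTION notify_changes();

-- Any connected session can now subscribe
LISTEN table_changes;
```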

The Hidden Costs of "Specialized" Tools

Let's do some math. A typical "modern" stack costs:

  • Redis: $20/month
  • Message queue: $25/month
  • Search service: $50/month
  • Monitoring for 3 services: $30/month
  • Total: $125/month

But that's just the hosting costs. The real pain comes from:

Operational Overhead:

  • Three different services to monitor, update, and debug
  • Different scaling patterns and failure modes
  • Multiple configurations to maintain
  • Separate backup and disaster recovery procedures
  • Different security considerations for each service

Development Complexity:

  • Different client libraries and connection patterns
  • Coordinating deployments across multiple services
  • Inconsistent data between systems
  • Complex testing scenarios
  • Different performance tuning approaches

If you self-host, add server management, security patches, and the inevitable 3 AM debugging sessions when Redis decides to consume all your memory.

Postgres handles all of this with a single service that you're already managing.

The Single Database That Scales

Here's something most people don't realize: a single Postgres instance can handle massive scale. We're talking millions of transactions per day, terabytes of data, and - with a connection pooler like PgBouncer in front - thousands of concurrent connections.

Commonly cited examples:

  • Instagram: millions of users in its early years on a Postgres core
  • Robinhood: billions of financial transactions
  • GitLab: entire DevOps platform on Postgres

The magic is in Postgres's architecture. It's designed to scale vertically incredibly well, and when you finally need horizontal scaling, you have proven options like:

  • Read replicas for query scaling
  • Partitioning for large tables
  • Connection pooling for concurrency
  • Logical replication for distributed setups
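
Partitioning in particular has been declarative since Postgres 10, so "large table" problems often have a one-statement answer. An illustrative sketch (table and partition names are my own):

```sql
-- Range-partition an events table by month
CREATE TABLE events (
    id         BIGSERIAL,
    created_at TIMESTAMPTZ NOT NULL,
    payload    JSONB,
    PRIMARY KEY (id, created_at)  -- partition key must be part of the PK
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
```

Queries that filter on created_at automatically prune down to the relevant partitions.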

Most businesses never hit these limits. You're probably fine with a single instance until you're processing millions of users or complex analytical workloads.

Compare this to managing separate services that all scale differently - your Redis might max out memory while your message queue struggles with throughput and your search service needs different hardware entirely.

Stop Overengineering From Day One

The biggest trap in modern development is playing architecture astronaut. We design systems for problems we don't have, with traffic we've never seen, for scale we may never reach.

The overengineering cycle:

  1. "We might need to scale someday"
  2. Add Redis, queues, microservices, multiple databases
  3. Spend months debugging integration issues
  4. Launch to 47 users
  5. Pay $200/month for infrastructure that could run on a $5 VPS

Meanwhile, your competitors ship faster because they're not managing a distributed system before they need one.

The better approach:

  • Start simple with Postgres
  • Monitor actual bottlenecks, not imaginary ones
  • Scale specific components when you hit real limits
  • Add complexity only when it solves actual problems

Your users don't care about your architecture. They care about whether your product works and solves their problems.

When You Actually Need Specialized Tools

Don't get me wrong - specialized tools have their place. But you probably don't need them until:

  • You're processing 100,000+ jobs per minute
  • You need sub-millisecond cache responses
  • You're doing complex analytics on terabytes of data
  • You have millions of concurrent users
  • You need global data distribution with specific consistency requirements

If you're reading this on dev.to, you're probably not there yet.

Why This Actually Matters

Here's what blew my mind: Postgres can be your primary database, cache, queue, search engine, AND real-time system simultaneously. All while maintaining ACID transactions across everything.

-- One transaction, multiple operations
BEGIN;
    INSERT INTO users (email) VALUES ('user@example.com');
    INSERT INTO job_queue (job_type, payload)
    VALUES ('send_welcome_email', '{"user_id": 123}');
    UPDATE kv_store SET value = '{"last_signup": "2024-01-15"}'
    WHERE key = 'stats';
COMMIT;

Try doing that across Redis, RabbitMQ, and Elasticsearch without crying.

The Boring Technology That Wins

Postgres isn't sexy. It doesn't have a flashy website or viral TikTok presence. But it's been quietly powering the internet for decades while other databases come and go.

There's something to be said for choosing boring, reliable technology that just works.

Action Steps for Your Next Project

  1. Start with Postgres only - Resist the urge to add other databases
  2. Use JSONB for flexibility - You get schema-less benefits with SQL power
  3. Implement queues in Postgres - Save money and complexity
  4. Add specialized tools only when you hit real limits - Not imaginary ones

My Real-World Experience

Building UserJot has been the perfect test case for this philosophy. It's a feedback and roadmap tool that needs:

  • Real-time updates when feedback is submitted
  • Full-text search across thousands of feature requests
  • Background jobs for sending notifications
  • Caching for frequently accessed roadmaps
  • Key-value storage for user preferences and settings

My entire backend is a single Postgres database. No Redis, no Elasticsearch, no message queues. Just Postgres handling everything from user authentication to real-time WebSocket notifications.

The result? I ship features faster, have fewer moving parts to debug, and my infrastructure costs are minimal. When users submit feedback, search for features, or get real-time updates on roadmap changes - it's all Postgres under the hood.

This isn't theoretical anymore. It's working in production with real users and real data.

The Uncomfortable Conclusion

Postgres might be too good for its own good. It's so capable that it makes most other databases seem unnecessary for 90% of applications. The industry has convinced us we need specialized tools for everything, but maybe we're just making things harder than they need to be.

Your startup doesn't need to be a distributed systems showcase. It needs to solve real problems for real people. Postgres lets you focus on that instead of babysitting infrastructure.

So next time someone suggests adding Redis "for performance" or MongoDB "for flexibility," ask them: "Have you actually tried doing this in Postgres first?"

You might be surprised by the answer. I know I was when I built UserJot entirely on Postgres - and it's been running smoothly ever since.


What's your experience with Postgres? Have you successfully used it beyond traditional relational data? I'm always curious to hear how other developers are using it. If you want to see a real example of Postgres doing everything, check out UserJot - it's my proof that you really can build a full SaaS with just one database.

Comments 30 total

  • Dotallio
    Jun 13, 2025

    Agree with this so much. I run everything from stateful AI flows to realtime dashboards with just Postgres - curious if anyone actually hit a limit that Postgres couldn't handle?

  • Nathan Tarbert
    Jun 13, 2025

    been loving this energy tbh, made me rethink all those times i jumped straight to the fancy stacks instead of just trusting one thing that actually works. you think sticking to boring tech too long ever backfires or nah

  • david duymelinck
    Jun 14, 2025

    I would not put all the data in a single database. I go for task-specific schemas.

    There are benefits other than fulfilling the basic job in the specialized database systems. Scaling isn't the only reason to use them.
    If you only need the basic job Postgres can be the solution.

    • Jurgo Boemo
      Jun 15, 2025

      I would say that if you want to divide your data, you should do it by domain. So you can have multiple Postgres instances/schemas organized by domain. If you do that, all the points in the article are still valid except for the pub/sub part.
      I have the feeling that with this approach, the complexity is reduced so much that your performance will be OK even with a monolithic approach for a long time though

      • david duymelinck
        Jun 15, 2025

        I would not bother with domain separation in most cases because it will not have the performance boost or scaling possibilities you think it has.
        I would rather prefix the table name with the domain name to make it easier to scan through the table overview, and detect domain crossing foreign keys.
        I would do domain separation when a high level of data security is needed.

        Splitting it up by tasks makes the schemas single purpose. The full text schema is going to require more data because of the token splitting. the queue schema is going to be very scalable because the amount of rows is going to fluctuate.
        Because the schema's are single purpose it will be easier to draw conclusions when things go wrong.

  • nadeem zia
    Jun 14, 2025

    nice work

  • sahil1330
    Jun 14, 2025

    Discord uses scyllaDB.

    • Shayan
      Jun 14, 2025

      They also use Postgres for core relational data.

  • Günter Zöchbauer
    Jun 14, 2025

    When NoSQL became popular, many thought NoSQL is superior to SQL. While there are scenarios where this can be true, it's only for the most trivial use cases or at extreme scale where performance has much higher priority than functionality. The latter is probably why many thought NoSQL is superior in general, but only very few ever need that kind of distributed performance some NoSQL databases can offer. SQL databases like Postgres offer a shitload of extremely useful functionality and can scale, just somewhat less than specialized NoSQL databases at the cost of extremely limited functionality.

  • Alvarin
    Jun 14, 2025

    Great Read! But How Do You Handle CI/CD With This Approach?

    @shayy Your post about using Postgres for everything really resonated with me. The UserJot example proves this works in production, but I'm curious about the operational side that wasn't covered.

    When Postgres is handling your queues, search indexes, real-time notifications AND core data, how do you manage deployments safely? A single schema migration could impact job processing, search performance, and real-time features all at once. Do you use tools like Flyway for migrations, or have you found simpler approaches? And how do you test these interdependent features - especially when simulating production load across all the different "services" within Postgres?

    I suspect managing CI/CD for one well-configured Postgres instance might actually be simpler than coordinating deployments across Redis + RabbitMQ + Elasticsearch + your main database. Would love to hear your real-world experience with testing and deployment strategies!

  • Stephen Potter
    Jun 14, 2025

    Love this. Can’t agree more.

  • Erin Boeger
    Jun 15, 2025

    Thank you for this!

  • Navin Yadav
    Jun 15, 2025

    Let me give it a try.

  • Alois Sečkár
    Jun 15, 2025

    What about storing files (large binary data) in Postgres? This is my current usecase for MongoDB - storing user uploads. Can this be substituted with Postgres as well?

    • Jo
      Jun 15, 2025

      postgresql.org/docs/7.1/jdbc-lo.html
      They are called BLOBs, large binary objects stored in DB tables. I'm not sure this is the best way to do it, but it's possible

    • Chigozie Okali
      Jun 19, 2025

      Try and experiment with the bytea (byte array) data type for your file storage, can possibly store up to 1GB on a single column.

    • Peter Lamb
      Jun 20, 2025

      Yes, it absolutely can. I built a document management system using Postgres and have stored binaries in Postgres for other applications too. It works very well. I even implemented binary chunks to allow me to store very large binaries.

  • Diogo Klein
    Jun 15, 2025

    This post smells quite a lot like ChatGPT

  • Shaun Jansen Van Nieuwenhuizen
    Jun 15, 2025

    Very well written!
    I use Postgres in enterprise environments, and MySQL in private environments (considering moving).

    I do believe that the Postgres events will still require pub/sub when horizontally scaling.

    Redis works great for caching. I have not tried the Postgres solution that you posted, so I will definitely give this a try. That being said, I think there are more libs that support Redis out of the box.

  • Ha Aang
    Jun 15, 2025

    Finally something good on dev.to, even if it has an OP product plug.

  • Rock Brown
    Jun 15, 2025

    I start with SQLite3, then move up to Postgres. This makes it even easier than having to start with a DB server. I personally use SQLite for my personal projects, I don't have a valid need yet for a "bigger" SQL server for them. (I'm in Oracle for work all day long.). Just for fun, I want a self hosted Postgres server.

  • Alex Colls Outumuro
    Jun 15, 2025

    I couldn't agree more! Great post! 👏

  • Cali LaFollett
    Jun 16, 2025

    @shayy GREAT, well laid out article explaining WHY you really don't need much else.

    I really like SQL Server but Postgres leaves it in the dust for features and is my go-to DB now.

  • Nicolus
    Jun 16, 2025

    I completely agree with your sentiment, but I think you're overestimating the cost and complexity of Redis or Valkey: You can install it for free on a $5 VPS in an hour (or 5 minutes if you just use the default configuration) and it works pretty much as expected out of the box. It gives you both a key-value cache and a pubsub queue system.

    So doing everything in the DB is absolutely viable and probably the best approach for a MVP, but I've never felt like Redis was a burden.

  • Chigozie Okali
    Jun 19, 2025

    Nice one. Such a free, wonderful, extensible, high-performance database. The extensions (including full-text search, geospatial and vector support) are truly out of this world.

  • özkan pakdil
    Jun 23, 2025

    I agree having one PostgreSQL is a very nice starting point, but I still wonder: is there a comparison/benchmark with numbers for full-text search in Lucene vs. PG, and the same for Redis and Kafka?

  • Luce5in3
    Jun 24, 2025

    If I were to do a project that I loved, I think I'd try to use it. Nice article.
