How I Got x311 Faster Analytics on 110M Rows
Yaroslav Demenskyi


Publish Date: May 6

Hey everyone! I recently got fed up with waiting minutes for simple analytics queries to finish, so I threw together a little demo to see how SingleStore, MySQL, and PostgreSQL stack up against each other on a 110 million‑row banking transactions dataset.

Spoiler: the results surprised me - and I think they’ll surprise you too.

In this post I’ll walk you through:

  1. The setup I used
  2. The queries I ran and what happened
  3. How to spin it up on your machine

My Setup

All three databases ran as default Docker containers, side by side on identical AWS EC2 t2.large instances, each loaded with the same 110 million‑row synthetic banking‑transactions dataset (full details in the Conclusion below).

The Queries

I tested three common analytics patterns:

  1. Sum of all successful transactions in the last 30 days
  2. Top transfer recipients by count
  3. Recent transactions joined with user and account info

Each query ran against all three databases, and I recorded both the absolute time and the relative speed.

1. Total Transaction Volume (30 days)

  • SingleStore: 923 ms
  • Postgres: 1m 3.3s
  • MySQL: timed out

That meant SingleStore was about x68 faster than PostgreSQL - and MySQL couldn't finish in 5 minutes.

[Screenshot: Query 1 timings]
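The exact SQL lives in the repo, but the shape of query 1 is roughly this. The table and column names are my assumptions, not necessarily the repo's schema, and the interval syntax shown is the MySQL/SingleStore spelling - Postgres writes it `INTERVAL '30 days'`:

```sql
-- Assumed schema: transactions(amount, status, created_at, ...)
-- Total volume of successful transactions over the last 30 days.
SELECT SUM(amount) AS total_volume
FROM transactions
WHERE status = 'success'
  AND created_at >= NOW() - INTERVAL 30 DAY;
```

A full-table aggregation like this is exactly where a columnstore engine shines: it only has to scan three columns out of the whole table.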

2. Top Transfer Recipients

  • SingleStore: 3.8s
  • Postgres: 1m 19.2s
  • MySQL: timed out

Here, SingleStore was roughly x20 faster than PostgreSQL - and MySQL couldn’t finish in 5 minutes.

[Screenshot: Query 2 timings]
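Query 2 is a classic group-and-rank aggregation. Again, a sketch with an assumed schema rather than the repo's exact SQL:

```sql
-- Assumed schema: transactions(recipient_account_id, type, ...)
-- Which accounts receive the most transfers?
SELECT recipient_account_id,
       COUNT(*) AS transfer_count
FROM transactions
WHERE type = 'transfer'
GROUP BY recipient_account_id
ORDER BY transfer_count DESC
LIMIT 10;
```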

3. Join Users, Accounts & Transactions

  • SingleStore: 206 ms
  • Postgres: 1m 4.1s
  • MySQL: timed out

SingleStore handled the multi‑table join roughly x311 faster than PostgreSQL - and once again MySQL couldn't finish within 5 minutes.

[Screenshot: Query 3 timings]
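Query 3 joins the three tables to enrich recent transactions with user and account details. As before, the schema below is an assumption for illustration:

```sql
-- Assumed schema: users(id, name), accounts(id, user_id, ...),
--                 transactions(id, account_id, amount, created_at, ...)
SELECT t.id, t.amount, t.created_at, u.name
FROM transactions t
JOIN accounts a ON a.id = t.account_id
JOIN users    u ON u.id = a.user_id
ORDER BY t.created_at DESC
LIMIT 100;
```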

What the numbers tell us

SingleStore consistently outperformed both MySQL and PostgreSQL in query latency under identical conditions. That makes it a strong candidate for real‑time analytics on operational data - whether you're driving dashboards, powering machine‑learning features, or simply crunching large datasets at speed.
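As a sanity check, the multipliers quoted above fall straight out of the measured timings:

```python
# Recompute the speedup multipliers from the raw timings in this post.
timings_ms = {
    "total_volume":   {"singlestore": 923,   "postgres": 63_300},  # 1m 3.3s
    "top_recipients": {"singlestore": 3_800, "postgres": 79_200},  # 1m 19.2s
    "three_way_join": {"singlestore": 206,   "postgres": 64_100},  # 1m 4.1s
}

for query, t in timings_ms.items():
    speedup = t["postgres"] / t["singlestore"]
    print(f"{query}: x{speedup:.1f}")
```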

Try It Yourself

To run this demo locally, just follow the instructions in the GitHub repository.

Conclusion

In this benchmark, I ran each database engine straight “out of the box” - no extra tuning, no custom extensions, just the default Docker images. You can check out the exact setup in my Docker Compose file here.

All three containers lived side by side on identical AWS EC2 t2.large instances. I generated one 110 million‑row synthetic banking‑transactions dataset and loaded it into each database exactly the same way. If you're curious about the details, the scripts are all open source.

For all my queries and connections, I relied on Drizzle ORM - no hand‑rolled code.
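To give a feel for what that looks like, here is a minimal sketch of issuing one of the raw queries through Drizzle over its mysql2 driver (SingleStore speaks the MySQL wire protocol). The connection details, database name, and schema here are my assumptions, not the repo's actual code:

```typescript
// Hypothetical sketch - host, credentials, and table names are assumptions.
import { drizzle } from "drizzle-orm/mysql2";
import { sql } from "drizzle-orm";
import mysql from "mysql2/promise";

const connection = await mysql.createConnection({
  host: "localhost",
  port: 3306,
  user: "root",
  database: "bank",
});
const db = drizzle(connection);

// Drizzle's sql`` tag lets you run raw SQL without hand-rolling a client.
const result = await db.execute(
  sql`SELECT SUM(amount) AS total_volume
      FROM transactions
      WHERE status = 'success'
        AND created_at >= NOW() - INTERVAL 30 DAY`
);
```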

If you want to see these performance gains on your own workloads, give SingleStore a try and let me know what you find.

Also, if you spot anything I’ve overlooked or have ideas to squeeze even more performance out of the setup, please drop a comment - I’d love to learn from you.

Thank you for your attention!
