Hey everyone! I recently got fed up with waiting minutes for simple analytics queries to finish, so I threw together a little demo to see how SingleStore, MySQL, and PostgreSQL stack up against each other on a 110 million‑row banking transactions dataset.
Spoiler: the results surprised me - and I think they’ll surprise you too.
In this post I’ll walk you through:
- The setup I used
- The queries I ran and what happened
- How to spin it up on your machine
My Setup
- Docker images:
  - SingleStore Free Dev: `ghcr.io/singlestore-labs/singlestoredb-dev:latest`
  - MySQL: `mysql:latest`
  - PostgreSQL: `postgres:latest`
- Platform: three t2.large EC2 instances, one for each database
- Dataset: 110 million synthetic “banking transactions”
- Code: Next.js + Drizzle ORM application (GitHub repository)
The Queries
I tested three common analytics patterns:
- Sum of all successful transactions in the last 30 days
- Top transfer recipients by count
- Recent transactions joined with user and account info
Each query ran against all three databases, and I recorded both the absolute time and the relative speed.
1. Total Transaction Volume (30 days)
- SingleStore: 923 ms
- Postgres: 1m 3.3 s
- MySQL: timed out
That meant SingleStore was about 68x faster than PostgreSQL - and MySQL couldn't finish within the 5-minute timeout.
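The exact query lives in the repo, but the shape is a simple filtered aggregation. A sketch in plain SQL, assuming a `transactions` table with `status`, `amount`, and `created_at` columns (these names are my guesses, not necessarily the repo's schema):

```sql
-- Sum the amounts of successful transactions from the last 30 days.
-- Table and column names are illustrative assumptions.
SELECT SUM(amount) AS total_volume
FROM transactions
WHERE status = 'success'
  AND created_at >= NOW() - INTERVAL 30 DAY;
```

(PostgreSQL spells the interval as `NOW() - INTERVAL '30 days'`; MySQL and SingleStore accept the form above.)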
2. Top Transfer Recipients
- SingleStore: 3.8 s
- Postgres: 1m 19.2 s
- MySQL: timed out
Here, SingleStore was roughly 20x faster than PostgreSQL - and MySQL again couldn’t finish within 5 minutes.
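In SQL terms this one is a filtered GROUP BY with a sort; a sketch under the same assumed schema, where `recipient_account_id` and `type` are hypothetical column names:

```sql
-- Count transfers per recipient and keep the busiest ones.
-- Column names are illustrative assumptions.
SELECT recipient_account_id, COUNT(*) AS transfer_count
FROM transactions
WHERE type = 'transfer'
GROUP BY recipient_account_id
ORDER BY transfer_count DESC
LIMIT 10;
```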
3. Join Users, Accounts & Transactions
- SingleStore: 206 ms
- Postgres: 1m 4.1 s
- MySQL: timed out
SingleStore handled the multi‑table join roughly 311x faster than PostgreSQL - and once again MySQL couldn’t finish within 5 minutes.
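The join follows the natural users → accounts → transactions chain; a sketch with assumed table and column names:

```sql
-- Latest transactions enriched with account and user details.
-- Schema names are illustrative assumptions.
SELECT t.id, t.amount, t.created_at, u.name, a.account_number
FROM transactions AS t
JOIN accounts AS a ON a.id = t.account_id
JOIN users AS u ON u.id = a.user_id
ORDER BY t.created_at DESC
LIMIT 100;
```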
What the numbers tell us
SingleStore consistently outperformed both MySQL and PostgreSQL on query latency under identical conditions. That makes it a strong candidate for real‑time analytics on operational data - whether you’re driving dashboards, powering machine‑learning features, or simply need to crunch large datasets at speed.
Try It Yourself
To run this demo locally, just follow the instructions in the GitHub repository.
Conclusion
In this benchmark, I ran each database engine straight “out of the box” - no extra tuning, no custom extensions, just the default Docker images. You can check out the exact setup in my Docker Compose file here.
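If you just want a feel for the setup before opening the repo, a minimal sketch of a compose file using the same three default images might look like this (service names, host ports, and passwords are placeholders, not the repo's exact file):

```yaml
# Illustrative sketch only - see the repo for the real compose file.
services:
  singlestore:
    image: ghcr.io/singlestore-labs/singlestoredb-dev:latest
    environment:
      ROOT_PASSWORD: change-me   # assumed env var for the dev image
    ports:
      - "3306:3306"
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: change-me
    ports:
      - "3307:3306"
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: change-me
    ports:
      - "5432:5432"
```

In the actual benchmark each database ran on its own EC2 instance, so in practice you would bring up one service per host rather than all three together.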
All three databases ran on identical AWS EC2 t2.large instances. I generated one 110 million‑row synthetic banking‑transactions dataset and loaded it into each database exactly the same way. If you’re curious about the details, the scripts are all open source in the GitHub repository.
For all my queries and connections, I relied on Drizzle ORM - no hand‑rolled code.
If you want to see these performance gains on your own workloads, give SingleStore a try and let me know what you find.
Also, if you spot anything I’ve overlooked or have ideas to squeeze even more performance out of the setup, please drop a comment - I’d love to learn from you.
Thank you for your attention!