Writing My First Microservice: A Painful Yet Rewarding Journey
Prashant Swaroop


🚀 Introduction

"I thought building my first microservice would make me a dev hero—until the API Gateway nightmare hit! 😅

This post walks through the mistakes, lessons, and the microservice trenches I went through while setting up my first API gateway

😤 Pain Point #1 – API Gateway Setup

So, what even is an API Gateway?

Think of it like a traffic cop or central bouncer — every request from the frontend comes here first. It then routes the request to the correct service: Auth, Payments, Listings, etc.

It's also the only service exposed to the outside world.


😩 Why was this painful?

  • It sounds simple, but the setup gets complex really fast.
  • Forwarding headers and preserving user context is tricky.
  • Debugging routing and path rewrites is not very intuitive.

💥 Pain Point #2 – Forwarding Headers for Auth

Let's take an example to understand this better.

You have two microservices:

  • 🛡️ Auth Service – handles login and JWTs
  • 💳 Payment Service – processes payments

Both services are independent. So even if the user is authenticated through Auth, Payment doesn’t know that.

✅ Solution

Once the user logs in:

  1. API Gateway verifies the JWT centrally.
  2. Gateway extracts user info (userId, role, etc.).
  3. It adds them as custom headers:
    • x-user-id
    • x-user-role
  4. These are forwarded to other services (like Payment), which can trust the info and skip separate JWT validation.

This way, downstream services don't need to re-verify the token. They just trust the gateway.
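
To make that concrete, here's a minimal sketch of what the Payment (or any downstream) service could do with those headers. This is illustrative, not from my actual code: it assumes an Express + TypeScript service, and the middleware name attachUserFromHeaders is made up for the example.

import { Request, Response, NextFunction } from "express";

// Trust the gateway: rebuild req.user from the forwarded headers
interface GatewayRequest extends Request {
  user?: { userId: string; role: string };
}

const attachUserFromHeaders = (
  req: GatewayRequest,
  res: Response,
  next: NextFunction
) => {
  const userId = req.header("x-user-id");
  const role = req.header("x-user-role");

  if (!userId || !role) {
    // No user context from the gateway, reject the request
    return res.status(401).json({
      success: false,
      message: "Missing user context from gateway",
    });
  }

  req.user = { userId, role }; // no JWT verification needed here
  next();
};

export default attachUserFromHeaders;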



🔁 Choosing the Right Proxy Middleware

At first, I used express-http-proxy.

It worked, but was too verbose and required too much config.

Then I discovered http-proxy-middleware.

It’s cleaner, easier to set up, and works beautifully with Express.

⚙️ Code Walkthrough

Let’s break down how the setup looks in code.

📦 API Gateway Setup with http-proxy-middleware

import { createProxyMiddleware, Plugin } from "http-proxy-middleware"; // import the main library
import logger from "./utils/logger"; // pino logger to log things
import { RequestHandler } from "express";
import dotenv from "dotenv";
import { JwtPayload } from "jsonwebtoken";

dotenv.config(); // load the .env file so the service URLs below are available

// Interface for typed JWT payload
interface JwtUserPayload extends JwtPayload {
    userId: string;
    role: string;
    email: string;
}

// Base URL to which we forward requests.
// In our case we forward them to the Auth (identity) service.
const authTargetUrl: string = process.env.IDENTITY_SERVICE!;

// 🔥 Quick note: don't use 'localhost' as the target, it can run into name-resolution issues.
// Use the IP instead, e.g. http://127.0.0.1:5000

// plugin to log proxy requests using Pino
const pinoLogPlugin: Plugin = (proxyServer) => {
    proxyServer.on('proxyReq', (proxyReq, req, res) => {
        logger.info(`[PROXY] ${req.method} ${req.url}`);
    });
};

// The Pino plugin above logs which HTTP method we're using (PUT, POST, and so on).
// req.url shows the rewritten URL, a must-have for debugging.

/*
🧠 How this works:

We make a request to our API Gateway, and this proxy forwards your request
to the target URL by rewriting the initial URL into something usable.

Example: 
You make a request to: http://localhost:3000/auth-route/api/auth/login
This gets converted to: http://localhost:5000/api/auth/login
*/

const authServiceProxy: RequestHandler = createProxyMiddleware({
    target: authTargetUrl, // our target url, e.g. http://127.0.0.1:5000
    changeOrigin: true,
    pathRewrite: {
        '^/auth-route': '', // removes /auth-route from the path
    },
    plugins: [pinoLogPlugin],
    on: {
        proxyReq: (proxyReq, req, res) => {
            const fullUrl = `${proxyReq.protocol || 'http:'}//${proxyReq.getHeader('host')}${proxyReq.path || ''}`;
            logger.info(`[PROXY] Rewriting and forwarding to: ${fullUrl}`);
            // this helps us debug as this gives us converted URL
        },
        proxyRes: (proxyRes, req, res) => {
            logger.info(`[PROXY] Response status from target: ${proxyRes.statusCode}`);
            // we print here what response code we get from target service
        }
    }
});

// 2nd config with header forwarding

// base URL for listing service
const listingTargetUrl = process.env.LISTING_SERVICE!;

const listingServiceProxy: RequestHandler = createProxyMiddleware({
    target: listingTargetUrl,
    changeOrigin: true,
    pathRewrite: {
        '^/listing-route': '',
    },
    plugins: [pinoLogPlugin],
    on: {
        proxyReq: (proxyReq, req, res) => {
            const fullUrl = `${proxyReq.protocol || 'http:'}//${proxyReq.getHeader('host')}${proxyReq.path || ''}`;
            logger.info(`[PROXY] Rewriting and forwarding to: ${fullUrl}`);

            const user = (req as any).user as JwtUserPayload | undefined;

            // the auth middleware has already decoded the JWT
            // and attached the user id and role to req.user for us
            logger.info(`user data: ${JSON.stringify(user)}`);

            if (user) {
                proxyReq.setHeader('x-user-id', user.userId);
                proxyReq.setHeader('x-user-role', user.role);
            }

            // here we set the headers before forwarding it
        },
        proxyRes: (proxyRes, req, res) => {
            logger.info(`[PROXY] Response status from target: ${proxyRes.statusCode}`);
        }
    }
});

export { authServiceProxy, listingServiceProxy };

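For reference, both target URLs come from environment variables, so the gateway's .env would hold something like the lines below. Port 5000 matches the Auth Service example above; the Listing Service port is just a placeholder, use whatever yours runs on.

# .env for the API Gateway (example values)
IDENTITY_SERVICE=http://127.0.0.1:5000
LISTING_SERVICE=http://127.0.0.1:5001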

🚦 How it's used in server.ts


import express from "express";
import { authServiceProxy, listingServiceProxy } from "./proxy";
import verifyAccessToken from "./middlewares/auth";
import logger from "./utils/logger";

const app = express();

// Public auth routes go straight through to the Auth Service
app.use("/auth-route", authServiceProxy);

// Auth middleware decodes the JWT and adds user info to req.user
app.use("/listing-route", verifyAccessToken, listingServiceProxy);

app.listen(3000, () => logger.info("API Gateway running on port 3000"));


🔐 Middleware: verifyAccessToken.ts

Because every API gateway needs a bouncer at the door.

For those of you who appreciate seeing the nitty-gritty, or just want to copy-paste (no judgment here, we've all been there! 😉), here's the code for that verifyAccessToken middleware. This bad boy takes the incoming JWT, validates it, and then — if all's good — extracts the user's userId and role and attaches them to the request for downstream services to use.


import jwt, { JwtPayload } from "jsonwebtoken";
import logger from "../utils/logger";
import { Request, Response, NextFunction } from "express";
import dotenv from "dotenv";

dotenv.config();

const JWT_SECRET: string = process.env.JWT_SECRET!;

// Define the shape of our expected user payload in JWT
interface JwtUserPayload extends JwtPayload {
  userId: string;
  role: string;
  email: string;
}

// Extend Express Request to include the decoded user info
interface AuthenticatedRequest extends Request {
  user?: JwtUserPayload;
}

// 🧠 Middleware to verify JWT token from Authorization header
const verifyAccessToken = async (
  req: AuthenticatedRequest,
  res: Response,
  next: NextFunction
) => {
  try {
    const authHeader = req.headers.authorization;

    // Bearer <token>
    const token = authHeader?.split(" ")[1];

    if (!token) {
      logger.warn("Missing auth token in header");
      return res.status(401).json({
        success: false,
        message: "Auth token is missing",
      });
    }

    // Verify the token using our secret key
    const verifiedToken = jwt.verify(token, JWT_SECRET) as JwtUserPayload;

    // Add the decoded user info to the request object
    req.user = verifiedToken;

    next(); // ✅ All good, proceed to next middleware
  } catch (error: any) {
    logger.error("Token verification failed", error);

    return res.status(401).json({
      success: false,
      message:
        error.name === "TokenExpiredError"
          ? "Token has expired"
          : "Invalid token",
    });
  }
};

export default verifyAccessToken;

🧠 Pain Point #3 – Winston made me cry 😩

Setting up logging with Winston was a tedious process. I just couldn’t digest the fact that I had to go through the entire nitty-gritty of configuration just to get basic logs working.

Like most of us, I copy-pasted it from a blog and somehow made it work. But deep down, I knew — I didn’t have the courage to go through all that ever again.

So I did what we all do... I Perplexity’d. I Googled. I Reddit’d. And one name kept coming up again and again: Pino.

So I gave it a shot. And guess what? I fell in love ❤️

✅ Why I Chose Pino

  • Simple: A few lines and you’re done.
  • Fast: One of the fastest loggers out there.
  • Beautiful logs: With pino-pretty, you get color-coded logs that are a joy to read.

But fair warning — in my experience, Pino has:

  • Limited TypeScript IntelliSense support.
  • A few quirky config edges, but nothing too scary.

Still, 90% of us don't even need Pino's advanced features. Keep it simple, log your stuff, move on.


📦 Pino Config Setup

import pino from "pino";

// Define the transport configuration for Pino
const transport: any = pino.transport({
  targets: [
    {
      // Logs saved to a file (for production logs)
      target: 'pino/file',
      level: 'info',
      options: { destination: 'logs/app.log' }
      // 💡 Make sure you create a 'logs' folder in the root of your project,
      // or this will throw an error!
    },
    {
      // Logs pretty-printed to the terminal (for development)
      target: 'pino-pretty',
      level: 'debug',
      options: {
        colorize: true,
        translateTime: 'SYS:standard', // shows readable time
        singleLine: false, // optional: makes logs cleaner
        ignore: 'pid,hostname' // ignores some metadata in terminal output
        // 💡 Make sure to install both `pino` and `pino-pretty`
      }
    }
  ]
});

// Create the logger instance
// (level "debug" so the pino-pretty target above actually receives debug-level logs)
const logger = pino({ level: "debug" }, transport);

export default logger;


📍 Where to Place This

You should place this logger config inside utils/logger.ts, then import it wherever you need logging:


import logger from './utils/logger';

logger.info("Server started successfully 🚀");
logger.error("Something went wrong ❌");


🛠️ Pain Point #4 – Writing microservices is easy, keeping them alive is pain 🧟‍♂️

Let’s be honest — building microservices feels great at first.

You write one service, then another, and suddenly you're an architecture god…

But wait — how do you run RabbitMQ, Redis, Auth Service, Payment Service, API Gateway all at once?


🎯 My Experience (a Windows dev’s curse)

I was using RabbitMQ for messaging and Redis for caching. It all looked good on paper until Redis decided to be like:

“Yeah sorry, I don’t work directly on Windows, figure it out.” 💀

So each time I wanted to test my services:

  • I had to start WSL manually
  • Then do redis-cli
  • Then send a PING just to make sure Redis was alive and not throwing ECONNREFUSED errors all over the place

And don’t get me started on RabbitMQ's web dashboard randomly refusing to open sometimes 😵‍💫


💡 The Solution? One word: Docker 🐳

If you're even thinking about building microservices, learn Docker from Day 1.

Here’s what helped me stay sane:

  • Create a central docker-compose.yml file
  • Spin up all your services using a single command:

docker compose up -d

That's it. Boom — Redis, RabbitMQ, API Gateway, all up and running. No more WSL drama. No more random port issues. No more service-by-service startup.
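
A few more standard Docker Compose commands that kept me sane while debugging (nothing project-specific here):

docker compose ps            # see which containers are running
docker compose logs -f redis # tail the logs of a single service
docker compose down          # stop and remove everything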


✅ What You Should Do

  • ✅ Learn Docker (just enough to write a Dockerfile and docker-compose.yml)
  • ✅ Treat every microservice as a container
  • ✅ Never manually run Redis again
  • ✅ Bonus: Set restart policies to make things auto-heal


Example:

services:
  postgres:
    image: postgres:17.4
    ports:
      - "5433:5432"
    environment:
      POSTGRES_USER: user101
      POSTGRES_PASSWORD: pass101
      POSTGRES_DB: appointment
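    restart: unless-stopped  # restart policy so the container auto-heals if it crashes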
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:latest 
    ports:
      - "6380:6379"
    volumes:
      - redisdata:/data  
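    restart: unless-stopped  # same auto-heal policy for Redis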

volumes:
    pgdata: 
    redisdata:   

🚀 Closing Out: Your API Gateway Playbook

Hey, fellow dev! As someone knee-deep in microservices, I get how wild it can get. Here’s my straight-up plan to ace your API Gateway setup. Let’s roll! 💻

🛠️ 4 Steps to Nail It

  1. Kick Off with Auth Service: Build your Auth Service and test those JWTs till they're rock-solid. No shortcuts! 🧪
  2. Launch Your API Gateway: Route frontend calls (/login, /register) through the Gateway to your Auth Service. Smooth flow, fam! 🌐
  3. Lock Down JWTs
    Drop middleware to verify JWTs and pass user info as headers:

    `'x-user-id': user.userId,
    'x-user-role': user.role`
    
  4. Scale with More Services
    Add services like Payment or Listings with a /ping route. Spot x-user-id and x-user-role in the logs? You're killing it! 🎉 (See the sketch below.)
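
For step 4, here's a rough sketch of what such a /ping route could look like in a downstream service. It assumes an Express service using the Pino logger from earlier; the port and service name are just placeholders.

import express from "express";
import logger from "./utils/logger";

const app = express();

// Smoke-test route: if x-user-id and x-user-role show up in the logs,
// the gateway is forwarding user context correctly.
app.get("/ping", (req, res) => {
  logger.info(`x-user-id: ${req.header("x-user-id")}`);
  logger.info(`x-user-role: ${req.header("x-user-role")}`);
  res.json({ success: true, message: "pong" });
});

app.listen(5001, () => logger.info("Listing service listening on port 5001"));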


🔧 Pro Tips

  • Level Up with Linux CLI: Hit up boot.dev for quick, backend-focused lessons. Command line swagger! 🖥️
  • Docker FTW: Whip up a docker-compose.yml to spin up Redis or PostgreSQL. Use named volumes to keep data safe. Start simple, scale later. 🐳

✌️ Final Vibes

You’ve wrestled the setup beast and won. Keep coding, logging, and shipping! If this sparked something, drop a like or share—it fuels the grind. Go own it! 🔥
