Your laptop can run a full DevOps stack: here’s how I set mine up

1. Intro: DevOps isn’t just for the cloud elite

The internet loves throwing DevOps buzzwords at you:
CI/CD. Observability. Infrastructure as Code.
And right after that they hand you a Terraform template with a $300/month AWS footprint.

But here’s the thing:
You don’t need cloud credits or a company login to learn real DevOps skills.

You can build a complete DevOps lab (Git server, CI/CD, monitoring, deployment workflows) entirely on your laptop.
Yes, seriously.

No fake “local emulation.”
No “learn DevOps in theory.”
I’m talking about actually pushing code, triggering builds, deploying containers, and tracking metrics, all from your machine.

Why did I build one?

Because I was broke.
And impatient.
And I wanted to experiment without waiting for Terraform to provision an EC2 in some forgotten region.


This article walks you through:

  • What tools I used
  • How I glued them together
  • What went wrong
  • And how you can spin up your own self-contained DevOps playground (in a few hours, not days)

Let’s break it down.

2. Why run a DevOps lab locally in the first place?

Before I started building this lab, I asked the same question you’re probably thinking:

“Can’t I just get a free AWS/GCP trial and test things there?”

Sure.
And by the time you set up IAM roles, wait 10 minutes for an EC2 instance, and accidentally forget to delete a bucket, you’ll have spent more time (and money) than you wanted.

Here’s why local wins for learning, testing, and tinkering.

1. zero cost, always

No surprise bills.
No “you’ve exceeded your free tier” emails at 3am.
Just you, your laptop, and the stack you control. Run it all without touching your wallet.

2. deeper understanding of infrastructure

Cloud hides complexity.
Locally, you touch every piece: volumes, networks, ports, reverse proxies, monitoring agents.
You feel what makes systems work.

3. full privacy and control

You don’t need to expose anything online.
No worries about leaking secrets, ports, or random containers running :latest.
It’s all sandboxed in your machine.

4. portability and offline access

Wi-Fi down? No problem.
Your DevOps lab still runs.
You can demo it, show it off, or test pipelines anywhere: cafes, flights, mountains (if you’re into that).

5. experiment, reset, repeat

Break stuff, wipe it, rebuild from scratch.
Your laptop becomes your personal staging environment: fast feedback, no consequences.

If you want to actually understand DevOps (not just follow tutorials blindly), running things locally forces you to learn what matters.

3. Hardware + system requirements

Let’s get this out of the way:
You don’t need a $3,000 MacBook Pro or a liquid-cooled Linux rig named “KubernetesSlayer99” to run a DevOps lab.

But you do need enough juice to avoid crying every time you run docker-compose up.

Here’s what I recommend:

Minimum setup (it works, but keep it light):

  • 8 GB RAM (Docker will eat most of it)
  • Dual-core CPU
  • SSD storage (HDD will slow things to a crawl)
  • 20–30 GB free disk space (logs + volumes add up fast)
  • Linux/macOS or Windows with WSL2

Ideal setup (smooth experience):

  • 16 GB RAM (no swapping, no lag)
  • Quad-core CPU or higher
  • 50+ GB disk space (especially with monitoring and container registry)
  • Docker Desktop or Podman
  • Multipass (for creating clean VMs on-demand)
  • Optional: Use Tailscale to access your lab from any device securely

If your machine checks these boxes, you’re ready to host Git, CI/CD, monitoring, and more, all from your own hardware.

4. Stack overview: what services you’ll run locally

Now that your machine’s ready to hustle, let’s talk about what’s actually going inside this DevOps lab.

This isn’t just Docker Hello World.
We’re running real stuff like you’d see in production, but with fewer meetings.

Here’s the core stack I run on my laptop:

Git server: Gitea or GitLab CE

Host your own Git repositories. Push code, create pull requests, and manage users, all locally.
Gitea is lightweight and perfect for laptops.
If you have RAM to spare, try GitLab CE.

CI/CD tool: Jenkins or Drone CI

Run tests, builds, and deployments, triggering pipelines on every push.
Jenkins is old-school but powerful.
Drone CI is lightweight, Docker-native, and fast.
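For a taste of Drone, here’s a minimal pipeline sketch; it assumes a Node.js repo and that Drone is already wired up to your Gitea, and it lives in the repo root as .drone.yml:

kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: node:20            # assumes a Node.js project; swap for whatever your repo needs
    commands:
      - npm ci
      - npm test

Push to Gitea and Drone picks it up on the next commit.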

Container registry: Harbor or a local Docker registry

Push and pull images without Docker Hub rate limits.
Harbor has a slick UI and RBAC.
Docker Registry UI is minimal and works well.
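If Harbor feels heavy, the plain registry image is one extra service in your compose file. A minimal sketch (the port and volume name are just conventions):

services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry   # keep pushed images across container restarts

volumes:
  registry-data:

Tag images as localhost:5000/myapp:dev and docker push them, no Docker Hub involved.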

Infrastructure automation: Ansible or Bash scripts

Spin up, tear down, and reconfigure, all from code.
Start with Ansible playbooks.
Or write smart Bash scripts (sometimes Bash > YAML).
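To show what that looks like in practice, here’s a rough Ansible sketch (paths and task names are my own, not a standard); it just makes sure the lab directory exists and brings the compose stack up:

- name: Bring up the local DevOps lab
  hosts: localhost
  connection: local
  tasks:
    - name: Ensure the lab directory exists
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/devops-lab"
        state: directory

    - name: Start the compose stack
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: "{{ ansible_env.HOME }}/devops-lab"

Run it with ansible-playbook and you’ve traded a pile of shell history for something repeatable.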

Monitoring & metrics: Prometheus + Grafana

Track container health, uptime, system stats and make pretty dashboards.
Use docker-compose-prometheus-grafana to get started fast.
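You’ll need a prometheus.yml either way (the compose example later in this post mounts one). A minimal sketch that scrapes Prometheus itself plus cAdvisor, assuming you run a cAdvisor container on the same Docker network:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']   # Prometheus scraping itself

  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']    # assumes a container named "cadvisor" on the same network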

Reverse proxy: Nginx or Traefik (optional)

Route everything to local ports via nice .localhost domains.
Bonus: Add local SSL with mkcert to flex on yourself.
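Traefik is the least-friction option with Docker because it configures itself from container labels. A rough compose sketch (the .localhost hostname and service names are assumptions, adjust to taste):

services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik discover your containers

  gitea:
    image: gitea/gitea:latest
    labels:
      - traefik.http.routers.gitea.rule=Host(`gitea.localhost`)
      - traefik.http.services.gitea.loadbalancer.server.port=3000

Most modern browsers resolve *.localhost to 127.0.0.1 on their own, so http://gitea.localhost just works.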

Optional power-ups

  • K3s / kind: run Kubernetes clusters locally
  • Vault: store and rotate secrets securely
  • Consul: service discovery & health checks
  • Terraform: define infra as code, even for local VMs

Summary table:

  Layer                       Tools
  Git server                  Gitea or GitLab CE
  CI/CD                       Jenkins or Drone CI
  Container registry          Harbor or local Docker registry
  Infrastructure automation   Ansible or Bash scripts
  Monitoring & metrics        Prometheus + Grafana
  Reverse proxy (optional)    Nginx or Traefik

This is your DevOps starter stack: everything needed to build, test, deploy, and observe apps, just like the pros… but without the cloud bill.

5. How to spin up everything locally (without losing your mind)

Here’s the deal:
You could go full mad scientist and run 12 separate Docker commands and configure each service by hand.

But we’re smarter than that.

We’re using Docker Compose to orchestrate everything — one config file to rule them all.

Step 1: Set up a project folder

mkdir ~/devops-lab && cd ~/devops-lab

You’ll store all your services, volumes, configs, and compose files here. Keep it tidy.

Step 2: build a docker-compose.yml

Here’s a minimal example with Gitea, Drone CI, and Prometheus to get started:

version: '3.9'

services:
  gitea:
    image: gitea/gitea:latest
    ports:
      - "3000:3000"
    volumes:
      - gitea:/data

  drone:
    image: drone/drone:latest
    ports:
      - "8080:80"
    environment:
      # Point Drone at the Gitea container; the client ID/secret come from
      # an OAuth2 application you create in Gitea's settings.
      - DRONE_GITEA_SERVER=http://gitea:3000
      - DRONE_GITEA_CLIENT_ID=<your-oauth-client-id>
      - DRONE_GITEA_CLIENT_SECRET=<your-oauth-client-secret>
      - DRONE_RPC_SECRET=supersecret
      - DRONE_SERVER_HOST=localhost:8080   # adjust if Gitea can't reach Drone at this address for webhooks
      - DRONE_SERVER_PROTO=http
      # Note: actually executing pipelines also needs a runner container (drone/drone-runner-docker).
    depends_on:
      - gitea

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

volumes:
  gitea:
Tip: break your stack into multiple compose files (e.g. ci.yml, monitoring.yml) and use docker compose -f to run them separately.

Step 3: test each service locally

docker compose up -d

Visit:

  • Gitea: http://localhost:3000
  • Drone: http://localhost:8080
  • Prometheus: http://localhost:9090

Make sure ports don’t clash. If you’re using WSL2 or macOS, use localhost directly. On native Linux, you’re golden.

Bonus: make a Makefile

Save yourself from typing the same commands 50 times a day:

# Recipes must be indented with a tab, not spaces.
.PHONY: up down logs

up:
	docker compose up -d

down:
	docker compose down

logs:
	docker compose logs -f

Bonus #2: use .env files for secrets and ports

Keep sensitive stuff out of your Compose file. Use .env like a grown-up dev:

# .env
DRONE_RPC_SECRET=supersecret
GITEA_PORT=3000
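
Compose reads .env from the project folder automatically and substitutes the variables, so the real values never live in the YAML. A small sketch of the consuming side (the variable names match the .env above):

services:
  gitea:
    image: gitea/gitea:latest
    ports:
      - "${GITEA_PORT}:3000"                     # port comes from .env

  drone:
    image: drone/drone:latest
    environment:
      - DRONE_RPC_SECRET=${DRONE_RPC_SECRET}     # secret comes from .env, not from the YAML

And add .env to .gitignore, obviously.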

This setup gives you a fully functioning mini-DevOps stack in under 10 minutes, and you can build on top of it however you like.

6. Mistakes I made (so you don’t have to)

Let’s be real: I didn’t nail this lab on day one.
I broke stuff. Lost data. Got weird errors with zero Google results.

Here are the dumb-but-teachable moments I hit so you can avoid them like unused Jenkins plugins.

Mistake #1: forgetting to persist volumes

I spun up containers, configured everything… then nuked it all with a simple docker compose down.

💀 Goodbye config. Goodbye Git repos. Goodbye sanity.

Fix: Always use volumes: in Compose, and back up any important /data paths.
Better yet, mount a host directory during testing so you can inspect things locally.
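For example, a host bind mount instead of a named volume (the ./data path is just my convention):

services:
  gitea:
    image: gitea/gitea:latest
    volumes:
      - ./data/gitea:/data   # lives on your disk, survives docker compose down, easy to inspect and back up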

Mistake #2: port collisions

Running Gitea on 3000? So is your local React app.
Now your browser’s crying.

Fix: Use .env files to make ports configurable. If you’re going full chaos-mode, put a reverse proxy like Traefik in front and map services to cool dev URLs like http://ci.localhost (ngrok or Tailscale help when you want to reach the lab from other devices).

Mistake #3: hardcoding everything

I had secrets, tokens, and config values baked into Dockerfiles like I didn’t care about life.

Fix: Use .env files + Docker secrets if you want to pretend to be responsible. At the very least, don’t git commit them. Ever.
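The lightest-weight version of that is an env_file entry per service, so nothing sensitive is baked into the image or the compose file (the filename here is made up; pick your own and .gitignore it):

services:
  drone:
    image: drone/drone:latest
    env_file:
      - .env.drone   # hypothetical file holding DRONE_RPC_SECRET and friends; never committed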

Mistake #4: running too much at once

I got greedy and ran GitLab, Jenkins, Prometheus, and a Minikube cluster all at once on 8GB RAM.
Laptop fans: preparing for takeoff

Fix: Start small. Run 2–3 services max while developing. Add the rest when you need them.
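Compose profiles make this easy without juggling separate files; tag the heavier services and they stay off unless you ask (the profile name is mine):

services:
  gitea:
    image: gitea/gitea:latest        # always starts

  prometheus:
    image: prom/prometheus:latest
    profiles: ["monitoring"]         # only starts with: docker compose --profile monitoring up -d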

Mistake #5: skipping health checks

Some services silently fail or hang (especially CI).
You don’t want to spend hours debugging a job that never ran because your container was stuck in restart.

Fix: Use Docker healthcheck: and a basic status page (like cAdvisor) to see what’s alive.
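A healthcheck is only a few lines of compose. Sketch for Gitea, assuming a recent version (which exposes /api/healthz) and using busybox wget since it ships in the Alpine-based image:

services:
  gitea:
    image: gitea/gitea:latest
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/healthz"]   # non-zero exit marks the container unhealthy
      interval: 30s
      timeout: 5s
      retries: 3

Then docker compose ps shows (healthy) instead of leaving you guessing.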

Every mistake helped shape a more reliable lab.
Now it runs on autopilot, and I’ve built CI/CD pipelines faster than most cloud bootcamps would have taught me to.


7. What I learned from building a local DevOps lab

Honestly, I started this as a weekend project.
I thought I’d just spin up a Git server, maybe a CI tool, mess around for a bit, and delete it all on Monday.

But here’s what I didn’t expect:

I learned more about networking than in any Udemy course

Reverse proxies, port bindings, volume mounts: I stopped copying configs and started understanding them.

I broke stuff and got better at fixing it

Most of my learning came from recovering broken builds, missing configs, and misbehaving containers.
Every mistake made me more confident with Docker and Linux in general.

I stopped being scared of YAML

Seriously.
When you stare at 300 lines of docker-compose.yml and make it work, cloud config files stop looking like encrypted manuscripts.

I finally understood what “infrastructure as code” actually means

Not just defining services, but thinking about repeatability, modularity, backup, and rebuilds.
Your lab becomes your infra playground but with purpose.

And I got faster at real-world DevOps tasks

Want to test Git hooks, build pipelines, or monitor logs like a pro?
Do it in your own local stack, where no one’s watching and nothing costs $0.005 per minute.

This wasn’t just “practice”; it turned into a safe, powerful sandbox to grow in.

8. Conclusion: don’t wait for cloud access to build DevOps skills

You don’t need AWS credits.
You don’t need a $100 Udemy course.
You don’t need to beg your manager for a sandbox account.

If you’ve got:

  • A laptop
  • Some free time
  • A bit of disk space

Then congratulations: you’ve got everything you need to become dangerous in DevOps.

Running a local DevOps lab forces you to think like an engineer:

  • How do systems talk to each other?
  • What happens when things go down?
  • How do you automate recovery, deployment, and monitoring, all without duct tape?

This isn’t fake learning.
This is real experience, minus the cloud bill and waiting time.

You can try things. Break them. Rebuild them.
All on your terms, on your machine, at your own pace.

So yeah, my stack might not be globally distributed.
But it taught me more than a thousand cloud dashboards ever could.

Helpful resources

  1. Docker Mastery by Bret Fisher (YouTube): Learn Docker from scratch and understand best practices for local setups. https://www.youtube.com/c/BretFisherDockerDockerDocker
  2. Play with Docker (free playground): A browser-based Docker lab you can experiment with before installing locally. https://labs.play-with-docker.com/
  3. Awesome-Selfhosted (GitHub list): Massive curated list of self-hostable apps, including DevOps tools. https://github.com/awesome-selfhosted/awesome-selfhosted
  4. cAdvisor for container monitoring: Real-time stats for your Docker containers. https://github.com/google/cadvisor
  5. Gitea docs (lightweight GitHub alternative): Perfect Git server for running on a local dev machine. https://docs.gitea.io/
  6. DevOps Toolchain Guide (Atlassian): Great overview of how different tools fit into the DevOps lifecycle. https://www.atlassian.com/devops/devops-tools
