🛠️ My Hands-On Dive into GitLab CI/CD
Instead of just learning CI/CD concepts in theory, I decided to put them into practice. I built two real pipelines for real apps — one with a Go backend, another with Node.js — using GitLab CI/CD.
🔧 What I Tried to Do
I worked on two projects to understand GitLab CI/CD in action:
1: Built a 3-stage pipeline (Build → Test → Deploy) for a React frontend and Go backend app. This helped me automate the entire flow from code to container.
2: Set up and used a self-hosted GitLab Runner to run jobs on my own system. It gave me hands-on experience with job execution and custom runner configurations.
Project 1 – Full Stack App Pipeline
Work done on gitlab_link
For this pipeline setup, I used an existing open-source project from GitHub. It's a full-stack application with a React frontend and a Go backend — a great combo to test out CI/CD workflows. The project simulates a basic web app architecture, making it perfect for understanding how continuous integration and deployment work across both the frontend and backend. By using this real-world project, I was able to apply CI/CD concepts practically and troubleshoot challenges that commonly appear in production environments. The project I referred to: Github_link
🐞 Issues I Faced
At first, I wrote the CI job script assuming it would just execute like a normal shell environment. But I didn't know that GitLab's SaaS runners use Docker images to execute each job. That meant the runner didn't know how to run my commands — it had no idea what environment or dependencies were needed.
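In other words, each job has to declare the environment it needs. Here's a minimal sketch of what that looks like (the image name and commands are illustrative, not from the actual project):

```yaml
# Each job runs inside the container image you declare.
# Without an `image:`, the runner falls back to its default,
# which may not have your toolchain at all.
test:
  image: node:20-alpine   # the environment the job's commands run in
  script:
    - npm ci              # dependencies must be installed inside the job
    - npm test
```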
I wanted to use `docker build` inside my pipeline job, but it kept failing. What I didn't realize is that you can't directly use Docker inside a container unless you set it up properly. GitLab SaaS runners run jobs inside containers, and to use Docker commands there, you need a setup called Docker-in-Docker (DinD).
What I Learned
To fix this, I learned about something called Docker-in-Docker (DinD).
- DinD allows Docker to run inside a Docker container by connecting to a separate Docker daemon.
- By enabling `privileged: true` on the runner and using the official `docker:latest` image along with the `docker:dind` service, I was able to run Docker commands inside my pipeline.
Here's a snippet from the working `.gitlab-ci.yml`:
```yaml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""

build:                      # job name abbreviated here; full file is in the screenshot below
  before_script:
    - docker info           # verifies the DinD daemon is reachable
  script:
    - docker compose build
    - docker compose up -d
```
Below is a screenshot of the `.gitlab-ci.yml` file:
Project 2 – Using Custom GitLab Runners
Work done on Gitlab_link
For my second project, I wanted to explore how GitLab CI/CD can work across multiple environments by using two custom GitLab runners. I set up one runner on my local machine (Runner dev) and another on an AWS EC2 instance (Runner aws). The goal was to simulate a real-world DevOps workflow in which the application is built locally, the resulting image is stored as a GitLab artifact, and the app is then deployed remotely.
On Runner dev (my local system), I built the Docker image of the Node.js application and stored it as a build artifact. GitLab CI/CD allowed me to pass this artifact between jobs. In the next stage, the job assigned to Runner aws (on AWS) picked up the artifact and deployed the app by running the container on the EC2 instance.
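A sketch of that two-runner flow (runner tags, job names, and file names here are my assumptions — the actual pipeline is shown in the screenshots below). The trick is that a Docker image isn't a file, so the build job has to `docker save` it to a tarball before GitLab can carry it to the AWS job as an artifact:

```yaml
build:
  stage: build
  tags: [dev]                       # routes this job to the local "Runner dev"
  script:
    - docker build -t todo-app:latest .
    - docker save todo-app:latest -o todo-app.tar   # image exported as a file
  artifacts:
    paths:
      - todo-app.tar
    expire_in: 1 hour

deploy:
  stage: deploy
  tags: [aws]                       # routes this job to "Runner aws" on EC2
  script:
    - docker load -i todo-app.tar   # restore the image from the artifact
    - docker run -d -p 80:3000 --name todo-app todo-app:latest
```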
Issue I Faced
One of the most confusing problems I ran into was that even after modifying my application code and rebuilding the Docker image on Runner dev (my local machine), those changes weren’t showing up when I deployed the app on Runner aws (EC2). I thought deleting the old Docker image and rebuilding it would ensure the latest version got deployed, but somehow the updates weren’t reflected.
✅ How I Solved It
After a lot of trial and error, I realized something important: Docker Compose generates images with names like `<gitlab_folder_name>_<service_name>` by default.
Initially, I was deleting the container using the name I gave in `docker-compose.yml` (via `container_name`), but that doesn't remove the actual image. So even though I ran:

```
docker-compose build --no-cache
```

…the image still didn't reflect my recent code changes — because Docker was still using the existing image with the same name.
The solution? I listed the images with `docker images`, found the correct one (named like `todo_app_from_tws_web`), and deleted it using:

```
docker image rm <name_of_the_image>
```
Then I rebuilt the image and re-ran the pipeline — and this time, the new changes were reflected correctly on the deployment side (AWS runner).
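One way to avoid this class of stale-image bug altogether is to tag every build with a unique identifier, such as the commit SHA that GitLab exposes as `$CI_COMMIT_SHORT_SHA` — with a fresh tag per commit, Docker can never silently reuse an old image. A sketch of the idea (job and image names are illustrative, not from my pipeline):

```yaml
variables:
  IMAGE_TAG: todo-app:$CI_COMMIT_SHORT_SHA   # unique image tag per commit

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .           # an old image can't hide behind a new tag

deploy:
  stage: deploy
  script:
    - docker rm -f todo-app || true          # replace any previously running container
    - docker run -d --name todo-app $IMAGE_TAG
```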
`.gitlab-ci.yml` file of the Todo app
Stages of Pipeline
Artifacts of the project
🟢 And finally... it’s alive!
Resources That Helped Me Along the Way
I didn't figure it all out on my own - these resources were super helpful during the process:
- GitLab docs - from errors to execution!
- YouTube tutorials - to visually understand the concepts.
- Google - for quick searches.
If you're starting out, I highly recommend using these - they make learning much easier!
🙌 Wrapping Up
Working with GitLab CI/CD pipelines across two different setups taught me more than just YAML and jobs — it showed how real DevOps systems behave in action. From understanding how GitLab SaaS runners execute jobs, to handling image build issues across custom self-hosted runners, this hands-on experience gave me clarity on what goes on behind automated deployments.
💬 Over to You
Have you worked with GitLab CI/CD pipelines or tried setting up your own runners? Faced any weird issues like Docker not reflecting changes or environment configs acting up?
I’d love to hear how you tackled them — or even if you're just starting out, feel free to reach out!
Let’s connect and grow together in this DevOps journey:
Connect on LinkedIn
Say hi on X (Twitter)