Introduction
Starting off with a 5-year-old notebook from my apprenticeship that I hadn't used since getting my M2 Air, I thought of putting Ubuntu Server on it and seeing where the path would lead me. Over the following weeks, I immersed myself in the whole Homelab world and enjoyed every bit of it. I then took the whole thing as a chance to learn more about servers, networking, and containerization to grow my career in a new field.
Setup
I first started off with a Raspberry Pi 3 and 4, a 15-year-old HP notebook, and, as mentioned, my Lenovo notebook, and managed to put them into a K3s cluster. But honestly, after some testing, I found that the old HP was running way too hot and that both Raspberry Pis weren't contributing much.
So I removed everything and turned just my Lenovo notebook into a single-node Docker server. Everything is hidden in the living room cabinet where most of the networking already was. That all results in:
| Lenovo Yoga 530-14IKB | External Drives | Networking |
| --- | --- | --- |
| Ubuntu 24.04.2 LTS, Intel i7-8550U, 16 GB DDR4 RAM, 256 GB SSD, Intel UHD Graphics 620 | WD 3 TB HDD (media), WD 1 TB HDD (cloud) | Sunrise Connect Box 3, Ubiquiti Flex Mini 2.5G, Ubiquiti network cable, LogiLink USB-A/-C to 2.5G Ethernet |
Remote Access (VPN)
For accessing my server from outside and anywhere, I use Tailscale. This service creates a VPN between all your desired devices. It offers much better security and less risk than port forwarding or exposing services to the whole internet. It runs fantastically, and even for free!
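One way to run Tailscale next to the rest of the stack is as its own container; this is only a sketch (I actually install it however fits the host best, and the auth key and hostname here are placeholders):

```yaml
# docker-compose.yml sketch for a Tailscale node (settings are illustrative)
services:
  tailscale:
    image: tailscale/tailscale
    hostname: homelab                    # name this node gets in the tailnet
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}         # auth key from the admin console, kept in .env
      - TS_STATE_DIR=/var/lib/tailscale  # persist login state across restarts
    volumes:
      - ./state:/var/lib/tailscale
    devices:
      - /dev/net/tun                     # tunnel device for the VPN
    cap_add:
      - NET_ADMIN                        # required to configure the tunnel
    restart: unless-stopped
```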
Server and Containerization
Since I'm not using Kubernetes/K3s to manage the orchestration, I simply use Docker Compose files, organized in folders and tracked with Git. Secrets and credentials are stored securely in .env files so nothing is hardcoded.
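As a minimal sketch of that layout, each folder holds a compose file next to an untracked .env file (service and variable names here are placeholders):

```yaml
# docker-compose.yml — secrets come from the .env file in the same folder,
# which is listed in .gitignore so it never lands in the repository
services:
  app:
    image: nginx:alpine      # placeholder image
    env_file: .env           # loads e.g. API_KEY=... without hardcoding it
    restart: unless-stopped
```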
For maintenance, I use Portainer. The whole Docker management and Ubuntu itself run on the internal SSD for the fastest speed possible, ensuring no other services interfere.
For updates, I'm using a container called Watchtower that updates my Docker images automatically, together with unattended-upgrades to update system security packages.
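A minimal Watchtower service looks roughly like this (the schedule here is an assumption, not my exact settings):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Watchtower inspect and restart containers
    environment:
      - WATCHTOWER_CLEANUP=true          # remove old images after updating
      - WATCHTOWER_SCHEDULE=0 0 4 * * *  # check daily at 04:00 (6-field cron)
    restart: unless-stopped
```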
The current architecture (the diagram may not always reflect the latest changes):
Media Server
The core reason for my server: the media server. For that I'm using the *arr stack, which includes multiple containers handling everything. In the end it's a fully automated workflow where I only search for content on Jellyseerr and have it ready after 5 minutes in the Jellyfin app.
- Bazarr - subtitles
- Prowlarr - indexer
- Sonarr - TV shows
- Radarr - movies
- SABnzbd - download client, routed through
- gluetun - WireGuard VPN tunnel
- Jellyfin - media player
- Jellyseerr - media discovery
- Profilarr - quality profile manager
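The "routed through" part works by attaching the download client to gluetun's network namespace, so all of its traffic exits through the VPN tunnel. A sketch (provider credentials and most settings omitted):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                # needed to create the WireGuard tunnel
    environment:
      - VPN_TYPE=wireguard
      # provider-specific settings and keys go here (or in the .env file)
    ports:
      - "8080:8080"              # SABnzbd's UI must be published via gluetun

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    network_mode: "service:gluetun"  # all SABnzbd traffic goes through the VPN
```

If gluetun goes down, SABnzbd loses connectivity entirely, which is exactly the fail-safe behavior you want here.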
I recommend checking out Trash Guides for more details about the *arr stack.
For the whole media server setup, I've dedicated a 3 TB HDD, which is enough for now. Later, I can upgrade using JBOD bays.
Cloud Server
With my remaining 1 TB HDD, I thought of making a locally hosted cloud for all my devices to sync school and private documents, as well as images if needed. For that, I'm currently using Nextcloud, which has mostly been running well.
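A stripped-down sketch of such a Nextcloud service (the host port and mount path are placeholders, and a real setup would add a database container):

```yaml
services:
  nextcloud:
    image: nextcloud:apache
    ports:
      - "8081:80"                      # hypothetical host port
    volumes:
      - /mnt/cloud:/var/www/html/data  # the 1 TB HDD mounted for user data
    restart: unless-stopped
```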
Networking
I'm paying for a 10 Gbit/s down and 100 Mbit/s up internet plan, of which I can only utilize 2.5 Gbit/s down (and the full 100 Mbit/s up) because I'm on HFC rather than fiber.
From the ISP router, I run a 2.5 Gbit/s WAN uplink into the UniFi switch to get more than one 2.5 Gbit/s port. Together with the UniFi controller container, the switch provides a nice overview and easy management of the network.
An average 10 GB movie takes about half a minute to download if all components run at full speed. Because I'm using external USB hard drives, the time until I can watch the movie is a bit longer, since moving the file from the incomplete to the complete downloads folder takes time. To solve that problem, I see two options:
- Use an external USB SSD, which provides much faster speeds but costs much more to scale as the library grows.
- Use a JBOD bay with 7200 RPM SATA hard drives, which cost less than SSDs but also have their limitations.
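The "half a minute" figure is simple arithmetic: transfer time is file size in gigabytes times eight (bits per byte) divided by link speed in gigabits per second. A quick sketch:

```python
# Back-of-the-envelope download time over a saturated link.
def download_seconds(size_gb: float, link_gbits: float) -> float:
    """Seconds to transfer size_gb gigabytes over a link_gbits Gbit/s link."""
    return size_gb * 8 / link_gbits

# A 10 GB movie over the 2.5 Gbit/s downlink:
print(download_seconds(10, 2.5))  # 32.0 seconds, i.e. about half a minute
```

In practice the USB HDD's write speed, not the link, is often the bottleneck.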
I will most likely go with a JBOD bay for the media server. RAID would also be interesting to check out, but I don't need redundancy: if a drive fails, my *arr stack will automatically re-download the missing content. USB drives (HDD or SSD) won't be scalable anyway, since the laptop's USB ports deliver just enough power for one drive each. So I have to get a JBOD bay with its own external power supply.
For the cloud server, I might buy a fast and solid 2 to 4 TB external SSD to give me the speed I need and that my network can utilize. The current method of using the 1 TB HDD is not ideal, considering drive failure risk and the fact that I don't have backups.
Development
I still haven’t had the time to set up my DevOps & CI/CD pipeline environment for my own projects.
Currently, I’m running a simple dev stack in Docker:
- code-server - VS Code in browser
- it-tools - handy dev toolkit in the browser
Monitoring
I like keeping an eye on everything, both what’s running and what’s not.
Scrutiny monitors my HDD/SSD for temperatures, errors, and more, with a clean web UI.
Uptime Kuma pings my most important services (via /health endpoints where possible). If something is down, I immediately get a mobile notification.
NetAlertX is something fun I’m experimenting with — it provides network alerts when a device connects to the network for the first time. Whether it’s expected or possibly an intruder, I get a full notification with hostname and device info. Feels like a "network intruder alert".
AI & Automation
n8n
I've recently added a new stack to my server for AI and automation. First, I set up a self-hosted n8n container. With it, I can build various automated workflows.
To test things out, I followed an example workflow to try some basic automation:
Ollama
Out of curiosity, I wondered how much I could achieve with the computing power I have. So I tried hosting a local AI model using Ollama.
I pulled the Mistral-7B model and added the Open WebUI container for a ChatGPT-like interface.
It's working, though the model is a bit slow and limited due to my hardware. But for basic private questions or integrating into my n8n workflows, it's perfect.
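A rough compose sketch of that pairing (the host port mapping is a placeholder; after starting, the model is pulled with `docker exec ollama ollama pull mistral`):

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama     # model files live here (Mistral-7B quantized is roughly 4 GB)
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"              # hypothetical host port for the chat UI
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # point the UI at the Ollama API
    restart: unless-stopped

volumes:
  ollama:
```

Without a usable GPU, inference falls back to the CPU, which matches the "a bit slow" experience on this hardware.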
Here’s a screenshot where I asked it to review this blog:
Conclusion and Learnings
In this relatively short time, I've learned a lot: setting up an Ubuntu server, managing drives and partitions, using Kubernetes and Docker for containers, writing Docker Compose files, Docker networking, volumes, exposing ports with internal:external mappings, configuring switches, and building a media server with the *arr stack that runs seamlessly across all my devices.
I also created my own cloud, set up a VPN, hosted my own local LLM, and built a monitoring system with mobile alerts.
Networking basics and troubleshooting permissions and path mappings between drives and containers were the most challenging parts of this journey.
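Most of those permission headaches come down to which user ID a container writes files as. The linuxserver.io images used in the *arr stack accept PUID/PGID variables so container and host agree on ownership; a sketch (IDs, timezone, and mount path are examples, not my exact values):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=1000        # match the host user that owns the media drive (check with `id -u`)
      - PGID=1000        # match the owning group (check with `id -g`)
      - TZ=Europe/Zurich
    volumes:
      - /mnt/media:/data # one consistent path on both sides keeps *arr path mappings simple
```

Mounting the same host folder at the same container path across Sonarr, Radarr, and the download client also lets completed downloads be moved instead of copied.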
My most used resources: the official documentation of the tools I used, ChatGPT (came in clutch many times), and YouTube, especially when setting up Profilarr.
Future upgrades
For the next upgrades I will try to achieve the following:
- Integrate AdGuard DNS
- Add reverse proxies for better URL handling
- Use dynv6 for dynamic DNS
- Expand storage with more HDDs/SSDs in a JBOD bay
- Set up full DevOps workflows and pipelines
- Add password management
- Host my personal web services and websites
If you have questions, feedback, or ideas for improvement, feel free to leave a comment! I’m always happy to chat and learn from others.