Spending too much time manually configuring servers in your homelab? Wish you could automate setup, updates, and deployments reliably? Ansible is a powerful yet simple, agentless automation tool that can tame your infrastructure, whether it's a personal lab or a production environment.
This article walks through practical examples directly from my latest YouTube video, showing you how to automate Proxmox updates, VM setups, PostgreSQL configuration, and even handle secrets securely with Ansible Vault.
If you'd rather watch than read, check out the companion YouTube video (linked at the end of this article). If not, let's dive in!
Why Ansible for Your Homelab (and Beyond)?
So, why choose Ansible over manual configurations or complex scripts? Here are a few compelling reasons:
Simplicity & Readability
Ansible uses YAML for its playbooks, which is incredibly easy for humans to read and write. You describe the state you want your systems to be in, rather than scripting every single step. This makes your automation understandable and maintainable.
Agentless Architecture
This is a huge advantage! You don't need to install or manage special agent software on your servers (nodes). Ansible communicates primarily over standard SSH for Linux/Unix systems (or WinRM for Windows), using Python on the remote end (which is usually already there). This means less setup, less resource overhead, and a smaller attack surface.
Idempotency: The Automation Superpower
As highlighted in the video, Ansible operations are typically idempotent. This means you can run a playbook multiple times, and it will only make changes if needed to reach the desired state. Running it again won't break things or cause unintended side effects. This is absolutely crucial for reliable and predictable automation – run your updates, and only the nodes that need updating get touched!
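To see what that means in practice, here's a minimal sketch (the path is made up for illustration): the first run creates the directory and reports "changed"; every subsequent run finds the desired state already met and reports "ok" without touching anything.

```yaml
- name: Demonstrate idempotency
  hosts: all
  become: true
  tasks:
    - name: Ensure a directory exists with the right permissions
      ansible.builtin.file:
        path: /opt/example-app   # Hypothetical path for illustration
        state: directory
        mode: "0755"
```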
Power & Flexibility ("Batteries Included")
Don't let the simplicity fool you. Ansible comes with a vast library of built-in modules that handle thousands of common tasks – installing packages (`apt`, `yum`), managing services (`service`, `systemd`), copying files (`copy`), generating configs from templates (`template`), managing users (`user`), interacting with cloud providers, configuring databases (like PostgreSQL!), and much more. You describe what you want done, and the module handles the how.
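As a quick taste of two of these modules, here's a hypothetical pair of tasks (the template path and service name are invented for illustration) that renders a config file from a Jinja2 template and makes sure the service is running:

```yaml
tasks:
  - name: Render nginx config from a Jinja2 template
    ansible.builtin.template:
      src: templates/nginx.conf.j2   # Hypothetical template in your project
      dest: /etc/nginx/nginx.conf
  - name: Ensure nginx is running and enabled at boot
    ansible.builtin.service:
      name: nginx
      state: started
      enabled: true
```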
Getting Started: Prerequisites
Before you start automating, you need a couple of things set up.
1. On Your Control Machine (Where you run Ansible):
- Python 3: Ansible is built with Python. Make sure you have Python 3 installed. You can check with:

```bash
python3 --version
```

- Ansible: Install Ansible itself. Using `pip` (Python's package manager) is often recommended for getting the latest version: `pip3 install ansible`. Alternatively, use your OS package manager (like `apt` or `brew`). Check your installation with:

```bash
ansible --version
```

- (Optional) Ansible VS Code Extension: If you use Visual Studio Code, the official Ansible extension by Red Hat provides excellent syntax highlighting and autocompletion.
- Ansible Collections: Some community collections need to be installed separately, for example:

```bash
ansible-galaxy collection install community.postgresql
```
2. Access to Your Target Nodes (Your Homelab Servers):
- SSH Access: Since Ansible is agentless for Linux/Unix, it relies on SSH to connect and execute commands. You need SSH access from your control machine to your homelab servers.
- SSH Key Authentication (Recommended): For seamless automation, set up SSH key-based authentication so Ansible doesn't need to ask for passwords. The easiest way is to copy your public SSH key (e.g., `~/.ssh/id_ed25519.pub`) to your target nodes using `ssh-copy-id`. Replace `root@<your_homelab_server_ip>` with the appropriate user and IP for each server:

```bash
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@<your_homelab_server_ip>
```
(Note: For Windows targets, Ansible uses WinRM instead of SSH, which requires different setup steps not covered in detail here.)
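With SSH keys in place, you can quickly verify that Ansible can reach your nodes using the ad-hoc `ping` module (which tests SSH login plus remote Python, not ICMP). The inventory path here assumes the project layout described in the next section:

```bash
ansible all -i inventory/hosts.ini -m ansible.builtin.ping
```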
Understanding Your Ansible Project: Structure & Core Concepts
A well-structured Ansible project makes automation easier to manage and scale. Here's a typical layout, incorporating the core concepts:
```
your-ansible-project/
├── ansible.cfg        # Project-specific Ansible configuration
├── inventory/         # Defines WHAT servers Ansible manages
│   └── hosts.ini      # Your inventory file(s)
├── playbooks/         # Contains your automation instructions (playbooks)
│   ├── proxmox.yml
│   ├── postgres.yml
│   └── ...
├── vars/              # Stores variables, including secrets
│   ├── common_vars.yml
│   └── secrets.yml    # Encrypted secrets using Ansible Vault
└── roles/             # (Optional) Reusable units of automation (not covered in detail here)
```
[Image: Simple diagram showing this directory structure]
Let's break down each part:
- `inventory/` (The "What"): This directory holds your Inventory file(s) (often `hosts.ini` or `inventory.yml`). It tells Ansible which servers (nodes) it should manage. You can list servers individually or group them logically (like `[proxmox_nodes]` or `[webservers]`). Here's an example `inventory.ini`:

```ini
[proxmox_nodes]
pve ansible_host=192.168.31.100
srv-1 ansible_host=192.168.31.101
srv-2 ansible_host=192.168.31.110  # The new node

[k3s_nodes]
k3s-master-1 ansible_host=192.168.31.50 ansible_user=k3s ansible_become_password="{{ k3s_password }}"
# ... other nodes ...

[all:vars]
# Global variables can go here
ansible_user=root  # Default user if not specified per host/group
```

Note that connection variables like `ansible_host` and `ansible_user`, as well as group variables (`[all:vars]`), can be defined directly in the inventory.
- `playbooks/` (The "How"): This is where your Playbooks live. Playbooks are written in YAML and define a series of Tasks to be executed on hosts defined in your inventory. Each task typically uses an Ansible Module (the building blocks like `apt`, `service`, `template`, `command`, `postgresql_user`) to perform a specific action. Remember, good modules are idempotent. A basic task structure looks like this:

```yaml
---
- name: Example Play          # Descriptive name for the whole play
  hosts: proxmox_nodes        # Which group(s) from inventory to target
  become: true                # Execute tasks with root privileges (like sudo)
  tasks:
    - name: Update Proxmox    # Descriptive name for this task
      apt:                    # The module being used
        update_cache: yes     # Module parameters
        upgrade: dist
```
- `vars/` (The Data & Secrets): Store your variables here, separating them from your playbooks for better organization. You can have files for common variables (`common_vars.yml`) and, crucially, for secrets (`secrets.yml`). Never commit plain-text passwords! Use Ansible Vault to encrypt sensitive files:

```bash
# Create a new encrypted file
ansible-vault create vars/secrets.yml

# Edit an existing encrypted file
ansible-vault edit vars/secrets.yml

# Run a playbook, prompting for the vault password
ansible-playbook my_playbook.yml --ask-vault-pass

# Or better, use a password file (add vault.pass to .gitignore!)
ansible-playbook my_playbook.yml --vault-password-file=./vault.pass
```

Your `secrets.yml` might contain entries like `db_password: your_secret_password`.
When you run `ansible-vault create` or `ansible-vault edit`, Ansible opens your default text editor (`$EDITOR`). Inside the editor, you just write standard YAML key-value pairs with your secrets, like this:
```yaml
# vars/secrets.yml
# --- This is just plain YAML while you edit! ---
k3s_password: "your_k3s_sudo_password"
app1_db_password: "supersafepassword123"
some_api_key: "xyz789abc123"
some_secret_value: 1234
```
Once you save and close the editor, Ansible encrypts the content. The actual file on disk becomes unreadable ciphertext, which is safe to commit to version control; it can only be decrypted with the password you chose when creating the vault.
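If you'd rather not encrypt a whole file, Ansible Vault can also encrypt a single value that you paste into an otherwise plain-text vars file (the variable name and value here are illustrative):

```bash
ansible-vault encrypt_string 'supersafepassword123' --name 'app1_db_password'
```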
- `ansible.cfg` (The Settings): This optional file defines project-specific Ansible settings, overriding system defaults. Some useful settings:

```ini
[defaults]
inventory = ./inventory/hosts.ini     # Default inventory file location
remote_user = root                    # Default user to connect as
private_key_file = ~/.ssh/id_ed25519  # Default SSH key
host_key_checking = False             # Use with caution! Skips verification of the
                                      # remote host's identity, which exposes you to
                                      # man-in-the-middle attacks. Acceptable in a
                                      # trusted homelab, but avoid it in production.

[privilege_escalation]
become = True             # Run tasks with elevated privileges by default
become_method = sudo      # How to gain privileges
become_user = root        # Which user to become
become_ask_pass = False   # Don't ask for the sudo password (assumes NOPASSWD)
```
Organizing your project this way keeps things tidy and makes your automation easier to understand, maintain, and share.
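With this layout in place, a typical invocation looks like the following (the playbook filename is illustrative, and the `-i` flag can be omitted if `ansible.cfg` already sets the inventory). `--limit` restricts the run to a subset of hosts, and `--check` performs a dry run:

```bash
ansible-playbook -i inventory/hosts.ini playbooks/proxmox.yml
ansible-playbook -i inventory/hosts.ini playbooks/proxmox.yml --limit pve --check
```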
Homelab Automation in Action: Examples
Theory is great, but let's see Ansible solve real homelab problems, based on the examples from the YouTube video.
Before we look at the examples, do me a solid and check out my YouTube channel, Let's Talk Dev. Who knows, you might find those videos helpful and even subscribe!
1. Fixing the Proxmox Subscription Error by Adding the No-Subscription Repositories
- Problem: Dealing with the Proxmox "no subscription" warnings by switching repositories, and keeping all Proxmox nodes consistently updated.
- Solution: Use Ansible's dedicated `apt_repository` module to manage APT sources correctly, and the `apt` module for updates.

Fixing the repositories (using `apt_repository`):
```yaml
# playbooks/proxmox_config.yml (Example Snippet)
---
- name: Fix Proxmox repository errors by adding the no-subscription repository
  hosts: proxmox_nodes
  become: true
  gather_facts: false  # Facts not strictly needed for this part
  # vars_files:        # Only needed if secrets were used here
  #   - ../vars/secrets.yml
  tasks:
    - name: Disable Proxmox enterprise repository
      ansible.builtin.apt_repository:
        repo: "deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise"  # Adjust 'bookworm' for your PVE version
        state: absent
        filename: pve-enterprise  # Explicitly target the correct file
    - name: Add Proxmox no-subscription repository
      ansible.builtin.apt_repository:
        repo: "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription"  # Adjust 'bookworm'
        state: present
        filename: pve-no-subscription  # Use a distinct filename
    - name: Disable Proxmox enterprise ceph repository (if applicable)
      ansible.builtin.apt_repository:
        repo: "deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise"  # Adjust 'quincy'/'bookworm'
        state: absent
        filename: ceph-enterprise  # Explicitly target the correct file
    - name: Add Proxmox no-subscription ceph repository (if applicable)
      ansible.builtin.apt_repository:
        repo: "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription"  # Adjust 'quincy'/'bookworm'
        state: present
        filename: ceph-no-subscription  # Use a distinct filename
    - name: Update apt cache after repo changes
      ansible.builtin.apt:
        update_cache: true
```
Explanation: This automation uses the `ansible.builtin.apt_repository` module, which is the idiomatic way to manage APT repositories. `state: absent` ensures the enterprise repositories are removed from their respective files (specified by `filename`), while `state: present` ensures the no-subscription repositories are added correctly. Finally, the `apt` task with `update_cache: true` refreshes the package list.
2. Keeping Proxmox up to date!
Upgrading Proxmox Nodes (`upgrade-proxmox.yml` logic):
```yaml
# playbooks/proxmox_upgrade.yml (Example Snippet)
---
- name: Upgrade Proxmox Nodes
  hosts: proxmox_nodes
  become: true
  tasks:
    - name: Update apt cache and perform dist-upgrade
      ansible.builtin.apt:
        update_cache: yes   # Good practice before upgrade
        upgrade: dist       # Handles kernel/PVE upgrades correctly
        autoclean: yes      # Clean up downloaded package files
        autoremove: yes     # Remove unused dependencies
    - name: Check if reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required_file
    - name: Reboot the server if required
      ansible.builtin.reboot:
        msg: "Reboot initiated by Ansible due to OS updates"
        connect_timeout: 5
        reboot_timeout: 600     # Wait up to 10 mins for reboot
        pre_reboot_delay: 0
        post_reboot_delay: 30   # Wait 30s after reboot before continuing
        test_command: uptime    # Command to test if the server is back up
      when: reboot_required_file.stat.exists  # Only reboot if the flag file exists
```
Explanation: This automation uses the `apt` module for the upgrade process (`upgrade: dist` is important for Proxmox). Crucially, it checks whether the system flagged a required reboot (`stat` module) and then uses the `reboot` module only when that flag file exists (`when` condition). Idempotency ensures nodes that are already up to date are simply skipped by the `apt` module.
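One refinement worth considering for a clustered setup: by default, Ansible runs against all matched hosts in parallel, which could reboot several Proxmox nodes at once. The play-level `serial` keyword processes hosts in batches; this sketch assumes the same tasks as in the upgrade playbook above:

```yaml
- name: Upgrade Proxmox Nodes one at a time
  hosts: proxmox_nodes
  become: true
  serial: 1   # Finish all tasks (including any reboot) on one node before starting the next
  tasks:
    # ... same update/stat/reboot tasks as above ...
```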
3. Managing VMs Consistently
- Problem: Ensuring all your VMs (e.g., Ubuntu servers running Docker or k3s) have essential tools like Python and specific libraries installed.
- Solution: Target a group of VMs in your inventory and use the `apt` module.

```yaml
# playbooks/vm_setup.yml (Example Snippet)
---
- name: Setup Base Packages on VMs
  hosts: k3s_nodes   # Or a group like [all_vms]
  become: true       # Needs sudo/root privileges
  # Assumes ansible_become_password might be needed (defined in inventory/vault)
  tasks:
    - name: Install Python, Pip, and other required packages
      ansible.builtin.apt:
        name:
          - python3
          - python3-pip
          - python3-psycopg2   # For PostgreSQL interaction later
          - python3-docker     # For Docker interaction later
        state: present
        update_cache: yes
```
Explanation: This playbook targets a specific group (`k3s_nodes`). The `apt` module ensures the listed packages are installed (`state: present`). If a package is already installed, Ansible makes no changes.
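After the playbook runs, a quick ad-hoc command can confirm the interpreter is actually present on every node (the group name matches the inventory example above):

```bash
ansible k3s_nodes -i inventory/hosts.ini -m ansible.builtin.command -a "python3 --version"
```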
4. Automating PostgreSQL Initialization and Configuration
- Problem: Manually creating database users and databases for different applications is repetitive and error-prone.
- Solution: Use service-specific Ansible modules for more complex tasks.

```yaml
# playbooks/postgres_setup.yml (Example Snippet)
---
- name: Configure PostgreSQL Users and Databases
  hosts: database_servers   # Assuming a group for your DB server(s)
  become: true
  become_user: postgres     # Run postgres tasks as the 'postgres' OS user;
                            # alternatively, if postgres runs as root, keep using root
  vars_files:
    - ../vars/secrets.yml   # Load sensitive variables like passwords
  tasks:
    - name: Ensure database user exists for App1
      community.postgresql.postgresql_user:
        name: app1_user
        password: "{{ app1_db_password }}"  # Password from the vault
        state: present   # This module creates the user idempotently
    - name: Ensure database exists for App1
      community.postgresql.postgresql_db:
        name: app1_db
        owner: app1_user
        state: present   # This module creates the database idempotently
```
Explanation: This example uses modules from the `community.postgresql` collection (`postgresql_user`, `postgresql_db`). It runs tasks as the `postgres` system user (`become_user`) and pulls the password from an Ansible Vault encrypted file (`vars_files` and `{{ app1_db_password }}`). These modules are designed to be idempotent.
These examples show how Ansible moves beyond simple commands to reliably manage configurations, updates, and service setups across your homelab infrastructure.
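Before pointing any of these playbooks at real machines, it's worth previewing what would change. `--check` runs in dry-run mode and `--diff` shows line-level changes to managed files (note that some modules can't fully predict their effects in check mode; the filenames here are illustrative):

```bash
ansible-playbook playbooks/postgres_setup.yml --check --diff --vault-password-file=./vault.pass
```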
Conclusion
As we've seen, Ansible is a fantastic tool for bringing order and efficiency to your homelab (and potentially your day job!). By defining your infrastructure as code, you reduce tedious manual work, ensure configurations are consistent across machines, and gain confidence through idempotent operations. The examples here – fixing Proxmox repos, updating nodes, setting up VMs, and configuring PostgreSQL – are just the beginning. Think about initial server setups, Docker container deployments, or keeping k3s clusters up-to-date; Ansible can handle it all.
Remember, the key concept is idempotency: run your playbooks as often as you need, knowing they'll only make changes necessary to reach your desired state.
I hope this gives you a solid starting point! For the full walkthrough and live demos, make sure to check out the companion Ansible Automation YouTube video.
What's the first thing you plan to automate in your homelab using Ansible? Let me know in the comments below!
If you found this article and the video helpful, consider subscribing to Let's Talk Dev on YouTube for more software engineering and DevOps content, and follow me here on Dev.to! Happy automating!
About me
I'm Mihai Farcas, a software architect with over a decade of experience under my belt. I'm passionate about writing code and love sharing knowledge with fellow developers.
My YouTube channel, "Let's Talk Dev," is where I break down complex concepts and share my experiences (both the good and the face-palm moments).
Connect with me:
Website: https://mihai.ltd
YouTube: https://www.youtube.com/@letstalkdev
GitHub: https://github.com/mihailtd
LinkedIn: https://www.linkedin.com/in/mihai-farcas-ltd/