Complete Guide: Installing k3s on Hetzner Cloud
This comprehensive guide walks you through deploying a production-ready k3s cluster on Hetzner Cloud using the automation tool hetzner-k3s.
Table of Contents
- Prerequisites
- Getting Your Hetzner API Token
- Installing Required Tools
- Cluster Configuration
- Creating the Cluster
- Post-Deployment Verification
- Managing Your Cluster
- Troubleshooting
- Additional Resources
Prerequisites
Before starting, ensure you have:
- Hetzner Cloud Account with billing set up
- SSH key pair for server access (see the key-generation sketch after this list if you don't have one yet)
- Local machine with terminal access
- Basic understanding of YAML and cloud infrastructure concepts
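If you do not have an SSH key pair yet, generate one now. A minimal sketch using an ed25519 key, which matches the key paths used in the cluster configuration later in this guide:
# Generate a new ed25519 key pair (accept the default path ~/.ssh/id_ed25519 when prompted)
ssh-keygen -t ed25519 -C "you@example.com"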
Getting Your Hetzner API Token
Step 1: Create a Hetzner Cloud Account
- Visit Hetzner Cloud Console
- Sign up for an account or log in if you already have one
- Complete the verification process and add a payment method
Step 2: Generate an API Token
- In the Hetzner Cloud Console, navigate to your project
- Go to Security → API Tokens in the left sidebar
- Click Generate API Token
- Configure your token:
- Description: Give it a meaningful name (e.g., "k3s-cluster-token")
- Permissions: Select Read & Write (required for cluster creation)
- Click Generate API Token
- Important: Copy and securely store the token immediately - it won't be shown again
Security Best Practices
- Store the token in a secure password manager
- Never commit the token to version control
- Consider using environment variables (see the sketch after this list):
export HETZNER_TOKEN="your_token_here"
- Rotate tokens periodically for enhanced security
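A minimal sketch of working with the token from an environment variable: first a sanity check against the Hetzner API, then templating the token into the config so the real value never lands in version control. The cluster_config.yaml.template file name is just an illustration, not something hetzner-k3s requires:
# Should return JSON describing your servers (an empty list is fine), not an authentication error
curl -s -H "Authorization: Bearer ${HETZNER_TOKEN}" https://api.hetzner.cloud/v1/servers
# Commit a template with the <your token> placeholder and render the real config locally
sed "s|<your token>|${HETZNER_TOKEN}|" cluster_config.yaml.template > cluster_config.yaml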
Installing Required Tools
Installing hetzner-k3s
macOS
Option 1: Homebrew (Recommended)
brew install vitobotta/tap/hetzner_k3s
Option 2: Manual Binary Installation
First, install dependencies:
brew install libevent bdw-gc libyaml pcre gmp
For Apple Silicon (ARM):
wget https://github.com/vitobotta/hetzner-k3s/releases/download/v2.3.4/hetzner-k3s-macos-arm64
chmod +x hetzner-k3s-macos-arm64
sudo mv hetzner-k3s-macos-arm64 /usr/local/bin/hetzner-k3s
For Intel (x86):
wget https://github.com/vitobotta/hetzner-k3s/releases/download/v2.3.4/hetzner-k3s-macos-amd64
chmod +x hetzner-k3s-macos-amd64
sudo mv hetzner-k3s-macos-amd64 /usr/local/bin/hetzner-k3s
Linux
For Fedora/RHEL users - Add this to your .bashrc or .zshrc to avoid OpenSSL issues:
hetzner-k3s() {
OPENSSL_CONF=/dev/null OPENSSL_MODULES=/dev/null command hetzner-k3s "$@"
}
AMD64:
wget https://github.com/vitobotta/hetzner-k3s/releases/download/v2.3.4/hetzner-k3s-linux-amd64
chmod +x hetzner-k3s-linux-amd64
sudo mv hetzner-k3s-linux-amd64 /usr/local/bin/hetzner-k3s
ARM64:
wget https://github.com/vitobotta/hetzner-k3s/releases/download/v2.3.4/hetzner-k3s-linux-arm64
chmod +x hetzner-k3s-linux-arm64
sudo mv hetzner-k3s-linux-arm64 /usr/local/bin/hetzner-k3s
Windows
Use Windows Subsystem for Linux (WSL) and follow the Linux installation instructions.
Docker Alternative
If you prefer not to install locally, you can use Docker to run hetzner-k3s.
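For example, assuming the image's entrypoint is the hetzner-k3s binary (as the Docker create example later in this guide suggests), you can check the version without installing anything:
docker run --rm vitobotta/hetzner-k3s --version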
Verify Installation:
hetzner-k3s --version
Installing Helm
Helm is the package manager for Kubernetes and is essential for installing and managing applications on your k3s cluster.
macOS
Option 1: Homebrew (Recommended)
brew install helm
Option 2: Install Script
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Option 3: Manual Binary Installation
# Download the latest release for your architecture (run only one of these two)
curl -LO https://get.helm.sh/helm-v3.13.0-darwin-amd64.tar.gz   # Intel
curl -LO https://get.helm.sh/helm-v3.13.0-darwin-arm64.tar.gz   # Apple Silicon
# Extract and install
tar -zxvf helm-v3.13.0-darwin-*.tar.gz
sudo mv darwin-*/helm /usr/local/bin/helm
Linux
Option 1: Install Script (Recommended)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Option 2: Package Manager
Ubuntu/Debian:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
CentOS/RHEL/Fedora:
sudo dnf install helm
Snap (Ubuntu):
sudo snap install helm --classic
Option 3: Manual Binary Installation
# Download the latest release for your architecture (run only one of these two)
curl -LO https://get.helm.sh/helm-v3.13.0-linux-amd64.tar.gz   # x86_64 / amd64
curl -LO https://get.helm.sh/helm-v3.13.0-linux-arm64.tar.gz   # ARM64
# Extract and install
tar -zxvf helm-v3.13.0-linux-*.tar.gz
sudo mv linux-*/helm /usr/local/bin/helm
Windows
Option 1: Chocolatey
choco install kubernetes-helm
Option 2: Scoop
scoop install helm
Option 3: Manual Installation
- Download the Windows binary from Helm Releases
- Extract the zip file
- Add helm.exe to your PATH
Verify Helm Installation
helm version
Expected output:
version.BuildInfo{Version:"v3.13.0", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.20.8"}
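Once your cluster is up and running (see the sections below), a typical Helm workflow looks like this. The ingress-nginx chart is used purely as an example and is not required by this guide:
# Add a chart repository, refresh the index, and install a chart into its own namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace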
Installing kubectl
macOS
Homebrew (Recommended):
brew install kubectl
Manual Installation:
curl -LO "https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
For Apple Silicon, replace amd64 with arm64.
Linux
Manual Binary Installation:
curl -LO "https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Snap (Ubuntu):
sudo snap install kubectl --classic
Windows
Chocolatey:
choco install kubernetes-cli
PowerShell:
curl.exe -LO "https://dl.k8s.io/release/$(curl.exe -s https://dl.k8s.io/release/stable.txt)/bin/windows/amd64/kubectl.exe"
Verify Installation:
kubectl version --client
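Optionally, enable shell completion to speed up day-to-day kubectl usage (bash shown here; zsh and fish are also supported by kubectl completion):
# Load kubectl completions in every new bash session
echo 'source <(kubectl completion bash)' >> ~/.bashrc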
Cluster Configuration
Create a configuration file named cluster_config.yaml:
---
hetzner_token: <your token>
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.30.3+k3s1

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "~/.ssh/id_ed25519.pub"
    private_key_path: "~/.ssh/id_ed25519"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api: # this will firewall port 6443 on the nodes
      - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: true
    # hetzner_ips_query_server_url: https://.. # for large clusters, see https://github.com/vitobotta/hetzner-k3s/blob/main/docs/Recommendations.md
    # use_local_firewall: false # for large clusters, see https://github.com/vitobotta/hetzner-k3s/blob/main/docs/Recommendations.md
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: ""
  cni:
    enabled: true
    encryption: false
    mode: flannel
    cilium:
      # Optional: specify a path to a custom values file for Cilium Helm chart
      # When specified, this file will be used instead of the default values
      # helm_values_path: "./cilium-values.yaml"
      # chart_version: "v1.17.2"
  # cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
  # service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
  # cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range

# manifests:
#   cloud_controller_manager_manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.23.0/ccm-networks.yaml"
#   csi_driver_manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.12.0/deploy/kubernetes/hcloud-csi.yml"
#   system_upgrade_controller_deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.14.2/system-upgrade-controller.yaml"
#   system_upgrade_controller_crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.14.2/crd.yaml"
#   cluster_autoscaler_manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"
#   cluster_autoscaler_container_image_tag: "v1.32.0"

datastore:
  mode: etcd # etcd (default) or external
  # external_datastore_endpoint: postgres://....

schedule_workloads_on_masters: false

# image: rocky-9 # optional: default is ubuntu-24.04
# autoscaling_image: 103908130 # optional, defaults to the `image` setting
# snapshot_os: microos # optional: specified the os type when using a custom snapshot

masters_pool:
  instance_type: cpx21
  instance_count: 3 # for HA; you can also create a single master cluster for dev and testing (not recommended for production)
  locations: # You can choose a single location for single master clusters or if you prefer to have all masters in the same location. For regional clusters (which are only available in the eu-central network zone), each master needs to be placed in a separate location.
    - fsn1
    - hel1
    - nbg1

worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 4
  location: hel1
  # image: debian-11
  # labels:
  #   - key: purpose
  #     value: blah
  # taints:
  #   - key: something
  #     value: value1:NoSchedule
- name: medium-autoscaled
  instance_type: cpx31
  location: fsn1
  autoscaling:
    enabled: true
    min_instances: 0
    max_instances: 3

# cluster_autoscaler:
#   scan_interval: "10s" # How often cluster is reevaluated for scale up or down
#   scale_down_delay_after_add: "10m" # How long after scale up that scale down evaluation resumes
#   scale_down_delay_after_delete: "10s" # How long after node deletion that scale down evaluation resumes
#   scale_down_delay_after_failure: "3m" # How long after scale down failure that scale down evaluation resumes
#   max_node_provision_time: "15m" # Maximum time CA waits for node to be provisioned

embedded_registry_mirror:
  enabled: true # Enables fast p2p distribution of container images between nodes for faster pod startup. Check if your k3s version is compatible before enabling this option. You can find more information at https://docs.k3s.io/installation/registry-mirror

protect_against_deletion: true

create_load_balancer_for_the_kubernetes_api: false # Just a heads up: right now, we can’t limit access to the load balancer by IP through the firewall. This feature hasn’t been added by Hetzner yet.

k3s_upgrade_concurrency: 1 # how many nodes to upgrade at the same time

# additional_packages:
#   - somepackage

# additional_pre_k3s_commands:
#   - apt update
#   - apt upgrade -y

# additional_post_k3s_commands:
#   - apt autoremove -y
# For more advanced usage like resizing the root partition for use with Rook Ceph, see [Resizing root partition with additional post k3s commands](./Resizing_root_partition_with_post_create_commands.md)

# kube_api_server_args:
#   - arg1
#   - ...
# kube_scheduler_args:
#   - arg1
#   - ...
# kube_controller_manager_args:
#   - arg1
#   - ...
# kube_cloud_controller_manager_args:
#   - arg1
#   - ...
# kubelet_args:
#   - arg1
#   - ...
# kube_proxy_args:
#   - arg1
#   - ...

# api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.
Configuration Options Explained
Instance Types (Common Options):
- cpx11: 2 vCPUs, 2GB RAM - Development/testing
- cpx21: 3 vCPUs, 4GB RAM - Small production workloads
- cpx31: 4 vCPUs, 8GB RAM - Medium workloads
- cpx41: 8 vCPUs, 16GB RAM - Large workloads
Locations:
- hel1: Helsinki, Finland
- nbg1: Nuremberg, Germany
- fsn1: Falkenstein, Germany
- ash: Ashburn, VA, USA
Security Considerations:
- Replace 0.0.0.0/0 with your specific IP ranges for better security (the sketch after this list shows a quick way to find your public IP)
- Use at least 3 master nodes for high availability
- Consider using different locations for geographic distribution
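A quick way to find the value to use instead of 0.0.0.0/0; ifconfig.me is just one of several public services that echo your IP, so treat this as a sketch:
# Print your current public IPv4 address in CIDR form for the allowed_networks lists
echo "$(curl -s -4 https://ifconfig.me)/32"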
Creating the Cluster
Method 1: Local Installation
hetzner-k3s create --config cluster_config.yaml
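It can be useful to keep a log of the provisioning output for later troubleshooting, for example:
hetzner-k3s create --config cluster_config.yaml | tee create.log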
Method 2: Docker
docker run --rm -it \
  -v "${PWD}:/cluster" \
  -v "${HOME}/.ssh:/tmp/.ssh" \
  vitobotta/hetzner-k3s create \
  --config /cluster/cluster_config.yaml
What Happens During Creation
The tool will automatically:
1. Provision Infrastructure:
   - Create servers (masters and workers)
   - Set up private networking
   - Configure firewall rules
   - Create a load balancer for the Kubernetes API (only if create_load_balancer_for_the_kubernetes_api is enabled in the config)
2. Install K3s:
   - Deploy k3s on master nodes
   - Join worker nodes to the cluster
   - Configure high availability
3. Install Components:
   - Hetzner Cloud Controller Manager
   - Hetzner CSI Driver
   - System Upgrade Controller
   - Cluster Autoscaler
Expected Duration: 5-10 minutes depending on cluster size.
Post-Deployment Verification
1. Check Cluster Status
export KUBECONFIG=./kubeconfig
kubectl get nodes
Expected output (node names and versions will reflect your cluster_name, pool names, and k3s_version settings):
NAME                     STATUS   ROLES                       AGE   VERSION
my-k3s-cluster-master1   Ready    control-plane,etcd,master   5m    v1.30.3+k3s1
my-k3s-cluster-master2   Ready    control-plane,etcd,master   4m    v1.30.3+k3s1
my-k3s-cluster-master3   Ready    control-plane,etcd,master   4m    v1.30.3+k3s1
my-k3s-cluster-worker1   Ready    <none>                      3m    v1.30.3+k3s1
my-k3s-cluster-worker2   Ready    <none>                      3m    v1.30.3+k3s1
my-k3s-cluster-worker3   Ready    <none>                      3m    v1.30.3+k3s1
my-k3s-cluster-worker4   Ready    <none>                      3m    v1.30.3+k3s1
2. Verify System Pods
kubectl get pods -n kube-system
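If you'd rather block until everything in kube-system is ready instead of polling by hand, a minimal sketch using kubectl wait:
# Wait up to 5 minutes for all kube-system pods to report Ready
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s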
3. Test Cluster Functionality
# Create a test deployment
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# Check the service
kubectl get services
4. Access Your Application
The LoadBalancer service will get a public IP from Hetzner. You can access your nginx deployment using this IP.
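Once the service shows an external IP, you can test it and then remove the demo resources; <EXTERNAL-IP> is a placeholder for whatever kubectl get services reports:
# Fetch the default nginx page through the Hetzner load balancer
curl http://<EXTERNAL-IP>
# Clean up the test deployment and service
kubectl delete service nginx
kubectl delete deployment nginx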
Managing Your Cluster
Scaling Worker Nodes
- Edit cluster_config.yaml and increase instance_count for the pool you want to grow:
worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 5 # increased from 4
  location: hel1
- Apply changes:
hetzner-k3s create --config cluster_config.yaml
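You can watch the new workers register as they come up:
# Streams node status updates; press Ctrl-C to stop
kubectl get nodes --watch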
Adding New Worker Pools
worker_node_pools:
- name: pool-small
  instance_type: cpx21
  instance_count: 3
  location: hel1
- name: pool-large
  instance_type: cpx41
  instance_count: 2
  location: nbg1
Upgrading K3s Version
- Update the version in your config to a release newer than the one currently installed (v1.30.3+k3s1 in this guide), for example:
k3s_version: v1.31.2+k3s1
- Apply the upgrade:
hetzner-k3s create --config cluster_config.yaml
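Upgrades are rolled out node by node. A sketch for keeping an eye on progress, assuming the upgrade plans live in the default system-upgrade namespace used by the System Upgrade Controller:
# Inspect the upgrade plans and watch node versions change as nodes are upgraded
kubectl get plans -n system-upgrade
kubectl get nodes --watch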
Cluster Maintenance
Backing Up Configuration:
# Always backup your cluster config and kubeconfig
cp cluster_config.yaml cluster_config.yaml.backup
cp kubeconfig kubeconfig.backup
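For the cluster state itself, k3s can snapshot its embedded etcd datastore on demand. A sketch assuming you can SSH to a master node as root; <master-ip> is a placeholder:
# Trigger an on-demand etcd snapshot (stored under /var/lib/rancher/k3s/server/db/snapshots by default)
ssh root@<master-ip> "k3s etcd-snapshot save"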
Monitoring Cluster Health:
kubectl get nodes
kubectl get pods --all-namespaces
kubectl top nodes # Requires metrics-server
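Recent cluster events often surface scheduling or node problems quickly:
# Show the 20 most recent events across all namespaces
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp | tail -n 20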
Troubleshooting
Common Issues
1. SSH Key Issues
# Make sure you have an SSH key pair and that it matches the paths in cluster_config.yaml
ssh-keygen -t ed25519 -C "your_email@example.com"
ssh-add ~/.ssh/id_ed25519
2. API Token Issues
- Verify the token has Read & Write permissions
- Check if the token is expired
- Ensure no extra spaces in the token
3. Network Issues
# Check if you can reach Hetzner API
curl -H "Authorization: Bearer YOUR_TOKEN" https://api.hetzner.cloud/v1/servers
4. OpenSSL Issues on Linux
export OPENSSL_CONF=/dev/null
export OPENSSL_MODULES=/dev/null
hetzner-k3s create --config cluster_config.yaml
Getting Help
Check Logs:
hetzner-k3s create --config cluster_config.yaml --debug
Cluster Information:
kubectl cluster-info
kubectl describe nodes
Destroying the Cluster
When you're done with your cluster:
hetzner-k3s delete --config cluster_config.yaml
Warning: This will permanently delete all resources and data in your cluster.
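Note that the configuration in this guide sets protect_against_deletion: true, which is intended to block accidental deletion. If the delete is refused, flip the flag and re-run the command; this is an assumption based on the setting shown earlier:
protect_against_deletion: false # temporarily disable so the cluster can be deleted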
Additional Troubleshooting
- If you see Shell command failed: error: write /dev/stdout: bad file descriptor, run the command from a regular terminal rather than PyCharm's integrated terminal.
Additional Resources
- Official Documentation: hetzner-k3s GitHub
- Hetzner Cloud API: docs.hetzner.cloud
- K3s Documentation: docs.k3s.io
- Kubernetes Documentation: kubernetes.io/docs
Quick Reference Commands
| Action | Command |
|---|---|
| Create cluster | hetzner-k3s create --config cluster_config.yaml |
| Destroy cluster | hetzner-k3s delete --config cluster_config.yaml |
| Check nodes | kubectl get nodes |
| Check pods | kubectl get pods --all-namespaces |
| Scale workers | Edit config → re-run create command |
| Get cluster info | kubectl cluster-info |
Need Help? Feel free to ask questions about customizing the configuration, automating deployment with CI/CD, or troubleshooting specific issues!