If you’re running Kubernetes in production, you already know that performance bottlenecks are the stuff of nightmares.
One of the most underrated, impactful changes you can make is switching your cluster’s networking from IPTables to IPVS (IP Virtual Server) mode.
But here’s the challenge: AWS EKS doesn’t support IPVS out of the box. So what do we do? We roll up our sleeves and take the alternative route. In this article, I’ll walk you through exactly how to do it.
TL;DR
- If your k8s cluster is handling more than 500–1,000 Service objects, it’s highly recommended to switch kube-proxy from iptables to IPVS mode (see the quick check below).
- In fact, IPVS should almost always be your default choice over iptables mode — it’s faster, more efficient, and built to scale better with large service counts.
And when I say “almost always,” I mean it! If you have a compelling reason to stick with iptables instead of IPVS, drop a comment below; I’d genuinely love to learn from your experience and the issues you faced.
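A quick way to see where you stand (this just assumes kubectl is pointed at your cluster):
# Count Service objects across all namespaces
kubectl get svc --all-namespaces --no-headers | wc -l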
🚀 The Need for Speed: Why IPVS Over IPTables?
IPTables handles every packet one by one, scanning a long list of rules like a librarian looking for the right page in an old book.
IPVS, on the other hand, is like a high-speed toll plaza with dedicated lanes and an automated system — it’s built for scale and speed.
In tech terms:
- iptables is rule-based: every extra Service adds rules that have to be scanned sequentially, so it slows down as the cluster grows.
- IPVS is connection-based, highly performant, and supports better load-balancing algorithms.
🧱 The AWS EKS Catch: Why You Can’t Enable IPVS Natively
AWS EKS manages much of the underlying infrastructure, including the base AMI used for your worker nodes. The default Amazon EKS-optimized AMI neither installs ipvsadm nor loads the IPVS kernel modules, so there’s no switch you can simply flip to enable IPVS.
To get around this limitation, we’ll have to:
- Create a custom EC2 launch template or modify the node user data (we’ll use user data).
- Enable IPVS modules using cloud-init.
- Tweak the kube-proxy deployment and config.
Let’s go step by step.
🛠️ Our Workaround: High-Level Steps
- Create a launch template that installs IPVS on boot.
- Deploy nodes using this custom template.
- Edit the kube-proxy DaemonSet to use IPVS.
- Modify the kube-proxy ConfigMap.
- Verify and validate IPVS is running.
🔧 Step 1: Create a Launch Template With IPVS Support
📦 Modifying the EC2 Cloud-Init User Data
The magic starts with the user-data script in your launch template. Paste the following Bash snippet:
#!/bin/bash
# Install the IPVS userspace admin tool
sudo yum install -y ipvsadm
# Load the kernel modules IPVS needs
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
# Older kernels use nf_conntrack_ipv4; on newer kernels it is just nf_conntrack
sudo modprobe nf_conntrack_ipv4 || sudo modprobe nf_conntrack
# Sanity check: the (empty for now) IPVS table should be reachable
sudo ipvsadm -l
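One gotcha: if you attach this through a launch template to a managed node group that uses the default EKS-optimized AMI, the user data has to be wrapped in a MIME multi-part archive so EKS can merge it with its own bootstrap data. Roughly like this:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# ... the IPVS script from above ...

--==BOUNDARY==--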
🧩 Why These Kernel Modules Matter
Each modprobe line loads a kernel module essential for IPVS operations:
- ip_vs - The core IPVS module.
- ip_vs_rr, ip_vs_wrr, ip_vs_sh - Different load-balancing strategies.
- nf_conntrack_ipv4 (nf_conntrack on newer kernels) - Enables connection tracking.
Without these, IPVS is like a car with no engine.
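These modules also need to survive reboots (more on that in the pitfalls section below). A small addition to the same user-data script takes care of it, assuming the AMI loads modules via systemd-modules-load, which Amazon Linux 2 does:
# Persist the IPVS modules so they are loaded again on every boot
cat <<'EOF' | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Note: list nf_conntrack_ipv4 instead on older kernels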
🚀 Step 2: Deploy Nodes Using the Custom Launch Template
Attach your launch template to your EKS node group:
- Go to the EKS Console or use eksctl.
- Use your custom AMI or the default AMI + user-data.
Make sure the nodes spin up successfully and the modules are loaded (you can SSH in and verify with lsmod | grep ip_vs).
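If you manage node groups with eksctl, the launch template can be referenced straight from the ClusterConfig. Here’s a minimal sketch; the cluster name, node group name, and template ID are placeholders:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster           # placeholder
  region: us-east-1          # placeholder
managedNodeGroups:
  - name: ipvs-nodes         # placeholder
    launchTemplate:
      id: lt-0123456789abcdef0   # your launch template ID
      version: "1"
Create the node group from this file (for example, eksctl create nodegroup -f cluster.yaml) and the nodes will boot with the IPVS user data.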
🧠 Step 3: Edit the kube-proxy DaemonSet
By default, kube-proxy runs in iptables mode. Time to change that.
Run:
kubectl -n kube-system edit ds kube-proxy
Change:
containers:
- command:
  - kube-proxy
  - --v=2
  - --config=/var/lib/kube-proxy-config/config
To:
containers:
- command:
  - kube-proxy
  - --v=2
  - --proxy-mode=ipvs
  - --ipvs-scheduler=rr
  - --config=/var/lib/kube-proxy-config/config
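If you’d rather script this than edit interactively, the same change can be applied with a JSON patch. This sketch assumes kube-proxy is the first (and only) container in the pod spec, which is the case on a stock EKS DaemonSet:
kubectl -n kube-system patch daemonset kube-proxy --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--proxy-mode=ipvs"},
  {"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--ipvs-scheduler=rr"}
]'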
🌍 Adding IPVS Environment Variable
While you’re in the editor, also add this environment variable to the kube-proxy container:
env:
- name: KUBE_PROXY_MODE
  value: ipvs
For clarity, here are the combined DaemonSet changes in one place:
containers:
- command:
  - kube-proxy
  - --v=2
  - --proxy-mode=ipvs
  - --ipvs-scheduler=rr
  - --config=/var/lib/kube-proxy-config/config
  env:
  - name: KUBE_PROXY_MODE
    value: ipvs
🧾 Step 4: Tweak the Kube-Proxy ConfigMap
This step is crucial: it’s what makes IPVS mode stick in the kube-proxy configuration. Be careful while making changes here.
Run:
kubectl -n kube-system edit cm kube-proxy-config
🔑 The Key Sections to Modify
Inside the config block of the ConfigMap, set the proxy mode and the IPVS scheduler:
ipvs:
  scheduler: "rr"
mode: "ipvs"
Complete example:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"
🔄 Load Balancing Algorithms You Can Use
Choose your poison:
- rr: Round Robin
- lc: Least Connection
- dh: Destination Hashing
- sh: Source Hashing
- sed: Shortest Expected Delay
- nq: Never Queue
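For example, to use least-connection balancing instead of round robin, the only change to the config above would be the scheduler field (and the --ipvs-scheduler flag on the DaemonSet, if you set it there too):
ipvs:
  scheduler: "lc"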
Whichever scheduler you pick, restart the kube-proxy DaemonSet after changing the config:
kubectl rollout restart -n kube-system daemonset kube-proxy
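And watch the rollout finish before moving on:
kubectl -n kube-system rollout status daemonset kube-proxy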
⚠️ Common Pitfalls to Avoid
- Don’t forget to restart the kube-proxy pods after editing the config.
- If you build a custom AMI, keep it up to date with security patches.
- Ensure IPVS modules load successfully on each reboot.
- Watch out for conflicts between kube-proxy and CNI plugins.
🧪 How to Verify IPVS Is Working as Expected
🔍 Using ipvsadm to Inspect Rules
SSH into one of your worker nodes and run:
sudo ipvsadm -Ln
You should see a list of virtual services and their destinations. If the IPVS table is empty, something’s wrong; go back and check your kube-proxy config.
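You can also cross-check from the Kubernetes side. kube-proxy reports its active mode on its metrics port (10249 by default) and logs it at startup; both checks below assume the stock EKS labels and ports:
# On a worker node: ask kube-proxy which proxy mode it is running
curl -s http://localhost:10249/proxyMode
# Expected output: ipvs

# Or check the kube-proxy logs from your workstation
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100 | grep -i ipvs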
⚖️ Pros and Cons: Should You Go This Route?
Pros:
- High-performance packet routing
- Better load-balancing strategies
- Scales better under pressure
Cons:
- More complex to set up properly
- Needs custom launch templates
- Maintenance overhead
🔮 Future-Proofing: Will AWS Support Native IPVS?
Maybe one day. Currently, AWS does not support a native way to enable IPVS on EKS nodes. But if enough people start requesting it (hint: open a feature request), who knows?
✅ Conclusion: Is the Work-Around Helpful?
So, should you bother?
If you’re running high-throughput, latency-sensitive workloads, then yes, switching to IPVS can make a night-and-day difference. It’s not just a tweak; it’s a performance strategy.
It takes some work, sure. But so does anything worth doing in tech.
❓ FAQ: Your Burning Questions Answered
1. Can I enable IPVS mode without custom launch templates?
No, not currently. The default EKS-optimized AMIs don’t load the IPVS kernel modules or install ipvsadm out of the box.
2. Will enabling IPVS affect my existing workloads?
Mostly no. As long as kube-proxy is configured correctly the switch is close to seamless, though long-lived connections may be briefly reset while the proxy rules are rebuilt.
3. Is this setup compatible with all CNI plugins?
Most major CNIs like Calico and Cilium support IPVS, but check the documentation for version-specific notes.
4. Can I automate this setup?
Yes! You can bake the user-data script into a Terraform or CloudFormation setup.
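As a rough sketch with the plain AWS CLI (the template name and file path are placeholders, and the base64 flags differ between Linux and macOS):
# Encode the user-data script and create a launch template from it
aws ec2 create-launch-template \
  --launch-template-name eks-ipvs-nodes \
  --launch-template-data "{\"UserData\": \"$(base64 -w0 userdata.sh)\"}"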
5. What’s the performance gain from switching to IPVS?
Mileage varies, but because IPVS looks up connections in hash tables instead of walking a linear rule list, its per-packet cost stays roughly constant as the Service count grows, and it significantly outperforms iptables in high-load scenarios.
Thank you so much for reading the article till the end! 🙌🏻 Your time and interest truly mean a lot. 😁📃
If you have any questions or thoughts about this blog, feel free to connect with me:
🔗 LinkedIn: Ravi Kyada
🐦 Twitter: @ravijkyada
Until next time, ✌🏻 Cheers to more learning and discovery! 🇮🇳 🚀