Exposing Kubernetes Pods to the Internet with the AWS Load Balancer Controller on EKS Using LoadBalancer Service and Ingress
Chinmay Tonape


In earlier posts, we explored how to:

  • Deploy an Amazon EKS cluster using Terraform community modules and custom resource definitions.
  • Deploy a sample containerized application.
  • Access it locally using kubectl port-forward.

Now it's time to make our application accessible from the internet — in a secure and scalable way. In this post, we’ll use the AWS Load Balancer Controller, an EKS add-on that helps route external traffic into the Kubernetes cluster.

We’ll use Helm to install the controller and expose a simple Node.js application in two ways: through a LoadBalancer-type Service and through an Ingress.

What is the AWS Load Balancer Controller?

The AWS Load Balancer Controller is a Kubernetes controller that:

  • Provisions and manages Elastic Load Balancers (ALBs/NLBs) in AWS.
  • Handles both Kubernetes Ingress resources (with ALB) and Service of type LoadBalancer (with NLB).
  • Integrates tightly with AWS IAM, VPC, and ELBv2 APIs to automate networking for Kubernetes services.

This controller eliminates the need to manually set up and manage AWS Load Balancers, making it a critical component in production-grade EKS clusters.

EKS Architecture Overview

We are using the terraform-aws-eks module to provision the infrastructure. Here's the architecture:

  • Worker Nodes in Private Subnets: Application pods are deployed in private subnets, improving security by preventing direct internet access.
  • NAT Gateway in Public Subnets: Enables outbound traffic (like package installs or container image pulls) from private subnets.
  • ALB (Application Load Balancer): Automatically provisioned by the controller in public subnets to handle internet-facing traffic.

This follows the principle of least privilege networking, combining public-facing endpoints with private compute.
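
Before installing anything, you can sanity-check this layout by listing the subnets in the cluster's VPC along with their tags. A minimal sketch with the AWS CLI; <vpc-id> is a placeholder for your own VPC ID:

# List the VPC's subnets with their availability zones and tags
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,Tags:Tags}" \
  --output table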

Step 1: Create IAM Role using eksctl

Download an IAM policy for the AWS Load Balancer Controller

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.13.3/docs/install/iam_policy.json

Create an IAM policy using the policy downloaded

$ aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerIAMPolicy",
        "PolicyId": "ANPAS34IFJLGGXKAR3NS4",
        "Arn": "arn:aws:iam::197317184204:policy/AWSLoadBalancerControllerIAMPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2025-07-29T08:24:13+00:00",
        "UpdateDate": "2025-07-29T08:24:13+00:00"
    }
}

The AWS Load Balancer Controller requires permissions to manage AWS resources. You can create an IAM service account using eksctl:

$ eksctl create iamserviceaccount \
    --cluster=CT-EKS-Cluster \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --attach-policy-arn=arn:aws:iam::197317184204:policy/AWSLoadBalancerControllerIAMPolicy \
    --region us-east-1 \
    --approve

This creates a CloudFormation stack that provisions the IAM role and the Kubernetes service account:

2025-07-29 13:57:42 [ℹ]  1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2025-07-29 13:57:42 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2025-07-29 13:57:42 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
        create serviceaccount "kube-system/aws-load-balancer-controller",
    } }
2025-07-29 13:57:42 [ℹ]  building iamserviceaccount stack "eksctl-CT-EKS-Cluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-07-29 13:57:42 [ℹ]  deploying stack "eksctl-CT-EKS-Cluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-07-29 13:57:42 [ℹ]  waiting for CloudFormation stack "eksctl-CT-EKS-Cluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-07-29 13:58:13 [ℹ]  waiting for CloudFormation stack "eksctl-CT-EKS-Cluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-07-29 13:58:14 [ℹ]  created serviceaccount "kube-system/aws-load-balancer-controller"

Alternatively, instead of using eksctl, you can create the IAM role yourself and attach the downloaded policy manually.
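
For reference, here is a rough sketch of that manual path using IAM Roles for Service Accounts (IRSA). The role name AWSLoadBalancerControllerRole and the <ACCOUNT_ID>/<OIDC_ID> placeholders are illustrative; substitute the values from your own cluster and account:

# Trust policy allowing the controller's service account to assume the role
# via the cluster's OIDC provider
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/<OIDC_ID>"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.us-east-1.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
      }
    }
  }]
}
EOF

# Create the role with that trust policy and attach the downloaded permissions policy
aws iam create-role \
  --role-name AWSLoadBalancerControllerRole \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
  --role-name AWSLoadBalancerControllerRole \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy

# Create the service account and point it at the role
kubectl create serviceaccount aws-load-balancer-controller -n kube-system
kubectl annotate serviceaccount aws-load-balancer-controller -n kube-system \
  eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/AWSLoadBalancerControllerRole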

Step 2: Install AWS Load Balancer Controller using Helm

Add the Helm chart and install the controller into your cluster:

helm repo add eks https://aws.github.io/eks-charts
helm repo update eks

$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=CT-EKS-Cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=vpc-0c954459647ab5aea

LAST DEPLOYED: Tue Jul 29 14:00:19 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!

Step 3: Verify the Controller Installation

Check if the pod is running in the kube-system namespace:

$ kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-6f7599b4fb-4mgks   1/1     Running   0          21s
aws-load-balancer-controller-6f7599b4fb-jrbjn   1/1     Running   0          21s
aws-node-2hgtt                                  2/2     Running   0          28m
aws-node-tz9r2                                  2/2     Running   0          28m
coredns-6b9575c64c-4pd2r                        1/1     Running   0          35m
coredns-6b9575c64c-fhdb6                        1/1     Running   0          35m
kube-proxy-4vdpw                                1/1     Running   0          28m
kube-proxy-8fpgd                                1/1     Running   0          28m

You should see the aws-load-balancer-controller-* pods (two replicas by default) in the Running state.
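
If the pods are not healthy, the controller's logs are the first place to look, for example:

# Tail the controller logs to confirm startup and AWS API access
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=50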

Step 4: Deploy a Sample Node.js App with LoadBalancer Service

A LoadBalancer-type Service creates a Network Load Balancer (NLB). By default the NLB may be placed in private subnets, so even though it gets a DNS name, it will not be accessible from the internet.

To make the NLB accessible from the internet, it must be placed in public subnets. Make sure the following is set up (example tagging commands follow this list):

  1. Ensure your public subnets are tagged as shown below. This tag tells EKS (and the cloud controller manager) to use them for internet-facing load balancers.

kubernetes.io/role/elb = 1

If your subnets are tagged as below instead, they’ll be used for internal-only load balancers (NLBs/ALBs in private mode).

kubernetes.io/role/internal-elb = 1

  2. Annotate the Service (optional but helpful). If you explicitly want an internet-facing NLB, add this annotation:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing

If you want an internal NLB instead (for internal communication), use:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal

If you don’t specify this, the default behavior is determined by subnet tagging.
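
For reference, these tags can be applied with the AWS CLI. A minimal sketch, assuming placeholder subnet IDs that you replace with your own:

# Tag public subnets for internet-facing load balancers (replace the subnet IDs)
aws ec2 create-tags \
  --resources subnet-<public-1> subnet-<public-2> \
  --tags Key=kubernetes.io/role/elb,Value=1

# Tag private subnets for internal load balancers
aws ec2 create-tags \
  --resources subnet-<private-1> subnet-<private-2> \
  --tags Key=kubernetes.io/role/internal-elb,Value=1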

Create a manifest (simple-nodejs-app-loadbalancer.yaml) for a simple Node.js app and expose it using a LoadBalancer-type service:

---
apiVersion: v1
kind: Namespace
metadata:
  name: simple-nodejs-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: simple-nodejs-app
  name: deployment-nodejs-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nodejs-app
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nodejs-app
    spec:
      containers:
      - image: public.ecr.aws/n4o6g6h8/simple-nodejs-app:latest
        imagePullPolicy: Always
        name: nodejs-app
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  namespace: simple-nodejs-app
  name: service-nodejs-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 8080
  selector:
    app.kubernetes.io/name: nodejs-app

Apply it using:

$ kubectl apply -f simple-nodejs-app-loadbalancer.yaml

namespace/simple-nodejs-app created
deployment.apps/deployment-nodejs-app created
service/service-nodejs-app created

After a few moments, retrieve the external DNS or IP of the service:

$ kubectl get svc -n simple-nodejs-app
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)        AGE
service-nodejs-app   LoadBalancer   172.20.76.207   k8s-simpleno-servicen-f47229fff5-f703af5ffc123c28.elb.us-east-1.amazonaws.com   80:30770/TCP   14m

You should see a hostname under the EXTERNAL-IP column — that’s your public endpoint!
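
Once the NLB finishes provisioning (it may take a minute or two to become active), you can test the endpoint directly, using the hostname from the output above:

# Hit the app through the NLB's public hostname
curl http://k8s-simpleno-servicen-f47229fff5-f703af5ffc123c28.elb.us-east-1.amazonaws.com/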

Step 5: Deploy a Sample Node.js App with Ingress

Create a manifest (simple-nodejs-app-ingress.yaml) for the same Node.js app, this time exposing it through a ClusterIP Service fronted by an Ingress:

---
apiVersion: v1
kind: Namespace
metadata:
  name: simple-nodejs-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: simple-nodejs-app
  name: deployment-nodejs-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nodejs-app
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nodejs-app
    spec:
      containers:
      - image: public.ecr.aws/n4o6g6h8/simple-nodejs-app:latest
        imagePullPolicy: Always
        name: nodejs-app
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  namespace: simple-nodejs-app
  name: service-nodejs-app
spec:
  type: ClusterIP
  ports:
    - port: 80
      name: http
      targetPort: 8080
  selector:
    app.kubernetes.io/name: nodejs-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nodejs-app
  namespace: simple-nodejs-app
  annotations:
    kubernetes.io/ingress.class: alb # deprecated annotation; spec.ingressClassName below supersedes it
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-nodejs-app
                port:
                  number: 80

Apply it using:

$ kubectl apply -f simple-nodejs-app-ingress.yaml 

namespace/simple-nodejs-app created
deployment.apps/deployment-nodejs-app created
service/service-nodejs-app created
ingress.networking.k8s.io/ingress-nodejs-app created

Because the Service type is ClusterIP, kubectl get svc will not show an external IP for the ALB; the ALB is created by the Ingress, so look at the Ingress details instead.

$ kubectl get svc -n simple-nodejs-app
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service-nodejs-app   ClusterIP   172.20.41.2   <none>        80/TCP    3m48s

After a few moments, retrieve the ALB's DNS name from the Ingress:

$ kubectl get ingress -n simple-nodejs-app
NAME                 CLASS   HOSTS   ADDRESS                                                                  PORTS   AGE
ingress-nodejs-app   alb     *       k8s-simpleno-ingressn-6e3501bccd-256234786.us-east-1.elb.amazonaws.com   80      4m50s

You should see a hostname in the ADDRESS column; that's your public endpoint!
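
If the ADDRESS column stays empty, the controller's events usually explain why (for example, missing subnet tags or IAM permissions):

# Inspect Ingress events emitted by the controller
kubectl describe ingress ingress-nodejs-app -n simple-nodejs-app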

🧹 Cleanup
To avoid incurring unnecessary charges, clean up the resources when done:

kubectl delete namespace simple-nodejs-app

helm uninstall aws-load-balancer-controller -n kube-system

eksctl delete iamserviceaccount \
  --name aws-load-balancer-controller \
  --namespace kube-system \
  --cluster CT-EKS-Cluster
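
Deleting the namespace removes the Service and Ingress, which in turn tells the controller to delete the NLB and ALB. You can confirm nothing was left behind, for example:

# Verify that no controller-managed load balancers remain
aws elbv2 describe-load-balancers --region us-east-1 \
  --query "LoadBalancers[].{Name:LoadBalancerName,Scheme:Scheme}" \
  --output table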

Conclusion

In this post, we explored how to expose services in an Amazon EKS cluster using the AWS Load Balancer Controller. This provides a scalable, cloud-native way to expose your applications to the internet in a secure and manageable manner.

The AWS Load Balancer Controller is essential for production-grade workloads in EKS, allowing you to manage traffic routing with flexibility, apply SSL certificates, and configure routing rules via annotations — all through native Kubernetes resources.
