Automated Kubernetes Governance with Kyverno and Slack Alerts
Ravindra Singh

Introduction

As Kubernetes adoption increases in production environments, enforcing security, compliance, and operational consistency becomes crucial. Developers move fast, but clusters need guardrails. This is where Kyverno and Policy Reporter come in.

🏗️ Kyverno Architecture (Explained Simply)
Kyverno works as an Admission Controller inside your Kubernetes cluster.

Whenever you run a command like:

kubectl apply -f deployment.yaml

The Kubernetes API server first checks:

Is the YAML valid?
Is the user authorized?
Are there any policies that should be applied?

This is where Kyverno steps in.

🔁 How it Works (Step-by-Step):

  • The API server sends the resource to Kyverno via an admission webhook.
  • Kyverno checks whether any policy applies to that resource by evaluating it against all policies stored in its Policy Cache.

  • Depending on the policy type, Kyverno can:

1. Validate → Block if not compliant
2. Mutate → Auto-correct the YAML (e.g., add a label)
3. Generate → Create supporting resources (like NetworkPolicy)

  • Kyverno sends a response back to the API server (allow or deny).

  • Kyverno also creates a Policy Report and can send an alert via Slack if integrated with Policy Reporter.

In this blog, we will:

  • Deploy Kyverno and Policy Reporter
  • Write a real policy to protect critical resources
  • Trigger a violation and send a Slack alert

📌 Why Kyverno?
Kyverno is a Kubernetes-native policy engine, which means:

  • No need to learn a new DSL — policies are written in YAML
  • Designed for DevOps and platform engineers, not security specialists only
  • Supports validation (deny non-compliant resources), mutation (auto-fix), generation (auto-create resources), and cleanup (delete resources)

🚨 Why Do You Need Kyverno in Your Kubernetes Cluster?
As organizations scale their Kubernetes usage, maintaining consistency, security, and compliance across teams and environments becomes a serious challenge.

Source Git link: https://github.com/ravindrasinghh/Kubernetes-Playlist/tree/master/Lesson6

🚀 Step 1: Install Kyverno with Helm
Install Kyverno’s admission controller into your Kubernetes cluster using Helm:

helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno --create-namespace -n kyverno -f kyverno-values.yaml  
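Once the chart is installed, it's worth confirming that the Kyverno pods are running and that its admission webhooks are registered. A couple of quick checks (standard kubectl; the exact webhook names vary by Kyverno version):

kubectl get pods -n kyverno
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i kyverno

All Kyverno controllers should be in the Running state before you move on.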

📊 Step 2: Install Policy Reporter with Slack Integration
Install the Policy Reporter UI with Helm to visualize Kyverno policy results and send alerts to Slack:

helm repo add policy-reporter https://kyverno.github.io/policy-reporter
helm repo update
helm install policy-reporter policy-reporter/policy-reporter --create-namespace -n policy-reporter -f kyverno-ui.yaml 
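The kyverno-ui.yaml values file referenced above is in the linked repo. If you are writing your own, a minimal sketch for the Policy Reporter 2.x chart could look like this (the keys below are assumptions based on that chart version, and the Slack webhook URL is a placeholder; verify everything against the chart's values.yaml before using it):

# kyverno-ui.yaml (illustrative sketch, not the repo's exact file)
ui:
  enabled: true                  # deploy the Policy Reporter UI
kyvernoPlugin:
  enabled: true                  # surface Kyverno-specific policy details in the UI
target:
  slack:
    webhook: "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder incoming webhook
    minimumPriority: "warning"   # only forward warnings and failures
    filter:
      policies:
        include: ["prevent-critical-resource-deletion", "require-label"]

The filter section is what the Slack alert step later in this post relies on.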

🔧 Step 3: Install Pre-defined kyverno-policies
Install the set of baseline and restricted policies provided by Kyverno using Helm:

helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno-policies --create-namespace -n kyverno kyverno/kyverno-policies

For more details on the predefined policy sets, visit:
https://github.com/kyverno/kyverno/tree/main/charts/kyverno-policies#kyverno-policies
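After the chart is installed, you can list the policies it created:

kubectl get clusterpolicies

In the chart's default configuration these policies typically run in Audit mode, so they report violations rather than block them.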

Examples of Kyverno in Action
1. Validate Policy — Block containers running as root

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-run-as-non-root
spec:
  validationFailureAction: Enforce
  rules:
    - name: block-root-containers
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Running as root is not allowed."
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true

🧠 Use Case: Prevents insecure containers from being deployed.

✏️ 2. Mutate Policy — Automatically add a label to all Pods

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label
spec:
  rules:
    - name: inject-env-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              env: default

🧠 Use Case: Ensures every Pod has a required label (e.g., for monitoring, billing, or cost allocation).

⚙️ 3. Generate Policy — Auto-create a NetworkPolicy in New Namespaces

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-network-policy
spec:
  rules:
    - name: auto-create-network-policy
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        name: default-deny
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress

🧠 Use Case: Ensure every new namespace starts with a default NetworkPolicy to deny all traffic unless explicitly allowed.

🔐 4. Preventing Accidental Deletion of Critical Resources (Validate Policy)

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: prevent-critical-resource-deletion
spec:
  validationFailureAction: Enforce
  rules:
    - name: block-delete-critical-resources
      match:
        resources:
          kinds:
            - ConfigMap
            - Secret
            - Deployment
            - Service
            - Ingress
            - PersistentVolumeClaim
          namespaces:
            - production
      preconditions:
        all:
          - key: "{{request.operation}}"
            operator: Equals
            value: DELETE
      validate:
        message: "⛔️ Deletion of critical resources is not allowed in production namespace."
        deny: {}

🚀 Let’s Start Implementing These Policies in Our Cluster

Now that we’ve seen practical examples of how Kyverno can validate, mutate, and generate Kubernetes resources, it’s time to apply them in our own environment.

Step 1: Apply the Policies One by One
Use kubectl apply or GitOps practices (e.g., ArgoCD, Flux) to safely introduce each policy into your cluster:

kubectl apply -f validate-run-as-non-root.yaml
kubectl apply -f add-default-label.yaml
kubectl apply -f generate-default-network-policy.yaml
kubectl apply -f prevent-critical-resource-deletion.yaml

💡 Tip: Apply them in a dev/staging cluster first and set validationFailureAction: Audit to test without enforcement.
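If a policy is already applied in Enforce mode, you can switch it to audit mode with a merge patch like the one below (a sketch; substitute the name of the policy you want to relax):

kubectl patch clusterpolicy validate-run-as-non-root --type merge -p '{"spec":{"validationFailureAction":"Audit"}}'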

Let's check how many existing Pods and other resources are passing or failing. Kyverno records the results in PolicyReports:

kubectl get policyreports -A

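If you have the Kyverno CLI installed, you can also test a policy against a manifest before it ever reaches the cluster. A sketch using the policy from earlier and the pod-root.yaml manifest shown in the tests below (see the CLI docs for the full set of flags):

# evaluate the validate policy against the sample root Pod manifest offline
kyverno apply validate-run-as-non-root.yaml --resource pod-root.yaml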

Now let’s test each Kyverno policy to see how it behaves in real-world scenarios.

1. validate-run-as-non-root
kubectl apply -f pod-root.yaml

apiVersion: v1
kind: Pod
metadata:
  name: root-pod
spec:
  containers:
    - name: nginx
      image: nginx
      # No securityContext, so will run as root by default

📛 As expected, Kyverno blocks this Pod because it violates the policy that enforces runAsNonRoot: true.

2. add-default-label
kubectl apply -f pod-no-label.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-no-label
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        runAsNonRoot: true  

🔄 Thanks to the mutate policy, Kyverno automatically injects the label env: default.
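You can confirm the mutation by checking the Pod's labels:

kubectl get pod test-pod-no-label --show-labels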

3. Auto-create a NetworkPolicy in New Namespaces
kubectl create namespace test-np

📦 Kyverno detects the new namespace and creates the default-deny NetworkPolicy in it automatically.
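You can verify that the NetworkPolicy was generated in the new namespace:

kubectl get networkpolicy default-deny -n test-np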

4. Preventing Accidental Deletion of Critical Resources
Here comes an important one! Let’s try deleting a critical resource (like a Deployment or Service) in the production namespace:
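For example, assuming a Deployment named my-app exists in the production namespace (the name is purely illustrative), the delete request should be rejected by the policy:

# my-app is a hypothetical Deployment name used only for this example
kubectl delete deployment my-app -n production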

🔍 Step 2: Monitor Policy Behavior, View Policy Reports and Slack Alerts
Once your Kyverno policies are active, it's important to monitor how they behave and ensure violations don’t go unnoticed. Here's how to track everything using Policy Reporter UI and Slack integration.

📊 1. Access the Policy Reporter Dashboard
Policy Reporter provides a user-friendly dashboard to visualize all Kyverno policy results.

📌 To access it via port-forwarding:

kubectl port-forward service/policy-reporter-ui 8080:8080 -n policy-reporter

🔍 2. Investigate Non-Compliant Workloads
One of the key benefits of using Kyverno with Policy Reporter is the ability to detect and respond to policy violations in real time. Whether it's missing labels, insecure configurations, or critical resource changes—Kyverno helps you enforce best practices, while Policy Reporter ensures you're notified when something goes wrong.

In this section, let’s create a simple policy that requires every Pod to have an app label, and then test what happens when this rule is violated.

🛡️ Example: Require a Label on All Pods
Let’s create a ClusterPolicy that audits any Pod missing the app label.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-label
spec:
  rules:
    - name: check-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        failureAction: Audit
        message: "Labels 'app' is required on every Pod."
        pattern:
          metadata:
            labels:
              app: "?*"


🔧 Apply the policy using:

kubectl apply -f require-label.yaml

🚨 Trigger a Violation to Test the Policy and Alerts
Now, let’s create a Pod that intentionally violates the policy by omitting the app label:

cat <<EOF | kubectl apply -n test -f -
apiVersion: v1
kind: Pod
metadata:
  name: violate-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
EOF
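Before checking Slack, you can confirm that the violation was recorded in a PolicyReport for the namespace:

kubectl get policyreport -n test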

📣 Make sure the policy name is included in the Slack target's policy filter in your Policy Reporter values file (kyverno-ui.yaml in this setup):

policies:
  include: ["prevent-critical-resource-deletion", "require-label"]
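If you add or change this filter after the initial install, roll it out with a Helm upgrade using the same values file from Step 2:

helm upgrade policy-reporter policy-reporter/policy-reporter -n policy-reporter -f kyverno-ui.yaml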

🔐 Policy applied → Violation triggered → PolicyReport generated → Slack alert sent

Success! The Pod was created without the required app label, Kyverno generated a PolicyReport, and the Slack alert was triggered as expected.

Want to generate a report right now? In the Policy Reporter UI, you can generate a Policy Report and view detailed results on a single page, broken down by namespace, policy name, and resource.

💬 Thanks for reading!
If you found this helpful or face any issues while setting it up, feel free to reach out. I’d be happy to help!

👉 Drop a comment, connect on LinkedIn, or open an issue on GitHub if you’re using the same setup.
