From scattered YAML files to a fully traceable GitOps pipeline — here’s how I used Kro to build a cleaner, more maintainable deployment process.
📘 This is Part 2 of the “Building a Real and Traceable GitOps Architecture” series.
👉 Part 1: Why Argo CD Wasn't Enough
👉 Part 4: GitOps Promotion with Kargo: Image Tag → Git Commit → Argo Sync
👉 Part 6: How I Scaled My GitOps Promotion Flow into a Maintainable Architecture
🧩 Why I Started with an RGD
In my previous article, I shared how managing multiple YAML files became a real burden. Each time I updated an image tag, I had to patch three different manifests — Deployment, Service, and ConfigMap — just to reflect a simple change.
So I started asking:
What if I could update a single file and have all dependent resources update automatically?
That’s when I found Kro — a declarative GitOps engine that lets me define a service using one ResourceGraphDefinition (RGD) and a matching `instance.yaml`. From there, it automatically generates and applies all necessary Kubernetes resources.
This post walks through how I implemented that setup, including the actual YAML structure, the pitfalls I hit, and how I connected it with Argo CD and Kargo to build a fully automated GitOps flow.
🧠 What’s Kro, and Why RGD?
Kro is a lightweight GitOps templating engine. It’s designed to:
- Render Kubernetes resources from a template + instance
- Work declaratively with Git as the source of truth
- Cleanly separate schema, templates, and values
It sounds similar to Helm, but here’s how it differs:
- No templating syntax
- No chart packaging or release abstraction
- No values.yaml spaghetti
Instead, Kro is more transparent and tightly aligned with GitOps principles.
At the heart of it is the ResourceGraphDefinition (RGD). Without this file, Kro does nothing. It’s the blueprint that defines which resources are generated and how values flow into them.
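To make that shape concrete before diving into my setup: an RGD ties the schema and the resource list together in one file. Here is a minimal sketch based on Kro’s `kro.run/v1alpha1` API — the kind name `FrontendAppV2` and the field contents are just placeholders for what the later sections fill in:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: frontend-app
spec:
  # The schema declares the API that instances of this graph will use.
  schema:
    apiVersion: v1alpha1
    kind: FrontendAppV2
    spec:
      name: string | default=frontend
  # Each entry under resources renders one Kubernetes object.
  resources:
    - id: deploy
      template:
        # ...a full resource template goes here
```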
🛠 My First RGD: Starting Simple
I decided to start small — a simple frontend web service.
It only needed three resources:
- `ConfigMap` (for `API_HTTP_URL` and `TIME_ZONE`)
- `Deployment` (for image and replica count)
- `Service` (to expose a port)
Here’s the schema I wrote for it:
```yaml
spec:
  name: string | default=frontend
  namespace: string | default=develop
  values:
    configMap:
      data:
        API_HTTP_URL: string
        TIME_ZONE: string | default="XXX/XXX"
    deployment:
      image: string
      tag: string
      replicas: integer | default=1
    service:
      port: integer | default=3000
      targetPort: integer | default=3000
```
This schema acts as a contract: every value that an instance provides must follow this structure. It’s simple, explicit, and human-readable.
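Read as a contract, only the fields without a `default` have to appear in an instance. Assuming omitted fields pick up their declared defaults, a hypothetical minimal `spec` could be as small as this (`myorg/frontend` is a placeholder image name):

```yaml
spec:
  values:
    configMap:
      data:
        API_HTTP_URL: https://example.com/api  # required: no default
        # TIME_ZONE omitted → "XXX/XXX"
    deployment:
      image: myorg/frontend  # required
      tag: "1.0.0"           # required
      # replicas omitted → 1
    # service omitted → port/targetPort default to 3000
```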
📄 Template: How Schema Connects to Resources
With the schema in place, I needed to define what it generates.
In Kro, templates are added under the `resources:` section. Each one has a unique `id`, which Kro uses for change tracking.
Here’s an excerpt from my `Deployment` template:
```yaml
- id: deploy
  template:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ${schema.spec.name}
      namespace: ${schema.spec.namespace}
    spec:
      replicas: ${schema.spec.values.deployment.replicas}
      template:
        spec:
          containers:
            - image: ${schema.spec.values.deployment.image}:${schema.spec.values.deployment.tag}
```
No Helm syntax, no conditional logic — just clean variable references.
This is what I liked most about Kro: the schema-template-instance structure is clear and composable, without any magic.
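The other two resources follow the same pattern. Here’s how the `ConfigMap` and `Service` entries could look under the same schema — a sketch reusing the variable paths from the excerpt above, not my production files verbatim:

```yaml
- id: config
  template:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ${schema.spec.name}-config
      namespace: ${schema.spec.namespace}
    data:
      API_HTTP_URL: ${schema.spec.values.configMap.data.API_HTTP_URL}
      TIME_ZONE: ${schema.spec.values.configMap.data.TIME_ZONE}
- id: svc
  template:
    apiVersion: v1
    kind: Service
    metadata:
      name: ${schema.spec.name}
      namespace: ${schema.spec.namespace}
    spec:
      selector:
        app: ${schema.spec.name}
      ports:
        - port: ${schema.spec.values.service.port}
          targetPort: ${schema.spec.values.service.targetPort}
```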
📦 My `instance.yaml`: The Missing Piece
The schema and template define what can be deployed. But Kro won’t do anything until you provide values via an instance.
Here’s what my `instance.yaml` looked like:
```yaml
apiVersion: kro.run/v1alpha1
kind: FrontendAppV2
metadata:
  name: wsp-frontend
  namespace: develop
spec:
  name: wsp-frontend
  namespace: develop
  values:
    configMap:
      data:
        API_HTTP_URL: https://example.com/api
        TIME_ZONE: XXX/XXX
    deployment:
      image: <username>/<your-project>
      tag: "1.0.1"
      replicas: 1
    service:
      port: 3000
      targetPort: 3000
```
I defined my schema under the `FrontendAppV2` API name, so instances can use that kind and Kro knows how to match them.
I store this file in Git (under `develop/app/`) and sync it using Argo CD.
This way, I can declaratively define the state of my service through Git alone — Kro and Argo CD take care of the rest.
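For reference, the Argo CD side of this is an ordinary `Application` pointing at that Git path. A sketch — the repo URL and project are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wsp-frontend-develop
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<username>/<gitops-repo>.git
    targetRevision: main
    path: develop/app
  destination:
    server: https://kubernetes.default.svc
    namespace: develop
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```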
🔁 Full Automation: From Tag → Git → Kro Apply
Here’s how I fully automated the flow using Kargo:
- Push a new Docker image to the registry
- Kargo detects it via a `Warehouse` and creates a `Freight`
- A `Stage` triggers a `yaml-update` step that modifies `instance.yaml`
- Commit + push to Git
- Argo CD detects the change and syncs
- Kro sees the updated instance and renders new resources

The key part is the `yaml-update` step in Kargo:
```yaml
- key: spec.values.deployment.tag
  value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
```
Each change to the image tag automatically flows into Git, then into Kro, and finally into the cluster.
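For context, that `yaml-update` line lives inside a Kargo promotion step sequence. The surrounding steps could look roughly like this — a sketch of the clone → update → commit → push pattern; exact step config varies by Kargo version, and the repo URL is a placeholder:

```yaml
steps:
  - uses: git-clone
    config:
      repoURL: https://github.com/<username>/<gitops-repo>.git
      checkout:
        - branch: main
          path: ./repo
  - uses: yaml-update
    config:
      path: ./repo/develop/app/instance.yaml
      updates:
        - key: spec.values.deployment.tag
          value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
  - uses: git-commit
    config:
      path: ./repo
      message: "chore: bump frontend image tag"
  - uses: git-push
    config:
      path: ./repo
```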
💥 Pitfalls I Ran Into (and How I Fixed Them)
Here are some real-world issues I hit:
1️⃣ Resource didn’t apply, but no error
I had a Service template written correctly — but nothing showed up in the cluster.
Turns out the schema was missing a `type`, so the value failed to render and Kro silently skipped the whole resource.
2️⃣ Tag value caused type mismatch
My Kargo `yaml-update` wrote the tag as a number (1.0.1 → 1), and Kro rejected it.
Fix: wrap the tag in `quote()` to force it into a string.
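The YAML side of that fix is easy to see in isolation: an unquoted value that looks numeric gets parsed as a number, while `quote()` guarantees it lands in the file as a string:

```yaml
# Without quote(): YAML parses a numeric-looking tag as a number
tag: 1.0        # the float 1.0, not the string "1.0"

# With quote(): always a string, which is what the schema expects
tag: "1.0.1"
```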
3️⃣ Kro skipped update due to unchanged generation
Kro uses generation and delta logic. If the rendered output is identical, it won’t re-apply.
The log says:
```
Skipping update due to unchanged generation
```
4️⃣ Debugging requires watching the logs
Kro doesn’t show much in the UI. I rely on controller logs to confirm updates:
```
Found deltas for resource
Skipping update due to unchanged generation
```
🧭 Where Kro Fits in My GitOps Architecture
Kro is now the template engine of my GitOps setup.
It’s not just a Helm alternative. It enables me to:
- Separate structure (`schema`)
- Abstract resource definitions (`template`)
- Provide values through Git (`instance.yaml`)
With Argo CD syncing and Kargo promoting, I now have a full GitOps chain that’s clean, traceable, and reproducible:
Docker tag → Git commit → Argo CD sync → Kro apply
Each deployment is versioned and explainable — no more “mystery state” in the cluster.
🔎 Bonus: My Environment Setup
Currently, I’m using this setup in the `develop` namespace.
Each environment (dev, staging, prod) gets its own `instance.yaml` and Argo CD Application.
For `production`, I plan to use separate Git paths and isolate sync targets.
More on that in the next article.
🔜 Coming Next: Designing a Clean GitOps Repo Structure
In the next part, I’ll show how I organize:
- Git repo layout (per service + environment)
- ApplicationSet management
- How Kro, Argo CD, and Kargo all connect together
If you’re building your own GitOps setup, I hope this post saved you some time — and helped demystify how Kro works behind the scenes.