My AI Pit Crew: Building a Production App in a Single Stop

Ever have an idea for a web app but get bogged down by the setup before you even write a single line of application code? I’ve been there. This time, while catching up on the F1 highlights from last weekend’s race, I decided to try something different. I built a single-page web application that queries a GraphQL API, and I did it almost entirely within the Amazon Kiro IDE, leaning heavily on a concept called spec-driven development. This is the story of how I went from a simple idea to a solid architectural plan in minutes.

From a Simple Chat to a Solid Plan

My journey started with a chat. Not with a colleague, but with Kiro. True to its friendly ghost icon, the AI felt like an invisible helper ready to assist. I had a clear goal in mind:

  • Frontend: Build a Single-Page Application (SPA) hosted on AWS S3 and served via CloudFront.
  • Backend: Create a serverless AWS Lambda function, exposed via an API Gateway endpoint.
  • Data Source: The Lambda function will query an external GraphQL API to fetch data.
  • Integration: The SPA will call the API Gateway endpoint to retrieve and display the data from the backend.

As a firm believer in Infrastructure as Code (IaC), I knew I wanted to define my entire setup in code from day one. I instructed Kiro that my infrastructure should be written using the AWS Cloud Development Kit (CDK) in Python; the overall shape of the stack is sketched below.
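This is a simplified sketch of that stack rather than the code Kiro eventually generated; the construct names and bucket settings are illustrative:

# meetup_dashboard_stack.py (simplified sketch; names are illustrative)
from aws_cdk import (
    Stack,
    RemovalPolicy,
    aws_s3 as s3,
    aws_cloudfront as cloudfront,
    aws_cloudfront_origins as origins,
    aws_lambda as _lambda,
    aws_apigateway as apigw,
)
from constructs import Construct


class MeetupDashboardStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # S3 bucket holding the SPA's static assets
        site_bucket = s3.Bucket(
            self,
            "SiteBucket",
            removal_policy=RemovalPolicy.DESTROY,
            auto_delete_objects=True,
        )

        # CloudFront distribution serving the bucket
        cloudfront.Distribution(
            self,
            "SiteDistribution",
            default_root_object="index.html",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(site_bucket)
            ),
        )

        # Lambda function that will query the external GraphQL API
        api_handler = _lambda.Function(
            self,
            "GraphqlProxyFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="handler.lambda_handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # API Gateway endpoint the SPA calls
        apigw.LambdaRestApi(self, "DashboardApi", handler=api_handler)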

To ensure Kiro had the right context and followed best practices, I configured a few MCP (Model Context Protocol) servers, most importantly the aws-documentation and aws-cdk ones. This gave the AI the background knowledge it needed to generate high-quality, relevant code and architectural suggestions.

Locking in the Spec

The most amazing part was the speed. After about 15 minutes of interactive chat, refining my requirements and explaining the base architecture, Kiro helped me lock in an initial specification for my project. The back-and-forth felt less like programming and more like a focused brainstorming session with a very knowledgeable partner.

This process is at the heart of what Kiro calls spec-driven development. Instead of jumping straight into writing boilerplate code, I focused on defining what I wanted to build. The initial spec, born from our conversation, became the blueprint for the entire project. It laid out the core components, the cloud infrastructure, and the relationship between them, all before I wrote a single line of CDK or front end code myself.

The result of this step was a concrete requirements file that Kiro would use to generate the project structure. My requirements looked something like this:

### Requirement 1

**User Story:** As a developer, I want to create AWS infrastructure using CDK in Python, so that I can deploy a static website with proper cloud distribution.

#### Acceptance Criteria

1. WHEN the CDK stack is synthesized THEN the system SHALL generate CloudFormation templates for S3 and CloudFront resources
2. WHEN the CDK stack is deployed THEN the system SHALL create an S3 bucket configured for static website hosting
3. WHEN the CDK stack is deployed THEN the system SHALL create a CloudFront distribution pointing to the S3 bucket
4. WHEN deploying the infrastructure THEN the system SHALL use the sandpit-1-admin AWS profile from ~/.aws/config

My full requirements.md file at this stage can be found here.
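As a side note, acceptance criteria written in this WHEN/THEN style map almost directly onto CDK assertion tests. Here is a minimal sketch for Requirement 1, assuming the stack class and module name from the earlier sketch:

# tests/test_stack.py (sketch; module path and class name are assumptions)
import aws_cdk as cdk
from aws_cdk.assertions import Template

from meetup_dashboard_stack import MeetupDashboardStack


def test_synth_creates_s3_and_cloudfront():
    app = cdk.App()
    stack = MeetupDashboardStack(app, "TestStack")
    template = Template.from_stack(stack)

    # Criteria 1 and 2: the synthesized template contains an S3 bucket
    template.resource_count_is("AWS::S3::Bucket", 1)

    # Criterion 3: a CloudFront distribution pointing at the bucket
    template.resource_count_is("AWS::CloudFront::Distribution", 1)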

Steering the Agent: Defining the Guardrails

With my initial requirements locked in, the next logical step was to provide Kiro with more specific instructions to guide the next steps. This is where I took some time to define the agent steering files. Think of these as guardrails that ensure the AI produces consistent, high-quality output that aligns perfectly with my technical standards.

I used these files to set some firm ground rules. First, all Infrastructure as Code (IaC) had to be written with the AWS CDK. Second, as a core security measure, every IAM Role needed to follow the principle of least privilege. But looking back, I see a critical gap in my instructions: I never specified how to handle secrets. There was no mention of using a service like AWS Secrets Manager. This omission, as you’ll see, became relevant later in the process.

This is a snippet of my initial structure.md:

### Lambda Functions
- Each Lambda function has its own Python file
- Logging configured at module level
- Error handling with proper HTTP status codes

### Frontend Structure
- Single-page application with vanilla JavaScript
- No external frameworks or libraries
- API calls use modern fetch API with async/await

### Infrastructure as Code
- Single CDK stack in `meetup_dashboard_stack.py`
- Outputs for important resource identifiers
- Proper IAM permissions with least privilege
- Resource tagging for management
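In practice, those Lambda conventions boil down to a handler shaped roughly like this. It is a sketch of the pattern rather than the generated function, and fetch_dashboard_data is a hypothetical helper:

# lambda/handler.py (sketch of the conventions above, not the generated code)
import json
import logging

# Logging configured at module level, per structure.md
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    try:
        logger.info("Handling request for %s", event.get("path"))
        data = fetch_dashboard_data()  # hypothetical helper calling the GraphQL API
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(data),
        }
    except Exception:
        # Error handling with a proper HTTP status code, per structure.md
        logger.exception("Failed to fetch dashboard data")
        return {
            "statusCode": 502,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"error": "Upstream API call failed"}),
        }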

I also created a tech.md file to specify my technology stack and preferences in more detail. In it, I defined:

  • The IaC language would be Python.
  • The preferred versions for key libraries and runtimes.
  • My preference for using Python virtual environments (venv) to manage dependencies cleanly.

This is a snippet of my initial tech.md:

## Dependencies

### Python (requirements.txt)
- `aws-cdk-lib>=2.100.0`: AWS CDK framework
- `boto3>=1.26.0`: AWS SDK for Python
- `pytest>=7.0.0`: Testing framework
- `responses>=0.23.0`: HTTP request mocking for tests

## Common Commands

### Development Setup
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate 

Taking the time to define these steering instructions upfront is a critical step. It ensures that as Kiro starts generating the artifacts, the output will be predictable, secure, and tailored exactly to my project’s needs.

Designing the Blueprint

With the initial requirements spec’d out and the steering files acting as my guardrails, it was time for the exciting part: generating the architectural design.

I prompted Kiro to create the design based on my spec. The first pass was impressive, translating my abstract requirements into a tangible structure. Of course, no automated design is perfect on the first try, so I needed to iterate on it a bit to refine the details and get it exactly how I wanted.

This is where the power of the steering files truly shone. Because I had already defined my core technical standards in tech.md, I didn’t have to repeat myself during the design iterations. There was no need to constantly re-state “use Python for CDK” or “remember least privilege for IAM roles.”

It was like Kiro already knew my preferences. The entire design process became a high-level conversation focused purely on the architecture, not on re-explaining foundational choices. Every subsequent output was already catered to my standards, which saved a massive amount of time and mental energy.

The Action Plan: From Design to Tasks

With the design locked in, I moved to the last step before letting the agent loose on the code: generating a list of tasks. This would serve as the final, granular action plan for building the application.

At this stage, the process was incredibly smooth. I pretty much accepted all the tasks that Kiro generated without much back-and-forth. The earlier work defining the requirements and steering the design meant the proposed tasks were already aligned with my vision.

However, there was one feature here that I have to call out — something the Business Analysts among my readers will absolutely love. Each task Kiro generated included a direct reference back to the specific requirement it was meant to fulfill. This created complete traceability, drawing a clear line from the initial business need all the way down to the individual code generation prompts.

This isn’t just a “nice-to-have”; it’s a powerful tool for project governance. It ensures that every single task has a purpose and that the project scope is tightly controlled. There’s no ambiguity about why a certain piece of infrastructure is being built or a specific function is being written.

At this stage, the implementation plan (task.md) looked something like this:

# Implementation Plan

- [x] 1. Initialize CDK project structure and dependencies
  - Create CDK Python project with proper directory structure
  - Install required CDK libraries (aws-cdk-lib, aws-cdk.aws-s3, aws-cdk.aws-cloudfront)
  - Set up cdk.json configuration file with app entry point
  - Create requirements.txt with CDK dependencies
  - _Requirements: 1.1, 1.2, 3.1, 3.2_

My full task.md file at this stage can be found here.

Lights Out and Away We Go: The Agent Takes Over

With that detailed and fully traceable action plan in hand, I was finally ready. The planning was done. It was time to write some code — or rather, to let the agent write the code.

My role shifted dramatically at this point. My involvement was mostly limited to kicking off the tasks one by one and occasionally granting permission for Kiro to execute shell commands when prompted.

Speaking of permissions, a fantastic enhancement I’ve seen in the latest version of Kiro is the ability to “trust” a command at three different levels. I could trust:

  1. The specific command exactly as written, including all arguments.
  2. Part of the command, like a subcommand.
  3. The entire base command (e.g., cdk or git).

This meant I could progressively give the agent more freedom. By trusting a base command, the next time Kiro wanted to run it, it would proceed without prompting me. This trust level is also configurable in the settings, allowing for a highly efficient and customized workflow.

So, while Kiro was busy generating code, running unit tests, deploying the initial stacks to AWS, and testing functionality, my evening here in Auckland looked a bit different. I was catching up on the highlights of the Belgian Grand Prix, keeping one eye on my laptop screen just to approve a command or two. It was a perfect demonstration of asynchronous work, with the heavy lifting of development happening almost entirely on its own.

Iteration Two: Adding the Backend and Hitting a Wall

Once the initial single-page application, still with its dummy page, was successfully deployed, I started on my next iteration. The plan was to add a backend Lambda function to fetch data from Meetup.com’s GraphQL API. The purpose of this application is to display statistics for all the meetups under a single Meetup.com Pro subscription, providing a true “single pane of glass.”
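At its core, that backend is just an authenticated HTTP POST to a GraphQL endpoint. Here is a minimal sketch using only the standard library; the endpoint URL, query, and field names are illustrative rather than Meetup’s actual schema:

# Sketch of the GraphQL call at the heart of the backend (endpoint, query
# and field names are illustrative; the real Meetup schema differs)
import json
import urllib.request

GRAPHQL_ENDPOINT = "https://api.meetup.com/gql"  # assumed endpoint


QUERY = """
query ($urlname: String!) {
  groupByUrlname(urlname: $urlname) {
    name
    memberships { count }
  }
}
"""


def run_query(urlname: str, access_token: str) -> dict:
    payload = json.dumps({"query": QUERY, "variables": {"urlname": urlname}})
    request = urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {access_token}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())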

I’ll write a different blog post soon about the actual use case behind this and why I decided to spend my evening on such a project. For the moment, let’s move back to Kiro… or can we? Just as I was getting into the flow of the new feature, I started hitting the dreaded throttling.

To be honest, I felt like the throttling situation had improved drastically from the week before, and I was pragmatic enough to understand the context. This is a free offering, and everybody who could get their hands on the public beta was now hammering the underlying Sonnet model as hard as they could. When you’re working with cutting-edge tech in a public beta, you have to expect a few bumps in the road.

The Workaround: Vibe Coding in VS Code

Refusing to let throttling stop my progress, I pivoted. I moved back to my trusty VS Code setup and started what I’d call a “poor man’s agentic workflow” with Amazon Q Developer. It was time for some good old-fashioned vibe coding.

This is where I truly realized the value of spec-driven development. Without carefully crafted system prompts and proper configuration, I found that Q Developer, while powerful, could easily get distracted and sometimes go around in circles. The guardrails were gone, and I was back to manually guiding every step.

By this time, my main battle wasn’t with the AI, but with the intricacies of GraphQL. The poor API documentation from Meetup.com, which was hidden behind an authentication wall, certainly didn’t help either.

Looking back, I see a missed opportunity. I would have been better off spending my time writing an MCP server to get through the authentication and read the API documentation, or even creating an agent to read it directly from my authenticated browser. It’s a powerful lesson in leveraging the right tool for the job.
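For what it’s worth, such a server wouldn’t have to be elaborate. A rough sketch using the FastMCP helper from the Python MCP SDK, with a placeholder URL and auth header since I never actually built it that evening:

# Rough sketch of the documentation-fetching MCP server I wish I had written
# (placeholder URL and auth; not something that exists in the project)
import os
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("meetup-docs")


@mcp.tool()
def fetch_docs_page(path: str) -> str:
    """Fetch one page of the authenticated Meetup API documentation."""
    request = urllib.request.Request(
        f"https://www.meetup.com/api/{path}",  # placeholder URL
        headers={"Cookie": os.environ["MEETUP_DOCS_COOKIE"]},  # placeholder auth
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")


if __name__ == "__main__":
    mcp.run()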

A Stitch in Time: The Secrets Manager Realization

Remember earlier when I mentioned I hadn’t specified how to handle secrets? This is the moment that omission came back to bite me. As Kiro was generating the list of tasks, I reviewed the ones for authenticating with the Meetup.com API and noticed the plan was to store the sensitive API keys directly in Lambda environment variables. So I intervened and explicitly instructed Kiro to use AWS Secrets Manager instead. Sure enough, like a good assistant, it quickly corrected the affected tasks to incorporate the more secure approach.
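The fix itself is small: the CDK stack grants the function read access to a secret (secret.grant_read(api_handler) is the one-liner on the infrastructure side), and at runtime the Lambda fetches it with boto3. A sketch of the runtime side, with an illustrative secret name:

# Sketch of reading the Meetup API credentials from Secrets Manager at runtime
# instead of a Lambda environment variable (the secret name is illustrative)
import json

import boto3

_secrets_client = boto3.client("secretsmanager")


def get_meetup_credentials(secret_id: str = "meetup-dashboard/api-credentials") -> dict:
    # The execution role only needs secretsmanager:GetSecretValue on this one
    # secret, which keeps the least-privilege rule from the steering files intact
    response = _secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])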

But that’s when it clicked. This manual correction was only necessary because of my own oversight. I should have specified the use of Secrets Manager in my agent steering files from the very beginning. It was a perfect, real-world lesson in the value of being thorough in your initial spec and design, proving that the more upfront thinking you do, the smoother the automated process becomes. Software engineering 101, really.

Anyway, with a little bit of persistence and some focused vibe coding late into the evening here in Auckland, I finally got there.

Conclusion: From Idea to MVP in a Few Hours

So, after all that, what did I actually have to show for my evening?

What I built was a genuine, production-quality MVP. And I’m not talking about a fragile script. Thanks to the IaC-first approach and the agent’s adherence to my steering files, this application came with proper security practices, automated tests, and a fully deployed serverless architecture right from the get-go.

But the most incredible part was the sheer speed and efficiency of it all. This was achieved in just a few hours, and for at least half of that time, my main focus was on the F1 highlights. The combination of Kiro’s spec-driven development and a focused agentic workflow handled the heavy lifting.

This journey really proved to me what’s possible with these new tools. Going from a simple idea to a secure, working application in a single evening is a total game-changer.

You can find the full source code for this project on my GitHub repository. Stay tuned for my next post, where I’ll dive into the “why” behind this application.
