I was really excited about this hackathon because it was an offline event and completely focused on prompting and building a web game using AI.
But very quickly, I realized this experience was going to teach me much more than I expected.
The prototype had to be built only using AI Studio or Antigravity, so I’ll share the lessons I learned while vibe coding under real pressure.
Round 1: Getting Hands On
There were two rounds. The first was a demo round where we had to build something using the AI tools and get familiar with the workflow.
Since I had already worked with AI Studio while building my portfolio in the New Year, New Me Dev Challenge, I was excited to try Antigravity, especially because it has a VS Code-like feel.
What immediately stood out in Antigravity was its planning first approach.
The moment your prompt hits, it:
- analyzes the request
- creates a plan
- executes tasks step by step
Even better, I could modify the plan according to my needs. That feature really stood out to me because it felt like AI was finally doing what it’s supposed to do: plan first, execute second.
My First Build (and Early Confidence)
In the first round, I built a game app using Antigravity.
I did hit several roadblocks, but my previous experience with AI Studio helped me move faster. The first iteration was surprisingly good.
- Game sprites were generated using Nano Banana
- Characters also came out quite well
- Initial deployment worked
At that moment, I felt pretty confident. And then… I hit the wall.
Where Things Started Breaking
The more iterations I tried to push through Antigravity, the more issues started appearing.
I consider myself a beginner vibe coder, and one thing I’ve learned is:
The more you work with prompts, the more your prompting style evolves.
So I reset my approach, started fresh, and tried giving clearer prompts.
But under time pressure, hallucinations started creeping in.
The biggest issues I faced were:
- Uploading code from Antigravity to GitHub
- Deploying to Google Cloud Run
- CORS-related problems
- Inconsistent executions
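The CORS errors, for what it's worth, usually come down to the backend not sending the right response headers. Here's a minimal, hypothetical sketch (not the actual code from my prototype, which used a different AI-generated stack) of how a plain Python backend can answer preflight requests and attach the headers, using only the standard library:

```python
# Hypothetical sketch of a CORS fix using only the Python standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "*"  # tighten this to your frontend's origin in production

class CorsHandler(BaseHTTPRequestHandler):
    def _send_cors_headers(self):
        # Without these headers the browser blocks cross-origin fetches,
        # which shows up as the classic "blocked by CORS policy" error.
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        # Preflight request: answer with the CORS headers and an empty body.
        self.send_response(204)
        self._send_cors_headers()
        self.end_headers()

    def do_GET(self):
        self.send_response(200)
        self._send_cors_headers()
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"score": 0}')

def run(port: int = 8080) -> HTTPServer:
    # Cloud Run expects the container to listen on the port given in $PORT.
    return HTTPServer(("", port), CorsHandler)
```

Calling `run(8080).serve_forever()` starts the server. In my experience, a missing `OPTIONS` (preflight) handler is one of the most common causes of these errors in AI-generated backends.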
For some participants, the magic worked smoothly. For me, not so much.
Still, I pushed through, submitted the prototype, and scored 46% overall.
Not great. But very educational.
The Reset Between Rounds
During lunch, instead of stressing, I cooled off and started talking to other developers.
This turned out to be extremely valuable.
I learned:
- how others were structuring prompts
- how they handled hallucinations
- where they were getting blocked
That peer feedback helped me rethink my approach for Round 2.
Round 2: Changing Strategy
For the main round, I made a strategic shift.
Instead of forcing Antigravity, I moved back to AI Studio, mainly because:
- deployment was more predictable
- GitHub integration felt smoother
- I could move faster under time pressure
I also simplified the scope, moving from a 3D game to a 2D game, and refined my prompts more carefully.
This time, the system responded much better.
Submission Attempts and Reality Check
We had four submission attempts.
Attempt 1: 50%
Criteria included:
- Code Quality
- Security
- Efficiency
- Testing
- Accessibility
- Google Services
My weak areas were clearly Security and Google Services.
Since I was relying heavily on AI Studio, I wasn’t fully aware of all the security gaps I might be hitting.
Attempt 2: Still 50%
I thought I had improved the code significantly, but the score didn’t move. That was a reality check.
Attempt 3: 62.67%
This time I changed tactics:
- asked the model to refactor more carefully
- focused on structure
- tested more deliberately
Glitch Hunt
This is the game I developed and submitted: a modern take on the classic Duck Hunt. It's still in the prototype phase.
I still didn’t make the top 10, but the learning curve was massive.
I also reviewed other teams’ web games and noticed a common theme: Everyone was fighting hallucinations and AI limitations in different ways.
Lessons I’m Taking Forward
1. Plan before you vibe code: Don’t jump straight into prompting without thinking through scope and data.
2. Stay flexible with tools: Sometimes switching platforms saves more time than forcing one tool.
3. Prompt clarity improves with iteration: The more precise the prompt, the better the output.
4. AI will hallucinate, so expect it: Save versions frequently and be ready to roll back.
5. Commit and deploy frequently: Version history saved me multiple times.
6. Peer feedback is underrated: Talking to other builders gave me insights I wouldn’t have found alone.
7. Different models excel at different tasks: I used GPT for prompt generation and Gemini for data heavy reasoning.
8. Don’t over iterate blindly: At one point I kept prompting without validating outputs, which created more confusion than progress.
9. AI tools still need developer judgment: Even when output looks correct, manual review is essential.
10. Time pressure exposes prompt quality: Clear prompts saved far more time than clever but vague ones.
This hackathon didn’t just test my ability to build with AI. It tested how clearly I could think under pressure. I’m still learning, still breaking things, and still refining how I work with AI tools.
Vibe coding looks fast from the outside, but in reality, the quality of your prompts, planning discipline, and iteration strategy make a huge difference.
Would love to hear from others: what was the biggest roadblock you faced while vibe coding?