Vibe Coding Reality Check
Konark Sharma @konark_13

About: I am a software engineer who finds fun and creativity in frontend. I would love to be part of a team where I can help, develop, learn from new people, and add my knowledge to the project.

Joined:
Mar 10, 2025


Publish Date: Feb 26

I was really excited about this hackathon because it was an offline event and completely focused on prompting and building a web game using AI.

But very quickly, I realized this experience was going to teach me much more than I expected.

The prototype had to be built only using AI Studio or Antigravity, so I’ll share the lessons I learned while vibe coding under real pressure.

Round 1: Getting Hands On

There were two rounds. The first was a demo round where we had to build something using the AI tools and get familiar with the workflow.

Since I had already worked with AI Studio while building my portfolio in the New Year, New Me Dev Challenge, I was excited to try Antigravity, especially because it has a VS Code-like feel.

What immediately stood out in Antigravity was its planning-first approach.

The moment your prompt hits, it:

  • analyzes the request
  • creates a plan
  • executes tasks step by step

Even better, I could modify the plan according to my needs. That feature really stood out to me because it felt like AI was finally doing what it’s supposed to do: plan first, execute second.

My First Build (and Early Confidence)

In the first round, I built a game app using Antigravity.

I did hit several roadblocks, but my previous experience with AI Studio helped me move faster. The first iteration was surprisingly good.

  • Game sprites were generated using Nano Banana
  • Characters also came out quite well
  • Initial deployment worked

At that moment, I felt pretty confident. And then… I hit the wall.

Where Things Started Breaking

The more iterations I tried to push through Antigravity, the more issues started appearing.

I consider myself a beginner vibe coder, and one thing I’ve learned is:

The more you work with prompts, the more your prompting style evolves.

So I reset my approach, started fresh, and tried giving clearer prompts.

But under time pressure, hallucinations started creeping in.

The biggest issues I faced were:

  • Uploading code from Antigravity to GitHub
  • Deploying to Google Cloud Run
  • CORS-related problems
  • Inconsistent executions
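Of these, the CORS-related problems are the most mechanical to fix once you know what the browser is checking for. Here is a minimal, stdlib-only sketch (not the actual hackathon code; the frontend origin below is a placeholder) of the response headers a backend must send before a game page hosted on one domain can call an API on another:

```python
# Hedged sketch: a tiny stdlib HTTP handler that adds the CORS headers
# a browser requires for cross-origin fetch() calls. The allowed origin
# is a hypothetical placeholder, not a real deployment URL.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://example-game.web.app"  # placeholder frontend URL

class CORSHandler(BaseHTTPRequestHandler):
    def _cors_headers(self):
        # Without these headers, a cross-origin fetch() is blocked
        # by the browser even though the server answered normally.
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        # Browsers send an OPTIONS "preflight" before non-simple requests.
        self.send_response(204)
        self._cors_headers()
        self.end_headers()

    def do_GET(self):
        self.send_response(200)
        self._cors_headers()
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')
```

The `do_OPTIONS` branch matters because for non-simple requests (JSON POSTs, custom headers) the browser sends an OPTIONS preflight first, so a backend that only answers GET can still fail with a CORS error.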

For some participants, the magic worked smoothly. For me, not so much.

Still, I pushed through, submitted the prototype, and scored 46% overall.

Not great. But very educational.

The Reset Between Rounds

During lunch, instead of stressing, I cooled off and started talking to other developers.

This turned out to be extremely valuable.

I learned:

  • how others were structuring prompts
  • how they handled hallucinations
  • where they were getting blocked

That peer feedback helped me rethink my approach for Round 2.

Round 2: Changing Strategy

For the main round, I made a strategic shift.

Instead of forcing Antigravity, I moved back to AI Studio, mainly because:

  • deployment was more predictable
  • GitHub integration felt smoother
  • I could move faster under time pressure

I also simplified the scope, moving from a 3D game to a 2D game, and refined my prompts more carefully.

This time, the system responded much better.

Submission Attempts and Reality Check

We had four submission attempts.

Attempt 1: 50%
Criteria included:

  • Code Quality
  • Security
  • Efficiency
  • Testing
  • Accessibility
  • Google Services

My weak areas were clearly Security and Google Services.

Since I was relying heavily on AI Studio, I wasn’t fully aware of all the security gaps I might be hitting.

Attempt 2: Still 50%

I thought I had improved the code significantly, but the score didn’t move. That was a reality check.

Attempt 3: 62.67%

This time I changed tactics:

  • asked the model to refactor more carefully
  • focused on structure
  • tested more deliberately

Glitch Hunt
This is the game I developed and submitted: a newer take on the classic Duck Hunt. It's still in the prototype phase.

I still didn’t make the top 10, but the learning curve was massive.

I also reviewed other teams’ web games and noticed a common theme: Everyone was fighting hallucinations and AI limitations in different ways.

Lessons I’m Taking Forward

1. Plan before you vibe code: Don’t jump straight into prompting without thinking through scope and data.

2. Stay flexible with tools: Sometimes switching platforms saves more time than forcing one tool.

3. Prompt clarity improves with iteration: The more precise the prompt, the better the output.

4. AI will hallucinate, so expect it: Save versions frequently and be ready to roll back.

5. Commit and deploy frequently: Version history saved me multiple times.

6. Peer feedback is underrated: Talking to other builders gave me insights I wouldn’t have found alone.

7. Different models excel at different tasks: I used GPT for prompt generation and Gemini for data-heavy reasoning.

8. Don’t over-iterate blindly: At one point I kept prompting without validating outputs, which created more confusion than progress.

9. AI tools still need developer judgment: Even when output looks correct, manual review is essential.

10. Time pressure exposes prompt quality: Clear prompts saved far more time than clever but vague ones.
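Lessons 4 and 5 fold into one habit: snapshot the working tree before every new prompt. A hypothetical helper (not from the article; it assumes `git` is installed and the project directory is already a repository) might look like this:

```python
# Hedged sketch: commit a "checkpoint" before each new AI prompt so a
# hallucinated rewrite can be rolled back instead of hand-unpicked.
import subprocess

def checkpoint(repo_dir, label="before next prompt"):
    """Stage everything and commit it as a rollback point."""
    subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
    # --allow-empty keeps the habit cheap even when nothing changed.
    subprocess.run(
        ["git", "commit", "--allow-empty", "--quiet",
         "-m", f"checkpoint: {label}"],
        cwd=repo_dir,
        check=True,
    )
```

Called before every prompt, this turns a hallucinated rewrite into a `git diff` against the last checkpoint instead of a lost afternoon.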

This hackathon didn’t just test my ability to build with AI. It tested how clearly I could think under pressure. I’m still learning, still breaking things, and still refining how I work with AI tools.

Vibe coding looks fast from the outside, but in reality, the quality of your prompts, planning discipline, and iteration strategy make a huge difference.

Would love to hear from others: what was the biggest roadblock you faced while vibe coding?

Comments 34 total

  • Luftie The Anonymous (Feb 26, 2026)

    Although I'm not a vibe-coder, I'm a developer. It actually gives a couple of interesting facts to bear in mind, as I will attend a hackathon in April. Good to know, thanks for the article!

    • Konark Sharma (Feb 26, 2026)

      I'm really glad you liked the article. Thank you so much for your support. Give your best in the hackathon; maybe we will get to see an amazing build by you, shared in the form of an article. Looking forward to hearing about your experience.

      • Luftie The Anonymous (Feb 26, 2026)

        Hopefully, that hackathon will be my official farewell to web dev and smart contracts, as I focus only on cryptography and blockchain architecture (what I actually do currently). You can read my first article, where I introduce myself; also feel free to connect on Signal or Telegram.

        • Konark Sharma (Feb 26, 2026)

          For sure, I would read that. I am sure you will lead the way for cryptography and blockchain on Dev.to. I would love to connect and learn more about blockchain from you. It is the thing I find most fascinating in Web3.

  • Benjamin Nguyen (Feb 26, 2026)

    It is so true

    • Konark Sharma (Feb 26, 2026)

      I'm really glad you liked it. You are also writing such amazing articles. Keep up the good work.

  • klement Gunndu (Feb 27, 2026)

    The hallucination-under-pressure pattern is consistent — models get more confident-sounding exactly when context degrades, which is the worst time to trust them. The planning phase working well but execution breaking down is exactly the gap most vibe-coding workflows haven't solved yet.

    • Konark Sharma (Feb 27, 2026)

      Yes, exactly what I faced while working with vibe coding workflows. I think they fall short in executing within context: if I say do Task A, they will do Task A + B, resulting in more chaotic output.

      What's your favorite vibe coding tool you use?

  • Harsh (Feb 27, 2026)

    This is such a real take! It’s easy to think AI tools make building a game simple, but the 'pressure test' of a hackathon always reveals the gap between prompting and actually debugging. What was the most unexpected bug the AI introduced for you?

    • Konark Sharma (Feb 27, 2026)

      I'm pleased that you liked it. Yeah, the pressure test really shows the capabilities of AI in real life, like how a developer would react to a 2 a.m. call for a production bug, but for me the gap was larger.

      The most unexpected bug the AI introduced was that when I prompt it to make changes in a long conversation, it tends to redo the tasks I asked it to remove. I'm waiting for one output, but all I get is the new output plus the previous outputs. It really creates a mess in the code. Also, in AI Studio it can add files but can't remove them; I prompted it many times to delete an unnecessary file, but it couldn't.

      What's an unexpected bug you faced while vibe coding, or any interesting story you wanna share?

      • Harsh (Feb 27, 2026)

        That's such a relatable experience! 😅 The 2am production bug analogy is spot on — pressure testing really does expose the gaps.

        And yes, that issue with AI redoing old tasks in long conversations is painfully familiar! It's like the model loses context of what 'remove' actually means and just keeps adding layers. Definitely creates chaos in the codebase.

        The AI Studio file limitation is weird too — being able to add but not delete? That's such a basic need. Hope they fix that soon.

        As for unexpected bugs I've faced — one time the AI kept importing the same library 5 times in one file no matter how many times I told it to stop 😂 Took me longer to clean up than to write it myself!

        Would love to hear more about your hackathon experience — what were you building?

        • Konark Sharma (Feb 27, 2026)

          Yes, it has happened to me as well. I told it to remove the library but with newer instructions it kept adding it.

          It was a wonderful experience; every hackathon comes with its pros and cons. As for the building part, I provided the link in the submission attempts. Though it is still in the prototype phase, I included it to share what I have built. Here's the link for you.

          Glitch Hunt

          What are your hackathon experiences? Or, if you have taken part in a Dev Challenge, how was it?

  • Matthew Hou (Feb 27, 2026)

    This is a genuinely useful post because it documents exactly the gap between "AI generates code" and "AI generates correct code under pressure."

    The hackathon setting makes it even more telling. When you're vibe coding for a demo, the stakes are low — if it breaks, you regenerate. But the habits you build there carry into real projects where the cost of a subtle bug is way higher.

    What I keep coming back to: the bottleneck was never generation speed. It was always verification. You can get AI to write a game loop in 30 seconds, but figuring out whether that game loop actually handles edge cases correctly takes the same amount of human attention it always did. Maybe more, because the code looks plausible enough that you trust it.

    The METR research on AI-assisted coding showed developers perceived they were 24% faster while actually being 19% slower. The gap comes from exactly what you described — the time spent debugging and re-prompting eats the generation speed advantage.

    Not saying vibe coding is useless. But knowing when to switch from "generate fast" to "verify carefully" is the real skill. Sounds like the hackathon taught you that under pressure, which is the best way to learn it.

    • Konark Sharma (Feb 27, 2026)

      I'm really glad you found my article useful.

      Yes, you are absolutely right about verification, and you made a valid point about the game loop actually handling edge cases. This frustrated me so much: I wanted to add different levels to the game and make it harder for the user, but while doing so the AI started doing things on its own. The original logic for losing was kind of lost while building levels into the game. So yeah, I agree with you: it doesn't make us faster, it actually makes us slower, because the execution can diverge in any possible direction and I have to teach the AI to do it the way I want.

      Yes, I learned a lot about AI Studio, Antigravity, and prompting while building the game. If you are willing to learn, even a small ant can teach us amazing lessons.

      What was the bug you found most annoying while debugging?

  • Mahima From HeyDev (Feb 27, 2026)

    Loved this write-up, especially the “reset between rounds” part. In my experience vibe coding works great until the first ambiguous state bug shows up, then you end up spending more time debugging prompts than debugging code. The planning-first point is spot on - I usually force myself to write down the data model and 2-3 user flows before touching the AI tool, and it saves a ton of backtracking. Also +1 on frequent commits, version history is basically your safety net with AI-generated code.

    We’ve been collecting a few patterns for rescuing vibe-coded apps (what tends to break, how we stabilize fast) at heydev.us/blog if it’s useful.

  • Mahima From HeyDev (Feb 27, 2026)

    Loved the “reality check” angle here. In my experience, vibe coding works fine for the first 60-70%, but the last mile is always about making implicit assumptions explicit - data model invariants, error boundaries, retries/timeouts, and observability. One thing that helps is to freeze the happy-path code and then write a short list of “production contracts” (inputs, authz, rate limits, idempotency) and only refactor to satisfy those.

    We end up rescuing a lot of vibe-coded apps once they hit real users, wrote up a few patterns that keep the speed without the chaos: heydev.us/blog

  • Sean | Mnemox (Feb 27, 2026)

    Great writeup. Your lesson #1 really resonated — "plan before you vibe code."

    I hit the same wall but earlier in the process. Before I even start prompting, I kept finding out the thing I wanted to build already existed. Claude would happily help me code a food delivery tool for 6 hours, then I'd search GitHub and find 12 mature competitors.

    The hallucination problem you describe during execution is real, but there's an even earlier hallucination most people miss: when you ask "is this idea original?" and the AI says "yes, go build it!" without actually searching anything.

    I ended up building an MCP tool that does that search automatically — scans GitHub, HN, npm, PyPI before you write a single line. Would've saved me a lot of the "reset and start over" cycles you described between rounds.

    Curious — for your Glitch Hunt game, did you check if similar web-based duck hunt remakes existed before building? Not criticizing at all, just wondering if knowing the landscape upfront would've changed your approach or scope.

    • Konark Sharma (Feb 27, 2026)

      Thank you so much. I'm really glad you found my article helpful.

      Yeah, "the original idea" and "go build it" are the hallucinations that keep us in the delusion that what we want to build never existed before. Even if it did search, it could still tell you to build it with slight modifications.

      Wow, so cool that you made that MCP tool. Do share the link if it is publicly available; it might help me and other developers do research before even starting to plan.

      Yes, I had researched but couldn't find a proper one, and due to time constraints I didn't have time to do deeper research. If one had already existed, I would have taken inspiration from it and made my own, since the theme required us to make a game. Even if a web-based duck hunt did exist, I would either have gone for another game that hadn't been built or added AI to the game. If you have played the original Duck Hunt, the dog laughs at you when you miss the shot and mocks you. I wanted to add those taunts to my game as well; I used Gemini on the backend to generate taunts based on user behavior, but I exhausted my quota before launch, so I dropped it and chose random texts for taunting the users.

  • chengkai (Feb 27, 2026)

    Managing AI agents is a tougher job than you'd think. Keeping AI context from drifting is difficult. You might want to check this article that I wrote:

    dev.to/wilddog64/i-gave-gemini-one...

  • Matthew Anderson (Feb 27, 2026)

    Hi @konark_13, I'm building a code editor, Stellify, that's all about preventing AI (and human developers, remember them?) from going off track in the way you've described. The main things it does so far:

    • Uses a set stack, PHP (Laravel on the server), my own JS framework (with adaptors for Vue, React, Svelte) on the client. When AI starts a task, the first call it makes is to a get_project tool that gives it info about the stack, the libraries that are available (plus versions) and key references (see the next bullet).

    • Stores code as granular JSON in a database with persistent references and other metadata. This allows AI to be surgical when it comes to making updates and keep the context window to a minimum. When you're ready to export/ deploy the JSON is re-assembled back to source files.

    • In terms of prompts, I have an intermediate layer that analyses your prompt for keywords and attaches only relevant instructions. The API also provides, code analysis and testing endpoints that AI can use (via MCP) to keep it on track.

    The next (hopefully last) piece of the puzzle is assisting with planning prompts to optimise for clarity; I'm not fully sure how best to do this given the vast array of requirements that are possible. In theory I like the idea of capturing prompts and running a "prefetch" to AI that breaks apart your prompt and makes it optimal before passing the refactored prompt to AI. I'm currently building lots of projects to get a feel for what's right here, as it's hard to reason about (at least for me).

    I'd be interested to hear your thoughts!

    • Konark Sharma (Feb 28, 2026)

      Hi Matthew, Stellify seems like a wonderful idea, and one that could help many people become better developers. The tech stack you have used is good, and the workflow you mentioned seems solid. I can tell you more about it after I have used the product, and all the best for your launch on Product Hunt.

      For optimizing prompts, what I have learned is that you can use the classic approach of tokenization, embeddings, and vector search; this will help you analyze the prompt and get better context. But your idea of running a "prefetch" to AI that breaks apart prompts and makes them optimal will be more costly, since most AI models run on tokens and tokens have limits. Optimizing generates more tokens, and that will cost you and the user for the LLMs running behind the scenes. If you can write your own code to do what the LLMs do, the cost can come down, and you can also train a model specifically for optimizing.

      The idea and execution are amazing. Keep up the good work.

      • Matthew Anderson (Mar 5, 2026)

        Hi Konark, I appreciate the kind words! You make a valid point about the increase in tokens/costs. What I've ended up doing as a result of lots of testing (essentially building apps on repeat and analysing the outcomes) is injecting defensive code into my API endpoints. I'm really happy with this approach; it's much more effective than trying to engineer the prompt either by prefetching or simply adding more and more instructions to guide AI.

        Here's a YT video I put together showing Claude building a Feedback application.

  • Gass (Feb 27, 2026)

    This AI hype is a skill-degrading phase for people invested in it. Programmers who don't engage with it will have an advantage over time. Don't print, use the brush.

  • Mahima From HeyDev (Feb 28, 2026)

    Spot on - vibe coding is great for momentum, but it quietly removes the “design review” checkpoint.

    One thing that’s helped teams I’ve worked with is to add a short hardening pass after the first demo: threat model the data flows, add a few contract tests around the riskiest endpoints, and put basic observability in place (structured logs + one dashboard).

    Curious - do you have a checklist you run through before you let a vibe-coded tool touch production data?

    • Konark Sharma (Feb 28, 2026)

      Yes, it does remove the "design review": if you don't provide a design, it will simply make the website based on its previous data.

      The ideas you shared seem like good ones; structured logs and one dashboard will help users build, pinpoint errors, and see how the AI-generated code works.

      I haven't worked with production data, but I'll be very mindful about letting a vibe-coded tool touch production, because it can delete or modify the code. I think the better option would be to let it only view the data, with a human in the loop before any code is modified.

      What's your checklist?

  • Aryan Choudhary (Feb 28, 2026)

    I'm still trying to wrap my head around the idea of building a game with AI tools. Konark, your experience with hallucinations and deployment issues is a great reminder that these tools are still so new and finicky. I love that you're documenting what you've learned, especially the importance of verification and human judgment, it's a crucial balance to strike when working with AI.

    • Konark Sharma (Feb 28, 2026)

      Thank you so much, Aryan, for your kind words. Making a game was basically the theme provided to us for the hackathon, so there was no choice.

      Yes, I love to share my learnings and learn from others about their approaches to building with AI. What's your take on it, and what problems have you faced while using AI tools?

      • Aryan Choudhary (Feb 28, 2026)

        That's very cool of you to share these learnings; they are very valuable indeed.
        I have also experienced almost the same problems as you, and a fresh start has been the answer most of the time. Other cases include my ideas being too abstract or complex to vibe code, and I've usually tackled that by breaking things down into smaller problems: nothing fancy, just the fundamentals of SDLC done right.

        • Konark Sharma (Feb 28, 2026)

          Thank you so much, Aryan, for finding my lessons valuable. I share my learnings so that everyone can learn, avoid the mistakes I made, and share their own experiences for me to learn from.

          I myself felt my ideas were a bit too complex to build by vibe coding. I may be wrong, but I feel the more complex you make the website, the more the model hallucinates. It keeps building my frustration, because I can't fully be a vibe coder; I need to debug as well. But yeah, your approach of breaking things into smaller problems and using the SDLC as a building block is the step everyone should take.

  • L. Cordero (Feb 28, 2026)

    Thank you for mentioning peer feedback. This is one thing I wish I had more of when I'm building.

    • Konark Sharma (Feb 28, 2026)

      Of course, peer feedback is important and valuable. If you are building something, you can share your app/website on this platform, because people here are very supportive and helpful. I'm sure the feedback you get here can make your product 100 times better.

  • HussamCantley (Mar 3, 2026)

    I think so.

  • HubSpotTraining (Mar 3, 2026)

    This is so true!
