My AI Co-Pilot Has No Common Sense
Henk van Hoek (@henk_van_hoek)

About: Passionate about automation with 55 years of experience, from programmable calculators to AI. Currently building Raspberry Pi and self-hosting tools with Python and VirtualBox.



Publish Date: Aug 7

I've been working with an AI on a new project. For the most part, it has been a fantastic experience. It writes boilerplate code in seconds and remembers obscure command syntax that I would have spent an hour searching for. I was starting to think it was the perfect junior partner.

Then it drove the project straight into a wall.

The project is called pi-server-vm. It's a tool to create and clone virtual Raspberry Pi servers in VirtualBox. The main requirement was simple: the virtual machines had to act exactly like real Pis on my local network. My other tools needed to find them, and I needed to access them from any computer in my house.

To be fair, I could have been clearer. We'd discussed the need for it to work on larger networks, but the conversation kept getting derailed by new bugs. An AI's context can drift when you jump between topics, and if I'm being honest, so can a human's.

I hit a bug. A frustrating one. My discovery tool, nmap, couldn't see the VMs, even though I could connect to them directly with PuTTY.
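
For context, the discovery side is nothing exotic: it boils down to a ping scan of the home subnet, looking for hosts that answer. Here is a minimal sketch in Python, assuming nmap is installed and using 192.168.1.0/24 as a placeholder for whatever subnet your router actually hands out:

    import subprocess

    # Ping-scan the LAN (-sn = host discovery only, no port scan) and show
    # which hosts answered. The subnet below is a placeholder.
    result = subprocess.run(
        ["nmap", "-sn", "192.168.1.0/24"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

With the VMs on a Bridged Adapter, a scan like that should have listed them alongside every other device in the house. Instead, they simply never appeared.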

I described the problem to the AI. It instantly came back with a clever solution: create a private "Host-Only" network inside VirtualBox. It was a beautiful, self-contained, and technically elegant fix. We spent hours implementing it. We configured services, enabled DHCP, and tweaked settings. The AI was brilliant, spitting out configuration files and commands faster than I could read them.
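
To give a sense of what that detour involved: the whole Host-Only setup boils down to a handful of VBoxManage calls. The sketch below shows only the shape of it, not the project's actual code; the VM name pi-vm, the vboxnet0 interface name (it differs on Windows), and the 192.168.56.x addressing are all placeholders, and the exact flag spellings vary a little between VirtualBox versions:

    import subprocess

    def vbm(*args):
        # Thin wrapper around the VBoxManage command-line tool.
        subprocess.run(["VBoxManage", *args], check=True)

    # Create a host-only interface; VirtualBox names it vboxnet0, vboxnet1, ...
    vbm("hostonlyif", "create")
    vbm("hostonlyif", "ipconfig", "vboxnet0", "--ip", "192.168.56.1")

    # Give that interface its own DHCP server so the virtual Pis get addresses.
    vbm("dhcpserver", "add", "--ifname", "vboxnet0",
        "--ip", "192.168.56.2", "--netmask", "255.255.255.0",
        "--lowerip", "192.168.56.100", "--upperip", "192.168.56.199",
        "--enable")

    # Attach the VM's first network adapter to the private network.
    vbm("modifyvm", "pi-vm", "--nic1", "hostonly", "--hostonlyadapter1", "vboxnet0")

Every one of those commands is perfectly reasonable on its own, which is part of what made the detour so convincing.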

And it worked. The nmap scan on this new, private network succeeded. The AI declared victory.

I had to be the one to point out that we had just spent all day meticulously solving the wrong problem.

The virtual Pis were now discoverable, but only from the single computer running VirtualBox. They were completely invisible to my main network, which defeated the entire point of the project. The AI, in its laser focus on fixing the nmap bug, had completely forgotten the primary architectural requirement. It had no common sense.

That was the moment I truly understood how this partnership works. The AI is an incredible engine. It has all the technical knowledge in the world, but it has no wisdom. It doesn't understand the "why" behind a project unless you constantly remind it.

My role, as the engineer, is not just to ask questions. It's to be the project's memory, its architect, and its conscience. It's to know when a clever solution is actually a dead end.

We threw the whole Host-Only network idea in the trash and went back to the original, stubborn problem with the Bridged Adapter. It turned out the fix was to completely reinstall VirtualBox. A drastic step, but the right one.
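
For comparison, the Bridged setup we went back to is essentially a single VBoxManage call per VM, which is part of why, in the end, reinstalling VirtualBox rather than piling on more configuration turned out to be the right call. Again only a sketch, with pi-vm and eth0 as placeholders (the real adapter name is whatever VBoxManage list bridgedifs reports on your host):

    import subprocess

    def vbm(*args):
        # Thin wrapper around the VBoxManage command-line tool.
        subprocess.run(["VBoxManage", *args], check=True)

    # Show the host's physical adapters so we can pick the right one to bridge to.
    vbm("list", "bridgedifs")

    # Put the VM's first network adapter straight onto the real LAN, so the
    # house router hands it an address and every machine can discover it.
    vbm("modifyvm", "pi-vm", "--nic1", "bridged", "--bridgeadapter1", "eth0")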

The AI is an amazing tool. It helped me get there in the end. But it's a tool that needs a firm hand on the wheel. It doesn't have instincts, and it certainly doesn't have common sense.

That, it seems, is still our job.

Comments (16)

  • Jay
    Aug 13, 2025

    I'm not sure if you were using GitHub Copilot or MS Copilot, but I found it (MS Copilot) to be exactly as you said: it writes great code but lacks the context awareness to stick to anything that isn't singularly focused.

    With that said, try out Claude or even ChatGPT (the $20/month plan is pretty great, honestly). They do a much better job of considering the project as a whole instead of hyper-focusing on that one bug like you mentioned. I haven't used Claude Code in a while, but ChatGPT even has project folders now, so it keeps chats scoped to the project, which is great for more advanced systems with many different scripts in play.

    • Henk van Hoek
      Aug 13, 2025

      Co-pilot here is just a generic name. I actually use AI Studio, and sometimes ChatGPT for simpler issues. I tried Claude as well. And Microsoft Copilot is not a real AI; it is an interface between me and whatever LLM is behind it. I think AI Studio gives me the best responses, even in long chats of up to 500,000 tokens.
      What I learned today is to let AI Studio generate the prompt for a new chat when the UI starts getting slow. Sometimes a little tweak is required, but this was a real time saver.

      • Jay
        Aug 14, 2025

        I learned while using ChatGPT on larger projects that, yeah, it eventually slows down when the chat gets long. At that point I ask it for a full summary of what was done, what didn't work, what did work, and where I am at that moment, and then I paste that summary into a new chat to get right back to where I was. Might be worth a try!

  • Ahmet Salih
    Aug 13, 2025

    I have to remind it every time to create dual-language content on GitHub. As long as I don't forget the reminders, there is no problem.

    • Henk van Hoek
      Aug 15, 2025

      I now use pure English only. No problems anymore. It is funny how it catches my spelling mistakes perfectly. Dutch is my first language, so it took a little getting used to.

  • Alfonso Ardoiz
    Aug 13, 2025

    That's a cool example of recency bias. As an AI developer I have experienced many cases like this, where the initial instructions get ignored because LLMs tend to lean on the most recent input tokens when generating new content. Thanks for sharing it!!

  • Prema Ananda
    Aug 13, 2025

    I mainly use Gemini 2.5 Pro. When it hits a dead end, I go to Claude-4 and it comes to the rescue :-)

    • Henk van Hoek
      Aug 15, 2025

      Maybe it depends on the domain we are working in. But definitely worth a try, of course.

  • Doug Wilson
    Aug 13, 2025

    "That was the moment I truly understood how this partnership works. The AI is an incredible engine. It has all the technical knowledge in the world, but it has no wisdom. It doesn't understand the "why" behind a project unless you constantly remind it.

    My role, as the engineer, is not just to ask questions. It's to be the project's memory, its architect, and its conscience. It's to know when a clever solution is actually a dead end."

    Extraordinarily well put, sir. Bookmarking and sharing ...

  • Gregory Willis
    Aug 13, 2025

    Great post, and truer words were never spoken. I have been developing intensively (relative to my experience) with Claude (and occasionally DeepSeek) and have encountered the exact same issues.
    On one occasion Claude actually began to go real deep on a problem that we had already resolved. I had to remind him that it was working.
    In my case at least, I have really needed to upgrade my prompting skills to avoid a lot of wasted time and unnecessary iterations.
    This post hit home. Thanks for sharing.

    Greg

  • D7460N
    Aug 13, 2025

    Very good write-up. My experience is the same, but focus and memory are lost after the third, maybe fifth, prompt if I'm lucky. It falls back on non-contextual, closed-system group-think. Innovation is dead.

    • Henk van Hoek
      Aug 15, 2025

      I have much longer sessions; some are over 500,000 tokens. I noticed my desktop with 32 GB of RAM is much faster than my laptop with 16 GB of RAM. Maybe CPU power contributes to the speed as well. I use my laptop with Remote Desktop to the beefy PC.

  • Reid Burton
    Aug 14, 2025

    There is actually an experimental setting in VS Code that helps with context and coding guidelines: you can add the path to a Markdown file, and the AI will use it as context.

    • Henk van Hoek
      Aug 15, 2025

      I use Google AI Studio. There is a text field, "system instructions", for this purpose.
