We talked yesterday about how defining requirements often takes a team effort and a certain amount of time before things become clear. Even then, it’s hard to guarantee that the logic is 100% flawless.
Another point is that in large-scale projects, we usually have access to only part of the codebase (access to other systems is restricted for security reasons).
In scenarios where multiple systems are running in parallel, AI can be quite fragile. In contrast, humans can rely on experience to make educated guesses and solve issues through effective communication—which ties back to what I mentioned earlier: human-to-human communication is hard to replace.
I run into infinite-loop issues with AI almost every day—the model gets stuck repeating the same failed fix. Even with clear prompts, this still happens from time to time. Sometimes it feels like I spend more time crafting the prompt than I would have spent building the thing without AI—like teaching a child who never really learns...
Of course, as an engineer, I believe these problems will eventually be solved. I just don’t think it’ll happen as quickly as the media hype would have us believe.