AI coding tools have changed more in the last two years than in the previous decade. They have split into distinct tiers with genuinely different capabilities. The tools at the top of that stack are doing things that would have sounded like marketing fiction not long ago.
Most developers encountered this space through chat assistants and browser-based copy-paste. Some moved on to inline autocomplete tools embedded in their editor. A smaller number have made the shift to agentic tools that read the codebase, plan across multiple files, and execute without being hand-fed every piece of context.
Each stage represents a different relationship between the developer and the machine. This article maps that evolution and makes the case that the agentic shift is not just an incremental improvement. It changes what a single developer can accomplish.
The Chat Phase: Powerful but Disconnected
Chat assistants such as Claude, ChatGPT, and Gemini remain the most common AI tools in development. The model is simple. You describe a problem, attach files, and receive a response. For explanation, debugging discussion, code review, and drafting small functions they work well.
However, a chatbot is entirely passive: it takes no direct action. It responds only to what you present it, and its output is limited to text in the web interface. It cannot run your tests or inspect your source tree, and it has no way of knowing whether its suggestions are consistent with the project as a whole.
For many tasks this is acceptable, but as projects grow more complex the copy-paste overhead becomes the bottleneck. You spend as much time managing context as solving the problem.
Inline Autocomplete: Fast but Narrow
GitHub brought AI directly into the editor with Copilot. Instead of the prompt-and-response model, it watched you type and suggested what should come next, injecting greyed-out code into the editor; pressing Tab accepted the suggestion and inserted it.
In some respects this was almost magical: it could complete an entire function from nothing more than a descriptive function name, or fill in the code beneath a comment you had written. But the magic trick soon wore thin. It could not see the whole codebase, so it had no understanding beyond the immediate file. You could not use it to complete features. It was an advanced text completion.
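The comment-driven workflow looked something like this. The snippet is an illustrative sketch, not actual Copilot output; the function name and body are hypothetical, chosen because a well-known formula is exactly the kind of thing these tools completed reliably:

```python
# Convert a temperature from Fahrenheit to Celsius.
# After typing the comment and the signature below, a Copilot-style
# tool would grey out a plausible body; pressing Tab accepted it.
def fahrenheit_to_celsius(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5 / 9
```

For textbook patterns like this the suggestion was usually right. The trouble started with anything that depended on context in other files.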
It was also disruptive to maintaining focus, because it forced you to evaluate each suggestion as it appeared. Often a suggestion would look plausible but not actually do what you wanted, so you had to read the code anyway, which was in some respects worse than just writing it yourself. This was not the way.
The Shift: From Chatbots to Agentic Tools
Agentic coding tools such as Claude Code, Codex, and Cursor have fundamentally changed how LLMs are used in software development. They can read the codebase, examine files, follow dependencies, and plan changes across multiple files. They can then write the code, run tests, observe failures, and adapt.
OpenAI and Anthropic have taken different paths. With Codex, OpenAI provides a virtual environment where your code is checked out and developed in a sandbox inside OpenAI's infrastructure.
Anthropic has taken a different approach: Claude Code runs command-line tools and plug-in skills directly on the user's machine. It can read or modify files in your project, or for that matter anywhere on your PC. It can execute builds and tests and evaluate the results.
Interacting with Claude Code is similar to a chatbot in many respects, a straightforward conversation, except that it can actually act on your computer. Needless to say, any sane developer would find this a little scary. Fortunately, Anthropic has added a permission system so that it asks for approval before executing commands. It is, however, possible to grant broad permissions for a whole class of commands.
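Those broad grants can be pre-approved in a project settings file rather than answered interactively. The fragment below is a minimal sketch of a project-level `.claude/settings.json`; the specific rule patterns are illustrative, not a recommendation:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

Anything not matched by a rule still triggers an interactive prompt, which is the sensible default while you build trust in the tool.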
This new approach means that you can now write a requirements document or a user story, and essentially give it to Claude Code to complete. It will examine the codebase, develop a plan for the required changes, then perform all the changes. It can even write your unit tests for you.
The move from chatbots, where the developer has complete control over the interaction and the code, to agentic systems such as Claude Code is both amazing and disquieting. Suddenly developers are not so much writing code as orchestrating development.
Needless to say some developers are not keen to give machines this degree of autonomy. AI can hallucinate, and go off and make crazy changes based on a misunderstanding. There are real risks involved, so the degree of access and authority given to agentic systems needs to be carefully managed.
A Brave New World
Being a good software developer has always meant adapting to new technologies. From learning BASIC to dBase to Delphi and Java, my own path has been one of continuous learning and adaptation.
However, where once a software developer might need to know one or two technologies, perhaps a programming language and SQL, we now face multiple front-end JavaScript frameworks, CSS, multiple back-end languages, git, continuous build and deployment systems, and AWS, Google Cloud Platform, or Azure.
Developers cannot be experts in every language; usually they are competent in a specific technology stack. One of the critical skills has been knowing when to jump ship to a new technology before the old one gives way, and the speed of technological development has made that increasingly difficult.
Agentic tools are changing the shape of this problem. An experienced developer with strong architectural thinking can now work effectively in unfamiliar stacks. The tool handles syntax and ecosystem details. The human evaluates whether the result is correct. This already happens in practice.
Last year we needed to migrate a REST API from Python and FastAPI to C# on Azure. With the help of agentic AI systems we achieved this despite having no prior C# experience. Naturally, in such cases we needed ways of testing the code, and there was a clear existing target to test against.
In another instance we needed to modify an unfamiliar codebase to add new functionality. The system was non-trivial and the feature quite deeply technical. It still took three days of work to crack the initial solution, even with the help of AI. But make no mistake: without AI it would have been impossible to get results in the required time frame.
Finally, over the last two weeks or so, we developed our first mobile application. The idea and concept were developed while on a walk, talking with ChatGPT. From that brainstorming we worked together on an initial set of requirements, then handed them off to Claude Code, where we have worked together to develop a React Native Android application. Every line of code was written by Claude.
This is still a collaboration, in that Claude Code is directed by a programmer. We can still see and review the code. Developers are still in the driver's seat, still deciding what is ready to commit.
But there is a certain disquiet in the realisation that human software developers are no longer needed to write the code. Developers are becoming something more like orchestrators, who know how to get the agents to do the job.
Beyond Coding Tools: Autonomous Agents
Agentic coding is only the beginning. The same pattern is spreading to broader digital environments.
Tools such as OpenClaw connect email, calendars, messaging systems, files, and code execution through a single interface. One documented workflow schedules development tasks overnight. The agent runs them while the developer sleeps and produces a summary by morning.
OpenClaw opened the floodgates, and thousands of people installed it, opening up their data and lives to nightmare levels of security risk. In at least one case OpenClaw deleted an executive's entire email inbox.
| Capability | Chat Assistants | Inline Autocomplete | Agentic Coding Tools | Autonomous Agents |
|---|---|---|---|---|
| Examples | Claude, ChatGPT, Gemini | GitHub Copilot, Tabnine, Codeium | Claude Code, Codex, Cursor | OpenClaw |
| Reads your codebase | No | Partial (cursor window only) | Yes | Yes |
| Writes files | No | No (suggests only) | Yes | Yes |
| Runs commands | No | No | Yes | Yes |
| Runs tests | No | No | Yes | Yes |
| Plans across multiple files | No | No | Yes | Yes |
| Persistent project context | No | Partial | Yes (via config files) | Yes |
| Iterates on failures | No | No | Yes | Yes |
| Operates autonomously | No | No | Partially (with approval) | Yes |
| Integrates with external services | No | No | Limited | Yes |
| Choice of underlying model | N/A | Yes (Copilot) | Yes (Cursor, Copilot) | Yes |
| Setup required | None | Low | Medium | High |
| Security risk surface | Low | Low | Medium | High |
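On the "persistent project context" row: agentic tools typically read a memory file at the repository root at the start of each session, such as Claude Code's CLAUDE.md. The contents below are hypothetical, but typical of what such a file holds:

```markdown
# CLAUDE.md — project memory, read at the start of each session

## Build & test
- Build: `npm run build`
- Tests: `npm test` (run before proposing a commit)

## Conventions
- TypeScript strict mode; avoid `any`
- API handlers live in `src/api/`, one file per route
```

Keeping this file short and current is one of the cheapest ways to improve an agent's output, since it is injected into every session.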
Whether the benefits outweigh the risks is still an open question. The capability is real and the direction of travel is clear. The tooling is early and the implications of giving an agent that kind of reach are still being worked out in practice. It is worth watching.
Where This Leaves You
The tools have moved from responding to acting. That is the shift worth understanding.
Chat assistants remain useful for explanation and isolated problems. Autocomplete accelerates familiar patterns but does not expand what you can accomplish. Agentic tools operate across more files and more complex systems. They also allow developers to work in unfamiliar territory.
The human judgement in the loop still determines the quality of what comes out. The developer who can describe a problem clearly, evaluate a proposed plan critically, and recognise when the output is wrong will get dramatically better results than one who cannot. The tool amplifies what you bring to it.
These new tools open the door to new risks, unlike any we have seen before, because agents given the ability to act may do so in unpredictable ways. We cannot afford to ignore the risks, but neither can we ignore the rapidly advancing capabilities of these agentic systems.

