🚀 I Mass Terminated My Copilot Plans. Here's Why Claude Code Won.
Suraj Khaitan



Publish Date: Mar 14

How an agentic AI in the terminal replaced my IDE plugins, scaffold scripts, and half my Stack Overflow tabs—without ever opening a GUI


The Moment I Realized My Coding Workflow Was a Lie

Every developer eventually hits the same wall:

"I have 4 AI extensions, 12 keyboard shortcuts, and I'm still copy-pasting code between a chatbot and my editor."

Tab-complete autocomplete? Great for variable names. IDE chat panels? Nice for explaining regex. But the moment you need an AI to actually understand your codebase, edit 14 files, run your tests, and fix its own mistakes—the shiny plugins fall apart.

Then I tried something that felt reckless:

I gave an AI full access to my terminal.

Specifically: Claude Code—Anthropic's agentic coding tool that lives in your CLI, reads your repo, writes real code, and executes commands.

I haven't looked back.


TL;DR (If You Only Read One Section)

  • Problem: AI coding assistants that autocomplete lines can't architect solutions. Chat-based tools require endless copy-paste.
  • Move: Claude Code operates as an agentic AI inside your terminal—it reads, writes, runs, and iterates autonomously.
  • Result: Multi-file refactors in minutes. Bug fixes with zero context switching. Git workflows handled conversationally.
  • Tradeoff: You're trusting an agent with shell access. Guardrails and review discipline matter more than ever.

Why Claude Code Is Trending Right Now

Scroll through any dev community in 2025–2026, and you'll see the same frustration:

  • "Copilot autocomplete is nice but it doesn't think."
  • "ChatGPT is smart but it doesn't know my codebase."
  • "I spend more time prompt-engineering than actual engineering."

Claude Code hits different because it collapses the gap between knowing and doing. It doesn't suggest code in a sidebar—it implements changes directly in your repo, runs your test suite, reads the errors, and fixes them. In a loop. Without you alt-tabbing once.

The industry term is agentic coding. And it's not a buzzword anymore—it's a workflow.


What Even Is Claude Code?

Claude Code is a command-line tool from Anthropic. You install it, point it at a project, and talk to it like a senior developer sitting next to you.

```shell
# Install it
npm install -g @anthropic-ai/claude-code

# Start it in your project
cd my-project
claude
```

That's it. No VS Code extension to configure. No API keys to paste into settings.json. No "select model" dropdown with 47 options.

You get a REPL-like interface where you type natural language, and Claude:

  1. Reads your files and project structure
  2. Plans the changes needed
  3. Writes the code across multiple files
  4. Runs commands (tests, builds, linters)
  5. Iterates if something breaks

It's like pair programming—except your pair never gets tired, never forgets the module structure, and never says "let me think about that" for 45 minutes.
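That read/plan/write/run/iterate cycle is the whole trick. As a rough sketch of the control flow (an illustrative toy, not Claude Code's actual implementation):

```typescript
// Toy model of an agentic coding loop: plan, edit, verify, self-correct.
type StepResult = { ok: boolean; errors: string[] };

interface Agent {
  plan(task: string): string[];   // 2. plan the changes needed
  applyEdit(step: string): void;  // 3. write the code
  runChecks(): StepResult;        // 4. run tests / builds / linters
  fix(error: string): void;       // 5. iterate on whatever broke
}

function runAgentLoop(agent: Agent, task: string, maxIterations = 5): boolean {
  for (const step of agent.plan(task)) agent.applyEdit(step);
  for (let i = 0; i < maxIterations; i++) {
    const result = agent.runChecks();
    if (result.ok) return true;                // verified: done
    result.errors.forEach((e) => agent.fix(e)); // self-correct, then re-verify
  }
  return false; // bail out instead of looping forever
}
```

The part IDE plugins can't do is step 4 and the loop around it: actually running your checks and feeding the failures back in.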


Real Workflows That Made Me a Believer

1) The "Refactor 30 Files" Moment

I needed to migrate an API layer from Axios to a custom fetch wrapper. With traditional AI tools, that's:

  • Explain the pattern in a chat
  • Copy the suggestion
  • Paste it into File 1
  • Realize it doesn't match my error handling
  • Re-explain
  • Repeat 29 more times

With Claude Code:

```text
> Refactor all API calls in src/features/ from axios to use the
  fetchWrapper in src/lib/api.ts. Preserve error handling patterns.
  Run the type checker after.
```

It read every file, understood the existing patterns, made the changes, ran tsc, found 3 type errors, and fixed them. Total time: 4 minutes.
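For context, here is roughly what that migration looks like at the call-site level. The wrapper below is a hypothetical sketch (the real src/lib/api.ts is project-specific), but the before/after shape is the point:

```typescript
// Hypothetical wrapper in the spirit of src/lib/api.ts (illustrative only).
class ApiError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = "ApiError";
  }
}

async function fetchWrapper<T>(url: string, init: RequestInit = {}): Promise<T> {
  const res = await fetch(url, {
    ...init,
    headers: { "Content-Type": "application/json", ...init.headers },
  });
  // Preserve a consistent error-handling pattern: non-2xx becomes a typed error.
  if (!res.ok) throw new ApiError(res.status, `Request failed with ${res.status}`);
  return res.json() as Promise<T>;
}

// Before (axios):         const { data } = await axios.get<User[]>("/api/users");
// After (fetchWrapper):   const data = await fetchWrapper<User[]>("/api/users");
```

The agent's job isn't inventing this wrapper; it's applying an existing pattern consistently across 30 files.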

2) The "Debug This Flaky Test" Nightmare

A test was passing locally and failing in CI. The usual investigation: environment differences, timing issues, mock state leaking.

```text
> The test in src/features/agents/__tests__/AgentList.test.tsx is
  failing in CI with "Unable to find role='button'". It passes locally.
  Investigate and fix.
```

Claude Code read the test, read the component, identified a race condition with an async render, added the correct waitFor wrapper, and ran the test suite to confirm. Done in 90 seconds.
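The fix itself is the standard Testing Library pattern: wrap the assertion in waitFor so it polls until the async render settles. Under the hood, waitFor is just a retry loop. Here's a dependency-free sketch of the idea (not the real @testing-library implementation):

```typescript
// Minimal polling helper in the spirit of Testing Library's waitFor:
// retry an assertion until it stops throwing, or a timeout expires.
async function waitFor(
  assertion: () => void,
  timeoutMs = 1000,
  intervalMs = 50,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    try {
      assertion();
      return; // assertion passed
    } catch (err) {
      if (Date.now() > deadline) throw err; // give up: surface the last failure
      await new Promise((r) => setTimeout(r, intervalMs));
    }
  }
}

// In the actual test, the shape of the fix is:
//   await waitFor(() => expect(screen.getByRole("button")).toBeInTheDocument());
```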

3) The "Write the Whole Feature" Sprint

```text
> Create a new feature module for "cost-management" under src/features/.
  Follow the same pattern as the agents feature: api layer, components,
  hooks, and route registration. Include a dashboard page with a summary
  card grid and a data table.
```

It scaffolded 8 files, wired up the route, created TanStack Query hooks, and built components using our existing design tokens—because it read our codebase first. Not a template. Not a snippet. Actual contextual code.


The Architecture: Why "Terminal-Native" Is the Unlock

Most AI coding tools follow this pattern:

```text
IDE Plugin → Language Server → AI API → Suggestion → Developer copies it
```

Claude Code follows this one:

```text
Developer → Claude Code (terminal) → reads repo → plans → writes files → runs commands → verifies → done
```

The key difference: the feedback loop is closed. Claude doesn't suggest and hope. It acts, observes the result, and iterates.

This is the difference between:

  • A GPS that shows you the route (traditional AI)
  • A self-driving car that takes you there (agentic AI)

Why the Terminal?

The terminal is the most powerful interface a developer has. It's where you:

  • Run builds and tests
  • Manage git
  • Execute scripts
  • Install dependencies
  • Deploy

By living in the terminal, Claude Code has access to the same tools you do. It doesn't need a special plugin API or language server protocol. It just… uses your tools.


The Permission Model: Trust, but Verify

Here's the part that makes security-conscious engineers twitch: this thing can run commands.

Claude Code handles this with a tiered permission system:

| Action | Permission |
| --- | --- |
| Read files | ✅ Automatic |
| Write/edit files | ⚠️ Asks permission (configurable) |
| Run terminal commands | ⚠️ Asks permission (configurable) |
| Run "safe" commands (`ls`, `cat`, `grep`) | ✅ Automatic |
| Run destructive commands | 🛑 Always asks |

You can configure it to auto-approve certain patterns:

```text
# Allow all file writes in src/
# Allow test runs without asking
# Always ask before git push
```

The mental model: it's a junior developer with terminal access. You wouldn't let them git push --force without review, but you'd let them run npm test freely.
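In practice, those rules live in a checked-in settings file. Here's a hypothetical `.claude/settings.json` sketch using the permission-rule format documented at the time of writing (the schema evolves, so check the current Claude Code docs before copying this):

```json
{
  "permissions": {
    "allow": [
      "Edit(src/**)",
      "Bash(npm test:*)"
    ],
    "ask": [
      "Bash(git push:*)"
    ]
  }
}
```

Committing this file means the whole team shares the same guardrails, not just whoever configured their local session.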


Claude Code vs. The Field: An Honest Comparison

| Capability | GitHub Copilot | ChatGPT/GPT-4 | Cursor | Claude Code |
| --- | --- | --- | --- | --- |
| Line-level autocomplete | ✅ Excellent | ❌ N/A | ✅ Good | ❌ Not its thing |
| Multi-file edits | ❌ Limited | ❌ Copy-paste | ✅ Good | ✅ Excellent |
| Codebase awareness | ⚠️ Current file | ❌ None | ✅ Good | ✅ Excellent |
| Runs commands | ❌ No | ❌ No | ⚠️ Limited | ✅ Full terminal |
| Self-corrects errors | ❌ No | ❌ No | ⚠️ Sometimes | ✅ Yes (loop) |
| Works without IDE | ❌ No | ✅ Yes (browser) | ❌ No | ✅ Yes (terminal) |
| Agentic workflow | ❌ No | ❌ No | ⚠️ Emerging | ✅ Core design |

The nuance: Claude Code isn't trying to replace your autocomplete. It's a different tool for a different job. Use Copilot for line-level flow. Use Claude Code when you need an agent that does work.


The Workflow That Actually Works

After months of daily use, here's my optimized flow:

Morning: Strategic Work with Claude Code

```text
> Review the open PR #142. Summarize the changes and flag
  any potential issues with our auth middleware.
```

```text
> Implement the API integration for the new knowledge-base
  management feature. Follow existing patterns in src/features/agents/.
```

Afternoon: Tactical Fixes

```text
> Fix all TypeScript errors in src/features/tools/.
  Run the type checker and show me the results.
```

```text
> Update the unit tests for UseCaseApi to cover the new
  delete endpoint. Run them and make sure they pass.
```

End of Day: Cleanup

```text
> Review all changes I've made today. Create a commit with
  a conventional commit message.
```

The shift: I went from writing code to directing code. My job became architecture, review, and decision-making. The implementation became a conversation.


Gotchas (The Part Everyone Discovers at 2 AM)

1) It's Confident, Not Always Correct

Claude Code will make changes with conviction. Sometimes those changes are subtly wrong. Always review diffs before committing. Trust the agent, but verify the output.

2) Context Window Limits Are Real

On massive monorepos, Claude Code can't hold your entire codebase in memory. Mitigations:

  • Use a CLAUDE.md file to give it project context and conventions
  • Point it at specific directories rather than the whole repo
  • Break large tasks into focused steps

3) It Can Get Into Loops

Occasionally, it'll try to fix an error, introduce a new one, fix that, introduce another. When you see this:

  • Stop it
  • Give it clearer constraints
  • Break the task down

4) Cost Awareness

Claude Code uses API credits. Complex multi-file refactors with test loops can add up. Monitor your usage, especially in the "let it run" agentic mode.


The CLAUDE.md File: Your Project's AI Constitution

The secret weapon most people miss: create a CLAUDE.md at your project root.

```markdown
# CLAUDE.md

## Project Overview
This is a React + FastAPI monorepo for an internal platform.

## Conventions
- Use design system tokens, never raw Tailwind colors
- Follow feature-based file organization under src/features/
- Use TanStack Query for server state
- All API calls go through src/lib/api.ts

## Commands
- `pnpm frontend:dev` - Start frontend
- `pnpm frontend:quality` - Type check + lint
- `pytest` - Run backend tests

## Don'ts
- Never modify shared components without discussing
- Don't install new dependencies without justification
- Don't push directly to main
```

This file acts as persistent memory. Every time Claude Code starts, it reads this file and follows the rules. It's like onboarding documentation—but for your AI pair programmer.


Who Should (and Shouldn't) Use Claude Code

Use it if:

  • You work on codebases with 10+ files that need coordinated changes
  • You're tired of copy-pasting between AI chats and your editor
  • You want to automate repetitive refactors, test writing, or migrations
  • You're comfortable reviewing diffs and understanding the code an AI writes

Skip it if:

  • You mainly need line-level autocomplete (use Copilot)
  • You're learning to code and need to understand every line you write
  • Your org prohibits AI tools from accessing source code
  • You prefer GUI-first workflows and rarely use the terminal

The Bigger Picture: We're Entering the "Agent" Era of Dev Tools

Claude Code isn't an anomaly. It's the leading edge of a shift:

Era 1 — Stack Overflow & Docs (search for answers)

Era 2 — AI Chat (ask for answers)

Era 3 — AI Autocomplete (get suggestions inline)

Era 4 — Agentic AI (delegate tasks to an autonomous agent) ← We are here

The developers who thrive in Era 4 won't be the fastest typists. They'll be the best directors—people who can decompose problems, set constraints, review output, and guide an agent toward the right solution.

The skill isn't "can you write a React component?" anymore.

It's "can you describe what the component should do, review what the agent built, and course-correct in real time?"


Final Take: It's Not About Replacing Developers

Every AI tool gets the same question: "Will this replace me?"

No. But it will replace the version of you that spends 60% of the day on mechanical implementation.

Claude Code doesn't have taste. It doesn't know your users. It can't decide whether a feature should exist. It can't navigate a product meeting, push back on a bad spec, or mentor a junior developer.

But it can turn your architectural decisions into working code faster than any tool I've used. And that's not a threat—it's a superpower.


What's your biggest frustration with current AI coding tools? Is it context awareness, copy-paste fatigue, or something else? Drop your take below.




About the Author

Suraj Khaitan — Gen AI Architect | Building scalable platforms and secure cloud-native systems

Connect on LinkedIn | Follow for more engineering and architecture write-ups

