Introduction
Claude Code has been getting a lot of attention lately. I've been watching its YouTube mindshare grow, and I finally decided I needed to try it out for myself. Since I've been using Cursor so heavily, I was curious what value Claude Code actually provided: how I would incorporate it into my workflow, and whether it complemented or replaced Cursor.
The topic of combining Claude Code with Cursor has been covered in several new videos over the past week, and I watched each of these:
Each video is interesting and informative in its own right. This is where I learned that Claude Code is now offered as part of Anthropic's subscription plans and is no longer limited to metered API access. However, none of them addressed whether combining Claude Code with Cursor actually improved anything. So I decided I needed to try it for myself.
Disclaimer - this is extremely early feedback. It's only been a single day, and I am sure my impressions will be completely different in a month.
Why I Hadn’t Tried Claude Code Before
To be completely honest, I hadn't seriously considered Claude Code before now because I just have too many things in flight at the same time. Getting up to speed on a completely new technology stack, its attendant build chains, best practices, etc., is complex enough on its own. But that's why I'm using Cursor in the first place: to do most of the heavy lifting. I didn't want to throw something else into the mix and get distracted by the tool instead of focusing on the code.
Furthermore, as I said in the introduction, Claude Code had been locked behind the metered, pay-as-you-go boundary. I was afraid of using it to implement a complicated feature and having the cost spiral out of control, potentially leaving me with something incomplete that a lesser model couldn't finish. That fear of potentially unbounded cost was the major sticking point. Now that Claude Code is included in the subscription plans, I could justify it.
Installation and Setup Experience
Claude Code is pretty simple to install, at least in common/supported environments. It requires npm, and most engineers have npm set up. On my Ubuntu 24 machine, it was a one-line installation command. However, on my Arch machine, it was a different story. Claude Code expects a global installation, and that did not match my Arch box (to be honest, I don't remember setting up npm on that machine). Fortunately, the error I ran into was a known issue, and Anthropic has a page with details on how to resolve it. The instructions are all bash-based, so if you use an alternative shell (I use fish), you'll need to translate the steps yourself.
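As a rough sketch, here's what the two setups looked like; the global prefix path below is an assumption (run npm config get prefix to find yours), and Anthropic's troubleshooting page gives the bash version of the PATH fix:

```shell
# One-line global install (the supported happy path, e.g. on Ubuntu)
npm install -g @anthropic-ai/claude-code

# On Arch, npm's global bin directory wasn't on my PATH.
# The docs show the bash fix (export PATH=...); the fish
# equivalent uses set -gx. The prefix here is an assumption --
# check `npm config get prefix` on your machine.
set -gx PATH $HOME/.npm-global/bin $PATH
```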
After setup, launching Claude Code is as simple as running claude from the terminal. It was actually a breeze (except for the Arch+fish hiccup). I decided to run it from the terminal directly at first and save any Cursor integration for later. I took this opportunity to create a brain dump of the overall project in a requirements doc, describing the tech stack, domain model, monorepo layout, important patterns, front-end user flows, and so on. I asked Claude to read the requirements doc and either acknowledge that it understood or ask any relevant questions.
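The priming step went roughly like this; the filename REQUIREMENTS.md and the prompt wording are placeholders, not the exact text:

```shell
# Launch Claude Code from the project root
claude

# Then, as the first message in the session, something like:
#   Read REQUIREMENTS.md, which describes the tech stack, domain
#   model, monorepo layout, and key patterns. Acknowledge that you
#   understand it, or ask any clarifying questions.
```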
This part was cool. Claude asked me a few things to make sure it understood relevant patterns and design decisions where I had been too vague in my document.
The First Task: A Rocky Start
So the initial setup experience was a breeze. It was time to give Claude some real work and to try out the much-hyped Claude 4 models. This past week I went through another huge refactor, and it broke everything: the database, the backend, the API, the front end, navigation, supplementary tools such as data seeding. Everything. I should probably write about that experience separately, as it's not directly related to AI. Since everything was broken, I decided to toss out all of the backend's unit tests around the DAO and GraphQL resolver layers. This felt like a great opportunity for Claude 4 Opus to flex its coding chops.
I explained to Claude about the recent refactor and my decision to replace the previous unit tests. I asked Claude to implement a test fixture for a single DAO and let me review it; if I liked it, we could use it as the pattern for testing the remaining DAOs. Claude acknowledged, spun for a good 5 to 10 minutes, and then did nothing. Claude started reporting errors in red; the requests to the backend were timing out. Claude attempted to retry up to 10 times, and then it just stopped. And when it stopped, there was no real discussion: Claude didn't tell me that there were problems, didn't tell me where it was in its implementation, and gave me no feedback at all other than that requests were timing out.
This was not a good beginning. I had literally just signed up for the $100/month Max plan, and at 6pm Central on a Sunday evening, Anthropic didn't have enough capacity to serve my requests. That was exceptionally frustrating. Furthermore, the user experience and ergonomics were lacking: I had to tell Claude that it wasn't working and ask it to resume its work.
Eventually, Claude was able to give me a candidate test fixture for review, though because of the capacity issue it required multiple manual nudges from me to keep it going. The test was higher value than whatever Cursor's "auto" agent had spit out two weeks prior. So I approved it and asked Claude to follow that pattern to implement tests for the remaining DAOs.
Let me say this: Claude 4 Opus is slow. Like, really slow. I understand that's a function of its reasoning ability, but my initial impressions were not good. I had to babysit Claude for several hours as it attempted to write tests, run them, and work through an endless series of compile/type/lint errors. I eventually ran out of Claude 4 Opus requests within my 5-hour window. The code being tested isn't particularly complicated; if Claude was having problems with it, how would it perform elsewhere?
I was quickly losing trust. It was getting late, so I decided to resume the next morning. When I did resume, it took another good half-hour to work through the remaining compile/lint/failing test issues.
The Missing IDE: /ide Misdirection
One of the "features" recently introduced with Claude Code is the ability to integrate with an IDE; that is, if you're running Claude Code from a terminal window within an IDE, it can be context-aware. The extent of this integration wasn't exactly clear, and although it was mentioned in the videos, the feature wasn't explored in any depth.
So after we were done with the unit tests, I tried out the ide integration. Supposedly, it was as easy as:
/ide
But it wasn't. When I tried that command, Claude showed me the list of detected IDEs, and it was empty. I explained to Claude that it was running in a terminal inside Cursor, and Claude tried to debug the issue. It tried setting bash environment variables, and when that didn't work, fish environment variables. After several failed attempts to detect the IDE (including restarts of the IDE, and even my GUI shell), nothing worked. After roughly 30 minutes of debugging, Claude came back and said that the IDE integration feature had been removed.
That was incredibly frustrating, and a complete waste of time. Confusing things even more, the feature is still documented on Anthropic's site. I guess I'll just continue without any kind of IDE integration.
How It Compares to Cursor (So Far)
So how does it compare to Cursor? That's a tough question, and I'm not sure it's the right question.
It's way too early for me to know for sure, but I don't think it's an either-or situation. I think the tools can be complementary, but I need more time to explore. I will say, though, that in the day I've been using Claude Code, I haven't actually done anything in Cursor.
Conclusion
Keep in mind that these are initial impressions and will definitely change. I was initially frustrated that functionality I had purchased (i.e., capacity) was not available. That's not a good look. Wasting time trying to enable Cursor integration, only to later learn the feature had been removed, was also frustrating. And lastly, I was unimpressed by the difficulty Claude 4 Opus had generating new, functional unit tests for my DAO layer. In fact, I was surprised by how many iterations were required. But we eventually got there.
The next steps are to give it something more complicated, and to better learn its strengths and weaknesses and determine how to incorporate it into my workflow.