Cursor AI and Claude - impressions from personal projects in 2025
I played a bit with Cursor Composer + Claude 3.7 Sonnet over the last couple of months. The results were great - in my excitement, I even posted a LinkedIn update:
Exciting times!! A year ago AI assistance was just a bit smarter than autocomplete. Meaningful productivity gains were uncertain.
In 2025? It just saved me hours this week, if not a couple of afternoons.
Where AI assistance already works well
Dependency update. The Gatsby update mentioned above was quite complex - as I mentioned, it involved around 10k words of docs. Proof:
We’ve seen in the past projects like Renovate aiming to reduce the toil related to updates. If you install it early and continuously use it, you’ll be in a good place. In this case, however, I used AI to recover the project from being years behind in updates.
PoC. Recently at my consulting job, I had an interesting case. The customer works on a sophisticated hardware technology related to AI-based sensors. We plan to create a demo of this technology as a physical stand and make it entertaining for the visitors. I came up with the idea that we could use OpenAI API to simulate real-time conversation with the sensor. I knew that with threejs I could make a 3D visual changing its shape when sound is being played. All in all, I knew that was possible, but I still needed a working PoC.
With Cursor AI + Claude I was able to combine all of the above in a single afternoon, using NodeJS, which I'm not experienced with. It didn't matter. It was jaw-dropping how quickly I was able to back my idea with a working PoC.
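The core trick of that kind of PoC can be sketched in a few lines: an AnalyserNode from the Web Audio API samples the playing audio, and each frame the three.js mesh is scaled by the current loudness. This is my own minimal reconstruction, not the actual demo code - the scene setup and the `mesh` object are assumed to exist already:

```javascript
// Minimal sketch of an audio-reactive visual (assumes a browser with the
// Web Audio API and a three.js `mesh` already added to a rendered scene).

// Pure helper: average byte-frequency data (0-255) into a 0-1 loudness value.
function loudness(frequencyData) {
  let sum = 0;
  for (const v of frequencyData) sum += v;
  return frequencyData.length ? sum / (frequencyData.length * 255) : 0;
}

// Wire an <audio> element through an analyser so we can read its spectrum.
function attachAnalyser(audioElement) {
  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(audioElement);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  source.connect(analyser);
  analyser.connect(ctx.destination); // keep the sound audible
  return analyser;
}

// Per-frame update: deform the mesh based on how loud the audio is right now.
function animate(analyser, mesh) {
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);
  const scale = 1 + loudness(data); // 1x when silent, up to 2x at full volume
  mesh.scale.set(scale, scale, scale);
  requestAnimationFrame(() => animate(analyser, mesh));
}
```

Hooking this up to the OpenAI API side is then mostly a matter of pointing the `<audio>` element at generated speech.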
Page visual refresh. Maybe you’ve noticed that my blog is refreshed. Some of the pages like offer or recommended look absolutely different. That’s the effect of asking the AI assistant to “make this page look more modern” and applying some of the suggestions.
There's more - some time ago I decided to build my own SaaS. One of my first steps was to release a simple landing page with a waitlist. I wanted the simplest possible approach - ideally a static page hosted on S3. Dead simple. Alternatively, I could have used website-builder software with drag 'n drop UI features.
The AI assistant made it possible to generate the https://feedscout.io/ static page in a couple of hours. The majority of that time was spent creating visual diagrams and tweaking messaging (now I know I could use Canva AI instead…). Without Cursor I would probably have bought a template, since my frontend skills are rusty and I didn't want to push an ugly site into the world.
SEO improvements. I asked AI to design an SEO improvement plan and commit it as a document to the repository. The plan had its points laid out as a checklist, and Claude checked off completed points as it went through them. My SEO knowledge is pretty basic, so AI did a great job at filling my gaps and reminding me about practices I hadn't paid much attention to. I also like how plans or documents committed to the repository radically improve the predictability of working with Cursor.
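As an illustration, the committed document looked roughly like the hypothetical excerpt below - a checkbox list the assistant could tick off across prompts (the real plan was longer and project-specific):

```markdown
<!-- seo-plan.md — hypothetical excerpt -->
## SEO improvement plan

- [x] Add a unique <title> and meta description to every page
- [x] Generate sitemap.xml and submit it to Google Search Console
- [ ] Add Open Graph / Twitter card tags
- [ ] Add structured data (JSON-LD) for articles
- [ ] Fix images missing alt attributes
```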
JS in general. I previously used Claude with Ruby and Elixir, and it was way less useful than with JS and TS. More training material, I suppose!
Basic self-check. A year ago I was happy that when I passed my stack trace to AI, it would quickly figure out what was wrong - faster than I'd find an answer on Stack Overflow. But now that Cursor has visibility into your terminal and can run terminal commands, it feels like Composer has another "sense". It figures out what has to be done, attempts to do it, and then checks whether it was done; for example - was the file modified (using `cat`)? Is the API call returning the correct status (using `curl`)? Is `npm run develop` returning any errors? I have to do much less sense-checking in 2025 compared to a year ago.
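Those self-checks are just ordinary shell commands whose output and exit codes the assistant can inspect. A trivial illustration of the pattern (file names and the endpoint are made up):

```shell
# Make an edit, then verify it actually landed -- the same pattern Composer
# follows with `cat`/`grep` after modifying a file.
printf '.card { display: grid; }\n' > /tmp/demo.css
grep -c 'grid' /tmp/demo.css
# prints 1; a zero match count (non-zero exit) would mean the edit is missing

# Same idea for an API: check the HTTP status, not just "it didn't crash".
# curl -s -o /dev/null -w '%{http_code}' http://localhost:8000/api/health
```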
Where AI still struggles
Introducing breaking changes. I often ran into situations where AI introduced a change that completely broke other pages. It could be styling, responsiveness, or logic - anything can go wrong. At the same time, AI is not great at reversing the changes it proposed. Many people on social media complain that they broke their project and now have no tool to fix it… 🤦‍♂️
Of course, version control is our friend, and so are tests. Visually breaking changes, though? I haven't figured those out yet, beyond manual code & change review.
Creating overcomplicated solutions. I've seen AI propose bloated solutions multiple times. For my projects it wasn't a big problem, because I didn't care too much and I wasn't working with other developers. However, the complexity made it harder for both AI and me to debug when something went wrong. Now I often do a separate prompting exercise to lean out/simplify/clean up the solution before committing.
Weird workarounds. For some reason, AI wasn't able to introduce changes to some of my `.scss` files, while it worked perfectly for `.css`. I noticed that Composer started creating temporary `.css` copies of the `.scss` files and replacing the `.scss` usage. I've seen a similar situation with other types of files (temporary `.html` files).
After I started rejecting this as a solution, Composer did another weird thing - it started using terminal `echo` commands to modify the `.scss` files. In other words, there was some bug in the Cursor IDE affecting Claude - it couldn't use its regular interface to interact with the files… so the AI found a workaround using a different interface it had access to. Pretty brilliant. Of course, in my case, absolutely unnecessary, but I was still surprised.
Looping while debugging. Every day I had a situation where a weird bug would cause Cursor to try something, see it wasn't working, attempt a different solution, see that it wasn't working either, and come back to the first solution again. After 2 iterations it's usually clear that you have to dig deeper into the code yourself. Interestingly, in my case I could "prompt a suggestion" that would steer the AI onto the right path.
For example, AI once fixated on using `column-count` to change the visual design of the website. It worked in 90% of the places. For the rest, it would loop through slight variations of the same approach (5-6 cycles). When I prompted that I thought the number of columns was controlled not by that CSS property but by several inner HTML elements, it immediately fixed it. That felt like a real pair-programming experience 😂.
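For context, the two layouts can look identical on screen but are controlled very differently - `column-count` flows a single element's content into columns, while a row of sibling elements is governed by the container's layout. A contrived CSS illustration of the trap (class names are made up):

```css
/* Case the AI assumed: one container whose CONTENT is split into columns. */
.article { column-count: 3; }

/* Case it actually was: several sibling elements laid out side by side.
   Changing column-count on this container does nothing - the "columns"
   are the child divs themselves, so the flex rules must change instead. */
.cards { display: flex; }
.cards > div { flex: 1; }
```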
Simple improvements I’d see
Some things I’d love to see in Cursor in the future:
- Ability to visualize frontend styling changes as multiple propositions/variations, with the ability to drill down into the desired one (like Midjourney, but for frontend)
- Self-diagnosing whether an issue may be caused by the fact that I haven't accepted all of the suggestions yet. I've noticed that it's possible to run the commands (proposed as the later steps of a Cursor suggestion) before the file changes (earlier steps) are confirmed & saved. To Cursor this looks like a suggestion that didn't work, so it proposes an alternative approach, usually an overcomplicated one.
- Self-diagnosing based on the browser developer toolkit. I think it's possible through the "Operator" API, a browser in a container, a plugin, or many other approaches.
- Proposing contract and unit tests in the first phases of the prompting exercise and running those tests on subsequent prompts. Creating visual snapshots to catch UI regressions that don't make sense. If the results are inconclusive, Cursor asks follow-up questions.
Summary
I think Cursor is going in the right direction, and it quickly became my favorite IDE, even though I'd worked with JetBrains software for years. Excited to see what it brings next!