
AI Coding Tools I Use to Ship Faster – June 2025

As I mentioned a few weeks back, I'm leaning really hard into generative AI development. The impact genAI has had on society is hard to miss, but as a developer in my 40s, it's easy to take the low road and dismiss these tools because yadda yadda, AI writes bad code.

Stop Complaining, Start Shipping

Got some bad news for y'all:

A few decades in the trenches makes it pretty clear that "done" really is better than perfect. Sure, there's a time for perfect - but most SaaS just needs to work.

If nothing else, you should be shipping fast, gathering feedback, and then shipping again. Repeat until the stack of money is taller than you are.

I've said it before and I'll say it again, shipping fixes everything.

This experience has also let me do, and watch other people do, some dumb shit. Like, really fucking dumb. I don't want to formally document any of it because it's really that bad.

But we all survived and moved on.

My Current AI Tool Stack

You're not here to listen to me soapbox about shipping, so here's the tech stack I'm personally using on a daily basis to ship code faster:

Conversational AI - ChatGPT

ChatGPT is my ride or die. It's where I started and where I still do the majority of my brain dumping and research tasks. Before I write any code, I will chat through sanity-checking my assumptions and ideas. Then we'll work on how to construct prompts to boss other robots around.

IDE - Cursor

Let the record show that I do not like VS Code. I'll always be a Vim/Neovim truther, but Cursor is where I do the majority of my (watching the robot) coding these days.

That's not to say everything happens inside Cursor, though. I still hit the command line to rough things out, mostly laying out a series of TODO comments for the robot to knock out for me back in Cursor.
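
To make that concrete, here's roughly what one of those scaffolds looks like before I hand it off. Everything here (the slugify helper, the file name) is a made-up example, not code from a real project:

```python
# slugify.py - a scaffold I'd leave for the robot; the TODOs are the spec.

def slugify(title: str) -> str:
    """Convert a post title into a URL-safe slug."""
    # TODO: lowercase the title and strip surrounding whitespace
    # TODO: replace each run of non-alphanumeric characters with a single hyphen
    # TODO: trim any leading/trailing hyphens from the result
    # TODO: return an empty string as-is instead of raising
    raise NotImplementedError("robot: your turn")
```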

Cursor brings a handful of models to the table, but I mostly stick to "auto" mode. I'll save the details on when I switch models and how that whole workflow looks for another post.

CLI - Claude AI (via Claude Code)

These tools are listed a bit out of order, as I tend to start with ChatGPT and then move over to Claude Code to help with context generation. As mentioned, we'll get into the weeds on my actual workflows at a later date.

Claude Code does a great job of scanning my codebase and generating documentation that helps keep all the robots on track. I also tag Claude in when Cursor makes me want to cry because I can't will it to do what I want.
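
If you haven't seen it, that documentation mostly lives in a CLAUDE.md at the repo root, which Claude Code reads automatically (its /init command will generate a starting point for you). A trimmed, hypothetical example of the kind of thing mine include — the specific commands here are just placeholders for whatever your project actually uses:

```markdown
# CLAUDE.md

## Commands
- `pytest` - run the test suite before calling a task done
- `ruff check .` - lint; fix any warnings you introduce

## Conventions
- Match the existing code style; don't reformat files you didn't touch
- Small, focused commits with descriptive messages
```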

GitHub Copilot

Copilot blew all of our minds when it first came out. Typing in a few keystrokes and then having 100+ lines of perfectly formatted code appear was downright magical.

At this point, though, I barely use Copilot for code generation, at least in my editor. I have started using it fairly religiously for PR reviews, which is extremely helpful, especially when I'm the only developer on a project.

Real Tools Still Matter

Everybody wants to talk about the low-quality code the robots write, but the robots actually try their best to match the style of the existing codebase.

Garbage in, garbage out, right?

The best way I've found to combat this is to already have good development practices in place. For most of my projects, that means automated tests, a linter, and CI that actually runs both.

I'd like to think this is a no-brainer, but I couldn't tell you how many times I've had to call out a lack of tests on a project. Humans are inherently lazy, and things like tests are the first to get cut.

Also, people say that the robots like to write tests, but in my experience, they're hit or miss. So effectively, the same output as a human developer.

Ensuring you have the tools in place and making sure the robots are aware of them gets you really far. When the robots have a way to test their work, they will. And when they don't, you can scold them like a good parent.
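
Back to the slugify scaffold from earlier: the guardrail might be nothing fancier than a test file the robot can run before claiming the job is done. Again, a hypothetical sketch, assuming pytest:

```python
# test_slugify.py - run with `pytest`; if the robot can run this, it will.
from slugify import slugify

def test_basic_title():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_punctuation_runs():
    assert slugify("AI -- Tools!! (June 2025)") == "ai-tools-june-2025"

def test_empty_string_passes_through():
    assert slugify("") == ""
```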

Budgeting for Velocity

There's no such thing as a free lunch. If you reframe your thinking around "hiring" robots, it becomes a lot easier, at least for me, to justify the costs associated with them.

First, we have some static recurring costs: the monthly subscriptions for tools like ChatGPT, Cursor, and Copilot.

There are, of course, discounts for annual subscriptions, and different tiers for some of these options. These tools are moving so fast that I struggle with the idea of committing to a full year, because next week I may be onto the next big thing.

Next, we have usage costs, if you're willing to pay the price of admission: pay-as-you-go API billing, which for me mostly means Claude Code.

Claude's pricing is pretty steep, so I tend to approach it on a per-task basis; a task usually runs me $5 to $15 a pop.

These costs are more prohibitive to an indie developer than to a team. And it's not like giving your team a $100/month bonus is going to get you any additional output, either.

That's the unfair advantage nobody talks about. Just like in baseball, deep pockets mean more pennants - and these tools are my roster.

The AI Stack Changes Fast

This post is not evergreen. In fact, I'll call it right now: it will age like milk. If I had written this post a week or so ago, I couldn't have mentioned that Claude 4 seems significantly better than Claude 3.7 in terms of output quality.

I can't even imagine what these tools will look like by the end of the year, let alone by July. I still have my own list of things to explore: messing with OpenAI Codex more, trying out Aider, and experimenting with local LLMs.

It's an exciting time to say the least.

Speed Is the Only Thing That Matters

The tools mentioned make up the stack that helps me ship faster today. They help me with both greenfield projects and legacy codebases. I've been able to punch the gas on features large and small, both at work and on personal projects.

It's not perfect, but I'll be the last person to tell you that I write perfect code. Hell, when I write code that gets tests to pass on the first try, I'm skeptical and spend twice as long trying to figure out where the bug actually is.

Not everybody is built for speed, I totally get that. But if you're hyper-focused on the code quality from genAI, I'd say you're missing the biggest point of all: bugs happen.

The thing is, when you read about the successes and failures of companies, it generally doesn't include tales of code quality. It includes speed to market, speed to ship new features, speed to iterate on feedback, and the ability to pivot fast.

Engineers are gonna engineer - but at the end of the day, speed wins.