AI Code Completion is Cognitive Castration

AI code completion. It's great, right? You sit down, open up a file, navigate to a line of code. BOOM! Like magic, a dozen or more lines of code show up. You give them a quick glance, maybe try running the code. Amazing! How'd it even know that's what I wanted to do?

Wait... what was I trying to do again?

LLMs are quick on the draw. They answer questions you haven't even asked. They remind me of one of my favorite exchanges from Fight Club:

The Narrator: "When people think you're dying, they really, really listen to you, instead of just..."

Marla Singer: "Instead of just waiting for their turn to speak?"

The robots don't wait for their turn to speak. They are designed to answer, and to answer as fast as possible. This isn't a feature, it's a design flaw.

Premature content ejaculation

Case in point, I sat down to write this blog post. I added my front matter:

---
layout: post
title: AI Code Completion is Cognitive Castration
category: Technology
---

And set out to write a captivating introduction paragraph. Before I could even type a single character, I was presented with this fucking nonsense:

We're living in a time where artificial intelligence is revolutionizing how we
interact with technology. AI code completion tools, like GitHub Copilot and
OpenAI's Codex, are designed to assist developers by predicting and suggesting
code snippets. While these tools can significantly enhance productivity, they
also raise concerns about cognitive castration.

"About cognitive castration", really got me. That was my edgelord attempt to come up with something truly click-worthy as I struggled to come up with how to articulate "thought suppression" outside of the context of OCD.

Hopefully it worked.

Context is not a question

Ensuring an LLM has enough context is absolutely a problem. Larger models have brought larger context windows, and I think that's great. They allow for longer conversations and, ideally, help keep hallucinations to a minimum.

The problem with code completion, more so than with conversational AI, is that the existing content of the file is the context. With that context established, the robots immediately start spewing forth an answer.
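To make that concrete, here's a hypothetical sketch (the function, its name, and the suggested body are all invented by me, not pulled from any particular tool):

# You type this much and pause to think:
#
#     def dedupe_users_by_email(users):
#
# ...and before you've decided whether casing matters, whether order
# matters, or what the function should even return, the ghost text
# has already decided for you:
def dedupe_users_by_email(users):
    seen = set()
    deduped = []
    for user in users:
        email = user["email"].lower()  # it chose case-insensitive matching
        if email not in seen:
            seen.add(email)
            deduped.append(user)  # it chose to keep the first occurrence
    return deduped

Maybe that's exactly what you would have written. You'll never know, because you never got the chance to decide.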

When this happens, you've been denied the pleasure of thinking through your problem. Even if you take the time to think through things, rather than accept the presented code, you risk having your own mental model primed with the robot's solution.

This is bad. Like, really bad.

Great engineers are great troubleshooters

I've been preaching this for years: the best engineers know how to figure things out. They have S tier debugging skills. Not only can they suss out an issue, they can think through and implement a solution, usually in record time.

These skills can't be taught. If they could, coding boot camps would have focused on them instead of on building frontend-mostly apps and getting creative about the "work history" on your resume.

Fast forward to the age of generative AI. "StackOverflow as a design pattern" has been replaced with "hitting tab as a design pattern". I'm calling this "push-button engineering".

But developer productivity is UP UP UP

We're still early in the age of generative AI and agentic coding. Bean counters want increased productivity with a lower headcount. Engineering teams are even making these tools mandatory.

The thing is, there's going to be a difficulty spike. Once that spike hits, real engineering talent is going to be more important, and more valuable, than ever. That is, assuming everybody doesn't go soft along the way.

TAB, TAB, TAB... "look mom, I'm engineer"

Engineering kegels

I've fully embraced these tools, but I'm still extremely cynical. I'm also very fearful that I'll wake up one day and realize I've forgotten how to code. I'm absolutely not letting that happen. I'm still putting in the reps every single day.

Cursor is great, but I don't use it for every single task. I spend extra time reviewing generated code, so it's hard to say there's even a huge time savings going on.

When bugs come up, I refuse to tab my way through them. I'm especially mindful to make sure I understand the problem, and if I don't fix it myself, I can at least articulate it in a prompt rather than blindly asking the agent to fix something.

Same shit, different tool

A few decades ago, I dug in my heels and refused to use smart code completion. Eclipse was the hotness at the time. In my youth, I felt it was better to learn how to write the code and really understand the syntax, rather than letting the machine do it for you.

Funny how some things never change.