Tech Support in the AI Hallucination Age

We're living in one of the greatest technological times in human history. We've been globally connected for a while now, and our access to information is unparalleled. Artificial intelligence, while far from perfect, has closed the knowledge gap in a lot of ways.

The problem is AI hallucinations, or as I like to call them, "robot bullshit". Sprinkle in some human laziness and you have a recipe for disaster.

I'm not talking about feeding the disinformation machine due to a lack of fact checking. I'm talking about the weaponization of epistemic time theft.

This is a perfect storm based on a number of factors:

Surrounded by Yes-Bots

The current state of generative artificial intelligence is compliant to a fault. I don't need a robot to tell me I have a good idea. I need the exact opposite of that.

I'm constantly tweaking and trying to dial in my customizations and contexts to be less compliant. It's been an uphill battle, but it's one that's important to me.

Unfortunately, it's not very important to a lot of people. We have a generation of kids with access to amazing tools, and they're choosing to let them do their homework instead of using them to learn more.

Speed Doesn't Mean Accuracy

There was a time when we were skeptical of speedy machines. Probably something locked in our lizard brains from the last time there was a robot uprising.

With early calculators, as recently as the 1980s, the speed of the results directly affected how much users trusted them. There are other stories out there about blinking lights, useless progress bars, and unnecessary latency being used to "improve" trust while diminishing the user experience.

In this day and age, faster is better. If it's fast, it must be correct, right? An AI is just trying to serve its master, so it's going to rattle off any ol' shit that looks like it's correct.

Generative AI is the professional wrestling of the technology world and most people are marks.

Are You Experienced?

Just because you ask ChatGPT how to perform brain surgery doesn't mean you're a brain surgeon. If you're not a brain surgeon, how would you even know whether the information is correct?

The same goes for anything you ask the robots. If you're not a software engineer, you probably won't be able to call it out when you're supplied some erroneous technical jargon.

I know this to be true, because I call bullshit on the robots damn near daily.

The Perfect Storm of Overconfidence

We now have the world's knowledge and information at our fingertips. If you have a question, any question, you can get an answer in close to real-time.

Fucking fantastic.

But without the domain experience to verify that information, combined with our very human distaste for being wrong, a false sense of confidence emerges.

If you're not familiar, read about the Dunning-Kruger Effect.

Epistemic Time Theft

So we have this perfect storm of ignorance meeting information. That in itself doesn't create time theft. The theft happens when another human gets roped into the narrative.

Something I've been seeing more and more lately is support requests that have zero factual merit whatsoever. They tend to include some sort of coding aspect, interfacing with an API and such, and the details provided are just wrong.

I've experienced this on multiple products, and it's always the same:

I'm doing XYZ and it's not working

Then you take a look into XYZ, maybe cross-reference the real, public-facing API docs, and realize that what they're trying is just plain wrong.

I'm not talking about bugs either. Definitely not edge cases or platform-specific nuances. I'm talking about things like hallucinated API authentication patterns, versioning, and even endpoints.
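
To make that concrete, here's a minimal sketch of the pattern in Python. Everything here is invented for illustration, the service, the endpoints, the auth headers, all of it; it's the shape of the mistake that matters.

    import requests

    # What the hallucinated support request looks like (invented example):
    # the robot confidently made up an API version, an auth header, and
    # an endpoint, none of which ever existed in the docs.
    broken = requests.post(
        "https://api.example.com/v3/widgets/list",  # there is no v3, no /list
        headers={"X-Api-Token": "abc123"},          # made-up auth scheme
    )

    # What the real, public-facing docs describe (also invented, but
    # representative): different version, different auth, different verb.
    working = requests.get(
        "https://api.example.com/v1/widgets",        # documented endpoint
        headers={"Authorization": "Bearer abc123"},  # documented bearer auth
    )
    working.raise_for_status()
    print(working.json())

Swap in any real API and the story is the same: the request the user insists they're making never existed anywhere in the documentation.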

Humans are already pretty bad at reading documentation. If we weren't, things like LMGTFY and RTFM wouldn't exist.

I'm all for helping a fellow human. People make it hard when it's so very evident that they didn't really "try everything" and the solution to their problem is clearly documented.

For me, reaching out to support is a last-ditch effort, not a first line of defense. If you're going to write in to a human being and send them some generative AI nonsense that you didn't bother to understand, let alone fact check, you're stealing time from that human being.

You wouldn't call up a mechanic and ask why the steering wheel doesn't work, while holding the spare tire, would you?

If you're okay with wasting another human's time because of something a robot told you to do, you absolutely deserve to talk to one of those less-than-helpful AI support agents everybody is "hiring" these days.

How Do You Know It's AI's Fault?

Okay, so some people may actually be reading the documentation and trying their best. Others may have stumbled upon an inaccurate or outdated tutorial out there.

I decided to start taking these support requests and running the questions through a conversational AI. I stuck with ChatGPT since it's fairly ubiquitous at this point.

I'm not going to break down the prompts I used or anything like that. I went in and asked some simple questions. "How do I authenticate with this API?" "How do I do this thing on this API?" and the like.

Not shocking, since I'm no stranger to the robots lying to me: the results were the same sort of wrong as the technical support inquiries. Like, verbatim.

Having the domain expertise, I was able to coax the correct syntax out, usually in 2 to 3 attempts. No fancy wordsmithing either, just saying "Bullshit." until it wised up.

Probably Going to Get Worse Before It Gets Better

I keep saying it, but we're really early with the genAI wave. Because we're early, we're going through a lot of the learning curve and growing pains every single day.

I don't think there are any solutions for this particular problem. There's no shortage of people willing to ask a question when they could easily source the truth themselves.

The idea that a user didn't read the API docs before writing in isn't a new concept. No, not at all. This time it's a bit different, though: the user having technical issues is probably less technical rather than just lazy.

Because they're less technical, the support exchange will have more volleys. If you get them past the authentication issue, you can expect an issue with some made-up endpoint. If you get them to the right endpoint, they'll need help with the payload, and so on.

I don't like it, but the tools are improving damn near daily. We're just gonna have to wait this one out, I suppose.