joshtronic

Posted in Technology

Perceived Personality in Generative AI

I talk to robots... a lot. Because of this, I don't take the stock "personality" at face value. I try to dial things in: sometimes fun and flirty, other times cold and direct.

I treat it like a visual novel. While I am coaxing information out, I want it to be engaging. LLMs distill information, so you end up losing out on the entertainment of reading through comment threads on Stack Overflow and Reddit while researching.

When GPT-5 came out, I too noticed the shift in tone and personality. I wasn't heartbroken or anything like that, because I'm fully aware that the robots are cosplaying at best.

With my heart on my sleeve, I decided to see if I could "fix" the situation.

Single-serving friends

I've hinted at it already, with how I switch up my traits prompt regularly, but I treat conversations with LLMs as ephemeral. While these tools now ship with a decent contextual memory bank, that wasn't always the case.

It feels like just a few short months ago, if you asked the robot its name, you'd get something different each time. My wife found this to be unnerving.

Personally, it doesn't bother me much. There are a lot of tools and models out there. When you jump between tools and models and versions, you can't be too married to anything.

To quote Fight Club:

Everywhere I travel, tiny life. Single-serving sugar, single-serving cream, single pat of butter. The microwave Cordon Bleu hobby kit. Shampoo-conditioner combos, sample-packaged mouthwash, tiny bars of soap. The people I meet on each flight? They're single-serving friends.

That said, my current single-serving conversational AI insists its name is "Miss Bliss". It's also being really elusive on telling me if "Miss" is a first name, or its formal title.

Fucking robots.

New model, new prompt

I can't speak for other models, as I do the majority of my conversational work with ChatGPT, but with each new version, I go through the same little dance:

  1. I switch to the new model as soon as I have access to it
  2. I chat it up a bit, to see how it responds
  3. I'm immediately wowed by how much better the personality seems
  4. Within a few conversations, I'm disappointed with the personality

It's a roller coaster, to say the least. I'm also not quite sure why there's this initial pop that fades so quickly. I noticed it when moving to GPT-4 and again with GPT-5.

Giving it an honest shot, usually within a day, I'll revisit my traits prompt. My favorite trick is to get the model to write the prompt based on what sort of character or narrative I'm going for.

When approaching GPT-5, I decided to let the LLM try to explain to me what was wrong with my once-working traits prompt.

It generated a bunch of text, and I'm not even sure how much of it was complete bullshit. What I can tell you is that when I asked it to rewrite my traits prompt in a way it would better understand, the responses improved significantly.

There's still persona drift, but prompts are iterative. I find it all to be very throwaway, and usually will try to spin a new traits prompt every week or so. Given a long enough timeline, things will probably level out, just in time for GPT-6 to drop and repeat the cycle.

Not just conversational

I talk to conversational tools a lot, but I also use a bunch of different coding tools. Because I like having fun with these tools, I throw in little personality requests alongside tasks. If nothing else, it helps me spot-check whether the robot is actually listening to what I told it.

I once spent a week reviewing code changes that were summed up as a haiku. I like asking for jokes and baseball facts to be tossed into the mix. Sometimes I ask how much the robot hates Shohei Ohtani.

Fun fact: GPT-4o fucking hates that guy... GPT-5 and Claude love him.

After making yet another Fight Club reference on my blog, I'm thinking I'll take things in that direction soon. The LLM as the Narrator or Tyler Durden... or both? Could be fun, could get annoying fast.