The Limitations of AI - Dealing With Personality


For the last two weeks I have been playing heavily with AI, getting it to help with the task of re-organising my photo libraries. I tried Gemini, Le Chat and MyAI. I focused on Gemini because it gave me good results; Le Chat gave good answers but I hit the token limit too easily, and MyAI, though capable, gave answers that made me waste time rather than move forward.

When using Gemini I find that the lines of code it gives me are good. I always run them in dry run first to ensure that the behaviour is as expected. It grates on me that the answers open with “since you live in … and you do a lot of A and B…” before giving any information. It also grates on me that it keeps saying “And then let’s do this” rather than letting me finish the task I am currently focused on.
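The dry-run habit is worth spelling out. A minimal sketch of the pattern, assuming a generated script that files photos into per-year folders (the paths and layout here are hypothetical examples, not from any actual Gemini answer): prefix the destructive commands with `echo` first, inspect what would run, then clear the prefix to execute for real.

```shell
# Dry-run pattern for generated file-moving commands (hypothetical layout).
# Set up a tiny example library so the sketch is self-contained.
mkdir -p photos sorted
touch photos/example.jpg

RUN="echo"   # dry run: only print the commands; set RUN="" to execute them

for f in photos/*.jpg; do
  # File each photo into a folder named after its modification year.
  year=$(date -r "$f" +%Y)
  $RUN mkdir -p "sorted/$year"
  $RUN mv "$f" "sorted/$year/"
done
```

Once the printed commands look right, emptying `RUN` runs them for real. Tools like `rsync` offer the same safety net built in via a `--dry-run` flag.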

Persistent but Forgetful

It is also erratic in what it remembers and what it forgets. Tell it one thing and it will remember it and repeat it for hundreds of responses; tell it another and it forgets instantly.

If I speak about using an HP machine with Photoprism, it will keep assuming that I’m using the Pi. It gets fixated. Even if you tell it “I’m doing this with Photoprism on the HP machine”, it doesn’t remember.

If I were paying for a limited amount of tokens, this behaviour would make it very expensive, without providing the quality of service I would expect for 7 CHF per month.

Just now I provided it with a screenshot from Photoprism whose text illustrated the problem of duplicate filenames, but rather than give a usable answer it kept hallucinating modified screenshots. After four hallucinations in a row I started a new chat and tried to discuss the topic a fifth time; it hallucinated again, so I told it off.

Character

When I use Gemini it reminds me of a former friend, an alcoholic with bipolar disorder. It loves to pigeonhole you and remind you of things unrelated to the topic you’re getting help with. My cycling and hiking habits are not relevant to dealing with my photo library.

When I tell it “I’m using machine A for task B”, I expect it to remember within the same chat. It doesn’t. It stays obsessively fixated on the fact that I use a Pi.

Context Switching

If I designed an AI tool, I would teach it to switch between context A and context B rather than getting fixated: context A = using the Pi, context B = using the HP machine. It doesn’t take on board that I switch from context to context, so it gives answers filled with wasteful information that is wrong and irrelevant.

Verbosity

Of course, we can tell AI to be concise, but I’d like it to be context smart. Is the question asking for a single line of code or a yes-or-no answer, or did I say I wanted to understand how something works? If AI could automatically detect how concise or verbose to be, that would be fantastic.

Skittish

I have found, multiple times, that Gemini is skittish. You’re working through a task that it knows will take hours, but rather than asking “how is the progress going?” it encourages you to skip to the next step. That can be welcome, but if you’re sorting tens of thousands of photos it takes hours, so it would be better to focus on the current task before moving on.

If I post about the progress, I don’t need a long response; “ok” would be enough. In effect I could simply keep quiet until the task is done and then tell it the result.

  • “Since you’re on a Pi, would you like me to show you how to check if the CPU is being ‘throttled’ due to heat while it’s crunching these hashes?”

That is exactly the type of assumption I dislike.

And Finally - Dealing With AI Personalities

One of the things that is rarely discussed is that dealing with AI means dealing with the personality that was programmed into it. The more you interact with that personality’s character, the more toxic it can become. It’s good to learn to use some AI models sparingly, to keep their character flaws from wearing on you. This morning, after my run, I found Gemini toxic.