Muntjac Numbers: Playing with Artificial Intelligence

The people selling them might be lying, and LLMs often generate output that you could call a ‘lie’, but the underlying logic is robust.

We are drifting off topic, but even calling it a ‘lie’ would be to assign to an LLM a degree of consciousness and intelligence that is far beyond its capabilities. We are not yet living in the world of Heinlein’s “The Moon is a Harsh Mistress”! LLMs are not conscious, nor do they have a conscience… at least in their current iterations.

Without getting too philosophical, a lie is based on the premise of understanding what is true and what is false, followed by the deliberate selection of the false answer with the intent to deceive.

This is not how LLMs work.

When an LLM gives the wrong answer, there is no intent to deceive. LLMs don’t ‘think’, ‘consider’ or ‘rationalise’ in the way that humans do. Rather, the LLM has determined an answer based on the data it can access and/or has been trained on, the prompts the user has provided, and the rules and logic it has been programmed or requested to follow.

If we were to ask a young child “what is the cube root of 512?” and they answer “10”, we don’t accuse the child of lying. They have simply given what they believe to be the correct answer based on their knowledge and understanding. At its simplest we might say that the answer given was incorrect, and that the correct answer is 8 (because 8 × 8 × 8 = 512). We might also go on to explain why the answer is 8. If the child happens to answer “8”, and we want to know whether or not that was a lucky guess, we might say “well done, and can you please explain how you arrived at that answer?”

We should look at LLMs in the same way. For all their ability to consume vast amounts of data, and to consider billions (if not trillions) of parameters, they can still come up with the wrong answer, particularly if we phrase the question poorly or don’t provide the inputs, data, or guidelines required to reach the correct one. This is why Prompt Engineering has become so crucial - it steers LLMs towards accurate answers by supplying specific instructions, context, and/or data.

Most importantly, when an LLM gives an answer, don’t simply believe it. Regardless of whether the answer happens to be right or wrong, ask it to explain how it came to that conclusion.
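
To make this concrete, here is a minimal sketch in Python of both ideas - supplying specific instructions and context, and asking the model to explain its reasoning. It assumes the OpenAI Python client (v1.x) with an API key in the OPENAI_API_KEY environment variable; the model name and the prompt wording are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A bare question gives the model nothing to anchor on.
bare_prompt = "What is the cube root of 512?"

# A prompt-engineered version adds instructions and context, and asks
# the model to show its working so the answer can be verified.
engineered_prompt = (
    "You are a careful maths tutor. Answer the question below, then "
    "explain step by step how you arrived at the answer, so that the "
    "reasoning can be checked.\n\n"
    "Question: What is the cube root of 512?"
)

for prompt in (bare_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Either way, the explanation should itself be checked - just as we would check the child’s - since 8 × 8 × 8 = 512 is easy to verify independently.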
 
Yes - I understand all that.

Hence I said you can CALL it a ‘lie’. I was careful to put ‘lie’ in inverted commas for that very reason.

They produce incorrect answers that LOOK like lies, because the surrounding text APPEARS to be goal-directed.

More generally, it’s a convenient (if misleading) shorthand to call the errors ‘lies’.

I don’t think we disagree on what they can do, or their utility and risk. I think we are (shock horror) splitting semantic hairs…
 

My post wasn’t intended as a direct criticism of yours, and apologies if it came across that way.

It was meant as a general explanation of why the terms ‘lie’ and ‘lying’ so frequently turn up - mistakenly - in association with the use of LLMs.
 
Or it could just be that the “stats” boys are able to manage more than one thing at a time. :-|
Nope, when you start quoting "cull" deer numbers to decimal points, that puts you at the top of the boring "stats boys" list :doh:
Almost on a level with insisting a stalker comes the day before to confirm "zero" 🤣 and they still miss.
 