The people selling them might be lying, and LLMs often generate output that you could call a ‘lie’, but the underlying logic is robust.
We are drifting off topic, but even calling it a ‘lie’ would be to assign to an LLM a degree of consciousness and intelligence far beyond its capabilities. We are not yet living in the world of Heinlein’s “The Moon is a Harsh Mistress”! LLMs are not conscious, nor do they have a conscience… at least in their current iterations.
Without getting too philosophical, a lie presupposes an understanding of what is true and what is false, followed by the deliberate selection of the false answer with the intent to deceive.
This is not how LLMs work.
When an LLM gives the wrong answer there is no intent to deceive. LLMs don’t ‘think’, ‘consider’ or ‘rationalise’ in the way that humans do. Rather, the LLM produces an answer based on the data it can access and/or has been trained on, the prompts the user has provided, and the rules and logic it has been programmed or requested to follow.
If we were to ask a young child “what is the cube root of 512?” and they answer “10” we don’t accuse the child of lying. They have simply given what they believe to be the correct answer based on their knowledge and understanding. At its simplest we might say that the answer given was incorrect, and that the correct answer should be 8. We might also go on to explain why the answer is 8. If the child happens to answer “8”, and we want to know whether or not that was a lucky guess, we might say “well done, and can you please explain how you arrived at that answer?”
We should look at LLMs in the same way. For all their ability to consume vast amounts of data, and to weigh billions (if not trillions) of parameters, they can still come up with the wrong answer, particularly if we happen to phrase the question poorly or don’t provide the inputs, data, or guidelines required to reach the correct answer. This is why Prompt Engineering has become so crucial: it guides an LLM toward accurate answers by supplying specific instructions, context, and/or data.
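As a rough illustration of what Prompt Engineering means in practice, here is a minimal sketch of assembling a prompt from instructions, context, and a question. The function and the example prompts are hypothetical, not tied to any particular LLM or API; the point is simply that the engineered prompt carries far more of the information the model needs.

```python
def build_prompt(question: str, context: str = "", instructions: str = "") -> str:
    """Assemble a prompt from optional instructions, optional context, and a question."""
    parts = []
    if instructions:
        parts.append(f"Instructions: {instructions}")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

# A bare question leaves the model to guess what kind of answer we want.
bare = build_prompt("What is the cube root of 512?")

# Adding instructions and context constrains the answer and invites working-out.
engineered = build_prompt(
    "What is the cube root of 512?",
    context="The cube root of n is the number x such that x ** 3 == n.",
    instructions="Answer with a single integer, then show your working.",
)
```

The difference between `bare` and `engineered` is the difference between quizzing the child cold and first reminding them what a cube root is.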
Most importantly, when an LLM gives an answer, don’t simply believe it. Regardless of whether the answer happens to be right or wrong, ask it to explain how it came to that conclusion.
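That follow-up can be built into the conversation itself. The sketch below assumes a chat-style message list (a common but here hypothetical shape, not a specific vendor API): after the model answers, we append a verification prompt, just as we did with the child.

```python
# A chat-style conversation: each turn is a role/content pair.
conversation = [
    {"role": "user", "content": "What is the cube root of 512?"},
    {"role": "assistant", "content": "8"},
]

# Before trusting the answer, ask the model to show its reasoning.
conversation.append({
    "role": "user",
    "content": "Well done. Can you please explain how you arrived at that answer?",
})
```

Whether the explanation holds up is then something we can judge for ourselves, rather than taking the bare answer on faith.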
