Why most AI is actually Real Stupidity

Buchan

Well-Known Member
Got sent this, it's a perfect example of the limitations and fallibility of Large Language Models. LLMs are no more than really good predictive-text programs. They pick the next word based on the probability of it fitting, given all of the information they can trawl. Sometimes that's accurate, but they have no real grasp of context and will make stuff up rather than saying, "I'm sorry Dave, I can't do that".
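For what it's worth, the "really good predictive text" point can be sketched in a few lines of Python: a toy bigram model (the corpus and words here are purely illustrative) that picks the most probable next word from counts, and has nothing to say about a word it has never seen:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all the information they can trawl".
corpus = "the mushroom is edible the mushroom is toxic the mushroom is edible".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word):
    """Return the most probable next word, or None if the word was never seen."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("is"))         # "edible": it wins 2-to-1 over "toxic" in the counts
print(predict("sporocarp"))  # None: the honest answer a bare model rarely gives
```

A real LLM conditions on far more context than one previous word, but the mechanism is the same: the output is whatever scores most probable, not whatever is true.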
AI that amounts to very good algorithms for reading, e.g., CT scans is fine; DeepMind working out new protein structures for new drugs is great; I can also see the value of embedded AI that produces decent minutes from a meeting.
Everything else is rubbish, and while it will get better, the LLMs offer no value and a lot of environmental destruction.
[Attached image: PHOTO-2025-11-25-12-13-04.webp]
 
Buchan, you could argue that in the example you gave, AI gave the correct answer. The mushroom is edible, i.e. you can eat it.
Perhaps the correct question should be: "Is this mushroom toxic/poisonous/injurious to health?"

As they say with computers - rubbish in, rubbish out. :stir:
 
Out of curiosity I asked AI to come up with a training course for teaching tracking in Southern Africa

In short order it came up with a lesson plan that was pretty good

I was stunned

Until I realised that AI is really just a search engine

It finds what people have created and regurgitates it

Hardly intelligent, just artificial
 
A lecturer of mine once told me to remember any computer is just a fast idiot. It can do what we tell it really quickly, the trick is in what you tell it to do.
I'm getting quite deep into this professionally, and my current thinking is that the main benefit is speed. It will certainly do some things much faster. BUT without a skilled "pilot" it won't do things better, or even as well. You'll get outputs that are OK, but then you'll have to do another pass, then another. So it probably won't be cheaper in the end for anything that requires genuine human input. Which is reassuring. But it's just another tool you have to learn to stay in the game. It also means AI-based tools are probably good enough for quite low-grade work; they're not when you genuinely need real stuff to happen as a result, but they can help. So watch out, management consultants...
 
I think the problem is, we think AI thinks. It doesn't. It searches and draws conclusions. There is no filter on what data enters its search.

Well, there are. There are AI tools that let you train it only on what you put in, but that's not how most people are using it.

Essentially, if the finance people think it sounds too good to be true, well, that's because it is. And maybe you don't want to feed all your IP into Google's NotebookLM or whatever.
 
The rush into AI will bring a few surprises, but one thing we need not speculate about is the phenomenal amount of energy it consumes, which will lead in the short term to energy supply dropouts. That is a given.

Perhaps someone should ask AI whether AI is sustainable, what measures it could take to ‘reduce its carbon footprint’, and whether it feels this is necessary, given the properties of CO2.

Then again, if Gates says there is nothing to worry about, then he's probably already asked, hence his change of tune.
 
Buchan, you could argue that in the example you gave, AI gave the correct answer. The mushroom is edible, i.e. you can eat it.
Perhaps the correct question should be: "Is this mushroom toxic/poisonous/injurious to health?"

As they say with computers - rubbish in, rubbish out. :stir:
Absolutely. I made this comment on another site a few weeks ago.
Vagaries of language mean you have to use absolutes, i.e. "Is this mushroom safe to eat?", etc.
 
It has its uses. I created an AI agent at work to cross-reference 20 sets of method statements and risk assessments sent in by subcontractors against specified corporate review documents and health and safety guidance, and to highlight any non-compliances. It turned out pretty accurate: it highlighted all the same issues that humans did, and created action plans and the steps necessary to remedy them. I wouldn't trust it to actually review the documents, though, just to mark against set questions.
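A minimal sketch of that "mark against set questions, don't free-form review" idea, with entirely hypothetical checklist items and simple phrase-matching standing in for whatever the real agent does:

```python
# Hypothetical checklist: each set question maps to a phrase whose
# presence in the submission counts as compliance. Illustrative only.
checklist = {
    "Has a named first aider": "first aider",
    "States working-at-height controls": "harness",
    "Includes an emergency contact": "emergency contact",
}

def mark(document: str) -> dict:
    """Mark one submission against the set questions, True = compliant."""
    text = document.lower()
    return {question: (phrase in text) for question, phrase in checklist.items()}

# Example method statement text (made up for the sketch).
method_statement = (
    "Operatives will wear a harness at all times. "
    "Emergency contact: site office, ext 201."
)
results = mark(method_statement)
non_compliances = [q for q, ok in results.items() if not ok]
print(non_compliances)  # ['Has a named first aider']
```

The point of the design is the same as the post above: the questions are fixed by a human, so the tool can only flag against them, never invent its own judgement of the document.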
 
It has its uses. I created an AI agent at work to cross-reference 20 sets of method statements and risk assessments sent in by subcontractors against specified corporate review documents and health and safety guidance, and to highlight any non-compliances. It turned out pretty accurate: it highlighted all the same issues that humans did, and created action plans and the steps necessary to remedy them. I wouldn't trust it to actually review the documents, though, just to mark against set questions.
Very interesting and good to see a recognition of a need for human review in the context of both site (location) and task-specific considerations.

As an aside, I honestly believe AI will hasten the need for serious debate on the elephant in the room that is a "universal living wage", mindful that some assert the current UK welfare provision is fast becoming such.

K
 
The real danger is people not bothering to learn the basics that AI does well.

That leaves them unequipped when they encounter more complex problems.

This is THE fundamental problem with university students. They use AIs to do all first- and second-year in-course assessment (i.e. anything that's not invigilated). So they don't learn for themselves.

When they then start having to do more complex things at third and fourth year, they just can’t.

Most damaging, they rely on AI to do things like summarise notes, literature search and generate critiques. So they don’t learn to read and evaluate complex information themselves.

It is leading to a dramatic collapse in core thinking skills among undergraduates. Final-year students now are about as clueless as I used to expect 16-year-olds to be. They need to be hand-held through the most basic tasks that we once took for granted in newly arrived freshers.

The smarter ones realise the danger, and take steps to teach themselves. They now do exceptionally well. The rest are dismal. So we see a completely bimodal mark distribution: top end way up high (with the very best making extremely effective use of AI and their own brains), and the rest 10-15% below historical averages. Very little in the middle where the mean used to be.

And, because I do academic misconduct investigations, I spend hours every week in hearings listening to bewildered students explaining why their dissertation contains 50 completely fabricated citations.
 
The real danger is people not bothering to learn the basics that AI does well.

That leaves them unequipped when they encounter more complex problems.

This is THE fundamental problem with university students. They use AIs to do all first- and second-year in-course assessment (i.e. anything that's not invigilated). So they don't learn for themselves.

When they then start having to do more complex things at third and fourth year, they just can’t.

Most damaging, they rely on AI to do things like summarise notes, literature search and generate critiques. So they don’t learn to read and evaluate complex information themselves.

It is leading to a dramatic collapse in core thinking skills among undergraduates. Final-year students now are about as clueless as I used to expect 16-year-olds to be. They need to be hand-held through the most basic tasks that we once took for granted in newly arrived freshers.

The smarter ones realise the danger, and take steps to teach themselves. They now do exceptionally well. The rest are dismal. So we see a completely bimodal mark distribution: top end way up high (with the very best making extremely effective use of AI and their own brains), and the rest 10-15% below historical averages. Very little in the middle where the mean used to be.

And, because I do academic misconduct investigations, I spend hours every week in hearings listening to bewildered students explaining why their dissertation contains 50 completely fabricated citations.
Well, we do need more plumbers, brickies, sparks and plant fitters.

K
 
Absolutely. I made this comment on another site a few weeks ago.
Vagaries of language mean you have to use absolutes, i.e. "Is this mushroom safe to eat?", etc.
In some of the models, you can give strict instructions to (a) only give factual answers known to be true; (b) specify uncertainty; and (c) state when the answer is not known.

This avoids many of the more common problems with hallucinations.
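As a sketch of what those strict instructions might look like in practice — the wording, the chat-style message format, and the function name below are all illustrative assumptions, not a tested recipe for any particular model:

```python
# Hypothetical guardrail instructions covering the three points above:
# (a) only confident facts, (b) explicit uncertainty, (c) admit unknowns.
GUARDRAILS = (
    "Answer only with facts you are confident are true. "
    "State your uncertainty explicitly. "
    "If you do not know the answer, reply exactly: I don't know."
)

def build_prompt(question: str) -> list[dict]:
    """Build a chat-style message list with the guardrail instructions first."""
    return [
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": question},
    ]

messages = build_prompt("Is this mushroom safe to eat?")
print(messages[0]["role"])  # system
```

Putting the constraints in a system-level instruction, rather than appending them to each question, is the usual pattern; how faithfully any given model obeys them is another matter.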
 