Why most AI is actually Real Stupidity

Got sent this; it's a perfect example of the limitations and fallibility of the Large Language Model flavour of AI. LLMs are no more than a really good predictive text program. They base the next word on the probability of it fitting, based on all the information they can trawl. That is sometimes accurate, but the model has no real grasp of context and will make stuff up rather than saying, "I'm sorry Dave, I can't do that".
AI built on very good algorithms for reading e.g. CT scans is fine, DeepMind being able to work out new protein structures for new drugs is great, and I can also see the value of embedded AI that produces decent minutes from a meeting.
Everything else is rubbish, and while it will get better, the LLMs offer no value and a lot of environmental destruction.

Open the pod bay doors please, HAL…
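For anyone curious, here is a toy sketch of what "really good predictive text" means mechanically. The probabilities are entirely made up; the point is that the model ranks plausible next words, with no notion of whether the continuation is actually true:

```python
# Toy illustration (made-up probabilities) of the "predictive text" idea:
# pick the next word by sampling from a distribution conditioned on the
# words so far. There is no notion of truth here, only "what comes next".
import random

# Hypothetical next-word distributions keyed by the preceding three words.
NEXT_WORD = {
    ("death", "cap", "mushrooms"): {"are": 0.8, "look": 0.15, "taste": 0.05},
    ("cap", "mushrooms", "are"):   {"edible": 0.4, "deadly": 0.35, "common": 0.25},
}

def continue_text(words, steps=2):
    for _ in range(steps):
        dist = NEXT_WORD[tuple(words[-3:])]
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["death", "cap", "mushrooms"]))
```

Note that a wrong word ("edible") can win on sheer probability, which is the whole problem.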
 
I was recently at a seminar (related to investments, not medical) where questions were asked about how AI might affect investment strategies in the future.

One of the attendees was spouting about their advances in AI diagnosis (I can't recall the particular condition it was being used for), and the discussion turned to who was liable for the 'decisions' made by the AI. The guy said words to the effect of "it's OK, we will have insurance", which rather misses the point when the diagnosis is incorrect and the patient (or victim, as I called them) dies or is seriously injured.

The point being (and this does not apply just to AI) that in our current world the decisions we make, whether on a personal or 'business' basis, have the potential to give rise to some 'liability': liability in the sense that if a decision turns out to have negative consequences, it may be a significant problem for the end user.

When considering AI, whether it's an application for medicine or anything else, if we are subject to the outputs from AI models we need confidence in a) how good or bad the AI generating the response is, and b) that if the outputs are incorrect, someone will be liable for them, potentially in both a civil and a criminal sense.

Currently, when I search using Google I often find the 'AI' bits helpful in leading the way to source material, but that's about as far as I go.

There is a court report doing the rounds about a case where a legal professional (I think it was a solicitor as opposed to a barrister) used AI to prepare submissions for a court case.

The AI not only misquoted case law, IIRC, but also invented new cases that did not exist. There were other reasons why this hit the headlines, relating to the individual's behaviour in response to challenges to their submissions, but it raises a serious question about how to check the outputs of AI in the kind of context most people come across it.

I am struggling at the moment to see how, in general terms, AI is going to take over traditional roles requiring professional input. Support those roles, as in the examples @Buchan gave, yes; but replace them? I'm not sure.
 
The rush into AI will bring a few surprises, but one thing that needs no speculation is the phenomenal amount of energy it consumes, which will lead in the short term to energy supply dropouts. That is a given.

Perhaps someone should ask AI whether AI is sustainable, and what measures it could take to 'reduce its carbon footprint', or whether it feels this is necessary, given the properties of CO2.

Then again, if Gates says there is nothing to worry about, then he's probably already asked, hence his change of tune.
Yes, not much that Billy Boy Gates PhD hasn't got his talons into.
One dangerous skinwalker, that entity.
 
In some of the models, you can give strict instructions to (a) only give factual answers known to be true; (b) specify uncertainty; and (c) state when the answer is not known.

This avoids many of the more common problems with hallucinations.
Yup, it's all in the question or instruction that you give it (see the sketch below).
In the example given, the word 'edible' has two meanings, i.e. can your body digest it, or is it safe to eat?

A death cap mushroom is digestible ('edible') but also poisonous (inedible)! You play lucky dip on which meaning it's using!!
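For what it's worth, here is a rough sketch of how instructions like (a)-(c) above can be passed as a system prompt. I'm using the Anthropic Python SDK as an example; the exact model string is a placeholder, so substitute whatever current model you use:

```python
# Sketch of encoding the (a)-(c) instructions as a system prompt.
# Assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in the
# environment; the model name is a placeholder.
import anthropic

SYSTEM = (
    "Only state facts you are confident are true. "
    "Give an explicit confidence level (high/medium/low) for each claim. "
    "If you do not know the answer, say 'I don't know' rather than guessing."
)

client = anthropic.Anthropic()
msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use any current model
    max_tokens=300,
    system=SYSTEM,
    messages=[{"role": "user", "content": "Is a death cap mushroom edible?"}],
)
print(msg.content[0].text)
```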

What is interesting is that those who use AI (ChatGPT, etc.) a lot are losing the ability to think and work creatively without an AI prompt.
A recent study at a major US university tested the relative cognitive abilities of students, then gave 50% of them AI tools and told them to use the tools for all of their work.
In tests at the end of the period (one month, IIRC) the AI students performed significantly worse than the non-AI students, and when retested a month later the AI students were still lagging, thus suggesting that the use of generative AI has a rewiring effect on the brain, causing core skills and learning capabilities to be impaired even after they have stopped using it.

So could AI be leading us to our own oblivion by making the masses a bunch of sheep-like cabbages who can't think for themselves and will follow every command given to them by a piece of software?
 
One of the attendees was spouting about their advances in AI diagnosis … the discussion turned to who was liable for the 'decisions' made by the AI.
Very good point about liability - the excuse of ‘the computer said so’ won’t wash.

In financial services the EU and UK are already bringing in a phased set of controls for AI, with institutions having to undertake periodic risk assessments of their use of AI for both staff and customers, with a grading of 1-4 reflecting the possible scale of impact or risk.

For banks (especially lenders and investment specialists) the use of AI could be very expensive if they get it wrong and someone gets duff 'advice' or is allowed to borrow beyond their means simply because the human checks and balances don't happen.
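Purely as an illustration of how a 1-4 grading with a human-in-the-loop gate might look in practice (the tier names and the review policy here are invented, not the actual EU/UK wording):

```python
# Illustrative sketch of a 1-4 AI risk grading with a human-review gate.
# Tier names and the review threshold are invented for illustration;
# they are not the actual EU/UK regulatory wording.
from dataclasses import dataclass

TIER_NAMES = {1: "minimal", 2: "limited", 3: "high", 4: "unacceptable"}

@dataclass
class AIUseCase:
    name: str
    risk_grade: int  # 1 = lowest impact .. 4 = highest

    def requires_human_review(self) -> bool:
        # Example policy: anything graded 3 or above keeps a human in the loop.
        return self.risk_grade >= 3

lending = AIUseCase("automated affordability check", risk_grade=3)
print(TIER_NAMES[lending.risk_grade], lending.requires_human_review())
# -> high True
```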

No bank wants to be the poster boy of the first AI lawsuit for mis-selling!!
 
LLMs are no more than a really good predictive text program.

Everything else is rubbish, and while it will get better, the LLMs offer no value
Well without wishing to **** you off, that’s not really true.

If predictive text is a good analogy, then it's no surprise that it gives rubbish results when you try to use it as a database. It's not a database.

It is trained to process input and deliver answers based on that input (sort of; that's a bit simplistic), so if we only give it a question and nothing to work on then of course you are going to get **** answers. It does not remember all the data it was trained on (it's not a database, and it would be a poor predictive model if it did), so it is not going to magically pull the correct answer out of a hat.

If you had told it to do a web search on reliable resources, you would have got a much better answer, just as you would if you had asked someone about mushrooms. Still not something you would bet your life on.
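To make "give it something to work off" concrete, here is a rough sketch: fetch a trusted reference page and tell the model to answer only from it. The URL is a placeholder and the model string an assumption, same SDK as the earlier sketch:

```python
# Rough grounding sketch: supply source text and forbid guessing beyond it.
# The URL is a placeholder for a genuinely reliable resource, and the
# model name is an assumption; requires an ANTHROPIC_API_KEY.
import requests
import anthropic

SOURCE_URL = "https://example.org/fungi-field-guide"  # hypothetical source
page_text = requests.get(SOURCE_URL, timeout=10).text[:8000]  # crude truncation

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use any current model
    max_tokens=400,
    system=(
        "Answer only from the supplied source text. If the source does not "
        "settle the question, say so instead of guessing."
    ),
    messages=[{
        "role": "user",
        "content": f"Source:\n{page_text}\n\nQuestion: is a death cap mushroom edible?",
    }],
)
print(reply.content[0].text)
```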

To say they don't add any value isn't true, I'm afraid. For one, the modern systems are extremely good at writing code: as an assistant, not the leader, which is fine. I wrote a wee app in 2 days that would have taken (in fact, did take) a month before, because research and implementation are just miles faster.

It is easy to dismiss this stuff by putting a one-liner into ChatGPT and laughing at the output, which is universally crap, but that's just misusing the system.

FWIW I dislike ChatGPT, as I found it was more wrong than right, so I switched to Claude and never went back. OpenAI's free tier puts you on the crappy old models. I was on their $20-per-month tier for a while, and that's what I binned, as it was wrong about 50% of the time.

Pick a decent model and give it something to work off (even just tell it to do a web search) and they do work. Plenty of people who think they know mushrooms have killed themselves without ChatGPT's advice, if you get the drift.
 
What is interesting is that those who use AI (ChatGPT, etc.) a lot are losing the ability to think and work creatively without an AI prompt.

thus suggesting that the use of generative AI has a rewiring effect on the brain, causing core skills and learning capabilities to be impaired even after they have stopped
It's an interesting thing, the brain, isn't it? It adapts to the things it is asked to do most frequently by hardwiring them in with myelin.

So presumably the people not using AI are reusing and reinforcing these pathways, and the others are losing them.

Sadly this is one reason why we will never be as good as George Digweed: his brain has shooting hard-wired, while we are still thinking 'how much lead?'
 
It's an interesting thing, the brain, isn't it? It adapts to the things it is asked to do most frequently by hardwiring them in with myelin…
The brain is a ‘muscle’ that needs to be exercised.
I'm not a reader of books (takes too long and I get bored), but I read a lot of factual stuff and I'm always testing myself by trying new things, be it practical, problem-solving, creative, etc. I even do all of the maths for my proposals on paper, in longhand, just to keep that skill going, cos once it's gone then it's gone!!

I even have a morning routine over breakfast of the Waffle game, then Wordle, Connections and the mini crossword on the NYT Games app.
 
To say they don't add any value isn't true, I'm afraid. For one, the modern systems are extremely good at writing code…
Fair comment; I can understand the value for coding, so I'll amend any future versions. I've yet to find a use of an LLM that is of genuine value to me, though.
 