Oh, dear god...

I'd agree with Zambezi's concise summary. Large Language Models (ChatGPT, Grok etc.) function by essentially predicting the next word. If you ask one, "Write me 1000 words on reintroducing lynx to Scotland", it will trawl its training data for all the relevant information and then write 1000 words based on the probability of one word following another. (I'll stand corrected on this, as the precise way they work isn't fully known.) This means they write very fluent text, usually grammatically accurate. But they cannot analyse and have no idea about strength of evidence. They "know" that a scientific article has references, so they make them up. Originally (two years ago) these were blatantly made up; the algorithms have improved since.
This means if you asked, "I am lonely and my life is ****", it will serve up platitudes based on thousands of forum chats, and within those there will be stuff you agree with (a bit like horoscopes or Myers-Briggs).
They also hallucinate (see 2001: A Space Odyssey - on tonight) and will never say, "I don't know".
It's text, but it's limited in terms of critical content.
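For anyone curious, the "probability of one word following another" idea can be sketched in a few lines of Python. This is a toy bigram model over a made-up corpus, vastly simpler than a real LLM (which uses neural networks over sub-word tokens), but the "pick a likely next word, repeat" loop is the same basic idea:

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then generate text by repeatedly sampling a likely next word.
# NOT how real LLMs work internally - just an illustration of the idea.
import random
from collections import defaultdict, Counter

corpus = ("the lynx is a wild cat . the lynx hunts at night . "
          "the cat sleeps at night .").split()

# Bigram table: for each word, count the words that followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Pick a next word, weighted by how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" one predicted word at a time.
random.seed(0)
out = ["the"]
for _ in range(6):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

The output is locally plausible (every word pair really did occur somewhere in the corpus) but the model has no idea what any of it means - which is exactly the limitation described above, just at a miniature scale.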
 
The last I heard, there are even published papers written by AI, complete with fabricated data. Although, also based on what I've heard, that matches some PhDs...
 

And yet... the Starmer govt would have us believe that AI-streamlined management of our lives via digital ID is the safe future we should all be racing toward...
 
Cheers
 
Thing with AI is that there is AI and there is AI.
The first kind is pattern matching, RPA and useful stuff like that, which analyses data and makes real savings.
The other AI is for making fake girlfriends, silly memes and political videos.
One of them is useful, whilst the other is a pile of cat $hit!
 