I was recently at a seminar (related to investments, not medical) where questions were asked about how AI might affect investment strategies in the future.
One of the attendees was talking up their advances in AI diagnosis (I can't recall the particular condition it was being used for), and the discussion turned to who was liable for the 'decisions' made by AI. The guy said words to the effect of "it's ok, we will have insurance", which rather misses the point when the diagnosis is incorrect and the patient (or victim, as I called them) dies or is seriously injured.
The point being, and this does not apply just to AI, the decisions we make in our current world, whether on a personal or 'business' basis, have the potential to give rise to some 'liability'. Liability in the sense that if the decision turns out to have negative consequences, it may be a significant problem for the end user.
When considering AI, whether it's an application for medicine or anything else, if we are subject to the outputs from AI models we need confidence in a) how good or bad the AI generating the response is, and b) that if the outputs are incorrect, someone will be liable for them, potentially in both a civil and criminal sense.
Currently, when I search using Google I often find the 'AI' bits helpful in leading the way to source material, but that's about as far as I go.
There is a court report doing the rounds about a case where a legal professional (I think it was a solicitor as opposed to a barrister) used AI to prepare proceedings used in a court case.
The AI not only misquoted case law, IIRC, but also invented some new cases that did not exist. There were other reasons why this hit the headlines, related to the individual's behaviour in response to their submissions, but it raises the serious question of how to check the outputs of AI in the kind of context most people come across it.
I am struggling at the moment to see how, in general terms, AI is going to take over traditional roles requiring professional inputs. Support them, as in the examples @Buchan made, yes; but replace them? Not sure.