Sinks, taxes and toolbars.
- cleatlearning
- Apr 8
- 5 min read
Back in January, I had an awful shock. I got a tax bill that blew my socks off.
My wife thought I had messed up.
I thought the accountant had messed up.
But as it turned out, HMRC had messed up.
I still owed the money – there was no denying that. But the error had come about because even though HMRC had access to all my tax records – and to the various PAYE jobs and hats I wear – they didn’t put them together and realise that my tax codes were wildly incorrect.
So, what happened? I had to agree a payment plan with HMRC and for the next 6 months I will not be going on holiday or drinking expensive coffee.
But basically, it was “their fault” and I had to suck it up.
Which leads nicely onto AI and liability sinks.
Let’s take a step back from AI, and think about a spellchecker.
If I misspell something, there are tools that will tell me if I have written something that doesn’t make sense, or that isn’t in a dictionary. I went to school – in fact I briefly went to pubic school, so my education for a few years at least was heavily based on grammar and Latin derivatives.
By the way, did you realise the type of school I went to? If you re-read the last sentence of that paragraph, you might notice the typo. Not a spelling mistake though, and not picked up with a friendly wriggly red line.
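To see why the red line keeps quiet, consider how a dictionary-only checker behaves. Below is a minimal sketch in Python – a hypothetical toy with a made-up word list, not how any real spellchecker is built. Because “pubic” is a perfectly good dictionary word, nothing gets flagged:

```python
# Toy dictionary-only spellchecker (hypothetical minimal example).
# It flags only words missing from its word list, so a real-word typo
# like "pubic" for "public" sails straight through.
DICTIONARY = {"i", "briefly", "went", "to", "pubic", "public", "school"}

def flag_unknown_words(sentence: str) -> list[str]:
    """Return the words that are not in the dictionary."""
    words = sentence.lower().rstrip(".").split()
    return [w for w in words if w not in DICTIONARY]

print(flag_unknown_words("I briefly went to pubic school"))   # [] - no red line
print(flag_unknown_words("I briefly wnet to public school"))  # ['wnet'] - flagged
```

Catching my typo would need a context-aware check, and that judgement still sits with the author.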
If I published this, or wrote an article, I would have to “own it”. This is what happened to Gale Jones, a local reporter in Kansas who was writing about a Disability Mentoring Day, where students race all over town gaining “hands on experience.”
“Students get first hand job experience” was the headline. Awks.
If you hunt around online, there are some absolute clangers out there. From “Shoplifters will be prostituted” to someone on Twitter/X who talked about how great it is “when you hug a guy and can smell his colon”, and if I am honest, BoredPanda gave me quite a few chuckles when I was hunting around.
But my point is, it is on you.
If we dial it up a notch, what about translation toolbars? In some areas, these are championed for using technology to break down language barriers. But in other areas, where a lot of thought has gone into this – including in guidance for #NHS Commissioners – there are phrases such as:
“Automated online translating systems or services such as Google Translate should be avoided in healthcare settings as there is no assurance of the quality of the translations.”
Some people will point out that this is from an old document, and as we all know, the pace of change in LLMs and AI is astonishing. But the key thing here is a relative lack of “assurance of the quality” that will be very hard to overcome.
The interesting thing to consider with translation toolbars is that the product will always be able to carry disclaimers saying that translations may be inaccurate. If you are curious, search for the disclaimer of your personal favourite translation tool and see how little responsibility or liability they take for any mistranslation. [Spoiler – none].
I’m no lawyer, but this seems to mean that if your patient receives incorrectly translated digital information, whether it is clinical or not, it is your problem. And not their fault.
And for those who think that, despite this, it helps drive down digital inequalities: think again. A critical literature review and qualitative meta-analysis of published research concluded that “MT [machine translation] technology can in its current state exacerbate social inequalities and put certain communities of users at greater risk”.
A brilliant article from the Conversation website [link below] talks about the fact that “the foreseeable future for LLMs is one in which they are excellent at a few tasks, mediocre in others, and unreliable elsewhere. We will use them where the risks are low, while they may harm unsuspecting users in high-risk settings”. I would argue that we need to be really careful about our risk appetite in this instance, despite the fanfare.
The next level of automation worth thinking about feels like the fancier AI. And this is very much the world of the Centre for Assuring Autonomy.
Let’s think about an Autopilot collision in a Tesla. If you use Autopilot, you have to take over control when required. In a number of Tesla collisions, Autopilot aborted control less than 1 second before impact. Whose fault is the crash? Don’t expect Elon to pay out.
As AI becomes adopted with both terrifying and exciting rapidity, Tom Lawton and colleagues argue that humans in healthcare will absorb the liability for the consequences of the AI’s recommendations whilst not really knowing what the black box is doing or how it gets there. This is a “liability sink”.
For the record, I am not going to stop using my spellchecker. Nor am I going to stop using OpenAI’s DALL-E to draw me pictures. Nor should we shy away from experimenting. I cannot wait to collate Gloucestershire Hospitals NHS Foundation Trust’s multiple endocrine and perioperative policies and put them in a format that will help me make the correct decision for my patient. But for now I have to make that decision, and I have to own it.
In robust trials, AI has been used successfully to detect mitosis in breast cancer histology images, diagnose and classify skin cancer, diagnose diabetic retinopathy, and predict cardiovascular risk factors from retinal fundus photographs.
By 2015, the FDA had authorised 15 AI devices. By August last year, it was 950. At the time of writing it is 1,016 (and this is no doubt already outdated). Meanwhile, in the UK, the Medicines and Healthcare products Regulatory Agency has begun its sandbox pilot with 5 devices. They might seem relatively vanilla, but this feels like just the start.
What is very clear is that AI can improve patient care, resource allocation and patient engagement. It is really exciting.
From a medicolegal perspective, I know the Medical Protection Society are actively working out the impacts of this as the calls for regulation, collaboration and clarity get louder. This is really important and I would urge all frontline healthcare workers to engage in this debate and discussion. There is a seminar on the 29th April 2025 hosted by the MPS. It might well be sold out by now, but if it’s not, I’d suggest taking a peek [link below].
If we circle back to my tax bill, I had to front up the huge amount of money in a short space of time. Even though the error wasn’t directly mine. When it comes to the really exciting boom of agentic AI, I really hope the same doesn’t happen in healthcare.
Thank you for reading. I really appreciate feedback so please let me know what you think.