IF artificial intelligence becomes a judge or adjudicator, how might it interpret or operationalise the idea of “reasonable doubt” algorithmically?
Lipsync Lawyer wants to say this upfront:
- Would you (singular or plural) trust a machine running self-generated code that ingested every piece of Internet content on a topic and then picked the average or median of those discussions, bickering and trolling included, as the ‘truth’ for preparing or generating its output?
That’s what the algorithms behind A.I. are doing.
A.I. systems are built on LLMs (large language models), which are trained on scraped text, not on facts or perceptions. LLMs are distillers of the ‘average.’
They don’t think.
They can’t perceive.
They can’t sense.
They can only regurgitate the median — the midpoint, the 50% cutoff.
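To make that 50% cutoff concrete, here is a minimal sketch in plain Python. The opinion scores are invented, and treating LLM output as a literal median is this post’s framing rather than a technical description of any real training pipeline:

```python
import statistics

# Hypothetical 'opinions' scraped from the Internet on one topic,
# scored 0 (trolling) to 10 (careful analysis). Figures invented.
opinions = [2, 3, 5, 5, 6, 6, 7, 9]

# The median is the 50% cutoff of the sorted data: half the values
# fall below it, half above. Whatever sits at the tails is ignored.
print(statistics.median(opinions))  # -> 5.5
```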
Two problems
Lawyers have long identified two major problems with such technology in legal and judicial matters:
- A.I. ultimately jeopardises or compromises the confidentiality of past, present and future lawsuits and judicial reviews, because you have to keep feeding what is in reality privileged and confidential information into A.I. databases in order to train and maintain the A.I. models.
This concern has proven true: many corporations now contractually bar their contractors and lawyers from feeding case details into A.I. tools or disclosing them through such tools.
- Because A.I. is programmed to output the median of things, it will downplay, disregard or (worse) ‘medianise’ test cases, precisely because test cases are inherently distant from the statistical median (see the sketch below).
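Here is a toy illustration of that ‘medianising’ effect, with invented damages figures: nine routine cases plus one precedent-setting test case.

```python
import statistics

# Damages awarded in nine routine cases, plus one landmark test case
# that set new precedent with a far larger award (figures invented).
awards = [10_000, 12_000, 11_500, 9_800, 10_200,
          11_000, 10_500, 12_500, 9_500,
          1_000_000]  # the test case

# The median barely registers the outlier: the precedent-setting
# case is 'medianised' out of the summary almost entirely.
print(statistics.median(awards))  # -> 10750.0: the test case vanishes
print(statistics.mean(awards))    # -> 109700.0: even the mean dilutes it
```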
Lipsync Lawyer remembers those two were already talking points back in the 1980s. Man, oh man.

Visual representation of the median (via /b/)
Have you noticed how A.I. often ends up ‘interpreting’ stuff in very exact, very simple ways, yet still with errors?
Now imagine that with the law.
It’s not that A.I. cannot create a clean model. It’s that it can’t create a reasonably realistic functioning model.
Fire torches, fire extinguishers, guns and ammo, legal cases, judicial decisions, caselaw, etc.: they aren’t just a bunch of bits and bobs. There can be literally hundreds of moving parts.
No matter how much data you feed it, A.I. (at least for the time being) won’t be able to create even the super-simple, bog-standard internals of the WW2 STEN Mk. 1 cal. 9mm submachine gun of 1940 — and the Sten is as simple as simple goes.
To achieve that, you need to be sapient (alive, real) and to actually live in and understand the physical world around us.
A.I. can tell us where we can shave metal off to lighten a gun on a CAD/CAM/CAE model. It can process cartridge ballistics to min/max a load. It can tell us what material strengths let bolt lugs handle chamber pressures.
But it cannot ‘imagine’ why a gun needs a takedown pin or a spring-retaining floorplate.
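To illustrate the kind of number-crunching squarely in A.I.’s wheelhouse, here is a minimal sketch of the standard bolt-thrust approximation (thrust = chamber pressure × case-head area). The 9 mm Parabellum figures are approximate reference values, not load data:

```python
import math

# Approximate 9 mm Parabellum reference figures (illustrative only).
chamber_pressure_psi = 35_000    # roughly the SAAMI maximum average pressure
case_head_diameter_in = 0.394    # approximate case-head diameter, inches

# Bolt thrust is the chamber pressure acting on the case-head area.
case_head_area_sq_in = math.pi * (case_head_diameter_in / 2) ** 2
bolt_thrust_lbf = chamber_pressure_psi * case_head_area_sq_in

print(f"case-head area ~ {case_head_area_sq_in:.3f} sq in")  # ~0.122 sq in
print(f"bolt thrust   ~ {bolt_thrust_lbf:,.0f} lbf")         # ~4,270 lbf
```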
The law and the lawsuits — what with their moving goalposts and all — are easily a thousand times more complex than a gun.
Not until A.I. is quite literally ‘alive’ and a ‘me’ could it do that.
Lipsync Lawyer
Sat, 14 June 2025
