AI In The Courtroom: A Closer Look At AlphaGo

Linkilaw Business News

Go Baby, Go Go Go: what AlphaGo can teach us about AI in the courtroom.

Last March, AlphaGo, a computer program developed by Google’s DeepMind research team, was pitted against Lee Sedol in a face-off watched live by hundreds of millions around the world (60 million watched in China alone).

The game was Go, an ancient Chinese game of strategy in which a player aims to surround more territory on the board than their opponent. The rules of the game are deceptively simple; the game itself is incredibly complex.

AlphaGo is, as the name suggests, an expert in Go. It uses deep reinforcement learning – a process in which it plays millions of games against itself, learning incrementally from its mistakes through trial and error – to improve its decision-making and get better at the game. In one day, AlphaGo was able to play a million games of Go against itself – more than Lee Sedol will play in his lifetime.
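To make the self-play idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration – the three-move ‘game’, the hidden strength table, the update rule – and it bears no relation to DeepMind’s actual code, but it shows how simply reinforcing whatever wins, over tens of thousands of self-played games, nudges a policy towards stronger play.

```python
import random

# A toy stand-in for Go: each "game" is a single move per player, and a
# hidden strength table decides the winner. All names and numbers are
# invented for illustration; this is not DeepMind's AlphaGo.
STRENGTH = {"corner": 1, "edge": 2, "centre": 3}   # hypothetical values
ACTIONS = list(STRENGTH)

# The "policy" is just a weight per move; higher weight = chosen more often.
weights = {move: 1.0 for move in ACTIONS}

def sample():
    """Pick a move at random, in proportion to its current weight."""
    r = random.uniform(0, sum(weights.values()))
    for move, w in weights.items():
        r -= w
        if r <= 0:
            return move
    return move  # floating-point fallback

for _ in range(50_000):                  # the program plays itself
    a, b = sample(), sample()
    if STRENGTH[a] != STRENGTH[b]:       # a draw teaches nothing here
        winner = a if STRENGTH[a] > STRENGTH[b] else b
        weights[winner] += 0.01          # trial and error: reinforce the win

print(weights)  # "centre" ends up dominant, learned purely from self-play
```

Scale that loop up to full 19×19 boards, deep neural networks and millions of games a day, and you have the flavour of how AlphaGo trains.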

Sedol is a world-champion Go player and has been granted the highest grandmaster ‘Nine Dan’ rank. He’s the best that humans could manage.

Over five games, AlphaGo won 4-1.

Comparisons were immediately drawn between that result and the 1997 victory of Deep Blue over Garry Kasparov, the famous chess grandmaster. This result, like Deep Blue’s before it, sparked a familiar response: this was technology transgressing its boundaries. Go, like chess before it, was the human realm. It required instinct, feeling and improvisation, none of which were robot traits. The human should have beaten the robot.

But how does this relate to the courtroom?

AI’s influence has been growing rapidly over the last few decades. It has well and truly entered the public sphere, and we interact with it on a daily basis through our computers and phones. So too has it entered the legal sphere. In the UK, companies like Ravn and Luminance are using AI for contract review and due diligence checking. Across the pond, ROSS, built on IBM’s Watson technology, has been adopted and utilised by numerous international law firms. These firms, both in the UK and America, are recognising the benefits that AI can bring. With intelligent algorithms, it can process and digest large quantities of information far more efficiently than a paralegal or a legal assistant ever could.

When people talk about AI taking lawyers’ jobs, their doomsday predictions are normally reserved for those at the junior end of the spectrum – those who spend their time wading through large volumes of files, contracts or cases. In contrast, numerous commentators have argued that the domain of the top solicitors, the solicitor-advocates and the barristers will remain untouched. These are the domains that require human judgement: the ability to read a situation on a psychological level, to pick up the tension in a client’s body language or the scepticism in a judge’s voice, and to respond perceptively.

It seems, however, that this viewpoint may be short-sighted. To explain why, we need to look closer at the face-off between AlphaGo and Lee Sedol.

Game 2, Move 37

It was on the 37th move of the second game that AlphaGo played its trump card. In a hugely unexpected move that stunned commentators and spectators alike, AlphaGo placed one of its black stones in an open area on the right-hand side of the 19×19 board. Sedol was visibly shaken, his mouth comically agape. He left the room to compose himself and, even after returning, took another fifteen minutes to respond.

The move would have been perfectly normal in a different situation, but in that one no one could work it out. At least, no human could. It was the turning point in a game in which Sedol had been performing well; from then on he was fighting a losing battle. He lost the game three hours later.

The move was so shocking not because it was a good one – though it was – but because it was a move a human would almost certainly never have played. In fact, if AlphaGo’s calculations are correct, the chance of a human playing that move was around one in ten thousand. Sedol would never have considered it – not because he wasn’t intelligent enough, but because his brain would never have assessed the board in the same way. The move would simply never have crossed his mind.

In the words of Fan Hui, spectator and three-time European Go champion: “It’s not a human move. I’ve never seen a human play this move… so beautiful. So beautiful.”

Before returning to the courtroom, it is important to point out how the machine was able to see the move. There were two steps to its learning process. The first used a deep neural network trained on millions of human Go matches: by studying those games, the machine learned to analyse positions, work out which moves were the most effective and which were less so, and replicate them. The second stage took this a step further and used reinforcement learning, a process in which AlphaGo plays different versions of itself and learns from its own games, improving incrementally by trial and error. In other words, AlphaGo got so good first by replicating humans, then by using its own technology to teach itself. Humans could take it only so far.
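The same toy setting from the earlier sketch can show the two stages side by side. As before, everything here is invented for illustration – the ‘human’ move records, the frequency-count policy, the update rule – and the real system trains deep neural networks on full 19×19 positions rather than counting over three moves.

```python
import random
from collections import Counter

# Stage 1 -- supervised learning: imitate human play. The records below
# are a hypothetical stand-in for millions of real human Go matches.
human_moves = ["edge", "centre", "edge", "corner", "edge", "centre"]
policy = Counter(human_moves)        # move frequencies become move weights

STRENGTH = {"corner": 1, "edge": 2, "centre": 3}  # hidden; decides who wins

def sample():
    """Pick a move in proportion to its learned weight."""
    r = random.uniform(0, sum(policy.values()))
    for move, w in policy.items():
        r -= w
        if r <= 0:
            return move
    return move  # floating-point fallback

# Stage 2 -- reinforcement learning: the policy plays itself and
# reinforces whichever moves win, drifting away from pure imitation.
for _ in range(50_000):
    a, b = sample(), sample()
    if STRENGTH[a] != STRENGTH[b]:
        winner = a if STRENGTH[a] > STRENGTH[b] else b
        policy[winner] += 1

print(policy)  # starts human-like ("edge"-heavy), ends preferring "centre"
```

Even at this scale the point of the two stages is visible: imitation gets the policy to a competent, human-like baseline quickly, and self-play then pushes it past that baseline – which is exactly where non-human moves like move 37 come from.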

The fact that a human would be so unlikely even to spot this move demonstrates AI’s radically different way of looking at the world. AI thoughts are qualitatively different from human ones; they are not simply a magnification of our own thought processes.

Again: how does this relate to the courtroom?

The argument suggested above – that advocates will always be humans because the role requires human traits – struggles in the face of the Go victory. This was a ‘human’ game, and the robot won. It didn’t win by replicating human thought processes, however; it won by using its own technology to better itself.

It may be the case that an advocate has a better array of ‘human’ traits than an AI does, but does that really matter? Does it really matter how a barrister wins his or her case? Barristers and solicitor-advocates all have their own styles, yet at the close of a case there is always a winner and a loser. When an AI is pitted against a QC, it will not matter what the thought processes behind the argument are: at the end of the trial, a verdict will be given in favour of one side. If an AI can work out a better way to win a case – perhaps one that doesn’t require a huge amount of ‘human’ perception – then there is little reason to think an AI couldn’t be an exceptional courtroom advocate.

So, we can expect machine barristers?

Not necessarily, but it is important to highlight the fallacies in the argument that robots will never be courtroom advocates. If the history of technology has taught us anything, it is that we must not pigeonhole technologies like this in our predictions. It is naive to think that AI will forever be confined to paralegal work and legal admin. It will expand beyond this, in some direction.

It may, in fact, be the case that an AI is simply never as good in court as a seasoned barrister. It may be that the two work in tandem – the AI preparing the argument and the barrister delivering it. It may even be that AI creates novel ways to approach disputes, eliminating the need to go to court at all. Or perhaps AIs will prove better employed as judges than as advocates. Whichever route it takes, those in the legal profession need to consider seriously and openly the role AI will play in the law going forward.

There are also practical concerns over data and privacy raised by the prospect of AI advocates. Can a robot be trusted to maintain confidentiality before, during and after a case? How would it know whom to trust when revealing information? And what about hackers?

Then there is a crucial aesthetic question: will an AI wear a wig? Perhaps it will be an e-Wig.

Conclusion

This blog hasn’t addressed the question of whether an AI advocating in court would be a good thing. That is a difficult question. Aside from the jarring image of a computer delivering an opening statement, however, there is a worrying financial concern that should briefly be mentioned.

Without the right regulation, the side in a dispute with the stronger financial position would be able to employ a better AI to fight its case. It would receive better representation and have a greater chance of success. This is not to say that all barristers are equal – of course, a side with greater finances can hire a better barrister – but because computers perform consistently in a way humans do not, the imbalance would be more damaging than the current one. In short, a junior barrister may defeat a Silk on a good day, but a bad computer will never beat a good computer. Financial imbalances between parties would be amplified by inequalities in the technology available.

There would, of course, be ways to remedy this – by standardising AI across the justice system, for example – but, as with the general role of AI in law, the transitional phase will be difficult. The regulatory framework must be stringent and comprehensive.

Whether we end up with AI barristers or not, lawyers must take note of AlphaGo’s victory. It was a stark reminder that lawyers must always be on their toes. It would be foolish and short-sighted not to be.
