My current academic work explores the notion of Ronald Dworkin’s Hercules set within an AI, focusing on the need both for a symbiotic relationship between AI and humans and for empathy to be embedded within the machine. One of the wider issues the work touches on, and one I hope to explore in my PhD, is a framework (diagram A below) that sees the Law from a holistic societal perspective. Whenever law is discussed, the focus tends to fall on the judicial and court aspects, or on the political machinations that happen within senates and parliaments. Public perception of and reaction to laws is usually mentioned only tangentially, mainly within the framework of ethics and morality, and as a means of seeing how the law actually works it is treated as secondary to the work of legal professionals. My posit, one I believe will become more critical as a plethora of AIs emerge in the legal sphere, is that for any legal AI to take on even a fraction of Hercules’ mantle it needs to take account of, and be aware of, law in its totality, not just the work of legal professionals. This article utilises a common law lens, though the principles could easily be applied to any jurisdiction.
There are three factors at play when considering this:
1. What is Hercules, and is such an idea even feasible within an AI?
2. There will likely be more than one AI, indeed a multitude, that co-exist and interact
3. The Law, as a concept, may simply be too ephemeral for algorithmic AI justice
Ronald Dworkin suggested that the perfect jurist, Hercules, would be all-seeing and able to account for every weighting within the law, reaching decisions in hard cases that upheld rights and treated humans with dignity. This concept is theoretical for humans, as no one person can scope out the entirety of the law in a lifetime, let alone conceive of how every piece of legislation, case law, social commentary and other influence would weigh upon the law itself. With the emergence of AI both within society and within legal practice, it becomes conceivable that Hercules could develop out of an AI algorithm or through the interplay of AIs with each other. The question then needs to be asked: how could this be achieved, and is this something we actually want for our legal system?
The short answer is that AIs are already having an impact on the legal sector and civil service in both the UK and the USA. Weapons of Math Destruction details at length the core issues with algorithmic abuses on the lives of citizens, and while it is not my intention to prophesy doom, one of the critical lessons the book teaches is that all AIs are flawed, both by the data sets they use and by the very fact that humans programmed them. No system is perfect, be it at an individual micro level or an interconnected macro level, and no matter how hard we try to rationalise our machines they will contain a degree of irrationality precisely because we programmed them. This matters when considering the impact of AI on the law because, if no system is perfect, is this indeed what we want to let loose on our legal systems?
This then leads on to the idea that humans, as flawed decision makers, jurists, and citizens involved in the Law, are already irrational actors. This very irrationality leads to miscarriages of justice, laws which penalise minorities, and court systems that punish those same minorities more harshly than the rest of society. The argument for AI involvement is that it can cut through our own biases and come to just legal decisions, which in turn will reduce and tamp down human irrationality. However, the reality is that because we have centuries of historic biases accumulated within our legal systems, any data fed to AI algorithms will inevitably be contaminated with those same biases. The argument has been made that the machine is merely following the lead the data shows, but even if that data is scrubbed and sanitised it still reflects our intrinsic systemic biases.
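A toy sketch can make this last point concrete. All the data below is invented: the point is that even after a protected attribute (here, `group`) is removed from a model’s inputs, a correlated proxy such as a postcode lets historical bias leak straight back through.

```python
# Invented records: group "B" was historically over-policed, so it has more
# prior flags, and group membership correlates strongly with postcode.
records = [
    {"postcode": "A1", "prior_flags": 0, "group": "A"},
    {"postcode": "A1", "prior_flags": 1, "group": "A"},
    {"postcode": "B2", "prior_flags": 3, "group": "B"},
    {"postcode": "B2", "prior_flags": 4, "group": "B"},
]

def risk_score(record):
    """A 'scrubbed' model: it never looks at group, only postcode and flags."""
    base = 1.0 if record["postcode"] == "B2" else 0.0  # postcode acts as a proxy
    return base + 0.5 * record["prior_flags"]

# Average score per group, even though the model is nominally group-blind.
scores = {"A": [], "B": []}
for r in records:
    scores[r["group"]].append(risk_score(r))
averages = {g: sum(v) / len(v) for g, v in scores.items()}
print(averages)  # group B still scores far higher than group A
```

The model is “sanitised” in the sense the paragraph above describes, yet the output ranking reproduces the historical bias because the scrubbed attribute survives in its correlates.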
At this point it is worth considering what part justice has to play in this, and whether we can truly overcome our biases through the use of AI if those same biases will contaminate the process. Justice is a broad and vital concept that cuts to the heart of what the Law is. On the one hand it can represent a communal desire for norms and security, whereby the law is used to ensure every member of the community is treated justly; on the other is the personal notion of justice that inheres in a person’s dignity and right to belong to humanity. This draws on Hannah Arendt’s concept of the right to belong to society and the obligation of all humans to ensure that everyone is treated as part of society. When this collides with algorithms, justice and the Law invariably begin to warp under the inherent tension between individual dignity and rights on one side, and the security and obligations of the wider society on the other.
Law is, by its very nature, an algorithm: a set of codes by which we live. AIs are potentially an extension of this, and given that not all laws are ‘just’, nor their application even-handed, there is an intrinsic worry that if Hercules emerges, in whatever form, this injustice will be hard coded into the system. Thus we come back to the question of whether the law is too ephemeral for an AI to truly be aware of its panoply. If ethics and morals are ever changing, and the Law itself is ever in flux, how can an AI ever keep up with these shifting sands?
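The “law as algorithm” point, and the hard-coding worry that follows from it, can be illustrated with a minimal sketch. The rule and its thresholds below are entirely invented: the point is that once a legal rule is rendered as literal code, anything the rule-writer did not anticipate simply has no input parameter, and that rigidity is baked in.

```python
def eligible_for_bail(prior_convictions: int, charge_severity: int) -> bool:
    """An invented statute rendered as a rigid rule: any nuance a judge might
    weigh (context, remorse, circumstance) has no way into the decision."""
    return prior_convictions < 2 and charge_severity < 5

# The rule treats materially different cases identically: two decades-old
# minor convictions bar bail outright, however trivial the current charge.
denied = eligible_for_bail(prior_convictions=2, charge_severity=1)   # False
granted = eligible_for_bail(prior_convictions=1, charge_severity=4)  # True
print(denied, granted)
```

Whatever is unjust or uneven in the rule is now executed uniformly and at scale, which is precisely the hard-coding concern raised above.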
The same is often said about economics and trading, yet AIs have emerged to dominate global trading markets. However, we presently would not trust an AI to set our monetary policy or provide holistic macro-economic visions (despite increasingly relying on them for the raw economic number crunching). We utilise AI within bail systems, to consider mortgage applications, for facial recognition, exam grading, asylum applications, deciding where to target police, benefits applications, and a whole wide range of legal issues of which most people are only dimly aware. We entrust the common good, and in turn the common justice, to these algorithms, yet due to their flawed data sets and underlying assumptions injustices abound. That is not to say AI algorithms are rogue or abhorrent; rather, it is our biases and assumptions bleeding through into the systems that serve up these injustices.
This then leads back to the notion of crafting our Hercules: a panopticon through which justice is served, a right hand to the law. Given all the reservations above, do we even want such a machine to exist? Are we ready for the intrusive data mining and gathering that such an algorithm would by necessity require to dispense just law? The answer is a qualified yes to the first question, and a firm no to the second. While we invariably want equitable justice that both upholds the dignity of the individual and serves the needs of the wider society, we are not socially inclined to waive our right to privacy and right to individual space. The State can ask much, but in democratic societies in particular the rights of autonomy and privacy are significant hurdles that need to be addressed before a fully fledged AI Hercules is acceptable.
However, this is where the interplay between the multitude of existing AIs comes into play. Algorithms already play off each other: news algorithms release stories denouncing an exam marking algorithm, social media algorithms then trend the story, in turn aggregating the comments that influence the U-turn politicians take in scrapping the marking algorithm’s results. This interplay is built on biases, vested interests, flawed data, and a Chinese-whispers effect whereby no one part of the system is holistically aware of what the others are doing. Humans are very much a part of this process, our decisions paramount at each stage, but we are directly influenced, almost directed, by what we are being fed. Misinformation can easily, and does easily, slip into the process, and no one algorithm has the broad panopticon view of an all-seeing eye.
Given this, there is a clear argument to be made that we have already reached a stage where, instead of Hercules, we almost have Medusa, whose many snakes feed from the same sources and give us glimpses of the full picture. This can be for good or ill, and while I am not suggesting any of us act as Perseus and slay Medusa, I think there is an urgent need to address this interplay between AIs that impacts on all our decision making. Technology, from stone tablets to the printing press to the digital revolution, has always changed how we codify and perceive the law, and by allowing Medusa to affect our interactions we risk further embedding Chinese whispers into the system. Our panopticon instead becomes a hall of mirrors that distorts and reflects the rights and dignity of individuals, and warps communal security to better suit the needs of those who control the algorithms.
To get to the stage where Hercules is truly viable will take research, an understanding of the full interplay between each element of the law, and legislation to make AI code transparent and accessible to those beyond the core programming team. I do not feel that Hercules would be as much a risk as simply leaving Medusa on the field. While I do not see Medusa as a monster, for she is an extrapolation of society as a whole in all its complex messiness, without corralling the interplay between AIs into a more symbiotic and synergistic form we risk ever more cascading misinformation, hearsay, and echo chambers that silo each of us from what reality actually is.
First, I do think that an AI Hercules is feasible, though in what form would have to be a wider societal decision. There needs to be a clear-headed discussion about civil liberties and civil rights, especially in light of the PATRIOT Act and the ongoing Medusa effect. How far are we as a society prepared to go in allowing a singular AI to aid and guide our legal processes? That question cuts to the heart of education, computer science, and beyond.
Second, the Medusa effect is significant and not widely acknowledged. While we critique and observe social media’s influence on our lives, we are not openly discussing how Google, Apple, Amazon, Tesla, IBM and many other smaller players are co-opting each other’s platforms for their own ends without considering the consequences. Indeed, it stacks up to many layers of AI all intersecting around and within each other, to the point that the Medusa effect becomes an almost Gordian knot that is impossible for any one person to unravel.
Finally, I do believe that the law in all its complexities is too complex for Hercules to fully unravel, as doing so would require a degree of intrusiveness into human lives that few of us would agree to. However, that is not to say that an AI operating at, say, 90% awareness of the Law’s panoply would not provide a far better grounding for dignity and justice, provided it had an effective symbiosis with humans at its core. There will still be irrationality, bias, and possibly miscarriages of justice, but by exploring a symbiotic relationship with AI, rather than setting it off running and leaving it be, we can both fix those inherent biases as we become aware of them and avoid the Medusan Chinese whispers that we presently have.
Dworkin grappled with this notion of perfect imperfection, concluding that Hercules would forever be an exemplar rather than a reality. I am inclined to agree, not because I think it impossible, but because we as a society would not wish it to be so. For to truly have a legal AI Hercules would probably require us to give up intrinsic rights that we simply cherish and value too much. This does not mean that some form of Hercules could not exist, but to effect it will necessarily mean a symbiotic relationship rather than an all-seeing separate machine that sits over us in benign grace, benevolently dispensing justice. Such things are either utterly utopian or the stuff of our worst 1984 visions. Reality is more grounded, and resolving the Medusan effect requires urgent and pressing research and conversations to ensure that humans are not side-lined and injustices do not prevail.
I am interested in collaborating and having conversations about this, so if you are interested in talking to me about it please leave a comment.