Summoning the devil? How the legal system must reckon with AI

In his Opinion Piece for InDaily, TGB Lawyers’ Morry Bailes says Australia is lagging on regulation of AI, and sets out the challenges posed by the technology.

We will remember 2023 as the year Artificial Intelligence (AI) reached out to touch all of us in a direct way, when OpenAI launched GPT-4, following its image-generation AI, DALL-E, released to the public in late 2022.

While OpenAI is not the only big tech company in the market, ChatGPT marked a turning point. We could now all start using the technology directly in our own homes, even though AI had been an indirect part of our lives for at least a decade before.

In an instant, all the talk, all the hypothesising, was seemingly swept aside and we took a giant step toward realising the true potential of AI. That potential sent a shiver down the spine of collective humanity. Was this the death of literature, music and the arts as we know them, or a great leap into the unknown that would bring us prosperity and improved lives forever?

Stephen Hawking said of AI: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

A year on, we are more reflective. AI surely has a major role to play in business, and the legal profession is one of many industries that must grapple with both its upsides and its downsides. However, its far-reaching implications are as sobering as they are exciting. The starting point is to imagine, if we can, the possible power of generative AI.

Generative AI grew from earlier models trained for specific functions on specific data. An example in the law is a search-specific AI that assists with discovery: the litigation process, required of every party to a case, of identifying and listing all documents and evidence a party holds that are relevant to the case. Specifically trained AIs made this process faster and easier.

This type of AI was first described by academics at Stanford University as “foundation models”. These models were ultimately joined together, fed terabytes of data, and could be applied to many different tasks. This next step was termed generative AI, because the models became able to generate something new, drawing on the vast amounts of data fed into them. Through a process called tuning, a generative model can then be turned toward a specific function, say assisting a business with a particular task. Tuning is the addition of a small amount of further data so that the generative AI performs that specific task.
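To make “tuning” concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library: a small, general-purpose pre-trained model is nudged toward one narrow task, classifying documents as relevant to discovery or not, using only a couple of labelled examples. The model name, the toy examples and the training settings are illustrative assumptions, not a description of any particular legal AI product.

```python
# Illustrative sketch only: fine-tune ("tune") a small pre-trained foundation
# model on a handful of task-specific examples. Assumes the open-source
# Hugging Face "transformers" library; the model, data and settings are toys.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# The "small amount of additional data": hypothetical labelled examples
# (1 = relevant to discovery, 0 = not relevant).
texts = ["Email re: variation of the supply contract", "Office lunch menu"]
labels = [1, 0]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class TinyDataset(torch.utils.data.Dataset):
    """Wraps the tokenised examples in the format Trainer expects."""
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=TinyDataset(),
)
trainer.train()  # the general-purpose model is now nudged toward one task
```

The point of the sketch is proportion: the pre-training behind such a model consumed enormous volumes of text, while the tuning step here adds only two examples, and that asymmetry is what gives generative AI its productive leverage.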

In short, generative AI built by connecting many foundation models vastly outperforms a single model designed for a single task. Its productive power is also huge: because only a small amount of additional data is needed to achieve a specific gain of function, it harnesses the productivity of the vast sum of data collected earlier.

However, there are clear downsides to generative AI. It is hugely expensive to run, given the vast amount of data required, and smaller enterprises may be priced out altogether. Secondly, and critically, the huge amount of data that creates generative AI’s advantage also raises questions about its reliability and whether its data sources can be trusted. Just as Thetis could not protect Achilles’ heel, generative AI has a weakness. The collected data comes from the internet, and the size of the data pool makes it practically impossible to vet all of its sources.

This weakness potentially applies to all generative AI models, which at this stage include the best known of them, the large language model (LLM), as well as vision models and coding models, where generative AI may be able to complete code. Models are also being developed in the areas of chemistry and climate change, according to IBM, from which this account of the evolution of generative AI is drawn.

Challenges for the courts

Application of generative AI in the legal profession is largely the domain of the big firms that can afford it, although third-party providers are gradually offering AI-powered applications to mid-sized and smaller firms. Most of its application at present is to drive process, and thus productivity.

Outside of firms, consideration is being given to how AI will be deployed by courts, in particular in supporting and supplementing the judicial role. Although many recoil from the notion, it is inevitable, as in all walks of life and business, that AI will have a presence in the courts. The courts must also deal with AI-generated documents, and understand what AI may lie behind certain processes engaged in by litigants. Then there is the task of deciding disputes involving the use of AI itself, ranging from arguments about intellectual property to decisions about self-executing ‘smart’ contracts that rely on AI to function, to name a few. Judicial life is about to get a great deal more complex, both internally within the courts and through external agencies and parties who will bring AI disputes to the courts to resolve, whether or not the courts are currently equipped to do so.

The immediate concern for all jurists is to understand how AI algorithms may function. Products such as OpenAI’s at this point lack reliability, accuracy and truthfulness. It is anathema to courts that such products could be deployed in the judicial process, because they risk bad decision-making. The problem is made worse by the proprietary nature of AI algorithms, which developers and providers keep a closely guarded secret.

As to disputes about copyright and the theft of intellectual property, such litigation is inevitable and has already started. What is AI-generated art, for example, and who does it belong to: the artist, the internet, or the technology through which it was born? There is an obvious “blurring of the lines between human and machine creativity”. At the heart of last year’s Hollywood writers’ strike was how to protect artists against generative AI. Goldman Sachs has estimated that AI could subsume 25 per cent of jobs in the US in areas such as arts and entertainment, sports and media, and design. It is fair to suggest that these are unlikely to be the only industries so significantly affected.

The need for regulation

Some commentators have taken what one might describe as a “Terminator” view of the future of AI, based on what may be a well-placed fear: that uncontrolled generative AI could get beyond itself, resulting in activity that is not merely unhelpful to humanity but positively harmful. Erosion of privacy, for instance, is a major concern of the Australian Human Rights Commission, which is calling for increased regulation in the area and the creation and appointment of an AI Commissioner. The Commission also holds concerns about the risks of algorithmic discrimination, automation bias, and the promulgation of misinformation and disinformation.

The ethical dilemma was aptly put by Gray Scott, futurist and techno-philosopher, when he said “we must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction”.

People such as Scott are not alone, with dire predictions made by Elon Musk, Steve Wozniak and other experts. After the release of GPT-4, they were among the many who signed an open letter warning the world, which called for a pause in the development of the most powerful AI systems, stating that we are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”

It seems that we are at a crossroads with little ability to turn back; the genie is well and truly out of the bottle. So where are we as a nation with legal regulation and laws to control the growth of AI? Toby Walsh, chief scientist at the University of New South Wales AI Institute, says Australia is not doing well and “… sadly remains at the back of the pack in terms of responding to the opportunities and risks AI poses”.

What about worldwide? Leaders and government representatives in the field of AI met in the UK late last year and, on November 1, signed and published the “Bletchley Declaration”. It was a start: 28 countries, including Australia, the UK and the US, other large nations such as Brazil, and our near neighbour Indonesia, along with the EU, were signatories. A reading of the declaration shows we are at least waking up to the risk. It says at one point: “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

Notwithstanding the Bletchley Declaration’s commitment to “support an internationally inclusive network of scientific research on frontier AI safety”, we must still wrestle with the specifics of the risks identified. If, as an advanced and technologically literate democratic nation, Australia is indeed “at the back of the pack”, that is not good enough and suggests flat-footedness by our Federal Government. Surely we must not wait on international scientific research before responding to the calls made by the Australian Human Rights Commission and many other bodies and commentators.

During 2023 the Federal Government conducted a consultation on the safe and responsible use of AI, receiving more than 500 submissions. Its Interim Response was published last week, identifying at least 10 legislative regimes that require amendment to allow the safe and responsible use of AI, reciting the concerns expressed by the Australian Human Rights Commission, and citing concern about “a lack of transparency about how, and when AI systems are being used”. This troubles not only lawyers but the public at large. We want to know whether AI algorithms are influencing our choices, so that we can guard against it, lest we turn into a nation of dunderheads, and whether we are interacting with an AI system rather than a human being. Those safeguards need to be enshrined in regulation.

The Interim Response cites positive examples from countries ahead of us, such as Singapore, the UK, the US, Canada and the EU, all of which have either secured voluntary commitments from industry or legislated to protect the public from the negative impacts of AI as it advances. The response suggests the Australian Government is concerned that a voluntary code is unlikely to be sufficient.

It outlines the ‘next steps’ and ‘actions’ the Australian Government will take (too detailed to recite here), referring to the Bletchley Declaration, the need to cooperate internationally, and the value of learning from regulatory frameworks already operating in other countries. As a whole the response looks positive, but it is here and now that we need to see the rubber hit the road.

The pace of the development of AI means that we are already behind and playing catch up. We probably always were.

If the harm caused by AI is now, the regulation to control it must also be now.

Last words go to Elon Musk, whose warning echoes James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”

For some time, many have been calling for adequate regulation of AI: let us, as a nation, hesitate no more.