Opinion and Commentary

We need to chat about ChatGPT (and other AI)

Will artificial intelligence wipe out a range of occupations? What about ethics and privacy? In this opinion piece written for InDaily, TGB Senior Lawyer & Business Advisor Morry Bailes ponders some prickly questions – and the future of lawyers.

The much-vaunted, Musk-inspired ChatGPT has, since its launch in November 2022, drawn commentary that would have it changing the world in ways we have barely yet imagined.

Artificial intelligence is back as a hot topic, and has us all wondering if we will have a job tomorrow. How will AI take greater prominence in our daily lives? Will the fabric of society be irreversibly altered?

Amidst the noise are predictions that Google is dead and that AI will run our lives for us, replete with accompanying lists of the occupations and tasks that will be subsumed by your friendly neighbourhood machine.

Lawyers are interested in this technology for two reasons. First, we too are wondering whether what we do will be replicated by AI. Second, we are concerned from the standpoint of ethics, the rule of law and what law reform may be required to maintain societal norms and boundaries.

WHAT IS CHATGPT?

Firstly, what exactly is ChatGPT, and why is it regarded as different from what has preceded it? Well, what it is not is a mere search engine, although it may search source materials, including the internet, for data. An AI is nothing more than an algorithm or algorithms that solve problems, with a capacity to ‘learn’ and evolve. An algorithm itself is defined by the Oxford dictionary as ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer’. So nothing much new there. It is the ability to incrementally change that is the interesting element of artificial intelligence.
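To make the distinction concrete, here is a minimal sketch (illustrative only, written in Python; the names and numbers are invented for the example). The first rule is an ordinary algorithm and never changes; the second nudges its own threshold whenever it is told an answer was wrong, which is the incremental change described above.

def fixed_rule(score: float) -> bool:
    # An ordinary algorithm: the rule is set once and never changes.
    return score > 0.5

class LearningRule:
    def __init__(self, threshold: float = 0.5, step: float = 0.05):
        self.threshold = threshold
        self.step = step

    def decide(self, score: float) -> bool:
        return score > self.threshold

    def feedback(self, score: float, correct_answer: bool) -> None:
        # If the decision was wrong, shift the threshold slightly so that
        # this example is more likely to be answered correctly next time.
        if self.decide(score) != correct_answer:
            self.threshold += self.step if not correct_answer else -self.step

print(fixed_rule(0.52))   # True today, True forever: the rule is fixed
rule = LearningRule()
print(rule.decide(0.52))  # True with the starting threshold of 0.5
rule.feedback(0.52, correct_answer=False)   # told its answer was wrong
print(rule.decide(0.52))  # now False: the rule has adjusted itself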

The ‘intelligent’ part of an AI is that it is able to perform functions that would ordinarily be performed by the human mind. However, that is a far cry from being intelligent in a sentient way. AIs do not have, and never will have, anything akin to human intelligence.

Imagine you are in a room and you speak English but not Chinese, and a person who speaks Chinese slips a note written in Chinese under the door. If you had a complete computerised catalogue of Chinese phrases and responses, you could conduct an entire conversation in Chinese, such that the person outside the room would think you had a complete command of Chinese when in fact you know nothing of the language whatsoever. Welcome to the world of artificial intelligence, as explained by the philosopher John Searle, who in 1980 first advanced this explanation, now known as the Chinese Room Argument.
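Searle’s argument translates almost directly into code. Here is a toy version in Python (illustrative only; the phrases are arbitrary examples chosen for this sketch): the ‘room’ produces fluent replies by pure lookup, and understanding never enters into it.

# A toy Chinese Room: canned question-and-answer pairs, no comprehension.
CATALOGUE = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(note: str) -> str:
    # The occupant matches symbols to symbols; meaning plays no part.
    return CATALOGUE.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent reply from a system that knows no Chinese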

HOW IS CHATGPT DIFFERENT?

ChatGPT is novel in a couple of ways. Its code is not open source, strictly speaking, but it has been thrown open for anyone to use and build upon. It is owned by OpenAI, initially launched as a not-for-profit by, amongst others, Elon Musk. He has since disavowed it, on the basis that the company has departed from its original direction. The AI tool is now effectively run by Microsoft, its largest investor.

For a fee, anyone can plug ChatGPT into other applications through its API. It can be adapted for use in a legal office, or in any environment. Law firm Clayton Utz has just announced it is using ChatGPT for its ESG-related work. As it stands, however, ChatGPT is very much at a developmental stage.
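As a rough illustration of what that looks like, here is a minimal sketch using OpenAI’s Python client as it existed in early 2023 (the interface has changed across versions, and the API key, model name and prompt below are placeholders, not recommendations):

import openai

openai.api_key = "sk-..."  # your paid OpenAI API key (placeholder)

# Ask the model to perform a routine legal-office task.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an assistant in a legal office."},
        {"role": "user", "content": "Summarise this clause in plain English: ..."},
    ],
)
print(response.choices[0].message.content)  # the model's draft, still to be checked by a human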

The advantage for the developer of such an open model is that as users build on one another’s work (in the case of ChatGPT, some 100 million users and counting), it becomes better and better. Well, that’s the theory at least. It is certainly developing, but just what Frankenstein’s monster are we likely to be left with?

Early on it described how to make methamphetamine and a bomb, which goes to show not just how ethically challenging this area is, but how fraught with potential danger. Those and similar functions have now been disabled.

The other obvious weakness is that what ChatGPT serves up is a version of what it has found, produced using its algorithmic code. Its responses are therefore directly shaped by its users, who have their own biases, or may be just plain malicious, attempting to sabotage the tool. Republican Senator Ted Cruz discovered that ChatGPT refused to write a song about his life because he was thought to be divisive, yet it would write one about Fidel Castro. It will create a poem about Biden but not Trump. It has been rude to some users, accusing one of being like Hitler, and suggesting that a newspaper columnist leave his wife.

In short, it cannot, at least at present, be relied upon to be unbiased, accurate or reasonable. The designers say they have received feedback from users that its AI is politically biased, causes offence and has responded in ways that are ‘otherwise objectionable’. They acknowledge ‘real limitations of our systems which we want to address’.

However, it has broken new ground and has arguably gone further than other AIs. It has also captured world attention, and the more it is used the more useful, we hope, it may become.

ARE THERE OTHER EQUIVALENT AI TECHNOLOGIES?

A bit like the race between Edison and Westinghouse over whether AC or DC current would ultimately prevail, the battle for AI supremacy has more than one contender. Another AI recently launched is Perplexity. Its primary tool, Perplexity Ask, is described by its designers as “a search engine delivering answers to complex questions”. It takes a more scholarly approach to problems, responding to questions not only with a capacity to produce detailed answers, but also with an ability to cite its source materials. It is less about creating an end product than about providing reliable answers to questions. There is a clear and obvious difference in emphasis when compared to ChatGPT.

Perplexity is also open to its users in the same way, and is thus learning on the job, and it will be interesting to compare the development of ChatGPT and Perplexity as the months pass.

However, as the medical and legal professions are aware, these AIs are by no means the first to be used in business. IBM was an early mover in the AI race with Watson, on which the legal research tool ROSS was later built. Watson was initially aimed at the medical and health industry, and last decade grand predictions, a little like those we are hearing now about ChatGPT, were being made. However, there was a difference. IBM had it that Watson would revolutionise business. The world would be changed forever. But it was IBM peddling the product and the PR. And it was not open to all comers in the way ChatGPT and Perplexity are. What IBM was selling was a proprietary product that would be developed with and for the customer.

After Watson, ROSS was rolled out for the legal industry. It was another moment when the death of the lawyer was being actively discussed, even amongst Oxford dons.

In each case, Watson and ROSS were advanced search engines, instantly bringing answers and resource material to medical and legal practitioners based on real medical and legal cases. As data was collected, practitioners would make better and better decisions, we were told, to the betterment of patients and clients. Well, that was what was supposed to happen. They both failed.

In the case of Watson, it was used in cancer research to inform clinical decisions being made about patients. It turned out it was not basing its responses entirely on real patient histories; in short, Watson’s cancer diagnostic tool was not trained with real patient data. It quietly folded, having over-promised and under-delivered. The medical world remained unchanged. ROSS was sued by Thomson Reuters, a legal publisher which provides research and support for the legal profession, on the basis that ROSS was allegedly pinching content from Thomson Reuters’ Westlaw product. It too closed shop. The legal world remained unchanged. We still need more doctors and we still need more lawyers. Back to that subject later.

LESSONS LEARNT

ChatGPT’s founders obviously learnt from the IBM experience that biting off far more than you can chew is detrimental. So is over-egging the product. Notice that every time a criticism is made of ChatGPT, the developers simply admit its imperfections. This is smart, because they do not promise the world. In fact they have promised nothing. Its ‘soft’, developmental launch phase demonstrates an intention to build a tool from the bottom up as users contribute to its expansion and growth. They will also harvest a vast trove of data and experience thrown up by the exercise.

What lessons have been learnt about how to guarantee its ethical application? And for that matter, how and in what way might it alter the world as we know it?

IBM Watson offering clinicians data to treat cancer sufferers based on other than real patient cases was as unethical as it was unacceptable. So was ChatGPT providing information to would-be terrorists and drug lords. To become mainstream tools, AIs need to do a lot better than that. Merely turning off functionality also results in less capacity for development, so there is a loss in purely evolutionary terms in adopting that approach.

When we consider the failings of IBM’s products, we must remind ourselves that 10 years in AI development, as with tech generally, is a veritable eternity. ChatGPT and Perplexity ought to be viewed as just another step, another generation of AI, where lessons can be learnt to improve their capacity, as part of a constant evolution of technological advancement. This is not in any sense the end game.

Yet what of the legal and ethical challenges? By way of example, schools as well as higher learning institutions have been seized of the dilemma that such AI applications may create an unequal and unethical playing field if used by some students to skirt around the requirements of real learning. Teachers grappling with an assault from AI-generated composition are trying to test students not only on the material submitted, but also on their own personal knowledge, to counter academic dishonesty. Will it be enough, when so much learning is now assignment and writing based?

Some may argue, with somewhat perverse logic, that if faced with the prospect of being quizzed on an AI-generated assignment, a student may learn the subject matter anyway, so where’s the downside?

In the legal world we once wrote by hand, then we typed, then we dictated. Now we might ask ChatGPT to compose and write it for us. Or, in other industries, we’ll ask DALL-E to produce an image, or CALA to design clothing. Many, however, currently view that as cheating.

But will things change? Will courts accept AI-generated submissions? Will they know? Will AI draft legal documents, determine the outcome of disputes and put us lawyers out to pasture? Why have fashion designers or artists when DALL-E is available? Da Vinci, eat your heart out.

LAW REFORM AND REGULATION

What there is to like about OpenAI as a company is its preparedness to listen, acknowledge problems and talk outcomes. So when it comes to what to do, Sam Altman, a co-founder of the company, recently said this:

‘…it’ll be tempting to go super quickly, which is frightening—society needs time to adapt to something so big. We think showing these tools to the world early, while still somewhat broken, is critical if we are going to have sufficient input to get it right…We also need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones’.

He also referred to the ‘serious challenges’ of such technology. He is entirely right. There is also no point denying humankind this tech. It’s coming, ready or not. What we need to do is prepare for it.

How then do we tackle the question of regulation? Because of the complexity of imagining every example of where an AI might engage in unethical or unacceptable decision-making, the approach to regulation really must be principles-based. Whilst the risk is motherhood regulatory statements, open to interpretation by the courts, there is little choice but to talk in general principles. Rule-based legislation and regulation has two obvious flaws in this area. First, it cannot imagine all case scenarios. Second, it is flat-footed. This technology has the capacity to bolt ahead of us, with rule- and prescription-based regulation forever playing catch-up. A principles-based approach makes more senior members of a corporation responsible, and focuses more on outcomes than on the black letter. It also appeals to the current trend of corporate social responsibility, which seems so popular.

The Australian Government, through the Department of Industry, Science and Resources, has adopted eight principles for ethical AI use. At this stage the framework is voluntary. Amongst the principles are human-centred values and human well-being, fairness, privacy protections, reliability and safety (whatever that word may mean in this context), transparency and explainability, contestability and accountability. Ratchet up the food chain and you will find international economic and social bodies with similar principles purporting to govern the international application and use of AI tools.

One is left with the feeling that the framework at present is best described as aspirational, at worst perhaps naive. Going on the abuses we have endured at the hands of big tech through social media, you would think we will have to go a fair bit further to curtail big tech unleashing artificial intelligence on us.

The problem is that there really is no meaningful way to approach regulation in this field other than through a principles-based approach. What we likely need is to drop the voluntary idea, make it law, and give it some teeth.

THE DEATH OF LAWYERS AND THE REST OF THE WORKPLACE

I’m certain you’ve read to the end of this article to ascertain whether lawyers are about to become extinct, or whether we will just reinvent ourselves and carry on. For that matter, what about every other professional services job, and every job-based industry?

Technology’s promises and threats are by no means consistent. Some innovations fail while others succeed. It is difficult to predict or judge without the benefit of hindsight. In addition, we humans have shown an extraordinary capacity to change and adapt. The disruption we have seen in this century so far is breathtaking, but we seem to just carry on.

With regard to AI, the first challenge is to ask whether it’s real or fake. Much that is paraded as AI just isn’t. But something that algorithmically improves itself is. In ChatGPT we have the real thing, even if it won’t compose a song about a democratically elected US Senator yet will about a regressive, communist dictator.

It’s really what happens next that matters. There have been plenty of tech innovations greeted by fanfare that have gone by the wayside. Whilst ChatGPT is a plaything now, its potential application in workplaces and offices makes it intriguing. But it has to be right. Even if it is less ambitious, Perplexity is far more practically useful to us right now, or so it would seem. It is very approachable and very usable.

If we get AI to function adequately, then the door will open. However, I have a number of reasons to doubt that our vocations will evaporate into thin air. First, it’s all been predicted before, and it never happens. The End of Lawyers? was published in 2008. As at 2023 there is no end of lawyers in sight – not in the slightest. But Richard Susskind became famous and sold lots of books. In reality, what there is no end of is academics providing startling if questionable predictions of the future.

We also adapt. Is the lawyer of today the lawyer of 100 years ago? Not in the slightest. Technology took care of that, but we are still here in ever increasing numbers, and unmet legal demand in Australia and elsewhere will guarantee we will still be here in another 100, functioning, of course, in an entirely different way. It is this capacity of humankind to change and adjust that is at play.

We also like to work. We aren’t going to let machines take our jobs. Otherwise, what would we do?

Finally, we don’t trust them. Even Siri, whom some speak to on a regular basis, is hopeless. Not only do we not trust the machine (think Blade Runner); we all crave the human experience. As humans, we want interaction with other humans.

At the end of the day, AI is an enabler. We can go places with it. But unless we’re bonkers, we will not let it rule our lives.

ROKO’S BASILISK

A final comment, however, ought to be made about Roko’s Basilisk, a thought experiment which would have us all end up in something like a real-life Terminator movie. Elon Musk has said that an AI could be the cause of another world war. Whilst short of the Basilisk’s dystopian vision, it is a profoundly sobering thought. Stephen Hawking in his lifetime said ‘AI has the potential to destroy its human creators’. The philosopher Nick Bostrom challenged us with the thought that if an AI bot’s only job was to make paperclips and it ran out of metal, would it attack humans to access more metal?

Meantime, let us take comfort in what seems true for the whole of human history so far, and was best put by the writer Jean-Baptiste Alphonse Karr, who reminded us that ‘the more things change, the more they stay the same’. I predict you will all have the pleasure of the company of lawyers for a long time yet.