Generative AI – friend or foe?
Generative AI has been on the radar for a while, but with the latest release of ChatGPT and Microsoft’s 365 Copilot it is very much a reality, and law firms will have to determine how such technology can help them, writes Mark Andrews.
I have been playing around with ChatGPT, including posing some questions which are not especially relevant for the legal sector, but nonetheless interesting.
A few days ago, I asked it to write a short essay comparing Munch’s The Scream and Van Gogh’s Sunflowers and it did a reasonable job that would at least compete with a senior high school or perhaps an early-stage bachelor’s degree piece (minus the referencing).
What it did well was to quickly pull together key facts and descriptions about the two paintings and their historical contexts. It also made a reasonable attempt at finding points of comparison. You could not physically write something covering all the points it addressed in the time it took to generate the short essay. What is, of course, blatantly obvious is that it could not see the paintings or make the more direct comparisons an art critic would make based on their judgement, what they see and what they feel.
In reflecting on this, I think it is not so difficult to understand when Generative AI may be a friend and when it may be a foe.
Generative AI as friend
Assuming there is relevant data within the information base used by Generative AI, it can do a very effective job at pulling together a summary on an enormous range of issues. It is far less time-consuming than a web search where we may have to go through multiple sites, and it yields a richer, narrative result. This is powerful from a productivity standpoint and from the perspective of using the best tools for the job – recognising that processing massive data sets is something technology can do well.
Generative AI can also provide definitions of terms and concepts quickly and with a good level of context.
Using Generative AI to come up with a template or foundational document such as, in my example, an art essay is another effective use as it provides a springboard from which we can conduct further research and add a human touch to the work.
The AI technology can support an apprenticeship model, too, as junior lawyers can test their research and writing against what Generative AI produces. Comparing the two outputs would provide learning opportunities in a safe and non-judgemental way.
On a global scale, laws and regulations change rapidly and court decisions influence how the law evolves and is applied. Law firms make sense of these laws and regulations and often publish guides which can then be amalgamated to provide a view on a particular issue. The process of generating and compiling these guides is time-consuming and Generative AI could significantly accelerate the time to compile and amalgamate content.
Part of the value that law firms provide is around precedent and a knowledge base derived from extensive experience in areas of law and specific jurisdictions. Generative AI can unlock significantly more value from precedents and knowledge bases than traditional search tools.
Generative AI as foe
GIGO (garbage in, garbage out) will be a familiar term to any student of computer science. We have tended to think of this as linear: if I feed rubbish data into a program, I am going to get rubbish output. Generative AI is better thought of as exponential rather than linear: a little garbage in can produce a lot of garbage out, and the potential for a vicious cycle increases once we add the learning dimension to the AI model.
To explain this further, let us say that we have a data set containing all of the worldwide regulations around statutes of limitation. If some of our dataset is wrong, but we do not realise it, then whatever the AI model generates will have some inaccuracy. If we feed output back into the model, it will learn more inaccurate information – and so the cycle continues.
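For the technically minded, this compounding effect can be illustrated with a toy simulation (a purely hypothetical sketch; the error rate, amplification factor and feedback share are illustrative assumptions, not measurements of any real system). A small fraction of the corpus starts out wrong; each cycle, generated output containing slightly amplified errors is blended back into the training data, and the error rate grows geometrically rather than staying flat.

```python
# Toy illustration of the GIGO feedback loop described above.
# Assumptions (all hypothetical): 2% of the initial corpus is wrong,
# generated output amplifies the input error rate by 1.5x, and half of
# each cycle's output is fed back into the corpus.

def feedback_cycle(error_rate, amplification=1.5, feedback_share=0.5):
    """One cycle: output errors exceed input errors, and that output
    is blended back into the training corpus."""
    output_error = min(1.0, error_rate * amplification)
    return (1 - feedback_share) * error_rate + feedback_share * output_error

rate = 0.02  # 2% of the initial corpus is wrong
history = [rate]
for _ in range(8):
    rate = feedback_cycle(rate)
    history.append(rate)

# Each cycle multiplies the error rate by 1.25, so after eight cycles
# the corpus is roughly six times as polluted as it started.
print([round(r, 3) for r in history])
```

The point of the sketch is not the specific numbers but the shape of the curve: once output is recycled as input, inaccuracy compounds each cycle instead of staying constant.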
Unless Generative AI adopts academic referencing, we don’t know the source and so we cannot test the accuracy or have a view on potential bias in the system. Blindly trusting the output is a risk and while that does not make AI a foe, it is certainly not a friend in this situation.
Generative AI is a foe to the extent that we have either too much or too little faith in our abilities as lawyers. What do I mean by this? Well, if we take the view that everything we do can only be done by us, or perhaps only by a human, then Generative AI will be a foe because it is going to disrupt what we do. Equally, if we fail to recognise those aspects of our work that are uniquely human and require human judgement, then Generative AI will be a foe because of its ability to outperform us on the speed and efficiency of content generation.
Areas for further exploration and what to watch
I think it would be misguided to take a view that Generative AI will not disrupt the legal profession. The maturity of the technology, while recognising there is much still to develop, is sufficient to make it a clear and present disruptor.
Commercially, it is clear that the likes of Microsoft see Generative AI as a key part of their ecosystems and there is little doubt that a range of specialist legal technology vendors are being forced to rethink their offerings.
Some of the early signs of Generative AI’s real impact are not, I would suggest, to do with firms announcing their use of Generative AI. Rather, it is about specialist legal technology vendors disappearing, or radically changing their product offerings. As Generative AI takes hold, it will be harder to find product niches related to the processing of massive amounts of text.
In terms of areas for further exploration, I have outlined below some questions across a handful of domains. These are by no means exhaustive, but I hope they help in considering what you can do now regarding Generative AI.
What would happen if you took the material from every internal training course you have on file and fed it to Generative AI? Would you be confident that people interrogating the data would receive reliable and helpful information and training content? If not, what could you do to ensure they do receive reliable and helpful content?
Do you often have people trying to find out what the policy is for something, or who to contact?
Does your technology team respond to identical how-to questions and repetitive basic-support needs?
Are you supporting an intranet with pages that are never visited or updated?
How much time is spent compiling and summarising content?
How much time is spent on activities that don’t really require a lawyer?
How much content is generated that does not need to be? This one is interesting: if Generative AI can provide tailored marketing information about your firm to prospective clients, what should proposal responses and panel submissions contain?
How can you really showcase the depth of talent and skill across the firm when prospective clients may be looking at your firm with a particular issue in mind and a niche set of skill requirements?
What would you need to do about the data you capture on the people within your firm so that Generative AI could build a narrative about the capability of your firm to address a particular niche issue?
You may have noticed that ChatGPT quite often ends with ‘In conclusion’, but I can assure you that ChatGPT did not write this article. In fact, it tells me that it cannot answer the question of whether it is a friend or a foe since “I am neutral and do not have personal motivations or intentions”.
This article merely scratches the surface of the questions and considerations surrounding Generative AI, but I hope it gives you some food for thought and ideas on where you may want to explore further.
Mark Andrews is Director – Global IT Service Delivery at Baker McKenzie. He has a varied background, including time in the public and private sectors, along with considerable professional services experience. He has held roles ranging from HR to management consulting and has previously been a guest lecturer in the business faculty of the University of Technology, Sydney – teaching at both bachelor’s and master’s (MBA) levels.