How AI is changing legal practice now … and what we can expect
While artificial intelligence (AI) is already having an impact on the practice of law, we can expect it in the future to help us recruit staff and manage work streams (and do much more), writes Mark Andrews.
Intelligence can be thought of as the ability to acquire knowledge and skills, and to apply acquired knowledge and skills.
The Cambridge Dictionary is a little more specific, identifying learning, understanding and judgement as key elements of intelligence. We can debate whether intelligence needs to be any or all of those things, but something we accept as a sign of intelligence is knowing how multiplication works and being able to multiply two numbers together. We also accept as intelligence the ability to match patterns, mix ingredients to make a cake, write an email, and answer a question about how to do something.
My point is that intelligence comes in many flavours and levels of complexity. AI is no different, and it is helpful to demystify it by thinking of it simply as a way to encode human intelligence.
The area of discovery in litigation has benefited from AI, even if it has not always been called AI, for a considerable time. Clearly, machines process large volumes of data and identify relevance far more quickly than a team of people. Rather than investing time at the less complex end of discovery, teams of people can now focus on more complex and valuable activity.
When we look at the current and potential impact of AI, the question to ask is whether, given the computational power available, an aspect of knowledge and understanding can be encoded and performed by AI. We should not think of this in all or nothing terms, but instead consider what machines can now do more efficiently than we can.
Importantly, we need to extend our thinking not just to encoding existing knowledge, but to producing new knowledge from what we know now – in other words, a generative application of AI.
Invaluable research tool
Legal research is one of the areas in which we are seeing the impact of generative AI. Major vendors are investing in gen-AI that allows legal research to be completed more efficiently, including critical components of legal research such as citation. The solutions are available now and, while there is still a price premium for these services, over time they will become an expected standard in legal research platforms.
We can predict with a high degree of confidence that these generative AI solutions will very quickly reach a point where they perform legal research as effectively as, and probably more effectively than, their human counterparts.
Just as we see gen-AI being applied to external research content, within law firms we are now seeing the technology being applied to precedents and client-specific content. Client-specific content is especially interesting as it is where law firms are taking what they know of a client based on all of the advice they have given, combined with legal knowledge, to provide customised responses to questions in specific domains. These solutions are assisting clients to obtain advice more rapidly and allowing law firms and clients to spend more time higher up the value tree.
Generative AI as a coach is another area of common application, and this benefit is certainly not confined to the legal profession. Gen-AI can now be used as a presentation/meeting coaching tool, as a colleague and I recently discovered when assessing how often we use phrases such as ‘you know’, ‘um’ and ‘I mean’. It is interesting that receiving feedback about the overuse of filler words from generative AI is, I think, more powerful than a presentation skills training session in which we might video ourselves and then receive a critique.
Generative AI gives us immediate feedback in the form of a set of facts and a simple statement about trying to avoid filler words. This purely fact-based feedback is not something we can argue with or get upset about. What is even better is that we can gamify this straight away: I wonder if I can improve my score next meeting? In which meetings do I naturally get higher scores or lower scores? What are the culturally specific filler words?
Generative AI as an assistant is yet another current application. One example of this is when it transcribes meetings, provides a summary, identifies key actions and offers a catch-up service if we miss some or all of a meeting.
There are many other current applications of AI and generative AI and, while there is a long way to go, there is sufficient maturity in the solutions for them to be mainstream business tools. The price premium is a factor in adoption now, but over time that will be less of an obstacle.
The impact of AI today is that the business and practice of law are becoming more efficient in a way that is allowing firms to shift up the value tree, and clients to obtain more value from what they are spending. It is allowing individuals to improve their work habits and interactions. It is streamlining legal research. It is allowing us to consider more cases, precedents and knowledge in making judgements. It is guiding where we are best placed to invest our time.
While I am not suggesting all of these impacts are being seen in every firm and in-house team, they are real and they are being seen in a significant number of firms (although we should always beware of marketing hype, and it is better to look at client-specific and case-specific applications to see how real the impact is).
What can we expect?
Now to the less concrete focus of this article. Predicting the future of technology is fraught. Thomas Watson, president of IBM, commented in 1943, “I think there is a world market for maybe five computers.”
Predicting the future application of technology remains difficult, and AI is no exception. What I think is helpful is to consider the current challenges we would like to address with AI, and then to remain constantly open and inquisitive as the technology develops.
Humans are flawed when it comes to knowledge recall because memories enter our brains through filters and we recall them through filters. Aside from the quality and longevity of the storage medium, computers have perfect knowledge recall – that is, if something has been stored, it can be recalled exactly as it was recorded.
Humans are biased in making judgements and, while we may claim to follow the same set of rules and patterns, there is no end of research papers demonstrating that, whether we like it or not, we are biased.
Humans are also incredibly resourceful and creative. We have an ability to hypothesise with very limited data and to imagine and dream in ways for which we struggle to provide a rational explanation. Our biological processes are analogue rather than digital in that we deal with infinite variation, rather than a binary world.
Considering these points, I think we can expect AI to significantly reduce bias, adding the significant caveat that this depends on bias not being built into the AI in the first place. The filler-word analysis discussed earlier in this article is a simple example of unbiased AI: we did not instruct the AI that saying ‘you know’ is a common habit in Australia and is therefore okay – we simply allowed it to take in all relevant data (the entire meeting/presentation) and to apply a set of algorithms to determine whether a word or phrase was a filler and how often it was used.
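To make the mechanics concrete, the counting step itself is not mysterious. Here is a minimal, purely illustrative sketch in Python – not the actual tool my colleague and I used, and the phrase list and sample transcript are invented for the example – that scans a transcript for a predefined list of filler phrases and tallies how often each appears.

# Illustrative sketch only: tally an assumed list of filler phrases in a transcript.
import re
from collections import Counter

FILLERS = ["you know", "um", "i mean"]  # assumed phrase list for the example

def filler_counts(transcript: str) -> Counter:
    text = transcript.lower()
    counts = Counter()
    for phrase in FILLERS:
        # word boundaries so that "um" does not match inside words like "summary"
        counts[phrase] = len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    return counts

sample = "Um, I mean, the draft is, you know, nearly ready. Um, one more day."
print(filler_counts(sample))  # Counter({'um': 2, 'you know': 1, 'i mean': 1})

A real product would layer speech-to-text and far more sophisticated language analysis on top of this, but the essential point holds: the feedback is a set of counts derived from the data, not an opinion.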
Recruitment is an area in which AI is already being applied, and my prediction is that humans will play less and less of a role in the recruitment process. We may instead flip the burden, requiring the humans deciding on an appointment to justify why they are selecting a candidate if a comprehensive AI screening process has recommended someone else.
AI will have the ability to assess skills, background, knowledge and risk factors associated with candidates. Making sure the AI is unbiased will be a challenge, but in time this will be less of a challenge than trying to stop humans being biased.
AI will fundamentally change our daily work pattern. It will not be too long before an AI assistant can do the following at the start of the day:
- assess all approval requests (e.g. leave, expenses, time sheets, contracts, promotions) and provide a summary in a form that allows us to focus attention only where it is needed. Imagine this:
"I have reviewed all expense approvals and note that three of the four expense claims are all in policy, have full documentation and are consistent with budget and usual expenditure patterns. There is one trip that Jim made which included a $1000 dinner for a team of four people. Would you like me to go ahead and approve the three expenses and follow up with Jim to find out more about the $1000 dinner?"
- provide a summary of all new emails
"You received 20 emails from vendors you don’t know, but there is one that seems interesting given the meeting you had yesterday about generative AI. Would you like me to reply to the 19, or just delete them, and would you like me to generate a more detailed summary of the one that looks of interest?"
"There was a conversation between Esmerelda and Jose about a promising development in the Carfox Matter. It looks like they will have news by Friday, so I have set up a tentative meeting for you. I will track progress and move the meeting if necessary.”
"You received 200 emails, but none of them need your attention today, so I have blocked your diary so you can focus on the proposal to Brington about their wind farm, which is due today."
A second prediction is that, within the generative AI landscape, the greatest value will come from solutions that are grounded and bounded. Grounding is a much discussed and documented technique of connecting model output to verifiable information. I use the term bounded to convey the concept of setting the content domains in which the solution operates. This is already occurring, as mentioned earlier regarding legal research: vendors are exposing a set of curated content – not an entire universe of content. For firms, the true AI-enabled competitive advantage will come from using data which only that firm knows, along with other relevant data which may be more public. A major challenge will be deciding what is verifiable and where content domains should be bounded. In a world drowning in information, this is not a simple undertaking.
A third prediction is that we are still a long time away from AI being able to handle ‘reasonableness’ tests. The reasonable person test is a common basis for legal judgements, but trying to encode it with any degree of reliability is incredibly challenging. I would extend this prediction to any form of higher-order judgement.
I make this prediction with a degree of trepidation, as our natural tendency is to think that more of what we do involves higher-order judgement than is actually the case. To try to combat this, it is useful to keep a judgement complexity log over a few weeks – that is, a log that scores each judgement you make on its degree of complexity (approving a routine expense might score a one, while advising on a novel regulatory question might score a nine). The individual scores matter less than the relative scores and their distribution, and the log will help determine the extent to which AI can perform aspects of your role.
What next?
In this article I have dipped into some current applications of AI, considered some future implications, and made some predictions. AI poses challenges and risks, but the two that I want to highlight in concluding this article are what I would call the ‘learning risk’ and the ‘focus risk’.
The learning risk is that the profession fails to learn enough about AI to leverage and adapt to it, and is instead disrupted from outside. The focus risk is that firms invest everything in getting the AI right at the expense of their true business: providing legal services. The good news is that thought, reflection, realism and an open mind are excellent risk mitigators – and I hope this article has contributed to mitigating these risks.
Mark Andrews is Director – Global IT Service Delivery at Baker McKenzie. He has a varied background, including time in the public and private sectors, along with considerable professional services experience. He has held roles ranging from HR to management consulting and has previously been a guest lecturer in the business faculty of the University of Technology, Sydney – teaching at both Bachelor and Masters (MBA) level.