AI and intelligence-based learning


In The Robots Are Coming, Andrés Oppenheimer writes that no one is safe from automation, devoting a whole chapter to the legal profession under the title “They’re Coming for Lawyers!” So what are the implications of AI for the study of Law?

Well, AI is here: we write with Grammarly or Hemingway; Google answers all our questions; and legal firms use chatbots such as ‘Billy Bot’, which claims to do the work of a barristers’ clerk. (1) A recent report by the IES, commissioned by the Law Society, predicts that the AI revolution could cost 35,000 jobs in the legal sector, though admittedly mostly administrative roles that are already in decline. (2)

Before going any further, I tried to work out a definition of artificial intelligence, and I found it helpful to realise that AI is interdisciplinary: it draws on cognitive psychology, philosophy, computer science, mathematics and linguistics, as well as on that slippery notion of ‘intelligence’ itself. Some would argue that the definition of intelligence surrounding AI is based only on content or knowledge, rather than on the full complexity of human intelligence, which includes our physicality and emotional reality.

Across the education sector, at all levels, there have been debates about moving away from a knowledge-based curriculum towards a more intelligence-based one. If that shift happens, it would mean any discipline, including Law, focusing more on those non-cognitive skills such as creativity, critical thinking, communication and decision making, not forgetting digital and data literacy: all skills that, we are increasingly told by employers, are needed in the workplace. Indeed, the IES report emphasises this:

‘A common theme from the employer interviews was that firms were paying more attention to softer people skills, such as communication and team working, when recruiting legal professionals, whereas in the past they had only looked at the technical legal skills.’ (IES report, 9 December 2019)

Central to this is the idea of learning to learn, the ability to be flexible rather than tied down to hard rules. Individuals need to be able to navigate a world of fake news and deepfake videos, for example, by developing their self-efficacy and critical skills. They also need the ability to think hard about ethical behaviour, particularly in view of events like the Cambridge Analytica data misuse. It was Cambridge Analytica whistleblower Christopher Wylie who later spoke about how the unchecked use of data reduces human beings to a commodity:

‘There are very few examples in human history of industries where people themselves become products and those are scary industries – slavery and the sex trade. And now, we have social media’ (Campaign 2018)

In the business world, different approaches are developing. For example, the Swedish fashion company H&M argues for the importance of responsibility in using AI:

“AI is a powerful tool and allows for many new possibilities. But that we can do something doesn’t mean that we should do it.” Linda Leopold, Head of AI Policy (3)

One of H&M’s key methods is an Ethical AI Debate Club, where employees meet to debate dilemmas that could potentially arise in their industry. Indeed, in a 2019 Jisc presentation, ‘Why ethical debate is crucial in the classroom’ (4), Dr Miranda Mowbray uses the example of Cambridge Analytica’s Christopher Wylie, who was a research director at the age of 24, to argue that young professionals are increasingly under great pressure when faced with complex decisions, and that it is crucial that universities support students in building these ethical decision-making skills, as educational institutions are safe and trusted spaces in which to do so.

AI itself could be an ally to a more intelligence-based learning approach. It could support the challenge of assessing beyond the knowledge curriculum, or help in assessing those students who might not perform well in standard assessments.

‘Through data collection approaches that work on modelling techniques, students can benefit from a richer fair assessment system that can evaluate students across a longer period of time and from an evidence-based, value-added perspective.’ (Luckin 2017a)

So if the likes of AI legal chatbots are increasingly going to cover the ‘content’ work, such as answering basic questions from clients or overturning parking tickets (6), could that mean that legal practitioners will really only be dealing with the more complex and subtle matters of Law?

“Lawyering requires human-human interaction, creativity, language processing at the highest level, deep understanding of how society works, and a sort of experience that can (currently) only be gained by humans.” (7)

(1) https://www.lawsociety.org.uk/news/stories/chat-show/

(2) https://www.lawgazette.co.uk/news/ai-revolution-could-cost-35000-uk-legal-jobs-law-society-research/5102442.article

(3) https://hmgroup.com/media/news/general-news-2019/meet-linda-leopold–head-of-ai-policy-.html

(4) https://www.jisc.ac.uk/blog/why-ethical-debate-is-crucial-in-the-classroom-09-apr-2019

(6) https://www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york

(7) https://static.legalsolutions.thomsonreuters.com/static/pdf/S045388_2_Final.pdf
