Session 1C: (Paper 2) What Machine Learning should remind us about human learning & its implications for assessments

Dr Neil Saunders (Senior Lecturer in Mathematics) School of Science and Technology, City, University of London


The process of machine learning fundamentally relies on ‘learning from errors’ via human (or other forms of) feedback. Designing assessments that allow students to learn from errors as opposed to being punished for making them is therefore essential.  
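To make the 'learning from errors' analogy concrete, here is a toy sketch (not drawn from the talk itself): a single parameter is repeatedly nudged in whatever direction shrinks its error signal, which is the essence of gradient-style machine learning. The target value and learning rate are illustrative choices.

```python
# Toy illustration of 'learning from errors': each step, the error
# (the 'mistake') is measured and used as feedback to correct the
# current guess, rather than to penalise it.
target = 10.0     # the value the learner is trying to reach
w = 0.0           # initial (wrong) guess
lr = 0.1          # learning rate: how strongly feedback is applied

for _ in range(100):
    error = w - target    # how wrong the current guess is
    w -= lr * error       # the error itself drives the correction

print(round(w, 2))        # the guess has converged close to 10.0
```

The point of the sketch is that the error is never a terminal penalty: it is the very signal that produces improvement on the next step, which is the property the talk argues assessments should share.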

During and since the Covid-19 pandemic, HE institutions have been forced into a long-overdue rethink of effective assessment in the context of greater online learning. Some institutions have removed on-campus exams for a variety of reasons, from the fact that they undoubtedly cause students undue stress to the argument that they are not effective forms of assessment. Yet at the same time, the rise and widespread use of generative AI has raised concerns about take-home exams and coursework more generally.

In the context of mathematics and the technical sciences more broadly, concern over assessment regimes prompted a joint statement by the London Mathematical Society (LMS 2022), the Institute of Mathematics and its Applications and the Royal Statistical Society urging universities to design assessments that are “fit for purpose and fair”, underlining that in these disciplines “there are specific bodies of knowledge that students are expected to know and understand” and adding that “examinations afford the ability to test this in a fair and reliable way”.

This talk will present a better-mark assessment regime piloted by the author some years ago, in which items of assessment are conditionally weighted against one another, allowing students to genuinely make mistakes and be rewarded for using feedback effectively to perform better on the next assessment task (Easdown et al. 2009). Drawing on analogies with machine learning, in particular reinforcement learning, and on recent scholarship on theory of mind (Dennett 2017), it will argue that the better-mark mechanism allows all of the teaching resources and assessments to work together coherently to maximise student learning, whilst minimising the stress that exams undoubtedly cause.
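One way a conditional weighting of this kind can work is sketched below. This is a hypothetical illustration, not the exact scheme piloted by the author: the earlier task mark counts towards the final grade only when including it helps, so a poor early attempt followed by strong exam performance is rewarded rather than punished. The weight of 20% is an assumed figure.

```python
def better_mark(task_mark: float, exam_mark: float,
                task_weight: float = 0.2) -> float:
    """Hypothetical better-mark rule (illustrative, marks out of 100).

    The final mark is whichever is higher: the usual weighted
    combination of task and exam, or the exam mark alone. An early
    mistake therefore can only ever help or be neutral, never hurt.
    """
    weighted = task_weight * task_mark + (1 - task_weight) * exam_mark
    return max(weighted, exam_mark)

# A student who stumbles early (40) but uses feedback to score 75
# on the exam keeps the full 75:
print(better_mark(40, 75))   # 75.0
# A student who did well early (90) still keeps that benefit:
print(better_mark(90, 75))   # 0.2*90 + 0.8*75 = 78.0
```

Under this rule the early task functions as feedback in the reinforcement-learning sense: it shapes behaviour for the next attempt without imposing a lasting penalty.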

References

Dennett, D. (2017) From Bacteria to Bach and Back: The Evolution of Minds. W. W. Norton & Company.

Easdown et al. (2009) “Learning and teaching in summer: is it better and why?” in Ideas and Interventions, Alexandra Hugman (ed.), UniServe Science Conference Proceedings, UniServe Science, The University of Sydney.

LMS (2022) Statement on Methods of Assessment in the Mathematical Sciences. Available at: https://www.lms.ac.uk/node/1740 (accessed 2 April 2024).

Saunders, N. (2023) “Embracing Technology & Overcoming Institutional Barriers to Innovation and Student Partnership”.
