Session 1C (Paper 1): What are we assessing when assessing writing? Conversations with programme teams in the wake of generative AI

Dr Olga Rodriguez Falcon (Lecturer in Higher Education and Learning Development), Centre for the Innovation and Development of Education (CIDE), St George’s, University of London

Dr Rosie MacLachlan (Senior Lecturer in Higher Education and Learning Development), Centre for the Innovation and Development of Education (CIDE), St George’s, University of London


We conducted focus group discussions with the teaching teams of six different programmes at St George’s, University of London. Participants explored ideas around the assessment of writing in their disciplines and the impact that new generative AI language models are having on assessment design, assessment practices and, more generally, on writing skills for employability in their disciplines.

The recent emergence of generative artificial intelligence (AI) – technology capable of automating the generation of texts through large language models – has been predicted to transform literacy practices in the coming years (Farrokhnia et al., 2023; Hill-Yardin et al., 2023), posing questions for universities both around what writing skills graduates will need in the future, and how current degree courses can adapt in order to develop these. The ‘threat’ generative AI poses to existing assessment methods is also widely discussed (Bagshaw, 2022; Sims, 2023; Warner, 2023), and may require us to transform our approach to assessment in a short space of time. Recent research suggests that university students have quickly become familiar with this technology, and are likely to have a better understanding of what it can provide than most university staff do (Strzelecki, 2023), suggesting an urgent need to provide guidance and support to assessors of academic writing.

From April to July 2023, we carried out a research project with the aim of exploring the purposes and practices of assessing writing in undergraduate and postgraduate degrees at St George’s, University of London. As part of this research, we conducted focus group discussions with the academic teams of six different programmes at St George’s. The discussions focused on issues around writing within disciplines, graduate expectations, and the impact of new AI technology on assessment design and assessment practices. Our presentation will introduce some of the key themes emerging from these discussions and how they can illuminate possible new approaches to the assessment of writing at university.

References

Bagshaw, J. (2022) ‘What implications does ChatGPT have for assessment?’ Accessed 11 May 2023. https://wonkhe.com/blogs/what-implications-does-chatgpt-have-for-assessment/

Farrokhnia, M., Banihashem, S. K., Noroozi, O. & Wals, A. (2023) ‘A SWOT analysis of ChatGPT: Implications for educational practice and research’, Innovations in Education and Teaching International. DOI: 10.1080/14703297.2023.2195846

Hill-Yardin, E. L. et al. (2023) ‘A Chat(GPT) about the future of scientific publishing’, Brain, Behavior, and Immunity, 110, pp. 152–154. DOI: 10.1016/j.bbi.2023.02.022

Sims, A. (2023) ‘ChatGPT and the future of university assessment’. Accessed 11 May 2023. https://www.timeshighereducation.com/campus/chatgpt-and-future-university-assessment

Strzelecki, A. (2023) ‘To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology’, Interactive Learning Environments. DOI: 10.1080/10494820.2023.2209881

Warner, J. (2023) ‘ChatGPT and writing assessment: an old problem made new’. Accessed 11 May 2023. https://www.insidehighered.com/opinion/blogs/just-visiting/2023/04/21/chatgpt-and-writing-assessment-old-problem-made-new

 
