SEDA Spring Teaching and Learning and Assessment Conference 2016: Innovations in Assessment and Feedback Practice, 12th–13th May 2016, Edinburgh

Keynote: The feedback conundrum: finding the resource for effective engagement. Professor Margaret Price, Oxford Brookes University


Margaret favours discussing assessment in terms of ORA: Ownership, Responsibility and Autonomy. We know that assessment is a key driver for learning, and yet we do not draw on pedagogic developments when designing assessments. The discourse is so often focused on checking the grade, grade inflation and fairness. Many institutions have moved to anonymous marking, and yet we know students want personalised feedback. Measurement in higher education has had the unintended consequence of fragmentation because of modularity. Authentic assessment is what the employer market wants, and yet we continue to use traditional approaches. Our assessment cultures come from a collusion between staff and students (Eccleston 2006).

There is an assumption that designing assessment at the module level is sufficient, but assessments need to be viewed at programme level, and we have to move to this. Research has shown that where assessments are designed across the programme students do well, yet where the programme is merely a collection of modules students do poorly. Variety in assessment can be good, but too much is disconnecting for students.

There is an assumption that constructive alignment is enough for coherent assessment and feedback. There is also an assumption that standards are straightforward, but we know students do not understand criteria and that consistency between markers is not robust. Students go through programmes mostly seeing only their own work, so they have no chance to compare. We need to create a community of practice that includes students.

Assessment judgements are complicated, with markers interpreting criteria in the light of their own experience and sometimes using other criteria entirely. There is an assumption that there is a common view of assessment; however, we know that students don't read it, that it is sometimes vague and open to interpretation, and that it damages self-efficacy. Students also have emotional responses to feedback.

So what makes good feedback? Staff from ASKe and Cardiff undertook an HEA-funded project. Students became researchers and conducted semi-structured interviews with other students, using one example of good feedback and one example of bad/poor feedback. Students also undertook the analysis, with support. Three key findings emerged:

  1. The feedback itself was often very technical, focusing on presentation, legibility and levels of explanation. There was recognition of effort and time spent on work, although markers cannot know how much time students have actually spent just from the work itself.
  2. The context of the feedback was important: assessment design mattered in terms of purpose and relevance. Feedback was given in relation to criteria, but this was not a dialogue.
  3. The last area was students’ development and expectations. Students showed that the mark had some influence, but some were upset by comments. Some students can self-evaluate, but others cannot.

The summary of the findings was that you don’t need to get it right all the time, but students’ perceptions are shaped by pre-feedback conditions. There is a need to improve student learning development so that students are better equipped to engage with assessment.
