This blogpost details how I redesigned and taught an undergraduate history module in 2019, with a particular focus on how I developed and implemented student choice in the format of their assessments. I have discussed this briefly before for Jisc; here I go into more detail in support of a poster presentation on this case study at the Assessment for Higher Education (AHE) conference in Manchester June 22-23 2023. The poster can be viewed as a PDF here. Please come say hi on Thursday June 22 from 16:40-17:40 in Room 5!
What was the module?
This module had previously been taught traditionally: students completed readings for three weekly lectures and submitted three large assignments over the 15 weeks. The department knew that these assessments were often a challenging area for students, contributing to low numbers of students completing the course.
The module was titled “Asia since 1500”, with a geographic span covering South, South-east, and East Asia, a thematic range including politics, economics, and social/cultural history, and a time period from 1500 to the present. The module was intended to expand students’ knowledge of this region and period and introduce them to historical methods such as primary and secondary source analysis. Both the content and the skills were new to most students at university level, so no assumptions could be made about building on experience from secondary school.
Who were the students?
My first priority was to understand who the students were in the module, in order to design relevant learning experiences for them. The module was an undergraduate survey history course for a US university, which meant that only some of the 39 students signed up were history majors. Furthermore, the students were a mix of years, with some just entering university, and others in their final year. Additionally, some students were mature and/or part-time students, with caring and job commitments outside of the classroom. Finally, while this module had been taught previously and the former module leader was positively reviewed by students, the module also had non-completion rates that were unusually high within the department.
Additional context & constraints
I was hired on Boxing Day for a module starting on January 6, with no access to the previous module leader’s notes or materials. I was also based in the UK, with my students in Illinois, in a department with no previous experience of online teaching or learning (this was 2019). The institution did use Blackboard, which students were familiar with and which was integrated with the institution’s library.
Given the institution’s position, however, all materials had to be available for free or via the library (not behind external paywalls) and in English, as the module had no language requirement for participation. Furthermore, as I knew that some students would be commuting to campus, I aimed to make all resources available online, in addition to being freely available and in English.
Finally, in order to maintain its qualification as an introductory module in the department, the final assignment of the module had to be kept: a 1,000-word annotated bibliography of primary and secondary sources.
Making the module entirely online, and entirely asynchronous
Given the location and time constraints, and as this was before the widespread use of platforms like Zoom or Teams, I decided to make the entire module fully online and fully asynchronous. Each week had three “sessions,” each with a micro-assignment to be completed before the next session.
Each session followed the same consistent structure to make it straightforward for students to plan and execute their engagement with the module. Each session was designed to be completed in the same amount of time that previously would have been devoted to preparing for and attending a lecture.
For each session students received an email from me that included:
- The session’s question for them to answer:
- This was always a “how” or “why” question that required making an argument about how different information went together.
- Students were explicitly told that there was no “right” answer and that they would be graded on how they put the source material together and the clarity of their answer. I found that this also minimised plagiarism issues: because students were responding to analytical “why” or “how” questions that relied on synthesising different sources, rather than multiple-choice or true/false fact-based questions, successful completion was unrelated to the ability to look up answers, avoiding some common pitfalls of introductory history assessment (especially online).
- Questions included: “Why was the Ming dynasty in China politically powerful in the 1600s?” or “What does this primary source painting suggest about the economy of 17th century Bengal?”
- A list of 4-5 resources to engage with to answer the question that were always:
- Freely available online and fully in English
- A mix of primary and secondary sources
- A range of types, like paintings, YouTube videos, academic articles, and podcast episodes (I’ve written more about using podcasts in reading lists if you’re interested).
- The resources were organised so that the further down the list students went, the more fully they could answer the question, but even if students only used the first one or two of the resources, they could submit at least a partial answer to the question. This was explicitly explained to students.
Students then had three options for the format of their answer, their micro-assignment, to be submitted via Blackboard:
- 200-300 words in writing
- 1-2 minute audio clip
- A4 mindmap or diagram
Following their submission, each student would then get a response from me via Blackboard before the next session detailing one thing they had done well in their answer, one area to improve on in the next micro-assignment, and a grade.
These micro-assignments acted as in-class activities in an extreme flipped-classroom model, and I drew on principles of micro-learning and generative learning in designing them. The larger assignments were summative; the micro-assignments were also graded, but students could drop their lowest grades and the remaining ones were averaged. I chose this method because it enabled students to develop their knowledge and skills and get feedback that directly scaffolded to the larger assignments, with consistent constructive alignment across the module.
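The drop-the-lowest-then-average rule can be sketched as follows; note that the number of droppable grades here is purely illustrative, as the post does not specify how many grades students could drop:

```python
def average_with_drops(grades, drop_n):
    """Average micro-assignment grades after discarding the drop_n
    lowest scores. drop_n is a hypothetical parameter, not a figure
    from the module itself."""
    if len(grades) <= drop_n:
        return 0.0
    kept = sorted(grades)[drop_n:]  # discard the drop_n lowest grades
    return sum(kept) / len(kept)

# e.g. a student who missed one session (a 0) but otherwise scored well:
print(average_with_drops([0, 85, 90, 78, 92, 88], drop_n=1))  # → 86.6
```

The point of the design choice is visible in the example: a single missed session does not drag a diligent student's average down, which supports consistent engagement without making any one micro-assignment "do or die".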
Why give students a choice of assessment type?
Students were initially required to use a range of response types for the micro-assignments, to try out new forms of engagement and develop basic skills in different types of communication. In the later stages of the module, they were then given a choice of how to respond. This promoted student agency, was more flexible for students with different time constraints, and was more inclusive of different learning preferences. I explained these reasons to students via clearly written documents on the VLE and set aside time in sessions to process the information, helping them understand the structure and format of assessments in the module and ensuring that our expectations and assumptions were aligned.
Providing one piece of positive feedback and one area to improve for the next assignment also helped ensure constructive alignment between assignments: it kept me accountable as an instructor for ensuring that each session was clearly aligned with the next, and that each piece of improvement-focused feedback was relevant and possible to implement within the context of the course. Doing this consistently also helped students develop an understanding of the wider goals of the course and see the development of skills as just as important as the acquisition of historical knowledge, which was a key departmental priority.
If I were to do this again, I would likely make these micro-assignments more formative than summative, and in a UK context I would have done that from the start. But as US universities focus on Grade Point Averages, I wanted to ensure that the three large assignments, which had stopped previous students from completing the module, loomed less large in GPAs. Grading the micro-assignments (41 in total) created a cumulative impact on students’ GPAs, meaning the larger assignments were less “do or die”. Student feedback after the module confirmed that this did help some students, and that others found it helpful for building a habit of consistent engagement with the module, which meant the larger assignments were less rushed.
How did students respond?
The feedback from students, both qualitative and behavioural, was overwhelmingly positive about what was a quite radical change from what they had expected when they signed up for the module. 90% of students completed the module, with 73% of students submitting all assignments on time, both significantly higher than departmental averages across all modules and higher than all previous iterations of this module.
The supervising professor said that my course had “Opened their [the faculty’s] eyes” to the possibility of achieving a high standard of teaching and learning online.
Some students reported that the micro-assignments helped them stay on track with the fast pace of the module. Additionally, two students noted that smaller, frequent assignments were easier to accomplish as mature students with caring and work responsibilities, and that they therefore felt more able to be fully included in the module on the same basis as traditional undergraduate students.
Bibliography
Biggs, J.B. and Tang, C. (2011) Teaching for quality learning at university: What the student does. Berkshire: Society for Research into Higher Education & Open University Press.
Carless, D. (2006) “Differing perceptions in the feedback process,” Studies in Higher Education, 31(2), pp. 219–233. Available at: https://doi.org/10.1080/03075070600572132.
Collier, P.J. and Morgan, D.L. (2007) “‘Is that paper really due today?’: Differences in first-generation and traditional college students’ understandings of faculty expectations,” Higher Education, 55(4), pp. 425–446. Available at: https://doi.org/10.1007/s10734-007-9065-5.
Eddy, S.L. and Hogan, K.A. (2014) “Getting under the hood: How and for whom does increasing course structure work?,” CBE—Life Sciences Education, 13(3), pp. 453–468. Available at: https://doi.org/10.1187/cbe.14-03-0050.
Fiorella, L. and Mayer, R.E. (2015) “Eight ways to promote generative learning,” Educational Psychology Review, 28(4), pp. 717–741. Available at: https://doi.org/10.1007/s10648-015-9348-9.
Hattie, J. and Timperley, H. (2007) “The power of feedback,” Review of Educational Research, 77(1), pp. 81–112. Available at: https://doi.org/10.3102/003465430298487.
Morris, C., Milton, E. and Goldstone, R. (2019) “Case study: Suggesting choice: Inclusive assessment processes,” Higher Education Pedagogies, 4(1), pp. 435–447. Available at: https://doi.org/10.1080/23752696.2019.1669479.
Nieminen, J.H. (2022) “Assessment for inclusion: Rethinking inclusive assessment in higher education,” Teaching in Higher Education, pp. 1–19. Available at: https://doi.org/10.1080/13562517.2021.2021395.
Sadler, D.R. (2010) “Beyond feedback: Developing student capability in complex appraisal,” Assessment & Evaluation in Higher Education, 35(5), pp. 535–550. Available at: https://doi.org/10.1080/02602930903541015.
Sharples, M. (2014) Innovating pedagogy 2014 – Open University. Available at: https://www.openuniversity.edu/sites/www.openuniversity.edu/files/The_Open_University_Innovating_Pedagogy_2014_0.pdf.
Starr-Glass, D. (2020) “Significant learning experiences and implied students,” On the Horizon, 28(1), pp. 55–62. Available at: https://doi.org/10.1108/oth-09-2019-0067.
Ulriksen, L. (2009) “The implied student,” Studies in Higher Education, 34(5), pp. 517–532. Available at: https://doi.org/10.1080/03075070802597135.
Warren, D. (2002) “Curriculum design in a context of widening participation in Higher Education,” Arts and Humanities in Higher Education, 1(1), pp. 85–99. Available at: https://doi.org/10.1177/1474022202001001007.
Wingate, U. (2006) “Doing away with ‘study skills,’” Teaching in Higher Education, 11(4), pp. 457–469. Available at: https://doi.org/10.1080/13562510600874268.