This year’s Association for Learning Technology conference (ALT-C) was themed ‘Data, Dialogue, Doing’. As we have just finished phase one of our Learning Analytics Project (LeAP), I attended with my Learning Analytics hat on to find out more about what others in the sector are doing.
Sue Beckingham’s opening keynote set the scene by talking about the use of data in everyday life, with reference to smart fridges, loyalty cards and the role of surveillance. She highlighted the amount of data being produced, especially via social media, and the notion of information overload, which is hardly a new idea: she quoted Adrien Baillet, writing in 1685 about the “multitude of books which grows every day”. In an era of fake news, Sue encouraged us to look at the SHARE checklist as guidance to avoid spreading fake or misleading news and content. She also asked us to reflect on the cookies that we accept, noting the importance of keeping our data safe, the right to erasure and the role of GDPR. How does this relate to education? We need to be transparent about how we collect and use student data, and teach our students how to keep their data safe.
@suebecks encourages us to know our different flavours of cookies 🍪 🍪 …… which one are you happy to consume? #altc pic.twitter.com/8tMNnW4xVQ
— Suzanne Faulkner 👻📱🛴 (@SFaulknerPandO) September 3, 2019
“Where are the students hiding?” asked Andrew Kitchenham and David Biggins from Bournemouth University. They suggested that students can be found in hidden learning environments, such as WhatsApp or Facebook, where they leave analytics trails that we don’t have access to.
What is the 'hidden learning environment'? Social media, internet search, iTunesU, YouTube. It has always existed and is a way of avoiding institutional oversight. Some mentioned having a lack of confidence in using the VLE. #altc
— Dr Julie Voce (@julievoce) September 4, 2019
They described an experiment in which students were given an assignment brief on the virtual learning environment (VLE) containing one key piece of information about the format of their submission. The VLE analytics reported that only 38% of students downloaded the assignment brief, yet 100% submitted in the correct format. Further investigation found that student reps were downloading documents from the VLE and uploading them to the ‘hidden learning environments’. This calls into question not only the value of analytics data from the VLE, but also the role of the VLE itself if the majority of students aren’t using it directly and are relying on others for information.
We need to take the hidden learning environment into account and accept there are limitations to VLE data. Need to consider alternative indicators of engagement such as attendance and formative assessment. #altc pic.twitter.com/ML9uMtEp71
— Dr Julie Voce (@julievoce) September 4, 2019
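The gap between what VLE analytics report and what actually reaches students is easy to quantify when you hold both the download logs and the submission records. Here is a minimal sketch in Python; the student IDs, file format and data shapes are invented for illustration and are not from the Bournemouth experiment:

```python
# Hypothetical illustration of the gap between VLE analytics and actual reach.
# Neither the data nor the field names come from the Bournemouth experiment.

vle_downloads = {"s01", "s04", "s07"}  # students whose VLE logs show a brief download
submissions = {f"s{i:02d}": "pdf" for i in range(1, 9)}  # student ID -> submission format

download_rate = len(vle_downloads) / len(submissions)
correct_format_rate = sum(fmt == "pdf" for fmt in submissions.values()) / len(submissions)

print(f"Downloaded brief (per VLE logs): {download_rate:.0%}")    # 38%
print(f"Submitted in the correct format: {correct_format_rate:.0%}")  # 100%
# A large gap suggests the information reached students outside the VLE,
# e.g. via reps re-sharing documents in the 'hidden learning environments'.
```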
Moira Sarsfield and Helen Walkey from Imperial College London used a scenario-based approach to get us thinking about the use of learning analytics in relation to GDPR and ethics. In groups, we were tasked with identifying the actionable insight for the question ‘how does student use of tools in the VLE relate to attainment?’ We decided that being able to identify the tools used by the students with the highest attainment could inform the guidance given to students about how to study, and potentially lead to a course re-design. We were then asked to consider the legalities of carrying out a study on tool use versus attainment. We agreed that legitimate interest is the appropriate lawful basis here, so we would need to carry out a legitimate interest assessment and ensure privacy notices cover this use. If we were to add sensitive data (e.g. ethnicity, which is special category data under GDPR), then explicit consent would be required. Finally, we talked about the ethics of the study.
The thorny subject of ethics. Main consideration for applying is whether the research is for internal evaluation or to be published. Also need to consider things like power relationships (is academic analysing the data) & timing of request in relation to assessment #altc
— Dr Julie Voce (@julievoce) September 4, 2019
One of the most fascinating talks on learning analytics came from Martin Lynch from the University of South Wales, who presented their research into the effectiveness of the Jisc Predictor tool. Their learning analytics system captures data from a number of sources, including VLE activity, book borrowing, attendance, lecture recording and electronic reading list access.
Hearing about the use of #learninganalytics at @UniSouthWales. Good to see a number of data sources being included in their system. #altc pic.twitter.com/gLgqnP6Uz0
— Dr Julie Voce (@julievoce) September 3, 2019
As part of testing the Jisc Predictor tool, they ran weekly predictions for 61 courses and compared them with actual results to determine the overall precision of the predictions and the points in the academic year at which they are most and least accurate. More than half of the students who were actually at risk were predicted to be at risk at least once during the year, and the accuracy of the predictions improved towards the end of the academic year, which isn’t great for early identification of at-risk students. Accuracy also varied by course, with one course showing almost all students at risk for the majority of the year. The variation was attributed to the richness of the data sources available for each course, especially attendance monitoring.
Interesting analysis of the @Jisc #learninganalytics Predictor tool from @UniSouthWales. The predictor performed well for courses with rich data sources (e.g. blanket attendance monitoring), but not for every course. Also predictions weren't useful until later in the year. #altc pic.twitter.com/uWJb6RJLzA
— Dr Julie Voce (@julievoce) September 3, 2019
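To make the evaluation concrete, here is a rough sketch of how weekly at-risk flags could be scored against actual end-of-year outcomes. The data structures, week numbers and student IDs are assumptions for the sake of illustration, not the Jisc Predictor’s real output format:

```python
# Illustrative scoring of weekly at-risk predictions against final outcomes.
# The shapes and values below are invented; they are not the Jisc tool's output.

# weekly_flags[week] = set of student IDs flagged as at risk in that week's run
weekly_flags = {
    1: {"a", "b"},
    12: {"a", "c", "d"},
    24: {"a", "c", "e", "f"},
}
actually_at_risk = {"a", "c", "e", "g"}  # students who struggled per actual results

# Recall over the whole year: was each at-risk student flagged at least once?
flagged_ever = set().union(*weekly_flags.values())
recall = len(flagged_ever & actually_at_risk) / len(actually_at_risk)
print(f"Year-long recall: {recall:.0%}")  # 75% here, i.e. 'more than half'

# Precision per weekly run: of those flagged, how many were genuinely at risk?
for week, flags in sorted(weekly_flags.items()):
    precision = len(flags & actually_at_risk) / len(flags)
    print(f"Week {week:2d}: precision {precision:.0%}")
```

In this toy data the per-week precision improves from 50% in week 1 to 75% in week 24, mirroring the pattern reported at South Wales, where predictions only became reliable late in the year.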
Finally, Isobel Gowers from Anglia Ruskin University reported on an analysis of data from four modules using Talis Elevate, combined with data from the Canvas VLE, attendance monitoring and library access. The analysis found that students with high attendance accessed a higher percentage of resources, and that resource access was also linked to attainment. Overall, the number of resources accessed was lower than expected, and the time spent accessing resources on mobile devices was lower than on other devices.
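As a hedged sketch of the kind of join such an analysis implies, the snippet below correlates attendance with the percentage of resources accessed. The numbers are made up, and the real study combined Talis Elevate, Canvas, attendance and library data rather than this toy table:

```python
# Made-up attendance and resource-access figures for one module; the real
# analysis drew on Talis Elevate, Canvas, attendance and library data.

students = {
    # id: (attendance %, % of module resources accessed)
    "s1": (95, 80),
    "s2": (88, 72),
    "s3": (60, 35),
    "s4": (45, 20),
    "s5": (75, 50),
}

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

attendance, resources = zip(*students.values())
print(f"Attendance vs resource access: r = {pearson(attendance, resources):.2f}")
```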
Overall, the conference was helpful in consolidating my knowledge of learning analytics and a useful reminder of the perils of dealing with data. It also provided some inspiration for phase 2 of City’s Learning Analytics project, especially with regard to new data sources and the value of predictive analytics.