Apr 2016
INTED 2016 Conference (International Technology, Education and Development) Monday 7th March and Tuesday 8th March
Competence evaluation parallel session
Jennifer Kidd started the session with a paper on the effects of peer review on students' learning: a comparison of positive and negative feedback. The purpose of peer review is formative (to provide feedback) and summative (to give a grade). You do, however, need to teach students to give feedback as well as to receive it. Formative feedback is effective and promotes learning. The study focused on formative feedback and asked: what prompts students to revise their work? When students submitted the final version of their work, they were asked to respond to some questions, which included:
What changes did you make?
What prompted the changes?
If you had more time what other changes would you make?
What would motivate you to make the changes?
The logistics of the feedback were complex, spanning online support, face-to-face sessions, computer-supported review and pen and paper. There was some ranking and rating, and there were feedback checklists. The tools used included email, Track Changes, Blackboard and Google Docs. The study gathered both negative and positive feedback. Negative feedback might prompt revision, but it might also lead to students ignoring the feedback. Positive feedback was more effective when given to junior students, and students sought feedback from those they knew better. The study took place on an undergraduate programme; students were given directions as reviewers and asked to focus on specific parts of essays. Data collection involved comparing drafts with final essays. A follow-up survey found that 70-80% of students reported making changes, most of them in response to negative comments. The challenges were how to measure the change and how to define what prompted the response.
Julia Morris discussed teaching students to give and receive: improving disciplinary writing through peer review. This was a snapshot of a one-year project, then at its six-month point, involving education, biology, engineering, special education and English. The focus was on improving undergraduate writing, and a faculty-specific assessment was used with guidelines and a rubric. Students submitted drafts, received feedback and then submitted the final version. There was online reflection and an anonymous survey after every round of peer review. The technologies used had some issues, but Google Docs did allow colour coding in the peer review process. Expertiza was also used as a peer review tool.
Results showed that students valued specific feedback and valued both giving and receiving it, though they sometimes found it uncomfortable and sometimes received contradictory comments. The first round of reviews was the most helpful. The teachers found the approach worked well and could see how both the quality of the writing and the grades improved. Students met submission deadlines and made revisions. Negative comments concerned the technology, the fact that some cheating had occurred (some reviewers completed others' reviews too) and that some reviewers were not critical enough.
Miguel then spoke about contemporary-issue knowledge outcome assessment in first-year degree students. This focused on an accredited outcome-based programme with 13 outcomes for university programmes. There were 120 students divided into groups of 4-6, and each group chose a topic that they worked on together in Google Docs. All groups provided an outline presentation. This was assessed using a rubric, and assessing as a team worked well. From 116 students, the groups produced 24 screencasts on their topics.
Evaluation of interdisciplinary projects in a pre-primary education degree was presented by Omar. This focused on first-year students in natural sciences and maths. The subjects were combined and students had to solve a problem. The main aim was to get the students working together, so 45 students were divided into groups of 4-5. They had objectives to achieve, and teachers from both subjects worked with them in class. The assessment was undertaken by the teachers and there were 6 phases: once a project was chosen, students had to design the project plan and structure it; they then had competencies to achieve; and then they were assessed. Two hours a week were allocated to work with the students, and there were 3 control and monitoring sessions. There was self- and peer assessment. In interviews, students said they liked the self-assessment, the expert assessment and the teachers' rubrics.
Tanju discussed freshman communication students' development of lexical competence. Students undertook projects to engage them in reading and research, using book chapters and seminar texts, but there was often a lack of focus by students on lexis. In academic texts, high-frequency words (the top 2,000) typically make up 87% of the running words, academic vocabulary 8%, technical vocabulary 3% and low-frequency words 2%. The teachers wanted to check the seminar texts, so some were analysed; students need to be exposed to a word 8-12 times to learn it. Students had to actively engage with each word, so instructional activities were designed to support this. There was self-assessment with the students and a self-evaluation survey. The students were given a handout listing the words in their seminar texts, and a vocabulary profiler was used as a research tool. The evaluation reviewed reflective writing papers: the experimental group used more academic words than the control group and made fewer spelling mistakes. Quizzes were also used. The students liked the online tools least, and liked the quizzes, games and writing exercises more.
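The vocabulary profiling described in this talk (classifying the words of a text into frequency bands and reporting each band's coverage) can be sketched as follows. This is a minimal illustration, not the profiler used in the study; the `HIGH_FREQ` and `ACADEMIC` word sets are tiny placeholders standing in for real lists such as a top-2,000 frequency list and an academic word list.

```python
import re
from collections import Counter

def profile_text(text, high_freq, academic):
    """Classify each token into a frequency band and return the
    percentage of running words each band covers.
    high_freq and academic are sets of words (placeholders here for
    real lists such as a top-2,000 list and an academic word list)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for token in tokens:
        if token in high_freq:
            counts["high_frequency"] += 1
        elif token in academic:
            counts["academic"] += 1
        else:
            counts["other"] += 1
    total = len(tokens)
    return {band: round(100 * n / total, 1) for band, n in counts.items()}

# Tiny placeholder word lists, for illustration only.
HIGH_FREQ = {"the", "students", "of", "a", "in", "to", "and", "read"}
ACADEMIC = {"analyse", "data", "research"}

print(profile_text("The students analyse the data in a research seminar",
                   HIGH_FREQ, ACADEMIC))
```

A real profiler would load published word lists and handle lemmatisation (so that "analyses" counts as "analyse"), but the band-counting logic is essentially this.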