(Ln(x))3

The everyday blog of Richard Bartle.


6:54pm on Monday, 10th December, 2018:

Marx Marks

Anecdote

This week is "interim report oral" week for final-year project students. Each one gives a 30-minute presentation to their second assessor, who marks it using a spreadsheet. I did two today and have a third on Wednesday; some lecturers have 8 or 9, so I have it easier than most.

The spreadsheet is an innovation. We have 8 dimensions of various weights along which to grade the students. We tick a checkbox at the mark we think they should have for each dimension, press an update button, and all kinds of magic occurs. This is good, because it means less work for markers (apart from the 30 minutes spent listening to each student's presentation). However, it's also bad.

The first reason it's bad is because the minimum mark in any dimension is 35%. If a student has done nothing in one of the dimensions then they get 35% for it. The students are supposed to have used GitLab (basically as a version-control repository), but if they haven't then OK, 35% it is. Bear in mind that the pass mark is 40%.

The second reason it's bad is because the maximum mark in any dimension is 85%. Genius students or ones with a crazy work ethic can't get more than this in the spreadsheet. We're pretty well marking the students 0-5, then multiplying the result by 10 and adding it to 35 to get the final percentage. I did question this practice before the process began, and as a result it is possible to ignore the spreadsheet and give a different mark — but only if you go and explain to the module supervisor (who is also the Head of School) why you want to do it. I'm fairly sure he'd let a mark of 0% for no work go through, but the mere fact we have to go and explain it will be enough to put a lot of us off.
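The effective scale described above can be sketched as a quick check. This is a reconstruction of the arithmetic, not the actual spreadsheet logic, and the function name is made up:

```python
# Sketch of the effective marking scale described above:
# each dimension is scored 0-5, then scaled into a 35-85% band.

def dimension_percentage(score):
    """Map a 0-5 dimension score to the spreadsheet's percentage."""
    assert 0 <= score <= 5, "spreadsheet only offers six checkboxes"
    return 35 + score * 10

# A student who did nothing in a dimension still gets 35%
# (against a pass mark of 40%); a perfect score caps at 85%.
print(dimension_percentage(0))  # 35
print(dimension_percentage(5))  # 85
```

Note how the bottom of the band sits only 5% below the pass mark, which is the inflation risk in a nutshell.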

The third reason it's bad is because of the feedback. So, we're constantly getting told we have to write feedback for the students. This takes time and some lecturers aren't exactly good at it. Part of the reason for using the spreadsheet is that it automatically generates feedback. It may be pretty anodyne, but it's better than nothing (or the useless "good"). The way it works is to take your 0-5 mark for each dimension, look it up in a table, then add it to a list of comments. These are then collated and written out as a Word file. The idea is fine if you think that students will appreciate individualised feedback that's the same for everyone who got the same mark in that dimension; if you don't, you can always overwrite it with your own feedback (as if any of us are going to do that). This isn't the problem, though.

To show the problem, look at this text that the student feedback will contain if the student receives a mark of 45% in the General Use of GitLab category:

"There is some evidence that work on the techncial documentaiton has started but no issues raised to link to this work."

That's right: two spelling mistakes ("techncial" and "documentaiton"). The template is riddled with these. If we want to give students the impression that they are receiving personalised feedback, duplicating spelling mistakes is not the way to do it.

Overall, I laud this exercise as a pilot project (which is what it is). There are bound to be some teething troubles with it, and the overall aims are good: to reduce lecturer time; to ensure consistency of marking; to give students feedback so they rate us higher for having given them feedback. It's just that the limited marking range risks inflating the marks, and the typo-ridden feedback template is working against it at the moment.

Given we have nearly 300 students to mark, though, I expect the wrinkles will be ironed out next year and we'll all be using it again.

Maybe the bit where we count the number of Jira issues raised and award up to 5% based on that will go, though.





Copyright © 2018 Richard Bartle (richard@mud.co.uk).