One of the activities this week was to watch some videos (without transcripts! Shame UNSW, that’s hardly meeting minimum standards for accessibility) by Richard Buckland on assessment for learning. We had to choose one to comment on, and I chose “gamification”.
The videos are here: https://teaching.unsw.edu.au/designing-assessment-learning-0
There were three things that I thought were really key, and the first two didn’t initially strike me as gamifying at all. These were using coloured progress bars and “karma” points.
Progress bars are standard in just about any MOOC environment – it’s that bar, usually at the top, which gradually gets coloured in or changes colour as you progress. The one in FutureLearn is truly craptacular – it’s always wrong, and until you realise that, it’s a great source of frustration. I’ve had it stuck on “8 steps remaining” for six steps, and then it suddenly changes – sometimes up, and sometimes down. I now ignore it. But what Richard Buckland uses is a set of progress bars, indicating things like how many activities have been attempted, how many were done correctly, and the ratio of correct to total submissions. And apparently students were motivated by this to get completely green bars. I can see that working – my kids use “Mathletics” and “Reading Eggs”, which have similar bars that the kids like to see changing.
The “karma” points are cool – although I wonder how much they can be “gamed” by just getting all your friends to “like” you. The intention is that when you do something helpful for another student, they click a “like” button or similar, and you earn “karma”, which your icon carries around with it so people can see how nice you are. I’d love to see some research on how this affects students’ online behaviour and the student culture. Potentially it could be very valuable if it reduces competitive interactions and encourages collaboration – and there’s a big literature on how effective cooperative learning is.
So are progress bars and karma points gamification? I’m still not sure, but I guess if the students treat them as such, then they are…
Finally, the obvious gamification: a puzzle hidden within the course, with a fake student as guide to a treasure hunt.
I love this idea – clues and “Easter eggs” hidden amongst the activities. That’s the sort of thing that would be really fun and add value for keen students, but because it isn’t worth any marks, it still allows students with a more strategic approach to just get on with it.
Would I have engaged with that as a student?
Probably not – I was mostly pretty strategic because I had a job, rent to pay, and, for part of my undergrad, an injured husband to care for. Would I do it now if it were built into FULT? No – I have a job, a mortgage to pay and 3 kids to look after. I just don’t have time.
The big questions that Buckland didn’t address, though, were:
1. What impact did it have on student learning? Show me the data. And if you didn’t evaluate it, why not? Sure it’s cool, but as a scientist I know that looking cool isn’t enough – it needs evidence to support it. Oh, and you need baseline data to compare it to.
2. What impact did it have on student engagement? How much time did students spend on this, and which students spent time? Did it improve retention rates? Again, you need baseline data.
3. What does the cost/benefit look like? You need to evaluate (see questions 1 and 2) before you can answer this. How much of Buckland’s time (at an hourly rate of around $150/hr), plus education designers’ time, etc., did this cost? And was it worth it for the learning gains?
And these are not just questions for Richard Buckland, but for anyone doing education innovation. If you’re not actually evaluating, including comparing against baseline data, then you aren’t taking a rigorous, evidence-based approach. And as academics – people who are supposed to be critical thinkers – that’s just not good enough. We wouldn’t accept that in our research, so why do we think it’s okay not to use or gather evidence in education?