This is to address task 3b:
Reflect on the three prompts below:
- discuss how your learnings from the course relate to your context and practice
- synthesise key strategies you have learned in the course to enhance your practice
- identify how evaluation (e.g. your mini-evaluation task) may contribute to enhancing learning and teaching
My learnings from the course:
There were many valuable learnings from the course, but in 600 words I'll limit this discussion to just one. I think we learn most from our own mistakes, and we also learn a lot from things that don't work well for us in general. So I'm going to draw on a "what not to do" example.
First, the assessment was just confusing and vague in places. The basic vocabulary needs to be made clear – does "course" refer to the FutureLearn course on evaluation, or to the entire FULT course? What are we supposed to evaluate? What if we're not teaching? More broadly, what is assessable and what is not? What if I submit the ePortfolio but don't do anything else? Are the ePortfolio sections weighted in some way? Will we get a mark, or just a pass/fail? Confusion around the assessment has come up several times in the face-to-face sessions in Canberra, and all these questions and more have been asked, and most of them I couldn't answer.
Plus, this semester, for the first time, one of my GTTP students has found my GTTP assessment regime confusing.
So a key learning from all this for me is that assessment, even when dealing with mature-age and postgraduate adult learners, needs to be absolutely clear and explicit.
For my practice – next year I will make a simple assessment table for the GTTP giving all the information in short, simple words. I'll also include a separate assessment box at the top of the Moodle site.
Synthesising key strategies:
The key strategies for me are:
- making sure my courses are well aligned (Biggs), which means LOs drive assessment, which then drive learning.
- being reflective and evaluating my teaching using student feedback, peer review and learning measures. Note that I mean formative peer review, not summative. The new summative system is not intended to (and won't) improve teaching. I like to use action learning cycles or a DTE (design–test–evaluate) approach rather than the more complicated ones (e.g. the Vignetti one) presented in FULT. I prefer a more engineering approach – one that focusses on outcomes rather than feelings. Not that there isn't a place for both – but feelings are very transient and inconsistent and influenced by external factors.
- using teaching strategies that work, i.e. that are supported by evidence – my own or from the literature (noting the issue of publication bias).
- using teaching and assessment strategies that push students towards deeper learning, critical thinking, and taking responsibility for their own learning.
- using more blended approaches, but only those that have supporting evidence for their effectiveness, not just the current bandwagon.
Actually, these are all things I try to do anyway (I've been doing education research for 15 years now, so it would be sad if I didn't), except for the blended learning. And I'm mainly doing that because it's an institutional imperative. But if I wasn't already doing these things, then I hope I would have learnt about them from FULT, because they were all covered.
Identify how evaluation (e.g. your mini-evaluation task) may contribute to enhancing learning and teaching.
Ummm… by improving the thing I evaluated? I feel like I’m missing something in this question – is this not really obvious?
I evaluate how effective my "observation exercise" assessment in the GTTP is, and if it's not as effective as I want it to be, then I change it. Ideally, I change it based on student feedback, student learning outcomes and information in the education research literature.
Student learning outcomes are the most important of these to me. I think student feedback, even from post-grads, needs to be taken with a grain of salt.