FULT ePortfolio Blog #14: Reflection for action

This is to address task 3b:

Reflect on the three prompts below:

  • discuss how your learnings from the course relate to your context and practice
  • synthesise key strategies you have learned in the course to enhance your practice
  • identify how evaluation (e.g. your mini-evaluation task) may contribute to enhancing learning and teaching

 

My learnings from the course:

There were many learnings, but in 600 words I’ll limit this discussion to just one. There were many valuable aspects to the course, but I think we learn most from our own mistakes, and from things that don’t work well for us. So I’m going to draw on a “what not to do” example.

First, the assessment was just confusing and vague in places. The basic vocabulary needs to be made clear – does “course” refer to the FutureLearn course on evaluation, or to the entire FULT course? What are we supposed to evaluate? What if we’re not teaching? More broadly, what is assessable and what is not? What if I submit the ePortfolio but don’t do anything else? Are the ePortfolio sections weighted in some way? Will we get a mark, or just a pass/fail? Confusion around the assessment has come up several times in the face-to-face sessions in Canberra; all these questions and more have been asked, and most of them I couldn’t answer.

Plus, this semester, for the first time, one of my GTTP students has found my GTTP assessment regime confusing.

So a key learning from all this for me is that assessment, even when dealing with mature-age and postgraduate adult learners, needs to be absolutely clear and explicit.

For my practice – next year I will make a simple assessment table for the GTTP giving all the information in short, simple words. I’ll also include a separate assessment box at the top of the Moodle site.

 

Synthesising key strategies:

The key strategies for me are:

Course level

  • making sure my courses are well aligned (Biggs), which means LOs drive assessment, which then drives learning.
  • being reflective and evaluating my teaching using student feedback, peer review and learning measures. Note that I mean formative peer review, not summative. The new summative system is not intended to (and won’t) improve teaching. I like to use action learning cycles or a DTE (design-test-evaluate) approach rather than the more complicated ones (e.g. the Vignetti one) presented in FULT. I prefer a more engineering approach – one that focusses on outcomes rather than feelings. Not that there isn’t a place for both – but feelings are very transient and inconsistent, and influenced by external factors.

Class level

  • using teaching strategies that work, i.e. that are supported by evidence – my own or from the literature (noting the issue of publication bias).
  • using teaching and assessment strategies that push students towards deeper learning, critical thinking, and taking responsibility for their own learning.
  • using more blended approaches, but only those that have supporting evidence for their effectiveness, not just the current bandwagon.

Actually, these are all things I try to do anyway (I’ve been doing education research for 15 years now, so it would be sad if I didn’t), except for the blended learning. And I’m mainly doing that because it’s an institutional imperative. But if I wasn’t already doing these things, then I hope I would have learnt about them from FULT, because they were all covered.

 

Identify how evaluation (e.g. your mini-evaluation task) may contribute to enhancing learning and teaching.

Ummm… by improving the thing I evaluated? I feel like I’m missing something in this question – is this not really obvious?

I evaluate how effective my “observation exercise” assessment in the GTTP is, and if it’s not as effective as I want it to be then I change it… Ideally, I change it based on student feedback, student learning outcomes and information in the education research literature.

Student learning outcomes are the most important of these to me. I think student feedback, even from post-grads, needs to be taken with a grain of salt.

 


FULT ePortfolio Blog #13: Evaluation

The third module of FULT is on evaluation. The plan (I think) is to run a small evaluation on some part of our teaching that can be written up for the portfolio. Hopefully that will be clarified tomorrow in the F2F session.

In the meantime, these are the questions we are supposed to ask ourselves:

  1. What is the purpose of evaluation (what are the key questions)?
  2. Who are the stakeholders?
  3. What are the most appropriate sources of evidence?
  4. What is the most appropriate methodology to answer your questions?

 

As the only teaching I’m doing this semester is teacher training, not engineering, I’m going to choose a task from the Graduate Teaching Training Program – the observation task that is one of the main assessment tasks of the course. I talked about this task in a previous blog post.

So, to start answering the questions:

  1. What is the purpose of evaluation (what are the key questions)?

To determine how effective the observation activity is as a learning task for the GTTP students – do they learn something new from it? Are they able to link what they learn to the GTTP course content? Does the experience influence their own practice?

 

2. Who are the stakeholders?

Me, the students, and UNSW Canberra.

 

3. What are the most appropriate sources of evidence?

The GTTP students themselves – their observation reflections, their comments and informal discussion of the task.

The people they observed.

Their own students and supervisors later on – probably very hard to get this information, as it requires tracking them later and they mainly move on to other places.

 

4. What is the most appropriate methodology to answer your questions?

I don’t know. I would guess a qualitative analysis of their reflections, maybe some interviews. Perhaps I will get some ideas in this FULT course!

 


FULT ePortfolio Blog #12: Assessment task

This is portfolio task 2c:

Review an assessment task for a course you have been involved with supporting, designing or evaluating. This assessment can be a written, oral, practical or a group task.

In your submission of approximately 300 words, you should:

  • briefly explain the assessment task for review
  • evaluate the assessment considering how the task aligns with the course learning outcomes, how feedback will be provided to students and whether the SOLO taxonomy influenced your design
  • briefly discuss the relevance of the assessment rubric, if one exists.

 

The task:

Class observation exercise in the Graduate Teaching Training program.

This is one of three assessment tasks for the GTTP – the other two are a lesson plan for a 50-minute first-year class, and then presentation of a 10-minute segment of that lesson plan to the group.

The task potentially aligns with the following program learning outcomes, depending on what they observe:

Understand your own teaching and learning styles and preferences and how these influence your approach to teaching and your classroom practice

Develop skills in effectively facilitating small group work

Increase your confidence in teaching

And clearly aligns with:

Become a more reflective teacher, helping you to continue improving your teaching after the GTTP

Students submit a reflective summary of their observation online. Some years I’ve done this as a forum posting, so they can all read and comment on each other’s, which is good for demonstrating the diversity of teaching on campus. Other years I’ve used the Moodle assignment box to give them the experience of online assignment submission and feedback. This way makes it quicker for me to give feedback because of the built-in tools. This year the students said they preferred the forum option, so we’re going with that.

This is the marking rubric that I use:

A – Outstanding achievement

The student has demonstrated extensive knowledge and understanding of the teaching processes used, and clearly and explicitly related them to GTTP content and processes. They have demonstrated an ability to think critically and evaluate the effectiveness of the processes they observed. The student has demonstrated the ability to reflect on their own learning and make links between their experience, the GTTP class content and processes, and the observed class. They have identified implications for their own teaching practice.

 

B – High achievement

The student has demonstrated thorough knowledge and understanding of the teaching processes used, and explicitly related them to GTTP content. The student has made some attempt to evaluate the effectiveness of the class that they observed. They have demonstrated an ability to reflect on their own learning and make links between their experience, the GTTP class content and processes, and the observed class. They have identified implications for their own teaching practice.

 

C – Sound achievement

The student has demonstrated an adequate knowledge and understanding of the teaching processes used, and explicitly related them to GTTP content. The student has made some attempt to evaluate the effectiveness of the class that they observed and to reflect on their own learning and make links between their experience, the GTTP class content and processes, and the observed class.

 

D – Basic achievement

The student has demonstrated a basic knowledge and understanding of the teaching processes used. The student has made some attempt to evaluate the effectiveness of the class that they observed and to reflect on their own learning from it.

 

E – Limited achievement

The student has observed a class and reported on what happened in that class.

 

This rubric is based on the NSW BOSTES rubric for assessment in NSW schools. I chose that as a base because it uses SOLO as a basis, as well as Bloom’s taxonomy, as can be seen from the verb choices. Another reason for choosing it is that it is what many of our undergraduates are already used to when they come into uni.

All rubrics are limited, and I do find myself sometimes wanting to give a higher grade than I can strictly justify, or more occasionally a lower one. So I usually allow one of the criteria within a grade to  be missed without moving the grade down, especially at the top end.

It’s only in the last couple of years that I’ve started using a rubric for this. Previously I just asked them to write a reflection on their observation and share it. But I wasn’t happy with how simplistic many of the reflections were, so I’ve formalised it with a rubric. In general the program has become increasingly formal over the last five years, from the fairly free-form teaching support group it used to be at ANU. The quality of the submissions is much higher now, and as a result I think students are getting more out of the program.

 


FULT ePortfolio Blog #11: gamification

One of the activities this week was to watch some videos (without transcripts! Shame UNSW, that’s hardly meeting minimum standards for accessibility) by Richard Buckland on assessment for learning. We had to choose one to comment on, and I chose “gamification”.

The videos are here:  https://teaching.unsw.edu.au/designing-assessment-learning-0

There were three things that I thought were really key, and the first two didn’t initially strike me as gamifying at all. These were using coloured progress bars and “karma” points.

Progress bars are standard in just about any MOOC environment – it’s that bar, usually at the top, which gradually gets coloured in or changes colour as you progress. The one in FutureLearn is truly craptacular – it’s always wrong, and until you realise that, it’s frustrating and a great source of annoyance. I’ve had it stuck on “8 steps remaining” for six steps, and then it suddenly changes – sometimes up, and sometimes down. I now ignore it. But what Richard Buckland uses is a set of progress bars, which indicate things like how many activities have been done and done correctly, and the ratio of correct to total submissions. And apparently students were motivated by this to get completely green bars. I can see that working – my kids use “Mathletics” and “Reading Eggs”, and they have similar bars that the kids like to see changing.
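
To make the arithmetic behind those bars concrete, here is a minimal sketch of how such statistics could be computed. This is purely my own illustration – not Buckland’s actual system – and the data structure and names are invented for the example.

```python
# Minimal sketch of the statistics a set of progress bars might display.
# Not Buckland's implementation; the data structure and names are invented.
from dataclasses import dataclass

@dataclass
class Submission:
    activity_id: str
    correct: bool

def progress_stats(submissions, total_activities):
    """Return (fraction attempted, fraction correct, correctness per submission)."""
    attempted = {s.activity_id for s in submissions}
    correct = {s.activity_id for s in submissions if s.correct}
    n = len(submissions)
    return (
        len(attempted) / total_activities,   # how much of the course is done
        len(correct) / total_activities,     # how much is done correctly
        len(correct) / n if n else 0.0,      # correctness relative to total submissions
    )

# Example: three submissions over a ten-activity course
subs = [Submission("a1", True), Submission("a2", False), Submission("a2", True)]
print(progress_stats(subs, total_activities=10))  # (0.2, 0.2, 0.666...)
```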

The “karma” points idea is cool – although I wonder how much it can be “gamed” by just getting all your friends to “like” you. The intention is that when you do something helpful for another student, they click a “like” button or similar, and you get “karma”, which your icon carries around with it so people can see how nice you are. I’d love to see some research on how this affects student online behaviour and the student culture. Potentially it could be very valuable if it reduces competitive interactions and encourages collaboration. And there’s a big literature around how effective cooperative learning is.

So are progress bars and karma points gamification? I’m still not sure, but I guess if the students treat them as such, then they are…

Finally, the obvious gamification: the puzzle hidden within the course, with a fake student acting as guide to a treasure hunt.

I love the idea of the puzzle hidden within the course, with clues and “Easter eggs” hidden amongst the activities. That’s the sort of thing that would be really fun and a value-add for keen students, but as it isn’t worth any marks it still allows students with a more strategic approach to just get on with it.

Would I have engaged with that as a student?

Probably not – I was mostly pretty strategic because I had a job, rent to pay, and for part of my undergrad an injured husband to care for. Would I do it now if it was built in to FULT? No – I have a job, a mortgage to pay and 3 kids to look after. I just don’t have time.

The big questions that Buckland didn’t address, though, were:

1. What impact did it have on student learning? Show me the data. And if you didn’t evaluate it, why not? Sure it’s cool, but as a scientist, looking cool isn’t enough – it needs to have evidence to support it. Oh, and you need baseline data to compare it to.

2. What impact did it have on student engagement? How much time did students spend on this, and which students spent time? Did it improve retention rates? Again, you need baseline data.

3. What does the cost/benefit look like? You need to evaluate (see questions 1 and 2) before you can answer this. How much of Buckland’s time (at an hourly rate of around $150/hr), plus education designers’ time, etc., did this cost? And was it worth it for the learning gains?

And these are not just questions for Richard Buckland, but for anyone doing education innovation. If you’re not actually evaluating, including doing comparisons to baseline data, then you aren’t taking a rigorous, evidence-based approach. And as academics, people who are supposed to be critical thinkers, that’s just not good enough – we wouldn’t accept that in our research, so why do we think it’s okay not to use or gather evidence in education?

 


FULT ePortfolio Blog #10: reflection on blended learning (Activity 2a)

The task:

Reflect on active blended learning drawing from the videos, course resources, your perspective, and discussions with your peers

Active blended learning aligns well with the idea of Universal Design for Learning, as long as it is well supported by face-to-face teaching (for those who don’t learn well online), and the online components are very carefully designed to ensure accessibility (catering to vision- and hearing-impaired students, etc.) and are not bandwidth-intensive or too time consuming (not everyone has broadband – e.g. me; we don’t even have mail delivery 50 km from Canberra – and not everyone wants to spend 6 hours of their weekend watching videos).

I can see that blended learning, particularly flipping, is the solution to boring, transmission based lectures with poor attendance. Blended learning also provides an alternative means of content delivery to students who cannot get to lectures. This is important for distance education, which is a growing industry. Postgraduate online courses are a significant income stream for UNSW Canberra.

Is there a strong driver, though, when you’re teaching students who are on campus, are required to be in class, are hard working and prefer face-to-face to online? In FULT, the statement was made that the pedagogy should drive the use of technology. This sounds like a no-brainer, yet we are constantly pushed to adopt technology when there is no evidence base in terms of learning outcomes to support its adoption. In fact, one of the stars of the videos we have been watching has research showing that her face-to-face courses give better learning gains on standard conceptual tests than the online versions.

The scientist in me says “show me the evidence that it works”, especially given how often it was stated in the videos that it takes a lot of time. The cynic in me asks “where is the money going?”. So much of what we are being encouraged to do relies on the use of, and payment to, commercial companies. A seriously huge flaw in the use of the FutureLearn platform for FULT is that we all lose access to each course two weeks after it finishes, and the courses are only a few weeks long. We effectively need to try to capture our own copy of everything before we get shut out. This makes reflection later on a bit hard… It doesn’t seem like a great improvement on lectures and a textbook to me.

 


FULT ePortfolio Blog #9: lesson plan for blended learning (activity 2b)

The task:

Propose an active blended learning plan, based on the principles of constructive alignment, which includes an aligned class learning outcome, activities, and assessment.

  • Learning Outcome.

Students will be able to correctly identify Newton’s third law force pairs for forces including gravity and contact forces.

  • Who are the learners?

First year undergraduate engineering students, in the Aero, Mech and Civil degree programs.

  • Pre Class Independent Activity

Students will read the relevant chapter of their textbook, and watch an existing online video on Newton’s laws, for example a Khan Academy video.

They will then do a short online quiz.

  • In-Class Collaborative Activity.

Students will work in small groups on a set of questions involving identifying Newton’s third law pairs and drawing free body diagrams. The questions will include a hands-on activity where students use a set of bathroom scales, with one student standing on them and other students pushing and pulling on them. The aim is to help them recognise that gravity and the normal force are not a Newton’s third law pair, a misconception held by around 90% of students entering engineering.
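
To spell out the physics this activity targets, here is a short worked example of my own (not part of the lesson materials) for a student standing on the scales:

```latex
% Worked example: a student of mass m standing on bathroom scales.
% Third-law pairs always act on two *different* bodies:
\begin{align*}
  \vec{F}_{\text{Earth on student (gravity)}} &= -\vec{F}_{\text{student on Earth (gravity)}}\\
  \vec{F}_{\text{scales on student (normal)}} &= -\vec{F}_{\text{student on scales (contact)}}
\end{align*}
% Gravity on the student and the normal force from the scales both act on the
% same body (the student), so they cannot be a third-law pair, even though they
% happen to be equal and opposite while the student stands still. That balance
% is Newton's second law with zero acceleration, not the third law.
```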

  • Post-Class Activity

Students will do an online quiz in which they identify force pairs. This includes a series of questions working from the common misconception that gravity and the normal force are a Newton’s third law pair on Earth, to the absurd proposition (typically accepted by around 20% of students pre-instruction) that the Moon experiences a normal (contact) force from the Earth.

  • Assessment

Formative assessment via quizzes will be used at three points:

  1. pre-instruction, at the beginning of semester
  2. post online instruction, after reading the text and watching the videos
  3. post face-to-face instruction, after the workshop.

Summative assessment of the same concept will be included on the next summative quiz and/or on the end of semester exam.

  • Evaluation / reflection

Three aspects will be evaluated:

  1. Learning gains from pre-instruction to post online instruction to post face-to-face instruction, using data from the quizzes. These give information on the immediate learning gains from each type of instruction, and the later summative assessment gives an indication of how well those gains are retained (see the sketch after this list for how the gains might be calculated).
  2. Student experience – minute papers or surveys. Identify whether the myExperience evaluation results will be negatively impacted – this is now an important metric at UNSW, so it cannot be ignored even when good learning gains can be demonstrated.
  3. Cost-benefit analysis: identify how effective each component of instruction was, compare it to previously used instruction, and decide whether to continue with each component.
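
As a rough indication of how I might turn the quiz data in point 1 into learning gains, here is a sketch. The numbers are invented, and the normalised gain used below is just one common measure from the physics education research literature, not necessarily the one I’ll end up using.

```python
# Sketch only: calculating learning gains from class-average quiz scores (%).
# The scores below are invented for illustration.

def normalised_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake-style normalised gain: the fraction of the available improvement achieved."""
    if max_score == pre:
        return 0.0  # no room left to improve
    return (post - pre) / (max_score - pre)

pre_instruction = 35.0      # start-of-semester quiz
post_online = 55.0          # after reading the text and watching the videos
post_face_to_face = 75.0    # after the in-class workshop

print("gain from the online component:", normalised_gain(pre_instruction, post_online))
print("gain from the face-to-face workshop:", normalised_gain(post_online, post_face_to_face))
print("overall gain:", normalised_gain(pre_instruction, post_face_to_face))
```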


FULT ePortfolio Blog #8: Learning Outcomes

The task is to rewrite an existing learning outcome using the “writing learning outcomes” algorithm.

I like the really simple algorithm for constructing learning outcomes. The idea of always making them testable is useful for ensuring that alignment with assessment happens too.

The NSW Board of Studies used to have a list of verbs for use in all assessment tasks, individual questions, etc. This was handy for teachers and textbook writers (like me) when constructing questions, e.g. “Calculate…” or “List…”, but it was very prescriptive and not extensive enough. For example, “Draw” wasn’t on it, which meant I couldn’t include an approved BOSTES verb in questions asking students to draw a diagram. It also limited the vocabulary that could be used – “find” wasn’t there either, so everything with a number turned into “calculate”, and asking for algebraic expressions became an exercise in verbal contortion.

So I’m thinking that maybe having such a simple prescriptive algorithm is also a bit simplistic – we can’t include “appreciate” or “develop confidence”, but only measurable things. We lose access to the affective domain.

Maybe that doesn’t matter, but it starts to feel a bit like a Turing test or the “Chinese room”. As long as you can act like you appreciate diversity or feel confident in your ability to teach, perhaps it doesn’t matter what’s actually going on in your head (as long as you keep it there). Actually, this is something I tell my kids when they get mouthy – if you keep it in your head, it won’t get you into trouble, but as soon as it comes out of your mouth I will deal with it.

But there’s a lovely quote, of questionable origin:

“Watch your thoughts, they become words;
watch your words, they become actions;
watch your actions, they become habits;
watch your habits, they become character;
watch your character, for it becomes your destiny.”

As anyone who has suffered from depression can tell you, watching your thoughts is really important. What goes on inside your head is important.

Old learning outcome from the Graduate Teaching Program:
Students will develop confidence in themselves as teachers.

New learning outcome:
Students will demonstrate confident body language and voice patterns in the classroom.

Or maybe we need to be able to have course aims as well as learning outcomes, and then I could keep developing confidence as an aim, but not have to turn it into a measurable outcome.
