Happy New Year!
Here is something I was thinking about while completing a very dull but worthy course in diversity awareness. What's the best way to get representative feedback from students on your teaching?
As far as I can see, there are three common ways of doing this in HE.
1. Module feedback form at end of semester. Usually designed by an idiot (other than oneself), these can often be biased, personal and insulting. And the completion rate is low because everyone knows students have dug themselves into burrows by week 12 to weep or study. Plus, even if you work up the courage to look at them, you've forgotten what they said by the time you teach the course the next year.
2. Staff student liaison committees. These rely on class reps actually doing their jobs and coming to meetings having first spoken to their classmates. More on this later. They also rely on enough staff being able to attend the meeting in spite of the six other meetings scheduled at the same time.
3. Wait until students start squawking very loudly, then call an emergency meeting to appease them before the next National Student Survey/professional accreditation starts. By that time they already have grievances which they have been nursing to keep warm, and it may be too late to do anything about them.
My issue here is really with the representativeness of the feedback. I am less and less convinced that methods 2 or 3 actually give feedback which represents the concerns of the whole student body, so you might spend time and energy trying to fix things which concern only a few people. This may be an issue with how class reps are trained. I worry that class rep feedback is filtered through the prism of the class rep's own views, and what we hear is their perception of what might be happening with other students. Do they talk to all students in their cohort? Or just their friends? How do they phrase their questions? How do they guard against bias? Guess what? There is an entire body of research and practical knowledge about running effective, unbiased consultations with people. And do we teach it to class reps? I certainly hope so, but you never know...
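To put a rough number on that representativeness worry, here is a minimal sketch in Python. The cohort size and the sample sizes are invented for illustration; it just shows the worst-case margin of error when a rep has effectively polled only a handful of people.

```python
import math

def margin_of_error(sample_size, population_size, z=1.96):
    """Worst-case 95% margin of error for a proportion (p = 0.5),
    with a finite population correction for a small cohort."""
    p = 0.5  # worst case: maximum variance in the responses
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    # Finite population correction: the error shrinks as the sample
    # approaches the whole cohort.
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * standard_error * fpc

cohort = 120  # invented cohort size
for consulted in (5, 30, 90):
    print(f"{consulted:>3} of {cohort} consulted: "
          f"±{margin_of_error(consulted, cohort):.0%}")
```

On those made-up numbers, a rep who has in effect consulted five friends out of 120 carries a margin of error of roughly ±43 percentage points on any "most students think X" claim; it tells you almost nothing about the other 115.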
As an example, I got feedback on my Second Life module from the class rep recently. He said that "second life is a good breaker for new c.s/i.s who may have never done programming before, but i can understand why some of the more experienced students seem to have a problem with it [object to it]". His class rep feedback in general is pretty good, but one is left wondering: how many of the more experienced students have a problem? What level of experience? What is the nature of their problem? I have a more nuanced view of this, given that I read student learning logs and chat to them in the labs as well as mark their work. I can tell you that it is not a matter of experience per se (see previous post on twitter, SL etc), but a more complex mixture of motivation and attitude to self-directed learning which makes the difference. I have made a study of this issue over a couple of years, analysed a lot of data and read a lot of background literature. So in this case, I know more about it than any individual student.
Student centredness is a Good Thing, as we know*. But the students don't have the whole picture. Often the staff have more of an overview and, let's face it, more experience. We have the class averages, marks for individual students and our professional judgement based on everyday teaching interactions with all the students. We have also taught the modules before and know how the current cohort compares to previous ones. The class rep's comments add to this information, but if you are relying only on class rep feedback to inform your teaching, you're flying blind. This is why I think it's a good idea to get regular feedback from students as part of your teaching, all the way through the module. If you can do it individually, all the better. For example, my colleague uses e-voting handsets to find out from students after particular classes what concepts they don't understand and what they would like him to spend more time on next lesson. I try to have a class discussion mid-semester to talk about concerns people have with the module, so I have time to correct them before the module ends.
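I don't know what software my colleague's handsets actually run, but the tallying side is simple enough to sketch. A minimal example, with invented topic names and invented votes, of ranking the concepts a class flags as unclear:

```python
from collections import Counter

# Invented responses: each student picks the concept they found
# least clear in today's class.
votes = [
    "recursion", "pointers", "recursion", "big-O notation",
    "recursion", "pointers", "big-O notation", "recursion",
]

tally = Counter(votes)
print("Concepts to revisit next lesson, most-flagged first:")
for concept, count in tally.most_common():
    print(f"  {concept}: {count} of {len(votes)} students")
```

The point is less the code than the habit: a thirty-second poll after each class gives you a signal from every student, not just the vocal ones.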
If we're going to have class reps and student feedback forms, we need to improve the quality of the data we get. It's too important to rely on anecdote and questionnaires with dreadful response rates.
* Actually, not everyone agrees with that, oddly enough. My colleague told me about a Danish professor of psychology who thought it was a bad idea to consult students about teaching, because the whole point of the teacher/student relationship is that one party knows more than the other. It is not intended to be an equal relationship. It's much like the debate over child-centred/user-centred design, and my position is the same in both cases. Sure, consult stakeholders and listen with respect. But equally, don't devalue the professional opinions of those with the expertise and experience to make informed judgements based on more complete data.
The feedback cycle, surely, should be something along the lines of:
- staff gather student feedback on module (or whatever)
- staff mull over feedback, have fifty-three meetings about it, decide what is to be done
- staff email students telling them what will be done
At the moment this cycle appears to omit the third point. It'd be nice to be told, every so often...
"Dear students,
With reference to the module feedback form for $any_module, I take on board your comments about lack of tutorials (insert here excuse about money). I also take on board your criticism of $lecturer and we are going to improve this by $how".
Perhaps feedback does get back to students, but it seems to take such a long time. Most of what I've found out has come up incidentally in conversation with staff, rather than being communicated officially.
Posted by: Tom | January 12, 2010 at 10:22 PM