Posted by: Associate Professor Andrea Chester, Deputy Pro Vice Chancellor, Learning and Teaching, Design and Social Context, RMIT University.
As we move towards the end of semester we begin the process of collecting student feedback via the Course Experience Survey (CES). Student feedback on teaching is a complex topic and it typically raises a range of issues for academics.
Get any group of teaching staff together to discuss student feedback and you are guaranteed a lively discussion. In addition to the many hours clocked up in our staff rooms on this topic, it has generated thousands of articles examining the validity of student evaluation tools; the best time in the semester to collect feedback; how to most effectively close the feedback loop; and how to communicate with students about changes made as a result of their feedback.
Previous tomtom posts like this one and this one have effectively captured the ups and downs of the process and both make mention of the importance of putting the CES in context for students. The phenomenon of “survey fatigue” too (as we know from our own lives) is a risk in any drive to increase response rates, particularly as we move to online administration of the survey.
There is one issue, however, on which there is widespread agreement: student feedback is only one source of information available to us about our courses and our teaching. Triangulation is crucial. This means complementing student feedback with information from:
- assessment tasks, giving due consideration to the learning your students demonstrate
- peer observation, such as via Peer Partnerships, in which you invite colleagues to experience your teaching and provide feedback
- your own reflections on what seems to work, what doesn't, and why.
The CES can provide us with useful information, but we do need to remember what it measures, namely student experience. In his useful summary of research on student evaluations, Terry Doyle (2004) reminds us that while student feedback can provide valuable information, there are a number of aspects about which students are not well qualified to provide feedback, including:
- if the teaching methods used were appropriate for the course
- if the content covered was appropriate for the course
- if the content covered was up-to-date
- if the assignments were appropriate for aiding student learning
- if what they learned has real world application
- if what they learned will help them in future classes
- if the type of assistance, help or support given to students was appropriate to the learning goals of the class
- if the difficulty of the course material was appropriate.
Doyle's list, I think, also provides a structure for a teacher or lecturer to speak to towards the end of his or her course. A quick reminder about each of the elements above would also be an appropriate introduction for students before they complete their survey.
Before making changes in response to student feedback, we need to be confident in the validity of the data provided, and this brings us to response rates. This semester the Survey Services Group has developed a reliability band calculator. During the administration period of the survey (May 6 – June 2) you will be able to check how your own response rates are tracking against the reliability bands (good, sufficient and insufficient). You can check the response rates by program and school here (RMIT staff login required). Contact your L&T group if you'd like to use a short presentation, designed by the Survey Centre to be displayed in class, so that students can follow the links and complete any outstanding surveys.
The RMIT Academic Expectations have set expected and aspirational targets for the Good Teaching Scale, and in the coming years there will be more pressure on academics to provide reliable snapshots of the student perspective on their teaching. Of course, many academics have long used the surveys as a tool for self-reflection.
I’m confident that we can continue a culture at RMIT that puts an appropriate emphasis on major surveys like the CES as one way in which we identify both evidence of excellence and areas for improvement.
- Read more about Terry Doyle’s research into surveys and teacher effectiveness at his blog Learner Centered Teaching.
- For more on the CES, read this FAQ published by the Survey Services Centre.
Share your thoughts about the CES in the comments section below!