Student feedback: What it can and can’t tell us

Posted by: Associate Professor Andrea Chester, Deputy Pro Vice Chancellor, Learning and Teaching, Design and Social Context, RMIT University.

As we move towards the end of semester, we begin the process of collecting student feedback via the Course Experience Survey (CES). Student feedback on teaching is a complex topic, and it typically raises a range of issues for academics.

Get any group of teaching staff together to discuss student feedback and you are guaranteed a lively discussion. In addition to the many hours clocked up in our staff rooms on this topic, it has generated thousands of articles examining the validity of student evaluation tools; the best time in the semester to collect such feedback; and how most effectively to close the feedback loop and communicate with students about changes made as a result of their feedback.

Lecturer showing a mindmap on an overhead projector.

Copyright © RMIT University. Photographer: Margund Sallowsky.

Previous tomtom posts like this one and this one have effectively captured the ups and downs of the process, and both mention the importance of putting the CES in context for students. The phenomenon of “survey fatigue” (as we know from our own lives) is also a risk in any drive to increase response rates, particularly as we move to online administration of the survey.

There is one issue, however, on which there is widespread agreement: student feedback is only one source of information available to us about our courses and our teaching. Triangulation is crucial. This means complementing student feedback with information from:

  • assessment tasks, giving due consideration to the learning your students demonstrate
  • peer observation, such as via Peer Partnerships, in which you invite colleagues to experience your teaching and provide feedback
  • your own reflections on what seems to work, what doesn’t, and why.

The CES can provide us with useful information, but we do need to remember what it measures, namely student experience. In his useful summary of research on student evaluations, Terry Doyle (2004) reminds us that while student feedback can provide valuable information, there are a number of aspects on which students are not well qualified to provide feedback, including:

  • whether the teaching methods used were appropriate for the course
  • whether the content covered was appropriate for the course
  • whether the content covered was up to date
  • whether the assignments were appropriate for aiding student learning
  • whether what they learned has real-world application
  • whether what they learned will help them in future classes
  • whether the type of assistance, help or support given to students was appropriate to the learning goals of the class
  • whether the difficulty level of the course material was appropriate.

Doyle’s list also provides, I think, a structure for a teacher or lecturer to speak to towards the end of a course: a quick reminder about each of the elements above would be an appropriate introduction for students before they complete their survey.

RMIT TAFE Students in class.

Copyright © RMIT University. Photographer: Margund Sallowsky.

Before making changes in response to student feedback, we need to be confident in the validity of the data, and this brings us to response rates. This semester the Survey Services Group has developed a reliability band calculator. During the survey administration period (May 6 – June 2) you will be able to check how your response rates are tracking against the reliability bands (good, sufficient and insufficient). You can check the response rates by program and school here (RMIT Staff login required). Contact your L&T group if you’d like to use a short presentation, designed by the Survey Centre to be displayed in class, so that students can follow the links and complete any outstanding surveys.
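The Survey Centre has not published the formula behind its reliability bands, but a common approach to thresholds of this kind is the standard sample-size calculation for estimating a proportion, with a finite-population correction for the size of the class. The sketch below is illustrative only: the confidence level (z), assumed proportion (p) and margin of error are my assumptions, not the Survey Centre’s parameters.

```python
import math

def required_sample(population: int, margin: float,
                    z: float = 1.96, p: float = 0.5) -> int:
    """Responses needed to estimate a proportion within `margin`
    at the confidence level implied by `z`, applying a
    finite-population correction for a cohort of `population`."""
    # Infinite-population sample size, then correct for cohort size
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# e.g. a course of 100 enrolled students, +/-10% margin, 95% confidence
print(required_sample(100, 0.10))  # 50
```

On these assumptions, a course of 100 enrolled students would need about 50 responses for a ±10% margin at 95% confidence; tighter margins push the required number up quickly, which is why small classes rarely reach the stricter bands.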

The RMIT Academic Expectations have set expected and aspirational targets for the Good Teaching Scale. In the coming years there will be more pressure on academics to provide reliable snapshots of the student perspective on their teaching. The vast majority of academics have always used the surveys as a tool for self-reflection.

I’m confident that we can continue a culture at RMIT that puts an appropriate emphasis on major surveys like the CES as one way in which we identify both evidence of excellence and areas for improvement.

Resources:

  • Read more about Terry Doyle’s research into surveys and teacher effectiveness at his blog Learner Centered Teaching.
  • For more on the CES, read this FAQ published by the Survey Services Centre.

Share your thoughts about the CES in the comments section below!

10 responses to “Student feedback: What it can and can’t tell us”

  1. Peter Murphy 18 May, 2013 at 17:41

    CES – can’t live with it, can’t live without it. Great post Andrea, especially the timely reminder about the need to triangulate feedback on our teaching.

  2. Ian Thomas 27 May, 2013 at 16:34

Fair points, and good to see the work of Doyle highlighted. My concern is that the CES questions do little to inform us of what students learn, and can be ambiguous about what is being asked. So the ‘results’ of the CES being used to assess ‘good teaching’ is a worry, and more so when these results will be fed (uncritically) into a bureaucratic system which is about ‘compliance’ rather than gaining an understanding of how learning can be improved.

  3. Andrea Chester 29 May, 2013 at 23:09

Thanks Peter, and thanks Ian. I too am hesitant about the CES taking on too much importance in our assessment of teachers, but I do think we are closing in on the right balance: it would be hard to think of a profession these days whose members weren’t subject to some form of performance metric. Reliability bands are one way of making sure that a teaching staff member’s results have validity, and I can also assure you that I’ve never viewed data from the CES uncritically. It’s as a tool of reflection, viewed over time (and with the skepticism appropriate to survey data), that I propose the CES as one valuable source of student feedback on our profession.

  4. Paul Lewis 29 May, 2013 at 23:28

    I worry about the low response rates and issues of validity. Can we perhaps learn something from the excellent staff survey conducted recently, which was more comprehensive in nature?

    • Michael Nott 30 May, 2013 at 11:33

      Paul… if you are worried about poor response rates then you will probably not be reassured by the current “reliability band calculator” offered by the Survey Centre. Perhaps there is a bug in it!
      Anyway… just for fun I ran the data for our survey of NTEU academic staff, about the “Academic Expectations and Development” policy document, through it. We had 201 respondents and I wondered how that would shape up in the calculator. Guess what? I only needed 19 respondents for “sufficient” reliability; 69 for “good” reliability.
      I couldn’t help myself. I thought of the classes I taught, plugged the numbers in, and then wondered who the handful of students needed for “sufficient” reliability might be. Not the ones that were penalised for plagiarism I hope!
      Good luck,
      Michael

  5. Michael Nott 29 May, 2013 at 23:33

    Thank you Andrea for positioning the CES as a valuable tool for development of our teaching. If only this could have happened.

Sadly, the critical GTS component of the CES has been pinched by senior management and turned into a set of hurdles that academic staff must jump in order to secure conference leave, teaching recognition for promotion purposes, and other benefits that used to be our rights. They did this, of course, by incorporating it into their “Academic Expectations and Development Policy” document.

    We were so concerned about this that, earlier this month, we conducted a survey of RMIT NTEU members who do the actual teaching! Only 7% of teaching staff think that the GTS score is a good indicator of teaching excellence! And 93% of staff did not know that Academic Expectations and Development Policy, with its GTS and Research hurdles that bear on the essentials of academic life, was adopted without being presented to Academic Board!

    Have a look at the May survey results from 200 academic staff to gauge the feeling of the RMIT L&T community about the GTS and Academic Expectations and Development generally. Here is the link:
http://www.nteu.org.au/library/view/id/3903

    Meanwhile, your NTEU Enterprise Bargaining Team is insisting that “Academic Expectations and Development” is NOT used for performance management.

    Many thanks… Michael Nott, VP-Academic, RMIT NTEU

  6. Andrea Chester 30 May, 2013 at 09:48

    Thanks for your comments Paul & Michael.

    Paul: I also have concerns about the low response rates for the CES. However, I see the new reliability calculator and publication of response rates during the administration period (http://www.rmit.edu.au/browse;ID=qj1s0x16bppo) as useful tools to help us in this area. It’s interesting also to look at the programs and schools where responses rates have been the highest. The Survey Services Centre will be following up with these areas to refine the best strategies to improve response rate in the future.

    Michael: I certainly agree that we need to ensure the CES is not the only information used to make decisions about teaching quality, however, given that we have this tool, it does seem useful to have benchmarks. This doesn’t of course mean the expectations will be used to performance manage people out of their jobs. How the academic expectations will be used at a local level is still being negotiated in Schools and Colleges and I encourage staff to help shape this process via their L&T committees.

    - Andrea

    • Michael Nott 30 May, 2013 at 11:04

      Thank you Andrea,

Our issue with the CES, and particularly the GTS scores, within the “Academic Expectations and Development” policy document is pretty clear. We do not consider the document as just advisory and formative. Rather, it is prescriptive insofar as it sets bands for performance that must be met by academics. And if they are not met then penalties apply.

      I’ve got the document in front of me right now and on page 7 it states: “Academic leaders will determine what constitutes outstanding performance for their disciplines by using the scales and ranges within the performance expectations for performance appraisal and promotion purposes”. Please note the verb “will”. Supervisors and HoSs “will” use these scales.

      As far as rewards for such performance goes in the same section we see “The University also offers research leave, travel grants, internal research funding schemes and international exchange fellowships as forms of recognition and reward” and then “Academic promotion itself is a traditional reward for high performance”.

It’s a sort of 1 + 1 = 2 system. If a staff member does not meet the targets set out in the document then that staff member has forfeited his or her rights to critically important aspects of academic life.

      The NTEU, through its bargaining team in the current round of EB, is determined that “Academic Expectations and Development” is not used for work-planning and performance management. Of course we are equally concerned about the targets set for research which are for most of us, in a word, “impossible”.

      I’m sorry that the CES, and particularly the GTS, which could have been a great formative tool, has ended up being seen as a crude tool for management. I hope we do not make the same mistake with peer reviewing of teaching. I can see it coming! After that there is not much left to help our staff in their teaching efforts.

      Michael Nott
      RMIT NTEU VP Academic

      • Andrea Chester 31 May, 2013 at 19:32

        Thanks Michael, a lovely opening for an update on Peer Partnerships! RMIT Peer Partnerships (http://www.rmit.edu.au/teaching/peerpartnerships) offer a great opportunity to triangulate feedback data. We have very deliberately established this process as a formative and collaborative one and will work hard to maintain this position.

        Perhaps we can invite Professor Geoffrey Crisp to write a post on the separate process of peer review being introduced into teaching awards this year and the promotion process in 2014?

  7. Daryl 30 May, 2013 at 16:05

    I am equally concerned about the linkages between the Management for Performance policy document and the AE&D document. The FAQ presents a friendly resolution process where targets are not met, when in fact it should clearly articulate the potential outcomes. The survey that Michael ran on behalf of the NTEU says it all. With the ownership of this document resting with HR, it seems that Academic Board has no jurisdiction over it (which was confirmed at Academic Board), and this presents an even bigger concern to me.

So, Andrea, while you may not share the same concerns, I do hope that we can discuss these issues so that I can become as comfortable as you seem to be with the GTS and its use as a management stick (no other way to describe it, I’m afraid; I’ve witnessed such use already in the SEH College).
