Post by Dallas Wingrove
Image by pgcap
The previous post, Do students read your feedback? looked at the issue of feedback from the student perspective. This post considers the view of the teacher, particularly in relation to the Course Experience Survey (CES).
Conversations with teaching staff have driven home that the focus on teacher scores and rankings (GTS & OSI), and the design and administration of the CES, can lead to significant personal and professional angst for some staff. What is heartening, though, is that teaching staff have begun to share their experiences and concerns, moving beyond the self-deprecation and feelings of deflation that can occur when the CES report is received following the semester’s teaching.
The purpose of the CES is to systematically capture feedback from students about their course experience and, in turn, to provide feedback that assists teachers to continue improving the quality of learning and teaching in their courses. Yet whilst the CES, and the data it produces, carries so much weight, particularly in relation to university systems of academic promotion, teaching awards and wider career development, there is some work to be done if the survey is to foster improvement in teaching practice and if CES data is to provide useful feedback to teachers.
Following discussions with staff at my School’s Learning and Teaching Committee, it was decided that a project team be formed to investigate how to do things better. As discussions about the CES unfolded in the school, a colleague and I started exploring whether there was room for changes to improve how the survey was administered and, ultimately, how it was experienced by both staff and students. We discussed the issues with teaching staff and held a series of follow-up one-on-one discussions, which resulted in the development of a CES Project brief. The brief includes a summary of the views staff expressed regarding how they experience the CES.
Fundamentally, teachers expressed:
- Feelings of disillusionment, and concerns in relation to the rigour of the survey, its design and implementation
- Feelings of concern about the rigour of a survey where the very staff administering it are neither briefed about, nor supported to engage with, its purpose and its role in improving learning and teaching
- Concern as to the use of the Likert scale, with not all points on the scale marked on the CES
- A strong sense of disenfranchisement with the survey questions having been delivered top down, with no input from teaching staff on the ground
- Questions as to the appropriateness of the survey questions for the first year cohort
- Finding the experience of receiving the CES report demoralising and disillusioning, with no guidelines for Heads of School/Senior Academic Management regarding how they debrief with teachers about their scores and the data the survey generates.
I suspect that what staff in this school articulated about the design and administration of the CES is common, including how student feedback is communicated to them. As this university moves to a fully on-line administration of the CES in 2013, there are some burning unanswered questions and, I hope, some unrealised opportunities for lasting positive change.
To begin with, we decided to explore further the issue of survey administration in the school. This, we believed, had relevance not just to the current roll-out of hard copy surveying, but perhaps also to the next phase of the CES as this university moves to near 100% on-line surveying of its students by 2013. The administration of the CES in the school in which I work is delegated to administration staff, and I understand from colleagues in some other schools across this university that this is common practice. In reality, this means the administration of the survey is ad hoc: without briefing or induction, many variables come into play. Commonly, the survey is devolved to administrative staff who may not be appropriately briefed as to the design, purpose and significance of the survey, and who may or may not introduce it in accordance with the guidelines.
From the student perspective, it must be perplexing, at the very least, to be faced with surveys for which there is minimal context or rationale, yet which carry a weight of which they may or may not be aware.
We met with our Head of School and the College Associate PVC Learning and Teaching; both were supportive, with the latter referring us to practice within the Australian sector in which undergraduates are engaged in the survey administration process. Our next step was to meet with the Director of the university’s Student Survey Centre. This discussion highlighted the siloed ways we practise in higher education, as the centre has, through no fault of its own, little interface with teaching staff. We learnt much, including that the university is reviewing its administration of the survey, and changes have already been made to the Graduate Destination Survey Likert scale, with the middle point now clearly marked as neutral. Given the close alignment between the GDS and the CEQ, and in turn the CES, it seems that some positive change may be around the corner.
The three of us agreed to develop a project involving the administration of the hard copy CES, which would in turn inform the process for on-line delivery. This would involve volunteer undergraduate students inducted as mentors, which we intend will enhance rigour and consistency in the administration of the CES and, we hope, enhance our students’ understanding of the importance and purpose of their feedback. We have committed to a pilot in which we will compare uptake, response rates and so on between the hard copy and on-line samples within a course, with findings to inform the roll-out of the on-line administration of the CES. It may be, for example, that students who complete the survey on-line talk to one another via video prior to commencing it.
What can be left unspoken in tertiary education is how teachers experience the CES, in terms of how the data is both collected and received. Our project is only emerging, but it highlights that teaching staff need to feel supported and listened to if they are to put their experience of the CES on the table. As we move into a world of increasing quality assurance, it’s a win for all if the instruments we use to measure the quality of teaching and learning embody rigour in their design and administration.