
The Survey and the Damage Done

Student surveys and teaching evaluation methods are emerging as a hot issue amongst casual academics, who rely on positive feedback as they teeter on the edge of unemployment. Just days after I wrote this post, it appeared as the topic on #Adjunctchat. The range of locations and institutions represented in discussions such as these reveals that the issue extends well beyond Australian universities. It’s concerning that surveys may be poorly constructed. It’s significant that their purpose has become ambiguous. And it’s potentially very damaging that this is resulting in a change in standards and expectations.

Student surveys can help us to identify areas that we can work to improve. Agreed. But when it becomes a requirement to share student comments in employment applications or expressions of interest, student feedback takes on a new significance. It seems less about professional development and more about ploys to accrue choice comments, positive reviews and unprecedented popularity.

There is a haze around who actually benefits from asking students to evaluate their teachers.  It’s highly debatable that one’s success as a teacher is accurately measurable in a survey. Popularity and likeability are no measure of the learning that has taken place (which sometimes involves tough love). I’m also doubtful that one can offer the most valuable learning experience when there is such a power shift. Tertiary students have become very aware that they assume the role of their teachers’ assessors.

Let’s take a sharper look at how the evaluation process and its purpose have reshaped the dynamic of the classroom and students’ expectations of the casual academic. Some of the changes I have witnessed are:

• Expectation of 24 hour teacher availability and rapid responses
• Softer or generous marking
• Tolerated absences and overlooking the attendance requirement
• Teacher-provided sweets and pizza in the classroom
• Offers to students of work and collaboration
• Self-promotion as an expert in the field
• Students’ difficulties attributed only to classroom instruction provided
• An increase in the number of students requesting teachers to revisit grades in order to make a positive adjustment

This lecturer is throwing out chocolates when we answer questions right … it’s like she is Sheldon and we are all Penny’s [student tweet]

Is this becoming the norm at your institution too? Some of these behaviors may have emerged anyway, but the requirement to report evaluation results to secure future employment casts a large shadow of doubt over the timing and motivation behind the changes. Thus far I have fared well in student evaluations even though I haven’t made concessions (with the exception that I never seem to be off the clock anymore). However, it feels like the pressure has intensified and it’s inevitable that I’ll see a slide in student satisfaction with my teaching unless I succumb to these newly molded expectations.

Then there’s how little control the teacher is actually afforded over much of the standard criteria they are evaluated on. Take, for example, these questions:

Was my teacher prepared? Did they know the subject material? Did they communicate clearly in a way that you understand?

Try as we might, the subject matter for the tutorial is quite often:

• not provided to tutors/teaching assistants at all
• shared at the eleventh hour, leaving no time for thorough preparation
• taken in a very different direction from the lecture content on which you have based the tutorial
• too much to cover in the allotted tutorial time

Then there are the evaluation criteria that lie more within the teacher’s control, which tend to be questions from which it is difficult to glean comments useful for self-promotion. Was my tutor on time? Yes, each and every day, but how does a student elaborate on that? Did my tutor know me by name? Yes, miraculously, although she had fewer than 12 hours of face-to-face time per semester with 145 students. Once more, this is not something the student is likely to expand on, or even be conscious of what an achievement it is.

Students are so heavily surveyed that it seems an unfair use of their time to administer yet another questionnaire that will take up almost half an hour of a one-hour tutorial. Again, how students actually benefit from the teacher evaluations is elusive.

Teaching in other tiers of education is often measured in student outcomes but I’m not sure that’s the answer either. The most prudent approach to evaluation may very well be that it is voluntary, completed outside of teaching time, more immediate, and the results remain completely confidential between teacher and students.


Discussion

8 thoughts on “The Survey and the Damage Done”

    • A pertinent article, Andrew. Thanks for sharing the link. On a broader scale there is the problem of students being positioned as consumers and being asked if they are happy with their purchase. As educators we are interested in enabling and improving student learning; however, the branders seem interested in the student experience. This has me thinking about the role of academics in university identity narratives.

      Posted by mstexta | March 22, 2014, 9:27 am
  1. • Expectation of 24 hour teacher availability and rapid responses
    Yes, students email me on a 24 hour cycle 7 days a week. If I don’t answer emails on a weekend they just pile up for Monday anyway.
    • Softer or generous marking
    Yes, you are always marking with 1. an eye on the evaluations, and 2. mindful of reducing the number of students who complain and want a remark.
    • Tolerated absences and overlooking the attendance requirement
    Sort of. At our uni we don’t worry about attendance anyway. It’s not high school!
    • Teacher-provided sweets and pizza in the classroom
    Huh, no. How desperate do you have to be to do that?
    • Offers to students of work and collaboration
    No.
    • Self-promotion as an expert in the field
    No.
    • Students’ difficulties attributed only to classroom instruction provided
    Well it is the standard ploy. When they do well on exams they attribute it to their intelligence and hard work. When they do poorly they attribute it to a poor teacher.
    • An increase in the number of students requesting teachers to revisit grades in order to make a positive adjustment
    All the time. See “softer or generous marking”.

    “Was my teacher prepared? Did they know the subject material? Do they communicate clearly in a way that you understand?”

    I taught a unit last semester on which I knew next to nothing. Every student except for one indicated on the evaluations “agree” or “strongly agree” to that question. I guess I didn’t fool that one student 🙂

    Posted by Rob | March 21, 2014, 5:05 pm
  2. Surveys are a problem across the sector — in many universities the quantitative results are mandatory for probation and promotion for full-time academics, which means the whole complexity of the teaching situation is boiled down to “5.2”, with serious impact. And the questions are unsatisfactory in all the ways you say.

    But you’re also absolutely right that survey design is one of the places where casual work is made particularly invisible: the implication is that the person teaching has full and timely access to the materials being taught, and to all the resources they need to teach well, and that there are no other constraints or stresses impacting on students that are well beyond the tutor’s control.

    In this context, small human gestures like bringing food to class etc, that are often just ways of acknowledging factors like evening classes, get tangled up in more suspect practices, and I think this is something genuinely sad. Even teaching well can look like teaching to the survey in this challenging light, if the way you teach well involves students coming away feeling good — for whatever reason.

    How on earth can surveys be neutral in these situations? And yet we place so much faith in them.

    Posted by Kate | March 22, 2014, 12:56 pm
    • Bringing food to class for students may be a well-intended gesture. Underlying it may simply be the desire to be liked and to be human – at least initially. There is a problem with this, though, and it is magnified by teacher evaluations. It escalates student expectations and snowballs across institutions. Is the teacher no longer doing their job as well as they might when they don’t take into consideration the nutritional needs and biorhythms of students or respond to correct answers with a flurry of candy? Is ordering and organizing snacks additional unpaid work we should now consider adding to our list? Even if these practices are feasible for the tutor, they sully already imprecise survey data.

      Posted by mstexta | March 22, 2014, 1:58 pm
      • Yes, I think what you’re looking at here is a systemic escalation problem. It’s happening in multiple ways in service culture: once one person decides to work an 80 hour week, or bring prizes to class, expectation creep throws out the function of things like survey instruments very quickly. Here I think there is real potential for strong support from FT academics who raise all the same questions — and of course the sector itself gets very interested if the exercise of personal judgment about how to act in teaching situations extends, however subtly, to grading.

        I feel we don’t know much about the casual academic experience of student surveys — how consistent these practices are across the Australian sector. For a while casual academics couldn’t even access surveys if they wanted to because of the costs involved; now, from what you’re saying, it seems as though there’s been a change to an expectation that they must.

        I’m interested to hear how widespread these survey issues might be.

        Posted by Kate | March 22, 2014, 2:16 pm
  3. Great article, thank you.
    In my workplace, surveys are all done electronically, and they are voluntary and done in the student’s own time, as you suggest. It’s undoubtedly a better method than doing them in class, but response rates tend to be pretty low and I think this tends to skew the results; the people who take the time to fill it out are often the people who either loved the course/you or hated the course/you.
    As you say, casuals often get blamed for issues with the course itself, that they have very little (if any) influence over, and this then comes out in their surveys. It’s a rare student who can differentiate between issues with the course itself (structure, assessment, etc.) and the teaching, especially early on in their studies.
    Also, I’ve never much understood the point of these evaluations when they are not supported by any further training or development for casuals. It’s all well and good that a survey flags that a casual tutor needs to work on their facilitation skills, for example, but if they are not offered any support or training to develop those skills…what’s the point? But casuals don’t get many opportunities for professional development. It’s a problem.
    All that said, I always urge new casual tutors to request that their course convenor sets up a teaching evaluation survey for them. Plenty don’t, yet student evaluations – for all their problems – are important for future job opportunities in the sector, and I know plenty of casual academics who don’t have *any* after years of teaching, and that doesn’t look good for them.

    Posted by Natalie Osborne | March 24, 2014, 2:36 pm

Trackbacks/Pingbacks

  1. Pingback: CASA weekly news 08/14 | CASA - April 27, 2014
