Assessing Assessment: Strategies for Providing Feedback on Assessment Practices

By Jeremy D. Penn | December 18, 2012

A central part of my role as an assessment director is to continuously improve assessment practices on campus. Recently, when driving to the airport through a construction zone, I noticed a new traffic sign that may have implications for this work: the dynamic speed limit display. These displays use radar to post your current speed directly above a speed limit sign. If you exceed the limit, the numbers showing your speed begin to blink rapidly to call attention to it. Dynamic speed limit signs have reduced drivers’ speeds by about 10 percent (Goetz 2011), and some traffic engineers now consider them “more effective at changing driving habits than a cop with a radar gun” (Goetz 2011, 130).

Just as a dynamic speed display seeks to change drivers’ behavior, a big part of our work as assessment leaders is to help faculty and administrators change and improve their assessment practices. A common approach is to use a rubric that describes the elements of high-quality assessment, have a small committee of reviewers develop feedback, and share that feedback in a written report (see Fulcher 2010 for a useful guide and rubric).

Too often I have been frustrated to find little change in assessment practices in response to feedback. In one common scenario, the review team looks at a set of assessment plans and, finding no change from the previous year’s plans, writes “ditto: see last year’s feedback.” Just as commonly, the review team finds assessment plans submitted without much thought (a kind of “learned helplessness”), the authors clearly hoping that the reviewers will provide The Answers. In both cases, reviewers become disenchanted with the process, and it becomes more difficult to recruit reviewers willing to give their time and effort when the activity is perceived as having little value. Worst of all, our progress in improving assessment practices on campus can be stymied.

Yet whenever I drive through a construction zone, the dynamic speed limit displays remind me that feedback works in changing behavior. It is surprising that dynamic speed limit displays are so effective. All cars are already equipped with speedometers, which are themselves a feedback tool, and our roads are littered with speed limit signs. So why does the combination of a public speedometer and a passive speed limit sign result in such a significant change in driving behavior? Do dynamic speed limit signs suggest strategies that we can apply to improving assessment practices?

Research on Feedback

Research on feedback suggests that it can influence performance positively or negatively. In a meta-analysis of 131 papers on feedback interventions, Kluger and DeNisi (1996) found that in 38 percent of the cases they examined, feedback actually decreased performance. For example, some drivers respond to the feedback provided by dynamic speed limit displays by slowing down. Others ignore the feedback, and still others enjoy watching the numbers blink and speed up just to make them do so. In the same way, some faculty members are much more receptive to feedback on their assessment practices than others. I have seen departments make great strides in improving assessment practice in response to feedback, and I have seen others reject assessment altogether because a reviewer suggested a wording change to a student learning outcome.

When faculty receive negative feedback about a large gap between their assessment practices and what they should be doing, there are four basic responses. The first is to “abandon the standard” (Kluger and DeNisi 1996, 260): faculty members perceive so little chance of ever meeting the standard that they stop trying, which results in learned helplessness. The second is to lower the standard, essentially acknowledging defeat in the face of limited faculty engagement or resources. The third is to reject the feedback message itself, perhaps by suggesting that the review committee “doesn’t understand our unique discipline.” The fourth response, and the one we hope to encourage, is to increase effort to meet the standard. As assessment leaders, it is not enough to simply give feedback. Rather, we must carefully consider the characteristics of that feedback so that it is likely to produce the positive changes we seek.

Kluger and DeNisi (1996) identified three groups of variables that influence the effect of feedback interventions on performance: cues from the feedback message itself, the nature of the task performed, and situational and personality variables.

Feedback cues determine which level of action regulation receives the most attention: the task itself, the motivational level, or meta-task processes involving the self. That is, when responding to feedback and considering how to move forward, faculty members will focus on different aspects of the work. For example, some faculty may focus on elements of the task itself (such as changing the way departmental colleagues are involved in assessment or building in additional time over the next year to work on assessment), while others might focus on meta-task processes (such as questioning whether the members of the review committee are qualified to provide feedback or worrying about the impact on their own campus reputations). Generally, feedback that encourages faculty to focus on meta-task processes tends to reduce performance, while feedback that directs attention toward the task and toward motivation tends to improve performance.

The nature of the task itself also moderates the role of feedback in improving performance. A very simple task, such as moving furniture out of a room, can easily be improved by increasing motivation (say, by offering an additional $50 if the room is cleared in ten minutes). But cognitively demanding tasks, such as creating a rubric to assess writing, are limited by the cognitive resources available (such as limited knowledge of how to create a rubric) and require attention to the characteristics of the task itself. I can easily motivate my golden retriever with a small amount of peanut butter, but no amount of feedback will enable him to differentiate between “proficient” and “developing” in the style category of the written communication rubric. To be truly effective in improving performance on the task, feedback must exist in a context that includes support, knowledge, and resources.

Finally, situational and personality variables matter: feedback that incorporates goal setting and avoids threats to self-esteem has a stronger positive effect on subsequent performance. Faculty members spend years developing deep expertise in a particular area but may have little experience or background in assessment. When someone is learning a new skill, we must be careful that our feedback does not damage the person’s ego. It is also helpful to focus on goal setting and to be clear about the purpose of assessment and of the feedback process.

Applying Feedback Research to Strategies for Improving Assessment

The lessons learned from dynamic speed limit signs and research on feedback suggest some revisions to how we use feedback to improve faculty members’ and administrators’ assessment practices.

Direct Attention to the Work of Assessment, Not to the People Doing the Assessment. It is clear from the research that feedback that directs attention to the task is more effective than feedback that directs attention to the self (Black and Wiliam 1998; Kluger and DeNisi 1996). Just as in the classroom, we should promote the idea “that success is due to internal, unstable, specific factors such as effort, rather than on stable general factors such as ability (internal) or whether one is positively regarded by the teacher (external)” (Black and Wiliam 2005, 225).

The fact that assessment has not been effective in a degree program says a great deal about the effort the program put forth in outcomes assessment but little about the character or quality of the individuals working in it. We need to take care to protect the egos and self-esteem of faculty members participating in assessment and to remember that a failure to complete assessment can be due to a variety of uncontrollable factors, such as severe illness, a change in departmental leadership, or faculty turnover. In addition, when giving feedback on assessment practice, provide specific comments on errors and suggestions for improvement (Black and Wiliam 1998), not just a grade, and keep that feedback focused on the assessment task, not on the individuals performing it.

Use Formative Feedback to Encourage Incremental Change over Time. Assessment is often implemented quickly under pressure from an externally imposed timeline; many institutions become involved in assessment only when an accreditor’s visit is looming. Often we want degree programs and departments to make great leaps in assessment sophistication in short periods of time. As a result, we give generic feedback (“this assessment plan is a good start, but keep working on it until it really shows a commitment to student learning”) and often grade assessment work (“developing”) in the hope that this will inspire departments and degree programs to improve their efforts.

Research suggests that descriptive, detailed feedback is strongly related to improvement, but adding a grade can decrease subsequent performance (Lipnevich and Smith 2009). Feedback to programs and departments should therefore take a formative approach, helping faculty members make small, incremental improvements in their assessment work. Although we should recognize those doing exceptional assessment work, “grading” assessment should be avoided because it can have a negative effect on future performance, particularly for those who would receive the lowest grades.

Be Clear about What Is Expected for Participation in Assessment and Regularly Communicate Progress Toward That Goal. Assessment work can be bureaucratic, involving procedures, templates, paperwork, and jargon. Although institutions do have legitimate reasons for collecting assessment information and using some standardized approaches to reporting, bureaucratic elements can get in the way of the real goals of assessment: improving student learning and being accountable to our constituents. Complex, jargon-filled forms make it difficult to communicate with faculty about how their degree program is progressing in assessment or even what is expected from their participation.

When we are tempted to obfuscate or complicate the assessment process, it is helpful to remember the simplicity of the dynamic speed limit sign: one sign, one clear goal, and straightforward communication about how to reach it.

Seek Out Assessment Leaders Who Have a Positive Orientation Toward Feedback. When helping departments and degree programs identify assessment leaders, it is important to determine who has a positive orientation toward feedback. Assessment work can be challenging, and it often takes several years to develop an assessment approach that works well. As a result, the leader will receive feedback from many different individuals, both from within the degree program and from outside it. A leader with a positive orientation toward feedback is much more likely to turn that feedback into improvement in the assessment work over time.

Avoid Playing Politics with Assessment. Nothing will derail an assessment program faster than an ulterior motive or agenda. Faculty members have been burned too many times by “innocent” requests for reports that resulted in budget cuts, loss of laboratory or office space, or other negative consequences. If we stray from the primary purposes of assessment (improving student learning and being accountable to our constituents) and begin to use assessment for political strategizing or as a power play for office space or resources, we will quickly lose whatever goodwill and faculty engagement with assessment has been developed.

Future Research and Summary

To be successfully engaged in assessment, faculty members need support, resources, encouragement, and guidance, not speeding tickets and STOP classes. As Goetz (2011) noted, “the true power of feedback loops is not to control people but to give them control” (132). Ideally, our goal as assessment directors should be to work ourselves out of a job: to develop faculty expertise to the point where faculty members can assess their own assessment practices and take responsibility for the quality of their assessment work.

Feedback is an essential tool in helping faculty and administrators improve their assessment practices. But we need to identify and test new strategies for providing feedback that reflect research on effective feedback and are more likely to result in improvements to assessment practices. Some strategies may include:

• Providing faculty development that is focused on the ability to self-assess assessment practices.

• Using assessment mentors who walk alongside faculty through the assessment process, instead of relying on a review committee (which can take a long time to organize and to produce feedback).

• Developing collaborations between similar degree programs at different institutions to learn from each other and provide feedback on assessment work.

• Establishing online communities of practice for real-time peer-to-peer discussions on assessment practices.

• Creating clear principles—using the language specific to each discipline—that provide guidance on the characteristics of high-quality assessment processes.

Faculty and administrators are under pressure from all sides—there is much to do and too little time to do it. If we are to develop meaningful and manageable assessment practices on our campuses, we must ensure that our efforts to improve assessment practices are not wasting anyone’s time and are likely to result in improvement. Perhaps the lessons of the dynamic speed limit sign can help us move in the right direction (and can help us avoid speeding tickets).

References

Black, Paul, and Dylan Wiliam. 1998. “Assessment and Classroom Learning.” Assessment in Education: Principles, Policy and Practice 5 (1): 7–74.

Black, Paul, and Dylan Wiliam. 2005. “Changing Teaching through Formative Assessment: Research and Practice.” In Formative Assessment: Improving Learning in Secondary Classrooms, 223–40. Paris: Centre for Education Research and Innovation.

Fulcher, Keston. 2010. Assessment Progress Template for Annual Academic Degree Program Reporting. Accessed June 26, 2012. http://www.jmu.edu/assessment/JMUAssess/APT_Help_Package2011.pdf.

Goetz, Thomas. 2011. “The Feedback Loop: How Technology Has Turned an Age-Old Concept into an Exciting New Strategy for Changing Human Behavior.” Wired 19 (7): 126–33, 162–64.

Kluger, Avraham N., and Angelo DeNisi. 1996. “The Effects of Feedback Intervention on Performance: A Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory.” Psychological Bulletin 119 (2): 254–84.

Lipnevich, Anastasiya A., and Jeffrey K. Smith. 2009. “Effects of Differential Feedback on Students’ Examination Performance.” Journal of Experimental Psychology: Applied 15 (4): 319–33.

Jeremy D. Penn is director of university assessment and testing at Oklahoma State University in Stillwater, Oklahoma.
