(Article first published in the January/February 2011 issue of Assessment Update, which was available electronically to subscribers on the week of February 3.)
As readers of Assessment Update know, the call for assessment is sweeping the globe, so much so that faculty and administrators are finding it difficult to ignore—though many may wish they could. Unfortunately, awareness has not led to wider implementation, because fear of assessment is still widespread. Those who value and understand the process face a huge struggle in pushing for academic improvement; they must feel as though they are swimming upstream, while many potential contributors timidly stick a toe into the water or watch passively from the shore.
The irony in this struggle is that the biggest stakeholders involved, students, are largely left out of the process of convincing others of the value of assessment practices. Students are not just passive subjects being fed knowledge. Many actively care about their learning and the quality of their education, and would love to be involved with program improvement. As Trudy Banta stated after hearing a student presentation at an international meeting, “Perhaps it is our students who will be most effective in engaging faculty in assessing student learning outcomes as a component of enhancing teaching and learning!” (Banta, 2009, p. 7).
As a student myself, I care deeply about the quality of my education. However, I was not aware of the state of assessment at many institutions, including my own undergraduate college, until recently. I am a 2010 psychology graduate of a small liberal arts institution with about 1500 students. During the summer of 2009, I was an intern at James Madison University (JMU), an institution with one of the leading assessment programs in the nation. Under the supervision of Professor Donna Sundre, I began my exploration of assessment and the assessment cycle. Prior to this internship, I was completely unaware of assessment processes in higher education. After reading material about assessment and learning how it works, it became hard for me to understand why a college or university wouldn’t use it for continual program improvement.
At JMU I studied my undergraduate institution’s current assessment practices. To my surprise, I found that such practices were virtually nonexistent. Institutional assessment relied heavily on the National Survey of Student Engagement (NSSE), with no direct measures of student learning. I learned that our regional accrediting body had established criteria for reaffirmation that require systematic and direct evidence of student learning, and I knew that my institution was not ready for its pending accreditation review. I felt that our administration needed to know this and to realize the importance of assessment for both quality improvement and accountability reasons.
When I returned to my college as Student Government Vice President of Academic Affairs, I started an information campaign. However, I discovered that the institution's main concern was the effect of the economic downturn on the college's endowment. To address it, ten individuals were appointed to study options over the summer of 2009. This committee of ten, which included no student representation, developed three scenarios that would significantly reduce the college's expenditures, entailing enormous changes that would affect everyone. The process was not at all transparent.
Faculty, administrators, and involved students were all frightened by the proposed changes. Worse, there was no evidence to support aspects of the scenarios that would drastically change the institution. Instead, the proposed changes were simply a collection of ideas occasionally backed up by anecdotal or case study evidence. So I attended many of the open meetings with the dean of the faculty and others to discuss ideas that were not included in the proposed changes. At these meetings, I explained what assessment was, and why it was crucial to the future of our institution. I received a variety of reactions: curiosity, excitement, skepticism, and cynicism. At one meeting, a professor of history asked me, “How can you possibly assess something like poetry?” and I replied, “Well, you do it all the time, so you tell me.” Many faculty did not realize that they were already assessing students on a regular basis using classroom tests and papers. There simply was no system for gathering solid evidence of student learning and growth.
In addition to attending open meetings regarding the future of my college, I was able to meet with the dean of the faculty to discuss the assessment cycle. She seemed very excited about assessment and surprised when I mentioned to her that our accrediting body had changed its guidelines to require evidence of student learning. Her interest led her to call Donna Sundre to discuss assessment implementation. However, communication about assessment ended after one highly engaging and enthusiastic conversation.
My last attempt to bring assessment forward at my college was through the Student Government Association (SGA). The president of the SGA was also unaware of assessment practices and immediately jumped on board when he learned more about them. Together, we organized a convocation of students at which we addressed several issues that were not mentioned in the proposed changes to my institution. I presented to about 100 students on the core principles of assessment and on how desperately we needed it for program improvement—financial crisis or not. In fact, it seemed that a crisis should help everyone focus more clearly on what was needed and what was really important.
Unfortunately, this story does not highlight a success. The faculty and board of trustees adopted a reorganization of the college drafted by the president and his committee. Administrators were no longer open to suggestions. The political atmosphere became charged for everyone during this time, and most people felt our rallying for change was useless, as most suggestions were not considered. Assessment vocabulary was included in the new document in the following way: “. . . the Faculty (will) have developed an ongoing assessment process for determining which majors should be created, retained, or ended and apply this process in a multiyear framework.” How can faculty ever come to trust assessment when it is framed in language that instills fear that their programs may be ended? Such framing violates a primary assumption and purpose of assessment: continuous program improvement.
In the end, I was not successful in my venture to get my small institution to implement assessment in a meaningful way. However, I did spark an interest in and raise awareness of assessment that I hope will not be ignored. Moreover, the battle for assessment has not ended for me, as I am now a graduate student at JMU working with the Center for Assessment and Research Studies. From my experiences as an undergraduate, I feel comfortable saying that the greatest thing I learned was to take responsibility for my own education. Many students share these sentiments, and students as a whole may have more sway with administrators than faculty members do. Thus, I urge faculty readers of Assessment Update to share the cause of assessment with their students and help them understand why it is so important. I have learned that it is easy to spark an interest in assessment but very difficult to enact actual change; doing so will require that everyone get involved. Get off the shore and into the water! We all share the goal of improving the quality of education, so do not fight the battle alone—let the students help! We care!
Banta, T. (2009). Sour notes from Europe. Assessment Update, 21(6), 3–7.
Megan Rodgers is a graduate student at the Center for Assessment and Research Studies at James Madison University in Harrisonburg, Virginia.