
Will We Just Stand By and Watch This Happen?

By Trudy W. Banta, October 1, 2011

(All articles discussed in this column appear in the September–October issue of Assessment Update, available electronically to subscribers at publication.)

In 2006, hints began to emerge about the recommendations that would be made by Secretary of Education Margaret Spellings’ Commission on the Future of Higher Education. Indeed, the Commission’s final report, issued in September of that year, contained the expected calls for ways to compare institutions of higher education, including value-added measures of learning that could be reported publicly. Thus, in 2006 assessment scholars began to raise concerns that a fresh emphasis on accountability, institutional comparisons, and public reporting of test scores could hijack the outcomes assessment movement. Would faculty and administrators who were finally beginning to see assessment of learning outcomes as a helpful source of guidance for improving curriculum, instruction, and student support services abandon this perspective and come to view assessment simply as an externally imposed accountability tool?

Our recent experiences on some individual campuses have increased our fears that the promise of assessment for improvement might be diminished by an increased focus on assessment for accountability. In the first article in this issue, Jana M. Hanson and Lavonne Mohn provide evidence that confirms these concerns. In 1999, 73% of respondents to a survey that ACT, Inc. administered to users of its Collegiate Assessment of Academic Proficiency (CAAP) reported that their most important use of CAAP was to assess “the effectiveness of instructional programs,” presumably a precursor to using CAAP findings to suggest improvements in instructional programs. When the question about important uses was asked of CAAP users again in 2009, fewer than half responded that assessing “the effectiveness of instructional programs” was a “very important” use of CAAP. In fact, that item did not rank among the top three very important uses in 2009. It had been displaced by assessing students’ mastery of generic skills, providing information for accreditation, and, most suggestive of all, “comparing performance of your students with that of students nationally.”

Of course, the sample of institutions invited to participate in ACT’s survey is almost certainly biased toward those for which demonstrating accountability with test scores is important. So we cannot readily ascribe the perspectives of CAAP users to institutions that do not use a standardized test of college students’ generic skills (e.g., writing and critical thinking). But even Hanson and Mohn’s finding that 424 institutions were using CAAP in 2009 is telling. By consulting websites and calling company representatives, we have determined that this year a combined total of more than 1,000 institutions have used CAAP and the other two tests recommended for the Voluntary System of Accountability (VSA): the Collegiate Learning Assessment (CLA) and the ETS Proficiency Profile (EPP). With the ever-changing number of postsecondary institutions hovering in the neighborhood of 4,500, we can estimate that 20–25% now administer standardized tests of generic skills.

This is troubling. It means that a significant portion of US colleges and universities may be moving toward providing the public with information based on scores on standardized tests of generic skills, information that inevitably will be used to compare the quality of institutions. It just seems to be human nature to home in on those easy numbers when we seek a standard for making comparisons! In many of our respective Assessment Update columns, Gary Pike and I, and more recently Daniel McCollum in an article (2011, 23(2)), have written about the very real evidence that currently available standardized tests of generic skills are not valid for the purpose of comparing institutions.

In all fairness to ACT, CAAP can boast a quarter century of experience as a rising-junior exam. Its scores can be compared, in a rough way, with entering students’ scores on the ACT exam, both to advise students about skills they need to strengthen and to suggest curricular strengths and weaknesses. But ACT staff have not argued that CAAP scores should constitute a primary source of information for judging institutional quality, and only since 2007–08, when the VSA was implemented, have they begun to emphasize (and, I sense, somewhat reluctantly) the value-added dimension of the instrument’s applicability.

Gary Pike and I frankly admit that our work at the University of Tennessee on the technical quality of CAAP and other tests of generic skills is dated. We keep waiting for similar studies of CAAP, CLA, and EPP to emerge from current experience. In particular, at least two multi-campus state university systems have abundant CLA data that could be mined to address the advisability of using standardized tests of generic skills to make inter-institutional comparisons. Those data could also shed more light on the validity of the value-added statistics called for in the VSA.

Will we stand silent and watch standardized test scores become the standard of choice for judging institutional quality, as they have in the K–12 sector? Will gauging the effectiveness of faculty become the next use of these scores, which are, without question, influenced primarily by the learning students bring with them to college? Will instructors feel pressure to increase their students’ test scores? Will institutions raise their admissions criteria in hopes of raising future test scores by bringing in better-prepared beginners? Will colleges and universities with lower scores fail to attract students, go out of business, or be taken over by an outside entity? Is it only a matter of time until all the current concerns in primary and secondary schools plague the postsecondary sector as well?

What actions should we take to ensure that standardized tests, as well as other assessment approaches, are used to guide improvements in higher education rather than to make unfair and unwarranted judgments about the relative quality of institutions? We invite and encourage comments and suggestions from our readers.

