A Kind of Heresy: Assessing Student Learning in Philosophy

By Charles W. Wright June 3, 2011

(Originally published in the May-June issue of Assessment Update, available electronically to subscribers in May.)

Philosophers are among the most resolute assessment holdouts. It was only in 2008 that the American Philosophical Association abandoned its principled opposition to the assessment of student learning outcomes and adopted a qualified endorsement of the endeavor. I encountered this legacy of skepticism when I volunteered to serve as the assessment coordinator for the Department of Philosophy at the College of Saint Benedict and Saint John’s University, two private, residential, Catholic-Benedictine institutions with a combined academic program located in central Minnesota.

I inherited a departmental assessment plan written in 1997 by a pair of my colleagues in response to a mandate from our academic administration. The authors did not understand assessment. When I took over as coordinator in autumn 1999, neither did I. Nonetheless, I struggled for the next eight years to develop a workable strategy for assessing the goals contained in this document. In time it became clear to me that we were going to have to start from scratch.

In the fall of 2007, I initiated a series of meetings with my faculty colleagues to try to pin down exactly what it was that we were trying to accomplish as a program. Because only a small portion of our majors go on to pursue graduate studies in philosophy, it did not make sense for us to emphasize the role of our curriculum in preparing students for this path (though we could not completely neglect it, either). Instead we focused on what our curriculum and pedagogy offered to the vast majority of our majors (not to mention minors) who pursued educational paths different from academic philosophy. Frequent anecdotal reports from our alumni, as well as stories in the media, suggested to us that the particular excellence of philosophy majors is their capacity to engage in independent and creative problem solving. So, we decided to focus on how we thought we contributed to our students’ intellectual development.

Unsurprisingly, one factor complicating these discussions was that my colleagues were still reluctant to take assessment seriously. This was due, in part, to a residual suspicion that the tools of assessment could not accurately reflect our students’ learning. It was also due to their assumption that assessment was a strictly summative endeavor. A few colleagues openly expressed misgivings about the punitive uses to which evidence of weak outcomes might be put by academic administrators. By emphasizing that assessment could and should serve a formative purpose, I was able to convince them at least to engage seriously in identifying our department’s learning goals. In hindsight, it seems to me that a decisive moment came when I asked whether any of those present were prepared to assert that there was no room for improvement in their teaching. None was willing to go so far. This allowed me to emphasize that while we were required by administrative mandate to engage in assessing student learning, we could and should put the endeavor to genuinely constructive use in an effort to identify ways we might improve our performance as teachers.

We reached agreement on three fundamental skills and four dispositions that we believed our curriculum inculcated in our students. The skills consisted of (a) an increased ability to engage in charitable reading (suspending judgment on the views or ideas presented in order to understand them “from the inside”), (b) an improved ability to articulate philosophical ideas and arguments, and (c) improved reasoning ability. The dispositions consisted of (d) an increased propensity to engage in charitable reading, (e) increased comfort with ambiguity, (f) an increased capacity to resist the urge for quick and easy answers, and (g) taking pleasure in struggling with difficult ideas. We settled on this set of assessment goals partly out of fatigue—it required several, occasionally contentious, meetings to reach these agreements—and partly because they seemed like good places to start.

My initial effort to gather evidence on our students’ philosophical dispositions was guided by research literature in social psychology. It occurred to me that some social psychologists must already have carried out research on a trait like comfort with ambiguity. A search of electronic archives yielded a half-dozen or so different instruments, all oriented toward what the researchers called tolerance for ambiguity. Validation studies for these instruments gave me access to individual items. Careful review of these different instruments allowed me to identify what I thought was a collection of suitable items to create our own 35-item “comfort with ambiguity” assessment scale.

My colleagues, however, were not nearly so pleased with the result as I was. One major objection was that such an indirect, survey-based approach to assessment could not possibly capture deeper dimensions of our students’ learning. This concern arose from some of my colleagues’ genuine philosophical commitments to careful hermeneutic engagement with philosophical texts. Because they understood such careful reading to be fundamental to the task of philosophy, they objected to an approach to assessment that did not adhere to these same exegetical principles.

It was important that I honor these philosophical commitments and their methodological implications. I explained to my colleagues that qualitative approaches to data gathering in the social sciences—built on carefully structured analysis of transcripts of open-ended interviews and written documents—would come closer to meeting their expectations, but I also pointed out that such procedures could be time-consuming. They would require not just that we develop an appropriate series of questions and conduct interviews with students, but also that we produce transcripts of the interviews as well as instruments for scoring them. And then we would have to read the transcripts, score the students’ responses, and test for inter-rater reliability. We could do it, I said, if they were ready to contribute some time and effort. They admitted that they were not ready to make such a commitment. At this point I explained how the indirect and quantitative approach I was proposing would allow us to satisfy institutional requirements with a minimum of demands on their time, and might also yield evidence that could guide future efforts at pedagogical or curricular improvement.
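To give a concrete sense of that last step, here is a minimal sketch of what an inter-rater reliability check might look like, using Cohen’s kappa; the raters, transcripts, and scores below are all hypothetical, not part of any plan we adopted.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical: two faculty raters independently score the same 20
# interview transcripts on a 1-4 "charitable reading" rubric.
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3, 1, 4, 3, 2, 2, 3, 4, 1, 3, 2]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3, 2, 4, 3, 2, 3, 3, 4, 1, 3, 2]

# Kappa corrects raw agreement for agreement expected by chance;
# values above roughly 0.6 are conventionally read as substantial.
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```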

Paradoxically, then, I was able to get my colleagues to take seriously the particular approach to assessment that I was proposing—by showing them that the approaches more compatible with their own philosophical commitments would involve burdens they did not wish to bear. This allowed them to endorse an approach that fit their commitments less well, but that demanded less of their time. This level of engagement is clearly not optimal, but it allowed me to make some progress.

Their other major objection was that items based on existing instruments were too lifestyle-oriented and did not highlight the kinds of cognitive development that interested us. We agreed that any such instrument I might develop would have to track more closely the dispositions that mattered to us as philosophers and that might be cultivated by our curriculum and pedagogy. Having reached this agreement, I then went back to work. After some weeks I managed to produce measures for all four philosophical dispositions—38 items in total—which were embedded in a 112-item questionnaire that also solicited student reports about the pedagogical techniques used in their philosophy classes, self-reports of the students’ own learning behavior, observations about the classroom ethos they experienced, and responses to questions about whether philosophy had changed their way of thinking about certain issues of knowledge and morality. The dramatic expansion of the instrument was motivated by my naïve and optimistic anticipation that we might learn something about whether our pedagogy or the social environment we foster in our classrooms had any relation to how students scored on the disposition measures.

Using a combination of financial incentives—an opportunity to win one of forty $10.00 gift certificates to the campus bookstore—and moral suasion, we obtained a sample of 152 responses, a little over 50% of the students enrolled in the classes offered by the Department of Philosophy in spring 2009. The Cronbach’s alpha (internal consistency) values for the four disposition scales were: charitable reading (CR, 11 items), .693; comfort with ambiguity (CA, 9 items), .560; resisting quick and easy answers (RQA, 9 items), .698; and pleasure in studying difficult texts (PDT, 9 items), .743. (One result of the furious pace at which I constructed the instrument was that, unnoticed by me as well as by my colleagues, the original learning goal of “pleasure in struggling with difficult ideas” was translated into “pleasure in struggling with difficult texts.” This will have to be corrected in future versions.)
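For readers who want to see the computation rather than take it on faith, Cronbach’s alpha can be derived directly from the item-response matrix. The sketch below uses simulated Likert responses sized to match our nine-item comfort-with-ambiguity subscale and 152 respondents; the alpha it prints is an artifact of random data, not our result.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 5-point Likert responses: 152 respondents x 9 CA items.
rng = np.random.default_rng(0)
ca_responses = rng.integers(1, 6, size=(152, 9))
print(f"alpha = {cronbach_alpha(ca_responses):.3f}")
```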

An independent samples t-test comparing the mean scores of self-identified philosophy majors with nonmajors on each of the disposition measures revealed that in each instance majors scored higher than nonmajors. While this was encouraging news, it is also possible that this might simply have been a result of self-selection. Individuals who identified themselves as majors might be people already likely to score higher on these measures. To control for this possibility, we ran an analysis to see whether there was a statistically significant relationship between the number of philosophy classes a student had taken and their disposition scores. If we saw a consistent positive correlation between these two variables, we might then be able to have more confidence that the difference we had already observed could be attributed to our work.
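Both analyses are standard fare for social scientists. Here is a minimal sketch with simulated scores; the group sizes are hypothetical, and nothing below reproduces our actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated disposition scores on a 1-5 scale (NOT our survey data).
majors = rng.normal(3.8, 0.5, size=30)       # hypothetical group sizes
nonmajors = rng.normal(3.5, 0.5, size=122)

# Independent-samples (Welch's) t-test: do majors score higher?
t, p = stats.ttest_ind(majors, nonmajors, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")

# Self-selection check: correlate the number of philosophy courses
# taken with disposition scores across all respondents.
courses = rng.integers(1, 8, size=152)       # 1-7 courses taken
scores = np.concatenate([majors, nonmajors])
r, p_r = stats.pearsonr(courses, scores)
print(f"r = {r:+.2f}, p = {p_r:.4f}")
```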

Unfortunately we encountered the problem of small sample size. The number of respondents declined precipitously as the number of philosophy courses they had taken went up. While 75 (49.3%) had taken one class, and 28 (18.4%) had taken two, only 11 (7.2%) had taken three classes, 6 (3.9%) had taken four, 10 (6.6%) had taken five, and 4 (2.6%) had taken six. That was followed by a cohort of 18 (11.8%) advanced majors who reported having taken more than six classes. As a consequence, the results of these analyses were inconclusive.

Two steps remain to make this approach to assessing our students’ learning effective. First, the disposition measures need to be refined. With the aid of both student research assistants in May 2010 and my colleagues in February 2011, we have already significantly improved the construct and content validity of these measures. The next step is for my colleagues and me to determine what we do, or might do, in our classes that could foster these dispositions. The aim is to construct a series of questionnaire items that would allow us to collect student reports on how frequently they observe the techniques we identify in their classrooms. Once those items are ready, we can invite students to respond to both the measures of classroom techniques and the disposition measures. The hypothesis to test is whether our techniques actually affect disposition scores. Correlation analyses would provide preliminary evidence; weak correlations would give us reason to think our initial ideas were mistaken. If the correlations are sufficiently strong, we could begin to cultivate greater intentionality with regard to these learning goals and teaching techniques. If the techniques we identify do foster the philosophical dispositions we are interested in, then by making a greater effort to incorporate them into our classroom practices we should see increased scores on the disposition measures in subsequent samples. If we do not see increased scores, then we will know that what we thought fostered these dispositions does not in fact foster them. In either case, we will have learned something useful in our efforts to improve our teaching.
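Once the technique items exist, the correlation step itself is routine to run. The sketch below is illustrative only: the technique names are placeholders I invented, and the data are simulated, not our survey results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 152  # respondents, matching our spring 2009 sample size

# Hypothetical student-reported frequency (1-5) of each classroom
# technique, and each student's score on one disposition scale.
techniques = {
    "close_reading_exercises": rng.integers(1, 6, size=n),
    "socratic_questioning": rng.integers(1, 6, size=n),
    "peer_argument_critique": rng.integers(1, 6, size=n),
}
disposition_score = rng.normal(3.5, 0.6, size=n)

for name, freq in techniques.items():
    r, p = stats.pearsonr(freq, disposition_score)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```

With simulated data the correlations will hover near zero; the point is the shape of the analysis, not the numbers.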

There may be some lessons here for humanists interested in pursuing assessment of student learning in their own disciplines. The first is that successful assessment may require us to expand our disciplinary horizons to incorporate some techniques from the social sciences. It might be something as simple as introducing sampling procedures and developing scoring rubrics that yield numeric evaluations. Assessing every essay our students write would be terrifically burdensome, and it is far more difficult to reach general conclusions from a collection of written comments than from numeric scores. But these two small steps create the possibility for effective assessment of students’ written work that is broad in scope and comprehensible to nonspecialists.
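To illustrate how small those two steps are, here is what sampling plus numeric rubric scoring might look like; the essay counts and the 1-4 scale are hypothetical, and the scores are random stand-ins for faculty judgments.

```python
import random

random.seed(42)
essay_ids = list(range(1, 201))        # hypothetical: 200 essays this term
sample = random.sample(essay_ids, 30)  # score a random sample, not all 200

# In practice these scores would come from faculty applying a shared
# rubric; random values stand in for them here.
rubric_scores = {eid: random.randint(1, 4) for eid in sample}
mean_score = sum(rubric_scores.values()) / len(rubric_scores)
print(f"Scored {len(sample)} of {len(essay_ids)} essays; "
      f"mean rubric score = {mean_score:.2f}")
```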

The second lesson follows from the first: humanists should seek to develop consultative, possibly even collaborative, relations with colleagues from the social sciences in developing their assessment strategies. The process of formulating and testing hypotheses about student learning is hard, unfamiliar, and daunting for us. But it can be straightforward, familiar, and manageable for our colleagues in education, psychology, and sociology. We need not develop their expertise, but simply learn to benefit from it. For some of us it might be difficult to go back to school in this fashion. For others, like me, it is a first-rate opportunity to learn something new and interesting. My experience (a sample of one, admittedly) is that when I approached colleagues in the social sciences for guidance and feedback, they were both generous and patient. If our current efforts to assess the development of philosophical dispositions among our students are successful, it is largely due to the guidance I have received from my colleagues with social scientific training.

Note

Based on a presentation with the same name given at the Assessment Institute in Indianapolis, Indiana, October 25–27, 2009.

Charles W. Wright is associate professor of philosophy at the College of Saint Benedict and Saint John’s University in St. Joseph, Minnesota.
