(Originally published in the July-August issue of Assessment Update, available electronically to subscribers at publication.)
The quality assurance model at our institution focuses significant energy on the course examination process. Our institution, located in Mainland China, is a liberal arts college of approximately 4,000 students where English is the medium of instruction. The college, now in its sixth year of operation, relies on its Hong Kong affiliation to assist in the quality assurance of the examinations. In the Hong Kong (British) model of education, the examination should assess the learning outcomes for the course comprehensively, and the examination generally constitutes 50% or more of the course grade. The pedagogical model presumes an emphasis on open-ended questions (short answers and essays). However, faculty grudgingly acknowledge the practicality of including closed-ended questions on examinations in large classes. Additionally, every section of a course must share a single examination. The shared examination model has the potential for outcomes-based learning, with questions addressing comprehension, application of knowledge, and problem-solving; however, we are most interested in sharing the quality assurance model’s emphasis on collegiality, openness, and higher-order learning.
Following the timeline established by the quality assurance and examination divisions of our academic registry, faculty members begin designing their examinations around the fourth week of instruction and submit them by the beginning of week seven or eight. Once an examination is drafted, two or three individuals read it and provide advice on the questions and format. First, a knowledgeable colleague, ideally one capable of teaching the course, reviews the examination and completes the initial vetting form. Second, the programme coordinator (equivalent to a department chair) reviews and provisionally approves the examination. For the past two years, the practice has also included sending the drafts to an external examiner in Hong Kong. At each level, the reader provides feedback to the writer. The exam writer may revise or respond to the comments on the form; however, the programme coordinator and the external examiner can require revisions. Most experienced American faculty members at our institution take offense at this “micromanagement,” “interference,” or “violation of academic freedom.” We offer four examples of the peer-engaged process in public relations and advertising, drawn from a single semester, to demonstrate the value of this examination model in improving teaching, learning, and assessment practices.
A lecturer in his first semester of teaching approached one of the authors for assistance in drafting his examination for Consumer Behavior. The colleague was focused on the details of writing questions and using the test bank rather than on matching the test questions to the outcomes. Consequently, the initial discussion centered on matching examination questions to course learning outcomes, especially on which outcomes could be assessed with short answers versus essays.
The discussion of how to assess the outcomes led to the discovery of a gap in course content. The final course outcome, on using research data, had not been incorporated directly as a lecture topic or in any of the drafted assignments. The senior colleague proposed two solutions: (1) treat the use of research data as applying knowledge and skills from other courses, since the students enrolled had completed an introductory research class and were concurrently taking advanced research methods; or (2) add content and a project assignment to address the research outcome. The junior colleague chose to revise his future lectures and assignments because he felt more comfortable assessing the outcome based on direct instruction. This example illustrates how drafting the examination early and getting feedback from colleagues helped identify and rectify a disparity between course content and learning outcomes. It also demonstrates the success of a collegial and open process.
Our second example involves a more complicated problem. A senior colleague joined our staff with over thirty years of teaching experience (K–12 and university level) and taught a section of Advertising Copy Writing. We informed this individual of problems with the learning and teaching in the section of Principles of Advertising that the students had taken the year before, and we suggested that the teacher not presume prior knowledge of those principles but begin with a review of them. When the process began, the colleague complained that it was too early to write the examination. The first draft, completed by the deadline, was based on the content already covered—primarily the review of the principles rather than copy writing itself. Based on feedback from the second author, the colleague revised the exam to assess knowledge about writing but, despite strong recommendations, did not assess writing skills. Upon review, the external examiner raised concern about the lack of copy writing itself on the examination—a major outcome of the course. The colleague made token revisions and argued, based on her experience at American universities, that the writing outcomes were assessed in the course’s writing assignments, not the examination. The senior colleague continued to lament the interference in her teaching and the violation of her academic freedom. There is no happy resolution to this example; the teacher was assigned skills courses without examinations the next semester to allow her to work more independently.
The examinations submitted to the review process by a third colleague raised major concerns about teaching and learning. This assistant professor had both industry and teaching experience and presented two examinations for peer and supervisory review. The questions on both examinations focused on knowledge and recall, even though both were upper-level courses. The first examination, in Public Relations Writing, demonstrated significant flaws in question design. For example, short-answer questions were equally weighted in points possible but asked for significantly different amounts of information (list and define three things in one question and seven in another). The second examination, in Advertising Agency Management, a capstone course, lacked any big-picture questions and instead focused on details; it had no questions demanding application, let alone analysis, synthesis, or evaluation.
Apparently the draft examinations had not been truly vetted by a colleague but simply signed off by an office mate. Consequently, the examinations were rejected at the programme coordinator level. After several revisions, it became apparent that the instructor lacked the ability to revise the examinations quickly by adding questions at this higher level. One of the authors revised the writing examination so that questions required a consistent number of items per response for the same number of potential points. The instructor attempted to reject these revisions because the answers were necessarily incomplete (three steps of a five-step process). The programme coordinator and external examiner approved the quality and equity of the new questions and anticipated answers. In the capstone course, questions that asked for application and analysis effectively became knowledge-based questions because the model responses had been presented to the students in lectures. As a result of this process, the instructor did not continue teaching upper-level courses requiring higher-level learning outcomes. The outcome of this process, as in the earlier example with the first-time teacher, ended up being summative rather than formative.
We offer a fourth example of the peer-engaged process, one that illustrates our aspirations for working with colleagues on drafting, revising, and finalizing examinations. Quite simply, we wrote each other’s course examinations based on our working preferences, training, and skills. Both authors are experienced educators qualified to teach both courses. The first author’s academic experience emerges from the humanistic tradition and interpretive qualitative methods; the second author’s comes from the quantitative social sciences, with emphases in statistics and measurement. The first enjoys thinking through and writing essay questions; the second enjoys specific detail and concrete answers. The first author taught a communication theory survey with outcomes stated in key words like recognize, know, and remember. Because the course enrolled more than 150 students, an objective examination was allowed. The second author taught a senior-level course in advertising with outcomes measuring analysis, synthesis, and evaluation—thus requiring essay questions. Each of us had content expertise in both courses but a personal preference for writing the other format of questions, so each wrote the other’s exam using the course materials (e.g., textbook, PowerPoint slides, and handouts). In both cases, the examination assessed the complete breadth of the course learning outcomes and measured student mastery. The effort required to write these exams provided mental satisfaction rather than frustration.
Based on our four examples, we have some suggestions for implementing a peer-engaged examination process. We describe the process as engaged because it is best practiced by individuals who want to collaborate for the good of their students and major field. We prefer not to describe it as a “review” process because of the defensiveness implied, as well as a fear that some institutions would use our experience to justify requiring collaborative work. Our recommendations follow:
1. Interested faculty members should meet with colleagues in week four or five of a semester to discuss how to assess the outcomes on a course or final examination.
2. The conversation needs to focus on the end goals of the class, but is also likely to provide an opportunity to consider alternatives for assignments in addition to examinations.
3. In an American context, the collaborating faculty members can also discuss whether particular outcomes can be assessed most effectively within the examination model, since most universities do not mandate comprehensive assessment of learning outcomes in a single measure.
4. The collaborators should discuss their comfort or discomfort with completing the writing tasks associated with preparing an examination that genuinely assesses the learning outcomes.
5. The process should allow colleagues to collaborate in a forum that makes every member feel engaged, respected, and valuable.
6. The process should close with simple rewards: Say thank you and compliment colleagues for their strengths.
In addition to these six recommendations, we have some basic conclusions to share:
Generally, this process can improve teaching and learning by correcting disparities among learning outcomes, course content, and assessment models. This process can also improve the match between instructor strength and assessment models. Our essay provides the basic background and justification for engaging in peer review of draft examinations.
Bradley A. Gangnon is an instructor at the Minnesota School of Business in Plymouth, Minnesota, and Constance C. Milbourne is associate professor of Public Relations & Advertising at BNU-HKBU United International College in Zhuhai, Guangdong Province, China.
©John Wiley & Sons, Professional and Trade Subscription Content