Nonprofit Business Advisor, Strategies to Survive and Grow in Tough Times

Focus on the Bottom-Line: Assessing Business Writing

By Michael Cherry and George Klemic, April 17, 2013

Introduction

The authors are members of the Business Administration Department of the College of Business (COB) at Lewis University in Romeoville, Illinois. Lewis is a private, Catholic, Lasallian institution comprising four colleges. The COB has seven undergraduate departments; business administration and marketing are its two largest. Lewis University has over 6,000 students, 651 of whom are undergraduates in the COB. The Business Administration Department has 292 students, and Marketing has 68; together, these two departments account for over 55 percent of the COB’s undergraduate enrollment.

During the 2010 fall semester, the authors and two partners (the team) engaged in an assessment process to determine student achievement in writing and level of improvement as students progress through the business programs. Findings from this assessment were presented at the 2011 Assessment Institute in Indianapolis. This article reflects the presentation and describes the assessment process, results, suggestions for remedial actions, and next steps.

Project Goals

The COB strives to improve its quality continually, in part by assessing student learning outcomes, analyzing the results, identifying steps that might raise quality, testing those steps, and implementing the ones that prove effective. To this end, the COB maintains a five-year assessment plan that calls for assessment of the content areas of the business disciplines and of parts of four of the university-wide baccalaureate characteristics (BCs). Writing is one of the BCs. The assessment plan is congruent with the missions of Lewis University and the COB and with the requirements of the university’s and COB’s accreditors.

Over a two-year period, the team established a writing assessment plan, including a timeline, development of a rubric, identification of suitable assignments already in place, execution of the assessment, analysis of the results, creation of recommendations, and communication with faculty. The team’s goals were to determine the level of student achievement for a meaningful sample of COB students and to determine whether the programs appeared to add value.

Writing Assessment Initiative—Rubric and Assignments

The team examined writing rubrics currently used by the faculty and explored externally available rubrics. It ultimately created a ten-category rubric based largely on a popular writing model used in an upper-division writing course, Business Communications. The team designated rubric scores corresponding to C-level performance as acceptable.

The team identified suitable embedded assignments in principles of management and principles of marketing courses, the entry-level courses in the business administration and marketing programs, and in the business communications course, the upper-division writing course in both programs. The team tested the rubric on a sample of assignments the semester before assessment to ensure the efficacy of the rubric and to calibrate the internal consistency of the team’s ratings.

The Assessment: Findings

The team assessed papers in the classes described earlier. Addressing the project’s first goal, the results suggested that, based on average scores, seniors achieved an acceptable level of performance in writing, with 85 percent earning an acceptable score. Addressing the second goal, 75 percent of entry-level students earned an acceptable score. These findings suggest that students’ performance leaves significant room for improvement but that the programs do appear to add value.

What Was Learned

Upon reflection, the team was not surprised by the results. It had anecdotally perceived students to be adequate, though not great, writers. As the team prepared to take the findings to the COB faculty, it tried to identify specific student issues that might be addressed as well as faculty concerns. The team also recognized that it should examine its performance as assessors.

Student Issues

Three sorts of writing deficiencies led to unacceptable ratings often enough to deserve separate, systematic attention: “mechanical” issues, attentiveness issues, and coherence issues. Students sometimes failed to use the proofreading cues available in the software used to produce their papers. Many students failed to follow the format requirements specified by their instructors. Many students either wrote in a stream-of-consciousness manner or assumed facts not in evidence as they wrote, with these deficiencies yielding untenable conclusions.

Of particular interest was the concentration of errors in the papers of students who scored below the acceptable level. The “Concise” and “Courteous” categories showed few errors across all student papers, but the “Unacceptable” papers showed a great number of errors in all of the remaining categories. The team wrestled with how to address these many elements.

The team’s discussion of the nature of writing was intense. It had to ask whether “good writing” is a holistic, summative phenomenon or a “divisible,” analytic one. The team never fully reached consensus on this issue. Instead, it decided to treat student performance in writing in a divisible way, provided that certain consequences were built into the procedures, as explained below.

One sticking point in the summative/analytic discussion revolved around the elements “Coherent” and “Correct.”

Correct: Message must include proper spelling, grammar, punctuation, and format.

Coherent: Message needs to hang together so that ideas flow from one to the next through smooth transitions.

The team noticed a flaw in the rubric used for this assessment: it was possible for a student to achieve an acceptable aggregate score even if some of the key elements of good writing, such as coherence, received a less-than-acceptable score. That is, in some cases students’ work was judged not to be coherent and/or correct, yet their papers received a passing score. The team found that difficult to accept. It decided to deal with the situation by scoring and grading papers in a divisible way and by recommending that mandatory Writing Center referrals be added to the rubric, syllabi, and course practice for COB students who received less-than-acceptable scores for “Coherent” and “Correct.”
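To see why an aggregate score can mask such failures, consider a minimal sketch in Python. It is purely illustrative: the category names beyond the four mentioned in this article, the 0–4 scale, and the 2.0 cutoff are assumptions, not the team’s actual rubric values. An equal-weighted average over ten categories can pass a paper whose “Coherent” score falls below the acceptable level; a gating check of the kind the team recommended flags the same paper for a Writing Center referral.

```python
# Hypothetical illustration of the rubric flaw described above.
# Scale, cutoff, and most category names are assumptions for this sketch.

CATEGORIES = [
    "Correct", "Coherent", "Concise", "Courteous",   # named in the article
    "Clear", "Complete", "Concrete",                  # assumed for illustration
    "Organization", "Format", "Tone",                 # assumed for illustration
]
ACCEPTABLE = 2.0                       # assumed C-level cutoff on a 0-4 scale
CRITICAL = {"Correct", "Coherent"}     # categories treated as gating


def aggregate_score(scores: dict[str, float]) -> float:
    """Equal-weighted average across all rubric categories."""
    return sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)


def evaluate(scores: dict[str, float]) -> dict:
    """Return the aggregate result plus any mandatory referral flags."""
    referrals = [c for c in CRITICAL if scores[c] < ACCEPTABLE]
    return {
        "average": aggregate_score(scores),
        "passes_average": aggregate_score(scores) >= ACCEPTABLE,
        "writing_center_referral": referrals,  # non-empty list => referral
    }


# A paper that is strong in most categories but incoherent:
paper = {c: 3.0 for c in CATEGORIES}
paper["Coherent"] = 1.0    # below the acceptable cutoff

print(evaluate(paper))
# The average (2.8) still passes, which is exactly the flaw the team found;
# the gating rule nonetheless triggers a referral for "Coherent".
```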

Faculty Issues

The team found that the use of rubrics improved grading and provided a teachable moment that crossed curricula. This finding has led to the creation of a common writing rubric (discussed further below) that can be used across all COB disciplines. In addition to a common rubric, it was suggested that the COB faculty standardize and adopt several writing assignments that could be used early in the semester, allowing early identification of students who need support from the Writing Center. The writing rubric includes faculty prompts to direct students with specific performance deficiencies (“Coherent” and “Correct”) to the Writing Center.

The team also learned that a number of student writing errors could be easily corrected with greater attentiveness to both digital tools and prompts provided by the instructor, with the scoring rubric being one of the potential prompts. The team wondered if instructor provision and/or explanation of various prompts might positively affect student writing and resolved to test this in the following year.

Finally, the team sensed a relationship between the effectiveness of the faculty writing prompt/writing assignment directions and the work product of the students. The team had a similar feeling two years earlier when critical thinking was assessed. The team decided to recommend in its report to the faculty that colleagues give students more focused writing assignments.

Assessor Issues

As assessors, team members identified several areas of concern. Members discussed the possibility that a higher “Acceptable” score should have been adopted. There is a strong suggestion in the goal-setting literature that goals should be achievable but must be challenging to motivate greater effort (Locke and Latham 1990). When presented later with this issue, the faculty opted to retain the existing Acceptable score but increase the targeted percentage of students achieving that level.

Another issue involved the effective overweighting of several factors caused by some redundancy in the rubric. Further, the equal weighting of all the factors assessed created a philosophical dilemma: the rubric’s scoring features as originally designed might allow an “incoherent” paper to receive a passing score. These shortcomings led the team to amend the rubric.

In addition, discussion led to the thought that a writing assessment could be integrated into the COB admissions process. Were this to occur, a single writing prompt might be developed for use in assessment at all student levels.

The team also recognized that the sample size was small and acknowledged that “the larger the sample, the better” (Leedy and Ormrod 2005, 207). The sample might simply be increased at the next assessment cycle time, or the team might add one or more mini-assessments between cycle time assessments.

Faculty Decisions Relative to the Team’s Recommendations

The team presented recommendations to the faculty, secured feedback and direction, and adjusted plans for the future accordingly.

The faculty decided to retain the existing passing score for the assessment but to increase the desired passing percentage for exit students to 88 percent. Faculty members accepted the amendments to the rubric and agreed to test it themselves. They liked the idea of early diagnosis and direction of students to the Writing Center, but a decision about automatic Writing Center referrals was delayed until results of assessment using the revised rubric could enable better estimates of the volume and timing of these referrals. Faculty members also indicated that future writing assessments should score only COB students. (A few non-COB students in the sample were taking the courses as electives.)

Conclusions

The team set out to learn about students’ accomplishment as writers. This was the third attempt to assess a baccalaureate characteristic as defined by the Lewis faculty. As with the two prior attempts, the team developed standards and applied them to student papers using a rubric. Just as “Net Income,” the bottom line of an income statement, serves as an indicator of company performance, this assessment has provided an indicator of student performance. The assessment helped to identify and inform faculty of successes and shortcomings in student writing and allowed identification of next steps in the effort to improve it. These results, coupled with the assessments of the use of quantitative and qualitative models, of critical thinking, and of oral presentations (to be assessed next), and further complemented by three years of results from the ETS Major Field Test in Business, should provide significant information for evaluating student learning performance and adjusting plans and resource commitments accordingly.

Acknowledgment

The authors sincerely thank the other members of the team, Maureen Culleeney and Laura Leli-Carmine.

References

Leedy, P. D., and Ormrod, J. E. 2005. Practical Research: Planning and Design (8th ed.). Upper Saddle River, N.J.: Pearson Education.

Locke, E., and Latham, G. 1990. A Theory of Goal Setting and Task Performance. Englewood Cliffs, N.J.: Prentice Hall.

Michael Cherry is academic coordinator of Adult Business Programs and George Klemic is associate professor in the College of Business at Lewis University in Romeoville, Illinois.

