By Gayle Greene
The Terrible Tedium of “Learning Outcomes”
Accreditors’ box-checking and baroque language have taken over the university.
Every six years, the accountability police swoop down on my campus in the form of WASC, the Western Association of Schools and Colleges. The West Coast accreditation organization comes to Scripps, as it comes to all colleges in our region, to do our reaccreditation. The process used to take a couple of months, generating a flurry of meetings, self-studies, and reports to demonstrate we were measuring up. We’d write a WASC report — “wasp,” we called it, for the way it buzzed around making a pest of itself.
The WASC committee would come to campus, stirring up much hoopla and more meetings. They’d write up a report on our report, and after their visit, we’d write a report responding to their report on our report; the reports would be circulated, and more meetings would take place. Then it was over, and we could get back to work. It’s fairly pro forma with us; Scripps College runs a tight ship.
At least that’s how it used to be, just one of those annoying things to be got through, like taxes. Now that the reaccreditation process has become snarled in proliferating state and federal demands, it’s morphed from a wasp into Godzilla, a much bigger deal — more meetings, reports, interim reports, committees sprouting like mold on a basement wall. WASC demands that we come up with “appropriate student-outcome measures to demonstrate evidence of student learning and success,” then develop tools to monitor our progress and track changes we’ve made in response to the last assessment.
There are pre-WASC preps and post-WASC post mortems, a flurry of further meetings to make sure we’re carrying out assessment plans, updating our progress, and updating those updates. Every professor and administrator is involved, and every course and program is brought into the review. The air is abuzz with words like models and measures, performance metrics, rubrics, assessment standards, accountability, algorithms, benchmarks, and best practices. Hyphenated words have a special pizzazz — value-added, capacity-building, performance-based, high-performance — especially when one of the words is data: data-driven, data-based, benchmarked-data. The air is thick with this polysyllabic pestilence, a high-wire hum like a plague of locusts. Lots of shiny new boilerplate is mandated for syllabi, spelling out the specifics of style and content, and the penalties for infringements, down to the last detail.
. . .
Then the boxes with “comments, results, and summaries” are to be incorporated into an Educational Effectiveness Review Report. “By applying the rubric to last year’s senior theses enables you to evaluate both the rubric and your results to help fine-tune the assessment of this year’s theses.” (That sentence is why some of us still care about dangling participles.) This is all written in a language so abstract and bloodless that it’s hard to believe it came from a human being. But that is the point: phasing out the erring human being and replacing the professor with a system that’s “objective.” It’s lunacy to think you can do this with teaching, or that anyone would want to.
. . .
Do not think I am singling out Scripps College for special criticism. From what I’ve heard, it’s as bad or worse elsewhere. I think most of our faculty see our dean and president as indefatigable women who work for and not against us and genuinely respect the liberal arts. This outcomes-assessment rigmarole has been foisted on all colleges, adding a whole new layer of bureaucratic make-work. Reports and meetings bleed into one another like endless war. Forests die for the paperwork, brain cells die, spirits too — as precious time and energy are sucked into this black hole. And this is to make us more … efficient? Only in an Orwellian universe. This is to establish a “culture of evidence,” we’re told. Evidence of what? Evidence of compliance, I’m afraid.
. . .
Outcomes are “what a student must be able to do at the conclusion of the course,” explains an online source, and in order to assure these, it is best to use verbs that are measurable and that avoid misinterpretation. Verbs like write, recite, identify, sort, solve, build, contrast, prioritize, arrange, implement, summarize, and estimate are good because they are open to fewer interpretations than verbs like know, understand, appreciate, grasp the significance of, enjoy, comprehend, feel, and learn. This latter set of verbs is weak because the words are less measurable, more open to interpretation.
. . .
“Academics are grown-up people who do not need the language police to instruct them about what kind of verbs to use,” wrote Frank Furedi in a blistering denunciation of “learning outcomes” in Times Higher Education in 2012. Warning faculty against using words like know, understand, appreciate because “they’re not subject to unambiguous test” is fostering “a climate that inhibits the capacity of students and teachers to deal with uncertainty.” Dealing with ambiguity is one of the most important things the liberal arts can teach.
. . .
We in the humanities try to teach students to think, question, analyze, evaluate, weigh alternatives, tolerate ambiguity. Now we are being forced to cram these complex processes into crude, reductive slots, to wedge learning into narrowly prescribed goal outcomes, to say to our students, “Here is the outcome, here is how you demonstrate you’ve attained it, no thought or imagination allowed.”
Find the whole article HERE
* * *
- The Misguided Drive to Measure ‘Learning Outcomes’ (NYT, Feb. 23, 2018)
- Learning Outcomes Are Corrosive, by Frank Furedi (January 2013)
- DISSENT! Contra anti-intellectualism (DtB, April 18, 2015)
- Fall of the house of Beno (DtB, December 10, 2015)
- Part 2: the fall of the house of Beno (obscuring the deeper issue) (DtB, December 23, 2015)
1 comment:
I saw "useless, baseless bullshit, wasting time and money, with no end in sight" and thought it was going to be a post about Eliot Stern.