Outcomes assessment is an odd business. It is not to the credit of higher education that we have tolerated this external assault on our work. Its origins are suspect, its justifications abjure the science we would ordinarily require, it demands enormous efforts for very little payoff, it renounces wisdom, it requires yielding to misunderstandings, and it displaces and distracts us from more urgent tasks, like the teaching and learning it would allegedly help.

—John W. Powell

The core alleged insight of the “outcomes” philosophy goes like this. Traditionally, our schools and their functioning have been viewed in relation to “inputs”: money, equipment, training, increased instruction hours, and so on. We invest all of these things in our educational system, and then we expect it to produce educated students.
That’s all wrong, say the proponents of “outcomes.” We need to focus instead on the desired output—for instance, the kinds of skills and knowledge we want our kids to have at the end of their schooling—and then work backwards from there. It’s just logical! How do we get from where we are, point A, to where we want to end up, point B? That’s the $64,000 question. So our emphasis should be first on “outcomes,” which should be measurable, and second on how to achieve them.
(For a sympathetic overview of the "outcomes" philosophy, see here. For a very different overview that emphasizes the politics, see here.)
WAIT A MINUTE. Observe, however, that it is not obvious that this approach is workable. (In fact, states and nations that have embraced OBE have tended to find it unworkable and have tended to abandon it.) Can one step back from, say, my “Introduction to Philosophy” course and capture “that which I intend to teach my students” in a set of sentences of the form, “upon taking this course, a student will be able to...”? Among thoughtful and intelligent people, the notion will arise that this task will be difficult. Perhaps impossible.
When I step back from my Intro course and ask about the intended “takeaways” for students, I gravitate toward such things as:
• Rational confidence: I want students to begin to reject widely-held skepticism about the possibility of profitably thinking about and discussing/debating difficult (philosophical and other) issues. They need to begin to see that some ideas are more defensible than others and that a community of good thinkers can make progress reasoning about and discussing issues (rejection of the notion that any idea is as good as another).
• Epistemological moderation: I want students to begin to settle into a middle area between skepticism (the notion that there is no point in thinking/arguing for the sake of maximally justifiable positions, for none exist) and dogmatism (the notion that the truth is had and that, therefore, it should not be questioned or challenged).
• The particularity of our conceptions/forms of life: that our conceptions and traditions are properly understood in relation to history and that there exist and have existed a multiplicity of intellectual/moral traditions in the world of which ours is one example (please note that this is not necessarily an expression of “relativism”*).
• Etc.

When I consider these and other intended “takeaways” for my students, I am doubtful of the prospect of successfully reducing or capturing them in a set of “measurable learning outcomes” that can be listed on the syllabus. Happily, I am inclined to think that students’ exposure to my course and various other courses, assuming some effort at balance, will tend to produce the above “outcomes,” among others, unmeasurable (through, at any rate, SLOs) though they might be.
HOW TO IMPOVERISH TEACHING/LEARNING. It is true, of course, that, to some extent, takeaways from my Intro course are amenable to the “measurable outcomes” approach. But why would anyone assume that that subset represents what is most important in the course? Mightn’t one worry that that subset represents what is least important and that a focus on securing that subset would dumb down the course or rid it of intellectual depth or gravity?
And further: why should it be assumed that the ability to reduce the “takeaways” of a course (or a course of study, including a degree) to a set of discrete and “measurable” learning outcomes is necessary? That this reduction (or essence-capture) is available would, I suppose, be desirable. But that such a thing is desirable does not in any way imply that it is available or necessary.
It is no secret that we—our society, the Academy, et al.—have long debated the question, “What is a college education for?” But does anyone suppose that the unavailability of some received view, some firmly held consensus, means that our college students generally fail to be educated? (Well, obviously, in fact, some of them do fail. But many do not.)
* * *
FACTOIDS. Let’s be sensible. Toward that end, here are some factoids to consider:

1. Near as I can tell, there is no scientific evidence (i.e., studies that do the appropriate comparisons between systems that embrace OBE and systems that do not) supporting the efficacy or unique efficacy of OBE. (Please see John W. Powell’s Outcomes Assessment: Conceptual and Other Problems.) To be sure, proponents of OBE are able to produce enormous piles of studies and the like in "support" of OBE. But these tend to consist not of evidence that OBE (compared to anything else) works, but of evidence that someone somewhere has in fact implemented OBE. Such muddled thinking should not be surprising among "experts" who have never heard of the need for replicating study results.
2. Given the recent emergence of such tests as the PISA, we are living in an era of improved data concerning the efficacy of education approaches. It is more possible than ever to compare nations with regard to the success of their educational efforts. Guess what? The most successful countries do not embrace OBE. They embrace more traditional approaches.
3. It is no secret that our nation’s efforts at reform (in the nation’s system of elementary/secondary education) have been dismal. In fact, they have been characterized by (1) a gravitation to meretricious shiny objects (better equipment, smaller class size, educationist trends [whole language, self-esteem]) and (2) being repeatedly hijacked or undermined by political movements and interest groups. Despite spending enormous sums of money, our efforts have plainly failed. (And, no, the data do not suggest that poverty or diversity are insurmountable problems in establishing an effective school system. See what countries like Poland and Singapore have achieved, despite poverty or diversity.)
4. Thanks to muddled national and state politics, accreditors have been pressured to adopt OBE, that once-shiny object, even at the level of higher education. As a result, the accrediting agencies, including our own ACCJC, have indeed adopted OBE, and, consequently, colleges and universities across California continually dedicate vast monies and energies to writing SLOs (for programs, courses, etc.). In general, the “SLO” mandate is poorly received, especially by faculty. Consequently, institutions confront a choice between two options: (1) trying to make a bad system work, to the extent that that is possible (and it is typically judged to be not very possible), or (2) bad-faith implementation—i.e., simply giving our benighted accreditors what they want and then, to the extent possible, ignoring SLOs. In fact, the most common response to the SLO mandate is a mix of (1) and (2). (Some will dispute this. Some believe that the Earth is flat.) If so, our system is wasting vast amounts of energy and money on an approach that is very substantially inferior to previous approaches. Nice going, America.
This is a fiasco, a disaster. Under the circumstances, it is very odd indeed that there isn’t more push-back.
* * *
THE STATE SENATE. Community college “Academic Senates” represent faculty with regard to academic matters and are key players in so-called “shared governance” or “collegial consultation.” The Academic Senates of California’s community colleges long ago (1970) organized into the Academic Senate for California Community Colleges (ASCCC), commonly referred to as the “State Senate.”

As you know (see recent post), thirteen years ago, the State Senate tried to push back against the accreditors (ACCJC), challenging them to justify abandonment of old standards and embrace of the new "measurable outcomes" standards. It was a rare moment of good sense and spunk.
That got them nowhere.
The State Senate had its big Spring meeting last week, from Thursday until Saturday.
Saturday was reserved for “resolutions.” One of the resolutions that (as I understand it) was voted upon yesterday was #2.04 S15: “Justification of SLO Use.” Here it is:
Whereas, In the last 15 years, new attempts to track the success of school systems around the world (e.g., the Programme for International Student Assessment) have achieved impressive bodies of data useful in measuring the effectiveness of education approaches;
Whereas, These data indicate that the more successful countries do not embrace the notion of “measurable student learning outcomes” that are central to the Accrediting Commission for Community and Junior Colleges’ (ACCJC) existing standards for evaluating and reviewing institutions and the philosophy that emphasizes that tool; and
Whereas, It continues to be the case that research fails clearly to establish that continuous monitoring of course-level student learning outcomes (SLOs) results in measurable improvements in student success at a given institution but does engender frustration that continues to characterize community colleges’ attempts to implement the SLO approach;
Resolved, That the Academic Senate for California Community Colleges request no later than July 1, 2015 that ACCJC justify its continued implementation of SLOs and explain why it does not opt for approaches more consistent with the approaches of successful countries in educating their students.

I don’t yet know the outcome of the vote. I'll get back atcha when I do.
P.S.: Yesterday (4/13) I contacted someone who attended the Plenary and asked her what happened to our/my resolution. She wrote back, explaining that
I’m still trying to figure it out (in terms of bigger picture): In Area D it was accepted, someone from another Area “improved” it by adding a due date; Area D left it on the consent calendar. People from Area D & other regions came up to me in the hall to praise it. And then, somehow, it failed at the vote. I didn’t understand the opposition. The arguments against it seemed to be small detail stuff....

Pretty disappointing.
*That there are multiple "traditions" does not in itself imply that they are necessarily equal or incommensurable. Such notions, of course, are matters of controversy.