

April 2, 2008

Assessment and general education: resisting reductionism without resisting responsibility

By Stanley N. Katz

Editor's note: Following is the text of an address given by Stanley N. Katz, Lecturer with the rank of Professor in Public and International Affairs in the Woodrow Wilson School, at a conference of the Association of American Colleges and Universities in Boston Feb. 23, 2008.

How should we think about the problem of assessment of general education in the pluralistic environment of American higher education? The two extreme approaches would be, on the one hand, to attempt to assess the entirety of the student learning experience over the four years of a collegiate education; and, on the other, to deny that such an attempt was either possible or desirable. There are probably compelling arguments to support each of these two polar positions, but as an admirer of Aristotelian moderation, I am going to try to locate an intermediate position this morning.

Let me simply mention some of the obvious factors that make generalizations about longitudinal collegiate assessment difficult, if not deeply problematic. The first is the incredible range of types of four-year institutions, ranging from colleges (public and private, secular and religious, liberal arts and vocational) to universities (public and private, secular and religious, variously configured). Even if we stipulate that we are primarily interested in four-year liberal arts institutions, almost all of the colleges and universities in the universe lay claim to this designation, however plausible their claims may be. It is a wonderfully messy and variegated universe. Apart from their differing educational structures and missions, the institutions differ in access to resources, geographical locations, faculty and much more. The second is that there is no modal "student" in American higher education. Public discussions of higher ed frequently imagine student bodies of recent high school graduates, mostly native-born, attending school full-time, graduating in not much more than four years, living on or near campus and seeking a liberal education – an image that is truest of elite colleges and universities, but not fully accurate even for them. It is certainly not the case in most institutions, in which the majority of students are part-time, many attending two or even three institutions before gaining a degree, and for whom liberal education is not remotely a personal goal.

If we cannot readily generalize about the universe we would like to assess, how do we even begin thinking about the problem? Here, again, there are several possibilities. At one pole we could take a deep breath, as many have done, and assert that whatever the institutional differences, the relevant group in each institution is the students, so students are the constant, and student learning is what we need to measure. At the other pole, we could throw up our hands and say that both students and institutions are so different that there is no point to making comparisons that are inevitably artificial at best and meaningless at worst. But even then we might conclude, as I am tempted to do, that such objections do not hold for intra-institutional assessment (or perhaps assessment within peer groups of institutions).

But I realize that there may be objections to any possible attempt at institutional assessment, along the lines of those arising in reaction to the original orientation of the Spellings Commission, when mandatory national outcome assessment through standardized tests was being touted as the reform of the future in the name of educational accountability. As we now know, the commission ultimately backed off what many of us thought an extreme and ill-considered recommendation, but the specter of a Spellings future turned many educators into adamant opponents of any sort of outcome assessment at all. The rationale for these objections needs to be taken seriously, however: it seems to hold that there is something inherent in liberal education itself that is hostile to formal assessment. From this point of view, liberally educated students are works of art in progress, even upon graduation, so that we will not be able to know whether they have truly been liberally educated for many years – if ever.

These are a few of the obstacles that I have considered as I accepted the ill-considered invitation of the AAC&U to deliver this plenary – ill-considered, since it is not clear that I know enough about assessment to speak confidently to an audience of serious educators. But ignorance has never stopped me before (I was, after all, generally-educated at an elite institution), and here I go, so buckle your seat belts.

I WANT TO BEGIN BY BRIEFLY SKETCHING my operating assumptions. Then I will backtrack in order to explain why I have made them. I will refer generally to "higher education," but you will have to accept that my personal experience is almost entirely in elite universities, and my secondary knowledge is primarily of liberal arts colleges. I try to visit a much broader range of institutions each year as I give lectures and seminars, but I simply cannot claim personal knowledge of the incredibly broad terrain that is American higher education.

I am firmly committed to the view that we need to attempt to assess the effectiveness of liberal education, but "liberal education" means different things to different people, and different sorts of institutions have quite different formal duties of accountability (state versus private, religious versus secular, for instance). I refuse to accept the commercial-consumerist assumptions that chairman Charles Miller of the Spellings Commission and many others apply to higher education, but I also reject the notion that the higher-education community as a whole does not share important values and goals. I am not yet convinced that there is any adequate way to assess across this vast universe, but my mind is open on the subject. I do, however, feel fairly sure that we can assess across comparable groups of institutions – and that both the community and the individual institution would benefit from the effort.

I am very much committed to the idea that individual institutions can and should attempt to determine how much and how well their students have learned, but it is not clear to me how best that could be done. I take this view since I think that each institution has a responsibility both to itself and to the broader educational community to learn to what extent it is meeting its educational goals. My instinct, therefore, is that institutional self-accountability is what matters most. We need to be able to say to ourselves (as faculty members and administrators) that we are clear both about our objectives and about whether we are achieving them. From this point of view, the most important use of assessment is the re-evaluation of curriculum, extracurriculum, and anything else we deem relevant to the student learning experience. I suppose this means, in simpler language, I am committed to what in the schools we call formative assessment.

SO THAT IS WHERE I AM, and now I will try to work out how I got here. It starts, frankly, with an adverse intellectual reaction on a beautiful June day in Princeton. I am on a committee that each year selects four high school teachers for recognition as the best in New Jersey. We present their awards at the University commencement service, just as we award honorary degrees, and it is a simply wonderful experience for the teachers – as it is for the whole University community. I always have breakfast with the teachers, then retreat to my office to work, planning to meet them again at the splendid luncheon President Shirley Tilghman gives for the trustees, honorary-degree recipients, and the teachers. In order to know when the commencement service is over, I bring up the streaming video Webcast silently on my computer, and then turn up the sound when the president gives her address. Last June I was just beginning to think about what I would say to you today, when I realized that Shirley was in fact talking about assessment. I turned up the sound further, and was pretty perplexed to hear what she had to say.

She began by acknowledging that for the past couple of years assessment of collegiate education had become a political hot potato (not her term, I assure you), and she particularly mentioned that the Department of Education would "for the first time in American history impose external measures of student learning – in other words, standardized testing – on colleges and universities." She acknowledged that the parents in the audience might wonder what was so wrong with that, for after all, "Our faculty spend a significant percentage of their time assessing student learning and providing feedback to students." But, she argued, "a federally mandated standardized test … to measure learning flies in the face of one of the greatest strengths of the U.S. education system – the tremendous diversity among universities and colleges." She went on to explain that one of the glories of the American system was that "for each college-bound student, there is a college or university designed with his or her talents and interests in mind." The result of the Spellings proposal, she argued, would be "homogeneity bred by standardization [which] would almost certainly drain color and vitality from this rich national tapestry. Where we see our students as prime numbers, standardization see[s] them as elements of the least common denominator." (It is great having a scientist as one's president!)

President Tilghman also argued that the suggested standardization would imperil academic freedom. But her bottom line was that "When applied from outside the academic community, standardized testing as a means to assess student learning jeopardizes the freedom that universities need to craft their educational programs and fulfill the individualized goals of our own students." Her basic argument, harkening back to the educational ideas of Woodrow Wilson a hundred years ago, was that the "spirit of learning" that Princeton has long aimed at does not "lend itself to standardized testing." Besides, our seniors "participate in what we believe is the most rigorous test of all – the writing of a comprehensive thesis on the completion of a major independent research project" [the thesis, required of all Princeton seniors]. But Shirley was not giving up on outcome assessment. She contended that "when it comes to the question of ‘How do you know you are providing your students with a good education?' my answer is as follows: ‘We can't really know until their 25th reunion, because the real measure of a Princeton education is the manifold ways it is used by Princetonians after they leave the University.' " And then she described the striking accomplishments of six very distinguished members of the Class of 1982.

As I told Shirley in an e-mail message the next day, I thought she was quite wrong on a number of counts, but before I place them before you, let me describe a strikingly similar speech given by another friend and university president – the inaugural address of Drew Gilpin Faust as president of Harvard on Oct. 12 of last year. This was a very elegant speech built around the question of what accountability means for an institution like my alma mater, Harvard. Faust acknowledged that universities must be accountable, but urged that they must "seize the initiative in defining what we are accountable for." She detailed the many measures the college uses to assess the "value added" of the college experience, "but such measures cannot themselves capture the achievements, let alone the aspirations of universities … our purposes are far more ambitious and our accountability thus far more difficult to explain." The point, she went on to say, is that "The essence of a university is that it is uniquely accountable to the past and to the future – not simply or even primarily to the present. … It is about learning that molds a lifetime, learning that transmits the heritage of millennia; learning that shapes the future. … Universities make commitments to the timeless, and these investments have yields we cannot predict and often cannot measure. … We are uncomfortable with efforts to justify these endeavors by defining them as instrumental, as measurably useful to particular contemporary needs. Instead we pursue them in part ‘for their own sake,' because they define what has over the centuries made us human, not because they can enhance our global competitiveness." In the end, Faust concluded that forward- and backward-looking accountability represent "at once a privilege and a responsibility. … We need better to comprehend [Harvard's] purposes – not simply to explain ourselves to an often-critical public, but to hold ourselves to our account. … We must regard ourselves as accountable to one another." And also, she added, "Accountability to the future encompasses special accountability to our students," though she did not quite explain what that means or how it is to be done.

I apologize for citing so much of these two speeches, but they strike me as very important statements, and indications of what is both best and most worrisome in the response of the elite institutions to the current challenge to be accountable. I hope I do not have to say that I consider Shirley Tilghman, with whom I have worked for the past decade, and Drew Faust, whom I have known as a fellow American historian for a much longer period of time, to be two of the leading intellects and institutional leaders in our profession. But so far as I can determine from these texts, they have both dismissed the possibility of serious, systematic, ongoing assessment of current undergraduate education. Both of them, whatever they intended, are likely to be heard by those in other sectors of the higher-educational system (not to mention the general public) as saying that a good college chooses its students well, exposes them to great teachers, assesses their work in courses and capstone exercises, and then sends them forth in the hope and expectation that most of them will be good and useful citizens.

I am not really sure how, philosophically speaking, one is "accountable to the future," but when President Faust speaks in such terms she reminds me a lot of President Tilghman cherry-picking the 25th-reunion class to demonstrate that a Princeton education works. I do not think that either argument is convincing. Clearly every president is entitled to speak for her own institution, but all educational leaders (and especially those of the elites among us) need to speak with the entire educational community in mind. I think we need to be accountable to the present.

SO IF WE CONSIDER that it might be useful to evaluate a four-year college education, what would the necessary parameters of assessment be? Clearly we would need to identify measurable proxies for the student educational experience, for we would need some fixed points to compare over time. Ideally, we would be able to identify proxies that exist across all forms of liberal education, in order to be able to compare across institutions, should we want to do so. The more subjective the proxies (and thus the more difficult to measure with a high degree of confidence), the less useful they will be. The more objective (and easily measured) the proxies, the less subtle and meaningful they may be – so the specification of fixed points to measure is the most significant challenge.

On the one hand, most advocates of liberal education will want to deny that a primarily content-based evaluation of what seniors have learned represents an assessment of the totality of their educational experience. The graduating history or biology major should obviously be able to demonstrate considerably greater subject-matter competence than the student entering the major. Nor would we think that the senior's comparably broader general knowledge, beyond her field of concentration, is a proxy for liberal education. But these are the aspects of student learning that are most easily and objectively measured, and the ones normally assessed.

Our unwillingness to accept these fairly objective proxies is derived from our commitment to the notion that liberal learning has more to do with the cultivation of qualities of mind, the capacity to recognize and analyze significance, than with the mastery of any quantum of information. Most of us here would accept Carol Schneider's formulation that "the question is not what courses the student [has] completed, but the habits of mind, breadth of perspective, and the actual capacities the student is developing" (3 Oct., p. 6). What the liberal educator seeks is the capacity to recognize meaningful problems, to identify the information and modes of analysis necessary to address the problems, and the instinct to bring these to bear in problem-solving. One way to express this is with Howard Gardner's deceptively simple formulation, "learning for understanding" – the point of learning is to be able to understand, and the objective is to enable the student to use his understanding to solve the problem he is addressing. This is obviously much more difficult to measure, although we have been constructing a variety of tests and other assessment exercises to try to assess these capacities with some precision.

But here is where many right-minded skeptics will assert that the very attempt to measure learning outcomes is likely to stifle the student's creativity, since the forms of assessment create incentives to mimic what the student assumes the assessors seek. This is a nontrivial problem, and Shirley Tilghman is right to be concerned about it. This is why she (and many others) stress the need for culminating demonstrations of knowledge in the senior year. In Princeton's scheme of things, that means the senior thesis or a laboratory project, but many alternative demonstrations are possible. Having directed senior theses for half a century, I am very sympathetic to this view, but I do not think that senior theses are enough.

So my own conclusion is that concerned liberal educators should seek more adequate means of both culminating and longitudinal assessment of undergraduate learning. Everyone in this audience knows that for the past decade, there have been a number of serious attempts to do just that. In helping me prepare this talk, my research assistant came across a useful report that surveyed 27 different institutional assessment instruments of a wide variety – some addressing institution-specific college student experiences, others topic-specific, and still others portraying national information relating to higher education. I am sure there are others. Like many of you, I am not sufficiently expert in the social science of institutional assessment to be able to make tough judgments about the comparative merits of these instruments, so let me simply say that so far as I can tell two of them are currently thought to hold the greatest promise: The National Survey of Student Engagement (NSSE) and the Collegiate Learning Assessment (CLA).

NSSE and CLA are, however, very different sorts of assessment instruments. NSSE is designed to obtain, on an annual basis, information from large numbers of colleges and universities about student participation in programs and activities that the colleges provide for their learning and personal development. NSSE inquires about institutional actions and behavior, student behavior inside and outside the classroom, and student reactions to their own collegiate experiences. CLA, on the other hand, is an approach to assessing an institution's contribution to student learning by measuring the outcomes of simulations of complex, ambiguous situations that students may face after graduation. CLA attempts to measure critical thinking, analytical reasoning, written communication and problem solving through the use of "performance tasks" and "analytical writing tasks."

These two assessment projects are both the result of substantial financial investment and research. Both are clearly serious efforts to create quantifiable measures of the outcomes of liberal education for samples of college students. They are very different in the proxies they use for learning outcomes and in their assessment strategies, but I will simply assume (since experts I respect take them seriously) that they provide meaningful institutional data. The general project to collect such data strikes me as a good-faith effort to respond to national calls for accountability, but we are in the early days of this movement, and I am not sure that we are yet capable of confidently assessing the assessments.

THE QUESTION THAT INTERESTS ME MORE is what can and should be done with this data? Clearly, some hope that cross-institutional comparisons based on this data will provide more reliable quality rankings of institutions than the current faux-scientific rankings in popular magazines. If one takes a consumerist view of higher education, this makes sense, since it would give prospective college students (and their parents) more objective information on the respective educational merits of different colleges and universities. But many of the institutions participating in these new assessment exercises will not release the findings concerning their institutions, and even if they did, it is not altogether clear to me that what is being measured will provide very clear purchasing signals to prospective students. However, to the extent that making the findings available can genuinely facilitate informed college selection, I should think all responsible educators should favor it. But I don't think we are there yet, and I suspect we shall just have to live with U.S. News & World Report for the foreseeable future, doing our best to be noncooperative and noncomplicit in a deeply flawed and poorly motivated process.

What may be more consequential is that it is obvious that we will also have to live with continuing Spellings-like demands for accountability in higher education. I suppose that is why Carol Schneider and this organization have collaborated with the Council for Higher Education Accreditation to issue their very recent report, New Leadership for Student Learning and Accountability. What is most striking about this report is what, in contrast to the Spellings Commission report, it does not include – a demand for standardized metrics of outcome assessment that would permit systematic comparison across institutions of higher education.

The principle that undergirds the AAC&U report emphasizes the responsibility of colleges and universities to "develop ambitious, specific, and clearly stated goals for student learning" appropriate to their "mission, resources, tradition, student body and community setting" (how's that for a complete list of reservations?) in order to "achiev[e] excellence."

Each college and university should gather evidence about how well students in various programs are achieving learning goals across the curriculum and about the ability of its graduates to succeed in a challenging and rapidly changing world. The evidence gathered … should be used by each institution and its faculty to develop coherent, effective strategies for educational improvement.

This is a mouthful. I agree with almost all of it – and why not, since the statement carefully avoids endorsing cross-institutional comparisons?

But I confess to concern about the double bottom line here – the first bottom line is the assessment of the achievement of "learning goals across the curriculum" (to which I say, d'accord), but the second is to assess "the ability of graduates to succeed …" (to which I plead dubitante). Are we really responsible for the future success of our graduates in the same way that we are for their learning while in college? Can we really assess future success in ways that do not privilege income and social status as measures of accomplishment? I doubt it, and I firmly oppose such a commitment.

WHAT INTRIGUES ME is the potentially creative use that the individual universities themselves might be able to make of data from institutional assessment instruments in order to determine what works and what does not in their own learning programs. What I take away from the rather tedious ongoing debate about accountability is that the one certain thing is that each college has a duty to itself rigorously to evaluate the effectiveness of the student learning that it facilitates. It follows that we owe this duty to our students, to their parents, to the faculty, to our (public and private) donors, though I am not sure we owe it to "the future."

Public institutions will of course have additional responsibilities to external stakeholders, not least of which will be their state legislatures. In this regard, I was fascinated to read a few weeks ago about the "Voluntary System of Accountability" that has been announced by NASULGC and AASCU – a highly objective and potentially comparable cross-institutional database that I take to be a response to the special political pressures for accountability on public institutions. Setting aside such a project as unlikely for all of higher education, and acknowledging that the noncompliance of the University of California system does not bode well for the VSA, it seems to me that American educational pluralism dictates that each institution must think through its own standards of accountability. In doing so, it should involve all of the stakeholders I have just named (though there may be more).

It seems quite reasonable that colleges and universities should avail themselves of national assessment instruments. They are carefully thought out and in a constant state of improvement, so at the least they should provide significant benchmarks for evaluating the success of student learning. They may also make it possible (depending on the open availability of data) for institutions to compare themselves to one another, and that should be quite interesting to a self-critical institution. On the other hand, I am sure that many institutions will not consider them sufficient (or perhaps adequate). But in that case the burden, in my judgment, should fall on that college to develop its own internal modes of assessment. I want to claim that we must at least be accountable to ourselves.

But I have been in higher education long enough to realize that this is not so commonly done. Even in these days when the language of "reflective engagement" is broadly used to describe our ideal for student learning, institutional reflective engagement with our own teaching and learning practices is a more or less unnatural act. And there's the pity. Conscientious institutions are careful in constructing curricula. We devote time and human resources to designing both general education and field of concentration curricula. We take evaluation of student performance in courses seriously. We are increasingly concerned to support new types of learning experiences, from freshman seminars and undergraduate research to service learning and study abroad. We are experimenting with the potential of information technology and new media to enhance student learning. We are steadily making new fields of knowledge, most of them interdisciplinary, available to our students. These are exciting days in American higher education, and it is easy to see why presidents Tilghman and Faust are so proud of what their universities offer to students.

But do we really know whether and why we are successful in promoting student learning, if we are? A brand-new ETS report, A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes (2008), advocates seven steps for creating "an evidence-based accountability system for student learning outcomes." The penultimate step is to determine "What Institutional Changes Need to be Made to Address Learning Shortfalls and Ensure Continued Success"; it suggests communicating the results of data analysis, determining "using the internal decision-making processes of the institution, the meaning of the successes and the shortfalls," and then making educational-policy decisions based on this analysis. Duh? This makes the double assumption that the data analysis will lead to clear policy decisions and that the institution will have the will and capacity to shift educational policies accordingly. Am I too cynical in suggesting that there are few institutions fully capable of carrying out step six?

THE HISTORY OF HIGHER EDUCATION in this regard is not encouraging, for we have been worrying about just these sorts of problems for more than a century. One of the major early efforts was the 1923 AAUP Committee on College and University Teaching, a very thoughtful effort to address many of the questions that vex us in the beginning of the 21st century. The committee's report began by noting the problem that remains at the core of our assessment challenge: "The college classroom is the professor's castle. He does not object to the invasion of it by his own colleagues who understand his problems and difficulties [the report was surely too generous on this point], but he reacts against the intrusion of any one outside that circle who undertakes to scrutinize and appraise his work."

The report then picked up on what is today one of the leading criticisms, "Does the college teacher, as such, have a clear conception of what he is trying to do, or indeed of what his institution is seeking to do?" And its assumption of what constitutes good teaching is one that still commands respect: Good teaching "is the kind of teaching which inspires the student to take an active part in the educational process, in other words inspires him to educate himself rather than to expect that someone else will do it for him." The report goes on to assert that "Any teacher who gains the desired end, who induces self-education on the part of his students, is an effective teacher no matter what his methods or personal attributes may be." It then asserts that "the main function of the teacher is to stimulate critical thinking, to train his students in methods of reasoning and to carry them back to the sources of the facts, as well as to encourage them to form their own conclusions." Apart from the gendering, this statement is one we will all agree with, and indeed these are precisely the same words that many of us here have written on the topic.

Reading the 1923 AAUP report is therefore an unsettling experience, since it might well have been written last year. It notes that many of our present difficulties "are connected with the great expansion in college enrollment which has taken place during the past twenty years." And the bottom line is devastatingly reminiscent of our current assessment debates:

… there is reason to feel that the general standards of college teaching in the United States have been, on the whole commendably high. Unfortunately, when any one takes issue with this assertion, there is no convincing way of substantiating it. For college teachers have as yet devised no systematic means of having the results of their own work fairly evaluated. They have worked out no objective way of determining whether their work is good or bad.

The college teacher plans his own course and gives his own instruction; at the end of the term he prepares his own examinations, tests his own students, and renders his own verdict upon what he has accomplished. He looks on his handiwork and says that it is good. This self-appraisal of results is not checked by anyone else.

Take away the current student evaluation system, a very partial reform at best; can we claim much more today in most colleges and universities? The AAUP concluded that "If even a small portion of the ingenuity and persistence which are now being expended on research of the usual type in American colleges and universities could be deflected … toward research into the results of their own teaching, the improvement in the general standards of collegiate instruction might be considerable." Remember that these words were written 85 years ago. Ouch!

The AAUP report makes me uncomfortable in another, more important, manner. Its authors were propagandists for "general education" in the same way that the AAC&U currently advocates for "liberal education." But the history of higher education in the last century suggests that in fact "general education" was a weak idea with little unifying or binding power either within or across institutions. If that is correct, is it likely that switching from "general" to "liberal" is going to solve the problem? If we are to have meaningful assessment, is it possible that we shall need to assess something more precise than "liberal education" and broader than student performance in courses? The courage to assess, and the capacity to assess, depend upon the willingness of an institution to do something other than put the pea under a different shell. Frankly, I think this is the real issue, and resolving it will take a massive effort of educational reimagination.

SO FOR THE MOMENT I AM LEFT puzzled as to the extent to which we are actually enhancing student learning. As I argued earlier, I do not think that waiting to see how many of our students win Nobel Prizes or create new businesses is a strategy for understanding whether today we are succeeding as educators and fulfilling our obligations to our students. We owe it to ourselves to ask the Ed Koch question – "How are we doing?" And I do not think we can adequately answer the question until we are clearer about what the criteria for success are.

I am, however, an incrementalist and a pragmatist. For me, the present utility of institutional assessment is its potential capacity to enable us to begin to understand what we are doing, and to plan for educational change. If we truly can measure student learning at the levels of sophistication necessary to know whether we are achieving the outcomes we strive for, then assessment data should enable us to begin to make informed judgments about what we are doing wrong, and what we are doing right. And then we can adjust our learning strategies. Or at least we can have a more meaningful debate about our goals and strategies. Right now we too often fail to see beyond tactics.

I am aware that I have not addressed the very real and reasonable concern of Shirley Tilghman, Drew Faust, and many other thoughtful critics that a Spellings-like assessment mandate would not only violate institutional educational self-determination, but move higher education in the direction of robotic self-imitation. That is a terrible prospect, but Secretary Spellings moved away from such an approach, and I do not think we need to behave like Chicken Little. I believe that it is possible for us to assess ourselves in ways that will not only help institutions to help themselves improve student learning, but might create the norms and benchmarks that will enable us to move ahead nationally in our quest to improve the quality of undergraduate education.

Finally, having invoked the names of the presidents of two of the leading institutions of higher education in the country, let me urge them to think of themselves as national leaders as much as the presidents of Harvard and Princeton. Last week [Feb. 15, 2008] in The Chronicle of Higher Education, David Breneman of the University of Virginia made an urgent plea for the leaders of elite universities to take seriously national calls for institutional accountability, such as that issued by Secretary Spellings. He noted that "a worrisome disconnect exists between public policy and the nation's higher-education institutions." He acknowledged that many popular critiques of higher education probably do not seem either pertinent or urgent for the elite schools, especially "with employers and top graduate schools eager to snap [up] their alumni. … Granted, that stance avoids the question of the value added by the education that they offer, but in such an environment, it is easy to not be concerned about assessment." But Breneman reminds us that the leaders of elite universities "are given a platform from which to discuss concerns about higher education as a whole. The presidents of Eastern Midwestern State University or Big City Community College simply do not command the attention of academe or the public." It follows that it will not be "healthy for higher education in the long run if external parties conclude that the leadership of our institutions will not respond to reasonable concerns."

I know both Shirley Tilghman and Drew Faust well enough to be sure that they do not intend to be complacent and self-congratulatory, though I think their recent comments might be taken as such. Let me add my voice to David's in inviting them, and the other leaders of elite institutions, to join us in advocating assessment-based evaluation and, where necessary, reform of liberal undergraduate education.