Kevin Carey, author of the very nice Washington Monthly article Is our students learning? that I discussed in Educational value added, Sept. 1, 2006, has sent me a copy of his new report College Rankings Reformed: The Case for a New Order in Higher Education. This report nicely fleshes out a number of arguments made in his shorter article, and adds some new recommendations on how to improve rankings.
Carey understandably focuses his discussion on the US News and World Report (USNWR) rankings of colleges. He argues that one can categorize what is actually being measured by the various components of these rankings. When he does this categorization, he finds that the USNWR rankings are based 25% on fame, 30% on institutional wealth, 40% on exclusivity, and only 5% on quality! That, indeed, does not seem to be the best way to measure the effectiveness of colleges or the quality of their programs.
Carey then elaborates on the benefits of the measures he first proposed in his Washington Monthly article: the National Survey of Student Engagement (NSSE), the Collegiate Learning Assessment (CLA), and a variety of newly collected/organized data on employment and satisfaction after graduation. The NSSE is a measure of educational best practices in higher education, and thus would reflect an institution's commitment to using the most effective teaching practices currently known. The CLA attempts to measure higher-order thinking skills, and Carey would essentially use a freshman/senior differential in CLA scores to identify the institutions where the greatest growth in higher-order thinking skills occurs. This emphasis on growth in thinking skills during the college experience, rather than on the CLA level (or that of a correlated exam such as the ACT) of the entering class, would not penalize colleges that accept less well prepared students. The employment and satisfaction data would, of course, seek to show what the ultimate outcome of all this education is. To these three measures, Carey would add two measures based on freshman retention and graduation, each comparing actual rates to rates predicted from "peer comparison" data. Again, as with the CLA data, Carey is seeking to identify "value-added" institutions. Using this set of data, Carey proposes weighting the various components as follows (a rough numerical sketch of the resulting composite appears after the list):
NSSE, measuring teaching, is 20%
CLA, measuring learning, is 30%
retention and graduation are 20%
employment and life satisfaction are 30%.
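To make the arithmetic of those weights concrete, here is a minimal sketch of how they could combine into a single composite score. Carey's report does not spell out a scoring formula, so the component names, the 0-100 scaling, and the example figures below are all invented for illustration.

```python
# Hypothetical weighted composite built from Carey's proposed weights.
WEIGHTS = {
    "nsse": 0.20,            # teaching: National Survey of Student Engagement
    "cla_growth": 0.30,      # learning: freshman-to-senior CLA gain
    "retention_grad": 0.20,  # actual vs. predicted retention/graduation
    "outcomes": 0.30,        # post-graduation employment and life satisfaction
}

def composite_score(components):
    """Weighted average of component scores, each already scaled to 0-100."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

# Example: a college strong on teaching and value-added learning, with
# weaker outcomes data (all numbers invented for illustration).
print(composite_score({
    "nsse": 82, "cla_growth": 74, "retention_grad": 65, "outcomes": 58,
}))  # -> 69.0
```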
I continue to have concerns about the employment and life satisfaction numbers, for reasons explained in my first post on this subject. In addition, I suspect that there are significant, but highly nonlinear, correlations between the employment and satisfaction data and the data used by USNWR. Finally, for all of these measures consumers really need some measure of the uncertainty (a standard deviation or standard error), so that they can see which rankings are statistically indistinguishable.
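To illustrate what I mean, here is a rough sketch of the check a reader could apply if such uncertainty estimates were published: two institutions whose scores differ by less than roughly two combined standard errors are, for practical purposes, tied. The scores and standard errors below are invented, and the independence and normal approximation are assumptions.

```python
import math

def statistically_distinguishable(score_a, se_a, score_b, se_b, z=1.96):
    """Rough two-sided check: is the gap between two institutions' scores
    larger than ~1.96 combined standard errors (about a 95% level)?
    Assumes independent, approximately normal score estimates."""
    return abs(score_a - score_b) > z * math.sqrt(se_a**2 + se_b**2)

# A 2-point gap with standard errors of 1.5 each is not distinguishable,
# so the two "ranks" are effectively a tie (invented numbers).
print(statistically_distinguishable(71.0, 1.5, 69.0, 1.5))  # -> False
```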
However, my greatest concern is simply this: I think any ranking system of higher education is inherently and fatally flawed. A ranking system implies that one institution is better for all students than another. This simply is not true. We desperately need better data that parents and students can use to make choices (more transparency), and in this Carey has done a great service in looking hard at some of the alternatives available. But we don't need a ranking system. Much better would be a database of information that allowed users to weight the various inputs according to their own interests. For every user, a unique ranking system!
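As a toy illustration of that last idea, the sketch below re-ranks the same underlying data under weights chosen by each user. The college names, measures, scores, and weights are all hypothetical; the point is only that different priorities produce different orderings from identical data.

```python
# Invented data: the same measures Carey discusses, on a 0-100 scale.
colleges = {
    "College A": {"teaching": 85, "learning_growth": 60, "graduation": 90, "outcomes": 70},
    "College B": {"teaching": 70, "learning_growth": 88, "graduation": 75, "outcomes": 80},
    "College C": {"teaching": 78, "learning_growth": 72, "graduation": 82, "outcomes": 65},
}

def personal_ranking(data, user_weights):
    """Rank institutions by a weighted score using the user's own weights."""
    total = sum(user_weights.values())
    scored = {
        name: sum(user_weights[k] * v for k, v in measures.items()) / total
        for name, measures in data.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# A user who cares mostly about growth in thinking skills:
print(personal_ranking(colleges, {"teaching": 1, "learning_growth": 4, "graduation": 1, "outcomes": 1}))
# A user who cares mostly about graduation rates sees a different order:
print(personal_ranking(colleges, {"teaching": 1, "learning_growth": 1, "graduation": 4, "outcomes": 1}))
```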