A recent article in InsideHigherEd reported on a meeting of higher ed experts convened to discuss the just-released Academically Adrift: Limited Learning on College Campuses by Richard Arum and Josipa Roksa. Both the description of the meeting and the numerous comments made by readers of the article are interesting and informative.
The book describes a study of 2,300 college students moving through a range of two- and four-year colleges and universities. The authors use surveys, transcript analyses, and the Collegiate Learning Assessment (CLA) to evaluate how much students learn during their college experience. The answer: not much. As reported in an earlier InsideHigherEd article about the book:
- 45 percent of students "did not demonstrate any significant improvement in learning" during the first two years of college.
- 36 percent of students "did not demonstrate any significant improvement in learning" over four years of college.
- Those students who do show improvements tend to show only modest ones. Students improved on average only 0.18 standard deviations over the first two years of college and 0.47 over four years. What this means is that a student who entered college in the 50th percentile of his or her cohort would move up to the 68th percentile four years later, but that's the 68th percentile of a new group of freshmen who haven't experienced any college learning. (A quick check of that arithmetic follows this list.)
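As a sanity check on those percentile figures, here is my own back-of-the-envelope calculation. It assumes the test scores are approximately normally distributed, which the reporting does not state explicitly: a student starting at the median who gains d standard deviations ends up at the percentile given by the standard normal cumulative distribution function Φ(d).

```latex
% Back-of-the-envelope check (my assumption: approximately normal
% score distributions). A median student who gains d standard
% deviations lands at the percentile \Phi(d).
\[
  P_{\text{new}} = \Phi(d), \qquad
  \Phi(0.18) \approx 0.57 \ \text{(two years)}, \qquad
  \Phi(0.47) \approx 0.68 \ \text{(four years)}
\]
```

This reproduces the 68th-percentile figure quoted above for four years; note that the two-year gain moves a median student only to roughly the 57th percentile.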
Participants at the meeting discussed several limitations of the study, including important questions not asked in the surveys and the use of the CLA as a surrogate for “learning”. Nevertheless, the article reports that the participants concluded that this is a very important, and disquieting, report that may lead to some positive actions.
Ignored in all of this discussion seems to be the large number of earlier studies and reports that reach essentially the same conclusions. Derek Bok’s Our Underachieving Colleges summarizes a large number of these studies quite effectively. As he points out, not only do students not improve much in critical thinking (as measured by the CLA), but they improve even less in the important area of postformal reasoning: understanding how to work with ill-structured problems. In fact, the sad results described above from Academically Adrift are almost exactly those reported by Pascarella and Terenzini in How College Affects Students, Volume 2, based on other studies. Consistency, unfortunately.
My favorite quote from Bok’s book is:
“With all the controversy over the college curriculum, it is impressive to find faculty members agreeing almost unanimously that teaching students to think critically is the principal aim of undergraduate education. ... Ironically, the fact that college faculties rarely stop to consider what a full-blown commitment to critical thinking would entail may help to explain why they have been so quick to agree on its importance to the undergraduate program.”
From the InsideHigherEd articles and their attached comments, it seems that some are rethinking the statement that critical thinking should be (or is) the “principal aim of undergraduate education” as they are faced with increasing data that it isn’t happening. Instead, there are statements along the lines of “the students are certainly learning domain knowledge.” Unfortunately, that also does not seem to be the case; or, more precisely, they are not learning it in the most useful and important ways. In areas where data exist, we often do a good job of giving our students an “algorithmic” understanding of their domains, but few students end up grasping the underlying concepts that really define those domains, or move very far in the development of expert thinking in them. (See the previous discussion of expert thinking in How People Learn.)
In my own field of physics, tests given to students at hundreds of institutions, ranging from community colleges to the best research universities, show that students who have finished a physics course on average think less like experts than when they started the course. There is a “universal” result, valid across institutional size and quality, class size, etc.: students on average understand less than 30% of the concepts that were taught in their class, even though they may answer traditional test questions quite well. I wrote about this in A D- in science education. All in all, these are not data that support the idea that students are currently learning domain knowledge significantly better than they are learning how to think critically. And, indeed, how can one really learn domain knowledge if one cannot think critically?
The good news, as pointed out in Bok’s book and numerous other works, is that learning research has identified a number of changes to traditional teaching approaches that lead to greatly increased student learning. It is “just” a question of implementation.
The comments in the articles mentioned above about the weaknesses of the CLA and other tests being used to measure student learning are certainly correct. None of the instruments in use measures perfectly what it claims to measure. In addition, as many critics point out, there are certainly major outcomes of education that are not being measured at all. However, a very broad spectrum of studies, using a variety of instruments, consistently shows that student learning is much less than we commonly claim.
To me, this suggests two responses. First, we should not continue to spend time picking at the flaws of current instruments that seek to measure learning; rather, we should focus on developing better instruments. At a minimum, this will require that we in higher education ask ourselves what we believe the valuable outcomes of higher education to be. In addition, we need to know what our students, our alumni, and their employers think those valuable outcomes are. Like it or not, without the approval and support of those groups, we are an endangered field. With that information in hand, we can begin to develop better instruments to measure these outcomes, and using those instruments we can work much more effectively to improve learning in our institutions. (This is happening now in Europe through the Bologna Process. See The Bologna Process - a significant step in the modularization of higher education.)
Second, for the shorter term, there are numerous approaches that have already been shown to lead to better learning. These should be adopted much more widely while we wait for even better approaches to be developed. Pressure on academe to show improved outcomes will continue to grow, and the patience of the society that supports us is finite.