In "Another ranking of higher education - with a radical twist," I reported that OECD was considering international comparisons of national outcome measures for higher education. I suggested that this focus on outcomes rather than inputs might yield an interesting and enlightening body of information.
It is not surprising that the American Council on Education is working to derail this effort, since that organization has been very active in efforts to stop outcomes testing in the US. Once again, David Ward (president of ACE) argues that there are too many variables - funding methods, mission, and so on. Further, he raises the possibility that the national rankings will be used to rank individual institutions, and that funding agencies might misuse the data. Clearly, ACE is simply continuing to trot out its usual “we are too complicated to be held accountable” arguments, in this case bolstered by the threat of yet another ranking system for institutions of higher education.
It is not obvious that the OECD effort would produce data that could be used to create another ranking system, given the way previous OECD data has been reported. However, a number of ranking systems based on input measures already exist, so the fact that Ward aims his displeasure at this particular possibility suggests that he considers rankings based on outputs even more misleading than rankings based on inputs. I would have argued the opposite.
There are some obvious weaknesses in the system that OECD has outlined. However, I think American higher education would be greatly strengthened if its leaders stopped focusing on preventing accountability measures, and worked instead to make sure the right measures exist and are adopted.