Voluntary accreditation has been a central feature of the higher education landscape in the United States for more than 100 years. The first regional accrediting organizations were established to distinguish “collegiate” study from secondary schooling, and all had begun recognizing institutions as “accredited” according to defined standards by the 1930s. Organizing on a geographic basis made sense at that time because institutions in different parts of the country had recognizably different structural and cultural characteristics, and because it made travel for peer review easier. A regional structure also meant that decisions about quality were kept reasonably close to the institutions about which they were made.
By the mid-1950s, the current approach to accreditation was well established. Its key features remain a detailed examination of each institution against its own mission, a thorough self-study conducted by the institution and organized around the accreditor’s standards, a multiday site visit by a team of peer reviewers, and a recommendation about accredited status to a regional commission. Although accredited status thus constitutes a public statement about an institution’s quality and integrity for prospective students and others, the process was never explicitly designed for public accountability or to inform student choice. Instead, its primary purposes were to help institutions make careful, thorough judgments about academic quality in light of their own missions and to continually improve that quality.
When the federal government began systematically investing in higher education with the Veterans’ Readjustment Assistance Act of 1952 (otherwise known as the Korean War GI Bill), it sought a way to certify the suitability of individual colleges and universities to act as stewards of taxpayer dollars and to provide a quality education for students who spent federal money to enroll. Accreditation was consequently “deputized” to play this role, an assignment formalized and extended by the original Higher Education Act (HEA) of 1965. This was the origin of the current “gatekeeping” function played by accreditors. Institutions must be accredited in order to participate in federal student aid programs; in turn, accreditors in this role must be “recognized” by the U.S. secretary of education on the basis of the standards and review processes they apply to institutions. Over the years, the terms of recognition by the federal government have become increasingly specific and compliance-oriented.
A decisive tilt toward a more aggressive accountability role occurred with the Higher Education Amendments of 1992, which required accreditors to pay greater attention to explicit evidence of educational quality and to review institutional compliance with a growing array of federal regulations and procedures at an increasingly fine level of detail.
These accountability concerns have become particularly prominent in recent years, a period in which the effectiveness of American higher education has been questioned and the nation’s standing in the educational attainment of young adults has slipped in rankings compiled by the Organisation for Economic Co-operation and Development (OECD). The concerns have been voiced in many forms, including the report of the Secretary of Education’s Commission on the Future of Higher Education (commonly known as the Spellings Commission), a series of congressional hearings on the practices of some postsecondary institutions, and accounts in the popular media and academic circles of how much (or how little) students are learning in college. As the arbiter of academic quality, accreditation sits at the center of these discussions. Not surprisingly, what was long regarded as an important but quiet backwater of higher education now finds itself in the middle of policy debates about its role and effectiveness.