Beth Keserauskis

Building relationships and making connections

Forbes College Ranking: True Gauge or Sketchy Data?

Forbes recently released its annual college rankings, calculated in partnership with the Center for College Affordability and Productivity. My institution was conspicuously absent, and I was asked to investigate why. (The responsibility for responding to external credibility surveys lies within my unit.) What I subsequently found is that the rankings are based on existing, publicly available data (i.e., no surveys were sent to institutions requesting data or information). That in and of itself would not trouble me, except that the data points used are, to me and others, questionable at best. For example, the infamous “other cost” category, an allowable cost institutions report in financial aid packages, at the discretion of the student, to cover expenses like mileage to clinicals and internships, is treated as actual billed charges, thereby inflating the “cost of attendance” and, subsequently, the elusive “net price” calculation and the (predicted, I might add) debt load calculation.

I have posted below several excerpts from articles, blog posts, and even their own methodology document, giving a glimpse of the, in my humble opinion, “sketchiness” of this ranking. I can only hope that Joe and Jane Sixpack are able to sort through the variables…oh wait, they likely can’t. So now we have another “think tank” with a clear agenda (political or otherwise) leveraging a brand like Forbes to advance its cause.

How important are rankings like this and the U.S. News and World Report Best Colleges? Only the audiences we are trying to attract can tell us. And believe me, I intend to ask them just that, so we can tailor our approach to these surveys accordingly.

Compiling the Forbes/CCAP Rankings (excerpt from the methodology document; the full document can be found on their site)

By the Staff of the Center for College Affordability and Productivity

 Ranking Factors and Weights

The Center for College Affordability and Productivity (CCAP), in conjunction with Forbes, compiled its college rankings using five general categories, with several components within each general category. The weightings are listed in parentheses:

1. Student Satisfaction (27.5%)

  • Student Evaluations from RateMyProfessor.com (17.5%)
  • Actual Freshman-to-Sophomore Retention Rates (5%)
  • Predicted vs. Actual Freshman-to-Sophomore Retention Rates (5%)

2. Post-Graduate Success (30%)

  • Listings of Alumni in Who’s Who in America (10%)
  • Salary of Alumni from Payscale.com (15%)
  • Alumni in Forbes/CCAP Corporate Officers List (5%)

3. Student Debt (17.5%)

  • Average Federal Student Loan Debt Load (10%)
  • Student Loan Default Rates (5%)
  • Predicted vs. Actual Percent of Students Taking Federal Loans (2.5%)

4. Four-year Graduation Rate (17.5%)

  • Actual Four-year Graduation Rate (8.75%)
  • Predicted vs. Actual Four-year Graduation Rate (8.75%)

5. Competitive Awards (7.5%)

  • Student Nationally Competitive Awards (7.5%)
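The five categories above amount to a weighted sum over twelve components. As a minimal sketch (the weights come from the methodology above; the component names and the all-70 example scores are my own invention for illustration), the composite can be computed like this:

```python
# Hypothetical sketch of the Forbes/CCAP weighted-sum scoring.
# Weights are taken from the methodology excerpt; key names and the
# example scores are invented for illustration only.

WEIGHTS = {
    "rate_my_professors": 0.175,
    "actual_retention": 0.05,
    "predicted_vs_actual_retention": 0.05,
    "whos_who_listings": 0.10,
    "payscale_salary": 0.15,
    "corporate_officers_list": 0.05,
    "avg_federal_debt": 0.10,
    "loan_default_rate": 0.05,
    "predicted_vs_actual_loans": 0.025,
    "actual_grad_rate": 0.0875,
    "predicted_vs_actual_grad_rate": 0.0875,
    "competitive_awards": 0.075,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of normalized (0-100) component scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Invented example: a school scoring 70 on every component.
example_u = {k: 70.0 for k in WEIGHTS}
print(round(composite_score(example_u), 1))  # weights sum to 1.0, so -> 70.0
```

Note that the twelve weights sum to exactly 100%, so a school uniform across all components keeps that same score in the composite.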

School Selection

The 650 institutions of higher education in this ranking are schools which award undergraduate degrees or certificates requiring “4 or more years” of study, according to the U.S. Department of Education, and only those schools categorized by The Carnegie Foundation as Doctorate-granting Universities, Master’s Colleges and Universities, or Baccalaureate Colleges are included in this sample of schools.

Of the 650 schools included in the sample, 608 were included in the 2010 college ranking. (A total of 610 schools were ranked in 2010, but two of them, Bryant University and Missouri University of Science and Technology, are now classified as “Special Focus” institutions by the Carnegie Foundation.) We have accounted for any name changes that have occurred over the past year. The 42 schools added this year to the sample are all institutions classified by the Carnegie Foundation as Doctoral/Research Universities and were added based upon undergraduate enrollment size.

A Little History of the Forbes Rankings from 2008-present, excerpt from a commentary on methodology (full commentary can be found at: http://bestcollegerankings.org/popular-rankings/forbes-college-rankings/)

2008 marked the first year that Forbes entered the college ranking fray. They chose a methodology with the following weights: listing of alumni in the 2008 Who’s Who in America (25 percent); student evaluations of professors from Ratemyprofessors.com (25 percent); four-year graduation rates (16 2/3 percent); enrollment-adjusted numbers of students and faculty receiving nationally competitive awards (16 2/3 percent); and average four-year accumulated student debt of those borrowing money (16 2/3 percent). They did not break colleges down into different categories of schools as U.S. News does, but instead chose to rank private and public colleges separately.

Methodology: In conjunction with Dr. Richard Vedder, an economist at Ohio University, and the Center for College Affordability and Productivity (CCAP), Forbes inaugurated its first ranking of America’s Best Colleges in 2008. They based 25 percent of their rankings on seven million student evaluations of courses and instructors, as recorded on the Web site RateMyProfessors.com. Another 25 percent depended upon how many of the school’s alumni, adjusted for enrollment, are listed among the notable people in Who’s Who in America. The other half of the ranking was based equally on three factors: the average amount of student debt at graduation held by those who borrowed; the percentage of students graduating in four years; and the number of students or faculty, adjusted for enrollment, who have won nationally competitive awards like Rhodes Scholarships or Nobel Prizes. CCAP ranked only the top 15 percent or so of all undergraduate institutions.

Negative Commentary on the Methodology (excerpt from Suite101.com: The Forbes Best College Rankings 2011: Are They Kidding?)

What Goes in Must Come Out

First of all, a quick review of the Forbes methodology. The goal of the rankings is to evaluate a college as a consumer or investor would evaluate a commercial product. The focus is on return on investment: for what you pay, do you get good “value”? The most important element in assessing this value is “Post-Graduate Success,” accounting for 30 percent of the total.

This “success” is measured by the salaries of graduates as reported by Payscale.com; membership in “Who’s Who”; and by alumni representation on a list of corporate officers chosen by Forbes and the Center for College Affordability and Productivity (CCAP). CEOs and board members of leading companies are the only persons who are eligible, thereby narrowing the definition of “success” to achievement in the business world only.

It is interesting that Forbes would allow use of “Who’s Who” listings as a measure of college success. In a 1999 article for the magazine called “The Hall of Lame,” Tucker Carlson, a Fox News commentator, derisively showed how inclusion in Who’s Who publications did not require notable achievement.

Another 17.5 percent of the total is based on student evaluations of instructors, taken from the website Ratemyprofessors.com. While student evaluations are useful, they can also lead professors to emphasize popularity at the expense of scholastic rigor.

An additional 17.5 percent of the total comes from actual and anticipated four-year graduation rates. Using four-year rates rather than six-year rates clearly favors colleges that are wealthy enough to subsidize virtually all eligible students based on need or merit, or whose student body is made up of highly prepared students with sufficient economic support. State universities, whose students often have to work part-time or even take a semester off from school, usually cannot match the four-year graduation rates of private colleges.

Likewise, the rankings penalize colleges whose students have higher student debt loads, and this also slants the rankings toward wealthy colleges and parents.

Academic Reputation—Forget It

The most glaring deficiency of the Forbes survey is that the only standard it uses to assess the intellectual credibility of a college is the data from Ratemyprofessors.com. Academic reputation and faculty achievement count for nothing, even though a recent UCLA study of more than 200,000 freshmen across the country revealed that undergraduate academic reputation was the most important factor for these students when they were choosing a college. Forbes wants to change that perception, but does the magazine really believe that reputation counts for nothing in the business world as well?

It is ironic that a survey that is supposed to be student-centered disregards the one factor that students themselves cite as being most important to them: quality. Interestingly, the UCLA study also showed that prospective students are learning to be guarded in their use of college rankings, a healthy sign indeed.

August 10, 2011 | higher education, marketing, reputation management

Hunger Strike to Challenge College Rankings

Continuing with the season of college rankings, here is an interesting story about a student embarking on a hunger strike to draw attention to the inadequacy of the U.S. News and World Report college rankings process. I don’t know about anyone else, but I think there are more important issues in the world about which we should go on a hunger strike.

Washington Monthly College Rankings

Washington Monthly puts out an interesting college guide. They rate schools based on their contribution to the public good in three broad categories:

  • Social Mobility (recruiting and graduating low-income students)
  • Research (producing cutting-edge scholarship and PhDs)
  • Service (encouraging students to give something back to their country)

This certainly sounds like a much more worthwhile ranking system for prospective students and parents than the U.S. News rankings based on fame, exclusivity and money.

September 7, 2010 | higher education, marketing, public relations

College Rankings: Popularity Contest or External Credibility?

Last week was what many in higher education considered a stressful week. The U.S. News and World Report rankings were released to the schools on Monday (8/16), with a press embargo until 12:00 midnight Eastern time Tuesday. So most college communications teams spent the day either breathing a sigh of relief and sending the release announcing that they had achieved a good rank, or frantically scrambling to craft a message drawing attention away from the fact that they had slipped in the rankings.

In addition to the usual stress, U.S. News made significant changes to the methodology and presentation of the rankings this year. Full details can be found on their blog, but in summary they:

  • changed the category names
  • listed all schools, not just the top tier
  • increased the weight of the graduation rate
  • included the opinion of high school counselors in the calculation

There has always been a question about whether rankings like these and countless others are just a popularity contest, or rather a valid external assessment of college choices for prospective students and their parents. The subjective opinions of peers, and, new this year, high school counselors, factor into the rankings. The chief admissions officers, provosts, and presidents of all colleges and universities have the opportunity to provide their opinion of the institutions in their geographic region. This peer assessment variable accounts for 25% of the total score, the most heavily weighted variable. If we are trying to assess the outcomes of an institution, why aren’t the managers at the companies hiring its graduates asked?

You could argue that this skews the rankings, as surely an institution can influence those opinions through a variety of communication channels timed with the survey response due date. Or, you can view this as an opportunity to educate your peers on the accomplishments and accolades your institution has recently achieved, and create a communication strategy for this target audience.

Have you ever noticed how the underdogs who make it to the Sweet Sixteen in the NCAA Division I Men’s Basketball tournament manage to place high in the rankings? (Think Butler and Northern Iowa this year.) And how the tournament falls right around the time the survey is completed? Coincidence? Or is it that there is increased visibility and communication about those schools while they are featured on TV?

Assessment is always a big topic at universities. To me, this is one more way to assess success. There are qualitative and quantitative, objective and subjective, ways to measure nearly everything.

Additionally, when the weight of a factor on which your institution performs well, such as graduation rate, increases, your overall score increases. So, in theory, does your rank.
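A toy calculation makes the effect of a weight change concrete. All numbers here are invented for illustration; only the idea that U.S. News shifted more weight onto graduation rate comes from the changes described above:

```python
# Invented two-factor example: shifting weight toward graduation rate
# lifts the composite score of a school that is strong on that factor.

def score(weights: dict, factors: dict) -> float:
    """Weighted sum of factor scores (weights sum to 1.0)."""
    return sum(weights[k] * factors[k] for k in weights)

# Hypothetical school: strong graduation rate, weaker peer assessment.
factors = {"grad_rate": 90.0, "peer_assessment": 60.0}

old = score({"grad_rate": 0.16, "peer_assessment": 0.84}, factors)
new = score({"grad_rate": 0.20, "peer_assessment": 0.80}, factors)
print(round(old, 1), round(new, 1))  # the heavier grad-rate weight raises the composite
```

The flip side, of course, is that a school weak on graduation rate would lose ground under the same reweighting.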

Regardless of which side of the fence you fall on, there is something to be said from a marketing perspective for credibility through external validation. Several of the categories, like Up-and-Comers and Focus on Student Success, are great to use in a communication strategy highlighting recent innovations you have added at your institution.

There are also those schools that do not appear in the rankings who try to use that to their advantage. I have seen taglines such as “awards won’t change the world, but our graduates will” on billboards.

Has anyone asked whether prospective students and parents are using these rankings in their decision making process? If you appear favorably in the rankings, are you calling attention to it and asking your prospective students and parents to pay attention?

An article appeared in the Journal of Marketing for Higher Education in 2008, titled “De-Mystifying the U.S. News Rankings: How to Understand What Matters, What Doesn’t and What You Can Actually Do About It.” I highly recommend reading it.

August 22, 2010 | higher education, marketing, public relations, reputation management