Late last week, a few major gaming Web sites posted reviews for the upcoming downloadable PlayStation 3 title "Calling All Cars," a game whose chief creator, David Jaffe, formerly developed "God of War" and "Twisted Metal." The game got an 8.5 from IGN and an 8.0 from 1up.com. As Jaffe posted on his blog, he was happy with those scores. GameSpot gave the game a 6.7. As Jaffe also posted on his blog, this displeased him.
There have been game review controversies before. Late last year, GameSpot became the only major gaming outlet to score "The Legend of Zelda: Twilight Princess" under a 9. In 2004, Game Informer magazine gave "Paper Mario: The Thousand-Year Door" a 6.75, and fans were incensed at the magazine reviewer's claim that the score was knocked down because the game, no matter how good, wasn't something the readership of Game Informer would particularly like. In 2003, IGN and GameSpot gave the highly anticipated "Mario Kart: Double Dash" a 7.9, and for a while the phrase "getting 7.9'd" was taken to mean what happens when a good game gets an undeserved review.
So what does a review score really indicate?
I wrote a column for IGN back in 2004, just a freelance thing about once a week centered on ideas related to the Nintendo GameCube. Inspired by the frenzy of that 7.9 "Mario Kart" review and having never reviewed a game on a 10-point scale myself, I gave myself a test.
I needed a point of reference. Actually, I needed 10 points of reference. So I set up a challenge for myself that I hoped other reviewers would take on as well: Name two games for every point of a 10-point scale. That would determine how bad or how good the "Mario Kart" score really was. Today, it might even help indicate whether a 6.7, if fairly given, is worth getting angry over.
So back in 2004, I named two games I thought deserved perfect 10s. I picked, probably not too controversially at the time, "The Legend of Zelda: The Ocarina of Time" and "Metroid Prime." What I think about those games these days is another story. I quite intentionally don't review games. But here was my rationale: "A 10 probably won't be flawless. After all, what is? 'Ocarina,' for example, has the somewhat clumsy, tacked-on Skulltula fetch-quest. 'Metroid Prime,' in my book, has some minor issues, such as the requirement to hold the triggers all the way down while reading wall-markings. These are the most minor of quibbles, and they barely affect my enjoyment of the games."
I picked "Perfect Dark" and "The Legend of Zelda: The Wind Waker" as my 9's. My thinking then was that each game had its flaws, but those flaws didn't get in the way of having a good time. "Perfect Dark" had its unplayable counter-op side-mode, for example, but did that destroy the pleasure of the core game?
For 8's, I named "Resident Evil" and "Grand Theft Auto: Vice City." The difference between these games and the 9's, I wrote, was that as wonderful as the titles might be, the flaws in these games are significant enough that a friend coming over to try the game for the first time would notice the glaring shortcoming right away. "Wind Waker" could hide its bad parts from most newcomers. But who that played "Vice City" didn't have trouble with the aiming system the first time out?
The 7's were the first games that I said would have multiple significant flaws. I named "Star Wars Rogue Squadron III: Rebel Strike" and "Beyond Good & Evil," citing issues with controls, lack of originality and other less-than-polished qualities.
I barely remember playing the games now, but I cited "Lord of the Rings: Return of the King" and "Luigi's Mansion" as 6's. Why? "Like 7's, they have some great moments of gameplay in them, but long portions of the game make you feel like the developer was just going through the motions, saving things up for the big moments here and there. A 6 doesn't have more than a rental's worth of really fun, unique stuff to do and see."
I described 5's as games that show a spark of imagination, a hint of something that could have been grand wrapped in an otherwise mediocre experience. I named, perhaps controversially, "1080: Avalanche" and "P.N.03." I distinguished these 5's from my 4's — "Wario World" and "Star Fox Adventures" — which I described as pure vanilla, competent products that don't do anything special.
Distinguishing among the lower numbers of the scale was tricky. I described 3's as a grade below competence: "A game like 'Yoshi's Story' can't get a competence score of 4, because it serves as a sequel to a very good game and just messes things up. A 3 is a dropped ball, a step in the wrong direction, etc."
I said a 2 was an outdated game. Whereas a 3 might feel like a modern product, a 2 was "an unwelcome blast from the past, a sign of a developer just not able to keep up with current game design and technical proficiency." I named the 2003 edition of "NBA Jam."
That left me with 1's and 0's. The 1's were the worst games companies put out, like the notorious "Superman 64," and 0's were the bad games companies won't even put out.
Looking over this list today, I could write some angry blog posts to my 2004 self who, thankfully, wouldn't be able to reply with any vitriol of his own. I no longer agree with the games I named. But I do agree with the rationales. On my scale, a game given even a 6 is easily worth recommending, but in a way that is clearly distinct from a game given an 8 or a 9.
This doesn't close the debate. It goes on. But if everyone took a 10-point challenge — including game makers and game reviewers — wouldn't we know a little better where reviewers actually stand?
(If you are a paying IGN subscriber, you can read my original article here.)