The Truth About Wine Ratings No One Wants To Talk About

Walk into a wine shop and chances are you’ll encounter shelf talkers and displays touting the ratings of the wines in front of you. Not only will signs proclaim whether a wine is rated an 88 or a 92, but whole brands are built around the rating system, guaranteeing the wine inside is rated at least 90 even if you have no clue who the winemaker is or what vineyard the grapes came from. Make no mistake: the 100-point scale – introduced by Robert Parker in the 1970s, modeled on how American students are graded in high school and college, as an alternative to the 20-point scale long favored by British critics (itself reminiscent of French academic grading) – and now used by outlets and critics such as Wine Spectator, Wine Enthusiast, Wine & Spirits, James Suckling, Jonathan Newman and many more, sells wine, and it sells it very well.

And while the people who actually make and sell this wine, as well as many who write about it, increasingly claim to be turning against a system they call corrupt and bad for wine, in many ways that system is stronger than ever. With so many people seemingly trying to move away from ratings, we decided to investigate why the system still holds so much power, and what you need to know to be a more informed consumer.

Ratings are as divisive an issue as they’ve ever been, and even in 2014 it’s clear they have a stranglehold over the industry. That was immediately apparent in the fact that most people we spoke to for this article wanted to be quoted anonymously or off the record: while there is disdain for the system, no one wants to truly bite the hand that feeds. Even most wine sites that tout supposedly “better” ways of discovering wine stop short of saying ratings are bad outright, and many look for as much good in them as they can – that’s how ingrained ratings are in the way we currently buy wine.

The rating scale works because it’s relatively simple: wines receive a grade on a 100-point scale – though in practice the scale starts at 50, and no wine has ever been rated below it. The higher the grade, the better the wine. Everyone who has gone through grade school knows the value difference between a 92 and a 93; hell, you probably tried to argue one night after a test that a 94 was just as good as a 96, and your parents just weren’t buying it. The same is true with wine.

And that’s why, no matter the type of wine or the outlet giving the rating, the highest score always seems to win. One prominent New York-area wine shop is known for putting on the shelf whichever rating is the highest a bottle received, no matter the outlet. “The industry relies on ratings because those ratings get one wine into the shop over another, and usually help that wine sell,” says Will Schragis, a former Zachy’s wine employee.

It’s this idea that the highest score always wins that has driven ratings inflation, much like academic grade inflation, over the last decade. “There’s massive grade inflation with wine,” Schragis told me. “For many publications, seeing a wine in the upper 90s used to be rare; now it seems to be a regular occurrence.” The pool of wines that actually get rated is small, and many of them receive attention from multiple outlets. It’s not hard to imagine a race to the top, especially when the outlets themselves realize that only one of their names will end up next to the bottle with the score – and if their score is the highest, that name will be theirs. After all, they have to market their product too.

In a recent Wine Searcher article, the outlet analyzed the most famous ratings giver, Robert Parker, and noticed a disturbing trend: a massive upswing in the number of wines receiving the perfect score of 100. As written on the site: “So far this year, 69 wines have been elevated to perfect status, following on from 102 last year and the end of the year tends to see a bumper crop. Yet only five years ago, the number of wines awarded 100 points was 38 and even that more than doubled the number from 2004, which was just 17.” Whether we’re seeing more wines made specifically for a critic’s taste buds, or simple score inflation, neither is good for wine.

Shelf Talkers

One of the easiest knocks on ratings is that they are usually based on one person’s opinion. That person is usually someone who is said to specialize in a certain region or style, and therefore is better equipped to analyze the wines than someone else. “What most people don’t seem to realize is that ratings usually reflect one person’s taste related to a specific region,” one Napa winemaker told me recently, “and often wineries make wine to appeal to that specific person, especially if that person’s rating will help move wine. This is a tough business after all.”

Yet it’s tough to educate consumers about this fact. When you see a rating in a wine shop, it almost never carries the critic’s name – just the outlet’s. “We had a hard time educating customers on when and when not to use ratings,” says Schragis. “A good rating doesn’t mean a wine is better or worse than another, because it’s one person’s opinion. Choosing wine isn’t about the best wine, it’s about the right wine. Understanding that is hard.” As a winery marketing director told me, “ratings are only beneficial for consumers who can see through the sea of critics out there and actually realize how each one is different. That takes effort, and most people don’t have that time.”

But the fact that only a handful of people are responsible for the ratings given out, and that it is hard to distinguish how each differs, is not the main issue most wine industry people have with the system. The real issue is that these ratings have become so influential they’ve shut small producers out, becoming what many winemakers feel is a pay-to-play system. As one winemaker told me, “look at the ads, and then look at the scores; there is a correlation. I have no problem with a magazine selling ads and those ads trying to convince me of what to purchase, but when those ads seem to influence how a wine is rated, that’s a problem.” Interestingly, Parker, who’s credited with starting the whole ratings movement, famously never took ads, relying instead on a subscription model to support his Wine Advocate – but his new magazine, 100 Points, does.

As another winemaker put it, “there are tons of asks of wineries [asks include things like product donations, free participation in events, requests to open old vintages, appearances, etc.] by many of the more influential people doling out ratings, and there is sort of an unspoken understanding that if you don’t fulfill the requests, you may fall off the radar – and if you fall off the radar your wines don’t get rated and you don’t sell as much of your wine.” “Ratings are inherently political,” the same winery marketing director told me. “We used to get great ratings in the 90s, and then our winemaker had a personal falling out with one of the more well-known publications, and we haven’t made it out of the 80s since.”

Ads that potentially influence a score are bad, but as a wine shop employee who wanted to remain anonymous told me, it can go much further than that: “I’ve seen publications that even have financial stakes in the wineries they’re rating.” That seems even less kosher. “What everyone understands is that ratings are powerful; they help you sell your wine,” said another winemaker who also wished to remain unnamed. “As people continue to seek out the best, a good rating can take you from a small producer to a much larger one very quickly, selling out all your stock. It’s not surprising some places play the game; it can be cheaper than building a brand or marketing yourself.”

Yet that game seems increasingly to speak to an older, established wine audience, while the importance of ratings may be fading among the younger generation – at least if their peers, the younger winemakers and wine store owners, have anything to do with it. “There’s some very practical things about a number rating,” says Alan Greene, the owner of Brooklyn wine shop Tipsy, “but it doesn’t tell the whole story.” More and more, wine shops and winemakers around the country seem to feel this way. The traditional focus on ratings in regions such as Bordeaux and Napa could be why those wines have fallen out of favor with younger wine consumers; a good rating, after all, significantly drives up the price.

No One Wants To Truly Talk About the Problem

Winemakers are taking matters into their own hands as well, showing how arbitrary the number system can be. Last year, winery owner Robert Hodgson showed how random the scoring was by releasing the results of an experiment in which, unbeknownst to the judges, tasters were told they were being served different wines when in many cases they were poured the exact same wine three or more times. The results were striking: a judge might give the same wine an 86 one time, a 90 the next, and then a 94. The range made it clear they liked the wine, but the randomness of the scores showed why a numbers system like the one currently dominating the industry is flawed – especially when people live and die by the digit. Bear in mind that in the ratings world, 86 means acceptable while 94 is excellent.

The younger generation isn’t turning away from recommendations entirely; they’re just rejecting the 100-point system. The fact is, there is a ton of wine out there, and it can be tough to know everything. Critics and other publications drink a ton of wine, so hearing what they think about a bottle can be useful – in the right context. “I myself have a few writers I really trust, like Eric Asimov and Jon Bonné,” says Schragis, “but they place wine in context. They let you know this $20 wine is great against the other $20 wines they tasted, in the circumstances they said it was good in. They don’t just slap a rating on it.”

While the numerical ratings system may seem woven into the fabric of the wine industry, it’s important to remember it’s only been around for a little over 40 years. Things can change, especially as consumers become more proactive and outlets begin guiding them – helping them explore and discover what they like, instead of using a ratings system to tell them what to drink. “I hope beer and liquor learn from wine’s mistakes,” says Schragis.
