Credibility*
David Honig · May 19, 2010 · Food & Conversation

credi•bil•i•ty \ˌkre-də-ˈbi-lə-tē\ n. — the quality or power of inspiring belief

"If I were less than honest as a critic, I think people would spot that right away, and it would destroy my credibility." —Leonard Maltin

Robert M. Parker Jr. is the most important wine critic in the world. Period. If you add the word "Bordeaux" to that sentence, as in "Robert M. Parker Jr. is the most important Bordeaux wine critic in the world," it might be possible to add "ever" as well. Indeed, Parker might be the most influential critic of any kind in modern history.

To what does Parker owe his influence? One word: credibility. People believe him. Over time, he has earned that belief. He and his uniquely American 100-point scoring system exploded onto the wine world with his evaluation of the 1982 Bordeaux vintage. Since then, Parker's rankings on a scale of 50-100 (everything starts with 50, even the most foul laboratory concoctions of chemicals and bizarre grapes) have been the single most important number in the sale of a bottle of wine.

Parker's review of the 2005 Bordeaux vintage was titled "Is 2005 the Perfect Vintage?" He called it "the greatest…produced during my 30-year career." Under his system, anything scoring 90 points or more is "An outstanding wine of exceptional complexity and character…terrific wines," and anything scoring 96 points or more is "An extraordinary wine of profound and complex character displaying all the attributes expected of a classic wine of its variety…worth a special effort to find, purchase, and consume." In 2005, he scored one hundred eighty-seven different wines at 90 points or more (including a few he rated "89+" or "88-90"). He rated thirty-four different wines at 96 points or more, including two wines at 100 points, plus another four at "95+," meaning they might turn out to be 96-point wines or more with time. And people bought.
They bought and bought and bought, the points and the buying driving prices to a height not seen since the Dutch went a little crazy over tulips.

But where do you go from there? What do you do if another vintage is as good as "The Perfect Vintage," if you are butting up against the ceiling of your 100-point system, perfection having been achieved, and then nature and winemakers do it again or—heaven forbid—surpass "perfection"?

You add an asterisk. Yup, an asterisk. Instead of coming out and saying "perhaps 'perfection' wasn't the word I should have used," or deflating the score the way countries in trouble deflate currency, Parker added an asterisk. Don't take my word for it. This is what he wrote:

Readers will note an asterisk (*) after some wine scores. I added this to signify when I thought a wine had the finest potential of all the offerings I had ever tasted from that estate in nearly 32 years of barrel tasting samples in Bordeaux.

Instead of going to a 101-point scale, or admitting that he blew the curve in 2005, Parker added an asterisk. That is what he added, but did he subtract credibility in the process? Ultimately, that's the consumer's decision to make. But let's explore how he used it, to help the consumers along.

First, let's look broadly at the 2009 review. For this vintage, Parker rated a whopping two hundred fifty-six wines at 90 points or more, with sixty at 96 points or more, and an amazing twenty-one flirting with 100-point perfection. For those keeping score at home, that's a 37% increase in "terrific" wines, a 76% increase in "extraordinary" wines, and a 950% increase in perfect wines. That, I must say, is one heck of an increase on "perfection." It's like improving on the Miami Dolphins' perfect season of 17-0 by going 178-0.

Okay, let's look at the asterisks, meaning the eighty-five different wines* that were the best he has ever tasted from a particular estate.
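For those who want to check the scorekeeping, the increases can be recomputed directly from the counts above. A quick sketch (counts transcribed from the two vintage reviews; "percent increase" computed as (new − old) / old):

```python
# Percent increase in highly rated Bordeaux, 2005 vintage vs. 2009 vintage.
# Counts are taken from the article; tier labels follow Parker's own scale.
counts_2005 = {"90+ ('terrific')": 187, "96+ ('extraordinary')": 34, "100 ('perfect')": 2}
counts_2009 = {"90+ ('terrific')": 256, "96+ ('extraordinary')": 60, "100 ('perfect')": 21}

for tier, old in counts_2005.items():
    new = counts_2009[tier]
    increase = (new - old) / old * 100  # classic percent-increase formula
    print(f"{tier}: {old} -> {new} ({increase:.0f}% increase)")
```

Running it gives 37%, 76%, and 950% for the three tiers; note that 950% growth means 2009 produced ten and a half times as many "perfect" wines as the "Perfect Vintage," which is exactly the 17-0 to 178-0 proportion.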
This jump in perfection means either that in 2009 Bordeaux produced the best wine in the history of, well, history, or that we're seeing a sort of inflation not observed since Weimar Germany. Did the asterisk really mean "the finest potential of all the offerings I had ever tasted from that estate"? Or was it a way of adding a few points to the 100-point scale without saying so?

Actually, that's pretty easy to figure out. First, anything with an asterisk, if it really means that, should either have 100 points or, if lower, still be the highest score the wine ever got from Parker. Let's look at a few:

Wine                                     2009       2005
Carbonnieux                              89-92*     91
Clos Floridène                           88-90*     86-89
Clos l'Eglise                            96-100*    96
Clos Marsalette                          90-92*     92
Clos Saint-Julien                        91-93+*    94
Cos d'Estournel                          98-100*    98
d'Armailhac                              90-93*     90+
Dalem                                    89-91*     89
Destieux                                 91-94*     93
Duhart-Milon-Rothschild                  94-96*     94
Feytit-Clinet                            93-95*     93
Franc-Mayne                              90-93*     91
Gazin                                    94-96*     94
Giscours                                 91-93*     91
Grand-Puy-Ducasse                        90-93*     91
la Chapelle de la Mission Haut-Brion     91-94*     91
la Tour-Carnet                           91-93*     91
Lalande-Borie                            88-90*     90
le Dôme                                  95-97*     96
les Grands Chênes                        90-92*     91
Magrez-Fombrauge                         92-95+     95
Malartic-Lagravière                      93-95+*    91-93
Montviel                                 87-90*     87
Olivier                                  88-90+*    86-88
Pauillac de Château Latour               91-93*     91-93
Potensac                                 89-91*     90
Poumey                                   88-90*     87-89
Saint-Pierre                             94-98*     92-94
Trottevieille                            91-93+*    92

In other words, every one of those twenty-nine wines scored, in 2005, within or above the range of their scores in 2009. How could the "91-93+" point Clos Saint-Julien be the best Clos Saint-Julien ever, when the 2005 scored 94 points? Is the 90-92 point 2009 Clos Marsalette really better than the 92 point 2005? How about the Pauillac de Château Latour, which earned identical "91-93" point scores in both years? No, something is clearly wrong here. Either the points don't mean the same thing year to year, or something fishy is happening. What could it be? Before we answer the last question, let's look at a couple more ratings from 2009.
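The test the table applies can be made mechanical. A minimal sketch using three of the scores above (the "+" qualifiers are dropped for simplicity): if "*" really meant "best ever from that estate," no asterisked 2009 range should be matched or beaten by the same estate's 2005 score.

```python
# For a few asterisked 2009 barrel ranges, check whether the 2005 score
# already fell within or above the 2009 range -- the article's test of
# whether "*" can really mean "best ever tasted from that estate."
# Scores transcribed from the table above; "+" modifiers are ignored.
wines = {
    "Clos Saint-Julien": ((91, 93), 94),   # (2009 low, 2009 high), 2005 score
    "Clos Marsalette":   ((90, 92), 92),
    "Cos d'Estournel":   ((98, 100), 98),
}

for name, ((low, high), score_2005) in wines.items():
    if score_2005 >= low:
        print(f"{name}: 2005 score {score_2005} is within or above the 2009 range {low}-{high}*")
    else:
        print(f"{name}: 2009 range {low}-{high}* clearly exceeds the 2005 score {score_2005}")
```

All three print the first message, which is the article's point: in each case the asterisk marks a wine that, by Parker's own earlier numbers, was not his best ever from that estate.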
These really stood out to me:

Ad Vitam: The 2009 Ad Vitam is the first example of Ad Vitam I have tasted.

Durfort-Vivens: Normally the proprietor at this estate refuses to let me taste, so I was surprised when he did so this year.

Pedesclaux: The 2009 is the first Pedesclaux in over thirty years that I can recommend.

None of those wines got an asterisk. The first Ad Vitam is, by definition, the "wine [that] had the finest potential of all the offerings I had ever tasted from that estate in nearly 32 years." Right? Maybe we give Parker a pass on that one. How about the Durfort-Vivens? If they did not let him taste it in the past, isn't it in the same boat as the Ad Vitam? But most interesting is the Pedesclaux. If it is the first he can recommend in over thirty years, why doesn't it get an asterisk? "Over thirty years" goes all the way back before the famed 1982 vintage, so unless Parker was absolutely crazy about the 1978 Pedesclaux, something is missing. An asterisk, to be precise.

So is the 100-point scale inconsistent from year to year? If so, then it has no meaning at all. Or if it has any meaning, it's only valid as a comparison within a single vintage—but nothing in the explanation of scores says anything about that. If the 100-point scale turns out to be an arbitrary single-vintage rating method, what does that do to Parker's credibility? I respectfully submit that it leaves it as dead as Pliny the Elder, the only reviewer of comparable heft in wine history.

The other possibility, of course, is that the asterisk is not really an indicator of "best ever," but just a way to add a few points to the 100-point scale without admitting it. You can't give a wine 102 points on a 100-point scale, but you can add an asterisk and pretend it means something else.

Ultimately, it doesn't really matter what Mr. Parker intended. What matters are the conclusions consumers reach. Will they buy wine based upon a new 100*-point scale?

I don't care about motivation.
I care about credibility. —Eliot Spitzer

David Honig, the Publisher, looked at the thousands of quality wine blogs and realized there was a ready-made staff for the best wine magazine in the world. David has been running 2 Days per Bottle for two years now, and started up The 89 Project, focusing on that most unfortunate of scores, "89." He is a self-educated oenophile and defers to the tremendous experience and wisdom of the amazing staff at PALATE PRESS: The Online Wine Magazine.

Comments

Rory Conroy (http://joecorkscrew.wordpress.com/):
Well done. There are huge flaws with 100-point rating systems, and the implementation of the asterisk only highlights the lack of credibility. Still, the masses of Parker followers, and the retailers perpetuating the system, will continue to base buying decisions on pieces of paper with bold numbers hanging from wine shop shelves.

Brian (http://norcalwingman.com):
Regardless of the asterisk, the wines Parker rates highly will sell. They will sell because they meet a certain palate profile that Parkerites look for. In addition, the unfortunate side effect (for the drinker, not the consumer) is the price effect of the 90+ wines (* or not) on the rest of us. Good or bad, it is what it is, and as long as RP is rating wines this effect will continue. As a side note, we all know that judging is subjective; perhaps, hopefully not, Mr. Parker is concerned with what effect his judgments will have on a wine/winery, and therefore…* My 2 cents, Brian

Vintuba (http://www.vintuba.com):
Not only is the 100-point system not based on any systematic approach to tasting, it is also 100% subjective. I would love to see the methodology behind the tasting and awarding of points. I am reminded of how arbitrary the awarding of points is when I watch Gary V. pull points out of his backside when he reviews a wine. Credibility can be regained if the veil over how points are awarded is lifted. FYI, the 100-point system is actually the 50-point system (50-100), and in most cases the 20-point system, because rarely do we see a wine rated below 80 points.

Todd Wernstrom (http://www.icebucketselections.com):
ALL rating systems are flawed because they are employed by humans. I defy anyone to honestly say that the same wine tastes the same when it's tasted more than once. Even putting aside bottle variation, something that most producers and so-called experts simply ignore, it's just not possible to capture any wine's essence with a number, letter, or any other symbol. And I don't blame the Parkers or the retailers or the producers or the consumers for the continued reliance on ratings. We're all to blame. It does seem that other than at the really, really high end (read: the crazy people who actually try to invest in wine to make a profit), the rating thing is losing traction. Slowly, true, but surely. Thanks, I think, to the explosion of blogs. Which dovetails with the new vs. old media debate. Like the millennials or not, they are changing the game. Nothing wrong with that.

Fred Tregaskis (http://www.newcellars.com):
Subjective to a point, yes, but can we (mostly) agree that Mozart's Piano Concerto No. 21 is absolutely awesome? As is Miles Davis' Kind of Blue? But I also believe these two pieces have to be judged from completely different reference points. To say one is a 94 and the other a 92* is ludicrous. Why should we judge great wines any differently? "Oh, that Matisse is only an 89" sounds pretty stupid. Ratings can suck the real beauty out of great wine (and art). "This wine is really going to be great in a few years" is enough information for me.

Todd Wernstrom (http://www.icebucketselections.com):
Finally someone got it just right! Kind of Blue awesome? Indeed. Sibelius 1 lovely? Granted, he was no Mozart, but indeed again.
Perhaps we should just stick to actual adjectives to "rate" wines. The fact that our subject matter is mostly (or all, depending on your perspective) subjective doesn't make it not rateable. Just not quantifiable. Works for me.

Larry Chandler (http://www.overabarrel.net):
I would definitely say that Fred Tregaskis' comment is worth 96 points. In fact, this entire blog post is probably 93+*. I'm not sure which Matisse is worth only 89 points; most I've seen are well into their 90s. And Mozart too is rarely below 91 points. I was reading this over my morning coffee (84 points) and bagel (87 points), and thought everything should be rated with points. It would make life simpler, with less time wasted on actual thought. Having other people do my thinking for me lifts a great burden from my life (82 points). Well, time to get to work (70 points).

Gregg Burke (http://www.maratenes.com):
Points are just silly! But nevertheless, Parker, the Speculator, and the rest will be with us for a while longer. I do not use points in my shop, and it has served me well. People have begun to trust me to select wines for them because I take the time to find out what they like, and not just rely on the palates of self-appointed wine gods. Great blog. Cheers

Susan Guerra (http://www.njmonthly.com):
I like Larry's idea of rating everything. So here goes: Sitting here reading Palate Press when I really should be writing or cleaning my house or organizing my desk: *Priceless. Note: The use of the (*) after my score of priceless signifies that this particular procrastination activity has the finest potential of all the other procrastination activities I have engaged in today.

Aiken Hamilton (http://wwworonzai.com):
[Quotes Todd Wernstrom's first comment, above, in full.] +1

Don R:
Remember, Parker is a lawyer by training. And in the case of the estimable O. J. Simpson, it was lawyers who got him off in the murder of his wife. And now, years later, O. J. is a criminal for common thuggery. Parker made a good living off of his ability to write inflated reviews. Marvin Shanken ran with the concept and sells tons of advertising to the wealthy; wine is just a vehicle for his ad sales. One is P.T. Barnum in verbose print, and the second is P.T. Barnum redux in glossy pages.

Dan Berger:
Anyone who uses the word "system" to speak of the 100-point scheme knows nothing of systems. This article is not only brilliantly done, but should be used as a starting point to analyze how numbers are routinely misused in society and still believed. Mathematician John Allen Paulos of Temple University, in his many books (including Innumeracy), has long decried this obsession Americans have with "proving" something with numbers. Is that the sound I hear of wool being pulled down…?

Matt:
I agree that there is some truth to your basic point that Parker is using the asterisk as a way to squeeze a little more space into a point scale that is getting cramped at the top. I think you are overstating some of the evidence you bring to bear on that point, however.
For instance, your comparison of the number of wines that Parker rated over particular thresholds in the 2009 vintage (vs. the 2005 vintage) is a valid comparison only if he reviewed the same number of wines. My recollection — admittedly a vague one — is that he published significantly more reviews for 2009 than he did in his 2005 barrel-sample report. I am not sure how to compare the total number of wines tasted (since not every review is published) unless he mentioned it in his introduction.

Also, if you want to nitpick about particular asterisks, it makes no sense to do so by comparing barrel-sample scores from 2009 with in-bottle scores from 2005, as you do in a number of cases. Parker says the asterisk is a comparison of what the wines tasted like to him from barrel, not from bottle. And it is downright silly to argue that a range of 93-95+* (Malartic-Lagravière '09) is not more promising than a range of 91-93 (Malartic-Lagravière '05), simply because they admit the possibility that the wines could end up in the same place. Sampling from barrel is an inexact science, but the higher range unambiguously indicates the critic thought that wine more promising.

I actually think that a more interesting question would be WHY the scale is getting so cramped at the top that Parker seems to see a need for more room. My understanding is that Parker maintains that his "90" in 2010 is the same as his "90" in 1978, but the quality of winemaking in general has improved so much over those 30+ years that more wines deserve high ratings. There is no doubt some truth to his premise about improved quality, but I sincerely doubt that it accounts for all of the variation in ratings.

Parker's 100-point scale was originally correlated to the typical American academic grading scale as it was used (or as he interpreted it) at the time. Thus, 96-100 = extraordinary, 90-95 = outstanding, 80-89 = above average to very good, 70-79 = average, etc. It should not be surprising, therefore — even if Parker is loath to admit it — that as academic grading norms in this country have shifted, so have the connotations accorded to point ratings on the Parker scale. Although there may still be places in this country today where a flat C is "average," my impression is that the norm on most grading scales is much, much higher — more like the B+ range. Naturally, wine drinkers who are used to seeing B+ as an average grade do not get terribly excited by wines that are rated 87-89; and just as naturally, critics who are trying to communicate with those wine drinkers adjust their scales accordingly.

To repeat an analogy I have made elsewhere in the past: I do not rate wines for a living, but I do teach in a graduate program. When I assign grades, should I base them on the scale I experienced when I was a student, or should I use what I understand to be the prevailing scale in my program? If I want to communicate effectively with my intended audience — my students and their prospective employers — my grading scale needs to be similar to the grading scales of my colleagues (and to those used in similar academic institutions), which will provide the most immediate comparison both for students and for prospective employers. I assume that wine critics try to be equally sensitive to their intended audiences in calibrating their rating scales.

1WineDude (http://www.1winedude.com):
I just want to say that Fred's and Todd's comments were AWESOME. 98*!