After I published a post on my own blog, The Wine Case, about the very personal and idiosyncratic nature of the 100-point scores given out by Robert Parker and co., I was struck by the angle a number of commenters took in reacting to it.

In that piece, I endeavored to show that scores should be read with great caution, as a personal opinion and not as gospel truth or an objective evaluation, and that more attention should be paid to the review than to numbers of stars. To this, many replied by saying, in a nutshell, that if scores were so powerful, it’s because the customer doesn’t care.

Tai-Ran Niew, a former “accidental investment banker” and now very thoughtful student of wine, quoted a rather depressing bit from Jancis Robinson, in a comment on my blog post:

“My somewhat vain aim has always been to enlighten and enthuse my readers and viewers as much as possible, so that they can make as informed a choice as possible, based on their own tastes. But more and more I reach the conclusion that however hard I try and instill confidence in wine consumers, the great majority of them just want to be told what to buy.” Or as a friend said: “I am too busy to spend time on this, just tell me what to buy.”

Call me naïve or optimistic, but I don’t really believe that’s true. Do consumers take shortcuts, sometimes? Of course. Do some rely on Parker, Suckling and Wine Spectator scores more than on actual tasting notes or personal taste? Sadly, yes.

Should we just throw in the towel and all turn to stars and scores? I’ve already debated that question, and my answer is very clearly no. And half of the Palate Press readers who answered an online poll about how wine reviews should be presented said they wanted no scores of any kind. That’s a lot of readers who actually want to read the details and understand why – not just how much – a wine is recommended.

Actually, I believe that if the consumer is sometimes lazy, it’s because the wine world has sometimes been lazy. In the worlds of restaurants, literature, and music, neither writers, readers, nor the industry rely so heavily on scores. Even with movies, the weekly ads in the newspapers use at least one qualifier (“Outstanding,” “A must-see”) alongside the stars or thumbs up. Only in wine ads or stores do numbers and stars, all by themselves, count for so much. Only in the wine world does a critic promote his work with a video of himself shouting “I’m 95 points on that,” apparently thinking he will be taken more seriously because of it.

If we – producers, distributors, retailers, writers, etc. – really want the consumer to be interested in more nuances and details about wine, then we have to stop seeing scores and other such shortcuts as an essential part of wine reviewing.

Consumers don’t care? I think they’ll care more if we all care more.



About The Author

Remy Charest

Rémy Charest is a Quebec City based journalist, writer, and translator. He has been writing about wine and food for over 12 years in various magazines and newspapers. He writes two wine blogs (The Wine Case, in English, and À chacun sa bouteille, in French) and, as if he didn’t have enough things to do, he also started a food blog in English, The Food Case, and one in French, À chacun sa fourchette.


  • http://www.fred-co.com/ Fred

    “If we . . . really want the consumer to be interested in more nuances and details about wine, then we have to stop seeing scores and other such shortcuts as an essential part of wine reviewing.”

    I disagree, Remy. Here’s why.

    Setting aside all the arguments about what a score means/implies/accounts for, in the end, it’s simply a number that accompanies the wine, much like the price that (supposedly) reflects its quality. The reason why critics put scores in their reviews is because they believe (rightly) that people need them. They need them when they’re reading about a wine, and they need them at the point of purchase.

    I would argue that most people, when presented with a paragraph of tasting notes and a score, are going to look at the score first and then decide if the review is worth reading. Would you bother reading a review of a wine with a dreadful score? Didn’t think so.

    I would also argue that a high score on an unfamiliar label/varietal/appellation might intrigue a reader enough to spend time with the actual review. In this instance, (high) scores have the effect of broadening a drinker’s horizons, instead of narrowing them as the naysayers fear.
    It doesn’t matter who the author of the score is, readers still benefit from them in this manner.

    At the point of purchase, however, it absolutely matters who the author is. As tireless a taster as he is, BevMo’s Wilfred Wong is no Robert Parker. Ditto for Alder Yarrow and every other blogger. They simply reach too small an audience to have value at retail.

    I should add that all of this applies only to wine enthusiasts – people who care enough to read all the blather put out about a given wine. The vast majority of wine consumers do not share this behavior. They just want to buy a decent bottle and not feel ripped off.

    The trade side of the equation is rather different. Retailers and restaurants have their own reputations to consider. Retailers who make the effort to learn what their customers like and steer them to it can do quite well without outside opinion. But it’s a lot of work. Oftentimes, in the absence of a warm body, a WS score works just fine. Restaurants that care about food and wine pairing will list wines that accentuate their cooking and respect their customers’ wallets. But it’s a lot of work. Especially when a few trophy wines can burnish their reputation just as well.

    Wine scores aren’t going away anytime soon. We need them because there is both an overwhelming amount of product choice and an overwhelming amount of product reviews. Things might be different were wineries capable of producing some meaningful advertising of their own, as is done in virtually every other consumer products category.

    • http://www.violentfermentation.blogspot.com Hoke Harden

      Ah, umm.

      “Setting aside all the arguments about what a score means/implies/accounts for, in the end, it’s simply a number that accompanies the wine, much like the price that (supposedly) reflects its quality. The reason why critics put scores in their reviews is because they believe (rightly) that people need them. They need them when they’re reading about a wine, and they need them at the point of purchase.”

      Which would exactly support Remy’s contention that scores are for lazy consumers, no?

      “Would you bother reading a review of a wine with a dreadful score? Didn’t think so.” Yes I would. And so would Remy and others, I suspect. I would want to know why it got the bad review.

      “They simply reach too small an audience to have value at retail.” Ah! Now you’re getting to the nub—well, a nub, anyway. It’s all about value to retail. (See: lazy consumer, wanting to be told what to buy.)

      Attaching a precise numerical scale score to a wine is about as bogus as a wine writer can get. It is totally subjective, susceptible to confusion and misinterpretation because it is at the very best a massive over-generalization, and has no merit or basis other than to make lazy consumers happy while providing them with what they lack (the ability to make their own decisions?).

      • Tom Wark

        Actually Hoke, I used to love to read the Wine Spectator’s and others’ reviews of 55-point wines. They were often hilarious and usually delivered deadpan, making them even funnier when you read between the lines.

        Personally, I think written reviews of horrid wines are particularly useful, especially when the wine in question is of impressive pedigree.

  • Tom Wark

    Here’s an interesting experiment…

    Put two wines with similar labels in case stacks next to each other in a retail outlet.

    Price them the same, around $10.00.

    Each carries a short description of the wine that is similar in nature.

    Below each description, have one wine with a “92” point score. Have the other with an “87” point score.

    See which sells fastest.

    Tom…

    • http://www.palatepress.com David Honig

      Tom, You propose a flawed experiment. There is no question that, given everything else equal, the number will prevail. I don’t think anybody supposes otherwise. The better question, though, is whether a better number will prevail over more information. With that in mind, I suggest a different experiment.

      Put two wines with similar labels in case stacks next to each other in a retail outlet.

      Price them the same, around $10.00.

      One carries a 150 word or more description, including tastes, anticipated cellar life, and a suggested food pairing.

      The other has just a number.

      They both score 89.

      See which sells fastest.

      “But wait,” you say, “that won’t tell us anything.”

      Right.

      A better experiment than yours or mine would be the following:

      Put two wines with similar labels in case stacks next to each other in a retail outlet.

      Price them the same, around $10.00.

      One carries a 150 word or more description, including tastes, anticipated cellar life, and a suggested food pairing. It has no number, but the review is positive, and somewhere in it the word “outstanding”* is used.

      The other has just a number, 90.

      Both reviews are from the same reviewer.

      See which sells fastest.

      *The use of the word “outstanding” is to normalize the experiment, as that is the definition of a 90 pt wine.

      In the last experiment you will truly test numbers versus content.

      • http://www.violentfermentation.blogspot.com Hoke Harden

        Another experiment. This one was actually done.

        Put a series of wines out for a professional tasting (all highly tuned pro palates ITB).

        Each wine was accompanied by two reviews, both written and pointillist; all indication of source or critic was expunged; in most cases the two reviews either directly conflicted with one another or seemed to describe two very different wines; often the point scores were in wide variance.

        Tasters were asked to suggest A)what the wines were, B) who the critic/source was.

        The pros were pretty damned good (some wholesalers, some retailers, some importers), otherwise they wouldn’t have been there.

        They got most of the wines. They had an even better success at identifying who the critics were–almost totally nailed it.

        And almost every one there agreed that the reviews (and especially the scores) had surprisingly little to do with the wines. One snark suggested a random choice generator would have been about as reliable an indicator. :^)

        • http://www.fullglassresearch.com Christian Miller

          But wait, there’s more…

          1) In one experiment, regular wine consumers (not aficionados or wine geeks) were exposed to 3 bottles of the same type of wine, same retail merchandising and price. One had a shelf talker with a positive review by Parker, one a positive review shelf talker by a made up name, one had no shelf talker. The two reviewed wines were far more likely to be purchased than the one with no shelf talker, but there was no statistical difference between Parker and Mr. Jones.

          2) In another experiment, high frequency/high end wine drinkers were proposed a celebratory dinner for a wine expert friend (high end wine consumption situation with plenty of social exposure) and given a list of renowned or cult California Cabs to choose from for the occasion. There was no correlation between Spectator points and likelihood of being chosen.

          • http://www.violentfermentation.blogspot.com Hoke Harden

            Summation? The Parkeristes are sheep and the Spectatoristas are gullible but clueless? :^)

          • http://www.fullglassresearch.com Christian Miller

            Ha Ha Ha, but my conclusion would be that hard-core Parkerators are in much smaller supply than people think. #1 tells us that the presence of endorsement is more important than the source. The results of experiment #2 in detail showed how a slowly built reputation trumps a score.

  • Warren

    The points system is essentially symptomatic of the society that we live in at present. In this society people abrogate responsibility to someone else. Best example is the reliance on “ratings agencies” to evaluate financial investments. Investors are too lazy to do it themselves and so they rely on someone else to do it for them. Wine scores are basically the same. The consumer wants to rely on someone else. But, overlay that with the concept of “conspicuous consumption” and “bragging rights” and you get the modern phenomenon of the wine scorer. In this world, people who wish to be seen as successful and important want external confirmation in support of their quest. So the big mansion, the Porsche, the trophy wife. All part of it. And the big one, the ability to pull out a bottle of wine from the “cellar” and brag that it’s a Parker 96-pointer, is also part of the phenomenon. And I think this is predominantly an American characteristic. The USA is certainly far more focused on wine scores than just about anywhere else I have ever been (and I have been around a lot, not being American).

    • Tom Wark

      Let’s all first concede that certain disciplines and crafts lend themselves to critical evaluation. We know this is the case because things like art, film, music, food and wine have been critically examined for 100s (1000s?) of years. Let’s further concede that the disciplines and crafts surrounded by critical reviews and examination exist as such because there is an audience for someone else’s opinion of these things. There is, for example, no audience for critical examination of drink coasters. Hence, no body of critical examination of drink coasters.

      Once you concede this, then the question of the 100-point scale or 20-point Davis scale or stars is nothing more than an issue of the level of tolerance for a specific way of critically examining a wine.

      Assigning a point or score to a wine (accompanied by the score’s meaning) is indeed a legitimate way of critically examining wine, particularly if you are comparing two or more wines.

      But it should be said that nearly every person or critical body that utilizes scores accompanies them with words describing the wine. In my view the use of scores is an addendum to the critical review, and a useful one at that.

  • http://www.atfirstglass.com Nancy

    Tom W. is essentially right. The riff on his experiment, in which a wine labeled “90” would stand next to a wine labeled “outstanding,” would certainly result in the “90” wine selling better than its partner. People trust numbers, not because they are lazy (or what have you), but because they think the 100-point score is the default way of experiencing wine. For better or worse I can’t foresee this changing until wine is as normal a part of life as things like movies, which aren’t thought of that way. Note: the wines people normally buy — Franzia, Livingston — of course don’t need number scores.

  • http://tairanniew.tumblr.com Tai-Ran Niew

    Remy, thanks for the kind intro.

    Looking to the music industry is very interesting. If we could digitize wine samples and pump them through YouTube …

    Clearly we are not quite there yet! But check out what The Sampler and Vagabond Wines are doing in London. “Trust your palate” is a real movement. We’ll see where it gets to in the next 10-20 years!