I know friends who swear by their wine, and for good reason. I enjoy a glass or two from time to time. Dry or sweet, smooth or tart, red or white or one of hundreds of shades in between, wine is a taste that millions of people enjoy. When picking up a bottle at the store, I rarely pay much attention to price except to avoid paying a premium. I've always enjoyed the inexpensive brands, and not just because they are cheap. While some vintages and makers have disappointed, most bottles, whatever the cost or vineyard, have satisfied.
There are plenty of people, perhaps including some of my wine-drinking friends, who hold to the belief that there are significant, discernible differences between wines, that a wine rated 95 really is a different animal from one rated 85. Perhaps there are. Maybe it does matter what a judge or panel of judges has determined about a wine's quality. But how would we know whether they're on to something, and more importantly, whether they are capable of consistently telling the difference?
The answer, of course, is to test judges scientifically. Can they truly tell the difference between an expensive rare vintage and a "Two-Buck Chuck" also-ran? Is a wine rated 90 so much different from one scored at 86? Wine connoisseurs would argue that there is certainly a chasm of quality. And that opinion matters to winemakers where it counts, in sales:
A few points may not sound much but it is enough to swing a contest – and gold medals are worth a significant amount in extra sales for wineries.
There have been several attempts to study the value that judges add to the wine world. Medals and sales contracts depend on getting good scores, and many a vintage has been made or broken based on a few digits recorded after spare drops were swilled and spat from a judge's mouth.
An article in the Guardian offers an intriguing look at one of the latest attempts to measure and quantify how well judges do when it comes to wine tasting and consistency. The results are not terribly surprising if you've followed earlier, similar research by Richard Wiseman, Robin Goldstein, Frederic Brochet, and others. The phrase "coin flip" seems to apply to what Robert Hodgson found.
"The results are disturbing," says Hodgson from the Fieldbrook Winery in Humboldt County, described by its owner as a rural paradise. "Only about 10% of judges are consistent and those judges who were consistent one year were ordinary the next year.
"Chance has a great deal to do with the awards that wines win."
These judges are not amateurs either. They read like a who's who of the American wine industry from winemakers, sommeliers, critics and buyers to wine consultants and academics. In Hodgson's tests, judges rated wines on a scale running from 50 to 100. In practice, most wines scored in the 70s, 80s and low 90s.
Results from the first four years of the experiment, published in the Journal of Wine Economics, showed a typical judge's scores varied by plus or minus four points over the three blind tastings. A wine deemed to be a good 90 would be rated as an acceptable 86 by the same judge minutes later and then an excellent 94.
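To see why a plus-or-minus-four-point swing makes awards look like chance, consider a toy simulation (my own sketch, not Hodgson's actual methodology): model a judge's reported score as a wine's "true" quality plus random noise on the order of four points, and count how often a borderline wine clears a gold-medal cutoff.

```python
import random

random.seed(42)

# Assumption: a judge's reported score is the wine's "true" quality
# plus roughly +/- 4 points of random noise, per the article's figure.
def judged_score(true_quality, noise_sd=4):
    return true_quality + random.gauss(0, noise_sd)

def wins_gold(score, threshold=90):
    return score >= threshold

# A wine whose "true" quality sits right at the gold-medal line.
true_quality = 90
trials = 10_000
golds = sum(wins_gold(judged_score(true_quality)) for _ in range(trials))

print(f"Gold medal rate: {golds / trials:.0%}")
```

With noise that size, a wine sitting at the cutoff wins gold roughly half the time, which is exactly the coin-flip behavior Hodgson describes.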
Hodgson's research has been ongoing since 2005, and what he's published on the study hasn't made him any friends in judging circles. What he discovered is what, if you really think about it, we'd expect to find. A lot goes into the experience of tasting wine (or any other experience you may have): the way the breeze or the fans are blowing, the people and conversations around you at the moment the wine touches your lips, even the effect the previous wine had on your palate. It is nigh impossible to taste a wine in a vacuum.
"I think there are individual expert tasters with exceptional abilities sitting alone who have a good sense, but when you sit 100 wines in front of them the task is beyond human ability," he says. "We have won our fair share of gold medals but now I have to say we were lucky."
So what's the takeaway for my friends, for me, for those of us who enjoy wine? Forget the critics, forget the judges, forget the labels and price tags. Pick up a bottle or two and try them. You'll soon discover which types of wine are your favorites and which you don't enjoy as much. You'll find winemakers you prefer and others you don't. Don't be afraid to take advice from friends, and even from the "experts," but don't rely on scores and medals to direct you to the bottles you'll find most pleasing. Use your tongue to find what works best for you, and explore the numerous options available.
And don't be surprised when something as subjective as the taste of wine turns out to be hard to "judge." The lesson that Hodgson and other researchers are really hammering home is that following the science, doing research, and analyzing the cold data can tell us more about the quality of the advice we're getting than simply assuming that a network of "experts" has it all figured out. Wine judges are just one set of experts whose opinions matter a lot but seem to be poor value for the money. I'm sure we can think of more than a few others like them. I'm looking at you, Dr. Oz.
As if you needed one more example, I'll leave you with one of my favorite segments from the Penn & Teller show Bullshit!, which features a "water sommelier."