Strictly speaking, the beer that we make in the pilot brewery on campus has to be destroyed as it is not duty paid, but we’re allowed to do taste tests, so we obviously do. And there’s a good reason too. Quality control is an important part of a profitable business, and if you’re brewing beer for a big brewer with a mainstream product and an established reputation, you have to carry out taste tests to ensure that the product meets specifications and, well, tastes the way it is supposed to taste.
We only use one of the two common tests: the triangle test. This involves trying to pick the odd one out from a panel of three. The beers are all presented in dark blue glasses so you don’t get any clues from the visual appearance of the beer (you can still see the presence or absence of froth, but that could equally be due to the way the beer is poured, so it’s not much of a guide). The other test asks you to pick which of two beers is the same as a third, or which is different from a third. It seems very much the same as the first test, so I was initially completely stumped to think why the chances of guessing right in the first test were one in three, while the chances of guessing right in the second were one in two (I’ve worked it out now, don’t worry).
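The difference in chance rates can be checked with a quick back-of-envelope enumeration. This is my own sketch, not course material: in the triangle test a pure guesser picks one of three glasses and only one is the odd beer; in the second (duo-trio-style) test the guesser picks one of only two candidate glasses against a reference.

```python
# Triangle test: three glasses, exactly one holds the odd beer.
# A pure guesser picks one glass uniformly at random.
triangle_glasses = ("odd", "same", "same")
p_triangle = sum(g == "odd" for g in triangle_glasses) / len(triangle_glasses)

# Duo-trio-style test: a reference glass plus two candidates,
# exactly one of which matches the reference. The guesser picks
# one of the two candidates at random.
duo_trio_glasses = ("match", "other")
p_duo_trio = sum(g == "match" for g in duo_trio_glasses) / len(duo_trio_glasses)

print(p_triangle)   # 1/3
print(p_duo_trio)   # 1/2
```

So the two tests feel similar at the table, but the guesser faces three equally likely options in one and only two in the other.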
But for such a simple thing, the triangle test is really rather interesting (and that is not just the test samples talking). There are beers from two batches, one from Batch X and two from Batch Y, and you have three glasses A, B, and C. And you have to say which glass contains the beer from X. As I said, the chances of getting it right by chance are 1 in 3. But ‘getting it right’ turns out to be quite a complicated sort of thing.
If the two beers really are different, e.g. beers from different companies or different styles of beer, then identifying the odd one out is straightforwardly getting it right. And we would expect most people to pick up on the difference: 70 or 80 percent of people, or more. That is way higher than 1 in 3 (33.3%). And if you can’t get it right, maybe that reflects badly on your palate or something (perhaps you have a cold).
On the other hand, if Batch X and Batch Y were made according to the same recipe in the same factory on consecutive days, and the brewery merely wants to test whether its product is consistent, then there shouldn’t really be any difference between them, so a “success” rate of 1 in 3 (33.3%) is the desired outcome. That shows testers are not detecting anything different between the two beers, because their evaluations don’t differ from chance. In this case, though, it seems a little strange to say selecting the beer from Batch X is getting it right, because there really shouldn’t be any difference between Batch X and Batch Y.
And yet some people can do it consistently. We have one guy on the course who has so far always chosen “correctly,” even though we have only been testing batches produced with the same ingredients in the same plant, processed the same way. The class as a whole has not significantly differed from chance. It’s, as it were, a huge pass mark for the brewery on consistency.
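How would you check that a class “has not significantly differed from chance”? One standard way (my sketch, not how the course necessarily does it) is an exact one-sided binomial test: if everyone is guessing, the number of correct picks out of n trials follows a Binomial(n, 1/3) distribution, and you ask how likely it is to see at least the observed number of successes by chance alone.

```python
from math import comb

def binom_tail(k, n, p=1/3):
    """P(X >= k) for X ~ Binomial(n, p): the chance that a panel of
    pure guessers gets at least k triangle tests right out of n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers for illustration: 20 tests, 8 correct.
# A guessing panel expects about 20/3 ≈ 6.7 correct, so 8 is
# nothing remarkable and the tail probability is large.
print(binom_tail(8, 20))
```

If that tail probability is large (say, above 0.05), the panel’s score is consistent with pure guessing, which here is exactly the pass mark the brewery wants.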
It could be the guy is just fluking it, or it could be he is really tasting a genuine difference between the beers that the rest of us cannot detect. But how are we really to tell? To say his palate is acute because he always gets it right is to assume that there is a genuine difference there to be detected, and that can’t be assumed if the point of the tests is to measure brewery consistency. On the other hand, if he’s always getting it right, after a bit that can’t be down to chance either.
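“After a bit” can be made precise. If each triangle test gives a pure guesser a 1-in-3 chance, the probability of a run of n consecutive correct picks is simply (1/3)^n, and it shrinks fast:

```python
def p_streak_by_chance(n, p=1/3):
    """Probability that a pure guesser gets n triangle tests
    in a row right, at p chance of success per test."""
    return p ** n

# How quickly a lucky streak becomes implausible:
for n in range(1, 7):
    print(f"{n} in a row: {p_streak_by_chance(n):.4f}")
```

Five in a row is already a 1-in-243 event (about 0.4%), so a long enough streak really does force a choice between “he detects something” and “an improbable fluke.”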