This article on what it means to “have an opinion” is not bad, but it muddles two fundamentally different kinds of “opinion”, and as a result it fails to get at the root of the problem and misses some important ideas about diversity and knowledge.
People use words in messy and problematic ways and always will. As linguistic purists and logicians we may grind our teeth when we see ambiguity (single terms used to mean multiple things in the same argument) and amphiboly (whole sentences or clauses that can be read in more than one way, and that get read both ways within a single argument) used to justify all manner of nonsense, even while as poets we can revel in those same phenomena.
As such, a great deal of philosophy has always started with getting our terms clear. Aristotle frequently introduced discussions of some idea or other with some variant of the statement “X is said in many ways…” and then went on to tease out the nuances and differences, as well as the similarities, among those ways.
The difference between “opinions” and “being wrong” is not that some opinions are wrong, but that opinions, properly understood, are statements of fact about the person giving them, stated as if they were facts about some part of the world beyond that person. Judgements, which the author conflates with opinions, are different animals.
To repeat myself, opinions are facts about ourselves stated as if they were facts about the world: “Penguins are the best!” is identical in meaning to “I like penguins” or “I think penguins are the best”, both of which are clearly and purely facts about me: what I like, what I think. In the first form, “Penguins are the best!”, I appear to be making a claim about the world when I’m actually making a claim about me.
Distinguishing between ourselves and the rest of the world is one of the greatest challenges any human being faces. It’s a process that starts in infancy and frequently also seems to stop there, which leaves us with all kinds of muddled opinions and ideas. Our internal states–our emotions, our ideas, our attitudes–are all real, but they are all facts about us, not facts about any other part of the world.
Confusion on this issue is commonplace. People say things like “We need to look at facts, not emotions!” or “We need to follow our hearts, not the facts!” as if “fact” and “emotion” were different kinds of thing.
They aren’t: emotions are facts… about us.
Facts about us may or may not be as relevant as facts about things that are not us when we’re making a decision, but treating emotions and the rest of our internal state as if they weren’t factual is as big an error as ignoring facts about other things in favour of facts about us. All facts matter. Which facts matter most depends on the circumstances.
And when we get confused on this score, we get into trouble.
To take a personal example, “You don’t love me” has been known to come out of my mouth when the accurate statement of knowable fact would have been “I don’t feel loved.” This is the same phenomenon that drives so many wacky claims in the public sphere.
One very common fact about themselves that people bring up when arguing for wacky ideas is what they personally can or cannot imagine. I harp on this a lot, but it bears repeating in this context, because what someone does or does not, or can or cannot, imagine is in most cases almost entirely due to facts about them, not facts about the rest of the world. We know the world is full of stuff–from quantum mechanics to Darwinian evolution–that no one could imagine, until it turned out to be true.
It’s important to keep this view of opinions in mind because it makes clear that justifying a claim about something that is not us by pointing to a fact about ourselves makes no sense. It’s as if we said, “I have upward-sloping ear canals [+], therefore evolution is false.” Yet this is what a great many “it’s my opinion” claims amount to, and the facts being pointed to are generally the person’s feelings or emotional response to some part of the world.
Many people are unhappy with the idea of anthropogenic climate change, but “I am upset that my lifestyle might be contributing to a major economic and ecological disaster” is not a fact that should change the plausibility anyone gives to “The success of the anti-nuclear movement in the ’70s and ’80s means we are now facing a major crisis due to our vast CO2 emissions, 80% of which come from the coal and oil power plants that nuclear would have replaced.”
The other kind of “opinion”, which, in contrast to the author of the article linked above, I want to call “judgement”, is quite different. It is simply the legitimate effect of our prior beliefs and biases in an inherently uncertain world, and those priors must influence our beliefs if our beliefs are to remain consistent.
This is a necessary consequence of the Great Bayesian Revolution that is slowly sweeping the world: the realization that for our beliefs to be consistent, they must reflect something about the subject who holds them. There is no view from nowhere–any more than there is a view of nowhere–and it turns out there is just one way of correctly accounting for where we stand when evaluating the evidence for or against some idea. [*]
In Bayesian language, our biases are called “priors”, as in “prior beliefs”, which are the things we bring to any idea when we are faced with new evidence. Bayes’ Rule, which I’ll describe below, is the only provably correct way of updating our prior beliefs in the face of new evidence, and that updated belief will be our prior the next time new evidence comes along.
Two people starting with different priors will always reach slightly different conclusions from the same evidence, and anything else would be a violation of Bayes’ Rule, which says the strength of our belief in an idea after we see some new evidence should be proportional to the strength of our prior belief, multiplied by a factor that depends on the strength of the evidence.
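In standard notation, with H the idea and E the new evidence, that proportionality is just Bayes’ Rule itself; this is textbook material, not anything peculiar to this post:

```latex
% Posterior plausibility of idea H after seeing evidence E:
% the prior plausibility times an evidence-dependent factor.
P(H \mid E) = P(H) \times \frac{P(E \mid H)}{P(E)}
```

The factor P(E|H)/P(E) is exactly the “factor that depends on the strength of the evidence” mentioned above.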
Strong evidence will make everyone’s beliefs stronger, but if I start out thinking something is pretty unlikely and you start out thinking “hey, it could happen”, then after we’ve seen the same strong evidence for it you’re still going to have a stronger belief than me, although my belief will be much stronger than before. “Differences of opinion” of this kind aren’t just natural, they are necessary if we are all to keep a reasonably consistent set of beliefs in our heads.
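Here’s a minimal sketch of that situation, with numbers invented purely for illustration: both observers see the same evidence, ten times more likely if the idea is true than if it is false, and both end up more convinced, but the gap between them persists.

```python
def update(prior, p_e_if_true, p_e_if_false):
    """Bayes' Rule: turn a prior plausibility into a posterior one."""
    num = p_e_if_true * prior
    return num / (num + p_e_if_false * (1 - prior))

# The same strong evidence for both observers: E is ten times more
# likely if the idea is true than if it is false.
p_e_if_true, p_e_if_false = 0.5, 0.05

skeptic = update(0.05, p_e_if_true, p_e_if_false)      # "pretty unlikely"
open_minded = update(0.50, p_e_if_true, p_e_if_false)  # "hey, it could happen"

print(f"skeptic:     0.05 -> {skeptic:.2f}")      # 0.05 -> 0.34
print(f"open-minded: 0.50 -> {open_minded:.2f}")  # 0.50 -> 0.91
```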
The Bayesian idea of “strength” of evidence for an idea is also pretty simple: if the evidence–the facts, the data–would be pretty likely if the idea is true and pretty unlikely if it is false, then the evidence is strong. When Galileo saw moving lights around Jupiter, and only Jupiter, using his telescope, he realized that if there were objects like the Earth’s moon orbiting Jupiter, such an observation would be really likely. Someone objected that the objects could just be some kind of optical effect in his telescope, and he replied that if that were the case, why didn’t he see moving lights around any other celestial body? It’s that combination–the observed effect is likely if the idea is true, unlikely otherwise–that makes something good evidence for an idea.
If the evidence would be pretty likely to happen regardless, it’s not so good. And if the evidence is unlikely to happen if the idea is true but pretty likely otherwise, it’s actually a counter-argument.
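In the same notation as before, this notion of strength is the likelihood ratio, sometimes called the Bayes factor, and the three cases just described fall out of it directly:

```latex
% Strength of evidence E for idea H: the likelihood ratio (Bayes factor).
K = \frac{P(E \mid H)}{P(E \mid \lnot H)}
\qquad
\begin{cases}
K \gg 1     & \text{strong evidence for } H \\
K \approx 1 & \text{E tells us little either way} \\
K < 1       & \text{evidence against } H
\end{cases}
```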
As an example of evidence that shouldn’t change anyone’s beliefs very much, an acquaintance once argued that a psychic she’d been to was uncannily accurate because they had predicted “you will take a trip to the East, over water”, and she was indeed travelling from Nanaimo to Vancouver in the next month, an occurrence so frequent that predicting it really doesn’t count for much.
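Running that prediction through Bayes’ Rule, with numbers that are made up but generous to the psychic, shows why: the trip is nearly as likely without psychic powers as with them, so the posterior barely moves.

```python
# Hypothetical but generous numbers: the trip is very likely either way.
prior = 0.01                # plausibility that the psychic has real powers
p_trip_if_psychic = 0.95    # a true psychic would foresee the trip
p_trip_if_not = 0.90        # people near Vancouver cross the strait anyway

num = p_trip_if_psychic * prior
posterior = num / (num + p_trip_if_not * (1 - prior))
print(f"{prior} -> {posterior:.4f}")  # 0.01 -> 0.0105: barely budges
```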
Finally, the question of “how much evidence is enough” is a tricky one, because in many cases we don’t know what we don’t know, and nothing is certain… not even Bayes’ Rule itself: if you give me evidence against it, I’ll take it seriously, although I’m not holding my breath. But “maximally plausible” and “maximally implausible” are not “certainty” (which is also called “faith”: an idea held in such a way that no amount of evidence will change someone’s mind about it).
So when we argue, we shouldn’t, as good Bayesians, be trying to prove anyone wrong in the sense of demonstrating that their belief is impossible. There could be unicorns. We should instead focus on demonstrating, on the basis of the evidence, that one particular proposition is way more (or less) plausible than the others. “The nineteen nitwits acted with a relatively small number of other faith-addled idiots of the same ilk to commit 9/11” is way more plausible than “Zionists faked it all”, and we don’t need to demonstrate anything more than that. Favouring a less plausible belief over a more plausible one is the root of a great deal of evil, and once it is clear someone is committing that error, we can consider the argument over.
Beyond that, however, what each of us doesn’t know far exceeds what we do know.
There is a vast literature in economics, politics, sociology and history that I’m not conversant with, for example, although I’ve done a lot of reading on those subjects. And my experiences as a straight, white, Anglo, educated, middle-class, professional male might not cover the entire ground of human experience. Just guessing about that, mind.
Policy questions are hard, and our disagreements about them often stem from judgements that arise in a reasonably Bayesian way from our prior beliefs and the diversity of evidence–including personal experiences–we’ve encountered. Some ideas–vaccines cause autism, for example, or homeopathic treatments are more effective than placebos–are contradicted by so much data that no one can credibly claim to honestly believe them without admitting to priors that are conspiracy-theory-crazy. But many differences aren’t like that, even though on the face of it they are nearly as radical as those between the pro-vax and anti-vax sides. In those cases, digging into “WhyTF do you believe that?” is often surprisingly fruitful, if you can get past the “This person is nuts” reflex on both sides.
Bayes’ Rule tells us that priors matter, and a diversity of priors in any group is likely to bring us closer to convergence on the most reasonable set of beliefs faster and more effectively than a prior monoculture would. The body of evidence worth considering, especially for complex social issues, is large and diverse and not uniformly available to everyone.
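A toy simulation (numbers invented for illustration) shows the convergence half of that claim: agents starting from wildly different priors, all updating on the same stream of modestly informative evidence, end up in essentially the same place.

```python
# Agents with very different priors all update on a shared stream of
# evidence, each piece twice as likely if the idea is true as if not.
priors = [0.01, 0.2, 0.5, 0.8, 0.99]
p_e_if_true, p_e_if_false = 0.6, 0.3

beliefs = list(priors)
for _ in range(15):  # fifteen pieces of evidence
    beliefs = [
        p_e_if_true * b / (p_e_if_true * b + p_e_if_false * (1 - b))
        for b in beliefs
    ]

print([f"{b:.3f}" for b in beliefs])
# Every agent ends up above 0.99: shared evidence steadily shrinks
# the influence of the priors without ever erasing it entirely.
```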
As such, Bayesianism favours diversity even as it encourages and enables convergence on a common set of well-supported beliefs. And it helps us understand why and how people with different backgrounds can have legitimate differences in judgement–not opinion–about all kinds of things that seem pretty obvious to us, while at the same time making clear that anyone who persists in favouring the overwhelmingly implausible over the extremely plausible is probably just nuts.
[+] Which is true, by the way: every time a physician or audiologist sticks that thing in my ear and looks inside my head they say, “That’s odd…”, which I’ve learned not to take personally, as there really is a great deal that’s odd inside my head.
[*] In Bayesian terms, I am neither a subjectivist nor an objectivist. I distinguish between the subjective “plausibilities” that we assign to our ideas and the objective “probabilities” that we use as evidence. Since Bayes’ Rule is written as a ratio of probabilities that is used to update a prior plausibility to a posterior one, the difference in kind is not obvious, but understanding it is crucial to accounting for both the significance and limitations of subjectivity and objectivity in the Bayesian picture.