More and more it seems to me that we are living in a world where a critical mass of people lack the critical thinking skills necessary to engage in reasonable discussions about the most important issues of our time. This is a big problem. There is a long-lost territory known as “reasonable disagreement” that used to be a starting place for dialogue, engagement, and understanding the various perspectives that intelligent, sensitive humans can hold. But in order to get to that hallowed, fertile ground of reasonable disagreement, we must presuppose reason, for we can’t hope to transcend the limits of the tool without using it to its full potential first.
I have lamented–ad nauseam, perhaps–how little we seem to value critical thinking in these strange, strange times, and have argued that a minimum level of development of critical thinking skills is essential for any and all of us to move forward effectively in our respective endeavors, including our attempts to make sense of and utilize health-related information. However, the truth is – if indeed there is such a thing as truth! – that even when one possesses the requisite skills, it’s still an extremely difficult task to evaluate the validity of people’s claims, interpretations, and recommendations, even when those people are regarded as experts in their respective fields. This is where things get especially gnarly, because anti-intellectualism would be a big enough problem were it simply a matter of the ignoramus stubbornly refusing to yield to the expert’s relatively enlightened perspective. Things get gnarly because the ignoramus might be right some of the time, albeit for the wrong reasons. After all, experts and authorities in a variety of fields (e.g., psychiatry) are so often corrupted by misaligned incentives, biases, and conflicts of interest that we are right not to place our trust in them. At the same time, we are wrong to throw out the baby with the bathwater, and so we clearly need to find some way of reforming our information and knowledge systems and structures, rather than surrendering to the forces of obfuscation and corruption.
In a recent article (America’s Cult of Ignorance—And the Death of Expertise), Tom Nichols laments the various campaigns against “established knowledge” that, while nothing new, have reached something of a fever pitch in our current information environment, where accusations of “fake news” are hurled at everything from completely fabricated nonsense on the one hand, to major news outlets like The New York Times on the other. The ascendant forces of anti-intellectualism have speciously twisted the concept of democracy into the notion that, as Isaac Asimov put it, “my ignorance is just as good as your knowledge.” “Established knowledge,” however, can be shaped and distorted so much by corrupting influences that it is only through the relentless questioning of authority that dangerous bullshit gets exposed. So, how can we expose and challenge the bullshit and corruption that has corroded the information pipeline without undermining confidence in the entire project of a scientifically informed approach to knowledge?
Back in 2005, John Ioannidis wrote a paper called Why Most Published Research Findings Are False, and apparently it has since become the most cited piece related to the topic at hand. In the paper Ioannidis laid bare many of the influences – from flawed methodology to statistical hocus-pocus to flat-out financial conflicts of interest – that had shaken his confidence in the validity of most scientific research across many academic disciplines. It was quite a gut-punch to the entire academy, delivered by one of its own, and it definitely changed the way I consume and evaluate information coming from both academic and media outlets. In my case, at least, the change was unambiguously positive. When I was in graduate school working toward a master’s degree in mental health counseling, I put forth the extra effort to check and challenge the interpretations and conclusions of my textbook authors and my instructors. Most of my peers, however, seemed quite content to passively accept as “established knowledge” any statement in a textbook that happened to have a citation after it, as if the name and date inside those parentheses were all that was needed to validate a given claim.
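For readers curious about the arithmetic behind Ioannidis’s claim, the core of his argument can be sketched in a few lines of code. The formula below is his basic positive predictive value (PPV) calculation, ignoring his additional terms for bias and multiple competing teams; the pre-study odds values are illustrative, not drawn from any particular field.

```python
def ppv(pre_study_odds, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is true.

    pre_study_odds: R, the odds that a tested relationship is real
    power: 1 - beta, the chance of detecting a real effect
    alpha: the significance threshold (false-positive rate)
    """
    true_positives = power * pre_study_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# The lower the prior odds that a hypothesis is true (as in exploratory
# fields testing many long-shot ideas), the more likely a "significant"
# finding is false.
for r in (1.0, 0.1, 0.01):
    print(f"pre-study odds {r}: PPV = {ppv(r):.2f}")
```

With even odds (R = 1), a significant finding is probably true (PPV ≈ 0.94); at long-shot odds of 1-in-100, the PPV drops below 0.15, meaning most such “discoveries” would be false, which is the heart of the paper’s title.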
Since Ioannidis’s seminal paper, there have been many follow-up articles and discussions that are well worth reading; I’ve collected links to them at the end of this post.
After reading these articles, it’s hard not to have one’s confidence in scientific authority seriously undermined, and it’s easy to see how anti-intellectuals could seize upon and distort this “soul searching in the sciences” to further their agenda of obfuscation and false equivalencies. It should be obvious, however, that there is nothing anti-intellectual about challenging established knowledge with reason, sound argument, and the critical review of evidence.
One of the best ways I know of to recognize a sound, strong argument is when the proponent of that argument is willing to directly address counter-arguments, ideally in their “steelman” form. Steel-manning is basically the opposite of straw-manning: instead of caricaturing another person’s point of view in your take-down of it, you seek to engage with the strongest possible version of their idea, even if they didn’t present their thoughts very cogently in the first place. This show of good faith can go a long way in creating the conditions in which a productive conversation can flourish, but there are always potential landmines. The biggest difficulty I’ve encountered when engaging in “difficult discussions” is in coming to basic agreement on the validity of whatever data or research is referenced. Even with the advent of Google, one can’t be expected to fact-check information and critique complex interpretations of data in real time, during a discussion. My rule of thumb is that if your argument rests on research and data that are unfamiliar to your interlocutor, you are obliged to accurately summarize the details of whatever research and data are being referred to, as opposed to expecting the other person to accept a fallacious appeal to authority, as when one weakly insists that “research shows” or “it’s been proven that” such and such is the case. Unfortunately, it is rarely the case that people actually familiarize themselves first-hand with the evidence supporting their points of view; instead, they merely place their trust in some expert or voice of authority who, more often than not, is selected as a trusted information source through a process of blind confirmation bias.
Take, for example, the idea of “power posing” as an effective way to use intentional changes in body language to improve our self-confidence and performance. Let’s say that my friend Toby watches the millions-of-times-viewed TED talk by Dr. Amy Cuddy called “Your body language shapes who you are.” In that talk, Dr. Cuddy makes the case, based on scientific research, that holding one’s body in a powerful pose, like raising one’s fists in the air, can trigger physiological changes that increase one’s confidence and performance in a variety of contexts. Toby might say something to me like, “You should do a power pose before recording your podcasts. Studies show that doing so changes your biochemistry and makes you perform better.” Now, Toby himself has never read the studies in question, but he trusts that the smart-sounding and charismatic Dr. Cuddy has given him objective, sound information that can be taken to the bank. I might then say to Toby, “Not true, dude. I just watched a different TED talk, by someone named Laura Arnold, where she totally debunks all of Cuddy’s research. Turns out it’s total bullshit, dude.” Now, of course, I didn’t read any of the research cited by Laura Arnold in her TED talk. I just have an affinity for debunkers, skeptics, and takedowns of authority in general. So now Toby and I seem, on the surface, to have evidence-based points of view that are colliding in a zone of reasonable disagreement, yet neither one of us has even bothered to look at the evidence in question. Rather, we have both simply argued from authority, selecting experts who have put forth points of view that appeal to each of us, respectively.
Ideally, Toby and I would recognize the confirmation bias at play, and then either dig into the relevant research before reconvening, or else set the validity of the research aside and move on to explore other aspects of the issue, like our own personal experiences with body language and its effects, that don’t depend on questionable scientific evidence.
I suppose I’ve rambled on long enough about this. Below are the relevant links, and some more that I haven’t yet checked out fully. I particularly look forward to checking out the website Calling Bullshit (In the Age of Big Data). It seems like it might be just the medicine for what ails me. The authors pose the problem in such a refreshingly frank way: “The world is awash in bullshit” and “we’re sick of it.” And so their mission is “to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combating it with effective analysis and argument.”
Sounds like just what the doctor ordered. But then again, aren’t they all just shills in white coats?
- Lies, Damned Lies, and Medical Science, by David H. Freedman
- Why Most Published Research Findings Are False, by John Ioannidis
- “Evidence-based medicine has been hijacked:” A confession from John Ioannidis, by Retraction Watch
- Evidence-based medicine has been hijacked: a report to David Sackett, by John Ioannidis
- The hijacking of evidence-based medicine, by David Gorski
- Calling Bullshit (In the Age of Big Data), by Carl T. Bergstrom and Jevin West
- Psychology’s Meta-Analysis Problem, by Hilda Bastian
- The Four Most Dangerous Words? A New Study Shows, by Laura Arnold
- How Flawed Science Is Undermining Good Medicine, by Richard Harris