Legitimate criticism of scientific authority in truthless times

More and more it seems to me that we are living in a world where a critical mass of people lack the critical thinking skills necessary to engage in reasonable discussions about the most important issues of our time. This is a big problem. There is a long-lost territory known as “reasonable disagreement” that used to be a starting place for dialogue and engagement and understanding the various perspectives that intelligent, sensitive humans can hold. But in order to get to that hallowed, fertile ground of reasonable disagreement, we must presuppose reason, for we can’t hope to transcend the limits of the tool without using it to its full potential first.

I have lamented–ad nauseam, perhaps–about how little we seem to value critical thinking in these strange, strange times, and have argued that a minimum level of development of critical thinking skills is essential for all of us to move forward effectively in our respective endeavors, including our attempts to make sense of and utilize health-related information. However, the truth is – if indeed there is such a thing as truth! – that even when one possesses the requisite skills, it’s still an extremely difficult task to evaluate the validity of people’s claims, interpretations, and recommendations, even when those people are regarded as experts in their respective fields. This is where things get especially gnarly, because anti-intellectualism would be a big enough problem were it simply a matter of the ignoramus stubbornly refusing to yield to the expert’s relatively enlightened perspective. Things get gnarly because the ignoramus might be right some of the time, albeit for the wrong reasons. After all, experts and authorities in a variety of fields (e.g., psychiatry) are so often corrupted by misaligned incentives, biases, and conflicts of interest that we are right not to place our trust in them. At the same time, we are wrong to throw out the baby with the bathwater, and so we clearly need to find some way of reforming our information and knowledge systems and structures, rather than surrendering to the forces of obfuscation and corruption.

In a recent article (America’s Cult of Ignorance—And the Death of Expertise) by Tom Nichols, the author laments the various campaigns against “established knowledge” that, while nothing new, have reached something of a fever pitch in our current information environment, where accusations of “fake news” are hurled at everything from completely fabricated nonsense on the one hand to major news outlets like The New York Times on the other. The ascendant forces of anti-intellectualism have speciously twisted the concept of democracy into the notion that, as Isaac Asimov put it, “my ignorance is just as good as your knowledge.” “Established knowledge,” however, can be shaped and distorted so much by corrupting influences that it is only through the relentless questioning of authority that dangerous bullshit gets exposed. So, how can we expose and challenge the bullshit and corruption that has corroded the information pipeline without undermining confidence in the entire project of a scientifically informed approach to knowledge?

Back in 2005, John Ioannidis wrote a paper called Why Most Published Research Findings Are False, which has apparently since become the most cited piece on the topic at hand. In the paper Ioannidis laid bare many of the influences – from flawed methodology to statistical hocus-pocus to flat-out financial conflicts of interest – that had shaken his confidence in the validity of most scientific research across many academic disciplines. It was quite a gut-punch to the entire academy, delivered by one of its own, and it definitely changed the way I consume and evaluate information coming from both academic and media outlets. In my case, at least, the change was unambiguously positive. When I was in graduate school working toward a master’s degree in mental health counseling, I put forth the extra effort to check and challenge the interpretations and conclusions of my textbook authors and my instructors. Most of my peers, however, seemed quite content to passively accept as “established knowledge” any statement in a textbook that happened to have a citation after it, as if the name and date inside those parentheses were all that was needed to validate a given claim.
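For readers who haven’t dug into the paper itself: Ioannidis’s core argument can be captured in a few lines of arithmetic, using what he calls the positive predictive value of a research finding. The sketch below uses illustrative numbers of my own choosing (they are not from the paper), and it ignores the bias terms Ioannidis also models; it just shows how, even with conventional 80% power and a 5% significance threshold, a field testing mostly false hypotheses produces a surprising share of false “findings.”

```python
# Positive predictive value (PPV) of a claimed research finding, following
# the simple (bias-free) framework in Ioannidis (2005).
#   prior_odds: R, the ratio of true to false relationships being tested
#   alpha:      type I error rate (false-positive threshold)
#   power:      1 - beta, the chance a true relationship is detected
def ppv(alpha: float, power: float, prior_odds: float) -> float:
    """Probability that a statistically significant finding is actually true."""
    true_positives = power * prior_odds   # true relationships that test positive
    false_positives = alpha               # false relationships that test positive
    return true_positives / (true_positives + false_positives)

# Illustrative numbers: 1 true hypothesis for every 10 false ones tested,
# 80% power, alpha = 0.05.
result = ppv(alpha=0.05, power=0.8, prior_odds=0.1)
print(f"PPV = {result:.2f}")
```

With these inputs the PPV comes out to roughly 0.62, meaning more than a third of significant results would be false even before bias and low power are factored in; add those in, as Ioannidis does, and the number drops further, which is the arithmetic behind the paper’s provocative title.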

Since Ioannidis’s seminal paper, there have been many follow-up articles and discussions that are well worth reading; the relevant links are collected at the end of this post.

After reading these articles, it’s hard not to have one’s confidence in scientific authority seriously undermined, and it’s easy to see how anti-intellectuals could seize upon and distort this “soul searching in the sciences” to further their agenda of obfuscation and false equivalencies. It should be obvious, however, that there is nothing anti-intellectual about challenging established knowledge with reason, sound argument, and the critical review of evidence.

One of the best ways I know of to recognize a sound, strong argument is when the proponent of that argument is willing to directly address counter-arguments, ideally in their “steelman” form. Steel-manning is basically the opposite of straw-manning: instead of caricaturing another person’s point of view before taking it down, you seek to engage with the strongest possible version of their idea, even if they didn’t present their thoughts very cogently in the first place. This show of good faith can go a long way in creating the conditions in which a productive conversation can flourish, but there are always potential landmines. The biggest difficulty I’ve encountered when engaging in “difficult discussions” is in coming to basic agreement on the validity of whatever data or research is referenced. Even with the advent of Google, one can’t be expected to fact-check information and critique complex interpretations of data in real time, during a discussion. My rule of thumb is that if your argument rests on research and data unfamiliar to your interlocutor, you are obliged to accurately summarize the details of whatever research and data you are referring to, rather than expecting the other person to accept a fallacious appeal to authority, as when one weakly insists that “research shows” or “it’s been proven that” such and such is the case. Unfortunately, it is rarely the case that people actually familiarize themselves first-hand with the evidence supporting their points of view; instead they merely place their trust in some expert or voice of authority who, more often than not, is selected as a trusted information source through a process of blind confirmation bias.

Take, for example, the idea of “power posing” as an effective way to use intentional changes in body language to improve our self-confidence and performance. Let’s say that my friend Toby watches the millions-of-times-viewed TED talk by Dr. Amy Cuddy called “Your body language shapes who you are.” In that talk, Dr. Cuddy makes the case, based on scientific research, that holding one’s body in a powerful pose, like raising one’s fists in the air, can trigger physiological changes that increase one’s confidence and performance in a variety of contexts. Toby might say something to me like, “You should do a power pose before recording your podcasts. Studies show that doing so changes your biochemistry and makes you perform better.” Now, Toby himself has never read the studies in question, but he trusts that the smart-sounding and charismatic Dr. Cuddy has given him objective, sound information that can be taken to the bank. I might then say to Toby, “Not true, dude. I just watched a different TED talk, by someone named Laura Arnold, where she totally debunks all of Cuddy’s research. Turns out it’s total bullshit, dude.” Now, of course, I didn’t read any of the research cited by Laura Arnold in her TED talk. I just have an affinity for debunkers, skeptics, and takedowns of authority in general. So now Toby and I seem, on the surface, to have evidence-based points of view that are colliding in a zone of reasonable disagreement, yet neither one of us has even bothered to look at the evidence in question. Rather, we have both simply argued from authority, selecting experts who have put forth points of view that appeal to each of us, respectively.
Ideally, Toby and I would recognize the confirmation bias at play, and then either dig into the relevant research before reconvening, or else set the validity of the research aside and move on to explore other aspects of the issue, like our own personal experiences with body language and its effects, that don’t depend on questionable scientific evidence.

I suppose I’ve rambled on long enough about this. Below are the relevant links, and some more that I haven’t yet checked out fully. I particularly look forward to checking out the website Calling Bullshit (In the Age of Big Data). It seems like it might be just the medicine for what ails me. The authors pose the problem in such a refreshingly frank way: “The world is awash in bullshit” and “we’re sick of it.” And so their mission is “to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combating it with effective analysis and argument.”

Sounds like just what the doctor ordered. But then again, aren’t they all just shills in white coats?

Links:

The role of biology in problems of thinking, feeling, and behaving

Pissing in the wind

It’s a new year, and I find myself living in a “post-fact” world of “fake news” with catastrophic failures of critical thinking everywhere on display. Happy New Year, everybody! What holds true–if anything holds true these days–in the realm of politics is not fundamentally different from what holds true in other areas of discourse, like, say, behavioral health. And that true thing is this: our current capacity for critical thinking cannot seem to adequately process, evaluate, and analyze the constant flow of information that is being channeled through structures designed to further agendas rather than deepen knowledge and improve understanding. That was a mouthful, I know. I just can’t help wondering, though: has all this blogging been just pissing in the wind? Have I myself been duped, or been duping myself, into a false sense of certainty and self-righteousness? Maybe. But at least I’m trying. At least I care enough to ask questions.

The first Friday of every month I attend a continuing education training for mental health professionals. The training takes place in a local psychiatric hospital, and is conducted by various local leaders in the mental health profession. This last training was on the topic of addiction treatment, and I was expecting to get a heavy dose of twelve-step and brain disease dogmatism, and that’s just what happened. What took me by surprise was how starkly unscientific the presentation was–not a single reference to a single piece of research–and how uncritical the audience was as they nodded their heads to statements like “This disease wants you dead!” I felt like I was in a church listening to a sermon. I left the training deflated and discouraged. How can there be any hope of a sane, scientifically grounded approach to drug abuse (or any mental health problem, for that matter) when the thought leaders, experts, and armies of professionals are all in lock-step headed in the wrong direction? Fortunately, there are dissident voices breaking through via the internet ether waves. But again, perhaps I have constructed my own cozy echo-chamber in this regard. You be the judge.

Johann Hari, he of “Chasing the Scream” and TED notoriety, wrote an interesting op-ed in the LA Times the other day called “What’s really causing the prescription drug crisis?” The piece pokes holes in the most well-subscribed narrative regarding the current opiate crisis in America, namely that Big Pharma has hooked everyone on irresistible drugs, and that what we need to do now is restrict access to these powerful life-ruining substances. The holes in this theory might not seem obvious. Even John Oliver, whose entertaining critiques usually strike the right tone, seems to have blown past them.

First of all, Hari points out that less than one percent of opiate prescriptions lead to addiction, and that super-strong opiates (like diamorphine) are routinely administered in hospital settings in other countries without causing people to become addicted. So, then, the drugs themselves can’t be the root of the problem, right? If it were the drugs themselves, then opiate addiction would be spread evenly across the country, matching prescription rates. But it isn’t. Opiate addiction is concentrated in areas where times are the toughest, like the Rust Belt. It’s the tough times–and their impact on people who may lack the resources (internal and external) to cope with them–that are more likely to be the root of the problem, rather than any specific numbing agent. Furthermore, how can stringent opiate restriction be the best response to the problem, when the vast majority of people who use the drugs to manage pain don’t show problematic use, and when cutting addicted folks off from their prescriptions so clearly leads them to black-market heroin use? This “War on Drugs” mentality might be well-intentioned, but it’s just making things worse. In order to come up with a more effective solution, we need to fully understand the problem, which means taking into account all of the facts, which would lead us toward addressing root causes (like poverty, social isolation, and poor coping skills) instead of restricting the latest, most available, most potent means of killing the associated pain.

Of course, addiction is just one category of so-called “mental illness,” and a broader argument can (and has) been made against viewing problems of thinking, feeling, and behaving, in general, as biologically driven processes best suited for physiologically focused interventions. I have been pissing in that wind for years as well, but I have not come across a more thorough critique of the predominant psychiatric paradigm than in this recent article by Phil Hickey called The Biological Evidence for “Mental Illness.” Hickey makes many of the points that I have made–ad nauseam–in previous posts (e.g., HERE), but he makes them far more meticulously and convincingly. He also grounds his arguments in research and years of clinical experience. Here are a few of Hickey’s ideas from this article that are well worth chewing on:

Depression, either mild or severe, transient or lasting, is not a pathological condition. It is the natural, appropriate, and adaptive response when a feeling-capable organism confronts an adverse event or circumstance. And the only sensible and effective way to ameliorate depression is to deal appropriately and constructively with the depressing situation. Misguided tampering with the person’s feeling apparatus is analogous to deliberately damaging a person’s hearing because he is upset by the noise pollution in his neighborhood, or damaging his eyesight because of complaints about litter in the street.

What psychiatry calls mental illnesses are actually nothing more than loose collections of vaguely-defined problems of thinking, feeling, and/or behaving. In most cases the “diagnosis” is polythetic (five out of nine, four out of six, etc.), so the labels aren’t coherent entities of any sort, let alone illnesses. But the problems set out in the so-called symptom lists are real problems. That’s not the issue. I refer to these labels as inventions, because of psychiatry’s assertion that the loose clusters of problems are real diseases. In reality, they are not genuine diseases; they are inventions. They are not discovered in nature, but rather are voted into existence by APA committees.

Both Hari and Hickey hit the nail on the head by pointing out what should be obvious, namely that addiction and other psychological problems are most often matters of adaptation, of learning, which are processes that all healthy, normal brains participate in as they interact with their respective environments. How else could it be that the vast majority of people with such problems get better through such means as talking things out, rearranging their priorities, resolving to change habits, and improving relationships? While it’s true–again, obviously–that every subjective human experience is grounded in some activity happening in the brain from moment to moment, it is sheer nonsense to assume that common problems faced by vast numbers of human beings are matters of hardware malfunction. This might be true for the very few. But it is only through misaligned incentives and misapplied critical thinking that the brain disease paradigm has become mental health dogma.

*Mic drop*