A More Perfect Truth

The scientific method, at its heart, is a set of steps to keep us from fooling ourselves, and one another, and thus to arrive at our best approximation of truth. Each step in the traditional scientific method is a way we reduce bias, eliminate confusion, and further our collective knowledge. But recent high-profile research has highlighted some of the ways it can break down, especially in preliminary research.

The idea that preliminary research is mostly inconclusive or incorrect isn’t surprising—preliminary studies are the way a scientific community investigates new ideas. Contrary to public perception, publication in a scientific journal is not so much verification of truth as it is the beginning of a debate. Collective knowledge, in theory, builds from that point onward.

So, when I read recently that more than two-thirds of a group of psychological studies could not be replicated, I wasn’t too surprised. Whatever the media might make of a single small study, and however much they might tout it as a breakthrough (and they do, for everything), the chances are that the results are flawed somehow. Scientists, of course, are still human, and they still get pulled toward positive data. There are a number of habits, like abusing the P value (try it yourself to see how it works) or choosing what measures to focus on after the fact, that can lead to a researcher misrepresenting results, even unintentionally. And, of course, there are a few bad actors who inflate the results of their studies on purpose.
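
To see for yourself how easily a p-value can be abused, here is a minimal sketch in Python; the sample sizes, the number of outcome measures, and the 0.05 threshold are all invented for illustration. It simulates studies in which there is no real effect at all, but the researcher measures many outcomes and reports whichever one happens to look significant.

```python
# Toy illustration of p-hacking: with enough outcome measures,
# "significant" p-values (< 0.05) show up by chance alone,
# even when no real effect exists.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_participants = 30   # per group (illustrative)
n_measures = 20       # outcomes the researcher could choose among
n_studies = 1000      # simulated studies, all with zero true effect

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: no effect exists.
    control = rng.normal(size=(n_measures, n_participants))
    treatment = rng.normal(size=(n_measures, n_participants))
    # Cherry-pick the measure with the smallest p-value after the fact.
    p_values = [ttest_ind(treatment[i], control[i]).pvalue
                for i in range(n_measures)]
    if min(p_values) < 0.05:
        false_positives += 1

print(f"'Significant' result in {false_positives / n_studies:.0%} of null studies")
```

With twenty candidate measures, roughly 1 - 0.95^20, or about 64%, of these no-effect studies still turn up a “significant” result, which is exactly the kind of unintentional misrepresentation I mean.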

There is a secondary problem in science as well, which is that journals tend to publish positive studies, and researchers tend not to even submit negative studies, leading to publication bias. If you’re a drug company, you might abuse publication bias on purpose to make your products look more effective than they actually are. To make things worse, journals have their own bias towards new research and often don’t want to publish negative studies or failed replications of previous research. Combined with the set of problems I mentioned above, which lead to iffy research, publication bias effectively hobbles scientific debate by letting lots of ideas in but silencing the voices that would weed out the bad ones.

You might have noticed that the first set of problems arises from individual biases, while the second set arises from systemic biases. In the first case, researchers are accidentally or intentionally allowing bias into their studies and tainting their results. The scientific method is still subject to human error, or intentional gaming of the system. In the second case, the scientific method has few tools for eliminating systemic biases, so a slightly more developed solution is needed. The only current tool is peer review, but that has its own host of limitations and problems.

I think, however, there is a solution that would reduce problems at both levels simultaneously, and it’s one we already know works: pre-registering research.

Pre-registration of clinical trials is a tool recently employed to deal with publication bias in medicine, and especially to prevent bad actors (such as drug companies with a financial stake in the matter) from gaming the system by hiding negative research. It also eliminates researcher biases, because researchers have to register their methodology before conducting the study and thus cannot modify it during or after the fact to generate a positive result. The effect has been a dramatic decline in false-positive results.

Some people have rightly pointed out the problems with pre-registering all research, and how difficult it would be to figure out who to register with and how to keep track. This is where the second part of the solution comes in: journals already admit that there is value in publishing negative results, so register prospective research methodologies with scientific journals, which in turn must commit to publishing the end result. Even if that commitment came with some caveats, this would simultaneously prevent researchers from modifying their methodology, thus reducing biased results, and force journals to accept research based on the methodological merits when they are still blind to the outcomes, thus reducing biased publication.

Of course this wouldn’t solve every potential problem in science, but, as I said, science is not a perfect enterprise—it is a collective endeavor to arrive at the best approximation of truth. We can always do better, and we can always learn more, and it’s time to take the next step in that direction. We know we need to eliminate our individual biases, and now we know that we need to address collective biases as well. We also know that we can—it only remains to do so.

Image credit: Flickr user shawncalhoun

Ecological Jenga

Image via Kelly Teague

One quality of human societies is that we shape our environment to fit our needs. Sometimes we do this intentionally, such as when we clear land for agriculture or human occupation. Sometimes we do it as a byproduct of our actions, such as greenhouse gas emissions leading to runaway warming of the global climate. In the past, human societies have endured, or collapsed, or adapted, depending on how much change their environment could withstand.

I think this raises the question: how resilient is our global ecosystem? How much can it handle? What are the limits?

Those with a narrower view tend to dismiss environmental concerns as frivolous, uneconomical, or overblown. The earth has always carried on, they sometimes argue, and even if it doesn’t, we can invent technologies to replace and improve on anything an ecosystem can do. Yet I wonder.

Ecosystems around the world have evolved to be both diverse and redundant, with animals and plants and insects and microbes all functioning together to support the system. Most of that diversity and redundancy is structural—the evolution of an ecosystem, like the evolution of any given species, does not tend to generate and maintain traits with no purpose. I don’t mean that an ecosystem is designed with a place for everything and everything in its place, but rather that diversity and redundancy in a natural system are present because regular stress on the system requires them. They are buffers that protect the system from failure.

From the human perspective, redundancy is usually perceived as an abundance of parts—a river full of salmon, a forest full of old growth trees, a sky full of passenger pigeons. This leaves us with the comforting sense that however many we take away, there are more than enough; the system will not falter.

That can be true on a small scale, but global human society does not act on the small scale. We have an economic engine dedicated to mobilizing resources, and it is very good at it. If a resource is found, there is an effectively unending line of people ready to use it and transform it into human economic capital. But that engine is very bad at asking questions of stability; if a resource is abundant, we use it rapidly and heavily without concern for the broader system. That old individual view, that taking a few doesn’t matter, seems to have evolved into the idea that natural systems can be processed and repurposed by humans without consequences.

Unfortunately, the data says otherwise. The declining biodiversity of forests and the strangled flow of major rivers are examples of what happens to natural systems when their natural buffers are carted off for human purposes. Current complex systems science shows us that the natural systems we rely on are being driven to the edge of catastrophic failure. Ecologists and complex system scientists call this “overshoot,” a state in which the key ecological foundations of a system are exploited much faster than they can regenerate.

Put more simply, we are playing ecological Jenga. Globally, systematically, we are stealing away the foundations of critical natural systems to build a human superstructure on top. Yet questions about that same foundation receive more derision than consideration; with a curious bootstrapping logic, we convince ourselves that the titanic edifice of human society is unsinkable.

That ideological position is all the stranger in the face of the evidence. We have known for some time that we are drawing down natural capital much faster than the rate of replenishment. In the U.S., California is a poster child for water depletion. In Canada, Alberta is scraping off its largest intact natural forest to dig up tar sands. In the tropics, slash-and-burn agriculture is depleting nutrient-rich topsoil that took thousands of years to form.

As we busily remove the redundancy of natural systems that sustain us, the growing specter of climate change looms large. We are carefully pulling bricks from the base of our tower, scarcely noticing the wind of change ruffling our shirtsleeves. Systems evolved redundancy to cope with stressors, and the biggest stressor for an ecosystem is a changing climate.

Some say human ingenuity will avert any catastrophe. I think they’re right that it could, if we would just look honestly at the implications of our choices. If we could bring ourselves to take them seriously. If we could bring ourselves to alter those choices.

But the tower is getting taller, and the wind is getting stronger.

The science shows us that we can’t continue the game into perpetuity. Natural systems will reach points of change; many already are. Many already have. Some have collapsed.

So let’s hear it for human ingenuity, and let’s fix it. But I have a sneaking suspicion that ingenuity isn’t our problem now. We’re plenty ingenious. What we’re not, is honest.

The Spirit of Inquiry

Image credit: Flickr user Massimo Variolo

In conversation a few weeks ago I guessed that there were some thirty Republican presidential candidates at this point. It turns out I was wrong—as I write this, the actual (and only slightly less absurd) number is seventeen. Being wrong about that didn’t bother me all that much; thirty felt like a true number, but I have now revised my knowledge because I encountered new information.

Revising based on new information is something (I hope) I do quite often. When I want to know something, I try to reason out the answer first, but then go look up the truth. Both parts of that are important—if I look something up without chewing on it first, I tend to forget it easily. If I guess but don’t bother to check my guess, then the distinction between estimate and reality is easily lost to memory.

Guessing and revision is a somewhat Bayesian way of encountering the world, but I think it reflects a spirit of inquiry and exploration. In one sense, it is a personal application of the scientific method. In the broadest sense I can envision, it is a fundamental part of human nature to experiment and discover. We all build predictive stories for ourselves about the world to explain what has happened before and help us expect what will happen next.
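
To make the “somewhat Bayesian” part concrete, here is a toy version of that update, with every number invented purely for the example: a prior belief in my guess, the likelihood of the new information whether the guess is right or wrong, and the revised belief that falls out of Bayes’ rule.

```python
# A toy Bayesian update: guess first, then revise on new information.
# All numbers are invented for illustration.

prior = 0.6                  # belief that my guess ("about thirty") is right
p_evidence_if_right = 0.05   # chance a careful count of seventeen appears anyway
p_evidence_if_wrong = 0.90   # chance of that count if my guess is wrong

# Bayes' rule: posterior belief in the guess after seeing the evidence.
posterior = (p_evidence_if_right * prior) / (
    p_evidence_if_right * prior + p_evidence_if_wrong * (1 - prior)
)

print(f"Belief in my guess: {prior:.0%} before, {posterior:.0%} after")
# Belief in my guess: 60% before, 8% after
```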

Sometimes, though, the link between guessing and checking gets lost. Maybe I guess something and forget to check it later, or maybe I hear someone else’s guess and don’t realize that they didn’t check it first. The provisional story starts to lose its hesitancy and become Real, and True, and Important, and other similarly calcifying adjectives. The story developed to model the world starts to become a world in itself. Ideas drift into ideologies.

When I listen to people pushing an ideology, I sometimes hear the ghost of inquiry in the background. They say with certainty the things I want to ask as questions.

“Nuclear power is not a viable option for mitigating climate change.” But I want to ask, “Is nuclear power a viable option for mitigating climate change?”

“GMOs are harmful and can’t help with worldwide hunger and nutrition.” And I think, “Are GMOs harmful, and can they help with worldwide hunger and nutrition?”

“Cutting Social Security, Medicare, and other entitlements is the only way to balance the federal budget.” And I reply, “Is cutting Social Security, Medicare, and other entitlements the only way to balance the federal budget?”

“Environmental concerns have to be economically profitable to be effective.” Do they really, I wonder?

What are these ideas? Guesses we received from others, but didn’t really check? If you ask someone who fervently believes one of these positions to support it, they will, and vigorously. Motivated reasoning is easy, and unfortunately common. But did they ever think to doubt it? Did they look beyond the favored “evidence” swirling around them from people who agree with the idea, and instead seek out some more dispassionate analysis of the facts?

And if I disagree, did I?

I don’t know. I think much less often than I would like. In the words of the old Russian proverb, appropriated by a certain person who largely ignored it in his domestic policies, “trust, but verify.”

So I keep guessing, and I keep checking. My greatest worry is for those ideas that seem immediately true. Such ideas slip easily past our defenses and set up shop in our stories without scrutiny, bending and distorting our subsequent knowledge of the world. There is no way to investigate all of these—we hear them everywhere, and verifying takes effort. We even create them unknowingly.

The only course left to us, I think, is to doubt our own stories along with the stories of others. To breathe that spirit of inquiry back into our ideas, especially when they have died into ideologies. We may always be chasing the truth, but I think that better, on the whole, than embracing fictions.

Embracing Fuzzy Edges

Pluto – Courtesy of NASA/Johns Hopkins APL

I love words. I also love science. And, because I love both, I pay attention to the interesting places where the words of science and the words of society do not quite match. Scientific terms need to reflect a series of characteristics shared by the things they describe, and we need to know what those characteristics are. Scientists need to be able to look at new astronomical bodies and categorize them, so the “I know it when I see it” approach that works for most of the rest of us most of the time just isn’t good enough in science. From that perspective, it makes sense to update the definitions of things to reflect our growing knowledge.

The word “planet” is one such example. Yes, it was infamously revised (scientifically) to exclude Pluto as a planet. There were lots of good reasons for this, not the least of which is that scientific language, unlike regular language, needs to limit its fuzzy edges. And people, including me, were sad to see Pluto dropped from the A-list in our solar system. Scientifically, it makes sense for now—but even in science, language changes.

Last week I was traveling and exploring places few people ever go. At the same time, the New Horizons spacecraft made its closest approach to Pluto, visiting it more closely than we ever have before. Over the coming weeks and months we will learn more about that tantalizing little world than we have ever known. Maybe Pluto will shift categories yet again.

Pluto is a good example of the fuzziness of words, because the scientific redefinition of the word “planet” exposed the fuzzy definition we had been using for a long time. It’s not that “planet” is unique, either. In the rest of society, words are not defined by hard and fast categories, because language is fundamentally democratic.

What about dictionaries, you ask? Dictionaries describe words; they don’t prescribe meaning. If most people use a word a particular way, that is a meaning of that word. If other people use it differently, the word has a second meaning. And so on. And there are lots of places where the dictionary or scientific definition of a word does not encompass everything it can mean. For example, have you ever witnessed an argument about whether tomatoes are fruits or vegetables? It depends on your definition. Colloquially, they are vegetables. Get a little more specific and they are fruit. Get even more specific and they are vegetables again—along with all fruits, and grains besides. But the whole argument misunderstands the point; tomatoes are not inherently one word or another, they just are; we decide what to call them, and when, and why.

Let’s try again—how about the word “theory”? In scientific terms, a theory is a general explanation of some phenomena that is deeply and broadly supported by the evidence. In colloquial terms, a theory may be nothing more than an educated guess. This is why many people arguing about the “theory of evolution” consistently and fundamentally misunderstand the subject. The edges of the word “theory” are fuzzy.

And let’s look at one more example, something you might not think has fuzzy edges (except in reality): a dog. Picture a dog in your mind. Dogs have paws, long noses, ears, four legs, tails. What about a dog with only three legs? Is it still a dog? What about dogs without tails? Still dogs? What about a cross between a dog and a wolf? It meets all the qualifications to be a dog, except something weird happens there because it meets the qualifications for a wolf as well. So which is it? Or is it both? The edges are fuzzy.

A moment ago you might have tried to think up a definition of “dog” that excludes my edge cases, maybe something about DNA. If so, you’re not alone; the reaction of most people, when faced with a fuzzy edge, is to clarify it—to draw an arbitrary line and defend that line heartily. Tomatoes are fruit, end of story.

But must we? Language doesn’t actually work this way—everything has a fuzzy edge if you look close enough, and that’s a good thing. It is a deep reminder that words are imperfect stand-ins for reality. Scientific language can and must be precise, but the rest of the time we can have fun. We can have words that mean different things to different people. This is how language works. This is how language lives and grows.

So let me return to Pluto for a moment. Now we are closer to Pluto than we have ever been. New Horizons has taken hundreds of photos and collected amazing data about an alien world. It is a world that has sparked our imaginations, inspired us, captivated us, and all from a long cold orbit on the edge of our solar system. It is a tiny world on the fuzzy edge between our definitions, and between us and the rest of the universe. Is Pluto a planet? It doesn’t matter. The edges may be fuzzy, but today, Pluto is looking beautifully clear.

The Mad Misnomer

The trope of the Mad Scientist pervades popular culture and popular awareness. One of the major archetypes, Victor Frankenstein, embodies the trope as an obsessed man reanimating the dead through perverse experimentation. In some cases, such as that of Doctor Jekyll, the mad scientist engages in well-intentioned but equally doomed self-experimentation. In still other cases—Lex Luthor, for example—the mad scientist creates fantastical devices that enable his madness. Superhero stories are rife with Mad Scientists as villains and heroes both, though even the heroes seem untethered and at risk from their own brilliance. Tony Stark builds Iron Man suits in his basement out of devices he invented, yet those devices are constantly being turned against him. Even the delightful Dr. Horrible fails in his endeavors mainly through the failure of his own inventions. For the Mad Scientist, their brilliance is also their Achilles’ heel.

But are any of these really scientists? I rather agree with Sanjay Kulkacek, who said we ought to call them Mad Engineers:

Image: “Mad Engineers”

Or perhaps just Mad Inventors. But key elements of science—awareness of bias, quantifying uncertainty, testing ideas slowly and methodically, cooperatively generating knowledge—seem totally absent from the archetype. Mad Scientists are people who work alone, fueled by their own brilliance, creating fantastical things by reorganizing bodies, technology, or both. Scientists pursue knowledge with uncertainty, but the Mad Scientist is recklessly sure of their own conclusions. Scientists work in groups, but the Mad Scientist works alone, cut off from society and their peers. Scientists employ method to reach understanding, but Mad Scientists achieve their goals through leaps of unreachable brilliance. Scientists are slow and careful, and Mad Scientists are capricious and haphazard.

The Mad Scientist can be an entertaining character, but I fear the associations with science drag the public perception of science in the wrong direction. And the one feature of the Mad Scientist that I most dislike is their isolation, because when you take away the “mad” part, you are left, not with science, but with another mistaken trope: the Lone Genius. Of course, real science is collaborative, but that’s not how we, the public, tend to think about it. We’d rather imagine a Nikola Tesla holed up in a mansion inventing a ray gun than a collaborative group of hundreds of people carefully planning, funding, building, and launching a mission to Pluto.

At the risk of being repetitive, collaboration in science is fundamental. Understanding that is the difference between two opposing views of science: the first sees a whimsical group of unintelligible geniuses who argue about whether eggs are good for you; the other sees a collective endeavor of humanity to achieve the best possible knowledge of the world.

In the first view, the public view, the view that underlies the majority of science news reporting, there is no way for the public to assess the truth of science, and thus it is a view that fundamentally mistrusts science. One person says one thing, another person says something different; how can we laypeople tell the difference? In this paradigm, we can’t even grasp the concept of scientific consensus, because every study on evolution or climate change is divorced from every other. Every scientist is just one step away from being mad.

In the second view, however, science is a body of knowledge. We can and do assess truth by consensus—when 98% of climate scientists agree that the earth is warming and humans are causing it, that means far more than any one study or argument by one dissenter. What’s more, we can look at the body of data to ask and answer questions about where the truth lies and what our best knowledge is at present. That makes science—real science—accessible, but only if we know to look.

The Mad Scientist undercuts true science, and also represents our failure to understand it. In the trope, the scientists are the ones who lose touch with reality, but scientists in the real world are diligently and deeply engaging it. When it comes to understanding science, the ones who lose touch with reality are the rest of us.