The Collective Endeavor

Philae on Comet 67P Churyumov-Gerasimenko

We know very well that science is not truth—indeed, central to science is the idea that we must acknowledge both the subjectivity of human perception and the limits of individual knowledge. So science does not aim to replace those perceptions; it aims instead to refine them. Science strips away the biases of the individual, the biases of the community, the biases of particularity, and the biases of variable conditions. In so doing it aims not for a pure and incontrovertible view of the world, but merely a verifiable view.

Science is thus not truth, merely the lens through which we view truth. Yes, its role is to bring truth as nearly into focus as it may, but greater than that is its role of ensuring that we are all using the same lens. By prescribing the method, science drives us all to view the world with the limits of our collective understanding rather than the limits of our personal understanding.

Continue reading

Quantum Ex Machina

UntoldStory_viaAftabUzzaman

We all tell ourselves stories about the world—stories that help us reduce its component parts into things we can understand. Sometimes those stories describe the world, and sometimes they describe what we wish the world could be. Usually, I think, they are a little of both.

The edges are always fuzzy, and the connections can be tenuous, and sometimes there are gaps in the stories we want to tell ourselves. Sometimes we just leave those gaps there, unanswered and honest. But sometimes we flail in the fuzzy gaps, and sometimes we try to fill them in.

It’s almost a meme, outside of scientific circles, to use quantum physics for this; after all, quantum physics is pretty cool, pretty attention-grabbing, and pretty unintuitive. Can’t quantum effects be that little bit of magic we secretly hope for?

Continue reading

Climate Denihilism

climatechange_viaADB

The overwhelming scientific consensus is that human-caused climate change is real, ongoing, and extremely dangerous. For those who missed the most recent data point, February 2016 produced the largest temperature anomaly in recorded history. That comes on top of a decade in which we racked up most of the hottest years on record. And yet, somehow, there are still intellectually dishonest people who stand up and argue that climate change isn’t happening, or maybe isn’t so bad, or maybe will just not be a problem because we’ll adapt (or something, and who needs those ecosystems anyway).

It’s enough to make you want to give up. What’s the point in trying to stop climate change when we keep electing people who are happy to disbelieve it? Isn’t it basically inevitable at this point? Realistically, we’ll be lucky if we can all agree that it’s real before we pass the point of no return, let alone do anything about it.

But I think that’s a pretty dangerous point of view.

Continue reading

A Few Bad Apples

Apples_viaThomasTeichert
Reliably, whenever issues of sexism, racism, and prejudice appear, so too does the phrase “a few bad apples.” University professors are harassing their students, but universities and the media hasten to remind us that they are just a few bad apples. Police officers are abusing the people they are supposed to protect and serve, but mostly when those people are black—still, it’s a few bad apples.

“A few bad apples” is in-group language. It’s what you say when you identify with the group in question, and you just can’t believe anything bad about that group because it would also mean something bad about yourself. It is, in essence, group-level denial: that person did something I can’t be associated with, so that must mean they don’t really represent my group.

Continue reading

Infectious Ideas

Online_viaGDC

The qualities that lead to an idea going viral and the qualities that make an idea credible, apparently, do not overlap. Personally, I think this is because many of us have bad ideological immune systems: we accept ideas based on whether they fit what we already agree with, not based on whether they are well-supported by the evidence. That’s what the research seems to show so far, anyway.

The latest wrinkle in the puzzle of how bad ideas spread as easily as good ones comes from a study recently published in the Proceedings of the National Academy of Sciences. You can read the abstract or the full study online, but the shorthand version is that Del Vicario et al. looked at conspiracy theories and science news to see how they spread on Facebook and to try to learn something about the dynamics of that spread. As a number of past studies have also found, both kinds of ideas tend to spread in “homogeneous clusters” of users (essentially echo-chambers), slowly diffusing through a cluster over a period of days or weeks.
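To make that dynamic concrete, here is a toy diffusion model in Python. It is only a sketch of the qualitative pattern the study describes; the cluster size, contact rate, and adoption probability are my own illustrative assumptions, not parameters from the paper.

```python
# A toy sketch of slow diffusion through one homogeneous cluster; the
# numbers are invented for illustration, not taken from the study.
cluster_size = 1000      # users in one echo-chamber
believers = 1.0          # the rumor starts with a single user
contacts_per_day = 2     # shares per believer per day
p_adopt = 0.2            # chance an exposed user adopts the idea

for day in range(1, 22):
    # each believer exposes a few cluster members; only not-yet-believers
    # can adopt, so growth slows as the cluster saturates
    new = believers * contacts_per_day * p_adopt * (1 - believers / cluster_size)
    believers = min(cluster_size, believers + new)
    if day % 7 == 0:
        print(f"week {day // 7}: {believers / cluster_size:.0%} of the cluster believes")
```

Run it and adoption crawls from about 1% of the cluster in week one to most of the cluster by week three: an echo-chamber slowly saturating, rather than an overnight flash.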

What I find interesting about this study is that it also shows that assessment of information is lost along the way; science news and conspiracy theories both rapidly become background information as they diffuse through an echo-chamber. By a few weeks out, users in a group will consider the new information fact and resist attempts to change it.

Continue reading

“Causes Cancer”

One can and should simplify scientific research to make it intelligible, but there is a level of imprecision beyond which simplification becomes mere fiction. I think at this point, in most cases, saying something “causes cancer” is effectively fiction. It wasn’t always, but that phrase has been so abused that it now creates a one-to-one link in the popular imagination between the item of the week and our most potent medical boogeyman.

The recent announcement by the IARC (International Agency for Research on Cancer) and the associated statements by the WHO (World Health Organization) have created a hullabaloo over red meat, and processed red meat in particular. If you want a good summary of that issue, please read this one, and not any of the more sensationalized pieces exploding into your news feeds.

Because those sensationalized pieces dominate. Most of the media are busily reporting, nuance-free, how red meat and processed meat “causes cancer.” Most are using the most inflated statistic—an increase in risk of 17%. Most are not mentioning baseline risk. Most are not discussing potency. Most are not mentioning that this information is not new, but instead a result of slowly progressing scientific research.

In my view, reporting that something “causes cancer” gives you all the panic with none of the information. What is the baseline risk? In this case, it is 6%: about 6 people out of 100 will get bowel cancer in their lifetimes. What is the rate if you eat a lot of processed meat daily? 7%, or 7 people out of 100. So, if everyone ate processed meats only in moderation (about 50 grams a day is suggested: two slices of bacon per day, or one bacon cheeseburger per week), we could prevent one case of bowel cancer for every 100 people who decreased their consumption.
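For concreteness, the arithmetic looks like this in a few lines of Python; the 6% baseline and 17% relative increase are the figures quoted above, and the rounding matches the post.

```python
# Converting a relative risk increase into absolute lifetime risk,
# using the figures quoted in this post.
baseline_risk = 0.06       # ~6 in 100 people develop bowel cancer
relative_increase = 0.17   # the "17%" headline statistic

elevated_risk = baseline_risk * (1 + relative_increase)
print(f"Elevated lifetime risk: {elevated_risk:.1%}")  # ~7.0%

extra_per_100 = (elevated_risk - baseline_risk) * 100
print(f"Additional cases per 100 heavy consumers: {extra_per_100:.1f}")  # ~1
```

The same 17% headline sounds very different once it is expressed as roughly one additional case per 100 people.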

That matters. That’s significant. But it also isn’t a one-to-one relationship. Processed meat does not “cause cancer” so much as it contributes to a slight increase in your risk of one type of cancer over your lifetime. And that isn’t even that much harder to say. Headline writers, please take note: your hyperbole is helping no one.

A More Perfect Truth


The scientific method, at its heart, is a set of steps to keep us from fooling ourselves, and one another, and thus to arrive at our best approximation of truth. Each step in the traditional scientific method is a way we reduce bias, eliminate confusion, and further our collective knowledge. But recent high-profile research has highlighted some of the ways the method can break down, especially in preliminary research.

The idea that preliminary research is mostly inconclusive or incorrect isn’t surprising—preliminary studies are the way a scientific community investigates new ideas. Contrary to public perception, publication in a scientific journal is not so much verification of truth as it is the beginning of a debate. Collective knowledge, in theory, builds from that point onward.

So, when I read recently that more than two-thirds of a group of psychological studies could not be replicated, I wasn’t too surprised. Whatever the media might make of a single small study, and however much they might tout it as a breakthrough (and they do, for everything), the chances are that the results are flawed somehow. Scientists, of course, are still human, and they still get pulled toward positive data. There are a number of habits, like abusing the P value (try it yourself to see how it works) or choosing what measures to focus on after the fact, that can lead to a researcher misrepresenting results, even unintentionally. And, of course, there are a few bad actors who inflate the results of their studies on purpose.
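To see how easily multiple comparisons can “abuse the P value,” consider a back-of-the-envelope calculation (my own sketch, not drawn from any particular study): if a researcher measures many independent outcomes under the null hypothesis and reports whichever crosses p < 0.05, a false positive becomes nearly guaranteed.

```python
# Probability of at least one spurious p < 0.05 "finding" when testing
# n independent null hypotheses and reporting the best one.
alpha = 0.05
for n_tests in (1, 5, 20, 100):
    p_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:3d} tests -> P(at least one false positive) = {p_false_positive:.0%}")
```

At 20 outcomes the chance of a spurious “significant” result is already about 64%; at 100, it is a near certainty.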

There is a secondary problem in science as well, which is that journals tend to publish positive studies, and researchers tend not to even submit negative studies, leading to publication bias. If you’re a drug company, you might abuse publication bias on purpose to make your products look more effective than they actually are. To make things worse, journals have their own bias towards new research and often don’t want to publish negative studies or failed replications of previous research. Combined with the set of problems I mentioned above, which lead to iffy research, publication bias effectively hobbles scientific debate by letting lots of ideas in, but silencing the voices that would weed out the bad ones.
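A toy simulation makes the mechanism visible (again my own illustration; the effect sizes and publication threshold are invented for the example): even when a treatment does nothing, a literature that only sees the “impressive” results will report a strong effect.

```python
# Publication bias in miniature: 100 noisy studies of a true effect of
# zero, where only studies that happen to observe a large positive
# effect ever reach a journal.
import random
import statistics

random.seed(42)
true_effect = 0.0
studies = [random.gauss(true_effect, 1.0) for _ in range(100)]  # noisy estimates
published = [e for e in studies if e > 1.0]  # only striking results get submitted

print(f"Mean effect across all studies: {statistics.mean(studies):+.2f}")    # ~0
print(f"Mean effect in the literature:  {statistics.mean(published):+.2f}")  # strongly positive
```

Nobody in this simulation falsified anything; the distortion comes entirely from which results get submitted and accepted.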

You might have noticed that the first set of problems arises from individual biases, while the second set arises from systemic biases. In the first case, researchers are accidentally or intentionally allowing bias into their studies and tainting their results. The scientific method is still subject to human error, or intentional gaming of the system. In the second case, the scientific method has few tools for eliminating systemic biases, so a slightly more developed solution is needed. The only current tool is peer review, but that has its own host of limitations and problems.

I think, however, there is a solution that would reduce problems at both levels simultaneously, and it’s one we already know works: pre-registering research.

Pre-registration of clinical trials is a tool recently employed to deal with publication bias in medicine, and especially to prevent bad actors (such as drug companies with a financial stake in the matter) from gaming the system by hiding negative research. It also reduces researcher bias: because researchers must register their methodology before conducting the study, they cannot modify that methodology during or after the fact to generate a positive result. The effect has been a dramatic decline in false-positive results.

Some people have rightly pointed out the problems with pre-registering all research, including how difficult it would be to figure out who to register with and how to keep track. This is where the second part of the solution comes in: journals already admit that there is value in publishing negative results, so researchers could register prospective methodologies with the journals themselves, which in turn would commit to publishing the end result. Even if that commitment came with some caveats, this would simultaneously prevent researchers from modifying their methodology, thus reducing biased results, and force journals to accept research on its methodological merits while they are still blind to the outcomes, thus reducing biased publication.

Of course this wouldn’t solve every potential problem in science, but, as I said, science is not a perfect enterprise—it is a collective endeavor to arrive at the best approximation of truth. We can always do better, and we can always learn more, and it’s time to take the next step in that direction. We know we need to eliminate our individual biases, and now we know that we need to address collective biases as well. We also know that we can—it only remains to do so.

Image credit: Flickr user shawncalhoun