Feb 26, 2010

Why aren't we publishing negative results?

This is a question The All Results Journals have had in mind for a long time. Kevin Dunbar, a researcher who studies how scientists study things, how they fail and succeed, began an unprecedented research project in 1990: observing four biochemistry labs at Stanford University. Philosophers have long theorized about how science happens, but Dunbar wanted to get beyond theory. He wasn’t satisfied with abstract models of the scientific method — that seven-step process we teach schoolkids before the science fair — or the dogmatic faith scientists place in logic and objectivity.

Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to. He suspected that all those philosophers of science — from Aristotle to Karl Popper — had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) So Dunbar decided to launch an “in vivo” investigation, attempting to learn from the messiness of real experiments.


He ended up spending the next year staring at postdocs and test tubes: The researchers were his flock, and he was the ornithologist. Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. He spent four years analyzing the data. “I’m not sure I appreciated what I was getting myself into,” Dunbar says. “I asked for complete access, and I got it. But there was just so much to keep track of.”

Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.” Perhaps they hoped to see a specific protein but it wasn’t there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.

Dunbar was fascinated by these statistics. The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. (Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”) However, when experiments were observed up close — and Dunbar interviewed the scientists about even the most trifling details — this idealized version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn’t work and data that couldn’t be replicated and simple studies riddled with anomalies. “These weren’t sloppy people,” Dunbar says. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us. That’s the dirty secret of science.”

How did the researchers cope with all this unexpected data? How did they deal with so much failure? Dunbar realized that the vast majority of people in the lab followed the same basic strategy. First, they would blame the method. The surprising finding was classified as a mere mistake; perhaps a machine malfunctioned or an enzyme had gone stale. “The scientists were trying to explain away what they didn’t understand,” Dunbar says. “It’s as if they didn’t want to believe it.”

The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved. But the weirdness usually remained, an anomaly that wouldn’t go away.

This is when things get interesting. According to Dunbar, even after scientists had generated their “error” multiple times — it was a consistent inconsistency — they might fail to follow it up. “Given the amount of unexpected data in science, it’s just not feasible to pursue everything,” Dunbar says. “People have to pick and choose what’s interesting and what’s not, but they often choose badly.” And so the result was tossed aside, filed in a quickly forgotten notebook. The scientists had discovered a new fact, but they called it a failure.

The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

As he tried to further understand how people deal with dissonant data, Dunbar conducted some experiments of his own. In one 2003 study, he had undergraduates at Dartmouth College watch a couple of short videos of two different-size balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time — a refutation of Aristotle, who claimed that heavier objects fell faster.

While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. (Intuitively, we’re all Aristotelians.) They found the two balls falling at the same rate to be deeply unrealistic, despite the fact that it’s how objects actually behave.

Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: There was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the center of the brain. The ACC is typically associated with the perception of errors and contradictions — neuroscientists often refer to it as part of the “Oh shit!” circuit — so it makes sense that it would be turned on when we watch a video of something that seems wrong.

So far, so obvious: Most undergrads are scientifically illiterate. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to see the error, and for them it was the inaccurate video that triggered the ACC.

But there’s another region of the brain that can be activated as we go about editing reality. It’s called the dorsolateral prefrontal cortex, or DLPFC. It’s located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that don’t square with our preconceptions. For scientists, it’s a problem.

When physics students saw the Aristotelian video with the aberrant balls, their DLPFCs kicked into gear and they quickly deleted the image from their consciousness. In most contexts, this act of editing is an essential cognitive skill. (When the DLPFC is damaged, people often struggle to pay attention, since they can’t filter out irrelevant stimuli.) However, when it comes to noticing anomalies, an efficient prefrontal cortex can actually be a serious liability. The DLPFC is constantly censoring the world, erasing facts from our experience. If the ACC is the “Oh shit!” circuit, the DLPFC is the Delete key. When the ACC and DLPFC “turn on together, people aren’t just noticing that something doesn’t look right,” Dunbar says. “They’re also inhibiting that information.”

The lesson is that not all data is created equal in our mind’s eye: When it comes to interpreting our experiments, we see what we want to see and disregard the rest. The physics students, for instance, didn’t watch the video and wonder whether Galileo might be wrong. Instead, they put their trust in theory, tuning out whatever it couldn’t explain. Belief, in other words, is a kind of blindness.

But this research raises an obvious question: If humans — scientists included — are apt to cling to their beliefs, why is science so successful? How do our theories ever change? How do we learn to reinterpret a failure so we can see the answer?

Modern science is populated by expert insiders, schooled in narrow disciplines. Researchers have all studied the same thick textbooks, which make the world of fact seem settled. This led Kuhn, the philosopher of science, to argue that the only scientists capable of acknowledging the anomalies — and thus shifting paradigms and starting revolutions — are “either very young or very new to the field.” In other words, they are classic outsiders, naive and untenured. They aren’t inhibited from noticing the failures that point toward new possibilities.

But Dunbar, who had spent all those years watching Stanford scientists struggle and fail, realized that the romantic narrative of the brilliant and perceptive newcomer left something out. After all, most scientific change isn’t abrupt and dramatic; revolutions are rare. Instead, the epiphanies of modern science tend to be subtle and obscure and often come from researchers safely ensconced on the inside. “These aren’t Einstein figures, working from the outside,” Dunbar says. “These are the guys with big NIH grants.” How do they overcome failure-blindness?

While the scientific process is typically seen as a lonely pursuit — researchers solve problems by themselves — Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.

But not every lab meeting was equally effective. Dunbar tells the story of two labs that both ran into the same experimental problem: The proteins they were trying to measure were sticking to a filter, making it impossible to analyze the data. “One of the labs was full of people from different backgrounds,” Dunbar says. “They had biochemists and molecular biologists and geneticists and students in medical school.” The other lab, in contrast, was made up of E. coli experts. “They knew more about E. coli than anyone else, but that was what they knew,” he says. Dunbar watched how each of these labs dealt with their protein problem. The E. coli group took a brute-force approach, spending several weeks methodically testing various fixes. “It was extremely inefficient,” Dunbar says. “They eventually solved it, but they wasted a lot of valuable time.”

The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then, as the chemists traded ideas with the biologists and the biologists bounced ideas off the med students, potential answers began to emerge. “After another 10 minutes of talking, the protein problem was solved,” Dunbar says. “They made it look easy.”

When Dunbar reviewed the transcripts of the meeting, he found that the intellectual mix generated a distinct type of interaction in which the scientists were forced to rely on metaphors and analogies to express themselves. (That’s because, unlike the E. coli group, the second lab lacked a specialized language that everyone could understand.) These abstractions proved essential for problem-solving, as they encouraged the scientists to reconsider their assumptions. Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins, filled with self-skepticism.

This is why other people are so helpful: They shock us out of our cognitive box. “I saw this happen all the time,” Dunbar says. “A scientist would be trying to describe their approach, and they’d be getting a little defensive, and then they’d get this quizzical look on their face. It was like they’d finally understood what was important.”

What turned out to be so important, of course, was the unexpected result, the experimental error that felt like a failure. The answer had been there all along — it was just obscured by the imperfect theory, rendered invisible by our small-minded brain. It’s not until we talk to a colleague or translate our idea into an analogy that we glimpse the meaning in our mistake. Bob Dylan, in other words, was right: There’s no success quite like failure!

Source: Wired
