Are you a science person? Most of us aren’t. And that is because communicating complex science to the layperson is hard. Even if you can come up with a metaphor that perfectly explains astrophysics using the costumes from The Masked Singer, it’s hard to make people care. So a lot of science journalists don’t bother to try. Rather than accurately explaining the intricacies of a new discovery, they just tell us that scientists have discovered whatever they think would be most exciting to us. This results in some headlines that you can be assured have nothing to do with real life. Such as…
It Isn’t You, It’s Your Brain
If you’re always late to work, you can blame it on your brain. Similarly, lying, hating the sound of people chewing, and even ignoring PC security warnings are the fault of your dumb brain. In fact, the existence of “it isn’t you, it’s your brain” articles may be the best examples of people not engaging their brains.
Every single thing you do is connected to an electrochemical change in your brain. The fact that we’ve identified a particular correspondence between one type of change and people deciding to wear ugly necklaces doesn’t mean we get to excuse their behavior and just say, “Oh, that’s how they’re wired.”
Just because scientists have identified something happening between my ears doesn’t make it any less a part of my personality. They’re just different types of description. If someone is accused of prostitution, they can’t defend themselves by saying, “No, it wasn’t prostitution, see what had happened was, they gave me money and in exchange I made X-Men 3.” They’ve just described the exact same situation using different terms. You aren’t any less culpable because it can also be described in terms of neurons firing.
The same goes for articles with this recurring theme: “Learning this skill physically changes your brain.” They should be studied for their almost supernatural ability to create an absolute vacuum of meaning. It turns out, when you learn anything, your brain physically changes. The real headline would be if one of these things had nothing to do with your brain. Like if someone discovered that liking Bones in no way involved your brain. Wait, that one might be true.
“We Found The ________ Gene”
Genetic manipulation always captures the public consciousness. If genes are the blueprints for life and we are figuring out how to decode that blueprint, we should be a mere training montage away from recreating dinosaurs, having designer babies, and even creating designer dinosaurs. Right?
When we first started mapping the human genome it seemed like there was no limit to the things we’d be able to control with our new genetic knowledge. If the headlines were to be believed, we were discovering genes that determined our IQ, our left-handedness, and our appetite for movies with goofy fight scenes. In reality, science cannot adequately explain those things.
The problem with thinking there’s a gene for liking broccoli, another for having six fingers, and another for being good at breakdancing, is that you’re going to run out of genes real fast. We only have about 20,000 genes. That sounds like a lot, but remember that we share 96 percent of those genes with chimps. Plus, the world is a rich and diverse place. Genes don’t map one-to-one onto traits; they work together in complex networks to produce their effects. Genes can also be turned off or on depending on the context they’re in, and a lot of their effects are conditional on the environment. Some people get blonder when they’re getting more sunlight, and that isn’t because they got bit by a radioactive blonde. Still, people talk about what a gene “does” as though each one is going to have some simple surface-level function. That’s like pointing at a blueprint for a car and asking which single part makes it safe.
“We’ve Found The ________ Center Of The Brain”
Another version of the headline “we found the gene for being able to reconcile liking Woody Allen movies with the allegations against him” is often “we found the ‘thinking Logan is pretty good’ center of the brain.”
Typically headlines like these are based on brain imaging studies that find an area of the brain that is active when doing an activity (say, biking or running) and that isn’t active when doing a closely-related activity (say, riding a stationary bike or stabbing yourself in the lungs with hat pins). People then conclude that, “we’ve found the ‘riding a bike’ area of the brain!” But the problem is that any reasonably complex task (and things like walking over rocky terrain are so complex we still don’t have robots that can do it reliably) is going to require a bunch of different parts of the brain.
On top of that, it’s really easy to characterize a brain process too narrowly or too broadly. Imagine if all of California’s wine bottles were made in Fresno but the actual wine was produced all over the state. If you knew as little about winemaking as we know about brains (which would mean you would know that it’s purple and therefore probably made from plums), you could easily draw incorrect conclusions based on a map of how activity increases across the state when wine production goes up. When California starts really pumping out wine, you’d see activity go up all over the state, but you’d particularly see activity in Fresno go nuts, whereas it wouldn’t budge an inch for an increase in making normal grape juice. It would be very tempting, therefore, to think Fresno is the wine capital of California. And that, since it’s arguably the meth capital of California, would make for some pretty interesting theories about California wine.
There’s no reason to think the brain is organized in a way that would be intuitive to us humans examining it. The pretty colors we see in brain imaging don’t necessarily mean the “activated” part of the brain is what’s doing the relevant work. The brain isn’t organized into our most common uses of it, just like a computer isn’t organized by the applications you use it for. Saying “we found the love center of the brain” is like saying “this is the word processing part of the computer.” It doesn’t make any sense.
“Study Finds [Unbelievable Thing]”
Because of the economic pressures to publish new and exciting results, some researchers have stooped to shady practices like testing the same hypothesis over and over, then only reporting their successes while hiding their failures.
But even if you’ve done all the science right, and you haven’t been influenced by outside money or ambition, and even if you haven’t just repeated the experiment until you got the result you were looking for, one study still doesn’t mean anything. Sometimes coincidences happen.
In most fields, the current standard for calling a result “significant” is a statistical test that says, “if your hypothesis weren’t true, you’d only expect to see data like these five percent of the time.” So, if you’re watching Bones and data keeps coming in saying that it’s a terrible show during every episode, the evidence starts to pile up. Now, it’s possible that future episodes will turn the trend around, maybe even into not being one of the worst shows to ever make it past the “stoned teenager’s musings” stage of development. But the odds don’t look good. If the statistics show that less than five percent of good shows have a streak of insanely bad episodes this long, you have “statistically significant” evidence that this is not a good show. You can now move on to more surprising statistical results like “water is wet.”
But here’s the thing about things only happening five percent of the time: they actually do happen. In fact, they happen about five percent of the time. If you do enough studies, you’re going to get some false effects that meet this standard of evidence. If tens of thousands of studies are tested against a five percent threshold for being “significant,” you can expect that literally hundreds of them will be labeled “significant” just by chance.
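If you don’t trust that arithmetic, you can check it yourself. Here’s a minimal Python sketch (the study count and threshold are just illustrative numbers, not from any real survey of the literature) that simulates testing thousands of hypotheses that are all false and counts how many come out “significant” anyway:

```python
import random

random.seed(42)

def run_null_study(alpha=0.05):
    """Simulate one study of a hypothesis that is actually false.
    When the null hypothesis is true, the p-value is uniformly
    distributed on [0, 1], so a study can be modeled as drawing
    a p-value at random."""
    p_value = random.random()
    return p_value < alpha  # "significant" purely by chance

studies = 10_000
false_positives = sum(run_null_study() for _ in range(studies))

# With 10,000 true-null studies at alpha = 0.05, roughly 500 of them
# will clear the bar and get labeled "statistically significant".
print(false_positives)
```

No fraud, no bias, no hidden failures required: the threshold itself guarantees a steady trickle of impressive-sounding flukes.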
There are legions of scientists all over the world, each testing dozens of hypotheses. In at least some of those cases, the data are bound to look funny purely by chance even when there’s no effect there. In fact, if that never occurred, that would be the most incredible coincidence of all.
Basically Anything About How Memory Physically Works
One of the jobs your brain does is processing information. It takes in information from the senses, it manipulates that information in various ways, and then it stores some of that information for use later. We have basically zero idea how this works. It’s actually a bit worse than that: we have a bad idea that keeps leading us astray.
Some of the earliest and most influential work on memory and learning was Pavlov’s dog experiments. You may remember his groundbreaking work showing that you can gradually connect two unrelated things in the brain – say, hearing a bell ringing and salivating as though it’s meal time. The better part of a century later, researchers discovered a process by which the brain rewires itself called long-term potentiation. Basically, if you have two connected neurons, one triggering the other over and over, the connection between them actually gets stronger. That means over time it becomes even easier for one neuron firing to kick off the other.
Two basically unrelated entities that, when activated in quick succession, slowly forge a causal relationship? Not only is that a great idea for a movie starring two neurons, it’s clearly the explanation for how learning works on a cellular level. That’s why it’s been neuroscience’s basic understanding of how memory and learning work for half a century now. And since so much of understanding the brain also entails understanding memory and learning, that must mean this must be bedrock scientific knowledge, right? In fact, it’s Bedrock scientific knowledge because it’s as accurate as cavemen using brontosauruses as cranes.
For one thing, a lot of “classical conditioning” studies that seemed to show that animals gradually get better at a task actually show the opposite. Reanalyzing the data from landmark studies, researchers found that the animals learned their tasks abruptly, as though they had a “eureka” moment. The fact that animals seemed like they were gradually getting better for decades was just because scientists were averaging the data from multiple subjects. If eight mice all have “eureka” moments at different times and you average their data, it looks like the mice are all gradually getting better at the task – like their behavior is being slowly strengthened by reinforcement.
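You can watch that averaging artifact appear with a quick simulation. This is a hypothetical sketch (the success rates and trial counts are made up for illustration, not taken from any real study): each simulated mouse jumps from chance performance to near-perfect in a single trial, but the group average climbs smoothly:

```python
import random

random.seed(0)

# Each simulated mouse performs at chance (10% success) until its
# personal "eureka" trial, after which it performs near-perfectly (90%).
def mouse_learning_curve(trials=50):
    eureka = random.randint(5, 45)  # abrupt insight at a random trial
    return [0.1 if t < eureka else 0.9 for t in range(trials)]

mice = [mouse_learning_curve() for _ in range(8)]

# Average across the eight mice at each trial: the group curve climbs
# in small steps that look like gradual learning, even though every
# individual mouse jumped from 0.1 to 0.9 in a single trial.
group_average = [sum(m[t] for m in mice) / len(mice) for t in range(50)]

print([round(x, 2) for x in group_average[::10]])
```

Every individual curve is a single sharp step, but the averaged curve never jumps more than a fraction at a time – which is exactly the “gradual learning” shape the textbooks reported.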
Also, animals can learn things in one trial. You don’t have to touch every part of your body to a hot stove in order to gradually learn that you don’t want to touch any part of your body to it. You learn that the first time you touch a hot stove with your tongue to see if it “tastes red.”
In general, people working at the cutting edge of memory research have found a lot of data that simply can’t be explained by long-term potentiation being the physical basis of memory. And that sucks because it’s basically the only story we had. Recently, researchers have found exciting new possibilities involving Purkinje cells, microRNA, and other things more foreign to a psych undergrad than the respect of any other major.
But the bottom line is that, when it comes to how information is physically written in your brain, we have no idea at this point. Our current story about neurons and memory is so wrong, it’s like trying to stick your house key in your computer to log into Gmail. Or something you’d see on Bones.