
Posts Tagged ‘Frontal Cortex’

Why Russians Don’t Get Depressed

12 Aug

The saddest short story I’ve ever read is “The Overcoat,” by Gogol. (It starts out bleak and only gets bleaker.) The second saddest is “Grief,” by Chekhov. (Nabokov famously said that Chekhov wrote “sad books for humorous people; that is, only a reader with a sense of humor can really appreciate their sadness.”) And if I had to make a list of really depressing fiction, I’d probably put everything written by Dostoyevsky on it. Those narratives never end well.

Notice a theme? Russians write some seriously sad stuff. This has led to the cultural cliche of Russians as a brooding people, immersed in gloomy moods and existential despair. In a new paper in Psychological Science, Igor Grossmann and Ethan Kross of the University of Michigan summarize this stereotype:

One need look no further than the local Russian newspaper or library to find evidence supporting this belief [that Russians are sad] – brooding and emotional suffering are common themes in Russian discourse. These observations, coupled with ethnographic evidence indicating that Russians focus more on unpleasant memories and feelings than Westerners do, have led some researchers to go so far as to describe Russia as a “clinically masochistic” culture.

This cliche raises two questions. First, is it true? And if so, what are the psychological implications of thinking so many sad thoughts?

The first experiment was straightforward. The psychologists gave subjects in Moscow and Michigan a series of vignettes that described a protagonist who either does or does not analyze her feelings when she is upset. After reading the short stories, the students were asked to choose the protagonist who most closely resembled their own coping tendencies. The results were clear: While the American undergraduates were evenly divided between people who engaged in self-analysis (the brooders) and those who didn’t, the Russian students were overwhelmingly self-analytical. (Eighty-three Russians read the vignettes; sixty-eight of them, about 82 percent, identified with the brooders.) In other words, the cliche is true: Russians are ruminators. They are obsessed with their problems.
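To get a feel for just how lopsided that Russian split is, here’s a minimal back-of-the-envelope sketch in Python, using only the two counts quoted above. This is an illustration, not the paper’s actual analysis, and it assumes SciPy 1.7 or later for binomtest:

```python
# Sanity-checking the split described above: 68 of the 83 Russian students
# identified with the brooders, versus a roughly even split among Americans.
from scipy.stats import binomtest  # requires SciPy 1.7+

brooders, n = 68, 83
print(brooders / n)  # ~0.82, i.e. roughly four out of five students

# Exact binomial test against an even 50/50 split
result = binomtest(brooders, n=n, p=0.5)
print(result.pvalue)  # vanishingly small: this sample is nothing like 50/50
```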

At first glance, these data seem like really bad news for Russian mental health. It’s long been recognized, for instance, that the tendency to ruminate on one’s problems is closely correlated with depression. (The verb derives from the Latin for “to chew over,” which describes the process of digestion in cattle, in which they swallow, regurgitate and then rechew their food.) The mental version of rumination has a darker side, as it leads people to fixate on their flaws and mistakes, leaving them preoccupied with their problems. What separates depression from ordinary sadness is the intensity of these ruminations, and the tendency of depressed subjects to get stuck in a recursive loop of negativity.

According to Grossmann and Kross, however, not all brooders and ruminators are created equal. While American brooders showed extremely high levels of depressive symptomatology (as measured by the Beck Depression Inventory, or BDI), Russian brooders were actually less likely to be depressed than non-brooders. This suggests that brooding, or ruminative self-reflection, has very different psychiatric outcomes depending on the culture. While rumination makes Americans depressed, it actually seems to provide an emotional buffer for Russians.

What explains these cultural differences? Grossmann and Kross then asked students in Moscow and Michigan to recall and analyze their “deepest thoughts and feelings surrounding a recent anger-related interpersonal experience.” Then the subjects were quizzed about the details of their self-analysis. They were asked to rate, on a seven-point scale, the extent to which they adopted a self-immersed perspective (a rating of 1 meant that they “saw the event replay through your own eyes as if you were right there”) versus a self-distanced perspective (a rating of 7 meant that they “watched the event unfold as an observer, in which you could see yourself from afar”). Finally, the subjects were asked how the exercise made them feel. Did they get angry again when they recalled the “anger-related” experience? Did the memory trigger intense emotions?

Here’s where the cultural differences became clear.* When Russians engaged in brooding self-analysis, they were much more likely to engage in self-distancing, or looking at the past experience from the detached perspective of someone else. Instead of reliving their confused and visceral feelings, they reinterpreted the negative memory, which helped them make sense of it. According to the researchers, this led to significantly less “emotional distress” among the Russian subjects. (It also made them less likely to blame another person for the event.) Furthermore, the habit of self-distancing seemed to explain the striking differences in depressive symptoms between Russians and Americans. Brooding wasn’t the problem. Instead, it was brooding without self-distance. Here’s Grossmann and Kross:

Our results highlighted a psychological mechanism that explains these cultural differences: Russians self-distance more when analyzing their feelings than Americans do. These findings add to a growing body of research demonstrating that it is possible for people to reflect either adaptively or maladaptively over negative experiences. In addition, they extend previous findings cross-culturally by highlighting the role that self-distancing plays in determining which type of self-reflection—the adaptive or maladaptive one—different cultures engage in.

The lesson is clear: If you’re going to brood, then brood like a Russian. Just remember to go easy on the vodka.

*I think cross-cultural studies like this are an important reminder that American undergrads are W.E.I.R.D. – Western, Educated, Industrialized, Rich and Democratic.

PS. Thanks to Jad for the tip! And if you’d like a controversial new take on depression and rumination, you might be interested in this article.

 
 

Ritalin in the Water

06 Aug

One of my blogging policies is to not engage with trolls. I don’t answer their nasty emails or respond to their comments. Life is too short. But every once in a great while, a troll can send me somewhere interesting. And that’s what happened with the dozens of recent comments referencing the same “white paper” by the Oxford bioethicist Julian Savulescu, “Fluoride and the Future: Population Level Cognitive Enhancement.” Here’s a sample comment:

Jonah’s friend and colleague Savulescu suggests that the government add ritalin and prozac to the public water supply. Now, sadly if I speak against that dangerous idea I am labeled a blowhard and conspiracy theorist. Fine. I’m ok with that. Guilty as charged. Please read Julian Savulescu’s white paper “Fluoride and the Future: Population Level Cognitive Enhancement”. He’s your friend Jonah.

For the record, I don’t know Professor Savulescu. So I googled this “white paper” – apparently, a “white paper” is how conspiracy theorists make an academic blog post sound really scary – and found this utilitarian proposal:

Fluoridation of the water is an example of human enhancement. Tooth decay is a part of the human condition but we now have the ability to prevent it through a safe, cheap, easy intervention – adding fluoride to the water. Many parts of the civilized world have been employing this strategy for decades with dramatic success. England is now debating whether to fully embrace this simple enhancement technology.

Fluoridation is the tip of the enhancement iceberg. Science is progressing fast to develop safe and effective cognitive enhancers, drugs which will improve our mental abilities. For years, people have used crude enhancers, usually to promote wakefulness, like nicotine, caffeine and amphetamines. A new generation of more effective enhancers is emerging: modafinil, Ritalin, Adderall, ampakines and the piracetam family of memory improvers. Students and professionals are using these to gain a competitive edge, just as athletes are doping in sport.

But once highly safe and effective cognitive enhancers are developed – as they almost surely will be – the question will arise whether they should be added to the water, like fluoride, or our cereals, like folate.

It seems likely that widespread population-level cognitive enhancement will be irresistible. Studies based on removing lead, which reduces cognitive ability, from water and paint have estimated that a 3-point IQ increase would lead to: a 25 percent reduction in the poverty rate, 25 percent fewer males in jail, 28 percent fewer high school dropouts, 20 percent fewer parentless children, 18 percent fewer welfare recipients, and 15 percent fewer out-of-wedlock births.

I think it’s pretty obvious that Savulescu is being deliberately provocative here. (It’s worth stating for the record that, contrary to the paranoid delusions of the trolls, there is no government conspiracy to put stimulants in the water. This is an academic philosopher blogging about a hypothetical. I can’t believe I had to spell that out.) What Savulescu is trying to explore is the hazy line between ordering a double espresso at Starbucks and snorting a bit of Ritalin. Both compounds are uppers, and both induce a similar set of cognitive effects: sharpened attention, improved learning and memory, temporary boosts in IQ scores, etc. Society has clearly benefited from caffeine (especially since morning coffee replaced morning beer as the 17th-century drink of choice), so why shouldn’t we also put a touch of amphetamine in the water?

Well, I think there are many good reasons why this is a very bad idea. Last year, I had an article in Nature on the thirty-three different rodent strains that show dramatically enhanced learning and memory. The genetically tweaked animals can learn faster, remember events for longer and solve complex mazes that confuse their ordinary littermates. At first glance, these strains seem like the rodents of the future, a case-study in the infinite possibilities of cognitive enhancement. But I think that’s a blinkered view. When you look closer at the mice, it becomes clear that many of these animal models of enhanced cognition come with subtle negative side-effects. Consider a mutant strain that overexpresses adenylyl cyclase in the forebrain: Although these mice exhibit improved recognition memory and LTP, they show decreased performance on memory-extinction tasks. (In other words, they struggle to forget irrelevant information.) Other strains of “smart mice” excel at complex exercises, such as the Morris Water Maze, but struggle with simpler versions of the same tasks. It’s as if they remember too much.

And then there’s “Doogie,” the rodent strain named after the fictional television prodigy Doogie Howser. These mice overexpress a particular subunit of the NMDA receptor, known as NR2B, which allows their receptors to stay open for twice as long as normal. The end result is that it’s easier for disparate events to get linked together in the brain. The only downside is that Doogie mice also seem to suffer from increased sensitivity to chronic pain. Their intelligence literally hurts.

And these tradeoffs don’t just exist in mice. Martha Farah, a neuroscientist and neuroethicist at Penn, is currently looking at the tradeoff between enhanced attention – she gives subjects a mild amphetamine – and performance on creative tasks. As she told me for the Nature article, “The brain appears to have made a compromise in that having a more accurate memory interferes with the ability to generalize…You need a little noise in order to be able to think abstractly, to get beyond the concrete and literal.”

That’s also the lesson of one of the few case studies of an individual with profoundly enhanced memory. In the early 1920s, the Russian neurologist A.R. Luria began studying the mnemonic skills of a newspaper reporter named Sherashevsky, who had been referred to the doctor by his editor. Luria quickly realized that Sherashevsky was a freak of recollection, a man with such a perfect memory that he often struggled to forget irrelevant details. After a single read of Dante’s Divine Comedy, he was able to recite the complete poem by heart. When given a random string of numbers hundreds of digits long, Sherashevsky easily remembered all the numbers, even weeks later. While this flawless memory occasionally helped Sherashevsky at work – he never needed to take notes – Luria also documented the profound disadvantages of such an infinite memory. Sherashevsky, for instance, was almost entirely unable to grasp metaphors, since his mind was so fixated on particulars. “He [Sherashevsky] tried to read poetry, but the obstacles to his understanding were overwhelming,” Luria wrote. “Each expression gave rise to a remembered image; this, in turn, would conflict with another image that had been evoked.”

For Luria, the struggles of Sherashevsky were a powerful reminder that the ability to forget is just as important as the ability to remember. As Jorge Luis Borges wrote in “Funes the Memorious,” a short story about a man with a perfect memory that was likely inspired by Luria’s case-study, “To think is to forget a difference, to generalize, to abstract. In the overly replete world of Funes, there were nothing but details.”

These unintended consequences are why I think it’s way too soon to start thinking about cognitive enhancement for people without cognitive deficits. (It’s also why I don’t dabble in modafinil or Adderall – I’ll stick with my cappuccinos, thank you very much.) After all, if we can’t even improve the intelligence of mice without causing worrisome side-effects, then what hope is there for the endlessly complex human cortex, full of feedback loops and interacting pathways? The brain is a precisely equilibrated machine, constructed over tens of millions of years by natural selection. Too many of our “improvements” come with a steep cost.

 
 

We Are All Talk Radio Hosts

05 Aug

Let me tell you a story about strawberry jam. In 1991, the psychologists Timothy Wilson and Jonathan Schooler decided to replicate a Consumer Reports taste test that carefully ranked forty-five different jams. Their scientific question was simple: Would random undergrads have the same preferences as the experts at the magazine? Did everybody agree on which strawberry jams tasted the best?

Wilson and Schooler took the 1st, 11th, 24th, 32nd, and 44th best-tasting jams (at least according to Consumer Reports) and asked the students for their opinion. In general, the preferences of the college students closely mirrored the preferences of the experts. Both groups thought Knott’s Berry Farm and Alpha Beta were the two best-tasting brands, with Featherweight a close third. They also agreed that the worst strawberry jams were Acme and Sorrel Ridge. When Wilson and Schooler compared the preferences of the students and the Consumer Reports panelists, they found a statistical correlation of .55. When it comes to judging jam, we are all natural experts. We can automatically pick out the products that provide us with the most pleasure.

But that was only the first part of the experiment. The psychologists then repeated the jam taste test with a separate group of college students, only this time they asked them to explain why they preferred one brand over another. As the undergrads tasted the jams, they filled out written questionnaires, which forced them to analyze their first impressions, to consciously explain their impulsive preferences. All this extra analysis seriously warped their jam judgment. The students now preferred Sorrel Ridge (the worst-tasting jam according to Consumer Reports) to Knott’s Berry Farm, which was the experts’ favorite. The correlation plummeted to .11, which means that there was virtually no relationship between the rankings of the experts and the opinions of these introspective students.
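To make those correlation numbers concrete, here’s a minimal sketch of the statistic in Python. The student rankings below are invented purely for illustration (the study’s raw data aren’t reproduced here), and I’m using a Spearman rank correlation as a stand-in; the paper’s exact statistic may differ:

```python
# Rank correlation between expert and student jam rankings (1 = best, 5 = worst).
# The five-jam setup follows the post; the student rankings are hypothetical.
from scipy.stats import spearmanr

experts = [1, 2, 3, 4, 5]  # Consumer Reports' ordering of the five jams

instinctive_students = [1, 3, 2, 4, 5]    # mostly agree with the experts
introspective_students = [4, 1, 3, 5, 2]  # scrambled by over-analysis

rho, _ = spearmanr(experts, instinctive_students)
print(rho)  # 0.9: the two rankings line up closely

rho, _ = spearmanr(experts, introspective_students)
print(rho)  # 0.0: knowing one ranking tells you nothing about the other
```

That’s how the .55 and .11 figures should be read: a correlation near 1 means the rankings agree, while a correlation near 0 means the experts’ order tells you almost nothing about the students’.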

What happened? Wilson and Schooler argue that “thinking too much” about strawberry jam causes us to focus on all sorts of variables that don’t actually matter. Instead of just listening to our instinctive preferences, we start searching for reasons to prefer one jam over another. For example, we might notice that the Acme brand is particularly easy to spread, and so we’ll give it a high ranking, even if we don’t actually care about the spreadability of jam. Or we might notice that Knott’s Berry Farm has a chunky texture, which seems like a bad thing, even if we’ve never really thought about the texture of jam before. But having a chunky texture sounds like a plausible reason to dislike a jam, and so we revise our preferences to reflect this convoluted logic.

And it’s not just jam: Wilson and others have since demonstrated that the same effect can interfere with our choice of posters, jelly beans, cars, IKEA couches and apartments. We assume that more rational analysis leads to better choices but, in many instances, that assumption is exactly backwards.

These studies represent an important reevaluation of the human reasoning process. Instead of celebrating our analytical powers, these experiments document our foibles and flaws. They explore why human reason can so often lead us to believe blatantly irrational things, or why it’s reliably associated with mistakes like cognitive dissonance or confirmation bias. And this leads me to a wonderful new paper by Hugo Mercier and Dan Sperber (I found it via this insightful talk by Jonathan Haidt) that summons a wide range of evidence – such as the strawberry jam study above – to argue that human reason has nothing to do with finding the truth, or locating the best alternative. Instead, it’s all about argumentation. Here’s their abstract:

Reasoning is generally seen as a means to improve knowledge and make better decisions. Much evidence, however, shows that reasoning often leads to epistemic distortions and poor decisions. This suggests rethinking the function of reasoning. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given humans’ exceptional dependence on communication and vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively with the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow the persistence of erroneous beliefs. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all of these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: look for arguments that support a given conclusion, and favor conclusions in support of which arguments can be found.

Needless to say, this new theory paints a rather bleak portrait of human nature. Ever since the Ancient Greeks, we’ve defined ourselves in terms of our rationality, the Promethean gift of reason. It’s what allows us to make sense of the world and uncover all sorts of hidden truths. It’s what separates us from other Old World primates. But Mercier and Sperber argue that reason has nothing to do with reality. Instead, it’s rooted in communication, in the act of trying to persuade other people that what we believe is true. And that’s why thinking more about strawberry jam doesn’t lead to better jam decisions. What it does do, however, is provide us with more ammunition to convince someone else that the chunky texture of Knott’s Berry Farm is really delicious, even if it’s not.

The larger moral is that our metaphors for reasoning are all wrong. We like to believe that the gift of human reason lets us think like scientists, so that our conscious thoughts lead us closer to the truth. But here’s the paradox: all that reasoning and confabulation can often lead us astray, so that we end up knowing less about what jams/cars/jelly beans we actually prefer. So here’s my new metaphor for human reason: our rational faculty isn’t a scientist – it’s a talk radio host. That voice in your head spewing out eloquent reasons to do this or do that doesn’t actually know what’s going on, and it’s not particularly adept at getting you nearer to reality. Instead, it only cares about finding reasons that sound good, even if the reasons are actually irrelevant or false. (Put another way, we’re not being rational – we’re rationalizing.) While it’s easy to read these crazy blog comments and feel smug, secure in our own sober thinking, it’s also worth remembering that we’re all vulnerable to sloppy reasoning and the confirmation bias. Everybody has a blowhard inside them. And this is why it’s so important to be aware of our cognitive limitations. Unless we take our innate flaws into account, the blessing of human reason can easily become a curse.


 
 

Twitter Strangers

20 Jul

Over at Gizmodo, Joel Johnson makes a convincing argument for adding random strangers to your twitter feed:

I realized most of my Twitter friends are like me: white dorks. So I picked out my new friend and started to pay attention.

She’s a Christian, but isn’t afraid of sex. She seems to have some problems trusting men, but she’s not afraid of them, either. She’s very proud of her fiscal responsibility. She looks lovely in her faux modeling shots, although I am surprised how much her style aligns with what I consider mall fashion when she’s a grown woman in her twenties. Her home is Detroit and she’s finding the process of buying a new car totally frustrating. She spends an embarrassing amount of time tweeting responses to the Kardashian family.

One of the best things about Twitter is that, once you’ve populated it with friends genuine or aspirational, it feels like a slow-burn house party you can pop into whenever you like. Yet even though adding random people on Twitter is just a one-click action, most of us prune our follow list very judiciously to keep tedious or random tweets from polluting our streams. Understandable! But don’t discount the joy of discovery that can come from weaving a stranger’s life into your own.

I’d argue that the benefits of these twitter strangers extend beyond the fleeting pleasures of electronic eavesdropping. Instead, being exposed to a constant stream of unexpected tweets – even when the tweets seem wrong, or nonsensical, or just plain silly – can actually expand our creative potential.

The explanation returns us to the banal predictability of the human imagination. In study after study, when people free-associate, they turn out not to be very free. For instance, if I ask you to free-associate on the word “blue,” chances are your first answer will be “sky”. Your next answer will probably be “ocean,” followed by “green” and, if you’re feeling creative, a noun like “jeans”. The reason is simple: Our associations are shaped by language, and language is full of cliches.

How do we escape these cliches? Charlan Nemeth, a psychologist at UC-Berkeley, has found a simple fix. Her experiment went like this: A lab assistant surreptitiously sat in on a group of subjects being shown a variety of color slides. The subjects were asked to identify each of the colors. Most of the slides were obvious, and the group quickly settled into a tedious routine. However, Nemeth instructed her lab assistant to occasionally shout out the wrong answer, so that a red slide would trigger a response of “yellow,” or a blue slide would lead to a reply of “green”. After a few minutes, the group was then asked to free-associate on these same colors. The results were impressive: Groups in the “dissent condition” – these were the people exposed to inaccurate descriptions – came up with much more original associations. Instead of saying that “blue” reminded them of “sky,” or that “green” made them think of “grass,” they were able to expand their loom of associations, so that “blue” might trigger thoughts of “Miles Davis” and “smurfs” and “pie”. The obvious answer had stopped being their only answer. More recently, Nemeth has found that a similar strategy can also lead to improved problem solving on a variety of creative tasks, such as free-associating on ways to improve traffic in the Bay Area.

The power of such “dissent” is really about the power of surprise. After hearing someone shout out an errant answer – the shock of hearing blue called “green” – we start to reconsider the meaning of the color. We try to understand this strange reply, which leads us to think about the problem from a new perspective. And so our comfortable associations – the easy link between blue and sky – get left behind. Our imagination has been stretched by an encounter we didn’t expect.

And this is why we should all follow strangers on Twitter. We naturally lead manicured lives, so that our favorite blogs and writers and friends all look and think and sound a lot like us. (While waiting in line for my cappuccino this weekend, I was ready to punch myself in the face, as I realized that everyone in line was wearing the exact same uniform: artfully frayed jeans, quirky printed t-shirts, flannel shirts, messy hair, etc. And we were all staring at the same gadget, and probably reading the same damn website. In other words, our pose of idiosyncratic uniqueness was a big charade. Self-loathing alert!) While this strategy might make life a bit more comfortable – strangers can say such strange things – it also means that our cliches of free-association get reinforced. We start thinking in ever more constricted ways.

That’s why following someone unexpected on Twitter can be a small step towards a more open mind: not everybody reacts to the same thing in the same way. Sometimes it takes a confederate in an experiment to remind us of that. And sometimes all it takes is a stranger on the internet, exposing us to a new way of thinking about God, Detroit and the Kardashians.