New Tools Detect Autism Disorders Earlier in Life

Aside from genetics, any parental contribution to the disorders is probably nil

By Susan Pinker

Oct. 5, 2017 10:56 a.m. ET

See the column on the Wall Street Journal site

When a child is diagnosed with an autistic-spectrum disorder, a parent’s emotions can swing from disbelief to worry to despair, and many ask themselves the understandable question: Why did this happen?

Genes are the answer, though which combinations are responsible remains a mystery.

The mounting evidence for a heritable cause hasn’t stopped some people from trying to pin the disorder on parents, fueling parental guilt and damaging families that are already struggling with a child’s diagnosis. Now a new study shows that the roots of autistic disorders are detectable so early in life that, other than genes, any parental contribution to the disorder is probably nil.

There is a long and bitter history of baseless finger-pointing around autism. In one of 20th-century psychology’s most shameful mistakes, supposed experts blamed the childhood disorder on “refrigerator mothers,” who were said to cause autism by being emotionally distant. Ultimately, studies showed that a crucial clue to the disorder’s origin was the babies’ inability to respond to their mother’s nurturing—not the other way around. Fifty years later, activists tied autism to childhood vaccines. This false idea led to fewer immunized children and a resurgence of dangerous infectious childhood diseases.

In the new study, published this summer in Biological Psychiatry, lead author John Lewis, a neuroscientist at the Montreal Neurological Institute, analyzed data from the MRIs of 260 babies to chart the trajectory of their developing brains. His previous work had revealed that toddlers with a strong family history of autistic spectrum disorders show sluggish neural pathways in areas critical to language and social development. Such pathways, composed of nerve fibers, transmit information from the body’s five senses and allow regions of the brain to communicate with each other. Dr. Lewis wanted to see how early these neural inefficiencies appeared.

Using MRI-based data, Dr. Lewis and his team charted—at six months of age and again at 12 months—the length and strength of fibers connecting different regions of the babies’ brains. Shorter and stronger connections are more efficient.

As children grow, their brains typically streamline such connections by “pruning”—a form of neural housekeeping whereby unnecessary or unused connections between distant brain regions are weeded out.

His research team tracked the neural pathways of two groups of infants. One group had a sibling on the autistic spectrum—which meant the baby was at high risk of developing the disorder. The control group had no family history of autistic spectrum disorders.

A comparison of the two groups revealed that, when analyzed as a group, the brains of 6-month-olds with an autistic sibling showed inefficiencies in the right auditory cortex, an area that processes speech sounds. By 12 months of age, certain neural areas critical for language, touch and self-awareness were also less efficient than those of the control group. “If your brain starts off not processing the sensory inputs efficiently, then it can’t do the proper pruning. It’s just passing on noise,” said Dr. Lewis.

The study was launched seven years ago, and by the time it was complete, the researchers knew which of the high-risk infants ended up with an autism spectrum diagnosis. (Almost 17% of the high-risk group received an autism diagnosis, compared with 1.3% of the control group.) Yet they found that the biological markers of their disorder were evident at 6 months of age.

A computer analysis of the high-risk group’s MRIs could retroactively identify which babies would ultimately show behavioral signs of an autism spectrum disorder years later—and which babies would be unaffected. What’s more, the degree of neural inefficiency predicted how severe that child’s symptoms would be.

This research suggests that very early diagnosis—and early intervention—is on our doorstep. It also means that parents can’t be blamed.

Social Ties Are Key for Survivors of a Disaster

In the aftermath of the 2011 tsunami, studies show that how people are relocated can affect their recovery

By Susan Pinker

Aug. 17, 2017 10:21 a.m. ET

See the column on the Wall Street Journal site

Twelve years ago this month, Hurricane Katrina devastated New Orleans and much of the rest of the Gulf Coast, killing some 1,500 people and displacing more than a million others. Six years later, when an earthquake and tsunami hit eastern Japan in 2011, about 18,500 people lost their lives, and another 345,000 lost their homes, some permanently.

Researchers have found that disaster survivors often suffer from a range of long-term mental and physical problems. Daniel Aldrich of Northeastern University has shown, for example, that those forced to relocate subsequently experience higher rates of depression and divorce. And survivors of the Japanese tsunami had a sharply higher rate of cognitive decline than expected for their age group, according to researchers at the Harvard T.H. Chan School of Public Health and several Japanese universities.

For those concerned about responding more effectively to future catastrophes, the question is whether it is possible to prevent such negative effects. Scientists studying the Japanese tsunami now seem to have discovered at least part of the solution: It turns out that how people are moved after a disaster has a big impact on their social relationships and, ultimately, on their health.

The findings, reported last month in the journal Science Advances, emerged by “pure serendipity,” according to Ichiro Kawachi, the lead author of the new study, who had worked on the earlier cognitive-decline paper.

In 2010, Prof. Kawachi, an epidemiologist at the T.H. Chan School of Public Health, along with his postdoctoral fellow Hiroyuki Hikichi and several colleagues, launched a study in Japan focused on the predictors of healthy aging. The researchers sent detailed questionnaires about lifestyle and social habits to everyone over age 65 in 20 Japanese municipalities. Seven months later the tsunami hit. By then, the researchers had extensive data on about 3,420 people in the Miyagi Prefecture, a densely inhabited area about 50 miles from the disaster’s epicenter.

Of these people, 175 had to be permanently resettled because their homes were destroyed. First they had to endure “stressful living situations in school gyms, where there were few toilets and no privacy. People were in a hurry to get out of there,” said Prof. Kawachi. To get to the next stage of more permanent housing, survivors could either sign up for a lottery that gave winners a place at the front of the line for trailers, or wait and move into emergency shelters together as a neighborhood or group. Ultimately, everyone ended up in the same remote, unheated and rather dismal type of shelter.

Two and a half years after the disaster, the researchers again reached out to the survivors. The 96 people who had relocated on their own, the researchers learned, ended up with an impoverished social life. Compared with their pre-tsunami lives, they met less with friends and joined in fewer civic and leisure activities. They were also less likely to support other people or get help themselves. The disaster had effectively stranded them.

In comparison, the 79 people who relocated as a group preserved and enhanced their patterns of informal socializing, not only with friends, but also in how they engaged with the community—in sports, church, hobbies or volunteering. The researchers were careful to control for any independent effect that people’s personality traits or other factors might have had on the results.

The upshot? The weight of evidence shows that disaster-response managers should focus less on speed, sea walls and sandbags and more on preserving people’s social ties, says Prof. Aldrich, who has studied both the Japanese and Gulf Coast catastrophes. “After Katrina, people were put on a bus and not told where they were going. When they arrived they were told, ‘Welcome. You’re living in Arkansas now.’ ”

Prof. Kawachi agrees. “Losing contact with neighbors…hastens dementia and loss of physical function. An underappreciated aspect of disaster policy is that human relations matter as much as giving out timely aid.”


What Chimps Understand About Reciprocity

A new study suggests that one of our closest relatives can show intuitions of fairness

By Susan Pinker

July 21, 2017 11:52 a.m. ET

See the column on the Wall Street Journal site


When someone does something nice for you, do you return the favor? Most of us do, and not just because our mothers said we should.

Basic fairness is probably written into our genetic code. Human societies depend on the expectation of reciprocity: We assume that a neighbor will collect our mail if we’ve mowed their lawn, or that drivers will take turns braking at stop signs.

Fundamental as this trait might seem, however, its evolutionary origins are hazy. Previous research has shown that chimpanzees—one of our closest relatives—are less motivated by fairness than by what they immediately stand to gain from a transaction.

A new study shows that chimps can go beyond such reflexive selfishness and cooperate, even if it costs them something. But they don’t just give up what’s theirs, even to their kin. They are particular about when they will share some of their food, according to research led by University of Vienna biologist Martin Schmelz and just published in Proceedings of the National Academy of Sciences.

Like many of us, the team found, chimps keep score: They’re most likely to allot treats to a partner if that chimp helped them first.

How do we know this, since chimps can’t discuss their intentions? Dr. Schmelz and his colleagues constructed an apparatus in which two chimps face one another with a cage between them. It contains several bowls of food, which the chimps can’t reach.

One chimp can pull on a string, thus lifting a latch that gives the other chimp access to the food in the cage. That chimp then has a decision to make: It can keep a larger portion entirely for itself—with nothing for its partner—or opt for two smaller but equal amounts. The helper chimp had been trained to always pull the string, thus giving its partner access to the bowls of food and the impression of generosity.

In these circumstances, the team found, the chimp given access to the bowls usually chose to reward the string-pulling helper chimp with an equal amount. The decider chimp seemed aware that the other chimp had provided a crucial benefit and wanted to reciprocate.

In other trials, no string was available to the helper chimp. A human experimenter opened the latch, while the helper chimp rocked on its haunches, apparently powerless. In this case, when the helper chimp wasn’t the one opening the latch, the decider chimp seemed unconcerned about repaying any debt: Sometimes it allocated food to its partner, other times it didn’t.

The chimps were keeping a mental tally—or, to put it more charitably, showing intuitions of fairness. When the helper chimp opened the latch, the decider chimp “chose the option where they both got food more often,” Dr. Schmelz said. “But when I opened it, the [decider chimp] chose randomly.”

The study was small: It involved six chimps from the Leipzig Zoo that were more or less related to one another and took turns playing the role of helper and decider. Some of the chimps at the zoo couldn’t master the apparatus and were cut from the experiment. This raises the question of whether reciprocity surfaces only in more intelligent chimps.

Moreover, the chimps showed not pure generosity but tit-for-tat reciprocity. “They didn’t share spontaneously,” Dr. Schmelz noted. “They only gave a partner food when the partner had assisted them before.”

Still, the chimps had the smarts—and the primitive moral sense—to keep track of who just did what for whom, and they were motivated to reward or punish recent past behavior. The results were strikingly consistent, even in a limited sample, and suggest how more nuanced social exchanges in humans might have evolved.

We don’t usually expect animals to care about debts to each other. But the chimps’ ability to do just that may tell us something about the universal human capacity to form complex societies based on fairness.


Can an Entire Generation Change Its Personality?

By Susan Pinker

See the column on the Wall Street Journal site

When the cartoon character Popeye proclaims, “I yam what I yam,” he personifies the idea that personality traits are more or less fixed in adulthood. Whether shy or outgoing, a leader or a follower, we are—notwithstanding minor tweaks—who we are.

But what if a whole generation can learn a new set of tricks?

A new study suggests as much. Published this month in the Proceedings of the National Academy of Sciences, the research shows a shift in many personality traits in men that could make them more successful in the workplace. This huge study included 80% of the male population born in Finland between 1962 and 1976, or 419,523 young men. All of them had taken standardized cognitive and personality tests when they entered the Finnish Defense Forces under a compulsory male draft at age 19 or 20.

The researchers then looked at these Finnish men’s average annual earnings at age 30 to 34, which the scientists considered a good predictor of their lifetime earnings.

What struck researchers the most was that self-confidence, sociability and leadership motivation all rose on average from the group of men born in 1962 to those born in 1976. Striving, deliberation and dutifulness crept up too, though not as much. Levels of intelligence or family income didn’t seem to be driving these generational shifts, given that they surfaced at all cognitive levels and social strata.

And here’s the kicker: When the researchers compared personality scores when the men entered the draft with earnings at age 30 to 34, they found that even small upward shifts in personality ratings predicted a higher income 10 years later—with the 1976 group earning as much as 12% more than its 1962 counterpart, when other factors such as inflation, overall wage rises and education were stripped away.

“We don’t want to say that personality improves, because reasonable people can perfectly well disagree on what constitutes a good personality,” wrote Matti Sarvimäki, one of five authors of the study and an economist at Aalto University in Finland. “What we show is that the types of personality traits that predict higher earnings rise” from birth year to birth year.

Of course, we should be cautious when extrapolating from the experience of Finland, a country of about 5.5 million people, to the U.S., and it isn’t clear whether the same generational shifts are under way in the American labor pool. In addition, because of its male-only draft, the Finnish study was limited to men.

Still, this study’s findings line up with some other positive trends. U.S. college students have become more outgoing, self-confident and self-absorbed—though that last trait may not be quite as positive as the others. Another rising measure: IQ scores are improving about three points a decade, a phenomenon known as the Flynn effect.

As with IQs, we don’t know why personality traits are changing. “It’s kind of a mystery,” says Richard Haier, an emeritus professor at the University of California, Irvine, and the author of “The Neuroscience of Intelligence.”

If extraversion is rising, he said, then—as is the case with IQ-test levels—improvements in general health and nutrition could be driving the change. Education could play a role too, he says—for example, more exposure to problem-solving at school. Indeed, education is one of the factors Prof. Sarvimäki will explore next.

Does Facebook Make Us Unhappy and Unhealthy?

A look at new research covering thousands of adults

By Susan Pinker

See the column on the Wall Street Journal site


If you’re one of the almost two billion active users of Facebook, the site’s blend of gossip, news, animal videos and bragging opportunities can be irresistible. But is it good for you?

A rigorous study recently published in the American Journal of Epidemiology suggests that it isn’t. Researchers found that the more people use Facebook, the less healthy they are and the less satisfied with their lives. To put it baldly: The more times you click “like,” the worse you feel.

The study’s authors, Holly Shakya, an assistant professor of public health at the University of California, San Diego, and Nicholas Christakis, the director of the Human Nature Lab at Yale University, monitored the mental health and social lives of 5,208 adults over two years. The subjects agreed to participate in national surveys collected by the Gallup organization between 2013 and 2015 and, during that time, to share information with the researchers about their health, social lives and Facebook use.

The use of the Gallup survey at three different points let the scientists take informational snapshots of the participants’ health and social lives and chart how their feelings and behavior changed over the two years. The researchers also kept direct tabs on the subjects’ Facebook usage: how often they clicked “like,” clicked others’ posts or updated their own status.

Using standardized questionnaires, the researchers also asked about participants’ social lives: How often did they get together with friends and acquaintances in the real world, and how close did the participants feel to them? There were queries about life satisfaction, mental health and body weight, too.

The findings? Using Facebook was tightly linked to compromised social, physical and psychological health. For example, for each statistical jump (away from the average) in “liking” other people’s posts, clicking their links or updating one’s own status, there was a 5% to 8% increase in the likelihood that the person would later experience mental-health problems.

Responding to the study, Facebook cited an earlier paper by a company scientist and Carnegie Mellon University Prof. Robert Kraut. “The internet’s effect on your well-being depends on how you use it,” the authors wrote. Participants who received more Facebook comments than average from close friends reported a 1% to 3% uptick in satisfaction with life, mood and social support, the study reported. It also acknowledged that it’s hard to measure the emotional effect of the internet.

In the last couple of months, two other studies have cast a negative light on the social-media use of teenagers and young adults. One, of 1,787 Americans, found that social media increased feelings of isolation; the other, of 1,500 Britons, found that the websites—image-based sites in particular—exacerbated feelings of anxiety and inadequacy.

In their own study of adults on Facebook, Profs. Shakya and Christakis found a powerful link between real-world, face-to-face social contact and better psychological and physical health all around, a finding matched by dozens of previous studies. What’s astonishing about this research is that the investigators had direct access to people’s Facebook data over a two-year period. With a dynamic picture of how the participants’ activities and outlook evolved over that time, they could see whether someone who was already sad or in poor health used Facebook more often—or whether their symptoms started or worsened in tandem with their online social activities.

Still, there are some nuances to consider. Why would online social activity be so damaging to health and well-being in this study when the same activity was found to be correlated with longevity in a 2016 study co-written by Prof. Christakis? The bottom line, he says, is that replacing in-person interactions with online contact can be a threat to your mental health. “What people really need is real friendships and real interactions,” he adds.

Appeared in the May 27, 2017, print edition as ‘Does Facebook Make Us Unhappy And Unhealthy?.’

‘Momnesia’? No, Pregnancy May Boost Intelligence

Research suggests that mental changes last well past delivery

By Susan Pinker

See the column on the Wall Street Journal site


As a mother of three, I had always thought of pregnancy as a time of increased girth but decreased smarts. I wasn’t alone in thinking that my mental capacities were temporarily making way for the needs of the new arrival. Mommy Brain and Momnesia—pop terms for the sleepiness of pregnancy and the postpartum period—have branded such folk wisdom with the veneer of truth.

But that’s where the proof for Mommy Brain ends. Though many women say pregnancy makes their thinking fuzzy, an impressive body of research shows the opposite. Pregnancy and motherhood seem to make mothers smarter.

A 2014 study led by the late Craig Kinsley of the University of Richmond and published in the journal Hormones and Behavior showed that lactating mother rats beat childless rats at hunting—an asset not linked to any detectable uptick in their ability to hear, see or smell. Instead, mother rats had a fertility-related boost in mental power, making them better at providing for themselves and their young.

Thanks to work by Dr. Kinsley and his team, we also know that the pregnancy-related hormones of motherhood restructure some brain areas not typically linked to reproduction, such as the hippocampus. This seahorse-shaped brain area consolidates memories and helps us figure out how to navigate through space. Motherhood-related hormones might explain why pregnant and lactating rats beat their non-reproducing female peers at running mazes, for example.

Amazingly, these hormones can also protect mothers’ brains from injury, says Adam Franssen, an associate professor of biology at Longwood University in Virginia and a former colleague of Dr. Kinsley’s. Five years ago, Dr. Franssen led a study that exposed mother rats and childless rats to new experiences and then injected an acid into the rats’ hippocampi to create amnesia. The mother rats’ memory and problem-solving abilities rebounded more quickly, compared with female rats without offspring.

Recent evidence suggests that pregnancy induces changes in the human brain, too. In a study published a few months ago in the journal Nature Neuroscience, Elseline Hoekzema of Leiden University in the Netherlands and colleagues scanned the brains of about 80 women and men, half of them hoping to become parents. The couples who wanted to have a baby were scanned before pregnancy, then again if they got pregnant, after the baby was born and when the baby turned 2.

Women who became pregnant between the scanning sessions showed neural changes so distinct that a computer could distinguish between pregnant and nonpregnant women based on their brain scans alone. The heightened estrogen and progesterone hormones of pregnancy trimmed back some “gray matter”—the cell branches that connect neurons to each other—which has the effect of sharpening, not diminishing, mental capacities. The neural pathways that remain are streamlined and strengthened in the process.

“An analogy would be moving to a neighborhood and trying to find the best way home from work. Each day you take different routes, eventually settling on the most efficient route. You’ve pruned out the less efficient neural pathways and can travel the main route with almost no effort. The same is likely true during pregnancy and birth. The hormones of pregnancy help the brain refocus on the new priority in the mother’s life,” Dr. Franssen says.

The regions affected included ones linked to maternal bonding and memory. And the changes weren’t temporary. As Dr. Hoekzema said in an interview, they lasted for at least two years after a child was born.

Cognitive tests showed that this pruning of gray matter was associated with greater social acuity, and there was no attendant decline in the women’s intelligence. The mothers’ neural pruning affected the same areas that were activated—brain regions linked to empathy and nurturing—when they saw photos of their own babies. The change is “really about refinement and specialization…and a better recognition of emotions,” Dr. Hoekzema told me.

Add these findings to a big Swedish study released in March, showing that parenthood is linked to living longer in both sexes, and we can discredit two old saws: Children do not take years off your life, and pregnancy does not erode your smarts.

You Can’t Be Fooled by a Con? Don’t Count On It

Sobering April Fool’s news from research on deception

By Susan Pinker

See the column on the Wall Street Journal site


A friend recently discovered a surprise on her credit-card account: nearly $20,000 in cash withdrawals, along with charges for private-jet flights and a trip to Acapulco. This was no April Fool’s joke. A young family friend, to whom she had offered moral support and a couch in her basement when he was in dire straits, had conned her.

If you think that you couldn’t be tricked so easily, you’d be wrong. Some 35 million Americans fall for scams each year, according to the Federal Trade Commission.

One reason for our susceptibility to deception is that evolution has tipped humans toward trusting and cooperating with each other for mutual advantage. “Any of us can be fooled, because con artists give us the best possible version of ourselves,” says psychologist Maria Konnikova, author of the 2016 book “The Confidence Game.”

Research shows how hard it is for us to detect a swindler. Hundreds of lie-detection studies suggest that we succeed at it less than half the time—flipping a coin would be more accurate. This may be because most of our stereotypes about liars are wrong. People don’t avoid eye contact or fidget when they’re lying, says Leanne ten Brinke, a researcher on deception at the University of Denver. A telling clue, “such as Pinocchio’s nose,” just doesn’t exist, she writes.

A forensic psychologist, Dr. ten Brinke had long suspected that people’s indirect assessments of deception might be more effective than their conscious attempts. After all, studies have found that nonhuman primates like chimpanzees can detect fakery designed to distract them from hidden food; other research shows that some people who can’t understand speech due to brain damage are better at sussing out swindlers than people with no language deficits. Could it be that the unconscious mind is better than rational focus when it comes to detecting lies?

In a study published in 2014 in the journal Psychological Science by Dr. ten Brinke and her colleagues, a total of 138 subjects participated in two experiments designed to answer this question. In the first experiment, researchers randomly set up two groups. The first was told to steal $100 from an envelope deliberately placed in the testing room and then to lie about it. A second group was told not to steal the money and to tell the truth.

The experimenters posed questions to both groups like, “Did you steal the money?” along with neutral queries about subjects like the weather. Later, when a new set of participants saw the video footage of one truth-teller paired with one liar, they could distinguish between honesty and fibs only 45% of the time—a dismal figure in line with previous research.

The researchers then tried to get below the mind’s surface with a second experiment. They tested whether the image of someone telling a lie—even if glimpsed for just fractions of a second—would prime the viewer to recognize notions related to dishonesty and, conversely, whether a very brief flash of a truth-teller would do the same for ideas related to honesty.

The experimenters showed participants still photos from the first experiment’s videos and then asked them to categorize a set of words related to veracity. They found that viewers were able to categorize lie-connected words—such as dishonest and deceitful—more quickly and accurately when they had just glimpsed an image of someone who was lying. Similarly, after a subliminal flash of a truth-teller’s face, the participants responded more quickly and accurately to words like genuine or sincere. This suggests that they had somehow registered who was telling the truth and who was lying.

But the priming only went so far. When shown the images of people lying and telling the truth and specifically asked to judge which was which, the participants didn’t do nearly as well at the task.

The sobering April Fool’s message for the rationalists among us is that a sucker is born every minute. And that sucker is us.

A Poor Sense of Smell May Point to Brain Trouble

New research ties dementia predictions to olfactory results

By Susan Pinker

See the column on the Wall Street Journal site


If you can smell the difference between pineapple and paint-thinner, that’s a good sign: You may be more likely to keep your marbles well into the future. So says a new study on the predictive power of our sense of smell.

In a paper published in the journal Neurology in December, neurologist Kristine Yaffe and her team discovered that older adults with a keen sense of smell are less likely than their peers to develop dementia as they age. In contrast, a blunted sense of smell, much like an altered sense of humor or sense of direction, may presage more pervasive cognitive losses in the years ahead.

Previous research had led Dr. Yaffe, a professor of psychiatry and neurology at the University of California, San Francisco, to suspect that people with a diminished ability to identify odors might be at increased risk of dementia. Olfactory nerve fibers project into the brain’s centers for memory and emotional processing, suggesting a close connection between the sense of smell and the regions that dementia affects.

Dr. Yaffe’s research team followed 2,428 healthy adults in their 70s, all participants in a national, ongoing study of aging. Previous studies on the topic were often confined to white populations. This time the research team created a group that was half black, half white as well as half male, half female.

Three years into the study, the researchers assessed the participants’ sense of smell using a standardized test of 12 common scents. The test required the subjects to inhale a series of airborne chemical compounds present in foods like onion, lemon and chocolate, as well as in environmental odors, such as smoke and gasoline. After each whiff, the subjects had to identify what they had just smelled, which resulted in their “odor identification score.”

The researchers then monitored the participants’ psychological and medical status for the next nine years, using a memory test and their medication and hospitalization records. The team controlled for other factors that could increase the risk of memory loss, such as a prior smoking habit, a history of head trauma, depression or a genetic predisposition for Alzheimer’s disease. With those factors statistically stripped away, the experimenters found that on tests for sense of smell, men trailed women slightly and blacks lagged a bit behind whites.

What mattered most, though, were comparisons within the racial and gender groups. People of either race or sex who had more trouble identifying smells were indeed more likely to develop dementia—despite having no signs of cognitive decline when they signed up for the study, nor during its first three years.

Among whites, the weakest smellers (the lowest third of the group) had three times the dementia risk of the highest third. Among black participants, the weakest group had twice the risk. Subtypes of dementia may explain the difference: Smell is a better early predictor of Alzheimer’s, and blacks, Dr. Yaffe said, have a greater risk of developing vascular dementia, which causes diffuse neural deficits, than Alzheimer’s, which leaves more tangles and plaques near the olfactory nerve.

“What’s happening,” she added, “is that abnormal proteins are building up over decades, and some of the early changes start in the olfactory bulb, the brain structure that receives neural information about odors. When you think about how our brains evolved, it’s not a shocker that olfaction, considered an older part of the brain, would reflect degenerative processes first.”

In all, 491 of the participants developed dementia by the end of the 12 years.

That a compromised sense of smell may be a biomarker for dementia is both good news and bad news. An early warning signal has limited use, since no drug yet exists to head off Alzheimer’s (though several medications are being developed that target its symptoms). Still, those who can’t smell shouldn’t panic. Researchers emphasize that a mildly dialed-down sense of smell and taste—much like mild hearing loss—is just a feature of aging.


Watching Terror and Other Traumas Can Deeply Hurt Teenagers

The focus: web exposure to the Boston Marathon bombing

By Susan Pinker

See the column on the Wall Street Journal site


Several weeks into the Gulf War in 1991, when I was working as a clinical psychologist in Montreal, pediatricians started referring children with mysterious behavior problems. A few 8- to 10-year-olds had started to wet their beds for the first time since they were toddlers. Others were refusing to go to school. Their first interviews revealed a common thread: Though strangers to each other, the children and their families were all visitors or immigrants from Israel, and they had been watching Scud missile attacks against Israel on the news every day.

These children were living more than 5,000 miles from the bombing, but observing the devastation from afar—and their parents’ reactions to it—had elicited an emotional response that skewed their habits. New research is now helping us to understand how second-hand exposure to disasters can change our brains and behavior, and who is most at risk.

“We used to think of it as a bull’s-eye. The closer you were to the trauma, the more likely you were to show symptoms,” said Jonathan Comer, a professor of psychology at Florida International University who investigates the impact of terrorist attacks on children and families. “But a summary of post-9/11 reactions really challenged that idea.” Those closest to the event didn’t necessarily show the most trauma, said Prof. Comer.

Prof. Comer’s own research upends our assumptions about who is most prone to feel psychological fallout after a terrorist attack. In a 2016 study of the Boston Marathon bombing published last summer in Evidence-Based Practice in Child and Adolescent Mental Health, the research team surveyed nearly 500 parents of 4-to-19-year-olds about their children’s internet exposure during the bombing and the manhunt that followed. The surveyed families all lived within 25 miles of either the bombing site itself or Watertown, Mass., where the police had ordered the local population to shelter in place during a search for suspects.

The findings? As one would expect, exposure to graphic online content rose with age. Over 23% of children saw internet images of the bombing, and 68% viewed footage of police with military-grade weapons searching their neighborhoods. The average was two to three hours of daily online activity per child. While those under age 8 were less exposed, three-quarters of children over 12 spent up to six hours daily viewing online news and social media coverage of the crisis.

Teenagers were the most prone of all age groups to experience psychological trauma after the bombing. The researchers found that the more internet news and social media contact they had about the bombing and manhunt, the more severe were their PTSD symptoms, which ranged from intrusive flashbacks to emotional numbing. What’s more, even though 87% of parents believed that online exposure to the crisis could be damaging, very few restricted their children’s access to it. And what parents thought was private—their own fear—turned out to be contagious, too, as shown in a 2014 study of parents’ reactions to the marathon bombing by Prof. Comer’s team.

To understand how observational distress works, a team led by Alexei Morozov at the Virginia Tech Carilion Research Institute looked at what happens when mice watch a sibling experience fear. Dr. Morozov first learned that vicarious fear leaves a neural trace that predisposes animals to experience trauma later on. A study by his team published last month in Neuropsychopharmacology found that vicarious trauma changes the connections between the amygdala (roughly, our emotion and survival center) and the prefrontal cortex (the planning and decision-making area). “After the animal has the experience of the other’s pain, it allows excitation to be more robust…it reduces the ability of the prefrontal cortex to act rationally,” he told me.

That’s one reason why “watching around-the-clock breaking news is not in our best interest—either for adults or children,” said Prof. Comer. Because it’s not how much danger we’re in that matters. It’s how much threat we perceive in others.

What You Just Forgot May Be ‘Sleeping’

Can’t remember what you were just thinking about? A new study amends our understanding of how memory works

By Susan Pinker

See the column on the Wall Street Journal site


If you’ve ever forgotten why you just entered a room, you know how fickle memory can be. One moment it’s obvious why you walked down the hall, and the next moment you’re standing there befuddled.

Here today, gone in a millisecond. At least that’s how we used to think about short-term, or working, memory. But a study just published in the journal Science tells a different story. A recent idea or word that you’re trying to recall has not, in fact, gone AWOL, as we previously thought. According to new brain-decoding techniques, it’s just sleeping.

“Earlier experiments show that a neural representation of a word disappeared,” said the study’s lead author, Brad Postle, a professor of psychology and psychiatry at the University of Wisconsin-Madison. But by using a trio of cutting-edge techniques, Dr. Postle and his team have revealed just where the neural trace of that word is held until it can be cued up again.

Their study amends the long-standing view of how memory works. Until now, psychologists thought that short-term memory evaporates when you stop thinking about something, while long-term memory permanently rewires neural connections. The new research reveals a neural signature for a third type of memory: behind-the-scenes thoughts that are warehoused in the brain.

In the study’s four experiments, a total of 65 students viewed a pair of images—some combination of a word, a face or a cloud of moving dots—on a screen. During a 10-second period, the students were prompted to think about one of the two images they had seen. After a brief delay, they had to confirm whether a picture they saw matched one of the first two images.

Throughout the experiment, the Postle team monitored the students’ pattern of neural activity. When they prompted the students to remember a particular image, a unique 3-D display of neural activity corresponding to that idea popped up. What really interested the experimenters, though, was what the brain was doing with the image it had effectively set aside. Where did that memory go while it was waiting in the wings?

To find out, the team used a new technique to see what happened when the participants were warned that they would be tested on the set-aside image. This novel approach created a dynamic 3-D display of electroencephalogram (brain wave) and brain-imaging data that let the researchers see beyond what part of the brain “lights up” and zoom in on the pattern of activity within a region. That’s how the team learned that the students’ brain activity had indeed shifted to the “on hold” image’s distinctive pattern—which until then had been invisible.

To confirm that the memory still existed even while a person was not thinking about it, the scientists used another recent technique, transcranial magnetic stimulation, or TMS. They positioned a wand over a participant’s scalp and delivered a harmless magnetic pulse to the brain areas that held the images. The pulse made the distinctive neural signature of those fleeting memories visible to the scientists and triggered their recall in the students.

Dr. Postle compared working memory to paper inscribed with invisible ink. Words written in lemon juice are initially imperceptible, but by passing a hot cup of coffee over the paper, “you can see the part of the message that was heated up…. Our TMS is like the coffee cup.” In this way the team activated a memory that was not only temporary but below the student’s level of consciousness.

Dr. Postle’s new trifecta of brain-imaging and brain-stimulation techniques, which can reactivate forgotten memories, has enticing—though still remote—therapeutic possibilities. It is neuroscience’s most faithful reading yet of the real-time content of our thoughts—about as close as we have ever come to mind-reading.

“Our study suggests that there’s information in the penumbra of our awareness. We are not aware that it’s there, but it’s potentially accessible,” said Dr. Postle.
