When a Better Neighborhood Is Bad for Boys

Research shows that when poor families move into more expensive housing, girls’ lives improve while boys’ get worse. What explains the difference?

By SUSAN PINKER

Sept. 26, 2018 11:10 a.m. ET

See the column on the Wall Street Journal site

Imagine you’re a single mother living at or below the poverty line in a troubled neighborhood. If you want to shield your teenager from drinking and mental distress, should you try to move to a better area or stay put? The answer depends on whether your teen is a boy or a girl, according to a new paper published in the journal Addiction.

The lead author of the study, University of Minnesota epidemiologist Theresa Osypuk, investigated the drinking habits and mental health of teenagers whose families lived in public housing in the late 1990s. About two-thirds of the families were randomly chosen to receive housing vouchers, allowing them to move into better areas.

Between four and seven years later, the researchers found, adolescent girls who had moved into more expensive neighborhoods were far less likely to drink to excess than girls who remained in public housing. But boys whose families had moved binged more. This surprising finding challenges the assumption that behavioral risks increase with economic hardship and that poverty affects women and men the same way.

It all started with a controversial social experiment called Moving to Opportunity. The goal was to give the mostly female-led families living in public housing a leg up in the labor market, not by improving their skills but by improving their housing. From 1994 to 1998, almost 5,000 low-income families in five cities—New York, Boston, Chicago, L.A. and Baltimore—were offered the chance to participate in a lottery.

Those who opted in were randomly assigned to one of three groups. The first group received a voucher that tripled their rent budget. With this windfall they were expected to move into a nicer neighborhood. A second group got the same voucher along with relocation counseling. A third was the control group: They stayed in public housing and presumably nothing would change for them.

The results were disappointing at first. To the chagrin of the policy wonks who designed the program, improving where women lived had absolutely no effect on their employment. But it had a big impact on their health. “Rates of obesity were lower, markers of diabetes were better, mental health was better,” Prof. Osypuk said.

The second eye-opener was that moving to better neighborhoods affected men and women differently. “The households were mainly led by moms, who saw mental health benefits, and their girls did, too. But the boys saw no mental health effects, or negative effects,” said Prof. Osypuk.

The key factor was how vulnerable people were before the move. Boys are developmentally more fragile than girls, with higher rates of learning and behavior problems. That’s one reason why the well-being of the boys in the voucher groups tanked, according to Prof. Osypuk. Boys who moved out of public housing not only drank more but also showed higher rates of distress, depression and behavior problems, according to a 2012 paper that she and her team published in the journal Pediatrics.

“Boys have mental health disadvantages, and the stress of moving adds insult to injury,” Prof. Osypuk said. Just when these vulnerable boys most needed predictability, their social worlds were upended. “They moved down in the social hierarchy and hung out with riskier boys,” speculated Prof. Osypuk. Meanwhile, girls who moved to better neighborhoods experienced fewer sexual stressors and adapted to their new circumstances more easily.

When it comes to moving out of poverty, it would seem that equal treatment for everyone is only fair. This research, however, hammers home the idea that one size does not fit all.

An Unforgettable Memory Expert Muses at 100

Brenda Milner is celebrated for her insight into recollections as a feature of neurobiology; the man who could only live in the present

By SUSAN PINKER

Updated Aug. 23, 2018 10:28 a.m. ET


If you’ve ever wondered where your memory has gone, ask Brenda Milner. The British-Canadian researcher, who just turned 100, was one of the first to discover how memories are stashed in the brain. She has spent the last 68 years investigating how we consolidate new knowledge, so you could say that she knows a thing or two about remembering.

Dr. Milner began her career as one of a handful of women admitted to study mathematics at Cambridge University in 1936. Her determination was evident even then. “Cambridge was associated with mathematics and physics—you know Isaac Newton went there. That’s where I wanted to go and nowhere else,” she told me in 2007 (I recently interviewed her again by email).

This tenacity served Dr. Milner well in 1944, when she left a job crunching numbers at the British Defense Ministry and moved to Montreal to pursue a Ph.D. in psychology. There she worked with the neurologist Wilder Penfield at McGill’s Montreal Neurological Institute. Their research on the post-surgical brain function of epileptic patients led her to reject the then-fashionable theories that memory was a product of Freudian urges or behaviorist stimulus-response chains. Her key insight was to see memory as a feature of human neurobiology.

Dr. Milner is now considered one of the founders of cognitive neuroscience, which links the mind—perceiving, thinking, remembering—to the brain. One of the current leaders in the field, Michael Gazzaniga of the University of California, Santa Barbara, calls her “a true pioneer.”

When she started in the 1950s, the only way to localize mental activity was to see what had changed after injuries or surgery. One patient was Henry Molaison, a 24-year-old from Connecticut who suffered from debilitating epilepsy. H.M., as he was known until his death in 2008, underwent surgery to remove parts of his temporal lobe—including his hippocampus—which the doctors thought to be the locus of his seizures.

Dr. Milner tested his cognitive function after surgery and in a 1957 paper described what happened next. Though H.M.’s personality and intelligence seemed unchanged, “there has been one striking and totally unexpected behavioral result: a grave loss of recent memory. After the operation this young man could no longer recognize the hospital staff nor find his way to the bathroom.” H.M. remembered events from his distant past and with practice could learn new motor skills, but without his hippocampus, any novel experience—whom he had just met or what he ate for lunch—never jelled into a long-term memory.

H.M. was forced to live in the present, which despite its Zen billing, had its downsides. He had to learn of his father’s death over and over again. Each time he grieved anew. Ultimately he kept a reminder in his pocket as a form of self-protection.

By showing how distinct types of memory are stored in different brain systems—how to ride a bike or sing a Broadway tune is stored differently than the name of your third-grade teacher—Dr. Milner revamped neuroscience’s atlas of memory. Knowing how to do something does not require the hippocampus. Knowing that you’ve learned something does.

Dr. Milner still goes to the lab a few days a week. Though most people associate her with H.M., she is “more excited about my frontal lobe work,” which helped to define the seat of self-control, planning and decision-making.

“Brain imaging is a huge thing,” she said in a recent email, when I asked what had changed in 60 years. “Back then, you had to wait until the subject died because the only way to see the brain was to dissect it.” Now you can assess healthy young adults. “To see the brain images of a living person while testing them is extremely exciting.” After all, she added, “we all go downhill after our mid-40s.”

Clearly, Brenda Milner is the ultimate exception to that rule.

Kids Today Are Actually More Patient Than Kids 50 Years Ago

Research shows that today’s children can control their impulses better than earlier generations. What’s changed?

By SUSAN PINKER

July 11, 2018 1:25 p.m. ET


Kids today. The phrase is usually followed by eye-rolling and words like self-absorbed, impatient and entitled. But the idea that today’s children need immediate gratification turns out to be wrong. In fact, research published last month in the journal Developmental Psychology shows that they are much more patient than kids were 50 years ago.

Yes, you read that correctly. Twenty-first century children are able to wait longer for a reward than children of the same age a generation ago, and a generation before that. The new study shows that today’s preschoolers are better at what psychologists call self-regulation, which is the conscious control of one’s immediate desires—the ability to hold off and wait until the time is right.

Stephanie Carlson, the lead author of the paper and a professor at the University of Minnesota’s Institute of Child Development, knows that her findings will come as a surprise: “The implicit assumption is that there’s no way that kids can delay. They’re used to being gratified immediately and don’t know what it’s like to be bored anymore.”

But faithful re-enactments of the famous “marshmallow experiment” have upended that notion. The experiment was first designed in 1968 by Walter Mischel of Stanford University, with the participation of 165 children between ages 3 and 5 who were attending the university’s Bing Preschool. The setup was simple: Each child was left alone in a quiet room facing two plates of goodies. One plate held a single treat—one Oreo cookie or one marshmallow, for example—while the other plate had two.

The children were then told that the adult needed to leave “to do some work” but would return immediately if the child rang a bell. If that happened, the child was allowed to eat one treat. But if the child waited until the adult came back without being summoned, they could eat the larger portion. Watching through a one-way mirror, the experimenters saw whether the child licked or ate the treats while they waited, or controlled themselves until the researcher returned.

According to Dr. Carlson’s new paper, the same experiment was replicated in the 1980s, with 135 children attending the Toddler Center at Columbia University, and once again in the 2000s, with 540 children at preschools associated with the University of Washington and the University of Minnesota. As it turns out, preschoolers in this millennium were able to wait about seven minutes on average, one minute longer than preschoolers in the 1980s and two minutes longer than children in the 1960s. Over a span of 50 years, children of the same age were essentially getting better and better at controlling their impulses.

What accounts for this surprising development? “We’re trying to understand what changed…so that kids of similar backgrounds have increased their ability to delay gratification, despite expectations,” Dr. Carlson said. Improvements in nutrition and GDP might have had the effect of expanding children’s opportunities and cognitive horizons.

Parenting has also evolved. Contrary to popular belief, parents are spending more time interacting with their children than they used to. In the mid-1960s, parents spent an average of 36 minutes a day teaching and playing with their children. By 1998, that figure had more than doubled, to 78 minutes a day. Parents have also become more focused on cultivating a child’s ability to make decisions for themselves. Perhaps most important is that, compared with 1968, many times more 3- and 4-year-olds are in preschool, and their teachers are better educated than ever before.

Important questions remain about the study’s findings. The children at the university preschools were mostly from white, educated, middle-to-upper class families. Their self-control is getting better all the time. But it remains to be seen if children from other backgrounds are also learning the crucial lesson that good things come to those who wait.

New Skills Build New Brain Architecture, Research Shows

The brain’s structure can change when human beings—in this case, dyslexic children—learn a new ability

By SUSAN PINKER

June 14, 2018 12:27 p.m. ET


The latest tools of neuroscience allow us to witness, as never before, the electrical flares, chemical landslides and sluicing of water from zone to zone that alter the geography of the brain as it changes.

Evidence of the ways neural tissue is partially destroyed after a stroke or the onset of dementia has been around for decades. But proof that missing or miswired human brain connections can grow again—what neuroscientists call plasticity—has so far been thin on the ground. In 2014 a study showed that for mice, novel experiences prompt almost immediate changes in white matter—the brain’s connective tissue, or highway system.

Does this structural transformation linked to learning a new skill hold for humans too? The answer appears to be yes. A study just published in the journal Nature Communications found distinct shifts in brain architecture that mirrored the growing reading skills of children with dyslexia.

“The way the connections between different brain regions had changed was startling,” said Jason Yeatman, an assistant professor at the University of Washington who led the study.

Dr. Yeatman’s team, including postdoctoral student Elizabeth Huber, began by recruiting 24 dyslexic children, between the ages of 7 and 12, who had been struggling to learn to read. Few of them could decipher more than simple three-letter words, which largely excluded them from the classroom experience, said Dr. Yeatman.

The researchers thoroughly tested the children’s reading skills and assessed their brain architecture using diffusion magnetic resonance imaging. This noninvasive type of brain imaging tracks how quickly water flows among regions of the brain. It provides a measure of brain density, which increases with the formation of new brain cells, connections and membranes.

The children’s initial MRI was followed by three subsequent imaging sessions, evenly spaced over the course of their participation in an intensive, eight-week summer reading program. Designed by the Seattle-based tutoring company Lindamood-Bell, the program provided one-on-one instruction for four hours a day, five days a week. Unlike much recent research on children’s learning, the instruction was in person, not screen-based.

The results showed significant improvement in reading skills—and as the children’s reading fluency increased, large tracts of the white matter in their brains were visibly revamped. “It was not known before that the physical structure and efficiency of the brain could change in just a few weeks,” said Dr. Yeatman.

The instructional approach was, by design, highly individualized and interpersonal. It targets the building blocks of reading and is intended to give children with dyslexia the tools they need to read. But it is just one of several evidence-based, effective approaches. In the future, the researchers hope to compare it to other reading programs to see which features of a curriculum are critical to stimulating rapid changes in white matter.

Another surprising finding was how pervasive the renovation was. The researchers expected the observed improvement in the brain’s language areas. “But we also saw changes in the corticospinal tract,” which carries the brain’s signals for voluntary movement to the body, Dr. Yeatman added.

Perhaps the bond between teacher and child or the frequency and intensity of the teaching program made the difference. It’s hard to pinpoint the cause—or to know how long the neural and behavioral changes will last. But the changes were still impressive.

“We knew it was possible for the brain to change in mice, but we didn’t know the time frame, and we didn’t know how extensive the remodeling was in humans,” said Dr. Yeatman. Now we know that education can physically alter the brains of mice and men—or, more important, boys and girls.

Smiles Hide Many Messages—Some Unfriendly

Faces that mean domination, reward or just ‘I want to get along with you’

By SUSAN PINKER

April 5, 2018 10:20 a.m. ET



Smile while your heart is breaking, put on a happy face, say cheese. We’re so used to smiling on demand that to do otherwise can seem antisocial. Even going through the motions of a smile, scientists have found, can make us feel happy.

But smiles take many forms, and not all of them sound a single, upbeat note. According to recent research, smiles are more like Morse code, silently broadcasting distinct, nuanced messages. A smile might be signaling “Do that again” (reward), “I want to get along with you” (affiliation) or “I’m No. 1 around here” (dominance). Most of us receive these nonverbal signals loud and clear; they register in the chemical cocktail infusing our saliva and the thrum of our heartbeat, says a study published last month in the journal Scientific Reports.

“Different smiles have different impacts on people’s bodies,” said Jared D. Martin, a doctoral student who led the study in the lab of University of Wisconsin psychology professor Paula Niedenthal, working in collaboration with Eva Gilboa-Schechtman of Israel’s Bar-Ilan University. Psychologists, like poker players, have long known that our facial expressions can betray our emotions. But no one had demonstrated exactly how this works, Mr. Martin said.

To explore whether certain types of smiles provoke distinct physiological responses, Mr. Martin’s team set up an experiment based on public speaking. Research shows that most people would rather get zapped with an electric shock than give a five-minute speech about themselves. It’s a handy way to examine how our bodies register stress. So in this experiment, 90 healthy male undergraduate students delivered three spontaneous speeches about themselves, each to an audience of one. The listener smiled away on Skype while they were talking.

That listener was supposedly chosen randomly but in reality was a plant trained to smile in one of three ways during the other’s short spiels: to signal reward, affiliation or dominance. The dominance smile is mildly lopsided, with closed lips and one or both eyes squeezed shut, whereas reward smiles show upturned lips exposing a row of teeth and crinkled eyes. Affiliation smiles feature pursed lips, the whites of the eyes and raised eyebrows.

The research team measured the impact of these three types of smiles by continuously monitoring the speaker’s heart rate and periodically assessing his salivary levels of cortisol, a hormone often used as a marker of stress.

The researchers found that there was eight times as much cortisol in the saliva of students facing a dominance smile as in those facing affiliative smiles and 16 times as much as in those facing reward smiles.

There were also intriguing differences in how people reacted to the different smiles. “Your heart doesn’t beat like a metronome,” Mr. Martin said, and “people with higher variability in their resting heart rate had more extreme cortisol responses to dominance smiles.” These new results are in line with a 2017 German study showing that people with more-variable heart rates are much better at reading others’ mental states in their facial expressions—what psychologists call mind-reading.

The current study tells us that the people with higher heart-rate variability are not only more stressed out by dominance but also more comforted by affiliative smiles. “They’re more attuned,” said Mr. Martin.

The study was on the small side, the subjects restricted to men, and each student received just one type of smile, so the experimenters couldn’t compare how a particular student would respond to different expressions.

But the study helps us to understand the arcane signals exchanged by our intensely social species. The sense of how others view us is read not just by the head but by the hormones coursing through our bodies and the rhythm of our hearts.

Why Aren’t There More Women in Science and Technology?

A new study finds puzzling national differences: a bigger share of STEM degrees for women in Tunisia than in Sweden

By SUSAN PINKER

March 1, 2018 10:37 a.m. ET



A key tenet of modern feminism is that women will have achieved equity only when they fill at least 50% of the positions once filled by men. In some fields, women have already surpassed that target—now comprising, for example, 50.7% of new American medical students, up from just 9% in 1965, and 80% of veterinary students. But the needle has hardly moved in many STEM fields—such as the physical sciences, technology, engineering and math, in which barely 20% of the students are female.

A new study suggests some surprising reasons for this enduring gap. Published last month in the journal Psychological Science, the study looked at nearly a half million adolescents from 67 countries who participated in the Program for International Student Assessment, the world’s largest educational survey. Every three years, PISA gauges the skills of 15-year-olds in science, reading and math reasoning. In each testing year, the survey focuses in depth on one of those categories.

In 2015 the focus was on science literacy, which gave the psychologists Gijsbert Stoet of Leeds Beckett University and David Geary of the University of Missouri a rich data set for examining not only national differences but also the range of academic strengths and weaknesses within each student.

Some fascinating gender differences surfaced. Girls were at least as strong in science and math as boys in 60% of the PISA countries, and they were capable of college-level STEM studies nearly everywhere the researchers looked. But when they examined individual students’ strengths more closely, they found that the girls, though successful in STEM, had even higher scores in reading. The boys’ strengths were more likely to be in STEM areas. The skills of the boys, in other words, were more lopsided—a finding that confirms several previous studies.

If boys chose careers based on their own strengths—the approach usually suggested by parents and guidance counselors—they would be most likely to land in a STEM discipline or another field drawing on the same sorts of skills. Girls could choose more widely, based on their own strengths. And both, of course, would pursue their particular interests, as best they could.

Which leads to the study’s most thought-provoking finding. Based on how female students did in math and science in high school, the researchers predicted that at least 41% of girls would pursue a college STEM degree. This was indeed what they found, using Unesco education data—but only in countries with relatively weak legal protections for women, such as Algeria, Tunisia, Albania and the United Arab Emirates. So the nations with the least gender equality, as determined by the World Economic Forum’s Global Gender Gap Report, had the highest representation of women in STEM.

Conversely, nations with the strongest protections for women and the most dependable social safety nets—such as Sweden, Switzerland, Norway and Finland—had the fewest female STEM graduates, about 20% overall. The study puts the American STEM graduation rate at 24%.

I asked Wendy Williams, founder and director of the Cornell Institute for Women in Science, what she makes of these findings. She wrote that if girls expect they can “live a good life” while working in the arts, health or sciences, then girls choose to pursue what they are best at—which could be STEM, or it could be law or psychology. She added, “However, if the environment offers limited options, and the best ones are in STEM, girls focus there…Stoet’s and Geary’s findings deservedly complicate the simplistic narrative that sex differences in STEM careers are the result of societal gender biases.”

That conclusion should prompt a rethink. If women are most likely to choose STEM careers in societies that offer less equality and fewer personal freedoms, then that’s a steep price to pay just to say we’re 50/50.

For Long-Term Happiness, the Wedded Win the Race

Multiyear surveys involving thousands bolster the results

By SUSAN PINKER

Jan. 26, 2018 10:41 a.m. ET



Last month I attended a family wedding in a city better known for its frigid winter winds and industrial history than for its natural beauty. Let’s just say it wasn’t a destination wedding. Still, the young couple and their families were over the moon with joy, as were the hundreds who attended.

In an era when the number of unmarried couples living together continues to rise, up by nearly 30% in the U.S. in the last decade alone, why make a big fuss over marriage?

One could say that by tying the knot, the 20-somethings were fulfilling religious and family traditions, making a public declaration of their love or expressing a vote of confidence in the future. A new study also suggests a more practical motive: Getting married is one of the best ways to cement a couple’s long-term happiness.

Published in December in the Journal of Happiness Studies, the study analyzed people’s responses to two huge British surveys about life satisfaction. The study’s authors—John Helliwell, an economist at the University of British Columbia, and his former graduate student, Shawn Grover—hoped to answer two questions: Is marriage’s effect on happiness short-lived? And does marriage cause happiness, or is it that happy people are more likely to get and stay married in the first place?

To isolate these different variables, the researchers turned first to a longitudinal survey project by a British institute. From 1991 to 2009, demographers asked 30,000 adults of varying ages about their lifestyles and moods, posing the same questions year after year. This allowed the Helliwell research team to see how happy people were long before—as well as a decade after—they met their partners, and also to assess how their levels of happiness changed over the long run. Whether the subjects ultimately married, divorced, stayed single or lived together, the researchers thus had a snapshot of their pre-relationship sense of fulfillment.

This is crucial, because happiness is U-shaped across the adult life-span, meaning that it normally rises when we’re young adults, drops during middle age when life’s stresses and existential questions loom large, and then rises again as older adults regain their equilibrium. To bolster their findings, the researchers also used a second survey, 10 times as large as the first and based on U.K. census data. This Annual Population Survey by the national statistics office queried more than 300,000 people from 2011 to 2013 about their anxieties, social lives and happiness.

The researchers’ answer to their first question—whether marriage had a merely short-lived effect on happiness—was a definitive no. When they controlled for subjects’ pre-marriage levels of happiness, married people were 10% more satisfied than people who were single—and were more likely to stay that way. While cohabiting couples were happier than single folks, they were only three-quarters as happy as marrieds. “Marriage seems to be most important in middle age, when people of every marital status experience a dip in well-being,” the economists wrote.

Not all marriages are created equal, of course. At least a dozen studies show that marriages characterized by stonewalling, contempt or conflict are bad for us, undermining our sleep, immunity and cardiovascular health. Conversely, one of the most telling findings of the Helliwell study is that a close marital bond spurs long-term happiness. People who named their spouse as their best friend were twice as happy as those who didn’t. “The more likely you are to regard someone as your friend, the more likely you’ll think the best of them, and not take them for granted. If that’s true, it’s a very successful marriage,” Prof. Helliwell said.

It’s good to know that someone has your back. That level of commitment, formalized by a ritual and a legal document, may be one reason why the advantages of marriage trump those of just living together. Along with chocolate, it’s all food for thought as we approach Valentine’s Day.


Spanking for Misbehavior? It Causes More

A big new study finds a clear negative effect

By SUSAN PINKER

Dec. 14, 2017 11:08 a.m. ET


Most children under 7 can neither master their emotions nor reason like adults, so power struggles with them are inevitable. Who gets to control the TV remote or the smartphone? Does junior resist taking a bath, wander around after bedtime, gleefully use curse words or pound on his siblings every chance he gets?

The answer to at least some of these questions must be yes, if the child is a growing human being and not a robot. Experimenting with autonomy and observing how his parents react is part of the job of a child. Setting age-appropriate boundaries is the role of the adult.

The dynamics become even more complex when a child is defiant or impulsive by nature, when a parent is under inordinate pressure, or all of the above. That is perhaps one reason why two-thirds of American parents, when asked by the federally funded General Social Survey in 2016, agreed with the statement, “Sometimes a child just needs a good, hard spanking.” (The number has dropped by about 15 points in the past three decades.)

A host of studies link spanking to later behavior problems. A 2016 meta-analysis of five decades of research on the topic suggests that spanking a young child is not only an ineffective form of discipline but a catalyst for more serious acting out and mental health problems in the future. Indeed, corporal punishment of children is now illegal in 53 countries, and banning any kind of hitting of children—with a hand or an object—is a growing international movement.

Whether striking a preschooler’s bottom with an open hand discourages or exacerbates misbehavior remains a controversial topic in the U.S. Adding grist to the debate: The studies that have been conducted are observational—that is, they show that spanking and future behavior problems are tightly linked but not that the former definitively causes the latter. Children can’t be randomly assigned, for experimental purposes, to spanked and not spanked groups, so it’s hard to discern whether later behavior problems can be attributed to that one factor.

A new study led by Elizabeth Gershoff, a professor of human development at the University of Texas at Austin, aims to settle this dispute. Published last month in the journal Psychological Science, the study statistically controlled for children’s initial behavior problems and the characteristics of their parents. More than 12,000 American families were surveyed, from their children’s kindergarten year through eighth grade, as part of the nationally representative Early Childhood Longitudinal Study.

The researchers paired subjects who had and had not been spanked at 5 years old but were equivalent on 38 other factors. Those included the child’s initial level of behavior problems as rated by the teacher, and the parents’ marital status, mental health, stress levels and parenting style as defined by their answers to interview questions.

The researchers found that a child who was spanked at age 5 was far more likely to have behavior problems at age 6, and more serious ones again at age 8, according to teachers’ ratings. The relationship between corporal punishment and later acting out was even stronger if parents said that they had spanked the 5-year-olds the week before the survey, an indication that spanking may have been relatively frequent.

“This is the closest we can get, outside of an experiment, to say that spanking causes negative changes in children’s behavior. I can’t think of another way to explain our results,” Dr. Gershoff told me.

The American Academy of Pediatrics advises parents to avoid spanking, and the American Psychological Association cautions against the practice. American parents seem to be left with a choice: to use a form of physical discipline that gambles with the future of their children or to find other ways to help them learn self-control.

Can Brain Scans Curb the Rising Rate of Suicide?


By

SUSAN PINKER

Nov. 10, 2017 10:17 a.m. ET


Suicide is the 10th leading cause of death in the U.S. After a period of decline, it rose 24% in the 15 years ending in 2014, and a gender gap has persisted—four times as many men as women kill themselves.

For clinicians, identifying who is at risk for suicide has long posed a challenge. How do you tell the difference between a person who is distressed but not in danger and someone quietly planning to take his own life? If we could answer that question, we could prevent many untimely deaths. Research shows that taking one’s life is rarely a spur-of-the-moment idea and that most suicidal people have a plan in mind before they act on it.

Now, it seems, computers may be able to help discern who is in danger. A study published last month in the journal Nature Human Behaviour shows that machines can learn to identify suicidal people based on their brain scans.

“Human brains have a common way to represent objects and emotions,” said Marcel Just, a professor of cognitive neuroscience at Carnegie Mellon University and the paper’s first author. That process is so universal, Dr. Just said, that his team’s scanning studies have found the same brain-activation patterns for a word like apple in English, Portuguese and Mandarin speakers.

Four years ago, Dr. Just’s team began capturing brain-activation patterns for emotions by putting actors into brain scanners. Researchers asked them to imagine scenarios that would make them feel anger, envy, shame and other emotions, thus capturing neural signatures for these mental states. The researchers then had a basic visual dictionary of how the brain represents emotions, a resource that would come in handy for their study on suicide risk.

In that study, Dr. Just’s team exposed 34 adults under age 30 to over two dozen words repeated randomly while they were lying in the scanner. Of the subjects, half had a history of suicidal thoughts or suicide attempts. The other half, with no history of mental illness, were the control group.

While in the scanner, the subjects saw slides of emotionally evocative words, including “carefree,” “cruelty,” “praise,” “gloom” and “lifeless.” The participants’ neural responses to these words were carefully mapped using voxel analysis, which captures varying patterns of brain activation according to a 3-D grid of about 20,000 voxels. These are created when neuroscientists electronically dice up the brain into 3-D cubes so they can measure and compare what’s happening in various locations.

Only 120 voxels or so reflect how the brain processes emotional concepts, said Dr. Just, adding: “You show me the activation pattern in those 120 voxels and I’ll show you what word you’re thinking about.”

By analyzing small pattern differences in the neural signatures, the team created a computer algorithm—a set of rules to follow in digital calculations—that could learn to differentiate people with suicidal intentions from members of the control group.

Based on the computer’s prototype of each group’s neural responses, the algorithm could predict whether a subject had previously thought about suicide—or had no such history—with 90% accuracy. The machine could also separate those who had contemplated suicide from those who had really tried it, correctly distinguishing between the two 94% of the time.

“I’m a cognitive psychologist. I used to think that the human mind was for arithmetic, reading and planning where to park your car,” Dr. Just said, remarking on the early preoccupations of cognitive science with pure problem solving. “But when I started to do brain imaging, I saw the networks that become activated” when a person thinks of other people, their intentions or their goals.

Predicting the chances of suicide based on biological markers like brain scans is a monumental achievement. Let’s hope a reliable and affordable version will be available to medicine sometime soon.

New Tools Detect Autism Disorders Earlier in Lives

Aside from genetics, any parental contribution to the disorders is probably nil

By

SUSAN PINKER

Oct. 5, 2017 10:56 a.m. ET


When a child is diagnosed with an autistic-spectrum disorder, a parent’s emotions can swing from disbelief to worry to despair, and many ask themselves the understandable question: Why did this happen?

Genes are the answer, though which combinations are responsible remains a mystery.

The mounting evidence for a heritable cause hasn’t stopped some people from trying to pin the disorder on parents, fueling parental guilt and damaging families that are already struggling with a child’s diagnosis. Now a new study shows that the roots of autistic disorders are detectable so early in life that, other than genes, any parental contribution to the disorder is probably nil.

There is a long and bitter history of baseless finger-pointing around autism. In one of 20th-century psychology’s most shameful mistakes, supposed experts blamed the childhood disorder on “refrigerator mothers,” who were said to cause autism by being emotionally distant. Ultimately, studies showed that a crucial clue to the disorder’s origin was the babies’ inability to respond to their mother’s nurturing—not the other way around. Fifty years later, activists tied autism to childhood vaccines. This false idea led to fewer immunized children and a resurgence of dangerous infectious childhood diseases.

In this new study, John Lewis, the lead author and a neuroscientist at the Montreal Neurological Institute, analyzed data from the MRIs of 260 babies to chart the trajectory of their developing brains. (It was published this summer in Biological Psychiatry.) His previous work had revealed that toddlers with a strong family history of autistic spectrum disorders show sluggish neural pathways in areas critical to language and social development. Such pathways, composed of nerve fibers, transmit information from the body’s five senses and allow regions of the brain to communicate with each other. Dr. Lewis wanted to see how early these neural inefficiencies appeared.

Using MRI-based data, Dr. Lewis and his team charted—at six months of age and again at 12 months—the length and strength of fibers connecting different regions of the babies’ brains. Shorter and stronger connections are more efficient.

As children grow, their brains typically streamline such connections by “pruning”—a form of neural housekeeping whereby unnecessary or unused connections between distant brain regions are weeded out.

His research team tracked the neural pathways of two groups of infants. One group had a sibling on the autistic spectrum—which meant the baby was at high risk of developing the disorder. The control group had no family history of autistic spectrum disorders.

A comparison of the two groups revealed that, when analyzed as a group, the brains of 6-month-olds with an autistic sibling showed inefficiencies in the right auditory cortex, an area that processes speech sounds. By 12 months of age, certain neural areas critical for language, touch and self-awareness were also less efficient than those of the control group. “If your brain starts off not processing the sensory inputs efficiently, then it can’t do the proper pruning. It’s just passing on noise,” said Dr. Lewis.

The study was launched seven years ago, and by the time it was complete, the researchers knew which of the high-risk infants ended up with an autism spectrum diagnosis. (Almost 17% of the high-risk group received an autism diagnosis, compared with 1.3% of the control group.) Yet they found that the biological markers of the disorder were already evident at 6 months of age.

A computer analysis of the high-risk group’s MRIs could retroactively identify which babies would ultimately show behavioral signs of an autism spectrum disorder years later—and which babies would be unaffected. What’s more, the degree of neural inefficiency predicted how severe that child’s symptoms would be.

This research suggests that very early diagnosis—and early intervention—is on our doorstep. It also means that parents can’t be blamed.
