Alison Gopnik The Wall Street Journal Columns

 

Mind & Matter, on alternating Saturdays


 

Humans Naturally Follow Crowd Behavior (12 Sept. 2014)

Even Children Get More Outraged at 'Them' Than at 'Us' (27 Aug 2014)

In Life, Who Wins, the Fox or the Hedgehog? (15 Aug 2014)

Do We Know What We See? (31 July 2014)

Why Is It So Hard for Us to Do Nothing? (18 July 2014)

A Toddler's Souffles Aren't Just Child's Play (3 July 2014)

For Poor Kids, New Proof That Early Help Is Key (13 June 2014)

Rice, Wheat and the Values They Sow (30 May 2014)

What Made Us Human? Perhaps Adorable Babies (16 May 2014)

Grandmothers: The Behind-the-Scenes Key to Human Culture? (2 May 2014)

See Jane Evolve: Picture Books Explain Darwin (18 Apr 2014)

Scientists Study Why Stories Exist (4 Apr 2014)

The Kid Who Wouldn't Let Go of 'The Device' (21 Mar 2014)

Why You're Not as Clever as a 4-Year-Old (7 Mar 2014)

Are Schools Asking to Drug Kids for Better Test Scores? (21 Feb 2014)

The Psychedelic Road to Other Conscious States (7 Feb 2014)

Time to Retire the Simplicity of Nature vs. Nurture (24 Jan 2014)

The Surprising Probability Gurus Wearing Diapers (10 Jan 2014)

What Children Really Think About Magic (28 Dec 2013)

Trial and Error in Toddlers and Scientists (14 Dec 2013)

Gratitude for the Cosmic Miracle of a Newborn Child (29 Nov 2013)

The Brain's Crowdsourcing Software (16 Nov 2013)

World Series Recap: May Baseball's Irrational Heart Keep On Beating (2 Nov 2013)

Drugged-out Mice Offer Insight into the Growing Brain (4 Oct 2013)

Poverty Can Trump a Winning Hand of Genes (20 Sep 2013)

Is It Possible to Reason about Having a Child? (7 Sep 2013)

Even Young Children Adopt Arbitrary Rituals (24 Aug 2013)

The Gorilla Lurking in Our Consciousness (9 Aug 2013)

Does Evolution Want Us to Be Unhappy? (27 Jul 2013)

How to Get Children to Eat Veggies (13 Jul 2013)

What Makes Some Children More Resilient? (29 Jun 2013)

Wordsworth, The Child Psychologist (15 Jun 2013)

Zazes, Flurps and the Moral World of Kids (31 May 2013)

How Early Do We Learn Racial 'Us and Them'? (18 May 2013)

How the Brain Really Works (4 May 2013)

Culture Begets Marriage - Gay or Straight (21 Apr 2013)

For Innovation, Dodge the Prefrontal Police (5 Apr 2013)

Sleeping Like a Baby, Learning at Warp Speed (22 Mar 2013)

Why Are Our Kids Useless? Because We're Smart (8 Mar 2013)

 

 

HUMANS NATURALLY FOLLOW CROWD BEHAVIOR

It happened last Sunday at football stadiums around the country. Suddenly, 50,000 individuals became a single unit, almost a single mind, focused intently on what was happening on the field—that particular touchdown grab or dive into the end zone. Somehow, virtually simultaneously, each of those 50,000 people tuned into what the other 49,999 were looking at.

Becoming part of a crowd can be exhilarating or terrifying: The same mechanisms that make people fans can just as easily make them fanatics. And throughout human history we have constructed institutions that provide that dangerous, enthralling thrill. The Coliseum that hosts my local Oakland Raiders is, after all, just a modern knockoff of the massive theater that housed Roman crowds cheering their favorite gladiators 2,000 years ago.

(For Oakland fans, like my family, it's particularly clear that participating in the Raider Nation is responsible for much of the games' appeal—it certainly isn't the generally pathetic football.)

In fact, recent studies suggest that our sensitivity to crowds is built into our perceptual system and operates in a remarkably swift and automatic way. In a 2012 paper in the Proceedings of the National Academy of Sciences, A.C. Gallup, then at Princeton University, and colleagues looked at the crowds that gather in shopping centers and train stations.

In one study, a few ringers simply joined the crowd and stared up at a spot in the sky for 60 seconds. Then the researchers recorded and analyzed the movements of the people around them. The scientists found that within seconds hundreds of people coordinated their attention in a highly systematic way. People consistently stopped to look toward exactly the same spot as the ringers.

The number of ringers ranged from one to 15. People turn out to be very sensitive to how many other people are looking at something, as well as to where they look. Individuals were much more likely to follow the gaze of several people than just a few, so there was a cascade of looking as more people joined in.

In a new study in Psychological Science, Timothy Sweeny at the University of Denver and David Whitney at the University of California, Berkeley, looked at the mechanisms that let us follow a crowd in this way. They showed people a set of four faces, each looking in a slightly different direction. Then the researchers asked people to indicate where the whole group was looking (the observers had to swivel the eyes on a face on a computer screen to match the direction of the group).

Because we combine head and eye direction in calculating a gaze, the participants couldn't tell where each face was looking by tracking either the eyes or the head alone; they had to combine the two. The subjects saw the faces for less than a quarter of a second. That's much too short a time to look at each face individually, one by one.

It sounds impossibly hard. If you try the experiment, you can barely be sure of what you saw at all. But in fact, people were amazingly accurate. Somehow, in that split-second, they put all the faces together and worked out the average direction where the whole group was looking.
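
As a toy illustration of what "working out the average direction" involves (my own sketch, not the researchers' model or stimuli; the angle convention and the numbers below are made up), directions are best averaged by summing unit vectors rather than by averaging the raw numbers:

```python
import math

def mean_gaze_direction(angles_deg):
    """Average a set of gaze directions given in degrees.

    Directions are averaged by summing unit vectors (the circular mean):
    a plain arithmetic mean would say 350 and 10 degrees average to 180,
    when the sensible answer is 0.
    """
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Four faces looking in slightly different directions
# (0 = to the viewer's right, 90 = straight up; hypothetical numbers).
faces = [80, 95, 100, 85]
print(round(mean_gaze_direction(faces)))  # 90: on average, the group looks straight up
```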

In other studies, Dr. Whitney has shown that people can swiftly calculate how happy or sad a crowd is in much the same way.

Other social animals have dedicated brain mechanisms for coordinating their action—that's what's behind the graceful rhythms of a flock of birds or a school of fish. It may be hard to think of the eccentric, gothic pirates of Oakland's Raider Nation in the same way. A fan I know says that going to a game is like being plunged into an unusually friendly and cooperative postapocalyptic dystopia—a marijuana-mellowed Mad Max.

But our brains seem built to forge a flock out of even such unlikely materials.

EVEN CHILDREN GET MORE OUTRAGED AT 'THEM' THAN AT 'US'

From Ferguson to Gaza, this has been a summer of outrage. But just how outraged people are often seems to depend on which group they belong to. Polls show that many more African-Americans than white Americans think that Michael Brown's shooting by a Ferguson police officer was unjust. How indignant you are about Hamas rockets or Israeli attacks that kill civilians often depends on whether you identify with the Israelis or the Palestinians. This is true even when people agree about the actual facts.

You might think that such views are a matter of history and context, and that is surely partly true. But a new study in the Proceedings of the National Academy of Sciences suggests that they may reflect a deeper fact about human nature. Even young children are more indignant about injustice when it comes from "them" and is directed at "us." And that is true even when "them" and "us" are defined by nothing more than the color of your hat.

Jillian Jordan, Kathleen McAuliffe and Felix Warneken at Harvard University looked at what economists and evolutionary biologists dryly call "costly third-party norm-violation punishment" and the rest of us call "righteous outrage." We take it for granted that someone who sees another person act unfairly will try to punish the bad guy, even at some cost to themselves.

From a purely economic point of view, this is puzzling—after all, the outraged person is doing fine themselves. But enforcing fairness helps ensure social cooperation, and we humans are the most cooperative of primates. So does outrage develop naturally, or does it have to be taught?

The experimenters gave some 6-year-old children a pile of Skittles candy. Then they told them that earlier on, another pair of children had played a Skittle-sharing game. For example, Johnny got six Skittles, and he could choose how many to give to Henry and how many to keep. Johnny had either divided the candies fairly or kept them all for himself.

Now the children could choose between two options. If they pushed a lever to the green side, Johnny and Henry would keep their Skittles, and so would the child. If they pushed it to the red side, all six Skittles would be thrown away, and the children would lose a Skittle themselves as well. Johnny would be punished, but they would lose too.

When Johnny was fair, the children pushed the lever to green. But when Johnny was selfish, the children acted as if they were outraged. They were much more likely to push the lever to red—even though that meant they would lose themselves.

How would being part of a group influence these judgments? The experimenters let the children choose a team. The blue team wore blue hats, and the yellow team wore yellow. They also told the children whether Johnny and Henry each belonged to their team or the other one.

The teams were totally arbitrary: There was no poisonous past, no history of conflict. Nevertheless, the children proved more likely to punish Johnny's unfairness if he came from the other team. They were also more likely to punish him if Henry, the victim, came from their own team.

As soon as they showed that they were outraged at all, the children were more outraged by "them" than "us." This is a grim result, but it fits with other research. Children have impulses toward compassion and justice—the twin pillars of morality—much earlier than we would have thought. But from very early on, they tend to reserve compassion and justice for their own group.

There was a ray of hope, though. Eight-year-olds turned out to be biased toward their own team but less biased than the younger children. They seemed already to have widened their circle of moral concern beyond people who wear the same hats. We can only hope that, eventually, the grown-up circle will expand to include us all.

IN LIFE, WHO WINS, THE FOX OR THE HEDGEHOG?

A philosopher once used an animal metaphor, the clever fox, to point out the most important feature of certain especially distinctive thinkers.

It was not, however, Isaiah Berlin. Berlin did famously divide thinkers into two categories, hedgehogs and foxes. He based the distinction on a saying attributed to the ancient Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” Hedgehogs have a single grand idea that they apply to everything; foxes come up with a new idea for every situation. Berlin said that Plato and Dostoevsky were hedgehogs, while Aristotle and Shakespeare were foxes.

Berlin later regretted inventing this oversimplified dichotomy, but it has proved irresistible to writers ever since. After all, as Robert Benchley said, there are just two kinds of people in the world: those who think there are just two kinds of people in the world and those who don’t.

Philosophical and political hedgehogs got most of the glamour and attention in the twentieth century. But lately there has been a turn toward foxes. The psychologist Philip Tetlock studied expert political predictions and found that foxy, flexible, pluralistic experts were much more accurate than the experts with one big hedgehog idea. The statistics whiz Nate Silver chose a fox as his logo in tribute to this finding.

But here is a question that Berlin, that archetypal Oxford don, never considered. What about the babies? What makes young hedgehogs and foxes turn out the way they do?

Biologists confirm that Archilochus got it right: foxes are far more wily and flexible learners than hedgehogs. But hedgehogs also have a much shorter childhood than foxes. Hedgehogs develop their spines, that one big thing, almost as soon as they are born, and they are independent in only six weeks. Fox cubs still return to the den for six months. As a result, hedgehogs need much less parental care; hedgehog fathers disappear after they mate. Fox couples, in contrast, “pair-bond,” and the fathers help bring food to the babies.

Baby foxes also play much more than hedgehogs, though in a slightly creepy way. Fox parents start out by feeding the babies their own regurgitated food. But then they actually bring the babies live prey, like mice, when they are still in the den, and the babies play at hunting them. That play gives them a chance to practice and develop the flexible hunting skills and wily intelligence that serve them so well later on.

In fact, the much earlier, anonymous philosopher seems to have understood the behavioral ecology of foxes, and the link between intelligence, play and parental investment, rather better than Berlin did. The splendid song “The Fox,” beloved by every four-year-old, was first recorded on the blank flyleaf of a 15th-century copy of “Sayings of the Philosophers.” The Chaucerian philosopher not only described the clever, sociable carnivore who outwits even Homo sapiens. He (or perhaps she?) also noted that the fox is the kind of creature who brings the prey back to the little ones in his cozy den. That grey goose was a source of cognitive practice and skill formation as well as tasty bones-o.

Berlin doesn’t have much to say about whether Plato the hedgehog and Aristotle the fox had devoted or deadbeat Dads, or if they had much playtime as philosopher pups. Though, of course, the young cubs’ game of hunting down terrified live prey while their elders look on approvingly will seem familiar to those who have attended philosophy graduate seminars.

DO WE KNOW WHAT WE SEE?

In a shifty world, surely the one thing we can rely on is the evidence of our own eyes. I may doubt everything else, but I have no doubts about what I see right now. Even if I'm stuck in The Matrix, even if the things I see aren't real—I still know that I see them.

Or do I?

A new paper in the journal Trends in Cognitive Sciences by the New York University philosopher Ned Block demonstrates just how hard it is to tell if we really know what we see. Right now it looks to me as if I see the entire garden in front of me, each of the potted succulents, all of the mossy bricks, every one of the fuchsia blossoms. But I can only pay attention to and remember a few things at a time. If I just saw the garden for an instant, I'd only remember the few plants I was paying attention to just then.

How about all the things I'm not paying attention to? Do I actually see them, too? It may just feel as if I see the whole garden because I quickly shift my attention from the blossoms to the bricks and back.

Every time I attend to a particular plant, I see it clearly. That might make me think that I was seeing it clearly all along, like somebody who thinks the refrigerator light is always on, because it always turns on when you open the door to look. This "refrigerator light" illusion might make me think I see more than I actually do.

On the other hand, maybe I do see everything in the garden—it's just that I can't remember and report everything I see, only the things I pay attention to. But how can I tell if I saw something if I can't remember it?

Prof. Block focuses on a classic experiment originally done in 1960 by George Sperling, a cognitive psychologist at the University of California, Irvine. (You can try the experiment yourself online.) Say you see a three-by-three grid of nine letters flash up for a split second. What letters were they? You will only be able to report a few of them.

Now suppose the experimenter tells you that if you hear a high-pitched noise you should focus on the first row, and if you hear a low-pitched noise you should focus on the last row. This time, not surprisingly, you will accurately report all three letters in the cued row, though you can't report the letters in the other rows.

But here's the trick. Now you only hear the noise after the grid has disappeared. You will still be very good at remembering the letters in the cued row. But think about it—you didn't know beforehand which row you should focus on. So you must have actually seen all the letters in all the rows, even though you could only access and report a few of them at a time. It seems as if we do see more than we can say.
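
One way to see the logic of that inference is with a toy simulation (my own illustrative sketch, not Sperling's procedure or Prof. Block's analysis). Assume the whole grid briefly enters an "iconic" store but only about four items can be read out into reportable memory, and that a cue arriving after the display can still steer the readout before the icon fades:

```python
import random
import string

GRID_ROWS, GRID_COLS = 3, 3   # a three-by-three grid of nine letters
REPORT_CAPACITY = 4           # assumed limit on how many items can be reported

def trial(partial_report):
    grid = [[random.choice(string.ascii_uppercase) for _ in range(GRID_COLS)]
            for _ in range(GRID_ROWS)]
    # Toy assumption: every letter briefly enters an iconic store.
    iconic = [(r, c, grid[r][c]) for r in range(GRID_ROWS) for c in range(GRID_COLS)]
    if partial_report:
        # The cue arrives after the display but before the icon fades,
        # so readout can be restricted to the cued row.
        cued_row = random.randrange(GRID_ROWS)
        candidates = [item for item in iconic if item[0] == cued_row]
        targets = candidates
    else:
        candidates = iconic
        targets = iconic
    reported = random.sample(candidates, min(REPORT_CAPACITY, len(candidates)))
    return sum(1 for item in reported if item in targets) / len(targets)

for mode, label in [(False, "whole report  "), (True, "partial report")]:
    score = sum(trial(mode) for _ in range(10000)) / 10000
    print(label, round(score, 2))
# Whole report recovers only about 4 of the 9 letters; partial report recovers
# essentially all of the cued row, which is what suggests the observer briefly
# registered all nine letters even though only a few could ever be reported.
```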

Or do we? Here's another possibility. We know that people can extract some information from images they can't actually see—in subliminal perception, for example. Perhaps you processed the letters unconsciously, but you didn't actually see them until you heard the cue. Or perhaps you just saw blurred fragments of the letters.

Prof. Block describes many complex and subtle further experiments designed to distinguish these options, and he concludes that we do see more than we remember.

But however the debate gets resolved, the real moral is the same. We don't actually know what we see at all! You can do the Sperling experiment hundreds of times and still not be sure whether you saw the letters. Philosophers sometimes argue that our conscious experience can't be doubted because it feels so immediate and certain. But scientists tell us that feeling is an illusion, too.

WHY IS IT SO HARD FOR US TO DO NOTHING?

It is summer time, and the living is easy. You can, at last, indulge in what is surely the most enjoyable of human activities—doing absolutely nothing. But is doing nothing really enjoyable? A new study in the journal Science shows that many people would rather get an electric shock than just sit and think.

Neuroscientists have inadvertently discovered a lot about doing nothing. In brain-imaging studies, people lie in a confined metal tube feeling bored as they wait for the actual experiment to start. Fortuitously, neuroscientists discovered that this tedium was associated with a distinctive pattern of brain activity. It turns out that when we do nothing, many parts of the brain that underpin complex kinds of thinking light up.

Though we take this kind of daydreaming for granted, it is actually a particularly powerful kind of thinking. Much more than any other animal, we humans have evolved the ability to live in our own thoughts, detached from the demands of our immediate actions and experiences. When people lie in a tube with nothing else to do, they reminisce, reliving events in the past ("Damn it, that guy was rude to me last week"), or they plan what they will do in the future ("I'll snub him next time"). And they fantasize: "Just imagine how crushed he would have been if I'd made that witty riposte."

Descartes had his most important insights sitting alone in a closet-sized stove, the only warm spot during a wintry Dutch military campaign. When someone asked Newton how he discovered the law of gravity, he replied, "By thinking on it continually." Doing nothing but thinking can be profound.

But is it fun? Psychologist Tim Wilson of the University of Virginia and his colleagues asked college students to sit for 15 minutes in a plain room doing nothing but thinking. The researchers also asked them to record how well they concentrated and how much they enjoyed doing it. Most of the students reported that they couldn't concentrate; half of them actively disliked the experience.

Maybe that was because of what they thought about. "Rumination"—brooding on unpleasant experiences, like the guy who snubbed you—can lead to depression, even clinical depression. But the researchers found no difference based on whether people recorded positive or negative thoughts.

Maybe it was something about the sterile lab room. But the researchers also got students just to sit and think in their own homes, and they disliked it even more. In fact, 32% of the students reported that they cheated, with a sneak peek at a cellphone or just one quick text.

But that's because they were young whippersnappers with Twitter-rotted brains, right? Wrong. The researchers also did the experiment with a middle-aged church group, and the results were the same. Age, gender, personality, social-media use—nothing made much difference.

But did people really hate thinking that much? The researchers gave students a mild electric shock and asked if they would pay to avoid another. The students sensibly said that they would. The researchers then put them back in the room with nothing to do but also gave them the shock button.

Amazingly, many of them voluntarily shocked themselves rather than doing nothing. Not so amazingly (at least to this mother of boys who played hockey), there was a big sex difference. Sixty-seven percent of the men preferred a shock to doing nothing, but only 25% of the women did.

Newton and neuroscience suggest that just thinking can be very valuable. Why is it so hard? It is easy to blame the modern world, but 1,000 years ago, Buddhist monks had the same problem. Meditation has proven benefits, but it takes discipline, practice and effort. Our animal impulse to be up and doing, or at least up and checking email, is hard to resist, even in a long, hazy cricket-song dream of a summer day.

A TODDLER'S SOUFFLES AREN'T JUST CHILD'S PLAY

Augie, my 2-year-old grandson, is working on his soufflés. This began by accident. Grandmom was trying to simultaneously look after a toddler and make dessert. But his delight in soufflé-making was so palpable that it has become a regular event.

The bar, and the soufflé, rise higher on each visit—each time he does a bit more and I do a bit less. He graduated from pushing the Cuisinart button and weighing the chocolate to actually cracking and separating the eggs. Last week, he gravely demonstrated to his clueless grandfather how you fold in egg whites. (There is some cultural inspiration from Augie's favorite Pixar hero, Remy the rodent chef in "Ratatouille," though this leads to rather disturbing discussions about rats in the kitchen.)

It's startling to see just how enthusiastically and easily a 2-year-old can learn such a complex skill. And it's striking how different this kind of learning is from the kind children usually do in school.

New studies in the journal Human Development by Barbara Rogoff at the University of California, Santa Cruz and colleagues suggest that this kind of learning may actually be more fundamental than academic learning, and it may also influence how helpful children are later on.

Dr. Rogoff looked at children in indigenous Mayan communities in Latin America. She found that even toddlers do something she calls "learning by observing and pitching in." Like Augie with the soufflés, these children master useful, difficult skills, from making tortillas to using a machete, by watching the grown-ups around them intently and imitating the simpler parts of the process. Grown-ups gradually encourage them to do more—the pitching-in part. The product of this collaborative learning is a genuine contribution to the family and community: a delicious meal instead of a standardized test score.

This kind of learning has some long-term consequences, Dr. Rogoff suggests. She and her colleagues also looked at children growing up in Mexico City who either came from an indigenous heritage, where this kind of observational learning is ubiquitous, or a more Europeanized tradition. When they were 8, the children from the indigenous traditions were much more helpful than the Europeanized children: They did more work around the house, more spontaneously, including caring for younger siblings. And children from an indigenous heritage had a fundamentally different attitude toward helping. They didn't need to be asked to help—instead they were proud of their ability to contribute.

The Europeanized children and parents were more likely to negotiate over helping. Parents tried all kinds of different contracts and bargains, and different regimes of rewards and punishments. Mostly, as readers will recognize with a sigh, these had little effect. For these children, household chores were something that a grown-up made you do, not something you spontaneously contributed to the family.

Dr. Rogoff argues that there is a connection between such early learning by pitching in and the motivation and ability of school-age children to help. In the indigenous-tradition families, the toddler's enthusiastic imitation eventually morphed into real help. In the more Europeanized families, the toddler's abilities were discounted rather than encouraged.

The same kind of discounting happens in my middle-class American world. After all, when I make the soufflé without Augie's help there's a much speedier result and a lot less chocolate fresco on the walls. And it's true enough that in our culture, in the long run, learning to make a good soufflé or to help around the house, or to take care of a baby, may be less important to your success as an adult than more academic abilities.

But by observing and pitching in, Augie may be learning something even more fundamental than how to turn eggs and chocolate into soufflé. He may be learning how to turn into a responsible grown-up himself.

FOR POOR KIDS, NEW PROOF THAT EARLY HELP IS KEY

Twenty years ago, I would have said that social policies meant to help very young children are intrinsically valuable. If improving the lives of helpless, innocent babies isn't a moral good all by itself, what is? But I also would have said, as a scientist, that it would be really hard, perhaps impossible, to demonstrate the long-term economic benefits of those policies. Human development is a complicated, interactive and unpredictable business.

Individual children are all different. Early childhood experience is especially important, but it's not, by any means, the end of the story. Positive early experiences don't inoculate you against later deprivation; negative ones don't doom you to inevitable ruin. And determining the long-term effects of any social policy is notoriously difficult. Controlled experiments are hard, different programs may have different effects, and unintended consequences abound.

I still think I was right on the first point: The moral case for early childhood programs shouldn't depend on what happens later. But I was totally, resoundingly, dramatically wrong about whether one could demonstrate long-term effects. In fact, over the last 20 years, an increasing number of studies—many from hardheaded economists at business schools—have shown that programs that make life better for young children also have long-term economic benefits.

The most recent such study was published in the May 30 issue of Science. Paul Gertler, of the National Bureau of Economic Research and the Haas School of Business of the University of California, Berkeley, and colleagues looked at babies growing up in Jamaica. (Most earlier studies had just looked at children in the U.S.) These children were so poor that they were "nutritionally stunted"—that is, they had physical symptoms of malnourishment.

Health aides visited one group of babies every week for two years, starting at age 1. The aides themselves played with the babies and helped encourage the parents to play with them in stimulating ways. Another randomly determined group just got weekly nutritional supplements, a third received psychological and nutritional help, and a fourth group was left alone.

Twenty years later, when the children had grown up, the researchers returned and looked at their incomes. The young adults who had gotten the early psychological help had significantly higher incomes than those who hadn't. In fact, they earned 25% more than the control group, even including the children who had just gotten better food.

This study and others like it have had some unexpected findings. The children who were worst off to begin with reaped the greatest benefits. And the early interventions didn't just influence grades. The children who had the early help ended up spending more time in school, and doing better there, than the children who didn't. But the research has found equally important effects on earnings, physical health and even crime. And interventions that focus on improving early social interactions may be as important and effective as interventions focused on academic skills.

The program influenced the parents too, though in subtle ways. The researchers didn't find any obvious differences in the ways that parents treated their children when they were 7 and 11 years old. It might have looked as if the effects of the intervention had faded away.

Nevertheless, the parents of children who had had the psychological help were significantly more likely to emigrate to another country later on. Those health visits stopped when the children were only 4. But both the parents and the children seemed to gain a new sense of opportunity that could change their whole lives.

I'm really glad I was so wrong. In the U.S., 20% of children still grow up in poverty. The self-evident moral arguments for helping those children have fueled the movement toward early childhood programs in red Oklahoma and Georgia as well as blue New York and Massachusetts. But the scientific and economic arguments have become just as compelling.

RICE, WHEAT AND THE VALUES THEY SOW

Could what we eat shape how we think? A new paper in the journal Science by Thomas Talhelm at the University of Virginia and colleagues suggests that agriculture may shape psychology. A bread culture may think differently than a rice-bowl society.

Psychologists have long known that different cultures tend to think differently. In China and Japan, people think more communally, in terms of relationships. By contrast, people are more individualistic in what psychologist Joseph Henrich, in commenting on the new paper, calls "WEIRD cultures."

WEIRD stands for Western, educated, industrialized, rich and democratic. Dr. Henrich's point is that cultures like these are actually a tiny minority of all human societies, both geographically and historically. But almost all psychologists study only these WEIRD folks.

The differences show up in surprisingly varied ways. Suppose I were to ask you to draw a graph of your social network, with you and your friends represented as circles attached by lines. Americans make their own circle a quarter-inch larger than their friends' circles. In Japan, people make their own circle a bit smaller than the others.

Or you can ask people how much they would reward the honesty of a friend or a stranger and how much they would punish their dishonesty. Most Easterners tend to say they would reward a friend more than a stranger and punish a friend less; Westerners treat friends and strangers more equally.

These differences show up even in tests that have nothing to do with social relationships. You can give people a "Which of these things belongs together?" problem, like the old "Sesame Street" song. Say you see a picture of a dog, a rabbit and a carrot. Westerners tend to say the dog and the rabbit go together because they're both animals—they're in the same category. Easterners are more likely to say that the rabbit and the carrot go together—because rabbits eat carrots.

None of these questions has a right answer, of course. So why have people in different parts of the world developed such different thinking styles?

You might think that modern, industrial cultures would naturally develop more individualism than agricultural ones. But another possibility is that the kind of agriculture matters. Rice farming, in particular, demands a great deal of coordinated labor. To manage a rice paddy, a whole village has to cooperate and coordinate irrigation systems. By contrast, a single family can grow wheat.

Dr. Talhelm and colleagues used an ingenious design to test these possibilities. They looked at rice-growing and wheat-growing regions within China. (The people in these areas had the same language, history and traditions; they just grew different crops.) Then they gave people the psychological tests I just described. The people in wheat-growing areas looked more like WEIRD Westerners, but the rice growers showed the more classically Eastern communal and relational patterns. Most of the people they tested didn't actually grow rice or wheat themselves, but the cultural traditions of rice or wheat seemed to influence their thinking.

This agricultural difference predicted the psychological differences better than modernization did. Even industrialized parts of China with a rice-growing history showed the more communal thinking pattern.

The researchers also looked at two measures of what people do outside the lab: divorces and patents for new inventions. Conflict-averse communal cultures tend to have fewer divorces than individualistic ones, but they also create fewer individual innovations. Once again, wheat-growing areas looked more "WEIRD" than rice-growing ones.

In fact, Dr. Henrich suggests that rice-growing may have led to the psychological differences, which in turn may have sparked modernization. Aliens from outer space looking at the Earth in the year 1000 would never have bet that barbarian Northern Europe would become industrialized before civilized Asia. And they would surely never have guessed that eating sandwiches instead of stir-fry might make the difference.

THE WIDE REACH OF BABIES' WEBS OF ADORABLENESS

We've all seen the diorama in the natural history museum: the mighty cave men working together to bring down the mastodon. For a long time, evolutionary biologists pointed to guy stuff like hunting and warfare to explain the evolution of human cooperation.

But a recent research symposium at the University of California, San Diego, suggests that the children watching inconspicuously at the back of the picture may have been just as important. Caring for children may, literally, have made us human—and allowed us to develop our distinctive abilities for cognition, cooperation and culture. The same sort of thinking suggests that human mothering goes way beyond mothers.

The anthropologist Sarah Hrdy argued that human evolution depends on the emergence of "cooperative breeding." Chimpanzee babies are exclusively cared for by their biological mothers; they'll fight off anyone else who comes near their babies. We humans, in contrast, have developed a caregiving triple threat: Grandmothers, fathers and "alloparents" help take care of babies. That makes us quite different from our closest primate relatives.

In my last column, I talked about the fascinating new research on grandmothers. The fact that fathers take care of kids may seem more obvious, but it also makes us distinctive. Humans "pair bond" in a way that most primates—indeed, most mammals—don't. Fathers and mothers develop close relationships, and we are substantially more monogamous than any of our close primate relatives. As in most monogamous species, even sorta-kinda-monogamous ones like us, human fathers help to take care of babies.

Father care varies more than mother care. Even in hunter-gatherer or forager societies, some biological fathers are deeply involved in parenting, while others do very little. For fathers, even more than for mothers, the very fact of intimacy with babies is what calls out the impulse to care for them. For example, when fathers touch and play with babies, they produce as much oxytocin (the "tend and befriend" hormone) as mothers do.

Humans also have "alloparents"—other adults who take care of babies even when they aren't related to them. In forager societies, those alloparents are often young women who haven't yet had babies themselves. Caring for other babies lets these women learn child-care skills while helping the babies to survive. Sometimes mothers swap caregiving, helping each other out. If you show pictures of especially cute babies to women who don't have children, the reward centers of their brains light up (though we really didn't need the imaging studies to conclude that cute babies are irresistible to just about everybody).

Dr. Hrdy thinks that this cooperative breeding strategy is what let us develop other distinctive human abilities. A lot of our human smartness is social intelligence; we're especially adept at learning about and from other people. Even tiny babies who can't sit up yet can smile and make eye contact, and studies show that they can figure out what other people want.

Dr. Hrdy suggests that cooperative breeding came first and that the extra investment of grandmothers, fathers and alloparents permitted the long human childhood that in turn allowed learning and culture. In fact, social intelligence may have been a direct result of the demands of cooperative breeding. As anybody who has carpooled can testify, organizing joint child care is just as cognitively challenging as bringing down a mastodon.

What's more, Dr. Hrdy suggests that in a world of cooperative breeding, babies became the agents of their own survival. The weapons-grade cuteness of human babies goes beyond their big eyes and fat cheeks. Babies first use their social intelligence to actively draw dads and grandmoms and alloparents into their web of adorableness. Then they can use it to do all sorts of other things—even take down a mastodon or two.

GRANDMOTHERS: THE BEHIND-THE-SCENES KEY TO HUMAN CULTURE?

Why do I exist? This isn't a philosophical cri de coeur; it's an evolutionary conundrum. At 58, I'm well past menopause, and yet I'll soldier on, with luck, for many years more. The conundrum is more vivid when you realize that human beings (and killer whales) are the only species where females outlive their fertility. Our closest primate relatives—chimpanzees, for example—usually die before their 50s, when they are still fertile.

It turns out that my existence may actually be the key to human nature. This isn't a megalomaniacal boast but a new biological theory: the "grandmother hypothesis." Twenty years ago, the anthropologist Kristen Hawkes at the University of Utah went to study the Hadza, a forager group in Africa, thinking that she would uncover the origins of hunting. But then she noticed the many wiry old women who dug roots and cooked dinners and took care of babies (much like me, though my root-digging skills are restricted to dividing the irises). It turned out that these old women played as important a role in providing nutrition for the group as the strapping young hunters did. What's more, those old women provided an absolutely crucial resource by taking care of their grandchildren.

Long life after menopause isn't just a miracle of modern medicine, either. Our life expectancy is much longer than it used to be, but that's largely because far fewer children die in infancy. Anthropologists have looked at life spans in hunter-gatherer and forager societies, which are like the societies we evolved in. If you make it past childhood, you have a good chance of making it into your 60s or 70s.

There are many controversies about what happened in human evolution. But there's no debate that there were two dramatic changes in what biologists call our "life-history": Besides living much longer than our primate relatives, our babies depend on adults for much longer.

Young chimps gather as much food as they eat by the time they are 7 or so. But even in forager societies, human children pull their weight only when they are teenagers. Why would our babies be helpless for so long? That long immaturity helps make us so smart: It gives us a long protected time to grow large brains and to use those brains to learn about the world we live in. Human beings can learn to adapt to an exceptionally wide variety of environments, and those skills of learning and culture develop in the early years of life.

But that immaturity has a cost. It means that biological mothers can't keep babies going all by themselves: They need help. In forager societies grandmothers provide a substantial amount of child care as well as nutrition. Barry Hewlett at Washington State University and his colleagues found, much to their surprise, that grandmothers even shared breast-feeding with mothers. Some grandmoms just served as big pacifiers, but some, even after menopause, could "relactate," actually producing milk. (Though I think I'll stick to the high-tech, 21st-century version of helping to feed my 5-month-old granddaughter with electric pumps, freezers and bottles.)

Dr. Hawkes's "grandmother hypothesis" proposes that grandmotherhood developed in tandem with our long childhood. In fact, she argues that the evolution of grandmothers was exactly what allowed our long childhood, and the learning and culture that go with it, to emerge. In mathematical models, you can see what happens if, at first, just a few women live past menopause and use that time to support their grandchildren (who, of course, share their genes). The "grandmother trait" can rapidly take hold and spread. And the more grandmothers contribute, the longer the period of immaturity can be.

So on Mother's Day this Sunday, as we toast mothers over innumerable Bloody Marys and Eggs Benedicts across the country, we might add an additional toast for the gray-haired grandmoms behind the scenes.

SEE JANE EVOLVE: PICTURE BOOKS EXPLAIN DARWIN

Evolution by natural selection is one of the best ideas in all of science. It predicts and explains an incredibly wide range of biological facts. But only 60% of Americans believe evolution is true. This may partly be due to religious ideology, of course, but studies show that many secular people who say they believe in evolution still don't really understand it. Why is natural selection so hard to understand and accept? What can we do to make it easier?

A new study in Psychological Science by Deborah Kelemen of Boston University and colleagues helps to explain why evolution is hard to grasp. It also suggests that we should teach children the theory of natural selection while they are still in kindergarten instead of waiting, as we do now, until they are teenagers.

Scientific ideas always challenge our common sense. But some ideas, such as the heliocentric solar system, require only small tweaks to our everyday knowledge. We can easily understand what it would mean for the Earth to go around the sun, even though it looks as if the sun is going around the Earth. Other ideas, such as relativity or quantum mechanics, are so wildly counterintuitive that we shrug our shoulders, accept that only the mathematicians will really get it and fall back on vague metaphors.

But evolution by natural selection occupies a not-so-sweet spot between the intuitive and the counterintuitive. The trouble is that it's almost, but not really, like intentional design, and that's confusing. Adaptation through natural selection, like intentional design, makes things work better. But the mechanism that leads to that result is very different.

Intentional design is an excellent everyday theory of human artifacts. If you wanted to explain most of the complicated objects in my living room, you would say that somebody intentionally designed them to provide light or warmth or a place to put your drink—and you'd be right. Even babies understand that human actions are "teleological"—designed to accomplish particular goals. In earlier work, Dr. Kelemen showed that preschoolers begin to apply this kind of design thinking more generally, an attitude she calls "promiscuous teleology."

By elementary-school age, children start to invoke an ultimate God-like designer to explain the complexity of the world around them—even children brought up as atheists. Kids aged 6 to 10 have developed their own coherent "folk biological" theories. They explain biological facts in terms of intention and design, such as the idea that giraffes develop long necks because they are trying to reach the high leaves.

Dr. Kelemen and her colleagues thought that they might be able to get young children to understand the mechanism of natural selection before the alternative intentional-design theory had become too entrenched. They gave 5- to 8-year-olds 10-page picture books that illustrated an example of natural selection. The "pilosas," for example, are fictional mammals who eat insects. Some of them had thick trunks, and some had thin ones. A sudden change in the climate drove the insects into narrow underground tunnels. The thin-trunked pilosas could still eat the insects, but the ones with thick trunks died. So the next generation all had thin trunks.

Before the children heard the story, the experimenters asked them to explain why a different group of fictional animals had a particular trait. Most of the children gave explanations based on intentional design. But after the children heard the story, they answered similar questions very differently: They had genuinely begun to understand evolution by natural selection. That understanding persisted when the experimenters went back three months later.

One picture book, of course, won't solve all the problems of science education. But these results do suggest that simple story books like these could be powerful intellectual tools. The secret may be to reach children with the right theory before the wrong one is too firmly in place.

SCIENTISTS STUDY WHY STORIES EXIST

We human beings spend hours each day telling and hearing stories. We always have. We’ve passed heroic legends around hunting fires, kitchen tables and the web, and told sad tales of lost love on sailing ships, barstools and cell phones. We’ve been captivated by Oedipus and Citizen Kane and Tony Soprano.

Why? Why not just communicate information through equations or lists of facts? Why is it that even when we tell the story of our own random, accidental lives we impose heroes and villains, crises and resolutions?

You might think that academic English and literature departments, departments that are devoted to stories, would have tried to answer this question or would at least want to hear from scientists who had. But, for a long time, literary theory was dominated by zombie ideas that had died in the sciences. Marx and Freud haunted English departments long after they had disappeared from economics and psychology.

Recently, though, that has started to change. Literary scholars are starting to pay attention to cognitive science and neuroscience. Admittedly, some of the first attempts were misguided and reductive – “evolutionary psychology” just-so stories or efforts to locate literature in a particular brain area. But the conversation between literature and science is becoming more and more sophisticated and interesting.

At a fascinating workshop at Stanford last month called “The Science of Stories,” scientists and scholars talked about why reading Harlequin romances may make you more empathetic, about how ten-year-olds create the fantastic fictional worlds called “paracosms,” and about the subtle psychological inferences in the great Chinese novel, “The Story of the Stone.”

One of the most interesting and surprising results came from the neuroscientist Uri Hasson at Princeton. As techniques for analyzing brain-imaging data have gotten more sophisticated, neuroscientists have gone beyond simply mapping particular brain regions to particular psychological functions. Instead, they use complex mathematical analyses to look for patterns in the activity of the whole brain as it changes over time. Hasson and his colleagues have gone beyond even that. They measure the relationship between the pattern in one person’s brain and the pattern in another’s.

They’ve been especially interested in how brains respond to stories, whether they’re watching a Clint Eastwood movie, listening to a Salinger short story, or just hearing someone’s personal “How We Met” drama. When different people watched the same vivid story as they lay in the scanner (“The Good, the Bad and the Ugly,” for instance), their brain activity unfolded in a remarkably similar way. Sergio Leone really knew how to get into your head.

In another experiment they recorded the pattern of one person’s brain activity as she told a vivid personal story. Then someone else listened to the story on tape and they recorded his brain activity. Again, there was a remarkable degree of correlation between the two brain patterns. The storyteller, like Leone, had literally gotten into the listener’s brain and altered it in predictable ways. But more than that, she had made the listener’s brain match her own brain.

The more tightly coupled the brains became, the more the listener said that he understood the story. This coupling effect disappeared if you scrambled the sentences in the story. There was something about the literary coherence of the tale that seemed to do the work.

One of my own favorite fictions, Star Trek, often includes stories about high-tech telepathic mind control. Some alien has special powers that allow them to shape another person’s brain activity to match their own, or that produce brains so tightly linked that you can barely distinguish them. Hasson’s results suggest that we lowly humans are actually as good at mind-melding as the Vulcans or the Borg. We just do it with stories.

THE KID WHO WOULDN'T LET GO OF 'THE DEVICE'

How does technology reshape our children’s minds and brains? Here is a disturbing story from the near future.

They gave her The Device when she was only two. It worked through a powerful and sophisticated optic-nerve brain-mind interface, injecting its content into her cortex. By the time she was five, she would immediately be swept away into the alternate universe that The Device created. Throughout her childhood, she would become entirely oblivious to her surroundings in its grip, for hours at a time. She would surreptitiously hide it under her desk at school and reach for it as soon as she got home. By adolescence, the images of The Device (a girl entering a ballroom, a man dying on a battlefield) were more vivid to her than her own memories.

As a grown woman, she remained addicted to The Device. It dominated every room of her house, even the bathroom. Its images filled her head even when she made love. When she traveled, her first thought was to be sure that she had access to The Device, and she was filled with panic at the thought that she would have to spend a day without it. When her child broke his arm, she paused to make sure that The Device would be with her in the emergency room. Even sadder, as soon as her children were old enough, she did her very best to connect them to The Device, too.

The psychologists and neuroscientists showed just how powerful The Device had become. Psychological studies showed that its users literally could not avoid entering its world: the second they made contact, their brains automatically and involuntarily engaged with it. What’s more, large portions of their brains that had originally been designed for other purposes had been hijacked into the exclusive service of The Device.

Well, anyway, I hope that this is a story of the near future. It certainly is a story of the near past. The Device, you see, is the printed book, and the story is my autobiography.

Socrates was the first to raise the alarm about this powerful new technology – he argued, presciently, that the rise of reading would destroy the old arts of memory and discussion.

The latest Device to interface with my retina is “It’s Complicated: The Social Lives of Networked Teens” by Danah Boyd at NYU and Microsoft Research. Digital social-network technologies play as large a role in the lives of today’s children as books once did for me. Boyd spent thousands of hours with teenagers from many different backgrounds, observing the way they use technology and talking to them about what technology meant to them.

Her conclusion is that young people use social media to do what they have always done – establish a community of friends and peers, distance themselves from their parents, flirt and gossip, bully, experiment, rebel. At the same time, she argues that the technology does make a difference, just as the book, the printing press and the telegraph did. An ugly taunt that once dissolved in the fetid locker-room air can travel across the world in a moment, and linger forever. Teenagers must learn to reckon with and navigate those new aspects of our current technologies, and for the most part that’s just what they do.

Boyd thoughtfully makes the case against both the alarmists and the techtopians. The kids are all right, or at least as all right as kids have ever been.

So why all the worry? Perhaps it’s because of the inevitable difference between looking forward toward generational changes and looking back at them. As the parable of The Device illustrates, we always look at our children’s future with equal parts unjustified alarm and unjustified hope: utopia and dystopia. We look at our own past with wistful nostalgia. It may be hard to believe, but Boyd’s book suggests that someday even Facebook will be a fond memory.

WHY YOU'RE NOT AS CLEVER AS A 4-YEAR-OLD

Are young children stunningly dumb or amazingly smart? We usually think that children are much worse at solving problems than we are. After all, they can’t make lunch or tie their shoes, let alone figure out long division or ace the SATs. But, on the other hand, every parent finds herself exclaiming “Where did THAT come from!” all day long.

So we also have a sneaking suspicion that children might be a lot smarter than they seem. A new study from our lab that just appeared in the journal Cognition shows that four-year-olds may actually solve some problems better than grown-ups do.

Chris Lucas, Tom Griffiths, Sophie Bridgers and I wanted to know how preschoolers learn about cause and effect. We used a machine that lights up when you put some combinations of blocks on it and not others. Your job is to figure out which blocks make it go. (Actually, we secretly activate the machine with a hidden pedal, but fortunately nobody ever guesses that.)

Try it yourself. Imagine that you, a clever grown-up, see me put a round block on the machine three times. Nothing happens. But when I put a square block on next to the round one the machine lights up. So the square one makes it go and the round one doesn’t, right?

Well, not necessarily. That’s true if individual blocks light up the machine. That’s the obvious idea and the one that grown-ups always think of first. But the machine could also work in a more unusual way. It could be that it takes a combination of two blocks to make the machine go, the way that my annoying microwave will only go if you press both the “cook” button and the “start” button. Maybe the square and round blocks both contribute, but they have to go on together.

Suppose I also show you that a triangular block does nothing and a rectangular one does nothing, but the machine lights up when you put them on together. That should tell you that the machine follows the unusual combination rule instead of the obvious individual block rule. Will that change how you think about the square and round blocks?
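
To make the two possibilities concrete, here is a minimal sketch (my own illustration, with made-up block names, not the study's materials or analysis). Under the obvious individual-block rule, any causal block lights the machine by itself; under the unusual combination rule, it takes a specific pair placed together:

```python
def individual_rule(blocks_on_machine, causal_blocks):
    """The obvious rule: the machine lights up if ANY individually causal block is on it."""
    return any(b in causal_blocks for b in blocks_on_machine)

def combination_rule(blocks_on_machine, causal_pair):
    """The unusual rule: it lights up only when BOTH members of one special pair are on together."""
    return all(b in blocks_on_machine for b in causal_pair)

# The demonstration: triangle alone does nothing, rectangle alone does nothing,
# but triangle and rectangle together light the machine.
# Any individual-rule story that explains the joint lighting wrongly predicts
# that at least one of the blocks should also have worked alone:
print(individual_rule({"triangle"}, causal_blocks={"triangle", "rectangle"}))  # True, yet nothing happened
# The combination rule fits all three observations:
print(combination_rule({"triangle"}, ("triangle", "rectangle")))               # False, as observed
print(combination_rule({"triangle", "rectangle"}, ("triangle", "rectangle")))  # True, as observed
# By the same logic, the round and square blocks might also need to go on together.
```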

We showed patterns like these to kids ages 4 and 5 as well as to Berkeley undergraduates. First we showed them the triangle/rectangle kind of pattern, which suggested that the machine might use the unusual combination rule. Then we showed them the ambiguous round/square kind of pattern.

The kids got it. They figured out that the machine might work in this unusual way and concluded that you should put both blocks on together. But the best and brightest students acted as if the machine would always follow the common and obvious rule, even when we showed them that it might work differently.

Does this go beyond blocks and machines? We think it might reflect a much more general difference between children and adults. Children might be especially good at thinking about unlikely possibilities. After all, grown-ups know a tremendous amount about how the world works. It makes sense that we mostly rely on what we already know.

In fact, computer scientists talk about two different kinds of learning and problem solving – “exploit” versus “explore.” In “exploit” learning we try to quickly find the solution that is most likely to work right now. In “explore” learning we try out lots of possibilities, including unlikely ones, even if they may not have much immediate pay-off. To thrive in a complicated world you need both kinds of learning.
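
For readers who want the computer-science version spelled out, here is a minimal sketch of one standard way to trade off the two, an epsilon-greedy strategy. It is my own illustration, not anything from the study: the option values, the epsilon settings and the "child-like" and "adult-like" labels are made up for the example.

```python
import random

def epsilon_greedy(rewards, counts, epsilon):
    """Pick an option: usually exploit the best-looking one, sometimes explore at random.

    rewards[i] / counts[i] is the estimated value of option i;
    epsilon is the probability of exploring instead of exploiting.
    """
    if random.random() < epsilon:
        return random.randrange(len(counts))                      # explore: try anything
    estimates = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploit: best so far

random.seed(0)
true_payoffs = [0.2, 0.8, 0.4]   # hidden chance that each of three options pays off (made up)
for label, eps in [("explorer (child-like, epsilon=0.5) ", 0.5),
                   ("exploiter (adult-like, epsilon=0.05)", 0.05)]:
    rewards, counts = [0.0, 0.0, 0.0], [0, 0, 0]
    for _ in range(500):
        choice = epsilon_greedy(rewards, counts, eps)
        counts[choice] += 1
        rewards[choice] += 1.0 if random.random() < true_payoffs[choice] else 0.0
    print(label, "tried each option this often:", counts)
# Decaying epsilon over time (explore first, then exploit) is one simple way
# to get the best of both strategies.
```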

A particularly effective strategy is to start off exploring and then narrow in to exploit. Childhood, especially our unusually long and helpless human childhood, may be evolution’s way of balancing exploration and exploitation. Grown-ups stick with the tried and true; 4-year-olds have the luxury of looking for the weird and wonderful.

ARE SCHOOLS ASKING TO DRUG KIDS FOR BETTER TEST SCORES?

In the past two decades, the number of children diagnosed with Attention Deficit Hyperactivity Disorder has nearly doubled. One in five American boys receives a diagnosis by age 17. More than 70% of those who are diagnosed—millions of children—are prescribed drugs.

A new book, "The ADHD Explosion" by Stephen Hinshaw and Richard Scheffler, looks at this extraordinary increase. What's the explanation? Some rise in environmental toxins? Worse parenting? Better detection?

Many people have suspected that there is a relationship between the explosion in ADHD diagnoses and the push by many states, over this same period, to evaluate schools and teachers based on test scores. But how could you tell? It could just be a coincidence that ADHD diagnoses and high-stakes testing have both increased so dramatically. Drs. Hinshaw and Scheffler—both of them at the University of California, Berkeley, my university—present some striking evidence that the answer lies, at least partly, in changes in educational policy.

Drs. Hinshaw and Scheffler used a kind of "natural experiment." Different parts of the country introduced new educational policies at different times. The researchers looked at the relationship between when a state introduced the policies and the rate of ADHD diagnoses. They found that right after the policies were introduced, the diagnoses increased dramatically. Moreover, the rise was particularly sharp for poor children in public schools.

The authors suggest that when schools are under pressure to produce high test scores, they become motivated, consciously or unconsciously, to encourage ADHD diagnoses—either because the drugs allow low-performing children to score better or because ADHD diagnoses can be used to exclude children from testing. They didn't see comparable increases in places where the law kept school personnel from recommending ADHD medication to parents.

These results have implications for the whole way we think about ADHD. We think we know the difference between a disease and a social problem. A disease happens when a body breaks or is invaded by viruses or bacteria. You give patients the right treatment, and they are cured. A social problem—poverty, illiteracy, crime—happens when institutions fail, when instead of helping people to thrive they make them miserable.

Much debate over ADHD has focused on whether it is a disease or a problem, "biological" or "social." But the research suggests that these are the wrong categories. Instead, it seems there is a biological continuum among children. Some have no trouble achieving even "unnatural" levels of highly focused attention, others find it nearly impossible to focus attention at all, and most are somewhere in between.

That variation didn't matter much when we were hunters or farmers. But in our society, it matters terrifically. School is more essential for success, and a particular kind of highly focused attention is more essential for school.

Stimulant drugs don't "cure" a disease called ADHD, the way that antibiotics cure pneumonia. Instead, they seem to shift attentional abilities along that continuum. They make everybody focus better, though sometimes with serious costs. For children at the far end of the continuum, the drugs may help make the difference between success and failure, or even life and death. But the drugs also lead to more focused attention, even in the elite college students who pop Adderall before an exam, risking substance abuse in the mad pursuit of even better grades.

For some children the benefits of the drugs may outweigh the drawbacks, but for many more the drugs don't help and may harm. ADHD is both biological and social, and altering medical and educational institutions could help children thrive. Behavioral therapies can be very effective, but our medical culture makes it much easier to prescribe a pill. Instead of drugging children's brains to get them to fit our schools, we could change our schools to accommodate a wider range of children's brains.

THE PSYCHEDELIC ROAD TO OTHER CONSCIOUS STATES

How do a few pounds of gray goo in our skulls create our conscious experience—the blue of the sky, the tweet of the birds? Few questions are so profound and important—or so hard. We are still very far from an answer. But we are learning more about what scientists call "the neural correlates of consciousness," the brain states that accompany particular kinds of conscious experience.

Most of these studies look at the sort of conscious experiences that people have in standard fMRI brain-scan experiments or that academics like me have all day long: bored woolgathering and daydreaming punctuated by desperate bursts of focused thinking and problem-solving. We've learned quite a lot about the neural correlates of these kinds of consciousness.

But some surprising new studies have looked for the correlates of more exotic kinds of consciousness. Psychedelic drugs such as LSD were designed to be used in scientific research and, potentially at least, as therapy for mental illness. But of course, those drugs long ago escaped from the lab into the streets. They disappeared from science as a result. Recently, though, scientific research on hallucinogens has been making a comeback.

Robin Carhart-Harris at Imperial College London and his colleagues review their work on psychedelic neuroscience in a new paper in the journal Frontiers in Neuroscience. Like other neuroscientists, they put people in fMRI brain scanners. But these scientists gave psilocybin—the active ingredient in consciousness-altering "magic mushrooms"—to volunteers with experience with psychedelic drugs. Others got a placebo. The scientists measured both groups' brain activity.

Normally, when we introspect, daydream or reflect, a group of brain areas called the "default mode network" is particularly active. These areas also seem to be connected to our sense of self. Another brain-area group is active when we consciously pay attention or work through a problem. In both rumination and attention, parts of the frontal cortex are particularly involved, and there is a lot of communication and coordination between those areas and other parts of the brain.

Some philosophers and neuroscientists have argued that consciousness itself is the result of this kind of coordinated brain activity. They think consciousness is deeply connected to our sense of the self and our capacities for reflection and control, though we might have other fleeting or faint kinds of awareness.

But what about psychedelic consciousness? Far from faint or fleeting, psychedelic experiences are more intense, vivid and expansive than everyday ones. So you might expect to see that the usual neural correlates of consciousness would be especially active when you take psilocybin. That's just what the scientists predicted. But consistently, over many experiments, they found the opposite. On psilocybin, the default mode network and frontal control systems were actually much less active than normal, and there was much less coordination between different brain areas. In fact, "shroom" consciousness looked neurologically like the inverse of introspective, reflective, attentive consciousness.

The researchers also got people to report on the quality of their psychedelic experiences. The more intense the experiences were, and particularly the more people reported that they had lost the sense of a boundary between themselves and the world, the more they showed the distinctive pattern of deactivation.

Dr. Carhart-Harris and colleagues suggest that the common theory linking consciousness and control is wrong. Instead, much of the brain activity accompanying workaday consciousness may be devoted to channeling, focusing and even shutting down experience and information, rather than creating them. The Carhart-Harris team points to other uncontrolled but vivid kinds of consciousness such as dreams, mystical experiences, early stages of psychosis and perhaps even infant consciousness as parallels to hallucinogenic drug experience.

To paraphrase Hamlet, it turns out that there are more, and stranger, kinds of consciousness than are dreamt of in our philosophy.

TIME TO RETIRE THE SIMPLICITY OF NATURE VS. NURTURE

Are we moral by nature or as a result of learning and culture? Are men and women “hard-wired” to think differently? Do our genes or our schools make us intelligent? These all seem like important questions, but maybe they have no good scientific answer.

Once, after all, it seemed equally important to ask whether light was a wave or a particle, or just what arcane force made living things different from rocks. Science didn’t answer these questions—it told us they were the wrong questions to ask. Light can be described either way; there is no single cause of life.

Every year on the Edge website the intellectual impresario and literary agent John Brockman asks a large group of thinkers to answer a single question. (Full disclosure: Brockman Inc. is my agency.) This year, the question is about which scientific ideas should be retired.

Surprisingly, many of the writers gave a similar answer: They think that the familiar distinction between nature and nurture has outlived its usefulness.

Scientists who focus on the “nature” side of the debate said that it no longer makes sense to study “culture” as an independent factor in human development. Scientists who focus on learning, including me, argued that “innateness” (often a synonym for nature) should go. But if you read these seemingly opposed answers more closely, you can see a remarkable degree of convergence.

Scientists have always believed that the human mind must be the result of some mix of genes and environment, innate structure and learning, evolution and culture. But it still seemed that these were different causal forces that combined to shape the human mind, and we could assess the contribution of each one separately. After all, you can’t have water without both hydrogen and oxygen, but it’s straightforward to say how the two elements are combined.

As many of the writers in the Edge symposium point out, however, recent scientific advances have made the very idea of these distinctions more dubious.

One is the explosion of work in the field of epigenetics. It turns out that there is a long and circuitous route, with many feedback loops, from a particular set of genes to a feature of the adult organism. Epigenetics explores the way that different environments shape this complex process, including whether a gene is expressed at all.

A famous epigenetic study looked at two different strains of mice. The mice in each strain were genetically identical to each other. Normally, one strain is much smarter than the other. But then the experimenters had the mothers of the smart strain raise the babies of the dumb strain. The babies not only got much smarter, they passed this advantage on to the next generation.

So were the mice’s abilities innate or learned? The result of nature or nurture? Genes or environment? The question just doesn’t make sense.

New theories of human evolution and culture have also undermined these distinctions. The old evolutionary psychology suggested that we had evolved with very specific “modules”—finely calibrated to a particular Stone Age environment.

But new research has led biologists to a different view. We didn’t adapt to a particular Stone Age environment. We adapted to a newly unpredictable and variable world. And we did it by developing new abilities for cultural transmission and change. Each generation could learn new skills for coping with new environments and could pass those skills on to the next generation.

As the anthropologist Pascal Boyer points out in his answer, it’s tempting to talk about “the culture” of a group as if this is some mysterious force outside the biological individual or independent of evolution. But culture is a biological phenomenon. It’s a set of abilities and practices that allow members of one generation to learn and change and to pass the results of that learning on to the next generation. Culture is our nature, and the ability to learn and change is our most important and fundamental instinct.

THE SURPRISING PROBABILITY GURUS WEARING DIAPERS

Two new studies in the journal Cognition describe how some brilliant decision makers expertly use probability for profit.

But you won't meet these economic whizzes at the World Economic Forum in Switzerland this month. Unlike the "Davos men," these analysts require a constant supply of breasts, bottles, shiny toys and unconditional adoration (well, maybe not so unlike the Davos men). Although some of them make do with bananas. The quants in question are 10-month-old babies and assorted nonhuman primates.

Ordinary grown-ups are terrible at explicit probabilistic and statistical reasoning. For example, how likely is it that there will be a massive flood in America this year? How about an earthquake leading to a massive flood in California? People illogically give the first event a lower likelihood than the second. But even babies and apes turn out to have remarkable implicit statistical abilities.
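The logic behind the flood example, spelled out with made-up numbers purely for illustration: an earthquake-caused flood is just one kind of massive flood, so its probability can never exceed the probability of a massive flood of any kind.

p_quake = 0.10                       # illustrative numbers only
p_flood_given_quake = 0.20
p_flood_from_other_causes = 0.04

p_quake_flood = p_quake * p_flood_given_quake              # the vivid, specific scenario
p_any_flood = p_quake_flood + p_flood_from_other_causes    # includes that scenario and more

print(round(p_quake_flood, 2), round(p_any_flood, 2))      # 0.02 vs 0.06: the conjunction is rarer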

Stephanie Denison at the University of Waterloo in Canada and Fei Xu at the University of California, Berkeley, showed babies two large transparent jars full of lollipop-shaped toys. Some of the toys had plain black tops while some were pink with stars, glitter and blinking lights. Of course, economic acumen doesn't necessarily imply good taste, and most of the babies preferred pink bling to basic black.

The two jars had different proportions of black and pink toys. For example, one jar contained 12 pink and four black toys. The other jar had 12 pink toys too but also contained 36 black toys. The experimenter took out a toy from one jar, apparently at random, holding it by the "pop" so that the babies couldn't see what color it was. Then she put it in an opaque cup on the floor. She took a toy from the second jar in the same way and put it in another opaque cup. The babies crawled toward one cup or the other and got the toy. (Half the time she put the first cup in front of the first jar, half the time she switched them around.)

What should you do in this situation if you really want pink lollipops? The first cup is more likely to have a pink pop inside than the second (the odds are 3 to 1 versus 1 to 3), even though both jars have exactly the same number of pink toys inside. It isn't a sure thing, but that is where you would place your bets.

So did the babies. They consistently crawled to the cup that was more likely to have a pink payoff. In a second experiment, one jar had 16 pink and 4 black toys, while the other had 24 pink and 96 black ones. The second jar actually held more pink toys than the first one, but the cup was less likely to hold a pink toy. The babies still went for the rational choice.
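The arithmetic the babies are implicitly tracking can be written out in a few lines of Python (the jar contents are the ones from the experiments; the little helper function is mine): what matters is the proportion of pink toys in each jar, not the absolute number.

from fractions import Fraction

def chance_of_pink(pink, black):
    # The probability that a toy drawn at random from the jar is pink.
    return Fraction(pink, pink + black)

# Experiment 1: same number of pink toys, very different proportions.
print(chance_of_pink(12, 4), chance_of_pink(12, 36))    # 3/4 vs 1/4 -> go to the first cup

# Experiment 2: the second jar holds MORE pink toys, but a lower proportion.
print(chance_of_pink(16, 4), chance_of_pink(24, 96))    # 4/5 vs 1/5 -> still the first cup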

In the second study, Hannes Rakoczy at the University of Göttingen in Germany and his colleagues did a similar experiment with a group of gorillas, bonobos, chimps and orangutans. They used banana and carrot pieces, and the experimenter hid the food in one or the other hand, not a cup. But the scientists got the same results: The apes chose the hand that was more likely to hold a banana.

So it seems that we're designed with a basic understanding of probability. The puzzle is this: Why are grown-ups often so stupid about probabilities when even babies and chimps can be so smart?

This intuitive, unconscious statistical ability may be completely separate from our conscious reasoning. But other studies suggest that babies' unconscious understanding of numbers may actually underpin their ability to explicitly learn math later. We don't usually even try to teach probability until high school. Maybe we could exploit these intuitive abilities to teach children, and adults, to understand probability better and to make better decisions as a result.

WHAT CHILDREN REALLY THINK ABOUT MAGIC

This week we will counter the cold and dark with the warmth and light of fantasy, fiction and magic—from Santa to Scrooge, from Old Father Time and Baby New Year to the Three Kings of Epiphany. Children will listen to tales of dwarves and elves and magic rings in front of an old-fashioned fire or watch them on a new-fashioned screen.

But what do children really think about magic? The conventional wisdom is that young children can’t discriminate between the real and the imaginary, fact and fantasy. More recently, however, researchers like Jacqueline Woolley at the University of Texas and Paul Harris at Harvard have shown that even the youngest children understand magic in surprisingly sophisticated ways.

For instance, Dr. Woolley showed preschoolers a box of pencils and an empty box. She got them to vividly imagine that the empty box was full of pencils. The children enthusiastically pretended, but they also said that if someone wanted pencils, they should go to the real box rather than the imagined one.

Even young children make a sort of metaphysical distinction between two worlds. One is the current, real world with its observable events, incontrovertible facts and causal laws. The other is the world of pretense and possibility, fiction and fantasy.

Children understand the difference. They know that the beloved imaginary friend isn’t actually real and that the terrifying monster in the closet doesn’t actually exist (though that makes them no less beloved or scary). But children do spend more time than we do thinking about the world of imagination. They don’t actually confuse the fantasy world with the real one—they just prefer to hang out there.

Why do children spend so much time thinking about wild possibilities? We humans are remarkably good at imagining ways the world could be different and working out the consequences. Philosophers call it “counterfactual” thinking, and it’s one of our most valuable abilities.

Scientists work out what would happen if the physical world were different, and novelists work out what would happen if the social and psychological world were different. Scientific hypotheses and literary fictions both consider the consequences of small tweaks to our models of the world; mythologies consider much larger changes. But the fundamental psychology is the same. Young children seem to practice this powerful way of thinking in their everyday pretend play.

For scientists and novelists and 3-year-olds to be good at counterfactual reasoning, though, they must be able to preserve a bright line between imaginary possibilities and current reality.

But, particularly as they get older, children also begin to think that this bright line could be crossed. They recognize the possibility of “real” magic. It is conceivable to them, as it is to adults, that somehow the causal laws could be suspended, or creatures from the imaginary world could be transported to the real one. Dr. Harris did an experiment where children imagined a monster in the box instead of pencils. They still said that the monster wasn’t real, but when the experimenter left the room, they moved away from the box—just in case. Santa Claus is confusing because he is a fiction who at least seems to leave an observable trail of disappearing cookies and delivered presents.

The great conceptual advance of science was to reject this second kind of magic, the kind that bridges the real and the imagined, whether it is embodied in religious fundamentalism or New Age superstition. But at the same time, like the 3-year-olds, scientists and artists are united in their embrace of both reality and possibility, and their capacity to discriminate between them. There is no conflict between celebrating the magic of fiction, myth and metaphor and celebrating science. Counterfactual thinking is an essential part of science, and science requires and rewards imagination as much as literature or art.

Scientists, artists and 3-year-olds are united in their embrace of reality and possibility.

TRIAL AND ERROR IN TODDLERS AND SCIENTISTS

The Gopnik lab is rejoicing. My student Caren Walker and I have just published a paper in the well-known journal Psychological Science. Usually when I write about scientific papers here, they sound neat and tidy. But since this was our own experiment, I can tell you the messy inside story too.

First, the study—and a small IQ test for you. Suppose you see an experimenter put two orange blocks on a machine, and it lights up. She then puts a green one and a blue one on the same machine, but nothing happens. Two red ones work, a black and white combination doesn't. Now you have to make the machine light up yourself. You can choose two purple blocks or a yellow one and a brown one.

But this simple problem actually requires some very abstract thinking. It's not that any particular block makes the machine go. It's the fact that the blocks are the same rather than different. Other animals have a very hard time understanding this. Chimpanzees can get hundreds of examples and still not get it, even with delicious bananas as a reward. As a clever (or even not so clever) reader of this newspaper, you'd surely choose the two purple blocks.
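For readers who like their abstractions explicit, here is the rule written as a tiny Python check; it's my own sketch of the task's logic, not anything from our paper. What predicts the machine's behavior isn't any particular block but whether the two blocks match.

# The demonstrations described above, as (block, block, did-it-light-up) triples.
examples = [
    ("orange", "orange", True),
    ("green", "blue", False),
    ("red", "red", True),
    ("black", "white", False),
]

def same_rule(a, b):
    # The relational hypothesis: the machine goes when the two blocks are the same.
    return a == b

print(all(same_rule(a, b) == lit for a, b, lit in examples))   # True: "same" fits every example
print(same_rule("purple", "purple"), same_rule("yellow", "brown"))   # so pick the two purple blocks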

The conventional wisdom has been that young children also can't learn this kind of abstract logical principle. Scientists like Jean Piaget believed that young children's thinking was concrete and superficial. And in earlier studies, preschoolers couldn't solve this sort of "same/different" problem.

But in those studies, researchers asked children to say what they thought about pictures of objects. Children often look much smarter when you watch what they do instead of relying on what they say.

We did the experiment I just described with 18-to-24-month-olds. And they got it right, with just two examples. The secret was showing them real blocks on a real machine and asking them to use the blocks to make the machine go.

Tiny toddlers, barely walking and talking, could quickly learn abstract relationships. And they understood "different" as well as "same." If you reversed the examples so that the two different blocks made the machine go, they would choose the new, "different" pair.

The brilliant scientists of the Gopnik lab must have realized that babies could do better than prior research suggested and so designed this elegant experiment, right? Not exactly. Here's what really happened: We were doing a totally different experiment.

My student Caren wanted to see whether getting children to explain an event made them think about it more abstractly. We thought that a version of the "same block" problem would be tough for 4-year-olds and having them explain might help. We actually tried a problem a bit simpler than the one I just described, because the experimenter put the blocks on the machine one at a time instead of simultaneously. The trouble was that the 4-year-olds had no trouble at all! Caren tested 3-year-olds, then 2-year-olds and finally the babies, and they got it too.

We sent the paper to the journal. All scientists occasionally (OK, more than occasionally) curse journal editors and reviewers, but they contributed to the discovery too. They insisted that we do the more difficult simultaneous version of the task with babies and that we test "different" as well as "same." So we went back to the lab, muttering that the "different" task would be too hard. But we were wrong again.

Now we are looking at another weird result. Although the 4-year-olds did well on the easier sequential task, in a study we're still working on, they actually seem to be doing worse than the babies on the harder simultaneous one. So there's a new problem for us to solve.

Scientists legitimately worry about confirmation bias, our tendency to look for evidence that fits what we already think. But, fortunately, learning is most fun, for us and 18-month-olds too, when the answers are most surprising.

Scientific discoveries aren't about individual geniuses miraculously grasping the truth. Instead, they come when we all chase the unexpected together.

GRATITUDE FOR THE COSMIC MIRACLE OF A NEWBORN CHILD

Last week I witnessed three miracles. These miracles happen thousands of times a day but are no less miraculous for that. The first was the miracle of life. Amino acids combined to make just the right proteins, which sent out instructions to make just the right neurons, which made just the right connections to other neurons. And that brought a new, utterly unique, unprecedented consciousness—a new human soul—into the world.

Georgiana, my newborn granddaughter, already looks out at the world with wide-eyed amazement.

The second was the miracle of learning. This new consciousness can come to encompass the whole world. Georgiana is already tracking the movements of her toy giraffe. She’s learning to recognize her father’s voice and the scent of her mother’s breast. And she’s figuring out that the cries of “She’s so sweet!” are English rather than Japanese.

In just 20 years she may know about quarks and leptons, black holes and red dwarfs, the beginning and end of the universe. Maybe by then she’ll know more than we do about how a newborn mind can learn so much, so quickly and so well. Her brain, to borrow from Emily Dickinson, is wider than the sky, deeper than the sea.

Georgie looks most intently at the admiring faces of the people who surround her. She is already focused on learning what we’re like. And that leads to the most important miracle of all: the miracle of love.

The coordination of amino acids and neurons that brought Georgiana to life is a stunning evolutionary achievement. But so is the coordination of human effort and ingenuity and devotion that keeps her alive and thriving.

Like all human babies, she is so heartbreakingly fragile, so helpless. And yet that very fragility almost instantly calls out a miraculous impulse to take care of her. Her mom and dad are utterly smitten, of course, not to mention her grandmom. But it goes far beyond just maternal hormones or shared genes.

The little hospital room is crowded with love—from two-year-old brother Augie to 70-year-old Grandpa, from Uncle Nick and Aunt Margo to the many in-laws and girlfriends and boyfriends. The friends who arrive with swaddling blankets, the neighbors who drop off a cake, the nurses and doctors, the baby sitters and child-care teachers—all are part of a network of care as powerful as the network of neurons.

That love and care will let Georgiana’s magnificent human brain, mind and consciousness grow and change, explore and create. The amino acids and proteins miraculously beat back chaos and create the order of life. But our ability to care for each other and our children—our capacity for culture—also creates miraculous new kinds of order: the poems and pictures of art, the theories and technologies of science.

It may seem that science depicts a cold, barren, indifferent universe—that Georgiana is just a scrap of carbon and water on a third-rate planet orbiting an unimpressive sun in an obscure galaxy. And it is true that, from a cosmic perspective, our whole species is as fragile, as evanescent, as helpless, as tiny as she is.

But science also tells us that the entirely natural miracles of life, learning and love are just as real as the cosmic chill. When we look at them scientifically, they turn out to be even more wonderful, more genuinely awesome, than we could have imagined. Like little Georgie, their fragility just makes them more precious.

Of course, on this memorable Thanksgiving my heart would be overflowing with gratitude for this one special, personal, miracle baby even if I’d never heard of amino acids, linguistic discrimination or non-kin investment. But I’ll also pause to give thanks for the general human miracle. And I’ll be thankful for the effort, ingenuity and devotion of the scientists who help us understand and celebrate it.

THE BRAIN'S CROWDSOURCING SOFTWARE

Over the past decade, popular science has been suffering from neuromania. The enthusiasm came from studies showing that particular areas of the brain “light up” when you have certain thoughts and experiences. It’s mystifying why so many people thought this explained the mind. What have you learned when you say that someone’s visual areas light up when they see things?

People still seem to be astonished at the very idea that the brain is responsible for the mind—a bunch of gray goo makes us see! It is astonishing. But scientists knew that a century ago; the really interesting question now is how the gray goo lets us see, think and act intelligently. New techniques are letting scientists understand the brain as a complex, dynamic, computational system, not just a collection of individual bits of meat associated with individual experiences. These new studies come much closer to answering the “how” question.

Take a study in the journal Nature this year by Stefano Fusi of Columbia University College of Physicians and Surgeons, Earl K. Miller of the Massachusetts Institute of Technology and their colleagues. Fifty years ago David Hubel and Torsten Wiesel made a great Nobel Prize-winning discovery. They recorded the signals from particular neurons in cats’ brains as the animals looked at different patterns. The neurons responded selectively to some images rather than others. One neuron might only respond to lines that slanted right, another only to those slanting left.

But many neurons don’t respond in this neatly selective way. This is especially true for the neurons in the parts of the brain that are associated with complex cognition and problem-solving, like the prefrontal cortex. Instead, these cells are a mysterious mess—they respond idiosyncratically to different complex collections of features. What are these neurons doing?

In the new study the researchers taught monkeys to remember and respond to one shape rather than another while they recorded their brain activity. But instead of just looking at one neuron at a time, they recorded the activity of many prefrontal neurons at once. A number of them showed weird, messy “mixed selectivity” patterns. One neuron might respond when the monkey remembered just one shape or only when it recognized the shape but not when it recalled it, while a neighboring cell showed a different pattern.

In order to analyze how the whole group of cells worked, the researchers turned to the techniques of computer scientists who are trying to design machines that can learn. Computers aren’t made of carbon, of course, let alone neurons. But they have to solve some of the same problems, like identifying and remembering patterns. The techniques that work best for computers turn out to be remarkably similar to the techniques that brains use.

Essentially, the researchers found the brain was using the same general sort of technique that Google uses for its search algorithm. You might think that the best way to rank search results would be to pick out a few features of each Web page like “relevance” or “trustworthiness”—in the same way as the neurons picked out whether an edge slanted right or left. Instead, Google does much better by combining all the many, messy, idiosyncratic linking decisions of individual users.

With neurons that detect just a few features, you can capture those features and combinations of features, but not much more. To capture more complex patterns, the brain does better by amalgamating and integrating information from many different neurons with very different response patterns. The brain crowd-sources.
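A toy way to see the point in code, my own construction rather than the Fusi-Miller analysis itself: suppose the task is to respond when exactly one of two stimulus features is present. No simple linear read-out of two neatly selective neurons can manage it, but add one messy neuron that responds to the conjunction and a linear read-out succeeds.

def linearly_decodable(patterns, targets, epochs=100):
    # Try to find a linear read-out (a simple perceptron) that gets every pattern right.
    w = [0.0] * len(patterns[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(patterns, targets):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != t:
                errors += 1
                for i, xi in enumerate(x):
                    w[i] += (t - pred) * xi
                b += t - pred
        if errors == 0:
            return True
    return False

stimuli = [(0, 0), (0, 1), (1, 0), (1, 1)]
respond = [0, 1, 1, 0]    # respond when exactly one feature is present

pure = [list(s) for s in stimuli]                    # two neatly selective neurons
mixed = [list(s) + [s[0] * s[1]] for s in stimuli]   # plus one "messy" conjunctive neuron

print(linearly_decodable(pure, respond))    # False: no linear read-out works
print(linearly_decodable(mixed, respond))   # True: the mixed neuron makes it decodable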

Scientists have long argued that the mind is more like a general software program than like a particular hardware set-up. The new combination of neuroscience and computer science doesn’t just tell us that the gray goo lets us think, or even exactly where that gray goo is. Instead, it tells us what programs it runs.

Scientists are getting a clearer idea of what ‘programs’ the mind runs.

WORLD SERIES RECAP: MAY BASEBALL'S IRRATIONAL HEART KEEP ON BEATING

The last 15 years have been baseball's Age of Enlightenment. The quants and nerds brought reason and science to the dark fortress of superstition and mythology that was Major League Baseball. The new movement was pioneered by the brilliant Bill James (adviser to this week's World Champion Red Sox), implemented by Billy Beane (the fabled general manager of my own Oakland Athletics) and immortalized in the book and movie "Moneyball."

Over this same period, psychologists have discovered many kinds of human irrationality: just the biases and foibles, in fact, that the moneyball approach exploits. So if human reason has changed how we think about baseball, it might be baseball's turn to remind us of the limits of human reason.

We overestimate the causal power of human actions. So, in the old days, managers assumed that gutsy, active base stealers caused more runs than they actually do, and they discounted the more passive players who waited for walks. Statistical analysis, uninfluenced by the human bias toward action, led moneyballers to value base-stealing less and walking more.

We overgeneralize from small samples, inferring causal regularities where there is only noise. So we act as if the outcome of a best-of-7 playoff series genuinely indicates the relative strength of two teams that were practically evenly matched over a 162-game regular season. The moneyballer doesn't change his strategy in the playoffs, and he refuses to think that playoff defeats are as significant as regular season success.

We confuse moral and causal judgments. Jurors think a drunken driver who is in a fatal accident is more responsible for the crash than an equally drunken driver whose victim recovers. The same goes for fielders; we fans assign far more significance to a dramatically fumbled ball than to routine catches. The moneyball approach replaces the morally loaded statistic of "errors" with more meaningful numbers that include positive as well as negative outcomes.

By avoiding these mistakes, baseball quants have come much closer to understanding the true causal structure of baseball, and so their decisions are more effective.
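The small-sample point can be made concrete with a quick calculation; the 55% per-game edge below is my own illustrative number, not one from any of these analysts. Even a genuinely better team loses a best-of-7 series a surprising share of the time, while its edge shows up reliably over a full season.

from math import comb

def p_at_least(p_game, games, wins_needed):
    # Probability of winning at least `wins_needed` of `games` independent games.
    # (Needing 4 wins out of 7 is equivalent to a best-of-7 series that stops early.)
    return sum(comb(games, k) * p_game**k * (1 - p_game)**(games - k)
               for k in range(wins_needed, games + 1))

edge = 0.55                                    # the better team's per-game winning chance
print(round(p_at_least(edge, 7, 4), 2))        # best-of-7: only about a 61% chance
print(round(p_at_least(edge, 162, 82), 2))     # a winning record over 162 games: far more likely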

But does the fact that even experts make so many mistakes about baseball prove that human beings are thoroughly irrational? Baseball, after all, is a human invention. It's a great game exactly because it's so hard to understand, and it produces such strange and compelling interactions between regularity and randomness, causality and chaos.

Most of the time in the natural environment, our evolved learning procedures get the right answers, just as most of the time our visual system lets us see the objects around us accurately. In fact, we really only notice our visual system on the rare occasions when it gives us the wrong answers, in perceptual illusions, for instance. A carnival funhouse delights us just because we can't make sense of it.

Baseball is a causal funhouse, a game designed precisely to confound our everyday causal reasoning. We can never tell just how much any event on the field is the result of skill, luck, intention or just grace. Baseball is a machine for generating stories, and stories are about the unexpected, the mysterious, even the miraculous.

Sheer random noise wouldn't keep us watching. But neither would the predictable, replicable causal regularities we rely on every day. Those are the regularities that evolution designed us to detect. But what can even the most rational mind do but wonder at the absurdist koan of the obstruction call, with its dizzying mix of rules, intentions and accidents, that ended World Series Game 3?

The truly remarkable thing about human reasoning isn't that we were designed by evolution to get the right answers about the world most of the time. It's that we enjoy trying to get the right answers so profoundly that we intentionally make it hard for ourselves. We humans, uniquely, invent art-forms serving no purpose except to stretch the very boundaries of rationality itself.

DRUGGED-OUT MICE OFFER INSIGHT INTO THE GROWING BRAIN

Imagine a scientist peeking into the skulls of glow-in-the-dark, cocaine-loving mice and watching their nerve cells send out feelers. It may sound more like something from cyberpunk writer William Gibson than from the journal Nature Neuroscience. But this kind of startling experiment promises to change how we think about the brain and mind.

Scientific progress often involves new methods as much as new ideas. The great methodological advance of the past few decades was functional magnetic resonance imaging, or fMRI: It lets scientists see which areas of the brain are active when a person thinks something.

But scientific methods can also shape ideas, for good and ill. The success of fMRI led to a misleadingly static picture of how the brain works, particularly in the popular imagination. When the brain lights up to show the distress of a mother hearing her baby cry, it's tempting to say that motherly concern is innate.

But that doesn't follow at all. A learned source of distress can produce the same effect. Logic tells you that every time we learn something, our brains must change, too. In fact, that kind of change is the whole point of having a brain in the first place. The fMRI pictures of brain areas "lighting up" don't show those changes. But there are remarkable new methods that do, at least for mice.

Slightly changing an animal's genes can make it produce fluorescent proteins. Scientists can use a similar technique to make mice with nerve cells that light up. Then they can see how the mouse neurons grow and connect through a transparent window in the mouse's skull.

The study that I cited from Nature Neuroscience, by Linda Wilbrecht and her colleagues, used this technique to trace one powerful and troubling kind of learning—learning to use drugs. Cocaine users quickly learn to associate their high with a particular setting, and when they find themselves there, the pull of the drug becomes particularly irresistible.

First, the researchers injected mice with either cocaine or (for the control group) salt water and watched what happened to the neurons in the prefrontal part of their brains, where decisions get made. The mice who got cocaine developed more "dendritic spines" than the other mice—their nerve cells sent out more potential connections that could support learning. So cocaine, just by itself, seems to make the brain more "plastic," more susceptible to learning.

But a second experiment was even more interesting. Mice, like humans, really like cocaine. The experimenters gave the mice cocaine on one side of the cage but not the other, and the mice learned to go to that side of the cage. The experimenters recorded how many new neural spines were formed and how many were still there five days later.

All the mice got the same dose of cocaine, but some of them showed a stronger preference for the cocaine side of the cage than others—they had learned the association between the cage and the drug better. The mice who learned better were much more likely to develop persistent new spines. The changes in behavior were correlated with changes in the brain.

It could be that some mice were more susceptible to the effects of the cocaine, which produced more spines, which made them learn better. Or it could be that the mice who were better learners developed more persistent spines.

We don't know how this drug-induced learning compares to more ordinary kinds of learning. But we do know, from similar studies, that young mice produce and maintain more new spines than older mice. So it may be that the quick, persistent learning that comes with cocaine, though destructive, is related to the profound and extensive learning we see early in life, in both mice and men.

POVERTY CAN TRUMP A WINNING HAND OF GENES

We all notice that some people are smarter than others. You might naturally wonder how much those differences in intelligence are the result of genes or of upbringing. But that question, it turns out, is impossible to answer.

That’s because changes in our environment can actually transform the relation between our traits, our upbringing, and our genes.

The textbook illustration of this is a dreadful disease called PKU. Some babies have a genetic mutation which means that they can’t process an amino acid in their food. That leads to severe mental retardation. For centuries, PKU was incurable. Genetics determined whether someone suffered from the syndrome, and so had a low IQ.

But then scientists discovered how PKU worked. Now, we can immediately put babies with the mutation on a special diet. So, now, whether a baby with PKU has a low IQ is determined by the food they eat—their environment.

We humans can figure out how our environment works and act to change it, as we did with PKU. So if you’re trying to measure the influence of human nature and nurture, you have to consider not just the current environment, but also all the possible environments that we can create.

This doesn’t just apply to obscure diseases. In the latest issue of Psychological Science, Timothy C. Bates of the University of Edinburgh and colleagues report a study of the relationship between genes, SES (socio-economic status, or how rich and educated you are) and IQ. They used statistics to analyze the differences between identical twins, who share all their DNA, and fraternal twins, who share only some.

When psychologists first started studying twins, they found identical twins much more likely to have similar IQs than fraternal ones. They concluded that IQ was highly “heritable”—due to genetic differences. But those were all high SES twins. Erik Turkheimer of the University of Virginia and his colleagues discovered that the picture was very different for poor, low SES, twins. For these children, there was very little difference between identical and fraternal twins: IQ was hardly heritable at all. Differences in the environment, like whether you lucked out with a good teacher, seemed to be much more important.
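For the curious, here is the textbook back-of-the-envelope version of the twin logic, in Python. It uses Falconer's classic approximation rather than the actual models in these studies, and the correlations below are invented illustrations, not numbers from either paper.

def heritability(r_identical, r_fraternal):
    # Falconer's approximation: heritability is roughly twice the gap between
    # how similar identical twins are and how similar fraternal twins are.
    return 2 * (r_identical - r_fraternal)

# Hypothetical high-SES sample: identical twins far more alike than fraternal twins.
print(round(heritability(0.80, 0.45), 2))   # 0.7 -- IQ looks highly heritable

# Hypothetical low-SES sample: the two kinds of twins look nearly the same.
print(round(heritability(0.60, 0.55), 2))   # 0.1 -- IQ looks hardly heritable at all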

In the new study the Bates team found this was even true when those children grew up. This might seem paradoxical—after all, your DNA stays the same no matter how you are raised. The explanation is that IQ is influenced by education. Historically, absolute IQ scores have risen substantially as we’ve changed our environment so that more people go to school longer.

Richer children all have similarly good educational opportunities, so that genetic differences become more apparent. And since richer children have more educational choice, they (or their parents) can choose environments that accentuate and amplify their particular skills. A child who has genetic abilities that make her just slightly better at math may be more likely to take a math class, and so become even better at math.

But for poor children, haphazard differences in educational opportunity swamp genetic differences. Ending up in a really terrible school or one a bit better can make a big difference. And poor children have fewer opportunities to tailor their education to their particular strengths.

How much your genes shape your intelligence depends on whether you live in a world with no schooling at all, a world where you need good luck to get a good education, or a world with rich educational possibilities. If we could change the world for the PKU babies, we can change it for the next generation of poor children, too.

IS IT POSSIBLE TO REASON ABOUT HAVING A CHILD?

How can you decide whether to have a child? It’s a complex and profound question—a philosophical question. But it’s not a question traditional philosophers thought about much. In fact, the index of the 1967 “Encyclopedia of Philosophy” had only four references to children at all—though there were hundreds of references to angels. You could read our deepest thinkers and conclude that humans reproduced through asexual cloning.

Recently, though, the distinguished philosopher L.A. Paul (who usually works on abstruse problems in the metaphysics of causation) wrote a fascinating paper, forthcoming in the journal Res Philosophica. Prof. Paul argues that there is no rational way to decide to have children—or not to have them.

How do we make a rational decision? The classic answer is that we imagine the outcomes of different courses of action. Then we consider both the value and the probability of each outcome. Finally, we choose the option with the highest “utilities,” as the economists say. Does the glow of a baby’s smile outweigh all those sleepless nights?
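In code, the classic recipe is almost embarrassingly simple, which is part of what makes the objection below bite. The probabilities and values here are placeholders of my own invention, not anyone's real numbers.

def expected_utility(outcomes):
    # outcomes: (probability, value) pairs for one course of action.
    return sum(p * v for p, v in outcomes)

options = {
    "have a child": [(0.7, 50), (0.3, -20)],   # mostly imagined joy, some imagined misery
    "stay childless": [(1.0, 10)],             # a known, steadier baseline
}
scores = {name: expected_utility(o) for name, o in options.items()}
print(scores, max(scores, key=scores.get))

The catch, as the rest of this column argues, is that for this particular decision you can't fill in the first row's numbers before the fact, and the person doing the valuing won't be the same afterward.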

It’s not just economists. You can find the same picture in the advice columns of Vogue and Parenting. In the modern world, we assume that we can decide whether to have children based on what we think the experience of having a child will be like.

But Prof. Paul thinks there’s a catch. The trouble is that, notoriously, there is no way to really know what having a child is like until you actually have one. You might get hints from watching other people’s children. But that overwhelming feeling of love for this one particular baby just isn’t something you can understand beforehand. You may not even like other people’s children and yet discover that you love your own child more than anything. Of course, you also can’t really understand the crushing responsibility beforehand, either. So, Prof. Paul says, you just can’t make the decision rationally.

I think the problem may be even worse. Rational decision-making assumes there is a single person with the same values before and after the decision. If I’m trying to decide whether to buy peaches or pears, I can safely assume that if I prefer peaches now, the same “I” will prefer them after my purchase. But what if making the decision turns me into a different person with different values?

Part of what makes having a child such a morally transformative experience is the fact that my child’s well-being can genuinely be more important to me than my own. It may sound melodramatic to say that I would give my life for my children, but, of course, that’s exactly what every parent does all the time, in ways both large and small.

Once I commit myself to a child, I’m literally not the same person I was before. My ego has expanded to include another person even though—especially though—that person is utterly helpless and unable to reciprocate.

The person I am before I have children has to make a decision for the person I will be afterward. If I have kids, chances are that my future self will care more about them than just about anything else, even her own happiness, and she’ll be unable to imagine life without them. But, of course, if I don’t have kids, my future self will also be a different person, with different interests and values. Deciding whether to have children isn’t just a matter of deciding what you want. It means deciding who you’re going to be.

L.A. Paul, by the way, is, like me, both a philosopher and a mother—a combination that’s still surprisingly rare. There are more and more of us, though, so maybe the 2067 Encyclopedia of Philosophy will have more to say on the subject of children. Or maybe even philosopher-mothers will decide it’s easier to stick to thinking about angels.

EVEN YOUNG CHILDREN ADOPT ARBITRARY RITUALS

Human beings love rituals. Of course, rituals are at the center of religious practice. But even secularists celebrate the great transitions of life with arbitrary actions, formalized words and peculiar outfits. To become part of my community of hardheaded, rational, scientific Ph.D.s, I had to put on a weird gown and even weirder hat, walk solemnly down the aisle of a cavernous building, and listen to rhythmically intoned Latin.

Our mundane actions are suffused with arbitrary conventions, too. Grabbing food with your hands is efficient and effective, but we purposely slow ourselves down with cutlery rituals. In fact, if you’re an American, chances are that you cut your food with your fork in your left hand, then transfer the fork to your right hand to eat the food, and then swap it back again. You may not even realize that you’re doing it. That elaborate fork and knife dance makes absolutely no sense.

But that’s the central paradox of ritual. Rituals are intentionally useless, purposely irrational. So why are they so important to us?

The cognitive psychologist Cristine Legare at the University of Texas at Austin has been trying to figure out where rituals come from and what functions they serve. One idea is that rituals declare that you are a member of a particular social group.

Everybody eats, but only Americans swap their knives and forks. (Several spy movies have used this as a plot point.) Sharing your graduation ceremony marks you as part of the community of Ph.D.s more effectively than the solitary act of finishing your dissertation.

The fact that rituals don’t make practical sense is just what makes them useful for social identification. If someone just puts tea in a pot and adds hot water then I know only that they are a sensible person who wants tea. If instead they kneel on a mat and revolve a special whisk a precise number of times, or carefully use silver tongs to drop exactly two lumps into a china cup, I can conclude that they are members of a particular aristocratic tea culture.

It turns out that rituals are deeply rooted and they emerge early. Surprisingly young children are already sensitive to the difference between purposeful actions and rituals, and they adopt rituals themselves.

In a new paper forthcoming in the journal Cognition, Legare and colleagues showed 3- to 6-year-old children a video of people performing a complicated sequence of eight actions with a mallet and a pegboard. Someone would pick up the mallet, place it to one side, push up a peg with her hand, and so on. Then the experimenters gave the children the mallet and pegboard and said, “Now it’s your turn.”

You could interpret this sequence of actions as an intelligent attempt to bring about a particular outcome, pushing up the pegs. Or you could interpret it as a ritual, a way of saying who you are.

Sometimes the children saw a single person perform the actions twice. Sometimes they saw two people perform the actions simultaneously. The identical synchronous actions suggested that the two people were from the same social group.

When they saw two people do exactly the same thing at the same time, the children produced exactly the same sequence of actions themselves. They also explained their actions by saying things like “I had to do it the way that they did.” They treated the actions as if they were a ritual.

When they saw the single actor, they were much less likely to imitate exactly what the other person did. Instead, they treated it like a purposeful action. They would vary what they did themselves to make the pegs pop up in a new way.

Legare thinks that, from the time we are very young children, we have two ways of thinking about people—a “ritual stance” and an “instrumental stance.” We learn as much from the irrational and arbitrary things that people do as from the intelligent and sensible ones.

THE GORILLA LURKING IN OUR CONSCIOUSNESS

Imagine that you are a radiologist searching through slides of lung tissue for abnormalities. On one slide, right next to a suspicious nodule, there is the image of a large, threatening gorilla. What would you do? Write to the American Medical Association? Check yourself into the schizophrenia clinic next door? Track down the practical joker among the lab technicians?

In fact, you probably wouldn’t do anything. That is because, although you were staring right at the gorilla, you probably wouldn’t have seen it. That startling fact shows just how little we understand about consciousness.

In the journal Psychological Science, Trafton Drew and colleagues report that they got radiologists to look for abnormalities in a series of slides, as they usually do. But then they added a gorilla to some of the slides. The gorilla gradually faded into the slides and then gradually faded out, since people are more likely to notice a sudden change than a gradual one. When the experimenters asked the radiologists if they had seen anything unusual, 83% said no. An eye-tracking machine showed that radiologists missed the gorilla even when they were looking straight at it.

This study is just the latest to demonstrate what psychologists call “inattentional blindness.” When we pay careful attention to one thing, we become literally blind to others—even startling ones like gorillas.

In one classic study, Dan Simons and Christopher Chabris showed people a video of students passing a ball around. They asked the viewers to count the number of passes, so they had to pay attention to the balls. In the midst of the video, someone in a gorilla suit walked through the players. Most of the viewers, who were focused on counting the balls, didn’t see the gorilla at all. You can experience similar illusions yourself at invisiblegorilla.com. It is an amazingly robust phenomenon—I am still completely deceived by each new example.

You might think this is just a weird thing that happens with videos in a psychology lab. But in the new study, the radiologists were seasoned professionals practicing a real and vitally important skill. Yet they were also blind to the unexpected events.

In fact, we are all subject to inattentional blindness all the time. That is one of the foundations of magic acts. Psychologists have started collaborating with professional magicians to figure out how their tricks work. It turns out that if you just keep your audience’s attention focused on the rabbit, they literally won’t even see what you’re doing with the hat.

Inattentional blindness is as important for philosophers as it is for radiologists and magicians. Many philosophers have claimed that we can’t be wrong about our conscious experiences. It certainly feels that way. But these studies are troubling. If you asked the radiologist about the gorilla, she’d say that she just experienced a normal slide in exactly the way she experienced the other slides—except that we know that can’t be true. Did she have the experience of seeing the gorilla and somehow not know it? Or did she experience just the part of the slide with the nodule and invent the gorilla-free remainder?

At this very moment, as I stare at my screen and concentrate on this column, I’m absolutely sure that I’m also experiencing the whole visual field—the chair, the light, the view out my window. But for all I know, invisible gorillas may be all around me.

Many philosophical arguments about consciousness are based on the apparently certain and obvious intuitions we have about our experience. This includes, of course, arguments that consciousness just couldn’t be explained scientifically. But scientific experiments like this one show that those beautifully clear and self-evident intuitions are really incoherent and baffling. We will have to wrestle with many other confusing, tricky, elusive gorillas before we understand how consciousness works.

DOES EVOLUTION WANT US TO BE UNHAPPY?

Samuel Johnson called it the vanity of human wishes, and Buddhists talk about the endless cycle of desire. Social psychologists say we get trapped on a hedonic treadmill. What they all mean is that we wish, plan and work for things that we think will make us happy, but when we finally get them, we aren’t nearly as happy as we thought we’d be.

Summer makes this particularly vivid. All through the busy winter I longed and planned and saved for my current vacation. I daydreamed about peaceful summer days in this beautiful village by the Thames with nothing to do but write. Sure enough, the first walk down the towpath was sheer ecstasy—but by the fifth, it was just another walk. The long English evenings hang heavy, and the damned book I’m writing comes along no more easily than it did in December.

This looks like yet another example of human irrationality. But the economist Arthur Robson has an interesting evolutionary explanation. Evolution faces what economists call a principal-agent problem. Evolution is the principal, trying to get organisms (its agents) to increase their fitness. But how can it get those dumb animals to act in accordance with this plan? (This anthropomorphic language is just a metaphor, of course—a way of saying that the fitter organisms are more likely to survive and reproduce. Evolution doesn’t have intentions.)

For simple organisms like slugs, evolution can build in exactly the right motivations (move toward food and away from light). But it is harder with a complicated, cognitive organism like us. We act by imagining many alternative futures and deciding among them. Our motivational system has to be designed so that we do this in a way that tends to improve our fitness.

Suppose I am facing a decision between two alternative futures. I can stay where I am or go on to the next valley where the river is a bit purer, the meadows a bit greener and the food a bit better. My motivational system ensures that when I imagine the objectively better future it looks really great, far better than all the other options—I’ll be so happy! So I pack up and move. From evolution’s perspective that is all to the good: My fitness has increased.

But now suppose that I have actually already made the decision. I am in the next valley. It does me no additional good to continue admiring the river, savoring the green of the meadow and the taste of the fruit. I acted, I have gotten the benefit, and feeling happy now is, from evolution’s perspective, just a superfluous luxury.

Wanting to be happy and imagining the happy future made me act in a way that really did make me better off; feeling happy now doesn’t help. To keep increasing my fitness, I should now imagine the next potential source of happiness that will help me to make the next decision. (Doesn’t that tree just over the next hill have even better fruit?)

It is as if every time we make a decision that actually makes us better off, evolution resets our happiness meter to zero. That prods us to decide to take the next action, which will make us even better off—but no happier.
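
Arthur Robson's idea can be put in the form of a toy simulation. The sketch below is my own illustration in Python, not his formal model: an agent always moves to whichever imagined future looks happiest, gets an objectively better life with every move, and yet its felt happiness snaps back to baseline the moment it arrives.

import random

random.seed(1)

quality_here = 1.0        # objective quality of the current valley
felt_happiness = 0.0      # the "happiness meter"

for year in range(8):
    # Imagine a few alternative futures; anticipated happiness is the expected gain.
    options = [quality_here + random.uniform(-0.5, 1.5) for _ in range(3)]
    best = max(options)
    anticipated = max(best - quality_here, 0.0)

    if anticipated > 0:
        quality_here = best       # the move really does leave the agent better off...
        felt_happiness = 0.0      # ...but the meter resets to zero on arrival
    print(f"year {year}: anticipated gain {anticipated:.2f}, "
          f"objective well-being {quality_here:.2f}, felt happiness {felt_happiness:.1f}")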

Of course, I care about what I want, not what evolution wants. But what do I want? Should I try to be better off objectively even if I don’t feel any happier? After all, the Thames really is beautiful, the meadows are green, the food—well, it’s better in England than it used to be. And the book really is getting done.

Or would it be better to defy evolution, step off the treadmill of desire and ambition and just rest serenely at home in Buddhist contentment? At least we humans can derive a bit of happiness, however fleeting, from asking these questions, perhaps because the answers always seem to be just over the next hill.

HOW TO GET CHILDREN TO EAT VEGGIES

To parents, there is no force known to science as powerful as the repulsion between children and vegetables.

Of course, just as supercooling fluids can suspend the law of electrical resistance, melting cheese can suspend the law of vegetable resistance. This is sometimes known as the Pizza Paradox. There is also the Edamame Exception, but this is generally considered to be due to the Snack Uncertainty Principle, by which a crunchy soybean is and is not a vegetable simultaneously. But when melty mozzarella conditions don’t apply, the law of vegetable repulsion would appear to be as immutable as gravity, magnetism or the equally mysterious law of child-godawful mess attraction.

In a new paper in Psychological Science, however, Sarah Gripshover and Ellen Markman of Stanford University have shown that scientists can overcome the child-vegetable repulsive principle. Remarkably, the scientists in question are the children themselves. It turns out that, by giving preschoolers a new theory of nutrition, you can get them to eat more vegetables.

My colleagues and I have argued that very young children construct intuitive theories of the world around them (my first book was called “The Scientist in the Crib”). These theories are coherent, causal representations of how things or people or animals work. Just like scientific theories, they let children make sense of the world, construct predictions and design intelligent actions.

Preschoolers already have some of the elements of an intuitive theory of biology. They understand that invisible germs can make you sick and that eating helps make you healthy, even if they don’t get all the details. One little boy explained about a peer, “He needs more to eat because he is growing long arms.”

The Stanford researchers got teachers to read 4- and 5-year olds a series of story books for several weeks. The stories gave the children a more detailed but still accessible theory of nutrition. They explained that food is made up of different invisible parts, the equivalent of nutrients; that when you eat, your body breaks up the food into those parts; and that different kinds of food have different invisible parts. They also explained that your body needs different nutrients to do different things, so that to function well you need to take in a lot of different nutrients.

In a control condition, the teachers read children similar stories based on the current United States Department of Agriculture website for healthy nutrition. These stories also talked about healthy eating and encouraged it. But they didn’t provide any causal framework to explain how eating works or why you should eat better.

The researchers also asked children questions to test whether they had acquired a deeper understanding of nutrition. And at snack time they offered the children vegetables as well as fruit, cheese and crackers. The children who had heard the theoretical stories understood the concepts better. More strikingly, they also were more likely to pick the vegetables at snack time.

We don’t yet know if this change in eating habits will be robust or permanent, but a number of other recent studies suggest that changing children’s theories can actually change their behavior too.

A quick summary of 30 years of research in developmental psychology yields two big propositions: Children are much smarter than we thought, and adults are much stupider. Studies like this one suggest that the foundations of scientific thinking—causal inference, coherent explanation, and rational prediction—are not a creation of advanced culture but our evolutionary birthright.

WHY ARE SOME CHILDREN MORE RESILIENT?

The facts are grimly familiar: 20% of American children grow up in poverty, a number that has increased over the past decade. Many of those children also grow up in social isolation or chaos. This has predictably terrible effects on their development.

There is a moral mystery about why we allow this to happen in one of the richest societies in history. But there is also a scientific mystery. It's obvious why deprivation hurts development. The mystery is why some deprived children seem to do so much better than others. Is it something about their individual temperament or their particular environment?

The pediatrician Tom Boyce and the psychologist Jay Belsky, with their colleagues, suggest an interesting, complicated interaction between nature and nurture. They think that some children may be temperamentally more sensitive than others to the effects of the environment—both good and bad.

They describe these two types of children as orchids and dandelions. Orchids grow magnificently when conditions are just right and wither when they aren't. Dandelions grow about the same way in a wide range of conditions. A new study by Elisabeth Conradt at Brown University and her colleagues provides some support for this idea.

They studied a group of "at risk" babies when they were just five months old. The researchers recorded their RSA (Respiratory Sinus Arrhythmia)—that is, how their heart rates changed when they breathed in and out. Differences in RSA are connected to differences in temperament. People with higher RSA—heart rates that vary more as they breathe—seem to respond more strongly to their environment physiologically.

Then they looked at the babies' environments. They measured economic risk factors like poverty, medical factors like premature birth, and social factors like little family and community support. Most importantly, they also looked at the relationships between the children and their caregivers. Though all the families had problems, some had fewer risk factors, and those babies tended to have more stable and secure relationships. In other families, with more risk factors, the babies had disorganized and difficult relationships.

A year later, the researchers looked at whether the children had developed behavior problems. For example, they recorded how often the child hurt others, refused to eat or had tantrums. All children do things like this sometimes, but a child who acts this way a lot is likely to have trouble later on.

Finally, they analyzed the relationships among the children's early physiological temperament, their environment and relationships, and later behavior problems. The lower-RSA children were more like dandelions. Their risky environment did hurt them; they had more behavior problems than the average child in the general population, but they seemed less sensitive to variations in their environment. Lower-RSA children who grew up with relatively stable and secure relationships did no better than lower-RSA children with more difficult lives.

The higher-RSA children were more like orchids. For them, the environment made an enormous difference. High-RSA children who grew up with more secure relationships had far fewer behavior problems than high-RSA children who grew up with difficult relationships. In good environments, these orchid children actually had fewer behavior problems than the average child. But they tended to do worse than average in bad environments.

From a scientific perspective, the results illustrate the complexity of interactions between nature and nurture. From a moral and policy perspective, all these children, dandelions and orchids both, need and deserve a better start in life. Emotionally, there is a special poignancy about what might have been. What could be sadder than a withered orchid?

THE WORDSWORTHS: CHILD PSYCHOLOGISTS

Last week, I made a pilgrimage to Dove Cottage—a tiny white house nestled among the meres and fells of England's Lake District. William Wordsworth and his sister Dorothy lived there while they wrote two of my favorite books: his "Lyrical Ballads" and her journal—both masterpieces of Romanticism.

The Romantics celebrated the sublime—an altered, expanded, oceanic state of consciousness. Byron and Shelley looked for it in sex. Wordsworth's friends, Coleridge and De Quincey, tried drugs (De Quincey's opium scales sit next to Dorothy's teacups in Dove Cottage).

But Wordsworth identified this exalted state with the very different world of young children. His best poems describe the "splendor in the grass," the "glory in the flower," of early childhood experience. His great "Ode: Intimations of Immortality From Recollections of Early Childhood" begins: There was a time when meadow, grove, and stream, / The earth, and every common sight, / To me did seem / Apparell'd in celestial light, / The glory and the freshness of a dream.

This picture of the child's mind is remarkably close to the newest scientific picture. Children's minds and brains are designed to be especially open to experience. They're unencumbered by the executive planning, focused attention and prefrontal control that fuel the mad endeavor of adult life, the getting and spending that lays waste our powers (and, to be fair, lets us feed our children).

This makes children vividly conscious of "every common sight" that habit has made invisible to adults. It might be Wordsworth's meadows or the dandelions and garbage trucks that enchant my 1-year-old grandson.

It's often said that the Romantics invented childhood, as if children had merely been small adults before. But scientifically speaking, Wordsworth discovered childhood—he saw children more clearly than others had. Where did this insight come from? Mere recollection can't explain it. After all, generations of poets and philosophers had recollected early childhood and seen only confusion and limitation.

I suspect it came at least partly from his sister Dorothy. She was an exceptionally sensitive and intelligent observer, and the descriptions she recorded in her journal famously made their way into William's poems. He said that she gave him eyes and ears. Dorothy was also what the evolutionary anthropologist Sarah Hrdy calls an "allomother." All her life, she devotedly looked after other people's children and observed their development.

In fact, when William was starting to do his greatest work, he and Dorothy were looking after a toddler together. They rescued 4-year-old Basil Montagu from his irresponsible father, who paid them 50 pounds a year to care for him. The young Wordsworth earned more as a nanny than as a poet. Dorothy wrote about Basil—"I do not think there is any pleasure more delightful than that of marking the development of a child's faculties." It could be the credo of every developmental psychologist.

There's been much prurient speculation about whether Dorothy and William slept together. But very little has been written about the undoubted fact that they raised a child together.

For centuries the people who knew young children best were women. But, sexism aside, just bearing and rearing children was such overwhelming work that it left little time for thinking or writing about them, especially in a world without birth control, vaccinations or running water.

Dorothy was a thinker and writer who lived intimately with children but didn't bear the full, crushing responsibility of motherhood. Perhaps she helped William to understand children's minds so profoundly and describe them so eloquently.

MORAL PUZZLES KIDS STRUGGLE WITH

Here's a question. There are two groups, Zazes and Flurps. A Zaz hits somebody. Who do you think it was, another Zaz or a Flurp?

It's depressing, but you have to admit that it's more likely that the Zaz hit the Flurp. That's an understandable reaction for an experienced, world-weary reader of The Wall Street Journal. But here's something even more depressing—4-year-olds give the same answer.

In my last column, I talked about some disturbing new research showing that preschoolers are already unconsciously biased against other racial groups. Where does this bias come from?

Marjorie Rhodes at New York University argues that children are "intuitive sociologists" trying to make sense of the social world. We already know that very young children make up theories about everyday physics, psychology and biology. Dr. Rhodes thinks that they have theories about social groups, too.

In 2012 she asked young children about the Zazes and Flurps. Even 4-year-olds predicted that people would be more likely to harm someone from another group than from their own group. So children aren't just biased against other racial groups: They also assume that everybody else will be biased against other groups. And this extends beyond race, gender and religion to the arbitrary realm of Zazes and Flurps.

In fact, a new study in Psychological Science by Dr. Rhodes and Lisa Chalik suggests that this intuitive social theory may even influence how children develop moral distinctions.

Back in the 1980s, Judith Smetana and colleagues discovered that very young kids could discriminate between genuinely moral principles and mere social conventions. First, the researchers asked about everyday rules—a rule that you can't be mean to other children, for instance, or that you have to hang up your clothes. The children said that, of course, breaking the rules was wrong. But then the researchers asked another question: What would you think if teachers and parents changed the rules to say that being mean and dropping clothes were OK?

Children as young as 2 said that, in that case, it would be OK to drop your clothes, but not to be mean. No matter what the authorities decreed, hurting others, even just hurting their feelings, was always wrong. It's a strikingly robust result—true for children from Brazil to Korea. Poignantly, even abused children thought that hurting other people was intrinsically wrong.

This might leave you feeling more cheerful about human nature. But in the new study, Dr. Rhodes asked similar moral questions about the Zazes and Flurps. The 4-year-olds said it would always be wrong for Zazes to hurt the feelings of others in their group. But if teachers decided that Zazes could hurt Flurps' feelings, then it would be OK to do so. Intrinsic moral obligations only extended to members of their own group.

The 4-year-olds demonstrate the deep roots of an ethical tension that has divided philosophers for centuries. We feel that our moral principles should be universal, but we simultaneously feel that there is something special about our obligations to our own group, whether it's a family, clan or country.

"You've got to be taught before it's too late / Before you are 6 or 7 or 8 / To hate all the people your relatives hate," wrote Oscar Hammerstein. Actually, though, it seems that you don't have to be taught to prefer your own group—you can pick that up fine by yourself. But we do have to teach our children how to widen the moral circle, and to extend their natural compassion and care even to the Flurps.

IMPLICIT RACIAL BIAS IN PRESCHOOLERS

Are human beings born good and corrupted by society or born bad and redeemed by civilization? Lately, goodness has been on a roll, scientifically speaking. It turns out that even 1-year-olds already sympathize with the distress of others and go out of their way to help them.

But the most recent work suggests that the origins of evil may be only a little later than the origins of good.

Our impulse to love and help the members of our own group is matched by an impulse to hate and fear the members of other groups. In "Gulliver's Travels," Swift described a vicious conflict between the Big-Enders, who ate their eggs with the big end up, and the Little-Enders, who started from the little end. Historically, largely arbitrary group differences (Catholic vs. Protestant, Hutu vs. Tutsi) have led to persecution and even genocide.

When and why does this particular human evil arise? A raft of new studies shows that even 5-year-olds discriminate between what psychologists call in-groups and out-groups. Moreover, children actually seem to learn subtle aspects of discrimination in early childhood.

In a recent paper, Yarrow Dunham at Princeton and colleagues explored when children begin to have negative thoughts about other racial groups. White kids aged 3 to 12 and adults saw computer-generated, racially ambiguous faces. They had to say whether they thought the face was black or white. Half the faces looked angry, half happy. The adults were more likely to say that angry faces were black. Even people who would hotly deny any racial prejudice unconsciously associate other racial groups with anger.

But what about the innocent kids? Even 3- and 4-year-olds were more likely to say that angry faces were black. In fact, younger children were just as prejudiced as older children and adults.

Is this just something about white attitudes toward black people? They did the same experiment with white and Asian faces. Although Asians aren't stereotypically angry, children also associated Asian faces with anger. Then the researchers tested Asian children in Taiwan with exactly the same white and Asian faces. The Asian children were more likely to think that angry faces were white. They also associated the out-group with anger, but for them the out-group was white.

Was this discrimination the result of some universal, innate tendency or were preschoolers subtly learning about discrimination? For black children, white people are the out-group. But, surprisingly, black children (and adults) were the only ones to show no bias at all; they categorized the white and black faces in the same way. The researchers suggest that this may be because black children pick up conflicting signals—they know that they belong to the black group, but they also know that the white group has higher status.

These findings show the deep roots of group conflict. But the last study also suggests that children somehow learn very quickly how the groups around them are related to each other.

Learning also was important in another way. The researchers began by asking the children to categorize unambiguously white, black or Asian faces. Children began to differentiate the racial groups at around age 4, but many of the children still did not recognize the racial categories. Moreover, children made the white/Asian distinction at a later age than the black/white distinction. Only children who recognized the racial categories were biased, but they were as biased as the adults tested at the same time. Still, it took kids from all races a while to learn those categories.

The studies of early altruism show that the natural state of man is not a war of all against all, as Thomas Hobbes said. But it may quickly become a war of us against them.

HOW THE BRAIN REALLY WORKS

For the last 20 years neuroscientists have shown us compelling pictures of brain areas "lighting up" when we see or hear, love or hate, plan or act. These studies were an important first step. But they also suggested a misleadingly simple view of how the brain works. They associated specific mental abilities with specific brain areas, in much the same way that phrenology, in the 19th century, claimed to associate psychological characteristics with skull shapes.

Most people really want to understand the mind, not the brain. Why do we experience and act on the world as we do? Associating a piece of the mind with a piece of the brain does very little to answer that question. After all, for more than a century we have known that our minds are the result of the stuff between our necks and the tops of our heads. Just adding that vision is the result of stuff at the back and that planning is the result of stuff in the front doesn't help us understand how vision or planning work.

But new techniques are letting researchers look at the activity of the whole brain at once. What emerges is very different from the phrenological view. In fact, most brain areas multitask; they are involved in many different kinds of experiences and actions. And the brain is dynamic. It can respond differently to the same events at different times and in different circumstances.

A new study in Nature Neuroscience by Jack L. Gallant, Tolga Çukur and colleagues at the University of California, Berkeley, dramatically illustrates this new view. People in an fMRI scanner watched a half-hour-long sequence combining very short video clips of everyday scenes. The scientists organized the video content into hundreds of categories, describing whether each segment included a plant or a building, a cat or a clock.

Then they divided the whole brain into small sections with a three-dimensional grid and recorded the activity in each section of the grid for each second. They used sophisticated statistical analyses to find the relationship between the patterns of brain activity and the content of the videos.

The twist was that the participants either looked for human beings in the videos or looked for vehicles. When they looked for humans, great swaths of the brain became a "human detector"—more sensitive to humans and less sensitive to vehicles. Looking for vehicles turned more of the brain into a "vehicle detector." And when people looked for humans their brains also became more sensitive to related objects, like cats and plants. When they looked for vehicles, their brains became more sensitive to clocks and buildings as well.
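
A rough sense of what finding "the relationship between the patterns of brain activity and the content of the videos" involves can be conveyed with a small sketch. The code below is my own illustration using simulated data and ordinary least squares; the actual study used far richer category labels and more sophisticated statistics. For each small section of the brain it fits one set of category weights per attention condition and shows how the weights shift with attention.

import numpy as np

rng = np.random.default_rng(0)
seconds, n_voxels = 600, 50
categories = ["human", "vehicle", "cat", "building"]

def simulate_run(attended):
    """Fake data: 0/1 indicators for each category per second, plus activity in
    each brain section that responds more strongly to the attended category."""
    X = rng.integers(0, 2, size=(seconds, len(categories))).astype(float)
    true_weights = rng.normal(0, 0.2, size=(len(categories), n_voxels))
    true_weights[categories.index(attended)] += 1.0   # attention boosts tuning
    Y = X @ true_weights + rng.normal(0, 1.0, size=(seconds, n_voxels))
    return X, Y

def fit_weights(X, Y):
    # Ordinary least squares: one set of category weights per brain section.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W   # shape: (n_categories, n_voxels)

for attended in ("human", "vehicle"):
    X, Y = simulate_run(attended)
    W = fit_weights(X, Y)
    for cat in ("human", "vehicle"):
        mean_w = W[categories.index(cat)].mean()
        print(f"attending to {attended}s: average '{cat}' weight = {mean_w:.2f}")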

In fact, the response patterns of most brain areas changed when people changed the focus of their attention. Something as ineffable as where you focus your attention can make your whole brain work differently.

People often assume that knowing about the brain is all that you need to explain how the mind works, so that neuroscience will replace psychology. That may account for the curious popular enthusiasm for the phrenological "lighting up" studies. It is as if the very thought that something psychological is "in the brain" gives us a little explanatory frisson, even though we have known for at least a century that everything psychological is "in the brain" in some sense. But it would be just as accurate to say that knowing about the mind explains how the brain works.

The new, more dynamic picture of the brain makes psychology even more crucial. The researchers could only explain the very complex pattern of brain activity by relating it to what they knew about categorization and attention. In the same way, knowing the activity of every wire on every chip in my computer wouldn't tell me much if I didn't also know the program my machine was running.

Neuroscience may be sexier than psychology right now, and it certainly has a lot more money and celebrity. But they really cannot get along without each other.

NATURE, CULTURE AND GAY MARRIAGE

There's been a lot of talk about nature in the gay-marriage debate. Opponents point to the "natural" link between heterosexual sex and procreation. Supporters note nature's staggering diversity of sexual behavior and the ubiquity of homosexual sex in our close primate relatives. But, actually, gay marriage exemplifies a much more profound part of human nature: our capacity for cultural evolution.

The birds and the bees may be enough for the birds and the bees, but for us it's just the beginning.

Culture is our nature; the evolution of culture was one secret of our biological success. Evolutionary theorists like the philosopher Kim Sterelny, the biologist Kevin Laland and the psychologist Michael Tomasello emphasize our distinctively human ability to transmit new information and social practices from generation to generation. Other animals have more elements of culture than we once thought, but humans rely on cultural transmission far more than any other species.

Still, there's a tension built into cultural evolution. If the new generation just slavishly copies the previous one, the process of innovation will seize up. The advantage of the "cultural ratchet" is that we can use the discoveries of the previous generation as a jumping-off point for revisions and discoveries of our own.

Man may not be The Rational Animal, but we are The Empirical Animal—perpetually revising what we do in the light of our experience.

Studies show that children have a distinctively human tendency to precisely imitate what other people do. But they also can choose when to imitate exactly, when to modify what they've seen, and when to try something brand new.

Human adolescence, with its risk-taking and exploration, seems to be a particularly important locus of cultural innovation. Archaeologists think teenagers may have been the first cave-painters. We can even see this generational effect in other primates. Some macaque monkeys famously learned how to wash sweet potatoes and passed this skill to others. The innovator was the equivalent of a preteen girl, and other young macaques were the early adopters.

As in biological evolution, there is no guarantee that cultural evolution will always move forward, or that any particular cultural tradition or innovation will prove to be worth preserving. But although the arc of cultural evolution is long and irregular, overall it does seem to bend toward justice, or, at least, to human thriving.

Gay marriage demonstrates this dynamic of tradition and innovation in action. Marriage has itself evolved. It was once an institution that emphasized property and inheritance. It has become one that provides a way of both expressing and reinforcing values of commitment, loyalty and stability. When gay couples want marriage, rather than just civil unions, it's precisely because they endorse those values and want to be part of that tradition.

At the same time, as more and more people have courageously come out, there have been more and more gay relationships to experience. That experience has led most of the millennial generation to conclude that the link between marital tradition and exclusive heterosexuality is unnecessary, indeed wrong. The generational shift at the heart of cultural evolution is especially plain. Again and again, parents report that they're being educated by their children.

It's ironic that the objections to gay marriage center on child-rearing. Our long protected human childhood, and the nurturing and investment that goes with it, is, in fact, exactly what allows social learning and cultural evolution. Nurture, like culture, is also our nature. We nurture our children so that they can learn from our experience, but also so that subsequent generations can learn from theirs.

Marriage and family are institutions designed, at least in part, to help create an autonomous new generation, free to try to make better, more satisfying kinds of marriage and family for the generations that follow.

PREFRONTAL CONTROL AND INNOVATION

Quick—what can you do with Kleenex? Easy, blow your nose. But what can you do with Kleenex that no one has ever done before? That's not so easy. Finally a bright idea pops up out of the blue—you could draw a face on it, put a string around the top and make it into a cute little Halloween ghost!

Why is thinking outside of the Kleenex box so hard? A study published in February suggests that our much-lauded prefrontal brain mechanisms for control and focus may actually make it more difficult to think innovatively.

The comedian Emo Philips said that he thought his brain was the most fascinating organ in his body—until he realized who was telling him this. Perhaps for similar reasons, the control system of the brain, which includes areas like the left lateral prefrontal cortex, gets particularly good press. It's like the brain's chief executive officer, responsible for long-term planning, focusing, monitoring and distraction-squelching (and apparently PR too). But there may be a down side to those "executive functions." Shutting down prefrontal control may actually help people get to unusual ideas like the Kleenex ghost.

Earlier studies used fMRI imaging to see which parts of the brain are active when we generate ideas. In 2008 Charles Limb at Johns Hopkins University and Alan Braun at the National Institutes of Health reported how they got jazz musicians to either play from a memorized score or improvise, and looked at their brains. Some "control" parts of the prefrontal cortex shut down, deactivated, during improvisation but not when the musicians played a memorized score. Dr. Braun and colleagues later found the same effect with freestyle rappers—improvisational genius is not limited by baby-boomer taste.

But it's important to remember that correlation is not causation. How could you prove that the frontal deactivation really did make the improvisers innovate? You'd need to show that if you deactivate those brain areas experimentally people will think more innovatively. Sharon Thompson-Schill at the University of Pennsylvania and colleagues did that in the new study.

They used a technique called transcranial direct current stimulation, or tDCS. If you pass a weak electrical current through part of the brain, it temporarily and safely disrupts neural activity. The researchers got volunteers to think up either ordinary or unusual uses for everyday objects like Kleenex. While the participants were doing this task, the scientists either disrupted their left prefrontal cortex with tDCS or used a sham control procedure. In the control, the researchers placed the electrodes in just the same way but surreptitiously turned off the juice before the task started.

Both groups were equally good at thinking up ordinary uses for the objects. But the volunteers who got zapped generated significantly more unusual uses than the unzapped control-group thinkers, and they produced those unusual uses much faster.

Portable frontal lobe zappers are still (thankfully) infeasible. But we can modify our own brain functions by thinking differently—improvising, freestyling, daydreaming or some types of meditation. I like hanging out with 3-year-olds. Preschool brains haven't yet fully developed the prefrontal system, and young kids' free-spirited thinking can be contagious.

There's a catch, though. It isn't quite right to say that losing control makes you more creative. Centuries before neuroscience, the philosopher John Locke distinguished two human faculties, wit and judgment. Wit allows you to think up wild new ideas, but judgment tells you which ideas are actually worth keeping. Other neuroscience studies have found that the prefrontal system re-engages when you have to decide whether an unlikely answer is actually the right one.

Yes—you could turn that Kleenex into an adorable little Halloween ghost. But would that be the aesthetically responsible thing to do? Our prefrontal control systems are the sensible parents of our inner 3-year-olds. They keep us from folly, even at the cost of reining in our wit.

SLEEPING AND LEARNING LIKE A BABY

Babies and children sleep a lot—12 hours a day or so to our eight. But why would children spend half their lives in a state of blind, deaf paralysis punctuated by insane hallucinations? Why, in fact, do all higher animals surrender their hard-won survival abilities for part of each day?

Children themselves can be baffled and indignant about the way that sleep robs them of consciousness. We weary grown-ups may welcome a little oblivion, but at nap time, toddlers will rage and rage against the dying of the light.

Part of the answer is that sleep helps us to learn. It may just be too hard for a brain to take in the flood of new experiences and make sense of them at the same time. Instead, our brains look at the world for a while and then shut out new input and sort through what they have seen.

Children learn in a particularly profound way. Some remarkable experiments show that even tiny babies can take in a complex statistical pattern of data and figure out the rules and principles that explain the pattern. Sleep seems to play an especially important role in this kind of learning.

In 2006, Rebecca Gómez and her colleagues at the University of Arizona taught 15-month-old babies a made-up language. The babies listened to 240 "sentences" made of nonsense words, like "Pel hiftam jic" or "Pel lago jic." Like real sentences, these sentences followed rules. If "pel" was the first word, for instance, "jic" would always be the third one.

Half the babies heard the sentences just before they had a nap, and the other half heard them just after they woke up, and they then stayed awake.

Four hours later, the experimenters tested whether the babies had learned the "first and third" rule by seeing how long the babies listened to brand-new sentences. Some of the new sentences followed exactly the same rule as the sentences that the babies had heard earlier. Some also followed a "first and third" rule that used different nonsense words.

Remarkably, the babies who had stayed awake had learned the specific rules behind the sentences they heard four hours before—like the rule about "pel" and "jic." Even more remarkably, the babies who had slept after the instruction seemed to learn the more abstract principle that the first and third words were important, no matter what those words actually were.
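
The difference between the two kinds of learning can be made concrete with a short sketch. This is my own illustration of the design, not the researchers' materials; apart from "pel," "jic," "hiftam" and "lago," which appear above, the nonsense words are invented here. A "specific rule" learner memorizes the exact first-and-third word pairs it has heard, while an "abstract rule" learner only checks that the first word predicts the third, whatever the words are.

import random

random.seed(0)

MIDDLES = ["hiftam", "lago", "wadim", "kicey"]   # the middle word is free to vary

def make_sentences(pairs, n):
    """Generate n three-word sentences in which the first word dictates the third."""
    firsts = list(pairs)
    return [[f, random.choice(MIDDLES), pairs[f]]
            for f in (random.choice(firsts) for _ in range(n))]

training = make_sentences({"pel": "jic", "vot": "rud"}, 240)

# "Specific rule" learner (like the babies who stayed awake): remembers the
# exact first/third word pairs it has heard before.
heard_pairs = {(s[0], s[2]) for s in training}
def specific_rule_ok(test):
    return all((s[0], s[2]) in heard_pairs for s in test)

# "Abstract rule" learner (like the babies who slept): only checks that, within
# the test set, the first word reliably predicts the third word.
def abstract_rule_ok(test):
    mapping = {}
    return all(mapping.setdefault(s[0], s[2]) == s[2] for s in test)

same_words = make_sentences({"pel": "jic", "vot": "rud"}, 20)
new_words  = make_sentences({"bim": "tam", "dak": "zor"}, 20)  # invented vocabulary

print(specific_rule_ok(same_words), abstract_rule_ok(same_words))  # True True
print(specific_rule_ok(new_words),  abstract_rule_ok(new_words))   # False True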

Just this month, a paper by Ines Wilhelm at the University of Tübingen and colleagues showed that older children also learn in their sleep. In fact, they learn better than grown-ups. They showed 8-to-11-year-olds and adults a grid of eight lights that lit up over and over in a particular sequence. Half the participants saw the lights before bedtime, half saw them in the morning. After 10 to 12 hours, the experimenters asked the participants to describe the sequence. The children and adults who had stayed awake got about half the transitions right, and the adults who had slept were only a little better. But the children who had slept were almost perfect—they learned substantially better than either group of adults.

There was another twist. While the participants slept, they wore an electronic cap to measure brain activity. The children had much more "slow-wave sleep" than the adults—that's an especially deep, dreamless kind of sleep. And both children and adults who had more slow-wave sleep learned better.

Children may sleep so much because they have so much to learn (though toddlers may find that scant consolation for the dreaded bedtime). It's paradoxical to try to get children to learn by making them wake up early to get to school and then stay up late to finish their homework.

Colin Powell reportedly said that on the eve of the Iraq war he was sleeping like a baby—he woke up every two hours screaming. But really sleeping like a baby might make us all smarter.

HELPLESS BABIES AND SMART GROWN-UPS

Why are children so, well, so helpless? Why did I spend a recent Sunday morning putting blueberry pancake bits on my 1-year-old grandson's fork and then picking them up again off the floor? And why are toddlers most helpless when they're trying to be helpful? Augie's vigorous efforts to sweep up the pancake detritus with a much-too-large broom ("I clean!") were adorable but not exactly effective.

This isn't just a caregiver's cri de coeur—it's also an important scientific question. Human babies and young children are an evolutionary paradox. Why must big animals invest so much time and energy just keeping the little ones alive? This is especially true of our human young, helpless and needy for far longer than the young of other primates.

One idea is that our distinctive long childhood helps to develop our equally distinctive intelligence. We have both a much longer childhood and a much larger brain than other primates. Restless humans have to learn about more different physical environments than stay-at-home chimps, and with our propensity for culture, we constantly create new social environments. Childhood gives us a protected time to master new physical and social tools, from a whisk broom to a winning comment, before we have to use them to survive.

The usual museum diorama of our evolutionary origins features brave hunters pursuing a rearing mammoth. But a Pleistocene version of the scene in my kitchen, with ground cassava roots instead of pancakes, might be more accurate, if less exciting.

Of course, many scientists are justifiably skeptical about such "just-so stories" in evolutionary psychology. The idea that our useless babies are really useful learners is appealing, but what kind of evidence could support (or refute) it? There's still controversy, but two recent studies at least show how we might go about proving the idea empirically.

One of the problems with much evolutionary psychology is that it just concentrates on humans, or sometimes on humans and chimps. To really make an evolutionary argument, you need to study a much wider variety of animals. Is it just a coincidence that we humans have both needy children and big brains? Or will we find the same evolutionary pattern in animals who are very different from us? In 2010, Vera Weisbecker of Cambridge University and a colleague found a correlation between brain size and dependence across 52 different species of marsupials, from familiar ones like kangaroos and opossums to more exotic ones like quokkas.

Quokkas are about the same size as Virginia opossums, but baby quokkas nurse for three times as long, their parents invest more in each baby, and their brains are twice as big.

But do animals actually use their big brains and long childhoods to learn? In 2011, Jenny Holzhaider of the University of Auckland, New Zealand, and her colleagues looked at an even more distantly related species, New Caledonian crows. These brilliant big-brained birds make sophisticated insect-digging tools from palm leaves—and are fledglings for much longer than not-so-bright birds like chickens.

At first, the baby crows are about as good at digging as my Augie is at sweeping—they hold the leaves by the wrong end and trim them into the wrong shape. But the parents tolerate this blundering and keep the young crows full of bugs (rather than blueberries) until they eventually learn to master the leaves themselves.

Studying the development of quokkas and crows is one way to go beyond just-so stories in trying to understand how we got to be human. Our useless, needy offspring may be at least one secret of our success. The unglamorous work of caregiving may give human beings the chance to figure out just how those darned brooms work.

* may require subscription to read