Another Way Climate Change Might Make Hurricanes Worse

In a recent talk about his new book, “Scale,” physicist Geoffrey West described climate change as a form of entropy — disorder that’s created as the price of all the order and creative energy pent up in cities. In this view, climate change is not, as some argue, just a euphemism for global warming. It’s a broader term that reflects the unpredictable, disorderly way global warming will affect the planet’s oceans and atmosphere.

In other words, we won’t be so lucky as to see a regular, incremental increase in the earth’s average temperature. Instead, we’re seeing rapid, erratic changes in weather patterns that people have counted on for centuries.

Consider one of the more interesting hypotheses about global warming: that it will slow the wind patterns that normally keep storms moving from place to place, causing prolonged downpours as well as droughts. It’s an idea that’s been cited in the peer-reviewed literature and featured in Scientific American, but like many exciting ideas in science, it’s still not universally accepted. Some scientists are waiting for more evidence.

For people who’ve looked into the slowing of wind circulation, however, Hurricane Harvey was a case in point. Part of the reason it was so destructive was that it slowed down over Houston. The storm was caught between two high-pressure blocking systems shortly after it made landfall in Texas, so instead of rolling over the region, it got stuck for several days, dumping 50 inches of rain over an enormous area — a total of 19 trillion gallons. The longer it lingered, the more rain fell; ultimately, some parts of the state saw a year’s worth of rainfall in less than a week.

Charles Greene, an atmospheric scientist at Cornell University, believes that warming in the Arctic led to a slowing down of a high-altitude, circulating wind known as the jet stream, which he argues contributed to Harvey’s lingering destruction. If that turns out to be the case, it portends more such events to come. He suspects recent droughts in the western United States may have been exacerbated by the same phenomenon, as a more sluggish jet stream allowed masses of dry air to get locked into place.

Why would global warming affect winds and storms? As Greene explains, warming isn’t happening in a uniform way. The Arctic is warming faster than the earth’s temperate zones, and so there’s less of a difference than there used to be between Arctic and mid-latitude temperatures. “These temperature differences are what drive atmospheric winds,” he said, which include the jet stream and a more northerly circulation pattern called the polar vortex. The polar vortex normally confines frigid air to the Arctic, and when it weakens, Arctic air can swing south and create unusually cold weather at lower latitudes.

The Arctic is warming faster than the rest of the planet because there’s a positive feedback loop at work. As reflective sea ice melts, it exposes dark ocean underneath, he said. That means more of the sun’s energy gets absorbed into the oceans, driving yet more warming. In the fall, some of the ocean’s heat is released back into the atmosphere. That change in Arctic temperature alters the polar vortex, slowing and weakening it. That weakening has coincided with an increase in the number of tropical cyclones and nor’easters.

In his view, the warming Arctic is also slowing the jet stream, allowing more “blocks” of high pressure to form and lock storms such as Harvey in place. He acknowledges, however, that there isn’t enough evidence yet to link cause and effect, or to rule out natural variability.

Kevin Trenberth, a climate scientist at the National Center for Atmospheric Research in Boulder, Colorado, says Greene and his colleagues have more work to do to demonstrate the links between Arctic melting, wind patterns and extreme weather. But there are already well-established links between global warming and storms.

Trenberth’s work focuses on the oceans, which are heating up along with the atmosphere. While the surface of the ocean has been slowly warming since the mid-20th century, the 1990s brought something new: Water started to warm up 700 to 2,000 meters below the surface. The increase is small, he said, but the total energy pent up under the surface is enormous. Normally, big storms churn up cold water from the depths, and this allows their energy to peter out. Now that there’s warmer water below the surface, there’s extra heat available, he said, and that can cause a storm to intensify and last longer.

And that’s not the only way global warming can lead to more destructive storms. It’s well understood that warmer air holds more moisture, which allows storms such as Harvey to pack more precipitation. Warmer oceans also likely added fuel to this storm, and will continue to do so over the course of the century. The water in the Gulf of Mexico is 2 to 4 degrees warmer than it has historically been at this time of year, said Greene. Warmer water allows storms to intensify fast, as Harvey did in going from Category 2 to Category 4 within hours. Now, Hurricane Irma seems to be doing the same thing as it heads toward Florida.

The arguments among scientists are for the most part not about whether global warming is contributing to extreme weather, but which consequences of global warming will wreak the most havoc. In his talk, physicist Geoffrey West explained that the kind of disorder associated with global warming is the price we pay for our ordered civilization. There’s no reason to be ashamed that it’s happened — or to deny it. Better to look forward and realize it’s still possible to mitigate the damage, and to adapt.

(Bloomberg)

Big Data Shows Big Promise in Medicine

In handling some kinds of life-or-death medical judgments, computers have already surpassed the abilities of doctors. We’re looking at something like the promise of self-driving cars, according to Zak Kohane, a doctor and researcher at Harvard Medical School. On the roads, replacing drivers with computers could save thousands of lives that would otherwise be lost to human error. In medicine, replacing intuition with machine intelligence might save patients from deadly drug side effects or otherwise incurable cancers.

Consider precision medicine, which involves tailoring drugs to individual patients. And to understand its promise, look to Shirley Pepke, a physicist by training who migrated into computational biology. When she developed a deadly cancer, she responded like a scientist and fought it using big data. And she is winning. She shared her story at a recent conference organized by Kohane.

In 2013, Pepke was diagnosed with advanced ovarian cancer. She was 46, and her kids were 9 and 3. It was just two months after her annual gynecological exam. She had symptoms, which the doctors brushed off, until her bloating got so bad she insisted on an ultrasound. She was carrying six liters of fluid caused by the cancer, which had metastasized. Her doctor, she remembers, said, “I guess you weren’t making this up.”

She did what most people do in her position. She agreed to a course of chemotherapy that doctors thought would extend her life and offered a very slim chance of curing her. It was a harsh mixture pumped directly into her abdomen.

She also did something most people wouldn’t know how to do — she started looking for useful data. After all, tumors are full of data. They carry DNA with various abnormalities, some of which make them malignant or resistant to certain drugs. Armed with that information, doctors can design more effective, individualized treatments. Already, breast cancers are treated differently depending on whether they have a mutation in a gene called HER2. So far, scientists have found no such genetic divisions for ovarian cancers.

But there was some data. Years earlier, scientists had started a data bank called the Cancer Genome Atlas, which held genetic sequences from about 400 ovarian tumors. To help her extract useful information from the data, she turned to Greg ver Steeg, a professor at the University of Southern California, who was working on an automated pattern-recognition technique called correlation explanation, or CorEx. It had not been used to evaluate cancer, but she and Ver Steeg thought it might work. She also got genetic sequencing done on her tumor.

In the meantime, she found out she was not one of the lucky patients cured by chemotherapy. The cancer came back after a short remission. A doctor told her that she would only feel worse every day for the short remainder of her life.

But CorEx had turned up a clue. Her tumor had something in common with those of the luckier women who responded to the chemotherapy — an off-the-charts signal for immune-system products called cytokines. She reasoned that in those luckier patients, the immune system was helping kill the cancer, but in her case, something was blocking it.

Eventually she concluded that her one shot at survival would be to take a drug called a checkpoint inhibitor, which is geared to break down cancer cells’ defenses against the immune system.

At the time, checkpoint inhibitors were approved only for melanoma. Doctors could still prescribe such drugs for other uses, though insurance companies wouldn’t necessarily cover them. She ended up paying thousands of dollars out of pocket. At the same time, she went in for another round of chemotherapy. The checkpoint inhibitor destroyed her thyroid gland, she said, and the chemotherapy was damaging her kidneys. She stopped, not knowing whether her cancer was still there. To the surprise of her doctors, she started to get better. Her cancer became undetectable. Still healthy today, she works on ways to allow other cancer patients to benefit from big data the way she did.

Kohane, the Harvard Medical School researcher, said similar data-driven efforts might help find side effects of approved drugs. Clinical trials are often not big enough or long-running enough to pick up even deadly side effects that show up when a drug is released to millions of people. Thousands died from heart attacks associated with the painkiller Vioxx before it was taken off the market.

Last month, an analysis by a health-news site suggested a connection between the rheumatoid arthritis drug Actemra and heart attack deaths, though the drug had been sold to doctors and their patients without warning of any added risk of death. Kohane suspects there could be many other unnecessary deaths from drugs whose side effects didn’t show up in testing.

So what’s holding this technology back? Others are putting big money into big data with the aim of selling us things and influencing our votes. Why not use it to save lives?

(Bloomberg View)

Why Scientific Consensus Is Worth Taking Seriously

Yes, collective missteps happen. But if anything, history shows how hard it is to get scientists to agree in the first place.

Following the pack is not part of the scientific method. The point is to follow the evidence. And that leaves room for ambiguity in interpreting the survey results showing that 97 percent of climate scientists agree that global warming is real and that human-generated greenhouse gases are a major cause. The National Academy of Sciences, American Physical Society, American Chemical Society and other relevant scientific organizations all agree, too.

For some, this consensus proves that climate change is real and that humans must take immediate action against it. But others, citing history, say the consensus view has been wrong before. Why should we believe it now? For example, scientists once believed the earth was headed into an ice age. So why should we trust them when they say the globe is warming?

A look at the history books and some chats with historians suggest scientists of the past were not the fickle flip-floppers some make them out to be. There’s nothing contradictory about short-term global warming and a much longer-term cycle of ice ages, for example.

In his book “The Discovery of Global Warming,” the historian Spencer Weart writes that in the 19th century, there was a common folk belief that God would keep a hand on the planet’s thermostat. The growing understanding that there had been long-ago ice ages shook that up, and scientists started to recognize that we might head into another ice age in thousands of years. (Recent estimates put the next one at about 50,000 years in the future.)

People did consider the possibility of global cooling on near-term timescales, too, said Harvard historian of science Naomi Oreskes. Particulate matter in smog can dim the sun enough to cool the globe. In the mid-20th century, free of the belief in God’s hand on the thermostat, scientists debated which would win out: particulate-driven cooling or greenhouse gas-driven warming. As they learned more, scientists realized the warming would dominate. The idea that there was any scientific consensus predicting an impending ice age is a lie, Oreskes said.

Another commonly cited example of the fallibility of science is continental drift. In that case, scientists get blamed for failing to accept the theory, which was proposed way back in 1912 by Alfred Wegener, and in hindsight seems sort of obvious — just look at all those jigsaw relationships on the map. The general concept evolved, became known as plate tectonics, and finally gained widespread acceptance by the 1960s.

But between Wegener’s proposal and the 1960s, scientists held a wide range of views about what was going on with the continents, said Oreskes, who has written two books about the topic. Some thought they moved just vertically or just horizontally, or only a little. “There was debate, but no consensus,” she explained.

Both global warming and plate tectonics were proposed decades before they became part of scientific consensus. Scientists of the early 20th century understood the possibility that coal burning would lead to greenhouse gas warming, but there wasn’t consensus until experiments showed that carbon dioxide was indeed building up in the atmosphere, and that the global temperature was rising at close to the predicted rate. Likewise, Wegener didn’t know how the continents moved. Scientists eventually figured out that the continents were riding around on massive “plates” of crust, the motion driven by convection of material below them.

If anything, history shows how hard it is to get a new consensus in science. Scientists have proposed plenty of wrong ideas — from cold fusion to the connection between autism and childhood vaccines. But these are not consensus ideas. Wrong ideas that get into the heads of whole scientific communities generally don’t start with the scientists. They are part of the prevailing culture, or they represent holding places before scientists develop better theories.

Take the examples that came up a few years ago when the website The Edge posed this question to a group of scientists and other intellectuals: “The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and, for extra credit, why it was believed to be true?”

The flat earth isn’t an idea that traces back to scientists. Long before there were professional scientists, ancient philosophers realized the planet was spherical. You can get the long version of this in a good history-of-science book, such as the recent “To Explain the World” by Steven Weinberg, or a quick and dirty version on Wikipedia. To call this belief “scientific” is like calling the belief in God scientific because in pre-Darwinian times, many scientists believed in God.

The geocentric universe runs into the same problem. It’s a social belief — a religious belief — and it reflects the way things look from down here to those who haven’t been taught otherwise. Scientists were the ones who finally got it right. It undermines science to label past scientists as “wrong” for not knowing what hadn’t been discovered yet.

Physicists did wrongly believe in an invisible substance called “luminiferous ether” — an idea that dates back to Aristotle and was embraced by Isaac Newton. But that was really just an extension of the status quo belief that to have a wave, you needed a substance to make the wave out of — water, or air for sound, or ether for light. Eventually, experiments indicated the ether didn’t exist, and Einstein’s theory of special relativity showed light waves can travel through empty space. Ether wasn’t a dumb idea — it represented an intermediate step in understanding the nature of light.

One of the more interesting wrong ideas that came up in The Edge discussion was the Great Chain of Being: a philosophical idea that stacks everything — people, animals, plants, objects — in a hierarchy, from lowest to highest. Charles Darwin correctly threw the Great Chain of Being into the dumpster, and proposed instead that all life sprang from a common origin and all creatures have been evolving for the same amount of time. Consensus formed around this idea because the evidence kept piling up.

Similarly, until the 20th century there was a folk belief about the climate, which Weart expresses this way in his book: “Hardly anyone imagined that human actions, so puny among the vast natural powers, could upset the balance that governed the planet as a whole … such was the public belief and scientists are members of the public, sharing most of the assumptions of their culture.” The idea of a benevolent natural balance has emotional appeal — just as creationism did — but if history is any guide, the smart money is on the more science-based view that replaced it.

(Bloomberg)

Happy Earth Day. Enjoy It While You Last.

The people who know the most about life on Earth tend to be the most impressed by its staying power.

Harvard professor Andrew Knoll marvels that our planet has sustained life continuously for four billion years — most of its 4.5 billion years in existence. This is not just a matter of location, said Knoll, who is an earth and planetary scientist. Mars and Venus are both in what astronomers would consider a “habitable” zone, getting sunlight in a range suitable for living organisms. Now both are barren (or close to it).

Earth has special features that may or may not be present on many of the other planets detected around the galaxy. Earth’s geology helps regulate the climate through the cycling of carbon dioxide. When exposed rocks weather, carbon dioxide gets pulled out of the atmosphere, allowing the globe to cool. When those rocks get covered in ice, the weathering stops, and carbon can build up as it’s replenished by volcanoes.

We can thank Earth’s system of plate tectonics for this, said Peter Ward, a paleontologist at the University of Washington and co-author of the book “Rare Earth: Why Complex Life Is Uncommon in the Universe.” As new crust continues to be exposed in some places and old crust is buried in others, carbon can cycle in and out of the atmosphere. We’re also very lucky, said Ward, that the Earth got just the right amount of water. It’s thought that most came from impacts with comets early in the history of the solar system. If we’d gotten a bit more, and ended up like that third-rate Kevin Costner movie, he said, Earth would be a lot hotter — maybe too hot for complex life.

Complex life, including plants and animals, is more particular. It didn’t get going until the most recent 600 million years. Bacteria are another story. It’s hard to put a date on the origin of simple life because it happened so early. What we know, said Harvard’s Knoll, is that the very oldest rocks on Earth were formed 3.8 billion years ago, and they hold preserved signatures of life.

That’s fast, given the widely held view that a few million years after its formation, the infant Earth collided with another early planet, creating debris that became the moon. Some scientists have calculated that after the crash, the Earth’s surface temperature reached 3,600 degrees Fahrenheit and our planet shone like a star.

After it cooled off, there were further radical changes: periods when tropical plants grew at the poles, and periods when ice flowed down to the equator. But the extremes always eventually gave way to more moderate periods, and life was never extinguished.

All this recovery and cycling may sound reassuring, backing a longstanding popular belief in an inherent balance of nature. As historian Spencer Weart describes it in his book “The Discovery of Global Warming”: “Hardly anyone imagined that human actions, so puny among the vast natural powers, could upset the balance that governed the planet as a whole. This view of Nature — suprahuman, benevolent and inherently stable — lay deep in most human cultures.”

But in the last few decades, scientists have learned that there’s no real barrier between the physical processes of the planet and the biological ones. Earth was not born a blue planet rich with oxygen. Single-celled organisms called cyanobacteria started releasing oxygen into the atmosphere. The emergence of plants changed the climate. Animals changed the climate. Even the evolution of poop changed the physical world, said Ward, by creating a new mechanism by which carbon and other materials would get packaged up and sink to the bottom of the ocean.

That still leaves the argument that human-generated greenhouse gases — like early fish poop — represent nothing the Earth can’t handle. Knoll said he recalled a newspaper column by George Will, still available online, arguing that current climate change is nothing to worry about because the past periods of climate change were not the end of the world. But the column focused on recent, small blips in the climate, not on the bigger, longer-term upheavals.

Some periods of climate change were terrible. Take the End Permian extinction, 252 million years ago. Large volcanic eruptions, possibly combined with the ignition of coal beds, led to global warming rapid enough to kill off about 90 percent of the planet’s species. This was good for some — especially sulfur-excreting bacteria — whose flourishing is preserved in the fossil record. But it was bad for plants and animals. In another of his popular books, “Under a Green Sky,” Ward describes the End Permian seashore this way: “No fish break its surface, no birds of any kind. We are under a pale green sky and it has the smell of death and poison.”

So life went on, in an altered form, and plants and animals again flourished after a few million years. Knoll doesn’t find this particularly reassuring. “We are changing the climate at a geologically unusual rate,” he said — changes comparable to an era of volcanism a million times more powerful than anything in human history. Earth’s climate will probably recover from this human-fueled round of global warming, but “on time scales that are unimaginable to humans.” And perhaps without humans.

(Bloomberg)

Fighting Fake News With Science

People aren’t getting dumber, despite what a prolific writer of fake news told the Washington Post last fall, but something funny is going on with American media. There’s been an apparent surge in fabricated stories, while the president has accused the New York Times and other traditional journalism outlets of producing “fake news.” With facts seemingly up for grabs, scientists are starting to see evidence that both ends of the political spectrum have splintered off into alternative realities.

But it’s not just a matter of social media isolating conservatives and liberals in echo chambers. Instead, researchers who study how people share news via Facebook and Twitter say concerted efforts to misinform the public are becoming a threat. New forms of social media help deceivers reach a far larger audience than they could find using traditional outlets. So behavioral and computer scientists are searching for solutions.

Part of the problem dates back to our evolution as social animals, they say. “We have an innate tendency to copy popular behaviors,” said Filippo Menczer, a professor at the Center for Complex Networks and Systems Research at Indiana University, and one of several speakers at a recent two-day seminar on combating fake news.

That tendency can get people to notice and repeat not just fake news, but fake news from fake people — software creations called bots. Bots, which automatically post messages to social media, get their strength in numbers, making it look like thousands of people are tweeting or retweeting something. Menczer, who has a background in both behavioral and computer science, has studied the way bots can create the illusion that a person or idea is popular. He and his colleagues estimate that between 9 percent and 15 percent of active Twitter users are bots.

The phenomenon he described reminded me of experiments with animals that engage in a behavior biologists call “mate copying.” In certain bird species, for example, females prefer males who are already getting attention from other females. Such species are prime targets for manipulation with fake birds. In an experiment on a bird called a black grouse, scientists surrounded otherwise unremarkable males with decoy females, after which real females mobbed the popular-looking males like groupies. (The males were also fooled, in that they immediately tried to mate with the decoys.)

In studying how this works with Twitter users, Menczer and his colleagues created a program to distinguish bots from people. What he learned was that ideas being promoted by bots can hit the popularity jackpot if they get retweeted by a well-connected or prominent human. Such people often get a lot of bots vying for their attention for just that reason, Menczer said. Shortly after the November election, he said, Donald Trump was inundated with bots telling him that 3 million illegal aliens had voted for his opponent. Trump later tweeted this same information. A human source has been connected to the rumor, but bots could have made it look as if it had the backing of hundreds more people.

Others mapping the social-media landscape see different patterns of deception on the right and left. Yochai Benkler, co-director of the Berkman Klein Center for Internet and Society at Harvard, has seen political asymmetry using an open-source system called Media Cloud, which follows how stories circulate on social media. Mapping the flow of more than a million stories, he found that people who share left-leaning partisan news also tend to share news from the New York Times, Wall Street Journal, CNN and other sources with traditions of accountability. Those who shared items from right-leaning sites such as Breitbart were much less likely to circulate stories from such mainstream news sources.

In a piece Benkler co-authored in the Columbia Journalism Review, he said his data revealed a pattern of deception among many right-leaning sites. “Rather than ‘fake news’ in the sense of wholly fabricated falsities,” he and his co-authors wrote, “many of the most-shared stories can more accurately be understood as disinformation: the purposeful construction of true or partly true bits of information into a message that is, at its core, misleading.”

In an ironic twist of fate, Indiana’s Menczer became the subject of just such a hodgepodge of true and false statements. He’d already received some media attention in the Wall Street Journal and other publications for his work on the way ideas, or “memes,” spread through social media. None of the mainstream stories suggested he was up to anything sinister. But then, in 2014, the Washington Free Beacon published a story headlined “Feds Creating Database to Track ‘Hate Speech’ on Twitter.”

The problem was that there was no database, and nobody had tried to define either hate speech or misinformation.

(Bloomberg View)

Science’s Biggest Blunder

Race is perhaps the worst idea ever to come out of science. Scientists were responsible for officially dividing human beings into Europeans, Africans, Asians and Native Americans and promoting these groups as sub-species or separate species altogether. That happened back in the 18th century, but the division lends the feel of scientific legitimacy to the prejudice that haunts the 21st.

Racial tension proved a major point of contention in the first 2016 presidential debate, and yet just days before, scientists announced they’d used wide-ranging samples of DNA to add new detail to the consensus story that we all share a relatively recent common origin in Africa. While many human species and sub-species once roamed the planet, there’s abundant evidence that beyond a small genetic contribution from Neanderthals and a couple of other sub-species, only one branch of humanity survived to the present day.

Up for grabs was whether modern non-Africans stemmed from one or more migrations out of Africa. The newest data suggests there was a single journey — that sometime between 50,000 and 80,000 years ago, a single population of humans left Africa and went on to settle in Asia, Europe, the Americas, the South Pacific, and everywhere else. But this finding amounts to just dotting the i’s and crossing the t’s on a scientific view that long ago rendered the notion of human races obsolete.

“We never use the term ‘race,’ ” said Harvard geneticist Swapan Mallick, an author on one of the papers revealing the latest DNA-based human story. “We’re all part of the tapestry of humanity, and it’s interesting to see how we got where we are.”

That’s not to deny that people vary in skin color and other visible traits. Whether you’re dark or light, lanky or stocky depends in part on the sunlight intensity and climate in the regions where your ancestors lived. Nor is it to deny that racism exists — but in large part, it reflects a misinterpretation of those superficial characteristics.

“There is a profound misunderstanding of what race really is,” Harvard anthropology professor Daniel Lieberman said at an event the night after the presidential debate. “Race is a scientifically indefensible concept with no biological basis as applied to humans.”

Consider the fact that most of the race boxes people tick off on census forms were invented by creationists, such as Swedish biologist Carolus Linnaeus. In 1758, he declared that humans could be divided into races he described as white (European), red (Native American), black (African) and yellow (Asian). He also attributed various unflattering personality traits to all the races except for whites. In subsequent decades, scientists of European ancestry argued over whether God created the races separately or whether they diverged from a common origin after creation.

In the 19th century, scientists used race not just to classify people but to justify slavery by painting Africans as inferior, according to Joseph Graves, a geneticist at North Carolina A&T State University who spoke at Harvard this week. One of the most prominent American scientists of the mid-1800s, Samuel Morton, collected skulls from all over the world and attempted to demonstrate that those of European ancestry had the world’s biggest heads and were, so he claimed, intellectually superior.

Scientists subsequently realized that Morton was wrong — about whose heads were biggest and about the connection between head size and intelligence. There is still controversy over whether Morton cheated or made a statistical error, but his conclusion remains debunked.

Graves — who is the author of several books, including 2005’s “The Race Myth” — said a key turning point occurred when Charles Darwin published “On the Origin of Species” in 1859. From his travels around the world, Darwin realized that there was no scientific reason to divide people into four races. It made just as much sense to him, he wrote later, to divide them into anywhere between two and 63 races.

But not everyone took Darwin’s side. Another influential figure in 19th-century science was Swiss-American biologist Louis Agassiz, whom Graves describes as a “giant” — both in his accomplishments and in his sway over his contemporaries. Even after Darwin published his book, Agassiz continued to promote the notion that Africans and Europeans were different species. Agassiz proposed that the children of mixed couples would be infertile, as are the offspring of horses and donkeys. He was wrong, just as he was wrong in never accepting evolution.

Darwin’s powerful idea didn’t put an end to scientific racism — the eugenics movement of the Progressive Era, for example, tried to cloak racism in evolutionary theory — but in general, 20th-century researchers pushed racism to the scientific fringes. (Historians have shown that the scientific credibility Hitler claimed for his racist ideology was fake.) And in the 1980s, scientists used DNA to trace all humans back to an origin 200,000 years ago in Africa. This is recent in evolutionary time, given that our lineage split from that of chimpanzees perhaps 7 million years ago.

Refining the story, contemporary scientists have analyzed DNA collected from diverse populations — Aboriginal Australians, Papua New Guineans, Basques, Bedouins and Pygmies. The very nature of the project acknowledges that these groups are distinct enough that their DNA matters in deciphering the human story — but not so distinct that they represent separate races. Bones and teeth scattered through the Middle East and Asia show people left Africa in many waves, but according to this latest DNA analysis, only one of those waves made a substantial contribution to the current population of humans.

Why are people still so determined to believe that racial categories are distinct, unchanging and rooted in biology? “It’s not rational,” said Graves. He said one reason Americans are stuck in the 19th century when it comes to race is that many teachers are unprepared to teach human evolution, or refuse to do so out of fear.

Graves sometimes quizzes his students by showing them an image of a man and asking them to guess where he comes from. The image shows someone most Americans would identify as a black man, and Graves says people assume he’s from Africa or from an African American community in the U.S. But he’s from the Solomon Islands, in the South Pacific.

This exercise shows that race is real in the public consciousness, if not in biology. But the science shows it doesn’t have to be this way forever.

(Bloomberg)