Fighting Fake News With Science

Eager reader. Photograph: Evening Standard/Getty Images


People aren’t getting dumber, despite what a prolific writer of fake news told the Washington Post last fall, but something funny is going on with American media. There’s been an apparent surge in fabricated stories, while the president has accused the New York Times and other traditional journalism outlets of producing “fake news.” With facts seemingly up for grabs, scientists are starting to see evidence that both ends of the political spectrum have splintered off into alternative realities.

But it’s not just a matter of social media isolating conservatives and liberals in echo chambers. Instead, researchers who study how people share news via Facebook and Twitter say concerted efforts to misinform the public are becoming a threat. New forms of social media help deceivers reach a far larger audience than they could find using traditional outlets. So behavioral and computer scientists are searching for solutions.

Part of the problem dates back to our evolution as social animals, they say. “We have an innate tendency to copy popular behaviors,” said Filippo Menczer, a professor at the Center for Complex Networks and Systems Research at Indiana University, and one of several speakers at a recent two-day seminar on combating fake news.

That tendency can get people to notice and repeat not just fake news, but fake news from fake people: software creations called bots. Bots, which automatically post messages to social media, get their strength from numbers, making it look as if thousands of people are tweeting or retweeting something. Menczer, who has a background in both behavioral and computer science, has studied the way bots can create the illusion that a person or idea is popular. He and his colleagues estimate that between 9 percent and 15 percent of active Twitter users are bots.
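To see how few automated accounts it takes to manufacture that illusion, here is a minimal simulation in Python, a sketch of the copying dynamic rather than Menczer's actual model. Every number in it (population size, follow counts, the adoption probability) is an illustrative assumption; only the bot share, set at about 11 percent, is drawn from the range his group estimated.

```python
import random

random.seed(42)

# Toy cascade model: each human retweets a story with probability
# proportional to how many of the accounts they follow have already
# shared it (the "copy popular behaviors" tendency described above).
N_HUMANS = 1000        # assumed population size, for illustration only
N_BOTS = 120           # ~11 percent, inside the 9-15 percent range
FOLLOWS = 30           # each human follows 30 random accounts (assumed)
ROUNDS = 10

accounts = list(range(N_HUMANS + N_BOTS))
bots = set(range(N_HUMANS, N_HUMANS + N_BOTS))
following = {u: random.sample(accounts, FOLLOWS) for u in range(N_HUMANS)}

def human_sharers(with_bots):
    shared = set(range(5))        # a few genuine human originators
    if with_bots:
        shared |= bots            # all bots push the story at once
    for _ in range(ROUNDS):
        for user in range(N_HUMANS):
            if user in shared:
                continue
            exposures = sum(1 for f in following[user] if f in shared)
            # each exposure adds a small chance of copying the behavior
            if random.random() < 0.05 * exposures:
                shared.add(user)
    return len(shared - bots)

print("human sharers without bots:", human_sharers(False))
print("human sharers with bots:   ", human_sharers(True))
```

Under these assumptions the bot-seeded run ends with many times more genuine human sharers, even though the bots themselves are excluded from the count.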

The phenomenon he described reminded me of experiments with animals that engage in a behavior biologists call “mate copying.” In certain bird species, for example, females prefer males who are already getting attention from other females. Such species are prime targets for manipulation with fake birds. In an experiment on a bird called a black grouse, scientists surrounded otherwise unremarkable males with decoy females, after which real females mobbed the popular-looking males like groupies. (The males were also fooled, in that they immediately tried to mate with the decoys.)

In studying how this works with Twitter users, Menczer and his colleagues created a program to distinguish bots from people. What he learned was that ideas being promoted by bots can hit the popularity jackpot if they get retweeted by a well-connected or prominent human. Such people often get a lot of bots vying for their attention for just that reason, Menczer said. Shortly after the November election, he said, Donald Trump was inundated with bots telling him that 3 million illegal aliens voted for his opponent. Trump later tweeted this same claim. A human source has been connected to the rumor, but the bots could have made it look as if it had the backing of hundreds more people.
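The column doesn't describe how that program works, but bot detection of this kind is commonly framed as supervised classification over account-level features. The sketch below is a generic illustration of that approach, not Menczer's actual tool: the feature set, the synthetic training data and the choice of logistic regression are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data standing in for labeled Twitter accounts. Features
# (illustrative, not any real tool's feature set): tweets per day,
# follower/friend ratio, seconds between tweets, retweet fraction.
def make_accounts(n, is_bot):
    if is_bot:  # bots: high volume, regular timing, mostly retweets
        cols = [rng.normal(120, 30, n), rng.normal(0.3, 0.1, n),
                rng.normal(60, 15, n), rng.normal(0.9, 0.05, n)]
    else:       # humans: lower, burstier, more original posts
        cols = [rng.normal(8, 4, n), rng.normal(1.2, 0.5, n),
                rng.normal(3600, 1200, n), rng.normal(0.3, 0.15, n)]
    return np.column_stack(cols)

X = np.vstack([make_accounts(500, True), make_accounts(500, False)])
y = np.array([1] * 500 + [0] * 500)  # 1 = bot, 0 = human

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```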

Others mapping the social-media landscape see different patterns of deception on the right and left. Yochai Benkler, co-director of the Berkman Klein Center for Internet and Society at Harvard, has seen political asymmetry using an open-source system called Media Cloud, which follows how stories circulate on social media. Mapping the flow of more than a million stories, he found that people who share left-leaning partisan news also tend to share news from the New York Times, Wall Street Journal, CNN and other sources with traditions of accountability. Those who shared items from right-leaning sites such as Breitbart were much less likely to circulate stories from such mainstream news sources.
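Media Cloud is a real open-source platform, but the snippet below doesn't use its API. It is only a rough illustration of the measurement behind Benkler's finding: given a share log (invented here for the example), compute how often the audience of a partisan outlet also shares mainstream sources.

```python
from collections import defaultdict

# Invented (user, outlet) share records, standing in for the kind of
# circulation data Media Cloud aggregates; not Benkler's dataset.
shares = [
    ("u1", "nytimes.com"), ("u1", "leftblog.example"),
    ("u2", "leftblog.example"), ("u2", "wsj.com"),
    ("u3", "breitbart.com"), ("u3", "rightblog.example"),
    ("u4", "breitbart.com"),
    ("u5", "leftblog.example"), ("u5", "cnn.com"),
]
MAINSTREAM = {"nytimes.com", "wsj.com", "cnn.com"}

outlets_by_user = defaultdict(set)
for user, outlet in shares:
    outlets_by_user[user].add(outlet)

def overlap_with_mainstream(partisan_outlet):
    """Fraction of an outlet's sharers who also share a mainstream source."""
    audience = [u for u, o in outlets_by_user.items() if partisan_outlet in o]
    if not audience:
        return 0.0
    both = sum(1 for u in audience if outlets_by_user[u] & MAINSTREAM)
    return both / len(audience)

for outlet in ("leftblog.example", "breitbart.com"):
    print(outlet, f"{overlap_with_mainstream(outlet):.0%}")
```

On this toy log the left-leaning outlet's audience overlaps completely with mainstream sources and the right-leaning one's not at all, mirroring in miniature the asymmetry Benkler reports at the scale of a million stories.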

In a piece Benkler co-authored in the Columbia Journalism Review, he said his data revealed a pattern of deception among many right-leaning sites. “Rather than ‘fake news’ in the sense of wholly fabricated falsities,” he and his co-authors wrote, “many of the most-shared stories can more accurately be understood as disinformation: the purposeful construction of true or partly true bits of information into a message that is, at its core, misleading.”

In an ironic twist of fate, Indiana's Menczer became the subject of just such a hodgepodge of true and false statements. He'd already received some media attention in the Wall Street Journal and other publications for his work on the way ideas, or "memes," spread through social media. None of the mainstream stories suggested he was up to anything sinister. But then, in 2014, the Washington Free Beacon published a story headlined "Feds Creating Database to Track 'Hate Speech' on Twitter."

The problem was that there was no database, and nobody had tried to define either hate speech or misinformation.

Bloomberg View