#MeToo Floods Social Media with Stories of Harassment, Assault

New York — Women are posting messages on social media to show how commonplace sexual assault and harassment are, using the hashtag #MeToo to express that they, too, have been victims of such misconduct.

The messages bearing witness began appearing frequently on Twitter, Facebook and Instagram on Sunday, when the actress Alyssa Milano posted a screenshot outlining the idea and wrote: “If you’ve been sexually harassed or assaulted write ‘me too’ as a reply to this tweet.”

Tens of thousands of people replied to the message. Some just wrote “me too,” while many others described their personal experiences of harassment or assault.

The author and poet Najwa Zebian wrote: “I was blamed for it. I was told not to talk about it. I was told that it wasn’t that bad. I was told to get over it.”

Other celebrities who took part include Anna Paquin, Debra Messing, Laura Dreyfuss, Lady Gaga and Evan Rachel Wood.

Men also expressed their support. The comedian and activist Nick Jack Pappas wrote: “Men, Don’t say you have a mother, a sister, a daughter… Say you have a father, a brother, a son who can do better. We all can.”

Since The New York Times published an investigative report on Oct. 5 detailing decades of sexual harassment allegations against the Hollywood producer Harvey Weinstein, social media has provided a galvanizing platform for women to discuss their experiences.

Twitter bolstered the #MeToo trend by promoting it on Moments, its platform of curated stories.

The company pointed to its statement from last week in which it said it was “proud to empower and support the voices on our platform, especially those that speak truth to power.” It also noted that its chief executive, Jack Dorsey, had tweeted about the company’s efforts to tackle abuse on the site.

The #MeToo movement is not the first to use social media to highlight abuse against women. In 2014, a #YesAllWomen campaign drew notice on social media after a man cited his hatred of women as his reason for killing people in Southern California. The activist Laura Bates started the #EverydaySexism campaign in 2012 to document widespread sexism, harassment and assault.

The New York Times

Facebook Is Ubiquitous. But Will It Be Trusted?


This has been the year of Facebook. Its stock price has increased by around 50 percent as it continues to assert its dominance in user activity and digital ad revenue. It undercut one of the companies trying to catch up with it, Snap, by adding a similar feature, Stories, to its increasingly dominant Instagram service. And scrutiny surrounding its role in the 2016 presidential election continues to grow, as politicians seek more information about Russia’s involvement on Facebook in the weeks and months leading up to last November.

How should Facebook respond to its moment? To think about where the company might go from here, consider the history of television and how its identity evolved with the culture.

As a news medium, television came of age in the 1960s. If 2016 was the first “Facebook election,” 1960 might have been the first television election, with the aesthetic contrast between John F. Kennedy and Richard Nixon helping to put Kennedy in the White House. Asked about the influence of television on politics in the early 1960s, someone might have said that it made our leaders look better and strengthened our institutions. The period from Kennedy’s inauguration to his assassination might have been the high point for American institutions.

But those were early days for television as a news medium, and as institutional trust began to wane, television adapted. Video footage of the Vietnam War and protests back at home shocked Americans and helped create some of the cultural divisions in the country that persist to this day. The investigative journalism show “60 Minutes” debuted in 1968, and its formative years were during the tumultuous Nixon administration. In his book, “How We Got Here: The 70’s: The Decade That Brought You Modern Life,” David Frum noted that television was the only American institution to show a rise in confidence during this period, garnering credibility by attacking the credibility of everyone else. The cultural roots of Jon Stewart’s “Daily Show,” which mocked American leaders during the 2000s, or the “takedown journalism” of the present day’s “Last Week Tonight With John Oliver,” were formed decades ago.

Facebook is coming of age in a much different environment. Institutional trust is at a nadir. That’s presumably one reason that Facebook calls itself a technology company or a “platform,” rather than a news company. The problem with that “platform” claim comes when groups beyond the pale, like white supremacists, use the medium for hate, or when nefarious foreign governments use it in an attempt to sway American elections. A recent study of college undergraduates showed that only 29 percent believe Facebook has a positive impact on political discourse, compared to 57 percent who believe it has a negative impact.

Ultimately, if Facebook wants to be more influential and more valuable, it has to be a platform that garners the trust of its users and advertisers. It can’t be seen as being a den for hate groups and scam artists. It’s slowly making moves in this direction, trying to help users see where content comes from in order to build trust on the platform.

Strengthening institutions is impossible without a concerted effort from the public, voters, politicians and news media. Television’s ratings are in decline, and Facebook’s influence continues to grow. Its legacy does not have to be fake news and Russian influence. But those will be dominant themes unless Facebook takes control of its platform and earns the trust of the public.


Lights! Camera! Culinary School Will Teach Instagram Skills

Food photography

HYDE PARK, NY — Check out the tray liners at Martina, Danny Meyer’s new pizzeria in the East Village, which were designed as branded Instagram bait. Each has what the executive chef, Nick Anderer, calls “a frenetic doodle” of contemporary Roman slang phrases, images of wineglasses and pizza, and at the bottom left, the restaurant’s name.

And if you wonder why your Instagram shots of Martina’s pies look so good, credit the lighting system, which allows the staff to adjust bulbs individually, with “a warmer hue in the dining room than in the kitchen,” Mr. Anderer said, “so it doesn’t cast too much shade against the pizza.”

Almost two miles and a degree of insecurity to the south, the chef Gerardo Gonzalez relies on Instagram to sustain his first restaurant, Lalito, which he opened 10 months ago in Chinatown. He said a lunch special sells out immediately if he posts an Instagram photo of the new dish.

Seven years after its founding, Instagram announced last week that it has 800 million monthly users, and the camera-ready restaurant dish has become a cultural commonplace. High-quality images are as essential to a chef’s success these days as knife skills.

The people who teach those knife skills know it — which is why the venerable Culinary Institute of America will introduce two new elective courses in May, one in food photography and the other in food styling, to help students develop sophisticated skills not only for the plate but also for the app.

The classes will teach students how to work with digital cameras and lighting, how to compose and edit a shot, and how to cook for the still camera, “with the same values as if you were eating it, evoking the feeling that it’s going to be luscious,” said Kersti Bowser, a food stylist and institute alumna who is working to develop the courses with Phil Mansfield, a staff photographer at the school.

They hope to replace the excess they see on social media with photos that communicate flavor. Ms. Bowser thinks people are becoming “numb to the shock value” of much of what they see. “It seems so fabulous, much of it,” she said. “I want food to keep its integrity.”

Students may start out with the same raw ingredients they use in cooking class, but the rules are different in the photo class: They may want to undercook chicken or fish to keep the skin from looking tired, and vegetables may be burned on purpose to better convey texture.

The students see proof every day of how important visuals have become. Jason Potanovich, an assistant professor and the executive chef at the institute’s showcase Bocuse Restaurant, monitors diners’ reactions from his glass-walled kitchen, as do the students who work there. Bocuse serves steak tartare on a small plate over a moat of herb tea, fresh herbs and dry ice, and the swirling cloud that surrounds the dish inspires many customers to reach for their phones before they reach for their forks.

A photogenic dish, Mr. Potanovich said, is “absolutely” likelier to stay on the menu.

But there’s another reason for the new classes: The goal of becoming a restaurant chef and owner is increasingly elusive, thanks to competition for top jobs and a stagnant restaurant market. So schools like the culinary institute hope to prepare their students for a broader range of careers. In the current job market, an expanded skill set can make the difference between being employed and still looking.

“We see a small but steady increased interest in careers specifically in beverages and wine, food education, nutrition and wellness, food media,” said Denise Bauer, dean of the institute’s three-year-old School of Liberal Arts and Food Studies. Students “want to prepare for a food career that might not focus on food service alone,” she said, but could also involve the creation of photos for a variety of businesses, from restaurants to media outlets to cookbooks.

Mr. Mansfield stressed that the photo and styling classes would not be Instagram-for-credit; real food photography requires many more skills and thoughtful judgment calls. The photographer has to decide how to position a pot of ratatouille in a shot, what bowls and utensils to use and which napkin evokes a rustic feel.

In one recent class-development session in the school’s photography studio, Mr. Mansfield tried a test shot, checked his computer monitor, adjusted the lighting equipment, tried again and still wasn’t satisfied. Ms. Bowser moved in with tweezers to rearrange some of the vegetables — which she had cooked one ingredient at a time, rather than as a stew, to get them ready for their close-up.

The institute is not the only culinary school thinking visually. At Johnson & Wales University’s main campus in Providence, RI, about 70 culinary arts students belong to a faculty-advised food photography club, and many keep digital portfolios during their four years at school.

“Our students have coined the term ‘plate-y,’ as in, ‘I’m taking a plate-y,’” said Susan Marshall, interim dean of the university’s College of Culinary Arts. “They’re proud of their work and want to share it,” which they do on the school’s multiple Instagram and Facebook accounts. Students can take an elective food-photography course through three other colleges within the university.

The Institute of Culinary Education, in Lower Manhattan, offers food photography and styling electives, said Michael Laiskonis, the school’s creative director and a former pastry chef at Le Bernardin. He estimates that “maybe only half” of the students he encounters aspire to a career in restaurant kitchens, and he anticipates more curriculum changes in the next five to 10 years to reflect that.

But for the die-hards who intend to open restaurants, mastering images is imperative — even if the definition of mastery is a moving target.

Mr. Gonzalez admits to being tired of the Instagram photo feed because, he said, “just food can get boring.” He thinks that the key to his restaurant’s survival is the Instagram Story, either photo or video, that disappears after 24 hours and encourages people to take a look more often.

He uses the feed like a bulletin board, to announce daily specials and events, but he also relies on the stories to provide a “visual cue” about Lalito’s personality by showing the scene inside the restaurant.

“The stories are not ‘here’s how to plate a dish,’ but ‘the people here are amazing,’” he said. “I understand that photography drives traffic, but I’m interested in having people feel part of something. I want to build regulars.”

(The New York Times)

Zuckerberg’s Preposterous Defense of Facebook

Mark Zuckerberg, shown in Spain last year, defended his company this week from President Trump’s assertion that “Facebook was always anti-Trump.” Credit Pau Barrena/Bloomberg

Responding to President Trump’s tweet this week that “Facebook was always anti-Trump,” Mark Zuckerberg, the chief executive of Facebook, defended the company by noting that Mr. Trump’s opponents also criticize it — as having aided Mr. Trump. If everyone is upset with you, Mr. Zuckerberg suggested, you must be doing something right.

“Both sides are upset about ideas and content they don’t like,” he wrote in a Facebook post. “That’s what running a platform for all ideas looks like.”

This doesn’t hold water at all.

Are you bothered by fake news, systematic misinformation campaigns and Facebook “dark posts” — micro-targeted ads not visible to the public — aimed at African-Americans to discourage them from voting? You must be one of those people “upset about ideas” you disagree with.

Are you troubled when agents of a foreign power pose online as American Muslims and post incendiary content that right-wing commentators can cite as evidence that all American Muslims are sympathizers of terrorist groups like ISIS? Sounds like you can’t handle a healthy debate.

Does it bother you that Russian actors bought advertisements aimed at swing states to sow political discord during the 2016 presidential campaign, and that it took eight months after the election to uncover any of this? Well, the marketplace of ideas isn’t for everyone.

Mr. Zuckerberg’s preposterous defense of Facebook’s failure in the 2016 presidential campaign is a reminder of a structural asymmetry in American politics. It’s true that mainstream news outlets employ many liberals, and that this creates some systemic distortions in coverage (effects of trade policies on lower-income workers and the plight of rural America tend to be underreported, for example). But bias in the digital sphere is structurally different from that in mass media, and a lot more complicated than what programmers believe.

In a largely automated platform like Facebook, what matters most is not the political beliefs of the employees but the structures, algorithms and incentives they set up, as well as what oversight, if any, they employ to guard against deception, misinformation and illegitimate meddling. And the unfortunate truth is that by design, business model and algorithm, Facebook has made it easy for it to be weaponized to spread misinformation and fraudulent content. Sadly, this business model is also lucrative, especially during elections. Sheryl Sandberg, Facebook’s chief operating officer, called the 2016 election “a big deal in terms of ad spend” for the company, and it was. No wonder there has been increasing scrutiny of the platform.

However, at the slightest sign that Facebook might be pressured to institute at least some sensible oversight (as has happened recently in the German and French elections, when the platform mass-deleted fake accounts), right-wing groups and politicians can swiftly bring Facebook to heel with charges of bias, because Facebook responds to such pressure as much of the traditional media do: by caving and hiding behind flimsy “there are two sides to everything” arguments.

This right-wing strategy has been used to pressure Facebook since before the presidential election. It was revealed in April 2016, for example, that Facebook was employing a small team of contractors to vet its “trending topics,” providing quality control such as weeding out blatant fake news. A single source from that team claimed it had censored right-wing content, and a conservative uproar ensued, led by organizations like Breitbart. Mr. Zuckerberg promptly convened an apologetic meeting with right-wing media personalities and other prominent conservatives to assure them the site was not biased against them.

Facebook got rid of those contractors, who were already too few for meaningful quality control. So what did it do to stem the obvious rise in the scale and scope of misinformation, fake news and even foreign state meddling on the site in the months leading up to the election? Clearly not enough — for fear, no doubt, that it would again be accused of bias.

Make no mistake: The flood of misinformation and fake news that went viral on the site was visible even to casual observers. A good chunk of such content featured outrageous claims about Hillary Clinton — that she had murdered F.B.I. agents, for example — as well as unfounded assertions that millions of undocumented immigrants were illegally voting.

Even the conservative pundit and wild-eyed conspiracy theorist Glenn Beck, of all people, has expressed befuddlement at the charge that Facebook censored conservative content. He has correctly pointed out that Facebook had been a boon for right-wing groups, especially of the alt-right and Breitbart variety. There has been no change in this state of affairs since the election. Last week, the best-performing post on Facebook was a Breitbart article that called African-American athletes protesting police misconduct “millionaire ingrates.”

While there are plenty of left-wing conspiracy theories, outright fake news and fraudulent sites are more prevalent on the right, especially the far right. Opportunist fake news producers who were creating such content purely to make money typically gave up trying to monetize left-leaning fake news because it didn’t go viral as easily on Facebook.

After the election, Mr. Zuckerberg characterized the suggestion that such misinformation campaigns played an important role in the election as a “crazy idea.” This week, Mr. Zuckerberg reconsidered that comment, saying it was too dismissive. But his latest comments are still too dismissive, portraying those of us who are worried about misinformation campaigns and deception online as intolerant censors bothered by “ideas and content.”

A more astute observer of American politics than Mr. Zuckerberg might consider that Mr. Trump’s comments are part of an effort to depict Facebook as anti-conservative, lest outrage about the company’s role in the 2016 election prompt the site to adopt policies that would make a repeat of 2016 more difficult.

For those of us who are tolerant of a wide range of ideas and arguments, but would still like deception and misinformation to not have such an easy foothold in society, Mr. Zuckerberg’s comments do not inspire hope. Indeed, people across the political spectrum should be able to agree that not making it so easy, and so lucrative, for fake news to spread widely is better for all of us, since fake news isn’t necessarily a right-wing phenomenon. But since Facebook has no effective competition, we can look forward only to being lectured on being more tolerant of “ideas” we don’t like, and to smug talk of the false equivalency of “both sides.”

(The New York Times)

Facebook Marks the End of Social Media’s Wild West

The news that Facebook will turn over details of Russian ad buys to Congress recalls a column written by my colleague Eli Lake earlier this year. He wrote that in forcing National Security Adviser Michael Flynn to resign, President Donald Trump “caved in to his political and bureaucratic opposition.” That February column warned: “Flynn is only the appetizer. Trump is the entree.” In the case of Facebook Inc., the 3,000 advertisement buys turned over to Congress are indeed the appetizer. Regulation carrying the force of law is the inevitable entree.

It was only 16 months ago that reports surfaced that Facebook employees were removing stories of interest to conservative users from its trending news section. Facebook responded by automating the section, removing humans from the editorial process. Thus began Facebook’s uneasy journey into self-regulation.

Of course, removing humans from the editorial process and allowing unfiltered content to be distributed has its own issues, as Facebook learned during the election last year. Allegations of “fake news” influencing the 2016 presidential election were widespread after Trump defeated Hillary Clinton. The site was accused of being played by foreign entities promoting false articles. Facebook responded by pledging to take steps to combat fake news.

Increasingly, Facebook is finding itself in an impossible position as it tries to remain, in spirit at least, a content-agnostic platform that allows everyone to have a voice. Sometimes the company faces scrutiny when it allows certain content to remain, as in the case of fake news or neo-Nazi propaganda. Other times it faces scrutiny for removing content.

Recently Facebook’s algorithmic ad targeting has been faulted as well. ProPublica reported last week the disturbing finding that algorithms allowed the existence of an ad category for anti-Semitic content. The story also noted that algorithms correlated the behavior of anti-Semites with those in a “Second Amendment” category, a finding that upset gun-rights advocates who don’t want to be seen as anti-Semites.

What’s apparent in the past 16 months is a Wild West of self-regulation. Time and time again, Facebook has shown that if confronted with a challenge, the company will listen and often respond. Partisan trending topics, fake news, neo-Nazis, Russian meddling — if it generates enough outrage, it’ll get addressed eventually.

But Facebook’s power and influence seem likely to grow beyond the “self-regulation” phase. That’s why markets are willing to give the company a valuation of $500 billion when its 2017 profits will be in the neighborhood of only $15 billion. (Bloomberg data shows analysts expect Facebook’s revenue to grow to $76 billion in 2020, almost doubling projections for 2017.) The question remains how long self-regulation will be acceptable to the public and Congress.

Now Facebook has tipped its hand. Large multinational corporations don’t turn over documents to Congress out of the goodness of their hearts. Facebook’s statement about why it’s turning over information to Congress goes to great lengths to emphasize that it was the company’s own decision, and that the first priority is to protect user privacy. Don’t be fooled. Self-regulation will fail, and real regulation will begin. This is how it starts.

Bloomberg View

Experts: Saudi Arabia Wants to End Exploitation of Its Youth

Undated image of explosives seized by Saudi security forces

Riyadh — A number of terrorism experts and researchers described the decisive measures taken by Saudi security services against foreign intelligence networks as “historic,” saying they aim to stop the activity of any foreign parties targeting the security of the Kingdom of Saudi Arabia.

Speaking to Asharq Al-Awsat, the experts revealed that such cells and groups had been operating for three decades serving foreign agendas.

A top official stated that the head of state security had recently been able to monitor the espionage activity of a group of people working for foreign countries that aim to undermine the security and social stability of the kingdom.

The source confirmed that all the members, including Saudis and foreigners, had been apprehended and are under investigation.

Yusuf al-Rumaih, an expert in criminology and counterterrorism, declared that those members and many others like them had been trying to sow strife for over 30 years, from the era of cassettes and brochures to the modern era of tweets and messages.

“They are the same people. Same methodology. Only the outer appearances changed. The same content and ideology but a different form,” he said.

Rumaih stated that those people use religion to gain access to people’s minds, adding: “As long as they can’t achieve their goals through secularism, liberalism and socialism, they will try to approach people through religion to get them to accept their project.”

Rumaih went on to explain that people in the region will accept almost anything as long as it carries a religious dimension. He added that such groups focus on mobilizing teenagers and children, and that it is therefore time to cure the country of this disease.

Saoud al-Msebeeh, a former security adviser at the Interior Ministry, confirmed that King Abdulaziz long contended with such figures, who misused religion for political purposes. He explained that since the founding of the Saudi kingdom, the king had fought the Muslim Brotherhood and other movements.

Msebeeh said that Prince Mohammed bin Naif had warned that the Muslim Brotherhood is the source of troubles in the Arab and Islamic world. He added that the group had taken control of the education system and tried to brainwash young people, and that, sadly, many Saudi scholars and university professors were fooled by its methods.

“We witnessed their enthusiasm for the destruction of the Arab world and their methods of instigating hatred and strife,” he added. “They even misused social media for their own purposes.”

“They are encouraged and supported by foreign intelligence services such as the Iranian, Qatari, and Western services,” he reported.

Msebeeh revealed that the measures taken by Saudi security forces are an extension of the efforts of the Custodian of the Two Holy Mosques and the Crown Prince. He added that the Houthis had been thwarted in Yemen, which restored the dignity of the Islamic and Arab world.

He concluded that the country now stands before a historic stage, and that it is time to rectify the situation and put an end to all those who have offended the kingdom.

Germany vs. Twitter


Hamburg, Germany — Heiko Maas may be about to learn where the road paved with his good intentions will lead. A Social Democrat and the current federal justice minister, he has announced ambitious plans to rid the internet of abusive and offensive language. His plans have incited concern in the German offices of Twitter and Facebook and may ensure that he goes down in history as the politician who brought the curtain down on free speech on social media in Germany.

Mr. Maas’s plans, which center on legislation allowing legal actions against online insults, libel and sedition, take aim at several real problems, including a sharp increase in hate crimes against more than one million new migrants and refugees to Germany and the spread of “fake news” that, in his view, helped Donald Trump win the American election. And they come after his own failed attempt to get social media companies operating in Germany to agree to self-regulation. Despite promises by Facebook to crack down on harmful speech, Mr. Maas says Facebook still deletes only 39 percent of punishable content, and Twitter only 1 percent. After years of negotiations the minister, quite understandably, has run out of patience.

Recently he proposed a law obliging social media to erase “obviously illegal” content within 24 hours after a complaint. In less obvious cases, the deadline is one week. If the networks don’t comply, they face fines of up to 50 million euros, or $55 million. The law is expected to be ratified by Parliament before it goes on its summer break. But even as a proposal, its chilling effect on freedom of expression can already be felt; Twitter is now blocking accounts in Germany that have even the slightest whiff of hate speech.

One such account is called @einzelfallinfos — roughly, “individual case reports.” The account’s name mocks the mainstream narrative in Germany that crimes committed by refugees and migrants are “individual cases” — something the account’s operators clearly dispute. Instead, they see a “recurring pattern” of sexual assaults against women perpetrated by young men of mostly Arab origin. So they persistently post official police reports about, as they put it, “crimes committed by refugees, migrants, and presumed migrants.”

As of May 15, the @einzelfallinfos account is no longer accessible in Germany. When I asked Twitter why it was being blocked, and how many other accounts are being blocked in Germany, a press officer said, “We do not comment on individual accounts for privacy and security reasons.” What were these privacy and security reasons, I asked? “Longstanding company policy — nothing further to add.”

Of course, as a private company, Twitter isn’t obliged to give any reasons for blocking users from its platform. In theory, it could do so because it doesn’t like someone’s face. But for a business that thrives on giving a voice to as many people as possible, such arbitrariness and opacity may be self-defeating.

Though narrower than in the United States, freedom of speech in postwar Germany has been broadly interpreted by the courts. In 2009, the Constitutional Court, Germany’s highest legal body, ruled that disseminating radical-right-wing and Nazi views is not per se unconstitutional. On the contrary, the judges ruled that the Constitution “relies on the power of free confrontation as the most effective weapon” against “totalitarian and inhuman ideologies.”

Twitter is less sure. Faced with the threat of being held responsible for offensive and illegal content, the company instead relies on the power of the algorithm. In March, the service announced that it had updated its software to restrict accounts that engage in “abusive behavior.” We don’t know how Twitter reaches these decisions, because it doesn’t disclose the process.

One possibility is that after an account attracts a certain number of complaints, algorithms intervene to silence it. That’s bad news for Twitter, as users are likely to flock to alternative platforms like gab.ai. And it’s bad for the rest of us, because it creates yet another bubble in our already filtered public discourse.

The truth about free speech in Germany is that its limits depend heavily on context, in ways that are far too complex for any algorithm to sort out, let alone to discern “obvious illegality.” An utterance that is punishable as an insult in a normal exchange can be perfectly legitimate if done in a satirical context.

So what is Twitter supposed to do when caught between its users’ interests in a broad debate and an ambitious leftist minister with ideological guidelines that, if in doubt, rule against free speech?

Rather than pull up the drawbridge, or fall back on algorithms, Twitter should hire a corps of well-trained personnel to deal with hundreds of thousands of contested cases. There will always be public courts to turn to as a last resort. But if Twitter wants to maintain its unique position as a leading marketplace for opinions and ideas, it needs to invest in the personnel to keep it there. Only then can it offer the most orderly and open debate possible.

(The New York Times)

UK Minister Urges Silicon Valley to Do More to Fight Online Extremism

Britain's Home Secretary, Amber Rudd, arrives in Downing Street for a cabinet meeting, in central London

Britain’s interior minister has warned the Internet giants Facebook, Microsoft, Twitter and YouTube to step up efforts to counter or remove content that incites militant violence.

Home Secretary Amber Rudd issued her challenge as she arrived in Silicon Valley in California on Tuesday.

After four militant attacks in Britain that killed 36 people this year, senior ministers have repeatedly demanded that internet companies do more to suppress extremist content and allow access to encrypted communications.

In the face of resistance from the industry, Prime Minister Theresa May – a former interior minister – proposed trying to regulate cyberspace after a deadly attack on London Bridge in June.

Rudd will meet executives of social media and internet service providers in San Francisco at the Global Internet Forum to Counter Terrorism, whose partners are Facebook (FB.O), Alphabet Inc’s Google (GOOGL.O), Microsoft (MSFT.O) and Twitter (TWTR.N).

The forum was set up to coordinate the companies’ efforts on removing militant content.

“Terrorists and extremists have sought to misuse your platforms… This Forum is a crucial way to start turning the tide,” Rudd will say, according to a statement from the interior ministry.

“The responsibility for tackling this threat at every level lies with both governments and with industry.”

A source familiar with Rudd’s trip said she had scheduled a meeting with representatives of YouTube, Alphabet’s video sharing platform. She met Facebook, which owns messaging platform WhatsApp, on Monday, the company said.

The industry says it wants to help governments remove extremist or criminal material but also has to balance the demands of state security with the freedoms enshrined in democratic societies.

“Our mission is to substantially disrupt terrorists’ ability to use the Internet in furthering their causes, while also respecting human rights,” Twitter said in a statement.

End-to-End Encryption

While internet companies say they are eager to remove obviously extremist content posted on their platforms, they face a logistical challenge in identifying and then swiftly removing such material.

Rudd said three quarters of ISIS propaganda was shared within three hours of publication, underscoring the need for speed in taking down extremist posts.

“Often, by the time we react, the terrorists have already reached their audience,” she wrote in an article in the Daily Telegraph, adding that end-to-end encrypted messages were hindering security services from stopping potential plotters.

End-to-end encryption on services such as WhatsApp ensures only the sender and receiver can read a message as the key is kept on the devices. Without access to the devices, security services cannot read the messages.

Britain’s MI5 security service has said it needs access to encrypted communications to foil attacks.

In the United States, the Federal Bureau of Investigation has pushed for full access to encrypted communications and devices but Congress has so far refused.

Sheryl Sandberg, Chief Operating Officer of Facebook, told the BBC that the metadata of WhatsApp messages was not encrypted, allowing governments to collect details on who is messaging whom, when, and for how long. She said that if people moved off such messaging systems, that crucial metadata would no longer be available.

Shortly before his ouster by President Donald Trump, FBI Director James Comey said that 46 percent of more than 6,000 electronic devices seized by the FBI since October 1 last year could not be opened due to challenges posed by encryption.

German Parliament Passes Law to Fine Social Media over Hate Speech

The German parliament on Friday passed a law that allows fines of up to 50 million euros ($57 million) on social media networks that fail to remove “obviously illegal” content in time, despite concerns that the law could limit free expression.

The measure approved Friday is designed to enforce the country’s existing limits on speech, including the long-standing ban on Holocaust denial.

Germany has some of the world’s toughest laws covering defamation, public incitement to commit crimes and threats of violence, with prison sentences for Holocaust denial or inciting hatred against minorities. But few online cases are prosecuted.

The law gives Facebook, YouTube, and other sites 24 hours to delete or block hate speech or obviously criminal content and seven days to deal with less clear-cut cases, with an obligation to report back to the person who filed the complaint about how they handled the case.

Failure to comply could see a company fined up to 50 million euros, and the company’s chief representative in Germany fined up to 5 million euros.

Justice Minister Heiko Maas argued that social media networks have failed to prevent their sites from being used to spread inflammatory views and false information, adding that a measure to “end the Internet law of the jungle” would not infringe freedom of speech.

The issue has taken on more urgency as German politicians worry that proliferating fake news and racist content, particularly about migrants, could sway public opinion in the run-up to a national election on Sept. 24.

However, human rights experts and Internet companies have voiced concern that the law risks privatizing the process of censorship and curtailing free speech.

Organizations representing digital companies, consumers and journalists claim the tight time limits are unrealistic and will lead to accidental censorship as technology companies err on the side of caution and delete ambiguous posts to avoid paying penalties.

In response, the government has softened the legislation by excluding email and messenger providers and opening up the option of creating joint monitoring facilities to make decisions about what content to remove.

It also made clear that a fine would not necessarily be imposed after just one infraction, but only after a company systematically refuses to act or fails to set up a proper complaint management system.

Facebook, Free Expression and the Power of a Leak

A smartphone user shows the Facebook application on his phone in Zenica, in this photo illustration

The First Amendment protects our right to use social networks like Facebook and Twitter, the Supreme Court declared last week. That decision, which overturned a North Carolina law barring sex offenders from social networks, called social media “the modern public square” and “one of the most important places” for the exchange of views. The holding is a reminder of the enormous role such networks play in our speech, our access to information and, consequently, our democracy. But while the government cannot block people from social media, these private platforms can.

In some ways, online platforms can be thought of as the new speech governors: They shape and allow participation in our new digital and democratic culture in ways that we typically associate with governments. Even Facebook’s recently updated mission statement acknowledges this important role, with its vow to give “people the power to build community and bring the world closer together.” But social media sites are not bound by the First Amendment to protect user speech. Facebook’s mission statement says as much, with its commitment to “remove bad actors and their content quickly to keep a positive and safe environment.”

Until recently, the details of the types of posts Facebook prohibited were a mystery. That changed on May 21 when The Guardian released over 100 pages of leaked documents revealing Facebook’s internal rules. This newfound transparency could mean Facebook will be held accountable to the public when it comes to its decisions about user speech.

Facebook has often been pressured to explain or alter its approach to moderating users’ speech, in cases involving topics like breastfeeding pictures, Donald Trump’s posts about banning Muslims from entering the United States and the video of a Cleveland murder. But before this leak, nobody outside the company could say exactly how it made decisions — and it was under no legal obligation to share.

This leak provides some answers: Facebook’s content policies resemble United States law. But they also have important differences.

For example, Facebook generally allows the sharing of imagery of animal abuse, a category of speech the Supreme Court deemed protected in 2010. But diverging from First Amendment law, Facebook will remove that same imagery if a user shows sadism, defined as the “enjoyment of suffering.”

Similarly, Facebook’s manual on credible threats of violence echoes First Amendment law on incitement and true threats by focusing on the imminence of violence, the likelihood that it will actually occur, and an intent to credibly threaten a particular living victim.

But there are also crucial distinctions. Where First Amendment law protects speech about public figures more than speech about private individuals, Facebook does the opposite. If a user calls for violence, however generic, against a head of state, Facebook deems that a credible threat against a “vulnerable person.” It’s fine to say, “I hope someone kills you.” It is not fine to say, “Somebody shoot Trump.” While the government cannot arrest you for saying it, Facebook will remove the post.

These differences are to be expected. Courts protect speech about public officials because the Constitution gives them the job of protecting fundamental individual rights in the name of social values like autonomy or democratic self-governance. Facebook probably constrains speech about public officials because as a large corporate actor with meaningful assets, it and other sites can be pressured into cooperation with governments.

Unlike in the American court system, there’s no due process on these sites. Facebook users don’t have a way to easily appeal if their speech gets taken down. And unlike a government, Facebook doesn’t respond to elections or voters. Instead, it acts in response to bad press, powerful users, government requests and civil society organizations.

That’s why the transparency provided by the Guardian leak is important. If there’s any hope for individual users to influence Facebook’s speech governance, they’ll have to know how this system works — in the same way citizens understand what the Constitution protects — and leverage that knowledge.

For example, before the Guardian leak, a private Facebook group, Marines United, circulated nude photos of female Marines and other women. This prompted a group called Not in My Marine Corps to pressure Facebook to remove related pages, groups and users. Facebook announced in April that it would increase its attempts to remove nonconsensual nude pictures. But the Guardian leaks revealed that the pictures circulated by Marines United were largely not covered by Facebook’s substantive “revenge porn” policy. Advocates using information from the leaks have begun to pressure Facebook to do more to prevent the nonconsensual distribution of private photos.

Civil liberties groups and user rights groups should do just this: Take advantage of the increased transparency to pressure these sites to create policies advocates think are best for the users they represent.

Today, as social media sites are accused of spreading false news, influencing elections and allowing horrific speech, they may respond by increasing their policing of content. Clarity about their internal speech regulation is more important now than ever. The ways in which this newfound transparency is harnessed by the public could be as meaningful for online speech as any case decided in a United States court.

(The New York Times)

Margot E. Kaminski is an assistant professor at the Ohio State University Moritz College of Law. Kate Klonick is a Ph.D. candidate at Yale Law School.