KTLA

New Zealand, France Urge Facebook, Google to Keep Terrorists Off Platforms

A person holds an iPhone displaying the Facebook app logo in front of a computer screen showing the Facebook login page on August 3, 2016 in London, England. (Credit: Carl Court/Getty Images)

Tech CEOs have been invited to a summit of world leaders in Paris next month to agree on ways of preventing terrorists from using their platforms.

New Zealand Prime Minister Jacinda Ardern said Wednesday that she and French President Emmanuel Macron would host the meeting to discuss how to end extremist activity on social media in the wake of recent terror attacks.

Last month, a video of attacks that killed over 50 people at mosques in Christchurch was live-streamed on Facebook and shared on YouTube, Twitter and other outlets. The companies were heavily criticized for failing to curb the spread of that footage.

“The March 15 terrorist attacks saw social media used in an unprecedented way as a tool to promote an act of terrorism and hate,” Ardern said in a statement.

“We are asking for a show of leadership to ensure social media cannot be used again the way it was.”

Macron, who hosted Facebook CEO Mark Zuckerberg and other top execs at a “Tech for Good” summit in Paris last year, has been pushing the fight against hate speech online, his office said.

“These past years, the number of deaths resulting from terror attacks on the French territory has been considerable,” said a spokesperson for the president. “And given what has happened in New Zealand, of course there was a common initiative to combat hate speech on social media.”

At the May 15 meeting in Paris, world leaders and tech executives will be invited to sign a pledge called the “Christchurch Call,” which aims to “end the use of social media for acts of terrorism,” according to Ardern.

Twitter and Google, which owns YouTube, confirmed they would take part in the meeting but did not say whether CEOs Jack Dorsey or Sundar Pichai would be there. A spokesperson for Facebook told CNN Business it was evaluating “who among top Facebook executives will attend.”

Microsoft did not comment on whether it would send a representative.

Facebook, Twitter and YouTube were all forced to defend their practices last month after over a million copies of the video of the Christchurch shootings circulated online. The companies said they had removed or blocked posts from users who were trying to share it.

The pressure increased this week after hundreds were killed in terrorist attacks on churches and hotels in Sri Lanka. The government blocked social media platforms nationwide, citing “false news reports.”

Ardern has previously suggested social media companies could be doing more. She has pointed to Facebook’s capacity to automatically block videos from uploading as an example, saying “that tells me there are powers to take a very direct approach to instances of speech that incites violence, or that incites hate.”

Facebook says it shares “the commitment of world leaders to keep people safe,” adding that the New Zealand shooter’s original video was taken down within minutes of police contacting the company. About 1.5 million copies of the clip were also removed within 24 hours, it said.

Zuckerberg has resisted calls to start delaying live videos to limit the spread of offensive content, saying it would “fundamentally break what live-streaming is” for those who legitimately use the tool to share updates with friends.

Prime Minister Ardern again called out the company on Wednesday, saying that “it’s critical that technology platforms like Facebook are not perverted as a tool for terrorism.”

A Facebook spokesperson told CNN Business that it welcomed the opportunity to work with government and industry experts on establishing a clear framework of rules, and was “evaluating how we can best support this effort.”

Google also vowed to “continue to engage on this crucial issue,” saying that it had invested heavily in human review teams and would keep developing new standards and technologies to take down extremist content.

Twitter, which says that 95% of terrorist content on its platform “is removed proactively” using in-house technology, said it would continue to find ways to prevent new accounts from being opened by terrorists and halt the spread of propaganda after attacks.

“Our work will never be complete, as the threats we face constantly evolve,” a Twitter spokesperson said. “We share a common goal with governments all around the world, including in New Zealand, to find real, lasting solutions to building a safer internet and welcome the opportunity to work together with our peers towards a global solution.”

 
