Two weeks after a whistleblower filed an updated federal complaint accusing the network of promoting terrorism, Facebook continues to face pressure over questionable content. The details of the complaint to the Securities and Exchange Commission were outlined in an Associated Press story. At issue in the complaint: the network’s failure to limit content designed to promote groups like ISIS and to interfere with elections. The complaint, filed with the support of the National Whistleblower Center, offers evidence that Facebook is auto-generating videos and pages for terrorist groups. It describes Facebook’s efforts to stamp out terror content as “weak and ineffectual.”

  • On Thursday, Facebook issued a “Community Standards Enforcement Report” that concluded “terrorist propaganda” accounted for 0.03 percent of the site’s views. The network says it finds 99.8 percent of all terrorist material before users report it.

Facebook has said that much of its antiterrorism effort relies on artificial intelligence (AI). But several Facebook executives have painted a less positive picture of the company’s content moderation efforts.

  • On Monday, The Verge quoted Facebook’s top AI scientist as saying the company is “years away from being able to fully shoulder the burden of moderation, particularly when it comes to screening live video.”
  • On Friday, in a New York Times profile, Mike Schroepfer, Facebook’s chief technology officer, admitted that AI was not going to solve the problem completely.

In two of the interviews, he started with an optimistic message that A.I. could be the solution, before becoming emotional. At one point, he said coming to work had sometimes become a struggle. Each time, he choked up when discussing the scale of the issues that Facebook was confronting and his responsibilities in changing them. “It’s never going to go to zero,” he said of the problematic posts.

The story describes a Facebook meeting where images of broccoli and marijuana were shown side by side.

The problem was that the marijuana-versus-broccoli exercise was not just a sign of progress, but also of the limits that Facebook was hitting. Mr. Schroepfer’s team has built A.I. systems that the company now uses to identify and remove pot images, nudity and terrorist-related content. But the systems are not catching all of those pictures, as there is always unexpected content, which means millions of nude, marijuana-related and terrorist-related posts continue reaching the eyes of Facebook users.

Everyone uses the internet to do things better, faster, easier and cheaper, and that includes the unfortunate cases where people are strategizing real-world harm, and that is something that we as tech companies have to face.

In her academic research into the process of radicalization, Saltman saw the growing role of the internet.

While we can’t blame the internet entirely, as this violence and terrorism predates the internet, we can see that it plays a catalyst role.

  • In the meantime, The Washington Post reports that the U.S. declined to endorse an international effort designed to curb extremism online. White House officials said free-speech concerns prevented them from joining the campaign, which emerged in response to live-streams of shootings at two New Zealand mosques.
  • And, the auto-generated Facebook page identified in the AP report and whistleblower complaint remains online. As of this morning, more than 4,400 users like the page for the Syrian terrorist group Hay’at Tahrir Al Sham.

Take Action! Urge the SEC to investigate and hold Facebook accountable!