Twitter Says There’s No “Magic Algorithm” to Find Terrorists

Twitter announced on Friday that it has shut down over 125,000 user accounts in less than a year for promoting violent threats or terrorist acts — but it can't automate the process.

LONDON, ENGLAND - NOVEMBER 07: In this photo illustration, the Twitter logo and hashtag '#Ring!' are displayed on a mobile device as the company announced its initial public offering and debut on the New York Stock Exchange on November 7, 2013 in London, England. Twitter went public on the NYSE opening at USD 26 per share, valuing the company at an estimated USD 18 billion. Photo: Bethany Clarke/Getty Images

Twitter announced on Friday that it has shut down over 125,000 user accounts in less than a year for promoting violent threats or terrorist acts, mostly related to ISIS.

At the same time, the company made it clear that there is no automated way of distinguishing between protected speech and what it considers violations of its rules.

“As many experts and other companies have noted, there is no ‘magic algorithm’ for identifying terrorist content on the internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance,” the company said.

“As an open platform for expression, we have always sought to strike a balance between the enforcement of our own Twitter Rules covering prohibited behaviors, the legitimate needs of law enforcement, and the ability of users to share their views freely — including views that some people may disagree with or find offensive,” the company said.

Just last month, top national security officials parachuted into Silicon Valley to meet with technology executives and ask for technology “that could make it harder for terrorists to use the internet … or easier for us to find them when they do.”

Scientists tend to agree that this is impossible, given the rarity of terrorist attacks and the unique, unpredictable circumstances surrounding them — though that hasn’t stopped companies like CIA-funded Palantir from trying. These efforts have been criticized for generating too many false positives and for casting suspicion on far more innocent people than actual terrorists lurking in their midst.
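To see why rarity is the core obstacle, consider a back-of-the-envelope calculation. Every number below is an assumption chosen for illustration (the platform size, the share of truly violating accounts, the detector's accuracy), not a figure from Twitter or anyone else, but the arithmetic holds for any classifier aimed at a very rare event:

```python
# Hypothetical numbers chosen only to illustrate the base-rate problem;
# none of them come from Twitter.
total_accounts = 300_000_000      # accounts screened (assumed)
bad_accounts = 10_000             # truly violating accounts (assumed)
sensitivity = 0.95                # chance a violating account is flagged (assumed)
false_positive_rate = 0.001      # chance an innocent account is flagged (assumed)

innocent_accounts = total_accounts - bad_accounts
flagged_bad = sensitivity * bad_accounts
flagged_innocent = false_positive_rate * innocent_accounts
precision = flagged_bad / (flagged_bad + flagged_innocent)

print(f"innocent accounts flagged: {flagged_innocent:,.0f}")   # ~300,000
print(f"violating accounts flagged: {flagged_bad:,.0f}")       # 9,500
print(f"share of flags that are real: {precision:.1%}")        # ~3.1%
```

Even with a detector that catches 95 percent of violating accounts and wrongly flags innocent ones only once in a thousand, roughly 97 percent of its flags would be false alarms, which is exactly the criticism leveled at these efforts.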

Algorithms are better for exerting social control or monitoring political views than they are for predicting large-scale violence.

Twitter’s new policy instead stresses the importance of human monitoring, reports from users, and delicate decision-making.

Twitter’s system doesn’t sound all that different from what Facebook does. Facebook reportedly has a team dedicated to responding to user complaints, which will then look for similar content in the network of the offending accounts.
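Neither company has published how that network lookup works, so here is a minimal sketch of what report-driven review with network expansion could look like. The follow graph, the is_similar check, and every name below are hypothetical stand-ins, not anything Facebook or Twitter has described:

```python
# Hypothetical sketch: walk outward from a reported account and queue
# neighbors with similar content for HUMAN review. All names are
# illustrative assumptions, not a real platform's API.
from collections import deque

def expand_from_report(reported_account, graph, is_similar, max_hops=2):
    """Breadth-first walk over a follow graph (a dict of account -> neighbors),
    collecting accounts whose content resembles the flagged material."""
    review_queue = []
    seen = {reported_account}
    frontier = deque([(reported_account, 0)])
    while frontier:
        account, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor in graph.get(account, ()):
            if neighbor in seen:
                continue
            seen.add(neighbor)
            # Queue for a human reviewer; nothing here removes anything.
            if is_similar(neighbor):
                review_queue.append(neighbor)
            frontier.append((neighbor, hops + 1))
    return review_queue
```

Even in this toy version, the automated part only narrows the pool of accounts; the judgment call both companies describe still falls to a person.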

Silicon Valley has pushed back on efforts by high-ranking Sens. Dianne Feinstein, D-Calif., and Richard Burr, R-N.C., to essentially delegate to the companies the task of reporting signs of possible terrorist activity.
