Senators Want Social Media Firms to Hunt for Terrorists

But the last thing the FBI needs, experts say, is a deluge of poorly vetted data. And the risk to free speech rights is considerable.

Photo: Pablo Martinez Monsivais/AP

An increasing number of the FBI’s terror investigations now start with a post or a tweet. But distinguishing the vast number of people who talk about terror on social media from the very few actual potential terrorists is a major challenge for law enforcement.

One thing that could make it even worse, however, would be deputizing social media firms to decide who might be a terrorist and ordering them to send names to the government.

The last thing the FBI needs, experts say, is a deluge of poorly vetted data. And the potential risk to free-speech rights is considerable. Unlike tech company staffers pressured to report content involving vaguely defined “terrorist activity,” FBI agents are trained to recognize the not-always-clear line between constitutionally protected speech and a legitimate threat.

Nevertheless, Sen. Dianne Feinstein, D-Calif., and Sen. Richard Burr, R-N.C. — eager to do something about ISIS’ social media prowess, whether or not it actually makes sense — have reintroduced a previously rejected provision that would force technology companies to report to the government any instances of “terrorist activity” that they notice online. The measure was stripped out of the 2016 intelligence authorization bill in late September; now it’s being proposed as standalone legislation.

The proposed bill is “modeled on existing law requiring companies to report child pornography,” according to a press release sent out by Feinstein’s and Burr’s offices.

Critics of the bill insist it’s vague, overly broad, unnecessary, and wouldn’t actually help find any more terrorists.

Sen. Ron Wyden, D-Ore., held up the intelligence bill until Feinstein’s provision was removed, saying he “[took] the concerns that have been raised about its breadth and vagueness seriously.”

“It would certainly be fair to say that in addition to a perverse incentive for companies not to share, there could also be an incentive for companies to overshare,” Keith Chu, Wyden’s spokesperson, wrote in an email to The Intercept. “Both would be reasonable responses to the ambiguity the bill would create, and could well undermine efforts to identify legitimate terrorist threats.”

“While the bill is troubling and overbroad to begin with, it will also inevitably be expanded,” wrote Rachel Levinson-Waldman, senior counsel for the Brennan Center’s Liberty and National Security Program, in an email to The Intercept. “A bill that requires reporting of ‘terrorist activity’ today will require reporting of ‘criminal activity’ tomorrow and ‘potentially dangerous activity’ the next. Internet providers will essentially become government-appointed monitors of all communications, watching for anything that could be of interest to law enforcement,” she wrote.

Levinson-Waldman also warned of inevitable civil suits, “for example, from families of victims of the San Bernardino shooting, who will argue that Facebook not only should have shut down the Facebook accounts of the shooters (as they did) but should have reported them to the FBI (which they already often voluntarily do).”

“Social media companies are not intelligence agencies,” said Lee Rowland, a senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project. “When we export the job to private actors who don’t have training in identifying terrorists, that line between a crime and totally protected speech gets thinner and thinner.”

That line is critical to preventing the criminalization of political dissent or ideological statements — no matter how heinous or disagreeable most people might find them.

The Supreme Court ruled in its 1969 Brandenburg decision that speech is protected unless it’s “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

None of the recent spate of social media-spawned terror charges, for instance, was based solely on speech. Though some journalists have focused on the role of social media in spurring the investigations, the indictments all included an additional action — such as purchasing plane tickets to travel overseas, or facilitating someone else’s travel, or lying to the FBI about a specific statement made, or planning an attack and possessing firearms.

Though the FBI has broad latitude to look into threats on social media — there is no requirement for a factual basis or evidence of criminal behavior to open what’s called an assessment, or the first step in an investigation — agents are trained to pay close attention to what’s political speech and what might not be.

When the line between free speech and criminal speech is fuzzy, “there is a risk of chill, of self-censorship,” Rowland said, “especially for those who may be Muslim, or may have traveled to the Middle East.”

Software and technology companies agree. “By requiring companies to report on a fuzzy concept such as ‘terrorist activity,’ it would make industry the de facto authority over what is — or is not — free speech under the First Amendment,” wrote David LeDuc, senior director of the Software & Information Industry Association, in an email to The Intercept.

“Regardless of how policymakers might try to define ‘terrorist activity,’ it’s neither practical nor desirable to have industry serving in the role reserved for national security agencies. SIIA is also very concerned that this could lead to over-reporting by companies who fear liability for not complying with the mandate.”

Some social media sites have already overreacted to protected speech, as when the American Civil Liberties Union posted a photo of a partially nude bronze statue and got blocked entirely by Facebook last September. Similarly, Apple removed an app that tracked U.S. military drone strikes overseas. “Given this track record, there’s no reason to think they’ll act cautiously and judiciously when it comes to scanning for, and reporting, potential ‘terrorist activity,’” wrote Levinson-Waldman.

Meanwhile, FBI Director James Comey and other intelligence officials have said social media companies like Twitter already do a pretty good job of reporting questionable content.

And the biggest problem could already be too many tips, rather than too few. According to the recent book Chasing Ghosts by John Mueller and Mark G. Stewart, 263 U.S. agencies devoted to counterterrorism reportedly follow up on 5,000 “tips” a day — more than 1.8 million a year. Since 2001, only 62 of those tips have involved some sort of plot that may have been directed at the U.S.

“Asking people to report — ‘see something, say something’ — creates this tidal wave of information,” said Michael German, a former undercover FBI agent who now works at the Brennan Center for Justice. “This system that they set up is designed to cast suspicion on a lot of people unnecessarily, which is then impossible to remove.”

Top photo: Sen. Dianne Feinstein, D-Calif., and Sen. Richard Burr, R-N.C., in September 2015.
