YouTube and Facebook Are Removing Evidence of Atrocities, Jeopardizing Cases Against War Criminals

After claiming to be forces for accountability in the Arab Spring, social media networks these days routinely remove war crimes evidence, human rights groups say.

Illustration: Nicolas Ortega for The Intercept

Abdulsalam was in the middle of Friday prayer at his neighborhood mosque in al-Bab, Aleppo, when he heard a crash — a nearby bakery had just disintegrated under the force of a barrel bomb, a deadly metal container filled with shrapnel and explosives, favored by the Syrian military.

Scanning the sky, he saw the hovering chopper that had dropped the weapon. He tried to snap photos as it loomed above the rubble, but the images looked fuzzy. Abdulsalam hopped on the back of a passing ambulance and was among the first on the scene. He trained his camera on the smoldering facade of the bakery, panned to a series of blasted-apart food stalls, and then settled his lens on mangled bodies. He kept snapping photos in rapid succession, until he spotted his cousin amid the carnage. Holstering his camera, Abdulsalam joined the rescue effort and helped his relative to a nearby hospital.

It was January 2014, almost two years after the Syrian army opened fire on protesters in Abdulsalam’s hometown of al-Bab, in the north of Aleppo province, bringing the country’s raging war to a farming community that had, until that point, remained largely untouched. Since then, Abdulsalam had worked with a group of local media activists to publicize the human toll of the civil war as rebel fighters established a foothold in al-Bab and the Assad regime pounded the town from above.

Within hours of the attack that injured his cousin, Abdulsalam uploaded his photos to Facebook. He thought it was the best way to simultaneously preserve the images — he didn’t know when his camera or computer might be destroyed — and get them out to the world. “It was a particularly horrific bombing,” he told me recently. There had been a pause in the fighting that week, and families who’d spent months cowering inside had just emerged to stroll through an outdoor bazaar near the mosque.

Seven months later, Abdulsalam got an automated email from Facebook notifying him that the images had been removed. Other users had complained that his photos were too gory. By the time he got the email, Abdulsalam’s other copies of the pictures were gone; his hard drive had been burned, along with his small office, when the Islamic State stormed al-Bab and Abdulsalam fled across the border to Turkey.

There’s good reason to believe Abdulsalam’s photos could have been used to address the atrocities he had witnessed. Since the beginning of the Syrian Civil War, investigators with Human Rights Watch have been making regular trips to Aleppo to document potential war crimes, including a disturbing pattern of Syrian helicopters blowing up bakeries with barrel bombs. By the time Abdulsalam snapped his pictures, the Islamic State had begun to move into the city, and HRW could no longer collect on-the-ground evidence. Ole Solvang, an HRW researcher who visited Aleppo more than a dozen times, in part to research the bakery attacks, said of Abdulsalam’s photos, which he never saw, “If there is ever a trial, this is the stuff that could become important evidence.”

Civilians react following a reported airstrike on the Tariq al-Bab district of the northern Syrian city of Aleppo on February 1, 2014.

Photo: Mohammed Al-Khatieb/AFP/Getty Images

The disappearance of Abdulsalam’s photos is part of a pattern that’s causing a quiet panic among human rights groups and war crimes investigators. Social media companies can, and do, remove content with little regard for its evidentiary value. First-hand accounts of extrajudicial killings, ethnic cleansing, and the targeting of civilians by armies can disappear with little warning, sometimes before investigators notice. When groups do realize potential evidence has been erased, recovering it can be a Kafkaesque ordeal. Facing a variety of pressures — to safeguard user privacy, neuter extremist propaganda, curb harassment and, most recently, combat the spread of so-called fake news — social media companies have over and over again chosen to ignore, and at times disrupt, the work of human rights groups scrambling to build cases against war criminals.

“It’s something that keeps me awake at night,” says Julian Nicholls, a senior trial lawyer at the International Criminal Court, where he’s responsible for prosecuting cases against war criminals, “the idea that there’s a video or photo out there that I could use, but before we identify it or preserve it, it disappears.”

Worries over disappearing evidence are not just theoretical. This past summer, YouTube rolled out a new artificial intelligence system designed to identify violent content that may be extremist propaganda or disturbing to viewers. Almost overnight, it shut down the accounts of 900 groups and individuals documenting the civil war in Syria. That included a channel run by Bellingcat, a reputable U.K.-based organization devoted to analyzing images coming out of conflict zones including Syria, Ukraine, and Libya. YouTube also took down content from the group AirWars, which tracks the toll of U.S. airstrikes in Iraq and Syria. Countless media organizations run from Syria were also shut down, including the Idlib Media Center, one of the few groups producing videos from the last Syrian province controlled by rebels. Meanwhile, in September, Facebook began removing photos and images documenting ethnic cleansing and torture of the Rohingya ethnic minority at the hands of the Myanmar government. Like the images taken by Abdulsalam, the Rohingya photos had been flagged by other users as disturbing, and Facebook agreed.

The takedowns, and the murky processes that led to them, represent a dramatic shift from the heady days of the Arab Spring, when protesters posted images of their governments firing on them, and social media chiefs promoted their platforms as nearly limitless tools for reform. “Anyone with a mobile handset and access to the Internet will be able to play a part in promoting accountability,” Google Executive Chairman Eric Schmidt wrote in his 2013 book, “The New Digital Age.” Around the same time, Facebook CEO Mark Zuckerberg declared, in a 10-page paper about wiring the world for internet access: “I believe connectivity is a human right.”

“They could have said: ‘Don’t use your platforms for this,’” said Alexa Koenig, executive director at the Human Rights Center at UC Berkeley. “But they actually tried to get these people to use their platforms [for it] — they held themselves up as arbiters of social good, and at that point of creating dependency, I would argue they acquired a heightened responsibility.”

“They had grandiose ideas,” added Keith Hiatt, a former software engineer turned human rights activist who’s served as a sort of intermediary between the tech industry and the human rights community. He is now vice president of Human Rights Programs at the NGO Benetech and serves on the Technology Advisory Board for the ICC, a group of experts trying to bridge the gap between investigators and technology. “The big story these companies told, justifying the massive freedom that they had to operate, was that their technologies would lead to openness — and openness will lead to democracy and human freedom,” he said.

Now that their own behavior is at issue, social media companies seem oblivious to the stakes, said Mohammad Al Abdallah, executive director of the Syria Justice and Accountability Centre, an NGO backed by more than 30 governments, including the U.S., that works to preserve social media evidence of atrocities.

“They just don’t appreciate what’s going on on their platforms,” he said. “They don’t take this as seriously as they should.”

Facebook would not answer specific questions about war crimes evidence. A spokesperson, who would not agree to sit for an interview or be named, said Facebook tried to be flexible and allow violent content to live on its platform when that content had some social or documentary value, and pointed to a year-old blog post in which the company said it would be “working closely with experts, publishers, journalists, photographers, law enforcement officials and safety advocates about how to do better when it comes to the kinds of items we allow.”

YouTube defended the way it deals with war crimes evidence and its relationship with the human rights groups who collect that evidence. “We are committed to ensuring human rights activists and citizen journalists have a voice on YouTube and are proud of how our service has been used to expose what is happening across the globe,” Juniper Downs, YouTube’s director of public policy, said. “We collaborate across civil society on many issues, including working with human rights groups to better understand the needs of content creators on the ground. Their expertise helps us make smarter policies and enforcement decisions, and we value the collaboration.”

Social media evidence is increasingly used by human rights organizations, by European courts that have “universal jurisdiction” and can bring war crimes charges, and by United Nations investigators to build cases against perpetrators of abuses. Over the summer, the ICC issued an arrest warrant for a Libyan commander accused of extrajudicial killings on the battlefield, basing the warrant, in part, on videos posted to Facebook. (One of the prosecutors on that case is Nicholls, the ICC lawyer who frets about atrocity evidence disappearing on social media.) Last year in Germany, an ISIS fighter was found guilty of posing with decapitated prisoners based in part on evidence found on Facebook. This year, in Sweden, Syrian regime and rebel fighters were successfully prosecuted for war crimes using evidence from both Facebook and YouTube. In total, there are 30 ongoing war crimes investigations in Swedish and German courts connected to crimes committed in Syria and Iraq.

On the other side of the world, the government of Myanmar has barred NGOs and aid agencies from entering northern parts of the country, where human rights groups say a genocide is taking place against the Rohingya population; human rights workers there are often reliant on social media evidence to document the atrocities. At the same time, the U.N. has launched an independent investigation into the Syria conflict — known as the International, Impartial and Independent Mechanism — with a specific mandate to collect evidence of war crimes in Syria, much of which is housed on social media platforms.

Mark Zuckerberg, chief executive officer and founder of Facebook, speaks during the Oculus Connect 3 event in San Jose, Calif., on Thursday, Oct. 6, 2016.

Photo: David Paul Morris/Bloomberg/Getty Images

For some who post on social media to document ongoing atrocities, the takedowns seem, at best, a destruction of evidence — and, at worst, complicity in atrocities. “Three years of documentation, just gone, in a moment,” Obayda Abo-Al Bara, a manager at the Idlib Media Center, said. Mohammad Anwar, one of the Rohingya activists whose posts were deleted by Facebook, told The Intercept that “I did feel that Facebook was colluding with the Myanmar regime in the Rohingya genocide.”

Facebook declined to address that statement directly, but said through a spokesperson that it is now making exceptions to its community standards for that conflict, working with NGOs, and conceded some mistakes in its handling of posts from Myanmar after they were brought to light by the Daily Beast in September.

In the tribunals of the future, investigators imagine a constellation of evidence — social media content introduced alongside traditional materials, such as eyewitness testimony and official documents, to build stronger, more durable cases against war criminals. Social media will never replace flesh-and-blood witnesses or old-fashioned forensics. But such evidence is clearly growing in importance — and is uniquely concentrated on the servers of Silicon Valley corporations.

“These platforms are now essentially privately owned evidence lockers,” said Christoph Koettl, a senior analyst at Amnesty International.

“But they are not in the business of being a human rights evidence locker; that work is not included in their business model.”

Koettl, who is also the founder of Citizen Evidence Lab, a group that trains human rights researchers to use social media to gather evidence of atrocities, recently received a YouTube link from a source who said it depicted an extrajudicial killing in Nigeria. By the time he clicked the link, the material had been taken down. When he contacted the company to ask for it to be restored, he said, they told him it wasn’t possible. A spokesperson at YouTube told The Intercept that, in such a scenario, the company has to respect the wishes of the video’s original poster — even if a human rights group like Amnesty flags the media as potential war crimes evidence.

Cases like the Nigeria video place social media companies in a difficult position, trying to strike a balance between the thirst for evidence of atrocities and privacy guarantees made to users. Nonprofit rights groups see less noble priorities at play, as well. Koenig has worked for years to help forge cooperation between human rights crusaders and the major social media companies. In 2014, she helped convene a meeting between investigators at the ICC and major tech companies in San Francisco; Google sent a representative, but Facebook pulled out last minute. (Koenig was able to debrief Facebook the week after at the 2014 meeting of RightsCon, an annual digital human rights conference.) This was the first meeting of its kind, she said, and the differences between the two camps were laid bare. “When you’re talking about privately held companies, with loyalty to their shareholders, they think on quarterly timelines and about maximizing profits,” she added. “With war crimes, we’re talking about a totally different set of priorities, and a timeline of five years minimum.”

Silicon Valley’s attitude is not the only obstacle to deploying social media content in war crimes proceedings. Courts and prosecutors are still hammering out how the evidence can be used, how much weight to give it, and how to make sure defense attorneys can fairly rebut it. There’s the perpetual question of how to distinguish real from fake: Is a YouTube video of an execution authentic or staged? To address such concerns, investigators and activists are racing to standardize how they archive social media evidence, to make it searchable and easier to verify. One priority is sifting through hundreds of thousands of videos and photos to separate so-called lead evidence, content that indicates a crime has taken place, from “linkage evidence,” content that connects perpetrators to that crime.
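To make the lead/linkage distinction concrete, here is a minimal sketch, in Python, of how an archival record might encode it. The schema, field names, and values are illustrative assumptions, not any investigator group’s actual format.

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceType(Enum):
    LEAD = "lead"        # content indicating a crime took place
    LINKAGE = "linkage"  # content connecting a perpetrator to that crime

@dataclass
class EvidenceRecord:
    source_url: str   # where the content was originally posted
    sha256: str       # hash of the archived file, for integrity checks
    captured_at: str  # ISO 8601 time the copy was preserved
    evidence_type: EvidenceType
    notes: str = ""

# Hypothetical example record; the video ID and hash are placeholders.
record = EvidenceRecord(
    source_url="https://www.youtube.com/watch?v=EXAMPLE_ID",
    sha256="9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    captured_at="2017-08-14T09:30:00Z",
    evidence_type=EvidenceType.LEAD,
    notes="Aftermath footage; no perpetrators visible.",
)
```

Tagging records this way is what would let investigators later query an archive for, say, all linkage material tied to a particular unit or commander.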

Beyond the question of how to handle evidence is the challenge of obtaining it in the first place. Courts in European countries where war crimes prosecutions often take place can only submit warrants to American social media companies using cumbersome processes that operate via mutual legal assistance treaties, or MLATs, between their nations and the U.S. Through such channels, it can take years for the data to make its way into the hands of prosecutors. On top of that, the ICC is blocked from getting any social media data (or other information) from U.S. companies, thanks to the American Service-Members’ Protection Act, a law signed by former President George W. Bush in 2002 that shields U.S. soldiers from war crimes prosecutions and also prevents U.S. companies from turning over evidence to the ICC.

Of course, information shared openly on social networks is fair game. And it’s hard to overstate the potency of such evidence — or the consequences of its deletion. Take the trial of Haisam Omar Sakhanh last February. A former Syrian rebel fighter, he sought asylum in Sweden and was then investigated by Swedish authorities for allegedly withholding details of his past. During that investigation, his role in an extrajudicial killing on the Syrian battlefield in 2012 came to light, and he was charged with a violation of international law. He was subsequently convicted and sentenced to life in prison.

Social media evidence proved crucial, the chief prosecutor on the case, Henrik Attorps, told The Intercept. A video published by the New York Times in 2013 showed Sakhanh with an anti-Assad militia, known as the Suleiman Soldiers, executing bound prisoners after a battle in the northern province of Idlib.

Sakhanh claimed that those prisoners had been sentenced to death in a lengthy trial. Attorps was able to use social media to eviscerate that defense. He started Googling Sakhanh’s name and found videos posted on YouTube showing Sakhanh participating in the Idlib battle. He later subpoenaed YouTube for precise times that those videos were posted. He also subpoenaed Facebook for data from a deleted account of the Suleiman Soldiers; this included time-stamped announcements of the group’s attack on Syrian soldiers. Attorps then built a timeline showing that between the announcement on Facebook by the Suleiman Soldiers, the battle itself, and the execution, no more than 48 hours could have elapsed.
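The logic of that timeline is simple enough to illustrate with hypothetical timestamps (the real subpoenaed data is not public):

```python
from datetime import datetime, timezone

# Hypothetical, illustrative timestamps only.
facebook_announcement = datetime(2012, 5, 4, 10, 0, tzinfo=timezone.utc)  # attack announced
execution_video_shot = datetime(2012, 5, 6, 8, 30, tzinfo=timezone.utc)   # execution filmed

elapsed = execution_video_shot - facebook_announcement
print(elapsed)                                  # 1 day, 22:30:00
assert elapsed.total_seconds() <= 48 * 60 * 60  # within the 48-hour window
```

A window that narrow left no room for the lengthy trial Sakhanh described.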

If YouTube had removed videos showing Sakhanh participating in that Idlib battle, Attorps could very well have failed to convict. But the prosecutor still had mixed feelings about social media companies taking down disturbing images that could also be war crimes evidence. “As a prosecutor in this field of law, I’m worried,” he admitted. “But as a citizen, I’m a bit relieved.” Such images and video, Attorps said, can be disturbing for the general public and can serve as propaganda for extremist groups like ISIS.

Social media companies are under tremendous pressure to deny these extremist groups a safe haven for their propaganda. In September, U.K. Prime Minister Theresa May demanded that such firms come up with a way to remove extremist content within two hours of its posting. This presents a real dilemma, an official at YouTube told The Intercept: one person’s extremist propaganda is another person’s war-crime evidence.

A youth films the aftermath of a tear gas volley fired by police on protesters in Muhammad Mahmoud Street near Tahrir Square on Nov. 23, 2011 in Cairo.

Photo: Peter Macdiarmid/Getty Images

“A video of a terrorist attack may be informative news reporting if uploaded by a news outlet or citizen journalist,” Downs of YouTube said. “But that same video clip can be glorification of violence if uploaded in a different context by a different user.”

Cognizant of these tensions, human rights groups are building ways to preserve potential war crimes evidence outside of the purview of social media companies — a sort of emergent, anarchic alternative architecture for media collection. That effort is centered on the Syrian Civil War; the group Syrian Archive, for example, is building a parallel evidence locker, downloading and organizing thousands of hours of video, with a team of six and a budget of $96,000. Researchers are also coming up with new ways to amass atrocity evidence in conflicts in other regions, including sub-Saharan Africa, Eastern Europe, and Asia.
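The core of that archiving work can be sketched in a few lines of Python. This is a simplified illustration under assumed file paths, not Syrian Archive’s actual pipeline; the essential move is storing a cryptographic hash alongside each file, so investigators can later demonstrate that an archived copy is byte-for-byte identical to what was originally collected.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_video(local_copy: Path, source_url: str, archive_dir: Path) -> Path:
    """Preserve an already-downloaded video with a hash and a metadata sidecar."""
    data = local_copy.read_bytes()
    digest = hashlib.sha256(data).hexdigest()

    archive_dir.mkdir(parents=True, exist_ok=True)
    stored = archive_dir / f"{digest}{local_copy.suffix}"  # content-addressed filename
    stored.write_bytes(data)

    sidecar = {
        "source_url": source_url,
        "sha256": digest,
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }
    stored.with_suffix(".json").write_text(json.dumps(sidecar, indent=2))
    return stored

# Example with hypothetical paths:
# archive_video(Path("downloads/airstrike_footage.mp4"),
#               "https://www.youtube.com/watch?v=EXAMPLE_ID", Path("archive"))
```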

“NGOs are doing the work that companies should do,” said Dia Kayyali, a tech and advocacy program manager at Witness, a group that also maintains a network of contacts in conflict zones documenting human rights abuses on video. “They should be paying people to be in contact with these groups and have relationships with them.”

Of course, the human rights groups scouring social media for evidence have longstanding relationships with the platforms — especially YouTube. In 2012, YouTube partnered with Witness to release a tool that allows activists to blur the faces in a video so investigators can collect testimony from anonymous witnesses (a sketch of the underlying technique follows below). More recently, YouTube worked with Eliot Higgins, founder of Bellingcat, the U.K. organization stung by YouTube’s AI this summer, to develop a tool called “Montage” to help investigators crowdsource analysis of conflict videos. Facebook, human rights activists say, has been less open to such collaborations. “Facebook has been a mess forever,” Higgins told The Intercept. He points to one egregious case: In 2013, after the Syrian regime launched a chemical attack on the civilian population of Damascus, Higgins said, 80 percent of the firsthand reports of the attack posted to Facebook, including videos and images, were erased from the platform. (Facebook declined to answer a question about Higgins’s claim.)
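The face-blurring tool addresses a concrete technical problem: protecting witnesses without destroying the footage. As a toy sketch of the underlying idea only — not the actual Witness/YouTube tool, which is far more sophisticated — here is how a single frame might be processed with the open source OpenCV library:

```python
import cv2  # pip install opencv-python

# Stock face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    """Detect faces in one video frame and blur each detected region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame
```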

Even human rights groups with strong ties to social media companies often feel buffeted by the whims of the platforms. Rules about what can and can’t be shared can change without much warning. In 2014, for example, YouTube decided to change its Application Programming Interface, or API, essentially the language that outside organizations rely on to create software to extract data from, or otherwise interact with, the platform. The Syria Justice and Accountability Centre, or SJAC, a nonprofit group that has collected hundreds of thousands of videos of potential war crimes in Syria, was caught totally unaware, and its system crashed. This summer, when YouTube introduced its new AI, channels feeding SJAC again disappeared.
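To see why an API change is so disruptive, consider the kind of call an archiving tool scripts against the platform. The sketch below uses the publicly documented YouTube Data API v3; the video ID and key are placeholders, and a real collection tool would be far more elaborate. If the endpoint’s parameters or response shape change, every tool built on them fails at once.

```python
import requests

API_URL = "https://www.googleapis.com/youtube/v3/videos"  # YouTube Data API v3

def fetch_upload_metadata(video_id: str, api_key: str) -> dict:
    """Fetch the title and publish time of a video, e.g. to timestamp evidence."""
    resp = requests.get(API_URL, params={"part": "snippet", "id": video_id, "key": api_key})
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if not items:
        raise LookupError(f"Video {video_id} not found (removed, private, or deleted)")
    snippet = items[0]["snippet"]
    return {"title": snippet["title"], "published_at": snippet["publishedAt"]}
```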

“These companies don’t consult us or even give us educational guidance about how to post to avoid having things blocked,” Abdallah, the group’s executive director, told The Intercept.

Even Witness was caught off guard by the company’s AI rollout. Many of the groups it partners with in conflict zones found their content removed.

“Who designs this AI? What was their understanding of these conflicts? We don’t know,” said Kayyali. “Huge companies need to recognize that every change they make … will have an effect on human rights users,” Kayyali added. “Instead of working to fix issues after policies and tools have already been instituted, it just makes sense to reach out to stakeholders.”

Screenshot of Syrian Archive website.

At times, activists said, it seems that the social media companies are just not tracking the issue closely. For example, this past summer, when Syrian Archive spoke to YouTube about the urgency of restoring some of the videos the company’s AI had removed, YouTube officials didn’t seem aware that the International Criminal Court had just issued a landmark warrant citing social media evidence. “I don’t think they are intentionally destroying evidence; there’s just a real lack of understanding of what this stuff is,” said Jeff Deutch, a researcher at Syrian Archive and a fellow at the Center for Internet and Human Rights.

In August, when YouTube’s new AI removed thousands of videos associated with human rights and war crimes research, it caused a minor scandal. Higgins reamed the company to his 60,000 Twitter followers: “So far, YouTube’s attempts to remove ISIS and Jihadi content has proven to be a total flop, loads of false positives,” he wrote. YouTube has worked closely with human rights groups to restore videos and channels that its AI took down. A company spokesperson admitted that the rollout was executed poorly, and human rights groups should have been more in the loop.

“Inevitably, both humans and machines make mistakes. We use these errors to retrain our teams and our technology,” said Downs, the YouTube policy director. “We are also working on ways to educate those who share video meant to document or expose violence on how to add necessary context, so our reviewers can distinguish their videos from more malicious uploads.”

But more than three months after the botched AI rollout, human rights groups are still reeling:

“All our efforts have shifted to deal with [the] YouTube thing,” said Hadi al-Khatib, the co-founder of the Syrian Archive.

He used to spend his days archiving potential war crimes evidence; now he’s engaged in a Sisyphean battle with YouTube’s algorithm. “We spend all our time helping Syrian media organizations whose content is deleted — we take their accounts, we check them out, make sure they are doing good work, then we send to YouTube,” he explained. “A few days later, it’s deleted again.” This couldn’t be happening at a worse time, Khatib says. The Syrian regime and its allies are quickly retaking large swaths of Syria from ailing rebel groups and Islamic extremists. “They are demolishing all types of evidence on purpose — some of what we collect is the only thing that is left indicating that a crime took place,” he said. In mid-October, for example, YouTube removed bloody video evidence of a Russian airstrike that Khatib said targeted civilians in the Idlib countryside. “This was quite crucial,” he says. “It was a violation of international law and, until now, we can’t get it restored.”

This ad hoc process for restoring videos raises a lot of questions. Groups and individuals in Europe and the U.S. who have ties to the social media companies have a shot at getting their content back. But for others it’s not within reach: Talal Kharrat, a manager with the Turkey-based Qasioun News Agency, said his organization draws on 80 correspondents, some of them undercover, spread across Syria. Since 2014, he says, nearly 6,000 videos from his agency have been removed by YouTube. Sometimes the content is restored, sometimes not. He’s tried over and over again, he said, to get in touch with someone at YouTube using the “help” button on his personal account. “I receive no reply,” he told The Intercept. “Please separate between people like us, working in a conflict zone, putting our lives in danger, and someone who’s posting violent images from a normal place, or some extremist,” he said.

The Idlib Media Center was only able to restore some of its videos after it appealed to the Syrian Archive. And Ro Nay San Lwin, a Rohingya activist whose account was shut down and later restored by Facebook in September, says he was able to get in touch with someone at the company because a friend knew someone there. “It isn’t so easy to reach them,” he told The Intercept.

This all makes Alexa Koenig, the human rights expert at UC Berkeley, nervous: “We have huge equity concerns: What stories are we losing? Whose voices are we not hearing? Who’s in a dire situation who we don’t know about?”

Abdulsalam, the Syrian photographer, is now in Turkey and quite sympathetic to the dilemma a company like YouTube faces. He’s also thankful to the platform and credits it with helping “spread the voice” of Syrians under duress. “I also understand why they don’t want bloody content,” he said in a recent phone call. “All I’d ask is that they deal with this issue with more integrity.”

Abdulsattar Abogoda and Rajaai Bourhan contributed to this report.
