Facebook Won’t Say If It Will Use Your Brain Activity for Advertisements

A forthcoming mental-input system from Facebook assumes that if you send a thought to the speech center of your brain, you want to share it.

Facebook CEO Mark Zuckerberg delivers the keynote address at Facebook's F8 Developer Conference at McEnery Convention Center in San Jose, Calif., on April 18, 2017. Photo: Justin Sullivan/Getty Images

Every year, Facebook gathers hundreds of developers, corporate allies, and members of the press to hear CEO Mark Zuckerberg’s vision of our shared near future. The gathering is known as “F8,” and this year’s iteration included some radical plans, one of which could’ve been pulled from a William Gibson novel: Facebook is working on a means of using your brain as an input device.

Such technology is still many years off, as is, apparently, Facebook’s willingness to publicly think through its very serious implications.

Details on how the Facebook brain-computer interface would function are scant, likely because the company hasn’t invented it yet. But judging from its press announcement, it’s fair to say the company has already put a great deal of effort into considering what capabilities such an interface would have and how it would be designed: “We have taken a distinctly different, non-invasive and deeply scientific approach to building a brain-computer speech-to-text interface,” the company says, describing the project as “a silent speech interface with the speed and flexibility of voice and the privacy of text,” with a stated goal of allowing “100 words per minute, straight from the speech center of your brain.” This process will be executed “via non-invasive sensors that can be shipped at scale” using “optical imaging” that can poll “brain activity hundreds of times per second.”
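For a sense of what those numbers imply, 100 words per minute works out to roughly one decoded word every 600 milliseconds. The sketch below, in Python, is purely a back-of-the-envelope illustration of that arithmetic; the sample rate, window size, and `decode_window` stub are my own assumptions, since Facebook has published no actual design.

```python
# A hypothetical sketch of the announced “silent speech” pipeline, built
# only from the figures in Facebook’s press release: optical sensors
# polled hundreds of times per second, decoding to text at 100 words
# per minute. Every name, rate, and stub here is an illustrative
# assumption, not a published design.

SAMPLE_RATE_HZ = 300   # “hundreds of times per second” (assumed value)
TARGET_WPM = 100       # the stated goal: 100 words per minute

# 100 words per minute is one word every 0.6 seconds, i.e. a decision
# window of 300 Hz * 0.6 s = 180 samples per decoded word.
WINDOW_SAMPLES = SAMPLE_RATE_HZ * 60 // TARGET_WPM


def decode_window(samples):
    """Stand-in for a trained classifier mapping one window of optical
    brain-activity samples to a word. Real decoders are unpublished;
    this stub emits a placeholder so the loop below actually runs."""
    return "<word>"


def decoding_loop(sensor_stream):
    """Consume raw samples, yield roughly 100 decoded words per minute."""
    window = []
    for sample in sensor_stream:
        window.append(sample)
        if len(window) == WINDOW_SAMPLES:
            yield decode_window(window)
            window = []


# Three seconds of a fake sensor stream should yield five “words”
# (3 s / 0.6 s per word), matching the announced throughput.
fake_stream = (0.0 for _ in range(SAMPLE_RATE_HZ * 3))
print(len(list(decoding_loop(fake_stream))))  # prints 5
```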

“The privacy of text” is an interesting turn of phrase for Facebook, which has, like its competitor Google, built itself into a multi-hundred-billion-dollar company more or less on the basis of text typed into a computer not being private but rather an excellent vector through which to target advertising. For its thought-to-text project, Facebook claims it’s built a team of “over 60 scientists, engineers and system integrators” from some of the most esteemed research universities around the U.S. (headed by a former DARPA director, no less). Privacy concerns drove some of the very first questions from journalists after the F8 announcement, including in this passage from The Verge:

[Facebook research director Regina] Dugan stresses that it’s not about invading your thoughts — an important disclaimer, given the public’s anxiety over privacy violations from social networks as large as Facebook. Rather, “this is about decoding the words you’ve already decided to share by sending them to the speech center of your brain,” reads the company’s official announcement. “Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them.”

Facebook was clearly prepared to face at least some questions about the privacy impact of using the brain as an input source. So, then, a fair question even for this nascent technology is whether it too will be part of the company’s mammoth advertising machine, and I asked Facebook precisely that on the day the tech was announced: Is Facebook able, as of right now, to make a commitment that user brain activity will not be used in any way for advertising purposes of any kind?

Facebook spokesperson Ha Thai replied:

We are developing an interface that allows you to communicate with the speed and flexibility of voice and the privacy of text. Specifically, only communications that you have already decided to share by sending them to the speech center of your brain. Privacy will be built into this system, as every Facebook effort.

This didn’t answer the question, so I replied:

My question is this: Is Facebook able, as of right now, to make a commitment that user brain activity will not be used in any way for advertising purposes of any kind?

To which Thai replied:

Sam, that’s the best answer I can provide as of right now.

Fair enough — but also an implicit answer that no, Facebook is, at least at the moment, not able to assure users that their brain activity will not be appropriated to sell ads. This is of course not an indication that the company will do this, only that it is not prepared to rule it out. And to be sure, this is still a hypothetical — it’s possible the company’s neural keyboard will remain somewhere between vaporware and marketing stunt, as has been the case with its solar-powered flying internet relay or Amazon’s national delivery drone fleet.

But while the tech may be far off, its privacy implications aren’t far-fetched — ignore at your own peril Facebook’s history of experimenting with the thoughts of its users, whether by deliberately manipulating their emotions or by putting their faces on advertisements without consent (“They trust me — dumb fucks,” Zuckerberg famously quipped to a friend via IM as he built Facebook in his Harvard dorm).

Facebook’s interest in mental typing was certainly noted by neuroethicists; for them, it helped underline that recent breakthroughs in brain-computer interfaces, or BCIs, really will bring what was once a science fiction scenario into the real world.

“I worry a little about whether we’ve given enough thought about what it means to no longer have control over a zone of privacy,” Dr. Eran Klein, a neurology professor at Oregon Health & Science University and neuroethicist at the Center for Sensorimotor Neural Engineering, told me. “One of the things that makes us human is we can decide what stays in our mind and what comes from our mouth.”

Any inadvertent spillover from our inner monologues to online servers could have profound consequences: In society, “if you have a prejudice but you’ve worked diligently to squash that prejudice, that says something good about your character,” Klein pointed out. But if, thanks to your handy Facebook Neuro-Keyboard, “all those prejudices are open for other people to see and be judged, it opens up a whole different way of evaluating people’s moral character and moral choices.”

The importance of thinking things but leaving them unexpressed or unarticulated is fundamental to humanity, society, and morality — and it’s a line Facebook has stomped all over in the past. In 2013, Facebook published a study detailing how it had been recording and storing not just text that had been typed and published on its website, but also text users had written but then decided against publishing and deleted for whatever reason. The study’s authors lamented that “[Facebook] loses value from the lack of content generation” in such cases of “self-censorship.” Should users trust a company that so failed to grasp the essential intimacy of an unpublished thought with a line into their brains?

Facebook’s assurance that users will be able to easily toggle between thoughts that should and should not be transmitted to Facebook’s servers doesn’t ring true to Klein, who points out that an intrinsic part of speech is that you don’t have to think about each word or phrase before you speak it: “When we’re engaged in a conversation, I don’t have this running dialogue that comes up before my mind’s eye that I say yes or no to before it comes out of my mouth.” Facebook’s announcement made it seem as if your brain has simple privacy settings like Facebook’s website does, but with speech, “if you have to make a decision about every little thing, it becomes exhausting,” and would carry what neurologists call a “high cognitive load.” Klein added that, far from being able to switch between public and private thoughts on the fly, “the only way these technologies really will become part of our second nature is if they become subconscious at some level,” at which point Facebook’s “analogy with photographs” — that “you take many photos and choose to share only some of them” — “breaks down, because then you’re not consciously choosing each thing to let through the sieve.” The whole thing comes down to a sort of paradox, according to Klein: For this technology to be useful, it would have to be subconscious, which precludes the kind of granular privacy decisions described in Facebook’s PR comments.

Howard Chizeck is a neuroethicist and professor of electrical engineering at the University of Washington, where he also co-directs the school’s Biorobotics Laboratory. Like Klein, Chizeck thinks Facebook might be overestimating (or oversimplifying) how easy it could be to switch your brain into some sort of “privacy mode” for consciousness: “I doubt that you can precisely choose words you want to ‘think’ to an external system, versus verbal thoughts that occur otherwise.” Chizeck added that such activity “may look sufficiently different in different people’s brains, and under different conditions” (e.g., if you’re drunk or exhausted) so as to make Facebook’s project difficult to ever pull off. Even if it does prove possible to somehow cherry-pick thoughts intended for speech, Chizeck added, there’s a risk of other thoughts bleeding through (and onto Facebook’s servers): “Even if it is possible to see words that are desired to be sent, other brain signals might also be monitored … which is a privacy concern.”

As for the advertising potential (and other spooky what-ifs), Klein doesn’t think it’s too soon to start asking Facebook for serious answers about its serious research. “I would favor assurances that they need to be transparent about what they’re actually recording and how it might be used for advertising,” even in these early days. The necessity of making brain-to-text input streamlined and subconscious makes the advertising implications even dicier: “If it’s subconscious, you don’t have conscious control over what information companies get about you … so you could be targeted for ads for things you don’t even realize that you like.”

Both Klein and Chizeck said that Facebook, rather than deferring on the most obvious privacy questions, should set out its principles on brain research from the get-go. “I think that they should design their system, from the beginning, with privacy a consideration,” said Chizeck. “Ultimately I think that there is a need for standards (developed by an industry/professional society/government consortium), with mechanisms for self-enforcement by industry, and oversight by government or third parties.” Klein also thinks it’s important for private sector entities like Facebook conducting what could become pioneering scientific work to establish ground rules in advance, to “lay out ahead of time what their values are and what the vision is.” Klein concedes that Facebook “can only predict so much, but I think that if you just let the technology drive everything, then I think ethics is always the dog trying to catch the car.”

