
Illustration by Max Fleishman

Facebook’s privacy chief insists Facebook is ‘a privacy-enhancing platform’

When it comes to privacy, Facebook wants you to trust yourself.

 

Aaron Sankin


Posted on Jul 30, 2016   Updated on May 26, 2021, 9:07 am CDT

Sitting onstage in a black hoodie, Facebook founder Mark Zuckerberg looked over the audience during the 2010 TechCrunch awards and declared that the age of privacy was over. 

One month prior, the social network, born half a dozen years earlier in Zuckerberg’s Harvard dorm room, had changed the default settings for all of its then-350 million users. Where users’ profiles had previously been visible by default only to their friends, that information would now be visible to anyone with an internet connection.

“People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people,” Zuckerberg told the audience at Silicon Valley’s annual celebration of techie self-congratulation. “That social norm is just something that has evolved over time.”

In a way, Zuckerberg was right. Social norms about information sharing have evolved dramatically since the dawn of social media. However, the era of people caring about their digital privacy is alive and kicking.

Over the following years, Facebook was battered with criticism about its privacy practices—including a settlement with the Federal Trade Commission over charges that Facebook “deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public,” which resulted in the company agreeing to undergo two decades of regular privacy audits.

That’s why Zuckerberg brought in Erin Egan. Previously a partner at Covington & Burling, where she co-chaired the law firm’s Global Privacy and Data Security practice group, Egan was hired to build Facebook a privacy program from scratch—and that’s precisely what she’s done.

“Facebook is, I believe, a privacy-enhancing platform. People control what they decide to share.”

Facebook is still the target of criticism; any organization that collects even a fraction of the data Facebook does on over a billion internet users is certain to take some flak—such as when the company was dinged for tracking web users who hadn’t even signed up for Facebook. Even so, Facebook’s record has improved dramatically as the company has realized that, for many users, being able to meaningfully manage their privacy is a way to set the social network apart.

Every year, the Electronic Frontier Foundation releases a report grading tech companies on how willing they are to stand up for user privacy. In 2011, when Egan joined the company, Facebook received one star out of a possible four. Last year, the company was one star away from a perfect score.

Egan argues that the core of Facebook’s privacy philosophy is giving users the ability to choose for themselves precisely what they want to share and with whom. The Daily Dot caught up with Egan during the Republican National Convention in Cleveland earlier this month to talk about the privacy implications of Facebook Live, dealing with harassment, and the long, hard slog of building trust with Facebook’s 1.7 billion active users.

This interview has been edited for length and clarity.

Daily Dot: Last week, there was a coup attempt in Turkey, and I watched it on Facebook Live. I was virtually bouncing around the country from blue dot to dot on Facebook’s interactive map of active livestreaming feeds, and I had this moment of understanding of Facebook Live’s true potential. I could go from city to city in Turkey and watch people on the streets protest this coup in real time. How does Facebook conceive of the privacy implications of that? How do you communicate what privacy means in this livestreaming era, when anyone can hold up their smartphone and broadcast what’s in front of them live on your platform?

Erin Egan: When we think of Live, we think of it as telling the story of what’s happening right now. Look at what’s happening at the Republican convention. People are going live, and they’re talking about what’s happening.

I don’t want to say we’re late to livestreaming, but if you look at the evolution of video—you think of Periscope, you think of YouTube. You think of these things, and they’ve been around for quite some time.

When we think about privacy, we have to think about people’s expectations. What do they understand? What are we telling them about our product?

On Facebook, people decide whether or not they want to share information. They can decide whether they want to make their lives public, whether they want to do something just for their friends, or just do it for a very small group. We’ve worked very hard over the years on these sharing controls and on educating people about them. The same privacy model that applies to whatever you share also applies to Live. Yes, people have to understand what that is. People have to use it and understand it and get it. We have a responsibility to tell people, and we are. But this isn’t a new phenomenon.

To a degree, there is a newness to Facebook Live. Last year, you had Meerkat and Periscope come out, but when Facebook moves, in a way, it’s bigger. This stuff was happening already, but when Facebook decided to seriously run with it, that’s what pushed livestreaming over the edge into something really mainstream.

Think about YouTube, though. It’s huge. And it’s been huge for a long time. They’ve failed, and they’ve learned.

We’re really excited about Live. People want to be sharing what’s happening. But, from a privacy perspective, we believe people understand it. Certainly we’re there to answer any and all questions people have, because we want to make sure that people do understand and they can control it.

Facebook is, I believe, a privacy-enhancing platform. People control what they decide to share. They decide whether they go live, they decide whether they don’t. They decide the audience. They decide to come off, they can decide to delete it, they can decide to get rid of it. They can control this. We have the platform for people, and that same model is going to govern.

When you have all these controls, where you set the default for people matters. Facebook Messenger recently introduced end-to-end encryption, which is, for the health of the internet, fantastic. The more communications are encrypted online, the better. But some technologists criticized Facebook for not turning the option on by default. It was something users had to opt into, meaning it was something they first had to understand. Encryption is complicated, and people have many different levels of sophistication in the area. Over and above giving people options for where to set their own privacy controls, how does Facebook decide where to start people out?

We give people very meaningful controls. Audience is where it all starts. If you think about audience on Facebook, the way we have it set up is that you pick your audience and that control governs the next time you’re sharing. But what also goes hand-in-glove with control, and these tools that we give, is education. 

When we recently launched the encryption opt-in for Messenger, we did a lot of education for people within Facebook about what this is about. This is a new thing for people. We want to educate them. We want to give them that option. We want to give them the ability to do that if they feel it’s important. Controls go hand-in-glove with education.

A few days ago, there was a big blow-up on Twitter with a torrent of racist and sexist harassment directed at Ghostbusters actress Leslie Jones. There was a lot of criticism of Twitter that this incident was an example of how the company wasn’t doing enough to ensure that people on its platform feel safe. There was a comparison made, very specifically, to Instagram. There were all these harassing rat and snake emoji being posted in the comments on Taylor Swift’s Instagram account, but they were deleted by the company. It highlighted how Facebook properties are doing a better job handling this type of targeted harassment than other platforms, like Twitter. Do you see a link between privacy and harassment? How does Facebook think about giving people the tools to deal with harassment, beyond shutting their laptops and running away?

Privacy, safety, and trust are all core to our service. We’re not going to be successful unless people trust our service and trust our community. We have community guidelines, and we take these guidelines really seriously. These guidelines spell out what we have indicated is appropriate behavior on our platform—hate speech is prohibited, bullying is prohibited. We take action, and we take these things very seriously, and we have a big team working on it.

“Facebook allows people to establish different communities, rather than some other services that are all public.”

This is something we prioritize and take very, very seriously. That’s the essence of it. Where privacy and safety come together is around trust. Privacy to me is about controlling your own information and enabling you to decide who sees what and when. That’s what you control: whether to go live or not live, whether to send a message to five people or share with a group.

On the safety side, it’s about what’s happening to you, and audience plays into that. Who can see my stuff? Who am I sharing this stuff with? Am I public where anyone can comment on my stuff? Facebook allows people to establish different communities, rather than some other services that are all public.

On Facebook, we have a real-name culture that breeds accountability. It’s just a different platform, a different way, a different community. I think there are places where anonymity is great, and I think there are places where you want to be public. But with that comes some additional things that you’re going to have to deal with.

The real-name policy is what prevents Facebook from becoming a sea of Twitter eggs, but that policy created some difficulty for people in the trans community who felt it actually put them in danger. How much flexibility needs to be built into policies like these so they don’t end up harming the people they’re intended to help?

It has to work for our community. If people are known in the community, in their lives, as a particular person, we want to honor that. We want to respect that. We wanted to create other means by which people can indicate who they’re known as in their lives. And what is that name? Yes, we have a real-name policy, but these are all people who, in their communities, are known as that, and therefore they’re going to be accountable within the community in which they live.

You have privacy in terms of the information that’s shared with the other members of Facebook and, on the other side of that, you have the information that’s shared with Facebook itself and then shared with Facebook’s advertisers. What sort of education and controls are you implementing so people are comfortable and cognizant of what they’re giving to you guys and what they give to advertisers?

We’re very clear with people about advertising. We want our ads on Facebook to be as interesting and relevant as the rest of the content that they see. On the ad side, people can control the ads they see. They can also understand why they’re seeing a particular ad. Whenever you get an ad on Facebook, you can click in the right-hand corner and say, ‘I don’t want to see these anymore.’ You can also say, ‘Why am I seeing this ad?’

We are not sharing people’s contact information with advertisers. That’s not in our business interest or in our interest in creating trust in the platform. When it comes to advertising, what we’re looking to do is provide relevant ads to people that interest them, and give them control over the ads that they see. 
