If you’ve been following the recent verdict by the European Court of Justice forcing Google to delete certain links from search results at users’ request, you probably caught some coverage criticizing it as too broad and vague. True, it leaves many questions unanswered, such as what an “irrelevant” link is and how an EU citizen can make use of that new right in each of the 28 EU member countries.
But we think the verdict is also brilliant, because it finally forces both Europeans and Americans into the long overdue debate about who owns our data, or—in the verdict’s language—who “processes” and “controls” the myriad bits that make up our digital identities.
It’s a discussion that should have been going on for a while, given that legislators and regulators from Berlin to Washington are struggling with the question of how to bring the law up to Internet speed. Just three examples should drive home this point.
First, there’s the ongoing battle to update the EU’s 1995 Data Protection Directive, including the “right to be forgotten,” in order to devise one set of binding privacy guidelines for the entire European Union.
There’s also the recent White House report on Big Data and data discrimination, which finds that algorithms are anything but neutral. The report states, “The final computer-generated product or decision—used for everything from predicting behavior to denying opportunity—can mask prejudices while maintaining a patina of scientific objectivity.” That document can be seen as a follow-up to the Consumer Privacy Bill of Rights, issued by the Obama administration in early 2012 to “give users more control over how their information is handled.”
And finally, there’s a House bill in the pipeline to modernize the archaic Electronic Communications Privacy Act (ECPA) from 1986, which considers any communications stored in the cloud as “abandoned” after 180 days and therefore not protected from warrantless search and seizure. A more appropriate Email Privacy Act would plug that giant loophole.
The major concerns about the EU decision stem from fear of its long-term implications (“We are afraid of Google,” confessed Mathias Döpfner, the CEO of European publishing giant Springer) and from mostly lobbyist-driven laments over a new age of censorship. Wikipedia’s founder Jimmy Wales, for instance, described it as “wide-sweeping censorship.” Censorship, if you want to call it that, already exists, and it’s driven by commercial and government interests.
The search results and news feeds we see are determined by secret algorithms that increasingly force us to live in a “filter bubble.” Who knows why I see a link on page one that my neighbor doesn’t see or would only find on page five, if she ever bothered to click that far. Facebook can shut down groups for breastfeeding mothers or kill images of nude art, just like that. Takedown requests from copyright holders make videos or music disappear on a regular basis. The same goes for map data or satellite images that government agencies deem sensitive and have removed or pixelated.
Most consumers don’t even realize how they are being played or censored on a daily basis. How can you see, after all, what search results don’t show up, and why? And how would you go about turning off that filter? This invisible process is part of the massive wheeling and dealing of public data that goes on every day—from mobile carriers and cable companies to ad networks and data brokers that spy on our habits and behavior. Our digital exploits and exhaust fuel businesses from Google on down that make billions off of our collective cluelessness.
What’s been missing in this game is the voice of hundreds of millions of consumers and citizens who feed the machines that collect, parse and serve up that data. The ECJ verdict may have room for improvement, but it does one thing exceedingly well: It shines a bright light on the fact that the average person finally deserves a place at the casino table called Net Economy. Right now, companies and bureaucrats treat our digital bits like chips, mining our data, betting on its value and profiting from it.
Remember: Google and other operators of search engines or social media platforms are not offering their services as a public good like a library or archive would. They are anything but impartial reference librarians. They are publicly traded companies (or desperately want to become so), ones that rely on advertising grafted onto and woven into our personal information. The self-anointed gatekeepers need a counterbalancing force, and the EU verdict is a first step to help us develop one.
Former Harvard professor Shoshana Zuboff recently described the dangers of overbearing Net firms, in particular Google, as the new face of absolutism: a “ubiquitous, hidden, and unaccountable” power. Zuboff argued, “Google’s absolutist pursuit of its interests is now regarded by many as responsible for the Web’s fading prospects as an open information platform in which participants can agree on rules, rights, and choice.”
We couldn’t agree more. It’s high time legislators and courts stepped in and curtailed those powers. That’s not overreach or heavy-handed regulation, as commentators like The Guardian’s Mark Stephens claim, but the opposite: civil society asserting its rights.
Since Silicon Valley groupthink or technology fetishism seems to be much more virulent in the U.S., that task has fallen to the EU. If we are racing toward an Internet of Everything with billions of devices, sensors and services talking to each other and over our heads, we better have that debate now. Guidelines and laws need to be in place before companies create faits accomplis and start digesting — and thoroughly monetizing — our health data, Glasshole moments or domestic trivia.
That’s not to say we don’t need a vigorous debate on where the line runs between relevant and irrelevant personal information. That includes the question of whether the Google verdict changes much, since it only requires the removal of links, not the original files. It’s a touchy subject, as Dutch researcher Paulan Korenhof told us recently, since “memory” for a human doesn’t equal the distributed “memory” on the Net. “We tend to think in human dimensions about a technical problem,” Korenhof said. “That doesn’t make it easier, quite the opposite.”
Korenhof has written extensively on what she calls “forgetting in bits and pieces”—the messy mix of human meets technology to erase memories—and makes a convincing argument for considering the interests of all parties involved and the technical challenges when it comes to deleting, erasing or just hiding information.
When human memory is as scattered as it is today—residing not just in our heads, photo albums or notepads but in our databases—the notion of complete erasure may be outdated.
But it might not always be necessary to find and delete the last copy of a document or photo. It’s often good enough to make that information harder to find on the Googles of the world: it no longer shows up in casual snooping, but since it still exists, nobody who genuinely wants to get to that knowledge is shut out. In other words, researchers can still consult “the archives,” but there would be a limit on for-profit or frivolous info-scraping.
If companies tell you that won’t work because it’s too complicated, don’t believe them. They have already cracked same-day Amazon delivery and the once intractable sales tax on online purchases. That’s why people like Korenhof aren’t buying the “genie is out of the bottle” argument:
“That would be saying that we should give up on having technology adjust to what we want, and instead adjust ourselves to technology. Obviously the relationship has been one of mutually shaping each other—think about how the development of clocks changed the way we divided our days. We can try to steer it in a way that is beneficial for us. Is it very difficult to enable forgetting in an effective way? Yes. But does that mean we should give up? No.”
Privacy, in the end, comes before profits, and the EU has finally put a stop to the race to the bottom that Silicon Valley is advocating. It seems that companies will have to deal with the EU’s regulations, since there’s too much money at stake to stay out of the European Union altogether. Google is already prepared, with reports suggesting they’ll have delete request forms available in the next two weeks. According to polls, half of the German population is ready to take them up on that offer.
Steffan Heuer and Pernille Tranberg are authors of the book “Fake It: A Guide to Digital Self-Defense.” They cover technology and privacy issues in San Francisco and Copenhagen. In this series, Digital Self-Defense, Heuer and Tranberg report with updates from the digital identity wars and teach us how to defend our privacy in the great data grab going on all around us. Follow them at @FakeIt_Book.