The social media giant preaches “connection” as its ultimate goal, but we can’t forget that it has only ever been in business for itself.
We urgently need your help. DAME reports the stories that need to be told, from perspectives that aren’t heard enough. In times of crisis it is even more critical that these voices are not overlooked, but COVID-19 has impacted our ability to keep publishing. Please support our mission by joining today to help us keep reporting.
Way back last month, which these days seems like ten years ago, Facebook founder Mark Zuckerberg visited Capitol Hill for a couple of days (and by “visited,” I mean “was summoned to talk to Congress”) in the wake of the Cambridge Analytica scandal. That’s the scandal in which 87 million people learned that Facebook had given Cambridge Analytica, a company launched by conservative billionaires Robert and Rebekah Mercer and Trump aide/Svengali Steve Bannon, access to huge swaths of their personal data without their ever knowing about it.
While there weren’t really any startlingly dramatic moments at the hearing itself, all of this has been a stark reminder of how much we’ve let the semi-amoral culture of Silicon Valley tech bros play fast and loose with our privacy and our politics. And since then, we’ve seen a steady drip of news stories that illuminate just how big—and how bad—Facebook really is, and how much of the discourse it now controls.
Let’s first get out of the way the weird “this is the moment Mark Zuckerberg grew up” framing we saw from news outlets like CNN. Mark Zuckerberg is a grown man of 33 and a billionaire. He controls a media platform that has over 2 billion users worldwide and is the primary source of news for millions of Americans. His company spent $50 million to lobby Congress in 2015 alone. Pretending he’s a starry-eyed wunderkind just now coming face-to-face with the implications of the monster he created is part of the problem.
The problem isn’t just Zuckerberg. He’s created a culture at Facebook that lionizes “connection” over everything—even the safety of its users. A recently leaked internal memo from Facebook Vice-President Andrew “Boz” Bosworth shows just how unprincipled Facebook is:
“So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. […] The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”
Besides being a chilling and awful worldview, this overlooks how Facebook forces those connections with an incredible lack of commitment to privacy. Facebook’s privacy settings are notoriously labyrinthine, requiring a lengthy article from a giant tech magazine to explain how you can secure your privacy. In case you were wondering, it involves changing over a dozen settings, several of which are buried in sub-menus.
Facebook’s failure to ensure privacy isn’t a new problem. Indeed, Facebook has been under an FTC consent decree since 2011, which it entered to settle charges that it told users they could keep their Facebook info private, but then repeatedly made that information both public and sharable. However, Facebook regularly changes its business model and what it does with your data, and therefore regularly breaks its promise to the FTC and, by extension, to you.
An additional problem arises because Facebook isn’t just careless with your privacy. Even if you vigorously defended your own privacy on the platform, if your friends didn’t, your data was often out there for the taking anyway. That’s because Facebook uses a lot of APIs that share your data widely.
API stands for “Application Programming Interface.” In the simplest terms, an API lets programs or platforms talk to each other, and it’s a widespread convenience of the internet. When Microsoft Word syncs seamlessly with your Dropbox app, that’s done through APIs. The same goes for searching for apartments or homes: Behind the scenes, APIs are sharing data with one another so that you can use just one site to research your next house and learn about the property value, the walkability of the neighborhood, and more.
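To make the idea concrete, here’s a toy sketch in Python of what an API boils down to: an agreed-upon contract between a client and a service. The listing data, the field names, and the `listings_api` function are all invented for illustration; this is not any real site’s API.

```python
import json

# A toy "property listings" service. The API is the contract: a request
# shape goes in, a JSON response comes out. The caller never sees how
# the service stores its data. (Entirely invented; not a real site's API.)
def listings_api(request: dict) -> str:
    database = {
        "123-elm-st": {"price": 250_000, "walk_score": 82},
        "9-oak-ave": {"price": 410_000, "walk_score": 64},
    }
    listing = database.get(request.get("listing_id"), {})
    return json.dumps({"ok": bool(listing), "data": listing})

# A client -- say, a house-hunting site -- only needs to know the contract.
response = json.loads(listings_api({"listing_id": "123-elm-st"}))
print(response["data"]["walk_score"])  # 82
```

The point is that the client never touches the service’s internals; whatever the service chooses to expose through the contract is what gets shared.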
Unsurprisingly, Facebook has been less than scrupulous about the data it shares via its API. Take, for example, the now-shuttered Friends API. That API meant that third-party apps—programs that live outside of Facebook but have access to much of its data—had access to an individual’s data and much of their friends’ data as well. If you had a friend who was cavalier about sharing data with those sorts of apps, that friend’s permission also let the app collect your status updates, check-ins, location, and interests. And if you think it couldn’t happen to you, think again: Facebook finally confessed that most users have had their data scraped by some sort of third-party app.
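The mechanics are easy to model. Below is a schematic Python sketch of the problem the Friends API created: one user’s consent hands over data belonging to friends who never opted in. The user names, the fields, and the `friends_api_scrape` function are invented for illustration and are not Facebook’s actual schema.

```python
# Toy social graph. Names and field names are invented for illustration.
users = {
    "alice": {"interests": ["hiking"], "friends": ["bob", "carol"]},
    "bob":   {"interests": ["chess"],  "friends": ["alice"]},
    "carol": {"interests": ["baking"], "friends": ["alice"]},
}

def friends_api_scrape(consenting_user: str) -> dict:
    """What a third-party app could collect when ONE user said yes."""
    harvested = {consenting_user: users[consenting_user]["interests"]}
    for friend in users[consenting_user]["friends"]:
        # bob and carol never installed the app, yet their data ships too.
        harvested[friend] = users[friend]["interests"]
    return harvested

print(sorted(friends_api_scrape("alice")))  # ['alice', 'bob', 'carol']
```

One consenting user, three people’s data out the door: that asymmetry is the whole problem in miniature.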
And Facebook needed those third-party apps, needed you to post your results from quizzes and play Farmville. The more people got attached to those apps, the more they joined or stayed on Facebook.
It’s this sort of thing that got us into the Cambridge Analytica mess. It started when Russian-American researcher Aleksandr Kogan created one of those ubiquitous quizzes. Only 270,000 people took the quiz, but thanks to a massive loophole in Facebook’s API, the quiz scraped data from all their friends, which resulted, eventually, in 87 million people having their data harvested by a partisan firm that, arguably, may have helped Trump win the election.
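The arithmetic of that loophole is worth spelling out. Using the article’s own figures, a quick back-of-the-envelope calculation shows how the friend graph multiplied 270,000 consenting quiz-takers into 87 million harvested profiles:

```python
# Back-of-the-envelope using the figures reported above.
quiz_takers = 270_000
profiles_harvested = 87_000_000

# Each quiz-taker's friend list was scraped along with their own data,
# so the harvest scaled with the friend graph, not with consent.
avg_profiles_per_taker = profiles_harvested / quiz_takers
print(round(avg_profiles_per_taker))  # 322
```

Every person who clicked “take the quiz” effectively signed over roughly 322 profiles, on average, that weren’t theirs to give.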
Since then, of course, Facebook has worked hard to regain the public trust by letting people figure out if their data was exposed and stopping third-party apps from targeting them with ads. But it’s really a classic case of far too little, far too late, because Facebook has so many other ways it is willing to overshare and compromise your privacy and safety.
There’s a class-action lawsuit making its way through the courts in California right now over Facebook’s “tag suggestions” tool. That tool lets you tag, or identify, your friends in photos. In fact, it works so well that it can suggest to you who your friend in the photo is. Facebook then stores all that data, which is now essentially a giant facial recognition database. That database is storing who you were with and where you were with them, perhaps for all time. (In case you were wondering, you can, and should, turn the feature off so that Facebook stops recognizing your face and suggesting your name when friends upload photos.)
The idea that you could be identified by nefarious third parties isn’t some far-off future fear. For a recent study, Carnegie Mellon University researchers built a few tools using profile pictures on Facebook and other publicly available data. They were able to identify people on a dating site even though those people used pseudonyms, could figure out which students were on campus and where they were walking, and could even predict the personal interests of some of those students.
This is, of course, a huge safety issue for vulnerable people. A domestic violence victim could be tracked by her abuser. People living under repressive regimes (or Trump’s America, let’s be frank) could be tracked coming and going from meetings. Carnegie Mellon even built a smartphone app that would let this sort of tracking happen in real time.
Facebook used to have, as an unofficial motto, the saying “Move Fast and Break Things.” It was a very forgiving worldview: If Facebook rolled out something imperfect, it gave itself a pass, because speed of innovation and creation was what mattered. In theory, this was supposed to mean releasing a new tool that was a bit buggy or incomplete. In reality, it meant that Facebook did things like leave giant loopholes in its API open, and bad actors like Cambridge Analytica rushed right in to smash and grab everything they could.
Facebook broke the trust of its users and sold their privacy to the highest bidder. What we forget is that doing so is a feature, not a bug, of Silicon Valley thinking. For years, Uber tracked riders even after their trips ended. A new FTC complaint alleges, likely credibly, that YouTube collects personal data from children under 13, which is illegal under the Children’s Online Privacy Protection Act. These companies don’t believe they’re failing us when they violate our privacy, because they need to violate our privacy to thrive.
Perhaps we also had a point of failure here: We pretended Facebook was for us, rather than for itself. That’s a relatively new delusion. No one ever really felt that General Electric, even though it had the nice side effect of bringing us useful appliances and light bulbs, was around for anything except its own profit motive. No one thinks that just because Sony makes really fun game consoles, it’s committed to bettering the human condition. Even companies that were once nearly single-handedly responsible for our ability to communicate with each other, like Ma Bell, didn’t inspire some sense that they had our happiness as their primary motivation. But somehow, when tech companies gave us a sense of connectedness or offered to “disrupt” our lives in some way, we convinced ourselves that they had our best interests in mind. What we were reminded of in the last couple of months is that they never did, and they never will.