Facebook, Cambridge Analytica, and the End of Privacy

Facebook has been taking a lot of heat since news broke that Cambridge Analytica reportedly gained access to private information on about 50 million Facebook users and used it to influence voters in the 2016 election. The company’s stock price continues to fall, and as of this posting, Facebook has lost more than $50 billion in market value.

What’s more, the hashtag #DeleteFacebook is trending on Twitter, where people are declaring their intention to delete their Facebook profiles. Even Brian Acton, co-founder of WhatsApp, which Facebook bought for $19 billion in 2014, has joined the conversation.

While this is a nightmare for Facebook, the news isn’t surprising. Facebook just happens to be the first domino to fall; thousands of global enterprises will be negatively affected the same way. And as we near the May 25 GDPR deadline, all of the world’s personal data must now be considered REGULATED, and more regulation is coming.

This brings to light an important conversation about data and privacy. The old approach of simply locking data down no longer works; the future of our privacy lies in how our data is used, not merely how it is stored.

What most people don’t realize is that much of today’s data economy rests on remarkably loose conceptions of privacy. If users voluntarily hand their data to a third party, for example, their privacy expectations over that data drop significantly. The problem is that almost all data we generate on the internet is, by default, handed over to third parties. That’s why it was so easy for Cambridge Analytica and other data-mining operations to use Facebook and other websites to gather huge volumes of data that isn’t theirs, or at least shouldn’t be.
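To make that mechanism concrete, here is a minimal, hypothetical sketch in plain Python. None of these names correspond to a real Facebook API; the toy graph and the `harvest` function are invented purely to illustrate how, under the friends-readable platform policy Facebook reportedly offered apps before 2015, one user’s consent can cascade into many profiles:

```python
# Hypothetical model of a social graph and a third-party app's data grab.
# These names do not correspond to any real API; this only illustrates
# how a single consent can cascade into many collected profiles.

from typing import Dict, List

# Toy social graph: user id -> profile data plus friend ids
GRAPH: Dict[str, dict] = {
    "alice": {"profile": {"likes": ["hiking", "jazz"]}, "friends": ["bob", "carol"]},
    "bob":   {"profile": {"likes": ["chess"]},          "friends": ["alice"]},
    "carol": {"profile": {"likes": ["sci-fi"]},         "friends": ["alice", "bob"]},
}

def harvest(consenting_users: List[str]) -> Dict[str, dict]:
    """Collect every profile reachable through each consenting user.

    Under a friends-readable policy, one user's consent also exposes
    the profiles of everyone on that user's friend list.
    """
    collected: Dict[str, dict] = {}
    for user in consenting_users:
        collected[user] = GRAPH[user]["profile"]          # this user consented
        for friend in GRAPH[user]["friends"]:
            collected[friend] = GRAPH[friend]["profile"]  # these friends did not
    return collected

if __name__ == "__main__":
    scraped = harvest(["alice"])  # one consent...
    print(f"{len(scraped)} profiles collected from 1 consenting user: {sorted(scraped)}")
```

This multiplier effect is reportedly how a few hundred thousand quiz takers translated into data on about 50 million people.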

This news points out a HUGE flaw in the way we think about privacy. If I share my data with you, for example, and you pass it along to someone else, I have enabled you to make that decision without me, even when “passing it along” means selling it. There is little precedent for protecting information like this: data that is meaningless on a small scale but intimate when aggregated.
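A deliberately contrived sketch of that aggregation problem: each record below is mundane on its own, yet a scoring function (the categories and weights here are invented for illustration, not drawn from any real system) can combine them into a fairly intimate inference.

```python
# Each event is individually trivial; aggregated, they imply something private.
# The signal names and weights are invented purely for illustration.

events = [
    ("page_like", "running_club"),
    ("check_in", "health_clinic"),
    ("purchase", "prenatal_vitamins"),
    ("search", "maternity_leave_policy"),
]

# Hypothetical weights toward one sensitive inference ("expecting a child").
WEIGHTS = {
    "health_clinic": 0.2,
    "prenatal_vitamins": 0.5,
    "maternity_leave_policy": 0.3,
}

def infer(events) -> float:
    """Sum the weighted signals and cap the score at 1.0."""
    score = sum(WEIGHTS.get(item, 0.0) for _, item in events)
    return min(score, 1.0)

print(f"confidence of sensitive inference: {infer(events):.0%}")  # -> 100%
```

No single event in that list would raise an eyebrow; the privacy harm only appears once someone is allowed to join them together.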

The same goes for businesses.

As more organizations create predictive models in their quest to become AI-driven, they need to train those models on data to make them effective. Data brokers, like Facebook in this case, provide massive data sets, but a broker’s assurance of compliance with GDPR and other regulations isn’t enough for banks or insurance companies to use the data. So, as companies feed algorithms with that data, they need to be sure that privacy is enforced and that regulatory requirements are met.
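What “enforcing privacy” looks like in practice varies, but one common pattern is to gate records before they ever reach a training pipeline: pseudonymize direct identifiers and drop regulated attributes entirely. The field names and salt handling below are assumptions for illustration, not a compliance recipe:

```python
# A minimal sketch of a privacy gate in front of a training pipeline.
# Field names and salt handling are assumed for illustration; real GDPR
# compliance requires legal review, not just code.

import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}   # pseudonymize these
REGULATED_FIELDS = {"health_status", "religion"}  # drop these entirely

def pseudonymize(value: str, salt: str) -> str:
    """One-way hash so records can still be joined without exposing identity."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def sanitize(record: dict, salt: str) -> dict:
    """Return a copy of the record that is safe to hand to a model."""
    clean = {}
    for field, value in record.items():
        if field in REGULATED_FIELDS:
            continue                                  # never reaches the model
        if field in DIRECT_IDENTIFIERS:
            clean[field] = pseudonymize(str(value), salt)
        else:
            clean[field] = value
    return clean

record = {"name": "Jane Doe", "email": "jane@example.com",
          "age": 34, "health_status": "...", "zip": "94103"}
print(sanitize(record, salt="rotate-me-regularly"))
```

The design choice worth noting is that regulated fields are excluded rather than masked: a model can’t leak what it never saw.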

The power underlying all of this lies in the data used to feed the algorithms that make the decisions, and that is precisely where regulation should focus. As the GDPR deadline approaches, we should be learning how to better protect our personal data as it is used by more companies and by technologies like AI.