
Facebook was always too busy selling ads to care about your personal data

(in)Secure is a weekly column that dives into the rapidly escalating topic of cybersecurity.


Last year, Facebook collected over nine billion dollars in ad revenue in a single quarter. That’s a lot of ads. As a trade-off for using a free service, people on Facebook put up with the proliferation of these ads in their newsfeeds. But what if the trade-off involved more than that? What if it involved your personal data being sold off without your consent?

Facebook’s latest scandal involves a data analysis firm called Cambridge Analytica, which obtained the personal data of 50 million Facebook profiles without those people’s consent, data that was then put to work in the election of a certain presidential candidate. On its own, the scandal is more than a little troubling, and it offers a startling look at how little the world’s biggest social media platform is concerned about personal data.

Let’s be clear. This doesn’t involve an actual data breach. It’s merely a policy no one at Facebook cared about.

Under the guise of academic research

Using personal data for the sake of academic research has been a weak point in Facebook’s privacy policy for years now — and it’s the first vulnerability the collaborators involved with the Cambridge Analytica scandal exploited.

Despite the name, Cambridge Analytica has no official connection to academia. It’s a research organization founded with the specific purpose of influencing the electoral process, and was run by former Trump aide Steve Bannon and hedge fund billionaire Robert Mercer.

Cambridge Analytica Facebook breach (Bryan Bedder/Getty Images)

The facade of academic research served as the entry point for a key figure in the operation: Aleksandr Kogan, a researcher who worked at Cambridge University and, briefly, St. Petersburg State University. According to a report by the New York Times, while doing work for Cambridge Analytica, Kogan told Facebook that he was collecting data for academic purposes rather than political ones.

The description for the app said, word for word, “This app is part of a research program in the Department of Psychology at the University of Cambridge.” Apparently, Facebook did nothing to verify that claim. To make things worse, Kogan said he later changed the stated purpose of his use of the data, and Facebook never bothered to inquire further.

Facebook has been giving the data of its users to academic researchers for years now — and not in secret. Facebook freely provided personal data from its users to Harvard University for an academic study back in 2007. Others since then include a partnership with Cornell University on influencing the mood of Facebook users, and yet another in 2017 which studied how AI could guess a person’s sexual orientation from only a photograph.

These studies were all met with public outrage, but Facebook emphasized that they weren’t the result of data breaches or significant holes in the company’s research protocols. It saw them as only “minor oversights.”

There’s little reason to believe a platform that views massive misuse of data without consent as “minor oversights” cares about your privacy. And that’s not where it ends.

Under the guise of a personality quiz

The other weak point in Facebook’s data policies lies in something we all know too well: personality quizzes. They’re prominent on Facebook, and Kogan used this vulnerability to collect the data that Cambridge Analytica purchased from him.

Through Global Science Research (GSR), a separate company he created, Kogan developed a Facebook app called thisisyourdigitallife. GSR paid a group of 270,000 people to download the app and take the quiz. That might not sound like much, but the app was also allowed to collect data from each of those people’s friends. The result was data on 50 million profiles, now in the hands of Cambridge Analytica. That’s a lot of data.

Christopher Wylie, one of the founders of Cambridge Analytica, blew the whistle on how the data firm harvested data from millions of Facebook users. Photo: Jake Naughton for The Washington Post via Getty Images

Never did Facebook inform its users that their data was being used without their consent. That alone may call British law into question.

According to The Guardian, Facebook learned this trick was used to mine massive amounts of data in 2015, which was then used by the Ted Cruz presidential campaign. Facebook’s response was to send Cambridge Analytica an official letter, obtained by the Times, stating the following: “Because this data was obtained and used without permission, and because GSR was not authorized to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately.”

Over two years passed before Facebook would even follow up on its request. “If this data still exists, it would be a grave violation of Facebook’s policies and an unacceptable violation of trust and the commitments these groups made,” a blog post from Facebook stated. Eventually, it did get around to it, but it shows that Facebook’s problem isn’t that it lacks policies. It’s that they aren’t enforced.

Cambridge Analytica wasn’t the only organization bending Facebook’s privacy policies. A former Facebook employee told The Guardian: “My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data.”

That’s from Sandy Parakilas, who was the platform operations manager in 2011 and 2012. “Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”

Who could be bothered to care?

As reported by the Times, research director Jonathan Albright at Columbia University summarized the problem well: “Unethical people will always do bad things when we make it easy for them and there are few — if any — lasting repercussions.”

https://www.facebook.com/zuck/posts/10104712037900071

Facebook will make sure it takes care of this specific problem, sure. After remaining silent for days after the news broke, Facebook CEO Mark Zuckerberg finally made an official statement, in which he took some responsibility for what happened: “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.”

He also vowed to take other steps, such as auditing suspicious apps and limiting the amount of data developers can access from applications. These policies will all help prevent a similar scenario from unfolding, but cybersecurity is about prevention, not reaction. It requires a proactive approach to plugging holes in the system.

For a company that lives and dies on the trust people place in it with their personal information, you’d think it would take these issues a little more seriously across the breadth of its platform. If it doesn’t make massive changes to the way privacy and security are handled at all levels, #deleteFacebook could grow into far more than just a hashtag.

Luke Larsen
Luke Larsen is the Senior Editor of Computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.