Can Users Poison Data That Big Tech Gathers?

There’s growing discontent with Big Tech’s surveillance and encroachment on the public’s privacy. Is there a way for ordinary users to disrupt such unsolicited access to personal interests?

Ivana Vojinovic

Updated: April 19, 2022


Platforms like WhatsApp and Instagram make money even though they are free to use. Surely the developers aren’t doing it for altruistic reasons?

Here’s a fun fact you probably already knew - the Big Tech products the general public relies on all collect your data, including your location, your personal interests, and what you’re doing. They make quite a lot of money from that data, either by selling it to other companies or by using it in relentless marketing campaigns designed to get you to buy more.

There are so many things wrong with this model, but perhaps the most galling part is losing your privacy just to use an essential app or tech product. Something needs to be done, so the question is - can users poison the data of Big Tech companies?

Can Users Throw a Wrench in Big Tech’s Works?

The answer is yes - every user can sabotage the data tech giants collect. There are three ways to do this:

  1. Data strikes
  2. Data poisoning
  3. Conscious data contribution

1. Data Strikes

A data strike is a technique that involves deliberately keeping your data away from Big Tech companies. Users can withhold or delete their data by taking advantage of privacy tools or privacy laws. One example is a group data boycott, in which people stop using certain apps and services altogether (as seen in the boycotts against companies like Uber and Facebook); another is ad-blocking software, which deprives companies of data on how their ad placements are performing.
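For the deletion side of a data strike, the main legal lever is the GDPR’s right to erasure (covered later in this article). As a minimal sketch, here is a short Python snippet that drafts such a request - the company name, privacy address, and account ID are placeholders, not real contacts:

```python
# A minimal sketch of drafting a GDPR Article 17 (right to erasure) request.
# The recipient address, company name, and account ID are placeholders.
from email.message import EmailMessage

def draft_erasure_request(company: str, privacy_email: str, account_id: str) -> EmailMessage:
    msg = EmailMessage()
    msg["To"] = privacy_email
    msg["Subject"] = f"GDPR Article 17 erasure request ({account_id})"
    msg.set_content(
        f"To the data protection officer of {company},\n\n"
        f"Under Article 17 of the GDPR, I request the erasure of all personal\n"
        f"data associated with the account {account_id}, and written\n"
        f"confirmation once the deletion is complete.\n"
    )
    return msg

# Example usage with placeholder values:
print(draft_erasure_request("ExampleCorp", "privacy@example.com", "user-12345"))
```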

2. Data Poisoning

This is simply the process of making your data useless to Big Tech. It involves contributing data that is meaningless or misleading. A common way to do this is with a browser extension called AdNauseam, which quietly clicks on every ad it blocks, flooding Google’s ad-targeting algorithms with noise.
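To make the mechanism concrete, here is a toy Python/Selenium sketch of the same idea: simulated clicks that drown real interest signals in noise. This is not AdNauseam’s actual code (the extension is JavaScript running inside the browser), the CSS selectors are illustrative guesses, and it should only ever be pointed at pages you control:

```python
# A toy illustration of AdNauseam's core idea - simulated ad clicks that make
# click data meaningless. This is NOT AdNauseam's actual code; the selectors
# below are illustrative guesses, and you should only run something like this
# against pages you control.
from selenium import webdriver
from selenium.webdriver.common.by import By

AD_SELECTORS = ["iframe[id^='google_ads']", "[class*='ad-banner']"]  # assumed patterns

driver = webdriver.Firefox()
driver.get("https://example.com")  # stand-in URL: use a test page you own

for selector in AD_SELECTORS:
    for ad in driver.find_elements(By.CSS_SELECTOR, selector):
        try:
            ad.click()  # each noise click dilutes the profile built from real clicks
        except Exception:
            pass  # ads in cross-origin iframes often can't be clicked directly

driver.quit()
```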

As Vincent and Li put it in Data Leverage: A Framework for Empowering the Public in Its Relationship with Technology Companies:

“For instance, someone who dislikes pop music might use an online music platform to play a playlist of pop music when they step away from their device with the intention of ‘tricking’ a recommender system into using their data to recommend pop music to similar pop-hating users.”
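
Here is a minimal sketch of that scenario in Python: a toy user-based recommender in which a burst of fake “plays” changes what a similar user gets recommended. The play counts and track names are invented:

```python
# A toy version of the scenario in the quote: fake "plays" steering a
# user-based recommender. All numbers and track names are made up.
import numpy as np

tracks = ["indie_1", "indie_2", "pop_1", "pop_2"]
plays = np.array([
    [20.0, 15.0, 0.0, 0.0],  # user A, a pop-hater
    [18.0, 12.0, 0.0, 0.0],  # user B, a similar pop-hater
])

def recommend(plays, target):
    """Recommend the most-played track of the most similar user that the
    target hasn't heard yet; None if the neighbor adds nothing new."""
    unit = plays / np.linalg.norm(plays, axis=1, keepdims=True)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -1.0)  # ignore self-similarity
    neighbor = int(np.argmax(sims[target]))
    scores = np.where(plays[target] == 0, plays[neighbor], -1.0)
    best = int(np.argmax(scores))
    return tracks[best] if scores[best] > 0 else None

print(recommend(plays, target=1))  # None: no pop plays anywhere yet

# User A walks away and leaves a pop playlist running:
plays[0, 2] += 50  # 50 fake plays of "pop_1"
print(recommend(plays, target=1))  # "pop_1": user B now gets pop recommended
```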

Other examples of data poisoning include campaigns that promote products using fake reviews.

3. Conscious Data Contribution

An alternative to withholding, poisoning, or deleting data, conscious data contribution (CDC) means giving your data to a competing organization instead, thereby increasing market competition for Big Tech. If rival companies hold the same data, Big Tech’s copy loses much of its exclusive value.

An example of CDC came in January 2021, when millions of WhatsApp users deleted their accounts and switched to the competitors Signal and Telegram after Facebook, now Meta Platforms, announced it would begin sharing WhatsApp data within the company. Facebook delayed its policy changes as a result - a sign that CDC can be an effective measure.

While all three options are viable ways for users to push back against Big Tech’s data collection, we’ll focus on data poisoning in this article.

What Steps Can Users Take?

The biggest question isn’t just how to poison a tech giant’s data well, but how to do it effectively. The average person is already doing some of this, whether by running an ad blocker or a browser extension like AdNauseam. Still, these scattered drops of resistance don’t affect Big Tech if they aren’t coordinated.

It’s been done before. On March 23, 2016, the general public succeeded in poisoning the data well of Tay, Microsoft’s chatbot. Designed to mimic a young woman and appeal to 18-to-24-year-olds, it learned from the messages users sent it and reused them in subsequent conversations.

The result was devastating - within a day, the chatbot had been turned into a cyber monster spouting the sexist and racist messaging that users had deliberately fed into it. They saw they could toy with the service and used the opportunity to their advantage.

Resistance should be a coordinated effort: widespread use of tools like AdNauseam could meaningfully blunt Big Tech’s targeting algorithms. Google has already pledged to phase out third-party cookies and rework how it targets ads and tracks users across the web, leaving users somewhat less exposed.

Policy advocacy and data poisoning efforts can contribute to fighting back against Big Tech. If there were more tools like AdNauseam, more people would try to poison the data pool. Data strikes are also effective, but the mass collaboration required for a data boycott would be more likely if there were stronger data privacy laws.

One such law is the European Union’s General Data Protection Regulation (GDPR), which gives users the right to request that their data be deleted. Without these laws, Big Tech companies may not offer any way to scrub your data and records, even after an account has been deleted, rendering such strikes useless.

Bottom Line

The concept of data poisoning can easily be misconstrued as more extreme than it actually is. In reality, giving users a say in how their information and privacy are treated should be the bare minimum owed to anyone whose data is being collected. The decision to spoil data is just one of the many ways users can exercise data leverage.

FAQ
What is data poisoning?

Data poisoning is a form of attack that interferes with machine learning to produce unwanted results. Poisoners do this by slipping incorrect or corrupt information into the data a machine learning system trains on. As the algorithm attempts to learn from the misleading information, it draws unintended - and sometimes harmful - conclusions.
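
Here is a deliberately harmless illustration in Python, using scikit-learn on synthetic data: flip a fraction of training labels and compare the classifier’s accuracy before and after. The dataset, model, and 30% poisoning rate are all arbitrary choices for the demo:

```python
# A deliberately harmless demonstration of training-data poisoning:
# flip a fraction of training labels and watch test accuracy degrade.
# Everything here is synthetic; no real system is involved.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poison 30% of the training labels by flipping them (an assumed rate).
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

dirty = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", dirty.score(X_test, y_test))
```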

What is model poisoning?

Model poisoning describes a form of attack, most often discussed in federated learning, in which attackers degrade a model’s performance on targeted sub-tasks by submitting incorrect or “poisoned” model updates. An example would be making the model classify a plane as a type of bird.
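
Here is a stylized numpy sketch of that idea in a federated-averaging setting: one malicious client scales its update so the averaged model lands near the attacker’s target rather than the honest consensus. The weights and scaling factor are invented, and no real federated-learning framework is involved:

```python
# A stylized "model replacement" attack on federated averaging.
# All values are invented; this sketches the mechanism, not a real system.
import numpy as np

global_weights = np.zeros(4)  # current global model
n_clients = 10

# Nine honest clients nudge the model toward the true objective...
honest_updates = [np.full(4, 0.1) for _ in range(n_clients - 1)]

# ...while one attacker scales its update by the client count so it
# dominates the average and drags the model toward its own target.
target = np.full(4, -1.0)
malicious_update = n_clients * (target - global_weights)

updates = honest_updates + [malicious_update]
global_weights += np.mean(updates, axis=0)
print(global_weights)  # ~[-0.91 ...]: near the attacker's target, not the honest 0.1
```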

What is machine learning poisoning?

Machine learning poisoning is one of the most popular methods of attacking machine learning systems. These poisoning attacks involve a data poisoner intentionally contaminating the training data an algorithm relies on, with the aim of weakening or corrupting the resulting model.
