Can data collection lead to discrimination and self-censorship?

A human rights committee has launched an inquiry into the right to privacy and the digital revolution

In the wake of last year's Cambridge Analytica scandal, more users are beginning to recognise how damaging the misuse of their data can be. Despite this, a parliamentary committee has warned that most people are still not fully aware of how companies infringe upon their privacy.

The dark side of data collection

Data collection practices could potentially lead to discrimination and self-censorship, the committee also warned. The human rights committee began an inquiry into the right to privacy and the digital revolution on Wednesday, The Guardian reported.

The group published evidence from privacy and data protection organisations, including the Information Commissioner's Office (ICO), Liberty, and Privacy International. Overall, the evidence suggests that Brits are largely unaware of what happens to their data.

It also found that many users are unable to provide meaningful consent, and that some self-censor out of fear of being watched. According to Liberty, private companies' exploitation of data for commercial use has become a "normalised part" of "everyday existence."

Discrimination and self-censorship

This data can reveal and “manipulate our deepest and most sensitive thoughts and feelings – including our political views,” Liberty added. In effect, the normalisation of these processes threatens freedom of expression.

Indeed, studies show users are more likely to censor social media posts when they become aware of this surveillance. The ICO also warned that the modern data economy could potentially fuel discrimination.

Last year, for example, Amazon scrapped an AI recruiting tool after discovering it discriminated against women. The model had learned to screen candidates by analysing patterns in résumés submitted over a ten-year period, most of which came from men.

As the ICO observed, it is becoming increasingly evident that algorithms contain inherent biases. As a result, there is a greater risk of "discriminatory outcomes, which runs contrary to the principle of fairness."

This ultimately raises questions regarding the principles of consent, accountability, and transparency. Despite GDPR’s attempts to protect users, The Law Society of Scotland warned that “consumers may not fully understand the potential impact that certain uses of their data might have.”

Looking to learn more? Check out our podcast with Satalia CEO Daniel Hulme, in which he discusses the ethics surrounding AI adoption.