Anonymizing personal data ‘not enough to protect privacy,’ shows new study


Current methods for anonymizing data leave individuals at risk of being re-identified, according to new research.

The researchers say their paper, published today in Nature Communications, demonstrates that allowing data to be used — to train AI algorithms, for example — while preserving people’s privacy requires much more than simply adding noise, sampling datasets, and applying other de-identification techniques.

They have also published a demonstration tool that lets people see how likely they are to be traced, even if the dataset they are in is anonymised and only a small fraction of it is shared.

They say their findings should be a wake-up call for policymakers on the need to tighten the rules for what constitutes truly anonymous data.

Companies and governments both routinely collect and use our personal data. Our data, and the way it is used, is protected under laws such as the EU’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA).

Data is ‘sampled’ and anonymised, which includes stripping it of identifying characteristics like names and email addresses, so that individuals cannot, in theory, be identified. After this process, the data is no longer subject to data protection regulations, so it can be freely used and sold to third parties like advertising companies and data brokers.
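To make the process above concrete, here is a minimal sketch of this kind of de-identification: direct identifiers are stripped and only a fraction of rows is released. The dataset, field names, and helper functions are all hypothetical illustrations, not from the study; the point the researchers make is that the remaining ‘quasi-identifiers’ can still single a person out.

```python
import random

# Hypothetical toy dataset (illustrative fields, not from the study).
records = [
    {"name": "Alice", "email": "a@example.com", "zip": "10001", "birth_year": 1985, "sex": "F"},
    {"name": "Bob",   "email": "b@example.com", "zip": "10001", "birth_year": 1985, "sex": "M"},
    {"name": "Carol", "email": "c@example.com", "zip": "10002", "birth_year": 1990, "sex": "F"},
    {"name": "Dan",   "email": "d@example.com", "zip": "10003", "birth_year": 1972, "sex": "M"},
]

def deidentify(rows, direct_identifiers=("name", "email")):
    """Strip direct identifiers such as names and email addresses."""
    return [{k: v for k, v in r.items() if k not in direct_identifiers}
            for r in rows]

def release_sample(rows, fraction, seed=0):
    """Release only a random fraction of the de-identified rows."""
    rng = random.Random(seed)
    k = max(1, int(len(rows) * fraction))
    return rng.sample(rows, k)

def matching_rows(rows, **attrs):
    """Rows consistent with attributes an attacker already knows."""
    return [r for r in rows if all(r.get(k) == v for k, v in attrs.items())]

released = release_sample(deidentify(records), fraction=0.5)
# An attacker who knows someone's zip code, birth year, and sex can
# check whether exactly one released row matches that combination —
# if so, the 'anonymous' row is re-identified.
```

Even after this treatment, a combination like zip code, birth year, and sex is often unique to one person, which is why the researchers argue such releases do not meet a meaningful standard of anonymity.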


Story Source: Materials provided by Imperial College London. Original written by Caroline Brogan. Note: Content may be edited for style and length.

