The data transformation process is not trivial, and assessing its outputs can be equally complex. To fully understand the output of a data transformation process, data teams and stakeholders must look at the data environment and the combination of technical and organisational controls implemented to manage data access.
In the context of data analytics, machine learning, and federated learning, understanding input data and the anonymisation techniques applied to it is critical. In this paper, published in Data Protection and Privacy: Data Protection and Artificial Intelligence, you will learn:
- Why the de-identification spectrum implied by common readings of the CCPA and GDPR is oversimplified
- How to analyse anonymisation controls in a way that mitigates risk while maximising data’s utility
- How to make sense of de-identification and anonymisation requirements stemming from CCPA, GDPR, or other regulations
- Why aggregated and synthetic data are not necessarily privacy-preserving
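The last point can be illustrated with the classic differencing attack: two individually harmless aggregate queries can be subtracted to reveal one person's exact value. A minimal sketch, using made-up names and salary figures purely for illustration:

```python
# Hypothetical dataset: individual salaries that should stay private.
salaries = {"alice": 82000, "bob": 95000, "carol": 71000, "dave": 88000}

def aggregate_sum(names):
    """An 'anonymous' aggregate query: total salary for a group."""
    return sum(salaries[n] for n in names)

# Two seemingly harmless aggregate queries...
everyone = aggregate_sum(salaries.keys())
everyone_but_bob = aggregate_sum(n for n in salaries if n != "bob")

# ...whose difference exposes one individual's exact salary.
bobs_salary = everyone - everyone_but_bob
print(bobs_salary)  # 95000
```

This is why releasing aggregates alone, without controls on how queries can be combined, does not by itself make data anonymous.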