The whitepaper, released in partnership with the Future of Privacy Forum (FPF), presents a layered approach to data protection in machine learning. It recommends techniques such as noise injection, inserting intermediaries between training data and the model, making machine learning mechanisms transparent, and applying access controls, monitoring, documentation, testing, and debugging.
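To make the first technique concrete, here is a minimal sketch of noise injection using the Laplace mechanism from differential privacy, one common way to perturb values derived from training data. The whitepaper does not prescribe this specific mechanism; the function name, parameters, and the choice of a count query are illustrative assumptions.

```python
import numpy as np

def laplace_noise(value, sensitivity, epsilon, rng=None):
    """Return value plus Laplace noise scaled to sensitivity/epsilon.

    This is the standard Laplace mechanism: the noise scale grows with
    the query's sensitivity and shrinks as the privacy budget epsilon grows.
    (Illustrative sketch; not from the whitepaper itself.)
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # larger epsilon -> less noise, weaker privacy
    return value + rng.laplace(0.0, scale)

# Example: privatize a count over a training set.
# A count query has sensitivity 1 (one record changes the count by at most 1).
true_count = 1000
noisy_count = laplace_noise(true_count, sensitivity=1.0, epsilon=0.5)
```

In practice, the privacy budget `epsilon` is chosen per release and tracked across queries; the sketch above covers only a single numeric query.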
The whitepaper builds on the analysis in Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, released by FPF and Immuta.
Co-authors of the paper include:
- Brenda Leong, FPF Senior Counsel & Director of Artificial Intelligence and Ethics
- Andrew Burt, Immuta Chief Privacy Officer and Legal Engineer
- Sophie Stalla-Bourdillon, Immuta Senior Privacy Counsel and Legal Engineer
- Patrick Hall, H2O.ai Senior Director for Data Science Products
Leong and Burt discussed the findings of the WARNING SIGNS whitepaper at the Strata Data Conference in New York City on September 25, 2019, during the “War Stories from the Front Lines of ML” and “Regulations and the Future of Data” panels.