The GDPR contains a host of forward-leaning data provisions, but none are thornier than the so-called “right to explainability” and the constraints the GDPR imposes on machine learning. With fines of up to four percent of global annual revenue, organizations using EU data cannot afford to ignore these issues.
Questions created by the GDPR include:
- What types of explanations are required for ML models?
- What rights do data subjects have when ML models use their data?
- What exactly constitutes “automated decision making” under the GDPR?
Steve focuses on the specific challenges created by the GDPR, the ambiguities around ML that regulators have left unaddressed, and what this means for every phase of the ML creation, testing, and deployment lifecycle.