
Why Provisioned Access is the Key to the Right to be Forgotten

Since the Court of Justice of the European Union (CJEU) decided the Google Spain case almost six years ago, the right to be forgotten (RTBF) has become the most well-known feature of EU data protection law. In a nutshell, RTBF—whose technical name is “the right to erasure”—enables individuals, in certain circumstances, to ask for the removal of their personal data.

RTBF is both famous and controversial. Many in academia and civil society have criticized its implementation in the Google Spain case, hoping that new EU data protection laws would strike a better balance between the right to privacy and data protection on the one hand, and the right to freedom of expression on the other. 

Despite these criticisms, the drafters of the General Data Protection Regulation (GDPR) maintained RTBF as a remedy in Article 17 of the regulation. Other legal regimes have followed the EU’s lead. For example, the California Consumer Privacy Act (CCPA) includes a similar, although not identical, provision, as does Brazil’s General Data Protection Law. And the CJEU recently confirmed the relevance of this right in two important decisions (Google v CNIL and GC and Others v CNIL), even if it declined to follow precisely the approach of the French supervisory authority, the CNIL, in relation to the scope of RTBF.

Yet the rising prominence of RTBF has left one major question unanswered—a question we see organizations around the world struggling with in practice: Does erasure mean permanent, irreversible deletion? Or does it imply significantly limiting access to that data? Given that the GDPR is influencing other data protection frameworks such as the CCPA, which also introduces a right to deletion, it is crucial to clarify what RTBF actually entails.

In practice, there are several misconceptions about what it means to apply RTBF, both to comply with legal mandates and to meet the technology needs of the modern enterprise. In this article, we hope to clarify some of these misconceptions and to explain why there’s much more to RTBF than deletion alone. 

To start, let’s recall the CJEU’s holding in the Google Spain case, where the Court had to decide whether a national supervisory authority could order a search engine to withdraw from its indexes information published by third parties. What was at stake for the Court was the delisting of a link in Google’s search results in one specific circumstance: a link that appeared when Google users entered the claimant’s name in the search box.

The effect of RTBF in that case was to stop unnecessary processing of personal data. Put differently, the effect of RTBF was not permanent or irreversible deletion of personal data at its source—it was removing access to that data by delisting links to it. In the words of the Court in Google v CNIL, RTBF must “have the effect of preventing or, at the very least, seriously discouraging internet users in the Member States from gaining access to the links in question using a search conducted on the basis of that data subject’s name.”

But the Google Spain case (as well as Google v CNIL and GC and Others v CNIL) is not the only instance of RTBF. In other examples, individuals have submitted their requests directly to the providers storing or hosting the contentious data. What does RTBF require in that instance?

The answer, again, is not immediate and permanent deletion, but this time it is because of the way other data protection principles overlap with RTBF. Data protection by design, in particular, which is the backbone of the GDPR, mandates that a range of data protection principles be embedded within data environments to protect individuals’ rights, ensuring things like data integrity and availability, data minimization, and purpose limitation.

Having a single button within an organization that could instantly and permanently delete data in an irreversible fashion would jeopardize the very tenets of data protection by design. (See if you can find one CISO who thinks that giving a single tool write access to all data across their organization is a good idea!)

From a data protection by design perspective, a two-step rather than a one-step approach makes more sense. Importantly, a two-step approach is much more effective from a data subject standpoint because it allows data controllers to react faster, which increases compliance with GDPR as a whole. 

Once a request for erasure is received, the first step is stopping the unnecessary processing activity as quickly as possible. This is similar to the delisting at the center of the Google Spain case: the data must be prevented from being processed, rather than outright deleted.

When the data is prevented from being processed in this way, it is not deleted; rather, it is segregated so that it cannot be accessed to pursue the original processing purpose. Deletion itself has not yet taken place.

Later, at a second stage, the data can be deleted locally in each of the different databases or data silos, after a more thorough (and less immediate) assessment. This approach is supported by the fact that neither the EU right to erasure nor the California right to deletion is absolute.
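To make the two-step approach concrete, here is a minimal sketch in Python. All names (`RecordStore`, `restrict`, `erase`) are hypothetical illustrations, not any real product’s API; the point is only that restriction is an immediate, reversible state that blocks processing, while deletion is a separate, later action.

```python
from datetime import datetime, timezone


class RecordStore:
    """Hypothetical store illustrating a two-step erasure workflow:
    step 1 restricts processing immediately; step 2 deletes later,
    after a fuller assessment of any applicable exemptions."""

    def __init__(self):
        self._records = {}      # subject_id -> personal data
        self._restricted = {}   # subject_id -> when restriction was applied

    def add(self, subject_id, data):
        self._records[subject_id] = data

    def restrict(self, subject_id):
        """Step 1: put the data 'beyond use' without deleting it."""
        if subject_id in self._records:
            self._restricted[subject_id] = datetime.now(timezone.utc)

    def read(self, subject_id):
        """Ordinary processing refuses restricted records."""
        if subject_id in self._restricted:
            raise PermissionError("processing restricted pending erasure review")
        return self._records[subject_id]

    def erase(self, subject_id):
        """Step 2: permanent deletion, once the assessment confirms
        that no exemption to erasure applies."""
        self._records.pop(subject_id, None)
        self._restricted.pop(subject_id, None)
```

On receipt of an erasure request, `restrict` can be called immediately across every data silo, while `erase` runs only after the slower legal assessment, mirroring the delisting-then-deletion sequence described above.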

EU regulators have endorsed the two-step approach. As explained by the UK’s data protection supervisory authority, the Information Commissioner’s Office (ICO), in its guidance on deleting personal data, what is required is that the data in question has been put “beyond use.” More precisely, the ICO, referring back to its pre-GDPR guidance, describes this as the state in which the data controller

  • is not able, or will not attempt, to use the personal data to inform any decision in respect of any individual or in a manner that affects the individual in any way  
  • does not give any other organization access to the personal data 
  • surrounds the personal data with appropriate technical and organizational security 
  • and commits to permanent deletion of the information if, or when, this becomes possible

And the ICO is not the only regulator suggesting that permanent deletion is not necessarily the standard in practice. The European Union Agency for Network and Information Security (ENISA) wrote as far back as 2012, when the first draft of the GDPR was released, that an approach would satisfy RTBF if it allowed “encrypted copies of the data to survive, as long as they cannot be deciphered by unauthorized parties.” The French data protection supervisory authority, the CNIL, made a similar point in 2018, acknowledging that certain types of cryptographic algorithms can have an effect that is de facto equivalent to deletion.
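The idea behind these regulatory statements is often called crypto-shredding: encrypt each person’s data under its own key, and destroy the key when erasure is requested, so surviving ciphertext can no longer be deciphered. The sketch below illustrates the concept only; the XOR keystream is a toy construction for demonstration, not production cryptography, and the `KeyVault` names are hypothetical.

```python
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data with a keystream derived
    from the key (illustration only, NOT production cryptography)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


class KeyVault:
    """Holds one encryption key per data subject. Destroying a key
    ('shredding') renders that subject's ciphertext undecipherable,
    even if encrypted copies survive in backups or archives."""

    def __init__(self):
        self._keys = {}

    def key_for(self, subject_id):
        return self._keys.setdefault(subject_id, secrets.token_bytes(32))

    def shred(self, subject_id):
        """Crypto-shredding: delete only the key, not the ciphertext."""
        self._keys.pop(subject_id, None)

    def has_key(self, subject_id):
        return subject_id in self._keys
```

Under this model, an erasure request is honored by calling `shred` for the subject: the stored ciphertext persists, but, in ENISA’s terms, it can no longer “be deciphered by unauthorized parties,” or indeed by anyone.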

What does this mean in practice? It means that RTBF is not necessarily absolute deletion. It is also centered on restricting the processing of the data in question. If access to that data is restricted such that it is no longer being processed, and appropriate safeguards and commitments are in place, there is a strong claim that RTBF has been satisfied.

Sophie Stalla-Bourdillon is Senior Privacy Counsel and Legal Engineer at Immuta, the automated data governance company, and a Professor in Information Technology Law and Data Governance within Southampton Law School at the University of Southampton.