Businesses and organizations hold more personal data now than ever before. In general, this data is used to better serve customers and more effectively run business operations. But with plenty of malicious parties eager to access personal data and track sensitive information back to its source, finding a way to maintain data’s utility while adequately reducing the risk of sensitive information being leaked has become front and center for security experts and regulatory bodies worldwide.
This environment has led to the rise of k-anonymity, a privacy model and dynamic data masking technique first proposed over two decades ago that’s since evolved to become an effective form of privacy protection — when handled correctly.
What is k-Anonymity?
The concept of k-anonymity was introduced into information security and privacy back in 1998. It’s built on the idea that by combining sets of data with similar attributes, identifying information about any one of the individuals contributing to that data can be obscured. k-Anonymization is often referred to as the power of ‘hiding in the crowd.’ Individuals’ data is pooled in a larger group, meaning information in the group could correspond to any single member, thus masking the identity of the individual or individuals in question.
Let’s say you’re looking at a data set of 100 individuals featuring basic identifying information — name, zip code, age, gender, etc. There is also information about each person’s health status, which is what you want to study. Since health information must remain private according to regulations like HIPAA, k-anonymization could be used to generalize some identifying attributes and remove others entirely. Information such as individuals’ names is not relevant to health data in this case, so it can be removed. Other data, such as zip code, can be broadened to a larger geographical area. This removes the ability to connect specific health information to individuals with certainty, while still preserving the data’s utility and effectiveness. In fact, anonymizing sensitive health data is one of k-anonymization’s most common use cases.
The k in k-anonymity refers to a variable — think of the classic ‘x’ in your high school algebra class. In this case, k refers to the minimum number of times each combination of identifying values appears in a data set. If k=2, the data is said to be 2-anonymous. This means the data points have been generalized enough that every combination of identifying values in the data set is shared by at least two records.
For example, if a data set features the locations and ages for a group of individuals, the data would need to be generalized to the point that each age/location pair appears at least twice.
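The check described above can be sketched in a few lines of Python. This is a minimal, illustrative example (the function name `k_of` and the toy records are our own, not a standard API): it counts how often each combination of quasi-identifying values occurs, and reports the smallest group size, which is the data set’s k.

```python
from collections import Counter

def k_of(records, quasi_identifiers):
    """Return the k-anonymity level: the size of the smallest group
    of records that share the same quasi-identifier values."""
    counts = Counter(
        tuple(r[attr] for attr in quasi_identifiers) for r in records
    )
    return min(counts.values())

# Toy data set: each (age, location) pair appears at least twice,
# so the data is 2-anonymous.
records = [
    {"age": "30-39", "location": "012XX"},
    {"age": "30-39", "location": "012XX"},
    {"age": "40-49", "location": "021XX"},
    {"age": "40-49", "location": "021XX"},
]
print(k_of(records, ["age", "location"]))  # 2
```

If any age/location pair appeared only once, the function would return 1, signaling that at least one individual is uniquely identifiable by those attributes alone.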
How Can k-Anonymity Help Prevent a Privacy Attack?
k-Anonymity protects against hackers or malicious parties using ‘re-identification,’ or the practice of tracing data’s origins back to the individual it is connected to in the real world.
For a given person, identifying data (name, zip code, gender, etc.) may exist alongside sensitive data (health records, prescriptions, financial information, passwords, etc.). In the wrong hands, identifying data and sensitive data could be combined to re-identify that person and compromise their privacy. The purpose of k-anonymity is to ensure the two categories of data cannot be connected to one another.
How Is k-Anonymity Implemented?
Data owners who implement k-anonymization effectively can be examples to others, helping to show how data can be anonymized in ways that actually help prevent re-identification of sensitive information. Here are two techniques we successfully use at Immuta to keep data safe, secure, and anonymous:
Generalization is the practice of replacing a specific value with a more general one. For example, data sets that include zip codes may generalize specific zip codes into counties or municipalities (i.e. changing 01234 to 012XX). Ages may be generalized into an age bracket (i.e. grouping ‘Age: 35’ into ‘Age Group: 30-39’).
Generalization removes identifying information that can be gleaned from data by reducing an attribute’s specificity. It can be thought of as sufficiently ‘widening the net.’
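The two generalizations described above can be sketched as simple transformations. This is a minimal example under our own assumptions (the function names and the 3-digit/decade granularity choices are illustrative, not a prescribed standard):

```python
def generalize_zip(zip_code, keep=3):
    # Keep the first `keep` digits and mask the rest with 'X'.
    return zip_code[:keep] + "X" * (len(zip_code) - keep)

def generalize_age(age, bucket=10):
    # Map an exact age to a bracket, e.g. 35 -> "30-39".
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

print(generalize_zip("01234"))  # "012XX"
print(generalize_age(35))       # "30-39"
```

How coarse the generalization must be (two digits kept vs. three, 5-year vs. 10-year brackets) depends on how large each resulting group needs to be to reach the target k.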
Suppression is the process of removing an attribute’s value entirely from a data set. In the above example, suppression would mean removing the age attribute from every record.
Keep in mind that suppression should only be used for data points that are not relevant to the purpose of the data collection. For example, if data is collected for the purpose of determining at which age individuals are most likely to develop a specific illness or condition, suppressing the age data would make the data itself useless. Suppression is often applied to irrelevant or mostly irrelevant data points, and must be applied on a case-by-case basis, rather than using a set of overarching rules that apply universally.
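Suppression itself is straightforward to express in code. The sketch below is illustrative (the `suppress` helper and the toy records are our own): it drops attributes that are irrelevant to the study, such as names, while leaving study-relevant attributes like age in place.

```python
def suppress(records, attrs_to_drop):
    """Remove the given attributes from every record."""
    drop = set(attrs_to_drop)
    return [{k: v for k, v in r.items() if k not in drop} for r in records]

records = [
    {"name": "A. Smith", "age": 35, "condition": "asthma"},
    {"name": "B. Jones", "age": 41, "condition": "diabetes"},
]
# Names are irrelevant to a health study, so they can be suppressed;
# age stays because the analysis depends on it.
print(suppress(records, ["name"]))
```

The hard part is not the mechanics but the case-by-case judgment the text describes: deciding which attributes can be dropped without destroying the data’s purpose.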
Some critics of k-anonymization take issue with the fact that achieving a re-identification risk of zero is impractical or impossible. But ensuring a zero percent chance of re-identification risk is not the industry standard — even the GDPR acknowledges that the complete absence of risk is impossible in most cases.
Instead, a sufficiently low re-identification risk is acceptable, and should be the goal when using k-anonymization.
How is l-Diversity Achieved Using k-Anonymization?
l-diversity is an extension of k-anonymization, and is often used as a benchmark to measure whether k-anonymization efforts have gone far enough to avoid re-identification. A data set is said to satisfy l-diversity if there are at least l well-represented values for each confidential attribute in each group of records that share key attributes.
l-diversity protects privacy even when the holder or publisher of data does not know what knowledge a malicious party may already have about the individuals in a data set. True anonymity is preserved because the values of sensitive attributes are well-represented in each group.
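Measuring l-diversity follows the same pattern as measuring k: group records by their shared key attributes, then count distinct sensitive values per group. This is a minimal sketch with illustrative names (`l_diversity`, the toy records) rather than a standard API:

```python
from collections import defaultdict

def l_diversity(records, quasi_identifiers, sensitive_attr):
    """Return l: the smallest number of distinct sensitive values
    found in any group of records sharing quasi-identifier values."""
    groups = defaultdict(set)
    for r in records:
        key = tuple(r[a] for a in quasi_identifiers)
        groups[key].add(r[sensitive_attr])
    return min(len(vals) for vals in groups.values())

records = [
    {"age": "30-39", "zip": "012XX", "condition": "asthma"},
    {"age": "30-39", "zip": "012XX", "condition": "diabetes"},
    {"age": "40-49", "zip": "021XX", "condition": "flu"},
    {"age": "40-49", "zip": "021XX", "condition": "asthma"},
]
print(l_diversity(records, ["age", "zip"], "condition"))  # 2
```

Note the failure mode this guards against: a data set can be 2-anonymous yet have every member of a group share the same condition, in which case an attacker who places someone in that group learns their condition anyway. That data set would only be 1-diverse.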
Data Masking and De-Identification with Immuta
Dynamic data masking and de-identification are two central tasks associated with k-anonymization and reaching suitable l-diversity. At Immuta, we remove the guesswork from achieving compliance with federal, industry, and contractual regulations, as well as organizations’ internal rules.
Using dynamic data masking, we are able to achieve format-preserving masking and anonymization without having to manually copy data or remove values — tasks which not only delay analysis, but can also weaken data’s utility and introduce the risk of human error.
Dynamic k-anonymization helps address the inherent roadblocks to data privacy protection across cloud data platforms and as data sets and users scale. This allows organizations to safely and seamlessly prepare sensitive data for use while keeping the security and integrity of individuals intact.
Want to discover how Immuta can help eliminate re-identification risk and keep data secure? Request a demo today!