Best Practices for Data Security in Modern Cloud Platforms

As the volume of data generated, transformed, stored, and accessed has grown, digital transformation has significantly altered how companies do business and use data to create value. To deal with increasingly complex unstructured and semi-structured data, organizations are building out their analytics capabilities to stay competitive. Machine-generated data, streaming data, data from SaaS applications, and data from third-party cloud providers further enrich the datasets available for analytics.

Following these changes, on-premises data warehouses have become increasingly obsolete, as they often cannot handle multiple data types or support advanced analytics. As a result, organizations have begun migrating data to the cloud to support and govern this information, and the majority of firms have moved to cloud data platforms such as modern cloud data warehouses and cloud data lakes.

With organizations dealing with such complex data both on-premises and in the cloud, data security remains a huge concern. Added layers of complexity create a much larger attack surface for bad actors, making it difficult to monitor and secure data. As consumers become more aware of data security and regulations step up data privacy requirements, the way organizations handle these requirements becomes both a challenge and a competitive business advantage.

But what exactly is data security, and how can it be applied across your cloud data ecosystem?

What is Data Security?

Data security is the combination of processes and technologies that protect sensitive digital information from malicious access, alteration, disclosure, or leakage throughout its life cycle. Security risks arise from various sources, but they can all be categorized as either accidental or intentional. Effective data security measures aim to protect data against any risk, regardless of intention.

The top priority of data security is to protect extremely sensitive data such as personally identifiable information (PII) and protected health information (PHI). Both PII and PHI include personal information that can be connected back to the individual from whom it originated, completely identifying them. It can be challenging for organizations to protect these data sets in modern cloud platforms, especially with a diverse and distributed data stack.

In a 2021 TDWI survey, only 55% of respondents confirmed that their data was secure; the rest disagreed or were on the fence. Another TDWI survey, from 2022, found that data security and privacy was the second-highest data management priority, behind only data governance.

A very common data security technique is data encryption. Encryption converts plaintext into an unreadable format using an algorithm; once encrypted, the only way to recover the original data is with a protected decryption key. Data masking is another method of enforcing data security, in which PII is altered from its original state before being analyzed or used to develop applications. Security can be applied through a range of other methods, but encryption and masking are among the most widespread.
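To make masking concrete, here is a minimal sketch in Python using only the standard library. The field names, the masking rule, and the salt value are all illustrative assumptions, not a real product's API; real masking and tokenization systems use managed keys and format-preserving schemes.

```python
import hashlib

def mask_email(email: str) -> str:
    """Mask the local part of an email address, keeping only the first character."""
    local, _, domain = email.partition("@")
    return local[0] + "*" * (len(local) - 1) + "@" + domain

def tokenize(value: str, salt: str = "demo-salt") -> str:
    """Replace a sensitive value with a deterministic, irreversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

record = {"name": "Jane Doe", "email": "jane.doe@example.com", "ssn": "123-45-6789"}
masked = {**record, "email": mask_email(record["email"]), "ssn": tokenize(record["ssn"])}
print(masked["email"])  # j*******@example.com
```

The masked record can still be joined and aggregated (the token is deterministic), but the original PII cannot be recovered from it, which is the property masking relies on.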

Best Practices for Implementing Data Security

Although data security is integral to maintaining an effective and secure modern data stack, it can be difficult to understand just where to begin with your organization’s data. TDWI’s new checklist report, Five Best Practices for Data Security in Modern Cloud Platforms, addresses this necessity by outlining organizational and technological best practices for managing data security in modern environments. The most common of these best practices are as follows:

1. Make Access Controls Compliant and Easy to Understand

In order to protect data, organizations need to effectively control access to this sensitive information. Doing so ensures that only the right people can access and use sensitive data for the right purposes at the right times, and no more.

Furthermore, organizations are working to enable self-service data access, allowing users to quickly obtain the right access to their respective data. In simple terms, access controls should facilitate compliance with regulations without becoming a barrier to setup or data utility. Compliance can be achieved with well-written, plain-language policy authoring that legal and compliance stakeholders can comprehend. These policies should adhere to laws, compliance regulations, and standards, as well as ethical practices for the company's data.

With data access control measures in place, teams can monitor whether their policies are being enforced accordingly. Data monitoring capabilities also track who can access sensitive data, when it was accessed, which data sources attract the most traffic, and which users are most active with sensitive data. This helps internal and external auditors better understand compliance requirements and test the effectiveness of the access controls in place.
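The monitoring questions above reduce to simple aggregations over an audit log. The sketch below assumes a hypothetical log format of (user, data source, is-sensitive) tuples; real platforms emit far richer events, but the counting logic is the same.

```python
from collections import Counter

# Hypothetical audit-log entries: (user, data_source, is_sensitive)
access_log = [
    ("alice", "customers", True),
    ("bob",   "customers", True),
    ("alice", "orders",    False),
    ("alice", "customers", True),
]

# Which data sources attract the most traffic?
traffic = Counter(source for _, source, _ in access_log)

# Which users touch sensitive data most often?
sensitive_users = Counter(user for user, _, flagged in access_log if flagged)

print(traffic.most_common(1))          # [('customers', 3)]
print(sensitive_users.most_common(1))  # [('alice', 2)]
```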

Companies are trying to democratize data use, opening up their data resources to more users. In a data-driven culture, users ranging from data engineers to business analysts, data scientists, and more will require access to an organization's data sets. No one wants to wait weeks or months to get the right access to data, and teams want to avoid becoming overburdened by bottlenecked access requests.

2. Consider Attribute-Based Access Controls Over Role-Based Access Controls

Role-based access control (RBAC) determines access by assigning each user a role or membership in an access group. These groups can be segregated by department, physical location, or even job title. For example, a data engineer may have different privileges than a data scientist. If the data engineer needs access to data beyond their scope, a new role has to be created with a customized set of rules. This is where RBAC becomes cumbersome, as required roles can accumulate exponentially in a phenomenon called role explosion.
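A toy sketch makes the role-explosion problem visible. The role and table names below are hypothetical, and real RBAC lives in the database's grant system rather than a Python dict, but the shape of the problem is the same: any new combination of needs means minting a new role.

```python
# Illustrative RBAC sketch: each role carries a fixed set of table privileges.
ROLES = {
    "data_engineer":  {"raw_events", "pipeline_configs"},
    "data_scientist": {"features", "model_outputs"},
    # A data engineer who also needs feature tables forces a brand-new role;
    # repeated across many users and needs, this is how role explosion begins.
    "data_engineer_plus_features": {"raw_events", "pipeline_configs", "features"},
}

def can_access(role: str, table: str) -> bool:
    """Check whether a role's privilege set includes the requested table."""
    return table in ROLES.get(role, set())

print(can_access("data_engineer", "features"))                # False
print(can_access("data_engineer_plus_features", "features"))  # True
```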

Evolving beyond RBAC's capabilities, attribute-based access control (ABAC) is the next major iteration in access control. Using ABAC, individuals are granted access based on a range of user, data, and contextual attributes. These can include the user's department or management level, objects such as a file, the data's environment, and the action being taken on the data. This way, data teams can grant access for various combinations of attributes rather than creating countless user-specific roles.

ABAC policies are dynamic and evaluated at query time, rather than being applied statically like role-based policies. The initial setup of ABAC may require more time up front than RBAC, since user, data, and environmental attributes must be defined before policies can reference them. In the long run, however, these policies are more granular and flexible, streamlining a data ecosystem's overall performance. ABAC policies are also written semantically, so stakeholders can grasp the full picture regardless of technical expertise.
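The contrast with role-based rules can be sketched as a single function evaluated at query time. The attribute names and the three conditions below are invented for illustration; a production ABAC engine would evaluate declarative policies over attributes supplied by an identity provider and a data catalog.

```python
def abac_allows(user: dict, resource: dict, action: str) -> bool:
    """Evaluate access at query time from user, data, and action attributes."""
    same_department = user["department"] == resource["department"]
    cleared = user["clearance"] >= resource["sensitivity"]
    contractors_read_only = not (user["contractor"] and action == "write")
    return same_department and cleared and contractors_read_only

engineer = {"department": "analytics", "clearance": 2, "contractor": False}
contractor = {"department": "analytics", "clearance": 2, "contractor": True}
dataset = {"department": "analytics", "sensitivity": 2}

print(abac_allows(engineer, dataset, "write"))    # True
print(abac_allows(contractor, dataset, "write"))  # False
```

Note that one policy covers every combination of department, clearance, and contractor status; under RBAC, each combination would need its own role.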

3. Centralize Your Data Policy Management Capabilities

Contemporary companies often have to deal with massive amounts of data across various platforms, tools, and environments. Different cloud providers have their own applications and platforms, each with their own access controls and security methods. This wide range of resources and platforms makes data security and access control very difficult, as controls must be implemented and maintained consistently across all of these environments.

With so many policies required to control and secure access across disparate data ecosystems, organizations are looking to simplify and streamline their data policy management. In fact, a recent study by Immuta and S&P Global's 451 Research found that 52.5% of respondents reported their organization adopted additional policy-based data management controls in pursuit of higher-quality, more consistent data.

Since using a variety of separate security solutions is neither secure nor scalable, modern organizations are opting to implement a single, unified data security platform (DSP). Here, policies are created, stored, and maintained in one place and applied consistently across cloud environments.
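One way to picture the "define once, enforce everywhere" idea is a logical policy rendered into each engine's native grant statement. The policy fields, template strings, and the (simplified) grant syntax below are illustrative assumptions, not any vendor's actual API.

```python
# One logical policy, rendered into each platform's grant syntax (simplified)
# so enforcement stays consistent across cloud engines.
POLICY = {"dataset": "pii_customers", "allow_group": "compliance_team"}

TEMPLATES = {
    "engine_a": "GRANT SELECT ON TABLE {dataset} TO ROLE {allow_group};",
    "engine_b": "GRANT SELECT ON TABLE {dataset} TO `{allow_group}`;",
}

def render_grant(platform: str, policy: dict) -> str:
    """Translate the central policy into one platform's grant statement."""
    return TEMPLATES[platform].format(**policy)

for platform in sorted(TEMPLATES):
    print(render_grant(platform, POLICY))
```

Changing `POLICY` in one place updates every rendered grant, which is the maintenance win a centralized DSP aims for at much larger scale.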

Enabling Data Security in Modern Cloud Platforms

Organizations are in critical need of data security platforms that provide easy and understandable data access policies, allow them to be managed on a central platform, and make use of modern tools to deal with massive amounts of complex data.

Access control is a key part of data security, especially when firms have to deal with hybrid and multi-cloud platforms. A thoughtful approach is integral to mitigating risks and correcting weaknesses in data security. This is where dynamic access control and centralized data security platforms come into play. With powerful DSP choices like Immuta's Data Security Platform, customers can achieve results like reducing policy management burdens by 75x.

To dive deeper into how these best practices for data security in modern cloud platforms can strengthen your data security and ensure better compliance and governance, download TDWI’s Five Best Practices for Data Security in Modern Cloud Data Platforms today!

Five Best Practices for Data Security in Modern Cloud Platforms

Organizations have a responsibility to know what data they have and where it lives, and to prove compliant use of that data. So, how can you get your data security practices up to speed without cutting corners?
