You’ve just adopted a data security solution – congratulations! Now what?
As with any new piece of technology, it can be tempting to jump in feet first to solve all your problems. After all, the sooner you achieve ROI, the better – right? Not necessarily. Often, we see customers struggle in the planning stage of the onboarding process because their eagerness to quickly address all their pain points becomes a case of boiling the ocean.
That’s why we’ve developed solution patterns to help hone focus during onboarding so our customers can achieve quick, high-impact wins while gaining familiarity with our data security platform. In this blog, we’ll dig deeper into why solution patterns are key to successful onboarding and how our experience helping customers across all industries helped us identify three key patterns.
What are Solution Patterns?
Solution patterns are essentially best-practice guides that walk customers through the process of using our product to solve specific problems. They provide a clear “why” behind each decision and guide customers through the choices they need to make in order to be successful. For instance, customers may have a business imperative like improving operational efficiency, mitigating risk, or increasing data’s impact on business performance, which ultimately defines their priorities – that is, their “why.”
This is key because it’s easy to get caught up in the “how” and neglect the rationale behind tackling a project. Having a “why” helps maintain focus and keep distractions at bay. Without one, decisions tend to default to the lowest common denominator – so we aim to raise that bar and give data security work real substance.
Immuta’s Solution Patterns for Data Security
1. “Birthright” Table Access
This solution pattern is the most common across Immuta’s customers. It solves the problem of data entitlement by basing access decisions on user and data metadata. Our approach is unique in that it decouples access decisions from roles, allowing for dynamic, real-time enforcement. When user or data metadata changes, users’ access may also dynamically change – with no manual intervention required.
This helps avoid two common access control obstacles:
- Role explosion, which occurs when static, role-based access controls require a new role to be created for every possible access scenario. In these cases, data teams end up managing hundreds or thousands of user roles in an effort to control access to data in specific tables or databases.
- Manual approval processes, which require humans to be notified of, evaluate, and approve or deny all access requests. This is inefficient, slow, and error-prone, further delaying speed to access and increasing the threat of data users finding risky workarounds.
By decoupling policy logic from user and data metadata, customers accelerate time to data, enforce access control consistently, and reduce the potential for error, favoritism, or workarounds. For Swedbank, this resulted in a 2x growth in data use and a 5x increase in efficiency, while maintaining compliance with regulatory requirements.
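To make the mechanics concrete, here is a minimal Python sketch of metadata-driven access. It is a simplified illustration, not Immuta’s actual policy engine; the departments, clearance levels, and table metadata are all hypothetical:

```python
# A minimal, hypothetical sketch of attribute-based access control (ABAC):
# access is computed from user and data metadata at request time,
# not looked up from pre-assigned roles.

SENSITIVITY_RANK = {"public": 0, "internal": 1, "restricted": 2}

def is_authorized(user: dict, table: dict) -> bool:
    """Allow access when the user's department matches the table's domain
    and the user's clearance covers the table's sensitivity level."""
    same_domain = user["department"] == table["domain"]
    cleared = SENSITIVITY_RANK[user["clearance"]] >= SENSITIVITY_RANK[table["sensitivity"]]
    return same_domain and cleared

alice = {"department": "finance", "clearance": "internal"}
ledger = {"domain": "finance", "sensitivity": "restricted"}

print(is_authorized(alice, ledger))  # False: clearance is too low

# Updating Alice's metadata changes her access automatically --
# no new role is created and no approval queue is involved.
alice["clearance"] = "restricted"
print(is_authorized(alice, ledger))  # True
```

In a role-based model, that same change would mean creating or assigning a new role; here, the policy logic never changes – only the metadata does.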
When evaluating whether this solution pattern is right for you, start with these questions:
- Do you primarily have 1:1 static access scenarios with no overlap? For instance, permission X gives you access to all data tagged Y.
- Do you have multiple variables at play to determine access? For instance, data tagged Y might have multiple different X’s that are contingent on various attributes.
If you’re not sure which scenario applies to you, we suggest following the second path. While it may seem more complicated to get started, in the long run it will provide you with powerful flexibility and scalability in data policy management. The first path is more likely to cause role explosion, particularly if you plan to increase data use over time or want to achieve federated governance.
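Some back-of-the-envelope arithmetic (with made-up numbers) shows why the second path scales better:

```python
# Illustrative arithmetic only: static roles multiply, attributes add.
# The counts below are hypothetical.

departments, regions, sensitivity_levels = 10, 6, 3

# Path 1: one static role per access combination -- role explosion.
roles_needed = departments * regions * sensitivity_levels
print(roles_needed)        # 180 roles to create and maintain by hand

# Path 2: one attribute per dimension, combined by policy logic.
attributes_needed = departments + regions + sensitivity_levels
print(attributes_needed)   # 19 attributes, evaluated dynamically
```

Every new dimension you add (a new business unit, a new jurisdiction) multiplies the role count but only adds a handful of attributes.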
You can read more about how this works in practice here.
2. Prioritizing Global Policies Over Subscription Policies
Our second pattern focuses on enhancing modern data governance without disrupting existing workflows. It’s particularly suitable for scenarios where:
- Established access control processes are already in place and working effectively
- Access is granted to “everyone” and the emphasis is more on what users can access, instead of who has access
- Generic subscription policies are in place, but are not tailored to specific user attributes, roles, or data sensitivity levels
Immuta’s default subscription policy is a key feature of this solution pattern because it does not automatically apply a subscription policy to newly registered data sources, thereby preserving existing data access controls and workflows. Users can continue accessing data as usual while the organization gradually migrates to Immuta subscription policies; once the migration is complete, the old access controls can be removed. This approach ensures a smooth transition to Immuta, minimizing the impact on users and business operations.
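As a rough illustration of the migration flow – a conceptual sketch, not Immuta’s actual API, with hypothetical policy structures and tag names – consider:

```python
# Conceptual sketch of the migration pattern: newly registered data
# sources get no subscription policy by default, so existing warehouse
# grants keep working, while global data policies (e.g., masking
# anything tagged "pii") apply from day one.

GLOBAL_DATA_POLICIES = [{"if_tag": "pii", "action": "mask"}]

def register_data_source(name: str, tags: set) -> dict:
    # Default subscription policy: none, so access gating is untouched.
    return {"name": name, "tags": tags, "subscription_policy": None}

def effective_controls(source: dict) -> list:
    controls = []
    if source["subscription_policy"] is None:
        controls.append("access: existing grants remain in effect")
    for policy in GLOBAL_DATA_POLICIES:
        if policy["if_tag"] in source["tags"]:
            controls.append(f"{policy['action']} columns tagged '{policy['if_tag']}'")
    return controls

customers = register_data_source("analytics.customers", {"pii", "customer"})
print(effective_controls(customers))
# ["access: existing grants remain in effect", "mask columns tagged 'pii'"]
```

The point is the layering: fine-grained controls like masking take effect immediately, while the question of who can access a table is answered by the existing process until you are ready to replace it.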
Customers that start with this solution pattern often want to achieve more nuanced control over data access in a way that’s effective and minimally invasive. However, large customers with very specific data governance requirements and concerns might find that it causes more complexity than efficiency.
When evaluating whether this solution pattern is right for you, start with these questions:
- Do you already have established data access controls and governance workflows in place?
- Are you looking to improve the granularity of your data access controls with minimal disruption?
If you answered yes, using Immuta to prioritize global policies over subscription policies would set you up for a seamless transition and more comprehensive data security.
3. Data Mesh/Domain-Centric Immuta Management
Data mesh is an increasingly popular and buzzworthy architecture paradigm among customers. It promotes decentralization and domain ownership of data, and treats data as a product that can help solve a particular business need. This solution pattern leverages data mesh concepts to enable domain owners to effectively manage permissions on their data, which in turn allows organizations to effectively scale data capabilities, collaboration, and self-service usage.
Data mesh is especially useful for large, complex organizations that typically have more difficulty managing and scaling secure data use due to centralized IT bottlenecks, data silos, lack of clear data ownership and context, and highly scrutinized compliance concerns, among others. However, it does require cross-functional buy-in for a new approach built on four key pillars:
- Domain-oriented ownership, which puts individual domain teams – rather than a centralized data team – in charge of their own data quality, reliability, and accessibility. This ensures data is managed and used in a context-aware way that’s relevant to business needs.
- Data as a product, which refers to creating self-contained, productized data solutions intended to meet data consumers’ needs. Rather than simply building the infrastructure that data consumers can leverage for data initiatives, this approach provides usable products that can be easily discovered and periodically updated.
- Self-service data infrastructure, which gives domain teams the platforms and tools to manage their data pipelines, processing, and other tasks. This reduces reliance on centralized data engineering teams and accelerates the overall data delivery process.
- Federated computational governance, which aims to establish a governance framework that balances domain-level data ownership with centralized oversight. This ensures that data is managed effectively and securely, while fostering collaboration and continuous improvement.
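Here’s a minimal sketch of what that last pillar can look like in code – a hypothetical model, not Immuta’s implementation, with made-up guardrails and domain names:

```python
# Hypothetical sketch of federated computational governance: a central
# team defines non-negotiable guardrails, each domain layers its own
# policies on top, and every access check evaluates both.

GLOBAL_GUARDRAILS = [
    # Example guardrail: PII requires privacy training, in any domain.
    lambda user, data: "pii" not in data["tags"] or user.get("privacy_trained", False),
]

DOMAIN_POLICIES = {
    # The diagnostics domain owns (and can change) its own rules.
    "diagnostics": [lambda user, data: user["department"] == "diagnostics"],
}

def can_access(user: dict, data: dict) -> bool:
    """Access requires passing every global guardrail AND every policy
    owned by the data's domain."""
    checks = GLOBAL_GUARDRAILS + DOMAIN_POLICIES.get(data["domain"], [])
    return all(check(user, data) for check in checks)

analyst = {"department": "diagnostics", "privacy_trained": True}
assay_results = {"domain": "diagnostics", "tags": {"pii"}}
print(can_access(analyst, assay_results))  # True: both layers pass
```

Domain teams get autonomy over their own policies without being able to bypass the central guardrails – the balance the pillar describes.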
In practice, making each of these pillars a reality can be complex. Without a way to easily discover data, dynamically enforce access controls, and monitor how data is used across domains, operationalizing a data mesh will come with many obstacles.
“When you are building something that is distributed in nature … [it] can easily become more complex and more costly than something centralized,” said Snowflake Field CTO Matthias Nicola during a webinar on data mesh implementation. “So, having a focus on cost and simplicity is really important.”
For customers that start with this solution pattern, we focus on making data access policies easy for any stakeholder to understand, write, and enforce, regardless of technical expertise. This increases collaboration and reduces reliance on data engineering resources, ensuring the right access controls are dynamically applied across domains. During Roche Diagnostics’ data mesh implementation, Immuta’s attribute-based access control (ABAC) allowed the team to reduce its number of access groups by 94%. Ultimately, the company securely operationalized more than 200 data products in less than two years.
When evaluating whether this solution pattern is right for you, start with these questions:
- Is your organization large and/or distributed, with multiple lines of business relying on data?
- Do you have – or are you able to secure – cross-functional support for data mesh implementation, including buy-in from executives, data platform teams, data engineers and architects, domain owners, and governance/compliance stakeholders?
If you answered yes to both, the data mesh solution pattern and Immuta’s approach can help organize and facilitate your implementation, ensuring data security is a cornerstone of your data mesh architecture. Hear more from Roche’s Head of Data Management Platforms, Paul Rankin, about his experience here.
What’s Next?
As we continue to refine and expand our solution patterns, we’re confident that they will become invaluable tools for our customers, helping them get the most value out of Immuta and, more importantly, their own data.
To learn more about how customers have leveraged the Immuta Data Security Platform, download your copy of Building an Agile Data Stack for the Top Data Use Cases.
Interested in learning more about which solution pattern would suit you best? Find out more.