Most data leaders, if asked whether their organization has data governance, would say yes. And they’d be right. Over the last decade, governance programs have matured significantly — policies defined, catalogs deployed, stewards hired, controls put in place.
So why do data consumers still wait days, sometimes weeks, to get access to the data they need?
Because governance and delivery are not the same thing. And most organizations have invested heavily in one while leaving the other largely unchanged.
The last mile is where governance stops working
A governance program defines who should have access to data, under what conditions, and what controls apply. That’s essential work. But it doesn’t automatically get data into the hands of the people (and systems) that need it.
Delivery is the last mile. It’s what happens between a policy existing and a person actually being able to query a dataset. And for most organizations, that last mile still looks like this: a ticket gets submitted, approvals bounce between teams, context gets lost, and an engineer eventually makes a manual change. In a recent Immuta survey of more than 400 data professionals, 38% of practitioners said their organization still handles access requests through a ticket-based system, and 50% reported burnout.
The rules are right. The process is slow. That’s a delivery problem, not a governance problem.
The two got decoupled for a reason, but that reason no longer holds
Historically, tolerating slow delivery made sense. The number of data consumers was small. Access requests were infrequent. And the risk of getting it wrong felt higher than the cost of the delay.
So organizations invested in the governance layer: the policies, the controls, the compliance programs. They left the access delivery process largely manual. Tickets were fine because the volume was manageable.
But that calculus has shifted. The number of data consumers has grown significantly. Business users expect faster answers. Data science teams are running more workloads. And now, AI agents are entering the picture, operating continuously, generating access requests at machine speed, and not pausing for human review.
The volume and velocity of data access demand has changed. The delivery model largely hasn’t.
There’s a better model, and it’s built for scale
Fixing the last mile isn’t about moving faster through the same process. It’s about replacing the process with something built for scale.
Data provisioning is that model. It combines two approaches that work together: automatic access when predefined criteria are met (role, team, purpose), and intelligent request workflows for everything else. Those workflows capture context, evaluate requests against policy, and auto-approve what’s safe.
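To make the two paths concrete, here is a minimal sketch in Python. Every name in it is hypothetical (the Principal shape, the AUTO_GRANT_RULES table, the evaluate_policy stand-in); it illustrates the pattern, not any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class Principal:
    name: str
    role: str
    team: str
    purpose: str

@dataclass
class AccessDecision:
    granted: bool
    reason: str
    needs_review: bool = False

# Hypothetical rules: attribute criteria that qualify for automatic access.
AUTO_GRANT_RULES = {
    "sales_pipeline": {"role": "analyst", "team": "revenue"},
    "web_events": {"role": "data_scientist", "purpose": "experimentation"},
}

def matches(principal: Principal, criteria: dict) -> bool:
    """True if the principal satisfies every attribute in the criteria."""
    return all(getattr(principal, attr) == value for attr, value in criteria.items())

def request_access(principal: Principal, dataset: str, context: str) -> AccessDecision:
    # Path 1: automatic access when predefined criteria are met.
    criteria = AUTO_GRANT_RULES.get(dataset)
    if criteria and matches(principal, criteria):
        return AccessDecision(granted=True, reason="auto-granted: criteria matched")

    # Path 2: an intelligent request workflow for everything else.
    # The request carries context, is evaluated against policy, and is
    # auto-approved when the policy deems it safe; only the rest reaches a human.
    verdict = evaluate_policy(principal, dataset, context)  # hypothetical policy engine
    if verdict == "safe":
        return AccessDecision(granted=True, reason="workflow: auto-approved by policy")
    return AccessDecision(granted=False, reason="workflow: routed to human reviewer",
                          needs_review=True)

def evaluate_policy(principal: Principal, dataset: str, context: str) -> str:
    """Stand-in for a real policy engine; escalates only sensitive datasets."""
    sensitive = {"customer_pii", "payment_records"}
    return "escalate" if dataset in sensitive else "safe"
```

The point of the split is that the common case never touches a queue: a human reviewer sees only the requests that policy cannot safely decide on its own.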
The result is a system where delivery is continuous, not episodic. Compliance is maintained as access is granted, adjusted, and revoked, not reviewed periodically after the fact. And governance doesn’t slow down when demand spikes; it scales with it.
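“Continuous, not episodic” also has a revocation side. A sketch of what that could look like, reusing the hypothetical AUTO_GRANT_RULES and matches helper from above: every active grant is re-checked against current attributes on a rolling basis, so access is withdrawn when the criteria that justified it stop holding, rather than waiting for the next quarterly review.

```python
def reconcile_grants(active_grants, directory):
    """Re-evaluate every active grant against current principal attributes.

    active_grants: list of (principal_name, dataset) pairs currently in force.
    directory: maps principal names to up-to-date Principal records.
    Returns the grants to revoke; meant to run continuously, not quarterly.
    """
    to_revoke = []
    for principal_name, dataset in active_grants:
        principal = directory.get(principal_name)
        criteria = AUTO_GRANT_RULES.get(dataset)
        # Revoke if the principal has left the org, or no longer meets the
        # criteria that justified the grant in the first place.
        if principal is None or (criteria and not matches(principal, criteria)):
            to_revoke.append((principal_name, dataset))
    return to_revoke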
AI agents don’t wait in line
AI agents don’t file tickets. They act on behalf of humans, need data in real time, and will increasingly account for a significant share of access requests across the enterprise. Organizations that haven’t closed the gap between governance and delivery will feel that acutely, not as a future problem, but as a present one.
The good news: the same provisioning model that solves access for humans extends naturally to agents. The delivery infrastructure you build now is the same one that governs AI access later.
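In the hypothetical terms of the sketches above, an agent is simply another principal: it carries its own identity plus the purpose of the human it acts for, and it flows through the same request_access path, so nothing in the delivery layer has to be rebuilt for it.

```python
# An agent is just another principal: its own identity, with purpose and
# context inherited from the human it acts on behalf of.
agent = Principal(
    name="forecasting-agent-01",
    role="data_scientist",
    team="revenue",
    purpose="experimentation",
)

decision = request_access(agent, "web_events", context="on behalf of j.doe: Q3 forecast")
print(decision)  # auto-approved, or routed through the same workflow as a human request
```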
If you’re looking to move from governance program to delivery model, The Ultimate Guide to Data Provisioning walks through exactly what that looks like in practice, from provisioning maturity and operating models to governing access for AI agents.