AI and the Future of Finance

Earlier this month, I had the privilege of attending a small event hosted by the World Economic Forum on the future of AI in finance. The event brought together roughly 30 experts in finance, technology, and law to talk about the adoption of AI, its potential downsides, and the major obstacles standing in its way.

The discussion was fantastic, and I thought I’d take to the Immuta blog to highlight a few key takeaways.

The (Many, Many) Obstacles to AI

First is the sheer number and variety of obstacles standing in the way of AI. There is, for example, the issue that organizations have no clear “owner” of AI – that is, no single person responsible for implementing it consistently across the organization. This is a huge problem because many different groups may have interests in (or opposition to) its adoption, and it’s not clear who gets the credit (or the blame) for AI. This, in turn, makes AI incredibly difficult to implement at scale.

From Immuta’s perspective, we’ve seen this in various forms. There are, for example, a host of ways we’ve seen organizations attempt to become data-driven, from empowering a strong Chief Data Officer to creating “centers of excellence” for data to sponsoring different types of data labs.

But we’ve also seen many attempts fall flat, usually because executive sponsorship is lacking. When it comes to putting new, potentially transformative technology to work, there’s just no substitute for sustained executive attention and sponsorship.

A few other major problems that arose during the discussion:

  • Bad data: For a technology like AI, a key limiting factor is the quality and timeliness of the data an organization has on hand. If the data itself is poor quality, or if it’s stale by the time data scientists can access it, data science programs will be significantly hampered.
  • Wasting time: Put another way, we talked about high-value people doing low-value tasks. One memorable quote was that, when it comes to data science, “the crap job is the main job” – which is to say that data scientists spend far too much time finding and accessing the right data, rather than building models with that data, which is what they’re trained (and highly compensated) to do.
  • Data access: Silos across lines of business, legacy architecture issues, myriad modeling tools, and more all make data access slow and inefficient. This often underlies the two problems listed above.
  • Regulatory approval: In regulated industries, interfacing with regulators while using new technology can be incredibly difficult. Proving that risk has been properly managed, for instance, and doing so in a way that regulators can understand and trust, was seen as a major barrier.
  • Explainability: This problem is something we’ve addressed in depth at Immuta. The basic idea is that AI can, in some circumstances, be so complex that its inner workings are not completely clear to the human mind, and this opacity can be a challenge in a variety of contexts. This challenge is, to some extent, never going away, but there’s a lot that can be done to make it easier to confront.
  • Hidden biases: All data is biased to some degree, in the sense that it is an imperfect reflection of the real world – that isn’t a new problem in data science. Hidden biases, on the other hand, are a serious problem, because they aren’t known and therefore can’t be addressed. The recent case of Amazon’s recruiting tool that penalized female candidates is just one of many such examples.
  • Ethics: There’s more to using AI responsibly than simply complying with the law – oftentimes, laws don’t fully address the range of scenarios we need to avoid when using the technology. So thinking about AI through an ethical lens, and being confident that it is in fact being deployed in ways that uphold our values, is another major challenge.
  • Data silos: This problem is so big, and came up so frequently, that I figured I’d list it again. Getting access to siloed data is simply a central challenge, and a major reason for the slow or improper adoption of AI.

AI Is Already Here

Another key takeaway – and one that’s, frankly, a bit counterintuitive for non-technical folks – is that all these problems have very little to do with the actual technology behind AI itself. That is, the modeling itself is the easy part.

It’s managing the data the models require, monitoring their outputs, and running all the larger organizational processes they depend on that demand the most effort and attention. Put another way, the productization of AI is fairly easy, but managing its deployment is another challenge altogether (and a much harder one).

AI Enablement vs. AI Use

There are really two ways to embrace AI: enablement and use. The use side has, for the reasons described above, become the relatively straightforward part of the equation.

But it’s enablement where organizations are struggling the most. The barriers to actually enabling AI – sustaining it over time and across teams, and managing all the attendant risks – are very high.

Where Immuta Fits In

If you’ve read this far, you probably already have a sense of what we at Immuta do: help make data access and algorithmic risk management easy across large organizations. So it shouldn’t come as a surprise that almost all of these problems touch Immuta in some profound way – we were custom-built to solve many of them. We chose virtualization, for example, specifically to address data silos and the problem of a single data lake turning into many data ponds (and eventually a data swamp) over time. We built a data governor role into our platform to ensure that compliance actually speeds up the process of accessing data rather than hindering it. Almost every choice we made when constructing our platform was based on decades of experience conducting high-impact data science in regulated environments – on actually enabling AI within large organizations.

I left the discussion more confident than ever that our platform sits at the center of the biggest challenges facing the adoption of AI. But there’s also a huge amount of work to be done, which is why I’m so excited to be focused on enhancing our risk management features in our next few releases.

So expect more exciting news from us in the coming months.

*******

Click here for a 1:1 demo to see how Immuta’s platform can help you rapidly personalize data access and dramatically improve the creation, deployment, and auditability of machine learning and AI.