New York City Aims to Bring Accountability to Algorithms with New Bill

In December, New York City unanimously passed a bill to bring transparency to the way the city’s government agencies use algorithms to drive decisions. This is a topic we care deeply about: at the core of Immuta’s technology, and of our beliefs, is the ability to quickly connect to and control data for advanced analytics. We aim to help accelerate algorithm development through better data management.

Dubbed the algorithmic accountability bill, the first-of-its-kind piece of legislation will establish a task force to study how algorithms are being used by city agencies to make decisions that impact the citizens of New York, while exploring how to provide greater visibility into algorithmic decision making for the public. The task force will largely focus its efforts on investigating algorithmic bias and whether any of the models are discriminating against people based on age, race, religion, gender, sexual orientation or citizenship status.
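
To give a sense of what investigating algorithmic bias can look like in practice, here is a minimal, hypothetical sketch of one common check: comparing a model’s positive-decision rates across groups, sometimes called the “80% rule” for disparate impact. The groups, data, and threshold below are illustrative assumptions, not anything specified in the bill.

```python
# Hypothetical disparate-impact check: compare a model's
# positive-decision rates across demographic groups.
import pandas as pd

# Toy decision log; in practice this would come from an agency's model
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Positive-decision rate per group
rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: least-favored rate over most-favored rate
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:  # the 0.8 cutoff is a common rule of thumb, not law
    print("Potential disparate impact: review the model's decisions")
```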

The idea behind the bill is undoubtedly a step in the right direction toward shedding light on what are frequently described as “black box” algorithms. By providing more visibility into how predictive models arrive at specific outcomes, it may one day be easier for decision makers to either justify or refute the subsequent results. From a technical perspective, fully understanding a model’s internal reasoning may be a long way off, but with interpretability frameworks like LIME and SHAP, we have reason to be cautiously optimistic.
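
To make that concrete, here is a minimal sketch of a post-hoc explanation with SHAP. It assumes the open-source `shap` Python package and uses a synthetic dataset and a scikit-learn model as stand-ins; nothing here is specific to the city’s systems.

```python
# Minimal SHAP sketch: explain one prediction from a "black box" model
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on synthetic data (a stand-in for a real system)
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer assigns each feature an additive contribution that
# pushes a single prediction away from the dataset's baseline value
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# Per-feature contributions for the first prediction
for i, contribution in enumerate(shap_values[0]):
    print(f"feature_{i}: {contribution:+.3f}")
```

The point isn’t the numbers themselves, but that each decision can be decomposed into per-feature contributions that a reviewer, or a task force, can inspect.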

Overall, the bill represents a stepping stone and likely lays the foundation for future regulations adopted by enterprises and other governments. People simply want to know when these models are making decisions that affect them, and how those models are using their data.

As always, there’s a flip side to this coin. As our Chief Privacy Officer & Legal Engineer, Andrew Burt, noted in his New York Times op-ed this month, “Leave A.I. Alone,” the general desire for broad legislation around A.I. may be premature.

For starters, experts can’t even agree on what constitutes A.I., largely because defining A.I. without reference to what the technology is trying to accomplish is fundamentally shortsighted. For example, A.I. in the financial sector differs greatly from algorithmic technology in the healthcare field, so why would we impose the same regulations on both? To put it simply, sweeping regulations on A.I. wouldn’t be ideal.

This isn’t to say that what New York City is looking to accomplish with the algorithmic accountability bill is flawed. It’s a truly groundbreaking law, and every enterprise and government organization should aim to reduce algorithmic bias and provide as much visibility into predictive models as possible.

But before government leaders move toward all-encompassing regulation that could limit A.I.’s potential, we should figure out how to segment A.I. into use cases that are as clearly defined and specific as possible. That way, business leaders and governments can use algorithms to drive favorable outcomes and control possible dangers without putting the collection of technologies that make up “A.I.” under one potentially restrictive umbrella.

More information about the algorithmic accountability bill can be viewed here, and you can read Andrew Burt’s New York Times op-ed, “Leave A.I. Alone” here.