A few weeks back, we attended The World Medical Innovation Forum – Artificial Intelligence in Boston, where some of the most innovative minds in technology and healthcare gathered to discuss the massive wave of disruptive innovation in the space. The focus on AI, and specifically on how to turn its promise in healthcare into better patient outcomes, was inspiring.
As we discussed the opportunity of AI in healthcare, there was a recurring secondary theme—data privacy and governance.
Data privacy is at the center of Immuta’s business and we take a lot of pride in working with companies to ensure quick, personalized data access for improved and compliant data science initiatives. So, we were excited to see data privacy take center stage at the World Medical Innovation Forum.
There’s no denying the growing role AI will play in healthcare—from more affordable care to rapid data processing—but one of the biggest roadblocks to adoption is how to protect and control patient data. While a lack of confidentiality and the risks of operationalizing and sharing patient health data present legitimate concerns, the solution can be as simple as applying policies to how data is handled and to the algorithms that data feeds.
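To make "applying policies around the handling of data" concrete, here is a minimal sketch of what such a policy could look like in code. This is a hypothetical illustration, not Immuta's actual API: a simple rule that redacts direct identifiers from a patient record before it ever reaches a model.

```python
def mask_record(record, policy):
    """Return a copy of the record with fields redacted per the policy.

    This is an illustrative sketch of policy-based data handling,
    not a real product API: the policy simply names fields to redact.
    """
    masked = {}
    for field, value in record.items():
        if field in policy["redact"]:
            masked[field] = "REDACTED"  # identifier never reaches the algorithm
        else:
            masked[field] = value       # clinical values pass through untouched
    return masked


# Hypothetical policy and patient record for demonstration
policy = {"redact": {"name", "ssn"}}
patient = {"name": "Jane Doe", "ssn": "123-45-6789", "glucose_mg_dl": 95}

print(mask_record(patient, policy))
# {'name': 'REDACTED', 'ssn': 'REDACTED', 'glucose_mg_dl': 95}
```

The point is not the code itself but the separation it enforces: the policy is declared in one place and applied uniformly wherever data flows toward an algorithm, rather than being re-implemented ad hoc in every pipeline.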
As healthcare organizations rely more heavily on algorithms in clinical decision making, they will be required to demonstrate exactly how and why those decisions are made, especially where patient care is concerned. For example, what happens when advanced algorithms begin predicting ailments and diseases? Our CPO Andrew Burt recently co-authored a piece in Harvard Business Review on the impact on healthcare when algorithms begin making diagnoses, and on the legal obligations that may arise from relying on machine learning models for patient care.
But the biggest hurdle is how much of a “black box” machine learning models remain. The future of AI and advanced analytics in healthcare depends on how well that technology can be controlled. For example, do you know what data is feeding the algorithms making decisions?
This theme of data governance came to the fore during a panel session on AI and genetic sequencing, where Heidi Rehm, PhD, noted that while researchers are collecting far more genetic data from patients, their ability to process and act on that data still lags behind. “We will never be able to disseminate these technologies for broad medical use if we can’t advance what we are doing today in terms of understanding the genetic and genomic variation,” she said.
Heidi is right, and here at Immuta, we are keen to help organizations create policies for genomic data.
Consider Ancestry.com, 23andMe, and other genetic testers, all of which have mountains of data but are, we’re sorry to say, vulnerable to becoming the next data privacy fiasco. Genetic testing services have been gathering consumer genetic data for years, and the benefits of genetic testing and ancestry reviews risk being outweighed by the potential downsides:
- What happens when insurance companies embed personal genetic data into the algorithms that determine insurance coverage or pricing?
- Would it be fair to deny a child insurance coverage based on genetic background?
This could very well be the next great legal battleground on the data privacy front, and it’s one we’re watching closely and committed to addressing head-on.
Like everyone who attended the World Medical Innovation Forum, we’re excited about how AI will transform healthcare. But as health organizations increasingly feed algorithms with data, they need to be sure that privacy is enforced and that regulatory requirements are met.
The power underlying all of this is the data being used to feed the algorithms making the decisions. And this is precisely where regulation should focus.