As new technologies continue to grow, it’s important to ensure that they do so safely and legally. Artificial intelligence (AI) software such as ChatGPT, a popular chatbot that examines prompts and answers queries, is trained on massive volumes of data. It’s therefore key that this software has the right to access that data, and only in compliance with laws and data regulations. Today, these regulations must evolve to keep pace with new technologies. Sensitive data should be accessed only by properly qualified professionals, for justified and legal purposes – when it’s not, problems can follow.
Our most recent Immuta Unlocked Newsletter dives deeper into the topics, themes, and trends driving compliant modern data use. This April, we highlight the privacy regulations that are evolving to eliminate threats and keep individuals, their data, and their countries safe.
Here is a roundup of the key stories we followed in April:
Politicians Push for AI Software Regulations
Article: US Senate leader Schumer calls for AI rules as ChatGPT surges in popularity (Reuters)
OpenAI’s ChatGPT has exploded in popularity, becoming the fastest-growing consumer application in history with over 100 million active users each month. But the massive user base and advanced technology have raised safety concerns among government officials around the world. As a result, politicians want to be sure that regulations on AI can keep pace with the development of the software. U.S. politicians, for example, are putting forth a bipartisan effort to adopt new regulations that can keep people and their data safe.
Senate Majority Leader Chuck Schumer (D) has launched an effort to establish rules on artificial intelligence to address national security and education concerns as ChatGPT continues to grow. He drafted and circulated a “framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. leads in this transformative technology.” Schumer’s proposal would require companies to allow independent experts to review and test AI technologies ahead of public release or update, and give users access to findings. It presents an intriguing potential step towards achieving a balance between AI development and public data security and privacy.
Mandatory Cyberattack Reporting to Take Effect
Article: Reporting Cyberattacks Will Soon Be Mandatory. Is Your Company Ready? (Harvard Business Review)
Many countries, including the United States, have imposed mandatory cyber incident reporting requirements in recent years. Now, the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency is drafting the regulations necessary to enshrine these requirements into law by mid-2025. The intent behind these requirements is to increase the government’s visibility into the scope, scale, and intensity of malicious cyber activity in the country.
Questions about liability or regulatory penalties loom large in discussions about reporting cyber incidents. However, increased cyber incident reporting can also be beneficial for businesses. The most obvious benefit is direct assistance with incident response. Governments can’t assist companies if they don’t know about an incident. Governments could also use reported data to develop a better understanding of the threat and detect trends or changes in the environment. Reporting rates under existing voluntary regimes are typically very low. But if the regulations are set up properly, businesses could reap clear benefits. Tools such as incident detection monitoring can play an important role in responding to threats, and provide timely insights into risky user data access behavior.
Concerns Regarding How AI Software Collects Data
Article: Do AI’s backers care about data law breaches? (The Guardian)
The latest generation of AI systems is trained on enormous data sets. However, there’s controversy over where these data sets actually come from, and questions of whether AI systems such as ChatGPT are operating in breach of personal data and copyright law. The data training AI is likely to contain anything from images scraped from the internet to pirated ebooks to the whole of English-language Wikipedia. It’s important that data is properly and legally collected in order to train these systems, and without certainty around current data collection models, governments are beginning to ban the products until proper regulations are enacted.
In Italy, ChatGPT has been banned from operating after the country’s data protection regulator said there was no legal basis to justify the collection and “massive storage” of personal data in order to train the software. Shortly after, the Canadian privacy commissioner followed suit with an investigation into OpenAI in response to a complaint alleging “the collection, use, and disclosure of personal information without consent.” Copyright lawsuits and regulator actions against OpenAI are hindered by the company’s secrecy about its training data. OpenAI CEO Sam Altman said: “We think we are following all privacy laws.” However, the company has refused to share any information about what data was used to train ChatGPT. Ultimately, more organizations are looking to use your data – in some cases sensitive data – to train their intelligence models, so it’s important that your data is secured and used only by those with granted access.
Insider Threat Causes Leak at the Pentagon
Article: Pentagon leaks by a junior sysadmin put the spotlight back on insider threat (The Stack)
This month, it emerged that a junior systems administrator (sysadmin) in the Massachusetts Air National Guard was the chief suspect after a series of top-secret documents were leaked online. Airman Jack Teixeira is understood to have been a “Cyber Transport Systems Journeyman” with access to the sensitive Joint Worldwide Intelligence Communications System (JWICS). Pentagon officials say that the number of people with the same level of “Top Secret” access as Teixeira is in the thousands, if not tens of thousands. “There’s the obvious question of why someone in this relatively low rank and obscure corner of the military could have access to such critical secrets, but such an extraordinary array of them, which could have no possible bearing on his job,” said Glenn Gerstell, a former general counsel of the NSA, in the NYT. The Pentagon’s cybersecurity is rendered challenging by the scale and complexity of its infrastructure. It’s important to have scalable controls in place, so that as an organization grows, a data security platform grants users access to only the data they need.
It’s important for any organization – particularly in government – to give the right people access to the right data. It’s also important that those granted access to certain types of information are qualified to handle them, because if they are not, they may become a liability and potential insider threat risk. Chief information security officers (CISOs) in the Department of Defense and across industries are refocusing on insider threats, which remain a persistent security challenge. Organizations that want to mindfully protect sensitive information – whether it be intelligence data or intellectual property – will need to take a fresh look at their insider threat controls and broader operational security standards.
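The principle behind these controls – clearance alone is not enough, users should also have a demonstrated need-to-know – can be illustrated in code. The sketch below is a minimal, hypothetical example of an attribute-based access check, not any specific product’s API; the clearance levels, roles, and `can_access` helper are all illustrative assumptions.

```python
# Minimal sketch of a least-privilege, attribute-based access check.
# All names (clearance levels, roles, datasets) are illustrative only.
from dataclasses import dataclass

CLEARANCE_LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

@dataclass(frozen=True)
class User:
    name: str
    clearance: str        # e.g. "top_secret"
    roles: frozenset      # e.g. frozenset({"it_support"})

@dataclass(frozen=True)
class Dataset:
    name: str
    classification: str   # sensitivity label on the data
    allowed_roles: frozenset  # roles with a need-to-know for this data

def can_access(user: User, dataset: Dataset) -> bool:
    """Grant access only if BOTH conditions hold:
    1. the user's clearance meets the data's classification, and
    2. the user holds a role with a need-to-know for this dataset."""
    cleared = CLEARANCE_LEVELS[user.clearance] >= CLEARANCE_LEVELS[dataset.classification]
    need_to_know = bool(user.roles & dataset.allowed_roles)
    return cleared and need_to_know

# A high clearance alone is not enough without need-to-know:
admin = User("sysadmin", "top_secret", frozenset({"it_support"}))
intel = Dataset("intel_reports", "top_secret", frozenset({"intel_analyst"}))
print(can_access(admin, intel))  # False: cleared, but no need-to-know
```

Under a policy like this, a sysadmin with top-secret clearance but no analyst role would still be denied access to intelligence reports – the kind of scoping that limits the blast radius of an insider threat.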
Keeping Up with Key Stories
Examining the developing technologies collecting our data, April’s Key Stories demonstrate the importance of complying with evolving regulations to ensure data is accessed safely and legally.
To stay up-to-date on the latest in data security and beyond, subscribe to the Immuta Unlocked Newsletter today. Each month, we include Key Stories in the newsletter to provide you with access to the latest news in data. We’ll see you next month!