OpenAI was breached last year but kept it a secret
OpenAI suffered a security breach last year that exposed details about its AI technologies, according to a recent New York Times report.
The breach, which targeted the company's internal messaging systems, did not expose customer data or core AI code. However, it raises serious concerns about OpenAI's security practices and potential national security implications.
Reports suggest the breach occurred in early 2023 and centered on an online forum where OpenAI staff discussed the company's products and technologies. While OpenAI has not confirmed the details, the company reportedly disclosed the incident to its board and employees in April of that year but chose not to make the breach public.
OpenAI's justification for its silence was that no customer data had been stolen and that the hacker was believed to be a lone actor. That reasoning has drawn criticism from security experts and former OpenAI employees, including Leopold Aschenbrenner, who voiced concerns about national security vulnerabilities.
"The messaging systems compromised could have just as easily been infiltrated by a nation-state," Aschenbrenner said, highlighting the potential for more sophisticated attacks.
Dr. Ilia Kolochenko, partner and cybersecurity practice lead at Platt Law LLP, echoed these concerns. "The global AI race has become a matter of national security for many countries, therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented startups to tech giants like Google or OpenAI," he said.
According to Dr. Kolochenko, hackers target valuable information such as research data, large language models, and client details. In extreme cases, such breaches could allow attackers to maintain ongoing control over a company's systems or even shut down its AI operations entirely.
"More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently," he added.
This news comes on the heels of OpenAI's earlier report about shutting down accounts linked to covert influence campaigns. The campaigns, believed to be linked to Russia, China, Iran, and Israel, aimed to manipulate public opinion using OpenAI's AI models.
OpenAI also recently established a Safety and Security Committee to oversee risk management for its AI projects and operations. The committee is expected to present its findings and recommendations to the board in September.