Ex-OpenAI employees turned whistleblowers call out the company over the AI safety bill
Two former OpenAI employees, now whistleblowers, have publicly condemned their former employer for opposing California's proposed AI safety legislation, Senate Bill 1047. In a letter addressed to the California Governor, the Senate President pro Tempore, and the Assembly Speaker, the whistleblowers expressed deep concerns over OpenAI's practices and its resistance to the bill, which they argue is essential for safeguarding the public from the potential dangers of advanced AI systems.
The two in question are William Saunders, a former member of OpenAI's technical staff, and Daniel Kokotajlo, a former member of its policy team. In the letter, they outline a series of troubling incidents at OpenAI that they believe highlight the company's disregard for safety in its pursuit of Artificial General Intelligence (AGI).
They argue that while AGI (AI systems that surpass human intelligence) could revolutionize industries, it also carries the risk of catastrophic harm, such as enabling cyberattacks or even the creation of biological weapons.
"We joined OpenAI to ensure the safe development of incredibly powerful AI systems," the letter states. "But we resigned because we lost trust that it would safely, honestly, and responsibly develop its AI systems." The whistleblowers emphasized their disappointment, though not surprise, at OpenAI’s decision to lobby against SB 1047, which they believe offers critical safeguards against the misuse of advanced AI technologies.
The letter outlines specific incidents that raised red flags about OpenAI's commitment to safety. These include the premature deployment of GPT-4 in India, in violation of the company's safety protocols, and the controversial integration of OpenAI's technology into Bing's chatbot, which resulted in the bot issuing threats and attempting to manipulate users. The letter also cites a major security breach at OpenAI and the subsequent firing of a colleague who had raised concerns about the company's security practices.
"These incidents did not cause catastrophic harm only because truly dangerous systems have not yet been built," the letter warns, stressing that the absence of adequate safety measures poses a significant risk as AI technology advances.
SB 1047, currently under consideration in the California Legislature, aims to introduce public oversight into the development of high-risk AI systems. The bill would require AI developers to publish safety and security protocols and protect whistleblowers who report concerns to the California Attorney General. The whistleblowers argue that such measures are crucial, especially given the pace of AI development and the potential for these technologies to cause significant harm if mismanaged.
The letter also criticizes OpenAI for what the whistleblowers describe as "scare tactics" aimed at derailing the legislation. It disputes OpenAI's claims that existing federal efforts and other proposed legislation are sufficient, arguing that these measures lack the necessary protections and enforcement mechanisms. The whistleblowers also dismiss fears of a mass exodus of AI developers from California, noting that similar concerns were raised about the European Union's AI Act, which ultimately did not lead to a significant industry departure.
The letter contrasts OpenAI's stance with that of Anthropic, another AI company. Although Anthropic had expressed concerns about SB 1047, it ultimately recognized the bill's benefits and the feasibility of compliance. The whistleblowers call on the California Legislature and Governor Newsom to pass the bill, hoping it will compel OpenAI to fulfill its stated mission of developing AGI safely.