OpenAI Safety Experts Are Fleeing The Company

Sam Altman, CEO of Y Combinator. YouTube
Something smells rotten at OpenAI, starting with Sam Altman. Ever since Altman was fired (and re-hired) in November 2023, emphasis on safety has plummeted: Altman restructured the board, removing those who were concerned about AI safety. Since then, almost half of the safety department has fled the company, citing concerns that AI safety is no longer a priority.

This wild-west approach to developing software that will change the world is a recipe for disaster. So far, Altman has smooth-talked around the issue to assure people he is trustworthy. I don’t buy it. ⁃ Patrick Wood, TN Editor.

OpenAI, the developer of the popular AI assistant ChatGPT, has seen a significant exodus of its artificial general intelligence (AGI) safety researchers, according to a former employee.

Fortune reports that Daniel Kokotajlo, a former OpenAI governance researcher, recently revealed that nearly half of the company’s staff focused on the long-term risks of superpowerful AI have left in the past several months. The departures include prominent researchers such as Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and cofounder John Schulman. These resignations followed the high-profile exits in May of chief scientist Ilya Sutskever and researcher Jan Leike, who co-headed the company’s “superalignment” team.

OpenAI, founded with the mission to develop AGI in a way that “benefits all of humanity,” has long employed a significant number of researchers dedicated to “AGI safety” – techniques for ensuring that future AGI systems do not pose catastrophic or existential dangers. However, Kokotajlo suggests that the company’s focus has been shifting towards product development and commercialization, with less emphasis on research to ensure the safe development of AGI.

Kokotajlo, who joined OpenAI in 2022 and quit in April 2024, stated that the exodus has been gradual, with the number of AGI safety staff dropping from around 30 to just 16. He attributed the departures to individuals “giving up” as OpenAI continues to prioritize product development over safety research.

The changing culture at OpenAI had become apparent to Kokotajlo even before the boardroom drama of November 2023, when CEO Sam Altman was briefly fired and then rehired, and three board members focused on AGI safety were removed. Kokotajlo felt that this incident sealed the company’s direction, with no turning back, and that Altman and president Greg Brockman had been consolidating power ever since.

While some AI research leaders consider the AI safety community’s focus on AGI’s potential threat to humanity to be overhyped, Kokotajlo expressed disappointment that OpenAI came out against California’s SB 1047, a bill aimed at putting guardrails on the development and use of the most powerful AI models.

Despite the departures, Kokotajlo acknowledged that some remaining employees have moved to other teams where they can continue working on similar projects, and the company has also established a new safety and security committee and appointed Carnegie Mellon University professor Zico Kolter to its board of directors.

As the race to develop AGI intensifies among major AI companies, Kokotajlo warned against groupthink and the potential for companies to conclude that their success in the race is inherently good for humanity, driven by the majority opinion and incentives within the organization.

Read full story here…

About the Editor

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.