The chief executive of the company responsible for ChatGPT, who is widely considered the face of generative AI, has been pushed out of the firm he had helped propel to the forefront of the tech world.
But for those concerned with AI safety, the focus is not so much on what Sam Altman does next as on what the coming weeks might reveal about the technology behind ChatGPT.
The OpenAI board, which announced the decision on Friday, said Altman had not been “consistently candid in his communications” with its members.
This hindered the board’s ability to exercise its duties and led it to lose confidence in his ability to lead the company, the remaining members said in a blog post.
They did not go into further detail.
Greg Brockman, another of the co-founders, left the company after Mr Altman’s dismissal, and three of OpenAI’s senior researchers followed.
The dismissal was only possible because of OpenAI’s unusual governance structure, which stems from its origin as a non-profit organisation that later created a capped-profit subsidiary limiting investors’ returns.
This non-profit structure means board members are bound by a fiduciary duty to create safe artificial general intelligence that is “broadly beneficial”, rather than to maximise value for shareholders as most corporate boards are.
Concerns and questions surround sudden departure
OpenAI has a tight-knit board of just six members.
Alongside two of the co-founders, Mr Altman and Mr Brockman, it comprised Ilya Sutskever, the company’s chief scientist and the only founder remaining on the board, and three non-executive directors.
They are tech entrepreneur Tasha McCauley, Quora chief executive Adam D’Angelo, and Helen Toner, director of strategy and foundational research grants at a technology think tank.
If the remaining members are justifying Mr Altman’s dismissal on the grounds that he concealed information from them, it raises concerns about the decisions the board made on that basis and the consequences those decisions could have for the safe use of AI and its impact on society.
Members of the public have been using AI to help them with everyday tasks for decades, whether through a virtual assistant like Apple’s Siri or a satellite navigation system giving directions.
However, it was the launch of ChatGPT a year ago that propelled AI to the forefront of public consciousness as a powerful technology to be reckoned with, and introduced many to the concept of generative AI, which can produce text, images and many other forms of content.
Mr Altman was the public face of that charge and a key driver of OpenAI’s positioning at the forefront of generative AI.
No board would have dismissed him lightly, which raises the question: what decisions did the board make while, as it now believes, Mr Altman was not being candid with it, and what impact could those decisions have?