OpenAI Mess

The conflict between OpenAI’s Board of Directors and Sam Altman was a complex and controversial event. Altman was fired by the Board and then reinstated after several Board members resigned. A New Yorker article provides a detailed account of the situation, including the role of the OpenAI Charter and the differing perspectives of Altman and the Board. Educators may be interested in this controversy because OpenAI is responsible for ChatGPT, a service that has received a good deal of classroom attention. For some, the controversy raised concerns about whether ChatGPT was an appropriate tool.

The OpenAI Charter states that the organization’s mission is to ensure that artificial general intelligence benefits all of humanity and is developed in a careful and safe manner. The Board felt that Altman was not adhering to this mission and was prioritizing the development of AI over safety concerns. Altman, on the other hand, argued that the Board was being too cautious and that the only way to make progress in AI was to move quickly.

Fast progress required funding, which Altman and the developers thought could be provided by the sale of focused ChatGPT applications and by copilot development in collaboration with Microsoft. Microsoft initially invested $1 billion in OpenAI and has now invested a total of $13 billion.

Microsoft was interested in the collaboration to support the development of its copilot products, and its view of how AI progress should be achieved would not exactly meet the Charter’s expectations of careful and safe development. In Microsoft’s way of thinking, the copilot and AI capabilities provided to the public would not have to be perfect, and users would understand this was the case. Users would understand that AI recommendations should be treated as suggestions requiring evaluation, and these same users would report when recommendations were flawed. Deployment “in the wild” under such circumstances was the best way to discover problems the developers could then fix.

The lack of trust initially given as the explanation for Altman’s termination was, the article reports, related to a conflict between Altman and Board member Helen Toner. Toner had written a paper critical of OpenAI. In comparing notes, Board members discovered that Altman had claimed that other members suggested getting rid of Toner, which was evidently not the case (more on the Toner conflict).

The New Yorker article provides a valuable perspective on the OpenAI conflict. It sheds light on the factors that contributed to the dispute and offers insights into the different ways in which AI can be developed and used.

The article also raises important questions about the future of AI. How can we ensure that AI is developed in a way that benefits all of humanity? How can we balance the need for safety with the need for progress? These are complex questions, but they are essential ones.

For anyone interested in these issues, the New Yorker article offers a balanced and comprehensive account of the OpenAI conflict and is well worth reading in full.
