What’s next for OpenAI?
OpenAI's board fired CEO Sam Altman on Friday in what looked like an AI-safety-driven coup, setting off a weekend of chaos that ended with Microsoft hiring Altman. The shake-up has fueled speculation about what comes next for OpenAI and for the wider AI industry.
What happened
Friday afternoon
On a Google Meet call, OpenAI's board fired Sam Altman, saying he had not been consistently candid in his communications with it. OpenAI president Greg Brockman and several senior researchers resigned in response, and CTO Mira Murati stepped in as interim CEO.
Saturday
Murati reportedly moved to rehire Altman and Brockman even as the board searched for its own successor CEO. Altman and OpenAI staffers pressured the board to resign and reinstate him by a deadline, which passed without a deal.
Sunday night
Microsoft announced it had hired Altman and Brockman to lead a new AI research team. Soon after, OpenAI announced it had hired Emmett Shear, the former CEO of the streaming company Twitch, as its interim CEO.
Monday morning
More than 500 OpenAI employees signed a letter threatening to quit and join Altman at Microsoft unless the board resigned. Among the signatories was chief scientist Ilya Sutskever, a member of the board that fired Altman, who said he regretted his participation in its actions.
What’s next for OpenAI?
OpenAI is now a very different company from the one whose DevDay audience Altman addressed only weeks ago. With Altman and Brockman gone, several senior employees have resigned in solidarity, and others, including Murati, have publicly voiced support for OpenAI's staff. The company also faces the threat of a mass exodus to Microsoft, so expect more upheaval before things settle.
Tension between Sutskever and Altman may have been brewing for some time. “When you have an organization like OpenAI that’s moving at a fast pace and pursuing ambitious goals, tension is inevitable,” Sutskever told MIT Technology Review in September (comments that have not previously been published). “I view any tension between product and research as a catalyst for advancing us, because I believe that product wins are intertwined with research success.” Yet it is now clear that Sutskever disagreed with OpenAI leadership about how product wins and research success should be balanced.
New interim CEO Shear, who cofounded Twitch, appears to be a world away from Altman when it comes to the pace of AI development. “I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down,” he posted on X in September. “If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”
Under Shear, OpenAI may refocus on its stated mission of building "AGI that benefits humanity," which in practice could mean slowing down, or even shelving, parts of its product pipeline in the short term.
The tension between shipping products quickly and slowing development in the name of safety is nothing new at OpenAI: it is what drove a group of key employees to leave the company and found the rival AI safety startup Anthropic.
With Altman and his camp gone, the firm could pivot more toward Sutskever’s work on what he calls superalignment, a research project that aims to come up with ways to control a hypothetical superintelligence (future technology that Sutskever speculates will outmatch humans in almost every way). “I’m doing it for my own self-interest,” Sutskever told us. “It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”
Shear is exactly the kind of cautious leader who might heed Sutskever's concerns about technology that does not yet exist. Both have signaled confidence in the company itself: Shear believes OpenAI can keep leading the field with its ideas for generative AI, and Sutskever maintains that the company's research organization and deep bench of talent will keep pushing the limits of what is possible with AI.
What next for Microsoft?
Microsoft, led by CEO Satya Nadella, looks like the clear winner of this crisis, with Altman and Brockman now joining its ranks. The company has already gained a significant edge by embedding generative AI into its productivity and developer tools; whether it still needs its partnership with OpenAI to build cutting-edge tech is now an open question. Publicly, Nadella has expressed excitement about hiring Altman and Brockman while stressing that Microsoft remains committed to OpenAI and to its existing product road map.
But let’s be real. In an exclusive interview with MIT Technology Review, Nadella called the two companies “codependent.” “They depend on us to build the best systems; we depend on them to build the best models, and we go to market together,” Nadella told our editor in chief, Mat Honan, last week. If OpenAI’s leadership roulette and talent exodus slow down its product pipeline, or lead to AI models less impressive than those Microsoft can build itself, Microsoft will have zero problems ditching the startup.
What next for AI?
The weekend's events took the tech community by surprise; lawyers at the firm Fried Frank say nothing like it was anticipated. The upheaval has also drawn fresh attention to other generative AI companies, such as Stability AI. Whatever Altman and Brockman do next, funding is unlikely to be a problem: they are two of the best-connected people in VC circles, Altman is widely regarded as one of the industry's most capable CEOs, and potential backers range from Mohammed bin Salman to Jeff Bezos.
OpenAI's crisis also highlights a growing rift in the AI industry between those preoccupied with speculative AI-safety risks and those focused on real-world harms such as economic upheaval, bias, and misuse. Meanwhile, the race to deploy AI tools, including Microsoft's and Google's, continues, even though generative AI's killer app has yet to emerge. If the rift spreads across the wider industry and development slows, it may take longer to see what impact AI actually has on people's lives.
Deeper Learning
Text-to-image AI models can be tricked into generating disturbing images
Speaking of unsafe AI … Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images. A group of researchers managed to “jailbreak” both Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 to disregard their policies and create images of naked people, dismembered bodies, and other violent or sexual scenarios.
How they did it: A new jailbreaking method, dubbed “SneakyPrompt” by its creators from Johns Hopkins University and Duke University, uses reinforcement learning to create written prompts that look like garbled nonsense to us but that AI models learn to recognize as hidden requests for disturbing images. It essentially works by turning the way text-to-image AI models function against them.
Why this matters: That AI models can be prompted to “break out” of their guardrails is particularly worrying in the context of information warfare. They have already been exploited to produce fake content related to wars, such as the recent Israel-Hamas conflict.
Bits and Bytes
Meta has split up its responsible AI team
Meta is reportedly getting rid of its responsible AI team and redeploying its employees to work on generative AI. But Meta uses AI in many other ways beyond generative AI—such as recommending news and political content. So this raises questions around how Meta intends to mitigate AI harms in general.
Google DeepMind wants to define what counts as artificial general intelligence
A team of Google DeepMind researchers has put out a paper that cuts through the cross talk with not just one new definition for AGI but a whole taxonomy of them.
This company is building AI for African languages
Startup Lelapa has launched Vulavula, a tool that can identify four South African languages: isiZulu, Afrikaans, Sesotho, and English. The team is now working to include other African languages from across the continent.
Google DeepMind’s weather AI can forecast extreme weather faster and more accurately
The model, GraphCast, can predict weather conditions up to 10 days in advance, more accurately and much faster than the current gold standard.
How Facebook went all in on AI
In Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets, journalist Jeff Horwitz examines Facebook's reliance on artificial intelligence and the consequences the company and the public have faced as a result.
Did Argentina just have the first AI election?
Generative AI played a significant role in the campaigns of both candidates for Argentina's presidency, which used it to create images and videos promoting their own candidate and attacking the other. Javier Milei, a far-right outsider, won the election, and the episode hints at how much harder it may become to tell what is real in upcoming elections.