The return of Sam Altman to OpenAI: A contrarian’s take

By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

Does Altman’s return signal a victory of vested interests over AI safety?

The tech industry was still reverberating from OpenAI’s surprise firing of CEO Sam Altman late last week when the announcement came late Tuesday that an agreement (“in principle”) had been reached for Altman to return as CEO. President and former board chair Greg Brockman, who quit in protest of Altman’s dismissal, will also return to the company. This came after a majority of OpenAI employees signed a letter demanding just that, along with a newly constituted board of directors.

The new “initial” board of directors, as OpenAI calls it, is still a work in progress. Bret Taylor (former Salesforce co-CEO) will be its chair. Former Treasury Secretary Larry Summers will get a seat. Quora CEO Adam D’Angelo is the lone holdover from the previous iteration of the board. No telling how permanent these appointments are. “We are collaborating to figure out the details,” OpenAI tweeted Tuesday night. It’s unclear whether any of the other former board members will eventually retain their seats.

Some observers might wonder what this dramatic fire drill was all about. Altman’s back at the helm, and we still don’t have a clear explanation as to why he was fired in the first place. Based on the reporting I’ve seen, and on talks with my own sources, I suggest that Altman’s firing was the flashpoint of an ideological struggle between two of the company’s founders—Altman and chief scientist Ilya Sutskever—over how to run an AI company.

Sutskever is something like OpenAI’s spiritual leader. OpenAI people speak about him in reverent tones. He studied under Geoffrey Hinton, one of the fathers of AI, in Toronto, and has made several landmark discoveries in machine learning. A large, abstract oil painting by Sutskever of the OpenAI logo watches over the first-floor hustle and bustle at OpenAI’s headquarters. He’s also, by all appearances, not too interested in capitalism, instead sticking to the tenets of the effective altruism movement, which means distributing the benefits of his company’s AI widely and evenly, not performing for investors every quarter. Sutskever’s brand of effective altruism means a slower research pace and slower productization. And above all, it means very carefully managing the downside risks of AI, with safety research applied rigorously at every stage of R&D. Sutskever has said that AI could pose serious threats to humanity sometime in the future if not managed carefully today.

[Photo: Mark Sullivan]

Altman’s approach is totally different. He acknowledges the risks of AI but is less hesitant to release the technology into the world. He was, after all, an entrepreneur who went on to run the Y Combinator startup accelerator. Altman is, in many ways, a creature of Silicon Valley, focused on developing products and quickly finding the product-market fit that leads to rapid growth. And Altman was reportedly very keen to take advantage of ChatGPT’s momentum to launch new AI products. At OpenAI’s DevDay in early November, Altman skillfully presided over the announcement of an impressive lineup of new models for developers and tools for consumers. Afterwards, Brockman and Altman spoke to media in the press room. Sutskever, on the other hand, was nowhere to be seen. DevDay may have been the trigger for the board’s decision to fire Altman. 

For one long weekend, Sutskever’s slow-and-safe approach to running OpenAI won out. But the proponents of Altman’s product- and profit-centric worldview quickly marshaled their forces. OpenAI’s investors, focused on scale and returns, began howling for Altman’s reinstatement on Friday and kept at it through the weekend. One report says that the investors even contemplated legal action to bring the CEO back. Now they won’t have to. OpenAI’s employees—770 of whom (over 90% of the workforce) signed a letter demanding Altman’s reinstatement—have varying levels of financial interest in the company’s performance, too.

The question now is whether anything will change at the company. And even though the board may have handled Altman’s ouster poorly, we should leave open the possibility that its intentions were good, and correct. OpenAI’s product, after all, could be catastrophically dangerous in the hands of the wrong people. Silicon Valley’s “move fast and break things” mantra makes about as much sense in AI as it would with nuclear weapons. AI companies have to spend lots of time studying the potentially harmful use cases, and there’s a strong argument that building safeguards against such uses—for example, triggers to detect and shut down destructive applications of the model—should progress alongside development of any product. But Altman’s return could also signal that Sutskever’s bid for a slower, more cautious OpenAI has simply been defeated.

And the stakes could be higher than we know. It’s possible that OpenAI has made more progress toward general AI than it has said publicly, as I wrote Monday. This would raise the safety stakes considerably and may put the Altman drama in a new, and scarier, light.

How OpenAI’s organizational structure works

Sutskever’s nontraditional morals-over-profits approach is reflected in the OpenAI charter and mission. OpenAI was founded as a nonprofit in 2015. The company later created a for-profit subsidiary when it became clear it would need far more capital to fund the massive amounts of compute power its supersized large language models require, but the nonprofit OpenAI Inc.—and its board of directors—retained control.

“While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission,” the OpenAI charter reads. “The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.”

Tristan Louis, CEO of cloud computing company Casebook PBC, tells me that OpenAI’s convoluted structure is partly to blame for the current dysfunction and poor communication among company leadership and the board. Louis says that if OpenAI had adopted a public benefit corporation structure (like Anthropic), its charter would have been clearer about how the company handles misalignments from an operational and legal perspective. “This would have allowed the tough discussions they now appear to be having in public to be held behind closed doors and settled without the current drama before the company received outside investments,” he says.

Microsoft gets named in a copyright lawsuit against OpenAI

Just to make this week’s AI Decoded an OpenAI trifecta, we’ll end with something that has nothing to do with Altman’s firing (as far as we know!). Lawyers in New York filed a proposed class action on behalf of best-selling author Julian Sancton (Madhouse at the End of the Earth) and other writers against OpenAI and Microsoft, alleging that the companies trained several versions of ChatGPT using copyrighted materials from nonfiction authors without permission. A flurry of copyright suits has already been filed against OpenAI, but the plaintiffs believe this is the first one that also names Microsoft.

The lawsuit, which was filed in the U.S. District Court for the Southern District of New York, says the tech companies are “reaping billions off their ChatGPT products” without paying anything to authors of nonfiction books and academic journals. Plaintiffs seek damages for copyright infringement and an injunction stopping the unauthorized ongoing use of copyrighted material. OpenAI and Microsoft lawyers will argue that training AI models using content scraped from the web is covered under the fair use provisions in Section 107 of the U.S. Copyright Act. We’ll track the progress of the suit. 

In a related story, Ed Newton-Rex, the VP of audio at Stability AI (the developer of Stable Diffusion), resigned from his job because he believes the company is stretching the meaning of the fair use doctrine to justify the way it collects audio training data for its models.
