OpenAI was founded as a non-profit in December 2015 by a small group worried about AI safety. Eight years later it had built the dominant frontier model, an ecosystem of API customers, and weathered the most contested governance crisis in the industry's short history. This is the story of how that happened.
OpenAI as an organisation, separate from its products. The founding bargain, the path from non-profit to capped-profit, the scaling bet, the November 2023 governance crisis, and the principals as people. The technical content of the GPT line is in the architecture-side decks of the LLMs hub.
OpenAI was announced on 11 December 2015 by a coalition of Sam Altman (then president of Y Combinator), Greg Brockman (former CTO of Stripe), Elon Musk, Ilya Sutskever (then at Google Brain), and a handful of others. The pitch: a non-profit research lab whose mission would be to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return".
The founding pledge announced at the time was $1 B, although in practice reportedly less than half of that was actually called in over the next few years. Donors included Musk, Reid Hoffman, Peter Thiel, Jessica Livingston, Y Combinator, Amazon Web Services and Infosys.
Stanford CS dropout. Founded the location-app Loopt at 19, sold it for ~$45 M, joined YC, ran it 2014–2019. Quiet in person, methodical, a famously prolific networker. Not a researcher — openly so — and his role at OpenAI is operational, fundraising, and strategic. Has said, repeatedly, that AGI is the most important project of the century, and that the principal risk to manage is concentration of power, not technical misalignment.
Harvard and MIT dropout. Stripe's first employee and later its CTO. The hands-on engineering lead at OpenAI for most of its history: the person actually shipping infrastructure, training runs and demos. Took an extended sabbatical in 2024 and returned. Often the most visible technical voice from the lab.
Russian-Israeli-Canadian. PhD with Hinton at Toronto. Co-author of AlexNet (2012) and of seq2seq (2014). Joined Google Brain after Toronto; lured to OpenAI by Altman, Brockman and Musk over a long courtship in 2015. Quiet, thoughtful, deeply convinced that AGI is achievable and dangerous in roughly equal measure. Played a central role in the November 2023 board crisis (slide 09); left in May 2024 to found Safe Superintelligence Inc.
Slovak-Canadian. Stanford PhD with Fei-Fei Li, deeply involved in the Stanford NLP and CV programmes. The most public-facing teacher of any frontier-lab founder — his minGPT, nanoGPT, the YouTube series Neural Networks: Zero to Hero and the Let's build GPT videos are how a generation has learned the topic. Independent from 2024, building Eureka Labs as an AI-native education startup.
The original OpenAI Charter (published 2018, revised since) committed the lab to four obligations: broadly distributed benefits, long-term safety, technical leadership, and cooperative orientation.
The third clause is the crux. You cannot influence AGI safely from the cheap seats; you must be on the field. That premise — defensible — is the seed of every later compromise. Once you commit to staying at the frontier, you commit to the costs of staying there.
The original entity, OpenAI Inc, was a 501(c)(3) non-profit. The frontier-research budget needed to be in the billions per year by 2020. No 501(c)(3) raises billions per year — not even Howard Hughes Medical Institute, which is about the largest peer institution. The 2019 reorg into a capped-profit subsidiary was the inevitable consequence.
Two changes happened in 2018–19. First, Elon Musk resigned from the board in February 2018. The public reason was conflict of interest with Tesla's growing AI programme; the private reasons (subsequently litigated) involved disagreements about Musk's role. Musk has said publicly that he wanted to take over and run OpenAI directly; the rest of the board declined.
Second, in March 2019 OpenAI created OpenAI LP (later restructured as OpenAI Global, LLC), a capped-profit subsidiary, and moved most operational research into it. Investors could earn up to 100× their investment back; profits beyond that flowed to the non-profit parent. Sam Altman became CEO of the new entity, having served as a board member of the non-profit until then.
Musk founded xAI in 2023. Has subsequently sued OpenAI multiple times over the deviation from its non-profit charter. The cases are technical and slow-moving; the more interesting fact is that the only major frontier lab Musk does not have an ownership stake in is the one he co-founded. Deck 08 picks up xAI.
In July 2019 OpenAI announced a $1 B partnership with Microsoft. The structure: Microsoft becomes the exclusive cloud provider; Microsoft makes a multi-year compute commitment; OpenAI commits to commercialise via Azure; Microsoft gets a non-exclusive licence to most OpenAI IP. The cash component was a fraction of the headline; the compute commitment was the bulk.
The deal was negotiated by Altman and Microsoft CEO Satya Nadella personally. It is the single most important commercial event in OpenAI's history.
The Microsoft licence covers everything up to AGI; OpenAI's board determines when AGI has been reached, at which point the licence stops applying to the new system. This clause has not been triggered. It is genuinely ambiguous what would trigger it. Bargaining over its definition is one of the slow-moving structural issues at OpenAI.
The scaling bet was, technically, the work of Alec Radford, Ilya Sutskever, Dario Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Jeff Wu, Mark Chen and a few others. It runs through three flagship models in three years and one foundational scaling-laws paper.
| Model | Date | Params | What it showed |
|---|---|---|---|
| GPT-1 | Jun 2018 | 117 M | Decoder-only transformer + unsupervised pretraining + supervised fine-tuning beats task-specific architectures across NLP benchmarks. Quietly transformational. |
| GPT-2 | Feb 2019 | 1.5 B | The first model good enough for the lab to consider not publishing weights. Coherent paragraph-level generation. Staged release became the template for responsible disclosure. |
| GPT-3 | May 2020 | 175 B | In-context learning. Few-shot prompting works. The first model that felt like progress toward general intelligence rather than an NLP system. |
| Scaling Laws (Kaplan) | Jan 2020 | — | Loss decreases as a smooth power law in compute, parameters and data. Formalised the bet. |
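For reference, the functional form behind that last row. This is a minimal restatement of the Kaplan et al. (2020) power laws; the exponents are quoted approximately from the paper, and $N_c$, $D_c$, $C_c$ are fitted scale constants:

```latex
% Kaplan et al. (2020): test loss scales as a smooth power law in each
% resource (parameters N, data D, compute C), with the other two
% effectively unconstrained. Exponents approximate.
\begin{align*}
  L(N) &= (N_c / N)^{\alpha_N}, & \alpha_N &\approx 0.076 \quad \text{(parameters)} \\
  L(D) &= (D_c / D)^{\alpha_D}, & \alpha_D &\approx 0.095 \quad \text{(data)} \\
  L(C) &= (C_c / C)^{\alpha_C}, & \alpha_C &\approx 0.050 \quad \text{(compute)}
\end{align*}
```

The practical reading: there is no visible wall. Double the compute, and the loss falls by a predictable factor. That predictability is what made the bet fundable.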
The pivot to "scale a single decoder-only transformer" was not pre-ordained. Through 2017–18 OpenAI ran multiple parallel programmes — robotics (Dactyl), Universe, Dota 2 RL agents, GPT-1. The decision to consolidate compute into the language line is the single most consequential strategic call OpenAI ever made. Sutskever's instinct, Brockman's compute-management, and Altman's fundraising all combined to make it possible.
The OpenAI API launched in June 2020 as a private beta with GPT-3. By late 2021 it was publicly available, and the enterprise revenue line that would underpin everything else had begun to grow.
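For flavour, this is roughly what a call against that first API looked like, sketched with the original (since-deprecated) `openai.Completion` interface; the key and prompt are placeholders:

```python
import openai  # the 2020-era client (openai<1.0); this interface is now deprecated

openai.api_key = "sk-..."  # placeholder

# One few-shot completion against the original GPT-3 endpoint. "davinci"
# was the largest engine; the modern client replaced this pattern with
# client.chat.completions.create(...).
response = openai.Completion.create(
    engine="davinci",
    prompt=(
        "Translate English to French:\n"
        "sea otter => loutre de mer\n"
        "cheese =>"
    ),
    max_tokens=5,
    temperature=0.0,
)
print(response["choices"][0]["text"])
```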
Two products defined the early commercial era:
August 2021. Codex was OpenAI's GPT model fine-tuned on public GitHub. Microsoft shipped GitHub Copilot on top. It was the first LLM-powered product with material adoption among professional developers. The data and licensing questions it raised — "is training on public GitHub fair use?" — will recur for the rest of the decade.
Announced January 2022; paper on arXiv March 2022. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright et al., building on the RLHF line of Paul Christiano's alignment team. The first production-scale application of RL from human feedback to a base model. This is the technique that made ChatGPT possible nine months later. It is also the single most influential alignment-research-into-product translation in the field.
It demonstrated that the alignment-research line and the product line could co-evolve. RLHF was originally an alignment-team idea (Christiano had been writing about it since 2017 at OpenAI). It became the technique that enabled the consumer chat product. After 2022 the alignment / capability boundary at OpenAI is less clear than at Anthropic and considerably less clear than the Charter language might suggest.
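A minimal sketch of the core of the technique: the pairwise loss used to fit the InstructGPT reward model, here as a toy NumPy calculation. The scalar scores stand in for a real transformer-based reward model:

```python
import numpy as np

# Toy version of the InstructGPT reward-model objective. For a prompt with
# a human-preferred completion y_w and a rejected completion y_l, the
# reward model r is fit by minimising -log sigmoid(r(y_w) - r(y_l)).

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss on one labelled comparison."""
    return float(-np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected)))))

# Reward model currently scores the rejected answer higher: large loss,
# so training pushes the two scores apart.
print(preference_loss(r_chosen=0.2, r_rejected=1.1))   # ~1.24
# Scores already well ordered: small loss.
print(preference_loss(r_chosen=2.0, r_rejected=-1.0))  # ~0.05
```

The fitted reward model then serves as the objective for the RL stage (PPO in the paper) that actually fine-tunes the policy model.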
ChatGPT was, internally, framed as a "low-key research preview" of a chat interface on top of a fine-tuned GPT-3.5. The team worked through the Thanksgiving weekend to get it shippable. It went live on Wednesday 30 November 2022. Within five days it had a million users; by January 2023 it had 100 million, the fastest consumer-product ramp in history at that point.
Several things happened simultaneously inside OpenAI.
Technically, the launch ChatGPT was a chat-tuned GPT-3.5 model (a sibling of the text-davinci-003 line, later exposed on the API as gpt-3.5-turbo) wrapped in a simple web UI with conversation history. The model was incremental over what had been on the API for months. The product was the demo; the demo was what changed the world. This is a useful pattern to remember: the moments that change everything are not always the moments where the technology takes its biggest single step.
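The engineering around the model was correspondingly simple. A minimal sketch of the conversation-history pattern (illustrative structure only, not OpenAI's internal code): state is just a list of role-tagged messages, resent to the model on every turn:

```python
# Conversation state as a growing list of role-tagged messages, the format
# the chat API later standardised. The assistant replies are placeholders;
# in the product they come from the model, conditioned on the full history.

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str, model_reply: str) -> None:
    """Append one user/assistant exchange to the running transcript."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": model_reply})

chat_turn("What is a transformer?", "A neural architecture built on attention.")
chat_turn("Who introduced it?", "Vaswani et al., 2017.")
print(len(history))  # 5 messages: system prompt + two user/assistant pairs
```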
GPT-4 launched on 14 March 2023. It was, by every external measure, a substantial step from GPT-3.5: SAT/Bar/MCAT human-level performance, image input (a few months later), better factual reliability, dramatic gains on reasoning benchmarks.
It was also the model that closed off the open-publication era at OpenAI. The accompanying technical report explicitly declined to disclose architecture, parameter count, training data composition, or training compute, citing both competitive and safety reasons.
By the time of GPT-4-Turbo (Nov 2023), GPT-4o (May 2024) and GPT-4.5 / GPT-5, public technical disclosure had become symbolic at most. The detailed engineering inside the frontier lab had passed irreversibly behind the wall.
The closed-weights stance is consistent across all three of the western frontier labs (OpenAI, Anthropic, Google DeepMind) for their flagship models. It is the central point of difference with Meta (open weights), Mistral (open weights for most checkpoints), and the leading Chinese labs (open weights for most flagship releases). The open-vs-closed argument is the central live question of deck 08 and deck 10.
On Friday 17 November 2023 the OpenAI non-profit board fired Sam Altman as CEO, with the board's announcement saying he had not been "consistently candid" with them. Greg Brockman resigned in solidarity within hours. Mira Murati was named interim CEO. By Monday 20 November Microsoft had announced it was hiring Altman and Brockman to lead a new AI team; over 700 of OpenAI's 770 employees signed an open letter saying they would also leave unless the board resigned and Altman returned. By Wednesday 22 November Altman was reinstated and most of the board had been replaced.
The five days are well documented; the underlying tensions are partially understood and partially private. What can be said with reasonable confidence: the board's stated concern was candour toward the board rather than any single safety incident, and Ilya Sutskever, who initially backed the removal, publicly reversed himself within days and signed the employee letter.
The November 2023 crisis is the single clearest demonstration in the field's history of the structural tension OpenAI's hybrid form was always going to produce. A 501(c)(3) board is meant to be able to fire its CEO over mission-fit concerns; in practice, when the operating subsidiary is a $90 B firm with thousands of equity-holding employees, it cannot. The crisis settled the question. The structural debate that produced it is not settled and shows up again in slide 11.
The post-crisis era at OpenAI has been about two things: diversifying the product surface (video, voice, agents, search) and cracking test-time compute (the o-series). Both have largely worked.
| Date | Launch | What it added |
|---|---|---|
| Feb 2024 | Sora 1 | Diffusion video, 60-second outputs. Initial limited rollout. Triggers a major industry pivot toward video. |
| May 2024 | GPT-4o ("omni") | Native multimodality — speech, vision, text in one model. Halves API price. |
| Sep 2024 | o1-preview | RL on chains of thought. Test-time compute pays off. Major step on math/coding/science. |
| Dec 2024 | o1 (full) and o3 announcements | o3 posts a step-change score on FrontierMath; pricing model becomes "compute per query". |
| Jan 2025 | Operator | Browser agent. First production GUI-controlling agent from OpenAI; competes with Anthropic's Computer Use. |
| 2025–26 | GPT-5, Sora 2, the o-series continues | Frontier reasoning + multimodality consolidates. |
The o-series applies reinforcement learning to chains of thought. Rather than training the model only to output a final answer, it trains it to spend variable amounts of inference compute thinking through the problem first. The discovery — reportedly the result of work by Hyung Won Chung, Lukasz Kaiser, Hunter Lightman, Jakub Pachocki, Noam Brown and colleagues — was that this generalised dramatically, particularly on math and reasoning. By the end of 2024 the o-series had become the template every other frontier lab tried to match (Anthropic with the extended-thinking modes, Google with Gemini 2.5 Thinking, DeepSeek with R1).
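The o-series recipe itself is unpublished, but the lever it pulls, buying accuracy with inference compute, can be illustrated with its simplest public relative: self-consistency majority voting. Emphatically not OpenAI's method; a toy simulation of the principle:

```python
import random
from collections import Counter

# Toy simulation of the test-time-compute lever: majority voting over many
# sampled "reasoning chains" (self-consistency). This is NOT the o-series
# method (RL on chains of thought); it is the simplest public demonstration
# that extra inference compute can buy extra accuracy.

def sample_chain(p_correct: float) -> str:
    """One stochastic reasoning chain, right with probability p_correct."""
    return "correct" if random.random() < p_correct else "wrong"

def majority_vote(n_chains: int, p_correct: float) -> str:
    votes = Counter(sample_chain(p_correct) for _ in range(n_chains))
    return votes.most_common(1)[0][0]

random.seed(0)
for n in (1, 9, 51):
    trials = 2000
    acc = sum(majority_vote(n, 0.6) == "correct" for _ in range(trials)) / trials
    print(f"{n:>2} chains per query -> accuracy ~{acc:.2f}")
# With a 60%-reliable chain, accuracy climbs toward 1.0 as n grows:
# the same model gets better answers by spending more at inference time.
```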
Pre-o-series, frontier-lab pricing was per-token. Post-o-series, OpenAI began charging per-query for high-reasoning workloads, with single hard-math queries costing several dollars. This is the first time inference economics meaningfully diverged from a flat token rate, and probably an important precursor to the agent-pricing models of 2026.
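A back-of-envelope illustration of why, with all numbers hypothetical rather than OpenAI's published rates: at a flat per-token price, a long hidden reasoning trace dominates the bill, which is what pushes vendors toward per-query pricing.

```python
# Hypothetical prices and token counts, for illustration only.
price_per_1m_output_tokens = 10.00   # hypothetical $/1M output tokens
visible_answer_tokens = 500          # the answer the user sees
hidden_reasoning_tokens = 40_000     # chain of thought the user never sees

flat_cost = visible_answer_tokens / 1e6 * price_per_1m_output_tokens
reasoning_cost = (
    (visible_answer_tokens + hidden_reasoning_tokens) / 1e6
    * price_per_1m_output_tokens
)

print(f"answer-only cost:     ${flat_cost:.4f}")       # $0.0050
print(f"with reasoning trace: ${reasoning_cost:.4f}")  # $0.4050
# The reasoning trace is ~80x the visible answer: token-metered pricing
# stops predicting cost per question, so billing shifts to the query.
```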
Visible publicly more than any other frontier-lab principal. Personally invested in numerous adjacent ventures (Worldcoin / World, Helion fusion, Oklo nuclear, others). Appears comfortable with the scale of the responsibility; multiple interviewers describe him as unusually calm under pressure. The 2023 crisis appears, externally, to have strengthened rather than weakened his control.
Returned in late 2024. Continues to be the most engineering-heavy of the founders; runs much of the day-to-day infrastructure and training-run management.
Left in May 2024. SSI is small, has raised over $1 B in known rounds, and is explicitly mission-focused on safety. Sutskever's public statements since the departure have been minimal — a deliberate retreat from public discourse that contrasts sharply with Altman's posture.
Albanian-Canadian. Product manager at Tesla (Model X), then OpenAI from 2018. CTO during the ChatGPT release and the Sora preview. Briefly interim CEO during the November 2023 weekend. Left in late 2024 to start Thinking Machines Lab; raised one of the largest seed rounds in tech history.
Polish. Theoretical computer scientist (Carnegie Mellon PhD), joined OpenAI in 2017. Took over the Chief Scientist role on Sutskever's departure. Heavily associated with the o-series technical leadership.
The non-research-side senior team that has taken on increased responsibility post-crisis. Sarah Friar (CFO; previously CFO of Square and CEO of Nextdoor) brings public-company financial experience; Brad Lightcap (COO) has been at OpenAI from the early days and runs much of the operations.
OpenAI alumni in 2026 include the founders of Anthropic, SSI and Thinking Machines Lab, much of xAI's senior research staff, senior researchers at Physical Intelligence (Pi), parts of Cohere, and substantial fractions of the senior staff at Google DeepMind and Meta AI. There has never been a single research organisation in computer-science history whose alumni have founded so many other significant organisations within a single decade.