LLM History Series — Presentation 05

OpenAI — The Scaling Bet

Founded as a non-profit in December 2015 by a small group worried about AI safety. Eight years later it had built the dominant frontier model, an ecosystem of API customers, and the most contested governance crisis in the industry's short history. The story of how that happened.

Timeline: Founding (Dec 2015) → Capped-profit (2019) → GPT-3 (May 2020) → ChatGPT (Nov 2022) → Board crisis (Nov 2023) → o-series & GPT-5
00

What This Deck Covers

OpenAI as an organisation, separate from its products. The founding bargain, the path from non-profit to capped-profit, the scaling bet, the November 2023 governance crisis, and the principals as people. The technical content of the GPT line is in the architecture-side decks of the LLMs hub.

01

The Founding (December 2015)

OpenAI was announced on 11 December 2015 by a coalition of Sam Altman (then president of Y Combinator), Greg Brockman (former CTO of Stripe), Elon Musk, Ilya Sutskever (then at Google Brain), and a handful of others. The pitch: a non-profit research lab whose mission would be to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return".

The founding pledge was reportedly $1 B, announced at the time, although in practice less than half of that was actually called in over the next few years. Donors included Musk, Reid Hoffman, Peter Thiel, Jessica Livingston, Y Combinator, Amazon Web Services and Infosys.

SA

Sam Altman

CEO since 2019; previously president of Y Combinator (2014–2019)

Stanford CS dropout. Founded the location-app Loopt at 19, sold it for ~$45 M, joined YC, ran it 2014–2019. Quiet in person, methodical, a famously prolific networker. Not a researcher — openly so — and his role at OpenAI is operational, fundraising, and strategic. Has said, repeatedly, that AGI is the most important project of the century, and that the principal risk to manage is concentration of power, not technical misalignment.

GB

Greg Brockman

President; was CTO 2015–2024

Harvard / MIT dropout. Stripe's first employee and later its CTO. The hands-on engineering lead at OpenAI for most of its history — the person actually shipping infrastructure, training runs and demos. Took an extended sabbatical in 2024 and returned. Often the most visible technical voice from the lab.

IS

Ilya Sutskever

Co-founder, Chief Scientist 2015–2024; founded Safe Superintelligence (SSI) 2024

Russian-Israeli-Canadian. PhD with Hinton at Toronto. Co-author of AlexNet (2012) and seq2seq (2014). Joined Google Brain after Toronto, lured to OpenAI by Altman/Brockman/Musk over a long courtship in 2015. Quiet, thoughtful, deeply convinced that AGI is achievable and dangerous in roughly equal measure. Played a central role in the November 2023 board crisis (slide 09); left in May 2024 to found Safe Superintelligence Inc.

AK

Andrej Karpathy

Founding research scientist 2015–2017; Tesla AI 2017–2022; OpenAI 2023–2024; independent / Eureka Labs from 2024

Slovak-Canadian. Stanford PhD with Fei-Fei Li, deeply involved in the Stanford NLP and CV programmes. The most public-facing teacher of any frontier-lab founder — his minGPT, nanoGPT, the YouTube series Neural Networks: Zero to Hero and the Let's build GPT videos are how a generation has learned the topic. Independent from 2024, building Eureka Labs as an AI-native education startup.

02

The Charter and the Non-Profit Bet

The original OpenAI Charter (published 2018, revised since) committed the lab to four obligations:

  1. Broadly distributed benefits — AGI's gains should not concentrate.
  2. Long-term safety — a commitment to "stop competing with and start assisting" any value-aligned, safety-conscious project that comes close to building AGI first.
  3. Technical leadership — AGI cannot be safely built by anyone if its builders are not at the frontier.
  4. Cooperative orientation — will publish most research, will collaborate.

The third clause is the crux. You cannot influence AGI safely from the cheap seats; you must be on the field. That premise — defensible — is the seed of every later compromise. Once you commit to staying at the frontier, you commit to the costs of staying there.

We worry that any single dominant AGI developer would be able to lock the world into bad outcomes. We are committed to fighting for outcomes that are good for humanity even at our own commercial cost. — OpenAI Charter, paraphrased; the document is several paragraphs long and worth reading in full.
The structure problem

The original entity, OpenAI Inc, was a 501(c)(3) non-profit. The frontier-research budget needed to be in the billions per year by 2020. No 501(c)(3) raises billions per year — not even Howard Hughes Medical Institute, which is about the largest peer institution. The 2019 reorg into a capped-profit subsidiary was the inevitable consequence.

03

The 2018–19 Reorg — Musk Departs; Capped-Profit Created

Two changes happened in 2018–19. First, Elon Musk resigned from the board in February 2018. The public reason was conflict of interest with Tesla's growing AI programme; the private reasons (subsequently litigated) involved disagreements about Musk's role. Musk has said publicly that he wanted to take over and run OpenAI directly; the rest of the board declined.

Second, in March 2019 OpenAI created OpenAI LP (later restructured as OpenAI Global, LLC), a capped-profit subsidiary, and moved most operational research into it. Investors could earn up to 100× their investment back; profits beyond that flowed to the non-profit parent. Sam Altman became CEO of the new entity, having served as a board member of the non-profit until then.

Why capped-profit (in theory)

  • Lets the lab raise commercial capital.
  • Aligns investors with the mission — a 100× cap is enormous in absolute terms but bounded.
  • Keeps ultimate control with the non-profit board.
  • Permits employee equity.
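The cap mechanics are easy to make concrete. A minimal sketch with hypothetical dollar amounts (only the 100× multiple comes from the announced structure):

```python
def split_proceeds(invested: float, proceeds: float, cap_multiple: float = 100.0):
    """Split gross proceeds under a capped-profit structure: the investor
    keeps everything up to cap_multiple * invested; the excess flows to
    the non-profit parent."""
    cap = cap_multiple * invested
    investor_share = min(proceeds, cap)
    return investor_share, proceeds - investor_share

# Hypothetical: $10M invested, $2.5B of eventual gross proceeds.
# The investor keeps $1B (the 100x cap); $1.5B goes to the non-profit.
investor, nonprofit = split_proceeds(10e6, 2.5e9)
```

The cap is enormous in absolute terms, which is the substance of the "aligns investors" argument, but it is a hard ceiling rather than a percentage.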

What it created in practice

  • Two boards. The non-profit's controls the for-profit's.
  • Equity-holding employees who became wealthy on paper.
  • An incentive structure increasingly resembling a regular start-up.
  • The conditions for the November 2023 crisis.
Musk's later return

Musk founded xAI in 2023 and has since sued OpenAI repeatedly over its departure from the non-profit structure. The cases are technical and slow-moving; the sharper irony is that Musk now owns a rival frontier lab while holding no stake in the one he co-founded. Deck 08 picks up xAI.

04

The Microsoft Bet ($1 B, 2019)

In July 2019 OpenAI announced a $1 B partnership with Microsoft. The structure: Microsoft becomes the exclusive cloud provider; Microsoft makes a multi-year compute commitment; OpenAI commits to commercialise via Azure; Microsoft gets a non-exclusive licence to most OpenAI IP. The cash component was a fraction of the headline; the compute commitment was the bulk.

The deal was negotiated by Altman and Microsoft CEO Satya Nadella personally. It is the single most important commercial event in OpenAI's history.

What Microsoft got

  • An exclusive cloud-provider relationship with the leading frontier lab.
  • The right to integrate OpenAI models into all Microsoft products (Copilot was a direct consequence).
  • A non-exclusive perpetual licence to OpenAI IP up to AGI — with "AGI" defined by the OpenAI board.
  • A 49% economic stake in OpenAI Global LLC over later rounds.

What OpenAI got

  • Effectively unlimited compute — the only frontier lab with this for ~3 years.
  • Commercial distribution via the largest enterprise software vendor on earth.
  • A strategic partner committed to its survival.
  • An exit from "where do the next $5 B in compute come from?"
The AGI clause

The Microsoft licence covers everything up to AGI; OpenAI's board determines when AGI has been reached, at which point the licence stops applying to the new system. This clause has not been triggered. It is genuinely ambiguous what would trigger it. Bargaining over its definition is one of the slow-moving structural issues at OpenAI.

05

GPT-1, GPT-2, GPT-3 — the Scaling Bet Vindicated

The scaling bet was, technically, the work of Alec Radford, Ilya Sutskever, Dario Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Jeff Wu, Mark Chen and a few others. It runs through three flagship models in three years and one foundational scaling-laws paper.

Model | Date | Params | What it showed
GPT-1 | Jun 2018 | 117 M | Decoder-only transformer + unsupervised pretraining + supervised fine-tuning beats task-specific architectures across NLP benchmarks. Quietly transformational.
GPT-2 | Feb 2019 | 1.5 B | The first model good enough for the lab to consider not publishing weights. Coherent paragraph-level generation. Staged release became the template for responsible disclosure.
GPT-3 | May 2020 | 175 B | In-context learning. Few-shot prompting works. The first model that felt like progress toward general intelligence rather than an NLP system.
Scaling Laws (Kaplan et al.) | Jan 2020 | — | Loss decreases as a smooth power law in compute, parameters and data. Formalised the bet.
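The scaling-laws result fits in one line. A sketch of the parameter-scaling form from the Kaplan et al. paper, L(N) = (N_c / N)^α_N, using the fitted constants the paper reports (N_c ≈ 8.8 × 10^13 non-embedding parameters, α_N ≈ 0.076):

```python
def loss_from_params(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Predicted test loss (nats/token) as a power law in non-embedding
    parameter count, with data and compute off the bottleneck. Constants
    are the fitted values from Kaplan et al. (2020)."""
    return (n_c / n_params) ** alpha_n

# Every 10x in parameters multiplies the loss by 10**-0.076, about 0.84:
# a smooth, predictable ~16% drop per decade of parameters.
ratio = loss_from_params(1.75e11) / loss_from_params(1.75e10)
```

The smoothness is the strategic content: if loss falls predictably with scale, a lab can budget capability gains in advance, which is exactly what the GPT-3 run did.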
We were doing a lot of small things at OpenAI, and Ilya kept saying that we should pick one thing and just scale it up. He was right. — A common framing of Sutskever's 2017–19 internal advocacy, echoed by several OpenAI researchers in interviews.
The internal moment that mattered

The pivot to "scale a single decoder-only transformer" was not pre-ordained. Through 2017–18 OpenAI ran multiple parallel programmes — robotics (Dactyl), Universe, Dota 2 RL agents, GPT-1. The decision to consolidate compute into the language line is the single most consequential strategic call OpenAI ever made. Sutskever's instinct, Brockman's compute-management, and Altman's fundraising all combined to make it possible.

06

Codex / Copilot and the Enterprise Pivot

The OpenAI API launched in June 2020 as a private beta with GPT-3. By late 2021 it was publicly available, and the enterprise revenue line that would underpin everything else had begun to grow.

Two products defined the early commercial era:

Codex / GitHub Copilot

Codex (August 2021) was a GPT-3 descendant fine-tuned on public GitHub code; Microsoft shipped GitHub Copilot (technical preview June 2021) on top. It was the first LLM-powered product with material adoption among professional developers. The data and licensing questions it raised — "is training on public GitHub fair use?" — will recur for the rest of the decade.

InstructGPT (RLHF)

January 2022 (arXiv preprint March 2022). Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright et al., building on the RLHF line Paul Christiano's alignment team had started. The application of RL from human feedback to a base model. This is the technique that made ChatGPT possible nine months later. It is also the single most influential alignment-research-into-product translation in the field.

Why InstructGPT mattered for OpenAI's identity

It demonstrated that the alignment-research line and the product line could co-evolve. RLHF was originally an alignment-team idea (Christiano had been writing about it since 2017 at OpenAI). It became the technique that enabled the consumer chat product. After 2022 the alignment / capability boundary at OpenAI is less clear than at Anthropic and considerably less clear than the Charter language might suggest.
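The InstructGPT recipe is three stages: supervised fine-tuning on demonstrations, a reward model trained on human preference pairs, then RL against that reward model. A heavily simplified sketch of the middle stage, the pairwise (Bradley–Terry) loss the paper uses, with scalar rewards standing in for a real model's outputs:

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Reward-model loss on one human preference pair:
    -log(sigmoid(r_chosen - r_rejected)). Minimising it pushes the
    reward model to score the human-preferred completion higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# If the reward model already agrees with the human label, loss is small;
# if it prefers the rejected completion, loss is large.
agree = pairwise_preference_loss(2.0, -1.0)     # ~0.049
disagree = pairwise_preference_loss(-1.0, 2.0)  # ~3.049
```

Stage three then maximises the learned reward with PPO, with a KL penalty keeping the policy close to the pretrained model.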

07

ChatGPT — the Demo That Broke the Internet (30 Nov 2022)

ChatGPT was, internally, framed as a "low-key research preview" of a chat interface on top of a fine-tuned GPT-3.5. The team worked through the Thanksgiving weekend to get it shippable. It went live on Wednesday 30 November 2022. Within five days it had a million users; by January 2023 an estimated 100 million, the fastest consumer-product ramp in history at that point.

Several things happened simultaneously inside OpenAI.

What changed for the lab

  • Revenue went from material to enormous — ChatGPT Plus launched Feb 2023 at $20/month and exceeded $1 B ARR in months.
  • Headcount tripled in eighteen months.
  • Microsoft made a further $10 B commitment in January 2023.
  • Every other frontier lab abruptly had to compete on shipped product as well as on research.

What changed for the world

  • The phrase generative AI entered general usage.
  • Every company on earth had a board-level AI strategy meeting within 90 days.
  • Anthropic, Google, Meta and DeepMind compressed their own product timelines.
  • Government engagement with AI labs went from cordial to intense.
A note on what ChatGPT actually was

Technically, ChatGPT at launch was a chat-tuned GPT-3.5 model (a sibling of text-davinci-003; the gpt-3.5-turbo API name came later) wrapped in a simple web UI with conversation history. The model was incremental over what had been on the API for months. The product was the demo; the demo was what changed the world. This is a useful pattern to remember — the moments that change everything are not always the moments where the technology takes its biggest single step.

08

GPT-4 and the Closing of the Weights

GPT-4 launched on 14 March 2023. It was, by every external measure, a substantial step from GPT-3.5: SAT/Bar/MCAT human-level performance, image input (a few months later), better factual reliability, dramatic gains on reasoning benchmarks.

It was also the model that closed off the open-publication era at OpenAI. The accompanying technical report explicitly declined to disclose architecture, parameter count, training data composition, or training compute, citing both competitive and safety reasons.

Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture, hardware, training compute, dataset construction, training method, or similar. — GPT-4 Technical Report, 14 March 2023

By the time of GPT-4-Turbo (Nov 2023), GPT-4o (May 2024) and GPT-4.5 / GPT-5, public technical disclosure had become symbolic at most. The detailed engineering inside the frontier lab had passed irreversibly behind the wall.

A pattern to track

The closed-weights stance is consistent across all three of the western frontier labs (OpenAI, Anthropic, Google DeepMind) for their flagship models. It is the central point of difference with Meta (open weights), Mistral (open weights for most checkpoints), and the Chinese frontier labs (open weights for most flagship releases). The open-vs-closed argument is the central live question of deck 08 and deck 10.

09

The November 2023 Board Crisis

On Friday 17 November 2023 the OpenAI non-profit board fired Sam Altman as CEO, with the board's announcement saying he had not been "consistently candid" with them. Greg Brockman resigned in solidarity within hours. Mira Murati was named interim CEO; by Sunday the board had installed Emmett Shear in her place. By Monday 20 November Microsoft had announced it was hiring Altman and Brockman to lead a new AI team; over 700 of OpenAI's 770 employees signed an open letter saying they would also leave unless the board resigned and Altman returned. By Wednesday 22 November Altman was reinstated and most of the board had been replaced.

The five days are well documented; the underlying tensions are partially understood and partially private. What can be said with reasonable confidence:

What the board faced

  • A capped-profit subsidiary the non-profit board was meant to govern, with billions of dollars in equity-holding employees and a $13 B Microsoft stake economically tied to its decisions.
  • An internal disagreement about pace of capability work versus safety work that had been simmering for at least eighteen months.
  • Concerns — aired in board meetings — about the alignment between Altman's communications and his actions on certain matters.

What broke the play

  • The board did not have a credible succession plan.
  • It did not communicate its reasoning publicly.
  • Microsoft and most senior OpenAI staff aligned with Altman within hours.
  • The structural fact that the people doing the work owned equity that the board's action was visibly destroying.

The aftermath, in five bullets

  • Altman reinstated as CEO; a new initial board of Bret Taylor (chair), Larry Summers and Adam D'Angelo.
  • An independent review by WilmerHale reported in March 2024; Altman stayed CEO and rejoined the board.
  • Sutskever went publicly silent, then left in May 2024; Jan Leike followed days later and the superalignment team was dissolved.
  • Murati, briefly interim CEO during the crisis, stayed on as CTO until her own departure in late 2024.
  • The hybrid governance structure itself entered a slow, still-unfinished renegotiation.

Why this episode is in the deck

The November 2023 crisis is the single clearest demonstration in the field's history of the structural tension OpenAI's hybrid form was always going to produce. A 501(c)(3) board is meant to be able to fire its CEO over mission-fit concerns; in practice, when the operating subsidiary is a $90 B firm with thousands of equity-holding employees, it cannot. The crisis settled the question. The structural debate that produced it is not settled and shows up again in slide 11.

10

Sora, the o-series, and GPT-5

The post-crisis era at OpenAI has been about two things: diversifying the product surface (video, voice, agents, search) and cracking test-time compute (the o-series). Both have largely worked.

Date | Launch | What it added
Feb 2024 | Sora 1 | Diffusion video, 60-second outputs. Initial limited rollout. Triggers a major industry pivot toward video.
May 2024 | GPT-4o ("omni") | Native multimodality — speech, vision, text in one model. Halves API price.
Sep 2024 | o1-preview | RL on chains of thought. Test-time compute pays off. Major step on math/coding/science.
Dec 2024 | o1 (full) and o3 announcement | o3 posts breakthrough scores on FrontierMath and ARC-AGI; pricing shifts toward "compute per query".
Jan 2025 | Operator | Browser agent. First production GUI-controlling agent from OpenAI; competes with Anthropic's Computer Use.
2025–26 | GPT-5, Sora 2, o-series continues | Frontier reasoning + multimodality consolidate.

The o-series in one paragraph

The o-series applies reinforcement learning to chains of thought. Rather than training the model only to output a final answer, it trains it to spend variable amounts of inference compute thinking through the problem first. The discovery — reportedly the result of work by Hyung Won Chung, Lukasz Kaiser, Hunter Lightman, Jakub Pachocki, Noam Brown and colleagues — was that this generalised dramatically, particularly on math and reasoning. By the end of 2024 the o-series had become the template every other frontier lab tried to match (Anthropic with the extended-thinking modes, Google with Gemini 2.5 Thinking, DeepSeek with R1).
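The o-series training method is not public, but the cheapest public approximation of "spend more inference compute" — self-consistency, i.e. sample several answers and majority-vote — shows why variable test-time compute helps at all. The noisy model below is a stand-in, not any lab's actual system:

```python
from collections import Counter
import random

def majority_vote_answer(sample_answer, n_samples: int) -> str:
    """Self-consistency: draw n independent samples from the model and
    return the most common final answer. More samples means more
    inference compute and higher reliability, with no weight changes."""
    votes = Counter(sample_answer() for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Stand-in for a model that answers this question correctly 60% of the time.
rng = random.Random(0)
noisy_model = lambda: "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

# Voting over 25 samples is far more reliable than a single draw.
answer = majority_vote_answer(noisy_model, 25)
```

The o-series reportedly goes much further, with RL-trained chains of thought rather than naive voting, but the economics are the same: accuracy bought with inference compute.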

The first time test-time compute changed pricing

Pre-o-series, frontier-lab pricing was per-token. Post-o-series, OpenAI began charging per-query for high-reasoning workloads, with single hard-math queries costing several dollars. This is the first time inference economics meaningfully diverged from a flat token rate, and probably an important precursor to the agent-pricing models of 2026.
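The pricing shift is easy to see in numbers. A hypothetical comparison with invented round-number rates (not OpenAI's actual price list):

```python
def per_token_cost(prompt_toks: int, completion_toks: int,
                   in_rate: float, out_rate: float) -> float:
    """Classic per-token pricing; rates are dollars per 1M tokens."""
    return prompt_toks / 1e6 * in_rate + completion_toks / 1e6 * out_rate

# Flat chat query: 2k tokens in, 1k out at $5/$15 per 1M tokens -> $0.025.
chat = per_token_cost(2_000, 1_000, 5.0, 15.0)

# A reasoning query emitting 200k hidden chain-of-thought tokens before
# its 1k-token answer costs over 100x more at the same rates, hence the
# move toward per-query pricing for high-reasoning workloads.
reasoning = per_token_cost(2_000, 201_000, 5.0, 15.0)
```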

11

The Principals Today

SA

Sam Altman — CEO

CEO since 2019; reinstated Nov 2023

More publicly visible than any other frontier-lab principal. Personally invested in numerous adjacent ventures (Worldcoin / World, Helion fusion, Oklo nuclear, others). Appears comfortable with the scale of the responsibility; multiple interviewers describe him as unusually calm under pressure. The 2023 crisis appears, externally, to have strengthened rather than weakened his control.

GB

Greg Brockman — President

President; back from sabbatical 2024

Returned in late 2024. Continues to be the most engineering-heavy of the founders; runs much of the day-to-day infrastructure and training-run management.

IS

Ilya Sutskever — founded Safe Superintelligence Inc (2024)

No longer at OpenAI

Left in May 2024. SSI is small, has raised over $1 B in known rounds, and is explicitly mission-focused on safety. Sutskever's public statements since the departure have been minimal — a deliberate retreat from public discourse that contrasts sharply with Altman's posture.

MM

Mira Murati — founded Thinking Machines Lab (2024)

No longer at OpenAI; was CTO 2022–2024

Albanian-Canadian. Engineering manager at Tesla, then OpenAI from 2018. CTO during the ChatGPT release and the Sora preview. Briefly interim CEO during the November 2023 weekend. Left in late 2024 to start Thinking Machines Lab; raised one of the largest seed rounds in tech history.

JP

Jakub Pachocki — Chief Scientist

Chief Scientist since May 2024

Polish. Theoretical computer scientist (Carnegie Mellon PhD), joined OpenAI in 2017. Took over the Chief Scientist role on Sutskever's departure. Heavily associated with the o-series technical leadership.

SF / BL

Sarah Friar — CFO; Brad Lightcap — COO

Senior leadership team 2024–

The non-research-side senior team that has taken on increased responsibility post-crisis. Friar (ex-Square CFO, ex-Nextdoor CEO) brings public-company financial experience; Lightcap has been at OpenAI from the early days and runs much of the operations.

The diaspora

OpenAI alumni in 2026 include the founders of Anthropic, SSI, Thinking Machines Lab, xAI's senior research staff, Inflection's founders, senior researchers at Physical Intelligence, parts of Cohere, and substantial fractions of the senior staff at Google DeepMind and Meta AI. There has never been a single research organisation in computer-science history whose alumni have founded so many other significant organisations within a single decade.

12

Cheat Sheet

Five turning points

  • Dec 2015 — founded as a non-profit.
  • Mar 2019 — capped-profit reorg; Microsoft $1 B.
  • May 2020 — GPT-3 launches.
  • Nov 2022 — ChatGPT.
  • Nov 2023 — the board crisis.

The principals

  • Altman — CEO.
  • Brockman — President.
  • Sutskever — left to found SSI.
  • Murati — left to found Thinking Machines.
  • Karpathy — left to build Eureka Labs.
  • Pachocki — current Chief Scientist.

The technology bets

  • Decoder-only at scale (GPT-3).
  • RLHF (InstructGPT → ChatGPT).
  • Multimodality (GPT-4o, Sora).
  • Test-time compute (the o-series).
  • Agents (Operator, GPT-5 agentic mode).

What's next in the series

  • 06 — Anthropic. The other side of the OpenAI alignment debate, run as its own lab.
  • 07 — Google DeepMind, the lab that (as Google Brain) published the transformer but never quite shipped the defining product.