LLM History Series — Presentation 07

DeepMind — The Other AGI Lab

Founded in London in 2010 and acquired by Google in 2014, DeepMind produced AlphaGo and AlphaFold; the transformer paper came from its sister organisation Google Brain. The lab spent the early 2020s catching up to OpenAI on language models from a standing start, merged with Brain in 2023, and now ships the Gemini line.

2010–2026 · Hassabis · Suleyman · Legg · AlphaGo · AlphaFold · Gemini
Founded London (2010) → Google buys (2014) → AlphaGo (2016) → AlphaFold 2 (2020) → Brain merger (2023) → Gemini 1…2.5
00

What This Deck Covers

Google DeepMind as it exists today is an unusual hybrid: a London-headquartered research lab famous for AlphaGo, an in-house ML research arm of one of the world's largest companies, and the producer of the Gemini frontier model line. The deck traces how all three of those identities came to coexist, and what that means for the lab's role in LLM history.

01

The 2010 Founding — Hassabis, Suleyman, Legg

DeepMind was incorporated in London in September 2010 by three people with very different backgrounds: Demis Hassabis, a neuroscientist and former chess prodigy and games-industry entrepreneur; Mustafa Suleyman, a Mansfield College, Oxford dropout who had run a not-for-profit human-rights group; and Shane Legg, a New Zealand-born theoretician whose PhD at IDSIA in Lugano, under Marcus Hutter, was on formal definitions of machine intelligence.

DH

Demis Hassabis

CEO and co-founder; Bullfrog games → Cambridge undergrad → Lionhead/Elixir → UCL PhD neuroscience → DeepMind → Google DeepMind CEO

British, Cypriot/Greek/Singaporean heritage. Chess master at 13. Lead AI programmer on Theme Park aged 17. Worked at Lionhead Studios under Peter Molyneux. Founded the games studio Elixir in his early twenties (it produced Republic: The Revolution and Evil Genius). Then went back to UCL to do a PhD in cognitive neuroscience, specifically on memory and imagination. The DeepMind founding philosophy — AGI by understanding the brain — reflects this trajectory directly. Joint Nobel Prize 2024 (Chemistry, with Jumper, for AlphaFold).

MS

Mustafa Suleyman

Co-founder; left DeepMind 2019 → Inflection AI co-founder 2022 → Microsoft AI CEO 2024

British. Met Hassabis when their families were neighbours; Hassabis's brother and Suleyman were close friends. Ran a Muslim youth telephone helpline before co-founding DeepMind. The applied-AI / partnerships side of the early lab. Left DeepMind in 2019, eventually co-founded Inflection AI with Reid Hoffman and Karen Simonyan. In March 2024 most of Inflection's senior staff including Suleyman moved to Microsoft to run Microsoft AI.

SL

Shane Legg

Co-founder, Chief AGI Scientist

New Zealander. PhD with Marcus Hutter on theoretical AGI metrics. The most consistent theoretical voice in the lab; has held roughly the same view for twenty years: that AGI is achievable on a relatively short timeline and requires careful preparation. Lower public profile than Hassabis or Suleyman but central to the lab's strategic direction.

The original DeepMind pitch

"We're going to solve intelligence. And then we're going to use it to solve everything else." Hassabis used variations of this line in fundraising decks from 2010 onwards. Investors included Peter Thiel (Founders Fund), Elon Musk (a few hundred thousand dollars personally), Horizons Ventures (Li Ka-shing), and Scott Banister.

02

The 2014 Google Acquisition

In late 2013 Facebook tried to acquire DeepMind. Google won the bidding war — the deal closed in January 2014 at a reported ~$500 M, far more than DeepMind's revenue justified. Unusual conditions were attached:

What DeepMind got

  • Headquarters stayed in London, where it remains.
  • An ethics board / advisory board was committed to (the structure has evolved).
  • Substantial autonomy from Google's product organisation.
  • Access to Google compute (eventually TPUs) at internal pricing.

What Google got

  • The leading academic-style RL research group, intact.
  • An optionality bet on AGI.
  • An existence proof that an outside lab could be acquired without destroying it.
  • The first glimpse of a London-based research strategy that has held since.

Why Google paid so much

Multiple sources have reported that Larry Page and Sergey Brin had a personal interest in AGI dating back to Google's earliest days. The DeepMind acquisition was reportedly Page's call as much as Eric Schmidt's, on the thesis that owning a serious AGI research lab was strategically more valuable than any near-term financial accounting. The price — widely seen as inflated at the time — turned out to be a small fraction of the eventual strategic value.

03

Atari → AlphaGo — The RL Era

DeepMind's first famous result, published before the Google acquisition closed, was the Deep Q-Network (DQN) paper of December 2013 (journal version in Nature, February 2015): a reinforcement-learning agent that learned to play Atari 2600 games — seven in the workshop paper, 49 in the Nature version — at human-or-superhuman level using only the screen pixels and the score as inputs. Volodymyr Mnih, Koray Kavukcuoglu, David Silver et al.
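The update rule DQN scales up is classical Q-learning; a tabular sketch on a toy environment shows the core idea (the chain environment and all hyperparameters here are hypothetical illustrations, not from the paper — DQN's contribution was making this update stable with a convolutional network over raw pixels, experience replay, and a target network):

```python
import random

# Tabular Q-learning on a toy 5-state chain: reward 1 at the right end.
# DQN replaces the Q-table below with a neural network over pixels.
N_STATES = 5
ACTIONS = [-1, +1]                        # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1         # step size, discount, exploration

random.seed(0)
for _ in range(500):                      # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # the Q-learning / DQN target: r + gamma * max_a' Q(s', a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right, toward the reward.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The same loop, with a function approximator in place of `Q` and frames in place of integer states, is the shape of the 2013 result.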

From there the cadence was extraordinary:

Year | Result | What it showed
2013/15 | DQN, Atari | Deep RL on raw pixels works.
2016 | AlphaGo beats Lee Sedol 4–1 | Search + deep RL solves a problem most experts thought was 10 years away. Ran on TPU v1.
2017 | AlphaGo Zero | Self-play from scratch, no human games. Far stronger than AlphaGo.
2018 | AlphaZero | Same algorithm beats world-best engines at chess, shogi and Go.
2019 | MuZero | Plans without being told the rules of the game. Closes the loop on model-based RL.
2019 | AlphaStar | Beats StarCraft II grandmasters. Multi-agent, partial observation, real-time.

The AlphaGo–Lee Sedol match in March 2016 is, alongside the GPT-3 paper and the ChatGPT launch, one of the three "this changed how the public thinks about AI" events of the modern era.

A subtle pattern

DeepMind's pre-LLM identity was reinforcement learning + search. This is technically and culturally distinct from language modelling: RL emphasises agents, environments, exploration, and reward design. When the field's centre of gravity shifted to scaling next-token prediction, DeepMind had less institutional muscle than OpenAI did because the work was different. The lab's catch-up effort in 2021–2023 is partly a rebuild of language-modelling muscle from scratch.

04

AlphaFold and Science DeepMind

Alongside the games line, DeepMind built a science-applications line that produced its most consequential single achievement: AlphaFold, the protein-structure prediction system.

Year | Result | What it showed
2018 | AlphaFold 1 | Wins CASP13. First learning-based system to be competitive with classical methods.
2020 | AlphaFold 2 | Wins CASP14 by a huge margin. Generally considered to have solved single-domain protein structure prediction.
2021 | AlphaFold-Multimer; protein database release | Hundreds of thousands of structures released open-access.
2024 | AlphaFold 3 | Extended to small molecules, nucleic acids, ligands. Shipped as a controlled-access service.
2024 | Nobel Prize in Chemistry | Hassabis and Jumper share half of the prize.

The science programme has expanded beyond proteins: GraphCast for weather forecasting, AlphaProof / AlphaGeometry for IMO-level mathematics, materials work (GNoME), fusion (plasma control with EPFL).

Why the science line matters

Science DeepMind is the strongest existing case study that frontier AI applied to a specific, high-value scientific domain can produce step-change results. The AlphaFold programme's strategic importance for the lab is also internal: it gives DeepMind a research-credibility moat that LLM-only labs do not have. Hassabis's strategic posture, very consistently, is that DeepMind's distinctive identity is the science applications, with LLM frontier work as a means rather than an end.

05

DeepMind's Pre-LLM Reluctance

Through 2017–2020 DeepMind did not have an OpenAI-style GPT line. The reasons are partly cultural and partly accidental. In aggregate they explain why DeepMind, sister organisation to the Google Brain team that wrote the transformer paper, did not turn that work into the dominant LLM lab.

1. Research culture

DeepMind valued conceptual breakthroughs (DQN, AlphaGo, AlphaFold). Scaling a known architecture was not glamorous. "Engineering, not science" was an unspoken framing.

2. Budget incentives

DeepMind ran on a fixed Google budget; running a $50 M training experiment competed with every other line in the lab. Brain (Mountain View) had similar fixed budgets but more political access to TPU pods.

3. Personnel concentration

The senior LLM-relevant talent at Google was concentrated at Brain and Google AI Language — the BERT and T5 teams — not at DeepMind. The transformer authors themselves were Brain people, not DeepMind people.

The internal effect

By late 2021 DeepMind had Gopher (a 280 B-parameter LM) and was running good language-model research. But it was not the centre of the work the way OpenAI was. Its Chinchilla paper (Hoffmann et al., 2022) corrected the Kaplan 2020 scaling laws and is one of the most influential single-paper contributions to LLM training of the entire era. But Chinchilla was analysis, not a competitive product.

Chinchilla, briefly

Hoffmann et al showed that the Kaplan 2020 scaling-laws conclusion ("more parameters, less data") was wrong: optimal training scales tokens roughly proportionally to parameters (rather than parameters faster than tokens). Chinchilla (70 B params, 1.4 T tokens) outperformed Gopher (280 B, 300 B tokens). The result has been built into every frontier lab's training methodology since.
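The compute split this implies can be sketched with two common approximations: training compute C ≈ 6ND (N parameters, D tokens) and the roughly 20-tokens-per-parameter ratio often quoted as the paper's rule of thumb. Both are simplifications of the fitted law, so this is a back-of-envelope sketch, not the paper's method:

```python
import math

def chinchilla_optimal(compute_flops, tokens_per_param=20):
    """Split a training-compute budget between parameters and tokens.

    Assumes C = 6 * N * D and D = r * N with r ~ 20, which gives
    N = sqrt(C / (6 * r)). A rule-of-thumb approximation of the
    Hoffmann et al. 2022 result, not their fitted scaling law.
    """
    n_params = math.sqrt(compute_flops / (6 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla's own budget recovers its published shape:
# ~70 B parameters trained on ~1.4 T tokens.
n, d = chinchilla_optimal(6 * 70e9 * 1.4e12)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

Plugging Gopher's parameter count into the same rule shows why it was undertrained: 280 B parameters would have wanted trillions of tokens, not the 300 B it saw.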

06

Pathways, PaLM and the Brain–DeepMind Tension

For most of 2018–2023 Google had two large AI organisations — Brain (Mountain View) and DeepMind (London) — with overlapping mandates, both reporting up through Sundar Pichai but operating day-to-day independently. It is one of the most-discussed episodes in Google org-chart history.

The tension peaked around 2021–2022 with two competing flagship LLM programmes:

Pathways / PaLM (Brain)

Jeff Dean's Pathways vision: a single foundation model that could route any task to specialised sub-modules. PaLM (540 B, April 2022) and PaLM-2 (May 2023) shipped under this banner. Brain's TPU-pod-scale infrastructure made these runs feasible.

Gopher / Chinchilla / RETRO (DeepMind)

DeepMind's Gopher (280 B, Dec 2021), Chinchilla, and RETRO (retrieval-augmented). Strong research, less product-channel adoption inside Google.

The duplication was visible enough externally to be a regular topic in the AI press by 2022. ChatGPT in November 2022 made the situation untenable: Google had two world-class LLM programmes and no shipped consumer chat product. Bard (now Gemini) launched in early 2023 on LaMDA, moved to PaLM 2 derivatives that May, and was widely judged underwhelming relative to ChatGPT.

The "code red"

Sundar Pichai reportedly issued a "code red" internally in December 2022 over ChatGPT's launch. The April 2023 announcement of the Brain–DeepMind merger was the structural response.

07

The 2023 Merger and Gemini

On 20 April 2023 Sundar Pichai announced that Google Brain and DeepMind would merge into a single organisation, Google DeepMind, with Demis Hassabis as CEO. Jeff Dean became Chief Scientist across Google Research and Google DeepMind, a broader but less operational remit. The merger formally completed over the following six months.

What the merger fixed

  • Single decision-maker on frontier model strategy.
  • Combined TPU resourcing and team allocation.
  • Faster shipping cadence on flagship models (Gemini 1 launched Dec 2023 — eight months after the merger announcement).
  • Clear external brand: Google DeepMind, the Gemini line.

What it did not fix

  • Bay Area / London cultural differences (still real).
  • Internal duplication on smaller research lines (still working through this in 2025).
  • Research-vs-ship tension that has been part of Google AI for a decade.
  • The reality that several of the strongest individual contributors had already left for OpenAI / Anthropic / start-ups.

Gemini 1 was announced in three sizes (Nano, Pro, Ultra) on 6 December 2023. The launch was widely seen as competent but not transformational: the demo video was edited in ways that drew criticism, and Ultra's headline benchmark wins were narrow.

08

Gemini 1, 1.5, 2, 2.5 and Production Adoption

Date | Model | What it added
Dec 2023 | Gemini 1 (Nano / Pro / Ultra) | First post-merger flagship. Natively multimodal. Mixed reception.
Feb 2024 | Gemini 1.5 Pro | 1 M-token context window, the first frontier model with context at this scale. Genuine differentiation.
May 2024 | Project Astra demo | Real-time multimodal agent demo. Streaming voice, vision, action.
Dec 2024 | Gemini 2.0 Flash & experimental Pro | Stronger reasoning, native tool use.
2025 | Gemini 2.5 family + Thinking variants | Test-time "thinking" compute comparable to OpenAI's o-series; Gemini 2.5 Pro Deep Think pushes math/science benchmarks.
2025–26 | Project Mariner (browser agent) | Google's GUI-controlling answer to Computer Use and Operator.

Where Gemini wins, where it does not

Wins: long context (still ahead at 2 M tokens for some Gemini variants), multimodal (native video understanding, plus video generation via the separate Veo line), Search and Workspace integration (a different game from API competition), and the cost curve at the lower-end Flash tier.

Does not win: developer market share for coding agents (Claude leads), consumer chat brand (ChatGPT leads), API revenue (third behind OpenAI and Anthropic among western labs).

The strategic puzzle

Google's strategic asset is not really Gemini-the-model; it is Gemini deeply integrated into Search, Workspace, Android and the phone-OS layer. By 2025 that integration push was the most active part of the strategy. Whether Google can convert distribution leverage into a durable frontier-lab competitive position is one of the open strategic questions of the next two years.

09

The Demis Hassabis Profile

Hassabis is one of the more distinctive principals in the field, in part because of how unusual his background is: a scientist who was previously a games-industry creative-and-technical lead, who is also an unusually strong public communicator without being a Twitter fixture, who runs an organisation an ocean away from his board. A short character sketch:

Reasonably well-attested

  • Calm, methodical, low-volume speaker.
  • Genuinely interested in the science of intelligence; reads and engages with neuroscience literature regularly.
  • Plays games competitively at high levels (chess, poker; played for England junior chess teams).
  • Long-horizon strategic thinker; the lab's roadmap is recognisably his.

Public posture

  • Cautious about AGI timelines but not dismissive of fast scenarios.
  • Stronger emphasis on scientific applications than most frontier-lab CEOs.
  • Engages substantially with UK government and regulators (UK AISI, AI Safety Summit).
  • Unusually willing to discuss philosophical questions about consciousness and intelligence in interviews.

"I think we should treat this technology with the same care and respect we treat any other transformative technology in human history — nuclear, biological, the printing press. We don't get to pretend it is just another product." — Demis Hassabis, paraphrased from multiple interviews, 2023–2024.

The Nobel Prize in Chemistry in October 2024 was a public confirmation of Hassabis's long-standing strategic emphasis on science applications. Few frontier-lab CEOs have a Nobel attached to a flagship product line; the strategic value of that is hard to quantify but real.

10

Other DeepMind Notables

KK

Koray Kavukcuoglu — CTO

DeepMind from 2012; previously LeCun postdoc at NYU

Turkish. Senior research engineer turned CTO. The technical infrastructure-and-engineering lead behind much of DeepMind's training stack. Quiet, low-profile, deeply trusted internally.

OV

Oriol Vinyals — VP Research

DeepMind from 2016 (previously at Google Brain); UC Berkeley PhD

Spanish. Co-author of seq2seq; AlphaStar lead; among the most prolific recent senior researchers inside the lab. Public-facing scientific voice second only to Hassabis at the lab.

DS

David Silver — Principal Scientist; AlphaGo lead

UCL professor (joint); DeepMind from 2013

British. Rich Sutton's PhD student at Alberta. Architect of AlphaGo, AlphaGo Zero, AlphaZero, MuZero. Co-authored with Sutton the 2025 essay "Welcome to the Era of Experience", arguing that the next phase of AI is RL on real-world experience rather than imitation of static text.

MS

Mustafa Suleyman — departed for Microsoft AI

Co-founder; DeepMind 2010–2019; Inflection 2022–2024; Microsoft AI CEO 2024–

Public-facing applied-AI builder. After leaving DeepMind he co-founded Inflection AI with Reid Hoffman and Karén Simonyan; their product Pi was one of the more thoughtful consumer-AI experiments. Inflection's senior team transferred to Microsoft in March 2024 in a $650 M licensing arrangement widely read as a near-acquisition. Now runs Microsoft AI, the organisation behind Copilot.

SL

Shane Legg — Chief AGI Scientist

Co-founder, still in the lab

Public-facing strategic-thinking voice. The most consistent voice in the field on AGI timelines — has been forecasting roughly 2028 for AGI for over fifteen years. Many forecasters have updated; he largely has not, which is a useful data point in itself.

JD

Jeff Dean — Google AI Chief Scientist (post-merger role)

Google Senior Fellow; Brain co-founder; Google AI Chief Scientist 2023–

One of the best-regarded systems engineers in computer science. Co-founded Brain; behind MapReduce, Bigtable, Spanner, TensorFlow and the TPU programme. After the merger, runs Google's broader AI research strategy at the platform level rather than the immediate Gemini line.

11

The DeepMind Lab Today

Google DeepMind in 2026 is around 6,000 people across London (HQ), Mountain View, Zürich, Tokyo, Paris and a few other locations. It is by some margin the largest of the frontier labs. Its productive surface area is also the broadest:

What the lab works on

  • Gemini frontier line (text, image, audio, video, code).
  • Project Astra (real-time multimodal agent).
  • Project Mariner (browser agent).
  • AlphaFold and the science line (proteins, materials, weather, fusion).
  • AlphaProof / AlphaGeometry (math reasoning).
  • Gemma open-weight family.
  • Robotics (Gemini Robotics, RT-1/RT-2 lineage).
  • Safety, alignment, interpretability (Frontier Safety Framework, mechanistic interpretability team).

Strategic position

  • One of three western frontier labs by capability.
  • Deepest distribution leverage of any lab via Google products.
  • Strongest science-applications portfolio.
  • Less commercially direct than OpenAI/Anthropic at the API tier.
  • Most international talent footprint.
  • Subject to Google product-organisation pressures the others do not have.

A useful frame for the lab

Google DeepMind is the only frontier lab that is also a product engineering organisation inside a $2 T public company. The constraint is also the asset: it ships less aggressively than OpenAI, but it has surface area for the technology that no other lab has. Whether that becomes a moat or a millstone is the central commercial-strategic question for the lab over the next three years.

12

Cheat Sheet

Five turning points

  • 2010 — founded London by Hassabis, Suleyman, Legg.
  • 2014 — Google acquires for ~$500 M.
  • 2016 — AlphaGo beats Lee Sedol.
  • 2020 — AlphaFold 2 solves protein structure.
  • 2023 — Brain–DeepMind merger; Gemini 1.

The principals

  • Demis Hassabis — CEO; 2024 Nobel.
  • Shane Legg — Chief AGI Scientist.
  • Koray Kavukcuoglu — CTO.
  • Oriol Vinyals — VP Research.
  • David Silver — AlphaGo / RL.
  • Jeff Dean — Google AI Chief Scientist (overall).

Three pillars

  • Frontier LLM (Gemini line).
  • Science applications (AlphaFold, AlphaProof, materials).
  • RL / agents (DQN heritage; Astra, Mariner, robotics).

What's next in the series

  • 08 — Meta, Mistral, xAI & the second tier.
  • 09 — Chinese frontier labs.