"Empire of AI" by Karen Hao
Introduction: The Central Argument of "Empire of AI"
The book argues that the story of OpenAI is a powerful lens through which to understand the modern AI industry. It frames OpenAI not just as a company but as the leader of a new kind of AI empire. This empire operates by centralizing immense power, talent, and resources under the banner of a noble, world-saving mission: to ensure Artificial General Intelligence (AGI) benefits all of humanity.
However, the book contends that this mission has been reinterpreted over time to justify an aggressive, secretive, and profit-driven quest for dominance. This quest has significant hidden costs that are borne by the most vulnerable people around the world, creating a modern form of colonialism built on data extraction, labor exploitation, and environmental degradation.
Part 1: The Rise of the AI Empire: The Story of OpenAI
1.1. The Founding Myth vs. Reality
1.1.1. A Noble Mission OpenAI was launched in 2015 as a nonprofit research lab. Its public mission was to be an altruistic counterweight to corporate giants like Google, which co-founder Elon Musk feared would control the future of AI. The founders pledged to develop AGI safely, guided by ideals of openness, transparency, and collaboration for the good of the world.
1.1.2. The Erosion of Ideals This idealistic vision quickly crumbled. The founders realized that their ambition required extraordinary amounts of money and computational power ("compute"). This led to a series of critical shifts:
- From Nonprofit to "Capped-Profit": To raise capital, CEO Sam Altman created a for-profit arm, OpenAI LP, nested within the nonprofit, promising investors returns of up to 100x their investment.
- From Openness to Secrecy: The original promise of sharing all research was abandoned. The company argued that its increasingly powerful technology was too dangerous to be open-sourced and that secrecy was necessary for both safety and competitive advantage.
- From Collaboration to Competition: The mission became a race to be the first to build AGI, triggering the very "competitive race" the founders had warned against.
1.2. The Architect: Sam Altman
The book presents Sam Altman as the central figure driving OpenAI's transformation.
1.2.1. A Master of Power and Narrative Altman is profiled as a brilliant and charismatic leader, skilled at fundraising, building powerful networks, and telling compelling stories about the future. However, the book notes that he has been dogged by accusations of dishonesty, manipulation, and self-serving behavior since his first startup, Loopt. The November 2023 board crisis that led to his temporary firing was rooted in the board's conclusion that he was "not consistently candid".
1.2.2. The Monopoly Strategy Influenced by his mentor, billionaire investor Peter Thiel, Altman pursues a strategy of monopoly: create a product that is an "order of magnitude" (10x) better than anything else, and move relentlessly fast. This aggressive mindset was applied to OpenAI, reframing its mission as a winner-takes-all competition.
1.3. The Winning Formula: Scaling and the GPT Models
1.3.1. The "Scaling Laws" OpenAI's core technical strategy was pioneered by co-founder and chief scientist Ilya Sutskever. He held a deep belief that the path to more powerful AI was simply to scale: to make existing neural network designs (like Google's Transformer) vastly bigger by training them on more data with more compute power. Researchers at OpenAI discovered that this relationship was predictable, formalizing it as "scaling laws": increasing a model's size, data, and compute leads to predictably better performance.
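The predictability described above can be sketched as a power law: loss falls by a fixed factor every time compute doubles. The function and coefficients below are invented for illustration, not the actual values from any published fit.

```python
# Hypothetical sketch of a "scaling law": model loss falls as a power law
# in compute. The coefficients a and alpha are made up for demonstration;
# real values come from empirical curve fits over many training runs.
def predicted_loss(compute, a=10.0, alpha=0.05):
    """Loss ~ a * compute^(-alpha): more compute, predictably lower loss."""
    return a * compute ** -alpha

# Doubling compute always multiplies the loss by the same constant factor
# (2^-alpha), which is what makes the improvement "predictable" enough to
# plan multimillion-dollar training runs around.
ratio = predicted_loss(2e6) / predicted_loss(1e6)
assert abs(ratio - 2 ** -0.05) < 1e-9
```

The strategic implication is the point: if the curve is smooth, a lab can forecast how much better a model will get before spending the money to train it.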
1.3.2. From GPT-1 to ChatGPT This scaling strategy led to the development of the Generative Pre-trained Transformer (GPT) series. Each model was orders of magnitude larger than the last, culminating in GPT-3, a model over 100 times the size of its predecessor, GPT-2. This massive leap in scale produced a stunning jump in capabilities. The public release of ChatGPT in November 2022, a user-friendly chatbot built on an improved version of GPT-3, made this power accessible to millions and cemented OpenAI as the world's leading AI company.
Part 2: The Hidden Costs of the Empire
This section turns to the practical, real-world consequences of OpenAI's vision, particularly for the vulnerable people and communities who bear its hidden costs.
2.1. The Flawed Foundation of Modern AI
2.1.1. An Unreliable Technology The book argues that modern AI models, including ChatGPT, are not truly "intelligent." They are complex statistical pattern-matchers trained on vast amounts of data. This makes them inherently unreliable:
- They are "black boxes," meaning even their creators don't fully understand how they work.
- They often reproduce biases and stereotypes found in their training data, harming marginalized communities.
- They are prone to making things up, a phenomenon misleadingly called "hallucinations".
2.1.2. Data Colonialism: The Endless Hunger for Data To feed their ever-larger models, AI companies engage in mass data extraction. They scrape text and images from the entire internet, including copyrighted books, news articles, and artwork, often without consent, credit, or compensation. The book calls this a new form of "data colonialism," where human experience and culture are treated as a raw resource to be extracted and refined for profit.
2.2. The Human Price: Ghost Work and Exploitation
A central theme of the book is the hidden human labor that powers AI. To make AI systems functional and safe, a global underclass of poorly paid "ghost workers" is required.
2.2.1. A Business Model of "Disaster Capitalism" The AI data-labeling industry profits from global crises. The book details how firms like Scale AI and Appen built their workforce by targeting populations in economically devastated countries like Venezuela, where educated people were desperate enough to work for pennies.
2.2.2. The Trauma Behind ChatGPT To make its models less toxic, OpenAI needed to build a content-moderation filter. It outsourced this work to a firm in Kenya, where workers were paid less than $2 an hour to read and label hundreds of thousands of graphic and disturbing text excerpts describing sexual abuse, violence, and hate speech. The book tells the story of Mophat Okinyi, a Kenyan worker who suffered severe and lasting psychological trauma from this project.
2.2.3. The New Digital Sweatshop Making ChatGPT helpful and conversational required a technique called Reinforcement Learning from Human Feedback (RLHF). This involved hiring over a thousand contractors globally to perform tedious tasks: writing countless example conversations and ranking the AI's responses from best to worst. The book highlights the precarious lives of these workers, like Winnie in Kenya, who often work grueling hours for unstable, piece-rate pay, their livelihoods dependent on the whims of an algorithm.
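The ranking work described above feeds a training signal: a reward model is fit so that responses contractors ranked higher receive higher scores. A common formulation for turning pairwise rankings into probabilities is the Bradley-Terry model; the scores and pair below are invented for illustration.

```python
import math

# Minimal sketch of how human rankings become a training signal in RLHF.
# A reward model assigns each response a score; the Bradley-Terry model
# converts a pair of scores into the probability that the first response
# is the one the human preferred.
def preference_probability(score_preferred, score_rejected):
    """P(preferred beats rejected) = sigmoid(score difference)."""
    return 1.0 / (1.0 + math.exp(-(score_preferred - score_rejected)))

# Training minimizes -log P over human-labeled pairs, pushing the reward
# model's scores to agree with the contractors' rankings. Scores here are
# hypothetical.
loss = -math.log(preference_probability(1.2, 0.3))
```

Every labeled pair in that loss is a judgment a human contractor made, which is why the technique depends on the large, low-paid workforce the book describes.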
2.3. The Environmental Toll: A Plundered Earth
The book argues that the digital AI empire has a massive, physical footprint that is unsustainable.
2.3.1. The Insatiable Thirst for Energy and Water Training and running large AI models requires massive data centers that consume enormous amounts of energy and water. A single AI data "megacampus" can use as much power as a city and billions of gallons of fresh water for cooling. This is driving up global energy demand and forcing utility companies to delay the retirement of fossil fuel plants.
2.3.2. A New Wave of Extraction This infrastructure boom has led tech giants like Microsoft and Google to build data centers in water-scarce regions like Arizona, Chile, and Uruguay. The book tells the stories of local activists fighting back against these projects, which they see as a modern form of colonial extraction that drains their communities of vital resources like drinking water for corporate profit. This demand also fuels mining for materials like copper and lithium, further devastating ecosystems and Indigenous lands.
Part 3: The Struggle for Control and the Ouster of Sam Altman
3.1. A Battle of Ideologies: "Doomers" vs. "Boomers"
The book portrays OpenAI as a battleground between two conflicting ideologies that have polarized the AI world:
- The "Doomers": Influenced by the effective altruism (EA) movement, this faction is deeply concerned about long-term, catastrophic risks, including the possibility that a superintelligent AGI could go rogue and cause human extinction. Many in OpenAI's "Safety" team and at rival company Anthropic belong to this group.
- The "Boomers" (or "e/acc"): Short for effective accelerationists, this faction views rapid technological progress as a moral good and pushes to build and deploy AI as fast as possible to unlock its benefits.
3.2. The Coup and Its Failure
The book provides a detailed inside account of Sam Altman's firing in November 2023. It argues that the board's decision was not based on a single incident but on a cumulative loss of trust. Board members and key executives like Ilya Sutskever and Mira Murati had grown concerned that Altman's pattern of being "not consistently candid," manipulative, and resistant to oversight posed a threat to the company's mission.
However, the board completely miscalculated the power Altman held. An intense backlash from employees, investors like Microsoft, and the broader Silicon Valley elite forced the board to resign and reinstate Altman within five days.
3.3. The Aftermath: Power Consolidated
The failed coup ultimately solidified Altman's control. Key safety-focused leaders who had opposed him, including Ilya Sutskever and Jan Leike, departed the company. The crisis also accelerated plans to restructure OpenAI into a for-profit benefit corporation, a move that would formally dismantle the nonprofit board's authority and align the company more closely with traditional corporate interests.
Part 4: An Alternative Vision: How the Empire Falls
The book concludes not on a note of despair but with a call to action, presenting a vision for a more democratic and equitable AI future.
4.1. A Different Path: The Te Hiku Media Model
The book highlights the work of Te Hiku Media, an Indigenous MÄori organization in New Zealand. They used AI to help revitalize their endangered language, but did so on their own terms. Their approach was the opposite of OpenAI's:
- It was small-scale, using open-source models that required only two GPUs.
- It was community-driven, based on principles of consent, reciprocity, and data sovereignty.
- It proved that valuable AI can be built without mass data extraction or massive resource consumption.
4.2. A Call to Action: Redistributing Power
The book argues that to challenge the AI empires, power must be redistributed away from a handful of corporations and back to the public along three key axes:
- Knowledge: Demanding transparency from companies about their training data and funding independent, community-driven research outside of Big Tech.
- Resources: Implementing stronger labor protections for all workers affected by AIāfrom data labelers to artistsāand updating IP laws to protect creators.
- Influence: Promoting public education to demystify AI, break through the corporate hype, and enable a more democratic conversation about how this technology should be governed for the true benefit of all.