Book Review: Olson, P. (2024), Supremacy, Macmillan
- Peter Lorange

This book is about the race among giants in the IT industry to come up with ever more powerful AI applications, such as ChatGPT. It is also about the safety issues surrounding AI. Can the potential for dysfunctional AI-driven applications be lessened? What role might AGI (artificial general intelligence; see explanation below) play in this respect? The book additionally covers how slow and inept large corporations seem to be when it comes to innovation. California-based Google, for instance, appears to rely heavily on the relatively small London-based DeepMind. And Redmond-based Microsoft is perceived to depend significantly on San Francisco's OpenAI. Both Google and Microsoft have some degree of control over these two entities, yet both seem intent on letting the smaller firms maintain a strong degree of independence. Their key role is to support innovation.
Written by Parmy Olson, a technology columnist with Bloomberg covering artificial intelligence, the book covers the period up until 2023. Yet the race between the major players continues today. As recently as mid-2025, for instance, OpenAI launched GPT-5. The broad consensus among most of those reviewing this launch, however, was that it was a "failure", and aspects of the rollout were hastily walked back within weeks of release. The competition among key AI players calls for fast releases, even at the cost of sufficient testing.
Before going further with this review, let us restate several basic facts:
The Giants:
- Google is the world’s clear leader in search engines, with particular emphasis on the role of search in marketing.
- Microsoft is the world’s leader in software.
- Amazon jockeys with both Google and Microsoft for leadership in cloud computing.
The smaller innovation entities:
- DeepMind, owned by Google, tends to focus more on academic uses and applications.
- OpenAI, heavily backed by Microsoft, is more growth and performance oriented.
Key executives with distinct personalities and leadership approaches, clearly and deeply shaping how their companies move forward:
- Sam Altman, co-founder and CEO of OpenAI
- Demis Hassabis, co-founder and CEO of DeepMind
There are interesting background descriptions of both Sam Altman and Demis Hassabis, especially during their formative years. The former was portrayed as being on top of almost everything, athletic as well as academic. The latter was described as growing up with games, especially those that simulate the real world.
Altman's early success came from being heavily involved in Y Combinator, the startup accelerator, as well as his venture fund Hydrazine Capital. Here he also learned two important leadership lessons:
- Do not try to force people to do what they do not want to do.
- Be quick to disengage emotionally from difficult situations.
Hassabis was involved with various early ventures in the UK, such as the games studio Elixir Studios, all of which failed before he set off to start DeepMind. Google's co-founder and then-CEO Larry Page became impressed with DeepMind's approach to AI, and Google ended up purchasing the firm. DeepMind was, however, as already noted, not integrated into Google; it was basically left alone.
Google has been involved in several controversies relating to sex, gender, and race. Over time, Google also became rather strict about allowing its own staff to publish anything that might in any way cast doubt on its search engine! As a consequence, many felt that Google had become slow and bureaucratic. DeepMind, by contrast, largely steered clear of the issues its parent company was struggling with.
Both OpenAI and DeepMind faced what many of their employees saw as a dilemma: how to make the world a better place and simultaneously earn a profit? Both firms tried to build AGI (artificial general intelligence) in an attempt to help cope successfully with this dilemma.
As an aside, it is important to note the main difference between artificial intelligence (AI) and artificial general intelligence (AGI). The two differ significantly in their capabilities. AI, as we know it today, consists of systems designed to perform specific tasks, while AGI is a hypothetical concept that envisions systems with human-level cognitive abilities across a wide range of tasks. In other words, think of AI as task-specific and AGI as aiming to be general-purpose. More specifically, AI encompasses any system that performs tasks typically requiring human intelligence, such as chatbots, recommendation systems, and image recognition. AGI refers to systems with broad, human-level cognitive capability, able to reason, learn, and apply knowledge across a wide range of tasks, not just narrow ones. Companies such as Google, Microsoft, OpenAI, Anthropic, Meta, and Amazon are involved in both AI and AGI, though the focus and framing of their efforts may differ.
Both OpenAI and DeepMind made releases that many might see as rather risky. Their parent companies were typically prepared to take such risks, since most of these releases would not create any dysfunctional effects on their main business activities. So-called large language models were thus launched without having been thoroughly tested. But in late 2022, OpenAI launched ChatGPT, which changed almost everything. It was revolutionary, providing succinct briefings on often complex issues. For Google this posed a particular threat, in the sense that a user might no longer have to wade through the long reams of references returned by a Google search, which means considerable extra work for the person doing the searching. ChatGPT was simpler, faster and better!
So, as of late 2022, Sam Altman and OpenAI took the lead in bringing the best large language model to market (ChatGPT), as well as taking control of the AI narrative.
Google frantically attempted to respond. The most tangible step was to merge its two research organizations working on AI, DeepMind and Google Brain (part of Google Research). Perhaps surprisingly, Hassabis was put in charge of the combined unit. The task was clear, namely to come up with a large language model that would outperform ChatGPT.
At the end of 2023, Google/DeepMind/Hassabis got some surprising "help": OpenAI's board fired Altman. No clear reason was given. Some claim that Altman might have become a little too arrogant. Microsoft, the de facto owner of OpenAI, did not appreciate this move; apparently, it had not been in on the decision to let Altman go. Microsoft promptly offered job opportunities to all OpenAI employees who had become disgruntled over the firing of their widely popular leader. But after only a few days, Altman was reinstated. This episode is, however, not covered in the book.
What this episode clearly demonstrates is that boards at times make decisions that take their firms in directions at odds with the overall strategy. In this case, it was clear that the OpenAI board's decision to fire Altman led to organizational confusion and frustration, thereby slowing down OpenAI's ability to compete successfully against DeepMind/Google.
In her conclusion, the author indicates that transparency seemed to be lacking in the AI industry. Both OpenAI and DeepMind were so focused on building better AI that they did not open themselves up to scrutiny about whether their new systems might cause harm. The race between the two sides, in reality between two giants, thus seemed to lead to a lessening of control over the potentially dysfunctional effects of newly launched systems. Rivalry, in other words, may lead to relaxed concern for the ethical side of new systems!
This reviewer found the present book excellent, in the sense that it offers a glimpse into how two large IT-based corporations, Google and Microsoft, compete. It recounts issues relating to the slow pace of innovation in large firms, the protection of their core reputations by allowing risk-taking through semi-independent subsidiaries, and dysfunctional decisions by subsidiary boards. Perhaps above all, the book illustrates the seemingly critical importance of having truly high-profile leaders at the helm. Both Altman and Hassabis were exceptional leaders, hard-driving and strong in technical insight. Yet while Altman emphasized rapid new product launches as well as the recruitment and development of new talent, Hassabis was perhaps more focused on "academic-style" research. In the end, though, both were highly successful. Perhaps the "answer" to "who was the best" lies in the links between the two semi-autonomous companies they were leading and their parents (Google and Microsoft). Autonomy for the subsidiaries seems to be a key instrument, enabling them to "deliver" fast innovation. But control of the subsidiary boards is also critical, so as to maintain a steady strategic focus. Concentrating on the potentially dysfunctional effects of new language models on society seems a particularly legitimate endeavor. Competitive rivalry must not lead to a weakening of this consideration!
This is a book that I can recommend. It even provides insights to readers such as this reviewer, who is certainly not a computer-systems expert! The sequel to the story so dramatically retold in this book will likely be of high interest, and this reviewer is eagerly awaiting Parmy Olson's next book. Do not wait for it, though: read this one now!