
Sam Altman’s Controversial Take on AI and Human Energy Consumption
Last Friday, at a prominent AI summit in India, Sam Altman, CEO of OpenAI, addressed a criticism he deemed “unfair.” When questioned by The Indian Express about the substantial natural resources required to train and operate generative AI models, Altman swiftly countered. While acknowledging the significant power demands of chatbots, he posed a thought-provoking question: have we considered the resources consumed throughout human evolution?
“It also takes a lot of energy to train a human,” Altman stated to a captivated audience. “It requires roughly 20 years of life and all the sustenance consumed during that time before achieving intelligence. Furthermore, it involved the widespread evolution of the 117 billion people who have lived, learning to avoid predators and unraveling the mysteries of science – all contributing to your existence.”
He continued, arguing that the fair comparison is the energy ChatGPT needs to answer a question, once trained, versus the energy a human needs to do the same. On that basis, Altman believes AI has likely already surpassed human energy efficiency.
Deconstructing Altman’s Argument
Altman’s comments are open to scrutiny. For basic tasks, the human brain’s energy expenditure is demonstrably lower than that of even the most efficient AI models – and that is before counting the energy consumed by the devices people use to interact with AI. While humans do require nourishment to develop intelligence, Altman’s redirection subtly shifts the focus away from AI’s contribution to climate change – the core concern.
Atmospheric carbon dioxide levels are at a multi-million-year high, driven not by the evolution of humanity, but by contemporary society and combustion turbines, like those OpenAI is installing at its Stargate data centers. Other data centers are constructing private, gas-fired power plants that could generate enough electricity – and greenhouse gas emissions – to rival dozens of major American cities, or are extending the lifespan of coal plants. (OpenAI did not respond to a request for comment regarding Altman’s remarks.)
The Troubling Comparison: Humans and Machines
However, the most significant aspect of Altman’s statement is the very act of comparing chatbots to humans. This suggests a view of people and machines as equals. This isn’t a spontaneous thought; it’s a calculated position prevalent within the AI industry. Altman echoed a similar sentiment to Forbes India at the same summit, and Dario Amodei, CEO of Anthropic, Altman’s primary competitor, made a comparable analogy, linking AI training to human evolution and learning just a week prior.
This mindset permeates product development. Anthropic is investigating whether its chatbot, Claude, possesses consciousness or experiences “distress,” even allowing it to terminate conversations deemed “persistently harmful or abusive” due to “risks to model welfare” – explicitly anthropomorphizing a program devoid of biological needs or volition.
Marketing or a Genuine Belief?
AI firms are either convinced their products are genuinely comparable to humans, or they believe this comparison is effective marketing. Both scenarios are alarming. A sincere belief in building a higher power, potentially even a deity – Altman himself has suggested superintelligence is only a few years away – could easily justify treating humans and the planet as expendable. He also acknowledged that energy consumption is a real problem because “the world is now using so much AI,” and that societies must “move towards nuclear, or wind and solar, very quickly.”
If the comparison is purely a PR tactic, it’s a deeply misanthropic one, aimed at attracting investors. The narrative of AI labs creating digital life has always been convenient, and OpenAI is reportedly seeking funding that would value the company at over $800 billion – nearly as much as Walmart.
While tech companies may genuinely aspire to develop AI for the benefit of humanity, as OpenAI’s founding mission states, the immense capital required raises questions. Equating the development of a child – or the evolution of Homo sapiens – to the creation of algorithmic products reveals a disconnect from the essence of being human. To “train a human” is to experience struggle, accept failure, and embrace the pursuit of wonder and beauty. Generative AI aims to eliminate these processes, striving for instant efficiency and effortless results. These tools may assist us, but equating them to organic life is a disheartening prospect.
Source: TheAtlantic.com