
Sam Altman thinks humanity is “close” to building superintelligence, an artificial intelligence with greater-than-human capabilities. He believes this will have a huge impact on the 2030s, including dramatically reducing the cost of AI as robots take over the construction of its infrastructure.
The CEO of OpenAI has written a blog post on his site outlining his predictions for the rest of this decade and the next. Altman acknowledges that while AI will lead to job losses, society will adapt and ultimately benefit from transformative scientific breakthroughs elsewhere.
Altman concludes his new blog post by outlining the two crucial safety challenges that must be addressed: aligning AI systems with humanity’s long-term values, and ensuring superintelligence is broadly accessible rather than concentrated in the hands of a few.
Back in January, ChatGPT creator OpenAI announced that its primary focus for the coming year would be developing superintelligence. Now, Altman says, “We have recently built systems that are smarter than people in many ways.”
This means that, in the next decade-and-a-half, individuals will be able to achieve more than ever before, and that potential will only grow as AI continues to evolve beyond even superintelligent levels. “We do not know how far beyond human-level intelligence we can go, but we are about to find out,” Altman wrote.
AI will get cheaper and even more accessible
According to Altman, access to AI will expand significantly in the coming years as the technology becomes more affordable. This affordability stems from AI’s ability to accelerate its own development by researching new computing substrates, more efficient algorithms, and other infrastructure improvements. Eventually, robots could be tasked with building that infrastructure, reducing costs compared to relying on human labour.
“If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different,” Altman said. “As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity.”
The CEO added that the average ChatGPT query uses about 0.34 watt-hours of energy, the equivalent of turning on a high-efficiency lightbulb for a couple of minutes, and roughly one-fifteenth of a teaspoon of water. Nevertheless, a 2024 report from the International Energy Agency estimated that a single ChatGPT request consumes nearly ten times as much electricity as a typical Google search. Research from The Washington Post also found that writing a 100-word email with GPT-4 uses approximately 519 millilitres of water.
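Altman's figures can be sanity-checked with simple arithmetic. The sketch below assumes a "high-efficiency lightbulb" means a 10 W LED and uses an illustrative electricity price of $0.12 per kWh; neither assumption comes from the blog post itself.

```python
# Back-of-the-envelope check of Altman's per-query energy figure.
# Assumptions (not from the source): a "high-efficiency lightbulb" draws
# 10 W (LED), and electricity costs $0.12 per kWh (a rough average rate).

QUERY_WH = 0.34        # watt-hours per ChatGPT query, per Altman
LED_WATTS = 10         # assumed LED bulb power draw
PRICE_PER_KWH = 0.12   # assumed electricity price in USD

# How long the bulb could run on one query's worth of energy, in minutes.
bulb_minutes = QUERY_WH / LED_WATTS * 60

# Marginal electricity cost of one query, in USD.
cost_per_query = QUERY_WH / 1000 * PRICE_PER_KWH

print(f"Bulb-on time per query: {bulb_minutes:.1f} min")        # ~2 minutes
print(f"Electricity cost per query: ${cost_per_query:.6f}")     # fractions of a cent
```

Under these assumptions, one query's energy would indeed run the bulb for roughly two minutes, and its marginal electricity cost works out to a few thousandths of a cent, which illustrates why Altman argues that the cost of intelligence could converge towards the cost of electricity once the surrounding infrastructure is automated.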
Some jobs will become redundant, but AI will bring societal benefits
Altman says that “whole classes of jobs” will disappear as a result of advances in AI in the 2030s. Many companies, including Duolingo (which has since clarified its messaging about AI replacing workers), Salesforce, Shopify, and Klarna, have already taken the opportunity to reduce their headcounts after integrating the technology.
However, Altman says that “the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.” These could include universal basic income, a concept he envisions supporting through his Orb system, which can biometrically verify identities and prevent fraud. A social safety net like this could alleviate the negative societal consequences of job losses.
“If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the Industrial Revolution is a good recent example),” he wrote.
‘At least some people’ will have brain-computer interfaces
The OpenAI CEO says that “true high-bandwidth brain-computer interfaces” could be one of the advancements that superintelligence allows by 2035. While many people may not choose to have such an implant, “at least some people will probably decide to ‘plug in’,” he wrote.
This brings to mind Elon Musk’s Neuralink, the company behind a controversial brain implant that connects the human brain directly to digital systems. While the aim is to improve the lives of those with neurological disorders and disabilities, there are still many ethical and safety concerns surrounding such technology.
Another Musk-associated technology Altman envisions coming to fruition within the next decade is space colonisation. SpaceX wants to start sending people to Mars as soon as 2026 or 2027.
But first, we need to solve the safety issues
Altman acknowledges the need to solve two primary safety challenges as we move towards and beyond superintelligence. The first relates to “alignment”: ensuring that AI systems reliably act in humanity’s long-term interests. He points to social media algorithms as an example of misaligned AI, as these expertly cater to short-term user engagement, yet lead users to spend time in ways they may later regret.
The second issue relates to ensuring superintelligence does not end up concentrated in the hands of a few powerful individuals, companies, or countries. Currently, the US and China are battling for AI dominance, as are tech giants like Google, Meta, Microsoft, and OpenAI itself. OpenAI recently decided not to become a fully for-profit company, and often pledges its intention to develop AI that benefits all of humanity.
It is unclear whether Altman believes these must be addressed prior to pushing forward with superintelligence. He does caveat his acknowledgement of the challenges by saying that “it’s critically important to widely distribute access to superintelligence given the economic implications,” and also thinks that society should be given the opportunity to shape shared norms and boundaries around its use.
“Giving users a lot of freedom, within broad bounds society has to decide on, seems very important,” he wrote. “The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.”