OpenAI has warned that superintelligent systems could pose catastrophic risks to humanity, even as their potential remains enormous, marking one of the company's starkest warnings yet.
This comes as Big Tech companies spend billions to develop ever more powerful artificial intelligence systems.
The Microsoft-backed company said in a blogpost that while the future could bring new and potentially better ways to live fulfilling lives, it would also come with downsides.
While most people still associate AI with chatbots, some systems are already capable of outperforming the smartest humans at some of the most challenging intellectual tasks.
'It is true that work will be different, the economic transition may be very difficult in some ways, and it is even possible that the fundamental socioeconomic contract will have to change,' the company said.
Superintelligence typically refers to machines that are more intelligent than the smartest humans.
OpenAI stressed the importance of empirically studying safety and alignment to inform global decisions, such as whether AI development should be slowed to allow careful study of systems capable of recursive self-improvement.
'Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work,' the blogpost added.
At the same time, AI systems could accelerate progress in healthcare, materials science, drug development, climate modelling, and expand access to personalised education worldwide.
Demonstrating tangible benefits can help create a shared vision of a world where AI improves life, not just efficiency.
OpenAI predicts that by 2026, AI may be capable of making small discoveries, and by 2028 and beyond, systems could achieve more significant breakthroughs, driven in part by declining computing costs.
The cost of a given level of intelligence has been falling sharply, by roughly 40 times per year.
Tech giants such as Meta, Microsoft, and Amazon, as well as billionaires like Elon Musk, are racing to develop superintelligent systems.
Meta has been at the forefront, spending billions, including on eye-watering salaries to attract AI scientists.
Earlier this year, it appointed Alexandr Wang, founder of Scale AI, to lead its Superintelligence Labs.
Microsoft has also launched a superintelligence team, headed by Mustafa Suleyman, chief executive officer of the company's AI group, which oversees Bing and Copilot.
To prevent the race for superintelligence from going off the rails, OpenAI recommends shared standards and insights from frontier labs, public oversight and accountability proportional to capabilities, building an AI-resilient ecosystem, and reporting and measurement by labs and governments on AI's impact.
'We think frontier labs should agree on shared safety principles, share safety research, learnings about new risks, mechanisms to reduce race dynamics, and more.
'Ideas like agreeing on standards for AI control evaluations could be very helpful,' OpenAI said.
Cooperation with executive branches and relevant agencies across countries, especially in areas such as mitigating AI-enabled bioterrorism and understanding the implications of self-improving AI, will also be critical.