Superintelligent AI Systems Could Lead to Human Extinction, Warns OpenAI Co-Founder
In a recent blog post, OpenAI Co-Founder Ilya Sutskever issued a stark warning about the potential dangers of superintelligent artificial intelligence (AI) systems. According to Sutskever, these advanced AI systems will be so powerful that humans will struggle to effectively monitor them, which could ultimately result in the “disempowerment of humanity or even human extinction.”
Sutskever, along with head of alignment Jan Leike, emphasized their focus on addressing the challenges posed by “superintelligence,” which surpasses the capabilities of artificial general intelligence (AGI).
The duo believes that superintelligence could emerge within this decade, but the speed of technological development remains uncertain. They acknowledged the lack of a current solution for steering or controlling potentially superintelligent AI and preventing it from going rogue.
“Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”
OpenAI’s goal is to develop a roughly human-level automated alignment researcher and leverage significant computing power to scale their efforts in aligning superintelligence. They aim to create a training method that can be scaled upwards, validate the resulting model, and test the entire pipeline by deliberately training misaligned models and confirming the detection of the worst kinds of misalignments through adversarial testing.
The company has committed 20% of its computing power over the next four years to address the problem of superintelligence alignment.
" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."