A former OpenAI safety researcher says he is "pretty terrified" by the pace of artificial intelligence development, warning that the industry is taking a "very risky gamble" on the technology.
Steven Adler raised concerns about companies racing to develop artificial general intelligence (AGI), a theoretical term for systems that match or exceed humans at any intellectual task.
Adler, who left OpenAI in November, said in a series of posts on X that he had had a "wild ride" at the US company and would miss "many parts of it".
However, he said the technology was developing so quickly that it raised serious doubts about the future of humanity.
"I'm pretty terrified by the pace of AI development these days," he said. "When I think about where I'll raise a future family, or how much to save for retirement, I can't help but wonder: will humanity even make it to that point?"
Some experts, such as the Nobel prize winner Geoffrey Hinton, fear that powerful AI systems could evade human control, with potentially catastrophic consequences. Others, such as Meta's chief AI scientist, Yann LeCun, have played down the existential threat, saying AI "could actually save humanity from extinction".
According to Adler's LinkedIn profile, he led safety-related research for "first-time product launches" and "more speculative long-term AI systems" over a four-year career at OpenAI.
Referring to the development of AGI, OpenAI's core goal, Adler added: "An AGI race is a very risky gamble, with huge downside." Adler said no research lab had a solution to AI alignment, the process of ensuring that systems adhere to a set of human values, and that the industry might be moving too fast to find one.
"The faster we race, the less likely that anyone finds one in time."
Adler's X posts came as China's DeepSeek, which also seeks to develop AGI, rattled the US tech industry by unveiling a model that rivalled OpenAI's technology despite apparently being developed with far fewer resources.
Warning that the industry seemed "stuck in a really bad equilibrium", Adler said "real safety regulations" were needed.
"Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously."
Adler and OpenAI have been contacted for comment.