NEW YORK (CNN) — A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.
The findings were based on interviews with more than 200 people over more than a year – including top executives from leading AI companies, cybersecurity researchers, weapons of mass destruction experts, and national security officials inside the government.
The report, released this week by Gladstone AI, flatly states that the most advanced AI systems could, in a worst case, “pose an extinction-level threat to the human species.”
A US State Department official confirmed to CNN that the agency commissioned the report as it constantly assesses how AI is aligned with its goal to protect US interests at home and abroad. However, the official stressed the report does not represent the views of the US government.
The warning in the report is another reminder that although the potential of AI continues to captivate investors and the public, there are real dangers too.
“AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable,” Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.
“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence – including empirical research and analysis published in the world’s top AI conferences – suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”
White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is the “most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”
“The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.
News of the Gladstone AI report was first reported by Time.
The report identifies two central dangers. First, Gladstone AI said, the most advanced AI systems could be weaponized to inflict potentially irreversible damage. Second, the report said there are private concerns within AI labs that at some point they could “lose control” of the very systems they’re developing, with “potentially devastating consequences to global security.”
“The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said, adding there is a risk of an AI “arms race,” conflict, and “WMD-scale fatal accidents.”
Gladstone AI’s report calls for dramatic new steps aimed at confronting this threat, including launching a new AI agency, imposing “emergency” regulatory safeguards, and limiting how much computing power can be used to train AI models.
“There is a clear and urgent need for the US government to intervene,” the authors wrote in the report.
Harris, the Gladstone AI executive, said the “unprecedented level of access” his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta, and Anthropic.
“Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report. “Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon.”
Gladstone AI’s report said that competitive pressures are pushing companies to accelerate the development of AI “at the expense of safety and security,” raising the prospect that the most advanced AI systems could be “stolen” and “weaponized” against the United States.
The conclusions add to a growing list of warnings about the existential risks posed by AI – including those from some of the industry’s most powerful figures.
Nearly a year ago, Geoffrey Hinton, known as the “Godfather of AI,” quit his job at Google and blew the whistle on the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next three decades.
Hinton and dozens of other AI industry leaders, academics, and others signed a statement last June that said “mitigating the risk of extinction from AI should be a global priority.”
Business leaders are increasingly concerned about these dangers – even as they pour billions of dollars into investing in AI. Last year, 42% of CEOs surveyed at the Yale CEO Summit said AI has the potential to destroy humanity within five to ten years.