The UN Security Council held its first-ever session on Tuesday on the threat posed by artificial intelligence to international peace and stability, and Secretary-General Antonio Guterres called for a global watchdog to oversee a new technology that has raised at least as many fears as hopes.
Mr. Guterres warned that AI could pave the way for criminals, terrorists and other actors bent on causing “death and destruction, widespread trauma, and profound psychological damage on an unimaginable scale.”
Last year’s launch of ChatGPT — which can generate text from prompts, mimic voice, and create images, illustrations, and videos — raised concerns about misinformation and manipulation.
On Tuesday, diplomats and leading experts on artificial intelligence presented to the Security Council the risks and threats — along with the scientific and social benefits — of the emerging technology. They said much remains unknown about it even as its development accelerates.
“It’s as if we’re building engines without understanding the science of combustion,” said Jack Clark, co-founder of Anthropic, an AI safety and research company. He said that private companies should not be the only developers and regulators of AI.
Mr. Guterres said a UN watchdog should act as the governing body to regulate, monitor and enforce AI regulations in the same way other agencies oversee aviation, climate and nuclear energy.
The proposed agency would be composed of experts in the field who would share their expertise with governments and administrative agencies that may lack the technical knowledge to address AI threats.
But the prospect of a legally binding resolution on its governance remains remote. The majority of diplomats, however, supported the idea of a global governance mechanism and a set of international rules.
“No country will be untouched by artificial intelligence, so we must involve and engage the broadest coalition of international actors from all sectors,” said British Foreign Secretary James Cleverly, who chaired the meeting as Britain holds the rotating presidency of the council this month.
Diverging from the majority opinion on the council, Russia expressed skepticism that enough is known about the dangers of artificial intelligence to identify it as a threat to global stability. China’s ambassador to the United Nations, Zhang Jun, also opposed creating a set of global laws and said international regulators should be flexible enough to allow countries to develop their own rules.
However, the Chinese ambassador said that his country opposes the use of artificial intelligence “as a means to create military dominance or undermine the sovereignty of a country.”
The military use of autonomous weapons on the battlefield, or in another country for assassinations, also came up, including the satellite-controlled artificial intelligence robot that Israel sent into Iran to kill a prominent nuclear scientist, Mohsen Fakhrizadeh.
Mr. Guterres said the United Nations must reach a legally binding agreement by 2026 banning the use of artificial intelligence in automated weapons of war.
Professor Rebecca Willett, director of artificial intelligence at the University of Chicago’s Data Science Institute, said in an interview that when regulating technology, it’s important not to lose sight of the humans behind it.
The systems are not fully autonomous, she said, and the people who design them should be held accountable.
“This is one of the reasons why the United Nations is looking into this,” Professor Willett said. “International ramifications are really needed so that a company based in one country cannot destroy another country without violating international agreements. Real, enforceable regulation can make things better and safer.”