SEOUL, South Korea – World leaders and top corporate executives on Wednesday agreed to new protocols on the use of artificial intelligence, but critics say the voluntary commitments made during a major two-day summit here fall far short of the more ambitious steps needed to govern the rapidly evolving technology.

The two-day Seoul AI Summit was the second meeting in a joint initiative launched by South Korea and the U.K. last year. It was co-hosted by South Korean President Yoon Suk Yeol, who attended in person, and British Prime Minister Rishi Sunak, who joined remotely.

The leaders of the G7 economies — the U.S., Britain, Canada, France, Germany, Italy and Japan — and the leaders of Australia, Singapore and South Korea agreed on a range of “safe, innovative and inclusive” AI usage protocols. For example, they agreed to expand the number of AI safety institutes, research bodies that will align work on machine-learning standardization and testing.

Separately, 16 companies from China, South Korea, the U.S. and the United Arab Emirates committed to responsible AI usage and risk management.

But the agreements were not binding. And while China joined the summit, Russia, another leading player in using AI for weapons and disinformation, did not.

Critics said the summit did not properly address the weaponization of AI in global competition between authoritarian and democratic governments.

“One of the key struggles between democracies and authoritarian governments is who will control the latest cutting-edge AI, and how they will use it,” said Geoffrey Cain, author of “The Perfect Police State,” an examination of Beijing’s surveillance net. Mr. Cain is also policy director of the Tech Integrity Project.

“Voluntary pledges from companies are not going to solve a problem as enormous as this one,” he said.

AI potential, AI fears

Key officials said that AI, for all of its promise and remarkable technological advances, is also creating major new dangers.

“We are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons,” U.N. Secretary-General Antonio Guterres told the summit in a video address. “We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people — or worse, by algorithms beyond human understanding.”

The ultimate fear is that self-learning AI, once capable of bypassing firewalls, could self-propagate, take control of physical assets and, in a worst-case, apocalyptic scenario, challenge humanity itself.

The question for the London-Seoul initiative is whether it can establish ground rules in an era when norms are not clear, and when technological innovation is outpacing regulatory frameworks.

The summit won some kudos for uniting governments and companies from around the world.

“Technical experts will be ahead of social, legal and political actors,” said Daniel Pinkston, who studies and teaches space and cyber security at Troy University. “Technical experts lead, and the realization spills over into other realms.”

The governmental signatories noted the importance of global, inter-governmental coordination. In a statement, they said they aim to “strengthen international cooperation on AI governance.”

Leaders from Australia, Canada, the European Union, France, Germany, Italy, Japan, Singapore, South Korea, the United Kingdom, and the U.S. agreed to the declaration. Vice President Kamala Harris represented the U.S. at the event.

The declaration, posted on the South Korean presidential website, calls for “enhanced international cooperation to advance AI safety, innovation and inclusivity to harness human-centric AI to address the world’s greatest challenges, to protect and promote democratic values, the rule of law and human rights, fundamental freedoms and privacy, to bridge AI and digital divides between and within countries, thereby contributing to the advancement of human well-being, and to support practical applications of AI, including to advance the U.N. sustainable development goals.”

Sixteen leading global AI companies also pledged to publish frameworks detailing how the risks of AI models can be measured and mitigated, including potential misuse by bad actors.

Chinese AI firm Zhipu.ai, backed by such major Chinese brands as Alibaba, Tencent and Xiaomi, was one of the companies that signed on. Other co-signers included Amazon Web Services, Google DeepMind, Meta, Microsoft, IBM, OpenAI and Samsung Electronics.

Battle lines

Amid broad conflicts between the U.S. and its democratic allies, and an authoritarian bloc led by China and Russia, specialists say it’s become crystal clear that the authoritarian side will use AI to achieve its goals.

“We are finding that the authoritarian bloc will use whatever is at their disposal to further their political aims, and we are finding that in some of these emerging technologies, authoritarian regimes have figured out how to use them to undermine the democratic bloc,” said Mr. Pinkston, the Troy University scholar. “I think that has caught a lot of people off guard. It was unanticipated at how adept they would become.”

The ability of AI to monitor and aggregate massed, cross-domain data, from CCTV cameras to mobile telecommunications and online transactions, makes it the ideal overseer of surveillance networks such as the one implemented by the Chinese Communist Party.

When used as a tool of disinformation, AI can create fake news and deep-fake images, videos and voices, and can generate messaging that is disseminated en masse or aimed at key target audiences.

Grafted onto physical weaponry, AI can make arms such as missiles and drones fully autonomous, potentially ushering in an era of robotic warfare.
