Macau Daily Times

Things to know about an AI safety summit in Seoul

- HYUNG-JIN KIM & KELVIN CHAN, SEOUL

SOUTH Korea is set to host a mini-summit this week on risks and regulation of artificial intelligence, following up on an inaugural AI safety meeting in Britain last year that drew a diverse crowd of tech luminaries, researchers and officials.

The gathering in Seoul aims to build on work started at the U.K. meeting on reining in threats posed by cutting-edge artificial intelligence systems.

Here is what you need to know about the AI Seoul Summit and AI safety issues.

INTERNATIONAL EFFORTS MADE

The Seoul summit is one of many global efforts to create guardrails for the rapidly advancing technology, which promises to transform many aspects of society but has also raised concerns about new risks, ranging from everyday harms such as algorithmic bias that skews search results to potential existential threats to humanity.

At November’s U.K. summit, held at a former secret wartime codebreaking base in Bletchley, north of London, researchers, government leaders, tech executives and members of civil society groups, many with opposing views on AI, huddled in closed-door talks. Tesla CEO Elon Musk and OpenAI CEO Sam Altman mingled with politicians like British Prime Minister Rishi Sunak.

Delegates from more than two dozen countries including the U.S. and China signed the Bletchley Declaration, agreeing to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.

In March, the U.N. General Assembly approved its first resolution on artificial intelligence, lending support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”

Earlier this month, the U.S. and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of the fast-evolving technology and set shared standards to manage it. There, U.S. officials raised concerns about China’s “misuse of AI” while Chinese representatives rebuked the U.S. over “restrictions and pressure” on artificial intelligence, according to their governments.

TOPICS OF DISCUSSION AT THE SEOUL SUMMIT

The May 21-22 meeting is co-hosted by the South Korean and U.K. governments.

On day one, today, South Korean President Yoon Suk Yeol and Sunak will meet leaders virtually. A few global industry leaders have been invited to provide updates on how they’ve been fulfilling the commitments made at the Bletchley summit to ensure the safety of their AI models.

On day two, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and Britain’s Technology Secretary Michelle Donelan. Participants will share best practices and concrete action plans. They also will share ideas on how to protect society from potentially negative impacts of AI on areas such as energy use, workers and the proliferation of mis- and disinformation, according to the organizers.

The meeting has been dubbed a mini virtual summit, serving as an interim gathering until the full-fledged in-person edition that France has pledged to hold.

The digital ministers’ meeting is to include representatives from countries like the United States, China, Germany, France and Spain and companies including ChatGPT-maker OpenAI, Google, Microsoft and Anthropic.

PROGRESS OF AI SAFETY EFFORTS

The accord reached at the U.K. meeting was light on details and didn’t propose a way to regulate the development of AI.

“The United States and China came to the last summit. But when we look at some principles announced after the meeting, they were similar to what had already been announced after some U.N. and OECD meetings,” said Lee Seong-yeob, a professor at the Graduate School of Management of Technology at Seoul’s Korea University. “There was nothing new.”

It’s important to hold a global summit on AI safety issues, he said, but it will be “considerably difficult” for all participants to reach agreements since each country has different interests and different levels of domestic AI technologies and industries.

The gathering is being held as Meta, OpenAI and Google roll out the latest versions of their AI models.

The original AI Safety Summit was conceived as a venue for hashing out solutions to so-called existential risks posed by the most powerful “foundation models” that underpin general purpose AI systems like ChatGPT.

Pioneering computer scientist Yoshua Bengio, dubbed one of the “godfathers of AI,” was tapped at the U.K. meeting to lead an expert panel tasked with drafting a report on the state of AI safety. An interim version of the report, released on Friday to inform discussions in Seoul, identified a range of risks posed by general purpose AI, including its malicious use to increase the “scale and sophistication” of frauds and scams, supercharge the spread of disinformation, or create new bioweapons.

Malfunctioning AI systems could spread bias in areas like healthcare, job recruitment and financial lending, while the technology’s potential to automate a wide range of tasks also poses systemic risks to the labor market, the report said.

South Korea hopes to use the Seoul summit to take the initiative in formulating global governance and norms for AI. But some critics say the country lacks AI infrastructure advanced enough to play a leadership role in such governance issues.
