Malta Independent

AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024

- ASSOCIATED PRESS

2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.

We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
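
To make Weizenbaum’s point concrete, here is a minimal sketch in Python of the kind of pattern matching and substitution ELIZA relied on. It is an illustration of the general technique, not Weizenbaum’s actual 1966 program, which also swapped pronouns and ranked keywords:

```python
import re

# A minimal ELIZA-style sketch (illustrative only, not Weizenbaum's
# original script): each rule pairs a regular expression with a canned
# response template, and matched text is substituted back in verbatim.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when nothing matches

# The crudeness is visible immediately: the program parrots the user's
# own words back without understanding them (note the unswapped "my").
print(respond("I am worried about my future"))
# -> How long have you been worried about my future?
```

Once you see that the entire “conversation” is a short list of regular expressions and fill-in-the-blank templates, the magic does indeed crumble away. No comparably plain description yet exists for a large language model.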

I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With the singularity, the moment artificial intelligence matches and begins to exceed human intelligence – not quite here yet – it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.

Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.

Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models has continued at a dizzying pace. In contrast to the ChatGPT of a year ago, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as create new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and from lawmakers working on privacy protections such as the American Data Privacy and Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but on the contexts in which the algorithms operate: people, processes and society.
