How long will AI’s ‘slop’ era last?

Where not long ago we used to find the best results for Google searches, we can now find instead plagiarised, inaccurate summaries of answers to our queries — including, reportedly, that only 17 American presidents were white and that Barack Obama is a Muslim

- DAVID WALLACE-WELLS

Remember the season of A.I. doom? It wasn’t that long ago, in the spring of last year, that the new chatbots were trailed by various “godfathers” of the technology, warning of existential risk; by researchers suggesting a 10 percent chance of human extinction at the hands of robots; by executives speculating about future investment rounds tabulated in the trillions.

Now the reckoning is happening on very different terms. In a note from Barclays, one analyst warned that today’s A.I. investments might be three times as large as expected returns, while another analyst, in several assessments published by Sequoia Capital, calculated that investments in A.I. were running short of projected profits by a margin of at least several hundred billion dollars annually. (He called this “A.I.’s $600 billion question” and warned of “investment incineration.”) In a similarly bearish Goldman Sachs report, the firm’s head of global equity research estimated that the cost of A.I. infrastructure build-out over the next several years would reach $1 trillion. “Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed,” he noted. “The crucial question is: What $1 trillion problem will A.I. solve?”

A decade ago, venture capital provided Americans the “millennial lifestyle subsidy”: investors keeping the price of Uber and DoorDash and dozens of other services artificially low for years. Today the same millennial might read about that trillion-dollar A.I. expenditure, more than the United States spends annually on its military, and think: What exactly is that money going toward? What is A.I. even for?

One increasingly intuitive answer is “garbage.” The neuroscientist Erik Hoel has called it “A.I. pollution,” and the physicist Anthony Aguirre “something like noise” and “A.I.-generated dross.” More recently, it has inspired a more memorable neologistic term of revulsion, “A.I. slop”: often uncanny, frequently misleading material, now flooding web browsers and social-media platforms like spam in old inboxes. Years deep into national hysteria over the threat of internet misinformation pushed on us by bad actors, we’ve sleepwalked into a new internet in which meaningless, nonfactual slop is casually mass-produced and force-fed to us by A.I.

When Thomas Crooks tried to assassinate Donald Trump, for instance, X’s A.I. sputtered out a whole string of cartoonishly false trending topics, including that it was Kamala Harris who had been shot. Where not long ago we used to find the very best results for Google searches, we can now find instead potentially plagiarized and often inaccurate paragraph summaries of answers to our queries — including, reportedly, that only 17 American presidents were white, that Barack Obama is a Muslim and that Andrew Johnson, who became president in 1865 and died in 1875, earned 13 college degrees between 1947 and 2012. We can also read that geologists advise eating at least one rock a day, that Elmer’s glue should be added to pizza sauce for thickening and that it’s completely chill to run with scissors.

Sometimes, of course, you can get reliable information too; maybe even most of the time. But you can also get bad advice about A.D.H.D., about chemotherapy, about Ozempic — some potentially delicate subjects. And while the internet was never perfectly trustworthy, one epoch-defining breakthrough of Google was that it got us pretty close. Now the company’s chief executive acknowledges that hallucinations are “inherent” to the technology it has celebrated as a kind of successor to ranked-order search results, which are now often found far below not just the A.I. summary but a whole stack of “sponsored” results as well.

But not all A.I.s are large language models like ChatGPT, Gemini or Claude, each of which was trained on gobsmackingly large quantities of text to better simulate interaction with humans and bring them closer to approximations of humanlike thinking, at least in theory. Peer away from those chatbots and you can see a very different story, with different robot protagonists: machine-learning tools trained much more narrowly and focused less on producing a conversational, natural-language interface than on processing data dumps much more efficiently than human minds ever could. These products are less eerie, which means they have generated little existential angst. They are also — for now, at least — much more reliable and productive.

This month, KoBold Metals announced the largest discovery of new copper deposits in a decade — a green-energy gold mine, so to speak, delivered with the help of its own proprietary A.I., which integrated information about subatomic particles detected underground with century-old mining reports and radar imagery to make predictions about where minerals critical for the green transition might be found. Machine learning may help make our electricity grid as much as 40 percent more efficient at delivering power than it is today, when many of its routing decisions are made by individual humans on the basis of experience and intuition. (At some crucial points, A.I. has cut decision time to one minute from 10.) It has already helped drive down the cost and drive up the performance of next-gen batteries and solar photovoltaic cells, whose performance can be improved by as much as 25 percent even after the panels have been manufactured and installed on your roof. Our models of ice-sheet melt and rainforest degradation are much sharper now, too.

When in 2021 DeepMind revealed that it had effectively solved the protein-folding problem, making the three-dimensional structure of biological building blocks easily predictable for researchers for the first time, the breakthrough made global news, even if the headlines flew over the heads of most readers, who might not have known how significant a roadblock that had been in biomedical research. A few years later, A.I. is designing new proteins, rapidly accelerating drug discovery and speeding up clinical trials testing new medicines and therapies.

David Wallace-Wells is an Opinion writer for The New York Times.
