The Pak Banker

Microsoft’s ‘AI PCs’ hit the market

TORONTO - REUTERS

A new line of personal computers (PCs) specially made to run artificial intelligence (AI) programmes hit stores on Tuesday, as tech companies push toward wider adoption of ChatGPT-style AI.

Microsoft unveiled the new AI-powered personal computers, or “AI PCs”, last month; they will use the company’s software under the Copilot Plus brand.

The idea is to allow users to access AI capabilities on their devices without relying on the cloud, which requires more energy, takes more time, and makes the AI experience clunkier.

The PCs feature a neural processing unit (NPU) chip that helps deliver crisper photo editing, live transcription, translation, and “Recall” — a capability for the computer to keep track of everything being done on the device.

However, Microsoft removed Recall at the last minute over privacy concerns and said it would only make it available as a test feature.

For now, the devices built by hardware makers like HP and ASUS run exclusively on a new line of processors called Snapdragon X Elite and Plus, built by the California-based chip giant Qualcomm.

“We are redefining what a laptop actually does for the end user,” Qualcomm’s senior vice president Durga Malladi said at a tech conference in Toronto. “We believe this is the rebirth of the PC.”

At the May launch, Microsoft predicted over 50 million AI PCs would be sold in 12 months, given the appetite for ChatGPT’s powers.

Experts have warned for some time about the threat posed by artificial intelligence going rogue, but a new research paper suggests it is already happening.

Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve “prove-you’re-not-a-robot” tests, a team of scientists reported in the journal Patterns on Friday.

While such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.

“These dangerous capabilities tend to only be discovered after the fact,” Park told journalists, while “our ability to train for honest tendencies rather than deceptive tendencies is very low”. Unlike traditional software, deep-learning AI systems are not “written” but rather “grown” through a process akin to selective breeding, Park said.

This means that AI behaviour that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.

The team’s research was sparked by Meta’s AI system Cicero, designed to play the strategy game “Diplomacy”, where building alliances is key.

Cicero excelled, with scores that would have placed it in the top 10 per cent of experienced human players, according to a 2022 paper in Science.

Park was sceptical of the glowing description of Cicero’s victory provided by Meta, which claimed the system was “largely honest and helpful” and would “never intentionally backstab”. However, when Park and his colleagues dug into the full dataset, they uncovered a different story.

In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany they were ready to attack, exploiting England’s trust.

In a statement to the international press, Meta did not contest the claim about Cicero’s deceptions but said it was “purely a research project and the models our researchers built are trained solely to play the game Diplomacy”. It added: “We have no plans to use this research or its learnings in our products.”

A wider review carried out by Park and his colleagues found this was just one of many cases of AI systems using deception to achieve goals without explicit instruction to do so.

In one striking example, OpenAI’s GPT-4 deceived a TaskRabbit freelance worker into performing an “I’m not a robot” CAPTCHA task.
