DOOMSD(A.I.) ALERT
Call for tech giants to do more to protect humanity
(OpenAI) and others have bought into the ‘move fast and break things’ approach.
— Former OpenAI worker Daniel Kokotajlo
A group of AI whistleblowers claim tech giants like Google and ChatGPT creator OpenAI are locked in a reckless race to develop technology that could endanger humanity — and demanded “a right to warn” the public in an open letter Tuesday.
Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the letter warned that “AI companies have strong financial incentives to avoid effective oversight” and little federal regulation.
The workers say potential risks include the spread of misinformation, worsening inequality and even “loss of control of autonomous AI systems potentially resulting in human extinction” — especially as OpenAI, helmed by Sam Altman, and other firms pursue so-called artificial general intelligence, with capacities on par with, or surpassing, the human mind.
‘Moving too fast’
“Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI,” former OpenAI employee Daniel Kokotajlo, one of the letter’s organizers, said in a statement. “I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.
“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo added.
Kokotajlo, who joined OpenAI in 2022 as a researcher focused on charting AI advancements before leaving in April, has placed the probability that advanced AI will destroy or severely harm humanity at 70%, according to The New York Times, which first reported on the letter.
He believes there’s a 50% chance that researchers will achieve artificial general intelligence by 2027.
The letter drew endorsements from two prominent experts known as the “Godfathers of AI” — Geoffrey Hinton, who warned last year that rogue AI’s threat was “more urgent” to humanity than climate change, and Canadian computer scientist Yoshua Bengio.
Famed British AI researcher Stuart Russell also backed the letter.
The letter asks AI giants to commit to boosting transparency and protecting whistleblowers.
Safety demands
It proposes that companies agree not to retaliate against employees who speak out about safety concerns, and that they support an anonymous system for whistleblowers to alert the public and regulators about risks.
The AI firms are also asked to allow a “culture of open criticism” so long as no trade secrets are disclosed.
As of Tuesday, the letter’s 13 signers included 11 formerly or currently employed by OpenAI, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.
When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards are in place.
“We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk,” OpenAI said in a statement.
Google and Anthropic did not return requests for comment.