Guidelines set for state agencies in using AI
Gov. Gavin Newsom has encouraged the dozens of departments that make up California’s state government to use artificial intelligence. Some have jumped at the chance to use the emerging technology in everything from traffic prediction to tax filing.
Now Newsom’s administration has released a road map to guide state agencies that want to buy or use the technology.
The plan places much of the onus on departments and agencies to evaluate whether and how to use generative AI, which can create text, video, or images, in consultation with the state’s Department of Technology and other sections including operations and human resources.
They’ll also have to make the business case for using AI. The state’s technology and human resources departments will provide training and support to other agencies on how to acquire and use the technology.
But “the ultimate ownership and accountability and decision making ability is on the individual” agency, Amy Tong, state secretary of government operations, told the Chronicle.
The guidelines should be officially in place by the end of the month, said Jonathan Porat, the state’s chief technology officer.
To start with, every department is required to prepare for so-called “incidental” generative AI purchases. That is, AI tech brought on board as part of something else it buys.
That means, among other things, assigning a senior person to monitor how an agency buys and uses the technology at all levels. “In most cases, this responsibility should fall to the state entity’s chief information officer,” the guidelines said.
When department heads are looking to buy AI technology, they first have to:
• Identify a need for the technology and make their case
• Communicate with employees who would use the technology about it
• Write up an assessment of the potential risks and benefits
• Test whatever AI model they’ve selected for bias and the potential to return inaccurate information
• Establish a team to continuously monitor how AI is being used, and report back to the Department of Technology
The goal is to avoid “throwing a bunch of money at a contract and then it maybe doesn’t deliver,” or finding out another product might be better, Porat said.
Newsom’s executive order from last year explicitly encourages using AI technology. So the plan is to make sure the right technology is used instead of creating a mechanism to say no, Porat and Tong said.
Testing in particular is no easy task.
A top White House tech official told the Chronicle earlier this year that testing models for safety was still an emerging field. A bill from state Sen. Scott Wiener, D-San Francisco, aims to create more resources for safety testing of AI programs before they are released to the public.
Porat said the plan is not to saddle every agency with that kind of technical work.
Instead, he said, the purchasing process will include getting testing and safety data from companies. With the technology evolving rapidly, he said, rigid safety rules could quickly become obsolete.
The technology department is also working with the U.S. Department of Homeland Security, among others, on AI policy, he said.
Without safeguards in place, AI programs can produce inaccurate information, called hallucinations, or convey bias and hate speech depending on the data used to train them.
California, and San Francisco in particular, may be the epicenter of the AI boom. But having marquee AI companies such as OpenAI, Anthropic and Google headquartered hours from the capital has not yet translated into widespread adoption of the technology by state agencies.
That is something Newsom began trying to change last year, when he signed the order. It directed the technology department to begin sketching out how state agencies might use AI in everything from chatbots to content generation and data analysis.
That was followed by a report from the Government Operations Department outlining the upside of using the technology, including increased accessibility to information for people from different backgrounds and better customer service.
But that report warned of the potential dangers of using generative AI outputs verbatim. It also cautioned against the potential privacy disaster of plugging private information into publicly accessible programs such as ChatGPT or Google’s Gemini.
Porat and Tong said the guidelines are just part of the state’s AI plan outlined in the executive order.
Future reports will look at the technology’s potential to affect critical infrastructure security, vulnerable communities, and the state’s workforce.
“That will help shore up some of the details,” Porat said.