Artificial intelligence has the potential to greatly improve state programs, but also poses risks, according to a report released by the governor’s office on Tuesday.
Generative AI, which can produce text, images, and other content, could be used to quickly translate government materials, detect fraud in tax claims, summarize public comments, and answer questions about state services. However, the report also raises concerns about data privacy, misinformation, equity, and bias.
The report, ordered by Gov. Gavin Newsom, provides insights into how California could implement generative AI in state programs while addressing the need to protect people without hindering innovation.
Divided Opinions on AI Safety
AI safety has become a topic of debate among tech executives. While some, like Elon Musk, warn that AI could ultimately lead to the destruction of civilization, others are more optimistic about its potential to address problems like climate change and disease.
Notably, major tech firms like Google, Facebook, and Microsoft-backed OpenAI are competing to develop and release new AI tools capable of generating content.
Recent Developments and Challenges
The report comes at a critical juncture for generative AI, as the board of OpenAI fired its CEO, Sam Altman. However, on Tuesday night, OpenAI announced an agreement for Altman to return as CEO, following pressure from investors, tech executives, and employees.
Altman's firing raised questions about internal disagreement over how to balance AI safety against generating revenue. OpenAI's unusual governance structure, in which a nonprofit board controls the company, made the CEO's removal possible.
A First Step for California
Governor Newsom sees the AI report as an important initial step in addressing the safety concerns associated with AI. He emphasizes the need for a nuanced and measured approach to leverage the benefits of AI while understanding its risks.
Furthermore, the report highlights the potential economic benefits of AI advancements for California. The state is home to many leading AI companies, and the generative AI market is projected to reach $42.6 billion in 2023.
Risks and Safeguards
The report acknowledges various risks, including the spread of false information, dangerous medical advice, and the potential creation of harmful substances or weapons. Data breaches, privacy violations, bias, and job displacement are also concerns.
As the state develops guidelines for the use of generative AI, the report recommends that state employees follow certain principles to safeguard Californians' data, such as refraining from feeding that data into generative AI tools and avoiding the use of unapproved tools on state devices.
Broader Impact of AI
Generative AI has applications beyond state government, with law enforcement agencies planning to use it for analyzing officer behavior in body camera videos. Efforts to regulate AI in California, particularly regarding bias, have not made significant progress, but new bills are expected to be introduced in January.
Globally, regulators are still grappling with how to protect people from the potential risks of AI. President Biden issued an executive order in October outlining safety and security standards for AI developers, and AI regulation was a major topic at the recent Asia-Pacific Economic Cooperation meeting in San Francisco.
While Altman praised Biden’s executive order, he argued that as AI models continue to advance and their impact expands, global oversight will be needed.