
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Astudent (talk | contribs) at 04:37, 29 July 2024 (Added Template:Unbalanced to attract editors with different viewpoints). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
California State Legislature
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Senate voted: May 21, 2024 (32–1)
Sponsor(s): Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Website: Bill Text

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill whose stated goal is to reduce the risks posed by "foundation models", which the bill defines as models trained with more than a specified threshold of compute operations, as well as models of "equivalent capability", meaning that smaller, cheaper models could qualify if they are trained against existing larger models. If passed, the bill would also establish CalCompute, a public cloud computing cluster for startups, researchers and community groups.

Background

The bill was motivated by the rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022.

In May 2023, AI pioneer Geoffrey Hinton resigned from Google, warning that humankind could be overtaken by AI as soon as the next 5 to 20 years.[1][2] Later that same month, the Center for AI Safety released a statement signed by Hinton and other AI researchers and leaders: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Governor Newsom and President Biden issued executive orders on artificial intelligence in late 2023.[3][4] Senator Wiener says his bill draws heavily on the Biden executive order.[5]

Provisions

SB 1047 establishes a new California state agency, the California Frontier Model Division, to be funded by fees and fines charged to companies that ask permission to create, improve, or operate AI models. The agency is to review the results of safety tests and incidents, and issue guidance, standards and best practices. It also creates a public cloud computing cluster called CalCompute to enable research into safe AI models, and provide compute for academics and startups.

SB 1047 initially covers AI models with training compute over 10²⁶ integer or floating-point operations, as well as models of "equivalent" capability. The same compute threshold is used in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In contrast, the European Union's AI Act sets its threshold at 10²⁵, one order of magnitude lower.[6]

In addition to this compute threshold, the bill has a cost threshold of $100 million. The goal is to exempt startups and small companies, while covering large companies that spend over $100 million per training run.
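As an illustration of the compute threshold's scale (this sketch is not part of the bill's text), the widely used heuristic that transformer training costs roughly 6 × N × D floating-point operations, where N is the parameter count and D the number of training tokens, can show which model scales would cross the 10²⁶-operation line. The heuristic and the example model sizes below are illustrative assumptions, not figures from SB 1047.

```python
# Rough illustration of SB 1047's compute threshold using the common
# 6 * N * D approximation for transformer training FLOPs
# (N = parameters, D = training tokens). The heuristic and the example
# model sizes are illustrative assumptions, not part of the bill.

THRESHOLD_OPS = 1e26  # SB 1047's initial covered-model compute threshold


def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6 * N * D rule of thumb."""
    return 6.0 * params * tokens


def exceeds_threshold(params: float, tokens: float) -> bool:
    """True if estimated training compute exceeds 10^26 operations."""
    return training_flops(params, tokens) > THRESHOLD_OPS


# A 70-billion-parameter model trained on 2 trillion tokens:
# 6 * 70e9 * 2e12 = 8.4e23 operations, well below the threshold.
print(exceeds_threshold(70e9, 2e12))   # False

# A hypothetical 1-trillion-parameter model on 20 trillion tokens:
# 6 * 1e12 * 20e12 = 1.2e26 operations, above the threshold.
print(exceeds_threshold(1e12, 20e12))  # True
```

Under this heuristic, models at the scale of 2024's largest public releases fall below the threshold, which is consistent with the bill's aim of covering only frontier-scale training runs.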

Developers of models that exceed the compute and cost thresholds, or develop models of "equivalent" capability, are required to conduct safety testing for the following risks:

  • Creation or use of a weapon of mass destruction
  • Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
  • Autonomous crimes causing mass casualties or at least $500 million of damage
  • Other harms of comparable severity

Developers of covered models are required to implement "reasonable" safeguards to reduce risk, including the ability to shut down the model. Whistleblower provisions protect employees who report safety problems and incidents. What counts as "reasonable" would be defined by the California Frontier Model Division. Critics argue that these rules would make it impossible to release a covered model as open source or otherwise publicly available.

Well-known scientist supporters

Supporters of the bill include Turing Award recipients Geoffrey Hinton and Yoshua Bengio.[7] The Center for AI Safety, Economic Security California[8] and Encode Justice[9] are sponsors.

Well-known scientist opponents

Prominent scientists including Andrew Ng, Yann LeCun (a Turing Award recipient), Ion Stoica and Jeremy Howard have expressed concern over SB 1047 in public comments and on the record in California legislative sessions (https://www.deeplearning.ai/the-batch/issue-257/, https://x.com/ylecun/status/1800222175099765029, https://x.com/AnjneyMidha/status/1811207378949607638, https://www.answer.ai/posts/2024-04-29-sb1047.html).

Trade and Industry Opposition

The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress[a], the Computer & Communications Industry Association[b] and TechNet[c].[13] Meta and Google argue that the bill would undermine innovation.[14]

Open Source Developer Opposition

The bill's sponsors said they consulted with open-source communities, leaders and foundations; opponents in the open-source community dispute that any such consultation occurred.

Open-source developers have expressed concern about the liability the bill would impose on them if they use or improve existing open-source or freely available models covered by SB 1047, or any open-source or freely available models trained outside California.

Startup Founder Opposition

Several well-known startup founder organizations oppose the bill.

Polls of Public Opinion

A David Binder Research poll commissioned by the Center for AI Safety Action Fund found that, as of May 2024, 77% of Californians supported a proposal to require companies to test AI models for safety risks before releasing them.[15][16] That poll did not refer to SB 1047 or reference any of its specific thresholds or provisions.

A poll in May 2024 by the Artificial Intelligence Policy Institute found 77% of Californians think the government should mandate safety testing for powerful AI models.[17] The same institute ran a poll in July 2024, finding that 59% of Californians support SB 1047, with 20% opposed, and 22% not sure. 64% of technology workers in California think Governor Newsom should sign the bill, with 15% supporting a veto, and 21% not sure.[18]

The Center for AI Safety and the Artificial Intelligence Policy Institute were both founded to reduce existential risk from artificial general intelligence.

Notes

  1. ^ whose corporate partners include Amazon, Apple, Google and Meta[10]
  2. ^ whose members include Amazon, Apple, Google and Meta[11]
  3. ^ whose members include Amazon, Anthropic, Apple, Google, Meta and OpenAI[12]

References

  1. ^ Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times.
  2. ^ Lazarus, Ben (2023-05-06). "The godfather of AI: why I left Google". The Spectator.
  3. ^ "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
  4. ^ "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
  5. ^ Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
  6. ^ "Artificial Intelligence – Questions and Answers". European Commission. 2023-12-12.
  7. ^ Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
  8. ^ DiFeliciantonio, Chase (2024-06-28). "AI companies asked for regulation. Now that it's coming, some are furious". San Francisco Chronicle.
  9. ^ Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
  10. ^ "Corporate Partners". Chamber of Progress.
  11. ^ "Members". Computer & Communications Industry Association.
  12. ^ "Members". TechNet.
  13. ^ Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
  14. ^ Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
  15. ^ "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
  16. ^ Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-22.
  17. ^ "AIPI Survey". Artificial Intelligence Policy Institute.
  18. ^ "New Poll: California Voters, Including Tech Workers, Strongly Support AI Regulation Bill SB1047". Artificial Intelligence Policy Institute.