Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

From Wikipedia, the free encyclopedia

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
California State Legislature
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Senate voted: May 21, 2024 (32–1)
Sponsor(s): Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Website: Bill Text

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill with the stated goal of reducing the risks of "foundation models", which the bill defines as models created with more than a specified threshold of computing operations, as well as models of "equivalent capability", meaning that smaller, cheaper models would qualify if trained against existing larger models. If passed, the bill would also establish CalCompute, a public cloud computing cluster for startups, researchers and community groups.

Background

The bill was motivated by the rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022.[citation needed] Many advocates have suggested that regulation is necessary due to possible existential risk from artificial general intelligence.[1]

Governor Newsom and President Biden issued executive orders on artificial intelligence in late 2023.[2][3] Senator Wiener says his bill draws heavily on the Biden executive order.[4]

Provisions

SB 1047 establishes a new California state agency, the California Frontier Model Division, to be funded by fees and fines charged to companies that ask permission to create, improve, or operate AI models. The agency is to review the results of safety tests and incidents, and issue guidance, standards and best practices. It also creates a public cloud computing cluster called CalCompute to enable research into safe AI models, and provide compute for academics and startups.[citation needed]

SB 1047 initially covers AI models with training compute over 10^26 integer or floating-point operations, and also models with "equivalent" capability. The same compute threshold is used in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In contrast, the European Union's AI Act set its threshold at 10^25, one order of magnitude lower.[5]

In addition to this compute threshold, the bill has a cost threshold of $100 million. The goal is to exempt startups and small companies, while covering large companies that spend over $100 million per training run.[citation needed]

Developers of models that exceed the compute and cost thresholds, or develop models of "equivalent" capability, are required to conduct safety testing for the following risks:[citation needed]

  • Creation or use of a weapon of mass destruction
  • Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
  • Autonomous crimes causing mass casualties or at least $500 million of damage
  • Other harms of comparable severity

Developers of covered models are required to implement "reasonable" safeguards to reduce risk, including the ability to shut down the model. Whistleblowing provisions protect employees who report safety problems and incidents. What is "reasonable" will be defined by the California Frontier Model Division.[citation needed]

As of July 2024, there are concerns in the open-source community that, due to the threat of legal liability, companies like Meta may choose not to make models (for example, Llama) freely available.[6][7] As reported on June 14th, under the bill a company that open-sourced a model with restrictions, but which could be fine-tuned to provide information on how to commit a mass murder, would be liable.[7] As of July 19th, according to Scott Wiener, amendments have been made in response to these concerns, removing liability for models that have been significantly fine-tuned and removing the shutdown requirement for models that have been released as open source.[6]

Reception

Supporters of the bill include Turing Award recipients Geoffrey Hinton and Yoshua Bengio.[8] The Center for AI Safety, Economic Security California[9] and Encode Justice[10] are sponsors.

Andrew Ng, Fei-Fei Li, Ion Stoica and Turing Award recipient Yann LeCun, among other scientists, have come out against the legislation.[1][11] Andrew Ng specifically argues that there are better, more targeted regulatory approaches, such as targeting deepfake pornography, watermarking generated materials, and investing in red teaming and other security measures.[12]

Industry

The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress[a], the Computer & Communications Industry Association[b] and TechNet[c].[16] Meta and Google argue that the bill would undermine innovation.[17]

Several well-known organizations in the startup and investor community oppose the bill, including Y Combinator[18][19][20], a16z[21][22][23][24], Context Fund[25][26] and Alliance for the Future[27][28].

Open source developers

The sponsors of the bill said they had consulted with open-source communities, leaders, and foundations; critics contend that no such consultation occurred.[citation needed]

Critics expressed concerns about the liability the bill would impose on open-source developers who use or improve existing freely available models. Yann LeCun, Chief AI Scientist at Meta, has suggested the bill would kill open-source AI models.[12]

Public opinion polls

The Artificial Intelligence Policy Institute, a group founded to prevent existential risk from artificial general intelligence, ran a poll in July 2024, finding that 59% of Californians support SB 1047.[29] 64% of technology workers in California think Governor Newsom should sign the bill.[30] A poll from the same institute in May 2024 found 77% of Californians think the government should mandate safety testing for powerful AI models.[31]

A David Binder Research poll commissioned by the Center for AI Safety, another group focused on existential risk, found that 77% of Californians support a proposal to require companies to test AI models for safety risks.[32][33][34]

Notes

  1. ^ whose corporate partners include Amazon, Apple, Google and Meta[13]
  2. ^ whose members include Amazon, Apple, Google and Meta[14]
  3. ^ whose members include Amazon, Anthropic, Apple, Google, Meta and OpenAI[15]

References

  1. ^ a b Goldman, Sharon. "It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill". Fortune. Retrieved 2024-07-29.
  2. ^ "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
  3. ^ "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
  4. ^ Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
  5. ^ "Artificial Intelligence – Questions and Answers". European Commission. 2023-12-12.
  6. ^ a b Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-29.
  7. ^ a b Piper, Kelsey (2024-06-14). "The AI bill that has Big Tech panicked". Vox. Retrieved 2024-07-29.
  8. ^ Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
  9. ^ DiFeliciantonio, Chase (2024-06-28). "AI companies asked for regulation. Now that it's coming, some are furious". San Francisco Chronicle.
  10. ^ Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
  11. ^ "Assembly Judiciary Committee 2024-07-02". California State Assembly.
  12. ^ a b Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-07-30.
  13. ^ "Corporate Partners". Chamber of Progress.
  14. ^ "Members". Computer & Communications Industry Association.
  15. ^ "Members". TechNet.
  16. ^ Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
  17. ^ Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
  18. ^ "Founder-Led Statement on SB-1047" (PDF). Politico.
  19. ^ "Little Tech Brings a Big Flex to Sacramento". Politico.
  20. ^ "Proposed California law seeks to protect public from AI catastrophes". The Mercury News.
  21. ^ "California's Senate Bill 1047 - What You Need to Know". a16z.
  22. ^ "Stop SB 1047". Stop SB 1047.
  23. ^ "California's AI Bill Undermines the Sector's Achievements". Financial Times.
  24. ^ "Senate Bill 1047 will crush AI innovation in California". Orange County Register.
  25. ^ "SB 1047 Analysis". Context Fund.
  26. ^ "AI Startups Push to Limit or Kill California Public Safety Bill". Bloomberg Law.
  27. ^ "Call-To-Action on SB 1047". Alliance For The Future.
  28. ^ "The AI Safety Fog of War". Politico.
  29. ^ Bordelon, Brendan. "What Kamala Harris means for tech". POLITICO Pro. (subscription required)
  30. ^ "New Poll: California Voters, Including Tech Workers, Strongly Support AI Regulation Bill SB1047". Artificial Intelligence Policy Institute.
  31. ^ "New Polls: AIPI Releases Recent Polls on AI Public Opinion in California, Iowa, Massachusetts, Michigan, New Mexico, Texas, and Virginia". Artificial Intelligence Policy Institute.
  32. ^ "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
  33. ^ Lee, Wendy (2024-06-19). "California lawmakers are trying to regulate AI before it's too late. Here's how". Los Angeles Times.
  34. ^ Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-22.