AI boom

From Wikipedia, the free encyclopedia

From top, left to right: Théâtre D'opéra Spatial; DALL-E image; skyscrapers in Stable Diffusion XL 1.0; Mechanical Dove created in Midjourney; Stable Diffusion image; AlphaFold 2 performance, experiments, and architecture; and Time Magazine cover featuring a ChatGPT conversation.

The AI boom,[1][2] or AI spring,[3][4] is the ongoing period of rapid progress in the field of artificial intelligence. Prominent examples include protein folding prediction and generative AI, led by laboratories including Google DeepMind and OpenAI.

The AI boom is expected to have a profound cultural, philosophical,[5] religious,[6] economic,[7] and social impact,[8] as questions such as AI alignment,[9] qualia,[5] and the development of artificial general intelligence[9] have become prominent topics of popular discussion.[10]

History

In 2012, a University of Toronto research team used artificial neural networks and deep learning techniques to lower the error rate in the ImageNet challenge for object recognition in computer vision below 25% for the first time. The event catalyzed the AI boom later that decade, when alumni of the ImageNet challenge became leaders in the tech industry.[11][12] The generative AI race began in earnest in 2016 or 2017, following the founding of OpenAI and earlier advances in graphics processing units, the amount and quality of training data, generative adversarial networks, diffusion models, and transformer architectures.[13][14] In 2018, the Artificial Intelligence Index, an initiative from Stanford University, reported a global explosion of commercial and research efforts in AI. Europe published the largest number of papers in the field that year, followed by China and North America.[15] Technologies such as AlphaFold led to more accurate predictions of protein folding and better drug development.[16] Economic researchers and lawmakers began to discuss the impact of AI more frequently.[17][18] By 2022, large language models saw increased usage in chatbot applications; text-to-image models could generate images that appeared to be human-made;[19] and speech synthesis software was able to replicate human speech efficiently.[20]

According to metrics from 2017 to 2021, the United States outranks the rest of the world in venture capital funding, the number of startups, and patents granted in AI.[21][22] Scientists who have immigrated to the U.S. play an outsize role in the country's development of AI technology.[23][24] Many of them were educated in China, prompting debates about national security concerns amid worsening relations between the two countries.[25] Lauren Kahn, an analyst at the Center for Security and Emerging Technology, outlined ways that the U.S. could maintain its position amid progress made by China.[26] In 2023, an analyst for the Center for Strategic and International Studies advocated that the U.S. use its dominance in AI technology to drive its foreign policy instead of relying on trade agreements.[21] Many experts have framed AI development as a struggle for technological primacy between the United States and China, arguing that the winner of that competition will gain both economic and geopolitical advantage.[1] As with other new technologies, the European Union has taken the lead on regulation, drafting rules such as the AI Act to shape the field and limit its harms.

Advances

Scientific

There have been proposals to use AI to advance radical forms of human life extension.[27]

AlphaFold 2's score of more than 90 in CASP's global distance test (GDT) is considered a significant achievement in computational biology[28] and great progress towards a decades-old grand challenge of biology.[29] Nobel Prize winner and structural biologist Venki Ramakrishnan called the result "a stunning advance on the protein folding problem",[28] adding that "It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research."[30] AlphaFold 2's success received widespread media attention.[31]
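The GDT measure itself is straightforward to describe. The sketch below is a minimal, simplified illustration (not CASP's actual evaluation code, which also searches over structural superpositions) of the GDT_TS score: the average fraction of Cα atoms that fall within 1, 2, 4, and 8 Å of their positions in the experimental structure, so a score above 90 means most residues are placed within a few ångströms of their true locations. The coordinates shown are hypothetical toy data.

```python
# Minimal sketch of a simplified GDT_TS calculation.
# Assumes predicted and experimental C-alpha coordinates are already superimposed.
import numpy as np

def gdt_ts(predicted: np.ndarray, experimental: np.ndarray) -> float:
    """Simplified global distance test (total score), on a 0-100 scale."""
    # Per-residue distance between corresponding C-alpha atoms.
    distances = np.linalg.norm(predicted - experimental, axis=1)
    # Fraction of residues within 1, 2, 4 and 8 angstroms, averaged.
    cutoffs = [1.0, 2.0, 4.0, 8.0]
    fractions = [(distances <= c).mean() for c in cutoffs]
    return 100.0 * float(np.mean(fractions))

# Toy example: a 4-residue "protein" predicted almost perfectly.
pred = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0], [11.4, 0.0, 0.0]])
exp = pred + np.array([[0.1, 0.0, 0.0]] * 4)  # 0.1 angstrom error everywhere
print(gdt_ts(pred, exp))  # 100.0 -- every residue is within all four cutoffs
```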

The ability to predict protein structures accurately from their constituent amino acid sequences is expected to have a wide variety of benefits in the life sciences, including accelerating drug discovery and enabling a better understanding of diseases.[29][32] Writing about the event, the MIT Technology Review noted that the AI had "solved a fifty-year old grand challenge of biology."[33] It went on to note that the AI algorithm could "predict the shape of proteins to within the width of an atom."[33]

Large language models

GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text that can be difficult to distinguish from text written by a human.[34] An upgraded version, GPT-3.5, was used in ChatGPT, which later garnered attention for its detailed and articulate responses across many domains of knowledge.[35] A new version called GPT-4 was released on March 14, 2023, and was used in the Microsoft Bing search engine.[36][37] Other language models have been released, such as PaLM and Gemini by Google and LLaMA by Meta Platforms.
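GPT-3 and GPT-4 are available only through OpenAI's hosted API, but the autoregressive text generation these models perform can be illustrated with a small open model. The sketch below assumes the Hugging Face transformers library and uses the freely available GPT-2 model as a stand-in; it is not OpenAI's code, and the prompt and sampling settings are arbitrary examples.

```python
# Minimal sketch: autoregressive text generation with a small open model.
# Uses the Hugging Face "transformers" library and GPT-2 as a stand-in;
# larger models such as GPT-3 work the same way at vastly greater scale.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The AI boom is",   # prompt to continue
    max_new_tokens=40,  # length of the continuation
    do_sample=True,     # sample rather than always pick the most likely token
    temperature=0.8,    # higher values give more varied text
)
print(result[0]["generated_text"])
```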

In January 2023, DeepL Write, an AI-based tool to improve monolingual texts, was released.[38] In December 2023, Gemini, Google's latest model, was unveiled, with Google claiming that it beat the previous state-of-the-art model, GPT-4, on most benchmarks.[39]

Text-to-image models

An image generated by Stable Diffusion based on the text prompt "a photograph of an astronaut riding a horse"

One of the first text-to-image models to capture widespread public attention was OpenAI's DALL-E, a transformer system announced in January 2021.[40] A successor capable of generating complex and realistic images, DALL-E 2, was unveiled in April 2022.[41] An alternative text-to-image model, Midjourney, was released in July 2022.[42] Another alternative, the open-source model Stable Diffusion, was released in August 2022.[43]
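Because Stable Diffusion's weights were openly released, an image like the astronaut example above can be generated locally with a few lines of code. The sketch below is a minimal example assuming the Hugging Face diffusers library, a CUDA-capable GPU, and one publicly released checkpoint identifier; the exact output varies with the random seed.

```python
# Minimal sketch: text-to-image generation with an open Stable Diffusion release.
# Requires the "diffusers" and "torch" packages and (ideally) a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one publicly released checkpoint
    torch_dtype=torch.float16,         # half precision to save GPU memory
)
pipe = pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]  # run the diffusion sampling loop
image.save("astronaut_rides_horse.png")
```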

Following other text-to-image models, language model-powered text-to-video platforms such as DAMO,[44] Make-A-Video,[45] Imagen Video,[46] and Phenaki[47] can generate video from text and/or image prompts.[48]

Speech synthesis

15.ai was one of the first publicly available speech synthesis tools that allowed people to generate natural, emotive, high-fidelity text-to-speech voices for an assortment of fictional characters from a variety of media sources. It was first released in March 2020.[49][50] ElevenLabs unveiled a website that allowed users to upload voice samples and generate new speech in those voices. The company was criticized after users abused its software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals,[51] raising concerns that it could be used to create more convincing deepfakes.[52] An unofficial song created with speech synthesis software mimicking the voices of musicians Drake and The Weeknd raised questions about the ethics and legality of such software.[53]

Impact

Cultural

Differing factions emerged during the AI boom, including the effective accelerationists, effective altruists, and catastrophists.[54]

Dominance by tech giants

The commercial AI scene is dominated by American Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft, whose investments in this area have surpassed those from U.S.-based venture capitalists.[55][56][57] Some of these players already own the vast majority of existing cloud computing infrastructure, which could help entrench them further in the marketplace.[58]

Intellectual property

Tech companies have been sued by artists and software developers for using their work to train AI models.[59]

Concerns

Economic disruption

There are concerns that as AI becomes more sophisticated, it will perform better than human workers and be more cost-effective.[60][17]

Risks to humanity

Many experts have stated that the AI boom has started an arms race in which large companies compete to have the most powerful AI model on the market, with little concern for safety.[61] During the AI boom, numerous safety concerns have been expressed by experts.[62] In particular, there have been concerns about the development of powerful models in which speed and profit are prioritized over safety and user protection.[61] There have already been numerous reports of racist, sexist, homophobic, and other types of discrimination from ChatGPT, Microsoft's Tay, and leading AI facial recognition models.[63] It has been estimated that there are 80 to 120 researchers globally[63] working on how to ensure AI is aligned with human values. Given the incomplete understanding of how AI works,[63] many researchers around the globe have voiced concerns about the potential future implications of the AI boom.[62] Public reaction to the AI boom has been mixed, with some parties hailing the new possibilities that AI creates,[64] its potential to benefit humanity, and its sophistication, while others denounce it for threatening job security and for giving 'uncanny' or flawed responses.[65][66][67][68]

In the midst of the AI boom, the hype surrounding artificial intelligence has itself been described as posing significant dangers.[citation needed] The enthusiasm and pressure generated by public fascination with AI can drive developers to expedite the creation and deployment of AI systems. This rush may lead to the omission of crucial safety procedures, potentially resulting in serious existential risks. As noted by Holden Karnofsky, the competitive pressure to meet consumer expectations might tempt organizations to prioritize speed over thorough safety checks, jeopardizing the responsible development of AI.[69]

The prevailing AI race mindset heightens the risks associated with the development of artificial general intelligence.[70] While competition can foster innovation and progress, an intense race to outperform rivals may encourage the prioritization of short-term gains over long-term safety.[71] A "winner-takes-all" mentality can further incentivize cutting corners, potentially creating a race to the bottom and compromising ethical considerations in responsible AI development.[71]

Prominent voices in the AI community have advocated for a cautious approach, urging AI companies to avoid unnecessary hype and acceleration.[69] Concerns arise from the belief that pouring money into the AI sector too rapidly could lead companies to act incautiously, racing to develop transformative AI without due consideration for key risks.[71][69] Despite prevailing hype and investment in AI, some argue that it is not too late to mitigate the risks associated with acceleration. Advocates for caution stress the importance of raising awareness of key risks, maintaining strong security procedures, and investing in AI safety measures such as alignment research, standards, and monitoring.[69]

References

  1. ^ Knight, Will. "Google's Gemini Is the Real Start of the Generative AI Boom". Wired. ISSN 1059-1028. Retrieved December 12, 2023.
  2. ^ Meredith, Sam (December 6, 2023). "A 'thirsty' generative AI boom poses a growing problem for Big Tech". CNBC. Retrieved December 12, 2023.
  3. ^ Bommasani, Rishi (March 17, 2023). "AI Spring? Four Takeaways from Major Releases in Foundation Models". Stanford Institute for Human-Centered Artificial Intelligence. Archived from the original on May 7, 2023. Retrieved May 16, 2023.
  4. ^ "The coming of AI Spring". www.mckinsey.com. Retrieved December 7, 2023.
  5. ^ a b Huckins, Grace (October 16, 2023). "Minds of machines: The great AI consciousness conundrum". MIT Technology Review. Retrieved December 12, 2023.
  6. ^ Robertson, Derek (July 18, 2023). "The religious mystery of AI". POLITICO. Retrieved December 12, 2023.
  7. ^ Lu, Yiwen (June 14, 2023). "Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says". The New York Times. ISSN 0362-4331. Retrieved December 12, 2023.
  8. ^ Tomašev, Nenad; Cornebise, Julien; Hutter, Frank; Mohamed, Shakir; Picciariello, Angela; Connelly, Bec; Belgrave, Danielle C. M.; Ezer, Daphne; Haert, Fanny Cachat van der; Mugisha, Frank; Abila, Gerald; Arai, Hiromi; Almiraat, Hisham; Proskurnia, Julia; Snyder, Kyle (May 18, 2020). "AI for social good: unlocking the opportunity for positive impact". Nature Communications. 11 (1): 2468. doi:10.1038/s41467-020-15871-z. ISSN 2041-1723. PMC 7235077.
  9. ^ a b Tong, Anna; Dastin, Jeffrey; Hu, Krystal (November 23, 2023). "OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say". Reuters. Retrieved December 12, 2023.
  10. ^ Milmo, Dan (October 24, 2023). "Hope or horror? The great AI debate dividing its pioneers". The Guardian. ISSN 0261-3077. Retrieved December 12, 2023.
  11. ^ "The data that transformed AI research—and possibly the world". Quartz. July 26, 2017.
  12. ^ Lohr, Steve (November 30, 2017). "A.I. Will Transform the Economy. But How Much, and How Soon?". The New York Times.
  13. ^ "Everything You Need To Know About The Artificial Intelligence Boom". Nasdaq.com. Investing Daily. August 22, 2018.
  14. ^ "Why am I not terrified of AI?". Shtetl-Optimized. March 6, 2023. Archived from the original on May 12, 2023. Retrieved March 19, 2023.
  15. ^ Statt, Nick (December 12, 2018). "The AI boom is happening all over the world, and it's accelerating quickly". The Verge.
  16. ^ Wong, Matteo (December 11, 2023). "Science Is Becoming Less Human". The Atlantic. Retrieved December 12, 2023.
  17. ^ a b Lohr, Steve (November 30, 2017). "A.I. Will Transform the Economy. But How Much, and How Soon?". The New York Times.
  18. ^ "Nine charts that really bring home just how fast AI is growing". MIT Technology Review.
  19. ^ Vincent, James (May 24, 2022). "All these images were generated by Google's latest text-to-image AI". The Verge. Archived from the original on February 15, 2023. Retrieved March 15, 2023.
  20. ^ Cox, Joseph (January 31, 2023). "AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse". Vice. Archived from the original on May 7, 2023. Retrieved March 15, 2023.
  21. ^ a b Frank, Michael (September 22, 2023). "US Leadership in Artificial Intelligence Can Shape the 21st Century Global Order". The Diplomat. Retrieved December 8, 2023.
  22. ^ "Global AI Vibrancy Tool". Artificial Intelligence Index. Stanford University.
  23. ^ Gold, Ashley (June 27, 2023). "Exclusive: Immigrants play outsize role in the AI game". Axios. Retrieved December 12, 2023.
  24. ^ Ellis, Lindsay (October 23, 2023). "Dropping Out of College to Join the AI Gold Rush". WSJ. Retrieved December 12, 2023.
  25. ^ Mozur, Paul; Metz, Cade (June 9, 2020). "A U.S. Secret Weapon in A.I.: Chinese Talent". The New York Times.
  26. ^ Kahn, Lauren (October 28, 2021). "U.S. Leadership in Artificial Intelligence Is Still Possible". Council on Foreign Relations.
  27. ^ Batin, Michael; Turchin, Alexey; Markov, Sergey; Zhila, Alisa; Denkenberger, David (December 1, 2017). "Artificial intelligence in life extension: From deep learning to superintelligence". Informatica.
  28. ^ a b Service, Robert F. (November 30, 2020). "'The game has changed.' AI triumphs at solving protein structures". Science.
  29. ^ a b Callaway, Ewen (November 30, 2020). "'It will change everything': DeepMind's AI makes gigantic leap in solving protein structures". Nature. 588 (7837): 203–204. Bibcode:2020Natur.588..203C. doi:10.1038/d41586-020-03348-4. PMID 33257889. S2CID 227243204.
  30. ^ "AlphaFold: a solution to a 50-year-old grand challenge in biology". Deepmind. Retrieved November 30, 2020.
  31. ^ Nerlich, Brigitte (December 4, 2020). "Protein folding and science communication: Between hype and humility". University of Nottingham blog.
  32. ^ Hubbard, Tim (November 30, 2020). "The secret of life, part 2: the solution of the protein folding problem". Medium.
  33. ^ a b "DeepMind's protein-folding AI has solved a 50-year-old grand challenge of biology". MIT Technology Review. Retrieved November 30, 2020.
  34. ^ Sagar, Ram (June 3, 2020). "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. Archived from the original on August 4, 2020. Retrieved March 15, 2023.
  35. ^ Lock, Samantha (December 5, 2022). "What is AI chatbot phenomenon ChatGPT and could it replace humans?". The Guardian. ISSN 0261-3077. Archived from the original on January 16, 2023. Retrieved March 15, 2023.
  36. ^ Lardinois, Frederic (March 14, 2023). "Microsoft's new Bing was using GPT-4 all along". TechCrunch. Archived from the original on March 15, 2023. Retrieved March 15, 2023.
  37. ^ Derico, Ben; Kleinman, Zoe (March 14, 2023). "OpenAI announces ChatGPT successor GPT-4". BBC News. Archived from the original on May 15, 2023. Retrieved March 15, 2023.
  38. ^ Ziegener, Daniel (January 17, 2023). "DeepL Write: Brauchen wir jetzt noch eine menschliche Lektorin?". Golem.de. Archived from the original on February 6, 2023. Retrieved March 15, 2023.
  39. ^ "Gemini". DeepMind. Retrieved December 8, 2023.
  40. ^ Coldewey, Devin (January 5, 2021). "OpenAI's DALL-E creates plausible images of literally anything you ask it to". TechCrunch. Archived from the original on January 6, 2021. Retrieved March 15, 2023.
  41. ^ Coldewey, Devin (April 6, 2022). "New OpenAI tool draws anything, bigger and better than ever". TechCrunch. Archived from the original on May 6, 2023. Retrieved March 15, 2023.
  42. ^ "We're officially moving to open-beta". twitter.com. Archived from the original on December 28, 2023.
  43. ^ "Stable Diffusion Public Release". Stability AI. Archived from the original on August 30, 2022. Retrieved March 15, 2023.
  44. ^ "ModelScope 魔搭社区". modelscope.cn. Archived from the original on May 9, 2023. Retrieved March 20, 2023.
  45. ^ kumar, Ashish (October 3, 2022). "Meta AI Introduces 'Make-A-Video': An Artificial Intelligence System That Generates Videos From Text". MarkTechPost. Archived from the original on December 1, 2022. Retrieved March 15, 2023.
  46. ^ Edwards, Benj (October 5, 2022). "Google's newest AI generator creates HD video from text prompts". Ars Technica. Archived from the original on February 7, 2023. Retrieved October 25, 2022.
  47. ^ "Phenaki". phenaki.video. Archived from the original on October 7, 2022. Retrieved October 3, 2022.
  48. ^ Edwards, Benj (September 9, 2022). "Runway teases AI-powered text-to-video editing using written prompts". Ars Technica. Archived from the original on January 27, 2023. Retrieved September 12, 2022.
  49. ^ Zwiezen, Zack (January 18, 2021). "Website Lets You Make GLaDOS Say Whatever You Want". Kotaku. Archived from the original on January 17, 2021. Retrieved January 18, 2021.
  50. ^ Ruppert, Liana (January 18, 2021). "Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App". Game Informer. Game Informer. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
  51. ^ Jorge Jimenez (January 31, 2023). "AI company promises changes after 'voice cloning' tool used to make celebrities say awful things". PC Gamer. Archived from the original on April 4, 2023. Retrieved February 3, 2023.
  52. ^ "Seeing is believing? Global scramble to tackle deepfakes". Yahoo News. Archived from the original on February 3, 2023. Retrieved March 15, 2023.
  53. ^ Coscarelli, Joe (April 19, 2023). "An A.I. Hit of Fake 'Drake' and 'The Weeknd' Rattles the Music World". The New York Times. ISSN 0362-4331. Archived from the original on May 15, 2023. Retrieved May 16, 2023.
  54. ^ Roose, Kevin (December 10, 2023). "This A.I. Subculture's Motto: Go, Go, Go". The New York Times. ISSN 0362-4331. Retrieved December 11, 2023.
  55. ^ Hammond, George (December 27, 2023). "Big Tech is spending more than VC firms on AI startups". Ars Technica.
  56. ^ Wong, Matteo (October 24, 2023). "The Future of AI Is GOMA". The Atlantic.
  57. ^ "Big tech and the pursuit of AI dominance". The Economist.
  58. ^ Fung, Brian (December 19, 2023). "Where the battle to dominate AI may be won | CNN Business". CNN.
  59. ^ Kafka, Peter (February 1, 2023). "The AI boom is here, and so are the lawsuits". Vox. Archived from the original on May 9, 2023. Retrieved March 15, 2023.
  60. ^ "The AI boom: lessons from history". The Economist.
  61. ^ a b Chow, A. R.; et al. (2023). "The AI Arms Race Is Changing Everything". Time. 201 (7/8): 50–54.
  62. ^ a b Anderljung, M., Barnhart, J., Korinek, A., Leung, J., O’Keefe, C., Whittlestone, J., Avin, S., Brundage, M., Bullock, J., Cass-Beggs, D., Chang, B., Collins, T., Fist, T., Hadfield, G., Hayes, A., Ho, L., Hooker, S., Horvitz, E., Kolt, N., … Wolf, K. (2023). Frontier AI Regulation: Managing Emerging Risks to Public Safety.
  63. ^ a b c Scharre, Paul (April 16, 2019). "Killer Apps". Foreign Affairs. https://www.foreignaffairs.com/articles/2019-04-16/killer-apps. Retrieved November 30, 2023.
  64. ^ Eapen, Tojin T.; Finkenstadt, Daniel J.; Folk, Josh; Venkataswamy, Lokesh (June 16, 2023). "How Generative AI Can Augment Human Creativity". Harvard Business Review. ISSN 0017-8012. Retrieved June 20, 2023.
  65. ^ McKendrick, Joe (May 17, 2020). "No matter how sophisticated, artificial intelligence systems still need human oversight". ZDNET. Archived from the original on May 10, 2023. Retrieved May 16, 2023.
  66. ^ Sukhadeve, Ashish (February 9, 2021). "Council Post: Artificial Intelligence For Good: How AI Is Helping Humanity". Forbes. Archived from the original on May 9, 2023. Retrieved May 16, 2023.
  67. ^ "Could AI advancements be a threat to your job security?". Learning People. Archived from the original on May 9, 2023. Retrieved May 16, 2023.
  68. ^ Zinkula, Jacob; Mok, Aaron (June 4, 2023). "ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace". Business Insider. Archived from the original on May 9, 2023. Retrieved May 16, 2023.
  69. ^ a b c d Karnofsky, Holden. "What AI Companies Can Do Today to Help With the Most Important Century". Cold Takes. https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/. Retrieved December 8, 2023.
  70. ^ "Nobody's on the Ball on AGI Alignment". LessWrong. https://www.lesswrong.com/posts/uqTJ7mQqRpPejqbfN/nobody-s-on-the-ball-on-agi-alignment. Retrieved December 8, 2023.
  71. ^ a b c "Global Vulnerability and the AI Race". AI Safety Fundamentals. https://aisafetyfundamentals.com/blog/global-vulnerability. Retrieved December 8, 2023.