User:Francishunger
This page collects recent academic papers (from the humanities) on generative AI to make them available to Wikipedia editors working on the WikiProject AI Cleanup: https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup
Image
Chiang, Ted. 2023. “ChatGPT Is a Blurry JPEG of the Web.” The New Yorker, February 9, 2023. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web.
Ball, Philip. 2023. “Is AI Leading to a Reproducibility Crisis in Science?” Nature 624 (7990): 22–25. https://doi.org/10.1038/d41586-023-03817-6.
Meyer, Roland. 2022. “Im Bildraum von Big Data. Unwahrscheinliche und unvorhergesehene Suchkommandos: Über Dall-E 2” [In the Image Space of Big Data. Improbable and Unforeseen Search Commands: On Dall-E 2]. Cargo, no. 55, 50–53.
Cetinić, Eva. 2022. “The Myth of Culturally Agnostic AI Models.” arXiv. https://doi.org/10.48550/arXiv.2211.15271.
Birhane, Abeba, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. “Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes.” arXiv. https://doi.org/10.48550/arXiv.2110.01963.
Pereira, Gabriel, and Bruno Moreschi. 2020. “Artificial Intelligence and Institutional Critique 2.0 – Unexpected Ways of Seeing with Computer Vision.” AI & SOCIETY, September. https://doi.org/10.1007/s00146-020-01059-y.
Offert, Fabian, and Peter Bell. 2020. “Perceptual Bias and Technical Metapictures – Critical Machine Vision as a Humanities Challenge.” AI & SOCIETY, October. https://doi.org/10.1007/s00146-020-01058-z.
Harvey, Adam, and Jules LaPlace. 2019. “MegaPixels – Origins and Endpoints of Biometric Datasets »In the Wild«.” Website. 2019. https://megapixels.cc.
Steyerl, Hito. 2018. “A Sea of Data – Pattern Recognition and Corporate Animism (Forked Version).” In Pattern Discrimination, edited by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer, and Hito Steyerl, 1–21. Lüneburg: Meson Press.
Chun, Wendy Hui Kyong. 2018. “Queerying Homophily.” In Pattern Discrimination, edited by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer, and Hito Steyerl, 59–98. Lüneburg: Meson Press.
Language
Birhane, Abeba, and Marek McGann. 2024. “Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency.” arXiv. https://doi.org/10.48550/arXiv.2407.08790.
Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. 2023. “The Curse of Recursion – Training on Generated Data Makes Models Forget.” arXiv. https://doi.org/10.48550/arXiv.2305.17493.
Luitse, Dieuwertje, and Wiebke Denkena. 2021. “The Great Transformer – Examining the Role of Large Language Models in the Political Economy of AI.” Big Data & Society 8 (2): 1–14. https://doi.org/10.1177/20539517211047734.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots – Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM. https://doi.org/10.1145/3442188.3445922.
Carlini, Nicholas, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. 2023. “Poisoning Web-Scale Training Datasets Is Practical.” arXiv. https://doi.org/10.48550/arXiv.2302.10149.