
AI in education

From Wikipedia, the free encyclopedia

Artificial intelligence (AI) is a broad "skewer" term that has specific areas of study clustered around it, including machine learning, natural language processing, the philosophy of artificial intelligence, autonomous robots and TESCREAL. Research about AI in higher education is widespread in the global north,[1] where there is much hype from venture capital, big tech and some open educationalists. Some believe that AI will remove the obstacle of "access to expertise".[2] Others claim that education will be revolutionized by machines and their ability to understand natural language.[3] Still others are exploring how the "reasoning" of large language models (LLMs) might be improved.[4] Marketers often use different kinds of myths to make complex products seem simpler.[5] There is, at present, no scientific consensus on what AI is or how to classify and sub-categorize it.[6][7]

This has not hampered the growth of AI systems that offer scholars and students automatic assessment and feedback, predictions, instant machine translation, on-demand proof-reading and copy editing, intelligent tutoring and virtual assistants.[1] AI brings conversational coherence to the classroom and automates the production of content.[8] Through categorisation, summaries and dialogue, the impression of AI "intelligence" or "authority" is reinforced by anthropomorphism and the Eliza effect. AI also introduces hazards, such as skewed dissemination of scientific knowledge,[9] harmful educational practices,[10] and disincentives for researchers to publish their original research openly.[11] Worries about AI safety risks such as privacy breaches, algorithmic biases, security concerns, ethics, and compliance barriers are accompanied by other doomsday warnings.
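The Eliza effect named above predates modern AI: Joseph Weizenbaum's 1966 ELIZA program showed that a handful of pattern-matching rules can create a convincing illusion of understanding. The sketch below is a minimal illustration in that spirit; the rules, wording and word lists are invented for this example, not Weizenbaum's original script.

```python
import re

# Swapping first- and second-person words makes the echo sound attentive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered rules: the first matching pattern wins; the last is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Replace first-person words with second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned response built from the user's own words."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```

No model of the world is involved: the program merely mirrors the user's words back, yet users readily attribute understanding to it, which is the anthropomorphism the paragraph above describes.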

Different Perspectives on AI in Education


Educational technology can be a powerful and effective assistant for learning, and computer companies are constantly updating their products. Some educationalists have suggested that AI might automate procedural knowledge and expertise,[12] or even match or surpass human capacities on cognitive tasks. They advocate the integration of AI across the curriculum and the development of AI literacy.[13] Others are more skeptical, as AI faces an ethical challenge: "fabricated responses" or "inaccurate information", politely referred to as "hallucinations",[14] are generated and presented as fact. Some remain curious about society's tendency to put its faith in engineering achievements, and about the systems of power and privilege[15] that lead towards determinist thinking.[16] Still others see copyright infringement[17][18] or the introduction of harm, division and other social impacts, and advocate resistance to AI.[19] Evidence is mounting that AI-written assessments are undetectable, which poses serious questions about the academic integrity of university assessments.[20]

Tokens, Text and Hallucinations


Large language models (LLMs) take text as input and generate text as output.[21] They are trained on billions of words of web-scraped text and code. LLMs are feats of engineering that treat text as tokens: the statistical relationships between tokens allow an LLM to predict the next word, and then the next, generating a meaningful sentence and the appearance of thought and interaction. An LLM typically depends on a huge text corpus, normally extracted from the World Wide Web. This dataset allows the LLM to act as a statistical reasoning machine[22] or to perform pattern recognition.[23] The model examines the relationships between tokens, generates probable outputs in response to a prompt, and completes a defined task, such as translating, editing, or writing. The output it presents is a smoothed collection of words,[24] normalized and predictable. However, the text corpora that LLMs draw on can be problematic, as outputs will reflect the stereotypes or biases of the people or cultures whose content has been digitized.

Confident but incorrect outputs are termed "hallucinations": "factually incorrect or nonsensical information that seems plausible".[29] These plausible errors are not malfunctions but a consequence of the engineering decisions that inform the large language model.[25] "Guardrails" are offered as validators of LLM output, promising to prevent these errors and safeguard accuracy,[26] yet there are no fixes[27][28] for so-called hallucinations. Translation, summarization, information retrieval and conversational interaction are some of the complex language tasks that machines are expected to handle.[30]
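The next-token prediction described above can be sketched with a toy bigram model. This is a deliberate simplification: real LLMs learn billions of parameters over subword tokens with neural networks, but the underlying principle of predicting the next token from statistical patterns in a training corpus is the same. The tiny corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training corpus, pre-split into word-level tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token follows which: the model's entire "knowledge"
# is this table of successor frequencies.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent successor of `token` in the corpus."""
    return successors[token].most_common(1)[0][0]

def generate(start: str, length: int = 3) -> str:
    """Greedily chain predictions to produce fluent-looking text."""
    tokens = [start]
    for _ in range(length):
        tokens.append(predict_next(tokens[-1]))
    return " ".join(tokens)

print(generate("dog"))  # -> dog sat on the
```

The output is fluent because it reproduces patterns in the training text, not because the program understands cats or dogs, and the same mechanism explains why an LLM's errors are statistically plausible rather than random: a "hallucination" is simply a likely-looking continuation that happens to be false.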

Socio-Technical Imaginaries (STI)


The benefits of multilingualism, grammatically correct sentences and statistically probable texts about any topic or domain are clear to those who can afford software as a service (SaaS). In edtech there is a recurrent theme: that "emerging technologies"[31] will transform education,[32] whether radio, TV, personal computers, the internet, interactive whiteboards, social media, mobile phones or tablets. New technologies generate a sociotechnical imaginary (STI) that offers society a shared narrative[33] and a collective vision of the future.[34] Improvements in natural language processing and computational linguistics have reinforced the assumptions that underlie this "emerging technology" STI. AI, however, is not an emerging technology but an "arrival technology".[35] AI appears to understand instructions and can generate human-like responses,[36] behaving as a companion for many in a lonely and alienated world.[37] It also creates a "jagged technology frontier",[38] where AI is both very good and terribly bad at very similar tasks.[39]

Public Goods vs Venture Capital


At first glance, artificial intelligence in education (AIEd)[40] does indeed offer pertinent technical solutions to future education needs. AI optimists envision a future where machine learning and artificial intelligence are applied to writing, personalization, feedback and course development. The growing popularity of AI is especially apparent to the many who have invested in higher education over the past decade.[41] AI skeptics, on the other hand, are wary of rhetoric that presents technology as the solution. They point out that in public services, such as education, human and algorithmic decision systems should be approached with caution.[42] Postdigital scholars and sociologists are more cautious still about techno-solutions, and have warned about the dangers of building public systems around alchemy[43] or stochastic parrots. They argue that multiple costs accompany LLMs, including dangerous biases, the potential for deception, and environmental costs.[44] The AI-curious are aware of how cognitive activity has become commodified; they see how education has been transformed into a "knowledge business" where items are traded, bought, or sold.[45] African hyperscalers, venture capital and vice-chancellors[46] are punting the Fourth Industrial Revolution, with the prospect of billions earmarked for South African data centres,[10] such as Teraco Data Environments, Vantage Data Centres, Africa Data Centres and NTT/Dimension Data, all carefully avoiding accusations of monopoly practices.[47]

AI-Resilient Graduates


AI has co-existed comfortably between academia and industry for years.[48] The terrain is shifting: AI research in the global north currently commands the computing power, the large datasets and the highly skilled researchers, and power is shifting away from students and academics toward corporations and venture capitalists.[49] Graduates from universities in dominant, highly digitised cultures need to become AI-resilient. Graduates from the majority world also need to value their own processes of knowledge construction, resist the lure of normalisation, see AI for what it is, another form of enclosure, and start blogging. Graduates from both the global north and the majority world need to be able to critique AI output, become familiar with the processes of technical change,[50] and let their own studies and intellectual life guide their working futures.[51]

References

  1. ^ a b Crompton, Helen; Burke, Diane (2023-04-24). "Artificial intelligence in higher education: the state of the field". International Journal of Educational Technology in Higher Education. 20 (1): 22. doi:10.1186/s41239-023-00392-8. ISSN 2365-9440.
  2. ^ Wiley, David (2023). "The Near-term Impact of Generative AI on Education, in One Sentence – improving learning". opencontent.org. Archived from the original on 2024-08-29. Retrieved 2024-08-29.
  3. ^ a b Wang, Alex; Pruksachatkun, Yada; Nangia, Nikita; Singh, Amanpreet; Michael, Julian; Hill, Felix; Levy, Omer; Bowman, Samuel R. (2019-12-08), "SuperGLUE: a stickier benchmark for general-purpose language understanding systems", Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA: Curran Associates Inc., pp. 3266–3280, arXiv:1905.00537, retrieved 2024-08-22
  4. ^ Havrilla, Alex; Raparthy, Sharath; Nalmpantis, Christoforus; Dwivedi-Yu, Jane; Zhuravinskyi, Maksym; Hambro, Eric; Raileanu, Roberta (2024). "GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements". arXiv:2402.10963 [cs.CL].
  5. ^ Salvaggio, Eryk (2024-08-29). "Challenging The Myths of Generative AI | TechPolicy.Press". Tech Policy Press. Retrieved 2024-09-04.
  6. ^ a b Knoedler, Leonard; Knoedler, Samuel; Allam, Omar; Remy, Katya; Miragall, Maximilian; Safi, Ali-Farid; Alfertshofer, Michael; Pomahac, Bohdan; Kauke-Navarro, Martin (2023-10-30). "Application possibilities of artificial intelligence in facial vascularized composite allotransplantation—a narrative review". Frontiers in Surgery. 10. doi:10.3389/fsurg.2023.1266399. ISSN 2296-875X. PMC 10646214. PMID 38026484.
  7. ^ Grobelnik, Marko; Perset, Karine; Russell, Stuart (March 6, 2024). "What is AI? Can you make a clear distinction between AI and non-AI systems?". oecd.ai. Retrieved 28 June 2024.
  8. ^ a b "What price your 'AI-ready' graduates?". 7 August 2024.
  9. ^ Algaba, Andres; Mazijn, Carmen; Holst, Vincent; Tori, Floriano; Wenmackers, Sylvia; Ginis, Vincent (2024-08-24), Large Language Models Reflect Human Citation Patterns with a Heightened Citation Bias, doi:10.48550/arXiv.2405.15739, retrieved 2024-08-29
  10. ^ a b Riley, B., & Bruno, P. (2024). Education hazards of generative AI. https://www.cognitiveresonance.net/resources.html
  11. ^ "AI firms must play fair when they use academic data in training". Nature. 632 (8027): 953. 2024-08-27. doi:10.1038/d41586-024-02757-z.
  12. ^ Collins, Harry (2024-05-18). "Why artificial intelligence needs sociology of knowledge: parts I and II". AI & Society. doi:10.1007/s00146-024-01954-8. ISSN 1435-5655.
  13. ^ Southworth, Jane; Migliaccio, Kati; Glover, Joe; Glover, Ja'Net; Reed, David; McCarty, Christopher; Brendemuhl, Joel; Thomas, Aaron (2023). "Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy". Computers and Education: Artificial Intelligence. 4. doi:10.1016/j.caeai.2023.100127.
  14. ^ Collins, Harry (2024-05-18). "Why artificial intelligence needs sociology of knowledge: parts I and II". AI & Society. doi:10.1007/s00146-024-01954-8. ISSN 1435-5655.
  15. ^ Birhane, Abeba; McGann, Marek (2024-11-01). "Large models of what? Mistaking engineering achievements for human linguistic agency". Language Sciences. 106: 101672. arXiv:2407.08790. doi:10.1016/j.langsci.2024.101672. ISSN 0388-0001.
  16. ^ "Resisting Deterministic Thinking". 5 April 2023.
  17. ^ "Generative AI is a marvel. Is it also built on theft?". The Economist. ISSN 0013-0613. Retrieved 2024-08-20.
  18. ^ "Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst". CNBC. 6 March 2024.
  19. ^ McQuillan, Dan (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press.
  20. ^ Scarfe, Peter; Watcham, Kelly; Clarke, Alasdair; Roesch, Etienne (2024-06-26). "A real-world test of artificial intelligence infiltration of a university examinations system: A "Turing Test" case study". PLOS ONE. 19 (6): e0305354. Bibcode:2024PLoSO..1905354S. doi:10.1371/journal.pone.0305354. ISSN 1932-6203. PMC 11206930. PMID 38923941.
  21. ^ McCoy, R. Thomas; Yao, Shunyu; Friedman, Dan; Hardy, Matthew; Griffiths, Thomas L. (2023-09-24). "Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve". arXiv.org. doi:10.48550/arxiv.2309.13638. Retrieved 2024-09-05.
  22. ^ "Artificial Intelligence (AI) vs. Difference". Law Office of Lainey Feingold. 2024-08-20. Retrieved 2024-08-21.
  23. ^ Singh, Chetanpal (2023-04-11). "Machine Learning in Pattern Recognition". European Journal of Engineering and Technology Research. 8 (2): 63–68. doi:10.24018/ejeng.2023.8.2.3025. ISSN 2736-576X.
  24. ^ "When Should We Trust AI? Magic-8-Ball Thinking and AI Hallucinations". Nielsen Norman Group. Retrieved 2024-08-21.
  25. ^ "ChatGPT Isn't 'Hallucinating.' It's Bullshitting". 6 April 2023.
  26. ^ "Inside Guardrails AI: A New Framework for Safety, Control and Validation of LLM Applications". Medium. https://jrodthoughts.medium.com/inside-guardrails-ai-a-new-framework-for-safety-control-and-validation-of-llm-applications-df8646884390
  27. ^ "Tech experts are starting to doubt that ChatGPT and A.I. 'hallucinations' will ever go away: 'This isn't fixable'".
  28. ^ "Google CEO Sundar Pichai says 'hallucination problems' still plague A.I. Tech and he doesn't know why".
  29. ^ Xu, Ziwei; Jain, Sanjay; Kankanhalli, Mohan (2024). "Hallucination is Inevitable: An Innate Limitation of Large Language Models". arXiv:2401.11817 [cs.CL].
  30. ^ Naveed, Humza; Khan, Asad Ullah; Qiu, Shi; Saqib, Muhammad; Anwar, Saeed; Usman, Muhammad; Akhtar, Naveed; Barnes, Nick; Mian, Ajmal (2024-04-09), A Comprehensive Overview of Large Language Models, doi:10.48550/arXiv.2307.06435, retrieved 2024-09-01
  31. ^ "A definition of emerging technologies for education". veletsianos.com. 18 November 2008. https://www.veletsianos.com/2008/11/18/a-definition-of-emerging-technologies-for-education/
  32. ^ Hrastinski, Stefan; Olofsson, Anders D.; Arkenback, Charlotte; Ekström, Sara; Ericsson, Elin; Fransson, Göran; Jaldemark, Jimmy; Ryberg, Thomas; Öberg, Lena-Maria; Fuentes, Ana; Gustafsson, Ulrika; Humble, Niklas; Mozelius, Peter; Sundgren, Marcus; Utterberg, Marie (2019-10-01). "Critical Imaginaries and Reflections on Artificial Intelligence and Robots in Postdigital K-12 Education". Postdigital Science and Education. 1 (2): 427–445. doi:10.1007/s42438-019-00046-x. ISSN 2524-4868.
  33. ^ Tidwell, Jacqueline Hettel; Tidwell, Abraham S. D. (2018-05-01). "Energy ideals, visions, narratives, and rhetoric: Examining sociotechnical imaginaries theory and methodology in energy research". Energy Research & Social Science. 39: 103–107. Bibcode:2018ERSS...39..103T. doi:10.1016/j.erss.2017.11.005. ISSN 2214-6296. OSTI 1894586.
  34. ^ Jasanoff, Sheila; Kim, Sang-Hyun (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press. doi:10.7208/chicago/9780226276663.001.0001. ISBN 978-0-226-27652-6.
  35. ^ Klopfer, Eric; Reich, Justin; Abelson, Hal; Breazeal, Cynthia (2024-03-27). "Generative AI and K-12 Education: An MIT Perspective". An MIT Exploration of Generative AI. doi:10.21428/e4baedd9.81164b06.
  36. ^ Menon, Devadas; Shilpa, K (2023-11-01). ""Chatting with ChatGPT": Analyzing the factors influencing users' intention to Use the Open AI's ChatGPT using the UTAUT model". Heliyon. 9 (11): e20962. Bibcode:2023Heliy...920962M. doi:10.1016/j.heliyon.2023.e20962. ISSN 2405-8440. PMC 10623159. PMID 37928033.
  37. ^ Jacobs, Kerrin Artemis (2024-03-05). "Digital loneliness—changes of social recognition through AI companions". Frontiers in Digital Health. 6. doi:10.3389/fdgth.2024.1281037. ISSN 2673-253X. PMC 10949182. PMID 38504806.
  38. ^ Dell'Acqua, Fabrizio; McFowland, Edward; Mollick, Ethan R.; Lifshitz-Assaf, Hila; Kellogg, Katherine; Rajendran, Saran; Krayer, Lisa; Candelon, François; Lakhani, Karim R. (2023). "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality". SSRN Electronic Journal. doi:10.2139/ssrn.4573321. ISSN 1556-5068.
  39. ^ Klopfer, Eric; Reich, Justin; Abelson, Hal; Breazeal, Cynthia (2024-03-27). "Generative AI and K-12 Education: An MIT Perspective". An MIT Exploration of Generative AI. doi:10.21428/e4baedd9.81164b06.
  40. ^ Crompton, Helen; Burke, Diane (2023-04-24). "Artificial intelligence in higher education: the state of the field". International Journal of Educational Technology in Higher Education. 20 (1): 22. doi:10.1186/s41239-023-00392-8. ISSN 2365-9440.
  41. ^ Crompton, Helen; Burke, Diane (2023-04-24). "Artificial intelligence in higher education: the state of the field". International Journal of Educational Technology in Higher Education. 20 (1): 22. doi:10.1186/s41239-023-00392-8. ISSN 2365-9440.
  42. ^ "Don't Believe Every AI You See". New America. Retrieved 2024-08-20.
  43. ^ "Don't Believe Every AI You See". New America. Retrieved 2024-08-20.
  44. ^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-01). "On the Dangers of Stochastic Parrots: Can Language Models be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21. New York, NY, USA: Association for Computing Machinery. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
  45. ^ "The commodification of education and the generative AI-induced scam-like culture". ResearchGate. https://www.researchgate.net/publication/383148782_The_commodification_of_education_and_the_generative_AI-induced_scam-like_culture/citations
  46. ^ "Five new universities coming to South Africa". BusinessTech. https://businesstech.co.za/news/lifestyle/788618/five-new-universities-coming-to-south-africa/
  47. ^ "Competition Commission slapping Microsoft with monopoly complaint in South Africa". MyBroadband. https://mybroadband.co.za/news/cloud-hosting/531335-competition-commission-slapping-microsoft-with-monopoly-complaint-in-south-africa.html
  48. ^ Ahmed, Nur; Wahed, Muntasir; Thompson, Neil C. (2023). "The growing influence of industry in AI research". Science. 379 (6635): 884–886. Bibcode:2023Sci...379..884A. doi:10.1126/science.ade2420. PMID 36862769.
  49. ^ Stilgoe, Jack (2024). "AI has a democracy problem. Citizens' assemblies can help". Science. 385 (6711): eadr6713. doi:10.1126/science.adr6713. PMID 39172823.
  50. ^ Postman, N. (1998, March). Five things we need to know about technological change [Conference presentation]. Denver, CO, United States. https://www.cs.ucdavis.edu/~rogaway/classes/188/materials/postman.pdf
  51. ^ "What price your 'AI-ready' graduates?". 7 August 2024.