Wikipedia Advising Report
The prospective incorporation of generative AI tools and large language models (LLMs) into the development of Wikipedia and the Wikimedia Foundation raises a plethora of complexities to consider before putting these tools into practice. As the Wikimedia Foundation builds approaches intended to grow Wikipedia’s impact, it must also prioritize the foundational purpose of Wikipedia: to “benefit readers by acting as a widely accessible and free encyclopedia; a comprehensive written compendium that contains information on all branches of knowledge.” Keeping this in mind, how should we go about productively achieving the purpose of Wikipedia, with or without the use of AI?
Because AI excels at identifying patterns and condensing text, it is naturally suited to producing clear, comprehensible structure. Building outlines, condensing information, and identifying the most relevant points within an article are all tasks that AI handles reasonably well. For the individual who has yet to contribute to Wikipedia, those same tasks may feel like obstacles or result in disorganized articles that fall short of community standards. To put this strength to use and better equip users to craft articles, Wikipedia could introduce a feature giving users the option to enhance their writing with AI. This feature would not immediately alter the user’s text; instead, it would offer a series of potentially useful layouts or edits for the article. Its main purpose would be to organize and structure the article to achieve a higher transfer of information and increase the overall readability of articles published on Wikipedia. Additionally, the AI could flag run-on sentences or overly wordy writing. These suggestions would not write any content, only point out where to improve or expand, as in the sketch below. This would improve the experience for new Wikipedians by giving them some light guidance on their content and ensuring that their article outlines are cohesive with the norms of Wikipedia. Alternatively, Wikimedia could introduce an AI character to guide new users through the WikiEdu process, gamifying the learning stage and possibly compelling more users to learn how to contribute. Again, it is important to note that neither feature would write the content itself; each would provide starting points and structures for article outlines, drawing on AI’s skill at identifying and organizing information rather than at writing content. Since Wikipedia is an online encyclopedia, it is imperative that the claims within Wikipedia articles have integrity and truth to them, and using AI to generate content published on Wikipedia could be detrimental to the platform’s reputation.
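To make the suggest-only constraint concrete, here is a minimal sketch of such a feature. Everything in it is a hypothetical illustration rather than an existing Wikipedia tool: the thresholds, section names, and function names are assumptions, and a real implementation would likely call an LLM rather than these simple heuristics.

```python
# A minimal sketch of a suggest-only assistant: it returns advice about the
# draft but never rewrites the draft itself. The word-count threshold and
# section names below are illustrative assumptions, not Wikipedia policy.

LONG_SENTENCE_WORDS = 40  # assumed threshold for flagging run-on sentences
STANDARD_SECTIONS = ["Lead", "History", "Reception", "References"]

def suggest_improvements(draft: str) -> list[str]:
    suggestions = []
    # Flag overly long sentences instead of rewriting them.
    for i, sentence in enumerate(draft.replace("\n", " ").split(". ")):
        words = sentence.split()
        if len(words) > LONG_SENTENCE_WORDS:
            suggestions.append(
                f"Sentence {i + 1} has {len(words)} words; consider splitting it."
            )
    # Suggest a conventional outline if the draft has no section headings.
    if "==" not in draft:  # wikitext marks section headings as == Heading ==
        suggestions.append(
            "No section headings found; one possible outline: "
            + ", ".join(STANDARD_SECTIONS)
        )
    return suggestions

draft = "Wikipedia is a free online encyclopedia. It is edited by volunteers."
for tip in suggest_improvements(draft):
    print("SUGGESTION:", tip)
```

The key design choice is that the function returns advice and never a rewritten draft, keeping the human editor as the sole author of the published text.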
Now that we have noted the potential benefits of implementing AI within Wikipedia’s interface, it is crucial to consider how Wikipedia users might react to the change. A general standard within Wikipedia communities, as discussed in class, is the promotion of well-written, accurate information within individual articles and in the development of the platform. Expanding the use of AI to aid with writing articles could disrupt Wikipedia communities, especially users who take pride in reviewing or editing articles. AI writing, while usually semi-cohesive, is noticeable to humans and could dishearten or discourage members from enthusiastically participating in Wikipedia activity. An admirable characteristic of Wikipedia is the authenticity of its makeup: the platform’s content comes from millions of individuals intent on sharing information freely with others. Encouraging the use of AI to generate articles or produce content for them would surely disappoint dedicated Wiki users. It is generally understood that individuals who actively participate on Wikipedia are motivated by a passion for writing, editing, and sharing information, and taking away any of those avenues would likely alienate loyal users; introducing AI would risk displacing one or more of the activities those users enjoy. To avoid disrupting communities and to protect the integrity of Wikipedia content, an AI detector should be embedded within the platform to identify potential use of AI and to disclose when and how AI is used in published articles; a rough sketch of this disclosure idea appears below. An additional risk AI poses to the Wikipedia community is inaccuracy when using AI to flag bots or spam activity. While accounts reported for spam content may well be genuine disruptions to Wikipedia, it would be unreasonable and unreliable to let such a system mistakenly flag a human contributor.
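As a rough sketch of that disclosure mechanism, the fragment below shows one way to attach a notice instead of blocking an edit. The detector itself is a placeholder with a made-up heuristic and threshold; real AI detectors exist but are known to produce false positives, which is exactly why this sketch only discloses and never accuses or reverts.

```python
# A rough sketch of the disclosure idea: an (assumed) detector returns a
# probability that text is AI-generated, and the system only *discloses*,
# never blocks, so a false positive cannot silence a human editor.

DISCLOSE_THRESHOLD = 0.9  # assumed: act only on high-confidence scores

def ai_likelihood(text: str) -> float:
    """Placeholder for a real classifier; returns a score in [0, 1]."""
    # Made-up heuristic: very uniform sentence lengths read as machine-like.
    lengths = [len(s.split()) for s in text.split(". ") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return 0.95 if max(lengths) - min(lengths) <= 2 else 0.1

def disclosure_tag(text: str) -> str | None:
    """Return a disclosure notice, or None; never flag the editor."""
    score = ai_likelihood(text)
    if score >= DISCLOSE_THRESHOLD:
        return f"Notice: this revision may contain AI-assisted text (score {score:.2f})."
    return None

print(disclosure_tag("One two three four. Five six seven eight."))
```

Tying the outcome to disclosure rather than enforcement keeps detector errors from harming human editors, which addresses the flagging concern raised above.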
Among the myriad ways AI could harm Wikipedia as a platform and its communities, it is very important to consider how Wikimedia can best maintain the reputation of the project. Allowing AI to write content could seriously damage Wikipedia’s reputation as a source of information. While writers may use whatever resources they please for inspiration or guidance, it would be foolish to introduce any feature that prompts the generation of articles by artificial intelligence, as the drawbacks and inaccuracies of AI are widely recognized.
When weighing the benefits and drawbacks of AI, an open mind is necessary. I advise Wikimedia to move very carefully and strategically when introducing any new features, and to consider the impact on its communities of incorporating AI into Wikipedia. While small features such as suggested article structures, suggested edits, and even AI tutorials could be helpful and increase user-friendliness, there is a fine line to draw when combining artificial intelligence and online encyclopedias. It is in Wikipedia’s best interest to make improper use of AI as difficult as possible for its users.