User:RumpusLumpus/Report

Wikipedia Advising Report

Creatively, I think Generative AI (G-AI) creates more problems than solutions, but when it comes to proofreading and outlining, I've found it to be quite a useful tool. Based on what I've seen on Wikipedia in the last several weeks, and what I've learned in my Online Communities course, I've put together a few suggestions for how Wikipedia could use G-AI without severely altering the culture of the platform or the integrity of its articles: let users create a sandbox copy of any article at the press of a button, and give them the latitude to disclose whether they used G-AI tools in their edits. I also think the guidelines for G-AI use should be generated by the community and should keep evolving to meet the needs of users at large.

G-AI is very useful for translation and for outlining.[1] But when you encourage users to use G-AI as a tool for article production, you run the risk of putting information on Wikipedia that is not real, accurate, or neutral. Letting users experiment with these tools in a private, non-committal space that doesn't muck up the articles could be crucial to reducing the harm done by reckless use of G-AI, so I think users should be able to create sandboxes of entire articles. One thing I personally found a bit frustrating while editing on Wikipedia was not being able to save my progress, writing, or outlining without publishing it to the world. The key is the simplicity of the request itself: as per Building Successful Online Communities (BSOC),[2] users are more likely to comply with requests that are simple, so a prompt like "Wanna try out G-AI tools? Create a sandbox of this article to try." could be enough.
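To make the "sandbox an article at the press of a button" idea a little more concrete, here is a minimal sketch of what the button could do behind the scenes, using the public MediaWiki Action API. The article title and the local draft file are illustrative stand-ins only; an actual on-wiki sandbox page would require an authenticated edit, which this sketch deliberately leaves out.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def fetch_wikitext(title: str) -> str:
        """Fetch the current wikitext of a page via the MediaWiki Action API."""
        params = {
            "action": "query",
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "titles": title,
            "format": "json",
            "formatversion": 2,
        }
        page = requests.get(API, params=params).json()["query"]["pages"][0]
        return page["revisions"][0]["slots"]["main"]["content"]

    if __name__ == "__main__":
        draft = fetch_wikitext("Online community")  # hypothetical example article
        # Publishing this draft to User:Example/sandbox would need a logged-in
        # edit (action=edit with a CSRF token); here it just becomes a local file
        # the user can tinker with, G-AI tools and all, before committing anything.
        with open("sandbox_draft.wiki", "w", encoding="utf-8") as f:
            f.write(draft)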

Another G-AI concern is that it's difficult to tell whether someone used it. If an article has been edited using G-AI, it should carry a notice box, similar to the "Content Assessment Scale" textbox at the top of this Talk page, that says "This article has been edited using Generative AI tools," so readers know at a glance. As for whether a tool should automatically detect G-AI use or users should flag it themselves, I think trusting users to tick a box is the way to go. Automatic tools have significant drawbacks: for example, Meta's automatic tool would flag a large percentage of art posted on Instagram as "Made with AI" even if no AI was used. This article[1] goes into detail about how certain detection tools simply aren't reliable enough to make accurate decisions, so trying to deploy them at a scale as large as the entirety of Wikipedia could cause things to go awry pretty quickly.

I suggest instead adding a checkbox underneath "This is a minor edit" that says "This edit used Generative AI tools." Giving users the option to self-report, rather than relying on an automatic system that may falsely flag edits that didn't use G-AI, promotes an ethos of truth-telling and self-moderation. However, the checkbox carries a risk of its own: it may invite people to use G-AI in ways that harm articles, given its tendency to make new information up out of thin air. To account for that, rather than offering the checkbox to all users at once, start with a much smaller, more tenured population of Wikipedia editors who have specifically (to build on my previous point) used a G-AI sandbox, treating that as a task to complete before gaining access. Then, once enough people have tried it out, it can be opened up to more and more users.
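As a back-of-the-envelope illustration of that phased rollout, here is a small sketch of the eligibility check that could decide who sees the checkbox. The thresholds and field names are invented for the example, not proposed policy values.

    from dataclasses import dataclass

    @dataclass
    class Editor:
        edit_count: int
        account_age_days: int
        has_used_gai_sandbox: bool

    def can_see_gai_checkbox(editor: Editor,
                             min_edits: int = 500,
                             min_age_days: int = 365) -> bool:
        """Return True if the disclosure checkbox should be offered to this editor."""
        return (editor.edit_count >= min_edits
                and editor.account_age_days >= min_age_days
                and editor.has_used_gai_sandbox)

    # Example: a newer editor would not see the checkbox yet, even after
    # trying a G-AI sandbox.
    print(can_see_gai_checkbox(Editor(edit_count=40, account_age_days=90,
                                      has_used_gai_sandbox=True)))  # False

Loosening the thresholds over time is how "once enough people have tried it out" would translate into practice.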

This leads me to my final point: I think there should be general guidelines for G-AI use on Wikipedia,[1] but WikiProjects should also be given the ability to set more specific rules and norms around its use. For example, on Reddit, subreddits build their rules and norms around the communities themselves;[3] r/AmITheAsshole and r/aww do not have similar rules because they are not similar communities. WikiProjects vary just as widely, so a Medicine project may need to bar G-AI tools completely for fear of inaccurate information, whereas a World Foods project may encourage G-AI translators and outlines as a way to start and build on topics.
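One way to picture project-level norms layered on top of a general guideline is a simple fallback lookup, sketched below; the project names and rule text are made up to echo the examples above, not actual policy.

    GENERAL_GUIDELINE = "Disclose any G-AI assistance and verify every claim you add."

    PROJECT_RULES = {
        "WikiProject Medicine": "No G-AI-generated content; the accuracy risk is too high.",
        "WikiProject Food and drink": "G-AI translation and outlining welcome; cite sources.",
    }

    def gai_rule_for(project: str) -> str:
        """Fall back to the general guideline when a project has no local rule."""
        return PROJECT_RULES.get(project, GENERAL_GUIDELINE)

    print(gai_rule_for("WikiProject Medicine"))
    print(gai_rule_for("WikiProject Chemistry"))  # falls back to the general guideline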

Just as self-governance invites trust between users and Wikipedia itself, users should be committed to holding one another accountable. My very first edits on an article were reverted within minutes of my making them, because someone in an associated WikiProject noticed I was not doing things correctly. As frustrating as that was, it helped me learn the norms and rules of the site and equipped me to do better work with every subsequent edit (and reversion). In my limited time on Wikipedia, I have noticed above all else that Wikipedians hold each other accountable. In the same spirit, instead of punishing users for misuse of G-AI, whether intentional or unintentional, Wikipedians should lead by example and take the time to point others toward better options, like being truthful about whether they used G-AI on an article, or trying a G-AI sandbox first. Flexibility and teachability are key: after all, that's what Wikipedia's all about!

References

  1. ^ "Wikipedia:Using neural network language models on Wikipedia". Wikipedia. 2024-06-04. Retrieved 2024-11-11.
  2. ^ Kraut; Resnick; Kiesler; Burke; Chen; Kittur; Konstan; Ren; Riedl (2011). Building Successful Online Communities: Evidence-Based Social Design. Cambridge, Massachusetts; London, England: The MIT Press. pp. 30–45, 160–170. ISBN 978-0-262-01657-5.
  3. ^ Fiesler; Jiang; McCann; Frye; Brubaker (2018). "Reddit Rules! Characterizing an Ecosystem of Governance". www.aaai.org.