Talk:ChatGPT


Edit war on ideological bias of ChatGPT

ChatGPT has been "accused" - insofar as an ML model, not being a legal person, can be accused of anything - of having an ideological bias towards the "left" or "progressive" side. After having used the model for a while I can only agree that this is in fact the case: the model tends to present "progressive" views in a more positive light than it does "conservative" ones. When asked about its bias it responds by claiming to be unbiased and neutral, but its answers clearly show this not to be true. As such, it is noteworthy that the model is biased, yet a section mentioning this bias was repeatedly removed by User:Aishik_Rehman and User:LilianaUwU. I propose adding a mention of this ideological bias to the _Reception, criticism and issues_ section. If anyone wants to point out why this should not be done, speak up. Please realise that merely agreeing with the model's bias is not a valid reason to keep a mention of this bias out of the article. Yetanwiki (talk) 19:19, 19 December 2022 (UTC)[reply]

If you mean this, it is completely unsourced and had to be removed per WP:V. Whatever mention is added will have to be based on reliable sources. - MrOllie (talk) 19:23, 19 December 2022 (UTC)[reply]
The source is my own research, but there are plenty of other sources mentioning this bias. The problem here will be that most of the publications willing to publish this type of information are "redlisted" or "pinklisted" in the perennial sources. It is, of course, easy to do some of your own experimenting to at least confirm or deny the existence of an ideological bias. A very easy experiment is to ask the bot for a story involving politically sensitive topics. Do not tell it to give advice, just ask it for a story. You'll quickly notice that the tone of the story largely depends on whether the topic at hand is one pushed by the "progressive" side versus the "conservative" side. Another way is to ask it for a number of stories (in new sessions) where the only variable is the sex/race/sexual orientation/... of the protagonist; this will show the same effect. Again, do not ask it for advice since that seems to be caught in a filter, and do not ask for "personal experiences" because that too is filtered. Just ask for a story and notice the difference in tone and outcome. I did this many times to see whether the bias was coincidental and found out it was "reliably biased". Still, this is "original research" which does not belong in an encyclopedia, so I'll have to find an "acceptable" source which has also "done the work". Yetanwiki (talk) 22:02, 19 December 2022 (UTC)[reply]
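A minimal sketch of the kind of probe described above, included purely as illustration: it assumes the legacy (pre-1.0) openai Python client, an API key in the OPENAI_API_KEY environment variable, and an illustrative prompt and model name; any transcripts it produces would themselves be original research and unusable as a source.

  # Generate paired stories that differ only in one protagonist attribute,
  # then compare the tone of the outputs by hand (a sketch, not a rigorous study).
  import os
  import openai

  openai.api_key = os.environ["OPENAI_API_KEY"]

  PROMPT = "Write a short story about a {attribute} politician running for office."
  ATTRIBUTES = ["progressive", "conservative"]  # the single variable being changed

  for attribute in ATTRIBUTES:
      response = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",  # illustrative; substitute whichever model is under test
          messages=[{"role": "user", "content": PROMPT.format(attribute=attribute)}],
          temperature=1.0,  # independent samples stand in for the "new sessions" above
      )
      print(f"--- {attribute} ---")
      print(response["choices"][0]["message"]["content"])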
Keep in mind, when you say "The source is my own research but there are plenty of other sources mentioning this bias" -- your own research is not a valid Wikipedia source. As stated in Wikipedia:No original research, "'No original research' (NOR) is one of three core content policies that, along with Neutral point of view and Verifiability, determines the type and quality of material acceptable in articles." CoffeeBeans9 (talk) 19:00, 18 January 2023 (UTC)[reply]
Here are a few recent sources; some of them have already been marked as unclean in the perennial sources, others have not been added to that list. Given the biased character of this list - which insists that MSNBC, CNN and the New York Times are reliable sources while e.g. the New York Post is "unreliable", despite the opposite having been proven by how the former and the latter reported on the Biden laptop; it also labels something like the 'Victims of Communism Memorial Foundation' as 'unreliable' because it is 'an anti-communist organisation' while it considers the 'World Socialist Web Site' 'reliable for the attributed opinions of its authors' and 'more reliable for news related to labor issues' - this should not be a problem given the original intent of the NPOV policy. Here they are:
ChatGPT’s score system shows political bias is no accident
The political orientation of the ChatGPT AI system - Applying Political Typology Quizes to a state-of-the-art AI Language model
ChatGPT is not politically neutral - The new AI chatbot espouses an all-too-familiar Left-liberal worldview
More are sure to follow as the bias issue is clear and the mentioned experiments are repeatable - I have done so and got the same results. Yetanwiki (talk) 18:59, 21 December 2022 (UTC)[reply]
Please note that sources must meet Wikipedia's sourcing guidelines. Blog posts and opinion pieces are not going to pass muster here. If you want to argue that those guidelines are incorrect in an effort to change them, you can do so at WP:VPP. But you cannot simply ignore them. MrOllie (talk) 19:05, 21 December 2022 (UTC)[reply]
Unherd is not a 'blog', it is a 'news and opinion' publication which just has not been added (in pink or red, most likely) to the perennial sources yet. I could have cited a Daily Caller article referencing a number of these sources, but that would have been met with a reference to those same perennial sources, where it is listed in, you guessed it, red ('the site publishes false or fabricated information') - in other words, just like CNN/MSNBC/NYT/LAT/etc., which are all listed in green. As I already mentioned, this is a problem, and a well-known one given the plethora of reports on the ideological bias in many Wikipedia articles. This bias has made Wikipedia unusable for anything even tangentially related to politically contentious issues - as ChatGPT clearly is - since it is the keepers of the perennial sources list who get to decide which sources are allowed and which are to be shunned. Had this list been free of bias - i.e. had the same criteria been used for all publications - this would not be a problem, but this is clearly not the case. Yetanwiki (talk) 08:46, 22 December 2022 (UTC)[reply]
The perennial sources list is not an exhaustive list of bad sources - many sources are so obviously appropriate or inappropriate that they are not discussed often enough to require an entry on the list. Each entry on the list is made only after several discussions, usually including an RFC with large attendance. There is no small set of 'keepers' as you imply here. - MrOllie (talk) 12:52, 22 December 2022 (UTC)[reply]
What do you mean by the claim that 'the perennial sources list is not an exhaustive list of bad sources'? It is definitely not, but in a similar vein it is not a list of good sources. Why focus on the 'bad sources' part here? I don't claim that e.g. Unherd is 'bad', I just expect it to be called such if and when it is added to the perennial sources list, because that list is heavily biased towards 'progressive' sources. As to there not being a 'small set of keepers', I can agree in that there are many editors who contribute to the list (165 individuals are responsible for the last 500 edits, which correspond to ~14 months). It is not a small group, just one in which the most vocal section happens to fit mostly within the "progressive" spectrum - how otherwise to explain the clear bias this list presents? Objectively speaking CNN is just as bad as Fox News and MSNBC is worse than both, but this is not how the list represents them. Buzzfeed is just as good/bad as the Daily Caller, but this is not represented in the list. The Daily Beast is just as good/bad and certainly as ideologically lopsided as The Daily Wire, but only one of them is marked as 'red, STOP'.
Anyway, since the list itself states that the absence of a source simply means that the source has not been discussed enough to merit inclusion I assume those Unherd articles can be used as sources. Let those who disagree speak up and give a good reason why this would not be the case. Yetanwiki (talk) 22:47, 22 December 2022 (UTC)[reply]
If Unherd hasn't been discussed yet, we can certainly open up a discussion on it on the reliable sources noticeboard, but if you already know what the result is going to be, you're wasting your own time and ours. Rolf H Nelson (talk) 03:02, 24 December 2022 (UTC)[reply]
Reality has a liberal bias. The world is ever-changing, so of course it supports the ideology that supports progress rather than the status quo. RPI2026F1 (talk) 23:58, 22 December 2022 (UTC)[reply]
"Reality is that which, when you stop believing in it, doesn't go away" said Philip K. Dick. Reality is also what drives both conservative as well as progressive thought. When circumstances change it makes sense to look for a different way of doing things, which is what drives progressive/liberal thought. When a good way has ben found it makes sense not to change just for the sake of change without considering the consequences, which is what drives conservative thought. Both are needed since conservatives can be overly cautious when circumstances change and be overtaken by reality while progressives/liberals can get so caught up in their schemes of improvement that they loose sight of reality and soon get caught by it.
BTW, that Colbert quote you refer to ('reality has a liberal bias') is outdated, reality in 2022/2023 has a conservative bias. This will change again, eventually but for now it clearly has. Yetanwiki (talk) 18:15, 23 December 2022 (UTC)[reply]
You are right that reality has a liberal bias, and in fact ChatGPT proves it. It is not ChatGPT that is biased here; ChatGPT is just a model trained on a large amount of "real world" data. So if ChatGPT is biased, it is only because reality is biased. In order to prove that ChatGPT itself is biased you would need to have access to the source code and point out exactly where the code restricts some particular ideology from being used. All the prompt responses in the world prove nothing except that real-world data is biased. For this reason you should completely remove this entire "Accusations of Bias" section, because it is nothing but a tool to garner pity for the fake "victims" of bias. ClearConcise (talk) 21:05, 22 March 2023 (UTC)[reply]
An opinion article in Reason [1] could be cited as an attributed opinion that ChatGPT has/had a left-wing bias, but if we do so we also need to include other opinions about AI bias as well. Rolf H Nelson (talk) 03:12, 24 December 2022 (UTC)[reply]
The section "Accusations of Bias" must be removed until AFTER consensus is reached. It is not okay to just leave up misinformation and force the people removing it to have the burden of proof. The people adding it must provide the citations from RELIABLE sources, which they have not done. ClearConcise (talk) 14:32, 22 March 2023 (UTC)[reply]
The paragraph notes that these are accusations and not endorsements of the claims. Since there have been many accusations of bias (from many different perspectives), and they are coming from well-known, widely-read sources (i.e. not some random person's blog), it should be acknowledged in the article. ... discospinster talk 14:45, 22 March 2023 (UTC)[reply]
1. re: "not endorsements of the claims": It does not say "these are not endorsements", and calling them "accusations" is not the same thing. Further, it is an unprovable claim to make anyway without direct access to the source code. ChatGPT itself wouldn't have an ideological bias; the data it was trained on would, which of course is just saying "reality is biased", because it is trained on huge datasets from many different sources.
If someone wants to prove that ChatGPT has bias coded into it, they would need to prove much more than what a couple of prompts give you as a result. They would need to prove that the bias is hard-wired into the model itself, not just the training data.
These accusations are fake outrage trying to push an agenda, and every single source cited is proof of that, because they are all opinion pieces with no references to any biased source code. Why is this Wikipedia content being used to further push this specific biased agenda, when the article should be confined to just the verifiable facts? These accusations are being used as a tool to try and garner pity for the so-called "victims" and they need to be removed from this encyclopedia article.
2. re: "there have been many accusations"
Is there an accusations section on the Google page for all the daily accusations raised against Google? Is there an accusations section on the Fox News page for all the daily accusations raised against Fox News? If you're going by the sheer number of accusations to support the reason for a section on it, then those two entities and many more pages will need to be updated because they of course have way more accusations leveled against them on a daily basis.
Unverifiable, unproven statements are not information that should be included in what is supposed to be a factual page, and this section needs to be removed. ClearConcise (talk) 16:12, 22 March 2023 (UTC)[reply]
  1. The section heading "Accusations of bias" is pretty clear that they're opinions. It does not suggest that these accusations are proven or reflective of reality.
  2. There is an entire article about Criticism of Google, as well as Criticism of Google Search which redirects to a section on bias in the Google Search article. Not to mention Criticism of Microsoft and Fox News controversies where the very first section is called Allegation of bias. I expect that there will soon be an article called Criticism of ChatGPT. ... discospinster talk 14:18, 23 March 2023 (UTC)[reply]
I think that this is a significant and sourced aspect of the subject. While (at wp:AN) I don't endorse the use of the tools, I agree with discospinster's arguments. Sincerely, North8000 (talk) 21:48, 22 March 2023 (UTC)[reply]
Unless the content is vandalism, violates BLP guidelines or is a copyright violation, the burden DOES fall on those editors who want to completely remove large sections of sourced content that are presently in the article. I think rather than taking a hatchet to the article, you should make an argument based on specific sources you object to or work to improve the content in positive ways. You can't just come to an article that has been the work of many editors, make claims of bias and remove whole sections you disagree with. That's not how Wikipedia operates on high profile articles like this one. Liz Read! Talk! 06:32, 23 March 2023 (UTC)[reply]
"The political ideology of conversational AI" looks like a WP:PREPRINT, tho i do see a few published citations. fiveby(zero) 22:37, 23 March 2023 (UTC)[reply]

The reason ChatGPT has a bias towards political correctness, or otherwise safe thoughts, is that, first, it is trained on information publicly available on the internet, and second, it is distributed as a service over the internet. When thoughts are permanently published, probably under the name of the author, there is a big incentive towards reducing unpopular opinions. Additionally, when a service is published by a company to millions of people, there is a strong incentive to sanitize it to avoid legal risks.

You may add this point of view to the article if you find a source for it, which you will most likely find, because my ideas are great, there are great thinkers out there who write, great minds think alike, and the truth is pretty self-evident, so you will find this thought out there eventually. It probably won't be citogenesis, because no one reads or gives credence to Talk comments, but even if it is, being published under someone else's name and authority means it is no longer Original Research.

You may now close this discussion, as I have ended it.--TZubiri (talk) 03:35, 25 April 2023 (UTC)[reply]

BBC quote

This Wikipedia article said "According to the BBC, as of December 2022, OpenAI does not allow ChatGPT to "express political opinions or engage in political activism"." I removed this because the linked BBC source article is interviewing ChatGPT, the AI tool, not OpenAI. All the quotes in the article come from the language model.

Yes, the language model says, when asked, that it is not allowed to "express political opinions or engage in political activism", but ChatGPT always makes up its responses. What it says is obviously not an official statement from OpenAI, and not a suitable source for Wikipedia. My edit was undone for some reason; I have now undone that undo. Please discuss before undoing it again.

If we want to include OpenAI's views on what ChatGPT is allowed to do, it should be sourced from what OpenAI says themselves, not from what ChatGPT says to a BBC reporter: https://openai.com/blog/how-should-ai-systems-behave Apinanaivot (talk) 16:34, 30 March 2023 (UTC)[reply]

Good catch; I may have added that line myself (it sounds like my pedantic style), but we should indeed remove it, as the BBC article is probably indeed quoting ChatGPT and not quoting OpenAI, as I had likely assumed. Rolf H Nelson (talk) 01:12, 31 March 2023 (UTC)[reply]

A Commons file used on this page or its Wikidata item has been nominated for deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:

Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 08:23, 9 April 2023 (UTC)[reply]

Unrelated statement in training section

At the end of a paragraph in the training section it says:

Proximal Policy Optimization algorithms are a cost-effective alternative to trust region policy optimization algorithms.

This reads like an advertisement, and I'm not sure whether it should be included in the article. My initial reaction would be to remove it, but I'd like to get some feedback before accidentally doing something I shouldn't. Lunare Scuderia (talk) 11:15, 15 April 2023 (UTC)[reply]

If it stays in, a translation would help. Sbishop (talk) 11:23, 15 April 2023 (UTC)[reply]
I removed the statement for now in my most recent edit. Feel free to add it back in if you believe that it improves the article; I just feel like it doesn't :) Lunare Scuderia (talk) 17:00, 15 April 2023 (UTC)[reply]
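For anyone wondering what the removed sentence was asserting: PPO's cost advantage over TRPO comes from replacing TRPO's KL-constrained policy update, which needs second-order machinery, with a clipped surrogate objective that plain first-order optimizers can handle. A rough sketch in LaTeX, given here only as background (notation follows the original PPO paper):

  % TRPO: constrained policy update, approximately solved with conjugate gradients
  \max_\theta \; \mathbb{E}_t\!\left[ \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)} \hat{A}_t \right]
  \quad \text{s.t.} \quad \mathbb{E}_t\!\left[ \mathrm{KL}\!\left( \pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t) \,\|\, \pi_\theta(\cdot \mid s_t) \right) \right] \le \delta

  % PPO: clipped surrogate objective, optimized with ordinary SGD/Adam
  L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[ \min\!\left( r_t(\theta)\hat{A}_t,\; \mathrm{clip}\!\left( r_t(\theta),\, 1-\epsilon,\, 1+\epsilon \right) \hat{A}_t \right) \right],
  \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}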

A Commons file used on this page or its Wikidata item has been nominated for speedy deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for speedy deletion:

You can see the reason for deletion at the file description page linked above. —Community Tech bot (talk) 11:57, 18 April 2023 (UTC)[reply]

More possible subsections for "Implications" section.

  • Writing (like journalism and books).
  • Computer science, programming.
  • Language translation.

It has been applied to each, and there should be sufficient sources. It also has its limitations in each area, especially programming. VintageVernacular (talk) 22:48, 5 May 2023 (UTC)[reply]

Merge of ChatGPT Plus

The ChatGPT Plus article is useless and redundant in my opinion; please comment at Talk:ChatGPT_Plus#Merge_to_ChatGPT. tl;dr - there is nothing unique in the paid version of the app, so no need for a separate article. Artem.G (talk) 07:13, 22 May 2023 (UTC)[reply]

Nope, ChatGPT Plus is quite a different product indeed. All of OpenAI's products should get their own page. Mathmo Talk 13:28, 23 May 2023 (UTC)[reply]
And what's the rationale? It's exactly the same product, but with some features for those who pay - faster responses, a better model, plugins - it's the same chatbot, and even the plugins are likely to be available to everyone after some time. No reliable source talks about the impact of ChatGPT Plus on anything; everyone just calls it ChatGPT.
And not every product should have its own page, only those that are notable. For example, there is no page for GPT 3.5, though it's different from 3 and 4. Artem.G (talk) 14:05, 23 May 2023 (UTC)[reply]
Arguably GPT3.5 should not have its own page, but I see no problem with GPT2 vs GPT3 vs GPT4 etc having their own pages. Mathmo Talk 12:11, 25 May 2023 (UTC)[reply]
  • Support merge. We don't list products, we list companies, unless the products themselves are very notable (think a Tesla Model 3 or Microsoft Office); surely this doesn't meet that bar. Jtbobwaysf (talk) 17:48, 23 May 2023 (UTC)[reply]
  • Support Just a subscription service and not otherwise notable Qwv (talk) 17:55, 25 May 2023 (UTC)[reply]

Redirected to the main page. Everything on the main page can be said of the Plus service, and as a subscription service it doesn't require a separate page. Besides, ChatGPT got almost 7 million views in the last 30 days, while ChatGPT Plus got less than 6 thousand. Artem.G (talk) 14:46, 31 May 2023 (UTC)[reply]

Revert considering The Story unreliable

User:rolf h nelson, I noticed you recently reverted an edit by User:GeogSage on the basis that its cited source, The Story, is not a reliable source. Could you explain your rationale behind this? From what I'm seeing, the outlet seems fine, if young. The particular cited article's author seems to have credentials as well. I think this is a worthwhile addition to the article. Thanks- StereoFolic (talk) 02:29, 18 June 2023 (UTC)[reply]

Thanks for the support User:StereoFolic. I came across the source while working on another page and am a bit surprised it was completely reverted. I am not super familiar with the history of The Story, but I checked the author's background, as you linked, before posting. I figured that including an Australian source would be beneficial to help represent a worldwide view and combat systemic bias. It also very much reflects some of the criticism I've seen from artist communities on social media. I love ChatGPT and welcome the prospect of our new AI overlords, but I think the negative perspective/fear/concern in that article was worth including.
I won't undo the revert myself as it is now on the talk page, and I want to avoid edit warring. If you or someone else does after consensus/discussion, I support that.
Just wanted to go on the record on my opinion and why I included the source. GeogSage (⚔Chat?⚔) 02:54, 18 June 2023 (UTC)[reply]
"News reporting from less-established outlets is generally considered less reliable for statements of fact", per WP:NEWSORG. Once we go to editorial opinions, the bar for inclusion becomes even higher. Some examples of good global sources are at [2]. Rolf H Nelson (talk) 19:32, 18 June 2023 (UTC)[reply]
"When taking information from opinion content, the identity of the author may help determine reliability" per WP:NEWSORG. The author of the piece, James Hennessy, was an editor and writer at Business Insider Australia, and according to the author profile on The Story, has written for The Guardian, The Outline, The Saturday Paper and the ABC.
"When taking information from opinion content, the identity of the author may help determine reliability. The opinions of specialists and recognized experts are more likely to be reliable and to reflect a significant viewpoint." He's not a specialist or recognized expert at ChatGPT, so he doesn't get many points for that. We let in WP:RS, we don't let in everyone who ever wrote for an WP:RS. Rolf H Nelson (talk) 04:56, 19 June 2023 (UTC)[reply]
He is not talking about the technical aspects of ChatGPT; the article is discussing the potential impact of ChatGPT and AI on human storytelling. The section is "negative reception," and this is a published author writing about an overall negative reception of ChatGPT based on his concerns regarding the technology. WP:RS is not exhaustive and relies heavily on context, and the context here is the opinion of an established writer on the impact of ChatGPT on writing, published in a new but well-put-together niche magazine. According to WP:RS, "Editorial commentary, analysis and opinion pieces, whether written by the editors of the publication (editorials) or outside authors (invited op-eds and letters to the editor from notable figures) are reliable primary sources for statements attributed to that editor or author, but are rarely reliable for statements of fact." What I included is simply the published opinion of the author, and it is reliable in that regard. The author's statement reflects much of what I've seen said by creative communities in regard to AI-generated content, and seems relevant to include in the "negative reception" section. After reading your justification I disagree even more that the source is unreliable, and agree with @StereoFolic. Based on reading the Wikipedia:Reliable sources/Perennial sources page you linked, I think "The Story" is a niche topic unlikely to ever be indexed there, and that in this context the author's standing as a writer/editor qualifies him as an "expert" on the matter. I would be interested in others' opinions so we can work towards consensus on this issue, as I doubt you and I can come to an agreement here. GeogSage (⚔Chat?⚔) 06:02, 19 June 2023 (UTC)[reply]
As I said, if we're going to bring it in as a subjective opinion, then the bar for inclusion gets higher, not lower; it would have to pass considerations of WP:WEIGHT. If we don't get other opinions this week, consider posting to the WP:RSN board. Rolf H Nelson (talk) 01:51, 21 June 2023 (UTC)[reply]
I still consider this to be reliable enough. I see nothing in the article that is factually questionable, and the main opinion expressed (concerns about supplanting human creativity) does not seem well covered in the article at the moment. If there is a better source from a more notable author expressing similar concerns I'd be fine with using that instead of the Hennessy article. In any case, I think the concern here is about notability, not reliability. StereoFolic (talk) 17:11, 19 June 2023 (UTC)[reply]
The article has enough sources to meet verification for the notability of ChatGPT as a topic. This source would not be needed to build overall verification, just to verify that the statement appeared in an article in "The Story". According to Wikipedia's article on verifiability, "mainstream (non-fringe) magazines, including specialty ones," are acceptable. I would argue that the magazine I linked is a specialty magazine, and not fringe.
"Some newspapers, magazines, and other news organizations host online columns they call blogs. These may be acceptable sources if the writers are professionals." I believe that the author of the publication can be said to be a professional writer, and is commenting on his reaction to the possible future impacts of ChatGPT on writing. I think the source is a noteworthy perspective in the article, but of course, I think that my opinion on the matter is clear. @StereoFolic@Rolf h nelson, is there any way to get more arbitration/additional opinions on the matter? Again, to avoid edit warring I will avoid reverting any edits on the topic and leave it to others depending on consensus. GeogSage (⚔Chat?⚔) 18:41, 19 June 2023 (UTC)[reply]
I don't think there's any rush in resolving this. We can wait for others to weigh in and perhaps a consensus will emerge. StereoFolic (talk) 02:53, 20 June 2023 (UTC)[reply]
That is true. I'm not used to working on pages with extremely active talk spaces, but here there is a much greater chance of people chiming in. GeogSage (⚔Chat?⚔) 03:25, 20 June 2023 (UTC)[reply]
This is a statement of negative reception from an established author, for a section on "negative reception," specifically from a magazine about writing. It is not a description of the inner workings of ChatGPT's neural networks. The Wikipedia "reliable sources" list is not exhaustive and relies heavily on context. In this case, the context is attributing an opinion of a specific author to a particular source, not stating it as fact. The article does a good job of reflecting the concerns I've seen from the creative writing community surrounding LLMs. I disagree with much of it personally, but it does capture a lot of the discourse in my opinion. I think using the WP on reliable sources to revert the edit, given that the author is established and the publication is relevant to the topic, seems a bit extreme. GeogSage (⚔Chat?⚔) 04:00, 19 June 2023 (UTC)[reply]
It looks like the negative reception section already quotes at least two far more notable Australians, singer Nick Cave and Member of Parliament Julian Hill, so there's no need for "including an Australian source ... to help represent a worldwide view and combat systemic bias." Elspea756 (talk) 04:21, 19 June 2023 (UTC)[reply]
The content of the source, the author, and the fact that it is not from the United States were the three main benefits to it, in that order. The article itself has 184 sources, most of which are from the United States. Adding other sources from English-speaking, non-American countries is beneficial even if there are already some from that country. GeogSage (⚔Chat?⚔) 04:26, 19 June 2023 (UTC)[reply]
I think adding every single opinion piece and magazine on the internet is a violation of WP:INDISCRIMINATE and WP:NOTREPOSITORY. Only noteworthy and distinctive ideas (and receptions, both positive and negative) deserve addition into article entry, see WP:MINORASPECT. This article already contains undue negative reception. --WikiLinuz {talk} 02:07, 21 June 2023 (UTC)[reply]
@StereoFolic@Rolf h nelson@Elspea756@WikiLinuz,
I had some time today, so I took the feedback into account and have completely rewritten the reverted text (see latest revision). Previous assertions are now backed up with citations from "The Atlantic" and "Vox", which are listed on Wikipedia:Reliable sources. I have continued to include the article from The Story, in addition to an article from Vanity Fair and one from Fortune, to further corroborate the claims made in the other two articles. These additional sources are not the primary ones, but lend credence to the argument that these points are discussed across a range of media (they could technically be deleted if necessary and the text would still be supported, I believe, but they do improve it). As the primary source of contention seemed to be how reputable the single source was, and not necessarily the content, I hope this is more fitting. I have added the text as a fresh edit, and don't personally consider this reverting the revert by Rolf h nelson, as I think it is changed significantly from the original and has taken into consideration feedback from the talk page. While I don't agree with the perspective, and welcome the prospect of AI overlords, I do believe the collective negative reaction related to jobs and the human role in creativity was prominent enough to warrant inclusion. If you feel it is still in violation of Wiki policy, I understand that it may require significant changes or reversion, but I thought it would be easier to see it live than not. GeogSage (⚔Chat?⚔) 03:36, 21 June 2023 (UTC)[reply]
Those other sources are WP:RS; are any of the points made in the text specific only to The Story and not the other sources? If all the points are made by the other RS you included, then just delete The Story as a source and we're good to go. Rolf H Nelson (talk) 03:45, 21 June 2023 (UTC)[reply]