
Wikipedia talk:Oversight: Difference between revisions



I've skimmed this discussion and I am astounded. Suppressing threats of self-harm has been done for years, and policy or not, is the ''right thing to do''. --'''[[User:Rschen7754|Rs]][[User talk:Rschen7754|chen]][[Special:Contributions/Rschen7754|7754]]''' 00:50, 3 September 2021 (UTC)

Why '''the fuck''' is this still being discussed on-wiki? I thought the list was adamant that we don't need to get the community involved and then proceeded to gaslight anyone who dared suggest otherwise? Why would we even ''consider'' informing the community who trust us to correctly apply these tools? In ''what possible world'' would we then update the only publicly auditable part of the oversight process? ~[[User:TheresNoTime|TNT]] (she/they • [[User talk:TheresNoTime|talk]]) 00:56, 3 September 2021 (UTC)


Suppressing workplace information

Hello. I was recently reverted by GeneralNotability for changing the scope of suppression from "workplaces" to "work addresses". I should point out that "workplaces" can be construed to include "employer names", but that information is not necessarily eligible for suppression. I recently submitted an Oversight request where a person's employer was mentioned in an article, but the request was denied. It appears that WP:OSPOL is meant to address information that can be used to threaten the safety or security of a person, and an employer's name doesn't necessarily rise to that level of protection. Edge3 (talk) 17:24, 21 November 2020 (UTC)[reply]

An editor's employer would be suppressed. In an article, that information may be suppressed depending on the circumstances. The policy is in my opinion fine as is. TonyBallioni (talk) 17:50, 21 November 2020 (UTC)[reply]
The policy doesn't make a distinction between the private information of an editor, versus that of the subject of an article. It seems to me that the policy is vague. Also I should point out that the policy uses the term "non-public", yet doesn't define it. Biographical information could be posted on someone's social media account, or on public records such as those held by a government agency, yet still be considered "non-public" because it wasn't meant for widespread consumption. The policy doesn't address this as currently written. Edge3 (talk) 18:21, 21 November 2020 (UTC)[reply]
A lot of this is discretionary. The oversight policy tends to be shades of grey because of the nature of the content involved, and individual oversighters have a lot of discretion in how to implement it. It very much depends on the specific circumstances, and the vagueness of the oversight policy in this regard is a feature. TonyBallioni (talk) 18:33, 21 November 2020 (UTC)[reply]

Non-public personal information about deceased people

  1. Is it allowed to suppress the names of victims who died in an event (such as child murder cases)?
  2. Is it allowed to suppress the names of people who are presumed dead per WP:BDP?

--GZWDer (talk) 22:26, 9 December 2020 (UTC)[reply]

Remove criterion 5 from the policy list

The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
This isn't going to go anywhere. Anarchyte (talkwork) 05:27, 21 December 2020 (UTC)[reply]

I cannot see a single reason, now that admins have the needed tools, that there is a need for oversight of mere vandalism. Revdel is fine for all edits that are merely vandalism and not libelous, which is covered by criterion 2. RD3 exists for a reason. 4thfile4thrank (talk) 02:51, 10 December 2020 (UTC)[reply]

  • I suppose you can't see a single reason because...well...they're suppressed. This criterion is rare but is used from time to time to deal with what tends to just get lumped in as vandalism (but could include serious BLP violations, and a pile of other things). One of its uses is to deal with edit filters, where there is either "visible" or "suppressed" and nothing in between; although it doesn't seem to be an issue right now, it has been a problem in the past with other extensions. Some of my colleagues may identify other examples. Risker (talk) 03:04, 10 December 2020 (UTC)[reply]
  • Deferring support or oppose until I hear more from oversighters about whether THEY think this is needed. Thank you Risker for providing your input. davidwr/(talk)/(contribs) 03:40, 10 December 2020 (UTC)[reply]
    I do a suppression that falls under OSPOL 5 probably once or twice a month. I know that's not a huge amount, but you wanted my thought so here it is. Primefac (talk) 10:47, 10 December 2020 (UTC)[reply]
    @Primefac: What is an example of content that would solely fall under OSOPOL 5? Obviously you can't name any specific cases but I would like to know when it is needed. 4thfile4thrank (talk) 13:39, 10 December 2020 (UTC)[reply]
    The one that comes to mind first is a vandal username that isn't necessarily a direct attack on someone (therefore failing OSPOL 2 & 4) but needs suppression. Primefac (talk) 13:58, 10 December 2020 (UTC)[reply]
  • Oppose per Risker. We also don’t want a strict Oversight policy where people are worried about what to suppress and what not to suppress. Think about the nature of the tool. These types of things are always discussed. There are a lot of reasons why limiting the oversight policy would be a very bad idea. TonyBallioni (talk) 04:27, 10 December 2020 (UTC)[reply]
  • Oppose We know that people who can oversight an edit are smart enough to not oversight poop. If they do it, it's because there is a good reason. Johnuniq (talk) 04:31, 10 December 2020 (UTC)[reply]
  • It's probably worth noting here that this criterion is hardly ever used. "Vandalism" is not one of the options in the dropdown for the suppression tool, and nearly all suppressions fall into either the personal information or the libel/defamation categories instead. Nevertheless, there are occasional edits outside of those two categories that are so vile and revolting that the best thing for the project is to make them invisible even to administrators. I wouldn't want this criterion to be used any more frequently than it already is (WP:RD3 is sufficiently appropriate in all but the most severe cases), but at the same time I don't see what could possibly be gained by removing this option from policy altogether. – bradv🍁 15:01, 10 December 2020 (UTC)[reply]
  • Oppose change - it sounds like every use of this criterion would be a good use of WP:IAR if the criterion did not exist ("there are occasional edits outside of those two categories [where] the best thing for the project is to make them invisible even to administrators." User:Bradv, 15:01, 10 December 2020 (UTC)), and it is used often enough that relying on WP:IAR over a dozen times a year for basically the same issue puts more pressure than necessary on WP:IAR, which by design should be used for cases so exceptional that nobody has seen a need to create a rule or guideline for them before. davidwr/(talk)/(contribs) 21:31, 10 December 2020 (UTC)[reply]
  • Oppose; it might not be the most frequently used reason for requesting OS, but it's still used steadily and consistently. 4thfile4thrank, you seem to be making a lot of weird suggestions and requests that have no chance of being accepted recently; this is just a suggestion not an order, but you might want to consider dialling it back a little. We're more than 21 years old; while we're certainly not perfect, a lot of things that might not seem obvious are nonetheless there for a reason and if we're not doing something, we've likely discussed it at length and come up with good reasons not to do it. ‑ Iridescent 16:11, 11 December 2020 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

The current Oversight logo uses the Wikipedia puzzle-globe image dating all the way back to 2003. That image, however, is scaled quite oddly, has questionably-scaled pieces in the small hole created by the missing pieces, and has a blot of gray pixels at the top, all reflections of the problems of the old logo. So I took a 3D version of the puzzle globe and deleted the same pieces as in the current Oversight logo. After rendering, it looks significantly better than the old version, although not exactly like the Wikipedia logo (since the Wikipedia logo is an SVG file, which doesn't have the backside pieces available). It's been quite a while since the current logo was made (2009), so can it (finally) be replaced with this newer and cleaner version?

Feel free to suggest changes to the proposed logo. Your thoughts on this would be very much appreciated. Chlod (say hi!) 18:19, 13 April 2021 (UTC)[reply]

Maybe it could be a bit brighter to be more similar to the Wikipedia logo? — Berrely • TalkContribs 18:22, 13 April 2021 (UTC)[reply]
  • Support. Besides the obviously needed update, on the old one the empty section on the bottom looks convex instead of concave to me. The new one doesn't. BTW, those three puzzle pieces that are removed: what languages do they represent? --Guy Macon (talk) 21:36, 13 April 2021 (UTC)[reply]
    @Guy Macon: According to Wikipedia:Wikipedia logos, it's Cyrillic i (И), Hebrew vav (ו), and Kannada va + i (ವಿ). Chlod (say hi!) 21:42, 13 April 2021 (UTC)[reply]
    Darn. I was hoping that it wouldn't be Arabic, Hebrew, or any other language where somebody could accuse us of making a political statement. Oh well, it could be worse; imagine the complaints if Taiwan had a different language from mainland China and we "deleted" it. --Guy Macon (talk) 22:01, 13 April 2021 (UTC)[reply]
  • I quite liked the shadowy visual effect in the missing pieces in the current logo, which seems to be lost in the general lightening of the logo Nosebagbear (talk) 10:56, 14 April 2021 (UTC)[reply]
      • Too dark vs. too light, IMO. Maybe somewhere between? Could we please see a row of candidates from light to dark and see what the consensus is? --Guy Macon (talk) 14:24, 14 April 2021 (UTC)[reply]
      @Guy Macon: This is the midway point between Nosebagbear and Berrely's comments: Visible shadow and pieces on the inside of the globe, with a generally bright exterior. How does that look? Chlod (say hi!) 15:55, 14 April 2021 (UTC)[reply]
      • Not a big fan of those shadows. I was hoping for the "Proposed Oversight logo" with the addition of some subtle blurry shadows in the bottom part only as seen on the "Current Oversight logo". Definitely keep the edge you added to the rear piece right above the Omega. Way better than the abrupt gray to white edge the previous versions had. Right now it looks better than the "Wikipedia logo", so please make a version with all the puzzle pieces so I can propose replacing the current version. --Guy Macon (talk) 16:10, 14 April 2021 (UTC)[reply]
        Also not a big fan of the shadows. I might go ahead and make the spot of light inside of the globe darker so that it doesn't stand out as much. I'm having trouble figuring out what you meant by the "subtle blurry shadows" – would that be inside or outside the globe? Also, the original Wikipedia logo is an SVG file, unlike the Oversight logo, which is a PNG file. The above Wikipedia logo is used on all wikis (and also trademarked by the WMF) because that version can easily scale to multiple resolutions, whereas a rendered PNG file of a 3D object cannot. Changing it would require a proposal to the entire Wikimedia community, and I don't think this 3D version is superior to the SVG file. Chlod (say hi!) 16:24, 14 April 2021 (UTC)[reply]
  • I don't care one way or the other, but my impression is that we aren't the only project using this, and so the discussion should probably be at Meta as well, or at least outreach to other projects that use this image should be made. There's nothing more annoying than sudden change, especially of things that routinely are placed on user pages. Risker (talk) 20:25, 14 April 2021 (UTC)[reply]
  • I have already done this back in October 2020. Maybe the lighting in my version looks more accurate to the current Wikipedia logo than Chlod's? -- Ljcool2006 (talk) 20:52, 14 April 2021 (UTC)[reply]
    • IMO that's almost perfect.
      One quibble; above the omega we see the back side of a puzzle piece. The edge of that piece should get more light. It almost looks like it has zero thickness until you notice the low-contrast edge. --Guy Macon (talk) 23:20, 14 April 2021 (UTC)[reply]
  • I'm a bit late to the discussion. The thing I like most about the current design is the contrast between the dark part where the tiles are removed and the lighter parts. This image is most often used at a very small size, e.g. in topicons or userboxes, so it's important to make sure that the icon is clearly identifiable at topicon sizes (e.g. 20px). I don't think any of the proposed designs are [thumbnails not shown]. Current design for reference: [image not shown]. Mz7 (talk) 17:06, 20 May 2021 (UTC)[reply]
    Perhaps the part where the tiles are removed could have its brightness bumped down with a bitmap editor? — Berrely • TalkContribs 19:54, 13 June 2021 (UTC)[reply]
    Could also just add a semi-transparent black sphere within the globe to obscure some of the light. Chlod (say hi!) 22:41, 13 June 2021 (UTC)[reply]

Is anyone who works on images still reading this? I have been waiting a month for a response to my comment about the top edge of the back side of a puzzle piece above the omega. --Guy Macon (talk) 12:21, 14 June 2021 (UTC)[reply]

Protected edit request on 26 May 2021

Re-add the IRC option, but with the Libera chat template ({{libera.Chat|wikipedia-en-revdel}}) instead of Freenode. Libera is working well. aeschylus (talk) 17:33, 26 May 2021 (UTC)[reply]

 Done Primefac (talk) 17:37, 26 May 2021 (UTC)[reply]

OSPOL #1 updated

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.



@TheresNoTime: I do not disagree in principle with your change to OSPOL #1 to include "most threats of self-harm". However, I am surprised to learn that "currently accepted practice" amongst the Oversighters is inconsistent with the Oversight policy, and that the Oversight team feels that the best way to fix that is to quietly update the policy to expand their privileges. This isn't how it should work, and that is especially true for the functionary tools, where the community has virtually no way to ensure that the policy is being followed. Are there any other areas you are aware of in which it is "currently accepted practice" to use the Oversight tool in violation of the consensus version of WP:OSPOL?
Additionally, as a drafting note, OSPOL #1 states "non-public personal information" and then provides a list of examples which are pretty clearly personally identifiable information. Including "threats of self-harm" in this category is a fairly substantial re-drafting of OSPOL #1 from a tool to suppress non-public PII to a tool to suppress any information about a person that the Oversight team does not believe is public. This should be a very clear and narrow policy. Is it your intention to allow the use of oversight to suppress non-public PII plus threats of self-harm (in which case they are unrelated, and the self-harm thing really should be its own, new, criterion, not stuffed under the PII one), or do you have a more expansive definition of "personal information" that you would like to share? ST47 (talk) 04:46, 1 September 2021 (UTC)[reply]

@ST47: As a functionary, you would have received my email to functionaries-en on Monday, August 30, 2021, as well as been privy to the resultant discussion. I am certain that thread will provide you with adequate answers to the above queries. I understand that this has clearly frustrated you, but I really don't appreciate the accusatory tone of your message. ~TNT (she/they • talk) 05:35, 1 September 2021 (UTC)[reply]
The same mailing list where you were advised not to respond to this thread? (As if ignoring questions about a unilateral change in policy will bring about any result other than a revert!) I would prefer that changes to policy be discussed in a public forum. ST47 (talk) 08:03, 1 September 2021 (UTC)[reply]
@ST47: Yes - I have no issue in replying and defending my bold edit, and I hope by doing so you appreciate that I'm not trying to have a "behind closed doors" conversation. That being said, there's very little I can add to this conversation which I haven't already covered in the edit itself - I "merely" (for want of a much better word, excuse me) updated policy to match the consensus of the people it governs. As I am certain you understand, this subject is very emotive and feelings run high - no one wants to have to deal with the very real possibility that you are reading and hiding an editor's darkest moments. ~TNT (she/they • talk) 08:13, 1 September 2021 (UTC)[reply]
ST47's characterization of the thread is an accurate one: there was absolutely a suggestion, made by someone above the normal functionary level, not to respond here, along with a suggestion that those raising the issue were trying to make a stink about it. Disappointing. SQLQuery Me! 11:47, 1 September 2021 (UTC)[reply]
I'd like to express my support for TNT's actions here. As several oversighters I've argued with (plus current ArbCom, whom I've emailed complaining about this) are well aware, I am unhappy with the current state of affairs in which off-wiki OS consensus apparently includes suppressing threats of harm. I also agree with ST47's comment that the community has virtually no way to ensure that the policy is being followed. However, I have no problem with updating OSPOL to include threats of self-harm if that will bring policy into line with practice, my complaint is just that there are "accepted uses" of OS that are not documented in OSPOL and that the community is not aware of. GeneralNotability (talk) 13:04, 1 September 2021 (UTC)[reply]
  • For the record, this has been a use of the OS policy for years. I can think of cases from before I passed RfA where it was used that way, and based on what other Oversighters have said, it has been used in this way for over a decade. I also don't disagree with TNT's change, but I think the best solution would have been explaining to anyone who complained that their interpretation of the policy was wrong rather than adding to the policy.
    The policy lists use cases as principles, and then lists examples of some things that fall under each bullet. Probably in the top 2 reasons for suppression is a minor self-disclosing information about themselves; that's nowhere to be found in the text of the policy, but it is so common it has its own drop-down in the revdel tool (as anyone who is an admin can tell you). That's a logical extension of the private information criterion, as a child doesn't have the capacity to understand the risks, and we don't need to update the policy to reflect this. We also will on occasion suppress self-revealed information from people early in their wiki-career on the principle that they don't understand how wikis work and that self-revelation is forever.
    Both of these flow from the same principle, but they're not directly mentioned: I do not consider either to be a violation of the policy, and I don't think the policy needs to be updated to include them either. On a similar front, someone's mental health status at one of the darkest points of their life is obviously private information, and if they're in a state of mental stress to that point, they don't have the capacity to publish it at that moment.
    If we're going to make a change, my preference would be to make it clear that the list of private information is not all inclusive, and I think that could be done easily if someone wants to make the edit. TonyBallioni (talk) 13:50, 1 September 2021 (UTC)[reply]
    I'm actually a little surprised that wording like this was never used. Primefac (talk) 14:03, 1 September 2021 (UTC)[reply]
    Yes, I think that's a good change, and is my existing understanding of the policy. TonyBallioni (talk) 14:09, 1 September 2021 (UTC)[reply]
  • In general, our precedent has indeed been to consider threats of self-harm "non-public personal information". Indeed, this is someone's health status we are talking about—not to mention a glimpse of one of the darkest moments of someone's life. I can think of few pieces of information more personal and more non-public than that. Mz7 (talk) 17:41, 1 September 2021 (UTC)[reply]
    @Primefac, TonyBallioni, and Mz7: My concern is that OSPOL #1 was "personal information" with a list of examples of PII, and the addition of this example - even though it is completely reasonable to seek to suppress threats of self-harm - forces a much broader reading of non-public personal information than could previously be supported by the list of examples - one which could be read to include, for example, "my favorite color is blue", or "I am an engineer", or "I prefer to code in Python" (or perhaps "{some BLP} is a Christian"...). It would be better to specifically call out threats of self-harm as an alternative in the definition of OSPOL #1, or else to create a new OSPOL #6, to avoid creating this situation where OSPOL #1 is so broad as to be a blank check. What about wording OSPOL #1 as Removal of non-public personal information or mental health information. Suppression is a tool of first resort in removing this type of information, including (but not limited to):?
    And to respond to @TonyBallioni: about the common case of self-disclosures by a minor - well, if you're suppressing a date of birth, or a name, or even an age and location, then that is part of policy as PII. The fact that Oversighters use their discretion to suppress information about minors proactively is not inconsistent with this policy. I don't see this as an argument that this policy is "meant to be broken", I think it's already covered. (However, for better consistency, it might be helpful to decide on an age cutoff for this sort of proactive suppression, and add it to the examples? Identities of pseudonymous or anonymous individuals who have not made their identity public or who are under the age of 16...) ST47 (talk) 21:45, 1 September 2021 (UTC)[reply]
    I don't think the policy is meant to be broken — I agree that it's not inconsistent with the policy to suppress information about minors or even adults who don't understand how wikis work and reveal too much on their first day with an account — but I also don't think that suppressing non-public information about someone such as health information is inconsistent with the policy as it is written now.
    I'd also oppose your suggested wording because I think it would narrow the policy from one that is fairly expansive and gives discretion with how to deal with issues surrounding privacy that would legitimately fall under suppression as non-public personal information, but might not strictly be PII or mental health related (example: someone's HIV status.) I think Primefac's change solves the problem of people interpreting the policy differently than it has historically been interpreted by the Oversight team: the list is not an exclusive list, but rather some of the more common examples. There can be circumstances which arise (such as with mental health issues) where we would suppress even if not listed, and we have done that for years. I don't think there needs to be any change to the policy, like I've said on-list and here, I don't think TNT's change was necessary, and while I really do respect your and GN's position on this and think I have good relationships with both of you, I do think your interpretation of the policy is not consistent with how it has been interpreted since I've been active on-wiki as an editor and as an oversighter.
    To be consistent with that position, I'm fine with keeping TNT's change to make it clear we do suppress those, but I would prefer no additional changes other than the ones that have already been made, because I don't think the policy is broken. TonyBallioni (talk) 22:04, 1 September 2021 (UTC)[reply]
    I think this is where we disagree: I simply don't see it as broadening the definition of "non-public personal information" into a blank check—I see it as falling under the definition as originally stated. The oversight team, just like Wikipedia as a whole, operates on common sense and precedent. If you come to us and request oversight for "my favorite color is blue", that would of course be ridiculous because we have not historically considered that "non-public personal information". On the other hand, we have historically considered someone's mental health status to fall under this category, so I don't see any need to break it out into a new OSPOL #6. Mz7 (talk) 22:19, 1 September 2021 (UTC)[reply]
    Mz7, the problem as I see it is one of transparency: I don't necessarily disagree with your/Tony's interpretation of these mental health situations as information that should be hidden for privacy reasons, the problem is that your interpretation is not what I'd consider obvious to outsiders. You mention precedent, but the overwhelming majority of editors can't see internal OS discussions and so won't be aware of that precedent, and I don't think the average person would see "PII" and think "threats of self-harm". Basically, I think the community needs to be aware of some of the less obvious precedents/interpretations of OSPOL, with the knowledge that any such list will not be exhaustive. GeneralNotability (talk) 00:58, 2 September 2021 (UTC)[reply]
    I think it would be extremely difficult to provide any information about precedents here without compromising editor safety or privacy. I have an example in mind from over 10 years ago, but if I gave you any details about it at all publicly someone would be able to figure it out (or think they've figured it out), and that would have a negative effect on people's real lives. But at the same time I am confident that if you were entrusted with the details of the example I have in mind you would agree that it was handled properly.
    I think the bottom line is this – we need to have trust in our functionaries that they have the best interest of the project and its editors at heart. The job often is not easy, and at times quite stressful, but the internal OS list is there for functionaries to support each other and hold each other accountable (and in my experience does an excellent job at both of these). Dealing with mental health issues and threats of suicide is difficult, and it's not something anyone should have to do alone, but it simply can't be done in the open. Not even in terms of hypotheticals or generalizations – this community is simply too small for that to work.
    At any rate, this is by no means a substantive change to practice or policy. Suppressing "medical information" has been listed as a common practice (although not on this particular page) since at least 2009. – bradv🍁 02:08, 2 September 2021 (UTC)[reply]
    I have no doubt you're right, but the community has almost no (heh) oversight here. I do trust our functionaries, but "just trust us" can be hard to accept when a group is both responsible for interpreting the policies that apply to it and for internally enforcing those policies on itself. That goes double for OS, where there's basically no data available to non-OSers to contest the use of suppression. GeneralNotability (talk) 02:17, 2 September 2021 (UTC)[reply]
    I hear you, and this is why both the oversight list and ArbCom take complaints about the use of suppression very seriously. By necessity, the oversight group has a "suppress first, ask questions later" philosophy, so it's very common for suppressions to be overturned upon review. According to policy, an email must be sent to the list anytime someone is "oversight blocked" where it gets reviewed by the other oversighters, and this practice is usually also followed any time an action is unusual, potentially controversial, or contested. I know it's not perfect, and it's pretty easy to imagine some sort of nefarious clique that's suppressing the Truth™ or something, but in practice the system does work well. – bradv🍁 02:38, 2 September 2021 (UTC)[reply]
    PII isn't listed anywhere in the policy: non-public personal information is. That's historically been interpreted much more broadly than what is traditionally thought of as PII, and changing the definition to only be PII now would be a fairly major shift in the policy.
    As an example, if a Wikimedian was public with who they were but had not revealed some aspect of their life — such as their sexuality or religion — we would suppress that without question. That doesn't necessarily meet the definition of PII, but it would be a non-controversial suppression.
    If you want to take this thought experiment a bit further: if a Wikimedian with a known identity had actually self-harmed in real life, and someone posted on their talk page revealing details about it that the person had not revealed on-wiki, I think that would be a fairly non-controversial suppression. It's not PII, but it is private information about an individual they have not self-revealed. Going back to the principle that the Oversight team has consistently applied for years: if someone lacks the capacity to reveal information that would otherwise be suppressed, we will suppress. If someone is making a threat of self-harm on the project, we can safely assume they lack that capacity. Since we would suppress in cases where someone else revealed it, we suppress in these cases as well. TonyBallioni (talk) 02:41, 2 September 2021 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Okay so what do we do now....?

I have had a lot on my plate IRL and was only peripherally aware of this discussion. My experience is that threats of self-harm have been suppressed. I've not done the suppressing myself, so I am not the person best placed to really opine in detail, but only to add that trying to distinguish a serious from a non-serious threat of self-harm via an edit, without any clinical context, is too big an ask for any functionary really. Anyway, how shall we proceed here? Cas Liber (talk · contribs) 14:35, 2 September 2021 (UTC)[reply]

I should add that it's after midnight Sydney time so I am going to sleep, but this page strikes me as being left in an unresolved state that we should sort out now. See y'all in about 6-7 hours..... Cas Liber (talk · contribs) 14:36, 2 September 2021 (UTC)[reply]

(edit conflict) I self-reverted my unilateral and disputed change, so there's nothing to resolve. Status quo. ~TNT (she/they • talk) 14:40, 2 September 2021 (UTC)[reply]
I plan on going with the precedent we have set for ourselves internally: suppress first (+ emailing T&S), discuss second, and potentially revert third if necessary. Primefac (talk) 14:38, 2 September 2021 (UTC)[reply]
(brushing teeth and drooling toothpaste on keyboard) in which case this should be reflected in policy, both on this page and on Wikipedia:Responding to threats of harm. Cas Liber (talk · contribs) 14:41, 2 September 2021 (UTC)[reply]
(edit conflict) Why? Isn't the point here that it doesn't need to be in policy? The oversight corp self-governs, doesn't it? The community don't need to know ~TNT (she/they • talk) 14:43, 2 September 2021 (UTC)[reply]
For the former, my change to make OSPOL#1 not an exhaustive list probably does that. For the latter, Point 3 can probably be changed from "contact an admin" to "contact an OSer", since further down the page we tell the admins to contact us anyway. Primefac (talk) 14:43, 2 September 2021 (UTC)[reply]
We'll still suppress threats of self-harm as we always have, per WP:RFO and WP:EMERGENCY. The only remaining point of contention is whether that needs to be listed here, and if so, how it should be worded. Personally I think this page should give an accurate description of common practices so as to avoid the kind of confusion that led to this discussion, and I thought TheresNoTime's addition was a good starting point for that. – bradv🍁 14:43, 2 September 2021 (UTC)[reply]
@Bradv: You're in the minority Brad, not that consensus has anything to do with self-governance. ~TNT (she/they • talk) 14:44, 2 September 2021 (UTC)[reply]
On that note, everyone should probably be aware that I made those (unilateral) changes. Go dispute them too. ~TNT (she/they • talk) 14:45, 2 September 2021 (UTC)[reply]
@TheresNoTime, that's not a novel change to policy. That type of information has been listed onwiki as suppressible since at least 2009. It's perfectly reasonable to want this page to reflect that consensus, and I find no fault with your efforts to do so. – bradv🍁 14:53, 2 September 2021 (UTC)[reply]
Is what it is Brad. I'm fucking astounded at people who I really used to respect wanting to opt for bureaucracy. It's deeply upsetting as someone who has, in the past, been there that the most trusted individuals on the English Wikipedia would rather act like a real cabal, show actual disdain for the community and at best indifference to mental health. Ain't a group of people I overly want to be a part of. ~TNT (she/they • talk) 14:58, 2 September 2021 (UTC)[reply]
  • I mentioned this on-list (and I think I've said in some form everything I've said in private in public, because I do think transparency is important in these discussions), but I think a lot of the confusion comes from misunderstanding the list of examples below as the only types of information we suppress. Primefac's change alleviates that, but it might be better to simply remove the examples if they are going to be a source of confusion. If we're going to have a list, I don't have a problem with including self-harm on it, but if we do keep a list of examples we should also beef up the language making it clear that suppression can be used when there is a serious threat of real world harm to privacy or health, even if it doesn't fit neatly in the examples.
    TheresNoTime, the issue I think some of us are concerned about is that, in addition to threats of self-harm, there are many other types of information that could have an extremely negative impact on someone if made public, that do not neatly fit under the blanket of PII but do clearly fit the criteria of personal non-public information (an example I gave above was HIV status, but you can go beyond that to ethnicity, gender, sexuality, etc.). We routinely suppress all of these things non-controversially and, I think, with broad community support. Some of the discussion above seemed to be taking a much narrower view of that.
    If we want to have the most transparency, perhaps include language along the lines of "any information about an individual's activities off Wikipedia that they have not revealed about themselves, or where they did not understand the implications of posting the information on Wikipedia". The goal here is to protect as many people as possible, and while I definitely agree that threats of self-harm fall under the criteria, we need to be very clear that the Oversight team can act if there is a serious threat to someone's privacy even if it isn't in our list of examples. TonyBallioni (talk) 17:28, 2 September 2021 (UTC)[reply]
  • Policy is intended to reflect practice, not to dictate it, and this has been the practice for the decade I have been on the oversight team. This isn't done punitively, it is done for the protection of people in crisis. There's precious little we can do for someone experiencing a mental health crisis, not letting it be public information and contacting the back office to direct resources their way is about it, and we should continue to do that. Beeblebrox (talk) 18:54, 2 September 2021 (UTC)[reply]
    If you'd like to remove the list of examples, then we need a better definition of "personal information". The current set of examples defines it pretty clearly as information which could be used to identify a person. So if we want to remove the examples and replace them with a definition, we need a similarly clear definition that does what you need. I suggested something along the lines of "PII or mental health information" above, but I still haven't heard whether there is any other category of information which it is "currently accepted practice" to suppress, but which is not yet covered by this policy. ST47 (talk) 20:15, 2 September 2021 (UTC)[reply]
    I would like to point out that while the vast majority of "personally identifying information" is "non-public personal information", not all non-public personal information is PII. It doesn't have to identify someone to be considered personal, and as has been repeatedly said by the OSers here, we do not want to box ourselves in by essentially saying "we will only consider X and Y". Primefac (talk) 20:21, 2 September 2021 (UTC)[reply]
    Exactly. As with most policies, the examples listed are just that, examples, not an exhaustive list. Trying to create a perfect rule that specifically defines exactly what will and will not be suppressed is not a worthwhile use of anyone's time. Beeblebrox (talk) 00:39, 3 September 2021 (UTC)[reply]

I've skimmed this discussion and I am astounded. Suppressing threats of self-harm has been done for years, and policy or not, is the right thing to do. --Rschen7754 00:50, 3 September 2021 (UTC)[reply]

Why the fuck is this still being discussed on-wiki? I thought the list was adamant that we don't need to get the community involved and then proceeded to gaslight anyone who dared suggest otherwise? Why would we even consider informing the community who trust us to correctly apply these tools? In what possible world would we then update the only publicly auditable part of the oversight process? ~TNT (she/they • talk) 00:56, 3 September 2021 (UTC)[reply]