Wikipedia talk:Bots/Requests for approval

Alternatively, you can talk in the #wikipedia-BAG IRC channel.

Re-examination of approval

I'd like the community to re-examine Wikipedia:Bots/Requests for approval/Dexbot 6. This bot task was approved by a former BAG member who, many times over the past year, has publicly stated they do not understand WP:COSMETICBOT. It was approved with no consensus discussion behind it and no input from anyone else. The task itself just replaces a normal external link to an official website with {{Official website}}. The output of these two methods of linking official websites is the same. The only difference is that the template may help Wikidata import official websites; something that could be done just as easily with a bot using a database dump, without the template. Given the lack of consensus, it seems clear this request for approval should have been bumped back to a broad community venue at the time it was submitted, not quickly approved. Without specific consensus, this violates COSMETICBOT.
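For readers unfamiliar with the two forms, the conversion under discussion is roughly the following (a hypothetical Python sketch; Dexbot's actual code is not reproduced here, and the exact link pattern is an assumption):

```python
import re

# Hypothetical sketch of the conversion discussed above; Dexbot's real
# implementation is not shown here and may handle more link variants.
LINK_RE = re.compile(r"\[(https?://\S+)\s+Official website\]", re.IGNORECASE)

def convert(wikitext):
    """Replace a bare external 'Official website' link with the template."""
    return LINK_RE.sub(r"{{Official website|\1}}", wikitext)
```

Visually the rendered output is identical either way, which is what raises the COSMETICBOT question; the template's extra behaviour (tracking categories, Wikidata comparison) is discussed further down.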

For background: In February, I requested the bot operator to provide an explanation of the value of these edits or a pointer in the direction of any consensus at User_talk:Ladsgroup#Query. He declined to respond with anything but an appeal to look in his archives (in which I was unable to locate anything). That's a separate issue (WP:BOTCOMM), but not what this thread is about.

Should this approval be rescinded until consensus has been demonstrated? ~ Rob13Talk 15:03, 12 April 2017 (UTC)

You avoid the real issue here, which is that you disagree with the task. Otherwise, why raise the issue now, almost 8 months after its approval? -- Magioladitis (talk) 21:08, 12 April 2017 (UTC)
@Magioladitis: I actually am not sure if I disagree with this task. I haven't been able to figure out the merit of the cosmetic-only edits because when I tried to speak to the bot operator about them, you responded by insulting me and he declined to discuss this with me. My "disagreement" is with tasks operating without consensus, especially when some or all the edits are cosmetic. If that consensus is built, I never have a problem with the task, even if I personally disagree that the edits are a net positive. ~ Rob13Talk 04:35, 13 April 2017 (UTC)
What exactly is the issue here? How has it not already been addressed, both in the trial and in discussions such as this one (and at Template talk:Official website)? Headbomb {t · c · p · b} 15:22, 12 April 2017 (UTC)
@Headbomb: At the discussion you linked, there was no consensus for the task to be performed. In fact, it was clear that editors were objecting to it. That is the problem. Note that the disapproving editor wasn't isolated; I've located Wikipedia_talk:External_links/Archive_35#Official_website_template, which shows many other editors also disagreed with this task. ~ Rob13Talk 15:28, 12 April 2017 (UTC)
Agree with Rob - there's no consensus for this bot task, and approval should be rescinded for the time being. Hchc2009 (talk) 16:56, 12 April 2017 (UTC)
Wikipedia talk:External links/Archive 35#Official website template (prompted by User talk:Ladsgroup/Archive 4#Cosmetic edits) is the closest to a wide discussion that I can see. Without too much reading, I don't see consensus to perform this task en masse or by bot. The BRFA approver appears to advocate this change and heavily participate in the discussion above and did not request broader consensus before, which makes their approving of such a bot undesirable. I would support a re-approval. While I don't oppose the task itself, the proposal and approval were certainly rushed before establishing clear consensus. —  HELLKNOWZ  ▎TALK 18:46, 12 April 2017 (UTC)
Just to clarify my own opinions, I'm not sure I oppose the task itself either, but I do oppose any task operating without consensus. Re-approval is very possible if this is rescinded. I'd like to have a discussion about the merits of this bot task and whether it can be accomplished without cosmetic edits, but unfortunately, the bot operator wouldn't speak to me about that. ~ Rob13Talk 20:51, 12 April 2017 (UTC)

The conversion changes a hidden tracking category which helps compare data between English Wikipedia and Wikidata. It also helps populate Category:Official website not in Wikidata (created in May 2015). Moreover, this has been open since 2013 (T99568). This is a 100% useful task. -- Magioladitis (talk) 17:46, 12 April 2017 (UTC)

Don't forget Wikipedia:Requests for comment/Wikidata Phase 2 in 2013, which also resulted in creating the tracking categories, modifying the template to support Wikidata fields, etc. There is an entire construction leading in the same direction. See also Template_talk:Official_website/Archive_2#Wikidata in 2014. Rob in fact opposes these changes and they do not say so publicly. They try with a piece-by-piece tactic to undo this construction. It's the same thing they do with the various bot tasks. What is the best place to report this behaviour of Rob? AN? Or should it go to ArbCom? -- Magioladitis (talk) 18:54, 12 April 2017 (UTC)

I do have opinions, as does every editor, and I sometimes turn to the community to determine whether there is a consensus for my opinions (or no consensus for the opinions of others, in this case). There's nothing "tactical" about that. It's the normal consensus-building process. I, in fact, support Wikidata, although I do question their quality-control sometimes. This has nothing to do with the quality of Wikidata, though, since official websites could easily be exported to Wikidata by bot without this template. This is about whether we force a template on our content creators without consensus and via a cosmetic-only bot task. Do what you feel you need to do, but I really struggle to understand what you find objectionable about this discussion other than the fact that I'm questioning that you approved it without consensus to back it up. ~ Rob13Talk 20:49, 12 April 2017 (UTC)
As this task has been ongoing for some time without issue (implicit consensus), IMO it is up to User:BU_Rob13 to clarify if consensus is now against this task being done. There was no one opposing here.[1]
I am not seeing an issue and support its continuation. Doc James (talk · contribs · email) 19:07, 12 April 2017 (UTC)
That is not a particularly convincing argument, given that people have argued against this task from very early on. The bot operator was unresponsive to it. In every discussion that spawned related to that, no consensus emerged for the task, which is evidence that no consensus exists. ~ Rob13Talk 20:49, 12 April 2017 (UTC)
The first and largest wave of official websites has already been converted, probably in the first 4 months of the task. During this process we even improved the data comparison between Wikidata and English Wikipedia. Now the bot does a few edits per day to maintain this good state. -- Magioladitis (talk) 21:07, 12 April 2017 (UTC)
  • There is no consensus for this, and for that reason I asked the bot operator to stop several months ago. Discussion here, for example. Pinging Fram, who seems to know most about this. SarahSV (talk) 04:56, 13 April 2017 (UTC)

Quick tally

If I've read the comments correctly, I think the debate currently seems to be:

In favour of rescinding approval for, and re-examining Wikipedia:Bots/Requests for approval/Dexbot 6:

Against rescinding approval for Wikipedia:Bots/Requests for approval/Dexbot 6:


Any further additions or corrections from other editors? Hchc2009 (talk) 10:21, 17 April 2017 (UTC)

This is satisfying for me because it shows that this is a procedure to re-examine consensus. This is totally OK. -- Magioladitis (talk) 11:34, 17 April 2017 (UTC)

Not cosmetic

As far as I can tell, these are not cosmetic edits. Here's a recent edit. If you look at the post-bot version, it has a new tracking category, Category:Official website different in Wikidata and Wikipedia. This means that it is by definition not cosmetic, i.e. the HTML output is changed. The discussion on the bot operator's talk page makes it clear that these edits are part of a project to improve multiple WPs and Wikidata. BU Rob13, can you please strike your objection to this bot task on COSMETICBOT grounds? That would leave only your question about whether this task really has consensus, which I also have no opinion on, being a human who does not yet grok Wikidata. – Jonesey95 (talk) 04:33, 13 April 2017 (UTC)

@Jonesey95: Any edits the bot makes where the official website is not different between Wikidata and Wikipedia would presumably not introduce that category, so there is potential for cosmetic-only edits to be made. I haven't researched how frequent that is, but the edits I spot-checked before trying to discuss this with the operator back in February contained many cosmetic-only edits. If a bot were created to input official websites into Wikidata from Wikipedia (e.g. search database dumps for interwiki links titled "Official website", then input those to Wikidata), that tracking category would also be depopulated completely, making these all cosmetic-only edits. ~ Rob13Talk 04:40, 13 April 2017 (UTC)
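The database-dump approach mentioned here could look something like this (an illustrative sketch, assuming the dump is read as a stream of lines; the link pattern and function name are assumptions, not an existing tool):

```python
import re

# Illustrative sketch of the dump-scanning idea described above; the link
# pattern and helper name are assumptions, not an existing bot.
LINK_RE = re.compile(r"\[(https?://\S+)\s+Official website\]", re.IGNORECASE)

def official_websites(dump_lines):
    """Yield (page_title, url) pairs for bare official-website links
    found in a stream of XML dump lines."""
    title = None
    for line in dump_lines:
        m = re.search(r"<title>(.*?)</title>", line)
        if m:
            title = m.group(1)
        for url in LINK_RE.findall(line):
            yield title, url
```

Feeding those pairs into a Wikidata import (or a human review queue) would populate the items without any edit to the English Wikipedia articles, which is the efficiency point being made here.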
Fair enough. Links to example edits, as I provided above, would be helpful. In any event, if the task has consensus approval as a bot task, it does not matter if the edits are cosmetic, because bot task approval explicitly overrides COSMETICBOT (as long as it is understood during the approval process that some or all edits may be cosmetic). – Jonesey95 (talk) 04:52, 13 April 2017 (UTC)
@Jonesey95: Fully agreed. It was February when I did spot-checking and first tried contacting the botop, so I no longer have those diffs handy. The cosmetic issue is secondary to the consensus issue and is solved if the consensus issue is solved, so it's probably not the best use of my time to go digging back through the edits to find what I saw in February. ~ Rob13Talk 04:55, 13 April 2017 (UTC)
What's this?!??!? Two reasonable, logical, open-minded people just talked something through rationally and agreed on the internet! This has never happened. This made my day. – Jonesey95 (talk) 05:01, 13 April 2017 (UTC)

There is also Category:Official website not in Wikidata and Category:Official_website_missing_URL. This means the task populates 3 different tracking categories. The default is to keep the task, and Rob and his supporters can open a discussion to change the consensus if they like. -- Magioladitis (talk) 07:15, 13 April 2017 (UTC)

Neither category will be populated if a bot handles transfers of official websites to Wikidata, and I'm sure many edits are made now which don't populate any of the three tracking categories. One cannot change consensus that does not exist. Consensus is required for any bot task, per WP:BOTREQUIRE. ~ Rob13Talk 05:13, 17 April 2017 (UTC)
Converting so that editors can do it manually and converting directly to Wikidata are two different things. As Doc James said, the task has been performed for 8 months now, with the main part already done, so if you want to change consensus, please go ahead. But please specify whether you ask this because you want full automation and transition to Wikidata or because you are with the guys who oppose Wikidata. -- Magioladitis (talk) 09:56, 17 April 2017 (UTC)
There was never any consensus. There was a BAG member who violated the bot policy by approving a task that had no associated consensus discussion. This was immediately and continuously objected to with no response from either of you or the bot operator except hostility and silence, respectively. One cannot assume consensus to exist given such objections. We are now fixing the mess. ~ Rob13Talk 16:22, 17 April 2017 (UTC)
If you propose a better bot that will use Wikidata directly, that's fine. The solution was given as a compromise with those who did not want a bot to go directly and use the Wikidata values; instead it gives time to compare en.wp values to Wikidata values. There is mess made by other bots, including yours. -- Magioladitis (talk) 16:50, 17 April 2017 (UTC)
First, I should respond to a point earlier about whether I "[want] full automatisation and transition to Wikidata or because you are with the guys who oppose Wikidata". I am neither. In most bot discussions I take part in, I comment to ensure consensus is carried out and the bot policy is abided by, not because I hold any deep convictions about how bots should operate. I generally support the idea of Wikidata, but I have some criticisms of their current quality control. I certainly never oppose something to improve Wikidata just because I dislike Wikidata itself. More free knowledge projects are always good things, in my opinion, and Wikidata fills a unique need there. You keep trying to pigeonhole me into either "us" or "them", but I am neither. I'm not pushing some POV; I just want things to be done the right way when we're talking about making thousands of potentially unnecessary automated edits.

In this case, I think everyone can probably agree that some more direct method is preferable. If we want semi-automated transfer to Wikidata with human oversight, we can build a tool that allows such transfer to occur with human oversight and without any edits needed to Wikipedia. It could use database dumps to populate items needing review. If we want automated transfer to Wikidata, that's also a possibility - basically build that same tool without a GUI and with a bot account approved on Wikidata. Neither requires cosmetic edits to the English Wikipedia, and so both are more efficient manners of carrying out the task. Which method we should go with is up to consensus, and I have no strong feelings on which way we should go. I just know we definitely should not go in a direction that has no consensus supporting it, makes trivial edits, and is demonstrably inefficient in accomplishing the task it's attempting to accomplish. ~ Rob13Talk 19:50, 17 April 2017 (UTC)

OK, you are either trying to form an alliance with people opposing Wikidata or you are against Wikidata yourself. By the way, using the template also gives us statistics on how many pages in English Wikipedia use official websites. So it's 3 tracking categories plus more benefits. Recall that in the past we also used IMDb links, etc. via templates to avoid WP:LINKROT etc. -- Magioladitis (talk) 13:45, 20 April 2017 (UTC)
... or I'm doing exactly what I said and ensuring that bot tasks are operating with consensus. "Form an alliance"? This is an encyclopedia, not the United Nations. ~ Rob13Talk 05:57, 24 April 2017 (UTC)

Re-examination of BU Rob13's bot approval

Per User_talk:BU_Rob13#Bot_autoassess_error, the bot causes errors and the bot owner is aware of them. Instead of stopping his bot and fixing the error, he allows the bot to run and then fixes things manually. (It reminds me of Yobot in some cases.) I think the bot should stop and resume only when the error is fixed; otherwise, search for a different approach for this minor task. -- Magioladitis (talk) 11:37, 17 April 2017 (UTC)

This has been discussed in the past with BAG members. The "errors" are caused by things like "| class = sub" (a misspelling of stub). The duplicate parameters are a good thing, as they allow me (or other editors) to correct unambiguous errors that exist on the talk page. The final result is both a correct class and the removal of an incorrect class. Thank you for (a) not notifying me, and (b) not even linking to the bot approval that you're talking about. ~ Rob13Talk 14:58, 17 April 2017 (UTC)
Anytime. I know that you follow this page very closely. AWB provides a feature to update parameters instead of creating all this mess. -- Magioladitis (talk) 15:28, 17 April 2017 (UTC)
You are incorrect. It doesn't provide a feature to update parameters with regex suitable for determining when to make the change when multiple templates on the same page use the same name for a parameter. Each "error" represents an edge case caused by an erroneous value of a template parameter which must be removed by a human editor; it draws attention to an existing mess so I can clean it up. ~ Rob13Talk 15:45, 17 April 2017 (UTC)
It does if you use a custom module. We have a section called Helper functions. Yobot has similar scripts and does it in a better way than your bot. -- Magioladitis (talk) 16:05, 17 April 2017 (UTC)
Anyway. My argument is that since this is a low priority task, we should discuss and find a way to minimise the edge cases instead of creating more mess and requiring two or three runs per page. Take note also that removing a duplicate parameter that in fact does not affect the visual output is "cosmetic". The same holds if the bot visits a page and adds a parameter that does not affect the page because there is another parameter there. Please read WP:COSMETICBOT. -- Magioladitis (talk) 16:11, 17 April 2017 (UTC)
If you'd like to write a module to handle that, do be my guest. Saying "we have a feature for that" and then "go write a custom module in a coding language you do not know" are rather different things, though! You'll also need to explain exactly how such a module would handle a theoretically infinite number of invalid "class" parameters, some of which are intended to be importance parameters, some of which are intended to be actual class parameters and are the same as what the bot would apply, and some of which are intended to be actual class parameters and are not the same as what the bot would apply. I'm always open for better implementations, but it makes little sense to spend 10 hours coding when it would take 10 minutes manually fixing, especially when the manual fixing is going to achieve a better end result. Human attention on the few talk pages with invalid parameters is desirable, which is why this hasn't been changed in the past.

(edit conflict) Article assessment is not a low priority task. It has been requested by the WikiProjects being tagged with knowledge of how the implementation works. There are consensus discussions to back up each project's opt-in. My bot doesn't run more than once per page in any given run (of course, if new project templates are added that need assessment between runs, a second edit is possible). That is simply false, as is the claim that my bot removes duplicate parameters (it does not). I do welcome discussion of any bot task on my talk page, and I'm always happy to improve if one can suggest a better implementation. ~ Rob13Talk 16:14, 17 April 2017 (UTC)

OK, let's see if BAG is informed that the bot creates a percentage of errors and that in some cases it violated COSMETICBOT. Then we will handle it. No problem. -- Magioladitis (talk) 16:20, 17 April 2017 (UTC)
Provide a single edit which violated COSMETICBOT and I will be happy to fix it. These do not exist. As for BAG being aware, I know you were aware of this in the past because we discussed it on my talk page. You were a BAG member at the time. So yes, BAG was aware. ~ Rob13Talk 16:23, 17 April 2017 (UTC)

Magioladitis (talk · contribs), stop WP:REICHSTAGing in apparent retaliation for BU Rob's request concerning DexBot, and drop the WP:BATTLEGROUND mentality. If you have an issue with User:BU RoBOT, then I suggest you follow the directions outlined in WP:BOTISSUE, which I will quote here for the convenience of you both:

"If you have noticed a problem with a bot, or have a complaint or suggestion to make, you should contact the bot operator directly via their user talk page (or via the bot account's talk page). Bot operators are expected to be responsive to the community's concerns and suggestions, but please assume good faith and don't panic. Bugs and mistakes happen, and we're all here to build an encyclopedia."

It should be fairly obvious which bold passage applies to whom. Headbomb {t · c · p · b} 16:32, 17 April 2017 (UTC)

Also, consider doing articles likely to have duplicated parameters because of the bot semi-automatically, rather than automatically, if possible. Headbomb {t · c · p · b} 16:35, 17 April 2017 (UTC)
In my next run, which will not be for several months, I will (a) fix the duplicate auto parameter issue, which I believe has become more prevalent over time due to the amount of people copying my bot's assessment templates when creating new article talk pages, and (b) see what I can do about duplicate class parameters without compromising quality of the final result. ~ Rob13Talk 16:38, 17 April 2017 (UTC)

Fair enough. I think it's obvious that Rob has two sets of criteria, though: the ones that apply to other bots and the ones that apply to his bot. I know that the problem with Rob's bot is not that big, but it's good to handle at some point, because the replies on his talk page were mostly in the direction that he is actually aware of the problem, but his way to fix the problem is manually fixing the edge cases. It reminds me of something. -- Magioladitis (talk) 16:45, 17 April 2017 (UTC)

I did not make cosmetic-only edits. I did not make edits against consensus. I did not introduce errors into any pages that didn't already, themselves, contain errors. I did not know of a way to prevent an error my bot was causing and refuse to apply it. For the third time or so, if you have a proposed solution that I can implement and that actually works, I'm all ears. ~ Rob13Talk 16:56, 17 April 2017 (UTC)

Where was User:Ladsgroup pinged in the above discussion, User talk:BU Rob13? Doc James (talk · contribs · email) 19:21, 17 April 2017 (UTC)

@Doc James: Here, after I attempted to discuss the issue with him multiple times in that same thread and gave him several months to respond to my concerns. ~ Rob13Talk 19:38, 17 April 2017 (UTC)
Excellent thanks Doc James (talk · contribs · email) 19:47, 17 April 2017 (UTC)

The point here is that BU RoBOT's minor edit required another editor to make a cosmetic edit to fix the issue that BU RoBOT introduced. Seems to me that M. has a valid point here. I don't see any misspelling, nor any unambiguous error; just a parameter that wasn't assigned a value. Don't edits of this nature to talk pages clog up the article-space watchlists, thus making it possibly harder to detect article-space vandalism? I'm not sure one can watch an article without simultaneously watching its talk as well. wbm1058 (talk) 20:18, 17 April 2017 (UTC)

Yes, that is a major problem with bot edits. Many of us hide them on our watchlists. But then that hides everything that occurred before that bot edit, which can then hide vandalism from a human editor. We seriously need the WMF to fix this. Doc James (talk · contribs · email) 22:30, 17 April 2017 (UTC)
@Doc James: I believe T11790/[2] is being reviewed for deployment in the next MediaWiki update. Headbomb {t · c · p · b} 22:40, 17 April 2017 (UTC)
@Wbm1058: Typically, I follow very shortly behind my bot and fix such double parameters. These affect a very small percentage of articles, so it takes me maybe 5 minutes after a run on ~10,000 pages. I made the simple mistake of forgetting to do that this time because I had travel scheduled the next day and got distracted with that. In any event, I've already stated above that I would work something out with that bug before the next run of the bot task (or I will not run it, and explain to the WikiProjects why I'm unable to implement their consensus). The term "cosmetic edit" is meaningless when it comes to non-automated edits. It is purely a construction of the bot policy (and the AWB rules of use). ~ Rob13Talk 22:36, 17 April 2017 (UTC)

Combining of magic links BRFAs

@Headbomb, JJMC89, Legoktm, Jonesey95, Magioladitis, and Anomie: I think we're going around in circles with three-and-a-half BRFAs going on right now about exactly the same topics. All three (as near as I can tell) are using pretty much identical regex, and now it just seems like we're holding one discussion in three locations. So, I'm bringing us all together. I realize I'm not BAG, but here's what it looks like we need to get this ball rolling:

  1. Full regex for converting ISBN and PMID from magic links to templates (including edge cases) and (if consensus allows) regex for converting doi.
  2. Test runs on said regex
  3. Splitting the massive "magic links" categories into groups so that PrimeBOT, Magic links bot, and Yobot can get the go-ahead.

Feel free to add points to this list if necessary, and feel free to ping anyone I missed. Primefac (talk) 03:01, 18 April 2017 (UTC)
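As a concrete starting point for point 1, a first cut might look like the following (a sketch only; the regex MediaWiki itself uses to recognise magic links is more permissive about separators and edge cases, so the production pattern should come from the MediaWiki source):

```python
import re

# First-cut sketch for point 1; MediaWiki's own magic-link recognition in
# the parser is more permissive, so treat this as illustrative, not final.
ISBN_RE = re.compile(r"\bISBN\s+((?:97[89][- ]?)?(?:\d[- ]?){9}[\dXx])\b")
PMID_RE = re.compile(r"\bPMID\s+(\d+)\b")

def convert_magic_links(text):
    """Wrap bare ISBN and PMID magic links in their templates."""
    text = ISBN_RE.sub(lambda m: "{{ISBN|%s}}" % m.group(1), text)
    return PMID_RE.sub(lambda m: "{{PMID|%s}}" % m.group(1), text)
```

Running such a pattern against a test page set (point 2) would surface the edge cases before any of the three bots start on the split categories (point 3).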

Agreed. I modified the third bullet to clarify the scope of action. I strongly recommend that the regex that is used be the regex that the Mediawiki software uses to recognize and create magic links. That code is not foolproof, but it is what puts pages in the "magic links" categories in the first place, and it should be conservative enough to avoid false positives. Once we have done all of the pages in the magic links categories, we can trawl around and look for other patterns that should/could be converted.
I think we should leave DOI and other identifiers out of this round of fixes. Keep it simple. – Jonesey95 (talk) 04:33, 18 April 2017 (UTC)
DOI and PMC should definitely be included, and other identifier fixes should be incorporated as they are tested and developed. It is highly undesirable to have the same article be edited multiple times for the same type of thing. These bots should cover the most highly used identifiers from the get-go.
Mediawiki regex should not be used for ISBNs, we want to catch bad ISBNs and other ISBN errors. The template will put those in categories, flagging issues that were silently ignored before. Headbomb {t · c · p · b} 11:02, 18 April 2017 (UTC)
Isn't there a CHECKWIKI for malformed ISBNs? Also, I think it would be easier to find malformed ISBNs after we do this fix, because we could search for any ISBN that isn't immediately preceded by {{. Primefac (talk) 11:37, 18 April 2017 (UTC)
There is a checkwiki for that, yes. Checking for malformed ISBNs afterwards is always a possibility I suppose, but I don't really see much benefit. After the run, people will keep adding bare ISBNs, and those will need conversions too. The absence of a {{ won't be a guarantee the ISBN is malformed, and it will only take a few days before valid untemplated ISBNs outnumber bad untemplated ones. The advantage of templating bad ISBNs is that the ISBN error will be visible to everyone, and can be fixed much more quickly, rather than only being listed at WP:CHECKWIKI. Headbomb {t · c · p · b} 12:14, 18 April 2017 (UTC)
I think we're at a "6 and two 3s" point with regard to how best to fix the extant broken ISBNs. Either way it will mean going through a second pass to fix broken links. I guess from a regex perspective we know the MediaWiki code works. So maybe we should use that plus an "everything else" regex for the bad stuff? Primefac (talk) 12:23, 18 April 2017 (UTC)
There is little point in trying to catch "bad ISBNs" with a bot. There are only about 800 ISBNs in the Checkwiki report, and the vast majority of them are inside URL links. The links need to be processed manually. The remaining population should be processed manually or as a supervised run, because they are broken in many different ways. – Jonesey95 (talk) 13:04, 18 April 2017 (UTC)
I agree with what Jonesey95 said. DOI is entirely unrelated here. It's not a magic link. Legoktm (talk) 17:53, 18 April 2017 (UTC)
Whether DOI is a magic link or not is beside the point. The point is that DOIs should be linked, and this is a good fix to bundle into these bots. Headbomb {t · c · p · b} 17:56, 18 April 2017 (UTC)
It's entirely relevant. The RfC explicitly says those kinds of things that should be linked but are not currently are not included. Legoktm (talk) 19:47, 18 April 2017 (UTC)
No one objected (save Rich, who also opposed doing ISBNs, which are being done) to doing DOIs and other identifiers this way, neither in that RFC, nor the follow-up specifically on this question, nor at the ongoing Wikipedia:Bots/Requests_for_approval/CitationCleanerBot_2. There is no argument from anyone that a plain doi:10.1234/whatever is somehow desired over a linked doi:10.1234/whatever. Headbomb {t · c · p · b} 19:51, 18 April 2017 (UTC)
Headbomb: You're making the perfect the enemy of the good here. Adding DOIs to this bot task adds an additional level of complexity and the opportunity for false positive edits. Do you have a regex in mind? (See step 1 above.) If you want to move forward with DOIs, bring us some ideas for step 1 instead of only arguing (hint: Module:Citation/CS1/Identifiers has a regex that works well in citations). – Jonesey95 (talk) 20:49, 18 April 2017 (UTC)
It's far from perfect; we're talking about adding 1 thing (DOIs), not all identifiers (at least not while development is going on for the other ones). Perfect would be holding this up until barely used identifiers like {{OSTI}} are covered. CS1 has no regex to catch DOIs, but the general regex would be something like
(\[\[digital object identifier\|doi\]\]|doi)(:|\s*)(10.<foobar>|\[http(s)?:\/\/(dx\.)?doi\.org\/<foobar> <foobar>\]). The tricky bit is what <foobar> should be exactly. Likely something like ([^(\]|\s*|,)]+?), since that would indicate the end of the doi match, but a final "\.\s" is also allowed, and I'm not sure how to regex that. Also not sure if +? is the right one for this match. However, that wouldn't take very long to develop, and it would save several thousand edits from being made. The benefits are worth it.
Headbomb {t · c · p · b} 21:33, 18 April 2017 (UTC)
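One way to pin down the <foobar> question above is a lazy match with a lookahead for the terminator, rather than a character class (a sketch under the assumption that a DOI runs to the next whitespace, with trailing sentence punctuation excluded; this handles only the bare doi: form, not the bracketed-link variants in the regex above):

```python
import re

# Sketch of a terminator for the '<foobar>' part discussed above: the DOI
# runs to the next whitespace, and a trailing '.' ',' or ';' is treated
# as sentence punctuation rather than part of the identifier.
DOI_RE = re.compile(r"\bdoi:\s*(10\.\d{4,9}/\S+?)(?=[.,;]?(?:\s|$))")

def template_dois(text):
    """Wrap bare doi:10.xxxx/yyyy identifiers in {{doi}}."""
    return DOI_RE.sub(lambda m: "{{doi|%s}}" % m.group(1), text)
```

The lookahead is what lets a sentence-final period stay outside the match while dots inside the DOI are kept.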

Has this discussion stalled yet? Can we just approve the bots already for PMID and ISBN? (cc: Legoktm) --MZMcBride (talk) 19:01, 23 April 2017 (UTC)

That's the thing: doing DOIs with those will save several thousand edits. Headbomb {t · c · p · b} 19:09, 23 April 2017 (UTC)
Lack of consensus around DOIs (and lack of a proposed regex for DOIs) is preventing us from moving forward with ISBNs and PMIDs. Let's do the thing that has consensus. – Jonesey95 (talk) 23:25, 23 April 2017 (UTC)
Fully agree with Jonesey. From what I've read, we've got one camp that thinks we shouldn't do DOIs but should do ISBNs and PMIDs, and another camp that wants them all done together. We need to avoid the situation where "do nothing" wins out even though no-one likes it. ~ Rob13Talk 06:00, 24 April 2017 (UTC)
From what I can tell, all objections are purely procedural in nature. "This wasn't in the RFC". No, it wasn't. But that doesn't mean it shouldn't be done. Headbomb {t · c · p · b} 10:35, 24 April 2017 (UTC)
Given that a bot owner was taken to ArbCom for going outside of standard procedure, I can understand the importance to some editors to strictly follow the rules. However, I also know that situation was rather exceptional, and maybe in this instance it's better to ask forgiveness than permission? In case you can't tell, I'm neutral as far as actually doing it.
Maybe a quick yes/no !vote among those gathered here would settle the issue. Primefac (talk) 12:07, 24 April 2017 (UTC)
The ArbCom case wasn't opened because some things were done "out-of-process", but rather because of WP:IDIDNTHEARTHAT. There was a follow-up to the RFC with no objections, people expressed support at Wikipedia:Bots/Requests_for_approval/CitationCleanerBot_2, and there has yet to be any objection to the idea that doi and other identifiers should be linked.
So the question is: should "magic link" bots clean up other identifiers when they have the opportunity to do so, or not? Or do we want separate bots that will edit several pages multiple times? Headbomb {t · c · p · b} 13:10, 24 April 2017 (UTC)
The objections are not just procedural in nature. See step 1 at the very top of this discussion: we do not yet have a good regex for templating DOIs. Nobody has brought one forward and said "I have tested this in my main account" or "I followed the link that Jonesey95 provided and used that regex and found that it doesn't work" or anything like that.
Given that basic lack of ambition, and the fact that we have three editors ready to go with ISBNs and PMIDs (using the regex that WP uses to generate magic links), and a well-defined category of ISBN and PMID magic links, I think we should table the DOI idea and move forward with ISBNs and PMIDs. – Jonesey95 (talk) 13:33, 24 April 2017 (UTC)
Or simply develop that regex? \[?\[?(digital object identifier\|doi|doi)\]?\]?(:|\s)+10\.\d+\/<foobar> is a good starter. "Ends with \s, ',', or \.\s" should be the end condition. I just don't know how to regex that, because [^(\s|,|\.\s)]+ doesn't work. But that's a regex that shouldn't take long to develop and test. As for a "lack of ambition", I'm still waiting on Wikipedia:Bots/Requests_for_approval/CitationCleanerBot_2. Headbomb {t · c · p · b} 14:54, 24 April 2017 (UTC)
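On why [^(\s|,|\.\s)]+ doesn't work: inside a character class, (, | and ) are literal characters, and a class can only exclude single characters, so it cannot express the two-character terminator \.\s. A lookahead can (illustrative Python with made-up sample strings):

```python
import re

# Inside a character class, '(', '|' and ')' are literals, so the attempt
# above really means "none of ( ) | , . or whitespace" and therefore
# stops at the first '.' inside a DOI.
BAD = re.compile(r"[^(\s|,|\.\s)]+")

# A lazy match plus a lookahead says "stop at '. ', ',', whitespace, a
# final '.', or end of string" without truncating internal dots.
GOOD = re.compile(r"\S+?(?=\.\s|[,\s]|\.$|$)")
```

This is the usual workaround when the terminator is a sequence rather than a single character.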