
Wikipedia:Bot requests

This is an old revision of this page, as edited by 70.166.73.34 (talk) at 17:53, 28 February 2017 (→Reformation). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots, either because they are too complicated to program, or do not have consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) is added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).

Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did so with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.

Add protection templates to recently protected articles

We have bots that remove protection templates from pages (DumbBOT and MusikBot), but we don't have a bot right now that adds protection templates to recently protected articles. Lowercase sigmabot used to do this until it stopped working about two years ago. I generally think it's a good idea to add protection templates to protected articles, so people know about the protection (especially logged-in autoconfirmed users, who would otherwise have no idea a page is semi-protected). —MRD2014 (talkcontribs) 13:06, 18 October 2016 (UTC)[reply]

We need those bots because the expiration of protection is usually an automatic process. Applying protection, however, has to be done by an admin - and in that process, as part of the instructions, a template they use places that little padlock. Thus, any protected page will have the little padlock; I don't think many admins forget to do this. For it to be worth a bot doing this, there would have to be a substantial problem - can you show us any? If you can, then I will code and take this on. TheMagikCow (talk) 18:01, 15 December 2016 (UTC)[reply]
@TheMagikCow: Sorry for the late reply, but it's not really a problem, it's just that some administrators don't add protection templates when protecting the page (example), so a logged-in autoconfirmed user would have no idea it's semi-protected or extended-protected unless they clicked "edit" and saw the notice about the page being semi-protected or extended-protected. I ended up adding {{pp-30-500}} to seven articles ([1]). This has nothing to do with removing protection templates (something DumbBOT and MusikBot already do). The adding of {{pp}} templates was previously performed by lowercase sigmabot. —MRD2014 (talkcontribs) 00:34, 28 December 2016 (UTC)[reply]
@MRD2014: Ok, those examples make me feel that a bot is needed for this - and it would relieve the admins of the task of manually adding them. I think I will get Coding... and try to take this one on! TheMagikCow (talk) 10:39, 28 December 2016 (UTC)[reply]
@TheMagikCow: Thanks! —MRD2014 (talkcontribs) 14:41, 28 December 2016 (UTC)[reply]
Or possibly a bot who sends the admin a notice that "It looks like during your protection action on X you may have forgotten to add the lock icon. Please check and add the appropriate lock icon. Thank you" Hasteur (talk) 02:07, 2 January 2017 (UTC)[reply]
Hasteur's suggestion should probably be incorporated into the bot since it has the clear benefit of diminishing future instances of mismatched protection levels and protection templates by reminding admins for the future. Enterprisey (talk!) 03:05, 2 January 2017 (UTC)[reply]
OK - Will try to add that - would it be easier if that was a template? TheMagikCow (talk) 11:53, 2 January 2017 (UTC)[reply]
Some admins have {{nobots}} on their talk pages (Materialscientist for example) so the bot couldn't message those users. Also, lowercase sigmabot (the last bot to add protection templates) would correct protection templates too. —MRD2014 (talkcontribs) 17:20, 2 January 2017 (UTC)[reply]
In some cases, there is no need to add a prot padlock, such as when the page already bears either {{collapsible option}} or {{documentation}}; mostly these are pages in Template: space. Also, redirects should never be given a prot padlock - if done like this, for example, it breaks the redirection. Instead, redirects have a special set of templates which categorise the redir - they may be tagged with {{r fully protected}} or equivalent ({{r semi-protected}}, etc.), but it is often easier to ensure that either {{redirect category shell}} or the older {{this is a redirect}} is present, both of which determine the protection automatically, in a similar fashion to {{documentation}}. --Redrose64 🌹 (talk) 12:11, 3 January 2017 (UTC)[reply]
About the notifying admins thing, MediaWiki:Protect-text says "Please update the protection templates on the page after changing the protection level." in the instructions section. Also, the bot should not tag redirects with pp templates per Redrose64. If it tags articles that aren't redirects, it shouldn't have any major issues. —MRD2014 (talkcontribs) 19:26, 3 January 2017 (UTC)[reply]
This would be better as a mediawiki feature - see Wikipedia:Village_pump_(technical)#Use_CSS_for_lock_icons_on_protected_pages.3F, meta:2016_Community_Wishlist_Survey/Categories/Admins_and_stewards#Make_the_display_of_protection_templates_automatic, phab:T12347. Two main benefits: not depending on bots to run, and not spamming the edit history (protections are already displayed, no need to double up). As RedRose has pointed out, we already have working Lua code. Samsara 03:48, 4 January 2017 (UTC)[reply]
TheMagikCow has filed a BRFA for this request (see Wikipedia:Bots/Requests for approval/TheMagikBOT 2). —MRD2014 (talkcontribs) 18:29, 5 January 2017 (UTC)[reply]
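For whoever takes the task on, a minimal pywikibot sketch of the core loop (the mapping from protection level to padlock template is an assumption, and the redirect and {{documentation}} exclusions discussed above still apply):

 import pywikibot

 site = pywikibot.Site('en', 'wikipedia')

 # Assumed mapping from edit-protection level to padlock template.
 PADLOCKS = {
     'autoconfirmed': '{{pp-semi-protected}}',
     'extendedconfirmed': '{{pp-extended}}',
     'sysop': '{{pp-protected}}',
 }

 # Walk recent entries in the protection log.
 for entry in site.logevents(logtype='protect', total=50):
     if entry.action() not in ('protect', 'modify'):
         continue
     page = entry.page()
     if page.namespace() != 0 or page.isRedirectPage():
         continue  # skip non-articles and redirects per Redrose64
     level = page.protection().get('edit', (None,))[0]
     tmpl = PADLOCKS.get(level)
     if tmpl and tmpl not in page.text:
         page.text = tmpl + '\n' + page.text
         page.save(summary='Adding protection template to protected article')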

Missing BLP template

We need a bot that will search for all articles in Category:Living people that lack a {{BLP}} (or an alternative) on the article's talk page, and add the missing template to those pages. --XXN, 21:21, 20 November 2016 (UTC)[reply]

Ideally, not {{BLP}} directly, but indirectly via {{WikiProject Biography|living=yes}}. But we once had a bot that did that, I don't know what happened to it. --Redrose64 (talk) 10:33, 21 November 2016 (UTC)[reply]
{{WikiProject Biography|living=yes}} adds the biography to Category:Biography articles of living people. TheMagikCow (talk) 18:48, 16 January 2017 (UTC)[reply]
Hi @Redrose64:, what was that bot's name? We faced such a need recently during the Wiki Loves Africa photo contest on Commons. Hundreds of pictures in a parent category were missing a certain template. I am planning to build a bot or adapt an existing one for similar cases.--African Hope (talk) 17:08, 4 February 2017 (UTC)[reply]
I'll code this: tag if there is a living people category but no {{BLP}}, {{WikiProject Banner Shell}} or {{WikiProject Biography}}. I might expand this to check whether an article has a 'living people' category but no |living= parameter. Dat GuyTalkContribs 17:28, 4 February 2017 (UTC)[reply]
I don't recall. Maybe Rich Farmbrough (talk · contribs) knows? --Redrose64 🌹 (talk) 00:20, 5 February 2017 (UTC)[reply]
I have been fixing members of Category:Biography articles without living parameter along with User:Vami_IV for some time. Menobot ensures that most biographies get tagged. I also did a one-off to tag such biographies a couple of months ago. All the best: Rich Farmbrough, 00:32, 5 February 2017 (UTC).[reply]
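For whoever picks this up, a rough pywikibot sketch of the check being discussed (the substring matching is deliberately crude; a real bot would parse the talk page properly and honour {{nobots}}):

 import pywikibot

 site = pywikibot.Site('en', 'wikipedia')
 cat = pywikibot.Category(site, 'Category:Living people')

 BANNERS = ('{{WikiProject Biography', '{{WikiProject banner shell', '{{BLP')

 for article in cat.articles(total=200):
     talk = article.toggleTalkPage()
     text = talk.text if talk.exists() else ''
     # Crude check: is any known banner already on the talk page?
     if not any(b.lower() in text.lower() for b in BANNERS):
         talk.text = '{{WikiProject Biography|living=yes}}\n' + text
         talk.save(summary='Tagging talk page of a biography of a living person')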

IP-WHOIS bot

During vandal hunting I've noticed that IP vandals usually stop in their tracks the moment you add the 'Shared IP' template (with WHOIS info) to their Talk page. I assume they then realise they're not as anonymous as they thought. A bot that would automatically add that WHOIS template to an IP vandal's Talk page, let's say once they've reached warning level 2, would prevent further vandalism in a lot of cases. I don't know if this needs to be a new bot or if it could be added to ClueBot's tasks. I think ClueBot would be the best option since it already leaves warnings on those Talk pages, so adding the Shared/WHOIS template as well would probably be the fastest option. Any thoughts? Mind you, I'm not a programmer so there's no way I could code this thing myself. Yintan  20:27, 30 November 2016 (UTC)[reply]

This would be fairly easy to do. Coding... Tom29739 [talk] 17:32, 8 December 2016 (UTC)[reply]
Nice idea. Tom29739, what's the status on this? 103.6.159.67 (talk) 08:04, 16 January 2017 (UTC)[reply]
This is still being coded, development has slowed unfortunately due to being very busy in real life. Tom29739 [talk] 22:40, 18 January 2017 (UTC)[reply]
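For reference, a sketch of the lookup half of such a bot, assuming the third-party ipwhois package; passing the organisation as the first parameter of {{Shared IP}} is illustrative, so treat the details as assumptions:

 import pywikibot
 from ipwhois import IPWhois  # third-party RDAP/WHOIS client

 def shared_ip_template(ip):
     """Build a {{Shared IP}} template from an RDAP lookup."""
     rec = IPWhois(ip).lookup_rdap(depth=1)
     org = rec.get('asn_description') or rec['network'].get('name', 'Unknown')
     return '{{Shared IP|%s}}' % org

 site = pywikibot.Site('en', 'wikipedia')
 ip = '203.0.113.45'  # placeholder address from the documentation range
 talk = pywikibot.Page(site, 'User talk:' + ip)
 if '{{Shared IP' not in talk.text:
     talk.text = shared_ip_template(ip) + '\n' + talk.text
     talk.save(summary='Adding WHOIS information for shared IP', minor=True)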

MarkAdmin.js

Hello.

I would like to transfer the following script to Wikipedia so users such as myself could identify which users belong to the following groups:

  • Administrators (by default)
  • Bureaucrats (by default)
  • Checkusers (by default)
  • Oversighters (by default)
  • ARBCOM Members (optional)
  • OTRS Members (optional)
  • Edit Filter Managers (optional)
  • Stewards (optional)

https://commons.wikimedia.org/wiki/MediaWiki:Gadget-markAdmins.js

I would like a bot to frequently update the list to make the information accurate.

https://commons.wikimedia.org/wiki/MediaWiki:Gadget-markAdmins-data.js 1989 (talk) 19:30, 18 December 2016 (UTC)[reply]

Add importScript('User:Amalthea/userhighlighter.js'); to your "skin".js file to show admins Ronhjones  (Talk) 21:27, 3 January 2017 (UTC)[reply]
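The updating bot itself would be small; a pywikibot sketch (the shape of the data object is an assumption - the bot would emit whatever format Gadget-markAdmins-data.js actually expects):

 import json
 import pywikibot

 site = pywikibot.Site('en', 'wikipedia')

 # The user groups the gadget marks by default; stewards live on meta.
 groups = ['sysop', 'bureaucrat', 'checkuser', 'oversight']
 data = {g: sorted(u['name'] for u in site.allusers(group=g)) for g in groups}

 page = pywikibot.Page(site, 'MediaWiki:Gadget-markAdmins-data.js')
 page.text = 'window.markAdminsData = ' + json.dumps(data, indent=1) + ';'
 page.save(summary='Updating markAdmins data')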

add birthdate and age to infoboxes

Here's a thought... How about a bot to add {{birth date and age}}/{{death date and age}} templates to biography infoboxes that just have plain text dates? --Zackmann08 (Talk to me/What I been doing) 18:13, 20 December 2016 (UTC)[reply]

These templates provide the dates in microformat, which follows ISO 8601. ISO 8601 only uses the Gregorian calendar, but many birth and death dates in Wikipedia use the Julian calendar. A bot can't distinguish which is which, unless the date is after approximately 1924, so this is not an ideal task to assign to a bot. (Another problem is that if the birth date is Julian and the death date is Gregorian the age computation could be wrong.) Jc3s5h (talk) 19:07, 20 December 2016 (UTC)[reply]
@Jc3s5h: that is a very valid point... One thought, the bot could (at least initially) focus on only people born after 1924 (or whichever year is decided). --Zackmann08 (Talk to me/What I been doing) 19:13, 20 December 2016 (UTC)[reply]
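If the cutoff approach goes ahead, the conversion might look like this mwparserfromhell sketch (the |birth_date= parameter name and the two date formats tried are assumptions; real infoboxes vary):

 import datetime
 import mwparserfromhell

 CUTOFF = 1924  # below this, Julian vs. Gregorian is ambiguous

 def convert_birth_date(wikitext):
     """Replace a plain-text |birth_date= with {{birth date and age}}."""
     code = mwparserfromhell.parse(wikitext)
     for tmpl in code.filter_templates():
         if not str(tmpl.name).strip().lower().startswith('infobox'):
             continue
         if not tmpl.has('birth_date'):
             continue
         raw = str(tmpl.get('birth_date').value).strip()
         parsed = None
         for fmt in ('%d %B %Y', '%B %d, %Y'):
             try:
                 parsed = datetime.datetime.strptime(raw, fmt)
                 break
             except ValueError:
                 pass
         if parsed and parsed.year > CUTOFF:
             tmpl.get('birth_date').value = '{{birth date and age|%d|%d|%d}}' % (
                 parsed.year, parsed.month, parsed.day)
     return str(code)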
Without comment on feasibility, I support this as useful for machine-browsing. The ISO 8601 format is useful even if the visual output of the page doesn't change. ~ Rob13Talk 08:22, 30 December 2016 (UTC)[reply]

I'm all for it. I am filing a BRFA after my wikibreak. -- Magioladitis (talk) 08:24, 30 December 2016 (UTC)[reply]

  • When I open the edit window, I just see a bunch of template clutter, so I would like to understand what the template is used for, who on WP uses it, and specifically what the purpose of microformat dates is. It strikes me that the infoboxes are sufficiently well labelled for any party to pull date metadata off them without recourse to additional templates. -- Ohc ¡digame! 23:13, 14 February 2017 (UTC)[reply]

Can a useful bot be taken over and repaired?

(This was posted at WP:VPT; user:Fastily suggested posting here if there were no takers.)
User:Theopolisme is fairly inactive (last edit May). He made User:Theo's Little Bot. Of late the bot has not been behaving very well on at least one of its tasks (Task 1 - reduction of non-free images in Category:Wikipedia non-free file size reduction requests). It typically starts at 06:00 and will drop out usually within a minute or two (although sometimes one is lucky and it runs for half an hour occasionally). Messages on talk pages and GitHub failed to reach the user. User:Diannaa and I both sent e-mails, and Diannaa did get a reply - he is very busy elsewhere, and hopes to maybe look over Xmas... In view of the important work it does, Diannaa suggested I ask at WP:VPT if there was someone who could possibly take the bot over? NB: See also Wikipedia:Bot requests#Update WikiWork factors Ronhjones  (Talk) 19:44, 25 December 2016 (UTC)[reply]

Now this should be a simple task. Doing... Dat GuyTalkContribs 12:39, 27 December 2016 (UTC)[reply]
@DatGuy: FWIW, I'm very rusty on python, but I tried running the bot off my PC (with all saves disabled of course), and the only minor error I encountered was resizer_auto.py:49: DeprecationWarning: page.edit() was deprecated in mwclient 0.7.0 and will be removed in 0.9.0, please use page.text() instead.. I did note that the log file was filling up, maybe after so long unattended, the log file is too big. Ronhjones  (Talk) 16:24, 28 December 2016 (UTC)[reply]
Are you sure? See [2]. When it tries to upload it, the file is corrupted. However, the file is fine on my local machine. Can you test it on the file? Feel free to use your main account, I'll ask to make it possible for you to upload files. As a side note, could you join ##datguy connect so we can talk more easily (text, no voice). Thanks. Dat GuyTalkContribs 16:33, 28 December 2016 (UTC)[reply]
Well just reading the files is one thing, writing them back is a whole new ball game! Commented out the "theobot.bot.checkpage" bit, changed en.wiki to test.wiki (2 places), managed to login OK, then it goes bad - see User:Ronhjones/Sandbox2 for screen grab. And every run adds two lines to my "resizer_auto.log" on the PC. Bit late now for any more. Ronhjones  (Talk) 01:44, 29 December 2016 (UTC)[reply]
Ah, just spotted the image files in the PC directory - 314x316 pixels, perfect sizing. Does that mean the bot's directory is filling up with thousands of old files? Just a thought. Ronhjones  (Talk) 01:49, 29 December 2016 (UTC)[reply]
See for yourself :). Weird thing for me is, I can upload it manually from the API sandbox on testwiki just fine. When the bot tries to do it via coding? CORRUPT! Dat GuyTalkContribs 10:28, 30 December 2016 (UTC)[reply]
25 GB of temp image files !! - is there a size limit per user on that server? Somewhere (in the back of my mind - I know not where - trouble with getting old..., and I could be very wrong) I read he was using a modified mwclient... My PC fails when it hits the line site.upload(open(file), theimage, "Reduce size of non-free image... and drops to the error routine, I tried to look up the syntax of that command (not a lot of documentation) and it does not seem to fully agree with his format. Ronhjones  (Talk) 23:29, 30 December 2016 (UTC)[reply]
OTOH, I just looked at the test image, have you cracked it? Ronhjones  (Talk) 23:31, 30 December 2016 (UTC)[reply]

BRFA filed. Dat GuyTalkContribs 09:19, 1 January 2017 (UTC)[reply]

@DatGuy: And approved I see - Is it now running? I'll stop the original running. I see it was that "open" statement that was the issue I had! Ronhjones  (Talk) 00:34, 3 January 2017 (UTC)[reply]
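For the record, the likely culprit and fix in sketch form: open() without 'rb' reads the image in text mode (which mangles binary data, notably on Windows), and the deprecation warning just asks for page.text() in place of page.edit():

 import mwclient

 site = mwclient.Site('test.wikipedia.org')
 site.login('BotName', 'botpassword')  # placeholder credentials

 theimage = 'Example non-free image.jpg'
 # 'rb' is essential: text mode corrupts binary files before upload.
 with open('resized.jpg', 'rb') as f:
     site.upload(f, filename=theimage,
                 description='Reduce size of non-free image', ignore=True)

 # mwclient 0.8+: use page.text() instead of the deprecated page.edit().
 page = site.pages['File:' + theimage]
 wikitext = page.text()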
@DatGuy: Is this running? ~ Rob13Talk 11:36, 27 February 2017 (UTC)[reply]
Yes, although irregularly. I run it when I see more than one page in the category. Dat GuyTalkContribs 14:54, 27 February 2017 (UTC)[reply]

Copy coordinates from lists to articles

Virtually every one of the 3000-ish places listed in the 132 sub-lists of National Register of Historic Places listings in Virginia has an article, and with very few exceptions, both lists and articles have coordinates for every place, but the source database has lots of errors, so I've gradually been going through all the lists and manually correcting the coords. As a result, the lists are a lot more accurate, but because I haven't had time to fix the articles, tons of them (probably over 2000) now have coordinates that differ between article and list. For example, the article about the John Miley Maphis House says that its location is 38°50′20″N 78°35′55″W / 38.83889°N 78.59861°W / 38.83889; -78.59861, but the manually corrected coords on the list are 38°50′21″N 78°35′52″W / 38.83917°N 78.59778°W / 38.83917; -78.59778. Like most of the affected places, the Maphis House has coords that differ only a small bit, but (1) ideally there should be no difference at all, and (2) some places have big differences, and either we should fix everything, or we'll have to have a rather pointless discussion of which errors are too little to fix.

Therefore, I'm looking for someone to write a bot to copy coords from each place's NRHP list to the coordinates section of {{infobox NRHP}} in each place's article. A few points to consider:

  • Some places span county lines (e.g. bridges over border streams), and in many of these cases, each list has separate coordinates to ensure that the marked location is in that list's county. For an extreme example, Skyline Drive, a scenic 105-mile-long road, is in eight counties, and all eight lists have different coordinates. The bot should ignore anything on the duplicates list; this is included in citation #4 of National Register of Historic Places listings in Virginia, but I can supply a raw list to save you the effort of distilling a list of sites to ignore.
  • Some places have no coordinates in either the list or the article (mostly archaeological sites for which location information is restricted), and the bot should ignore those articles.
  • Some places have coordinates only in the list or only in the article's {{Infobox NRHP}} (for a variety of reasons), but not in both. Instead of replacing information with blanks or blanks with information, the bot should log these articles for human review.
  • Some places might not have {{infobox NRHP}}, or in some cases (e.g. Newport News Middle Ground Light) it's embedded in another infobox, and the other infobox has the coordinates. If {{infobox NRHP}} is missing, the bot should log these articles for human review, while embedded-and-coordinates-elsewhere is covered by the previous bullet.
  • I don't know if this is the case in Virginia, but in some states we have a few pages that cover more than one NRHP-listed place (e.g. Zaleski Mound Group in Ohio, which covers three articles); if the bot produced a list of all the pages it edits, a human could go through the list, find any entries with multiple appearances, and check them for fixes.
  • Finally, if a list entry has no article at all, don't bother logging it. We can use WP:NRHPPROGRESS to find what lists have redlinked entries.

No discussion has yet been conducted for this idea; it's just something I've thought of. I've come here basically just to see if someone's willing to try this route, and if someone says "I think I can help", I'll start the discussion at WT:NRHP and be able to say that someone's happy to help us. Of course, I wouldn't ask you actually to do any coding or other work until after consensus is reached at WT:NRHP. Nyttend (talk) 00:55, 16 January 2017 (UTC)[reply]
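If someone takes this on, the comparison step might look roughly like the sketch below (mwparserfromhell; the |coordinates= parameter of {{infobox NRHP}} is an assumption - some revisions use split latitude/longitude parameters - and the exception handling from the bullets above would sit on top of it):

 import mwparserfromhell

 def first_coord(wikitext):
     """Return the first {{coord}} template in the text, or None."""
     for tmpl in mwparserfromhell.parse(wikitext).filter_templates():
         if str(tmpl.name).strip().lower() == 'coord':
             return str(tmpl)
     return None

 def sync_coords(list_row_text, article_text, log):
     """Copy the list entry's coord into {{infobox NRHP}} when they differ."""
     list_coord = first_coord(list_row_text)
     code = mwparserfromhell.parse(article_text)
     for tmpl in code.filter_templates():
         if str(tmpl.name).strip().lower() != 'infobox nrhp':
             continue
         article_coord = (str(tmpl.get('coordinates').value).strip()
                          if tmpl.has('coordinates') else None)
         if list_coord and article_coord:
             if list_coord != article_coord:
                 tmpl.get('coordinates').value = ' ' + list_coord + '\n'
                 return str(code)
         elif list_coord or article_coord:
             log.append('coords on only one side - human review')
         return article_text
     log.append('no {{infobox NRHP}} - human review')
     return article_text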

Off-topic discussion
I'm a WikiProject NRHP member and I'd like to support what Nyttend is getting at. I support anyone considering Nyttend's question directly, but want to ask about a variation. Note, it's kind of unfortunate though that the source of coordinates is not identified by WikiProject NRHP editors, neither originally (when the source was probably the NRIS database) nor now. (Marking source of coordinates, going forward, is under discussion at Wikipedia talk:WikiProject National Register of Historic Places#Coordinates conversions, and should we be footnoting coordinates?) Perhaps what Nyttend is getting at, and more, could be done by a bot which would make a three-way comparison of coordinates in A) individual articles to B) coordinates in NRHP county list-articles to C) coordinates in the downloadable NRIS database. The NRIS database is the original source of most of the coordinates that Nyttend has painstakingly improved upon, for places in Virginia. I believe them that they have gone through Virginia carefully and that wherever they have changed coordinates in the (B) county list-articles that they have done that well. In other states it is much more random, and the coordinates might have been improved in an individual article OR in the county list-article. I personally have fixed coordinates in individual articles (A) but not in list-articles (B), working the opposite way from how Nyttend has done. Could a bot be programmed to make a three-way comparison? If A and B are the same as C, then mark them as being sourced from NRIS. If the state is Virginia, and just one out of A and B is different than C, then accept the change at the other place too and mark both A and B as being sourced by Nyttend's evaluation (using {{NRHPcoord}}) with "improvedby=Nyttend" parameter. If both A and B are different than C, then mark them as discrepancies (using template NRHPcoord with some suitable parameter). If either A or B already has been marked as improved, then improve the other one and copy the sourcing over. If the (C) NRIS coordinates cannot be found for a given site, then mark something else. I wonder, is it possible for someone to consider running this kind of three-way comparison (and would that be easier/better)? --doncram 02:52, 23 January 2017 (UTC)[reply]
No, any three-way comparison is a big distraction. What we need is a bot that will copy human-checked coordinates from lists to articles (with exceptions to be provided by me) and nothing else; we can worry about the other stuff at another time. Nyttend (talk) 15:38, 23 January 2017 (UTC)[reply]
What about the coordinates in Virginia lists that were not improved or verified, though? I checked two lists and see that Nyttend changed all 8 sets of coordinates in one, and changed 1 out of 3 sets of coordinates in another. And what about coordinates in individual articles that were improved by another editor (I don't know how many of these exist, but there will be some within the articles for 2,995 Virginia NRHP sites)? I think bot editing has to be restricted to cases where the edit will clearly be making an improvement.
A lesser task would be if a bot could mark, using template:NRHPcoord, the specific coordinates in Virginia lists that Nyttend changed recently, if a bot can examine edits and see that the coordinates were changed by Nyttend. That would still be helpful. --doncram 17:03, 25 January 2017 (UTC)[reply]
I have confirmed coordinates for every site in the state, aside from a few for which I did not have information, and I logged all of those. Most items on which I changed nothing are items in which the original coordinates were already correct; aside from the items I logged, there's no possibility of the current coordinates being wrong, unless I made a typo or misread a map or something like that. The bot shouldn't worry about whether I changed anything. Nyttend (talk) 20:12, 27 January 2017 (UTC)[reply]

Bot to help with FA/GA nomination process

The process is as follows: (Pasted from FA nomination page):

Before nominating an article, ensure that it meets all of the FA criteria and that peer reviews are closed and archived. The featured article toolbox (at right) can help you check some of the criteria. Place {{FAC}} (it should be substituted) at the top of the talk page of the nominated article and save the page. From the FAC template, click on the red "initiate the nomination" link or the blue "leave comments" link. You will see pre-loaded information; leave that text. If you are unsure how to complete a nomination, please post to the FAC talk page for assistance. Below the preloaded title, complete the nomination page, sign with ~~~~ and save the page.

Copy this text: Wikipedia:Featured article candidates/name of nominated article/archiveNumber (substituting Number), and edit this page (i.e., the page you are reading at the moment), pasting the template at the top of the list of candidates. Replace "name of ..." with the name of your nomination. This will transclude the nomination into this page. In the event that the title of the nomination page differs from this format, use the page's title instead.

Maybe a bot could automate that process? Thanks.47.17.27.96 (talk) 13:08, 16 January 2017 (UTC)[reply]

This was apparently copied here from WP:VPT; the original is here. --Redrose64 🌹 (talk) 21:34, 16 January 2017 (UTC)[reply]
I think that at WP:VPT, the IP was directed here - see here. TheMagikCow (talk) 19:40, 17 January 2017 (UTC)[reply]
There is some information that is required from the user, both with the FAC and GAN templates, that can't be inferred by a bot but requires human decision making. I don't think this would be that useful or feasible. BlueMoonset (talk) 21:35, 22 January 2017 (UTC)[reply]

Bot for category history merges

Back in the days when the facility to move category pages wasn't available, Cydebot made thousands of cut-and-paste moves to rename categories per CFD discussions. In the process of the renames, a new category would be created under the new name by the bot, with the edit summary indicating that it was "Moved from CATEGORYOLDNAME" and identifying the editors of the old category to account for the attribution. An example is here.

This method of preserving attribution is rather crude, and so it is desirable that the complete editing history of the category page be available for attribution. The process of recovering the deleted page histories has since been taken on by Od Mishehu, who has performed thousands of history merges.

I suggest that an adminbot be set up to go through Cydebot's contribs log, identify the categories that were created by it (i.e., the first edit on the page should be by Cydebot) and

  1. undelete the category mentioned in the Cydebot's edit summary
  2. history-merge it into the new category using Special:MergeHistory.
  3. Delete the left-over redirect under CSD G6.

This bot task is not at all controversial. This is just an effort to fill in missing page histories. Obviously, there would be no cases of any parallel histories encountered - and even if there were, it wouldn't be an issue since Special:MergeHistory cannot be used for merging parallel histories - which is to say that there is no chance of any unintended history mess-up. This should be an easy task for a bot. 103.6.159.72 (talk) 10:52, 18 January 2017 (UTC)[reply]
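The three steps map directly onto the MediaWiki API (action=undelete, action=mergehistory, action=delete). A sketch, assuming a hypothetical api_post helper that wraps an authenticated POST carrying a CSRF token:

 def histmerge(api_post, old_cat, new_cat):
     """Undelete the old category, merge its history, delete the leftover."""
     # 1. Restore the deleted revisions of the old category page.
     api_post(action='undelete', title=old_cat,
              reason='Restoring history for attribution merge')
     # 2. Merge edits predating the new category's creation into it.
     api_post(action='mergehistory', fromtitle=old_cat, totitle=new_cat,
              reason='Merging page history per CFD rename')
     # 3. Remove the leftover redirect (skip if the page was recreated).
     api_post(action='delete', title=old_cat,
              reason='[[WP:CSD#G6|G6]]: cleanup after history merge')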

There's one thing that I have overlooked above, though it is again not a problem. In some rare cases, it may occur that after the source page has been moved to the destination page, the source page may later have been recreated - either as a category redirect or as a real category. In such cases, just skip step #3 in the procedure described above. There will be edits at the source page that postdate the creation of the destination page, and hence by its design, Special:MergeHistory will not move these edits over - only the old edits that the bot has undeleted would be merged. (It may be noted that the MergeHistory extension turns the source page into a redirect only when all edits at the source are merged into the destination page, which won't be the case in such cases - this means that the source page that some guy recreated will remain intact.) All this is that simple. 103.6.159.72 (talk) 19:37, 18 January 2017 (UTC)[reply]
Is this even needed? I would think most if not all edits to category pages do not pass the threshold of originality to get copyright in the first place. Our own guidelines on where attribution is not needed reinforce this notion under US law, stating duplicating material by other contributors that is sufficiently creative to be copyrightable under US law (as the governing law for Wikipedia), requires attribution. That same guideline also mentions that a List of authors in the edit summary is sufficient for proper attribution, which is what Cydebot has been doing for years. Avicennasis @ 21:56, 20 Tevet 5777 / 21:56, 18 January 2017 (UTC)[reply]
Cydebot doesn't do it any longer. Since 2011 or sometime 2015, Cydebot renames cats by actually moving the page. So for the sake of consistency, we could do this for the older cats also. The on-wiki practice, for a very long time, has been to do a history merge wherever it is technically possible. The guideline that an edit summary is sufficient attribution is quite dated and something that's hardly ever followed. It's usually left as a worst-case option where a histmerge is not possible. History merge is the preferred method of maintaining attribution. Some categories like Category:Members of the Early Birds of Aviation do have some descriptive creative content. 103.6.159.72 (talk) 02:21, 19 January 2017 (UTC)[reply]
I'm not completely opposed to this, but I do think that we need to define which category pages are in scope for this. I suspect the vast majority of pages wouldn't need attribution, and we should be limiting the amount of pointless bot edits. Avicennasis @ 02:49, 21 Tevet 5777 / 02:49, 19 January 2017 (UTC)[reply]
It wasn't 2011 (it can't have been, since the ability to move category pages wasn't available to anybody until 22 May 2014, possibly slightly later, but certainly no earlier). Certainly Cydebot was still making cutpaste moves when I raised this thread on 14 June 2014; raised this thread; and commented on this one. These requests took some months to be actioned: checking Cydebot's move log, I find that the earliest true moves of Category: pages that were made by that bot occurred on 26 March 2015. --Redrose64 🌹 (talk) 12:07, 19 January 2017 (UTC)[reply]
Since we are already talking about using a bot, I think it makes sense to do them all (or else none at all) since that would come at no extra cost. Cherry-picking which ones the bot should do is just a waste of human editors' time. The edits won't be completely "pointless" - it's good to be able to see full edit histories. Talking of pointless edits, I should remind people that there are bots around that perform hundreds of thousands of pointless edits. 103.6.159.84 (talk) 16:14, 19 January 2017 (UTC)[reply]
As to when it became technically possible, I did it on May 26, 2014. עוד מישהו Od Mishehu 05:32, 20 January 2017 (UTC)[reply]
~94,899 pages, by my count. Avicennasis @ 03:36, 23 Tevet 5777 / 03:36, 21 January 2017 (UTC)[reply]
That should keep a bot busy for a week or more. The Usercontribs module pulls the processing queue. Here's the setup in the API sandbox. Click "make request" to see the results of a query to get the first three. Though I've never written an admin-bot before, I may take a stab at this within the next several days. – wbm1058 (talk) 04:28, 21 January 2017 (UTC)[reply]
The other major API modules to support this are Undelete, Mergehistory and Delete. This would be a logical second task for my Merge bot to take on. The PHP framework I use supports undelete and delete, but it looks like I'll need to add new functions for user-contribs and merge-history. In my RfA I promised to work the Wikipedia:WikiProject History Merge backlog, so it would be nice to take that off my back burner in a significant way. I'm hoping to leverage this into another bot task to clear some of the article-space backlog as well...
Coding... wbm1058 (talk) 13:06, 21 January 2017 (UTC)[reply]
My count is 89,894 pages. wbm1058 (talk) 00:58, 24 January 2017 (UTC)[reply]
@Wbm1058: Did you exclude the pages that have already been histmerged (by Od Mishehu and probably a few by other admins also)?— Preceding unsigned comment added by 103.6.159.67 (talkcontribs) 12:39, 24 January 2017 (UTC)[reply]
I was about to mention that. My next step is to check the deleted revisions for mergeable history. No point in undeleting if there is no mergeable history. Working on that now. – wbm1058 (talk) 14:40, 24 January 2017 (UTC)[reply]
Note this example of a past histmerge by Od Mishehu: Category:People from Stockport
Should this bot do that with its histmerges too? wbm1058 (talk) 21:51, 25 January 2017 (UTC)[reply]
Yes, when there is a list of users present (there were periods when the bot didn't do it, but most of the time it did). עוד מישהו Od Mishehu 22:24, 25 January 2017 (UTC)[reply]

Another issue: sometimes, a category was renamed multiple times. For example, Category:Georgian conductors->Category:Georgian conductors (music)->Category:Conductors (music) from Georgia (country); this must also be supported for categories where the second rename was recent, e.g. Category:Visitor attractions in Washington (U.S. state)->Category:Visitor attractions in Washington (state)->Category:Tourist attractions in Washington (state). Back-and-forth renames must also be considered, for example, Category:Tornadoes in Hawaii->Category:Hawaii tornadoes->Category:Tornadoes in Hawaii; this also must be handled in cases where the second rename was recent, e.g. Category:People from San Francisco->Category:People from San Francisco, California->Category:People from San Francisco. עוד מישהו Od Mishehu 05:35, 26 January 2017 (UTC)[reply]

Od Mishehu, this is also something I noticed. I'm thinking the best way to approach this is to start with the oldest contributions, and then merge forward so the last merge would be into the newest, currently active, category. Is that the way you would manually do this? So I think I need to reverse the direction that I was processing this, and work forward from the oldest rather than backward from the newest. Category:Georgian conductors was created at 22:56, 23 June 2008 by a human editor; that's the first (oldest) set of history to merge. At 22:38, 7 June 2010 Cydebot moved Category:Conductors by nationality to Category:Conductors (music) by nationality per CFD at Wikipedia:Categories for discussion/Log/2010 May 24#Category:Conductors. At 00:12, 8 June 2010 Cydebot deleted page Category:Georgian conductors (Robot - Moving Category Georgian conductors to Category:Georgian conductors (music) per CFD at Wikipedia:Categories for discussion/Log/2010 May 24#Category:Conductors.) So we should restore both Category:Georgian conductors and Category:Georgian conductors (music) in order to merge the 5 deleted edits of the former into the history of the latter. The new category creation by Cydebot that would trigger this history restoration and merging is
  • 00:11, 8 June 2010 . . Cydebot (187 bytes) (Robot: Moved from Category:Georgian conductors. Authors: K********, E***********, O************, G*********, Cydebot)
However, if you look at the selection set I've been using, you won't find this new category creating edit: 8 June 2010 Cydebot contributions
It should slot in between these:
To find the relevant log item, I need to search the Deleted user contributions
I'm looking for the API that gets deleted user contributions. This is getting more complicated. – wbm1058 (talk) 16:38, 26 January 2017 (UTC)[reply]
OK, Deletedrevs can list deleted contributions for a certain user, sorted by timestamp. Not to be confused with Deletedrevisions. wbm1058 (talk) 17:18, 26 January 2017 (UTC)[reply]
After analyzing these some more, I think my original algorithm is fine. I don't think it should be necessary for the bot to get involved with the deleted user contributions. What this means is that only the most recent moves will be merged on the first pass, as my bot will only look at Cydebot's active contributions history. The first pass will undelete and merge the most recently deleted history, which will expose additional moves that my bot will see on its second pass through the contributions. I'll just re-run until my bot sees no more mergeable items. The first bot run will merge Category:Georgian conductors (music) into Category:Conductors (music) from Georgia (country). The second bot run will merge Category:Georgian conductors into Category:Conductors (music) from Georgia (country). The first bot run will merge Category:Visitor attractions in Washington (U.S. state) into Category:Tourist attractions in Washington (state), and there's nothing to do on the second pass (there is no mergeable history in Category:Visitor attractions in Washington (state)). The first pass would merge Category:Hawaii tornadoes into Category:Tornadoes in Hawaii – I just did that for testing. The second pass will see that Category:Tornadoes in Hawaii should be history-merged into itself. I need to check for such "self-merge" cases and report them (a "self-merge" is actually a restore of some or all of a page's deleted history)... I suppose I should be able to restore the applicable history (only the history that predates the page move). Category:People from San Francisco just needs to have the "self-merge" procedure performed, as Category:People from San Francisco, California has no mergeable history. Thanks for giving me these use-cases, very helpful.
I should mention some more analysis from a test run through the 89,893 pages in the selection set. 2369 of those had no deleted revisions, so I just skip them. HERE is a list of the first 98 of those. Of the remaining 87,524 pages, these 544 pages aren't mergeable, because the timestamp of the oldest edit isn't old enough, so I skip them too. Many of these have already been manually history-merged. That leaves 86,980 mergeable pages that my bot should history-merge on its first pass. An unknown number of additional merges to be done on the second pass, then hopefully a third pass will either confirm we're done or mop up any remaining – unless there are cats that have moved four times... wbm1058 (talk) 22:42, 26 January 2017 (UTC)[reply]
Some of the pages with no deleted revisions are the result of a category rename where the source category was changed into something else (a category redirect or disambiguation), and a history merge in those cases should be done (I just did one such merge, the third on the list of 99). However, this may be too difficult for a bot to handle; I can deal with those over time if you give me a full list. The first 2 on the list you gave are different - the bot didn't delete them (it did usually, but not always), and they were removed without deletion by Jc37 and used as new categories. I believe, based on the link to the CFD discussion at the beginning, that the answer to that would be in Wikipedia:Categories for discussion/Log/2015 January 1#Australian politicians. עוד מישהו Od Mishehu 05:34, 27 January 2017 (UTC)[reply]

This whole thing seems a waste of time (why do we need to see old revisions of category pages that were deleted years ago), but if you want to spend your time writing and monitoring a bot that does this, I won't complain; it won't hurt anything. I'm just concerned by the comments up above that point out a lot of not-so-straightforward cases, like the tornadoes in Hawaii and the visitor attractions in Washington. How will the bot know what information is important to preserve and what isn't? Nyttend (talk) 05:28, 27 January 2017 (UTC)[reply]

The reasons for it, in my opinion:
  1. While most categories have no copyrightable information, some do; on these, we legally need to maintain the history. While Cydebot did this well for categories which were renamed once, it didn't for categories which were renamed more than once. Do any of these have copyrightable information? It's impossible to know.
  2. If we nominate a category for deletion, we generally should inform its creator - even if the creation was over 10 years ago, as long as the creator is still active. With deleted history, it's difficult for a human admin to do this, and impossible for automated nomination tools (such as Twinkle) or non-admins.
עוד מישהו Od Mishehu 05:37, 27 January 2017 (UTC)[reply]
  1. Because writing a bot is fun, isn't it? As only programmers know. And especially if the bot's gonna perform hundreds of thousands of admin actions.
  2. Because m:wikiarchaeologists will go to any lengths to make complete editing histories of pages visible, even if it's quite trivial. Using a bot shows a far more moderate level of eccentricity than doing it manually would. Why do you think Graham87 imported thousands of old page revisions from nostwiki?
103.6.159.76 (talk) 08:59, 27 January 2017 (UTC)[reply]

I think it may be best to defer any bot processing of these on the first iteration of this. Maybe after a first successful run, we can come back and focus on an automated solution for these as well. It's still a lot to be left for manual processing. I'll work on the piece that actually performs the merges later today. – wbm1058 (talk) 13:49, 27 January 2017 (UTC)[reply]

@Wbm1058: For the pages that were copy-pasted without the source category being deleted, you can still merge them. Use of Special:MergeHistory ensures that only the edits that predate the creation of the destination category will be merged. 103.6.159.90 (talk) 08:32, 29 January 2017 (UTC)[reply]

BRFA filed. I think this is ready for prime time. wbm1058 (talk) 01:17, 28 January 2017 (UTC)[reply]

Website suddenly took down a lot of its material, need archiving bot!

Per Wikipedia_talk:WikiProject_Academic_Journals#Urgent:_Beall.27s_list, several (if not most) links to https://scholarlyoa.com/ and subpages just went dead. Could a bot help with adding archive links to relevant citation templates (and possibly bare/manual links too)? Headbomb {talk / contribs / physics / books} 00:31, 19 January 2017 (UTC)[reply]

Cyberpower678, could you mark this domain as dead in IABot's database so that it will handle adding archive urls? — JJMC89(T·C) 01:13, 19 January 2017 (UTC)[reply]
@Cyberpower678: ? Headbomb {talk / contribs / physics / books} 10:50, 2 February 2017 (UTC)[reply]
Sorry, I never got the first ping. I'll mark it in a moment.—CYBERPOWER (Chat) 16:52, 2 February 2017 (UTC)[reply]
Only 61 urls were found in the DB with the domain.—CYBERPOWER (Chat) 17:39, 2 February 2017 (UTC)[reply]
@Cyberpower678: Well that's 61 urls that we needed! Would it be possible to have a list of those urls, or is that complicated? It would be really useful to project members to have those centralized in one place. Headbomb {talk / contribs / physics / books} 20:04, 13 February 2017 (UTC)[reply]
I would but, the DB is under maintenance right now.—CYBERPOWER (Be my Valentine) 20:06, 13 February 2017 (UTC)[reply]
I'll ping you next week then. Headbomb {talk / contribs / physics / books} 20:07, 13 February 2017 (UTC)[reply]
@Cyberpower678:. Headbomb {talk / contribs / physics / books} 21:00, 22 February 2017 (UTC)[reply]
The interface link will be made available soon, but...
Giant list removed. It may still be seen in the page history. Anomie 23:03, 24 February 2017 (UTC)[reply]
Cheers.—CYBERPOWER (Chat) 06:11, 24 February 2017 (UTC)[reply]
Please don't dump giant lists of stuff in this page. Put them in a subpage in your userspace and link to them instead. Thanks. Anomie 23:03, 24 February 2017 (UTC)[reply]

@Cyberpower678:, I may have been unclear in my request, but what I meant was: is it possible to have a consolidated list of the archived versions, and also have a bot update the current citation templates (and bare links, if any) with the archived version? Headbomb {talk / contribs / physics / books} 18:19, 24 February 2017 (UTC)[reply]

Non-free images used excessively

I'd like a few reports, if anyone's able to generate them.

1) All images in Category:Fair use images, defined recursively, which are used outside of the mainspace.

2) All images in Category:Fair use images, defined recursively, which are used on more than 10 pages.

3) All images in Category:Fair use images, defined recursively, which are used on any page whose title appears nowhere in the text of the file description page, i.e. if "File:Image1.jpg" was used on the page "Abraham Lincoln" but the text "Abraham Lincoln" appeared nowhere on the file page.

If anyone can handle all or some of these, it would be much appreciated. Feel free to write to a subpage in my userspace. ~ Rob13Talk 20:29, 21 January 2017 (UTC)[reply]
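A read-only pywikibot sketch covering reports 1 and 2 (recursing through the category tree from the API is slow; a database-replica query would be the practical route):

 import pywikibot

 site = pywikibot.Site('en', 'wikipedia')
 cat = pywikibot.Category(site, 'Category:Fair use images')

 for member in cat.members(recurse=True, namespaces=6):  # ns 6 = File:
     filepage = pywikibot.FilePage(member)
     uses = list(filepage.usingPages())
     outside_main = [p.title() for p in uses if p.namespace() != 0]
     if outside_main:                       # report 1
         print(filepage.title(), 'used outside mainspace:', outside_main)
     if len(uses) > 10:                     # report 2
         print(filepage.title(), 'used on', len(uses), 'pages')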

No bot needed for tasks 1 and 2:
  1. https://tools.wmflabs.org/betacommand-dev/nfcc/NFCC9.html
  2. https://tools.wmflabs.org/betacommand-dev/nfcc/high_use_NFCC.html
Task 3 was done by User:BetacommandBot, but the bot and its master have since been blocked. User:FairuseBot, I think. I'd very much like to see this task being done by a bot. – Finnusertop (talkcontribs) 20:42, 21 January 2017 (UTC)[reply]

Move GA reviews to the standard location

There are about 3000 articles in Category:Good articles that do not have a GA review at the standard location of Talk:<article title>/GA1. This is standing in the way of creating a list of GAs that genuinely do not have a GA review. Many of these pages have a pointer to the actual review location in the article milestones on the talk page, and these are the ones that could potentially be moved by bot.

There are two cases, the easier one is pages that have a /GA1 page but the substantive page has been renamed. An example is 108 St Georges Terrace whose review is at Talk:BankWest Tower/GA1. This just requires a page move and the milestones template updated. Note that there may be more than one review for a page (sometimes there are several failed reviews before a pass). GA reviews are identified in the milestones template with the field actionn=GAN and the corresponding review page is found at actionnlink=<review>. Multiple GA reviews are named /GA1, /GA2 etc but note that there is no guarantee that the review number corresponds to the n number in actionn.

The other case (older reviews, example 100,000-year problem) is where the review took place on the article talk page rather than a dedicated page. This needs a cut and paste to a /GA1 page and the review transcluding back on to the talk page. This probably needs to be semi-automatic with some sanity checks by human, at least for a test run (has the bot actually captured a review, is it a review of the target article, did it capture all of the review). SpinningSpark 08:30, 22 January 2017 (UTC)[reply]
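A sketch of the easier (renamed-article) case, with pywikibot and mwparserfromhell, following the actionn/actionnlink convention described above; updating the milestones link afterwards and handling /GA2 and later reviews are omitted:

 import re
 import pywikibot
 import mwparserfromhell

 site = pywikibot.Site('en', 'wikipedia')

 def fix_ga_review_location(article_title):
     talk = pywikibot.Page(site, 'Talk:' + article_title)
     code = mwparserfromhell.parse(talk.text)
     for tmpl in code.filter_templates():
         if str(tmpl.name).strip().lower() not in ('article history',
                                                   'articlehistory'):
             continue
         for param in tmpl.params:
             m = re.fullmatch(r'action(\d+)', str(param.name).strip())
             if not (m and str(param.value).strip().upper() == 'GAN'):
                 continue
             link = str(tmpl.get('action%slink' % m.group(1)).value).strip()
             target = 'Talk:%s/GA1' % article_title  # /GA2 etc. for repeats
             if link and link != target:
                 old = pywikibot.Page(site, link)
                 if old.exists() and not pywikibot.Page(site, target).exists():
                     old.move(target,
                              reason='Moving GA review to standard location')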

Discussion at Wikipedia talk:Good articles#Article incorrectly listed as GA here? and Wikipedia:Village pump (technical)/Archive 152#GA reviews SpinningSpark 08:37, 22 January 2017 (UTC)[reply]

Klisf.info dead links

For some time, the Russian soccer stats website Klisf.info has been inactive and unavailable. There are many links to this website (either as inline references or simple external links, like the one in this article) and they should be tagged as dead links (at least). --XXN, 12:54, 22 January 2017 (UTC)[reply]

Is no bot operator interested in this task? This is an important thing: there are a lot of articles based on only one Klisf.info dead link, and WP:VER is problematic. I don't request (yet) to remove these links - just tag them as dead, and another bot will try to update them with a link to an archived version, if possible. The FOOTY wikiproject was notified some time ago, but there is nothing controversial. XXN, 13:55, 10 February 2017 (UTC)[reply]
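The tagging itself is mechanical; a pywikibot sketch (the regex only handles bracketed external links - links inside citation templates would need the citation's dead-link parameter set instead):

 import re
 import pywikibot

 site = pywikibot.Site('en', 'wikipedia')
 PATTERN = re.compile(
     r'(\[https?://[^\s\]]*klisf\.info[^\]]*\])(?!\s*\{\{[Dd]ead link)')

 for page in site.exturlusage(url='klisf.info', namespaces=[0], total=100):
     new_text = PATTERN.sub(r'\1{{dead link|date=February 2017}}', page.text)
     if new_text != page.text:
         page.text = new_text
         page.save(summary='Tagging dead links to klisf.info')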

Add https to ForaDeJogo.net

Please change foradejogo.net links to https. I have already updated its templates. SLBedit (talk) 19:24, 22 January 2017 (UTC)[reply]

@SLBedit: see User:Bender the Bot and its contribs. You could probably contact the bot operator directly. XXN, 13:59, 10 February 2017 (UTC)[reply]
I'll keep it in mind. --bender235 (talk) 15:40, 10 February 2017 (UTC)[reply]

User's recognized content list

Lists like Wikipedia:WikiProject Physics/Recognized content, generated by User:JL-Bot/Project content, seem very neat. Is it possible to generate and maintain the same kind of list tied to a user instead of a WikiProject? For example, I could use it to have a list of the DYK/GA/FAs credited to me on my user page. HaEr48 (talk) 03:51, 23 January 2017 (UTC)[reply]

Let's ping JLaTondre (talk · contribs) on this. Headbomb {talk / contribs / physics / books} 04:43, 23 January 2017 (UTC)[reply]
He replied in User talk:JL-Bot#Generating User-centric recognized content and said that he doesn't have time to add this new feature right now, and that such a thing would be implemented a bit differently from JL-Bot's existing implementation. So we probably need a new bot. HaEr48 (talk) 06:50, 9 February 2017 (UTC)[reply]

Bot to delete emptied monthly maintenance categories

I notice that we have a bot, AnomieBOT, that automatically creates monthly maintenance categories (Femto Bot used to do it earlier). Going by the logs for a particular category, I find that it has been deleted and recreated about 10 times. While all recreations are by bots, the deletions are done by human administrators. Why so? Mundane, repetitive tasks like the deletion of such categories (under CSD G6) when they get emptied should be done by bots. This bot task is obviously non-controversial and absolutely non-contentious, since AnomieBOT will recreate the category if new pages appear in the category. 103.6.159.93 (talk) 14:21, 23 January 2017 (UTC)[reply]
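The deletion loop for an adminbot would be tiny; a pywikibot sketch (the emptiness check is the only real condition here - the extra safeguards discussed below, such as sole-author and talk-page checks, are left out):

 import pywikibot

 site = pywikibot.Site('en', 'wikipedia')
 queue = pywikibot.Category(
     site, 'Category:Candidates for uncontroversial speedy deletion')

 for cat in queue.subcategories():
     # Only touch empty dated maintenance categories.
     if cat.isEmptyCategory() and 'Monthly clean-up category' in cat.text:
         cat.delete(reason='[[WP:CSD#G6|G6]]: empty dated maintenance category',
                    prompt=False)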

Needs wider discussion. It should be easy enough for AnomieBOT III to do this, but I'd like to hear from the admins who actually do these deletions regularly whether the workload is enough that they'd want a bot to handle it. Anomie 04:54, 24 January 2017 (UTC)[reply]
Are these already being tagged for CSD by a bot? I don't work CAT:CSD as much as I used to, but rarely see these in the backlog there now. — xaosflux Talk 14:08, 24 January 2017 (UTC)[reply]
I think they are tagged manually by editors. Anyway, this discussion has now been moved to WP:AN#Bot to delete emptied monthly maintenance categories, for the establishment of consensus as demanded by Anomie. 103.6.159.67 (talk) 14:12, 24 January 2017 (UTC)[reply]
Thanks for taking it there, 103.6.159.67. It looks like it's tending towards "support", if that keeps up I'll write the code once the discussion there is archived. I also see some good ideas in the comments, I had thought of the "only delete if there are no edits besides AnomieBOT" condition already but I hadn't thought of "... but ignore reverted vandalism" or "don't delete if the talk page exists". Anomie 03:21, 25 January 2017 (UTC)[reply]
@Xaosflux: No, {{Monthly clean-up category}} (actually {{Monthly clean-up category/core}}) automatically applies {{Db-g6}} if the category contains zero pages. Anomie 03:12, 25 January 2017 (UTC)[reply]
My experience shows this is safe to delete. They can even be recreated when needed (usually a delayed reversion in a page edit history). -- Magioladitis (talk) 23:09, 24 January 2017 (UTC)[reply]
The question isn't if they're safe to delete, that's obvious. The question is whether the admins who actually process these deletions think it's worth having a bot do it since there doesn't seem to be any backlog. Anomie 03:12, 25 January 2017 (UTC)[reply]
Category:Candidates for uncontroversial speedy deletion is almost always empty when I drop by it. — xaosflux Talk 03:39, 25 January 2017 (UTC)[reply]

The AN discussion is archived now, no one opposed. I put together a task to log any deletions such a bot would make at User:AnomieBOT III/DatedCategoryDeleter test‎, to see if it'll actually catch anything. If it logs actual deletions it might make I'll make a BRFA for actually doing them. Anomie 14:45, 31 January 2017 (UTC)[reply]

Bot to remove old warnings from IP talk pages

There is consensus for removing old warnings from IP talk pages. See Wikipedia_talk:Criteria_for_speedy_deletion/Archive_9#IP_talk_pages and Wikipedia:Village_pump_(proposals)/Archive_110#Bot_blank_and_template_really.2C_really.2C_really_old_IP_talk_pages.. This task has been done using AWB by BD2412 for several years now. Until around 2007, it was also being done by Tawkerbot.

I suggest that a bot should be coded up to remove all sections from IP talk pages that are older than 2 years, and add the {{OW}} template to the page if it doesn't already exist (placed at the top of the page, but below any WHOIS/sharedip templates). There are many reasons why this should be done by a bot. (i) Bot edits marked as minor do not cause the IPs to get a "You have new messages" notification when the IP talk page is edited. (ii) Blankings done using AWB also remove any WHOIS/sharedip templates, for which there is no consensus. (iii) This is the type of mundane task that should be done by bots. Human editors should not waste their time with this, rather spend it on tasks that require some human intelligence. 103.6.159.93 (talk) 14:41, 23 January 2017 (UTC)[reply]
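A sketch of the age check with mwparserfromhell - only level-2 sections are examined, so WHOIS/sharedip templates in the lead are left alone; the timestamp regex assumes standard en.wiki signatures:

 import re
 import datetime
 import mwparserfromhell

 CUTOFF = datetime.datetime.utcnow() - datetime.timedelta(days=2 * 365)

 def blank_old_sections(wikitext):
     """Drop level-2 sections whose newest signature is over two years old."""
     code = mwparserfromhell.parse(wikitext)
     removed = False
     for section in code.get_sections(levels=[2]):
         stamps = []
         for m in re.finditer(r'\d{2}:\d{2}, \d{1,2} \w+ \d{4}', str(section)):
             try:
                 stamps.append(
                     datetime.datetime.strptime(m.group(), '%H:%M, %d %B %Y'))
             except ValueError:
                 pass  # not actually a signature timestamp
         if stamps and max(stamps) < CUTOFF:
             code.remove(section)
             removed = True
     text = str(code)
     if removed and '{{OW}}' not in text:
         text = '{{OW}}\n' + text  # in practice: below any shared-IP templates
     return text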

Needs wider discussion. These are pretty old discussions to support this sort of mass blanking of talk pages. If I recall correctly, an admin deleted a bunch of IP user talk pages a while back and this proved controversial. This needs a modern village pump discussion. ~ Rob13Talk 20:21, 24 January 2017 (UTC)[reply]
Here is one such discussion that I initiated. I think that two years is a bit too soon; five years is reasonable. When I do these blankings with AWB, I typically go back seven years, just because it is easy to skip any page with a date of 2010 or later on the page. I think some flexibility could be built in based on the circumstances. An IP address from which only one edit has ever been made, resulting in one comment or warning in response, is probably good for templating after no more than three years. I would add that I intentionally remove the WHOIS/sharedip templates because, again, these are typically pages with nothing new happening in the past seven (and sometimes ten or eleven) years. We are not a permanent directory of IP addresses. bd2412 T 01:01, 25 January 2017 (UTC)[reply]
@BU Rob13: don't be silly. There has been consensus for this since 2006. Tawkerbot did it till 2007, and BD2412 has been doing it for years, without anyone disputing the need for it on his talk page. You correctly remember that MZMcBride used an unapproved bot to delete over 400,000 IP talk pages in 2010. That was obviously controversial, since there is consensus only for blankings, not for deletions. Any new discussion on this will only result in repetition of arguments. The only thing that needs discussion is the approach. 103.6.159.84 (talk) 04:29, 25 January 2017 (UTC)[reply]
  • I wrote above "remove all sections from IP talk pages that are older than 2 years". I realise that this was misunderstood. What I meant was: remove the sections in which the last comment is over 2 years old. This is a more moderate proposal. Do you agree with this, BD2412? 103.6.159.84 (talk) 04:29, 25 January 2017 (UTC)[reply]
    I have two thoughts on that. First, I think that going after individual sections, as opposed to 'everything but the top templates', is a much harder task to program. I suppose it would rely on the last date in a signature in the section, or on reading the page history. Secondly, I think that there is an enormous number of pages that would have all sections removed even under that criterion, so we may as well start with the easy task of identifying those pages and clearing everything off of them. If we were to go to a section-by-section approach, I would agree with a two-year window. bd2412 T 04:35, 25 January 2017 (UTC)[reply]
As mentioned, deletion should NOT be done (and is also not requested). Deletion hides tracks that may be of interest. Discussions on the talkpage of an IP used by an editor years ago can be relevant to edits to mainspace pages (every now and then there are edits with the summary 'per discussion on my talk'), and deletion hides the fact that certain IPs that behaved badly were actually warned. Say a company spams in 2010, gets several warnings, sits still for 7 years, and then someone spams again: we might consider blacklisting with the reasoning 'you were warned in 2010, and now you are at it again'. It may be a different person behind a different IP, and the current editor may not even be aware of the earlier situation, but it is the same organisation that is responsible. If the talkpage 'exists', and we find the old IP that showed the behaviour, it is easy to find the warnings back; if it involves 15 IPs of which 2 were heavily warned, and those two pages are now also redlinks, we need someone with the admin bit to check deleted revisions on 15 talkpages - in other cases, anyone can do it.
Now, regarding blanking: what would be the arguments against archiving threads on talkpages where:
  1. the thread is more than X old (2 years?)
  2. the IP did not edit in the last Y days (1 year?)
We would just insert a custom template in the header, like {{IPtalkpage-autoarchive}}, which points to the automatically created archives and provides a lot of explanation, and we have a specified bot that archives these pages as long as the conditions are met (condition 2 is sketched below). The only downside is that it would preserve utterly useless warnings (though some editors reply to warnings and engage in discussion, and are sometimes right, upon which the perceived vandalism is re-performed); the upside is that it also preserves constructive private discussions.
(@BD2412: regarding your "I think that going after individual sections, as opposed to 'everything but the top templates' is a much harder task to program" - the former is exactly what our archiving bots do). --Dirk Beetstra T C 05:51, 25 January 2017 (UTC)[reply]
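For what it's worth, condition 2 above is cheap to test against the live API. A minimal sketch in Python with the requests library, using the standard MediaWiki usercontribs query (the User-Agent string is only a placeholder):

import requests
from datetime import datetime, timedelta

API = 'https://en.wikipedia.org/w/api.php'

def ip_inactive(ip, days=365):
    """True if the IP has made no edits in the last `days` days."""
    r = requests.get(API, params={
        'action': 'query', 'list': 'usercontribs',
        'ucuser': ip, 'uclimit': 1, 'format': 'json',
    }, headers={'User-Agent': 'ip-talk-archiver-sketch/0.1 (example)'})
    contribs = r.json()['query']['usercontribs']
    if not contribs:
        return True  # no recorded edits at all
    # usercontribs returns the newest edit first by default
    last = datetime.strptime(contribs[0]['timestamp'], '%Y-%m-%dT%H:%M:%SZ')
    return datetime.utcnow() - last > timedelta(days=days)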
As far as I am aware, editors have previously opposed archiving of IP talk pages, so this would require wider discussion at an appropriate forum first. Regarding removal of warnings by section, I don't think there is any need to bother about when the IP last edited -- the whole point of removing old warnings is to ensure that the current (or future) users of the IP don't see messages that were intended for someone who used the IP years ago. Ideally, a person who visits their IP talk page should see only the messages intended for them. 103.6.159.84 (talk) 06:19, 25 January 2017 (UTC)[reply]
That consensus could have changed - it may indeed need a wider community consensus. As I read the above thread, however, removal is not restricted to warnings only; it mentions remove the sections in which the last comment is over 2 years old, which would also include discussions. Now, archiving is not a must; one is allowed to simply delete old threads on one's 'own' talkpage.
Whether you archive or delete, the effect is the same in both cases: the thread that is irrelevant to the current user of the IP is no longer on the talkpage itself. And with highly dynamic IPs, or with IPs that are used by multiple editors at the same time, it is completely impossible to address the 'right editor'; you will address all of them. On the other hand, some IPs stay for years with the same physical editor, and the messages that are deleted will be relevant to the current user of the page, even if they did not edit for years. And that brings me to the point of whether the editor has been editing in the last year (or whichever time period one chooses): if the IP is continuously editing, there is a higher chance that the editor is the same than when an IP has not been editing for a year (though in both cases the IP can be static or not static, thwarting that analysis and making it necessary to check on a case-by-case basis, which would preclude bot use). --Dirk Beetstra T C 10:53, 25 January 2017 (UTC)[reply]
I favour archiving using the {{wan}} template rather than blanking. It alerts future editors that there have been previous warnings. If the IP belongs to an organisation, they might just possibly look at the old warnings and discover that the things they are about to do were done before and were considered bad. SpinningSpark 12:07, 25 January 2017 (UTC)[reply]
I think that archiving for old IP talk pages is very problematic. One of the main reasons I am interested in blanking these pages is to reduce link load - the amount of dross on a "What links here" page that obscures the links from that page to other namespaces, which is particularly annoying when a disambiguator is trying to see whether relevant namespaces (mainspace, templates, modules, files) are clear of links to a disambiguation page. All archiving does for IP talk pages is take a group of random conversations - link load and all - and disassociate them from their relevant edit history, which is what a person investigating the IP address is most likely to need. This is very different from archiving article talk pages or wikispace talk pages, where we may need to look back at the substance of old discussions. bd2412 T 14:08, 25 January 2017 (UTC)[reply]
Agree with that. I also don't think archiving of IP talk pages is useful. In any case, it needs to be discussed elsewhere (though IMO it's unlikely to get consensus). There is no point in bringing it up within this bot request. 103.6.159.89 (talk) 15:59, 25 January 2017 (UTC)[reply]
I see the point of that, but that is also the reason why some people want to see what links to a page - to find where the discussions were. The thread above is rather unspecific, and suggests blanking ALL discussions, not only warnings. And those are the things that are sometimes of interest: plain discussions regarding a subject, or even discussions following a warning. If the talkpage discussions obscure your view, then you can choose to select incoming links per namespace.
@103.6.159.89: if there is no consensus to blank, but people are discussing whether it should be blanking or archiving or nothing, then there is no need for a discussion here - bots should simply not be doing this. I agree that the discussion about what should be done with it should be somewhere else. --Dirk Beetstra T C 03:29, 26 January 2017 (UTC)[reply]
You cannot choose to select incoming links per namespace if you need to see multiple namespaces at once to figure out the source of a problem. For example, sometimes a link shows up in the results for a page but cannot actually be found on that page, because it is transcluded from another namespace (a template, a portal, a module, under strange circumstances possibly even a category or file), and you need to look at all the namespaces at once to determine the connection. It would be nice if the interface allowed that, but that would be a different technical request. bd2412 T 17:06, 28 January 2017 (UTC)[reply]
I agree that that is a different technical request. But the way this request is now written (to remove all sections from IP talk pages that are older than 2 years), I am afraid that important information could be wiped. I know the problems with the Wikimedia development team (regarding feature requests etc.; I have my own frustrations about that), but alternatives should be implemented with extreme care. I would be fine with removal of warnings (but not if those warnings resulted in discussion), but not with any other discussions, and I would still implement timing restrictions (not having edited for x amount of time, etc.). --Dirk Beetstra T C 07:32, 29 January 2017 (UTC)[reply]
If there is a really useful discussion on an IP talk page that has otherwise gone untouched for half a decade or more, then that discussion should be moved to a more visible location. We shouldn't be keeping important matters on obscure pages, and given the hundreds of thousands of existing IP talk pages, there isn't much that can be more obscure than the random set of numbers designating one of those. (Yes, I know they are not really random numbers, but for purposes of finding a particular one, they may as well be). bd2412 T 17:02, 5 February 2017 (UTC)[reply]
@BD2412: and how are you going to see that (and what is the threshold of importance)? When you archive it is at least still there, with blanking any discussion is 'gone'. --Dirk Beetstra T C 03:54, 9 February 2017 (UTC)[reply]
Then people will learn not to leave important discussions on IP talk pages with no apparent activity after half a decade or more. bd2412 T 04:13, 9 February 2017 (UTC)[reply]
You're kidding, right? Are we here to collaboratively create an encyclopedia, or are we here to teach people a lesson? --Dirk Beetstra T C 05:49, 9 February 2017 (UTC)[reply]
We are not here to create a permanent collection of random IP talk page comments. bd2412 T 00:33, 14 February 2017 (UTC)[reply]

Fix duplicate references in mainspace

Hi. Apologies if this is malformed. I'd like to see a bot that can do this without us depending on a helpful human with AWB chancing across the article. --Dweller (talk) Become old fashioned! 19:11, 26 January 2017 (UTC)[reply]

As a kind of clarification, if an article doesn't use named references because the editors of that article have decided not to, we don't want to require the use of named references to perform this kind of merging. In particular, AWB does not add named references if there are not already named references, in order to avoid changing the citation style. This is mentioned in the page linked above (which is an AWB subpage), but it is an important point for bot operators to keep in mind. — Carl (CBM · talk) 19:27, 26 January 2017 (UTC)[reply]
Been here bonkers years and never come across that, thanks! Wikipedia:Citing_sources#Duplicate_citations suggests finding other ways to fix duplicates. I don't know what those other ways are, but if that makes it too difficult, maybe the bot could only patrol articles that already make use of the refname parameter. --Dweller (talk) Become old fashioned! 19:57, 26 January 2017 (UTC)[reply]
It's easy enough for a bot to limit itself to articles with at least one named ref; a scan for that can be done at the same time as a scan for duplicated references, since both require scanning the article text. — Carl (CBM · talk) 20:26, 26 January 2017 (UTC)[reply]
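To make that concrete, a rough sketch of the combined scan over the raw wikitext (Python; it deliberately ignores self-closing <ref name="..." /> reuse tags and the quoting edge cases a real bot would have to handle):

import re
from collections import Counter

REF = re.compile(r'<ref(\s+name\s*=\s*"?([^">/]+)"?)?\s*>(.*?)</ref>',
                 re.S | re.I)

def scan_refs(wikitext):
    """Return (duplicated reference bodies, whether any named refs exist)."""
    bodies = Counter()
    has_named = False
    for m in REF.finditer(wikitext):
        if m.group(2):
            has_named = True
        bodies[m.group(3).strip()] += 1
    dupes = [body for body, n in bodies.items() if n > 1]
    return dupes, has_named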
Smashing! Thanks for the expertise. --Dweller (talk) Become old fashioned! 20:44, 26 January 2017 (UTC)[reply]
Note: this is not what is meant by CITEVAR. It is perfectly fine to add names to references. All the best: Rich Farmbrough, 00:48, 5 February 2017 (UTC).[reply]

NB Your chart, above, is reading my signature as part of my username. Does that need a separate Bot request ;-) --Dweller (talk) Become old fashioned! 12:00, 1 February 2017 (UTC)[reply]

Create a simple TemplateData for all Infoboxes

Request
Check whether the /doc page of each template listed in WP:List of infoboxes contains <templatedata>, <templatedata />, or <templatedata/>:

If none exists, add to the bottom:

<templatedata>
{
	"params": {},
	"paramOrder": [],
	"format": "block"
}
</templatedata>

— አቤል ዳዊት?(Janweh64) (talk) 17:05, 27 January 2017 (UTC)[reply]

Why? How is the editor's or reader's experience improved if this is done? – Jonesey95 (talk) 16:15, 27 January 2017 (UTC)[reply]
It is only step one of a series of bot operations I have in mind to systematically create a baseline TemplateData for all biography infoboxes by importing data from Infobox person. Maybe it is too small a step. Sorry, this is my first bot request. The next step would be to check whether the template contains a "honorific_prefix" parameter and, if so, add between the {}:
"honorific_prefix": {"description": "To appear on the line above the person's name","label": "Honorific prefix","aliases": ["honorific prefix"]},
and
"honorific_prefix",
inside the []. Step by step we could accomplish the goals set out by this daunting task. The same idea could be used to create TemplateData for other infoboxes, or even many other templates, by inheriting the data from their parents.— አቤል ዳዊት?(Janweh64) (talk) 17:05, 27 January 2017 (UTC)[reply]
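A sketch of what that step-two edit could look like as a plain JSON transformation of the /doc wikitext; the function and its arguments are illustrative, not an existing tool:

import json
import re

TD = re.compile(r'<templatedata>(.*?)</templatedata>', re.S | re.I)

def add_param(doc_wikitext, name, info, aliases=()):
    """Insert one parameter into an existing <templatedata> block."""
    m = TD.search(doc_wikitext)
    if not m:
        return doc_wikitext  # step one has not run yet
    data = json.loads(m.group(1))
    if name in data.get('params', {}):
        return doc_wikitext  # don't clobber hand-written entries
    data.setdefault('params', {})[name] = dict(info, aliases=list(aliases))
    data.setdefault('paramOrder', []).append(name)
    new_json = json.dumps(data, indent='\t', ensure_ascii=False)
    return (doc_wikitext[:m.start(1)] + '\n' + new_json + '\n'
            + doc_wikitext[m.end(1):])

# e.g.:
# new_doc = add_param(doc, 'honorific_prefix',
#     {'description': "To appear on the line above the person's name",
#      'label': 'Honorific prefix'}, aliases=['honorific prefix'])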
It sounds like this idea needs more development. I suggest having a discussion at that TemplateData talk page, coming up with a plan that could be executed by a bot, and then coming back here with that plan. – Jonesey95 (talk) 17:31, 27 January 2017 (UTC)[reply]
This discussion is proof that discussing things accomplishes nothing. Never mind, I will just learn how to build a bot myself and get it approved. — አቤል ዳዊት?(Janweh64) (talk) 23:10, 27 January 2017 (UTC)[reply]
Sorry to disappoint you. So that you don't waste your time and get more frustrated, here's one more piece of information: you will find that when you make a bot request, you will also be asked for a link to the same sort of discussion. If you take a look at Wikipedia:Bots/Requests for approval, you will see these instructions: If your task could be controversial (e.g. most bots making non-maintenance edits to articles and most bots posting messages on user talk pages), seek consensus for the task in the appropriate forums. Common places to start include WP:Village pump (proposals) and the talk pages of the relevant policies, guidelines, templates, and/or WikiProjects. Link to this discussion from your request for approval. This is how things work. – Jonesey95 (talk) 00:51, 28 January 2017 (UTC)[reply]
I was rude. You were kind. — አቤል ዳዊት?(Janweh64) (talk) 13:46, 28 January 2017 (UTC)[reply]

WP:UAAHP

Hi, is it possible for a bot, such as DeltaQuadBot, to remove stale reports at the UAA holding pen (those blocked and those with no action in seven days), like it does with blocked users and declined reports at WP:UAAB? If this is not possible I would be happy to create my own bot account and have it do this task instead. Thanks! Linguisttalk|contribs 22:23, 28 January 2017 (UTC)[reply]

You should ask DeltaQuad if she would consider adding it to her bot. Also, is this something that the UAA admins want? — JJMC89(T·C) 22:39, 28 January 2017 (UTC)[reply]
I haven't asked the UAA regulars but I'm sure this would be helpful. In fact, I'm almost the only one who cleans up the HP and it would be helpful to me. Linguisttalk|contribs 22:41, 28 January 2017 (UTC)[reply]
This would certainly be helpful if it could remove any report that is more than seven days old where the account has not edited at all. This is the bulk of what gets put in the holding pen, so keeping it up to date would be quite simple if this type of report were removed automatically. Beeblebrox (talk) 22:10, 19 February 2017 (UTC)[reply]
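Both per-report tests are easy against the API; a hedged sketch of the two checks (currently blocked, and never edited), using the standard list=users and list=usercontribs queries. The seven-day staleness test is just timestamp arithmetic on the report's signature:

import requests

API = 'https://en.wikipedia.org/w/api.php'
HEADERS = {'User-Agent': 'uaa-hp-sketch/0.1 (example)'}

def is_blocked(username):
    """True if the account is currently blocked."""
    r = requests.get(API, params={
        'action': 'query', 'list': 'users', 'ususers': username,
        'usprop': 'blockinfo', 'format': 'json'}, headers=HEADERS)
    # the 'blockid' key is present only for blocked accounts
    return 'blockid' in r.json()['query']['users'][0]

def has_edited(username):
    """True if the account has made at least one edit."""
    r = requests.get(API, params={
        'action': 'query', 'list': 'usercontribs', 'ucuser': username,
        'uclimit': 1, 'format': 'json'}, headers=HEADERS)
    return bool(r.json()['query']['usercontribs'])

A report older than seven days where is_blocked() is true, or where has_edited() is false, would be the removal candidate described above.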

One-off bot to ease archiving at WP:RESTRICT

This isn't urgent, or even 100% sure to be needed, but it looks likely based on this discussion that we will be moving listings at WP:RESTRICT to an archive page if the user in question has been inactive for two years or more. Some of the restrictions involve more than one user and would require a human to review them, but it would be awesome if a bot could determine whether a user listed there singly had not edited at all in two or more years, and if so automatically transfer their listing to the archive. There are also some older restrictions that involved a whole list of users (I don't think arbcom does that anymore), and in several of those cases all of the users are either blocked or otherwise totally inactive. This would only be needed once, just to reduce the workload to get the archive started. (The list is extremely long, which is why this was proposed to begin with.) Is there a bot that could manage this? Beeblebrox (talk) 18:46, 4 February 2017 (UTC)[reply]

Ongoing would be better, and even bringing back "resurrected" users might be helpful too. All the best: Rich Farmbrough, 01:01, 5 February 2017 (UTC).[reply]
 Doing... All the best: Rich Farmbrough, 23:25, 13 February 2017 (UTC).[reply]
Awesome, the discussion was archived without a formal close, but consensus to do this is pretty clear. Beeblebrox (talk) 20:57, 15 February 2017 (UTC)[reply]

Addition of tl:Authority control

There are very many biographies - tens of thousands - for which Wikidata has authority control data (example Petscan) and where such data is not displayed in the article. {{Authority control}}, with no parameters, displays authority control data from Wikidata. It is conventionally placed immediately before the list of categories at the foot of an article. It is used on 510,000+ articles and appears to be the de facto standard for handling authority control data on Wikipedia.

Would this make a good bot task: use the petscan lists, like the above, to identify articles for which {{Authority control}} can be placed at the foot of the article? --Tagishsimon (talk) 11:51, 6 February 2017 (UTC)[reply]
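The edit itself is mechanical. A sketch of the placement step on raw wikitext (Python; reading the Petscan list and saving pages are left out, and the duplicate check is deliberately naive):

import re

def add_authority_control(wikitext):
    """Insert {{Authority control}} just above the first category link."""
    if re.search(r'\{\{\s*Authority control', wikitext, re.I):
        return wikitext  # already present
    m = re.search(r'^\[\[\s*Category\s*:', wikitext, re.M | re.I)
    if m is None:
        # no categories at all; append at the foot of the article
        return wikitext.rstrip() + '\n\n{{Authority control}}\n'
    return wikitext[:m.start()] + '{{Authority control}}\n' + wikitext[m.start():]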

I think there is already a bot doing this. -- Magioladitis (talk) 18:48, 11 February 2017 (UTC)[reply]

Tagishsimon User:KasparBot adds Authority Control. -- Magioladitis (talk) 09:24, 12 February 2017 (UTC)[reply]

Thanks Magioladitis. I've added to User talk:T.seppelt. --Tagishsimon (talk) 17:34, 12 February 2017 (UTC)[reply]

Doing... I'm taking care of it. --19:01, 12 February 2017 (UTC) — Preceding unsigned comment added by T.seppelt (talkcontribs)

Requesting bot for wikisource

I'm not sure exactly what to say here, at least in part because I'm not sure exactly what functions we are seeking a bot to do. But there is currently a discussion at wikisource:Wikisource:Scriptorium#Possible bot about trying to get some sort of bot which would be able to generate an output page roughly similar to Wikipedia:WikiProject Christianity#Popular pages, and similar pages for the portals, authors, and categories over there at wikisource. I as an individual am not among the most knowledgeable editors there. On that basis, I think it might be useful to get input from some of the more experienced editors there regarding any major issues which might occur to either a bot developer or them but not me. Perhaps the best way to do this would be to respond at the first section linked above, and for the developer to announce himself, perhaps in a separate subsection of the linked thread there, to iron out any difficulties. John Carter (talk) 14:31, 6 February 2017 (UTC)[reply]

How about a bot to update (broken) sectional redirects?

When a section heading is changed, it breaks all redirects targeting that heading. Those redirects then incorrectly lead to the top of the page rather than to the appropriate section.

Is this desirable and feasible? If so, how would such a script work? The Transhumanist 22:14, 6 February 2017 (UTC)[reply]

This may turn out to be a WP:CONTEXTBOT. How often do people delete the section entirely, or split the section into two (then which should the bot pick?), or revise the section such that the redirect doesn't really apply anymore? Can the bot correctly differentiate these cases from cases where it can know what section to change the target to?
Such a script would presumably work by watching RecentChanges for edits that change a section heading, and then would check all redirects to the article to see if they targeted that section. It would probably want to delay before actually making the update in case the edit gets reverted or further edited. Anomie 22:29, 6 February 2017 (UTC)[reply]
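The rewrite itself would be the trivial part; a sketch of just that step (Python; the hard part, per the above, is deciding which heading changes are safe to follow at all):

import re

def retarget(redirect_wikitext, page, old_heading, new_heading):
    """Point #REDIRECT [[page#old_heading]] at the renamed section."""
    pattern = re.compile(
        r'(#REDIRECT\s*\[\[\s*' + re.escape(page) + r'\s*#\s*)'
        + re.escape(old_heading) + r'(\s*\]\])', re.I)
    # a lambda avoids backslash surprises if new_heading contains '\'
    return pattern.sub(lambda m: m.group(1) + new_heading + m.group(2),
                       redirect_wikitext)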

Script works intermittently

Hi guys, I'm stuck.

I forked the redlink remover script above, with an eye toward possibly developing a bot from it in the future, after I get it to do what I need on pages one-at-a-time. But first I need to get it to run. Sometimes it works and sometimes it doesn't (mostly doesn't).

For example, the original worked on Chrome for AlexTheWhovian but not for me. But later, it started working for no apparent reason. I also had the fork I made (from an earlier version) working on two machines with Firefox. But I turned one off for the night. And in the morning, it worked on one machine and not the other.

The script I'm trying to get to work is User:The Transhumanist/OLUtils.js.

I'm thinking the culprit is a missing resource module or something.

Is there an easy way to track down what resources the script needs in order to work? Keep in mind I'm a newb. The Transhumanist 01:41, 11 February 2017 (UTC)[reply]

After some trial and error, I learned the following: in Firefox, if I run the Feb 28 2016 version of User:AlexTheWhovian/script-redlinks.js and if I use it to strip redlinks from a page (I didn't save the page), then I can load the 15:05, December 26, 2016 version and it works.

Does anyone have any idea why using one script (not just loading it) will cause another script to work? I'm really confused. The Transhumanist 05:33, 11 February 2017 (UTC)[reply]

Maybe one has dependencies that it doesn't load itself, instead relying on other scripts to load them. --Redrose64 🌹 (talk) 21:55, 11 February 2017 (UTC)[reply]
The author said it was standalone. (They are both versions of the same script.) I now have them both loaded, so I can more easily use the first one (User:The Transhumanist/redlinks Feb2016.js) to enable the other (User:The Transhumanist/OLUtils.js). Even the original author doesn't know why it isn't working.
What's the next step in solving this? The Transhumanist 06:46, 12 February 2017 (UTC)[reply]
You've changed the outer part: that's what I would suspect; maybe it's not loading the mw library properly. Possibly the best way is to make the changes step by step, with a browser restart between each. (Or better still, binary chop.) All the best: Rich Farmbrough, 22:32, 13 February 2017 (UTC).[reply]

Creating a list of red-linked entries at Recent deaths

I request a bot to create and maintain a list consisting of red-linked entries grabbed from the Deaths in 2017 page, as and when they get added there. These entries, as you may know, are removed from the "Deaths in ..." pages if an article about the subject isn't created within a month. It would be useful to maintain a list comprising just the red entries (from which they are not removed on any periodic basis) for editors to go through. This would increase the chances of new articles being created. Preferably at Wikipedia:WikiProject Biography/Recent deaths red list, or in the bot's userspace to begin with. (In the latter case, the bot wouldn't need any BRFA approval.) 103.6.159.71 (talk) 12:54, 15 February 2017 (UTC)[reply]

Check book references for self-published titles

This is in response to this thread at VPT. So here's the problem. We have a list of vanity publishers whose works should be used with extreme caution, or never (some of these publishers exclusively publish bound collections of Wikipedia articles). But actually checking whether the publisher of a reference added to Wikipedia is on this list is time-consuming. However, it occurs to me that in some cases this should be simple to automate. At any Amazon webpage for a book, there is a line for the publisher, marked "publisher". On any GoogleBooks webpage, there is a similar line to be found in the metadata hiding in the page source. If an ISBN is provided in the reference, it can be searched on WorldCat to identify the publisher.

So it seems to me like a bot should be able to do the following:

1) Watch recent changes for anything that looks like a link or reference to a book, such as a "cite book" template, a number that looks like an ISBN, or a link to a website like Amazon or GoogleBooks
2) Follow the link (if to Amazon or GoogleBooks), or search the ISBN (if provided), to identify the publisher
3) Check the publisher against the list of vanity publishers
4) Any positive hits could then be automatically reported somewhere on Wikipedia. There could even be blacklisted publishers (such as those paper mirrors of Wikipedia I mentioned) that the bot could automatically revert, after we're sure there are few/no false positives.

What do people think? Doable? Someguy1221 (talk) 00:13, 16 February 2017 (UTC)[reply]
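Steps 1 and 3 above look straightforward on raw wikitext; a sketch (Python; the publisher names in the set are placeholders for the actual on-wiki list, and step 2, resolving an ISBN to a publisher via WorldCat or a retailer page, is not shown):

import re

# Placeholder entries; the real list lives on-wiki
VANITY_PUBLISHERS = {'lulu', 'authorhouse', 'books llc'}

# Will also match some plain 10-digit numbers; a real run should
# validate the ISBN check digit before doing any lookup.
ISBN = re.compile(r'\b(?:97[89][- ]?)?(?:\d[- ]?){9}[\dXx]\b')
PUBLISHER = re.compile(r'\|\s*publisher\s*=\s*([^|}]+)')

def suspicious_citations(wikitext):
    """Return (publisher= values matching the list, ISBNs needing lookup)."""
    hits = []
    for m in PUBLISHER.finditer(wikitext):
        name = m.group(1).strip().lower()
        if any(v in name for v in VANITY_PUBLISHERS):
            hits.append(m.group(1).strip())
    return hits, ISBN.findall(wikitext)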

Bot to move files to Commons

Surely files that a trusted user like Sfan00 IMG has reviewed and marked as suitable for moving to Commons can be moved without any further review by a bot? All these files are tagged with {{Copy to Commons|human=Sfan00 IMG}} and appear in Category:Copy to Wikimedia Commons reviewed by Sfan00 IMG. There are over 11,000 files. I personally have no experience in dealing with files and so can't talk about the details, but I reckon something like CommonsHelper would be useful? I have asked Sfan about this, but they have been inactive for 3 days.

If the process is likely to be error-free, I suppose that instead of marking the transferred files as {{NowCommons}} (which creates more work for admins in deleting the local copy), the bot could outright delete them under CSD F8. 103.6.159.65 (talk) 05:14, 16 February 2017 (UTC)[reply]

Technical details: such a bot would need to operate on a user-who-tagged-the-file basis; I'd envision using a parameter with the tagging user's username, combined with some setup comparable to {{db-u1}} to detect if the last user to edit the page was someone other than the user whose name appears in the parameter. On eligibility for specific user participation, I'm hesitant with Sfan00 IMG, basically because it's a semiautomated script account, and I'd like to ensure that every such file be checked manually first; of course, if ShakespeareFan00 is checking all these images beforehand and then tagging the checked images with the script, that's perfectly fine. Since you asked my perspective as a dual-site admin: on the en:wp side, the idea sounds workable, and bot-deleting the files sounds fine as long as we're programming it properly. On the Commons side, I hesitate. We already have several bots that do the Commons side of things, and they tend to do a rather poor-quality job; they can accurately copy the license and description, but they often mess up with the date and sometimes have problems with copying template transclusion properly, and they're horrendous with categories (which are critical for Commons images) — basically the only options with such bots are leaving the images entirely uncategorised, or ending up with absolute junk, e.g. "Companies of [place]" on portraits because the subject was associated with a company from that place; you can see one bad example in this revision of File:Blason Imbert Bourdillon de la Platiere.svg. If we deem it a good idea, adding another Commons bot would be fine; the issue is whether having a bot do this at all is a good idea on the Commons side. Nyttend (talk) 05:55, 16 February 2017 (UTC)[reply]
phew, there are 200k files in Category:Copy to Wikimedia Commons (bot-assessed)‎ which need further review. But what is surprising is that there are over 12,000 in Category:Copy to Wikimedia Commons maincat, which must all have been tagged by humans (because the bot-tagged ones are in the former cat). I wonder whether it would be a good idea to have a bot identify the tagger from the page history and add the |human= parameter. I also note that there are some files like File:Ambyun official.jpg that were tagged by Sfan without the human parameter. 103.6.159.65 (talk) 15:17, 16 February 2017 (UTC)[reply]
I've written a tool, Wikipedia:MTC!, which does have an option for mass-transfer. While it is reasonably accurate, I still think it is important for human to review each file before and after transfer. -FASTILY 00:10, 18 February 2017 (UTC)[reply]
I would echo Nyttend's concerns, especially as a few I'd tagged in good faith were subsequently found to have copyright concerns despite my best efforts.
There is also a category that I created for files inline-assessed as based on a PD licence.

Those might also be amenable to semi-automated transfer, with the above caveats. Sfan00 IMG (talk) 10:50, 20 February 2017 (UTC)[reply]

Declined Not a good task for a bot. Mostly because this would require approval at Commons as well, and the Commons community would eat anyone who suggested this alive. These really need manual review before transfer. ~ Rob13Talk 11:28, 27 February 2017 (UTC)[reply]
The admins at Commons are known to block users for persistently uploading copyrighted images. Just imagine what would happen if we were to mass-transfer images without performing a thorough copyright check. --Redrose64 🌹 (talk) 13:31, 27 February 2017 (UTC)[reply]

Linkfix: www.www. to www.

We have 87 links in the form www.www.foo.bar which should really be www.foo.bar - the form www.www. is normally a fault. A simple link checker with text replacement would help.--Oneiros (talk) 13:49, 16 February 2017 (UTC)[reply]
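This really is a simple substitution; a sketch (Python; each hit should still be eyeballed, since a rare host could legitimately start with www.www.):

import re

WWW = re.compile(r'(https?://)?(?:www\.){2,}', re.I)

def fix_www(text):
    """Collapse www.www.(www.)... down to a single www. prefix."""
    return WWW.sub(lambda m: (m.group(1) or '') + 'www.', text)

# fix_www('http://www.www.foo.bar/x') -> 'http://www.foo.bar/x'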

OK - I will have a look into this. TheMagikCow (talk) 17:19, 16 February 2017 (UTC)[reply]
BRFA filed TheMagikCow (talk) 10:33, 18 February 2017 (UTC)[reply]

Could someone have one of their bots update this page frequently? A bot once updated it, but stopped in 2014. MCMLXXXIX 16:59, 16 February 2017 (UTC)[reply]

Hi 1989. We have Special:LongPages. Is that insufficient? --MZMcBride (talk) 05:30, 17 February 2017 (UTC)[reply]
Yes. That list only shows articles, while the page I referenced includes talk pages. MCMLXXXIX 09:21, 17 February 2017 (UTC)[reply]
I have popped up a page on tool labs that lists the fifty longest talk pages that are not user talk pages or sub-pages. Hope this helps. - TB (talk) 12:35, 17 February 2017 (UTC)[reply]

Bot to update Alexa ranks

OKBot apparently used to do this, but blew up in April 2014 and has never been reactivated. It would be quite handy, as there are a lot of articles that contain Alexa ranks, and the ranks change frequently. Triptothecottage (talk) 05:35, 18 February 2017 (UTC)[reply]

WP 1.0 bot

Hi, there is a problem with the WP 1.0 bot, which produces the various project assessment tables and logs of changes to article assessments. The bot has not been working since early February, and both of the stated maintainers are no longer active. We need someone to get the bot operating again, and possibly a couple of people who could take over its maintenance. Any offers? Keith D (talk) 23:51, 19 February 2017 (UTC)[reply]

@Keith D: Oh no ... that's a bit of a mess. Have you tried reaching out to the maintainers via email? In the absence of a handing over of existing code, a bot operator would need to start from scratch. Someone could definitely do it (not me ... but someone), but it would take longer. ~ Rob13Talk 00:13, 20 February 2017 (UTC)[reply]
Thanks, sorry for not putting my brain in gear; I had forgotten about e-mail. Keith D (talk) 13:08, 20 February 2017 (UTC)[reply]
@Keith D and BU Rob13: There is a formal procedure for adding maintainers/taking over an abandoned project: Tool Labs Abandoned tool policy. --Bamyers99 (talk) 14:47, 20 February 2017 (UTC)[reply]

I request a bot to remove any wikilink on a page that redirects back to the same page or section. Thanks. Anythingyouwant (talk) 03:08, 22 February 2017 (UTC)[reply]

It isn't clear to me exactly what is being asked for. If a link points to a redirect that points to a different section in the original article, the link should not be removed. If anything, it should be replaced with a section link; see Wikipedia:Manual of Style/Linking#Section links (second paragraph). Anyway, in all cases care is needed for redirects that have recently been created from existing articles. Sometimes such redirects are controversial and will be reverted. Thincat (talk) 08:55, 22 February 2017 (UTC)[reply]
I said the same section, not a different section. Here is an example of what the bot would do. In that example, the redirect has existed for years (since 2013).Anythingyouwant (talk) 17:22, 22 February 2017 (UTC)[reply]
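Detection is simple enough; a sketch using pywikibot's page and redirect helpers (how to then unlink the text, as in the example diff, is the part needing care):

import pywikibot

def self_redirect_links(title):
    """Yield titles linked from `title` that redirect straight back to it."""
    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, title)
    for linked in page.linkedPages():
        if linked.isRedirectPage() and linked.getRedirectTarget() == page:
            yield linked.title()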

This is already part of CHECKWIKI. I can do them semi-automatically. -- Magioladitis (talk) 17:51, 22 February 2017 (UTC)[reply]

Replace br tags with plainlists in Infobox television

I would like a bot to replace br tags with plainlists in Infobox television. -- Magioladitis (talk) 10:31, 26 February 2017 (UTC)[reply]
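Assuming consensus, the per-value transformation is simple; a sketch of just that rewrite (Python; a real bot would isolate the infobox parameter values with a proper parser such as mwparserfromhell rather than regexes):

import re

BR = re.compile(r'\s*<br\s*/?\s*>\s*', re.I)

def br_to_plainlist(value):
    """Turn 'A<br />B<br>C' into an equivalent {{Plainlist}} block."""
    items = [s for s in (p.strip() for p in BR.split(value)) if s]
    if len(items) < 2:
        return value  # nothing to list
    return '{{Plainlist|\n' + '\n'.join('* ' + s for s in items) + '\n}}'

# br_to_plainlist('BBC One<br />BBC Two') ->
# '{{Plainlist|\n* BBC One\n* BBC Two\n}}'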

Please provide a few diffs to give an idea of which parameters, etc. we're talking about. ~ Rob13Talk 11:21, 27 February 2017 (UTC)[reply]

Reformation

Protestant Reformation was moved to Reformation. I'd like a bot to replace the many piped links [[Protestant Reformation|Reformation]] by the simple link. --Gerda Arendt (talk) 10:14, 27 February 2017 (UTC)[reply]

Needs wider discussion. @Gerda Arendt: That's probably more trouble than it's worth. Such an edit would be a purely cosmetic fix to the wiki markup, which violates WP:COSMETICBOT. You could seek consensus that this is a useful bot task at a broad community venue, but I doubt that would be easy to build. The piped links don't hurt anything, so probably not worth your time. ~ Rob13Talk 11:25, 27 February 2017 (UTC)[reply]
If it's not easy - looked easy enough to me - I will just do the "cosmetics" myself. --Gerda Arendt (talk) 11:32, 27 February 2017 (UTC)[reply]
@Gerda Arendt: It's technically trivial, but there would need to be strong consensus for overriding COSMETICBOT in this instance. That would have to come at a venue like WP:VPT or something similar, probably. ~ Rob13Talk 11:41, 27 February 2017 (UTC)[reply]
As I said before: If it's not easy (which includes easy to achieve) I will do the cosmetics myself ;) --Gerda Arendt (talk) 11:45, 27 February 2017 (UTC)[reply]

Gerda Arendt It's not cosmetic to actually point to the correct article. -- Magioladitis (talk) 12:06, 27 February 2017 (UTC)[reply]

I did it for the four templates on the page, but agree that there, it didn't even fall under pipe linking. --Gerda Arendt (talk) 12:10, 27 February 2017 (UTC)[reply]
Piece of cake. -- Magioladitis (talk) 12:18, 27 February 2017 (UTC)[reply]

This may not be WP:COSMETICBOT, but it certainly is WP:NOTBROKEN. Headbomb {talk / contribs / physics / books} 12:27, 27 February 2017 (UTC)[reply]

It's misleading to pipe to a redirect when the correct term is shown in the text. Moreover, it only had to change 4 templates. -- Magioladitis (talk) 12:34, 27 February 2017 (UTC)[reply]
How does that mislead? How does bypassing a redirect change the visual output of the page (or anything else, for that matter). ~ Rob13Talk 12:35, 27 February 2017 (UTC)[reply]

Headbomb, see what happens when the mouse hovers over the link. Thanks for clarifying that Rob's argument was wrong. -- Magioladitis (talk) 12:37, 27 February 2017 (UTC)[reply]

Do you mean this link: Reformation? What the mouse hovering reveals is that we are complicated, but perhaps that needs to be shown a few more times. - Of course the visual appearance is the same, but why go via a redirect? In articles I write, I will not do that. --Gerda Arendt (talk) 12:46, 27 February 2017 (UTC)[reply]

Gerda Arendt did not ask for [[Protestant Reformation]] to change but only for [[Protestant Reformation|Reformation]]. -- Magioladitis (talk) 12:38, 27 February 2017 (UTC)[reply]

This is textbook WP:NOTBROKEN. I don't see what's so hard to understand about it. Headbomb {talk / contribs / physics / books} 12:40, 27 February 2017 (UTC)[reply]

NOTBROKEN reads: It is almost never helpful to replace [[redirect]] with [[target|redirect]]. It says nothing about replacing [[redirect|target]] with [[target]], because that would allow people to pipe through misspellings, invalid redirects, etc. The first case shows correctly when the mouse hovers over the link; the second is misleading. -- Magioladitis (talk) 12:43, 27 February 2017 (UTC)[reply]

Related(?) comment: I try to imagine how my neighbors would react to a link like this one: [[FYROM|Republic of Macedonia]]. Hehe. -- Magioladitis (talk) 12:55, 27 February 2017 (UTC)[reply]

  • Whether or not changing all instances of [[Protestant Reformation|Reformation]] to [[Reformation]] by bot is WP:COSMETICBOT is subject to WP:CONSENSUS. I think Rob was right to point out that such consensus needs to be established before a bot can proceed with operating such a task. Thus far the discussion here shows no consensus either way. If my opinion were asked, I'd side with those who adopt the COSMETICBOT approach.
  • Regarding the applicability of WP:NOTBROKEN it is true that the case is not explicitly mentioned in that guidance. Nonetheless I think the fact that "Protestant Reformation" shows up on mouseover is generally an advantage. Not all readers would in all contexts see the connection to the Protestantism-related topic when seeing Reformation in an article: the mouseover clarifies that even without clicking the bluelink. So "don't fix what isn't broken" applies as a general principle I think (whether or not the specific case is literally explained in the NOTBROKEN guidance). --Francis Schonken (talk) 13:19, 27 February 2017 (UTC)[reply]
    • My disagreement is about the pipe. If we agree that the correct place to link is "Protestant Reformation", then the correct link in the page should be [[Protestant Reformation]] (independent of the fact that this is a redirect). My example was meant to show that. In Greek articles I would use FYROM per the Manual of Style, and I would avoid the use of RoM in all cases. This does not depend on the fact that the English Wikipedia uses RoM as the page title. -- Magioladitis (talk) 13:31, 27 February 2017 (UTC)[reply]
      • IMHO your argument is misleading. There's nothing "misleading" in [[Protestant Reformation|Reformation]] (whether with or without mouseover). The [[FYROM|Republic of Macedonia]] example is of no relevance: we're not discussing it; we're discussing changing all instances of [[Protestant Reformation|Reformation]] to [[Reformation]] by bot. If people need to understand the intricacies regarding FYROM / Republic of Macedonia before they can understand what's going on regarding (Protestant) Reformation, I think that strengthens what I was trying to say above: not all people would, upon reading "Reformation", automatically understand that in Wikipedia context this usually means "Protestant Reformation", in which case what shows up on mouseover is helpful. --Francis Schonken (talk) 13:45, 27 February 2017 (UTC)[reply]
  • If an article is repetitive, such as when it states the same thing three times in one paragraph, a bot should be able to change that so the article states only what needs to be stated. This would make articles on this website much more uniform.