Wikipedia:Bot requests: Difference between revisions

Remove links from user talk pages to domain squatter: JaGa, R'n'B, and Dispenser control the bot
:Does [[User:JaGa]] give permission to edit his talk page posts? He hasn't logged in since April. -- [[User:GreenC|<font color="#006A4E">'''Green'''</font>]][[User_talk:GreenC|<font color="#009933">'''C'''</font>]] 12:48, 1 October 2017 (UTC)
:: Him, [[User:R'n'B]], and myself control the bot. These links are a boon for the domain squatter and are worthless once they're fixed. — [[User:Dispenser|Dispenser]] 14:00, 1 October 2017 (UTC)
List of pages in [https://www.mediawiki.org/wiki/Extension_default_namespaces#ID_0-99 namespaces 0-15] that contain the string "dispenser.homenet.org":
;Namespace 0:
*[[Russian military intervention in the Syrian Civil War]]
*[[Donetsk People's Republic]]
;Namespace 1:
* (857 pages)
;Namespace 2:
* (395 pages)
;Namespace 3:
* (25489 pages)
;Namespace 4:
* (414 pages)
;Namespace 5:
* (59 pages)
;Namespace 6-7:
* (0 pages)
;Namespace 8:
*[[MediaWiki:Gadget-dropdown-menus-vector.js]]
;Namespace 9:
*[[MediaWiki talk:Linkshere]]
;Namespace 10:
*[[Template:Siadn]]
*[[Template:GeoTemplate]]
*[[Template:Dablinks/FAQ]]
*[[Template:WikiProject College football navbox]]
*[[Template:Good article tools]]
*[[Template:LDSTaskBox]]
*[[Template:Editor tools]]
*[[Template:Featured article tools/sandbox]]
*[[Template:Good article tools/PR]]
*[[Template:Did you know nominations/Telmatobius ventriflavum]]
*[[Template:Featured article tools/without list]]
*[[Template:Did you know nominations/People v Marquan M.]]
*[[Template:Book report start]]
*[[Template:Dablinks/sandbox]]
;Namespace 11:
*[[Template talk:BLP]]
*[[Template talk:Disambiguation needed]]
*[[Template talk:DYK tools]]
*[[Template talk:Disambiguation cleanup]]
*[[Template talk:Swimmingrecord]]
*[[Template talk:Featured article tools]]
*[[Template talk:Db-meta/Archive 3]]
*[[Template talk:Jewish and Arab localities in Israel]]
*[[Template talk:Sister project links/Archive 1]]
*[[Template talk:Good article tools]]
*[[Template talk:United States presidential election, 2016/Archive 1]]
;Namespace 12:
*[[Help:Link]]
*[[Help:Citation Style 1]]
*[[Help:What links here]]
*[[Help:Citation tools]]
;Namespace 13:
*[[Help talk:Disambiguation]]
;Namespace 14-15:
* (0 pages)

-- [[User:GreenC|<font color="#006A4E">'''Green'''</font>]][[User_talk:GreenC|<font color="#009933">'''C'''</font>]] 15:35, 1 October 2017 (UTC)
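A per-namespace list like the one above can be regenerated with the MediaWiki search API and an `insource:` query. A minimal sketch using only the standard library (the parameter set follows the documented `list=search` module; treat the exact values as something to verify against the API docs):

```python
# Sketch: find pages whose wikitext contains a literal string, one
# namespace at a time, via the MediaWiki search API (list=search).
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def build_search_params(term, namespace, limit=500):
    """Build API parameters for an insource: search in one namespace."""
    return {
        "action": "query",
        "list": "search",
        "srsearch": 'insource:"%s"' % term,
        "srnamespace": namespace,
        "srlimit": limit,
        "format": "json",
    }

def pages_containing(term, namespace):
    """Return page titles in `namespace` containing `term` (first batch only)."""
    url = API + "?" + urlencode(build_search_params(term, namespace))
    with urlopen(url) as resp:
        data = json.load(resp)
    return [hit["title"] for hit in data["query"]["search"]]

# Example: pages_containing("dispenser.homenet.org", 10)  # 10 = Template
```

A real run would loop over namespaces 0-15 and follow the `continue` token for result sets larger than one batch.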

Revision as of 15:36, 1 October 2017

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots: tasks that are either too complicated to program or that lack consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).

Alternatives to bot requests

Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did so with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.


Redirect talk pages with only banners

Per Wikipedia:Village_pump_(proposals)/Archive 139#Redirect_talk_pages_with_only_banners, I'm requesting a bot with the following logic

For an example where this is already done, see WT:NJOURNALS. Headbomb {t · c · p · b} 19:39, 3 May 2017 (UTC)[reply]

This is something I have been thinking about for a while. The converse might also be useful. I presume we could also include other standard templates like {{Talk header}}. All the best: Rich Farmbrough, 19:21, 4 May 2017 (UTC).[reply]
There is no consensus, nor point, in adding {{talk header}} to redirected talk pages. Headbomb {t · c · p · b} 23:11, 4 May 2017 (UTC)[reply]
You misunderstand. I presume we could count such pages as "having only banners". All the best: Rich Farmbrough, 20:33, 5 May 2017 (UTC).[reply]
Oh, my bad then. Yes, that makes sense. Headbomb {t · c · p · b} 06:22, 11 June 2017 (UTC)[reply]
I'm still a little hazy as to why the Talk header template would be placed on a page where there should never be any discussion. It seems that it may mislead editors to start up discussions on the redirect talk page.  Paine Ellsworth  put'r there  13:09, 14 July 2017 (UTC)[reply]
(Archived discussion, for posterity.) Sounds like a good idea, working on some code that counts how many pages like this exist. Enterprisey (talk!) 22:54, 5 June 2017 (UTC)[reply]
@Enterprisey: Any updates on this? Headbomb {t · c · p · b} 14:26, 11 July 2017 (UTC)[reply]
None at the moment, although I can get back to you later today with a possible BRFA. Enterprisey (talk!) 15:44, 11 July 2017 (UTC)[reply]
Fortunately, my existing task 10 also works on talk pages of redirects, so development of this should be a lot easier. Enterprisey (talk!) 02:58, 12 July 2017 (UTC)[reply]
Headbomb and Rich Farmbrough, I was wondering whether I should put {{Redirect category shell}} on the redirects that the bot creates. I think it would be helpful, but we would probably need to come up with an appropriate redirect-sorting template to avoid putting all the redirects into Category:Miscellaneous redirects. Enterprisey (talk!) 04:03, 12 July 2017 (UTC)[reply]
Maybe. I'm rather indifferent to it, but it's probably not a bad idea. I suppose the best place to ask is the redirect category shell talk page. The problem I see is that the redirects are very varied and it would be very hard to determine by bot what type of redirect they are. I don't see what kind of special redirect category could be created for them either. Headbomb {t · c · p · b} 07:44, 12 July 2017 (UTC)[reply]
I suspect that Paine Ellsworth (talk · contribs) would have an opinion. Probably it would be useful to identify those that need categorising. All the best: Rich Farmbrough, 18:06, 12 July 2017 (UTC).[reply]
Thank you for the ping, Rich. Frankly, I see no benefit in changing these talk pages to "hard" redirects, when I've been working diligently to turn them into soft redirects with the {{Talk page of redirect}} template. That template is placed at the TOP of the page above banners and has a link to both the subject page and the talk page of the target. It also has a message that no discussion should take place on that page. I would strongly support a bot project that would place the Talk page of redirect template at the TOP of such talk pages.  Paine Ellsworth  put'r there  13:23, 14 July 2017 (UTC)[reply]
Discussion started on the redirects WikiProject talk page about whether it's acceptable to add empty rcat shells to all the new redirects this task will create: WT:RE#Adding the shell template to new redirects during a bot task. Enterprisey (talk!) 16:26, 31 July 2017 (UTC)[reply]
Doesn't seem to be any traction on the use of the shell. @Paine Ellsworth: {{Talk page of redirect}} should likely be used on pages that this bot wouldn't touch (i.e. there are existing discussions / more than banners). Headbomb {t · c · p · b} 22:40, 3 August 2017 (UTC)[reply]
You could very well be right. I remember a long time ago having a discussion with an editor (can't remember who, I think it may have been Steel1943) about what to do with talk pages that only have banners and no discussions. I had been converting them to redirects, but the editor convinced me that it would be better to use the {{Talk page of redirect}} template instead. I've been doing that ever since. Didn't really see much difference, since the Talk page of redirect template basically just makes the talk page a "soft redirect".  Paine Ellsworth  put'r there  08:23, 4 August 2017 (UTC)[reply]
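The detection step the bot needs can be sketched as a pure text check: treat a talk page as "banners only" if, after stripping HTML comments and whitespace, its wikitext consists of nothing but top-level template transclusions ({{WikiProject ...}} banners, shells, {{Talk header}}). This is an illustrative heuristic, not the actual bot's logic:

```python
# Sketch: decide whether a talk page's wikitext is "banners only",
# i.e. contains nothing except template transclusions.
import re

def strip_comments(text):
    """Remove <!-- ... --> HTML comments."""
    return re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)

def is_banners_only(wikitext):
    """True if the page body consists solely of {{...}} templates."""
    text = strip_comments(wikitext).strip()
    if not text:
        return False  # an empty page is not a banners-only page
    i, n = 0, len(text)
    while i < n:
        if text.startswith("{{", i):
            depth, i = 1, i + 2
            while i < n and depth:
                if text.startswith("{{", i):
                    depth += 1
                    i += 2
                elif text.startswith("}}", i):
                    depth -= 1
                    i += 2
                else:
                    i += 1
            if depth:
                return False  # unbalanced braces; leave for a human
        elif text[i].isspace():
            i += 1
        else:
            return False  # prose or anything else outside a template
    return True
```

Pages passing this check (and whose subject page is a redirect) would then be replaced with a redirect to the target's talk page; pages failing it would be left alone, or get {{Talk page of redirect}} per the discussion above.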

Convert protocol relative URLs to http/https

All protocol-relative links on Wikipedia should be converted to either http or https. As of June 2015, Wikipedia is 100% HTTPS-only, and because protocol-relative links inherit the protocol of the page that hosts them, they will always render as HTTPS. This means any underlying website that doesn't support HTTPS will break. For example:

[1] (//americanbilliardclub.com/about/history/)

...the HTTP version of this link works. The article American rotation shows it in action: the first three footnotes are broken because they use a protocol-relative link to an HTTP-only website, but Wikipedia is rendering the link as HTTPS.

More info at WP:PRURL and Wikipedia:Village_pump_(technical)#Protocol_relative_URLs. It's probably 10s of thousands of links broken. -- GreenC 21:06, 8 June 2017 (UTC)[reply]
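The conversion itself is mostly string work; the fiddly part is matching the leading `//` of a protocol-relative link without touching the `//` inside `http://` or `https://`. A sketch (the regex is an approximation for illustration, not a vetted pattern):

```python
# Sketch: pin an explicit scheme onto protocol-relative URLs in wikitext.
import re

# A "//" that is not preceded by ":" (so http:// and https:// are left
# alone) and is immediately followed by a hostname character.
PRURL = re.compile(r"(?<!:)//(?=[a-z0-9])", re.IGNORECASE)

def force_scheme(wikitext, scheme="http"):
    """Prefix every protocol-relative URL with an explicit scheme."""
    return PRURL.sub(scheme + "://", wikitext)
```

Under the narrower proposal discussed below, a bot would apply this with `scheme="http"` only to links whose target fails over HTTPS, leaving working links untouched.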

This should only be done if the existing link is proven to be broken, and where forcing it to http: conclusively fixes it. Otherwise, if the link is not dead under either protocol, it is WP:COSMETICBOT. --Redrose64 🌹 (talk) 21:45, 8 June 2017 (UTC)[reply]
Well, let's ask: what happens if you keep them? It creates a point of failure. If the remote site stops supporting HTTPS, the link immediately breaks. There is no guarantee a bot will return years later and recheck. WP:COSMETICBOT is fine, but it shouldn't prevent us from removing a protocol that causes indefinite maintenance problems and that MediaWiki no longer really supports. Removing it also discourages editors from further usage, which is good. -- GreenC 22:07, 8 June 2017 (UTC)[reply]
That reasoning makes no sense. If a bot converts the link to https and the remote site stops supporting HTTPS, then the link immediately breaks then too. Anomie 00:22, 9 June 2017 (UTC)[reply]
Different reasoning. IABot forces HTTPS on all PR URLs since Wikipedia does too, when it analyzes the URL. It's erroneously seeing some URLs as dead as a consequence since they don't support SSL. The proposal is to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work.—CYBERPOWER (Message) 02:22, 9 June 2017 (UTC)[reply]
@Cyberpower678: The proposal, as specified above by Green Cardamom (talk · contribs) is not to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work, but to convert all PR URLs to either http or https. No exceptions were given, not even those that are presently functioning. This seems to be on the grounds that some are broken. --Redrose64 🌹 (talk) 09:06, 9 June 2017 (UTC)[reply]
Do I want to get rid of PR URLs? I personally think we should, because they confuse editors, confuse other bots, and are ugly, non-standard, and an unnecessary complication. If we don't want to get rid of them (all), we still need to fix the broken HTTP links either way. -- GreenC 14:35, 9 June 2017 (UTC)[reply]
  • As someone who's been strongly involved with URL maintenance over the last 2 years, I think this bot should be run on Wikipedia, and should enforce protocols. It's pushing WP:COSMETICBOT but if the link ends up being broken because only HTTP works, then that will create other issues. The task can be restricted to only converting those not functional with HTTPS, but my first choice is to convert all. — Preceding unsigned comment added by Cyberpower678 (talkcontribs) 01:38, 13 June 2017 (UTC)[reply]
Opining as a bot op: I personally don't think this can be read as having community consensus because it's going to create a lot of revisions for which there is no appreciable difference. Yes it would be nice if wikipedia was smart enough to figure out if the relative URL is accessable only via HTTP or can be accessed via https, but the link is clicked in the user's browser and therefore the user doesn't know that the content may be accessable via HTTPS or HTTP. Ideally, users entering relative URLS could be reminded via a bot that it's better to be explicit with what protocol needs to be used to get to the content. The counter is we could set a bot to hunt down all the relative URLS and put a maintanance tag/category in the reference block so that a human set of eyes can evaluate if the content is exclusively available via one route or if the content is the same on both paths.

TLDR: This request explicitly bumps against COSMETICBOT, needs further consensus, and there might be a way to have "maintenance" resolve the issue. Hasteur (talk) 12:38, 13 June 2017 (UTC)[reply]

Those are all good ideas but too much for me to take on right now. Agree there is no community consensus about changing working protocol-relative links; however, the existing protocol-relative links to HTTP-only sites, broken since June 2015, should be fixed asap. A bot should be able to do that as an ordinary broken-link job without specific community consensus (beyond a BRFA). Broken links should be fixed. That's something I can probably do, unless someone else wants to (I have lots of other work..). Note this fix would not interfere with any larger plans to deal with relative links. -- GreenC 15:26, 13 June 2017 (UTC)[reply]
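The broken-link job GreenC describes reduces to a per-link decision: keep HTTPS if the site answers over it, pin HTTP if only HTTP works, and tag the link as dead if neither responds. The probe below is a rough sketch (timeout, HEAD-vs-GET, and status-code handling would all need tuning in a real bot); the decision function is the testable core:

```python
# Sketch: decide what to do with one protocol-relative link, given
# the results of probing it over https and http.
from urllib.request import Request, urlopen

def choose_scheme(https_ok, http_ok):
    """Pick a scheme for a protocol-relative link from probe results."""
    if https_ok:
        return "https"   # matches what Wikipedia renders today
    if http_ok:
        return "http"    # site is HTTP-only; pin the working protocol
    return None          # dead under both; tag {{dead link}} instead

def probe(url, timeout=10):
    """Rough liveness check; a real bot should inspect status codes,
    follow redirects deliberately, and rate-limit itself."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout):
            return True
    except OSError:      # URLError, HTTPError, and timeouts are OSErrors
        return False
```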
Bump. -- GreenC 17:13, 9 August 2017 (UTC)[reply]

Could someone rename these categories?

Could someone please rename the following categories to Category:Paleozoic life of Statename? Abyssal (talk) 20:43, 3 August 2017 (UTC) [reply]

Categories
 Not done Category renames must occur at WP:CFD. — JJMC89(T·C) 21:04, 3 August 2017 (UTC)[reply]

Mass-scale moving

Good evening ladies and gentlemen,

There has been a recent discussion at Help talk:IPA#Converting individual help pages for the various languages into subpages of Help:IPA, which I would appreciate if you could go take a look at for the full picture. In summary, we have to move a massive number of pages en masse; can any kind Samaritan devise a tool or other way to automate the tedious procedure? Cheers! Winged Blades Godric 04:26, 4 August 2017 (UTC)[reply]

@Winged Blades of Godric: Moves done except for Hawaiian, which is move protected. — JJMC89(T·C) 06:34, 5 August 2017 (UTC)[reply]
@Nardog:--Well I believe that's all.Winged Blades Godric 13:33, 5 August 2017 (UTC)[reply]
@JJMC89:--Many thanks! As a side-note, how did you manage this task?(So that I can take a cue for future purposes!.)Winged Blades Godric 13:33, 5 August 2017 (UTC)[reply]
@Winged Blades of Godric: I used movepages.py with the -pairsfile parameter. — JJMC89(T·C) 18:44, 5 August 2017 (UTC)[reply]
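For future reference, `movepages.py` is a standard Pywikibot script; with `-pairsfile` it reads source and destination titles as alternating lines. A sketch of preparing such a file (the pairs-file line format is an assumption taken from the script's help text; verify with `python pwb.py movepages -help` before running):

```python
# Sketch: generate a pairs file for Pywikibot's movepages.py.
# Assumed format: alternating lines of old title, then new title.
pairs = [
    ("Help:IPA for French", "Help:IPA/French"),
    ("Help:IPA for German", "Help:IPA/German"),
]

with open("pairs.txt", "w", encoding="utf-8") as f:
    for old, new in pairs:
        f.write(old + "\n" + new + "\n")

# Then, from a configured Pywikibot checkout:
#   python pwb.py movepages -pairsfile:pairs.txt
```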

Hi all, the Swiss Office of Statistics contacted Wikimedia CH to check the possibility of changing around 50'000 links. They have changed systems and, consequently, also the structure of the links. It seems that this modification should be done in several linguistic versions, and they can provide a sheet listing each old obsolete link and the new one. Do you think that this activity can be done easily? Do we have to contact several Wikipedias, or is there a bot able to make the change in several linguistic versions? --Ilario (talk) 09:29, 4 August 2017 (UTC)[reply]

Hi Ilario. This task sounds like it may be suitable; we have done similar tasks in the past. Can we have some more information about the changes (what they are, for what links etc) so we can see if a bot can do it here on the English Wikipedia? A request filed on the English Wikipedia is only valid for a bot on the English Wikipedia. If you want a bot to edit on the Swiss Wikipedia I believe you will need to file a separate request there. Thanks. TheMagikCow (T) (C) 10:49, 4 August 2017 (UTC)[reply]
Definitely doable (and something that sounds fun to do). As above, more information would be helpful so we can try to develop the rewrite rules to make it happen. Hasteur (talk) 13:51, 4 August 2017 (UTC)[reply]
@TheMagikCow: Is there a Swiss Wikipedia? The country has four official languages: French, German, Italian, and Romansh. Maybe you mean the Romansh Wikipedia, which doesn't have many articles. --Redrose64 🌹 (talk) 19:27, 4 August 2017 (UTC)[reply]
@Redrose64: Wikimedia CH Wikimedia CH is the Swiss Chapter of the global Wikimedia movement, and officially recognized as such by the Wikimedia Foundation. Hasteur (talk) 19:33, 4 August 2017 (UTC)[reply]
OK, but TheMagikCow wrote "on the Swiss Wikipedia". --Redrose64 🌹 (talk) 19:50, 4 August 2017 (UTC)[reply]
Ahh the 4 separate languages! The point is that each one of the Wikipedias will need separate approval. TheMagikCow (T) (C) 20:09, 4 August 2017 (UTC)[reply]
Hi all, I will take contact with my colleague of the French Switzerland to transmit this information. I supposed that the bot had to be approved by more than one language. --Ilario (talk) 13:10, 11 August 2017 (UTC)[reply]
Hi Ilario, I think this may also affect the English Wikipedia - those URLs may be used in sources here. You will need to speak to each of the Wikipedias, but if you show us the structure of the links, I would be happy to see if any links here need to be changed. TheMagikCow (T) (C) 14:10, 11 August 2017 (UTC)[reply]
Ok, we have received this information from the Office of Statistics, but no one has replied to our ping at the moment; I suppose that there are still holidays. --Ilario (talk) 17:53, 11 August 2017 (UTC)[reply]
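Once the office's sheet arrives, the per-wiki bot work is a table-driven replacement. A sketch assuming a two-column CSV of old,new URLs (the file name and column layout are assumptions for illustration):

```python
# Sketch: apply an old-URL -> new-URL mapping from a two-column CSV.
import csv

def load_mapping(path):
    """Read old-URL -> new-URL pairs from a two-column CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return dict(csv.reader(f))

def rewrite_links(wikitext, mapping):
    """Replace each obsolete URL with its replacement, longest key
    first so that a prefix URL cannot shadow a longer one."""
    for old in sorted(mapping, key=len, reverse=True):
        wikitext = wikitext.replace(old, mapping[old])
    return wikitext
```

The same mapping file could then drive separately approved bot runs on each language edition.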

Bare Twitter URL bot

Would a bot that turns bare Twitter references into formatted Template:Cite tweet citations be feasible? The four basic parameters of user, number, date, and title should be easily machine-readable, and even though a bot wouldn't be able to interpret the optional parameters, the result would still be better than a bare URL. Madg2011 (talk) 23:03, 5 August 2017 (UTC)[reply]

@Madg2011: Should be doable with the Twitter API. I'll look into it later on. Mdann52 (talk) 10:50, 4 September 2017 (UTC)[reply]
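The parsing half of this is straightforward: user and status number are in the URL itself, while date and title would indeed need the Twitter API. A sketch of the URL-to-template step (the URL patterns covered are an assumption; mobile and `statuses` variants exist):

```python
# Sketch: extract |user= and |number= for {{cite tweet}} from a bare URL.
import re

TWEET_URL = re.compile(
    r"https?://(?:www\.|mobile\.)?twitter\.com/(?P<user>[A-Za-z0-9_]+)"
    r"/status(?:es)?/(?P<number>\d+)"
)

def bare_tweet_to_cite(url):
    """Turn a bare tweet URL into a partial {{cite tweet}}, or None."""
    m = TWEET_URL.match(url)
    if not m:
        return None
    # |date= and |title= must come from the API or a human editor.
    return "{{cite tweet |user=%s |number=%s}}" % (m["user"], m["number"])
```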

In 2009, the list article List of films in the public domain was (quite correctly) moved to List of films in the public domain in the United States. There are over 200 articles that still link to the redirect; a sampling indicates that it's generally in the "See also" section.

Does there exist a bot that can update to use the correct link?

This is not a case of a redirect where the redirect has a correct but less desirable name than the actual target. This is a case where the redirect name is actually inaccurate and misdescriptive; without qualification, "in the public domain" indicates in the public domain worldwide, not merely in the US. I understand and agree that there's nothing inherently wrong in having a redirect in the See also section; this is a limited case where the redirect title is factually wrong and misleading.

I'm no bot-writer, but I suspect it's not worth coding a bot specifically for this, but if this is a task that an existing bot can do, that would be great. I started in on doing them manually ([2], [3], [4], [5]) until I realized how many there were. TJRC (talk) 18:46, 7 August 2017 (UTC)[reply]

Probably it could be listed at Wikipedia:Categories for discussion/Working to have the existing category-rename bots handle it. Anomie 20:52, 7 August 2017 (UTC)[reply]
I agree with TJRC. What I do is generate a list of article titles that contain the string [List of films in the public domain] using wikiget (./wikiget -a "insource:/\[List of films in the public domain\]/") then load that list into AWB and do a plain (non-regex) string replace. -- GreenC 15:55, 9 August 2017 (UTC)[reply]
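The replacement half of that workflow is a single fixed-string substitution per page; a sketch of what the AWB (or scripted) edit does:

```python
# Sketch: bypass the misnamed redirect with a plain, non-regex replace,
# matching GreenC's described workflow. Piped links like
# [[List of films in the public domain|...]] are deliberately untouched.
OLD = "[[List of films in the public domain]]"
NEW = "[[List of films in the public domain in the United States]]"

def retarget(wikitext):
    """Replace the exact unpiped link with the accurate title."""
    return wikitext.replace(OLD, NEW)
```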
Y Done (example) -- GreenC 17:01, 9 August 2017 (UTC)[reply]
Wonderful, thank you, GreenC! I have tried AWB, but I really had some trouble getting the hang of it. TJRC (talk) 23:27, 9 August 2017 (UTC)[reply]

Scrape some data from WP:WBFAN

I have been collecting statistical data on WP:FAC for over a year now; see this thread for details. It would be a big help for certain kinds of reporting if I could convert a historical revision of WP:WBFAN into a simple list of editor/count pairs. Any format of output would be fine; table, comma separated list -- anything predictable. I just need to convert the names with wrapped star lists into names with numbers of stars, along with the date of the revision.

Ideally this would be something I could run at will, but if someone runs this and sends me a file with the results that would work too.

The benefit to Wikipedia is that we are trying to make it easier for first-time nominators to succeed at FAC, but we can't know if we're succeeding without information about who had WBFAN stars and when they got them. Thanks for any help with this. Mike Christie (talk - contribs - library) 13:37, 9 August 2017 (UTC)[reply]
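One way to reduce a WBFAN revision to editor/count pairs is to count star images on each editor's row of the wikitext. The row format assumed below (a `[[User:...]]` link followed by star image links) is an illustration only, not WBFAN's actual markup, which would need checking against a real revision:

```python
# Sketch: count star icons per editor line in WBFAN-style wikitext.
# Assumed (hypothetical) row format: "# [[User:Name|Name]] [[File:...star...]] ..."
import re

STAR = re.compile(r"\[\[(?:File|Image):[^]|]*star[^]|]*", re.IGNORECASE)
USER = re.compile(r"\[\[User:([^]|]+)")

def star_counts(wikitext):
    """Map editor name -> number of star icons on that editor's line."""
    counts = {}
    for line in wikitext.splitlines():
        user = USER.search(line)
        if user:
            counts[user.group(1).strip()] = len(STAR.findall(line))
    return counts
```

Run against a historical revision's wikitext (fetched with `action=raw&oldid=...`), the resulting dict plus the revision date gives the editor/count pairs requested.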

Dealing with Mdann52 (talk) 21:37, 18 August 2017 (UTC)[reply]
This is done; Mdann52 has sent me the results. Thank you very much! Mike Christie (talk - contribs - library) 12:40, 31 August 2017 (UTC)[reply]

Unicode subscript/superscript bot

This is a longstanding thing that annoys me, so here's a BOTREQ for it. Unicode subscripts and superscripts#Superscripts and subscripts block contains a list of the affected characters.

The request is for two distinct tasks:

A) Page moves:

  • Find unicode super/subscript in titles, and move it to the non-unicode version (e.g. Foo²bar, move it to Foo2bar)
  • Add the appropriate displaytitle key to the new page (e.g. {{DISPLAYTITLE:Foo<sup>2</sup>bar}})

B) Page cleanup

  • Find unicode super/subscript characters in page text, and replace them with the non-unicode version (e.g. replace ² with <sup>2</sup>)
  • Avoid pages in Category:Unicode and Category:Typefaces, as well as their subcategories.

Headbomb {t · c · p · b} 18:11, 15 August 2017 (UTC)[reply]
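The character replacement in task B can be driven by a translation table. The sketch below covers the digits only; the full Unicode block also includes signs (⁺, ⁻) and letters, which would extend the same tables:

```python
# Sketch: replace runs of Unicode super/subscript digits with
# <sup>/<sub> wikitext markup.
import re

SUPERSCRIPTS = {c: str(i) for i, c in enumerate("⁰¹²³⁴⁵⁶⁷⁸⁹")}
SUBSCRIPTS = {c: str(i) for i, c in enumerate("₀₁₂₃₄₅₆₇₈₉")}

def _replace_run(table, tag):
    """Build a re.sub callback wrapping a whole run in one tag pair."""
    return lambda m: "<%s>%s</%s>" % (
        tag, "".join(table[c] for c in m.group()), tag)

def demote_unicode(text):
    """Foo²bar -> Foo<sup>2</sup>bar; H₂O -> H<sub>2</sub>O."""
    text = re.sub("[%s]+" % "".join(SUPERSCRIPTS),
                  _replace_run(SUPERSCRIPTS, "sup"), text)
    text = re.sub("[%s]+" % "".join(SUBSCRIPTS),
                  _replace_run(SUBSCRIPTS, "sub"), text)
    return text
```

For task A's move target, strip the tags afterwards (e.g. `re.sub(r"</?su[bp]>", "", ...)`) to get the plain `Foo2bar`, and use the tagged form in the {{DISPLAYTITLE:}}.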

What support in guideline/policy is there for using the non-Unicode version? --Izno (talk) 19:16, 15 August 2017 (UTC)[reply]
Well we have Wikipedia:Manual_of_Style/Mathematics#Superscripts_and_subscripts and also Wikipedia:Manual of Style/Superscripts and subscripts (which failed because the information was elsewhere / didn't need its own page, not because it was disputed). There is also MOS:UNITSYMBOLS (down in the table). Headbomb {t · c · p · b} 19:32, 15 August 2017 (UTC)[reply]
I would guess a significant portion of those articles using superscript Unicode blocks have nothing to do with mathematics, so I would be uncomfortable with this bot request for that reason (WP:CONTEXTBOT)--most of the links you've provided are specifically for styling in STEM topics. --Izno (talk) 19:58, 15 August 2017 (UTC)[reply]
A) should be problem free. For B) I'm a bit worried about WP:CONTEXTBOT as well, but from my own recollection, I can't recall any example that doesn't get filtered by avoiding the above-mentioned categories. A trial would find out if this is actually an issue. I could also do a database scan + a small manual run to see if there actually is an issue. Headbomb {t · c · p · b} 20:27, 15 August 2017 (UTC)[reply]
My point is that moving articles not about STEM according to a STEM MOS is probably not going to fly. So no, A is not problem free. --Izno (talk) 20:40, 15 August 2017 (UTC)[reply]
This isn't a STEM standard and isn't STEM specific. It just happens that most instances will be STEM-related, so that's where the guidance is. Headbomb {t · c · p · b} 21:09, 15 August 2017 (UTC)[reply]
Wikipedia:Manual of Style/Mathematics#Superscripts and subscripts is STEM-topic specific and its associated page that you linked to isn't a guideline. I would object to a bot moving any page below which is unrelated to STEM, since the guidance is specifically located on a MOS for STEM topics. (I would not be against a bot/semi-automatic process nominating the page for a move via WP:Move requests.) Additionally, the categories need to go to WP:CFD regardless. I don't see any value to moving templates unless their page titles are incomprehensible (ever, not specific to this scenario). --Izno (talk) 11:13, 16 August 2017 (UTC)[reply]
MOS:UNITSYMBOLS and Wikipedia:Manual of Style/Superscripts and subscripts aren't STEM-specific, and I've shown below that this is done outside of STEM fields as well. Cats/templates could easily be excluded from this, though. Still, it's probably best to have this debate at WP:VPR to get outside eyes. Headbomb {t · c · p · b} 11:19, 16 August 2017 (UTC)[reply]

A case-by-case approach might work best though. For page moves, the titles with superscripts (filtering User/Wikipedia space) are:

Extended content
  1. (ISC)²
  2. 101²
  3. 12m² Sharpie
  4. ABCD² score
  5. AM²
  6. America³
  7. America³ (1992 yacht)
  8. Atm⁵
  9. A² Records
  10. A¹ homotopy theory
  11. Book:E=MC² (Mariah Carey album)
  12. Carl²
  13. Category:(ISC)²
  14. Category:12m² Sharpie
  15. Category:12m² Sharpie class Olympic sailors
  16. Category:12m² Sharpie class sailors
  17. Category:30m² Skerry cruiser class Olympic sailors
  18. Category:30m² Skerry cruiser class sailors
  19. Category:40m² Skerry cruiser class Olympic sailors
  20. Category:40m² Skerry cruiser class sailors
  21. Category:40m² Skerry cruisers
  22. Category:Gen¹³ and DV8 characters
  23. Category:Suspected Wikipedia sockpuppets of Eep²
  24. Chhut-thâu-thiⁿ
  25. Counterfeit²
  26. DNA²
  27. Don Omar Presents MTO²: New Generation
  28. E=MC² (Giorgio Moroder album)
  29. E=MC² (Mariah Carey album)
  30. E² (album)
  31. File:Carl².png
  32. File:Counterfeit².jpg
  33. File:E=MC² 1.png
  34. File:E=MC² cover.jpeg
  35. File:Gen¹³ FilmPoster.jpeg
  36. File:Gen¹³ vol. 2 6 Coverart.jpg
  37. File:I²C bus logo.svg
  38. File:Me² (Red Dwarf).jpg
  39. File:SABIN【420 stoⁿer】.jpeg
  40. File:Sini Sabotage - 22 m².jpg
  41. File:Why Does Emc².jpg
  42. GA²LEN
  43. Gen¹³
  44. Gen¹³ (film)
  45. Gen¹³/Monkeyman and O'Brien
  46. Heavy Metal: F.A.K.K.²
  47. Hi-Teknology²: The Chip
  48. INXS²: The Remixes
  49. I²C
  50. I²S
  51. K² (band)
  52. List of political and geographic subdivisions by total area from .1 to 1,000 km²
  53. List of political and geographic subdivisions by total area from .1 to 250 km²
  54. List of political and geographic subdivisions by total area from 1,000 to 3,000 km²
  55. List of political and geographic subdivisions by total area from 1,000 to 5,000 km²
  56. List of political and geographic subdivisions by total area from 10,000 to 20,000 km²
  57. List of political and geographic subdivisions by total area from 100,000 to 1,000,000 km²
  58. List of political and geographic subdivisions by total area from 100,000 to 200,000 km²
  59. List of political and geographic subdivisions by total area from 20,000 to 30,000 km²
  60. List of political and geographic subdivisions by total area from 20,000 to 50,000 km²
  61. List of political and geographic subdivisions by total area from 200,000 to 500,000 km²
  62. List of political and geographic subdivisions by total area from 250 to 1,000 km²
  63. List of political and geographic subdivisions by total area from 3,000 to 5,000 km²
  64. List of political and geographic subdivisions by total area from 30,000 to 50,000 km²
  65. List of political and geographic subdivisions by total area from 5,000 to 20,000 km²
  66. List of political and geographic subdivisions by total area from 5,000 to 7,000 km²
  67. List of political and geographic subdivisions by total area from 50,000 to 100,000 km²
  68. List of political and geographic subdivisions by total area from 50,000 to 200,000 km²
  69. List of political and geographic subdivisions by total area from 500,000 to 1,000,000 km²
  70. List of political and geographic subdivisions by total area from 7,000 to 10,000 km²
  71. List of political and geographic subdivisions by total area in excess of 1,000,000 km²
  72. List of political and geographic subdivisions by total area in excess of 200,000 km²
  73. Live at the O² Arena
  74. L² cohomology
  75. MAC³PARK Stadion
  76. MD² International
  77. Magnavox Odyssey²
  78. Mercedes-Benz G500 4×4²
  79. Me²
  80. M² (album)
  81. PC²
  82. Rite²
  83. SGI Indigo² and Challenge M
  84. SR² Motorsports
  85. Sailing at the 1920 Summer Olympics – 30m² Skerry cruiser
  86. Sailing at the 1920 Summer Olympics – 40m² Skerry cruiser
  87. Sailing at the 1956 Summer Olympics – 12m² Sharpie
  88. Secretory Pathway Ca²⁺ ATPase
  89. Stella Women’s Academy, High School Division Class C³
  90. Template:Footer Olympic Champions 30m² Skerry cruiser
  91. Template:Footer Olympic Champions 40m² Skerry cruiser
  92. Template:S³ University Alliance
  93. Template:The EMC² Barnstar
  94. V-Partei³
  95. Why Does E=mc²?
  96. Zeit²
  97. Z² (album)

I don't see any reason why any of those shouldn't render like we do with Vitamin B6, Golem100, Omega1 Scorpii, 12e Régiment blindé du Canada, Aice5, or Tommy heavenly6 discography. Headbomb {t · c · p · b} 21:33, 15 August 2017 (UTC)[reply]

Fix for Japan Times URLs

Would it be possible for a bot to change every instance of the dead link "search.japantimes.co.jp" to "www.japantimes.co.jp" to fix references in Japan-related articles? Thanks.--Pawnkingthree (talk) 17:51, 16 August 2017 (UTC)[reply]

Pawnkingthree. I checked and there are about 1800 articles (have the list). It would be trivial to replace the text, but many are also now tagged with {{dead link}} or |deadurl=yes or converted to a https://web.archive.org/web/2012/http://search.japantimes... so it will be more serious bot work to untangle correctly. Almost wonder if it wouldn't be easier for someone to do it manually, or supervised with a text replace in AWB and manually undo any extraneous dead tags. -- GreenC 19:21, 20 August 2017 (UTC)[reply]
Thanks, GreenC. If you can send me the list I'll start working on it manually (not familiar with AWB). I would begin with my own area of interest (which is sumo).--Pawnkingthree (talk) 12:48, 21 August 2017 (UTC)[reply]
Pawnkingthree ... List of search.japantimes. If it becomes too much to do manually, the worst case is convert them with a bot which is easy, and not worry about the dead links stuff. Partly depends what you discover how common it's needed to do something other than change the domain name. -- GreenC 14:27, 21 August 2017 (UTC)[reply]
Thanks, I'll let you know how I get on.--Pawnkingthree (talk) 14:46, 21 August 2017 (UTC)[reply]
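For what it's worth, the mechanical part GreenC describes can be sketched. This is a rough Python sketch, not the bot itself: it does the trivial domain swap, then drops {{dead link}} tags only inside refs that now point at the live domain, leaving archive.org-wrapped refs for human review. The regexes, and the decision to leave archived forms alone, are my assumptions, not an agreed design.

```python
import re

DEAD = re.compile(r'\{\{[Dd]ead link[^{}]*\}\}')
REF = re.compile(r'<ref[^>/]*>.*?</ref>', re.DOTALL)

def fix_japan_times(wikitext):
    # Step 1: the "trivial" domain swap. Crude: this also rewrites the
    # inner URL of web.archive.org snapshots.
    text = wikitext.replace('search.japantimes.co.jp', 'www.japantimes.co.jp')

    # Step 2: inside each <ref> that now points at the live domain,
    # drop any {{dead link}} tag. Refs wrapped in web.archive.org are
    # left untouched; untangling those needs human review.
    def clean(match):
        ref = match.group(0)
        if 'www.japantimes.co.jp' in ref and 'web.archive.org' not in ref:
            ref = DEAD.sub('', ref)
        return ref

    return REF.sub(clean, text)
```

A supervised AWB-style pass, as GreenC suggests, would still be needed to catch refs this misses.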

Administrator "you messed up" notifications

Just in the last few days, I've twice messed up when blocking users: I left the block template and forgot to levy a block. This caused confusion in one circumstance, and in the other, another admin levied a longer block because it looked like the person had already come off an earlier shorter block.

What if we had a bot that would notify admins who added a block template without blocking the user? I'm envisioning the bot finding new substitutions of all block templates, checking to see whether the user really is blocked, and leaving a "you messed up" message (comparable to what BracketBot did) to remind the admin to go back and fix the situation. Sometimes one admin will block and another will leave the message; that's fine, so the bot shouldn't pay attention to who actually levied the block. And bonus points if the bot finds that a non-admin left the template on a non-blocked user's talk page; the bot could leave a note quoting the {{uw-block}} documentation: Only administrators can block users; adding a block template does not constitute a block. See RFAA to request that a user be blocked. Finally, since actually doing the blocking is quick and simple, we don't need the bot to wait a long time; sometimes you need to compose a long and thoughtful message explaining the block, but you don't need to do that when using Special:Block. Nyttend (talk) 01:30, 18 August 2017 (UTC)[reply]

Sounds more like a job for an EF, warning editors who add a block template to an unblocked user's talk page, of the same kind as trying to save an edit without an edit summary. Ben · Salvidrim!  03:16, 18 August 2017 (UTC)[reply]
An edit filter is an interesting idea. Especially when I am blocking for a reason that is better described in writing than with a template, I tend to write my block reason before actually clicking the block button; it's so that I won't get yelled at for blocking someone without a reason (or worse yet, the wrong or incomplete reason, which appears punishable by desysopping nowadays). I suspect, though, that it would be a pretty complex filter that uses a lot of resources, so we should be able to answer how frequently this happens, and whether it is worth the impact on all editing in the project to have this filter. (People forget how impactful these filters are; just try opening a page on a slow connection and you'll see it...) Risker (talk) 04:10, 18 August 2017 (UTC)[reply]
An edit filter is almost certainly not possible because editors may save block templates before actually blocking. Some existing tools may do the same. ~ Rob13Talk 20:31, 20 August 2017 (UTC)[reply]
Fair point, though it would almost surely be useful to have an edit filter to forbid non-admins from placing a template on a non-blocked user's page. (The original concern still stands, but if that EF is implemented, the "you screwed up" bot will need only look at administrators' edits). TigraanClick here to contact me 16:56, 11 September 2017 (UTC)[reply]
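For reference, the check at the heart of the proposed bot is cheap via the standard MediaWiki API (list=blocks). A minimal sketch, with the talk-page watching and "you messed up" messaging parts omitted:

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def has_active_block(block_query_result):
    """True if a list=blocks query result shows at least one active block."""
    return bool(block_query_result.get("query", {}).get("blocks"))

def is_blocked(username):
    """Ask the API whether `username` is currently blocked.

    The bot would call this after spotting a fresh substitution of a
    block template on User talk:<username>, and if it returns False,
    notify whoever placed the template.
    """
    qs = urllib.parse.urlencode({
        "action": "query",
        "list": "blocks",
        "bkusers": username,
        "format": "json",
    })
    with urllib.request.urlopen(f"{API}?{qs}") as resp:
        return has_active_block(json.load(resp))
```

Per the discussion above, the bot would deliberately not check *who* levied the block, only that one exists.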

I've been manually adding lots of links to references in articles like this one. Does Wikipedia have any bots that can automate this process using Google Scholar or something similar? Jarble (talk) 21:10, 18 August 2017 (UTC)[reply]

Try WP:OABOT. Headbomb {t · c · p · b} 21:17, 18 August 2017 (UTC)[reply]
@Headbomb: OABot only works with articles that use citation templates. Does Wikipedia have any tools that can automatically format references using these templates? Jarble (talk) 21:17, 6 September 2017 (UTC)[reply]
Automatically, no. But you can enable refToolbar / citation expander in your preferences, and that can help converting links to proper references. Headbomb {t · c · p · b} 23:43, 6 September 2017 (UTC)[reply]
@Headbomb: If there isn't a tool that can automatically format citations, we could easily create one using a citation parser. Jarble (talk) 13:36, 13 September 2017 (UTC)[reply]

Re this conversation, User:InternetArchiveBot does a great job scanning our 5,000,000 articles for dead links and fixing them, but it moves very slowly. The FA Coordinators agree that it would be useful to keep Featured material patrolled much more regularly. We could do this by manually dumping a list of article names into the tool, but that's not rigorous, and a working 'Featureddeadlink bot' could presumably quite happily also patrol FLs, other FT articles and even GAs. So perhaps the request is a bot that will initially patrol the FAs only, with a view to expanding the remit to other quality material once it's proved itself. That level of detail I can leave to your expertise. --Dweller (talk) Become old fashioned! 09:44, 22 August 2017 (UTC)[reply]

I would advise against an entirely new bot to do this. IABot has a high level of complexity and is very advanced to account for a slew of problems archive bots encounter when making runs. If anything, a bot can be made to regularly submit bot jobs to IABot, which will have IABot regularly patrol FAs, GAs, and other desired list of articles. Besides IABot maintains a large central DB of URLs on every wiki it runs on, so this method would be a lot easier. If anyone wants to write a bot to have IABot regularly scan requested articles, please ping me. I can help get the bot set up to interface with IABot.—CYBERPOWER (Chat) 09:49, 22 August 2017 (UTC)[reply]
@Dweller and GreenC: ^ GreenC's bot already communicates with IABot.—CYBERPOWER (Chat) 09:55, 22 August 2017 (UTC)[reply]
I like your idea of a bot to feed IABot. Recreating the wheel is a bad idea and it makes the task a lot simpler. --Dweller (talk) Become old fashioned! 10:01, 22 August 2017 (UTC)[reply]
Anyone that is interested in setting this bot up, the documentation for interfacing with IABot is m:InternetArchiveBot/API and the particular function you want to call is action=submitbotjob. If you need any help, just ping me. I would still recommend get the task approved first.—CYBERPOWER (Chat) 10:33, 22 August 2017 (UTC)[reply]
BRFA filed -- GreenC 13:35, 28 August 2017 (UTC)[reply]
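A skeleton of such a feeder, with a large caveat: only action=submitbotjob is taken from the API documentation Cyberpower678 cites; the endpoint URL and the page-list parameter name below are placeholders, not the real interface, so check m:InternetArchiveBot/API before using anything like this.

```python
import urllib.parse

# Placeholder endpoint; the real one is documented at m:InternetArchiveBot/API.
IABOT_API = "https://tools.wmflabs.org/iabot/api.php"

def build_submitbotjob(pages):
    """Build the query string for IABot's action=submitbotjob.

    `pagelist` is a hypothetical parameter name used here for
    illustration only; consult the API docs for the actual one.
    """
    return urllib.parse.urlencode({
        "action": "submitbotjob",
        "pagelist": "\n".join(pages),  # hypothetical parameter
    })
```

The feeder bot would assemble `pages` from the FA/GA category members and POST the result on a schedule.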

lang-fa --> lang-prs

Hi there,
I'm looking for a helpful bot who's willing to make a large number of fixes. At the moment, there are many articles directly related to Afghanistan where the incorrect {{lang-fa}} template is listed in the lede instead of the correct {{lang-prs}} template. All the {{lang-fa}} templates on these articles, i.e. articles about buildings, people (post-19th century), cities, towns, rivers, provinces, mountains, etc., need to be changed to the correct {{lang-prs}} template. So basically changing/adding 3 letters on every one of these articles.

The official name of the variant of the Persian language spoken in Afghanistan is Dari, and it has its own lang-template. However, until 2015, no such template existed on Wiki, hence people carelessly dropped the lang-fa template on all these articles. All the best, - LouisAragon (talk) 23:19, 13 August 2017 (UTC)[reply]

LouisAragon, unless there is a specific list of pages that need fixing, this task will not be given to a bot on account of context and the fact that there are 79k pages that call {{lang-fa}}. In other words, there are a huge number of pages that use the template, and without knowing which ones to change the process cannot be automated. Primefac (talk) 14:00, 23 August 2017 (UTC)[reply]
@Primefac: got it. I'll try to narrow it down then. What do you think about beginning, for a start, with all the articles listed in Category:Afghanistan_stubs, Category:Populated places in Afghanistan and Category:Cities in Afghanistan, including all their respective subcategories? Will the bot be able to process this? - LouisAragon (talk) 18:12, 23 August 2017 (UTC)[reply]
I mean, sure, if they almost all need changing. Of course, I randomly checked a dozen or so in the first two categories and none of them used either template, so a wider net may be needed. A couple in the third category already used -prs. If the rest of the articles are similarly template-free, you're probably looking at only 100ish pages, which could/should probably be manually done (i.e. it's not really worth a bot task for <250 edits). Primefac (talk) 21:15, 23 August 2017 (UTC)[reply]
@Primefac: The thing is, there are genuinely quite a lot of articles that require this fix. Though not "every" single article has the erroneous template on these "Afghanistan-related articles", there are still really a lot of them that do. I'm pretty sure no one would be willing to do that manually. Just imagine scouting every single article in order to see whether it has the template or not. :O I just handpicked those three category examples in order to see whether such mass changes would even be possible (I'm really anything but a "veteran" regarding this particular area of Wiki, i.e. bots 'n stuff, hence I had no clue of what a bot is capable of).
Apart from the 3 categories I mentioned above, all the articles listed in Category:Afghan people, Category:Cities in Afghanistan, Category:Education in Afghanistan and Category:Mountain ranges of Afghanistan should, preferably, be added to the to-do list of the bot as well. Altogether, I'm pretty certain that the articles in these 7 major categories, that contain the wrong lang-fa template, number much more than 200/250 altogether. Category:Afghan people by itself contains many hundreds of articles for example.
It's really important that all these templates get corrected, because, yeah, otherwise people will continue to add this erroneous template for another century to come. - LouisAragon (talk) 22:22, 23 August 2017 (UTC)[reply]
You don't need to look at every article in the category. You can use petscan to search for pages in a particular category that contain a given template. Here's a sample search that shows all articles in Category:Mountain ranges of Afghanistan and its subcategories that use {{lang-fa}}. – Jonesey95 (talk) 22:48, 23 August 2017 (UTC)[reply]
Even if someone makes a list of the categories, it's still WP:CONTEXTBOT because a bot isn't going to know whether any particular instance of {{lang-fa}} is actually for Dari or if it really is Persian after all. Anomie 23:03, 23 August 2017 (UTC)[reply]
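Anomie's CONTEXTBOT point stands, but if a human-vetted list is ever assembled (e.g. via the PetScan intersection Jonesey95 links), the mechanical part of the edit is tiny. A sketch, assuming it is only ever run supervised on pages a person has already checked:

```python
import re

# Match "{{lang-fa" only when followed by "|" or "}}", so e.g. a
# hypothetical {{lang-fas}} template would not be caught.
LANG_FA = re.compile(r'\{\{\s*lang-fa\s*(?=[|}])')

def dari_swap(wikitext):
    """Swap {{lang-fa}} for {{lang-prs}} in one article's wikitext.

    No attempt is made to decide whether the text really is Dari;
    that judgment belongs to the human who built the worklist.
    """
    return LANG_FA.sub('{{lang-prs', wikitext)
```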

Wayback Machine

Would it be possible for a bot to archive each and every reference cited on a particular requested WP page? Doing so manually consumes a lot of time when there are hundreds of references on a page. --Saqib (talk) 15:27, 25 August 2017 (UTC)[reply]

Like User:InternetArchiveBot? Anomie 18:42, 25 August 2017 (UTC)[reply]
Saqib as noted by Anomie, IABot is the tool. For every ref (including live ones), go to the article History tab and click "fix dead links" (top row); this directs to an external tool (requires login, just press "OK") which gives a check-box option to archive every link. -- GreenC 19:35, 25 August 2017 (UTC)[reply]

Keeping track of cross-space moves....

In conjunction with the discussion raised at this discussion, it will probably be helpful for the community to get an idea of the numbers and keep track of the articles that are draftified from main space--in a friendly format. SoWhy has written a SQL query for the purpose. I seek the development of a bot that will maintain a list of draftified articles, along with necessary info such as the time of draftification, draftifying editor, article creator, last edit date, etc., in a tabular format, with the table updated at regular intervals. Thanks! Winged Blades of GodricOn leave 11:49, 27 August 2017 (UTC)[reply]

The query Godric mentions only counts article moves where the creation of the redirect was suppressed (AND log.log_params RLIKE 'noredir";s:1'). A bot should probably also find pages moved with a redirect where the redirect was later deleted as WP:R2. Also maybe list prior AFDs or MFDs for the article/draft. Regards SoWhy 12:02, 27 August 2017 (UTC)[reply]
Strike that, I just noticed a flaw in the query. The query actually finds both because I missed a part of the comment. I fixed it to only show those where the redirect was suppressed and this one should find those where the redirect was created. Regards SoWhy 07:52, 28 August 2017 (UTC)[reply]
Second SoWhy. @SoWhy: By prior AfDs, do you mean articles which were once deleted as a consequence of an AfD and then draftified on re-creation? Otherwise, I don't think anybody would draftify an article which has been through an AfD/declined PROD. Winged Blades of GodricOn leave 12:09, 27 August 2017 (UTC)[reply]
Draftifying can be the result of an AFD, so listing prior AFDs makes sense. I suppose the bot can also check the XFD and display the result, like my AFD close analyzer but probably not all results correctly. Regards SoWhy 12:12, 27 August 2017 (UTC)[reply]
P. H. Barnes is a page created in main space that survived Wikipedia:Articles for deletion/P. H. Barnes in 2016. A few days ago it was included in a potential bulk AWB move to draft space[6] but, after being advised on IRC that bulk moves might be controversial, the editor backtracked[7] and the whole matter is under consultation at WP:AN#Poorly references sports biographies. Thincat (talk) 13:53, 27 August 2017 (UTC)[reply]
I now see from the edit summaries that a move to userspace was contemplated (the consultation is unclear) but it does show that experienced editors, in good faith, can consider that moves of AFD-surviving articles can be appropriate. Also see Sir Edward Antrobus, 8th Baronet which is under consideration in the same way, having existed for nine years with many "real" editors. So numbers of editors and lifetime of article are also relevant statistics in a table. I suspect if the moves are done without removing categories then a bot will fill the omission so pages may enter draft space with no edits (except bot or maintenance) in the last six months. Thincat (talk) 14:13, 27 August 2017 (UTC)[reply]

@Winged Blades of Godric, SoWhy, and Thincat: I've drafted an example report below.

Example for 2017-09-02 as of 20:46, 4 September 2017 (UTC)

Please let me know if you have any comments on it. — JJMC89(T·C) 23:11, 3 September 2017 (UTC)[reply]

@JJMC89:--Can't we have clickable editor names and the move diff. linked-to in the move-summary field.Overall, it'a great job! Thanks!:)Winged Blades Godric 03:45, 4 September 2017 (UTC)[reply]
@Winged Blades of Godric: I've added links for users. I could link the move log, but I'm not going to try to parse the history for a diff. — JJMC89(T·C) 20:55, 4 September 2017 (UTC)[reply]
This looks great! I am going to be away for a bit, so here are some quick comments. The bulk draftification I alluded to above took place on 31 August 2017,[8] causing chaos for the cricket wikiproject. A useful cautionary tale. To spot such erroneous actions, date of page creation, number of editors and quality/importance of article are relevant – as well as one AFD survivor, a Good article was included (the latter move self-reverted). Possibly such indicators of "quality" could be flagged in the present AFD field. Number of links can be an issue when pages are moved on the basis of fewer claimed links than actually exist. An emphasised indication could be given when source and target have both been deleted (draft space pages not manually edited recently are vulnerable to speedy deletion, whereas in main space being "abandoned" may well not be a problem). Thincat (talk) 04:48, 4 September 2017 (UTC)[reply]
@Thincat: I've added creation and number of editors. I can add GA/FA/FL, but it will require parsing the page for the template (e.g. {{Good article}}), which I was hoping to avoid. — JJMC89(T·C) 20:55, 4 September 2017 (UTC)[reply]
Yes, I can see that indicators of importance are problematic. Creation date and number of editors form a proxy of sorts. Are you including moves between main and user space as well as between main and draft? On the face of it they are both relevant. Why not give this a spin and tweaks could be made based on experience? Could lists be kept separately for each day so it is possible to review what has been going on? Thincat (talk) 17:13, 7 September 2017 (UTC)[reply]
See draftification reports as a starting point. — JJMC89(T·C) 01:40, 11 September 2017 (UTC)[reply]

I have been tagging lots of broken links to the New York Observer, but most of the tags that I added have been removed. Since the Internet Archive Bot is unable to repair these links, is there another way that we can update them? Jarble (talk) 19:54, 27 August 2017 (UTC)[reply]

If there is a standard issue with all of these links that a computer could fix - and does not need human discretion to do so (WP:CONTEXTBOT) it may be possible. Without knowing detail about the issue and the fix I can't say. With some more details I would happily look at getting these done. TheMagikCow (T) (C) 13:19, 28 August 2017 (UTC)[reply]

Null bot on everything that transcludes Infobox journal

If someone could do that, that would be much appreciated. We've recently added some redirect detection/creation logic to the template, and it would be nice to know which articles are in need of review. Headbomb {t · c · p · b} 19:45, 29 August 2017 (UTC)[reply]

Doing... — JJMC89(T·C) 20:39, 29 August 2017 (UTC)[reply]
JJMC89 (talk · contribs) Have you started? Because if you have, it doesn't seem to be working. Headbomb {t · c · p · b} 12:40, 30 August 2017 (UTC)[reply]
@Headbomb: I made a typo when trying to start the job, so I've restarted it correctly now. — JJMC89(T·C) 15:24, 30 August 2017 (UTC)[reply]
Why write a bot when we already have Joe's Null Bot (talk · contribs)? --Redrose64 🌹 (talk) 12:50, 30 August 2017 (UTC)[reply]
I didn't write anything. I'm using Pywikibot's touch.py. — JJMC89(T·C) 15:24, 30 August 2017 (UTC)[reply]
Thanks! Category:Articles with missing ISO 4 redirects is definitely populating now! Headbomb {t · c · p · b} 15:37, 30 August 2017 (UTC)[reply]

@JJMC89:: The infobox template has been massively updated with automated search functionality. If you could run the bot again, this time only on Category:Articles with missing ISO 4 redirects, that would be super helpful! Headbomb {t · c · p · b} 12:58, 1 September 2017 (UTC)[reply]

@Headbomb:  Doing... — JJMC89(T·C) 00:13, 2 September 2017 (UTC)[reply]
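For the record, the null-edit job itself is a one-liner with Pywikibot's touch.py (the script JJMC89 mentions); the generator flags are standard Pywikibot page-generator options:

```shell
# Null-edit everything transcluding the infobox (the original request):
python pwb.py touch -transcludes:"Infobox journal"

# The narrower follow-up run, category members only:
python pwb.py touch -cat:"Articles with missing ISO 4 redirects"
```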

A bot for creating articles for ISO standards

I have noticed that there are a lot of ISO standards that do not have an article on Wikipedia. Considering the fact that there are a lot of ISO standards (by my estimate, over 21,000 of them in English alone, some of which have possibly been updated), of which (rough estimate) maybe 10%-20% have an article, the number of ISO standards could potentially warrant some automated help. Since I couldn't find a concerted effort to document ISO standards in Wikipedia, I thought it'd be useful to debate whether it would be desirable and feasible to use a bot to increase Wikipedia's coverage of the ISO standards.

Should Wikipedia cover ISO standards extensively? Well-known ISO standards like the ISO 9000 and 27000 families obviously meet notability standards, but lesser-known standards might not. In my opinion, considering the fact that the ISO's work constitutes global standards, there is a case to be made, and there is most certainly precedent for jobs like this.

Since I don't have any previous experience with writing Wikipedia bots, I thought I'd chime in here first. Would this be something that would be useful for Wikipedia, and would it be feasible to create valuable articles or article stubs this way? There is information available from the [website] in a structured form that could go some way towards creating articles, and each standard publishes some metadata about the standard and usually has a description (see for instance 1, 2, 3).

I don't know of any project that is already working towards incorporating information about international standards, or ISO standards specifically, into Wikipedia, nor a bot that works in a related field. If this might be useful, I might very well be interested in writing a bot that either writes stubs or automatic articles on ISO standards, prepares drafts, keeps metadata about ISO standards up-to-date, or something along those lines. I'd gladly hear some feedback. Nietvoordekat (talk) 11:09, 31 August 2017 (UTC)[reply]

Nietvoordekat, while I admit to only having a general knowledge of this subject, I suspect that such an undertaking would fall afoul of guidelines similar to WP:NASTCRIT (i.e. "this isn't notable enough for its own article"). WP:AST generally goes by "if it exists, but isn't notable, redirect". However, if we were to create a redirect for every ISO code, we'd end up with 100 000 pages total. Even if we went by what was on the list of ISO codes we're still looking at about 2500 new redirects. Doable, yes. Necessary? Maybe.
For a creation of this sort of size/scale, I think getting community input would be worthwhile, if only to see if there is a want/need for this sort of mass creation.
If there is, feel free to ping me and I'll be happy to put in a BRFA. Primefac (talk) 13:01, 7 September 2017 (UTC)[reply]

ISO 4 redirect creation bot

To help clear up the backlog in Category:Articles with missing ISO 4 redirects, if a bot could

  • Parse every article containing {{Infobox journal}}, retrieve |abbreviation=J. Foo. Some articles will contain multiple infoboxes.
  • If J. Foo. exists and is tagged with {{R from ISO 4}}
#REDIRECT[[Article containing Infobox journal]]
{{R from ISO 4}}

Thanks! Headbomb {t · c · p · b} 11:57, 31 August 2017 (UTC)[reply]
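The extraction step can be sketched as follows. The regex handling of multiple infoboxes is deliberately naive, and the existence check and the "already tagged with {{R from ISO 4}}" test are left to the caller (a pywikibot script, say), per the request above:

```python
import re

# Naive pull of |abbreviation= from each {{Infobox journal}} occurrence;
# a production bot would use a proper template parser.
ABBREV = re.compile(
    r'\{\{\s*[Ii]nfobox journal.*?\|\s*abbreviation\s*=\s*([^\n|]+)',
    re.DOTALL)

def iso4_redirects(wikitext, title):
    """Yield (redirect_title, redirect_wikitext) pairs for each
    abbreviation found in an article's infoboxes."""
    for m in ABBREV.finditer(wikitext):
        abbrev = m.group(1).strip()
        if abbrev:
            yield abbrev, f'#REDIRECT[[{title}]]\n{{{{R from ISO 4}}}}'
```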

@Headbomb: Wikipedia:Bots/Requests for approval/Mdann52 bot 13 Mdann52 (talk) 10:37, 4 September 2017 (UTC)[reply]

simple string substitution in external URLs

A very useful news site in a specialised field (The Week in Chess) has changed domains at least twice, meaning there are (at a guess) hundreds of refs or external links to change. They would all change in a regular way (i.e. simple string replacement, or at worst regular expression replacement). There has got to be an already existing bot to do this. Can someone point me in the right direction? Adpete (talk) 12:32, 31 August 2017 (UTC)[reply]

It might be better to make a template for these links so that if the web site or link format changes, you only have to change the template in one place to fix all of the links. – Jonesey95 (talk) 15:30, 31 August 2017 (UTC)[reply]
If you have any more details eg. old url and new url examples and the changes between them, I can have a look at doing this. TheMagikCow (T) (C) 15:56, 31 August 2017 (UTC)[reply]

It's pretty simple. Every URL beginning with "http://www.chesscenter.com/twic/twic" needs to instead begin with "http://theweekinchess.com/html/twic". Note these are not the complete URLs, but anything after that is unchanged. e.g. at Baadur Jobava, reference 2 needs to change from http://www.chesscenter.com/twic/twic646.html#6 to http://theweekinchess.com/html/twic646.html#6 . I'm happy to run it if given some pointers. But if you want to run it, thanks, that'd be great. I'd be curious to hear how many URLs get changed, if you do.

And to Jonesey95, yes a template could be a good idea too, though enforcing compliance can be difficult, so I'd prefer to do the bot in the first instance. Adpete (talk) 23:22, 31 August 2017 (UTC)[reply]

Adpete, I think the best way forward would be to create a {{cite web}} wrapper (maybe {{cite WiC}} or {{cite The Week in Chess}}?) that links to the website. After this is done, a bot could go through and replace all extant uses with the template. Thoughts? Primefac (talk) 12:50, 7 September 2017 (UTC)[reply]
Ran the numbers; looks like 422 pages with the twic/twic in the URL that need changing. Definitely something a bot would be good for. The other 100ish point to different places. Primefac (talk) 12:54, 7 September 2017 (UTC)[reply]
Thanks for listing those links! If we're going to make a wrapper, I think it should be {{cite TWIC}}, because The Week in Chess uses the all-capital acronym TWIC. The main downside to the template is enforcing compliance but I guess it'll be ok. First let me see if I can find all or most of those 100 "other" links, because that might affect the format of the bot and/or template. Adpete (talk) 23:12, 7 September 2017 (UTC)[reply]
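Per Adpete's description, the swap itself is a pure prefix replacement (everything after the prefix, issue number, .html, fragment, is kept), so the core of any bot or AWB run is just:

```python
OLD = "http://www.chesscenter.com/twic/twic"
NEW = "http://theweekinchess.com/html/twic"

def fix_twic(wikitext):
    """Prefix swap for The Week in Chess URLs; the suffix is unchanged."""
    return wikitext.replace(OLD, NEW)
```

The ~100 "other" links Primefac found would need their own patterns once identified.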

Wikipedia has hundreds of articles that cite AOL News, but all of the links to this site are now broken. I tried using IABot, but it could not find archived URLs for these references. Is there another bot that can add archive URLs for all of these links? Jarble (talk) 17:12, 1 September 2017 (UTC)[reply]

Are they broken in the sense of permanent dead, or fixable by changing to a different URL at aolnews? -- GreenC 17:42, 1 September 2017 (UTC)[reply]
@GreenC: As far as I know, all links to aolnews.com are now redirected to the front page of aol.com. It may be possible to recover the original articles from the Internet Archive, but IABot is apparently unable to do this. Jarble (talk) 21:25, 1 September 2017 (UTC)[reply]
Reported here. There might be a way to solve this checking with Cyberpower678. -- GreenC 21:48, 1 September 2017 (UTC)[reply]

Jarble, IABot is currently rescuing aolnews.com where it can or leaving a dead link tag. If you see any it missed let me know. Should be done in an hour or so. -- GreenC 14:33, 4 September 2017 (UTC)[reply]

Social Security Death Index (SSDI)

Articles about deceased U.S. persons often cite the Social Security Death Index, which lies behind a paywall at ancestry.com. An example may be found at George Luz. I have no idea of the total count. The SSDI is also available at familysearch.org for free. The version at Family Search does not display the social security number; the version at ancestry once did but, according to our page, no longer does. Converting from ancestry to family search will, I think, require a little human effort and judgment. I don't know if that raises a WP:SYNTHESIS flag. Is it possible to search for uses of the SSDI at ancestry and put them into a list or, preferably, a hidden (Wikipedia:?) category so they can be changed to the Family Search version?--Georgia Army Vet Contribs Talk 00:53, 5 September 2017 (UTC)[reply]

Special:LinkSearch/http://ssdi.rootsweb.ancestry.com/? Or perhaps Special:LinkSearch/http://ssdi.rootsweb.ancestry.com/cgi-bin/ssdi.cgi if there are legitimate links to the base page at ssdi.rootsweb.ancestry.com. Anomie 02:38, 11 September 2017 (UTC)[reply]
That should do it, thanks. Call the request closed.--Georgia Army Vet Contribs Talk 21:30, 11 September 2017 (UTC)[reply]
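Anomie's Special:LinkSearch pointer also has an API equivalent, list=exturlusage, which a script could use to build the list programmatically (article namespace shown; converting the results into a hidden category would still be a separate edit run):

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def titles_from(result):
    """Extract page titles from a list=exturlusage query result."""
    return [p["title"] for p in result.get("query", {}).get("exturlusage", [])]

def pages_linking_to(pattern, limit=500):
    """Find articles whose external links match `pattern`,
    the API equivalent of Special:LinkSearch."""
    qs = urllib.parse.urlencode({
        "action": "query",
        "list": "exturlusage",
        "euquery": pattern,
        "eunamespace": 0,
        "eulimit": limit,
        "format": "json",
    })
    with urllib.request.urlopen(f"{API}?{qs}") as resp:
        return titles_from(json.load(resp))
```

For this request the pattern would be something like "ssdi.rootsweb.ancestry.com".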

WikiProject Asessment banner replacement

I am making this request on behalf of WikiProject Finance and WikiProject Investment. The two projects are merging so there are two things that a bot is needed for:

  1. Replace all {{WikiProject Investment}} banners on talk pages of articles that were only assessed by the Investment project with the {{WikiProject Finance}} banner.
  2. Remove the WikiProject Investment banner from pages that were already assessed by WikiProject Finance.

It would help immensely! Cheers. WikiEditCrunch (talk) 17:58, 8 September 2017 (UTC)[reply]
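The two rules are simple enough to sketch with regexes (a real run would use a proper template parser to cope with nested templates inside banner parameters, and would reconcile conflicting assessments rather than blindly keep one). A rough Python sketch:

```python
import re

INVESTMENT = r'\{\{\s*WikiProject Investment\s*([^{}]*)\}\}'

def merge_banners(talk_wikitext):
    """Apply the two merge rules from the request to one talk page:
    if a {{WikiProject Finance}} banner is already present, drop the
    Investment banner outright; otherwise rename it in place, keeping
    its parameters (class, importance, ...) untouched."""
    if re.search(r'\{\{\s*WikiProject Finance', talk_wikitext):
        # Rule 2: already assessed by Finance, remove the duplicate.
        return re.sub(INVESTMENT + r'\n?', '', talk_wikitext)
    # Rule 1: only assessed by Investment, swap the banner name.
    return re.sub(INVESTMENT, r'{{WikiProject Finance\1}}', talk_wikitext)
```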

The former category has about 500 pages and the latter category has about 1k pages. --Izno (talk) 18:02, 8 September 2017 (UTC)[reply]
Which is quite a lot of pages. It would take too long to do by hand, so to speak.
Cheers. WikiEditCrunch (talk) 14:07, 9 September 2017 (UTC)[reply]
Coding... --Kanashimi (talk) 09:23, 12 September 2017 (UTC)[reply]
@WikiEditCrunch: For Talk:Financial endowment, it is OK to remove {{WikiProject Investment}}. But I find some pages with a different class or importance, for example, Talk:Prediction market. What about them? --Kanashimi (talk) 12:30, 12 September 2017 (UTC)[reply]
@Kanashimi: Yeah, I would just replace those too. Worst case, I will look over them later on. Thanks and Cheers! WikiEditCrunch (talk) 17:01, 12 September 2017 (UTC)[reply]
@WikiEditCrunch: OK. I will generate a report (User:Cewbot/log/20170828) for these cases. Please tell me if you have any comments on the report, and then I will continue the task. However, I think these cases may need more discussion... --Kanashimi (talk) 08:55, 13 September 2017 (UTC)[reply]
@Kanashimi: The report looks good. I'll see if I can bring up those cases sometime soon. Thanks and Cheers! WikiEditCrunch (talk) 17:21, 13 September 2017 (UTC)[reply]
@WikiEditCrunch: Well, I have done 100 edits. Please confirm they are OK and that you will check the conflicts, so I can complete the task. --Kanashimi (talk) 23:13, 13 September 2017 (UTC)[reply]
@Kanashimi: They look good (OK)! Cheers! WikiEditCrunch (talk) 16:15, 15 September 2017 (UTC)[reply]
Doing... --Kanashimi (talk) 22:36, 15 September 2017 (UTC)[reply]
Y Done --Kanashimi (talk) 01:49, 16 September 2017 (UTC)[reply]
@Kanashimi: Thanks! Cheers. WikiEditCrunch (talk) 16:34, 16 September 2017 (UTC)[reply]

A BOT ready to help.

HelpBOT responds to help citations with advice, and welcomes new editors to Wikipedia. — Preceding unsigned comment added by Lookis (talkcontribs) 04:03, 11 September 2017 (UTC)[reply]

@Lookis: Declined Not a good task for a bot. per context bot and see a list of frequently denied bots. Dat GuyTalkContribs 15:56, 23 September 2017 (UTC)[reply]

Infobox image cleanup

Is it possible to add such cleanup tasks to one of the existing bots, or create a bot for such cleanups? These fill up the maintenance cat of unknown parameters unnecessarily. Even GA-level articles have such issues. -- Pankaj Jain Capankajsmilyo (talk · contribs · count) 13:39, 14 September 2017 (UTC)[reply]

SoccerbaseBot

I am requesting a bot to change code like this:
{{cite web | title = Games played by Jack Cork in 2014/2015 | url = http://www.soccerbase.com/players/player.sd?player_id=45288&season_id=144 | publisher = Soccerbase | accessdate = 31 January 2015}}

to this:
{{soccerbase season|45288|2014|accessdate= 31 January 2015}}
which gets the job done faster than doing it manually, and does not introduce errors in later seasons when adding references for new seasons. Iggy (talk) 12:32, 16 September 2017 (UTC)[reply]
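For the simple in-template cases, the requested change can be sketched as a single regex substitution. This is only an illustrative sketch, not a bot implementation: it reads the season's starting year from the citation title instead of trying to invert season_id, and it assumes the citation parameters appear in the order shown above.

```python
import re

# Hypothetical sketch of the requested conversion. The season's start year
# is taken from the "Games played by X in YYYY/YYYY" title, which sidesteps
# the non-trivial season_id-to-year mapping.
CITE_RE = re.compile(
    r"\{\{cite web\s*\|\s*title\s*=\s*Games played by [^|]+? in (\d{4})/\d{4}\s*"
    r"\|\s*url\s*=\s*http://www\.soccerbase\.com/players/player\.sd\?"
    r"player_id=(\d+)&season_id=\d+\s*\|\s*publisher\s*=\s*Soccerbase\s*"
    r"\|\s*accessdate\s*=\s*([^}|]+?)\s*\}\}"
)

def convert(wikitext: str) -> str:
    # Rewrite each matching citation as {{soccerbase season|id|year|accessdate=...}}
    return CITE_RE.sub(
        lambda m: "{{soccerbase season|%s|%s|accessdate=%s}}"
        % (m.group(2), m.group(1), m.group(3)),
        wikitext,
    )
```

Citations with different parameter orders, different wording, or out-of-template links would need separate handling.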

How about {{cite news |title=Games played by Wayne Rooney in 2002/2003 |url=http://www.soccerbase.com/players/player.sd?player_id=30921&season_id=132 |publisher=Soccerbase.com |date=6 April 2011 |accessdate=6 April 2011 }} at Wayne Rooney, [http://www.soccerbase.com/players/player.sd?player_id=13501&season_id=129 "Games played by Thierry Henry in 1999–2000"] at Thierry Henry and other articles similar to these? --Kanashimi (talk) 11:06, 17 September 2017 (UTC)[reply]
It sounds algorithmically easy enough to change all links coming from {{cite news}}, {{cite web}} and the like by matching the URL. (The only question is whether the formula to go from season year to season ID at Template:soccerbase season can really be trusted when doing the reverse conversion.) Out-of-template references are of course another matter. TigraanClick here to contact me 15:58, 18 September 2017 (UTC)[reply]
This may become a long term task... --Kanashimi (talk) 22:34, 18 September 2017 (UTC)[reply]
Coding... I will start from {{cite news}}, {{cite web}} and then others. --Kanashimi (talk) 10:46, 19 September 2017 (UTC)[reply]
@Struway2: Please give some advice, thank you. For example, for the case of {{cite web |title=Richard Cresswell |url=http://www.soccerbase.com/players/player.sd?player_id=8959 |work=Soccerbase |publisher=Centurycomm |accessdate=12 September 2015}} at York City F.C., is there a better solution? Is using Template:soccerbase or something like it a good idea? (Template:Soccerbase is still not in a citation format.) --Kanashimi (talk) 13:29, 19 September 2017 (UTC)[reply]
The first I knew of this task request was when I undid your change at the template page, and was pinged to come here. Template:Soccerbase season is specifically for citing stats for individual seasons formatted as a cite web. Template:Soccerbase was designed to generate an external link to the front page of a player's details at the Soccerbase.com website. Probably if citations like the Cresswell link at York City F.C., which is citing that front page, were to be template-ised, it would be best done by adding functionality to Template:Soccerbase. Has there been any consultation at WT:FOOTBALL at all on this? I haven't seen any. cheers, Struway2 (talk) 13:53, 19 September 2017 (UTC)[reply]
Thank you. @Iggy the Swan and Tigraan: I am sorry that I do not know very much about soccer. Do we need to create a new template for the cases lacking a season? --Kanashimi (talk) 13:59, 19 September 2017 (UTC)[reply]
I know nothing about soccer, and only looked at the question from a programmer's point of view: from a look at the URL and the template, it is fairly clear that the season-id/year correspondence is not so trivial, and the formula should be checked. But yeah, if this is not an uncontroversial task, you should get approval from the Wikiproject or whoever else is in charge before asking for a bot. TigraanClick here to contact me 16:47, 20 September 2017 (UTC)[reply]
Doing... I will only deal with the cases that have a season_id. --Kanashimi (talk) 11:32, 21 September 2017 (UTC)[reply]
Y Done Please read the task report. --Kanashimi (talk) 05:39, 22 September 2017 (UTC)[reply]
@Kanashimi: Of those I've checked, the straightforward ones are OK. Obviously the requester didn't explain enough of the details, like what to do with the output |name= parameter, or with the exceptions (season_id=146, mostly), and I didn't realise there had been no communication: sorry about that. Mostly, you left the season_id=146 ones unchanged, which was OK, but another time, it might be worth asking rather than guessing. There's one edit I found, here, which is a bit of a mess: I've fixed it manually. Thank you for your work. cheers, Struway2 (talk) 09:50, 22 September 2017 (UTC)[reply]
Thank you for checking! Please let me know if there are still errors that need to be fixed. --Kanashimi (talk) 10:01, 22 September 2017 (UTC)[reply]
Struway2 posted to me about this 146 issue saying, as above in this thread, that these have been left alone - I have manually changed some of the false IDs from this:
{{cite web | title = Games played by Lee Tomlin in 2016/2017 | url = http://www.soccerbase.com/players/player.sd?player_id=41944&season_id=146 | publisher = Soccerbase | accessdate = 31 January 2015}}
to this
{{soccerbase season|41944|2016|accessdate= 31 January 2015}}


to a certain number of articles and found out there are still around 200+ articles to be done. I probably should have mentioned that in the first post on this thread. Iggy (talk) 14:25, 22 September 2017 (UTC)[reply]

Can anyone replace the text "Alabama" with the name of the relevant state in these drafts?

I've been drafting a series of lists of Paleozoic life by state, and I used the Alabama page as a template to set the others up. Could someone replace the text "Alabama" with the state named in the title of the following articles? Abyssal (talk) 16:51, 19 September 2017 (UTC)[reply]

Doing... -- John of Reading (talk) 16:59, 19 September 2017 (UTC)[reply]
@John of Reading:Thanks, John! Super fast work! Abyssal (talk) 17:11, 19 September 2017 (UTC)[reply]
Y Done, I hope. I used AWB to replace "Alabama" with {{subst:str right|{{subst:PAGENAME}}|30}}, and that picked the state name out of the name of each draft. You'll see I had to fix up the pages with disambiguation suffixes. -- John of Reading (talk) 17:14, 19 September 2017 (UTC)[reply]
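The str-right trick works because the shared prefix is exactly 30 characters long; in ordinary code the same extraction is a prefix strip. A sketch only, not what AWB actually ran:

```python
# "List of the Paleozoic life of " is exactly 30 characters, which is why
# {{subst:str right|{{subst:PAGENAME}}|30}} yields the state name.
PREFIX = "List of the Paleozoic life of "

def state_from_title(pagename: str) -> str:
    # Equivalent prefix strip; pages with disambiguation suffixes keep them,
    # matching the manual fix-ups mentioned above.
    if pagename.startswith(PREFIX):
        return pagename[len(PREFIX):]
    return pagename
```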
@John of Reading:Hey John, could I ask you for another favor? This one's even easier. Could you remove the following templates from the same articles as before? Abyssal (talk) 18:33, 19 September 2017 (UTC)[reply]
  • {{col-begin|width=100%}}
  • {{col-1-of-4}}
  • {{col-2-of-4}}
  • {{col-3-of-4}}
  • {{col-4-of-4}}
  • {{col-end}}
@Abyssal:  Done -- John of Reading (talk) 19:01, 19 September 2017 (UTC)[reply]
@John of Reading: Hey, John. Do you think you could do what you did with the state names for the following articles I have hidden here? Abyssal (talk) 14:19, 20 September 2017 (UTC)[reply]


@Abyssal:  Done -- John of Reading (talk) 14:50, 20 September 2017 (UTC)[reply]
Thanks, John. You deserve a barnstar! Abyssal (talk) 15:07, 20 September 2017 (UTC)[reply]
@John of Reading:Hey, John. Could you remove those same column templates from the "List of the Mesozoic life of..." articles and also remove the "==Mesozoic==" section heading from the same? Abyssal (talk) 17:22, 20 September 2017 (UTC)[reply]
@Abyssal:  Done -- John of Reading (talk) 19:08, 20 September 2017 (UTC)[reply]
@John of Reading:My hero! Is there any way you can scan the Mesozoic lists for lines of code that include the phrase "sp." and delete those entire lines? They represent species that couldn't be identified, so their presence is redundant clutter in the articles. Abyssal (talk) 19:24, 20 September 2017 (UTC)[reply]
@Abyssal:  Done -- John of Reading (talk) 20:06, 20 September 2017 (UTC)[reply]
@John of Reading: Hey John, I have a more complicated request this time. I'm not sure if it's possible, but here goes. Could you replace "* †[[A" (and "* †[[B", "* †[[C", "* †[[D"...) in the "List of the Mesozoic life of..." articles with the following block of code (please copy it from inside the edit screen; it doesn't come out formatted right here on the page itself):

==A== <!-- Please hide unwanted images in comments like this one so that they may be easily restored later if we change our minds and the image is wanted again --> [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:Acteon tornatilis 2.png|thumb|right|A living ''[[Acteon ]]''.]] [[File:Bonefish.png|thumb|right|Illustration of a living ''[[Albula]]'', or bonefish.]] [[File:Ancilla ventricosa 001.jpg|thumb|right|Modern shell of ''[[Ancilla (gastropod)|Ancilla]]''.]] [[File:Appalachiosaurus montgomeriensis.jpg|thumb|right|Life restoration of ''[[Appalachiosaurus ]]''.]] {{Compact ToC}} * †''[[A

with the letter in the section heading and the "* †[[A" being changed once for each letter of the alphabet? Thanks. If this isn't possible I understand, but setting up all the section headings in all 50 articles looks daunting. If we could automate it, it would speed up the production of the articles immensely. Thanks for all the help you've given me so far. These articles are turning out great. Abyssal (talk) 16:40, 21 September 2017 (UTC)[reply]
@Abyssal: Please check what I've done to Draft:List of the Mesozoic life of Georgia (U.S. state). I've guessed that I should remove the original "List" subheading, is that right? -- John of Reading (talk) 17:45, 21 September 2017 (UTC)[reply]
@John of Reading:Yes. The changes made to Georgia look perfect. Thank you so much. Abyssal (talk) 17:50, 21 September 2017 (UTC)[reply]
@Abyssal: Done? -- John of Reading (talk) 18:19, 21 September 2017 (UTC)[reply]
Done. :) Abyssal (talk) 18:48, 21 September 2017 (UTC)[reply]

@John of Reading: Hey, John, do you think you could do me a few more favors? Could you run that bot to remove lines of code containing "sp." from the following commented-out list of articles just like you did on September 20th? Then could you scan these articles for the phrases " – o" and " – t" and replace them with " @ o" and " @ t" before removing every "–" from the articles and then replacing the "@"s with the "–" again? Then could you run that operation from September 21st where you replaced the first instance of each capital letter in the format "* †[[A" with a block of code, but with this new smaller block of code listed below:

==A== <!-- Please hide unwanted images in comments like this one so that they may be easily restored later if we change our minds and the image is wanted again --> {{Compact ToC}} * †''[[A


Abyssal (talk) 14:51, 28 September 2017 (UTC)[reply]
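The dash step described above is a protect/strip/restore pass; it can be sketched as three string replacements (assuming, as the request implies, that "@" never occurs in the lists):

```python
# Shield the en dashes that should survive (" – o", " – t"), delete the
# rest, then unshield. "@" is a sentinel assumed absent from the lists.
def strip_most_dashes(text: str) -> str:
    text = text.replace(" \u2013 o", " @ o").replace(" \u2013 t", " @ t")
    text = text.replace("\u2013", "")
    return text.replace("@", "\u2013")
```

Note that deleting a dash leaves its surrounding spaces behind, exactly as the three-step procedure specifies.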

@Abyssal: I've had a go at these. (1) I noticed several entries marked "spp." and one marked "p." Should those be removed? (2) I think the lists for South Carolina and Washington are too short to need A-Z subheadings. I hope that doesn't complicate the collection of the illustrations for these articles. (3) There's too much data at Draft:List of the Paleozoic life of Ohio#Z. -- John of Reading (talk) 09:02, 30 September 2017 (UTC)[reply]
@John of Reading:Yes, you can remove "spp." and "p.". SC and WA can be skipped; I'll just move those lists to sections of the respective overall list of prehistoric life for each state. Any idea what went wrong with Ohio? Why does it have a whole alphabetical list under the Z section? Abyssal (talk) 01:15, 1 October 2017 (UTC)[reply]
@Abyssal: (1) Done (3) I've now checked that "Ohio" had two copies of the same data, and have removed one. I guess you just typed Ctrl-V twice by mistake. And, (4), I'll be away from tomorrow, back Thursday. :-) John of Reading (talk) 07:37, 1 October 2017 (UTC)[reply]
OK, thanks for all your help! Abyssal (talk) 11:10, 1 October 2017 (UTC)[reply]

Request to replace pushpin_map with map_type

This is a huge task and seems nearly impossible to do manually. It would enable effective template maintenance and easier consolidation. I have been trying to clean up Category:Pages using infobox Hindu temple with unknown parameters for quite some time, and the task seems like a never-ending process. -- Pankaj Jain Capankajsmilyo (talk · contribs · count) 17:52, 19 September 2017 (UTC)[reply]

It looks like there are about 500 (up to no more than 900) affected pages in the category. Someone with AWB access may be able to get this done pretty quickly. – Jonesey95 (talk) 18:18, 19 September 2017 (UTC)[reply]
It's not merely about the cat I referred to. There's a wider initiative going on at Wikipedia:Maps in infoboxes, one of whose objectives is what I have stated above. You yourself are a part of it. -- Pankaj Jain Capankajsmilyo (talk · contribs · count) 04:08, 20 September 2017 (UTC)[reply]
@Capankajsmilyo: Can you tell me explicitly what you need changed and on what templates? Do you just need to replace pushpin_map with map_type on every instance of {{Infobox Hindu temple}}? Is there more? — nihlus kryik  (talk) 21:28, 23 September 2017 (UTC)[reply]
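If the answer turns out to be a plain parameter rename inside each transclusion, the core of the edit is a one-line substitution. A sketch only; it covers the usual "| pushpin_map =" form and would miss unusual layouts or nested templates:

```python
import re

def rename_param(wikitext: str) -> str:
    # Rename pushpin_map to map_type wherever it appears as a template
    # parameter assignment.
    return re.sub(r"(\|\s*)pushpin_map(\s*=)", r"\1map_type\2", wikitext)
```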

Can anyone scan a list of articles and put every picture used in those articles into another article?

This is related to my previous request, but was so different that I thought I'd make a new heading for it. I'm making a series of ~50 articles listing the prehistoric animals that once inhabited each US state. I was wondering if someone could rig a bot to search the articles linked to in the list for all the images and copy them into the article under the list heading in the format "[[File:Alethopteris PAMuseum.jpg|thumb|right|Fossil of ''[[articletitlegenusname]]''.]]". Draft:List of the Paleozoic life of Alabama is a good example of what I'm going for; I had originally tried to do this manually. Article list hidden in a comment here. Abyssal (talk) 19:40, 19 September 2017 (UTC)[reply]

I have done a little testing on Draft:List of the Paleozoic life of Alabama, but I don't know if this is what you want. Please tell me what you think and how I can improve the tool, thank you. --Kanashimi (talk) 08:17, 22 September 2017 (UTC)[reply]

By the way, here is the source code: 20170922.scan link targets in page.js on GitHub --Kanashimi (talk) 09:04, 22 September 2017 (UTC)[reply]
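The actual tool is the linked JavaScript; as a rough guess at its core idea only, the scan amounts to collecting image names from each linked article's wikitext (including bare "image =" infobox/taxobox parameters) and reformatting them as right-aligned thumbnails. Everything below is an assumption, not the tool's real code:

```python
import re

def images_as_thumbs(wikitext: str, genus: str):
    # Collect [[File:...]] uses plus "image =" parameters, then emit them in
    # the "Fossil of ''[[Genus]]''" thumb format requested above.
    files = re.findall(r"\[\[File:([^\]|]+)", wikitext)
    files += re.findall(r"\|\s*image\s*=\s*([^\n|]+)", wikitext)
    return [
        "[[File:%s|thumb|right|Fossil of ''[[%s]]''.]]" % (f.strip(), genus)
        for f in files
    ]
```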

Thanks for helping; I was afraid this request was going unnoticed. The trial run is kind of like what I was going for, but I noticed a few problems. The big one is that the scan included the links in the article's lead section, so it added a bunch of images of Alabama from the state's main article that didn't have anything to do with prehistoric life. I just want the pictures from the links under the main list section. Also, the scan didn't pick up the images from the taxoboxes in those articles. Also, many of the images are left-aligned. Is there any way you could tweak the code so that it only includes images from the articles in the list itself, includes images from taxoboxes, and makes sure they're all right-aligned? Abyssal (talk) 15:00, 22 September 2017 (UTC)[reply]
Coding... Sure. I will try. --Kanashimi (talk) 15:30, 22 September 2017 (UTC)[reply]
Thanks for the help. Abyssal (talk) 15:48, 22 September 2017 (UTC)[reply]
@Abyssal: I have done a little more testing. By the way, it seems the link to Archimedes is a mistake? --Kanashimi (talk) 00:59, 23 September 2017 (UTC)[reply]
@Kanashimi:Sorry I haven't been on despite your efforts to assist this project, I was working long weekend shifts. Anyway, it seems like the scan isn't picking up on all the pictures from the articles and the images aren't displayed in the same order as the articles they come from are listed. The Archimedes thing isn't a big deal. There's a common prehistoric bryozoan with that name. Maybe I'll try to disambiguate all the links to it before we do the final run. Until then we need to make sure we're picking up all the images and ordering them correctly. Abyssal (talk) 03:29, 25 September 2017 (UTC)[reply]
@Abyssal: For the missing images, could you give some examples, so we can quickly find out what is going wrong? As for the problem of image order, I think it is because I do not add images already existing in the article. Perhaps I should add all images whether they already exist or not? --Kanashimi (talk) 08:37, 25 September 2017 (UTC)[reply]
@Kanashimi:Here's an example of an image that didn't make it into the article: File:Caninia torquia coral KGS.jpg. No need to add images repeatedly; there were some images already in the article, so that may be what threw it off. Abyssal (talk) 12:40, 25 September 2017 (UTC)[reply]
@Abyssal: I found some bugs and fixed them. Please check again. --Kanashimi (talk) 14:02, 25 September 2017 (UTC)[reply]
@Kanashimi: Could you try running it again? I think when you reverted you re-added some of the extraneous original images and those may have affected the results. Abyssal (talk) 14:16, 25 September 2017 (UTC)[reply]
@Abyssal:  Done --Kanashimi (talk) 14:38, 25 September 2017 (UTC)[reply]
@Kanashimi: Hmm. There are still problems with the bot not putting images in alphabetical order. It seems to be getting images from Archimedes, then Kullervo, and some taxa starting with L, and then later starts going all over the place. Abyssal (talk) 14:57, 25 September 2017 (UTC)[reply]
@Abyssal: Yes, I found this problem and have tried to fix it. Please tell me how the result looks now, thank you. --Kanashimi (talk) 10:12, 26 September 2017 (UTC)[reply]
@Kanashimi: Big improvement, but I notice that the scan isn't picking up the images in the Diplichnites article. Abyssal (talk) 12:45, 26 September 2017 (UTC)[reply]
@Abyssal:  Done --Kanashimi (talk) 13:59, 26 September 2017 (UTC)[reply]
@Abyssal: If there is still anything needing improvement, please let me know, thank you.--Kanashimi (talk) 09:56, 30 September 2017 (UTC)[reply]
Sorry I haven't been in touch. I'll get back with you tonight ~ 9:00 PM eastern. Abyssal (talk) 11:19, 30 September 2017 (UTC)[reply]
@Kanashimi: Hey, Kanashimi. Sorry to keep you waiting. I found that I actually hadn't finished all the lists in that article series and needed to spend a few days getting them ready for you. I haven't seen any actual problems with the images, so we're almost ready to go. The only change I'd ask to make is if your bot could sort the images under the headings of the first letter in the name of the article they were taken from, so the pictures from the articles starting with A go under the "A" section but before the mini table of contents, the images from articles starting with B under the B section and so on. After that we can run the whole batch. Abyssal (talk) 01:21, 1 October 2017 (UTC)[reply]
@Abyssal:  Done Please help me to fix the grammatical errors in the source code (20170922.scan link targets in page.js on GitHub), thank you. --Kanashimi (talk) 08:06, 1 October 2017 (UTC)[reply]

The site is closing in the near future, so it is necessary to save the links in the web archive. Is it possible to collect all the links from our articles to this site on one page for the convenience of archiving? Many were included through Template:SportsReference. In general, it would be necessary to archive all the athletes' profiles from there, regardless of whether we have articles. Does anyone have suggestions? It would be good to do this in Wikipedia in all languages. JukoFF (talk) 13:06, 20 September 2017 (UTC)[reply]

It's already archived on the Wayback Machine. At least it should be.—CYBERPOWER (Chat) 16:55, 20 September 2017 (UTC)[reply]

Reference suggest

Can a bot or tool be coded which has the capability to suggest references for an article, or maybe for a statement? -- Pankaj Jain Capankajsmilyo (talk · contribs · count) 09:59, 22 September 2017 (UTC)[reply]

  • If you mean an automated process whose input is a chunk of text (article or sentence) and whose output is a list of relevant URLs, it will be way beyond the capacities of any volunteer bot-coder, and possibly of the WMF servers' computational power or storage. It would probably be feasible to feed the text to a given search engine and reformat the result into a list of URLs, but I am guessing you expected something more. TigraanClick here to contact me 11:30, 22 September 2017 (UTC)[reply]
  • There's a whole set of {{Find sources}} type of templates which already exist, and could be a useful reply to the first half of the request (without replying to the "suggest statements" part of the request, which doesn't seem like much of a good idea either). --Francis Schonken (talk) 13:25, 22 September 2017 (UTC)[reply]

I don't know how practical this might be, but I thought it would be helpful if redlinks could be tagged as such and a bot could then automatically add the month/year the redlink (or redlink template) was added.

So, for instance, if I create a link to John Simon (author), which is a redlink, one of the following would happen:

  1. Ideal case: the bot would note the redlink (if that's even possible) and insert a tag with the month and year that the redlink was added.
  2. Also good: I or someone else notices that the link is a redlink and tag it with a template such as Template:Nolink. The bot would detect the use of the template and flesh it out with the month/year.

I feel this would be extremely helpful for determining how long a redlink has been extant, and would give editors an indication of the likelihood that the redlink might ever become a bluelink.

Never done a bot request before, so apologies if I've horribly mangled this or such. Thanks! DonIago (talk) 13:30, 22 September 2017 (UTC)[reply]

I can see this being useful. Maybe we could somehow use it to build up an index of the most common redlinks - I know we have WP:TOPRED for the most viewed redlinks but not the most frequently created, and it's broken now anyway. We could also use it to look for simple errors, like someone writes an article with a link to Wold War I and doesn't correct it. Ivanvector (Talk/Edits) 13:41, 22 September 2017 (UTC)[reply]
This is possible. I think there would need to be consensus for it, as there's a lot of articles with redlinks. Regarding the typo fixing (if that's what you meant), it's better for a human to do it. Dat GuyTalkContribs 13:56, 22 September 2017 (UTC)[reply]
I kind of love that you're both already taking this further than I had it in my head (smile). I'd definitely be wary of a bot "auto-correcting" redlinks, but I like the notion of having them compiled somewhere for review.
I was only thinking of this being a "going forward" bot, but if it was able to scan for existing redlinks as well, that would be nifty...though I'm guessing it wouldn't be able to determine when redlinks were originally created (much less if a bluelink became a redlink), so any data would start off with a huge spike when the bot went active. DonIago (talk) 14:09, 22 September 2017 (UTC)[reply]
Do we need to re-scan all revisions to get the status of all redlinks? --Kanashimi (talk) 14:12, 22 September 2017 (UTC)[reply]
The list probably would need to be refreshed to capture, e.g. redlinks that have been delinked, but I could see that being a "once a month" or even "once every few months" process. I don't know whether it's more intensive to do regular scans or fewer but wider-scope ones. DonIago (talk) 14:47, 22 September 2017 (UTC)[reply]
It could be possible using Special:ApiSandbox#action=query&format=json&prop=links and the continue parameter, but it'll definitely take a very, very long time. I'm guessing 10 per hour would be the best. Dat GuyTalkContribs 15:08, 22 September 2017 (UTC)[reply]
If the comment above about humans fixing typos was in response to my suggestion, yes, that's what I meant. A human could scan the list of redlinks to spot obvious errors and then fix them manually. I wouldn't trust a bot to do the corrections. Ivanvector (Talk/Edits) 15:11, 22 September 2017 (UTC)[reply]
How about a bot changing all redlinks to templates with a date mark? When an article is created, the template would just show the link as a normal wikilink, so we won't need to scan all revisions. --Kanashimi (talk) 15:37, 22 September 2017 (UTC)[reply]
I'm not sure I'm following this? DonIago (talk) 23:11, 24 September 2017 (UTC)[reply]
That would create some very confusing formatting. Perhaps we could have a bot template red links, and then turn around and un-template links that are no longer red, but I think the easier course of action would be to delineate lists of oldest red links. bd2412 T 14:40, 26 September 2017 (UTC)[reply]
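The templating idea floated above could look something like this sketch: wrap each link reported as red in a dated template (Template:Nolink is the one mentioned earlier in the thread; the date parameter name is my assumption). In a real bot, is_red would be answered by an API query rather than a callback.

```python
import re

def tag_redlinks(wikitext: str, is_red, date: str = "October 2017") -> str:
    # Wrap unpiped links whose target is red; piped links are left alone
    # because the character class excludes "|".
    def repl(m):
        target = m.group(1)
        if is_red(target):
            return "{{Nolink|%s|date=%s}}" % (target, date)
        return m.group(0)
    return re.sub(r"\[\[([^\[\]|]+)\]\]", repl, wikitext)
```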

Authority control template

A Wikidata query informs us that there are (at the time of writing) 1,556 people with an article on English Wikipedia and an ORCID iD in Wikidata. However, Category:Wikipedia articles with ORCID identifiers has only 1,421 members.

This means that 135 - a considerable percentage - of the people found by the Wikidata query do not have the {{Authority control}} template at the foot of their article.

The same is no doubt true for other authority control identifiers, such as VIAF.

We need a bot, please, to add the template to those articles, and perhaps more.

If the template is added to an article, and no relevant identifier is found, it does not display - so it can safely be added to all biographical articles (if this might fall foul of COSMETICBOT, then it could be added as part of other tasks, such as general fixes done by AWB).

Can anyone kindly oblige? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:15, 22 September 2017 (UTC)[reply]
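The article set to fix is simply the difference between the two lists; a sketch, with placeholder title lists standing in for the real query and category results:

```python
def needs_template(query_titles, category_titles):
    # Articles returned by the Wikidata query that are not yet in
    # Category:Wikipedia articles with ORCID identifiers.
    return sorted(set(query_titles) - set(category_titles))
```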

Incidentally, the same applies in several other Wikipedias, for example Spanish, should anyone feel inclined to help in those also. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:40, 22 September 2017 (UTC)[reply]

Change the name of sub catogories of Members of the Parliament of England

The categories under Category:Members of the Parliament of England (pre-1707) by parliament were created before July 2016 when RfC on date ranges was closed. That RfC changed how the MOS:DATERANGE is specified.

Currently the names that contain a date range are in the format ccyy–yy (unless the century is different), rather than the range style now recommended by MOS:DATERANGE, ccyy–ccyy. So I am requesting a bot job to run through all the subcategories and sub-subcategories, renaming them to ccyy–ccyy and updating the corresponding category names in articles that are within such categories.

For example, the subcategory Category:16th-century English MPs contains a subcategory Category:English MPs 1512–14‎. To be MOS:DATERANGE compliant, it ought to be renamed Category:English MPs 1512–1514‎.

-- PBS (talk) 10:42, 23 September 2017 (UTC)[reply]
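The rename itself is mechanical for same-century ranges; a sketch (it assumes the abbreviated form is always two trailing digits in the same century, and leaves already-compliant names untouched):

```python
import re

def expand_range(name: str) -> str:
    # "1512–14" -> "1512–1514": reuse the century digits of the start year.
    return re.sub(
        r"(\d{2})(\d{2})\u2013(\d{2})\b",
        lambda m: "%s%s\u2013%s%s" % (m.group(1), m.group(2), m.group(1), m.group(3)),
        name,
    )
```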

Y Done — nihlus kryik  (talk) 06:53, 26 September 2017 (UTC)[reply]

Convert amp pages to full pages

If someone is editing on mobile, there's a chance the link they wish to cite will be an amp page. Requesting a bot to identify these pages and convert them to the full version. Example: amp full. Terrorist96 (talk) 18:29, 23 September 2017 (UTC)[reply]
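The example links did not survive in this copy, so the patterns below are only guesses at common AMP URL shapes (a trailing "/amp" path segment, or an "amp"/"amp=1" query flag); real AMP URLs vary by publisher, so a production bot would need per-site rules.

```python
import re

def unamp(url: str) -> str:
    # Hypothetical normalisation: strip a trailing "/amp" path segment,
    # then an "amp"/"amp=1" query flag at the end of the URL.
    url = re.sub(r"/amp(/|$)", r"\1", url)
    return re.sub(r"[?&]amp(=1)?$", "", url)
```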

Category:Storyboard artists and Category:American storyboard artists redundancy

There is a great deal of redundancy between the parent Category:Storyboard artists and the child Category:American storyboard artists. Per WP:SUPERCAT "an article should be categorised as low down in the category hierarchy as possible, without duplication in parent categories above it. In other words, a page or category should rarely be placed in both a category and a subcategory or parent category (supercategory) of that category." Could someone create a bot to remove the redundancy? Thanks! Mtminchi08 (talk) 08:46, 24 September 2017 (UTC)[reply]
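A sketch of the removal: drop the parent category only from pages that already carry the child, per the WP:SUPERCAT rule quoted above.

```python
def remove_parent(wikitext: str) -> str:
    parent = "[[Category:Storyboard artists]]"
    child = "[[Category:American storyboard artists]]"
    if child in wikitext:
        # Remove the parent together with its line break when present.
        return wikitext.replace(parent + "\n", "").replace(parent, "")
    return wikitext
```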

 Done. — JJMC89(T·C) 21:46, 24 September 2017 (UTC)[reply]

BBC Genome in citations

Citations to BBC Genome should be amended thus, as the Genome is merely a front end to scans of The Radio Times. Metadata can be fetched using Citoid (or the Zotero translator for Genome, which Citoid uses). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 09:49, 28 September 2017 (UTC)[reply]

AWB to fix WP:TEMPLATECAT template

{{National Heroes of Indonesia}} currently includes a transcluded category, Category:National Heroes of Indonesia. Can someone with AWB run over the pages linked in that template to add the category and then remove the category from the template? (Besides the book link.) --Izno (talk) 21:22, 28 September 2017 (UTC)[reply]

 Done. — JJMC89(T·C) 22:08, 28 September 2017 (UTC)[reply]

Auto notification?

I'd like to be notified by bot every time someone joins the WikiProject JavaScript. Is there a bot that can do this? The Transhumanist 06:02, 29 September 2017 (UTC)[reply]

Why don't you just watchlist the membership list? --Redrose64 🌹 (talk) 11:28, 29 September 2017 (UTC)[reply]

Reducing Lint errors in Misnested tag with different rendering in HTML5 and HTML4

To reduce lint errors in Lint errors: Misnested tag with different rendering in HTML5 and HTML4, would someone be able to do a bot run that would do the following search and replaces:

  • [[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II]] with [[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II</sup>]]
  • [[User talk:Cyberbot II|<span style="color:green">Talk to my owner]] with [[User talk:Cyberbot II|<span style="color:green">Talk to my owner</span>]]

Maybe even cyberpower678 might be able to get Cyberbot II to do it? -- WOSlinker (talk) 16:50, 29 September 2017 (UTC)[reply]
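Since both broken variants are fixed strings, the whole job reduces to two literal replacements; a sketch only (the replacements are idempotent, so already-fixed signatures are untouched):

```python
def fix_signature(text: str) -> str:
    # Close the unclosed <sup> in the user link and the unclosed <span> in
    # the talk link of the old signature.
    text = text.replace(
        '[[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II]]',
        '[[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II</sup>]]',
    )
    return text.replace(
        '[[User talk:Cyberbot II|<span style="color:green">Talk to my owner]]',
        '[[User talk:Cyberbot II|<span style="color:green">Talk to my owner</span>]]',
    )
```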

I don't have time to code up a bot for that right now. If someone with AWB experience would like to do it, I would be grateful. Pinging MagioladitisCYBERPOWER (Chat) 16:52, 29 September 2017 (UTC)[reply]
There are *two* unclosed tags in the signature. One is the sup tag that User:WOSlinker has identified above. But, there is also an unclosed span tag in the talk page link in the signature. That should be closed as well. Could someone fix the bot's signature (which should be a simpler task for Cyberpower678?) besides fixing all pages that already have the old signature? SSastry (WMF) (talk) 17:36, 29 September 2017 (UTC)[reply]
Note that the html5-misnesting category is a false positive (I am fixing my code that triggers this). But, it will just shift the linter issue to a lower priority category (missing-end-tag) that doesn't affect tidy replacement. This doesn't reduce the usefulness of fixing them, just lowers the urgency a bit. SSastry (WMF) (talk) 17:36, 29 September 2017 (UTC)[reply]

My shared DDNS domain was lost to a domain squatter. I would like the mass removal of links left by DPL bot on User talk pages. In short, remove " (check to confirm | fix with Dab solver)" from edits like [9]. — Dispenser 17:38, 29 September 2017 (UTC)[reply]

Does User:JaGa give permission, to edit his talk page posts? He hasn't logged in since April. -- GreenC 12:48, 1 October 2017 (UTC)[reply]
Him, User:R'n'B, and myself control the bot. These links are a boon for the domain squatter and are worthless once they're fixed. — Dispenser 14:00, 1 October 2017 (UTC)[reply]

List of pages in namespace 0-15 that contain the string "dispenser.homenet.org":

Namespace 0
  • Russian military intervention in the Syrian Civil War
  • Donetsk People's Republic
Namespace 1
  • (857 pages)
Namespace 2
  • (395 pages)
Namespace 3
  • (25489 pages)
Namespace 4
  • (414 pages)
Namespace 5
  • (59 pages)
Namespace 6-7
  • (0 pages)
Namespace 8
  • MediaWiki:Gadget-dropdown-menus-vector.js
Namespace 9
  • MediaWiki talk:Linkshere
Namespace 10
  • Template:Siadn
  • Template:GeoTemplate
  • Template:Dablinks/FAQ
  • Template:WikiProject College football navbox
  • Template:Good article tools
  • Template:LDSTaskBox
  • Template:Editor tools
  • Template:Featured article tools/sandbox
  • Template:Good article tools/PR
  • Template:Did you know nominations/Telmatobius ventriflavum
Namespace 11
Namespace 12
Namespace 13
Namespace 14-15
  • (0 pages)

-- GreenC 15:35, 1 October 2017 (UTC)[reply]