
Wikipedia:Bot requests: Difference between revisions

Hi all, {{u|Kadane}} very helpfully created {{u|KadaneBot}} for us over at [[WP:PR|Wikipedia Peer Review]] - it sends out automated reminders based on topic areas of interest for unanswered peer reviews. Unfortunately, Kadane's been inactive almost since creation (September 2018), and hasn't responded to my request [https://en.wikipedia.org/w/index.php?title=User_talk:Kadane&diff=877167405&oldid=875194938]. Would anyone be so kind as to usurp this bot so we can continue to use it? --[[User:Tom (LT)|Tom (LT)]] ([[User talk:Tom (LT)|talk]]) 07:32, 22 February 2019 (UTC)
:ADDIT. As {{u|Xaosflux}} pointed out, there isn't really a 'usurp' process; however, I think this is probably the easiest title to describe what I am requesting.--[[User:Tom (LT)|Tom (LT)]] ([[User talk:Tom (LT)|talk]]) 07:32, 22 February 2019 (UTC)

== Archive Bot ==

A bot that will, in certain situations, switch links to web.archive.org.

Revision as of 01:31, 23 February 2019

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots: requests denied either because they are too complicated to program or because they lack consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).

Alternatives to bot requests

Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.
Make a new request
# Bot request Status 💬 👥 🙋 Last editor 🕒 (UTC) 🤖 Last botop editor 🕒 (UTC)
1 Automatic NOGALLERY keyword for categories containing non-free files (again) 27 11 Anomie 2024-08-04 14:09 Anomie 2024-08-04 14:09
2 Bot that condenses identical references Coding... 12 6 ActivelyDisinterested 2024-08-03 20:48 Headbomb 2024-06-18 00:34
3 Bot to remove template from articles it doesn't belong on? 4 4 Wikiwerner 2024-09-28 17:28 Primefac 2024-07-24 20:15
4 One-off: Adding all module doc pages to Category:Module documentation pages 7 3 Andrybak 2024-09-01 00:34 Primefac 2024-07-25 12:22
5 Draft Categories 13 6 Bearcat 2024-08-09 04:24 DannyS712 2024-07-27 07:30
6 Change hyphens to en-dashes 16 7 1ctinus 2024-08-03 15:05 Qwerfjkl 2024-07-31 09:09
7 Consensus: Aldo, Giovanni e Giacomo 17 5 Dicklyon 2024-08-14 14:43 Qwerfjkl 2024-08-02 20:23
8 Cyclones 3 2 OhHaiMark 2024-08-05 22:21 Mdann52 2024-08-05 16:07
9 Substing int message headings on filepages 8 4 Jonteemil 2024-08-07 23:13 Primefac 2024-08-07 14:02
10 Removing redundant FURs on file pages 5 3 Wikiwerner 2024-09-28 17:28 Anomie 2024-08-09 14:15
11 Need help with a super widespread typo: Washington, D.C (also U.S.A) 32 10 Jonesey95 2024-08-26 16:55 Qwerfjkl 2024-08-21 15:08
12 Dutch IPA 4 3 IvanScrooge98 2024-08-25 14:11
13 AnandTech shuts down 9 6 GreenC 2024-09-01 18:39 Primefac 2024-09-01 17:28
14 Date formatting on 9/11 biography articles 5 2 Zeke, the Mad Horrorist 2024-09-01 16:27
15 Discussion alert bot 6 4 Headbomb 2024-09-08 12:29 Headbomb 2024-09-08 12:29
16 Regularly removing coords missing if coordinates are present BRFA filed 11 2 Usernamekiran 2024-09-07 13:19 Usernamekiran 2024-09-07 13:19
17 Latex: move punctuation to go inside templates 3 2 Yodo9000 2024-09-07 18:59 Anomie 2024-09-07 03:38
18 de-AMP bot BRFA filed 13 7 Usernamekiran 2024-09-24 16:04 Usernamekiran 2024-09-24 16:04
19 Articles about years: redirects and categories BRFA filed 7 3 DreamRimmer 2024-09-16 01:18 DreamRimmer 2024-09-16 01:18
20 WikiProject ratings change BRFA filed 3 2 DreamRimmer 2024-09-15 11:43 DreamRimmer 2024-09-15 11:43
21 QIDs in Infobox person/Wikidata BRFA filed 10 3 Pigsonthewing 2024-09-17 16:56 Usernamekiran 2024-09-17 16:11
22 Remove outdated "Image requested" templates 3 2 7804j 2024-09-21 11:26 DreamRimmer 2024-09-19 18:53
23 "Was" in TV articles 5 3 Primefac 2024-09-29 19:34 Primefac 2024-09-29 19:34


spectator.co.uk

There are about 1000 mainspace links to Spectator; most are broken. The site changed URL schemes without redirects, but the pages still exist at new URLs. Example:

There's no obvious way to program this, but posting if anyone has ideas. -- GreenC 06:27, 7 November 2018 (UTC)[reply]

I actually do not have much knowledge about Wikipedia bots. When I checked two or three links, the things that need to be done from a reader's point of view are:

1) Identify the link that is flagged as broken.
2) Remove "-.thtml" from the last portion of the link.
3) Add the month number and year number before the last section of the URL, separated by commas. The year and month are those in which the article appeared. If the month is only one digit, add a zero before the month number. Adithyak1997 (talk) 10:40, 7 November 2018 (UTC)[reply]
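As a rough illustration only, the three steps above might look like the following sketch. The publication year and month are not recoverable from the old URL and must be supplied from elsewhere, and the slash-separated "YYYY/MM" segment is an assumption on my part about the new scheme (the description above says "commas"), so treat this as a starting point, not a verified converter.

```python
def rewrite_spectator_url(old_url: str, year: int, month: int) -> str:
    """Apply the steps described above: strip the trailing '-.thtml',
    then insert the zero-padded year/month before the last URL section.
    The separator and exact new scheme are unverified assumptions."""
    # Step 2: drop the "-.thtml" suffix if present
    if old_url.endswith("-.thtml"):
        old_url = old_url[: -len("-.thtml")]
    # Step 3: insert the zero-padded year/month before the final segment
    head, _, slug = old_url.rpartition("/")
    return f"{head}/{year:04d}/{month:02d}/{slug}"
```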

The idea is to automate the conversion since it's 1000+ links. A bot wouldn't know which month. In the second example it is "letters-201" vs "letters", thus "-210" is also an unknown. If there were a way to find the redirected URL, such as through archive.org or some other means. Or volunteers to manually fix them. -- GreenC 20:14, 7 November 2018 (UTC)[reply]
One could also just write an e-mail to spectator.co.uk with the old urls and kindly ask them to give a mapping to the new urls. Then a bot could replace those links. -- seth (talk) 11:13, 10 November 2018 (UTC)[reply]
@Lustiger seth:. Do you want to give it a try? Narrowed it down to 552 dead links (User:GreenC/data/spectator). I've tried asking these things before and never had success so maybe someone else would have better luck. If they provide a mapping, I'll make the changes. -- GreenC 17:42, 10 November 2018 (UTC)[reply]
E-mail with links to special:linksearch/http://www.spectator.co.uk, User:GreenC/data/spectator, and to this discussion sent. If I get an answer, where shall I place the list? -- seth (talk) 10:04, 11 November 2018 (UTC)[reply]
Thanks! In data/spectator -- GreenC 16:30, 11 November 2018 (UTC)[reply]
Hi!
2018-11-11 10:02: mail sent to spectator digitalhelp@... (probably this was the wrong address, because they only look after subscriptions).
2018-11-11 10:12: first (automatic) answer: "You will receive a reply from one of our customer service team members within 48hrs."
2018-11-13 01:48: second answer: "I am awaiting further information regarding your enquiry and I will contact you as soon as this information has been received."
2018-11-14 01:42: third answer: "We would request you to email editor@... for further information." (deleted e-mail address)
2018-11-14 19:54: second try (mailed to editor@...)
2018-11-14 19:54: fourth answer: "I'm afraid that due to the number of them received at this address it’s not possible to send a personal response to each one. To help your email find its way to the right home and to answer some questions:
  • If you are writing a letter for publication, please send it to letters@....
  • Please send article pitches and submissions to pitches@....
  • If you are having problems with your subscription, please email customerhelp@... [...]. For problems with the website, our digital paywall, our apps or the Kindle edition of the magazine, our FAQ page is here – and if that doesn’t answer your question please email digital@....
  • If the matter is urgent, please call our switchboard on 020 [...]."
2018-11-14 20:06: third try (mailed to digital@...)
iow: this may take some time. -- seth (talk) 20:10, 14 November 2018 (UTC)[reply]
Well, I don't think I'll get an answer. :-( -- seth (talk) 23:37, 25 December 2018 (UTC)[reply]

College football schedule conversions

I'd like to have a bot update the templates used to render college football schedule tables. Three old templates—Template:CFB Schedule Start, Template:CFB Schedule Entry, and Template:CFB Schedule End—which were developed in 2006, are to be replaced with two newer, module-based templates—Template:CFB schedule and Template:CFB schedule entry. The old templates remain on nearly 12,000 articles. The new templates were coded by User:Frietjes, who has also developed a process for converting the old templates to the new:

add {{subst:#invoke:CFB schedule/convert|subst| at the top of the table, before the {{CFB Schedule Start}} and }} to the bottom after the {{CFB Schedule End}}.
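Applied mechanically to an article's wikitext, the wrapping step might be sketched like this. It is a rough regex-based illustration; real articles vary in spacing and parameters, so a production bot would need more careful matching than shown here.

```python
import re

def wrap_cfb_tables(wikitext: str) -> str:
    """Wrap each old-style schedule table in the converter invocation,
    per the process described above. Assumes the table opens with
    {{CFB Schedule Start...}} and closes with {{CFB Schedule End}}."""
    # Insert the converter invocation before the opening template...
    wikitext = re.sub(
        r"(\{\{\s*CFB Schedule Start)",
        r"{{subst:#invoke:CFB schedule/convert|subst|\1",
        wikitext,
    )
    # ...and close the invocation after the closing template.
    wikitext = re.sub(
        r"(\{\{\s*CFB Schedule End\s*\}\})",
        r"\1}}",
        wikitext,
    )
    return wikitext
```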

The development and use of these new templates has been much discussed in the last year at Wikipedia talk:WikiProject College football and has a consensus of support.

Thanks, Jweiss11 (talk) 00:32, 8 November 2018 (UTC)[reply]

We also need to add the optional "Source" column that was approved as part of the new template. Cbl62 (talk) 03:13, 19 November 2018 (UTC)[reply]
@Cbl62: This is irrelevant to the conversion process at stake here. Template:CFB schedule entry services the source column, although the template documentation does not reflect that. Jweiss11 (talk) 04:52, 19 November 2018 (UTC)[reply]
While we're doing the conversion, it makes sense to get everything working properly. Others have noted that there is a glitch in using the "Source" column in the named parameters version of the template. Whether the glitch is in the documentation or in core functionality, it should be remedied so that the "Source" column can be added. Cbl62 (talk) 10:38, 19 November 2018 (UTC)[reply]
@Cbl62: What is the glitch with the "Source" column in the named parameters version of the template? You can describe it or show an example? Jweiss11 (talk) 14:38, 19 November 2018 (UTC)[reply]
The "glitch" is that people have expressed a concern that they have difficulty adding a "Source" column to the new named parameters chart. See discussion here: Wikipedia talk:WikiProject College football#2018 Nebraska score links. I have yet to see a version of the new named parameters chart that includes a source column. Can you show an example where it has been done? And is there a reason it is not included in the template documentation? (By way of contrast, in the unnamed parameters version, the Source column is included in the template documentation as an optional add-on, see, e.g., 1921 New Mexico Lobos football team.) Cbl62 (talk) 15:00, 19 November 2018 (UTC) See also 2018 Michigan Wolverines football team where sources are presented in each line of the template but no "Source" column has been generated. Cbl62 (talk) 15:06, 19 November 2018 (UTC)[reply]
This is not a glitch. It is simply user habit. The person to ask about the template documentation is User:Frietjes, as she is the editor who wrote it. The inline citations at 2018 Michigan Wolverines football team could be easily moved to the source column if one so wanted. Jweiss11 (talk) 16:00, 19 November 2018 (UTC)[reply]
the source parameter is demonstrated in example 3. feel free to add this to the blank example at the top of the documentation, along with other missing parameters, like overtime, etc. Frietjes (talk) 16:11, 19 November 2018 (UTC)[reply]
Excellent. Thanks, Frietjes! Cbl62 (talk) 22:22, 19 November 2018 (UTC)[reply]

@BU Rob13: would you be available to take on this bot request? Thanks, Jweiss11 (talk) 03:16, 4 December 2018 (UTC)[reply]

@Jweiss11: Sorry, but not really. I'm about to take an extended break from Wikipedia, most likely. ~ Rob13Talk 04:00, 4 December 2018 (UTC)[reply]
I'm only skimming this but it might be a good candidate for PrimeBOT's Task 30. Primefac (talk) 15:21, 4 December 2018 (UTC)[reply]
@Primefac: Could you actually leave this for now? I've been trying to get a technically-minded friend interested in Wikipedia for a bit, and this may interest her. I'm reaching out to see if she'd be interested in jumping in and creating a bot. ~ Rob13Talk 23:44, 7 December 2018 (UTC)[reply]
Sure thing. Primefac (talk) 16:23, 9 December 2018 (UTC)[reply]
@BU Rob13: any word from your friend about whether she is interested in taking this on? Thanks and happy holidays, Jweiss11 (talk) 21:15, 25 December 2018 (UTC)[reply]
Sadly, a non-starter. She took a look around and ultimately decided she wasn't interested in the culture after seeing a talk page discussion gone bad. Which is fair, to be honest. Primefac, all yours. Thanks for holding off. ~ Rob13Talk 02:20, 26 December 2018 (UTC)[reply]
@Primefac: are you still available to take this on? Jweiss11 (talk) 04:32, 8 January 2019 (UTC)[reply]
Sorry for the late reply; yes, I should be able to do this. Primefac (talk) 11:09, 23 January 2019 (UTC)[reply]

 Working. Primefac (talk) 15:30, 27 January 2019 (UTC)[reply]

 Done. There are still some user-space transclusions of {{CFB Schedule Start}} et al, but barring any accidental miscues from GIGO issues it should be finished. Primefac (talk) 04:15, 28 January 2019 (UTC)[reply]

Unreferenced articles

Could a bot please identify articles that are not currently tagged as unreferenced but seem not to have references? Thanks for looking at this, Boleyn (talk) 19:12, 10 November 2018 (UTC)[reply]

Why do I get the feeling that this might be WP:CONTEXTBOT? --Redrose64 🌹 (talk) 23:42, 11 November 2018 (UTC)[reply]
Hi, Redrose64, I'm not sure I was clear enough, by identify the articles I meant generate a list of articles, similar to Wikipedia:Mistagged unreferenced articles cleanup. Thanks, Boleyn (talk) 18:18, 12 November 2018 (UTC)[reply]
Boleyn, I like this idea. Will take it up. If/when something is ready I'll post at Wikipedia talk:WikiProject Unreferenced articles or if any questions arise. -- GreenC 05:06, 2 December 2018 (UTC)[reply]
Bot now in beta. Initial test results. Followup at Wikipedia talk:WikiProject Unreferenced articles. -- GreenC 01:22, 17 December 2018 (UTC)[reply]

BRFA filed -- GreenC 04:07, 31 December 2018 (UTC)[reply]

The task is rather simple. Find all pages with Foobar (barfoo). If they redirect to Foobar, tag those with {{R from unnecessary disambiguation}}. This should be case-sensitive (e.g. Foobar (barfoo) → FOOBAR should be left alone).

Could probably be done with AWB to add/streamline other redirect tags if they exist. Headbomb {t · c · p · b} 13:15, 14 November 2018 (UTC)[reply]
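A minimal sketch of the case-sensitivity check described in the request (the function name and shape are mine, not an existing tool; fetching redirect targets from a dump or the API is left out):

```python
import re

def needs_unnecessary_dab_tag(redirect_title: str, target_title: str) -> bool:
    """True when a redirect of the form 'Foobar (barfoo)' points at
    exactly 'Foobar', compared case-sensitively."""
    match = re.fullmatch(r"(.+?) \(([^()]+)\)", redirect_title)
    if not match:
        return False  # no parenthetical disambiguator in the title
    base = match.group(1)
    # Case-sensitive comparison: Foobar (barfoo) → FOOBAR is left alone.
    return target_title == base
```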

How would you find these pages? Via a database dump and regular expressions I assume? --TheSandDoctor Talk 07:24, 1 December 2018 (UTC)[reply]
@TheSandDoctor: via a dump scan yes. Or some kind of 'intitle' search. Headbomb {t · c · p · b} 05:08, 6 December 2018 (UTC)[reply]
@TheSandDoctor: any updates on this? Headbomb {t · c · p · b} 20:20, 19 December 2018 (UTC)[reply]
@Headbomb: No, sorry. I had forgotten about this. You are only anticipating pages like your Footer example above, right? What I mean is: Joe (some text) redirecting to Joe would be tagged with {{R from unnecessary disambiguation}}? Or am I getting this completely wrong/missing something? --TheSandDoctor Talk 20:33, 19 December 2018 (UTC)[reply]
Not sure what you mean by my Footer example, but basically if you have Foobar (whatever)Foobar, then tag Foobar (whatever) with {{R from unnecessary disambiguation}}. Nothing else. Headbomb {t · c · p · b} 21:26, 19 December 2018 (UTC)[reply]
@Headbomb: That would've been autocorrect being sneaky. Foobar is what I meant (did it again writing this) and that does clarify it for me. I will work on this tonight or tomorrow. --TheSandDoctor Talk 00:13, 20 December 2018 (UTC)[reply]

@TheSandDoctor: any updates on this? Headbomb {t · c · p · b} 08:24, 13 January 2019 (UTC)[reply]

Bot to improve names of media sources in references

Many references on Wikipedia point to large media organizations such as the New York Times. However, the names are often abbreviated, not italicized, and/or missing wikilinks to the media organization. I'd like to propose a bot that could go to an article like this one and automatically replace "NY Times" with "New York Times". Other large media organizations (e.g. BBC, Washington Post, and so on) could fairly easily be added, I imagine. - Sdkb (talk) 04:43, 19 November 2018 (UTC)[reply]

  • I would be wary of WP:CONTEXTBOT. For instance, NYT can refer to a supplement of the Helsingin Sanomat#Format (in addition to the New York Times), and may be the main use on Finland-related pages. TigraanClick here to contact me 13:40, 20 November 2018 (UTC)[reply]
    • @Tigraan:That's a good point. I think it'd be fairly easy to work around that sort of issue, though — before having any bot make any change to a reference, have it check that the URL goes to the expected website. So in the case of the New York Times, if a reference with "NYT" didn't also contain the URL nytimes.com, it wouldn't make the replacement. There might still be some limitations, but given that the bot is already operating only within the limited domain of a specific field of the citation template, I think there's a fairly low risk that it'd make errors. - Sdkb (talk) 10:52, 25 November 2018 (UTC)[reply]
  • I should add that part of the reason I think this is important is that, in addition to just standardizing content, it'd allow people to more easily check whether a source used in a reference is likely to be reliable. - Sdkb (talk) 22:01, 25 November 2018 (UTC)[reply]
@Sdkb: This is significantly harder than it seems, as most bots are. Wikipedia is one giant exception - the long tail of unexpected gotchas is very long, particularly on formatting issues. Another problem is agencies (AP, UPI, Reuters). Often the NYT is running an agency story. The cite should use NYT in the |work= and the agency in the |agency=, but often the agency ends up in the |work= field, so the bot couldn't blindly make changes without considerable room for error. I have a sense of what needs to be done: extract every cite on Enwiki with a |url= containing nytimes.com, extract every |work= from those and create a unique list, manually remove from the list anything that shouldn't belong like Reuters etc., then the bot keys off that list before making live changes; it knows what is safe to change (anything in the list). It's just a hell of a job in terms of time and resources considering all the sites to be processed and manual checks involved. See also Wikipedia:Bots/Dictionary#Cosmetic_edit "the term cosmetic edit is often used to encompass all edits of such little value that the community deems them to not be worth making in bulk" .. this is probably a borderline case, though I have no opinion which side of the border it falls; other people might during the BRFA. -- GreenC 16:53, 26 November 2018 (UTC)[reply]
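The survey step GreenC describes (collect every |work= value that co-occurs with a nytimes.com |url=, for manual vetting) might look roughly like this, assuming the citation templates have already been extracted from a dump into strings:

```python
import re
from collections import Counter

def work_values(cites: list[str]) -> Counter:
    """Count |work= values among citation strings whose |url= points at
    nytimes.com. The resulting unique list would then be vetted by hand
    to drop entries like agencies that shouldn't belong."""
    seen: Counter = Counter()
    for cite in cites:
        if "nytimes.com" not in cite:
            continue
        m = re.search(r"\|\s*work\s*=\s*([^|}]+)", cite)
        if m:
            seen[m.group(1).strip()] += 1
    return seen
```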
@GreenC: Thanks for the thought you're putting into considering this idea; I appreciate it. One way the bot could work to avoid that issue is to not key off of URLs, but rather off of the abbreviations. As in, it'd be triggered by the "NYT" in either the work or agency field, and then use the URL just as a confirmation to double check. That way, errors users have made in the citation fields would remain, but at least the format would be improved and no new errors would be introduced. - Sdkb (talk) 08:17, 27 November 2018 (UTC)[reply]
Right, that's basically what I was saying also. But getting all the possible abbreviations requires scanning the system, because the variety of abbreviations is unknowable ahead of time. Unless you pick a few that might be common, but that would miss a lot. -- GreenC 14:54, 27 November 2018 (UTC)[reply]
Well, for NYT at the least, citations with a |url=https://www.nytimes.com/... could be safely assumed to be referring to the New York Times. Headbomb {t · c · p · b} 01:20, 8 December 2018 (UTC)[reply]
Yeah, I'm not too worried about comprehensiveness for now; I'd mainly just like to see the bot get off the ground and able to handle the two or three most common abbreviation for maybe half a dozen really big newspapers. From there, I imagine, a framework will be in place that'd then allow the bot to expand to other papers or abbreviations over time. - Sdkb (talk) 07:01, 12 December 2018 (UTC)[reply]
Conversation here seems to have died down. Is there anything I can do to move the proposal forward? - Sdkb (talk) 21:42, 14 January 2019 (UTC)[reply]
I am not against this idea totally but the bot would have to be a very good one for this to be a net positive and not end up creating more work. Emir of Wikipedia (talk) 22:18, 14 January 2019 (UTC)[reply]
@Sdkb: you could build a list of unambiguous cases. E.g. |work/journal/magazine/newspaper/website=NYT combined with |url=https://www.nytimes.com/.... Short of that, it's too much of a WP:CONTEXTBOT. I'll also point out that NY Times isn't exactly obscure/ambiguous either.Headbomb {t · c · p · b} 17:47, 27 January 2019 (UTC)[reply]
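One way to encode the "unambiguous cases" idea above: normalize a |work= value only when both the abbreviation and the |url= domain agree. The mapping here is illustrative, not a vetted list, and the wikilinked replacements are my assumption about the desired output format.

```python
# Illustrative (abbreviation, required URL domain) -> normalized name pairs.
NORMALIZATIONS = {
    ("NYT", "nytimes.com"): "[[The New York Times]]",
    ("NY Times", "nytimes.com"): "[[The New York Times]]",
    ("WSJ", "wsj.com"): "[[The Wall Street Journal]]",
    ("WaPo", "washingtonpost.com"): "[[The Washington Post]]",
}

def normalize_work(work: str, url: str) -> str:
    """Return the normalized work name only when the abbreviation and the
    cite's URL domain both match; otherwise leave the value untouched."""
    for (abbrev, domain), full in NORMALIZATIONS.items():
        if work == abbrev and domain in url:
            return full
    return work  # ambiguous or unmatched cases are left alone
```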
Okay, here's an initial list:

Sdkb (talk) 03:54, 1 February 2019 (UTC)[reply]

Changing New York Times to The New York Times would be great. I have seen people going through AWB runs doing it, but seems like a waste of human time. Kees08 (Talk) 23:32, 2 February 2019 (UTC)[reply]

@Kees08: Thanks; I added in those cases. - Sdkb (talk) 01:19, 3 February 2019 (UTC)[reply]
Not really sure changing Foobar to The Foobar is desired in many cases. WP:CITEVAR will certainly apply to a few of those. For NYT/NY Times, WaPo/Wa Po, WSJ, LA Times/L.A. Times, are those guaranteed to refer to a version of these journals that was actually called by the full name? Meaning, was there some point in the LA Times's history where "LA Times" or some such was featured on the masthead of the publication, in either print or web form? If so, that's a bad bot task; if not, then there's likely no issue with it. Headbomb {t · c · p · b} 01:54, 3 February 2019 (UTC)[reply]
For the "the" publications, it's part of their name, so referring to just "Foobar" is incorrect usage. (It's admittedly a nitpicky correction, but one we may as well make while we're in the process of making what I'd consider more important improvements, namely adding the wikilinks to help readers more easily verify the reliability of a source.) Regarding the question of whether any of those publications ever used the abbreviated name as a formal name for something, I'd doubt it, as it'd be very confusing, but I'm not fully sure how to check that by Googling. - Sdkb (talk) 21:04, 3 February 2019 (UTC)[reply]
The omission of 'the' is a legitimate stylistic variation. And even if 'N.Y. Times' never appeared on the masthead, the expansion of abbreviations (e.g. N.Y. Times / L.A. Times) could also be a legitimate stylistic variation. The acronyms (e.g. NYT/WSJ) are much safer to expand though. Headbomb {t · c · p · b} 21:41, 3 February 2019 (UTC)[reply]
It is a change I have had to do many times since it is brought up in reviews (FAC usually I think). It would be nice if we could find parameters to make it possible. Going by the article, since December 1, 1896, it has been referred to as The New York Times. The ranges are:
  • September 18, 1851–September 13, 1857 New-York Daily Times
  • September 14, 1857–November 30, 1896 The New-York Times
  • December 1, 1896–current The New York Times
New York Times has never been the title of the newspaper, and we could use date ranges to verify we do not hit the edge cases of pre-December 1, 1896 The New York Times articles. There is The New York Times International Edition, but it seems like it has a different base-URL than nytimes.com. I can go through the effort to verify the names of the other publications throughout the years, but do you agree with my assessment of The New York Times? Kees08 (Talk) 01:51, 4 February 2019 (UTC)[reply]
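Using the ranges listed above, a date-to-masthead-name lookup is straightforward. This is only a sketch of the idea, subject to the verification Kees08 proposes for the other publications:

```python
from datetime import date

def nyt_name(pub_date: date) -> str:
    """Map a publication date to the masthead name in use at that time,
    per the date ranges listed above."""
    if pub_date < date(1851, 9, 18):
        raise ValueError("date predates the newspaper's first issue")
    if pub_date <= date(1857, 9, 13):
        return "New-York Daily Times"
    if pub_date <= date(1896, 11, 30):
        return "The New-York Times"
    return "The New York Times"
```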

Remind me bot

Hi, it would be wonderful if we had a bot that looked for uses of a template called {{remindme}} or something similar (with a time parameter, such as 12 hours, 1 year, etc. etc.) and duly dropped a message on your own talk page at the designated time with a link to the page on which you put the remindme tag. It would only send such reminders to the person who posted the edit containing the template in the first place. Kind of like the functionality of such bots on reddit, I guess. Fish+Karate 13:11, 20 November 2018 (UTC)[reply]

  • That looks like it should go through BRFA smoothly. There seems to be some use case. It looks simple enough, so I would volunteer to code it, but the only way I can imagine to make it work is by monitoring Special:RecentChanges (or the API equivalent) for additions of the template, and that looks extremely inefficient; beards grayer than mine might have a better idea. TigraanClick here to contact me 13:33, 20 November 2018 (UTC)[reply]
Monitor the backlinks (whatlinkshere) for the template, maintain a database of diffs to that backlinks list each time the bot runs via cron. New additions will show up. I wrote a ready-made tool Backlinks Watchlist. -- GreenC 14:20, 20 November 2018 (UTC)[reply]
Would it even need to maintain a database? Just go hourly (or some period) through a populated category and if it is time, notify and then change the template to {{remind me|notified = yes}} to disable the category (and also change the text of the template to something like "This user was reminded of this discussion on Fooember 24, 2078."). Galobtter (pingó mió) 14:35, 20 November 2018 (UTC)[reply]
Backlinks or category are pretty much the same from user and bot PoV, I think (maybe it is a different story on the servers though). Maybe a small advantage to cat, because it can be more easily reviewed by humans.
In either case the point of maintaining the database would be to limit the scans. If the template gets some traction, and a million users each place a thousand reminders asking for a reminder in 3018, scanning every still-active template every time could get inefficient (as the category is populated with lots of reminders that you need to scan every time). For a first version, though, we do not care; if bad stuff happens, it will be easy enough to put a limit on templates left by users (either limit active templates per user, or how far in the future you can set reminders).
If no one else comes around to it, I will try to draft the specs this weekend. Fish and karate, please whip me if you see nothing next Monday. I would ask the bot to do it, but it does not exist yet. The trickiest part will probably be who can ask a reminder for whom (I would probably say that only User:X can ask for a notification to User:X, to avoid abuse of the tool, which then needs a bit of checking of who put the template on the page). TigraanClick here to contact me 17:36, 21 November 2018 (UTC)[reply]
You might be interested in m:Community Wishlist Survey 2019/Notifications/Article reminders. Anomie 03:21, 21 November 2018 (UTC)[reply]
That is interesting, I think the bot I'm envisioning is more general than that, you could place the template anywhere and it'll ping you to go back there after a set time has elapsed (potentially could also put a specific datetime). Tigraan I definitely think only user X could ask for a reminder for user X, otherwise it would be open to abuse. A throttle of no more than Y reminders per day (or Z open reminders overall) may also be a good idea. Fish+Karate 09:20, 22 November 2018 (UTC)[reply]

Basic spec, policy questions to be answered

OK, so the basic use is as follows:

User:Alice places a template (to be created, let's call it {{remind me}}) inside a thread of which they wish to be reminded. The user specifies the date/time at which the reminder should be given as an argument of the template (either as "on Monday 7th" or "in three days" - syntax to be discussed later). At the given date, a bot "notifies" Alice.
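For the "in three days" style of argument, a first-pass parser might look like this. Everything here is illustrative: the {{remind me}} template name and its syntax are still to be decided, and only numeric durations are handled (absolute dates like "on Monday 7th" would need more work).

```python
import re
from datetime import datetime, timedelta

def parse_reminder(arg: str, now: datetime) -> datetime:
    """Parse a duration such as 'in 3 days' or 'in 12 hours' relative
    to `now`, returning the time at which the reminder is due."""
    match = re.fullmatch(r"in (\d+) (hours?|days?)", arg.strip())
    if not match:
        raise ValueError(f"unsupported reminder syntax: {arg!r}")
    amount = int(match.group(1))
    unit = "hours" if match.group(2).startswith("hour") else "days"
    return now + timedelta(**{unit: amount})
```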

On a policy level, I see a few questions:

  1. What kind of notification?
  2. Can Alice ask for Bob to be notified?
  3. Should we rate limit (and if so how?)
  4. Where, if anywhere, should we get consensus for all that?

Depending on the choice for each of those, this will change the amount of technical work needed, but as far as I can tell, those questions entirely define the next steps (coding/testing/approval request etc.). Please discuss here if I missed something, but below to answer the questions. TigraanClick here to contact me 13:36, 25 November 2018 (UTC)[reply]

Discussion on the spec

I made a separate section for this because I am almost sure of the questions that need asking but less sure of the answers they should get. What follows is my $0.02 for each:

  1. The simplest would be a user talk page message or a ping from the page from where the notification originates, but maybe WP:ECHO can allow better stuff. A user talk message is easy to code (read: I know how to do it), but it might lead to some clutter.
  2. After thinking it over, it is not obviously a bad thing for this to be technically feasible (we can certainly decide it is against policy or restrict the conditions; the question is whether it should be technically impossible). Alice has to post something to cause the bot to annoy Bob, so it is fairly similar to pings, which no one would call to terminate because of their potential for abuse. On the other hand, surely it would be OK and have some use case for one user to notify their own sockpuppet (e.g. I notify myself from my bot account). The only real problems I can imagine for cross-user postings are "privilege escalation" stuff:
    1. A user could cause the bot to notify users who have a protected talk page (at a protection level that the bot can access but not the user)
    2. A user could place many such templates in a single edit, causing multiple notices to be sent in a short time (while not being caught by rate limits on the servers)
  3. I do not think there is any legitimate-use reason for restricting the number of notifications. There might be a technical reason to avoid having large amounts of pending notifications, depending on how the bot works (see previous discussion), or counter-vandalism reasons (e.g. allowing not only self-notifications, but capping at X the number of pending notifications originating from a single user at a given time, so that a spam-notifier vandal cannot get far). If we do not allow cross-user notifications, I think we can go without rate limiting until performance becomes an issue.
  4. The bot request page is not watched a lot, but I am not sure where else it can go. Maybe worth cross-posting to WP:VPP?

(Ping: Fish and karate) TigraanClick here to contact me 13:36, 25 November 2018 (UTC)[reply]

Answering the set of questions (this is as I see the bot working - and note when it comes to this kind of thing I'm a vision man, not details!)
  1. What kind of notification?
    A message on your talk page ("Hi (user name), here's the reminder you asked for - {link}"). People could, I guess, if they prefer, have the bot post to a defined subpage (I would see this as a "phase 2" development). A small, unobtrusive ping might also work, but that requires an edit, and a busy thread that lots of people are interested in could end up peppered with these pings, which would be unpopular. Keeping it to the user's own talk page is less imposing on other users.
  2. Can Alice ask for Bob to be notified?
    No. Alice can only ask for Alice to be notified. Let's keep it simple, at least initially.
  3. Should we rate limit (and if so how?)
    Initially I think it's not a terrible idea to ensure the capability is there in the code to throttle it in case it starts causing (as yet unforeseen) issues. The bot can just refuse to provide more than X reminders a day if issues arise.
  4. Where, if anywhere, should we get consensus for all that?
    Wikipedia:Bots/Requests for approval to sign off the bot, presumably. I think WP:VPP for suggestions would also be good.
A note to say thank you, Tigraan; I appreciate the thought and effort you're putting into this. Fish+Karate 09:23, 26 November 2018 (UTC)[reply]
@Fish and karate: About 4: if we go to BRFA with the whole agreement of two of us, they are going to tell us to get consensus that the task is useful somewhere else. Per the guide at WP:BRFA: If your task could be controversial (e.g. (...) most bots posting messages on user talk pages), seek consensus for the task in the appropriate forums. (...) Link to this discussion from your request for approval. Again, VPP is the catch-all, but that's because I have no other idea. Maybe a link from the talk page of WP:PING as well, since the functionality is closely related.
(Oh, and save your thanks for after the bot sees the light of day.) TigraanClick here to contact me 15:31, 26 November 2018 (UTC)[reply]
Tigraan, I would absolutely ask for a well-advertised discussion with consensus for this bot, if I came across the BRFA. I think it's an idea I would use (I do on reddit!), but I could see it becoming unintentionally disruptive (Let's say - 50 people use the "remindme" template on a popular arbcom case or RFA). I'm not sure what the best way to address that would be. SQLQuery me! 22:37, 3 December 2018 (UTC)[reply]
I see the point, perhaps this will remain a pipe dream. If someone can think of a way to work around that, that would be welcome. Fish+Karate 15:07, 6 December 2018 (UTC)[reply]
Since m:Community Wishlist Survey 2019/Notifications/Article reminders was #8 on the wishlist, hopefully it gets implemented in a way that works more like the watchlist: click a button, and it sends you a notification when the time is up without having to put a template on the page for everyone else to be annoyed by. Anomie 02:03, 4 December 2018 (UTC)[reply]

Hi. MOS:ACCESS#Text / MOS:FONTSIZE are clear. We are to "avoid using smaller font sizes in elements that already use a smaller font size, such as infoboxes, navboxes and reference sections." However, many infoboxes use {{small}} or the HTML tag, especially around degrees earned (here's one example I corrected yesterday). I used AWB to remove small font from many U.S. politician infoboxes of presidents, senators, and governors, but there are so many more articles that have them. Here's an example for a TV station. I've noticed many movies and TV shows have small text in the infobox as well. Since I cannot calculate how many articles violate this particular rule of MOS, I would like someone to write a bot to remove small text from infoboxes of all kinds. – Muboshgu (talk) 22:04, 20 December 2018 (UTC)[reply]

At least on my screen, your edit had no effect, because as far as I know, there is some sort of CSS style that limits infobox font size to a minimum of 85%. I am pretty sure I just saw that described the other day, but my searches for it have turned up nothing. Maybe someone like TheDJ would know.
If I am correct, that means that edits to remove small templates and tags from infoboxes would be cosmetic edits, which are generally frowned upon. However, there are a heck of a lot of unclosed <small>...</small> tags within infoboxes, along with small tags wrapping multiple lines, both of which cause Linter errors, so it may be possible to get a bot approved to remove tags as long as fixing Linter errors is in the bot's scope. I welcome corrections on the four things I got wrong in these four sentences. – Jonesey95 (talk) 23:58, 20 December 2018 (UTC)[reply]
It's not "cosmetic". It's an accessibility issue. In this version, the BS, MS, and JD in the infobox are smaller than 85%. – Muboshgu (talk) 05:47, 21 December 2018 (UTC)[reply]
FWIW, Firefox's Inspector tells me that "BS" in that version is exactly 85%. – Jonesey95 (talk) 10:29, 21 December 2018 (UTC)[reply]
Odd. That was not the assessment of User:Dreamy Jazz. [1] – Muboshgu (talk) 20:42, 22 December 2018 (UTC)[reply]
Fascinating. I just looked at the two revisions of Brian Bosma in Chrome while not logged in, and I definitely see a size difference in the "BS" and "JD" characters. So these would not be cosmetic edits after all, at least for some viewers using some browsers. (I have struck some of my previous comments.) – Jonesey95 (talk) 21:59, 22 December 2018 (UTC)[reply]
P.S. I found the reference to the small template sizing text at 85% at Template:Small. It looks like I may have misinterpreted that note. – Jonesey95 (talk) 01:42, 23 December 2018 (UTC)[reply]

@Jonesey95 and Muboshgu: Hello. Although the 85% font-size is defined, the computed value of the font-size is below 11.9px (it is 10.4667px). This is because font-size percentages work relative to the parent container, not the document (see 1 under percentages). Since the infobox has already decreased the font-size to 88% of the document, the font-size computed from the {{small}} tag ends up at 74.8% of the document's size (0.88 × 0.85 = 0.748). This is the case in Firefox, Chrome, Edge (10.4px), Opera and Internet Explorer. This behaviour is standard and so will be experienced in all browsers. Dreamy Jazz 🎷 talk to me | my contributions 10:46, 23 December 2018 (UTC)[reply]
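The compounding Dreamy Jazz describes can be sketched in a few lines of Python. The 14px document base size used here is an assumption for illustration (actual skins may use a slightly different base, which is why the exact pixel figures above differ a little); the point is that the percentages multiply:

```python
# Nested CSS percentage font sizes compound: each element's computed size
# is its percentage of the PARENT's computed size, not of the document's.
def computed_px(base_px, *percentages):
    """Apply a chain of percentage font-size declarations to a base size."""
    size = base_px
    for pct in percentages:
        size *= pct / 100
    return size

# Assumed 14px document base, infobox at 88%, {{small}} at 85%:
print(computed_px(14, 88))       # infobox text: ~12.32px
print(computed_px(14, 88, 85))   # {{small}} inside infobox: ~10.47px
```

With that assumed base, the {{small}}-in-infobox text lands well below the ~11.9px that a plain 85% of the document would give, matching the behaviour observed across browsers.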

Yes, here's a demo of what happens when percentages get enclosed by other percentages: Text Text Text Text Text . That goes to five levels, each being 95% of the enclosing element. --Redrose64 🌹 (talk) 12:42, 23 December 2018 (UTC)[reply]
That is helpful. I discovered that I have set my Firefox preferences to prevent the font size from going below 11 pt, which enforces MOS for me. But in Chrome, which I have left unconfigured, that text gets smaller. By all means, let's remove instances of <small>...</small> and {{small}} (and its size-reducing siblings) from infoboxes, both in Template space and in article space. – Jonesey95 (talk) 14:31, 23 December 2018 (UTC)[reply]
Yes, let's. Thanks for that clarification Jonesey95. – Muboshgu (talk) 15:46, 23 December 2018 (UTC)[reply]
I have been using AWB to help with this issue too. <small> and </small> can be removed with a simple find and replace, but the template is better dealt with using regex. --Emir of Wikipedia (talk) 21:08, 3 February 2019 (UTC)[reply]
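For the template case, a regex along these lines could work. This is a hypothetical sketch, not Emir's actual AWB rules, and it assumes no nested templates inside {{small|...}}:

```python
import re

def remove_small(wikitext):
    """Unwrap <small>...</small> tags and {{small|...}} templates,
    keeping the enclosed text. Also catches stray unpaired tags.
    Simplified: does not handle nested templates inside {{small|...}}."""
    # HTML tags: drop the tags themselves, keep the content between them.
    wikitext = re.sub(r'</?small\s*>', '', wikitext, flags=re.IGNORECASE)
    # {{small|text}}: keep only the text.
    wikitext = re.sub(r'\{\{\s*small\s*\|([^{}]*)\}\}', r'\1',
                      wikitext, flags=re.IGNORECASE)
    return wikitext

print(remove_small("<small>BS</small> and {{small|JD}}"))  # BS and JD
```

Handling the unclosed and multi-line <small> tags Jonesey95 mentions is exactly why the tag regex above does not require the tags to be paired.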
Is there a category and/or method of easily listing these questionable pages? Primefac (talk) 15:44, 10 February 2019 (UTC)[reply]
I think that Special:WhatLinksHere/Template:Small hiding links and redirects but showing transclusions might find what you want but not in a convenient list or category. When I was doing it in AWB I was just loading from the birth year categories. Emir of Wikipedia (talk) 15:58, 17 February 2019 (UTC)[reply]

Section sizes

Please can someone add {{Section sizes}} to the talk pages of ~6300 articles that are longer than 150,000 bytes (per Special:LongPages), like in this edit?

The location is not critical, but I would suggest giving preference to putting it immediately after the last WikiProject template, where possible. Omit pages that already have the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:59, 29 December 2018 (UTC)[reply]

A reasonable request, but I think it might need some sort of consensus to implement. Is there a WikiProject interested in using this (very-recently-created) template in order to improve Wikipedia? Primefac (talk) 15:13, 30 December 2018 (UTC)[reply]
Wikipedia:Village_pump_(technical)#Analysing_long_articles (started by Andy). There was another thread on long articles recently but it must be archived as I can't find it, it was a call to arms on how to deal with breaking them up. -- GreenC 15:23, 30 December 2018 (UTC)[reply]
Cool. Primefac (talk) 15:42, 30 December 2018 (UTC)[reply]
Created a Village Pump (proposal) at Primefac's request for more discussion. -- GreenC 19:26, 3 January 2019 (UTC)[reply]

Check 5.7 million mainspace talk pages for sections that would benefit from a {{reflist-talk}}.

Example edit.

Scope: for each talk page, extract each level-2 section. For each section, check for the existence of reference tags, i.e. <ref></ref>. If they exist, check for the existence of {{reflist-talk}} or <references/>. If neither exists, add {{reflist-talk}} at the end of the section (optionally in a level-3 subsection called "References").

-- GreenC 16:27, 1 January 2019 (UTC)[reply]
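That scope can be sketched in Python roughly as follows. This is an illustration of the steps, not GreenC's actual bot; the section-splitting regex is simplified and does not handle HTML comments or <nowiki> blocks:

```python
import re

REFLIST = "{{reflist-talk}}"

def add_reflist_talk(talk_wikitext):
    """Append {{reflist-talk}} to any level-2 section that contains
    <ref> tags but no existing reference list. Sketch only: comments
    and <nowiki> blocks are not accounted for."""
    # Split the page on "== Heading ==" lines, keeping the headings.
    parts = re.split(r'(^==[^=].*?==\s*$)', talk_wikitext, flags=re.MULTILINE)
    out = []
    for part in parts:
        has_ref = re.search(r'<ref[\s>/]', part) is not None
        has_list = ('{{reflist-talk' in part.lower()
                    or re.search(r'<references\s*/?>', part) is not None)
        if has_ref and not has_list:
            part = part.rstrip('\n') + '\n' + REFLIST + '\n'
        out.append(part)
    return ''.join(out)
```

A real run would also need to respect the determinate HTML check GreenC mentions just below, so that commented-out refs do not trigger an edit.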

A more determinate method is to search the HTML for <ol class="references"> - this will always exist if there is <ref></ref> somewhere in the page, regardless of the existence of {{reflist-talk}} or <references/>, and it will account for things like <!-- <ref></ref> --> -- GreenC 16:36, 1 January 2019 (UTC)[reply]
@GreenC: Not true: it's also present in pages with an autogenerated reflist, such as the previous version. --Redrose64 🌹 (talk) 20:08, 1 January 2019 (UTC)[reply]
Yeah I know. It will always exist if there is a ref, regardless of the existence of <references/> or its equiv. -- GreenC 20:16, 1 January 2019 (UTC)[reply]

I ran a script. In 2000 Talk pages it found 11 cases:

Extrapolated it would be about 29,000 pages are like this. -- GreenC 19:43, 1 January 2019 (UTC)[reply]

BRFA filed -- GreenC 20:02, 1 January 2019 (UTC)[reply]

Y Done -- GreenC 07:19, 11 February 2019 (UTC)[reply]

WikiProject Soil Tagging

The request is to have {{WikiProject Soil}} added to the article talk pages in 39 categories. Project notification posted. Much appreciated:

requested: -- Paleorthid (talk) 23:10, 6 January 2019 (UTC)[reply]

@Paleorthid:  Doing... --DannyS712 (talk) 02:31, 8 January 2019 (UTC)[reply]
@Paleorthid: See BRFA filed --DannyS712 (talk) 02:35, 8 January 2019 (UTC) (change to template 01:23, 10 January 2019 (UTC))[reply]

Auto-archive IP warnings

I imagine it's fairly confusing for IP users to have to scroll through lots of old warnings from previous users of their IP before getting to their actual message. We have Template:Old IP warnings top (and its partner), but it's rarely used—thoughts on writing a bot to automatically apply it to everything more than a yearish ago? Gaelan 💬✏️ 16:21, 10 January 2019 (UTC)[reply]

Technically feasible and is a good idea, IMO. Needs wider community input beyond BOTREQ. -- GreenC 17:09, 10 January 2019 (UTC)[reply]
Brought it to WP:VPR. Gaelan 💬✏️ 19:50, 11 January 2019 (UTC)[reply]

Remove living-yes, etc from talkpage of articles listed at Wikipedia:Database reports/Potential biographies of dead people (3)

Hi bot people. I was wondering whether it might be appropriate/worthwhile/a good idea to get a bot to remove "living=yes", "living=y", "blp=yes", "blp=y", etc. from the talk pages of the articles listed at Wikipedia:Database reports/Potential biographies of dead people (3). I recognize that automating such a process might result in a few errors, but I think that would be a reasonable tradeoff compared to how tedious it would be for humans to check and update all 968 articles in the list one by one. (And hopefully, for those few(?) articles where an error does occur, someone watching the article will fix it.) I spot-checked a random sample of articles in the list, and for every one I checked, it would have been appropriate to remove the "living=yes", etc. from the talk page, i.e. the article had a sourced date of death. To minimize potential errors, I would suggest the bot skips any articles which cover multiple people, e.g. ones with "and" or "&" in the title and Dionne quintuplets, Clarke brothers, etc. Thoughts? DH85868993 (talk) 12:53, 15 January 2019 (UTC)[reply]
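The parameter removal itself is a one-line substitution. A hypothetical sketch, assuming the parameters appear as plain |living=yes, |living=y, |blp=yes or |blp=y inside the banner templates:

```python
import re

# Matches |living=yes, |living=y, |blp=yes, |blp=y (case-insensitive),
# stopping before the next pipe or closing braces so neighbouring
# parameters are left intact.
LIVING_RE = re.compile(r'\|\s*(?:living|blp)\s*=\s*y(?:es)?\s*(?=[|}])',
                       re.IGNORECASE)

def remove_living_params(talk_wikitext):
    """Strip living/blp=yes parameters from talk-page banner wikitext."""
    return LIVING_RE.sub('', talk_wikitext)
```

The hard part, as noted above, is not the edit but deciding which pages to skip (multi-person articles), which is better done from the database report list than from the wikitext.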

That last might not be easy to bot-automate. Though, if instead of a bot we get a script, it would be possible to quickly deal with any multiples before running it. Adam Cuerden (talk)Has about 8.9% of all FPs 13:03, 15 January 2019 (UTC)[reply]

Redirects to Star Sports

In coming days, I'm going to redirect Star Sports to Fox Sports (Southeast Asian TV network). But some redirects to Star Sports need to be retargetted in advance.

Redirect the following to Fox Sports (Southeast Asian TV network)
Redirect the following to Star Sports (Indian TV network)

(Correction: STAR Sports HD3, Star Sports HD3, STAR Sports HD4 and Star Sports HD4 did exist. JSH-alive/talk/cont/mail 14:12, 21 January 2019 (UTC))[reply]

I don't know what to do with STAR Sports Network and Star Sports Network. Is it the name for Indian channels or Southeast Asian channels? JSH-alive/talk/cont/mail 09:32, 20 January 2019 (UTC)[reply]

Y Done per User_talk:Xqt#Requesting_mass_redirect_fix  @xqt 13:50, 1 February 2019 (UTC)[reply]

NZ heritage site lists

Hi guys, I'd like to create lists of NZ heritage sites. Lists would be very similar to those at German Wikipedia, see List of monuments in New Zealand. The database with the heritage sites is available here: http://www.heritage.org.nz/the-list You can search all sites in a specific region and export CSV.

I'm not technically skilled enough to program a bot that'd help me to do that. Is there anyone keen to help out? List of heritage sites is quite commons practice here, see eg. Listed buildings in Windermere, Cumbria (town). Regards, Podzemnik (talk) 11:34, 22 January 2019 (UTC)[reply]

Well, the CSV contains this header and first record:

RegisterNumber,Name,RegistrationType,RegistrationStatus,DateRegistered,Address,RegisteredLegalDescription,ExtentOfRegistration,LocalAuthorityName,NZAANumbers

660,1YA Radio Station Building (Former),Historic Place Category 1,Listed,1990-02-15,"74 Shortland Street, AUCKLAND","Pt Allots 10‐11 Sec 3 City of Auckland (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District","Extent includes the land described as Pt Allots 10‐11 Sec 3 City of Auckland defined on DP 874 (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District, and the building known as 1YA Radio Station Building (Former) thereon.",Auckland Council (Auckland City Council),[]

The problem will be mapping the "Name" field (eg. "1YA Radio Station Building (Former)") with the Wikipedia article name (Kenneth Myers Centre). There's no bot magic for that. -- GreenC 16:34, 22 January 2019 (UTC)[reply]

Turning a CSV into a table is not that difficult - it can be done with a word processor, like Word: import the CSV, then
  1. Add a "| " to start of row
  2. Change all "," to " || "
  3. Change "end of line" to "end of line" + "|-" + "end of line"
so 
text 1,text 2,text 3,text 4
becomes
| text 1 || text 2 || text 3 || text 4
|-
Then one just needs to add the top and bottom of the table.
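The same conversion can be done in a few lines of Python. Using the csv module rather than a plain find-and-replace on "," matters here, because quoted fields in this dataset contain commas (e.g. the Address column above). A minimal sketch, without captions or sortable classes:

```python
import csv
import io

def csv_to_wikitable(csv_text):
    """Convert CSV text to a basic wikitable. The csv module parses
    quoted fields containing commas, which a naive replace of "," with
    " || " would split incorrectly."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    lines = ['{| class="wikitable"']
    lines.append('! ' + ' !! '.join(rows[0]))  # header row
    for row in rows[1:]:
        lines.append('|-')
        lines.append('| ' + ' || '.join(row))
    lines.append('|}')
    return '\n'.join(lines)
```

This only addresses formatting; the copyright concern and the article-name mapping raised in this thread remain regardless of how the table is built.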
However - note http://www.heritage.org.nz/terms-and-conditions - "None of the content of this website may be reproduced, copied, used, communicated to the public or transmitted without the express written permission of Heritage New Zealand, except for the purposes of private study, research, review or education, as provided for in the New Zealand Copyright Act 1994.". That could be an issue, as there is a lot of text in some columns. Ronhjones  (Talk) 18:06, 22 January 2019 (UTC)[reply]
Right, can't copy-paste content from the web. The challenge is determining which Wikipedia article corresponds to a given CSV record, so you can make a list of Wikipedia articles. -- GreenC 18:40, 22 January 2019 (UTC)[reply]

Alright, thanks for the inputs guys, I'll try to do it myself! Podzemnik (talk) 07:51, 25 January 2019 (UTC)[reply]

Bot to create entry in the (english) Wikipedia Category: Plants described in (year)

Data to be taken from Wikidata to give the year of publication of a taxon and create "Category:Taxa described in ()" within the (English) Wikipedia taxon entry, if a Wikipedia entry has been created. MargaretRDonald (talk) 22:55, 22 January 2019 (UTC)[reply]

Bot to create category "Category:Taxa described by ()"

The bot would use the Wikidata taxon entry to find the author of a taxon, and then use it again to find the corresponding author article and, from that, the appropriate author category. (This will not always work - but will work in a large number of cases. Thus, the English article for "Edward Rudge" corresponds to the category "Category:Taxa named by Edward Rudge", and the simple strategy outlined here would work for Edward Rudge, Stephen Hopper and ....) The category created would be an entry in the article. MargaretRDonald (talk) 23:08, 22 January 2019 (UTC)[reply]

Auto-classifying bot

There is a huge backlog within most Wikipedia Projects of unclassified articles. I've been recently assessing a number of these for the Politics Project, and have noticed a few patterns that I believe could be automated to heavily reduce this backlog.

  • At the moment I believe there is a bot that goes around and updates quality tags if another tag on the project has had its quality increased. However, it appears to only do this if there is currently a quality tag for the project. I believe this can and should be updated to do this regardless of whether there is such a tag for a project. If, for instance, it finds a page like Talk:1842 New York gubernatorial election, it should update the Politics and Election & Referendum templates for that article with the stub class tag, in the process removing that article from the list of articles that the Politics Project needs to work on - and this is not an isolated occurrence. I don't have the numbers, but I have seen this sort of thing numerous times.
  • It is also sometimes possible to discern the importance of an article with excellent accuracy from the assessment of surrounding projects. For instance, in the article Talk:1844 United States presidential election in New York it would be reasonable to take the assessment from the US/Government/PresElections taskforce, if such an assessment is low, and apply it to the politics one, because the Politics project is never going to consider an article that said taskforce considers low importance any higher than that. As such, I am proposing a bot that projects could instruct to duplicate the tag of certain other taskforces or sub-taskforces, up to a certain 'level', to their own taskforce. Alongside the above proposal, this should drastically reduce the backlog across numerous projects that are willing to set up the instruction page for it.

And of course, if we can heavily reduce the backlog like this, we will make attempting the remaining tasks that must be classified by hand less daunting, and thus more likely to be done. It is true that the second part of this proposal will sometimes result in incorrect classification, but the criteria will be up for each taskforce to determine and so I don't believe that risk should prevent this bot being created - and even if they are incorrectly assessed, a few incorrect assessments are better than numerous unassessed articles.

If no one is interested in taking this up then I do intend to get around to it at some point - unless someone is able to explain why it is stupid/unnecessary, though I think the first part of this proposal would be better as a modification to the existing tag-update bot.

-- NoCOBOL (talk) 07:50, 25 January 2019 (UTC)[reply]

I've forgotten just how many times I've had to explain this, but the WikiProject importance ratings are intentionally different. That is because they indicate how important the page is to that specific WikiProject or task force. --Redrose64 🌹 (talk) 11:59, 25 January 2019 (UTC)[reply]
I realize that. However, that doesn't mean they can't sometimes be derived from each other. I'm not suggesting that we set up a bot to duplicate all rankings; I'm suggesting we set up a bot that allows WikiProjects to set conditionals by which their importance ranking can sometimes be derived - I think this misunderstanding is my fault, I didn't explain things well. For instance, per my under-explained example above, the Politics Project could set a conditional where:
  • If the bot finds an article that is tagged as part of the politics project
  • And If that tag does not have an assessed importance
  • And If that article is also tagged by the Presidential Elections subproject of the Governance subproject of the United States Project
  • And If that article is assessed as low importance
  • Then assess the Politics Project Tag for that article as Low Importance
The idea is that sometimes importance will be the same; in this case, I believe it's extremely unlikely that the Politics Project will find something important when the Presidential Elections subproject does not, and from this idea I wish to enable participating projects to take advantage of these patterns when they discern them, and in doing so reduce the extreme backlog that most projects have. -- NoCOBOL (talk) 18:35, 25 January 2019 (UTC)[reply]
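The conditional above could be expressed as data-driven rules that each project maintains. A sketch only; the project and taskforce names here are hypothetical placeholders, not real banner names:

```python
# Importance scale, lowest first.
LEVELS = ["Low", "Mid", "High", "Top"]

# One rule per derivation: if the target banner is unassessed and the
# source banner's importance is at or below max_level, copy it across.
RULES = [
    {"target": "Politics", "source": "US-PresElections", "max_level": "Low"},
]

def derive_importance(assessments, rules=RULES):
    """assessments maps banner name -> importance string or None.
    Returns {banner: derived_importance} for each rule that fires."""
    updates = {}
    for rule in rules:
        src = assessments.get(rule["source"])
        if (assessments.get(rule["target"]) is None and src in LEVELS
                and LEVELS.index(src) <= LEVELS.index(rule["max_level"])):
            updates[rule["target"]] = src
    return updates
```

The max_level guard encodes NoCOBOL's point that the derivation is only safe in one direction: a Low assessment from the narrower taskforce can be copied, but a High one cannot.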

I just found and fixed an article with two Wikipedia links to two articles that were nothing but redirects back to it, and apparently that's all they had ever been. [2] Can you make a bot to check all Wikipedia links that point to pages that are redirects, then check to see if that redirect points back to the page it's coming from, and then remove the brackets around it so it doesn't link there anymore? If the link has a | in it, then keep what's after that and ditch the rest. Dream Focus 16:29, 26 January 2019 (UTC)[reply]

Why is it a crime for an article to link to itself? See for example Promotion (chess)#Promotion to various pieces, where we find the parenthesis
(See [[Promotion (chess)#Promotion to rook or bishop|Underpromotion: Promotion to rook or bishop]] for examples ...
If this link were to be removed, people would need to find their own way to Promotion (chess)#Promotion to rook or bishop. --Redrose64 🌹 (talk) 16:58, 26 January 2019 (UTC)[reply]
Self-redirects are rarely a result of deliberate self-linking, especially without anchor links. --Izno (talk) 17:34, 26 January 2019 (UTC)[reply]

Redrose64 I meant a link to another article that then redirects back to the first article again. Dream Focus 18:18, 26 January 2019 (UTC)[reply]

You mean like a redirect with possibilities? We should definitely not delink those, there is always the possibility that the redirect gets turned into a full article: if this happens, the existing links will then point to the new article. --Redrose64 🌹 (talk) 19:30, 26 January 2019 (UTC)[reply]

Tagging shill journal articles

Adverts pretending to be peer-reviewed papers are cited in thousands, possibly tens of thousands, of Wikipedia articles. Articles in paid supplements to journals are generally not independent sources. See this discussion for details.

Sometimes, the citation contains the abbreviation "Suppl.". In this case, the citation could be bot-tagged with {{Unreliable medical source|sure=no|reason=sponsored supplements generally unreliable per WP:SPONSORED and WP:MEDINDY|date=30 September 2024}}

The "sure=no" parameter will add a question mark to the tag, as, rarely, the supplement might actually be a valid source. I think these exceptions would probably be rare enough to manually mark for exclusion by the bot.

This would increase awareness of this problem among editors as well as encouraging editors to scrutinize the tagged sources. HLHJ (talk) 04:42, 27 January 2019 (UTC)[reply]

Unless a highly-sophisticated algorithm can be made, this will be denied per WP:CONTEXTBOT. There are zillions of reliable supplements (e.g. Astronomy & Astrophysics Supplement Series, Astrophysical Journal Supplement Series, Nuclear Physics B: Proceedings Supplements, Supplement to the London Gazette, The Times Higher Education Supplement, Raffles Bulletin of Zoology Supplement), so flagging something as problematic merely because it's from a supplement will not fly. Headbomb {t · c · p · b} 17:11, 27 January 2019 (UTC)[reply]

Moving Reference Metadata Out of Two Infoboxes

The Medical Translation Task Force faces an issue with respect to Content Translation. Basically the tool loses references when the metadata exists within template:infobox medical condition (new) and template:drugbox. The issue is described here and the task is supposedly not easily fixable and thus will not be fixed anytime soon.[3]

As a workaround I am proposing a bot that moves the metadata for references from these two infoboxes to the lead or body of the article in question. This will be done for the ~1200 articles in Category:RTT.

An example of what such an edit will look like is this.[4]

Doc James (talk · contribs · email) 19:14, 30 January 2019 (UTC)[reply]

@Doc James: In the example given a named reference gets moved out of the infobox and into a call of that named reference in the main body. What happens in the case that there is not such an obvious place to transfer the citation? In that case would the reference just hang generally in the article, would it go into a special subsection for odd references, would the bot skip that transfer, or is there some other plan? Blue Rasberry (talk) 20:23, 30 January 2019 (UTC)[reply]
User:Bluerasberry the bot would do nothing. The problem with content translation only occurs when a named reference occurs within the infobox and is then used as "<ref name=X/>" outside the infobox. If it is only used within the infobox there is no problem. Doc James (talk · contribs · email) 20:29, 30 January 2019 (UTC)[reply]
@Doc James: I see. The translation project is concerned with the leads of articles, so when citations critical to the lead are inaccessible in the infobox, the translation workflow has difficulties applying the full citation in the translated text.
I guess the controversy here could be whether first uses of a citation should be in the body of text rather than the infobox. I say yes - infoboxes increasingly are becoming a space for semi-automatic engagement and any template is challenging for new editors to manipulate anyway. I prefer having a more consistent practice of keeping citations in the body of the text.
Support This proposal does not hurt anything, makes a change which most users would find arbitrary, but which has a big impact for the workflow of the translation team. The bot would do a one-time run of 1200 articles and then perhaps occasional maintenance, which I am guessing could be 1-2 times yearly in the next few years. Do it. Blue Rasberry (talk) 20:35, 30 January 2019 (UTC)[reply]
Thanks User:Bluerasberry. Only need the one run for the translation efforts. I am happy to manually make sure metadata is in the appropriate spot for all new articles prepared for translation. Doc James (talk · contribs · email) 20:39, 30 January 2019 (UTC)[reply]
Support Infoboxes should be summarising information available in the main article text, so most of the time there will be a choice between citing the full reference in the body text or in the infobox. We ought to prefer having the full citation in article text, because it makes it easier to re-use snippets of text from one article in another, related one. There is rarely any corresponding need to copy infoboxes from one article to another. If this also improves the functionality of the Content Translation tool, then that is a real bonus. --RexxS (talk) 11:43, 31 January 2019 (UTC)[reply]
I'm willing to create and run a bot to accomplish the task as mentioned by Doc James. --Fz-29 (talk) 20:20, 1 February 2019 (UTC)[reply]
Anything more we need before User:Fz-29 builds this? Doc James (talk · contribs · email) 01:03, 2 February 2019 (UTC)[reply]

Detect Hijacked journals

Stop Predatory Journals maintains a list of hijacked journals. Could someone search wikipedia for the presence of hijacked URLs and produce a daily/weekly/whateverly report? Maybe have a WP:WCW task for it too? Headbomb {t · c · p · b} 00:09, 4 February 2019 (UTC)[reply]

This is a good idea. Made a script to scrape the site and search WP, it found three domains in 11 articles. -- GreenC 16:50, 4 February 2019 (UTC)[reply]
Extended content
  • Emma Yhnell <snippet>wins BSA Award Lecture | News | The British Neuroscience Association". www.bna.org.uk. Retrieved 2018-10-11. Video of Emma Yhnell speaking on public engagement</snippet>
  • Catherine Abbott <snippet>Neuroscience Day 2018 | Events | The British Neuroscience Association". www.bna.org.uk. Retrieved 2018-04-15. "Funding Panel membership | NC3Rs". www.nc3rs</snippet>
  • Irene Tracey <snippet>Winners 2018 Announced! | News | The British Neuroscience Association". www.bna.org.uk. Retrieved 2019-01-04. Tracey, Irene; Farrar, John T.; Okell, Thomas</snippet>
  • John H. Coote <snippet>"Professor John Coote | News | The British Neuroscience Association". www.bna.org.uk. British Neuroscience Association. Retrieved 4 December 2017. "John</snippet>

@Headbomb: can post the report on a regular basis if there is a page. Script takes less than 20 seconds to complete so not expensive on resources. -- GreenC 17:02, 4 February 2019 (UTC)[reply]
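The search half of such a report can be batched into CirrusSearch insource: queries, which work both in the on-wiki search box and via the search API. The batching and quoting here are assumptions for illustration, not GreenC's actual script:

```python
def insource_queries(domains, batch=10):
    """Group hijacked-journal domains into OR'd insource: search queries,
    suitable for the on-wiki search box or list=search in the API."""
    queries = []
    for i in range(0, len(domains), batch):
        chunk = domains[i:i + batch]
        queries.append(' OR '.join(f'insource:"{d}"' for d in chunk))
    return queries
```

Batching keeps the number of API round trips low when the hijacked-journal list grows, at the cost of a follow-up per-domain search to attribute each hit.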

Broken ref tag report bot

A bot that reports broken ref tags to a user, so he/she can fix it. — Preceding unsigned comment added by Darkwolfz (talkcontribs) 04:48, 6 February 2019 (UTC)[reply]

@Darkwolfz: Can you give a few examples? (I know they exist, but I haven't analyzed in depth why they appear broken). Thanks, --DannyS712 (talk) 04:55, 6 February 2019 (UTC)[reply]

Sure DannyS712. For example, if an article has a <ref> and an editor using the source editor introduces a stray backspace or line break inside the ref tag, that will break it; or editors give wrong parameters - for example, I found an article today where they entered the URL correctly, but instead of giving the website name, they added the URL. So if there's a bot which can detect broken ref tags or hyperlinks and report them to me, I can fix them. Darkwolfz (talk) 05:02, 6 February 2019 (UTC)[reply]

@Darkwolfz: What I meant was can you link to a few articles so I know what to scan for? --DannyS712 (talk) 05:10, 6 February 2019 (UTC)[reply]
Or couldn't you just look through the pages in Category:Pages with incorrect ref formatting and Category:Pages with broken reference names? --DannyS712 (talk) 05:12, 6 February 2019 (UTC)[reply]

DannyS712 https://en.wikipedia.org/wiki/Formby_Hall In its recent history, I fixed an error like that. Maybe we should scan for sources that show in red between <ref>...</ref>, or a missing opening <ref> or closing </ref>, and also ones with a missing reference title.

@Darkwolfz: Did you see the categories I linked to above? --DannyS712 (talk) 05:32, 6 February 2019 (UTC)[reply]
@DannyS712: yes, but almost all of them have title errors, and if we could just find tag errors, it'd be great. As in missing <ref> or </ref>
@Darkwolfz: what about Category:CS1 errors: external links? --DannyS712 (talk) 05:48, 6 February 2019 (UTC)[reply]

Yes, it helps a bit, but is it possible to find articles which don't belong to the category, as in a new error made by someone accidentally? And filter missing <ref> tags?

The article before you edited it (https://en.wikipedia.org/w/index.php?title=Formby_Hall&oldid=871268351) was in the CS1 category - can you give an example of an article with the error you're thinking of that isn't in one of the above-mentioned categories? --DannyS712 (talk) 05:58, 6 February 2019 (UTC)[reply]
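For what it's worth, the first-pass scan discussed in this thread — flagging pages with unbalanced <ref>...</ref> pairs — reduces to a couple of regexes over the wikitext. A rough sketch (a heuristic, not a real wikitext parser; self-closing <ref name=... /> tags have to be excluded before counting):

```python
import re

def unbalanced_refs(wikitext):
    """Return (opening, closing) ref-tag counts; unequal counts suggest a broken ref."""
    # Drop self-closing tags like <ref name="x" /> so they don't count as openings.
    text = re.sub(r'<ref[^>/]*/\s*>', '', wikitext)
    opens = len(re.findall(r'<ref(?:\s[^>]*)?>', text))
    closes = len(re.findall(r'</ref\s*>', text))
    return opens, closes

sample = 'Fact.<ref>{{cite web |url=http://example.com}}</ref> Broken.<ref>no closing tag'
assert unbalanced_refs(sample) == (2, 1)  # one unclosed <ref>
```

A bot would run this over recent changes and report any page where the counts diverge, leaving the fix to a human.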
This may not be what the OP intended, but it would be very useful to have a category or report for broken Harvard-style references. For example, in The White Negro, the short reference "Manso 1985" does not link to a full citation. An individual editor can use User:Ucucha/HarvErrors.js to make these references appear in red, but I do not know of a report or category that systematically lists articles where such errors are present. A set of reports, including individual reports for FAs and GAs, would be useful for ensuring that articles have verifiable references. – Jonesey95 (talk) 15:46, 6 February 2019 (UTC)[reply]
@Jonesey95: I'll try to adapt the script you linked to --DannyS712 (talk) 06:08, 11 February 2019 (UTC)[reply]
It should be pretty easy, if there is a category (or list) of all pages using Harvard-style references, or an easy way to make one. Otherwise I would have to scan through all pages to find the ones with Harvard-style reference errors. --DannyS712 (talk) 06:14, 11 February 2019 (UTC)[reply]
You could start with something like this. – Jonesey95 (talk) 08:45, 11 February 2019 (UTC)[reply]
The {{sfn}} template isn't Harvard-style references, it's Shortened footnotes. Harvard-style references are parenthetical, as used on pages like Actuary. However, the two methods have a number of common features, primarily the separation of page number information from the long-form citation, with the association between the two being by means of a link formed from up to four surnames and a year. From my reading of the above, it is these links that need to be tested; and we have a script to do that, see User:Ucucha/HarvErrors. --Redrose64 🌹 (talk) 20:38, 11 February 2019 (UTC)[reply]
@Jonesey95: adapting it is a lot more complicated than I thought - I don't think I'll be able to do this. But, User:DannyS712 test/HarvErrors.js will give you an alert on every page that you visit that has these errors - don't know if you'll find that useful. --DannyS712 test (talk) 21:15, 11 February 2019 (UTC)[reply]
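The check HarvErrors.js performs client-side — verifying that each short-form reference has a matching CITEREF anchor — could in principle be run server-side over raw wikitext. A much-simplified sketch, handling only the single-author {{sfn|Surname|Year}} case against {{cite ...}} templates with |last= and |year= (a real report would need the full up-to-four-surname link logic Redrose64 describes above):

```python
import re

def dangling_sfns(wikitext):
    """Return {{sfn}} invocations with no matching |last=/|year= citation."""
    # Anchors generated by citation templates: (surname, year).
    anchors = set()
    for m in re.finditer(r'\{\{cite [^}]*\}\}', wikitext):
        last = re.search(r'\|\s*last\s*=\s*([^|}]+)', m.group())
        year = re.search(r'\|\s*year\s*=\s*(\d{4})', m.group())
        if last and year:
            anchors.add((last.group(1).strip(), year.group(1)))
    # Single-author short footnotes: {{sfn|Surname|Year}}.
    return [m.group(0)
            for m in re.finditer(r'\{\{sfn\|([^|}]+)\|(\d{4})[|}]', wikitext)
            if (m.group(1).strip(), m.group(2)) not in anchors]

text = ('{{sfn|Manso|1985}} {{sfn|Smith|2001}} '
        '{{cite book |last=Smith |first=A |year=2001 |title=T}}')
assert len(dangling_sfns(text)) == 1  # Manso 1985 has no full citation
```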

Shadows Commons

This is a relatively simple query: https://quarry.wmflabs.org/query/18894

The images listed in that query should ideally be tagged with {{Shadows Commons}} (unless already tagged for CSD F8).

As this is a repeatable and presumably uncontroversial task, it would be better to let a bot do it, freeing up contributors for more complex tasks that require human skills rather than simple tagging clicks. Thanks

Given the query size, the bot would not need to run continuously; once a week should be more than adequate.


ShakespeareFan00 (talk) 10:58, 6 February 2019 (UTC)[reply]

BRFA filed -- GreenC 15:30, 6 February 2019 (UTC)[reply]
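The tagging step itself is mechanical once the query results are in hand. A sketch of the per-page transformation (the F8-related template names checked here are assumptions based on the request, not an exhaustive list):

```python
def tag_shadows_commons(wikitext):
    """Prepend {{Shadows Commons}} unless the page is already tagged or at CSD F8."""
    for marker in ('{{shadows commons', '{{db-f8', '{{now commons'):
        if marker in wikitext.lower():
            return wikitext  # already handled, leave untouched
    return '{{Shadows Commons}}\n' + wikitext

assert tag_shadows_commons('== Summary ==').startswith('{{Shadows Commons}}')
assert tag_shadows_commons('{{Shadows Commons}}\nx') == '{{Shadows Commons}}\nx'
```

A weekly run would fetch the Quarry result set, apply this to each file description page, and save with an explanatory edit summary.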

listing for Speedy Renaming all subcategories of Category:GTK+ to plain GTK

per Wikipedia:Categories_for_discussion/Speedy#Current_requests (Consistency with main article's name per official renaming)

please list all subcategories of Category:GTK+ for renaming from "GTK+" to plain "GTK"; their number is high and I can't do it manually, thanks. -- Editor-1 (talk) 08:19, 10 February 2019 (UTC)[reply]

@Editor-1: I can do it (with AWB) - but what specifically are you asking for? A list of the categories? --DannyS712 (talk) 08:25, 10 February 2019 (UTC)[reply]
just a list of all categories that have "GTK+" and the same list without the plus sign (plain GTK), see mentioned link and related discussion, thanks. Editor-1 (talk) 08:28, 10 February 2019 (UTC)[reply]
@Editor-1: I made a list of all of the subcategories (below) --DannyS712 (talk) 08:30, 10 February 2019 (UTC)[reply]
now the list needs to be put into the format below:

* [[:Category:old name with plus]] to [[:Category:same name without plus]] – per official renaming (request by [[User:Editor-1]])

so it can be included in Wikipedia:Categories_for_discussion/Speedy#Current_requests

thanks. -- Editor-1 (talk) 08:46, 10 February 2019 (UTC)[reply]

@Editor-1:  Done --DannyS712 (talk) 09:02, 10 February 2019 (UTC)[reply]
@DannyS712: thank you very much. -- Editor-1 (talk) 09:17, 10 February 2019 (UTC)[reply]
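The list-formatting step done by hand above is a one-line transformation per category; a sketch, with the output format copied from the request:

```python
def speedy_lines(categories, requester='Editor-1'):
    """Format CfD speedy-rename listing lines, dropping the '+' from each name."""
    return ['* [[:{}]] to [[:{}]] – per official renaming '
            '(request by [[User:{}]])'.format(c, c.replace('+', ''), requester)
            for c in categories]

lines = speedy_lines(['Category:GTK+ applications'])
assert lines[0].startswith(
    '* [[:Category:GTK+ applications]] to [[:Category:GTK applications]]')
```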

list

Extended content

Tagging sub-categories of Category:English-language singers

Please tag all sub-cats of Category:English-language singers (except Uganda) with

{{subst:cfr-speedy|English-language singers from ...}}

i.e. the nomination is to change the word "of" to "from".

Ideally, each category's country name should replace "...", but the ellipsis would be sufficient.

I will then list them at WP:CFDS myself. – Fayenatic London 23:03, 13 February 2019 (UTC)[reply]

@Fayenatic london: I'll do this one again manually, but I'm going to submit a BRFA soon, so feel free to message me for these in the future --DannyS712 (talk) 00:38, 14 February 2019 (UTC)[reply]
@Fayenatic london: Can you list it first, so I can link to it in the edit summary? --DannyS712 test (talk) 00:41, 14 February 2019 (UTC)[reply]
@DannyS712 test: Thank you, I have listed them in a separate section at Wikipedia:Categories_for_discussion/Speedy#Current_requests. – Fayenatic London 13:53, 14 February 2019 (UTC)[reply]
@Fayenatic london:  Done --DannyS712 (talk) 16:11, 14 February 2019 (UTC)[reply]

List of values used for Template:Tooltip

Could someone generate a list of values used for Template:Tooltip (the redirect, not Template:Abbr) in a table form, so it would be easier to see what needs to be converted to {{abbr}} per the result of this discussion? --Gonnym (talk) 14:20, 15 February 2019 (UTC)[reply]

Doing... Dat GuyTalkContribs 15:54, 15 February 2019 (UTC)[reply]
Gonnym I can't find what Kuznetsov-class aircraft carrier has that transcludes the template. Could you help me figure it out? Dat GuyTalkContribs 16:33, 15 February 2019 (UTC)[reply]
In addition, would you like me to also look if the articles that transclude the tooltip template also match the [Abbr/Abrrv/What is] templates? It seems like they're redirects. Also pinging @Amorymeltzer: fyi. Dat GuyTalkContribs 16:47, 15 February 2019 (UTC)[reply]
Regarding your second question, no need. Only {{Tooltip}} was discussed as deprecated in that discussion. Regarding the first issue though. Wow. I've looked over that article multiple times and inside the templates used on that page and I can't seem to figure out where tooltip is used. Nothing seems to be using it. --Gonnym (talk) 17:06, 15 February 2019 (UTC)[reply]
It was Template:Ukrainian ships, I've removed the use. ~ Amory (utc) 17:13, 15 February 2019 (UTC)[reply]
@Amorymeltzer and Gonnym: Before I finish it, does User:DatGuy/sandbox look good? Dat GuyTalkContribs 18:01, 15 February 2019 (UTC)[reply]
I can't speak for Gonnym, but one thing I think would be helpful (at least, how I was planning on thinking about it) is to know which of these are within the same template or table. I imagine that'd be harder to handle, but ideally many of the uses in mainspace could be replaced by a wrapper template for Module:Sports table, so knowing what the common pairings would be helpful. ~ Amory (utc) 18:07, 15 February 2019 (UTC)[reply]
That looks good. Do you think it is possible to list only unique pairings and the number of times it appears? So for example, list only once the "Ref."/"Reference". If this is possible, it will help in deciding if this is something that can be done with AWB or a bot. It will also make reading the table easier. If it can't, the current table still helps a lot though. --Gonnym (talk) 21:31, 15 February 2019 (UTC)[reply]
I don't believe that checking if it's inside a template/table is simple/worth the time, but I've made User:DatGuy/sandbox and User talk:DatGuy/sandbox. They will be updated with sandbox1 accordingly when article size exceeds limits. Dat GuyTalkContribs 23:45, 15 February 2019 (UTC)[reply]
Since I see (at least) two different "GD" entries in User talk:DatGuy/sandbox, I'm assuming you managed to get unique pairings right? If that is true, could you also add the 2nd argument column to this table? This is very helpful btw. I've already identified a few thousand easy replacements. --Gonnym (talk) 23:56, 15 February 2019 (UTC)[reply]
I'm not sure what the duplicate entries are actually. I've attempted a fix. Dat GuyTalkContribs 23:59, 15 February 2019 (UTC)[reply]
@Gonnym and Amorymeltzer: Well, seems like it's being a bit of a pain in the ass due to article size limits. It has calculated 24000 uses. The pages are User talk:DatGuy/sandbox and User:DatGuy/sandbox(0-22). Dat GuyTalkContribs 09:44, 16 February 2019 (UTC)[reply]
Yeah, it still has a lot of uses, but for example, just "Pts" alone has 10705 uses. Just reconfirming with you, are all "Pts" uses using the same second argument value? --Gonnym (talk) 09:47, 16 February 2019 (UTC)[reply]
No, they aren't. You could go through a few pages of the User: pages and look for it. Apologies. Dat GuyTalkContribs 09:48, 16 February 2019 (UTC)[reply]
Haha indeed! I was surprised you were so confident. Still, it's helpful to have. I've got a lot on my plate at the moment, but if I get a chance in the next month or so, I'll try and work on finding the uses that are the same (e.g. all the headers with W/D/L, those with W/D/L/Pts, etc.). ~ Amory (utc) 11:34, 17 February 2019 (UTC)[reply]
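Counting the unique (first, second) argument pairings that Gonnym asked for reduces to a regex pass plus a Counter. A naive sketch (assumes no nested templates or pipes inside the arguments, which the real data above clearly sometimes violates):

```python
import re
from collections import Counter

def tooltip_pairs(wikitext):
    """Count unique {{Tooltip|arg1|arg2}} pairings in the given wikitext."""
    pairs = re.findall(r'\{\{[Tt]ooltip\|([^|{}]*)\|([^|{}]*)\}\}', wikitext)
    return Counter(pairs)

text = '{{Tooltip|Pts|Points}} {{tooltip|Pts|Points}} {{Tooltip|Pts|Penalties}}'
counts = tooltip_pairs(text)
assert counts[('Pts', 'Points')] == 2
assert counts[('Pts', 'Penalties')] == 1
```

Sorting the Counter by frequency gives exactly the "which pairings are worth a wrapper template" view discussed above.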

MOSDATE bot

Hello,

I would like to suggest a bot that fixes dates in Category:Use mdy dates and Category:Use dmy dates.

RhinosF1(chat)(status)(contribs) 18:09, 15 February 2019 (UTC)[reply]

I'd be happy to run the task (preferably python 2.7 (or 3)) or in any automated editor that works with ChromeOS but I can't guarantee consistency in it running. RhinosF1(chat)(status)(contribs) 18:18, 15 February 2019 (UTC)[reply]
This was recently discussed at Wikipedia:Village pump (proposals)/Archive 156#"Datebot" (limited scope). Glancing through the !votes, it looks like it's not particularly straightforward and may not have enough support for a bot to be doing it. If someone were to want to do this, they should probably restart that discussion and try for much more input. And be a bit clearer about exactly which dates would be touched (only in citation templates?). Anomie 18:26, 15 February 2019 (UTC)[reply]
I wasn't aware of the discussion. As I've said, I'd be happy to do an automated one-time run on something like AWB if that would be better. I'd probably suggest only touching articles with the tag that haven't been updated in 12 months, and running once every year or so. I'd personally go with citation templates only for a bot. If it were AWB generating a list to approve and then pushing those changes in batches, then any date on an article with the tag. I'm on IRC often if anyone would like to help develop and wants to discuss via PM (you must register your nick first). RhinosF1(chat)(status)(contribs) 18:34, 15 February 2019 (UTC)[reply]
I agree that this issue is not straightforward, and is not conducive to a bot operation. For example, any bot that changes access-date dates from BIGENDIAN to either 'mdy' or 'dmy' when the article has established use of BIGENDIAN for the access-date dates would be in violation of WP:DATERET and WP:CITESTYLE/WP:CITEVAR – but how is a bot supposed to figure this out? --IJBall (contribstalk) 21:45, 15 February 2019 (UTC)[reply]
Declined Not a good task for a bot. per the comments above. Primefac (talk) 15:50, 17 February 2019 (UTC)[reply]
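For the record, the narrow citation-template-only conversion that was floated above is mechanically simple — which also illustrates IJBall's objection: code can convert formats, but it cannot decide whether the conversion respects WP:DATERET. A sketch converting mdy to dmy inside |access-date= parameters only (hypothetical scope, per the discussion; this task was declined):

```python
import re
from datetime import datetime

def access_dates_to_dmy(wikitext):
    """Convert mdy dates to dmy inside |access-date= parameters only."""
    def repl(m):
        d = datetime.strptime(m.group(2), '%B %d, %Y')
        # Build '15 February 2019' portably (strftime %-d is glibc-only).
        return m.group(1) + '{} {} {}'.format(d.day, d.strftime('%B'), d.year)
    return re.sub(r'(\|\s*access-date\s*=\s*)(\w+ \d{1,2}, \d{4})', repl, wikitext)

assert (access_dates_to_dmy('{{cite web |access-date=February 15, 2019}}')
        == '{{cite web |access-date=15 February 2019}}')
```

Nothing in this function knows which format the article has "established", so the DATERET judgment would still have to be made per-article by a human.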

ARKive

The ARKive project has ended and its website has been replaced with a single page noting that act. Links to pages on arkive.org need to be replaced with archive.org equivalents; and citations need |archive-url= and |archive-date= attributes. I've already updated {{ARKive}}. Can someone oblige, please?

Links like the one at the foot of Bitis schneideri could usefully be replaced using {{ARKive}}. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:23, 16 February 2019 (UTC)[reply]

I have set the domain to dead on IABot and submitted a task to update the affected pages. Dat GuyTalkContribs 14:41, 16 February 2019 (UTC)[reply]
Hey Dat Guy I did the same thing and our queues were started about 15 seconds apart. I just killed my job. -- GreenC 14:49, 16 February 2019 (UTC)[reply]
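For anyone replicating this outside IABot, the substitution itself is mechanical: build a Wayback Machine URL from the dead link and fill the |archive-url=/|archive-date= attributes the request mentions. A sketch (the snapshot timestamp here is a hypothetical placeholder; a real run would query the Wayback Machine availability API for the closest capture):

```python
def wayback_params(url, timestamp='20190101000000'):
    """Build citation archive parameters for a dead link.

    `timestamp` is a placeholder; a real bot would look up the nearest
    snapshot via the Wayback Machine availability API.
    """
    archive_url = 'https://web.archive.org/web/{}/{}'.format(timestamp, url)
    archive_date = '-'.join([timestamp[:4], timestamp[4:6], timestamp[6:8]])
    return {'archive-url': archive_url, 'archive-date': archive_date}

p = wayback_params('http://www.arkive.org/aardvark/orycteropus-afer/')
assert p['archive-date'] == '2019-01-01'
assert p['archive-url'].startswith('https://web.archive.org/web/20190101000000/')
```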

Categories for Discussion bot

Wikipedia:Categories for discussion is looking for a new bot to process category deletions, mergers, and moves. User:Cydebot currently processes the main /Working page, but there is a growing list of issues that call out for a replacement bot:

  1. Cydebot's default is to create a category redirect in most (though, oddly, not all) cases of renaming or merging. This is helpful in some cases (e.g. Swaziland → Eswatini) but unhelpful or downright wrong in most others, and it promotes future miscategorization (see examples here). Quite simply, a bot should not be creating thousands of category redirects without either more specific parameters or direct human guidance. Currently, this is substantially adding to the workload of the few admins who close CfDs and contributing to backlogs.
  2. Cydebot unexpectedly stalls on certain large runs.
  3. Cydebot no longer processes the /Large and /Retain subpages.
  4. The bot's operator is no longer very active (just 4 edits last year), and therefore unable to address these issues.

At a minimum, the new bot should process the main /Working page:

  • Deleting, merging, and renaming (i.e. moving) categories, as specified, with appropriate edit summaries.
  • Deleting the old category with an appropriate deletion summary.
  • In the case of renaming, removing the CfD notice from the renamed category.

Ideally, it would also do some or all of the following:

  • Process the /Large and /Retain subpages.
  • Accept manual input when a category redirect should be created—for example, by recognizing leading text when a redirect is wanted, such as * REDIRECT [[:Category:Foo]] to [[:Category:Bar]].
  • Recognize and update category code in transcluded templates. This would need to be discussed/tested to minimize errors and false positives.
  • Recognize and update incoming links to the old category. This would need to be discussed/tested to minimize errors and false positives.

Your assistance would earn the gratitude of some very tired and increasingly frustrated CfD'ers.

Thank you, -- Black Falcon (talk) 20:48, 18 February 2019 (UTC)[reply]
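The manual-input convention proposed above — leading text flagging a wanted redirect — would be straightforward for a replacement bot to parse. A sketch recognizing the suggested line format (the syntax is the requester's example, not an existing /Working convention):

```python
import re

def parse_redirect_directive(line):
    """Parse '* REDIRECT [[:Category:Foo]] to [[:Category:Bar]]' lines."""
    m = re.match(
        r'\*\s*REDIRECT\s*\[\[:(Category:[^]]+)\]\]\s*to\s*\[\[:(Category:[^]]+)\]\]',
        line.strip())
    return (m.group(1), m.group(2)) if m else None

assert parse_redirect_directive(
    '* REDIRECT [[:Category:Foo]] to [[:Category:Bar]]'
) == ('Category:Foo', 'Category:Bar')
assert parse_redirect_directive('* MERGE [[:Category:X]]') is None
```

Lines that parse would get a {{Category redirect}} created from Foo to Bar; everything else would be deleted or renamed without a redirect, addressing issue 1 above.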

@Black Falcon: I may be able to help - see my notes at the CfD talk page about a bot for tagging. A similar functionality would be to go through all of the pages in a category and recategorize them, or remove a category so it can be deleted. I'm really busy the next week, but I'm interested in working on this (though I would need someone else with a bit to operate it for the deletions, etc) --DannyS712 (talk) 20:55, 18 February 2019 (UTC)[reply]
ArmbrustBot is already approved for this (tasks 1 and 6); alerting the operator. {{3x|p}}ery (talk) 04:27, 19 February 2019 (UTC)[reply]
ArmbrustBot requires operator input to run, and it cannot perform admin actions. — JJMC89(T·C) 05:02, 19 February 2019 (UTC)[reply]
I'm willing to take this on. — JJMC89(T·C) 05:02, 19 February 2019 (UTC)[reply]

My new bot request

Hello! I would like to request to operate a bot! My idea is a bot that can revert reference blanking. In my job as a recent changes patroller, I see many people blanking references. I know that ClueBot reverts vandalism, but usually ClueBot does not revert reference blanking. Let me know what you think!    Shalvey    17:10, 19 February 2019 (UTC)[reply]

How would it tell the difference between vandalism and legitimate deletion? User:ClueBot NG is a sophisticated bot built by a team of programmers. Maybe ask if they can incorporate the idea. Give them example diffs of the types of edits. -- GreenC 17:35, 19 February 2019 (UTC)[reply]

A Bot that would see if references go to a site

Hello, I would like to know if it is possible that you guys could create a bot that would check references in articles and see if they actually point to websites, not just random URLs that don't even exist. What I mean is that, when you type in a website, you get a blue outline, which then forwards you to the site. What I'm seeing is URLs that aren't highlighted in blue, but just URLs. The bot could be run by me, but I don't know how to code a bot. Thanks!    Shalvey    18:49, 19 February 2019 (UTC)[reply]


Let me know what you think! — Preceding unsigned comment added by Shalvey (talkcontribs) 19:13, 19 February 2019 (UTC)[reply]

Usurp KadaneBot

Moved from WP:BON

Hi all, Kadane very helpfully created KadaneBot for us over at Wikipedia Peer Review - it sends out automated reminders based on topic areas of interest for unanswered peer reviews. Unfortunately, Kadane's been inactive almost since creation (September 2018), and hasn't responded to my request [5]. Would anyone be so kind as to usurp this bot so we can continue to use it? --Tom (LT) (talk) 07:32, 22 February 2019 (UTC)[reply]

ADDIT. As Xaosflux pointed out, there isn't really a 'usurp' process, however I think this is probably the easiest title to describe what I am requesting.--Tom (LT) (talk) 07:32, 22 February 2019 (UTC)[reply]

Archive Bot

A bot that will, in certain situations, switch links to web.archive.org.