
Wikipedia:Bot requests: Difference between revisions

From Wikipedia, the free encyclopedia
m Archiving 1 discussion(s) to Wikipedia:Bot requests/Archive 79) (bot
Thanks,
[[User:WhisperToMe|WhisperToMe]] ([[User talk:WhisperToMe|talk]]) 20:48, 5 July 2019 (UTC)

== Bot to update economic statistics ==

I plan on making a bot that updates statistics at regular intervals. For example, by updating the latest GDP numbers or inflation numbers on the article [[Economy of the United States]]. Numbers will initially be retrieved from the [https://fred.stlouisfed.org/ St. Louis Fed's FRED] API. Other sources of data could be added later.

I envision the typical flow will be:

# Retrieve a list of all pages using the associated template.
# Parse the pages and retrieve series identification and other relevant information from the templates. For example [https://fred.stlouisfed.org/series/A191RL1Q225SBEA A191RL1Q225SBEA] for quarterly US GDP from FRED.
# Retrieve the latest series values through APIs.
# Replace the old value with the latest value.

I started a similar project several years ago but never followed through. Please let me know if you have any thoughts or suggestions.--[[User:Bkwillwm|Bkwillwm]] ([[User talk:Bkwillwm|talk]]) 03:26, 6 July 2019 (UTC)
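The template-parsing and value-replacement steps (2 and 4 above) could be sketched as below. This is only a sketch: the template name `Economic statistic` and its `series`/`value` parameters are hypothetical, since no such template exists yet, and the API fetch and page-saving steps are omitted.

```python
import re

# Hypothetical template and parameter names -- the request does not specify them.
TEMPLATE_RE = re.compile(
    r"(\{\{\s*Economic statistic\s*\|[^}]*?\|\s*value\s*=\s*)([^|}]*)",
    re.IGNORECASE,
)
SERIES_RE = re.compile(r"\|\s*series\s*=\s*([A-Z0-9]+)", re.IGNORECASE)

def extract_series_id(wikitext):
    """Step 2: pull the FRED series ID out of the template."""
    m = SERIES_RE.search(wikitext)
    return m.group(1) if m else None

def update_value(wikitext, new_value):
    """Step 4: replace the stored value with the latest one."""
    return TEMPLATE_RE.sub(lambda m: m.group(1) + new_value, wikitext)
```

Step 1 (listing transclusions) and step 3 (fetching the latest observation) would go through the MediaWiki and FRED APIs respectively.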

Revision as of 03:26, 6 July 2019

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots, either because they are too complicated to program, or do not have consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) is added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).


Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.


Bot to improve names of media sources in references

Many references on Wikipedia point to large media organizations such as the New York Times. However, the names are often abbreviated, not italicized, and/or missing wikilinks to the media organization. I'd like to propose a bot that could go to an article like this one and automatically replace "NY Times" with "New York Times". Other large media organizations (e.g. BBC, Washington Post, and so on) could fairly easily be added, I imagine. - Sdkb (talk) 04:43, 19 November 2018 (UTC)[reply]

  • What about the Times's page? The page says: 'The New York Times (sometimes abbreviated as the NYT and NY Times)…' The bot might replace those too and that might be a little confusing…The 2nd Red Guy (talk) 14:55, 23 April 2019 (UTC)[reply]
  • I would be wary of WP:CONTEXTBOT. For instance, NYT can refer to a supplement of the Helsingin Sanomat#Format (in addition to the New York Times), and may be the main use on Finland-related pages. TigraanClick here to contact me 13:40, 20 November 2018 (UTC)[reply]
    • @Tigraan:That's a good point. I think it'd be fairly easy to work around that sort of issue, though — before having any bot make any change to a reference, have it check that the URL goes to the expected website. So in the case of the New York Times, if a reference with "NYT" didn't also contain the URL nytimes.com, it wouldn't make the replacement. There might still be some limitations, but given that the bot is already operating only within the limited domain of a specific field of the citation template, I think there's a fairly low risk that it'd make errors. - Sdkb (talk) 10:52, 25 November 2018 (UTC)[reply]
  • I should add that part of the reason I think this is important is that, in addition to just standardizing content, it'd allow people to more easily check whether a source used in a reference is likely to be reliable. - Sdkb (talk) 22:01, 25 November 2018 (UTC)[reply]
@Sdkb: This is significantly harder than it seems, as most bots are. Wikipedia is one giant exception - the long tail of unexpected gotchas is very long, particularly on formatting issues. Another problem is agencies (AP, UPI, Reuters). Oftentimes the NYT is running an agency story. The cite should use NYT in the |work= and the agency in the |agency=, but often the agency ends up in the |work= field, so the bot couldn't blindly make changes without considerable room for error. I have a sense of what needs to be done: extract every cite on Enwiki with a |url= containing nytimes.com, extract every |work= from those and create a unique list, manually remove from the list anything that shouldn't belong (like Reuters), then the bot keys off that list before making live changes; it knows what is safe to change (anything in the list). It's just a hell of a job in terms of time and resources, considering all the sites to be processed and the manual checks involved. See also Wikipedia:Bots/Dictionary#Cosmetic_edit: "the term cosmetic edit is often used to encompass all edits of such little value that the community deems them to not be worth making in bulk". This is probably a borderline case; though I have no opinion on which side of the border it falls, other people might during the BRFA. -- GreenC 16:53, 26 November 2018 (UTC)[reply]
@GreenC: Thanks for the thought you're putting into considering this idea; I appreciate it. One way the bot could work to avoid that issue is to not key off of URLs, but rather off of the abbreviations. As in, it'd be triggered by the "NYT" in either the work or agency field, and then use the URL just as a confirmation to double check. That way, errors users have made in the citation fields would remain, but at least the format would be improved and no new errors would be introduced. - Sdkb (talk) 08:17, 27 November 2018 (UTC)[reply]
Right, that's basically what I was saying also. But to get all the possible abbreviations requires scanning the system, because the variety of abbreviations is unknowable ahead of time. Unless we pick a few that might be common, but it would miss a lot. -- GreenC 14:54, 27 November 2018 (UTC)[reply]
Well, for NYT at the least, citations with a |url=https://www.nytimes.com/... could be safely assumed to be referring to the New York Times. Headbomb {t · c · p · b} 01:20, 8 December 2018 (UTC)[reply]
Yeah, I'm not too worried about comprehensiveness for now; I'd mainly just like to see the bot get off the ground and able to handle the two or three most common abbreviations for maybe half a dozen really big newspapers. From there, I imagine, a framework will be in place that'd then allow the bot to expand to other papers or abbreviations over time. - Sdkb (talk) 07:01, 12 December 2018 (UTC)[reply]
Conversation here seems to have died down. Is there anything I can do to move the proposal forward? - Sdkb (talk) 21:42, 14 January 2019 (UTC)[reply]
I am not against this idea totally but the bot would have to be a very good one for this to be a net positive and not end up creating more work. Emir of Wikipedia (talk) 22:18, 14 January 2019 (UTC)[reply]
@Sdkb: you could build a list of unambiguous cases. E.g. |work/journal/magazine/newspaper/website=NYT combined with |url=https://www.nytimes.com/.... Short of that, it's too much of a WP:CONTEXTBOT. I'll also point out that NY Times isn't exactly obscure/ambiguous either.Headbomb {t · c · p · b} 17:47, 27 January 2019 (UTC)[reply]
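The "unambiguous cases" check could be sketched as below. The `SAFE` mapping is illustrative only; as discussed in the thread, the real list would be built from a scan of existing citations.

```python
import re

# Illustrative starter mapping: abbreviation -> (confirming domain, replacement).
# A real run would build this list from a scan of existing citations.
SAFE = {
    "NYT": ("nytimes.com", "[[The New York Times]]"),
    "WSJ": ("wsj.com", "[[The Wall Street Journal]]"),
}

def fix_citation(cite):
    """Expand an abbreviated |work= only when |url= confirms the domain."""
    u = re.search(r"\|\s*url\s*=\s*([^|\s}]+)", cite)
    if not u:
        return cite

    def repl(m):
        abbrev = m.group(2).strip()
        if abbrev in SAFE and SAFE[abbrev][0] in u.group(1):
            return m.group(1) + SAFE[abbrev][1]
        return m.group(0)  # leave ambiguous or unconfirmed cases alone

    return re.sub(r"(\|\s*(?:work|newspaper|website)\s*=\s*)([^|}]+)", repl, cite)
```

Citations whose URL does not match the expected domain (the Helsingin Sanomat case above) are left untouched.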
Okay, here's an initial list:

Sdkb (talk) 03:54, 1 February 2019 (UTC)[reply]

What about BYU to Brigham Young University?The 2nd Red Guy (talk) 15:41, 23 April 2019 (UTC)[reply]
Sorry, I'm not sure what you're proposing here. Is BYU a media source? - Sdkb (talk) 18:07, 19 June 2019 (UTC)[reply]

Changing New York Times to The New York Times would be great. I have seen people going through AWB runs doing it, but seems like a waste of human time. Kees08 (Talk) 23:32, 2 February 2019 (UTC)[reply]

@Kees08: Thanks; I added in those cases. - Sdkb (talk) 01:19, 3 February 2019 (UTC)[reply]
Not really sure changing Foobar to The Foobar is desired in many cases. WP:CITEVAR will certainly apply to a few of those. For NYT/NY Times, WaPo/Wa Po, WSJ, LA Times/L.A. Times, are those guaranteed to refer to a version of these journals that was actually called by the full name? Meaning, was there at some point in the LA Times's history where "LA Times" or some such was featured on the masthead of the publication, in either print or web form? If so, that's a bad bot task. If not, then there's likely no issue with it. Headbomb {t · c · p · b} 01:54, 3 February 2019 (UTC)[reply]
For the "the" publications, it's part of their name, so referring to just "Foobar" is incorrect usage. (It's admittedly a nitpicky correction, but one we may as well make while we're in the process of making what I'd consider more important improvements, namely adding the wikilinks to help readers more easily verify the reliability of a source.) Regarding the question of whether any of those publications ever used the abbreviated name as a formal name for something, I'd doubt it, as it'd be very confusing, but I'm not fully sure how to check that by Googling. - Sdkb (talk) 21:04, 3 February 2019 (UTC)[reply]
The omission of 'the' is a legitimate stylistic variation. And even if 'N.Y. Times' never appeared on the masthead, the expansion of abbreviations (e.g. N.Y. Times / L.A. Times) could also be a legitimate stylistic variation. The acronyms (e.g. NYT/WSJ) are much safer to expand though. Headbomb {t · c · p · b} 21:41, 3 February 2019 (UTC)[reply]
It is a change I have had to do many times since it is brought up in reviews (FAC usually I think). It would be nice if we could find parameters to make it possible. Going by the article, since December 1, 1896, it has been referred to as The New York Times. The ranges are:
  • September 18, 1851–September 13, 1857 New-York Daily Times
  • September 14, 1857–November 30, 1896 The New-York Times
  • December 1, 1896–current The New York Times
New York Times has never been the title of the newspaper, and we could use date ranges to verify we do not hit the edge cases of pre-December 1, 1896 The New York Times articles. There is The New York Times International Edition, but it seems like it has a different base-URL than nytimes.com. I can go through the effort to verify the names of the other publications throughout the years, but do you agree with my assessment of The New York Times? Kees08 (Talk) 01:51, 4 February 2019 (UTC)[reply]

Is anyone interested in this? I still think it would save myself a lot of editing time. Headbomb did you have further thoughts? Kees08 (Talk) 16:21, 15 March 2019 (UTC)[reply]

@Kees08: I definitely still am, but I'm not sure how to move the proposal forward from here. - Sdkb (talk) 21:45, 21 March 2019 (UTC)[reply]

WikiProject Civil Rights Movement

I'm trying to set up a bot to perform assessment and tagging work for Wikipedia:WikiProject Civil Rights Movement. The bot would need to rely only on keywords present in pages. The bot would provide a list of prospective pages that appear to satisfy the rules given to it. An example of what the project is seeking is something similar to User:InceptionBot. WikiProject Civil Rights Movement uses that bot to generate the report Wikipedia:WikiProject Civil Rights Movement/New articles. Whereas that bot generates a report of new pages, the desired bot would assess old pages. Mitchumch (talk) 16:27, 1 April 2019 (UTC)[reply]

At Wikipedia:Village pump (technical)#Assessment and tagging bot I didn't intend that you should try to set up your own bot. There are plenty of bots already authorised to carry out WikiProject tagging runs. Just describe the selection criteria, and we'll see who picks it up. --Redrose64 🌹 (talk) 19:46, 1 April 2019 (UTC)[reply]
The selection criteria are keywords on pages:
  • civil rights movement
  • civil rights activist
  • black panther party
  • black power
  • martin luther king
  • student nonviolent coordinating committee
  • congress of racial equality
  • national association for the advancement of colored people
  • naacp
  • urban league
  • southern christian leadership conference
Mitchumch (talk) 22:02, 1 April 2019 (UTC)[reply]
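The keyword matching at the core of such a report could be sketched as below. This is a sketch only: the list is shortened from the one above, and fetching page text (e.g. via the MediaWiki API or a database dump) is omitted.

```python
# Shortened keyword list; the full list is given in the request above.
KEYWORDS = [
    "civil rights movement",
    "black panther party",
    "southern christian leadership conference",
    "naacp",
]

def matching_keywords(text, keywords=KEYWORDS):
    """Return the keywords that occur in a page's text, case-insensitively."""
    low = text.lower()
    return [k for k in keywords if k in low]
```

A report generator would run this over each old page and list those with a non-empty result, along with which keywords matched.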
Redrose64 Since no one responded, is there another option? Mitchumch (talk) 20:00, 29 April 2019 (UTC)[reply]
@Mitchumch: Since no one has responded yet, I would like to know something: what if InceptionBot were used to generate both old and new pages? Adithyak1997 (talk) 17:49, 4 June 2019 (UTC)[reply]
@Adithyak1997: My preference would be a separate list for old pages. However, if that is not possible, then a report that combines both old and new pages would be acceptable. Mitchumch (talk) 19:04, 5 June 2019 (UTC)[reply]
@Mitchumch: and @Redrose64: I would like to know whether the following method would be feasible to run:
(i) In the Make List option in AWB, provide the source as Wiki search (text).
(ii) In the wiki search textbox, provide the text Civil rights movement (as an example) and press the make list button.
(iii) Then you will get a list of pages containing the phrase 'civil rights movement'. Then add the category Category:Pages with Civil Rights Movement. Either other keywords can be added to the same category, or to a new category like Category:Pages with Civil Rights Movement:Civil Rights Movement, where the text after the colon denotes the keyword. There are some restrictions here. Firstly, since I am using a plain method, I don't know how to increase the limit of the make list option, i.e. currently only 1000 pages are listed, which might need to be increased. Secondly, I need to know how to skip pages that are already in the category; I think it needs some regex. I have just mentioned a method which I think is easier for me. Do note that this has to be done manually. Once a person starts it, I think it can be helped along by other fellow Wikipedians also. Adithyak1997 (talk) 17:56, 6 June 2019 (UTC)[reply]

Russia district maps

Replace image_map with {{Russia district OSM map}} for all the articles on this list, as in this diff. The maps are already displayed in the articles, but currently this is achieved through a long switch function on {{Infobox Russian district}}; transcluding the template directly would be more efficient.--eh bien mon prince (talk) 11:58, 11 April 2019 (UTC)[reply]

@Underlying lk: should be pretty similar to the German maps, right? --DannyS712 (talk) 22:31, 11 April 2019 (UTC)[reply]
Yes pretty much. In fact, the German template is based on this one.--eh bien mon prince (talk) 13:26, 12 April 2019 (UTC)[reply]
@Underlying lk: I can do this. I have a few BRFAs currently open, but once some finish I'll file one for this task --DannyS712 (talk) 04:20, 14 April 2019 (UTC)[reply]
@DannyS712: any progress on this?--eh bien mon prince (talk) 19:51, 26 May 2019 (UTC)[reply]
@Underlying lk: Will do this weekend, sorry --DannyS712 (talk) 19:52, 26 May 2019 (UTC)[reply]
@DannyS712: any updates on this and the Germany template? If you're too busy at the moment, perhaps someone else can take over.--eh bien mon prince (talk) 03:46, 5 June 2019 (UTC)[reply]
@Underlying lk: Sorry, I've been sick and really busy IRL. I'll do both next week --DannyS712 (talk) 07:08, 5 June 2019 (UTC)[reply]
@DannyS712: Any news?--eh bien mon prince (talk) 16:35, 29 June 2019 (UTC)[reply]
@Underlying lk: I'm waiting until the german one is done, and I see you've responded on that, so it should be soon. --DannyS712 (talk) 16:37, 29 June 2019 (UTC)[reply]

Bot to make a mass nom of subcategories in a tree

Is it possible for a bot to nominate all the subcategories in the tree Category:Screenplays by writer for a rename based on Wikipedia:Categories for discussion/Log/2019 May 10#Category:Screenplays by writer. There is about a thousand of them! I guess each one needs to be tagged with {{subst:CFR||Category:Screenplays by writer}}, and then added to the list at the nom. Is this feasible? --woodensuperman 15:23, 10 May 2019 (UTC)[reply]
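The per-page tagging step could be sketched as below. Fetching the subcategory list (e.g. with pywikibot's category generators) and saving the pages are omitted, and the CFR markup is taken directly from the request above; the actual substitution of {{subst:CFR}} happens on save.

```python
CFD_TARGET = "Category:Screenplays by writer"

def tag_page(text, target=CFD_TARGET):
    """Prepend the CFR tag to a category page, unless it is already tagged."""
    if "{{subst:CFR" in text or "{{CFR" in text:
        return text  # already tagged; skip to keep the run idempotent
    return "{{subst:CFR||%s}}\n%s" % (target, text)
```

The same loop would collect each tagged subcategory's name to append to the list at the nomination.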

@BrownHairedGirl: ? --Izno (talk) 15:53, 10 May 2019 (UTC)[reply]
@Woodensuperman and Izno: technically, I could do this quite easily.
But I won't do it for a proposal to rename to "Category:Films by writer". Many films are based on books or plays, so "Category:Films by writer" is ambiguous: it could refer either to the writer of the original work, or to the writer of the screenplay.
I suggest that Woodensuperman should withdraw the CFD nomination, and open a discussion at WT:FILM about possible options ... and only once one or more options have been clarified consider opening a mass nomination. --BrownHairedGirl (talk) • (contribs) 16:03, 10 May 2019 (UTC)[reply]
@DannyS712:, that's one for you, I think? Headbomb {t · c · p · b} 23:46, 12 May 2019 (UTC)[reply]
@Headbomb: yes, but since BHG suggested that the nom be withdrawn and a discussion opened first, I was going to wait and see what woodensuperman says before chiming in --DannyS712 (talk) 23:47, 12 May 2019 (UTC)[reply]
@DannyS712: I don't intend to withdraw the nom. I think a sensible discussion can be had at CFD. --woodensuperman 11:30, 13 May 2019 (UTC)[reply]
@Woodensuperman: I just finished my bot trial, so I can't do this run, sorry --DannyS712 (talk) 21:13, 14 May 2019 (UTC)[reply]

@DannyS712 and Woodensuperman: does this still need doing? Headbomb {t · c · p · b} 01:14, 3 July 2019 (UTC)[reply]

@Headbomb: Well, the nom is still open. It's gone a bit stale though... --woodensuperman 07:52, 3 July 2019 (UTC)[reply]

So, maybe a bot that does this already exists, in which case, awesome. I've seen this come up twice in the last week, and it occurred to me it should probably be automated. So let's say there's a website, and it is used as a reference on a bunch of articles. The company that maintained the website shuts down, and the domain gets sold to a company that sells dick pills (I'm not being needlessly vulgar, by the way, this is a real example). Every reference to an article on that site now contains a link to a dick pill advertisement. Sometimes the article is archived somewhere, and citation templates have "dead-url=unfit" for this sort of situation, so we can note the original url for editors but never display it to readers.

Anyway, why automation? It might be just a few references, it might be a lot. But in the narrow type of situation I described, all existing article-space links to that site should stop appearing for readers, whether or not an archive can be found. The bot I imagine doing this would just sit around until an operator gave it a domain that has suffered such a fate, and it would get to work, hiding the urls from articles and replacing with archives if they exist. So, does a bot like that exist? Thanks. Someguy1221 (talk) 06:31, 23 May 2019 (UTC)[reply]

@Someguy1221: See User:InternetArchiveBot --DannyS712 (talk) 06:35, 23 May 2019 (UTC)[reply]
I was already aware of that, but even after looking through the runpage, the manual, and even the source code, I couldn't find a way to make that bot actually mark reference templates with "dead-url=unfit"/"dead-url=usurped" or otherwise remove the url from reference templates. Someguy1221 (talk) 01:37, 24 May 2019 (UTC)[reply]

User:Someguy1221, interesting points. A bot could readily toggle |dead=unfit and add an archive link. But there is the question of non-CS1|2 links that have no archive available, or non-CS1|2 links that use {{webarchive}}. One solution: create a new template called {{unfit}}, extract the URL from the square brackets, and replace it with {{unfit}}. Examples:

Scenario 1:

Org: Author (2019). [http://trickydick.com "Title"]{{dead link}}, accessed 2019
New: Author (2019). {{unfit|url=http://trickydick.com|title="Title"}}, accessed 2019

..in the New case it would display the title without the hyperlink.

Scenario 2:

Org: Author (2019). [http://trickydick.com "Title"] {{webarchive|url=<archiveurl>}}, accessed 2019
New: Author (2019). {{unfit|url=http://trickydick.com|title="Title"}} {{webarchive|url=<archiveurl>}}, accessed 2019

..in the New case it would display the archive URL only.

There are many other scenarios like {{official}} / {{URL}}, bare links with no title or anything else (maybe these could disappear entirely from display), etc. -- GreenC 21:43, 26 May 2019 (UTC)[reply]
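The bracketed-link substitution in the scenarios above could be sketched as below. Note the {{unfit}} template is a proposal, not an existing template, and a real run would also need the CS1|2 and {{official}}/{{URL}} scenarios.

```python
import re

# Matches [http://... Title] style external links.
LINK_RE = re.compile(r"\[(https?://[^\s\]]+)\s+([^\]]+)\]")

def usurp(wikitext, domain):
    """Replace bracketed links on a usurped domain with the proposed {{unfit}}."""
    def repl(m):
        if domain in m.group(1):
            return "{{unfit|url=%s|title=%s}}" % (m.group(1), m.group(2))
        return m.group(0)  # links on other domains are untouched
    return LINK_RE.sub(repl, wikitext)
```

The domain argument is the operator-supplied usurped site, so links to other sites in the same article are left alone.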

Given the number of sites using unfit, it probably should be done with IABot since it can scale. It would have to get a nod of approval from User:Cyberpower678, though his time and attention may be limited due to other open projects. In the meantime I might add something to WP:WAYBACKMEDIC, which can take requests for bot runs at WP:URLREQ. It could be a proof of concept anyway. -- GreenC 22:09, 26 May 2019 (UTC)[reply]

If you compare the fan polls section on mobile vs. non-mobile, you will see that the table is missing on mobile. This is because the table uses "navbox wikitable" for the class, and nothing with class navbox appears on mobile :( In this particular case, the navbox class is basically superfluous. It would be amazing if we could change all the pages using navbox wikitable to use just wikitable instead, to avoid empty sections on mobile. There are probably more, but this is a start. Frietjes (talk) 16:00, 24 May 2019 (UTC)[reply]

Well, a thing that is different is how the example table renders with the navbox class versus without it:

[rendered comparison of an example table, columns Month, Winner, Other candidates; row June, Bob, Others]
So it's not just a matter of blindly removing "navbox", which makes it very likely to be a WP:CONTEXTBOT, so a WP:AWB run is likely best over a bot. Could be wrong though. Maybe every instance is easily replaceable (with e.g. centering styles instead). Headbomb {t · c · p · b} 17:26, 27 June 2019 (UTC)[reply]
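For reference, the mechanical replacement itself is a one-line regex; whether it is safe to apply unsupervised is exactly the WP:CONTEXTBOT question raised above, since some tables may rely on navbox styling.

```python
import re

# Drop the leading "navbox" from a class attribute that also has "wikitable".
NAVBOX_TABLE_RE = re.compile(r'(class\s*=\s*")\s*navbox\s+(wikitable[^"]*")')

def drop_navbox_class(wikitext):
    """Change class="navbox wikitable" to class="wikitable"."""
    return NAVBOX_TABLE_RE.sub(r"\1\2", wikitext)
```

In a supervised AWB run, each diff would still be eyeballed before saving.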

Update world WPA rankings for infobox

Hi, I'm enquiring about the possibility of having a bot that could generate world rankings from WPApool.com/rankings and generate and update Template:Infobox pool player/rankings. The {{Infobox pool player}} reads this file (as well as Template:Infobox pool player/Euro Tour rankings). One of the biggest issues is the formatting of the rankings, and that the names used aren't guaranteed to be the same as those of the articles (which is why there are so many name changes in that template).

Is there any way a bot could help out with this update process? Best Wishes, Lee Vilenski (talkcontribs) 19:06, 25 May 2019 (UTC)[reply]

New election article name format

Since earlier this year, a new naming convention for articles on elections, referendums, etc., has been established. Very many articles link to election articles, and after the page moves, very many articles are now linking to what are now redirects. This of course works fine, but I assume it's less economical on the servers when done at that scale, because it means one extra access.

Another thing is that the "What links here" function only displays (indented) the first 500 – or so it seems – of those articles linking to the redirects. In many cases, there are thousands of articles linking to the redirect, and thus all of these do not show. Fixing links in templates is one thing, but links that are placed in articles need to be fixed too.

All these links cannot be fixed automatically, because it may cause awkward wording and/or punctuation, but one thing that actually can be fixed is piped links, because editing those doesn't change wording or punctuation. So I'm suggesting the following changes be done by a suitable edit bot:

  • from the previous naming convention [[United Kingdom general election, nnnn| to the new one [[nnnn United Kingdom general election|, where nnnn = year of election (or month and year of election).

Note the pipe sign.

It's preferable if the bot can edit all occurrences in the same article, regardless of year, in a single edit.

This can of course be applied to other types of elections after this initial batch.

HandsomeFella (talk) 09:43, 3 June 2019 (UTC)[reply]
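The piped-link rewrite requested above could be sketched as a single regex substitution, shown here for UK general elections only as in the request; other election types would get their own patterns.

```python
import re

# Matches the old-style piped link "[[United Kingdom general election, 2010|"
# including "February 1974"-style month-plus-year elections.
OLD_RE = re.compile(r"\[\[United Kingdom general election, ((?:\w+ )?\d{4})\|")

def fix_links(wikitext):
    """Move the year (or month and year) to the front, keeping the pipe."""
    return OLD_RE.sub(r"[[\1 United Kingdom general election|", wikitext)
```

Because the substitution is global, all occurrences in an article are fixed in a single edit, as requested.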

@HandsomeFella: I'd be happy to do this, but is there consensus for such edits? See WP:NOTBROKEN --DannyS712 (talk) 09:51, 3 June 2019 (UTC)[reply]
I'm aware of WP:NOTBROKEN, but usually redirects do not have 500+ incoming links, resulting in most of them being "out of sight".
Where do you suggest I can get more input on this?
HandsomeFella (talk) 12:11, 3 June 2019 (UTC)[reply]
You can ask for input on the relevant WikiProject for elections. Side comment on WP:NOTBROKEN: none of the bullet points listed there is actually relevant to this scenario, where one piped link is being replaced with another, the former being from an older style. --Gonnym (talk) 11:15, 4 June 2019 (UTC)[reply]
I think the more-interesting item is WP:DWAP. --Izno (talk) 12:28, 3 June 2019 (UTC)[reply]
Wikipedia is regularly asking for donations. Also, as I said above, that is not the big problem, rather a bonus. The problem is that only 500 articles linking to a redirect are visible, despite there being thousands more.
HandsomeFella (talk) 12:48, 3 June 2019 (UTC)[reply]
If you need to see all links to the redirect, then go to the redirected page and you can view the "What links here" from there. (e.g. Special:WhatLinksHere/United_Kingdom_general_election,_2010) Spike 'em (talk) 13:46, 3 June 2019 (UTC)[reply]
I know that of course. But 1) that's a little backwards, and 2) people might not know that only 500 entries are shown. In fact, I didn't realize that myself until recently, when I counted the articles listed. I found that exactly 500 was too even a number to be a coincidence. You can't expect people – readers, not necessarily editors – to know that. I bet far from all editors know that.
HandsomeFella (talk) 14:47, 3 June 2019 (UTC)[reply]
You seem to be making a BOTREQ to deal with (possible) shortcomings in other areas of WP. If you think the display of "What links here" is wrong / confusing then you should take that up with whoever maintains that. Spike 'em (talk) 10:35, 4 June 2019 (UTC)[reply]
On a related point, is running a BOT to fix MOS:NOPIPE failures appropriate? Using the 2010 UK election as an example again, I've found some occurrences of [[United Kingdom general election, 2010|2010 United Kingdom general election]] and [[United Kingdom general election, 2010|2010 UK general election]] which I think are valid to fix? Spike 'em (talk) 12:55, 4 June 2019 (UTC)[reply]
At least the first one should be uncontroversial. HandsomeFella (talk) 21:34, 5 June 2019 (UTC)[reply]
But MOS:NOPIPE is talking about distinct sub-topics that are redirected to a parent article, as a way of potentially demonstrating that an article on the sub-topic is needed. There's no way we would ever want to have distinct articles about United Kingdom general election, 2010 and 2010 United Kingdom general election; one should always be a redirect. Nyttend (talk) 19:51, 15 June 2019 (UTC)[reply]
Ah, there may be a point I missed in MOS:NOPIPE, as it does mention using a redirected term directly rather than a piped link. What is happening here is a piped link of the form [[redirect|target]], which seems to go against the similarly named WP:NOPIPE, which says to keep links as short as possible. Spike 'em (talk) 09:27, 19 June 2019 (UTC)[reply]

Birth date and age

About 762 articles which use {{Infobox person}} or equivalents contain wikitext such as |birth_date=1 May 1970 (age 49). (Search) Some ages are wrong; others may become wrong on the subject's next birthday. Would it be a good idea for a bot to convert this to |birth_date={{Birth date and age|1970|05|01}}, both as a one-off run and on a regular basis for new occurrences? It may also be useful to produce an error report of alleged dates that the bot can't decipher. Ideally, the code should be flexible enough to add similar tasks later. For example, we might want to extend it to {{Infobox software}} with |released={{Start date and age|...}}, though I think that particular case would catch only Adobe Flash Player. Certes (talk) 01:10, 4 June 2019 (UTC)[reply]
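The proposed conversion could be sketched as below for the DMY format shown in the request; the other formats noted in this thread (MDY, "aged", br tags) would need their own patterns, and unparseable values fall through unchanged so they can go into the error report.

```python
import re
from datetime import datetime

# "1 May 1970 (age 49)" -- the DMY-plus-age format from the request.
DATE_RE = re.compile(r"(\d{1,2} \w+ \d{4})\s*\(\s*aged?\s*\d+[^)]*\)")

def convert(value):
    """Rewrite a plain date-with-age as {{Birth date and age}}."""
    m = DATE_RE.search(value)
    if not m:
        return value  # leave unparseable values for the error report
    d = datetime.strptime(m.group(1), "%d %B %Y")
    return "{{Birth date and age|%04d|%02d|%02d}}" % (d.year, d.month, d.day)
```

The template then recomputes the age on every page render, so it can never go stale.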

@Certes:  Doing... with AWB, at least to test it out DannyS712 (talk) 01:27, 4 June 2019 (UTC)[reply]
@Certes: done a few hundred, but the remaining ones are in a different format DannyS712 (talk) 01:57, 4 June 2019 (UTC)[reply]
DannyS712, nice work! We're down to 430 search results. If you could tweak your AWB patterns a bit, you could probably catch quite a few more. I'm seeing the following common formats:
"22 August 1988(age 30)" (no space); "1961 (age 57-58)" (year only; use {{Birth year and age}}); "February 8, 1962 (aged 56)" (note MDY format and "aged" instead of "age"); "July 30, 1948<br> (age 70)" (br tag; some are closed with a slash). If you were able to fix these, I suspect that we'd be left with about 50 to clean up manually. – Jonesey95 (talk) 07:27, 4 June 2019 (UTC)[reply]
Thanks Danny and Jonesey. I did a similar exercise with JWB a few years ago, so these cases have accumulated since then at about one per day. I was wondering whether it's worth doing with a bot on a regular basis. I also remember finding a few with "aged", br tags and similar clutter. There are also various date formats to parse, but I hope there's a standard library function for that somewhere. Certes (talk) 09:25, 4 June 2019 (UTC)[reply]
@Certes: I'll do the rest of this batch in the next week, and for next time probably file a brfa DannyS712 (talk) 15:22, 4 June 2019 (UTC)[reply]
@Certes and DannyS712: Thank you for your work on this. I see it too frequently. МандичкаYO 😜 22:53, 4 July 2019 (UTC)[reply]

Moscow Metro station article location map

I just created Module:Location map/data/Moscow Metro to replace Module:Location map/data/Russia Moscow Ring Road, because the Moscow Metro system has expanded beyond the boundary of the latter map. There are over 100 Moscow Metro station articles that need to be updated this way:

  • from:
{{Infobox station
...
|map_type      = Moscow Ring Road
|AlternativeMap= Moscow map MKAD grayscale.png
|map_overlay   = Moscow map MKAD metro line.svg
...
}}

to:

{{Infobox station
...
|map_type      = Moscow Metro
...
}}

-- Sameboat - 同舟 (talk · contri.) 04:55, 7 June 2019 (UTC)[reply]
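For whoever takes this on, the conversion is mechanical enough for a simple text transform. A sketch, assuming the parameters appear one per line as in the example above; real articles would of course need the usual bot dry run first:

```python
import re

def convert_moscow_map(wikitext):
    """Switch an {{Infobox station}} from the Ring Road map to Moscow Metro.

    Only touches articles that use the old map; everything else is
    returned unchanged. Parameter alignment/whitespace is preserved.
    """
    if not re.search(r"\|\s*map_type\s*=\s*Moscow Ring Road", wikitext):
        return wikitext
    # Point map_type at the new location-map module.
    wikitext = re.sub(
        r"(\|\s*map_type\s*=\s*)Moscow Ring Road",
        r"\g<1>Moscow Metro",
        wikitext,
    )
    # Drop the now-redundant AlternativeMap and map_overlay parameters.
    wikitext = re.sub(
        r"\n\|\s*(?:AlternativeMap|map_overlay)\s*=[^\n|]*(?=\n)",
        "",
        wikitext,
    )
    return wikitext
```

Articles on the Central Moscow map fail the first check and pass through untouched, matching the scope described above.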

@Sameboat: I am not volunteering at the moment, but I would help change some articles manually to show an example. Also, please clarify which articles (i.e. all in Category:Moscow Metro stations?) -- GreenC 05:07, 25 June 2019 (UTC)[reply]
@GreenC: Nekrasovka (Moscow Metro) is an article I converted manually. Not all Moscow Metro station articles need to be converted, only those using the "Moscow Ring Road" location map with an alternative map and overlay; others using the Central Moscow location map, like Kuznetsky Most (Moscow Metro), can remain intact. -- Sameboat - 同舟 (talk · contri.) 05:35, 25 June 2019 (UTC)[reply]

Take over part of User:RonBot

User:Ronhjones disappeared a while back, and the bot hasn't run since. If someone could take over RonBot 10, that would be great. The code is available at User:RonBot/10/Source1, User:RonBot/10/Source2, and User:RonBot/10/Source3; however, I believe only the last two are relevant.

The main idea is that the bot sorts and detects unnecessary/duplicate entries in WP:CITEWATCH/SETUP and WP:JCW/EXCLUDE. Headbomb {t · c · p · b} 00:23, 10 June 2019 (UTC)[reply]

@TheSandDoctor: the code might be in need of minor updates, see this (stuff happening after March 8) and this. But it would still be useful as is if you don't have time to do an update. Headbomb {t · c · p · b} 21:21, 15 June 2019 (UTC)[reply]
Events unplanned came up. Planning to do this this coming week Headbomb. --TheSandDoctor Talk 05:57, 17 June 2019 (UTC)[reply]
No rush. It's a convenient task, but not a critical one. Headbomb {t · c · p · b} 15:17, 17 June 2019 (UTC)[reply]
@TheSandDoctor: any update on this? Headbomb {t · c · p · b} 04:22, 23 June 2019 (UTC)[reply]
Thanks for the prod. Filed, Headbomb. --TheSandDoctor Talk 05:08, 23 June 2019 (UTC)[reply]

Bot to notify draft authors of possible G13 deletion

Hey folks, is anyone interested in putting together a bot to message draft authors whose drafts will soon be eligible for G13? IME there are a good number of editors who create drafts, forget about them, but decide they want them when reminded. I'm speculating that automatic reminders (e.g. 1 week before G13-eligibility) would help cut down the number of unnecessary WP:REFUND/G13 requests. -FASTILY 08:39, 10 June 2019 (UTC)[reply]
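The timing logic is straightforward once a bot has each draft's last-edit timestamp (from the API or a database query, which is not shown here). A sketch of the due-date check, with G13's six months approximated as 180 days and a placeholder message whose real wording would be agreed on-wiki:

```python
from datetime import datetime, timedelta

G13_IDLE = timedelta(days=180)     # CSD G13: ~six months unedited
REMINDER_LEAD = timedelta(days=7)  # notify one week beforehand

def reminder_due(last_edit, now):
    """True if the draft's author should be warned now: inside the
    one-week window before the draft becomes G13-eligible, but not
    after it is already eligible."""
    eligible_at = last_edit + G13_IDLE
    return eligible_at - REMINDER_LEAD <= now < eligible_at

def reminder_message(draft_title):
    # Hypothetical wording; the real text would be worked out on-wiki.
    return (
        "Your draft [[%s]] has not been edited in almost six months and "
        "will soon be eligible for deletion under [[WP:G13]]. Editing it "
        "will reset the clock. ~~~~" % draft_title
    )
```

Tracking which authors were already notified (to avoid repeat messages) would also be needed, as HasteurBot did.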

I assume this is to take over HasteurBot's task? Primefac (talk) 15:20, 10 June 2019 (UTC)[reply]
Yes, partially. HasteurBot's task was wider in scope, in that it also nominated drafts for deletion. I'm just interested in the notifications bit. -FASTILY 20:58, 10 June 2019 (UTC)[reply]

One-off task: Historic graph of prosesize of all FA articles

Ideally prosesize at time of promotion, but since FAs seldom change much, could be current size. Perhaps use code from Drpda's prosesize javascript (?). I haven't thought through whether or not it would be worthwhile to get info for both demoted and current FAs. Perhaps it would... Either way, graph from the beginning of time, showing trends in article size (mean, median, etc.). You'd need to know the date of promotion, which could either be scraped off that article's talk page (article milestones), or perhaps off whatever pages WP:WBFAN gets its info from. ♦ Lingzhi2 (talk) 13:31, 11 June 2019 (UTC)[reply]
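The measurement itself is easy to approximate offline: fetch the rendered HTML of the promoted revision (action=parse with the oldid scraped from the article milestones) and count only the text inside <p> elements, which is roughly what the prosesize script does. A simplified sketch that skips the script's refinements, such as excluding inline citations:

```python
from html.parser import HTMLParser

class ProseCounter(HTMLParser):
    """Count characters of text that sit inside <p> elements,
    ignoring tables, infoboxes, reference lists, etc."""
    def __init__(self):
        super().__init__()
        self.depth = 0   # how many <p> elements we are inside
        self.chars = 0
    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.depth += 1
    def handle_endtag(self, tag):
        if tag == "p" and self.depth:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth:
            self.chars += len(data)

def prose_size(html):
    counter = ProseCounter()
    counter.feed(html)
    return counter.chars
```

Running this over each FA's promotion-date revision would give the data points for the historic graph.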

AnomieBOT for converting WikiProjects to taskforces of WP:MOLBIO

Similar to request Wikipedia:Bot_requests/Archive_22#Change_from_WikiProject_Neurology_to_task_force

Hi there. I'm converting WP:MCB, WP:GEN, WP:BIOP, WP:COMBIO, WP:CELLSIG, and WP:WikiProject_RNA into task forces of a centralised WP:WikiProject Molecular Biology (see this discussion, and page move requests); however, all of the articles under these projects now need their talk page banners replaced with one that classifies them under a task force of WP:MOLBIO. Is it possible to edit all the pages in the relevant categories to replace {{WikiProject XYZ|class=|importance=}} with {{WikiProject Molecular Biology|class=|importance=|XYZ=yes}}, keeping the class and importance already on the talk page, and merging into a single template where a page is tagged with the banners of multiple task forces of WP:WikiProject Molecular Biology?

Thank you in advance for any assistance! T.Shafee(Evo&Evo)talk 11:42, 15 June 2019 (UTC)[reply]
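For what it's worth, the merge can also be expressed as a plain wikitext transform. A sketch; the task-force flag names in the mapping are illustrative and would need to be confirmed against the new banner's actual parameters:

```python
import re

# Old banner name -> task-force flag on the new banner. The flag
# names here are illustrative, not confirmed parameter names.
TASKFORCES = {
    "WikiProject Molecular and Cell Biology": "MCB",
    "WikiProject Genetics": "genetics",
    "WikiProject Biophysics": "biophysics",
    "WikiProject Computational Biology": "COMBIO",
    "WikiProject Cell Signaling": "signaling",
    "WikiProject RNA": "RNA",
}

def merge_banners(talk_wikitext):
    """Fold the old project banners into one merged banner, keeping
    the first class/importance found and one flag per task force."""
    flags, classes, importances = [], [], []

    def grab(m):
        name, params = m.group(1), m.group(2) or ""
        flags.append(TASKFORCES[name] + "=yes")
        for key, store in (("class", classes), ("importance", importances)):
            pm = re.search(r"\|\s*%s\s*=\s*([^|}]*)" % key, params)
            if pm and pm.group(1).strip():
                store.append(pm.group(1).strip())
        return ""  # old banner dropped; merged banner prepended below

    names = "|".join(re.escape(n) for n in TASKFORCES)
    rest = re.sub(r"\{\{\s*(%s)\s*((?:\|[^{}]*)?)\}\}\n?" % names,
                  grab, talk_wikitext)
    if not flags:
        return talk_wikitext
    merged = "{{WikiProject Molecular Biology|class=%s|importance=%s|%s}}" % (
        classes[0] if classes else "",
        importances[0] if importances else "",
        "|".join(flags),
    )
    return merged + "\n" + rest
```

Where the old banners disagree on class or importance, this keeps the first one found; a real run would probably flag disagreements for review instead.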

Evo, yes, it should be possible. All that is required is to convert each of the "old" banners into wrappers for the "new" banner, and then add {{subst only|auto=yes}} to the documentation. Primefac (talk) 11:51, 15 June 2019 (UTC)[reply]
@Primefac: Brilliant! Let me know if there's any additional info you'd need from me. There's also a comment here about setting up WP 1.0 bot for a {{WikiProject Molecular Biology}} template that I might need some help with. Thanks again, T.Shafee(Evo&Evo)talk 12:49, 16 June 2019 (UTC)[reply]

Civil parish bot

I am requesting a bot to create missing civil parishes in England; see User:Crouch, Swale/Bot tasks/Civil parishes (current) for the complete instructions/ideas and User:Crouch, Swale/Civil parishes for many of the missing CPs. Responses to the common objections to bot-created articles are in the "Process" section. I would at minimum include the location and the number of people in the parish, but many other suggestions are there (in particular in the "Other ideas" section). As noted, however, Nomis combines smaller parishes into larger ones and would thus likely be unsuitable; City Population would be better, but it simply doesn't have data at all for small parishes (example), so those could either be left out or created without the population data (and I would add the most recent data from Vision of Britain). I notified Wikipedia talk:WikiProject England#Bot created articles, Wikipedia talk:WikiProject UK geography#Bot created articles and Wikipedia talk:WikiProject Scotland#Bot created articles, and although there was a question, it concerned listed buildings, not parishes.

I intend to have the articles created at something like 6 a day (so it's a manageable amount) that I can check manually (and possibly improve), especially if checking is required. At that rate it would take about 5 months for this to be done. I don't know enough about how to code a bot, but I can give my instructions to a bot operator.
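If a bot operator does pick this up, the article text itself is a simple fill-in from the data. A sketch with illustrative field names (not the actual column headings of Nomis, City Population, or Vision of Britain), showing how the population sentence is simply omitted when the source has no figure:

```python
def parish_stub(row):
    """Render a minimal stub from one data row; field names here are
    illustrative, not the actual column headings of the source data."""
    text = (
        "'''%(name)s''' is a [[civil parishes in England|civil parish]] "
        "in the district of %(district)s, %(county)s, England." % row
    )
    if row.get("population"):
        # Small parishes with no published figure just skip this line.
        text += (" In the 2011 census it had a population of "
                 "%(population)s." % row)
    text += "\n\n{{England-geo-stub}}"
    return text
```

Each day's batch of 6 rendered stubs could then be written out for the manual check described above before anything is saved.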

I intend to start this in about a month, because I currently have a page move/page creation ban, and that would make creating DAB pages and name fixes more difficult; but even if my appeal fails, I still intend to go ahead with this. Crouch, Swale (talk) 17:12, 18 June 2019 (UTC)[reply]

page creation ban You probably should not be requesting a page creation bot then. --Izno (talk) 19:31, 18 June 2019 (UTC)[reply]
That's not a problem for pages created by someone else as a result of consensus; see [1] (SilkTork was one of the users who participated in the previous appeal). In any case, it's quite possible that that restriction will be removed anyway. Crouch, Swale (talk) 19:34, 18 June 2019 (UTC)[reply]
This would basically be WP:PROXYING. Wait for your appeal to be up and then come back. Even so, you will probably need to have a consensus reached at e.g. WP:VPPRO for a bot to create any articles. --Izno (talk) 20:00, 18 June 2019 (UTC)[reply]
The point is to get consensus first, so that even if I can't manually create articles, this remains an option. Crouch, Swale (talk) 20:23, 18 June 2019 (UTC)[reply]
The example at User:Crouch, Swale/Bot tasks/Civil parishes (current) looks very much like a stub; if all of the proposed articles will be similar in structure and content, then WP:FDB#Bots to create massive lists of stubs applies. Otherwise, WP:MASSCREATION. --Redrose64 🌹 (talk) 21:06, 18 June 2019 (UTC)[reply]
The second half of WP:FDB#Bots to create massive lists of stubs says "exceptions do exist, provided the database contains high-quality/reliable data, that individual entries are considered notable, and that the amount of stubs created can be reasonably reviewed by human editors. If you think your idea qualifies, run it by a WikiProject first". I believe that both of those points have been met. WP:MASSCREATION says "Any large-scale automated or semi-automated content page creation task must be approved at Wikipedia:Bots/Requests for approval", which this discussion is the start of, but it also says "While no specific definition of "large-scale" was decided, a suggestion of "anything more than 25 or 50" was not opposed" and "Alternatives to simply creating mass quantities of content pages include creating the pages in small batches". In this case, creating batches of 6 a day clearly falls below this. Crouch, Swale (talk) 07:32, 19 June 2019 (UTC)[reply]
None of the project pages you listed above have consensus that this is a good idea. This does look like you are trying to circumvent your article creation restrictions, as 6 articles per day is a lot more than 1 per week. Spike 'em (talk) 08:32, 19 June 2019 (UTC)[reply]
None of the projects raised concerns about CPs (after more than 2 weeks). I have discussed this with one of the users who participated in the previous appeal, who said "I'm quite happy for Crouch, Swale to present their ideas to others such as Begoon and Iridescent." The fact that I have disclosed it here likely means that it's not proxying; had I not mentioned it, it could have been. Crouch, Swale (talk) 16:39, 19 June 2019 (UTC)[reply]
There seems to be 1 (one) supportive comment across the 3 projects, and even that expresses concerns: I think the idea is good in principle. It might need a fair amount of tidying up though. I'd expect more than this to show consensus that it is a good idea. Spike 'em (talk) 09:28, 20 June 2019 (UTC)[reply]
But there are no opposers (to the CP proposal), and WP:MASSCREATION says "While no specific definition of "large-scale" was decided, a suggestion of "anything more than 25 or 50" was not opposed" and "Alternatives to simply creating mass quantities of content pages include creating the pages in small batches or creating the content pages as subpages of a relevant WikiProject to be individually moved to public facing space after each has been reviewed by human editors". Also, while I get the impression that the 1-article-a-week limit was to encourage me to create longer articles instead of many short ones, the main reason seemed to be to prevent me from clogging up AFC. Crouch, Swale (talk) 18:53, 24 June 2019 (UTC)[reply]
An alternative (that I pointed out to SilkTork) if this fails is to have the bot create them in draftspace. Crouch, Swale (talk) 19:32, 24 June 2019 (UTC)[reply]
Comments such as I'm very worried that it sounds like the desired "endgame" of this editor appears to be the rapid creation of about 1,000 articles in a narrow topic area and there is the possibility that you are just biding your time in order to unleash hundreds of civil parish stubs on Wikipedia which will need to be examined by someone to check if they are worthwhile in your last ARCA appeal seem to have plenty of merit. WP:FDB#Bots to create massive lists of stubs also states If you think your idea qualifies, run it by a WikiProject first (...) to gain consensus for the idea; lack of replies is not consensus. Spike 'em (talk) 20:53, 24 June 2019 (UTC)[reply]
Creating around 725 articles over a time frame of 5–6 months isn't that rapid, and as noted, that part of MASSCREATION generally applies to 25–50 pages at a time, not smaller batches (6 at a time), even if we assume that the lack of replies doesn't constitute consensus. Crouch, Swale (talk) 08:12, 25 June 2019 (UTC)[reply]

MASSCREATION says nothing about a daily limit, so I'd take it as meaning 25 to 50 total pages; 725 is clearly bigger than this.

Get some clear consensus that this is a good idea and your article creation rights sorted as you have been asked above. Spike 'em (talk) 16:28, 25 June 2019 (UTC)[reply]

Yes, it doesn't say anything about a daily limit, but it does explicitly refer to "smaller batches". If I am checking each batch every day (or so), then that doesn't violate the letter or the spirit of the guideline. As far as I'm aware, the guideline exists to prevent hundreds of articles being created that might contain errors or not meet the notability guidelines. As I will be checking them, I'll notice any errors, and other editors will likely do so too. But yes, getting clearer consensus for this and clarity on my restrictions would be helpful. Crouch, Swale (talk) 18:50, 25 June 2019 (UTC)[reply]
"Smaller batches" is a way of having a better chance at getting support when things could potentially be contentious. It does not negate the need for prior consensus before creation, but it might make consensus easier to get. Compare going "I want to create 1000 articles tomorrow!" with going "Hey, how about we have a bot create 10 articles as drafts as subpages of WP:PLANTS, see what the feedback is on them, whether they need more work, etc., so the next 10 are easier to handle... and then we'll see if we get to a point where we're comfortable having the remaining articles created directly in article space", or similar.
Note the "it might". People may decide this is too close to violating a page creation ban for comfort. Or maybe they'd be open to such a bot creating articles in project space if someone other than you reviews each article before it is moved into mainspace. Or maybe people would be comfortable with the task as proposed. Headbomb {t · c · p · b} 01:36, 26 June 2019 (UTC)[reply]
Please note, this BOTREQ is mentioned on Crouch, Swale's restrictions appeal. Spike 'em (talk) 08:52, 3 July 2019 (UTC)[reply]
Gave a reply on behalf of BAG there. Other BAG members are free to chime in. Headbomb {t · c · p · b} 09:56, 3 July 2019 (UTC)[reply]

Automatically Update IUCN Statuses

Good afternoon. Does anyone have a program that can carry out the menial task of updating IUCN statuses? All of these can be retrieved from the IUCN's website, and finding the information is easy, just tedious. Of course this program would have to fetch some external information, such as the current version and the species ID number, but none of these are things that require too much expertise. — Preceding unsigned comment added by AidenD (talkcontribs) 01:46, 19 June 2019 (UTC)[reply]
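The Red List does have a public API (v3 at the time of writing) that returns the category code for a species, so the fetch side is scriptable. A sketch of the request construction and response handling, based on the documented endpoint shape, which should be verified against the current API before use; the network call itself is left to the caller:

```python
import urllib.parse

API = "https://apiv3.iucnredlist.org/api/v3/species/"

# Red List category codes usable in taxon infobox |status= fields.
CATEGORIES = {"LC", "NT", "VU", "EN", "CR", "EW", "EX", "DD"}

def species_url(name, token):
    """Build the v3 Red List API request for one species. The endpoint
    shape is from the API docs as of 2019; verify before relying on it."""
    return API + urllib.parse.quote(name) + "?token=" + token

def extract_status(payload):
    """Pull the category code out of a decoded JSON response, or None
    if the species wasn't found or the code is unrecognised."""
    result = payload.get("result") or []
    code = result[0].get("category") if result else None
    return code if code in CATEGORIES else None
```

A bot would then compare the extracted code against the article's current |status= value and only edit on a mismatch.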

Updating WikiProject Missing encyclopedic articles lists

Hello,

WikiProject Missing encyclopedic articles maintenance has been very inactive, with several categories not getting updated. I am currently working on updating the Wikipedia:WikiProject Missing encyclopedic articles/List of US Newspapers area, but there are several other areas that have not been touched in years. If you take a look at the lists of missing articles, you will see blue links and areas that are not updated. Wikipedia:WikiProject Missing encyclopedic articles/Progress is also rarely updated.

I propose a bot that could update all of these missing-article lists. Most of the areas are sorted by state, and updating 50 different sections per category is a ton of work. A bot would keep the project fresher. In addition, the Progress page I linked above could also be updated.

Wikipedia:WikiProject Missing encyclopedic articles is losing people and we need a way to jumpstart the project.

Thank you AmericanAir88(talk) 16:57, 27 June 2019 (UTC)[reply]

@AmericanAir88: I could probably do this (A slightly similar task at Wikipedia:Bots/Requests for approval/DannyS712 bot 18) but what specifically are you looking for in terms of edits? Can you link to some diffs? --DannyS712 (talk) 17:51, 27 June 2019 (UTC)[reply]
@DannyS712: Thank you very much. Here are some edits:
  • [2] Me updating the Maryland section of missing newspapers by deleting blue links
  • [3] Me updating the grand total of missing newspapers after updating the Maryland section.
  • [4] Me updating the statistics list. AmericanAir88(talk) 18:11, 27 June 2019 (UTC)[reply]
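The core of the first edit type is deciding which entries to drop. A sketch of that filtering step, where the existence and redirect sets would be filled from a batched action=query&prop=info request (with disambiguation pages filtered out, as discussed):

```python
def split_list(entries, existing, redirects):
    """Partition missing-article list entries.

    `existing` is the set of titles that now exist and `redirects`
    the subset that are merely redirects; both would be filled from
    the MediaWiki API, which is not shown here. Redirect targets are
    kept on the list, since the topic may or may not actually be
    covered by the article redirected to.
    """
    keep, done = [], []
    for title in entries:
        if title in existing and title not in redirects:
            done.append(title)  # genuine article: remove from the list
        else:
            keep.append(title)
    return keep, done
```

The counts of `done` entries per section would also feed the grand-total and statistics updates in the second and third diffs.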
    @AmericanAir88: Okay. The first edit you linked to should be fairly easy to automate, and I'll submit a BRFA ideally in the next few hours. The second could be automated to use a module, e.g. the conversion I made in Special:Diff/891799155. I can try to make/find one for you if you want, or I can try to code that too. The last part is the hardest - would it still be helpful to only do part 1 (or only 1 and 2)? --DannyS712 (talk) 18:19, 27 June 2019 (UTC)[reply]
@DannyS712: Any help would be great. Thank you. AmericanAir88(talk) 18:40, 27 June 2019 (UTC)[reply]

One of the issues here is to avoid false positives. Simply having a blue link does not mean there is an article about the subject. It could be a redirect, or a different subject with the same name. If it is a redirect, the topic may or may not be sufficiently well covered. All the best: Rich Farmbrough, 10:55, 28 June 2019 (UTC).[reply]

thanks for the note, already accounted for (also filtering out disambiguation pages, just in case) --DannyS712 (talk) 20:18, 28 June 2019 (UTC)[reply]

"Validation of the new Hipparcos reduction"

This publication is cited in some 2600 Wikipedia articles but the individual citations are of uneven quality. A template holding the citation exists, {{R:Van Leeuwen 2007 Validation of the new Hipparcos reduction}} (possibly badly categorized at the moment), and could be substituted for the bit between the <ref> tags in the aforementioned 2600+ articles. (Or could Wikidata be leveraged instead?). Urhixidur (talk) 19:55, 28 June 2019 (UTC)[reply]
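If this goes ahead, the tricky part is recognising the citation in its many hand-written forms. A conservative sketch that only replaces a <ref> when it names van Leeuwen and also contains the paper's title, bibcode, or DOI (the identifiers shown should be double-checked against the paper's ADS record), leaving anything borderline for manual review:

```python
import re

TEMPLATE = "{{R:Van Leeuwen 2007 Validation of the new Hipparcos reduction}}"

# Only replace a <ref> that both names van Leeuwen and carries the
# paper's title, bibcode or DOI; anything less certain is left alone.
# [^<] keeps each match confined to a single ref element.
REF_RE = re.compile(
    r"<ref([^>/]*)>(?=[^<]*[Vv]an Leeuwen)[^<]*"
    r"(?:Validation of the new Hipparcos reduction"
    r"|2007A&A\.\.\.474\.\.653V"
    r"|10\.1051/0004-6361:20078357)"
    r"[^<]*</ref>"
)

def substitute_citation(wikitext):
    # \1 preserves any name="..." attribute on the ref.
    return REF_RE.sub(r"<ref\1>%s</ref>" % TEMPLATE, wikitext)
```

Whether to substitute the template's expansion or leave the transclusion would follow from the discussion above about templated citations.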

Templated citations kinda suck for a bunch of reasons. Good to have it as a guide, but include the full citation in the wikitext. IMO -- GreenC 20:03, 28 June 2019 (UTC)[reply]
Make it a subst: template that a bot would regularly use to fix new occurrences? Urhixidur (talk) 20:07, 28 June 2019 (UTC)[reply]
See this discussion for why the Hipparcos reduction does not have a template. Cross-posting a notice of this discussion at WT:AST might be appropriate. Primefac (talk) 12:24, 29 June 2019 (UTC)[reply]
How about substituting {{cite Q|Q28315126}} for it? Urhixidur (talk) 21:12, 3 July 2019 (UTC)[reply]

replace bad apostrophe

I frequently come across an accent mark used as an apostrophe and it drives me bonkers. The ´ accent does have uses in linguistics articles but it should never be used as an apostrophe. I'm guessing it comes from copy-pasting.

Unfortunately the Wikipedia Search tool does not work to find it, and a general search on Google also doesn't work (´s site:en.wikipedia.org). Would it be possible to create a bot that is sensitive to this character and eradicates it when it's used as a possessive or contraction? I also see it with an extra space. For example:

  • King´s → King's
  • King´ s → King's

This isn't just a cosmetic thing. The ´ is not a real apostrophe and screenreaders probably won't read it that way. Thanks. МандичкаYO 😜 22:51, 4 July 2019 (UTC)[reply]
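Pattern-wise, the fix can be kept narrow enough to dodge most CONTEXTBOT concerns by requiring a word character before the accent and a contraction or possessive ending after it, though edge cases such as plural possessives (Kings´) would still need human review. A sketch:

```python
import re

# U+00B4 ACUTE ACCENT posing as an apostrophe: a letter, the accent,
# an optional stray space, then a possessive/contraction ending.
# Deliberately narrow, per the WP:CONTEXTBOT concern: linguistics
# usage such as "the ´ symbol" has no word character before it.
BAD_APOSTROPHE = re.compile(r"(?<=\w)\u00b4 ?(?=(?:s|t|re|ve|ll|d)\b)")

def fix_apostrophes(text):
    return BAD_APOSTROPHE.sub("'", text)
```

Even with the narrow pattern, a supervised AWB-style run is probably safer than a fully automatic bot here.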

Seems very much of a WP:CONTEXTBOT Headbomb {t · c · p · b} 22:59, 4 July 2019 (UTC)[reply]

Bot to remove school mapframe **at a later date**

There is some code that I found that can add a map to the schools infobox: | module = {{Infobox mapframe | stroke-color = #C60C30 | stroke-width = 3 | marker = school | marker-color = #1F2F57 | zoom = 13}} }}

I liked what this code could do, so I started adding it to some school infoboxes. User:Steven (Editor) told me that there were plans to replace this code with built-in functionality in the schools infobox itself, which would make the code unnecessary. He asked me to hold off on adding the code so there would be fewer instances to remove once the built-in functionality is installed. I do not know when that installation is expected to occur.

I want to explore whether a bot can be used to remove the lines of code I posted here. If a bot can remove this automatically, I can add the code without fear of having to remove it all myself later once the built-in functionality is ready.

Thanks, WhisperToMe (talk) 20:48, 5 July 2019 (UTC)[reply]
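Since the snippet is self-contained, stripping it back out later is a one-line regex job, which suggests the cleanup bot is feasible. A sketch, assuming the code was pasted as a single |module= parameter as shown above:

```python
import re

# Matches the module block quoted above (whitespace-tolerant).
# Assumes the whole {{Infobox mapframe}} call sits on one |module=
# parameter with no nested braces, as in the pasted example.
MAPFRAME_RE = re.compile(r"\n?\|\s*module\s*=\s*\{\{Infobox mapframe[^{}]*\}\}")

def remove_mapframe(infobox_wikitext):
    return MAPFRAME_RE.sub("", infobox_wikitext)
```

Articles where the parameter was reformatted by hand would not match and would surface on a leftovers report instead.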

Bot to update economic statistics

I plan on making a bot that updates statistics at regular intervals, for example by updating the latest GDP or inflation numbers on the article Economy of the United States. Numbers will initially be retrieved from the St. Louis Fed's FRED API. Other sources of data could be added later.

I envision the typical flow will be:

  1. Retrieve a list of all pages using the associated template.
  2. Parse the pages and retrieve series identification and other relevant information from the templates. For example A191RL1Q225SBEA for quarterly US GDP from FRED.
  3. Retrieve the latest series values through APIs.
  4. Replace the old value with the latest value.
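The steps above can be sketched around FRED's observations endpoint (per the FRED API docs; parameters should be verified before use). Steps 1 and 2 would use the template transclusion list and a wikitext parser and aren't shown; this covers step 3, with the HTTP fetch itself left to the caller:

```python
import urllib.parse

FRED = "https://api.stlouisfed.org/fred/series/observations"

def observations_url(series_id, api_key):
    """Build a request for the most recent observation of one series
    (endpoint and parameters per the FRED API docs; verify before use)."""
    query = urllib.parse.urlencode({
        "series_id": series_id, "api_key": api_key,
        "file_type": "json", "sort_order": "desc", "limit": 1,
    })
    return FRED + "?" + query

def latest_value(payload):
    """Step 3: pull the newest (date, value) pair out of the decoded
    JSON response, or None if the series returned no observations."""
    obs = payload.get("observations") or []
    return (obs[0]["date"], obs[0]["value"]) if obs else None
```

Step 4 would then compare the fetched value against the template parameter and edit only when it has actually changed, which also keeps the bot's edit rate low.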

I started a similar project several years ago but never followed through. Please let me know if you have any thoughts or suggestions.--Bkwillwm (talk) 03:26, 6 July 2019 (UTC)[reply]