
Wikipedia:Bot requests: Difference between revisions

From Wikipedia, the free encyclopedia
:I'm surprised to see one mention placing something "immediately after the fact" as meaning to jump in front of proper grammar by truncating the punctuation of the sentence structure, i.e. before the fact is finished. I think policy needs to be reviewed in this instance! The latter form I gave is clearly superior in looks and logical structure. [[Special:Contributions/67.5.147.10|67.5.147.10]] ([[User talk:67.5.147.10|talk]]) 10:23, 3 May 2008 (UTC)


== UN/LOCODE redirects - add values to infoboxes ==

Revision as of 10:23, 3 May 2008

This is a page for requesting work to be done by bots per the bot policy. This is an appropriate place to simply put ideas for bots. If you need a piece of software written for a specific article you may get a faster response time at the computer help desk. You might also check Wikipedia:Bot policy to see if the bot you are looking for already exists. There are also quite a few "frequently denied requests", for various reasons, such as a welcoming bot, as it would de-humanize the process, and an anti-vandalism bot, as several already exist. If you want to request a bot to populate a category for a wikiproject, please create a full list of categories to be used, as most bot operators who can complete this task will not go into all subcategories, as some members may be irrelevant to your project. Also note that if you are requesting that an operator change or add a function to an existing bot, you should ask on that editor's talkpage.

If you have a question about a certain bot, it should be directed to the bot owner's talk page or to the Bot Owners' Noticeboard. If a bot is acting improperly, a note about that should be posted to the owner's talk page, to the Administrators' Noticeboard, or to AIV, depending on severity (ongoing vandalism to AIV). A link to such a posting may be posted at the Bot Owners' Noticeboard.

Please add your bot requests to the bottom of this page.

If you are a bot operator and you complete a request, note what you did, and archive it. Requests that are no longer relevant should also be archived in a timely fashion.

See also: Wikipedia:Bot policy and Wikipedia:Bots/Frequently denied bots, to make sure your idea is not listed.

I'm not just talking about auto-archived pages; I would love to be able to run a bot on my userpage that would check to make sure section links still work, and if they don't, make an intelligent guess at the name of the most recent archive, look for the section link there, and update my userpage to reflect the change. Has anyone written a bot to do this yet?

And while I'm asking, does anyone know of a tool that helps in creating wikilinks? What I have in mind is something that would create the link [[Wikipedia:Bot requests#Does anyone know of a bot that fixes links to sections as the sections get archived?]] on this page, by reading the title of the editing box and the section title at the top of this edit screen, or better yet, simply copy it to my clipboard, so that I don't have to. - Dan Dank55 (talk) 16:00, 20 April 2008 (UTC)[reply]
AnchorLinkBot used to update section links but has stopped now. I have some source somewhere (in C) if anyone is interested. -- maelgwn - talk 11:13, 26 April 2008 (UTC)[reply]
Thanks, I'll ask around. It seems to me it would be a huge help for everyone not to have to babysit links when they get archived. Another question: Is there a bot that can do a "WhatLinksHere" for section headings? We have a serious need for that on style guidelines pages. - Dan Dank55 (talk)(mistakes) 16:07, 26 April 2008 (UTC)[reply]
ClueBot III does this for sections it archives. It doesn't do it for things that it doesn't archive, though. -- Cobi(t|c|b) 22:33, 26 April 2008 (UTC)[reply]

A decent-sized task. There are about 1900 articles in Category:Album articles without infoboxes. A bot would need to (1) Go to the article page for the talk page listed; (2) Check if the article uses {{Infobox Album}} and if it does, (3) go back to the article talk page and remove the "|needs-infobox=yes" parameter from {{album}}. I hope that's not too hard. -- Ricky81682 (talk) 10:10, 21 April 2008 (UTC)[reply]

Kinda, but not really. I can prepare a list. Q T C 15:07, 21 April 2008 (UTC)[reply]
I tried using a query, but it seems to me that the templates don't work as expected. For example On My Own Two Feet uses an infobox and the corresponding talk page is in Album articles without infoboxes. However the parameter needs-infobox is set to no. So why is it in the category? I tested some articles from my query results and this is the case for all of 'm. --Erwin85 (talk) 21:04, 21 April 2008 (UTC)[reply]
Probably my fault - I never guessed that anyone would specify |needs-infobox=no. I think I've fixed this at the source - just wait for the category to empty through the job queue. Happymelon 21:18, 21 April 2008 (UTC)[reply]
Sorry I asked. Seem to be nothing but trouble for you Happy. =( -- Ricky81682 (talk) 03:22, 22 April 2008 (UTC)[reply]
Don't know if the queue has been worked through long enough, but here's the list as it stands now (using E85's SQL). Q T C 09:18, 24 April 2008 (UTC)[reply]
The job queue is still recovering from a server-clock-error which suspended all the maintenance scripts on wikimedia wikis, so it's currently standing at about eleven million (average is about 800,000!). It will get there eventually... Happymelon 18:44, 25 April 2008 (UTC)[reply]

Just ran the query again, output here. If anybody is still going to work through these. Q T C 01:15, 1 May 2008 (UTC)[reply]

Bot to replace inactive NrhpBot

User:NrhpBot only ran for a short while in July 2007. I'm seeking someone to write a bot that can take all red links in List of Registered Historic Places in Ohio and create stubs for each place from the NRIS database, tagging each article with {{Ohio-NRHP-struct-stub}}, filling out an {{Infobox nrhp}}, and adding Category:Registered Historic Places in Ohio, and tagging the talk pages with {{OH-Project|importance=|class=stub}}. Can this be done? Thanks! §hep¡Talk to me! 2008-04-10 6:29 pm (UTC-4)

I guess code wouldn't even have to be written from scratch, just modified. See NrhpBot coding here: link. §hep¡Talk to me! 22:54, 29 April 2008 (UTC)[reply]

Need a bot to find corrupted REFLIST outputs.

When <ref> tags are not properly closed (or a closing </ref> tag is deleted by a later edit) the entire {{reflist}} output becomes corrupted. Editors do not always notice this because the problem is way down at the bottom of the page. It can, however, be easily fixed by simply restoring the missing </ref> tag. So far I have found and fixed two such cases, here[1] and here[2] (scroll through the "References" section to see the problem). What also happens is that article text which ought to be displaying in the article body is instead hidden from the point where the </ref> tag is missing until the next </ref> tag is encountered, and shows up inside the References section. Pretty ugly problem.

Can a bot be built to spot this problem and add a category tag {Category:Articles with damaged reflists to be repaired}? If a bot can find affected pages I can try and fix the problem by reviewing what changes happened later and restoring the /ref without disrupting whatever edits followed. If a bot cannot do this what about the {{reflist}} software being modified to spot problems as it builds the output? -- Low Sea (talk) 21:39, 23 April 2008 (UTC)[reply]

Wikipedia:Bot requests/Archive 19#Unclosed ref tags may be of interest to you. « Gonzo fan2007 (talkcontribs) 01:03, 24 April 2008 (UTC)[reply]
Thanks for the information, Gonzo fan; looks like I am not the only one seeing this. One possible algorithm would be to count the number of bytes between the <ref> and </ref> tags: an abnormally large number of bytes would indicate a high probability of an error. The report could simply identify "Article=ExamplePage, Largest Refblock=1,234 bytes" and list the articles on a page, largest first. -- Low Sea (talk) 06:48, 24 April 2008 (UTC)[reply]
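The byte-counting heuristic described above could be sketched roughly like this (an illustration only, not a production bot; the function name and threshold are invented for the example):

```python
import re

def largest_ref_block(wikitext):
    """Size in bytes of the largest <ref>...</ref> block, or None if
    the page has no ref tags. A missing </ref> makes the lazy match
    swallow everything up to the next closing tag, so broken pages
    show up with one abnormally large block."""
    blocks = re.findall(r'<ref[^>/]*>(.*?)</ref>', wikitext, re.DOTALL)
    if not blocks:
        return None
    return max(len(b.encode('utf-8')) for b in blocks)

# Pages above some tuned threshold would be listed, largest first:
SUSPECT_THRESHOLD = 1000  # bytes; would need calibrating on known-good pages
```

A real bot would also have to cope with self-closing <ref name=.../> tags and refs produced inside templates, which this pattern ignores.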
Yes, please do something like this. Just don't fix with a bot, but a list of broken pages is urgently needed. I would say use a category, but I would prefer a list so that the size of the problem can be assessed as well. Ultimately, I hope there is a bugzilla report where broken ref tags automatically populate a category and also (politely) prompt the editor to fix before saving. Carcharoth (talk) 07:10, 24 April 2008 (UTC)[reply]
Can a Category tag handle a parameter? As in {Category:Articles with large REF blocks|1234} so that this number would show up on the Category list page? -- Low Sea (talk) 13:27, 24 April 2008 (UTC)[reply]

The best way would be to wait for the patch from bug 12757 to be reviewed and committed. MaxSem(Han shot first!) 13:49, 24 April 2008 (UTC)[reply]

That patch will be most welcome to prevent future problems, but we need something like a bot to track down existing pages that are in need of repair. -- Low Sea (talk) 00:19, 25 April 2008 (UTC)[reply]
I would support a bot or something/anything that would help editors track these down - it does seem to crop up a lot at the help desk or VPT. DuncanHill (talk) 15:09, 30 April 2008 (UTC)[reply]

Creating redirects

I'm requesting a bot that would go through a single page defined by me (the page could be changed later) and create redirects to the main link (most pages are based on a template). For example, on this page, if the first link is blue it would redirect all links in that specific template to that blue link. I don't know of a bot that does this. Thanks. Dividing 17:06, 24 April 2008 (UTC)[reply]

I'll Try this.. CWii(Talk|Contribs) 17:44, 25 April 2008 (UTC)[reply]
Okay, I've created a bot that will do this automatically. The request for this is at Wikipedia:Bots/Requests for approval/John Bot II 2. CWii(Talk|Contribs) 20:25, 25 April 2008 (UTC)[reply]
 Done awhile ago :P CWii(Talk|Contribs) 19:37, 30 April 2008 (UTC)[reply]

Could someone quickly change everything in Template:WikiProject Milton Keynes to the one it is redirect to, Template:WikiProject Buckinghamshire? There's only about 100 articles to change. Thanks! -- Ricky81682 (talk) 01:47, 25 April 2008 (UTC)[reply]

all you need is a template redirect. βcommand 01:51, 25 April 2008 (UTC)[reply]
Oh, I don't see that at WP:TFD. Would I be going to WP:RFD or WP:RM and have the redirect deleted or move made first, and then the articles transferred? -- Ricky81682 (talk) 02:38, 25 April 2008 (UTC)[reply]
Just leave it as it is. βcommand 02:39, 25 April 2008 (UTC)[reply]
 Done Templates updated. CWii(Talk|Contribs) 17:39, 25 April 2008 (UTC)[reply]

Assessment request for WikiProject Africa

(Original request)

Could a bot please do the following for all pages in Category:Unassessed Africa articles:

  • If a stub tag appears on the main article, specify "class=Stub" for {{AfricaProject}} on the talk page.
  • If the talk page contains the banner of another WikiProject that has a valid "class=" assessment, apply the same assessment to {{AfricaProject}}.

The project's discussion for this request can be found here. Thank you, Black Falcon (Talk) 18:25, 25 April 2008 (UTC)[reply]
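The two rules above amount to a small decision procedure. A sketch of how a bot might implement them (the stub-template and class-parameter patterns here are simplifications; real banners vary a lot, so treat the names as illustrative):

```python
import re

# Ratings we would trust when copying from another project's banner.
VALID_CLASSES = ['FA', 'GA', 'A', 'B', 'Start', 'Stub', 'List']

def inherited_class(article_text, talk_text):
    # Rule 1: a stub template on the article itself implies class=Stub.
    if re.search(r'\{\{[^{}]*-stub\}\}', article_text, re.IGNORECASE):
        return 'Stub'
    # Rule 2: borrow a valid class= value from any other banner on talk.
    for m in re.finditer(r'\|\s*class\s*=\s*(\w+)', talk_text):
        for cls in VALID_CLASSES:
            if m.group(1).lower() == cls.lower():
                return cls
    return None  # leave the article unassessed
```

An empty or unrecognised class= value is deliberately ignored, so the article would simply stay in the unassessed category for human attention.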

I'll see what I can do. CWii(Talk|Contribs) 01:13, 30 April 2008 (UTC)[reply]
 On hold. I'm coding this. CWii(Talk|Contribs) 19:54, 30 April 2008 (UTC)[reply]
 Doing... (User:John Bot) CWii(Talk|Contribs) 21:00, 30 April 2008 (UTC)[reply]

Inactivity by users

There are two cases I can think of where inactivity by Wikipedians is a concern. The first is WikiProjects, where participant lists can often give the impression of activity when the members have actually mostly been gone for more than a year! Of course, we don't know if active members are active related to the project, but we can certainly say that people who haven't edited Wikipedia for a long time are inactive. I think all projects should strive to keep these lists up to date, moving inactive people to an inactive list and then deleting them. As for how long it takes to become 'inactive', that's a subjective matter, but a bot could surely be programmed to see how long it was since a user's last edit and take an action if it meets a certain minimum (perhaps specified by each project?).

Another is the somewhat controversial subject of article maintenance. Here it is vital that 'maintainers'/'stewards'/whatever-you-want-to-call-them are currently active. If you are maintaining and/or working on an article you must have made an edit within a certain amount of time, otherwise you should be removed as a maintainer (and template:maintained taken down if there is nobody else). It's a very bad look to present someone as a maintainer when they are no longer active, especially when they have been gone for more than a year.

Notifications to the talk page of the person removed would be a good idea too. This isn't a request for a bot as such, but more a question of whether it could be done and how it would work, and any general feedback on the idea. I'd probably have to take these issues to the WikiProject Council and perhaps the village pump for the maintained thing first before I can come up with a specific request. Richard001 (talk) 02:45, 26 April 2008 (UTC)[reply]
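Checking how long it has been since a user's last edit is straightforward with the MediaWiki API's usercontribs list; the parsing half of such a bot might look like this (the cutoff and function names are invented for illustration):

```python
from datetime import datetime, timedelta

def parse_last_edit(api_response):
    """Extract the newest edit timestamp from an
    action=query&list=usercontribs&uclimit=1 API response."""
    contribs = api_response['query']['usercontribs']
    if not contribs:
        return None  # account has never edited
    return datetime.strptime(contribs[0]['timestamp'], '%Y-%m-%dT%H:%M:%SZ')

def is_inactive(api_response, now, days=365):
    """True if the user's last edit is older than the cutoff
    (a per-project setting, as suggested above)."""
    last = parse_last_edit(api_response)
    return last is None or now - last > timedelta(days=days)
```

The inactivity window is passed in rather than hard-coded, since the thread suggests each project might want its own definition of "inactive".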

I also wish there was a better protocol for removing inactive people from projects. Has the community discussed this issue before? --Adoniscik(t, c) 02:21, 28 April 2008 (UTC)[reply]
No idea. What would be the best place - VP proposals? I wouldn't even mind having different colours for users based on how long they have been away, so you don't have to check their contribs all the time to see if you're talking to a ghost. Would probably require too much in terms of resources to keep it up to date though. Richard001 (talk) 09:39, 28 April 2008 (UTC)[reply]
I have brought it up at WP:VPPR, though it doesn't seem there will be any objection to such a bot's existence (it's more how it gets used for WikiProjects that is relevant). If someone thinks they could make something like this I'd like to hear from them. Richard001 (talk) 06:54, 30 April 2008 (UTC)[reply]

Request under MoS (flags)

Before I knew there was an MoS guideline, I had inserted Image:Red Army flag.svg in all articles that are part of the

. I am not experienced in writing bots, and have no desire to be, but I'm sure that someone has a written one that can be easily used to remove these images from the selected articles, and would appreciate this help.--mrg3105 (comms) ♠04:11, 26 April 2008 (UTC)[reply]

 Done hopefully i got them all -- maelgwn - talk 06:08, 26 April 2008 (UTC)[reply]
You got most, and some other editors pitched in before I had a chance to look in. The last one was deleted manually just a short time ago. Thank you. I really should learn how to write bots since I can see potential usefulness in some of the things I'm doing--mrg3105 (comms) ♠13:29, 26 April 2008 (UTC)[reply]

Hi there guys. Any chance someone could do me a favour by creating a bot that will rename all of the articles in this category so that the years are in the format "YYYY-YY"? The reason for this is because that is the format used by the majority of other football-related articles, and therefore the remainder should be retitled to follow the same standard. Any help with this would be much appreciated. – PeeJay 23:11, 26 April 2008 (UTC)[reply]

I could write a script to do this easily enough. To be clear: All you want is the date format changed. Thus Division 1 season 1993/1994 would be moved to Division 1 season 1993-94. Correct?--Dycedarg ж 00:32, 27 April 2008 (UTC)[reply]
Yes, this is correct. Any further changes to the name of each article I could do myself, but considering there's nearly 100 articles there, I couldn't possibly do this task myself at this time. – PeeJay 00:37, 27 April 2008 (UTC)[reply]
OK, I'm writing the script for this now. I should be able to actually do it within an hour or two (I'll be monitoring it for safety's sake, and running it through my standard account as opposed to my bot account).--Dycedarg ж 01:14, 27 April 2008 (UTC)[reply]
 Done Fixed the double redirects too.--Dycedarg ж 03:55, 27 April 2008 (UTC)[reply]
Just a note, I ended up doing it under my bot account after all due to my forgetting to log out and back in. Also, I fixed the links on the navigation template.--Dycedarg ж 04:47, 27 April 2008 (UTC)[reply]
Thanks very much mate. Job well done. – PeeJay 09:03, 27 April 2008 (UTC)[reply]
Ah, I almost forgot. Could you also make it so that each article is sorted by its year in this category? So, for example, Division 1 season 1993-94 would be sorted by "1993-94" and Ligue 1 season 2003-04 would be sorted by "2003-04". – PeeJay 09:11, 27 April 2008 (UTC)[reply]
 Done--Dycedarg ж 21:08, 27 April 2008 (UTC)[reply]
Sweet, thanks. That will be all :D – PeeJay 23:17, 27 April 2008 (UTC)[reply]
If we're going to this trouble, we should use the en-dash rather than the hyphen, per WP:MOS. The Rambling Man (talk) 06:21, 28 April 2008 (UTC)[reply]
What for? The difference is purely visual, and no one has an endash button on their keyboard, so typing the hyphen is so much easier both in naming the articles and in linking to them. – PeeJay 09:29, 28 April 2008 (UTC)[reply]
Because if we don't follow the MoS when it's easy to do so, we might as well not have it at all. The MOS mandates that we use the ndash in page titles, and create a redirect from the same title but using a hyphen. It's no extra effort, and it makes us look that bit more professional. Happymelon 10:40, 28 April 2008 (UTC)[reply]
Should we perhaps take this opportunity to convert all season articles to use the endash in the article title then? – PeeJay 13:07, 28 April 2008 (UTC)[reply]
I couldn't agree more. Go the whole hog. Or else why have a MOS in the first place? The Rambling Man (talk) 16:23, 28 April 2008 (UTC)[reply]
And curiously, look at thread following this one....! The Rambling Man (talk) 16:24, 28 April 2008 (UTC)[reply]
Unfortunately I don't have the MOS naming conventions memorized, or I could have reduced the amount of editing necessary for the task below. Anyway, these will get taken care of when I do that regardless.--Dycedarg ж 18:46, 28 April 2008 (UTC)[reply]

What about 1999-2000? Rich Farmbrough, 13:14 29 April 2008 (GMT).

For whatever reason, it seems to be the de facto standard to write that as 1999-00, even though that makes absolutely no sense, because it matches the rest. In any case, I don't make up these rules, I just propagate them willy-nilly throughout the encyclopedia. You should probably be arguing this with whoever came up with it in the first place. Preferably before I go editing some 6900 articles with date ranges in them pertaining to the request below. (The request is specifically to do with en dashes only, but I could work in some changing of this if you could gain consensus for it if only because it is so very stupid looking.)--Dycedarg ж 15:54, 29 April 2008 (UTC)[reply]
I usually change article titles that say "1999-00" to say "1999-2000". It bugs me the other way. – PeeJay 20:03, 1 May 2008 (UTC)[reply]
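For what it's worth, the conversion the move script performed, including the odd-looking two-digit convention for 1999/2000, reduces to a single substitution. A hypothetical sketch:

```python
import re

def slash_to_range(title):
    """Rewrite '1993/1994' style seasons as '1993-94', following the
    de facto two-digit convention discussed above (so 1999/2000
    becomes 1999-00, like it or not)."""
    return re.sub(r'\b(\d{4})/(\d{4})\b',
                  lambda m: f'{m.group(1)}-{m.group(2)[-2:]}',
                  title)
```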

DashBot

Could someone create a bot to replace hyphens in numerical ranges with en dashes? I'm too lazy to use the right one myself and people keep hounding me. A bot could easily take care of this. Any takers? --Adoniscik(t, c) 02:19, 28 April 2008 (UTC)[reply]

Conveniently, the script I wrote to take care of the request immediately above this one could be modified to do this easily. (It also could have been modified to use en dashes instead of hyphens for those articles in the first place had anyone told me, but that's beside the point.) Of course, since it will probably be a much larger bunch of articles (given the innate laziness of people when it comes to naming things properly), I'm not going to want to semi it, and will have to run it through BRFA. I'm going to class now, I'll put the BRFA up when I get back. Hopefully it won't end up taking a week for approval.--Dycedarg ж 18:44, 28 April 2008 (UTC)[reply]
To clarify, as it seems I misread the request a bit, I am planning on doing this for numerical ranges that appear in page titles, not numerical ranges that appear in article text. As that is a relatively minor issue, you'd probably do better to ask the operator of SmackBot or some other bot that propagates general fixes to add this, or perhaps get the maintainers of AWB to add it to the general fixes for that.--Dycedarg ж 22:06, 28 April 2008 (UTC)[reply]
BRFA is up.--Dycedarg ж 00:04, 29 April 2008 (UTC)[reply]
I think this is actually subtle. For example ISBNs we don't want to insert en-dashes. Rich Farmbrough, 13:03 29 April 2008 (GMT).
Do you know of any Wikipedia articles on specific ISBN numbers? Happymelon 10:49, 30 April 2008 (UTC)[reply]
It would be a great thing if these hyphens could be changed to en dashes, as this is a problem in, I daresay, hundreds of articles. However, I believe that some kind of list should be made of the articles, so that the WikiProjects responsible can take notice; the change of the titles should be a good opportunity for the correction of any relevant templates, as well as instances in the text.
I should also like to mention that it is not just date ranges which are problematic. All the tens of articles about bilateral relations should have an en dash; if one of the two parties has a space in its name, then the en dash should be spaced as well. Examples: Canada–Greece relations and Canada – United States relations. This is a wide and pretty much standardised category (x-y relations), so this could perhaps be taken care of automatically as well. Waltham, The Duke of 19:58, 30 April 2008 (UTC)[reply]
Date ranges were just the first run. In the future, I had every intention of fixing every single instance hyphens are used in place of en dashes in page titles that I could find. I just haven't figured out every type of article name that falls under that little paragraph in the MOS. I intend to ask advice of some people more knowledgeable about these things than I am when I run out of things I can find on my own. Also, if you want me to compile a list of the articles I change and put it somewhere I could, but I'm not going to have enough time to notify wikiprojects (due to the sheer number of them that are affected) on my own. (Oh and by the way, it's not hundreds of articles. It's thousands. There are almost 7000 articles with hyphens in date ranges alone.)--Dycedarg ж 19:45, 1 May 2008 (UTC)[reply]
Following up on Rich Farmbrough's comment about ISBNs: the usage of en dashes will break the ISBN magic for all ISBNs included in book references which are properly hyphenated. Notice that one of the following ISBNs is not highlighted in blue: ISBN 0-123456-789, ISBN 0–123456–789. The second ISBN is punctuated with en dashes rather than hyphens. Don't make this change! Also think about whether it will modify URLs. EdJohnston (talk) 18:05, 2 May 2008 (UTC)[reply]
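The ISBN caveat is easy to respect by splitting ISBN-like runs out of the text before substituting. A rough sketch of an ISBN-aware replacement (the range pattern is deliberately narrow and purely illustrative):

```python
import re

EN_DASH = '\u2013'

def dashify_ranges(text):
    """Replace hyphens in simple numeric ranges with en dashes,
    leaving anything that looks like an ISBN untouched (hyphens
    are significant there, as noted above)."""
    parts = re.split(r'(ISBN[ :]?[\d-]+X?)', text)
    for i, part in enumerate(parts):
        if not part.startswith('ISBN'):
            parts[i] = re.sub(r'\b(\d{1,4})-(\d{1,4})\b',
                              lambda m: m.group(0).replace('-', EN_DASH),
                              part)
    return ''.join(parts)
```

URLs containing hyphenated numbers would need similar protection; this sketch only demonstrates the ISBN case.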

Purging subpages

I tagged Ecuador category talk pages with Template:WikiProject Ecuador to place the categories in Category:Category-Class Ecuador pages. This also placed the categories in Category:WikiProject Ecuador articles. Categories are not articles, so I modified Template:WikiProject Ecuador to fix this. The categories still appear in Template:WikiProject Ecuador. One way to fix this is to, for example, open each category page and save them to purge the page. That takes a lot of time. Is there a bot that can purge each page listed in Category:Category-Class Ecuador pages so that I don't have to open up each category page and save? Gimmetrow did purge the Category:Category-Class Ecuador pages pages,[3] but this issue comes up every so often for me and I would like to know where to make such a purge request in the future. Thanks. GregManninLB (talk) 01:57, 29 April 2008 (UTC)[reply]

Just wait; the servers will update those pages in due time. βcommand 2 18:01, 29 April 2008 (UTC)[reply]
MelonBot does this occasionally, but only when it's urgently needed - if it's not vital for statistics or something, just wait for them to work through the job queue. More specifically for this instance though, if you look in Category:Category-Class articles (and note the name of the master category) you'll see that a good 90% of projects use the (admittedly slightly counterintuitive) convention of "Category-Class Foo articles, although some have adopted the "page" syntax. You might want to consider using Category:Category-Class Ecuador articles rather than "...pages". You might also want to consult with WikiProject Ecuador about this change, as the Category:WikiProject Foo articles category is often used as a parent category for all pages (articles and not) in the scope of a project. Again, the naming convention of "articles" is just that, convention (some projects just use Category:WikiProject Foo). Just a few thoughts. Happymelon 11:18, 30 April 2008 (UTC)[reply]
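For the record, pages can also be purged on demand through the API's action=purge, which rebuilds the cached rendering without a null edit. A minimal sketch (the helper split is just to keep the request-building testable; whether a plain purge also refreshes category membership has varied over time, so treat this as a starting point):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API = 'https://en.wikipedia.org/w/api.php'

def build_purge_request(titles):
    """Parameters for an action=purge call on a batch of titles."""
    return {'action': 'purge', 'format': 'json', 'titles': '|'.join(titles)}

def purge(titles):
    """POST the purge; the server re-renders each page's cached HTML."""
    data = urlencode(build_purge_request(titles)).encode()
    with urlopen(Request(API, data=data)) as resp:
        return resp.read()
```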

Free image cleanup bot

I would like to propose a bot to help start to clean up the formatting of descriptions/source/authorship information on at least some of our free images. New uploads are now including {{Information}} with correct fields filled in based on choices made during upload, but our older images are lacking this in most cases. Many also lack descriptions...often the uploader relied on the context of the image's usage for a description, but as images are orphaned over time due to article changes, this context is lost...and as uploaders leave over time, we lose the chance to obtain this information.

I propose the bot do the following:

  1. Search for images with the license tags {{self}}, {{PD-self}}, and {{GFDL-self}} that have only a single uploader.
  2. Place the {{Information}} template on the page if it is not already present.
  3. Place the text "user-made" in the "Source" field.
  4. Place a link to uploader's userpage in the "Author" field.
  5. Place the license tag in the "Permission" field. Personally, I think it would be useful to convert {{PD-self}} to {{PD-user}}, {{GFDL-self}} to {{GFDL-user}} and {{self}} to add the optional "author" parameter to make the source of the license more clear in the event of image renaming, copying, or other image maintenance.
  6. Grab any remaining stray text on the page and place in the "Description" field. If there is no description (i.e. the page contains nothing but the license tag), notify the uploader with {{subst:add-desc}}.

This should correct a lot of the entries we have in Category:Images with unknown source, since the authorship claim is clear when a user places the "self" license tags. And even if they do not return to add a description, the image will at least be categorized into Category:Images lacking a description for human attention. Kelly hi! 17:52, 29 April 2008 (UTC)[reply]
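Steps 2 through 6 above could be prototyped along these lines (a deliberately naive sketch: real image pages are messy, as noted below, and the field handling here is illustrative only):

```python
import re

def add_information_template(page_text, uploader):
    """Wrap an old-style self-licensed image page in {{Information}}."""
    if '{{Information' in page_text:
        return page_text  # step 2: already migrated, leave alone
    m = re.search(r'\{\{(self|PD-self|GFDL-self)[^{}]*\}\}', page_text)
    if not m:
        return page_text  # not one of the targeted license tags
    license_tag = m.group(0)
    # Step 6: whatever text is left over becomes the description.
    description = page_text.replace(license_tag, '').strip()
    return ('{{Information\n'
            f'|Description={description}\n'
            '|Source=user-made\n'                 # step 3
            f'|Author=[[User:{uploader}]]\n'      # step 4
            f'|Permission={license_tag}\n'        # step 5
            '}}')
```

An empty Description field is where the {{subst:add-desc}} notification from step 6 would kick in; that part is left out of the sketch.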

Sounds great. I'd like to thanks Kelly for coming up with this proposal. One thing that should be added is analysing the upload date would help, as I think January 2006 was when the PD tag was deprecated. A date for when the "information" template came into widespread use (on the upload form?) would be good as well. Carcharoth (talk) 18:14, 29 April 2008 (UTC)[reply]
Actually, the date shouldn't matter for the "self" licenses, as those have stayed stable over time. I haven't yet thought of a good way to bot-sort the {{PD}} images, as they are a mishmash of all types of sources, from self-made images to the Library of Congress. It would help for a bot to automatically notify the uploaders of those images, without the message being a deletion warning for an image that could be fixed. But that's probably a job for another bot. Kelly hi! 18:24, 29 April 2008 (UTC)[reply]
It would be interesting to see which external websites and internal links are most commonly linked from the image pages. Image:Chaffinch47.jpg is one of many that link to www.biologie.uni-hamburg.de. Also, finding the most commonly used words might reveal other patterns. — Dispenser 00:35, 30 April 2008 (UTC)[reply]
I wish the best of luck to anyone who writes a bot to do this. You'd be amazed at the sort of garbage you find on image description pages. --Carnildo (talk) 21:40, 2 May 2008 (UTC)[reply]

Delivery of a note

I'm working on letting possibly interested editors know about Wikipedia:Meetup/DC 4. Does someone have a bot who would deliver a standard note to everyone on a list of editors - very similar to a WikiProject newsletter?

I've compiled a list of editors who posted on the page for the prior meetup, but haven't indicated whether or not they will attend this meetup. The list is at Wikipedia talk:Meetup/DC 4. The proposed note to go to each person on that list is at Wikipedia talk:Meetup/DC 4/Proposed reminder note - May 1.

Thanks in advance for any help on this! -- John Broughton (♫♫) 21:39, 30 April 2008 (UTC)[reply]

I can, just leave a note on my talkpage. βcommand 2 21:54, 30 April 2008 (UTC)[reply]
Done. Thanks. -- John Broughton (♫♫) 01:57, 1 May 2008 (UTC)[reply]

Search request

Would someone be able to take the list found at Wikipedia:WikiProject Green Bay Packers/Infobox needed and search through all the articles for {{NFL player}} and create a list of those articles that lack that template? Thanks, « Gonzo fan2007 (talkcontribs) @ 02:35, 1 May 2008 (UTC)[reply]

{{NFL player}} redirects to {{Infobox Gridiron football person}}, so I made a list of all pages linked from Wikipedia:WikiProject Green Bay Packers/Infobox needed that don't use said template. I used the database to do this and put the list at [4]. --Erwin85 (talk) 14:58, 1 May 2008 (UTC)[reply]
Thank you! « Gonzo fan2007 (talkcontribs) @ 18:26, 1 May 2008 (UTC)[reply]

Requested redirect migration

Could someone with a handy bot please migrate all usages of the redirect {{Painting}} to its target, {{Infobox Painting}}? It looks like there are somewhere around 800-900 usages. The reason is that I would like to use {{Painting}} for a local copy of Commons:Template:Painting. Thanks! Kelly hi! 13:05, 1 May 2008 (UTC)[reply]

 Doing... Happymelon 17:20, 1 May 2008 (UTC)[reply]

CSV/PRN conversion to Wikitable bot?

Hi. Are there any handy-dandy ways to convert tables in CSV or PRN (comma- or tab-separated) format into wikitable formatting? Also, can wikitables do any calculations, i.e. percentages? The reason I'm asking is complex census data on a large number of topics/towns. Pls reply on my talkpage. Skookum1 (talk) 19:41, 1 May 2008 (UTC)[reply]
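Wikitable markup itself cannot calculate anything, so percentages have to be computed before the table is written out. A sketch of the conversion (the function and its column-naming convention are invented for the example):

```python
import csv
import io

def csv_to_wikitable(csv_text, caption=None, total_col=None):
    """Render CSV text as a wikitable. If total_col names a numeric
    column, append a pre-computed '% of total' column, since the
    markup cannot do arithmetic itself."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    if total_col is not None:
        idx = header.index(total_col)
        total = sum(float(r[idx]) for r in body)
        header = header + ['% of total']
        body = [r + [f'{100 * float(r[idx]) / total:.1f}%'] for r in body]
    lines = ['{| class="wikitable"']
    if caption:
        lines.append(f'|+ {caption}')
    lines.append('! ' + ' !! '.join(header))
    for r in body:
        lines.append('|-')
        lines.append('| ' + ' || '.join(r))
    lines.append('|}')
    return '\n'.join(lines)
```

Tab-separated PRN input would just need `csv.reader(..., delimiter='\t')` instead of the default comma.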

Tagging PDFs

Images uploaded in the PDF format cannot be displayed in articles, and as a result the vast majority of PDFs hosted here need to be converted into another image format, converted into plain text in an article, or (if the PDF is unencyclopedic) transwikied or deleted. The template {{BadPDF}} was created to notify people about this problem, but the vast majority of PDF files aren't tagged with it. If a bot could be programmed to add this template to all PDF files we currently have it would be great for people trying to fix this problem. There's an external tool here which can give a list of PDFs - there's about 1500. Hut 8.5 19:48, 1 May 2008 (UTC)[reply]

cites & reference in-line tags are way too frequently put before a comma or full stop.

Too often I see this[1]. <-- it looks ugly; it should look like this.[2] All we need is a bot that searches for </ref>., removes the ".", finds the <ref> just before it, and places the "." in front of that. Many also put extra spaces before the punctuation, so that needs to be accounted for, but I'm sure someone could come up with a bot to fix this; it is endemic to Wikipedia right now. 67.5.156.176 (talk) 07:42, 2 May 2008 (UTC)[reply]

This applies to all punctuation, so something like
re.sub(r'<ref(.*?)>(.*?)</ref>([.,;!?])', r'\3<ref\1>\2</ref>', text)
would work in python. I can't think off the top of my head where this would throw false positives (I've excluded all brackets and quotes, as these can be ambiguous). The issue would be finding these errors. They'd probably have to be found from a database dump. Ideally, it would be nice to have a continuously-running MOSbot to check RecentChanges for such simple and easily-fixed violations; when a new feature was added, we'd just have to search the most recent database dump for existing violations and fix them with a one-time script, and thereafter they'd be fixed in real-time. Happymelon 08:34, 2 May 2008 (UTC)[reply]

Please do not do this. I consider it more logical to put the reference immediately after the fact, and that usually means before the punctuation. See Wikipedia:Cite your sources#Ref tags and punctuation, which allows both styles. —AlanBarrett 17:38, 2 May 2008 (UTC)[reply]

I seem to recall that the MoS says you can put them before or after the punctuation, so long as it is consistent within an article. DuncanHill (talk) 17:40, 2 May 2008 (UTC)[reply]
Doh! I should have read Alan's comment! DuncanHill (talk) 17:41, 2 May 2008 (UTC)[reply]
Well if the MoS allows both styles, then this is a fast-track to Special:Blockip. No thankyou! Happymelon 21:54, 2 May 2008 (UTC)[reply]
Two things: 1) we've denied bot proposals for this before, and 2) the regexes to do this properly are a lot more complicated than the above. Gimmetrow 04:29, 3 May 2008 (UTC)[reply]
I'm surprised to see someone interpret placing something "immediately after the fact" as meaning it should jump in front of proper grammar by cutting off the sentence's punctuation, i.e. before the fact is finished. I think policy needs to be reviewed in this instance! The latter form I gave is clearly superior in looks and logical structure. 67.5.147.10 (talk) 10:23, 3 May 2008 (UTC)[reply]

UN/LOCODE redirects - add values to infoboxes

There are redirects from UN/LOCODEs in the form UN/LOCODE:ABCDE (see Category:Redirects from UN/LOCODE). If such a redirect goes to a page that has a Template:Infobox Settlement, a new variable un_locode should be added after the variable area_code, and its value should be the 5 characters, e.g. USNYC for UN/LOCODE:USNYC. The value is currently not displayed, as agreed at Template_talk:Infobox_Settlement#UN.2FLOCODE. UnLoCode (talk) 15:05, 2 May 2008 (UTC)[reply]
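A hedged sketch of the wikitext edit involved, assuming the infobox lists one parameter per line (the usual layout); a real bot would extract the 5-character code from the redirect title and skip pages that already have the parameter:

```python
import re

def add_un_locode(wikitext, code):
    """Insert '| un_locode = <code>' directly after the area_code
    parameter of an Infobox Settlement.

    Assumes one parameter per line.  Returns the text unchanged if
    un_locode is already present, so the bot can safely revisit pages.
    """
    if "un_locode" in wikitext:
        return wikitext
    return re.sub(r'(\|\s*area_code\s*=[^\n]*\n)',
                  r'\1| un_locode = %s\n' % code,
                  wikitext, count=1)
```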

Image deletion bot

I know this has been proposed before and I know it'll be hard to have the community agree on this but I still think it would be worthwhile for a group of experienced bot operators to design a simple image deletion bot. (I also suppose that there have been many discussions in the past about this, though I'm not sure where to find them.) From what I understand, there hasn't been much complaint about the work of RedirectCleanupBot (talk · contribs) so the community might be more open to this. Specifically, I'm thinking about two tasks.

  1. Automatic deletion of unused unfree images.
  2. Automatic deletion of MetsBot-approved Commons dupes.

Let me quickly make the case for this and give my thoughts on why and how these bots should operate and on how to sell this as a plus for everyone rather than a dangerous slippery slope.

  1. Deletion of unused unfree images is usually done rather carelessly by admins to start with. By that I mean that there are routinely images that are in fact used but remain in the category. Admins that use the Bad Old Ones tool seem oblivious to the fact that this tool never properly reports the actual usage of images. This has in the past resulted in broken links to images when Twinkle batch deletion is used. Moreover, a bot could check that the image is never used over the 7-day grace period, which admins don't do. All in all, a bot would likely do a better job than any admin does.
  2. For those who don't know MetsBot's work on commons dupes, let me just say that it's a very very conservative bot. I've been checking its work for close to a year now and I've never seen it produce a false positive. If MetsBot concludes that the image meets all the requirements of Wikipedia:Criteria for speedy deletion#I8, then it is unquestionably right. Because of this, the fraction of dupes it tags as meeting these requirements is fairly small.

These are not difficult tasks to automate, and so it's possible to write very transparent bot code, which would go a long way toward providing the assurances that the community will want before giving the green light to a deletion bot. Furthermore, both tasks are as uncontroversial as they come, and having admins perform them by hand is just a stupid waste of admin clicks. Pascal.Tesson (talk) 15:49, 2 May 2008 (UTC)[reply]

Be very very quiet, I'm hunting rabbits... those bots do exist and are running. βcommand 2 16:09, 2 May 2008 (UTC)[reply]
? A combination of bad grammar and mysterious formulation makes that comment impossible to grasp. Isn't the redirect bot the only bot that ever was granted admin rights? If these bots exist and are running, it shouldn't be covert. And goodness gracious, I sure hope you're not thinking of doing this through BetaCommandBot which is probably the least transparent bot out there and the one most suspected of running unapproved tasks. Pascal.Tesson (talk) 16:54, 2 May 2008 (UTC)[reply]
The community has a deathly fear of Skynet and getting one of these bots officially approved will be almost impossible. But these bots do exist and are operating behind the scenes. βcommand 2 16:59, 2 May 2008 (UTC)[reply]
Jeez, and then you wonder why people are pissed about BAG and bot operators ignoring community input. For what it's worth, your response here is entirely unacceptable. You're essentially telling me that you know what's best for the community and so you don't need to take its will into account. The mistrust of the community is not the product of its stupidity. It's the product of the stupidity of bot operators who think they can do what they want because their bots are so flawless. When you're in a hole, stop digging. Pascal.Tesson (talk) 17:11, 2 May 2008 (UTC)[reply]
Those people who operate these bots do so without BAG approval; I know of two users who do this, and this information is public. Usage of WP:IAR is not a bad thing. You and many many others agree that these bots are a good thing, but just try to get one past RfA; I'll watch it crash and burn in failure. People have an extreme fear of Skynet. βcommand 2 18:00, 2 May 2008 (UTC)[reply]
Well the redirect-deleting bot RfA is proof that it's very doable. The problem with previous RfAs for bots is that operators were proposing tasks where concerns of malfunction had a bit of merit. No chance this will happen in this case. Just like the redirect bot, this would be a trivial bot running a trivial, very much uncontroversial task. Pascal.Tesson (talk) 19:12, 2 May 2008 (UTC)[reply]

I'm afraid that making a safe image-deleting bot is impossible. There are two principal points of failure:

  • tricking the deletion bot to delete what it shouldn't;
  • tricking the tagging bot (MetsBot, OrphanBot, BetacommandBot, whatever) to tag what it shouldn't;

BEANS!!!, but how can you be sure that a vandal cannot tag a free image as FU, or fake the MetsBot template? MaxSem(Han shot first!) 18:06, 2 May 2008 (UTC)[reply]

It's actually just as easy to fool a well-intentioned admin, trust me. Moreover, the two cases I'm describing here are the last thing a vandal would be interested in attacking. Unused unfree images? What on earth might a vandal want to do with this? <maniacal laughter> I'm going to take this free image and tag it as fair use, it'll stay in the category for 7 days, no one will notice and then it'll be deleted by an unsuspecting bot. </maniacal laughter> This is beyond improbable. Same goes for deletion of Commons dupes. The deleting bot would verify on the fly that the Wikipedia and Commons images are the same (and even run the MetsBot code on the fly). You can't fool the bot to delete something it shouldn't. Seriously, if you can come up with a scenario, any scenario no matter how contorted, where an ill-intentioned user would take advantage of such a bot, I might take the objection more seriously. Pascal.Tesson (talk) 19:12, 2 May 2008 (UTC)[reply]
Ever tried to write a bot? The core principle of internet security says that you shouldn't trust anything that comes from the net. The bot shouldn't trust ANYTHING AT ALL. For example, "it'll stay in the category for 7 days," you say. Why can't a vandal tag an image as 7 days old? Ok, let's parse the revision history. Bah, many validly tagged images are often edited after tagging, so the bot would have to parse the whole history, trying to figure out from diffs when the image was tagged. And there are zillions of conditions like that the bot should check. Seriously, give me the code and I'll show you potentially unsafe parts. MaxSem(Han shot first!) 19:31, 2 May 2008 (UTC)[reply]
What you don't seem to understand is that if some vandal decides to tag an unused free image as unused-unfree and places it in today's deletion category, the image has 100% chance of getting deleted. Admins are just as dumb as bots and I certainly include myself in that category. I would actually argue that it's easier to fool an admin than a bot in such cases and, especially in the case of Commons dupes, it's very hard to imagine deletion ever being a problem since the copy is on Commons. Pascal.Tesson (talk) 19:52, 2 May 2008 (UTC)[reply]
(1) Do you have examples of incorrectly deleted images? (2) What about the scenario where an attacker uploads a copy of an image to Commons with an obviously bogus description? As a BAG member experienced with bot development, I find approval of such a bot very unlikely. MaxSem(Han shot first!) 19:59, 2 May 2008 (UTC)[reply]
Admins are just as dumb as bots. Easy there pal, not the thing to say :P CWii(Talk|Contribs) 20:02, 2 May 2008 (UTC)[reply]

I find it highly unlikely that any image deletion bot would ever get approved. The Skynet/I, Robot/etc crazies aside, image deletion is controversial enough as it is. The only reason the redirect bot was approved was because it was maybe 15 lines of code, was impossible to fool by its very nature, and was doing something that no one could argue shouldn't be done. This bot would be, by your own admission, highly complicated and not impossible to fool. (As stupid as admins may be, people are unwilling to accept that they are less fool-proof than bots.)--Dycedarg ж 20:14, 2 May 2008 (UTC)[reply]

Image deletion is indeed controversial. But these image deletions are not. They're as uncontroversial as deleting broken redirects. And I don't know where you're reading that I think the code would be highly complicated. On the contrary, it's a pretty simple bot to write. As far as I understand this work is already being taken care of by an unapproved bot so I guess I'm just proposing to make that process more transparent. Yes, it will always be possible to fool but it's beyond unlikely for someone to take advantage of that given how uninteresting it is for a vandal to have an image deleted. Just to answer MaxSem's objections above: (1) Yes I have already had to undo a lot of deletions from Category:Images on Wikimedia Commons which had been deleted with the Twinkle batch mode. The problem was the broken bad old ones tool (which provides incorrect info on usage). I suspect this still happens since the categories are created from a template which encourages admins to use bad old ones. I'm one of the few admins who works the backlog of Commons dupes and I can assure you that no admin would have the patience to do this without the help of MetsBot. If anyone tries to abuse this by placing fake MetsBot tags, they'll succeed easily, bot or no bot. Sure BEANS, but, come on, this would be as much fun to a vandal as placing a "kick me" post-it on his own ass. As for (2), again this is beyond unlikely and it's also true that admins who delete these images rarely double check the description. There is no net gain by having admins do this by hand. Pascal.Tesson (talk) 21:08, 2 May 2008 (UTC)[reply]
I never said my objections made sense. I said they are objections, that when presented by opposers at an RFA will spawn dozens of knee-jerk "per so and so" pile-on votes because that is the nature of RFA. Rationality is far from the ruling force in oppose sections for bot RFAs (or, for that matter, most RFAs in general.) The redirect bot is deleting redirects that redirect to nothing and have no page history; there is never a good reason to keep those, thus deleting them is beyond uncontroversial. And it still got 15 opposes. Give the denizens of RFA land something almost credible if you ignore objective reality to base their opposes on, and your bot RFA is doomed. But, by all means try anyway, I would love to be proved wrong.--Dycedarg ж 00:03, 3 May 2008 (UTC)[reply]
Hmmm... How many bot RfAs have there actually been? Everyone's acting as if this is beyond impossible, yet I don't see anybody actually giving it a serious go. Of course, the community will be unhappy if the first person that raises an objection is met by "oooooooooh you scawed of tha big Skynet, you moron!" Get a grip, people. The only problem here is that bot operators are unwilling to address the community's concerns patiently. When WJBscribe did, it went pretty well. Sure, there are people who will never accept bots with the ability to delete. But there's also a very large portion of editors who can be persuaded if the case is made properly and if there's a willingness to listen to their concerns for checks and balances. Judging by some of the comments above, it sure seems like a few BAG regulars feel they don't have to stoop as low as actually having to justify themselves to people who couldn't have a bot write "hello world" if their life depended on it. Sure, it's easier to just use deletion bots anyways, but that's not the right way to do it, and ultimately this is exactly the attitude that fuels the growing resentment against bot operators. Don't get me wrong, I'm actually very much in favor of giving more leeway to bots, but it should be obvious that the attitude against bots will only worsen if the sole principle for development is IAR. Pascal.Tesson (talk) 03:24, 3 May 2008 (UTC)[reply]
As someone who has ventured into the world of tagging images of questionable sourcing or copyright, I totally discount any possibility of reasonable reaction from the community when it comes to deletion of images. People who would not be at all disturbed by a request to source a questionable fact in an article turn completely hostile and unreasonable in response to a request to properly source, license, or justify use of an image. This task is best handled by people who are knowledgeable in copyright matters, behind the scenes, under WP:IAR. Otherwise we will end up inundated in stolen copyrighted images with bogus or no licenses, and the pillar of free content will crumble and collapse. Kelly hi! 04:05, 3 May 2008 (UTC)[reply]
Well I understand your point but clearly you haven't read what I've proposed. I'm talking about a bot to delete images only in two very very specific uncontroversial cases. Pascal.Tesson (talk) 04:12, 3 May 2008 (UTC)[reply]
Oh, I was only addressing the generic objections. Your bot is a good idea, go for it. Kelly hi! 04:38, 3 May 2008 (UTC)[reply]

My memory for exact instances of these things is rather terrible, but the only one I can remember rather clearly was the one that Dragons flight tried for a bot that was meant to protect all the templates that appear on the main page, as that was sorely needed at the time. That RFA was a posterchild for failed bot RFAs, and was eventually withdrawn due to the timely arrival of cascading protection, which rendered the issue moot. I don't remember the details particularly well (or, for that matter, the name of the bot so as to direct you to the RFA page), but I don't remember any particular lack of willingness on the operator's part to address the community's concerns. In any case, you are correct that the redirect bot is a proof of concept. Perhaps future bot RFAs will be easier with a live example of one that hasn't destroyed Wikipedia. I still don't think that this bot will pass, due to the fiery combination of bot admin rights and image deletion. But I wish you luck regardless.--Dycedarg ж 06:17, 3 May 2008 (UTC)[reply]

Request for new Bot: "ANI-Bot"

I had a brainstorm this morning and came up with this question:

Wouldn't it be convenient if Wikipedia had a bot that could read the ANI board and automatically notify those parties who are involved in a thread?

Let's say I were to reach a situation with a specific editor where I felt that it was necessary to bring that person to ANI. As a courtesy, someone needs to notify that editor that they and/or their recent activity are being discussed at ANI. But, sometimes it seems like it does nothing but further escalate the situation if the user bringing that editor to ANI is the one that notifies them. If a bot existed that could automatically do that, however....

The problem is, I have no idea what I'm doing when it comes to most scripts, so I don't think I would be the right person to create the bot. I'd be more than happy to help maintain it and monitor its activities once it's created and running, but as far as the creation goes... --InDeBiz1 (talk) 22:52, 2 May 2008 (UTC)[reply]

How would the bot know who the active parties are? This would require someone putting a special template on the ANI thread stating who the active parties were, which is probably more hassle than just leaving a note on the user's talk page. Also, I would rather someone warned me about a thread on ANI, not a bot. If you feel that notifying a party will make things worse, then one could just ask someone else to notify the party. Personally I don't see the use for such a bot. « Gonzo fan2007 (talkcontribs) @ 23:04, 2 May 2008 (UTC)[reply]
I can see your argument and I'm not convinced that it's wrong. As I said, the idea came to me in a random brainstorm, so I thought I'd put it out there and see what folks thought about it. I'm not emotionally attached to it or anything like that.  :) --InDeBiz1 (talk) 23:08, 2 May 2008 (UTC)[reply]
Haha, well I'm glad to see that you're not going to get all emotional on me =D Lol, it's a good idea, I just don't think bots are that smart ;-) But who knows, maybe someone with the know-how will figure something out. Cheers, « Gonzo fan2007 (talkcontribs) @ 00:11, 3 May 2008 (UTC)[reply]

Bot to remove sections from Template:Infobox actor

Currently, Template:Infobox actor is fairly widespread in its usage. Several fields (e.g. "notable roles") have been deleted from the template page, but a very large number of individual infoboxes out there still contain them (because of their deletion from the template itself, they no longer appear in the article, but they are still there in the wiki markup). I don't know if this is the right place to ask, but can a bot be programmed to remove the fields "notable roles", "notable role", "influences", "influenced", "height", "baconnum", "imdb_id", "bgcolour", "children", "parents", "relatives", "restingplace", "restingplace_coordinates", "deathcause" and "nationality" (all of which have long been deleted from the template itself) and replace the field "partner" with "domesticpartner" and the field "location" with "birthplace"? Thanks, All Hallow's Wraith (talk) 05:01, 3 May 2008 (UTC)[reply]
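A sketch of the cleanup as a plain text transformation, assuming one "| field = value" parameter per line (the usual layout for this infobox). Matching field names only at the start of a line, followed by optional whitespace and "=", keeps "restingplace" from eating "restingplace_coordinates" and keeps the rename from touching a "domesticpartner" that is already in place:

```python
import re

# Fields deleted from Template:Infobox actor; values left behind in
# articles render nothing and can be stripped outright.
DEAD_FIELDS = ("notable roles", "notable role", "influences", "influenced",
               "height", "baconnum", "imdb_id", "bgcolour", "children",
               "parents", "relatives", "restingplace",
               "restingplace_coordinates", "deathcause", "nationality")
RENAMES = {"partner": "domesticpartner", "location": "birthplace"}

def clean_actor_infobox(wikitext):
    dead = "|".join(re.escape(f) for f in DEAD_FIELDS)
    # Drop whole lines holding dead parameters.
    wikitext = re.sub(r'^\|\s*(?:%s)\s*=.*\n' % dead, '',
                      wikitext, flags=re.MULTILINE)
    # Rename parameters in place, preserving the original spacing.
    for old, new in RENAMES.items():
        wikitext = re.sub(r'^(\|\s*)%s(\s*=)' % re.escape(old),
                          r'\g<1>%s\g<2>' % new,
                          wikitext, flags=re.MULTILINE)
    return wikitext
```

A production bot would also have to cope with parameters crammed onto one line and with nested templates inside parameter values.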

Redirect bot

Sorry if something like this has already been proposed...

I was wondering if anyone would be interested in making a bot that made redirects to articles that contain capital letters (other than the first letter) or punctuation. The bot would see Xena: Warrior Princess, and automatically make a redirect at Xena warrior princess, Xena: warrior princess, Xena: Warrior princess, etc. Obviously it would only create a redirect where no article currently exists. It could also do the same for currently existing redirects. For example, based on AAFCO, it would make Aafco, which would redirect to the original target (in this case Association of American Feed Control Officials.)
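A sketch of the variant generation described above (MediaWiki already auto-capitalises the first letter of a title, so only differences beyond the first character need redirects; checking whether the target exists, or was previously deleted, is left to the bot framework):

```python
def redirect_variants(title):
    """Generate case/punctuation variants that could redirect to `title`.

    A sketch only: it lowercases everything after the first character
    and strips colons and commas, the cases from the examples above.
    """
    variants = set()
    # "Xena: Warrior Princess" -> "Xena: warrior princess",
    # "AAFCO" -> "Aafco"
    variants.add(title[0] + title[1:].lower())
    # "Xena: Warrior Princess" -> "Xena Warrior Princess"
    stripped = title.replace(":", "").replace(",", "")
    stripped = " ".join(stripped.split())
    variants.add(stripped)
    variants.add(stripped[0] + stripped[1:].lower())
    variants.discard(title)  # never recreate the article's own title
    return sorted(variants)
```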

As I do most of my Wikipedia reading on my phone, I go to most articles by typing in the URL. My phone autocompletes the en. URL, and I just edit that last part, which is by far the easiest way for me to access Wikipedia.

I seriously doubt that I am the only one who assumes URLs, and since this bot wouldn't change anything on existing articles, I don't see any potential drawbacks. If, for whatever reason, the redirect is not logical, it can always be changed later by a human editor. ~ JohnnyMrNinja 05:33, 3 May 2008 (UTC)[reply]

One potential drawback is the potential for creation of unnecessary redirects that reflect simple mistakes (e.g. typing errors) in article creation or pagemoves, pagemove vandalism, or pagemove disputes. Black Falcon (Talk) 05:41, 3 May 2008 (UTC)[reply]
I certainly agree, it will likely create many unnecessary or unlikely redirects anyways, but I would think the useful ones would heavily outweigh the silly ones. And again, as these would be in unused space anyways a few extra redirects couldn't really hurt. It could also not create a redirect if an article has already been deleted in that same space. ~ JohnnyMrNinja 05:47, 3 May 2008 (UTC)[reply]
  1. ^ rhetorical reference.
  2. ^ rhetorical reference.