To run a bot on the English Wikipedia, you must first get it approved. Follow the instructions below to add a request. If you are not familiar with programming, consider asking someone else to run a bot for you.
If your task could be controversial (e.g. most bots making non-maintenance edits to articles and most bots posting messages on user talk pages), seek consensus for the task. Common places to start include WP:Village pump (proposals) and the talk pages of the relevant policies, guidelines, templates, and/or WikiProjects. Link to this discussion in your request for approval.
You will need to create an account for your bot if you haven't already done so. Click here when logged in to create the account, linking it to yours. (If you do not create the bot account while logged in, it is likely to be blocked as a possible sockpuppet or unauthorised bot until you verify ownership.)
Create a userpage for your bot, linking to your userpage (this is commonly done using the {{bot}} template) and describing its functions. You may also include an 'emergency shutoff button'.
II
Filing the application
easy-brfa.js can be used for quickly filing BRFAs. It checks for a bunch of filing mistakes automatically! It's recommended for experienced bot operators, but the script can be used by anyone.
Enter your bot's user name in the box below and click the button. If this is a request for an additional task, put a task number as well (e.g. BotName 2).
Complete the questions on the resulting page and save it.
Your request must now be added to the correct section of the main approvals page: Click here and add {{BRFA}} to the top of the list, directly below the comment line.
For an additional task request: use {{BRFA|bot name|task number|Open}}
III
During the approvals process
During the process, an approvals group member may approve a trial for your bot (typically after allowing time for community input), and AnomieBOT will move the request to this section.
Run the bot for the specified number of edits/time period, then add {{Bot trial complete}} to the request page. It helps if you also link to the bot's contributions, and comment on any errors that may have occurred.
AnomieBOT will move the request to the 'trial complete' section by moving the {{BRFA}} template that applies to your bot.
If you feel that your request is being overlooked (no BAG attention for ~1 week) you can add {{BAG assistance needed}} to the page. However, please do not use it after every comment!
At any time during the approvals process, you may withdraw your request by adding {{BotWithdrawn}} to your bot's approval page.
IV
After the approvals process
After the trial edits have been reviewed and enough time has passed for any more discussion, a BAG member will approve or deny the request appropriately.
For approved requests: The request will be listed here. If necessary, a bureaucrat will flag the bot within a couple of days and you can then run the task fully (it's best to wait for the flag, to avoid cluttering recent changes). If the bot already has a flag, or is to run without one, you may start the task when ready.
For denied/expired/withdrawn requests: The request will be listed at the bottom of the main BRFA page in the relevant section.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Denied.
Function details: it can remove infobox fields that are already present and equal in Wikidata, to ease the transition to Phase 2 and help users focus on text instead of data. For example, if {{Authority control}} were redesigned to use Wikidata, the bot could remove the LCCN, GND, VIAF, etc. fields, and it can even import missing ones into Wikidata. See a test edit on it.wp.
Note: This bot appears to have edited since this BRFA was filed. Bots may not edit outside their own or their operator's userspace unless approved or approved for trial. AnomieBOT⚡ 07:04, 28 May 2013 (UTC)[reply]
WTF?!? It is now a global bot, and thus can remove interwiki links like many others. I'll wait for the result of this BRFA before letting it run this task, of course. --Ricordisamoa 07:20, 28 May 2013 (UTC)[reply]
Reading over the request for comment, I find myself agreeing with Legoktm on this one. While I understand that this bot would not change fields to use Wikidata unless they already have identical data in them, I still think this is a bit too far removed from what was said at the RfC. I don't think having a bot automatically change fields to use Wikidata is really in line with "this modification should be done carefully and deliberately, at least at first". I also note that the {{Authority control}} template was used as an example in the discussion as well (not sure if that's where you got it from) to which the response was "If we do start using Wikidata in infoboxes, can we please discourage people from using bots to import it". I would say that judging from that RfC most people feel both that Wikidata is still finding its feet and we are also still working out what works best for en.wikipedia, so I would suggest that this is perhaps a little too early to start considering bots to automate the transition, or at the very least this would need some more community input. - Kingpin13 (talk) 12:35, 1 June 2013 (UTC)[reply]
I was clear on that, I don't see that there was any confusion. Unless you mean that my quote from Kaldari above was commenting on bots editing templates, not articles. Either way, it doesn't really change much, I don't see a consensus for this task presently. I'd say that if you want to go ahead with this task you'll need to get some more input from the community. - Kingpin13 (talk) 23:39, 6 June 2013 (UTC)[reply]
It is up to you to obtain consensus from the community for this task, or show that there is an existing consensus (presently there appears to be consensus that this is not a good bot task, for now at least). If you're not prepared to do that I can mark this as denied. If you want more time to do that I can mark this as expired, or you could withdraw it for the time being. - Kingpin13 (talk) 07:25, 14 June 2013 (UTC)[reply]
If there's any chance to get community consensus (even for the future), I'll try to. Where should the discussion take place? --Ricordisamoa17:46, 14 June 2013 (UTC)[reply]
Denied. No consensus for the task atm. You will need to start an RfC and establish a consensus for this task before it can be approved --Chris15:56, 15 July 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: When Reflinks suggests citation templates to be added to articles, users sometimes save the suggestions without first removing incorrect |author= parameters, such as "Log in om een reactie te plaatsen." for YouTube. This bot would remove those incorrect values, and perform any other AWB general fixes at the same time. See this edit for an example processed manually.
Discussion
I have noticed that, perhaps due to misclassification of metadata by site webmasters, Reflinks often populates |author= with days of the week, dates, timestamps and other similar chaff. It may also include incidental "by", "[newspaper] staff" and "staff reporter", which are not recommended by our guidelines. Will these be targeted for removal? -- Ohc ¡digame! ¿que pasa? 02:58, 23 May 2013 (UTC)[reply]
It's not out of the question, but I'll start with the easiest errors to remove first, and get more sophisticated as time goes forward. Your suggestions are always valuable! GoingBatty (talk) 04:38, 23 May 2013 (UTC)[reply]
However, I won't remove dates, since sometimes the |date= parameter is empty, and I don't want to accidentally remove a valuable (albeit misplaced) piece of information. GoingBatty (talk) 00:47, 25 May 2013 (UTC)[reply]
Like Yes, that's the sort of stuff I often see directly imported via Reflinks and want to remove. It's true there is a fine line between cleanup and loss of valuable information, but that looks like excellent janitorial work. -- Ohc ¡digame!¿que pasa?01:46, 25 May 2013 (UTC)[reply]
These tasks are totally what bots are designed for, imo; they can remove millions of errors of this type, which occur simply because the army of editors is more interested in content than in tedious details, while the uniformity those tedious details provide makes the articles easier on the readers. -68.107.136.227 (talk) 03:15, 29 May 2013 (UTC)[reply]
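As a sketch of the kind of heuristic discussed above (the patterns here are illustrative guesses, not GoingBatty's actual list), a classifier might remove obvious boilerplate while deliberately leaving date-like values for a human, as noted in the discussion:

```python
import re

WEEKDAYS = "monday|tuesday|wednesday|thursday|friday|saturday|sunday"

# Boilerplate that is clearly not a byline. These patterns are
# illustrative guesses, not the bot's actual list.
CHAFF_PATTERNS = [
    rf"^(?:{WEEKDAYS})$",
    r"^by$",
    r"^staff(?: reporter| writer)?$",
    r"^log in om een reactie te plaatsen\.?$",  # YouTube's Dutch login prompt
]

DATE_LIKE = re.compile(r"\d{4}")  # crude: anything containing a 4-digit year

def classify_author(value):
    """Classify an |author= value as 'remove', 'date-like', or 'keep'."""
    text = value.strip().lower()
    if any(re.search(p, text) for p in CHAFF_PATTERNS):
        return "remove"
    if DATE_LIKE.search(text):
        # Per the discussion above, date-like values are left for a human,
        # since |date= may be empty and the information could be useful.
        return "date-like"
    return "keep"
```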
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
I've completed a full set of test edits here. This should give you a good idea of what the bot actually does. Thanks, and please let me know if you need clarification.
Discussion
Hello,
I just wanted to suggest a basic tweak to the algorithm: the articles with one slot already should also be included in the random draw. That way, it would not become too restrictive when selecting the random slots.
The current one works well enough, though; let me know if you favour it for any particular reason.
Also, would it be possible for a section to get two random slots in the draw?
Here's the thing: we only have ten slots. I see no real reason why one section should get two out of ten slots (one from 10%+, another from random draw) when another gets none...that doesn't seem balanced, at least to me. Theopolisme(talk)21:10, 22 May 2013 (UTC)[reply]
Fair enough. I'm taking it this algorithm could be revisited if the sectioning is changed? My last question is still unanswered, though (whether it's possible for the bot to award a section two random slots, or whether it always awards at most one per section). TheOriginalSoni (talk) 22:25, 22 May 2013 (UTC)[reply]
"These slots can go to any section without a slot already", from the request details--so once a section is selected randomly, it won't be selected randomly again in that run. Theopolisme(talk)01:13, 23 May 2013 (UTC)[reply]
Approved for trial (8 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. When ready, this trial means 2 runs (I think) :)·Add§hore·Talk To Me!16:15, 27 May 2013 (UTC)[reply]
Trial complete. I ran the bot for week 29, see [1]. This is the equivalent of one run; I didn't want to do any more because I'm still waiting for the project's decision on how far in advance we should schedule. I've done two runs in total, though, without hiccups, so I'd think this would be enough for approval. Theopolisme (talk) 01:26, 28 May 2013 (UTC)[reply]
I would ask the project to check and see if there is anything else they want; as the task seems so straightforward, there does not seem to be much to discuss about it here. -68.107.136.227 (talk) 22:55, 28 May 2013 (UTC)[reply]
Hi! As a member of the project and among those who requested this task, I can confirm that I don't require anything except maybe a running list of plain links for the entire schedule (and not just the current week). Other than that, everything looks perfectly in order to me. TheOriginalSoni (talk) 10:34, 29 May 2013 (UTC)[reply]
Theo, see my comment above. I meant I also would have wanted another page with the links for the entire schedule, and not just the current week. TheOriginalSoni (talk) 03:40, 31 May 2013 (UTC)[reply]
It does not provide plain links. So when I am trying to use the links (I primarily use them for 1) making Teahouse banners for TAFI and 2) making one-line blurbs for all TAFIs), I'm forced to copy-paste the name of the article from the Schedule, a task that takes a lot of time and effort, considering how easily it could be automated. I hope that such a page will also come in handy should we require it in the future. TheOriginalSoni (talk) 14:09, 31 May 2013 (UTC)[reply]
I'm also surprised to see that the current week is not automatically removed from the Schedule whenever the week ends. Requesting Theo to implement that. TheOriginalSoni (talk) 12:51, 31 May 2013 (UTC)[reply]
Just so we're clear on this, will this task break because of the addition of the box for images? If so, would it be better to have the bot place (an empty box) and remove the picture box, so incorrectly placed boxes wouldn't break the bot's function? TheOriginalSoni (talk) 14:09, 31 May 2013 (UTC)[reply]
Seeing as the Schedule archive for scheduled entries is both outdated and supposed to be maintained manually, I think the bot would be very helpful here. Instead of deleting the week from the schedule, it would just move it to the archival page. TheOriginalSoni (talk) 14:30, 31 May 2013 (UTC)[reply]
Apparently there is a schedule archive and a normal archive. Do the entries from HA selected for this week's schedule automatically make it to the normal "successful archive"? TheOriginalSoni (talk) 14:40, 31 May 2013 (UTC)[reply]
Once again Approved for trial (8 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. When ready, this trial means 2 runs (as before) :) ·addshore·talk to me!08:36, 1 June 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Function overview:
Removing existing interwikis from the local wiki (en.wiki) and adding them to the item. It also adds {{Interwiki conflict}} (like this for fa.wiki) to pages which have an interwiki conflict. This code has been tested on some wikis (fa, ar, hu, ckb, ur) and works fine.
Links to relevant discussions (where appropriate):
Edit period(s):
At first it will work on this list (enwiki.txt); eventually it will work on new articles and new categories daily, like here or here.
Estimated number of pages affected:
Unknown; it will run daily.
Already has a bot flag(Yes/No):
Yes, for interwiki.py; my bot is also a global bot.
Function details:
My bot has a flag on Wikidata to update interwikis, and it works fine. It currently runs on fa.wiki, where it adds pages with conflicts to fa:رده:صفحههای دارای تداخل میانویکی so users can edit those pages to solve the conflicts faster :) Yamaha5 (talk) 11:20, 10 May 2013 (UTC)[reply]
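A sketch of the conflict check the function details describe, under the assumption that the item's sitelinks are available as a simple language-to-title mapping (the function name and data shapes are illustrative, not the bot's real code):

```python
import re

# Match [[xx:Title]] style interwiki links (2-3 letter codes plus
# optional variant suffix; a simplification of the real rules).
INTERWIKI = re.compile(r"\[\[([a-z]{2,3}(?:-[a-z]+)?):([^\]|]+)\]\]")

def split_interwikis(wikitext, item_sitelinks):
    """Compare a page's local interwiki links against the Wikidata
    item's sitelinks. Links that agree with the item (or are not on it
    yet) can be moved to the item and removed locally; disagreements
    are conflicts needing {{Interwiki conflict}} and human review.
    """
    movable, conflicts = [], []
    for lang, title in INTERWIKI.findall(wikitext):
        title = title.strip()
        on_item = item_sitelinks.get(lang)
        if on_item is None or on_item == title:
            movable.append((lang, title))
        else:
            conflicts.append((lang, title, on_item))
    return movable, conflicts
```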
Hmm. I will quickly link to this, which as far as I can tell has the lists of articles you are trying to create, without having to add a tracker template to thousands of pages. My bot is already and still constantly scanning every page in every list, waiting for Wikidata items to be created for them or for links to be added to Wikidata. I don't see much point in duplicating the task, but would prefer not to take action on this approval request myself. Back to you User:MBisanz. ·Add§hore· Talk To Me! 20:06, 13 May 2013 (UTC)[reply]
The differences between my bot and yours are:
Some wikis, like tl.wikipedia or os.wikipedia, have changed their namespace aliases, so bots cannot detect them.
Your bot doesn't show pages which don't have an item, but mine adds such pages to a no-item category so users can merge or redirect them.
In my opinion it is not a duplicate; my bot complements yours.
On another point, mine is not only a link remover: it adds interwikis to Wikidata and also solves some kinds of item conflicts. Yamaha5 (talk) 21:00, 13 May 2013 (UTC)[reply]
If this is a long-term ongoing task, I see no reason not to have two bots. Wikipedia is volunteer-run and anonymous; if the bot is useful, having a second or backup bot, or a bot that deals with Wikipedias not addressed by the first bot, is a good idea. I think this should be tested. -166.137.209.143 (talk) 13:50, 15 May 2013 (UTC)[reply]
I agree redundancy is a good thing. I'm not sure, though, if we want a bot adding the template, which predates Wikidata. MBisanz talk 10:51, 16 May 2013 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) ·addshore·talk to me!11:05, 8 June 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
As a TFA delegate, I think this is an excellent idea, not least because I suggested it! Move protection is generally put in place for the day of a featured article's appearance on the main page, because it prevents vandalism such as this taking place. However, it's quite easy for me to forget to do it at all, or set it to expire at the start of the day rather than at the end of the day. It would be very helpful to have an adminbot to do this limited task (and it would make my life, and that of current/future TFA delegates, slightly easier as a result). BencherliteTalk18:40, 9 May 2013 (UTC)[reply]
I am hopeless at botcode but I can spot typos in edit summaries: "Upcomping"? I have also mentioned this BRFA at WT:TFAR since that's where all the kool kids hang out people with an interest in TFAs tend to hang out. BencherliteTalk19:00, 9 May 2013 (UTC)[reply]
Agree that this is a great idea. It's logical and sensible and there's no real downside here so long as it operates properly. Thanks for the help, — Cirt (talk) 19:27, 9 May 2013 (UTC)[reply]
If we're having a bot do this, is it also worth having the bot fully protect all templates used in TFA? It's been less of a problem recently, but we used to get quite a bit of that sort of vandalism (e.g. penis appears floating on TFA, but editing that page will not remove it and someone has to check all unprotected/semi-protected transcluded templates). WJBscribe(talk)22:34, 9 May 2013 (UTC)[reply]
Alternatively, WP:FAP could be revived, cascade protected and the bot could transclude all the templates from TFA onto that page, instead of protecting each template. WJBscribe(talk)22:51, 9 May 2013 (UTC)[reply]
That sounds like a better idea to me. I think that we should stick to just move protection for this task, and look into using WP:FAP in another BRFA. Legoktm (talk) 04:21, 13 May 2013 (UTC)[reply]
I'm hopeless at python, and I still am not used to legoktm being highlighted blue now, but I don't see any issues with this bot. —cyberpower ChatOnline 03:17, 10 May 2013 (UTC)[reply]
I reviewed the code, and I have the following comments/questions:
What will the bot do should the TFA title subtemplate not exist or contain the empty string for a particular day? I guess either pywikipedia will throw an exception at line 71 or you'll run into an undefined variable access at some point.
I see the bot is supposed to run at 5 minutes before midnight. If this is somehow delayed, it looks like the bot will skip today and go protect tomorrow's FA instead.
What happens if the listed FA title happens to be a redirect? Will it move-protect the page or the redirect? For that matter, what makes sense for it to do there? Off the top of my head, it seems that move-protecting the page and both move- and edit-protecting the redirect would make the most sense.
I note the query you are performing will set the protections to just move protection, removing any other protection (e.g. edit protection). This is obviously not right. While T48911 exists, no one has worked on it yet.
I wonder if the bot should try to restore the old move semi-protection after the page is off the main page.
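The protection-preservation point could be addressed roughly like this: rebuild the full `protections` parameter from the page's existing direct protections before adding the move level. This is a sketch against the assumed shape of the API's prop=info inprop=protection output, not the bot's actual code.

```python
def build_protect_params(existing, move_level="sysop", expiry="1 day"):
    """Build an action=protect `protections` value that adds move
    protection while re-stating every existing *direct* protection, so
    edit protection is not silently dropped (the problem noted above).
    `existing` is assumed to be a list of dicts with 'type' and 'level'
    keys, as returned by prop=info inprop=protection.
    """
    kept = {}
    cascade = False
    for entry in existing:
        if "source" in entry:
            continue  # cascading protection from another page: not ours
        if "cascade" in entry:
            cascade = True  # the page itself cascades; preserve the flag
        kept[entry["type"]] = entry["level"]
    kept["move"] = move_level
    protections = "|".join(f"{t}={lvl}" for t, lvl in sorted(kept.items()))
    # Note: the real API takes one expiry per protection; a single
    # shared value is a simplification here.
    params = {"protections": protections, "expiry": expiry}
    if cascade:
        params["cascade"] = 1
    return params
```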
Good bot idea. Needs some tweaks before implementing. Can it run half an hour earlier, so editors can notice if it has not run? Can it have an automatic kill switch if it is delayed? If it protects the next day due to a failed run, that's probably an annoyance, easily fixed, not a crisis. -166.137.209.143 (talk) 13:56, 15 May 2013 (UTC)[reply]
I would rather, in fact, that it added move protection to articles at the next time it runs after the articles have been selected, rather than waiting until 5 minutes before the article hits the main page. This means there's plenty of time to see whether the bot has missed something and also stops any move wars taking place between selection and TFA day. (Admittedly I'm not aware of any naming issues in the 6 months that I've been selecting TFAs, but you never know...) BencherliteTalk15:35, 15 May 2013 (UTC)[reply]
Yes, it seems it would be useful to have time to catch an omission. I do not see any obvious harm in doing it early, moves at the last minute on a TFA seem unlikely. -166.137.209.148 (talk) 17:39, 15 May 2013 (UTC)[reply]
Ok so I've changed the workflow so that the bot will try and protect all existing TFAs that have been scheduled until it hits a non-scheduled day. I was thinking have it run around noon, and then maybe have another script run around 18:00 and drop a note on WP:AN if the page isn't protected yet. Re Anomie:
Now the bot will just stop if the subpage doesn't exist or is an empty string.
I suggested changing the run time to around noon (anytime really would work since it should protect a few days in advance)
That sounds like a good idea, I'll code that in.
Fixed.
Since non-autoconfirmed users can't move pages anyways, it seems pointless imo.
Maybe I just don't know Python or pywikipedia well enough, but now it looks like it will protect the Template:TFA title subpage itself?
Ok
Seems not done yet, can't review. ;)
While it should be good for now, since the TFA is unlikely to have upload protection (or LQT "newthread" or "reply" protection) and create protection doesn't apply to pages that already exist which leaves only 'edit', I'd personally rather see it just blindly preserve all non-move protection types. Also, you should probably ignore any protection entries that have a "source" set just in case the article somehow gets transcluded onto a cascade-protected page, and preserve the cascading flag in case the article is itself cascade protected.
1. Seems I had this fixed locally, just never pushed to github.
3. Implemented. Though, the bot should probably re-instate edit semiprotection if it existed on the redirect...
4. Done as well. It seems some pages (noticed it on the main page) have "aft" protection, so will be useful. Any protection entry with the 'source' key is just ignored, and anything with 'cascade' will cause &cascade=1 to be set.
It looks like do_page() will reach the end of the function without returning any specific value if the page is not a redirect and should_we_protect() returns a falsey value. That would prematurely terminate the loop in main().
You're using p_status rather than real_p_status when calling should_we_protect() on the redirect target.
Err... You're passing the date instead of p_status when calling protect() in the non-redirect case?
Your loop in prot_status should be where protections with 'source' set are ignored. As it is, a direct protection may be overwritten with a 'source' protection if the page is protected both directly and by cascading.
Does the + operator instead of .append() on lines 70-71 do the right thing?
As mentioned, it would be good to re-set an overridden edit semi-protection on the redirect after the TFA protection expires.
Something good for preliminary testing would be to comment out lines 72-74 (the actual submission of the action=protect query) and instead just print params, to make sure the rest of the bot runs correctly (I do this sort of thing with pretty much every change I make to any of AnomieBOT's tasks). Good also would be to somehow feed in test pages to make sure all the code paths in do_page and its sub-functions are hit. Anomie⚔12:07, 26 June 2013 (UTC)[reply]
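The dry-run pattern Anomie describes can be sketched in Python like this (the `site.api(...)` call is a stand-in for whatever submission helper the bot framework provides; the pattern, not the call, is the point):

```python
DRY_RUN = True  # flip to False once the printed queries look right

def protect(site, title, params):
    """Submit an action=protect query, or just show it in dry-run mode.
    In dry-run mode nothing is sent; the fully built query is printed
    and returned so every code path can be exercised safely.
    """
    query = dict(params, action="protect", title=title)
    if DRY_RUN:
        print(query)   # inspect what *would* have been sent
        return query
    return site.api(**query)  # hypothetical real submission
```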
A long time ago east used to run a bot under his main account to do this (as you can see he didn't even bother to hide it), it'll be nice to get an approved bot doing this.
Ooh, this new (to me at least :P) notification feature is cool. I uploaded the source code for my old bot here, maybe you can crib some ideas like IRC notifications of move vandalism done shortly before the protection cronjob. I know for sure that questions #3 and #4 are accounted for in my bot, I learned those mistakes the hard way. Be aware that my code is uncommented, messy (and that's being charitable), and was built on top of a fork of a now five-year-old version of pywikipedia. Good luck with your bot. — east718 | talk | 15:29, 24 June 2013 (UTC)[reply]
Thanks! If the bot protects as soon as the page is selected, it should reduce the chance of page-move vandalism, but a separate IRC bot notifying that it was recently moved is a good idea. I'll try and write a bot to do that soon. Legoktm (talk) 08:11, 26 June 2013 (UTC)[reply]
The dropdown list of protection reasons now includes "Forthcoming TFA" which might help - what I've been doing recently is adding a wikilink to the TFA blurb for that date/article in the "other/additional reason" field, which might be a useful habit for the bot to copy. BencherliteTalk10:30, 15 October 2013 (UTC)[reply]
Approved for trial (5 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Naturally, report mishaps. Complain if trial parameters are unsuitable. Josh Parris06:48, 10 November 2013 (UTC)[reply]
The bot doesn't have admin powers (yet) so I'm not surprised that log is empty - I assume that Legoktm will be protecting the couple of TFAs I've scheduled in the last couple of days by running the script through his own account before he seeks +bot+sysop for the bot. BencherliteTalk22:44, 20 November 2013 (UTC)[reply]
Trial complete... or close enough. Operation Crossroads and Ambohimanga were protected. Some things I noticed and had to accommodate:
Template:TFA title/ subpages are created by AnomieBOT II and may not exist even if the TFA has been scheduled. The bot now falls back upon WP:TFA/ subpages and uses the same regexes as AnomieBOT to determine the page title.
If a TFA is missing, the bot will look at least 35 days in the future from today before stopping.
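The look-ahead behaviour could be sketched as follows (`exists()` stands in for the page-existence check against the Template:TFA title/ and WP:TFA/ subpages; the function is illustrative, not the bot's code):

```python
import datetime

def scheduled_tfa_dates(exists, start=None, max_lookahead=35):
    """Collect upcoming dates that have a scheduled TFA, scanning a
    full `max_lookahead` days so a single missing day does not cut the
    run short. `exists(date)` is a caller-supplied check.
    """
    start = start or datetime.date.today()
    return [start + datetime.timedelta(days=n)
            for n in range(max_lookahead)
            if exists(start + datetime.timedelta(days=n))]
```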
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Function details: Same functionality as described in the previous BRfA, except removing the wrapper when the linked file size is below some threshold. Last year the average web page was nearly 1 megabyte, so a threshold in the 2 to 10 megabyte range?
And as pointed out in the TfD, the template is obsolete: the icon is automatically added to PDF links via CSS (not supported in the dying IE6), Adobe Reader now starts faster, and Chrome/Safari/Firefox have built-in PDF viewers.
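A minimal sketch of the proposed threshold check, assuming the server reports Content-Length on a HEAD request (the 5 MB cutoff is just the midpoint of the suggested 2-10 MB range, and the function names are illustrative):

```python
import urllib.request

THRESHOLD = 5 * 1024 * 1024  # 5 MB: midpoint of the suggested range

def pdf_size(url, timeout=10):
    """Return the linked file's size in bytes from a HEAD request, or
    None if the server does not report Content-Length."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        length = resp.headers.get("Content-Length")
    return int(length) if length else None

def should_remove_wrapper(size):
    """Per the proposal, the wrapper is only worth keeping when the
    file is large enough that a size warning actually helps the
    reader; unknown sizes are left alone."""
    return size is not None and size < THRESHOLD
```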
Discussion
The decision at the TfD seems to be to move everything over to the standard citation template/module, rather than to simply remove the template altogether (regardless of the size). So surely this bot should be converting the template rather than simply removing the wrapper? This also means having to wait until the citation template is updated (I believe the discussion is still ongoing as to what to call the new parameters). This also leaves a bit of a question mark over what to do in cases where the PDFLink template is used for something other than a citation. - Kingpin13 (talk) 15:13, 8 May 2013 (UTC)[reply]
Well, PDFlink and the Cite * series of templates have always been fundamentally incompatible. PDFbot was programmed to remove either nesting. Perhaps the TfD should be relisted, since it's only used with plain-text citations and the external links section? This BRfA concerns the edge cases of 1) very large files and 2) files without a .PDF extension. — Dispenser 17:32, 8 May 2013 (UTC)[reply]
In that case I'm sort of struggling to understand the closure. I'm guessing what was meant was that something like
<ref name=startVLBI100m>Proceedings of the 6th European VLBI Network Symposium, {{PDF|[http://www.mpifr-bonn.mpg.de/div/vlbi/evn2002/book/EPreuss.pdf The Beginnings of VLBI at the 100-m Radio Telescope]|100 KB}}, June 25th-28th 2002, Bonn, Germany</ref>
should become
<ref name="startVLBI100m">{{cite web|url=http://www.mpifr-bonn.mpg.de/div/vlbi/evn2002/book/EPreuss.pdf The Beginnings of VLBI at the 100-m Radio Telescope|title=Proceedings of the 6th European VLBI Network Symposium|date=June 25th-28th 2002|publisher=Bonn|location=Germany|format=PDF}}</ref>
(plus a formatsize parameter when that becomes available). But this does not cover cases where the PDFLink template is used outside of <ref></ref> tags (which you seem to suggest is most cases, e.g. external links). I'll ask Plastikspork to clarify what he judged consensus indicated should happen in these cases. - Kingpin13 (talk) 18:39, 8 May 2013 (UTC)[reply]
Sorry for not being more clear. By my reading of the discussion, there was consensus to replace it by standard citation templates when it was being used as a citation. I had not really put much thought into other cases, like where it was being used for simple external links. The deprecate and replace comment was meant for cases where it is being used in place of a citation tag. Thanks! Plastikspork―Œ(talk)03:29, 11 May 2013 (UTC)[reply]
Thanks for the clarification. Based on the above, Dispenser, I would say it's not necessary to remove this template in most cases. Let me know where you want to go with this request. - Kingpin13 (talk) 20:21, 24 May 2013 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) I'm thinking withdraw? Unless you want to modify the task significantly to instead convert refs to cite templates (which would not be a trivial task) - Kingpin13 (talk) 14:13, 31 May 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Function details:
CSD#G13 is becoming a reality - articles will only be tagged as G13 by a different process if they meet specific criteria. This would be an amendment to my existing adminbot that would a) check that it was indeed that process that tagged the page, and b) if so, delete it.
I don't think this is mass deletion - it's deletion based on specific matching criteria. The intent of this BRFA is clearly not to circumvent the process - it's to allow the bot - which already has the role of deleting compliant pages - to delete additional compliant pages (✉→BWilkins←✎) 11:15, 7 May 2013 (UTC)[reply]
Deleting pages that have not been edited in a while is a completely different case from deleting pages at the author's request, where at least one person (the author) has reviewed the page and decided it does not warrant keeping. It is claimed in the linked discussion that this proposal will result in 60-70,000 deletions. If that is done blind (i.e. without review), how can it be anything but mass deletion? In any case, there is still a need to obtain community approval for this task independent of the RfC. SpinningSpark 16:42, 7 May 2013 (UTC)[reply]
It's my understanding that someone else (Scottywong, IIRC) is writing the code that will identify pages based on the criteria and tag them appropriately. This bot would merely a) check that the page is tagged right, and b) check that the tag was added by the right bot, and then delete only if both conditions are met (✉→BWilkins←✎) 17:20, 7 May 2013 (UTC)[reply]
I can't see this going forward without community approval. It should be emphasized that these articles are not in article space and are not being edited. I would like to see them deleted, but this requires a community consensus broader than BRFA. -166.137.209.143 (talk) 14:08, 15 May 2013 (UTC)[reply]
The AfC Helper tool has been adding the |declinets= and |decliner= parameters for ages (over a year, maybe 1.5). To avoid a problem I had today - a "submitter" corrects a draft, re-requests a review by placing a note on the reviewer's talk page, and doesn't submit it "correctly" - could the bot check that the last non-bot edit was the one adding that parameter, and delete only those pages? mabdul 23:52, 8 May 2013 (UTC)[reply]
More discussion on this would be good. I see that the original discussion has suggested that a bot not be the deleter ... which makes my original request somewhat irrelevant. However, there are perhaps some good things to come from the process (✉→BWilkins←✎) 11:34, 30 May 2013 (UTC)[reply]
The last I saw, "no bots acting on G13" was the consensus. This was rendered moot, for the time being - I thought I had withdrawn it earlier (✉→BWilkins←✎) 08:52, 1 August 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: I have recently noticed that we have many instances of templates that should be substituted, such as unsigned templates (see the recently approved BRFA for the (even more recently) retired user). I've been running a similar task on Wikimedia Commons for some time now, and was recently approved on Wikidata to do the same. The bot gets a list of templates that should only be substituted, then checks for transclusions and attempts to substitute them if possible. Hazard-SJ ✈ 03:22, 1 May 2013 (UTC)[reply]
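The rewrite step described here can be sketched as follows. This is a minimal illustration, not Hazard-Bot's actual code: the helper name is hypothetical, and the real bot would also need to handle template redirects, {{nobots}} exclusions, and noinclude sections.

```python
import re

def substitute_templates(wikitext, template_names):
    # Hypothetical helper: rewrite each transclusion {{Name|...}} to
    # {{subst:Name|...}}, leaving calls that are already substituted alone.
    for name in template_names:
        first, rest = name[0], name[1:]
        pattern = re.compile(
            r"\{\{\s*(?!subst:)"                         # opening braces, not already subst'd
            + "[" + first.upper() + first.lower() + "]"  # first letter is case-insensitive
            + re.escape(rest)
            + r"(?=\s*[|}])"                             # followed by parameters or closing braces
        )
        wikitext = pattern.sub("{{subst:" + name, wikitext)
    return wikitext
```

For example, `substitute_templates("{{unsigned|Alice}}", ["Unsigned"])` yields `"{{subst:Unsigned|Alice}}"`, while text that already uses `{{subst:...}}` is left unchanged.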
If they should be substituted, then the task is bot appropriate. Is there some reason that AnomieBOT should do it over Hazard-Bot? I think with tasks that require routine bot work, having a couple of bots well-coded for the task makes sense. My opinion is that a trial at least for part of the task, the unsigned templates, is appropriate, unless there is a reason to not substitute them. -68.107.137.178 (talk) 08:51, 5 May 2013 (UTC)[reply]
As for AnomieBOT's current actions, the outline as written by Anomie is quite a good implementation, and I have no problem with it ... I actually feel safer with AnomieBOT going through that category. However, I'd still like to substitute the unsigned templates, since AnomieBOT isn't currently doing them and that should be safe enough for a limited number of templates. Hazard-SJ ✈ 03:31, 7 May 2013 (UTC)[reply]
Approved for trial (100 edits or 2 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. For templates that specifically say they should be substed. ·Add§hore· Talk To Me! 16:18, 27 May 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Function details: No changes at all. Just retrieving search results and processing them, for testing with real users (I'm doing a dissertation on the topic). The results will then be published.
Note: This bot has edited its own BRFA page. Bot policy states that the bot account is only for edits on approved tasks or trials approved by BAG; the operator must log into their normal account to make any non-bot edits. AnomieBOT⚡09:08, 23 April 2013 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Please let us know if we can close the task. MBisanztalk11:36, 9 May 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Note: The user account this request is for is also listed as the Operator, but the account name does not clearly indicate that the account is a bot and the account has very few edits. Please note that WP:Bot policy states that a bot account's username should make it immediately clear that the account is in fact a bot, which is normally done by having the account name end with the word "Bot". Also note that a bot may not operate itself, so the Operator field should identify the account of the human running the bot. AnomieBOT⚡15:39, 25 May 2013 (UTC)[reply]
The operator of the bot has been corrected. This request was created using the bot's username, which automatically placed itself as the operator. The actual operator/initiator is Omer Rajput. Omer rajput (talk) 15:51, 25 May 2013 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) any updates? ·addshore·talk to me!11:05, 8 June 2013 (UTC)[reply]
There's no official time limit. I generally try to give quite a long while for them to reply (about a month, or sometimes even two), mainly because BAG is often very slow at approving requests, so it would be a bit unfair to enforce a faster response time than BAG itself. Also, I like to give a warning (e.g. "If there's no response within a week, I will mark this request as expired") to give the operator a heads up. --Chris 12:31, 24 June 2013 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Any updates? If there is no response soon, this request may be marked as expired. --Chris14:24, 2 July 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Bots in a trial period
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: Traverses Category:Empty categories awaiting deletion, doing a purge/forcelinkupdate on each item. Checks for server congestion and has a backoff strategy.
This is another maintenance-category update issue caused by templates that do conditional category inclusion based on time, which creates a known caching/category-table issue (see bugzilla #s 5382, 12019, 31628). This time it's empty categories marked with CSD C1, which allows them to be deleted after a four-day waiting period. The category table won't get updated when the four days expire unless and until the empty category is edited or special-purged after the four days are up.
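For illustration, the purge itself is a single MediaWiki API call with `forcelinkupdate` (a real API parameter), and a capped exponential backoff is one simple way to implement the congestion handling the request mentions. The constants below are invented; the bot's actual strategy isn't published.

```python
def purge_params(titles):
    # Parameters for a MediaWiki API purge that also refreshes the link
    # and category tables -- the step an ordinary page view skips.
    return {
        "action": "purge",
        "titles": "|".join(titles),
        "forcelinkupdate": 1,
        "format": "json",
    }

def backoff_delay(attempt, base=5, cap=300):
    # Capped exponential backoff: 5s, 10s, 20s, ... up to 5 minutes.
    # These numbers are illustrative only.
    return min(base * 2 ** attempt, cap)
```

A driver loop would POST `purge_params(batch)` to the API, sleeping `backoff_delay(n)` seconds after the nth consecutive server-load error.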
(Additional Null Bot News: I have recently deactivated task 2 and 3, as the project they related to, the deployment of WP:TAFI on the Main Page, has been discontinued.)
Trial complete. tl;dr nattering: Three cycles complete. I manually reviewed the resulting category placements after each run, and everything seems to have functioned as expected. One small surprise: I hadn't realized how many of the items already in the category were long-term stuck (weeks at least). The average workload for this task will be significantly smaller than originally predicted (25-50 purges/day, rather than 250). No reports of trouble that I could find. --j⚛e decker talk 20:25, 25 May 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Function details: Will take over all of RileyBot's tasks. Fairly straightforward; once we verify that everything is working correctly, an admin should feel free to block the now unneeded bot.
Discussion
I take some issue with the idea that we should be blocking Riley's bot. Retired users are no different from current users, and bots are blocked for malfunctions, not at random. Snowolf How can I help? 14:08, 18 May 2013 (UTC)[reply]
A block isn't inherently necessary, although the idea is that currently no one is able to maintain or update the bot if it were to run into trouble. Also note that it hasn't run since April. Theopolisme(talk)14:16, 18 May 2013 (UTC)[reply]
Then we don't need to block it at all :) Abandoned bot accounts are not routinely blocked as far as I'm aware, and we've had bots run by retired users both maintained and unmaintained run for years :) SnowolfHow can I help?14:38, 18 May 2013 (UTC)[reply]
The blocking issue was raised here also. I think WP:BOT is playing with fire in treating this as a social network first ("be nice to our retired fellows") rather than an encyclopedia that needs maintaining; bot owners without password integrity should have their bots blocked. But I have said my piece.
Approved for trial. Please provide a link to the relevant contributions and/or diffs when the trial is complete. Please test each task with a small number of edits (<50), and link to the test edits below. ·Add§hore· Talk To Me! 16:01, 19 May 2013 (UTC)[reply]
Sort of. I'll try to get this up and running before I go on vacation...I just need to set up a new repo, then install dependencies and configure pywikipedia...putting this on my todo list. Theopolisme(talk)03:20, 17 June 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Function details:
After chatting with the folks in #musicbrainz for a bit, it was pointed out that the release value is outdated and we should be using the superior release-group instead. With their API it's trivial to get the release id, so it's a relatively simple task.
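As a sketch of the lookup described above: a MusicBrainz release lookup with `inc=release-groups` (e.g. `/ws/2/release/<mbid>?inc=release-groups&fmt=json`) returns the release-group inline. The function name below is hypothetical; only the JSON shape follows the documented web service.

```python
def release_group_id(release_json):
    # Extract the release-group MBID from a parsed MusicBrainz release
    # lookup response. The "release-group" key is the documented one;
    # this helper is an illustration, not the bot's actual code.
    return release_json["release-group"]["id"]
```

For example, given a parsed response `{"release-group": {"id": "c9dccf18", "title": "X"}}`, the function returns `"c9dccf18"`.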
Discussion
Seems like an appropriate bot task. Would like to see the discussions linked, but probably not a big deal with this task. Still easier to provide all the information in the beginning rather than to discuss missing information that could be readily provided. -68.107.137.178 (talk) 07:57, 29 April 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Bots that have completed the trial period
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: Updates the tables in the article List of Indian Premier League records and statistics by parsing data from stats.espncricinfo.com (the external links below each table in the article). Most of the article is currently outdated, even though a few users maintain some parts. They usually update the tables only when something significant happens, even though some of them contain data that should be updated after every match. (Not all tables - only 34 of them - will be updated, though.)
Each table to be updated is linked to a function which updates it. The function first locates the section header under which the table lies. It then loads the page given by the URL in the function, and uses the DOM and XPath to extract and parse the data. The parsed data is then used to construct table rows which replace the old rows in the table, using the section header as a reference point. Translation tables are used to convert Cricinfo names, which can be confusing, into common names. Each run will usually take about 10 to 15 minutes (http://stats.espncricinfo.com/robots.txt requires a 15-second pause after each request), and although this can lead to edit conflicts, the bot saves the wikitext to a local file which can be used to make the edit without re-doing all the updates.
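The header-anchored replacement can be illustrated roughly like this. This is a simplified Python sketch, not the bot's actual PHP code; it assumes plain line-based wikitable markup and one table per section.

```python
def replace_table_rows(wikitext, section_header, new_rows):
    # Locate the section header, find the first wikitable after it,
    # keep the table's header rows, and splice in freshly built data
    # rows. Illustrative only -- the real bot's logic is in PHP.
    lines = wikitext.split("\n")
    try:
        start = lines.index(section_header)
    except ValueError:
        return wikitext  # header was changed: skip this table, as the bot does
    top = next(i for i in range(start, len(lines)) if lines[i].startswith("{|"))
    end = next(i for i in range(top, len(lines)) if lines[i] == "|}")
    keep = top + 1
    while keep < end:
        if lines[keep].startswith("!"):
            keep += 1  # column-header line
        elif lines[keep] == "|-" and keep + 1 < end and lines[keep + 1].startswith("!"):
            keep += 1  # row separator that belongs to a header row
        else:
            break      # first data row: stop keeping
    return "\n".join(lines[:keep] + new_rows + lines[end:])
```

If the section header is not found, the table is left untouched, mirroring the skip-to-next-function behaviour described below.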
Because section headers are used to locate the tables, even a slight change in a section header will cause an error (and the bot will skip to the next function). Also, adding, removing or reordering columns can result in a mismatch. For this reason the bot will not edit if it has new messages on its talk page, so that editors can inform me about such changes and I can update the bot accordingly. (I have not tested this yet, as there is no other account to post a message to the bot's talk page, so someone should check the source code to see whether it will work.) An editnotice should be created for the article when the bot is active.
Note that the bot's first edit may do things like replacing the deprecated align=left with style="text-align:left", closing some unclosed HTML tags, and assigning sort keys to some of the values in the tables so that they sort properly. This bot will use a hard limit of 5 rows per table, with a few exceptions. (The current article has a limit of 5 rows, but not a hard one: when there is a tie in the fifth spot, sometimes all rows with that value are used and sometimes the fifth row is left out, without any clear instructions or explanation.) Also, some minor changes may have to be made after the bot's first edit (but not after subsequent edits).
Note that the current season of the Indian Premier League will end on 26 May, after which this bot will not edit until next year (although its code can be forked to create bots for other major tournaments). Take this into consideration when deciding when to approve this bot.
This bot will not mark edits as minor, nor will it set the bot flag on them.
Approved for trial (50 edits or 5 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete.MBisanztalk11:39, 9 May 2013 (UTC)[reply]
First edit: It involves things such as assigning sort keys so that the tables sort correctly, and fixing unclosed <small> tags and deprecated attributes, so do not mind the big size change.
Human editors may not know the sorting algorithm used to keep the tables the bot maintains at 5 rows. Sometimes editors update a few tables before the bot is scheduled to run, and sometimes add (or remove) rows in the case of a tie at the fifth spot, as they do not know the rules the bot uses. However, the bot uses a hard limit of 5 (which is easier for automated programs) and, if there is a tie, uses available parameters other than the one sorted first to determine which row is "better" (rather than treating all tied rows as equal, which causes undefined behaviour according to [2]). The sorting algorithm can be found in User:IPLRecordsUpdateBot/Source/CricinfoDataParser.php and the sort orders for each function are listed in User:IPLRecordsUpdateBot/Source/StatsUpdateFunctions.php.
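The hard-limit tie-breaking described above amounts to a multi-key sort followed by a fixed slice. A rough Python equivalent of the idea (not a translation of the bot's PHP comparator; the field names are invented for illustration):

```python
def top_rows(rows, sort_keys, limit=5):
    # Hard top-`limit` slice with deterministic tie-breaking: sort on the
    # primary field, then on each remaining field in order, and cut at
    # exactly `limit` rows. Ties never expand or shrink the table.
    def key(row):
        # Negate numeric fields marked descending so one sort suffices.
        return tuple(-row[f] if desc else row[f] for f, desc in sort_keys)
    return sorted(rows, key=key)[:limit]
```

With a tie in runs at the fifth spot, the secondary key (here, batting average) decides which row stays, so the output always has exactly five rows.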
A user has requested the attention of a member of the Bot Approvals Group. Once assistance has been rendered, please deactivate this tag by replacing it with {{t|BAG assistance needed}}. jfd34 (talk) 09:00, 9 May 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
I'm not able to look at the source code where I am right now, but am I right in assuming that this is a newly written bot, and not based on the original code written for Legobot?
Forgot to ask. Since this is a newly written bot, should we expect it to perform exactly as the old one did, or have you made any improvements/other changes? Hersfoldnon-admin(t/a/c)18:26, 2 January 2013 (UTC)[reply]
FWIW, I have no idea why I requested that one as supervised.
Looking through the code, I didn't see anything that would handle the and templates?
From a technical implementation perspective, I found it to be faster to simply look through the categories, since that didn't require parsing templates and didn't involve fetching page content. It might be worth considering that. Legoktm (talk) 18:58, 2 January 2013 (UTC)[reply]
On line 78, please change key_re.match to key_re.search.
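For readers unfamiliar with the distinction: `re.match` only succeeds when the pattern matches at the very start of the string, while `re.search` scans the whole string, which is presumably why the change was requested. The pattern and input below are placeholders, not the bot's actual `key_re`.

```python
import re

# Placeholder pattern: a numbered-parameter marker somewhere in a
# template call. The real key_re in the bot's source is not shown here.
key_re = re.compile(r"\|\s*1\s*=")

line = "{{Template|1=value}}"
assert key_re.match(line) is None        # match() anchors at position 0 and fails
assert key_re.search(line) is not None   # search() scans and finds "|1="
```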
It is worth noting that the code still parses the page content, rather than checking categories. I would say that the bot is ready for a trial, in order to experimentally ensure that none of us overlooked any bugs when we read the code. →Σσς. (Sigma)05:38, 14 January 2013 (UTC)[reply]
Looks good Approved for trial. Please provide a link to the relevant contributions and/or diffs when the trial is complete. --Chris10:47, 19 January 2013 (UTC)[reply]
It means the operator's judgment and common sense should be used. If one run goes smoothly, then that's all it needs. If the run goes pretty smoothly but still needs a few tweaks, the bot can have a slightly longer trial. If something goes really wrong, the operator should stop the bot and report back here, and we'll take it from there. --Chris 07:09, 22 January 2013 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Any updates? ·addshore·talk to me!18:51, 16 June 2013 (UTC)[reply]
Withdrawn by operator. I'm withdrawing this because of the excessive number of issues found during the creation of this BRFA. I have been unable to find solutions to the problems this bot has (for example, this, where there is no proof of the errors occurring in the console). I may or may not re-open this in the future if a solution is found. Thine Antique Pen (talk) 22:32, 20 July 2013 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Approved requests
Bots that have been approved for operations after a successful BRFA will be listed here for informational purposes. No other approval action is required for these bots. Recently approved requests can be found here (edit), while old requests can be found in the archives.
Bots that have been denied for operations will be listed here for informational purposes for at least 7 days before being archived. No other action is required for these bots. Older requests can be found in the Archive.
These requests have either expired (because information required from the operator was not provided) or been withdrawn. These tasks are not authorized to run, but such lack of authorization does not necessarily follow from a finding as to merit: a bot that was approved for testing but never tested, or one for which the results of testing were not posted, for example, would appear here. Bot requests should not be placed here if there is an active discussion ongoing above. Operators whose requests have expired may reactivate their requests at any time. The following list shows recent requests (if any) that have expired, listed here for informational purposes for at least 7 days before being archived. Older requests can be found in the respective archives: Expired, Withdrawn.