Wikipedia:Bots/Requests for approval: Difference between revisions
Edit summary: Adding SmackBot 36 to open tasks.
Diff (at line 3): the edit added {{BRFA|SmackBot|36|Open}} to the top of the "Current requests for approval" section, directly below the "Add NEW entries at the TOP of this section" comment and above the existing {{BRFA|PC78-bot||Open}} and {{BRFA|Femto Bot|5|Open}} entries.
Revision as of 22:33, 8 October 2010
All editors are encouraged to participate in the requests below – your comments are appreciated more than you may think!
New to bots on Wikipedia? Read these primers!
- Approval process – How these discussions work
- Overview/Policy – What bots are/What they can (or can't) do
- Dictionary – Explains bot-related jargon
To run a bot on the English Wikipedia, you must first get it approved. Follow the instructions below to add a request. If you are not familiar with programming, consider asking someone else to run a bot for you.
Instructions for bot operators (collapsed). Bot-related archives: 1–86.
Bot Name | Status | Created | Last editor | Date/Time | Last BAG editor | Date/Time |
---|---|---|---|---|---|---|
MacaroniPizzaHotDog Bot (T|C|B|F) | On hold | 2024-10-28, 20:59:48 | Primefac | 2024-10-30, 15:46:04 | Primefac | 2024-10-30, 15:46:04 |
BunnysBot (T|C|B|F) | Open | 2024-10-24, 15:12:05 | Bunnypranav | 2024-11-01, 09:15:23 | Primefac | 2024-10-30, 14:06:32 |
KiranBOT 12 (T|C|B|F) | Open | 2024-09-24, 15:59:32 | GreenC | 2024-10-31, 04:19:08 | The Earwig | 2024-10-05, 16:10:12 |
RustyBot 2 (T|C|B|F) | Open | 2024-09-15, 15:17:54 | Primefac | 2024-10-20, 11:42:26 | Primefac | 2024-10-20, 11:42:26 |
PonoRoboT 2 (T|C|B|F) | On hold | 2024-07-20, 23:38:17 | Primefac | 2024-08-04, 23:49:03 | Primefac | 2024-08-04, 23:49:03 |
Platybot (T|C|B|F) | In trial | 2024-07-08, 08:52:05 | Primefac | 2024-10-20, 11:46:49 | Primefac | 2024-10-20, 11:46:49 |
KiranBOT 10 (T|C|B|F) | On hold | 2024-09-07, 13:04:48 | Usernamekiran | 2024-10-06, 18:19:02 | The Earwig | 2024-10-05, 15:28:58 |
SodiumBot 2 (T|C|B|F) | In trial | 2024-07-16, 20:03:26 | Novem Linguae | 2024-08-08, 07:10:31 | Primefac | 2024-08-04, 23:51:27 |
DannyS712 bot III 74 (T|C|B|F) | In trial: User response needed! | 2024-05-09, 00:02:12 | DreamRimmer | 2024-10-06, 07:43:48 | ProcrastinatingReader | 2024-09-29, 10:59:04 |
AussieBot 1 (T|C|B|F) | Extended trial: User response needed! | 2023-03-22, 01:57:36 | Hawkeye7 | 2024-10-02, 03:25:29 | ProcrastinatingReader | 2024-09-29, 10:54:10 |
FrostlySnowman 10 (T|C|B|F) | In trial | 2023-03-02, 02:55:00 | DreamRimmer | 2024-10-15, 14:17:23 | SD0001 | 2024-09-18, 17:52:59 |
Current requests for approval
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Rich Farmbrough (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Perl/AWB
Source code available: AWB, yes; Perl no.
Function overview: Hyphenate adjectival uses of nn mile
Links to relevant discussions (where appropriate): User_talk:Rich_Farmbrough/Archive/2010Oct#SmackBot_rides_again
Edit period(s): Continuous
Estimated number of pages affected: 2226, about 2 new per day
Exclusion compliant (Y/N): Y
Already has a bot flag (Y/N): Y
Function details: Hyphenate adjectival use of nn mile, nn miles and nn miles-per-hour, and their conversions. Five examples here.
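A minimal sketch (not SmackBot's actual code, which is not published) of the kind of substitution being described: it hyphenates "a 5 mile walk" style adjectival uses, requiring an article or possessive before the number so that plain measurements such as "the road is 12 miles long" are left alone. The word list and the handling of conversion templates, mph and MOSNUM's abbreviation exemption are all simplifications.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch: hyphenate adjectival uses of "nn mile(s)" when preceded by an
# article or possessive, e.g. "a 5 mile walk" -> "a 5-mile walk". The real rules
# (conversions, mph, MOSNUM's abbreviation exemption) would be broader.
sub hyphenate_miles {
    my ($text) = @_;
    # article/possessive + number + "mile(s)" + a following lower-case word
    $text =~ s/\b(a|an|the|his|her|its|their)\s+(\d+(?:\.\d+)?)\s+(miles?)\s+(?=[a-z])/$1 $2-$3 /gi;
    return $text;
}

print hyphenate_miles("He took a 5 mile walk along the 250 mile border.\n");
# -> "He took a 5-mile walk along the 250-mile border."
print hyphenate_miles("The road is 12 miles long.\n");
# unchanged: no article immediately before the number
```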
Discussion
Simple fix, may be a pilot for AWB. Rich Farmbrough 22:32, 8 October 2010 (UTC)[reply]
{{BAG assistance needed}}
Rich Farmbrough 23:35, 10 October 2010 (UTC)[reply]
- What about "nn-metre" or any other unit? Also note that many languages have a singular noun if used after a number ending in 1, so "river flows for 401 mile.", although grammatically incorrect, may have been used. Wikipedia being multi-cultural, I suspect there will be cases of this. The above link does not look like a discussion on the subject, merely a mention that spaces should have been dashes in a specific case. Also, per an example in WP:MEASUREMENT, [2] should not change (3 km) to (3-km). It seems there needs to be a wider discussion first. This looks more suitable for AWB with human supervision. — HELLKNOWZ ▎TALK 10:46, 11 October 2010 (UTC)[reply]
- I'm not looking at other units yet. This is big enough to weed out the "gotchas".
- I look for an indefinite article too: I suppose a definite article would also suffice. If there are plural/singular errors they should be fixed, not used to prevent the fixing of this (which was in turn cited as a reason not to use non-breaking spaces). Where would we stop? Someone may have written "5 mile" and meant "5 mille" or "5 mils".
- Mosnum has examples of not hyphenating where abbreviations are used (but no injunction); this will be respected.
- The subject is up for discussion:
- here
- on my talk page
- at Wikipedia talk:MOSNUM where Tony mentions ISO (which as Mosnum says we don't follow), but may mean the SI people (BIMP? BIPM?).
- at Template talk:Convert
- the last three of which are recent or only discovered by me just now. Rich Farmbrough, 16:47, 11 October 2010 (UTC).[reply]
- Would indefinite, definite and negative articles all work? "Pave a 5-mile line" vs. "Travel the 5-mile road" vs. "No 5-mile road left untravelled"?
- Of course, typos are user mistakes and bots cannot be blamed for fixing those. For now, both your and my estimates of error margin are as good as guesses.
- Also, don't get me wrong, I am pro minor fixes if they are well-defined. For example, you have brought up two more discussions I was unaware of. I'm not necessarily suggesting VP/WT:MOS or anything large scale, I think a discussion here could be sufficient. — HELLKNOWZ ▎TALK 17:11, 11 October 2010 (UTC)[reply]
- Looks like pronouns work too "His 250-mile (400-kilometre) march to prevent Vienna falling into enemy hands was a masterpiece of deception, meticulous planning and organisation.", but that would need testing. I have investigated likely cases of "mile" for "miles" and only found a handful, which I fixed (of course I could see other errors in those articles…). Rich Farmbrough, 18:08, 11 October 2010 (UTC).[reply]
- Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 23:54, 14 October 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} what is the current status of this request? ΔT The only constant 00:16, 11 November 2010 (UTC)[reply]
- Ready to go any time. I'll do a live trial later tonight. Rich Farmbrough, 17:14, 15 November 2010 (UTC).[reply]
- Trial complete. here. Rich Farmbrough, 09:25, 16 November 2010 (UTC).[reply]
- Given the lack of objection from any of our many and varied grammar experts, Approved.. - Jarry1250 [Who? Discuss.] 16:42, 3 December 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: PC78 (talk · contribs)
Time filed: 15:40, Friday October 8, 2010 (UTC)
Automatic or Manually assisted: N/A
Programming language(s): N/A
Source code available: N/A
Function overview: List building with AWB
Links to relevant discussions (where appropriate): User talk:Xeno#Bot access for AWB? (perm)
Edit period(s): N/A
Estimated number of pages affected: 0
Exclusion compliant (Y/N): N/A
Already has a bot flag (Y/N): N/A
Function details: This account is required for the sole purpose of building lists on AWB with the nolimits plugin. The account would not be used for editing.
Discussion
Sorry, if I am missing something, but doesn't apihighlimits merely raise the "up to 500" limit to "up to 5000"? NoLimits plugin is an AWB specific list building tool. I am guessing it does multiple queries in any case, as even setting limit=max has never gotten me past 5000. — HELLKNOWZ ▎TALK 15:57, 8 October 2010 (UTC)[reply]
- I'm not sure how to answer you. As I've been told, the nolimits plugin removes the 25000 limit when building lists from very large categories or highly transcluded templates, and the plugin is restricted to admins and bots (though I don't personally see why it shouldn't be made more widely available). Since I'm not an admin, it is for this reason that I need the bot account. This is the course of action I was recommended in the discussion linked above. PC78 (talk) 16:05, 8 October 2010 (UTC)[reply]
- Yes, I read that, but no one made clear the distinction between plug-in limits and actual API limits. So this is merely so that AWB allows you to use the NoLimits plug-in. I have no objections either way, was just wondering, have fun! — HELLKNOWZ ▎TALK 16:09, 8 October 2010 (UTC)[reply]
- Someone else would have to comment on the API limits, since that is some way over my head. :) But yes, this is merely so I can use AWB with the NoLimits plugin. I don't need the account for editing, assuming that this would require further approval. PC78 (talk) 16:14, 8 October 2010 (UTC)[reply]
- Quick comment (have to run): AWB devs placed a limit on the number of requests AWB will make, in order to prevent server stress. The default API query returns 500 for most items; with highlimits it's 5000 (a factor of 10). So to get 25,000 results the average user must make 50 requests, whereas with highlimits that's down to 5, which is a lot less stress on the servers (see the sketch at the end of this discussion). ΔT The only constant 16:21, 8 October 2010 (UTC)[reply]
- With respect to the limits that have been put in place, they do nonetheless impact on the usefulness of AWB, and occasional use by a single user (i.e. me) surely won't place much additional stress on the server. I don't see why access shouldn't be granted on request to trusted users who have need for it. PC78 (talk) 16:40, 8 October 2010 (UTC)[reply]
- Δ seems to be saying that allowing high limits would actually place less strain, as AWB wouldn't need to make multiple requests on the server. –xenotalk 15:14, 12 October 2010 (UTC)[reply]
- I'll take your word for it; as I said above, this part of the discussion is over my head. :) PC78 (talk) 22:32, 12 October 2010 (UTC)[reply]
rev 7235 (i.e. NoLimits 1.3.2.0) allows use of NoLimitsPlugins if the user has the "apihighlimits" right. -- Magioladitis (talk) 17:16, 8 October 2010 (UTC)[reply]
- I do not see a clear reason for this bot, and this is not the place to debate the appropriateness of limits. Please supply at least one example of how this would help the encyclopedia. Johnuniq (talk) 00:52, 9 October 2010 (UTC)[reply]
- So if the operator were to give an indication what the larger lists would be used for, your objection would be withdrawn? –xenotalk 15:14, 12 October 2010 (UTC)[reply]
- I'm not here to debate the appropriateness of limits (I'm quite capable of doing that at a more appropriate forum), merely to bypass them to assist with my contributions, which in turn will benefit the encyclopedia. I'll give a few off-the-top-of-my-head examples of where I would find this useful:
- Finding uses of {{Infobox person}} for individuals categorised as missing or similar. That template has 72,340 transclusions, yet I can only build a list based on the first 25,000. This is pertinent to a current proposal of mine to add new fields to the infobox. If I had a complete list of intersections I would have had more data to base that proposal on, and could more thoroughly implement any forthcoming changes in mainspace.
- Intersections of {{Infobox person}} and {{Infobox Korean name}}. Ideally, {{Infobox Korean name}} should be subclassed inside {{Infobox person}}, but I've only been able to do this in a limited fashion because I can't get a complete list of intersections. Consolidating the infoboxes will improve article layout and appearance in these cases.
- There have been other occasions where hitting the limit has stopped me from doing something, and there will certainly be more in the future. Hopefully that satisfies your concern. PC78 (talk) 22:32, 12 October 2010 (UTC)[reply]
- Lists like these would be rather trivial to generate on the Toolserver. This might be able to do some. You can also request things with the query service. Mr.Z-man 18:48, 31 October 2010 (UTC)[reply]
- CatScan is pretty crappy (based on my own experience), and is limited to categories. I'll look into the query service, though. PC78 (talk) 17:03, 7 November 2010 (UTC)[reply]
- Support reasonable request. As an aside, perhaps it's time for a separate userright that would grant apihighlimits. –xenotalk 15:14, 12 October 2010 (UTC) (I realize researcher exists, but this is a userright that is not mandated by the community)[reply]
- "Researcher" might not be appropriate anyway, because it also grants browsearchive and deletedhistory. Unless it would confuse AWB, the bot account could be indef blocked to make 100% sure it won't accidentally be used for editing. But either way, I see no reason to deny this request. I say go for it, xeno. Anomie⚔ 00:10, 14 October 2010 (UTC)[reply]
- Given that the only opposition is an editor's question of usefulness and purpose, I see no problem in giving the bot flag and indef-blocking the account as read-only. Although, I would prefer if instead AWB allowed users who request it to build larger lists. — HELLKNOWZ ▎TALK 11:53, 8 November 2010 (UTC)[reply]
Approved. As it is to be read-only, I'm also recommending that the flagging bureaucrat apply an indef block. Anomie⚔ 02:40, 11 November 2010 (UTC)[reply]
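For reference, a rough sketch of the kind of paged list building the limits above apply to; it is not part of the request itself, and the module choices are illustrative. A list=embeddedin query returns at most 500 titles per request for an ordinary account and 5,000 with apihighlimits, so the 72,340 transclusions of {{Infobox person}} mentioned above take roughly 145 requests versus 15.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);

# Hedged sketch: page through all transclusions of a template via list=embeddedin.
# "eilimit=max" yields 500 titles per request normally, 5000 with apihighlimits.
my $ua  = LWP::UserAgent->new(agent => 'ListBuildingSketch/0.1');
my $api = 'https://en.wikipedia.org/w/api.php';

my %params = (
    action   => 'query',
    format   => 'json',
    list     => 'embeddedin',
    eititle  => 'Template:Infobox person',   # the template discussed above
    eilimit  => 'max',
    continue => '',
);

my @titles;
while (1) {
    my $res  = $ua->post($api, \%params);
    die $res->status_line unless $res->is_success;
    my $data = decode_json($res->decoded_content);
    push @titles, map { $_->{title} } @{ $data->{query}{embeddedin} || [] };
    last unless $data->{continue};                   # no more pages
    %params = (%params, %{ $data->{continue} });     # carry continuation tokens forward
}
printf "collected %d titles in total\n", scalar @titles;
```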
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Rich Farmbrough (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Perl/AWB
Source code available: AWB, yes; Perl, no.
Function overview: Update oldest backlog list at Category:Wikipedia backlog
Links to relevant discussions (where appropriate): N/A
Edit period(s): Continuous
Estimated number of pages affected: 1 per day
Exclusion compliant (Y/N): N
Already has a bot flag (Y/N): N
Function details:
- Load the page
- Check that the categories aren't empty; if they are, replace with the following month and check again
- If there are any changes, save
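A minimal sketch of the emptiness check described above, assuming dated maintenance categories of the usual "… from Month Year" form; the category name pattern and starting month are illustrative, and the real bot reads and updates the list it maintains rather than a hard-coded name.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);

# Hedged sketch: walk forward month by month until a dated backlog category is
# non-empty. The category name pattern and starting month are illustrative only.
my $ua  = LWP::UserAgent->new(agent => 'BacklogSketch/0.1');
my $api = 'https://en.wikipedia.org/w/api.php';

sub category_size {
    my ($title) = @_;
    my $res = $ua->post($api, {
        action => 'query', format => 'json',
        prop   => 'categoryinfo', titles => $title,
    });
    die $res->status_line unless $res->is_success;
    my ($page) = values %{ decode_json($res->decoded_content)->{query}{pages} };
    return $page->{categoryinfo}{size} // 0;   # size is absent or 0 when empty
}

my @months = qw(January February March April May June
                July August September October November December);
my ($m, $y) = (9, 2006);                        # index 9 = October (example start)
my $title;
do {                                            # a real run would bound this search
    $title = "Category:Articles lacking sources from $months[$m] $y";
    ($m, $y) = $m == 11 ? (0, $y + 1) : ($m + 1, $y);   # advance one month
} while (category_size($title) == 0);

print "Oldest non-empty backlog category: $title\n";
```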
Discussion
Simple.
Rich Farmbrough 02:17, 6 October 2010 (UTC)[reply]
- Approved for trial (25 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. –xenotalk 14:24, 8 October 2010 (UTC)[reply]
- Thank you. Trial commenced here. Rich Farmbrough, 19:15, 9 October 2010 (UTC).[reply]
- Trial complete. -once I taught it to count and spell. Rich Farmbrough, 08:26, 10 October 2010 (UTC).[reply]
{{BAG assistance needed}} Rich Farmbrough 23:26, 10 October 2010 (UTC)[reply]
- Approved. MBisanz talk 23:42, 14 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Rich Farmbrough (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Perl/AWB
Source code available: AWB, yes; Perl, no.
Function overview: Manage my BRFAs
Links to relevant discussions (where appropriate): N/A
Edit period(s): Continuous
Estimated number of pages affected: 1-3 per day
Exclusion compliant (Y/N): N, but Y on BAG members' talk pages.
Already has a bot flag (Y/N): Y
Function details:
- Generate BRFA from spec and post
- Update bot pages and my pages with appropriate status changes
- Tag stale BRFAs
- Ping BAG for very stale BRFAs
- Generate code for bot
- Run trial once authorised
- Post results
- Switch on task once authorised
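A rough sketch of the "tag stale BRFAs" item only, since the other items are either bookkeeping in the operator's own space or discussed further below: a BRFA counts as stale when its last edit was made by the operator and it has sat idle past a threshold (the one-week figure follows the threshold settled on later in this request). The page title and threshold handling are examples, and the actual tagging edit is left out.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);
use Time::Piece;

# Hedged sketch of the staleness test: last edit by the operator, idle longer
# than a threshold. Detecting an existing status template and posting the
# {{BAG assistance needed}} tag are omitted.
my $ua  = LWP::UserAgent->new(agent => 'BrfaStalenessSketch/0.1');
my $api = 'https://en.wikipedia.org/w/api.php';

sub is_stale {
    my ($brfa_title, $operator, $max_idle_days) = @_;
    my $res = $ua->post($api, {
        action => 'query',     format  => 'json',
        prop   => 'revisions', titles  => $brfa_title,
        rvprop => 'timestamp|user', rvlimit => 1,
    });
    die $res->status_line unless $res->is_success;
    my ($page) = values %{ decode_json($res->decoded_content)->{query}{pages} };
    my $rev = $page->{revisions}[0] or return 0;   # page missing: nothing to tag
    my $last = Time::Piece->strptime($rev->{timestamp}, '%Y-%m-%dT%H:%M:%SZ');
    my $idle_days = (time - $last->epoch) / 86400;
    return $rev->{user} eq $operator && $idle_days > $max_idle_days;
}

# Both arguments are examples only; the real bot would read its own task list.
print is_stale('Wikipedia:Bots/Requests for approval/Femto Bot 5',
               'Rich Farmbrough', 7)
    ? "stale - would tag with {{BAG assistance needed}}\n"
    : "not stale\n";
```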
Discussion
As Femto Bot doesn't have a bot flag yet I will be using it manually to test the code. [Update: it now has a flag, and is effectively permitted to edit my userspace anyway. 2010-10-08.]
Rich Farmbrough 01:49, 6 October 2010 (UTC)[reply]
- Where do BRFA specs come from? Do you write them and tell the bot to post them? How are stale BRFAs tagged? I don't think pinging BAG members should be done automatically. What does "generate code" for bot mean? How is the trial run — is your bot automatically told to run a trial once a trial is given? I don't think such automation should ever be done for trial runs of a bot. What are the posted results — is this a bot generated report or just the contribution list? — HELLKNOWZ ▎TALK 11:47, 8 October 2010 (UTC)[reply]
- Yes I write them at the moment, the bot will post them automatically.
- Stale BRFAs will be tagged with the bag assistance needed template.
- BAGBot was I think supposed to ping BAG members.
- Code generation will of course be done by AI (and is there for information as much as anything, since it happens off-wiki).
- The trial is usually "50 edits" or similar. If the run is automatic (which, in fairness, will only be possible some of the time) there is no reason that the trial run can't be "5 edits", which can be reviewed almost immediately followed by 10 more, or 50 or whatever the BAGGER thinks appropriate.
- Posted results will be the contribs list - that's the current plan. It might be possible to generate a little more information - edit conflicts, time taken etc, but generally I would think this is not stuff BAG is interested in.
- Rich Farmbrough, 14:01, 8 October 2010 (UTC).[reply]
- I suppose automatic trials are useful for the reviewing BAGger, as long as everything is fine. But if anything goes wrong, they would have to revert the changes themselves or wait for you. That's my concern. But given you are a long-standing bot programmer, I hope this shouldn't be an issue. "Code generation will of course be done by AI." Do you mean the code will be posted for review? Because it sounds like there is going to be an AI writing the code. :) — HELLKNOWZ ▎TALK 16:05, 8 October 2010 (UTC)[reply]
- Well, the AI may take a little work yet... But your other point is valid: however it is just as valid for "manual" trials. I can do a trial and it not get reviewed for a couple of days. And that's why I said they can always say, "hmm 5 edit trial please." and either "That's borked, 5 rollbacks" or "Looks good, give me 10 more". It's also true to say that, for example AWB edits can't be put on this basis just yet. Rich Farmbrough, 23:20, 8 October 2010 (UTC).[reply]
- "I can do a trial and it not get reviewed for a couple of days." — but you would have to promptly revert any errors after the trial run, as you would be present. This is left to BAGger if you are not available and the trial was automated. Also, you didn't mention BAGger being able to ask for reverts as well, so that should balance it out. Regarding AI, I am still unsure if you are serious, but It'd be nice to see the first AI to make programs on demand. :) — HELLKNOWZ ▎TALK 00:50, 9 October 2010 (UTC)[reply]
- Well, could build in reverting (like revert them all - bot spelled X with a Y). Maybe this is an area we can feel our way, if baggers are uncomfortable they can ask for 1 edit, 1 edit, 2 edits... And it's also true that reviewers pick up errors that the botmeisters don't - that after all is one purpose of the review. As to the AI, yes it's tongue in cheek, but I certainly have written programs to write programs to write programs. Rich Farmbrough, 04:24, 9 October 2010 (UTC).[reply]
Isn't this edit a little in advance of getting trial approval for this? VernoWhitney (talk) 22:49, 8 October 2010 (UTC)[reply]
- Just a little but I wanted to see what colour the pie was. Rich Farmbrough, 23:14, 8 October 2010 (UTC).[reply]
{{BAG assistance needed}}
Rich Farmbrough, 23:24, 10 October 2010 (UTC).[reply]
- Approved for trial (7 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 23:53, 14 October 2010 (UTC)[reply]
The point of a trial is that the operator reviews every single edit as it is made; there's no point to an "automatic" trial. Moreover, RF has a poor record of cleaning up mistakes when his tasks go wrong. I have had to revert innumerable broken edits by SmackBot. So I can't see how automated trials are going to improve things. — Carl (CBM · talk) 11:48, 18 October 2010 (UTC)[reply]
- Some errors here - Kingpin13 (talk) 12:01, 21 October 2010 (UTC)[reply]
- Also error here, should have been placed in the Requests to add a task to an already-approved bot section rather than Current requests for approval - Kingpin13 (talk) 10:28, 22 October 2010 (UTC)[reply]
- Thanks, one was known and not yet implemented, the other was implemented but untrialed code. All being well both should work now. Rich Farmbrough, 13:49, 22 October 2010 (UTC).[reply]
- And space suppression too. Rich Farmbrough, 14:22, 22 October 2010 (UTC).[reply]
- Please note the trial is over and the bot is still making (erroneous) edits for this task - please shut that portion off. –xenotalk 15:16, 2 November 2010 (UTC)[reply]
- My fault for giving it a bad BRFA name. Rich Farmbrough, 11:00, 5 November 2010 (UTC).[reply]
- Am I missing something or is this bot still making edits for this expired trial? VernoWhitney (talk) 13:54, 10 November 2010 (UTC)[reply]
- Yes but they are manually supervised. Rich Farmbrough, 15:55, 12 November 2010 (UTC).[reply]
Trial complete. Rich Farmbrough, 21:44, 21 November 2010 (UTC).[reply]
Second trial
It's easy to appreciate why this BRFA hasn't been touched for three weeks, but since all the problems were with the code (and therefore fixable), Approved for trial (7 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. - Jarry1250 [Who? Discuss.] 11:51, 12 December 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} Was this trial done? Mr.Z-man 04:13, 2 January 2011 (UTC)[reply]
- It's running now; it has submitted BRFA SmackBot 4243, only yesterday. Rich Farmbrough, 10:05, 2 January 2011 (UTC).[reply]
- Looking at its past edits and discussion:
- Generate BRFA from spec and post – O.K.
- Update bot pages and my pages with appropriate status changes – O.K.
- Tag stale BRFAs – what does "stale" mean? Is it a time period? The bot can't detect things like wider discussion requests or some related discussion taking place elsewhere, etc.
- Means no templated status, last edit is by me, and more than 24 hours ago. Rich Farmbrough, 10:42, 3 January 2011 (UTC).[reply]
- 24 hours is hardly "stale" in current activity. Some BRFAs live on with weeks of no replies. The description page itself says "If you feel that your request is being overlooked (no BAG attention for ~1 week) you can add {{BAG assistance needed}} to the page." This number was probably based on experience rather than consensus, but is still more realistic. Are you O.K. with this being a week? — HELLKNOWZ ▎TALK 11:04, 3 January 2011 (UTC)[reply]
- Ping BAG for very stale BRFAs – which BAG members? All? Those who commented?
- 72 hours after bag assistance is requested with no BAG response it will ping one "active" BAG member, wait for 24 hours then ping another, after that it will move to "inactive" members at one per 12 hours. Rich Farmbrough, 10:42, 3 January 2011 (UTC).[reply]
- Can it first ping those that have already participated in the discussion, i.e. before the BAN template. Also, don't ping inactive members, they don't participate for their own reasons and you can't tell who may get agitated by a random ping to a BRFA they have never seen before. — HELLKNOWZ ▎TALK 11:04, 3 January 2011 (UTC)[reply]
- Yes that makes sense. I don't realistically expect it to get to non-active members, but some of them have been classified non-active by me. If a ping makes them "agitated"... then well, they are less than inactive - they have effectively left, and should be removed from the roster completely - or at least classified as "on leave" or something. Rich Farmbrough, 19:49, 3 January 2011 (UTC).[reply]
- I thought you meant inactive at WP:BAG list? Inactive in BAG does not mean inactive. Well, anyway, as long as you don't get complaints and WT:BAG/BRFA doesn't, I suppose it is O.K. — HELLKNOWZ ▎TALK 20:00, 3 January 2011 (UTC)[reply]
- Generate code for bot – This is really your side of things and you may choose to generate the code as you wish, but this isn't something a blanket approval can be given for.
- Yes this is only for completeness, and out-with BAG's purview. Rich Farmbrough, 10:42, 3 January 2011 (UTC).[reply]
- You are welcome to mention this, but completeness of your feature documentation is not really the same as a list of actual WP-related tasks. — HELLKNOWZ ▎TALK 11:04, 3 January 2011 (UTC)[reply]
- Post results – (I assume of the trials) O.K.
- Run trial once authorised – as below
- Switch on task once authorised – I really prefer you activate the tasks yourself, especially those that edit fast or between other tasks. That is the point of BRFA after all. I can see how it can be easier for BAG member who already know and are aware of your automated system though. I prefer that some BAG members post how they feel about this. — HELLKNOWZ ▎TALK 10:37, 2 January 2011 (UTC)[reply]
- The last two points are again here for completeness; it may very much depend on the task: those where there is a simple matter of grabbing N pages and applying a fix are clearly more amenable to controlled trials (human- or bot-initiated) than those that require an error condition that is normally absent to occur. Rich Farmbrough, 10:42, 3 January 2011 (UTC).[reply]
- Last two points really need more BAG input. — HELLKNOWZ ▎TALK 11:04, 3 January 2011 (UTC)[reply]
- More BAG input: Yes, I agree that the tasks need to be started manually by Rich himself. - Kingpin13 (talk) 02:14, 9 May 2011 (UTC)[reply]
{{BAG assistance needed}} Rich Farmbrough 23:19, 4 January 2011 (UTC)[reply]
- That was annoying. Is there a good reason this request states that the bot will not respect bot exclusion? Anomie⚔ 23:45, 21 February 2011 (UTC)[reply]
- Yes there is, it was a bot limitation when I posted the BRFA, and not relevant to most of its work. However I will modify the code to check on BAG members' talk pages (a generic sketch of such an exclusion check follows this exchange). Rich Farmbrough, 23:54, 21 February 2011 (UTC).[reply]
- On the other hand if you wished to recuse yourself, as Xeno has, you could simply have told me. Rich Farmbrough, 23:57, 21 February 2011 (UTC).[reply]
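For context, a minimal sketch of a conventional {{bots}}/{{nobots}} exclusion check of the kind promised above; it is generic wikitext handling, not Femto Bot's own code, and the parameter parsing is simplified.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch: the usual interpretation of {{nobots}} and {{bots|deny=...}} /
# {{bots|allow=...}} over a talk page's wikitext, not Femto Bot's actual code.
sub allowed_to_post {
    my ($wikitext, $botname) = @_;
    return 0 if $wikitext =~ /\{\{\s*nobots\s*\}\}/i;                       # blanket opt-out
    return 0 if $wikitext =~ /\{\{\s*bots\s*\|\s*deny\s*=\s*([^}]*)\}\}/i
             && grep { $_ eq lc $botname || $_ eq 'all' }
                map { s/^\s+|\s+$//gr } split /,/, lc $1;
    if ($wikitext =~ /\{\{\s*bots\s*\|\s*allow\s*=\s*([^}]*)\}\}/i) {        # allow-list form
        return scalar grep { $_ eq lc $botname || $_ eq 'all' }
                      map { s/^\s+|\s+$//gr } split /,/, lc $1;
    }
    return 1;
}

print allowed_to_post('{{bots|deny=Femto Bot,SomeOtherBot}} Hello', 'Femto Bot')
    ? "post\n" : "skip\n";   # -> skip
print allowed_to_post('No exclusion templates here.', 'Femto Bot')
    ? "post\n" : "skip\n";   # -> post
```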
{{OperatorAssistanceNeeded|D}}
Any updates? MBisanz talk 10:13, 21 April 2011 (UTC)[reply]
- Yes, I ran the BAG notifying part; the result is Anomie's response above, and elsewhere they recuse themselves from my BRFAs. <Sigh> Rich Farmbrough, 18:50, 1 May 2011 (UTC).[reply]
- Incidentally the Bag assistance needed template above is still active from 4 January. This type of delay is the reason that I wanted to ping BAG. If however BAG members are unsympathetic to automated pings, and BRFAs are really going to take maybe a year to get through, there's really little point bothering. I notice however that one of Anomie's BRFAs took a few hours or days, ushered on by a spurious concept of urgency. Rich Farmbrough, 22:58, 1 May 2011 (UTC).[reply]
Alright, I'm late to the party it seems. The end goal of the bot seems like something most BAG members would support, and would mostly concern BAG members and bot operators. I understand that if the notices annoy them, people can opt out of them. However, I'm unclear about the "notifying" logic, at least in terms of what exactly is considered an "inactive" BRFA and who gets notified.
- The trial seems to have been done with 24 hours in mind, while a week is more sensible (at least if we're following BAG-related templates). So the bot would probably have to stay quiet for one week without BAG / Bot op activity (whichever applied).
- As far as BAG-related notices go, they should first be given to BAG members who posted in the BRFA. Then, failing a response (say, in the next 24-hour period), a notice to another BAG member, preferably drawn at random from the active BAG member list (repeat ad nauseam until you run out of BAG members or you get a response).
Could you clarify these two aspects? Headbomb {talk / contribs / physics / books} 05:57, 9 May 2011 (UTC)[reply]
- If a week is the minimum delay, then so be it.
- Yes, I haven't implemented the "related bagger" functionality but I can do that. Rich Farmbrough, 08:27, 10 May 2011 (UTC).[reply]
Alright, then Approved. Let's have the one-week thing for now. If people feel that this is too slow/fast, just get a straw poll at WP:BAG or something. Headbomb {talk / contribs / physics / books} 09:33, 10 May 2011 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Rich Farmbrough (talk · contribs)
Automatic or Manually assisted: Auto
Programming language(s): Perl
Source code available: No
Function overview: Periodically inspect regression test pages for other bots, report errors, stop the bot on fatal errors, reset the regression test pages.
Links to relevant discussions (where appropriate):
Edit period(s): 6 hourly [also on demand, e.g. when releasing a new build]
Estimated number of pages affected: 7 (four test pages, two talk pages, and a log)
Exclusion compliant (Y/N): N/A
Already has a bot flag (Y/N): N
Function details: Four pages will be used per bot supported (initially one bot)
- Cosmetic tests
- Minor tests
- Major tests
- Critical tests
Each page will be examined to see if it has been visited by the bot. If so, the result of the visit will be tested. Results will be logged. Failures will be notified to the botop (me). Critical failures will be logged to the bot's talk page, stopping it [in the case of AWB bots]. The page will be restored to pre-test state. If not visited, that will be logged too.
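A rough sketch of the per-severity check described above, with the wiki fetch, the restore to pre-test state, and the talk-page stop post stubbed out as prints; the page titles, pre-test text and expected text are placeholders.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch of the per-severity regression check. Fetch, notify, restore and
# the talk-page stop are stubs; titles, "before" and "expect" text are placeholders.
my %tests = (
    cosmetic => { page => 'User:SmackBot/tests/cosmetic', before => 'raw  text',  expect => 'raw text'  },
    minor    => { page => 'User:SmackBot/tests/minor',    before => 'teh typo',   expect => 'the typo'  },
    major    => { page => 'User:SmackBot/tests/major',    before => '{{fix me}}', expect => '{{fixed}}' },
    critical => { page => 'User:SmackBot/tests/critical', before => 'keep this',  expect => 'keep this' },
);

# Canned "current wikitext" standing in for a live fetch of each test page.
my %live = (
    'User:SmackBot/tests/cosmetic' => 'raw text',     # visited, pass
    'User:SmackBot/tests/minor'    => 'teh typo',     # not visited yet
    'User:SmackBot/tests/major'    => '{{broken}}',   # visited, FAIL
    'User:SmackBot/tests/critical' => 'lost this',    # visited, critical FAIL
);

sub log_result   { print "LOG: @_\n" }
sub notify_botop { print "NOTIFY OPERATOR: @_\n" }
sub stop_awb_bot { print "POST TO BOT TALK PAGE (stops an AWB bot): @_\n" }
sub restore_page { print "RESTORE $_[0] to pre-test state\n" }

for my $severity (qw(cosmetic minor major critical)) {
    my $t    = $tests{$severity};
    my $text = $live{ $t->{page} };
    if ($text eq $t->{before}) {                 # unchanged, so the bot has not visited
        log_result("$severity: not yet visited");
        next;
    }
    if ($text eq $t->{expect}) {
        log_result("$severity: pass");
    } else {
        log_result("$severity: FAIL on $t->{page}");
        notify_botop("$severity test failed");
        stop_awb_bot("critical regression") if $severity eq 'critical';
    }
    restore_page($t->{page});
}
```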
Discussion
- Clearly this is only currently applicable to SmackBot.
- As with similar tasks, the functionality can [may] be made available to other users in due course via a slightly larger bot. Rich Farmbrough, 03:17, 2 October 2010 (UTC).[reply]
- To clarify, this bot task simply examines edits made by other bots, and then notifies the operator? –xenotalk 14:21, 8 October 2010 (UTC)[reply]
- It will also restore the page to pre-test state. Rich Farmbrough, 17:07, 9 October 2010 (UTC).[reply]
- Note: two minor clarifications above. Talk page stops only work with AWB - other mechanisms are available for other bots/tasks. And while the regular test (monitoring test) is planned to be quadurnal, of course testing with new releases/builds is good sense too (steam tests/regression tests). Rich Farmbrough, 17:14, 9 October 2010 (UTC).[reply]
- So the only non-bot-talk namespace editing is reversion of other bot errors? Seems like the rest of the specification (testing, notification) is not directly relevant to what actually needs to be approved (mainspace edits). — HELLKNOWZ ▎TALK 22:57, 14 October 2010 (UTC)[reply]
- Any answer or can we move right to testing? MBisanz talk 22:39, 19 October 2010 (UTC)[reply]
- It will be entirely in user space of me and my bots. As H3llkn0wz says, this may not strictly need approving, but I am attempting to get and keep everything crystal clear, since it is far less effort to do it now than four years down the line when some wikidrama blows up. As the man says pay me now or pay me later. Rich Farmbrough, 00:25, 20 October 2010 (UTC).[reply]
- Speedily Approved. Only editing userspaces. MBisanz talk 18:56, 21 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Operator: AusTerrapin (talk · contribs)
Automatic or Manually assisted: Manually Assisted
Programming language(s): C# (AutoWikiBrowser)
Source code available: AWB source information
Function overview: The account is for semi-automated editing tools used at a high edit rate per minute. This application is for manual use of AutoWikiBrowser (I am approved on my primary account) for selected tasks (see below). Addition of any other tools will be subject to separate BRFA(s).
Links to relevant discussions (where appropriate):
Edit period(s): Daily (or less)
Estimated number of pages affected: Will vary considerably depending on editing project of the day. I anticipate that a peak figure would be 500 per day but with no more than 1500 per week. Higher edit counts are more likely on weekends. Long term averages are likely to be significantly less than these figures.
Exclusion compliant (Y/N): Unknown - whatever the status is for AWB
Already has a bot flag (Y/N): N
Function details:
- Category name addition or substitution - this is in support of my existing work on category standardisation and article diffusion (primarily for WP:ODM). I prepare lists of articles that need to be added/changed to a standardised category name, or that need to be moved to a more appropriate diffusion category, and then manually use AWB to implement the appropriate additions/substitutions.
- File name substitution - this is to update file name links for files that have been moved from Wikipedia to Commons with a changed file name, in order to preserve the file link after the Wikipedia version has been deleted. I generate lists of articles that link to the Wikipedia file name using AWB's 'What links here' list generator and then manually use AWB to change the file name from the Wikipedia file name to the Commons file name. In doing so, I manually set up the filters to preserve the original piping (where used) but update from the 'Image:' namespace to the 'File:' namespace.
- Template addition - addition of applicable project templates (where they are missing) to articles within the scope of WP:ODM
- Prior to each run, I compile a list of articles that require modification and then undertake the substitution/template addition. As these tasks usually only involve one change per page, review of changes is quick and, subject to network/server speed, may reach 5-10 edits per minute (without deliberately slowing down). For semi-regular use, I believe this exceeds the AWB edit rate allowed for standard accounts, hence the establishment of a dedicated account and this BRFA in order to permit higher-speed operation. If there is concern over the account name for use in the manner described, I am happy to modify it.
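A minimal illustration of the second task's substitution in regex form rather than AWB's find-and-replace settings: it points links at the new Commons name, keeps any existing piping, and updates the deprecated Image: prefix to File:. The file names are invented for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch of the file-link substitution: update links from the old local
# name to the Commons name, keep existing piping, and normalise "Image:" to
# "File:". The file names below are invented for the example.
my $old = 'Old ribbon.png';                  # local name before the move
my $new = 'Order of Example ribbon.svg';     # name on Commons

my $wikitext = <<'WIKI';
[[Image:Old ribbon.png|thumb|right|Ribbon of the order]]
Some prose with an inline [[File:Old ribbon.png]] link.
WIKI

(my $old_re = quotemeta $old) =~ s/\\ /[ _]/g;   # spaces and underscores are interchangeable in titles
$wikitext =~ s{\[\[\s*(?:Image|File)\s*:\s*$old_re\s*(\|[^\]]*)?\]\]}
              {'[[File:' . $new . (defined $1 ? $1 : '') . ']]'}ge;

print $wikitext;
# [[File:Order of Example ribbon.svg|thumb|right|Ribbon of the order]]
# Some prose with an inline [[File:Order of Example ribbon.svg]] link.
```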
Discussion
- The current requested approval is too vague. Bots have to be approved for specific tasks, a new BRFA should be opened for each task you would like to do, not a generic one for all edits. - EdoDodo talk 16:57, 28 September 2010 (UTC)[reply]
- Based on feedback I've now modified the request. Please note that the intent is to speed up editing that I already perform. Cheers, AusTerrapin (talk) 17:50, 28 September 2010 (UTC)[reply]
- Information on the request. Guidelines advise editors to apply for bot accounts in case AWB is supposed to be used for high-frequency editing. I believe the user (who applied initially at AWB request and was rejected by me/Xeno as the bot-name wasn't approved by BAG) wishes approval for the bot name specifically for semi-automated AWB use. Wifione ....... Leave a message 03:52, 29 September 2010 (UTC)[reply]
- Indeed, that is exactly my intent. Cheers, AusTerrapin (talk) 08:47, 29 September 2010 (UTC)[reply]
- For what it is worth, a trial for the equivalent Commons account has now been conducted with details listed here. AusTerrapin (talk) 02:34, 02 October 2010 (UTC)[reply]
- Approved for trial (75 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Let's see how it goes. Try to give a sample of each of the three tasks, if possible. –xenotalk 13:06, 8 October 2010 (UTC)[reply]
- Trial complete. (Permanent link to edits). 77 edits performed, six subsequently deleted per discussion below. The specific task trial results are as follows:
- Task 1 - Category name addition or substitution. (Permanent link to edits). Trial consisted of diffusing 17 articles from two generic categories to four specific categories (one article belonged to both generic categories). Result: 16 articles diffused from a generic order recipient category to a specific order class category. One article diffused from two generic order recipient categories to two specific order class categories. AWB reported a peak edit rate of 3 edits per minute (epm). Average edit rate 2.33 epm.
- Task 2 - Deprecated link substitution. (Permanent link to edits). Four deprecated file name links replaced on seven pages, in ten edits. Two pages included multiple file name changes. One edit initially changed to misspelt file name - this was the result of my typographical error and was subsequently corrected (this can be avoided in future by being more careful to thoroughly double check information first). One edit was to a user page - a note was manually left for the editor explaining the reason for the edit. (Since the file was being used as part of a draft article, and the file link would be broken upon deletion of the Wikipedia version of the file, I had judged that (with an explanatory note) it was reasonable to break with the usual WP convention on editing user pages.) AWB reported a peak edit rate of 2 epm. Average edit rate 1.25 epm.
- Task 3 - Template addition. (Permanent link to edits). 50 edits conducted to add project banner to 44 category and article talk pages. Second run added banner to six misspelt talk pages (a group of related pages). Upon investigation, I found that this was because I had generated the list via a CSV file which had the effect of stripping out the "ä" from a series of pages related to the 'Order of the Zähringer Lion'. This was a deficiency of the CSV format not AWB. I had checked the page names before creating the CSV file but had not rechecked them after loading into AWB. I identified the issue as part of checking the results of the edit run. I raised speedy deletes for the affected pages (all are now deleted) and re-ran the relevant sequence after fixing the page names. AWB reported a peak edit rate of 14 epm. Average edit rate 7.14 epm (3.5 epm when adding project banner template to existing talk pages and 8.8 epm when adding to new talk pages).
- Notes:
- Average edit rates are a gross average for each task excluding the time taken for breaks in AWB editing (eg to reset settings, etc). The effect is that they reflect the average edit rate during live AWB edit runs for each task.
- 100% file check conducted — all edits performed as expected other than where noted above. The two errors identified were essentially operator errors — one of which could just as easily have occurred in manual editing, the other of which was the result of the technical limitations of the CSV file format (I am now aware of it and therefore a repeat issue is unlikely). The errors have been fixed — regardless, they highlight the need for operator vigilance.
- Page save and load time in AWB was somewhat slow. This probably reflects a combination of larger article page sizes and/or slow server response times. Achieved peak and average edit rates per minute are likely to be higher when the server response is faster or pages are smaller. This is reflected in the considerably faster response times for creation of new talk pages in Task 3.
- Providing there are no objections, I intend to modify the wording of Function 2 to "Deprecated link substitution". This widens the coverage from substituting links only for deprecated files, to include deprecated page names (following page moves, etc) and deprecated template substitutions. The nature of the function is essentially the same, so I don't believe that there would be any benefit in conducting additional trials specifically on deprecated page and template links.
- AusTerrapin (talk) 18:53, 29 October 2010 (UTC)[reply]
- Hi AusTerrapin, thanks for your patience. At the moment I just have a couple of questions regarding task number 2. I notice the links you replaced in your example were originally to a page which wasn't deleted yet. However, what's the problem with simply redirecting the original file to the new one, and saving a large number of then pointless edits? This also applies to moved pages, which you mention in your notes; the move should automatically create a redirect from the old name, so replacements are pointless. - Kingpin13 (talk) 23:30, 4 November 2010 (UTC)[reply]
- Fair question. There are two key occasions when fixing links for pages may be warranted - double redirects and certain scenarios with piped redirects (particularly where the piping is for the new page title and it is linking via the redirect page; in this instance, to leave it as a piped redirect falls foul of other Wikipedia policy with regards to piped links so common sense needs to be applied). With regards to images, from a purely technical perspective you are correct that a redirect could be used. I am not convinced that leaving redirects behind for images is a particularly good practice, especially when the original title was wrong (as opposed to simply being different). In the series of image files for which I replaced links during the trial, the original uploader had accidentally swapped ribbon images and titles around and had seriously mistranslated at least one title - leaving these sort of errors around indefinitely is poor housekeeping. Regarding the timing of changing the links, that is a matter of expediency and cleaning up after myself - I transferred the files to Commons (correcting the naming when I did so) and then updated the link in all affected pages (using the 'What links here' function) and then tagged the original image for deletion as now being uploaded to Commons. By changing the links immediately, I prevent any period where the link becomes broken and don't have to monitor for when the original file is deleted just to come back and fix the links at that time (that would be asking for something to screw up). I should also note that I am conversant with Wikipedia's policies on redirects, etc and the utilisation of the bot account and listed AWB functions is something that is incidental to other editing tasks that I do - my intention is not to patrol Wikipedia looking for every link that might (policy aside) be a candidate for changing. Cheers, AusTerrapin (talk) 15:37, 6 November 2010 (UTC)[reply]
- Obviously fixing double redirects is no problem. As to if the piped link is pointing to a redirect, as I understand it there is no point in linking to a redirect in the pipe because it doesn’t change the text displayed, but similarly I don’t think there is any point in fixing these, because once it’s done is done, and although it would be better for the person to get it right the first time, it’s just worse to then change it later, see WP:R2D. In that case it seems like replacing the image links prior to deletion was sensible, to avoid having a bunch of red links (even for a short period). However, in general you should use the appropriate XfD first. For example, if there is a poor redirect which is confusing, take it to RfD before replacing the links to it. If you’re happy to only run the link replacement part of this bot if there is consensus at XfD (or a different appropriate venue such as RfC) or the task is bound to be uncontroversial, we should be able to approve this. One other question – are you wanting a bot flag for this account? - Kingpin13 (talk) 12:40, 8 November 2010 (UTC)[reply]
- A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) regarding Kingpin's points. - Jarry1250 [Who? Discuss.] 17:39, 16 November 2010 (UTC)[reply]
Request Expired. No response from operator. If you want to re-open this request, just undo this edit, address the questions above, and relist it. Anomie⚔ 03:17, 24 November 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Bsherr (talk · contribs)
Automatic or Manually assisted: Automatic, supervised
Programming language(s): AWB
Source code available: AWB
Function overview: Template orphaning and replacement
Links to relevant discussions (where appropriate): For example, WP:TfD holding cell.
Edit period(s): One time each, on demand.
Estimated number of pages affected: Intended first instance, 3,351. Other instances, varies.
Exclusion compliant (Y/N): N
Already has a bot flag (Y/N):
Function details: My intended first instance is the orphaning of Template:Do not delete. Then other similar instances as they arise. This would be performed using AWB bot functions.
Discussion
- Are you aware that SporkBot (talk · contribs) is already working on this task? — Train2104 (talk • contribs • count) 02:05, 21 September 2010 (UTC)[reply]
- Yes indeed. Plastikspork and I have communicated on it. My intention is to provide redundancy for these tasks as needed. The pace possibly indicates a need. --Bsherr (talk) 03:15, 21 September 2010 (UTC)[reply]
- Redundancy is not a bad thing, but I have now finished this particular task. However, I would certainly support having someone else helping out with cleanup tasks at TFD. Hopefully there won't be many more of these 150k transclusion jobs. Thanks! Plastikspork ―Œ(talk) 04:57, 22 September 2010 (UTC)[reply]
- There were a few problems using AWB noted on the SporkBot BRfA. Will you be able to deal with these? Otherwise it might make more sense to write a program to do this, or even just create a complete clone of SporkBot, if that is what is wanted. - Kingpin13 (talk) 16:20, 2 October 2010 (UTC)[reply]
- Could I have more information about the whitespace bug? --Bsherr (talk) 04:48, 3 October 2010 (UTC)[reply]
- See this edit; maybe this has been fixed at AWB now, but I'm not sure (oh, sorry, I'm guessing you've already seen that; I'll ask at WT:AWB what the status of this is). Also see the first post at this section: most of these things would be possible with AWB, but you'd have to remember to change them manually, so it will require more work from you per run. - Kingpin13 (talk) 07:48, 3 October 2010 (UTC)[reply]
- I've asked at WT:AWB, but tbh the whitespace isn't really a problem. The question is whether you're willing to manually change the settings on AWB for every run, which could be a bit painstaking. - Kingpin13 (talk) 07:54, 3 October 2010 (UTC)[reply]
- Thanks, Kingpin! I have indeed already done over a thousand edits manually of the same type my bot would do automatically without issue. I like AWB, and I personally haven't bumped into anything that would make me prefer perl, though I'm all ears for advice. Customizing prior to each run is just part of handling this particular job, and I don't mind it. --Bsherr (talk) 21:07, 3 October 2010 (UTC)[reply]
- Ah, the whitespace removal shouldn't be a problem. If you could either link me to the manual edits you've made for this, or find something we can trial the bot on, then we should be able to move towards approval :). Cheers, - Kingpin13 (talk) 09:13, 4 October 2010 (UTC)[reply]
- Sure! For the manual edits, take a look at my contribs on September 21, 2010. Let me know if that works! (Sorry for the delay in replying; I must have missed it on my watch list.) --Bsherr (talk) 15:33, 8 October 2010 (UTC)[reply]
Approved. Uncontroversial, previously approved task. Only run this bot if there is consensus (at TfD) to change that template. - Kingpin13 (talk) 13:59, 9 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Operator: Rich Farmbrough (talk · contribs)
Automatic or Manually assisted:
Programming language(s): Perl, MediaWiki::API
Source code available: No
Function overview: Create and update WP:Mirror threads; this enables two or more users to share a discussion on their talk pages (for example), and to receive the "you have new messages" banner if they wish, with a descriptive "last change" and edit summary.
Links to relevant discussions (where appropriate):
Edit period(s): Continuous.
Estimated number of pages affected: Initially maybe 4 or 5 a day. Will increase if the community uses it.
Exclusion compliant (Y/N): Yes and no.
Already has a bot flag (Y/N):
Function details:
Part the first
- Find new threads created using the template {{Reflect}}.
- Move the new thread to a unique thread page, replace {{Reflect}} with {{Reflection}}, setting display parameters according to {{Mirror me}}/{{Reflect me}} - for collapsing, colour scheme etc.
- Read the sig from the reflection and add the reflection to the author's page, setting display parameters etc.
- Target time, within a few seconds.
Part the second
- Monitor the threads and update their reflections (subject to bots/nobots and {{Mirror me}}/{{Reflect me}}).
- The reflection will include the id number of the last change to the thread that it was updated with, to potentially allow other agents or multiple instances of Mirror Bot to update the reflections safely.
- The reflection will include a time-stamp as the last parameter, to enable archiving to be done with a simple tweak to existing archive bots.
- The reflection will include an unspecified number of parameters designed to allow the page creator control of the reflections that appear on it. These will be taken from either a page control template {{Mirror me}}/{{Reflect me}} or may be overridden by an editor.
- The reflection will include a parameter that represents part of the edit to the thread, to make the "last change" more useful.
- The edit summary will be something like Mirror Bot 'Reflecting Rich Farmbrough: "Grr.. another typo"', where "Grr.. another typo" is the edit summary of the change to the thread.
- Target time, within a few minutes at worst, preferably within 15 seconds.
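A minimal sketch of the update loop in "Part the second", written here in Python against the MediaWiki API rather than in Perl with MediaWiki::API; the thread-page title and the way the reflection stores its last-seen revision are assumptions made for illustration only:

import re, requests

API = "https://en.wikipedia.org/w/api.php"

def latest_revision(title):
    # Newest revision id, author and edit summary of a thread page.
    r = requests.get(API, params={
        "action": "query", "format": "json", "titles": title,
        "prop": "revisions", "rvprop": "ids|user|comment", "rvlimit": 1,
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    return page["revisions"][0]

def update_reflection(page_text, thread_id, rev):
    # Bump the stored revision id and timestamp inside {{Mirror thread|...}} so
    # that watchers of the reflecting page see a change (parameter layout assumed).
    new_call = "{{Mirror thread|%s|rev=%d|ts=~~~~~}}" % (thread_id, rev["revid"])
    return re.sub(r"\{\{Mirror thread\|%s[^}]*\}\}" % re.escape(thread_id), new_call, page_text)

rev = latest_revision("Wikipedia:Mirror threads/7357")   # hypothetical thread page
summary = 'Reflecting %s: "%s"' % (rev["user"], rev["comment"])
# The real bot would now save each reflecting page with this summary, subject to
# {{bots}}/{{nobots}} and the page's {{Mirror me}}/{{Reflect me}} settings.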
Planned {{Mirror me}}/{{Reflect me}} functionality
On a typical user talk page the user will be able to specify for all reflections:
- maximum update frequency per thread and total (initially missed updates will be lost)
- no updates at all
- colour scheme
- always collapse
- never collapse
- collapse if older than
- collapse if longer than
- update with bot flag on/off
- update minor on/off or reflect
- maybe other cool stuff.
- Example, archiving dead threads - bringing them back if they resurrect.
Potential benefits
- No more hopping around user talk pages - or cutting and pasting!
- Multi-way discussions without lots of watching.
- Can suppress those annoying batches of "you have new messages" from typo-prone users by setting a min time of, say 5 minutes.
- Less disc space used.
- Prettier talk pages.
- Ability to move talk threads to sub pages - by urgency, topic etc, and set the watching/updating parameters appropriately.
Dis-benefits
- Less serendipity from browsing user talk pages.
Discussion
Test request for pages in my own and Mirror Bot's user space, limited to around 100 edits per day. Rich Farmbrough, 09:50, 20 September 2010 (UTC).[reply]
- Neat idea! Martin (Smith609 – Talk) 16:28, 20 September 2010 (UTC)[reply]
{{BAG assistance needed}}
Rich Farmbrough, 02:45, 2 October 2010 (UTC).[reply]
- Simple is good, and this proposal is not simple. Yes, split conversations (A posts on B's talk; B replies at A; A replies at B) can be confusing – that is probably why many editors have a "if you post here, I will reply here" policy. I accept that some editors would be happy working with the aid of this bot, but I oppose automated edits of this nature because it adds a layer of complexity that onlookers may not understand, and it adds noise to watchlists. Onlookers and new editors may feel that they too should mirror conversations, promoting unnecessary complexity and confusion. The suggestion at WP:Mirror threads that some conversations may occur on subpages also presents problems: reviewing editors should not need to search for subpages to see discussions that have occurred. As I understand it, user B's comment and signature would be copied to A's talk, and vice versa. I am not happy with that because, whereas it is fine to occasionally copy a signature manually, we should not endorse the practice as generally a good idea because a signature from A should mean "A posted this comment here, just as you are reading it". In tricky cases (for example, if investigating whether an editor has breached some guideline), a reviewing editor would have to study two talk pages because one cannot be sure that two mirrored discussions are in fact identical. Finally, if A posts a mirrored section at B, what is B supposed to do? If B has opted in to this system, they may happily respond. Otherwise, B now has to wonder what {{Reflect}} means, and what they should do. Johnuniq (talk) 04:10, 2 October 2010 (UTC)[reply]
- You raise a bunch of points that illustrate my explanation was obviously lacking:
- "a layer of complexity" - well in a way - but no more complex than navboxes
- "it adds noise to watchlists" - yes if you watch every page you edit, and have the "noisiest" Mirror Bot settings. For the average user, using this solely on user talk pages with modest settings, they will get less watchlist noise, and about the same or less "you have messages". For example my good friend Dr Blofeld of Smersh (or is it Spectre?) was active on my talk page:
- (cur | prev) 14:13, 14 September 2010 Dr. Blofeld (talk | contribs | block) (43,118 bytes) (→Counties of China) (undo)
- (cur | prev) 14:13, 14 September 2010 Dr. Blofeld (talk | contribs | block) (43,003 bytes) (→Counties of China) (undo)
- (cur | prev) 14:08, 14 September 2010 Dr. Blofeld (talk | contribs | block) (42,587 bytes) (→Counties of China) (undo)
- (cur | prev) 13:54, 14 September 2010 Dr. Blofeld (talk | contribs | block) (41,863 bytes) (→Counties of China) (undo)
- (cur | prev) 13:53, 14 September 2010 Dr. Blofeld (talk | contribs | block) (41,751 bytes) (→Counties of China) (undo)
- (cur | prev) 13:52, 14 September 2010 Dr. Blofeld (talk | contribs | block) (41,729 bytes) (→Counties of China) (undo)
- (cur | prev) 13:41, 14 September 2010 Dr. Blofeld (talk | contribs | block) (40,945 bytes) (→Counties of China) (undo)
- (cur | prev) 13:41, 14 September 2010 Dr. Blofeld (talk | contribs | block) (40,945 bytes) (→Counties of China) (undo)
- (cur | prev) 13:41, 14 September 2010 Dr. Blofeld (talk | contribs | block) (40,938 bytes) (→Counties of China) (undo)
- (cur | prev) 12:49, 14 September 2010 Dr. Blofeld (talk | contribs | block) (40,465 bytes) (→Counties of China) (undo)
- (cur | prev) 12:48, 14 September 2010 Dr. Blofeld (talk | contribs | block) (40,362 bytes) (→Counties of China) (undo)
This is 11 "you have messages" if you are actively editing. If you had MB settings of 5 minutes per thread it would be reduced to 4; if you set the thread to passive it would be none.
- "As I understand it, user B's comment and signature would be copied to A's talk, and vice versa." No, they are reflected by page transclusion, the only thing that gets copied (essentially) is the edit summary. Some of the content could be copied as part of a dummy parameter to make diffs more informative.
- If someone gets a mirror thread on their page it essentially looks like this (prettier, because the template is still being developed)...
Test example, feel free to edit and transclude on your talk pages
{{Mirror thread|7357}}
The user can edit the thread easily enough - maybe I wouldn't dump a mirror thread on a newbie, but it is really fairly straightforward. Incidentally there was a spate of using this type of transclusion many moons ago, the reason it stopped was lack of "you have messages" - talkback templates rather replaced them, but they have their own problems.
- Sub pages - I am lost as to what you mean by "reviewing editors should not need to search for subpages to see discussions that have occurred" - if you mean that all my discussions (for example) should occur on my talk page, then I understand, but you are forgetting that a good percentage of my discussions take place on other talk pages. (I happen to have some figures here: from the 6th of September to about 1 October - 237 edits to my talk page, 173 to other user talk pages, 327 on article talk pages, 177 on WP talk pages and 120 on template talk pages - which excludes conversations on WP pages like AfD, Village Pump etc - so you are seeing less than a quarter of my "discussions" on my talk page.) If you mean something else you will need to explain it a little more clearly to me. Rich Farmbrough, 14:53, 3 October 2010 (UTC).[reply]
- Approved for trial (25 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 00:36, 17 October 2010 (UTC)[reply]
- Thank you. Rich Farmbrough, 00:25, 20 October 2010 (UTC).[reply]
- {{OperatorAssistanceNeeded}} what is the current status of this request? ΔT The only constant 01:02, 11 November 2010 (UTC)[reply]
- The status is: I'll get back to it when I've dealt with all the nonsense. Rich Farmbrough, 03:55, 17 November 2010 (UTC).[reply]
- A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) How's the nonsense coming? :P - Jarry1250 [Who? Discuss.] 16:15, 12 December 2010 (UTC)[reply]
- Well it's up and down. But more up than down I think. Rich Farmbrough, 18:13, 12 December 2010 (UTC).[reply]
- A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Ready yet? Mr.Z-man 04:08, 2 January 2011 (UTC)[reply]
- A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Any updates? MBisanz talk 10:11, 21 April 2011 (UTC)[reply]
Request Expired. MBisanz talk 03:47, 5 May 2011 (UTC)[reply]
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Operator: Allmightyduck (talk · contribs)
Automatic or Manually assisted: Automatic, unsupervised
Programming language(s): PHP using the Peachy Framework, and one direct API query.
Source code available: http://debugwiki.bot.duckydude.com/index.php?title=Source
Function overview: Notify sysop who has last event in block log of any WP:AIV backlogs.
Links to relevant discussions (where appropriate):
Edit period(s): Continuous
Estimated number of pages affected: Probably not more than 10 a week; AIV doesn't backlog that easily.
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N):
Function details: This bot, every 5 minutes, will scan WP:AIV for the category Category:Administrative backlog, and if it is on the page, will read the block log from an API query. The sysop who performed the most recent entry in the block log will be given a template message on their talk page politely asking them if they would mind clearing that backlog.
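For illustration only, a minimal sketch of the check described above, written here in Python against the MediaWiki API rather than in PHP with Peachy (the notification step is only indicated, and the opt-in handling discussed below is not shown):

import requests

API = "https://en.wikipedia.org/w/api.php"

def aiv_is_backlogged():
    # Is WP:AIV currently a member of Category:Administrative backlog?
    r = requests.get(API, params={
        "action": "query", "format": "json",
        "titles": "Wikipedia:Administrator intervention against vandalism",
        "prop": "categories",
        "clcategories": "Category:Administrative backlog",
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    return "categories" in page

def most_recent_blocking_admin():
    # Admin who made the newest entry in the block log.
    r = requests.get(API, params={
        "action": "query", "format": "json",
        "list": "logevents", "letype": "block", "lelimit": 1,
    }).json()
    return r["query"]["logevents"][0]["user"]

if aiv_is_backlogged():
    admin = most_recent_blocking_admin()
    # The real bot would now append a polite template to User talk:<admin>,
    # subject to the opt-in/opt-out and throttling discussed below.
    print("Would notify", admin)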
Discussion
Is there consensus for this? I doubt a bot that "spams" (I realize it isn't spam, but it's an unrequested message) random sysops would have consensus, so it would be nice to see more discussion on this. It would be nice if instead of looking for the last blocks from all sysops, it looked for the last action from a list of sysops that have opted in to being notified by the bot. This would ensure that only administrators interested in clearing the backlog get the message, instead of for example a checkuser who happened to block someone last but is not involved in AIV getting the message. Ideally the list should be an on-wiki protected page, so admins can add and remove themselves easily as they wish. By the way, if you need any help with the programming, feel free to contact me on-wiki, on IRC, or by email, I've done a fair bit of programming with Peachy, so am able to help out or review your code. - EdoDodo talk 19:55, 3 September 2010 (UTC)[reply]
- This needs wider discussion. I'll go ahead and post to WP:VPP and WT:AIV. —I-20the highway 01:28, 6 September 2010 (UTC)[reply]
- Also, would this bot only inform one admin per backlog, or every admin that makes a block until the backlog is cleared? For example, say Admin A blocks a user, and then a backlog appears at AIV. Bot informs Admin A. While Admin A is clearing out the backlog, Admins B, C, and D all make blocks as well. Do admins B, C, and D also get notices? And will Admin A get a duplicate notice for every block he makes while clearing out the backlog? (This scenario assumes that A-D are all on any opt-in lists.) I would find that more than a little annoying, especially if these four all end up tripping over each other's feet trying to block everyone. Hersfold (t/a/c) 16:37, 8 September 2010 (UTC)[reply]
- Personally, I suggest the bot send the message to another admin only if the backlog isn't cleared after a specific amount of time, perhaps one hour, that the first message has been sent. - EdoDodo talk 17:09, 8 September 2010 (UTC)[reply]
- Well, the urgency I placed is because AIV reports can grow stale. Also, Hersfold, right now it informs the most recent admin every 5 minutes. So, since from what I have seen it doesn't take that long to clear an AIV backlog (from what I have seen!) only Admin A will get a notice. BUT, if the backlog isn't cleared within 5 minutes (changeable per discussion, obviously) it will notify the most recent admin AGAIN, which could be any admin, A, B, C, or D. It doesn't run every time a block is made. Allmightyduck What did I do wrong? 19:53, 8 September 2010 (UTC)[reply]
- Perhaps the bot should keep a log of which admins it has notified recently, so that it doesn't notify the same admin a bunch of times in one day. At the very least, it shouldn't be notifying the same admin two times in a row, but personally I would suggest not notifying the same admin more than once a day. - EdoDodo talk 20:08, 8 September 2010 (UTC)[reply]
- Oppose unless restricted to opt-in list As urgent as WP:AIV reports are, bot-notifying administrators who have previously responded to them is a bad idea, because it could encourage admins to prevent unwanted notices by ignoring the area altogether. If Template:Admin backlog isn't attracting enough attention, editors can post requests for administrative action at WP:AN and WP:AN/I. Consideration should also be given to approving RFAs of trusted editors who are willing to block vandals, even if they are not considered to be qualified for other areas of administrative activity, provided they agree to restrict their tool usage to anti-vandalism activities until they have sufficient experience and understanding of other administrative tasks to perform them correctly. Peter Karlsen (talk) 19:09, 10 September 2010 (UTC)[reply]
- Why would any admin who regularly performs blocks want what could end up as hundreds of templates on his or her talk page? This sort of spamming technique isn't helpful (it would only make me block the bot if it kept spamming me every five minutes there was a backlog, and I was busy trying to block a load of sockpuppets or something). We need to encourage more admins to check AIV, not have a bot annoy them so much that they end up doing it. —fetch·comms 01:05, 14 September 2010 (UTC)[reply]
- This should absolutely be an opt-in system, if its done at all. AIV was listed as backlogged at least 5 times in the last 500 edits to the page (going back ~33 hours), so a more accurate estimate would probably be something like 25 per week. In all but one case, the backlog was cleared in under 10 minutes, frequently under 5. Mr.Z-man 03:47, 14 September 2010 (UTC)[reply]
{{OperatorAssistanceNeeded}} Status? Mr.Z-man 04:10, 26 September 2010 (UTC)[reply]
- Still going on, just working on code for subscriptions and other additions. Is there a template to put this on hold temporarily while I finish? Allmightyduck What did I do wrong? 12:30, 26 September 2010 (UTC)[reply]
- Just let us know when you're done coding and I'll be happy to approve a trial. - EdoDodo talk 15:10, 28 September 2010 (UTC)[reply]
- Oppose unless restricted to opt-in list. I perform admin AIV duties regularly and would not want a bot notifying me. -- Alexf(talk) 17:18, 13 October 2010 (UTC)[reply]
- FYI, the Wiki::logs() function does what you need instead of the API query. Wiki::logs( array( 'block' ) ) should work fine. (X! · talk) · @135 · 02:13, 18 October 2010 (UTC)[reply]
- Either an opt-in or opt-out list is definitely needed. At the very least it should adhere to {{bots}} tags before posting to the admin's talk page, as this gives a universal method of opting out. --slakr\ talk / 05:53, 25 October 2010 (UTC)[reply]
- There seems to be consensus against this task or at least strong opposition to it. Any further thoughts would be helpful in determining next steps. MBisanz talk 05:35, 27 October 2010 (UTC)[reply]
- I realized that, and will definitely be using an opt-in list rather than opt-out. Allmightyduck What did I do wrong? 16:35, 27 October 2010 (UTC)[reply]
- Withdrawn by operator. I'll come back when I have a better task. Allmightyduck What did I do wrong? 16:36, 27 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Operator: Lightmouse (talk · contribs)
Automatic or Manually assisted: Manually assisted
Programming language(s): AWB, monobook, vector, manual
Source code available: Source code for monobook or vector is available. Source code for AWB will vary, but versions are often also kept as user pages. Some coding will be done on an as-required basis.
Function overview: Janitorial edits to units
Links to relevant discussions (where appropriate):
This request duplicates the 'units of measure' section of Wikipedia:Bots/Requests for approval/Lightbot 3, but it is an application covering some aspects of the use of a non-bot account, as directed by Arbcom.
Edit period(s): Multiple runs. Often by batch based on preprocessed list of selected target articles.
Estimated number of pages affected: Individual runs of tens, or hundreds, or thousands.
Exclusion compliant (Y/N): Not applicable.
Already has a bot flag (Y/N): Not applicable.
Function details:
- Add {{convert}} to metric units so they display non-metric units.
- Add {{convert}} to non-metric units so they display metric units.
- Add text to metric units so they display non-metric units.
- Add text to non-metric units so they display metric units.
- Modify existing text conversions of units. This will be to correct errors, improve the conversion, improve appearance, improve consistency, change abbreviation, change spelling
- Modify existing template conversions of units. This will be to correct errors, improve the conversion, update the template, improve appearance, improve consistency, change abbreviation, change spelling
- Remove existing text conversions of units in order to replace them with a better template.
- Remove existing template conversions of units in order to replace them with better text.
- Remove existing template conversions of units in order to replace them with a better template.
- Add links to uncommon units
- Modify links to units. This will be to correct errors, make them more direct, improve appearance, improve consistency, change abbreviation, change spelling
- Remove links to common units
- It is not intended to add templates other than {{convert}} but if a better template exists, it will be considered
- For this application, the scope of the term 'conversion' includes more than one unit in the output e.g. 60 PS (44 kW; 59 hp)
Please note that this is for the activity of a non-bot account as directed by Arbcom and comes into the BAG category of 'manually assisted'. Lightmouse (talk) 23:01, 24 August 2010 (UTC)[reply]
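As a rough illustration only (not Lightmouse's actual AWB or monobook rules), the kind of regex these janitorial edits rely on might wrap a bare metric distance in {{convert}} so that a non-metric equivalent is displayed; a real rule also has to skip quotations, existing conversions, template parameters and so on:

import re

# Simplified sketch: match e.g. "250 km"; the real rules cover many units
# and many guard conditions that are omitted here.
PATTERN = re.compile(r"\b(\d+(?:\.\d+)?) ?km\b")

def add_convert(text):
    # "250 km" -> "{{convert|250|km}}", which renders as "250 kilometres (160 mi)".
    return PATTERN.sub(lambda m: "{{convert|%s|km}}" % m.group(1), text)

print(add_convert("The road is 250 km long."))
# The road is {{convert|250|km}} long.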
Discussion
- Recused MBisanz talk 01:53, 26 August 2010 (UTC)[reply]
- Any news? —I-20the highway 00:54, 16 September 2010 (UTC)[reply]
- You appear to be doing this without approval already via AWB. How do you respond to that? ΔT The only constant 23:59, 10 November 2010 (UTC)[reply]
- Note. An ongoing Arbitration Request for Amendment is in progress. — HELLKNOWZ ▎TALK 12:55, 12 November 2010 (UTC)[reply]
- Temporarily archiving request without prejudice as an effort to refocus attention on Lightbot 5. With operator's permission. Questions about whether Lightmouse was in violation of ArbCom ruling should be taken up there, not here. Regards, - Jarry1250 [Who? Discuss.] 20:50, 18 December 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Requests to add a task to an already-approved bot
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Peter Karlsen (talk · contribs)
Time filed: 20:44, Friday October 8, 2010 (UTC)
Automatic or Manually assisted: automatic
Programming language(s): python
Source code available: Standard pywikipedia
Function overview: template maintenance resulting from TFDs
Links to relevant discussions (where appropriate):
Edit period(s): continuous
Estimated number of pages affected: up to 5000 per day, quite often less
Exclusion compliant (Y/N): yes, native to pywikipedia
Already has a bot flag (Y/N): yes
Function details: When an administrator requests bot work at Wikipedia:Templates for discussion/Holding cell, the template task directed will be performed (remove, replace, etc.) The bot will not be automatically activated upon listing of a task in the TFD holding cell - I will review all requests before directing the bot to perform them.
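A minimal sketch of the core text operation (removing every transclusion of a template that TfD has decided to delete): the template name "Foo" is hypothetical, and in practice the edits go through pywikipedia's replace.py rather than standalone code like this.

import re

def remove_template(wikitext, name="Foo"):
    # Strip {{Foo}} / {{foo|params}} transclusions that contain no nested
    # templates; the first letter is case-insensitive, as in MediaWiki.
    first = "[%s%s]" % (name[0].upper(), name[0].lower())
    pattern = re.compile(r"\{\{\s*" + first + re.escape(name[1:]) + r"\s*(\|[^{}]*)?\}\}\n?")
    return pattern.sub("", wikitext)

print(remove_template("Some text.\n{{Foo|reason=example}}\nMore text."))
# Some text.
# More text.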
Discussion
Seems uncontroversial. Approved for trial. Please provide a link to the relevant contributions and/or diffs when the trial is complete. Do 1 small task. I'm not going to give a specific edit limit because the bot may as well not have to stop in the middle of handling a TfD. Anomie⚔ 00:16, 14 October 2010 (UTC)[reply]
- Trial complete. [3] is a permanent link to the edits. Peter Karlsen (talk) 05:23, 14 October 2010 (UTC)[reply]
- Approved. [4]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Rich Farmbrough (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Perl/AWB
Source code available: AWB/Perl no.
Function overview: Canonicalise cleanup tags to enable dating
Links to relevant discussions (where appropriate): N/A
Edit period(s): Continuousish
Estimated number of pages affected: 0 - this will only be done on pages already being edited.
Exclusion compliant (Y/N): N
Already has a bot flag (Y/N): N
Function details: For some, most or all maintenance tags (templates) on the page the following will be done:
- removal, replacement and reduction of leading, inter-token and trailing spaces and underscores to the minimum number of spaces required.
- removal of leading ":", "msg:", "template:", "Template:" and "Msg:" prefixes
- Replacement of some, all or any template names listed on the "what links here" (redirects only) page by the template name as shown at the top of the page
- Replacement of a large variety of mis-spellings of "date", together with known aliases of date that are not parameters of the templates
- Replacement of a large variety of mis-spellings, abbreviations and translations of month names within the date parameter
- Replacement of a modest variety of mis-formattings, abbreviations of years within the date parameter
- Removal of undesirable date components (time, day of week, day number, time zone, etc.)
- Rearrangement of components into monthname 4-digit-year
- Removal of duplicate date parameters
- Removal of certain cruft, vandalism and errors from date parameters
- De-substituting of the template
- Replacement of invalid dates with the current date
For clarity, "maintenance tags" excludes infoboxes, cite templates, navboxes, succession boxes, interwiki sister links (Commons, Wiktionary, Wikisource etc.), portal boxes, convert, language, mark-up and formatting templates: to these only rule 2 above will be applied.
In addition:
- Mis-spellings of Subst:, use of various DATE/Date templates, substituting of templates such as "fact now", removal or corrections of copy-pastes from template documentation which break the intended syntax and other multifarious, nefarious and toothfarious errors.
- Special re-arrangement and re-formatting where required of dated maintenance templates that do not use a date= parameter
- Certain limited conversions between section versions of templates and templates with a section (list, table...) parameter
- Certain limited conversions between non-stub and stub versions of cleanup templates.
- Certain limited conversions between BLP and non BLP versions of templates.
- AWB's General Fixes, excluding reference ordering, and with limited orphan tagging.
- Replacement of Subst:CURRENTMONTHNAME and Subst:CURRENTYEAR with the build month and year of the ruleset (rules are generally built several times a month, and certainly for each new month) to overcome T4700.
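As a rough Python illustration of the canonicalisation and dating rules listed above (a sketch only; SmackBot's actual rule set is far larger and lives in AWB settings), this folds a few {{Citation needed}} aliases onto the canonical name and dates any tag with no date= parameter:

import re
from datetime import date

ALIASES = r"(?:Citation[ _]+needed|Fact|Cn|Citeneeded|Citation[ _]+required)"
NOW = date.today().strftime("%B %Y")   # e.g. "October 2010"

def canonicalise(text):
    # 1. Fold redirects/misspellings onto the canonical template name.
    text = re.sub(r"\{\{\s*" + ALIASES + r"\s*([|}])", r"{{Citation needed\1", text, flags=re.I)
    # 2. Date any bare {{Citation needed}} that has no parameters at all.
    text = re.sub(r"\{\{Citation needed\}\}", "{{Citation needed|date=%s}}" % NOW, text)
    return text

print(canonicalise("Claim.{{fact}} Other claim.{{Citation needed|date=May 2010}}"))
# The bare {{fact}} becomes {{Citation needed|date=<current month year>}};
# the already-dated tag is left alone.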
Discussion
To do its dating task properly SmackBot has evolved many additional rules over and above simply inserting "|date=October 2010" inside templates. The importance of these rules cannot be overstated, and indeed many of them have become part of AWB general fixes, whether by knowledge sharing or independently. It is also the case that many minor fixes that are not essential to dating templates have been added, in order to get the most value out of each edit. By and large these fixes, trivial individually though they are, seem to appreciated by the community, or at least non-contentious. Nonetheless a change on 6th of September resulted in some high WikiDrama a few weeks later which readers may be familiar with. For this reason, and because drama knows no reason, nor yet bounds, I have pulled all SmackBot's custom find and replace rules, and fallen back to running on Full General Fixes (less reference ordering) alone, while I BRFA the more useful rules back. Since there are over 5000 rules, BAGGERs may be alarumned, especially if they are also reviewing Femto Bot 4. Have no fear! The urgent set are covered in this BRFA, the bulk of the rest should be in one additional batch: I will then review what is left.
As I said, the importance of these rules cannot be overstated: the proof of the pudding is that without them 85% of pages requiring dating of tags fail to be dated. A rapid approval of this BRFA would be appreciated; whilst I am aware there is a lot here, I hope none of it actually causes any problems. Rich Farmbrough, 22:14, 6 October 2010 (UTC). [reply]
Detailed explanation to Fram
This was actually out of date even then, both including false positives and excluding real redirects; however, a glance will show that this is a very incomplete set.
Here are some possibilities. Rich Farmbrough, 16:12, 7 October 2010 (UTC).[reply]
<Replacement>
  <Replace>{{$1|$2date=October 2010$3</Replace>
  <Comment>fix nn nnnnn Year specific:Any ISO date or just year to current</Comment>
  <IsRegex>true</IsRegex>
  <Enabled>true</Enabled>
  <Minor>false</Minor>
  <RegularExpressionOptions>IgnoreCase</RegularExpressionOptions>
</Replacement>

<Replacement>
  <Find>{{\s*(Citation[ _]+needed|Facts|Citeneeded|Citationneeded|Cite[ _]+needed|Cite-needed|Citation[ _]+required|Uncited|Cn|Needs[ _]+citation|Reference[ _]+needed|Citation-needed|An|Sourceme|OS[ _]+cite[ _]+needed|Refneeded|Source[ _]+needed|Citation[ _]+missing|FACT|Cite[ _]+missing|Citation[ _]+Needed|Proveit|CN|Source\?|Fact|Refplease|Needcite|Cite[ _]+ref[ _]+pls|Needsref|Ref\?|Citationeeded|Are[ _]+you[ _]+sure\?|Citesource|Cite[ _]+source) *([\|}\n])</Find>
  <Replace>{{Citation needed$2</Replace>
  <Comment />
  <IsRegex>true</IsRegex>
  <Enabled>true</Enabled>
  <Minor>false</Minor>
  <RegularExpressionOptions>IgnoreCase</RegularExpressionOptions>
</Replacement>

<Replacement>
  <Find>{{(Citation[ _]+needed)((?:\|\s*(?:(?:text|reason|category|discuss|topic|1)\s*=[^\|{}]*|[^\|{}=]*))*)}}</Find>
  <Replace>{{$1$2|date=October 2010}}</Replace>
  <Comment />
  <IsRegex>true</IsRegex>
  <Enabled>true</Enabled>
  <RegularExpressionOptions>IgnoreCase</RegularExpressionOptions>
</Replacement>
Trial run
How about a 20,000 trial run? Rich Farmbrough, 16:05, 8 October 2010 (UTC).[reply]
5,000? Rich Farmbrough, 20:28, 8 October 2010 (UTC).[reply]
1,000? Rich Farmbrough, 23:34, 10 October 2010 (UTC).[reply]
500? Rich Farmbrough, 19:03, 11 October 2010 (UTC).[reply]
100? Rich Farmbrough, 22:56, 11 October 2010 (UTC).[reply]
20? Rich Farmbrough, 17:50, 12 October 2010 (UTC).[reply]
5? Rich Farmbrough, 22:07, 12 October 2010 (UTC).[reply]
1? Rich Farmbrough, 14:31, 13 October 2010 (UTC).[reply]
- Approved for trial (250 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. –xenotalk 14:40, 13 October 2010 (UTC)[reply]
- Excellent. — Preceding unsigned comment added by Rich Farmbrough (talk • contribs) 18:20, 13 October 2010
- Trial complete. here Rich Farmbrough, 01:06, 14 October 2010 (UTC).[reply]
- Could you explain these edits? [7] [8] [9] [10] [11][12] [13][14] They don't appear to do anything substantive. –xenotalk 13:10, 18 October 2010 (UTC)[reply]
- Yes, these are similar to the edits discussed at the foot of the page. There are items in [Category:Templates with invalid dates] which are there due to the changes to the {{Cleanup}} template on 30 September, which you saw discussed on my talk page that day. You will also have seen the request to clean the category on my talk page since. They will be cleaned out by any edit, or within a month or two they will expire from the cache. Fortunately or unfortunately this is the first category that SmackBot tackles, and it exceeds the size of the trial run by a factor of two (the usual backlog is 10-15 articles). Rich Farmbrough, 04:31, 22 October 2010 (UTC).[reply]
- Ok, so you chose to do dummy edits instead of null edits - probably not a choice I would have made, and probably not something you should have rolled into this trial which is supposed to cover the bot's normal operations. I don't think that approval should be granted to change the first-letter capitalization of templates when consensus does not exist for a "Ucfirst" schema - they should just be left as-is. –xenotalk 13:44, 22 October 2010 (UTC)[reply]
- Hm. Did you miss the point that no-one objects? Did you miss the point that the vast majority of cleanup templates are ucfirst, a de facto agreement? Did you miss the point that no-one objects? Rich Farmbrough, 03:20, 24 October 2010 (UTC).[reply]
- Rich, the majority of cleanup templates being ucfirst is because your bot changed them to be that way. Please obtain consensus for your personal belief that templates should always be ucfirst. –xenotalk 03:28, 24 October 2010 (UTC)[reply]
- I read that a couple of times today, and apparently I have only imagined that several people objected to that, so let me state it here clearly:
I now object to a bot changing capitalization of the first letter of a transcluded template if that's all it does to that transclusion.
Amalthea 18:04, 24 October 2010 (UTC)[reply]- We are talking cleanup templates here. But Amlathea's objection is the precise wording of what's needed to get this running again then someone say so. Rich Farmbrough, 04:48, 31 October 2010 (UTC).[reply]
Footnotes
- ^
Moderately long explanation about template name diffusion
Clearly some of these names are more likely than others (about 91 possibilities are in use), however the key factor here is that people tend, quite reasonably, to replicate the tag names they have seen: both literally and by analogy. If all a given editor sees is "One source" they will tend to replicate that: they may of course use "One Source" or "Onesource" (especially if they have been exposed to run together words in other template names) or even "OneSource": this is all well and good except when they get red links{*} and get frustrated.
It is, for this reason, perfectly wise and helpful to these editors to create a small array of redirects - it would be better and more efficient if, for example we knew that of 100,000 attempts to enter "One source" there were 10,000 "Onesource" and only one "OneSource", we would probably create the first redirect and not bother with the second - or at least it would inform our decision on a similar template that is only expected to be used 10 times. - but we haven't much data on that as far as I know, although I have gathered a little on template name-space diffusion and consolidation, relating to the template redirect {{Infobox actor}} and its former redirects.
A problem arises, however, if we leave these template redirects languishing in articles forever. The sample editors are seeing is now, let us say, the six actaul redirects to One source (excluding T:SINGLE and T:ONES.
- {{One source}}
- {{Singlesource}}
- {{Single source}}
- {{Oneref}}
- {{Onesource}}
- {{1source}}
plus maybe our OneSource and One Source.
At this point an editor who is used to seeing spaced templates and recalls {{1source}} or {{Oneref}} is likely to enter "1 source" or "One ref" or even 1-source...
We now have the position where instead of dealing with redirects that are one step removed (Coding theory if anyone is interested) from our canonical name, we have to deal with items two, three and more steps away.
The further this goes
- the more redirects we need - until we have completed the dictionary - in this example a relatively small 3x3x4 = 36 redirects covers all combinations generated by the implicit rules. In the unref example the number is in the thousands - but even 36 * current number of templates is rather undesirable (though not infeasible).
- the more chance we have that separate domains start to blur. In the documentation cited you can see this with "Uncited", which has been on the edge of two domains of the partition and has been moved from one to the other. Prevent the thought that this is a rare occurrence even now! Sceptics and skeptics are invited to view "[1]", a list of hundreds of cases.
- the more confusing it becomes for users trying to extract, consciously or subconsciously, the rules for template naming. Do we use Sentence case? Title Case? UPPER CASE? lower case? CamelCase? Do we abbr as mch as poss.? And whn we d, do we use fll. stps. (prds.)? Dowenotleavespaces? Or-do-we-separate-words? And_if_so_how? (I have myself spent time in the last few month trying to choose a valid redirect to "Unreferenced section", and I hazard I work more with these tags than anyone.) This discourages users from using the templates, and ultimately from editing - it is an unnecessary part of the massive learning that is required to become a fluent editor.
Therefore replacing template redirects in articles, while not being a pressing problem, seems worthwhile at least where it can be built into another, ideally bot, edit.
(*) Example at Talk:Dachau_massacre#Changes, first bullet of second list.
{{BAG assistance needed}}
Rich Farmbrough 23:30, 10 October 2010 (UTC)[reply]
Irrelevant stuff
Discussion of trial run
This edit [15] was ostensibly for this trial but:
The edit summaries for bot trials must be specific to the trial, if anyone is supposed to be able to tell what is being tested! — Carl (CBM · talk) 11:31, 18 October 2010 (UTC)[reply]
{{BAG assistance needed}} Rich Farmbrough 23:37, 20 October 2010 (UTC)[reply]
I think, unfortunately, that Rich sometimes attracts drama, which is of course why we are reading this BRFA in the first place. If, however, we look specifically at his ability to run this bot task, it would be a massive leap (not to mention a mistake) to render this anything other than Approved.. - Jarry1250 [Who? Discuss.] 17:13, 16 November 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Automatic or Manually assisted: Automatic, unsupervised
Programming language(s): Perl
Source code available: User:AnomieBOT/source/tasks/EditorReviewArchiver.pm
Function overview: Archive old requests at WP:Editor review.
Links to relevant discussions (where appropriate): Wikipedia:Bot requests#New bot needed
Edit period(s): Periodic
Estimated number of pages affected: Depends on the volume of reviews
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N): Yes
Function details: The bot will do the following:
- Notify editors on their talk pages when their review is about to be closed.
- Close reviews that meet the conditions (currently: reviewed, at least 30 days old, no activity for 7 days, and user notified of impending closure at least 3 days ago).
- Add closed reviews to the archive pages.
- (Possible future) Update archive listings, e.g. on Wikipedia:Editor review/Header.
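A minimal sketch of the closure test implied by the conditions above (in the real task the timestamps come from the review page's history and the notification log; the thresholds are simply the ones currently listed):

from datetime import datetime, timedelta

def ready_to_close(reviewed, opened, last_activity, notified, now=None):
    # reviewed: bool; opened/last_activity/notified: datetimes of those events
    # (notified is None if no closure warning has been posted yet).
    now = now or datetime.utcnow()
    return (reviewed
            and opened <= now - timedelta(days=30)
            and last_activity <= now - timedelta(days=7)
            and notified is not None
            and notified <= now - timedelta(days=3))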
Discussion
This is a replacement for Wikipedia:Bots/Requests for approval/DustyBot 5. Anomie⚔ 03:16, 6 October 2010 (UTC)[reply]
{{BAGAssistanceNeeded}}
Anyone? Anomie⚔ 01:10, 14 October 2010 (UTC)[reply]
- Seems to me like a straightforward, previously approved, and non-controversial task from a trusted operator; I don't see a reason not to trial or even speedy this. — HELLKNOWZ ▎TALK 23:47, 14 October 2010 (UTC)[reply]
- Speedily Approved. MBisanz talk 23:48, 14 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Peter Karlsen (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Python
Source code available: Standard pywikipedia
Function overview: changes all transclusions of {{PD-old}} or {{pd-old}} to {{PD-old-100}}
Links to relevant discussions (where appropriate): Wikipedia:Bot_requests#Simple_task.
Edit period(s): Continuous
Estimated number of pages affected: 11,000
Exclusion compliant (Y/N): yes, native to pywikipedia
Already has a bot flag (Y/N): yes
Function details: from the bot request by Mechamind90:
When an image is tagged with {{PD-old-70}} or {{PD-old-100}} on both English Wikipedia and the Wikimedia Commons, the Wikipedia templates are each practically identical to their Commons counterpart. However, when an image is just tagged with {{PD-old}}, it means 100 on Wikipedia but 70 on the Commons. Perhaps on Wikipedia this can be resolved, since it's probably a hassle for Commons users to replace 70-year with 100-year when public domain images are copied from Wikipedia to Commons, which is a frequent practice for all free-content images that are initially uploaded locally. Therefore, every transclusion of {{PD-old}} or {{pd-old}} on an image will be converted to {{PD-old-100}}. ({{PD-old-100}}, presently a redirect to {{PD-old}}, will become a high-risk template as a result, and should be fully protected.)
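The text change itself is a one-line substitution; a sketch of it in Python (the actual run goes through standard pywikipedia on the File-namespace transclusions):

import re

def retarget_pd_old(wikitext):
    # {{PD-old}} / {{pd-old}}, with or without parameters -> {{PD-old-100}};
    # {{PD-old-70}} and existing {{PD-old-100}} tags are not touched.
    return re.sub(r"\{\{\s*[Pp][Dd]-old\s*([|}])", r"{{PD-old-100\1", wikitext)

print(retarget_pd_old("{{PD-old}}"))   # {{PD-old-100}}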
Discussion
It might be better to go the full TfD route, having {{PD-old}} be renamed to {{PD-old-100}} and then delete the redirect once it is orphaned. That would certainly solve the ambiguity problem. And then you could take care of it with KarlsenBot 4 ;) Anomie⚔ 00:27, 14 October 2010 (UTC)[reply]
- Users may prefer the convenience of being able to use the template through the {{PD-old}} or {{pd-old}} syntaxes to which they are accustomed, without having such invocations suddenly produce red links. The 100 year template is by far the most commonly used at 11,143 transclusions, while {{PD-old-70}} only has 854. I'd rather run an ongoing bot task than make life more difficult for a large number of editors. Peter Karlsen (talk) 06:35, 14 October 2010 (UTC)[reply]
- Ok. Approved for trial (31 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Anomie⚔ 03:54, 27 October 2010 (UTC)[reply]
- Trial complete. [16] is a permanent link to the edits. Peter Karlsen (talk) 20:30, 27 October 2010 (UTC)[reply]
- Approved. Anomie⚔ 22:11, 27 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Operator: Peter Karlsen (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Python
Source code available: Standard pywikipedia, invoked as python replace.py "-links:User:Peter Karlsen/amazonlinks" -regex -always -namespace:0 -putthrottle:10 "-summary:removing redundant parameters from link[s], [[Wikipedia:Bots/Requests for approval/KarlsenBot 2|task 2]]" "-excepttext:http\:\/\/www\.amazon\.com[^ \n\]\[\}\{\|\<\>\.\,\;\:\(\)\"\']*\&node\=\d*" "(http\:\/\/www\.amazon\.com[^ \?\n\]\[\}\{\|\<\>\.\,\;\:\(\)\"\']*)\?[^ \?\n\]\[\}\{\|\<\>\.\,\;\:\(\)\"\']*" "\\1"
Function overview: removes associate codes from amazon.com urls
Links to relevant discussions (where appropriate): Wikipedia:Bot_requests#Amazon_associate_links
Edit period(s): Continuous
Estimated number of pages affected: 20,000
Exclusion compliant (Y/N): yes, native to pywikipedia
Already has a bot flag (Y/N): yes
Function details: External links to amazon.com are sometimes legitimate. However, the acceptance of associate codes in such links encourages unnecessary proliferation of links by associates who receive payment every time a book is purchased as a result of a visit to the website using the link. Per the bot request, the portion of every amazon.com link including and after the question mark will be removed, thereby excising the associate code while maintaining the functionality of the link.
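A Python rendering of the replacement encoded in the replace.py invocation above: strip everything from the "?" onward, but leave any link whose parameters include &node= alone (in the actual invocation such pages are skipped wholesale via -excepttext). The associate tag in the usage example is made up.

import re

AMAZON = r"http://www\.amazon\.com[^ ?\n\]\[}{|<>.,;:()\"']*"

def strip_associate_codes(wikitext):
    def repl(m):
        url, query = m.group(1), m.group(2)
        # Preserve links that need their query string to resolve correctly.
        return m.group(0) if "&node=" in query else url
    return re.sub("(" + AMAZON + r")(\?[^ \n\]\[}{|<>.,;:()\"']*)", repl, wikitext)

print(strip_associate_codes("http://www.amazon.com/dp/1897093500?tag=someassociate-20"))
# http://www.amazon.com/dp/1897093500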
Discussion
Why not grab the link rel="canonical" tag from the amazon.com page header? It has a clean URL which can then be re-used, and if grabbed programmatically, there's no chance of accidentally latching onto someone's affiliate account; the affiliate system is cookie based. —Neuro (talk) 11:34, 8 October 2010 (UTC)[reply]
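For comparison, a sketch of the canonical-URL approach suggested here (it fetches each product page, which runs into exactly the spidering concern raised further down, so it is shown only to illustrate the idea):

import re, requests

def canonical_url(amazon_url):
    # Read <link rel="canonical" href="..."> from the page header; fall back
    # to the original URL if the tag is not found.
    html = requests.get(amazon_url, timeout=10).text
    m = re.search(r'<link\s+rel="canonical"\s+href="([^"]+)"', html)
    return m.group(1) if m else amazon_url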
Are there no examples of Amazon urls with parameters that are needed to reach the correct target? What if it is a FAQ or some Help page that the external link points to? And I agree with the above, that it would be better to replace the url with the site's specified canonical url (e.g. <link rel="canonical" href="http://www.amazon.com/Dracula-Qualitas-Classics-Bram-Stoker/dp/1897093500" /> for this page) — HELLKNOWZ ▎TALK 12:02, 8 October 2010 (UTC)[reply]
- Amazon urls for which the parameters after the question mark actually are relevant (those beginning with http://www.amazon.com/gp/help/) can simply be excluded from the bot run. There are substantial benefits, both in design simplicity, and ability to complete the task, derived from modifying the existing urls: if the bot attempts to visit amazon.com 20,000 times to retrieve canonical urls for each link with an associate code, there's a significant possibility that my IP address will be blocked from further access to amazon for an unauthorized attempt to spider their website. Peter Karlsen (talk) 16:40, 8 October 2010 (UTC)[reply]
- Then you will have to produce a list of exclusions or a list of inclusions of urls, so that no problems arise, as these edits are very subtle and errors may go unnoticed for a very long time. — HELLKNOWZ ▎TALK 17:16, 8 October 2010 (UTC)[reply]
Exclusion of all urls beginning with http://www.amazon.com/gp should be sufficient. Peter Karlsen (talk) 17:52, 8 October 2010 (UTC)[reply]
- What about [17] (no params)? Sure, it's unlikely to be in a reference, but that's just an example. — HELLKNOWZ ▎TALK 10:09, 9 October 2010 (UTC)[reply]
- Yes, it doesn't appear that exclusion of http://www.amazon.com/gp would be sufficient. However, by preserving the &node=nnnnnn substring of the parameters, the breakage of any links should be avoided (for instance, http://www.amazon.com/kitchen-dining-small-appliances-cookware/b/ref=sa_menu_ki6?&node=284507 produces the same result as the original http://www.amazon.com/kitchen-dining-small-appliances-cookware/b/ref=sa_menu_ki6?ie=UTF8&node=284507, http://www.amazon.com/gp/help/customer/display.html?&nodeId=508510 is the same as http://www.amazon.com/gp/help/customer/display.html?ie=UTF8&nodeId=508510, etc.) Peter Karlsen (talk) 17:06, 9 October 2010 (UTC)[reply]
- And are you certain there are no other cases (other parameters, besides "node"), as I picked this one randomly. — HELLKNOWZ ▎TALK 17:40, 9 October 2010 (UTC)[reply]
- Yes. As another example, http://www.amazon.com/b/ref=sv_pc_5?&node=2248325011 links to the same page as the original http://www.amazon.com/b/ref=sv_pc_5?ie=UTF8&node=2248325011. None of these sort of pages would normally constitute acceptable references or external links at all; when amazon.com links are used as sources, they normally are to pages for individual books or other media, which have no significant parameters. However, just in case, links to amazon help and product directory pages are now covered. Peter Karlsen (talk) 17:53, 9 October 2010 (UTC)[reply]
- Nevertheless an automated process cannot determine unlisted (i.e. blacklist/whitelist) link suitability in the article, no matter where the link points to. Even if a link is completely unsuitable for an article, a bot should not break it; it is human editor's job to remove or keep the link. — HELLKNOWZ ▎TALK 17:59, 9 October 2010 (UTC)[reply]
- Some bots, such as XLinkBot (talk · contribs), purport to determine whether links are suitable. However, since this task is not intended for that purpose, it has been modified to preserve the functionality of all amazon.com links. Peter Karlsen (talk) 18:06, 9 October 2010 (UTC)[reply]
- XLinkBot works with a blacklist, I am referring to "unlisted (i.e. blacklist/whitelist) link[s]", i.e., links you did not account for. In any case, the error margin should prove very small, so I have no actual objections. — HELLKNOWZ ▎TALK 18:09, 9 October 2010 (UTC)[reply]
- This bot seems highly desirable, although I don't know if something more customized for the job than replace.py would be desirable. Has this been discussed with the people who frequent WT:WPSPAM (they may have some suggestions)? How would the bot know which pages to work on? I see "-links" above, although my pywikipedia (last updated a month ago, and not used on Wikipedia) does not mention -links in connection with replace.py (I suppose it might be a generic argument?). As I understand it, -links would operate on pages listed at User:Peter Karlsen/amazonlinks: how would you populate that page (I wonder why it was deleted)? I suppose some API equivalent to Special:LinkSearch/*.amazon.com is available – it would be interesting to analyze that list and see how many appear to have associate referral links. It might be worth trying the regex on a good sample from that list and manually deciding whether the changes look good (i.e. without editing Wikipedia). I noticed the statement ".astore.amazon.com is for amazon affiliate shops" here, although there are now only a handful of links to that site (LinkSearch). Johnuniq (talk) 01:54, 9 October 2010 (UTC)[reply]
- User:Peter Karlsen/amazonlinks is populated from Special:LinkSearch/*.amazon.com via cut and paste, then using the bot to relink the Wikipedia pages in which the external links appear. The -links parameter is described in the replace.py reference [18]. I will post a link to this BRFA at WT:WPSPAM. Peter Karlsen (talk) 17:06, 9 October 2010 (UTC)[reply]
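Regarding the API equivalent of Special:LinkSearch that Johnuniq mentions, the list=exturlusage module can produce the same list programmatically; a rough sketch with the requests library follows (the exact query parameters the operator would use are an assumption):
<syntaxhighlight lang="python">
import requests

API = "https://en.wikipedia.org/w/api.php"

def pages_linking_to_amazon(limit=500):
    """Yield titles of articles with external links to *.amazon.com (one API batch only)."""
    params = {
        "action": "query",
        "list": "exturlusage",
        "euquery": "*.amazon.com",
        "eunamespace": 0,      # article namespace
        "eulimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    for hit in data["query"]["exturlusage"]:
        yield hit["title"]

for title in pages_linking_to_amazon(10):
    print(title)
</syntaxhighlight>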
- Also, I've performed limited, successful testing of a previous regex to remove associate codes on my main account [19], with every edit manually confirmed (similar to the way that AWB is normally used.) Peter Karlsen (talk) 17:28, 9 October 2010 (UTC)[reply]
- Strictly speaking you are not "Remove[ing] associate code from link[s]", you are "removing redundant parameters from link[s]", as the majority of those edits do not have any associate parameters. I always strongly suggest having a descriptive summary for (semi)automated tasks, preferably with a link to a page with further info. — HELLKNOWZ ▎TALK 17:40, 9 October 2010 (UTC)[reply]
- I can rewrite the edit summary as described, with a link to this BRFA. Peter Karlsen (talk) 17:56, 9 October 2010 (UTC)[reply]
- Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 22:55, 19 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Magog the Ogre (talk · contribs)
Automatic or Manually assisted: Manually assisted.
Programming language(s): Perl
Source code available: No, unless requested.
Function overview: Bot will assist the deletion of images on commons via a two-step process. The bot will not actually delete any pages: I will do that with my account.
Links to relevant discussions (where appropriate): Please see the thread I am creating at AN: WP:AN#OgreBot + Commons images.
Edit period(s): Supervised, thus only when I'm available to edit.
Estimated number of pages affected: Thousands (until backlog cleared). Hundreds per day.
Exclusion compliant (Y/N): Y
Already has a bot flag (Y/N): N
Function details:
;Stage one
Bot will look at 100 images at a time in Category:Wikipedia files on Wikimedia Commons.
- It will determine if each Commons image is a duplicate of the en image.
- It will determine if the Commons image has appropriate licensing to match the en image.
- It will determine if the Commons image page lists the uploader at en.
Bot will then print out a list via the PHP page on my local server, printing something in this format:
- [checkbox here] [250px of en image] [250 px of commons image] {{lf|en.imagename}}, [commons link here], [wording indicating if image is dupe: if not, print out each's resolution], [wording indicating if licensing is right], [wording indicating if uploader is linked], [uploader username] [Wikitext for en image], [Wikitext for commons image], [textbox for why image was not approved, may remain blank].
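A sketch of the duplicate check in stage one, comparing SHA-1 hashes from the imageinfo API on both wikis; the bot itself is written in Perl, so this Python version is purely illustrative and omits error handling:
<syntaxhighlight lang="python">
import requests

def file_sha1(api, name):
    """Return the SHA-1 of the current version of File:<name> on the wiki at `api`."""
    data = requests.get(api, params={
        "action": "query",
        "titles": "File:" + name,
        "prop": "imageinfo",
        "iiprop": "sha1",
        "format": "json",
    }).json()
    page = next(iter(data["query"]["pages"].values()))
    return page["imageinfo"][0]["sha1"]

def is_exact_duplicate(name):
    """True if the local en.wiki file and the Commons file of the same name are byte-identical."""
    en = file_sha1("https://en.wikipedia.org/w/api.php", name)
    commons = file_sha1("https://commons.wikimedia.org/w/api.php", name)
    return en == commons
</syntaxhighlight>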
Stage two (see #Restate of purpose)
I will <s>manually check next to each image upon approving it for deletion or not approving it.</s> input a list of images from Category:Wikipedia files with a different name on Wikimedia Commons for the bot. I will indicate either that I am approving the image or not approving the image. If I indicate I am not approving the image, I may also include a nsd or npd flag; I have no plans to worry about a {{puf}} tag at this time.
- If I approve the image, the bot will unlink all instances from en and replace them with the instances on commons
- The bot will verify there is not a superseding and conflicting image on en.
- The bot will ensure that resolution information is maintained properly (in case of higher res on commons)
- If I do not approve the image, the bot will tag the image with {{NoCommons|edit summary}}. If I choose the nsd or npd flags, the bot will tag the image with {{subst:nsd}} or {{subst:npd}} and notify the original uploader.
- Where there are any errors in this, bot will remember them. Errors may include protected pages, duplicate images found (could be a problem, say, if the image was hidden in comments), or confusion due to resolution issues (e.g., resolution is listed in an infobox, and it's too hard for the bot to parse). The bot will print out a list of errors on the server side so I can manually fix them.
*If I leave the edit summary blank, the bot will simply ignore the image.
Finally, once unlinking has been done, bot will print out a page with a "delete this image" button for me to click. It will also print out any errors which need to be fixed manually before the image can be deleted. Magog the Ogre (talk) 03:40, 26 September 2010 (UTC)[reply]
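The core edit in stage two is renaming image links in articles; a minimal sketch (Python rather than the operator's Perl, with hypothetical names) of rewriting File:/Image: links in wikitext:
<syntaxhighlight lang="python">
import re

def relink_image(wikitext, old_name, new_name):
    """Replace [[File:old_name ...]] / [[Image:old_name ...]] usages with new_name."""
    pattern = re.compile(
        r"\[\[\s*(File|Image)\s*:\s*" + re.escape(old_name),
        flags=re.IGNORECASE,
    )
    # Infobox parameters such as |image=old_name are not handled here; the operator
    # lists those as cases to be reported as errors and fixed manually.
    return pattern.sub(lambda m: "[[" + m.group(1) + ":" + new_name, wikitext)

print(relink_image("[[File:NameOnEnglish.jpg|thumb|250px|A caption]]",
                   "NameOnEnglish.jpg", "NameOnCommons.jpg"))
# -> [[File:NameOnCommons.jpg|thumb|250px|A caption]]
</syntaxhighlight>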
While I can understand that everyone is worried this will not save time, I must whole-heartedly disagree. I've spent time on the backlog, and I spend a huge amount of it on the menial work I've listed above that the bot can do.
Discussion
Frankly, I am doubtful whether this will work well. First, I doubt this on grounds of practicality. If it were easy to validate automatically, Metsbot would still be running. Again, if it were easy, Commonshelper would do a much better job than it does. Second, I question the necessity. Some form of move-to-commons backlog has existed since forever, either of images moved and needing processed, or of images not yet moved. The sky has not yet fallen and probably never will. Next, I believe that this proposal aims to solve the wrong problem. So long as local upload of free content is allowed by default, rather than redirecting all free uploads to Commons unless the uploader jumps through hoops (similar to what it required to get a blank upload form), the problem will never go away. The very first technical step in resolving these backlogs should be to prevent them growing further by reducing local free content uploads to an absolute minimum and maximising direct uploads to Commons. Finally, with so much of the backlog having been processed already, I would argue that the remaining images include an abnormally high proportion of crap. That is to say, images which should not have been uploaded to Commons, images which are licensed incorrectly, images which lack descriptions and sources, and so on. These should be processed manually with due consideration of the appropriate action - deletion included - rather than being assumed to be ok unless some glaring error is picked up automatically. For these reasons, I strongly oppose any automation of this process. [An afterthought: I would have fewer objections to the non-backlog, that is to say the newest uploaded-to-Commons categories, being processed with automated assistance.] Angus McLellan (Talk) 16:12, 26 September 2010 (UTC)[reply]
All this is already available in various templates. The problem is that the more automation you build in, the weaker the sanity check on the copyright status becomes. ©Geni 16:14, 26 September 2010 (UTC)[reply]
- To respond to both of your concerns: the bot is something I'm already considering writing for myself, without making any edits, to assist in the manual deletion. The only part that I really need approval for is the ability to unlink images automatically after I've already approved them for deletion. And of course, again, I'm reviewing each image manually, ensuring there are no obvious copyright issues for sanity check reasons. Finally: to the concern dealing with Metsbot, again, this information is all something I'm going to create anyway on the back end (no edits = no approval needed), and it's only meant to assist me as the deleter; it will not do anything radical. Magog the Ogre (talk) 18:19, 26 September 2010 (UTC)[reply]
- I'm a bit confused on how this bot will work, but I'm not sure it is entirely necessary. As the system already shows on the bottom of every image page whether it is a duplicate of some Commons file or not, the likelihood of the uploader's name being missed by the bot and whatnot seems too great to be efficient. Also, what is necessary is the validation of the local files first—if a local file never listed the source and/or author's name, both that file and the Commons one should be tagged. A bot would never be able to help identify this, so those checks would need to be done manually for each image anyway. Basically, I don't think this will save enough time to be necessary at this point. /ƒETCHCOMMS/ 01:01, 27 September 2010 (UTC)[reply]
Restate of purpose
OK, it's become obvious I did a poor job of selling this bot and explaining its actions. Please ignore the entirety of stage one above. That's already something I'm going to write, and I'm doing it for me because it saves time. But it's not actually any bot edit, and as such doesn't need approval. The only important edits that this bot will do is:
- I will input a list of files from Category:Wikipedia files with a different name on Wikimedia Commons that have been manually reviewed by me as acceptable transfers to commons. Not by the bot, by me. I have been unclear about this, apologies. The bot will then complete step two above: unlink the English image where acceptable, and replace it with the commons image. E.g., File:NameOnEnglish.jpg -> File:NameOnCommons.jpg. I may also input information into the bot indicating I've declined to transfer the image; the bot will then replace the {{subst:ncd}} tag with {{NoCommons|my reasoning}}, and possibly add a {{subst:nsd}} or {{subst:npd}} tag to the image if I specify (and warn the uploader). This is really only a semi-automated bot; frankly, I could do it in JavaScript which wouldn't need bot group approval; however, it would be much less time consuming to have the bot do the edits, rather than my browser. Magog the Ogre (talk) 01:24, 27 September 2010 (UTC)[reply]
- OK, so the bot is just unlinking after you delete the files and not actually doing any "real" reviewing? That seems reasonable enough; however, I have a suggestion:
- Make a template to place on the image page of a reviewed file. This should tell the bot to change the links for that image.
- Let any admin add the template onto an image he/she reviews.
- Have the bot change the template once the links are updated.
- This should place the image in a new category for speedy deletion under F8, as all of the images will have been checked by admins beforehand and shouldn't require any extra checks, so that category can be cleared out quickly and daily.
- I think it is possible to have a template check the revisionuser who added it, name the admin on a parameter, and have the bot relink only the images that have been checked by an admin. This is similar to the John Bot II system and is more efficient, as more users can help out. Does this sound reasonable? /ƒETCHCOMMS/ 02:26, 27 September 2010 (UTC)[reply]
- I can do that, but I might want to add it as an additional function, because it will create an extra step. The admin would have to a) add the template, b) wait until the bot is run again by me, and delinks, then c) delete the image. But if you think this is a good idea, I'd be happy to implement it instead or in addition. Magog the Ogre (talk) 02:42, 27 September 2010 (UTC)[reply]
- Well, any admin could delete the file later, like clearing out the CSD categories every 00:00 UTC or whenever, and that's a quick task with a batch-delete script. If you can add this function, it would let more users help out in clearing the backlog. /ƒETCHCOMMS/ 04:11, 27 September 2010 (UTC)[reply]
- Alright; when do I start? Magog the Ogre (talk) 01:02, 28 September 2010 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. although given my work with images, I will recuse from final approval. MBisanz talk 22:45, 19 October 2010 (UTC)[reply]
- Alright, done. I went quite a bit over on the 50, more like 90, because the last image I instructed the bot to delink had 50 transclusions (!). Magog the Ogre (talk) 05:25, 26 October 2010 (UTC)[reply]
- {{BAG assistance needed}} - when can I get an update here? Magog the Ogre (talk) 22:47, 1 November 2010 (UTC)[reply]
- While I can't view the deleted images, it looks like everything went OK, and I don't see much harm in approving this request. Gigs (talk) 02:08, 16 November 2010 (UTC)[reply]
- Approved. Looks fine to me (I can see the deleted images). Of course, this is a task for which most of the hard work is still performed by a human, and in that sense there's less to approve (the bot operator remains responsible for their actions). - Jarry1250 [Who? Discuss.] 17:13, 16 November 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: H3llkn0wz (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): WikiSharpAPI (C#)
Source code available: Not now, I will make API available when it's actually usable.
Function overview: Reference and citation maintenance
Links to relevant discussions (where appropriate): Wikipedia:Bot_requests/Archive_35#Correct_archive_parameters_if_url_is_archive.org, User_talk:H3llBot#accessdate.
Edit period(s): Continuous (when I'm online)
Estimated number of pages affected: All encountered pages with issues, limited by main archival task speed (5-8epm), I suspect this task will raise this to 6-9epm.
Exclusion compliant (Y/N): Y
Already has a bot flag (Y/N): Y
Function details:
1) If a citation's |url= is a valid Wayback archive link, set |archiveurl= and |archivedate= to match it and trim the |url= to the original link, if
- no |archiveurl= is set -and-
- no |archivedate= is set -or- |archivedate= is of broken syntax/unrecognised -or- |archivedate= is the same as actual link's archive date
2) Remove {{Wayback}} template and add corresponding |archiveurl= and |archivedate= in the preceding citation if
- {{Wayback}} has |url= set and |date= set, and |title= not set or equal to citation's |title= -and-
- citation has no |archiveurl= set and no |archivedate= set, and with |url= matching {{Wayback}}'s url
- See example for both fixes.
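Fix 1 hinges on recognising a Wayback Machine link and recovering its archive date and the original URL; a small illustrative fragment is below (the bot is written in C#, and the regex assumes the usual web.archive.org/web/<14-digit timestamp>/<original> layout):
<syntaxhighlight lang="python">
import re
from datetime import datetime

WAYBACK = re.compile(r"^https?://web\.archive\.org/web/(\d{14})/(.+)$")

def split_wayback(url):
    """Return (archiveurl, archivedate, original_url) for a Wayback link, else None."""
    m = WAYBACK.match(url)
    if not m:
        return None
    stamp, original = m.groups()
    archivedate = datetime.strptime(stamp, "%Y%m%d%H%M%S").strftime("%Y-%m-%d")
    return url, archivedate, original

print(split_wayback("http://web.archive.org/web/20080913164659/http://example.com/page"))
# -> ('http://web.archive.org/web/20080913164659/http://example.com/page',
#     '2008-09-13', 'http://example.com/page')
</syntaxhighlight>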
X) As an addition, I want to improve the previous task's (BRFA, description) functionality a bit:
When adding |archivedate= to a citation
- If one of {{Use dmy dates}}, {{Use mdy dates}}, or {{Use ymd dates}} templates is present, use that respective date format for the field
- Otherwise use citation's |accessdate= (or |date= if former is missing/invalid) date format
- Otherwise use yyyy-mm-dd (e.g. 2010-12-31) date format
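The date-format choice in addition X is a three-step fallback; a sketch of that decision follows (detect_format is a simplified stand-in for the bot's own date recognition, not actual bot code):
<syntaxhighlight lang="python">
import re

def detect_format(date_string):
    """Rough guess at an existing date's format; a stand-in for the bot's own parser."""
    if re.match(r"^\d{4}-\d{2}-\d{2}$", date_string or ""):
        return "%Y-%m-%d"
    if re.match(r"^\d{1,2} [A-Z][a-z]+ \d{4}$", date_string or ""):
        return "%d %B %Y"
    if re.match(r"^[A-Z][a-z]+ \d{1,2}, \d{4}$", date_string or ""):
        return "%B %d, %Y"
    return None

def archivedate_format(article_text, accessdate=None, date=None):
    """Pick the format for a new |archivedate=, mirroring the fallback order above."""
    if "{{Use dmy dates" in article_text:
        return "%d %B %Y"
    if "{{Use mdy dates" in article_text:
        return "%B %d, %Y"
    if "{{Use ymd dates" in article_text:
        return "%Y-%m-%d"
    return detect_format(accessdate) or detect_format(date) or "%Y-%m-%d"

print(archivedate_format("... {{Use dmy dates}} ...", accessdate="2010-11-07"))  # -> %d %B %Y
</syntaxhighlight>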
Discussion
I hope you mean yyyy-mm-dd! Rich Farmbrough, 21:04, 30 September 2010 (UTC).[reply]
- Oh - and that's a great improvement to a great task. Rich Farmbrough, 21:05, 30 September 2010 (UTC).[reply]
- Yes, yyyy-mm-dd, my bad; the code uses Ymd, so all's well! Also, thanks. — HELLKNOWZ ▎TALK 21:59, 30 September 2010 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 22:46, 19 October 2010 (UTC)[reply]
- Trial complete. Done. edits here. — HELLKNOWZ ▎TALK 19:41, 7 November 2010 (UTC)[reply]
- I see a number of cases where the bot created dates like "28-09-2007": [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34]
- I also see a few cases where you copied the format of |date= when |accessdate= did exist and had a valid format, for example the second 2 in [35]. Anomie⚔ 01:02, 19 November 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} Any reply? Anomie⚔ 03:20, 24 November 2010 (UTC)[reply]
- Yes, thanks for the reply. I got so paranoid about not accidentally adding dmy after Rich's comment that I went and ended up doing exactly that... Regarding |date= before |accessdate=, I checked them in reverse order. I also did not first check if the date is valid, so the bad date params caused the bot to default to dmy, which in turn was ymd. Should be OK in recognising formats now [36]. ymd format: one edit was fixed, one fixed manually, bot-fixed rest: [37][38][39][40][41][42][43][44][45][46][47][48][49]. accessdate priority: Bot-fixed: [50][51][52][53][54]. Hopefully I didn't miss anything. Sorry for the mess, I was definitely far over Ballmer's peak at the time. — HELLKNOWZ ▎TALK 11:53, 24 November 2010 (UTC)[reply]
- Is [55] correct? It seems to have chosen 13 May 2008 even though the accessdate is 2010-11-07. Also, I see in your sandbox edit that the bot output dates as "28/09/2007", "2007/09/28", and "28-09-2007". The bot should never output any of those formats, even if some misguided human did use them. The bot should always output either "September 28, 2007", "28 September 2007", or "2007-09-28". Anomie⚔ 16:09, 24 November 2010 (UTC)[reply]
- The Iran article has a {{Use dmy dates}} template, so that instance should be correct. Regarding digit separator, the bot attempted to mimic the original date format's separator ("/", "\", "."). I will disable this. Similarly, I will then only allow the "M d, y", "d M y" and "y-m-d" formats. I don't have the irc logs any more, but I ran a date format check in summer and "28-09-2007" appeared roughly as often as "September 28, 2007" did. This is why I was allowing this format as well. There are featured articles using dmy only. But I suppose "Do not use year-final numerical date formats.." surpasses "Dates in article references should all have the same format." Will post a sandbox edit in the evening. — HELLKNOWZ ▎TALK 16:34, 24 November 2010 (UTC)[reply]
- sandbox edit. Ignoring separators and not using dmy. — HELLKNOWZ ▎TALK 09:16, 25 November 2010 (UTC)[reply]
- Ah, ok on the Iran page. Sandbox edit looks good now. Approved. Anomie⚔ 16:08, 25 November 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Smith609 (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): PHP
Source code available: [56]
Function overview: Facilitate the addition of references by adding ref tags where requested.
Links to relevant discussions (where appropriate): User_talk:Citation_bot#Suggestion
Edit period(s): Continuous (when triggered by edits)
Estimated number of pages affected: dozens per day (depending on user take-up)
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N): Yes
Function details:
- User enters {{ref pmid|1234}} (or ref jstor, ref doi...)
- Bot replaces {{ref pmid|1234}} with <ref name="AuthorYear">{{cite pmid|1234}}</ref> or <ref name="AuthorYear" />, as appropriate
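A sketch of the substitution itself: the first occurrence of a given identifier becomes a full named reference, later occurrences become self-closing reuses. The AuthorYear name is assumed to be looked up elsewhere (lookup_name below is a placeholder, not part of Citation bot):
<syntaxhighlight lang="python">
import re

def expand_ref_templates(wikitext, lookup_name):
    """Replace {{ref pmid|N}} (and ref doi/jstor) with <ref> tags, reusing named refs."""
    seen = set()

    def repl(match):
        kind, ident = match.group(1), match.group(2)
        name = lookup_name(kind, ident)                 # e.g. "AuthorYear"
        if (kind, ident) in seen:
            return '<ref name="%s" />' % name           # later occurrences: reuse the ref
        seen.add((kind, ident))
        return '<ref name="%s">{{cite %s|%s}}</ref>' % (name, kind, ident)

    return re.sub(r"\{\{\s*ref (pmid|doi|jstor)\s*\|\s*([^}|]+?)\s*\}\}", repl, wikitext)

text = "First use {{ref pmid|1234}} and a repeat {{ref pmid|1234}}."
print(expand_ref_templates(text, lambda kind, ident: "AuthorYear"))
# -> First use <ref name="AuthorYear">{{cite pmid|1234}}</ref> and a repeat <ref name="AuthorYear" />.
</syntaxhighlight>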
Discussion
I take it this is almost the same as Cite doi replacement, and given task 6 approval, this would be a minor change. So I don't see any problems. — HELLKNOWZ ▎TALK 23:37, 14 October 2010 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 22:44, 19 October 2010 (UTC)[reply]
- Related edits can be observed at Special:Contributions/Citation_bot_1. Martin (Smith609 – Talk) 03:22, 23 October 2010 (UTC)[reply]
- I think that I've got this implemented; interested parties are invited to examine the source code or to suggest test cases – scenarios that might be problematic would be warmly received before this goes live. Martin (Smith609 – Talk) 06:26, 23 October 2010 (UTC)[reply]
- What is the current status of this request? ΔT The only constant 01:03, 11 November 2010 (UTC)[reply]
- Looks like it's working fine to me. Just waiting for approval. (It's difficult to just do a batch of 50 edits related to this task because the functionality is to be added to the existing bot tasks, which will be performed concurrently.) Martin (Smith609 – Talk) 17:16, 11 November 2010 (UTC)[reply]
- Trial complete.
{{BAG assistance needed}}
Approved. MBisanz talk 08:26, 16 December 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Smith609 (talk · contribs)
Automatic or Manually assisted: Automatic, in response to user input
Programming language(s): PHP
Source code available: [57]
Function overview: Add names to anonymous reference tags
Links to relevant discussions (where appropriate): User_talk:Citation_bot#Suggestion
Edit period(s): Continuous
Estimated number of pages affected: 1–3 thousand at present; ongoing rate of dozens per day
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N): Yes
Function details: Citations created with {{cite pmid}}, {{cite doi}}, {{cite jstor}} etc only contain a unique article identifier. Thus it is difficult for editors to recognize what is being cited, and difficult to use the reference elsewhere in the article.
If no name= parameter is present in the ref tag containing these templates, the bot will add name=FirstauthorYear from the information in Wikipedia. (The bot already creates this information in subtemplates, per Wikipedia:Bots/Requests_for_approval/DOI_bot_2.) If there is already a citation with this name it will append "a", "b", etc after the year to ensure that the ref names are distinct.
If there are multiple identical citations in the article, duplicate citations will be replaced with <ref name=Refname />. (Identical means "every parameter has the same value", whitespace notwithstanding.)
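For the naming scheme, a sketch of how FirstauthorYear names with "a", "b" suffixes might be allocated; the author and year are assumed to come from the bot's citation subtemplates, and the helper below is illustrative only:
<syntaxhighlight lang="python">
def allocate_ref_name(author, year, taken):
    """Return 'AuthorYear', appending 'a', 'b', ... when that name is already in use."""
    base = "%s%s" % (author, year)
    if base not in taken:
        taken.add(base)
        return base
    suffix = "a"
    while base + suffix in taken:
        suffix = chr(ord(suffix) + 1)   # a, b, c, ... (a sketch; no handling past 'z')
    taken.add(base + suffix)
    return base + suffix

taken = set()
print(allocate_ref_name("Smith", 2002, taken))  # Smith2002
print(allocate_ref_name("Smith", 2002, taken))  # Smith2002a
print(allocate_ref_name("Smith", 2002, taken))  # Smith2002b
</syntaxhighlight>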
Discussion
So this basically makes the reference markup code more readable by adding a reference name (in some style, like Harvard)? Though reading the suggestions, I doubt adding anything more than last name + year + optional letter is needed. I suppose it is preferred over bare cite pmid's. I would suggest making <ref name="Smith 2002"> though, with quotes and spaces. — HELLKNOWZ ▎TALK 23:29, 14 October 2010 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 22:43, 19 October 2010 (UTC)[reply]
- Related edits can be observed at Special:Contributions/Citation_bot_1. Martin (Smith609 – Talk) 03:23, 23 October 2010 (UTC)[reply]
- I've coded up the script (see successful edit) and would appreciate any test cases that anyone may wish to offer, so that I can be sure that the bot is as robust as possible before I proceed further. Martin (Smith609 – Talk) 07:25, 23 October 2010 (UTC)[reply]
- If there's no comments then I guess it's Trial complete.. Martin (Smith609 – Talk) 00:43, 14 November 2010 (UTC)[reply]
{{BAG assistance needed}}
Approved. MBisanz talk 08:26, 16 December 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Operator: VernoWhitney (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Python
Source code available: Not written yet
Function overview: Mass rollback of all articles a user's contributed to
Links to relevant discussions (where appropriate): Wikipedia:Administrators' noticeboard/Incidents/CCI (intermittently, I found comments regarding it in at least the "Implementing bot?", "Questions", and "A running count of progress please" sections) and User talk:Moonriddengirl/Archive 26#CCI tools.
Edit period(s): Occasional, as needed
Estimated number of pages affected: Many
Exclusion compliant (Y/N): N
Already has a bot flag (Y/N): Y
Function details: For each article a particular contributor has edited, this will identify the contributor's earliest edit which meets some threshold (such as the one generally used at WP:CCI: increasing the article's size by 100 bytes, excluding edits which are likely reversions). It will then roll back each article to the version immediately prior to the contributor's first substantial edit and leave an appropriate message on the article's talk page (probably based upon {{CCId}}).
Since rolling back all of a known copyright violator's touched articles has been mentioned (even in the Signpost), I figured it would be a good idea to have a bot ready in case there is support for such an action. This would be for use in the same situations where enough of an editor's contributions have been determined to be copyvios that Special:Nuke is used for their created articles. VernoWhitney (talk) 20:20, 22 September 2010 (UTC)[reply]
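A sketch of the revision scan described above: walk an article's history oldest-first, find the contributor's first edit that added more than the size threshold and does not look like a reversion, and return the revision just before it. The actual bot would be built on the pywikipedia framework; the bare API calls and the reversion heuristic below are assumptions for illustration.
<syntaxhighlight lang="python">
import re
import requests

API = "https://en.wikipedia.org/w/api.php"
REVERT_RE = re.compile(r"\b(revert|rv\b|rvv|undid|undo)", re.IGNORECASE)

def revision_to_restore(title, user, threshold=100):
    """Return the revid to roll back to, or None if no substantial edit was found.

    A None result can also mean the contributor's very first edit created the page."""
    params = {
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "ids|user|comment|size", "rvdir": "newer",
        "rvlimit": 500, "format": "json",
    }
    page = next(iter(requests.get(API, params=params).json()["query"]["pages"].values()))
    previous_size, previous_id = 0, None
    for rev in page.get("revisions", []):
        added = rev["size"] - previous_size
        if (rev.get("user") == user and added > threshold
                and not REVERT_RE.search(rev.get("comment", ""))):
            return previous_id            # the version just before the first substantial edit
        previous_size, previous_id = rev["size"], rev["revid"]
    return None
</syntaxhighlight>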
Discussion
- The proposed bot is a follow-on to Uncle G's article blanking operation, which was approved a few days ago and has done a preliminary run (I think about 10% of the task). I believe we're waiting for user experience and feedback from Uncle G's preliminary run before going on with the other 90%. This follow-on has been discussed at CCI and seems generally supported by the people engaged with such details, barring possible surprises from Uncle G's operation. This is a good time to be developing and testing VW's bot, but IMO deployment shouldn't begin until we've gotten some more experience (at least a week's worth, say) from the results of the first operation. I'm guessing it will take that long to get all details of VW's bot ironed out anyway. The total # of articles to be rolled back is presumed to be around 13,000. General overview of the surrounding issue is at:
- 71.141.90.138 (talk) 00:32, 23 September 2010 (UTC)[reply]
- Just to be clear: the first execution of this task may be for the Darius Dhlomo CCI if there is solid support for it, but I also wish it to be a possible tool for other CCIs should the need arise. VernoWhitney (talk) 00:37, 23 September 2010 (UTC)[reply]
{{BAGAssistanceNeeded}}
Is anyone out there? Judging from the progress so far it won't be needed for this particular CCI, but I still think it would be handy to have this tool available, so any feedback whatsoever would be appreciated. VernoWhitney (talk) 19:05, 8 October 2010 (UTC)[reply]
- Assuming any runs are agreed upon beforehand at CCI, there should not be any major problems with this. But it should be agreed what the first date is, what the change threshold is, what talk page template to use, etc. Also, what constitutes a reversion? An edit summary with a script tag or phrases like rv/revert/undo? Finally, I think it would be best if the bot could make a list of all proposed reversions, and outline borderline cases for manual review. The DD case is huge, but regular cases aren't so big as to take too much time to properly review. — HELLKNOWZ ▎TALK 23:16, 14 October 2010 (UTC)[reply]
- Yes, it would only be run if there was consensus for it at CCI. I imagine the "default" settings would be for edits from any date that added more than 100 bytes of content, since those are the standards for listing edits for human CCI review, but those of course could be set differently for any given run. I erred when mentioning the talk page template earlier: the talk page template would be based on {{CCI}}, but additionally include at least the fact that it was done automatically, and a link to the particular version immediately prior to the contributor's first edit which meets whatever threshold has been set up for the run. By reversion I mean simply replacing the current content of the article with that of an earlier version.
- The whole point of the bot is to avoid going through proposed reversions and borderline cases, because it is only to be used when so many of a contributor's edits are copyright violations that the collateral damage is acceptable (again, akin to Special:Nuke). A likely case for the use of this is the nigh-inevitable return of Siddiqui (talk · contribs)—the sockmaster behind both Wikipedia:Contributor copyright investigations/Paknur and Wikipedia:Contributor copyright investigations/AlphaGamma1991. While DD's CCI is huge, it is (I'm fairly certain) not our largest and it's only one of the 40 cases open right now. VernoWhitney (talk) 00:21, 15 October 2010 (UTC)[reply]
- Approved for trial. Please provide a link to the relevant contributions and/or diffs when the trial is complete. Maybe 3 - 5 users worth of rollback for a trial. MBisanz talk 22:31, 19 October 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} what is the current status of this request? ΔT The only constant 01:02, 11 November 2010 (UTC)[reply]
- I asked about an appropriate editor/target for this trial at WT:CCI and didn't get a response - and then promptly forgot about it with working up to my RFA. I'll ask for some more attention and see if any of the current CCIs are good candidates. VernoWhitney (talk) 01:11, 11 November 2010 (UTC)[reply]
- Trial complete. First user trial completed. Feedback continuing at Wikipedia talk:Contributor copyright investigations/Archive 1#Rollback bot. VernoWhitney (talk) 14:09, 12 November 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} I could wade through all the various threads of the discussion, or I could just ask you whether there were any problems :) How's it looking? Still needed? - Jarry1250 [Who? Discuss.] 18:08, 3 December 2010 (UTC)[reply]
- The only problem that showed up was one unnecessary edit to an article's talk page when the article had already been reverted by another editor, and I've since added a check for that. Other than that the conversation included talk about tweaking the edit summaries and the message the bot uses, but there hasn't been another clear occasion to do another test run yet (since MBisanz said 3-5 users). There are thankfully few cases where all of an editor's contribs are not worth checking, but I think it is still needed at least for the same situation as I ran the first test on: a contributor indef-blocked for copyvios who keeps returning as a new sock adding more (and mostly) copyvios. VernoWhitney (talk) 19:43, 3 December 2010 (UTC)[reply]
- Judging by the comments at User talk:VernoWhitney, the bot is creating an overabundance of unproductive edits that may or may not be related to CopyVio problems. This is expecting a number of other users to keep tabs on a bot's edits, and a very large number are being reverted. This is not the purpose of bots. Bots should be making uncontroversial edits that, except in rare cases, don't need oversight by real users. I am very appalled by the approach this bot is taking to editing Wikipedia and would like an immediate halt of its use and a very big rethinking of the purpose and process by which the bot makes edits. A bot that creates more work for users is not a productive bot, but instead a vandal! Sadads (talk) 18:35, 10 December 2010 (UTC)[reply]
- This is not a task which can produce edits that do not require any user attention. There is practically no chance that every article edited was just copyvio and that the editor remained the only contributor. That said, there does need to be consensus to use a tool to make such edits. — HELLKNOWZ ▎TALK 13:26, 19 December 2010 (UTC)[reply]
Needs wider discussion.. Seeing how this has/will get stalled, I see BAG's reluctance to take action for a task that can be applied for more than one case. There is support for using such a tool to revert copyvios; but there seems to be little open, direct support for the results produced by this particular implementation. Although, as I pointed above, it would be near impossible to create a perfect tool that would not require user attention. The question is whether the community supports the current implementation. I suggest you start a broader discussion referring to the actual trial edits and make a straight point: "Does the community want this kind of output from this kind of task?". Of course, it's all up to you, but at least then the BAG can refer to this a "consensus for the task", because at present this will probably not get blanket approved for copyvio cases. — HELLKNOWZ ▎TALK 13:26, 19 December 2010 (UTC)[reply]
- Obviously there's not consensus for the current implementation (I can provide links to the discussions if you'd like), but what about continuing trial once I've gone through the code to incorporate the feedback that I got from the aborted second trial and reduce the false positive rate? VernoWhitney (talk) 14:43, 19 December 2010 (UTC)[reply]
- Approved for extended trial (30–50 edits and/or 1–2 users). Please provide a link to the relevant contributions and/or diffs when the trial is complete. OK, let's do a run with the feedback incorporated. However, following that, the "wider discussion" and community response will be necessary if the task is to be further trialed/approved. Do you have any links to Uncle G bot's post-run feedback? — HELLKNOWZ ▎TALK 16:06, 19 December 2010 (UTC)[reply]
- Has this trial been done? Mr.Z-man 04:16, 2 January 2011 (UTC)[reply]
- Not yet, the holiday has delayed the coding needed. VernoWhitney (talk) 12:42, 3 January 2011 (UTC)[reply]
- {{OperatorAssistanceNeeded}} Any progress? Anomie⚔ 03:27, 24 February 2011 (UTC)[reply]
- Some progress, but not enough to address all of the issues which people had with the first run. Since it looks like it will be a while before I can finish coding you can consider this withdrawn for now and I'll just reopen it after I have the time I need to put into it. VernoWhitney (talk) 12:59, 24 February 2011 (UTC)[reply]
- Ok, Withdrawn by operator.. Just undo this edit, add any necessary comment, and relist it when you're ready. Anomie⚔ 00:08, 25 February 2011 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Taxobot 3
Operator: Smith609 (talk · contribs)
Automatic or Manually assisted: Manually supervised by users
Programming language(s): PHP
Source code available: Will be available at Google Code, which currently hosts code for existing task (WP:BRFA#Taxobot 2).
Function overview: This function will help editors who wish to replace an existing {{taxobox}} with an {{automatic taxobox}} (see below).
Links to relevant discussions (where appropriate): Template_talk:Taxobox#Usability. Note that this task will only be performed in cases where, at the editor's discretion, an automatic taxobox is beneficial.
Edit period(s): When explicitly triggered by an editor.
Estimated number of pages affected: One page per user activation.
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N): No; approval subject to approval of Task 2.
Function details:
Template:Automatic taxobox is a template that removes the clutter from Template:Taxobox, automatically generating taxonomic information based on a series of templates that are invisible to the user, and will be generated by Taxobot if Task 2 is approved.
In some cases, it is already desirable to upgrade to an automatic taxobox. At present, this must be done by hand, which makes it easy to introduce mistakes.
If a user decides that the {{automatic taxobox}} template is appropriate for a page, the bot will present the user with a side-by-side comparison of the wikicode and output of the existing taxobox and the proposed replacement.
The bot will generate the replacement by removing redundant parameters (e.g. |phylum=) from the existing taxobox; re-naming other parameters (e.g. |genus_authority= → |authority=); and retaining others (e.g. |image=). It will also suggest improvements (e.g. by using the {{geological range}} template in the |fossil_range= parameter, if possible). The generated wikicode can be amended by the user, and the results previewed.
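A sketch of the parameter transformation step only, assuming the existing {{taxobox}} has already been parsed into a dict; the parameter lists below are illustrative and much shorter than what the real template recognises:
<syntaxhighlight lang="python">
# Illustrative lists only; the real taxobox has many more parameters.
REDUNDANT = {"regnum", "phylum", "classis", "ordo", "familia", "subfamilia"}
RENAMED = {"genus_authority": "authority", "binomial_authority": "authority"}
KEPT = {"name", "image", "image_caption", "fossil_range", "genus", "species"}

def build_automatic_taxobox(params):
    """Map {{taxobox}} parameters to a proposed {{automatic taxobox}} parameter dict."""
    out = {}
    for key, value in params.items():
        if key in REDUNDANT:
            continue                      # taxonomy now comes from the taxonomy templates
        out[RENAMED.get(key, key)] = value
    return {k: v for k, v in out.items() if k in KEPT or k in RENAMED.values()}

old = {"name": "Vauxia", "genus": "Vauxia", "phylum": "Porifera",
       "genus_authority": "Walcott, 1920", "image": "example.jpg"}
print(build_automatic_taxobox(old))
# {'name': 'Vauxia', 'genus': 'Vauxia', 'authority': 'Walcott, 1920', 'image': 'example.jpg'}
</syntaxhighlight>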
Once the editor has verified the results, the bot will replace the existing taxobox with the approved automatic taxobox.
The user will be asked to provide their username, which will be displayed in the bot's edit summary; only valid usernames will be allowed to use the tool. (This system works well at User:Citation bot and has been proposed in the other bot task request.)
I propose that during the initial testing period, only I (Smith609 (talk · contribs)) am authorised to activate the bot. Once the bot is operating as I expect, I suggest allowing other users to use the bot, with the output being scrutinized by myself (and the BRFA team?) during the trial period. During the trial period, this task will only operate on organisms for which an automatically-generated taxonomy already exists.
Discussion
- Note; this can now be previewed; so far I've only tested it on the page Mollusca, but it should work (with varying success) elsewhere. Martin (Smith609 – Talk) 19:10, 28 September 2010 (UTC)[reply]
- Oppose. What is bad with this:
- Example 1: I want to edit, for example, Vauxia, and I want, for example, to change the family of this genus. I click the "edit this page" button (http://en.wikipedia.org/w/index.php?title=Leptomitus&action=edit) and I cannot change this, because it is impossible to change it this way. I cannot change the article page by clicking "edit this page" per Wikipedia:How to edit a page, so this is a non-standard method.
- Even one of the two examples from Wikipedia:Bots/Requests_for_approval/Taxobot_2, http://en.wikipedia.org/w/index.php?title=Leptomitus&oldid=388019372, contains some errors. So even the bot and the template author are not familiar enough with this; how can thousands of wikipedians be familiar with this non-standard solution?
- There is no need to change {{taxobox}} to {{automatic taxobox}} in articles. Instead, it is easier to incorporate new features of the automatic taxobox into the taxobox, if needed.
- What is solved is how existing articles can be robotically changed to this other method. The idea of a hierarchical structure is good, but the practical implementation (using additional webpages) is bad (at least for the time being). What is not solved is how it could be easily possible (at least as easy as the actual solution in the taxobox template, which has been used for 6 years) for WIKIPEDIANS to edit the existing information.
- Only one thing has changed from Wikipedia:Bots/Requests for approval/Taxobot 1 to Wikipedia:Bots/Requests for approval/Taxobot 2: that creating this is activated by a user and performed by a bot. Nothing else has changed since the request for Taxobot 1, which was criticized, for example, for this: "there is even no discussion if "{taxobox}" should be replaced with "{automatic taxobox}"".
- There must be a solution that keeps Wikipedia something anyone can edit. If a user does not understand how User:Citation bot works (there are certainly thousands of wikipedians who are not familiar with this), then such a user will not understand how User:Taxobot works, and he/she will be able to change nothing. --Snek01 (talk) 18:27, 15 October 2010 (UTC)[reply]
- Sounds to me like these criticisms are directed at the implementation of Template:Automatic taxobox, and are not relevant to the task requested here. This template is under development, and bot requests such as this are vital steps on the route to a mature template that is intuitive to edit. Indeed, this bot's primary function is to make it easy for editors to interact with automatic taxoboxes. Until the template is in a stable and suitable state and supported by bots where helpful, it is premature to discuss its use throughout Wikipedia. Martin (Smith609 – Talk) 18:38, 15 October 2010 (UTC)[reply]
- Support although I'm interested in how the needed "Template:Taxonomy"s get created? Does the user have a chance to edit these, or does the bot assume the taxobox being replaced has correct/complete data? ErikHaugen (talk) 21:25, 1 November 2010 (UTC)[reply]
- The user is required to check and validate the data extracted from the taxobox by the bot. You can try that part yourself at tools:~verisimilus/Bot/taxobot. Martin (Smith609 – Talk) 21:53, 1 November 2010 (UTC)[reply]
- I've done the majority of the coding and am ready to begin a trial. Since the comments above are off topic, I'm marking this {{BAG assistance needed}}. Thanks, Martin (Smith609 – Talk) 05:21, 24 October 2010 (UTC)[reply]
- Approved for trial (20 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.. Okay, let's see whether this is a pracitcal implementation; as you request, just yourself please at this moment in time :) – Jarry1250 [Who? Discuss.] 17:34, 16 November 2010 (UTC)[reply]
- Thanks. I'll get testing as soon as I'm free. Martin (Smith609 – Talk) 05:52, 20 November 2010 (UTC)[reply]
- Preliminary testing has begun. Comments welcome! Martin (Smith609 – Talk) 01:22, 26 November 2010 (UTC)[reply]
- Looks good so far; I'd like to voice a preference, however, that the bot ONLY applies taxonomies to taxa where the taxonomy templates have already been created by an editor. This will prevent the accidental complications of erratic, outdated, or simplified automatic taxonomy creation. Bob the Wikipedian (talk • contribs) 04:36, 26 November 2010 (UTC)[reply]
- Absolutely; that's all that the bot will do at this point. Martin (Smith609 – Talk) 04:44, 26 November 2010 (UTC)[reply]
- It makes sense, whilst the bot is at it, to perform a little basic tidyup; it thus converts pages to use Template:Fossil range where possible (it has an error-catcher built in so that if the fossil range template generates an error, it won't be converted; here's an example); uses Template:Species list where conversion is straightforward; and adds missing authority information from the Global Names Database (example; see Patterson, D. J.; Cooper, J.; Kirk, P. M.; Pyle, R. L.; Remsen, D. P. (2010). "Names are key to the big new biology". Trends in Ecology & Evolution. 25 (12): 686. doi:10.1016/j.tree.2010.09.004. / API). Since these are all associated with automating the taxobox they seem to fall within the scope of this task; I thought it best to mention them so that they don't slip under the radar. Martin (Smith609 – Talk) 05:09, 26 November 2010 (UTC)[reply]
- Trial complete. View 20 trial edits. The bot currently checks the parsed output of the taxobox template and only makes an edit if there's a 100% match in the HTML (with some permissiveness; e.g. if a link points to a different target). This should make it impossible for the bot to cause damage. I'll look at relaxing the match once consensus emerges as to whether the template should be rolled out more broadly. Martin (Smith609 – Talk) 01:01, 30 November 2010 (UTC)[reply]
- Impressive! Biased approve. Bob the Wikipedian (talk • contribs) 03:13, 30 November 2010 (UTC)[reply]
The output looks good and the edits are user-triggered, so there aren't any issues I see. The few comments on whether this should or should not be done at all are a little irrelevant, as this is an editor-triggered tool. In the same way, editors could do this manually, just in a much more cumbersome fashion. Anyway, Approved. (Mandatory disclaimer: if in the future the community finds it unnecessary to do this, then obviously the approval is suspended.)
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Basilicofresco (talk · contribs)
Automatic or Manually assisted: auto
Programming language(s): Python
Source code available:
Function overview: Additional fixes for the task Wikipedia:Bots/Requests for approval/FrescoBot 2
Links to relevant discussions (where appropriate): Wikipedia:Bot owners' noticeboard#Wikilink simplification
Edit period(s): monthly or less
Estimated number of pages affected: 35k?
Exclusion compliant (Y/N): Y
Already has a bot flag (Y/N): Y
Function details: Wikipedia:Bots/Requests for approval/FrescoBot 2 already fixes a wide range of wikilink syntax problems and some redundancies. I plan to extend it a bit with:
- Simplification of selected piped wikilinks into the "shortened sort-of-piped link" form (a sketch follows these examples). This substitution has to improve the intuitiveness and readability of the wikitext; for this reason I will be pretty conservative and will convert only a few popular suffixes:
- suffix "s", [[architect|architects]] --> [[architect]]s
- suffix "es", [[virus|viruses]] --> [[virus]]es
- suffix "n", [[Croatia|Croatian]] --> [[Croatia]]n
- suffix "ns", [[Sardinia|Sardinians]] --> [[Sardinia]]ns
- suffix "an", [[Europe|European]] --> [[Europe]]an
- suffix "ian", [[Egypt|Egyptian]] --> [[Egypt]]ian
- suffix "ic", [[logarithm|logarithmic]] --> [[logarithm]]ic
- suffix "ist", [[steel guitar|steel guitarist]] --> [[steel guitar]]ist
- suffix "ern", [[Middle East|Middle Eastern]] --> [[Middle East]]ern
- In this way I will avoid creating not-so-intuitive shortcuts, e.g.:
- I will not convert [[Mi-171|Mi-171Sh]] --> [[Mi-171]]Sh
- I will not convert [[Rødby|Rødbyhavn]] --> [[Rødby]]havn
- I will not convert [[comp|compositions]] --> [[comp]]ositions
- I will not convert [[Windows 2.0|Windows 2.03]] --> [[Windows 2.0]]3
- I will not convert [[Siddha|Siddhar]] --> [[Siddha]]r
- Title linked in text (Check Wikipedia #48). Wikilinks to the current page will be de-linked (example).
- [[pagename]] --> pagename
- [[pagename|label]] --> label
- Additional syntax fixes:
Please note, these fixes will be added to this collection.
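For illustration only, here is a minimal Python sketch of the wikilink simplification (#1) and the self-link de-linking (#2) described above, assuming the page wikitext is already loaded as a string. The suffix whitelist comes from the list in this request; the function names and regexes are assumptions for the sketch, not FrescoBot's actual code.

```python
import re

# Whitelisted suffixes from point 1 above; anything else is left untouched.
SUFFIXES = ("s", "es", "n", "ns", "an", "ian", "ic", "ist", "ern")

def simplify_piped_links(text):
    """Rewrite e.g. [[architect|architects]] as [[architect]]s (point 1)."""
    def repl(match):
        target, label = match.group(1), match.group(2)
        for suffix in SUFFIXES:
            if label == target + suffix:
                return "[[%s]]%s" % (target, suffix)
        return match.group(0)  # not a whitelisted suffix: keep the piped link as-is
    return re.sub(r"\[\[([^\[\]|]+)\|([^\[\]|]+)\]\]", repl, text)

def delink_self_links(text, page_title):
    """Replace [[pagename]] / [[pagename|label]] with plain text on that page (point 2)."""
    pattern = re.compile(r"\[\[\s*%s\s*(?:\|([^\[\]|]+))?\s*\]\]" % re.escape(page_title))
    return pattern.sub(lambda m: m.group(1) or page_title, text)

# Examples:
#   simplify_piped_links("[[virus|viruses]] spread")            -> "[[virus]]es spread"
#   delink_self_links("About [[Logarithm|logs]]", "Logarithm")  -> "About logs"
```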
Discussion
Comments? Are there any exceptions for #2? Are there other link fixes that I should handle? -- Basilicofresco (msg) 06:15, 20 September 2010 (UTC) Magioladitis asked for my help with #1 because FrescoBot is faster than AWB bots. After catching up with the backlog he doesn't expect many new links created per week. -- Basilicofresco (msg) 10:31, 20 September 2010 (UTC)[reply]
- For additional syntax fixes, you can add [[Brahin (meteorite_|Brahin]] --> [[Brahin (meteorite)|Brahin]] since that is a plausible typo. — HELLKNOWZ ▎TALK 18:19, 22 September 2010 (UTC)[reply]
- Yep, good idea. -- Basilicofresco (msg) 20:48, 22 September 2010 (UTC)[reply]
- I see little benefit in part 1; those are minor problems that hardly merit their own bot looking after them. They could be integrated into AWB's general fixes, though (perhaps they already are). Ucucha 20:52, 22 September 2010 (UTC)[reply]
- Please take a look at Wikipedia:Bot owners' noticeboard#Wikilink simplification. The main reason for this request is that AWB is not fast enough to reduce the whole number of redundant links. -- Basilicofresco (msg) 21:14, 22 September 2010 (UTC)[reply]
- Why is this urgent? Ucucha 21:18, 22 September 2010 (UTC)[reply]
- It is not actually urgent, but speed matters. Here are some numbers:
- within enwiki-20100730 file dump there were 25142 articles in need of #1 replacement;
- within enwiki-20100916 file dump there are 24156 articles in need of #1 replacement.
- It means -4% in about 6 weeks. How many years will it take AWB users to reduce the number of these articles by 50%? -- Basilicofresco (msg) 16:51, 23 September 2010 (UTC)[reply]
- And in that time SmackBot was AWBing about 3-600 articles a day, with Smack sleeping I would bet the number is actually going up. Rich Farmbrough, 16:17, 3 October 2010 (UTC).[reply]
- So what? If this were a major problem, I could see your point, but I don't see how it is one. Ucucha 17:00, 23 September 2010 (UTC)[reply]
- I do not think that this needs to be a major problem to warrant a bot operation. There are many small tedious fixes that can be performed as well. Besides, these additions build on previous tasks, and eventually we may have a bot that does a long list of corrections. I have nothing against small steps when they lead to overall improvement. — HELLKNOWZ ▎TALK 18:02, 23 September 2010 (UTC)[reply]
- This task is just an enhancement of what FrescoBot is already doing. [[foo|foo]] is already fixed by FrescoBot and it's an approved task. I agree that FrescoBot could do many other tasks at the same time, but at some point we have to find a balance between what is "too much" and what is "too few". -- Magioladitis (talk) 20:20, 23 September 2010 (UTC)[reply]
- For me code cleanup is an important matter. Many other errors (unbalanced brackets, etc.) arise because wikicode sometimes isn't easy to check with the naked eye. -- Magioladitis (talk) 22:45, 23 September 2010 (UTC)[reply]
- AWB already does some or most of it. If FrescoBot is going to do a clean-up run specifically on the non-broken but improvable links, then I would say use AWB with genfixes, tagging and unicodifying. For broken tags, approval for a trial run is a no-brainer. Rich Farmbrough, 16:17, 3 October 2010 (UTC).[reply]
- Yes, AWB does most of it. The good thing with FrescoBot is that it is faster and it already has approval to do most of the job. I thought this would be a speedy approval, but it is taking some time. -- Magioladitis (talk) 00:32, 4 October 2010 (UTC)[reply]
So if there are no objections we can go for a non-BAG close? :) -- Magioladitis (talk) 08:26, 13 October 2010 (UTC)[reply]
- Approved. MBisanz talk 23:47, 14 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Operator: H3llkn0wz (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): WikiSharpAPI (C#)
Source code available: Not now; I will make the API available when it's actually usable.
Function overview: Reference and citation completion
Links to relevant discussions (where appropriate): User:RjwilmsiBot is probably the most relevant bot, performing an almost identical task.
Edit period(s): Continuous (when I'm online)
Estimated number of pages affected: All encountered pages with issues
Exclusion compliant (Y/N): Y
Already has a bot flag (Y/N): Y
Function details:
1) Add a title and a bot comment to bare referenced external links (a minimal sketch of this step follows the function details):
- <ref>[http://news.bbc.co.uk/1/hi/technology/8759590.stm]</ref> → <ref>[http://news.bbc.co.uk/1/hi/technology/8759590.stm Users report 'fault' on iPhone 4 <!-- Bot generated title -->]</ref>
2) Visit sites from a known, manually selected list and fill citations with available missing info. This includes visiting the archived copy if the original is unavailable.
2.a) Remove redundant title labels:
- "BBC News - Users report 'fault' on iPhone 4" → "Users report 'fault' on iPhone 4".
2.b) Add |title=, |date=, |author=, |publisher=, |work=, |location=, etc. that can be unambiguously identified from sites, such as [58] meta-data.
2.c) Change {{Cite web}} to {{Cite news}}, {{Cite journal}} and similar where appropriate.
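For illustration, here is a minimal Python sketch (standard library only) of step 1 and step 2.a above. The SITE_PREFIXES table, the function names and the 100 kB read limit are assumptions for the sketch, not the bot's actual behaviour.

```python
import re
import urllib.parse
import urllib.request

# Assumed hand-maintained table of redundant site-name labels to strip (step 2.a).
SITE_PREFIXES = {"news.bbc.co.uk": "BBC News - "}

def fetch_title(url):
    """Download a page and pull the contents of its <title> tag, if any."""
    with urllib.request.urlopen(url, timeout=20) as resp:
        html = resp.read(100000).decode("utf-8", errors="replace")
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    return match.group(1).strip() if match else None

def complete_bare_ref(url):
    """Turn <ref>[url]</ref> into <ref>[url Title <!-- Bot generated title -->]</ref> (step 1)."""
    title = fetch_title(url)
    if not title:
        return None  # leave the bare reference untouched
    prefix = SITE_PREFIXES.get(urllib.parse.urlparse(url).netloc)
    if prefix and title.startswith(prefix):  # step 2.a: drop the redundant site label
        title = title[len(prefix):]
    return "<ref>[%s %s <!-- Bot generated title -->]</ref>" % (url, title)
```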
Discussion
- This task seems pretty similar to a task once performed by User:DumZiBoT. Read the documentation at User:DumZiBoT/refLinks. I'd urge you to read through the operator's talk page to see the problems he ran into. Tim1357 talk 21:22, 13 September 2010 (UTC)[reply]
- Thanks for the helpful pointer. I think I'll go through all the archives and compile a list of things that need to be taken into account, then possibly do a dry run. — HELLKNOWZ ▎TALK 22:01, 13 September 2010 (UTC)[reply]
- There seems little value in you processing the news sites that RjwilmsiBot already covers - see User:Rjwilmsi/CiteCompletion. Adding archive links or covering other non-news sites would certainly be useful. User:ThaddeusB has/had something that deals with archive links. Rjwilmsi 07:43, 19 September 2010 (UTC)[reply]
- I'm already archiving links. I was going to start with video game news sites and such, so that it does not overlap with RjwilmsiBot. — HELLKNOWZ ▎TALK 10:50, 19 September 2010 (UTC)[reply]
- I would suggest sharing information with Rjw and others. Since you use py and he uses AWB, a central repository of architecture-neutral information would potentially be usable by all. Rich Farmbrough, 03:05, 2 October 2010 (UTC).[reply]
- Yes, it would; I will make my lists and regexes available when I have coded the thing. Also, I am using C# with my own framework, not py, so the same as AWB except stand-alone. — HELLKNOWZ ▎TALK 11:22, 2 October 2010 (UTC)[reply]
- Any update on Rich's suggestion? MBisanz talk 05:36, 27 October 2010 (UTC)[reply]
- I have not made any content yet, so there is nothing to share. The main task is taking too much fiddling, so I may need to put this on hold for a while. I am aware of the BRFA and the accompanying issues though. — HELLKNOWZ ▎TALK 11:45, 27 October 2010 (UTC)[reply]
Request Expired. As usual, if you solve those fiddly issues and want to reactivate this request, feel free to simply undo this edit and relist it. Anomie⚔ 02:51, 24 November 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Denied.
Operator: Ganeshk (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): AutoWikiBrowser and CSVLoader
Source code available: Yes, available at WP:CSV and WP:WRMS
Function overview: To create gastropod species and genera articles based on data downloaded from the WoRMS database. The bot will run under the supervision of the Gastropods project.
Links to relevant discussions (where appropriate):
Edit period(s): Weekly
Estimated number of pages affected: 500 per week
Exclusion compliant (Y/N): N/A
Already has a bot flag (Y/N): Y
Function details: The bot will create species and genera articles under the supervision of WikiProject Gastropods. Here are the steps:
- Bot operator will propose a new family that needs creating on the project talk page.
- Gastropod project members will approve the family and provide an introduction sentence. Here is an example.
- Bot operator will download the species data from WoRMS using AutoWikiBrowser and WoRMS plugin. Only accepted species will be downloaded.
- Bot operator will run AutoWikiBrowser with the CSV plugin to create the articles using a generic stub template, the project-provided introduction sentence and the data downloaded from WoRMS (a minimal sketch of this step follows below).
- Bot operator will maintain a log on the User:Ganeshbot/Animalia/History page.
There are approximately 10,000 species articles that are yet to be created.
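For illustration, here is a minimal Python sketch of what the stub-creation step could look like, assuming a downloaded CSV with genus, species, authority, family and status columns and a project-supplied introduction. The column names and stub layout are assumptions for the sketch; the actual run uses AWB with the CSVLoader plugin and the project's agreed template.

```python
import csv

# Assumed stub layout; the real template is agreed with WikiProject Gastropods.
STUB = """{{{{Taxobox
| regnum = [[Animalia]]
| phylum = [[Mollusca]]
| classis = [[Gastropoda]]
| familia = [[{family}]]
| genus = ''[[{genus}]]''
| binomial = ''{genus} {species}''
| binomial_authority = {authority}
}}}}
'''''{genus} {species}''''' is {intro}

==References==
{{{{Reflist}}}}

[[Category:{family}]]
{{{{gastropod-stub}}}}
"""

def build_stubs(csv_path, intro):
    """Yield (title, wikitext) pairs for each accepted species row in a WoRMS export."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if row.get("status") != "accepted":
                continue  # only accepted names are downloaded/used
            title = "%s %s" % (row["genus"], row["species"])
            yield title, STUB.format(intro=intro, genus=row["genus"], species=row["species"],
                                      family=row["family"], authority=row["authority"])

# Example intro (project-supplied): "a species of sea snail, a marine gastropod
# mollusk in the family Conidae, the cone snails."
```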
Note: This bot was approved to create a smaller set of similar stubs in March 2010. This request is for getting approval for all new families approved by the Gastropods project.
Discussion
Note: For anyone new to the discussion, in Ganeshbot 4 (later amended at Wikipedia talk:Bot Approvals Group/Archive 7#Wrong way of the close a BRFA) this bot was approved to create about 580 stubs for the genus Conus. Despite stating that "about 580 stubs will be created (nothing more)",[59] Ganeshk was somehow under the impression that further approval was not required to create additional articles. When this was brought to the community's attention at various places, including WT:Bots/Requests for approval#Wikipedia:Bots/Requests for approval/Ganeshbot 4, Ganeshk stopped creating the articles without approval. Anomie⚔ 02:09, 19 August 2010 (UTC)[reply]
- I'm almost positive something like this involving plant articles or insects cratered badly when it was found the source database had errors. Are we sure this database is reliable to use? MBisanz talk 01:56, 19 August 2010 (UTC)[reply]
- From what I hear from members of the Gastropods project, WoRMS has the best experts in the world. Their database is not infallible, but overall beneficial. — Ganeshk (talk) 02:05, 19 August 2010 (UTC)[reply]
- You're probably thinking of anybot, although there were other issues that contributed to that situation becoming a major clusterfuck. Anomie⚔ 02:16, 19 August 2010 (UTC)[reply]
Needs wider discussion. I see you already informed WikiProject Gastropods. Please advertise this request at WP:VPR to solicit input from the wider community. Anomie⚔ 02:09, 19 August 2010 (UTC)[reply]
- I have posted a note at the VPR, Wikipedia:Village pump (proposals)#Species_bot. — Ganeshk (talk) 02:39, 19 August 2010 (UTC)[reply]
- I understand your concern, MBisanz, but in my understanding no database is completely invulnerable to mistakes. Though this may be contested, WoRMS is a somewhat reliable source, considering it is gradually revised by several specialists. Information provided by WoRMS may or may not change with time. It evolves, as does Wikipedia. We from the Gastropods project aim to closely observe those changes, so that the information contained in the gastropod articles is true to its source, at least. I recognize that a large number of stub articles created in an instant can make things difficult, mainly because we are just a few active members, but then again I think this bot is very beneficial to the project, if used with caution. --Daniel Cavallari (talk) 02:30, 19 August 2010 (UTC)[reply]
How many articles can the gastropod project check with just a few active members? The bot created about 100 stubs a day for the past few months, for a total of 15000 stubs? Have these 15000 stubs been checked? I checked a few and found concerns. I volunteered to point out problems if I could speak directly to the gastropod family experts, but I was insulted by a gastropod member for my poor spelling, repeatedly insulted. I think the inability to work with other members of the community, the unwillingness to accept criticism, and the tendency to focus on personal insults over taxonomic issues spell disaster for this bot. The bot is either continuing to run or is being operated as an account assistant by its operator; this also makes it hard to know what the bot is doing. The operator will have to have all rules of bot operation explicitly outlined, as he took his own statement of "580 articles (nothing more)" to mean 15000 articles. What other bot rules will be misinterpreted? —Preceding unsigned comment added by JaRoad (talk • contribs) 03:05, 19 August 2010 (UTC)[reply]
I am also concerned that gastropod members are using what they consider a "somewhat reliable" resource that is evolving through time like Wikipedia. Wikipedia is not considered a reliable source for Wikipedia articles. Writers are expected to use reliable, stable, and non-primary sources, not "somewhat reliable" sources. —Preceding unsigned comment added by JaRoad (talk • contribs) 04:20, 19 August 2010 (UTC)[reply]
- JaRoad, This link will show you that the bot has actually stopped creating articles as of 8/15/10. — Ganeshk (talk) 04:29, 20 August 2010 (UTC)[reply]
- If quality control is being questioned, I suggest that members of the gastropod project agree on an acceptable percentage of defective articles generated. Then, select and examine random articles that were produced by Ganeshbot. Determine the percentage of defectives and take it from there. Anna Frodesiak (talk) 05:29, 19 August 2010 (UTC)[reply]
- Comment Although my general reaction is very much against bot-creation of articles (I think it is crazy), I was impressed with the couple of species articles I looked at. However, I know little to nothing about gastropods (or bots). It is dismaying that the original BAG approval was so badly misunderstood: it seemed quite clear to me. I wonder, from a broader point of view, whether this is a wise thing to be doing at all. What happens when the WoRMS[60] database is enhanced or corrected? How do such changes get here? A content fork on WP is not helpful. What about changes in the WP articles: can relevant changes be fed back to WoRMS? What do the denizens of WoRMS think about all this? Similar thoughts for WikiSpecies[61] (FAQ[62]). I have seen some discussion about why this data should be going into WP rather than WikiSpecies, but since the latter is supposed to drive the former I don't understand the rationale for the data coming to WP first. What do the WikiSpecies folks think? Anyway, just my thoughts. Thincat (talk) 10:41, 19 August 2010 (UTC)[reply]
- With regard to your question about how changes on WoRMS can get here, I have plans to write a bot that will compare Wikipedia with WoRMS and list articles that will need updating. I intend to file a BRFA for that in the future. WoRMS was happy to hear that Wikipedia was using their database as a 'trustworthy' taxonomic data source. We are listed in the user list for their web services functionality. — Ganeshk (talk) 00:00, 20 August 2010 (UTC)[reply]
Support As I have written previously, there is unified support for this among WikiProject Gastropods members. The bot has been running since March 2010 without any problems. I would like to thank User:JaRoad, who found a "mistake" affecting 6 articles (or at most up to 10 additional articles, and some think it was not even a mistake) in a highly specialized theme in the family Category:Velutinidae. The "mistake" was made by one of the WikiProject Gastropods members. It was made neither by the bot nor by the bot operator. We have remedied it and we have taken precautions. The bot specializes in creating articles on extant (living) marine gastropod species, which is only a small part of the project. The bot works systematically according to its operator's instructions. Additionally, the bot works in cooperation with WoRMS http://www.marinespecies.org/users.php (see Wikipedia listed there). That also guarantees automatic or semi-automatic updates in the future, if necessary. Maybe it seems to other Wikipedians that nobody takes care of those generated articles. That would be an incorrect prejudice. See for example the history of "List of Conus species", where it is explicitly written "all species checked". For example, last month one user uploaded ~1000 encyclopedic images and added them mostly to articles started by this bot. This bot is doing exactly the same thing that human members of WikiProject Gastropods would do. There are no known real issues with this bot. Feel free to formally approve it. Thank you. --Snek01 (talk) 13:21, 19 August 2010 (UTC)[reply]
Support The core of the gastropod team stands by the accuracy of the articles, and so do I. I watched as the first batch was prepared. It was meticulously fact-checked by JoJan and others before the bot generated the stubs. The bot is an asset to the project, and ought to continue. Furthermore, the introductory statement to this page has an objectionable tone of indictment. Anna Frodesiak (talk) 13:46, 19 August 2010 (UTC)[reply]
Support I find the bot stubs to be very good, certainly as good as (or better than) stubs that are created manually by project members or other contributors. We are using the most up-to-date system of taxonomy. And yes, as Anna says, we reviewed the process very carefully over many weeks before it was put into effect, because we understand the possible dangers of mass bot generation of stubs. This is not our first experience with bot-generated stubs; a good number were created back in 2007. Thanks, Invertzoo (talk) 17:10, 19 August 2010 (UTC)[reply]
Oppose Due to the misunderstanding, there are now fifteen thousand stub articles about slugs and snails, largely unchecked, and for which there is frequently no information to be added. The aim of the Wikiproject is to have a similar article for all 100,000 entries in the database. I cannot personally see any reason for this. We should have articles about gastropods for which more information is available, where the article can be fleshed out and more content added. I share the concern about the WoRMS database, and do not think that there is any need to reproduce it in Wikipedia. Elen of the Roads (talk) 18:09, 19 August 2010 (UTC)[reply]
- All of them are checked (by me or another project member) prior to their creation. By the way, the task of the bot is to create fewer than 19,000 articles (according to the information at Wikipedia_talk:WikiProject_Gastropods/Archive_3#More effective importing), of which the majority are already done. There is only a need to finish the task in progress. --Snek01 (talk) 19:17, 19 August 2010 (UTC)[reply]
- Yes, but the task of the bot was only to create 600 articles, not nineteen thousand of the things. The bot was allowed to operate on the basis that the Wikiproject would expand the entries - and it was only supposed to create entries on Conus sp., which are rather better documented than most. "I have checked prior to creation" does not really address the requirement to check after creation and add further information. There is no reason to duplicate the WoRMS database in Wikipedia. Elen of the Roads (talk) 21:44, 19 August 2010 (UTC)[reply]
- Elen, I think Wikipedia has the potential to realize E. O. Wilson's vision of creating an Encyclopedia of Life, "an electronic page for each species of organism on Earth", each page containing "the scientific name of the species, a pictorial or genomic presentation of the primary type specimen on which its name is based, and a summary of its diagnostic traits.".[63][64] If the bot is creating accurate articles on species that have been reviewed at WoRMS (please note that the bot only downloads records that are marked as accepted), what is the harm in having a page for that species on Wikipedia? The page will develop over time as people come in and add additional information. The bot gives the page a good starting point. — Ganeshk (talk) 00:20, 20 August 2010 (UTC)[reply]
- Elen, but we are expanding those stubs and checking them when needed (the only thing that usually needs checking are wikilinks, and only when they link to homonyms). For example, Conus ebraeus and Conus miliaris were expanded nicely, as were other ones. Even your presumption "There is no reason to duplicate the WoRMS database in Wikipedia." is wrong. If there is encyclopedic content somewhere that is useful for Wikipedia, then we will normally duplicate it; for example, we duplicate some images from Flickr on Wikimedia Commons, just as we duplicate encyclopedic text content from any other free source. Look for example at the article Velutina velutina, see how it is "duplicated" from WoRMS, and tell me what you are unsatisfied with. You have written "I cannot personally see any reason for this." Is the reason that I would not be able to make this Start-class article (or articles) without Ganeshbot enough for you? Even if you still do not see any reason for this, you do not need to disagree with it, because other people consider it not only reasonable but also necessary. I have started about ~2000 articles by myself and I am not a bot. Of course I have also expanded many more. I must say that starting them was quite tiresome sometimes. I would like to also enjoy expanding articles, as you do. Would you be so generous as to allow me to focus on expanding articles on any gastropod instead of starting them, please? --Snek01 (talk) 00:50, 20 August 2010 (UTC)[reply]
- Elen, I am sorry, but I don't understand what you mean when you say about these new stubs that "there is frequently no information to be added". On the contrary, I think every single one of them "can be fleshed out and more content added". That is the whole purpose of creating them, so that we can easily add images or more info with more references. Invertzoo (talk) 02:58, 21 August 2010 (UTC)[reply]
- If that's the case, you won't object to the bot creating articles only at the pace that you can flesh them out, and you'll be OK with finishing fleshing out the 15,000 it's already created before it's allowed to create any more. Elen of the Roads (talk) 11:07, 22 August 2010 (UTC)[reply]
- You yourself are very much against the idea of a large number of stubs, I can see that, but as far as I know, there does not appear to be a WP guideline against stubs. And, unlike many other kinds of stubby articles, these species stubs have a fact-filled taxobox and intro sentence, as well as a decent reference, so they are actually already quite rich in information, despite being physically short still. It may not seem so to you, but these stubs are already quite useful to a reader who is curious to find out more about a certain species. I also think you will find that throughout most of Wikipedia's biology coverage of individual species, especially those of invertebrates and lower plants, stubs are the norm rather than the exception. At the Gastropods project we have been creating rather similar stubs by hand for a very long time without any objections. Thanks for your interest, Invertzoo (talk) 15:42, 24 August 2010 (UTC)[reply]
Support – The bot doesn't do anything other than what we, the members of the project, have been doing manually all these years. The Gastropoda is one of the largest taxonomic classes in the animal world; without a bot, we're facing an impossible task. The data from WoRMS are very reliable, made by the best experts in the world. You won't find a better expert anywhere to check these data, so who do you want to check them? As to the so-called mistake in Velutina, I advise the community to read the discussion at Wikipedia talk:WikiProject Gastropods#Phalium articles. The integrity of the content generated by the bot is not at stake; the bot permission is the real issue. This bot has saved the members of this project perhaps thousands and thousands of hours of work, generating all those new articles. Once an article exists, it is much easier to add information. I'm in the process of uploading to the Commons about 2,500 photos of shells of sea snails from an internet source with a license suitable for the Commons. This is an enormous job that can't be done by a bot, because each name has to be checked to see whether it is a synonym. I cannot insert these photos into Wikipedia unless there is already an article about the genus or the species in question; otherwise, this would take me years if I have to create all those articles. For most people consulting Wikipedia about gastropods, and certainly for shell collectors, the photo is the most important part of the article. The text is more a matter for experts or knowledgeable amateurs who understand what a nodose sculpture or a stenoglossan radula represents. JoJan (talk) 18:57, 19 August 2010 (UTC)[reply]
Support – As I see it, the bot is not a mere addendum, but a necessity. Taking into account the number of species described, we're dealing with the second most diversified animal phylum, the phylum Mollusca, and its largest class, the class Gastropoda. There are tens of thousands of extant and fossil gastropod species, and creating each one of those stubs would be an inhuman task... That's why we need a bot. WoRMS is not absolute, but it is one of the most reliable online databases available. I understand that, with proper supervision and due caution, no harm will come out of Ganeshbot. Daniel Cavallari (talk) 00:10, 20 August 2010 (UTC)[reply]
Oppose as currently implemented. The lack of prior approval and the poor communication skills of the bot operator and the project will continue to be a problem. The bot operator has now posted a list of hundreds of problematic articles, various types of synonyms that should be redirects rather than articles. The project members could have spent time looking for problems and readily found these, instead of fighting to protect the bot. It would have established a Wikipedia-beneficial future method for dealing with bad bot articles. These articles need to be fixed now; no bad taxonomic article should sit on Wikipedia while editors know it is bad. The bot operator created no plan for fixing these articles. Neither did the wikiproject.
In my opinion a bot set up to scour multiple species databases at the request of a human editor could greatly benefit writers of species articles. The human editor could verify a dozen species in an hour or two, then ask the bot to create just the formatted article with taxonomy box, categories, and stub tags. This could save the human editor many hours of tedious work. The bot could get species from algae, molluscs, plants, dinosaurs. It could even be multiple bots, with a central page for requests. This would be the best of both worlds: more articles, decided by humans, tedium handled by bots. JaRoad (talk) 01:41, 22 August 2010 (UTC)[reply]
- Let me just say two things in response to JaRoad's comments. Firstly his assessment of our "communication skills" is based solely on his current personal perspective over the last several days, and as such it is arguably not at all relevant to the bot issue. Secondly and more importantly: if you talk to any invertebrate zoologist who is actually a taxonomist, he or she will tell you that articles or entries that use what may or may not be a synonym name are an extremely common occurrence, not only here on Wikipedia but throughout all writings on biological taxa, especially at the genus and species level. I think you will find this same issue within every large taxon of invertebrates that has not been exhaustively studied, whether the articles or entries are or were created by humans or by a bot. I would not even call these "bad" articles or "bad bot articles". The nomenclatural issues on many species of gastropods are extremely complex. First rate experts within the field very often disagree in quite polarized ways as to what the "correct" name should be for a species of gastropod. I can give you several examples if you like. There really isn't a way to simply "verify" species names as JaRoad suggests. Thank you, Invertzoo (talk) 03:13, 22 August 2010 (UTC)[reply]
The topic is the bot not me. Taxonomy is not the topic either. Editors make decisions about species validity on wikipedia. My suggestion is that only editors make these decisions. Although my suggestion is a counter proposal to this bot, this bot could make a useful tool as part of this counter proposal. I have not suggested any way to simply verify species names. JaRoad (talk) 04:49, 22 August 2010 (UTC)[reply]
- No, I am sorry, but you are quite wrong on this point, which is indeed about taxonomy and nomenclature. Editors on Wikipedia must not and can not make decisions about which species are valid; that counts as Original Research, which is not allowed here. All we can do is to cite a reliable reference to back up the use of a name as it is currently applied to a certain morphotype. The validity of a species and a species name is a weighty scientific opinion, which can only be determined by an expert researcher who knows the relevant historical primary literature well, who has consulted the relevant type descriptions in that family, and who has examined the actual type material for all of the claimed taxa, by visiting the various museums throughout the world that have the types of the relevant species and supposed synonyms and carefully examining that material. Invertzoo (talk) 15:24, 22 August 2010 (UTC)[reply]
Yes, they do. Wikipedia editors decide that WoRMS is a reliable resource and that its listing of species versus synonyms is going to be used; therefore WoRMS's listing of accepted names is a source for valid species. Then, if WoRMS is in disagreement with another secondary or tertiary source, the editor decides which of the two sources is the correct one for the name of the article, and how and why the other source earns a mention as to the controversy rather than providing the name for the article. Mollusc editors have already decided that the chosen taxonomists on WoRMS will be the deciders of species names on Wikipedia; hence you have chosen to confer validity on the WoRMS set of species names, not all of which are accepted 100% by all mollusc taxonomists. This is done for all controversial species of any type of organism on Wikipedia. Maybe you only create articles about noncontroversial species.
Back to the suggestion I raised. This removes the wholesale stamp of validity on one database and returns it to where it belongs: to the editors creating the articles through secondary and tertiary resources. JaRoad (talk) 16:18, 22 August 2010 (UTC)[reply]
- This surmise of yours is wrong. The decision about articles always rests with human editors, who try to independently evaluate the available information. They then make their own human decisions when one source is in disagreement with another. Things are being done exactly as you wish them to be done. --Snek01 (talk) 09:43, 23 August 2010 (UTC)[reply]
Arbitrary section break
To summarize the discussion so far:
- WikiProject Gastropods fully intends to create all these stubs anyway, and in much the same manner as the bot does. The bot just saves the tedium of actually copying and pasting the infobox and such.
- There is some concern over the accuracy of the WoRMS database, but it has been contended that the database is populated and actively maintained by experts in the field and thus should be reliable. Is there any reason to doubt this?
- There is concern that the 15000 already-created stubs have not been reviewed by the project. Is there work on reviewing this backlog, and if so what is the progress? Is there any reason not to accept the suggestion that bot creation of more articles should wait until that backlog is taken care of?
- Note that that does not mean this BRFA should be postponed until that time, just that a condition of approval be "the bot will not start creating more articles until the existing backlog is taken care of".
- There is some concern that, as gastropod classification is changed and species are merged, no one will bother to update the many stubs created here. Is this a legitimate concern? Is this being considered?
- There is some concern that the classification system used by WoRMS is not generally accepted by mainstream scientists in the field. Is this a legitimate concern? Even if so, does the bot creation of these articles actually prevent proper weight being given to other mainstream classification systems?
Did I miss anything? Anomie⚔ 16:17, 24 August 2010 (UTC)[reply]
A few opinions:
- "...thus should be reliable. Is there any reason to doubt this?..." Again, why not do what a factory does: check random samples and set a standard for an acceptable percentage of faulty articles. Or, at least, figure out the magnitude of this problem within the 15,000 articles. We might be talking about only 30 articles. (A minimal sampling sketch follows this list.)
- Tag specific groups of articles with an incertae sedis template that states something like "This is a group/clade/family of which the taxonomy may be in flux.."
- Establish a plan for the very valid concern that classifications WILL change.
- Keep producing articles. Incoming content and images will otherwise have nowhere to land.
- So, is this a debate over WoRMS and their people, or the endemic flux of the whole class? If it is the latter, then we should wait 30 years before producing stubs. We know that's not going to happen. So, if it is the latter, then produce the stubs, and work around the problem.
- Anna Frodesiak (talk) 01:01, 25 August 2010 (UTC)[reply]
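For illustration, here is a minimal Python sketch of the random-sampling idea in point 1 above, assuming a list of bot-created titles and a reviewer recording pass/fail results; the sample size and threshold are placeholders, not an agreed standard.

```python
import random

def sample_for_review(bot_created_titles, sample_size=100, seed=0):
    """Pick a reproducible random sample of bot-created stubs for manual checking."""
    rng = random.Random(seed)
    return rng.sample(bot_created_titles, min(sample_size, len(bot_created_titles)))

def defect_rate(review_results):
    """review_results: dict mapping title -> True if the reviewer found the stub faulty."""
    if not review_results:
        return 0.0
    return sum(1 for faulty in review_results.values() if faulty) / len(review_results)

# If defect_rate(...) exceeds the agreed threshold (say 0.01), pause further bot runs.
```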
- I am not at all clear as to what is supposed to constitute a "faulty article". To my mind, not that they are absolutely perfect (is there such a thing?), but the great majority of all of our stubs are currently at a (relatively) good level of correctness, bearing in mind how crazy and how vast the field of gastropod malacology is, and this level of correctness applies to both those that stubs that were made by hand and those that were produced by automation. Such synonym articles as currently exist can not really be considered "faulty" because the information is completely verifiable, even though it may not represent some kind of ultimate biological truth (if there even is such a thing). The supposed error in the few Velutina stubs is arguably not an error at all. The set up of each family is checked before the bot is run. If we are going to demand 100% perfection in accuracy in all stubs, or perhaps in all articles in the whole encyclopedia, then most work on Wikipedia will grind to a halt. We certainly do agree that it turned out the bot was not authorized to create so many stubs, and this is unfortunate, but almost all of us at the Project had no idea that the authorization was lacking. I feel it is important not to impose some kind of punitive demands as "retribution" for what was a genuine misapprehension. Thanks for your patience and understanding, Invertzoo (talk) 04:13, 26 August 2010 (UTC)[reply]
- The WikiProject is being asked to take some ownership of the issue, and to give a plausible assurance of planning and quality checking, in the terms outlined by Anna Frodesiak. "Faulty" means that when a clueful human reads and digests the article, including checking sources, that errors or strong defects are noticed. Not just a quick "that looks good", but a thoughtful appraisal of whether the article is sound and warrants inclusion in Wikipedia (Is it sufficient as it stands? Would it need significant improvement to merit being an article? How likely is it that thousands of such articles would ever be improved? Instead of articles, would the topics be better handled some other way, such as a list? Is it likely that classifications will change? How could that feasibly be handled?). Johnuniq (talk) 07:18, 26 August 2010 (UTC)[reply]
- Thank you Johnuniq for a very clear and cogent message that is also constructive and helpful in tone; that was a very welcome contribution to the discussion. Yes, the project can certainly set something up along the lines that Anna and you have suggested in terms of checking. Just so you know, Daniel and I for the last year have made our way through 6,000 of the older pre-existing stubs (many machine made dating from 2007, and many handmade from 2004 onwards) updating those stubs and fixing them up to reach a better quality and a standardized format. That work has included updating the taxonomy using the most recent overall system and many other improvements. So two of us at least are already used to working for a year with one approach to quality control. If you can give the Project some time to work out what would be the best system to check new stubs and the best system for updating taxonomy and nomenclature, and who will do what, that would be good. Unfortunately I am currently on vacation (until September 6th), so I cannot spare anywhere near as much time on here each day as I would at home. Best wishes to all, Invertzoo (talk) 16:23, 26 August 2010 (UTC)[reply]
- There are no known real issues with this bot. The generated stubs are useful, complete and valuable as they are. Nobody has provided evidence of any problem.
- Nothing needs to be done with the generated stubs. Normal continuous checking for taxonomic updates would be fine for every species article, whether human-created or bot-created, but it is not necessary.
--Snek01 (talk) 00:26, 27 August 2010 (UTC)[reply]
- By "faulty article" I mean a small error in the taxobox or such. That's all. After all, these stubs usually contain only a single sentence stating that the subject is a gastropod, and what family it is etc. Simple.
- If 1 out of 1,000 stubs gets something wrong in the taxobox, I do not see that as a reason to stop the bot. It is doing more good than harm. Wikipedia must have an acceptable margin for error. I think, upon examination, that gastropod articles have fewer errors than general articles.
- Johnuniq wonders if such simple articles are worth existing if they consist of so little. Each species needs to be represented, even if only visited once a year. Articles get drive-by improvements from the large body of occasional users. The sooner Wikipedia has all species represented the better. The world needs a comprehensive, centralized database. I'm thinking of the state of things in 10 years. Let's get critical mass. This whole problem of conflicting species info is related to lack of centralization.
- I would like to hear what Ganeshk says about bots handling sweeping changes to groups of articles when classifications change.
- Also, it would be nice to see an automated system for checking articles, if necessary. Anything to assist with or avoid manual checks.
- The bottom line for me, is, if we deem WoRMS a good source within a reasonable margin of error, create all 100,000 articles, and deal with problems en masse with bots.
- Finally, any comment on my suggestion for an incertae sedis template? Anna Frodesiak (talk) 01:35, 27 August 2010 (UTC)[reply]
- Anna Frodesiak, I am slightly concerned by your "the world needs a comprehensive, centralised database". You do realise that Wikipedia cannot fulfil this function (Wikipedia does not consider itself a reliable source). Elen of the Roads (talk) 09:23, 27 August 2010 (UTC)[reply]
- An unreliable source now. But Wikipedia is only a few years old. In a decade or two, who knows? Critical mass might be just what this class of animals needs. Anna Frodesiak (talk) 11:15, 27 August 2010 (UTC)[reply]
- Anna, to your question about the bot handling the changes, it will be difficult for the bot to update an article where the humans have done subsequent edits (the order is lost). The bot can create subpages similar to the unaccepted page to alert the human editors about discrepancies in status, taxonomy etc. — Ganeshk (talk) 11:47, 27 August 2010 (UTC)[reply]
- But when classifications change, doesn't that usually just mean a search and replace? Anna Frodesiak (talk) 13:24, 27 August 2010 (UTC)[reply]
- It is not just a case of search and replace. Here is an example. I had to change the introduction sentence, add a new category and make other edits to accommodate the classification change. The bot cannot make these decisions. It will make a mess. — Ganeshk (talk) 13:58, 28 August 2010 (UTC)[reply]
- Ganesh is right when stating that the bot cannot handle the changes in taxonomy, only report them on a subpage. These changes have to be done manually (as I have been doing in the last few days) because there are sometimes ramifications into other genera. Every change has to be checked thoroughly. Also the new name may not have an article yet, either for itself or for the whole genus. This has complicated my task, keeping me busy for hours on one change from a synonym to the accepted valid name. That's why it's such a shame that the bot has been halted temporarily. It could have created these articles in seconds while it took me hours to do so.
- And as to the disputed need for all these stubs, I can state that these aren't really stubs, since they already contain a lot of information: the latest taxonomy (most handbooks and websites are running far behind in this) and, where applicable, synonyms (again very useful for checking the validity of a name). From the moment they exist, it's easy to add the type species or even a photo. These are important characteristics, wanted by most readers of these articles (such as shell collectors). Text can be added at a later stage, and eventually it will be. Of course our ultimate goal is to add the finishing touch to each article, but that's a goal for the far future, unless a few hundred new collaborators join our project. JoJan (talk) 14:03, 27 August 2010 (UTC)[reply]
- I'm hearing two things:
- 1. The bot cannot handle changes.
- 2. The bot can create articles in seconds.
- My questions:
- How could a bot help with what you are doing right now?
- (Big picture): If the bot creates 90,000 more articles, and there are classification shifts, what then? Will we have an ocean of inaccurate articles with no automated way of fixing them? Anna Frodesiak (talk) 14:23, 27 August 2010 (UTC)[reply]
- The bot cannot help us with the changes, as this involves many things, such as deleting the talk page of the synonym (CSD G6) (I can, as I'm an administrator) and creating new articles for a genus that was referred to (as I just did for Brocchinia). The new synonyms have to be included in the taxobox of the accepted name (and the accession date for WoRMS changed in the template). While doing so, I have sometimes already noticed that there were additional new synonyms for the accepted name. These other synonyms have to be changed too. Furthermore, one has to choose between making a redirect to the already existing article of the accepted name or moving the synonym to the not-yet-existing article of the accepted name. As you can see, this involves a lot of things that can only be done by us and not by a bot.
- I think Ganesh is best placed to answer this question. But, in my opinion, this shouldn't be too difficult for a bot to accomplish. JoJan (talk) 15:02, 27 August 2010 (UTC)[reply]
- So the answer to Anna's second question is yes, there might be a surfeit of articles needing changes, at least for a while? Ganesh and yourself both at one point seemed to be saying that it was not possible for a bot to make the changes, although having the bot make articles for all the new synonyms would be possible. —Preceding unsigned comment added by Elen of the Roads (talk • contribs)
- The answer is yes, it will take time for the changes to be fixed. But I won't call it an ocean of inaccurate articles. Out of 15,000 stubs, only 300 articles had a classification change in the last 6 months. If the Gastropod team continues to review the articles as the bot is creating them, we will not end up with a mountain of articles that need fixing. Already 30 articles out of 300 have been fixed. — Ganeshk (talk) 18:37, 28 August 2010 (UTC)[reply]
- Elen of the Roads:
- Why "...a surfeit of articles needing changes, at least for a while..."? Why just for a while?
- Why do bots make articles for new synonyms? Don't we get rid of those articles?
- Ganeshk:
- If the bot makes another 15,000 articles, won't 2% have problems, just like the first 15,000? Won't the sum total then be 30,000 articles all experiencing a 2% per six-month classification change? It seems that JoJan spent a lot of energy fixing 30 out of 300. I'm still a bit unclear about how to maintain 100,000 articles with such labour-intensive checking.
- If I am talking nonsense, please say. Anna Frodesiak (talk) 22:10, 28 August 2010 (UTC)[reply]
- Anna, Wikipedia is mostly text based. That makes it difficult for computer programs (bots) to analyze and update. If MediaWiki (the software that runs Wikipedia) had some kind of database support, the bot could have easily kept the articles in sync with WoRMS. — Ganeshk (talk) 01:37, 29 August 2010 (UTC)[reply]
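For illustration, here is a minimal Python sketch of the "report, don't edit" approach discussed above: compare the name recorded in each article against a local snapshot of WoRMS data and list discrepancies on a subpage for humans to act on. The column names ("scientificname", "status", "valid_name") and the pre-extracted article-to-name mapping are assumptions for the sketch, not Ganeshk's actual implementation.

```python
import csv

def load_worms_snapshot(csv_path):
    """Map each scientific name in a local WoRMS export to its row (assumed columns)."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return {row["scientificname"]: row for row in csv.DictReader(fh)}

def build_discrepancy_report(article_names, worms):
    """article_names: dict of article title -> binomial used in its taxobox.

    Returns wikitext bullet lines flagging articles whose name is missing or no
    longer accepted, suitable for posting to a subpage for human review."""
    lines = []
    for title, name in sorted(article_names.items()):
        row = worms.get(name)
        if row is None:
            lines.append("* [[%s]] - ''%s'' not found in the WoRMS snapshot" % (title, name))
        elif row.get("status") != "accepted":
            accepted = row.get("valid_name") or "unknown"
            lines.append("* [[%s]] - ''%s'' is listed as a synonym of ''%s''" % (title, name, accepted))
    return "\n".join(lines)
```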
Another arbitrary section break
Overview from Wikipedia:Village_pump_(proposals)#Species_bot:
- another five Wikipedians have shown support for this Ganeshbot task.
- one Wikipedian (User:Shimgray) has shown support for genus-level articles only, while being "opposed to creating a large number of individual species articles".
- no one has shown disagreement at the village pump.
--Snek01 (talk) 15:45, 29 August 2010 (UTC)[reply]
I would like to thank everyone for their comments (including those who have never edited any gastropod-related article and those who have never created any gastropod-related article, and so have experience neither with this bot nor with gastropods). I will summarize the task (so that everybody can be sure it is OK):
- 3) additional checking and improving.
- Checks for NEW changes in the source are made yearly, half-yearly or continuously, and those NEW things are implemented if they are considered to be OK.
- Articles are normally improved during the normal editing process.
This describes the real situation: how it has been working and how it works.
Everybody can comment on any phase of this process at any time. Usual and frequent possibilities are like this:
- For a not-yet-created taxon article: "A better/updated source for the taxon EXAMPLE-TAXON-1 is the source EXAMPLE-SOURCE-1. Use that source instead of WoRMS."
- For already-created taxon articles: "A better/updated source for the taxon EXAMPLE-TAXON-2 is the source EXAMPLE-SOURCE-2. Update it (manually or robotically)."
WikiProject Gastropods members will be happy to do it. Put your notice at Wikipedia talk:WikiProject Gastropods.
Consider that this (formal) request for approval deals with phase 1) and phase 2) only. If somebody has comments on phase 3), feel free to share your opinions at Wikipedia talk:WikiProject Gastropods. Thanks. --Snek01 (talk) 15:45, 29 August 2010 (UTC)[reply]
Snek01 - unless you are a BAG editor, you can't close this. Apologies if you are a BAG editor, but the little table on the requests for approval page says that you are not. The instructions are specific that it has to be closed by a BAG editor (and I would have expected one that has not got an interest in running the bot, but it doesn't say that anywhere, so perhaps not expected) Elen of the Roads (talk) 17:06, 29 August 2010 (UTC)[reply]
- Elen, I don't see where Snek01 mentioned that he is closing the request and approving it. He was posting a summary of the discussion at the Village pump and this page so far. At least, that is how I read it. — Ganeshk (talk) 17:26, 29 August 2010 (UTC)[reply]
- (edit conflict) The list of BAG members is at WP:BAG, and Snek01 is not one. I'm not sure where exactly Snek01 supposedly closed the discussion (all I see is an attempted summary), but to be 100% clear: this BRFA is not closed yet, and the bot does not (yet) have permission to create any additional articles. Anomie⚔ 17:28, 29 August 2010 (UTC)[reply]
Apologies to Snek01 if I have misread his post. To be clear, I thought it was a genuine error on his part...but accept it seems to have been a genuine error on mine. Elen of the Roads (talk) 22:12, 29 August 2010 (UTC)[reply]
I have comments, but cannot post easily on this long post. Of course, I risk an incorrectly-spelled-word attack tangent, among other tangents, by gastropod project members. And I would like my concerns addressed. Wikipedia is an encyclopedia, not home to the latest taxonomy of gastropods, but to the most robustly accepted taxonomy. This needs to be addressed more widely: what gastropod members are doing. JaRoad (talk) 17:40, 29 August 2010 (UTC)[reply]
Strong Oppose Looking at the Conus articles, these are all IDENTICAL! I strongly oppose any attempt to automatically create completely identical stub articles. The fact that a species exists does not mean that there needs to be an article on it that has absolutely zero unique information. That is what Wikispecies is for. Create redirects, but not a word is more useful than the genus article. Even the source database has virtually no information on these species. It is absurd to create thousands of articles with two expansion templates on them that will not (or simply cannot) be solved. And at the very least, please don't use quotation marks where they shouldn't be. The Conus species sting, not "sting." Reywas92Talk 01:25, 5 September 2010 (UTC)[reply]
- The Conus articles are similar where they need to be similar (articles on species within one genus will always be very similar when they are quite short), but they are not identical. Each species has its own authority and date listed for the species name. This enables a researcher to go to the primary literature and find the original description, so it's very essential. Many have a list of synonyms, an important and useful feature. As you will also see if you look at (say) the first 10 articles, quite a few of the Conus species articles already have things such as images added to them, as well as the common name where applicable, distribution info, and so on. The reason "stinging" was in quote marks is because the tooth and venom apparatus of the cone snail is not what most people think of as a sting: it is not primarily defensive and does not protrude from the hind end of the animal. Instead it originates in the mouth and is applied out of the snout of the creature. It is used to immobilize prey before eating, so in several respects being hit by a cone snail is a lot more like a rattlesnake bite than a bee sting or scorpion sting. Best, Invertzoo (talk) 23:02, 13 September 2010 (UTC)[reply]
- These creatures are not notable on their own. Few meet the GNG. Existence ≠ notability. I'm sure fantastic lists could be generated for species of each genus that incorporates the authority and date, synonyms, common names, and distribution. When someone is able to add further information, then create the article. Perhaps the bot could create redirects to the genus articles while also inserting the generic template until it can be expanded to be useful. There are over sixty thousand gastropods, and there should not be a nearly empty article for each and every one of them, or even ten thousand. And just because it is not the traditional connotation of stinging, the quotation marks are incorrect. Otherwise, I would suggest to instead clarify what you described above in the article because the quotations are meaningless. Reywas92Talk 23:57, 13 September 2010 (UTC)[reply]
- The Conus articles are similar where they need to be similar (articles on species within one genus will always be very similar when they are quite short,) but they are not identical. Each species has its own authority and date listed for the species name. This enables a researcher to go to the primary literature and find the original description, so it's very essential. Many have a list of synonyms, an important and useful feature. As you will also see if you look at (say) the first 10 articles, quite a few of the Conus species articles already have things such as images added to them, as well as the common name where applicable, distribution info, and so on. The reason "stinging" was in quote marks is because the tooth and venom apparatus of the cone snail is not what most people think of as a sting: it is not primarily defensive and does not protrude from the hind end of the animal. Instead it originates in the mouth and is applied out of the snout of the creature. It is used to immobilize prey before eating, so in several respects if you are hit by a Cone snail it's a lot more like a rattlesnake bite than a bee sting or scorpion sting. Best, Invertzoo (talk) 23:02, 13 September 2010 (UTC)[reply]
Support - It seems that the concerns have been well thought out by the users proposing this bot and that it would serve a beneficial service. I support its creation. Antarctic-adventurer (talk) 18:56, 9 September 2010 (UTC)[reply]
Support – As a Wikiproject Gastropod member I fully support Ganeshbot.
Seascapeza (talk) 18:06, 12 September 2010 (UTC)[reply]
Oppose – The bot would be better off compiling a small number (possibly 1) of list articles than populating the wiki with uninformative stubs that are unlikely to be expanded much in the foreseeable future. See my comments at Wikipedia talk:WikiProject Tree of life#The Great Bot Debate. --Stemonitis (talk) 05:46, 14 September 2010 (UTC)[reply]
- All of them are being expanded and will surely continue to be expanded. In the foreseeable future, probably starting tomorrow, over 4000 stubs can be robotically expanded with description and ecology data. We have enough data for this, but first the stubs have to exist. Are 4000 articles expanded by the end of 2010 foreseeable enough? --Snek01 (talk) 10:34, 14 September 2010 (UTC)[reply]
- But this is all guesswork. We cannot see into the future. There are currently no plans that I'm aware of to fill these articles with ecology, descriptions, etc. Experience suggests that the vast majority of the articles created en masse like this are not substantively expanded in the short or medium term. Most of Polbot's articles on species on the IUCN Red List (see, for instance, Category:IUCN Red List endangered species) haven't been massively altered, for instance, although they have been repeatedly modified/updated/tweaked, diverting industry that could have been used elsewhere. If you have the data to create 4000 decent articles, with meaningful data, then please submit a proposal to create those. I think we'd all support that. That does not mean, however, that 4000 one-sentence articles are equally desirable. --Stemonitis (talk) 14:41, 14 September 2010 (UTC)[reply]
- If you cannot see into the future, then that is your problem. WikiProject Gastropods members can foresee the future of gastropod-related article creation more accurately than any other Wikipedian. They know the plans for the project, and they know from practical experience the advantages and disadvantages of bot-generated articles. If you want to know more than just this one task of the bot, read Wikipedia:WikiProject Gastropods (for example [65]). Experience suggests that all 15000 articles created by Ganeshbot are considered useful, and no errors have been found in them. Imagine that somebody suggested a bot for creating articles on towns in the USA containing six different types of information. Would there be any problem? Would anybody be asking about their immediate expansion? If you personally consider those stub articles useless, then I respect your opinion. But other people consider those articles useful, because they provide valuable information, and the purpose of Wikipedia is to provide information. ADDITIONALLY they are considered very useful by the members of one particular WikiProject. A project is something that is "carefully planned to achieve a particular aim". I do not think that all members of WikiProject Gastropods are so stupid as to suggest something harmful. --Snek01 (talk) 16:47, 14 September 2010 (UTC)[reply]
- Right, because when there's nothing other than names and synonyms, there aren't going to be errors. If it is correct that the articles can be robotically expanded with description and ecology data, then why isn't that part of the immediate plan?? Start with a bot that will create articles with a paragraph of unique info, not a single identical sentence. As described below, comparing gastropods to American towns is a false analogy: those have millions of potential contributors and have indeed grown substantially. The only likely contributors to these are a niche WikiProject, and experience shows that obscure species stubs barely grow at all. It would be much more beneficial to create fewer, quality articles at a time than thousands of substubs at once. Reywas92Talk 00:57, 15 September 2010 (UTC)[reply]
Support - If the source database is up-to-date, it is very useful to have the basic framework of the species pages up and running. It ensures the taxonomy is correct, taxoboxes are present, the authority is present and synonyms are added (and also redirected?). It also ensures there is at least one reference. If the project members are confident they can expand the articles, I don't see a problem. They are the ones actually working on these articles, so why would other people frustrate these efforts? Furthermore, there is a lot of debate about bot creation, but what about AWB? See User:Starzynka and his creations. This bot is doing a far better job than the stuff he or she is creating, and nobody seems to be bothered by that. All the messed-up species and genus pages I have come across were not made by a bot using a reliable source, but by users taking a list of red-linked articles and creating pages en masse using AWB without adding any additional info or even checking whether the list they are working from is actually correct. That, in my opinion, is something everyone should oppose. Not this though, because this bot is actually doing good work. Ruigeroeland (talk) 07:50, 16 September 2010 (UTC)[reply]
Attempting to move this forward
I see three major objections to this task above:
- The reliability of WoRMS. It seems that this is best determined at Wikipedia talk:WikiProject Gastropods, as the articles are being proposed for creation. I also note the existence of WP:NOTUNANIMOUS and the fact that all except one editor seem to be ok with WoRMS as long as the articles are reviewed by a human after creation.
- The ability of the project to review the articles as they are rapidly created. This can be handled by restricting the bot to creating articles only as long as the number of articles awaiting review is not too large.
- The ability of the project to update articles as the taxonomy changes. This problem already exists anyway and will grow anyway as the articles are manually created, and it seems the project is already working on this problem. Please continue the discussion of this problem elsewhere, as it seems tangential to this BRFA.
Taking into account the concerns expressed above, I propose the following:
- The existing 15000 articles created without approval, minus any already reviewed by the project, are awaiting WikiProject review.
- Ganeshbot will not create any gastropod articles while the number of articles awaiting WikiProject review is more than 500. The review process must ensure each article is a good stub that "is sound and warrants inclusion in Wikipedia" (to quote a proposal above), and should expand the articles to at least "Start" class as much as possible.
- Creation of a set of gastropod articles by Ganeshbot will be proposed at Wikipedia talk:WikiProject Gastropods. The members of the project and any other interested editors will discuss and come to a consensus on whether the proposed set of articles is desirable, and any necessary details of content or formatting to be included.
This is basically what is proposed by GaneshK and WikiProject Gastropods, with the rate of creation automatically limited to match the project's review capacity. The identification of articles awaiting review can be done by listing them on a WikiProject subpage with editors removing articles from the list as they are reviewed, or by applying a template and/or hidden category to the articles that editors will remove as the article is reviewed. If the latter, I would also approve the bot to run through the 15000 articles already created (and only those articles) to append the template/category to any article that has not been edited by one of the WikiProject Gastropods reviewers since creation.
I am inclined to approve the bot under these terms if it seems generally acceptable. Comments? Anomie⚔ 18:55, 29 August 2010 (UTC)[reply]
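A minimal sketch of the throttle described in the proposal above, assuming the review backlog were tracked in a hidden category; the category name, limit constant and helper functions here are illustrative only and not something agreed in this discussion. It queries the category size through the public MediaWiki API and allows further creation only while the backlog is at or below the limit.

import requests

API = "https://en.wikipedia.org/w/api.php"
# Hypothetical tracking category; the proposal leaves the exact mechanism open.
BACKLOG_CATEGORY = "Category:Gastropod stubs awaiting WikiProject review"
BACKLOG_LIMIT = 500

def backlog_size():
    """Return the number of pages currently in the tracking category."""
    params = {
        "action": "query",
        "prop": "categoryinfo",
        "titles": BACKLOG_CATEGORY,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    return page.get("categoryinfo", {}).get("pages", 0)

def may_create_more_stubs():
    """True only while the unreviewed backlog is within the proposed limit."""
    return backlog_size() <= BACKLOG_LIMIT

Under this scheme the bot would call may_create_more_stubs() before each batch and pause when it returns False, resuming automatically as reviewers empty the category.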
- Anomie, the accuracy of the articles was never under question. So your requirement that all of the existing articles be reviewed "that they are sound and warrant inclusion in Wikipedia" and expanded to "Start class" is unacceptable. I am requesting for an approval without any limitations on the quantity. If this is not acceptable to BAG, please reject this request. — Ganeshk (talk) 19:24, 29 August 2010 (UTC)[reply]
- Indeed, the accuracy of the articles, at the moment of creation by the bot, is not questioned. WoRMS has the best experts in the world. Checking their work ourselves would be original research and a breach of Wikipedia policies. The so-called inaccuracies are actually changes in taxonomy, and this happens all the time (especially in the last few years, since genetic research has been used to determine exact positions in the taxonomy). This doesn't concern this bot, since it only creates new articles and is not concerned with changes within existing articles. JoJan (talk) 19:55, 29 August 2010 (UTC)[reply]
- If the bot is requesting approval for the creation of stubs with no limits on quantities, it's asking for the ridiculous. With this demand, there is no point in further wasting anyone's time with discussion. I request BAG close this request to allow a bot to create an unrestricted quantity of articles as "denied." JaRoad (talk) 20:00, 29 August 2010 (UTC)[reply]
The question would seem to come back to - do we want 100,000 one line + taxobox articles created automatically all in one go.--Elen of the Roads (talk) 22:18, 29 August 2010 (UTC)[reply]
- Yes, and fewer than about 10,000 more marine gastropod articles need to be created by the bot to finish the task. Those articles come with classification, authorities, synonyms, a proper introductory sentence and vernacular names, and some also note where the species was described. All of this is the bot's doing. Please be accurate with the numbers and do not spread such alarming figures. --Snek01 (talk) 09:57, 30 August 2010 (UTC)[reply]
- A limit on the checking backlog before more stubs are created is not necessary, per JoJan's statement just above: ("...Indeed, the accuracy of the articles...")
- A condition that new stubs be expanded to "start" is unrealistic and unnecessary.
- Consensus on necessary details of content or formatting of proposed set of articles is fine. Consensus on desirability or necessity of proposed set of articles is not.
- An Unreviewed category is fine with me.
- Again, 100,000 one-line stubs are fine, and necessary. This is what Wikipedia is about: creating a page to which content can be added. No page, no home for photos or details. Give the masses of occasional users (who otherwise won't create an article, or may create one with a flawed or missing taxobox) a place to add their content.
- JaRoad: If WoRMS as a source is fine, then there is no reason to limit quantity, provided that classification updates can be performed en masse.
- These articles will be created anyway. I made hundreds without a bot. They were far less accurate, and all needed checking. It took many, many hours. No bot, and I will have to get back to making them myself. Nobody wants that.
- If a human and not a bot were creating these, there would be no question or complaints.
- A percentage of random, newly created, accepted non-gastropod articles will contain many inaccuracies. That is considered acceptable. The margin of error in the bot stubs is relatively tiny.
- These bot articles, upon creation in the mainspace, are fine. As JoJan has stated, "... the accuracy of the articles, at the moment of creation by the bot, is not questioned." Anna Frodesiak (talk) 01:07, 30 August 2010 (UTC)[reply]
I see my attempt has failed. I'll leave it to another BAGger to figure out consensus here. Anomie⚔ 11:34, 30 August 2010 (UTC)[reply]
{{BAGAssistanceNeeded}} User:Anomie has been "inclined to approve the bot under [his own] terms". Do not create your own terms. The terms already exist, and they are called Wikipedia policies. That said, User:Anomie's input is welcome, because he is familiar with the subject. --Snek01 (talk) 13:36, 30 August 2010 (UTC)[reply]
- One Wikipedia policy is that bots obtain approval from the community before operating. Another Wikipedia policy is community input. It concerns me, this unwillingness to discuss the issue with the community and get input from the community, coupled with antagonism toward members of the community who even attempt to move forward with the bot. This appears to be headed for long-term, insurmountable communication issues. There was clearly no authority to create the 15,000 bot articles, when the bot operator emphasized his intention to create only the articles for a single genus or family. If this bot is authorized, what will project gastropod members take that to mean? If this bot moves forward, the strictest of terms and a clear-cut understanding must be established. --JaRoad (talk) 06:23, 16 September 2010 (UTC)[reply]
One of the issues here is about the need and notability of simple stubs. So, I posted here. If consensus can be reached on that matter, we can focus on the remaining issues. I hope this helps. Anna Frodesiak (talk) 05:56, 14 September 2010 (UTC)[reply]
Hi Elen: As I have just expressed to Invertzoo, I am becoming increasingly concerned with the potential maintenance of bot-created gastropod articles. This seems to be a group in such flux, that it might be unrealistic to expect that we could keep up with the changes in taxonomy.
I am a strong supporter of bot-created stubs that contain only rudimentary information. I think that this is in keeping with the idea of Wikipedia. However, this might be appropriate only for taxonomically stable groups, but not for gastropods. Anna Frodesiak (talk) 12:47, 14 September 2010 (UTC)[reply]
- Agree there are fewer problems with eg creating stubs for geographical locations, as there are fewer changes outside of the most volatile areas of the world. ALSO though, there are more people likely to update eg geographical info (that's my/mum's/gran's hometown; I stayed there, uncle Bob lives there etc) than there are people with enough knowledge and interest to update gastropod articles. Elen of the Roads (talk) 13:46, 14 September 2010 (UTC)[reply]
- Although the project gastropod members seem intent on up-to-the-minute taxonomy changes, I point out that this is an encyclopedia and many of the resources in WoRMS are primary research with only one publication. There should be no attempt to update gastropods minute by minute; this is not a taxonomy journal, it's an encyclopedia, and information should be available in published sources, not in a single rewriting of a taxon.
- The community of wikipedia writers that creates the species articles on wikipedia has pretty much come to the consensus that each species is important enough to merit an article, even if it is only a stub. I prefer having the species in a list in a genus article, if there is no distinct information on the species, but the articles do not require instantaneous updating. It takes years, even in the world of gastropods, for new taxonomies to be accepted. It used to take decades. It's not necessary that someone is updating them. --JaRoad (talk) 06:23, 16 September 2010 (UTC)[reply]
It might be wise to separate the matter into three distinct parts:
- 1. bot creating gastropod articles
- 2. worth of resulting stubs (currently under discussion)
- 3. management of a large number of gastropod stubs with changing taxonomy
- 1. There appears to be substantial proof that the bot creates accurate stubs from a good source.
- 2. Some say lists would be better.
- 3. With a (very approximate) 4% annual change in taxonomy, can the project maintain updates? If not, what value would remain in the stubs? Only the authority is static in the stubs. Plus, of course, improvements such as content and images. Ganeshk previously said that bots cannot help update taxonomy. But, JoJan recently stated: "Changes in taxonomy can be reported by another bot...".
Anna Frodesiak (talk) 22:13, 14 September 2010 (UTC)[reply]
- The taxonomic changes should not affect this discussion. Taxonomic changes would also affect manually created articles. One could argue it is better to have all species, so you can implement the changes to all of them at once, rather than having only a few, with the risk of new ones being created by inexperienced contributors using an old classification which is probably more widely known and found on the internet. Ruigeroeland (talk) 14:59, 15 September 2010 (UTC)[reply]
- Furthermore, I would suggest to let the bot at least make genus articles with all species in it. I did this manually for the moth family Tortricidae and it works like a charm. It also prevents people from putting orphan tags on every species article you make. Ruigeroeland (talk) 15:03, 15 September 2010 (UTC)[reply]
Another issue: there is a comment by Invertzoo at the WikiProject Gastropods talk page about less than ideal choices made for the bot articles, "If a few of them have things that some people argue are less than ideal, these are human choices, and were not due to the bot; plus these supposedly less than ideal things are a matter of opinion, not of fact."
I would like to know what human choices were made that contributed to "less than ideal" results in the bot articles. If there are "less than ideal" choices being made, will they continue to be made, will every comment by someone outside of Project Gastropod be dismissed as a "matter of opinion," and what, if anything, is Wikiproject Gastropod doing about this to allow all interested members of the community to weigh in? Where are these choices being made? Where were comments made about these "less than ideal" choices?
Again, a point that has been completely ignored multiple times: up-to-the-minute taxonomy is not encyclopedic, as it arises in the primary literature, which is not an appropriate source for an encyclopedia. Are the bot and the project members using primary research from WoRMS to create these articles? If this is the case, I request the stubs be deleted. --JaRoad (talk) 17:12, 18 September 2010 (UTC)[reply]
- "If this is the case, I request the stubs be deleted." That is plain nonsense. Would you like to put outdated info on wikipedia because that is encyclopedic? If that is the case, go work on a paper encyclopedia. The great thing about wikipedia is the fact you can include the latest information. Ruigeroeland (talk) 17:45, 18 September 2010 (UTC)[reply]
- Hmmmm. I suggest the bot not be approved and the stubs be deleted. It seems that wikiproject gastropod members, at least one and assuming Ruigeroeland is a member, have confused the encyclopedia for their personal taxonomy journal.
- Operating or authorizing the operation of a bot to create 15,000, much less 85,000 stubs for a small group of editors unconnected to the goals and means of producing an encyclopedia is irresponsible. --JaRoad (talk) 18:06, 18 September 2010 (UTC)[reply]
This was a reply to JaRoad's message before this last one, but it got caught in an edit conflict:
- Hello again JaRoad. We are all working to try to improve the encyclopedia. When you, or anyone else, finds something you feel is "less than ideal" in a group of species articles, (such as the sentence I had created a while ago for the family Velutinidae), please feel free to Be Bold and immediately change it to something better; that is the Wikipedia way. You can leave a friendly note on the Project talk page too if you like, but that is not required. As for WoRMS, it is not primary literature, it is secondary literature. It cites sources in the primary literature. See for example [66]. Best wishes to you and to all, Invertzoo (talk) 18:16, 18 September 2010 (UTC)[reply]
- Then, when you said:
- "5. There is no solid evidence that any of our new bot-generated stubs have genuine errors in them. If a few of them have things that some people argue are less than ideal, these are human choices, and were not due to the bot; plus these supposedly less than ideal things are a matter of opinion, not of fact. In our experience over the last 3 years, human-generated stubs have been found to be more likely to contain small errors (such as typos or small omissions) than the Ganeshbot stubs have been."
- ... where and how was your "less than ideal" "human choice" originally suggested, specifically the choice(s) that led to a "less than ideal" article? And how and where did the discussion about its "less than ideal" nature occur? It is not clear from the example you give, as your example does not appear to be a bot article. Knowing how the articles are created and how the choices are made helps in making sure they are without "human choices" that are "less than ideal."
- If the bot is to make these articles, then other wikipedia writers have the right to know how problems are both prevented, by discussion beforehand, and how they are corrected, by discussion after. Both how and where would be cleared up by showing this example of the "less than ideal" article and its resolution.
- I suggested, also, that a single taxonomist's database compilation of the primary literature is not quite secondary literature, unless that taxonomist is including, in the database, a review and discussion of the reasoning. Is WoRMS doing this? I cannot find these types of citations in the links included in the bot articles or the subsequent corrections made in them. If this is the source of the information for wikipedia articles, a review and/or discussion of the taxonomic revision, the wikipedia articles must cite the taxonomic revision, not just link to the taxon page at WoRMS. This would require the bot to add a sufficient citation line so that the reader can tell what the source of the information is. This taxonomic revision, also, considering the dynamic nature of gastropod taxonomy, should have more support than just the database. Or is WoRMS a single taxonomic expert compiling the most recent primary literature on a taxon, without a discussion and review? In this case, the information must be supported by an additional source, and this would require the bot to post suggested stubs, and an editor to add the correct additional information. --JaRoad (talk) 18:37, 18 September 2010 (UTC)[reply]
- Then, when you said:
- By the way, Ruigeroeland is not a member of the gastropod project. Ruigeroeland works on Lepidoptera: butterflies and moths. Invertzoo (talk) 18:22, 18 September 2010 (UTC)[reply]
- Hello again JaRoad. I am sorry but I do not find your definition of WoRMS as being a primary source acceptable within professional limits. As for the process we use to set up the bot articles, it is visible on the project page and in the archives of that page. We have tried over time to answer all of your various and numerous objections and questions, but the focus often appears to shift, which makes it difficult. I do understand that you strongly disapprove of these bot stubs, that you want them all deleted and that you want no more to be created, but frankly I feel that your grounds for requesting those actions are not really sufficient to warrant such extreme courses of action. Very few (if any) other editors seem to agree with your extreme stance on these points. On Wikipedia generally I think it is accepted that we all need to be able to tolerate a state of affairs where things are moving towards a high degree of excellence, but are not there yet. A certain amount of imperfection is acceptable as one moves along towards the goal. Thank you for your interest and for your efforts to improve the encyclopedia, Invertzoo (talk) 22:11, 18 September 2010 (UTC)[reply]
arbitrary section break
Here are the points Project Gastropods would like to make:
- We completely agree that the bot was not authorized to create all these new stubs (other than the Conus stubs.) However, this was a genuine oversight and misunderstanding on our part, not a deliberate flouting of policy. We should not be subjected to punitive measures or special restrictions as a result of this unfortunate misunderstanding. Any new suggestions as to how we should proceed at this point in time should be thought out as a completely separate issue.
- 1. There is consensus at "Tree of Life" that species are intrinsically notable and that species stubs are valuable to have for the reasons suggested: easy expansion, easy and fast adding of images and other info.
- 2. There is no Wikipedia guideline against stubs or against the number of stubs a project should have.
- 3. Thus there is no formal limit to the number of stubs a project should currently have, assuming the stubs are not full of errors. (In any case we are generating stubs by hand every day and have been for several years without a formal checking system in place for content errors; this is commonly the case in the rest of Wikipedia.)
- 4. At the Project we check the content of a planned family of bot-generated species stubs carefully before they are produced. After they are produced, a few in each family are checked by hand. It is not necessary to manually check every single one of them, as the content is standardized, and assuming the bot is running normally, the stubs within one family will all be similar in structure. The bot seems sound, and no problems with it have shown up yet, but were the bot to develop a weird glitch, that should be immediately obvious from checking a few stubs in each family. In the unlikely circumstance that a glitch were to come into effect, an automated fix should be quite easy and fast to implement.
- 5. There is no solid evidence that any of our new bot-generated stubs have genuine errors in them. If a few of them have things that some people argue are less than ideal, these are human choices, and were not due to the bot; plus these supposedly less than ideal things are a matter of opinion, not of fact. In our experience over the last 3 years, human-generated stubs have been found to be more likely to contain small errors (such as typos or small omissions) than the Ganeshbot stubs have been.
- 6. If one word is considered seriously misleading, or if quote marks did need removing around one word in a large batch of stubs, that could be changed by automated software in a matter of seconds.
- 7. If taxonomy on the family level or below subsequently becomes somewhat out of date due to revisions by experts, or if the nomenclature has been tweaked subsequent to the creation of the stub, this does not in any way invalidate the species article, and should not be considered an error.
Points explaining why it is so important and valuable to us (and to Wikipedia) to have a full set of stubs to cover the whole class of gastropods:
- 1. We are constantly finding new free images which can be added in to new stubs, that is assuming the stubs are available. JoJan currently has shell images of 2,500 species (!) that are waiting to be added to the project. A leading malacologist in Brazil has also offered to give us a large number of free images. We are constantly finding other new information (with refs) that can rapidly be added, that is, if a stub already exists. When stubs are not pre-existing, having to create a new stub by hand every time you need one is a slow process which is very wasteful of human time and energy. User:Snek01 creates a few new articles on species almost every day of the year (!) If he could use a pre-existing framework of stubs, that would enormously increase his productivity, enabling him to fix up and flesh out far more articles each day.
- 2. Wikipedia works precisely because people enjoy expanding articles and fixing them up. This is a situation where Wikipedia can really benefit from a "Be Bold" approach.
- 3. The Gastropods Project staff has expanded very significantly over the last 3 years, from 5 to 23; nine new people joined in 2009 alone! Even though not all of the 23 editors are extremely active, it does show that in another year or so we might have significantly more manpower and possibly manpower that is a lot more expert. We may have people who are quite willing to take on one whole superfamily or another large taxon and fix up all of the stubs in that taxon. We must think of the future as well as the present and we need to have faith in the overall Wikipedia process, which has proven to work so well over time.
Additional commentary:
- 1. The stubs which were generated recently are based on the website WoRMS and [67], the highest quality secondary source available to us, and one which is managed by some of the best professionals worldwide in malacology. JoJan is now in frequent email contact with WoRMS, so any ambiguities can be cleared up quickly.
- 2. As the WoRMS website corrects or updates its listings, these updates can readily be checked once a month and listed for us by a bot (this has already been done once), and the changes can then be implemented by hand by project members, as JoJan has been doing in the last week or so. When checked once a month, the number of changes will not be more than we can easily manage.
- 3. WoRMS covers only marine species; thus land snails and freshwater snails are not covered. One of our members has created a count of the stubs on extant marine gastropods that have already been created, and of those that remain to be created.
- (Note: 9 families of Sacoglossa with 284 species are not counted.)
Extant marine gastropods: done by Ganeshbot vs. to be done from WoRMS
- Number of families: 132 done by Ganeshbot; 137 to be done from WoRMS
- Number of articles/species: 15,000 articles (species + genera) done by Ganeshbot; to be done from WoRMS: my guess is about 3,000-5,000 species, but it is certainly less than 10,000 articles
- I have tried to count the families precisely, but this table serves as a very approximate overview only. It clearly shows, however, that the most diversified families are already done. --Snek01 (talk) 23:00, 14 September 2010 (UTC)[reply]
Thank you all for your patience. Invertzoo (talk) 13:43, 19 September 2010 (UTC)[reply]
- A couple of points, first about how ambiguities will be cleared up:
- "JoJan is now in frequent email contact with WoRMS, so any ambiguities can be cleared up quickly."
- Why do ambiguities have to be cleared up via e-mail with WoRMS? If the literature is not clear, then this ambiguity in the secondary literature should be what is included in the article, not the results of a personal discussion. There's no way that JoJan's e-mails with WoRMS can meet verifiability. Also, one use of the primary literature for an encyclopedia is for an editor to clear up the secondary literature. Again, if it's not clear, then it's the verifiable ambiguity which belongs, not the personal interview via e-mail clarification. A personal e-mail clarification is original research.
- "The threshold for inclusion in Wikipedia is verifiability, not truth—whether readers can check that material in Wikipedia has already been published by a reliable source, not whether editors think it is true."
- Second, I'm disappointed that project gastropod members cannot even see how others can question their ability to understand what was authorized and worry about future instructions. This is what was asked for:
- "Gastropod project would like to use automation to create about 600 new species articles under the genus, Conus. This task has been verified and approved by the project."
- "This task is no way close to what AnyBot did. I had clearly mentioned that about 580 stubs will be created (nothing more)."
- It's clear you were not communicating with the bot operator. I keep trying to find the discussions about the families that led to creating 15,000 articles, but there don't seem to be any. Just a list with sentences. Where is the discussion on project gastropod that shows how you came to the conclusion that this authorization was for 15,000 articles, or the discussion about how you will deal with a future approval that guarantees you don't again misinterpret "500 stubs will be created (nothing more)" as authorization for 15,000 articles?
- Can you quote the numbers more precisely as to how many articles will be created? Will authorizing 5000 lead to 150,000 articles?
- Your misunderstanding is so outrageous and tied directly to a statement designed to reduce fears of the bot producing 1000s of unwanted stubs. The other bot created far fewer articles than this bot ultimately has created.
- The vast chasm between what you initially asked for and what you eventually created concerns me still.
- Third and most important:
- If you're creating groups of identical species stubs, there's no need to create anything but a genus article with a list. When pictures are added, the article can be edited and saved under the genus name with the addition of the binomial to the taxobox and to the lead sentence, plus the picture.
We have answered all of these points at least once before, sometimes in great detail. Thank you. Can I ask you to please let everyone know what the IP address was that you edited under, quoting from your current user page (User:JaRoad) "for about 5 years", before you registered as JaRoad only 5 weeks ago? All of our histories are completely open and available for anyone to peruse, as is that of our Project; your history on the other hand is a mystery. For someone who makes such sweeping demands for deletion of articles, it would be good to be able to view your history on Wikipedia. If you have nothing to hide, I cannot imagine any reason why you would want to withhold that key piece of information. Thank you for your cooperation, Invertzoo (talk) 21:45, 19 September 2010 (UTC)[reply]
- Invertzoo, there is an aspect in which your challenge is irrelevant. It remains the case that Ganeshbot was only authorised to create 500 articles, and somehow created 15,000 instead. It also remains the case that the wider community is somewhat uncertain of the value of 15,000 articles that contain no real information - although I do accept that adding pictures is adding value, can I ask how many of Ganeshbot's articles now have pictures added? It is also potentially the case that "I am in contact with WORMS by email" is not a reliable source.....?? While project gastropod's ambition to have an article on every single slug on the planet is admirable, using a bot to create 100,000 articles without information is perhaps...of less value to the project as a whole??? And demanding JaRoad out himself really does go slightly beyond the pale, I think. Elen of the Roads (talk) 22:53, 19 September 2010 (UTC)[reply]
- JaRoad is a newbie who acts on Wikipedia as a troll, primarily editing not articles but talk pages. I have to presume good faith, so I have no right to think that JaRoad could be, for example, an online seller of shells, a sockpuppet, or anyone else with bad intentions toward Wikipedia. Also, the questions by Elen of the Roads are being repeated. Over 3200 articles have image(s), but it is not known how many of them were created by Ganeshbot. My guess is that about 1000 Ganeshbot articles were improved with image(s) in only 6 months, but nobody knows precisely. It can certainly be said that a large proportion of those images, hundreds of them, could not have been used effectively on Wikipedia without Ganeshbot. Anyway, this question, like many others, is irrelevant to Ganeshbot's approval. The key questions are: 1) Is Ganeshbot against guidelines? NO. 2) Is Ganeshbot adding information to Wikipedia? YES. Nobody, not even Elen, can claim that Ganeshbot is creating articles without information. One of the most strategically valuable pieces of information added by Ganeshbot is its synonyms section, which prevents users from creating duplicate articles under synonymous names (because of the full-text search). Even the mere existence of a blue link instead of a red link provides valuable information in articles like the List of marine gastropods of South Africa and similar ones. For example, the first family in that list has 10 articles, all(!) of which were created by Ganeshbot; 6 of those 10 articles already have images in them, and some of those blue links redirect to an article with a different name, so later this will result in updating the names in the list. --Snek01 (talk) 01:20, 20 September 2010 (UTC)[reply]
- I must agree with Snek. Although I don't think he is a troll, I think this has been a huge drain on the energies of the project. This has gone on long enough. We have addressed his concerns thoroughly, and thoughtfully. Can a decision be rendered now? If the bot is approved, wonderful -- 2,300 images await homes. If not, I will begin generating stubs manually. JaRoad can tag them { { AfD } } individually if he likes. But, I will be making a lot of them, and they will look identical to the bot-created ones. Anna Frodesiak (talk) 01:56, 20 September 2010 (UTC)[reply]
Yet another arbitrary section break
I must also agree with Invertzoo in that we have answered your questions, and agree with Elen of the Roads that demanding JaRoad out himself is inappropriate.
JaRoad: This is an excerpt from some of the text I added and you removed from your talk page. It is relevant here. (You have every right to remove text from your talk. I only wish we had the right to remove offensive text that attacks contributors from your user page.):
I see that you have a strong point of view. Perhaps a more constructive approach would be to neutrally ask other editors what they think, and achieve consensus, and also to gather information before rendering a POV. You arrived with a POV, and do not have a monopoly on the truth. I, as a member of the gastropod project, neutrally made inquiries and asked sensible questions in order to make up my mind. (I was even against the bot during my investigation.)
We have made an all-out effort to respond to your concerns. You, however, (rather single-handedly), have pushed your point of view on such matters as the credibility of WoRMS, the worth of species stubs, the accuracy of the bot's work, and the ability for the project to maintain such stubs. You have yielded or compromised on none of your initial points.
A little research, and a few queries would have saved us a lot of time, considering that we have since shown those points of view to be, not only inaccurate, but also largely in disagreement with the opinions of the community.
So, to expedite this matter, please respond to the answers we have provided, point by point, telling us whether you accept or reject them, and why. We all want to get back to contributing to the project. Thank you. Anna Frodesiak (talk) 09:19, 20 September 2010 (UTC)[reply]
- I would not invite JaRoad to answer every point we made above, as he has already shown himself capable of finding fault with everything we say about every aspect of what we do on Wikipedia. From what he says on his User page it appears that JaRoad is critical of a large part of the Wikipedia community (emphasis mine): "...an all out attack on me by editors whose primary purpose is not editing articles. This is a big battle on Wikipedia and always the social camp has more time to waste and is more invested in their social life making the encyclopedia writers a poor second. Too bad. Wikipedia would already be great if editing were the priority."
- Asking JaRoad to reveal his previous editing identity and contribution history is certainly not "outing" [68]; outing is posting personal information such as RL name, date of birth, etc. What I am asking for is simple transparency, so we can see whether or not this person has a problematic history on Wikipedia. As it says on that same WP policy page: "tracking a user's contributions for policy violations [is not harassment]; the contribution logs exist for editorial and behavioral oversight."
- And let's also be quite clear about one thing: decisions on Wikipedia are not made by showing who can hang on the longest and argue the most relentlessly; decisions here are made by assessing consensus. We cannot resolve this ourselves: a new person from Bot Approvals will have to attempt to see if some kind of consensus can be found. To quote from the policy page here [69]: "Sometimes voluntary agreement of all interested editors proves impossible to achieve, and a majority decision must be taken". It is clear that 2 editors (JaRoad and Elen of the Roads) are in favor of extreme sanctions against the Project's past and future use of Ganeshbot. But over the last few weeks and over several different talk pages, numerous other editors who are not part of our project have supported our position, either partially or completely, once the issues involved were properly explained. And numerous editors within the project support our position. We will see what transpires. Thank you. Invertzoo (talk) 13:15, 20 September 2010 (UTC)[reply]
- Invertzoo, can you please moderate your language a little. I am not in favour of extreme sanctions against anything. Ganeshbot needs a sanction (ie approval) to act. I am not proposing sanctions (ie a restriction) on its previous action, even though these were without approval. The 15,000 slugs and snails are here - as long as they are not all eating my lettuces, let us delight in them and continue to improve the articles which Ganeshbot created. My concern is only whether Ganeshbot should be sanctioned (ie approved) to make any more articles at this time. Opinions do seem to differ as to the relative level of value of these articles in their original form, but with sufficient input I am sure that a consensus witll form. Elen of the Roads (talk) 14:28, 20 September 2010 (UTC)[reply]
- Thanks for a more moderate response that is not aligned with JaRoad's extreme position. I would still like to know if you are in favor of transparency on Wikipedia, and whether you still consider asking JaRoad to link to his previous contribution history and talk page history as an IP contributor to be "outing"? I consider it to be a basic policy necessity. It is not possible to know if one can trust the comments of a user (who claims to have been editing for 5 years) if we have no access whatsoever to his contribution history and talk page history before August 15th of this year. When we have a history to look at, we can see if a particular user has accumulated warnings before, or even blocks, and if so based on what kind of behavior. Thank you for your time and consideration. Invertzoo (talk) 15:14, 20 September 2010 (UTC)[reply]
- I would let that line of questioning drop if I were you, as it could easily backfire on you. Take his statements at face value - if you feel you have refuted them, then simply point that out. Attacking the editor is never the way to go. Elen of the Roads (talk) 15:46, 20 September 2010 (UTC)[reply]
Invertzoo has a good point here. This is not outing. I have been stung by trolls before and this fits the pattern exactly. I'm not saying that this is what's happening here, but it's so hard to tell the difference. Knowing his history on Wikipedia would help a great deal. I am curious as to why JaRoad has been silent on this. Why not be forthcoming? It would favour him and give him credibility.
It's hard to take his statements at face value considering his pre-judgement, his tendentious posts, and the statement on his user page. He didn't arrive asking questions. He came with a POV and has stuck to that regardless of new information.
As for the bot being approved, I suggest:
- allowing this bot to produce stubs without limits on numbers
- adding as much generic information to the lede as possible relevant to the group (genus, family, etc)
- monitoring resulting stubs
- reevaluating the worth of the bot based upon the stubs' condition after several months.
Elen, how does this sound to you? You must see how dedicated we are to improving the gastropod project. It is our aim and interest to improve the project, not just to blindly make stubs and walk away. We will fill them with images and content over the years, as will others. Isn't this exactly what Wikipedia is about? Are there specific conditions you would like to see met in order to lettuce :) make these new stubs? Many thanks. Anna Frodesiak (talk) 18:23, 20 September 2010 (UTC)[reply]
- I see no objections at all to this. If it is not approved though, you could ask someone to AWB all these articles. There are some people who are creating 100s of articles with even less info without anyone even noticing, let alone objecting, using AWB. If you let them do these, they will be doing something useful for a change. :) Ruigeroeland (talk) 19:24, 20 September 2010 (UTC)[reply]
- Thank you Ruigeroeland. Invertzoo (talk) 21:12, 20 September 2010 (UTC)[reply]
I'm not the one with the extreme position; that would probably be the editor who disagrees with the value of species stubs. I've not only changed my position slightly, I indicated that change by posting comments and questions and by clarifying specifically that Wikipedia has already established that species stubs are considered valuable articles. I suggested the bot make the articles from proposed lists. This was ignored. I asked questions. They were ignored. I offered help; this was insulted and belittled. I changed my position, I read others' posts, and I was called a troll and hounded personally in response. How professional of project gastropod. Can't disagree with or address my points? Attack me. Hound me.
I think that project gastropod's unwillingness to compromise and their attacking those who disagree with plans means this bot will be trouble. Anyone who raises issues will be insulted for their spelling, called a troll, and hounded by Invertzoo. You want unlimited chaos? That could be obtained by giving this bot unlimited approval to create stubs with no community oversight except by project gastropod members.
People have already expressed disagreement about other bots creating species stubs. The issue is sensitive. It requires editors with diplomacy to be able to deal with sensitive issues. Floundering until you settle on demanding someone out themself, calling them a troll, and insulting their English does not speak of the sort of diplomacy and consideration for working with community consensus that should come with a bot with unlimited approval to create stubs. --JaRoad (talk) 05:11, 21 September 2010 (UTC)[reply]
Closing comment
This debate has reached no consensus, and as such the bot approval is defaulting to Denied. There are many reasons for closing this as no consensus, and I’ll explain why I’ve judged that to be the result below.
Firstly some background may be useful to editors not familiar with this, for future reference. I hope this brief summary will be useful for others, and me to help collect my thoughts. This bot was originally approved to create no more than 600 pages, for the Gastropod WikiProject. A number of concerns were raised even then, and the bot was approved only after a lot of discussion, and with a limit on the speed at which it could create articles. This limit on the speed was later removed. However, following this the bot started creating thousands upon thousands of pages (~15,000), many more than the approved number. The bot was shut down, as it was no longer doing approved edits, as I, and a number of other editors, pointed out. It was decided that to continue editing, the bot would need to be approved via BRfA, and I said that to be approved, it would need community consensus first. The BRfA was submitted, and various discussions with the community took place following this.
Now my comments on actually closing this. Firstly, some of the conduct at this BRfA has been exceedingly poor, with personal attacks and a lack of willingness to work with others. This is true for both some of the supporters and some of the opposition. I believe this has contributed to a battleground mentality, where rather than trying to work together, users feel this is a win or lose situation. Also contributing to this is the absence of compromise: the supporters do not appear to want any limits on this bot, or to accept anything other than having the bot approved fully, as they proposed it. This is also true for some of the opposition, who will apparently accept nothing less than having the bot shut down completely. However, because the supporters are the ones proposing this, they need to work in cooperation with the community, to reach a proposal which suits everybody (reasonably). Because this has not been done, no consensus has been reached.
Commenting on the task itself, there seem to be mixed feelings from the community on bot-created stubs. Some are opposed to them, partly due to previous bad results from such tasks, as well as the nature of this WikiProject. Since it is a relatively small project on a niche topic, there are understandable concerns that the project will struggle to maintain 25,000 articles. The project’s arguments that they can maintain this number of articles are unconvincing, since they are speaking about having, in the past, maintained very few articles over a very long period of time, which does not prove that they can “keep on top” of 25,000 articles. Judging by discussions in forums other than this one, however, there does seem to be substantial community support for bot-generated stubs in general. However, few of the comments by users at those forums seem relevant to this particular bot task.
How to move forward
If the project still wishes to move forward with this bot, I would suggest forming a consensus before submitting another BRfA, using a request for comment or another appropriate venue, as BRfA isn’t particularly suited to building consensus. There has been some suggestion of running this task even without approval, and I would strongly recommend that this not be done. Using automated, or even semi-automated (such as AWB), methods for content creation requires approval from BAG, which you do not currently have. Fully manual creation of these pages is, of course, permitted. But I would suggest that rather than doing this, the project work on keeping the pages it currently has up to date, to help convince others that you are capable of maintaining these articles, rather than simply creating more pages that there is no evidence you are maintaining. In future discussions I would remind everybody to stay cool, to listen to each other’s arguments, and to compromise. It is the lack of compromise and cooperation that has led to a lack of consensus, and, subsequently, this being closed as no consensus - Kingpin13 (talk) 08:16, 14 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Bots in a trial period
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Basilicofresco (talk · contribs)
Automatic or Manually assisted: auto (after a period of testing)
Programming language(s): python
Source code available: not yet (pywikipedia + custom script)
Function overview: analyzes selected articles, checks for a matching target on Commons and then adds {{commons}} or {{commons cat}}.
Links to relevant discussions (where appropriate):
Edit period(s): few times per year or less
Estimated number of pages affected: a few thousand?
Exclusion compliant (Y/N): Y
Already has a bot flag (Y/N): Y
Function details: this task aims to add a link to Commons to the article when unambiguous related media content is found. A rough sketch of the matching logic is given after the list below.
- It will start from an offline-generated list of selected articles with these characteristics:
- round brackets in the article name, eg. Alcobaça (Portugal);
- an "External links" section (I plan to improve the script in the future so that it can place the template correctly even without an "External links" section);
- article without any Commons template (exclusion regex: ([Cc]ommons|[Pp]ic|[Cc]ommonspar|[Cc]ommonspiped|[Cc]ommonsme|[Ss]isterlinkswp|[Ww]ikicommons|[Cc]ommonstiny|[Cc]ommons-gallery|[Gg]allery-link|[Cc]ommonsimages|[Ww]ikimedia[ _]Commons|[Cc]ommons-inline|[Ww]ikicommons-inline|[Cc]ommons[ _]category|[Cc]ommons[ _]cat|[Cc]ommonscat-inline|[Cc]ommons[ _]cat[ _]left|[Cc]ommons2|[Cc]ommonsCat|[Cc]ommoncat|[Cc]ms-catlist-up|[Cc]atlst[ _]commons|[Cc]ommonscategory|[Cc]ommonscat|[Cc]ommonsimages[ _]cat|[Cc]ommons[ _]cat4|[Cc]ommonscat[ _]left|[Cc]ommons[ _]and[ _]category|[Cc]ommons[ _]and[ _]cat)).
- checks Commons for a matching gallery or category with:
- same name (eg. "Alcobaça (Portugal)" --> does Commons:Alcobaça (Portugal) exist?)
- same name adding "category" (eg. Alcobaça (Portugal) --> does Commons:Category:Alcobaça (Portugal) exist?)
- same name after removing brackets (eg. Lynx (web browser) --> does Commons:Lynx web browser exist?)
- same name after removing brackets and adding category (eg. Lynx (web browser) --> does Commons:Category:Lynx web browser exist?)
- same name after replacing brackets with a comma (eg. Haren (Groningen) --> does Commons:Haren, Groningen exist?)
- same name after replacing brackets with a comma and adding category (eg. Haren (Groningen) --> does Commons:Category:Haren, Groningen exist?)
- if a redirect is found on Commons, then it takes the redirect destination
- adds the right template in the right place (eg. {{commons|Alcobaça (Portugal)}} or {{commons cat|Alcobaça (Portugal)}} at the top of the External links section)
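A minimal sketch of that lookup order in Python; this is not the bot's actual source (which is not yet published), and commons_page_exists stands in for whatever pywikipedia existence check the script really uses:

<syntaxhighlight lang="python">
import re

def commons_candidates(title):
    """Yield Commons gallery and category titles to try, in the order listed above."""
    bases = [title]                                              # e.g. Alcobaça (Portugal)
    no_brackets = re.sub(r'\s*\(([^)]*)\)', r' \1', title).strip()
    if no_brackets != title:
        bases.append(no_brackets)                                # Lynx web browser
        bases.append(re.sub(r'\s*\(([^)]*)\)', r', \1', title).strip())  # Haren, Groningen
    for base in bases:
        yield base                                               # gallery page
        yield 'Category:' + base                                 # category

def find_commons_target(title, commons_page_exists):
    """Return the first matching Commons title, or None; redirects are resolved by the caller."""
    for candidate in commons_candidates(title):
        if commons_page_exists(candidate):                       # placeholder existence check
            return candidate
    return None
</syntaxhighlight>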
Discussion
{{BAGAssistanceNeeded}} In the past 10 days I have not seen any questions. Can I start a test run? -- Basilicofresco (msg) 13:20, 15 July 2010 (UTC)[reply]
- Seems straightforward. It might be more straightforward to check for the presence of commons templates using the API's prop=templates than a regex, as then you don't have to worry about capitalization, space versus underscore, new redirects, and the like. Anomie⚔ 16:15, 15 July 2010 (UTC)[reply]
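For illustration, such a check is a single query against the API's prop=templates module; a rough standard-library sketch (the set of template titles to match against is an assumption and would need to cover template redirects too):

<syntaxhighlight lang="python">
import json
import urllib.parse
import urllib.request

COMMONS_TEMPLATES = {'Template:Commons', 'Template:Commons category'}  # assumed list; extend with redirects

def has_commons_template(title):
    """True if the article already transcludes one of the listed Commons templates."""
    query = urllib.parse.urlencode({
        'action': 'query', 'prop': 'templates', 'titles': title,
        'tlnamespace': 10, 'tllimit': 500, 'format': 'json',
    })
    with urllib.request.urlopen('https://en.wikipedia.org/w/api.php?' + query) as resp:
        data = json.loads(resp.read().decode('utf-8'))
    for page in data['query']['pages'].values():
        if any(t['title'] in COMMONS_TEMPLATES for t in page.get('templates', [])):
            return True
    return False
</syntaxhighlight>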
- Seems like most of this (except for the external links section bit) can be done with a toolserver database query. I'm not sure if you have a toolserver account, but you may always ask at WP:DBR for some help. DB queries are much faster and, in my opinion, easier than using the MediaWiki API. Tim1357 talk 23:17, 15 July 2010 (UTC)[reply]
Thank you for your suggestions. Well, the query would create the list much faster... but I'm (still) not used to SQL, and in order to avoid mistakes I would prefer to keep strict control over every step of the task. I'm going to start from a dump-generated list of pre-selected articles (step 1), and this will greatly speed up the whole process. -- Basilicofresco (msg) 07:54, 18 July 2010 (UTC)[reply]
- I'm leaving tomorrow for a trip, so I will not be able to run any script until the second half of August. See you! -- Basilicofresco (msg) 07:58, 21 July 2010 (UTC)[reply]
{{BAGAssistanceNeeded}} I'm back. I will run the script on my home computer so the efficiency of the list-creator script is not critical and most of all does not affect Wikimedia servers. -- Basilicofresco (msg) 14:30, 18 August 2010 (UTC)[reply]
- It all looks good, Basilicofresco, but I'd like to see some community discussion about a bot adding these templates. Spam a few talk pages explaining what you hope to do. Tim1357 talk 00:49, 19 August 2010 (UTC)[reply]
- Ok, done! -- Basilicofresco (msg) 11:11, 20 August 2010 (UTC)[reply]
- Could you link the discussions? –xenotalk 14:51, 3 September 2010 (UTC)[reply]
- Sure: Wikipedia talk:WikiProject Images and Media/Commons#FrescoBot 6, Template talk:Commons#FrescoBot 6, Template talk:Commons category#FrescoBot 6. No replies. If you feel I missed the appropriate talk page, feel free to start the discussion there. -- Basilicofresco (msg) 10:11, 6 September 2010 (UTC)[reply]
- It seems no one cares ... Approved for trial (25 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Mr.Z-man 19:00, 2 October 2010 (UTC)[reply]
Trial complete. This morning I wrote and tested the script. I fixed the 1st and 2nd edits, due to a stupid typo. No problems on subsequent edits. As you can see, if a redirect is found on Commons, the bot follows it and then analyzes the target. -- Basilicofresco (msg) 10:42, 10 October 2010 (UTC)[reply]
I'm in no hurry, but it has been 4 months... ;) {{BAGAssistanceNeeded}} -- Basilicofresco (msg) 21:10, 19 October 2010 (UTC)[reply]
November has arrived and I have not had a single complaint about this task. If you are still doubtful, the best thing to do is to approve a 500-edit trial and wait for any reaction. -- Basilicofresco (msg) 23:11, 1 November 2010 (UTC)[reply]
- For the record, the edits are here. I noticed 14 cases where your bot linked to a category when a page or redirect to a page exists on Commons, for example this edit linked to Commons:Category:Asparagus rather than Commons:Asparagus (from Commons:Asparagus (genus)). In fact, in that particular example how did it find Commons:Category:Asparagus at all?
- I also see the edit to Georgia (U.S. state) was removed without explanation, although probably because the article had {{Sister project links}}. It may be worth checking for that template too. Anomie⚔ 23:18, 10 November 2010 (UTC)[reply]
First of all, thank you for your attention.
- Categories vs. galleries: well, IMHO the link to the category is almost always a better choice than the gallery page. Gallery pages are usually poorly maintained, there are just a few images, and the gallery itself rarely adds any real value. Categories are easier to maintain and to scale up (by adding sub-categories). Moreover, well-written and well-maintained gallery pages are usually already linked from en.wiki... so I suggest preferring categories over galleries (if both are available).
- Commons vs. Sister project links: you are right, Tpbradbury probably removed the link to Commons because of the {{Sister project links}}. However, it should be noted that {{Sister project links}} simply "provides links to the 'Search' page on the various Wikimedia sister projects". That means it does not guarantee that any related content actually exists; it is just a (blind) guess. {{Commons}} and {{Commons cat}}, instead, state that Wikimedia Commons actually has media related to the subject and provide a link to it. This is valuable information.
Basilicofresco (msg) 20:23, 11 November 2010 (UTC)[reply]
- I only asked about the gallery versus category because your function details list checking for galleries first. As for the other, that sounds like a discussion that should be started somewhere else. Anomie⚔ 03:14, 12 November 2010 (UTC)[reply]
Refined proposal
The "Categories vs. galleries" issue can be resolved using {{Commons and category}} (I almost forgot about it). So, here is the proposal:
- If a related category or page can be found on Commons (see Function details above), the bot adds the right template at the top of the External links section.
- If both a category and a page (gallery) exist on Commons, then {{Commons and category}} should always be preferred over {{Commons}}, because gallery pages are usually poorly maintained, there are just a few images, and the gallery itself rarely adds any real value. Categories are easier to maintain and to scale up (by adding sub-categories). Moreover, well-written and well-maintained gallery pages are usually already linked from en.wiki.
- The presence of {{Sister project links}} should not affect the insertion of {{Commons cat}} or {{Commons}}, because {{Sister project links}} simply "provides links to the 'Search' page on the various Wikimedia sister projects". That means it does not guarantee that any related content actually exists; it is just a (blind) guess. {{Commons}} and {{Commons cat}} instead state that Wikimedia Commons actually has media related to the subject and provide a link to it. This is valuable information. It is the difference between the search function and a link.
If this proposal sounds reasonable, please write below: "uhm... sounds reasonable" and sign. ;) Thanks. -- Basilicofresco (msg) 08:45, 21 November 2010 (UTC)[reply]
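If it helps to make the proposal concrete, the selection logic boils down to the sketch below. This is a minimal sketch only: it assumes the gallery/category existence checks from the function details have already run, that the category argument is passed without the "Category:" prefix, and that {{Commons and category}} takes the gallery name first and the category name second; those last two points are assumptions, not verified template documentation.

<syntaxhighlight lang="python">
def choose_commons_template(gallery, category):
    """Pick the template to add; gallery/category are matching Commons titles, or None.

    category is assumed to be passed without the 'Category:' prefix."""
    if gallery and category:
        # Both exist: prefer the combined template over a bare {{Commons}}.
        return '{{Commons and category|%s|%s}}' % (gallery, category)
    if category:
        return '{{Commons category|%s}}' % category
    if gallery:
        return '{{Commons|%s}}' % gallery
    return None  # nothing found; {{Sister project links}} never blocks insertion
</syntaxhighlight>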
- Works for me. BTW, you may want to drop a note on Template talk:Sister project links since your post at Template talk:Commons category doesn't seem to be drawing any response. Anomie⚔ 15:08, 21 November 2010 (UTC)[reply]
Approved. WP:SILENCE seems to apply to the discussions regarding {{Sister project links}} vs {{Commons cat}}. Anomie⚔ 02:38, 1 December 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Taxobot 2
Operator: Smith609 (talk · contribs)
Automatic or Manually assisted: Manually supervised by users
Programming language(s): PHP
Source code available: Available at Google Code
Function overview: This bot will help users to create the back-end templates that support Template:Automatic taxobox. The bot will transform a user's input into the correct syntax, and will suggest input to the user based on existing data in Wikipedia.
Links to relevant discussions (where appropriate): Template_talk:Taxobox#Usability
Edit period(s): When triggered by a user.
Estimated number of pages affected: A couple of pages per user activation; eventually (as use of Automatic Taxobox becomes more widespread) there will be one page per taxon.
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N):
Function details:
This bot can be activated by a user when a new Automatic taxobox is created.
A link will be provided in the template to the HTML bot interface. The user will be asked to provide their username, which will be displayed in the edit summary; only valid usernames will be allowed to use the tool. (This system works well at User:Citation bot.)
- [*] The user clicks the link. The bot requests from the user:
- The taxonomic rank of the taxon
- The taxonomic parent of the taxon
- The scientific name of the taxon (pre-filled to the name of the referring page).
- The bot validates the user's input and presents any possible errors (unrecognized rank? parent lacks WP page?) to the user.
- The user corrects and confirms the input
- The bot converts this input into a new page at Template:Taxonomy/taxon x, formatted per the correct syntax. (If the page already exists the user is presented with an error message.)
- The bot checks to see whether the parent taxon already has a page at Template:Taxonomy/parent taxon.
- If so, the bot's work is done; the new taxon has been connected to the existing tree of life.
- If not, the bot helps the user to create taxonomy for the parent taxon.
- The user is asked to provide the information listed at [*] above.
- If the parent taxon has a WP page, the fields are pre-filled from the article on the parent taxon.
- If the parent taxon lacks a WP page, WikiSpecies is consulted for this information.
- The bot returns to [*], helping the user to create new back-end pages until the taxonomic hierarchy is linked to the existing tree of life.
Whilst the bot physically creates the pages, all the information entered has been manually verified by users and is entered in a rigid format. The bot will never amend existing data.
I propose that during the initial testing period, the only user authorised to activate the bot is myself (Smith609 (talk · contribs)). Once the bot is operating as I expect I suggest allowing other users to use the bot, with the output being scrutinized by myself (and the BRFA team?).
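For readers who prefer code to prose, the create-and-walk-up loop described in the function details looks roughly like the sketch below (the bot itself is PHP; this is just an illustrative Python sketch). ask_user, page_exists, save_page and build_page_text are placeholders for the bot's HTML interface, its existence check, its edit call and the taxonomy-template generator; the exact wikitext of Template:Taxonomy/... pages is defined by the automatic taxobox system, not by this sketch.

<syntaxhighlight lang="python">
def create_taxonomy_chain(taxon, ask_user, page_exists, save_page, build_page_text):
    """Create Template:Taxonomy/<taxon>, then recurse upward until an existing parent is reached."""
    rank, parent, name = ask_user(taxon)            # step [*]: rank, parent, scientific name
    page = 'Template:Taxonomy/' + taxon
    if page_exists(page):
        raise ValueError('%s already exists' % page)    # shown to the user as an error
    save_page(page, build_page_text(rank, parent, name))
    if parent and not page_exists('Template:Taxonomy/' + parent):
        # Parent not yet connected to the tree of life: help the user create it too.
        create_taxonomy_chain(parent, ask_user, page_exists, save_page, build_page_text)
</syntaxhighlight>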
Discussion
- I'd be ok with this. I like the new Automatic taxobox format and I think this will greatly improve the conversion. Two questions related to this request: If the parent taxon does not have a Template:Taxonomy/parent taxon and the bot pre-fills data sourced from the parent taxon's Template:Taxobox on the article page, will there be appropriate instructions and warnings that the pre-filled fields should be checked and not assumed to be correct? I know of many instances where genus and family articles do not agree in all taxonomy fields. Will the bot then also prompt the user to replace the parent taxon's old taxobox with the shiny new automatic taxobox? And one unrelated question to think about for the future: How easy or self-evident will it be to insert a new taxon level within an existing automatic taxobox structure for the uninitiated? Say there's a genus and Wikipedia treats it as only containing species, but there's an accepted infrageneric classification and an editor wants to insert it between the genus and species levels. Would the user just edit an automatic taxobox on a species page to list the parent as the new taxon, thus prompting this bot to kick in and do its magic? Cheers, Rkitko (talk) 22:06, 16 September 2010 (UTC)[reply]
- The warnings that you suggest are definitely a good idea.
- Replacing old taxoboxes with automatic taxoboxes is a great idea, but for simplicity is not covered within this bot request. I will create a separate bot request when I have the time to implement this feature, if it is wanted.
- Modifying automatic taxoboxes is also a little tricky at present; again, this is something that I can submit a bot request for in the future. There are a couple of ways that "easy editing" can be added to the taxobox template and this is something that I intend to discuss once the template is a little more widely used.
- Martin (Smith609 – Talk) 00:06, 17 September 2010 (UTC)[reply]
- I've got a draft version of the bot going at http://toolserver.org/~verisimilus/Bot/taxobot/taxobot.php. Obviously, no edits will be committed until the approval process is complete and I have checked for bugs and added an input-verification system, but this should give interested parties an idea of what to expect. Martin (Smith609 – Talk) 21:45, 17 September 2010 (UTC)[reply]
- Hey Martin, thanks for your replies above. All sounds good to me! I tried out the tool using Stylidium graminifolium as an example. When it pre-filled the data from the taxobox for the parent, it just displayed "S" instead of picking up the section Linearis as the parent. Not a big issue, but might be confusing to some. Any ideas why it did that? Cheers, Rkitko (talk) 21:42, 21 September 2010 (UTC)[reply]
- Fixed – Stylidium sect. Lineares is now returned as the parent. Martin (Smith609 – Talk) 16:49, 22 September 2010 (UTC)[reply]
- Ah, also, running the script on Stylidium sect. Debilia, it chose Stylidium for the pre-filled parent taxon, ignoring Stylidium subg. Tolypangium. I assume it's picking "major" taxa ranks to pre-fill. Not a bad idea, but should be explained. Rkitko (talk) 21:51, 21 September 2010 (UTC)[reply]
- Fixed – now selects "Stylidium subg. Tolypangium" as parent genus. Martin (Smith609 – Talk) 16:39, 22 September 2010 (UTC)[reply]
- Hah, ok, last one: Try running Quercus berberidifolia. It pre-fills the taxon field with just "Quercus" and not the full binomial. It also pre-fills just a "Q" in the parent field. Minor issues, I know, since the function of the bot is not to do everything for you. Just curious. Cheers, Rkitko (talk) 22:09, 21 September 2010 (UTC)[reply]
- Fixed – Now recognizes "Section Quercus" as parent. Martin (Smith609 – Talk) 16:47, 22 September 2010 (UTC)[reply]
Thanks for all these reports. It looks like there was a problem with handling the semi-duplicate data provided in binomial / trinomial parameters, and also with non-alphanumeric characters (e.g. spaces) in taxonomic names. These should be readily fixed; I've made a start but will complete the process anon. Martin (Smith609 – Talk) 14:44, 22 September 2010 (UTC)[reply]
- All these should now be fixed. Thanks for the reports, and do let me know any other unusual cases. Martin (Smith609 – Talk) 16:49, 22 September 2010 (UTC)[reply]
- I have a rare period of free time coming up so am eager to move on with the bot approvals process if possible. If it's possible to approve a trial quickly, or to suggest any necessary amendments, that would be great! Martin (Smith609 – Talk) 16:24, 20 September 2010 (UTC)[reply]
- There don't seem to have been any objections to the principle of the bot's operation; therefore I'm requesting a trial period of fifty edits. Martin (Smith609 – Talk) 22:35, 24 September 2010 (UTC)[reply]
- Looks great as far as finding that parent taxon goes. I did a couple tests using taxa I had created articles for a few years ago (though I don't have any recent ones to try out) and they worked perfectly. Can't wait to get this working...btw, you know the video game Spore's wikia website already HAS a template that does all this automatic taxonomy stuff? Lucky them.... Bob the Wikipedian (talk • contribs) 21:58, 26 September 2010 (UTC)[reply]
Trial
- Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. MBisanz talk 23:09, 30 September 2010 (UTC)[reply]
- Edits underway and available for inspection at Special:Contributions/Taxobot; I've added automatic taxoboxes created via this method at Leptomitus and Vauxia (using novel taxonomic information from an original source) and Bactroceras (using taxonomic information from existing taxoboxes). As always, comments are very welcome! Martin (Smith609 – Talk) 01:01, 1 October 2010 (UTC)[reply]
- WOAH. That template actually looks a bit scary. So the implementation is clear to me-- all one does is place {{automatic taxobox}} on the page and add the fossil range, authority, and subdivisions, as needed. The part that's scaring me, though, is where is the data coming from? Bob the Wikipedian (talk • contribs) 21:43, 2 October 2010 (UTC)[reply]
- Never mind-- I figured it out by reading the documentation for the template. Looks like the bot is doing a good job so far. Bob the Wikipedian (talk • contribs) 22:49, 2 October 2010 (UTC)[reply]
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Wikitanvir (talk · contribs)
Automatic or Manually assisted: Mostly automatic
Programming language(s): Python (pywikipedia)
Source code available: Yes
Function overview: Addition, change, or removal of interwikis.
Links to relevant discussions (where appropriate):
Edit period(s): Daily
Estimated number of pages affected: Few
Exclusion compliant (Y/N): Maybe
Already has a bot flag (Y/N): Yes, in bnwiki.
Function details: Add, remove, or change interwikis.
Discussion
Have you read through the local bot policy? Will the bot be staying out of the template namespace, as required by our policy on interwiki linking bots? Will the bot be running with the new -cleanup option of interwiki.py, which removes only interwikis to nonexistent pages, or with the previous, more aggressive -force option, which removes interwikis to different namespaces as well? - EdoDodo talk 15:05, 28 September 2010 (UTC)[reply]
- Yes, I'm aware of the local bot policy. The bot will mostly run in the main namespace, but sometimes it will run in the category namespace too. It's not specifically made to run in the template namespace, so it won't. And yes, it will run with the -cleanup option, instead of the previous -force option. :) — Tanvir • 18:15, 28 September 2010 (UTC)[reply]
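Roughly, the behavioural difference between the two options being discussed, sketched in Python (a description of the intended effect only, not interwiki.py's actual code):

<syntaxhighlight lang="python">
def interwikis_to_remove(links, cleanup_only=True):
    """links maps a language code to (target_exists, same_namespace) for each interwiki link."""
    remove = []
    for lang, (target_exists, same_namespace) in links.items():
        if not target_exists:
            remove.append(lang)            # -cleanup: drop links to nonexistent pages
        elif not same_namespace and not cleanup_only:
            remove.append(lang)            # -force additionally drops cross-namespace links
    return remove
</syntaxhighlight>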
- Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Good good. Feel free to take the time you need for trial, don't worry about going over the limit too much: WP:NOTBUREAUCRACY. - EdoDodo talk 18:22, 28 September 2010 (UTC)[reply]
- Trial complete. — Tanvir • 15:44, 5 October 2010 (UTC)[reply]
- {{BAG assistance needed}} My bot finished the trial 6 days ago. I'd like to see an outcome. Regards, — Tanvir • 12:10, 11 October 2010 (UTC)[reply]
- Approved. Trial looks okay. Sorry for the delay. - EdoDodo talk 15:44, 14 October 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Richard Melo da Silva (talk · contribs)
Automatic or Manually assisted: Automatic
Programming language(s): Python
Source code available: Standard pywikipedia
Function overview: Add and correct interwikis.
Links to relevant discussions (where appropriate): Authorization to test in the Lusophone Wikipedia
Edit period(s): Continuous
Estimated number of pages affected: 2 per day
Exclusion compliant (Y/N):
Already has a bot flag (Y/N): N
Function details: This robot is intended to correct interwikis.
Discussion
Please create a user page for your bot, that identifies what it does and who operates it, as required by the bot policy. Also, are you aware of our policy on interwiki linking bots, specifically the part that requires you to stay out of the Template namespace? - EdoDodo talk 06:03, 11 September 2010 (UTC)[reply]
Done. Yes, I'm aware of your policy on interwiki linking bots. I made this edit because I knew there was no documentation on any Wikipedia. RmSilva can talk! 00:06, 12 September 2010 (UTC)[reply]
Aside from the actual task of this bot, what about the possibility of confusion with User:CorenSearchBot, which is commonly referred to as CSBot and even uses that name itself? VernoWhitney (talk) 23:55, 23 September 2010 (UTC)[reply]
- I do not know that bot. I created CSBot on 2009-02-18 on the Portuguese Wikipedia, well before editing here. RmSilva can talk! 16:25, 26 September 2010 (UTC)[reply]
- Sorry, that was more a question to the BAG members reviewing this. I just wanted to point out that there may be a possibility of confusion and leave it up to them to decide if it's a problem or not. VernoWhitney (talk) 15:13, 27 September 2010 (UTC)[reply]
- Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Before proceeding with test edits, please make a visible note at the top of the bot's user page (or just below the {{bot}} template) that makes it clear that the bot is unrelated to CorenSearchBot to avoid confusion. Feel free to take however much time you need for trial. - EdoDodo talk 15:14, 28 September 2010 (UTC)[reply]
- If approved, I'll notify Coren in case he wishes to simply let his bot sign as CorenSearchBot, not CSBot. Acather96 (talk) 19:27, 28 October 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} what is the current status of this request? ΔT The only constant 15:08, 16 November 2010 (UTC)[reply]
- I'm here! In these weeks I am somewhat active in all projects, but I will return soon. RmSilva can talk! 12:03, 22 November 2010 (UTC)[reply]
Approved. The bot has slowly made more or less the trial edits (it currently has 44) and they all look okay. Since the task is a standard one and the edits look okay, I won't be overly bureaucratic about it, and I'll just go ahead and approve this. - EdoDodo talk 10:13, 4 December 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Operator: Frozen Wind (talk · contribs)
Automatic or Manually assisted: automatically
Programming language(s): Python
Source code available: pywikipedia with -cleanup
Function overview: Maintain interwikis
Links to relevant discussions (where appropriate):
Edit period(s): forever
Estimated number of pages affected: various
Exclusion compliant (Y/N): unknown
Already has a bot flag (Y/N): N
Function details:
Discussion
Approved for trial (20 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Mr.Z-man 04:11, 26 September 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} Any progress? Anomie⚔ 01:04, 14 October 2010 (UTC)[reply]
- It's finding nothing. I'm changing it to an interwiki bot. Frozen Wind want to be chilly? 22:05, 15 October 2010 (UTC)[reply]
- A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) And? Anomie⚔ 00:52, 11 November 2010 (UTC)[reply]
Request Expired. No response from operator. Anomie⚔ 03:17, 24 November 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Anomie (talk · contribs)
Automatic or Manually assisted: Automatic, unsupervised
Programming language(s): Perl
Source code available: User:AnomieBOT/source/tasks/AccidentalLangLinkFixer.pm
Function overview: Apply the Colon trick when someone forgot it.
Links to relevant discussions (where appropriate):
Edit period(s): Continuous
Estimated number of pages affected: Any pages added to Category:Pages automatically checked for accidental language links
Exclusion compliant (Y/N): Yes
Already has a bot flag (Y/N): Yes
Function details: The bot will monitor the language links and categories for pages added to Category:Pages automatically checked for accidental language links. When either list changes, the bot will apply the "Colon trick" to any category or interlanguage links that seem accidental. See User:AnomieBOT/docs/AccidentalLangLinkFixer for much more detail (that page will also be linked from all edit summaries).
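The bot itself is written in Perl, but the mechanical part of the fix amounts to the rewrite sketched below in Python; the set of prefixes treated as "accidental" is a placeholder here, since in practice it comes from the page's language-link and category changes as described on the docs page.

<syntaxhighlight lang="python">
import re

def apply_colon_trick(wikitext, prefixes):
    """Turn [[prefix:Target]] into [[:prefix:Target]] for the given language codes/namespaces."""
    pattern = re.compile(
        r'\[\[\s*(' + '|'.join(re.escape(p) for p in prefixes) + r')\s*:',
        re.IGNORECASE)
    # Already-escaped links ([[:fr:...]]) are not matched, so they are left alone.
    return pattern.sub(lambda m: '[[:' + m.group(1) + ':', wikitext)

# e.g. apply_colon_trick('See [[fr:Paris]] and [[Category:Cities]].', ['fr', 'Category'])
#      -> 'See [[:fr:Paris]] and [[:Category:Cities]].'
</syntaxhighlight>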
Discussion
See [70] for an example edit in my userspace. Anomie⚔ 23:22, 11 September 2010 (UTC)[reply]
- I have advertised this at WP:VPR. Anomie⚔ 23:36, 11 September 2010 (UTC)[reply]
- I'd approve this for trial, but I'm the one that suggested it, so I'd better recuse. =) –xenotalk 18:12, 13 September 2010 (UTC)[reply]
Approved for trial (15 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Mr.Z-man 04:09, 26 September 2010 (UTC)[reply]
- Trial started. Not sure how long it will take, it depends on population of Category:Pages automatically checked for accidental language links and people making the errors that need fixing. As mentioned above, edit summaries for this task will contain links to User:AnomieBOT/docs/AccidentalLangLinkFixer and so should be easily detectable. The bot will automatically post at User talk:AnomieBOT when the trial edits are complete, and I will update this BRFA. Anomie⚔ 19:46, 26 September 2010 (UTC)[reply]
- {{OperatorAssistanceNeeded}} what is the current status of this request? ΔT The only constant 15:06, 16 November 2010 (UTC)[reply]
- One more edit since the last update, unfortunately in reaction to some vandalism: [78] Seven to go, unless someone decides to approve it before all 15 trial edits are done. Anomie⚔ 18:24, 16 November 2010 (UTC)[reply]
Approved. Close enough, no objections. Mr.Z-man 04:37, 2 January 2011 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Operator: Lightmouse (talk · contribs)
Automatic or Manually assisted: Automatic supervised
Programming language(s): AWB, monobook, vector, manual
Source code available: Source code for monobook or vector is available. Source code for AWB will vary, but versions are often also kept as user pages.
Function overview: Janitorial edits to units
Links to relevant discussions (where appropriate):
This request duplicates the 'units of measure' section of Wikipedia:Bots/Requests for approval/Lightbot 3. That BRFA was very similar to the two previous approvals: Wikipedia:Bots/Requests for approval/Lightbot and Wikipedia:Bots/Requests for approval/Lightbot 2.
Edit period(s): Multiple runs. Often by batch based on preprocessed list of selected target articles.
Estimated number of pages affected: Individual runs of tens, or hundreds, or thousands.
Exclusion compliant (Y/N): Yes, will comply with 'nobots'
Already has a bot flag (Y/N): No
Function details:
Edits will add conversions to the following metric or non-metric units: foot, mile, mm, cm, m, km, plus their squares and cubes.
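To make the scope concrete, the core of such an edit is along these lines. This is an illustrative sketch only; the real rules (squares and cubes, ranges, masking of quotes and existing conversions, and the supervising human) are far more involved.

<syntaxhighlight lang="python">
import re

# Map the unit words in the task scope to {{convert}} unit codes.
UNIT_CODES = {'feet': 'ft', 'foot': 'ft', 'ft': 'ft',
              'miles': 'mi', 'mile': 'mi',
              'mm': 'mm', 'cm': 'cm', 'km': 'km', 'm': 'm'}

NAKED_UNIT = re.compile(r'\b(\d+(?:\.\d+)?)\s*(' + '|'.join(UNIT_CODES) + r')\b')

def add_conversions(wikitext):
    """Wrap plain measurements such as '12 miles' in {{convert|12|mi}}."""
    return NAKED_UNIT.sub(
        lambda m: '{{convert|%s|%s}}' % (m.group(1), UNIT_CODES[m.group(2)]),
        wikitext)
</syntaxhighlight>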
Discussion
- I suppose it doesn't particularly matter if this appears under "Current requests for approval" or "Requests to add a task to an already-approved bot"; but for the record, this bot is presently flagless and blocked indefinitely to enforce Wikipedia:Requests for arbitration/Date delinking#Lightmouse automation. Lightmouse has an amendment before the Arbitration Committee, and the committee has indicated that any amendment is contingent on approval being granted by BAG, so the block and prevailing remedies are not necessarily hurdles with respect to bot approval. See related discussion at Wikipedia:Bots/Requests for approval/Lightbot 4. –xenotalk 19:03, 16 August 2010 (UTC)[reply]
- How will the bot know that it should not modify units which appear within quotations, since there is no rigorous way to identify quotations automatically?
- In the early days of automation, this was a problem for everyone. However, AWB now has the very efficient 'HideMore' method for avoiding templates, images, and quotes. Where Lightbot was updating templates, quotes weren't an issue, so it had the option of running to the full extent of automation. For the addition of conversions it will be run with human supervision. Lightmouse (talk) 08:54, 17 August 2010 (UTC)[reply]
- I cannot believe any software on the planet can automatically detect quotes when, as in Wikipedia, there is no requirement that the quotes be marked up with any particular tags. I am not just concerned about adding conversions, I am concerned with making any change whatsoever to units within quotes. I think you owe us an exact explanation, in plain language, understandable by those who do not write bots, of what kind of fully automatic changes will be made to units. Jc3s5h (talk) 15:14, 17 August 2010 (UTC)[reply]
- Apparently it also looks for tagged quotes and double quotation marks (according to ["mask ... text between two quotation characters"]). A human will still need to detect any remaining quotes in single quotation characters. All conversions will be made with a human watching. There won't be 'fully automatic' changes. Lightmouse (talk) 15:34, 17 August 2010 (UTC)[reply]
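For anyone unfamiliar with the approach, the masking amounts to something like this (a sketch of the general idea, not AWB's implementation; quotes marked only by indentation would still need the supervising human):

<syntaxhighlight lang="python">
import re

QUOTED = re.compile(r'"[^"\n]*"')   # double-quoted spans on a single line

def edit_outside_quotes(text, transform):
    """Apply transform() only to the stretches of text outside quoted spans."""
    pieces, last = [], 0
    for m in QUOTED.finditer(text):
        pieces.append(transform(text[last:m.start()]))
        pieces.append(m.group(0))            # quoted span passed through untouched
        last = m.end()
    pieces.append(transform(text[last:]))
    return ''.join(pieces)
</syntaxhighlight>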
- The heading of this request states "Automatic or Manually assisted: Automatic". Any mention of semi-automatic edits contradicts the heading. I submit this request is malformed and must be repaired before approval can be considered.
- The instructions say that 'Manually assisted' means "User must manually confirm every change". I take that to mean there is no option for auto-save even when the human is watching. It seems to have the effect of nullifying the application. It doesn't have an option for "User must watch changes just in case." If I've misunderstood, then please tell me what a manually-assisted bot can do that a normal editor can't. It might be a useful option. Lightmouse (talk) 16:23, 17 August 2010 (UTC)[reply]
- I think everyone capable of judging whether certain changes carried out by bots are desirable is entitled to understand what proposed bots will do. If the structure of the Requests for approval page inhibits that understanding by not allowing accurate descriptions of bots, the structure should change. Could you state where the "instructions" you referred to are? Jc3s5h (talk) 19:22, 17 August 2010 (UTC)[reply]
- Yes, communication needs to be clear. If this BRFA isn't clear, then we need to clarify. The instructions for how to fill in this form are still at the top of this page. It says "Manually Assisted: User must manually confirm every change"
- I see that the automatic section actually says "Automatic: Specify whether supervised or unsupervised". On that basis I should have said "Automatic supervised". In previous incarnations of Lightbot, it said 'Automatic' because that was the worst case, the Lightbot 4 BRFA was simply a copy of the successful unit components of Lightbot 3 BRFA. All the discussion on Lightbot 4 focussed on the unit list and so I simply copied it again but reduced the unit scope massively. That explanation may not be acceptable to you but that is how it happened. Can you please tell me the difference between "Manually Assisted: User must manually confirm every change" and Not a bot? Lightmouse (talk) 20:12, 17 August 2010 (UTC)[reply]
- Further, I stated my concern about any kind of edit to quotes, and you ignored that concern and just reiterated that conversions will be supervised. I interpret your unwillingness to assure us that the bot will not make any change to any quotation (that is, anything a well-educated human would recognize as a quotation, regardless of markup) as an acknowledgment that fully automatic changes will be made to some quotations. A specific example of such quotations are quotes that are indicated by indention, rather than the <blockquote> element, because of the strange quirks exhibited by the <blockquote> element. Jc3s5h (talk) 15:50, 17 August 2010 (UTC)[reply]
- The phrase 'you ignored' suggests I'm being negative to you. If I misunderstood you, or you misunderstood me, I'm sorry. I took your point that no fully automatic system can detect a quote that has no indication other than indentation. I'm merely emphasising that a human is also in the loop and thus isn't fully automatic (a mode that's more suited to well-defined technical changes to templates). That may not be an answer that will lead to your support, but I said it with good intent. Lightmouse (talk) 16:23, 17 August 2010 (UTC)[reply]
- The fact that there would be no fully automatic edits changes the complexion of the discussion entirely. My main concern with editor-approved changes is that the style and size of the window showing the editor the proposed changes might not provide enough context to know if the change is appropriate or not. Jc3s5h (talk) 19:16, 17 August 2010 (UTC)[reply]
- OK. I think we're now focussing on a key issue. The three options: Automatic unsupervised (not being requested); Automatic supervised (I think this is the closest to what I was requesting); and Manually assisted (I don't understand the difference between this and 'not a bot'). I think the two threads are merging now. Can we continue the debate at the bottom of the page? Lightmouse (talk) 20:17, 17 August 2010 (UTC)[reply]
- How will the bot identify articles where a consensus exists that it would be overly repetitious to provide conversions for every measurement, and that instead provide conversion factors in a footnote (or similar mechanism)? Jc3s5h (talk) 19:12, 16 August 2010 (UTC)[reply]
- In all my time on Wikipedia, this issue has only cropped up a few times. One example related to maritime exclusion zones expressed in nautical miles. Another example related to weapons (old ship guns perhaps) expressed in inches. Those don't apply here because they aren't in the list of units. There is currently a debate going on about tables in US road junction lists. That doesn't apply here because they don't show the unit name in the table anyway so the code won't pick it up. Lightmouse (talk) 08:54, 17 August 2010 (UTC)[reply]
- I interpret this to mean that the bot cannot tell if there is a consensus to limit the number of conversions, that Lightmouse has seen a few instances of this in the past, but by happenstance, those particular articles would not have been modified by the bot. I oppose bots that will ignore the consensus style of an article, even if it does not happen often. Jc3s5h (talk) 15:14, 17 August 2010 (UTC)[reply]
- I don't know how a human can detect what consensus applies to an article. Lightmouse (talk) 16:32, 17 August 2010 (UTC)[reply]
- (ec; not yet considering Jc3s5h's comment)
Also, for the record, <s>the bot is subject to a community ban, which may not necessarily be removed if Arbcom agrees to the BAG approval.</s> (I still believe that to be the case, but I can't find any reference in the archives, so I'll strike my comment.) - That being said, this seems reasonable, provided
- The list of changes to be made is published before or immediately after the any test runs, and any change in the code should be followed by a new test run.
- It's made clear that only simple application of the units should be involved (e.g., no "foot pounds" or "pounds force", and "units" which may occur with a non-unit meaning should only be run in semi-automated mode)
- An off switch should be provided for non-admins, in case the bot runs wild, as previous bots of his have done.
- — Arthur Rubin (talk) 19:17, 16 August 2010 (UTC)[reply]
- Could you link the community ban? –xenotalk 19:21, 16 August 2010 (UTC)[reply]
- I can't find it in a fairly complex search of AN*, so I'll have to withdraw the comment. It won't be repeated unless I can find the link. Perhaps it was during the time there was a separate Community Ban forum? — Arthur Rubin (talk) 20:26, 16 August 2010 (UTC)[reply]
- WP:CSN? [is a subpage of WP:AN, so presumably would've been caught in a prefix search] –xenotalk 20:27, 16 August 2010 (UTC)[reply]
- I can't find it in a fairly complex search of AN*, so I'll have to withdraw the comment. It won't be repeated unless I can find the link. Perhaps it was during the time there was a separate Community Ban forum? — Arthur Rubin (talk) 20:26, 16 August 2010 (UTC)[reply]
- Could you link the community ban? –xenotalk 19:21, 16 August 2010 (UTC)[reply]
- On second thought, by point 1 above, I mean the full list of transformations to be performed by the bot, in a form similar to the most detailed form presented in Wikipedia:Bots/Requests for approval/Lightbot 4 (now withdrawn). — Arthur Rubin (talk) 08:40, 17 August 2010 (UTC)[reply]
- I request a copy of the AWB source code. My request is aimed primarily at learning more about AWB. Depending on how successful I am at understanding it, I might or might not make comments on the function of the bot based on source code. Jc3s5h (talk) 15:56, 17 August 2010 (UTC)[reply]
- I haven't written the code yet. I'm glad I didn't, because I've seen so many changes being discussed over the last month or so. And I suspect that you won't want to look at [[84]], which I will be using to plagiarise. Remember that this doesn't just depend on code; several contributors appear to be unaware of target list processing, which is almost equally important. If you want to learn about AWB, you may wish to look at wp:awb. I still think it's easier to demonstrate maintaining/converting units than to explain it. Lightmouse (talk) 16:47, 17 August 2010 (UTC)[reply]
I've done a search of the Wikipedia database and identified 8 out of 3,385,487 articles that contain 'feet' or 'ft' between single quotes (about 2 per million). These articles can be modified or put on a whitelist. I hope that helps. Lightmouse (talk) 19:44, 18 August 2010 (UTC)[reply]
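For anyone curious how such a pre-filter is built, it is essentially a scan over a database dump. A crude sketch follows; the dump filename and the single-quote heuristic are assumptions, and wiki italics markup would produce false positives that a human then weeds out.

<syntaxhighlight lang="python">
import bz2
import re

QUOTED_FEET = re.compile(r"'[^'\n]*\b(?:feet|ft)\b[^'\n]*'")

def pages_with_quoted_feet(dump_path='enwiki-latest-pages-articles.xml.bz2'):
    """Yield titles of pages whose wikitext has 'feet' or 'ft' between single quotes."""
    title, reported = None, False
    with bz2.open(dump_path, mode='rt', encoding='utf-8') as dump:
        for line in dump:
            if '<title>' in line:
                title = line.split('<title>')[1].split('</title>')[0]
                reported = False
            elif not reported and QUOTED_FEET.search(line):
                reported = True
                yield title
</syntaxhighlight>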
Query and suggestion
I have followed this page and the previous Lightbot 4 application. While the Arbitrators have said they’re willing to give the applicant another go at automation, by contrast, what I see here is an apparent presumption of guilt, an unwillingness to afford the flexibility of human input that is often central to good automation on WP—in this case, for dealing with the subtle and complex matters surrounding units of measurement. Such flexibility was given to the applicant until last year; it was largely successful, and enabled him to engage with the community and with individual users on many issues that would otherwise have remained dormant.
The application is for a time-limited, supervised trial. Lightmouse seems to have bent over backwards to accommodate concerns and to gain the trust of members, after the Arbitrators gave in-principle endorsement to the resumption of his work. The process has been going around in circles for many weeks. But the applicant is receiving a seemingly endless line of questioning in this BAG application that appears to seek ever more detail (such as comprehensive lists of units) before the code is even written or trials started; ironically, such questioning does not appear to be accompanied by any firm idea about the role of such detail in the application. While it is part of BAG’s role to probe applicants, this strategy doesn't seem to be appropriate for the nature of the task that Lightmouse is applying to conduct as a trial. WP is riddled with fiddly little issues concerning the expression of units and conversions. Most of them go undiscussed, and remain in text in inconsistent or illogical forms. Many of them could and probably should be taken to WT:MOSNUM for discussion in the wider community. I suggest that Lightmouse is ideally placed, in running a trial, to identify some of these issues, using his considerable experience to refine both the social and technical aspects of unit editing. It is through such operation that issues might be discussed openly.
BAG should either say no or take the ball that Arbcom has passed it and approve a trial. It is not possible to assess the operation without a trial, so why not get on with it? If there is still concern, BAG might consider a shorter trial than the three months, with reportage of any issues at any time. But every indication is that the trial will be a valuable contribution to the project; I ask you to peruse, for example, a recent interaction about title consistency on LM’s talk page, to get a sense of his dedication to working through unforeseen and difficult issues with other editors. Tony (talk) 08:51, 24 August 2010 (UTC)[reply]
- Perhaps it is time for a trial; but the code and the list of transformations must be published (by Lightmouse) before the run; and reported errors must be corrected or consensus that they are not errors obtained before additional tests. — Arthur Rubin (talk) 15:03, 24 August 2010 (UTC)[reply]
- the Arbitrators gave in-principle endorsement to the resumption of his work - Not really. Kirill specifically stated "I would like to see a current statement from BAG indicating specifically which functions you will be performing" (emphasis mine). Most other arbitrators agreed with him. If anything, ArbCom has mandated thorough review and specific details before the request is approved and the restriction lifted. And if the code isn't finished, a trial would be premature for all involved. Mr.Z-man 15:36, 24 August 2010 (UTC)[reply]
- Why is it so difficult getting somebody rehabilitated??? There seems to be so little trust and good faith. That, with the perennial drama of conflict, it's no wonder editors leave... Ohconfucius ¡digame! 15:45, 24 August 2010 (UTC)[reply]
- As I see it, there are many issues in Lightbot's previous incarnations, among which are:
- Misunderstanding of his mandate. (Partially BAG's fault, as they did approve the absurd "make changes in date formats".)
- Bad coding, leading to the bot doing something he didn't intend.
- And failure to recognize that, even when pointed out to him. (This may have partially resulted from main point 3, below, which is not a problem, here.)
- Failure to recognize that a consensus had not yet been obtained for his actions, in spite of BAG approval.
- I don't see #3 as a problem here (except that he doesn't seem to note that quotes are not necessarily bounded by quotation marks.), but none of these require an assumption of bad faith, only of misunderstanding. "Rehabilitation" assumes that he did something wrong, and is willing to work correctly in the future. These issues deal with mistakes, and, even in good faith, we need to establish clearly that he knows what he's doing. — Arthur Rubin (talk) 16:37, 24 August 2010 (UTC)[reply]
- As I see it, there are many issues in Lightbot's previous incarnations, among which are:
{{BAGAssistanceNeeded}} We've been discussing this for 6 weeks now. Units can be maintained/converted using supervised automation; it's been done successfully on thousands of small pieces of text throughout Wikipedia. If there isn't enough evidence already, then a trial run will provide more. If BAG has specific questions, I'd be happy to respond to them. The janitorial conversion and maintenance of units of measure is tedious by hand. It's an ideal task for automation using unremarkable and proven methods, e.g. regex and target article list filtering. It would help greatly if BAG allowed us to move forward to demonstration by example, i.e. the supervised trial stage. Lightmouse (talk) 16:55, 24 August 2010 (UTC)[reply]
- Recused MBisanz talk 07:39, 27 August 2010 (UTC)[reply]
- Propose a 50 edit trial. If there are resolvable problems, we can have another trial. If there are unresolvable problems we can say "no". If there are no problems but people still have concerns we can have a 100 edit trial. Rich Farmbrough, 16:51, 27 August 2010 (UTC).[reply]
- I'm a bit late to this discussion, but I oppose any automated addition of unit conversions to articles. A number of recent discussions have strongly indicated that there is no longer a consensus for the MOS guideline on units as it currently exists. Given the lack of consensus, we should certainly not be permitting anyone to make such edits by bot. Gatoclass (talk) 12:07, 30 August 2010 (UTC)[reply]
- Sorry? Can you provide details? Which discussions, which consensus, and which aspecdts of the "MOS guideline on units". First I've heard of this. Tony (talk) 14:22, 30 August 2010 (UTC)[reply]
- I'm a bit late to this discussion, but I oppose any automated addition of unit conversions to articles. A number of recent discussions have strongly indicated that there is no longer a consensus for the MOS guideline on units as it currently exists. Given the lack of consensus, we should certainly not be permitting anyone to make such edits by bot. Gatoclass (talk) 12:07, 30 August 2010 (UTC)[reply]
- Okay, here are a couple of links to previous discussions, there may have been more but I don't remember now where they occurred. Here's one discussion regarding precedence of units, and here's another concerning linked names. It seems to me at the least that the issues surrounding unit conversion are complex enough to make them unsuitable for bot automation. Gatoclass (talk) 04:33, 31 August 2010 (UTC)[reply]
There's debate in those four pages, but as usual with Wikipedia debates it's difficult to draw explicit conclusions. If conclusions have been documented somewhere, it might be useful to read them to see how they apply to this application. We've been discussing theory for weeks now without example edits.
Last week I made a request for BAG input, so I hope it's ok to make another.
Formal request for BAG input As Rich Farmbrough suggests, I propose a 50 edit trial. If there are resolvable problems, we can have another trial. If there are unresolvable problems we can say "no". If there are no problems but people still have concerns we can have a 100 edit trial. Lightmouse (talk) 10:43, 31 August 2010 (UTC)[reply]
- I would like to see the source code available 48 hours before the trial takes place, together with a description of how selection of the article list or category will work in conjunction with the source code to minimize errors. Jc3s5h (talk) 11:17, 31 August 2010 (UTC)[reply]
- A test batch of 50 represents 1/60,000th of the entire Wiki article population. Should anything go wrong, the risks are minimal. There is always the revert button. I am concerned that, with the above request, if the selection criteria are too narrow, the sample may be unrepresentative of the population of articles in mainspace. This would consequently risk greater potential disruption when a larger trial run is authorised because problems are not faced early on. Ohconfucius ¡digame! 11:23, 31 August 2010 (UTC)[reply]
- Ohconfucius, if the explanation is "the script is robust enough that it will work well on any article", that's fine. If the script has weaknesses that must be overcome by careful selection of the articles processed, that needs to be explained. Jc3s5h (talk) 12:07, 31 August 2010 (UTC)[reply]
- Hey, pardon me, but it seems that you were the one implying a carefully selected list was needed. --Ohconfucius ¡digame! 13:01, 31 August 2010 (UTC)[reply]
- In the past the logic of Lightmouse's scripts would not be sophisticated enough to perform appropriate actions on any random article, but if the script were only allowed to process a carefully selected list of articles, then the weaknesses of the script could be averted. However, Lightmouse typically would explain how the script worked, but didn't explain his strategy in composing the list of articles to be processed, so the script looked like it would do bad things. So I am saying that if the script isn't robust enough to deal correctly with most random articles (and relying on his supervision of each edit to catch the ones that fall through the cracks) then the article selection strategy must be explained.
- Given Lightmouse's customary way of working, I think the assumption must be the script WILL contain weaknesses that must be overcome by article selection, unless Lightmouse states otherwise. Jc3s5h (talk) 13:15, 31 August 2010 (UTC)[reply]
- To err is human. Yet you seem to be either setting a higher standard for Lightmouse or assuming a lower level of competence. Either way, it's not 'charitable'. Also, how about some examples where things have gone wrong like you said, so that we are all clear what specifics you are referring to...? --Ohconfucius ¡digame! 13:51, 31 August 2010 (UTC)[reply]
- You haven't answered my question about what you think the risks are in giving the go-ahead on a batch of 50 articles. Pray tell... how would you select the 50??? Ohconfucius ¡digame! 16:50, 31 August 2010 (UTC)[reply]
- Given Lightmouse's customary way of working, I think the assumption must be the script WILL contain weaknesses that must be overcome by article selection, unless Lightmouse states otherwise. Jc3s5h (talk) 13:15, 31 August 2010 (UTC)[reply]
- "Pray tell... how would you select the 50???" The script and the selection process go together; they must be designed in concert. Both must be made available so after an apparently successful trial, we will be better able to judge if it is really successful, or if it was just lucky and there are other articles around that would have failed. As for my personal preferences, I'd prefer to bug the Congress critters who take campaign contributions from companies who find it's cheaper to
<s>bribe</s> provide political support for Congress critters than to modify their equipment to use SI. Then send the tapes to the Washington Post or 60 Minutes and wait for the blogosphere to demand Andy Rooney's birth certificate. Jc3s5h (talk) 17:56, 31 August 2010 (UTC)[reply]
- I don't get this American political talk. Sure, the script is written with a job in mind. Its objective can be to change American spellings to British, or it can be to add {{convert}} templates to articles where there are 'naked' units of measure such as feet, miles, litres, hectares. But you want this palaver for a test of 50 articles??? shome mishtake shurely (sic). --Ohconfucius ¡digame! 02:24, 1 September 2010 (UTC)[reply]
- I don't get this American political talk. Sure, the script is written with a job in mind. It's objective can be to change American spellings to British, or it can be to add
- "Pray tell... how would you select the 50???" The script and the selection process go together; they must be designed in concert. Both must be made available so after an apparently successful trial, we will be better able to judge if it is really successful, or if it was just lucky and there are other articles around that would have failed. As for my personal preferences, I'd prefer to bug the Congress critters who take campaign contributions from companies who find it's cheaper to
We've been discussing this for six weeks now and I've been at pains to respond to detailed requests during this extended period; six weeks of talk seems quite enough for a 50-edit trial. I hope you'll forgive me if I now focus on responding to BAG. If BAG wants a trial to proceed, I'd be happy to develop and publish the code at their request. Lightmouse (talk) 12:01, 31 August 2010 (UTC)[reply]
A quick note
I should point out that the committee is expecting that a normal review of the proposal take place, but I see here scrutiny that is both unusual and difficult to justify given the relatively limited scope of testing at this stage of a bot request. In particular, some of the demands placed on Lightmouse appear to be unreasonable and designed to derail the process rather than borne out of a genuine concern for the technical accuracy of the proposed bot.
One of the basic principles on which Wikipedia operates is that of Assuming Good Faith; while Lightmouse was placed under a restriction because they had been (in the Committee's opinion) careless with automated editing in the past, they are now given an opportunity to resume their well intended contributions— and arbitrators will not look kindly on bad faith or attempts to sabotage the process. — Coren (talk) 23:19, 3 September 2010 (UTC)[reply]
- I'm willing to assume good faith. The fact that Lightmouse failed to understand the scope of his previous bots (or, to be more precise, that his bots were accused of exceeding both the stated scope and common sense) suggests we should be more careful in describing the scope, so that disagreement would be less likely. /4's proposed scope clearly exceeded common sense; but there could still be reasonable argument about whether an edit is in the proposed scope here. — Arthur Rubin (talk) 03:09, 4 September 2010 (UTC)[reply]
I think I will reiterate one point from above. The mere fact that a 50 article trial is approved and successful does not bind BAG to approve the BRFA (although BAG would normally do so). The question of which articles will be selected is not moot, for example I would expect astronomy articles to be skipped. However a test, successful or not does move things forward towards the eventual acceptance or refusal of this BRFA. Rich Farmbrough, 03:47, 7 September 2010 (UTC).[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Mr.Z-man 04:08, 26 September 2010 (UTC)[reply]
Trial complete. Done. Lightmouse (talk) 10:14, 26 September 2010 (UTC)[reply]
- I've looked through the edits and they seem to be good. Can you maybe comment about how these 50 articles were selected? Are they simply the first 50 from your master list for the final bot run? If so, could you give a comment about which articles are selected (or not selected) for this list? AKAF (talk) 08:59, 1 October 2010 (UTC)[reply]
There was more than one list. The list creation task involved trying to ensure examples that demonstrated the range of units. Each list that was created was then processed to eliminate some articles from the list (e.g. articles that define units of measure). I then ran AWB. As far as I recall, for each list, it was the first few items. I'm now making a formal request to either run the bot or to run a larger trial (e.g. 500 articles). Lightmouse (talk) 18:13, 13 October 2010 (UTC)[reply]
- Seems a reasonable request given how the first trial went. Approved for extended trial (200 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.. - Jarry1250 [Who? Discuss.] 20:43, 18 December 2010 (UTC)[reply]
Trial complete. Done. Lightmouse (talk) 11:39, 24 December 2010 (UTC)[reply]
- I had a flick through a few and they seem good, but we can wait for feedback (ie. complaints), if any. - Jarry1250 [Who? Discuss.] 15:43, 24 December 2010 (UTC)[reply]
- Okay, no complaints, and it is clearly in the operator's best interests to be careful with this one. Approved. - Jarry1250 [Who? Discuss.] 16:52, 1 January 2011 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Superceded by Lightbot 13. Withdrawn Lightmouse (talk) 17:10, 29 August 2011 (UTC)[reply]
- Withdrawn by operator. Lightmouse (talk) 17:13, 29 August 2011 (UTC)[reply]
Template:BRFA Template:BRFA Template:BRFA Template:BRFA
Bots that have completed the trial period
Template:BRFA Template:BRFA Wikipedia:Bots/Requests for approval/Approved
Denied requests
Bots that have been denied for operations will be listed here for informational purposes for at least 7 days before being archived. No other action is required for these bots. Older requests can be found in the Archive. Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA
Expired/withdrawn requests
These requests have either expired, as information required by the operator was not provided, or been withdrawn. These tasks are not authorized to run, but such lack of authorization does not necessarily follow from a finding as to merit. A bot that, having been approved for testing, was not tested by an editor, or one for which the results of testing were not posted, for example, would appear here. Bot requests should not be placed here if there is an active discussion ongoing above. Operators whose requests have expired may reactivate their requests at anytime. The following list shows recent requests (if any) that have expired, listed here for informational purposes for at least 7 days before being archived. Older requests can be found in the respective archives: Expired, Withdrawn. Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA Template:BRFA