Wikipedia talk:Bot policy



Non-Trivial Bots?

Hi, have any of you thought of a framework/policy under which non-trivial bots could be written for Wikipedia? At first glance, most of the bots I have seen around are still quite simple and pretty effort-intensive in terms of the precious human hours required to babysit them. If I am wrong, please accept my apologies and point me to a non-trivial bot. Else, could you please post your ideas here? My first thought is to use the lexicon of WordNet to automate a few things that still need much human effort. For instance, I saw that User:RussBot still requires its owner to make disambiguation decisions. But I think about 90% of the decisions on linking French to France vs. French people could be automated with a simple expert system that would decide whether the page is about a person (easy to decide if there is a category anyway, but there can be other rules) and then look up word frequency co-occurrences via WordNet and make a decision. In general, this type of expert system framework could then be offered to users at a much later stage, say 2 years, after testing. In a nutshell (pun intended) it would be a simple expert system shell that could be used to write useful bots. My driving thought is that "Wikipedia seems ever so effort-intensive" and could use some more automation. After all, this is/was the computer age until a few years ago. The key item that would help, of course, would also be an API to a suitable high-level language well above Perl or Python, but to begin with one may have to just use those. I had a few Perl-based routines that did reasoning a few years ago, and could try to find them, but before that it would be useful to hear what the group here thinks. And from a practical standpoint, it would be of tremendous algorithmic help to have access to a page-visit affinity table of some type. Of course, at the page granularity level this would be expensive, but one could use multi-categories for this, just as most supermarkets do for their semi-item-level affinity tables. Do any of you know if such a table exists within Wikipedia, and if so, whether it can be accessed? Anyway, your suggestions will be appreciated. History2007 (talk) 03:50, 15 January 2008 (UTC)[reply]

The problem with this sort of thing is that such a bot would always make mistakes. Humans that make mistakes are tolerated, but bots aren't; they're expected to be perfect and a lot of fuss is made when they aren't. So only tasks that can actually be properly automated are automated – and these very rarely involve manipulation of actual content – Gurch 01:02, 16 January 2008 (UTC)[reply]

That is a social-impact issue, and has been addressed in many settings. You may be interested to know that many financial portfolios are arranged automatically, yet the results are given to customers a few days later, so it does not look like a computer did it. And these are big-time firms – whose names shall not be mentioned. The way those problems are overcome is that, for a somewhat lengthy trial period, the system is supervised, and once it is as good as a human it is allowed to make small changes. In any case, to begin with these non-trivial bots could find things that people could not. I think computers are here (for better or worse), so we might as well use them. As it is, anyone with the know-how could actually write a program right now that modifies content, and would not need permission to do it. All they need to do is log in, let it screen-scrape the content and feed the HTTP tokens back in for edits. It does not need to be declared as a bot. A few years ago, my company did a few programs like that and they worked fine in finding errors in forms that people had filled in. It would have taken humans MUCH longer to check all those forms. And for cleaning up multi-redirects, it would work as well as a human. As for adding "See also" items, that could also be done very nicely: if there is a "See also" section, it will add as the last item; else it will try to place it after the references; if there is no references section, it will do nothing. Then more complicated tasks may come in. One would need a Turing test to know who did those edits.... But before any of this is even attempted, I would like to get feedback, suggestions and agreement from whoever sets these policies. In any case, my current thought is to start the design of a little language called "WikiScript" that allows non-programmer users to write better bots. It would include Wikipedia-specific elements so you do not have to rely on general-purpose languages, and would avoid iterative constructs. But it would be made available only to users who are careful with it. If you like, I will let you know as we go along. Anyway, do you know if Wikipedia keeps a category-based visit affinity table? That would be useful to know. It would probably say a lot about users anyway. Regards History2007 (talk) 04:07, 16 January 2008 (UTC)[reply]

It would probably help if I knew what on earth an "affinity table" was; the article doesn't seem related. If it's something to do with page views, though, we struggle to collect statistics on those at all; there are so many of them (two billion, yes, billion, per day) that you're out of luck – Gurch 04:24, 16 January 2008 (UTC)[reply]

I am sorry, I should have defined that better. I should probably also go and edit the wiki pages on those concepts, for under Market basket there is a mention of it, but not that clearly. In data analysis the term refers to how items are viewed or purchased together. E.g. suppose you have 26 pages {A, B, C, D, ..., Z}; then you get a 26-by-26 table where each entry such as DM tells you the number of times that pages D and M are viewed by the same user. Retailers love to do that for cross-selling on products, e.g. they love to know that people who buy ski hats also tend to buy ski gloves. Website visit-affinity tables tell websites about their visitors, etc. So the table would have categories as rows and columns and numbers as entries. It would be too expensive to keep that table at the page level, I guess, but for key pages something may be done. There are standard computing approaches for dealing with these things. This type of affinity information may help determine if a link "may" be useful between categories, and then suggest a link between some pages. But I am getting ahead of the game here. In the next few days I will edit the relevant Wikipedia pages to add these things. I was surprised that the page on Market basket did not have it, except for a mention of the forever-repeated beer and diapers story. The page Market basket analysis was empty and needs to be filled in. I will try to clean those pages up in a few days, but it will probably take at least a week to do it right. The page on Association rule learning actually has errors in it and also needs help, but that is another story. Anyway, is there a page that describes the kind of statistics that Wikipedia keeps? Thanks History2007 (talk) 07:51, 16 January 2008 (UTC)[reply]
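To make the idea concrete, here is a minimal sketch (purely illustrative, not anything Wikipedia runs) of how such a co-occurrence table could be accumulated from per-user viewing sessions; the session data and names used are hypothetical.

```python
# Illustrative sketch: build a page (or category) co-occurrence "affinity"
# table from per-user viewing sessions. The input format is hypothetical.
from collections import Counter
from itertools import combinations

def build_affinity_table(sessions):
    """sessions: iterable of sets of page names viewed by one user."""
    affinity = Counter()
    for pages in sessions:
        # count every unordered pair of pages seen by the same user
        for a, b in combinations(sorted(pages), 2):
            affinity[(a, b)] += 1
    return affinity

sessions = [
    {"France", "French people", "Paris"},
    {"France", "Paris"},
    {"France", "French people"},
]
table = build_affinity_table(sessions)
print(table[("France", "Paris")])           # 2
print(table[("France", "French people")])   # 2
```

At category granularity the same structure stays sparse and small enough to keep in memory, which is essentially the sampled, approximate table described above.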

You could start with Special:Statistics. Gimmetrow 07:55, 16 January 2008 (UTC)[reply]
You realize we have 6,828,030 articles? Maintaining a 46,621,993,680,900-cell table, and updating it 20,000 times a second, is very much non-trivial, and I'm not sure the benefit would be worth the enormous investment in resources required – Gurch 08:07, 16 January 2008 (UTC)[reply]

I specifically said that a page-level table would be "too expensive". And the resulting table would be very, very sparse. In practice these things are done by sampling, so one does not keep track of every visitor and the table is not updated all the time. It does not have to be exact; it is intended as a rough guide. The method is to sample first, learn which top categories deserve attention and then only store those that are relevant. My guess is that given 2 million articles, one can get away with an initial table of 200,000 entries for the top categories, then refine from there. And the table gets updated once in a while. It does not need to be exact. Given all the words that get typed into all these "talk pages", the disk resources required would be a fraction of what users' conversations with each other take up. I noticed that Alexa does analysis for Wikipedia, so they will give you a quote for doing it if you ask them! But the Special:Statistics page had too little info. History2007 (talk) 08:34, 16 January 2008 (UTC)[reply]

Anonymized page view data is available in raw form here. If you want to do something with it, you are welcome to try. Note the file sizes; a single hour of page view data, compressed, comes to 20 MB. Note that these files contain data for all Wikimedia projects – Gurch 10:15, 16 January 2008 (UTC)[reply]
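For anyone who wants to experiment with those files, below is a rough sketch of tallying one project's hourly view counts. It assumes a whitespace-separated layout of project code, page title, request count and bytes per line (the filename shown is hypothetical); check the actual dump files before relying on this.

```python
# Sketch: sum per-page request counts for one project from a compressed
# hourly page-view file, assuming lines of the form
# "project page_title request_count bytes_transferred".
import gzip
from collections import Counter

def top_pages(path, project="en", limit=10):
    views = Counter()
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue
            proj, title, count, _size = parts
            if proj == project:
                views[title] += int(count)
    return views.most_common(limit)

# print(top_pages("pagecounts-20080116-100000.gz"))  # hypothetical filename
```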

Wow! That was a lot of data. So you do keep these things around. But given that these are at the page level, it may be more than I am ready to take on at the moment without a 10 processor SUN server on my desk! By the way, I do not know how I forgot to mention this fact, but the MOST widely known example of an affinity table is Amazon.com's use of "customers who bought book A also bought book B". I guess Amazon has close to a million books or so (I am not sure) but they probably keep a very sparse table, and it does not need to be exact. So at some future point a similar feature for Wikipedia may be useful anyway. History2007 (talk) 11:35, 16 January 2008 (UTC)[reply]

By the way, that would be a good example of how an online encyclopedia can do things that a printed one can never do. For it is almost impossible to know the affinity of topics within a printed encyclopedia, but with Wikipedia the computer can do that. And it will probably be interesting. History2007 (talk) 11:49, 16 January 2008 (UTC)[reply]

Heh, that's an interesting idea. Although that Amazon feature is notorious for making rather odd suggestions. I can see it now: "People who read History of Portugal were also interested in Star Trek and List of Pokémon" :) – Gurch 00:36, 17 January 2008 (UTC)[reply]

Yes, that is true. On the other hand, at times the heart of novelty and being interesting is "an unusual thought". Even if Amazon's suggestion is off once in a while, there is no major harm done except for an extra mouse click. That is how interesting ideas come about in many cases – an odd idea that leads to A, that leads to B, that leads to C.... What Amazon does not do (I think) is ask users for feedback on the usefulness of that link. Given that Wikipedia users are much more "responsive", let us say, the suggestions can over time be scored on a 1-10 scale, and those that are rejected will just be given such a low weight in the affinity table that they no longer show up. In fact, at the beginning you probably thought my idea of an affinity table for Wikipedia was odd, but maybe it will be useful. Personally, as a start, I would love a feature that would do a "semi-random link" from a page. A random link is usually a waste of time, but if I am looking at a page about Portugal, since you mentioned it, it would be nice to click on "random Portugal" and get a link that, for instance, tells me about wine-bottle cork production in Portugal – someone once told me that was a big industry there somehow. Anyway, I just did a first cut of the page on Market basket analysis but did not write about algorithms, computational costs, etc. yet. History2007 (talk) 03:54, 17 January 2008 (UTC)[reply]

Probably the best way to do that would be to recursively generate a list of the contents of Category:Portugal and its subcategories, and then jump to a random article in that list. Of course not every article has its own category; a similar thing could be done with the links on a page, but that would tend to lead to only tangentially-related things – Gurch 05:48, 17 January 2008 (UTC)[reply]
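As an illustration of that approach, here is a sketch using the MediaWiki API's categorymembers list. It only walks one level of subcategories and leaves out the continuation handling, rate limiting and error handling a real bot would need.

```python
# Sketch: pick a random article from Category:Portugal and (one level of)
# its subcategories via the MediaWiki API.
import random
import requests

API = "https://en.wikipedia.org/w/api.php"

def category_articles(category, depth=1):
    pages, todo = set(), [(category, depth)]
    while todo:
        cat, d = todo.pop()
        r = requests.get(API, params={
            "action": "query", "list": "categorymembers",
            "cmtitle": cat, "cmlimit": "500",
            "cmtype": "page|subcat", "format": "json",
        }, headers={"User-Agent": "category-random-sketch/0.1"}).json()
        for m in r["query"]["categorymembers"]:
            if m["ns"] == 14 and d > 0:   # namespace 14 = Category
                todo.append((m["title"], d - 1))
            elif m["ns"] == 0:            # namespace 0 = article
                pages.add(m["title"])
    return pages

articles = category_articles("Category:Portugal")
if articles:
    print(random.choice(sorted(articles)))
```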

Bots - requiring approval

I'm requesting clarification on the Bot policy.

Does the Bot policy apply to read-only bots (bots that only read an article, talk page, or noticeboard)?

Does the Bot policy only apply to editing bots?

Does the Bot policy only apply to automatic editing bots (bots that do not require a specific 'Do you want to make this change? [y][n]' keystroke from the editor)? --Lemmey (talk) 20:32, 22 February 2008 (UTC)[reply]
Read-only bots do not need approval, but if they're going to be doing a LOT of reads, then they should generally work offline from a database dump. Scripts that do not have a specific y/n approval for each edit need approval on the English wikipedia. Scripts that do include a specific y/n approval for each edit do not, at present, *require* approval, but if they're going to do a lot of edits it's probably a good idea to have BAG vet them. Otherwise, they are liable to get blocked. See also Wikipedia:Bot_owners'_noticeboard#-BOT_Process. Gimmetrow 20:47, 22 February 2008 (UTC)[reply]

Do I need a request for approval?

I have a question about whether I need an RFA, so I can't really ask it on the RFA page. I am one of the maintainers of Metacity. It is released using a release script. Wikipedia contains Template:Latest stable release/Metacity and Template:Latest preview release/Metacity. I could add a section to the script which updated these templates. It would make, on average, one edit per week. Would that need to operate using a bot account? If I gave copies of the script to other projects, would they need their own bot accounts? Marnanel (talk) 15:57, 28 February 2008 (UTC)[reply]
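For illustration only (this is not the actual Metacity release script), such a step might look roughly like the sketch below, using the third-party mwclient library. The account name, password and the exact wikitext the template expects are all assumptions.

```python
# Sketch: push a new version number to the release template after a release.
import mwclient

def update_release_template(version, release_date):
    site = mwclient.Site("en.wikipedia.org")
    site.login("ExampleReleaseAccount", "example-password")  # hypothetical account
    page = site.pages["Template:Latest stable release/Metacity"]
    # Placeholder wikitext; the real template layout may differ.
    text = "%s (released %s)" % (version, release_date)
    page.save(text, summary="Update latest stable release to " + version)

# update_release_template("2.22.0", "2008-03-10")
```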

You don't need a bot account to make two edits per week - just have the script use your ordinary username and password. By the way, RFA is a different process, Wikipedia:Requests for adminship. — Carl (CBM · talk) 16:00, 28 February 2008 (UTC)[reply]
Oops. *changes title*. Thanks. Marnanel (talk) 16:02, 28 February 2008 (UTC)[reply]

nobots


See below for updated proposal

I believe the policy should be amended to require all bots to honor {{nobots}} (with exceptions made on a case by case basis, but user talk pages pretty much never being given an exception). Are there any objections? —Locke Coletc 02:26, 9 March 2008 (UTC)[reply]

yes, this has always been an opt-in for bot operators. there is no reason to force it upon operators. if you ask most will be willing. but templates and bots do not mix well. βcommand 02:30, 9 March 2008 (UTC)[reply]
What is the reason to not force it on operators? If exceptions are granted, I don't see the problem with the existing system other than it being another criterion to consider during bot approval. —Locke Coletc 02:40, 9 March 2008 (UTC)[reply]
A lot of bots do process work which really ought to ignore nobots, and I think that's more the norm. BAG can require it on a case-by-case basis for tasks which ought to allow for an opt-out, such as delivering a project newsletter to its members. Gimmetrow 02:35, 9 March 2008 (UTC)[reply]
Really? Can you list the bots which should ignore nobots and list the ones that shouldn't? It's not that I don't believe you, I just find that hard to believe. And where's the harm in this if it's still possible to allow exceptions (this just forces the matter to be brought up during approval, basically)? —Locke Coletc 02:40, 9 March 2008 (UTC)[reply]
Let's flip this on its head: if you want to force it on a bot, do it at the BRFA. βcommand 02:41, 9 March 2008 (UTC)[reply]
I'm not seeing the difference: if it is mandated in the policy (again, with the possibility of exceptions), what's the problem? —Locke Coletc 02:49, 9 March 2008 (UTC)[reply]
Policy mandates the norm, not the exception. Most bots, like interwiki, category, template and image deletion, talk page template, and antivandalism bots should ignore nobots. Gimmetrow 02:56, 9 March 2008 (UTC)[reply]
Then word the requirement such that it only applies to user talk page notifications. That would be the norm, I would expect. —Locke Coletc 03:00, 9 March 2008 (UTC)[reply]
How about not? There are reasons for ignoring nobots on talk pages. If you think a bot should follow that method, bring it up during the BRFA. βcommand 03:05, 9 March 2008 (UTC)[reply]
I'm bringing it up here because I believe it should be part of the policy. Not something that must be brought up for each approval individually. Also, in the case of at least one bot approval, the discussion barely lasted a day before the approval was given. —Locke Coletc 03:06, 9 March 2008 (UTC)[reply]
With the exception of user/user talk pages, where there's a modicum of ownership, nobody owns any page here and nobody can order a bot not to visit. It just wouldn't work with things like image tagging or WikiProjects. "Then word the requirement such that it only applies to user talk page notifications.": I'd be happy with that, I guess, providing the BAG had the authority to override it on a case by case basis as needed. --kingboyk (talk) 12:18, 9 March 2008 (UTC)[reply]
I don't see why most bots should look at the nobots thing; it's only really reasonable when the bot is making an announcement of a change rather than the change itself. Most bots don't make announcements and so shouldn't worry about nobots. If a particular bot is causing problems on a particular page, the right thing to do is for that particular bot operator to implement a blacklist for that bot. — Carl (CBM · talk) 03:15, 9 March 2008 (UTC)[reply]
Really it's only relevant on user talk page notification, which is all I'm suggesting for the moment really. If a bot doesn't make notifications, then it would automatically be exempt from this requirement. —Locke Coletc 03:18, 9 March 2008 (UTC)[reply]
This has something to do with a specific bot, I suspect. Have you considered asking the bot operator if he will implement nobots for the notification part of the process? Gimmetrow 03:20, 9 March 2008 (UTC)[reply]
Gimmetrow, he is trying to force nobots onto me. I specifically had to disable this due to abuse. βcommand 03:21, 9 March 2008 (UTC)[reply]
It shouldn't be difficult to limit compliance to User_talk, which is all that is being suggested. —Locke Coletc 03:23, 9 March 2008 (UTC)[reply]
While it currently has to do with one specific bot, I believe the matter should be resolved for all current and future bots which may suffer from a similar issue. I don't believe my request is unfair since the most common bot framework already implements support for this template (meaning nothing need be done besides turning the feature on or off, depending on use). —Locke Coletc 03:23, 9 March 2008 (UTC)[reply]
/me bangs head on wall. you have no right to enforce your POV on all bots. do it on a case by case basis. βcommand 03:26, 9 March 2008 (UTC)[reply]
I'm not forcing my POV, I'm participating in community discussion on enforcing this on all bots. Doing it on a case by case basis isn't really acceptable as it should be a prerequisite of approval (again, assuming the bot leaves user page notifications, which are all I'm concerned with here). —Locke Coletc 03:28, 9 March 2008 (UTC)[reply]
Since BCB has an opt-out equivalent to {{nobots}}, what's the problem? Gimmetrow 03:30, 9 March 2008 (UTC)[reply]
This isn't just about BCB. And AFAIK, his method requires manual intervention (being added to a list) whereas {{nobots}} is instant as soon as the user places it on their User_talk page. —Locke Coletc 03:33, 9 March 2008 (UTC)[reply]
Manual intervention (where the bot operator maintains a blacklist) is a perfectly reasonable way to limit the bot's operations. It will be much more robust than using a template. — Carl (CBM · talk) 03:38, 9 March 2008 (UTC)[reply]
How would it be "more robust"? Adding a template to your own user talk page is far simpler than engaging in a dialog with another editor and potentially waiting to be added to the list (while still being given notifications you don't wish to receive). Besides, {{bots}} is a de facto standard, hence why making all bots comply with it is far simpler than creating some new method that requires manual maintenance/intervention to use. —Locke Coletc 03:43, 9 March 2008 (UTC)[reply]
It's not the standard, it's just the most public method. My method is also nicer on the servers than nobots. βcommand 03:45, 9 March 2008 (UTC)[reply]
How is it "nicer on the servers than nobots"? —Locke Coletc 03:47, 9 March 2008 (UTC)[reply]
I don't load the talk pages of the users who have opted out; that causes less stress than having to load the talk page and then do nothing. βcommand 03:49, 9 March 2008 (UTC)[reply]
In the case of {{nobots}}, you could use Special:Whatlinkshere beforehand, creating a list of editors who opt out of all bots entirely. For those using {{bots}} to opt out of specific bots, yes, you would probably need to load the page and parse the template, but the load would be negligible, and still less than loading the page and subsequently posting a response (which would require the servers to update the database, invalidating the cache for the user talk page, forcing it to be regenerated). At any rate, we need to balance ease of use and standardization with server load, and in this case, I believe the community is better served by requiring {{bots}}/{{nobots}} compliance. —Locke Coletc 03:54, 9 March 2008 (UTC)[reply]
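A sketch of that pre-computed approach: before a run, fetch the user talk pages that transclude {{nobots}} using the API's embeddedin list (the transclusion counterpart of Special:Whatlinkshere), and never load those pages at all.

```python
# Sketch: build a skip-list of user talk pages that transclude {{nobots}}.
import requests

API = "https://en.wikipedia.org/w/api.php"

def nobots_talk_pages():
    pages, params = set(), {
        "action": "query", "list": "embeddedin",
        "eititle": "Template:Nobots", "einamespace": "3",  # 3 = User talk
        "eilimit": "500", "format": "json",
    }
    while True:
        r = requests.get(API, params=params,
                         headers={"User-Agent": "optout-sketch/0.1"}).json()
        pages.update(p["title"] for p in r["query"]["embeddedin"])
        if "continue" not in r:
            return pages
        params.update(r["continue"])

skip = nobots_talk_pages()
# before notifying: if "User talk:Example" is in skip, move on to the next user
```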
(undenting) This is a bad idea. {{nobots}} is supposed to be optional. If a programmer wants to use it then they can. Otherwise forget about it. Besides, most bots have an opt-out system and others need to ignore {{nobots}}, for example when removing fair use images outside the mainspace. Mønobi 04:49, 9 March 2008 (UTC)[reply]
It's supposed to be optional? According to who? "Most bots have an opt out system" What's wrong with standardizing the opt out system on what is the de facto standard ({{nobots}} and {{bots}})? Why should editors need to learn the different and unique magical invocations to deny a bot access to their user talk page? What is wrong with using something that works and an editor can learn about quickly and easily (and probably once found, expects to work)? —Locke Coletc 05:07, 9 March 2008 (UTC)[reply]
My answer would be that it unfairly limits the ability of a bot to function effectively. At the base level, forcing {{nobots}} compliance allows one user (the one placing the tag) to limit another editor's (the one running the bot) ability to edit wikipedia, and that just seems to fly in the face of Wikipedia practice. To answer you more directly, what if a bot (because several scripts are also pretty much bots) is warning a vandal, should the vandal be able to block the warning? Are scripts and tools like TW or AWB immune or can they be blocked too? Adam McCormick (talk) 05:15, 9 March 2008 (UTC)[reply]
Exactly. This is why AWB follows the nobot template by default, because it tends to perform simpler tasks. What if I added nobot to my talk page so cluebot didn't message me, or I added nobot so I could have a farm of nonfree images in my userspace? Mønobi 05:18, 9 March 2008 (UTC)[reply]
Exceptions are a very real possibility, I mentioned this from the outset of this discussion. That doesn't change the fact that bots dispensing notifications would be better off obeying {{nobots}} and {{bots}}. —Locke Coletc 05:30, 9 March 2008 (UTC)[reply]
A bot isn't subject to editing "privileges" like an editor, and nothing (nothing) stops the bot operator from making a log of editors who opted out and manually checking issues on their User_talk page separately. —Locke Coletc 05:30, 9 March 2008 (UTC)[reply]
We're not just talking about bots, we're talking about we editors who run them, and we definitely do have "privileges." What you're asking for is for all editors who run bots, unless specifically excepted (and I would assume any bot wanting to edit user pages would try to get such an exception), to do work their bots could have done because specific users dislike the fact that a bot is the one doing it. What is the difference between me going back immediately after my bot and making any edit it couldn't and the situation now? There is none (as far as who and what gets tagged), except that there is a lot more work for many dedicated editors who run bots and have more-productive edits to make. Adam McCormick (talk) 06:00, 9 March 2008 (UTC)[reply]
It has nothing to do with a bot being the editor, it has to do with wishing to have no bot activity on specific pages (in this case, User_talk pages). And if you want to get even more detailed, this is really only about notification messages (not re-categorization, image removal, or other types of edits one could conceivably want to automate even on User_talk pages). And the point of going back and doing the edit manually is to verify that the tag isn't being used abusively (to avoid having policies enforced, for example). —Locke Coletc 06:05, 9 March 2008 (UTC)[reply]
I'm not asking the point; I'm asking what having a bot operator do that accomplishes. It doesn't change what edits are made, it doesn't change the amount of clutter on your talk page, it only changes who makes the edit. Adam McCormick (talk) 06:35, 9 March 2008 (UTC)[reply]
The counterpoint would be that within a user's talk page, they are granted a degree of ownership over the page. Specifically this issue revolves around bots that go and post notices to user talk pages. If a page is in violation of a rule like no fair-use images, that's a reason to violate NoBots. Telling someone who has NoBots that their template is up for deletion doesn't seem to be. MBisanz talk 06:24, 9 March 2008 (UTC)[reply]
But the proposal was to force all bots to comply with {{nobots}} which goes way beyond user talk pages. Adam McCormick (talk) 06:35, 9 March 2008 (UTC)[reply]
Well I didn't make the proposal as a policy change, so I can't speak to that, only what my opinion is on what direction is should go in. MBisanz talk 06:37, 9 March 2008 (UTC)[reply]
The issue as I see it is that the specific problem being addressed is too small for such a sweeping policy change. I don't disagree that having bots respect the template is a WP:NICE thing to do, but forcing all bots to do it is an extreme. Adam McCormick (talk) 06:40, 9 March 2008 (UTC)[reply]
Yes, my original proposal was overly broad, but I still believe a narrower policy change would benefit the project and contributors. —Locke Coletc 06:43, 9 March 2008 (UTC)[reply]
It seems like the argument has collapsed to "Can bots be stopped from leaving innocuous messages on talk pages with {{nobots}}?" which doesn't seem to be of enough import to require all bots to comply. Is that really all this is about, not wanting messages? Adam McCormick (talk) 06:29, 9 March 2008 (UTC)[reply]
Well I think the issue goes to specifically that BetacommandBot currently requires approval from the owner before it will not post notices. And there is also the issue of removing redlinked cats from userpages that haven't gone to UFCD. I have not reviewed all the other conceivable reasons. MBisanz talk 06:33, 9 March 2008 (UTC)[reply]
(double ec)That's the crux of the issue, yes. And I don't believe it's such an impossible thing to request bot operators do (obeying {{nobots}} or {{bots}} placed only on User_talk pages). —Locke Coletc 06:43, 9 March 2008 (UTC)[reply]
I think you might underestimate the amount of processing logic involved in parsing out the exact meaning of those templates. It's not impossible, but it's not as easy as turning on a "comply" flag (outside of AWB and pyWikiBot at least) Adam McCormick (talk) 06:52, 9 March 2008 (UTC)[reply]
Perhaps it would be beneficial for us if you would restate your amended position for us? Adam McCormick (talk) 06:49, 9 March 2008 (UTC)[reply]
See section below. —Locke Coletc 06:55, 9 March 2008 (UTC)[reply]

Updated proposal

At a minimum, this is (after discussion above) what I believe is needed:

Bots which deliver notifications must obey {{bots}} when found on a User_talk page.

Note that I'm excluding {{nobots}} because I think it might be better if users were forced to explicitly deny bots which leave notifications (ensuring that newer bots which may leave notifications the user is interested in aren't accidentally silenced). Thoughts? —Locke Coletc 06:55, 9 March 2008 (UTC)[reply]
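For reference, the per-page check this would ask of notification bots is fairly small. The sketch below follows the allow=/deny= parameter names documented for {{bots}}; its parsing is deliberately simple and would miss unusual formatting.

```python
# Sketch: decide whether a notification bot may post, given the wikitext of a
# user talk page and the bot's username.
import re

def may_notify(page_text, bot_name):
    m = re.search(r"\{\{\s*(nobots|bots[^}]*)\}\}", page_text, re.IGNORECASE)
    if not m:
        return True                      # no opt-out template present
    tag = m.group(1)
    if tag.lower().startswith("nobots"):
        return False                     # {{nobots}} bans all bots
    allow = re.search(r"allow\s*=\s*([^|}]*)", tag, re.IGNORECASE)
    deny = re.search(r"deny\s*=\s*([^|}]*)", tag, re.IGNORECASE)
    if allow:
        names = [n.strip().lower() for n in allow.group(1).split(",")]
        return "all" in names or bot_name.lower() in names
    if deny:
        names = [n.strip().lower() for n in deny.group(1).split(",")]
        return not ("all" in names or bot_name.lower() in names)
    return True

print(may_notify("{{bots|deny=ExampleBot}}", "ExampleBot"))  # False
print(may_notify("{{bots|allow=ExampleBot}}", "OtherBot"))   # False
```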

I believe that this will at very least answer issues concerning vandals using these tags to avoid conforming to policy. I have no more direct issues with this, though I don't see that it is necessary. Adam McCormick (talk) 07:02, 9 March 2008 (UTC)[reply]
See WP:AN/B, while my current issue is with one specific bot, I'd like to see the matter permanently resolved for all current and future bots (and not just for myself, but other editors who also deal with these kinds of problems). —Locke Coletc 07:10, 9 March 2008 (UTC)[reply]
So it is necessary because you demanded it be so? Adam McCormick (talk) 07:21, 9 March 2008 (UTC) I get your reasoning, I don't see that your reasoning necessitates such a solution. Adam McCormick (talk) 07:22, 9 March 2008 (UTC)[reply]
As I indicated on the other page, I've explored all the avenues available to me to stop receiving notifications from this specific bot. All avenues failed. Further, the bot author has indicated he wouldn't add people unless he believed they weren't just "being lazy" (his words, from WP:AN/B). This is why I believe it should be required in policy, to deter future and current bot operators from operating under the belief that they can be dicks whenever they want. —Locke Coletc 07:35, 9 March 2008 (UTC)[reply]
Just because Beta won't bend to your will does not necessitate a change in the entire system. Understand that I'm not defending his reasoning or his actions, I find his conduct uncivil, but changing the entire system because of one rogue bot operator (at the demand of one or two editors) is excessive. Adam McCormick (talk) 07:43, 9 March 2008 (UTC)[reply]
Actually, per Wikipedia:AN/B#Community_proposal there are at least 10 users, including several long-established users, who are requesting him to do it. MBisanz talk 08:00, 9 March 2008 (UTC)[reply]
I agree that there are plenty of users who want Beta to respect nobots, but we've only heard from a couple who want to see this made mandatory for more than just Beta. I am among those who believe that Beta needs to allow any legitimate user onto his blacklist, but demanding nobots specifically, and demanding it of all bots editing user talk pages, for no other reason than that Beta isn't doing it, just seems to cross a line. Adam McCormick (talk) 08:11, 9 March 2008 (UTC)[reply]
I normally find myself defending BC and BCB, but I support this idea. At the very least bots should follow the tag on user pages. -- Ned Scott 08:02, 9 March 2008 (UTC)[reply]
I think that if we are going to discuss this it can't be about BCB, it needs to be about whether there is a significant need for all bots to comply with it or just a need for some process by which the community can demand an existing bot be made to comply. Adam McCormick (talk) 08:10, 9 March 2008 (UTC)[reply]
It seems simpler to me to require compliance and then carve out exceptions on a case by case basis than to allow non-compliance and then require it after-the-fact. —Locke Coletc 08:27, 9 March 2008 (UTC)[reply]
It's all a question of numbers. How often is non-compliance actually going to be an issue (is it only when bots are as contentious as BCB)? How often would bots need exceptions (as often as Beta gets a flame message on his talk page, or more)? I would think that bots would need exceptions much more often than the number of times a bot is actually causing a disruption. Maybe I'm wrong here, I don't have accurate estimates of how often these things happen. Would you argue that exceptions would be a rare occurrence?
Note: I'm sorry to leave the discussion but it's 1:30 am here. Hope there is more input by tomorrow Adam McCormick (talk) 08:35, 9 March 2008 (UTC)[reply]
Considering the BAG appear to be in the habit of "Speedily approving" bots in barely two minutes (and closing discussion of approvals in twenty four hours, then reverting and protecting the page when someone tries to continue discussion), I don't think this will add much to the "process". If a bot doesn't even leave user-page notifications it's automatically exempt from the requirement (as reworded) anyways. Of the bots that need to leave user page notifications, I suspect it'll be quick and painless to determine a) if the bot obeys {{bots}} and if b) it's even necessary or sensible (should this new bot under consideration be given an exception and why). Anyways, goodnight. —Locke Coletc 08:46, 9 March 2008 (UTC)[reply]
BAG is not in that habit. Gimmetrow 20:44, 9 March 2008 (UTC)[reply]
Perhaps you would care to explain Wikipedia:Bots/Requests for approval/Non-Free Content Compliance Bot. It may not be a habit yet, but it is a disturbing incident. —Locke Coletc 20:57, 9 March 2008 (UTC)[reply]

follow tag by default

I support this proposal, though I would also support a slight alternative: make {{bots}} compliance required by default, but allow exceptions to be requested as needed. There's no reason why that shouldn't work, and it is far more than reasonable. Humans can't complete and monitor everything bots can do; that's the whole point of being cautious about bots on Wikipedia. This is not an unreasonable request. -- Ned Scott 08:01, 9 March 2008 (UTC)[reply]

That seems reasonable to me. I don't think it'll be such a big deal once included as a requirement to simply ask if the bot supports the tag, and if not, why not (does it warrant an exception, or maybe the operations are such that the tag would be irrelevant). —Locke Coletc 08:47, 9 March 2008 (UTC)[reply]
Strongly oppose. First, there is no point making a rule for all bots that only applies to the few that leave mass messages. Second, as an operator of a bot that does leave mass messages, it should be up to the operator. {{nobots}} works fine for my bot that leaves AfD notifications, and it already follows it, but my bot that leaves image copyright notices should not be able to be ignored unless the user has a good reason. So out of the minority of bots that leave messages, a good part of them shouldn't be following {{nobots}} anyway. BJTalk 12:02, 9 March 2008 (UTC)[reply]
"Second, as an operator of a bot that does leave mass messages, it should be up to the operator." No, it should be up to the community. And obviously policy edits (image copyrights) would be exempt from these polite requests from the community. -- Ned Scott 04:17, 10 March 2008 (UTC)[reply]
But that means the initial complaint would be nullified. If policy issues are exempt, BCB would be exempt, and that started this whole mess. Seems like it would avoid issues from some bot owners though. Adam McCormick (talk) 05:33, 10 March 2008 (UTC)[reply]
Yeah, that's kind of unworkable. Just because I upload an image doesn't automatically make me responsible for maintaining it whenever some new policy (or some new enforcement of an existing policy) makes the rounds. So no, policy notifications shouldn't be exempt. —Locke Coletc 05:47, 10 March 2008 (UTC)[reply]
The {{bots}} system was poorly designed, and I wouldn't support making it mandatory. It relies on an undesirable property of older editing code like pywikipedia, that the entire page source is downloaded before any edit is made. Once there is a good way to get edit tokens from the API, there will be no reason to do that. In particular, nobots won't even work on pywikipedia any more once it is updated to use the API, unless extra downloads are added just to look for it. If the system were redesigned to use some sort of centralized databases, for example Wikipedia:Nobots/BOT_NAME, I would be less unhappy. — Carl (CBM · talk) 12:18, 9 March 2008 (UTC)[reply]
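A sketch of that centralized alternative: the bot fetches a single opt-out page once per run (the page name Wikipedia:Nobots/ExampleBot and the list format are hypothetical) and skips the users listed there, with no per-page template parsing at all.

```python
# Sketch: read a per-bot opt-out list from a single wiki page, once per run.
import re
import requests

def opted_out_users(bot_name):
    r = requests.get("https://en.wikipedia.org/w/index.php", params={
        "title": "Wikipedia:Nobots/" + bot_name, "action": "raw",
    }, headers={"User-Agent": "nobots-list-sketch/0.1"})
    if r.status_code == 404:             # list page has not been created yet
        return set()
    # assume one "* [[User:Name]]" entry per line on the list page
    return {u.strip() for u in re.findall(r"\[\[\s*User:([^\]|]+)", r.text)}

skip = opted_out_users("ExampleBot")
```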

I cannot and will not support such a poor system as nobots. I don't care if you call 10 editors in a vote consensus; I don't. I have a very effective method that is abuse-proof; it has been in operation for many months without issue. As I have said before, bots and templates don't mix well. I have written one bot that actually attempts to parse a template, RFC bot. I had to re-write that bot no less than three times due to template-related issues. (No offense is meant) I am not sure what drugs the devs were on when they wrote the processor, but their syntax is a bastardized language. βcommand 16:14, 9 March 2008 (UTC)[reply]

Why can't we use a category to efficiently identify the (presumably small) set of users who have used bots/nobots on their talk page? Only if one or the other is used would downloading and parsing be required.
Also, could someone please flesh out the reasoning for why users should not be able to opt out of notification? Whether it's about deletion of their image or article, or giving a vandalism warning, the notifications are primarily designed to help the user. It seems to me that users should be able to refuse such help. It wouldn't stop anti-vandal bots from doing their job.
The argument had been made that it's equivalent if the botop has a blacklist of users not to notify. Leaving aside the issue of botops who decline to add people to their blacklist, this is shifting the burden of effort and comprehension. Why should a user who prefers not to get notifications have to jump through hoops for every new notification bot?
Finally, this proposal does seem more important than one current bot. Would it improve support if this proposal included a grandfather clause, whereby it was merely advisory for currently-approved bots? That way, the only change it makes is to require either compliance or an explicit justification for exception in new bot approval requests. Bovlb (talk) 16:07, 9 March 2008 (UTC)[reply]
I just disabled the following of {{nobots}} for one of my bots. By making an edit or uploading an image you are responsible for it, thus you get messages when there is a problem with your edits or images. Is there a problem here? BJTalk 16:32, 9 March 2008 (UTC)[reply]
I (and most other bot ops) probably have no intention of adhering to {{bots}}. Mønobi 16:54, 9 March 2008 (UTC)[reply]
If it becomes policy I assume you would change your mind? —Locke Coletc 05:12, 10 March 2008 (UTC)[reply]
Eh? And who died and made you the arbiter of these kinds of decisions? Oh right, nobody. And this is precisely why it needs to be mandatory (with threat of block/ban for non-compliance) because of attitudes like this which are far too prevalent. —Locke Coletc 05:12, 10 March 2008 (UTC)[reply]
If you're referring to your "do not notify" list, it does have one major issue: you sometimes refuse to add people to it. --Carnildo (talk) 20:30, 9 March 2008 (UTC)[reply]

I have to say that I don't see how forcing compliance with what is being described (by several bot operators) as a broken/poorly implemented system is the way to go here. I can completely see the reasoning behind not wanting talk page notifications, but I don't see forcing {{bots}} on people as the answer. I could perhaps support a proposal to add a question to the BRFA template prompting the user to enter an opt-out method (or N/A, as the case may be). Then leave it up to the community (or BAG, if they deem appropriate) to question the creator on their proposed method, or lack thereof. That would mean that everything was upfront and there is a reference somewhere for people looking to opt out of a specific bot. I simply haven't been convinced that changing bot policy is the way to go here. - AWeenieMan (talk) 18:23, 9 March 2008 (UTC)[reply]

I agree with your reasoning that it might be a good idea to ask whether new bots will opt in, somewhat similar to every new admin being asked to join WP:AOR. Adam McCormick (talk) 18:40, 9 March 2008 (UTC)[reply]
Seems like a waste of time, very few bots leave talk page messages. BJTalk 18:51, 9 March 2008 (UTC)[reply]
I think it would be a waste of time to ask every bot, but it would make sense to ask bots that will leave such messages. At least then they can be categorized and there won't be editors who think that {{nobots}} is an absolute. Adam McCormick (talk) 19:54, 9 March 2008 (UTC)[reply]
Then it shouldn't be a problem to add to the policy if it's such an uncommon issue. —Locke Coletc 05:12, 10 March 2008 (UTC)[reply]

Off topic: I have started a thread on the bot owners' noticeboard about the possibility of reworking nobots so that parsing page text is no longer necessary. — Carl (CBM · talk) 18:43, 9 March 2008 (UTC)[reply]

zomg

It doesn't have to be the nobots system, but I think the community is asking for something that makes less work for everyone. Bot ops, as great and intelligent as you are, and as well discussed as bot tasks are beforehand, there will be unanticipated situations, or things in the project or user namespace that shouldn't be touched at all (unless related to policy, etc).

My own proposal in the above thread (which has gone off topic to a more general rant by local bot ops) was more of a state of mind, rather than the exact technical approach or some bureaucracy. When someone proposes a bot task, it should follow such requests, unless the task itself would need to edit any and all pages regardless of a user's personal preferences. We don't even need to force this from a technical standpoint, but from a community standpoint. Encourage this as a default setting for bots, but allow bot ops to use discretion. Wouldn't that be simple and fix a lot of problems in itself? Unless your bot has to edit every page, give it a way to ignore a page, because we can't be expected to watch/clean up after something as powerful (for lack of a better word) as a bot.

So don't get hung up on the technical details. The community is making a request, and one that makes all of our lives easier. This is not black and white, this is not "this specific standard doesn't work, so I'll oppose the idea altogether", this is a community discussion. -- Ned Scott 04:17, 10 March 2008 (UTC)[reply]

And this is more than just what Locke Cole has brought up. Even if Locke never brought up the topic, this opt-out concept is something that the community has desired for a long time. Heck, I'm not even worried about half the stuff Locke brought up, but there are other reasons to consider these things. -- Ned Scott 04:21, 10 March 2008 (UTC)[reply]
Agreed, {{nobots}} simply seemed like the most community friendly method available (and I wasn't interested in reinventing the wheel with this proposal). But to be clear, I don't care what system is used so long as it is uniformly supported by all applicable bots and is simple for an editor to opt-in/opt-out as appropriate. I'm not married to {{nobots}}, so if there's some better way (that is automatic and requires no intervention by the bot operator to "turn on"), I'm game. —Locke Coletc 05:15, 10 March 2008 (UTC)[reply]

Summary

So now we have three different debates: 1) what system to use, 2) when to follow said system, and 3) who gets to choose when to use said system. So first you have to create a system that all the bot ops have no objections to ({{nobots}} can't be followed by bots using the future API). Then you have to get every bot that touches the affected pages (see #2) to use the new system, including hostile bot ops (have fun with Betacommand). On top of it all, then you have a power struggle between the bot ops and those who want to control when the bots follow the new system. All this is in response to a problem that doesn't exist ("BAWWWWWW Betacommand" doesn't count). So you just keep on pushing for proposals that the people who do the actual coding oppose; I'm going to do something more useful with my time. BJTalk 08:24, 10 March 2008 (UTC)[reply]

This is not a constructive attitude to display, for one. #1 is something we can discuss later, all we need to agree on is that it must be consistent amongst all bots which must comply with it (I don't think anyone would disagree with that). #2 is also simple if we want to limit it to notifications on user talk pages. #3 I'm not sure I understand what you're saying. If it's this "power struggle" you refer to later, I don't see why operators would be opposed to complying as non-compliance obviously causes more aggravation than compliance. If Jimbo is to be believed, editors are our best resource, irritating them with notifications they're not interested in receiving (tantamount to spam e-mail) is obviously unwise. So what's your problem with this? If we agree on the principles (editors should be able to opt out of notifications on their user talk pages on a bot-by-bot basis) where is there a dispute against this? —Locke Coletc 08:33, 10 March 2008 (UTC)[reply]
The disagreement is over when messages should be ignored and who decides this. From what I can tell there are three different types of message: 1) opt-in messages, 2) vandalism or image notices, and 3) unsolicited messages (rare). The debate is over #2: if your edit tripped a vandalism bot, your newly uploaded image has issues (the image backlogs are almost empty), or one of your older images needs attention, you should get a message. In the rare case where an editor has uploaded hundreds of images and then left, most bots have an opt-out list (people have voiced issues with Betacommand's handling of his). Image-tagging bots should not respect a general "no bots here" system; if you have so many images uploaded that it becomes a problem, ask the bot op. BJTalk 09:13, 10 March 2008 (UTC)[reply]
Why should uploading an image be any different than making an edit to an article? Why are we assuming the uploader is responsible for the image months or years after they uploaded it? So if there were ever a bot created to (hypothetically speaking) automatically tag unreferenced/unsourced statements in articles, it would track down who made the contribution and post a notification demanding they source the statement? Is that really where we're headed here? Image tagging bots should respect whatever system we come up with out of this, unless (and this is my only exception so far as image tagging bots are concerned) the user just uploaded the image. In other words: the bot is notifying the user of a mistake they just made, not a mistake they made months or years ago. —Locke Coletc 10:24, 10 March 2008 (UTC)[reply]
As I said, the image backlogs are almost empty. Once this happens BCB and others will only leave messages for new uploads. My bot will still leave messages for older images that get orphaned but no action is requested. BJTalk 07:54, 12 March 2008 (UTC)[reply]
That's great, until something new is discovered to be wrong with all current fair-use images and the cycle begins anew. I'd rather solve the problem permanently than leave it to chance that I'll never be bothered about it again... BTW, do you have any comments on User:Locke Cole/Bot Page Exclusion? —Locke Coletc 02:53, 13 March 2008 (UTC)[reply]

Proposal - Maintain status quo

Despite the fact that some folks are disturbed when bot operators ignore {{nobots}}, I don't think there is enough of a problem or a significant enough justification for making adherence to the nobots template policy. My suggestion is that BAG members request nobots or similar compliance when it makes sense, and also that they see to it that bot operators are involved in the creation of anything nobots-like if/when nobots is deprecated. Avruch T 17:05, 10 March 2008 (UTC)[reply]

Seems reasonable to me. It appears that nobots is already being considered for the appropriate bot requests. — Carl (CBM · talk) 17:18, 10 March 2008 (UTC)[reply]
Please see #zomg. -- Ned Scott 23:16, 10 March 2008 (UTC)[reply]

Possibly stupid suggestion

I have no idea if this is a reasonable approach, so I'm just throwing it out. What if bots, when they are looking to drop a message to a user, looked for a specific subpage "User talk:ExampleUser/notices" and, if the page exists, dropped the message there instead of on the talk page, still going to the regular user talk page if it is not present? The advantages: users can opt in to this method if they don't want messages: they can create the page, then take it off their watchlist; alternatively, a user may want to simply keep all those messages there, watching the page as they would their talk page (though they won't get the new-messages announcement box). It could be extended to be bot-specific ("/BCB notices") for BCB, etc. From the bot standpoint, it would seem to be easy to implement, and less of a programming hassle than the nobots/bots templates (which the bot would have to fetch, parse, and interpret first): the bot only has to determine whether a page exists and write to that instead of a different page. --MASEM 18:00, 10 March 2008 (UTC)[reply]
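A rough sketch of the existence check this would involve is below (only the check is shown; the posting step is omitted). The subpage name follows the proposal above.

```python
# Sketch: choose where to deliver a notice, depending on whether the user has
# created a /notices subpage.
import requests

API = "https://en.wikipedia.org/w/api.php"

def notice_target(username):
    subpage = "User talk:%s/notices" % username
    r = requests.get(API, params={
        "action": "query", "titles": subpage, "format": "json",
    }, headers={"User-Agent": "notices-sketch/0.1"}).json()
    page = next(iter(r["query"]["pages"].values()))
    # a "missing" key in the result means the subpage does not exist
    return subpage if "missing" not in page else "User talk:%s" % username

print(notice_target("ExampleUser"))
```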

This has the problem that extra HTTP queries have to be done for each user before the edit can be made. It's possible to combine these somewhat, but only if the list of all users who will get an announcement is known ahead of time (and the naive implementations won't do that). The goal of any new system should be to keep the number of HTTP queries to a minimum, even in a naive implementation. — Carl (CBM · talk) 18:12, 10 March 2008 (UTC)[reply]
I'm kind of confused: isn't there already an HTTP transaction already being done simply to post the notice? If nothing else, checking to see if the bot isn't supposed to post a message would reduce the number of HTTP transactions for users who have opted out. —Locke Coletc 20:00, 10 March 2008 (UTC)[reply]
It would be fewer total transactions for people who opt out (1 to check instead of 2 to edit), but 3 instead of 2 transactions for the vast majority who won't have opted out.
In any case, the point of a users' talk page is to accumulate messages. It seems quite unlikely that a user will gather so many messages over a sustained period of time that a secondary page is justified for automated ones. — Carl (CBM · talk) 20:06, 10 March 2008 (UTC)[reply]
Are additional HTTP queries really that big of a deal though? Remember we're just talking about bots that make user page notifications, most bots wouldn't really be affected by this. —Locke Coletc 20:21, 10 March 2008 (UTC)[reply]
As an optional system, it's acceptable. But if some sort of mandatory system is going to be considered, it should be close enough to optimal that we can talk about it with a straight face. I find it difficult to support a system that I would be embarrassed to implement in my own bot. — Carl (CBM · talk) 21:49, 10 March 2008 (UTC)[reply]
I think we need to separate the technical issue from the policy issue. I'm sure there's some method that can be agreed upon that would address your concerns on the technical side, my concern is with the policy side. And again, the only reason I called out {{nobots}} in my initial proposals was because it seemed to be the only semi-standard method available, it wasn't an endorsement of only that method (though if some other method is agreed upon, I strongly suggest {{nobots}} and related templates be deprecated and this new method be rolled out to pages which currently use the old one). —Locke Coletc 21:57, 10 March 2008 (UTC)[reply]
From a bot perspective, the additional queries certainly are a problem: I managed to speed ImageRemovalBot up by a factor of three by reducing the number of queries it made. --Carnildo (talk) 00:21, 11 March 2008 (UTC)[reply]
I was just thinking that there was probably more overhead involved with dealing with the image pages themselves than with the notification system. But I suppose that depends entirely on what the bot is doing besides the user page notifications. At any rate, thank you for the info on HTTP query relevance. I'll need to think about this, but I think I might have a workable and expandable solution that might satisfy folks (at least on the technical side; there's still the matter of whether this should be policy). —Locke Coletc 06:12, 12 March 2008 (UTC)[reply]
I've created a rough idea at User:Locke Cole/Bot Page Exclusion, please feel free to edit it or discuss it on the talk page there (or here, but it'd probably be better to talk it out there). —Locke Coletc 06:33, 12 March 2008 (UTC)[reply]

Yet another idea

Instead of focusing on nobots or any other centralized system, why not simply ask that all bot ops have some form of opt-out or ignore process? Again, this obviously wouldn't be an option offered when the edits have to be done, such as when related to policy and images, etc. This is mostly what we do now, but the problem comes up when some bot ops don't offer this, and refuse to do so even when the edit is not a have to edit. -- Ned Scott 23:21, 10 March 2008 (UTC)[reply]

Could you be more precise about which operators are refusing to turn off which sorts of edits? Refusing to keep a blacklist could be either inappropriate or benign, depending on the details. — Carl (CBM · talk) 23:27, 10 March 2008 (UTC)[reply]
If a user isn't sufficiently polite when requesting opt-out from BetacommandBot's notifications, Betacommand will refuse to put them on the opt-out list. --Carnildo (talk) 00:24, 11 March 2008 (UTC)[reply]
I was wondering whether Ned was talking about multiple bots or just that one. — Carl (CBM · talk) 00:27, 11 March 2008 (UTC)[reply]
BCB is the only one I can think of off hand, but I figure our policy should say something to this effect for future situations. -- Ned Scott 07:30, 12 March 2008 (UTC)[reply]
I offer an opt out list, you can even add yourself! BJTalk 01:43, 11 March 2008 (UTC)[reply]
I think that, unless the notices are seen as a "have to" edit, that opting someone out whatever the method should absolutely not be up to the bot owner's discretion. Does anyone have an objection to (for instance) requiring Betacommand, specifically, to add Locke Cole to the list of users that BCBot doesn't leave talk page notices for, and enforcing this with blocks if the bot continues to leave messages on Locke's talk page? This is obviously causing him some distress —Random832 18:48, 11 March 2008 (UTC)[reply]
It's obvious it's caused a few people distress, at least. Look at User talk:BetacommandBot, specifically the many past messages. I'm not the only one requesting a way to opt out of receiving notifications. I'm just the only one willing to push this as far as it needs to go to make it a standard feature of all bots. —Locke Coletc 06:04, 12 March 2008 (UTC)[reply]

automated_messages page proposal

I would like to bring up here an idea for dealing with mass/automated messages delivered to users by bots or tools (basic idea was posted by Obuibo Mbstpo on WT:CANVASS):

Bots which post non-critical messages to users should, for each user X on their list, look for a page "user:X/automated_messages". If that page exists, the bot can post its message to that page. If that page doesn't exist, the bot should assume that user X doesn't want to receive automated notifications by bots or other tools.

If a user X prefers to receive automated messages on his/her talk page, he/she can create a redirect at "user:X/automated_messages", pointing to "User talk:X". Notification bots would then post their messages to that user's talk page.

Users are neither required nor expected to "archive" their "automated_messages" page. They can simply delete messages they no longer need, as they like. Of course they can add that page to their watchlist, but there is no obligation to do so.

Using a separate page for automated messages has the advantage that the "you have a message" bar does not pop up when a new notification is delivered (that feature is still available by creating a redirect as described above).

Using a standard location per user does have the advantage that the existence or non-existence of that page can be interpreted by bots/tools as described above.

Implementing this scheme would be easy and efficient for bot programmers, as there would be no need to load and scan a page looking for markers like {{nobots}} to decide whether a user wants a notification.

The "automated_messages" page would be for non-critical notifications. Bots depositing critical information could still use the talk page of a user, ignoring the automated_messages page. The question whether a bot is depositing critical notifications that warrant posting on users talk pages and ignoring the automated_messages page should be decided on a per bot basis during the BRFA.

--Ligulem (talk) 00:47, 13 March 2008 (UTC)[reply]

Absolutely not; please stop dreaming up ideas that are very bad. I will personally refuse to do this, and I know every other sane bot op will also. Instead of crusading against bot messages, why not do something productive? This proposal gives no thought to server load or bot load. This is a very inefficient and poorly thought out idea. βcommand 01:08, 13 March 2008 (UTC)[reply]
Instead of posting unfounded personal accusations and documenting your bad faith, you could have used the server space above to enlighten us as to how exactly this proposal is badly thought out and why it is bad for server load. In fact, it is exactly this kind of post which is unproductive. If the only arguments you have are personal attacks, then we must assume that your technical arguments are probably not as well founded as you pretend they are. Also, your refusal to implement this is no argument against the proposal. In fact, it rather shows that you haven't actually understood it. --Ligulem (talk) 08:33, 13 March 2008 (UTC)[reply]
Fine then, since his idea is bad, why not contribute an alternative of your own (besides your present method of only begrudgingly adding people to a manually maintained opt-out list)? What about my proposal at User:Locke Cole/Bot Page Exclusion (which could be expanded to include some of the ideas Ligulem is proposing)? —Locke Coletc 02:51, 13 March 2008 (UTC)[reply]
User:Zscout370/Botoptout is about the best option that I have seen; please note the wording. That is about the only opt-out method besides what I use. βcommand 02:57, 13 March 2008 (UTC)[reply]
I agree with β here. At least with {{nobots}}, a bot could potentially download a list of transclusions periodically and work from a cached list. Gimmetrow 01:31, 13 March 2008 (UTC)[reply]
What about User:Locke Cole/Bot Page Exclusion? —Locke Coletc 02:51, 13 March 2008 (UTC)[reply]
This, or something like it, would be fine with me -- meaning that I would be willing to implement support for it in my own bot. It's both relatively simple for new users to figure out and light on extra HTTP requests by the bot. — Carl (CBM · talk) 03:01, 13 March 2008 (UTC)[reply]
Checking an opt-out list every five minutes, as User:Locke Cole/Bot Page Exclusion suggests, seems rather unreasonable. That's a lot of overhead. Also it seems somewhat unusual for a bot to need to decipher sections for its other tasks; supporting this would, I think, require extra programming in most cases. On the other hand, a lot of bots do interwiki, category or template work, so if you wanted to implement an on-wiki exclusion list, something using links, categories or templates is likely not to require much additional work for the programmer. Gimmetrow 08:12, 13 March 2008 (UTC)[reply]

clarification

Perhaps, it might help if I give some specific examples I have in mind.

Critical notifications are, for example, if a user uploads an image that violates copyright policy. A bot delivering a notice caused by that should post to the user's talk page, not to the automated_messages page. That's a critical notification and there should be for example no general opt-out lists editable by everyone.

Non-critical notifications are, for example, messages about new AfDs, PRODs, or TfDs, delivered by bots. Such messages do clutter up a user's official talk page, and users should not have to archive these notifications.

The main motivation is, that a user's talk page should really be reserved for important communication, which actually deserves flashing the "you have a new message" bar. Other stuff should be treated separately. My proposal is to put that on a user's "automated_messages" page as explained above.

--Ligulem (talk) 10:38, 13 March 2008 (UTC)[reply]