Wikipedia:Bot requests

Revision as of 07:08, 20 January 2018

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots: tasks that are either too complicated to program or that lack consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).


Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did so with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.


Convert protocol relative URLs to http/https

All protocol relative links on Wikipedia should be converted to either http or https. As of June 2015, Wikipedia is HTTPS-only, and because protocol relative links inherit the protocol of the page they are hosted on, they will always render as HTTPS. This means any link to an underlying website that doesn't support HTTPS will break. For example:

[1] (//americanbilliardclub.com/about/history/)

...the http version of this link works. The article American rotation shows the problem in action: its first three footnotes are broken because they use protocol relative links to an HTTP-only website, but Wikipedia renders the links as HTTPS.

More info at WP:PRURL and Wikipedia:Village_pump_(technical)#Protocol_relative_URLs. Probably tens of thousands of links are broken. -- GreenC 21:06, 8 June 2017 (UTC)[reply]

This should only be done if the existing link is proven to be broken, and where forcing it to http: conclusively fixes it. Otherwise, if the link is not dead under either protocol, it is WP:COSMETICBOT. --Redrose64 🌹 (talk) 21:45, 8 June 2017 (UTC)[reply]
Well, let's ask: what happens if you keep them? It creates a point of failure. If the remote site stops supporting HTTPS then the link immediately breaks. There is no guarantee a bot will return years later and recheck. WP:COSMETICBOT is fine, but it shouldn't prevent us from removing a protocol that causes indefinite maintenance problems and that MediaWiki no longer really supports. Removing it also discourages editors from further usage, which is good. -- GreenC 22:07, 8 June 2017 (UTC)[reply]
That reasoning makes no sense. If a bot converts the link to https and the remote site stops supporting HTTPS, then the link immediately breaks then too. Anomie 00:22, 9 June 2017 (UTC)[reply]
Different reasoning. IABot forces HTTPS on all PR URLs since Wikipedia does too, when it analyzes the URL. It's erroneously seeing some URLs as dead as a consequence since they don't support SSL. The proposal is to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work.—CYBERPOWER (Message) 02:22, 9 June 2017 (UTC)[reply]
@Cyberpower678: The proposal, as specified above by Green Cardamom (talk · contribs) is not to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work, but to convert all PR URLs to either http or https. No exceptions were given, not even those that are presently functioning. This seems to be on the grounds that some are broken. --Redrose64 🌹 (talk) 09:06, 9 June 2017 (UTC)[reply]
Do I want to get rid of PR URLs? I personally think we should, because they confuse editors, confuse other bots, and are ugly and non-standard; they're an unnecessary complication. If we don't want to get rid of them (all), we still need to fix the broken HTTP links either way. -- GreenC 14:35, 9 June 2017 (UTC)[reply]
  • As someone who's been strongly involved with URL maintenance over the last 2 years, I think this bot should be run on Wikipedia, and should enforce protocols. It's pushing WP:COSMETICBOT but if the link ends up being broken because only HTTP works, then that will create other issues. The task can be restricted to only converting those not functional with HTTPS, but my first choice is to convert all. — Preceding unsigned comment added by Cyberpower678 (talkcontribs) 01:38, 13 June 2017 (UTC)[reply]
Opining as a bot op: I personally don't think this can be read as having community consensus because it's going to create a lot of revisions for which there is no appreciable difference. Yes, it would be nice if Wikipedia was smart enough to figure out if the relative URL is accessible only via HTTP or can be accessed via HTTPS, but the link is clicked in the user's browser, and therefore the user doesn't know whether the content may be accessible via HTTPS or HTTP. Ideally, users entering relative URLs could be reminded via a bot that it's better to be explicit about which protocol needs to be used to get to the content. The counter is we could set a bot to hunt down all the relative URLs and put a maintenance tag/category in the reference block so that a human set of eyes can evaluate whether the content is exclusively available via one route or if the content is the same on both paths.

TLDR: This request explicitly bumps against COSMETICBOT, needs further consensus, and there might be a way to have "maintenance" resolve the issue. Hasteur (talk) 12:38, 13 June 2017 (UTC)[reply]

Those are all good ideas but too much for me to take on right now. Agree there is no community consensus about changing relative HTTPS links; however, existing relative links broken since June 2015 should be fixed asap. A bot should be able to do it as any broken-link job, without specific community consensus (beyond a BRFA). Broken links should be fixed. That's something I can probably do, unless someone else wants to (I have lots of other work...). Note this fix would not interfere with any larger plans to deal with relative links. -- GreenC 15:26, 13 June 2017 (UTC)[reply]
Bump. -- GreenC 17:13, 9 August 2017 (UTC)[reply]
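For concreteness, the narrower fix (convert only protocol-relative links that are dead over HTTPS) is mechanically simple; a minimal Python sketch, assuming the requests library and a worklist of protocol-relative URLs harvested from an external-links dump:

  import requests

  def resolve_protocol_relative(url):
      """Given '//example.com/page', return an https URL if the site
      answers over HTTPS, otherwise fall back to http."""
      assert url.startswith('//')
      for scheme in ('https:', 'http:'):
          try:
              # Some servers reject HEAD; a real run would retry with GET.
              r = requests.head(scheme + url, timeout=10, allow_redirects=True)
              if r.status_code < 400:
                  return scheme + url
          except requests.RequestException:
              continue
      return None  # dead under both protocols; leave for link-rot tools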

Substitution of music infobox templates

I no longer have the time to maintain my bot's task 1, and it has been more difficult than I expected, partially due to AWB quirks and a lack of helpful documentation. So, I would like someone else to help substitute the following templates, removing integer parameters in those which are named "Infobox", and removing hidden comments in the first line of the templates. If more than one of these is on a page then all should be substituted in the same edit. The bot does not really need to do anything else since most of the cleanup is handled by the substitution through Module:Unsubst-infobox (although if you want you can additionally replace |length={{Duration|m=mm|s=ss}} with |length=mm:ss and stuff like that).

Furthermore, pages in these categories should not have the templates substituted due to errors which need to be fixed manually for various reasons. I would have done the substitutions sooner by using User:AnomieBOT/docs/TemplateSubster but it would substitute all of the transclusions, and at a very slow rate (since AnomieBOT does a lot of other things and never runs tasks in parallel). The table nesting errors are probably bot-fixable but I couldn't do it with AWB and I can't write in Python.

I believe these would not count as cosmetic changes, since some of these are per the result of TfD discussions which closed with a consensus to merge, and because pages in these templates with deprecated parameters are automatically included in a tracking category and would be removed from the category upon substitution. About 200,000 pages would be affected. Jc86035 (talk) 07:33, 23 October 2017 (UTC)[reply]

@Jc86035: I should be able to assist with these. Do you have your bot configuration for these tasks that you can send to me? I'll still need to file a BRFA, but would like to make sure it is manageable for me as well. Nihlus 08:01, 23 October 2017 (UTC)[reply]
@Nihlus: My bot task so far has solely focused on removing articles from the error categories and not actually substituting the templates or making other fixes, although I can send my AWB configuration to you if it helps. Jc86035 (talk) 08:04, 23 October 2017 (UTC)[reply]
@Jc86035: Sure. Although, I am not sure how to exclude pages in certain categories as I've never attempted that, if it's even possible. And are you going to be running your bot still for the tasks mentioned in the BRFA you filed? Nihlus 08:18, 23 October 2017 (UTC)[reply]

@Nihlus: No, I don't think so. I don't think I have the time to do it and an experienced pywiki user could do the fixes much faster than I could.

Most of these fixes, for the Module:String errors, would probably involve changing the parameter name from |Last/Next single/album= to |prev/next_title= to bypass the Module:String fixes if there's no date in the parameter and the title doesn't contain any slashes (and removing the italics, bold, and quote marks (only if paired at start/end or the title is enclosed in a link – see AWB configuration for how irritating these are)), and waiting three to twenty years for the Wikidata-compatible infoboxes to come around for the dates to be added. (Note that |Last single= and similar can also occasionally be |last_single=.) There are also

other problems
  • chronologies where brackets are used outside the quotes/italics for the titles to indicate an acoustic version, a remix, a re-release, etc., and those probably need extra handling or a separate parameter (e.g. |prev_type=acoustic to add "(acoustic)") after the song title (I haven't updated the templates for these yet because I didn't take them into account). Some of these may need to be removed e.g. "(digital download)" which is meaningless these days (probably manual except for acoustic, remix, rerelease, remastered)
  • chronologies where a featured artist is indicated in brackets, possibly with <small>, after the song/album title; the featured artist should be removed
  • chronologies where an extended play is labelled "(EP)" and "EP" is not part of the actual title; remove "(EP)" (probably manual)
  • chronologies where the formatting is malformed; most of these have been fixed but there are still quite a few with misplaced italics and the like (probably manual at this point, the bot's already fixed a few thousand of them)
  • chronologies where the album or single is "TBA"; I'm not sure what to do with these but I think they would be retained for albums, with the title moved to the |prev/next_title= parameter from the |Last/Next album= parameter, and removed for singles
  • chronologies where US/UK or other is indicated and breaks the bracket parsing (example); these should be split into two manually (example) because there are not a lot of them
  • chronologies where three or more songs are in a single and |prev_title2=/|next_title2= can't handle them; if there is only one link for the songs then it should be split into one link for each song separated by " / " with all links having same target (probably manual)
  • chronologies for A-side/B-side singles with one article for both songs; same as above but use |prev/next_title2= parameters for second title
  • chronologies where there is a <br> tag inside a song title; these probably need to be handled manually, or &shy; or a zero-width space should be added based on context? not sure (probably manual)
  • chronologies where there is a month inside the year bracket; the month should be removed
  • chronologies where the years in the brackets are linked; remove the links
  • variations of subtemplates inside each other (except templates not in the above list like {{YouTube}}, which should be nested); move the second-outermost pair of right curly brackets (and repeat if needed) so that the templates are not nested
  • {{Audio sample}} or the templates being merged into it where the description is just "Artist nameSong title" or similar without any other qualifiers (example); remove the description

There are also other probably-automatable fixes which do not remove pages from the error categories:

other problems
  • {{Duration}} used where there is only one value in |length=; convert it to a plain mm:ss or h:mm:ss time because the infoboxes handle it automatically now
  • "soundtrack chronologies" and similar for franchises/members of a group and not artists/composers (example)); I believe consensus is to remove the chronology parameters or the {{Extra chronology}} containing them
  • country flags should be changed to the name of the country in brackets, possibly within <small> but not sure
  • date ranges like "1998-1999", "1998-99" and "1998-9" should be changed to "1998–1999" (en dash); date ranges like "November 1998 - April 1999" should be changed to "November 1998 – April 1999" (nbsp? + en dash + space)
  • overcapitalization of genre names; only the first in the list may be capitalized (especially if the list is comma-separated). Obviously excluding always-capitalized genre names like R&B
  • possibly others?

Naturally, I did not get around to any of these, and none of these are in the AWB configuration. Pretty much all of the AWB configuration is adding <br /> and fixing italics, quote marks, brackets, etc.

This discussion (and some others on Ojorojo's talk page) may help. Jc86035 (talk) 09:07, 23 October 2017 (UTC)[reply]

The page list is probably too long for AWB to do on its own (I think it sets its own limit at 25,000), so I would collate all of the pages transcluding the templates into a text document, remove duplicate lines with BBEdit or another similarly featured text editor (leaving one of each group of duplicates), then do the same for the pages in the error categories, then stick both lists into the same text document and remove duplicate lines (leaving none of the duplicate lines). Jc86035 (talk) 09:19, 23 October 2017 (UTC)[reply]
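The collation step does not strictly need a text editor; a short Python sketch, assuming one page title per line in each input file, does the same set arithmetic:

  # Pages transcluding the templates, minus pages in the error categories.
  with open('transclusions.txt') as f:
      transclusions = set(line.strip() for line in f)
  with open('error_category_pages.txt') as f:
      error_pages = set(line.strip() for line in f)

  worklist = sorted(transclusions - error_pages)  # dedupes as a side effect
  with open('worklist.txt', 'w') as f:
      f.write('\n'.join(worklist))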

In my ongoing quest to clean up the music templates, I'm happy to inform everyone that the Singles template is now free of errors and only contains valid template fields (be they old or new). - X201 (talk) 08:42, 16 November 2017 (UTC)[reply]

Nihlus, are you doing or going to do this task (just the substitution, not the other things)? If you aren't it's fine since I might be able to do this myself at some point in the next four months. Jc86035 (talk) 08:04, 26 November 2017 (UTC)[reply]

BG19bot

The bot BG19bot was very helpful but has not been working for more than 6 months, and the bot's owner has not been on Wikipedia since August. Is there a way to start it up again, or can a similar bot be created? BabbaQ (talk) 23:30, 25 October 2017 (UTC)[reply]

Under circumstances not completely investigated, BG19bot is at the moment inactive. I am willing to file a BRFA for all of its tasks. I will probably do so in the next few days. -- Magioladitis (talk) 22:51, 12 November 2017 (UTC)[reply]

Symbol parameter in Infobox former country

There was a change (relatively) recently to {{Infobox former country}} in which the symbol parameter was changed to symbol_type_article. (See also: Template talk:Infobox former country#"Symbol" not currently functional.) Other than the parameter's name nothing about it has changed (at least from the user's point of view), so it should just be a straight swap. Since the template is used on >3000 pages this seems like a job best suited to a bot, and apparently there is already one which hunts down deprecated parameters. Could this please be added to that bot's tasks (or, if not, could another bot be set up to do so)? Thanks. Alphathon /'æɫ.fə.θɒn(talk) 16:35, 30 October 2017 (UTC)[reply]
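A minimal sketch of the straight swap, assuming pywikibot and mwparserfromhell (a BRFA would still be required before running it):

  import pywikibot
  import mwparserfromhell

  site = pywikibot.Site('en', 'wikipedia')
  tpl_page = pywikibot.Page(site, 'Template:Infobox former country')
  for page in tpl_page.getReferences(only_template_inclusion=True, namespaces=[0]):
      code = mwparserfromhell.parse(page.text)
      changed = False
      for tpl in code.filter_templates(matches='Infobox former country'):
          if tpl.has('symbol') and not tpl.has('symbol_type_article'):
              tpl.add('symbol_type_article', tpl.get('symbol').value)
              tpl.remove('symbol')
              changed = True
      if changed:
          page.text = str(code)
          page.save(summary='Rename deprecated |symbol= to |symbol_type_article=')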

Coding... Nihlus 17:37, 30 October 2017 (UTC)[reply]
@Nihlus: What's the status of this? (No rush, just trying to clear out the Bot requests list so botops can find things to do more easily.) ~ Rob13Talk 17:17, 9 December 2017 (UTC)[reply]
Coding... --Gabrielchihonglee (talk) 13:59, 15 January 2018 (UTC)[reply]
Wikipedia:Bots/Requests_for_approval/Gabrielchihonglee-Bot_3 --Gabrielchihonglee (talk) 01:18, 16 January 2018 (UTC)[reply]

I regularly clean up links to bad sources. The interface does not permit linksearching by namespace, and in any case many links mentioned on Talk are being proposed as sources. I would like to suggest:

  1. A bot to replace http[s]://somedomain.com/someurl with {{deprecated link|somedomain.com/someurl}} on Talk;
  2. A page containing the list of domains to be substituted;
  3. A review process for adding links to the bot's page.

This would:

  1. Speed up review of deprecated links;
  2. Reduce the chances of users adding bad sources in good faith following talk page suggestions.

Some examples:

Thoughts? Guy (Help!) 22:06, 16 November 2017 (UTC)[reply]
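A regex sketch of point 1, with hypothetical domain names standing in for the reviewed list ({{deprecated link}} here is the template proposed above, not an existing one):

  import re

  DOMAINS = ['baddomain.example', 'anotherbad.example']  # hypothetical review-approved list

  def tag_deprecated_links(text):
      # Wrap bare links to listed domains in the proposed {{deprecated link}} template.
      pattern = re.compile(
          r'https?://(?:www\.)?(%s)(/[^\s\]<|}]*)?' % '|'.join(map(re.escape, DOMAINS)))
      return pattern.sub(
          lambda m: '{{deprecated link|%s%s}}' % (m.group(1), m.group(2) or ''), text)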

Can you please clarify some terms? What is a "bad source"? What would the template "deprecated link" say?
You might want to get consensus for this sort of modification of editors' talk page posts at a place like VPP before making this request here. – Jonesey95 (talk) 00:26, 17 November 2017 (UTC)[reply]
"Bad sources" are sources people keep proposing/using despite being disallowed or plain bad with very few if any legitimate uses. {{Deprecated source}} is currently a basic template. Jo-Jo Eumerus (talk, contributions) 16:48, 22 November 2017 (UTC)[reply]

thepeerage.com

We seem to have many instances of {{Cite web}} with |publisher= set to [http://www.thepeerage.com ThePeerage.com] or [http://www.thepeerage.com/info.htm ThePeerage.com]; for example on Henry de Beaumont.

This needs to be changed to |website=thepeerage.com. Can anyone oblige, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:36, 26 November 2017 (UTC)[reply]

I changed the requested result to "thepeerage.com" above, which appears to be the correct address. – Jonesey95 (talk) 18:40, 26 November 2017 (UTC)[reply]
@Pigsonthewing: so you want to change *{{cite web|last=Lundy |first=Darryl |date=31 January 2011 |url=http://www.thepeerage.com/p10288.htm#i102873 |title=Henry Beaumont, 1st Earl of Buchan |publisher=[http://www.thepeerage.com ThePeerage.com]}} into *{{cite web|last=Lundy |first=Darryl |date=31 January 2011 |url=http://www.thepeerage.com/p10288.htm#i102873 |title=Henry Beaumont, 1st Earl of Buchan |website=thepeerage.com}}? --Gabrielchihonglee (talk) 13:10, 15 January 2018 (UTC)[reply]
@Gabrielchihonglee: Precisely so; and likewise where the original version is http://www.thepeerage.com/info.htm. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:16, 16 January 2018 (UTC)[reply]
Coding..., Got it, thanks for your explanation! --Gabrielchihonglee (talk) 13:50, 16 January 2018 (UTC)[reply]
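For reference, the substitution itself reduces to one regular expression; a sketch assuming the |publisher= value only ever takes the two forms quoted above:

  import re

  PUBLISHER_RE = re.compile(
      r'\|\s*publisher\s*=\s*\[http://www\.thepeerage\.com(?:/info\.htm)?\s+ThePeerage\.com\]')

  def fix_peerage(text):
      # |publisher=[http://www.thepeerage.com ThePeerage.com] -> |website=thepeerage.com
      return PUBLISHER_RE.sub('|website=thepeerage.com', text)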

Report on paid editors

Can someone create a report with the following information in a table based on {{Connected contributor (paid)}}?

  • Every editor listed in any of the Userx parameters.
  • The associated employerx parameter.
  • The associated clientx parameter.
  • The article page associated with this.
  • Whether the user is indefinitely blocked.

You should skip any pages that aren't in the Talk: namespace. (e.g. general disclosures on user pages, etc). ~ Rob13Talk 21:38, 26 November 2017 (UTC)[reply]
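A sketch of the data collection, assuming pywikibot with mwparserfromhell (isBlocked() is the method name in 2018-era pywikibot; newer releases call it is_blocked()):

  import pywikibot
  import mwparserfromhell

  site = pywikibot.Site('en', 'wikipedia')
  tpl_page = pywikibot.Page(site, 'Template:Connected contributor (paid)')
  rows = []
  # namespaces=[1] restricts the scan to Talk:, per the request above.
  for talk in tpl_page.getReferences(only_template_inclusion=True, namespaces=[1]):
      code = mwparserfromhell.parse(talk.text)
      for tpl in code.filter_templates(matches=r'Connected contributor \(paid\)'):
          n = 1
          while tpl.has('User%d' % n):
              get = lambda p: str(tpl.get(p).value).strip() if tpl.has(p) else ''
              user = get('User%d' % n)
              blocked = pywikibot.User(site, user).isBlocked()
              rows.append((user, get('employer%d' % n), get('client%d' % n),
                           talk.title(), blocked))
              n += 1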

Personally, I'd be curious to compare those numbers to the stats on {{paid}} usage. Primefac (talk) 22:57, 26 November 2017 (UTC)[reply]
 Doing... @BU Rob13: I've emailed you about this. Mdann52 (talk) 07:49, 15 December 2017 (UTC)[reply]

Special character de-corrupter

Very often, because of encoding issues, you have situations like Ã© → é.

This is often due to copy-pasting or bot insertions. It would be nice if a bot could find all corrupted equivalents of the special Latin characters (possibly others too), and then do a de-corruption pass, e.g. [2]/[3].

This might be best as a manual AWB run, though. Headbomb {t · c · p · b} 13:41, 1 December 2017 (UTC)[reply]
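The corruption is almost always UTF-8 text that was decoded as Latin-1/Windows-1252, which is mechanically reversible; a sketch (the ftfy library automates exactly this, including messier multi-pass cases):

  def demojibake(s):
      """Reverse UTF-8 bytes mis-decoded as Windows-1252, e.g. 'Ã©' -> 'é'."""
      try:
          return s.encode('cp1252').decode('utf-8')
      except (UnicodeEncodeError, UnicodeDecodeError):
          return s  # not (or not simply) mojibake; leave for manual review

  # import ftfy; ftfy.fix_text('Ã©')  ->  'é'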

Related problems with file names on Commons: Rename files with wonky Unicode encoding. — Dispenser 06:16, 2 December 2017 (UTC)[reply]

Removing captions of deleted images in infoboxes

Hello fellow Wikipedians -

The image removal bots (e.g. CommonsDelinker) are doing their jobs removing deleted images, but I have noticed they don't delete the existing caption if there was one. For example, this diff removed a deleted photo but not the existing caption. I was wondering if there could be a bot which removes the captions in infoboxes without images, or is there already one? Iggy (talk) 22:04, 8 December 2017 (UTC)[reply]

On the one hand, Iggy, I can see your point. On the other hand, the captions do not show if there is no image, so it's not really super-necessary to add an additional check to remove the caption text. However, if you think that this should be done, it might be worth bringing it up at WP:BOTN, because it would really involve changing the existing code rather than making an entirely new bot. Primefac (talk) 22:07, 8 December 2017 (UTC)[reply]
One problem is that captions may be provided in different ways. But the main concern of the bot is to prevent the position of the former image from appearing as a redlink. --Redrose64 🌹 (talk) 16:30, 9 December 2017 (UTC)[reply]
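If a cleanup pass were wanted despite the caveats above, a sketch with mwparserfromhell, assuming the common |image=/|caption= parameter names (infoboxes vary, which is Redrose64's point):

  import mwparserfromhell

  def drop_orphan_captions(text):
      code = mwparserfromhell.parse(text)
      for tpl in code.filter_templates():
          if not tpl.name.strip().lower().startswith('infobox'):
              continue
          has_image = tpl.has('image') and str(tpl.get('image').value).strip()
          if not has_image and tpl.has('caption'):
              tpl.remove('caption')  # caption left behind by image delinking
      return str(code)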

Automatic archiving

Running IABot on an article checks whether an archive exists for each citation, and adds links to existing archives, but if an archive does not exist then it does not create one. Is there a bot (or script) which can be run on an article to create archives for all external links, or could someone who has the skills create one? In view of the constant link rot problems, this would be incredibly useful. Dudley Miles (talk) 08:52, 12 December 2017 (UTC)[reply]

@Cyberpower678: Since it is your bot, I figured I'd ping you. --Elisfkc (talk) 02:27, 13 December 2017 (UTC)[reply]
If you run my tool and check the optional checkbox to archive all non-dead references, Archives will be created.—CYBERPOWER (Merry Christmas) 03:46, 13 December 2017 (UTC)[reply]
@Cyberpower678: Thanks for your bot, which is extremely useful, but it appears to use existing archives, not create new ones. I cannot get it to run at present, but looking at what it did when I ran it on List of Sites of Special Scientific Interest in Suffolk on 6 September at [4]: on ref 42, Bangrove Wood, it added an archive created on 19 December 2013, but on the following ref 43, Barking Woods, it did not add an archive. I ticked the checkbox. Dudley Miles (talk) 09:44, 13 December 2017 (UTC)[reply]
The bot will not create new archives itself; it only links archives that already exist. The devs at the Wayback Machine do not like other bots to start pushing archive requests to their servers, as they have their own bots doing that already. If you want it to use a different archive, you can tell the bot to use something different under the URL management tool. Also, some pages may not archive as they are somehow blocking the Wayback Machine from crawling the page.—CYBERPOWER (Merry Christmas) 16:07, 13 December 2017 (UTC)[reply]
Thanks very much for clarifying. Dudley Miles (talk) 16:36, 13 December 2017 (UTC)[reply]

Copy RCDB number to Wikidata

I made a category called Category:Articles needing their RCDB number moved to Wikidata, which has articles placed in it by having an RCDB number in {{Infobox roller coaster}} or {{RCDB}}. I was able to keep up with copying these numbers to the Wikidata entries for a while, but now I'm having trouble. I'd love if someone could make a bot that would be able to help me out. Elisfkc (talk) 02:25, 13 December 2017 (UTC)[reply]

@Elisfkc: Copying them into Wikidata should be possible with Harvesttemplates. Removing them here can be done with AWB once that is complete. You can ask at d:WD:Bot requests if you need help with harvesttemplates, and I can remove them here if you would like, after copying. --Izno (talk) 02:42, 29 December 2017 (UTC)[reply]
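A sketch of the Wikidata half with pywikibot; the property ID below is a placeholder, since the real RCDB-identifier property number would need to be looked up on Wikidata:

  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')
  repo = site.data_repository()
  RCDB_PROP = 'P0000'  # placeholder; substitute the actual RCDB property ID

  def copy_rcdb(article, rcdb_number):
      item = pywikibot.ItemPage.fromPage(article)
      item.get()
      if RCDB_PROP in item.claims:
          return  # already on Wikidata; just remove the local parameter
      claim = pywikibot.Claim(repo, RCDB_PROP)
      claim.setTarget(str(rcdb_number))  # external-id values are plain strings
      item.addClaim(claim, summary='Import RCDB number from enwiki infobox')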

A consensus is forming at Wikipedia talk:WikiProject Disambiguation#Proposal to tag all disambiguation links to tag all remaining disambiguation links in Wikipedia with a {{disambiguation needed}} tag. From our most recent count, about 16,454 disambiguation links remain. Around 5,500 of these are already tagged, leaving a little under 10,000 to tag. What is needed here is, first, to get a list of all links to disambiguation pages from mainspace pages that do not already have this tag; second, wait about ten days to see if any of those are short term links that will be fixed quickly; third, re-check that list to see what links from that initial list have been fixed; and fourth, have a bot tag all remaining disambiguation links. bd2412 T 21:51, 14 December 2017 (UTC)[reply]

Note: In order to distinguish these from older uses of the tag to identify difficult links, we will actually need to tag these with the template redirect {{Needdab}}. bd2412 T 13:35, 15 December 2017 (UTC)[reply]
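A sketch of the fourth step, assuming the worklist produced by steps one to three (dab targets per article, already filtered to exclude links that are tagged or intentional per WP:INTDABLINK):

  import pywikibot
  import mwparserfromhell

  site = pywikibot.Site('en', 'wikipedia')

  def tag_dab_links(page, dab_titles):
      """Append {{Needdab}} after wikilinks whose target is a known dab page."""
      code = mwparserfromhell.parse(page.text)
      changed = False
      for link in code.filter_wikilinks():
          if str(link.title).strip() in dab_titles:
              code.insert_after(link, '{{Needdab|date=January 2018}}')
              changed = True
      if changed:
          page.text = str(code)
          page.save(summary='Tag remaining disambiguation links per [[WT:WikiProject Disambiguation]]')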
@Bd2412: From my personal experience, the remaining disambiguation links fall into 2 categories: recently created links and links that are difficult or impossible to disambiguate. In the first case, adding {{disambiguation needed}} would be useful, in the second case, not as much. Plus there are rare cases where it actually makes sense to link to the disambiguation page (when more than one of the meanings or uses of a term are relevant to the link). Kaldari (talk) 07:26, 2 January 2018 (UTC)[reply]
@Kaldari: Longstanding difficult links are probably already tagged with {{disambiguation needed}}, for the most part. For those that are not, it may still be useful because tagging may draw the attention of subject matter experts, who have the specialized knowledge to fix the link even if it is difficult for the average editor. As for intentional links to disambiguation pages, these must conform to WP:INTDABLINK. If they do, then they don't show up as errors, and there is no need to tag them. If they do not, then they need to be fixed like any other link. bd2412 T 02:48, 4 January 2018 (UTC)[reply]

Replace parameter in all MMA bio infoboxes

Replace the "other_names" param to "nickname", in only mixed martial arts biographies. Can someone create that? Thanks. TBMNY (talk) 17:49, 15 December 2017 (UTC)[reply]

TBMNY, is there a consensus to implement this change? Primefac (talk) 18:04, 15 December 2017 (UTC)[reply]
There isn't any, but this wouldn't be controversial in the least, as the other_names parameter has been used exclusively for nicknames in MMA bios, and the output "Nickname(s)" is more appropriate in that spot. If you need consensus, I can try to get it, but again, I feel like this is a pretty basic change that isn't controversial in any way. TBMNY (talk) 18:15, 15 December 2017 (UTC)[reply]
There was a large discussion recently at WT:RL regarding nicknames (and whether to even use them in an infobox), as well as a while ago at WT:RU and a few other places. So yes, I definitely think you need to get a consensus for this sort of mass change. Primefac (talk) 18:28, 15 December 2017 (UTC)[reply]
Needs wider discussion. Mdann52 (talk) 09:32, 16 December 2017 (UTC)[reply]

Thousands of articles generating errors

The new form of {{lang-xx|italic}} without '' in the second part has generated errors in thousands of articles. Thus, I suggest bringing back the old form of the template, under which the old uses that include the markup for italics are again treated as correct! Mark Mercer (talk) 17:51, 15 December 2017 (UTC)[reply]

This conversation should move to Template talk:lang. – Jonesey95 (talk) 18:26, 15 December 2017 (UTC)[reply]

Can anyone remove all the following lines of code from all of the listed articles?

Code and articles on this page. Abyssal (talk) 02:58, 20 December 2017 (UTC)[reply]

Still really need some help with this. Abyssal (talk) 04:11, 25 December 2017 (UTC)[reply]

Any takers? Abyssal (talk) 14:26, 2 January 2018 (UTC)[reply]

@Abyssal: so, for every article in the section "Articles", you want to remove any line that matches any item in "Code"? -- Gabrielchihonglee (talk) 12:33, 15 January 2018 (UTC)[reply]
Y Done by John of Reading. @Abyssal: Just checked some pages, and it seems that someone did that for you already. --Gabrielchihonglee (talk) 12:48, 15 January 2018 (UTC)[reply]

CR and CRH S-line templates

Most articles for Chinese railway lines were recently renamed, and a lot of pages need to be updated. Could someone make a bot to

  • standardize the naming and categorization of these templates (except CRT templates), renaming all "High-Speed", "Passenger" and "Intercity" to lowercase, matching the name of the line article (this could be done by appending "Railway" to that part of the template name for each template and finding the redirect target), changing CR to CRH for high-speed lines (redirects may need to be deleted manually), and appending <noinclude>[[Category:People's Republic of China rail transport succession templates]]</noinclude> if the page is not already in that category;
  • update Template:CR lines and Template:CRH lines to the current article titles (some links will break when pages are purged due to inconsistent capitalization and this is probably unavoidable without a lot of unnecessary redirects);
  • go through articles where the CRH templates are used, changing |system=CR to |system=CRH and updating template/page titles and station names (maybe adding |notemid=Part of the [[Name high-speed railway]] for sub-lines which are part of designated corridor lines); and
  • nominate all unused redirects for CSD or RfD?

There may be other issues with the templates and articles which I haven't addressed. This should affect about 100 templates and 450 articles (a surprisingly small number, given the number of railway stations in China). Consider doing genfixes.

Thanks, Jc86035 (talk) 17:36, 22 December 2017 (UTC)[reply]

English Lsjbot

Perhaps there should be an English version of the Lsjbot that's on the Swedish and Cebuano Wikipedias, to generate stub articles interlinked in Wikidata for current redlinks on non-controversial topics like airports, locations (e.g. comarcas of Spain, municipalities, political subdivisions, etc.), events (e.g. aviation accidents), geology, small cities, military divisions and awards, technologies, plants, animals, medicine, etc. --PlanespotterA320 (talk) 02:35, 29 December 2017 (UTC)[reply]

No. --Izno (talk) 02:39, 29 December 2017 (UTC)[reply]
Why not? There are many infoboxes and articles that are tedious to constantly write.--PlanespotterA320 (talk) 13:13, 29 December 2017 (UTC)[reply]
If you don't know about the Wikidata community's issues about Lsjbot, I have not much to say except go find out by doing some basic research. If you don't know about the English community's issues with Wikidata... I suggest you do some more basic research. If you don't know about the English community's issues with bot-made articles that are permastubs forever... I suggest you work on that as well. --Izno (talk) 13:23, 29 December 2017 (UTC)[reply]
A better answer is that this would need a consensus discussion at WP:Village pump (proposals) with strong participation. There are some English Wikipedia community members who are very strongly opposed to anything to do with Wikidata, so I would expect such a discussion not to reach consensus. Anomie 15:28, 29 December 2017 (UTC)[reply]
@Anomie: The main problem with Lsjbot IMO is that it has historically used poor quality databases to create millions of stubs that will never be maintained. For example, for locations, it uses GeoNames which is basically just a dumping ground for geography data with little to no quality controls. It also has the bad habit of creating thousands of articles about subjects that already exist in Wikipedia (but under slightly different names) that have to be manually merged. I'm also not happy with its plant and animal articles since it basically just created a time capsule of 2013 taxonomy (according to the Catalog of Life, which has better data quality than GeoNames, but also has its own problems). No one is handling the thousands of taxonomy changes that take place every year to keep them up to date. As it is, we barely have enough volunteer editors on English Wikipedia to keep our species articles relatively free of rot, and we could easily create 10 times more species articles via bot, but we would also need to grow our biology editor pool by ten times. Kaldari (talk) 06:30, 2 January 2018 (UTC)[reply]
Echoing everything Kaldari said. Absolutely not. Lankiveil (speak to me) 23:19, 4 January 2018 (UTC).[reply]

Newspaper

I would like to request a bot that could fill in the publisher after use of the Refill tool, such as |publisher=Aftonbladet. As many Swedish-subject articles use one or two of the few main newspaper sources available in Sweden, I would like the bot to fill in the sources aftonbladet.se as Aftonbladet, expressen.se as Expressen, svd.se as Svenska Dagbladet, kvp.se as Kvällsposten and dn.se as Dagens Nyheter. If those could be filled in at |publisher= it would help several thousand articles. BabbaQ (talk) 13:32, 30 December 2017 (UTC)[reply]
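The mapping itself is a small table; a sketch for {{cite web}} with mwparserfromhell, leaving citations that already carry a |publisher= untouched:

  import mwparserfromhell

  PUBLISHERS = {
      'aftonbladet.se': 'Aftonbladet',
      'expressen.se': 'Expressen',
      'svd.se': 'Svenska Dagbladet',
      'kvp.se': 'Kvällsposten',
      'dn.se': 'Dagens Nyheter',
  }

  def fill_publisher(text):
      code = mwparserfromhell.parse(text)
      for tpl in code.filter_templates(matches='[Cc]ite web'):
          if tpl.has('publisher') or not tpl.has('url'):
              continue
          url = str(tpl.get('url').value)
          for domain, name in PUBLISHERS.items():
              if domain in url:
                  tpl.add('publisher', name)
                  break
      return str(code)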

Replace "IMDb award" by "IMDb event"?

There is a template which takes an IMDb page, I think, and an event name as parameters - e.g. {{IMDb award|Venice_Film_Festival|Venice Film Festival}} - but it creates broken links. Maybe it relied on some redirect on IMDb's side and they changed their format, I do not know. There is another template which uses a IMDb event code instead of a page name - e.g. {{IMDb event|0000681|Venice Film Festival}}, which creates a correct link. See both at work:

Is there any chance a bot could fix those? I guess it would need to search the IMDb to get the event codes, which I do not know if it is allowed... (both by us and them). Thanks. - Nabla (talk) 17:23, 30 December 2017 (UTC)[reply]

I've done a couple of hundred manually. There's only about 50 left now. -- WOSlinker (talk) 01:03, 31 December 2017 (UTC)[reply]

Archiving stale reports at AIV

A consensus is emerging here that a bot to clear stale AIV reports would be desirable. Reports that have been open for more than 6-8 hours are usually considered declined by default. An edit summary along the lines of "listed for >6 hours without any admin willing to block" is appropriate. I see a couple main obstacles to this, and I was hoping this board could help with them.

  1. This task needs to be run at least every 2 hours. That way we can set the minimum archive time to 6 hours, and get all the threads before they're 8 hours old. This has been the sticking point with the currently extant archive bots that I have checked.
  2. It has to archive individual bullet points, not sections. I don't think this is a great technical hurdle but if it is we can reformat AIV to accommodate the bot.
  3. It mustn't break HBC AIV helperbot5, which archives actioned reports. Pinging the current operator, @JamesR:.

Cheers, Tazerdadog (talk) 23:42, 1 January 2018 (UTC)[reply]
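A sketch of the staleness check, assuming each report is a top-level bullet ending in a standard signature timestamp (replies indented under a report would need to be grouped with it in a real implementation):

  import re
  from datetime import datetime, timedelta

  SIG_TS = re.compile(r'(\d{2}:\d{2}, \d{1,2} \w+ \d{4}) \(UTC\)')

  def remove_stale_reports(text, max_age_hours=6):
      kept = []
      for line in text.splitlines():
          stamps = SIG_TS.findall(line)
          if line.startswith('*') and stamps:
              newest = max(datetime.strptime(s, '%H:%M, %d %B %Y') for s in stamps)
              if datetime.utcnow() - newest > timedelta(hours=max_age_hours):
                  continue  # listed for >6 hours without action; drop it
          kept.append(line)
      return '\n'.join(kept)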

The current bot archiving actioned reports should probably also be responsible for stale unactioned reports. --Izno (talk) 13:54, 2 January 2018 (UTC)[reply]
How difficult would it be to update the current bot? Tazerdadog (talk) 22:25, 6 January 2018 (UTC)[reply]
The current bot just clears, to my knowledge, rather than archives, so I think it'd be pretty easy to add another clear condition to it. TonyBallioni (talk) 05:23, 7 January 2018 (UTC)[reply]
Can someone with more technical knowledge update the bot to do this? Tazerdadog (talk) 16:50, 13 January 2018 (UTC)[reply]

Bot for automatically updating Alexa rank in infobox

I recently updated the infobox information in On-Line Encyclopedia of Integer Sequences. As part of that update, I added an Alexa parameter to the infobox. It seems that the value of this parameter requires frequent updates. I just checked the Alexa link and found that the rank information in the infobox is no longer up-to-date. I think there are probably many more infoboxes with this parameter and regular bot runs to update those parameters seem like a good idea to me. -- Toshio Yamaguchi 14:49, 6 January 2018 (UTC)[reply]

This report says that |alexa= is used 2,257 times in {{Infobox website}}. – Jonesey95 (talk) 15:41, 6 January 2018 (UTC)[reply]
There is an Alexabot over at Wikidata, though the operator, Tozibb, says that they have been having technical issues so the last updates are from December. I have been working on Module:Alexa/{{Alexa/sandbox}} (credit to RexxS for writing the initial module), though it's not as useful right now since it generates arrows based on the two most recent Wikidata values (or based on four local parameters) and most Wikidata items with the Alexa rank property only have one value (furthermore, I provided an incomplete enwiki search to Tozibb for finding items to add data to). Referencing capability needs to be added before it can be used, either with local values or with the Wikidata data. Jc86035 (talk) 16:13, 6 January 2018 (UTC)[reply]

Bot that notifies users on their talk page if a set of pages are created.

I'm looking to create a bot that can automatically use {{AC notice}}~~~~ to inform users that an article from a set of pages has been created. I noticed that I have a lot of redlinks in my Watchlist that I only keep there to find out if a page is eventually created. The bot would periodically refresh a group of requested titles and, if it detects that one of them has been created, regardless of its contents (or lack thereof), use addtext.py to add the template, with the new title as the 1= parameter, to the talk pages of the users who asked to be notified of its creation. Does this make sense? I don't know, but I hope it does. Thank you anyways! ―Matthew J. Long -Talk- 22:03, 6 January 2018 (UTC)[reply]

I'm sorry, but this is impossible, because a user's watchlist is not publicly available, according to Help:Watchlist#Privacy. --Gabrielchihonglee (talk) 13:00, 15 January 2018 (UTC)[reply]
This request isn't asking for a bot to access people's watchlists, so it's not impossible for the reason you claim. Anomie 21:56, 15 January 2018 (UTC)[reply]
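A sketch of the request-list approach (a public page of watched titles and subscribers, so no private watchlist data is involved), assuming pywikibot and the {{AC notice}} template named above:

  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')

  # requests: {'Some requested title': ['UserA', 'UserB'], ...}
  # parsed from a bot-owned request page, not from anyone's watchlist.
  def notify_on_creation(requests):
      for title, users in requests.items():
          if not pywikibot.Page(site, title).exists():
              continue
          for user in users:
              talk = pywikibot.Page(site, 'User talk:%s' % user)
              talk.text += '\n\n{{AC notice|1=%s}} ~~~~' % title
              talk.save(summary='Notify: [[%s]] has been created' % title)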

Blacklisted URL

I had been using a link to a PDF article as a citation in my articles on Tamil films.

This has been used in numerous articles for the past year or so. Now I find that this link URL is blacklisted and a bot has placed a notification to that effect in many articles. It would be a tiring job to replace the link in individual articles.

Is it possible to do "replace ..... with ...."

The blacklisted link is: https://chasingcinema.files.wordpress.com/2015/09/text.pdf
to be replaced with: https://indiancine.ma/texts/indiancine.ma%3AEncyclopedia_of_Indian_Cinema/text.pdf

Thank you.--UKSharma3 (User | talk | Contribs) 10:25, 7 January 2018 (UTC)[reply]
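Yes; this is exactly what pywikibot's stock replace.py does, and a custom sketch is only a few lines (exturlusage finds every page linking to the old domain):

  import pywikibot

  OLD = 'https://chasingcinema.files.wordpress.com/2015/09/text.pdf'
  NEW = 'https://indiancine.ma/texts/indiancine.ma%3AEncyclopedia_of_Indian_Cinema/text.pdf'

  site = pywikibot.Site('en', 'wikipedia')
  for page in site.exturlusage('chasingcinema.files.wordpress.com', namespaces=[0]):
      if OLD in page.text:
          page.text = page.text.replace(OLD, NEW)
          page.save(summary='Replace formerly blacklisted URL with indiancine.ma copy')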

According to MediaWiki_talk:Spam-whitelist#Wordpress, files.wordpress.com has been removed from the global blacklist and a bot should go around and remove the blacklist templates in due course. DH85868993 (talk) 10:43, 7 January 2018 (UTC)[reply]

Bot to tag article talk pages for WikiProject New York City

As per this discussion on my talk page, there are about 433 New York City Subway station articles tagged by WikiProject New York City Public Transportation, the vast majority of which are missing a tag for WikiProject New York City. The list is here. I was wondering if a bot could go around and add {{WPNYC}} tags to the talk pages that are missing them. epicgenius (talk) 21:28, 8 January 2018 (UTC)[reply]

While tagging these, it would be useful to set a WT:NYC project |importance= for the pages that are obviously low, mid, high, etc., and leave the unsure-importance ones blank for later assessment.
Regarding |class=, would inheriting {{WikiProject Trains}}' |class= be desired/appropriate?   ~ Tom.Reding (talkdgaf)  21:57, 8 January 2018 (UTC)[reply]
@Tom.Reding: Yes, I think it would be OK to inherit classes from {{WikiProject Trains}}. The importance could be set as low for all of these tags, since I don't think any single station is particularly essential to NYC itself. epicgenius (talk) 22:27, 8 January 2018 (UTC)[reply]
Epicgenius, BRFA filed.   ~ Tom.Reding (talkdgaf)  00:03, 9 January 2018 (UTC)[reply]
@Tom.Reding: Thanks. Also, the |transportation-importance= parameter is redundant since the vast majority of the time, it was already defined under the WP:TRAINS template. epicgenius (talk) 05:49, 9 January 2018 (UTC)[reply]
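A sketch of the tagging pass, assuming mwparserfromhell and the class-inheritance rule agreed above (a production run would respect banner shells and ordering):

  import pywikibot
  import mwparserfromhell

  site = pywikibot.Site('en', 'wikipedia')

  def tag_wpnyc(talk_page):
      code = mwparserfromhell.parse(talk_page.text)
      for tpl in code.filter_templates():
          if tpl.name.matches('WPNYC') or tpl.name.matches('WikiProject New York City'):
              return  # already tagged
      cls = ''
      for tpl in code.filter_templates():
          if tpl.name.matches('WikiProject Trains') and tpl.has('class'):
              cls = str(tpl.get('class').value).strip()  # inherit |class=
      talk_page.text = '{{WPNYC|class=%s|importance=low}}\n%s' % (cls, talk_page.text)
      talk_page.save(summary='Tag talk page for WikiProject New York City per WP:BOTREQ')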

Simple multi-article search and replace request

Can someone replace the following code:

[[File:Canis dirus reconstruction.jpg|right|50 px]]<!-- [[Dire wolf]] -->

with

[[File:Canis dirus reconstruction.jpg|thumb|right|Artist's restorations of a ''[[Canis dirus]]'', or dire wolf.]]<!-- [[Dire wolf]] -->

across these articles? Abyssal (talk) 15:34, 9 January 2018 (UTC)[reply]
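For future reference, this kind of one-off is a few lines with pywikibot (TextfilePageGenerator reads one title per line), roughly what AWB does under the hood:

  import pywikibot
  from pywikibot import pagegenerators

  OLD = "[[File:Canis dirus reconstruction.jpg|right|50 px]]<!-- [[Dire wolf]] -->"
  NEW = ("[[File:Canis dirus reconstruction.jpg|thumb|right|Artist's restorations "
         "of a ''[[Canis dirus]]'', or dire wolf.]]<!-- [[Dire wolf]] -->")

  site = pywikibot.Site('en', 'wikipedia')
  for page in pagegenerators.TextfilePageGenerator('articles.txt', site=site):
      if OLD in page.text:
          page.text = page.text.replace(OLD, NEW)
          page.save(summary='Replace Canis dirus image per WP:BOTREQ')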

Abyssal, are you the creator of all the drafts you want to change?   ~ Tom.Reding (talkdgaf)  16:14, 9 January 2018 (UTC)[reply]
@Tom.Reding: Yup. Abyssal (talk) 16:20, 9 January 2018 (UTC)[reply]
Abyssal,  Done, made 19 replacements (and found a typo exclusion).   ~ Tom.Reding (talkdgaf)  17:20, 9 January 2018 (UTC)[reply]
Thanks, Tom.Reding! Abyssal (talk) 17:26, 9 January 2018 (UTC)[reply]

List of non-marine molluscs of a country

I request the creation of per-country lists, "List of non-marine molluscs of COUNTRY", based on data from the http://www.iucnredlist.org/search website. It is a very time-consuming task, even though I can filter out freshwater/terrestrial gastropods and bivalves of a certain country at the IUCN website. If a bot could make a list of species with the references, that would be great.

This is realizable (there once existed a Polbot that was able to create stubs like this [5] based on iucnredlist.org, and it is possible to make various lists based on the data, such as this one: List of least concern molluscs).

Examples of the work in progress.

There are a number of lists missing:

You can virtually make lists of non-marine molluscs for all countries (I will manually merge them with existing lists when needed).

It would be great, if you could at least sort those species into sections "Freshwater gastropods", "Land gastropods" and "Freshwater bivalves" (or make working lists of those three groups).

This task is suitable for non-marine molluscs. This task is not suitable for marine molluscs (that are placed in separate lists on Wikipedia), because there are not enough data for them on IUCN.

If you could pre-prepare such lists (in a User namespace), I would finish the task manually (I will sort species in systematic order, I will add families, I will update outdated info, I will generally check-out lists). Thanks. --Snek01 (talk) 21:11, 9 January 2018 (UTC)[reply]
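A sketch of the data-mining step, assuming the IUCN Red List API v3 (a personal token is required; endpoint and field names should be verified against the documentation at apiv3.iucnredlist.org):

  import requests

  TOKEN = '...'  # personal API token from apiv3.iucnredlist.org

  def species_for_country(iso2_code):
      url = 'https://apiv3.iucnredlist.org/api/v3/country/getspecies/%s' % iso2_code
      data = requests.get(url, params={'token': TOKEN}, timeout=30).json()
      return [row['scientific_name'] for row in data.get('result', [])]

  # The result would still need filtering to Gastropoda/Bivalvia and splitting
  # into the freshwater/land sections requested above.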

Snek01, this should be brought up at Wikipedia talk:WikiProject Tree of Life first, to determine whether or not it's desirable, if it would duplicate existing info, or if it could be accomplished via the existing category structure, etc. I believe there's also a moratorium on bots mass-creating articles.   ~ Tom.Reding (talkdgaf)  21:32, 9 January 2018 (UTC)[reply]
I requested to do it in my User namespace. If you are afraid that something bad could happen, the bot operator can create one list only first, and you will see what happens. Thanks for your solicitude. There is no need for a Wikipedia-approved bot for this task; what is needed is knowledge of how to data-mine iucnredlist.org. --Snek01 (talk) 23:34, 9 January 2018 (UTC)[reply]
Oh, I misunderstood (re Polbot). Dumps into userspace would be fine. I'm currently working on another IUCN related project that's fairly large, otherwise I'd offer to help. It would still be worthwhile x-posting to WT:TREE, as there are others there that don't watch WP:BOTREQs who might also be able to help.   ~ Tom.Reding (talkdgaf)  23:44, 9 January 2018 (UTC)[reply]

Would it be possible for a bot to automatically fix errors like Special:Permalink/778228736, where some redirect category templates are placed within {{Redirect category shell}} but some aren't? feminist (talk) 10:29, 10 January 2018 (UTC)[reply]

us-highways.com

http://us-highways.com/ was previously used for a website called U.S. Highways: From US 1 to (US 830), a self-published site on the history of the United States Numbered Highway System. The creator of the website (Robert V. Droz) ran into some unrelated legal issues in his home state of Florida and let the site lapse. The domain name has been assumed by a commercial enterprise completely unrelated to the former site. As an SPS, the site should have never been used as a source in articles, but it was. Fredddie and I feel that it would be preferable to remove citations and links to the site at this time. Would some bot operator be amenable to replacing any citations to the site with {{citation needed}} tags and removing any links in an external links section of the articles? Imzadi 1979  12:04, 13 January 2018 (UTC)[reply]

There are also a few links labeled "Florida in Kodachrome", but the domain is the same. –Fredddie 16:35, 13 January 2018 (UTC)[reply]
@Imzadi1979 and Fredddie: Not taking this on yet, but do you have a consensus in hand to do this? Hasteur (talk) 23:33, 15 January 2018 (UTC)[reply]
I personally support this. It may be AWB-able though. --Rschen7754 01:21, 16 January 2018 (UTC)[reply]

Unreliable source? documentation

Hello, I have been pinged by User:Mattythewhite, who was informed by User:Helper201 that the documentation says the unreliable source? tags should be placed outside the ref tags, not inside them.

There may be unreliable source? tags found inside references on articles, so a bot should be used to make the following change:

<ref>Reference {{Unreliable source?|date= }}</ref> → <ref>Reference </ref>{{Unreliable source?|date= }}
so it looks something like this: [1][2][unreliable source?] Reference 1 does not abide by the documentation, while ref 2 does. It is a difficult task to manually find all the articles with unreliable source? tags in the references sections. Iggy (Swan) 16:32, 13 January 2018 (UTC)[reply]

  1. ^ Reference [unreliable source?]
  2. ^ Reference
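A regex sketch of the move, assuming the tag (with an optional |date=) sits just before the closing ref tag:

  import re

  TAG_IN_REF = re.compile(
      r'(<ref[^>/]*>.*?)\s*(\{\{\s*[Uu]nreliable source\?\s*(?:\|[^{}]*)?\}\})\s*(</ref>)',
      re.DOTALL)

  def move_tag_outside(text):
      # <ref>Reference {{Unreliable source?|date=...}}</ref>
      #   ->  <ref>Reference</ref>{{Unreliable source?|date=...}}
      return TAG_IN_REF.sub(r'\1\3\2', text)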
Personally, I've always included any tags within the reference just before the closing tag, especially with {{sps}}. I don't necessarily think this is a good idea. –Fredddie 16:39, 13 January 2018 (UTC)[reply]
Looking at documentation, it seems that {{sps}} should be inside the ref tags. It would then seem to me that rather than have a bot clean up the {{Unreliable source?}} tags, we should come up with consistent rules for their usage. –Fredddie 16:44, 13 January 2018 (UTC)[reply]
I'd say the use of the unreliable source? tags in some articles within text and others in the references section would be somewhat inconsistent within the project, whether or not it agrees with the documentation is a different question. Iggy (Swan) 16:59, 13 January 2018 (UTC)[reply]
@Fredddie: {{sps}} explicitly states it should be used outside ref tags. Nihlus 11:50, 14 January 2018 (UTC)[reply]

please make bot for adding articles for footballdatabase.eu

footballdatabase.eu has many articles about football; please make a bot for adding articles from this site — Preceding unsigned comment added by 37.254.182.198 (talk) 09:06, 14 January 2018 (UTC)

Declined Not a good task for a bot. See Wikipedia:Bot requests/Frequently denied bots#Bots to create massive lists of stubs. Anomie 17:51, 14 January 2018 (UTC)[reply]
but other Wikipedias use bots to add articles, for example ceb.wikipedia.org — Preceding unsigned comment added by 5.22.4.221 (talk) 07:35, 15 January 2018 (UTC)
That's their business, what other Wikipedias allow has no bearing on what we do. --Redrose64 🌹 (talk) 12:01, 15 January 2018 (UTC)[reply]

But most of the articles that users create are small articles that take a relatively long time to write, so a bot would be faster than the users who make these small articles. — Preceding unsigned comment added by 5.22.35.28 (talk) 12:42, 15 January 2018 (UTC)

The Cebuano wiki is a different wiki and has no weight here (additionally, there is a discussion to close it). You've been told no by three people now. Please move on. Nihlus 12:49, 15 January 2018 (UTC)[reply]

make a translate bot

please make a translate bot to translate articles that exist in other Wikipedias but not in the English Wikipedia, using Google Translate — Preceding unsigned comment added by 5.219.141.214 (talk) 11:38, 14 January 2018 (UTC)

Declined Not a good task for a bot. Tried, failed. Primefac (talk) 16:34, 14 January 2018 (UTC)[reply]
Have you read Google TranslatePasta before? It's quite entertaining. Hasteur (talk) 23:37, 15 January 2018 (UTC)[reply]

Tag talk pages of articles about English with Template:WikiProject English language

WP:Article alerts recommends having a bot tag the talk pages of articles with relevant topical wikiproject banners so that the AA bot produces more meaningful results. This would also be useful for getting this barely active project rolling better; I'd been looking into manually going article to article doing this, but it looked to be a rather daunting task even with AWB, and I'm on a Mac, so I'd have to run AWB in a VM or something anyway.

Would start with Category:English languages and its subcats.

Various subcats of Category:Words are going to qualify but will probably have to be done manually (e.g. about 99% of the content of Category:Neologisms, Category:Slang, etc., is English, but a handful of articles in such categories are not, and so should not be tagged as within the scope of this project). Similarly, the majority of articles under Category:Punctuation have a section on English and would get tagged, but in a few cases the English coverage has been split out into separate spinoff articles like Quotation marks in English, which should get tagged while the main article on the mark would not. We'll probably want to exclude most literature-related categories, but would include Shakespeare (for having had a profound effect on English, in contributing more stock phrases than any other body of work besides the King James Bible). Category:Lexicographers and other such bios will also need manual tagging.  — SMcCandlish ¢ >ʌⱷ҅ʌ<  19:09, 17 January 2018 (UTC)[reply]

wikia bot

please make a bot for adding articles from Wikia, for example nintendo.wikia.com — Preceding unsigned comment added by 5.75.62.30 (talk) 07:05, 20 January 2018 (UTC)

bot for creating new categories

please make a bot for creating new categories, for example people by day of birth