
Wikipedia:Village pump (technical)

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by NE Ent (talk | contribs) at 18:35, 14 October 2013 (Proposal to Reduce the API limits to 1 edit/30 sec. for logged out users: restore oppose). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bugs and feature requests should be made at Bugzilla (see how to report a bug). Bugs with security implications should be reported to security@wikimedia.org or filed under the "Security" product in Bugzilla.

Newcomers to the technical village pump are encouraged to read these guidelines prior to posting here. Questions about MediaWiki in general should be posted at the MediaWiki support desk.


Some help by a technically skilled user is needed there. Thank you in advance. --Leyo 09:01, 20 September 2013 (UTC)[reply]

Disallow VE software changes from the WMF for the foreseeable future

After testing the new release of VE, announced here in #VisualEditor weekly update - 2013-09-26 (MW 1.22wmf19), it became quite obvious that I am apparently the first one to test these changes before they are implemented live, since the predictable bugs were very easy to find and will create loads more problems (luckily, with the opt-in, the amount of VE edits has dropped dramatically).

I have posted these problems in the above thread (where one developer responded) and at Wikipedia:VisualEditor/Feedback#Error reports before they even happen!, where no one from the WMF could be bothered to reply.

I then filed Bugzilla 54737, which was closed as invalid. Apparently, we first need to get consensus that their product and new release suck before they can act upon it, instead of using their brains and testing it for themselves. Furthermore, even if we get a consensus, "on average, they take a month or two to process"[1]. But anyway, here we are.

I would like to see whether people agree that, considering the current state of VE and the quality of the weekly releases (buggy, badly and incorrectly described, ...), it would be a lot better if we didn't get any VE updates or releases until most of the major bugs are solved and we get an actually working product. The WMF is still trying to use the Wikipedias as their forced "community testing ground" (see nice older pages like [2]: "The VisualEditor project aims to create a reliable rich-text editor for MediaWiki. It is a top priority for the Wikimedia Foundation and it is available for testing on the English Wikipedia."), whether we want this or not.

This has to stop. They can test it on their own pages if they want to, or on testwiki, or on Wiki-versions that explicitly agree to be a testing ground; but they shouldn't bother us with it, and they should stop pushing new, untested, faulty releases to us. Fram (talk) 07:37, 30 September 2013 (UTC)[reply]

Do these updates affect the opt-in nature? equazcion | 07:44, 30 Sep 2013 (UTC)
... you told them to not push any updates for 3 months in the bug! 1) You're not even giving them a chance to fix the bugs that exist, 2) that is not what Bugzilla is for. --Rschen7754 07:45, 30 September 2013 (UTC)[reply]
As long as it's opt-in only and being used by people that are using it with an eye to evaluating the results and the interface, I can't get too excited about releases, Fram. Even well-run projects have the occasional regression. If I see evidence that people testing Visual Editor are damaging Wikipedia and not cleaning up after themselves, my excitement level will quickly rise.
Conceptually, I agree with you that until they get tables, complex templates that include styles and table formatting, and tables themselves working, there's no actual reason for anyone but themselves to test the code.—Kww(talk) 07:52, 30 September 2013 (UTC)[reply]
Don't you think you're getting a little bit extremist? You want them to fix VE's problems by... doing nothing. Sure. Adam Cuerden (talk) 07:58, 30 September 2013 (UTC)[reply]
No, he wants them to fix VE's problems by fixing VE's problems. As Fram notes, they don't need the live English Wikipedia community in order to find problems, since many problems are already apparent. A beta software release for live testing should really only be done once obvious inadequacies are taken care of and further bugs aren't becoming readily apparent via genuine efforts at non-live testing. That said, personally, I don't much care if they want to continue plodding down this half-assed path, as long as VE remains opt-in and doesn't cause problems for non-testers. equazcion | 08:09, 30 Sep 2013 (UTC)
Like Equazcion said, I want them to fix bugs at Testwiki, and at every Wiki that actively welcomes VE and its updates, be it Mediawiki or another language version or Wikisource or whatever. I don't want them to push clearly untested updates to here and everywhere. I have left feedback at Mediawiki as well, but no one seems to care that their release notes are utterly wrong. I seriously doubt that anyone had tested any of the VE "improvements" of version 19 before they implemented it at Mediawiki and announced the rollout here for this week, and I doubt as well that anyone but me has independently tested it (WMF developer Qgil has tested my bug reports and confirmed them, thanks for that). Why would we allow them to push their releases when their quality is way sub-par? Whether it will affect the opt-in, I don't know, since I'm unable to test that (and I doubt they have done). I don't see the value in rushing a new release every week when there are so many major bugs left and so little enthusiasm here to test or use it, or to clean up after those that still use it. Fram (talk) 08:30, 30 September 2013 (UTC)[reply]
I'm afraid I don't see the point in this request. I think, although WMF should have withdrawn VE entirely after it was determined that there were serious encyclopedia-damaging bugs, that there's no point in withholding updates. There is no indication that WMF is likely to restore previously existing serious bugs, and I consider the possibility that an update adds new serious bugs likely, but unlikely to be detected without using a live Wiki. I could be wrong. — Arthur Rubin (talk) 09:57, 30 September 2013 (UTC)[reply]
Well, I tested the new release V19. There are, at the moment, a number of serious bugs in file handling (someone should really tell the WMF that it has been about "files", not "images", for quite a few years), e.g. with the moving of files. Moving files works very poorly and causes more problems than it solves. So when this new version doesn't address this bug, but instead opens up this "functionality" for templates and other things, then yes, the WMF is actually expanding known bugs to new possibilities, and testing on their own pages at Mediawiki was more than sufficient to find these problems (and a few others to boot). Pushing these changes (with the accompanying incorrect description, which no one is allowed to correct, apparently) to all Wikipedias instead of testing them locally (at Mediawiki and Testwiki) is irresponsible behaviour and only makes Wikipedia worse. It isn't too bad for us, now that we have opt-in, but we could at least send the message that their approach is totally wrong. Fram (talk) 10:05, 30 September 2013 (UTC)[reply]

Woah, woah, you want developers to spend their precious time testing software before releasing it? This is Web 2.0/"agile"/"waterfall"/insert buzzword here! Your users are your testers! Regression testing, test suites, fuzzing, what are those, some new websites or somethin? To be fair to the WMF, this problem is hardly limited to them. The trendy thing in software development these days seems to be letting your end users find the mistakes you made, and then bitching at them for not providing a ready-to-go patch to fix your errors, or if you're a large corporation potentially calling the police on them for finding bugs with security implications. After all "open source" means "other people do my work for me," right? I mean, who's got time to test software anymore? That's what they do when they write actually important software, right, like software that can kill people if it screws up (medical devices, military hardware, etc.), just throw the latest HEAD revision onto the device and ship it when the deadline comes. --108.38.191.162 (talk) 16:49, 30 September 2013 (UTC)[reply]

Cute, but even open-source developers only recommend live use after major issues are dealt with. The difference between the old and new-age open-source ways is that works in progress aren't "closed" quite as often, so instead of limiting them to a hired or even select group, they're often made available for all to test, should they wish to and be able to find the test release. Things still aren't generally implemented live or even put in plain sight until they're pretty reliable. By even the most relaxed industry standard, VE is in the alpha stage, and in order to participate in testing, users should need to have the requisite knowledge to navigate to a non-live testing ground. equazcion | 18:12, 30 Sep 2013 (UTC)
Oh I'm aware of that. I'm a programmer myself. My point was mainly in regards to the belief some people have that "open source" is magic pixie dust that will fix all the bugs in your software. This is more common in the corporate world, where some suits seem to think a legion of programmers will materialize to fix all your bugs for free. As for testing, if the WMF wants to learn how to do things right, they should look at projects like the Linux kernel, Perl, and distributions like Debian and Gentoo, all of which have automated test suites, groups of dedicated testers, and in fact require testers to sign off on code before it's marked for release. --108.38.191.162 (talk) 19:33, 30 September 2013 (UTC)[reply]
  • Fram, I get that you dislike VE and the way it's been deployed. That's great; it's certainly not a rare opinion! But this suggestion seems to be deliberately pointed - everyone else has to not be told about ongoing work because you dislike the project?
Given one of the major community complaints about VE, and every other major technical project, is a lack of communication, demanding less communication is going to be massively counterproductive and serve to make things just that bit less pleasant for everyone involved further down the line. Andrew Gray (talk) 19:45, 30 September 2013 (UTC)[reply]
    • No, that's not what I am suggesting. It's not that I don't want to hear about ongoing work (assuming that their reports are actually correct; the current one isn't), but that I don't want them to deploy the actual updates until a few months have passed, most of the major bugs have been fixed, and things have been thoroughly tested at testwiki and at any wiki that agrees to be used for that purpose (e.g. Mediawiki). It seems that I haven't explained my proposal very well, or that my current disagreement about the latest release notes has been mixed up with this proposal. But I repeat: I call here for a moratorium on deployments of further VE code updates, not for less communication. Apologies for the misunderstanding! Fram (talk) 19:58, 30 September 2013 (UTC)[reply]
  • Fram, this is one of those cases of watching an inevitable result. Now that WMF has lost English Wikipedia as an involuntary testbed, their first few releases will inevitably represent quality degradations to the point that many of the projects that have it marked as "opt-out" will move to "opt-in". They will then get even less feedback, so their quality will degrade further. WMF will either have to learn to test code on their own, or the project will collapse. I'm hoping for the former. One way or the other, us raising a bigger fuss about it probably won't help.
I'm personally hoping that the VE development team starts to use WMF's QA team to evaluate releases (right now, my understanding is that they don't, something which doesn't surprise me but does sadden me). It probably is reasonable to ping Mr. Forrester and ask him what QA stages VE goes through. The last quote I have from him on the topic is "<+James_F> Elitre1: We test the fixes, yes, but clearly not enough. :-( We've spoken with the QA team about working with them, but currently we don't do any significant work with them, no."—Kww(talk) 20:05, 30 September 2013 (UTC)
Pinging Mr. Forrester—Kww(talk) 22:49, 2 October 2013 (UTC)
Pinging Mr. Forrester—Kww(talk) 06:23, 4 October 2013 (UTC)
It's a reasonable and specific question, Mr. Forrester—Kww(talk) 03:59, 6 October 2013 (UTC)
Perhaps it's some kind of bug in WP:Notifications, Mr. Forrester—Kww(talk) 20:05, 8 October 2013 (UTC)[reply]
@Kww: Hey, sorry, indeed Notifications doesn't let you just endlessly ping people to avoid people abusing it. I didn't see this until pinged by Scott. I talked in some detail about testing strategy at the IRC office hours last week - was there a specific question you wanted to ask? Also, per WP:DENY, please don't use a banned user's hate site as a reference, even if it seems reasonable to assume that the log is accurate. Jdforrester (WMF) (talk) 21:48, 8 October 2013 (UTC)[reply]
Reading IRC logs is always painful, so maybe I missed what I was looking for, Jdforrester. I saw mention of different kinds of tests, but I didn't see mention of who does the testing. When are changes specifically evaluated by someone that didn't write the code? Is there a specific set of regression tests that all releases must pass before they are released to a production version of WIkipedia?—Kww(talk) 22:10, 8 October 2013 (UTC)[reply]
@Kww: Changes are tested by the developer, by the reviewer, by several computers (see the unit tests, the integration tests and the browser tests, as well as the dirty-diff testing) and by me. The tests are run before, during, and after the code is merged. Yes, code cannot be merged (let alone deployed) without passing the unit and integration tests; we hope to be able to add the browser tests to this list when they are more stable. Jdforrester (WMF) (talk) 22:38, 8 October 2013 (UTC)[reply]
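The gating described here — code cannot be merged, let alone deployed, without passing its required test suites — is a standard continuous-integration pattern. A minimal sketch of that idea, for readers unfamiliar with it; the suite names mirror those listed above, but the function and its behaviour are purely illustrative, not VE's actual tooling:

```python
# Generic sketch of a CI merge gate: a change may only merge if every
# *gating* suite passes. Suites not in the gating set (here, the browser
# tests, which are described above as not yet stable enough to gate)
# are reported but do not block the merge.

def can_merge(results, gating=("unit", "integration")):
    """results maps suite name -> bool (passed). Only gating suites block."""
    return all(results.get(suite, False) for suite in gating)

# Browser tests failing does not block the merge...
ok = can_merge({"unit": True, "integration": True, "browser": False})
# ...but a failing integration suite does.
blocked = can_merge({"unit": True, "integration": False, "browser": True})
```

Promoting the browser tests to gating status would then just mean adding them to the `gating` tuple.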
Note that they are currently hiring a full-time VE tester. Andrew Gray (talk) 20:17, 30 September 2013 (UTC)[reply]
It's either amusing or sad or both that among the "pluses" for a candidate to possess (but not requirements) are "Knowledge of wiki markup and experience as an editor of Wikipedia or another MediaWiki wiki" and "Not only know how to test VisualEditor but also why to test". Similar requirements were applied to the development team, I'm guessing. EEng (talk) 19:44, 6 October 2013 (UTC)[reply]
Shouldn't they have filled that position six months ago?—Kww(talk) 20:22, 30 September 2013 (UTC)[reply]
(ec)Any reason to believe whatever Jdforrester tells us? His track record isn't exactly spotless over the last couple of weeks... He asked for feedback about the status report, but can apparently not be bothered to read it or act on it. He has replied to my request about where he got the "edit summary" figures from, but his response (on his talk page) is seriously unconvincing. Fram (talk) 20:23, 30 September 2013 (UTC)[reply]
It's always good to at least know what the official statement is, regardless of whether you personally find it credible.—Kww(talk) 20:35, 30 September 2013 (UTC)[reply]
The feedback that the VE team/WMF should be getting is not on the level of bugs in this or that feature, but that they have failed in how they have managed this project. Until they have demonstrated some organizational changes, and demonstrated (on a suitable test bed) that they can deliver adequately tested software, they should stop deploying it. Alternately: the fuss they have gotten is the feedback. Do they need more? ~ J. Johnson (JJ) (talk) 21:28, 8 October 2013 (UTC)[reply]

VE avoided by 85% of new usernames

I want to note how, in the final days on the top menu, the VisualEditor was avoided by 85% of the new usernames being tracked for choice of text editor. See: wp:VEDASH for the VE dashboard graph which showed the average low 15% usage among new usernames, who mainly preferred to use the wikitext source editor for 85% of edits, even though VE was still on the edit-tabs at that time. Also see below: "#VE opt-in usage near 0%". -Wikid77 15:57, 9 October 2013 (UTC)[reply]

VE opt-in usage near 0%

After VE was removed from WP's top menu on 24 September 2013, to become an easy opt-in feature in Special:Preferences, the VisualEditor was avoided by 99.7% of users being tracked for choice of text editor. See: wp:VEDASH for the VE dashboard graph which showed the average usage (after 25 September 2013) remained well below 1% of all edits, often ranking as low as 0.2% (0.002, or 2 edits per thousand). As many experienced software developers have emphasized: WYSIWYG interfaces can be very tedious to use, and many power users quickly switch to text-based editing of pages, as faster to perform the work at hand. That is why we computer scientists developed hypertext markup languages, as copy/paste text languages, to allow diff-links between revisions, with new features by a macro scripting language (for templates), and to also allow multi-word search in markup keywords (although most browsers still "find string" rather than "hunt words" in multiple spots). It can be much faster to keep wp:checklists of intended text changes, to focus on each step to edit, and then re-proofread the final page to checkmark each step as successfully done. By comparison, point-and-click steps are not obvious in a diff listing. Also see above: "#VE avoided by 85% of new usernames". The actual usage levels of VE needed to be emphasized, for future consideration. -Wikid77 15:57, 9 October 2013 (UTC)[reply]

Most of those 99.7% of users still don't know that they have to manually go to their preferences and re-enable VisualEditor. It's not a "choice" if you don't know that the option exists. From the users' perspective (not counting the <1% of people involved in Kww's default-state RFC), VisualEditor just silently disappeared two weeks ago, with no indication that they are even allowed to opt-in if they wanted to. Whatamidoing (WMF) (talk) 17:43, 10 October 2013 (UTC)[reply]
I'm forced to agree that this isn't a very telling statistic, and describing it as people "avoiding" VE is a gross misinterpretation. Although on the flip side, if VE had been remotely popular, we would still be seeing more opt-ins as people went looking for preferences, and we'd also be seeing some comments from people wondering where it went. Unless I'm not looking in the right places, I've seen no mention of it thus far. equazcion 17:50, 10 Oct 2013 (UTC)
We've seen a handful of comments like this one, which appeared within hours at the Mediawiki feedback page. Not everyone asks their technical questions here, and of course if your question has just been asked and answered on the same page, then most people won't ask it again. Whatamidoing (WMF) (talk) 18:34, 10 October 2013 (UTC)[reply]
Well, I didn't only look here (as in this page), but on help desk and teahouse as well, but thanks for pointing out the VE feedback forum at MediaWiki. Although I'm not sure who would know to look there if they merely noticed VE had disappeared and weren't previously involved in testing/feedback in some way. People who just liked using it would probably be asking where it went in some Wikipedia venue, I would think. equazcion 19:07, 10 Oct 2013 (UTC)
Whatamidoing, is it going to be WMF's official policy to portray the RFC and the subsequent change in VE's status as something that did not represent community consensus? I really am beginning to find these efforts to portray our RFC process as a problem tedious. If the WMF can come up with a more accurate way to sample consensus, I'm eager to listen.—Kww(talk) 00:43, 11 October 2013 (UTC)[reply]
I was planning to ignore this (both because I'm tired of complaining about Whatamidoing's communications, and because I wanted to try and keep this discussion above the level to which Whatamidoing would like to bring it down), but since you mention it, I find the little backhanded snipe here ("not counting the <1% of people involved in Kww's default-state RFC") childish and disgusting. VE must be very close to her heart for her to feel hurt enough to lash out as she's been doing, and I try to sympathize, but I think everyone feverish about this at the WMF could stand to take half a chill before they say more of the wrong things to more people here. equazcion 02:13, 11 Oct 2013 (UTC)
Whether the RFC process' oversampling of experienced editors and metapedians (i.e., people like you and me) is a "problem" depends on the question you're asking. I can't think of a better system for figuring out how a complex policy should be applied to an article: someone on the WP:TOP5000 list is much more likely to be able to deal with a NOR or NPOV issue than someone who has only made one tiny edit.
However, the RFC process is pretty obviously not an effective method of getting responses from brand-new editors or from prospective editors. So I think the answer to your question is, who's in your community? The RFC process is an excellent method of determining the views of the highly active editors (the three or four thousand people like you and me who make more than a hundred edits each month). If that's "the community", then you very likely have community consensus represented on that page. If your idea of "the community" includes the tens of thousands of editors who made just five or ten edits in a month, then that RFC does not seem to include their views (which might or might not agree with the RFC's outcome; nobody really knows).
Equazcion, I'm not sniping about the RFC for being too small; it seems to be about the third largest ever. I'm only assuming that everyone who participated in the RFC, and later noticed that VisualEditor disappeared, is smart enough to make the connection between the two. So 99.7% of users haven't opted in, and since the number of people opting in (~0.3%) is smaller than the number of people who participated in the RFC (~0.9%), we can assume that at least some of those 99.7% know exactly what happened (and also any others who missed the RFC but saw the VPT and AN discussions). But that's still a vast majority of editors, including tens of thousands of new editors, who just don't know, and therefore can't be "choosing" in either direction. Whatamidoing (WMF) (talk) 18:26, 14 October 2013 (UTC)[reply]

Proposal to Reduce the API limits to 1 edit/30 sec. for logged out users

We seem to be getting attacked by SpamBots a lot recently, or bots inadvertently get logged out during their runs. Or we have incidents like User:RotlinkBot editing from IP farms that can't be range blocked. Either way, legitimate bots shouldn't be editing from IPs, and the SpamBots tend to come from IPs. I propose the API limits for editing while logged out should be set to 1 edit/30 sec. That way, the potential damage is manageable. Please note that the API is different from editing Wikipedia directly. It will not affect the IP editors on Wikipedia. It will only affect automated tasks, aka bots, that are using an IP instead of a username. Any input on this?—cyberpower ChatOnline 21:20, 2 October 2013 (UTC)[reply]

Yes... if an RfC is to be held here, I shall unwatch this page. --Redrose64 (talk) 21:41, 2 October 2013 (UTC)[reply]

One per minute seems a little too strict; maybe one per thirty seconds? -- Ypnypn (talk) 21:43, 2 October 2013 (UTC)[reply]

Ok. I have changed it to 30 seconds.—cyberpower ChatOnline 23:36, 2 October 2013 (UTC)[reply]
  • Support one per thirty seconds, which is about as fast as a human could reasonably edit. Robert McClenon (talk) 21:58, 2 October 2013 (UTC)[reply]
  • Comment only a bot could reasonably edit faster than that, and a bot that isn't logged in is a bot that is malfunctioning. The User:RotlinkBot issue is a special case that may involve malice or malware. Robert McClenon (talk) 22:00, 2 October 2013 (UTC)[reply]
  • Question Tools such as WP:TW use mw:API for editing pages. For example, if you wish to nominate a file for deletion, WP:TW makes three API edits within a short period of time. Are IP editors able to use these tools somehow? --Stefan2 (talk) 22:27, 2 October 2013 (UTC)[reply]
    No. IP editors can't use Twinkle. Twinkle will not be fazed by this change.—cyberpower ChatOnline 23:34, 2 October 2013 (UTC)[reply]
    An IP editor could use Twinkle by using it with something such as Greasemonkey. I don't know if any of our long-term IP editors do so, but it's certainly possible. Anomie 01:31, 3 October 2013 (UTC)[reply]
    Thinking about Stefan2's question, we probably want to extend the notice to Twinkle users, WP:AFCH users and more, to give them a heads-up that this is coming. Hasteur (talk) 23:59, 2 October 2013 (UTC)[reply]
  • Oppose Way too short. It's easy to see that last spelling error just as you hit Save page ... to make IPs wait 30 seconds is inappropriate. What's the current limit? NE Ent 00:05, 3 October 2013 (UTC)[reply]
    The API is different from editing Wikipedia directly. The API is used for bots and external programs. They can edit normally on Wikipedia itself. — Preceding unsigned comment added by Cyberpower678 (talkcontribs) 00:10, 3 October 2013 (UTC)[reply]
    Oh, that's very different... never mind (see Emily Litella if you're too young to understand) (of course getting a contributor who doesn't know what they're talking about is what you get for forum shopping to WP:AN) NE Ent 01:13, 3 October 2013 (UTC)[reply]
    If it's true this affects VE editing, as stated below, my original opposition was correct. NE Ent 18:35, 14 October 2013 (UTC)[reply]
  • Comment Can anyone provide me examples of why we allow anonymous writes via the API at all?—Kww(talk) 00:22, 3 October 2013 (UTC)[reply]
    I can't. Hence this proposal. There may be external editing programs that use the API, so that would be one example, but other than that, IP edits shouldn't be happening through the API.—cyberpower ChatOnline 00:26, 3 October 2013 (UTC)[reply]
    WP:Teahouse's scripts are usable by anons and use the API to make edits. Anomie 01:35, 3 October 2013 (UTC)[reply]
    Why do we let anonymous users edit at all? Oh yeah, it's a foundation principle. Whether people edit via the API or HTML interface, it should make no difference. What matters is the substance of the edit. Legoktm (talk) 02:26, 3 October 2013 (UTC)[reply]
  • Support disabling entire write API for anons unless someone can think of a good reason why they need it. -- King of 00:35, 3 October 2013 (UTC)[reply]
    Note that things such as submitting AFTv5 feedback or using the VisualEditor are included in the write API. Anomie 01:39, 3 October 2013 (UTC)[reply]
    Hm. Would an IP need to make multiple VisualEditor edits within a short period of time? --Stefan2 (talk) 12:01, 3 October 2013 (UTC)[reply]
  • Support Yeah, with King on that. I can't see any reason to allow anonymous API edits at all. I had no idea that was currently allowed. Spammers will probably still find ways, but there's no reason to make it this easy for them. But short of disallowing anon API edits altogether, yes, a rate limit should be imposed. equazcion | 00:52, 3 Oct 2013 (UTC)
  • Support If I understand mw:API:Main page correctly, this shouldn't affect real people in any way (I don't understand how you would even be able to make an API edit at all as a human), but it should be able to slow down botspam, so it's seemingly a good first step. I'd suggest that we investigate Equazcion's ban-API-edits-entirely proposal. Nyttend (talk) 01:14, 3 October 2013 (UTC)[reply]
    I wouldn't recommend that. There may be legitimate reasons for an IP to edit through the API, but since it's almost never going to happen, limiting it to 1 sounds like the reasonable first step.—cyberpower ChatOnline 01:21, 3 October 2013 (UTC)[reply]
    You can use Special:ApiSandbox, make a POST request through your browser, etc. Legoktm (talk) 02:29, 3 October 2013 (UTC)[reply]
  • Comment So far, this proposal is long on rhetoric and low on facts. There is an assertion that spam bots are using the API while logged out. Links? What's to stop the spambot from screen-scraping the UI edit form, which many probably already do? There is talk of User:RotlinkBot, but that's a registered account and if it continued its unapproved actions after being blocked I don't see any links showing that either. And if it really had an "IP farm", couldn't it cycle through the farm to make N edits every 30 seconds (one per IP)? And what's to stop a spambot from spamming from various registered accounts until the checkusers block its IP? Anomie 01:31, 3 October 2013 (UTC)[reply]
    • As I said, spam bots would probably manage to continue making spam edits, but currently we're almost encouraging it by making it remarkably easy. I don't see much if any legitimate reason to allow anonymous API edits whatsoever. For the RotlinkBot history, see Wikipedia:Archive.is RFC. equazcion | 01:44, 3 Oct 2013 (UTC)
    • I just noticed your mention above that the article feedback tool uses the API. If disallowing anonymous API edits would disable article feedback for anonymous users, that would indeed be a problem. A 30-second rate limit shouldn't interfere with that though. equazcion | 01:55, 3 Oct 2013 (UTC)
  • @Cyberpower678: It appears there are a number of people who are not familiar with the API. Perhaps you could add a statement at the top of the RfC similar to this in order to help people understand the proposal better.

The API (or Application Programming Interface) is the way software interacts with Wikipedia. The API is for bots and automated tools, not for humans. Click here to see the API.

Briefly explaining the API at the top of the RfC might save a lot of time for those who are not familiar with it. FWIW, I think 1 edit per minute would be fine for IP editors. Best. 64.40.54.196 (talk) 02:20, 3 October 2013 (UTC)[reply]
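To make the boxed explanation above concrete, here is a minimal sketch of what "an edit through the API" actually looks like, based on the documented MediaWiki action API (see mw:API:Edit). The helper below only assembles the POST parameters and performs no network request; the page title and text are hypothetical examples:

```python
# Sketch of the edit flow a bot or script performs against MediaWiki's
# action API (api.php): obtain a CSRF token, then POST action=edit.
# Only the parameter assembly is shown; no request is actually sent.

API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

def build_edit_request(title, text, token, summary=""):
    """Assemble the POST parameters for an action=edit API call."""
    return {
        "action": "edit",
        "title": title,
        "text": text,
        "summary": summary,
        "token": token,   # CSRF token, normally fetched first via action=query&meta=tokens
        "format": "json",
    }

# A logged-out (IP) client goes through the same flow, just without login
# cookies -- this is the class of requests the proposal would throttle.
params = build_edit_request("Sandbox", "test text", token="+\\")
```

This is exactly the channel that tools like Twinkle and VisualEditor use on an editor's behalf, which is why the discussion below about whom a throttle would hit matters.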
  • No. Ratelimiting API edits is not the right way to fight spam. If you do that, they'll just screenscrape. Bots logging out is a failure of the bot to use assert=edit properly. Legoktm (talk) 02:23, 3 October 2013 (UTC)[reply]
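The assert=edit point above refers to the API's assertion mechanism: a bot sends assert=user (or assert=bot) with each edit, and if its session has silently lapsed the API refuses the edit rather than saving it anonymously from the bot's IP. A sketch of the idea; simulate_api below is a toy model of the server-side behaviour for illustration only, not MediaWiki's implementation:

```python
# Why a correctly written bot cannot accidentally edit while logged out:
# with "assert": "user" in the request, a lapsed session produces an
# API error instead of an edit attributed to the bot's IP address.

def add_assert(params, require_login=True):
    """Return a copy of the edit parameters with the assert guard added."""
    guarded = dict(params)
    if require_login:
        guarded["assert"] = "user"  # "bot" additionally asserts the bot flag
    return guarded

def simulate_api(params, session_logged_in):
    """Toy stand-in for the server: reject asserted edits from logged-out sessions."""
    if params.get("assert") == "user" and not session_logged_in:
        return {"error": {"code": "assertuserfailed"}}
    return {"edit": {"result": "Success"}}

# Session silently lapsed: the edit is refused, not saved anonymously.
resp = simulate_api(add_assert({"action": "edit"}), session_logged_in=False)
```

A bot that omits the assertion is the failure mode described above: its edits quietly go through as IP edits.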
    Screenscraping is more difficult than the API. It's at least one step to fight spambots.—cyberpower ChatOnline 02:29, 3 October 2013 (UTC)[reply]
    Not really. It would take about 5 minutes to write an edit function that screenscrapes. Really it would take no time at all, Pywikibot-compat ships with one. Legoktm (talk) 03:26, 3 October 2013 (UTC)[reply]
    Yeah I'm not really getting Lego's logic there. The API is built for automated edits, so that legit automated tools don't need to screen scrape. Why provide the same ease to likely illegitimate ones? Better to make it harder for them, at least. If they do resort to screen scraping, that'll still end up being slower than API edits, and will be far less reliable at making successful edits. equazcion | 02:36, 3 Oct 2013 (UTC)
    You just said that any anonymous edit is an illegitimate edit. Are you sure you meant that? Legoktm (talk) 03:26, 3 October 2013 (UTC)[reply]
    Are you sure I said that? equazcion | 03:47, 3 Oct 2013 (UTC)
    Gr, I read too fast, sorry. Regardless, do you have any evidence that proves that anonymous edits via the API are likely to be bad? Legoktm (talk) 18:54, 3 October 2013 (UTC)[reply]
  • Comment: I don't think this is the right way to go about this, but if it does happen, I think a limit of 10 per 5 minutes (or some other multiple of 1 per 30 seconds) should be used instead, to allow an occasional short burst. Jackmcbarn (talk) 02:31, 3 October 2013 (UTC)[reply]
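The "10 per 5 minutes" variant above is, in effect, a token-bucket limit rather than a fixed 30-second interval: short bursts are allowed, but the sustained rate stays the same. A generic sketch of that difference (this is an illustration of the concept, not MediaWiki's rate-limiter; timestamps are passed explicitly to keep it easy to follow):

```python
class TokenBucket:
    """Allow bursts up to `capacity` edits, refilling at `rate` tokens/second.
    "10 per 5 minutes" corresponds to capacity=10, rate=10/300."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity  # start with a full bucket
        self.last = 0.0

    def allow(self, now):
        # Refill according to elapsed time, then spend one token if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, rate=10 / 300)   # the burst-friendly variant
burst = [bucket.allow(now=0.0) for _ in range(11)]  # 11 rapid-fire edits
later = bucket.allow(now=30.0)                      # one token refills in 30 s
```

A strict 1-edit/30-sec limit is simply `TokenBucket(capacity=1, rate=1/30)`: same long-run rate, no burst, which is exactly the "fix that typo you spotted right after saving" problem raised in the oppose above.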
Thanks, Werieth and cyberpower. This is what I was asking about; before I registered this account, I edited for years as an unregistered IP. I guess my follow-up question would be what it means to edit through the API and not the regular Wikipedia editor...but I have the feeling that will be a technical answer, and I'm guessing the bottom line is that regular editors don't edit through the API. I guess you can confirm or deny this? Thanks again! Liz Read! Talk! 17:01, 3 October 2013 (UTC)[reply]
@Liz: It means using anything other than the default edit form for editing, be it Twinkle, VisualEditor, Popups, or any of the dozens of other tools which can edit pages on your behalf. The only notable exception I can think of is wikEd, which merely enhances the default form. Matma Rex talk 17:17, 3 October 2013 (UTC)[reply]
Well, Matma Rex, I think you need to have a registered account to use Twinkle but not for VisualEditor. Since VE was intended to make Wikipedia more "user-friendly" for casual users who might use an IP account, this sounds like it could negatively affect them. I know that, unfortunately, sometimes I make an edit and immediately notice a typo and need to correct it. Thirty seconds doesn't sound long but when you think of it as two edits per minute, it does slow you down when you're making small edits.
I wish there were a way to distinguish unregistered accounts from spambots. How about only 1 edit every 5 seconds? Liz Read! Talk! 17:47, 3 October 2013 (UTC)[reply]
  • So the fact that this will effectively block all IP editors (who are forced to use VE which edits via the API on their behalf according to the core developer above) emphasizes that this proposal is a request to require all IP editors to create an account to edit on Wikipedia against the spirit of pillar 3. Technical 13 (talk) 18:02, 3 October 2013 (UTC)[reply]
  • I meant 5P#3 and have corrected it above. Technical 13 (talk) 15:51, 3 October 2013 (UTC)[reply]
    The same argument as in my first reply about "Wikipedia is free content" still applies. The "free" in free content refers to the reader's right to modify, redistribute, or sell our content under various conditions (e.g., you need to give attribution, at least a URL). It does not refer to everyone's right to edit the actual content on Wikimedia sites, but if they disagree they can fork. Using that logic, you could also say that blocking users goes against our principles, as they can no longer edit, or that open proxies should be able to edit. Besides, (and this is the main point) they are not proposing to completely eliminate IP edits, but to throttle anonymous edits through the API, which bots and scripts (notably VE) use. I hope you can understand the difference between distributing modified or original copies with attribution and actually editing and submitting the changes to the original source. If you want to argue the case for your opinion using basic Wikimedia principles, I would personally go with founding principle #2. Maybe I'm wrong though, in which case I would love to know the original meaning of editing in "Wikipedia is free content". πr2 (tc) 18:07, 3 October 2013 (UTC)[reply]
  • WP:5P#3 specifically says, "Wikipedia is free content that anyone can edit, use, modify, and distribute". Setting this ratelimit would prevent IPs from editing Wikipedia (except to use it as a social networking site because they will only be able to freely edit talk spaces). Technical 13 (talk) 19:00, 3 October 2013 (UTC)[reply]
    • T13, don't you think that's just a little too much hyperbole? This doesn't prevent IP addresses from editing articles (and in fact, I'm not sure where you're seeing a connection to talk pages at all), except for a few edge cases. I think those edge cases, combined with the very small benefit, makes this proposal not worth it, but let's not go around saying things like outright preventing IPs from editing Wikipedia. Writ Keeper  19:22, 3 October 2013 (UTC)[reply]
  • Writ Keeper, if IPs edit with VisualEditor, and VE edits via the API, and the API is ratelimited to only allow 1 edit every 30-60 seconds, then that prevents IP editors from editing in a convenient manner, able to catch and quickly fix their typos or whatnots as is exemplified below. If IPs can't edit conveniently, then they are forced to register an account or not edit. As an example use case, there are 0 Category:G13 eligible AfC submissions waiting to be assessed. The way that I personally review these is to open 5-10-20 of them in separate tabs and let them load. As soon as the first one is loaded, I zip through all of them and click "review" to open the AFCH script. Once I get to the end of the list doing that, I zip through all of them again clicking "tag for G13" for all of the ones that are blank or obviously nothing. Doing this takes about 5 seconds to tag 10-15 pages as G13. If I wasn't logged in, I wouldn't be able to review G13 eligible drafts because I would be locked out by this proposal. Now, I admit this is an edge case, but there are lots of various edge cases like this. Technical 13 (talk) 19:35, 3 October 2013 (UTC)[reply]
    • No you wouldn't, because IPs don't currently use VE. They might in the future, in which case this proposal would be much more significant (though it still wouldn't *absolutely* deny them, which is what you said), and there are other things that IPs would use the API for legitimately, but right now, *any* use of the API by a logged-out editor remains an edge case, not a main case. This proposal is not unreasonable. Writ Keeper  19:43, 3 October 2013 (UTC)[reply]
  • Support. This is highly unlikely to stifle any good edits, but more than likely to stifle many bad ones. bd2412 T 16:06, 3 October 2013 (UTC)[reply]
  • Oppose. Not a bad idea on paper, but the possibility of limiting good edits through things like VE, the Teahouse scripts, etc. is not worth making things very slightly less easy for spambots. Writ Keeper  18:03, 3 October 2013 (UTC)[reply]
  • Oppose if VE edits through the API, as stated above. I do not use VE, but I often make multiple edits per minute, manually, not using any bots or tools. Here is an example of 11 edits in 5 minutes from my contribution history, which contains many such examples:
    • 22:46, 2013 September 27 (diff | hist) . . (+8)‎ . . m WRGC (AM) ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:46, 2013 September 27 (diff | hist) . . (+6)‎ . . m WPVM-LP ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:45, 2013 September 27 (diff | hist) . . (+10)‎ . . m WPHY-CD ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:45, 2013 September 27 (diff | hist) . . (+10)‎ . . m WPGC-FM ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:44, 2013 September 27 (diff | hist) . . (+6)‎ . . m WLBT ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:44, 2013 September 27 (diff | hist) . . (+5)‎ . . m WLAB ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:44, 2013 September 27 (diff | hist) . . (-2)‎ . . m WKND (album) ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:43, 2013 September 27 (diff | hist) . . (-35)‎ . . m WET Web Tester ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:43, 2013 September 27 (diff | hist) . . (+6)‎ . . m WDBD ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:42, 2013 September 27 (diff | hist) . . (+8)‎ . . m WCQS ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
    • 22:42, 2013 September 27 (diff | hist) . . (-4)‎ . . m WBAL-TV ‎ (Fixing "Pages with citations using unnamed parameters" error.) (current)
I'm fixing small errors with legitimate edits. If these edits were blocked just because I used VE, I would not be able to fix these errors as efficiently. – Jonesey95 (talk) 18:37, 3 October 2013 (UTC)[reply]
@Jonesey95: It doesn't apply to registered editors, aka you.—cyberpower ChatOnline 14:18, 4 October 2013 (UTC)[reply]
  • Support - IP editors wanting to work faster can............. register. They should............... register. Why we still do not insist that everyone ................. register is beyond me. Carrite (talk) 06:00, 4 October 2013 (UTC)[reply]
  • Oppose After seeing more discussion, particularly from those who support this proposal, I see absolutely no reason to do this. The stated goal is to "stop" spambots, but this is only an extremely minor and temporary inconvenience for the spambots, if the assumption that these spambots actually use the API is even correct. This is far from justifying the furthering of the treatment of IP editors as second-class editors. And I'm very disappointed that there are editors who support this because it makes things worse for IP editors.
    Also, Speedy Close because there is no way this proposal will have any of the positive benefits it claims but will certainly give us the drawbacks that have been identified (e.g. breaking WP:Teahouse for IP editors). There's no need to continue to waste everyone's time with it or to continue with the anti-IP sentiments some are raising. Anomie 11:31, 4 October 2013 (UTC)[reply]
  • Oppose It should be obvious that anonymous users shouldn't run bots. However, anonymous users need to be able to use tools such as mw:VE. Also, it is trivial for spambots to edit using the standard edit form. Download the standard edit form, extract some information from it and post your spam to the server. Disabling the API wouldn't really make things more difficult for spambots. --Stefan2 (talk) 20:35, 4 October 2013 (UTC)[reply]
  • Comment To confirm, yes, VisualEditor does indeed use the API, as do many other tools (and long-term I imagine the wikitext editor will too; it's an insane architecture right now). This means this would negatively impact anonymous users of VisualEditor and similar tools, like the suggested longer-term set of curatorial bits and bobs around categorisation (a "souped-up" HotCat gadget, possibly), language links (extending Wikidata's current gadget), etc. I would strongly advise against wanting this configuration change. Jdforrester (WMF) (talk) 01:26, 5 October 2013 (UTC)[reply]
    • I'd hope the wikitext editor would not exclusively use the API, or you'd be locking out non-JavaScript-supporting clients from editing at all. Degraded experience is fine, "go away" isn't. Anomie 11:19, 6 October 2013 (UTC)[reply]
  • Comment This is pointless. I do a lot of spambot blocking, and occasional reverting, but what I see is edits spaced out over minutes, so I do not think this will help stop spambots. I would suggest putting in an edit filter first to try to catch this abnormal behaviour before a permanent code change. Bots running out of control can happen when logged in too, so the gain is small in that respect. Graeme Bartlett (talk) 04:14, 5 October 2013 (UTC)[reply]
  • Oppose per the above. Humans, whether directly or indirectly, do edit Wikipedia through the API, and many people make more than two edits per minute at times.  Hazard SJ  03:27, 6 October 2013 (UTC)[reply]
  • Oppose. I don't see any indication that spambots actually use the API, and it's reasonable to assume that most probably don't. I took a look at Wikipedia:Archive.is RFC, as Equazcion suggested, and it appears that in most cases the edits were spaced out by minutes or hours. It's difficult to see what defense this proposal would provide against its intended target. —Emufarmers(T/C) 05:37, 7 October 2013 (UTC)[reply]
  • Oppose. When doing an operation such as moving content from one article to another, I have both articles open in two separate browser tabs. I perform the operation, then save the edits near-simultaneously. If this change would prevent an IP editor doing useful work such as this (for instance if they were using the VE) it would be a bad idea. Perhaps a different threshold that would clearly identify a bot (6 edits in 2 minutes maybe?). --LukeSurl t c 12:47, 7 October 2013 (UTC)[reply]
  • Oppose. API usage is legitimate when editing via some external tool, browser or script, including VE and some toolserver/labs stuff. Wikipedia website is just one most popular interface, but that doesn't mean there aren't or shouldn't be others. Yes, spambots most likely use API, but so do legitimate IPs. And this wouldn't solve the issue, just inconvenience spammers to edit slower or use website POSTs. Saying IPs should register because they use some external editor instead of the website is counter to our philosophy. —  HELLKNOWZ  ▎TALK 10:31, 10 October 2013 (UTC)[reply]
  • Support. Actually, I don't see any reason why we should allow anonymous API edits at all. --NaBUru38 (talk) 17:15, 12 October 2013 (UTC)[reply]

Testing new version of GettingStarted experience

Hey, for those interested, soon we're going to be running a controlled experiment (an A/B test) of a new version of the GettingStarted experience delivered to new users right after they register. This means that for about a week, the experience when you register will differ depending on whether you have an odd or even user id. There's a thread on this talk page, and I would really appreciate any detailed feedback people might have. Thanks, Steven Walling (WMF) • talk 02:20, 4 October 2013 (UTC)[reply]

BTW, this test is live now. Steven Walling (WMF) • talk 00:20, 9 October 2013 (UTC)[reply]

timeouts

Anyone else getting timeouts on log pages for administrators? Special:Log/Drmies and Special:Log/Kww both get 504 errors for me. If I choose a non-admin, say Special:Log/Technical_13 or Special:Log/PantherLeapord, I get a response. — Preceding unsigned comment added by Kww (talkcontribs) 19:32, 4 October 2013 (UTC)[reply]

I haven't seen a timeout, but I have noticed that viewing my log takes a lot longer than usual. ​—DoRD (talk)​ 20:04, 4 October 2013 (UTC)[reply]
Kww, yes. Long page loads in general, too. Killiondude (talk) 20:10, 4 October 2013 (UTC)[reply]
Hmm, I can get each individual log, but I can't get "all logs". I wonder if there's just a raw capacity limit. I've been an admin for a while, so I have a healthy logbook, but I wouldn't think that either me or Drmies would be record-setting. DoRD, when you say you didn't get timeouts, do you mean that you are able to get your own, or that you are able to see mine?—Kww(talk) 20:36, 4 October 2013 (UTC)[reply]
I was talking about my log, but now it is timing out as well :\ ​—DoRD (talk)​ 21:03, 4 October 2013 (UTC)[reply]
Add me to the list - I can't get Drmies's or mine. Peridon (talk) 09:50, 5 October 2013 (UTC)[reply]
I'm able to get DoRD's logs after about a 2-3 minute delay. Drmies and Kww give me the timeout. equazcion | 09:59, 5 Oct 2013 (UTC)
I just got Peridon's logs after a delay similar to DoRD's. equazcion | 10:03, 5 Oct 2013 (UTC)
I just got DoRD's after a long wait, but mine timed-out. Peridon (talk) 12:00, 5 October 2013 (UTC)[reply]
Me too. Using the limit parameter doesn't help. There seems to be an issue somewhere :) -- zzuuzz (talk) 13:18, 5 October 2013 (UTC)[reply]
Hopefully fixed in gerrit:87168. Legoktm (talk) 18:52, 5 October 2013 (UTC)[reply]
Still getting 504 for Drmies. Peridon (talk) 10:08, 6 October 2013 (UTC)[reply]
So the commit I linked above has been merged into MediaWiki core, but not yet deployed, so the errors will still occur. It should be deployed to enwiki in less than 2 days. Legoktm (talk) 01:17, 9 October 2013 (UTC)[reply]
Thank you it seems to be working now. -- zzuuzz (talk) 05:30, 11 October 2013 (UTC)[reply]
Yes, for me too. JohnCD (talk) 11:31, 12 October 2013 (UTC)[reply]

Something suddenly wrong with ref numbering/lettering -- and more (on IE10, at least)

The referencing machinery is so complex I don't know how to begin investigating what's wrong. Suddenly (within the last 12-24 hours?), well... here's an example:

Article text.<ref group=upper-alpha>Ref text</ref>
{{reflist|group=upper-alpha}}

should produce

Article text.[A]
A. ^ Ref text

Instead, it's producing

Article text.[A]
1. ^ Ref text

Here's actual, live code so you can see what's happening for you.

Article text.[A]

  1. ^ Ref text

If the first thing on the immediately previous line is A then things are OK for you; if it's 1 then you're getting the malfunction I'm getting. If it's anything else then the universe has gone mad.

The code's set up in User:EEng/sandbox if you want to try it yourself. EEng (talk) 23:09, 5 October 2013 (UTC)[reply]

Looks fine for me. Werieth (talk) 23:11, 5 October 2013 (UTC)[reply]
Yeah looks good to me too. A, a, and 1 are displaying in your sandbox page, respectively. equazcion | 23:12, 5 Oct 2013 (UTC)
Wait, wait, let me guess... You're using some browser other than IE. 'Cause I'm on IE10.9.9200.16686 (for the avoidance of doubt). Just tried it on Chrome and it works correctly. So, any fellow IE sufferers getting the same problem? (Please, no gloating from Chrome/Mozilla/Safari types.) EEng (talk) 23:35, 5 October 2013 (UTC)[reply]
But, but, but Internet Exploder© is the best browser ever invented! Werieth (talk) 23:45, 5 October 2013 (UTC)[reply]
I said NO GLOATING! EEng (talk) —Preceding undated comment added 23:49, 5 October 2013 (UTC)[reply]
I wasn't gloating. IE6 is the best browser ever created, and I still use it. Werieth (talk) 23:50, 5 October 2013 (UTC)[reply]
It displays correctly for me in IE10, but I'm using an older version (10.0.9200.16540). equazcion | 23:40, 5 Oct 2013 (UTC)
I don't suppose that for the greater good you'd like to upgrade to .16686 just to see? Why does IE need 16000 versions anyway???? OK, someone out there knows what's going on with this, I'm sure. We await your wise counsel. EEng (talk) 23:47, 5 October 2013 (UTC)[reply]
It's called Agile software development. You know, like VisualEditor.... Risker (talk) 03:24, 6 October 2013 (UTC) Not gloating, I use IE all the time, just not this version.[reply]
I would but they tell me it may require a reboot, and I've got a hot streak of 78 days of uninterrupted uptime going on my computer. That and I never use IE except to check the problems other people report with it, which I don't experience because I use browsers that work and don't require reboots to update =] equazcion | 23:50, 5 Oct 2013 (UTC)
I said NO GLOATING! But where exactly are you looking for this upgrade? And is the newest version offered beyond 16686, by any chance? EEng (talk) 23:55, 5 October 2013 (UTC)[reply]
Wait. Just answered my own question. "You've got the latest Internet Explorer for Windows 7, but you can be one of the first to try Internet Explorer 11 Release Preview." Well, gang, what do you think? Shall I stay in the frying pan, or jump into the fire? It says, "Be the first to try Internet Explorer 11 Release Preview." Who can resist that? I'd be the first! How proud my friends and loved ones will be as they accept the Bold Self-Sacrifice Medal, on my behalf, from Mr. Gates himself! EEng (talk) 23:57, 5 October 2013 (UTC)[reply]
If I'm not mistaken, you can only install 11 if you have Windows 8, but (seems I'm mistaken, a preview release for 7 is out) yeah if you can I would. PS. this seems to be saying the latest version is a lower number than the one I have installed. I dunno. Anyway you should really invest in a real browser, they don't cost that much more :) </end gloat (for now)> equazcion | 00:02, 6 Oct 2013 (UTC)
Yeah, well, your mother wears army boots. Anyone else seeing this? See test example I've inserted at the end of my original post. EEng (talk) 00:19, 6 October 2013 (UTC)[reply]

Wait, there's more going on than that! There's also extra vertical whitespace between each article title and the horiz line below it, and other formatting oddities -- all look good in Chrome. I've rebooted, cleared IE cache, the works. Here's what's weird -- when first reloading a page, the spacing between title and horiz line is correct, then at the last moment more space opens up between them -- it's about the same timing as when, on watchlist, all the little dropdown gadgets and stuff appear at the last moment. Hmmm. Now really -- is it just me? EEng (talk) 02:56, 6 October 2013 (UTC)[reply]

See this other bug which has a similar-sounding delay. Related? EEng (talk) 13:24, 6 October 2013 (UTC)[reply]

C'mon, really? Am I the only one? Anyone??? EEng (talk) 00:54, 9 October 2013 (UTC)[reply]
I installed the IE11 preview and I'm still not seeing your malfunction there, FYI. equazcion | 04:44, 9 Oct 2013 (UTC)
Looks fine in a seldom-used 10.0.9200.16721. Of course, if you really have a version 10.9.9200.xxxxx it may have all sorts of top-secret features. NebY (talk) 16:35, 13 October 2013 (UTC)[reply]

Page view statistics broken (again)

Henrik's tool produces an "HTTP 500 Internal Server Error" on any wiki page. Just wondering if anyone can help. Thanks in advance, XOttawahitech (talk) 04:39, 6 October 2013 (UTC)[reply]

Oops, should be fixed now - the database had fallen over, someone needed to push it upright again. Stats for yesterday should be up soon, in an hour or two. henriktalk 07:30, 6 October 2013 (UTC)[reply]
Still not up, Henrik. Do you have an idea when it may be working again? Thanks--أخوها (talk) 19:25, 8 October 2013 (UTC)[reply]
It's working for me, at the moment. equazcion | 19:29, 8 Oct 2013 (UTC)

Saving talk pages

Is it just me, or is anyone else finding that it's taking two minutes or so to save a talk page post? They're opening slower than most pages too. Peridon (talk) 11:24, 7 October 2013 (UTC)[reply]

I've been getting intermittent slowness when previewing and saving in general, ever since that was first reported around a week ago. It keeps fixing itself and reappearing. equazcion | 11:29, 7 Oct 2013 (UTC)
The issue is probably the fact that most talk pages haven't been archived in about a week since the MiszaBots went down. Legobot went on an archiving spree today, so most pages should be a bit faster. Legoktm (talk) 00:02, 8 October 2013 (UTC)[reply]
I noticed Legobot taking over archiving for a lot of Misza-configured pages today. Thanks for doing that. I'm not entirely convinced page length is actually the root of the particular speed issue we're seeing, but that is still quoite noice. equazcion | 01:39, 8 Oct 2013 (UTC)
Talk pages have always been slower than articlespace pages for me; how much slower varies. - The Bushranger One ping only 17:49, 8 October 2013 (UTC)[reply]

Server issues part MCMXVI

Just noticed the servers are going cough-sputter-hack, with HotCat not loading sometimes (sometimes taking five-to-six refreshes before correctly loading) and pages on both en.wiki and Commons occasionally loading in an "unstyled" state. - The Bushranger One ping only 17:30, 8 October 2013 (UTC)[reply]

Anyone Else Noticing Wikipedia Loading Problems?

WP has been acting wonky the past few minutes. Pages load slowly--look like HTML 4, early-web renderings--no sidebar, just hyperlinks and text. And I'm not seeing my Twinkle tabs. I've tried purging and doing a force-refresh (Ctrl+Shift+R) but no luck. Anyone else experiencing this? -- Veggies (talk) 17:33, 8 October 2013 (UTC)[reply]

Yep, just posted about this above (and got the "unstyled" format the first time I clicked 'edit' here). Just keep refreshing until it works... - The Bushranger One ping only 17:35, 8 October 2013 (UTC)[reply]
I'm getting problems intermittently, but refreshing a few times usually fixes them. Specifically it seems to be loading slower and forgetting to load parts of the pages for me. Ks0stm (TCGE) 17:36, 8 October 2013 (UTC)[reply]

There was a minor issue with the bits caches so you may have seen some unstyled content for a bit. All should be resolved now. ^demon[omg plz] 17:51, 8 October 2013 (UTC)[reply]

Overlapping portal boxes in Safari 6.0.5

Could someone help out at Portal talk:Australia#Error on portal page. please? -- John of Reading (talk) 18:31, 8 October 2013 (UTC)[reply]

Speak up about the trademark registration of the Community logo.

Getting rid of "thank"

How do I get rid of the "(thank)" option that has suddenly shown up when displaying diffs? Beyond My Ken (talk) 21:42, 8 October 2013 (UTC)[reply]

See Wikipedia:Notifications/Thanks#How to turn off this feature. PrimeHunter (talk) 22:18, 8 October 2013 (UTC)[reply]
Thanks, but the advice there was to check the "exclude from future experiments" button, which I already have checked - so something must have changed. Beyond My Ken (talk) 23:44, 8 October 2013 (UTC)[reply]
You're right. It worked last time I tried but not now. I guess WMF has decided it's no longer a feature experiment and shouldn't be disabled by "Exclude me from feature experiments". This in Special:MyPage/common.css should remove the link but not the surrounding parentheses:
.mw-thanks-thank-link {display: none;}
PrimeHunter (talk) 00:00, 9 October 2013 (UTC)[reply]
@PrimeHunter: Thank you, that worked. Beyond My Ken (talk) 01:08, 9 October 2013 (UTC)[reply]
Well, it seemed to work perfectly at first, but now it's leaving behind a pair of parentheses "()" and a "|" on history pages where "Thanks" used to be. I can live with that, but I do wish WMF would stop fucking around with this stuff and leave the interface alone, or at least provide opt-outs for any shiny new toys that old farts like me don't want to deal with. (The old interface was good enough for 120,000 edits, I don't need new bells and whistles, thank you.) Beyond My Ken (talk) 03:53, 9 October 2013 (UTC)[reply]
Weird. That should work. Try clearing your cache etc.? On the backend, this preference is dependent on the extension which includes some odds-and-ends related to Vector. That extension is getting phased out, though the preference will almost certainly stay as far as I know. I'll try to check if it's being futzed with currently. In any case, I think you can also use personal CSS to hide the feature, though so far I have only been able to do what PrimeHunter did. Ping: Okeyes (WMF) and Kaldari Steven Walling (WMF) • talk 00:10, 9 October 2013 (UTC)[reply]
I'm not sure why the preference would not work any more, but it's possible it was changed. Probably best to file a bug on this. Kaldari (talk) 00:26, 9 October 2013 (UTC)[reply]
I think the whole Thank thing is kinda hokey but really, are you such a curmudgeon that just being confronted by the opportunity to send a thank-you is too much to bear? I mean, there are starving children in China browsing via dialup and they would give anything to have a nice Thank button they could click, and here you are just throwing buttons away like garbage. Count your blessings. EEng (talk) 00:50, 9 October 2013 (UTC)[reply]
"Undo" was the outside option on that line, now "Thanks" is, so when I go to undo an edit, I hit thanks instead.

Please forward email addresses of unthanked Asian children and I will thank them for... something or other. Beyond My Ken (talk) 01:03, 9 October 2013 (UTC)[reply]

No, no, it's that they lack buttons they can click to do the thanking. They get 1/10 cent per click, you know. EEng (talk) 02:30, 9 October 2013 (UTC)[reply]
Maybe we should send them some of those cricket toys that click when you press them? You know, the ones they used for recognition signals in the D-Day invasion? Beyond My Ken (talk) 03:53, 9 October 2013 (UTC)[reply]
Exactly. I was going to send you a thanks for posting this, but I didn't want you to think I meant to undo your post. :) --Onorem (talk) 01:15, 9 October 2013 (UTC)[reply]
One of the options (I think it's Rollback -- can't remember for sure) doesn't even say "Are you sure?" -- just goes ahead and does whatever it does, so now I'm afraid to click any of them. EEng (talk) 02:30, 9 October 2013 (UTC) P.S. BTW, what does the "Unlink" item on the Twinkle menu do -- I'm really afraid to try that one![reply]
I never understood the unlink feature. The Twinkle documentation basically makes it sound like it's supposed to remove all incoming links to the current page. You can click it and see a list of links it's supposed to remove before actually confirming the operation. What's puzzling is that you'd think the link list would match What Links Here, but it usually doesn't. There's not much explanation around on how the two differ and I gave up trying to find out a while ago. Maybe I'll take a look through the code sometime for kicks. equazcion | 02:44, 9 Oct 2013 (UTC)
I just tested it randomly a couple of times and it does now seem to match What Links Here (for the namespaces defined in Twinkle preferences). Maybe when I last checked there was a problem with the code, or I was just mistaken. Anyway there's your answer, it seems. equazcion | 03:03, 9 Oct 2013 (UTC)
Way above my pay grade for sure. Thanks for investigating. Now can someone please tell me if I'm the only one with the mysterious IE malfunction? I can stand it a while, I guess, but it must be very confusing for people reading these articles. EEng (talk) 03:20, 9 October 2013 (UTC)[reply]
Yep, it's back: " | bedanken)" at the end of every line in a history. Thanking people must no longer be an experiment, and they didn't see fit to provide us with a preference to turn it off.—Kww(talk) 05:17, 9 October 2013 (UTC)[reply]
Please assume good faith. I just said above that, as far as I know, no one intended to turn off the preference to hide the thank button. It might have happened as part of a backend housecleaning, and if it did, it should be undone. We may someday ask that people use personal CSS or a userscript to hide the button, once some time has passed and it's clear how widely it's been adopted. We have not done such an analysis, and the opt-out preference should still be respected. If it isn't, it's a bug. Steven Walling (WMF) • talk 05:22, 9 October 2013 (UTC)[reply]
I actually wasn't assuming bad faith, Steven. I assumed that features eventually stop being experimental. For some, I expect you to produce a preference to turn them off, and for others, I expect you to just incorporate them as a permanent part of the interface. While I think the "thank" feature is the silliest thing I've ever encountered, I don't expect to have a preference for every line item in a history display. I object to the idea that a 500 line history has to have 500 individual thank-you buttons, but that seems to be a done deal.—Kww(talk) 05:48, 9 October 2013 (UTC)[reply]
That's entirely reasonable. Thanks for the explanation. BTW: I think you're not the only one who finds that the button makes less sense in History. We should maybe reconsider that placement. Diffs make perfect sense, and maybe the watchlist too (since most changes there are unique). But a history page usually takes more inspection. Steven Walling (WMF) • talk 05:53, 9 October 2013 (UTC)[reply]
If distinguishing the undo link from the thank link is an issue (it was for me), the alternative is to make them visually distinct. I set some loud colors on .mw-history-undo since it's an important function. — Scott talk 09:47, 9 October 2013 (UTC)[reply]
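For anyone wanting to try Scott's approach, here is a minimal user-CSS sketch for Special:MyPage/common.css. The .mw-history-undo class is the one Scott names; the specific colors are just an illustration, and you may want to adjust them to taste:

```css
/* Make the undo link visually distinct from the adjacent thank link */
.mw-history-undo a {
    color: #cc0000;
    font-weight: bold;
}
```

This complements the earlier .mw-thanks-thank-link rule: one hides the thank link outright, the other keeps both links but makes them hard to confuse.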

User:Username/common.js page not creating

I seem to be unable to create the page User:Antiqueight/common.js - when I try I get the page up with no text box (everything else seems to be there) and when I click on save it seems to. But hours later there is still no page (and I purged cache). I was going to try out a script. I have Twinkle turned on. I don't know if this is related as when I try to go to the page above it has a note there pointing me to where to go to change the Twinkle preferences. I don't know where to look for answers so I thought I'd try here. No rush as I'm about to keel over for the night. But if you can HELP :-) --Antiqueight confer 22:48, 8 October 2013 (UTC)[reply]

What do you mean by "no text box"? Is there no edit box to type content into? It's not possible to create a page with no content. A page creation must contain at least 1 byte in order to save. PrimeHunter (talk) 22:54, 8 October 2013 (UTC)[reply]
Well - that was it. With wikiEd enabled there was no text box at all - so you are right - there was nothing to save (I hadn't thought of that). When I turned wikEd off the text box appeared. Thanks for the prompt. I went back and looked at the page and turned off everything I could think of and then up popped the text box. I've added the code and it all seems to work. I should probably leave playing with it til tomorrow!--Antiqueight confer 23:07, 8 October 2013 (UTC)[reply]
It looks like wikEd isn't compatible with CodeEditor (which was recently deployed to WMF projects to add syntax highlighting and such to all code pages) - probably because CodeEditor just replaces the entire edit area with its own UI. Not sure who would be able to fix/address this, but maybe if I mention it here someone else will magically know. -— Isarra 19:44, 10 October 2013 (UTC)[reply]
On Template:Bug, Cacycle said that versions of wikEd as of a few days ago should not be activating on pages with CodeEditor. Have you tried bypassing your cache? Anomie 21:38, 10 October 2013 (UTC)[reply]

Article taking ridiculously long time to save

Hi, I've been making some edits to Panama Canal, and the article is consistently taking a ridiculously long time to save, even when I am only editing a relatively short section. Is there anything obvious about the article that is unusual or inefficient or broken? 86.128.4.151 (talk) 02:05, 9 October 2013 (UTC)[reply]

Without and with {{Panama Canal map}}

                                  Without          With
CPU time usage                    7.456 seconds    12.373 seconds
Real time usage                   7.875 seconds    13.258 seconds
Preprocessor visited node count   37173            67762
Post-expand include size          329993           750995
Template argument size            69012            155049
I would say {{Panama Canal map}}. Something has happened recently to the processing of WP:RDTs, unrelated to template or module changes, that has slowed them right down. The {{Panama Canal map}} RDT itself hasn't been edited since 16 June 2013, and of the subtemplates, the only edit since 10 July 2013 is this one, which is insignificant. However, RDTs handle a lot of small images, and images are held on a different server than the wikitext. If that server is slow, RDTs will be slow, and so will pages that use RDTs.
By way of experiment, I did a full-article preview of Panama Canal both without and with the {{Panama Canal map}}, and I found that although the highest expansion depth just went from 29 to 31, some of the other figures were significantly greater; see the table above. Notice in particular that the last two rows are more than doubled. --Redrose64 (talk) 09:23, 9 October 2013 (UTC)[reply]
Thanks for taking the time to look. 86.160.83.13 (talk) 11:14, 9 October 2013 (UTC)[reply]

WMFlabs.org

Does anyone know why http://tools.wmflabs.org/enwp10/cgi-bin/list2.fcgi?run=yes&projecta=Medicine&importance=Top-Class&quality=C-Class is not working? The link was generated from the table to the right. I was trying to help medical students in a WP:Student assignment know which articles they should pick for maximum impact. In this case I clicked on the top-importance C-class articles. Thanks. Biosthmors (talk) pls notify me (i.e. {{U}}) while signing a reply, thx 09:57, 9 October 2013 (UTC)[reply]

Biosthmors Worked fine for me. Graeme Bartlett (talk) 10:42, 9 October 2013 (UTC)[reply]
Weird. Thanks Graeme Bartlett. So no one gets "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, mpelletier@wikimedia.org and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Additionally, a 500 Internal Server Error error was encountered while trying to use an ErrorDocument to handle the request."? I guess I should send an email, then? Biosthmors (talk) pls notify me (i.e. {{U}}) while signing a reply, thx 10:56, 9 October 2013 (UTC)[reply]
I get the same "500 Internal server error", so you're not alone. Fram (talk) 11:08, 9 October 2013 (UTC)[reply]
I am getting that error now. Also on other tools like https://tools.wmflabs.org/copyvios/. So it must be intermittent but widespread. Graeme Bartlett (talk) 11:29, 9 October 2013 (UTC)[reply]
I sent an email and linked to this discussion. Biosthmors (talk) pls notify me (i.e. {{U}}) while signing a reply, thx 12:47, 9 October 2013 (UTC)[reply]
  • The issue is known, and should be fixed today. The problem is that Tool Labs has been more popular faster than expected and some less-well behaved tools are eating up resources and breaking other tools. I'm going to put some partitioning in place to help alleviate the issue. — MPelletier (WMF) (talk) 13:23, 9 October 2013 (UTC)[reply]
Thanks, MPelletier. Can you please post here when you believe the problem is fixed, so that we can test? I'm seeing the same Internal Error since yesterday when trying to use a saved Catscan2 bookmark. – Jonesey95 (talk) 17:40, 9 October 2013 (UTC)[reply]

Looks like the server is in trouble again. I have just been trying http://tools.wmflabs.org/catscan2/catscan2.php, but there is no connection. No response either at http://tools.wmflabs.org/ --BrownHairedGirl (talk) • (contribs) 23:17, 10 October 2013 (UTC)[reply]

It's planned maintenance that turned out to take much longer than was planned. See http://lists.wikimedia.org/pipermail/labs-l/2013-October/001748.html. Anomie 01:10, 11 October 2013 (UTC)[reply]
Catscan is probably another issue. Nikkimaria (talk) 18:08, 11 October 2013 (UTC)[reply]

Updating Dashboard

I attempted to update WP:Dashboard by removing historical pages and adding The Teahouse, but my edits were reverted by Legobot (see here). Help‽ ʍw 13:11, 9 October 2013 (UTC)[reply]

Chances are Legobot needs to be informed of which boards to include/exclude, and simply removing them from the page will just trigger an "update" where it puts things back the way it thinks they should be. I'm sure @Legoktm: can fix it, and perhaps modify the behavior so that we can include/exclude via templates on the dashboard pages? equazcion | 13:58, 9 Oct 2013 (UTC)
  • Perhaps create wp:Dashboard/new for progress: Until all the bugs in Legobot can be fixed, try updating wp:Dashboard/new to make upgrades without artificial restraints, and then other users can read that page to review the enhancements. Per the "wiki" concept, nothing derails progress like pages which require permission to update. Also it is not just problems with Legobot, but many Bots have trouble, such as date-bots removing date links from calendar(!) pages, where any fool would know a page with 365 date links is probably not "over-linked" for dates. Perhaps Bots should be called "idiOTs". However, even intelligent people can write bad software due to all the complex factors to consider. Meanwhile, focus on a better version, as wp:Dashboard/new, until the bugs in Legobot can be corrected. We are months behind in updates to valuable tools, due to being sidetracked by the planned VE disruption fiasco. -Wikid77 (talk) —Preceding undated comment added 20:20, 9 October 2013 (UTC)[reply]
    • I'm actually rather appreciative myself of those who maintain bots, as they provide some pretty important behind-the-scenes stuff that would otherwise be a pain. Especially Legoktm, who took it upon himself to take over for quite a few retired bots. Developers tend to get it in the arse from the audience because there are always flaws, no matter how much care a programmer tries to devote, and this particular problem is really nothing to get bent out of shape about. equazcion | 04:15, 10 Oct 2013 (UTC)
@64.40.54.7: Makes sense.
@Equazcion: Your first comment is pretty much what I thought. It might be nice to be able to add or remove noticeboards through the templates at the dashboard, though it's certainly not a necessary feature (it's not that often that new noticeboards are created or old ones are closed). If such functionality is difficult/impossible to add, there should at least be a notice that all updates and modifications to those pages must go through Legoktm (although I half-agree with Wikid77 that that's a bit contrived).
@Wikid77: By all accounts, this is not the result of "bugs" in Legobot, though it is an unusual procedure to have to go through to update the pages. The biggest problem here seems to be that Legoktm has not been notified of the needed updates to the templates/bot (until now). I have no idea what you're getting at with "WP:Dashboard/new"; what would that be, a sandbox? We're not talking about sweeping upgrades and modifications to everything at the dashboard, just removing links to defunct noticeboards and adding a new one. There really isn't that serious of a problem with the pages as they are; they're just a little bloated with all the links to historical boards (the most serious problem is the lack of a link to the Teahouse).
ʍw 12:20, 10 October 2013 (UTC)[reply]

Resolved
See here. ʍw 22:46, 10 October 2013 (UTC)[reply]

randomly ordering Special lists

On some pages in the Special namespace, usernames should appear in random order, re-randomized with each refresh, rather than alphabetically. I thought of this for Special:ListUsers/checkuser, because I shouldn't select a name as if I were canvassing for a friend, and the person at the top of the alphabet shouldn't get overburdened with the kinds of contacts that should be distributed more evenly. Perhaps randomness should even be the default, with alphabetical order as the optional alternative. Nick Levinson (talk) 15:54, 9 October 2013 (UTC)[reply]

It is possible to write Lua script modules which can read any content page (but not wp:Special_pages) and then redisplay the data, such as in randomized order, so that the same entries/usernames do not get "top billing" at the start of a list every time. It is also possible to write a Lua-based template for edit-preview, which could accept a block of text as copy/paste from a page, and then re-display the data in any format, almost instantly. -Wikid77 (talk) 20:32, 9 October 2013 (UTC)[reply]
@Wikid77: You should really check your facts before you say whether technical things are possible or not. I just checked, and it is impossible to access the content of Special:ListUsers/checkuser from a Lua module on Wikipedia. It is possible to get some data about the title, such as the title text and the namespace name, but any attempts to get the page content just return nil. — Mr. Stradivarius on tour ♪ talk ♪ 04:33, 11 October 2013 (UTC)[reply]
Thank you for clarifying (I have noted it in my message above); that is why I had mentioned focusing on reformatting "copy/paste" text from a Special page. Indeed, this is not UNIX, where users could merely save output from stdout into any file or application ("grep > myfile") for further processing. If the developers need something valuable to do, perhaps they could provide a Scribunto interface to allow a Lua script to read from the Special pages. -Wikid77 (talk) 05:19, 11 October 2013 (UTC)[reply]
I think it's very unlikely for that to happen for non-transcludable special pages. Bawolff (talk) 12:00, 13 October 2013 (UTC)[reply]
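Re-displaying a copied list in random order is straightforward once the text is in hand, whatever interface ends up providing it. A minimal sketch in plain JavaScript (the function name is hypothetical) using the standard Fisher–Yates shuffle, which gives every ordering equal probability:

```javascript
// Fisher–Yates shuffle: returns a new array with the entries in random
// order, so no username keeps "top billing" across refreshes.
function shuffled(list) {
    var out = list.slice();                          // don't mutate the input
    for (var i = out.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1)); // 0 <= j <= i
        var tmp = out[i];
        out[i] = out[j];
        out[j] = tmp;
    }
    return out;
}
```

A page script would then replace the rendered list items with the shuffled array on each load.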

Teahouse looks strange

The Teahouse questions at the top of the page are indented but the ones at the bottom are all the way to the left against the border.— Vchimpanzee · talk · contributions · 21:24, 9 October 2013 (UTC)[reply]

There was a stray closing </div> causing it that I removed. Chris857 (talk) 21:52, 9 October 2013 (UTC)[reply]

Inserting tables

The toolbar above the edit box has a button to insert a table, with options for a header row, a bordered style, sortability, and the number of rows and columns. I suspect that most new tables added to articles are made with it. The header row is in a slightly darker grey color.

But shouldn't the first column be a header column as well, as detailed at Wikipedia:Manual of Style/Accessibility/Data tables tutorial#Overview of basics? Or at least included as another option in the gadget? Cambalachero (talk) 14:46, 10 October 2013 (UTC)[reply]

The section you linked to is just documentation on how to make header rows and columns, among other things, rather than a recommendation. You'll notice most of the table examples on that page lack a header column, and from experience I'm pretty sure most article tables don't have one (although I could be mistaken). equazcion 15:51, 10 Oct 2013 (UTC)

Cite error formatting

The bottom of Talk:Miller–Rabin primality test currently displays a weirdly formatted cite error notice. While the cite error is real, it should not be processed by MathJax.—Emil J. 16:07, 10 October 2013 (UTC) [EDIT: Since many people were confused by an unrelated error affecting the <math> markup I used to depict the error message, I replaced it with a screenshot. Also, let me stress that the notice is only visible with MathJax enabled.—Emil J. 13:22, 11 October 2013 (UTC)][reply]

Narrowing it down, <strong class="error">a b c</strong> also gets processed by MathJax. Something is wrong either with MathJax setup, or with CSS for this element-class combination.—Emil J. 16:16, 10 October 2013 (UTC)[reply]

Looks like that might be this as reported above. I'm not always seeing the error in your post on every load of this page, but intermittently. equazcion 16:24, 10 Oct 2013 (UTC)
I’ve only put the message in <math> tags here to imitate what I see at the linked talk page, but that’s not how it is actually coded. The bug you mention affects PNG output, whereas the misformatted error message I’m talking about only shows up if you enable MathJax (otherwise it is invisible).—Emil J. 16:48, 10 October 2013 (UTC)[reply]
I actually do see the error here intermittently, despite your coding choice and my having the default preference chosen. I also note that whenever a page load here shows the error in this section, it also shows up in the section above -- and when the error does not show up here, it does not show up above either. equazcion 16:51, 10 Oct 2013 (UTC)
It looks to me as if the issue is at least partly caused by copying a quote from the article page, including a ref tag, with no reflist on the talk page. I have commented out the ref tag. DES (talk) 17:04, 10 October 2013 (UTC)[reply]
Sigh. I intentionally left the ref unfixed so that the error message is visible. The problem I’m reporting is about the error message, not about the ref that triggered the message. The error message needs fixing whether this particular instance of the error is corrected or not.—Emil J. 17:13, 10 October 2013 (UTC)[reply]
Fine I will remove the comment symbols. I would have had no problem if you had done so. DES (talk) 17:39, 10 October 2013 (UTC)[reply]
(edit conflict) Ah okay I think I misunderstood. EmilJ is saying that even when the error doesn't show up, this ref shouldn't be getting formatted as a math expression (at least I think that's what they're saying). Sorry about that. I'm not very familiar with math coding and how references within them are supposed to be handled, so I'll let someone else comment on this. equazcion 17:06, 10 Oct 2013 (UTC)
No. I’m saying that when the error does show up, the error message shouldn’t get formatted as a math expression.—Emil J. 17:08, 10 October 2013 (UTC)[reply]
(ec) That makes sense, i failed to read the above fully I think. I agree that even where there is a cite error, it shouldn't be parsed as part of a math expression. DES (talk) 17:11, 10 October 2013 (UTC)[reply]
Although it seems to me that whatever text a ref produces will get processed based on the tags surrounding it, no? It could be probably be gotten around by changing the CSS to override purely in cases of errors, or for all refs if that's what people want, but I don't think this is actually a bug. equazcion 17:18, 10 Oct 2013 (UTC)
User:Trappist the monk might have some insight. equazcion 17:21, 10 Oct 2013 (UTC)
Missing reflist errors are only emitted after the whole page is processed, so they cannot depend on the tags surrounding <ref>’s even if they wanted to. Anyway, there doesn’t seem to be anything unusual about how the message is formatted in the HTML, as you can see for yourself in the HTML source (note that the message is always there, even if it’s made invisible by the default CSS). As I already wrote above, the problem can be triggered anywhere by <strong class="error">a b c</strong>. By default, this shows up in boldface red (as expected), but with MathJax enabled, it is processed as if it were a math expression (making it show up in a math italic font without spaces, black, or worse if the text happens to include LaTeX commands).—Emil J. 18:12, 10 October 2013 (UTC)[reply]
With the default PNG preference, I'm seeing the error text as a PNG, so it would appear this isn't dependent on mathjax, and that the math tags are processed after the ref tags are expanded. I'm not sure if a ref tag would ever realistically expand into something containing latex code, but I guess that's something to consider. I'm not sure what you mean exactly about the span tag -- if you're saying spans classed with "error" are processed as math even without math tags, the following code: <span class="error">a^{n-1}=1\pmod n</span> displays as ordinary red text with no math, at least for me, no matter which preference I have chosen (I tried both): My mistake, you said strong tags, not span tags -- with mathjax enabled, strongs classed with "error" do indeed show math: a^{n-1}=1\pmod n. Interesting... equazcion 18:38, 10 Oct 2013 (UTC)
There are no math tags anywhere close to the ref in question. Are you really saying that if you go to Talk:Miller–Rabin primality test, and scroll to the bottom of the page, there is a PNG containing the error message that I copied above? That certainly did not happen in my tests.—Emil J. 18:52, 10 October 2013 (UTC)[reply]
No, actually, I was about to post another correction and edit conflicted with you: I only see a PNG when there is no ACTUAL error -- the PNG I referred to is actually from the PASTED error text you posted (which makes this confusing :) ). When the actual red error shows up and my preference is set to PNG, the error shows as bold red text, NOT as an image -- so you were right about that as well. equazcion 18:55, 10 Oct 2013 (UTC)
I don't think I have much to contribute to this conversation. The error message at the top of this section has, except for one brief instance, always appeared as bold red text. It did briefly appear as italicized black text in a .png image but if I backtrack through the page history I can't see a black italic .png version. I have not seen the error message in either guise at Talk:Miller–Rabin primality test.
Even though it's apparent that this problem has nothing to do with citations, I would add that whenever including references in a talk page, it's appropriate to use {{reflist-talk}} and set |close=1.
As I preview this edit I'm seeing the error message as black italic .png.
Trappist the monk (talk) 19:10, 10 October 2013 (UTC)[reply]
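If the cause is MathJax's page preprocessor picking up elements carrying `class="error"`, the usual remedy in MathJax 2 is the tex2jax `ignoreClass` option. A hedged configuration sketch follows; `ignoreClass` is a documented tex2jax option, but whether MediaWiki's MathJax rendering mode exposes this configuration hook is an assumption:

```javascript
// MathJax 2 configuration fragment (browser-only, not standalone code):
// tell the tex2jax preprocessor to skip elements with the "error" class,
// so cite error messages keep their bold red styling instead of being
// typeset as math.
MathJax.Hub.Config({
    tex2jax: { ignoreClass: 'error' }
});
```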

Beware intermittent loss of CSS format styles

There have been problems this week with file-cache failures for bits.wikimedia.org or upload.wikimedia.org (for the math-tag png cache images), and in some cases none, or only partial, CSS styles have been applied to pages. Hence, it is possible that a math-tag style was active in some cases, as when the cite-references error message was formatted. Some users have noted that refreshing the page several times gives take-your-pick results. Unfortunately, with all the hundreds (thousands?) of new CSS classes in span-tags, sometimes failing to process in the browser, the overall interface to Wikipedia has become "WYSIHUH?" or "WYSIWTF?", and perhaps we should write an essay "wp:WYSIHUH" to explain the convoluted results which can often appear in pages. -Wikid77 (talk) 05:50, 11 October 2013 (UTC)[reply]

Right-to-Left failing

When trying to see the next 200 pages in a category in a right-to-left language, it fails. It was working as recently as a few weeks ago.

See: [3] and click on the next 200 button - for non-Farsi readers, using Chrome with translation on will assist you. Carlossuarez46 (talk) 21:21, 10 October 2013 (UTC)[reply]

must be something transient - tried it just now (and also with "uselang=en", "uselang=he", and "uselang=fa"), and found no problem. can you try it when logged out? if logging out solves the problem, it may be something in your account - maybe some preference, a gadget, or something in your "common.js" or <skin>.js. peace - קיפודנחש (aka kipod) (talk) 06:00, 11 October 2013 (UTC)[reply]

Can't access tools.wmflabs.org

RESOLVED: Was offline maintenance, back now. -Wikid77 14:40, 11 October 2013 (UTC)[reply]

I just clicked "Articles created" at the bottom of my Special:Contributions page and got this: "Unable to connect. Firefox can't establish a connection to the server at tools.wmflabs.org." --Anthonyhcole (talk · contribs · email) 05:38, 11 October 2013 (UTC)[reply]

  • Perhaps re-try in 30 minutes: Still stuck 25 minutes later. There are still some occasional connection failures with wp:WMFLabs, even though the system is much faster than the old toolserver system, and I have found a re-try within 15 minutes often works, as compared to multi-hour waits to re-connect with the old toolserver. Perhaps there is a WMFLabs upgrade in progress now. -Wikid77 (talk) 06:05, 11 October 2013 (UTC)[reply]
The servers are undergoing maintenance, and it's taking longer than expected. See #WMFlabs.org above. – Jonesey95 (talk) 06:12, 11 October 2013 (UTC)[reply]
  • WMFLabs back online again: I just confirmed the "articles created" button works again under Contributions. Perhaps there might be a better hour to run server maintenance, to allow extra time. -Wikid77 14:40, 11 October 2013 (UTC)[reply]

spam

How in God's name does Wikipedia stop spam? I have installed blacklists, captchas, etc. on my own wikis and I am still besieged with spam.

It's disheartening that smaller but substantial hosting sites similar to Wikia, such as http://www.shoutwiki.com, have no effective way to fight spam; to continue to allow anonymous editing, they must manually block spam bots.

This spam scourge indirectly affects Wikipedia editing, because any smaller MediaWiki site blocks anonymous editors. These potential editors are never introduced to the ease and wonder of editing a wiki.

Is there any working extension which can rid our sites of the relentless spam bots?

Does Wikipedia have hosting which gives me the umbrella protection from spam bots, but with the freedom (unlike Wikia) to modify the wiki the way I want, with root access?

Has the foundation ever considered hosting wikis?

These spam bots are relentless. Igottheconch (talk) 06:31, 11 October 2013 (UTC)[reply]

Have you seen mw:Manual:Combating spam? PrimeHunter (talk) 09:41, 11 October 2013 (UTC)[reply]
The short answer to How does Wikipedia stop spam? is: with difficulty, and only by continuous, dedicated, tedious work by an army of new page patrollers, recent changes patrollers and administrators flagging, deleting and blocking. They are helped by edit filters and increasingly intelligent bots, but basically it comes down to a lot of volunteer effort. JohnCD (talk) 10:32, 11 October 2013 (UTC)[reply]
  • Meanwhile, Bill Gates promised spam would be a "thing of the past" within five years, at the World Economic Forum in Switzerland in 2004, many years after predicting, "The Internet is a passing fad" in 1995. Should have been a comedian, because who would think he was serious. Anyway, we need to plan better tools to help our volunteers resist spam attacks. However, it might require creating wp:Micropages, as articles which are limited in size by edit-filters, with little room to add spam links. -Wikid77 (talk) 15:35, 11 October 2013 (UTC)[reply]
The question of whether there should be an officially supported MediaWiki hosting service was recently raised on the wikitech-l mailing list, see here and replies. That was just a kite-flying exercise, and IMHO there is no chance of such a service being introduced any time soon. Many spambots can be defeated with the Abuse Filter extension (searching Special:Version here for "abuse filter" shows that it is installed here; try that on your wiki). The polite name used here is "edit filter", but the extension is called the "abuse filter". Setting up filters is easy after you have done a few. You would probably need help to get started. Johnuniq (talk) 09:38, 12 October 2013 (UTC)[reply]

Site registered on Wikipedia's blacklist? - Yahoo! groups

I am trying to add an external link (uservoice.com/forums/209451) to the Yahoo! Groups article. But I get an error message saying that the site is registered on Wikipedia's blacklist. The website in question is an official feedback forum used by Yahoo! which contains thousands of entries. Just wondering why it is on a blacklist here? Thanks in advance, XOttawahitech (talk) 15:54, 11 October 2013 (UTC)[reply]

It was added to the global blacklist. You may be interested in previous discussions about this site. πr2 (tc) 17:56, 11 October 2013 (UTC)[reply]
@Ottawahitech: You should probably request it on MediaWiki talk:Spam-whitelist. :-) πr2 (tc) 00:18, 12 October 2013 (UTC)[reply]

Help formatting web refs

I made this edit but just now realized it looks hideous. I tried to cite different individual sections of the same web-page, which meant I couldn't bundle them all under one ref name, but now it just looks like a bunch of separate refs to the same thing. Any advice on how to deal with this? Hijiri 88 (やや) 16:53, 11 October 2013 (UTC)[reply]

Use author-name cites or paragraph superscripts: A common technique is to abbreviate the extra cites to just the author name, page, or paragraph, as literal text in a reftag, or to add the year of publication if the author has multiple sources:

<ref>Smith. p. 23. para. 4</ref>
<ref>Smith 1998. p. 23. para. 4</ref>

However, another method is to have only one reftag cite for the source, but then add a paragraph superscript at each point where the reftag is used "[13] :¶4" by appending "<sup>:&para;4</sup>" as the superscript:

<ref name=Smith/><sup>:&para;4</sup>

The code "&para;" is the HTML entity for the standard paragraph symbol "¶", which many readers will immediately recognize as meaning "paragraph number". Or, for a section, put ":sect.4" etc. There are templates which can show a paragraph superscript, such as Template:Sup, but they are easy to forget; to make matters worse, Template:Para is for "parameters", hijacking the common abbreviation for paragraph, so just remember "&para;" to show the paragraph symbol as a common-sense way to indicate one of several paragraphs on a webpage. -Wikid77 09:41, 12 October 2013 (UTC)[reply]

Are there statistics for access to articles by direct Wikipedia search?

This question was asked on the Help Desk.— Vchimpanzee · talk · contributions · 19:15, 11 October 2013 (UTC)[reply]

VisualEditor weekly update - 2013-10-10 (MW 1.22wmf21)

Hey all,

Here is a copy of the weekly update for the VisualEditor project. This is to make sure you all have as much opportunity as possible to know what is happening, tell us when we're wrong, and help guide the priorities for development and improvement:

VisualEditor was updated as part of the wider MediaWiki 1.22wmf21 branch deployment on Thursday 10 October. In the week since 1.22wmf20, the team worked on fixing bugs and stability improvements to VisualEditor.

When you delete or backspace over a node (like a template, reference or image), the node will first become selected; a second press of the key will then delete it, making it more obvious what you are doing and avoiding accidental removals of infoboxes and similar (bug 55336).

If you hold down the ⇧ Shift key whilst resizing an image, it will now snap to a 10 pixel grid instead of the normal free-hand sizing. A number of improvements were made to the transactions system which make the undo/redo system able to cope better with real-time collaboration, where multiple users will be able to edit a page at the same time in one session.

The save dialog was re-written to use the same code as all other dialogs (bug 48566), fixing a number of issues in the process. The save dialog is re-accessible if it loses focus (bug 50722), or if you review a null edit (bug 53313); its checkboxes for minor edit, watch the page, and flagged revisions options now lay out much more cleanly (bug 52175), and the tab order of the buttons is now closer to what users will expect (bug 51918).

The code for the action buttons on the right (RTL environments: left) of the toolbar was re-written slightly to improve its flexibility. The display of the help and edit notice menus is now improved, including the addition of a close button (bug 52386). The width of the format drop-down was made adjustable so that long labels don't cause it to break (bug 54870), and a bug that caused the toolbar's menus to get shorter or even blank when scrolled down the page in Firefox is now fixed (bug 55343).

A complete list of individual code commits is available in the 1.22/wmf21 changelog, and all Bugzilla bugs closed in this period are on Bugzilla's list.

Following the regular MediaWiki deployment roadmap, this should be deployed here (for opted-in users) on Thursday 17 October.

Hope this is helpful! As always, feedback on what we're doing is gratefully received, either here or on the enwiki-specific feedback page. Please ping me using {{ping|Jdforrester (WMF)}} to make sure I see it promptly if you have any thoughts or corrections.

Jdforrester (WMF) (talk) 21:03, 11 October 2013 (UTC) [reply]

version 1.22wmf20 - minuscule breaking change

so in version 1.22wmf20, the JS variable wgVectorEnabledModules is no more. this is not a huge deal - this variable was marginally useful when the "vector" skin was deployed as an extension, and some of its modules (specifically "collapsiblenav") were considered optional: this variable indicated whether a specific module was installed.

since then, vector was integrated into core, its modules are no longer "optional", and this variable became superfluous.

there is nothing in the mediawiki namespace that makes use of this defunct variable, but there are some 30-odd pages in userspace that seem to refer to it; maybe half of them are legit scripts. some of these are broken regardless, but at first glance it looks as if the removal of this variable has borked a single-digit number of scripts in userspace.

so if something that used to work for you stopped working, you may want to check to see if you are using one of those scripts, and if so, suggest to the script owner to remove the reference to this defunct variable. peace - קיפודנחש (aka kipod) (talk) 21:33, 11 October 2013 (UTC)[reply]
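Script owners can also make the fix defensive rather than just deleting the reference: treat the variable as optional and feature-test it before use. A minimal sketch — the function name and the plain config object are hypothetical stand-ins for however a script reads its configuration values:

```javascript
// Returns true only when the old optional-module flag is both present
// and set. After 1.22wmf20 the variable is simply absent, so this
// yields false instead of the script breaking on an undefined value.
function collapsibleNavEnabled(cfg) {
    var mods = cfg.wgVectorEnabledModules;
    return !!(mods && mods.collapsiblenav);
}
```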

Show all types of Units

I really would appreciate it if I could see units in all systems. For weight, I would not like to use Google to convert from kg to lb. It would be great if you could work it out. — Preceding unsigned comment added by 98.250.95.86 (talk) 04:01, 12 October 2013 (UTC)[reply]

  • Template:Convert handles most conversions: Use the common Template:Convert to convert kg to pounds:
{{convert|23|kg|lb}}           → 23 kilograms (51 lb)
{{convert|23|kg|lb|abbr=on}} → 23 kg (51 lb)
For all units, see: Template:Convert/list_of_units, as a table which shows each unit-code to specify for each unit name in the list. Template {convert} is reasonably fast, about 50 conversions per second, and so it will not slow down the edit-preview or reformatting of most articles. -Wikid77 (talk) 10:38, 12 October 2013 (UTC)[reply]

Timing out issue on California State Route 1

Could someone check out the California State Route 1 page? It frequently seems to either time out, or take a very long time to save the page -- even if you try to save from the wiki source code instead of the visualeditor. I suspect it is the sheer amount of template use on the page, especially the use of templates to generate the table on the major intersections section. Thanks. Zzyzx11 (talk) 07:28, 12 October 2013 (UTC)[reply]

Update on this: a discussion has started on the article's talk page, and there are plans to convert these templates to the more efficient Lua based system. Thanks. Zzyzx11 (talk) 07:42, 12 October 2013 (UTC)[reply]
I get the same issue with Highway 401, where I get a "server down for maintenance" error when I try to save an edit. The edit is saved, but the screen still shows and it takes a mighty long time. The nesting of hundreds of templates surely plays a role in this. - Floydian τ ¢ 09:01, 12 October 2013 (UTC)[reply]

Diff two pages

It is possible to manually construct a link that compares one page with another: [//en.wikipedia.org/w/index.php?diff=111111111&oldid=222222222]. Is there a way to automate that? What I want is to have a link that shows the difference between the current version of a module and the current version of the module's sandbox. I don't see a way for a module to get the required oldid values to generate the diff link—or is there? Johnuniq (talk) 09:53, 12 October 2013 (UTC)[reply]

You can use Special:ComparePages for this. Example -- John of Reading (talk) 10:19, 12 October 2013 (UTC)[reply]
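To illustrate the suggestion above: Special:ComparePages accepts the two page names as URL parameters, so a "diff current module against current sandbox" link can be built without knowing any oldid values. A small sketch (the page names are examples; inside MediaWiki, mw.util.getUrl('Special:ComparePages', {page1: ..., page2: ...}) produces an equivalent link):

```javascript
// Build a Special:ComparePages link that always compares the current
// revisions of two pages, e.g. a module and its sandbox.
function comparePagesUrl(page1, page2) {
  return 'https://en.wikipedia.org/wiki/Special:ComparePages' +
    '?page1=' + encodeURIComponent(page1) +
    '&page2=' + encodeURIComponent(page2);
}
```

Because no revision IDs appear in the URL, the link never goes stale as either page is edited.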

Image size and ratio for portraits in lists

Is there a preferred or mandated image size and ratio for portraits in lists? For example, the portraits in the list of field marshals at Field marshal (United Kingdom) are all of differing ratios; some are whole-body images, others are head-and-shoulders images, and yet others are pretty much focused on the head. I would suggest that an image size and ratio (possibly the golden ratio?) be mandated, while a head-and-shoulders portrait should be preferred. This would of course require portrait versions, cropped from available images, to be created, but that would not be an insurmountable difficulty. Greenshed (talk) 10:13, 12 October 2013 (UTC)[reply]

Wikipedia:RefToolbar 1.0 does not autofill from ISBN, DOI, URL, etc.

It used to and now it doesn't. It's been a month or more since it worked for me.-- Brainy J ~~ (talk) 17:25, 12 October 2013 (UTC)[reply]

It's because an admin needs to change var defaultRefTagURL = 'http://reftag.appspot.com/'; to var defaultRefTagURL = '//reftag.appspot.com/'; – long story short, there's a URL protocol difference (more info). Theopolisme (talk) 18:12, 12 October 2013 (UTC)[reply]
Thank you for bringing this up! Based on the lack of complaints, I thought it just stopped working for me. --NeilN talk to me 18:19, 12 October 2013 (UTC)[reply]
Wait a sec, even if that fix is implemented, I don't think ISBN lookup will work. It appears that at least the relevant script is no longer available on the server (although the others -- DOI, URL, etc. -- appear to be there). Theopolisme (talk) 18:22, 12 October 2013 (UTC)[reply]

Requested a fix at MediaWiki_talk:RefToolbarLegacy.js#Protocol_relative_link. Theopolisme (talk) 15:14, 13 October 2013 (UTC)[reply]

 Done --Redrose64 (talk) 19:01, 13 October 2013 (UTC)[reply]

Template question

This includes some line breaks that would show up in transclusions; they are shown here just for ease of reading.

In my sandbox, I passed parameter value "foo" to parameter "team", and it produces "foo women's basketball"

However, if I enter "foo Lady", it generates "foo Lady basketball", i.e. dropping "women's". This is desirable, but I don't understand how it happens. Clearly, something detects the word "Lady" and suppresses "women's", but I looked at the code and do not see it.


The goal is to make it work for 2013–14 Southern Utah T–Birds basketball team, where the team name indicates the gender (but the word "Lady" is not present), and the infobox link should be:

not

Update: I asked here, because Technical 13 claimed to be studying, but gave an answer here:

It's because of the code on Template:Infobox NCAA team season/team that uses a module to regex-replace (.*)Lady blah blah (and it is case sensitive).

So that I can learn something about templates, can someone point me to this? I don't see it.--SPhilbrick(Talk) 02:11, 13 October 2013 (UTC)[reply]

{{Infobox NCAA team season/team}} :
|(.* Lady)(.*) women's|%1%2|plain=false}}
--- and right after that ---
|(.* Cowgirl)(.*) women's|%1%2|plain=false}}
It's around the 18th line down (line 22 in the image) just before the documentation starts. These two code segments are groups of parameters for the first and second "#invoke:String" statements at the beginning of the template (#invoke:String refers to running Module:String). For simplicity's sake, it looks something like this:
{{#Invoke:String|replace| {{#Invoke:String|replace| -- One replacement occurs, and then a second replacement is run on its result.
---- the bulk of the template code, which once done, its result put through the replacements: ----
|(.* Cowgirl)(.*) women's|%1%2|plain=false}} ---- Parameters for the "inner" #invoke:String replacement that occurs first
|(.* Lady)(.*) women's|%1%2|plain=false ---- Parameters for the very top #invoke:String replacement, that occurs second
I hope this is somewhat clear but feel free to ask things. See the screenshot to the right of the code formatted and syntax-colored, hopefully for a bit more clarity. equazcion 02:33, 13 Oct 2013 (UTC)
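To see the replacement logic in isolation, here is the same idea expressed as an ordinary JavaScript regex (purely illustrative — the template actually runs Lua patterns via Module:String, where %1/%2 play the role of $1/$2 below):

```javascript
// Mirror of the two #invoke:String replacements: when the team name already
// contains "Cowgirl" or "Lady", strip the redundant " women's" that the
// template would otherwise emit. Inner (Cowgirl) replacement runs first.
function stripWomens(teamLine) {
  return teamLine
    .replace(/(.* Cowgirl)(.*) women's/, '$1$2')
    .replace(/(.* Lady)(.*) women's/, '$1$2');
}
```

This reproduces the behavior Sphilbrick observed: "foo women's basketball" passes through untouched, while "foo Lady women's basketball" loses " women's".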
You might also find the syntax highlighter gadget useful. —Remember the dot (talk) 03:55, 13 October 2013 (UTC)[reply]
Thanks. The odd thing is, I thought I did a search for the word "Lady", but I must be wrong, as it is right there. I see, this is a subtemplate. --SPhilbrick(Talk) 12:47, 13 October 2013 (UTC)[reply]

Equal height divs

Consider this.

The main eyesore is the unequal height of the 'columns'. I have now spent weeks trying to find a solution to equalize the heights in CSS. I have explored flexboxes and table-like display properties. My final conclusion: it cannot be done.

What's in the way are the headers; I want them outside the divs, but that breaks any box model I've tried. Is there any brilliant web designer who has a simple solution that does not involve inordinate amounts of CSS, hacks, tables, or JavaScript, and is still compatible with all browsers? Edokter (talk) — 12:00, 13 October 2013 (UTC)[reply]

Equal columns are the bane of all designers' existence, and as far as I'm aware, there's no non-javascript cross-browser solution in existence that wouldn't be considered a hack. If I were doing this I'd do it with javascript, and have the non-equal columns to fall back on for browsers without javascript. It's not really too big of an eyesore, and the vast majority of people will see the javascript version. equazcion 12:39, 13 Oct 2013 (UTC)
And the HTML purists are still wondering why we use tables for presentation... I know it's trivial to do in jQuery, but it really should not need to be that way. Edokter (talk) — 15:15, 13 October 2013 (UTC)[reply]
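For reference, the "trivial in jQuery" approach usually means: per row, measure every column and set them all to the tallest. A sketch (the '.equal-row'/'.equal-col' class names are hypothetical; the measuring helper is separated out so the logic is clear):

```javascript
// Given the measured heights of the columns in one row, pick the height
// they should all share.
function tallest(heights) {
  return Math.max.apply(null, heights);
}

// jQuery application (sketch): run after the page renders, and again on
// window resize if the layout is fluid.
// $('.equal-row').each(function () {
//   var $cols = $(this).find('.equal-col');
//   var heights = $cols.map(function () { return $(this).outerHeight(); }).get();
//   $cols.css('min-height', tallest(heights) + 'px');
// });
```

Using min-height rather than height lets a column still grow if its content later expands, and browsers without JavaScript simply fall back to the unequal columns.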
I say just go with a table, since it's easy, clean, and works everywhere. Jackmcbarn (talk) 19:23, 13 October 2013 (UTC)[reply]
I would lose the flexibility of the entire framework; the sole reason I wanted to move away from tables. Edokter (talk) — 19:37, 13 October 2013 (UTC)[reply]
What if you used a new table for every row? Then the layout is still just as flexible (and it's still no more complex). Jackmcbarn (talk) 19:40, 13 October 2013 (UTC)[reply]
The current style is not possible using pure tables either. Edokter (talk) — 20:53, 13 October 2013 (UTC)[reply]
What about the way I have it now (for "In the news" and "On this day")? Jackmcbarn (talk) 21:23, 13 October 2013 (UTC)[reply]
It works for display, but it breaks the flow of the headers with regard to the content in screen readers, so it is not really a solution. Edokter (talk) — 22:50, 13 October 2013 (UTC)[reply]
I'm out of ideas then. Sorry I couldn't be of more help. Jackmcbarn (talk) 00:38, 14 October 2013 (UTC)[reply]

MLA style citation - inconsistent with the example in Wikipedia:Citing_Wikipedia

I just noticed that if I use the "Cite this page" tool, the MLA citation looks like this (citing Sokoban):

Wikipedia contributors. "Sokoban." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 21 Sep. 2013. Web. 13 Oct. 2013.

Comparing with the example given in Wikipedia:Citing_Wikipedia, there are two obvious deviations:

  • According to Wikipedia:Citing_Wikipedia, the publisher is "Wikimedia Foundation, Inc.". According to the citing tool, the publisher is "Wikipedia, The Free Encyclopedia", an exact reduplication of the name of the website.

I'm totally confused. Which should I follow? — Preceding unsigned comment added by Xiaq (talkcontribs) 16:12, 13 October 2013 (UTC)[reply]

Request deletion of user Javascript page

What's the proper way to request the deletion of a user Javascript page, such as User:GoingBatty/script/Sources.js? {{db-u1}} doesn't seem to work in this case. Thanks! GoingBatty (talk) 23:10, 13 October 2013 (UTC)[reply]

Don't know what the right place is since templates won't work here, but I've deleted it for you. ^demon[omg plz] 23:17, 13 October 2013 (UTC)[reply]
For the future, you can just ask an admin to do it, or, you could put this at the top of the page: //[[Category:Candidates for speedy deletion by user]] - Please delete this page because ... . That should add the page to the proper csd list and provide the readable message for the reviewing admin. equazcion 23:21, 13 Oct 2013 (UTC)
Thanks to both of you! GoingBatty (talk) 23:58, 13 October 2013 (UTC)[reply]
This question comes up a lot. {{db-u1}} and the like all work just fine in javascript; they just don't give any indication of doing so. Is there anywhere good that we could clarify this? Jackmcbarn (talk) 00:41, 14 October 2013 (UTC)[reply]
I added it to the template documentation. equazcion 01:08, 14 Oct 2013 (UTC)
There has to be a way to force the template to visibly work on said JavaScript pages on top of just adding the page to the category. I'm thinking that it would require a little snippet of code in MediaWiki:Common.js that would detect that it is the only thing on the page and display the proper Db box for people to see. If I worked up something like this, are there any administrators who would be willing to implement it, or do we need to have a big RfC to get something like that added to the site common.js? Technical 13 (talk) 02:33, 14 October 2013 (UTC)[reply]

"See TfD"

I don't (much) mind the nomination for deletion of templates that I've used, or the labeling of these templates as possibly ripe for deletion. However, the way that <See TfD> is mass-autosplattered seems a bit problematic. When I look at my (yes, yes) article Rob Hornstra, I see that it now has chunks autoconverted into bold italic. Is there a known bug in the implementation of the mass-autosplattering, or was my own previous formatting of the article, uh, sub-optimal, thereby inviting this or a similar disaster?

(In order to make this more comprehensible and to avoid further fuck-ups and compound confusion, I'm resisting the temptation to fiddle with the article for now. I'll let the TfD run its course. Though if somebody here who knows what they're doing wishes to jump in, go ahead. Incidentally, I do realize that a number of the external links need attention.)

Apologies if this is a matter that has already come up somewhere. -- Hoary (talk) 01:46, 14 October 2013 (UTC)[reply]

I've fixed the bold-italic bug with this edit to the TfD template. The problem was that the apostrophes in the article were interacting with the apostrophes in the template, making the bolding and italics appear in the wrong place. As for whether the tagging itself is a good idea, that's probably better discussed at WT:TFD. — Mr. Stradivarius ♪ talk ♪ 02:32, 14 October 2013 (UTC)[reply]

Aha, neat fix! Well done. And thank you. -- Hoary (talk) 09:48, 14 October 2013 (UTC)[reply]

defaultsummaries.js edit request

Could someone who knows JavaScript review the edit request at MediaWiki talk:Gadget-defaultsummaries.js? This needs to be reviewed by someone who knows what they're doing before it goes live. — Mr. Stradivarius ♪ talk ♪ 02:07, 14 October 2013 (UTC)[reply]

And could someone take a look at the request on MediaWiki talk:RefToolbar.js and have a look over the code please? — Martin (MSGJ · talk) 10:04, 14 October 2013 (UTC)[reply]
Why do people keep disabling the editprotected tags? It makes it harder for people who actually know some JS to approve the requests, since they can't find them. Legoktm (talk) 17:01, 14 October 2013 (UTC)[reply]

In API, action=opensearch returns only 15 items instead of the 100 results asked for

In the API sandbox (https://en.wikipedia.org/wiki/Special:ApiSandbox#action=opensearch&format=json&search=R&limit=100&namespace=0) I am querying for search results for "R" as follows: '/w/api.php?action=opensearch&format=json&search=R&limit=100&namespace=0', but it returns only the following 15 results.

  "R",
  [
      "Race and ethnicity in the United States Census",
      "Russia",
      "Romanization",
      "Rock music",
      "Romania",
      "Republican Party (United States)",
      "Roman Catholic Church",
      "Rome",
      "Record producer",
      "Republic of Ireland",
      "Royal Navy",
      "Radio broadcasting",
      "Rolling Stone",
      "Rugby union",
      "Rhythm and blues"
  ]

I am querying for 100 search results, but it still only ever returns 15. Can anyone please tell me what I am doing wrong, or whether the search results are limited to 15? I am sure that for the letter R there should be many more articles. - Harin4wiki (talkcontribs) 09:38, 14 October 2013 (UTC)[reply]

Searching (including using the API) is resource intensive and hence far more limited than other API queries (15 could well be the non-bot limit). What are you trying to do? Could it be done a different way? For example, do you just want to search article titles or whole articles? - Jarry1250 [Vacation needed] 10:26, 14 October 2013 (UTC)[reply]
I've actually been having the same problem for a while. I figured I must've been doing something wrong, since I'm not so familiar with the API's search actions. An actual error gets returned with the results when you set the search limit to a number over 100, but if the limit is between 15 and 100, it simply cuts off at 15 with no error. Kinda strange. equazcion 10:33, 14 Oct 2013 (UTC)
That does sound like a bug. - Jarry1250 [Vacation needed] 10:59, 14 October 2013 (UTC)[reply]
Note the bug would be in the search backend, not the API module. I just checked the code for the API module and all it does is pass the limit to PrefixSearch which passes it to the search backend. Anomie 13:24, 14 October 2013 (UTC)[reply]

I am only searching for articles that start with the characters the user keys in. So I want a list of articles which start with those characters, and then I can get the entire article once the user clicks on a specific one. But getting only 15 search results is too limited. I need to get at least 50 to 100 search results. - Harin4wiki talk

Okay, in which case you should use https://en.wikipedia.org/w/api.php?action=query&list=allpages&apprefix=R&aplimit=100 or similar. Search is way more powerful than you require. - Jarry1250 [Vacation needed] 10:59, 14 October 2013 (UTC)[reply]
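As a sketch of that suggestion, the list=allpages query URL can be built like this (the endpoint and parameters are exactly those in Jarry1250's example; the function is just an illustration of assembling them for an arbitrary prefix):

```javascript
// Build a list=allpages prefix query: returns up to `limit` article titles
// starting with `prefix`, without invoking the resource-heavy search backend.
function allPagesQueryUrl(prefix, limit) {
  return 'https://en.wikipedia.org/w/api.php' +
    '?action=query&list=allpages&format=json' +
    '&apprefix=' + encodeURIComponent(prefix) +
    '&aplimit=' + limit;
}

// Usage from a page script might look like (titles are under
// data.query.allpages in the JSON response):
// $.getJSON(allPagesQueryUrl('R', 100), function (data) {
//   var titles = data.query.allpages.map(function (p) { return p.title; });
// });
```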

Toolserver problem accessing Autoblock checker

Basically, it won't. I get a 403 telling me something's expired. I don't even know if this is the right place to ask... Peridon (talk) 16:43, 14 October 2013 (UTC)[reply]

Toolserver user accounts expire if the user does not renew them every 6 months. The autoblock checker (FYI, it helps to provide a link to the problem) is run by User:Nakon. They seem to be only intermittently active; their last edit was 2 weeks ago. Mr.Z-man 16:55, 14 October 2013 (UTC)
Ta. I've left a message. Peridon (talk) 17:06, 14 October 2013 (UTC)[reply]