Wikipedia:Village pump (technical)

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 70.49.171.136 (talk) at 01:10, 27 June 2015 (→‎Formatting after table). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bug reports and feature requests should be made in Phabricator (see how to report a bug). Bugs with security implications should be reported differently (see how to report security bugs).

Newcomers to the technical village pump are encouraged to read these guidelines prior to posting here. Questions about MediaWiki in general should be posted at the MediaWiki support desk.


Impending bot armageddon

Hidden in the link at the end of today's Tech News is a bit of a bombshell. For people who didn't notice, most of our bots are going to stop working on 2 July if their code isn't updated.

According to the announcement, the following bots need to be fixed. (I've restricted the list to bots active on this wiki.)

We are reliant on quite a few of those bots for the smooth functioning of our wiki, so it's very important that they are fixed. Also, as TheDJ says above, a number of user scripts need fixing as well.

I've started a list of users to notify about this at User:Mr. Stradivarius/API continuation/users to notify, and a message to send to them at User:Mr. Stradivarius/API continuation/message. It would be very helpful if people could help me to expand the list to include user scripts, and to help copy edit the message. Once that's done, we can send it out using Special:MassMessage. Hopefully that will prevent wiki-meltdown on 2 July. — Mr. Stradivarius ♪ talk ♪ 00:53, 9 June 2015 (UTC)[reply]

This was also announced at WP:BOTN, with the complete list of bots that are known to be at risk. If you know bot owners who don't frequent BOTN (especially people at other projects), then please reach out to them. (Also, someday we need to create a template for major issues like this—maybe something like {{warning|All your bots are going to break}}. This only seems to happen once or twice a year, but I worry that people won't notice these messages in time.) Whatamidoing (WMF) (talk) 01:29, 9 June 2015 (UTC)[reply]
@Whatamidoing (WMF): You know we have a feature called Wikipedia:Mass message senders, and I do have a talk page that yells at me every time someone edits it. I only check the bot operators' pages once in a blue moon, and I'm not subscribed to any email lists. I learned of this after my bots broke. – Wbm1058 (talk) 18:17, 13 June 2015 (UTC)[reply]
Is there a full list of affected bots? The announcement only names those with over 10,000 deprecation warnings over the course of a week. Some bots operate at a lower frequency but are equally important. MusikAnimal talk 01:49, 9 June 2015 (UTC)[reply]
CBNG and CBIII have had a source code change that Damian will push to live soon - RichT|C|E-Mail 08:51, 9 June 2015 (UTC)[reply]
Anomie is checking with Legal if they can release all the account names: "I already have the list of *accounts* affected: there are 510 with between 1000 and 10000 hits. Of those, 454 do not contain "bot" (case insensitively)" https://lists.wikimedia.org/pipermail/wikitech-l/2015-June/081953.html --Sitic (talk) 19:24, 9 June 2015 (UTC)[reply]
I'm guessing that the non-bot users will mostly be people using Huggle, Twinkle, AWB, STiki, and other similar tools. I'm not sure which of the tools is affected, though. — Mr. Stradivarius ♪ talk ♪ 22:08, 9 June 2015 (UTC)[reply]
@Mr. Stradivarius: there is now a list of all the bots which were not mentioned in the original mail: https://lists.wikimedia.org/pipermail/wikitech-l/2015-June/082037.html --Sitic (talk) 16:15, 11 June 2015 (UTC)[reply]
Not all the bots, just the ones that hit the warning more than 1000 times during the week sampled (May 23–29, IIRC). Anomie 00:09, 13 June 2015 (UTC)[reply]
  • My bots, and everyone else's using Peachy, have been fixed.—cyberpowerChat:Online 08:55, 9 June 2015 (UTC)[reply]
  • Does anyone know whether there is a fix planned/available for Pywikibot? That would cover about half the bots in the list. — Mr. Stradivarius ♪ talk ♪ 09:40, 9 June 2015 (UTC)[reply]
You can find some info here: Wikisource:Scriptorium#Pywikibot_compat_will_no_longer_be_supported_-_Please_migrate_to_pywikibot_core. --Mpaa (talk) 12:19, 9 June 2015 (UTC)[reply]
Debugging related to mandatory HTTPS implementation. Confusion results when an unannounced breaking change is implemented right after the announcement of a scheduled breaking change.
  • Impending?? Both of my bots are already down, and failing in a library API call. I guess I need to scramble to fix it as soon as I can. – Wbm1058 (talk) 17:17, 13 June 2015 (UTC)[reply]

GET: http://en.wikipedia.org/w/api.php?action=query&list=categorymembers&cmtitle=Category%3AArticles+to+be+merged&format=json&cmlimit=500 (0.29201698303223 s) (0 b)
Warning: Invalid argument supplied for foreach() in C:\php\botclasses.php on line 263
GET: http://en.wikipedia.org/w/api.php?action=query&list=embeddedin&eititle=Template%3ARequested+move%2Fdated&eilimit=500&format=json (0.26001501083374 s) (0 b)
Warning: Invalid argument supplied for foreach() in C:\php\botclasses.php on line 496

    /**
     * Sends a query to the API.
     * @param $query The query string.
     * @param $post POST data if it's a POST request (optional).
     * @return The API result.
     **/
    function query ($query,$post=null) {
        if ($post==null)
            $ret = $this->http->get($this->url.$query);
        else
            $ret = $this->http->post($this->url.$query,$post);
        return json_decode($ret,true);
    }

    /**
     * Returns an array with all the members of $category
     * @param $category The category to use.
     * @param $subcat (bool) Go into sub categories?
     * @return array
     **/
    function categorymembers ($category,$subcat=false) {
        $continue = '';
        $pages = array();
        while (true) {
            $res = $this->query('?action=query&list=categorymembers&cmtitle='.urlencode($category).'&format=json&cmlimit=500'.$continue);
            if (isset($res['error'])) {
                return false;
            }
            foreach ($res['query']['categorymembers'] as $x) {  // this is line 263
                $pages[] = $x['title'];
            }
            if (empty($res['query-continue']['categorymembers']['cmcontinue'])) {
                if ($subcat) {
                    foreach ($pages as $p) {
                        if (substr($p,0,9)=='Category:') {
                            $pages2 = $this->categorymembers($p,true);
                            $pages = array_merge($pages,$pages2);
                        }
                    }
                }
                return $pages;
            } else {
                $continue = '&cmcontinue='.urlencode($res['query-continue']['categorymembers']['cmcontinue']);
            }
        }
    }

    /**
     * Returns all the pages $page is transcluded on.
     * @param $page The page to get the transclusions from.
     * @param $sleep The time to sleep between requests (set to null to disable).
     * @param $extra Extra query string to append to the request (optional).
     * @return array
     **/
    function getTransclusions($page,$sleep=null,$extra=null) {
        $continue = '';
        $pages = array();
        while (true) {
            $ret = $this->query('?action=query&list=embeddedin&eititle='.urlencode($page).$continue.$extra.'&eilimit=500&format=json');
            if ($sleep != null) {
                sleep($sleep);
            }
            foreach ($ret['query']['embeddedin'] as $x) {  // this is line 496
                $pages[] = $x['title'];
            }
            if (isset($ret['query-continue']['embeddedin']['eicontinue'])) {
                $continue = '&eicontinue='.$ret['query-continue']['embeddedin']['eicontinue'];
            } else {
                return $pages;
            }
        }
    }

What do I need to change? Wbm1058 (talk) 17:37, 13 June 2015 (UTC)[reply]

I don't find that the email announcement linked above adequately explains this. It may be a slow slog through the documentation if I don't get some help here. Can anyone confirm that there are breaking changes in the latest MediaWiki release?
Link to MediaWiki API help: action=query. – Wbm1058 (talk) 19:10, 13 June 2015 (UTC)[reply]
Hmmm, it says that rawcontinue is currently ignored.

Wikimedia sites are going HTTPS only – some time ago, I tried changing my bot's queries to use https. That didn't work, so my bots still use http. Is that the problem? Wbm1058 (talk) 19:38, 13 June 2015 (UTC)[reply]

Wbm1058, this change won't happen for another couple of weeks, so the cause of your bots breaking is presumably something else. When did everything break? Whatamidoing (WMF) (talk) 05:32, 14 June 2015 (UTC)[reply]
I'm guessing that this is due to the switch to https-only, which happened a couple of days ago. As Whatamidoing said, the change in the continuation format will not happen until 2 July on this wiki. — Mr. Stradivarius ♪ talk ♪ 08:15, 14 June 2015 (UTC)[reply]
@Whatamidoing (WMF): My bot's last successful update was at 06:18, 12 June 2015. Per /Archive 137 § Forced HTTPS, "At some point between 07:38 and 10:00, 12 June 2015 (UTC), the user preference "Always use a secure connection when logged in" lost its effect, and regardless of its setting, Wikipedia became HTTPS only." I've inquired about solutions over at m:Talk:HTTPS#Bots. – Wbm1058 (talk) 21:20, 14 June 2015 (UTC)[reply]
@Wbm1058: I know nothing about PHP, but below in /Archive 137#Self redirect when retrieving https pages? User:Flominator also had an issue in his PHP tool due to the HTTPS switch. --Sitic (talk) 22:10, 14 June 2015 (UTC)[reply]

OK, my HTTPS issues have been solved, but just a reminder that I still have only about two weeks to figure out what this is about. – Wbm1058 (talk) 18:41, 16 June 2015 (UTC)[reply]

Add the parameter rawcontinue=1 to your GET request for list queries. I tested it, and it works.—cyberpowerChat:Online 23:26, 17 June 2015 (UTC)[reply]
With gerrit:219198 there is a fix for compat which adds the rawcontinue parameter.  @xqt 16:23, 25 June 2015 (UTC)[reply]
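For bot owners wondering what the change actually looks like on the wire, below is a rough Python sketch (not code from botclasses.php; all names here are illustrative) of a loop that handles the new default continuation format, where the API returns a flat continue node whose parameters are echoed back verbatim on the next request. The alternative quick fix, as noted above, is simply to append rawcontinue=1, which keeps the old query-continue format.

```python
# Illustrative sketch of the API continuation change (names are hypothetical).
# Old format: results carry a 'query-continue' node; the client must dig out
# the module-specific parameter (e.g. 'cmcontinue') itself.
# New format (default after 2 July 2015): results carry a flat 'continue'
# node whose key/value pairs are passed back verbatim on the next request.

def fetch_all(query_fn):
    """Collect page titles across requests using the NEW continuation format."""
    params = {"action": "query", "list": "categorymembers", "cmlimit": 500}
    titles = []
    while True:
        res = query_fn(dict(params))
        for member in res["query"]["categorymembers"]:
            titles.append(member["title"])
        if "continue" not in res:        # no more batches to fetch
            return titles
        params.update(res["continue"])   # echo continuation params back verbatim

# A fake two-batch API standing in for api.php, to show the loop terminates:
_batches = [
    {"query": {"categorymembers": [{"title": "A"}, {"title": "B"}]},
     "continue": {"cmcontinue": "page|C", "continue": "-||"}},
    {"query": {"categorymembers": [{"title": "C"}]}},
]

def fake_api(params):
    return _batches[1] if "cmcontinue" in params else _batches[0]

print(fetch_all(fake_api))  # → ['A', 'B', 'C']
```

The key difference from the old loop in botclasses.php above: the client no longer needs to know the module-specific continuation parameter name, it just merges whatever the continue node contains into the next request.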

Scripts that need fixing

I've just been looking at the number of user scripts that need fixing, and according to this search the number is 111. That includes the Popups and Contribsrange gadgets, along with some popular user scripts like Dabfinder. (And a lot of clones of the old version of closeAFD.) We should probably start going through and just fixing these. — Mr. Stradivarius ♪ talk ♪ 05:39, 20 June 2015 (UTC)[reply]

I've just fixed popups. — Mr. Stradivarius ♪ talk ♪ 06:20, 20 June 2015 (UTC)[reply]
And contribsrange. — Mr. Stradivarius ♪ talk ♪ 06:28, 20 June 2015 (UTC)[reply]
Do you think that the same changes need to be made at other wikis, too? Whatamidoing (WMF) (talk) 06:45, 20 June 2015 (UTC)[reply]
@Whatamidoing (WMF): Yes. For example, try the same search at Commons or at the French Wikipedia. There are a lot of old scripts out there that need to be updated. (The trick is knowing which are widely used and which are just for personal use or for testing, etc.) — Mr. Stradivarius ♪ talk ♪ 09:16, 20 June 2015 (UTC)[reply]
I've just fixed a few dozen more scripts, bringing the number of search results down to 79 (and there are a few false positives in there as well). If anyone wants to help, be my guest. :) — Mr. Stradivarius ♪ talk ♪ 03:42, 25 June 2015 (UTC)[reply]
 Done Hurrah! Now all the scripts on enwiki are fixed. (Or at least, all the ones that the search found.) The seven remaining results are all false positives. I went and fixed quite a few people's common.js, monobook.js and vector.js as well. — Mr. Stradivarius ♪ talk ♪ 16:07, 26 June 2015 (UTC)[reply]

Spurious warnings

Sorry, maybe that word is too strong here, but the bulk of warnings that RMCD bot is receiving are coming from simple requests to read the contents of the current talk page.

These are prop=revisions requests, where rvlimit=1 ... the application doesn't need to pull up historical revisions of talk pages. So, I have two options to make the warning go away:

It doesn't matter which I use here, so this is much ado about nothing. I assume that even the largest possible talk page, given the software limits, may be retrieved in full with a single API request, so there is never any need to continue here. Couldn't you limit the warning messages to when applications actually make the continuation requests? Was this distinction made in determining who to send the mass-message notice to? I'm still looking to see whether my applications actually make any continue requests, but I see some in my library, so the library needs updating whether or not my applications use the functions that continue. Wbm1058 (talk) 19:54, 22 June 2015 (UTC)[reply]

The warnings will presumably go away when the default changes on 2 July, so you can just wait it out and then your problem will be solved. I suppose displaying the warning by default could be seen as a waste of bandwidth, but that's probably better than not having the warning on by default and no-one knowing about the API change before it happens. — Mr. Stradivarius ♪ talk ♪ 23:31, 22 June 2015 (UTC)[reply]
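As a concrete illustration of the quick fix for a one-shot request like the prop=revisions call described above: appending rawcontinue=1 suppresses the deprecation warning and changes nothing else for a query that fits in a single response. A hedged Python sketch of building such a request URL (the page title and parameter values are examples, not RMCD bot's actual request):

```python
from urllib.parse import urlencode

# Hypothetical example: fetch the latest revision of one talk page.
# rawcontinue=1 asks for the old continuation format, which also suppresses
# the deprecation warning; it changes nothing else for a request that never
# needs to continue (rvlimit=1 fits in a single response).
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Talk:Example",
    "rvprop": "content",
    "rvlimit": 1,
    "rawcontinue": 1,
    "format": "json",
}
url = "https://en.wikipedia.org/w/api.php?" + urlencode(params)
print(url)
```

Note the https:// scheme: per the discussion above, plain-HTTP requests now get redirected, which is what broke the library calls in the first place.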

HTTPS by default

Hi everyone.

Over the last few years, the Wikimedia Foundation has been working towards enabling HTTPS by default for all users, including anonymous ones, for better privacy and security for both readers and editors. This has taken a long time, as there have been different aspects to take into account. Our servers haven’t been ready to handle it. The Wikimedia Foundation has had to balance sometimes conflicting goals: giving access to as many people as possible while caring for the security of everyone who reads Wikipedia. This has finally been implemented on English Wikipedia, and you can read more about it [link-to-blog-post here] here.

Most of you shouldn’t be affected at all. If you edit as a registered user, you’ve already had to log in through HTTPS. We’ll keep an eye on this to make sure everything is working as it should. Do get in touch with us if you have any problems logging in or editing Wikipedia after this change, or contact me if you have any other questions. /Johan (WMF) (talk) 12:43, 12 June 2015 (UTC)[reply]

There's a blog post at the Wikimedia Foundation blog now. /Johan (WMF) (talk) 13:09, 12 June 2015 (UTC)[reply]
To editor Johan (WMF): – You have to know what a real drag this is. Not only do I want a CHOICE in the matter, and would continue to choose HTTP as long as the edit summary field's autofill function does not work when I'm on the HTTPS server, you should also consider what Redrose64 said above, that some users are unable to use HTTPS connections. The part in the blog post about "all logged in users have been accessing via HTTPS by default since 2013" is just not true, either. We've been given a choice up until now, and I for one do not want to give that up. I want to be able to CHOOSE whether I'm on the HTTP server or the HTTPS server. – Paine  14:21, 12 June 2015 (UTC)[reply]
Yes, we do know. The answer I was given when I asked about this is that any form of opt-out would also leave potential security risks in our implementation which make it difficult to safeguard those who do not opt-out. Because of this, we’ve made implementation decisions that preclude any option to disable HTTPS, whether logged in or not. This renders the current opt-out option ineffective, and the option will be removed at a later date after we’ve completed the transition process. /Johan (WMF) (talk) 14:27, 12 June 2015 (UTC)[reply]
You have had to use HTTPS to access the site when logging in as it's been used for the login process, though. /Johan (WMF) (talk) 14:30, 12 June 2015 (UTC)[reply]
It's evidently a weighty issue. And I do realize that I don't edit WP in a vacuum, that I must eventually accept this situation for the good of all. And frankly, I don't have a problem with having to stay on HTTPS as pertains to the "big picture". My problem is very basic and concerns the fact that I no longer have a drop-down list from which to pick my edit summaries, because that function is thwarted by my IE-10 when I am on any HTTPS server. If that little quirk could be fixed, I'd be a happy camper whether I'm on a secure server or not. – Paine  15:47, 12 June 2015 (UTC)[reply]
I'm not very familiar with IE myself, but I'll ask around and see if anyone knows a simple fix. /Johan (WMF) (talk) 16:12, 12 June 2015 (UTC)[reply]
@Johan (WMF): IE10 won't enable autocomplete on HTTPS pages when the "Cache-Control: no-cache" HTTP header is set (which Wikipedia does). Changing it from "no-cache" to "must-revalidate, private" would allow autocomplete, but may have other unintended consequences. --Ahecht (TALK PAGE) 16:34, 12 June 2015 (UTC)[reply]
@Paine Ellsworth: It seems like IE 11 does not have this problem, and all users would eventually be required to update to it by the end of the year (by Microsoft). Did you try IE 11? Tony Tan · talk 02:09, 14 June 2015 (UTC)[reply]
Yes, Tony Tan, I upgraded to Win8.1 and IE-11 yesterday and was pleased to pass it on that it has given me back what I had lost with the older browser and Windows software. Thank you very much for your kind thoughts and Best of Everything to You and Yours! – Paine  02:26, 14 June 2015 (UTC)[reply]
I also see I am stuck with using HTTPS, which is a nuisance and a bother, as I no longer have a drop-down list from which to pick my edit summaries. How can a drop-down list be re-implemented? It was the only degree of automated help we had in what is otherwise an unfriendly article editing environment. Hmains (talk) 17:44, 12 June 2015 (UTC)[reply]
So how do I use the website in http then? I do not want extra security to protect me. I don't need protecting. This is a nonsense. Why am I being forced to use https even though I don't want to use it? There was an opt out. The opt out has been removed despite the fact that those using the opt out very clearly want to opt out. — Preceding unsigned comment added by 86.18.92.129 (talk) 19:46, 12 June 2015 (UTC)[reply]
Hi, the reason explanation I've been given is that any form of opt-out would also leave potential security risks in our implementation which make it difficult to safeguard those who do not opt-out. /Johan (WMF) (talk) 19:53, 12 June 2015 (UTC)[reply]
I'll try to figure out if there is a solution to that, Hmains. /Johan (WMF) (talk) 19:53, 12 June 2015 (UTC)[reply]
Johan (WMF), Re: "the reason explanation I've been given is that any form of opt-out would also leave potential security risks in our implementation which make it difficult to safeguard those who do not opt-out", would you be so kind as to ask for a one-paragraph explanation as to why they believe this to be true and post it here? Not a dumbed-down or simplified explanation, but a brief, fully technical explanation for those of us who are engineers? Thanks! --Guy Macon (talk) 20:49, 12 June 2015 (UTC)[reply]
Sure. Just so you know, they're getting a lot of questions at the moment, as well as handling the switch for the hundreds of Wikimedia wikis that aren't on HTTPS yet, but I'm passing on all questions I get that I can't answer myself. /Johan (WMF) (talk) 21:18, 12 June 2015 (UTC)[reply]
The engineering-level explanation is that in order to help prevent protocol downgrade attacks, in addition to the basic HTTPS redirect, we're also turning on HSTS headers (gradually). The tradeoff for HSTS's increased protections is that there's no good way to only partially-enforce it for a given domainname. Any browser that has ever seen it from us would enforce it for the covered domains regardless of anonymous, logged-in, logged-out, which user, etc. Once you've gone HSTS, opt-out just isn't a viable option. /BBlack (WMF) (talk) 21:56, 12 June 2015 (UTC)[reply]
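To make the HSTS mechanism described above concrete: the server sends a Strict-Transport-Security response header, and any browser that has seen it refuses plain-HTTP connections to that host until max-age expires. A minimal Python sketch of what such a header carries (the example value is illustrative, not necessarily what Wikimedia's servers send; note that, as mentioned elsewhere in this thread, the includeSubDomains flag is not currently set):

```python
def parse_hsts(value):
    """Parse a Strict-Transport-Security header value into a small policy dict."""
    policy = {"max_age": None, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # seconds the browser must enforce HTTPS-only for this host
            policy["max_age"] = int(directive[len("max-age="):])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# Example header value (illustrative):
print(parse_hsts("max-age=31536000"))
# → {'max_age': 31536000, 'include_subdomains': False}
```

The "no good way to only partially-enforce it" point follows directly from this shape: the policy is keyed per domain and cached by the browser, with no field that could scope it to logged-out users or to users who opted out.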
@Jason Quinn: see the answer above. /Johan (WMF) (talk) 22:12, 12 June 2015 (UTC)[reply]
To editor Johan (WMF): I don't see what the problem is: create a cookie named something like IAcknowledgeThatHttpIsInsecure which can be set from a dedicated page: if this cookie is set, do not send the Strict-Transport-Security (HSTS) header and do not force a redirect to HTTPS. Yes, people who have received the Strict-Transport-Security header will get a browser error, but I assume all browsers that implement HSTS allow some way for the user to manually override or ignore it (something like "I know what I'm doing", then set a security exception); and the users can be warned in advance on the dedicated page that sets the cookie. If you're afraid an attacker will set the cookie on an unsuspecting user (through a fake Wikipedia page) and thus bypass HSTS, please note that (1) this attack always exists anyway, because an attacker who can do this can set up a fake HTTP wikipedia.org proxy domain anyway (in both cases, it will impact those users who did not receive the HSTS header), and (2) you can mitigate the attack by letting the cookie's content contain a MAC of the client's IP address (or some other identification string), with a MAC key that Wikimedia keeps (and the cookie is honored only if the MAC matches). You might also display a warning in the HTML content if the cookie is set, reminding the user of its existence and impact, and giving a link to remove it should the user change their mind. The performance cost of all of what I just described should be completely negligible in comparison with the performance cost of doing HTTPS in the first place. And this should all be very simple to implement. On a personal note: I promise to donate 150€ to the Wikimedia Foundation (adding to the 100€ I donate about once a year) if and when a way to access it through HTTP using the former URLs is brought back; conversely, until this happens, I will be too busy considering how I can work around this inconvenience to contribute either financially or by editing articles. 
(I could also go on to emphasize how, as a cryptographer, I think the idea of forcing users to go through HTTPS to read publicly accessible and publicly editable information is absolute idiocy, but the cryptophile zealots have made up their mind already.) --Gro-Tsen (talk) 19:43, 13 June 2015 (UTC)[reply]
A keyed MAC of the client IP address is not going to work due to dynamic IPs that change (and I'm not sure that there exists any other unique identifier that would be appropriate; keep in mind, for your scheme to work, the browser cannot receive an HSTS header even once). Note that deleting an HSTS setting from your browser is actually much more hidden than you'd normally think, and such settings are generally not meant to be user-overridable. While you're correct that HSTS cannot prevent a malicious proxy if the user has never visited Wikipedia before (unless we do HSTS preloading, which we do not yet), your scheme weakens the protection of HSTS, since a malicious proxy only has to set a cookie for Wikipedia, not necessarily catch the user at the first visit. Furthermore, in order for the redirect not to take place, the cookie must be non-secure. Hence the malicious proxy might as well just pretend to be some fake subdomain, e.g. http://fake.wikipedia.org (which, since it's fake, does not have HSTS, unless we set the includeSubDomains flag for HSTS, which we don't currently, and which would prevent us from ever hosting a non-secure service on any subdomain), use some method to load traffic from that address (easy), and then set your IAcknowledgeThatHttpIsInsecure cookie with the domain field set to .wikipedia.org. Last of all, your scheme is also incompatible with HSTS preloading, which presumably the WMF is eventually going to pursue. Bawolff (talk) 00:53, 14 June 2015 (UTC)[reply]
OK, I'll give up on trying to solve other people's problems with HTTPS and focus on mine: to this effect, do you (or anyone else) know if there at least exists some reliable, transparent Wikipedia mirror on HTTP (perhaps something like "wikipedia-insecure.org") that allows both reading and editing and that I could use (by spoofing my DNS to point there) without the trouble of setting up my own? (I hope we can agree that a mirror served under a different domain cannot weaken security, since anyone can set up such a thing.) I'll find a way to disable HSTS on my browser somehow. --Gro-Tsen (talk) 23:02, 14 June 2015 (UTC)[reply]
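For readers trying to follow the MAC-of-IP idea in this exchange, here is a minimal Python sketch of the scheme as proposed (the key, names, and addresses are purely illustrative; as the reply above explains, dynamic IPs and the non-secure-cookie requirement undermine it in practice):

```python
import hashlib
import hmac

SERVER_KEY = b"server-side secret"  # hypothetical key held only by the site

def make_optout_cookie(client_ip: str) -> str:
    """MAC the client IP so the opt-out cookie can't be forged for other addresses."""
    mac = hmac.new(SERVER_KEY, client_ip.encode(), hashlib.sha256).hexdigest()
    return f"IAcknowledgeThatHttpIsInsecure={mac}"

def cookie_valid(cookie: str, client_ip: str) -> bool:
    """Honor the opt-out only if the MAC matches the requesting IP."""
    expected = make_optout_cookie(client_ip)
    return hmac.compare_digest(cookie, expected)

c = make_optout_cookie("198.51.100.7")
print(cookie_valid(c, "198.51.100.7"))  # True: same IP, MAC matches
print(cookie_valid(c, "203.0.113.9"))   # False: cookie doesn't transfer to another IP
```

This illustrates point (2) of the proposal: an attacker who plants the cookie on a victim gains nothing unless the victim's IP matches the one the MAC was computed over, which is exactly where the dynamic-IP objection bites.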


It's worth giving some background here to understand the need for security. One of last year's revelations was that Wikipedia editors were being targeted by the NSA. So if you weren't using HTTPS (and probably even if you were), you were likely helping to build a database profile on your reading habits. But worse, your e-mail and other communications were probably also targeted for follow-up simply because you edit Wikipedia. What difference does it make? Nobody in the general public knows! The collected information is used in secret ways by undisclosed people. But there are real dangers to you. Supposedly, the information is being used only for national security related to terrorism. That's not true, however, because it is known from the same leaks that it is being used for more than that, for instance in the war on drugs. And it is also known that collected information is sometimes abused for personal reasons by those who have access to it. The use could also include (and probably does) helping to decide whether you get a security clearance for a future dream job. It could potentially even be used to sabotage a hopeful's political career, or in general to help silence people with oppositional points of view. In other words, this information has the potential to be used by people now or in the future to negatively affect your life and destiny without you even knowing. The WMF has decided (and rightfully so) that there's a need to protect users from dangers that they might not even be aware of. When it comes to this, many people say things like "I'm not doing anything wrong" or "I've got nothing to hide", but the problem is that you can't say you're doing nothing wrong, because it's third parties who determine that, not you. And you do have stuff to hide even if you are a completely law-abiding citizen. This is an issue that affects you even if you think it doesn't. 
People are talking above about certain countries that do not allow HTTPS and how IP users there should not be forced to use HTTPS because Wikipedia would be blocked for them. Well, those are great examples where governments being able to see what you are reading could get you arrested, imprisoned, or worse. The use of HTTPS is only a minor step in combating the abuse of government-level surveillance, but it's a step in the right direction. @Johan (WMF), it'd be interesting to know why the implementation cannot safely handle an opt-out, because naively I don't see why the one should affect the other. Maybe this exposes a flaw in the implementation. Jason Quinn (talk) 21:17, 12 June 2015 (UTC)[reply]
Hi Jason Quinn, thanks. I'm passing on the question to someone better suited to answer it than I am. /Johan (WMF) (talk) 21:20, 12 June 2015 (UTC)[reply]
On January 12, 2016, Windows 7 users will be required to install Internet Explorer 11 and Windows 8 users will be required to update to Windows 8.1 anyway, so you don't need to worry about the autocomplete problem in IE10. That problem doesn't occur in IE11. GeoffreyT2000 (talk) 21:26, 12 June 2015 (UTC)[reply]
Wikipedians were NEVER targeted by the NSA, why would they be? I don't know where you people are getting your information from, and if some wikipedian came along and said that s/he was being targeted, then s/he was either being paranoid (like 90% of Americans) or s/he was doing something "illegal", so it's in the best interest of Wikipedia to report that person to the NSA, not ENFORCE this stupid idea....Again Wikipedia is an INTERNATIONAL website, it's NOT only for AMERICA....why should the rest of the world have to pay for the fears of a few paranoid psychopaths that are better off in jail..oh and BTW, HTTPS has and will NEVER be secure, the "s" in https never stood for secure...@Jimbo Wales:, why would you allow this?--Stemoc 21:43, 12 June 2015 (UTC)[reply]
At the right is the main slide itself so you and others can decide for yourselves what it means. The slide explicitly uses Wikipedia as an example of the websites that they are "interested in" and confirms that they are interested in "typical users" of such websites. Given the context of the slide (exploiting HTTP for data collection), it is unreasonable to assume readers and editors were not being targeted. We all were targeted, and all our traffic to and from Wikipedia would have been caught up in the named NSA collection programs. It would be naive to think otherwise. If there is one thing that's been learned in the last year, it's that "if it can be done, it is" kind of summarizes what's been going on, and "mitigated" does not describe their collection techniques. As for other countries being denied access by the global removal of HTTP support, that is a point that should be debated. But I already mentioned that there are countries where the use of HTTP might literally allow Wikipedia readers to be executed for reading the "wrong" stuff. The meaning of a "free" encyclopedia would have to be discussed, and the dangers of access in these countries would have to be considered and weighed in such a debate. And, regardless of how you perceive the US, it's possible the US could become as bad. Jason Quinn (talk) 22:30, 12 June 2015 (UTC)[reply]
It is certainly a bit of a backtrack by @Jimbo Wales:.Blethering Scot 22:43, 12 June 2015 (UTC)[reply]
The real win here (imo) is making Firesheep style attacks totally impossible and thwarting non-state sponsored, and lower budget state sponsored adversaries. One assumes that the NSA will probably just slurp up the unencrypted inter-data center links (For those of you not close enough to use eqiad directly. Imagine a world where the sum of human knowledge fully deployed IPSec). Given the funding level of the NSA, I expect that they probably have traffic analysis capabilities to be able to tell who is visiting a page of interest (especially for a site like wikipedia, which imo seems like the perfect target for a traffic analysis type of attack against a TLS secured connection). However https does make it much harder to simply collect it all, and any measure that increases the cost of ubiquitous surveillance should be applauded. Bawolff (talk) 22:50, 12 June 2015 (UTC)[reply]
All I see Jason is a bunch of American websites.....Mate, if the NSA wants to spy on you, it WILL SPY on you, you don't have to eff up Wikipedia for them to stop, and basically, by forcing HTTPS onto Wikipedia, would you not think that it will make the NSA more interested? because only a person with something to hide would do this ..So Jimmy loses his battle with the NSA and this is what he comes up with? moving to https which honestly is just as secure as http...After this was defeated last year, I honestly felt like we lived in a democracy where the voice of the people was heard and adhered to........back to communist wikipedia we go..yeah Jason, executed for reading the wrong stuff on Wikipedia like How to build a Bomb or How to join ISIS......oh right, we don't have those pages cause Wikipedia is NOT a terrorist organization...--Stemoc 22:59, 12 June 2015 (UTC)[reply]
(a) Non-Americans arguably have got more to fear from NSA surveillance; the legal framework allows for the collection of great swathes of foreign data. (b) The decision was made by Wikimedia, which is in no way a democracy. (c) Do actually read up on the issues you're arguing. Alakzi (talk) 23:29, 12 June 2015 (UTC)[reply]
Yeah, you really shouldn't let your anger and/or frustration allow such bullshit from your fingers and keyboard, Stemoc. "Communist Wikipedia"? no more than an airline practices communism when they check for bombs and weapons as we board – no more than when we have to pass through a building security point that helps to protect us while we're on the premises - is it communism to own a .357 and be ready to shoot a criminal who tries to steal from you? or to hurt your loved ones? Privacy, security, if you don't try to work with structures that protect them, then you're no better than the criminal, terrorist or agency that tries to circumvent them. Best of Everything to You and Yours! – Paine  00:03, 13 June 2015 (UTC)[reply]
Calm down lady, this is just an encyclopedia, not your eBay, PayPal, bank account or your social networking sites where privacy is a MUST for safety reasons.. the MAIN reason this site was created was to allow users to browse and edit anonymously so no one really knows your true identity or location; if you are using your real name and stuff, I'd advise you to invoke the 'Vanish' policy and start anew, or get your account renamed. I think people keep forgetting that this is NOT like every other site they visit; in fact Wikipedia is based on facts, and if you are scared to write down facts in articles because you fear the NSA then I really really pity you... only crooks fear the government....let that be known...and P.S., I'm brown and I don't give a shit about the NSA...as usual, the wiki revolves around America...pathetic.--Stemoc 02:47, 13 June 2015 (UTC)[reply]
@Stemoc: Out of curiosity, what do you think about the following hypothetical situation: someone (let's say Alice) thinks she might have <insert weird disease here>. Alice wants to look it up on Wikipedia, but is worried that her ISP is tracking which websites she visits and will sell the information to her insurance company (or whoever else is the highest bidder), who in turn will jack up the price of her insurance beyond what she can afford, on mere suspicion of having the disease, even if she doesn't. Is that a legitimate reason to want/need privacy when browsing Wikipedia? You may trust the government (for some reason), but do you really trust your ISP? What about the guy sitting across the room at the Starbucks on the same wifi network? Bawolff (talk) 06:12, 13 June 2015 (UTC)[reply]
Bawolff, again, another "American" problem.... I have an IDEA: why not make a US version for https? Brilliant. E.g., anyone that wants to be logged in on https logs in at https://us.en.wikipedia.org, and everyone else at the old link at http://en.wikipedia.org. This will solve the problem once and for all. Why "force" everyone onto https? It's the same as pushing everyone over the cliff and telling them to swim instead of building a bridge to get across; those who can't swim or have health (ISP) problems will surely drown. I fought this the last time it happened and I will fight it yet again..--Stemoc 11:43, 13 June 2015 (UTC)[reply]
+1. I live in a country with universal health care... one that has privacy laws... I am an IT professional with a Computer Science degree and 30+ years of experience. I know the implications of not using HTTPS, and I also know the NSA can bypass that easily if they care to. This (not allowing an opt-out) is total garbage and a false sense of security... ˥ Ǝ Ʉ H Ɔ I Ɯ (talk) 11:56, 13 June 2015 (UTC)[reply]
Now cut that out, buddy, or I'll hit you with my purse! Hey, waitasec – how did you know I'm a "lady"? You been hackin' into my HTTP??? – Paine  12:46, 13 June 2015 (UTC)[reply]
Little old me? hacking? NEVAH!......--Stemoc 17:01, 13 June 2015 (UTC)[reply]
@Bawolff: We're not done with all of our plans for securing traffic and user privacy. This will be covered in deeper detail in a future, engineering-focused blog post after the initial transition is complete. But just to hit some highlights in your post: we do have an active ipsec project, which is currently testing on a fraction of live inter-DC traffic. We're also looking into what we can do for some forms of traffic analysis, e.g. mitigating response length issues. We plan to move forward as soon as possible on heavier HTTPS protection mechanisms like HSTS Preloading, HPKP, etc as well. We're committed to doing this right, we're just not done implementing it all yet :) -- BBlack (WMF) (talk) 01:53, 13 June 2015 (UTC)[reply]
@BBlack (WMF): I appreciate there's more to come, and I'm happy to see that it's (finally) happening. However, I think it's important to give our users the full picture, the good and the bad. HTTPS is great for confidentiality and integrity. It's somewhat OK for providing privacy, particularly against a weak adversary, and it makes bulk matching of packets against fixed strings in the body of the request impossible (which is quite important given the selective-censorship threat Wikipedia faces). But it's questionable how well it would hold up against a powerful nation-state actor trying to simply de-anonymize you. It certainly wouldn't hold up against a targeted attack, and it's questionable whether it would prevent a broader attack (albeit it would certainly make a broad attack quite a bit more expensive to pull off). I'm also quite doubtful you can really foil traffic analysis by padding TLS sessions, unless you use extreme amounts of padding, far past what is acceptable performance-wise. P.S. The ipsec project link is limited to those in the WMF-NDA group, so I can't see it (I'm in the security NDA group only). However, I can certainly see in puppet that IPSec is enabled on a small number of servers, and I noticed it was mentioned when I was reading the WMF quarterly report. Bawolff (talk) 03:03, 13 June 2015 (UTC)[reply]
@BBlack (WMF): It is great to see that the WMF is finally switching to HTTPS by default. I look forward to seeing Wikipedia send HSTS (includeSubDomains, long max-age, preload) and HPKP headers! However, phab:T81543 seems to have restricted access. Thanks, Tony Tan · talk 02:39, 14 June 2015 (UTC)[reply]
One really nice thing I just noticed: ru.wikipedia.org has an A+ on the SSL Labs test [1]. Here's to seeing that for all Wikimedia domains, once HSTS is turned up :D Bawolff (talk) 05:20, 14 June 2015 (UTC)[reply]
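For readers unfamiliar with what HSTS actually does, the header is just a short list of directives. A minimal sketch of parsing one (the value below is illustrative, not necessarily what any Wikimedia site sends):

```ruby
# Parse a Strict-Transport-Security header value into its directives.
# The example value is illustrative only.
hsts = 'max-age=31536000; includeSubDomains; preload'

directives = hsts.split(';').map(&:strip)
max_age    = directives.find { |d| d.start_with?('max-age=') }
                       .split('=', 2).last.to_i
subdomains = directives.include?('includeSubDomains')
preload    = directives.include?('preload')

# A browser that has seen this header refuses plain-HTTP connections
# to the domain (and, with includeSubDomains, its subdomains) for
# max_age seconds, which is what blocks downgrade attacks.
```

The `preload` token additionally signals that the site wants to be baked into browsers' built-in HSTS lists, so even a first visit can't be downgraded.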

Not such a difficult fix

Just want to make sure that everyone catches what contributors TTO (at phab:T55636) and GeoffreyT2000 (above) have been kind enough to share with us. Several of the above users may be happy to hear that I can confirm what TTO and GeoffreyT2000 say about Win8.1 and IE-11. I just upgraded, and the new software thus far seems to work a lot better under HTTPS than my old Win8.0 and IE-10 did. Forms do indeed autofill, which means that my old drop-down boxes with my edit-summary choices do show up again. I still sympathize with all the users above who feel they've lost something with this change; however, like I said, we don't edit in a vacuum any more than we become passengers on aircraft all by ourselves. As an analogy, airport security can be a real hassle and a serious time cruncher on occasion, but compare that to what has happened, and still could happen, and there should be none of us who would not want that security to keep our flying times safe. Same for the conversion to HTTPS – it is quite the hassle for some, but the very real need to protect our privacy and security is an overwhelming priority, in my humble opinion. So, /Johan (WMF), you don't have to find an IE fix for me, and I greatly appreciate the fact that you said you would! I also deeply thank the rest of you for your enlightening responses here. Best of Everything to You and Yours! – Paine  23:32, 12 June 2015 (UTC) [reply]

Thank you. I'll still at least ask around to see if there's anything I can do. We want editing Wikipedia to be as simple as possible, no matter which browser people use. If one is OK with upgrading to IE 11, that's probably the best solution, though. /Johan (WMF) (talk) 01:25, 13 June 2015 (UTC)[reply]
So, here's what I got on this issue so far. Yes, there appears to have been an open Phabricator ticket since 2013 reporting this issue, and no, given the number of tickets, the team that dealt with the transition wasn't aware of it. We'd obviously have preferred to be. Sorry, and I really mean it. Causing trouble for people who edit Wikipedia is the opposite of what we want to achieve. We're still in the process of transitioning (English Wikipedia was one of the first to switch over, and there are more than 800 Wikimedia wikis) and I haven't found an easy fix so far (except for upgrading to Internet Explorer 11), as this isn't so much a bug as how Internet Explorer 10 intentionally behaves. The team will be able to focus more on this as soon as the HTTPS transition is complete. We're not ignoring the issue. /Johan (WMF) (talk) 12:10, 16 June 2015 (UTC)[reply]

This broke my bot :( I'm using the RestClient library to make API requests, and it apparently is unable to verify the certificate, giving the error SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (RestClient::SSLCertificateNotVerified). Surely that's an issue on my end? I can force it to not verify the certificate, but then what's the point of using HTTPS? MusikAnimal talk 18:32, 13 June 2015 (UTC)[reply]

I took a quick look, and it seems that this library has a way to pass to the SSL library the CA certificates to be used for verification. It probably just doesn't have a default set of CA certificates. The solution would be to give it a copy of the correct root certificates to use. --cesarb (talk) 21:26, 13 June 2015 (UTC)[reply]
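To make that suggestion concrete, here's a sketch in Ruby. It assumes a reasonably recent rest-client gem, which accepts `ssl_cert_store` / `ssl_ca_file` options (check your version's docs); the URL is just an example endpoint:

```ruby
require 'openssl'

# Build a certificate store from the system's default CA locations,
# which is what the SSL library needs to verify Wikipedia's chain.
store = OpenSSL::X509::Store.new
store.set_default_paths

# Hypothetical request using the store (requires the rest-client gem):
#
#   require 'rest-client'
#   response = RestClient::Request.execute(
#     method:         :get,
#     url:            'https://en.wikipedia.org/w/api.php',
#     ssl_cert_store: store   # or ssl_ca_file: '/path/to/ca-bundle.crt'
#   )
```

If the system bundle is missing or stale (common on older OS images), pointing `ssl_ca_file` at an up-to-date CA bundle fixes verification without disabling it.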

Who loses

@Johan (WMF): Hi, while you are here I would like to have something specific clarified. As always with these sorts of major changes, most people win and some people lose. I personally am iffy about the distribution of relative ideological and technical interest in, and need for, this particular project, but I accept that that merely puts me in the middle of the Wikipedian spectrum, between people like TomStar81, who wants nothing to do with the ACLU, and people like Jason Quinn, who thinks it keeps us from being roasted on an open flame.

However in these sorts of changes I care less about who wins, because that's obvious. I can read the spam-ish blog post to find that out. I am more interested in the question: who loses?

Who does HTTPS hurt? Can we come to an understanding of this? Surely every change, no matter the size, hurts some stakeholders. ResMar 03:58, 13 June 2015 (UTC)[reply]

  1. Can someone clarify what is going on with the IE 10 issues? Was the WMF aware of this problem? Is it really that significant?
  2. Can someone clarify what the effect will be in mainland China? Can you quantify the impact there?

Thank you. ResMar 04:00, 13 June 2015 (UTC)[reply]

Hi, good question that deserves a good answer, not just what I can come up with off the top of my head. I'll ask around about a few things to make sure I (or maybe someone else; I'll spend much of this weekend travelling) can reply properly. /Johan (WMF) (talk) 04:19, 13 June 2015 (UTC)[reply]
Great! Thank you. I think this discussion so far has been high on posturing, low on content (speaking about the community response here), and I'd love to see a frank cost-benefit analysis from the WMF on this matter, and an associated community critique. After all, this is the communication that the volunteers so crave. Not, frankly, blog announcements. ResMar 04:44, 13 June 2015 (UTC)[reply]
I'd also like to see more transparency on the WMF's analysis. Everything seems to be shrouded in unnecessary secrecy. On the subject of China: I'm not that familiar with the situation, but there seems to be conflicting info on whether HTTPS is blocked. The greatfire website (https://en.greatfire.org/search/wikipedia-pages) says https is not blocked, but their actual test data seems to suggest that both normal http and https on zh have been blocked since May 19 [2] (the switchover for zh to https happened on June 9, so the change in blocking status seems unrelated), while en is fine (both https and non-https). There are about 324 pages that are censored on the HTTP version, mostly on zh; however, on en we had Students for a Free Tibet, Tiananmen_Papers, Tiananmen_square_massacre, and Tibetan_independence_movement blocked. Switching to HTTPS forces China to decide either to block all of Wikipedia or none of it (possibly they can distinguish between languages and block, say, all of zh but not en; I'm not that familiar with SNI, but my impression is the domain is sent in the clear). FWIW, greatfire strongly advocates switching to https on zh wikipedia [3], although they are obviously a special interest group that believes Chinese censorship needs to be fought tooth and nail. I imagine the situation is similar for Russia, which rumor has it (although I've not seen direct sources for this) was trying to censor pages related to Ukraine on ru, but can't anymore due to https. The other impact is that it makes it harder (but certainly not impossible, depending on their traffic analysis capabilities) for China to generate lists of people who try to visit certain politically sensitive topics. (It's unclear if they actually do that. I haven't heard of any evidence that they do, but it wouldn't surprise me.)
Other potential things to keep in mind: in the past, China has DDoS'd websites (GitHub) that host material China finds objectionable but cannot censor selectively due to HTTPS, and that are too popular to block outright. (However, I consider it very unlikely they would do something like that to Wikipedia. Wikipedia has a low enough popularity in China that they would probably just block it totally if they decided to do something about it.) Bawolff (talk) 05:18, 13 June 2015 (UTC)[reply]
Regarding secrecy, or at least part of it: yeah, we didn’t really enjoy springing this on the community, though the WMF has publicly been talking about the intent to switch to HTTPS for the past few years. The reason we didn’t say anything about the specific deadlines or make the transition public until it was in progress was that public statements opened us up to the possibility of man-in-the-middle attacks. Letting everyone know meant letting bad actors, so to speak, know our plans and timeline. We couldn’t have this debate in public without telling the world what we intended to do, which could have compromised the security and privacy of readers and editors in certain areas. We’d have preferred not having to worry about that, obviously. /Johan (WMF) (talk) 19:33, 16 June 2015 (UTC)[reply]
@Johan (WMF): But this discussion and these plans were open and public, where any "bad actors" could surely have followed them. Surely that workboard was missing an item relating to fixing bots that didn't operate on wmflabs.org. I can only do so much to stay tuned to such things, and a proactive heads up, perhaps by email, would have been appreciated. I asked about this last December on the Village Pump, and never got a response. How am I supposed to know about venues such as m:HTTPS, where I might have gotten help last December? Wbm1058 (talk) 16:50, 20 June 2015 (UTC)[reply]
An example I've been given is that not knowing our time plan made it much more difficult to, e.g., hijack DNS and traffic at a border and proxy traffic back to us as HTTPS while making it seem to everyone that they're connected to us directly, since HSTS support in modern browsers will prevent the downgrade and warn about it. I'd have loved to be able to give everyone this would cause trouble for a heads-up, and we do understand it has caused more work for people we don't wish to cause any unnecessary work for. We'd definitely have preferred not to find ourselves having to choose between either, as we saw it, putting user security in certain areas at risk or not having proper, open communication.
Are you still having the problems you had last December? /Johan (WMF) (talk) 13:37, 22 June 2015 (UTC)[reply]

@Resident Mario: Other people that HTTPS could potentially hurt, which we know about (personally I think this is an acceptable hurt): people who use IE6 on Windows XP will not be able to view any page on Wikipedia (IE6 on XP is incompatible with modern best practices for HTTPS). People on very old browsers which don't support SNI (e.g. Android 2.3.7, IE 8 on XP, Java 6u45) will get a certificate error when visiting a sister project (but Wikipedia itself will be fine). Bawolff (talk) 20:02, 13 June 2015 (UTC)[reply]

@Bawolff: Sounds reasonable. ResMar 20:21, 13 June 2015 (UTC)[reply]
@Bawolff: The Wikimedia certificate uses subjectAltName, not Server Name Indication. SAN is supported by IE6. LFaraone 05:27, 14 June 2015 (UTC)[reply]
@LFaraone: IE6 doesn't work because it only supports SSLv3, and we require at least TLS 1.0 (to prevent downgrade attacks/POODLE). We use both subject alternative names and SNI, with wildcard certs. If no SNI is sent, you get a certificate for *.wikipedia.org with an alt name of wikipedia.org, which is great if you're browsing wikipedia. Not so great if you're browsing wiktionary.org. Bawolff (talk) 05:45, 14 June 2015 (UTC)[reply]
@Bawolff: Browsing wiktionary.org works fine even if the browser doesn't send SNI. If the SNI is absent, the server sends a different certificate whose subject alternative names include the domain names of all sister projects. 191.237.1.8 (talk) 06:42, 14 June 2015 (UTC)[reply]
Oh, you're absolutely right; users get a unified cert when they don't have SNI. I saw the SNI behaviour of switching certificates and just assumed it would be broken without SNI. My bad. Bawolff (talk) 11:09, 14 June 2015 (UTC)[reply]
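To make the unified-certificate mechanics above concrete, here's a toy self-signed certificate built with Ruby's OpenSSL bindings. The domain list is illustrative, not Wikimedia's actual one; the point is that one certificate can carry alternative names for many unrelated domains:

```ruby
require 'openssl'

# Build a toy self-signed certificate whose subjectAltName covers
# several domains, the way a unified certificate does.
key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version    = 2                     # X.509v3
cert.serial     = 1
cert.subject    = OpenSSL::X509::Name.parse('/CN=*.wikipedia.org')
cert.issuer     = cert.subject          # self-signed
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600

ef = OpenSSL::X509::ExtensionFactory.new
ef.subject_certificate = cert
ef.issuer_certificate  = cert
cert.add_extension(ef.create_extension(
  'subjectAltName',
  'DNS:*.wikipedia.org,DNS:wikipedia.org,DNS:*.wiktionary.org,DNS:wiktionary.org'
))
cert.sign(key, OpenSSL::Digest.new('SHA256'))

# A client that doesn't send SNI can still match wiktionary.org
# against this single certificate's alternative names.
san = cert.extensions.find { |e| e.oid == 'subjectAltName' }.value
```

SNI only matters here because it lets the server choose a tighter per-project certificate instead of the catch-all one.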
@Bawolff: I just checked my IE6, it has TLS 1.0
—Telpardec  TALK  20:57, 16 June 2015 (UTC)[reply]
Yes, but it's disabled by default. The type of people who still use Internet Explorer 6 are probably not messing with the TLS settings. When I was running IE6 under Wine, enabling TLS 1.0 didn't seem to help anything, but that was probably just Wine not working great. Bawolff (talk) 04:19, 17 June 2015 (UTC)[reply]

To editor Resident Mario: The switch to HTTPS will badly hurt those who chose to change their browser's default list of certification authorities and who, specifically, do not trust GlobalSign (the root authority from which Wikipedia's certificate emanates). At the very least, they will be forced to add security exceptions for all Wikipedia domains, and quite possibly will be locked out of Wikipedia altogether, because browsers do not always allow security exceptions on HSTS sites. In effect, the switch means that users are forced to trust everything that GlobalSign signs if they wish to use Wikipedia, whereas so long as HTTP transport was permitted, one could at least read Wikipedia over HTTP without trusting GlobalSign, if one did not care about the security of public information on Wikipedia. (I can't explain the problem with GlobalSign because I don't want to risk being sued for libel, but let's say that one might not necessarily wish to trust all, or any, certificate authorities.) So the irony is that this change, which is supposed to protect the "security" of users, actually forces security-conscious users to downgrade theirs, in effect a Trojan-horse kind of attack. (In all fairness, Web browsers and HTTPS in general should be blamed for having an absurdly rigid approach to security: one can't restrict a certificate authority to certain domains, or things like that, so I can't say "I trust GlobalSign only for signing certificates in the wikipedia/wikimedia/wiktionary/etc.org domains".) --Gro-Tsen (talk) 21:15, 13 June 2015 (UTC)[reply]

For real? Any person who intentionally messes with their root certificate store should be technically competent enough to make their own trust decisions about Wikimedia certs, by, say, verifying them in some other way. If you're not, you have no business removing CAs from your trust store. Bawolff (talk) 21:45, 13 June 2015 (UTC)[reply]
About 10% of HTTPS websites use GlobalSign, so it is not a Wikipedia-specific issue. One could say the same for any other CA that the WMF may decide to use. Moreover, Bawolff makes a great point that someone technically competent enough to mess with trusted roots would be able to work around this as well. They must know how to do so already, since there are numerous other sites using GlobalSign! If someone really lost faith in the CA system, they should try using Convergence, Perspectives, or Certificate Patrol. Tony Tan · talk 03:07, 14 June 2015 (UTC)[reply]
@Resident Mario: To answer your second question, according to zh:Template:Wiki-accessibility-CHN, zh.wikipedia.org is currently completely blocked in China using DNS poisoning. HTTPS versions of all other Wikimedia projects are not blocked. @Gro-Tsen: If you manually remove GlobalSign root certificates from your browsers' trust stores, you can manually add Wikipedia's leaf certificate to the trust store so that your access to https://en.wikipedia.org/ is not blocked by your browsers. 191.237.1.8 (talk) 05:09, 14 June 2015 (UTC)[reply]

To editor Resident Mario: In short: HTTPS everywhere hurts everyone. HTTP was built with network proxy and caching servers to decrease page load times; these are intermediate servers run by your ISP to reduce backbone data requests. Australians will be most affected: they're about 87 ms away from our Virginia data centers, so they'll have a 200 ms ping. Due to the design of HTML, these requests can stack, meaning that 200 ms could bloat to 2 seconds. Now, <100 ms is considered ideal, at 1 sec users become frustrated, and at 10 seconds they'll look for something else. (Proponents will weasel around this by saying your browser caches content, which helps unless you go back to the Google search results.)

Additionally, anyone who says this'll stop the $53-billion-a-year NSA is delusional. Methods for the NSA to get the WMF's private keys range from a court order (à la Lavabit, which shut down over this), to intercepting and backdooring hardware (Cisco routers, hard drives), to recruiting/bribing employees. This basically leaves ISPs spying on users (Verizon Wireless adds an advertising tracking ID to all HTTP requests), but considering how willing the WMF is to toss aside net neutrality... — Dispenser 15:08, 17 June 2015 (UTC)[reply]
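The latency arithmetic in the comment above works out as a simple back-of-the-envelope model (the numbers are the quoted illustrative ones, not measurements):

```ruby
# Each dependent request (the HTML, then the CSS it references, then
# the assets the CSS references, ...) costs at least one round trip.
rtt_ms             = 200  # quoted Australia <-> Virginia round trip
dependent_requests = 10   # illustrative depth of a serial fetch chain

total_ms = rtt_ms * dependent_requests
# Ten serial round trips at 200 ms each is already 2 seconds, past
# the point where users are said to become frustrated.
```

This is also why multiplexed protocols like SPDY/HTTP/2, mentioned below, help: they collapse much of that serial chain into fewer round trips.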

On that first part: well, yes and no. Most browsers now support SPDY and/or HTTP/2, for which https is a requirement and which will give you a 20-700% speed boost. That is probably going to significantly increase the speed for the majority of the users in those areas. Second, that area is served from the San Francisco caching center, so it's slightly closer than Virginia at least, though still so far away that there is a good point. I do know that the WMF is watching the performance impact of this change around the world, and I think they were already considering adding another caching center for Asia/Oceania regardless, so if performance really does drop measurably, then that consideration might get higher priority. —TheDJ (talkcontribs) 01:09, 18 June 2015 (UTC)[reply]
We send anti-caching headers (because people edit, and then things become outdated). ISP-level caching servers that conform to the HTTP spec should not be caching Wikipedia pages whatsoever, so HTTPS won't really affect caching efficiency. While lots of people go on and on about the NSA, I really think the threat this move is most designed to address is someone like China or Russia altering pages in a MITM fashion to make articles less NPOV. Bawolff (talk) 02:23, 18 June 2015 (UTC)[reply]
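For illustration, response headers along these lines (hypothetical values, not necessarily the exact ones Wikimedia sends) are what tells a spec-conforming shared proxy not to cache a page:

```http
Cache-Control: private, s-maxage=0, max-age=0, must-revalidate
Vary: Accept-Encoding, Cookie
```

`private` and `s-maxage=0` target shared caches specifically, which is why a conforming ISP proxy gains nothing from intercepting this traffic in the first place.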
This isn't an anti-NSA measure, it's due to security and privacy concerns on a number of different levels, not all of them related to governments. /Johan (WMF) (talk) 13:37, 22 June 2015 (UTC)[reply]

And another problem: No browser history!

@Johan (WMF): - In addition to losing the drop-down edit summaries (as mentioned above), I've also lost the browser history for all newly-visited Wikipedia pages. Why the exclamation point?? Because this is absolutely crucial -- in fact, integral -- to my ability to work on Wikipedia. I totally depend on having those page links, which give me quick & easy access to all recently-visited pages.

Johan, you said above, "We want editing Wikipedia to be as simple as possible, no matter which browser people use." (I am using IE 8.) Please tell me there is going to be a technical fix for this problem ASAP. Because if there isn't, there is a very real possibility that I will have to give up editing. I am a long-time (since 2006), very conscientious editor, with nearly 60,000 edits. So I truly hope that does not become necessary. Cgingold (talk) 09:11, 13 June 2015 (UTC)[reply]

P.S. - I raised the very same issues a couple of years ago during the last discussion on this subject, which was resolved to my satisfaction when I learned that it was possible to opt out. So this is really a sore point for me. It sure would have been nice if you guys at least had the consideration to place a banner at the top of all pages for a week or two giving all of us a heads up about the impending change. Matter of fact, I believe I made the same point last time! :-( Cgingold (talk) 09:20, 13 June 2015 (UTC)[reply]
Best advice I can give is to use IE11 or another non-broken browser. —TheDJ (talkcontribs) 10:19, 13 June 2015 (UTC)[reply]
Yup, this is a problem for me too that is admittedly a considerable annoyance. I always opted out previously for this reason. Connormah (talk) 11:21, 13 June 2015 (UTC)[reply]
@Cgingold: If you mean that you lost your browser history for all of the http domains, I would say: deal with it yourself. It's a petty issue. You will regenerate the URLs soon enough as you visit the new pages again; it's no different than if you were to clear your browser history. If you have lost the ability to generate new URLs in your URL history, then that is a problem. I hope it can be fixed, but if it cannot...wouldn't it be easier for you to move up to an Internet browser that's less than six years old? ResMar 13:48, 13 June 2015 (UTC)[reply]
Even if it was "merely" the loss of older browser history that I was referring to -- which it wasn't -- that would hardly be "petty", my friend. You might want to check your attitude at the door before you trivialize another editor's problem. But of course, I was talking about the fact that my browser no longer generates new URL links in the browser history. And it is indeed a very serious problem. Cgingold (talk) 21:29, 13 June 2015 (UTC)[reply]
Petty? The switch to HTTPS is petty. It is stark raving mad to switch to https to avoid NSA surveillance. I cannot believe the reasoning there; some people need to take their tin foil hats off. I bet if anyone at the NSA were to read this, they would have a right good laugh at us all. Even if they were inclined to mine data off this site, the switch to https would be of little impediment to a body with those resources. Why do we not operate only on Tor and demand VPN usage, if we are trying to protect the hypothetical drug smugglers, money launderers and terrorists that have apparently abandoned the onion sites in favour of WP talk pages? There is no benefit to this change in policy and the reasoning behind it is deranged.--EchetusXe 17:46, 13 June 2015 (UTC)[reply]
I am not here to hear your opinion, I am here to assess the damage. ResMar 19:39, 13 June 2015 (UTC)[reply]
@Connormah: As a sysop, you should probably use HTTPS. Otherwise, your account is at risk of being hijacked in a Firesheep-style attack, especially when you use a public network. A sysop account would be really useful for someone intending harm. :( If there are big issues, upgrading your browser to a newer version of IE, Chrome, Firefox, etc. should help. Tony Tan · talk 03:15, 14 June 2015 (UTC)[reply]
Cgingold, I just wanted to say that, yes, we really do care about your problems, we appreciate all the work you're doing, and I will ping you personally as soon as I have a good answer or solution. /Johan (WMF) (talk) 12:15, 16 June 2015 (UTC)[reply]

For reference, IE < 11 represents about 5.5% of our traffic [4]. Bawolff (talk) 18:54, 13 June 2015 (UTC)[reply]

How about an 'in the clear' sub-wiki?

Like http://itc.en.wikipedia.org, which just mirrors the normal wiki. Then all users of 'normal' Wikipedia get HTTPS, but people who want/need HTTP have to specifically ask for it. ˥ Ǝ Ʉ H Ɔ I Ɯ (talk) 09:38, 13 June 2015 (UTC)[reply]

It would more likely be http://en.insecurewikipedia.org, but I don't think there would be many fans willing to maintain such a system. We will have to see what kind of case can be made for it, but I think it is unlikely to happen. —TheDJ (talkcontribs) 10:25, 13 June 2015 (UTC)[reply]
Anyone could set up a proxy to do this (e.g. http://crossorigin.me/https://en.wikipedia.org, though maybe that's a bad example, as it doesn't fix the links). Anyway, the point is that it is trivial to set up an independent proxy to an HTTPS site. Allowing edits might be trickier, but not impossible. Bawolff (talk) 18:28, 13 June 2015 (UTC)[reply]

We have had a discussion

Just a note that we have had a discussion at the village pump about this earlier this year (WP:VPR/HTTPS). The discussion was closed as WP:CONEXCEPT due to the highly technical nature of the issue.

From my point of view, this move to HTTPS-by-default is the correct one. Mozilla (Firefox), Chromium (Chrome), the IETF, and W3C TAG are all behind moving websites on the Internet in general to HTTPS and deprecating insecure HTTP.

HTTPS guarantees the authenticity of content sent from Wikipedia servers as it travels through the Internet, prevents tampering (whether it is censorship in another country or your internet service provider injecting ads or adding invasive tracking headers), and curbs mass surveillance (by a gov't or an internet provider) by making it difficult and expensive to monitor articles being read or written by individuals.

Regarding the potential negative effects of switching to HTTPS for older clients/browsers, we should be able to find a workable solution for them fairly quickly. A lot of the issues mentioned are software bugs that can be fixed without going back to HTTP. Google uses HTTPS by default, and there does not seem to be an issue with anyone using Google. Tony Tan · talk 20:43, 13 June 2015 (UTC)[reply]

Thank you so much, Tony, for pointing out that Google doesn't cause these kinds of problems! Somehow, I hadn't even noticed that -- I guess precisely because it doesn't cause any problems... SHEESH!! If these issues are, in fact, entirely unnecessary, then WHY WERE THEY IGNORED by WMF's tech people when they had been explicitly pointed out on this very page a couple of years ago??? Inexcusable. I am sitting here literally shaking my head in disbelief... Cgingold (talk) 21:48, 13 June 2015 (UTC)[reply]
Well, Google (the search engine anyway, not counting other sites Google runs) does its own auto-complete with JavaScript, based on what it thinks you want to search for. It does not use the built-in "remember what I typed previously" browser feature. You used the word "issues" in the plural. As far as I'm reading, old versions of IE disabling auto-complete on HTTPS is the only actual issue reported in this thread that could possibly not affect Google (or, for that matter, is a reasonable complaint imo). Am I mistaken? Edit: I guess you're also complaining about browser history, so that makes 2 issues. All things considered, both are essentially minor inconveniences, both are experienced only by a relatively small number of users, and the autocomplete one has an easy mitigation (update your browser). Not exactly what I'd call the end of the world. Bawolff (talk) 04:45, 14 June 2015 (UTC)[reply]

Please enable HTTP mode

Hi. I'm from Iran. After WP enabled https as default (with no access to http), we have a lot of problems accessing WP due to Internet censorship, because the Iranian government abuses the https protocol. It's very slow and pages do not load properly. Time-out errors happen frequently. Editing is not easy anymore. Please enable the HTTP option for restricted countries again. Wikipedia is a great contribution to humanity. Thanks. --188.158.107.24 (talk) 10:41, 14 June 2015 (UTC)[reply]

All people everywhere possess the inalienable right of access to information of any and every kind, and they should be able to exercise that right without intervention by any company, organization or government, including suppression, censorship and secret monitoring. The sole exception would be information that is kept secret for reasons of national security. What I don't understand is why any government that suppresses and censors this right by abusing HTTPS would not also abuse HTTP. Is HTTP really that much harder to abuse, to suppress and to censor? Since many of the problems that have erupted since Wikipedia converted to HTTPS-only have turned out to be due to users running older versions of software, and perhaps older hardware as well, maybe if you upgraded to recent versions you would find that the problem is not governments, but the use of outdated hardware and software? – Paine  16:06, 14 June 2015 (UTC)[reply]
They try to block HTTPS and other encrypted traffic because they can't see what you're doing. Cleartext traffic like HTTP can be examined. They want to give people some access to the Internet, because they know it's generally a lost cause to try to block Internet access completely, and trying to do so might spark a revolt, but they want to retain the ability to block some content, and keep tabs on what you're doing. For instance, China's "Great Firewall" selectively blocks access to information on things like the Tienanmen massacre through multiple techniques, including a blacklist of certain sites, and traffic analysis. --108.38.204.15 (talk) 22:33, 14 June 2015 (UTC)[reply]
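To make the filtering distinction concrete (a hedged sketch, not a description of any particular firewall): with cleartext HTTP, the full request, including the article title, is readable by any on-path middlebox, while with HTTPS roughly only the hostname leaks (via DNS and the TLS SNI field), so per-article filtering stops being possible.

```python
# Illustrative only: what an on-path observer can read in each case.
# The request bytes below are hypothetical, not captured traffic.
http_request = (
    "GET /wiki/Internet_censorship HTTP/1.1\r\n"
    "Host: en.wikipedia.org\r\n"
    "\r\n"
)

# Plain HTTP: the article title travels in cleartext, so a censor can
# match on the path and block a single page.
visible_path = http_request.split()[1]

# HTTPS: the path sits inside the encrypted TLS payload; roughly only
# the hostname is exposed (DNS lookup, SNI), so the censor's choice
# collapses to blocking or allowing the whole site.
visible_over_https = "en.wikipedia.org"

print(visible_path)        # the censor sees exactly which article
print(visible_over_https)  # under TLS, only the site name is visible
```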
@Legoktm: you might know who to pass this concern onto. Magog the Ogre (tc) 22:35, 14 June 2015 (UTC)[reply]
I think I understand what it feels like to be faced with Internet censorship; I spend half my time in China, where the Great Firewall disrupts access to websites that are commonly used in countries like the U.S. It is very, very frustrating. What I do want to point out, however, is that by enabling forced HTTPS encryption, governments like that of Iran will be forced to make the decision to either block all of Wikipedia or none of it, instead of being able to selectively filter by the topic of individual articles. While in the short term users may find access to be unstable or even impossible, the government may eventually be forced to stop interfering with Wikipedia traffic if it decides that access to the "good" information is more important than filtering the "bad" information. So in the long run, it may be better to keep Wikipedia HTTPS only if users eventually end up having access to all of Wikipedia, without censorship. There is no guarantee, but I think we should at least wait and see. Tony Tan · talk 01:50, 15 June 2015 (UTC)[reply]
@108.38.204.15: Out of curiosity, do you have a source for information about the great firewall using traffic analysis? Most of the things I read seem to suggest they mostly use deep packet inspection and DNS poisoning. And I'd be really interested in reading any publicly available info about how their system works. Bawolff (talk) 02:10, 15 June 2015 (UTC)[reply]
I'm suspicious that HTTPS will do nothing to stop spying by the NSA or GCHQ, but has been introduced to make it much harder for whistleblowers to sit in the middle and see who they are spying on. It seems we're stuck with it though, and if you're using ancient browsers such as IE8, you'll just have to upgrade. Akld guy (talk) 06:24, 15 June 2015 (UTC)[reply]
That doesn't really make sense to me. What realistic opportunities would a whistleblower ever have to be in the middle of an NSA/GCHQ communication? And even if they were in such a position, the transport security of Wikimedia would be rather irrelevant. To the best of my knowledge, no whistleblower has ever intercepted communications in transit over the internet in order to release them in the public interest. Whistleblowers are usually in a trusted position, and legitimately have access to the data which they decide to divulge. Bawolff (talk) 07:47, 15 June 2015 (UTC)[reply]
I want to clarify one thing that's turned up a couple of times in the general discussion (and I'm not replying to any specific user here). There have been a number of comments regarding the NSA. We know that the NSA has targeted Wikipedia traffic, and the Wikimedia Foundation doesn't believe Wikipedia readers and editors ought to be targeted, but while this may have been tangentially related to concerns over the NSA, it wasn’t the driving force. There are other governments and private actors to take into account, and, for example, the Firesheep style attacks that Bawolff has mentioned. Rather, it was driven by concern for the privacy and security of editors and readers all over the world, which means there are many different problems to consider. /Johan (WMF) (talk) 08:00, 15 June 2015 (UTC)[reply]
  • Just to add my 5c, I do remember using a university Internet network a year ago that completely banned HTTPS (so I could use Wikipedia only in HTTP). I do not know the origin of this block (this should be definitely a setting by university network administrator), and I do not know if that block is still there (I haven't used it since then), but I would like to inform that such networks do exist, and I don't think there is a way to track them — NickK (talk) 09:16, 15 June 2015 (UTC)[reply]
Such networks probably exist, but I think it would be up to the network administrators to whitelist Wikipedia's servers if they believe access to Wikipedia is important. They would probably do it after realizing that it is no longer possible to access Wikipedia on plain HTTP. Tony Tan · talk 05:26, 16 June 2015 (UTC)[reply]
If Iran blocks HTTPS, there's no way Wikipedia/WMF will change the government's mind by also blocking Iranians' access to Wikipedia through HTTP, which is probably a desirable outcome for the regime anyways. WMF should then set up additional HTTP servers for static access to Wikipedia (no-edit access), with a disclaimer, in big banner statements at the top and bottom of every page, stating that the content may be modified by third-party man-in-the-middle vandalism. -- 70.51.203.69 (talk) 04:44, 17 June 2015 (UTC)[reply]
It would be trivial for the men-in-the-middle to remove the disclaimers. (talk to) TheOtherGaelan('s contributions) 06:16, 17 June 2015 (UTC)[reply]
Yes, it would; however, it would re-enable access for populations who are completely blocked from using HTTPS. If the governments in question actively block HTTPS, then by removing HTTP access to Wikipedia we are just falling into their hands: we limit their populations' access to information by voluntarily completing their censorship of the internet for them, as they filter out HTTPS. -- 70.51.203.69 (talk) 11:31, 18 June 2015 (UTC)[reply]
Never really saw the logic behind moving to HTTPS... so it either stops the governments from snooping on the accounts of say 10,000 Wikipedians (people who browse and randomly edit the wiki), or, by moving to HTTPS, it blocks 1.2bn-2bn users from COMPLETELY accessing the website. If I was the guy in charge of making the decision, I'd choose the latter. I'd rather have a billion users able to access this site than help 10,000 users "hide" behind closed doors and randomly attack their government and make this site look bad... Sadly, I don't work for the site, and I sympathize with those that can no longer access the site. If WMF had actually done their research before doing this, they would realise it was those users who contributed a lot to the website, more than the 10,000 who use the site for their own personal agendas... alas... the weak shall inherit the wiki. And for the 1000th time, enwikipedians' demands supersede the demands of other language wikis --Stemoc 11:53, 18 June 2015 (UTC)[reply]
Billion? Do you have a citation for that? Before anyone says China, China is not currently treating https access to Wikipedia any differently than http access. I'm keenly interested in who this actually blocks, so if anyone has actual information about people who are blocked... please say so. Bawolff (talk) 21:39, 18 June 2015 (UTC)[reply]
If the governments that currently block HTTPS really intended to completely remove their citizens' access to all of Wikipedia, they would have already done so over HTTP. Precisely because they still see value in some of Wikipedia's content, they chose to filter instead of block. HTTPS removes the filter option, so they will have to either allow or block all traffic to Wikipedia. When they made the decision, Wikipedia was still available over HTTP, so they chose to block HTTPS and filter HTTP, achieving their purpose of allowing access to some information while blocking others.
Now that Wikipedia can only be accessed on HTTPS, they are forced to re-evaluate their decision. They are now forced to decide between blocking all of Wikipedia, or allowing all of it. While all of Wikipedia is blocked as of now (due to their earlier decision based on a situation that has since changed), they may eventually be forced to allow it if they think public access to certain resources is important. This was the case for GitHub. When GitHub switched to HTTPS-only, China eventually decided to allow all GitHub traffic because of its importance to software development, even though there was other information on there that the gov't wanted to censor. It may be a while before HTTPS becomes unblocked; perhaps the governments are waiting for Wikipedia to enable HTTP access again, which would make it unnecessary for them to allow HTTPS and give up filtering. Tony Tan · talk 07:34, 21 June 2015 (UTC)[reply]
Or they could tell people to use Baidu Baike, or similar local service. -- 70.51.203.69 (talk) 12:33, 23 June 2015 (UTC)[reply]
On that note, does that mean that Wikipedia has a TOR address? (Does Iran successfully block TOR?) -- 70.51.203.69 (talk) 12:36, 23 June 2015 (UTC)[reply]
You do not need a website to have a "TOR address" to use Tor to access the website. You can use Tor to access any website that does not block Tor exit node IPs. .onion addresses are used for concealing the location of the web server. Tony Tan · talk 20:43, 23 June 2015 (UTC)[reply]

Horizontal numbered lists

Horizontal numbered lists do not seem to exist in WP. Do they exist? SoSivr (talk) 10:11, 19 June 2015 (UTC)[reply]

See Template:Flatlist#Syntax for ordered lists. PrimeHunter (talk) 10:20, 19 June 2015 (UTC)[reply]
Thanks. I was mainly looking at template hlist because of its name, which alludes to "horizontal list". The only downside with this implementation of horizontal numbered/ordered lists is that one wikitext line is needed for each item of the list. SoSivr (talk) 09:33, 23 June 2015 (UTC)[reply]
T:Hlist should probably implement numbered lists... Under the hood, both of them use the hlist HTML class for styling.

You could also provide the HTML5 representation of the list, but there are bots that clean that up. --Izno (talk) 13:40, 23 June 2015 (UTC)[reply]

AFAIK, there is no support for horizontal lists in HTML5, but support is planned for CSS level 4. -- [[User:Edokter]] {{talk}} 16:13, 23 June 2015 (UTC)[reply]
I meant simply the use of HTML markup rather than wiki markup. Though that's interesting to hear. Surprised it isn't in html/css yet since the markup of horizontal lists has been of interest for at least the past decade (just googling around)... I suppose you can do it with classing regardless? --Izno (talk) 17:08, 23 June 2015 (UTC)[reply]
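For reference, the flatlist-based ordered syntax under discussion looks roughly like this (per Template:Flatlist#Syntax; as noted, each item takes its own wikitext line):

```
{{flatlist|
# first item
# second item
# third item
}}
```

The template wraps the list in an element carrying the hlist class, which is what renders the items horizontally.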

Session data loss message

Does anyone know the name of the MediaWiki message that pops up if you are trying to save an edit, but session data has been lost? Happens if you disconnect from the internet for a while, and then reconnect and submit changes. Conifer (talk) 10:09, 20 June 2015 (UTC)[reply]

Here are some messages mentioning session data:
PrimeHunter (talk) 12:19, 20 June 2015 (UTC)[reply]
@Conifer: It happens much more often than an occasional disconnection can explain. See Wikipedia:Village pump (technical)/Archive 137#"Loss of session data" error on Save page above. The message for that one is MediaWiki:session fail preview. --Redrose64 (talk) 12:59, 20 June 2015 (UTC)[reply]
Thanks for finding the message; I've posted an edit request at MediaWiki talk:session fail preview. Didn't realize it was such a common problem for other editors. Conifer (talk) 08:42, 21 June 2015 (UTC)[reply]
Agree, it occurs often for me as well, when making long edits in particular. ResMar 17:49, 23 June 2015 (UTC)[reply]

Anyone know when this is getting fixed? It's getting so I have to double check every edit to see if it saved. --NeilN talk to me 11:29, 24 June 2015 (UTC)[reply]

"Loss of session data" on save

I see former threads about this, but I can't make heads or tails of the discussion on Phab (which doesn't display the year in the posts, so I don't really know if this is recent or not). Can someone offer an update? This is happening all the time to me. Maury Markowitz (talk) 16:15, 25 June 2015 (UTC)[reply]

It's happening to me, too, over on Wiktionary, FWIW (=confirming that it's not a WP-specific issue). -sche (talk) 16:24, 25 June 2015 (UTC)[reply]
@Maury Markowitz: There are no years shown in phab:T102199, this is true: it's because phabricator doesn't display the year for posts or actions that are newer than a month or so. It uses the space saved to show the day of the week instead, for some reason. --Redrose64 (talk) 16:32, 25 June 2015 (UTC)[reply]

HHVM

No edits done in 2015 have ever been tagged with HHVM. GeoffreyT2000 (talk) 21:26, 20 June 2015 (UTC)[reply]

Because HHVM is now the default. See mw:HHVM/About; edits were tagged for debugging/analysis.--Edgars2007 (talk/contribs) 21:42, 20 June 2015 (UTC)[reply]
@Edgars2007: So can we mark the tag as Not Active like it should be? EoRdE6(Come Talk to Me!) 03:24, 24 June 2015 (UTC)[reply]
We can't do it on-wiki. I submitted gerrit:220381 that will do it. It got accepted today and should take effect here on July 2nd. Jackmcbarn (talk) 03:36, 24 June 2015 (UTC)[reply]

Talk page creator Bot

I do not know if this already exists, but I would suggest creating a bot that finds articles lacking a talkpage and creates one by using info from the page's categories.--Catlemur (talk) 12:50, 21 June 2015 (UTC)[reply]

So the bot can talk to itself ? —TheDJ (talkcontribs) 13:18, 21 June 2015 (UTC)[reply]
It sounds like the editor wants a bot to tag non-existing talk pages for WikiProjects (based on the reference to the "page's categories"). --Izno (talk) 16:21, 21 June 2015 (UTC)[reply]
What Izno said.--Catlemur (talk) 17:52, 21 June 2015 (UTC)[reply]
This happens from time to time but on a demand basis. A Wikiproject requests a bot to tag all articles in a category tree, or a page list generated from one, with headers, creating talk pages as it goes. The problem is that, without that curation, it can potentially do far more harm than good. The category tree has many inclusions of categories within categories which make sense individually but lead to category trees based on a topic being much larger than would be expected. As an example there was a recent run to add WP:NZ related articles to the project, which required a lot of manual pruning of the categories. See Wikipedia:NZWNB#Bot request.--JohnBlackburnewordsdeeds 18:08, 21 June 2015 (UTC)[reply]
Perhaps it could be made less sensitive to limit the possibility of an error.--Catlemur (talk) 16:28, 22 June 2015 (UTC)[reply]

Lilypond version

As requested on Help talk:Score#Version?, it would be useful to know the version of Lilypond that Wikipedia uses in its implementation of the Score Extension. Different versions of Lilypond have different syntax; for example, it seems that Wikipedia does not allow for the \tuplet function, which wasn't present in earlier versions. If the version is outdated, I think it would be a good idea to update to 2.18, the latest stable version. Fern 24 (talk) 14:04, 21 June 2015 (UTC)[reply]

I hadn't seen a link to the question on the talk page of the creator of the extension, so I added one, and HTH. --Elitre (WMF) (talk) 20:57, 21 June 2015 (UTC)[reply]
It's 2.14 as that was the stable version at the time of writing the extension. Beeswaxcandle (talk) 21:01, 21 June 2015 (UTC)[reply]
@Beeswaxcandle: Thanks for the information; I've added it to Help:Score. Do you know, does the extension need to be updated to update Lilypond? I was under the impression that each MediaWiki implementation could install a new version of Lilypond and simply point the extension to the new version. There's no great rush to update, but 2.14 is more than four years old now and the latest versions offer some useful features that are currently missing. Fern 24 (talk) 13:24, 22 June 2015 (UTC)[reply]
Sorry, that's not something I can help with. I'm just the prime user of the extension over on Wikisource. I wonder if the best approach would be to wait for 2.20 (the next stable release expected in a few months' time) and then log a phabricator request for an update—in whatever form that needs to happen. With respect to tuplets, these can be done with the \times command. Beeswaxcandle (talk) 20:50, 22 June 2015 (UTC)[reply]
Thanks for your help. I think I'll do as you suggest about the update, and I'll submit a request at Phabricator when the new version comes out. About the tuplets, I ended up using \times, but with \tuplet you no longer have to type the command for each set of triplets - perhaps it would make more sense if you see what I mean, here. Fern 24 (talk) 20:10, 23 June 2015 (UTC)[reply]
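For anyone comparing the two spellings, here is a rough sketch (untested against the wiki's 2.14 installation) of the difference: under \times each triplet needs its own command, while 2.18's \tuplet can cover several groups at once.

```
<score>
\relative c' {
  % LilyPond 2.14 syntax (what the extension currently accepts):
  \times 2/3 { c8 d e } \times 2/3 { f g a }
  % LilyPond 2.18+ equivalent, one command spanning both groups:
  % \tuplet 3/2 4 { c8 d e f g a }
}
</score>
```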

Failure to download as pdf

A reader reported that attempting to download the article Reptation as a PDF failed, giving the error message "! Dimension too large".

I just tried it and got the same message. Does anyone know the problem/solution?--S Philbrick(Talk) 15:11, 21 June 2015 (UTC)[reply]

The problem you are reporting sounds like a potential issue in the code of the MediaWiki software or the server configuration. It would be nice if you could send the software bug to the Phabricator bug tracker by following the instructions How to report a bug. This is to make developers of the software aware of the issue. If you have done so, please paste the number of the bug report (or the link) here, so others can also inform themselves about the bug's status. Thanks in advance! --AKlapper (WMF) (talk) 07:47, 22 June 2015 (UTC)[reply]
Link to T103408--S Philbrick(Talk) 19:22, 22 June 2015 (UTC)[reply]
Ironic that "dimension too large" is coming up on an article about very long linear, entangled macromolecules :-) Nyttend (talk) 21:35, 22 June 2015 (UTC)[reply]


c-uploaded didn't get deleted

I just discovered File:High Line 20th Street looking downtown.jpg by accident. Can anyone imagine why it didn't get deleted? I uploaded it back last November with {{c-uploaded}}, and it should have been deleted as soon as it was off the Main Page, but it's still there. I can't remember what bot is (or was then) responsible for doing c-uploaded deletions. Nyttend (talk) 18:58, 22 June 2015 (UTC)[reply]

I just happened to see this by accident. I'm not sure that I was aware that one of my images was on the Main Page - but then, my memory ain't what it used to be. BMK (talk) 22:51, 22 June 2015 (UTC)[reply]
I found a few more with a "what transcludes here" search and took care of them. There were eight, with dates ranging from November 2014 to June 2015. I don't know the answer to the bot question though. If there's a bot that is supposed to delete these, it would have to be an admin-bot. -- Diannaa (talk) 18:56, 23 June 2015 (UTC)[reply]
DYKUpdateBot is an admin, and I think it might have worked with these, but I could easily be wrong. Nyttend (talk) 21:11, 23 June 2015 (UTC)[reply]

Something up with collapsing?

It doesn't seem to be working on AN/I. BMK (talk) 22:52, 22 June 2015 (UTC)[reply]

Please be more specific. The show/hide links I tried at WP:ANI worked for me in Firefox. PrimeHunter (talk) 23:02, 22 June 2015 (UTC)[reply]
None of the collapsing templates on AN/I (collapse, hat etc.) are working for me. I tried changing a collapse to a hat, and no difference; I tried creating a new collapse elsewhere (i.e. not on AN/I), and it worked. I closed and reopened Firefox, no joy. I'll see what it looks like with another browser. BMK (talk) 23:10, 22 June 2015 (UTC)[reply]
It's an intermittent problem affecting various pages, I first noticed it when protecting a page and saw that the "Instructions and special-case notes" box wasn't collapsed. The problem is that some collapsible boxes are displaying uncollapsed and without the [hide] link; this normally suggests that the JavaScript file that handles collapsing hasn't got through. --Redrose64 (talk) 23:15, 22 June 2015 (UTC)[reply]
Judging by the error I'm seeing in the JavaScript console I think it may have to do with this class having recently been removed. This is also affecting the Warn module of Twinkle, where you are unable to issue anything other than a level one warning. I've reported it and I was told it is being fixed. MusikAnimal talk 23:30, 22 June 2015 (UTC)[reply]
Glad to hear that someone has a handle on the problem. I just checked and the page is collapsing fine under Chrome. I also purged the page on Firefox and got no change, still no collapsing. I assume that the bug will be fixed at some point and not worry about it. BMK (talk) 23:44, 22 June 2015 (UTC)[reply]
FWIW: no collapsing, no hide link in mainspace for me (Firefox 38.0.5 atop WinXP). See infobox data InChI in Aspirin, Ammonia. -DePiep (talk) 08:54, 23 June 2015 (UTC)[reply]
This is because the code that uses this module is not actually declaring the usage. It seems this module is no longer guaranteed to be provided by default, so any code making that assumption was already broken, but now it's visibly broken. —TheDJ (talkcontribs) 09:53, 23 June 2015 (UTC)[reply]
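For script authors affected by this, the usual fix is to declare the module dependency explicitly instead of assuming it is loaded. A hedged sketch of the pattern (assuming the script operates on standard mw-collapsible elements; it only runs inside a MediaWiki page):

```
// Explicitly load the collapsing module before calling into it,
// rather than assuming ResourceLoader has already provided it.
mw.loader.using( 'jquery.makeCollapsible', function () {
    $( '.mw-collapsible' ).makeCollapsible();
} );
```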
Seems to be working again, fingers crossed -- Diannaa (talk) 20:17, 23 June 2015 (UTC)[reply]
Me too. BMK (talk) 23:56, 23 June 2015 (UTC)[reply]

PC protection in page logs

When I go to "All public logs" for Marian Dawkins I can only see one action logged - semi-protection on 8th January 2014. However, the page appears to be PC1 protected. Am I missing something? Is it just displaying differently for me? Is PC protection listed somewhere else? Wouldn't it make more sense to include it in "All public logs"? Also, the log implies that the article is still semi-protected, which it can't be since some of the edits weren't automatically accepted. Striking through this comment as the log gives the expiry date. 12:34, 23 June 2015 (UTC) Yaris678 (talk) 08:42, 23 June 2015 (UTC)[reply]

You have to check the log for "Marian Stamp Dawkins", which was the previous title of the page. Jenks24 (talk) 08:46, 23 June 2015 (UTC)[reply]
@Yaris678: Logs for moved pages can be confusing; there is a summary at WP:MOVE#How to move a page, item 4. It's certainly an anomaly that if a page is protected at the time of the move, you get a log entry like 'moved protection settings from "Foo" to "Bar" (Foo moved to Bar)'; but if it is under PC at the time of the move, you don't get that log entry even though the PC setting is copied. --Redrose64 (talk) 10:26, 23 June 2015 (UTC)[reply]
Thank you for the explanation. That is needlessly confusing. Page logs should follow the moved page, as with page history. Has anyone tried logging that as a bug? Yaris678 (talk) 12:34, 23 June 2015 (UTC)[reply]
Page logs should not follow the moved page, because logs frequently are related to the page title itself. We need the deletion log to stay with the deleted title, for example (what would happen to the deleted edits otherwise?), and what would you do with protection logs for salted pages? Nyttend (talk) 18:34, 23 June 2015 (UTC)[reply]
@Nyttend: It would be a start if PC behaved like prot, i.e. if you moved a page under PC, there would be a log entry on the new page name like 'moved pending changes settings from "Foo" to "Bar" (Foo moved to Bar)'.
Better still would be if this log entry showed what that setting was, and when it expires. Something like 'moved protection settings [edit=autoconfirmed] (expires 23:59, 23 June 2015 (UTC)) from "Foo" to "Bar" (Foo moved to Bar)'. --Redrose64 (talk) 18:46, 23 June 2015 (UTC)[reply]
So you're suggesting that the page's log get a note mentioning the protection? Now that sounds unambiguously helpful, and it hadn't occurred to me at all. Note that PC and protection do behave similarly in that both of them create page history entries; if you go to [17] and look at 31 December, you'll see the PC entry. Nyttend (talk) 21:08, 23 June 2015 (UTC)[reply]
If I want to know when the current prot or PC expires, the easiest thing for me (as an admin) to do is to click the "protect" tab and press End; second easiest is to click the "history" tab and then "View logs for this page". This is fine when there have been no moves since the last protection; if the page has been moved, I see the enigmatic 'moved protection settings from "Foo" to "Bar" (Foo moved to Bar)' for a protected page, nothing at all relevant for a page with PC. Yes it's recorded in the page history, but for frequently-edited pages (as pages with prot or PC often are) it can mean going back through several screens to find the entry.
This brings me to the next problem. Even when I do find the history entry for the prot, the information that I want isn't always there, because of the 255-byte limit on history entries: a typical entry might be "Changed protection level of Foo: Violations of the biographies of living persons policy ([Edit=Allow only autoconfirmed users] (expires 23:59, 23 June 2015 (UTC)) [Move=Allow only administrators] (indefinite))". Now imagine that the page name is quite long, also that the reason for protection selected from the dropdown has been supplemented by a fairly lengthy custom reason; this can mean the loss of some information, which may include the expiry date and time, even the protection level. I see an obvious way to economise on space here: don't include the page name. If you're viewing a page history, it's at the top; if you're looking at the watchlist or somebody's contribs, it's shown earlier on the line, so for all three lists, including it in the log is totally redundant. Space can also be gained by shortening "Allow only autoconfirmed users" and "Allow only administrators" to "autoconfirmed" and "sysop". We do this in the page logs - why not in the history? --Redrose64 (talk) 22:37, 23 June 2015 (UTC)[reply]

Template:Left

Something has changed recently in relation to the {{left}} template. The last changes to the template were in March this year, made by Frietjes (talk · contribs) and Plastikspork (talk · contribs), but it's not clear whether that was the direct cause. The problem I'm seeing is at Template:Lea Valley Lines where the red vertical lines should be continuous, but instead there are gaps: and these gaps are occurring at the points where {{left}} is used. The intent, I think, is so that on a row like the one for Cheshunt, the distance (in this case 14m 01ch) is positioned to the left of the station name - instead, it's appearing below, and causing vertical separation. Has some CSS changed that may have affected this template? --Redrose64 (talk) 09:01, 23 June 2015 (UTC)[reply]

I can't see the spacing problem that you are describing. However I do see some mismatched double braces, which are likely to be causing an error somewhere. — Martin (MSGJ · talk) 10:23, 23 June 2015 (UTC)[reply]
Time for me to try all browsers then. I see the problem in Firefox 38.0.5 (under Windows XP); more to follow. --Redrose64 (talk) 10:28, 23 June 2015 (UTC)[reply]
OK, the accompanying screenshot is in three panes: the top one is from IE8 (Chrome Version 43.0.2357.130 m and Opera Version 12.17 are similar), and is the correct display; the middle one is what I see in Firefox, notice that two rows are broken (there were many more); the bottom one is Safari 5.1.7, where only one row shows breakage. --Redrose64 (talk) 11:15, 23 June 2015 (UTC)[reply]
white-space: nowrap; on the table is obstructing the float; see [18]. Alakzi (talk) 11:17, 23 June 2015 (UTC)[reply]
yes, I have noticed the gaps in the route diagrams as well, and pointed out the problem in this discussion. changes like this fix it for me. User:YLSS indicated that there may be a better solution. Frietjes (talk) 13:43, 23 June 2015 (UTC)[reply]
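To make the CSS interaction concrete: when white-space: nowrap applies to the cell, the text beside the floated {{left}} content cannot wrap, so in some engines the float is pushed onto its own line, breaking the diagram's vertical continuity. A minimal illustrative sketch (the class names here are made up, not the template's real ones):

```css
/* nowrap inherited from the route-diagram table keeps the station
   label from wrapping next to the floated distance... */
.route-diagram td {
    white-space: nowrap;
}

/* ...so the fix is to restore normal wrapping on cells that contain
   a floated element */
.route-diagram td.has-float {
    white-space: normal;
}
```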

Can we add WikiProject Poland template to all articles that are missing it but have the milhist-Poland taskforce template?

I see numerous articles that have Template:Milhist assessment page for Wikipedia:WikiProject Military history/Polish military history task force, but no Template:WikiProject Poland. I'd think there should be an automated way to add the WP:POLAND template, copying the assessment from the milhist one. How could this be done? Where can I ask for this (if not here)? Example of a page with both templates: Talk:Uhlan. Example of a page that only has the milhist one but should have both: Talk:8th Uhlan Regiment of Duke Jozef Poniatowski. Thanks, PS. If you reply here, please WP:ECHO me back. --Piotr Konieczny aka Prokonsul Piotrus| reply here 09:23, 23 June 2015 (UTC)[reply]

@Piotrus: This is one of the approved tasks for Yobot (talk · contribs), which is operated by Magioladitis (talk · contribs), and you would file a request at WP:BOTREQ - but before doing that, there are rules that need to be satisfied, see User:Yobot#WikiProject tagging. --Redrose64 (talk) 10:18, 23 June 2015 (UTC)[reply]

WMFLabs: Revision history not working

The revision history tool does not appear to be working for any page that I've tried. Is this a bug or does the tool not work any more? Any workarounds/mirrors for this? Thanks. --Cpt.a.haddock (talk) 16:31, 23 June 2015 (UTC)[reply]

I've had the same problem since yesterday. Using Firefox 38.0.5, so not likely to be my browser. --Alan W (talk) 18:33, 23 June 2015 (UTC)[reply]
I'm thinking that this might be the cause. Looks like quite a mess, and it might take a while to get the tools functioning completely as before. --Alan W (talk) 03:11, 24 June 2015 (UTC)[reply]
Central page for outage information: wikitech:Incident documentation/20150617-LabsNFSOutage. --AKlapper (WMF) (talk) 09:41, 24 June 2015 (UTC)[reply]
Thanks, Andre. I work with things like this all the time (mostly reporting them, not fixing them). Not claiming to be an expert in any of the technologies involved, but I understand just enough to say Ouch! I wouldn't want to be responsible for fixing this problem. The average Wikipedian probably has no idea of all the work that must go on behind the scenes to get these tools to function correctly. We tend to take such things for granted. --Alan W (talk) 04:57, 26 June 2015 (UTC)[reply]

Letterhead class?

I like to make use of a certain snippet of code, div class="letterhead", to make visually attractive quote delineations—it makes text look like it's been written on a yellow notepad. However, this appears not to work anymore: for example it fails to appear here or, as I used it, here. I like this formatting option... where has it gone? ResMar 17:46, 23 June 2015 (UTC)[reply]

It's still listed here as an option. Ed [talk] [majestic titan] 17:49, 23 June 2015 (UTC)[reply]
It was removed a few days ago. -- [[User:Edokter]] {{talk}} 19:00, 23 June 2015 (UTC)[reply]
 Philippe (WMF): Why? It's still present on other wikis, e.g. here. ResMar 19:04, 23 June 2015 (UTC)[reply]
More information. ResMar 19:06, 23 June 2015 (UTC)[reply]
I know that Philippe and I talked about it before removing and it was only removed because of the question and because we couldn't think of anywhere it was used anymore (it was originally used for a strategy announcement about 5 years ago). If the community is using it there is certainly no reason not to add it back :) it's sadly tough to really gauge how much it's used which is why we didn't realize. Jalexander--WMF 19:08, 23 June 2015 (UTC)[reply]
I doubt it's used much, but I used it in several Signpost stories last year, and it's perfect for them when quoting long-form announcements. I'll add it back now. Thanks, James! Ed [talk] [majestic titan] 19:14, 23 June 2015 (UTC)[reply]
And this is why people shouldn't put one-off crap like that in MediaWiki:Common.css.... People start using it. We now have a global CSS rule for a few Signpost articles???? That's crazy. —TheDJ (talkcontribs) 19:25, 23 June 2015 (UTC)[reply]
Start an RFC, (don't) get it removed. --Izno (talk) 20:55, 23 June 2015 (UTC)[reply]
@TheDJ: It's not like it's labor-intensive, and it's only on the English Wikipedia, not globally. Ed [talk] [majestic titan] 22:43, 23 June 2015 (UTC)[reply]
After four years of it lying around, did you seriously not expect it to be taken advantage of? I'm used to hearing that kind of logic from the WMF—less and less now, thankfully—not from a fellow member of the community. ResMar 00:00, 24 June 2015 (UTC)[reply]
First of all, I'm critiquing WMF for putting it in, in the first place, and then NOT remembering to remove it. And second, I translate the above as "we don't care about site performance for our users; if you do, that's your problem". I and a few others spent years trying to keep the size of MediaWiki:Common.css down to somewhat reasonable proportions. This is one more example of the total insular view of the community towards technical matters. And the reason why we have so few users able to look after it properly. Whatever, I shouldn't let my own frustration cloud this. This was bad judgement by WMF and now we (community) can't easily fix it anymore. That doesn't mean we should keep it in; it will just be very hard to get it out and fix all current uses to inline the style instead of depending on a class. PS. With "global" I mean site-scoped, instead of scoped to those users who need it because their page has such an element. —TheDJ (talkcontribs) 06:37, 24 June 2015 (UTC)[reply]
 TheDJ: I know enough code to know that it's ultimately a problem, but also enough about matters in the movement to see it used (by the election committee, no less) here and want to have access to it too. You have to break things to make them better, sure, but merely turning off a feature a lot of editors rely on is the wrong way to approach the problem. I suggest that this is one more example of the total insular view of techies towards community matters. Make a list of the class's uses, provide an alternative, and create a deprecation schedule. I am reminded of a certain "routine" removal that the WMF made at one point in the past of some parsing code at the end of the text of a page load that broke a large number of bots relying on raw dumps. That content is back there again now. ResMar 14:00, 25 June 2015 (UTC)[reply]

Text is not centered

For some reason, I noticed that text in templates such as {{Decadebox}} is not being centered. GeoffreyT2000 (talk) 18:23, 23 June 2015 (UTC)[reply]

Template:Decadebox doesn't have centered text by design, no? ~SuperHamster Talk Contribs 18:35, 23 June 2015 (UTC)[reply]
Examples? If you use IE9 and saw centered text (headers) in infoboxes, you saw an IE bug which has now been fixed. -- [[User:Edokter]] {{talk}} 18:58, 23 June 2015 (UTC)[reply]
@Edokter: I can see the same problem, demonstrated in the same (now) left-aligned information in Template:MedalTableTop. Conversion to a normal wikitable style allows normal text centering for columns, but use of "text-align:center" doesn't seem to work. I'm using latest Firefox. SFB 19:18, 23 June 2015 (UTC)[reply]

Is there an administrator who frequents this place who can get this sorted? I've had to fix well over ten infoboxes today and yesterday. Alakzi (talk) 18:00, 25 June 2015 (UTC)[reply]

Edit link took me to previous subsubsection

When trying to edit Wikipedia:Village_pump_(policy)#MOS:IDENTITY_clarification, a rather long section with 15 subsubsections and counting, in one of the subsubsections, I hit the "edit" link and was taken to the source of the previous subsubsection. After a bit of futility, I found that I could go to the next subsubsection, edit it, and get the subsubsection I wanted. It worked. Meanwhile, in doublechecking to see how reproducible the error is, I am now finding the error is not happening.

I have default skins, use Firefox Portable on Windows. I'd tell you version numbers, but I have no idea what they are, and both seem to keep the information Top Secret. Choor monster (talk) 18:55, 23 June 2015 (UTC)[reply]

The reason is probably that a section was moved up right around that time; you probably loaded the page before the other edit was saved, and clicked on the edit link after. עוד מישהו Od Mishehu 19:26, 23 June 2015 (UTC)[reply]
Sneaky. Thanks! Choor monster (talk) 19:33, 23 June 2015 (UTC)[reply]
We get this a lot at busy pages like WP:RFPP. -- Diannaa (talk) 20:15, 23 June 2015 (UTC)[reply]

File upload wizard broken

Apparently something has broken the WP:File Upload Wizard for some users. The script (MediaWiki:FileUploadWizard.js) chokes on line 2401:

   if ($.isDomElement(target)) return target;

The error message is: "TypeError: $.isDomElement is not a function".

Two users have reported the error (which manifests itself in the entire script failing to load) on the wizard's talkpage since 22 June, and I can replicate on my machine under Firefox 37.0.2. It doesn't seem to apply to all users though, as occasional uploads using the wizard still show up in the logs (the latest one I can see at 18:44, 23 June 2015).

I can't figure out why this is no longer working. Can somebody help? Fut.Perf. 20:13, 23 June 2015 (UTC)[reply]

The module jquery.mwExtension that adds this function is deprecated and is not loaded by default anymore. Max Semenik (talk) 22:47, 23 June 2015 (UTC)[reply]
What can be used instead? This needs to be fixed quickly. Fut.Perf. 05:35, 24 June 2015 (UTC)[reply]
It wouldn't have broken if it had declared its dependency by forcing the module to load. See also: ResourceLoader and user scripts. That doesn't fix the deprecation part, but should make it work for now. —TheDJ (talkcontribs) 07:01, 24 June 2015 (UTC)[reply]
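For scripts that can't be updated to declare the dependency right away, the deprecated check is also simple enough to inline. A minimal stand-in of my own (a sketch, not code from the wizard or from MediaWiki), assuming `target` may be any value:

```javascript
// Rough equivalent of the deprecated $.isDomElement helper:
// true only for objects that look like DOM element nodes (nodeType 1).
function isDomElement(target) {
  return !!target && typeof target === 'object' && target.nodeType === 1;
}
```

The wizard's failing line could then call `isDomElement(target)` without loading jquery.mwExtension at all.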
I don't use the upload wizards - they never worked properly when they were new, so after about five attempts with no success, I went back to the old ways. If you're interested, the links are: Wikipedia:Upload/old (English Wikipedia); c:Commons:Upload (Commons). --Redrose64 (talk) 10:04, 24 June 2015 (UTC)[reply]
A couple of similar problems have been reported with Twinkle at WT:TW#Reports to UAA not working  —SMALLJIM  12:33, 24 June 2015 (UTC)[reply]

Old Hedonil script

Would I be correct to assume that this script is no longer working, and should be removed (as useless) from my .js:

mw.loader.load('//meta.wikimedia.org/w/index.php?title=User:Hedonil/XTools/XTools.js&action=raw&ctype=text/javascript');

I've been hoping for it to be revived. First it quit showing the XTools stats. Then it started redirecting to labs:

https://tools.wmflabs.org/xtools-articleinfo/index.php?pageid=27092849&project=en.wikipedia.org&uselang=en

The above is also the redirect when you go to any article's Page/History/Revision History Statistics (absolutely no information)

Now it doesn't seem to work at all. Any feedback on this? — Maile (talk) 23:38, 23 June 2015 (UTC)[reply]

The xTools have been unstable for a long time but there is still work on them. See for example #xTools not working and User talk:cyberpower678#Revision history statistics. I don't know the details of meta:User:Hedonil/XTools/XTools.js but I guess it will work if the xTools themselves work. PrimeHunter (talk) 00:01, 24 June 2015 (UTC)[reply]
There's an alternate page-info tool at http://vs.aka-online.de/cgi-bin/wppagehiststat.pl -- Diannaa (talk) 00:11, 24 June 2015 (UTC)[reply]
The maintainers are still active and we are working on recruiting more, but Hedonil left xTools' code in such a convoluted state that it's hard to follow and virtually impossible to debug. Our primary goal is to get a new environment set up to restore stability to xTools; then we will work on a rewrite that the maintainers can more easily maintain. Following that, the gadgets will be moved onto the xTools environment, and Hedonil's JS will be redirected directly to xTools, where everyone can maintain the script. We then plan to revive Wikiviewstats, which will likely need to be rewritten too, as I spent 3 hours looking for the bug and couldn't find what is causing Wikiviewstats to be broken. But first we need more maintainers. As xTools gets more maintainers, things will get done faster. We are currently discussing methods of recruiting new users.—cyberpowerChat:Limited Access 02:36, 24 June 2015 (UTC)[reply]
Thanks for all the explanations. Diannaa, that alternate tool gives exactly the information I'm interested in. — Maile (talk) 12:44, 24 June 2015 (UTC)[reply]
@C678cyberpower, you've got wiki mail. --Ancheta Wis   (talk | contribs) 15:50, 24 June 2015 (UTC)[reply]
I didn't get a thing. Did you send it to the right user? I'm Cyberpower678. Do you perhaps have Yahoo!?—cyberpowerChat:Limited Access 18:40, 24 June 2015 (UTC)[reply]

Page revision History statistics has changed

Previously, page revision history statistics used to show who created the page and a list of the users with the most edits, along with other details, but now Akshay Kumar shows this. --Cosmic  Emperor  13:31, 24 June 2015 (UTC)[reply]

@CosmicEmperor: Please see User_talk:Cyberpower678#Revision_history_statistics --NeilN talk to me 13:38, 24 June 2015 (UTC)[reply]

jQuery.escapeRE vs. mw.RegEx.escape

jQuery.mwExtension was apparently deprecated a few days ago. Also apparently, the function has already been removed from the deployed site? At least we were getting issues at WT:TW. I've replaced the calls in Twinkle, but I'm noticing javascript errors in site script as well. In debug mode, mediawiki.util.js is throwing errors since it's using $.escapeRE.

Amalthea 16:28, 24 June 2015 (UTC)[reply]

It's no longer depended on by any of the default modules which are enabled on all pages. Since it was an undeclared dependency for Twinkle, it was causing errors. The issue with mediawiki.util is only in debug mode and caused by the way the caching is different for debug mode. —TheDJ (talkcontribs) 19:12, 24 June 2015 (UTC)[reply]
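For user scripts that still need the escaping behaviour, the helper is small enough to carry locally. A sketch of an equivalent (the character list below is my own; compare it with the replacement `mw.RegExp.escape` before relying on it):

```javascript
// Escape a string so it can be embedded literally inside a RegExp,
// as a local replacement for the deprecated $.escapeRE.
function escapeRegExp(str) {
  return str.replace(/[.*+?^${}()|[\]\\\/-]/g, '\\$&');
}

// e.g. build an exact-match pattern from arbitrary page text
var pattern = new RegExp('^' + escapeRegExp('C++ (disambiguation)') + '$');
```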
Related: phab:T103498. --AKlapper (WMF) (talk) 08:43, 25 June 2015 (UTC)[reply]

Where is class=wikitable not used?

Please see the thread raised by Quiddity (WMF) at Help talk:Table#Where is class=wikitable not used? and discuss there. --Redrose64 (talk) 21:29, 24 June 2015 (UTC)[reply]

Edits not appearing

After two or three days of this, I'm pretty sure it's not just me. And it's sporadic. Edits I do, that may show up in a preview, vanish when I do a save. Some that do show up, the next day aren't there, not in my Contributions or anything. And some of them will be there if I open the edit window, and will be there in my Contributions, but not immediately on the Watch list, and not showing up on the article for several minutes. And then some of my edits just work like they're supposed to. — Maile (talk) 21:33, 24 June 2015 (UTC)[reply]

@Maile66: This is Wikipedia:Village pump (technical)/Archive 137#Post not showing up immediately. Refresh the page (F5 in most browsers) and it should show. --Redrose64 (talk) 21:38, 24 June 2015 (UTC)[reply]
And so it is. The saga continues. Thanks. — Maile (talk) 22:02, 24 June 2015 (UTC)[reply]

Citations

Hello all; since User:Citation bot is down, does anyone know of any other good tools for expanding bare citations consisting only of a doi or bibcode (NOT a sole URL, so no ReFill), or any other good replacements for the bot? StringTheory11 (t • c) 23:13, 24 June 2015 (UTC)[reply]

VisualEditor's Cite tool will expand DOIs for you. Not sure if it does bibcode though. — Mr. Stradivarius ♪ talk ♪ 23:40, 24 June 2015 (UTC)[reply]
You may want to also have a look at Help:Citation tools. Dalba (talk) 04:11, 25 June 2015 (UTC)[reply]
mw:Citoid (in VisualEditor; opt in via Beta Features in your prefs to try it out) can handle most, but not all, dois. The docs don't say anything about Bibcode, but it does mention "bibtex" as a supported format, so it might be planned. I've added phab:T103900 to its list in case they hadn't thought about it yet. Whatamidoing (WMF) (talk) 19:54, 25 June 2015 (UTC)[reply]
Thanks, will try Citoid. However, can the WMF please actually make it work with the source editor as well, which I prefer for everything other than editing tables? It seems like the WMF is trying to force us to use VE with this move, which doesn't come across well. StringTheory11 (t • c) 01:23, 26 June 2015 (UTC)[reply]
The citoid service will be added to the wikitext editor when it covers more sources. If you want to try it out in the wikitext editor, then perhaps User:Salix alba could tell us whether his user script to do that is still working. Whatamidoing (WMF) (talk) 21:23, 26 June 2015 (UTC)[reply]

Major scripting breakage

Seems most of the scripting based functions suddenly failed in the past hour or so - major failure on things like Hotcat or other normally working scripts. Dl2000 (talk) 03:02, 25 June 2015 (UTC)[reply]

Cease fire... looks like things are functioning normally again. Did some resetting of the Preferences, plenty of cache clearing reloads, cleared some browsing history, ensuring the browser scripting is allowed. Not sure what bombed, not sure which side(s) it came from, but probably was worth doing plenty of browser-side and preference resets anyway. Dl2000 (talk) 03:14, 25 June 2015 (UTC)[reply]
(edit conflict) I had a random JavaScript failure in CodeEditor in the last hour as well, but it seems to be working again now, as are my other scripts. — Mr. Stradivarius ♪ talk ♪ 03:16, 25 June 2015 (UTC)[reply]
I just got it again while trying to edit User:Martijn Hoekstra/watchthingy.js. The error message was "Uncaught TypeError: $(...).data(...).fn.codeEditorMonitorFragment is not a function" in index.php:114. This seems to be an intermittent thing rather than a one-off. — Mr. Stradivarius ♪ talk ♪ 03:25, 25 June 2015 (UTC)[reply]
Please file a ticket in phab about that codeEditorMonitorFragment thing if you have time, so that I can remember to fix it. —TheDJ (talkcontribs) 07:44, 25 June 2015 (UTC)[reply]
Done at phab:T103802. — Mr. Stradivarius ♪ talk ♪ 08:45, 25 June 2015 (UTC)[reply]

Special:RecentChanges

When I do searches for multiple accounts, etc., I often get hits for userpages that have a transcluded Special:RecentChanges template. I can't find this template to see how many users do this. I ask because it drives me rangy getting all those false positives. Thoughts? Anna Frodesiak (talk) 06:30, 25 June 2015 (UTC)[reply]

@Anna Frodesiak: Do you mean pages that transclude the actual list of recent changes? You can do that with the code {{Special:RecentChanges}}, but it's not a template; you're transcluding the actual special page itself. You can't use Special:WhatLinksHere with recent changes (unless, apparently, you are using Flow), but if the users in question are transcluding recent changes indirectly through another template, you could find the links from that template instead. What do you mean by doing searches for multiple accounts, by the way? There might be a better way of doing whatever it is you're trying to do. — Mr. Stradivarius ♪ talk ♪ 06:49, 25 June 2015 (UTC)[reply]
Hi, Mr. Stradivarius. Yes, pages like this. It contained {{Special:RecentChanges|limit=1000}}. This is a problem in this example: I find a new account. He's posted some inappropriate content at his userpage, such as "Everyone at school hates ███████ ███████ because he is a poo-poo-head...". Five minutes earlier he registered another account and did the same. So, sometimes I will search for such a string, and what comes up are recent-changes transclusions at userpages containing (←Created page with Everyone at school hates ███████ ███████ because he is a poo-poo-head...) Anna Frodesiak (talk) 08:36, 25 June 2015 (UTC)[reply]
@Anna Frodesiak: You could try searching with the insource keyword: that will exclude any transcluded content. You might have to tweak the search if the vandals have used any fancy wikimarkup, though. — Mr. Stradivarius ♪ talk ♪ 08:56, 25 June 2015 (UTC)[reply]
Hi, Mr. Stradivarius. You are very sweet. That page could be upside down and make as much sense to me. :) I must confess, I'm still trying to figure out how to get this to appear. My browser doesn't show that at all. I didn't want to bug you for help yet because you'd think me daft. Anyway, I'll live with the false positives. Maybe when I encounter them, I'll drop the user a line and ask if he could remove the transclusion if they're not using it. Anna Frodesiak (talk) 09:16, 25 June 2015 (UTC)[reply]
@Anna Frodesiak: I probably just wasn't very good at explaining it - sorry for talking in technobabble. :) If you make a normal search for "Everyone at school hates ███████ ███████ because he is a poo-poo-head..." then MediaWiki will look for that text found in the displayed page. So it will find all of the false positives from people who put {{Special:RecentChanges}} on their user pages. However, if you search for insource:"Everyone at school hates ███████ ███████ because he is a poo-poo-head...", then MediaWiki will look for that text in the source wikitext, not the displayed page. The source wikitext of the false positives won't contain the actual text "Everyone at school hates ███████ ███████ because he is a poo-poo-head..." - it will only contain something like {{Special:RecentChanges}}, and so MediaWiki won't include those pages in the search results. — Mr. Stradivarius ♪ talk ♪ 10:15, 25 June 2015 (UTC)[reply]
Ahhhhh, I see, Mr. Stradivarius. I get it now. No need to be sorry. I am very duncy when it comes to this sort of thing. Okay, I just add "insource:" and then the text, and it will give me only the pages where the text is actually typed. Splendid. Thank you for this. It will be a huge help. Anna Frodesiak (talk) 10:34, 25 June 2015 (UTC)[reply]

Pages without Wikidata equivalent

Hi. Does somebody know whether Wikipedia has a special page for articles without a Wikidata item? Not to be confused with Special:WithoutInterwiki, which addresses pages without equivalents in other languages, regardless of having / not having an item on Wikidata. --Gikü (talk) 08:55, 25 June 2015 (UTC)[reply]

Special:UnconnectedPages will do that. If you're ever looking for a particular special page, you can check Special:SpecialPages (if you can remember that) - Evad37 [talk] 10:03, 25 June 2015 (UTC)[reply]
Thank you, that'll do it. I was searching for it on rowiki, and the name didn't say much in the Special:SpecialPages list. --Gikü (talk) 11:11, 25 June 2015 (UTC)[reply]
As an FYI, UnconnectedPages recently regressed in a small amount per d:Wikidata:Status updates/2015 06 20. I think the use case requested here is fine, just making a note. --Izno (talk) 13:45, 25 June 2015 (UTC)[reply]

Load times are immense for JS scripts

It's taking 60+ seconds to load all of my JS. Half of it loads immediately but the other half takes 60+ seconds. This just started recently.—cyberpowerChat:Online 13:42, 25 June 2015 (UTC)[reply]

Did you check which specific scripts from which servers have problems to load, e.g. via the "Network" tab of your web browser's developer tools? Wondering if there's any kind of pattern. Also, which browser and version is this about? --AKlapper (WMF) (talk) 08:53, 26 June 2015 (UTC)[reply]

Edits go in wrong section

Okay, this might just be me being a careless idiot, but several times over the last couple of days, I've clicked to edit a section, typed a comment, then either previewed or saved and found that my comment was inserted in the section above the one I intended to edit. For example, just now I was looking at this revision of an Rfd thread, clicked beside "what is the best story" to edit that section, typed a comment and saved (no preview, lazy I know) and found my comment inserted in the section above: [19]. Clearly, according to the edit summary, I edited the section above ("sampernandu"), but I'm sure that I clicked beside the "best story" section. I think a clue might come from the fact that another user inserted a new section at the top of the page ([20]) and it could be that the system just counts the number of sections down from the top when you click to edit (I don't know how this works, just guessing) so then my second-from-the-top edit link was really (according to the database) for the section that I ended up editing. But I've never had this happen before this week. Has there been a recent change that would cause this behaviour, and is it something that can/should be fixed? Ivanvector (talk) 15:33, 25 June 2015 (UTC)[reply]

No, it's been possible for years, because as you surmise, the sections are sequentially numbered. At the moment my URL bar shows https://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29&action=edit&section=44 so I'm editing section 44, but if an earlier section is archived or a subsection is added to an earlier section, the numbering will change. This thread above is pretty much the same problem. --Redrose64 (talk) 15:40, 25 June 2015 (UTC)[reply]
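A toy sketch (mine, not MediaWiki's actual code) of why this bites: the edit URL stores only a numeric position, so if the list of sections shifts, the same URL opens a different heading:

```javascript
// Section edit links carry a 1-based position, not a heading name.
function sectionEditUrl(title, sectionIndex) {
  return 'https://en.wikipedia.org/w/index.php?title=' +
    encodeURIComponent(title) + '&action=edit&section=' + sectionIndex;
}

// The sections before someone inserts a new thread at the top...
var before = ['sampernandu', 'what is the best story'];
// ...shift down by one afterwards, though any saved URL is unchanged.
var after = ['new thread'].concat(before);
```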
I see, thanks for the explanation. I guess I just need to be more careful with the preview button :) Ivanvector (talk) 14:24, 26 June 2015 (UTC)[reply]

Combining separates rows of input into a single row to create a table

I’m trying to build an Excel file to help me create tables of basketball statistics.

I’m using the table in Michael_Jordan#NBA_career_statistics as a model.

However, that template, as used in this example, has four rows of data input for each row of data output. Because I will have the stats in an Excel spreadsheet, it will be natural and easy for me to have one row of data for each year. I think it should be possible to use this template with one long row of input data for each year. However, if I simply concatenate the data, it does not work.


I would be grateful if someone could tell me what I am missing. Is it necessary to break up the rows, or am I missing some character or some other way to combine them?

See User:Sphilbrick/basketball stats, where I copied the first year of data for Jordan: as it is in the table, it works, but in the second example, when I concatenate the four rows into a single row, it doesn't work.--S Philbrick(Talk) 18:19, 25 June 2015 (UTC)[reply]

@Sphilbrick: The "|-" lines need to be on their own lines, but you can concatenate everything else, provided you replace each newline-plus-pipe with a double pipe. Example:
{| class="wikitable"
! Year !! Team !! GP !! GS !! MPG !! FG% !! 3P% !! FT% !! RPG !! APG !! SPG !! BPG !! PPG
|-
| 1984–85 || Chicago || 82 || 82 || 38.3 || .515 || .173 || .845 || 6.5 || 5.9 || 2.4 || .8 || 28.2
|}
Jackmcbarn (talk) 18:29, 25 June 2015 (UTC)[reply]
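Since the stats start life in a spreadsheet, the concatenation can also be scripted rather than done by hand. A throwaway sketch (not something anyone in this thread used) that turns one comma-separated line into a single-line table row of the shape described above:

```javascript
// Turn "1984-85, Chicago, 82" into "| 1984-85 || Chicago || 82":
// one leading pipe, then cells joined by double pipes.
function csvLineToTableRow(line) {
  return '| ' + line.split(',').map(function (cell) {
    return cell.trim();
  }).join(' || ');
}
```

Each generated row still needs its own "|-" separator line above it in the wikitext.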
Thanks for the quick response. --S Philbrick(Talk) 19:20, 25 June 2015 (UTC)[reply]
Do you need to use the templates? If not, then you can just drag and drop CSV files (e.g., from Excel) into VisualEditor. Whatamidoing (WMF) (talk) 19:58, 25 June 2015 (UTC)[reply]
That’s an intriguing possibility. I’ve used VisualEditor for simple text but not for tables. On the one hand, the templates automatically create links to seasons and team years. On the other hand, I want to use it for a college team, so the auto-link to NBA seasons is inappropriate and I’ll have to create a new template for that. Second, in my specific case I’m talking about Pepperdine, and there are no team years for them, so perhaps I’ll try generating a table and copying and pasting using VisualEditor.--S Philbrick(Talk) 20:50, 25 June 2015 (UTC)[reply]
I think that's going to work, thanks again.--S Philbrick(Talk) 20:59, 25 June 2015 (UTC)[reply]
Good luck. Don't forget that you (usually) still have to set table class separately in the wikitext editor for copy-pasted tables, and background colors still have to be done manually. If all the lines between cells are invisible after the save, then that's the likely cause. Whatamidoing (WMF) (talk) 22:11, 25 June 2015 (UTC)[reply]

I'm not quite getting it. You mentioned that I needed to set the table class separately which was a hint. Plus you mentioned that the lines between cells might be invisible and they are.

I tried two different things. First, (which is also documented at User:Sphilbrick/basketball_stats) I clicked on the visual editor edit button and simply copied and pasted a table. That table doesn't have shading for the header rows and doesn't have lines between cells. Is it possible to enter a table this way and then set the table class?

As an important aside, because I get a bit annoyed when people come and ask questions they can easily look up, I checked out the visual editor help; specifically:

Help:VisualEditor/User_guide#Editing_tables

That page largely has a placeholder; if there is other documented help, please point it out and I'll be happy to read it. The second thing I tried was to go into VisualEditor and tell it I wanted to insert a table. However, that brings up a blank 4 x 4 table, and if I copy and paste the whole table, it seems to copy it all into the upper-left-most cell.

You also mentioned CSV format. The data I'm working with is in a file I've saved in CSV format but I'm simply copying and pasting it so I don't know that the format means anything. Should I be importing the CSV file? If so, I don't see how.--S Philbrick(Talk) 15:11, 26 June 2015 (UTC) Whatamidoing Ping.--S Philbrick(Talk) 18:34, 26 June 2015 (UTC)[reply]

If you save the spreadsheet as a CSV file, then you can actually drag-and-drop the file into VisualEditor. Steps: Open VisualEditor, pick up mouse, drag the file into the middle of the Wikipedia article, and voilà, it auto-imports immediately. You can set the header cells (select them, and go to the menu in the main toolbar, where you would normally set paragraph vs section headings, and choose "Header cell" rather than the default "Content cell"). Copying and pasting also works for normal tables.
But when you save the page, there's no wikitable class set (that's phab:T85577), so the lines between the cells are invisible. And you can't set or change the class (yet) in VisualEditor. So you have to save (or switch) and go back to it in the wikitext editor, and add the standard class="wikitable" to the first line, after the {| that starts the table to make it display properly to readers.
I apologize for the incomplete documentation. It's on my list. I'm hoping to get something solid written before Wikimania, when a lot of translation work usually happens. Whatamidoing (WMF) (talk) 22:33, 26 June 2015 (UTC)[reply]
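That last manual step is mechanical enough to script, if wanted. A hypothetical helper of my own (assuming the pasted table's opening line is a bare `{|` with no attributes yet):

```javascript
// Add class="wikitable" to the first bare "{|" line of the wikitext,
// the fix needed after saving a table pasted into VisualEditor.
function addWikitableClass(wikitext) {
  return wikitext.replace(/^\{\|\s*$/m, '{| class="wikitable"');
}
```

Tables whose opening line already carries attributes are deliberately left alone by the pattern.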
Dinner calls, will be back soon.--S Philbrick(Talk) 22:37, 26 June 2015 (UTC)[reply]
Thanks, that worked. No problem regarding documentation; I fully understand the challenge of getting everything done at the same time. I mentioned it mainly for two reasons: first, to let you know I wasn't simply begging for help, I was trying to figure it out myself; and second, there was a possibility that there was some documentation and I was simply looking in the wrong place. I know I recently had some issues with references, and after questioning I learned I was looking on the wrong page for the documentation, so I wanted to check in case it existed and I just didn't know where to look.--S Philbrick(Talk) 23:17, 26 June 2015 (UTC)[reply]
Resolved
--S Philbrick(Talk) 23:17, 26 June 2015 (UTC)[reply]

Request to add collapsible sections to a nav template

The template:Oral pathology is getting out of hand, and there are still many links to be added.

Another user has suggested collapsible sections, however after a few hours of messing around, apparently this is beyond my ability.

Please would someone be able to add collapsible sections to this nav template? Please note also that it would be desirable for the overall feel of the template to be retained (colors, font, layout), as these are standardized across medical pages. Many thanks if you can help. Kind regards, Matthew Ferguson (talk) 21:00, 25 June 2015 (UTC)[reply]

@Matthew Ferguson 57:  Done Jackmcbarn (talk) 21:15, 25 June 2015 (UTC)[reply]
This looks ideal. Thanks jackmcbarn. Matthew Ferguson (talk) 22:25, 25 June 2015 (UTC)[reply]
Sample text for the Centralnotice/Sitenotice:
Should Wikipedia run a site-wide banner protesting the proposed amendment for freedom of panorama in EU? Discuss

I'd like to request assistance in putting the existence of the discussion (Wikipedia talk:Freedom of Panorama 2015) to the centralnotice/sitenotice. Community is largely unaware of the discussion. This was done during the SOPA debate as well. -- A Certain White Cat chi? 22:10, 25 June 2015 (UTC)

Have you tried posting this at MediaWiki talk:Watchlist-details? I think these are the people who do it. — Maile (talk) 23:22, 25 June 2015 (UTC)[reply]
No, that's for watchlist notices, which only appear on the watchlist - for this you need either SiteNotice or CentralNotice. SiteNotice only affects the English Wikipedia, and can be updated by admins at MediaWiki:Sitenotice. CentralNotice can add a banner on multiple wikis simultaneously, and requests are handled on Meta. — Mr. Stradivarius ♪ talk ♪ 23:29, 25 June 2015 (UTC)[reply]
I am not familiar with which one would be more relevant. I do not believe I have ever requested something to be added to a mass notification before. Needless to say, this notice should be visible to editors only, since it is a discussion about whether or not to put up a banner as a mass notification. -- A Certain White Cat chi? 07:12, 26 June 2015 (UTC)
Ah, sorry, you were talking about a notice about the discussion, not the proposed notice itself. I misunderstood. A watchlist notice does indeed sound like a good idea for advertising the discussion itself, so I've added it. — Mr. Stradivarius ♪ talk ♪ 08:05, 26 June 2015 (UTC)[reply]
Indeed. We currently have something like Wikipedia:SOPA initiative but hardly anyone is aware of it. -- A Certain White Cat chi? 09:10, 26 June 2015 (UTC)
Right. So again, I would recommend you put this request on MediaWiki talk:Watchlist-details to have them post a watchlist notice linking to the discussion. Good luck. — Maile (talk) 12:21, 26 June 2015 (UTC)[reply]
I see it's on the watchlist right now. — Maile (talk) 12:25, 26 June 2015 (UTC)[reply]
Yep, that's what I meant when I said I added it. :) — Mr. Stradivarius ♪ talk ♪ 12:28, 26 June 2015 (UTC)[reply]
(edit conflict) Just for the sake of completeness, I should also mention that we have the option of restricting a SiteNotice to logged-in editors only. That's achieved by adding the SiteNotice as normal, and then adding <p></p> or similar to MediaWiki:Anonnotice. That notice is for anonymous editors only, and SiteNotice functions differently depending on whether it is enabled or not. — Mr. Stradivarius ♪ talk ♪ 12:27, 26 June 2015 (UTC)[reply]

When would the discussion be sufficient/adequate to put the banner on site notice for guests? There will be no point to the banner if it is too late. -- A Certain White Cat chi? 20:57, 26 June 2015 (UTC)

Deprecated cite parameters

When creating the article Jet mill I used Wikipedia citation tool for Google Books to help me get the citations formatted exactly correct. Now I see the page is on Pages containing cite templates with deprecated parameters. Unfortunately, nothing tells me which parameters are deprecated, but I see that the citation tool used the "coauthor" parameter, which happens to be on the no-no list.

I can guess about which parameters to fix, but I know of no way I can quickly verify that I have corrected all the problems. What can I do to verify that my citations are correct?

How can I get the citation tool fixed?

I will be watching this space. Comfr (talk) 03:06, 26 June 2015 (UTC)[reply]

As a guess, |pages= contains a dash and a second number does not come after the dash. In any of the citations. The rest of each of the citations look fine. --Izno (talk) 13:28, 26 June 2015 (UTC)[reply]

Image previews

Can anyone explain why, in the infobox at Eden Land, hovering over the A Fool Who'll article wikilink shows the image on that article .... but hovering over the other album in that chronology (Our Swan Song) fails to show the image? Similarly, at A Fool Who'll, the wikilink to the previous album in the chronology (Eden Land) also fails to show the image as a preview. Just hovering over the wikilinks in the thread you are reading also shows the difference between articles in which the image appears and those in which it doesn't appear. I create and edit many album articles, but I don't recall seeing any where the primary image in wikilinked articles fails to appear as a preview. BlackCab (TALK) 11:53, 26 June 2015 (UTC)[reply]

Hi BlackCab, using popups I am able to see album covers of Eden Land and Our Swan Song just fine when I hover over the links, both here and in the respective article pages. screenshot - NQ (talk) 12:04, 26 June 2015 (UTC)[reply]
Baffling. I ticked the Popups box (it hadn't been ticked) and it is still quite selective about what it shows as a preview. Those two album articles are the first I've noticed that don't reveal the image on preview. BlackCab (TALK) 12:26, 26 June 2015 (UTC)[reply]
@BlackCab: I just noticed that you added the image to the article quite recently. You probably just need to clear your browser cache. Regards - NQ (talk) 12:29, 26 June 2015 (UTC)[reply]
Alright. You're talking about Hovercards, which I did not have enabled. You're right - not displaying. My mistake. - NQ (talk) 12:34, 26 June 2015 (UTC)[reply]
I added the A Fool Who'll image an hour or two ago, but the other two articles, with images, have been in existence for some years. I wondered whether it was the way the images had been uploaded, so I replaced the FUR template on one, but with no change. BlackCab (TALK) 12:54, 26 June 2015 (UTC)[reply]
The only difference I can see is the dimensions of the images. According to mw:Extension:Popups, Hovercards has a hard dependency on mw:Extension:PageImages, which "uses the first non-meaningless image used in the page." Maybe it excludes lower resolution images? - NQ (talk) 13:44, 26 June 2015 (UTC)[reply]
(edit conflict) Maybe the 200×200 image in Our Swan Song is too small for Hovercards. I don't know the details but the description to the right of mw:Talk:Beta Features/Hovercards says: "First image is not always used due to image proportions, size, or quality". In my limited tests, Hovercards scaled larger images down to 300px and didn't display smaller images. PrimeHunter (talk) 13:46, 26 June 2015 (UTC)[reply]
Yes. Replaced it with a slightly higher resolution cover [21] and now it loads fine. - NQ (talk) 13:51, 26 June 2015 (UTC)[reply]

Popups

Is there an issue with pop-ups? Mine aren't working at the moment. I checked my preferences to make sure it wasn't accidentally turned off.--S Philbrick(Talk) 14:29, 26 June 2015 (UTC)[reply]

@Sphilbrick: Works fine for me. Do you have hovercards enabled? - NQ (talk) 14:41, 26 June 2015 (UTC)[reply]
When Hovercards first came out, I tried it, but I was getting overlapping information from that gadget and pop-ups, which was quite confusing, so I turned it off. I just tried turning it on, and when it was on, I saw Our Swan Song but not much else. I just turned it off, so now the Swan Song image doesn't appear and I'm not getting much of anything. If I hover over your name I get a small box that says "user: NQ" but none of the links to contributions etc.--S Philbrick(Talk) 14:53, 26 June 2015 (UTC)[reply]
Sometimes popups freeze when used alongside hovercards. Since you seem to have it disabled, perhaps another script in your common/vector/monobook.js is causing a conflict? - NQ (talk) 15:07, 26 June 2015 (UTC)[reply]
Popups are now working. I've done nothing special, unless enabling hovercards and disabling it counts. FTR, popups were not working immediately after that. I'm going to mark this as resolved, although that doesn't mean the problem was identified.
Resolved
--S Philbrick(Talk) 18:33, 26 June 2015 (UTC)[reply]

Formatting after table

In this edit, I tried to include two simple-format tables similar to the example at Help:Table#Multiplication table. However, when I previewed the edit, the paragraph immediately following the first table appeared to the right of the table (without a gutter in between) and not below.

I was able to work around this by inserting a {{-}} after the table, but why did I need to? Was my table markup wrong in some way, or what?

--70.49.171.136 (talk) 15:56, 26 June 2015 (UTC)[reply]

Your first table uses the attribute align="left" which floats the table. All you need do is omit that attribute. --Redrose64 (talk) 16:06, 26 June 2015 (UTC)[reply]
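To illustrate the point about align="left" (table contents here are invented; the behaviour is the same as at Help:Table):

```wikitext
<!-- floats the table, so following text wraps beside it -->
{| class="wikitable" align="left"
|-
| 1 || 2
|-
| 3 || 4
|}

<!-- without the attribute, following text starts below the table -->
{| class="wikitable"
|-
| 1 || 2
|-
| 3 || 4
|}
```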
Oh, thanks. I ass-u-med that the attribute applied at the level of cell content. --70.49.171.136 (talk) 01:10, 27 June 2015 (UTC)[reply]

Date problem in a reference

On this page, journal dates with the format yyyy-mm give an error message.— Vchimpanzee • talk • contributions • 21:39, 26 June 2015 (UTC)[reply]

Being interpreted as an ambiguous date? See Help:CS1 errors#bad date. Nthep (talk) 22:00, 26 June 2015 (UTC)[reply]
Yes, yyyy-mm is disallowed as ambiguous by MOS:BADDATEFORMAT and Help:CS1 errors#bad date. The linked Signpost page has 2015-02 and 2015-04 which seem clear but would 2003-04 have meant 2003-2004 or April 2003? February 2015 and April 2015 are allowed. PrimeHunter (talk) 22:18, 26 June 2015 (UTC)[reply]
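A short example of the ambiguity described above (citation details invented for illustration):

```wikitext
<!-- flagged: 2003-04 could mean April 2003 or the range 2003–2004 -->
{{cite journal |title=Example |journal=Example Journal |date=2003-04}}

<!-- unambiguous -->
{{cite journal |title=Example |journal=Example Journal |date=April 2003}}
```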