Wikipedia:Village pump (technical)

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bug reports and feature requests should be made in Phabricator (see how to report a bug). Bugs with security implications should be reported differently (see how to report security bugs).

Newcomers to the technical village pump are encouraged to read these guidelines prior to posting here. Questions about MediaWiki in general should be posted at the MediaWiki support desk.


HTTPS by default

Hi everyone.

Over the last few years, the Wikimedia Foundation has been working towards enabling HTTPS by default for all users, including anonymous ones, for better privacy and security for both readers and editors. This has taken a long time, as there have been different aspects to take into account. Our servers haven’t been ready to handle it, and the Wikimedia Foundation has had to balance sometimes conflicting goals: giving access to as many people as possible while caring for the security of everyone who reads Wikipedia. This has finally been implemented on English Wikipedia, and you can read more about it [link-to-blog-post here] here.

Most of you shouldn’t be affected at all. If you edit as a registered user, you’ve already had to log in through HTTPS. We’ll keep an eye on this to make sure everything is working as it should. Do get in touch with us if you have any problems logging in or editing Wikipedia after this change, or contact me if you have any other questions. /Johan (WMF) (talk) 12:43, 12 June 2015 (UTC)

There's a blog post at the Wikimedia Foundation blog now. /Johan (WMF) (talk) 13:09, 12 June 2015 (UTC)
To Johan (WMF): – You have to know what a real drag this is. Not only do I want a CHOICE in the matter (and would continue to choose HTTP as long as the edit summary field's autofill function does not work when I'm on the HTTPS server), but you should also consider what Redrose64 said above, that some users are unable to use HTTPS connections. The part in the blog post about "all logged in users have been accessing via HTTPS by default since 2013" is just not true, either. We've been given a choice up until now, and I for one do not want to give that up. I want to be able to CHOOSE whether I'm on the HTTP server or the HTTPS server. – Paine  14:21, 12 June 2015 (UTC)
Yes, we do know. The answer I was given when I asked about this is that any form of opt-out would also leave potential security risks in our implementation which make it difficult to safeguard those who do not opt-out. Because of this, we’ve made implementation decisions that preclude any option to disable HTTPS, whether logged in or not. This renders the current opt-out option ineffective, and the option will be removed at a later date after we’ve completed the transition process. /Johan (WMF) (talk) 14:27, 12 June 2015 (UTC)
You have had to use HTTPS to access the site when logging in as it's been used for the login process, though. /Johan (WMF) (talk) 14:30, 12 June 2015 (UTC)
It's evidently a weighty issue. And I do realize that I don't edit WP in a vacuum, that I must eventually accept this situation for the good of all. And frankly, I don't have a problem with having to stay on HTTPS as pertains to the "big picture". My problem is very basic and concerns the fact that I no longer have a drop-down list from which to pick my edit summaries, because that function is thwarted by my IE-10 when I am on any HTTPS server. If that little quirk could be fixed, I'd be a happy camper whether I'm on a secure server or not. – Paine  15:47, 12 June 2015 (UTC)
I'm not very familiar with IE myself, but I'll ask around and see if anyone knows a simple fix. /Johan (WMF) (talk) 16:12, 12 June 2015 (UTC)
IE10 won't enable autocomplete on HTTPS pages when the "Cache-Control: no-cache" HTTP header is set (which Wikipedia does). Changing it from "no-cache" to "must-revalidate, private" would allow autocomplete, but may have other unintended consequences. --Ahecht (TALK PAGE) 16:34, 12 June 2015 (UTC)
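The header change Ahecht suggests can be sketched in a few lines. The following is a hypothetical Rack-style Ruby handler, purely my own illustration and not Wikipedia's actual server configuration: it serves a form page with "Cache-Control: must-revalidate, private", the value that IE10 accepts while still offering autocomplete over HTTPS.

```ruby
# Hypothetical Rack-style handler illustrating Ahecht's suggestion: serve
# the page with "Cache-Control: must-revalidate, private" rather than
# "no-cache", since IE10 refuses to autocomplete form fields over HTTPS
# when it sees "no-cache". A sketch only, not Wikipedia's real config.
EDIT_PAGE = lambda do |env|
  headers = {
    'Content-Type'  => 'text/html',
    # Using 'no-cache' here would suppress IE10's autocomplete on HTTPS
    'Cache-Control' => 'must-revalidate, private'
  }
  [200, headers, ['<form><input name="wpSummary"></form>']]
end

# Usage: a Rack call returns the usual [status, headers, body] triple
status, headers, = EDIT_PAGE.call({})
```

The "unintended consequences" caveat is real: "private" keeps responses out of shared caches, which changes caching behaviour for proxies, not just for IE10.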
It seems like IE 11 does not have this problem, and all users would eventually be required to update to it by the end of the year (by Microsoft). Did you try IE 11? Tony Tan · talk 02:09, 14 June 2015 (UTC)
Yes, Tony Tan, I upgraded to Win8.1 and IE-11 yesterday and was pleased to pass it on that it has given me back what I had lost with the older browser and Windows software. Thank you very much for your kind thoughts and Best of Everything to You and Yours! – Paine  02:26, 14 June 2015 (UTC)
I also see I am stuck with using HTTPS, which is a nuisance and a bother, as I no longer have a drop-down list from which to pick my edit summaries. How can a drop-down list be re-implemented? It was the only degree of automated help we had in what is otherwise an unfriendly article editing environment. Hmains (talk) 17:44, 12 June 2015 (UTC)
So how do I use the website in http then? I do not want extra security to protect me. I don't need protecting. This is a nonsense. Why am I being forced to use https even though I don't want to use it? There was an opt out. The opt out has been removed despite the fact that those using the opt out very clearly want to opt out. — Preceding unsigned comment added by 86.18.92.129 (talk) 19:46, 12 June 2015 (UTC)
Hi, the explanation I've been given is that any form of opt-out would also leave potential security risks in our implementation which make it difficult to safeguard those who do not opt out. /Johan (WMF) (talk) 19:53, 12 June 2015 (UTC)
I'll try to figure out if there is a solution to that, Hmains. /Johan (WMF) (talk) 19:53, 12 June 2015 (UTC)
Johan (WMF), Re: "the explanation I've been given is that any form of opt-out would also leave potential security risks in our implementation which make it difficult to safeguard those who do not opt out", would you be so kind as to ask for a one-paragraph explanation as to why they believe this to be true and post it here? Not a dumbed-down or simplified explanation, but a brief, fully technical explanation for those of us who are engineers? Thanks! --Guy Macon (talk) 20:49, 12 June 2015 (UTC)
Sure. Just so you know, they're getting a lot of questions at the moment, as well as handling the switch for the hundreds of Wikimedia wikis that aren't on HTTPS yet, but I'm passing on all questions I get that I can't answer myself. /Johan (WMF) (talk) 21:18, 12 June 2015 (UTC)
The engineering-level explanation is that in order to help prevent protocol downgrade attacks, in addition to the basic HTTPS redirect, we're also turning on HSTS headers (gradually). The tradeoff for HSTS's increased protections is that there's no good way to only partially-enforce it for a given domainname. Any browser that has ever seen it from us would enforce it for the covered domains regardless of anonymous, logged-in, logged-out, which user, etc. Once you've gone HSTS, opt-out just isn't a viable option. /BBlack (WMF) (talk) 21:56, 12 June 2015 (UTC)
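BBlack's point can be made concrete with a toy sketch (my own illustration, neither browser nor WMF code) of the pinning logic HSTS mandates: once a browser has seen the Strict-Transport-Security header for a host, it rewrites every later http:// URL for that host to https:// until max-age expires, for every user of that browser profile.

```ruby
# Toy model of HSTS enforcement: after one Strict-Transport-Security
# response header is observed for a host, all plain-HTTP URLs for that
# host are upgraded until the recorded max-age expires. There is no
# per-user or per-request way to bypass the upgrade, which is the whole
# point of the mechanism (and why a server-side opt-out stops working).
class HstsStore
  def initialize
    @pins = {} # host => expiry Time
  end

  # Record a header value such as "max-age=31536000"
  def observe(host, header)
    @pins[host] = Time.now + $1.to_i if header =~ /max-age=(\d+)/
  end

  # Rewrite http:// to https:// for pinned, unexpired hosts
  def upgrade(url)
    host = url[%r{\Ahttp://([^/]+)}, 1]
    return url unless host && @pins[host] && @pins[host] > Time.now
    url.sub('http://', 'https://')
  end
end
```

After `observe('en.wikipedia.org', 'max-age=31536000')`, any `upgrade('http://en.wikipedia.org/...')` call yields the https:// form, which is why, once the header is cached in browsers, serving plain HTTP again just produces errors rather than an opt-out.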
see the answer above. /Johan (WMF) (talk) 22:12, 12 June 2015 (UTC)
To Johan (WMF): I don't see what the problem is: create a cookie named something like IAcknowledgeThatHttpIsInsecure which can be set from a dedicated page: if this cookie is set, do not send the Strict-Transport-Security (HSTS) header and do not force redirect to HTTPS. Yes, people who have received the Strict-Transport-Security header will get a browser error, but I assume all browsers that implement HSTS allow some way for the user to manually override or ignore it (something like "I know what I'm doing", then set a security exception); and the users can be warned in advance on the dedicated page that sets the cookie. If you're afraid an attacker will set the cookie on an unsuspecting user (through a fake Wikipedia page) and thus bypass HSTS, please note that (1) this attack always exists anyway, because an attacker who can do this can set up a fake HTTP wikipedia.org proxy domain anyway (in both cases, it will impact those users who did not receive the HSTS header), and (2) you can mitigate the attack by letting the cookie's content contain a MAC of the client's IP address (or some other identification string), with a MAC key that Wikimedia keeps (and the cookie is honored only if the MAC matches). You might also display a warning in the HTML content if the cookie is set, reminding the user of its existence and impact, and giving a link to remove it should the user change their mind. The performance cost of all of what I just described should be completely negligible in comparison with the performance cost of doing HTTPS in the first place. And this should all be very simple to implement. On a personal note: I promise to donate 150€ to the Wikimedia foundation (adding to the 100€ I donate about once a year) if and when a way to access it through HTTP using the former URLs is brought back; conversely, until this happens, I will be too busy to consider how I can work around this inconvenience to contribute either financially or by editing articles. 
(I could also go on to emphasize how, as a cryptographer, I think the idea of forcing users to go through HTTPS to read publicly accessible and publicly editable information is absolute idiocy, but the cryptophile zealots have made up their mind already.) --Gro-Tsen (talk) 19:43, 13 June 2015 (UTC)
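The MAC'd cookie Gro-Tsen proposes is straightforward to sketch. The following is my own illustration (all names and the key handling are made up, and this is emphatically not WMF code): the cookie value is an HMAC of the client's IP under a server-side secret, so an attacker who can plant cookies cannot forge a valid opt-out for someone else's address.

```ruby
# Sketch of the proposed opt-out cookie: the value is an HMAC of the
# client's IP under a key only the server knows. The opt-out (skipping
# the HSTS header and the HTTPS redirect) is honored only when the MAC
# matches the connecting IP. Illustrative only, not WMF code.
require 'openssl'

SERVER_MAC_KEY = OpenSSL::Random.random_bytes(32) # kept secret server-side

def issue_optout_cookie(client_ip)
  OpenSSL::HMAC.hexdigest('SHA256', SERVER_MAC_KEY, client_ip)
end

# A production implementation would use a constant-time comparison here
# rather than plain ==, to avoid timing side channels.
def optout_valid?(cookie_value, client_ip)
  issue_optout_cookie(client_ip) == cookie_value
end
```

Note this sketches only the forgery-resistance half of the argument; it does nothing about the separate objection that browsers which have already cached the HSTS pin will refuse the downgrade regardless of any cookie.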
OK, I'll give up on trying to solve other people's problems with HTTPS and focus on mine: to this effect, do you (or anyone else) know if there exists at least some reliable transparent Wikipedia mirror on HTTP (perhaps something like "wikipedia-insecure.org") that allows both reading and editing and that I could use (by spoofing my DNS to point there) without the trouble of setting up my own? (I hope we can agree that a mirror served under a different domain cannot weaken security, since anyone can set up such a thing.) I'll find a way to disable HSTS on my browser somehow. --Gro-Tsen (talk) 23:02, 14 June 2015 (UTC)

It's worth giving some background here to understand the need for security. One of last year's revelations was that Wikipedia editors were being targeted by the NSA. So if you weren't using HTTPS (and probably even if you were), you were likely helping to build a database profile on your reading habits. But worse, your e-mail and other communications were probably also targeted for follow-up simply because you edit Wikipedia. What difference does it make? Nobody in the general public knows! The collected information is used in secret fashion in secret ways by undisclosed people. But there are real dangers to you. Supposedly, the information is being used only for national security related to terrorism. That's not true, however, because it is known from the same leaks that it is being used for more than that, for instance in the war on drugs. And it is also known that collected information is sometimes abused for personal reasons by those who have access to it. The use could also include (and probably does) helping to decide whether you get security clearance for a future dream job. It could potentially even be used to sabotage a hopeful's political career or, in general, help silence people with oppositional points of view. In other words, this information has the potential to be used by people now or in the future to negatively affect your life and destiny without you even knowing. The WMF has decided (and rightfully so) that there's a need to protect users from dangers that they might not even be aware of. When it comes to this, many people say things like "I'm not doing anything wrong" or "I've got nothing to hide", but the problem is that you can't say you're doing nothing wrong, because it's third parties who determine that, not you. And you do have stuff to hide even if you are a completely law-abiding citizen. This issue affects you even if you think it doesn't. 
People are talking above about certain countries that do not allow HTTPS and how IP users there should not be forced to use HTTPS because Wikipedia would be blocked for them. Well, those are great examples where a government being able to see what you are reading could get you arrested, imprisoned, or worse. The use of HTTPS is only a minor step in combating the abuse of government-level surveillance, but it's a step in the right direction. @Johan (WMF), it'd be interesting to know why the implementation cannot safely handle an opt-out, because naively I don't see why the one should affect the other. Maybe this exposes a flaw in the implementation. Jason Quinn (talk) 21:17, 12 June 2015 (UTC)
Hi Jason Quinn, thanks. I'm passing on the question to someone better suited to answer it than I am. /Johan (WMF) (talk) 21:20, 12 June 2015 (UTC)
On January 12, 2016, Windows 7 users will be required to install Internet Explorer 11 and Windows 8 users will be required to update to Windows 8.1 anyway, so you don't need to worry about the autocomplete problem in IE10. That problem doesn't occur in IE11. GeoffreyT2000 (talk) 21:26, 12 June 2015 (UTC)
Wikipedians were NEVER targeted by the NSA; why would they be? I don't know where you people are getting your information from, and if some wikipedian came along and said that s/he was being targeted, then s/he was either being paranoid (like 90% of Americans) or s/he is doing something "illegal", so it's in the best interest of wikipedia to report that person to the NSA, not ENFORCE this stupid idea....Again, Wikipedia is an INTERNATIONAL website, it's NOT only for AMERICA....why should the rest of the world have to pay for the fears of a few paranoid psychopaths that are better off in jail..oh and BTW, HTTPS has never been and will NEVER be secure, the "s" in https never stood for secure..., Why would you allow this?-- 21:43, 12 June 2015 (UTC)
At the right is the main slide itself so you and others can decide for yourselves what it means. The slide explicitly uses Wikipedia as an example of the websites that they are "interested in" and confirms that they are interested in "typical users" of such websites. Given the context of the slide (exploiting HTTP for data collection), it is unreasonable to assume readers and editors were not being targeted. We were all targeted, and all our traffic to and from Wikipedia would have been caught up in the named NSA collection programs. It would be naive to think otherwise. If there is one thing that's been learned in the last year, it's that "if it can be done, it is" kind of summarizes what's been going on, and "mitigated" does not describe their collection techniques. As for other countries being denied access by the global removal of HTTP support, that is a point that should be debated. But I already mentioned that there are countries where the use of HTTP might literally allow Wikipedia readers to be executed for reading the "wrong" stuff. The meaning of a "free" encyclopedia would have to be discussed, and the dangers of access in these countries would have to be considered and weighed in such a debate. And, regardless of how you perceive the US, it's possible the US could become as bad. Jason Quinn (talk) 22:30, 12 June 2015 (UTC)
It is certainly a bit of a backtrack. Blethering Scot 22:43, 12 June 2015 (UTC)
The real win here (imo) is making Firesheep style attacks totally impossible and thwarting non-state sponsored, and lower budget state sponsored adversaries. One assumes that the NSA will probably just slurp up the unencrypted inter-data center links (For those of you not close enough to use eqiad directly. Imagine a world where the sum of human knowledge fully deployed IPSec). Given the funding level of the NSA, I expect that they probably have traffic analysis capabilities to be able to tell who is visiting a page of interest (especially for a site like wikipedia, which imo seems like the perfect target for a traffic analysis type of attack against a TLS secured connection). However https does make it much harder to simply collect it all, and any measure that increases the cost of ubiquitous surveillance should be applauded. Bawolff (talk) 22:50, 12 June 2015 (UTC)
All I see, Jason, is a bunch of American websites.....Mate, if the NSA wants to spy on you, it WILL SPY on you, you don't have to eff up wikipedia for them to stop, and basically, by forcing https onto wikipedia, would you not think that it will make the NSA more interested? Because only a person with something to hide would do this ..So Jimmy loses his battle with the NSA and this is what he comes up with? Moving to https, which honestly is just as secure as http...After this was defeated last year, I honestly felt like we lived in a democracy where the voice of the people was heard and adhered to........back to communist wikipedia we go..yeah Jason, executed for reading the wrong stuff on wikipedia like How to build a Bomb or How to join ISIS......oh right, we don't have those pages cause wikipedia is NOT a terrorist organization...-- 22:59, 12 June 2015 (UTC)
(a) Non-Americans arguably have got more to fear from NSA surveillance; the legal framework allows for the collection of great swathes of foreign data. (b) The decision was made by Wikimedia, which is in no way a democracy. (c) Do actually read up on the issues you're arguing. Alakzi (talk) 23:29, 12 June 2015 (UTC)
Yeah, you really shouldn't let your anger and/or frustration allow such bullshit from your fingers and keyboard, Stemoc. "Communist Wikipedia"? no more than an airline practices communism when they check for bombs and weapons as we board – no more than when we have to pass through a building security point that helps to protect us while we're on the premises - is it communism to own a .357 and be ready to shoot a criminal who tries to steal from you? or to hurt your loved ones? Privacy, security, if you don't try to work with structures that protect them, then you're no better than the criminal, terrorist or agency that tries to circumvent them. Best of Everything to You and Yours! – Paine  00:03, 13 June 2015 (UTC)
Calm down, lady, this is just an encyclopedia, not your eBay, PayPal, bank account or your social networking sites, where privacy is a MUST for safety reasons.. the MAIN reason this site was created was to allow users to browse and edit anonymously so no one really knows your true identity or location; if you are using your real name and stuff, I'd advise you to invoke the 'Vanish' policy and start anew or get your account renamed. I think people keep forgetting that this is NOT like every other site they visit, in fact wikipedia is based on facts, and if you are scared to write down facts on articles because you fear the NSA then I really really pity you... only crooks fear the government....let that be known...and p.s., I'm brown and I don't give a shit about the NSA...as usual, the wiki revolves around America...pathetic.-- 02:47, 13 June 2015 (UTC)
@Stemoc: Out of curiosity, what do you think about the following hypothetical situation: someone (let's say Alice) thinks she might have <insert weird disease here>. Alice wants to look it up on Wikipedia, but is worried that her ISP is tracking which websites she visits and will sell the information to her insurance company (or whoever else is the highest bidder), who in turn will jack up the price of her insurance beyond what she can afford, on mere suspicion of having the disease, even if she doesn't. Is that a legitimate reason to want/need privacy when browsing Wikipedia? You may trust the government (for some reason), but do you really trust your ISP? What about the guy sitting across the room at the Starbucks on the same wifi network? Bawolff (talk) 06:12, 13 June 2015 (UTC)
Bawolff, again, another "American" problem....I have an IDEA: why not make a US version for https? Brilliant. Now, e.g., anyone that wants to be logged in on https, log in at https://us.en.wikipedia.org and everyone else at the old link at http://en.wikipedia.org; this will solve the problem once and for all. Why "force" everyone onto https? It's the same as pushing everyone over the cliff and telling them to swim instead of building a bridge to get across; those who can't swim or have health (ISP) problems will surely drown..I fought this the last time it happened and I will fight it yet again..-- 11:43, 13 June 2015 (UTC)
+1. Live in a country with universal health care...or has privacy laws...I am an IT professional with a Computer Science degree and 30+ years of experience. I know the implications of not using HTTPS, and I also know the NSA can bypass that easily if they care to. This (not allowing an opt-out) is total garbage and a false sense of security...˥ Ǝ Ʉ H Ɔ I Ɯ (talk) 11:56, 13 June 2015 (UTC)
Now cut that out, buddy, or I'll hit you with my purse! Hey, waitasec – how did you know I'm a "lady"? You been hackin' into my HTTP??? – Paine  12:46, 13 June 2015 (UTC)
Little old me? hacking? NEVAH!......-- 17:01, 13 June 2015 (UTC)
@Bawolff: We're not done with all of our plans for securing traffic and user privacy. This will be covered in deeper detail in a future, engineering-focused blog post after the initial transition is complete. But just to hit some highlights in your post: we do have an active ipsec project, which is currently testing on a fraction of live inter-DC traffic. We're also looking into what we can do for some forms of traffic analysis, e.g. mitigating response length issues. We plan to move forward as soon as possible on heavier HTTPS protection mechanisms like HSTS Preloading, HPKP, etc as well. We're committed to doing this right, we're just not done implementing it all yet :) -- BBlack (WMF) (talk) 01:53, 13 June 2015 (UTC)
I appreciate there's more to come, and I'm happy to see that it's (finally) happening. However, I think it's important to give our users the full picture, the good and the bad. HTTPS is great for confidentiality and integrity. It's somewhat OK for providing privacy, particularly against a weak adversary, and it makes bulk matching of packets against fixed strings in the body of the request impossible (which is quite important given the selective censorship threat Wikipedia faces). But it's questionable how well it would hold up against a powerful nation-state actor trying to simply de-anonymize you. It certainly wouldn't hold up against a targeted attack, and it's questionable whether it would prevent a broader attack (although it would certainly make a broad attack quite a bit more expensive to pull off). I'm also quite doubtful you can really foil traffic analysis by padding TLS sessions, unless you use extreme amounts of padding, far past what is acceptable performance-wise. p.s. The ipsec project link is limited to those in the WMF-NDA group, so I can't see it (I'm in the security NDA group only). However, I can certainly see in puppet that IPSec is enabled on a small number of servers, and I noticed it was mentioned when I was reading the WMF quarterly report. Bawolff (talk) 03:03, 13 June 2015 (UTC)
It is great to see that the WMF is finally switching to HTTPS by default. I look forward to seeing Wikipedia send HSTS (includeSubDomains, long max-age, preload) and HPKP headers! However, phab:T81543 seems to have restricted access. Thanks, Tony Tan · talk 02:39, 14 June 2015 (UTC)
One thing I just noticed that is really nice: ru.wikipedia.org has an A+ on the SSL Labs test [1]. Here's to looking forward to that for all Wikimedia domains once HSTS is turned up :D Bawolff (talk) 05:20, 14 June 2015 (UTC)

Not such a difficult fix

Just want to make sure that everyone catches what contributors TTO (at phab:T55636) and GeoffreyT2000 (above) have been kind enough to share with us. Several of the above users may be happy to hear that I can confirm what TTO and GeoffreyT2000 say about Win8.1 and IE-11. I just upgraded, and the new software thus far seems to work a lot better under HTTPS than my old Win8.0 and IE-10 did. Forms do indeed autofill, which means that my old drop-down boxes with my edit-summary choices do show up again. I still sympathize with all the users above who feel they've lost something with this change; however, like I said, we don't edit in a vacuum any more than we become passengers on aircraft all by ourselves. As an analogy, airport security can be a real hassle and a serious time cruncher on occasion, but compare that to what has happened, and still could happen, and few of us would want to give up the security that keeps our flights safe. Same for the conversion to HTTPS – it is quite the hassle for some, but the very real need to protect our privacy and security is an overwhelming priority, in my humble opinion. So, /Johan (WMF), you don't have to find an IE fix for me, and I greatly appreciate the fact that you said you would! I also deeply thank the rest of you for your enlightening responses here. Best of Everything to You and Yours! – Paine  23:32, 12 June 2015 (UTC)

Thank you. I'll still at least ask around to see if there's anything I can do. We want editing Wikipedia to be as simple as possible, no matter which browser people use. If one is OK with upgrading to IE 11, that's probably the best solution, though. /Johan (WMF) (talk) 01:25, 13 June 2015 (UTC)
So, here's what I got on this issue so far. Yes, there appears to have been an open Phabricator ticket since 2013 reporting this issue, and no, given the number of tickets, the team that dealt with the transition wasn't aware of it. We'd obviously have preferred to be. Sorry, and I really mean it. Causing trouble for people who edit Wikipedia is the opposite of what we want to achieve. We're still in the process of transitioning (English Wikipedia was one of the first to switch over, and there are more than 800 Wikimedia wikis) and I haven't found an easy fix so far (except for upgrading to Internet Explorer 11), as this isn't so much a bug as how Internet Explorer 10 intentionally behaves. The team will be able to focus more on this as soon as the HTTPS transition is complete. We're not ignoring the issue. /Johan (WMF) (talk) 12:10, 16 June 2015 (UTC)

This broke my bot :( I'm using the RestClient library to make API requests, and it apparently is unable to verify the certificate. I'm getting the error SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (RestClient::SSLCertificateNotVerified). Surely that's an issue on my end? I can force it to not verify the certificate, but then what's the point of using HTTPS? 18:32, 13 June 2015 (UTC)

I took a quick look, and it seems that this library has a way to pass to the SSL library the CA certificates to be used for verification. It probably just doesn't have a default set of CA certificates. The solution would be to give it a copy of the correct root certificates to use. --cesarb (talk) 21:26, 13 June 2015 (UTC)
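To make cesarb's suggestion concrete, here is the same fix sketched with Ruby's stdlib Net::HTTP: point the TLS layer at an explicit CA bundle so the server certificate can be verified. With the rest-client gem itself, the analogous option is, as far as I can tell, `:ssl_ca_file` on `RestClient::Resource`. The bundle path below is illustrative; use wherever your OS actually keeps its root certificates.

```ruby
# Sketch: give the TLS layer an explicit CA bundle instead of disabling
# verification. This keeps the point of HTTPS (the server really is who
# it claims to be) while fixing "certificate verify failed" errors caused
# by a missing default CA set.
require 'net/http'
require 'openssl'
require 'uri'

uri  = URI('https://en.wikipedia.org/w/api.php')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER          # verify, don't disable
http.ca_file     = '/etc/ssl/certs/ca-certificates.crt' # illustrative path

# http.get(uri.request_uri) would now verify the server cert against
# that bundle before sending the API request.
```

Disabling verification (`VERIFY_NONE`) would make the error go away too, but as the original poster notes, that defeats the purpose of the switch to HTTPS.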

Who loses

Hi, while you are here I would like to have something specific clarified. As always with these sorts of major changes, most people win and some people lose. I personally am iffy about the distribution of relative ideological and technical interest in and need for this particular project, but I accept that that merely puts me in the middle of the Wikipedian spectrum, between people like TomStar81, who wants nothing to do with the ACLU, and people like Jason Quinn, who thinks it keeps us from being roasted on an open flame.

However in these sorts of changes I care less about who wins, because that's obvious. I can read the spam-ish blog post to find that out. I am more interested in the question: who loses?

Who does HTTPS hurt? Can we come to an understanding of this? Surely every change, no matter the size, hurts some stakeholders. 03:58, 13 June 2015 (UTC)

1. Can someone clarify what is going on with the IE 10 issues? Was the WMF aware of this problem? Is it really that significant?
2. Can someone clarify what the effect will be in mainland China? Can you quantify the impact there?

Thank you. 04:00, 13 June 2015 (UTC)

Hi, good question that deserves a good answer, not just what I can come up with off the top of my head. I'll ask around about a few things to make sure I (or maybe someone else; I'll spend much of this weekend travelling) can reply properly. /Johan (WMF) (talk) 04:19, 13 June 2015 (UTC)
Great! Thank you. I think this discussion so far has been high on posturing, low on content (speaking about the community response here), and I'd love to see a frank cost-benefit analysis from the WMF on this matter, and an associated community critique. After all, this is the communication that the volunteers so crave. Not, frankly, blog announcements. 04:44, 13 June 2015 (UTC)
I'd also like to see more transparency on the WMF's analysis. Everything seems to be shrouded in unnecessary secrecy. On the subject of China: I'm not that familiar with the situation, but according to https://en.greatfire.org/search/wikipedia-pages there seems to be conflicting info on whether HTTPS is blocked. The greatfire website says https is not blocked, but their actual test data seem to suggest that both normal http and https on zh have been blocked starting May 19 [2] (the switchover for zh to https happened on June 9, so the change in blocking status seems unrelated) but en is fine (both https and non-https). There are about 324 pages that are censored on the HTTP version, mostly on zh; however, on en we had Students for a Free Tibet, Tiananmen_Papers, Tiananmen_square_massacre, and Tibetan_independence_movement blocked. Switching to HTTPS forces China to decide either to block all of Wikipedia or none of it (possibly they can distinguish between languages and block, say, all of zh but not en; I'm not that familiar with SNI, but my impression is the domain is sent in the clear). FWIW, greatfire strongly advocates switching to https on zh wikipedia [3], although they are obviously a special interest group that believes Chinese censorship needs to be fought tooth and nail. I imagine the situation is similar for Russia, which rumor had it (although I've not seen direct sources for this) was trying to censor pages related to Ukraine on ru, but can't anymore due to https. The other impact is that it makes it harder (but certainly not impossible, depending on their traffic analysis capabilities) for China to generate lists of people who try to visit certain politically sensitive topics (it's unclear if they actually do that; I haven't heard of any evidence that they do, but it wouldn't surprise me). 
Other potential things to keep in mind, in the past China has DDOS'd websites (GitHub) that host material China finds objectionable, but cannot be censored selectively due to HTTPS and are too popular to block outright (However, I consider it very unlikely they would do something like that to Wikipedia. Wikipedia has a low enough popularity in China, that they would probably just block it totally if they decided to do something about Wikipedia). Bawolff (talk) 05:18, 13 June 2015 (UTC)
Regarding secrecy, or at least part of it: yeah, we didn’t really enjoy springing this on the community, though the WMF has publicly been talking about the intent to switch to HTTPS for the past few years. The reason we didn’t say anything about the specific deadlines or make the transition public until it was in progress was that public statements opened us to the possibility of man-in-the-middle attacks. Letting everyone know meant letting bad actors, so to speak, know our plans and timeline. We couldn’t have this debate in public without telling the world what we intended to do, which could have compromised the security and privacy of readers and editors in certain areas. We’d have preferred not having to worry about that, obviously. /Johan (WMF) (talk) 19:33, 16 June 2015 (UTC)
But this discussion and these plans were open and public, where any "bad actors" could surely have followed them. Surely that workboard was missing an item relating to fixing bots that didn't operate on wmflabs.org. I can only do so much to stay tuned to such things, and a proactive heads up, perhaps by email, would have been appreciated. I asked about this last December on the Village Pump, and never got a response. How am I supposed to know about venues such as m:HTTPS, where I might have gotten help last December? Wbm1058 (talk) 16:50, 20 June 2015 (UTC)
An example I've been given is that not knowing our time plan made it much more difficult to, e.g., hack DNS and traffic at a border, proxying traffic as HTTPS back to us while making it seem to everyone that they're connected directly to us; HSTS support in modern browsers will prevent the downgrade and warn about it. I'd have loved to be able to give a heads-up to everyone this would cause trouble for, and we do understand it has caused more work for people we don't wish to cause any unnecessary work for. We'd definitely have preferred not to find ourselves having to choose between, as we saw it, either putting user security in certain areas at risk or not having proper, open communication.
Are you still having the problems you had last December? /Johan (WMF) (talk) 13:37, 22 June 2015 (UTC)
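For anyone unfamiliar with the HSTS mechanism Johan mentions: a sketch (my own simplification of RFC 6797, not browser code) of why a remembered HSTS host defeats a downgrade. Once the host is known, the browser rewrites plain-http URLs to https before any request leaves the machine, so a middlebox never sees a cleartext request to tamper with:

```javascript
// Hosts the browser has previously seen a valid Strict-Transport-Security
// header from (persisted across sessions in a real browser).
const hstsHosts = new Set();

function rememberHsts(host, stsHeader) {
  // e.g. "max-age=106384710; includeSubDomains"
  const m = /max-age=(\d+)/.exec(stsHeader);
  if (m && Number(m[1]) > 0) hstsHosts.add(host);
}

function resolveUrl(url) {
  const u = new URL(url);
  if (u.protocol === "http:" && hstsHosts.has(u.hostname)) {
    u.protocol = "https:"; // internal upgrade; no cleartext request is ever sent
  }
  return u.toString();
}

rememberHsts("en.wikipedia.org", "max-age=106384710; includeSubDomains");
console.log(resolveUrl("http://en.wikipedia.org/wiki/HTTPS"));
// → "https://en.wikipedia.org/wiki/HTTPS"
```

The header name and max-age directive are real; the storage and lookup logic here is illustrative only.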

Other people that HTTPS could potentially hurt which we know about (personally I think this is an acceptable hurt): people who use IE6 on Windows XP will not be able to view any page on Wikipedia (IE6 on XP is incompatible with modern best practices for HTTPS). People on very old browsers which don't support SNI (e.g. Android 2.3.7, IE 8 / XP, Java 6u45) will get a certificate error when visiting a sister project (but Wikipedia itself will be fine). Bawolff (talk) 20:02, 13 June 2015 (UTC)

@Bawolff: Sounds reasonable. 20:21, 13 June 2015 (UTC)
@Bawolff: The Wikimedia certificate uses subjectAltName, not Server Name Indication. SAN is supported by IE6. LFaraone 05:27, 14 June 2015 (UTC)
@LFaraone: IE6 doesn't work because it only supports SSLv3, and we require at least TLS 1.0 (to prevent downgrade attacks/POODLE). We use subject alt names, SNI, and wildcard certs. If no SNI is sent, you get a certificate for *.wikipedia.org with an alt name of wikipedia.org. Which is great if you're browsing Wikipedia. Not so great if you're browsing wiktionary.org. Bawolff (talk) 05:45, 14 June 2015 (UTC)
@Bawolff: Browsing wiktionary.org works fine even if the browser doesn't send SNI. If SNI is absent, the server sends a different certificate whose subject alternative names include the domain names of all sister projects. 191.237.1.8 (talk) 06:42, 14 June 2015 (UTC)
Oh, you're absolutely right, users get a uni cert when they don't have SNI. I saw the SNI behaviour of switching certificates and just assumed it would be broken without SNI. My bad. Bawolff (talk) 11:09, 14 June 2015 (UTC)
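The selection behaviour described above can be sketched like this (my reading of the thread, not actual Wikimedia server config; cert objects are stand-ins): with SNI the server picks a per-project wildcard certificate, and with no SNI at all it falls back to a single unified certificate whose subjectAltNames cover every project.

```javascript
// Stand-in certificates: with SNI, pick a per-project wildcard cert.
const perProjectCerts = {
  "wikipedia.org":  { cn: "*.wikipedia.org",  altNames: ["wikipedia.org", "*.wikipedia.org"] },
  "wiktionary.org": { cn: "*.wiktionary.org", altNames: ["wiktionary.org", "*.wiktionary.org"] },
};
// Without SNI, fall back to one "unified" cert covering all projects.
const unifiedCert = {
  cn: "*.wikipedia.org",
  altNames: ["*.wikipedia.org", "*.wiktionary.org", "*.wikiquote.org", "*.wikimedia.org"],
};

function pickCert(sniName /* null when the client sent no SNI */) {
  if (sniName === null) return unifiedCert;
  const baseDomain = sniName.split(".").slice(-2).join(".");
  return perProjectCerts[baseDomain] || unifiedCert;
}

// Old client, no SNI, browsing Wiktionary: still covered by the unified cert.
console.log(pickCert(null).altNames.includes("*.wiktionary.org")); // true
```

This is why the no-SNI case still validates on sister projects, which is the behaviour the IP editor confirmed.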
@Bawolff: I just checked my IE6, it has TLS 1.0
—Telpardec  TALK  20:57, 16 June 2015 (UTC)
Yes, but it's disabled by default. The type of people who still use Internet Explorer are probably not messing with the TLS settings. When I was running IE6 under Wine, enabling TLS 1.0 didn't seem to help anything, but that was probably just Wine not working great. Bawolff (talk) 04:19, 17 June 2015 (UTC)

To Resident Mario: The switch to HTTPS will badly hurt those who chose to change their browser's default list of certification authorities and who, specifically, do not trust GlobalSign (the root authority from which Wikipedia's certificate emanates). At the very least, they will be forced to add security exceptions for all Wikipedia domains, and quite possibly will be locked out of Wikipedia altogether, because browsers do not always allow security exceptions on HSTS sites. In effect, the switch means that users who wish to use Wikipedia are forced to trust everything that GlobalSign signs, whereas as long as HTTP transport was permitted, one could at least read Wikipedia over HTTP if one did not care about the security of public information on Wikipedia but didn't want to trust GlobalSign. (I can't explain the problem with GlobalSign because I don't want to risk being sued for libel, but let's say that one might not necessarily wish to trust all, or any, certificate authorities.) So the irony is that this change, which is supposed to protect the "security" of users, actually forces security-conscious users to downgrade theirs, in effect a Trojan horse kind of attack. (In all fairness, Web browsers and HTTPS in general should be blamed for having an absurdly rigid approach to security: one can't restrict a certificate authority to certain domains, or things like that, so I can't say "I trust GlobalSign only for signing certificates in the wikipedia/wikimedia/wiktionary/etc.org domains".) --Gro-Tsen (talk) 21:15, 13 June 2015 (UTC)

For real? Any person who intentionally messes with their root certificate store should be technically competent enough to make their own trust decisions about Wikimedia certs, by, say, verifying them some other way. If you're not, you have no business removing CAs from your trust store. Bawolff (talk) 21:45, 13 June 2015 (UTC)
About 10% of HTTPS websites use GlobalSign, so it is not a Wikipedia-specific issue. One could say the same for any other CA that the WMF may decide to use. Moreover, Bawolff makes a great point that someone technically competent enough to mess with trusted roots would be able to work around this as well. They must know how to do so already, since there are numerous other sites using GlobalSign! If someone really lost faith in the CA system, they should try using Convergence, Perspectives, or Certificate Patrol. Tony Tan · talk 03:07, 14 June 2015 (UTC)
To answer your second question, according to zh:Template:Wiki-accessibility-CHN, zh.wikipedia.org is currently completely blocked in China using DNS poisoning. HTTPS versions of all other Wikimedia projects are not blocked. @Gro-Tsen: If you manually remove GlobalSign root certificates from your browsers' trust stores, you can manually add Wikipedia's leaf certificate to the trust store so that your access to https://en.wikipedia.org/ is not blocked by your browsers. 191.237.1.8 (talk) 05:09, 14 June 2015 (UTC)

To Resident Mario: In short: HTTPS everywhere hurts everyone. HTTP was built with network proxy and caching servers to decrease page load times. These are intermediate servers run by your ISP to reduce backbone data requests. Australians will be most affected: at roughly 87 ms from our Virginia data centers, they'll see around a 200 ms round trip. Due to the design of HTML, these requests can stack, meaning that 200 ms could bloat to 2 seconds. Under 100 ms is considered ideal; at 1 second users become frustrated, and at 10 seconds they'll look for something else. (Proponents will weasel around this by saying your browser caches content, which helps only if you don't go back to the Google search results.)

Additionally, anyone who says this'll stop the $53-billion-a-year NSA is delusional. Methods for the NSA to get the WMF's private keys range from a court order (à la Lavabit, which shut down over this), to intercepting and backdooring hardware (Cisco routers, hard drives), to recruiting or bribing employees. This basically leaves ISPs spying on users (Verizon Wireless adds an advertising tracking ID to all HTTP requests), but considering how willing the WMF is to toss aside net neutrality... — Dispenser 15:08, 17 June 2015 (UTC)

On that first part: well, yes and no. Most browsers now support SPDY and/or HTTP/2, for which HTTPS is a requirement and which can give you a 20-700% speed boost. That last part especially is probably going to significantly increase speed for the majority of users in those areas. Second, that area is served from the San Francisco caching center, so it's slightly closer than Virginia at least, though still far enough away that there is a good point here. I do know that the WMF is watching the performance impact of this change around the world, and I think they were already considering adding another caching center for Asia/Oceania regardless, so if performance really does drop measurably, that consideration might get higher priority. —TheDJ (talk • contribs) 01:09, 18 June 2015 (UTC)

We send anti-caching headers (because people edit, and then things become outdated). ISP-level caching servers that conform to the HTTP spec should not be caching Wikipedia pages whatsoever, so HTTPS won't really affect caching efficiency. Lots of people go on and on about the NSA; I really think the threat this move is designed to address is someone like China or Russia altering pages in a MITM fashion to make articles less NPOV. Bawolff (talk) 02:23, 18 June 2015 (UTC)

This isn't an anti-NSA measure; it's driven by security and privacy concerns on a number of different levels, not all of them related to governments. /Johan (WMF) (talk) 13:37, 22 June 2015 (UTC)

And another problem: No browser history!

In addition to losing the drop-down edit summaries (as mentioned above), I've also lost the browser history for all newly-visited Wikipedia pages. Why the exclamation point?? Because this is absolutely crucial -- in fact, integral -- to my ability to work on Wikipedia. I totally depend on having those page links, which give me quick & easy access to all recently-visited pages. Johan, you said above, "We want editing Wikipedia to be as simple as possible, no matter which browser people use." (I am using IE 8.) Please tell me there is going to be a technical fix for this problem ASAP. Because if there isn't, there is a very real possibility that I will have to give up editing. I am a long-time (since 2006), very conscientious editor, with nearly 60,000 edits. So I truly hope that does not become necessary. Cgingold (talk) 09:11, 13 June 2015 (UTC)

P.S. - I raised the very same issues a couple of years ago during the last discussion on this subject, which was resolved to my satisfaction when I learned that it was possible to opt out. So this is really a sore point for me. It sure would have been nice if you guys had at least had the consideration to place a banner at the top of all pages for a week or two giving all of us a heads-up about the impending change. Matter of fact, I believe I made the same point last time! :-( Cgingold (talk) 09:20, 13 June 2015 (UTC)

Best advice I can give is to use IE11 or another non-broken browser. —TheDJ (talk • contribs) 10:19, 13 June 2015 (UTC)

Yup, this is a problem for me too that is admittedly a considerable annoyance. I always opted out previously for this reason. Connormah (talk) 11:21, 13 June 2015 (UTC)

@Cgingold: If you mean that you lost your browser history for all of the http domains, I would say: deal with it yourself. It's a petty issue.
You will regenerate the URLs soon enough as you visit the new pages again; it's no different than if you were to clear your browser history. If you have lost the ability to generate new URLs in your URL history, then that is a problem. I hope it can be fixed, but if it cannot... wouldn't it be easier for you to move up to an Internet browser that's less than six years old? 13:48, 13 June 2015 (UTC)

Even if it was "merely" the loss of older browser history that I was referring to -- which it wasn't -- that would hardly be "petty", my friend. You might want to check your attitude at the door before you trivialize another editor's problem. But of course, I was talking about the fact that my browser no longer generates new URL links in the browser history. And it is indeed a very serious problem. Cgingold (talk) 21:29, 13 June 2015 (UTC)

Petty? The switch to HTTPS is petty. It is stark raving mad to switch to HTTPS to avoid NSA surveillance. I cannot believe the reasoning there; some people need to take their tin foil hats off. I bet if anyone at the NSA were to read this they would have a right good laugh at us all. Even if they were inclined to mine data off this site, the switch to HTTPS would be of little impediment to a body with those resources. Why do we not operate only on Tor and demand VPN usage if we are trying to protect the hypothetical drug smugglers, money launderers and terrorists that apparently have abandoned the onion sites in favour of WP talk pages? There is no benefit to this change in policy and the reasoning behind it is deranged. --EchetusXe 17:46, 13 June 2015 (UTC)

I am not here to hear your opinion, I am here to assess the damage. 19:39, 13 June 2015 (UTC)

As a sysop, you should probably use HTTPS. Otherwise, your account is at risk of being hijacked in a Firesheep-style attack, especially when you use a public network. A sysop account would be really useful for someone intending harm. :( If there are big issues, upgrading your browser to a newer version of IE, Chrome, Firefox, etc. should help. Tony Tan · talk 03:15, 14 June 2015 (UTC)

Cgingold, I just wanted to say that, yes, we really do care about your problems, we appreciate all the work you're doing, and I will ping you personally as soon as I have a good answer or solution. /Johan (WMF) (talk) 12:15, 16 June 2015 (UTC)

For reference, IE < 11 represents about 5.5% of our traffic [4]. Bawolff (talk) 18:54, 13 June 2015 (UTC)

How about an "in the clear" sub-wiki? Like http://itc.en.wikipedia.org, which just mirrors the normal wiki. Then all users of "normal" Wikipedia get HTTPS, but people who want/need HTTP have to specifically ask for it. ˥ Ǝ Ʉ H Ɔ I Ɯ (talk) 09:38, 13 June 2015 (UTC)

It would more likely be http://en.insecurewikipedia.org, but I don't think there would be many fans to maintain such a system. We will have to see about what kind of case can be made for that, but I think it is unlikely that it will happen. —TheDJ (talk • contribs) 10:25, 13 June 2015 (UTC)

Anyone could set up a proxy to do this (e.g. http://crossorigin.me/https://en.wikipedia.org -- maybe that's a bad example, as it doesn't fix the links). Anyway, the point is that it is trivial to set up an independent proxy to an HTTPS site. Allowing edits might be trickier, but not impossible. Bawolff (talk) 18:28, 13 June 2015 (UTC)

We have had a discussion

Just a note that we have had a discussion at the village pump about this earlier this year (WP:VPR/HTTPS). The discussion was closed as WP:CONEXCEPT due to the highly technical nature of the issue. From my point of view, this move to HTTPS-by-default is the correct one. Mozilla (Firefox), Chromium (Chrome), the IETF, and the W3C TAG are all behind moving websites on the Internet in general to HTTPS and deprecating insecure HTTP.
HTTPS guarantees the authenticity of content sent from Wikipedia servers as it travels through the Internet, prevents tampering (whether it is censorship in another country or your internet service provider injecting ads or adding invasive tracking headers), and curbs mass surveillance (by a gov't or an internet provider) by making it difficult and expensive to monitor which articles are being read or written by individuals. Regarding the potential negative effects of switching to HTTPS for older clients/browsers, we should be able to find a workable solution for them fairly quickly. A lot of the issues mentioned are software bugs that can be fixed without going back to HTTP. Google uses HTTPS by default, and there does not seem to be an issue with anyone using Google. Tony Tan · talk 20:43, 13 June 2015 (UTC)

Thank you so much, Tony, for pointing out that Google doesn't cause these kinds of problems! Somehow, I hadn't even noticed that -- I guess precisely because it doesn't cause any problems... SHEESH!! If these issues are, in fact, entirely unnecessary, then WHY WERE THEY IGNORED by WMF's tech people when they had been explicitly pointed out on this very page a couple of years ago??? Inexcusable. I am sitting here literally shaking my head in disbelief... Cgingold (talk) 21:48, 13 June 2015 (UTC)

Well, Google (the search engine anyway, not counting other sites Google runs) does its own auto-complete with JavaScript, based on what it thinks you want to search for; it does not use the built-in remember-what-I-typed browser feature. You used the word "issues" in the plural. As far as I'm reading, old versions of IE disabling auto-complete on HTTPS is the only actual issue reported in this thread that could possibly not affect Google (or, for that matter, is a reasonable complaint, imo). Am I mistaken? Edit: I guess you're also complaining about browser history, so that makes 2 issues. All things considered, both are essentially minor inconveniences, both are experienced only by a relatively small number of users, and the autocomplete one has an easy mitigation (update your browser). Not exactly what I'd call the end of the world. Bawolff (talk) 04:45, 14 June 2015 (UTC)

Please enable HTTP mode

Hi. I'm from Iran. After WP enabled HTTPS as default (and no access to HTTP), we have a lot of problems accessing WP due to Internet censorship, because the Iranian government abuses the HTTPS protocol. It's very slow and pages do not load properly. Time-out errors happen frequently. Editing is not easy anymore. Please enable the HTTP option for restricted countries again. Wikipedia is a great contribution to humanity. Thanks. --188.158.107.24 (talk) 10:41, 14 June 2015 (UTC)

All people everywhere possess the inalienable right to have access to information of any and every kind. And they should be able to express that right without intervention by any company, organization or government, to include suppression, censorship and secret monitoring. The sole exception would be information that is kept secret for reasons of national security. What I don't understand is why any government would suppress and censor this right by committing abuse of HTTPS and not also abuse HTTP. Is HTTP really that much harder to abuse, to suppress and to censor? Since many of the problems that have erupted since Wikipedia converted to HTTPS-only are shown to be due to users using older versions of software, and perhaps older hardware as well, maybe if you upgraded to recent versions you would find that rather than governments being the problem, usage of non-recent versions of hardware and software is the problem? – Paine 16:06, 14 June 2015 (UTC)

They try to block HTTPS and other encrypted traffic because they can't see what you're doing. Cleartext traffic like HTTP can be examined.
They want to give people some access to the Internet, because they know it's generally a lost cause to try to block Internet access completely, and trying to do so might spark a revolt, but they want to retain the ability to block some content and keep tabs on what you're doing. For instance, China's "Great Firewall" selectively blocks access to information on things like the Tiananmen massacre through multiple techniques, including a blacklist of certain sites and traffic analysis. --108.38.204.15 (talk) 22:33, 14 June 2015 (UTC)

@Legoktm: you might know who to pass this concern onto. Magog the Ogre (t c) 22:35, 14 June 2015 (UTC)

I think I understand what it feels like to be faced with Internet censorship; I spend half my time in China, where the Great Firewall disrupts access to websites that are commonly used in countries like the U.S. It is very, very frustrating. What I do want to point out, however, is that by enabling forced HTTPS encryption, governments like that of Iran will be forced to decide either to block all of Wikipedia or none of it, instead of being able to selectively filter by the topic of individual articles. While in the short term users may find access to be unstable or even impossible, the government may eventually be forced to stop interfering with Wikipedia traffic if it decides that access to the "good" information is more important than filtering the "bad" information. So in the long run, it may be better to keep Wikipedia HTTPS-only if users eventually end up having access to all of Wikipedia, without censorship. There is no guarantee, but I think we should at least wait and see. Tony Tan · talk 01:50, 15 June 2015 (UTC)

Out of curiosity, do you have a source for information about the Great Firewall using traffic analysis? Most of the things I read seem to suggest they mostly use deep packet inspection and DNS poisoning. And I'd be really interested in reading any publicly available info about how their system works. Bawolff (talk) 02:10, 15 June 2015 (UTC)

I'm suspicious that HTTPS will do nothing to stop spying by the NSA or GCHQ, but has been introduced to make it much harder for whistleblowers to sit in the middle and see who they are spying on. It seems we're stuck with it though, and if you're using ancient browsers such as IE8, you'll just have to upgrade. Akld guy (talk) 06:24, 15 June 2015 (UTC)

That doesn't really make sense to me. What realistic opportunity would a whistleblower ever have to be in the middle of an NSA/GCHQ communication? And even if they were in such a position, the transport security of Wikimedia would be rather irrelevant. To the best of my knowledge, no whistleblower has ever intercepted communications in transit over the internet in order to release them in the public interest. Whistleblowers are usually in a trusted position and legitimately have access to the data which they decide to divulge. Bawolff (talk) 07:47, 15 June 2015 (UTC)

I want to clarify one thing that's turned up a couple of times in the general discussion (and I'm not replying to any specific user here). There have been a number of comments regarding the NSA. We know that the NSA has targeted Wikipedia traffic, and the Wikimedia Foundation doesn't believe Wikipedia readers and editors ought to be targeted, but while this may have been tangentially related to concerns over the NSA, it wasn't the driving force. There are other governments and private actors to take into account, and, for example, the Firesheep-style attacks that Bawolff has mentioned. Rather, it was driven by concern for the privacy and security of editors and readers all over the world, which means there are many different problems to consider. /Johan (WMF) (talk) 08:00, 15 June 2015 (UTC)

I am with Tony Tan on this one.
Our concern is not the NSA or GCHQ spying on users (that can be done even in spite of HTTPS); it's governments like Iran, China, and others that (with HTTP) could filter out certain content from Wikipedia without the majority of people noticing. HTTPS forces them to either block *.wikipedia.org entirely, or just let go. They will probably choose the latter, since the former will cause protest sooner or later. Compare, by the way, what Russia has been doing with the Internet Archive because of its recent HTTPS-by-default policy: they had to block the entire domain. Of course they would love to filter out only LGBT-related topics etc., but it is a good thing they cannot. And this is why we have to make HTTPS the only option. --bender235 (talk) 09:38, 2 July 2015 (UTC)

• Just to add my 5c, I do remember using a university Internet network a year ago that completely banned HTTPS (so I could use Wikipedia only over HTTP). I do not know the origin of this block (it should definitely be a setting by the university network administrator), and I do not know if that block is still there (I haven't used it since then), but I would like to point out that such networks do exist, and I don't think there is a way to track them — NickK (talk) 09:16, 15 June 2015 (UTC)

Such networks probably exist, but I think it would be up to the network administrators to whitelist Wikipedia's servers if they believe access to Wikipedia is important. They would probably do it after realizing that it is no longer possible to access Wikipedia over plain HTTP. Tony Tan · talk 05:26, 16 June 2015 (UTC)

If Iran blocks HTTPS, there's no way Wikipedia/WMF will be changing their minds by blocking access to Wikipedia for Iranians through HTTP, which is probably a desirable outcome for the regime anyway. WMF should set up additional HTTP servers for static access to Wikipedia (no-edit access), with a disclaimer in big banner statements at the top and bottom of every page stating that the content may be modified by third-party man-in-the-middle vandalism. -- 70.51.203.69 (talk) 04:44, 17 June 2015 (UTC)

It would be trivial for the men-in-the-middle to remove the disclaimers. (talk to) TheOtherGaelan('s contributions) 06:16, 17 June 2015 (UTC)

Yes, it would; however, it would re-enable access for populations who are completely blocked from using HTTPS. If the governments in question actively block HTTPS, then we are just falling into their hands: by removing HTTP access we limit their populations' access to information, voluntarily completing their governments' schemes to censor the internet as they filter out HTTPS. -- 70.51.203.69 (talk) 11:31, 18 June 2015 (UTC)

Never really saw the logic behind moving to https... so it either stops the governments from snooping on the accounts of say 10,000 Wikipedians (people who browse and randomly edit the wiki) or, by moving to https, it blocks 1.2bn-2bn users from COMPLETELY accessing the website.. If I was the guy in charge of making the decision, I'll choose the latter. I'd rather have a billion users being able to access this site than help 10,000 users "hiding" behind closed doors and randomly attacking their government and making this site look bad.... sadly, I don't work for the site and I sympathize with those that can no longer access the site.. if WMF had actually done their research before doing this, they would realise it was those users who contributed a lot to the website rather than those 10,000 who use the site for their own personal agendas... alas... the weak shall inherit the wiki.. and for the 1000th time, enwikipedians' demands supersede the demands of other language wikis -- 11:53, 18 June 2015 (UTC)

Billion?
Do you have a citation for that? Before anyone says China, China is not currently treating HTTPS access to Wikipedia any differently than HTTP access. I'm keenly interested in who this actually blocks, so if anyone has actual information about people who are blocked... please say so. Bawolff (talk) 21:39, 18 June 2015 (UTC)

If the governments that currently block HTTPS really intended to completely remove their citizens' access to all of Wikipedia, they would have already done so over HTTP. Precisely because they still see value in some of Wikipedia's content, they chose to filter instead of block. HTTPS removes the filter option, so they will have to either allow or block all traffic to Wikipedia. When they made the decision, Wikipedia was still available over HTTP, so they chose to block HTTPS and filter HTTP, achieving their purpose of allowing access to some information while blocking other information. Now that Wikipedia can only be accessed over HTTPS, they are forced to re-evaluate that decision: they must choose between blocking all of Wikipedia or allowing all of it. While all of Wikipedia is blocked as of now (due to their earlier decision, based on a situation that has since changed), they may eventually be forced to allow it if they think public access to certain resources is important. This was the case for GitHub. When GitHub switched to HTTPS-only, China eventually decided to allow all GitHub traffic because of its importance to software development, even though there was other information on there that the gov't wanted to censor. It may be a while before HTTPS becomes unblocked; perhaps the governments are waiting for Wikipedia to enable HTTP access again, which would make it unnecessary for them to allow HTTPS and give up filtering. Tony Tan · talk 07:34, 21 June 2015 (UTC)

Or they could tell people to use Baidu Baike, or a similar local service. -- 70.51.203.69 (talk) 12:33, 23 June 2015 (UTC)

On that note, does that mean that Wikipedia has a Tor address? (Does Iran successfully block Tor?) -- 70.51.203.69 (talk) 12:36, 23 June 2015 (UTC)

You do not need a website to have a "Tor address" to use Tor to access the website. You can use Tor to access any website that does not block Tor exit node IPs. .onion addresses are used for concealing the location of the web server. Tony Tan · talk 20:43, 23 June 2015 (UTC)

Note Google has been mentioned. While Google defaults to HTTPS, it can be (easily) persuaded to use HTTP. All the best: Rich Farmbrough, 16:54, 7 July 2015 (UTC).

Should I raise a bug for this error?

I got this on the contributions page (with "Javascript-enhanced contributions lookup 0.2" enabled; you may enter a CIDR range or append an asterisk to do a prefix search):

Database error
A database query error has occurred. This may indicate a bug in the software.
Function: IndexPager::buildQueryInfo (contributions page filtered for namespace or RevisionDeleted edits)
Error: 2013 Lost connection to MySQL server during query (10.64.32.25)

All the best: Rich Farmbrough, 19:47, 28 June 2015 (UTC).

First of all, steps to reproduce would be welcome so someone else could try to see that error too. :) --AKlapper (WMF) (talk) 08:51, 29 June 2015 (UTC)

Search my contribs with namespace "Module" or "MediaWiki"; if this fails to provoke an error (due to caching), try another rarely (or never?) edited namespace, or another user with many edits such as User:Koavf. All the best: Rich Farmbrough, 15:54, 29 June 2015 (UTC).

It's slow but works for me, for example for you on TimedText, which gave "No changes were found matching these criteria." on a correctly looking page without error messages. PrimeHunter (talk) 16:06, 29 June 2015 (UTC)

I'm not surprised; the second time I did it with "Module" it worked, though whether this is a result of smart caching or variance in database load I cannot tell.
Nonetheless I would think that the software could deal with these queries, either by chunking or by increasing the timeout. All the best: Rich Farmbrough, 22:42, 29 June 2015 (UTC).

Here's another one: All the best: Rich Farmbrough, 20:21, 5 July 2015 (UTC).

|R parameter for magic words

{{PAGESINCATEGORY|Featured articles|R}} should give the number without commas. Strangely this is not working - 4511. Anyone know why? All the best: Rich Farmbrough, 15:51, 29 June 2015 (UTC).

Your syntax calls Template:PAGESINCATEGORY, which hasn't implemented R. The magic word is {{PAGESINCATEGORY:Featured articles|R}}, which gives 4511. PrimeHunter (talk) 15:55, 29 June 2015 (UTC)

Thanks, template fixed. All the best: Rich Farmbrough, 22:39, 29 June 2015 (UTC).

mw:Help:Magic words#Statistics shows several possible values of a second parameter, and if others than R are used then R can be a third parameter. If you want the template to be similar to the magic word then you could just pass everything on with |{{{2|}}}|{{{3|}}}. It appears the magic word just ignores the extra parameters if they are empty. PrimeHunter (talk) 23:09, 29 June 2015 (UTC)

Thanks, good idea, done (and in other languages). All the best: Rich Farmbrough, 20:21, 5 July 2015 (UTC).

Watchlist announcements leaving a gap

Since a few hours ago, there's a significant vertical gap left between the line containing "Clear the watchlist" and the "You have n pages on your watchlist..." line. All that with dismissed announcements, and the "Mark all pages as visited" button hidden with some custom JavaScript. Any clues? It used to be all neat and tidy. — Dsimic (talk | contribs) 04:54, 2 July 2015 (UTC)

Fixed; there were a couple of watchlist notices that had expired today. Sam Walton (talk) 14:06, 2 July 2015 (UTC)

Watchlist legend

• While we're on the subject, could someone update the watchlist Legend so we mortals will know what the circles and arrows and bullets and colors mean?
And BTW, I've been meaning to ask for some time (though I haven't seen this in a week or two): changed-since-my-last-visit articles usually show up in a deep bold blue, but someones one or two up them are in a sort of grayish blue. Anyone want to explain (or, again, update the legend)? EEng (talk) 14:18, 2 July 2015 (UTC) Thank you! — Dsimic (talk | contribs) 20:30, 2 July 2015 (UTC) @EEng: Hm, I'm a bit confused as I see no such things (circles, arrows and different shades of blue) on my watchlist. Out of curiosity, which watchlist-related options are enabled in your preferences? Oh, and I have some custom CSS forcing bold page names for modified watchlist entries. — Dsimic (talk | contribs) 20:35, 2 July 2015 (UTC) At Preferences > Gadgets I've got these two checked: • Display green collapsible arrows and green bullets for changed pages in your Watchlist, History and Recent changes • Display pages on your watchlist that have changed since your last visit in bold. Sometimes they're green and sometimes they're blue, and now there are little green bullets sometimes. It's all very entertaining. EEng (talk) 21:11, 2 July 2015 (UTC) What's even more confusing, I also have those two options checked, and really haven't seen any fancy watchlist inconsistency. Which skin do you use? I'm using the default Vector skin, while viewing everything in Firefox. — Dsimic (talk | contribs) 21:20, 2 July 2015 (UTC) Vector, Chrome (just checked in IE11 and it's the same). EEng (talk) 21:35, 2 July 2015 (UTC) Thanks. Hopefully others will use all this as debugging information. — Dsimic (talk | contribs) 03:15, 3 July 2015 (UTC) You'll see collapsible items (the arrows) when you have the enhanced watchlist enabled. -- [[User:Edokter]] {{talk}} 20:41, 3 July 2015 (UTC) Green indicates pages you haven't visited yet since they were updated (which is also explained at the top of your watchlist). An arrow indicates a collapsed item, which you can expand. 
-- [[User:Edokter]] {{talk}} 20:41, 3 July 2015 (UTC)

Another watchlist proposal: Symbol to replace (0) when net effect is no change at all

In the case where a sequence of edits has resulted in no net change to the source text at all (not just no net change to the length of the source text), how about replacing the (0) length-delta with something else, perhaps (∅)? It's useful to be able to recognize this special case at a glance. EEng (talk) 14:02, 2 July 2015 (UTC)
It's a good idea to distinguish between ε and 0, but the numeric field probably isn't the place to do it. Perhaps an "=" sign (or ==, or ===, depending on your preference...) after the numeric, thus "(0)=". I think it would break a smaller number of applications. All the best: Rich Farmbrough, 18:43, 2 July 2015 (UTC).
Good point. Maybe it could be worked into whatever it is you folks are cooking up with the green and blue arrows and dots and whatnot. EEng (talk) 19:40, 2 July 2015 (UTC)
Since the <span> containing the "(0)" has its own CSS class, called "mw-plusminus-null", changing the text to something else can be done with some relatively simple user-specific JavaScript (I've tested it):
var zeros = document.getElementsByClassName("mw-plusminus-null");
for (var i = 0; i < zeros.length; i++) { zeros[i].innerHTML = "(\u2205)"; }
This uses the "∅" empty set symbol (U+2205) surrounded by parentheses, as above; this can be replaced with any wanted string. 19:22, 2 July 2015 (UTC)
Misread the question. 19:30, 2 July 2015 (UTC)
Yeah, I was wondering just where in that code the test for "no net change" was, but I thought, "Well, these Village Pump gnomes must know something I don't" and went to try it. Guess what? You did misread the question. But it does a beautiful job of turning zeros into "empty-sets". Thanks for the effort, though. EEng (talk) 19:40, 2 July 2015 (UTC)
• Any chance of someone doing this?
EEng (talk) 12:21, 7 July 2015 (UTC)
There's not much we can do here, other than using JavaScript to retrieve the two versions and making a comparison, but that would be rather slow - assume that your watchlist shows 50 edits; that means that 100 pages (50 pairs) need to be retrieved, and for one of the pages in each pair, every byte compared against the corresponding byte in the other page of the pair. Functions exist to compare strings of bytes; not sure if they'd handle strings that were several hundred K in length without breaking into substrings. If nobody is willing to try it in JavaScript, you could file a feature request at phab:. --Redrose64 (talk) 12:34, 7 July 2015 (UTC)
(You're not sure there are functions to compare long byte strings? Are you kidding???) Retrieving and comparing the actual text is obviously out of the question, but with all due respect I question the accuracy of your analysis. When I hover over e.g. 4 changes (or whatever), where someone made a bunch of changes and someone else reverted them, it easily pops up with an empty diff, and that isn't happening by two full versions being retrieved and compared on the fly -- something somewhere knows, without too much trouble, that these two versions have a null diff. EEng (talk) 12:51, 7 July 2015 (UTC)
@EEng: A page diff is run on the Wikimedia servers and has access to all sorts of functions. Any JavaScript that customises display for a particular user is run client-side, and so any functions and data that are used must be available to the client. --Redrose64 (talk) 15:19, 7 July 2015 (UTC)
That doesn't explain why, if the JavaScript can request the two pages themselves (to do its own diff), it can't just as easily request the diff directly. But anyway, since the hashes appear to be available, this is moot (unless we want to improve the performance of the hover-diffs, which would be a good idea -- no wonder they're so slow!).
EEng (talk) 15:26, 7 July 2015 (UTC)
────────────────────────────────────────
The null diff might perhaps use a symbol for unit type, sometimes denoted '()'. It's not the same symbol as ø 'nothing at all'; it symbolizes 'action which leaves what you care about unchanged', such as adding zero to your number, or multiplying it by one. --Ancheta Wis (talk | contribs) 13:08, 7 July 2015 (UTC)
If I'm understanding you, you're suggesting omitting the zero, so that (0) becomes (). That's a great idea, and might (at least partially) address RF's concern about breaking existing applications, since (one hopes) an empty string will be interpreted as zero by those applications that just want the length. EEng (talk) 13:30, 7 July 2015 (UTC)
@EEng: If you are using Popups to generate diffs when you hover over a diff link, it actually does retrieve two full versions and compare them on the fly. Popups contains its own diff generator, separate from the MediaWiki one, which is why it sometimes says "diff truncated for performance reasons" but the native MediaWiki one doesn't. You can see the code for it by searching for "Javascript Diff Algorithm" in MediaWiki:Gadget-popups.js. Also, if you open the part of your browser console that monitors network requests, you can see the requests for each of the two pages being sent each time you hover over a different diff link. — Mr. Stradivarius ♪ talk ♪ 14:54, 7 July 2015 (UTC)
My apologies. I clearly underestimated the potential depths of implementation insanity. Why in the world don't they get MediaWiki to generate a diff and just use that? Luckily, this can all be shortcut via hashes, as seen below. Any thoughts about the interface change to ()? EEng (talk) 15:03, 7 July 2015 (UTC)
It would seem that a SHA-1 hash is generated for every page revision - see mw:Manual:Revision table#rev_sha1. Now, whether the hash is exposed through the JavaScript API, I've no idea.
Alakzi (talk) 13:37, 7 July 2015 (UTC)
It is indeed exposed by mw:API:Revisions with prop=revisions&rvprop=sha1. So, it is simply a matter of retrieving the hashes of the first and last revision in a series and doing a string comparison. [5][6] Alakzi (talk) 13:54, 7 July 2015 (UTC)
Yippee! That's even better than a very efficient diff, because it's obviously available for free. So before we recruit some knowledgeable gnome to implement this, what do we need to do to make sure everyone who might care is OK with the suggested output format change, i.e. ()? Paging Rich Farmbrough. EEng (talk) 14:49, 7 July 2015 (UTC)
It doesn't need to be backward-compatible with anything if we're writing a user script. If you want this to be changed in core, just open a ticket on Phabricator; it seems fairly straightforward, so it might even be implemented before the turn of the century. Alakzi (talk) 15:24, 7 July 2015 (UTC)
It's an interesting dilemma: a user script would get it sooner for me, but this seems like something that would benefit most users, and they won't get that benefit if they have to know about a script to install; so seen that way it's better to request it in core -- but that will probably delay my getting it. So should I go for my own selfish interests, or the greater good? EEng (talk) 15:30, 7 July 2015 (UTC)
Why not both? :-) Alakzi (talk) 15:38, 7 July 2015 (UTC)
Yeah, I thought of that too, but how much do you want to bet, if there's a user script available, that implementing it in core gets deferred because "there's already a user script for people who want this"? Honestly I'm amazed this feature wasn't there from the very beginning -- it's such an obviously important special case. In fact, it really ought to be integrated into the overall grammar of bullets, arrows, bolding, coloring, and so on of the watchlist, since the idea is to help people filter out the unimportant (including the null) and focus on actual changes they care about.
EEng (talk) 16:39, 7 July 2015 (UTC)
I like that idea! Maybe a ring in place of the bullet? Alakzi (talk) 16:41, 7 July 2015 (UTC)
Part of the reason I brought it up is I have the impression that stuff's being changed right now (or maybe I just haven't been looking closely for a while). How do we get the right person's attention? EEng (talk) 16:51, 7 July 2015 (UTC)

No Firefox favicon for section redirect

I see no favicon in Firefox 39.0 tabs for redirects to sections, for example Wiki spam and others tested in Category:Redirects to sections and the five other languages listed there. I see the favicon for Wiki spam in IE, Chrome, Safari and Opera. In Firefox I see the favicon for Spamdexing (direct link to the page) and Spamdex (redirect to the page but not to a section). Do others have the Firefox issue? PrimeHunter (talk) 02:26, 3 July 2015 (UTC)
I couldn't see it for the Wiki spam page in 38.0.5. But then when I went to verify the version number, the browser auto-updated to 39.0 and now that page does show a favicon. Regards, Orange Suede Sofa (talk) 02:45, 3 July 2015 (UTC)
Are you still on the page saying "(Redirected from Wiki spam)" when you see the favicon in 39.0? We use URL redirection now, so if the page is reloaded then you get the redirect target Spamdexing#Wiki spam, where I do see the favicon and no "Redirected from". PrimeHunter (talk) 02:51, 3 July 2015 (UTC)
Yes, I see the favicon on the "Redirected from..." page. When I reload the page, the "Redirected from..." disappears as expected and the favicon remains. Orange Suede Sofa (talk) 03:04, 3 July 2015 (UTC)
Thanks. If others aren't missing the favicon then I'm not filing it in Phabricator. PrimeHunter (talk) 09:55, 3 July 2015 (UTC)

Link to talk page in Mobile Wikipedia

The talk page link in the mobile version of Wikipedia should be shown for non-logged-in users as well. GeoffreyT2000 (talk) 03:36, 3 July 2015 (UTC)
Thanks for the feedback.
I'd suggest sending it to the mobile mailing list, mobile-l@lists.wikimedia.org, to keep discussion centralised and give everyone a chance to participate. Thanks again! --Dan Garry, Wikimedia Foundation (talk) 20:09, 7 July 2015 (UTC)

AutoEd

Requesting volunteers to correct my AutoEd page https://en.wikipedia.org/wiki/User:Silver_Samurai/common.js, according to this instruction: https://en.wikipedia.org/wiki/Wikipedia:AutoEd#Installation_guide -- 05:56, 3 July 2015 (UTC)
What you've done looks fine to me. Are you not seeing the "auto ed" item in the "More" dropdown menu? — Mr. Stradivarius ♪ talk ♪ 06:17, 3 July 2015 (UTC)
User:Mr. Stradivarius: I am seeing it now. -- 06:41, 3 July 2015 (UTC)

Page ID

So... I have a page ID (I simply have it). Is there some simple way to find out which page this ID belongs to? Not using an API query, SQL quarry... Can't I go to search and do some search like "id:XXXXX", or maybe the Lua people can do some work? As I understand, Module:Page can't do that. --Edgars2007 (talk/contribs) 13:40, 3 July 2015 (UTC)
My user page has page ID 26096242; this URL //en.wikipedia.org/w/index.php?curid=26096242 is another way to load it. -- John of Reading (talk) 13:55, 3 July 2015 (UTC)
OK, but what about not touching the URL? The perfect solution would be {{some template|26096242}}, which would give User:John of Reading. --Edgars2007 (talk/contribs) 14:13, 3 July 2015 (UTC)
I don't know a way to display the page name. https://en.wikipedia.org/?curid=26096242 is a shorter URL. It can be used in {{querylink}}, where {{querylink||qs=curid=26096242|Unknown page}} produces Unknown page. PrimeHunter (talk) 15:13, 3 July 2015 (UTC)
This is easy from Lua: mw.title.new(26096242).prefixedText. It would be easy to set up a module to do this if necessary. Jackmcbarn (talk) 15:21, 3 July 2015 (UTC)
Fwiw...
there's already a Special: page search utility that can take a page ID and give you the associated page (albeit a bit clunky as well as poorly labelled) -- just go to the Redirecting Special Pages section and select Redirect by file, user, page or revision ID. Don't forget to switch the input selector menu value from User ID to Page ID before you send your request. You can also build a template based on that special page's syntax and the previous ID given above, like: Special:Redirect/page/26096242. Unfortunately, as it stands today, the output is not listed as an optional wikilink for you to follow if need be, but automatically takes you to the target article, revision or user in question instead. I'm sure amending the app to display the target as a clickable wikilink rather than automatically opening the target page is the better solution here. Maybe providing a checkbox indicating not to take you to the target, as an alternative? -- George Orwell III (talk) 00:15, 4 July 2015 (UTC)
Thanks, guys! --Edgars2007 (talk/contribs) 09:54, 7 July 2015 (UTC)

Help with css

So today I finally started my css page. I've never done that before because I find css very confusing. Here's what I'm trying to do: hide certain templates. I pulled some css from Template:Humor/doc, though I don't want to hide those templates in particular. I added one I did want to hide, which didn't work, and upon looking at the template codes I further edited it to this. It still didn't work, so I removed all the code (because it might compromise my account, booga-booga). So, anyone know why this is happening? 13:43, 3 July 2015 (UTC)
@Eman235: I think your problem is that you need a comma after all but the last class selector (you're missing one after .ombox-humorantipolicy). That CSS code is essentially looking for a not-a-forum box inside a humorantipolicy box, which won't exist. Adding the missing comma should solve your problem. /~huesatlum/ 14:27, 3 July 2015 (UTC)
Haha! Brilliant.
I missed the comma. It works now. 15:09, 3 July 2015 (UTC)

Conversion to PDF

A reader reported problems converting two articles to PDF. I just tried each of them and can confirm the same problem. In each case I received the following error: Status: Bundling process died with non zero code: 1
The articles: --S Philbrick(Talk) 15:43, 3 July 2015 (UTC)
This is filed as T104708. HTH, --Elitre (WMF) (talk) 16:03, 3 July 2015 (UTC)
WMF ops found the cause of the problem; I've deployed the fix and it all seems better now. --Krenair (talkcontribs) 17:18, 3 July 2015 (UTC)
Another email was sent to OTRS reporting problems with PDF renderings. The two articles mentioned were:
• Poseidon - Status: Rendering process died with non zero code: 1
• Twelve Olympians - Status: ! LaTeX Error: Something's wrong--perhaps a missing \item.
I had hoped to respond that the problem has been resolved, but I tried both of these and both failed. I placed the error message after the article name. --S Philbrick(Talk) 14:23, 4 July 2015 (UTC)
Those articles seem to have different error messages? "Poseidon" says "Rendering process died with non zero code: 1", which is phab:T94308. "Twelve Olympians" says "Status: ! LaTeX Error: Something's wrong--perhaps a missing \item.", which welcomes a bug report. --AKlapper (WMF) (talk) 08:28, 6 July 2015 (UTC)
Good catch, I glossed over the single-word difference in the error message. This means that the prior problem, which is reported solved, is solved and this is a different issue. You identified the bug report for the first error. The second one does generate a different error. It has already been reported as T88890. That report talks about a problem working with collections.
I added this specific instance to show that it can be generated with a single article. --S Philbrick(Talk) 13:27, 6 July 2015 (UTC)

List of contributors

I recently made a post to the help desk asking "is there an easy way to get a list of all the users (and IPs) that have contributed to article X, and maybe even list them in order of number of edits to the page?" and I was directed here. I think there's an xTools page ([7]) that's supposed to accomplish this, but it doesn't work for me. I need the list fairly soon because the page (QI (A series)) is up for deletion and the page history is a bit long to go through manually. 22:01, 3 July 2015 (UTC)
You could use the API's prop=contributors. Anomie 23:24, 3 July 2015 (UTC)
Thank you very much. 07:48, 4 July 2015 (UTC)

Problem on WP:AFD

The page WP:Articles for deletion has been vandalised somehow. The problem is in template {{Deletion debates}}, but I haven't been able to pin it down any further. JohnCD (talk) 14:19, 4 July 2015 (UTC)
Investigating. Jo-Jo Eumerus (talk) 14:22, 4 July 2015 (UTC)
If you were seeing a large red screen with the text "nice meme", that was added by 120.50.54.81 to Module:Dynkin. The module was subsequently added to a number of templates by Keastes (talk · contribs). Everything has now been reverted. 14:23, 4 July 2015 (UTC)
Looks like Keastes is back in control of their account (diff). Conifer (talk) 15:06, 4 July 2015 (UTC)
See the recent history of Template:Hlist for the problem specific to {{Deletion debates}}. --Redrose64 (talk) 17:30, 4 July 2015 (UTC)

Revision scoring IEG goes for a second round

Hey folks,
About 6 months ago, we posted here to notify you of an IEG-funded project we've been working on: Revision scoring as a service. Today, I'm posting to ask for your feedback on our plan for a second round of IEG funding. In the first 6 months of our project, we've met our goals. We stood up a production-level service for retrieving revision scores.
(Test it out right now at this link: http://ores.wmflabs.org/scores/enwiki/?models=reverted|wp10&revids=638307884|642215410) We have 5 languages running (English, French, Portuguese, Turkish and Persian) and two models ('reverted' == probability that the edit will need to be reverted & 'wp10' == WP 1.0 Assessment). We've had a set of tools and bots pick up the service. See ScoredRevisions and Reports bot 3.
In the next 6 months we plan to do some more interesting stuff.
1. Add an edit type classifier
2. Expand language support to new languages like Spanish and German and projects like Wikidata
3. Extend our WP:Labels service to allow editors with autoconfirmed accounts to create their own labeling campaigns.
If you have a moment, we'd appreciate your feedback or endorsement on our project renewal plan. Thanks. --EpochFail (talkcontribs) 14:38, 4 July 2015 (UTC)

revision history statistics "link"

This has been down for a very long time (it's important for articles like Dyslexia, Ebola/west Africa, ...). Is there any idea when it will be working? Thank you --Ozzie10aaaa (talk) 22:12, 4 July 2015 (UTC)
The whole xtools suite appears to be in a state of flux. The basic problem is that the people who made it are no longer active. Per the recent watchlist notice, some are trying to assemble a new team to rewrite it from scratch. But it is not clear whether they will succeed or how long that would take. Out of curiosity, what statistics specifically are you looking for? --Anders Feder (talk) 05:16, 5 July 2015 (UTC)
Dyslexia article... 1. edits per user; 2. bytes added per user --Ozzie10aaaa (talk) 11:56, 5 July 2015 (UTC)
There's an alternate tool available here. -- Diannaa (talk) 15:40, 5 July 2015 (UTC)

Percent encoding

I noticed some strange percent encoding: every time an IP made a mobile edit, ref names were getting another layer of % encoding - [8]. Is this a known bug, or a one-off glitch we can ignore? All the best: Rich Farmbrough, 18:51, 5 July 2015 (UTC).
Most of that user's edits are not tagged mobile, and the one that is doesn't show this problem. I'm guessing that the user's browser is buggered up by installed extensions or something. —TheDJ (talkcontribs) 19:58, 5 July 2015 (UTC)
Looks more like vandalism to me. The IP labelled that edit as "fixed the page". The IP has been blocked, by the way. Tvx1 20:08, 5 July 2015 (UTC)
The substantive edits seem OK. The IP was blocked as an open proxy; thanks for pointing the block out. All the best: Rich Farmbrough, 20:17, 5 July 2015 (UTC).
I assumed they had overridden the tag - I guess that can't be done? Perhaps they were trying not to make that problem. Anyway, it's something to watch out for. All the best: Rich Farmbrough, 20:17, 5 July 2015 (UTC).

Force desktop version?

I read Wikipedia on an iPad, and the screen is large enough that I don't need the awful mobile version. Yet I can't stop my Chrome browser from constantly serving up the mobile version, constantly forcing me to tap the "Request desktop version" button. Is there a way to force Wikipedia to give me the desktop version by default? --Calton | Talk 21:21, 5 July 2015 (UTC)
Bookmarking "en.wikipedia.org" (without the .m.) and always starting from there serves as a workaround, since once you've requested the desktop site once, it should remember it as long as you don't give it the chance to go to the mobile site. I completely agree about the shittiness of the mobile site, and whoever thought it should be default should be summarily fired—even on phones, let alone tablets, it's far less friendly than the desktop site. – iridescent 21:29, 5 July 2015 (UTC)
If you have some constructive feedback on what you don't like about the mobile view for reading, then the Reading Department would welcome it. You can give that feedback on the mobile mailing list, mobile-l@lists.wikimedia.org.
That said, I would note that if you phrase your feedback to the list in the extremely combative manner that you did here, people will likely avoid engaging with you. Please keep things as constructive as possible, both on- and off-wiki. --Dan Garry, Wikimedia Foundation (talk) 20:07, 7 July 2015 (UTC)
I've found that bookmarking "en.wikipedia.org" is not sufficient (on Android Chrome), as the server will detect your platform and redirect to the mobile site anyway. However, once the mobile page loads, if you scroll to the bottom and click the "Desktop" link, then the server remembers your choice. I'm not sure if it only lasts until you close the tab or if it lasts as long as your login session, but it expires eventually. Also, there is no equivalent way to switch back to mobile; you have to add the ".m" into the URL to get back. Ivanvector 🍁 (talk) 20:12, 7 July 2015 (UTC)
There is a link to the mobile version at the bottom of every desktop page. -- [[User:Edokter]] {{talk}} 21:13, 7 July 2015 (UTC)

Arunanshu abrol thanking DumbBOT

Normally, bots cannot be thanked. Why did Arunanshu abrol thank DumbBOT? GeoffreyT2000 (talk) 01:01, 6 July 2015 (UTC)
Well, nobody knows except Arunanshu abrol (talk · contribs) themselves. Have you asked them why? But if you mean "how", it's very easy. All you need is the revision ID: for example, the last edit made by DumbBOT (talk · contribs) is Special:Diff/670158689, so try visiting Special:Thanks/670158689. --Redrose64 (talk) 07:44, 6 July 2015 (UTC)
I get "Thank action failed. Please try again." YMMV. All the best: Rich Farmbrough, 17:02, 7 July 2015 (UTC).
OK... I just thought to look for the thanks in question. There's only one logged as sent to DumbBOT, and it's timed at 07:48, 29 October 2013 (at first, I had assumed that the incident was recent). Might it be that it was possible to thank bots at the time, but the software has since been amended?
--Redrose64 (talk) 17:48, 7 July 2015 (UTC)

Tech News: 2015-28

15:13, 6 July 2015 (UTC)

API calls just started throwing SSL/HTTPS (?) errors

For context, I run WP:STiki, which scores every en.wp edit in near real-time. In the last couple of hours, this process has hit the fan. In the last several days, I have implemented changes to handle the HTTPS switchover and the new continuation procedure for queries returning long result sets. As of ~15 hours ago, everything was running perfectly smoothly. Now my (Java) API code is throwing errors like this at every API call (but the same URL does succeed in a browser):
Error: HTTP error at URL: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&revids=670300219&rvtoken=rollback&rvprop=ids|timestamp|user|comment|tags&format=xml
javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair
at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
at sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1458)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1452)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1106)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
[snip some lines]
Caused by: javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1697)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1660)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1643)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1224)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1201)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:440)
[snip some more]


Clearly something is going on in the SSL handshake between my server and the WMF one. Given that things were working on my end and I have not intervened, this strongly suggests something was changed on the WMF side. Any pointers? I'll note that CBNG also went down parallel to my service, I believe. Thanks, West.andrew.g (talk) 03:57, 7 July 2015 (UTC)

CBNG isn't feeding on IRC either. Ж (Cncmaster) T/C 06:26, 7 July 2015 (UTC)
This is probably related to phab:T104281. You probably need to update your Java version. —TheDJ (talkcontribs) 11:34, 7 July 2015 (UTC)
Moving to Java 7 or higher will solve your issue. Matanya (talk) 11:47, 7 July 2015 (UTC)

Most frequently used words with 6 or more characters on the English Wikipedia

How can I find someone who knows how to do a statistical analysis on a Wikipedia dump? I would like to have a list of the most frequently used words on Wikipedia that contain 6 or more characters. For more info about why I want such a list please click here. Thanks! The Quixotic Potato (talk) 11:55, 7 July 2015 (UTC)

I have done a bigram analysis before; let me dig through my archive. All the best: Rich Farmbrough, 14:53, 7 July 2015 (UTC).
Oh… and the reason this is theoretically not a sufficient tool for working on typos is that the statistics of each dump reflect the text post-correction for certain typos. For example, I fixed all (5 or 6) occurrences of "chruches" a few days ago; a current dump would indicate that this is never misspelled thus. All the best: Rich Farmbrough, 14:57, 7 July 2015 (UTC).
Here is some data from 2010. It might be a good test set. All the best: Rich Farmbrough, 15:28, 7 July 2015 (UTC).
I have coded this up, latest dump is downloading.... Moving convo to User:The_Quixotic_Potato's talk page. All the best: Rich Farmbrough, 16:58, 7 July 2015 (UTC).
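For what it's worth, the kind of frequency count being requested can be sketched in a few lines of JavaScript (the language already used for user scripts elsewhere on this page). This is only an illustrative sketch over a plain text string; the function name and the crude tokenizer are my own assumptions, and a real dump analysis would need to stream gigabytes of wikitext and strip markup first:

```javascript
// Return the `limit` most frequent words of at least `minLen` letters
// in `text`, as [word, count] pairs, most frequent first.
// Sketch only: lowercases and splits on non-letter characters, which is
// far cruder than a real dump analysis would need to be.
function topWords(text, minLen, limit) {
  var counts = {};
  var words = text.toLowerCase().split(/[^a-z]+/);
  for (var i = 0; i < words.length; i++) {
    var w = words[i];
    if (w.length >= minLen) {
      counts[w] = (counts[w] || 0) + 1;
    }
  }
  return Object.keys(counts)
    .sort(function (a, b) { return counts[b] - counts[a]; })
    .slice(0, limit)
    .map(function (w) { return [w, counts[w]]; });
}
```

For example, topWords("Article article text about articles", 6, 3) returns [["article", 2], ["articles", 1]]: the two short words are excluded by the six-letter threshold.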

Tables

Hello everyone! For this year, WikiProject Formula One has introduced new tables for its race reports. See 2012 Brazilian Grand Prix and 2015 British Grand Prix for the difference. The new format seems to cause problems on Firefox. As you can see on my screenshot, the borders often don't appear, which makes the tables hard to read. Any idea why that happens? Zwerg Nase (talk) 14:07, 7 July 2015 (UTC)

They use the obsolete "border" attribute, which is no longer supported by modern browsers. Use CSS. —TheDJ (talkcontribs) 14:47, 7 July 2015 (UTC)
It's weird though, there already is a CSS fallback defined, and that should take precedence I think. Might be a FF bug. —TheDJ (talkcontribs) 14:51, 7 July 2015 (UTC)
Borders on the table element do not cascade to the cells and never have with inline CSS. --Izno (talk) 14:56, 7 July 2015 (UTC)
Right, whereas border attributes do, because the attribute also affects the value of the rules attribute, which does provide borders between the cells, as in the HTML4 tables spec. But inline CSS doesn't, so you should just use wikitable. —TheDJ (talkcontribs) 15:17, 7 July 2015 (UTC)
Frankly, please stop using custom CSS for the tables. I see no reason not to use the wikitable class. --Izno (talk) 14:56, 7 July 2015 (UTC)
I think it has to do with them wanting to waste less horizontal whitespace than wikitable allows them. But yes, as proven right here, such an approach is not maintainable. —TheDJ (talkcontribs) 15:17, 7 July 2015 (UTC)
I can't find any table which resembles your screenshot in either 2012 Brazilian Grand Prix or 2015 British Grand Prix - which sections are they in? --Redrose64 (talk) 15:24, 7 July 2015 (UTC)
Sorry, the screenshot is from 2015 Formula One season, but it's the same sort of table with the same problem. Anyway, thank you for clearing that up; I will propose to the Project to go back to the wikitables. Zwerg Nase (talk) 15:40, 7 July 2015 (UTC)
They were coded that way because tables that use the "wikitable" class have barely visible outlines on the mobile site. Like in this example:
A group of wikitables in a rally article on the desktop site
The same tables on the mobile site

Can you please tell me the exact version of FF and operating system that you are using? I'd like to keep an eye on this, but I've not yet found a version of FF for my Mac with this same problem. —TheDJ (talkcontribs) 15:54, 7 July 2015 (UTC)

@TheDJ: FF 38.0.5 on Win 7 SP1. Zwerg Nase (talk) 16:00, 7 July 2015 (UTC)
"barely visible outlines on the mobile site" -> Then you need to submit a ticket to get the mobile site fixed, not introduce arbitrary styling. Fix the root cause, not the symptom. --Izno (talk) 16:31, 7 July 2015 (UTC)
I did already launch a proposal to fix this, but it failed to get the problem understood. Tvx1 16:40, 7 July 2015 (UTC)
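For reference, the contrast under discussion looks roughly like this in table wikitext. This is a hedged illustration of the point being made above, not markup copied from the Grand Prix articles; the first table relies on the obsolete border attribute (whose between-cell borders come via the implied rules behaviour that modern browsers no longer honour), while the second uses the wikitable class, which picks up consistent borders from the site CSS:

```
<!-- Obsolete approach: the HTML border attribute; cell borders may not render -->
{| border="1" style="border-collapse: collapse;"
! Pos !! Driver
|-
| 1 || Example Driver
|}

<!-- Preferred approach: the wikitable class supplies borders via site CSS -->
{| class="wikitable"
! Pos !! Driver
|-
| 1 || Example Driver
|}
```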

Moving article - history lost

I had moved page Gülnar to Gülnar (province)
then I made {{disambig}} from Gülnar
Bkonrad moved page Gülnar (province) to Gülnar
when I wanted to see the editing history of Gülnar, I found that the history had been lost!

which is completely WRONG and should not happen! As if someone made a full article "blah-blah", and someone else made a stub "blah_blah", and then if somebody renames "blah_blah" to "blah-blah", all the history of "blah-blah" will be lost! (Idot (talk) 15:35, 7 July 2015 (UTC))

When you reverted Bkonrad's move, the history was moved back to the "(province)" page - along with the revert of Bkonrad's move. Jo-Jo Eumerus (talk, contributions) 15:57, 7 July 2015 (UTC)
now I have moved page Gülnar to Gülnar (district), and there is no previous history of Gülnar (Idot (talk) 15:59, 7 July 2015 (UTC))
There are three deleted edits at Gülnar. Only one seems relevant, and that's the one where you added additional links to the page. Is that the history you are looking for? I've dropped the text of that page onto your talk page. There is no other history that I can find, but these deleted edits seem consistent with the moves that show up in the history. UltraExactZZ Said ~ Did 16:21, 7 July 2015 (UTC)

I've asked for the district's page to be moved back to Gülnar, as it is the WP:PRIMARY topic for Gülnar, and there is no need for a disambiguation page per WP:TWODABS. A hatnote for Gulnar Hayitbayeva can be placed at the top of the district's page. -Niceguyedc Go Huskies! 16:24, 7 July 2015 (UTC)

Answered at Talk:Gülnar (district)#diambig (Idot (talk) 16:41, 7 July 2015 (UTC))

Anyway, how about the technical issues? (Idot (talk) 16:41, 7 July 2015 (UTC))

If the history had not been deleted by an administrator, would it have been visible? (Idot (talk) 16:58, 7 July 2015 (UTC))
Yes, but an administrator has to jump through some hoops to avoid deleting the page history when a new page is moved to a used title without the old page being moved elsewhere. And if the two page histories are merged into a single page history then the result can be quite confusing. WP:HISTMERGE may give an idea of the complications involved in history merging. We usually try to avoid it. PrimeHunter (talk) 17:30, 7 July 2015 (UTC)

Content Translation, the new article creation tool, is now available as a beta feature

How to use Content Translation (a short video)

Hello, Content Translation has now been enabled as an opt-in beta feature on the English Wikipedia for logged-in users. To start translating please enable the Beta feature in your preferences. Visit Special:ContentTranslation or go to your contributions page to open the tool. You can follow the instructions in the User Guide on how to get started. You can also find more information in our earlier announcement in The Signpost.

Since this is the first time we have installed the tool on this wiki, there may be some problems or service disruptions that we are not yet aware of. We will be monitoring usage to check for any failures or issues, but please do let us know on the Content Translation talk page or through Phabricator if you spot any problems. Thank you. On behalf of the Wikimedia Foundation's Language Engineering Team:--Runa Bhattacharjee (WMF) (talk) 17:06, 7 July 2015 (UTC)

Why do we need this? This is the English Wikipedia; pages are written in English. If we want to translate a page to, say, German, we edit the German Wikipedia. --Redrose64 (talk) 17:36, 7 July 2015 (UTC)
This isn't my initiative, but you don't seem to understand what Content Translation does. Can you read the Signpost article linked above? Best, Ed Erhart (WMF) (talk) 17:39, 7 July 2015 (UTC)
Research shows that even between big Wikipedias such as English and German, the overlap is only about 51%. That means roughly half of the German Wikipedia could be translated into English. Not all of those articles would be relevant to the English Wikipedia, but there are some valid opportunities for translation into English. In this ticket we collected the requests from the community to enable the tool on the English Wikipedia. Pginer-WMF (talk) 18:20, 7 July 2015 (UTC)

Images not showing up

Did you bypass your own browser cache after the null edit? A month ago that was often necessary after edits as discussed at Wikipedia:Village pump (technical)/Archive 137#Post not showing up immediately. It hasn't happened to me lately but I don't know whether it makes a difference that it is null edits. PrimeHunter (talk) 20:13, 7 July 2015 (UTC)
This is different from "Post not showing up immediately". It goes from showing nothing in the infobox image space to showing a nonexistent image, and after the first edit to each page, all of them were in Category:Articles with missing files — besides the category appearing at the bottom of the page, the article names appeared when I went to the category and looked through its contents. Nyttend (talk) 21:15, 7 July 2015 (UTC)

file_get_contents on wmflabs?

$url ="http://tools.wmflabs.org/catscan2/catscan2.php?language=de&categories=$catenc%0D%0A$other_cat_enc&doit=1&format=csv&$all_namespaces&depth=15";
$csv_list = file_get_contents($url);


Any clue why this happens? It works fine in Firefox with this URL. Thanks, --Flominator (talk) 19:32, 7 July 2015 (UTC)

Well, the URL still has the variable names in it... that would be one reason :) —TheDJ (talkcontribs) 20:58, 7 July 2015 (UTC)
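To illustrate TheDJ's point: the pasted URL still contains literal variable names, so the query parameters were never filled in (and would also need percent-encoding). Here is a minimal Python sketch of the same idea, building the catscan2 query string with every parameter defined and encoded up front. The parameter names come from the URL in the original post; the category values are placeholders, and the helper function name is made up for illustration.

```python
# Sketch: assemble the catscan2 query with urllib.parse.urlencode so
# every parameter is defined and percent-encoded before the request.
from urllib.parse import urlencode

def build_catscan_url(categories, language="de", depth=15):
    base = "http://tools.wmflabs.org/catscan2/catscan2.php"
    params = {
        "language": language,
        # catscan2 separates multiple categories with CR/LF,
        # which urlencode turns into %0D%0A as in the original URL
        "categories": "\r\n".join(categories),
        "doit": 1,
        "format": "csv",
        "depth": depth,
    }
    return base + "?" + urlencode(params)

# Placeholder category names, not the ones from the original script
url = build_catscan_url(["Katzen", "Hunde"])
```

Building the string this way makes a missing or misspelled variable fail loudly (a NameError in Python, a notice in PHP with error reporting on) instead of silently producing a URL full of literal names.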

Need some testers

Please see User:Howcheng/sandbox and User:Howcheng/sandbox2. We are trying to add captions to the main page images to solve the longstanding complaint of when the images don't go with the top item in ITN and OTD. I've checked this in Vector and Monobook skins on Win 7 and 8 with latest versions of Chrome, Firefox, and IE. I need some people to verify it using a Mac, using iPad (not worried about iPhone/iPod as smaller resolutions will get the mobile version, which does not include ITN and OTD), and from Android tablets with stock browser and Chrome. Additionally, if anyone has suggestions for other image types to test with, feel free to edit as needed. Thanks! howcheng {chat} 20:20, 7 July 2015 (UTC)

Works just fine on Safari for Mac, and on iPads and iPhones (w00t, the new responsive design mode of Safari 9 is for testing exactly this!). Android will be a lot more work to test; there are a lot of rendering differences between all the minor versions of Android. Android 4.4 == Chrome 30.0.0, that I know. —TheDJ (talkcontribs) 20:57, 7 July 2015 (UTC)
That's a whole lot of nesting divs with contradictory classes (floatright vs. floatnone for example) which basically do nothing. There is a lot of fat to trim. But I like the basic approach, though I would like to advocate using a separate class for main page images instead of using the inline-styled thumb classes (which look weird with the 'new image thumb' gadget enabled). -- [[User:Edokter]] {{talk}} 21:27, 7 July 2015 (UTC)
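To make Edokter's suggestion concrete, the deeply nested thumb divs with contradictory float classes could collapse into one wrapper carrying a dedicated main-page class. This is only a hypothetical sketch; the class names below are invented for illustration and are not from the sandbox pages or any existing stylesheet.

```html
<!-- Hypothetical markup: one wrapper with a dedicated main-page class,
     styled in MediaWiki:Common.css, instead of reusing the inline-styled
     thumb/floatright/floatnone classes. Class names are made up. -->
<div class="mp-thumb">
  [[File:Example.jpg|140px|alt=Example image]]
  <div class="mp-thumb-caption">Caption matching the top ITN/OTD item</div>
</div>
```

A dedicated class would also sidestep the conflict with the 'new image thumb' gadget that Edokter mentions, since the gadget targets the standard thumb classes.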
I can second what TheDJ has written. Tvx1 21:43, 7 July 2015 (UTC)