
Wikipedia:Village pump (technical): Difference between revisions

From Wikipedia, the free encyclopedia
Hæggis (talk | contribs)
AoV2 (talk | contribs)
(edit summary: <div class="nonumtoc">)


:: Yeah, that's the thing: I want it for ''all'' users, just for a WP-page, not for an article. The bullets on this page are numbered manually, so the TOC numbers in front of them make the whole table of contents confusing & the typeface very unsightly. But thanks for the CSS code ;-) --[[User:Hæggis|Hæggis]] ([[User talk:Hæggis|talk]]) 08:43, 10 April 2010 (UTC)

[[MediaWiki:Common.css]] addresses this with {{code|code=.nonumtoc .tocnumber { display: none; }|lang=css}}, so something like {{code|code=<div class="nonumtoc">__TOC__</div>|lang=html4strict}} will display a table of contents with no numbers. Perhaps a template exists containing exactly this code but I couldn't find it. ―[[User talk:AoV2|AoV²]] 10:13, 10 April 2010 (UTC)


== Chronology of importScript() and OnloadHook ==

Revision as of 10:13, 10 April 2010

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bugs and feature requests should be made at Bugzilla.

Newcomers to the technical village pump are encouraged to read these guidelines prior to posting here. Questions about MediaWiki in general should be posted at the MediaWiki support desk.

Inline template wikitext formatting

Please take a look at Wikipedia:Centralized discussion/Citation discussion#Inline template wikitext formatting and comment when you have a chance. Thank you. 08:54, 20 February 2010 (UTC−5)

Will $2 million Google donation be used to actually FIX BUGS?

Or will it be used for Jimbo to try to invent another search engine, or for further development of the near-useless Liquid Threads?

How about we use the money to fix most of the roughly 4,000 open bugs?

How about spending money on some simple usability fixes such as integrated watchlists, talk page section watchlisting, GIF scaling, and so on?

When Firefox's Mozilla Foundation got millions of dollars from Google over the years, they wasted a lot of it in my opinion. They worked on grandiose plans, and failed to listen to people about fixing all the many Firefox bugs. They ignore their discussion boards much of the time.

So what exactly are Wikipedia's plans for using the $2 million as concerns the many technical problems discussed in places such as this technical village pump? Where else but here can this be best openly discussed? Or is this one area where the Wikipedia consensus process (or at least open discussion) goes underground to unaccountable boards? --Timeshifter (talk) 12:08, 2 April 2010 (UTC)[reply]

As far as I know this money has not specifically been allocated, but I'm quite sure that it was one of the donations that convinced the foundation that it would be possible to extend the contracts of the Usability Initiative team, which would otherwise have reached the end of their contracts and objectives in the coming two months. The only proper place to discuss this is on the Foundation wiki, or the foundation mailing list I suspect. —TheDJ (talkcontribs) 13:29, 2 April 2010 (UTC)[reply]
This is a village pump where we discuss technical issues. Other wikis, such as the Foundation wiki, get very little traffic and community participation due to the lack of one of the features/bugs I mentioned: integrated watchlists. The mailing lists do not get much community participation due to the email list format, and because one's email address is exposed in the public archives. Same as at Bugzilla. That is another requested bug/feature, by the way, that has been ignored for years. I am talking about hiding email addresses in Bugzilla and the mailing list archives, as has long been done in most blog comments, major media page comments, Wikipedia, etc.
It is good to extend the contracts of those members of the Usability Initiative team that are making progress worthy of their pay. There needs to be open discussion though in my opinion about the balance between what is being budgeted for major initiatives such as the Usability Initiative, versus fixing bugs, and implementing long-requested bug/features. How do they blend together too? --Timeshifter (talk) 17:17, 2 April 2010 (UTC)[reply]


I'm not sure what integrated watchlists are, but I believe talk page section watchlisting is something that will be accomplished with the "near-useless Liquid Threads"; it would not be in any way simple to do with the current discussion page format. Mr.Z-man 15:11, 2 April 2010 (UTC)[reply]
Implementing talk page section watchlisting would be simpler to do than fixing everything wrong with Liquid Threads in my opinion. My experience with Liquid Threads at http://liquidthreads.labs.wikimedia.org was not good. I left many suggestions for improvement as did many others. --Timeshifter (talk) 17:19, 2 April 2010 (UTC)[reply]
Most of the fixes you suggested seem to be mainly UI improvements. That is still far easier than redesigning almost the entire watchlist system, which is what would be required to do that with the current discussion page system. Adding individual talk page threads to watchlists (without redesigning how talk pages work in the process, which is how liquidthreads does it) is probably one of the least-simple commonly requested features, which is why it hasn't been done. Mr.Z-man 19:29, 2 April 2010 (UTC)[reply]
I think I read that watchlisting individual sections of talk pages was a numbering problem. People sometimes add more sections higher up on a talk page. Section breaks and so on. So a basic fix might be implemented now, but it wouldn't be perfect. I would settle for that, even if some of my watched sections break now and then. I think a lot of problems occur when people try for perfection when mediocrity will suffice. ;)
Kind of like the GIF scaling problem. Static GIF scaling worked fine. Animated GIF scaling became a problem. Rather than separate the two, the developers tried for one massive fix of both together. Turned out to be a big mistake. Should have kept what worked. How does the saying go... if it ain't broke, don't fix it. --Timeshifter (talk) 23:21, 2 April 2010 (UTC)[reply]
That's only part of the problem. The other part is that the watchlist system is designed with the assumption that only entire pages are watched. There currently isn't even a place in the database to put the section information. As for the GIF scaling fix, I believe it was turned off again because it was broken; it caused some animated GIFs to be displayed as still images or something similar. Mr.Z-man 00:43, 3 April 2010 (UTC)[reply]
Please see: commons:Commons:Graphics village pump#GIF scaling (animated and non-animated) still not working and commons:Commons:Graphics village pump#Can static GIF scaling be separated from animated GIF scaling?. See also the related sections above and below them. Static GIF scaling/resizing has worked fine for years. The problem is with scaling/resizing animated GIFs. The solution is to separate the 2 tasks in MediaWiki. Problems pop up now and then with animated GIF scaling, due to the fact that scaling animated GIFs is far more complex, and there are many options on how to do it. It makes no sense to keep static and animated GIF scaling together. See the thread. It has been discussed there for months. --Timeshifter (talk) 12:48, 3 April 2010 (UTC)[reply]
Talk page section watchlisting would be most effective on Village Pump pages. That is where it is most needed in my opinion. Maybe if Liquid Threads could be adjusted enough to fix the major problems, then maybe it could be tested on a Google $2 million discussion page on a special village pump here on English Wikipedia. The main problem with Liquid Threads in my opinion is its lack of integration with current watchlists. We need integrated watchlists, not more separate watchlists. Plus Liquid Threads uses a really unsatisfactory form of "watchlist" called "new messages." It is not really even a watchlist. Most people prefer the simple scannable watchlists used everywhere else. --Timeshifter (talk) 12:58, 3 April 2010 (UTC)[reply]
Talking of liquid threads (near useless or otherwise) - any news about when they're going to be deployed?--Kotniski (talk) 15:40, 2 April 2010 (UTC)[reply]
You might check here:
http://liquidthreads.labs.wikimedia.org
mw:Extension:LiquidThreads
mw:Extension talk:LiquidThreads --Timeshifter (talk) 17:17, 2 April 2010 (UTC)[reply]

The $2 million is an unrestricted grant and will go toward Wikimedia's general budget. Much of that budget is spent on technical costs, including (increasingly) paid development. If you think that a few million dollars is enough to fix a significant number of bugs, though, you're mistaken. Even if it were entirely spent on hiring new developers, two million dollars would only get you two dozen or so. Many of the bugs users complain about the most would require weeks or months of developer time to properly fix. So it doesn't add up to thousands of bugs being fixed.

If you don't believe me, notice that Google made over $23 billion in profit for 2009, but there are 12582 open issues in their browser, Chrome. Users of normal software inevitably outnumber developers by thousands to one, or (in our case) tens of millions to one or more. There is never any guarantee that the bugs you want fixed will be prioritized, unless you do it or pay for it yourself. That's reality for you. —Aryeh Gregor (talk • contribs) 18:43, 2 April 2010 (UTC)[reply]

Bug/feature focus

You just gave me an interesting idea, although I have no idea whether it is reasonable: targeted donations to Wikimedia. I don't have enough money to fund something as big as Usability Initiative, but I would still like it if my few dollars would go towards fixing certain bug(s). Currently, there is no way I can do this, except maybe finding a developer and giving the money directly to him. Svick (talk) 20:33, 2 April 2010 (UTC)[reply]
Maybe we can put features/bugs to a vote. Get some discussion going, and find out what most editors would appreciate most, and how they would prioritize resources. Of course, let people know the difficulty involved with fixing particular bugs, or implementing certain features. Continue this process indefinitely. When it becomes apparent that some things are too resource-intensive, then move on to others if people feel that way. The board and staff can do what they want in the end, but at least they will have more grassroots perspectives to help in their decisions. --Timeshifter (talk) 23:39, 2 April 2010 (UTC)[reply]
Bugs in bugzilla already have votes, but I'm not sure whether developers actually consider them when deciding what to do. Svick (talk) 23:48, 2 April 2010 (UTC)[reply]
Not really, AFAIK. I did create WP:DevMemo in an effort to improve (two-way) communication between devs and enwiki community. Rd232 talk 11:46, 3 April 2010 (UTC)[reply]
That is an idea. A dedicated Village Pump on Bug/Feature prioritization would get more traffic and discussion. If people could bookmark and watchlist the individual talk sections, then even more participation would occur.
I found this interesting talk page that combines the standard talk page and Liquid Threads:
strategy:Proposal talk:Global watchlists
Standard talk page sections are on top. Liquid Threads is on the bottom. Note that the Liquid Threads topics can be watched individually, but "watched" means only that new replies show up in "new messages" linked from the top of the talk page. --Timeshifter (talk) 13:25, 3 April 2010 (UTC)[reply]
There are votes for bugs, in Bugzilla, but they aren't paid too much heed, for at least two reasons. First, Bugzilla voters are nowhere close to a cross-section of Wikipedia users. They aren't even close to a cross-section of Wikipedia editors (and far more people view than edit). Just because something has the most votes doesn't mean it's actually supported by the most people.

Second, users can't assess implementation cost or other development problems. Bug 164 is one of the most-voted-for bugs in Bugzilla, but it's quite difficult to implement acceptably. Although other bugs are less important, many also take less effort to fix, so they receive more priority. On the other hand, some back-end changes are actually quite important for developers to do further work with, but have no direct effect, so users would never vote for them.

On top of that, of course, many developers are volunteers, and don't necessarily care what anyone else thinks. I implemented HTML5 support because I'm enthusiastic about HTML5, for instance, not because anyone asked for it. I do try to be helpful, but not to the extent of putting a lot of effort into things I don't personally care about much (like that collation issue, which doesn't affect me at all). —Aryeh Gregor (talk • contribs) 17:32, 4 April 2010 (UTC)[reply]

Perhaps the money should be used to pay for bounties for bug fixes, rather than to hire staff developers. The hiring of freelance developers on a contract basis could provide incentives for a larger group to gain proficiency at MediaWiki development. We might do something similar to the Summer of Code — pay a stipend to students who are looking to get real-world experience fixing bugs, developing new extensions, etc. They will be "hungry" and wanting to build a portfolio and develop good references for starting their web development careers, so it might lead to some good work; and given their inexperience, the price might be lower than what we would otherwise pay. Plus a lot of students these days love Wikipedia and rely heavily upon it, so that could provide another, more altruistic incentive to help us (i.e. a desire to "give back" to a project that has helped them so much). Tisane (talk) 20:37, 5 April 2010 (UTC)[reply]

As a former GSoC mentor, I have to point out that students require a lot of supervision by seniors, fail often and deliver work that can almost never be directly used. This is not their fault; this is how learning works. The point is that even letting others do the work still requires a lot of work. I think in general however that bounties are something that the Foundation should consider. But they can only work if they are extremely well managed. This is where most past bounty systems in Open Source software have failed. —TheDJ (talkcontribs) 21:04, 5 April 2010 (UTC)[reply]
What do you suppose causes the failures? Do they get overly ambitious and bite off more than they can chew, not realizing how hard the projects will be? And what are the factors that contribute to the success of a bounty system? Perhaps there is a successful one out there that we can learn from and model ours after. I know part of the problem with MediaWiki bounties has been that someone will commit to a project and then fail to produce any deliverables after several months. Perhaps we should give students and other inexperienced programmers relatively simple projects to work on, and give the harder projects to those with a record of producing working deliverables in a timely manner.
It is generally easier to write than to code, or in any event, there seem to be a lot more people proficient at the former than the latter. I wonder if we can get an intern or someone to do some work on neglected documentation at MediaWiki.org? Maybe someone who is trying to get into the field of technical writing. The better the documentation of MediaWiki is, the easier it will be for new developers to get an understanding of the system and produce code, rather than having to figure everything out the hard way. Tisane (talk) 21:29, 5 April 2010 (UTC)[reply]
The failures are usually caused by inexperience (with a specific project or in general), mixed with lack of guidance. It isn't fun to be stuck on something for 2 days just because you can't find that one developer who is the only person who understands the code you are working on. This leads to people giving up. The other problem is of course that people just get in way over their head. In GSoC this has led many projects to require "qualification patches" from students. Those are quick bugs (1 or 2 day problems) that people are told to fix before they are allowed to commit to an actual larger project. The idea is that you can get a quick assessment of what experience people have and how they deal with problems (do they ask questions), while giving students a quick glance at the project and what quality and expertise a project requires. It's far from a perfect system, but has helped a bit.
Also projects are often not well defined, or far too large. The idea of "Liquid Threads" as a GSoC project clearly showed that by that time (2006), Wikipedia had become so large that this was a much too complicated project for GSoC. Projects of this level of complexity need multiple sub-projects, with intensive oversight, and should probably never be GSoC projects again. This brings us to an interesting point. Almost all 'projects' considered for Wikipedia have reached a complexity (mostly due to the complexity of MediaWiki/Wikipedia itself) that makes them unsuited for this type of development. So I would keep the bounties to the simpler bugs (1 week jobs), hopefully freeing up some time for senior engineers. With a LOT of preparation, some larger tasks might be defined on the "sub project" scale perhaps.
Lastly, I'd like to point out that not every bug is currently 'solvable'. There are often "fault patterns" that have not been pinpointed to a cause yet. Investigation of such problems can be complicated, requiring custom queries on the database by senior administrators, looking at the PHP error logs (not available to the normal public due to privacy issues) and whatnot. And again, at the scale of Wikipedia, everything is complicated (see the GIF issue). —TheDJ (talkcontribs) 22:00, 5 April 2010 (UTC)[reply]
I find these 2 points relevant to many endeavors, not just coding: "The better the documentation of MediaWiki is, the easier it will be ..." And: "It isn't fun to be stuck on something for 2 days just because you can't find that one developer who is the only person who understands the code you are working on. This leads to people giving up."
Those are some of the reasons I consolidated some resource and how-to pages here: commons:Category:Commons resources. I also created a few pages. I noticed similar problems at the Firefox forums. I discussed some bugs at Firefox forums, and noticed the lack of consolidation of info.
People were constantly creating similar threads about similar bugs, but people weren't seeing each other due to the threads being spread out across multiple message boards. One suggestion I made there was to create separate discussion boards for each category of bug. At the time, there were many problems and discussions about bookmark/favorites. The discussion threads were, and probably still are, spread out across multiple message boards. A simple aid would have been to have one message board just for bookmarks/favorites. There were some good ideas and solutions getting lost in the chaos.
So, the same is true here. Need focused bug/feature topic boards. Also need to be able to watchlist individual threads. Wikia has figured out how to watch individual threads. See their forum:
http://community.wikia.com/wiki/Forum:Index
But they don't have integrated watchlists. So it is still difficult to follow discussions since most people only check the watchlist for their own wiki there. The forums are on a different wiki, the community wiki. --Timeshifter (talk) 15:25, 6 April 2010 (UTC)[reply]
Integrated watchlists would be helpful, and that is an enhancement that is long overdue. I'm sure a lot of people would be very pleased to see that be added to the MediaWiki codebase, and it could help revitalize projects like meta and wikiquote. Do you want to collaborate with me on implementing that?
As for the integration of discussion forums, I have suggested that WikiProjects be formed to deal with various aspects of MediaWiki development. E.g., a WikiProject might be devoted to cross-wiki integration. We could get some esprit de corps going, with a standing group of volunteers available as a resource to anyone wanting to work on a particular aspect of MediaWiki, rather than just the present amorphous group of volunteers. In fact, it is probably more important to have WikiProjects for MediaWiki.org than on Wikipedia, because it's harder to write code than to write articles. In fact, I just took the liberty of creating mw:Project:WikiProject Cross-Wiki Integration, so check that out if it isn't deleted by the time you read this. :) Tisane (talk) 01:05, 7 April 2010 (UTC)[reply]
Anyone very familiar with MediaWiki's code who's willing to work on it for money is already employed by Wikimedia (or I guess other places like Wikia). Paying people who aren't familiar with the code, on the other hand, isn't likely to be very productive. It's well recognized among programmers (see, e.g., Brooks' law) that you can only do productive work on a large software project after you've taken a lot of time to get up to speed with the codebase. New developers always need hand-holding, in other words, even if they're very experienced as programmers. —Aryeh Gregor (talk • contribs) 17:02, 8 April 2010 (UTC)[reply]
True, but perhaps we can get a set of mentors to train new developers, who in turn can become mentors of the next generation of developers, and the group can expand outward by that means, in a way that does not consume too much staff time. Tisane (talk) 02:40, 10 April 2010 (UTC)[reply]

New village pump discussed

See Wikipedia talk:Village pump (development). The name may change. ... "Idea lab" or "Think tank" or "Idea Workshop" or something. --Timeshifter (talk) 19:14, 6 April 2010 (UTC)[reply]

Static GIF images not being resized by MediaWiki for years now. When will MediaWiki resizing return?

Static GIF resizing by MediaWiki worked fine years ago. Then some <s>idiot</s> saint Wikimedia developer turned it off, and the rest of the developers left it turned off for years (except for a few months). See commons:Commons:Graphics village pump/GIF thread#GIF scaling (animated and non-animated) still not working and commons:Commons:Graphics village pump/GIF thread#Can static GIF scaling be separated from animated GIF scaling?

See commons:Category:Octave Uzanne or any category with lots of charts, graphs, diagrams, or maps in GIF form. They take many minutes for dialup users to load. The Octave Uzanne thumbnail images look blurry now, but looked sharp when MediaWiki resized them. --Timeshifter (talk) 18:55, 30 March 2010 (UTC)[reply]

For that particular issue, convert the images to PNG format. When properly done and then run through optimization software, the file size is often less. PNG crusade bot and 718 Bot (here, not on Commons) used to automatically process GIF images tagged with {{ShouldBePNG}} (where conversion results in a file-size reduction of the full size image), but that task does not seem to be active anymore. PleaseStand (talk) 21:25, 30 March 2010 (UTC)[reply]
Unnecessary. GIF is an accepted copyright-free format for graphics and grayscale images such as commons:Category:Octave Uzanne. That covers graphics such as drawings, line art, graphs, charts, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, posters, banners, and flyers. GIF is a lossless format that works fine for graphics with fewer than 256 colors (which is true for most graphics).
See also: commons:Commons talk:Superseded images policy. GIF images are fully accepted. Conversion to PNG might be necessary for some GIFs that use transparency. By the way, if you want an easy way to make PNG images smaller (in kilobytes) I recommend Irfanview. It can losslessly compress PNG images so as to use fewer kilobytes for the same image without any loss in image quality. Install the Irfanview plugin pack too. It installs instantly and includes even better PNG compression, PNGOUT, which is easy to use in Irfanview. --Timeshifter (talk) 10:25, 31 March 2010 (UTC)[reply]
File:Plush Toys.JPG
Devs' ears can be particularly sensitive.
WP:NPA still applies even if you're talking about a paid staff member. They're still a member of the Wikimedia community. Mr.Z-man 21:48, 30 March 2010 (UTC)[reply]
See also WP:BITED. —TheDJ (talkcontribs) 00:23, 31 March 2010 (UTC)[reply]
Lol. I repent. I will do 3 "Hail Jimbos" and smoke a fatty to calm down. Love the image. --Timeshifter (talk) 10:25, 31 March 2010 (UTC)[reply]

Can some developers please respond? --Timeshifter (talk) 10:28, 31 March 2010 (UTC)[reply]

It seems like it works to me: File:X.gif File:X.gif File:X.gif —Aryeh Gregor (talk • contribs) 18:36, 2 April 2010 (UTC)[reply]

The browser, not MediaWiki, is doing the GIF image scaling. The number of kilobytes downloaded for a thumbnail is the same as for the full-size GIF image. See commons:Category:Octave Uzanne for example. Check the image properties for some thumbnail GIF images there. You might have to use MS Internet Explorer to get the image properties if you are not using the most recent version of Firefox. Example thumbnail info: "327.96 KB (335,832 bytes)," and "1,971px × 2,714px (scaled to 87px × 120px)". That scaling is done in the browser. The full 327 KB is being downloaded for that tiny thumbnail GIF.
That particular category has sharp, not blurry, thumbnails when MediaWiki does the scaling. Viewing that category's thumbnails is an easy way to tell if MediaWiki scaling of static GIF images has been turned on. --Timeshifter (talk) 23:55, 2 April 2010 (UTC)[reply]
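For illustration, one way to check this from a script rather than through the browser UI is to compare the Content-Length of a thumbnail URL against the original; if the "thumbnail" is roughly as many bytes as the full image, the server is not scaling it. A rough PHP sketch, with hypothetical URLs as placeholders:

 <?php
 // Hypothetical original and 120px-thumbnail URLs for some GIF.
 $original = 'http://upload.wikimedia.org/wikipedia/commons/a/ab/Example.gif';
 $thumb    = 'http://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Example.gif/120px-Example.gif';

 function contentLength($url) {
     $headers = get_headers($url, 1);   // associative array of HTTP headers
     return isset($headers['Content-Length']) ? (int) $headers['Content-Length'] : -1;
 }

 // If both numbers are about the same, the browser is doing the scaling.
 printf("original: %d bytes, 120px thumb: %d bytes\n",
     contentLength($original), contentLength($thumb));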

Okay, right. The problem seems to be these lines in CommonSettings.php:

// Re-enable GIF scaling --catrope 2010-01-04
// Disabled again, apparently thumbnailed GIFs above the limits have only one frame,
//  should be unthumbnailed instead -- Andrew 2010-01-13
$wgMediaHandlers['image/gif'] = 'BitmapHandler_ClientOnly';

You need to ask sysadmins about that. Developers can't help you, unless there's some underlying software issue that's preventing the thumbnailing from being re-enabled that needs to be fixed. —Aryeh Gregor (talk • contribs) 17:38, 4 April 2010 (UTC)[reply]
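For reference, re-enabling would presumably amount to restoring the normal server-side handler in that same file: a sketch of the idea only, not the change that was actually deployed (the exact default handler class may differ).

 // Hand GIF thumbnailing back to the server-side scaler instead of
 // leaving it to the client ('BitmapHandler_ClientOnly' skips scaling).
 $wgMediaHandlers['image/gif'] = 'BitmapHandler';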

It seems that all animated GIFs have only one frame shown when scaled. See bugzilla:22041 and comment #3: "After further investigation it appears that this issue is affecting all or nearly all thumbnails created from animated GIFs since the software update." --Timeshifter (talk) 16:55, 5 April 2010 (UTC)[reply]

Re-enabled GIF scaling

Hi all,

I've just re-enabled GIF scaling on Wikimedia sites. Hopefully you won't notice the difference.

If you do have problems with images rendering as single frames, then you should tell me so, by leaving a note on my talk page, e-mailing me at firstinitial surname at wikimedia dot org, or finding me on IRC.

Andrew Garrett • talk 04:47, 6 April 2010 (UTC)[reply]

Just to be clear, it's reenabled for both static and animated GIFs, correct? Even if it's just static GIFs, that's good news. Gavia immer (talk) 05:02, 6 April 2010 (UTC)[reply]
Great! Thanks. I just checked commons:Category:Octave Uzanne. The thumbnail GIF images are now sharp due to MediaWiki scaling. A few kilobytes per thumbnail, versus hundreds of kilobytes per thumbnail without MediaWiki GIF scaling.
Animated GIF scaling is currently rendering as single frames, though. I left more info on your talk page. --Timeshifter (talk) 16:33, 6 April 2010 (UTC)[reply]
If you see these images, purge the filepage, and force a reload of your browser directly on the image url. This should fix them. They are old cached images from last time. —TheDJ (talkcontribs) 19:09, 6 April 2010 (UTC)[reply]

File:Translational motion.gif. This is supposed to be a 300-pixel-wide, 398 KB, self-running animation. It had achieved FA status in January 2007 and is used on six of Wikipedia’s technology-related articles. Click here to see what the animation is supposed to look like.

Many articles have been adversely affected. Please undo whatever you just did. Many self-running animations, such as the one shown at right (and which is used on Thermodynamic temperature) worked fine for years but no longer do so. Note another self-running animation here on Non-uniform rational B-spline; same error. This is what it is supposed to look like in our articles. The animation on Thermodynamic temperature (and many other articles, including Equipartition theorem) was awarded FA status in January 2007. Now all one sees is gray boxes saying “Error creating thumbnail: Invalid thumbnail parameters or image file with more than 12.5 million pixels.” As the Translational motion animation is used in the following articles:
  • Thermodynamic temperature
  • Kinetic theory
  • Elastic collision
  • Neutron moderator
  • Equipartition theorem
…and since it can reasonably be assumed there are many other animations being affected that are used in very many articles, it would be exceedingly nice if whatever just changed was undone and greater caution exercised hereon. Greg L (talk) 18:08, 6 April 2010 (UTC)[reply]

The changes were explicitly made to avoid the crashing of the scale routines of the servers. I doubt the deployment of GIF scaling will be reverted yet again. We don't allow PNG images of 12 million pixels, and now we don't allow GIF images of over 12 million pixels either. I suggest we focus on finding ways to better deal with these large GIFs, but honestly, any animated GIF of this size should probably never be presented to users (and never should have been). —TheDJ (talkcontribs) 19:14, 6 April 2010 (UTC)[reply]
I have created a small javascript importScript('User:TheDJ/largeanimatedgifs.js');. This script replaces the error message with a play button. The play button links to the image page and will force the unscaled full version of the animated GIF. (Note that currently there is no filetype check, so it does the same for PNGs). —TheDJ (talkcontribs) 20:38, 6 April 2010 (UTC)[reply]
It seems that purging the animated GIF image page directly worked to get a different animated GIF to show up. I don't understand why it is not working in this case. File:Translational motion.gif is only 398 kilobytes at full size. --Timeshifter (talk) 21:07, 6 April 2010 (UTC)[reply]
But it is also 370 frames. I wouldn't be surprised if there is a frame limit, as well as a total pixel size limit. I'll try to find out. —TheDJ (talkcontribs) 21:13, 6 April 2010 (UTC)[reply]
The earlier version of that animated GIF is still working. See:
http://upload.wikimedia.org/wikipedia/commons/archive/6/6d/20080328033120%21Translational_motion.gif
I wonder what is different in that version. --Timeshifter (talk) 21:22, 6 April 2010 (UTC)[reply]
I'm not sure if purge works for the thumbnails of old file versions... Interesting effect indeed. —TheDJ (talkcontribs) 21:28, 6 April 2010 (UTC)[reply]

I understand you need a cutoff, but I'm curious: does it have to be 12.5 million pixels? Moore's Law considered, I'd wonder if it could be increased periodically, or even as a defined function of time. Though you'd need fully 29,193,000 pixels for the image that is missing here if I calculated right. Wnt (talk) 19:43, 6 April 2010 (UTC)[reply]

The animation should be shortened a little. Also, I know that one-half of the frames could be dropped (to a 10 frames/s rate), although the animation would not run as smoothly: thanks to Internet Explorer's and Safari's (and others') incorrect animated GIF handling of delays 50 ms and less, the animation runs twice as fast in Firefox as in the two other browsers. (The delays are exactly 50 ms.) Does anyone know of a GIMP script that could do so? It would be much more convenient than having to manually remove 185 frames individually. Alternatively, we could convert the animation to a Theora video, preserving the 20 frames/s rate and also the length. PleaseStand (talk) 20:23, 6 April 2010 (UTC)[reply]
I have converted File:Translational motion.gif to Theora video File:Translational motion gif.ogv, and it plays OK, only Wikimedia is now sending a garbled thumbnail. Will this clear in a few days once the current thumbnail backlog clears? -84user (talk) 09:04, 9 April 2010 (UTC)[reply]

Oh, and could someone please amend that error message to include a direct wikilink to the image? Thanks. Wnt (talk) 19:54, 6 April 2010 (UTC)[reply]

I have created bugzilla:23071, specifically for this issue. —TheDJ (talkcontribs) 20:41, 6 April 2010 (UTC)[reply]
Given that people's browsers have been perfectly capable of downloading the full image and scaling it for display, why can the servers not cope? Why not load, scale and save each frame separately (gifs support this kind of sequential access)? Or even implement a scaling queue to reduce server load, with some kind of "thumbnail not yet completed" for those images that are waiting? OrangeDog (τε) 20:27, 6 April 2010 (UTC)[reply]
For the origin of the 12.5 megapixel limit, see http://lists.wikimedia.org/pipermail/wikitech-l/2005-October/019681.html -- I guess it's a tradition by now... -- AnonMoos (talk) 20:46, 6 April 2010 (UTC)[reply]
Wow. That's 4.5 years ago. From the Moore's Law article it's unclear to me whether that is two or three doublings of RAM, but that would mean an equivalent limit today should be 50 million to 100 million pixels. This would allow an animation 1.6 to 3.2 times larger than the one in the example above. Wnt (talk) 20:58, 6 April 2010 (UTC)[reply]
My point being there is no need whatsoever to load every frame of the gif into memory in order to scale it. OrangeDog (τε) 20:51, 6 April 2010 (UTC)[reply]
Ideas are welcome on bugzilla. And the problem isn't the amount of memory per se (leading to server crashes), in the case of GIFs. If memory serves me right, it was also that these were taking SO long to process, they were holding up other resize jobs. Remember (1.5 years ago) when you randomly would have it that an image took 5 minutes before a thumbnail was generated after you uploaded it? You were waiting for an animated GIF. And the big problem with animated GIFs is that they cannot be reliably identified from normal GIFs. So you can't create separate pipelines, because only when you start processing will you know. Well, you could create separate pipelines of course, but it's a lot of design work. —TheDJ (talkcontribs) 21:00, 6 April 2010 (UTC)[reply]
I would say waiting 5 minutes once at upload is far better than waiting 5 minutes for every page load, or having broken thumbnails on featured content. I don't want to use bugzilla as you have to reveal your email address as far as I can see. OrangeDog (τε) 21:11, 6 April 2010 (UTC)[reply]
I use a throwaway email address at bugzilla for that reason. I check its inbox around once a week. It's on my calendar. I would prefer that bugzilla be part of wikimedia's unified login. That way email addresses would remain private, and I could check a watchlist instead, or get emails at my main email address. --Timeshifter (talk) 21:34, 6 April 2010 (UTC)[reply]
Actually, those 5-minute delays at Wikimedia Commons were a truly fearsome deterrent. Fix this problem, but yes, let's not have that again! Wnt (talk) 22:44, 6 April 2010 (UTC)[reply]
Last I checked, loading all frames in memory is a requirement of ImageMagick (our preferred image scaler) when you ask it to rescale a whole animated GIF. We could render each frame as a temp file, scale it, and recompose the result, but we don't have such a process set up right now. Dragons flight (talk) 21:01, 6 April 2010 (UTC)[reply]
With all due respect, it's not that difficult to knock up a streaming animated GIF re-scaler just using libgif or somesuch, and as ImageMagick is open source, you already have a very good starting point. I'd do it myself, but I prefer to leave my programming time at work. OrangeDog (τε) 21:11, 6 April 2010 (UTC)[reply]
That is perfectly fine. The programmers of MediaWiki prefer to do their programming in their free time for fun. Each form has its drawbacks apparently. —TheDJ (talkcontribs) 21:15, 6 April 2010 (UTC)[reply]
In general we try to avoid approaches that require MediaWiki users to compile custom code as that tends to be unsuitable for some MediaWiki users and environments. Like I said, it can be done with ImageMagick and temp file(s) (it could also be done with PHP GD, or several other ways, but new compiled code should be avoided if possible). One of the problems that makes GIFs more difficult is that the GIF specification doesn't tell people in advance how many frames an image may have. You know there is an additional frame only when you read the file and encounter an additional frame marker. Yes, it is entirely possible to write a streaming process. But we haven't done so and as far as I know none of the scaling tools we support use that approach either. If you want to volunteer to write it, then your contribution would be appreciated. Dragons flight (talk) 21:41, 6 April 2010 (UTC)[reply]
I've never bought the whole "can't be done in pure php" excuse, but if it's a concern, I'm sure the ImageMagick community would welcome a few more developers to write a streaming animated gif scaling function for the next release. OrangeDog (τε) 22:01, 6 April 2010 (UTC)[reply]
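To make the temp-file idea concrete, here is a rough PHP sketch of scaling an animated GIF one frame at a time by shelling out to ImageMagick, so that only a single frame is decoded at once. It is an illustration only, not MediaWiki code, and it glosses over per-frame delays and GIFs optimized with frame disposal (which would need the preceding frames composited first). The file names are hypothetical.

 <?php
 $src   = 'input.gif';   // hypothetical input
 $dst   = 'thumb.gif';
 $width = 120;

 // identify(1) prints one line per frame, giving us the frame count
 // without decoding the whole animation into memory.
 exec('identify ' . escapeshellarg($src), $lines);
 $frames = count($lines);

 $tmp = array();
 for ($i = 0; $i < $frames; $i++) {
     $tmp[$i] = sprintf('frame_%04d.gif', $i);
     // input.gif[i] selects a single frame for extraction and scaling.
     exec(sprintf('convert %s -resize %d %s',
         escapeshellarg("{$src}[{$i}]"), $width, escapeshellarg($tmp[$i])));
 }

 // Reassemble the scaled frames into one looping animation.
 exec('convert -loop 0 ' . implode(' ', array_map('escapeshellarg', $tmp))
     . ' ' . escapeshellarg($dst));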
This gif and this featured gif were displaying fine recently. Now, purge or not, they just give error messages. I see from above that "now we don't allow GIF images of over 12 million pixels". Does this mean we have to abandon many of the more sophisticated gifs? If so, it is a serious backward step and a body blow to Wikipedia, at a time when computer resources still rapidly continue to become cheaper. --Epipelagic (talk) 08:05, 7 April 2010 (UTC)[reply]

Partial solution

This was discussed last night again on IRC. Current thinking is, for the short term, to limit scaling to 12.5MP total, but send clients unscaled animated GIFs between 12.5MP and a higher number (possibly 50MP). If we assume 50MP, then in a worst case scenario, you are asking for 200MB of RAM on the browser side if you have a rather dumb image handling routine in your browser, and 50MB for the smarter ones. Above that value, the error would remain (or just thumb the first frame), because it doesn't seem healthy to send such big files at unsuspecting clients. Actually, even with this there are still problems remaining, especially on category pages with lots of animated GIFs, and probably for mobile phones as well. For the longer term, a solution with a scaled frame 0 + "play button" is probably the better solution for over 12.5MP, but that requires more work. Note that this is not a promise, nor a commitment; it was just a brainstorm session. —TheDJ (talkcontribs) 13:48, 7 April 2010 (UTC)[reply]

We are defining image size as length*width*frames, which tends to die on images with a very large number of frames. However, the smart implementation would only require RAM for something of order one frame, i.e. length*width pixels. Hence the memory burden on the browser could be far less than you estimate above (if the scaling algorithm is smart about animations). Dragons flight (talk) 15:22, 7 April 2010 (UTC)[reply]
I doubt my Sony Ericsson phone browser is that smart. And Safari 3 and earlier was terrible with larger animated GIFs. In general, it is good to assume the worst, because browsers have behaved like that. And especially if you have 200 of those full sized images in a Category page, safeguards are probably wise. Hell, ImageMagick doesn't even work frame by frame apparently, so if the problem exists there, it is likely to occur in client implementations. —TheDJ (talkcontribs) 17:04, 7 April 2010 (UTC)[reply]
Some new code was deployed last night.
  1. If an animation is more than 12.5MP total, you get a single frame
  2. If an animation is more than 12.5MP per frame, you get the thumberror.
Old thumberrors are cached, so you might have to purge the filepage. (Remember to purge on Commons for Commons files). This is still not optimal of course. —TheDJ (talkcontribs) 14:00, 8 April 2010 (UTC)[reply]
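Spelled out as code, the two rules above amount to something like the following paraphrase (the helper names are hypothetical; this is not the deployed code):

 <?php
 // $width, $height and $frames describe the uploaded GIF.
 $limit          = 12.5e6;   // pixels
 $perFramePixels = $width * $height;
 $totalPixels    = $perFramePixels * $frames;

 if ($perFramePixels > $limit) {
     showThumbError();          // hypothetical: even one frame is too big
 } elseif ($totalPixels > $limit) {
     renderFirstFrameOnly();    // hypothetical: static thumb of frame 0
 } else {
     renderAnimatedThumbnail(); // hypothetical: normal animated thumb
 }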

Category:Animations of geometry

See commons:Category:Animations of geometry. Why are some of these thumbnails animated and some not? Is it just a matter of time before MediaWiki gets to them? Purging? Purging what? Exactly what are the page URLs and the keys being pressed to purge one of the problem thumbnails? I have had no luck so far.

By the way, I found this comment at bugzilla:22041:

 Andrew Garrett      2010-04-06 04:11:51 UTC

 andrew@fenari:~/php-1.5$ php maintenance/eval.php
 > print $wgMaxAnimatedGifArea
 1

 Seems like the cause.

 I've fixed this.

It is comment number 11. --Timeshifter (talk) 22:04, 6 April 2010 (UTC)[reply]

You can expect the servers to take at least several days before caches clear and new animated images are created for everything that the servers can process. Directly purging the file will jump the queue and update it now. Dragons flight (talk) 22:09, 6 April 2010 (UTC)[reply]
Please try purging one there to see if it updates now. I am not so sure that it is working that way. For example; try this one:
File:Hyperpyramide-animation.gif
I already tried a while back, and there still is no animated thumbnail of it for me at commons:Category:Animations of geometry even after purging the page. --Timeshifter (talk) 22:21, 6 April 2010 (UTC)[reply]
Thumbnail of File:AnimPYRAMIDE2.gif wasn't animating in the category listing, so I purged the image description page and then bypassed my browser cache and now the thumbnail animates properly. Svick (talk) 22:20, 6 April 2010 (UTC)[reply]

[animated thumbnail of File:Hyperpyramide-animation.gif]

I must be doing something wrong. I found this: Wikipedia:Purge#For images. What exactly are you doing? I use Firefox.
I have also studied Wikipedia:Bypass your cache#Mozilla family some more. Still no luck getting an animated thumbnail to show up for this animated GIF on the category page:
File:Hyperpyramide-animation.gif
But a different-size animated thumbnail of that image shows up to the right. --Timeshifter (talk) 23:16, 6 April 2010 (UTC)[reply]
For reference, the thumb in question is this one. —TheDJ (talkcontribs) 23:34, 6 April 2010 (UTC)[reply]
The same situation for me: the thumb didn't animate, so I purged the description page on commons (clicking the “Purge” tab, i.e. http://commons.wikimedia.org/w/index.php?title=File:Hyperpyramide-animation.gif&action=purge), then Ctrl+F5 directly on that thumb and now it animates. Svick (talk) 23:41, 6 April 2010 (UTC)[reply]
It is now animated. It must be taking a very long time after purging for MediaWiki to get around to creating new animated thumbnails in some cases. I have been purging and bypassing that image and its category page for a long time now, and the 120-pixel-wide category thumb only now showed up on the category page. --Timeshifter (talk) 23:44, 6 April 2010 (UTC)[reply]
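For anyone purging more than a handful of files, the same purge can be scripted against the API rather than clicking the tab on each page: a sketch in PHP with cURL, using the API's action=purge and the file title discussed above.

 <?php
 $api   = 'http://commons.wikimedia.org/w/api.php';
 $title = 'File:Hyperpyramide-animation.gif';

 $ch = curl_init($api);
 curl_setopt($ch, CURLOPT_POST, true);   // purge must be POSTed
 curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
     'action' => 'purge', 'titles' => $title, 'format' => 'xml')));
 curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
 echo curl_exec($ch);   // response reports which titles were purged
 curl_close($ch);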

Extra-strength purge

From Timeshifter's comments elsewhere, it seems that many might not know about the http://commons.wikimedia.org/w/thumb.php?f=IMAGENAME.gif&w=PIXELWIDTH trick (where "IMAGENAME" is the name of the image, and "PIXELWIDTH" is a number such as 120). It sometimes works to regenerate one specific thumbnail size of an image when ordinary purging doesn't. AnonMoos (talk) 00:33, 7 April 2010 (UTC)[reply]

Actually, I was told a while ago by Tim Starling that this should NOT be used, because it stores images on the servers instead of on the scalers, making them unreachable for any future purges (read: permanent). The page really shouldn't be publicly accessible at all in the Wikimedia deployment, but no one has gotten around to that. I'm not sure if that is still the case, but this is what I was told. —TheDJ (talkcontribs) 00:37, 7 April 2010 (UTC)[reply]
It's kind of a little late now, since knowledge of that has been circulating among people on Wikimedia Commons for at least a year now (if memory serves), unaccompanied by warnings... AnonMoos (talk) 00:43, 7 April 2010 (UTC)[reply]
 02:37 < thedj> TimStarling: You once told me that we shouldn't 
  use thumb.php directly. Is that still the case ?
 02:37 < TimStarling> yes

Apparently, it is "inefficient", but not really dangerous. Avoid doing it in general (don't make buttons/menulinks for it) and avoid it on the secure server especially. I'm still not quite sure exactly what problems this creates. I know that SVGs might not render properly, because the right software/fonts might not be installed on the thumb.php server, for instance. But I'm not sure of the caching implications. —TheDJ (talkcontribs) 01:06, 7 April 2010 (UTC)[reply]

Could this be the reason why the category thumbnails for some animated GIFs are not rescaling? I bet many people over the last year have tried that URL with many animated GIFs:
http://commons.wikimedia.org/w/thumb.php?f=IMAGENAME.gif&w=PIXELWIDTH
I tried it today when I first saw it. Did this cause a problem? Is this repairable?
Further work on GIF scaling/resizing by MediaWiki is going on at bugzilla:23063. Some more animated GIF categories to look at are commons:Category:Abstract Animation and commons:Category:Animations of gears and gearboxes. --Timeshifter (talk) 01:07, 7 April 2010 (UTC)[reply]


Anyway, it's actually kind of recommended on the Commons FAQ; see commons:Commons:FAQ#Why is the old picture and not the new uploaded picture on my screen? (Or-- my thumbnail is wrong.) -- AnonMoos (talk) 01:50, 7 April 2010 (UTC)[reply]

Page purge

I noticed something on this page:

That page uses static GIFs. Even after bypassing the browser cache (CTRL-F5 on Firefox) the page was still using browser-resized GIFs instead of MediaWiki-scaled GIFs. See Wikipedia:Bypass your cache.

I tried a WP:null edit (click the edit link at the top and then save the page without making changes to the wikitext) and that fixed the problem. All the static GIFs on that page are now scaled by MediaWiki.

As for animated GIF images and their thumbnails it seems like the scaling is taking days for all those many thumbnails at various sizes. I believe I am seeing a few more animated GIF thumbnails on category pages each day. --Timeshifter (talk) 20:52, 8 April 2010 (UTC)[reply]

Documentation

Can someone who knows the exact details update Wikipedia:Images#Consideration of image download size and any other relevant documentation? Thanks. OrangeDog (τ • ε) 11:58, 9 April 2010 (UTC)[reply]

about that search-box suggestor

Suppose an article's title is Föøbåŕ and you search for Foobar. The search-box will “suggest” Föøbåŕ for you, sure—but it is useless to determine whether Foobar exists as a redirect, or not at all. I know this because I typed Loys Delteil in the search-box and it suggested Loÿs Delteil, which is cool enough, but see… I first thought it determined this rough equivalence by resolving a redirect, but after striking the “enter” key I found that none existed.

Note: for the sake of discussion, refrain from creating that redirect quite yet.

I referred then to redirects which I knew existed (such as Gdansk → Gdańsk) and the results were indistinguishable from the one above. It seems the only case in which the titles appearing below the search-box provide clear information about multiple page-names which decompose to the same base-letters is when they exist separately as proper articles (e.g. Socrates and Sócrates, not that that's a terrific idea either).

This and the way the titles tend to defy alphabetical order (and gravity) have me thinking about creating sort of a low-tech replacement gadget which behaves in a predictable fashion, so let me know if somebody doesn't have a working version already. ―AoV² 12:11, 6 April 2010 (UTC)[reply]

You can just use the address bar... OrangeDog (τε) 20:16, 6 April 2010 (UTC)[reply]
The suggestions were created to be most useful for average users, to detect the most likely thing they want to type in. Thus, as you noticed, they are not very well suited for finding redirects or searching the full alphabetical listing. You can use Special:Allpages and Special:PrefixIndex for that (redirects are in italics). Before this search suggest was implemented, a couple of tools existed that used those; one of them was User:Zocky/AutoComplete.js. --rainman (talk) 00:27, 7 April 2010 (UTC)[reply]

Here I came up with something short enough to be comprehensible: User:AoV2/suggestor.js. I suppose the key differences are that it uses Prefixindex, it sorts in alphabetical order, and does not float. Let me know if anyone else has a use for non-fuzzy lookup. ―AoV² 05:39, 9 April 2010 (UTC)[reply]
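For comparison, the non-fuzzy, alphabetical lookup that Special:PrefixIndex does can also be queried through the API (list=allpages with apprefix): a minimal PHP sketch, with the prefix as a placeholder. It assumes allow_url_fopen is enabled; add 'apfilterredir' => 'nonredirects' to hide redirects.

 <?php
 $prefix = 'Loys';   // whatever the user has typed so far
 $url = 'http://en.wikipedia.org/w/api.php?' . http_build_query(array(
     'action'   => 'query',
     'list'     => 'allpages',
     'apprefix' => $prefix,
     'aplimit'  => 10,
     'format'   => 'xml',
 ));
 $xml = simplexml_load_file($url);
 foreach ($xml->query->allpages->p as $page) {
     echo $page['title'], "\n";   // exact-prefix matches, in order
 }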

Namespace templates

I was wondering if a template similar to commons:Template:Namespace and commons:Template:Namespaces could be useful here. I recently noted that {{Uncategorized}} still uses a #switch function for this, so it might be handy to use a template there. After searching for templates, I only found {{Pagetype}} (which is however not the same), so such a template doesn't appear to have been created yet. --The Evil IP address (talk) 16:45, 6 April 2010 (UTC)[reply]

All those templates do is provide a slightly different output than the magicword {{NAMESPACE}}... The parserFunction would still be required. –xenotalk 16:49, 6 April 2010 (UTC)[reply]
{{pagetype}} fulfils essentially the same function; because of its different history (spun out of {{WPBannerMeta}}) it has a slightly different syntax and default, but it is still used and usable in templates; see for instance {{db-meta}}. While the behaviour of pagetype could probably be tweaked to be more useful in such cases, I'm not sure if adding another parallel template would be helpful. Happymelon 19:40, 6 April 2010 (UTC)[reply]
We already have loads of templates that perform near-identical functions. What's the harm in adding more? :P I just noticed {{pagetype}} doesn't have any categories. What would be appropriate here, Category:Namespace manipulation templates perhaps? Reach Out to the Truth 01:38, 7 April 2010 (UTC)[reply]
Because you end up with a mass of poorly-maintained, poorly-documented and hard-to-find templates. If at all possible, add parameters to existing templates instead. OrangeDog (τ • ε) 22:46, 8 April 2010 (UTC)[reply]

Duplicate section names issue

I'm working on an article with duplicate subsection names, where for example I have "Middle East" under "History" and "Middle East" under "Current status." When editing the latter section, after I am finished editing and save my changes, I am returned instead to the first section! —Khin2718 18:43, 6 April 2010 (UTC)[reply]

Probably a known issue and only a problem when using section links from the watchlist or the return-to magic. The table of contents will appropriately append a "_2" to the second section when clicking from there. –xenotalk 18:45, 6 April 2010 (UTC)[reply]
Nevertheless, this should be avoided wherever possible, as it means that anchored links from other articles may go to the wrong section. Consider retitling one or both headers to avoid this. Chris Cunningham (not at work) - talk 11:04, 7 April 2010 (UTC)[reply]

Gadgets not working

Some of the gadgets that can be enabled in Special:Preferences do not appear to be currently working, such as the UTC clock in the upper right corner and the [edit] button to edit the lead section. I am running the Vector skin with Firefox 3.6 on Mac OS X 10.6.2. NERDYSCIENCEDUDE (✉ messagechanges) 22:30, 6 April 2010 (UTC)[reply]

Also, Twinkle and Friendly aren't showing up underneath the down triangle when I hover over it. NERDYSCIENCEDUDE (✉ messagechanges) 22:38, 6 April 2010 (UTC)[reply]
Huh, now it's working again. NERDYSCIENCEDUDE (✉ messagechanges) 23:33, 6 April 2010 (UTC)[reply]

Bots and Logging In

(Crosspost from WP:BON): Please note, about 10-15 minutes ago there was a security change to the login system (which is now live on all WMF wikis), which will <s>break</s> change the current login systems (API users included). For more information see: bugzilla:23076. Peachey88 (Talk Page · Contribs) 00:43, 7 April 2010 (UTC)[reply]

This is causing AutoWikiBrowser not to work. --Auntof6 (talk) 01:19, 7 April 2010 (UTC)[reply]
AWB will need to be updated. — Carl (CBM · talk) 01:31, 7 April 2010 (UTC)[reply]
Fix was already pre-prepared for this. Just need to get my desktop machine back up online and I'll get AWB sorted. Reedy 08:20, 7 April 2010 (UTC)[reply]
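For other bot owners patching their own code: as I read the bug report, action=login now replies NeedToken on the first attempt and has to be repeated with the token it hands back. A rough PHP sketch of that two-step flow (credentials and file names are placeholders):

 <?php
 $api  = 'http://en.wikipedia.org/w/api.php';
 $post = array('action' => 'login', 'lgname' => 'ExampleBot',
               'lgpassword' => 'secret', 'format' => 'xml');

 function apiPost($api, $post) {
     $ch = curl_init($api);
     curl_setopt($ch, CURLOPT_POST, true);
     curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($post));
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
     curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');    // the session cookie
     curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt');   // must persist between calls
     $xml = simplexml_load_string(curl_exec($ch));
     curl_close($ch);
     return $xml;
 }

 $r = apiPost($api, $post);
 if ((string) $r->login['result'] === 'NeedToken') {
     $post['lgtoken'] = (string) $r->login['token'];   // retry with the token
     $r = apiPost($api, $post);
 }
 echo $r->login['result'];   // "Success" once the new handshake works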

Wiki-markup included in internal searches

Does anyone know why wiki-markup is included in internal searches? When I search for "Gray]]'s ''Elegy"; "Gray's Elegy"; "Gray's]] ''Elegy"; "Gray's [[Elegy"; and so on, I get different results. Which for some purposes is good, but for others is extremely annoying. Anyone know how to make the search engine a bit more flexible when wiki-markup is tangled up with the search terms? Carcharoth (talk) 01:09, 7 April 2010 (UTC)[reply]

Changing this without an option would make it very difficult to find particular instances of markup, for example if using AWB. OrangeDog (τ • ε) 12:29, 7 April 2010 (UTC)[reply]
I thought the internal search engine was primarily for the use of readers searching for something, which should take priority over editors searching for wiki-markup problems. If a reader is searching for "SEARCH TERM" (using quote marks to search for an exact phrase), then the search engine should be searching the text as it is seen on the page, not letting wiki-markup such as the square brackets and piping symbols mess up the search. But as you say, it is probably difficult to change. I just thought it was strange that no-one seems to have realised this before. The end result is that some searches are not finding all examples of what the person is searching for, and a Google-search will find those instances that the internal search engine misses. Carcharoth (talk) 23:29, 7 April 2010 (UTC)[reply]
That's why I said to leave an option to turn the old behaviour back on per search. OrangeDog (τ • ε) 11:45, 9 April 2010 (UTC)[reply]
This is a byproduct of a search query parsing bug; internal search ignores wiki markup. --rainman (talk) 15:58, 8 April 2010 (UTC)[reply]

Import XML dump

I am looking for a mechanism to apply edits to pages from a Special:Export XML dump file. What I am looking for is not the same as Special:Import, which is oriented toward importing pages with revision histories from other wikis. What I want is to take the page text from the XML dump for each page and apply it as an edit to the page it was created from.

The use case I have in mind is as follows:

  • Use Special:Export to download a number of pages (most recent revision only) as an XML file.
  • Modify the text of the pages in the XML file, using whatever tools are appropriate.
  • Apply the changes back to the Wiki from the modified XML files.

Is such a tool available? -- JPMcGrath (talk) 12:43, 7 April 2010 (UTC)[reply]

I don't think it is. Why do you want to do it exactly this way? Isn't the API sufficient? Or using AWB? In the case of AWB, you can either write a module (in C# or VB.NET) or use some external tool directly. Svick (talk) 16:29, 7 April 2010 (UTC)[reply]
I have made some systematic changes to large numbers of articles with AWB, using an external AWK script to perform the changes. In order to test the scripts, I created a wrapper that processes an Export dump, feeding each page to the script. By the time I actually performed the changes with AWB, I had run all the pages through the process multiple times and created diff files and other reports of the changes to ensure they were correct.
After all of the testing and verification, I had to run the changes through AWB, which took several hours when I recently made a change on 559 pages. I had to look at each individual change and click "save" for each one. If you have ever done that, you know how mind-numbing it can be, and how easy it is to not really examine each change. And the result is somewhat less secure, since if the page had been changed since I created the last dump, I would not know it.
A dump-upload tool as I conceive it would be more efficient and more secure. It would verify that the page had not been modified since the dump was made, so there is no possibility that new changes would create a situation that was not planned for.
From the above, you can see that AWB does not satisfy the need. I think that the API is exactly what would be needed to implement the tool. It looks like it would be rather simple, assuming some of the available libraries work as well as it looks like they should.
Although this does not sound like the traditional view of a bot, in that the changes are all reviewed by a human before they are made, it still would use the same access methods that a bot uses. It also would be a potent tool, if someone chose to abuse it, so I am thinking it would make sense to bring the idea to WP:Bag before writing such a tool. Does that make sense to you?
-- JPMcGrath (talk) 19:10, 7 April 2010 (UTC)[reply]
I think AWB is still the easiest way. I don't know how you loaded your changed articles into AWB (copy-paste?), but AWB can be quite easily customized, so it can, for example, load the article text from an XML file, check the date of the last revision (if it doesn't match the one in the XML, skip and log it) and save it. Automatic saving is enabled only if you are approved as a bot in AWB. I (or possibly someone else on WT:AWB) can help you if you don't know how to program in .NET. I don't have experience with BAG, but their approval shouldn't be necessary if you actually review each change and if you don't do any mass changes. Svick (talk) 21:55, 7 April 2010 (UTC)[reply]
Why do you think AWB would be easier? I don't really see much there that would make it simpler than using a bot library and the API calls. What specific advantages would outweigh the additional costs and design constraints that come from working within a framework such as AWB? -- JPMcGrath (talk) 22:28, 8 April 2010 (UTC)[reply]
If you want to use your own code, you would have to actually write the code that communicates with the API and parses the answers. It's nothing difficult, but I think using already written code saves you time. And you can either use AWB directly with a custom module or write your own application using classes from WikiFunctions.dll. Svick (talk) 22:55, 8 April 2010 (UTC)[reply]
Most non-ancient bot frameworks have some method for using the API. There is a list of the various frameworks in different languages. Mr.Z-man 23:08, 8 April 2010 (UTC)[reply]
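For readers curious what such a dump-upload tool might look like, here is a minimal sketch, assuming Node.js 18+ (for its global fetch) and glossing over login and token handling; 'EDIT_TOKEN', the edit summary, and the regex-based XML handling are placeholders rather than parts of any existing tool:

// Minimal sketch of a dump-upload tool: read a Special:Export XML dump and
// push each page's text back through api.php. A real tool would log in,
// fetch a proper edit token, use a streaming XML parser, and unescape XML
// entities in the extracted text.
const fs = require('fs');
const API = 'https://en.wikipedia.org/w/api.php';

async function main(dumpFile) {
  const dump = fs.readFileSync(dumpFile, 'utf8');
  for (const page of dump.match(/<page>[\s\S]*?<\/page>/g) || []) {
    const title = page.match(/<title>([\s\S]*?)<\/title>/)[1];
    const timestamp = page.match(/<timestamp>([\s\S]*?)<\/timestamp>/)[1];
    const text = page.match(/<text[^>]*>([\s\S]*?)<\/text>/)[1];
    const res = await fetch(API, {
      method: 'POST',
      body: new URLSearchParams({
        action: 'edit', format: 'json', title, text,
        // basetimestamp lets the API detect an edit conflict if the page
        // changed after the dump was taken; that is the safety check wanted.
        basetimestamp: timestamp,
        summary: 'Apply reviewed change from offline dump',
        token: 'EDIT_TOKEN',
      }),
    });
    const result = await res.json();
    if (result.error) console.log('Skipped ' + title + ': ' + result.error.code);
  }
}

main(process.argv[2]).catch(console.error);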

Spaces in URL?

The Official Charts Company finally released an archive, but it's got a horrible technical glitch: it actually expects space characters as part of the URL, and will not allow "_" or "+" to be used instead. That means we can't create links, because [http://www.theofficialcharts.com/artist/_/Cheryl Cole] is parsed as a link labelled "Cole" pointing to http://www.theofficialcharts.com/artist/_/Cheryl. I can get around this in {{singlechart}} by skipping the urlencode function and requiring the user to pass in "%20", but that's pretty ugly, and it makes the URL title display with the %20 as well. Test version is at User:Kww/singlechart, with the test driver at User talk:Kww/singlechart. Can anyone figure out a way to do this?—Kww(talk) 21:09, 7 April 2010 (UTC)[reply]

Characters can be unsafe for a number of reasons... The space character is unsafe because significant spaces may disappear and insignificant spaces may be introduced when URLs are transcribed or typeset or subjected to the treatment of word-processing programs... All unsafe characters must always be encoded within a URL.
I don't believe there is currently a solution other than the method you suggest. {{urlencode}} and friends were not designed with such a high level of abject stupidity in mind... :P Happymelon 21:32, 7 April 2010 (UTC)[reply]
Is there at least a way to change "Cheryl%20Cole" back to "Cheryl Cole"? I can make things work (albeit kludgily) if I can accomplish that.—Kww(talk) 21:56, 7 April 2010 (UTC)[reply]
You could use one parameter for each part of the name, something like {{chart template|Cheryl|Cole}} and in the template [http://url.to.charts/{{{1}}}%20{{{2}}} {{{1}}} {{{2}}}]. Svick (talk) 22:11, 7 April 2010 (UTC)[reply]
Hard to make general purpose: What works for "Cheryl Cole" won't work for "Bonzo Dog Doo-Dah Band", and the game is over by the time you get to "The Ogden Edesl Wahalia Blues Ensemble and Mondo Bizzario Band" (who fortunately never had a hit in the UK).—Kww(talk) 22:15, 7 April 2010 (UTC)[reply]
(ec) How about emailing them or filling in the feedback form on the site? It's a new design of the site; they may not yet know of the technical problems spaces in URLs create. It also may get fixed in the coming days. Regards, SunCreator (talk) 22:17, 7 April 2010 (UTC)[reply]
I've already done that, but I'm not holding my breath. I've gone through similar issues with Stefan Hung, and, while he is sympathetic, I still can't create a link to Beyoncé Knowles on his site because of non-standard URL formatting issues.—Kww(talk) 22:23, 7 April 2010 (UTC)[reply]
As initially described, this would be easy if {{#replace:{{#urlencode:{{{1}}}}}|+|%20}} were allowed. It isn't, though; a proposal to enable StringFunctions was rejected since ParserFunctions were not intended to become a general-purpose programming language. Apparently there has been some thought of using Lua as a more flexible template programming language, although that would require a compiled Lua binary or PHP extension to work. PleaseStand (talk) 23:12, 7 April 2010 (UTC)[reply]
bugzilla:22474 --MZMcBride (talk) 02:36, 8 April 2010 (UTC)[reply]
Thanks for the information about that addition to the urlencode function that allows using PHP's rawurlencode function instead of urlencode. Of course it hasn't made it to the live site yet; this change is only 2.5 hours old as I am writing this message. PleaseStand (talk) 02:57, 8 April 2010 (UTC)[reply]
This works fine... Cheryl Cole at theofficialcharts.com. Why would you ever want a bare url in an article? OrangeDog (τ • ε) 11:33, 8 April 2010 (UTC)[reply]
The problem is that {{singlechart}} can't generate that for the UK in the same way that it does for other countries, because there's no way to pass in the "artist" parameter and generate "Cheryl%20Cole" for the URL and "CHERYL COLE - The Official Charts Company" for the page title. The documentation at {{singlechart}} can be a little intimidating, but taking a look at Beautiful (Christina Aguilera song)#Charts and certifications makes how it works for other archives pretty obvious, and I want to add support for this one.—Kww(talk) 14:53, 8 April 2010 (UTC)[reply]

Is rev:64726 what you were looking for? Once that is live here, {{urlencode:Cheryl Cole|PATH}} should result in Cheryl%20Cole. Reach Out to the Truth 19:29, 8 April 2010 (UTC)[reply]

It will certainly work for my purposes. How can I find out a timeline for actual implementation?—Kww(talk) 22:54, 8 April 2010 (UTC)[reply]
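For anyone following along, the distinction between the two encodings can be checked in any browser console; this snippet is purely illustrative and not part of {{singlechart}}:

// Space encoding, path-style versus form-style. JavaScript's
// encodeURIComponent() handles spaces like PHP's rawurlencode(),
// while PHP's urlencode() (the old {{urlencode:}} behaviour) uses "+".
var artist = 'Cheryl Cole';
console.log(encodeURIComponent(artist));  // "Cheryl%20Cole": what this site needs
console.log(artist.replace(/ /g, '+'));   // "Cheryl+Cole": form-style encoding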

api watchlist in json format

Despite being quite firmly logged-in, I′m getting the message “foo({"error":{"code":"wlnotloggedin","info":"You must be logged-in to have a watchlist"}})” (but only when I nominate a call-back function). I thought this might be for security reasons, but noticed it still fails after I provide appropriate values for wluser and wltoken. So, what′s the trick? ―AoV² 06:58, 8 April 2010 (UTC)[reply]

"callback - If specified, wraps the output into a given function call. For safety, all user-specific data will be restricted."
You cannot use callback for anything but anonymous actions. This is because it would open huge XSS security flaws: with callback, you can load a remote api call as a <script src="remote link"> that executes on load (letting malicious sites send api actions as you, via your browser). The way around it is to get the page via AJAX (which typically cannot send requests across domains), and then stick the responseText into an eval(), eg: eval('callback(' + foo.responseText + ')');. --Splarka (rant) 08:03, 8 April 2010 (UTC)[reply]
PS: It does work for me with wltoken and wlowner: api.php?action=query&format=json&callback=foo&list=watchlist&wlowner=Splarka&wltoken=XXXXXXXXXXXX gives foo({"query":{"watchlist":[{"pageid":3252662,"revid":354706456,"ns":4,"title":"Wikipedia:Village pump (technical)"}... --Splarka (rant) 08:10, 8 April 2010 (UTC)[reply]

Okay, got that to work but… I don′t suppose there′s any quick way to set the same watchlist token on all wikis? See, I was going to try making something which combines watchlists from several projects onto the same page. ―AoV² 08:44, 8 April 2010 (UTC)[reply]
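For reference, a bare-bones version of the same-origin AJAX approach Splarka describes might look like this (per wiki only; combining several projects on one page still runs into the cross-domain restriction unless wlowner/wltoken are used):

// Fetch the watchlist as JSON over same-origin AJAX; the login cookie
// authenticates the request, so no callback parameter is needed.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/w/api.php?action=query&format=json&list=watchlist', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // JSON.parse is safer than the eval() fallback where it is available.
    var data = JSON.parse(xhr.responseText);
    var items = data.query.watchlist, titles = [];
    for (var i = 0; i < items.length; i++) titles.push(items[i].title);
    alert(titles.join('\n'));
  }
};
xhr.send();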

rel and class attributes for microformats

In order to improve our emitted microformats, it would be useful to be able to add rel attributes to internal (including File:) and external links, and class attributes to links and images; none of which are possible at the moment. Where's the best place to raise this issue, and how should it best be approached? Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 09:36, 8 April 2010 (UTC)[reply]

For the time being, both of these requests can be supported by cooking up inline templates for the purpose. In the long run, the [[file:]] container is extensible, so it would presumably be easy enough to add class= support. Internal links are another matter; possibly by altering the behaviour when a link is piped not once but twice? Currently the output inserts the last two "arguments" with a literal pipe between them, as piping|piper shows. Chris Cunningham (not at work) - talk 09:49, 8 April 2010 (UTC)[reply]
Thanks. Can you give an example of such an inline template, or how one would work, please? Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 12:05, 8 April 2010 (UTC)[reply]

Exposing the rel attribute would be foolhardy to any extent which allows negating nofollow.

But perhaps the html-tidy or whate′er can be configured to render:

<span class="foo" style="font-size:larger;">[[Bar]]</span>

as:

<a href="/wiki/Bar" title="Bar" class="foo" style="font-size:larger;">Bar</a>

It already does something similar with <font> tags when the link is the only member (actually it turns them inside-out). However, removing the extra tag-layer would cure the common problem where the link looks like crap because its text-color and underline-color do not match. This might be the most versatile approach short of allowing arbitrary <a> tags. ―AoV² 10:29, 8 April 2010 (UTC)[reply]

Presumably the parser can be modified to insert a default class/rel into the <a> element when generating the html, use of html tidy is not required and would make things more confusing. Alternatively, some javascript could be used to modify all the relevant elements when the page loads, or a new php module that does the same on server-side. Comment by User:OrangeDog
the rel or class values needed vary (from a limited set, for microformats) depending on the circumstances. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 12:02, 8 April 2010 (UTC)[reply]
There were long discussions about support for these kind of things in January on the wikitech list. The discussion is far from concluded :D —TheDJ (talkcontribs) 13:29, 8 April 2010 (UTC)[reply]
Thanks; but that appears to be a discussion of RDFa and microdata (similar to, but not the same as, microformats), and not of rel and class attributes. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 14:19, 8 April 2010 (UTC).[reply]
True, but it mostly comes from the idea to implement rel-license. (CC uses rel-license + RDFa). There are likely more related bugzilla tickets, but this is one of them. —TheDJ (talkcontribs) 14:38, 8 April 2010 (UTC)[reply]
It would certainly be possible for certain values to be prohibited/ stripped-out on rendering. Your surrounding-span idea is clever. 'Nofollow' is evil, as used on Wikipedia, BTW. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 12:00, 8 April 2010 (UTC)[reply]

There was an attempt a while ago to allow direct insertion of <a> tags. I don't know what happened to that; Tim Starling didn't like the implementation and it seems not to work on trunk, so I guess it was either reverted, or turned off by default. For images, we could pretty easily add new syntax to image inclusions, like we did for alt and title, although that code is a complete mess last I checked.

Somehow it's not surprising to me that this is being requested by someone who uses a vCard as a signature. :P —Aryeh Gregor (talk • contribs) 16:49, 8 April 2010 (UTC)[reply]
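As a rough illustration of the client-side route OrangeDog suggests above, a user script could decorate links after the page loads; the selector and the rel value here are only examples, not an agreed microformat mapping:

// Append a rel value to every external link in the content area.
// addOnloadHook() is provided by MediaWiki's wikibits.js.
addOnloadHook(function () {
  var content = document.getElementById('bodyContent');
  if (!content) return;
  var links = content.getElementsByTagName('a');
  for (var i = 0; i < links.length; i++) {
    if (/\bexternal\b/.test(links[i].className)) {
      // 'license' is an example; real output would vary the value from
      // the limited set the microformat in question defines.
      links[i].rel = (links[i].rel ? links[i].rel + ' ' : '') + 'license';
    }
  }
});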

RfA votes and automated edit counts

Are these tools still available? Jeffrey Mall (talkcontribs) - 12:36, 8 April 2010 (UTC)[reply]

These ones? Chris Cunningham (not at work) - talk 21:06, 8 April 2010 (UTC)[reply]
Ahh I found them thanks! Jeffrey Mall (talkcontribs) - 23:20, 8 April 2010 (UTC)[reply]

iPad editing?

Is there any way to make it possible to fully edit Wikipedia articles on an iPad? The problem I run into is that there is no way to scroll within the editing text box, since the iPad web browser lacks scroll bars. --agr (talk) 21:29, 8 April 2010 (UTC)[reply]

Use two fingers to scroll. –xenotalk 21:30, 8 April 2010 (UTC) (that being said, my kingdom for a decent app to edit Wikipedia)[reply]
Yes, but will it blend? --Deskana (talk) 21:38, 8 April 2010 (UTC)[reply]
He already blended an iPad? The horror... –xenotalk 21:41, 8 April 2010 (UTC)[reply]
Oh, they'll never approve an app that's better than their own browser. If you've ever tried developing for iPhone you'll know how evil Apple are. OrangeDog (τ • ε) 22:52, 8 April 2010 (UTC)[reply]
If the iPad's Safari is more like the Mac version than the iPhone one, then you can use a script to expand the editing box to show the entire text so that there is no need to scroll. Gary King (talk) 00:34, 9 April 2010 (UTC)[reply]
Wikipedia is not iPhone/iPad friendly :(. Big pages, disjointed conversations requiring multiple tabs open, and big sections which are difficult to edit. Regards, SunCreator (talk) 00:45, 9 April 2010 (UTC)[reply]

Check out their latest device [1]. ―AoV² 01:02, 10 April 2010 (UTC)[reply]
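A crude version of the kind of script Gary King mentions might be the following (untested on an iPad; wpTextbox1 is the id of the standard edit box):

// Grow the edit box to roughly fit its content, so the page scrolls
// instead of the textarea. Long wrapped lines will make this undercount.
addOnloadHook(function () {
  var box = document.getElementById('wpTextbox1');
  if (box) box.rows = box.value.split('\n').length + 5;
});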

extra space

In {{Infobox_NRHP}}, the following syntax forms the first two lines:

{{#ifeq:{{{embed|}}}|yes|</td></tr><tr><td colspan=20>}}
{| class="infobox vcard" style="{{#ifeq:{{{embed|}}}|yes|width:100%; border:0; margin:0; background:transparent|width:250px; font-size:90%}}"

The first line is markup for collapsing/embedding (or something like that); the second line is the start of a table.

Is this the reason [2] has that line of white space after the hatnote? I've seen this in many articles. If this is the case, can we file a bot report to fix this in templates? If not, the templates will have to be fixed manually. 174.3.123.220 (talk) 01:20, 9 April 2010 (UTC)[reply]

One might fix it by removing the line-break, or surrounding it with comment tags, etc.:
{{#ifeq:{{{embed|}}}|yes|</td></tr><tr><td colspan=20>}}<!--
-->{| class="infobox vcard" …
Of course the template is protected so getting “consensus” for this edit could take weeks. In any case this is a sloppy way to “embed” multiple infoboxes. ―AoV² 01:59, 9 April 2010 (UTC)[reply]
Oh, ok, this makes sense. Yea this is a big problem with a lot of info boxes:-).174.3.123.220 (talk) 02:13, 9 April 2010 (UTC)[reply]
Actually this wouldn't work, because the {| has to be on a new line (and newlines in comments are ignored). The following code should work:
{{#ifeq:yes|yes|end_table
{{{!}}|{{{!}} }} class="infobox vcard"
Svick (talk) 13:34, 9 April 2010 (UTC)[reply]
Converting the template to use {{Infobox}} would do the trick as well. ---— Gadget850 (Ed) talk 14:38, 9 April 2010 (UTC)[reply]

wildcards for template subcategories

Wildcards work with User:DASHBot/Wikiprojects. The bot lists articles in a wikiproject by category, for example:

Category:* Germany articles

I am using User:DASHBot/Wikiprojects/Templates which creates a list by template.

Do wildcards work for a bot to list certain subcategories in a template?

For example, to have the User:DASHBot/Wikiprojects/Templates bot list all templates which have a&e in them, like:

{{WPBiography|living=yes|class=stub|a&e-work-group=yes|needs-infobox=yes|listas=108}} 

could I use

Template:WPBiography*a&e*?

If this question does not make sense, here is a more detailed explanation. Okip 01:24, 9 April 2010 (UTC)[reply]

Help with Penthouse Pets category

The category for Category:Penthouse Pets has a bunch of navigation templates. There is a subcat to contain those (Category:Penthouse Pets navigational boxes) and the entries also appear there. I went to remove the templates from the super cat but cannot figure out why they are there in the first place. The templates (e.g., Template:Penthouse_Pets_of_1969) only seem to use "noinclude" for the subcat and not for the supercat, and I don't understand why it's being included in the super cat. Could somebody remedy my confusion? Jason Quinn (talk) 02:07, 9 April 2010 (UTC)[reply]

I partly figured it out... has to do with the "includeonly" tag in Template:Penthouse Pets. It's really late and I've been editing for a while now and I'm not thinking straight about the differences at the moment. Jason Quinn (talk) 02:19, 9 April 2010 (UTC)[reply]

Your best bet might be something like:

{{#switch:{{NAMESPACE}}
| {{ns:10}} = [[Category:Penthouse Pets navigational boxes]]
| {{ns:0}} = [[Category:Penthouse Pets]]
}}

The variables ns:10 and ns:0 will expand to “Template” and the null string respectively, but may differ on other language-projects which may wish to copy this interface. This would also ensure that no category appears when you invoke the template on a talk-page for demonstration purposes. ―AoV² 05:52, 9 April 2010 (UTC)[reply]

"diff" and "history" flipped in contributions lists

This seems to have happened in the past few hours... when I list contributions, the "hist" (history) and "diff" (difference between edits) buttons in the list are now reversed and appear as "diff" and "hist". Was there a change, and if so, why? --Ckatzchatspy 09:23, 9 April 2010 (UTC)[reply]

See bug 2971. --Catrope (talk) 09:51, 9 April 2010 (UTC)[reply]
This is related to the fact that last night, Wikipedia was switched to MediaWiki 1.16 pre-release. It seems to have gone without any major issues. The list of changes is here, though much of it had already been deployed on Wikipedia for a long time. Still, this is the first major release since May 2009, and hopefully the software will be able to return to a more regular update pattern now. —TheDJ (talkcontribs) 13:14, 9 April 2010 (UTC)[reply]
Aha! Then I think I have stumbled over another bug/change/wrinkle in this release. Ajax rollback has stopped working for me. Has there been a change to the API in the area of rollback or of getting the rollback token? Philip Trueman (talk) 13:19, 9 April 2010 (UTC)[reply]
What are you using for AJAX rollback ? —TheDJ (talkcontribs) 13:48, 9 April 2010 (UTC)[reply]
I'm using PILT. Take a look at User:Philip Trueman/recent2.js and search for the function "recent2.tryFastAdminRollback". Philip Trueman (talk) 13:59, 9 April 2010 (UTC)[reply]
My JavaScript API library keeps returning a badtoken. I assume something related to verifying these tokens has changed, because I haven't changed how I encode them. I'll look into it. Hmmm... Ale_Jrbtalk 16:06, 9 April 2010 (UTC)[reply]
Cluebot, which uses API rollback, has stopped reverting but is still logged in and editing. Related? Can anyone get it to work? Ale_Jrbtalk 16:36, 9 April 2010 (UTC)[reply]
API rollback doesn't work. Reported at bugzilla. Incidentally, if it turns out to not be broken, and someone gets it to work, please tell me how. :D Ale_Jrbtalk 16:43, 9 April 2010 (UTC)[reply]
It seems to be fixed for me now. Philip Trueman (talk) 17:23, 9 April 2010 (UTC)[reply]
If I may, what does "The user groups ACL system was improved by allowing rights to be revoked, instead of just granted" mean; rather, what is the "ACL system"? ~ Amory (utc) 15:10, 9 April 2010 (UTC)[reply]
Just a guess, but probably related to meta:Access control. –xenotalk 15:17, 9 April 2010 (UTC)[reply]
That's annoying =( –xenotalk 13:55, 9 April 2010 (UTC)[reply]
gah, it is, I thought I just kept slipping and clicking diff instead of history. Is this going to be fixed, or could someone please write a .js fix for it before I get used to it?--Jac16888Talk 16:08, 9 April 2010 (UTC)[reply]
I think a .js fix would slow things down way too much (especially for those of us who have our settings at 500), but I could be wrong. I'll just bite the bullet, but I fear change. –xenotalk 16:09, 9 April 2010 (UTC)[reply]
I meant a personal, vector/monobook.js fix--Jac16888Talk 16:12, 9 April 2010 (UTC)[reply]
I know, I still suspect this would lag things too much while the .js re-writes the screen. Could be wrong tho. –xenotalk 17:26, 9 April 2010 (UTC)[reply]
Diff/hist links

Hello, have the diff/hist links on contribs/watchlists been switched lately? I keep clicking on hist instead of diff, presumably out of habit, which indicates a change. Aiken 17:32, 9 April 2010 (UTC)[reply]

Look up--Jac16888Talk 17:35, 9 April 2010 (UTC)[reply]
Could they be flipped as a gadget in preferences, or would that be the same as a .js fix? TransUtopian (talk) 20:49, 9 April 2010 (UTC)[reply]
Can someone please make a js fix for monobook/vector (I plan to continue using monobook)? It might be slow for people with settings at 500, but it would be a partial workaround for the moment. TransUtopian (talk) 00:54, 10 April 2010 (UTC)[reply]

table of contents – hide the bullet numbers

Is there a (simple) way to hide the numbers before every bullet? The best solution would be a __NONUMBERSTOC__ or something similar, in the style used here. Thank you very much, Hæggis (talk) 13:52, 9 April 2010 (UTC)[reply]

Do you want to do it for yourself on all pages or for everyone on particular page(s)? If it's the former, you can add the following code to your user CSS:
.tocnumber { display: none }
Svick (talk) 20:47, 9 April 2010 (UTC)[reply]
Yeah, that´s the thing: I want it for all users, just for a WP-page, not for an article. The bullets on this page are themselves numbers, so the TOC numbers in front of them make the whole table of contents confusing & the typography very ugly. But thanks for the CSS-code ;-) --Hæggis (talk) 08:43, 10 April 2010 (UTC)[reply]

MediaWiki:Common.css addresses this with .nonumtoc .tocnumber { display: none; }, so something like <div class="nonumtoc">__TOC__</div> will display a table of contents with no numbers. Perhaps a template exists containing exactly this code but I couldn′t find it. ―AoV² 10:13, 10 April 2010 (UTC)[reply]

Chronology of importScript() and OnloadHook

Does anyone know if addOnloadHook() waits for all of the importScript()s to load before it executes? In other words, if I import a script, and then add an onload hook which uses the script, can I be perfectly sure to have the script on hand when the hook runs? Additionally, does this apply for nested imports (A imports B, and B imports C)? The structure of my js is:

  • Monobook imports A.js
  • A.js imports B.js and C.js
  • B.js adds an onload hook which requires C.js
    • (Actually, in my case, it's much more complicated: Vector.js imports monobook.js, which imports X.js, which imports Y.js, which then imports A.js)

Thanks, ManishEarthTalkStalk 15:29, 9 April 2010 (UTC)[reply]

This depends on the browser. Safari, for example, can't cope with this at all - if you try to nest imports, they won't run (except the first one) in most circumstances. Firefox, Chrome and IE are better and will usually manage fine. However, imagine your first script imports a second one, and the second one uses onload to import a third. The third will only be imported once the page has finished loading, which means that onloads in the third script often won't run (though they sometimes seem to; I don't know why). The trick is to replace any code in that third script that runs onload with a direct call, because the page will already have loaded, so it works as expected. Ale_Jrbtalk 15:54, 9 April 2010 (UTC)[reply]
None of the scripts are imported thru the hook. A imports B and C. C contains a variable, which is wanted by B after load.
//A.js
importScript('B.js')
importScript('C.js')

//B.js
addOnloadHook(function(){
alert(fromC)
})

//C.js
var fromC="A variable in C.js"

ManishEarthTalkStalk 03:34, 10 April 2010 (UTC)[reply]

I′m pretty sure B.js would need to contain importScript('C.js'); for this to work reliably. The importScript function is an abstract way to append the following tags to the <head> element:

<script src="/w/index.php?title=B.js&action=raw&ctype=text/javascript" type="text/javascript"></script>
<script src="/w/index.php?title=C.js&action=raw&ctype=text/javascript" type="text/javascript"></script>

However it checks each one against an array of previously imported urls to prevent multiple loading (in a manner vaguely similar to #include guards in C/C++), ensuring that the second call to importScript('C.js'); does nothing. ―AoV² 04:28, 10 April 2010 (UTC)[reply]
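A simplified sketch of that behaviour (from memory, not the exact wikibits.js source) is:

// Simplified model of importScript() and its include guard.
var importedScripts = {};
function importScript(page) {
  var url = '/w/index.php?title=' + encodeURIComponent(page.replace(/ /g, '_'))
          + '&action=raw&ctype=text/javascript';
  if (importedScripts[url]) return;   // repeat imports of the same page do nothing
  importedScripts[url] = true;
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = url;
  document.getElementsByTagName('head')[0].appendChild(s);
}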

Images no longer opaque?

I don't know if this is a new "feature" or a bug, but any new TeX image created is no longer opaque. For example, if a previously created TeX image is at all modified (thus creating a new upload.wikimedia.org image), the background becomes transparent. While most people, who use the browser default white background, won't notice anything different, others who use a colored background (such as black) will: the image is only a half-visible mash. If this is a new "feature", is there a way to disable it in user settings? ~Kaimbridge~ (talk) 15:38, 9 April 2010 (UTC)[reply]

This is a feature (Template:Bug, one of our very oldest). You can style it in your personal CSS:
img.tex { background: white; }
Happymelon 18:47, 9 April 2010 (UTC)[reply]
No, that's no good: you have to have your browser set to "allow pages to choose their own colors"! P=( ~Kaimbridge~ (talk) 23:44, 9 April 2010 (UTC)[reply]
Good job on fixing that, whoever did so. Ninety percent of the time, calling attention to our math-rendering hack is the wrong thing to do. Gavia immer (talk) 22:06, 9 April 2010 (UTC)[reply]

Excellent. We ought to purge the existing images so they re-render in a consistent manner. ―AoV² 00:51, 10 April 2010 (UTC)[reply]

Also could we get some halfway meaningful alt-text like “∫ sec⁸ 𝑥 tan 𝑥 𝑑𝑥”? All I see in lynx in place of the images is the LaTeX source code of “\int \sec^8 x \tan x \,dx” which means nothing to me (see screen-shot). ―AoV² 02:41, 10 April 2010 (UTC)[reply]

Edit conflict behavior broken

It seems that starting today the edit conflict behavior is broken. The upper window contains my version, and the lower window contains the version without any changes made. The diff shows what was changed by the other person, but neither window shows the change. Is anyone else seeing this? Gigs (talk) 17:47, 9 April 2010 (UTC)[reply]

I can't edit

Every time I try to type or press any other key, it takes several seconds before I see what I intended to do.

It's only happening on Wikipedia.Vchimpanzee · talk · contributions · 19:20, 9 April 2010 (UTC)[reply]

Okay, that's weird. It's only happening over there, not here. And part of one of my toolbars didn't show up over there.Vchimpanzee · talk · contributions · 19:21, 9 April 2010 (UTC)[reply]

This may be one for the reference desk. I can't even click on the red X there, or do much of anything. Here, everything seems fine.

Actually, I can click and go other places there, but I can't type. And I can't use the red X. Wait, I can click on some things and get results, but not on others. Vchimpanzee · talk · contributions · 19:28, 9 April 2010 (UTC)[reply]

Now I've tried going to Yahoo email and everything is completely frozen over there. And I can't even click on the rectangle at the bottom of the screen to go back there. Not that I need to. Vchimpanzee · talk · contributions · 19:34, 9 April 2010 (UTC)[reply]

Just so everyone will know, across the bottom of the screen are rectangles with a blue e on the left, with the words "Inbox (167) - Ya...", "Compose Mail -...", Dial-up Conn...", "Village pump (t...", "October 16th ep...", "Hammie's Poetr...", and then a rectangle with a ? in a blue circle followed by "Windows help a..." and finally one with the Norton logo and "Full System Scan". Some of those were added after the problem began.Vchimpanzee · talk · contributions · 19:38, 9 April 2010 (UTC)[reply]

Don't pay attention to the words "Dial-up". The software treats what I have as if it were. My Internet is three times as fast and can be used when I'm on the phone.Vchimpanzee · talk · contributions · 19:40, 9 April 2010 (UTC)[reply]

Each rectangle with a blue e on the left is an instance of Internet Explorer running in its own window. Try to restart your computer and if that doesn't help then clear the entire cache. PrimeHunter (talk) 21:01, 9 April 2010 (UTC)[reply]
In future, assume that problems with your computer are due to problems with your computer, and not Wikipedia. Turn your machine off and on again before even thinking about asking for help. If you do ask somewhere, try reading something like this or even some Windows help files beforehand. OrangeDog (τ • ε) 21:27, 9 April 2010 (UTC)[reply]
I didn't want to lose anything. If it had been everything doing that I would have known. When I turn my computer off, I'll report back if I have any useful information.Vchimpanzee · talk · contributions · 21:41, 9 April 2010 (UTC)[reply]
Okay, no way to know just what happened. I couldn't get anything to happen by clicking on the first rectangle, and the page was completely frozen before that. It also filled the entire computer screen. When I clicked on the red X for the other pages, this screen ("Internet Explorer running in its own window") was only taking up the upper right corner of the screen (display). I clicked on Maximize and everything was working normally.
I realize I frequently use a lot of "Internet Explorer running in its own window", but it never caused this problem.
That one section of the toolbar is still missing. I'll turn off the computer and see if that fixes it.Vchimpanzee · talk · contributions · 22:00, 9 April 2010 (UTC)[reply]

Fixed.Vchimpanzee · talk · contributions · 22:03, 9 April 2010 (UTC)[reply]

Namespace not shown in page title

The namespace prefix is no longer shown in page titles of non-mainspace pages. Trying to edit the page seems to trigger the bug. If this problem is not affecting just me, this could be a significant source of confusion given the number of non-mainspace pages that are created every minute. -- Black Falcon (talk) 19:51, 9 April 2010 (UTC)[reply]

Display of namespaces

(edit conflict)It seems that someone is messing around with the display of namespaces. For example my user page now displays MSGJ instead of User:MSGJ and this page shows Village pump (technical) instead of Wikipedia:Village pump (technical). Has this been discussed anywhere? — Martin (MSGJ · talk) 19:51, 9 April 2010 (UTC)[reply]

I see the same issue when I am in edit mode of a user talk page or an article talk page; the talk-related prefix is missing. Erik (talk) 19:53, 9 April 2010 (UTC)[reply]
It appears to be resolving... Making an edit to a page seems to make the namespace prefix visible again. However, in the edit window, the title no longer reads as "Editing {PAGENAME}"; it is, instead, merely {PAGENAME}. -- Black Falcon (talk) 20:28, 9 April 2010 (UTC)[reply]
Yes. Also, {{DISPLAYTITLE}} no longer works, or so it seems. Ucucha 21:38, 9 April 2010 (UTC)[reply]
Working again now. Ucucha 23:27, 9 April 2010 (UTC)[reply]

Log-on Problem

Recently I created a new account, "Grand Bison", on the secure server, secure.wikimedia.org. I have no trouble logging in on the secure server, but when I try to log in on the regular en.wikipedia.org, it doesn't recognise my account. I don't like having to use the secure server, so how do I get the regular server to let me log on? Note that all I did to create the account was go to the regular page, click on "create a new account", then click "secure server". Grand Bison (talk) 20:35, 9 April 2010 (UTC)[reply]

It's clear from Wikipedia:Help desk#Log In? that your user name is recognized but not the password. Maybe your browser has stored a wrong password at the unsecure url and that password is submitted without your knowledge. Go to http://en.wikipedia.org/wiki/Special:UserLogin and write the password very carefully. If it still doesn't work then try another browser if possible, and say which browser you use. Some browsers with some settings can be tricky about automatically changing what the user writes in certain fields if something else is stored in the browser. PrimeHunter (talk) 20:56, 9 April 2010 (UTC)[reply]

Scripts suddenly borked

Hi; beginning today, all of the scripts installed at User:Steve/monobook.js have stopped working. I haven't changed skins, updated Firefox (v. 3.6.3) or meddled with the page recently; indeed, it appears the same on a different machine using another version of Firefox, so I'm not sure what's happened. I assumed it might be related to the 1.16 update, but I'm not seeing complaints from anyone else, so ... any ideas? Steve T • C 21:52, 9 April 2010 (UTC)[reply]

I can only say that I have the same setup (Firefox 3.6.3 + monobook skin) and I haven't had any problem with scripts not working. If it happens for you on multiple machines, it's almost certain that some script is causing it, even if it seemed to work before. Gavia immer (talk) 21:58, 9 April 2010 (UTC)[reply]
Try bypassing your cache or checking the error console for JavaScript-related errors? --MZMcBride (talk) 22:02, 9 April 2010 (UTC)[reply]
Everything's checking out OK. I've just disabled each one-by-one, bypassing my cache each time, and still nothing. Steve T • C 22:07, 9 April 2010 (UTC)[reply]
Checked your gadgets? Ale_Jrbtalk 22:14, 9 April 2010 (UTC)[reply]
I tried using your monobook.js and I got an error on the line ta['pt-future'] = ['1', 'FAC'];. When I changed it to tb['pt-future'] = ['1', 'FAC'];, all your scripts started working. But I have absolutely no idea why would this error manifest itself now, since this error was there for years. Svick (talk) 22:25, 9 April 2010 (UTC)[reply]
Ah! Seems to have fixed all but one; I can live with that for now. If it persists, I'll contact the editor who wrote it. Thanks for your help. Steve T • C 22:40, 9 April 2010 (UTC)[reply]
This is because before MediaWiki 1.16, there was some old code for akeytt() that fixed tooltips and accesskeys in portlets. This code was removed, and one of the variables it used was "ta". So you had this error in your monobook, but before last night's software update it was non-fatal, because ta actually existed. Now ta has disappeared and the error has become fatal. —TheDJ (talkcontribs) 01:22, 10 April 2010 (UTC)[reply]
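For anyone else hitting this, a defensive version of the offending line would avoid the fatal error (though it does not restore the old tooltip behaviour):

// Only touch the legacy accesskey/tooltip table if it still exists.
if (typeof ta !== 'undefined') {
  ta['pt-future'] = ['1', 'FAC'];
}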

Deletion logs no longer shown for file redlinks

When clicking a file (often an image) redlink, it used to give links to Wikipedia's and Commons' deletion logs for the file, if there were any. It's very useful to know who, when and why the filename was deleted, and if it ever was. Now all I get is the upload instructions. I couldn't find a link to the logs.

Oddly, for this image from just above here at FFD, I get the deletion reason.

For this old image from Rome's page, I get just the upload form. Both have the File: prefix.

I don't know if this is related to the upgrade. TransUtopian (talk) 02:44, 10 April 2010 (UTC)[reply]

Editbox size in Preferences

I imagine this is related to the recent changeover - the editbox size set in Preferences / Editing seems not to be working. Beyond My Ken (talk) 04:22, 10 April 2010 (UTC)[reply]

You can set something like this in your monobook.css.

#wpTextbox1 {
	width:100% !important;
	height:300px !important;
	/* etc. etc. etc. */
	}

These attributes will override anything in your user-prefs. ―AoV² 04:37, 10 April 2010 (UTC)[reply]

Thanks, I will try that. Beyond My Ken (talk) 05:31, 10 April 2010 (UTC)[reply]
Actually 300px might be too small, but the point is you can use this to change the size, font, color, etc. of the <textarea> to whatever you want [3]. ―AoV² 07:08, 10 April 2010 (UTC)[reply]

Logo position

For a few days now, the Wikipedia logo, here and on other language Wikipedias and on Commons, is not shown in its normal position (top left), but offset to the right by about half that image's width, obscuring the first two letters of whichever article I'm reading. This happens in IE8, logged in using MonoBook or Vector, or logged out (which should exclude my monobook.js, monobook.css, vector.js as culprits). I also get this error on every page:

User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.4; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
Timestamp: <Current UTC>

Message: Not implemented

Line: 52 Char: 15 Code: 0 URI: http://bits.wikimedia.org/skins-1.5/common/IEFixes.js

None of this happens in Firefox. How can I fix this in IE8? -- Michael Bednarek (talk) 05:05, 10 April 2010 (UTC)[reply]

Real-time updates of Wikipedia mirror

I want to create a mirror of enwiki that will be continually updated. I am not proposing the creation of a live mirror that would hit Wikipedia every time a page is loaded by one of the mirror's readers. But I would want to update my wiki every time an edit is made to Wikipedia. So, what I have in mind is downloading the data dump, setting up the mirror as a MediaWiki installation, and then setting up a bot to update my mirror every time a change is made to Wikipedia. The question is, how to minimize the server load on Wikipedia (and will that even be a major issue)? I submitted this bug suggesting an XML feed of RecentChanges (and I see there has been a similar bug about XML feeds of watchlists). And I know there used to be an OAI-PMH Wikimedia update feed service, which evidently Wikimedia isn't all that interested in offering anymore. It would be great to get a diff that would save bandwidth by just providing the lines that are changing in the + ... and - ... format we see in SVN diffs, if it could be done in a way that could be readily processed and incorporated into the mirror's database. That could theoretically be done via the RSS or Atom feed of recent changes, although I'm told that XML is a superior format to use for transferring data between wikis. But it seems that XML dumps of revisions list the entire page text for each version.

Anyway, my thought is that lacking such an XML or OAI-PMH feed, I could just set up a bot to use the API to get the page IDs, revision IDs, etc. of revisions that appear in RecentChanges, and then use the API to grab the text of those revisions, e.g., with http://en.wikipedia.org/w/api.php?action=query&prop=revisions&revid=355085522&pageid=19035&titles=Mormon&rvprop=content&rvdiffto=355046139&rvgeneratexml . There are, perhaps, 100 edits a minute so I would only be hitting Wikipedia a couple times a second. Do you think they'd mind? I know Wikimedia has gotten annoyed at the live mirrors that were hitting them 50 times a second and told them to stop or they'd be blocked. But I'm hoping I can actually cause a net reduction on server load on Wikipedia if people use my site rather than Wikipedia, and if other mirrors mirror from my mirror.

I anticipate that this mirror will be used by a lot of editors (rather than just readers), as it will be not just a mirror but also a supplement of Wikipedia. In addition to hosting all of Wikipedia's articles, it will also allow pages to be added that are outside the scope of Wikipedia. If someone clicks on an edit tab on a mirrored Wikipedia article, however, it will take them to the appropriate Wikipedia url, e.g., http://en.wikipedia.org/w/index.php?title=Mormon&action=edit . Users will also be able to override the content of mirrored Wikipedia articles by appending front and back matter and adding forced wikilinks (i.e. causing words to be wikified that aren't wikified on Wikipedia — e.g., because the wikilinked article doesn't exist on Wikipedia but exists on the mirror/supplement wiki). The upshot of this is that maintaining full page histories and up-to-date versions will be more important than if it were just readers using the site. Tisane (talk) 05:37, 10 April 2010 (UTC)[reply]

Just a suggestion, why not try to hit the toolserver or the backup server? You'd have to cope with replag, though... ManishEarthTalkStalk 06:16, 10 April 2010 (UTC)[reply]
The Toolserver doesn't store page text, so I'm pretty sure you can't use it for mirroring purposes. — The Earwig (talk) 06:35, 10 April 2010 (UTC)[reply]
Is there any way to access the backup server (read-only)? ManishEarthTalkStalk 06:36, 10 April 2010 (UTC)[reply]
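To make the polling idea concrete, here is a skeleton of such an updater, assuming Node.js 18+ (global fetch); list=recentchanges and prop=revisions are real API modules, but applyToMirror() and the 30-second interval are placeholders:

// Poll recent changes through the API and fetch the text of each new
// revision. Error handling, rate limiting, continuation, and the actual
// mirror database write are deliberately left out.
const API = 'https://en.wikipedia.org/w/api.php';
let newest = new Date().toISOString();

function applyToMirror(title, data) { /* placeholder: write into the mirror's DB */ }

async function poll() {
  // Changes are listed newest-first; rcend bounds the listing at the
  // timestamp we have already processed.
  const rcUrl = API + '?action=query&format=json&list=recentchanges'
              + '&rcprop=title|ids|timestamp&rclimit=500'
              + '&rcend=' + encodeURIComponent(newest);
  const rc = (await (await fetch(rcUrl)).json()).query.recentchanges;
  for (const change of rc) {
    const revUrl = API + '?action=query&format=json&prop=revisions'
                 + '&rvprop=content&revids=' + change.revid;
    applyToMirror(change.title, await (await fetch(revUrl)).json());
  }
  if (rc.length) newest = rc[0].timestamp;
  setTimeout(poll, 30000); // at ~100 edits a minute, roughly 50 changes per poll
}

poll();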

Missing User Scripts

All my user scripts appear to have stopped working - or at least the tabs don't show up any more. Is there a reason for this that anyone can think of? I haven't touched my monobook.js file so I can't see how it's anything I did, and I did restart my PC. Does anyone have any ideas on this? Spartaz Humbug! 07:51, 10 April 2010 (UTC)[reply]

Probably related to your computer, as I imported your monobook.js into my account and your scripts work. Gary King (talk) 08:15, 10 April 2010 (UTC)[reply]
Strange, they came back. *rubs his eyes* I must be losing it. Thanks Gary. Spartaz Humbug! 08:35, 10 April 2010 (UTC)[reply]
That's bizarre: I can see them on the edit window but not on the loaded project page. How strange. Spartaz Humbug! 08:36, 10 April 2010 (UTC)[reply]