This FAQ answers some questions related to the technical workings of Wikipedia, including software and hardware. Check out Wikipedia:FAQ/Main Page for additional main page-specific technical information.
What happens if two or more people are editing the same page?
When the second person (and any later person) attempts to save the page, MediaWiki tries to merge their changes into the current version of the text. If the merge fails, the user receives an "edit conflict" message and the opportunity to merge their changes manually. If several conflicts occur in a row, a slightly different message is shown. This behaviour is similar to that of Concurrent Versions System (CVS), a widely used software version management system.
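The merge step can be pictured as a three-way comparison against the revision both editors started from. Here is a minimal Python sketch using the standard difflib module; it is an illustration of the idea, not MediaWiki's actual code:

```python
import difflib

def changed_lines(base, edited):
    """Base line numbers that an edit removes, replaces, or inserts at."""
    changed = set()
    matcher = difflib.SequenceMatcher(None, base, edited)
    for tag, i1, i2, _j1, _j2 in matcher.get_opcodes():
        if tag != "equal":
            # pure insertions have i1 == i2; mark the insertion point itself
            changed.update(range(i1, max(i2, i1 + 1)))
    return changed

def has_conflict(base, yours, theirs):
    """Two edits merge cleanly only if they touch different parts of the base."""
    return bool(changed_lines(base, yours) & changed_lines(base, theirs))
```

When has_conflict is false, the two sets of changes can simply be applied in turn; when it is true, the second editor is shown the edit-conflict screen instead.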
How do I recover a password I have forgotten?
If you entered your e-mail address when you signed up, you can have a new password generated. Click the "Log in" link in the upper-right corner, enter your user name, and click the "Mail me a new password" button near the bottom of the page. You should receive an e-mail message with a new random password; use it to log in, then go to your preferences and change your password to something you'll remember.
The developers use the Phabricator bug tracking tool to keep track of bugs. Anybody is welcome to create an account there and report any bugs they encounter; however, if you prefer, you can post about your bug at the technical village pump. For more information, see Bug reports.
Wikipedia originally ran UseModWiki, a general wiki script by Clifford Adams. In January 2002, we switched to a PHP script, which in turn was completely overhauled the following July to create what we now call MediaWiki.
Phase I: January 2001 - January 2002
One of Bomis' servers hosted all Wikipedia wikis, running on the UseModWiki software.
Phase II: January 2002 - July 2002
One of Bomis' servers hosted all Wikipedia wikis; English and Meta ran on the new PHP/MySQL-based software, all other languages on UseModWiki. Both the database and the web server ran on one machine.
Phase IIIa: July 2002 - May 2003
Wikipedia gets its own server, running the English Wikipedia and, shortly afterwards, Meta on the rewritten PHP software. Both the database and the web server run on one machine.
One of Bomis' servers continues to host some of the other languages on UseModWiki, but most of the active ones are gradually moved over to the other server during this period.
Phase IIIb: May 2003 - Feb 2004
Wikipedia's server is given the code name "pliny". It serves the database for all phase 3 wikis and the web for all but English.
New server, code name "larousse", serves the web pages for the English Wikipedia only. Plans to move all languages' web serving to this machine are put on hold until load is brought down with more efficient software or larousse is upgraded to be faster.
One of Bomis' servers continued to host some of the other languages on UseModWiki until it died. All are now hosted on pliny; a few more of the active ones have been gradually moved over to the new software, and an eventual complete conversion is planned.
Phase IIIc: Feb 2004 to Present
Wikipedia gets a whole new set of servers, paid for through donations to the non-profit Wikimedia Foundation.
The new architecture has a new database server (suda), with a set of separate systems running Apache, as well as "squids" that cache results (to reduce the load). More details are at m:Wikimedia servers.
And now the longer answer. Wikipedia, and wikis in general, are meant to be edited on the fly. HTML is not easy to use when you simply want to write an article. Creating links is a particularly dramatic example: to link to the Paris article using HTML, you would have to type out a full anchor element with the link target spelled out.
Using MediaWiki markup is much easier: you simply wrap the article title in double square brackets.
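For comparison, the two forms look roughly like this (the HTML shown is a typical anchor element; the exact example originally given on this page may have differed):

```html
<a href="http://en.wikipedia.org/wiki/Paris">Paris</a>
```

versus the wikitext equivalent:

```
[[Paris]]
```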
A special markup language even allows you to "transclude" special snippets of code, called templates, into wiki pages. (You can also "substitute" the code for that template, effectively copying and pasting it into the document, but this is a waste of space and is obnoxious to other users who try to edit but find that they have to scroll through large amounts of template code. Substitution is, however, preferred in some cases.)
That's not quite true: some HTML tags do work. HTML table tags, for example, were once the only way to create tables (this can now also be done with wiki syntax). However, the software developers have discussed deprecating many HTML tags.
What about non-ASCII characters, and special symbols?
Wikipedia uses Unicode (specifically, the UTF-8 encoding), and most browsers can handle it, but font limitations mean that more obscure characters may not display for many users. See the Meta:Help:Special characters page for a detailed discussion of what is generally safe and what isn't. That page will be updated over time as more browsers come to support more features.
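As a small illustration of how UTF-8 works (a Python sketch; the characters chosen here are just examples), ASCII characters keep their familiar single-byte values while rarer characters take two to four bytes, which is why most text "just works" and display problems are usually a matter of fonts, not encoding:

```python
# UTF-8 encodes each Unicode character as one to four bytes; plain ASCII
# characters keep their single-byte values unchanged.
for ch in ("A", "é", "中", "𝕎"):
    data = ch.encode("utf-8")
    print(f"{ch!r}: {len(data)} byte(s): {data.hex(' ')}")
```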
Note that downloading the database dumps is much preferred over trying to spider the entire site. Spidering the site will take you much longer, and puts a lot of load on the server (especially if you ignore our robots.txt and spider over billions of combinations of diffs and whatnot). Heavy spidering can lead to your spider, or your IP, being barred with prejudice from access to the site. Legitimate spiders (for instance search engine indexers) are encouraged to wait about a minute between requests, follow the robots.txt, and if possible only work during less loaded hours (2:00-14:00 UTC is the lighter half of the day).
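A legitimate spider following the advice above might look like this Python sketch. The robots rules shown are illustrative, not Wikipedia's actual robots.txt, and the one-minute delay follows the guidance in this FAQ:

```python
import time
from urllib.robotparser import RobotFileParser

# Illustrative rules only -- fetch and parse the site's real robots.txt
# in practice.
ROBOTS_TXT = """\
User-agent: *
Disallow: /w/
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def crawl(urls, delay=60.0):
    """Yield only the URLs robots.txt permits, pausing between requests."""
    first = True
    for url in urls:
        if not rp.can_fetch("*", url):
            continue  # skip disallowed paths such as diff/history views
        if not first:
            time.sleep(delay)  # roughly one request per minute
        first = False
        yield url
```

Note how the diff/history URLs under /w/ are filtered out before any request is made, which avoids exactly the "billions of combinations of diffs" problem described above.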
The uploaded images and other media files are not currently bundled in an easily downloadable form; if you need one, please contact the developers on the wikitech-l mailing list. Please do not spider the whole site to get images.
If you're just after retrieving a topic page, the following Perl sample code works. In this case, it retrieves and lists the Main Page, but modifications to the $url variable for other pages should be obvious enough. Once you've got the page source, Perl regular expressions are your friend in finding wiki links.
use strict;
use warnings;
use LWP::UserAgent;

my $browser = LWP::UserAgent->new();
my $url = "http://en.wikipedia.org/wiki/Wikipedia%3AMain_Page";
my $webdoc = $browser->request(HTTP::Request->new('GET', $url));
if ($webdoc->is_success) {          # ...then it's loaded the page OK
    print $webdoc->title, "\n\n";   # page title
    print $webdoc->content, "\n\n"; # page text
}
Note that all (English) Wikipedia topic entries can be accessed using the conventional prefix "http://en.wikipedia.org/wiki/", followed by the topic name (with spaces turned into underscores, and special characters encoded using the standard URL encoding system).
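The URL construction just described can be sketched in a few lines of Python (the helper name is ours, not an official API):

```python
from urllib.parse import quote

def wikipedia_url(title, lang="en"):
    """Build an article URL: spaces become underscores, and other special
    characters are percent-encoded per the standard URL encoding rules."""
    return f"http://{lang}.wikipedia.org/wiki/" + quote(title.replace(" ", "_"))

print(wikipedia_url("Wikipedia:Main Page"))
# the colon is percent-encoded, matching the $url in the Perl sample above
```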
Cookies are not required to read or edit Wikipedia, but they are required in order to log in and link your edits to a user account.
When you log in, the wiki will set a temporary session cookie which identifies your login session; this will be expired when your browser exits (or after an inactivity timeout), and is not saved on your hard drive.
Another cookie will be saved which lists the user name you last logged in under, to make subsequent logins just a teensy bit easier. (Actually two: one with your name, and one with your account's internal ID number; they must match up.) These cookies expire after 180 days. If this worries you, clear your cookies after completing your session.
If you check the "remember my password" box on the login form, another cookie will be saved with a token that authenticates you to our servers (which is unrelated to your password). As long as this remains valid, you can bypass the login step on subsequent visits to the wiki. The cookie expires after 180 days, or is removed if you log out. If this worries you, don't use the option. (You should not use it on a public terminal!)
This could be a result of your cookie, browser cache, or firewall/Internet security settings. Or, to quote Tim Starling (referring to a question about "remembering password across sessions"):
"The kind of session isn't a network session strictly speaking, it's an HTTP session, managed by PHP's session handling functions. This kind of session works by setting a cookie, just like the "remember password" feature. The difference is that the session cookie has the "discard" attribute set, which means that it is discarded when you close your browser. This is done to prevent others from using your account after you have left the computer.
The other difference is that PHP sessions store the user ID and other such information on the server side. Only a "session key" is sent to the user. The remember password feature stores all required authentication information in the cookie itself. On our servers, the session information is stored in memcached, a system for non-durable (unreliable) caching. Session information may occasionally be lost or go missing temporarily, causing users to be logged out. The simplest workaround for this is to use the remember password feature, as long as you are not worried about other people using the same computer." from the Wikipedia:Village pump (technical) on May 4, 2005 (italics added).
In other words: click the "remember me" box when logging in.
The software that runs Wikipedia is great! Can I use it for my site?
You can, but depending on your needs you might be better served by something else; MediaWiki is big and complex. First see Wiki software for a list of alternative wiki engines.
If, after reviewing that list, you're still sure you want to use MediaWiki, see the MediaWiki web site for details on downloading, installing and configuring the software.
Can I add a page hit counter to a Wikipedia page?
Page hit counting is a feature of the MediaWiki software, but it is disabled on Wikipedia for performance reasons. Wikipedia is one of the most popular web sites in the world and uses a cluster of more than 400 servers (as of January 2011) to handle the load. Nearly 80% of that load is handled by about 100 front-end cache servers, which store copies of pages so they can be served without being rebuilt from the database each time. Hit-count data is therefore not collected centrally; it is aggregated from all the servers and made available at http://stats.grok.se/.
You can also view the hits for a particular page from that page's history, by choosing "Page view statistics" under the external tools listed there.
Users of mobile devices (smartphones, etc.) should consider browsing the mobile version of Wikipedia, by clicking the "Mobile view" link at the bottom of any page, or visiting the URL en.m.wikipedia.org. It is suited to touch devices and will save bandwidth.
Alternatively, to view a low-bandwidth Main Page suitable for wireless users, select the Wikipedia:Main Page alternative (simple layout) link; that page also links to a text-only version of the main page. For now, typing the URL directly into your wireless device's browser is the most convenient way to reach articles. A short article title such as Science is easy to enter, and you can then follow its links to your favorite topics.
Is the "random article" feature really random? 
No, although it's random enough to provide a small sample of articles reliably.
In the Wikipedia database, each page is assigned a "random index": a random floating-point number uniformly distributed between 0 (inclusive) and 1 (exclusive). The "random article" feature (Special:Random) draws a fresh double-precision floating-point number and returns the next article whose random index is greater than that number. Articles preceded by a larger gap in the random-index space are therefore proportionally more likely to be selected, so the selection probability actually varies from article to article.
The random index value for new articles, and the random value used by Special:Random, is selected by reading two 31-bit words from a Mersenne twister, which is seeded at each request by PHP's initialisation code using a high-resolution timer and the PID. The words are combined using:
(mt_rand() * $max + mt_rand()) / $max / $max
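The selection step described above can be sketched in Python (a simplified illustration with made-up page titles and indices, not MediaWiki's actual implementation):

```python
import bisect
import random

# Each page stores a fixed random index in [0, 1); Special:Random draws a
# fresh number and returns the first page whose index exceeds it, wrapping
# around past 1.0. Pages preceded by a bigger gap are picked more often.
PAGES = {"Paris": 0.12, "Science": 0.55, "Mathematics": 0.90}

def random_article(pages, rng=random.random):
    indexed = sorted((idx, title) for title, idx in pages.items())
    keys = [idx for idx, _ in indexed]
    pos = bisect.bisect_right(keys, rng())
    return indexed[pos % len(indexed)][1]  # wrap: past the last index -> first page
```

With these example indices, Science is returned whenever the draw lands in [0.12, 0.55), i.e. about 43% of the time, which illustrates the non-uniformity described above.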
Some old articles had their page_random value reset using MySQL's RAND() function.
There is a third party site, not maintained by Wikipedia, which currently allows you to view page hit counts since December 2007. Additionally, the weekly Top 25 Report provides a list of the 25 most popular articles in the last week.