A web server is a computer system that processes requests via HTTP, the basic network protocol used to distribute information on the World Wide Web. The term can refer to the entire computer system, to an appliance, or specifically to the software that accepts and supervises the HTTP requests.
The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to text content.
A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented.
While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.
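As a minimal illustration of both directions, the following sketch uses Python's standard http.server module: GET requests are answered with static files, and POST requests (such as form submissions) are read from the request body. The port number and the handler name are arbitrary choices for illustration, not part of any particular web server.

```python
# Minimal sketch of a web server that serves static files over HTTP
# and accepts form submissions via POST, using only Python's standard
# library. The port (8000) and the served directory (the current
# working directory) are illustrative choices.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class FormAwareHandler(SimpleHTTPRequestHandler):
    def do_POST(self):
        # Read the submitted form body, then acknowledge it.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Received %d bytes of form data\n" % len(body))

if __name__ == "__main__":
    # GET and HEAD are handled by SimpleHTTPRequestHandler, which maps
    # URL paths onto files under the current working directory.
    HTTPServer(("", 8000), FormAwareHandler).serve_forever()
```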
Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP, or other scripting languages. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically ("on the fly") as opposed to returning static documents. Dynamic generation is primarily used to retrieve and/or modify information in databases; serving static documents is typically much faster and more easily cached, but cannot deliver dynamic content.
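The sketch below illustrates the dynamic case in Python (standing in for ASP, PHP, or similar): every request executes code that builds the HTML document at that moment. The handler name and the choice of the current time as the "dynamic" content are illustrative assumptions only.

```python
# Sketch of server-side generation of an HTML document "on the fly",
# as opposed to returning a static file. Each request runs code that
# builds the page; here the dynamic part is just the current time.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        html = "<html><body><p>Generated at {}</p></body></html>".format(
            datetime.now().isoformat())
        body = html.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), DynamicHandler).serve_forever()
```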
Web servers are not always used for serving the World Wide Web. They can also be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring and/or administering the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which now is included with most operating systems).
History
In 1989 Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The project resulted in Berners-Lee writing two programs in 1990:
- A browser called WorldWideWeb
- The world's first web server, later known as CERN httpd, which ran on NeXTSTEP
Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among scientific organizations and universities, and then to industry.
In 1994 Tim Berners-Lee founded the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.
Common features
- Virtual hosting to serve many web sites using one IP address (see the sketch after this list)
- Large file support, to serve files whose size is greater than 2 GB on a 32-bit OS
- Bandwidth throttling, to limit the speed of responses so as not to saturate the network and to be able to serve more clients
- Server-side scripting to generate dynamic web pages, while keeping the web server and website implementations separate from each other
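As a sketch of the first feature above, name-based virtual hosting can be reduced to a lookup that maps the HTTP Host header to a per-site document root. The hostnames and directories below are invented for illustration.

```python
# Sketch of name-based virtual hosting: one server, one IP address,
# many sites, selected by the HTTP Host header. The hostnames and
# directories are invented for illustration.
DOCUMENT_ROOTS = {
    "www.example.com": "/var/www/example",
    "blog.example.com": "/var/www/blog",
}
DEFAULT_ROOT = "/var/www/default"

def document_root_for(host_header):
    # Strip an optional ":port" suffix, then look the host up;
    # unknown hosts fall back to a default site.
    host = host_header.split(":")[0].lower()
    return DOCUMENT_ROOTS.get(host, DEFAULT_ROOT)
```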
Path translation
Web servers are able to map the path component of a Uniform Resource Locator (URL) into:
- A local file system resource (for static requests)
- An internal or external program name (for dynamic requests)
For a static request the URL path specified by the client is relative to the web server's root directory.
Consider the following URL as it would be requested by a client:
http://www.example.com/path/file.html
The client's user agent will translate it into a connection to www.example.com with the following HTTP 1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com
The web server on www.example.com will append the given path to the path of its root directory. On an Apache server, this is commonly /home/www (on Unix machines, usually /var/www). The result is the local file system resource:
/home/www/path/file.html
The web server then reads the file, if it exists, and sends a response to the client's web browser. The response describes the content of the file and contains the file itself, or an error message is returned saying that the file does not exist or is unavailable.
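A sketch of this translation in Python follows. The document root and the rejection of paths that escape it are illustrative choices, not a complete security treatment.

```python
# Sketch of the path translation described above: the URL path from
# the request line is appended to the server's root directory, and the
# resulting file is read and returned.
import os

WEB_ROOT = "/var/www"  # illustrative document root

def translate_path(url_path):
    # Normalize the path and refuse anything that escapes the root,
    # e.g. "GET /../etc/passwd".
    candidate = os.path.normpath(os.path.join(WEB_ROOT, url_path.lstrip("/")))
    if not candidate.startswith(WEB_ROOT):
        raise PermissionError("path escapes the document root")
    return candidate

def serve(url_path):
    try:
        with open(translate_path(url_path), "rb") as f:
            return 200, f.read()       # file exists: return its content
    except FileNotFoundError:
        return 404, b"Not Found"       # file missing: error status
    except PermissionError:
        return 403, b"Forbidden"       # traversal attempt or unreadable

# serve("/path/file.html") reads /var/www/path/file.html
```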
Kernel-mode and user-mode web servers
An in-kernel web server (like Microsoft IIS on Windows or TUX on GNU/Linux) will usually work faster, because, as part of the system, it can directly use all the hardware resources it needs, such as non-paged memory, CPU time-slices, network adapters, or buffers.
Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied, because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean redundant buffer copies, which are another handicap for user-mode web servers.
Load limits
A web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port), and it can serve only a certain maximum number of requests per second, depending on:
- its own settings,
- the HTTP request type,
- whether the content is static or dynamic,
- whether the content is cached, and
- the hardware and software limitations of the OS of the computer on which the web server runs.
When a web server is near or over its load limits, it becomes unresponsive.
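One way a server can enforce such a limit explicitly is sketched below: a counting semaphore caps the number of concurrent connections, and requests beyond the cap are refused immediately with a 503 rather than queued until the whole server stops responding. The limit of 500 echoes the typical defaults mentioned above, and process_request stands in for a hypothetical request-handling routine.

```python
# Sketch of enforcing a concurrent-connection limit with a semaphore.
import threading

MAX_CONCURRENT = 500  # matches the typical defaults cited above
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def handle_connection(conn):
    # Non-blocking acquire: fail fast when the server is at its limit.
    if not slots.acquire(blocking=False):
        conn.sendall(b"HTTP/1.1 503 Service Unavailable\r\n\r\n")
        conn.close()
        return
    try:
        process_request(conn)   # hypothetical request-handling routine
    finally:
        slots.release()
```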
Causes of overload
At any time web servers can be overloaded because of:
- Too much legitimate web traffic. Thousands or even millions of clients connecting to the web site in a short interval, e.g., Slashdot effect;
- Distributed Denial of Service attacks. A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users;
- Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among themselves);
- XSS viruses, which can cause high traffic because of millions of infected browsers and/or web servers;
- Internet bots, whose traffic is not filtered/limited on large web sites with very few resources (bandwidth, etc.);
- Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
- Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, hardware or software failures, or back-end (e.g., database) failures; in these cases the remaining web servers get too much traffic and become overloaded.
Symptoms of overload
The symptoms of an overloaded web server are:
- Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
- The web server returns an HTTP error code, such as 500, 502, 503, 504, 408, or even 404, the last of which is inappropriate for an overload condition.
- The web server refuses or resets (interrupts) TCP connections before it returns any content.
- In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.
Anti-overload techniques
To partially overcome above-average load limits and to prevent overload, most popular web sites use common techniques like:
- Managing network traffic (e.g., with firewalls and traffic shaping)
- Deploying web cache techniques
- Using different domain names to serve different content (static and dynamic) from separate web servers
- Using different domain names and/or computers to separate big files from small and medium-sized files; the idea is to be able to fully cache small and medium-sized files and to efficiently serve big or huge files (over 10–1000 MB) by using different settings
- Using many web servers (programs) per computer, each one bound to its own network card and IP address
- Using many web servers (computers) that are grouped together behind a load balancer so that they act or are seen as one big web server (see the sketch after this list)
- Adding more hardware resources (e.g., RAM, disks) to each computer
- Tuning OS parameters for hardware capabilities and usage
- Using more efficient computer programs for web servers, etc.
- Using other workarounds, especially if dynamic content is involved
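As a sketch of the load-balancer item above, a front end can rotate incoming requests across several back-end web servers so that the group appears as one big server. The back-end addresses are invented for illustration, and a real balancer would also track server health.

```python
# Sketch of round-robin load balancing across back-end web servers.
import itertools

BACKENDS = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]  # invented
next_backend = itertools.cycle(BACKENDS).__next__

def pick_backend():
    # Each call returns the next server in rotation.
    return next_backend()

# pick_backend() -> "10.0.0.1:80", then "10.0.0.2:80", ...
```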
Market share
Apache, IIS and Nginx are the most used web servers on the Internet.
See also
- Application server
- Comparison of web server software
- HTTP compression
- MaidSafe (a proposal for a system calling for the elimination of servers)
- Open source web application
- SSI, CGI, SCGI, FastCGI, PHP, Java Servlet, JavaServer Pages, ASP, ASP.NET, SAPI
- Variant object
- Virtual hosting
- Web hosting service
- Web proxy
- Web service
- "What is web server?'". webdevelopersnotes. 2010-11-23. Retrieved 2010-11-23.
- RFC 2616, the Request for Comments document that defines the HTTP 1.1 protocol.
- C64WEB.COM — Commodore 64 running as a web server using Contiki