A web server is computer software and underlying hardware that accepts requests via HTTP, the network protocol created to distribute web pages, or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource or an error message. The server can also accept and store resources sent from the user agent if configured to do so.
A server can be a single computer, or even an embedded system such as a router with a built-in configuration interface, but high-traffic websites typically run web servers on fleets of computers designed to handle large numbers of requests for documents, multimedia files and interactive scripts. A resource sent from a web server can be a preexisting file available to the server, or it can be generated at the time of the request by another program that communicates with the server program. The former is often faster and more easily cached for repeated requests, while the latter supports a broader range of applications. Websites that serve generated content usually incorporate stored files whenever possible.
Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, have extended the application of web servers well beyond their original purpose of serving human-readable pages.
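The request-response cycle described above can be exercised in a few lines of code. The following is a minimal sketch using Python's standard library; www.example.com is a placeholder host, and the resource path is arbitrary.

```python
# Minimal sketch of the HTTP request/response cycle: a user agent sends a
# request for a resource and the server answers with content or an error.
from http.client import HTTPConnection

conn = HTTPConnection("www.example.com", 80)   # placeholder host
conn.request("GET", "/index.html")             # request a specific resource
response = conn.getresponse()                  # the server's reply
print(response.status, response.reason)        # e.g. "200 OK" or "404 Not Found"
body = response.read()                         # the resource itself (or an error page)
conn.close()
```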
History
In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The project resulted in Berners-Lee writing two programs in 1990:
- A Web browser called WorldWideWeb
- The world's first web server, later known as CERN httpd, which ran on NeXTSTEP
Between 1991 and 1994, the simplicity and effectiveness of the early technologies used to browse and exchange data through the World Wide Web helped to port them to many different operating systems and to spread their use among scientific organizations and universities, and subsequently to industry.
In 1994 Berners-Lee decided to found the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.
Basic common features
Although web server programs differ in how they are implemented, most of them offer the following basic common features.
- HTTP: support for one or more versions of the HTTP protocol, in order to send responses compatible with the version of each client's request, e.g. HTTP/1.0 and HTTP/1.1, plus, if available, HTTP/2 and HTTP/3;
- Logging: web servers usually also have the capability of logging information about client requests and server responses to log files, for security and statistical purposes.
A few other popular features (a very short selection) are:
- Authentication: optional support for requesting credentials (a user name and password) before allowing access to some or all of a website's resources.
- Large file support: the ability to serve files larger than 2 GB on a 32-bit OS.
- Bandwidth throttling: limiting the speed of responses in order not to saturate the network and to be able to serve more clients.
- Virtual hosting: serving many websites (domain names) using only one IP address, as in the sketch after this list.
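A minimal sketch of name-based virtual hosting, the last feature in the list: the server inspects the HTTP Host header of each request and picks a per-site root directory. The host names and directories below are hypothetical, and Python's standard library server stands in for a production web server.

```python
# Minimal sketch of name-based virtual hosting: one listening socket / IP
# address serves several websites by inspecting the HTTP Host header.
# The host names and directories are hypothetical examples.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

DOCUMENT_ROOTS = {
    "www.example.com": "/var/www/example",
    "blog.example.com": "/var/www/blog",
}

class VirtualHostHandler(SimpleHTTPRequestHandler):
    def translate_path(self, path):
        # Pick the website's root directory from the Host header
        # (stripping an optional ":port" suffix), with a default fallback.
        host = self.headers.get("Host", "").split(":")[0]
        self.directory = DOCUMENT_ROOTS.get(host, "/var/www/default")
        return super().translate_path(path)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), VirtualHostHandler).serve_forever()
```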
Web servers are able to map the path component of a Uniform Resource Locator (URL) into:
- A local file system resource (for static requests)
- An internal or external program name (for dynamic requests)
For a static request the URL path specified by the client is relative to the target website's root directory.
Consider the following URL as it would be requested by a client over HTTP:
GET /path/file.html HTTP/1.1
Host: www.example.com
The web server on www.example.com will append the given path to the path of the (Host) website's root directory. On an Apache server, this is commonly /home/www/website (on Unix machines, usually /var/www/website). The result is the local file system resource:
/home/www/website/path/file.html
The web server then reads the file, if it exists, and sends a response to the client's web browser. The response describes the content of the file and contains the file itself, or an error message is returned saying that the file does not exist or is unavailable.
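A sketch of this static-request mapping, including the directory traversal check that any real web server must perform; /home/www/website is the example root directory used above.

```python
# Minimal sketch of static-request mapping: the URL path from the request
# line is appended to the website's root directory.
import os

WEBSITE_ROOT = "/home/www/website"   # example root directory from the text

def map_url_path(url_path):
    """Map a URL path to a local file path, or return None if invalid."""
    # Join and normalize, then refuse anything that escapes the root
    # directory (e.g. "/../etc/passwd"), a classic web server pitfall.
    candidate = os.path.normpath(os.path.join(WEBSITE_ROOT, url_path.lstrip("/")))
    if not (candidate == WEBSITE_ROOT or candidate.startswith(WEBSITE_ROOT + os.sep)):
        return None
    return candidate

# map_url_path("/path/file.html") == "/home/www/website/path/file.html"
# map_url_path("/../etc/passwd") is None
```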
Kernel-mode and user-mode web servers
Web servers that run in kernel mode can have direct access to kernel resources and so can, in theory, be faster than those running in user mode; however, running a web server in kernel mode has disadvantages, e.g. difficulties in developing (debugging) the software, and run-time critical errors that may lead to serious problems in the OS kernel.
Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied, because the system reserves resources for its own usage and has the responsibility of sharing hardware resources with all the other running applications. Executing in user mode can also mean redundant buffer copies, which are another limitation for user-mode web servers.
Nowadays almost all web server software is executed in user mode (because many of the above disadvantages have been overcome by faster hardware, new OS versions and new web server software). See also the comparison of web server software to discover which of them run in kernel mode and which in user mode (also referred to as kernel space and user space).
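The redundant buffer copies mentioned above can be reduced even in user mode on some operating systems: the sendfile() system call lets the kernel copy a file to a socket directly. A minimal sketch, assuming a Unix-like OS and an already-connected socket:

```python
# Minimal sketch of avoiding user-space buffer copies when serving a
# static file: os.sendfile() asks the kernel to move bytes directly from
# the file to the socket, instead of read()-ing them into a user-space
# buffer and write()-ing them back out. Requires a Unix-like OS.
import os

def send_file(sock, filepath):
    with open(filepath, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            # The kernel transfers the data; no user-mode buffer is involved.
            offset += os.sendfile(sock.fileno(), f.fileno(), offset, size - offset)
```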
Performance
To improve the user experience, web servers should reply to client requests as quickly as possible; unless the response is throttled (by configuration) for some types of files (e.g. big files), the returned content should also be sent as fast as possible (i.e. at a high transfer speed).
For web server software, the main key performance statistics (measured under a varying load of clients and requests per client) are:
- maximum number of requests per second (RPS, similar to QPS, depending on HTTP version and configuration, type of HTTP requests, etc.);
- network latency response time (usually in milliseconds) for each new client request;
- throughput in bytes per second (depending on file size, cached or uncached content, available network bandwidth, type of HTTP protocol used, etc.).
The above three performance numbers may vary noticeably depending on the number of active TCP connections, so a fourth statistic is the concurrency level supported by a web server under a specific web server configuration, OS type and available hardware resources.
Last but not least, the specific server model used to implement a web server program can limit the performance and scalability level that can be reached under heavy load or when using high-end hardware (many CPUs, disks, etc.).
The performance of a web server is typically benchmarked by using one or more of the available automated load-testing tools.
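As an illustration of the first two statistics, the following sketch fires concurrent requests at a server and reports requests per second and mean latency. It is not a substitute for the dedicated load-testing tools just mentioned; the URL, request count and concurrency level are arbitrary placeholders.

```python
# Minimal load-test sketch measuring two of the statistics above:
# requests per second and per-request latency.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target server
REQUESTS = 200                   # arbitrary total number of requests
CONCURRENCY = 20                 # arbitrary number of simultaneous clients

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start

print(f"requests/s:   {REQUESTS / elapsed:.1f}")
print(f"mean latency: {1000 * sum(latencies) / len(latencies):.1f} ms")
```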
Load limits
A web server (program installation) usually has pre-defined load limits, because it can handle only a limited number of concurrent client connections (usually between 1 and several tens of thousands for each active web server process; see also the C10k problem and the C10M problem) and can serve only a certain maximum number of requests per second, depending on:
- its own settings,
- the average HTTP request type,
- whether the requested content is static or dynamic,
- whether the content is cached, or compressed,
- the average network speed between clients and web server,
- the number of active TCP connections,
- the hardware and software limitations or settings of the OS of the computer(s) on which the web server runs.
When a web server is near to or over its load limits, it gets overloaded and may become unresponsive.
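One way to keep a server responsive near its limits is to make the connection limit explicit and reject excess requests outright. A minimal sketch using Python's standard library; the limit value is an arbitrary example.

```python
# Minimal sketch of an explicit load limit: beyond MAX_CONCURRENT
# in-flight requests, the server answers 503 Service Unavailable
# instead of queueing work until it becomes unresponsive.
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

MAX_CONCURRENT = 100                          # arbitrary example limit
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

class LimitedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not slots.acquire(blocking=False):
            self.send_error(503, "Service Unavailable")   # explicit overload reply
            return
        try:
            body = b"hello\n"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        finally:
            slots.release()

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), LimitedHandler).serve_forever()
```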
Causes of overload
At any time web servers can be overloaded due to:
- Excess legitimate web traffic. Thousands or even millions of clients connecting to the website in a short interval, e.g., Slashdot effect;
- Distributed Denial of Service attacks. A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users;
- Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
- XSS worms can cause high traffic because of millions of infected browsers or web servers;
- Internet bots: traffic that is not filtered/limited on large websites with very few resources (bandwidth, etc.);
- Internet (network) slowdowns (due to packet losses, etc.) so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
- Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, hardware or software failures, or back-end (e.g., database) failures; in these cases the remaining web servers may get too much traffic and become overloaded.
Symptoms of overload
The symptoms of an overloaded web server are:
- Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
- The web server returns an HTTP error code, such as 500, 502, 503, 504, 408, or an intermittent 404.
- The web server refuses or resets (interrupts) TCP connections before it returns any content.
- In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.
To partially overcome load limits and to prevent overload, most popular websites use common techniques like:
- Managing network traffic, by using firewalls to block unwanted traffic, HTTP traffic managers to drop, redirect or rewrite requests with bad HTTP patterns, and bandwidth management and traffic shaping to smooth peaks in network usage.
- Deploying web cache techniques.
- Using different domain names or IP addresses to serve different (static and dynamic) content by separate web servers, e.g. a dedicated host name for static images and another for dynamically generated pages.
- Using different domain names or computers to separate big files from small and medium-sized files; the idea is to be able to fully cache small and medium-sized files and to efficiently serve big or huge files (over 10-1000 MB) by using different settings.
- Using many web servers (programs) per computer, each one bound to its own network card and IP address.
- Using many web servers (computers) that are grouped together behind a load balancer so that they act or are seen as one big web server (see the sketch after this list).
- Adding more hardware resources (e.g. RAM, disks) to each computer.
- Tuning OS parameters for hardware capabilities and usage.
- Using more efficient computer programs for web servers, etc.
- Using other programming workarounds, especially if dynamic content is involved.
- Using the latest efficient versions of HTTP (e.g. beyond the common HTTP/1.1, also enabling HTTP/2 and, in the near future, HTTP/3, whenever the available web server software has reliable support for the latter two protocols), in order to greatly reduce the number of TCP/IP connections started by each client and the size of the data exchanged (thanks to a more compact representation of HTTP headers, data compression, etc.); note that even if newer HTTP protocols usually require fewer OS resources, they may require more RAM and CPU from the web server software (because of encrypted data, on-the-fly data compression and other implementation details).
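As a sketch of the load-balancer item in the list above, a toy round-robin reverse proxy that spreads GET requests over two back ends. The back-end addresses are hypothetical, and a real balancer would also forward request headers, handle other methods, and stream large bodies.

```python
# Minimal sketch of a round-robin load balancer: a reverse proxy spreads
# incoming requests over several back-end web servers so that they act
# as one big web server. Back-end addresses are hypothetical.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BACKENDS = itertools.cycle(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])

class RoundRobinProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                  # pick the next server in turn
        try:
            with urllib.request.urlopen(backend + self.path) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except OSError:
            self.send_error(502, "Bad Gateway")   # back end unreachable

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), RoundRobinProxy).serve_forever()
```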
Market share
(Survey tables of web server market share: in one survey OpenResty (OpenResty Software Foundation) served 6.36% of websites and Cloudflare Server (Cloudflare, Inc.) 5.0%, with all other web servers each used by less than 5% of the websites; in another, the same products stood at 4.00% and 3.0%, with all others below 3%. Additional tables compared January and February market-share figures, by product and vendor, for 2016 and 2017.)
See also
- Server (computing)
- Application server
- Comparison of web server software
- HTTP compression
- Open source web application
- Variant object
- Virtual hosting
- Web hosting service
- Web container
- Web proxy
- Web service
- Standard Web Server Gateway Interfaces used for dynamic content: CGI, SCGI, FastCGI, AJP.
- A few other Web Server Interfaces (server- or language-specific) used for dynamic content:
- SSI (rarely used: static HTML documents containing SSI directives are interpreted by the server software to include small pieces of dynamic data on the fly when pages are served, e.g. date and time, the contents of other static files, etc.)
- SAPI, ISAPI, NSAPI
- PSGI, the Perl Web Server Gateway Interface
- WSGI, the Python Web Server Gateway Interface (see the example after this list)
- Rack, the Ruby Web Server Gateway Interface
- Java Servlet, JavaServer Pages
- Active Server Pages, ASP.NET
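For example, a minimal WSGI application (the Python gateway interface listed above), served with the reference server from Python's standard library:

```python
# Minimal WSGI application: the web server calls `app` for each request,
# and the application returns the dynamically generated response body.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    body = f"Hello from {environ['PATH_INFO']}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```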