Varnish (software)
Developer(s) | Poul-Henning Kamp, Redpill-Linpro, Varnish Software
---|---
Stable release | 4.1.0 / September 30, 2015
Written in | C
Operating system | BSD, Linux, Unix
Type | HTTP accelerator
License | Two-clause BSD license
Website | varnish-cache.org
Varnish is an HTTP accelerator designed for content-heavy dynamic web sites as well as heavily consumed APIs. In contrast to other web accelerators, such as Squid, which began life as a client-side cache, or Apache and nginx, which are primarily origin servers, Varnish was designed from the ground up as an HTTP accelerator. Varnish focuses exclusively on HTTP, unlike other proxy servers that often also support FTP, SMTP and other network protocols.
Varnish is used by high-profile, high-traffic websites, including online newspaper sites such as The New York Times, The Guardian, The Hindu and Corriere della Sera, and social media and content sites such as Wikipedia, Facebook, Twitter, Vimeo, and Tumblr. Of the top 10,000 sites on the web, around a tenth use the software.
History
The project was initiated by the online branch of the Norwegian tabloid newspaper Verdens Gang. The architect and lead developer is Danish independent consultant Poul-Henning Kamp (a well-known FreeBSD core developer), with management, infrastructure and additional development originally provided by the Norwegian Linux consulting company Linpro. The support, management and development of Varnish were later spun off into a separate company, Varnish Software.
Varnish is open source, available under a two-clause BSD license. Commercial support is available from Varnish Software, amongst others.
Version 1.0 of Varnish was released in 2006,[1][2] Varnish 2.0 in 2008,[3] Varnish 3.0 in 2011,[4] and Varnish 4.0 in 2014.[5]
Architecture
Varnish stores data in virtual memory and leaves the task of deciding what is stored in memory and what gets paged out to disk to the operating system. This helps avoid the situation where the application and the operating system each hold a copy of the same data, with the operating system caching data that the application is simultaneously moving to disk.
Varnish is heavily threaded, with each client connection being handled by a separate worker thread. When the configured limit on the number of active worker threads is reached, incoming connections are placed in an overflow queue; when this queue reaches its configured limit, incoming connections are rejected.
The principal configuration mechanism is the Varnish Configuration Language (VCL), a domain-specific language (DSL) used to write hooks that are called at critical points in the handling of each request. Most policy decisions are left to VCL code, making Varnish more configurable and adaptable than most other HTTP accelerators. When a VCL script is loaded, it is translated to C, compiled to a shared object by the system compiler, and loaded directly into the running accelerator, which can thus be reconfigured without a restart.
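As an illustration, a minimal VCL sketch might look like the following; the backend address, URL pattern and TTL value are assumptions chosen for the example, not defaults.

```vcl
vcl 4.0;

# Hypothetical origin server; host and port are placeholders.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Hook run for each incoming request: strip cookies from requests
    # for static assets so that they can be served from the cache.
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Hook run when a backend response arrives: give objects that carry
    # no caching information a short default lifetime.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 120s;
    }
}
```

A file like this is translated to C and compiled when it is loaded, and can be activated through the management interface described in the next paragraph without restarting the daemon.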
Run-time parameters control settings such as the maximum and minimum number of worker threads and various timeouts. A command-line management interface allows these parameters to be modified, and new VCL scripts to be compiled, loaded and activated, without restarting the accelerator.
To keep the number of system calls in the fast path to a minimum, log data is stored in shared memory, and the task of monitoring, filtering, formatting and writing log data to disk is delegated to a separate application.
Performance
While Varnish is designed to minimize contention between threads, its authors claim[citation needed] that its performance will only be as good as that of the system's pthreads implementation.
Additionally, a slow malloc implementation (such as the one in Microsoft Windows's msvcrt[6][7]) may add unnecessary contention and thereby limit performance; hence the general recommendation to run Varnish in Linux- or Unix-based environments.
Load balancing
Varnish supports load balancing using both a round-robin and a random director, both with per-backend weighting. Basic health-checking of backends is also available.[8]
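As a sketch of how this can be expressed in VCL (the backend names, addresses and probe settings below are illustrative assumptions):

```vcl
vcl 4.0;

import directors;

# Hypothetical health check: poll "/" every 5 seconds and require
# 3 of the last 5 polls to succeed for a backend to be considered healthy.
probe healthcheck {
    .url = "/";
    .interval = 5s;
    .timeout = 1s;
    .window = 5;
    .threshold = 3;
}

# Two placeholder backends sharing the same probe.
backend web1 { .host = "192.0.2.10"; .port = "80"; .probe = healthcheck; }
backend web2 { .host = "192.0.2.11"; .port = "80"; .probe = healthcheck; }

sub vcl_init {
    # Round-robin director that distributes requests across healthy backends.
    new cluster = directors.round_robin();
    cluster.add_backend(web1);
    cluster.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = cluster.backend();
}
```

A weighted random director can be set up in the same way with directors.random(), whose add_backend() call takes a per-backend weight.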
Other features
Varnish Cache also features:
- Plugin support with Varnish Modules, also called VMODs[9]
- Support for Edge Side Includes (ESI), including stitching together compressed ESI fragments (see the VCL sketch after this list)
- Gzip compression and decompression
- DNS, random, hashing and client-IP-based directors
- HTTP streaming pass and fetch
- Experimental support for persistent storage without LRU eviction
- Saint and grace modes
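As a sketch, ESI processing and gzip handling are enabled per object from VCL; the URL pattern and backend below are assumptions for illustration.

```vcl
vcl 4.0;

backend default { .host = "127.0.0.1"; .port = "8080"; }  # placeholder origin

sub vcl_backend_response {
    # Hypothetical policy: parse ESI markup in fragments served under /esi/,
    # and store other textual responses gzip-compressed.
    if (bereq.url ~ "^/esi/") {
        set beresp.do_esi = true;
    } else if (beresp.http.Content-Type ~ "text") {
        set beresp.do_gzip = true;
    }
}
```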
See also
- Web accelerator, which discusses host-based HTTP acceleration
- Proxy server, which discusses client-side proxies
- Reverse proxy, which discusses origin-side proxies
- Comparison of web servers
- Internet Cache Protocol
References
- ^ "Making Catalyst Sites Shine with Varnish", Dec. 14, 2008
- ^ "Varnish 1.0 released", Sep. 20, 2006
- ^ "Varnish 2.0 released", Oct. 15 2008
- ^ "Varnish 3.0.0 released", Jun. 16 2011
- ^ "Varnish 4.0.0 released", Apr. 10 2014
- ^ "Re: Why is Windows 100 times slower than Linux when growing a large scalar?".
- ^ http://locklessinc.com/benchmarks_allocator.shtml
- ^ "BackendPolling – Varnish". Varnish-cache.org. Retrieved 2014-07-18.
- ^ "VMODs Directory (Varnish Modules and Extensions) | Varnish Community". Varnish-cache.org. Retrieved 2014-07-18.
External links
- Official development web site
- Official commercial web site
- Notes from the Architect
- "You're Doing It Wrong", June 11, 2010 ACM Queue article by Varnish developer Poul-Henning Kamp describing the implementation of the LRU list.
- Varnish in Layman's Terms
- Varnish Cache How-To