Dynamic site acceleration

From Wikipedia, the free encyclopedia

Dynamic site acceleration (DSA), also known as whole site acceleration,[1] is a group of techniques that make the delivery of dynamic websites more efficient.[2] Manufacturers of application delivery controllers (ADCs) and content delivery networks (CDNs) use the following techniques to accelerate dynamic sites:[3]

  • Improved connection management, by multiplexing client connections and HTTP keep-alive
  • Prefetching of uncacheable web responses
  • Dynamic cache control[4]
  • On-the-fly compression[5]
  • Full-page caching[6]
  • Offloading SSL termination
  • Response-based TTL assignment (bending)[7]
  • TCP optimization
  • Route optimization

Techniques

Better connection management: TCP multiplexing

An edge device capable of TCP multiplexing, such as an ADC or a CDN, can be placed between web servers and clients to offload origin servers and accelerate content delivery.

Normally, each connection between client and server requires a dedicated process that lives on the origin for the duration of the connection. When a client has a slow connection, this ties up part of the origin server, because the process has to stay alive while the server waits for a complete request. With TCP multiplexing, the situation is different: the edge device buffers the request and only contacts the origin once a complete and valid request has arrived. This offloads application and database servers, which are slower, and more expensive to run, than ADCs or CDNs.[8]
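The buffering idea can be sketched as follows. This is a minimal illustration, not a real ADC or CDN API; the function names and the simulated one-byte-at-a-time client are assumptions for the example.

```python
# Sketch of request buffering, the core idea behind TCP multiplexing:
# the edge device reads from the (possibly slow) client until it holds a
# complete HTTP request, and only then occupies an origin connection.

def read_full_request(chunks):
    """Accumulate client chunks until the HTTP header block is complete."""
    buffer = b""
    for chunk in chunks:           # chunks may arrive slowly, byte by byte
        buffer += chunk
        if b"\r\n\r\n" in buffer:  # end of headers: request is complete
            return buffer
    return None                    # client never finished the request

def edge_forward(chunks, origin):
    """Contact the origin only once the request has fully arrived."""
    request = read_full_request(chunks)
    if request is None:            # the origin was never bothered at all
        return b"HTTP/1.1 408 Request Timeout\r\n\r\n"
    return origin(request)         # origin connection is held only briefly

# Simulated slow client sending one byte at a time:
slow_client = (bytes([b]) for b in b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
response = edge_forward(slow_client, lambda req: b"HTTP/1.1 200 OK\r\n\r\n")
```

The key point is that `origin` is invoked only after the full request is in hand, so a slow client never holds an origin process open.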

Dynamic cache control

HTTP has a built-in system for cache control, using headers such as ETag, Expires and Last-Modified. Many CDNs and ADCs that claim to offer DSA have replaced this with their own system, calling it dynamic caching or dynamic cache control. This gives them more options to invalidate and bypass the cache than standard HTTP cache control.[9][10]

The purpose of dynamic cache control is to increase the cache-hit ratio of a website, that is, the proportion of content served from cache relative to the content generated by origin servers.
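As a quick worked example of the ratio just defined (the counters are hypothetical numbers chosen for illustration):

```python
# Hypothetical request counters illustrating the cache-hit ratio:
# responses served from cache divided by all responses.
cache_hits = 850      # served from the edge cache
origin_hits = 150     # generated by the origin servers
hit_ratio = cache_hits / (cache_hits + origin_hits)
print(f"cache-hit ratio: {hit_ratio:.0%}")  # → cache-hit ratio: 85%
```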

Due to the dynamic nature of web 2.0 websites, it is difficult to use static web caching. The reason is that dynamic websites, by definition, serve personalized content to different users and/or regions. For example, mobile users may see different content than desktop users, and registered users may need to see different content than anonymous users. Even among registered users, content may vary widely, for example on social media websites.

This makes it difficult, or even impossible, to store such content in a cache, since doing so creates the risk of serving content generated for a previous visitor that is not meant to be shown to the next visitor.

Dynamic cache control offers more options to configure caching, such as cookie-based cache control, which allows content to be served from cache based on the presence or absence of certain cookies. A cookie can differentiate anonymous from logged-in users, making it possible to serve content from cache to anonymous users and personalized content to logged-in users.
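A minimal sketch of this cookie-based decision, assuming an invented `session_id` cookie name and toy render functions (no real CDN exposes exactly this API):

```python
# Illustrative cookie-based cache control: anonymous users (no session
# cookie) share a cached copy, logged-in users bypass the cache entirely.

CACHE = {}

def handle(path, cookies, render_personalized, render_generic):
    if "session_id" in cookies:       # logged in: never serve the shared cache
        return render_personalized(path, cookies["session_id"])
    if path not in CACHE:             # anonymous: fill the cache on first miss
        CACHE[path] = render_generic(path)
    return CACHE[path]

generic = lambda p: f"<html>public page {p}</html>"
personal = lambda p, sid: f"<html>hello user {sid} on {p}</html>"

anon = handle("/home", {}, personal, generic)                    # cached copy
user = handle("/home", {"session_id": "42"}, personal, generic)  # personalized
```

The cache key is the path alone, which is safe here precisely because requests carrying a session cookie never touch the cache.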

Prefetching responses

If personalized content cannot be cached, it might be queued on an edge device: the device stores a list of possible responses, ready to be served. This differs from caching in that a prefetched response is served only once. Prefetching can be especially useful for accelerating responses from third-party APIs, such as advertisements.
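The serve-once property that distinguishes prefetching from caching can be sketched like this; the class name and the stand-in "ad" fetcher are assumptions for the example, not a real edge-device API:

```python
# Sketch of response prefetching: unlike a cache entry, each queued
# response is handed out exactly once and then discarded.
from collections import deque

class PrefetchQueue:
    def __init__(self, fetch, depth=3):
        self.fetch = fetch                  # e.g. a call to a third-party ad API
        self.queue = deque(fetch() for _ in range(depth))

    def serve(self):
        response = self.queue.popleft()     # served once, never reused
        self.queue.append(self.fetch())     # keep the queue topped up
        return response

counter = iter(range(1000))
pq = PrefetchQueue(lambda: f"ad-{next(counter)}")
first, second = pq.serve(), pq.serve()      # two different responses
```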

Route optimization methodology

Route optimization, also known as latency-based routing, optimizes the route of traffic between clients and origin servers in order to minimize latency. Route optimization can be done by a DNS provider[11][12] or by a CDN.[13]

Route optimization comes down to measuring multiple paths between the client and the origin server and recording the fastest one. That path can then be used to serve content when a client actually makes a request.
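A toy illustration of that measure-then-record step, with simulated round-trip times instead of live probes (the path names and latencies are invented; a real CDN would measure actual traffic):

```python
# Route optimization in miniature: probe the latency of several candidate
# paths, then record the fastest for use by subsequent requests.

def pick_fastest(probe, paths):
    """Measure each candidate path once and return the lowest-latency one."""
    measurements = {path: probe(path) for path in paths}
    return min(measurements, key=measurements.get)

# Simulated round-trip times in milliseconds for three candidate routes:
simulated_rtt_ms = {"via-pop-ams": 12.0, "via-pop-fra": 9.5, "direct": 31.0}
best = pick_fastest(simulated_rtt_ms.get, ["via-pop-ams", "via-pop-fra", "direct"])
```

In practice the probe would be repeated, since network conditions change and yesterday's fastest path may not be today's.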

Front end optimization

Front end optimization (FEO) and DSA both describe groups of techniques to improve online content delivery. There are overlaps, such as on-the-fly data compression and improved cache control.[14] However, the key differences are:

  • FEO focuses on changing the actual content, whereas DSA delivers content verbatim and instead optimizes how the bits travel across the network. FEO aims to decrease the number of objects required to download a website and to decrease the total amount of traffic. This can be done by device-aware content serving (e.g. reducing image quality), minification, resource consolidation[15] and inlining.[16] Because FEO changes the actual traffic, configuration tends to be more difficult, as there is a risk of harming the user experience by serving content that was incorrectly modified.
  • DSA focuses on decreasing page loading times and offloading web servers, especially for dynamic sites. FEO focuses primarily on decreasing page loading times and reducing bandwidth. Still, implementing FEO can also yield cost savings on origin servers: it decreases page loading times without rewriting code, saving the man-hours that would normally be needed to optimize code. Revenue might also increase from lower page loading times.[17]
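To make the contrast concrete, here is a minimal sketch of one FEO technique, minification, in which the payload itself is rewritten; DSA, by contrast, would deliver these bytes unchanged. The regexes are a deliberately naive illustration, not a production minifier:

```python
# Naive CSS minification (an FEO technique): the content is modified to
# shrink the payload, which is exactly what DSA never does.
import re

def minify_css(css):
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # strip comments
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)      # drop space around syntax
    return css.strip()

original = "body {\n  color: red;  /* brand color */\n}\n"
smaller = minify_css(original)    # same meaning to a browser, fewer bytes
```

This also shows why FEO configuration is riskier: a minifier bug changes what the user receives, whereas a verbatim DSA pipeline cannot corrupt content.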

References