Rate limiting


In computer networks, rate limiting is used to control the rate of traffic sent or received by a network interface controller, and can help prevent denial-of-service (DoS) attacks.[1]

Hardware appliances

Hardware appliances can limit the rate of requests at layer 4 or 5 of the OSI model.

Rate limiting can be induced by the sender's network protocol stack in response to a received ECN-marked packet, or by the network scheduler of any router along the path.
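A network scheduler commonly enforces such a rate with a token bucket, which permits short bursts while capping the long-run rate. A minimal sketch (the class name and `now` clock parameter are illustrative, not from any particular implementation):

```python
import time

class TokenBucket:
    """Tokens accrue at `rate` per second up to `capacity`; each
    packet or request consumes one token, or is dropped/delayed."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.now = now            # injectable clock for testing
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `rate=1` and `capacity=2`, a burst of two packets passes immediately, the third is rejected, and one more is admitted after each further second.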

While a hardware appliance can limit the rate for a given range of IP addresses at layer 4, it risks blocking networks with many users that share a single ISP-assigned IP address behind NAT.

Deep packet inspection can be used to filter on the session layer, but doing so effectively disables encryption protocols such as TLS and SSL between the appliance and the web server.

Web servers

Web servers typically use a central in-memory key-value store, such as Redis or Aerospike, for session management. A rate-limiting algorithm checks whether the user session (or IP address) should be limited, based on the information in the session cache.
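One common such algorithm is a fixed-window counter: each client gets one counter per time window, incremented on every request. A minimal sketch, using a plain dict in place of the central store (with Redis this would typically map to an `INCR` plus an `EXPIRE` on a per-window key):

```python
import time

def fixed_window_allow(store, key, limit, window, now=None):
    """Allow at most `limit` requests per `window` seconds per key.

    `store` stands in for the shared key-value store; `key` is the
    client identifier (session ID or IP address)."""
    t = now if now is not None else time.time()
    # One counter per client per window; the window index changes
    # every `window` seconds, so old counters simply stop matching.
    bucket = (key, int(t // window))
    count = store.get(bucket, 0) + 1
    store[bucket] = count
    return count <= limit
```

For example, with `limit=2` and `window=60`, the third request from the same IP address inside one minute is rejected, while requests from other clients or in the next minute still pass.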

When a client makes too many requests within a given timeframe, HTTP servers can respond with status code 429 Too Many Requests.

However, session management and the rate-limiting algorithm usually must be built into the application running on the web server, rather than into the web server itself.
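In the application, this often takes the form of a thin wrapper around each request handler that consults the rate limiter first and returns 429 when the limit is exceeded. A sketch under assumed interfaces (the `allow(client_id)` check and the `(status, headers, body)` handler signature are hypothetical):

```python
def rate_limited(handler, allow):
    """Wrap an application handler with a rate-limiting check.

    `allow(client_id)` returns True while the client is under its
    limit; `handler` returns a (status, headers, body) tuple."""
    def wrapped(client_id, request):
        if not allow(client_id):
            # 429 Too Many Requests; a Retry-After header is
            # optional but commonly included.
            return 429, {"Retry-After": "60"}, "Too Many Requests"
        return handler(client_id, request)
    return wrapped
```

The wrapper keeps the rate-limiting policy separate from the handler logic, so the same check can front any number of application endpoints.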

References

  1. Richard A. Deal (September 22, 2004). "Cisco Router Firewall Security: DoS Protection". Retrieved April 16, 2017.
  2. Nikrad Mahdi (April 12, 2017). "An Alternative Approach to Rate Limiting". Retrieved April 16, 2017.