Rate limiting
In computer networks, rate limiting is used to control the rate of requests sent or received by a network interface controller. It can be used to prevent DoS attacks[1] and limit web scraping.[2]
Research indicates that the flooding rate of a single zombie machine exceeds 20 HTTP GET requests per second,[3] while legitimate request rates are much lower.
Hardware appliances
Hardware appliances can limit the rate of requests on layer 4 or 5 of the OSI model.
Rate limiting can be induced by the network protocol stack of the sender due to a received ECN-marked packet and also by the network scheduler of any router along the way.
While a hardware appliance can limit the rate for a given range of IP addresses on layer 4, it risks blocking a network with many users who are masked by NAT behind a single IP address of an ISP.
Deep packet inspection can be used to filter on the session layer, but it effectively disables encryption protocols like TLS and SSL between the appliance and the protocol server (e.g. a web server).
Protocol servers
Protocol servers using a request/response model, such as FTP servers or, more typically, web servers, may use a central in-memory key-value database, like Redis or Aerospike, for session management. A rate limiting algorithm then checks whether the user session (or IP address) has to be limited based on the information in the session cache.
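Such a check can be sketched as a fixed window counter. The example below is a minimal illustration: it uses a plain Python dictionary as a stand-in for the shared cache, and the window length, request limit, and function names are assumptions, not a specific product's API. In production the counters would live in the central key-value store so that all server processes see the same counts.

```python
import time

# Hypothetical in-memory stand-in for a shared cache such as Redis.
session_cache = {}

WINDOW_SECONDS = 60   # assumed window length
MAX_REQUESTS = 100    # assumed per-window limit

def is_limited(key, now=None):
    """Fixed-window check: return True if `key` (a session ID or
    IP address) has exceeded MAX_REQUESTS in the current window."""
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)
    count = session_cache.get((key, window), 0) + 1
    session_cache[(key, window)] = count
    return count > MAX_REQUESTS
```

A new window starts every `WINDOW_SECONDS`, so counters from old windows simply stop being read; a real deployment would also expire them (e.g. via a TTL in the cache).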
If a client makes too many requests within a given time frame, an HTTP server can respond with status code 429 Too Many Requests.
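Mapping a limiter decision to an HTTP response can look like the following sketch. The function and parameter names are illustrative; `limiter` stands for any rate limiting check, and the Retry-After value is simplified to the full window length.

```python
def http_response(key, limiter, window_seconds=60):
    """Map a rate-limit decision to an HTTP status code and headers.
    `limiter` is any callable returning True when the client
    identified by `key` must be limited."""
    if limiter(key):
        # Retry-After tells the client how long to back off; a real
        # server would compute the remaining time in the window.
        return 429, {"Retry-After": str(window_seconds)}
    return 200, {}
```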
However, in some cases (e.g. web servers) the session management and rate limiting algorithm should be built into the application running on the web server (used for dynamic content), rather than into the web server itself.
When a protocol server or a network device notices that the configured request limit has been reached, it offloads new requests and does not respond to them. Sometimes new requests may be added to a queue to be processed once the input rate falls to an acceptable level, but at peak times the request rate can exceed even the capacity of such queues, and requests have to be dropped.
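The accept/queue/drop behaviour described above can be sketched as follows. This is a minimal illustration under assumed names and capacities, not a specific device's implementation: each "tick" represents one scheduling interval, requests beyond the per-tick limit are queued, and once the queue is full they are dropped.

```python
from collections import deque

class RequestGate:
    """Toy model of overflow handling in a rate-limited server:
    accept up to `limit_per_tick` requests per interval, queue the
    next `queue_capacity` requests, drop everything beyond that."""

    def __init__(self, limit_per_tick, queue_capacity):
        self.limit = limit_per_tick
        self.capacity = queue_capacity
        self.queue = deque()
        self.accepted_this_tick = 0

    def submit(self, request):
        if self.accepted_this_tick < self.limit:
            self.accepted_this_tick += 1
            return "accepted"
        if len(self.queue) < self.capacity:
            self.queue.append(request)
            return "queued"
        return "dropped"

    def tick(self):
        """Advance to the next interval and drain queued requests,
        again up to the per-tick limit."""
        self.accepted_this_tick = 0
        while self.queue and self.accepted_this_tick < self.limit:
            self.queue.popleft()
            self.accepted_this_tick += 1
```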
Data centers
Data centers widely use rate limiting to control the share of resources given to different tenants and applications according to their service level agreement.[4] A variety of rate limiting techniques are applied in data centers using software and hardware. Virtualized data centers may also apply rate limiting at the hypervisor layer. Two important performance metrics of rate limiters in data centers are resource footprint (memory and CPU usage), which determines scalability, and precision. There is usually a trade-off: higher precision can be achieved by dedicating more resources to the rate limiters. A considerable body of research focuses on improving the performance of rate limiting in data centers.[4]
See also
- Algorithms
- Token bucket[5]
- Leaky bucket
- Fixed window counter[5]
- Sliding window log[5]
- Sliding window counter[5]
- Libraries
- ASP.NET Web API rate limiter
- ASP.NET Core rate limiting middleware
- Rate limiting for .NET (PCL Library)
- Rate limiting for Node.JS
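The token bucket algorithm listed above can be sketched as follows. This is a minimal illustration with assumed parameter names: tokens refill continuously at `rate` per second up to `capacity`, and a request is allowed only when a whole token is available, which permits short bursts up to the bucket size.

```python
class TokenBucket:
    """Minimal token bucket sketch. The caller passes the current
    time explicitly, which keeps the class easy to test."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the last call

    def allow(self, now):
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```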
References
- ^ Richard A. Deal (September 22, 2004). "Cisco Router Firewall Security: DoS Protection". Retrieved April 16, 2017.
- ^ Greenberg, Andy. "An Absurdly Basic Bug Let Anyone Grab All of Parler's Data". Wired. 12 January 2021. Retrieved 12 January 2021.
- ^ Jinghe Jin, Nazarov Nodir, Chaetae Im, Seung Yeob Nam, "Mitigating HTTP GET Flooding Attacks through Modified NetFPGA Reference Router," 7 November 2014, p. 1. Retrieved 19 December 2021.
- ^ a b M. Noormohammadpour, C. S. Raghavendra, "Datacenter Traffic Control: Understanding Techniques and Trade-offs," IEEE Communications Surveys & Tutorials, vol. PP, no. 99, pp. 1-1.
- ^ a b c d Nikrad Mahdi (April 12, 2017). "An Alternative Approach to Rate Limiting". Retrieved April 16, 2017.