Noop scheduler

From Wikipedia, the free encyclopedia
Figure: The location of I/O schedulers in a simplified structure of the Linux kernel.

The NOOP scheduler is the simplest I/O scheduler for the Linux kernel. This scheduler was developed by Jens Axboe.

The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging. It is useful when it has been determined that the host should not attempt to re-order requests based on the sector numbers contained therein; in other words, it assumes that the host has no useful knowledge of how to productively re-order requests.
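The FIFO-plus-merging behavior described above can be illustrated with a small simulation. This is not kernel code, just a sketch of the idea: a request that begins exactly where the last queued request ends is merged into it, and everything else is queued and dispatched strictly in arrival order, with no sorting by sector.

```python
class Request:
    """A simplified block I/O request."""
    def __init__(self, sector, nr_sectors):
        self.sector = sector          # starting sector
        self.nr_sectors = nr_sectors  # length in sectors

class NoopQueue:
    """Illustrative sketch of NOOP's behavior: FIFO plus back-merging."""
    def __init__(self):
        self.fifo = []

    def add(self, req):
        # Back-merge: if the new request is contiguous with the most
        # recently queued one, extend it instead of queueing separately.
        if self.fifo:
            last = self.fifo[-1]
            if last.sector + last.nr_sectors == req.sector:
                last.nr_sectors += req.nr_sectors
                return
        self.fifo.append(req)         # otherwise plain FIFO insertion

    def dispatch(self):
        # Requests leave in arrival order; no seek-distance re-ordering.
        return self.fifo.pop(0) if self.fifo else None

q = NoopQueue()
q.add(Request(100, 8))
q.add(Request(108, 8))   # contiguous: merged into the previous request
q.add(Request(50, 8))    # earlier sector, but still queued behind
first = q.dispatch()     # the merged 16-sector request at sector 100
```

Note that the request at sector 50 is dispatched last even though servicing it first might shorten a disk's seek path; deciding not to exploit that information is exactly the point of NOOP.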

There are (generally) three basic situations where this behavior is desirable:

  • If I/O scheduling will be handled at a lower layer of the I/O stack (for example, at the block device, by an intelligent RAID controller, by Network Attached Storage, or by an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network).[1] Since I/O requests are potentially re-scheduled at the lower level, re-sequencing I/O operations at the host level spends host CPU time on work that will simply be undone when the requests reach the lower level, increasing latency and decreasing throughput for no productive reason.
  • Because accurate details of sector position are hidden from the host system. An example would be a RAID controller that performs no scheduling of its own. Even though the host can re-order requests and the RAID controller cannot, the host system lacks the visibility to re-order them accurately enough to lower seek time. Since the host has no way of knowing what a more streamlined queue would "look" like, it cannot restructure the active queue in its image, but can only pass the requests on to the device that is (theoretically) more aware of such details.
  • Because movement of the read/write head has been determined not to affect application performance enough to justify the additional CPU time spent re-ordering requests. This is usually the case with non-rotational media such as flash drives or solid-state drives (SSDs).
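In the scenarios above, the scheduler is selected per device through sysfs. The following is a minimal sketch under assumed conventions (Linux only; writing requires root; the helper names are illustrative, not a standard API). Reading `/sys/block/<dev>/queue/scheduler` lists the available schedulers with the active one in brackets, e.g. `noop [deadline] cfq`.

```python
def scheduler_path(dev):
    # Standard sysfs location of the per-device scheduler attribute.
    return f"/sys/block/{dev}/queue/scheduler"

def available_schedulers(dev):
    """Return the scheduler names listed for a device, e.g.
    ['noop', '[deadline]', 'cfq'] with the active one bracketed."""
    with open(scheduler_path(dev)) as f:
        return f.read().split()

def active_scheduler(tokens):
    """Pick the bracketed (active) entry out of the token list."""
    return next(t.strip("[]") for t in tokens if t.startswith("["))

def set_scheduler(dev, name):
    """Activate a scheduler by writing its name back (root required).
    Equivalent shell: echo noop > /sys/block/sda/queue/scheduler"""
    with open(scheduler_path(dev), "w") as f:
        f.write(name)
```

The change takes effect immediately for the device but does not persist across reboots; distributions typically persist it via a boot parameter or a udev rule.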

However, NOOP is not necessarily the preferred I/O scheduler for the above scenarios. As with any performance tuning, guidance must be based on observed workload patterns (which undermines any simplistic rule of thumb). If applications contend for available I/O bandwidth, another scheduler may still deliver better performance by carving up that bandwidth more intelligently for the applications deemed most important. For example, an LDAP directory server may benefit from the deadline scheduler's read preference and latency guarantees, while a user with a desktop system running many different applications may want access to CFQ's tunables or its ability to prioritize bandwidth for particular applications over others (ionice).

If there is no contention between applications, then selecting a scheduler for the three scenarios listed above brings little to no benefit, because there is no workload whose operations can be deprioritized to make additional capacity available to another. In other words, if the I/O paths are not saturated and the requests for all workloads do not cause unreasonable movement of the drive heads (as far as the operating system can tell), prioritizing one workload may simply waste CPU time on I/O scheduling instead of providing the desired benefits.

The Linux kernel also exposes the nomerges sysfs parameter as a scheduler-agnostic setting that allows the block layer's request-merging logic to be disabled either entirely or only for more complex merge attempts.[2] This reduces the need for the NOOP scheduler, since the overhead of most I/O schedulers is associated with their attempts to locate adjacent sectors in the request queue in order to merge them. However, most I/O workloads benefit from a certain level of request merging, even on fast low-latency storage such as SSDs.[3][4]
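A hedged sketch of the nomerges knob (the wrapper function is illustrative; the path and the three values are as documented in Documentation/block/queue-sysfs.txt, and writing requires root):

```python
def set_nomerges(dev, level):
    """Set the scheduler-agnostic merge policy for a block device.
    Documented values (Documentation/block/queue-sysfs.txt):
      0 - all request merging enabled (the default)
      1 - only simple one-hit merges are attempted
      2 - no merge attempts are made
    """
    if level not in (0, 1, 2):
        raise ValueError("nomerges must be 0, 1 or 2")
    with open(f"/sys/block/{dev}/queue/nomerges", "w") as f:
        f.write(str(level))
```

Setting level 2 on a device approximates one of NOOP's properties (no merge overhead) under any scheduler, which is why this knob reduces the need to switch schedulers outright.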

References


  1. ^ "Choosing an I/O Scheduler for Red Hat Enterprise Linux 4 and the 2.6 Kernel". Red Hat. Retrieved 2007-08-10. 
  2. ^ "Documentation/block/queue-sysfs.txt". Linux kernel documentation. December 1, 2014. Retrieved December 14, 2014. 
  3. ^ "6.4.3. Noop (Red Hat Enterprise Linux 6 Documentation)". Red Hat. October 8, 2014. Retrieved December 14, 2014. 
  4. ^ Paul Querna (August 15, 2014). "Configure flash drives in High I/O instances as Data drives". Rackspace. Retrieved December 15, 2014. 
