The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and performs request merging, but no re-ordering. It is useful when it has been determined that the host should not attempt to re-order requests based on the sector numbers they contain; in other words, the scheduler assumes the host has no useful knowledge of how to productively re-order requests.
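The queueing behaviour described above can be sketched as a toy model. This is illustrative Python, not kernel code, and the class and method names are invented for this example:

```python
from collections import deque

class NoopQueue:
    """Toy model of NOOP: FIFO dispatch plus simple merging of
    contiguous requests. Requests are (start_sector, num_sectors)
    tuples; this is an illustration, not the kernel implementation."""

    def __init__(self):
        self.queue = deque()

    def add(self, start, count):
        # Try to merge with an already-queued contiguous request.
        for i, (s, c) in enumerate(self.queue):
            if s + c == start:             # back-merge: new request follows
                self.queue[i] = (s, c + count)
                return
            if start + count == s:         # front-merge: new request precedes
                self.queue[i] = (start, c + count)
                return
        self.queue.append((start, count))  # otherwise, plain FIFO insert

    def dispatch(self):
        # Requests leave strictly in arrival order: no sector-based sorting.
        return self.queue.popleft() if self.queue else None
```

For example, adding requests for sectors (0, 8) and (8, 8) merges them into a single (0, 16) request, while a non-contiguous (100, 4) request is simply appended and dispatched in arrival order.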
There are (generally) three basic situations in which this is desirable:
- If I/O scheduling will be handled at a lower layer of the I/O stack (for example, at the block device, by an intelligent RAID controller, by Network Attached Storage, or by an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network). Since I/O requests are potentially re-scheduled at that lower level, re-sequencing IOPs at the host level spends host CPU time on operations that will simply be undone when they reach the lower level, increasing latency and decreasing throughput for no productive reason.
- Because accurate details of sector position are hidden from the host system. An example would be a RAID controller that performs no scheduling of its own: even though the host can re-order requests and the RAID controller cannot, the host lacks the visibility to re-order them accurately to lower seek time. Since the host has no way of knowing what a more streamlined queue would "look" like, it cannot restructure the active queue accordingly, but can merely pass requests on to the device that is (theoretically) more aware of such details.
- Because movement of the read/write head has been determined not to impact application performance enough to justify the additional CPU time spent re-ordering requests. This is usually the case with non-rotational media such as flash drives and solid-state drives.
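On Linux, the active scheduler for a block device is exposed through sysfs, with the active name shown in brackets (e.g. `noop deadline [cfq]` on 2.6-era kernels). The snippet below reads it defensively; the function name and default device are assumptions for illustration:

```python
import os
import re

def active_scheduler(device="sda"):
    """Return the active I/O scheduler for a block device, or None if
    the sysfs file is unavailable (e.g. not running on Linux)."""
    path = f"/sys/block/{device}/queue/scheduler"
    if not os.path.exists(path):
        return None
    with open(path) as f:
        contents = f.read()
    # The active scheduler is the bracketed name, e.g. "noop deadline [cfq]".
    match = re.search(r"\[(\w+)\]", contents)
    return match.group(1) if match else None
```

Switching schedulers is done by writing a name into the same file as root, e.g. `echo noop > /sys/block/sda/queue/scheduler`.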
This is not to say that NOOP is necessarily the preferred I/O scheduler for the above scenarios. As with any performance tuning, guidance must be based on observed workload patterns (undermining one's ability to create simplistic rules of thumb). If there is contention for available I/O bandwidth from other applications, it is still possible that another scheduler will generate better performance by virtue of more intelligently carving up that bandwidth for the applications deemed most important. For example, with an LDAP directory server a user may want deadline's read preference and latency guarantees. In another example, a user with a desktop system running many different applications may want access to CFQ's tunables or its ability to prioritize bandwidth for particular applications over others (ionice).
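The ionice mechanism mentioned above is typically driven from the command line, e.g. `ionice -c 2 -n 7 <command>` to run a command at the lowest best-effort I/O priority. A small wrapper can be sketched as follows; the function name is invented for this example, and the class/level semantics apply to CFQ:

```python
import shutil
import subprocess

def run_with_io_priority(cmd, level=7):
    """Run a command at best-effort (class 2) I/O priority via ionice,
    falling back to plain execution if ionice is not installed.
    Illustrative only; priority levels range 0 (highest) to 7 (lowest)."""
    if shutil.which("ionice"):
        cmd = ["ionice", "-c", "2", "-n", str(level)] + list(cmd)
    return subprocess.run(cmd).returncode
```

This only has an observable effect under a scheduler that honours I/O priorities (such as CFQ); under NOOP the request classes are ignored.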
It should be noted that without contention between applications there is little to no benefit to selecting a scheduler in the three scenarios listed, because there is no competing workload whose operations could be deprioritized to make additional capacity available to another. If the I/O paths are not saturated, and the combined workloads do not cause an unreasonable amount of drive-head movement (as far as the OS is aware), the CPU time spent scheduling I/O to prioritize one workload may well exceed any benefit of doing so.