I/O scheduling

Input/output (I/O) scheduling is the method that computer operating systems use to decide the order in which block I/O operations are submitted to storage volumes. I/O scheduling is sometimes called 'disk scheduling'.

Purpose

Figure: The position of I/O schedulers within various layers of the Linux kernel's I/O stack.[1]

An I/O scheduler can serve several goals; common ones include:

  • To minimize time wasted by hard disk seeks
  • To prioritize certain processes' I/O requests
  • To give a share of the disk bandwidth to each running process
  • To guarantee that certain requests will be issued before a particular deadline
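The last two goals pull in opposite directions: minimizing seeks favors serving whichever request is closest to the disk head, while deadlines demand that an old request eventually be issued regardless of its position. The following is a hypothetical sketch of how a deadline-style scheduler might reconcile them (the function and tuple layout are illustrative, not the Linux deadline scheduler's actual interface): requests are normally served nearest-to-head, but any request whose deadline has passed is issued first.

```python
def pick_next(pending, now, head):
    """Choose the next request to issue.

    pending -- list of (lba, deadline) tuples, where lba is the request's
               logical block address and deadline an absolute time
    now     -- the current time
    head    -- the disk head's current logical block address
    """
    expired = [r for r in pending if r[1] <= now]
    if expired:
        # A deadline has passed: service the most overdue request so the
        # "issued before a particular deadline" guarantee holds.
        return min(expired, key=lambda r: r[1])
    # Otherwise minimize seek distance from the current head position.
    return min(pending, key=lambda r: abs(r[0] - head))

queue = [(500, 8.0), (40, 2.0), (520, 9.0)]
print(pick_next(queue, now=1.0, head=490))   # (500, 8.0): nearest to head
print(pick_next(queue, now=3.0, head=490))   # (40, 2.0): its deadline expired
```

Without the deadline check, the distant request at block 40 could be starved indefinitely by a steady stream of requests near the head; the expiry rule bounds how long any request waits.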

Implementation

I/O scheduling usually has to work with hard disks, which share the property that access time is long for requests far from the current position of the disk head (this movement is called a seek). To minimize the effect seeks have on system performance, most I/O schedulers implement a variant of the elevator algorithm, which reorders the randomly ordered incoming requests into the order in which they will be encountered on the disk.
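The elevator idea can be sketched in a few lines. This is a minimal illustration of the classic SCAN variant, not any kernel's actual implementation: pending requests, identified here simply by their logical block addresses, are served in ascending order from the current head position, and the remainder on a reverse sweep.

```python
def elevator_order(pending, head):
    """Return the pending block addresses in SCAN order starting at `head`."""
    # Requests at or beyond the head are served on the upward sweep...
    ahead = sorted(r for r in pending if r >= head)
    # ...then the head reverses and serves the rest on the way back down.
    behind = sorted((r for r in pending if r < head), reverse=True)
    return ahead + behind

# A randomly ordered arrival stream, with the head currently at block 53:
requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(requests, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```

Served in arrival order, this stream would cost hundreds of blocks of back-and-forth head travel; the elevator ordering turns it into one upward sweep and one downward sweep.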

Common scheduling disciplines

  • First in, first out (FIFO), also known as NOOP
  • Deadline scheduler
  • Anticipatory scheduling
  • Completely Fair Queuing (CFQ)
  • Budget Fair Queueing (BFQ)

References

  1. ^ Evgeny Budilovsky (April 2013). "Kernel-Based Mechanisms for High-Performance I/O" (PDF). Tel Aviv University. p. 8. Retrieved 2014-12-28. 