I/O scheduling

I/O scheduling is the term used to describe the method by which computer operating systems decide the order in which block I/O operations are submitted to the disk subsystem. I/O scheduling is sometimes called 'disk scheduling'.

Purpose

I/O schedulers can serve many purposes depending on their design goals; common goals include:

  • To minimize time wasted by hard disk seeks.
  • To prioritize certain processes' I/O requests.
  • To give a share of the disk bandwidth to each running process.
  • To guarantee that certain requests will be issued before a particular deadline.

Implementation

I/O scheduling usually has to work with hard disks, which share the property that access time is long for requests far away from the current position of the disk head (moving the head is called a seek). To minimize the effect seeks have on system performance, most I/O schedulers implement a variant of the elevator algorithm, which re-orders the incoming, randomly ordered requests into the order in which they will be encountered on the disk.
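
A minimal sketch of the elevator idea, in C, is shown below. It keeps a small array of pending sector numbers, sorts them, and services the requests that lie ahead of the head before reversing direction, in the style of SCAN. The request values, queue size, and function names here are invented for illustration and do not correspond to any particular kernel's implementation.

    /* Sketch of the elevator (SCAN) idea: pending requests are kept sorted
     * by sector number and served in the head's current direction of travel,
     * reversing only when no requests remain ahead of it.
     * All names and values here are illustrative, not from any real kernel. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);   /* -1, 0 or 1 without overflow */
    }

    /* Serve every queued sector: sweep upward from `head`, then back down. */
    static void elevator_serve(int *sectors, int n, int head)
    {
        qsort(sectors, n, sizeof(int), cmp_int);

        /* Upward sweep: requests at or beyond the current head position. */
        for (int i = 0; i < n; i++)
            if (sectors[i] >= head)
                printf("service sector %d (upward sweep)\n", sectors[i]);

        /* Reverse and sweep downward for the requests left behind the head. */
        for (int i = n - 1; i >= 0; i--)
            if (sectors[i] < head)
                printf("service sector %d (downward sweep)\n", sectors[i]);
    }

    int main(void)
    {
        int pending[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
        elevator_serve(pending, 8, 53);   /* head currently at sector 53 */
        return 0;
    }

Because requests are serviced in sector order along each sweep rather than in arrival order, the total head movement is much smaller than a first-come, first-served policy would produce for the same request stream.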

Common disk scheduling disciplines

See also
