Blocking occurs when a subroutine does not return until it either completes its task or fails with an error or exception. A process that is blocked is one that is waiting for some event, such as a resource (for example, the CPU) becoming available or the completion of an I/O operation.
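As an illustration of a blocking call, the following sketch uses Python's standard `queue.Queue.get()`, which does not return until another thread makes an item available; the thread name and sleep interval are arbitrary choices for the example.

```python
import threading
import time
import queue

q = queue.Queue()

def producer():
    time.sleep(0.1)   # simulate a slow I/O operation
    q.put("data")     # the event: the resource becomes available

threading.Thread(target=producer).start()

# get() blocks the calling thread until the producer puts an item
item = q.get()
print(item)
```

Here the main thread is blocked inside `get()` for roughly the duration of the simulated I/O, then resumes once the event occurs.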
When one task is using a resource, it is generally not possible or desirable for another task to access it. Techniques of mutual exclusion are used to prevent such concurrent use. When the other task is blocked, it cannot execute until the first task has finished using the shared resource.
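A minimal sketch of mutual exclusion, using Python's `threading.Lock` (one of many possible primitives): whichever thread reaches the lock second blocks until the first releases it, so the shared counter is updated safely.

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100_000):
        with lock:        # a second thread blocks here until the lock is free
            counter += 1  # the shared resource, accessed by one task at a time

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)
```

Without the lock, the two threads could interleave their read-modify-write sequences and lose updates; with it, the final count is deterministic.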
Programming languages and scheduling algorithms are designed to minimize the overall effect of blocking, and to prevent deadlock, in which two or more tasks are each blocked while waiting for a resource that another holds.
In a hypothetical two-state (running and not-running) model, processes would go onto a single ready queue before being dispatched for execution. In the absence of a blocked state, if scheduling priority were based on waiting time, blocked processes could be dispatched despite having nothing to operate on, wasting processor time. The efficient use of resources therefore argues for a separate blocked queue, in which processes wait until the event they depend on occurs.
Once the event occurs, the process moves from the blocked state to a ready (runnable) state.
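The ready/blocked queue structure described above can be sketched as follows; the process names, the `io:` event labels, and the helper functions are invented for illustration, not taken from any particular operating system.

```python
from collections import deque

ready = deque(["A", "B"])   # processes eligible to run
blocked = {}                # process -> event it is waiting on

def dispatch_and_block():
    """Dispatch the next ready process; here it immediately blocks on I/O."""
    proc = ready.popleft()
    blocked[proc] = f"io:{proc}"   # wait for this process's I/O completion
    return proc

def fire(event):
    """An event occurs: move any process waiting on it back to the ready queue."""
    for proc, ev in list(blocked.items()):
        if ev == event:
            del blocked[proc]
            ready.append(proc)

dispatch_and_block()   # "A" is dispatched and blocks on event "io:A"
fire("io:A")           # the event fires; "A" becomes runnable again
print(list(ready))
```

Because "A" rejoins the ready queue only after its event fires, the scheduler never wastes processor time dispatching a process that has nothing to operate on.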