Priority inversion

In computer science, priority inversion is a problematic scenario in scheduling in which a higher priority task is indirectly preempted by a lower priority task, effectively "inverting" the relative priorities of the two tasks.

This violates the priority model that high priority tasks can only be prevented from running by higher priority tasks, and only briefly by low priority tasks which will quickly complete their use of a resource shared by the high and low priority tasks.

Example of a priority inversion

Consider a low priority task L that requires a resource R. Suppose L is running and acquires R. Now a high priority task H, which also requires R, starts after L has acquired R. H therefore has to wait until L relinquishes R.

Everything works as expected up to this point, but problems arise when a new task M with medium priority starts during this time. Since R is still in use (by L), H cannot run. Since M is the highest priority unblocked task, it will be scheduled before L. Since L has been preempted by M, L cannot relinquish R. So M will run until it finishes, then L will run, at least up to a point where it can relinquish R, and only then will H run. Thus, in the above scenario, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.
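The following is a minimal sketch of this scenario in C, assuming a Linux-like POSIX system with a single CPU, permission to use the SCHED_FIFO real-time policy, and illustrative task names and priority values (10, 20, 30, 40); the default mutex deliberately has no priority inheritance, so the medium priority thread can delay the high priority one exactly as described above. Error checking is omitted for brevity.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t resource_r = PTHREAD_MUTEX_INITIALIZER; /* resource R, no inheritance */

static void busy_work(long n)          /* CPU-bound work; sleeping would hide the inversion */
{
    volatile long i;
    for (i = 0; i < n; i++)
        ;
}

static void *task_l(void *arg)         /* low priority: holds R, then is preempted by M */
{
    pthread_mutex_lock(&resource_r);
    busy_work(500000000L);
    pthread_mutex_unlock(&resource_r);
    return NULL;
}

static void *task_h(void *arg)         /* high priority: blocks on R held by L */
{
    pthread_mutex_lock(&resource_r);
    puts("H finally acquired R");
    pthread_mutex_unlock(&resource_r);
    return NULL;
}

static void *task_m(void *arg)         /* medium priority: never touches R, but outranks L */
{
    busy_work(1000000000L);            /* keeps the CPU away from L, so H stays blocked too */
    return NULL;
}

static pthread_t start(void *(*fn)(void *), int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_create(&tid, &attr, fn, NULL);
    return tid;
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 40 };

    /* Run main above the other threads so it can still create H and M
     * while L is busy-spinning (requires real-time privileges). */
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

    pthread_t l = start(task_l, 10);   /* L runs first and acquires R */
    sleep(1);
    pthread_t h = start(task_h, 30);   /* H blocks on R */
    pthread_t m = start(task_m, 20);   /* M preempts L: the inversion */

    pthread_join(m, NULL);
    pthread_join(l, NULL);
    pthread_join(h, NULL);
    return 0;
}

On a single CPU, "H finally acquired R" is printed only after M has run to completion and L has released R, even though H has the highest priority of the three tasks.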

In some cases, priority inversion can occur without causing immediate harm; the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of the resource, it might lead to a system malfunction or trigger pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars lander "Mars Pathfinder"[1][2] is a classic example of problems caused by priority inversion in realtime systems.

Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high priority task has a high priority because it is more likely to be subject to strict time constraints—it may be providing data to an interactive user, or acting subject to realtime response guarantees. Because priority inversion results in the execution of the low priority task blocking the high priority task, it can lead to reduced system responsiveness, or even the violation of response time guarantees.

A similar problem called deadline interchange can occur within earliest deadline first scheduling (EDF).

Solutions

The existence of this problem has been known since the 1970s, but there is no foolproof method to predict when it will occur. There are, however, many existing solutions, of which the most common ones are:

Disabling all interrupts to protect critical sections
When disabled interrupts are used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there is only one piece of lock data (the interrupt-enable bit), locks cannot be acquired out of order, and so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled. If only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts. A simple variation, "single shared-flag locking", is used on some systems with multiple CPUs. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Interprocessor communications are expensive and slow on most multiple CPU systems. Therefore, most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. These schemes also require clever programming to keep the critical sections very brief, under 100 microseconds in practical systems. Many software engineers consider them impractical in general-purpose computers.
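As a sketch of this approach on a uniprocessor embedded target, the snippet below wraps a short critical section in hypothetical irq_save()/irq_restore() primitives. These names are illustrative, not a standard API; on a real microcontroller they would save and clear, then restore, the CPU's global interrupt-enable flag, and they are stubbed here only so the example compiles on a hosted system.

#include <stdio.h>

typedef unsigned long irq_state_t;

/* Hypothetical primitives: on real hardware each would be one or two
 * instructions touching the global interrupt-enable flag. Stubbed here. */
static irq_state_t irq_save(void)             { return 0; }
static void        irq_restore(irq_state_t s) { (void)s; }

static volatile int shared_counter;           /* resource shared with an interrupt handler */

void update_shared_counter(int delta)
{
    irq_state_t flags = irq_save();           /* nothing can preempt us now */
    shared_counter += delta;                  /* critical section: keep it very short */
    irq_restore(flags);                       /* restore the previous interrupt state */
}

int main(void)
{
    update_shared_counter(1);
    printf("counter = %d\n", shared_counter);
    return 0;
}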
A priority ceiling
With priority ceilings, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well provided that no high priority task that tries to access the mutex has a priority higher than the ceiling priority.
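Where the operating system supports it, POSIX exposes this policy as the "priority protect" mutex protocol. The sketch below, assuming _POSIX_THREAD_PRIO_PROTECT support and an illustrative ceiling value of 30, creates such a mutex; tasks then lock it with pthread_mutex_lock() as usual and run at the ceiling priority while holding it.

#include <pthread.h>

static pthread_mutex_t r_mutex;               /* protects resource R */

/* Create a mutex using the POSIX priority-ceiling ("priority protect")
 * protocol; the ceiling must be at least as high as the priority of
 * every task that will ever lock the mutex. */
int init_ceiling_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 30);
    return pthread_mutex_init(&r_mutex, &attr);
}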
Priority inheritance
Under the policy of priority inheritance, whenever a high priority task has to wait for some resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest waiting task for the duration of its own use of the shared resource, thus keeping medium priority tasks from pre-empting the (originally) low priority task, and thereby, in effect, the waiting high priority task as well. Once the resource is released, the low priority task continues at its original priority level.
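In POSIX terms this corresponds to the PTHREAD_PRIO_INHERIT mutex protocol (on systems that provide _POSIX_THREAD_PRIO_INHERIT); a minimal sketch of creating such a mutex follows. Rebuilding the earlier L/H/M example with this mutex removes the inversion: once H blocks on R, L inherits H's priority and can no longer be preempted by M.

#include <pthread.h>

static pthread_mutex_t r_mutex;               /* protects resource R */

/* Create a mutex using the POSIX priority-inheritance protocol: while a
 * higher priority thread is blocked on the mutex, the current holder is
 * temporarily boosted to that thread's priority. */
int init_inheritance_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    return pthread_mutex_init(&r_mutex, &attr);
}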

See also

References