Critical section
In concurrent programming, a critical section or critical region is a part of a multi-threaded program that may not be concurrently executed by more than one of the program's processes.[a] In other words, it is a piece of a program that requires mutual exclusion of access.[1] Typically, the critical section accesses a shared resource, such as a data structure, a peripheral device, or a network connection, that does not allow multiple concurrent accesses.[2]
A critical section may consist of multiple discontiguous parts of the program's code. For example, one part of a program might read from a file that another part wishes to modify. These parts together form a single critical section, since simultaneous readings and modifications may interfere with each other.[1]
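For illustration only (this sketch is not from the cited sources), the following minimal C/pthreads program shows why such code needs mutual exclusion: two threads increment a shared counter, and because the read-modify-write is not atomic, updates are routinely lost.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared state accessed by both threads. */
static long counter = 0;

static void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        counter++;   /* read-modify-write: not atomic, so this line is a critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Without mutual exclusion the printed value is usually less than 2000000. */
    printf("counter = %ld\n", counter);
    return 0;
}
```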
A critical section will usually terminate in finite time,[1] and a thread, task, or process will only have to wait a bounded time to enter it (bounded waiting). Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore.
By carefully controlling which variables are modified inside and outside the critical section, conflicting concurrent access to the shared state is prevented. A critical section is typically used when a multithreaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure that a shared resource, for example a printer, can only be accessed by one process at a time.
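A minimal sketch of this pattern in C using a POSIX mutex: the function and variable names here are illustrative, not from any particular program. Two related variables are updated inside one critical section so that no other thread can observe or create an inconsistent intermediate state.

```c
#include <pthread.h>

/* Two related variables that must stay consistent with each other. */
static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;
static long balance = 100;
static long withdrawn = 0;

/* The critical section updates both variables as one indivisible step. */
void withdraw(long amount)
{
    pthread_mutex_lock(&account_lock);    /* enter the critical section */
    if (balance >= amount) {
        balance   -= amount;
        withdrawn += amount;
    }
    pthread_mutex_unlock(&account_lock);  /* leave the critical section */
}
```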
How critical sections are implemented varies among operating systems.
The simplest method is to prevent any change of processor control inside the critical section. On uni-processor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. Any thread of execution entering any critical section anywhere in the system will, with this implementation, prevent any other thread, including an interrupt, from being granted processing time on the CPU—and therefore from entering any other critical section or, indeed, any code whatsoever—until the original thread leaves its critical section.
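The following C sketch outlines that uniprocessor pattern. The interrupt-control routines are hypothetical placeholders (stubbed out here so the example compiles); real primitives are architecture- and kernel-specific.

```c
/* Uniprocessor sketch only. The interrupt-control routines below are
 * hypothetical stand-ins for whatever the platform actually provides,
 * e.g. architecture-specific disable/restore-interrupt instructions. */

typedef unsigned long irq_state_t;

static irq_state_t save_and_disable_interrupts(void) { return 0; }  /* stub */
static void restore_interrupts(irq_state_t s) { (void)s; }          /* stub */

static volatile long shared_counter;

void critical_update(void)
{
    irq_state_t state = save_and_disable_interrupts();  /* entry: no context switch can occur */
    shared_counter++;                                    /* the critical section itself */
    restore_interrupts(state);                           /* exit: restore the previous interrupt state */
}
```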
This brute-force approach can be improved upon by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores.
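A minimal sketch of that discipline using a POSIX binary semaphore (initialised to 1); the names are illustrative. Threads blocked in sem_wait only wait for this particular section, while threads working elsewhere continue to run.

```c
#include <semaphore.h>

/* A binary semaphore guarding one critical section; initialise once
 * elsewhere with sem_init(&section_lock, 0, 1). */
static sem_t section_lock;
static int shared_value;

void update_shared_value(int v)
{
    sem_wait(&section_lock);        /* obtain the semaphore (blocks if another thread holds it) */
    shared_value = v;               /* critical section */
    sem_post(&section_lock);        /* release the semaphore on leaving the section */
}
```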
Formal reasoning
In a control-flow graph, a thread follows a particular path, which may contain cycles. The path followed by one thread A and the path followed by another thread B may or may not have nodes in common. If the paths share nodes and the underlying data is changed at any shared node by either thread, then those shared portions of the paths constitute a critical section, because the order of execution starts to matter and a race condition exists. If no shared data is changed, there is no race condition even where the paths overlap.
Kernel-level critical sections
Typically, critical sections prevent process and thread migration between processors and the preemption of processes and threads by interrupts and other processes and threads.
Critical sections often allow nesting. Nesting allows multiple critical sections to be entered and exited at little cost.
If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the currently executing process or thread to run to completion of the critical section, or it will schedule the process or thread for another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not schedule another process or thread to run while the current process or thread is in a critical section.
Similarly, if an interrupt occurs in a critical section, the interrupt's information is recorded for future processing, and execution is returned to the process or thread in the critical section. Once the critical section is exited (and, in some cases, once the scheduled quantum completes), the pending interrupt will be executed. The concept of a scheduling quantum applies to "round-robin" and similar scheduling policies.
Since critical sections may execute only on the processor on which they are entered, synchronization is only required within the executing processor. This allows critical sections to be entered and exited at almost zero cost. No interprocessor synchronization is required, only instruction stream synchronization. Most processors provide the required amount of synchronization by the simple act of interrupting the current execution state. This allows critical sections in most cases to be implemented as nothing more than a per-processor count of critical sections entered.
Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other processors for execution.
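Under those assumptions, the implementation can reduce to little more than a per-processor nesting count, as in the hypothetical C sketch below. Every name here is illustrative rather than taken from a real kernel, and the interrupt-control primitives are stubbed so the example is self-contained.

```c
/* Hypothetical uniprocessor/per-CPU sketch of nested kernel critical sections. */

typedef unsigned long irq_state_t;

static irq_state_t save_and_disable_interrupts(void) { return 0; }  /* stub for the platform primitive */
static void restore_interrupts(irq_state_t s) { (void)s; }          /* stub for the platform primitive */
static void run_pending_interrupts(void) { }                        /* stub: drains recorded interrupts */

static int         nesting;        /* per-processor count of critical sections entered */
static irq_state_t saved_state;    /* interrupt state captured at the outermost entry */

void critical_enter(void)
{
    irq_state_t s = save_and_disable_interrupts();
    if (nesting++ == 0)
        saved_state = s;           /* remember the prior state only at the outermost level */
}

void critical_exit(void)
{
    if (--nesting == 0) {
        run_pending_interrupts();  /* deferred interrupts run when the last section exits */
        restore_interrupts(saved_state);
    }
}
```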
Critical sections should not be used as a long-lived locking primitive. They should be short enough that they can be entered, executed, and exited without any interrupts occurring, whether from hardware or the scheduler.
Kernel-level critical sections are the basis of the software lockout issue.
See also
Notes
- ^ Where "processes" might mean operating system-level processes, threads, lightweight threads, etc.
References
- ^ a b c Raynal, Michel (2012). Concurrent Programming: Algorithms, Principles, and Foundations. Springer Science & Business Media. p. 9. ISBN 3642320279.
- ^ Jones, M. Tim (2008). GNU/Linux Application Programming (2nd ed.). Hingham, Mass.: Charles River Media. p. 264. ISBN 978-1-58450-568-6. "A critical section is a section of code that can be executed by at most one process at a time. The critical section exists to protect shared resources from multiple access."
External links
- Critical Section documentation on the MSDN Library web page
- Tutorial on Critical Sections