Semaphore (programming)

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 18.239.6.217 (talk) at 20:26, 26 May 2008 (→‎Introduction). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

For other uses, see Semaphore.

A semaphore, in computer science, is a protected variable (an entity storing a value) or abstract data type (an entity grouping several variables that may or may not be numerical) that provides the classic method for restricting access to shared resources, such as shared memory, in a multiprogramming environment (a system where several programs may be executing, or taking turns to execute, at once). Semaphores exist in many variants, though the term usually refers to a counting semaphore, since a binary semaphore is better known as a mutex. A counting semaphore is a counter for a set of available resources, rather than a locked/unlocked flag for a single resource. The semaphore was invented by Edsger Dijkstra. Semaphores are the classic solution to preventing race conditions in the dining philosophers problem, although they do not prevent resource deadlocks.

Introduction

Semaphores can be accessed only through the following operations. Those marked atomic must not be interrupted (that is, even if the system decides that the "turn is up" for the program performing them, it must not stop the program in the middle of these instructions), for the reasons explained below.

P(Semaphore s) // Acquire Resource
{
  wait until s > 0, then s := s-1;
  /* must be atomic once s > 0 is detected */
}

V(Semaphore s) // Release Resource
{
  s := s+1;   /* must be atomic */
}

Init(Semaphore s, Integer v)
{
  s := v;
}

Notice that incrementing the variable s must not be interrupted, and the P operation must not be interrupted after s is found to be greater than 0. This can be done with a special atomic instruction such as test-and-set (if the architecture's instruction set supports it) or, on uniprocessor systems, by disabling interrupts to prevent other processes from becoming active.

The value of a semaphore is the number of units of the resource which are free. (If there is only one resource, a "binary semaphore" with values 0 or 1 is used.) The P operation busy-waits (uses its turn to do nothing) or maybe sleeps (tells the system not to give it a turn) until a resource is available, whereupon it immediately claims one. V is the inverse; it simply makes a resource available again after the process has finished using it. Init is only used to initialize the semaphore before any requests are made. The P and V operations must be atomic, which means that no process may ever be preempted in the middle of one of those operations to run another operation on the same semaphore.
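The pseudocode above can be sketched in a concrete language. The following is a minimal, illustrative Python version (the article's operations are language-neutral; the class and names here are this sketch's own). A lock stands in for the atomicity requirement, making each check-and-decrement indivisible; this version busy-waits, as the basic P described above does.

```python
import threading

class SpinSemaphore:
    """Busy-waiting counting semaphore sketch; a lock supplies atomicity."""

    def __init__(self, value):
        self._value = value              # number of free resource units
        self._lock = threading.Lock()    # makes check-and-decrement atomic

    def P(self):
        # Busy-wait: keep retrying until s > 0 is observed, then decrement
        # in the same atomic step, so no other process can slip in between.
        while True:
            with self._lock:
                if self._value > 0:
                    self._value -= 1
                    return

    def V(self):
        # Release one unit; the increment is atomic under the lock.
        with self._lock:
            self._value += 1
```

A real implementation would sleep instead of spinning, as discussed below.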

The canonical names P and V come from the initials of Dutch words. V stands for verhogen, "increase". Several explanations have been offered for P (including passeer "pass", probeer "try", and pakken "grab"), but Dijkstra wrote that he intended P to stand for the made-up portmanteau prolaag,[1] short for probeer te verlagen, "try to decrease".[1][2] (A less ambiguous, and more accurate, English translation would be "try-to-decrease".) The confusion arises because in Dutch the words for increase and decrease both begin with the letter V, and spelling the operations out in full would be confusing for non-Dutch speakers.

In the programming language ALGOL 68, in the Linux kernel,[2] and in some English textbooks, the P and V operations are called, respectively, down and up. In software engineering practice, they are often called wait and signal, or acquire and release, or pend and post. Some texts call them procure and vacate to match the original Dutch initials.

To avoid busy-waiting, a semaphore may have an associated queue of processes (usually first-in, first-out). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue. When another process increments the semaphore by performing a V operation and there are processes on the queue, one of them is removed from the queue and resumes execution.
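One way to sketch this sleeping behaviour is with a condition variable, whose internal wait queue plays the role of the semaphore's process queue (this is an illustration, not a description of any particular kernel; wakeup order here is an implementation detail of the condition variable, not necessarily FIFO):

```python
import threading

class QueueSemaphore:
    """Sleeping semaphore sketch: waiters block instead of busy-waiting."""

    def __init__(self, value):
        self._value = value
        self._cond = threading.Condition()  # its wait queue holds blocked callers

    def P(self):
        with self._cond:
            while self._value == 0:
                self._cond.wait()   # sleep until some V occurs
            self._value -= 1

    def V(self):
        with self._cond:
            self._value += 1
            self._cond.notify()     # resume one waiting process, if any
```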

The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore; this is how the classical UNIX semaphore works. The modified P and V operations are as follows:

P(Semaphore s, integer howmany)
{
  wait until s >= 0;  /* wait for any earlier pending claim to be satisfied */
  s := s - howmany;   /* must be atomic; may leave s negative */
  wait until s >= 0;  /* wait until enough units have been released */
}

V(Semaphore s, integer howmany)
{
  s := s+howmany;   /* must be atomic */
}

To see why this is better than simply calling the simple version of P howmany times, consider the following problem. Suppose there is a pool of N resources, say fixed-size buffers, and a counting semaphore initialised to N is used to track the number of buffers available. When a process wants a buffer, it calls P on the semaphore and takes one. If no buffers are available, it waits until some other process releases a buffer and invokes V on the semaphore.

Now suppose two processes want to allocate K and L buffers, where K + L > N. A naive implementation would call the simple decrementing variant of P on the semaphore K or L times in a loop, but this can lead to deadlock: suppose the first process obtains some number Z < K of buffers, with Z + L > N, and the operating system then switches to the second process, which starts allocating buffers. Once the second process has obtained N - Z buffers (fewer than the L it needs), the semaphore reaches 0 and the process is suspended. The first process resumes and tries to allocate its next buffer, but since the semaphore is 0 it cannot, so it too is suspended. Neither process can obtain enough buffers to continue, so neither will ever return any buffers to the pool. They are stuck in a deadlock.

With the modified semaphore, the first process asks for K units, which it receives in one atomic operation, leaving N - K units on the semaphore. The second process then decrements the semaphore to N - K - L and, since that is negative, waits. As the first process releases buffers and increments the semaphore, the count eventually reaches 0; at that point there are L buffers available in the pool, so the second process can be woken and can receive all of its buffers at once.

Note that the semaphore count is not necessarily equal to the number of buffers available in the pool. Checking and waiting for the s >= 0 condition twice in P guarantees that as multiple processes join the semaphore's waiting list they do not disturb each other's requests: a process does not change the semaphore's count until it is next in the queue. In real implementations this is done without actually waking the waiting process for the intermediate step.
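The multi-unit P and V above can be sketched with a condition variable; as in the pseudocode, the count may go negative while a large claim is pending (the class name and structure here are this sketch's own, not the actual UNIX interface):

```python
import threading

class MultiUnitSemaphore:
    """Sketch of a semaphore whose P can claim several units at once."""

    def __init__(self, value):
        self._value = value
        self._cond = threading.Condition()

    def P(self, howmany):
        with self._cond:
            while self._value < 0:       # wait for earlier pending claims
                self._cond.wait()
            self._value -= howmany       # atomic under the condition's lock;
            while self._value < 0:       # may go negative: wait until enough
                self._cond.wait()        # units have been released

    def V(self, howmany):
        with self._cond:
            self._value += howmany
            self._cond.notify_all()      # let waiters re-check the count
```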

Semaphores as used by programmers today

Semaphores remain in common use in programming languages that do not intrinsically support other forms of synchronization. They are a primitive synchronization mechanism in many operating systems. The trend in programming language development, though, is towards more structured forms of synchronization, such as monitors and channels. In addition to their inadequacies in dealing with (multi-resource) deadlocks, semaphores do not protect the programmer from the common mistakes of acquiring a semaphore that the same process already holds, and forgetting to release a semaphore that has been acquired.

For example, Wikipedia does not appear to use semaphores to synchronize editing, which could result in race conditions when two users make changes simultaneously. Rather than preventing this (for example, by enforcing sequential, exclusive editing of a page), Wikipedia uses a version-control approach to merge the results of various contributions or to handle conflicts.

Example usage

Since semaphores have a count associated with them, they may be employed when multiple threads need to achieve an objective cooperatively. Consider this example:

A thread named A needs information from two databases before it can proceed. Access to these databases is controlled by two separate threads, B and C, each with a message-processing loop; any thread needing to use one of the databases posts a message into the corresponding thread's message queue. Thread A initializes a semaphore S with init(S, -1), then posts a data request, including a pointer to S, to both B and C, and calls P(S), which blocks. The other two threads meanwhile obtain the information; when each finishes, it calls V(S) on the passed semaphore. Only after both threads have completed will the semaphore's value be positive and A be able to continue. A semaphore used in this way is called a "counting semaphore."
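This rendezvous can be sketched in Python. Python's threading.Semaphore cannot be initialized to a negative value, so an equivalent formulation starts the count at zero and has A perform two P (acquire) operations, one per expected V; the serve function and its arguments are this sketch's stand-in for the B and C database threads:

```python
import threading

S = threading.Semaphore(0)         # equivalent of init(S, -1) under the
results = {}                       # article's convention, with two acquires

def serve(name, value):
    results[name] = value          # "obtain the information"
    S.release()                    # V(S): signal that this reply is ready

b = threading.Thread(target=serve, args=("B", 1))
c = threading.Thread(target=serve, args=("C", 2))
b.start(); c.start()

S.acquire()                        # P(S), once per expected reply...
S.acquire()                        # ...so A continues only after both signal
b.join(); c.join()
# results now holds both replies
```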

Apart from a counting semaphore, there is a "blocking semaphore". A blocking semaphore is a semaphore that is initialized to zero. This has the effect that any thread that does a P(S) will block until another thread does a V(S). This kind of construct is very useful when the order of execution among threads needs to be controlled.
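A blocking semaphore can be used to force one thread to run after another, regardless of which is scheduled first. A minimal sketch (the function and variable names are this example's own):

```python
import threading

done = threading.Semaphore(0)   # blocking semaphore: initialized to zero
order = []

def first():
    order.append("first")
    done.release()              # V: allow the second thread to proceed

def second():
    done.acquire()              # P: blocks until first() has finished
    order.append("second")

t2 = threading.Thread(target=second)
t1 = threading.Thread(target=first)
t2.start()                      # started first, but waits on the semaphore
t1.start()
t1.join(); t2.join()
# order is ["first", "second"] regardless of scheduling
```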

The simplest kind of semaphore is the "binary semaphore", used to control access to a single resource; it is essentially the same as a mutex. A binary semaphore is always initialized with the value 1. When the resource is in use, the accessing thread calls P(S) to decrease this value to 0, and restores it to 1 with the V operation when the resource is ready to be freed.
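Used as a mutex, a binary semaphore brackets a critical section so that concurrent updates to a shared resource are not lost. A sketch (the counter and thread counts are illustrative only):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore guarding one resource
counter = 0                      # the shared resource

def bump(n):
    global counter
    for _ in range(n):
        mutex.acquire()          # P: value 1 -> 0, resource now in use
        counter += 1             # critical section
        mutex.release()          # V: value 0 -> 1, resource free again

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 4000: no increments were lost to interleaving
```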

Notes

References

  • Over Seinpalen (EWD 74), in which Dijkstra introduces the concept (in Dutch).
  • The Little Book of Semaphores, by Allen B. Downey, Green Tea Press.