
Semaphore Unix

Semaphores are variables that allow processes to share resources safely by controlling access to common resources. They track the number of available resources and processes must wait if no resources are available. Counting semaphores track the exact number while binary semaphores are restricted to 0 or 1. Processes use P() to decrement the semaphore when acquiring a resource and V() to increment when releasing. Semaphores are widely used in operating systems to prevent race conditions during parallel processing.


Semaphore (programming)

In computer science, particularly in operating systems, a semaphore is a variable or abstract data type that is used for controlling access, by multiple processes, to a common resource in a parallel-programming or multi-user environment. A useful way to think of a semaphore is as a record of how many units of a particular resource are available, coupled with operations to safely (i.e., without race conditions) adjust that record as units are acquired or become free and, if necessary, to wait until a unit of the resource becomes available. Semaphores are a useful tool in the prevention of race conditions; however, their use is by no means a guarantee that a program is free from these problems.

Semaphores that allow an arbitrary resource count are called counting semaphores, while semaphores that are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores. The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1965[1] and has found widespread use in a variety of operating systems.

Library analogy
Suppose a library has 10 identical study rooms, to be used by one student at a time. To prevent disputes, students must request a room from the front desk if they wish to use a study room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free. If no rooms are free, students wait at the desk until someone relinquishes a room. The clerk at the front desk does not keep track of which room is occupied or who is using it, nor whether any given room is actually in use; the clerk knows only the number of free rooms, a count that is correct only if every student actually uses an assigned room and returns it when finished. When a student requests a room, the clerk decreases this number; when a student releases a room, the clerk increases it. Once access to a room is granted, the room can be used for as long as desired, and so it is not possible to book rooms ahead of time.

In this scenario the front desk represents a semaphore, the rooms are the resources, and the students represent processes. The value of the semaphore is initially 10. When a student requests a room, he or she is granted access and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7, and so on. If someone requests a room and the resulting value of the semaphore would be negative,[2] they are forced to wait. When multiple people are waiting, they will either wait in a queue, or use round-robin scheduling and race back to the desk when someone releases a room (depending on the nature of the semaphore).
Important observations

When used for a pool of resources, a semaphore tracks only how many resources are free; it does not keep track of which of the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource.

Processes are trusted to follow the protocol. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically, hang or crash) if even a single process acts incorrectly. This includes:

requesting a resource and forgetting to release it
releasing a resource that was never requested
holding a resource for a long time without needing it
using a resource without requesting it first (or after releasing it)

Even if all processes follow these rules, multi-resource deadlock may still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by the dining philosophers problem.
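As a concrete illustration, here is a minimal sketch of such a multi-resource deadlock, assuming POSIX semaphores and pthreads; the resource names A and B and the worker functions are hypothetical. Two threads acquire the same two semaphores in opposite orders, so each can end up holding one resource while waiting forever for the other:

    #include <semaphore.h>
    #include <pthread.h>

    sem_t A, B;   /* two resources, each guarded by its own binary semaphore */

    void *worker_1(void *arg) {
        sem_wait(&A);            /* P(A) */
        sem_wait(&B);            /* P(B): may block forever if worker_2 holds B */
        /* ... use both resources ... */
        sem_post(&B);
        sem_post(&A);
        return NULL;
    }

    void *worker_2(void *arg) {
        sem_wait(&B);            /* P(B) */
        sem_wait(&A);            /* P(A): may block forever if worker_1 holds A */
        /* ... use both resources ... */
        sem_post(&A);
        sem_post(&B);
        return NULL;
    }

    int main(void) {
        sem_init(&A, 0, 1);      /* both semaphores are binary, initialized to 1 */
        sem_init(&B, 0, 1);
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker_1, NULL);
        pthread_create(&t2, NULL, worker_2, NULL);
        pthread_join(t1, NULL);  /* may never return if the deadlock occurs */
        pthread_join(t2, NULL);
        return 0;
    }

Acquiring the two semaphores in a fixed global order in both workers would remove the possibility of this deadlock.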

Semantics and implementation


Counting semaphores are equipped with two operations, historically denoted V (also known as signal()) and P (or wait()); see below. Operation V increments the semaphore S, and operation P decrements it. The semantics of these operations are shown below. Square brackets are used to indicate atomic operations, i.e., operations which appear indivisible from the perspective of other processes. The value of the semaphore S is the number of units of the resource that are currently available. The P operation busy-waits or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. One important property of the semaphore S is that its value cannot be changed except by using the V (signal()) and P (wait()) operations. A simple way to understand the wait() and signal() operations is:
wait(): Decrements the value of the semaphore variable by 1. If the new value is negative, the process executing wait() is blocked, i.e., added to the semaphore's queue.

signal(): Increments the value of the semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are processes waiting for a resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.
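A minimal sketch of these semantics in C, built on a POSIX mutex and condition variable; the type counting_sem and its functions are illustrative, not an existing API. Unlike the description above, this version never lets the count go negative: it simply blocks while the count is zero, which is an equally common formulation.

    #include <pthread.h>

    /* Illustrative counting semaphore built on a mutex and condition variable. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
        int             value;   /* number of available units */
    } counting_sem;

    void csem_init(counting_sem *s, int initial) {
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->nonzero, NULL);
        s->value = initial;
    }

    /* P / wait(): block until a unit is available, then claim it. */
    void csem_wait(counting_sem *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value == 0)                 /* no units free: sleep on the queue */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->value--;                           /* claim one unit */
        pthread_mutex_unlock(&s->lock);
    }

    /* V / signal(): return a unit and wake one waiting process, if any. */
    void csem_signal(counting_sem *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->nonzero);     /* move one waiter to the ready queue */
        pthread_mutex_unlock(&s->lock);
    }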

Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily. The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in UNIX. The modified V and P operations are as follows:
function V(semaphore S, integer I):
    [S ← S + I]

function P(semaphore S, integer I):
    repeat:
        [if S >= I:
             S ← S - I
             break]
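On UNIX this multi-unit form corresponds to System V semaphores, where a semop() call with a negative sem_op performs the generalized P and a positive sem_op performs the generalized V. A hedged sketch follows; error handling is omitted and the wrapper names P and V are just for illustration.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    /* Claim 'units' resources from System V semaphore set 'semid', index 0.
       A negative sem_op blocks until the semaphore value is at least 'units',
       then subtracts it atomically -- the generalized P operation. */
    void P(int semid, int units) {
        struct sembuf op = { .sem_num = 0, .sem_op = (short)(-units), .sem_flg = 0 };
        semop(semid, &op, 1);
    }

    /* Return 'units' resources: the generalized V operation. */
    void V(int semid, int units) {
        struct sembuf op = { .sem_num = 0, .sem_op = (short)units, .sem_flg = 0 };
        semop(semid, &op, 1);
    }

    /* Usage sketch (on some systems SETVAL requires a union semun argument):
         int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
         semctl(semid, 0, SETVAL, 10);   initialize to 10 units
         P(semid, 3);                    claim three units at once
         V(semid, 3);                    return them                      */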

To avoid starvation, a semaphore has an associated queue of processes (usually first-in, first-out). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities the queue may be ordered by priority, so that the highest-priority process is taken from the queue first.

If the implementation does not ensure atomicity of the increment, decrement and comparison operations, there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that is able to read, modify and write the semaphore in a single operation. In the absence of such a hardware instruction, an atomic operation may be synthesized through the use of a software mutual exclusion algorithm. On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems, where two programs sharing a semaphore can run on different processors at the same time. To solve this problem in a multiprocessor system, a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock (TSL) instruction.
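A minimal sketch of that locking-variable idea in C11, where atomic_flag_test_and_set plays the role of the test-and-set-lock instruction guarding a non-atomic semaphore count. Only the busy-waiting part is shown; the queueing and blocking described above are omitted.

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_flag sem_lock = ATOMIC_FLAG_INIT;   /* the locking variable */
    static int sem_value = 10;                        /* the semaphore count  */

    /* Busy-wait P: spin on the test-and-set lock, then check and decrement. */
    bool try_P(void) {
        while (atomic_flag_test_and_set(&sem_lock))   /* TSL: returns previous state */
            ;                                         /* spin until the lock is free */
        bool acquired = false;
        if (sem_value > 0) {
            sem_value--;                              /* claim one unit */
            acquired = true;
        }
        atomic_flag_clear(&sem_lock);                 /* release the locking variable */
        return acquired;                              /* caller retries if false */
    }

    void do_V(void) {
        while (atomic_flag_test_and_set(&sem_lock))
            ;
        sem_value++;                                  /* return one unit */
        atomic_flag_clear(&sem_lock);
    }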

Example: Producer/consumer problem


In the producer-consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N and are subject to the following conditions:

The consumer must wait for the producer to produce something if the queue is empty.
The producer must wait for the consumer to consume something if the queue is full.

The semaphore solution to the producer-consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources. The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively a mutex could be used in place of the binary semaphore.

The emptyCount is initially N, fullCount is initially 0, and useQueue is initially 1. The producer does the following repeatedly:
produce:
    P(emptyCount)
    P(useQueue)
    putItemIntoQueue(item)
    V(useQueue)
    V(fullCount)

The consumer does the following repeatedly:


consume:
    P(fullCount)
    P(useQueue)
    item ← getItemFromQueue()
    V(useQueue)
    V(emptyCount)

Example

1. A single consumer enters its critical section. Since fullCount is 0, the consumer blocks.
2. Several producers enter the producer critical section. No more than N producers may enter their critical section due to emptyCount constraining their entry.
3. The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue.
4. Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section.

Note that emptyCount may be much lower than the actual number of empty places in the queue, for example in the case where many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections.
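A hedged C rendering of the pseudocode above, assuming unnamed POSIX semaphores (sem_init, sem_wait, sem_post) for emptyCount and fullCount and a pthread mutex in the role of useQueue; the ring-buffer queue, the item type, and the value of N are illustrative choices.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                        /* maximum queue size */

    static int queue[N];
    static int head = 0, tail = 0;     /* ring-buffer indices */

    static sem_t emptyCount;           /* counting semaphore, starts at N */
    static sem_t fullCount;            /* counting semaphore, starts at 0 */
    static pthread_mutex_t useQueue = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the binary semaphore */

    static void putItemIntoQueue(int item) {
        queue[tail] = item;
        tail = (tail + 1) % N;
    }

    static int getItemFromQueue(void) {
        int item = queue[head];
        head = (head + 1) % N;
        return item;
    }

    static void *producer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&emptyCount);            /* P(emptyCount): wait for an empty place */
            pthread_mutex_lock(&useQueue);    /* P(useQueue) */
            putItemIntoQueue(i);
            pthread_mutex_unlock(&useQueue);  /* V(useQueue) */
            sem_post(&fullCount);             /* V(fullCount): one more item available */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&fullCount);             /* P(fullCount): wait for an item */
            pthread_mutex_lock(&useQueue);    /* P(useQueue) */
            int item = getItemFromQueue();
            pthread_mutex_unlock(&useQueue);  /* V(useQueue) */
            sem_post(&emptyCount);            /* V(emptyCount): one more empty place */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&emptyCount, 0, N);          /* emptyCount = N */
        sem_init(&fullCount, 0, 0);           /* fullCount = 0  */

        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);

        sem_destroy(&emptyCount);
        sem_destroy(&fullCount);
        return 0;
    }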

Function name etymology


The canonical names V and P come from the initials of Dutch words. V stands for verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"),[3] passeren ("to pass"), and pakken ("to grab"). However, Dijkstra wrote that he intended P to stand for the portmanteau prolaag,[4] short for probeer te verlagen, literally "try to reduce," or, to parallel the terms used in the other case, "try to decrease."[5][6][7] This confusion stems from the fact that the Dutch words for increase and decrease both begin with the letter V, and the words spelled out in full would be impossibly confusing for those unfamiliar with the Dutch language. In ALGOL 68, the Linux kernel,[8] and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait, release and acquire (which the standard Java library[9] uses), or post and pend. Some texts call them vacate and procure to match the original Dutch initials.

Semaphores vs. mutexes


A mutex is essentially the same thing as a binary semaphore and sometimes uses the same basic implementation. The differences between them are:
1. Mutexes have a concept of an owner, which is the process that locked the mutex. Only the process that locked the mutex can unlock it. In contrast, a semaphore has no concept of an owner; any process can unlock a semaphore.
2. Unlike semaphores, mutexes provide priority inversion safety. Since the mutex knows its current owner, it is possible to promote the priority of the owner whenever a higher-priority task starts waiting on the mutex.
3. Mutexes also provide deletion safety, where the process holding the mutex cannot be accidentally deleted. Semaphores do not provide this.
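A small sketch of the ownership difference using POSIX primitives: an error-checking pthread mutex refuses an unlock from a thread that is not its owner (returning EPERM), while a semaphore can be posted by any thread. The thread layout here is illustrative.

    #include <pthread.h>
    #include <semaphore.h>
    #include <errno.h>
    #include <stdio.h>

    static pthread_mutex_t m;
    static sem_t s;

    static void *other_thread(void *arg) {
        /* Unlocking a mutex locked by another thread is an error:
           with PTHREAD_MUTEX_ERRORCHECK this reliably returns EPERM. */
        int rc = pthread_mutex_unlock(&m);
        printf("unlock from non-owner: %s\n", rc == EPERM ? "EPERM (refused)" : "allowed");

        /* A semaphore has no owner: any thread may perform V. */
        sem_post(&s);
        return NULL;
    }

    int main(void) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&m, &attr);
        sem_init(&s, 0, 0);

        pthread_mutex_lock(&m);           /* the main thread becomes the mutex owner */

        pthread_t t;
        pthread_create(&t, NULL, other_thread, NULL);
        pthread_join(t, NULL);

        sem_wait(&s);                     /* succeeds: the other thread posted it */
        pthread_mutex_unlock(&m);         /* only the owner may do this */
        return 0;
    }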
