Reviewer Plattech
THREAD FUNCTIONALITY
A thread is a fundamental unit of central processing unit (CPU) utilization.
A thread should at least contain the following attributes:
o Thread execution state
o A saved thread context when not running.
o Some per-thread static storage for local variables
o Access to the memory and resources of the process
o An execution stack.
Thread processor affinity - set of processors on which the thread can run.
Thread execution time - time a thread has executed in user mode and in kernel mode.
Alert status - A flag that indicates whether a waiting thread may execute.
Suspension count – The number of times the thread's execution has been suspended.
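As a rough illustration only, the attributes above can be pictured as fields of a per-thread record (often called a thread control block). The class below is a hypothetical sketch; the field names and types are assumptions made for illustration, not an actual OS data structure.

// Illustrative only: a hypothetical per-thread control block holding the
// attributes listed above (names and types are assumptions, not a real OS API).
import java.util.BitSet;

enum ThreadState { READY, RUNNING, BLOCKED, FINISHED }

class ThreadControlBlock {
    ThreadState state;             // thread execution state
    long[] savedRegisters;         // saved context when the thread is not running
    byte[] threadLocalStorage;     // per-thread static storage for local variables
    long stackPointer;             // top of this thread's execution stack
    BitSet processorAffinity;      // set of processors the thread may run on
    long userTimeNs, kernelTimeNs; // execution time in user and kernel mode
    boolean alerted;               // alert status: may a waiting thread execute?
    int suspensionCount;           // number of times execution has been suspended
}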
One process: One thread (1:1) – Each thread of execution is a unique process with its own
address space and resources.
Multiple processes: One thread per process (M:1) – A thread may migrate from one (1)
process environment to another.
One process: Multiple threads (1:M) – A process defines a specific address space and
dynamic resource ownership; multiple threads may be created and executed within that process.
Multiple processes: Multiple threads per process (M:M) – This has the combined
attributes of the M:1 and 1:M arrangements.
Thread State. The key states for a thread are Running, Ready, and Blocked.
The four (4) basic thread operations associated with a change in thread state are the following:
• SPAWN - typically transpires whenever a new process is created, since a thread for that
specific process is also formed.
• BLOCK - occurs when a thread needs to wait for a particular event.
• UNBLOCK - this operation moves a blocked thread into the Ready queue.
• FINISH - happens whenever a thread completes its entire execution.
Thread Synchronization. All threads of a process share the same address space and other
resources; therefore, it is necessary to synchronize the activities of the various threads so that they
do not interfere with one another or corrupt shared data structures.
Thread Library. Any application can be programmed to be multithreaded with the use of a thread
library. A thread library provides programmers with an application programming
interface (API) for creating and managing threads.
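As an illustration, the sketch below uses Java's standard thread API (java.lang.Thread) to spawn two threads and then wait for them to finish; other thread libraries (e.g., POSIX pthreads) offer analogous create and join operations. The class and thread names are made up for the example.

// Minimal use of a thread library: spawn two threads and join them.
public class ThreadLibraryDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            System.out.println(name + " is running");
        };

        Thread t1 = new Thread(task, "worker-1"); // SPAWN
        Thread t2 = new Thread(task, "worker-2"); // SPAWN
        t1.start();
        t2.start();

        t1.join(); // wait for each thread to FINISH
        t2.join();
        System.out.println("both threads finished");
    }
}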
TYPES OF THREADS
The two (2) broad categories of thread implementation are user-level threads (ULTs), which are
managed by a thread library without the kernel being aware of them, and kernel-level threads
(KLTs), which are managed directly by the operating system kernel. Some operating systems
combine the two approaches by mapping ULTs onto KLTs.
ADVANTAGES
Solaris is a good example of an operating system that implements the combined approach.
MULTITHREADING - the ability of an operating system (OS) to support multiple, concurrent paths of
execution within a single process.
THE USE OF MULTICORE SYSTEMS
Threads in different processes can exchange information through some shared memory that
has been set up between two (2) processes.
1. Multiple applications
2. Structured applications
3. Operating system structure
Critical section – It is a section of code within a process that requires access to shared resources and must not be
executed while other processes are in a corresponding section of code.
Race condition – It is a situation in which multiple threads or processes read and write a shared data item, and
the result depends on the relative timing of their execution.
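A minimal sketch of a race condition, assuming a shared integer counter: the statement counter++ is not atomic (it is a separate read, add, and write), so two threads executing this critical section without mutual exclusion will usually lose updates.

// Demonstrates a race condition: two threads increment a shared counter
// without synchronization, so the final value is usually less than 2,000,000.
public class RaceConditionDemo {
    static int counter = 0; // shared data item

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // critical section: read-modify-write, not atomic
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("expected 2000000, got " + counter);
    }
}

Protecting the increment with any of the mechanisms described below (semaphore, monitor, or spinlock) removes the race.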
Competition: Processes unaware of each other. This comprises independent processes that are not
intended to work together, which can either be batch jobs, interactive sessions, or a mixture of both.
Cooperation by sharing: Processes indirectly aware of each other. This involves processes that are not
necessarily aware of each other by their respective process identifications but share access to some objects.
Cooperation by communication: Processes directly aware of each other. This encompasses processes
that can communicate with each other by process identification and that are designed to work jointly on some
activity.
Counting Semaphore – This involves an integer value that is used for signaling processes. Only three (3)
operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment.
Binary Semaphore – This is a semaphore that only takes the values zero (0) and one (1).
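A short sketch using java.util.concurrent.Semaphore: a counting semaphore initialized to two (2) limits a resource to two concurrent holders, while a semaphore initialized to one (1) behaves as a binary semaphore. Here acquire corresponds to the atomic decrement (wait) operation and release to the increment (signal) operation; the scenario itself is made up.

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // Counting semaphore: at most two (2) threads may hold a "slot" at once.
    static final Semaphore slots = new Semaphore(2);
    // Binary semaphore (values 0 and 1): guards the shared console output.
    static final Semaphore mutex = new Semaphore(1);

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    slots.acquire();              // decrement; blocks while the count is 0
                    try {
                        mutex.acquire();          // enter critical section
                        System.out.println("thread " + id + " holds a slot");
                        mutex.release();          // leave critical section
                        Thread.sleep(100);        // simulate work while holding the slot
                    } finally {
                        slots.release();          // increment; may wake a waiting thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}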
Monitor – This is a programming construct that encapsulates variables, access procedures, and initialization
code within an abstract data type. It is easier to control than the semaphore because mutual exclusion is
enforced by the construct itself, and it has been implemented in a few programming languages such as Java
and Pascal-Plus.
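In Java, every object can serve as a monitor: synchronized methods enforce mutual exclusion on the encapsulated variables, and wait/notifyAll provide condition synchronization inside the monitor. The bounded counter below is only an illustrative sketch.

// A simple monitor in Java: the object's lock guarantees that only one
// thread at a time executes the synchronized access procedures, and
// wait()/notifyAll() let threads block on a condition inside the monitor.
public class BoundedCounter {
    private int value = 0;          // encapsulated variable
    private final int max;

    public BoundedCounter(int max) { this.max = max; }  // initialization code

    public synchronized void increment() throws InterruptedException {
        while (value == max) {
            wait();                 // block until there is room
        }
        value++;
        notifyAll();                // wake threads waiting on the condition
    }

    public synchronized int getAndReset() {
        int v = value;
        value = 0;
        notifyAll();
        return v;
    }
}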
Mailbox or Message Passing – This mechanism is a means for two (2) processes to exchange
information and may also be used for process synchronization.
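A sketch of the mailbox idea using java.util.concurrent.ArrayBlockingQueue as the mailbox, with two threads standing in for the two (2) communicating processes: put and take both pass data and synchronize, since the receiver blocks until a message arrives. The message contents are made up.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MailboxDemo {
    public static void main(String[] args) {
        // The mailbox: a bounded queue of messages.
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

        Thread sender = new Thread(() -> {
            try {
                mailbox.put("hello");      // send; blocks if the mailbox is full
                mailbox.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread receiver = new Thread(() -> {
            try {
                // receive; blocks until a message is available (synchronization)
                System.out.println(mailbox.take());
                System.out.println(mailbox.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        sender.start();
        receiver.start();
    }
}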
Spinlock – This is a mechanism in which a process executes in an infinite loop waiting for the value of a lock
variable to indicate availability.
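A minimal spinlock sketch built on java.util.concurrent.atomic.AtomicBoolean: lock() loops ("spins") until compareAndSet shows the lock variable is available, rather than blocking the thread. The class is illustrative, not a library API.

import java.util.concurrent.atomic.AtomicBoolean;

// A simple spinlock: lock() busy-waits until the lock variable becomes free.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until we atomically flip the flag from "free" to "held".
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);         // mark the lock variable as available
    }
}

A spinlock only makes sense when critical sections are very short; otherwise the busy wait wastes processor time.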
PRINCIPLES OF DEADLOCKS
Deadlocks can be defined as the permanent blocking of a set of processes that either compete for system
resources or communicate with each other.
The following are the general resource categories:
Reusable resources – These resources can be used by only one process at a time and are not depleted by usage.
Consumable resources – These are resources that can be created (produced) and destroyed (consumed).
The resource allocation graph, which was introduced by Richard Holt, is a useful tool in
characterizing the allocation of resources to processes.
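To make the graph idea concrete, the sketch below builds a small wait-for graph (the process-only view obtained by collapsing the resource nodes of a resource allocation graph) and checks it for a cycle with a depth-first search. The process names and edges are made up; with single-instance resources, a cycle means the processes on it are deadlocked.

import java.util.*;

// Detect a cycle in a wait-for graph: an edge P -> Q means process P is
// waiting for a resource currently held by process Q.
public class WaitForGraph {
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addWait(String waiter, String holder) {
        edges.computeIfAbsent(waiter, k -> new ArrayList<>()).add(holder);
    }

    public boolean hasCycle() {
        Set<String> visited = new HashSet<>(), onPath = new HashSet<>();
        for (String p : edges.keySet()) {
            if (dfs(p, visited, onPath)) return true;
        }
        return false;
    }

    private boolean dfs(String p, Set<String> visited, Set<String> onPath) {
        if (onPath.contains(p)) return true;   // back edge: cycle found
        if (!visited.add(p)) return false;     // already fully explored
        onPath.add(p);
        for (String q : edges.getOrDefault(p, List.of())) {
            if (dfs(q, visited, onPath)) return true;
        }
        onPath.remove(p);
        return false;
    }

    public static void main(String[] args) {
        WaitForGraph g = new WaitForGraph();
        g.addWait("P1", "P2");   // P1 waits for a resource held by P2
        g.addWait("P2", "P1");   // P2 waits for a resource held by P1
        System.out.println("deadlock? " + g.hasCycle());  // true
    }
}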
Deadlock Prevention, Avoidance, and Detection. The following conditions must be
present for a deadlock to occur (Silberschatz, Galvin & Gagne, 2018):
1. Mutual exclusion: At least one resource must be held in a non-sharable mode. This means that only one (1)
thread at a time can use the resource. If another thread requests that specific resource, the requesting thread
must be delayed until the resource has been released.
2. Hold and wait: A thread must be holding at least one (1) resource and waiting to acquire additional
resources that are currently being held by other threads.
3. No preemption: Resources cannot be preempted. This means that a resource can only be released
voluntarily by the thread holding it after that thread has completed its task.
4. Circular wait: A closed chain of threads exists, such that each thread holds at least one resource needed by
the next thread in the chain.
The first three (3) conditions are necessary, but not sufficient, for a deadlock to occur.
The fourth condition is required for a deadlock to take place.
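The classic way to observe all four (4) conditions is two threads that each hold one lock and then request the other in the opposite order, as in the sketch below (the lock names are made up). With the sleeps in place, each thread ends up permanently waiting for a lock held by the other.

// Illustrates the four deadlock conditions: each lock is exclusively held
// (mutual exclusion), each thread holds one lock while waiting for the other
// (hold and wait), Java intrinsic locks cannot be taken away (no preemption),
// and the two threads wait on each other (circular wait).
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100);
                synchronized (lockB) { System.out.println("thread 1 got both"); }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {           // opposite acquisition order
                sleep(100);
                synchronized (lockA) { System.out.println("thread 2 got both"); }
            }
        }).start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}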
Deadlock Prevention: Disallows one of the four conditions for deadlock occurrence – This strategy involves the
designing of a system in such a way that the possibility of deadlock is excluded.
A. Indirect method of deadlock prevention (preventing the occurrence of one of the first three conditions).
For example, the no preemption condition can be prevented as follows:
1. If a process is denied a further request, that process must release the resources that it is currently
holding;
2. If necessary, the process requests the released resources again together with the additional resources; and
3. Alternatively, a process may be required to release resources it currently holds so that another process can
proceed with its execution.
B. Direct method of deadlock prevention (preventing the occurrence of the fourth condition)
o Circular wait: This condition can be prevented by defining a linear ordering of resource types.
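A sketch of the linear-ordering rule applied to the earlier two-lock example: every thread acquires the locks in the same globally agreed order (lockA before lockB), so a circular wait can never form. The ordering itself is an arbitrary choice.

// Circular-wait prevention: all threads acquire locks in one fixed order.
public class OrderedLockingDemo {
    static final Object lockA = new Object();   // order 1
    static final Object lockB = new Object();   // order 2

    static void doWork(String name) {
        synchronized (lockA) {        // always lockA first...
            synchronized (lockB) {    // ...then lockB, so no cycle is possible
                System.out.println(name + " holds both locks");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> doWork("thread 1")).start();
        new Thread(() -> doWork("thread 2")).start();
    }
}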
Deadlock Avoidance: Do not grant a resource request if the allocation might lead to a deadlock
condition – This strategy allows the three necessary conditions but makes judicious choices to ensure that the
deadlock point is never reached.
A. Process initiation denial: Do not start a process if its demands might lead to a deadlock; and
B. Resource allocation denial: Do not grant an incremental resource request to a process if this allocation
might lead to a deadlock (a sketch of this check appears after the list below).
• The maximum resource requirement for each process must be stated in advance;
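Resource allocation denial is often described in terms of a banker's-algorithm-style safety check. The sketch below, for a single resource type with made-up numbers, grants a request only if, after the tentative allocation, the remaining processes can still all run to completion in some order.

import java.util.Arrays;

// Banker's-style safety check for one resource type: grant a request only if
// some ordering lets every process obtain its stated maximum and finish.
public class AvoidanceDemo {
    static boolean isSafe(int available, int[] allocated, int[] maximum) {
        int n = allocated.length;
        boolean[] finished = new boolean[n];
        int free = available;
        for (int done = 0; done < n; ) {
            boolean progress = false;
            for (int i = 0; i < n; i++) {
                // Process i can finish if its remaining need fits in the free units.
                if (!finished[i] && maximum[i] - allocated[i] <= free) {
                    free += allocated[i];   // it finishes and releases its resources
                    finished[i] = true;
                    done++;
                    progress = true;
                }
            }
            if (!progress) return false;    // no process can finish: unsafe state
        }
        return true;
    }

    static boolean request(int amount, int proc, int available, int[] alloc, int[] max) {
        if (amount > available) return false;
        int[] tentative = Arrays.copyOf(alloc, alloc.length);
        tentative[proc] += amount;          // tentatively grant the request
        return isSafe(available - amount, tentative, max);
    }

    public static void main(String[] args) {
        int available = 3;
        int[] allocated = {1, 4, 2};        // current allocation per process
        int[] maximum   = {4, 6, 9};        // maximum claim stated in advance
        System.out.println("grant 2 units to P0? " + request(2, 0, available, allocated, maximum)); // true
        System.out.println("grant 3 units to P2? " + request(3, 2, available, allocated, maximum)); // false
    }
}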
Deadlock Detection: Grant resource requests when possible, but periodically check for deadlock
and act to recover – This strategy does not limit resource access or restrict process execution. Requested
resources are granted to processes whenever possible.
a. Aborting all deadlocked processes is the most common solution in operating systems.
b. Back up each deadlocked process to some previously defined checkpoint, and restart all processes. This
requires that the rollback and restart mechanisms are built into the system. However, the original deadlock may
recur.
c. Successively abort deadlocked processes until deadlock no longer exists. After each abortion, the detection
algorithm must be reinvoked to see whether a deadlock still exists.
d. Successively preempt resources until deadlock no longer exists. A process that has a resource preempted
from it must be rolled back to a point prior to its acquisition of that resource.
The selection criterion for successively aborting deadlocked processes or preempting resources could be one of the
following: