
REVIEWER PLATTECH

THREAD FUNCTIONALITY
A thread is the fundamental unit of central processing unit (CPU) utilization.
A thread should at least contain the following attributes:
o Thread execution state
o A saved thread context when not running
o Some per-thread static storage for local variables
o Access to the memory and resources of the process
o An execution stack

Thread ID – A unique value that identifies a thread.

Thread context – The set of register values and other volatile data that defines the execution state of a thread.

Dynamic priority – The thread's execution priority at any given moment.

Base priority – The lower limit of the thread's dynamic priority.

Thread processor affinity – The set of processors on which the thread can run.

Thread execution time – The cumulative time a thread has executed in user mode and in kernel mode.

Alert status – A flag that indicates whether a waiting thread may execute an asynchronous procedure call.

Suspension count – The number of times the thread's execution has been suspended.

Impersonation token – A temporary access token that allows a thread to perform operations on behalf of another process.

Termination port – An inter-process communication channel to which a message is sent when the thread terminates.

Thread exit status – The reason for a thread's termination.

The following process-and-thread arrangements or relationships may occur:

One process: One thread (1:1) Each thread of execution is a unique process with its own
address space and resources.

Multiple processes: One thread per process (M:1) – A thread may migrate from one (1)
process environment to another.

One process: Multiple threads (1:M) – A process defines a specific address space and a
dynamic resource ownership.

Multiple processes: Multiple threads per process (M:M) – This has the combined
attributes of the M:1 and 1:M arrangements.

Thread State. The key states for a thread are Running, Ready, and Blocked.
The four (4) basic thread operations associated with a change in thread state are:

• SPAWN - typically transpires whenever a new process is created, since a thread for that
specific process is also formed.
• BLOCK - when a thread needs to wait for a particular event.
• UNBLOCK - moves a blocked thread into the ready queue.
• FINISH - happens whenever a thread completes its entire execution.
Thread Synchronization. All threads of a process share the same address space and
resources; it is therefore necessary to synchronize the activities of the various threads
so that they do not interfere with one another.

Thread Library. Any application can be programmed to be multithreaded with the use of thread
libraries. A thread library technically provides programmers with an application programming
interface (API) for creating and managing threads.
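
To make this concrete, below is a minimal sketch of creating and joining threads with the POSIX threads (pthreads) library, one widely used thread API; the worker function and the thread count of four are illustrative. It also shows the SPAWN and FINISH operations described above in code form.

/* A minimal sketch of thread creation with the POSIX threads (pthreads)
   API; compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function; the argument is its worker number. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;                         /* returning ends the thread (FINISH) */
}

int main(void) {
    pthread_t tids[4];
    int ids[4];

    for (int i = 0; i < 4; i++) {        /* SPAWN four threads */
        ids[i] = i;
        pthread_create(&tids[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)          /* wait for each thread to FINISH */
        pthread_join(tids[i], NULL);
    return 0;
}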

TYPES OF THREADS

User-Level Threads - all thread management is done by the application.

ADVANTAGES

• Thread switching does not require kernel-mode privileges.
• Scheduling can be application-specific.
• User-level threads can run on any operating system.
Kernel-Level Threads - all thread management is performed by the operating system's kernel.
ADVANTAGES
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread from the same
process.
• The kernel routines themselves can be multithreaded.

Combined User-Level and Kernel-Level Approach - Thread creation is performed completely
in user space, as are the scheduling and synchronization of threads within an application.
Multiple user-level threads from a single application are then mapped onto specific
kernel-level threads.

Solaris is a good example of an operating system that implements the combined approach.

MULTITHREADING - the ability of an operating system (OS) to support multiple, concurrent paths of
execution within a single process.
THE USE OF MULTICORE SYSTEMS

The general characteristics of multithreading are as follows:


• The memory overhead in multithreading is small. It only requires an extra stack, register space, and
space for thread-local data.
• The central processing unit (CPU) overhead is small since it uses application programming interface
(API) calls.
• Communication within the process and with other processes is faster.
• The crash resilience in implementing multithreading is low. Any bug can crash the entire application.
• The memory usage is monitored via a system allocator.

The following are some examples of multithreaded applications:


• An application that creates thumbnails of photos from a collection of images
• A web browser that displays images or text while retrieving data from the network
• A word processor that displays text and responds to keystrokes or mouse clicks while
performing spelling and grammar checking

The benefits of multithreaded programming can be categorized as follows:


RESPONSIVENESS - allows a program to continue running even if some part of it is blocked or
performing a lengthy operation.
RESOURCE SHARING - threads share the memory and resources of the process to which they belong.
ECONOMICAL - allocating memory and resources for process creation is costly, whereas
thread creation consumes less time and memory.
SCALABILITY - threads can run in parallel on different processing cores.

Windows is a good example of an OS that supports multithreading.

Threads in different processes can exchange information through some shared memory that
has been set up between two (2) processes.
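
As an illustration, the sketch below shows one common way to set up such shared memory on POSIX systems with shm_open and mmap; the object name "/demo_shm" and the 4096-byte size are illustrative, and error checking is omitted for brevity.

/* A sketch of setting up shared memory between two processes with the
   POSIX shared-memory API (link with -lrt on some systems). */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>

int main(void) {
    /* Create (or open) a named shared-memory object and size it. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    /* Map it into this process's address space. A second process that
       calls shm_open("/demo_shm", ...) and mmap sees the same bytes. */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    strcpy(mem, "hello from process A");

    munmap(mem, 4096);
    close(fd);
    /* shm_unlink("/demo_shm") removes the object when no longer needed. */
    return 0;
}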

Concurrency and Deadlocks


• Multiprogramming – management of multiple processes within a uniprocessor system.
• Multiprocessing – management of multiple processes within a multiprocessor.
• Distributed processing – management of multiple processes executing on multiple distributed
computer systems.
A basic requirement to support concurrent processes is the ability of a program to enforce
mutual exclusion, which must satisfy the following requirements (a code sketch follows the list):

1. Only one process at a time is allowed into its critical section.


2. A process that halts in its noncritical section must do so without interfering with other
processes.
3. No deadlock or starvation.
4. When no process is in a critical section, any process that requests entry to its critical
section must be permitted to enter without delay.
5. No assumptions must be made about the relative process speeds or number of
processors.
6. A process shall remain inside the critical section for a finite time only.
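
A minimal sketch of how requirement 1 is typically enforced in code, assuming the POSIX threads API: a mutex guards the critical section so only one thread at a time can enter, and the lock is released promptly, satisfying requirement 6. The counter variable is illustrative.

/* A sketch of enforcing mutual exclusion on a critical section with a
   pthread mutex; "counter" stands in for any shared resource. */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                 /* shared data */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* only one thread may enter */
        counter++;                       /* critical section */
        pthread_mutex_unlock(&lock);     /* leave promptly: finite time only */
    }
    return NULL;
}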
Principles of Concurrency
Concurrency is an application performance technique that encompasses the ability to load
and execute multiple runnable programs. A concurrent system supports more than one (1)
task by allowing all the tasks to make progress.
Concurrency can be viewed in the following contexts:

1. Multiple applications
2. Structured applications
3. Operating system structure

Below are some important terminologies related to concurrency:


Atomic operation – It is a function or an action implemented as a sequence of one (1) or more instructions that
appears to be indivisible, wherein no other process can see an intermediate state or interrupt the operation
(see the sketch after these definitions).

Critical section – It is a section of code within a process that requires access to shared resources and must not be
executed while other processes are in a corresponding section of code.

Race condition – It is a situation in which multiple threads or processes read and write a shared data item, and
the result depends on the relative timing of their execution.

Starvation – It is a situation in which a runnable process is overlooked indefinitely by the scheduler.
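
The sketch below illustrates the atomic operation and race condition definitions above, assuming C11 atomics: a plain increment of shared data is a race because it compiles to a read-modify-write sequence that two threads can interleave, while atomic_fetch_add is indivisible.

/* A sketch contrasting a racy increment with an atomic one (C11). */
#include <stdatomic.h>
#include <stddef.h>

static long racy = 0;                    /* unsynchronized shared data */
static atomic_long safe = 0;             /* atomic shared data */

void *work(void *arg) {
    for (int i = 0; i < 100000; i++) {
        racy++;                          /* race condition: the final value
                                            depends on thread timing */
        atomic_fetch_add(&safe, 1);      /* atomic operation: no thread can
                                            see an intermediate state */
    }
    return NULL;
}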

PROCESSES DEGREE OF AWARENESS

Competition: Processes unaware of each other. This comprises independent processes that are not
intended to work together, which can either be batch jobs, interactive sessions, or a mixture of both.

Cooperation by sharing: Processes indirectly aware of each other. This involves processes that are not
necessarily aware of each other by their respective process identifications but share access to some objects.

Cooperation by communication: Processes directly aware of each other. This encompasses processes
that can communicate with each other by process identification and that are designed to work jointly on some
activity.

COMMON CONCURRENCY MECHANISMS

Counting Semaphore – This involves an integer value that is used for signaling processes. Only three (3)
operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment.

Binary Semaphore – This is a semaphore that only takes the values zero (0) and one (1).
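
A minimal sketch of a counting semaphore, assuming the POSIX semaphore API: the count is initialized to three (3), so at most three threads may proceed at once; initializing the count to one (1) instead would yield a binary semaphore.

/* A sketch of a counting semaphore guarding three resource instances. */
#include <semaphore.h>
#include <stddef.h>

static sem_t slots;

void *use_resource(void *arg) {
    sem_wait(&slots);     /* decrement; blocks while the count is 0 */
    /* ... use one of the three resource instances ... */
    sem_post(&slots);     /* increment; wakes one waiting thread */
    return NULL;
}

int main(void) {
    sem_init(&slots, 0, 3);   /* initialize the count to 3 */
    /* ... create threads that call use_resource ... */
    sem_destroy(&slots);
    return 0;
}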

Mutual Exclusion (Mutex) Lock – This mechanism is similar to a binary semaphore; the key difference is that
the process that locks the mutex must be the one to unlock it.


Condition Variable – This is a data type that is used to block a process or a thread until a specific condition is
true.
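
A minimal sketch of a condition variable, assuming the pthreads API: the consumer blocks until the condition is true and the producer signals it; the data_ready flag and the function names are illustrative.

/* A sketch of blocking a thread until a condition is true, using a
   pthread condition variable paired with a mutex. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;

void *consumer(void *arg) {
    pthread_mutex_lock(&m);
    while (!data_ready)                  /* re-check: wakeups can be spurious */
        pthread_cond_wait(&ready, &m);   /* atomically unlocks m and blocks */
    /* ... consume the data ... */
    pthread_mutex_unlock(&m);
    return NULL;
}

void *producer(void *arg) {
    pthread_mutex_lock(&m);
    data_ready = true;                   /* make the condition true */
    pthread_cond_signal(&ready);         /* wake one waiting thread */
    pthread_mutex_unlock(&m);
    return NULL;
}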

Monitor – This is a programming construct that encapsulates variables, access procedures, and initialization
code within an abstract data type. It is easier to control and has been implemented in a few programming
languages such as Java and Pascal-Plus.

Event Flag – It is a memory word used as a synchronization mechanism.

Mailbox or Message Passing – This mechanism is a means for two (2) processes to exchange
information and may be used for process synchronization.

Spinlock – This is a mechanism in which a process executes in an infinite loop waiting for the value of a lock
variable to indicate availability.
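
A minimal sketch of a spinlock, assuming C11 atomics: the acquiring thread loops on an atomic test-and-set until the lock variable indicates availability.

/* A sketch of a spinlock built on a C11 atomic flag. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* test_and_set returns the previous value; loop while it was set */
    while (atomic_flag_test_and_set(&lock))
        ;                       /* busy-wait: burns CPU, so spinlocks suit
                                   only very short critical sections */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);   /* mark the lock available again */
}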
PRINCIPLES OF DEADLOCKS

Deadlock can be defined as the permanent blocking of a set of processes that either compete for system
resources or communicate with each other.
The following are the general resource categories:
Reusable resources – These resources can be used by only one process at a time and are not depleted by usage.

Consumable resources – These are resources that can be created (produced) and destroyed (consumed).

The resource allocation graph, which was introduced by Richard Holt, is a useful tool in
characterizing the allocation of resources to processes.
Deadlock Prevention, Avoidance, and Detection
The following conditions must be present for a deadlock to occur (Silberschatz, Galvin & Gagne, 2018):

1. Mutual exclusion: At least one resource must be held in a non-sharable mode. This means that only one (1)
thread at a time can use the resource. If another thread requests that specific resource, the requesting thread
must be delayed until the resource has been released.
2. Hold and wait: A thread must be holding at least one (1) resource and waiting to acquire additional
resources that are currently being held by other threads.

3. No preemption: Resources cannot be preempted. This means that a resource can only be released
voluntarily by the thread holding it after that thread has completed its task.

4. Circular wait: A closed chain of threads exists, such that each thread holds at least one resource needed by
the next thread in the chain.

The first three (3) conditions are necessary, but not sufficient, for a deadlock to occur.
The fourth condition is required for a deadlock to take place.
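
The sketch below shows how these conditions combine in practice, assuming the pthreads API: two threads each hold one mutex (hold and wait) and request the other in the opposite order, closing a circular wait and deadlocking both.

/* A sketch of a deadlock: mutexes are non-sharable and non-preemptible,
   and the opposite acquisition orders create a circular wait. */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    pthread_mutex_lock(&A);     /* holds A ... */
    pthread_mutex_lock(&B);     /* ... and waits for B (hold and wait) */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *thread2(void *arg) {
    pthread_mutex_lock(&B);     /* holds B ... */
    pthread_mutex_lock(&A);     /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}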

Deadlock Prevention: Disallows one of the four conditions for deadlock occurrence – This strategy involves
designing the system in such a way that the possibility of deadlock is excluded.

A. Indirect method of deadlock prevention (preventing the first three conditions)

• Mutual exclusion: In general, this condition cannot be disallowed.


• Hold and wait: This condition can be prevented by requiring a process to request all of its required
resources at once and blocking the process until all resources can be granted simultaneously.
• No preemption: This condition can be prevented through the following ways:

1. If a process is denied a further request, that process must release the resources it is currently
holding;
2. If necessary, the process must request those resources again, together with the additional resources; and
3. Alternatively, a process may release its other resources so that other processes can proceed with execution.
B. Direct method of deadlock prevention (preventing the occurrence of the fourth condition)
o Circular wait: This condition can be prevented by defining a linear ordering of resource types.
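
For example, the deadlock sketch above can be repaired by assigning every lock a fixed rank and always acquiring locks in ascending rank, so no cycle can form; the ranks below are illustrative.

/* A sketch of circular-wait prevention through a linear lock ordering:
   rank(A) = 1 < rank(B) = 2, and every thread obeys that order. */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;   /* rank 1 */
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;   /* rank 2 */

void *either_thread(void *arg) {
    pthread_mutex_lock(&A);     /* lower rank first ... */
    pthread_mutex_lock(&B);     /* ... then higher rank */
    /* ... critical section using both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}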
Deadlock Avoidance: Do not grant a resource request if the allocation might lead to a deadlock
condition – This strategy allows the three necessary conditions but makes judicious choices to assure that the
deadlock point is never reached.

Two (2) approaches of deadlock avoidance are as follows (Stallings, 2018):

A. Process initiation denial: Do not start a process if its demands might lead to a deadlock; and

B. Resource allocation denial: Do not grant an incremental resource request to a process if this allocation
might lead to a deadlock (a sketch of this check follows the restrictions below).

Below are some restrictions in implementing the deadlock avoidance strategy:

• The maximum resource requirement for each process must be stated in advance;

• The processes under consideration must be unconstrained by any synchronization requirements;

• There must be a fixed number of resources to allocate; and

• No process may exit while holding resources.
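
Resource allocation denial is commonly implemented with the banker's algorithm: before granting a request, the system checks whether the resulting state is still safe, i.e., whether some ordering lets every process run to completion with the available resources. Below is a minimal sketch of that safety check; the process count, resource count, and matrix names are illustrative.

/* A sketch of the banker's-algorithm safety check: a state is safe if
   every process can, in some order, obtain its remaining claim
   (claim - alloc) and run to completion. */
#include <stdbool.h>

#define N 4   /* number of processes (illustrative) */
#define R 3   /* number of resource types (illustrative) */

bool state_is_safe(int available[R], int claim[N][R], int alloc[N][R]) {
    int work[R];
    bool finished[N] = { false };

    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int pass = 0; pass < N; pass++) {
        for (int p = 0; p < N; p++) {
            if (finished[p]) continue;
            /* Can p's remaining need be met by what is currently free? */
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (claim[p][r] - alloc[p][r] > work[r]) can_run = false;
            if (can_run) {
                /* Assume p runs to completion and releases everything. */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
            }
        }
    }
    for (int p = 0; p < N; p++)
        if (!finished[p]) return false;  /* no safe sequence exists */
    return true;
}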

Deadlock Detection: Grant resource requests when possible, but periodically check for deadlock
and act to recover – This strategy does not limit resource access or restrict process execution. Requested
resources are granted to processes whenever possible.
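
For resources with a single instance each, detection typically reduces to finding a cycle in a wait-for graph. Below is a minimal sketch of such a check, where waits[i][j] is true when process i waits for a resource held by process j; the graph size is illustrative. If a cycle is found, one of the recovery options listed next is applied.

/* A sketch of deadlock detection on a wait-for graph: with single-instance
   resources, a deadlock exists exactly when the graph has a cycle. */
#include <stdbool.h>

#define N 4   /* number of processes (illustrative) */

static bool dfs(bool waits[N][N], int p, int state[N]) {
    state[p] = 1;                        /* 1 = on the current DFS path */
    for (int q = 0; q < N; q++) {
        if (!waits[p][q]) continue;
        if (state[q] == 1) return true;  /* back edge: cycle = deadlock */
        if (state[q] == 0 && dfs(waits, q, state)) return true;
    }
    state[p] = 2;                        /* 2 = fully explored */
    return false;
}

bool deadlock_detected(bool waits[N][N]) {
    int state[N] = { 0 };                /* 0 = unvisited */
    for (int p = 0; p < N; p++)
        if (state[p] == 0 && dfs(waits, p, state)) return true;
    return false;
}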

a. Aborting all deadlocked processes is the most common solution in operating systems.

b. Back up each deadlocked process to some previously defined checkpoint, and restart all processes. This
requires that the rollback and restart mechanisms are built into the system. However, the original deadlock may
recur.

c. Successively abort deadlocked processes until deadlock no longer exists. After each abortion, the detection
algorithm must be reinvoked to see whether a deadlock still exists.

d. Successively preempt resources until deadlock no longer exists. A process that has a resource preempted
must be rolled back to a point prior to its acquisition of that resource.

The selection criteria in successively aborting deadlocked processes or preempting resources could be one of the
following:

• Process with the least amount of processor time consumed so far

• Process with least amount of output produced so far

• Process with the most estimated remaining time

• Process with the least total of resources allocated so far

• Process with the lowest priority
