ETC: Introduction to Embedded System BETCK105EJ MODULE 5

MODULE 5

Real-time Operating System (RTOS) based Embedded System Design:

(Text 1: Chapter 10.1 to 10.5)

 Operating System basics

 Types of Operating Systems

 Tasks, Process and Threads

 Multiprocessing and Multitasking

 Task Scheduling

Chetan Ghatage, Asst, Prof., Dept. of CSE, RNSIT Page 1


ETC: Introduction to Embedded System BETCK105EJ MODULE 5

The operating system acts as a bridge between the user applications/tasks and the underlying
system resources through a set of system functionalities and services. The OS manages the
system resources and makes them available to the user applications/tasks on a need basis. A
normal computing system is a collection of different I/O subsystems, working memory and storage memory. The primary functions of an operating system are:
 Make the system convenient to use
 Organise and manage the system resources efficiently and correctly
Figure 5.1 gives an insight into the basic components of an operating system and their
interfaces with the rest of the world.

Fig. 5.1 The Operating System Architecture


The Kernel
The kernel is the core of the operating system and is responsible for managing the
system resources and the communication among the hardware and other system services.
The kernel acts as the abstraction layer between system resources and user applications, and contains a set of system libraries and services. For a general-purpose OS, the kernel contains different services for handling the following.
Process Management Process management deals with managing the processes/tasks. Process
management includes setting up the memory space for the process, loading the process’s code
into the memory space, allocating system resources, scheduling and managing the execution
of the process, setting up and managing the Process Control Block (PCB), Inter-Process Communication and synchronisation, process termination/deletion, etc.
Primary Memory Management The term primary memory refers to the volatile memory
(RAM) where processes are loaded and variables and shared data associated with each
process are stored. The Memory Management Unit (MMU) of the kernel is responsible for


 Keeping track of which part of the memory area is currently used by which process
 Allocating and De-allocating memory space on a need basis (Dynamic memory allocation).
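As an illustration of these two responsibilities, the following minimal C sketch (purely hypothetical; a real MMU service is far more involved and hardware-assisted) tracks which process owns each fixed-size page of a small RAM pool and allocates/de-allocates pages on a need basis:

#include <stdint.h>

#define NUM_PAGES 64                    /* pages in the managed RAM pool */
#define PAGE_FREE 0                     /* owner id 0 marks a free page  */

static uint8_t page_owner[NUM_PAGES];   /* which process owns each page  */

/* Allocate one page to process 'pid'; returns page index or -1 if full. */
int page_alloc(uint8_t pid)
{
    for (int i = 0; i < NUM_PAGES; i++) {
        if (page_owner[i] == PAGE_FREE) {
            page_owner[i] = pid;
            return i;
        }
    }
    return -1;                          /* no free page available */
}

/* De-allocate a page, but only if 'pid' actually owns it. */
int page_free(int page, uint8_t pid)
{
    if (page < 0 || page >= NUM_PAGES || page_owner[page] != pid)
        return -1;                      /* invalid index or not the owner */
    page_owner[page] = PAGE_FREE;
    return 0;
}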
File System Management A file is a collection of related information. A file could be a program (source code or executable), a text file, an image file, a word document, an audio/video file, etc. Each of these files differs in the kind of information it holds and the way in which the information is stored. File operations are a useful service provided by the OS. The file system management service of the kernel is responsible for
 The creation, deletion and alteration of files
 Creation, deletion and alteration of directories
 Saving of files in the secondary storage memory (e.g. Hard disk storage)
 Providing automatic allocation of file space based on the amount of free space available
 Providing a flexible naming convention for the files
The various file system management operations are OS dependent. For example, the kernel
of Microsoft® DOS OS supports a specific set of file system management operations and
they are not the same as the file system operations supported by UNIX Kernel.
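For instance, a user application never touches the disk directly; it requests these kernel file services through the standard library. A small C sketch (the file name is illustrative):

#include <stdio.h>

int main(void)
{
    /* Creation: the kernel's file system service allocates file space
       and a directory entry on our behalf. */
    FILE *fp = fopen("log.txt", "w");
    if (fp == NULL)
        return 1;                   /* creation failed */

    /* Alteration: writes are routed through the kernel to the
       secondary storage (e.g. hard disk). */
    fputs("sensor reading: 42\n", fp);

    fclose(fp);                     /* flush buffers, release the file  */
    remove("log.txt");              /* deletion via the OS file service */
    return 0;
}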
I/O System (Device) Management Kernel is responsible for routing the I/O requests
coming from different user applications to the appropriate I/O devices of the system. In a
well-structured OS, direct access to I/O devices is not allowed; access to them is provided through a set of Application Programming Interfaces (APIs) exposed by the kernel. The kernel maintains a list of all the I/O devices of the system. This list may be available in advance, at the time of building the kernel. Some kernels dynamically update the list of available devices as and when a new device is installed (e.g. the Windows NT kernel keeps the list updated when a new plug 'n' play USB device is attached to the system). The
service ‘Device Manager’ (Name may vary across different OS kernels) of the kernel is
responsible for handling all I/O device related operations. The kernel talks to the I/O devices through a set of low-level system calls, which are implemented in services called device drivers. The device drivers are specific to a device or a class of devices. The Device
Manager is responsible for
 Loading and unloading of device drivers
 Exchanging information and the system specific control signals to and from the device
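One common way to realise this is a table of function pointers that each driver fills in and registers with the Device Manager. The sketch below is a simplified, hypothetical model (all names are illustrative), loosely resembling how real kernels bind device drivers:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical driver interface: every driver provides these entry points. */
struct device_driver {
    const char *name;
    int  (*init)(void);                          /* called on driver load    */
    int  (*read)(uint8_t *buf, size_t len);      /* read requests routed here */
    int  (*write)(const uint8_t *buf, size_t len);
    void (*shutdown)(void);                      /* called on driver unload  */
};

#define MAX_DRIVERS 8
static const struct device_driver *drivers[MAX_DRIVERS];

/* 'Device Manager' role: loading (registering) a driver. */
int driver_register(const struct device_driver *drv)
{
    for (int i = 0; i < MAX_DRIVERS; i++) {
        if (drivers[i] == NULL) {
            drivers[i] = drv;
            return drv->init ? drv->init() : 0;  /* initialise the device */
        }
    }
    return -1;                                   /* driver table full */
}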
Secondary Storage Management The secondary storage management deals with
managing the secondary storage memory devices, if any, connected to the system. Secondary
memory is used as a backup medium for programs and data, since the main memory is volatile.
In most systems, the secondary storage is kept on disks (Hard Disk). The secondary storage management service of the kernel deals with


 Disk storage allocation
 Disk scheduling (Time interval at which the disk is activated to backup data)
 Free Disk space management
Protection Systems Most modern operating systems are designed to support multiple users with different levels of access permissions (e.g. Windows 10 with user
permissions like ‘Administrator’, ‘Standard’, ‘Restricted’, etc.). Protection deals with
implementing the security policies to restrict the access to both user and system resources by
different applications or processes or users. In multiuser supported operating systems, one
user may not be allowed to view or modify the whole/portions of another user’s data or profile
details. In addition, some applications may not be granted permission to make use of some of the system resources. This kind of protection is provided by the protection services
running within the kernel.
Interrupt Handler The kernel provides a handler mechanism for all external/internal interrupts generated by the system.
These are some of the important services offered by the kernel of an operating system. It does not mean that a kernel contains no more than the components/services explained above. Depending on the type of the operating system, a kernel may contain fewer or more components/services. In addition to the components/services listed above, many operating systems offer a number of add-on system components/services to the kernel. Network communication, network management, user-interface graphics, timer services (delays, timeouts, etc.), error handlers, database management, etc. are examples of such components/services. The kernel exposes the interface to the various kernel applications/services it hosts to the user applications through a set of standard Application Programming Interfaces (APIs). User applications can use these API calls to access the various kernel applications/services.
Kernel Space and User Space
The applications/services are classified into two categories, namely: user applications
and kernel applications. The program code corresponding to the kernel applications/services
are kept in a contiguous area (OS dependent) of primary (working) memory and is protected
from unauthorised access by user programs/applications. The memory space at which the kernel code is located is known as 'Kernel Space'. Similarly, all user applications are loaded into a specific area of primary memory and this memory area is referred to as 'User Space'. User space is the memory area where user applications are loaded and executed. The partitioning of
memory into kernel and user space is purely Operating System dependent. Some OSs implement this kind of partitioning and protection, whereas others do not segregate the kernel and user application code storage into two separate areas. In an operating system with virtual memory support, the user applications are loaded into their corresponding virtual memory space with a demand paging technique, meaning the entire code for a user application need not be loaded into the main (primary) memory at once; instead, the user application code is split into different pages and these pages are loaded into and out of the main memory area on a need basis. The act of loading the code into and out of the main memory is termed 'Swapping'. Swapping happens between the main (primary) memory and secondary storage memory. Each process runs in its own virtual memory space and is not allowed to access the memory space of another process, unless explicitly requested by the process. Each process will have certain privilege levels for accessing the memory of other processes and, based on the privilege settings, a process can request the kernel to map another process's memory to its own or share it through some other mechanism. Most operating systems keep the kernel application code in main memory; it is not swapped out to the secondary memory.
Monolithic Kernel and Microkernel
Monolithic Kernel In monolithic kernel architecture, all kernel services run in the kernel
space. Here all kernel modules run within the same memory space under a single kernel
thread. The tight internal integration of kernel modules in monolithic kernel architecture
allows the effective utilisation of the low-level features of the underlying system. The major
drawback of monolithic kernel is that any error or failure in any one of the kernel modules
leads to the crashing of the entire kernel application. LINUX, SOLARIS, MS-DOS kernels
are examples of monolithic kernels. The architecture representation of a monolithic kernel is given in Fig. 5.2.


Fig. 5.2 The Monolithic Kernel Model

Fig. 5.3 The Microkernel Model


Microkernel The microkernel design incorporates only the essential set of Operating
System services into the kernel. The rest of the Operating System services are implemented in
programs known as ‘Servers’, which run in user space. This provides a highly modular design and an OS-neutral abstraction to the kernel. Memory management, process management, timer systems and interrupt handlers are the essential services which form part of the microkernel. The Mach, QNX and Minix 3 kernels are examples of microkernels. The architecture representation of a microkernel is shown in Fig. 5.3.
Microkernel based design approach offers the following benefits
 Robustness: If a problem is encountered in any of the services which run as ‘Server’ applications, the same can be reconfigured and re-started without the need for re-starting the entire OS. Thus, this approach is highly useful for systems which demand high ‘availability’. Since the services which run as ‘Servers’ run in a different memory space, the chances of corruption of kernel services are ideally zero.
 Configurability: Any service which runs as a ‘Server’ application can be changed
without the need to restart the whole system. This makes the system dynamically
configurable.
TYPES OF OPERATING SYSTEMS
Depending on the type of kernel and kernel services, purpose and type of computing
systems where the OS is deployed and the responsiveness to applications, Operating Systems
are classified into different types.
1. General Purpose Operating System (GPOS)
The operating systems deployed in general computing systems are referred to as General Purpose Operating Systems (GPOS). The kernel of such an OS is more
generalised and it contains all kinds of services required for executing generic applications.
General-purpose operating systems are often quite non-deterministic in behaviour. Their
services can inject random delays into application software and may cause slow
responsiveness of an application at unexpected times. GPOS are usually deployed in
computing systems where deterministic behaviour is not an important criterion. A Personal Computer/Desktop system is a typical example of a system where a GPOS is deployed. Windows 10/8.x/XP, MS-DOS, etc. are examples of General Purpose Operating Systems.
2. Real-Time Operating System (RTOS)
There is no universal definition available for the term ‘Real-Time’ when it is used in
conjunction with operating systems. What ‘Real-Time’ means in Operating System context is
still a debatable topic and there are many definitions available. In a broad sense, ‘Real-Time’
implies deterministic timing behaviour. Deterministic timing behaviour in RTOS context
means the OS services consume only known and expected amounts of time, regardless of the number of services. A Real-Time Operating System or RTOS implements policies and rules
concerning time-critical allocation of a system’s resources. The RTOS decides which applications
should run in which order and how much time needs to be allocated for each application.
Predictable performance is the hallmark of a well-designed RTOS. This is best achieved by
the consistent application of policies and rules. Policies guide the design of an RTOS. Rules
implement those policies and resolve policy conflicts. Windows Embedded Compact, QNX, VxWorks, MicroC/OS-II, etc. are examples of Real-Time Operating Systems (RTOS).
The Real-Time Kernel
The kernel of a Real-Time Operating System is referred to as a Real-Time kernel. In contrast to the conventional OS kernel, the Real-Time kernel is highly specialised and it
contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real-Time kernel are listed below:


◆ Task/Process management
◆ Task/Process scheduling
◆ Task/Process synchronisation
◆ Error/Exception handling
◆ Memory management
◆ Interrupt handling
◆ Time management
Task/Process management Deals with setting up the memory space for the tasks, loading
the task’s code into the memory space, allocating system resources, setting up a Task
Control Block (TCB) for the task and task/process termination/deletion. A Task Control
Block (TCB) is used for holding the information corresponding to a task. TCB usually
contains the following set of information.
 Task ID: Task Identification Number
 Task State: The current state of the task (e.g. State = ‘Ready’ for a task which is ready
to execute)
 Task Type: Indicates the type of the task. The task can be a hard real-time, soft real-time or background task.
 Task Priority: Task priority (e.g. Task priority = 1 for task with priority = 1)
 Task Context Pointer: Pointer used for context saving
 Task Memory Pointers: Pointers to the code memory, data memory and stack memory for the task
 Task System Resource Pointers: Pointers to system resources (semaphores, mutexes, etc.) used by the task
 Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
 Other Parameters: Other relevant task parameters
The parameters and implementation of the TCB are kernel dependent. The TCB parameters vary across different kernels, based on the task management implementation. The task management service utilises the TCB of a task in the following ways:
✓ Creates a TCB for a task on creating the task
✓ Deletes/removes the TCB of a task when the task is terminated or deleted
✓ Reads the TCB to get the state of a task
✓ Updates the TCB with changed parameters on a need basis (e.g. on a context switch)
✓ Modifies the TCB to change the priority of the task dynamically
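The field list above maps naturally onto a C structure. The following is a minimal, hypothetical TCB sketch (field names and types are illustrative; each kernel defines its own layout):

#include <stdint.h>

typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_COMPLETED } task_state_t;
typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type_t;

struct tcb {
    uint32_t      task_id;      /* Task Identification Number             */
    task_state_t  state;        /* current state (e.g. TASK_READY)        */
    task_type_t   type;         /* hard real-time / soft real-time / bg   */
    uint8_t       priority;     /* task priority                          */
    void         *context;      /* pointer to the saved register context  */
    void         *code_mem;     /* pointer to code memory                 */
    void         *data_mem;     /* pointer to data memory                 */
    void         *stack_mem;    /* pointer to stack memory                */
    void         *resources;    /* semaphores, mutexes, etc. in use       */
    struct tcb   *prev, *next;  /* links to other TCBs (scheduling lists) */
};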


Task/Process Scheduling Deals with sharing the CPU among various tasks/processes.
A kernel application called ‘Scheduler’ handles the task scheduling. The scheduler is essentially an algorithm implementation which performs the efficient and optimal scheduling of tasks to provide deterministic behaviour. We will discuss the various types of scheduling in a later
section of this chapter.
Task/Process Synchronisation Deals with synchronising the concurrent access of a
resource, which is shared across multiple tasks and the communication between various tasks.
We will discuss the various synchronisation techniques and inter task /process communication
in a later section of this chapter.
Error/Exception Handling Deals with registering and handling the errors/exceptions that occur during the execution of tasks. Insufficient memory, timeouts, deadlocks, deadline missing, bus errors, divide by zero, unknown instruction execution, etc. are examples of errors/exceptions. Errors/exceptions can happen at the kernel-level services or at the task level. Deadlock is an example of a kernel-level exception, whereas timeout is an example of a task-level exception. The OS kernel gives the information about the error in the form of a system call (API). The GetLastError() API provided by the Windows CE/Embedded Compact RTOS is an example of such a system call. A watchdog timer is a mechanism for handling timeouts for tasks. Certain tasks may involve waiting for external events from devices. These tasks will wait indefinitely when the external device is not responding, and the task will exhibit a hang-up behaviour. In order to avoid these types of scenarios, a proper timeout mechanism should be implemented. A watchdog is normally used in such situations. The watchdog is loaded with the maximum expected wait time for the event; if the event is not triggered within this wait time, the same is informed to the task and the task is timed out. If the event happens before the timeout, the watchdog is reset.
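The software watchdog idea can be sketched in C as follows (assuming a hypothetical periodic timer interrupt that calls watchdog_tick() once per millisecond; all names are illustrative):

#include <stdint.h>
#include <stdbool.h>

static volatile uint32_t wd_count;      /* remaining wait time in ms        */
static volatile bool     wd_timed_out;  /* set when the deadline passes     */

/* Arm the watchdog with the maximum expected wait time for the event. */
void watchdog_start(uint32_t max_wait_ms)
{
    wd_timed_out = false;
    wd_count = max_wait_ms;
}

/* Event arrived in time: reset (disarm) the watchdog. */
void watchdog_reset(void)
{
    wd_count = 0;
}

/* Called from a periodic 1 ms timer interrupt (hypothetical). */
void watchdog_tick(void)
{
    if (wd_count > 0 && --wd_count == 0)
        wd_timed_out = true;            /* inform the task: it is timed out */
}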
Memory Management Compared to the General Purpose Operating Systems, the
memory management function of an RTOS kernel is slightly different. In general, the memory allocation time increases depending on the size of the block of memory to be allocated and the state of the allocated memory block (an initialised memory block consumes more allocation time than an un-initialised memory block). Since predictable timing and deterministic behaviour are the primary focus of an RTOS, the RTOS achieves this by compromising the effectiveness of memory allocation. An RTOS makes use of a ‘block’-based memory allocation technique, instead of the usual dynamic memory allocation techniques used by a GPOS. The RTOS kernel uses blocks of fixed-size dynamic memory, and a block is allocated to a task on a need basis. The blocks are stored in a ‘Free Buffer Queue’. To achieve predictable
timing and avoid the timing overheads, most of the RTOS kernels allow tasks to access any of
the memory blocks without any memory protection. RTOS kernels assume that the whole
design is proven correct and protection is unnecessary. Some commercial RTOS kernels offer memory protection as an option, and the kernel enters a fail-safe mode when an illegal memory access occurs.
A few RTOS kernels implement the Virtual Memory concept for memory allocation if the system supports secondary memory storage (like HDD and FLASH memory). In the ‘block’-based memory allocation, a block of fixed memory is always allocated for a task on a need basis and it is taken as a unit. Hence, there will not be any memory fragmentation issues. The memory allocation can be implemented as constant-time functions, and thereby it consumes a fixed amount of time for memory allocation. This leaves the deterministic behaviour of the RTOS kernel untouched. The ‘block’ memory concept avoids the garbage collection overhead also. (We will explore this technique under the MicroC/OS-II kernel in a later chapter.)
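A minimal C sketch of such a fixed-size block allocator with a free-buffer list is given below (a simplified, hypothetical version of the memory partition services offered by kernels like MicroC/OS-II):

#include <stddef.h>

#define BLOCK_WORDS 16                 /* block size: 16 pointers' worth    */
#define NUM_BLOCKS  16                 /* blocks in this memory partition   */

/* Pool declared as pointer-sized words so blocks are suitably aligned. */
static void *pool[NUM_BLOCKS][BLOCK_WORDS];
static void *free_list;                /* singly linked 'Free Buffer Queue' */

/* Chain all blocks into the free list; the first word of each free
   block is reused to store the 'next' pointer. */
void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        pool[i][0] = free_list;
        free_list = pool[i];
    }
}

/* Constant-time allocation: pop one block off the free list. */
void *block_alloc(void)
{
    void **blk = free_list;
    if (blk != NULL)
        free_list = blk[0];
    return blk;                        /* NULL when the partition is empty */
}

/* Constant-time de-allocation: push the block back onto the free list. */
void block_free(void *blk)
{
    ((void **)blk)[0] = free_list;
    free_list = blk;
}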
i. Hard Real-Time
Real-Time Operating Systems that strictly adhere to the timing constraints for a task are referred to as ‘Hard Real-Time’ systems. A Hard Real-Time system must meet the deadlines for a task without any slippage. Missing any deadline may produce catastrophic results for Hard Real-Time systems, including permanent data loss and irrecoverable damage to the system/users. Hard Real-Time systems emphasise the principle ‘A late answer is a wrong
answer’. A system can have several such tasks and the key to their correct operation lies in
scheduling them so that they meet their time constraints. Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical examples of Hard Real-Time systems. The air bag control system should come into action and deploy the air bags when the vehicle meets with a severe accident. Ideally speaking, the time for triggering the air bag deployment task, when an accident is sensed by the air bag control system, should be zero, and the air bags should be deployed exactly within the time frame predefined for the air bag deployment task. Any delay in the deployment of the air bags puts the lives of the passengers under threat.
When the air bag deployment task is triggered, the currently executing task must be pre-
empted, the air bag deployment task should be brought into execution, and the necessary I/O
systems should be made readily available for the air bag deployment task. To meet the strict
deadline, the time between the air bag deployment event triggering and start of the air bag
deployment task execution should be minimal, ideally zero. As a rule of thumb, Hard Real-Time systems do not implement the virtual memory model for handling memory. This
eliminates the delay in swapping in and out the code corresponding to the task to and from
the primary memory. In general, the presence of a Human In The Loop (HITL) for tasks introduces unexpected delays in task execution. Most Hard Real-Time systems are automatic and do not contain a ‘human in the loop’.
ii. Soft Real-Time
Real-Time Operating Systems that do not guarantee meeting deadlines, but offer a best effort to meet them, are referred to as ‘Soft Real-Time’ systems. Missing a deadline is acceptable for a Soft Real-Time system if the frequency of deadline missing is within the compliance limit of the Quality of Service (QoS). A Soft Real-Time system emphasises the principle ‘A late answer is an acceptable answer, but it could have been done a bit faster’. Soft Real-Time systems most often have a ‘human in the loop’ (HITL). An Automatic Teller Machine (ATM) is a typical example of a Soft Real-Time system. If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens. An audio-video playback system is another example of a Soft Real-Time system. No potential damage arises if a sample arrives a fraction of a second late for playback.
TASKS, PROCESS AND THREADS
The term ‘task’ refers to something that needs to be done. In our day-to-day life, we are
bound to the execution of a number of tasks. The task can be the one assigned by our
managers or the one assigned by our professors/teachers or the one related to our personal or
family needs. In addition, we will have an order of priority and schedule/timeline for
executing these tasks. In the operating system context, a task is defined as the program in
execution and the related information maintained by the operating system for the program.
Task is also known as ‘Job’ in the operating system context. A program or part of it in
execution is also called a ‘Process’. The terms ‘Task’, ‘Job’ and ‘Process’ refer to the same
entity in the operating system context and most often they are used interchangeably.
Process
A ‘Process’ is a program, or part of it, in execution. Process is also known as an
instance of a program in execution. Multiple instances of the same program can execute
simultaneously. A process requires various system resources like CPU for executing the process,
memory for storing the code corresponding to the process and associated variables, I/O devices
for information exchange, etc. A process is sequential in execution.
i. The Structure of a Process
The concept of ‘Process’ leads to concurrent execution (pseudo parallelism) of tasks and
thereby the efficient utilisation of the CPU and other system resources. Concurrent execution
is achieved through the sharing of CPU among the processes. A process mimics a processor in
properties and holds a set of registers, process status, a Program Counter (PC) to point to the
next executable instruction of the process, a stack for holding the local variables associated with
the process and the code corresponding to the process. This can be visualised as shown in Fig.
5.4.
A process which inherits all the properties of the CPU can be considered as a virtual
processor, awaiting its turn to have its properties switched into the physical processor. When
the process gets its turn, its registers and program counter become mapped to the
physical registers of the CPU. From a memory perspective, the memory occupied by the
process is segregated into three regions, namely, Stack memory, Data memory and Code
memory (Fig. 5.5).

Fig. 5.4 Structure of a Process
Fig. 5.5 Memory organisation of a Process


The ‘Stack’ memory holds all temporary data such as variables local to the process.
Data memory holds all global data for the process. The code memory contains the program
code (instructions) corresponding to the process. On loading a process into the main
memory, a specific area of memory is allocated for the process. The stack memory usually
starts (OS Kernel implementation dependent) at the highest memory address from the
memory area allocated for the process. Say, for example, the memory area allocated for the process spans addresses 2048 to 2100; then the stack memory starts at address 2100 and grows downwards to accommodate the variables local to the process.
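This downward growth can be observed directly; the C sketch below merely illustrates the common convention (the exact behaviour is compiler- and architecture-dependent):

#include <stdio.h>

static void callee(int depth)
{
    int local;                          /* lives on the stack */
    printf("depth %d: local at %p\n", depth, (void *)&local);
    if (depth < 3)
        callee(depth + 1);              /* deeper call, new stack frame */
}

int main(void)
{
    /* On most architectures each deeper frame prints a LOWER address,
       showing the stack growing downwards from the top of its region. */
    callee(0);
    return 0;
}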
Process States and State Transition The creation of a process to its termination is not a single-step operation. The process traverses through a series of states during its transition
from the newly created state to the terminated state. The cycle through which a process
changes its state from ‘newly created’ to ‘execution completed’ is known as ‘Process Life
Cycle’. The various states through which a process traverses during a Process Life Cycle indicate the current status of the process with respect to time and also provide information on what it is allowed to do next. Figure 5.6 represents the various states associated
with a process.
The state at which a process is being created is referred to as the ‘Created State’. The Operating System recognises a process in the ‘Created State’, but no resources are allocated to the process. The state where a process is incepted into the memory and awaiting the processor time for execution is known as the ‘Ready State’. At this stage, the process is placed in the ‘Ready list’ queue maintained by the OS. The state wherein the instructions corresponding to the process are being executed is called the ‘Running State’. Running state is the state at which the process execution happens. The ‘Blocked State/Wait State’ refers to a
state where a running process is temporarily suspended from execution and does not have
immediate access to resources. The blocked state might be invoked by various conditions
like: the process enters a wait state for an event to occur (e.g. Waiting for user inputs such as
keyboard input) or waiting for getting access to a shared resource (will be discussed at a later
section of this chapter). The state where the process completes its execution is known as the ‘Completed State’. The transition of a process from one state to another is known as ‘State
transition’. When a process changes its state from Ready to running or from running to
blocked or terminated or from blocked to running, the CPU allocation for the process may
also change.
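The life cycle described above can be captured in a small C sketch (a generic model of the states and legal transitions, not any particular kernel's implementation):

#include <stdio.h>

typedef enum {
    CREATED, READY, RUNNING, BLOCKED, COMPLETED
} proc_state_t;

/* Returns 1 if the transition is legal in the generic life-cycle model. */
int can_transition(proc_state_t from, proc_state_t to)
{
    switch (from) {
    case CREATED: return to == READY;                 /* admitted        */
    case READY:   return to == RUNNING;               /* scheduled       */
    case RUNNING: return to == READY ||               /* preempted       */
                         to == BLOCKED ||             /* waits for event */
                         to == COMPLETED;             /* finished        */
    case BLOCKED: return to == READY;                 /* event occurred  */
    default:      return 0;                           /* terminal state  */
    }
}

int main(void)
{
    printf("RUNNING -> BLOCKED legal? %d\n", can_transition(RUNNING, BLOCKED));
    printf("BLOCKED -> READY   legal? %d\n", can_transition(BLOCKED, READY));
    return 0;
}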
It should be noted that the state representation for a process/task mentioned here is a
generic representation. The states associated with a task may be known by different names, or there may be more or fewer states than the ones explained here under different OS kernels. For example, under the VxWorks kernel, the tasks may be in either one or a specific
combination of the states READY, PEND, DELAY and SUSPEND. The PEND state
represents a state where the task/process is blocked on waiting for I/O or system resource.
The DELAY state represents a state in which the task/process is sleeping and the SUSPEND
state represents a state where a task/process is temporarily suspended from execution and not
available for execution. Under MicroC/OS-II kernel, the tasks may be in one of the states,
DORMANT, READY, RUNNING, WAITING or INTERRUPTED. The DORMANT state
represents the ‘Created’ state and the WAITING state represents the state in which a process waits for a shared resource or I/O access.


Process management deals with the creation of a process, setting up the memory space
for the process, loading the process’s code into the memory space, allocating system
resources, setting up a Process Control Block (PCB) for the process and process
termination/deletion.

Fig. 5.6 Process states and state transition representation


Threads
A thread is the primitive that can execute code. A thread is a single sequential flow of control
within a process. ‘Thread’ is also known as lightweight process. A process can have many
threads of execution. Different threads, which are part of a process, share the same address
space; meaning they share the data memory, code memory and heap memory area. Threads
maintain their own thread status (CPU register values), Program Counter (PC) and stack. The
memory model for a process and its associated threads are given in Fig. 5.7.


Fig. 5.7 Memory organisation of a Process and its associated Threads


A process/task in an embedded application may be a complex or lengthy one, and it may contain various suboperations like getting input from I/O devices connected to the processor, performing some internal calculations/operations, updating some I/O devices, etc. If all the subfunctions of a task are executed in sequence, the CPU utilisation may not be efficient. For example, if the process is waiting for a user input, the CPU enters the wait state for the event and the process execution also enters a wait state. Instead of this single sequential execution of the whole process, if the task/process is split into different threads carrying out the different subfunctionalities of the process, the CPU can be utilised effectively; when the thread corresponding to the I/O operation enters the wait state, other threads which do not require the I/O event for their operation can be switched into execution. This leads to speedier execution of the process and efficient utilisation of the processor time and
resources. The multithreaded architecture of a process can be better visualised with the
thread-process diagram shown in Fig.5.8.
If the process is split into multiple threads, each of which executes a portion of the process, there will be a main thread, and the rest of the threads will be created within the main thread. The use of multiple threads to execute a process brings the following advantages (a short example follows the list).
 Better memory utilisation. Multiple threads of the same process share the address space
for data memory. This also reduces the complexity of inter thread communication since
variables can be shared across the threads.
 Since the process is split into different threads, when one thread enters a wait state, the CPU can be utilised by other threads of the process that do not require the event for which the other thread is waiting. This speeds up the execution of the process.
 Efficient CPU utilisation. The CPU is engaged all the time.
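The following C sketch illustrates this using POSIX threads (compile with -lpthread): a worker thread simulates a blocking I/O wait while the main thread keeps the CPU busy with computation:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int shared_result;               /* data memory shared by all threads */

/* Worker thread: simulates a sub-function that blocks on I/O. */
static void *io_worker(void *arg)
{
    (void)arg;
    sleep(1);                           /* stand-in for waiting on a device */
    shared_result = 42;                 /* publish the 'input' we received  */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, io_worker, NULL);

    /* Main thread keeps the CPU busy while the worker waits. */
    long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += i;

    pthread_join(tid, NULL);            /* wait for the I/O thread */
    printf("sum=%ld, io result=%d\n", sum, shared_result);
    return 0;
}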


Fig. 5.8 Process with multi-threads


3. Thread v/s Process
I hope by now you have got a reasonably good knowledge of processes and threads. Now let us summarise the properties of processes and threads.
Thread: A thread is a single unit of execution and is part of a process.
Process: A process is a program in execution and contains one or more threads.

Thread: A thread does not have its own data memory and heap memory. It shares the data memory and heap memory with other threads of the same process.
Process: A process has its own code memory, data memory and stack memory.

Thread: A thread cannot live independently; it lives within the process.
Process: A process contains at least one thread.

Thread: There can be multiple threads in a process. The first thread (main thread) calls the main function and occupies the start of the stack memory of the process.
Process: Threads within a process share the code, data and heap memory. Each thread holds a separate memory area for its stack (sharing the total stack memory of the process).

Thread: Threads are very inexpensive to create.
Process: Processes are very expensive to create; creation involves many OS overheads.

Thread: Context switching is inexpensive and fast.
Process: Context switching is complex, involves a lot of OS overhead and is comparatively slower.

Thread: If a thread expires, its stack is reclaimed by the process.
Process: If a process dies, the resources allocated to it are reclaimed by the OS and all the associated threads of the process also die.

MULTIPROCESSING AND MULTITASKING


The terms multiprocessing and multitasking are a little confusing and sound alike. In the
operating system context, multiprocessing describes the ability to execute multiple processes simultaneously. Systems which are capable of performing multiprocessing are known as multiprocessor systems. Multiprocessor systems possess multiple CPUs and can execute multiple processes simultaneously. The ability of the operating system to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming. In a
uniprocessor system, it is not possible to execute multiple processes simultaneously.
However, it is possible for a uniprocessor system to achieve some degree of pseudo
parallelism in the execution of multiple processes by switching the execution among different
processes. The ability of an operating system to hold multiple processes in memory and
switch the processor (CPU) from executing one process to another process is known as
multitasking. Multitasking creates the illusion of multiple tasks executing in parallel.
Multitasking involves the switching of the CPU from executing one task to another. In an earlier section, ‘The Structure of a Process’, of this chapter, we learned that a process is identical to the physical processor in the sense that it has its own register set (which mirrors the CPU registers), stack and Program Counter (PC). Hence, a ‘process’ is considered a ‘virtual processor’,
awaiting its turn to have its properties switched into the physical processor. In a multitasking
environment, when task/process switching happens, the virtual processor (task/process) gets
its properties converted into that of the physical processor. The switching of the virtual
processor to physical processor is controlled by the scheduler of the OS kernel. Whenever a
CPU switching happens, the current context of execution should be saved, so that it can be retrieved at a later point of time when the CPU resumes the execution of the process which is currently being interrupted by the execution switching. The context saving and retrieval is essential for resuming a process
exactly from the point where it was interrupted due to CPU switching. The act of switching the CPU among the processes or changing the current execution context is known as ‘Context
switching’. The act of saving the current context which contains the context details (Register
details, memory details, system resource usage details, execution details, etc.) for the currently
running process at the time of CPU switching is known as ‘Context saving’. The process of
retrieving the saved context details for a process, which is going to be executed due to CPU
switching, is known as ‘Context retrieval’. Multitasking involves ‘Context switching’ (Fig.
5.11), ‘Context saving’ and ‘Context retrieval’.

Toss Juggling: The skilful object-manipulation game is a classic real-world example of the multitasking illusion. The juggler uses a number of objects (balls, rings, etc.) and throws them up and catches them. At any point of time, he throws only one ball and catches only one per hand. However, the speed at which he switches the balls for throwing and catching creates the illusion, to the spectators, that he is throwing and catching multiple balls or using more than two hands simultaneously.

Fig. 5.11 Context switching
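The essence of context saving and retrieval can be demonstrated in portable C with setjmp/longjmp, which save and restore a minimal execution context (a real kernel saves the full register set, usually in assembly; this is only an analogy):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf saved_context;           /* holds the saved execution context */

static void interrupting_work(void)
{
    printf("...some other activity runs here...\n");
    longjmp(saved_context, 1);          /* context retrieval: resume caller */
}

int main(void)
{
    if (setjmp(saved_context) == 0) {   /* context saving: returns 0 now    */
        printf("context saved, switching away\n");
        interrupting_work();            /* control leaves this point        */
    } else {
        /* longjmp brought us back exactly to the saved point. */
        printf("context restored, execution resumed\n");
    }
    return 0;
}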


1. Types of Multitasking
As we discussed earlier, multitasking involves the switching of execution among
multiple tasks. Depending on how the switching act is implemented, multitasking can be
classified into different types. The following section describes the various types of
multitasking existing in the Operating System’s context.
i. Co-operative Multitasking
Co-operative multitasking is the most primitive form of multitasking in which a
task/process gets a chance to execute only when the currently executing task/process
voluntarily relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since this type of implementation depends on the tasks being at each other's mercy for getting CPU time for execution, it is known as co-operative multitasking. If the
currently executing task is non-cooperative, the other tasks may have to wait for a long time to
get the CPU.
ii. Preemptive Multitasking
Preemptive multitasking ensures that every task/process gets a chance to execute. When
and how much time a process gets is dependent on the implementation of the preemptive
scheduling. As the name indicates, in preemptive multitasking, the currently running
task/process is preempted to give other tasks/processes a chance to execute. The preemption of a task may be based on time slots or task/process priority.
iii. Non-preemptive Multitasking
In non-preemptive multitasking, the process/task, which is currently given the CPU
time, is allowed to execute until it terminates (enters the ‘Completed’ state) or enters the
‘Blocked/Wait’ state, waiting for an I/O or system resource. Co-operative and non-preemptive multitasking differ in their behaviour when tasks are in the ‘Blocked/Wait’ state.
In co-operative multitasking, the currently executing process/task need not relinquish the CPU
when it enters the ‘Blocked/Wait’ state, waiting for an I/O, or a shared resource access or an
event to occur whereas in non-preemptive multitasking the currently executing task
relinquishes the CPU when it waits for an I/O or system resource or an event to occur.
TASK SCHEDULING
As we already discussed, multitasking involves the execution switching among the
different tasks. There should be some mechanism in place to share the CPU among the
different tasks and to decide which process/task is to be executed at a given point of time.
Determining which task/process is to be executed at a given point of time is known as
task/process scheduling. Task scheduling forms the basis of multitasking. Scheduling policies form the guidelines for determining which task is to be executed when. The scheduling policies are implemented in an algorithm which is run by the kernel as a service. The kernel
service/application, which implements the scheduling algorithm, is known as ‘Scheduler’. The
process scheduling decision may take place when a process switches its state to
1. ‘Ready’ state from ‘Running’ state
2. ‘Blocked/Wait’ state from ‘Running’ state
3. ‘Ready’ state from ‘Blocked/Wait’ state
4. ‘Completed’ state
A process switches to ‘Ready’ state from the ‘Running’ state when it is preempted.
Hence, the type of scheduling in scenario 1 is pre-emptive. When a high priority process in
the ‘Blocked/Wait’ state completes its I/O and switches to the ‘Ready’ state, the scheduler
picks it for execution if the scheduling policy used is priority based preemptive. This is
indicated by scenario 3. In preemptive/non-preemptive multitasking, the process relinquishes
the CPU when it enters the ‘Blocked/Wait’ state or the ‘Completed’ state and switching of the
CPU happens at this stage. Scheduling under scenario 2 can be either preemptive or non-
preemptive. Scheduling under scenario 4 can be preemptive, non-preemptive or co-operative.
The selection of a scheduling criterion/algorithm should consider the following factors:
 CPU Utilisation: The scheduling algorithm should always make the CPU utilisation
high. CPU utilisation is a direct measure of how much percentage of the CPU is
being utilised.
 Throughput: This gives an indication of the number of processes executed per unit of time. The throughput for a good scheduler should always be high.


 Turnaround Time: It is the amount of time taken by a process for completing its
execution. It includes the time spent by the process for waiting for the main memory,
time spent in the ready queue, time spent on completing the I/O operations, and the
time spent in execution. The turnaround time should be minimal for a good scheduling algorithm.
 Waiting Time: It is the amount of time spent by a process in the ‘Ready’ queue
waiting to get the CPU time for execution. The waiting time should be minimal for a
good scheduling algorithm.
 Response Time: It is the time elapsed between the submission of a process and the
first response. For a good scheduling algorithm, the response time should be as low as possible.
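As a worked illustration of these metrics, the C sketch below computes the average waiting time and turnaround time for three processes served in first-come-first-served order (all arriving at time 0; FCFS and the burst times are assumptions chosen only for the example):

#include <stdio.h>

int main(void)
{
    /* Burst (execution) times of three processes, all arriving at t = 0. */
    int burst[3] = { 4, 3, 5 };
    int waiting = 0, turnaround = 0, elapsed = 0;

    for (int i = 0; i < 3; i++) {
        waiting    += elapsed;           /* time spent in the Ready queue */
        elapsed    += burst[i];          /* process runs to completion    */
        turnaround += elapsed;           /* completion time - arrival (0) */
    }

    /* Waiting times: 0, 4, 7 -> avg 3.67; turnaround: 4, 7, 12 -> avg 7.67 */
    printf("avg waiting    = %.2f\n", waiting / 3.0);
    printf("avg turnaround = %.2f\n", turnaround / 3.0);
    return 0;
}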
The Operating System maintains various queues in connection with the CPU scheduling,
and a process passes through these queues during its journey from admittance to execution completion.
The various queues maintained by OS in association with CPU scheduling are:
 Job Queue: Job queue contains all the processes in the system
 Ready Queue: Contains all the processes, which are ready for execution and waiting
for CPU to get their turn for execution. The Ready queue is empty when there is no
process ready for running.
 Device Queue: Contains the set of processes, which are waiting for an I/O device.
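A minimal C sketch of one such queue, a FIFO Ready queue holding pointers to Task/Process Control Blocks (the struct is left opaque here; a real kernel would use its own TCB/PCB type):

#include <stddef.h>

struct tcb;                             /* task control block (opaque here)  */

struct ready_queue {
    struct tcb *items[32];              /* fixed capacity, embedded-friendly */
    int head, tail, count;
};

/* Enqueue a process that has become ready to run; returns -1 when full. */
int rq_enqueue(struct ready_queue *q, struct tcb *t)
{
    if (q->count == 32) return -1;
    q->items[q->tail] = t;
    q->tail = (q->tail + 1) % 32;
    q->count++;
    return 0;
}

/* Dequeue the next process for the CPU; NULL when the Ready queue is empty. */
struct tcb *rq_dequeue(struct ready_queue *q)
{
    if (q->count == 0) return NULL;
    struct tcb *t = q->items[q->head];
    q->head = (q->head + 1) % 32;
    q->count--;
    return t;
}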
A process migrates through all these queues during its journey from ‘Admitted’ to
‘Completed’ stage. The following diagrammatic representation (Fig. 5.12) illustrates the
transition of a process through the various queues.


Fig. 5.12 Illustration of process transition through various queues
