
Embedded Systems: Programming

Presented by
Dr. B. Naresh Kumar Reddy
Department of Electronics and Communication Engineering
Outline 1

▶ Basic Features of an Operating System.


▶ Kernel Features, Real-time Kernels.
▶ Processes and Threads, Context Switching.
▶ Classification of Real-Time Scheduling Approaches: Clock-Driven Approach, Weighted Round-Robin Approach, Priority-Driven Approach.
▶ Dynamic versus Static Systems.
▶ Effective Release Times and Deadlines.
▶ Optimality of the EDF and LST algorithms.
▶ Shared Memory Communication.
▶ Message-Based Communication.
▶ Real-time Memory Management.
▶ Dynamic Allocation.

What is an Operating System? 2
▶ An operating system (OS) is software that acts as an interface between the end user and the computer hardware.
▶ Every computer must have at least one OS to run other programs.
▶ Applications such as Chrome, MS Word, and games need an environment in which to run and perform their tasks.
▶ The OS lets you communicate with the computer without knowing how to speak the computer's language.
▶ It is not possible to use a computer or mobile device without an operating system.

History Of OS 3

▶ Operating systems were first developed in the late 1950s to


manage tape storage
▶ The General Motors Research Lab implemented the first OS in
the early 1950s for their IBM 701
▶ In the mid-1960s, operating systems started to use disks
▶ In the late 1960s, the first version of the Unix OS was developed
▶ The first OS built by Microsoft was DOS. It was built in 1981 by
purchasing the 86-DOS software from a Seattle company
▶ The present-day popular OS Windows first came into existence in 1985, when a GUI was created and paired with MS-DOS.

Functions of an Operating System 5

Functions performed by operating system software 6

▶ Process management: Process management helps the OS create and delete processes. It also provides mechanisms for synchronization and communication among processes.
▶ Memory management: The memory management module performs allocation and de-allocation of memory space to programs that need these resources.
▶ File management: It manages all file-related activities such as organization, storage, retrieval, naming, sharing, and protection of files.
▶ Device Management: Device management keeps track of all devices. The module responsible for this task is known as the I/O controller. It also performs allocation and de-allocation of devices.
▶ I/O System Management: One of the main objectives of any OS is to hide the peculiarities of hardware devices from the user.

7
▶ Secondary-Storage Management: Systems have several levels of storage, including primary storage and secondary storage.
▶ Security: The security module protects the data and information of a computer system against malware threats and unauthorized access.
▶ Command interpretation: This module interprets commands given by the user and uses system resources to process them.
▶ Networking: A distributed system is a group of processors which do not share memory, hardware devices, or a clock.
▶ Job accounting: Keeping track of the time and resources used by various jobs and users.
▶ Communication management: Coordination and assignment of compilers, interpreters, and other software resources to the various users of the computer system.

Types of Operating Systems 8

▶ Batch Operating System


▶ Multitasking/Time Sharing OS
▶ Multiprocessing OS
▶ Real Time OS
▶ Distributed OS
▶ Network OS
▶ Mobile OS

Difference Between Firmware and Operating
System 9

S.No | Firmware | Operating System
1 | Low-level software that controls hardware. | High-level software that manages applications and hardware.
2 | It resides in ROM. | It resides on a disk.
3 | It is a small program. | It is a huge program.
4 | It is usually fixed. | It is often updated on a regular basis.
5 | It is a low-level operation. | It is a high-level interface.
6 | It has a single purpose. | It is a general-purpose system.
7 | Examples: keyboards, routers, video cards, webcams, motherboards, etc. | Examples: Apple macOS, Microsoft Windows, Linux Operating System, and Apple iOS.

Advantages of Operating System 10

▶ Provides an interface between users and hardware.


▶ User-friendly GUI with menus, buttons, and icons.
▶ No technical expertise required for GUI operation.
▶ Cost-effective and controls all computer functions.
▶ Supports features like "Plug and Play" for devices.
▶ Uses memory management techniques like segmentation and
paging.
▶ Manages all input and output devices efficiently.
▶ Synchronizes and schedules processes effectively.
▶ Implements various scheduling algorithms (FCFS, Round Robin,
Priority Scheduling, etc.).
▶ Reduces external fragmentation.
▶ Enables data sharing among multiple users.
▶ Supports resource sharing (printers, fax machines, etc.).
▶ Allows seamless updates and installations.
▶ Some OS provide built-in security features against threats.
▶ Open-source OS like Unix/Linux are free to use.
Disadvantages of Operating System 11

▶ Increased memory access times due to page table lookups.


▶ Requires improvements like Translation Lookaside Buffer (TLB).
▶ Needs secure page tables and additional memory.
▶ Possibility of internal fragmentation.
▶ Page Table Length Register (PTLR) must be bounded to virtual
memory size.
▶ Requires enhancements in multi-level page tables and variable
page sizes.
▶ Unauthorized users can access the system if security is
compromised.
▶ OS failures may result in data loss.
▶ Difficult to provide complete protection against viruses and
malware.

Kernel in Operating System 12

▶ The core component of an OS, managing communication


between hardware and software.
▶ Handles process scheduling, memory management, and device
control.
▶ Ensures efficient multitasking and system stability.
▶ Types of Kernels:
▶ Monolithic Kernel (e.g., Linux, Unix - Used in enterprise servers
and cloud computing)
▶ Microkernel (e.g., Minix, Mach - Used in aerospace and automotive
systems)
▶ Hybrid Kernel (e.g., Windows NT, Netware - Used in commercial
operating systems like Windows)
▶ Exo Kernel (e.g., ExOS, Nemesis - Used in experimental and
research OS projects)
▶ Nano Kernel (e.g., EROS - Used in secure computing
environments)

Functions of Kernel with Real-Time Examples 13

▶ Process Management: Manages multiple processes (Example:


Running multiple applications simultaneously on Windows or
Linux).
▶ Memory Management: Allocates and deallocates memory
efficiently (Example: Virtual memory usage in modern
smartphones and computers).
▶ Device Management: Handles input/output devices (Example:
Managing USB devices in Linux and Windows).
▶ File System Management: Organizes and manages file storage
(Example: NTFS in Windows, EXT4 in Linux).
▶ Security Access Control: Enforces authentication and
permissions (Example: User access control in macOS and
Linux).
▶ Inter-Process Communication: Facilitates communication
between processes (Example: Communication between different
web browser tabs and background processes).
What is a Shell? 14

▶ The command-line interface between the user and the OS.


▶ Interprets user commands and translates them into kernel
instructions.
▶ Provides features like command history, tab completion, and
scripting.
▶ Types of Shells:
▶ Bourne Shell (sh - Used in Unix-based scripting and automation
tasks)
▶ C Shell (csh - Used in academic and research environments)
▶ Korn Shell (ksh - Used in enterprise Unix systems for scripting)
▶ Bash Shell (bash - Default shell in Linux and macOS, widely used
for system administration)

Difference Between Shell and Kernel with Real-Life Examples 16

Shell | Kernel
Interface between user and kernel (e.g., Command Prompt in Windows, Terminal in Linux) | Core component managing OS tasks (e.g., Linux kernel in Android, Ubuntu)
Executes user commands (e.g., running a script using Bash) | Handles resource management (e.g., allocating CPU time to applications)
Provides a command-line interpreter (e.g., macOS Terminal for Unix commands) | Directly interacts with hardware (e.g., managing memory allocation in real-time applications)
Supports scripting and automation (e.g., automating tasks using shell scripts in Linux) | Manages memory and process scheduling (e.g., ensuring smooth multitasking on smartphones)
Example: Bash, Csh, Ksh | Example: Linux, Windows NT, Minix

Kernel Features 17

▶ Process Management: Efficiently schedules and manages


multiple processes.
▶ Memory Management: Allocates and deallocates memory
dynamically.
▶ Device Management: Handles input and output operations with
drivers.
▶ File System Management: Organizes and controls file access.
▶ Security Protection: Enforces access control and system
security policies.
▶ Interrupt Handling: Manages hardware and software interrupts
for efficient execution.
▶ Multitasking: Supports concurrent execution of multiple
processes.
▶ Inter-Process Communication (IPC): Enables communication
between running processes.

Real-Time Kernels 18

▶ Definition: A real-time kernel ensures deterministic response


times for critical tasks.
▶ Types of Real-Time Kernels:
▶ Hard Real-Time Kernels: Strict timing constraints (e.g., avionics,
medical devices).
▶ Soft Real-Time Kernels: Flexible timing constraints (e.g.,
multimedia, gaming).
▶ Key Characteristics:
▶ Low-latency task switching.
▶ Priority-based scheduling.
▶ Predictable interrupt handling.
▶ Minimal jitter for time-sensitive applications.
▶ Examples:
▶ VxWorks – Used in aerospace and defense applications.
▶ RTLinux – Provides real-time capabilities for Linux-based systems.
▶ FreeRTOS – Open-source RTOS used in embedded systems.

What is a Process? 19

▶ A process is a program under execution.


▶ It maintains its own Process Control Block (PCB), stack, and ad-
dress space.
▶ Processes are isolated, meaning they do not share memory with
one another.
▶ They can create child processes.
▶ Typical states include: New, Ready, Running, Waiting, Terminated,
and Suspended.
▶ Processes and threads are the basic components of an OS. A process is a program under execution, whereas a thread is a part of a process. Threads allow a program to perform multiple tasks simultaneously, such as downloading a file while you browse a website or running animations while processing user input. A process can consist of multiple threads.

What is a Thread? 20

▶ Often called a lightweight process.


▶ Always part of a specific process.
▶ Threads share the parent process’s memory and resources.
▶ Common states: Running, Ready, and Blocked.
▶ Faster creation and termination compared to processes.
▶ Enables multitasking (e.g., handling user input while loading
content).

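To make the distinction concrete, here is a minimal sketch (not from the slides) using POSIX threads in C: a single process creates two threads that share the same global variable, something separate processes could not do without explicit IPC. The unsynchronized increment also foreshadows why the synchronization mechanisms discussed later are needed. Compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>

/* Both threads see this variable because threads share the
 * parent process's address space. */
static int shared_counter = 0;

static void *worker(void *arg)
{
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++) {
        shared_counter++;                 /* shared data, no isolation */
        printf("%s: counter = %d\n", name, shared_counter);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Creating threads is cheaper than creating processes with fork(). */
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);
    return 0;
}
```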
Process vs. Thread: A Comparison 21

Process | Thread
Program in execution | Segment of a process
Own memory space | Shares memory with other threads
Heavier weight | Lighter weight
Slower creation and termination | Faster creation and termination
More overhead in context switching | Less overhead in context switching
Inter-process communication is less efficient | Intra-process communication is efficient
Isolated execution | Shared environment; changes affect all threads

Advantages and Disadvantages 22

Processes:
▶ Advantages:
▶ Memory isolation improves security.
▶ Independent resource allocation.
▶ Processes can be prioritized.
▶ Disadvantages:
▶ Slower creation and termination.
▶ Context switching is more time-consuming.
▶ High memory usage if too many processes are running.

Threads:
▶ Advantages:
▶ Faster creation and termination.
▶ Efficient handling of multiple tasks concurrently.
▶ Lower resource consumption.
▶ Disadvantages:
▶ Shared memory can lead to synchronization issues.
▶ Lack of isolation can cause interference between threads.
Real-time Example: Processes & Threads in a
Web Browser 23

Processes:
▶ Each browser tab runs as a separate process with its own
memory.
▶ Isolation ensures that a crash in one tab does not affect the
others.
Threads within a Process:
▶ UI Thread: Handles user interactions (scrolling, clicking, typing).
▶ Rendering Thread: Draws the webpage content.
▶ Network Thread: Manages data fetching (images, scripts, text).
▶ JavaScript Thread: Executes scripts for interactivity.
Benefit: Multitasking with threads within isolated processes provides
a smooth, responsive browsing experience.

Introduction to Synchronization 24

▶ Synchronization is critical for managing access to shared


resources.
▶ It prevents race conditions and ensures data consistency.
▶ Two common mechanisms are mutexes and semaphores.

What is a Mutex? 25

▶ A mutex (mutual exclusion) is a lock used to ensure that only


one thread or task accesses a shared resource at a time.
▶ Key Properties:
▶ Binary in nature (locked/unlocked).
▶ Ownership: the thread that locks it must unlock it.
▶ Usage: Protecting critical sections to avoid concurrent
modifications.

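A minimal sketch of mutex usage, assuming a POSIX environment (pthread_mutex_t): two threads increment a shared balance inside a critical section, so the final value is deterministic. Compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long balance = 0;   /* shared resource protected by the mutex */

static void *deposit(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        balance++;                     /* only one thread at a time */
        pthread_mutex_unlock(&lock);   /* the owner releases the lock */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("balance = %ld\n", balance);  /* 200000 with the mutex in place */
    return 0;
}
```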
What is a Semaphore? 26

▶ A semaphore is a synchronization tool that uses a counter to


control access to shared resources.
▶ Types:
▶ Binary Semaphore: Similar to a mutex (0 or 1), but without strict
ownership.
▶ Counting Semaphore: Allows a predefined number of concurrent
accesses.
▶ Usage: Managing a limited pool of resources or controlling
concurrency.

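A minimal sketch of a counting semaphore, assuming POSIX semaphores (sem_t): the semaphore is initialised to the size of a hypothetical resource pool, so at most two tasks hold a resource at any time. Compile with -pthread.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_TASKS 6
#define POOL_SIZE 2      /* e.g., only two identical resources available */

static sem_t pool;       /* counting semaphore initialised to POOL_SIZE */

static void *task(void *arg)
{
    long id = (long)arg;
    sem_wait(&pool);                  /* acquire one resource slot */
    printf("task %ld acquired a resource\n", id);
    sleep(1);                         /* simulate using the resource */
    printf("task %ld released a resource\n", id);
    sem_post(&pool);                  /* any task may signal the semaphore */
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_TASKS];
    sem_init(&pool, 0, POOL_SIZE);    /* 0 = shared between threads only */

    for (long i = 0; i < NUM_TASKS; i++)
        pthread_create(&t[i], NULL, task, (void *)i);
    for (int i = 0; i < NUM_TASKS; i++)
        pthread_join(t[i], NULL);

    sem_destroy(&pool);
    return 0;
}
```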
Usage in Operating Systems 27

▶ Mutexes:
▶ Protect critical sections in multi-threaded applications.
▶ Prevent race conditions in kernel data structures and process
scheduling.
▶ Semaphores:
▶ Regulate access to shared devices (e.g., printers, disk I/O).
▶ Manage pools of resources such as database connections or
thread pools.

Usage in Embedded Systems 28

▶ Mutexes:
▶ Ensure exclusive access to hardware peripherals (e.g., sensor data
buffers).
▶ Used in Real-Time Operating Systems (RTOS) to coordinate task
execution.
▶ Semaphores:
▶ Control access to shared resources like communication buses
(I2C, SPI).
▶ Manage limited hardware resources (e.g., ADC channels) in
time-critical applications.

Key Differences & Real-Time Examples 29

Differences:
▶ Mutex:
▶ Exclusive access; a single thread can hold the lock.
▶ Ownership rules prevent unintended release.
▶ Semaphore:
▶ Allows a limited number of threads to access resources
concurrently.
▶ Can be signaled by any thread.

Real-Time Examples:
▶ Operating Systems:
▶ A web server might use a counting semaphore to limit concurrent
access to a database.
▶ A mutex protects a critical section in the OS kernel to update
scheduling queues.
▶ Embedded Systems:
▶ An RTOS in a drone uses a mutex to secure sensor data
processing.
▶ A counting semaphore controls access to a limited number of
communication channels on a microcontroller.
Overview of Context Switching 30

▶ Definition: Context switching is the process of switching the


CPU from one process, task, or thread to another.
▶ Essential in multitasking OSs (e.g., Linux) to manage multiple
processes.
▶ Each CPU core (without hyperthreading) can only run one
process/thread at a time.

Mechanism of Context Switching 33

▶ Saving State: The CPU saves the state of the currently running
process.
▶ Loading State: The CPU loads the state of the new process.
▶ Resuming Execution: Execution continues with the new
process.

Note: This process is computationally intensive.

Impact on Performance 34

▶ Frequent context switches consume CPU resources.


▶ Overhead from saving and loading states can slow down system
performance.
▶ Systems with many processes/threads on few CPU cores are
particularly affected.

Real-Time Embedded Systems 35

▶ Definition: Systems that must respond to inputs or events within


a strict time constraint.
▶ Examples: Automotive systems, medical devices, industrial
automation, aerospace systems.
▶ Characteristics:
▶ Deterministic behavior
▶ High reliability
▶ Specific timing constraints

Key Concepts 36

▶ Release Time: The time at which a job becomes ready for


execution.
▶ Deadline: The latest time by which a job must complete.
▶ Completion Time: The actual time at which a job finishes
execution.

Soft vs. Hard Real-Time Systems 37

▶ Hard Real-Time Systems:


▶ Missing a deadline is a critical failure.
▶ Examples: Pacemakers, anti-lock braking systems.
▶ Soft Real-Time Systems:
▶ Missing a deadline reduces performance but is not catastrophic.
▶ Examples: Video streaming, online gaming.

Jobs, Tasks, and Processors 38

▶ Job: An individual unit of work with a specific execution time.


▶ Task: A sequence of jobs that perform a function.
▶ Processor: The hardware that executes tasks.

Clock-Driven Scheduling 39

▶ Definition: Scheduling decisions are made at specific time


instants based on a pre-computed schedule.
▶ Example: Cyclic scheduling in automotive control systems.
▶ Advantages: Simple and predictable.
▶ Disadvantages: Inflexible and not suitable for dynamic tasks.

Round Robin Scheduling 40

▶ Definition: Each task is assigned a fixed time slot (quantum) in


a cyclic order.
▶ Example: Time-sharing systems in operating systems.
▶ Advantages: Fairness in task execution.
▶ Disadvantages: Not suitable for tasks with varying execution
times.

Weighted Round Robin Scheduling 41

▶ Definition: Extends round robin by assigning different weights to


tasks, allowing longer time slices for higher priority tasks.
▶ Example: Network packet scheduling.
▶ Advantages: Provides better control over task priorities.
▶ Disadvantages: More complex to implement.

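A small, self-contained sketch (hypothetical task names, weights, and work units) of the weighted round-robin idea: each task receives a time slice proportional to its weight in every round.

```c
#include <stdio.h>

/* Hypothetical task table: names, weights, and remaining work units. */
struct task { const char *name; int weight; int remaining; };

int main(void)
{
    struct task tasks[] = {
        { "video",  3, 9 },   /* higher weight -> longer slice per round */
        { "audio",  2, 4 },
        { "logger", 1, 3 },
    };
    const int n = 3, quantum = 1;
    int left = n;

    /* Cycle through the tasks; each gets quantum * weight units per round. */
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining == 0) continue;
            int slice = quantum * tasks[i].weight;
            if (slice > tasks[i].remaining) slice = tasks[i].remaining;
            tasks[i].remaining -= slice;
            printf("run %-6s for %d unit(s), %d left\n",
                   tasks[i].name, slice, tasks[i].remaining);
            if (tasks[i].remaining == 0) left--;
        }
    }
    return 0;
}
```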
Priority-Driven Scheduling 42

▶ Definition: Tasks are scheduled based on their priority levels.


▶ Example: Real-time operating systems using Rate Monotonic
Scheduling (RMS) or Earliest Deadline First (EDF).
▶ Advantages: Suitable for dynamic environments.
▶ Disadvantages: Priority inversion issues.

Example 43

Figure 1: Example of priority-driven scheduling. (a) Preemptive. (b) Nonpreemptive.

Job Scheduling Examples 44

▶ Example 1: Clock-Driven Scheduling


▶ Jobs J1, J2, J3 scheduled at fixed intervals (e.g., every 10ms)
▶ Suitable for periodic tasks like sensor data collection
▶ Example 2: Round Robin Scheduling
▶ Jobs J1, J2, J3 get 5ms each in a cyclic order
▶ Ideal for multitasking in time-sharing systems
▶ Example 3: Priority-Driven Scheduling
▶ Job J1 (High Priority), J2 (Medium), J3 (Low)
▶ J1 preempts J2 and J3 if it arrives during their execution

Effective Release Time 45

▶ Definition: The effective release time of a job without


predecessors is equal to its given release time.
▶ For jobs with predecessors, the effective release time is the
maximum value among its given release time and the effective
release times of all of its predecessors.
▶ Computed in one pass through the precedence graph in O(n²) time, where n is the number of jobs.

Effective Deadline 46

▶ Definition: The effective deadline of a job without a successor is


equal to its given deadline.
▶ For jobs with successors, the effective deadline is the minimum
value among its given deadline and the effective deadlines of all
of its successors.
▶ Computed in O(n²) time, similar to effective release times.

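The two passes above can be written down directly. The sketch below uses a small hypothetical precedence graph (not the job set in the following example) and assumes the jobs are listed in topological order; the forward pass computes effective release times and the backward pass effective deadlines, in O(n²) overall.

```c
#include <stdio.h>

#define N 4

/* Hypothetical example: jobs listed in topological order, i.e. every
 * predecessor appears before its successors.  pred[i][j] = 1 means
 * job j is an immediate predecessor of job i. */
static const int release[N]  = { 0, 1, 2, 0 };
static const int deadline[N] = { 10, 9, 12, 8 };
static const int pred[N][N] = {
    { 0, 0, 0, 0 },   /* J0: no predecessors  */
    { 1, 0, 0, 0 },   /* J1: after J0         */
    { 1, 1, 0, 0 },   /* J2: after J0 and J1  */
    { 0, 0, 1, 0 },   /* J3: after J2         */
};

int main(void)
{
    int eff_r[N], eff_d[N];

    /* Forward pass: effective release time is the max of the job's own
     * release time and the effective release times of its predecessors. */
    for (int i = 0; i < N; i++) {
        eff_r[i] = release[i];
        for (int j = 0; j < N; j++)
            if (pred[i][j] && eff_r[j] > eff_r[i])
                eff_r[i] = eff_r[j];
    }

    /* Backward pass: effective deadline is the min of the job's own
     * deadline and the effective deadlines of its successors. */
    for (int i = N - 1; i >= 0; i--) {
        eff_d[i] = deadline[i];
        for (int j = 0; j < N; j++)
            if (pred[j][i] && eff_d[j] < eff_d[i])
                eff_d[i] = eff_d[j];
    }

    for (int i = 0; i < N; i++)
        printf("J%d: effective release = %d, effective deadline = %d\n",
               i, eff_r[i], eff_d[i]);
    return 0;
}
```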
Example: Effective Release Time and Deadline 47

▶ Set of jobs as shown in the problem description.


▶ Calculation of effective release times and deadlines step-by-step.

Effective Release Time and Deadline 48

Job Details and Effective Times 49

Job Given Release Time Given Deadline Effective Release Time Effective Deadline
J1 2 10 2 8
J2 0 7 0 7
J3 1 12 2 8
J4 4 9 4 9
J5 1 8 2 8
J6 0 20 4 20
J7 6 21 6 21

Table 1: Effective Release Times and Deadlines of Jobs

Example 50

Example 51

Example 52

Consider a system that has five periodic tasks, A, B, C, D, and E, and three processors P1, P2, P3. The periods of A, B, and C are 2 and their execution times are equal to 1. The periods of D and E are 8 and their execution times are 6. Find a feasible schedule of the five tasks on the three processors.

Earliest Deadline First (EDF) 53

▶ Dynamic priority scheduling algorithm.


▶ Task with the earliest deadline gets the highest priority.
▶ Can be preemptive or non-preemptive.
▶ Used in real-time operating systems.

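A minimal sketch of the EDF rule with hypothetical jobs: at every scheduling point, the ready job with the earliest absolute deadline is selected, which is what makes the priorities dynamic.

```c
#include <stdio.h>

/* Hypothetical ready queue: each job has a remaining execution time
 * and an absolute deadline. */
struct job { const char *name; int remaining; int deadline; };

/* EDF: among the ready jobs, pick the one with the earliest deadline. */
static int pick_edf(const struct job *q, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (q[i].remaining == 0) continue;               /* finished */
        if (best < 0 || q[i].deadline < q[best].deadline)
            best = i;
    }
    return best;
}

int main(void)
{
    struct job ready[] = {
        { "J1", 2, 10 },
        { "J2", 1,  4 },
        { "J3", 3,  7 },
    };
    int t = 0, idx;

    /* Simulate one time unit at a time; the choice is re-evaluated each
     * tick, which is what makes EDF a dynamic-priority algorithm. */
    while ((idx = pick_edf(ready, 3)) >= 0) {
        printf("t=%d: run %s (deadline %d)\n", t, ready[idx].name,
               ready[idx].deadline);
        ready[idx].remaining--;
        t++;
    }
    return 0;
}
```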
Example 54

Example 55

Slack Time Definition 56

▶ Slack time is the amount of time remaining before a job's deadline if the job were started now and run to completion.
▶ Defined mathematically as:

s = (d − t) − c′

▶ Where:
▶ d = process deadline
▶ t = current real time since the cycle start
▶ c′ = remaining computation time

LST Scheduling Algorithm 57

▶ Selects the process with the smallest slack time first.


▶ Preemptive: A process with lower slack time can preempt the
current task.
▶ No processor affinity, making it efficient for multi-core embedded
systems.
▶ Ensures real-time constraints are met.

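A minimal sketch combining the slack formula with the LST rule, using hypothetical jobs: the slack s = (d − t) − c′ is recomputed at each tick and the ready job with the smallest slack runs next.

```c
#include <stdio.h>

struct job { const char *name; int remaining; int deadline; };

/* Slack of a job at time t:  s = (d - t) - c'
 * where d is the deadline and c' the remaining computation time. */
static int slack(const struct job *j, int t)
{
    return (j->deadline - t) - j->remaining;
}

/* LST: among the ready jobs, pick the one with the smallest slack. */
static int pick_lst(const struct job *q, int n, int t)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (q[i].remaining == 0) continue;
        if (best < 0 || slack(&q[i], t) < slack(&q[best], t))
            best = i;
    }
    return best;
}

int main(void)
{
    struct job ready[] = { { "J1", 2, 10 }, { "J2", 1, 4 }, { "J3", 3, 7 } };
    int t = 0, idx;

    while ((idx = pick_lst(ready, 3, t)) >= 0) {
        printf("t=%d: run %s (slack %d)\n", t, ready[idx].name,
               slack(&ready[idx], t));
        ready[idx].remaining--;
        t++;
    }
    return 0;
}
```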
Example 58

Example 59

What is Shared Memory? 60

▶ Shared Memory is a concept in Operating Systems used for


Inter-Process Communication (IPC).
▶ Each process has its own address space and cannot directly
share data.
▶ A process can allocate a portion of its address space as shared
memory.
▶ Other processes can read or write data in this shared memory.

Working of Shared Memory 61

▶ Consider two processes: P1 and P2.
▶ P1 allocates a shared memory segment S1.
▶ If P1 grants read access:
▶ P1 writes data to S1.
▶ P2 reads data from S1.
▶ If P1 grants write access:
▶ P2 can modify the data in S1.
▶ P1 is the creator and can destroy the shared memory.

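A minimal sketch of shared-memory IPC, assuming the POSIX shm_open/mmap API (the object name /demo_shm is arbitrary; on some systems link with -lrt). Only the writer side is shown; a reader process would open and map the same name and see the same bytes.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"
#define SHM_SIZE 4096

int main(void)
{
    /* Create (or open) the shared memory object and size it. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, SHM_SIZE);

    /* Map the object into this process's address space. */
    char *p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* P1 writes; another process mapping the same name would see this. */
    strcpy(p, "sensor reading: 42");
    printf("wrote: %s\n", p);

    /* Clean-up: the creator unlinks the object when finished. */
    munmap(p, SHM_SIZE);
    close(fd);
    shm_unlink(SHM_NAME);
    return 0;
}
```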
Example 62

Example 63

Use Cases of Shared Memory 64

▶ Inter-Process Communication (IPC): Two processes share


memory for fast data exchange.
▶ Parallel Processing: Multiple processes modify shared data to
enhance computation speed.
▶ Databases: Used as cache for fast read/write operations.
▶ Graphics and Multimedia: CPU and GPU access shared
memory for video processing.
▶ Distributed Systems: Multiple machines share memory to
function as a single system.

Advantages of Shared Memory 65

▶ One of the fastest IPC mechanisms as it avoids kernel


involvement.
▶ Efficient memory usage since data is stored only once.
▶ Easy access to shared data once initialized.

Disadvantages of Shared Memory 66

▶ Requires synchronization mechanisms like semaphores or


mutexes.
▶ Potential risk of memory leaks.
▶ Can lead to deadlocks if processes wait indefinitely to access
shared memory.

What is Message-Based Communication? 67

▶ Message-based communication is an IPC (Inter-Process


Communication) mechanism where processes exchange data
via messages.
▶ A process sends a message using a message queue, and the
receiving process reads it.
▶ Used in distributed systems, ensuring synchronization and
coordination between processes.

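A minimal sketch of message-based IPC, assuming POSIX message queues (mq_open and related calls; the queue name /demo_mq is arbitrary, and in practice the sender and receiver would be separate processes opening the same queue). On some systems link with -lrt.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define QUEUE_NAME "/demo_mq"

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* Create the queue and send a message. */
    mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mq, "hello from sender", 18, 0);        /* priority 0 */

    /* The receiving side reads the message from the queue. */
    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);
    if (n >= 0)
        printf("received %zd bytes: %s\n", n, buf);

    mq_close(mq);
    mq_unlink(QUEUE_NAME);
    return 0;
}
```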
Distributed Programming Models Using
Message-Based Communication 68

▶ Message Passing: Processes exchange encoded messages,


e.g., MPI, OpenMP.
▶ Remote Procedure Call (RPC): Enables execution of remote
functions with marshaled arguments, e.g., RPC, gRPC.
▶ Distributed Objects: Object-oriented approach to remote
invocation, e.g., CORBA, Java RMI, .NET Remoting.
▶ Active Objects: Objects with control threads manage execution
asynchronously.
▶ Web Services: Uses HTTP-based RPC communication, e.g.,
SOAP, RESTful APIs.

Advantages of Message-Based Communication 69

▶ Simplicity: Easy to implement and use for IPC.


▶ Asynchronous Communication: Sender and receiver operate
independently.
▶ Reliability: Messages can be resent until received.
▶ Scalability: Suitable for large distributed systems.

Disadvantages of Message-Based Communication 70

▶ Overhead: Managing message queues consumes resources.


▶ Complexity: Requires careful queue management and message
synchronization.
▶ Latency: Messages may experience delays in queues.
▶ Limited Data Size: Messages may have constraints on the data
they can hold.

Example: Shared Memory Model Working 71

Scenario: Producer-Consumer Problem


▶ A producer process generates data and stores it in a shared
memory buffer.
▶ A consumer process retrieves the data from the shared memory
and processes it.
▶ Synchronization is required using semaphores or mutex locks to
avoid simultaneous access issues.
▶ This approach is commonly used in database systems and
parallel processing applications.

Example: Message Passing Model Working 72

Scenario: Web Server Communication


▶ A web browser (client) requests a webpage by sending an HTTP
request message to a web server.
▶ The web server processes the request, retrieves the required
data, and sends an HTTP response message containing the
webpage content.
▶ Messages ensure reliable communication between the client and
the server even when they are on different machines.
▶ This model is commonly used in web applications, cloud
services, and remote procedure calls (RPC).

Differences Between Shared Memory Model and
Message Passing Model 73

Shared Memory Model | Message Passing Model
A shared memory region is used for communication. | A message-passing facility is used for communication.
Used for communication between processes on the same machine, since they share a common address space. | Used in distributed environments where processes reside on different machines connected via a network.
The application programmer needs to write explicit code for reading/writing shared memory. | No explicit code is required, as message passing handles communication and synchronization.
Provides high-speed computation, as communication occurs via shared memory with minimal system calls. | Time-consuming due to kernel intervention (system calls) for message passing.
Requires synchronization to avoid simultaneous writes to the same memory location. | Suitable for small data sharing, as conflicts do not need to be resolved.
Faster communication strategy. | Relatively slower communication strategy.
No kernel intervention is needed after the shared memory is established. | Involves kernel intervention.
Suitable for exchanging large amounts of data. | Best suited for small amounts of data.
Example: data transfer between a client process and a server process for modification before returning the data. | Example: web browsers, web servers, and chat applications on the World Wide Web (WWW).

What is Memory Management? 74

▶ Memory management is the process of managing computer


memory resources.
▶ It ensures efficient allocation, utilization, and deallocation of
memory.
▶ The operating system manages memory dynamically to support
multiple processes.

Example 75

Why Memory Management is Required? 76

▶ Allocates and deallocates memory before and after process


execution.
▶ Tracks memory usage by different processes.
▶ Minimizes fragmentation issues.
▶ Ensures proper utilization of main memory.
▶ Maintains data integrity during execution.

Logical vs. Physical Address 77

▶ Logical Address: Address generated by the CPU, also known


as a virtual address.
▶ Physical Address: The actual location in memory seen by the
hardware.
▶ Mapping from logical to physical address is done by the Memory
Management Unit (MMU).

Memory Allocation Techniques 78

▶ Contiguous Memory Allocation: Each process is given a


continuous block of memory.
▶ Partition Allocation: Memory is divided into fixed or dynamic
partitions.
▶ Paging: Memory is divided into fixed-size blocks called pages.
▶ Segmentation: Memory is divided based on logical divisions
such as functions and arrays.

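A small illustration of how a logical address is split under paging, using hypothetical parameters (4 KiB pages and an arbitrary page-table entry): the MMU performs the same page-number/offset split and frame lookup in hardware.

```c
#include <stdio.h>

/* Hypothetical paging parameters: 4 KiB pages, so the low 12 bits of a
 * logical address are the offset and the remaining bits the page number. */
#define PAGE_SIZE 4096u

int main(void)
{
    unsigned int logical = 0x12345;              /* example logical address */
    unsigned int page    = logical / PAGE_SIZE;  /* page number */
    unsigned int offset  = logical % PAGE_SIZE;  /* offset within the page */

    /* Hypothetical page-table entry: this page is loaded in frame 7. */
    unsigned int frame    = 7;
    unsigned int physical = frame * PAGE_SIZE + offset;

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, page, offset, physical);
    return 0;
}
```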
Difference Between Paging and Segmentation 79

Feature | Paging | Segmentation
Division type | Fixed-size pages | Variable-size segments
Responsible component | Operating system | Compiler
Size determination | Hardware-defined | User-defined
Speed | Faster | Slower
Fragmentation | Internal fragmentation | External fragmentation
Logical address structure | Page number + offset | Segment number + offset
Memory table | Page table (base address of pages) | Segment table (base address and limit)
Free space management | Free frame list | List of memory holes
User visibility | Invisible to user | Visible to user
Address calculation | Uses page number and offset | Uses segment number and offset
Procedure sharing | Difficult | Facilitates sharing
Data structure handling | Inefficient | Efficient
Protection | Hard to apply | Easy to apply
Page/segment size constraint | Must be equal to frame size | No constraint
Information unit type | Physical unit | Logical unit
System efficiency | Less efficient | More efficient

Static vs. Dynamic Loading 80

▶ Static Loading: Entire program is loaded into memory before


execution.
▶ Dynamic Loading: Program loads parts into memory only when
needed.

Static vs. Dynamic Linking 81

▶ Static Linking: All necessary program modules are combined


into a single executable.
▶ Dynamic Linking: Uses a stub to load required routines at
runtime.

Swapping 82

▶ Swapping moves processes between main memory and


secondary storage.
▶ Allows multiple processes to fit in memory at different times.
▶ Used in multitasking systems to improve CPU utilization.

Memory Allocation Strategies 83

▶ Fixed Partitioning: Divides memory into fixed-size partitions.


▶ Dynamic Partitioning: Allocates partitions based on process
size.
▶ First Fit: Allocates the first available block that fits the process.
▶ Best Fit: Allocates the smallest available block that fits the
process.
▶ Worst Fit: Allocates the largest available block to minimize
fragmentation.

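A minimal sketch of the first-fit and best-fit strategies over a hypothetical list of free block sizes; worst fit would simply pick the largest block instead.

```c
#include <stdio.h>

#define NBLOCKS 5

/* Hypothetical list of free block sizes (in KB). */
static int freeblk[NBLOCKS] = { 100, 500, 200, 300, 600 };

/* First fit: return the index of the first free block large enough. */
static int first_fit(int request)
{
    for (int i = 0; i < NBLOCKS; i++)
        if (freeblk[i] >= request)
            return i;
    return -1;                      /* no block can satisfy the request */
}

/* Best fit: return the smallest free block that is still large enough. */
static int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < NBLOCKS; i++)
        if (freeblk[i] >= request &&
            (best < 0 || freeblk[i] < freeblk[best]))
            best = i;
    return best;
}

int main(void)
{
    int request = 212;
    printf("request %d KB: first fit -> block %d, best fit -> block %d\n",
           request, first_fit(request), best_fit(request));
    return 0;
}
```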
Memory allocation 84

▶ Memory allocation is crucial for embedded systems due to


limited resources.
▶ Dynamic allocation provides flexibility but introduces challenges.
▶ Common allocation techniques:
▶ Static Allocation
▶ Stack-based Allocation
▶ Heap-based Dynamic Allocation

Memory Allocation Methods 85

▶ Static Allocation: Assigned at compile-time, no runtime


overhead.
▶ Stack-Based Allocation: Function-local storage, automatically
freed.
▶ Heap-Based Allocation: Uses functions like malloc() and
free().

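A short illustration of the three allocation methods in C, with hypothetical values: a static global, a stack-allocated local, and a heap buffer obtained with malloc() and released with free().

```c
#include <stdio.h>
#include <stdlib.h>

static int device_id = 42;          /* static allocation: fixed at compile/load time */

int main(void)
{
    int reading = 7;                /* stack allocation: freed automatically on return */

    /* Heap allocation: requested at run time and must be freed explicitly. */
    int *buffer = malloc(16 * sizeof *buffer);
    if (buffer == NULL) {           /* always check for allocation failure */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    buffer[0] = device_id + reading;
    printf("buffer[0] = %d\n", buffer[0]);

    free(buffer);                   /* release heap memory to avoid a leak */
    return 0;
}
```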
Key Differences: Stack vs Heap Memory Allocation 86

Feature | Stack Memory | Heap Memory
Allocation type | Automatic, by the compiler | Manual, by the programmer or a garbage collector
Speed | Faster (due to automatic deallocation) | Slower (due to manual memory management)
Flexibility | Fixed-size, cannot be resized | Resizable, allows dynamic memory allocation
Safety | Thread-safe, accessible only to the owner thread | Not thread-safe, shared among all threads
Main issue | Risk of stack overflow (limited space) | Risk of fragmentation and memory leaks

Challenges of Dynamic Allocation 87

▶ Limited Memory – RAM constraints in embedded systems.


▶ Memory Fragmentation – Leads to inefficient memory use.
▶ Unpredictable Execution Time – Not ideal for real-time
applications.
▶ Memory Leaks – Improper deallocation causes memory
exhaustion.

Alternatives to Dynamic Allocation 88

▶ Memory Pools – Pre-allocated fixed-size memory blocks.


▶ Stack Allocation – Avoids heap fragmentation.
▶ Region-Based Allocators – Segregated memory areas.
▶ Custom Allocators – Optimized allocation for specific
applications.

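A minimal sketch of a fixed-size memory pool, a common alternative to malloc() in embedded code: all blocks have the same size, so allocation and release are O(1) and fragmentation cannot occur. The block size and count here are arbitrary.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  32
#define NUM_BLOCKS  8

/* Pre-allocated pool and a per-block "in use" flag. */
static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static unsigned char in_use[NUM_BLOCKS];

static void *pool_alloc(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;                    /* pool exhausted */
}

static void pool_free(void *p)
{
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (p == pool[i]) {
            in_use[i] = 0;
            return;
        }
    }
}

int main(void)
{
    char *msg = pool_alloc();       /* take one block from the pool */
    if (msg != NULL) {
        strcpy(msg, "sensor frame");
        printf("%s\n", msg);
        pool_free(msg);             /* return the block; no heap involved */
    }
    return 0;
}
```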
Best Practices for Dynamic Memory 89

▶ Minimize heap usage and prefer static or stack allocation.


▶ Use memory pools to manage fragmentation.
▶ Always check memory allocation failures.
▶ Free unused memory to prevent leaks.
▶ Utilize RTOS-specific memory management features.

RTOS and Dynamic Memory 90

▶ RTOSs provide built-in memory management:


▶ FreeRTOS: pvPortMalloc() instead of malloc().
▶ Zephyr RTOS: Uses kernel-managed memory pools.
▶ RTEMS: Provides fixed-size block allocation.

Real-Time Example: IoT-Based Smart Agriculture System 91

Scenario: An IoT-based smart agriculture system uses multiple


sensors to monitor soil moisture, temperature, and humidity. The data
is transmitted to a cloud server for analysis.
Memory Usage in the System:
▶ Static Allocation: System parameters such as device IDs and
sensor thresholds.
▶ Stack Allocation: Temporary storage for real-time sensor
readings before processing.
▶ Dynamic Allocation:
▶ Sensor data stored in a dynamically allocated buffer before
transmission.
▶ Adaptive memory allocation based on varying sensor data volume.
▶ Efficient handling of memory to avoid wastage in low data periods.
Why Dynamic Allocation?
▶ Sensors generate data at unpredictable rates.
▶ Memory pools are used to reduce fragmentation and improve
efficiency.
▶ Temporary buffer storage helps handle network transmission
delays.

Thank you

