Lecture 1.3.5 Threads in OS
COMPUTING
Bachelor of Computer Applications
Operating System (23CAT-153/22SCT-252)
UNIT-1
CO2: Analyze distributed and network operating systems and tabulate a summary.
CO3: Outline the concept of a process and the scheduling algorithms used by the operating system.
CO4: Explain how an operating system virtualises CPU and memory.
• CPU Scheduling
• Definition, Scheduling objectives, Types of Schedulers, Scheduling criteria, CPU utilization,
Throughput, Turnaround Time, Waiting Time, Response Time, Preemptive and Non-Preemptive,
FCFS, SJF, RR, Multiprocessor scheduling, Types, Performance evaluation of scheduling.
• Shared Memory System
• Definition, Shared Memory System, Message passing, Critical section problem, Mutual Exclusion,
Semaphores.
• Deadlock
• Conditions, modeling, deadlock prevention, deadlock avoidance, detection and recovery, Banker’s
algorithms
SYLLABUS (UNIT-III)
• Multiprogramming
• Multiprogramming with fixed partition, variable partitions, virtual memory, paging, demand
paging, design and implementation issues in paging such as page tables, inverted page tables, page
replacement algorithms, page fault handling, working set model, local vs. global allocation, page
size, segmentation with paging.
• File system Structure
• Concept, Access Methods, File system Structure, Directory Structure, Allocation Methods, Free
Space Management, File Sharing, Protection and Recovery.
Topics to be Covered
1. Introduction to Threads in OS
2. Types of Threads
3. Components of Threads
4. Benefits of Threads
Introduction to Threads
Need of Threads
Why Multithreading?
Types of Threads
Benefits of Threads
• Enhanced throughput of the system: when a process is split into many threads and each
thread is treated as a unit of work, the number of jobs completed per unit of time increases,
so the throughput of the system also increases.
• Effective utilization of multiprocessor systems: when a process has more than one thread,
those threads can be scheduled on more than one processor at the same time.
• Faster context switching: switching between threads of the same process takes less time than
switching between processes, because a process context switch carries more CPU overhead.
• Responsiveness: when a process is split into several threads, the process can respond to the
user as soon as one thread completes its part of the work, even while other threads are still
running.
• Simpler communication: communication between threads is simple because they share the
same address space, whereas processes must use explicit inter-process communication
mechanisms to exchange data.
• Resource sharing: all threads within a process share its resources, such as code,
data, and open files.
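The sharing and communication benefits above can be sketched in a small example (Python's standard `threading` module; the `worker` function and `shared_results` list are illustrative names, not part of any lecture material). Because all threads live in one process, they can write results into the same list directly, with a lock guarding the shared data, and no inter-process communication mechanism is needed.

```python
import threading

shared_results = []      # shared data: visible to every thread in the process
lock = threading.Lock()  # protects shared_results from concurrent appends

def worker(n):
    """Square n and store the result in the shared list."""
    with lock:
        shared_results.append(n * n)

# Create and start four threads within the same process
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()             # wait for every thread to finish

print(sorted(shared_results))  # → [0, 1, 4, 9]
```

Note that the lock illustrates the mutual-exclusion idea listed in the syllabus: without it, concurrent updates to shared data could interleave unsafely.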