
MODULE_2

OS Principles
System calls, System/Application Call Interface – Protection: User/Kernel modes – Interrupts – Processes – Structures (Process Control Block, Ready List, etc.), Process creation and management in Unix – Threads: User-level and kernel-level threads, and thread models.
Operating-System Structures
• Operating System Services
• User Operating System Interface
• System Calls
• Types of System Calls
• System Programs
• Operating System Design and Implementation
• Operating System Structure
• Virtual Machines
• Operating System Debugging
• Operating System Generation
• System Boot
Objectives
• To describe the services an operating system provides to users, processes, and other systems
• To discuss the various ways of structuring an operating system
• To explain how operating systems are installed and customized and how they boot
Operating System Services
• Operating systems provide an environment for execution of programs
and services to programs and users
• One set of operating-system services provides functions that are helpful
to the user:
• User interface - Almost all operating systems have a user interface
(UI).
• Varies between Command-Line (CLI), Graphics User Interface
(GUI), Batch
• Program execution - The system must be able to load a program
into memory and to run that program, end execution, either
normally or abnormally (indicating error)
• I/O operations - A running program may require I/O, which may
involve a file or an I/O device
• File-system manipulation - The file system is of particular interest.
Programs need to read and write files and directories, create and
delete them, search them, list file Information, permission
management.
Operating System Services (Cont.)

• Communications – Processes may exchange information, on the same computer or between computers over a network
• Communications may be via shared memory or through message passing (packets moved by the OS)
• Error detection – OS needs to be constantly aware of
possible errors
• May occur in the CPU and memory hardware, in I/O devices, in user
program
• For each type of error, OS should take the appropriate action to
ensure correct and consistent computing
• Debugging facilities can greatly enhance the user’s and programmer’s
abilities to efficiently use the system
Operating System Services (Cont.)

• Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing
• Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them
• Many types of resources - Some (such as CPU cycles, main memory, and file storage) may have special allocation code; others (such as I/O devices) may have general request and release code
• Accounting - To keep track of which users use how much and what kinds of computer resources
Operating System
Services (Cont.)
• Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other
• Protection involves ensuring that all access to system resources is controlled
• Security of the system from outsiders requires user authentication, extends to defending external
I/O devices from invalid access attempts
• If a system is to be protected and secure, precautions must be instituted throughout it. A chain is
only as strong as its weakest link.
A View of Operating System Services
User Operating System Interface - CLI

• Command Line Interface (CLI) or command interpreter allows direct command entry
• Sometimes implemented in kernel, sometimes by systems program
• Sometimes multiple flavors implemented – shells
• Primarily fetches a command from user and executes it
• Sometimes commands built-in, sometimes just names of programs
• If the latter, adding new features doesn’t require shell modification
User Operating System Interface -
GUI
• User-friendly desktop metaphor interface
• Usually mouse, keyboard, and monitor
• Icons represent files, programs, actions, etc.
• Various mouse buttons over objects in the interface cause various actions (provide information, options, execute a function, open a directory (known as a folder))
• Invented at Xerox PARC

• Many systems now include both CLI and GUI interfaces
• Microsoft Windows is GUI with CLI “command” shell
• Apple Mac OS X has the “Aqua” GUI interface with a UNIX kernel underneath and shells available
• Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)
Bourne Shell Command
Interpreter
The Mac OS X GUI
System Calls
A system call is a programmatic way in which a computer
program requests a service from the kernel of the operating system on
which it is executed. A system call is a way for programs to interact with
the operating system.
A computer program makes a system call when it requests a service from the operating system's kernel. System calls provide the services of the operating system to user programs via the Application Program Interface (API). System calls are the only entry points into the kernel and are executed in kernel mode.
System Calls
• Programming interface to the services provided by the OS
• Typically written in a high-level language (C or C++)
• Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use
• Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java virtual machine (JVM)
• Why use APIs rather than system calls?
(Note that the system-call names used throughout this text are generic)
Example of System
Calls
• System call sequence to copy the contents of one file to another file
Example of Standard
API
• Consider the ReadFile() function
• Win32 API—a function for reading from a file

A description of the parameters passed to ReadFile()


HANDLE file—the file to be read
LPVOID buffer—a buffer where the data will be read into and written from
DWORD bytesToRead—the number of bytes to be read into the buffer
LPDWORD bytesRead—the number of bytes read during the last read
LPOVERLAPPED ovl—indicates if overlapped I/O is being used
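A minimal sketch (not taken from the slides) showing how the parameters listed above are passed to ReadFile(); the file name "input.txt" and buffer size are arbitrary illustrative choices:

#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE file = CreateFileA("input.txt", GENERIC_READ, 0, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed\n");
        return 1;
    }

    char  buffer[256];      /* LPVOID buffer   - where the data is read into        */
    DWORD bytesRead = 0;    /* LPDWORD         - bytes read during the last read    */

    /* HANDLE file, buffer, DWORD bytesToRead, LPDWORD bytesRead, LPOVERLAPPED ovl */
    if (ReadFile(file, buffer, sizeof(buffer), &bytesRead, NULL)) {
        printf("Read %lu bytes\n", (unsigned long)bytesRead);
    }

    CloseHandle(file);
    return 0;
}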
System Call
Implementation
System Call Implementation
• Typically, a number associated with each system call
• System-call interface maintains a table indexed according to these
numbers
• The system call interface invokes intended system call in OS kernel and
returns status of the system call and any return values
• The caller need know nothing about how the system call is
implemented
• Just needs to obey the API and understand what the OS will do as a result of the call
• Most details of the OS interface are hidden from the programmer by the API and managed by the run-time support library (set of functions built into libraries included with the compiler)
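An illustrative sketch only (not real kernel code, all names made up) of the idea above: a table indexed by system-call number, with a dispatcher that looks up the requested number and returns its status:

#include <stdio.h>

typedef long (*syscall_fn)(long, long, long);

/* Two toy "system calls" standing in for real kernel services. */
static long sys_getpid_demo(long a, long b, long c) { (void)a; (void)b; (void)c; return 42; }
static long sys_write_demo(long fd, long buf, long len) { (void)fd; (void)buf; return len; }

/* The system-call interface maintains a table indexed by call number. */
static syscall_fn syscall_table[] = {
    sys_getpid_demo,   /* number 0 */
    sys_write_demo,    /* number 1 */
};

/* The trap handler would invoke the intended call and return its status. */
static long dispatch(long number, long a, long b, long c) {
    if (number < 0 || number >= (long)(sizeof(syscall_table) / sizeof(syscall_table[0])))
        return -1;   /* unknown system call: return an error status */
    return syscall_table[number](a, b, c);
}

int main(void) {
    printf("call 0 -> %ld\n", dispatch(0, 0, 0, 0));
    printf("call 1 -> %ld\n", dispatch(1, 1, 0, 5));
    return 0;
}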
API – System Call – OS Relationship
Standard C Library Example
• C program invoking printf() library call, which calls write() system call
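A minimal sketch of the same idea (assuming a POSIX system): the buffered C library call printf() versus the write() system call it ultimately uses:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    printf("hello via printf\n");            /* library call: buffers, then calls write() */
    fflush(stdout);                          /* force the buffered data out               */

    const char *msg = "hello via write\n";
    write(STDOUT_FILENO, msg, strlen(msg));  /* direct system call on file descriptor 1   */
    return 0;
}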
System Call Parameter Passing
• Often, more information is required than simply identity of desired
system call
• Exact type and amount of information vary according to OS and call

• Three general methods used to pass parameters to the OS
• Simplest: pass the parameters in registers
• In some cases, may be more parameters than registers
• Parameters stored in a block, or table, in memory, and address of
block passed as a parameter in a register
• This approach taken by Linux and Solaris
• Parameters placed, or pushed, onto the stack by the program and
popped off the stack by the operating system
• Block and stack methods do not limit the number or length of
parameters being passed
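A sketch of the register method (assuming Linux): the generic syscall() wrapper places the call number and its arguments in registers before trapping into the kernel:

#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char *msg = "parameters passed in registers\n";
    /* SYS_write's three parameters (fd, buffer, count) travel in registers. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
    return 0;
}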
Parameter Passing via
Table
Types of System Calls
• Process control
• end, abort
• load, execute
• create process, terminate process
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
• File management
• create file, delete file
• open, close file
• read, write, reposition
• get and set file attributes
Types of System Calls (Cont.)
• Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
• Information maintenance
• get time or date, set time or date
• get system data, set system data
• get and set process, file, or device attributes
• Communications
• create, delete communication connection
• send, receive messages
• transfer status information
• attach and detach remote devices
Examples of Windows and Unix System Calls
Example: MS-DOS
• Single-tasking
• Shell invoked when system booted
• Simple method to run program
• No process created
• Single memory space
• Loads program into memory, overwriting all but the kernel
• Program exit -> shell reloaded
MS-DOS execution

(a) At system startup (b) running a program


Example: FreeBSD(Berkeley
Software Distribution)
• Unix variant
• Multitasking
• User login -> invoke user’s choice of shell
• Shell executes fork() system call to create process
• Executes exec() to load program into process
• Shell waits for process to terminate or continues with user commands
• Process exits with code of 0 – no error or > 0 – error code
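A minimal sketch of the shell behaviour described above (assuming a POSIX system): fork() a child, exec() a program inside it, and wait for it to terminate; "ls -l" is just an example command:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* shell creates a new process        */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: replace its memory image with the requested program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec");                  /* reached only if exec fails          */
        exit(1);
    }
    int status = 0;
    waitpid(pid, &status, 0);            /* shell waits for the child to finish */
    if (WIFEXITED(status))
        printf("child exited with code %d\n", WEXITSTATUS(status)); /* 0 = no error */
    return 0;
}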
FreeBSD Running Multiple Programs
Examples of System Calls in Windows and Unix

Process control
  Windows: CreateProcess(), ExitProcess(), WaitForSingleObject()
  Unix:    fork(), exit(), wait()

File manipulation
  Windows: CreateFile(), ReadFile(), WriteFile()
  Unix:    open(), read(), write(), close()

Device management
  Windows: SetConsoleMode(), ReadConsole(), WriteConsole()
  Unix:    ioctl(), read(), write()

Information maintenance
  Windows: GetCurrentProcessID(), SetTimer(), Sleep()
  Unix:    getpid(), alarm(), sleep()

Communication
  Windows: CreatePipe(), CreateFileMapping(), MapViewOfFile()
  Unix:    pipe(), shmget(), mmap()

Protection
  Windows: SetFileSecurity(), InitializeSecurityDescriptor(), SetSecurityDescriptorGroup()
  Unix:    chmod(), umask(), chown()
System Programs
• System programs provide a convenient environment for program development and execution. They can be
divided into:
• File manipulation
• Status information
• File modification
• Programming language support
• Program loading and execution
• Communications
• Application programs

• Most users’ view of the operating system is defined by system programs, not the actual system calls
System Programs
• Provide a convenient environment for program development
and execution
• Some of them are simply user interfaces to system calls; others are
considerably more complex

• File management - Create, delete, copy, rename, print, dump, list, and generally manipulate files and directories

• Status information
• Some ask the system for info - date, time, amount of available
memory, disk space, number of users
• Others provide detailed performance, logging, and debugging
information
• Typically, these programs format and print the output to the
terminal or other output devices
• Some systems implement a registry - used to store and retrieve
configuration information
System Programs (Cont.)
• File modification
• Text editors to create and modify files
• Special commands to search contents of files or perform
transformations of the text

• Programming-language support - Compilers, assemblers, debuggers, and interpreters sometimes provided
• Program loading and execution - Absolute loaders, relocatable loaders, linkage editors, and overlay loaders; debugging systems for higher-level and machine language
• Communications - Provide the mechanism for creating virtual connections among processes, users, and computer systems
• Allow users to send messages to one another’s screens, browse
web pages, send electronic-mail messages, log in remotely, transfer
files from one machine to another
Operating System Design and Implementation

• Design and implementation of an OS is not “solvable”, but some approaches have proven successful

• Internal structure of different Operating Systems can vary widely

• Start by defining goals and specifications

• Affected by choice of hardware, type of system

• User goals and System goals


• User goals – operating system should be convenient to use, easy to learn, reliable,
safe, and fast
• System goals – operating system should be easy to design, implement, and
maintain, as well as flexible, reliable, error-free, and efficient
Operating System Design and Implementation (Cont.)

• Important principle: separate
  Policy: What will be done?
  Mechanism: How to do it?
• Mechanisms determine how to do something; policies decide what will be done
• The separation of policy from mechanism is a very
important principle, it allows maximum flexibility if
policy decisions are to be changed later
Simple Structure
• MS-DOS – written to provide the most functionality in the least space
• Not divided into modules
• Although MS-DOS has some structure, its interfaces and levels of functionality are not well
separated
MS-DOS Layer Structure
Layered Approach
• The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

• With modularity, layers are selected such that each uses functions
(operations) and services of only lower-level layers
Traditional UNIX System Structure
UNIX

• UNIX – limited by hardware functionality, the original UNIX operating system had
limited structuring. The UNIX OS consists of two separable parts
• Systems programs
• The kernel
• Consists of everything below the system-call interface and above the physical
hardware
• Provides the file system, CPU scheduling, memory management, and other
operating-system functions; a large number of functions for one level
Layered Operating System
Microkernel System Structure
• Moves as much as possible from the kernel into “user” space
• Communication takes place between user modules using message passing

• Benefits:
• Easier to extend a microkernel
• Easier to port the operating system to new architectures
• More reliable (less code is running in kernel mode)
• More secure

• Detriments:
• Performance overhead of user space to kernel space
communication

Example: Mac OS X and Windows NT


Monolithic kernel Vs
microkernels
• Monolithic kernel is a single large process running entirely in a single address space. It is a
single static binary file.
• All kernel services exist and execute in the kernel address space.
• The kernel can invoke functions directly.
• Examples of monolithic kernel based OSs: Unix, Linux. (Hybrid)

• In microkernels, the kernel is broken down into separate processes, known as servers.
Some of the servers run in kernel space and some run in user-space.
• All servers are kept separate and run in different address spaces. Servers invoke "services"
from each other by sending messages via IPC (Interprocess Communication).
• This separation has the advantage that if one server fails, other servers can still work
efficiently.
Modules
• Most modern operating systems implement kernel modules
• Uses object-oriented approach
• Each core component is separate
• Each talks to the others over known interfaces
• Each is loadable as needed within the kernel

• Overall, similar to layers but more flexible
Solaris Modular Approach
Virtual Machines
• A virtual machine takes the layered approach to its logical conclusion.
It treats hardware and the operating system kernel as though they
were all hardware.

• A virtual machine provides an interface identical to the underlying bare hardware.
• The host operating system creates the illusion that a process has its own processor and its own (virtual) memory.
• Each guest is provided with a (virtual) copy of the underlying computer.


Virtual Machines History and
Benefits
• First appeared commercially in IBM mainframes in 1972
• Fundamentally, multiple execution environments (different
operating systems) can share the same hardware
• Protect from each other
• Some sharing of files can be permitted, controlled
• Communicate with each other and with other physical systems via networking
• Useful for development, testing
• Consolidation of many low-resource use systems onto fewer
busier systems
• “Open Virtual Machine Format”, standard format of virtual
machines, allows a VM to run within many different virtual
machine (host) platforms
Virtual Machines (Cont.)

(a) Nonvirtual machine (b) virtual machine


Para-virtualization
• Presents guest with system similar but not
identical to hardware

• Guest must be modified to run on the paravirtualized hardware
• Guest can be an OS, or in the case of Solaris 10, applications running in containers
Virtualization Implementation
• Difficult to implement – must provide an exact duplicate of underlying
machine
• Typically runs in user mode, creates virtual user mode and virtual kernel
mode
• Timing can be an issue – slower than real machine
• Hardware support needed
• More support -> better virtualization
• e.g., AMD provides “host” and “guest” modes
Solaris 10 with Two Containers
VMware Architecture
The Java Virtual Machine
Operating-System Debugging

• Debugging is finding and fixing errors, or bugs


• OSes generate log files containing error information
• Failure of an application can generate core dump file capturing memory of the process
• Operating system failure can generate crash dump file containing kernel memory
• Beyond crashes, performance tuning can optimize system performance
• Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by definition, not smart
enough to debug it.”
• DTrace tool in Solaris, FreeBSD, Mac OS X allows live instrumentation on production systems
• Probes fire when code is executed, capturing state data and sending it to consumers of
those probes
Solaris 10 dtrace Following System Call
Operating System Generation
• Operating systems are designed to run on any of a class of machines; the
system must be configured for each specific computer site

• SYSGEN program obtains information concerning the specific


configuration of the hardware system

• Booting – starting a computer by loading the kernel

• Bootstrap program – code stored in ROM that is able to locate the kernel,
load it into memory, and start its execution
System Boot
• Operating system must be made available to hardware so hardware can
start it
• Small piece of code – bootstrap loader, locates the kernel, loads it into memory,
and starts it
• Sometimes two-step process where boot block at fixed location loads bootstrap
loader
• When power initialized on system, execution starts at a fixed memory location
• Firmware used to hold initial boot code
Processes
• Process Concept
• Process Scheduling
• Operations on Processes
• Inter-process Communication
• Examples of IPC Systems
• Communication in Client-Server Systems
Objectives
• To introduce the notion of a process -- a program in execution, which forms the basis of all computation
• To describe the various features of processes, including scheduling, creation and termination, and communication
• To describe communication in client-server systems
Process Concept
• An operating system executes a variety of programs:
• Batch system – jobs
• Time-shared systems – user programs or tasks

• Textbook uses the terms job and process almost interchangeably
• Process – a program in execution; process execution must progress in sequential fashion

• A process includes:
• program counter
• stack
• data section
The Process
• Multiple parts
• The program code, also called text section
• Current activity including program counter, processor registers
• Stack containing temporary data
• Function parameters, return addresses, local variables
• Data section containing global variables
• Heap containing memory dynamically allocated during run time
• Program is passive entity, process is active
• Program becomes process when executable file loaded into memory
• Execution of program started via GUI mouse clicks, command line entry of its name,
etc
• One program can be several processes
• Consider multiple users executing the same program
Process in Memory
Process State
• As a process executes, it changes state
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)

Information associated with each process


• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
The Process Control Block (PCB) is stored in RAM within the operating system's kernel space
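An illustrative sketch only (not any real kernel's layout): a simplified PCB expressed as a C structure holding the fields listed above; real kernel equivalents such as Linux's task_struct are far larger:

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier            */
    enum proc_state state;           /* process state                 */
    uint64_t        program_counter; /* saved program counter         */
    uint64_t        registers[16];   /* saved CPU registers           */
    int             priority;        /* CPU-scheduling information    */
    void           *page_table;      /* memory-management information */
    unsigned long   cpu_time_used;   /* accounting information        */
    int             open_files[16];  /* I/O status information        */
};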
Process Control Block (PCB)
CPU Switch From Process to Process
Process Scheduling

• Maximize CPU use, quickly switch processes onto CPU for time sharing
• Process scheduler selects among available processes
for next execution on CPU
• Maintains scheduling queues of processes
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main
memory, ready and waiting to execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
Process Representation in Linux
• Represented by the C structure task_struct
pid_t pid;                    /* process identifier          */
long state;                   /* state of the process        */
unsigned int time_slice;      /* scheduling information      */
struct task_struct *parent;   /* this process's parent       */
struct list_head children;    /* this process's children     */
struct files_struct *files;   /* list of open files          */
struct mm_struct *mm;         /* address space of this process */
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling
Schedulers

• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
• Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
• Sometimes the only scheduler in a system
Schedulers (Cont.)
• Short-term scheduler is invoked very frequently (milliseconds) => must be fast
• Long-term scheduler is invoked very infrequently (seconds, minutes) => may be slow
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
• CPU-bound process – spends more time doing computations; few very long CPU bursts
Addition of Medium Term Scheduling

• Medium-term scheduling is a part of swapping.


• It removes processes from memory.
• It reduces the degree of multiprogramming.
• The medium-term scheduler is in charge of handling the swapped-out processes.
Context Switch
• When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process
via a context switch.

• Context of a process represented in the PCB

• Context-switch time is overhead; the system does no useful work while switching
• The more complex the OS and the PCB -> the longer the context switch
• Time dependent on hardware support
• Some hardware provides multiple sets of registers per CPU -> multiple contexts loaded at once
Process Creation
• Parent process creates children processes, which, in turn, create other processes, forming a tree of processes
• Generally, process identified and managed via a process identifier (pid)

• Resource sharing
• Parent and children share all resources
• Children share subset of parent’s resources

• Execution
• Parent and children execute concurrently
• Parent waits until children terminate
Process Creation (Cont.)

• Address space
• Child duplicate of parent
• Child has a program loaded into it

• UNIX examples
• fork system call creates new process
• exec system call used after a fork to replace the process’ memory space with
a new program
Process Creation
C Program Forking Separate Process

#include <stdio.h>
#include <unistd.h>

int main() {
    pid_t pid;

    pid = fork();   /* Create a new process */

    if (pid < 0) {
        perror("Fork failed");
        return 1;
    }
    else if (pid == 0) {
        /* This is the child process */
        printf("Child Process: PID = %d, Parent PID = %d\n", getpid(), getppid());
    }
    else {
        /* This is the parent process */
        printf("Parent Process: PID = %d, Child PID = %d\n", getpid(), pid);
    }

    return 0;
}
A Tree of Processes on Solaris
Process Termination
• Process executes last statement and asks the operating
system to delete it (exit)
• Output data from child to parent (via wait)
• Process’ resources are deallocated by operating system

• Parent may terminate execution of children processes (abort)
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• If parent is exiting
• Some operating systems do not allow child to continue if its parent
terminates
• All children terminated - cascading termination
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other
processes, including sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
Cooperating Processes
• Independent process cannot affect or be affected by the
execution of another process

• Cooperating process can affect or be affected by the execution of another process
• Advantages of process cooperation


• Information sharing
• Computation speed-up
• Modularity
• Convenience
Producer-Consumer Problem

• Paradigm for cooperating processes: producer process produces information that is consumed by a consumer process
• unbounded-buffer places no practical limit on the size of the buffer
• bounded-buffer assumes that there is a fixed buffer size
Bounded-Buffer –Shared-Memory Solution

• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

• Solution is correct, but can only use BUFFER_SIZE-1 elements
Bounded-Buffer – Producer

item nextProduced;

while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing -- no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer
item nextConsumed;

while (true) {
    while (in == out)
        ;   /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
Interprocess Communication – Message Passing

• Mechanism for processes to communicate and to synchronize their actions
• Message system – processes communicate with each other without resorting to shared variables
• IPC facility provides two operations:
• send(message) – message size fixed or variable
• receive(message)
• If P and Q wish to communicate, they need to:
• establish a communication link between them
• exchange messages via send/receive
• Implementation of communication link
• physical (e.g., shared memory, hardware bus)
• logical (e.g., logical properties)
Implementation Questions
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of
communicating processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate
fixed or variable?
• Is a link unidirectional or bi-directional?
Direct Communication
• Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q

• Properties of communication link


• Links are established automatically
• A link is associated with exactly one pair of communicating
processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional
Indirect Communication
• Messages are directed and received from mailboxes
(also referred to as ports)
• Each mailbox has a unique id
• Processes can communicate only if they share a mailbox

• Properties of communication link


• Link established only if processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links
• Link may be unidirectional or bi-directional
Indirect Communication
• Operations
• create a new mailbox
• send and receive messages through mailbox
• destroy a mailbox

• Primitives are defined as:
  send(A, message) – send a message to mailbox A
  receive(A, message) – receive a message from mailbox A
Indirect Communication
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?

• Solutions
• Allow a link to be associated with at most two processes
• Allow only one process at a time to execute a receive
operation
• Allow the system to select arbitrarily the receiver. Sender is
notified who the receiver was.
Synchronization
• Message passing may be either blocking or non-blocking

• Blocking is considered synchronous
• Blocking send has the sender block until the message is received
• Blocking receive has the receiver block until a message is available
• Non-blocking is considered asynchronous
• Non-blocking send has the sender send the message and continue
• Non-blocking receive has the receiver receive a valid message or null
Buffering
• Queue of messages attached to the link; implemented
in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Examples of IPC Systems - POSIX
• POSIX Shared Memory
• Process first creates shared memory segment
  segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
• Process wanting access to that shared memory must attach to it
  shared_memory = (char *) shmat(segment_id, NULL, 0);
• Now the process could write to the shared memory
  sprintf(shared_memory, "Writing to shared memory");
• When done, a process can detach the shared memory from its address space
  shmdt(shared_memory);
Examples of IPC Systems - Mach

• Mach communication is message based


• Even system calls are messages
• Each task gets two mailboxes at creation- Kernel and Notify
• Only three system calls needed for message transfer
msg_send(), msg_receive(), msg_rpc()
• Mailboxes needed for communication, created via port_allocate()
Examples of IPC Systems – Windows XP

• Message-passing centric via local procedure call (LPC) facility
• Only works between processes on the same system
• Uses ports (like mailboxes) to establish and maintain
communication channels
• Communication works as follows:
• The client opens a handle to the subsystem’s connection port object.
• The client sends a connection request.
• The server creates two private communication ports and returns the
handle to one of them to the client.
• The client and server use the corresponding port handle to send
messages or callbacks and to listen for replies.
Local Procedure Calls in Windows XP
Communications in Client-Server Systems

• Sockets

• Remote Procedure Calls

• Pipes

• Remote Method Invocation (Java)


Sockets
• A socket is defined as an endpoint for communication
• Concatenation of IP address and port
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
• Communication takes place between a pair of sockets
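A minimal client-side sketch (assuming a POSIX system); the address 161.25.19.8 and port 1625 are simply the illustrative values from the slide:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);     /* create an endpoint for communication */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);                          /* port            */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);      /* host IP address */

    /* Communication takes place between this socket and the server's socket. */
    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        close(sock);
        return 1;
    }

    const char *msg = "hello\n";
    write(sock, msg, strlen(msg));
    close(sock);
    return 0;
}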


Socket Communication
Remote Procedure Calls
• Remote procedure call (RPC) abstracts procedure calls between
processes on networked systems

• Stubs – client-side proxy for the actual procedure on the server
• The client-side stub locates the server and marshalls the parameters
• The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server
Execution of RPC
Pipes
• Acts as a conduit allowing two processes to
communicate

• Issues
• Is communication unidirectional or bidirectional?
• In the case of two-way communication, is it half or full-
duplex?
• Must there exist a relationship (i.e. parent-child) between the
communicating processes?
• Can the pipes be used over a network?
Ordinary Pipes
• Ordinary Pipes allow communication in standard producer-
consumer style

• Producer writes to one end (the write-end of the pipe)

• Consumer reads from the other end (the read-end of the pipe)

• Ordinary pipes are therefore unidirectional
• Require parent-child relationship between communicating processes
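A minimal sketch of an ordinary pipe (assuming a POSIX system): the parent writes to the write-end and its child reads from the read-end:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: the consumer      */
        close(fd[1]);                   /* close unused write end   */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s", buf); }
        close(fd[0]);
    } else {                            /* parent: the producer     */
        close(fd[0]);                   /* close unused read end    */
        const char *msg = "greetings through the pipe\n";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}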
Ordinary Pipes
Named Pipes
• Named Pipes are more powerful than ordinary pipes

• Communication is bidirectional

• No parent-child relationship is necessary between the communicating processes
• Several processes can use the named pipe for communication
• Provided on both UNIX and Windows systems
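A minimal writer-side sketch of a named pipe (FIFO) on a POSIX system; the path "/tmp/demo_fifo" is an arbitrary choice for illustration, and an unrelated process may open the same path for reading:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0666);                 /* create the named pipe (harmless if it exists) */

    int fd = open(path, O_WRONLY);      /* blocks until some reader opens the FIFO       */
    if (fd < 0) { perror("open"); return 1; }

    const char *msg = "hello through the FIFO\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}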


Threads
• Overview
• Multithreading Models
• Thread Libraries
• Threading Issues
• Operating System Examples
• Windows XP Threads
• Linux Threads
Objectives
• To introduce the notion of a thread — a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems
• To discuss the APIs for the Pthreads, Win32, and Java thread libraries
• To examine issues related to multithreaded programming
Motivation
• Threads run within application
• Multiple tasks within the application can be implemented by separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
• Process creation is heavy-weight while thread creation is light-weight
• Can simplify code, increase efficiency
• Kernels are generally multithreaded
Single and Multithreaded Processes
Benefits

• Responsiveness

• Resource Sharing

• Economy

• Scalability
Multicore Programming
• Multicore systems putting pressure on programmers,
challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
Multithreaded Server Architecture
Concurrent Execution on a Single-core System
Parallel Execution on a Multicore System
User Threads
• Thread management done by user-level threads library

• Three primary thread libraries:


• POSIX Pthreads
• Win32 threads
• Java threads
Kernel Threads
• Supported by the Kernel

• Examples
• Windows XP/2000
• Solaris
• Linux
• Tru64 UNIX
• Mac OS X
Difference Between User-Level and Kernel-Level Threads

• Definition: user threads are managed entirely by user-level libraries; kernel threads are managed directly by the operating system kernel
• Management: user threads are managed by a user-space thread library (e.g., POSIX Pthreads); kernel threads are managed by the kernel (e.g., Windows NT, Linux)
• Context switching: faster for user threads (no need to switch to kernel mode); slower for kernel threads due to switching between user and kernel modes
• Visibility: the kernel is unaware of user threads; it knows about and schedules all kernel threads
• Scheduling: done by the user-level thread library for user threads; done by the kernel thread scheduler for kernel threads
• Blocking: one user thread blocking causes all threads in the process to block; a kernel thread can block without affecting others
• Portability: user threads are more portable across OS platforms; kernel threads are a less portable, kernel-specific implementation
• Resource overhead: low for user threads (no kernel data structures needed); higher for kernel threads (the kernel allocates data structures for each thread)
• Concurrency: only one user thread can access the kernel at a time; kernel threads support true parallelism on multiprocessors
• Use case examples: lightweight tasks, simulations, educational tools (user threads); multithreaded servers, OS-level parallel processing (kernel threads)
Multithreading Models
• Many-to-One

• One-to-One

• Many-to-Many
Many-to-One
• Many user-level threads mapped to single kernel thread

• Examples:
• Solaris Green Threads
• GNU Portable Threads
Many-to-One Model
One-to-One
• Each user-level thread maps to kernel thread

• Examples
• Windows NT/XP/2000
• Linux
• Solaris 9 and later
One-to-one Model
Many-to-Many Model
• Allows many user level threads to be mapped to many
kernel threads

• Allows the operating system to create a sufficient


number of kernel threads

• Solaris prior to version 9

• Windows NT/2000 with the Thread Fiber package


Many-to-Many Model
Two-level Model
• Similar to M:M, except that it allows a user thread to be
bound to kernel thread

• Examples
• IRIX
• HP-UX
• Tru64 UNIX
• Solaris 8 and earlier
Two-level Model
Thread Libraries
• Thread library provides programmer with API for
creating and managing threads

• Two primary ways of implementing


• Library entirely in user space
• Kernel-level library supported by the OS
• Thread pool
• Thread cancellation –
• Signal Handling
Pthreads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
• API specifies behavior of the thread library; implementation is up to the developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
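A minimal Pthreads sketch, in the spirit of the example slides that follow: the main thread creates one worker thread that sums the integers up to a command-line argument, then joins it:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static long sum;                        /* data shared with the worker thread */

static void *runner(void *param) {      /* thread start routine               */
    long upper = atol((char *)param);
    sum = 0;
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(NULL);
}

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s <integer>\n", argv[0]); return 1; }

    pthread_t tid;                      /* thread identifier                   */
    pthread_attr_t attr;                /* thread attributes                   */
    pthread_attr_init(&attr);           /* use the default attributes          */

    pthread_create(&tid, &attr, runner, argv[1]);   /* create the thread       */
    pthread_join(tid, NULL);                        /* wait for it to finish   */

    printf("sum = %ld\n", sum);
    return 0;
}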
Pthreads Example
Pthreads Example (Cont.)
Win32 API Multithreaded C Program
Win32 API Multithreaded C Program (Cont.)
Java Threads
• Java threads are managed by the JVM
• Typically implemented using the threads model provided by underlying OS
• Java threads may be created by:
• Extending Thread class
• Implementing the Runnable interface
Java Multithreaded Program
Java Multithreaded Program (Cont.)
Threading Issues
• Semantics of fork() and exec() system calls

• Thread cancellation of target thread


• Asynchronous or deferred

• Signal handling
• Synchronous and asynchronous
Threading Issues (Cont.)
• Thread pools
• Thread-specific data
• Facility needed for data private to each thread

• Scheduler activations
Semantics of fork() and exec()

• Does fork() duplicate only the calling thread or all threads?


Thread Cancellation

• Terminating a thread before it has finished

• Two general approaches:


• Asynchronous cancellation terminates the target thread
immediately.
• Deferred cancellation allows the target thread to
periodically check if it should be cancelled.
Signal Handling

• Signals are used in UNIX systems to notify a process that a particular event has occurred.
• A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled

• Options:
• Deliver the signal to the thread to which the signal applies
• Deliver the signal to every thread in the process
• Deliver the signal to certain threads in the process
• Assign a specific thread to receive all signals for the process
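A minimal sketch of the generate/deliver/handle sequence (assuming a POSIX system): install a handler so the process reacts when SIGINT is delivered:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {        /* the signal handler                      */
    (void)signo;
    got_signal = 1;                      /* only async-signal-safe work belongs here */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);        /* deliver SIGINT to this handler          */

    printf("press Ctrl-C to generate SIGINT...\n");
    while (!got_signal)
        pause();                         /* wait until a signal has been handled    */

    printf("signal was generated, delivered, and handled\n");
    return 0;
}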
Thread Pools
• Create a number of threads in a pool where they await
work

• Advantages:
• Usually slightly faster to service a request with an existing
thread than create a new thread
• Allows the number of threads in the application(s) to be
bound to the size of the pool
Thread Specific Data
• Allows each thread to have its own copy of data
• Useful when you do not have control over the thread creation process (i.e., when using a thread pool)
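A minimal sketch of thread-specific data using Pthreads (assuming a POSIX system): each thread stores and reads its own private value under a single shared key:

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;               /* one key, a private value per thread */

static void *worker(void *arg) {
    pthread_setspecific(key, arg);       /* this thread's own copy              */
    int *mine = pthread_getspecific(key);
    printf("thread sees its private value: %d\n", *mine);
    return NULL;
}

int main(void) {
    pthread_key_create(&key, NULL);      /* no destructor needed for this sketch */

    int a = 1, b = 2;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    pthread_key_delete(key);
    return 0;
}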
Scheduler Activations
• Both M:M and Two-level models require communication to maintain the appropriate number of kernel threads allocated to the application
• Scheduler activations provide upcalls - a communication mechanism from the kernel to the thread library
• This communication allows an application to maintain the correct number of kernel threads
Lightweight Processes
Operating System Examples
• Windows XP Threads

• Linux Thread
Windows XP Threads Data Structures
Windows XP Threads
• Implements the one-to-one mapping, kernel-level

• Each thread contains


• A thread id
• Register set
• Separate user and kernel stacks
• Private data storage area

• The register set, stacks, and private storage area are known as
the context of the threads

• The primary data structures of a thread include:


• ETHREAD (executive thread block)
• KTHREAD (kernel thread block)
• TEB (thread environment block)
Linux Threads
• Linux refers to them as tasks rather than threads

• Thread creation is done through the clone() system call
• clone() allows a child task to share the address space of the parent task (process)
• struct task_struct points to process data structures (shared or unique)
Linux Threads
• fork() and clone() system calls
• Doesn’t distinguish between process and thread
• Uses term task rather than thread
• clone() takes options to determine sharing on process create
• struct task_struct points to process data structures (shared or unique)
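A minimal sketch of clone() (assuming Linux with glibc): the child task shares the parent's address space because CLONE_VM is among the sharing options passed:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int shared_value = 0;            /* visible to the child via CLONE_VM */

static int child_fn(void *arg) {
    (void)arg;
    shared_value = 42;                  /* writes directly into the parent's memory */
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);   /* the child task needs its own stack */
    if (!stack) return 1;

    /* Sharing is selected by the flags; SIGCHLD lets the parent wait for the child. */
    pid_t pid = clone(child_fn, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared_value after child ran: %d\n", shared_value);   /* prints 42 */
    free(stack);
    return 0;
}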
