Chapter 2 Process Management
Process Management
Chapter Content
• Multithreading
– Allow multiple threads per process
– An example of where multiple threads might be used is a file
server process: it receives requests to read and write files and
sends back the requested data or accepts updated data
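A minimal sketch of this thread-per-request idea in C with POSIX threads; the request structure and handle_request() are illustrative stand-ins for real file I/O, not part of any actual server:

/* Sketch of a thread-per-request file server. handle_request() is a
 * hypothetical handler; a real server would carry file names, operations,
 * and buffers in the request. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct request { int id; /* ... file name, operation, buffer ... */ };

static void *handle_request(void *arg) {
    struct request *req = arg;
    /* read or write the file, then send the data/status back */
    printf("worker: serviced request %d\n", req->id);
    free(req);
    return NULL;
}

int main(void) {
    for (int i = 0; i < 3; i++) {          /* pretend 3 requests arrived */
        struct request *req = malloc(sizeof *req);
        req->id = i;
        pthread_t tid;
        pthread_create(&tid, NULL, handle_request, req); /* one thread per request */
        pthread_detach(tid);               /* no need to join */
    }
    pthread_exit(NULL);                    /* let the workers finish */
}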
Process Scheduling
• At any given time there may be two or more processes ready to run
– The OS must decide which one to run first
• Short-term scheduling - scheduling of processes that are ready to run and are loaded
into memory
• Dispatching - The action of loading the process state into the CPU.
• The part of the operating system that makes this decision is known as the scheduler
and the algorithm it uses is called the scheduling algorithm.
• The scheduler is usually concerned with deciding the policy and not the mechanism.
• There are various considerations when designing a good scheduling algorithm. Some of
them include:
1. Fairness – making sure that each process gets its fair share of the CPU
2. Efficiency – keep the CPU busy 100% of the time
3. Response time – minimize response time for interactive users
4. Turnaround time – minimize the time batch users must wait for output
5. Throughput – maximize the number of jobs processed per unit time.
Process Scheduling (cont)
• Nonpreemptive scheduling: occurs when the currently executing process gives up the CPU
voluntarily
– Once a job captures the processor and begins execution, it remains in the RUNNING state
uninterrupted until it issues an I/O request (a natural wait) or until it finishes (infinite loops
are the exception)
• Preemptive scheduling: preemption is the action of stopping a running job and scheduling
another in its place
– Occurs when the operating system decides to favor another process, preempting the
currently executing process
• Context switching is required by all preemptive algorithms
– When Job A is preempted
• All of its processing information must be saved in its PCB for later (when Job A’s
execution resumes).
• The contents of Job B’s PCB are loaded into the appropriate registers so Job B can
run (a context switch).
– Later, when Job A is once again assigned to the processor, another context switch is performed.
• Information from the preempted job is stored in its PCB.
• The contents of Job A’s PCB are loaded into the appropriate registers.
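A minimal sketch of this PCB save/restore in C, assuming a simplified register set; a real kernel does the register copying in assembly, so this only models the bookkeeping:

/* Sketch of the bookkeeping behind a context switch. The PCB layout and
 * register set are simplified assumptions, not a real kernel's. */
#include <stdio.h>

struct cpu_state { long pc, sp, regs[8]; };   /* assumed register set */
struct pcb { int pid; struct cpu_state saved; };

/* Save the outgoing job's CPU state into its PCB, then load the
 * incoming job's saved state into the "registers". */
void context_switch(struct pcb *out, struct pcb *in, struct cpu_state *cpu) {
    out->saved = *cpu;     /* Job A's processing information -> its PCB */
    *cpu = in->saved;      /* Job B's PCB contents -> CPU registers    */
}

int main(void) {
    struct cpu_state cpu = { .pc = 0x1000, .sp = 0x8000 };
    struct pcb a = { .pid = 1 };
    struct pcb b = { .pid = 2, .saved = { .pc = 0x2000, .sp = 0x9000 } };
    context_switch(&a, &b, &cpu);          /* preempt A, dispatch B */
    printf("running pid 2 at pc=0x%lx\n", cpu.pc);
    context_switch(&b, &a, &cpu);          /* later: A is rescheduled */
    printf("running pid 1 at pc=0x%lx\n", cpu.pc);
    return 0;
}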
Scheduling algorithms: Shortest Job First (SJF)
• Non-preemptive.
• Handles jobs based on the length of their CPU cycle time.
– Uses these lengths to schedule the process with the shortest time.
– Looks at all processes in the ready state and dispatches the one with
the smallest service time
• Optimal – gives the minimum average waiting time for a given set of
processes.
– Optimal only when all of the jobs are available at the same time and the CPU
estimates are available and accurate.
• Doesn’t work in interactive systems because users can’t
estimate in advance the CPU time required to run their jobs
SJF example
• Service times: τ(p0) = 350, τ(p1) = 125, τ(p2) = 475, τ(p3) = 250, τ(p4) = 75
• SJF schedule (shortest first): P4 (0–75), P1 (75–200), P3 (200–450), P0 (450–800), P2 (800–1275)
• Turnaround times: TTRnd(p0) = 800, TTRnd(p1) = 200, TTRnd(p2) = 1275, TTRnd(p3) = 450, TTRnd(p4) = 75
• Average turnaround time:
– TTRnd = (800 + 200 + 1275 + 450 + 75)/5 = 560
RR example (same jobs)
• Time slice size is 50, negligible amount of time for context switching
• Turnaround times: TTRnd(p0) = 1100, TTRnd(p1) = 550, TTRnd(p2) = 1275, TTRnd(p3) = 950, TTRnd(p4) = 475
• Average turnaround time:
– TTRnd = (1100 + 550 + 1275 + 950 + 475)/5 = 870
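The SJF numbers above can be checked with a short simulation. A minimal sketch, assuming all five jobs arrive at time 0 and the service-time estimates are exact:

/* Nonpreemptive SJF on the example workload above; reproduces the
 * turnaround times and the 560 average. */
#include <stdio.h>

int main(void) {
    int tau[] = { 350, 125, 475, 250, 75 };   /* service times tau(p0..p4) */
    int n = 5, done[5] = { 0 }, ttrnd[5];
    int t = 0;
    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)           /* pick the shortest waiting job */
            if (!done[i] && (best < 0 || tau[i] < tau[best]))
                best = i;
        t += tau[best];                       /* run it to completion */
        ttrnd[best] = t;                      /* arrival at 0, so TTRnd = finish time */
        done[best] = 1;
    }
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += ttrnd[i];
        printf("TTRnd(p%d) = %d\n", i, ttrnd[i]);
    }
    printf("average = %d\n", sum / n);        /* prints 560 */
    return 0;
}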
Multiple Level Queues (MLQ), based on priority
[Figure: system processes and interrupts are placed in the highest-priority queue, serviced event-driven (ED)]
• 2 queues
– Foreground processes (highest priority)
– Background processes (lowest priority)
• 3 queues
– OS processes and interrupts (highest priority, serviced ED)
– Interactive processes (medium priority, serviced RR)
– Batch jobs (lowest priority, serviced FCFS)
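A minimal sketch of MLQ dispatch under the 3-queue setup above: always serve the highest-priority non-empty queue. The per-queue ready counts are made-up stand-ins for real ready lists:

/* MLQ dispatch sketch: lower queue index = higher priority. */
#include <stdio.h>

#define NQUEUES 3   /* 0 = OS/interrupts, 1 = interactive, 2 = batch */

int count[NQUEUES] = { 0, 2, 5 };   /* pretend ready counts per queue */

int pick_queue(void) {
    for (int q = 0; q < NQUEUES; q++)   /* scan from highest priority down */
        if (count[q] > 0)
            return q;
    return -1;                          /* nothing ready */
}

int main(void) {
    printf("dispatch from queue %d\n", pick_queue());   /* prints 1 */
    return 0;
}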
Multiple Level Queue with
feedback
• Same as MLQ, but processes can migrate from class to
class dynamically
• Different strategies to modify the priority:
– Increase the priority of a process during its compute-intensive
phases (on the assumption that the process needs a larger share of
the CPU to sustain acceptable service)
– Decrease the priority of a process during its compute-intensive
phases (on the assumption that the process is trying to take more
than its share of the CPU, which may impact other users)
– If a process gives up the CPU before its time slice expires, assign
it to a higher-priority queue
• On its way to completion, a process may pass through a
number of different classes
• Any of the previous algorithms may be used for treating a
specific process class.
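A minimal sketch of one such feedback rule, assuming four queues and a flag recording whether the process used its whole slice; requeue() and the queue indices are illustrative, not any particular system's policy:

/* Feedback rule sketch: demote a process that consumes its whole time
 * slice, promote one that yields early. Queue 0 is highest priority. */
#include <stdio.h>

#define NQUEUES 4

/* Return the queue the process should join next. */
int requeue(int q, int used_full_slice) {
    if (used_full_slice && q < NQUEUES - 1)
        return q + 1;       /* compute-bound: lower its priority */
    if (!used_full_slice && q > 0)
        return q - 1;       /* gave up the CPU early: raise its priority */
    return q;
}

int main(void) {
    printf("%d\n", requeue(1, 1));  /* prints 2: demoted  */
    printf("%d\n", requeue(1, 0));  /* prints 0: promoted */
    return 0;
}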
Practical example: BSD UNIX
scheduling
• MLQ with feedback approach – 32 run queues
– 0 through 7 for system processes
– 8 through 31 for processes executing in user space
• The dispatcher selects a process from the
highest-priority non-empty queue; within a queue, RR is
used, so only processes in the highest-priority
queue can execute; the time slice is less than
100 ms
• Each process has an external priority (used to
influence, but not solely determine, the queue in which
the process is placed after creation)
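A minimal sketch of this dispatch rule, borrowing the 4.3BSD idea of a bitmask of non-empty run queues (whichqs) but heavily simplified; the queue contents themselves are omitted:

/* Scan the 32 run queues from queue 0 (highest priority) and take the
 * first non-empty one; RR is applied within that queue. */
#include <stdio.h>
#include <stdint.h>

uint32_t whichqs = 0;              /* bit q set => run queue q non-empty */

void mark_ready(int q)   { whichqs |= (1u << q); }

int pick_run_queue(void) {
    for (int q = 0; q < 32; q++)   /* queues 0-7: system; 8-31: user */
        if (whichqs & (1u << q))
            return q;
    return -1;                     /* idle: nothing runnable */
}

int main(void) {
    mark_ready(12);                /* a user process becomes runnable   */
    mark_ready(4);                 /* a system process becomes runnable */
    printf("dispatch from queue %d\n", pick_run_queue());  /* prints 4 */
    return 0;
}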
Problems with concurrent execution
1. Proactive Approaches:
– Deadlock Prevention
• Prevent one of the 4 necessary conditions from arising
• This will prevent deadlock from occurring
– Deadlock Avoidance
• Carefully allocate resources based on future knowledge
• Deadlocks are thus avoided
[Figure: resource-ordering example, resources numbered 1–4]
– Ordering not always possible, low resource utilization
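One common way to prevent the circular-wait condition is a fixed global ordering on resource acquisition. A minimal sketch with two pthread mutexes, ordering locks by address; lock_pair/unlock_pair are illustrative helper names, not a standard API:

/* Circular-wait prevention: every thread acquires mutexes in ascending
 * address order, so no cycle of waiters can form. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a > b) { pthread_mutex_t *t = a; a = b; b = t; }  /* order by address */
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}

int main(void) {
    lock_pair(&m2, &m1);     /* safe in either argument order */
    printf("both resources held, no circular wait possible\n");
    unlock_pair(&m1, &m2);
    return 0;
}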
Deadlock Avoidance
• Avoidance Approach:
– Before granting resource, check if state is safe
– If the state is safe, no deadlock!
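A sketch of such a safety check in the spirit of the banker's algorithm, restricted to a single resource type; the allocation, remaining-need, and available figures are made-up illustrative data:

/* The state is safe if some order exists in which every process can
 * obtain its remaining need, finish, and release what it holds. */
#include <stdio.h>

#define N 3
int alloc[N] = { 2, 1, 3 };   /* instances held by each process   */
int need[N]  = { 4, 2, 5 };   /* instances each may still request */
int avail    = 3;             /* free instances                   */

int state_is_safe(void) {
    int free_res = avail, finished[N] = { 0 }, progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < N; i++)
            if (!finished[i] && need[i] <= free_res) {
                free_res += alloc[i];   /* i can finish and release its resources */
                finished[i] = progress = 1;
            }
    }
    for (int i = 0; i < N; i++)
        if (!finished[i]) return 0;     /* someone can never finish: unsafe */
    return 1;
}

int main(void) {
    printf("state is %s\n", state_is_safe() ? "safe" : "unsafe");  /* safe */
    return 0;
}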
Deadlock Detection & Recovery
• If neither avoidance nor prevention is implemented,
deadlocks can (and will) occur.
• Coping with this requires:
– Detection: finding out if deadlock has occurred
• Keep track of resource allocation (who has what)
• Keep track of pending requests (who is waiting for what)
– Recovery: untangle the mess.
• Livelock
• Starvation
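The detection bookkeeping above (who has what, who is waiting for what) can be condensed into a wait-for graph and checked for cycles. A minimal sketch with made-up edges:

/* Deadlock detection sketch: DFS for a cycle in the wait-for graph. */
#include <stdio.h>

#define N 4
int wait_for[N][N];              /* wait_for[i][j] = 1: Pi waits on Pj */
int color[N];                    /* 0 unvisited, 1 in progress, 2 done */

int has_cycle(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++)
        if (wait_for[u][v]) {
            if (color[v] == 1) return 1;              /* back edge: cycle */
            if (color[v] == 0 && has_cycle(v)) return 1;
        }
    color[u] = 2;
    return 0;
}

int main(void) {
    wait_for[0][1] = wait_for[1][2] = wait_for[2][0] = 1;  /* P0->P1->P2->P0 */
    for (int i = 0; i < N; i++)
        if (color[i] == 0 && has_cycle(i)) {
            printf("deadlock detected\n");
            return 0;
        }
    printf("no deadlock\n");
    return 0;
}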
Communication in Client-Server Systems
• Sockets
• Remote Procedure Calls
• Remote Method Invocation (Java)
Sockets
• A socket is defined as an endpoint
for communication
• Identified by the concatenation of an IP
address and a port number
• The socket 161.25.19.8:1625 refers
to port 1625 on host 161.25.19.8
• Communication takes place between
a pair of sockets
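A minimal sketch of one half of such a socket pair: a client creates a TCP endpoint and connects to the address from the example above. Error handling is kept to a minimum, and nothing actually listens at that address:

/* Connect a client socket to 161.25.19.8:1625 (the example above). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);             /* TCP endpoint */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1625);                          /* port 1625 */
    inet_pton(AF_INET, "161.25.19.8", &addr.sin_addr);    /* host IP   */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
        printf("connected: local socket paired with 161.25.19.8:1625\n");
    else
        perror("connect");
    close(fd);
    return 0;
}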
Socket Communication
Remote Procedure Calls
Client/Server with Shared Memory
[Figure: client and server map a common shared-memory region into their address spaces (at different addresses, e.g. 0x30000 and 0x50000) through the kernel; an input file and an output file are also shown]
Shared memory
• Advantages
– Good for sharing large amounts of data
– Very fast
• Limitation
– No synchronization provided; applications must create
their own
• Alternative
– The mmap system call, which maps a file into the
address space of the caller
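A minimal sketch of this mmap alternative: map a file into the caller's address space and read its contents through an ordinary pointer. The file name input.txt is an assumption for illustration:

/* Map a whole file read-only and write its bytes to stdout. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("input.txt", O_RDONLY);       /* assumed input file */
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    fstat(fd, &st);
    /* Map the whole file; the kernel pages it in on demand. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    fwrite(p, 1, st.st_size, stdout);           /* file contents via memory */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}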