Module02-2
CCCS 321
Module 2: Processes and Process Management
Graham Thorpe
1
Process Concept
• A program that is running on a computer is referred to as a process.
• An operating system executes a variety of programs:
– In batch systems we refer to them as jobs, in time-shared systems we refer to them as user
programs or tasks
• The terms job and process are used almost interchangeably
• Process – a program in execution; process execution must progress in sequential
fashion
• Multiple parts
– The program code, also called text section
– Current activity including program counter, processor registers
– Stack containing temporary data
• Function parameters, return addresses, local variables
– Data section containing global variables
– Heap containing memory dynamically allocated during run time
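As a rough illustration (my own sketch, not one of the course's examples), the short C program below notes which section of a process each piece occupies:

```c
/* A hand-written sketch (not from the course examples) marking which
 * section of a process each piece of a small program lives in. */
#include <stdio.h>
#include <stdlib.h>

int counter = 0;                       /* data section: global variable */

int square(int x) {                    /* text section: program code */
    int result = x * x;                /* stack: local variable */
    return result;
}

int main(void) {
    int local = square(5);             /* stack: parameter and local variable */
    int *buffer = malloc(100 * sizeof(int));   /* heap: dynamic allocation */

    counter++;
    printf("square(5) = %d, counter = %d\n", local, counter);

    free(buffer);                      /* return the heap memory */
    return 0;
}
```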
2
Process Concept
• A program is a passive entity stored on disk (an executable file);
a process is an active entity
– A program becomes a process when its executable file is loaded into memory
3
Process Concept
• We can instruct the OS to execute a program from the command-line
interface (CLI), via Graphical User Interface (GUI) mouse clicks,
from within other processes, etc.
• One application program can be several processes
– Consider multiple users executing the same program. Is there one
shared process or does each user spawn their own process of the
same code?
• Reentrant code is a routine that multiple applications or processes can share
and execute simultaneously, because the code never modifies itself and each
caller keeps its own private data (see the sketch below).
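To make this concrete, here is a small hand-written C sketch (mine, not from the course): the first routine is not reentrant because it returns a shared static buffer, while the second is reentrant because each caller supplies its own storage.

```c
/* An illustrative sketch (not from the slides) contrasting non-reentrant
 * and reentrant versions of the same routine. */
#include <stddef.h>

/* Non-reentrant: every caller shares the one static buffer, so two users
 * of the shared code running this at once can overwrite each other's result. */
char *to_upper_shared(const char *s) {
    static char buf[128];
    size_t i;
    for (i = 0; s[i] != '\0' && i < sizeof(buf) - 1; i++)
        buf[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
    buf[i] = '\0';
    return buf;                       /* shared, writable state */
}

/* Reentrant: the caller supplies its own buffer, so the routine keeps no
 * writable shared state and one copy of the code can serve many callers. */
void to_upper_r(const char *s, char *out, size_t outlen) {
    size_t i;
    if (outlen == 0)
        return;
    for (i = 0; s[i] != '\0' && i + 1 < outlen; i++)
        out[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
    out[i] = '\0';
}
```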
4
CPU Architectures
• Processes are executed on CPUs, so it is important to
understand and consider their architectures when designing an
OS and applications that will run on them.
• A central processing unit (CPU) is the electronic circuitry within
a computer that executes program instructions, performing the arithmetic,
logic, and control operations they specify.
• https://en.wikipedia.org/wiki/Central_processing_unit
5
Multi-CPU Systems
6
Symmetric Multiprocessing Architecture
7
A Dual-Core Design
8
Multicore Programming
• Multicore and multiprocessor systems put pressure on operating system
designers, development tools, and programmers. Challenges include:
– Dividing activities
– Balance
– Data splitting
– Data dependency
– Testing and debugging
• Parallelism implies a system can perform more than one task
simultaneously
• Concurrency supports more than one task making progress
– Single processor / core, scheduler providing concurrency
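As an informal illustration of dividing activities and data splitting (my own sketch, assuming POSIX threads; compile with -pthread), each worker thread sums its own slice of an array, and the main thread combines the partial results. On a multicore machine the chunks can run in parallel; on a single core they merely run concurrently.

```c
/* Rough "data splitting" sketch: the array is divided into chunks and
 * each chunk is summed by its own thread, avoiding data dependencies. */
#include <pthread.h>
#include <stdio.h>

#define N        1000
#define NTHREADS 4

static int  data[N];
static long partial[NTHREADS];

static void *sum_chunk(void *arg) {
    long id    = (long)arg;              /* which chunk this thread owns */
    int  start = id * (N / NTHREADS);
    int  end   = start + (N / NTHREADS);
    long sum   = 0;
    for (int i = start; i < end; i++)
        sum += data[i];
    partial[id] = sum;                   /* one result slot per thread: no sharing */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < N; i++)
        data[i] = i;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

    long total = 0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);      /* wait for each worker to finish */
        total += partial[t];             /* combine the partial results */
    }
    printf("total = %ld\n", total);
    return 0;
}
```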
9
Concurrency vs. Parallelism
Concurrent execution on single-core system:
10
Process Threads
• Most modern applications are multithreaded
• Threads run within an application/process
• Multiple tasks within the application can be implemented by separate
threads. Examples include:
– Update display
– Fetch data
– Spell checking
– Answer a network request
• Process creation is heavy-weight, while thread creation is light-weight
• Can simplify code, increase efficiency
• OS Kernels are generally multithreaded
11
Single and Multithreaded Processes
12
Example: Windows Multithreaded C Program
13
Windows Multithreaded C Program (Cont.)
14
Example: Java Threads
• Java threads are managed by the JVM
• Typically implemented using the threads model provided by the
underlying OS
• Java threads may be created by extending the Thread class or by
implementing the Runnable interface
15
Q How many cores does a dual socket, quad-core
system have?
16
Q How many cores does a dual socket, quad-core system
have?
A: 2 sockets × 4 cores per socket = 8 cores.
17
Process State
• As a process executes, it changes state
– new: The process is being created
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur
– ready: The process is waiting to be assigned to a processor
– terminated: The process has finished execution
18
Process Control Block (PCB)
Information associated with each process
(also called task control block)
• Process state – running, waiting, etc.
• Program counter – address of the next instruction to execute
• CPU registers – contents of all process-centric registers
• CPU scheduling information – priorities, scheduling queue
pointers
• Memory-management information – memory allocated to the
process
• Accounting information – CPU used, clock time elapsed since
start, time limits
• I/O status information – I/O devices allocated to process, list
of open files
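The C struct below is a rough sketch of what a PCB might hold; the field names and sizes are invented for illustration and do not come from any particular kernel.

```c
/* A simplified, illustrative PCB layout; real kernel structures are far larger. */
#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    pid_t            pid;              /* process identifier                 */
    enum proc_state  state;            /* new, ready, running, waiting, ...  */
    void            *program_counter;  /* next instruction to execute        */
    unsigned long    registers[32];    /* saved CPU register contents        */
    int              priority;         /* CPU scheduling information         */
    struct pcb      *next_in_queue;    /* scheduling queue pointer           */
    void            *page_table;       /* memory-management information      */
    unsigned long    cpu_time_used;    /* accounting information             */
    int              open_files[16];   /* I/O status: open file descriptors  */
};
```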
19
CPU Switch From Process to Process
20
Threads
• So far, a process has a single thread of execution
• Consider having multiple program counters per process
– Multiple locations can execute at once
• Multiple threads of control -> threads
• Must then have storage for thread details, multiple program
counters in PCB
21
Process Scheduling
• Maximize CPU use, quickly switch processes onto CPU for time
sharing
• Process scheduler selects among available processes for next
execution on CPU
• Maintains scheduling queues of processes
– Job queue – set of all processes in the system
– Ready queue – set of all processes residing in main memory, ready
and waiting to execute
– Device queues – set of processes waiting for an I/O device
– Processes migrate among the various queues
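As a rough sketch of how a ready queue might be kept (my own illustration; real schedulers use richer per-CPU and priority structures), the code below keeps ready processes in a simple FIFO linked list of PCBs.

```c
/* Informal FIFO ready-queue sketch; a minimal PCB stand-in is defined
 * here so the snippet is self-contained. */
#include <stddef.h>

struct pcb {
    int         pid;
    struct pcb *next_in_queue;
};

struct ready_queue {
    struct pcb *head;   /* next process the scheduler will dispatch */
    struct pcb *tail;   /* where a newly ready process is appended  */
};

/* A process that becomes ready (new, I/O complete, preempted) joins the tail. */
void enqueue_ready(struct ready_queue *q, struct pcb *p) {
    p->next_in_queue = NULL;
    if (q->tail != NULL)
        q->tail->next_in_queue = p;
    else
        q->head = p;
    q->tail = p;
}

/* The scheduler removes the process at the head for execution on the CPU. */
struct pcb *dequeue_ready(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p != NULL) {
        q->head = p->next_in_queue;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return p;
}
```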
22
Multitasking in Mobile Systems
• Some mobile systems (e.g., early versions of iOS) allow only one process to run,
with the others suspended. This saves CPU and battery.
• Due to limited screen real estate, the user interface of early iOS (iPhone/iPad OS)
versions provided for a
– Single foreground process – controlled via the user interface
– Multiple background processes – in memory, running, but not on the display, and with
limits
– Limits include a single, short task; receiving notification of events; and specific
long-running tasks like audio playback
• Android runs both foreground and background processes, with fewer limits
– Background process uses a service to perform tasks
– Service can keep running even if background process is suspended
– Service has no user interface, small memory use
23
Context Switch
• When CPU switches to another process, the system must save
the state of the old process and load the saved state for the
new process via a context switch
• Context of a process represented in the PCB
• Context-switch time is overhead; the system does no useful
work while switching
– The more complex the OS and the PCB, the longer the context switch
• Time dependent on hardware support
– Some hardware provides multiple sets of registers per CPU, so multiple
contexts can be loaded at once
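The toy C sketch below (my own, heavily simplified) shows only the bookkeeping involved: save the old process's context into its PCB, then load the new process's context from its PCB. The real save/restore is done in architecture-specific assembly.

```c
/* Toy context-switch sketch: a global array stands in for the CPU's registers. */
#include <string.h>

enum proc_state { READY, RUNNING };

struct pcb {
    enum proc_state state;
    unsigned long   registers[32];        /* saved CPU register contents */
};

static unsigned long cpu_registers[32];   /* stand-in for the real CPU state */

void context_switch(struct pcb *old, struct pcb *next) {
    /* Save the state of the old process into its PCB. */
    memcpy(old->registers, cpu_registers, sizeof(cpu_registers));
    old->state = READY;

    /* Load the saved state of the new process from its PCB. */
    memcpy(cpu_registers, next->registers, sizeof(cpu_registers));
    next->state = RUNNING;
    /* Neither process does useful work while these steps run: pure overhead. */
}
```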
24
Operations on Processes
• System must provide mechanisms for:
– process creation,
– process termination
25
Q Does Windows allow the user to end a
process?
26
Process Creation
• A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes
• Generally, process identified and managed via a process identifier
(pid)
• Resource sharing options
– Parent and children share all resources
– Children share subset of parent’s resources
– Parent and child share no resources
• Execution options
– Parent and children execute concurrently
– Parent waits until children terminate
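A minimal UNIX-style sketch of these options (my own example, assuming POSIX fork()/exec()/wait(); on Windows the analogous call is CreateProcess()): the parent creates a child, the child runs a different program, and the parent waits for it to terminate.

```c
/* Parent creates a child with fork(), the child loads a new program with
 * exec(), and the parent waits for the child to finish. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                    /* create a child process */

    if (pid < 0) {                         /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                 /* child: replace its image */
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");                  /* reached only if exec fails */
        exit(1);
    } else {                               /* parent: wait for the child */
        wait(NULL);
        printf("Child %d has terminated\n", (int)pid);
    }
    return 0;
}
```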
27
Process Termination
28
Multiprocess Architecture – Example: Chrome Browser
29
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per
time unit
• Turnaround time – amount of time to execute a particular
process
• Waiting time – amount of time a process has been waiting in
the ready queue
• Response time – amount of time from when a request is submitted
until the first response is produced, not the time to output the full
response (for time-sharing environments)
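As a quick, purely illustrative calculation (hypothetical numbers, not from the slides): suppose a process arrives at time 0, waits 4 ms in the ready queue, and then runs for 6 ms of CPU time with no further blocking. Its waiting time is 4 ms, its turnaround time is 4 + 6 = 10 ms, and if its first visible output appeared 5 ms after submission, its response time would be 5 ms.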
30
Q How can we tell if a CPU is busy? When busy, is
the computer slower?
31
Scheduling Algorithm Optimization Criteria
• Maximize CPU utilization and throughput
• Minimize turnaround time, waiting time, and response time
32
Scheduling Algorithms
• Shortest job first
• Round robin
• Priority-based
• Scheduling is more complex for real-time operating systems
(but these are not the focus of this course).
• Most modern OSes use pre-emptive multitasking, which means
the scheduler can interrupt a running process to give another
process a turn at execution on the CPU. Non-pre-emptive OSes
must wait for the process to relinquish control.
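For a feel of how the choice of algorithm matters, consider three hypothetical processes that arrive together with CPU bursts of 24, 3, and 3 ms. Run in that order (first-come, first-served), their waiting times are 0, 24, and 27 ms, an average of 17 ms; shortest job first runs the two 3 ms bursts first, giving waiting times of 6, 0, and 3 ms, an average of 3 ms. Round robin with a small quantum would instead interleave all three, trading extra context switches for better responsiveness.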
33
Multiple-Processor Scheduling
• CPU scheduling more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Asymmetric multiprocessing – only one processor accesses the system data
structures, alleviating the need for data sharing
• Symmetric multiprocessing (SMP) – each processor is self-scheduling, all
processes in common ready queue, or each has its own private queue of
ready processes
• Processor affinity – a process has an affinity for the processor on which it is
currently running, so the scheduler tries to keep it executing on that favored
processor (preserving the contents of that processor's cache); see the sketch after this list.
• For optimal performance, an OS must factor in CPU architectures that have
non-uniform memory access (NUMA).
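As a Linux-specific aside (my own sketch, not part of the course material), an application can request hard affinity with sched_setaffinity(); this example pins the calling process to CPU 0.

```c
/* Pin the calling process to CPU 0 using the Linux affinity API. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);            /* start with an empty CPU set   */
    CPU_SET(0, &mask);          /* allow execution only on CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Pinned process %d to CPU 0\n", (int)getpid());
    return 0;
}
```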
34
NUMA and CPU Scheduling
35