Lec1 OS
1
Learning Objectives (1 of 2)
After completing this chapter, you should be able to describe:
• The differences among common configurations of multiprocessing systems
• How processes and processors compare
• How multi-core processor technology works
• How a critical region aids process synchronization
2
Learning Objectives (2 of 2)
• Essential concepts of process synchronization software
• How several processors work cooperatively together
• How jobs, processes, and threads are executed
• How concurrent programming languages manage tasks
3
What Is Parallel Processing? (1 of 4)
• Parallel processing
o Two or more processors operate in one system at the same time
• Work may or may not be related
o Two or more CPUs execute instructions simultaneously
o Processor Manager
• Coordinates activity of each processor
• Synchronizes interaction among CPUs
4
What Is Parallel Processing? (2 of 4)
• Benefits
o Increased reliability
• More than one CPU
• If one processor fails, others take over: must be designed into the system
o Faster processing
• Instructions processed in parallel, two or more at a time
5
What Is Parallel Processing? (3 of 4)
• Faster instruction processing methods
o CPU allocated to each program or job
o CPU allocated to each working set or parts of it
o Individual instructions subdivided
• Each subdivision processed simultaneously
• Concurrent programming
• Two major challenges
o Connecting processors into configurations
o Orchestrating processor interaction
• Example: six-step information retrieval system
o Synchronization is key
6
What Is Parallel Processing? (4 of 4)
(table 6.1)
The six steps of the four-processor fast food lunch stop.
© Cengage Learning 2018

Originator | Action | Receiver
Processor 1 (the order clerk) | Accepts the query, checks for errors, and passes the request on to the receiver | Processor 2 (the bagger)
Processor 2 (the bagger) | Searches the database for the required information (the hamburger) | —
Processor 3 (the cook) | Retrieves the data from the database (the meat to cook for the hamburger) if it's kept off-line in secondary storage | —
Processor 3 (the cook) | Once the data is gathered (the hamburger is cooked), it's placed where the receiver can get it (in the hamburger bin) | Processor 2 (the bagger)
Processor 2 (the bagger) | Retrieves the data (the hamburger) and passes it on to the receiver | Processor 4 (the cashier)
Processor 4 (the cashier) | Routes the response (your order) back to the originator of the request | You
7
Levels of Multiprocessing (1 of 2)
• Multiprocessing occurs at three levels
o Job level
o Process level
o Thread level
• Each level requires different synchronization frequency
8
Levels of Multiprocessing (2 of 2)
(table 6.2)
Typical levels of parallelism and the required synchronization among processors.
© Cengage Learning 2018
10
Typical Multiprocessing Configurations
• Multiple processor configuration impacts systems
• Three types
o Master/slave
o Loosely coupled
o Symmetric
11
Master/Slave Configuration (1 of 3)
• Asymmetric multiprocessing system
• Essentially a single-processor system
o With additional slave processors
• Each managed by the primary master processor
• Master processor responsibilities
o Manages entire system
o Maintains status of all processes
o Performs storage management activities
o Schedules work for other processors
o Executes all control programs
12
Master/Slave Configuration (2 of 3)
(figure 6.1)
In a master/slave multiprocessing configuration, slave processors can access main memory directly but
they must send all I/O requests through the master processor.
© Cengage Learning 2018
13
Master/Slave Configuration (3 of 3)
• Advantages
o Simplicity
• Disadvantages
o Reliability
• No higher than a single-processor system
o Potentially poor resource usage
o Increased number of interrupts: every slave must interrupt the master to request operating system services
14
Loosely Coupled Configuration (1 of 2)
• Several complete computer systems
o Each with own resources
• Each maintains commands and I/O management tables
• Difference from independent single-processor systems
o Each processor
• Communicates and cooperates with others
• Has global tables
• Several requirements and policies for job scheduling
• Single processor failure
o Others continue work independently
o However, the failure is difficult to detect
15
Loosely Coupled Configuration (2 of 2)
(figure 6.2)
In a loosely coupled multiprocessing configuration, each processor has its own dedicated
resources.
© Cengage Learning 2018
16
Symmetric Configuration (1 of 3)
• Decentralized processor scheduling
o Each processor uses same scheduling algorithm
• Advantages (over loosely coupled configuration)
o More reliable
o Uses resources effectively
o Balances loads well
o Degrades gracefully in failure situation
• Most difficult to implement
o Requires well-synchronized processes
• To avoid races and deadlocks
17
Symmetric Configuration (2 of 3)
(figure 6.3)
A symmetric multiprocessing configuration with homogeneous processors. Processes must be
carefully synchronized to avoid deadlocks and starvation.
18
Symmetric Configuration (3 of 3)
• Interrupt processing
o Update corresponding process list
o Run another process
• More conflicts
o Several processors access same resource at same time
• Requires algorithms to resolve conflicts between processors
19
Process Synchronization Software (1 of 3)
• Successful process synchronization
o Locks up the resource in use
• Protects it from other processes until it is released
o Only when the resource is released
• Is a waiting process allowed to use the resource
• Mistakes in synchronization can result in:
o Starvation
• Leaves a job waiting indefinitely
o Deadlock
• Occurs if a key resource is never released
20
Process Synchronization Software (2 of 3)
• Critical region
o A part of a program in which a shared resource is used
o Must complete execution before other processes can access the same resources
• Processes within a critical region
o Cannot be interleaved
• Interleaving would threaten the integrity of the operation
21
Process Synchronization Software (3 of 3)
• Synchronization
o Implemented as a lock-and-key arrangement
o A process first determines whether the key is available
• If it is, the process obtains the key and puts it in the lock
• This makes the key unavailable to all other processes
• Types of locking mechanisms
o Test-and-set
o WAIT and SIGNAL
o Semaphores
22
Test-and-Set (1 of 2)
• Indivisible machine instruction (TS)
• Executed in single machine cycle
o If key available: set to unavailable
• Actual key
o Single bit in storage location: zero (free) or one (busy)
• Before process enters critical region
o Tests condition code using TS instruction
o No other process in region
• Process proceeds
• Condition code changed from zero to one
• When the process exits, the code is reset to zero, allowing a waiting process to enter
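A minimal sketch of this mechanism in Java, assuming AtomicBoolean.getAndSet stands in for the hardware TS instruction (the class and method names here are illustrative):

import java.util.concurrent.atomic.AtomicBoolean;

class TestAndSetLock {
    // The key: false = zero (free), true = one (busy).
    private final AtomicBoolean busy = new AtomicBoolean(false);

    void enterRegion() {
        // getAndSet is indivisible, like TS: it returns the old value
        // and sets the flag to busy in a single atomic step.
        while (busy.getAndSet(true)) {
            // Busy waiting: spin until the old value was zero (free).
        }
    }

    void leaveRegion() {
        busy.set(false);   // reset the condition code to zero (free)
    }
}

The spin loop in enterRegion is exactly the busy waiting drawback described on the next slide.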
23
Test-and-Set (2 of 2)
• Advantages
o Simple procedure to implement
o Works well for small number of processes
• Drawbacks
o Starvation
• Many processes waiting to enter a critical region
• Processes gain access in arbitrary fashion
o Busy waiting
• Waiting processes remain in unproductive, resource-consuming wait loops
24
WAIT and SIGNAL
• Modification of test-and-set
o Designed to remove busy waiting
• Two new mutually exclusive operations
o WAIT and SIGNAL
o Part of Process Scheduler’s operations
• WAIT
o Activated when process encounters busy condition code
• SIGNAL
o Activated when process exits critical region and condition code set to “free”
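Java's built-in wait() and notify() behave analogously: a thread that encounters a busy condition is blocked and taken off the processor rather than left spinning. A sketch, using a single-resource class of my own invention:

class Resource {
    private boolean busy = false;

    synchronized void acquire() throws InterruptedException {
        while (busy) {
            wait();     // WAIT: the caller is blocked, not busy waiting
        }
        busy = true;
    }

    synchronized void release() {
        busy = false;   // condition code set to "free"
        notify();       // SIGNAL: wake one blocked waiter, if any
    }
}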
25
Semaphores (1 of 5)
• Nonnegative integer variable
o Flag (binary signal)
o Signals if and when resource is free
• Resource can be used by a process
• Two operations of semaphore: introduced by Dijkstra (1965)
o P (proberen means “to test”)
o V (verhogen means “to increment”)
26
Semaphores (2 of 5)
(figure 6.4)
The semaphore used by railroads indicates
whether your train can proceed. When it’s
lowered (a), another train is approaching and
your train must stop to wait for it to pass. If it is
raised (b), your train can continue.
27
Semaphores (3 of 5)
• Let s be a semaphore variable
o V(s): s := s + 1
• Fetch, increment, store sequence
o P(s): If s > 0, then s := s − 1
• Test, fetch, decrement, store sequence
• s = 0 implies busy critical region
o Process calling on P operation must wait until s > 0
• Waiting job of choice processed next
o Depends on process scheduler algorithm
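Java's java.util.concurrent.Semaphore provides these operations directly: acquire() plays the role of P and release() the role of V. A minimal sketch (the class and method names are illustrative):

import java.util.concurrent.Semaphore;

class CriticalRegion {
    static final Semaphore s = new Semaphore(1);   // s = 1: region is free

    static void enter() throws InterruptedException {
        s.acquire();       // P(s): wait until s > 0, then s := s - 1
        try {
            // ... work on the shared resource ...
        } finally {
            s.release();   // V(s): s := s + 1, possibly unblocking a waiter
        }
    }
}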
28
Semaphores (4 of 5)
(table 6.3)
The sequence of states for four processes (P1, P2, P3, P4) calling test and increment (P and V) operations on the
binary semaphore s. (Note: The value of the semaphore before the operation is shown on the line preceding the
operation. The current value is on the same line.)
29
Semaphores (5 of 5)
• P and V operations on semaphore s
o Enforce mutual exclusion concept
• Semaphore called mutex (MUTual EXclusion)
• P(mutex): if mutex > 0 then mutex := mutex − 1
• V(mutex): mutex := mutex + 1
• Critical region
o Ensures parallel processes modify shared data only while in critical region
• Parallel computations
o Mutual exclusion explicitly stated and maintained
30
Process Cooperation
• Several processes work together to complete common task
• Each case requires
o Mutual exclusion and synchronization
• Examples
o Producers and consumers problem
o Readers and writers problem
• Each case implemented using semaphores
31
Producers and Consumers (1 of 3)
• One process produces data
o Another process later consumes data
• Example: CPU and printer buffer
o Delay producer: buffer full
o Delay consumer: buffer empty
o Implemented by two semaphores
• Number of full positions
• Number of empty positions
o Mutex
• Third semaphore: ensures mutual exclusion between processes
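A runnable sketch of this three-semaphore scheme in Java; the buffer size of 8 and the count of 20 items are assumptions chosen for illustration:

import java.util.concurrent.Semaphore;

class BoundedBuffer {
    static final int N = 8;                            // buffer capacity (assumed)
    static final int[] buffer = new int[N];
    static int in = 0, out = 0;

    static final Semaphore empty = new Semaphore(N);   // counts empty positions
    static final Semaphore full  = new Semaphore(0);   // counts full positions
    static final Semaphore mutex = new Semaphore(1);   // mutual exclusion

    public static void main(String[] args) {
        Thread producer = new Thread(() -> {
            try {
                for (int item = 1; item <= 20; item++) {
                    empty.acquire();                   // delay producer: buffer full
                    mutex.acquire();
                    buffer[in] = item;
                    in = (in + 1) % N;
                    mutex.release();
                    full.release();                    // one more full position
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    full.acquire();                    // delay consumer: buffer empty
                    mutex.acquire();
                    int item = buffer[out];
                    out = (out + 1) % N;
                    mutex.release();
                    empty.release();                   // one more empty position
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
    }
}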
32
Producers and Consumers (2 of 3)
(figure 6.6)
Four snapshots of a single buffer in four states: from completely empty at system initialization
(a) to almost full (d).
33
Producers and Consumers (3 of 3)
(figure 6.7)
A typical system with one producer, one consumer, and a
single buffer.
34
Readers and Writers (1 of 2)
• Formulated by Courtois, Heymans, and Parnas (1971)
• Two process types need to access shared resource, e.g., file or database
35
Readers and Writers (2 of 2)
• Example: airline reservation system
o Implemented using two semaphores
• Ensures mutual exclusion between readers and writers
o Resource given to all readers
• Provided no writers are processing (W2 = 0)
o Resource given to a writer
• Provided no readers are reading (R2 = 0) and no writers writing (W2 = 0)
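A sketch of the classic two-semaphore, readers-preference solution in Java; the readers counter plays roughly the role of the R2 count above, and a writer holding resource corresponds to W2 = 1 (all names are illustrative):

import java.util.concurrent.Semaphore;

class ReadersWriters {
    static final Semaphore resource   = new Semaphore(1);  // exclusive access
    static final Semaphore countMutex = new Semaphore(1);  // guards the counter
    static int readers = 0;

    static void read() throws InterruptedException {
        countMutex.acquire();
        if (++readers == 1) resource.acquire();  // first reader locks out writers
        countMutex.release();
        // ... read the shared data (no writer can be active here) ...
        countMutex.acquire();
        if (--readers == 0) resource.release();  // last reader lets writers in
        countMutex.release();
    }

    static void write() throws InterruptedException {
        resource.acquire();   // proceeds only when no readers and no writers
        // ... update the shared data ...
        resource.release();
    }
}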
36
Concurrent Programming (1 of 2)
• Another type of multiprocessing
• Concurrent processing system
o One job uses several processors
• Executes sets of instructions in parallel
o Requires programming language and computer system support
• Two broad categories of parallel systems
o Data level parallelism (DLP)
o Instruction (or task) level parallelism (ILP)
37
Concurrent Programming (2 of 2)
(table 6.4)
The four classifications of Flynn's Taxonomy for machine structures.

Number of data streams | Single Instruction | Multiple Instruction
Single data | SISD | MISD
Multiple data | SIMD | MIMD
38
Amdahl’s Law
(figure 6.8)
Amdahl’s Law. Notice that all four graphs level off and there is no speed difference even though the number of processors increased from 2,048 to 65,536 (Amdahl, 1967).
Photo Source: Wikipedia, among many others: http://vi.wikipedia.org/wiki/T%E1%BA%ADp_tin:AmdahlsLaw.png
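The leveling-off in the graphs follows directly from the formula. If P is the fraction of a program that can be parallelized and n is the number of processors, the predicted speedup is

S(n) = 1 / ((1 − P) + P / n)

As n grows, the P / n term vanishes and S(n) approaches the ceiling 1 / (1 − P); with P = 0.95, for example, no number of processors can deliver more than a 20-fold speedup, which is why the curves flatten long before 65,536 processors.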
39
Order of Operations (1 of 2)
• Precedence of operations or rules of precedence
• Solving an equation: all arithmetic calculations are performed from the left, in the following order:
o Perform all calculations in parentheses
o Calculate all exponents
o Perform all multiplications and divisions: resolved from the left
o Perform all additions and subtractions: resolved from the left
40
Order of Operations (2 of 2)
• Z = 10 − A / B + C(D + E) ** (F − G)
(table 6.5)
The sequential computation of the
expression requires several steps. In this
example, there are six steps, but each step,
such as the last one, may involve more than
one machine operation.
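One plausible six-step sequence, using illustrative temporaries T1 through T5 (the textbook's table may group the operations slightly differently):

1. T1 = D + E (parentheses)
2. T2 = F − G (parentheses)
3. T3 = T1 ** T2 (exponent)
4. T4 = A / B (division, resolved from the left)
5. T5 = C * T3 (multiplication)
6. Z = 10 − T4 + T5 (subtraction and addition, resolved from the left)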
41
Applications of Concurrent Programming (1 of 2)
(figure 6.9)
Using three CPUs and the COBEGIN
command, this six-step equation can be
resolved in these three steps.
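A plausible decomposition into the three steps, again with illustrative temporaries (figure 6.9 in the text shows the authoritative assignment):

• Step 1 (three CPUs in parallel): T1 = A / B, T2 = D + E, T3 = F − G
• Step 2 (two CPUs in parallel): T4 = 10 − T1, T5 = T2 ** T3
• Step 3 (one CPU): Z = T4 + C * T5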
42
Applications of Concurrent Programming (2 of 2)
• Reducing complexity via concurrent programming
o Case 1: array operations
o Case 2: matrix multiplication
o Case 3: sorting and merging files
o Case 4: data mining
43
Threads and Concurrent Programming (1 of 2)
• Threads: lightweight processes
o A smaller unit within a process that can be scheduled and executed
o Minimizes the overhead of swapping an entire process between main memory and secondary storage
• Each active thread has its own
o Processor registers, program counter, stack, and status
• Shares the data area and resources allocated to its process
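A minimal Java sketch of these ideas: each thread gets its own program counter and stack from the runtime, while all threads update the same shared variable in the process's data area (the class name is illustrative):

class SharedCounter {
    static int shared = 0;                    // data area shared by all threads
    static final Object lock = new Object();  // guards the shared variable

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < 4; i++) {
            workers[i] = new Thread(() -> {
                // Each thread runs this with its own registers and stack.
                synchronized (lock) { shared++; }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("shared = " + shared);  // prints: shared = 4
    }
}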
44
Threads and Concurrent Programming (2 of 2)
• Web server: improved performance and interactivity with threads
o Requests for images or pages: each served with a different thread
o After thread’s task completed: thread returned to pool for assignment to another
task
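A sketch of that pool idea using Java's ExecutorService; the pool size of 4 and the 10 requests are assumptions for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class RequestPool {
    public static void main(String[] args) {
        // Four worker threads serve requests; a thread that finishes its
        // task returns to the pool and is assigned the next request.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int request = 1; request <= 10; request++) {
            final int id = request;
            pool.submit(() -> System.out.println(
                "request " + id + " served by " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}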
45
Two Concurrent Programming Languages
• Ada Language
o First language providing specific concurrency commands
o Developed in the late 1970s
o Ada 2012: International Standards Organization (ISO) standard replacing Ada 2005
• Java
o Designed as universal Internet application software platform
o Allows programmers to code applications that can run on any computer
46
Java
• Developed at Sun Microsystems, Inc. (1995)
• Solves several issues
o High software development costs for different incompatible computer
architectures
o Distributed client-server environment needs
o Internet and World Wide Web growth
• Uses compiler and interpreter
o Easy to distribute Java applications
47
The Java Platform (1 of 2)
• Software-only platform
o Runs on top of other hardware-based platforms
• Two components
o Java Virtual Machine (Java VM)
• Foundation for Java platform
• Contains the interpreter
• Runs compiled bytecodes
o Java application programming interface (Java API)
• Collection of software modules
• Grouped into libraries by classes and interfaces
48
The Java Platform (2 of 2)
(figure 6.12)
A process used by the Java platform to shield a Java program from a computer’s hardware.
49
The Java Language Environment (1 of 2)
• Designed for experienced programmers (similar to C++)
• Object oriented
o Exploits modern software development methods
• Fits into distributed client-server applications
• Memory allocation features
o Done at run time
o References memory via symbolic “handles”
o Translated to real memory addresses at run time
o Not visible to programmers
50
The Java Language Environment (2 of 2)
• Security
o Built-in feature
• Language and run-time system
o Checking
• Compile-time and run-time
• Sophisticated synchronization capabilities
o Multithreading at language level
• Popular features
o Single program runs on various platforms; robust feature set; Internet and Web
integration
51
Conclusion (1 of 3)
• Multiprocessing
o Single-processor systems
• Interacting processes obtain control of the CPU at different times
o Systems with two or more CPUs
• Control synchronized by the Processor Manager
• Processor communication and cooperation
o System configuration
• Master/slave, loosely coupled, and symmetric
52
Conclusion (2 of 3)
• Multiprocessing: several configurations
o Single-processor with interacting processes
o Multiple processors: synchronized by the Processor Manager
• Mutual exclusion
o Prevents deadlock
o Maintained with test-and-set, WAIT and SIGNAL, and semaphores (P, V, and
mutex)
• Synchronize processes using hardware and software mechanisms
53
Conclusion (3 of 3)
• Avoid typical problems of synchronization
o Missed waiting customers
o Synchronization of producers and consumers
o Mutual exclusion of readers and writers
• Concurrent processing innovations
o Threads and multi-core processors
54