Introduction To Operating Systems Continued
Dr. Mesut ÜNLÜ
Computer System Organization
▪ Computer-system Organization
o One or more CPUs and device controllers connect through a common bus providing
access to shared memory
o Concurrent execution of CPUs and devices competing for memory cycles
Computer System Operation
o I/O devices and the CPU can execute concurrently
o Each device controller is in charge of a particular device type
o Each device controller has a local buffer
o CPU moves data from/to main memory to/from local buffers
o I/O is from the device to the local buffer of the controller
o Device controller informs CPU that it has finished its operation by causing an
interrupt
Computer System Operation
▪ Interrupt
▪ An interrupt is a signal from a device attached to a computer or from a program
within the computer that requires the operating system to stop and figure out
what to do next.
▪ While the CPU is executing a program, a device that needs attention (for
example, when an I/O operation completes) raises an interrupt; the CPU
suspends its current work, services the request, and then resumes.
▪ In hardware, one of the bus control lines is dedicated to this purpose and is
called the interrupt-request line. The code executed in response to an
interrupt is the Interrupt Service Routine (ISR).
Common Functions of Interrupts
▪ The operating system preserves the state of the CPU by storing registers and the
program counter
▪ Determines which type of interrupt has occurred:
o polling
o vectored interrupt system
▪ Separate segments of code determine what action should be taken for each type
of interrupt
Interrupt Handling
When a device raises an interrupt, the processor first completes the execution of the current
instruction. Then it loads the Program Counter (PC) with the address of the first instruction of the ISR.
Before loading the program counter with that address, the address of the interrupted instruction is saved to a
temporary location, so that after handling the interrupt the processor can continue with the interrupted process.
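The save/jump/restore sequence above can be sketched in a few lines. This is a minimal illustration with invented names (`CPU`, `handle_interrupt`), not real hardware behavior:

```python
# Sketch of interrupt handling: the interrupted PC is saved to a temporary
# location, the PC is loaded with the ISR's address, and after the ISR runs
# the saved PC is restored so the interrupted process continues.

class CPU:
    def __init__(self):
        self.pc = 0            # program counter
        self.saved_pc = None   # temporary location for the interrupted PC

    def handle_interrupt(self, isr_address, isr):
        self.saved_pc = self.pc   # save address of interrupted instruction
        self.pc = isr_address     # jump to first instruction of the ISR
        isr()                     # run the interrupt service routine
        self.pc = self.saved_pc   # resume the interrupted process

cpu = CPU()
cpu.pc = 100                      # executing user code at address 100
cpu.handle_interrupt(0x20, lambda: None)
print(cpu.pc)                     # → 100: execution resumes where it left off
```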
Interrupt Timeline (figure)
Storage Definitions and Notation
The basic unit of computer storage is the bit. A bit can contain one of two values, 0 and 1. All other storage in a computer is based
on collections of bits. Bits can represent numbers, letters, images, movies, sounds, documents, and programs, to name a few. A
byte is 8 bits, and on most computers, it is the smallest convenient chunk of storage.
A less common term is word, which is a given computer architecture’s native unit of data. A word is made up of one or more bytes.
For example, a computer that has 64-bit registers and 64-bit memory addressing typically has 64-bit (8-byte) words. A computer
executes many operations in its native word size rather than a byte at a time.
Computer storage, along with most computer throughput, is generally measured and manipulated in bytes and collections of bytes.
o a kilobyte, or KB, is 1,024 bytes
o a megabyte, or MB, is 1,024² bytes
o a gigabyte, or GB, is 1,024³ bytes
o a terabyte, or TB, is 1,024⁴ bytes
o a petabyte, or PB, is 1,024⁵ bytes
Computer manufacturers often round off these numbers and say that a megabyte is 1 million bytes and a gigabyte is 1 billion bytes.
Networking measurements are an exception to this general rule; they are given in bits (because networks move data a bit at a
time).
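The power-of-two units above can be computed explicitly; the exact values show why the manufacturers' "1 million / 1 billion" figures are only a rounding:

```python
# Storage units as powers of 1,024, computed explicitly.
KB = 1024 ** 1
MB = 1024 ** 2
GB = 1024 ** 3
TB = 1024 ** 4
PB = 1024 ** 5

print(MB)   # 1,048,576 — close to, but not exactly, 1 million
print(GB)   # 1,073,741,824 — close to, but not exactly, 1 billion
```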
Storage Structure
▪ Main memory – the only large storage medium that the CPU can access directly
o Random access
o Typically volatile
▪ Secondary storage – an extension of main memory that provides large nonvolatile storage
capacity
▪ Hard disks – rigid metal or glass platters covered with magnetic recording material
o Disk surface is logically divided into tracks, which are subdivided into sectors
o The disk controller determines the logical interaction between the device and the
computer
▪ Solid-state disks – faster than hard disks, nonvolatile
o Various technologies
o Becoming more popular
Storage Hierarchy
Storage-device Hierarchy (figure)
Storage Hierarchy
▪ Caching
▪ Cache is a type of memory that is used to increase the speed of data access. Normally, the data
required for any process resides in the main memory. However, it is transferred to the cache
memory temporarily if it is used frequently enough. The process of storing and accessing data
from a cache is known as caching.
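A toy sketch of the idea, with invented data: look in the fast cache first; on a miss, fetch from the slower backing store and keep a temporary copy:

```python
# Caching in miniature: a small fast dict in front of a slower "main memory".
main_memory = {"x": 10, "y": 20}   # slow backing store (invented data)
cache = {}                          # small, fast storage

def read(addr):
    if addr in cache:               # cache hit: use the copy directly
        return cache[addr]
    value = main_memory[addr]       # cache miss: go to main memory
    cache[addr] = value             # keep a temporary copy for next time
    return value

read("x")                           # first access: miss, fetched from memory
print(read("x"))                    # → 10, second access served from the cache
```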
Computer-System Architecture
▪ A single-processor System
▪ A single-processor system contains only one
processor, so only one process can be
executed at a time; the next process to run
is selected from the ready queue.
▪ The core is the component that executes
instructions and contains registers for
storing data locally.
▪ The one main CPU with its core is capable of
executing a general-purpose instruction set,
including instructions from processes.
Computer-System Architecture
▪ Multiprocessor Systems
▪ Multiprocessor systems are growing in use and importance
▪ Also known as parallel systems, tightly-coupled systems
▪ Advantages include:
o Increased throughput
o Economy of scale
o Increased reliability – graceful degradation or fault tolerance
▪ Two types:
o Asymmetric Multiprocessing – each processor is assigned a specific task.
o Symmetric Multiprocessing – each processor performs all tasks
Computer-System Architecture
Figures: symmetric multiprocessing architecture; a dual-core design with two cores on the same chip.
Computer-System Architecture
▪ Clustered System
▪ Clustered systems are similar to parallel systems in that both have multiple CPUs.
However, a major difference is that clustered systems are composed of two or more
individual computer systems joined together. Basically, they are independent
computer systems with common storage, and the systems work together.
Computer-System Architecture
▪ Clustered System
▪ Like multiprocessor systems, but multiple systems working together
o Usually sharing storage via a storage-area network (SAN)
o Provides a high-availability service which survives failures
• Asymmetric clustering has one machine in hot-standby mode
• Symmetric clustering has multiple nodes running applications, monitoring each
other
o Some clusters are for high-performance computing (HPC)
• Applications must be written to use parallelization
o Some have distributed lock manager (DLM) to avoid conflicting operations
Operating System Operations
▪ Multiprogramming
▪ Multiprogramming (Batch system) needed for efficiency
▪ Single user cannot keep CPU and I/O devices busy at all times
▪ Multiprogramming organizes jobs (code and data) so CPU always has one to
execute
▪ A subset of total jobs in system is kept in memory
▪ One job selected and run via job scheduling
▪ When it has to wait (for I/O for example), OS switches to another job
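The switch-on-wait behavior above can be sketched as a tiny simulation. Job names and steps here are invented for illustration; this is not a real scheduler:

```python
# Multiprogramming in miniature: when the running job blocks for I/O, the OS
# switches the CPU to another job from the ready queue instead of idling.

jobs = [["run", "io", "run"], ["run", "run"]]   # each job is a list of steps
trace = []                                       # (job index, step) history

ready = list(range(len(jobs)))      # indices of jobs with work left
while ready:
    job = ready.pop(0)              # job scheduling: pick the next job
    while jobs[job]:
        step = jobs[job].pop(0)
        trace.append((job, step))
        if step == "io":            # job must wait: put it back, switch jobs
            ready.append(job)
            break

print(trace)    # job 0 blocks on I/O, job 1 runs meanwhile, job 0 finishes
```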
Operating System Operations
▪ Multiprogramming
▪ There must be enough memory to hold the OS (resident monitor) and one user program
▪ Memory is expanded to hold three, four, or more programs, and the CPU switches among all of them
Figure 2.5 Multiprogramming Example: (a) uniprogramming; (b) multiprogramming with two programs (each program alternates Run and Wait over time).
Operating System Operations
▪ Multitasking
▪ Timesharing (multitasking) is logical extension in which CPU switches jobs so
frequently that users can interact with each job while it is running, creating
interactive computing
▪ Response time should be < 1 second
▪ Each user has at least one program executing in memory, called a process
▪ If several jobs are ready to run at the same time, the OS performs CPU scheduling
▪ If processes don’t fit in memory, swapping moves them in and out to run
▪ Virtual memory allows execution of processes not completely in memory
Operating System Operations
▪ Multitasking
▪ Multitasking is a logical extension of multiprogramming.
▪ Multitasking is the ability of an OS to execute more
than one task seemingly simultaneously on a single
CPU. These multiple tasks share common resources
(like the CPU and memory).
▪ In multitasking systems, the CPU executes multiple
jobs by switching among them, typically using a small
time quantum; the switches occur so quickly that
users feel as if they are interacting with every
executing task at the same time.
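The time-quantum switching described above can be sketched as a round-robin loop. Task names and lengths are invented for illustration:

```python
# Time sharing in miniature: each task runs for at most one small time
# quantum, then the CPU switches to the next task, so the tasks interleave
# instead of running to completion one after another.
from collections import deque

QUANTUM = 2
tasks = deque([("A", 5), ("B", 3)])   # (name, remaining time units)
schedule = []                          # order in which slices actually ran

while tasks:
    name, remaining = tasks.popleft()
    slice_ = min(QUANTUM, remaining)   # run for at most one quantum
    schedule.append((name, slice_))
    remaining -= slice_
    if remaining:                      # context switch: task goes to the back
        tasks.append((name, remaining))

print(schedule)   # A and B alternate: the interleaving looks simultaneous
```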
Operating System Operations
Multiprogramming vs. Multitasking:
o Multiprogramming uses the concept of context switching; multitasking uses context switching together with time sharing.
o In a multiprogrammed system, the operating system simply switches to, and executes, another job when the current job needs to wait; in a multitasking system, the processor runs in time-sharing mode, and switching happens when the allotted time expires or when the current process must wait (for example, for I/O).
o Multiprogramming increases CPU utilization by organizing jobs; multitasking also increases CPU utilization and additionally increases responsiveness.
o The idea of multiprogramming is to reduce CPU idle time as much as possible; multitasking extends this by improving responsiveness through time sharing.
o Multiprogramming uses job-scheduling algorithms so that more than one program can be resident at a time; multitasking uses a time-sharing mechanism so that multiple tasks appear to run at the same time.
o Under multiprogramming, execution of a process takes more time; under multitasking, execution of a process takes less time.
▪ Timer
▪ Timer to prevent infinite loop / process hogging resources
o A timer can be set to interrupt the computer after a specified period. The
period may be fixed (for example, 1/60 second) or variable (for example,
from 1 millisecond to 1 second).
o A variable timer is generally implemented by a fixed-rate clock and a counter.
o The operating system sets the counter. Every time the clock ticks, the counter
is decremented. When the counter reaches 0, an interrupt occurs.
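The fixed-rate clock and counter mechanism above, sketched with invented values:

```python
# A variable timer: the OS sets a counter, each fixed-rate clock tick
# decrements it, and when it reaches 0 a timer interrupt is raised.

def run_timer(counter, on_interrupt):
    ticks = 0
    while counter > 0:
        ticks += 1          # one fixed-rate clock tick
        counter -= 1        # each tick decrements the counter
    on_interrupt()          # counter reached 0: raise the timer interrupt
    return ticks

fired = []
ticks = run_timer(3, lambda: fired.append("interrupt"))
print(ticks, fired)         # → 3 ['interrupt']
```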
Resource Management
▪ Process Management
▪ A process is a program in execution. It is a unit of work within the system.
A program is a passive entity; a process is an active entity.
o Process needs resources to accomplish its task
o CPU, memory, I/O, files
o Initialization data
Resource Management
▪ Memory Management
▪ The main memory is central to the operation of a modern computer system. Main
memory is a large array of bytes, ranging in size from hundreds of thousands to
billions of bytes. Each byte has its own address.
o To execute a program all (or part) of the instructions must be in memory
o All (or part) of the data that is needed by the program must be in memory.
o Memory management determines what is in memory and when
o Optimizing CPU utilization and computer response to users
Resource Management
▪ File-System Management
▪ OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by a device (e.g., disk drive, tape drive)
• Varying properties include access speed, capacity, data-transfer rate, access
method (sequential or random)
▪ Files usually organized into directories
▪ Access control on most systems to determine who can access what
Resource Management
▪ File-System Management
▪ The operating system is responsible for the following activities in connection with
file management:
o Creating and deleting files
o Creating and deleting directories to organize files
o Supporting primitives for manipulating files and directories
o Mapping files onto mass storage
o Backing up files on stable (nonvolatile) storage media
Resource Management
▪ Mass-Storage Management
▪ The operating system is responsible for the following activities in connection with
secondary storage management:
o Mounting and unmounting
o Free-space management
o Storage allocation
o Disk scheduling
o Partitioning
o Protection
Resource Management
▪ Cache Management
▪ Caching is an important principle of computer systems.
▪ Information is normally kept in some storage system (such as main memory).
▪ As it is used, it is copied into a faster storage system—the cache—on a
temporary basis. When we need a particular piece of information, we first check
whether it is in the cache. If it is, we use the information directly from the cache.
Resource Management
▪ I/O Management
▪ One of the purposes of an operating system is to hide the peculiarities of specific
hardware devices from the user. For example, in UNIX, the peculiarities of I/O
devices are hidden from the bulk of the operating system itself by the I/O
subsystem. The I/O subsystem consists of several components:
o A memory-management component that includes buffering, caching, and spooling
o A general device-driver interface
o Drivers for specific hardware devices
▪ Only the device driver knows the peculiarities of the specific device to which it is
assigned.
Security and Protection
▪ Systems generally first distinguish among users, to determine who can do what
o User identities (user IDs, security IDs) include name and associated number,
one per user
o User ID then associated with all files, processes of that user to determine
access control
o Group identifier (group ID) allows set of users to be defined and controls
managed, then also associated with each process, file
o Privilege escalation allows user to change to effective ID with more rights
Virtualization
A computer running (a) a single operating system and (b) three virtual machines.
Kernel Data Structures
▪ Linked lists accommodate items of varying sizes and allow easy insertion and
deletion of items
Kernel Data Structures
▪ Stack
▪ A stack is a sequentially ordered data structure that uses the last in, first out
(LIFO) principle for adding and removing items, meaning that the last item
placed onto a stack is the first item removed. The operations for inserting and
removing items from a stack are known as push and pop, respectively.
▪ An operating system often uses a stack when invoking function calls. Parameters,
local variables, and the return address are pushed onto the stack when a function
is called; returning from the function call pops those items off the stack.
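The push/pop behavior above, using a Python list as the stack (names like "return address" are illustrative, mimicking a function-call stack):

```python
# A LIFO stack: the last item pushed is the first item popped.
stack = []
stack.append("return address")   # push caller's return address
stack.append("param")            # push a parameter
stack.append("local var")        # push a local variable

print(stack.pop())               # → local var  (last in, first out)
print(stack.pop())               # → param
print(stack.pop())               # → return address
```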
Kernel Data Structures
▪ Tree
▪ A tree is a data structure that can be used to represent data hierarchically.
▪ Data values in a tree structure are linked through parent–child
relationships. In a general tree, a parent may have an unlimited number of
children.
▪ In a binary tree, a parent may have at most two children, which we term the
left child and the right child.
▪ A binary search tree additionally requires an ordering between the parent’s two
children in which left child <= right child.
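A minimal binary search tree following the ordering rule above (left child <= parent < right child); the key values are invented for illustration:

```python
# Binary search tree: smaller-or-equal keys go left, larger keys go right,
# so a search discards half the remaining tree at each step.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key <= root.key:                      # left child <= parent
        root.left = insert(root.left, key)
    else:                                    # right child > parent
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root and root.key != key:          # walk down, choosing a side
        root = root.left if key <= root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 6]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))      # → True False
```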
Kernel Data Structures
▪ Hash Function
▪ A hash function takes data as its input, performs a numeric operation on the
data, and returns a numeric value.
▪ This numeric value can then be used as an index into a table (typically an array)
to quickly retrieve the data. Whereas searching for a data item through a list of
size n can require up to O(n) comparisons, using a hash function for retrieving
data from a table can be as good as O(1), depending on implementation details.
▪ Because of this performance, hash functions are used extensively in operating
systems.
Kernel Data Structures
▪ One use of a hash function is to implement a hash map, which associates (or
maps) [key:value] pairs using a hash function. Once the mapping is established,
we can apply the hash function to the key to obtain the value from the hash
map.
▪ For example, suppose that a username is mapped to a password. Password
authentication then proceeds as follows:
o A user enters his/her username and password.
o The hash function is applied to the username to retrieve the mapped password.
o The retrieved password is then compared with the password entered by the user for
authentication.
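The username-to-password lookup described above, sketched with a tiny hash table. The hash function and the stored credentials are invented for illustration (real systems store password hashes, not plaintext):

```python
# A hash map: apply a hash function to the key to pick a bucket, giving
# near O(1) retrieval instead of an O(n) scan of a list.

TABLE_SIZE = 8
table = [[] for _ in range(TABLE_SIZE)]            # buckets handle collisions

def h(key):
    return sum(ord(c) for c in key) % TABLE_SIZE   # toy hash function

def put(key, value):
    table[h(key)].append((key, value))

def get(key):
    for k, v in table[h(key)]:                     # search only one bucket
        if k == key:
            return v
    return None

put("alice", "secret1")                            # map username → password
entered = "secret1"                                # password the user typed
print(get("alice") == entered)                     # → True: login succeeds
```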
Kernel Data Structures
Hash map (figure)
Kernel Data Structures
▪ Bitmaps
▪ A bitmap is a string of n binary digits that can be used to represent the status of
n items. For example, suppose we have several resources, and the availability of
each resource is indicated by the value of a binary digit: 0 means that the
resource is available, while 1 indicates that it is unavailable (or vice
versa).
001011101
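The bitmap above as code: bit i gives the status of resource i (here, 0 means available and 1 means unavailable); the helper name is invented for illustration:

```python
# Scan a bitmap for the first available resource (a 0 bit).
bitmap = "001011101"

def first_available(bits):
    for i, b in enumerate(bits):
        if b == "0":                 # 0 means the resource is available
            return i
    return -1                        # every resource is in use

print(first_available(bitmap))       # → 0 (resource 0 is free)
```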
Computing Environments
▪ Traditional Computing
▪ Stand-alone general-purpose machines
▪ But blurred as most systems interconnect with others (i.e., the Internet)
▪ Portals provide web access to internal systems
▪ Network computers (thin clients) are like Web terminals
▪ Mobile computers interconnect via wireless networks
▪ Networking becoming ubiquitous – even home systems use firewalls to protect
home computers from Internet attacks
Computing Environments
▪ Mobile Computing
▪ Mobile computing refers to computing on handheld smartphones and tablet
computers.
Computing Environments
▪ Client–Server Computing
▪ Contemporary network architecture features arrangements in which server
systems satisfy requests generated by client systems. This form of specialized
distributed system, called a client–server system, has the following general structure:
o Dumb terminals supplanted by smart PCs
o Many systems now servers, responding to requests generated by clients
o Compute-server system provides an interface to client to request services
(i.e., database)
o File-server system provides interface for clients to store and retrieve files
Computing Environments
▪ Peer-to-Peer Computing
▪ Another structure for a distributed system is the peer-to-peer (P2P) system
model.
▪ In this model, clients and servers are not distinguished from one another.
Instead, all nodes within the system are considered peers, and each may act as
either a client or a server, depending on whether it is requesting or providing a
service.
▪ Examples include Napster and Gnutella, as well as Voice over IP (VoIP) applications such as Skype
Computing Environments
▪ Cloud Computing
▪ Cloud computing is a type of
computing that delivers computing,
storage, and even applications as a
service across a network.
▪ In some ways, it’s a logical extension
of virtualization, because it uses
virtualization as a base for its
functionality.
Computing Environments
▪ Cloud Computing
▪ Many types
o Public cloud – available via Internet to anyone willing to pay
o Private cloud – run by a company for the company’s own use
o Hybrid cloud – includes both public and private cloud components
o Software as a Service (SaaS) – one or more applications available via the Internet (i.e.,
word processor)
o Platform as a Service (PaaS) – software stack ready for application use via the Internet
(i.e., a database server)
o Infrastructure as a Service (IaaS) – servers or storage available over the Internet (i.e.,
storage available for backup use)