The document discusses the components and functions of an operating system. It describes an operating system as a program that controls the computer hardware and allows other programs to run. The main goals of an operating system are to control and coordinate the use of hardware among users and applications, manage system resources efficiently, and provide a convenient interface for users. It covers the roles and components of an operating system including process management, memory management, storage management, I/O management, and protection.

Chapter One OS

Computer System Structure


 A computer system can be divided into four components:
 Hardware – provides basic computing resources
CPU, memory, I/O devices
 Operating system
Controls and coordinates use of hardware among various applications and users
 Application programs – define the ways in which the system resources are used to solve the computing problems of the users
Word processors, web browsers, database systems, video games
 Users
People, machines, other computers
 A computer system consists of
 hardware
 system programs
 application programs

What is an Operating System?


 A program that acts as an intermediary between a user of a computer and the computer hardware.
 Controls execution of programs to prevent errors and improper use of the computer
“The one program running at all times on the computer” is the kernel
 The OS is a resource allocator
 Allows multiple programs to run at the same time
 Manages and protects memory, I/O devices, and other resources
 Includes multiplexing (sharing) resources in two different ways:
In time
In space
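The two sharing modes above can be illustrated with a minimal, hypothetical sketch (the function names and sizes are assumptions for illustration): time multiplexing gives each program a turn on one resource, while space multiplexing gives each program its own portion of the resource.

```python
# Hypothetical sketch of the two ways an OS multiplexes a resource.

def time_multiplex(jobs, rounds):
    """Share one CPU in time: each job runs for one slice per round."""
    schedule = []
    for _ in range(rounds):
        for job in jobs:
            schedule.append(job)  # each job gets a turn on the CPU
    return schedule

def space_multiplex(memory_size, jobs):
    """Share memory in space: each job gets its own partition."""
    share = memory_size // len(jobs)
    return {job: share for job in jobs}

print(time_multiplex(["A", "B"], 2))      # ['A', 'B', 'A', 'B']
print(space_multiplex(1024, ["A", "B"]))  # {'A': 512, 'B': 512}
```

Time multiplexing suits the CPU (one user at a time, rapidly alternated); space multiplexing suits memory and disk (many users hold portions at once).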

 Controls resources among:
Users; programs; processors
Control means: sharing/multiplexing, monitoring, protection, reporting/payment
Why an Operating System?
 Convenience
Makes the computer more convenient to use
 Efficiency
Allows computer system resources to be used in an efficient manner
 The two views of an operating system
 Top-down view:
An extended machine (this view concerns the user)
Hides the details which must be performed
Presents the user with a virtual machine that is easier to use
 Bottom-up view:
A resource manager (this view concerns the resources)
Each program gets time with the resource
Each program gets space on the resource
Role of the Operating System
 Program execution
 The OS loads programs and data into memory, initializes I/O devices and files, and schedules the execution of programs
 Access to I/O devices
 The OS hides I/O device details from applications (direct I/O access is forbidden) and offers a simplified I/O interface
 Controlled access to files
 The OS organizes data into files and controls access to the files (create, delete, read, write)
 Communications
 The OS allows exchange of information between processes, which are possibly executing on different computers
 Error detection and response
 The OS properly handles hardware failures and software errors with the least impact on running applications (ending, retrying, or reporting)
 Computer-system operation
 One or more CPUs and device controllers connect through a common bus providing access to shared memory
 Concurrent execution of CPUs and devices competing for memory cycles

Hardware Elements
 Processor
 Main memory
 volatile
 I/O modules
 secondary memory devices
 communications equipment
 terminals
 System bus
 communication among processors, memory, and I/O modules

Storage Structure
 Main memory – the only large storage medium that the CPU can access directly.
 Secondary storage – extension of main memory that provides large nonvolatile storage capacity.
 Magnetic disks
 Disk surface is logically divided into tracks, which are subdivided into sectors.
Storage Hierarchy
 Storage systems are organized in a hierarchy by:
 Speed
 Cost
 Volatility
 Going down the hierarchy:
 Decreasing cost per bit
 Increasing capacity
 Increasing access time
 Decreasing frequency of access of the memory by the processor
 Caching – copying information from slower into faster storage systems; main memory can be viewed as a cache for secondary storage.
 Faster storage (cache) is checked first for the information
 If it is there, the information is used directly from the cache (fast)
 If not, the data is copied to the cache and used there
 The cache is smaller than the storage being cached
 A portion of main memory can be used as a buffer to hold data for the disk
 Some data written out may be referenced again; it is then retrieved rapidly from this software cache instead of slowly from disk
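The check-the-cache-first rule above can be sketched in a few lines. This is a hypothetical illustration (the dictionary names are assumptions, not a real OS API): a small fast store sits in front of a slow one, and misses populate the cache.

```python
# Hypothetical "check the faster storage first" caching sketch.
slow_storage = {"block0": "data0", "block1": "data1"}  # e.g. disk
cache = {}                                             # faster, smaller

def read(key):
    if key in cache:               # faster storage checked first
        return cache[key], "hit"
    value = slow_storage[key]      # miss: fetch from slower storage
    cache[key] = value             # copy into the cache for reuse
    return value, "miss"

print(read("block0"))  # first access: a miss, copied into the cache
print(read("block0"))  # second access: served from the cache
```

The same pattern repeats at every level of the hierarchy: registers cache main memory, main memory caches disk.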
I/O Operation (Synchronous)
 After I/O starts, control returns to the user program only upon I/O completion
 A wait instruction idles the CPU until the next interrupt
 At most one I/O request is outstanding at a time; no simultaneous I/O processing
Direct Memory Access (DMA) Structure
 I/O exchanges occur directly with memory
 The processor grants the I/O module authority to read from or write to memory
 The processor is free to do other things
 The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
 Only one interrupt is generated per block, rather than one interrupt per byte.
Computer-System Operation
 I/O devices and the CPU can execute concurrently.
 Each device controller is in charge of a particular device type.
 Each device controller has a local buffer.
 The CPU moves data from/to main memory to/from the local buffers.
 I/O is from the device to the local buffer of the controller.
 The device controller informs the CPU that it has finished its operation by causing an interrupt.
Instruction Cycle
 The processor fetches the instruction from memory
 The program counter (PC) holds the address of the instruction to be fetched next
 The program counter is incremented after each fetch
 The fetched instruction is placed in the instruction register
 Types of instructions:
 transfer data between processor and memory
 transfer data to or from a peripheral device
 arithmetic or logic operation on data
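The fetch–increment–execute cycle above can be made concrete with a toy machine. This is a hypothetical sketch (the instruction names LOAD/ADD/PRINT and the single accumulator are assumptions for illustration, not a real ISA):

```python
# Hypothetical miniature instruction cycle: fetch, increment PC, execute.

def run(program):
    pc = 0          # program counter: address of the next instruction
    acc = 0         # a single accumulator register
    while pc < len(program):
        instr = program[pc]   # fetch into the "instruction register"
        pc += 1               # PC is incremented after each fetch
        op, arg = instr
        if op == "LOAD":      # transfer data between processor and memory
            acc = arg
        elif op == "ADD":     # arithmetic operation on data
            acc += arg
        elif op == "PRINT":   # transfer data to a peripheral device
            print(acc)
    return acc

run([("LOAD", 2), ("ADD", 3), ("PRINT", 0)])  # prints 5
```

Note how the PC is bumped before the instruction executes: this is why an interrupt handler that saves the PC resumes at the *next* instruction.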

Interrupts
 Most I/O devices are slower than the processor (e.g., writing to a line printer)
 The processor must pause to wait for the device
 Interrupts allow the processor to execute other instructions while an I/O operation is in progress
 The process being interrupted must be suspended in such a way that it can be resumed once the operation is complete
 This improves processing efficiency
Interrupt Handling
 An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
 The interrupt architecture must save the address of the interrupted instruction.
 The operating system preserves the state of the CPU by storing registers and the program counter.
Types of Interrupt
 Software interrupt (trap), e.g.:
arithmetic overflow
division by zero
executing an illegal instruction
referencing outside the user’s memory space
 Hardware interrupt
 When a hardware device needs the processor's attention, it sends an electrical signal (hardware interrupt).
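The interrupt vector described above is just a table from interrupt numbers to handler addresses. A hypothetical sketch (handler names and interrupt numbers are made up for illustration):

```python
# Hypothetical interrupt vector: a table mapping interrupt numbers to
# the addresses (here, Python functions) of their service routines.

def divide_error_handler():
    return "handled division by zero"

def keyboard_handler():
    return "handled keyboard input"

interrupt_vector = {
    0: divide_error_handler,   # a software interrupt (trap)
    1: keyboard_handler,       # a hardware interrupt
}

def dispatch(interrupt_number, saved_pc):
    # The architecture saves the interrupted instruction's address,
    # then transfers control through the vector to the right routine.
    handler = interrupt_vector[interrupt_number]
    result = handler()
    return result, saved_pc    # resume at the saved address afterwards

print(dispatch(0, saved_pc=0x42))
```

Real hardware indexes a fixed memory region by interrupt number; the dictionary lookup here plays that role.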
I/O Operation (Asynchronous)
 After I/O starts, control returns to the user program without waiting for I/O completion.
 System call – a request to the operating system to allow the user to wait for I/O completion.
 The device-status table contains an entry for each I/O device indicating its type, address, and state.
 The operating system indexes into the device-status table to determine device status and modifies the table entry to reflect the interrupt.
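A minimal sketch of the device-status table, with hypothetical device names, addresses, and states (none of these values come from a real OS):

```python
# Hypothetical device-status table: one entry per I/O device with its
# type, address, and state, indexed by the OS on each request.

device_status_table = {
    "disk0":    {"type": "disk",    "address": 0x1F0, "state": "idle"},
    "printer0": {"type": "printer", "address": 0x378, "state": "idle"},
}

def start_io(device):
    entry = device_status_table[device]
    if entry["state"] != "idle":
        return "busy"          # a real OS would queue the request
    entry["state"] = "busy"    # OS modifies the entry it indexed into
    return "started"

print(start_io("disk0"))   # started
print(start_io("disk0"))   # busy: a request is already outstanding
```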
Operating-System Operations
 Dual-mode operation allows the OS to protect itself and other system components
 User mode and kernel mode
 Mode bit provided by hardware
Provides the ability to distinguish when the system is running user code or kernel code
Some instructions are designated as privileged, executable only in kernel mode
A system call changes the mode to kernel; return from the call resets it to user

Transition from User to Kernel Mode


 A timer prevents infinite loops / a process hogging resources
 Set to interrupt after a specific period
 The operating system decrements the counter
 When the counter reaches zero, an interrupt is generated
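The timer mechanism above can be sketched as follows. This is a hypothetical illustration (the tick counts and exception name are assumptions): the counter is decremented on each tick, and reaching zero forces control back to the OS.

```python
# Hypothetical timer sketch: the OS sets a counter; hardware decrements
# it on each clock tick and raises an interrupt at zero, so no process
# can hog the CPU forever.

class TimerInterrupt(Exception):
    pass

def run_with_timer(ticks_allowed, job_ticks):
    counter = ticks_allowed            # OS initializes the counter
    for _ in range(job_ticks):
        counter -= 1                   # decremented on every clock tick
        if counter == 0:
            raise TimerInterrupt()     # control returns to the OS
    return "finished"

print(run_with_timer(10, 3))           # short job finishes normally
try:
    run_with_timer(5, 1000)            # a long-running job is interrupted
except TimerInterrupt:
    print("interrupted")
```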

Types of Computer System


 Mainframe systems
 Desktop & laptop systems
 Parallel systems
 Real-time systems
Mainframe systems
 Characteristics of mainframe systems
 The first computers used to tackle various applications; still found today in corporate data centers
 room-sized, high I/O capacity, reliability, security, tech support
 Mainframes focus on I/O-bound business data applications (“supercomputers” focus on CPU-bound scientific calculations)
 Mainframes provide three main functions:
 batch processing: insurance claims, store sales reporting, etc.
 transaction processing: credit card, bank account, etc.
 time-sharing (sessions): multiple users querying a database

Parallel systems
SMP = Symmetric MultiProcessing
All CPUs are peers and concurrently run the same copy of the O/S in memory
ASMP = Asymmetric or "master-slave" MultiProcessing
One CPU runs the O/S; the others ask it for tasks to do
Real-time systems
 Systems controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems
 “Hard” real-time: critical tasks are guaranteed to complete on time
 secondary storage is limited or absent; data is stored in short-term memory or read-only memory (ROM)
 conflicts with time-sharing systems and virtual-memory delays
 “Soft” real-time: critical tasks just get higher priority
 more useful in applications requiring tight but not strict response times (multimedia, virtual reality, robotic exploration)

OS - Main Components
 Process management
 process creation; deletion; suspension
 process synchronization; communication
 Main-memory management
 Manage used parts of memory and their current users
 Select processes to load
 Allocate memory to running processes
 Secondary-storage management
 Free-space management
 Storage allocation
 File-system management
 File and directory creation; deletion
 File-manipulation primitives
 Mapping files onto secondary storage
 I/O system management
 General device-driver interface
 Drivers for specific hardware devices
 Protection system
 Distinguish between authorized and unauthorized usage
 Provide means of enforcement
 Command-interpreter system
 Control statements that deal with process creation and management, I/O handling, file-system access, protection, and networking
Introduction

What is an operating system?


 A program that acts as a link between a user and computer hardware

What are the operating systems goals?


1. Execute programs and make solving user problems easier.
2. Make the computer system more convenient to use
3. Use the computer hardware in an efficient manner.

Computer systems can be divided into 4 components


 Hardware: provides basic computing resources (CPU, memory, I/O devices)
 Operating systems: control and coordinate use of hardware among various applications and users.
 Application programs: define the ways in which the system resources are used to solve the computing problems
of the user (word processors, compilers, web browsers, database systems, video games).
 Users: people, machines, other computers.

1.1 What do operating systems do?


Depends on the point of view:

A. Users: The user’s view of the computer varies according to the interface being used.
a. Single user
i. Want convenience, ease of use
ii. Don’t care about resource utilization
iii. Such systems are optimized for the single-user experience rather than the requirements of multiple
users.

b. Multiple users
i. User sits at a terminal connected to a mainframe or a minicomputer. Other users are accessing the
same computer through other terminals. These users share resources and may exchange
information. The operating system in such cases is designed to maximize resource utilization.
ii. Users of dedicated systems such as workstations have dedicated resources but frequently use shared
resources from servers. Therefore, their operating system is designed to compromise between
individual usability and resource utilization.
iii. Handheld computers are resource-poor; they are optimized for usability and battery life
iv. Some computers have little or no user interface, such as embedded computers in devices and
automobiles
B. System:
a. OS is a resource allocator
i. Manages all resources (CPU time, memory space, file-storage space, I/O devices, … …)
ii. Decides between conflicting requests for efficient and fair resource use
b. OS is a control program

i. Controls execution of programs to prevent errors and improper use of the computer

No universally accepted definition of what is part of the operating system


 A broad definition: “Everything a vendor ships when you order an operating system”

 A narrow definition: “The one program running at all times on the computer” is the kernel.

Everything else is either a system program (ships with the operating system) or an application program.

Computer Startup:
 For a computer to start running—for instance, when it is powered up or rebooted—it needs to have an initial
program to run.
 This initial program, or bootstrap program, tends to be simple
 Typically, it is stored within the computer hardware in read-only memory (ROM) or electrically erasable programmable
read-only memory (EEPROM), known by the general term firmware.
 It initializes all aspects of the system, from CPU registers to device controllers to memory contents.
 The bootstrap program must know how to load the operating system and how to start executing that system.
 To accomplish this goal, the bootstrap program must locate the operating-system kernel and load it into memory.
 Once the kernel is loaded and executing, it can start providing services to the system and its users.

1.2 Computer-System Organization:


1. One or more CPUs, device controllers connect through common bus providing access to shared memory
2. CPUs and device controllers can execute in parallel, competing for memory cycles
3. I/O devices and the CPU can execute concurrently
4. Each device controller is in charge of a particular device type
5. Each device controller has a local buffer
6. CPU moves data from/to main memory to/from local buffers
7. I/O is from the device to local buffer of controller
8. Device controller informs CPU that it has finished its operation by causing an interrupt
9. Interrupt transfers control to the appropriate interrupt service routine, through the interrupt vector,
which contains the addresses of all the service routines
10. Interrupt architecture must save the address of the interrupted instruction
11. A trap or exception is a software-generated interrupt caused either by an error or a user request

N.B: An operating system is interrupt driven

12. The operating system preserves the state of the CPU by storing registers and the program counter
13. Determines which type of interrupt has occurred:
a. polling
b. vectored interrupt system
14. Separate segments of code determine what action should be taken for each type of interrupt.
Storage Structure
A. Main memory:
a. The only storage medium that the CPU can access directly
b. So any programs to run must be stored there.
c. Its advantage  random access
d. Its disadvantage  typically volatile (loses all data when computer shuts down).

B. Secondary storage:
a. Extension of main memory that provides large nonvolatile storage capacity
b. Examples
i. Magnetic disks
1. rigid metal or glass platters covered with magnetic recording material
2. Disk surface is logically divided into tracks, which are subdivided into sectors
3. The disk controller determines the logical interaction between the device and the
computer
ii. Solid-state disks (faster than magnetic disks)
1. Various technologies
2. Becoming more popular

Storage systems organized in hierarchy by (from up to down)

1. Speed
2. Cost
3. Volatility

Caching:

1. copying information into faster storage system


2. Main memory can be viewed as a cache for secondary storage
3. Important principle, performed at many levels in a computer (in H/W, OS, S/W)
4. Information in use copied from slower to faster storage temporarily

5. Faster storage (cache) checked first to determine if information is there
a. If it is, information used directly from the cache (fast)
b. If not, data copied to cache and used there
6. Cache smaller than storage being cached
a. Cache management important design problem
b. Cache size and replacement policy
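The replacement-policy problem mentioned above can be illustrated with the common least-recently-used (LRU) policy. This is a hypothetical sketch of one possible policy, not the only one an OS might use:

```python
# Hypothetical replacement-policy sketch: a fixed-size cache that
# evicts the least recently used entry when full (LRU).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity       # cache smaller than backing store
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                # a miss
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" becomes most recently used
cache.put("c", 3)     # evicts "b", the least recently used
print(cache.get("b")) # None: "b" was replaced
```

Choosing the capacity and the policy (LRU, FIFO, random, ...) is exactly the "important design problem" the notes refer to.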

Device Driver:

1. Exists in each device controller to manage I/O


2. Provides interface between controller and kernel

I/O Structure:

A. First Method (Synchronous)


1. To start an I/O operation, the device driver loads the appropriate registers within the device controller.
2. The device controller, in turn, examines the contents of these registers to determine what action to take
(such as “read a character from the keyboard”).
3. The controller starts the transfer of data from the device to its local buffer.
4. Once the transfer of data is complete, the device controller informs the device driver via an interrupt
that it has finished its operation.
5. The device driver then returns control to the operating system, possibly returning the data or a pointer
to the data if the operation was a read. For other operations, the device driver returns status
information.
6. After I/O starts, control returns to user program only upon I/O completion.

N.B:

1. This form of interrupt-driven I/O is fine for moving small amounts of data but can produce high overhead
when used for bulk data movement such as disk I/O.
2. To solve this problem, direct memory access (DMA) is used.
B. Second Method (Asynchronous)
1. After setting up buffers, pointers, and counters for the I/O device, the device controller transfers an
entire block of data directly to or from its own buffer storage to memory, with no intervention by the
CPU.
2. Only one interrupt is generated per block, to tell the device driver that the operation has completed,
rather than the one interrupt per byte generated for low-speed devices.
3. While the device controller is performing these operations, the CPU is available to accomplish other
work.
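The saving from the second method can be shown by counting interrupts. A hypothetical comparison (the byte and block sizes are arbitrary examples): interrupt-per-byte I/O versus DMA's interrupt-per-block.

```python
# Hypothetical comparison of interrupt counts: interrupt-driven I/O
# raises one interrupt per byte, while DMA raises one per block.

def interrupts_per_byte(total_bytes):
    return total_bytes              # one interrupt per byte transferred

def interrupts_dma(total_bytes, block_size):
    # one interrupt per complete block (round up for a partial block)
    return (total_bytes + block_size - 1) // block_size

print(interrupts_per_byte(4096))    # 4096 interrupts
print(interrupts_dma(4096, 512))    # 8 interrupts
```

This is why the notes call interrupt-driven I/O fine for small transfers but high-overhead for bulk data such as disk I/O.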

1.3 Computer-System Architecture


In this section we introduce the general structure of a typical computer system. A computer system can be organized in a
number of different ways, which we can categorize roughly according to the number of general-purpose processors
used.
A. Single-Processor Systems:
a. Most systems use a single general-purpose processor
b. Almost all have special-purpose processors as well
B. Multiprocessors Systems:
a. Also known as parallel or multicore systems
b. Advantages include:
i. Increased throughput: By increasing the number of processors, we expect to get more work done in
less time.
ii. Economy of scale: Multiprocessor systems can cost less than equivalent multiple single-processor
systems, because they can share peripherals, mass storage, and power supplies.
iii. Increased reliability: If functions can be distributed properly among several processors, then the
failure of one processor will not halt the system, only slow it down.
c. Two types:
1. Asymmetric Multiprocessing:
1. Each processor is assigned a specific task.
2. A boss processor controls the system; the other processors either look to the boss for
instruction or have predefined tasks.
3. This scheme defines a boss–worker relationship.
4. The boss processor schedules and allocates work to the worker processors.
2. Symmetric Multiprocessing (SMP):
1. Each processor performs all tasks within the operating system.
2. SMP means that all processors are peers; no boss–worker relationship exists between processors.

 Multiprocessing can cause a system to change its memory access model from uniform memory access (UMA) to
non-uniform memory access (NUMA).
o UMA: is defined as the situation in which access to any RAM from any CPU takes the same amount of time.
o NUMA: some parts of memory may take longer to access than other parts, creating a performance penalty.
Operating systems can minimize the NUMA penalty through resource management.

 A recent trend in CPU design is to include multiple computing cores on a single chip. Such multiprocessor systems are
termed multicore.
 They can be more efficient than multiple chips with single cores because on-chip communication is faster than
between-chip communication.
 In addition, one chip with multiple cores uses significantly less power than multiple single-core chips.
 It is important to note that while multicore systems are multiprocessor systems, not all multiprocessor systems
are multicore.
 Blade Servers are a relatively recent development in which multiple processor boards, I/O boards, and networking
boards are placed in the same chassis.
 The difference between these and traditional multiprocessor systems is that each blade-processor board boots
independently and runs its own operating system.
 Some blade-server boards are multiprocessor as well, which blurs the lines between types of computers.

Clustered System
 Like multiprocessor systems, but multiple systems working together
o Usually sharing storage via a storage-area network (SAN)
o Provides a high-availability service which survives failures even if one or more systems in the cluster fail.
 Asymmetric clustering: The hot-standby host machine does nothing but monitor the active
server. If that server fails, the hot-standby host becomes the active server.
 Symmetric clustering has multiple nodes running applications, monitoring each other

o Some clusters are for high-performance computing (HPC)


o Applications must be written to use parallelization (which divides a program into separate components
that run in parallel on individual computers in the cluster.)
o Some have distributed lock manager (DLM) to avoid conflicting operations

1.4 Operating-System Structure
 One of the most important aspects of operating systems is the ability to multiprogram.
 A single program cannot, in general, keep either the CPU or the I/O devices busy at all times.
 Single users frequently have multiple programs running.
 Multiprogramming needed for efficiency
 Multiprogramming organizes jobs (code and data) so CPU always has one to execute
 Main memory is too small to accommodate all jobs, so the jobs are kept initially on the disk in the job pool.
 This pool consists of all processes residing on disk awaiting allocation of main memory.
 The set of jobs in memory can be a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in memory.
 The job may have to wait for some task, such as an I/O operation, to complete.
 In a non-multiprogrammed system, the CPU would sit idle.
 In a multiprogrammed system, the operating system simply switches to, and executes, another job.
 Timesharing (multitasking) is a logical extension to multiprogramming
o The CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive
computing
o Response time should be short (typically < 1 second)
o Each user has at least one program executing in memory { Process }
o If several jobs are ready to run at the same time, the system must choose which job will run first. Making this
decision is { CPU scheduling }
o If processes don’t fit in memory, swapping moves them in and out of main memory to the disk so they can run
o Virtual memory allows execution of processes not completely in memory

1.5 Operating-System Operations


 Modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices to service,
and no users to whom to respond, an operating system will sit quietly, waiting for something to happen.
 Software error or request creates exception or trap like:
o Division by zero, request for operating system service
o Other process problems include infinite loop, processes modifying each other or the operating system
 Dual-mode operation allows OS to protect itself and other system components
o User mode and kernel mode
o Mode bit provided by hardware
 Provides ability to distinguish when system is running user code or kernel code
 Some instructions designated as privileged, only executable in kernel mode
 System call changes mode to kernel, return from call resets it to user
 The concept of modes can be extended beyond two modes (in which case the CPU uses more than one bit to set and test
the mode) for example :
o CPUs that support virtualization frequently have a separate mode to indicate when the virtual machine
manager (VMM) and the virtualization management software is in control of the system.
 In this mode, the VMM has more privileges than user processes but fewer than the kernel.
 Timer to prevent infinite loop / process hogging resources
o Set interrupt after specific period (fixed or variable)
o Operating system initializes and decrements counter
o When counter reaches zero generate an interrupt
o Set up before scheduling process to regain control or terminate program that exceeds agreed time
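The dual-mode mechanism above can be sketched with a software mode bit. This is a hypothetical illustration (the function names and the `PermissionError` are assumptions; real hardware enforces this in the CPU, not in software):

```python
# Hypothetical mode-bit sketch: privileged instructions execute only in
# kernel mode; a system call switches the mode and switches it back.

mode = "user"                      # the mode bit provided by hardware

def execute(instruction, privileged=False):
    if privileged and mode != "kernel":
        raise PermissionError("privileged instruction in user mode")
    return f"executed {instruction}"

def system_call(instruction):
    global mode
    mode = "kernel"                # trap: switch to kernel mode
    try:
        return execute(instruction, privileged=True)
    finally:
        mode = "user"              # return from call resets it to user

print(execute("add"))              # unprivileged: fine in user mode
print(system_call("set_timer"))    # privileged, but done via a system call
try:
    execute("set_timer", privileged=True)  # direct attempt is refused
except PermissionError as e:
    print(e)
```

The `finally` block mirrors the "return from call resets it to user" rule: the mode bit never stays in kernel after the service completes.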

1.6 Process Management
 A process is a program in execution.
o It is a unit of work within the system
o Program is a passive entity, process is an active entity
 Process needs resources to accomplish its task
o CPU, memory, I/O, files
o Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location of next instruction to execute
o Process executes instructions sequentially, one at a time, until completion
 Multi-threaded process has one program counter per thread
 Typically a system has many processes running concurrently on one or more CPUs
o Some user processes and others system processes
o Concurrency by multiplexing the CPUs among the processes / threads
 The operating system is responsible for the following activities in connection with process management:
o Creating and deleting both user and system processes
o Scheduling processes and threads on the CPUs
o Suspending and resuming processes
o Providing mechanisms for process synchronization
o Providing mechanisms for process communication
o Providing mechanisms for deadlock handling
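Several of the activities listed above (creating, suspending, resuming, and deleting processes) can be sketched with a tiny process table. This is a hypothetical illustration; the table layout and state names are assumptions:

```python
# Hypothetical process-management sketch: create, suspend, resume, and
# delete entries in a small process table.

process_table = {}
next_pid = [1]                    # a simple PID counter

def create_process(program):
    pid = next_pid[0]
    next_pid[0] += 1
    process_table[pid] = {"program": program, "state": "ready"}
    return pid

def suspend(pid):
    process_table[pid]["state"] = "suspended"

def resume(pid):
    process_table[pid]["state"] = "ready"

def delete_process(pid):
    del process_table[pid]        # reclaim the process's resources

pid = create_process("editor")
suspend(pid)
print(process_table[pid]["state"])   # suspended
resume(pid)
delete_process(pid)
print(pid in process_table)          # False: entry reclaimed
```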

1.7 Memory Management


 All data must be in main memory before and after processing
 All instructions must be in memory in order to execute
 Memory management allows keeping several programs in memory
o To improve CPU utilization and speed computer’s response to users
 Memory management activities
o Keeping track of which parts of memory are currently being used and by whom
o Deciding which processes (or parts thereof) and data to move into and out of memory
o Allocating and deallocating memory space as needed
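The three activities above (tracking who uses which parts, and allocating/deallocating space) can be sketched with fixed-size frames. A hypothetical illustration; the frame count and process names are arbitrary:

```python
# Hypothetical memory-management sketch: track which fixed-size frames
# are in use and by whom, allocating and deallocating on demand.

frames = [None] * 8                  # None marks a free frame

def allocate(process, count):
    free = [i for i, owner in enumerate(frames) if owner is None]
    if len(free) < count:
        return None                  # not enough memory right now
    for i in free[:count]:
        frames[i] = process          # record the frame's current user
    return free[:count]

def deallocate(process):
    for i, owner in enumerate(frames):
        if owner == process:
            frames[i] = None         # the space becomes free again

print(allocate("A", 3))   # [0, 1, 2]
print(allocate("B", 2))   # [3, 4]
deallocate("A")
print(allocate("C", 4))   # [0, 1, 2, 5]: reuses A's freed frames
```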

1.8 Storage Management


 OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by device (i.e., disk or tape drive)
o Varying properties include access speed, capacity, data-transfer rate, access method (sequential or
random)
 File-System management
o Files usually organized into directories
o Access control on most systems to determine who can access what
o OS activities include
 Creating and deleting files and directories
 Supporting primitives to manipulate files and directories
 Mapping files onto secondary storage
 Backup files onto stable (non-volatile) storage media
 Usually disks used to store data that does not fit in main memory or data that must be kept for a “long” period
of time
 Proper management is of central importance
 Entire speed of computer operation hinges on disk subsystem and its algorithms
 OS activities
o Free-space management
o Storage allocation
o Disk scheduling
 Some storage need not be fast
o Tertiary storage includes optical storage, magnetic tape
o Still must be managed – by OS or applications
o Varies between WORM and RW
 Caching is an important principle of computer systems. Here’s how it works.
o Information is normally kept in some storage system (such as main memory).
o As it is used, it is copied into a faster storage system—the cache—on a temporary basis. When we need
a particular piece of information, we first check whether it is in the cache.
 If it is, we use the information directly from the cache.
 If it is not, we use the information from the source, putting a copy in the cache under the
assumption that we will need it again soon.
o Because caches have limited size, cache management is an important design problem.
o Careful selection of the cache size and of a replacement policy can result in greatly increased
performance.
o Multitasking environments must be careful to use most recent value, no matter where it is stored in the
storage hierarchy
o Multiprocessor environment must provide cache coherency in hardware such that all CPUs have the
most recent value in their cache
o Distributed environment situation even more complex (several copies of a datum can exist)
o I/O Subsystem
 One purpose of OS is to hide individuality of hardware devices from the user
 I/O subsystem responsible for Memory management of I/O including
 buffering - storing data temporarily while it is being transferred
 caching - storing parts of data in faster storage for performance
 spooling - overlapping of output of one job with input of other jobs
 General device-driver interface
 Drivers for specific hardware devices

1.9 Protection and Security


 Protection – any mechanism for controlling access of processes or users to resources defined by the OS
 Security – defense of the system against internal and external attacks
o Huge range, including denial-of-service, worms, viruses, identity theft, theft of service
 Systems generally first distinguish among users, to determine who can do what
o User identities (user IDs, security IDs) include name and associated number, one per user
o User ID then associated with all files, processes of that user to determine access control
o Group identifier (group ID) allows set of users to be defined and controls managed, then also associated
with each process, file
o Privilege escalation allows user to change to effective ID with more rights
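The user-ID/group-ID access control described above can be sketched as follows. A hypothetical illustration: the file names, users, and the owner/group/others layout are assumptions (loosely modeled on Unix-style permissions):

```python
# Hypothetical protection sketch: user and group IDs are checked
# against a per-file access entry before an operation is allowed.

files = {
    "payroll.txt": {"owner": "alice", "group": "finance", "others": False},
}
groups = {"finance": {"alice", "bob"}}

def can_access(user, filename):
    entry = files[filename]
    if user == entry["owner"]:
        return True                      # the owner always has access
    if user in groups.get(entry["group"], set()):
        return True                      # group members have access
    return entry["others"]               # everyone else: per-file flag

print(can_access("alice", "payroll.txt"))  # True: owner
print(can_access("bob", "payroll.txt"))    # True: in the finance group
print(can_access("eve", "payroll.txt"))    # False: no rights
```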
1.11 Computing Environments
1. Traditional
a. Stand-alone general purpose machines
b. But indistinct, as most systems interconnect with others (e.g., over the Internet)
c. Portals provide web access to internal systems
d. Network computers (thin clients) are like Web terminals - more security or easier maintenance
e. Mobile computers interconnect via wireless networks
f. Networking becoming ubiquitous - even home systems use firewalls to protect home computers
from Internet attacks
2. Mobile
a. Handheld smart phones, tablet computers, etc
b. Allows new types of apps like augmented reality
c. Use IEEE 802.11 wireless, or cellular data networks for connectivity
d. Leaders are Apple iOS and Google Android

3. Distributed
a. Collection of separate, possibly heterogeneous, systems networked together
b. Network is a communications path, TCP/IP most common
c. Local Area Network (LAN)
d. Wide Area Network (WAN)
e. Metropolitan Area Network (MAN)
f. Personal Area Network (PAN)
g. Network Operating System provides features (as file sharing) between systems across network
h. Communication scheme allows systems to exchange messages
i. Impression of a single system

4. Client-Server Computing
a. Dumb terminals supplanted by smart PCs
b. Many systems now servers, responding to requests generated by clients
c. Compute-server system provides an interface to client to request services (e.g., a database)
d. File-server system provides interface for clients to store and retrieve files

5. Peer To Peer
a. Another model of distributed system
b. P2P does not distinguish clients and servers
c. Instead all nodes are considered peers
d. May each act as client, server or both
e. Node must join P2P network
i. Registers its service with central lookup service on network, or
ii. Broadcast request for service and respond to requests for service via discovery protocol
f. Examples include Napster and Gnutella, Voice over IP (VoIP) such as Skype

6. Virtualization
a. Allows operating systems to run as applications within other OSs
i. Vast and growing industry

b. Emulation used when source CPU type different from target CPU type (e.g., PowerPC to Intel x86)
i. Generally slowest method
ii. When computer language not compiled to native code – Interpretation
c. Virtualization – OS natively compiled for CPU, running guest OSs also natively compiled
i. Consider VMware running WinXP guests, each running applications, all on native WinXP
host OS
ii. VMM provides virtualization services
d. Use cases involve laptops and desktops running multiple OSs for exploration or compatibility
i. Apple laptop running Mac OS X host, Windows as a guest
ii. Developing apps for multiple OSs without having multiple systems
iii. QA testing applications without having multiple systems
iv. Executing and managing compute environments within data centers
e. VMMs can run natively, in which case they are also the host
7. Cloud Computing
a. Delivers computing, storage, even apps as a service across a network
b. Logical extension of virtualization as based on virtualization
i. Amazon EC2 has thousands of servers, millions of VMs, PBs of storage available across
the Internet, pay based on usage
c. Cloud compute environments composed of usual OSs plus VMMs and cloud management tools
i. Internet connectivity requires security like firewalls
ii. Load balancers spread traffic across multiple applications
d. Many types
i. Public cloud – available via Internet to anyone willing to pay
ii. Private cloud – run by a company for the company’s own use
iii. Hybrid cloud – includes both public and private cloud components
iv. Software as a Service (SaaS) – one or more applications available via the Internet (e.g., a word processor)
v. Platform as a Service (PaaS) – software stack ready for application use via the Internet (e.g., a database server)
vi. Infrastructure as a Service (IaaS) – servers or storage available over the Internet (e.g., storage available for backup)
8. Real time embedded system
a. Real-time embedded systems most prevalent form of computers
i. Vary considerably: special-purpose devices, limited-purpose OSs, real-time OSs
ii. Usually have little or no user interface
b. Many other special computing environments as well
i. Some have OSs, some perform tasks without an OS
c. Real-time OS has well-defined fixed time constraints
i. Processing must be done within constraint
ii. Correct operation only if constraints met

1.12 Open-Source Operating Systems


 Operating systems made available in source-code format rather than just binary closed-source
 Examples include GNU/Linux and BSD UNIX (including core of Mac OS X), and many more
 Can use VMM like VMware Player (Free on Windows), Virtualbox (open source and free on many platforms)
o Use to run guest operating systems for exploration

Chapter Two OS

Operating System Services


The operating system provides a set of services that serve two basic purposes:
฀ System services – used to ensure efficient use of the system via resource sharing
฀ User services – provide an interface between the computer hardware and the programmer (or system users) that simplifies their role
(User Services)
User interface - Almost all operating systems have a user interface (UI)
Varies between Command-Line (CLI), Graphics User Interface (GUI)
I/O operations - A running program may require I/O, which may involve a file or an I/O device.
File-system manipulation - Read and write files and directories, create and delete of them,
search them, list file Information, permission management.
Communications – Processes may exchange information, on the same computer or between
computers over a network
Error detection – OS needs to be constantly aware of possible errors
May occur in the CPU and memory hardware, in I/O devices, in user program
For each type of error, OS should take the appropriate action to ensure correct computing

(System services)
Initial program loader: The system must be able to load a program into memory, run that program, and end its execution, either normally or abnormally
Resource allocation.
» Process management
» Memory management
» Secondary storage management
Accounting information. Keep track of which users use how much and what kinds of computer resources.
Protection and security – The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other
» Protection involves ensuring that all access to system resources is controlled
» Security of the system from outsiders requires user authentication, and extends to defending external I/O devices from invalid access attempts
System Calls
Programming interface to the services provided by the OS
Typically written in a high-level language (C or C++)
Types of System Calls
฀ Process control
฀ File management
฀ Device management
฀ Information maintenance
฀ Communications
Operating system structure
1- Simple Structure
MS-DOS – written to provide the most functionality in the least space
฀ Not divided into modules
฀ Although MS-DOS has some structure, its interfaces and levels of functionality are not well
separated
฀ A set of service procedures that carry out the system calls.
฀ A set of utility procedures that help the service procedures.

2- Layered Approach
฀ The operating system is divided into a number of layers (levels), each built on top of lower
layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface
฀ Layers are selected such that each uses functions (operations) and services of only lower-level layers
3- Virtual Machines
฀ A virtual machine takes the layered approach to its logical conclusion.
It treats hardware and the operating system kernel as though they were
all hardware
฀ A virtual machine provides an interface identical to the underlying bare
hardware
฀ The operating system creates the illusion of multiple processes, each
executing on its own processor with its own (virtual) memory
฀ The resources of the physical computer are shared to create the virtual machines
฀ CPU scheduling can create the appearance that users have their own processor
฀ Spooling and a file system can provide virtual card readers and virtual line printers
฀ A normal user time-sharing terminal serves as the virtual machine operator’s console
Objectives of the chapter
 An operating system provides the environment within which programs are executed.
 The design of a new operating system is a major task.
 It is important that the goals of the system be well defined before the design begins. These goals form the basis for
choices among various algorithms and strategies.
 We can view an operating system from several vantage points.
o One view focuses on the services that the system provides;
o another, on the interface that it makes available to users and programmers;
o a third, on its components and their interconnections.
 In this chapter, we explore all three aspects of operating systems, showing the viewpoints of users, programmers, and
operating system designers.
 We consider what services an operating system provides, how they are provided, how they are debugged, and what the
various methodologies are for designing such systems.
 Finally, we describe how operating systems are created and how a computer starts its operating system.

2.1 Operating-System Services


 An operating system provides an environment for the execution of programs.
 It provides certain services to programs and to the users of those programs.
 The specific services provided, of course, differ from one operating system to another, but we can identify
common classes.
 These operating system services are provided for the convenience of the programmer, to make the
programming task easier.
 One set of operating-system services provides functions that are helpful to the user:
o User interface - Almost all operating systems have a user interface (UI).
 Varies between Command-Line Interface (CLI) , Batch, Graphics User Interface (GUI)
o Program execution - The system must be able to load a program into memory and to run that program,
end execution, either normally or abnormally (indicating error)
o I/O operations - A running program may require I/O, which may involve a file or an I/O device
o File-system manipulation - The file system is of particular interest. Programs need to read and write
files and directories, create and delete them, search them, list file Information, permission
management.
o Communications - There are many circumstances in which one process needs to exchange information
with another process. Such communication may occur between processes that are executing on the
same computer or between processes that are executing on different computer systems tied together
by a computer network.
o Error detection – OS needs to be constantly aware of possible errors
 May occur in the CPU and memory hardware, in I/O devices, in user program
 For each type of error, OS should take the appropriate action to ensure correct and consistent
computing
 Debugging facilities can greatly enhance the user’s and programmer’s abilities to efficiently use
the system
o Resource allocation - When multiple users or multiple jobs running concurrently, resources must be
allocated to each of them
 Many types of resources - Some (such as CPU cycles, main memory, and file storage) may have
special allocation code, others (such as I/O devices) may have general request and release code
o Accounting - To keep track of which users use how much and what kinds of computer resources
o Protection and security - The owners of information stored in a multiuser or networked computer
system may want to control use of that information, concurrent processes should not interfere with
each other
 Protection involves ensuring that all access to system resources is controlled
 Security of system from outsiders requires user authentication, extends to defending external
I/O devices from invalid access attempts
o If a system is to be protected and secure, precautions must be instituted throughout it.

2.2 User and Operating-System Interface


 We mentioned earlier that there are several ways for users to interface with the operating system. Here, we discuss two
fundamental approaches.
 One provides a command-line interface, or command interpreter, that allows users to directly enter commands to be
performed by the operating system.
 The other allows users to interface with the operating system via a graphical user interface, or GUI.

2.2.1 Command Interpreters:


 Some operating systems include the command interpreter in the kernel.
 Others, such as Windows and UNIX, treat the command interpreter as a special program that is running when a
job is initiated or when a user first logs on (on interactive systems).
 On systems with multiple command interpreters to choose from, the interpreters are known as shells.
o For example, on UNIX and Linux systems, a user may choose among several different shells, including
the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.
 Primarily obtains a command from the user and executes it
 Sometimes contains the code to execute each command, or the commands are just names of programs
 For example:
o the UNIX command to delete a file
o rm file.txt
o Would search for a file called rm, load the file into memory, and execute it with the parameter file.txt.
The function associated with the rm command would be defined completely by the code in the file rm.
 If the latter, adding new features doesn’t require shell modification

2.2.2 Graphical User Interfaces


 User-friendly desktop metaphor interface
o Usually mouse, keyboard, and monitor
o Icons represent files, programs, actions, etc
o Various mouse buttons over objects in the interface cause various actions (provide information,
options, execute function, open directory – known as a folder)

 Invented at Xerox PARC research facility (1970s)


 Many systems now include both CLI and GUI interfaces
o Microsoft Windows is GUI with CLI “command” shell
o Apple Mac OS X is “Aqua” GUI interface with UNIX kernel underneath and shells available
o Unix and Linux have CLI with optional GUI interfaces (CDE, KDE, GNOME)
 Touchscreen devices require new interfaces
o Mouse not possible or not desired
o Actions and selection based on motions
o Virtual keyboard for text entry

2.3 System Calls


 Programming interface to the services provided by the OS
 Typically written in a high-level language (C or C++)
 Mostly accessed by programs via a high-level Application Programming Interface (API) rather than direct system
call use
 Three most common APIs are:
o Win32 API for Windows
o POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X)
o Java API for the Java virtual machine (JVM)
 Why would an application programmer prefer programming according to an API rather than invoking actual
system calls?
 There are several reasons for doing so. One benefit concerns program portability.
 An application programmer designing a program using an API can expect her program to compile and run on any
system that supports the same API (although, in reality, architectural differences often make this more difficult
than it may appear).
 Furthermore, actual system calls can often be more detailed and difficult to work with than the API available to
an application programmer.
 Nevertheless, there often exists a strong correlation between a function in the API and its associated system call
within the kernel.
 In fact, many of the POSIX and Windows APIs are similar to the native system calls provided by the UNIX, Linux,
and Windows operating systems.
 The system-call interface intercepts function calls in the API and invokes the necessary system calls within the
operating system.
 Typically, a number is associated with each system call, and the system-call interface maintains a table indexed
according to these numbers.
 The system call interface then invokes the intended system call in the operating-system kernel and returns the status of
the system call and any return values.
 The caller need know nothing about how the system call is implemented
o Just needs to follow API and understand what OS will do as a result call
o Most details of OS interface are hidden from the programmer by API
o Managed by set of functions built into libraries included with compiler
 Often, more information is required than simply identity of desired system call
o Exact type and amount of information vary according to OS and call
 Three general methods used to pass parameters to the OS
o Simplest: pass the parameters in registers
o In some cases, there may be more parameters than registers
o Parameters stored in a block, or table, in memory, with the block address passed as a parameter in a register
o This approach is taken by Linux and Solaris
o Parameters pushed onto the stack by the program and popped off the stack by the operating system
 Block and stack methods do not limit the number or length of parameters being passed
2.4 Types Of System Calls
System calls can be grouped roughly into six major categories:

 process control
 file manipulation
 device manipulation
 information maintenance
 communications
 Protection.

2.4.1 Process Control


A. Process control
B. end, abort
C. load, execute
D. create process, terminate process
E. get process attributes, set process attributes
F. wait for time
G. wait event, signal event
H. allocate and free memory
I. Dump memory if error
J. Debugger for determining bugs, single step execution
K. Locks for managing access to shared data between processes
 Quite often, two or more processes may share data.
 To ensure the integrity of the data being shared, operating systems often provide system calls
allowing a process to lock shared data.
 Then, no other process can access the data until the lock is released.
 Typically, such system calls include acquire lock() and release lock().

2.4.2 File management


A. create file, delete file
B. open, close file
C. read, write, reposition
D. get and set file attributes

2.4.3 Device management


A. request device, release device
B. read, write, reposition
C. get device attributes, set device attributes
D. logically attach or detach devices

2.4.4 Information maintenance


A. get time or date, set time or date
B. get system data, set system data
C. get and set process, file, or device attributes

2.4.5 Communications
A. create, delete communication connection
B. Message passing model send, receive messages to host name or process name
C. From client to server
D. Shared-memory model create and gain access to memory regions
E. transfer status information
F. attach and detach remote devices

2.4.6 Protection
A. Control access to resources
B. Get and set permissions
C. Allow and deny user access

2.5 System Programs


System programs, also known as system utilities, provide a convenient environment for program development and
execution. Some of them are simply user interfaces to system calls. Others are considerably more complex. They can be
divided into these categories:

 File management
o These programs create, delete, copy, rename, print, dump, list, and generally manipulate files and
directories.
 Status information.
o Some programs simply ask the system for the date, time, amount of available memory or disk space,
number of users, or similar status information. Others are more complex, providing detailed
performance, logging, and debugging information.
o Typically, these programs format and print the output to the terminal or other output devices or files or
display it in a window of the GUI.
o Some systems also support a registry, which is used to store and retrieve configuration information.
 File modification.
o Several text editors may be available to create and modify the content of files stored on disk or other
storage devices.
o There may also be special commands to search contents of files or perform transformations of the text.
 Programming-language support.
o Compilers, assemblers, debuggers, and interpreters for common programming languages (such as C,
C++, Java, and PERL) are often provided with the operating system or available as a separate download.
 Program loading and execution.
o Once a program is assembled or compiled, it must be loaded into memory to be executed.
o The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders.
Debugging systems for either higher-level languages or machine language are needed as well.
 Communications.
o These programs provide the mechanism for creating virtual connections among processes, users, and
computer systems.
o They allow users to send messages to one another’s screens, to browse Web pages, to send e-mail
messages, to log in remotely, or to transfer files from one machine to another.
 Background services.
o All general-purpose systems have methods for launching certain system-program processes at boot
time.
o Some of these processes terminate after completing their tasks, while others continue to run until the
system is stopped.
o Provide facilities like disk checking, process scheduling, error logging, printing
o Run in user context not kernel context
o Known as services, subsystems, daemons

2.6 Operating-System Design and Implementation


In this section, we discuss problems we face in designing and implementing an operating system. There are, of course,
no complete solutions to such problems, but there are approaches that have proved successful.

2.6.1 Design Goals


 The first problem in designing a system is to define goals and specifications.
 Internal structure of different Operating Systems can vary widely.
 At the highest level, the design of the system will be affected by the choice of hardware and the type of system:
a. Batch
b. Time sharing
c. Single user
d. Multiuser
e. Distributed
f. Real time
g. General purpose.
 Beyond this highest design level, the requirements may be much harder to specify.
 The requirements can, however, be divided into two basic groups:
a. User goals
 operating system should be convenient to use, easy to learn, reliable, safe, and fast
b. System goals.
 operating system should be easy to design, implement, and maintain, as well as flexible, reliable,
error-free, and efficient

2.6.2 Mechanisms and Policies


 One important principle is the separation of policy from mechanism.
o Mechanisms: determine how to do something
o Policies: determine what will be done.
 The separation of policy from mechanism is a very important principle, it allows maximum flexibility if policy
decisions are to be changed later
 In the worst case, each change in policy would require a change in the underlying mechanism.
 A general mechanism insensitive to changes in policy would be more desirable.
 For example:
o Consider a mechanism for giving priority to certain types of programs over others.
o If the mechanism is properly separated from policy, it can be used either to support a policy decision that I/O-intensive programs should have priority over CPU-intensive ones, or to support the opposite policy.
 Specifying and designing OS is highly creative task of software engineering

2.6.3 Implementation
 Much variation
o Early OSs in assembly language
o Then system programming languages like Algol, PL/1
o Now C, C++
 Actually usually a mix of languages
o Lowest levels in assembly
o Main body in C
o Systems programs in C, C++, scripting languages like PERL, Python, shell scripts
 A higher-level language is easier to port to other hardware, but slower
 Emulation can allow an OS to run on non-native hardware

2.7 Operating-System Structure


 A system as large and complex as a modern operating system must be engineered carefully if it is to function
properly and be modified easily.
 A common approach is to partition the task into small components, or modules, rather than have one
monolithic system.
 Various ways to structure one as follows
o Simple Structure
 Small and simple systems written to provide the most functionality in the least space
 Not divided into modules
 Unix system is a good example
 UNIX – limited by hardware functionality, the original UNIX operating system had
limited structuring.
 The UNIX OS consists of two separable parts
a. Systems programs
b. The kernel
1. Consists of everything below the system-call interface and above the
physical hardware
2. Provides the file system, CPU scheduling, memory management, and
other operating-system functions; a large number of functions for one
level
o Layered Approach
 The operating system is divided into a number of layers (levels), each built on top of lower
layers.
 The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.


 With modularity, layers are selected such that each uses functions (operations) and services of
only lower-level layers
o Microkernel System Structure
 Moves as much from the kernel into user space
 Mach example of microkernel
 Mac OS X kernel (Darwin) partly based on Mach
 Communication takes place between user modules using message passing
 Benefits:
 Easier to extend a microkernel
 Easier to port the operating system to new architectures
 More reliable (less code is running in kernel mode)
 More secure
 Detriments:
 Performance overhead of user space to kernel space communication
o Modules
 Most modern operating systems implement loadable kernel modules
 Uses object-oriented approach
 Each core component is separate
 Each talks to the others over known interfaces
 Each is loadable as needed within the kernel
 Overall, similar to layers but more flexible
 Linux, Solaris


 The Solaris operating system structure, shown in the figure, is organized around a core kernel with seven types of loadable kernel modules:
a. Scheduling classes
b. File systems
c. Loadable system calls
d. Executable formats
e. STREAMS modules
f. Miscellaneous
g. Device and bus drivers
 Linux also uses loadable kernel modules, primarily for supporting device drivers and
file systems.
o Hybrid Systems
 Very few operating systems adopt a single, strictly defined structure.
 Instead, they combine different structures, resulting in hybrid systems that address
performance, security, and usability issues.
 Hybrid combines multiple approaches to address performance, security, usability needs
 Linux and Solaris kernels in kernel address space, so monolithic, plus modular for dynamic
loading of functionality
 Windows mostly monolithic, plus microkernel for different subsystem personalities
 Examples:
 Mac OS X :
a. The Apple Mac OS X operating system uses a hybrid structure
b. It is a layered system. The top layers include the Aqua user interface and a set
of application environments and services.
c. Notably, the Cocoa environment specifies an API for the Objective-C
programming language, which is used for writing Mac OS X applications.
d. Below these layers is the kernel environment, which consists primarily of the
Mach microkernel and the BSD UNIX kernel.
 IOS
a. Apple mobile OS for iPhone, iPad
b. Structured on Mac OS X, added functionality
c. Does not run OS X applications natively
d. Also runs on different CPU architecture (ARM vs. Intel)
e. Cocoa Touch Objective-C API for developing apps
f. Media services layer for graphics, audio, video
g. Core services provides cloud computing, databases
h. Core operating system, based on Mac OS X kernel
 Android
a. Developed by Open Handset Alliance (mostly Google)
1. Open Source
b. Similar stack to iOS
c. Based on Linux kernel but modified
1. Provides process, memory, device-driver management
2. Adds power management
d. Runtime environment includes core set of libraries and Dalvik virtual machine
1. Apps developed in Java plus Android API
 Java class files compiled to Java bytecode, then translated to an executable that runs in the Dalvik VM
e. Libraries include frameworks for web browser (webkit), database (SQLite),
multimedia, smaller libc

2.8 Operating-System Debugging


 Debugging is finding and fixing errors, or bugs
 OSs generate log files containing error information

2.8.1 Failure Analysis


 Failure of an application can generate core dump file capturing memory of the process
 Operating system failure can generate crash dump file containing kernel memory
 Beyond crashes, performance tuning can optimize system performance
o Sometimes using trace listings of activities, recorded for analysis
o Profiling is periodic sampling of instruction pointer to look for statistical trends

2.8.2 Performance Tuning


 Improve performance by removing bottlenecks
 OS must provide means of computing and displaying measures of system behavior

2.9 Operating-System Generation


 Operating systems are designed to run on any of a class of machines; the system must be configured for each
specific computer site
 SYSGEN program obtains information concerning the specific configuration of the hardware system
o Used to build a system-specific compiled kernel or a system-tuned configuration
o Can generate more efficient code than one general kernel

2.10 System Boot


 When power initialized on system, execution starts at a fixed memory location
o Firmware ROM used to hold initial boot code
 Operating system must be made available to hardware so hardware can start it
o Small piece of code – bootstrap loader, stored in ROM or EEPROM locates the kernel, loads it into
memory, and starts it
o Sometimes two-step process where boot block at fixed location loaded by ROM code, which loads
bootstrap loader from disk
 Common bootstrap loader, GRUB, allows selection of kernel from multiple disks, versions, kernel options
 Kernel loads and system is then running
Operating System
Chapter 3 - Processes

Ostaz Online
http://www.facebook.com/Ostaz.Online
A word before we begin: this walkthrough is not meant to be the only thing you study from; it is just meant to help you understand the whole lecture and the unclear parts of the slides, and to follow how the slides fit together. But you certainly will not be able to answer everything in the exam from this summary or this walkthrough alone :)

Chapter Three

Processes

When the operating system runs on a machine, it keeps many different kinds of work going at once, right? Here we will take two of them:

The first: something called the batch system. This was covered in Chapter One, and here its units of work are called jobs.

The second: the multiprogramming system, whose units of work are called user programs, or tasks.

Definition of a program: a static thing, a ready-made set of instructions; it is completely fixed, and it neither grows nor shrinks nor changes.

Definition of a process: a dynamic thing; it is not fixed, it changes, it can grow and it can shrink. The formal definition, stated properly, is: any program that has entered the execution stage can be called a process.

Any image of a process in memory is called an address space, and it contains the following components:

First: the text section. This holds the code of each running application; every application has its own code, and this part of the address space is where that code lives.

Second: the data section. This holds the application's data structures; the previous section holds the code, and this one holds the data.

Third: the heap. This is memory allocated dynamically for the application at run time; it has no fixed size and is allocated and released on demand.

Fourth and last: the stack. Register values and any values used temporarily are saved here. Almost all of these concepts were already covered in the data structures course.

Now, the operating system is supposed to organize all of this activity and all of these running processes. For that reason we will look at a set of items that accompany every process; based on these items, the OS can line processes up and schedule them in an orderly way.

So what are these items that belong to a process?

One thing we must know first: all of the data describing a process is stored in a structure called the Process Control Block (PCB). It is kept with the process and holds the accompanying items just mentioned.

‫اٌساخح األ‪ٌٚ‬ي ‪ :‬اٌثش‪ٚ‬سيس أي دي ‪ٚ‬دٖ تيى‪ ْٛ‬سلُ ي‪ٔٛ‬يه " ِ‪ٛ‬زذ " ‪ِٚ‬ص تيرىشس ‪ٚ‬طثؼا اويذ وً ‪ٚ‬ازذ دٌ‪ٛ‬لري‬
‫خٗ في رٕ٘ح اٌثشايّاسي وي تراع اٌذاذا تيض أ‪ ٚ‬ستط اٌّػطٍر تساخح‬
‫يمذس يفرىشٖ‬

Second item: the state. We'll talk about this one together
in detail in a little while, and I'll remind you of it when we get there

Third item: the priority. In short: who goes before
whom, who enters first, and so on.

Fourth item: the program counter. This points to
the address of the instruction whose turn it is... hmm, of course
you got a bit lost here... look: a process contains several instructions, right?? OK — imagine them
standing in a queue waiting to be executed inside the process, but
of course they don't have to execute strictly in turn, because there are other priorities we take
into account. So once it has been decided which instruction of the process will run, the program
counter, like a clever fellow, goes and stands at (points to) the instruction whose turn it is,
so that it gets executed.

Fifth item: the CPU registers. Here the registers ("records", in plain terms) are saved — for example when something
interrupts the process — so that the next time, when the process comes to run again, what happened
before is recorded in the registers, it doesn't stumble over it again, and the process can complete correctly.

Sixth item: the CPU scheduling ("timetable") information. This holds the process's priority —
it's the place where my priority gets stored.

Seventh item: the memory-management information. This holds the location of everything in memory — in short,
it knows where every piece of the process lives in memory and keeps its address.

Eighth item: the accounting information. Here we store the time and the effort the processor spent to carry out
or execute this process. Hmm — in other words, if I have something that got executed, surely some computations
happened for it, and it cost the processor effort and time, right?? Right. Very good — all of that gets stored
in this item, the accounting information.

Ninth item: the input/output status information. This holds a list of everything I will use
from the devices during this particular process. For example, in a file-transfer operation I might use the keyboard
to do copy or paste — so stored here are the names of the input or output devices that you
will use in this operation.

Last item — this one isn't in the slides, but here it is for old times' sake :D

At the end of every process control block we've been going on about since morning, there is a pointer
that points to the process control block that will be executed next. I hope you got that... hmm...
anyone saying they don't understand? Look: I have something called a process control block in which the process and everything
belonging to it gets stored, right?? Surely, then, each process has its own process control block stored for it. So this pointer points
at the one that will be executed next, and so on.

Just a side note: the process control block stores everything mentioned above, except the first item, which may be
stored somewhere else — in that case it is not kept inside the process control block.

Now we'll talk in detail about something we mentioned above. Remember?? Aha, you remember — the second part, the second item,
the second stage in the process control block, the one we promised to discuss in detail: the "process state"...

As soon as the process gets an execution ("it runs, or gets called up so it works"), its
state changes. And here we have five kinds of states:

First state: New. This means the process has just been made fresh — someone has just now
created it, right this moment.

Second state: Running. This means the process is executing right now.

Third state: Waiting. This means the process is waiting for some particular event to happen so it can continue its work —
like waiting for input to come in or output to go out; whatever it is waiting for happens, and it carries on right away.

Fourth state: Ready ("prepared" — the light has turned green, so to speak). This means the process is ready, or waiting, to become
running. In short, Ready happens the moment it's like when you're standing in a queue to vote and the man
calls your name; you say "yes, brother??" and he says: no no, just stay standing, you're next

Fifth and last state: Terminated ("closing down"). In short it means: boss, I'm done and I want to leave =))

Now this diagram tells you which states can open onto which — can Ready become New?? Or the other way round??
OK, and what stages does the process pass through?? New first, then what, then what — we'll learn that from the diagram. Whoever doesn't
understand something from the diagram should go back and review the five kinds above again, line by line, because the exam wants
understanding plus a little memorization so you can express yourself with the right terms — and it's easy, God willing.

After that we'll get into a bit — like they say, we'll do a small experiment to illustrate together how processes work
with each other and how execution moves from one process control block to another.

In this drawing we have, on the left, process zero and, on the right, process one, and both machines are idle at first.
The one on the left issued an execute for some process (a call-up, that is); of course no processor was busy yet,
so it went and saved its state and remained waiting a while... until the one on the right got its turn to work: it
reloaded the state of the process again, and the one on the right ran and kept running, and after it finished it went and
saved the state of the process again, and both sat idle doing nothing for this process — we don't

need it right now... The moment the one on the left needed it again to execute some instructions, it went and did a
reload agaaain of its state, and so on.

So why am I giving you a headache with all this talk?? And why do I need more than one operation or process?? !!

First, because I have things called single-application cases, and this means I have things that must
happen at the same time — like, say, the spell checker ("word corrector"), which works at the very moment I am typing,
so I surely need more than one operation or process.

Second, I have multi-application cases, and these are several operations running in the background — like someone doing a download
and leaving it, or, say, Windows itself running while I'm typing.

Third, the multi-user case, which means, for example, that I have more than one user working — like a computer department; say
when you enter a lab with more than one machine, you may find them all logged into one machine, all seeing the same
machine, and each one working in his own particular spot, and so on.

OK, so what are the situations in which execution can move from one process to another?? !! Let's list them together now.

First thing: the clock. This means the process has used up the time it is allowed.

Second thing: input/output. This means it received the input it was waiting for, or, say, produced the output required
of it, and so on.

Third and last thing: an error happened in it — then surely it won't continue, and the one after it gets its turn.

We also have something called process scheduling ("timetable") queues.

I have three kinds of queue by which I can classify the processes:

First thing: the job queue, which holds all the processes present in the system.

Second thing: the ready queue, which holds all the processes residing in main memory and waiting to get an
execute (a call-up).

Third thing: the device queue, which carries the processes waiting for an input or output device.

And here is a drawing of the structure of the queues:

And this one illustrates how the CPU deals with the queues and all of that:

Now we'll get into something called the schedulers ("the timetablers"):

First kind: long-term. This is the one that decides which process enters the ready queue; usually its timing runs from
seconds to minutes, which of course is a long time as far as the machine is concerned :D

Second kind: short-term. This decides which process will get an execute (a call-up); its timing
is in milliseconds, which of course is a very short time

The next point after this is that

processes can be described in one of two ways:

The first way: the input/output-bound process, which means the process spends most of its time on input/output,
more than on computations.

The second way: the CPU-bound process, which means the process spends most of its time on computations — in short,
this is the opposite of the first.

Operations on processes :D:D:D:D:D:D

Hehehe — look, that's really what it's called

The important part:

The first one contains: process creation — meaning, how do I make a new process.

And under it there are four situations in which I can create a new process:

1- System initialization: this tells you that while the system is still booting up, there are surely processes being created at that time.

2- Execution (call-up) of a process: this is when I go and execute (call up) a process while I'm already
working inside another process — then surely the new process I just called up gets created fresh.

3- User request: this means the user is the one who asks, directly — he tells you: I want to make a new
process.

4- Initiation of a batch job: just memorize this one :$ and whoever understands it, send word.

Now look at the next picture — it tells you that a single process, once it is a parent, can have children
(a "child") :D

So when you go and call up another process from the one beneath it, you end up with this shape — the shape of a tree.

Come, let's chat a bit about the properties between each parent and its child ("forgive me, Lord")

First thing: in terms of resource "sharing":

The parent and the child may share all the resources together; or the child may share some of its parent's things,
taking a few nice bits off him for old times' sake :D

Or the two may have nothing to do with each other at all — in short, as if you were kicked out of the house.

Second thing: in terms of execution:

The parent and its child may get called up at the same time, together ("state security")

Or the parent may wait until its child finishes ("a bit of sacrifice")

Third thing: in terms of the address space:

Here the child may decide to take its parent's address ("move in with him" :D)

Or it may well decide to be different from him.

OK, let's take an example. Of course you remember the assignment we all copied from the net =)) — the minor one where the whole batch
submitted the same code?? There — that's exactly it, as shown in the next picture:

Of course all of this we're saying falls under operations on processes, don't forget :D We covered the first kind, which contained
all that talk: creating a new process.

The second kind now: process termination.

Let's list the reasons that can cause a termination ("closure") of any process:

Normal — in short, it exited naturally. Or error — this is when an error happens and it exits, thank you. Or fatal error —
this means an error, but a deadly one, that makes the process crash. Or another process comes along whose job is to
finish off the process that is running.

Let's dig a bit into termination ("the closure"):

First, when the process executes the last thing in it, it sends to the operating system and asks to be finished off. And of course this happens in two ways:

1- When the data wants to go out, it goes out from the child to the parent, then to the grandparent, and so on — meaning every parent must wait for its child to finish
first.

2- The process's resources get removed (deallocated) by the operating system.

The parent may also decide, all on its own, to close the child beneath it and terminate it — and this is called an abort.

I can also classify processes into two types:

First thing, the independent process: this means a standalone process that has nothing to do with anything — it neither affects nor is affected;
a completely solid character

Second thing, the process that can affect or be affected — meaning, the opposite of the first.

Advantages of process cooperation:

In it I can share information; I can also speed up the operation, finishing it quickly; it's also easier on
the system to split the operation, or split the tasks, into parts and finish each part on its own; and it also lets the user
work on many tasks at the same time.

There are two terms we'll cover; the rest doesn't need the full thirty, as Bassem Youssef said.

Shared memory: this is a common place between the two processes through which they can share with each other. For example, say I have
process one and process two: process one produced certain data, and process two needs to read that data,
so the data goes into the shared memory so that both of them can see it.

You have two cases in this type:

1- The unbounded buffer: this allows the user unlimited operations, because its size is unlimited.

2- The second case, the bounded buffer: this sets me a specific amount of space — only as much as the first process
produces and as much as the second process consumes, and that's it.

The second term: message passing. This is the second way in which I can do sharing between processes.

Here messages get exchanged between the processes by communicating with each other, without any shared place in which they
share data together; the size of these messages is fixed, and, also, to send them these messages pass through
the "kernel". The transfer happens through two kinds of messages:

1- Send, and 2- Receive — deliver and collect, basically. And for there to be communication between any two processes, you must have two things:

1- First thing, there must be a means of communication between them. 2- There must be an exchange of messages between them.

And here is a drawing that illustrates the difference between the two types we just described.

The rest of the chapter has three slides that illustrate direct and indirect communication between processes;
it's very easy from the slides.

Chapter Three OS

Process Concept
Process is a program in execution

An instance of a program running on a computer

A program is a (static) set of instructions which is used as the basis of the process

A process is a (dynamic) instance of a program in execution .

A program becomes a process when it is executed and loaded into memory and becomes an
active entity

1- Multiprogramming of four programs: there seems to be 1 program counter; when a


process finishes its job, the program counter switches
to the other program
2- Conceptual model of 4 independent, sequential processes: there seem to be 4
program counters — I think it means that the programs run in parallel
3- Only 1 program is active at once: every process finishes a part
of its job and then turns control over to the other process, till all processes finish

Main goals of process management


Provide reasonable response time
Allocate resources to processes
Support inter-process communication and user creation of processes
And to achieve these goals

1- Respond to user requests by user programs


2- Construct tables for each entity managed by operating system
3- Schedule and dispatch processes for execution by the processor
4- Implement a safe and fair policy for resource allocation to processes

The process image in memory is called the process's address space, which has:

1- Text section "the application's code"


2- Data section "application's predefined data structures"
3- Heap "memory allocation" — an area from which space can be dynamically allocated
4- Stack "where registers and temporary values can be saved"

Process information
Some information is associated with the process, like (identifier, state, priority,
program counter, memory pointers, I/O status information)

Process Control Block


All the data about a process is kept in the PCB "the process information"
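The PCB fields listed in the notes can be sketched as a simple structure. This is an illustrative model only, with made-up field names — not any real kernel's PCB layout:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Process Control Block; field names follow the
# notes above, not any real operating system's struct.
@dataclass
class PCB:
    pid: int                      # unique identifier, never repeated
    state: str = "new"            # new / running / waiting / ready / terminated
    priority: int = 0             # scheduling priority
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    memory_info: dict = field(default_factory=dict)  # memory-management info
    accounting: dict = field(default_factory=dict)   # CPU time used, etc.
    io_status: list = field(default_factory=list)    # devices allocated

pcb = PCB(pid=1, priority=5)
print(pcb.state)  # a freshly created process starts in the "new" state
```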

Process state
States of the process

New :- the process is being created


Running :- instructions are being executed
Waiting :- the process is waiting for some event to occur
Ready :- the process is waiting to be assigned to a processor
Terminated :- the process has finished its execution
Look at page 11 for more details
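The five states and the transitions between them (following the usual five-state diagram: admit, dispatch, timeout, event wait, event occurs, exit) can be sketched as a small table:

```python
# Sketch of the five-state process model; the transition table follows
# the standard textbook state diagram.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted
    "ready":      {"running"},                         # scheduler dispatch
    "running":    {"ready", "waiting", "terminated"},  # timeout / I/O wait / exit
    "waiting":    {"ready"},                           # I/O or event completes
    "terminated": set(),                               # no way back
}

def can_move(src, dst):
    return dst in TRANSITIONS[src]

print(can_move("ready", "running"))  # True — dispatch
print(can_move("ready", "new"))      # False — you cannot go back to new
```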

Need for more than one process


1- A single application needs things to happen concurrently
2- Multiple applications need processes running in the background
3- Multiple users — the department computer example
Context Switch
When CPU switches from process to another it must save the state of the old process
And load the new state for the new process
The system does no useful work while switching
The system switches when a process has executed a full time slice
Or is waiting for I/O
Or when an error occurred
Change of Process State
Update the process control block with the new state and any accounting information
Select another process for execution
Update the process control block of the selected process
Update memory-management data structures
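The save/load steps above can be sketched over toy dictionaries (every name here is hypothetical — a real switch happens in kernel code, not Python):

```python
# Toy context switch: save the outgoing process's context into its PCB,
# mark it ready, then load the incoming process's saved context.
def context_switch(old, new, cpu):
    old["registers"] = dict(cpu)   # save state of the old process
    old["state"] = "ready"
    new["state"] = "running"
    cpu.clear()
    cpu.update(new["registers"])   # load the saved state of the new process

cpu = {"pc": 100}
p0 = {"registers": {}, "state": "running"}
p1 = {"registers": {"pc": 42}, "state": "ready"}
context_switch(p0, p1, cpu)
print(cpu["pc"])    # 42 — p1 resumes where it left off
print(p0["state"])  # ready
```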
Process Scheduling Queues
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, waiting to execute
Device queues – set of processes waiting for an I/O
Schedulers
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
Short-term scheduler (or CPU scheduler) selects which process should be executed
next
Short-term scheduler is invoked very frequently (fast, milliseconds)
Long-term scheduler is invoked very infrequently (slow , seconds)
Long-term scheduler controls the degree of multiprogramming

Processes can be described as

I/O-bound process – spends more time doing I/O than computations; many short CPU
bursts
CPU-bound process – spends more time doing computations; few very long CPU
bursts
Operation on process
1- Process Creation
System initialization
Execution of a process creation system call by a running process
A user request to create a new process
Initiation of a batch job
Parent processes create child processes, which in turn create their own child processes,
forming a tree of processes
Resource sharing
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution
Parent and children execute concurrently
Parent waits until children terminate
Address space
Child duplicate of parent
Child has a program loaded into it
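On POSIX systems, the "child duplicate of parent" plus "parent waits until children terminate" combination is what fork() and waitpid() provide. A minimal Unix-only sketch:

```python
import os

# fork() duplicates the parent's address space; the child sees return
# value 0, the parent sees the child's pid and waits for it (Unix-only).
pid = os.fork()
if pid == 0:
    os._exit(7)                   # child terminates with status 7
else:
    _, status = os.waitpid(pid, 0)
    child_code = os.WEXITSTATUS(status)
    print(child_code)             # the parent observes the child's exit status
```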
2- Process Termination
Normal exit (voluntary).
Error exit (voluntary).
Fatal error (involuntary).
Killed by another process (involuntary).
When a process finishes its work it asks the OS to delete it (exit)
 Child sends output to parent via wait
 Process resources are deallocated
Parent may terminate a child process (abort) when:
 Child has exceeded its allocated resources
 Task assigned to child is no longer required
 Parent is exiting — some OSes do not allow children to continue their work
Cooperating Processes
Independent process cannot affect or be affected by the execution of another process
Cooperating process can affect or be affected by the execution of another process
Advantages of process cooperation
Information sharing
Computation speed-up
Modularity
Convenience
Shared Memory
Producer-Consumer Problem
A producer process produces information that is consumed by a consumer process
unbounded-buffer: places no practical limit on the size of the buffer
bounded-buffer: assumes that there is a fixed buffer size
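A bounded buffer can be sketched with a fixed-size thread-safe queue: the producer blocks when the buffer is full and the consumer blocks when it is empty. The buffer size of 3 here is an arbitrary choice:

```python
import queue
import threading

# Bounded buffer: queue.Queue with maxsize blocks the producer when the
# buffer is full and blocks the consumer when it is empty.
buf = queue.Queue(maxsize=3)
consumed = []

def producer():
    for item in range(6):
        buf.put(item)             # blocks while the buffer holds 3 items

def consumer():
    for _ in range(6):
        consumed.append(buf.get())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # items come out in FIFO order: [0, 1, 2, 3, 4, 5]
```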
Message system – processes communicate with each other without resorting to shared
variables
IPC facility provides two operations:
Send message – message size fixed or variable
Receive message
If P and Q wish to communicate, they need to establish a communication link and
exchange messages via send/receive
Implementation of communication link
physical (e.g., shared memory, hardware bus)
logical (e.g., logical properties)
Direct Communication
Processes must name each other explicitly:
Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
Indirect Communication
Messages are directed and received from mailboxes
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
Link may be unidirectional or bi-directional
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
Primitives are defined as:
Send (A, message) – send a message to mailbox A
Receive (A, message) – receive a message from mailbox A
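The mailbox operations can be sketched as an in-process dictionary of queues. This is purely illustrative (no real IPC facility is involved), with the primitive names mirroring the notes:

```python
from collections import deque

# Sketch of indirect communication: each mailbox has a unique id, and
# processes can communicate only if they share a mailbox.
mailboxes = {}

def create(mid):
    mailboxes[mid] = deque()

def send(mid, message):
    mailboxes[mid].append(message)

def receive(mid):
    return mailboxes[mid].popleft()   # here: oldest message first

def destroy(mid):
    del mailboxes[mid]

create("A")
send("A", "hello")
send("A", "world")
first, second = receive("A"), receive("A")
print(first, second)  # messages arrive in the order they were sent
destroy("A")
```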
Mailbox sharing
P1, P2, and P3 share the same mailbox
P1 sends and P2, P3 receive
Who gets the message?
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select arbitrarily the receiver. Sender is notified who the
receiver was
Operating System
Chapter 5 - Scheduling

Ostaz Online
http://www.facebook.com/Ostaz.Online
Chapter 5: CPU Scheduling

OK, before we start this chapter, there are a few concepts we have to know first.

Operating system – Main goals:

The most important goal of the OS is to be able to utilize the processor in the optimal way.

And that gets achieved through multiprogramming.

In fact, originally, back in the day, the processor couldn't start working on a new process until it finished the one
it was doing; now multiprogramming takes all the processes that are supposed to run and executes
a bit here and a bit there, and so on...

CPU – I/O Burst Cycle:

The execution of any process consists of two things: the first is work inside the CPU, and the second is waiting for I/O; and
this cycle keeps running like that until the processes finish. And here is a picture that clarifies the loop further.

And here is a picture showing the histogram of CPU-burst times:

The basic idea of scheduling is that the system decides who will run, when it will run, and for how long
it will keep running.

CPU Scheduler

This is the one that picks, from the ready queue, the process that the CPU is supposed to work on.

And this happens in the following cases:

1. Switches from running to waiting state.


2. Switches from running to ready state.
3. Switches from waiting to ready state.
4. Terminates.

If the scheduling happens only in case 1 or 4, then it is non-preemptive; and of course in
the remaining cases it is preemptive.

Preemptive

Here the process doesn't get to monopolize the processor — meaning the processor may throw it out and let another process in.

Non-Preemptive

And this is the one where the process monopolizes the processor until it finishes completely, or its path gets interrupted by I/O.

Dispatcher

The mechanism that gives the CPU control of the process chosen by the short-term scheduler; this involves:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that
program.

Dispatch latency

The time the dispatcher takes to stop one process and start running another.

Aim of Scheduling

 CPU utilization

Making use of all CPU time in executing processes — meaning, keep the CPU as busy as possible so that it is
utilized in the optimal way.
 Throughput

Measures the rate of processes that can be completed in a given time; for example, I know I can execute at a rate of 4 processes in 3 seconds.
 Turnaround time (TAT)
The time needed to execute some process (the time elapsed from the beginning of the process's execution to its end).
The process may be a single CPU burst or may be part of a thread. Example: the process started at second three
and ended at second seven — the time needed to execute this process is 4 seconds.
 Waiting time
The time the process spends waiting inside the ready queue before entering the CPU.
 Response time
The time the program needs to actually start.

After that come slides 9 and 10 — look through them; there's nothing unclear there... it's all a continuation of what came before.

Now let's take the first type of scheduling, which is First-Come First-Served.

Let's take a quick example of this... I get processes in this order, and their
burst times are as shown before us.

Now, when I go to admit these processes, the one that came first enters first, and then the one that came after it, and so on... so
the chart comes out looking like this.

And when I compute the time each process waited before it got in:

 P1 = 0
 P2 = 24
 P3 = 27

The average waiting time is their sum over their count, (0 + 24 + 27) / 3, which comes out
to 17.

Now, in this example I have a LOT of waiting... so how do we solve this problem? :S
I could draw the chart this way instead:

And when I compute the time each process waited before it got in:

 P1 = 6
 P2 = 0
 P3 = 3

The average waiting time is their sum over their count, (0 + 6 + 3) / 3, which comes out
to 3. And of course the difference between this and the previous arrangement is big

And here is another example explaining the same thing.

Whoever didn't get this algorithm can go to this link, and he'll understand it, God willing:
http://www.slideshare.net/S.AL.Ballaa/fcfs

The second type now is Shortest-Job-First.

Here every process arrives along with the time it needs, and I pick the process that needs the least time.

And this type has two flavors.

The first flavor is Non-Preemptive, which means it won't let go of the process until it finishes it, and only then
picks up the one after it.

The second flavor is Preemptive, which means it may be working on a process right now, but if another
process arrives whose time is less while it is running, it drops what it's doing, goes and executes the one whose time is less, and then
comes back to the one it was working on.
Of course you got lost :D .. just go to this link too; it has an example of each flavor explained very simply:

http://www.slideshare.net/guest49057a/sjf

Of course I now have a priority for each process, and there will be a problem with this type here: any process
whose priority is low will get executed very late — or maybe never... but there's a fix that can solve this problem: the longer
a process's waiting time gets, the more I can raise its priority.

OK, now let's take the third type, which is Round Robin.

This type sets a specific time for any process to run in, and it is not allowed to
exceed that time; but if it finishes before it, well and good

Like this example: each process was given a time to execute in, with a maximum of 10.

And its chart will look like this:

This policy resembles FCFS but differs from it in that there is a fixed time limit for executing the process inside the CPU.

There are two self-explanatory drawings in the slides after this; they don't need anything.

After that we have something called the Multilevel Queue.

Chapter Five OS

CPU Scheduling
Operating system Main Goals: interleave the execution of processes to
maximize the processor utilization
Maximize CPU Utilization obtained with multiprogramming
Process execution consists of a cycle of CPU execution and I/O wait
The main idea of scheduling, the system decides:
Who will run
When will it run
For how long
CPU burst :- as burst frequency increases, burst duration decreases, and vice versa
CPU Scheduler
Selects from memory a process and allocates the CPU to it
CPU scheduling decisions may take place when a process:
1-Switches from running to waiting state // waiting for I/O — non-preemptive
2-Switches from running to ready state
3-Switches from waiting to ready
4-Terminates // non-preemptive
Scheduling under 1 and 4 is non-preemptive
Decision Mode
Preemptive:-Currently running process may be interrupted and moved to the Ready
state by the operating system
Allows for better service since any one process cannot monopolize the processor for
very long
Non-preemptive :-Once a process is in the running state, it will continue until it
terminates or blocks itself for I/O
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short term
scheduler; this involves
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatcher latency
time it takes for the dispatcher to stop one process and start another running
Scheduling goals
CPU utilization :- keep the CPU as busy as possible
Response Time :- amount of time it takes from when a request was submitted until
the first response is produced
Throughput :-Number of processes that complete their execution per time unit
Turnaround time :- amount of time to execute a particular process (the total time the
process spends waiting and executing)
Waiting time:-amount of time a process has been waiting in the ready queue
Quality criteria for scheduling (Algorithm)
Fairness:- each process gets a “fair share” of the CPU
Efficiency: keep the CPU busy
Response time: minimize for interactive users
Turnaround: for batch users (total time of batch)
Throughput: maximal number of processed jobs per unit time
Waiting time: minimize the average over processes
Optimization Criteria
Max CPU Utilization
Max throughput
Min Turnaround Time
Min Waiting Time
Min Response Time
First Come First Served (FCFS) Scheduling
If we have 3 processes P1, P2, P3 which all arrive at time 0, with P1 = 24, P2 = 3, P3 = 3
And P1 started first, then P1 wait time = 0, P2 = 24, P3 = 27
Average waiting time = (0 + 24 + 27) / 3 = 17
FCFS means whatever arrived first does its job and does not leave the processor until finished
The same example, but with P2 first, then P3, and P1 last: P2 waiting = 0, P3 = 3, P1 = 6
Average waiting time = (0 + 3 + 6) / 3 = 3
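The two averages above are easy to check: under FCFS (all arrivals at time 0), each process waits for the sum of the bursts scheduled before it. A small sketch:

```python
# FCFS waiting times: each process waits for the bursts of everyone
# scheduled before it (assuming all processes arrive at time 0).
def fcfs_waits(bursts):
    waits, t = [], 0
    for b in bursts:
        waits.append(t)   # this process waits for everything before it
        t += b
    return waits

def avg(xs):
    return sum(xs) / len(xs)

print(avg(fcfs_waits([24, 3, 3])))  # 17.0 — order P1, P2, P3
print(avg(fcfs_waits([3, 3, 24])))  # 3.0  — order P2, P3, P1
```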
Arrival time Ta = time the process becomes ready ("time when the process enters
the memory")
Service Time Ts = time spent executing in CPU "Burst Time"
Turnaround Time Tr = total time spent waiting and executing
Shortest job First (SJR) Scheduling
Non-preemptive :- once the process enters the CPU it cannot leave until it finishes its burst
time ("finishes its job")
Preemptive :- if a new process arrives with a burst time less than the remaining time
of the current process, preempt. This schema is known as Shortest-Remaining-Time-
First (SRTF)
SJF is optimal :- gives the minimum average waiting time
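With all arrivals at time 0, non-preemptive SJF is just FCFS over the sorted bursts — which is exactly why it minimizes the average wait. A sketch using the same three bursts as above:

```python
# Non-preemptive SJF with all processes available at time 0:
# run the shortest burst first; a process's wait is the sum of the
# bursts that ran before it.
def sjf_waits(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, t = [0] * len(bursts), 0
    for i in order:
        waits[i] = t
        t += bursts[i]
    return waits

w = sjf_waits([24, 3, 3])  # P1=24, P2=3, P3=3
print(w)                   # [6, 0, 3]
print(sum(w) / len(w))     # 3.0 — the minimum average waiting time
```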
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
smallest integer = highest priority
SJF is a priority scheduling where priority is the predicted Next CPU burst time
Problem :- Starvation "Low priority processes may never execute"
Solution :- Aging — as time progresses, increase the priority of the process
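Aging can be sketched as follows: the priority number shrinks (i.e. the priority improves, since smallest = highest) as the waiting time grows. The step of 10 time units here is an arbitrary assumption:

```python
# Aging sketch: smallest number = highest priority, so we lower the
# number as waiting time grows. The step of 10 time units is arbitrary.
def effective_priority(base, waited, step=10):
    return max(0, base - waited // step)

print(effective_priority(5, 0))    # 5 — has not waited yet
print(effective_priority(5, 30))   # 2 — promoted after waiting a while
print(effective_priority(5, 99))   # 0 — cannot go above the top priority
```

This is how starvation is avoided: a low-priority process that waits long enough eventually reaches the highest priority and gets the CPU.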
Round Robin (RR)
Each process gets a small unit of CPU time time quantum usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the
ready queue
If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n - 1) × q time units
When q is small there will be more switching and the overhead is too high
Example of RR with quantum 20 P1 = 53 P2 = 17 P3=68 P4=24
P1 "20"-P2"37"-P3"57"-P4"77"-P1"97"-P3"117"-P4"121"-P1"134"-P3"154"-P3"162"
Typically, higher average turnaround than SJF, but better Response
When the quantum is very low, context switch time becomes very significant — look at page 20
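The quantum-20 trace above can be reproduced with a small round-robin simulator (newly requeued processes go to the back of the ready queue):

```python
from collections import deque

# Round-robin simulator: run each process for at most `quantum` units,
# requeue it at the back if work remains; returns completion times.
def round_robin(bursts, quantum):
    q = deque(bursts.items())      # (name, remaining burst) pairs, FIFO
    t, finish = 0, {}
    while q:
        name, rem = q.popleft()
        run = min(quantum, rem)
        t += run
        if rem - run > 0:
            q.append((name, rem - run))   # back of the ready queue
        else:
            finish[name] = t
    return finish

done = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, 20)
print(done)  # completion order matches the chart: P2, P4, P1, P3
```

The completion times agree with the trace: P2 at 37, P4 at 121, P1 at 134, P3 at 162.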
Multilevel Queue
Ready queue is partitioned into separate queues:-
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm:-
foreground – RR
background – FCFS
Scheduling must be done between the queues :-
Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.
Time slice each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e.,
80% to foreground in RR
20% to background in FCFS
Look at page 23 for the priority of the processes
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter
when that process needs service
Example of Multilevel Feedback Queue
Three queues:
Q0– RR with time quantum 8 milliseconds
Q1– RR time quantum 16 milliseconds
Q2–FCFS
Scheduling
A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8
milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. At Q1 it is
again served FCFS and receives 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2
Multiple Processor Scheduling
Homogeneous processors within a multiprocessor
Load sharing
Asymmetric multiprocessing :- only one processor accesses the system data
structures, alleviating the need for data sharing
Real Time Scheduling
Hard real-time systems – required to complete a critical task within a guaranteed
amount of time
Soft real-time computing – requires that critical processes receive priority over less
fortunate ones
Thread Scheduling
Local Scheduling – How the threads library decides which thread to put onto an
available LWP
Global Scheduling – How the kernel decides which kernel thread to run next

Example:
The following 4 processes arrive in the ready queue at the times shown and have
service (CPU burst) times also as shown:
P1: AT = 0, ST = 8    P2: AT = 2, ST = 2    P3: AT = 3, ST = 5    P4: AT = 5, ST = 1
For each of the following scheduling methods (RR with quantum q = 2, and SJF) give:
(i) a timing chart to illustrate the execution sequence
(ii) the average job turnaround time
(iii) the waiting time for each job
Assume the overhead of context switching is one time unit.
Turnaround time: the total time the process spends in the system (completion time minus arrival time)
Context-switch overhead: how much time it takes to switch from one process to another
P.S. In RR, a newcomer enters at the end of the ready queue.
P1 is running when P2 arrives and enters the ready queue. P3 arrives while P1 has not yet
fully left the CPU (it arrives during the context-switch time), so P3 queues after P2; once
P1 leaves and P2 enters the processor, P1 is placed after P3.
OSTAZ ONLINE

ostazonline

2012

Operating System 1
In the name of Allah, the Most Gracious, the Most Merciful

The Messenger of Allah (peace be upon him) said:
"A man is upon the religion of his close friend, so let each of you look at whom he befriends."

Notes:
This explanation is a personal effort and is not official.
The publisher is not legally responsible for this explanation.
It is no substitute for studying from the slides or the book.
Don't forget to pray for whoever made this work.
If you study something and don't understand it, don't hesitate to ask.
If you want to ask, ask on the Ostaz Online page in the Discussion Board.

Try to read some Quran before studying, so that Allah opens the way for you.
Try to help with anything, even something small.
Don't forget that Allah helps the servant as long as the servant helps his brother.

If you face any problem, don't hesitate to ask here:

https://www.facebook.com/Ostaz.Online?sk=wall

May Allah accept from us and from you.


Ch. 6 Process Synchronization

Peace be upon you.

First things first: the introduction.

Remember from Chapter 3 the cooperating processes: the processes that depend on each
other. Like what? For example, the paste operation depends on reading the things that
are inside the buffer that I filled with the copy operation. So what was the exact
definition again?

Cooperating processes are those processes that can affect or be affected
by other processes executing in the system.

Everything we say in this chapter will be about processes that exchange information
through shared memory; if you don't know what that means, go back to Chapter 3. 

Now let's start with a few definitions. They may not seem important, but you must
understand them to understand the chapter.

Data inconsistency: happens when there are different and conflicting
versions of the same data in two places.

Like what? Like the example where one person withdraws money at the bank while another
withdraws the same money through the ATM: the balance should decrease, but the machine
has not yet seen that the balance decreased. Or, say, I come to read a certain variable
and I read it before the value I was supposed to read gets written to it. So there are
several changing copies of my data, while I am supposed to read one specific copy or one
specific update, which either has not been written yet or has been overwritten by
another update.

Deadlock: a situation in which two or more processes sharing the
same resources are effectively preventing each other from accessing the
resources, leading both programs to cease functioning.

That is, the first program holds resource (A) and the second program holds resource (B);
the first program will not release its resource until it gets (B), and the second program
will not release its resource until it gets (A). So neither of them will execute its
function, and both will stall.

As you can see (in the traffic-jam picture), no car can move unless it uses the
resource, which is the road, so all of them stopped working. The solution to this is
that there must be something that organizes the cars' movement, telling this one "go
now" and that one "stop now", and so on; and that is what process synchronization
(organization) is.

OK, the introduction is over; now let's get to work. 

So where does the concurrency problem come from?

1- Sharing global resources

Sharing resources in more than one place, like the bank and the ATM example; this causes
data inconsistency.

2- Bad management of resources

Poor management of resources: nobody directs the processes as to when this one reads,
when that one writes, and when each one runs; this leads to deadlocks.

(I think now you see the value of the definitions above, which you won't find anywhere
else. )
An example of concurrency: the producer-consumer problem.

The producer's job is to produce items for the consumer, and the consumer's job is to
consume those items.

The producer writes the item it will give the consumer into a buffer, and the consumer
reads it from there.

This buffer can be fixed size or variable size.

Now for the lovely code:

in: a variable that tells me the next free slot where I can put something the producer
makes.

out: a variable that tells me the next slot that the consumer should consume from.

count: expresses the number of full slots in our buffer (the buffer here is fixed size).

Explanation of the Consumer idea:

The idea is that if the buffer is empty I won't consume anything; as soon as the
producer fills it I will consume that item, change the position I will consume from
next time, and then decrement count, because now a slot that was full has become empty.

Explanation of the Producer idea:

The idea is that if the buffer is full I won't produce anything, because there is no
place to put it; as soon as the consumer frees a slot I will produce the item, change
the position I will produce into next time, and then increment count, because now a slot
that was empty has become full.

The consumer code:

Of course I tell it to run forever and consume.  But if count is zero, that means
there are no full slots, i.e., there is nothing to consume, so I enter an empty loop
that does nothing. (I hear someone asking: when do we leave this loop? It will never
exit! ) Good question. The idea, as we said, is that there is concurrency here: the
producer and the consumer are running together, so when the producer produces something,
count increases, it is no longer zero, and I exit the empty loop. That's it; after that
I take the item I will consume, keep it on my side, and advance the position I will
consume from next time.

The producer code:

Of course I tell it to run forever and produce.  But if count equals the buffer size,
that means there are no empty slots, i.e., it's no use producing anything, so I enter an
empty loop that does nothing. And again, when do we leave? As we said, there is
concurrency here: the producer and the consumer are running together, so when the
consumer pulls an item out, count decreases and I exit the empty loop. That's it; after
that I put the item I produce into the buffer and advance the position I will produce
into next time.
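The in/out/count bookkeeping above can be sketched as code. This is a single-threaded sketch, assuming a 4-slot buffer: instead of real threads busy-waiting, the full/empty checks just return a failure value, and the driver loop hands control to the other side, so nothing spins forever.

```python
# in, out and count as described in the text; BUFFER_SIZE is an assumed value.
BUFFER_SIZE = 4
buffer = [None] * BUFFER_SIZE
in_ptr = out_ptr = count = 0

def producer(item):
    """Produce one item if there is room; returns True on success."""
    global in_ptr, count
    if count == BUFFER_SIZE:        # buffer full: a real producer would spin here
        return False
    buffer[in_ptr] = item
    in_ptr = (in_ptr + 1) % BUFFER_SIZE
    count += 1                      # one more full slot
    return True

def consumer():
    """Consume one item if available; returns it, or None if the buffer is empty."""
    global out_ptr, count
    if count == 0:                  # buffer empty: a real consumer would spin here
        return None
    item = buffer[out_ptr]
    out_ptr = (out_ptr + 1) % BUFFER_SIZE
    count -= 1                      # one more empty slot
    return item

for i in range(6):                  # push 6 items through a 4-slot buffer
    while not producer(i):
        consumer()                  # buffer full: let the consumer drain a slot
consumed = []
while (x := consumer()) is not None:
    consumed.append(x)
print(consumed)
```

The wrap-around of in_ptr and out_ptr with `%` is exactly what makes the buffer circular.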

OK, this all sounds perfectly fine, so what is the concurrency problem? Nooo, it's
actually a very big problem. If we notice, for any instruction to execute in assembly
language, I must first read the variable from memory and put it in a register, then do
my work on it, then go back and write it to memory again. For example:

count++;                       // C++

register1 = count;
register1 = register1 + 1;
count = register1;             // the same statement in assembly

Now suppose that while I'm working, before I write the value of register1 back, I get an
interrupt because something else is going to run, and I still haven't written the new
value!  That way everything falls apart: I might write outside the buffer or read from
outside it, and a run-time error or a logical error happens, and things like that, may
God keep them away from you.  And that is what is called a race condition.
Important definitions:

Race condition: a situation in which two or more processes access and
manipulate the same data concurrently, and the result of execution
depends on the order in which instructions occur and memory access
takes place.

That is, it is the situation where two or more processes access the data at the same
moment, and the result of the two operations depends on the order in which they run.
For example, recalling the producer and consumer code above, suppose I do:

register1 = count;
register1 = register1 + 1;
// interrupt here, and a consumer instruction runs
count = register1;

I should have respected the order of access: the consumer must not come and read count
before it has been written. It should have waited until things executed in the correct
order.
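The lost update can be replayed deterministically. This sketch runs the three "assembly" steps of count++ and count-- by hand, interleaved at the worst possible moment, instead of relying on real threads to hit the window.

```python
count = 5

# producer starts count++
register1 = count            # register1 = 5
register1 = register1 + 1    # register1 = 6
# --- interrupt! the consumer now runs its whole count-- ---
register2 = count            # register2 = 5 (stale: the producer hasn't written back)
register2 = register2 - 1    # register2 = 4
count = register2            # count = 4
# --- back to the producer, which finishes count++ ---
count = register1            # count = 6: the consumer's update is lost!

print(count)                 # 6, although one ++ and one -- should leave it at 5
```

This is precisely why the order of memory accesses matters: swap the interrupt point and the final value comes out right.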

Critical section: a code segment within a process in which memory
access (read/write) takes place; it's critical because it must not be
interrupted while executing.

This is some piece of code inside the process, like count++, at which no interrupt must
happen, because if one happens, any of the problems we mentioned above can occur; that's
why it's called critical. 

Now, there are things that any concurrency-management technique must satisfy, or else it
gets thrown in the trash. 

Requirements that need to be satisfied by any synchronization protocol:

1- Mutual exclusion

2- Progress

3- Bounded waiting

The first one:

If a process is executing in its critical section, then no other process can
execute its critical section.

That is, if a process is running in a critical section and that section is shared, it
must be allowed to finish; we can't interrupt it in order to run something else.
‫تاني واحده‬

if there is no process executing in its critical section, then only processes


that are not executing in their remainder section can contribute to
making decision of the next process to execute, in other words,
processes that are not running in their critical section must not delay the
progress of other processes

‫ بس اللي عايزه‬processes‫ بتاعها يبقي ال‬CS‫ بتشغل ال‬process ‫يعني لو دلوقت ما فيش اي‬
‫ بتاعه‬CS‫ بتعها هيا بس اللي بتخش في عمليه االختيار مين اللي عليه الدور يشغل ال‬CS ‫تشغل‬
‫ التانيه‬processes‫ تعيق تقدم ال‬process ‫فالبتالي مفيش اي‬

‫ حتت‬3 ‫ عامه بيبقي‬process‫ ان كود ال‬ ‫اه نسيت اقولكوا‬

While(true) {

// entry section: requesting permission to enter critical section

// critical section

// remainder section: exit section + rest of code that is independent of


other instructions

The third one:

There must be a bound/limit on the number of times that a process executes its
critical section.

That is, there must be a certain limit on the number of times a process can run its CS,
so that no other process starves and never gets in at all. If one process keeps running
it every little while and passes the limit, say 5 times, we park it for a bit, come back
to it later, and so on.

Greeeat. Now go study this part from the slides or the book, then continue with the
second half of the chapter, because it needs you to be wide awake.

Solutions to synchronization problems

S/W solution: a code segment that should run before entering the critical
section

H/W solution: by using one of the H/W components

S/W solutions:

1- Simple lock variable:

There is a lock variable that has the value zero if the CR (critical region) is
empty, and one when it is not empty; a process must set this variable to one
when it's inside its CS and must reset it to zero when it's going to
exit.

That is, I keep a variable: if it is zero, that means the Critical Region is empty and
any process may run its CS; but while entering, it must set the value of this variable
to one, so that if anyone else comes and finds it at one, it won't go in and work at the
same time; and when the process is on its way out, it resets it to zero so that the
others can work. No monopolies allowed! 

Hahahaha, but listen: this solution has a very big flaw, which is that the lock
variable itself is a critical section! Because suppose the processes arrive at the same
time: both will see it at zero, both will enter, and they'll clobber each other. 
So this solution is impractical, because we can't make only one process read it; they
all read it at the same time.
2- Peterson's solution

(That day was the report deadline and everyone was half asleep; nobody fall asleep now! )

The idea is that it fixes the simple lock variable's problem: if two processes arrive
together at the same time, only one will enter and the other will wait. How?? Let's see.

We'll explain with a tracing example, but first let's define a few things. First, turn
says whose turn it is to enter the CS next time. Second, flag is an array in which every
process has an element, which the process sets to true when it wants to enter the CS.

Keep in mind that the 2 processes will run together.

We have only two cases, and in either case, at the very start, the array is always all
false, because nobody has entered yet.

Note that i = 0 and j = 1.

1- One process arrives before the other:

P0                                    | P1
I came                                |
Flag[0] = true                        |
Turn = 1                              |
While(false && true)                  |
  This loop is very important. It     |
  says: if the other process's flag   |
  is true, that means it is trying    |
  right now to enter its CS; and the  |
  loop also says the turn belongs to  |
  the other process, so if it is its  |
  turn, then when it finishes it will |
  give the turn to me, and I will     |
  exit the empty loop.                |
  Here the other process hasn't       |
  arrived yet, so the condition       |
  fails, and I go do my part freely.  |
Executing critical section            | I came
                                      | Flag[1] = true
                                      | Turn = 0
                                      | While(true && true)
                                      |   This means: the other one is ready
                                      |   and the turn is hers, so I enter an
                                      |   empty loop, and that's what happens.
Still executing critical section      | While(true && true)
                                      |   Still in the empty loop.
Finished                              | While(false && true)
Flag[0] = false                       |   Now I exit the empty loop and it's
                                      |   my turn to work.

Please trace it again yourselves on paper.

2- Both arrive together:

P0                                    | P1
I came                                | I came
Flag[0] = true                        | Flag[1] = true
Turn = 1                              | Turn = 0
  Here only one value ends up written into Turn at the end, not two, depending on which
  executed first or which processor finished after the other; things then continue as if
  one process had arrived before the other, which is perfectly fine. Suppose the value
  that ends up written is 0 (P1 wrote last): yes, both are ready, but the turn has been
  given away, so number 0 is the one that will run now.
While(true && false)                  | While(true && true)
No wait                               | Wait;
Executing critical section            | While(true && true)
                                      | Wait;
Still executing critical section      | While(true && true)
                                      | Wait;
Finished                              | While(false && true)
Flag[0] = false                       | No wait
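The two traces above can be run for real. A sketch of Peterson's algorithm with two Python threads (i = 0, j = 1), using the flag[] and turn variables from the text; CPython's interpreter lock gives the sequential consistency Peterson needs, while real hardware would additionally require memory barriers.

```python
import threading

flag = [False, False]
turn = 0
counter = 0            # shared data the critical section updates
N = 5000               # iterations per thread (illustrative)

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(N):
        flag[me] = True                       # entry section: announce intent
        turn = other                          # politely give the turn away
        while flag[other] and turn == other:  # the famous empty loop
            pass
        counter += 1                          # critical section
        flag[me] = False                      # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)
```

With mutual exclusion holding, every one of the 2 × N increments survives, so the final counter is exactly 2 × N.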

H/W solutions:

1- Synchronization H/W with atomic instructions:

Note: an atomic instruction means a non-interruptible instruction, i.e., one that cannot
be interrupted while it is running. How?! O_O Well, that's the job of computer
engineering: its scientists design hardware to make developers' lives easier, whatever
it takes. 

So, thanks to those scientists, they made us an instruction called TestAndSet.

What does it do?

First, I take the value of the variable passed to me and put it in a temp called rv;
then I change the value of the variable passed to me, setting it to true; and I return
the old value from before I changed the variable. Clear so far?? Don't move on to what
follows until you understand this.

And don't forget that this function is atomic: once I call it, nobody at all can
interrupt it.

OK, so how do I use this important function??

Now I have a variable called lock, whose value is false the first time. Fine. Now, if
this function returns true, it means that the value of lock is true (the function
returns it, after all), so I enter an empty loop and do nothing, because that means
someone is already working.

So when do I exit this loop?? When someone finishes its CS and sets the variable's value
to false. Great, now I will exit the empty loop and work in my CS. I need to set this
variable to true so nobody interrupts me and messes with my work while I'm running; and
that is why this function, before returning the variable's value, sets it to true: if it
was already true, it makes no difference that I set it to true again; and if it was
false and the function was called on it, that means a process is about to enter its CS
now and needs to set it to true so nobody interrupts it. (Let's understand with an
example. )

Suppose I have 2 processes: the first runs on the right and the second on the left, and
here is what each one is doing right now:

DO { : OK                             | Do { : OK
While(TAS(lock)) ; // wait until the  | While(TAS(lock)) ; // call fn.
TAS on the right finishes, because    |
it's atomic (can't be interrupted);   |
I'm still trying to call the          |
function but I can't call it yet      |
I'm still trying to call it, but not  | Tmp = lock : OK // FALSE
now; I must wait                      | Tmp = TRUE : OK
                                      | Return Tmp : OK // FALSE
// Yes  I can call TAS(lock) now     | // Yes  false returned, I won't
because the TAS on the right is       | enter any infinite loop
finished                              |
Tmp = lock : OK // TRUE               | I'm executing in my critical section
Tmp = TRUE : OK                       |
Return Tmp : OK // TRUE               |
// What the... I can't enter CS now   |
Tmp = lock : OK // TRUE               | I'm still executing in my critical
Tmp = TRUE : OK                       | section :P
Return Tmp : OK // TRUE               |
// What the... I still can't enter    |
CS now                                |
Tmp = lock : OK // FALSE              | I have finished my CS now and I'll
Tmp = TRUE : OK                       | set lock to false 
Return Tmp : OK // FALSE              |
// Finally                           |

2- Semaphores

Before we get into the code for this one, we must understand a few things:

A semaphore is an integer variable that can be accessed only through two
atomic methods, wait() and signal().

When a process finishes its critical section, it calls signal().

When a process needs to execute its CS, it must call wait() first.

S: an integer variable, which is the semaphore.

A semaphore has two types:

Binary (mutex): the semaphore has only the values 0, 1.

Counting: no restriction on values.

Implementation (binary):

Let me explain in plain Arabic; I've missed it for a whole page. 

The idea is that I will never work at all unless the value of S is one; so if it is
zero, I enter an empty loop. And the same question repeats: when do I exit? I exit when
some other process comes and increases the value of S, making it one, via signal(). OK,
now I came in and found the value of S to be one; fine, I decrement S so it becomes
zero, so that anyone who comes after me enters the infinite loop. And after that one
eventually exits the infinite loop, it likewise decrements S, because it is going to
use the semaphore, and so on.

Of course I'll bet you anything that you won't understand a word until you do a tracing
like the one we did above for TestAndSet. (Do it yourself; I won't spoon-feed you. )

But I still won't leave you hanging.  The problem with this solution is that you enter
an empty loop, and that is called busy waiting. Its fix: instead of wasting the
processor while all it does is wait, any process that arrives and finds it must wait is
put into a queue of waiting processes, and when the one that is working finishes, it
wakes up one of the processes from that queue. That is what we did in the lab, if you
remember.

You will also need to trace that code by hand to understand it; but if people try and
still don't get it, send us a message on the Ostaz Online page and we'll send you how to
walk through it, God willing.
Chapter Six OS

Synchronization Hardware
Many systems provide hardware support for critical section code
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
Operating systems using this approach are not broadly scalable
It is unwise to give a user process the power to turn off interrupts.
Modern machines provide special atomic hardware
Either test memory word and set value
Or swap contents of two memory words
TestAndSet Instruction
Reads a memory word into a register AND stores a nonzero value into that word, as an
indivisible instruction sequence // indivisible ("unsharable") means no other instruction
can access that word until the whole sequence completes
Shared boolean variable lock, initialized to false.
Semaphore
Synchronization tool that does not require busy waiting
semaphore is a variable that has an integer value
Wait:- operation decrements the semaphore value
Signal:- operation increments semaphore value
If a process is waiting for a signal, it is suspended until that signal is sent
Less complicated
Two standard operations modify S: wait() and signal()
Wait and signal operations are atomic
Signal operation increments semaphore value
Wait operation decrements the semaphore value
Semaphore can only be accessed via two indivisible (atomic) operations // i.e., a wait()
or signal() in progress cannot be interleaved with another operation on the semaphore
Semaphore Usage
Counting semaphore – integer value can range over an unrestricted domain
May be initialized to a nonnegative number
Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement
Also known as mutex locks
Provides mutual exclusion
Semaphore Implementation with no Busy waiting
With each semaphore there is an associated waiting queue .
Each entry in a waiting queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting queue.
wakeup – remove one of processes in the waiting queue and place it in the ready queue.
Implementation of wait:
    wait(S) {
        S->value--;
        if (S->value < 0) {
            // add this process to the waiting queue
            block();
        }
    }
Implementation of signal:
    signal(S) {
        S->value++;
        if (S->value <= 0) {
            // remove a process P from the waiting queue
            wakeup(P);
        }
    }
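The wait/signal pair above can be sketched as a small simulation. A negative value means |value| processes are blocked; block() and wakeup() are modeled by moving process names between queues instead of real scheduling, and the class and return strings are illustrative.

```python
from collections import deque

class Semaphore:
    """Sketch of the no-busy-waiting semaphore: value + a FIFO waiting queue."""
    def __init__(self, value):
        self.value = value
        self.waiting = deque()

    def wait(self, process):
        self.value -= 1
        if self.value < 0:              # no permit free: block the caller
            self.waiting.append(process)
            return "blocked"
        return "running"

    def signal(self):
        self.value += 1
        if self.value <= 0:             # someone is blocked: wake one up
            return self.waiting.popleft()
        return None

s = Semaphore(1)                        # binary semaphore (mutex)
print(s.wait("P0"))   # P0 takes the semaphore
print(s.wait("P1"))   # P1 finds it taken and is placed in the waiting queue
print(s.signal())     # P0 signals, and P1 is woken up
```

Note how value going to −1 is precisely "one process is sitting in the waiting queue".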
Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Sleeping Barber Problem
Dining-Philosophers Problem
Bounded-Buffer Problem
N buffers, each can hold one item
Semaphore Mutex initialized to the value 1
Semaphore full initialized to value 0
Semaphore empty initialized to value N
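The three semaphores listed above fit together as follows. A sketch using Python's `threading.Semaphore`, assuming N = 4 slots and a single producer and consumer; the variable names follow the text.

```python
import threading

N = 4
buf, in_ptr, out_ptr = [None] * N, 0, 0
mutex = threading.Semaphore(1)   # protects the buffer indices
empty = threading.Semaphore(N)   # counts free slots (starts at N)
full = threading.Semaphore(0)    # counts filled slots (starts at 0)
consumed = []

def producer(items):
    global in_ptr
    for item in items:
        empty.acquire()          # wait(empty): need a free slot
        with mutex:              # wait(mutex) ... signal(mutex)
            buf[in_ptr] = item
            in_ptr = (in_ptr + 1) % N
        full.release()           # signal(full): one more filled slot

def consumer(n):
    global out_ptr
    for _ in range(n):
        full.acquire()           # wait(full): need a filled slot
        with mutex:
            consumed.append(buf[out_ptr])
            out_ptr = (out_ptr + 1) % N
        empty.release()          # signal(empty): one more free slot

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
print(consumed)
```

With one producer and one consumer, the items come out in exactly the order they went in: no busy waiting, no lost updates.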
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they don't perform any updates
Writers – can both read and write
Problem – allow multiple readers to read at the same time, but only one single writer can
access the shared data at a time
Shared Data
Semaphore Mutex initialized to the value 1
Semaphore wrt initialized to the value 1
Integer readcount initialized to 0
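The shared data above slot into the classic first readers-writers protocol. A sketch with Python threads, assuming three readers and one writer; mutex protects readcount, and wrt keeps writers out while any reader is inside.

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writer exclusion
readcount = 0
log = []                         # records who touched the data set

def reader(name):
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()        # first reader in locks out writers
    log.append(("read", name))   # reading happens here; readers may overlap
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()        # last reader out lets writers in

def writer(name):
    with wrt:                    # writers need exclusive access
        log.append(("write", name))

threads = [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("W0",)))
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))
```

The readers share wrt through readcount, so any number of them can be inside together, while W0 gets the data set alone.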
Dining-Philosophers Problem
Five philosophers alternately think and eat; each shares a fork with each neighbor
Assume each philosopher picks up the left fork, then the right fork, then eats
Deadlock occurs if all enter at once
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
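The setup above can be sketched with chopstick[5] as semaphores. To dodge the "deadlock if all enter at once" case, this sketch makes the last philosopher pick up the right fork first; that asymmetric ordering is one known fix and is my addition, since the slides only state the problem.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # one fork per neighbor pair
ate = []

def philosopher(i):
    first, second = i, (i + 1) % N        # left fork, then right fork
    if i == N - 1:
        first, second = second, first     # last one reverses: breaks the circular wait
    with chopstick[first]:
        with chopstick[second]:
            ate.append(i)                 # eating with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(ate))
```

Without the reversal, all five could grab their left fork simultaneously and wait forever for the right one, which is exactly the deadlock the slide warns about.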


Ch. 8 Memory Management

Peace be upon you.

First, I'd just like to say that you can't study this chapter from here alone, because
it's full of many details. My recommendation is that you understand it from here, study
it, and when you're done, open the slides or the book, as you like. God be with you.

The first thing we'll talk about is the introduction.

Of course we all remember the lovely Chapter 5 about CPU Scheduling, where we learned
that there are processes sitting inside the ready queue, a few others inside the
blocked queue, and so on.

But has anyone wondered where these queues actually live??? Yes, I hear you saying 
they live in Memory. That means this memory is being shared, so we certainly need
techniques for organizing our memory, and that is what this whole chapter is about.

Memory management objective:

To pack as many processes as possible into main memory; in other words, to
increase the overall number of processes residing in main memory, which is
the multiprogramming degree.
Virtual address X Physical address

This part is very important; it is the basis of everything in computers today. When a
program opens, it supposedly needs a certain amount of space in memory; say it needs
100 addresses, i.e., 100 locations. Will memory be enough if every program comes along
and uses 100 locations of it? And if we're talking about something like a DBMS, with all
its capabilities, details, and bells and whistles, it would need to load its whole 4 GB
into memory!  From here arose the idea of virtual memory, which we'll talk about in
Chapter 9, God willing: the program is not loaded entirely inside memory; only the
parts that are running are loaded, and we trick the program into believing/seeing in
front of it that everything is loaded.

The set of things the program sees in front of it is called:

Logical address space: the set of all logical addresses generated and seen by the
CPU.

And the things actually present in memory, which the logical addresses are supposed to
point at, are called:

Physical address space: the set of all actual memory addresses
corresponding to the logical address space.

Note, and I stress this again: not everything in the logical space has to be present in
the physical space; it may be sitting on the hard disk, for example, and we bring it
into memory when the program needs it.

So, again: the program virtually sees the whole program loaded in front of it, and when
it loads something, there is someone responsible, as far as the program is concerned,
for fetching that required part.

The program sees virtually that every part is loaded, and there is some H/W
responsible for getting the required part.

OK, so who is responsible for fetching the program anything it wants, and also for
translating the logical address into the physical one? (Translation here means mapping
something from one domain to another, which is the word "mapping".) That is called the:

Memory Management Unit (MMU):

This H/W part is responsible for mapping logical addresses into physical
addresses.

How does mapping happen? The logical address is added to a base/relocation
register, which indicates the start of that program in memory.

So what does all that gibberish mean?  Every program has a base register, which says
where the program starts in memory; that register is added to the logical address so
that we know exactly its place in memory, which is the physical address.

Base register (relocation register): the smallest legal physical memory
address; in other words, it holds the physical address of the start of the
program in MM.
Any memory management technique must satisfy these requirements:

1- Relocation
2- Protection
3- Sharing
4- Physical Organization

The first one:

The program sits on the hard disk in the form of binary files; when the user opens the
program, it moves to memory and becomes a Process, which confirms the Chapter 3
definition that a process is a program in execution. 

Relocation means the ability of the program to be swapped into
memory if needed, or to continue some operation, and to be swapped
out of memory when it's not needed.

The second one:

Certainly, as long as the processes are sharing memory, it means a process could mess
with another process's files; and of course we don't want that, which is why any
memory-organization technique must take these things into account in order to achieve
protection:

Prevent processes from interfering with the OS part.

Prevent processes from interfering with each other (no process can
reference any part of any other process without asking it for
permission).

The third one:

Allow different processes to share the same portion of memory, share
code, etc...

The fourth one: I don't understand any of it and it's not in the book; if anyone
understands it, please post on the Ostaz Online page.
Now we'll talk about a new concept, which is Swapping.

This is the thing we use to accomplish relocation: we bring the process the program
needs into memory, and if there is a process that wants to come in and can't find room,
we take another one out, using certain algorithms we'll discuss in Chapter 9, God
willing.

Swapping is the technique used for accomplishing relocation; a process
can be swapped out to a backing store and then brought back if needed
for continued execution.

Backing store: a fast, non-volatile, large storage that
must be large enough to accommodate swapped-out processes
(it stores any information swapped out from main
memory); it must provide means for accessing these processes
directly; and the backing store is slower and cheaper than main
memory.
So what is the backing store here??? It is the Hard Disk.

A very important note: the processes inside the ready queue, or any queue whatsoever,
are a mix of the things inside the ready queue and the things inside the backing store.
This is what I was talking about: the processes are inside the system virtually but not
needed right now; when their turn comes, we bring them back from the backing store and
put them into memory again.

Greeeat. I suggest you now look at the book or the slides and study this part, and then
move on to the second part of the chapter, which is:
Memory allocation mechanisms

Memory allocation mechanisms:

A- Contiguous
   1- Fixed-sized partitioning
   2- Dynamic-sized partitioning

B- Non-contiguous
   1- Paging
   2- Segmentation
First: contiguous.

The definition??

With this guy, as soon as the program runs and becomes a process, it reserves all the
space it wants in memory, back to back: if a program needs 100 locations, the MMU
reserves it 100 locations, one right after another.

Each process is contained in a single contiguous section/block of
memory.

OK, the translation?? Logical address mapping:

Here, to translate the logical address into the physical one, we use something called
the relocation register scheme.

In this translation method, the logical address is just a number that gets added to the
base register, and the result is the place I want to reach.

‫طيب واحده واحده كده احنا قلنا ان كل برنامج بيبقي ليه ‪ base register‬ودا اللي بيقول‬
‫البرنامج بيبدا منين بالظبط في الذاكره معني كده ان عشان اعمل حمايه انا ما ينفعش اعمل‬
‫‪ access‬لحاجه مش تبعي خالص هنعمل كمان ‪ register‬هنحط فيه البرنامج بيخلص فين في‬
‫الذاكره واسمه ‪limit register‬‬

‫‪Limit register: contains the maximum legal physical address, in other‬‬


‫‪words it holds the physical address of the program ending‬‬

‫طيب يبقي عشان ال‪ logical‬يترجم صح الزم يكون اكبر من او يساوي ال ‪ base‬وكمان‬
‫اصغر من ‪base + limit‬‬

‫ولو طلع ان ال ‪ logical‬حاجه غلط يبقي كده بعمل ‪ interrupt‬لل ‪ OS‬واقوله ان فيه‬
‫‪ addressing error‬يعني البرنامج بيحاول يعمل ‪ access‬علي حاجه مش متخصصه له‬
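The base/limit check above can be sketched in a few lines. The names `check_access` and `AddressingError`, and the numbers in the demo, are illustrative; the exception stands in for the trap/interrupt to the OS.

```python
class AddressingError(Exception):
    pass                       # stands in for the interrupt/trap to the OS

def check_access(address, base, limit):
    """Return the address if it is legal for this program, else trap to the OS.
    Legal means: base <= address < base + limit."""
    if base <= address < base + limit:
        return address
    raise AddressingError(f"address {address} outside [{base}, {base + limit})")

print(check_access(1200, base=1000, limit=500))   # legal: 1000 <= 1200 < 1500
try:
    check_access(1600, base=1000, limit=500)      # 1600 >= 1500: not ours!
except AddressingError as e:
    print("trap to OS:", e)
```

The hardware does this comparison on every memory reference, which is why it must be registers and not OS code.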
OK, so we said the program goes in as one single piece; but will the piece it lands in,
that block, have a fixed size, or does it change, or what?

(Just a definition: a hole is a block of available free memory.)

There are two methods: the block size is fixed, or its size is variable.

Fixed-sized partitioning:
Memory is divided into fixed-sized partitions; the partition size must
accommodate the longest possible program.
That is, memory is divided into equal-sized partitions, and of course a partition must
be big enough to hold the biggest program that could possibly arrive.

Dynamic-sized partitioning:
Memory is divided into variable-sized partitions.
3 methods:

1. First fit
Places the process in the first suitable hole.
That is, I put the process in the first place I find that is big enough to hold it.

2. Best fit
Places the process in the smallest suitable hole.
That is, I put it in the smallest big place. O_O What's that now?!  Easy: in plain
words, we look at all the places that could hold it and put it in the smallest one of
them.

3. Worst fit
Places the process in the biggest suitable hole.
Same as the previous one, except that after seeing which places could hold it, I put it
in the biggest one of them.
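The three placement policies can be sketched side by side. `holes` is an assumed list of (start, size) pairs; each function returns the start address of the chosen hole, or None if nothing fits.

```python
def first_fit(holes, size):
    for start, hole in holes:
        if hole >= size:            # first hole big enough wins
            return start
    return None

def best_fit(holes, size):
    fits = [(hole, start) for start, hole in holes if hole >= size]
    return min(fits)[1] if fits else None    # smallest hole that still fits

def worst_fit(holes, size):
    fits = [(hole, start) for start, hole in holes if hole >= size]
    return max(fits)[1] if fits else None    # biggest hole of all

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
print(first_fit(holes, 150))   # 200: first hole with at least 150 free
print(best_fit(holes, 150))    # 800: the 200-sized hole is the tightest fit
print(worst_fit(holes, 150))   # 200: the 500-sized hole is the biggest
```

Same request, three different answers: that difference is exactly what shapes the leftover holes, and hence the fragmentation discussed next.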

OK, as usual, whenever we learn something, the skeletons come out of the closet and it
turns out to be useless in the end.  What problems exist, in general, in any memory
allocation technique?

Internal fragmentation: the difference between the block size and the process size,
which can't be used.

This always happens when the division uses fixed-sized partitions. Why? We said the
partition size is fixed; now suppose a small process arrives: there it is, occupying the
place, and we can't make use of the leftover.

External fragmentation: the total space of the holes could satisfy processes' requests,
but the space is not contiguous.

This always happens with the variable-sized (dynamic) partitions: the processes come in,
and each one certainly leaves gaps. For example, in best fit there are many gaps of
small size, and in worst fit there are many gaps of large size; the total size of these
gaps would be enough for another process to come and use them, but the gaps are not
adjacent, so in the end they are useless.
The fix for fragmentation is something called merging, i.e., compaction:

Shuffling all holes together to produce one new free hole.

That is, I gather all the gaps together so I can benefit from their space; with one very
important note: I can only do compaction if I'm using dynamic memory allocation.

Heeey, bravo! You've now finished more than half the chapter. Go study it from the book
or the slides, then come back and continue here.

Second technique: Non-contiguous

The definition??

Here the process is not loaded in one piece. No — it is divided into several blocks, and loaded into those blocks; and what's more, the blocks don't have to be consecutive (contiguous).

The process is divided into pages or segments (kinds of blocks); each
page or segment corresponds to a memory frame

Pretty clear: memory is divided into something called frames (in the case of paging) or segments (in the case of segmentation), and the process is divided into segments, or possibly into pages — we'll see the difference between them in a moment. Note that each segment or page still points to a specific frame or segment in our memory.

So we said the process can be divided in one of two ways:

Page: the process is divided into fixed-sized blocks.
Segment: the process is divided into variable-sized blocks.

Careful — don't confuse these with contiguous fixed/dynamic allocation: here, in both cases, the process is loaded into scattered places that don't have to be next to each other.

Once more, everyone: the only difference between contiguous and non-contiguous is that in contiguous allocation the process is loaded in one piece (whether that piece is fixed or variable in size), while in non-contiguous allocation the process is loaded in scattered pieces (whether those pieces are fixed or variable in size).

Okay, and the translation? Logical address mapping:

Paging

Logical address consists of a pair: <page no, offset>

The page number is an index into the process page table, which holds the frame number.

So who'll decode that jargon? :) The whole thing is really simple: every process has a page table, which is basically an array telling you, for each page, which frame number in memory it points to.

For example, a page table [4, 6, ...] means that page 0 points to frame 4, page 1 points to frame 6, and so on for the remaining pages.

The page table indicates the corresponding frame for each page, such that the
frame number can be obtained through accessing pageTable[pageNo]

Right — so what's this offset business? Very simple. We now know which frame to go to; but suppose I don't want to write from the very start of the frame. Say the frame size is 4 bytes and I want to write one byte near the end of the frame — then I just say, for example, "move 3 bytes into frame so-and-so". Here's an example:

Let's focus: say we want to write the character 'c' at this logical address, <0,2>. How is it translated???

Look at the page table: where does page 0 point?? Right — it points to frame 5. So now go to frame 5. Where inside it do I write??? I'm supposed to write after 2 bytes, so skip byte 20 and byte 21 and write at byte 22. :) One more example, so you have no excuses:

We want to write the character 'l' at this logical address: <2,3>

Look at the page table: where does page 2 point?? Right — it points to frame 1. So now go to frame 1. Where inside it do I write??? I'm supposed to write after 3 bytes, so skip byte 4, byte 5, and byte 6, and write at byte 7.
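The two worked examples above can be sketched as a tiny translation routine. This is a minimal sketch: the 4-byte frame size and the mappings page 0 → frame 5 and page 2 → frame 1 are taken from the examples, while the frames for pages 1 and 3 are made-up placeholders.

```python
# Translate a logical <page, offset> address to a physical byte address,
# assuming a 4-byte frame size as in the worked examples above.
FRAME_SIZE = 4

def translate(page_table, page, offset):
    if offset >= FRAME_SIZE:          # the offset must fit inside one frame
        raise ValueError("offset too large")
    frame = page_table[page]          # frame no = pageTable[pageNo]
    return frame * FRAME_SIZE + offset

# Page table from the examples: page 0 -> frame 5, page 2 -> frame 1
# (the entries for pages 1 and 3 are placeholders).
page_table = [5, 6, 1, 2]
print(translate(page_table, 0, 2))   # 'c' lands at byte 22
print(translate(page_table, 2, 3))   # 'l' lands at byte 7
```

The physical address is simply frame number × frame size + offset, which is why skipping "2 bytes into frame 5" lands exactly at byte 22.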

Segmentation

So — remind ourselves, what was this thing?? The process is divided into several variable-sized partitions/segments.

Naturally, then, if I'm going to write something I need to know where to start writing from and also where to stop, so no error happens.

Here the logical address is a pair: <segment no, offset>

And again the segment number is the index of the thing we're about to name :)

Our table here is called the segment table.

Each entry holds two things, not one: first, the physical address where the segment begins; and second, the size of the segment. An example will make this clearer:

Say I want to write everything inside segment 0 into memory (i.e., taking the offset as zero). What do I do??

The logical address will look like this: <0,0>

Go to the segment table and look at entry number 0. Where does its base register start?... Right, it starts at byte 1400 in memory. Great. And where does this segment end? It ends at the segment's start plus its size, so it finishes at 1400 + 1000 = 2400.

One more example, made up by me :)

I want to write at this logical address: <3,5>

That means I want to write something in segment 3. Fine — look at index 3 in the table: it tells me to write starting from 3200, but I want to skip 5 bytes (which are the offset). So I skip byte 3200, byte 3201, and so on, and start writing from byte 3205.

Okay, what about protection in segmentation??

(Waaarning: nobody study the protection-in-paging section from the book — the lecturer didn't explain it and it's not in the slides, because it conflicts with something in chapter 9.)

How does it work? Very simple: I check that my offset is smaller than the size of the segment, which is the limit register.
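Putting the base/limit lookup and the protection check together, here is a minimal sketch. The values for segment 0 (base 1400, limit 1000) and the base of segment 3 (3200) come from the examples above; segment 3's limit of 1100 is an assumption for illustration.

```python
# Translate <segment, offset> using a segment table of (base, limit) pairs.
# The offset must be smaller than the segment's limit, otherwise we trap.
def seg_translate(segment_table, seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:               # protection check: offset < limit
        raise MemoryError("trap: offset outside segment")
    return base + offset

# Segment 0 starts at 1400 with size 1000; segment 3 starts at 3200
# (its limit of 1100 is an assumed value).
table = {0: (1400, 1000), 3: (3200, 1100)}
print(seg_translate(table, 0, 0))    # 1400: the first byte of segment 0
print(seg_translate(table, 3, 5))    # 3205: skip 5 bytes into segment 3
```

Unlike paging, the physical address here is base + offset directly, because segments are variable-sized and each entry stores its own starting address.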

Don't forget: in the case of paging, memory is divided into frames and the process is divided into pages, and the frame size equals the page size; in the case of segmentation, memory is divided into segments and the process is also divided into segments.
OS-Chap8 Summary

Chapter 8
Peace be upon you!
We continue with OS, and now we've reached Main Memory. We'll talk about it in more detail and see how to use this memory space in the best way possible, and how to place processes and data inside it so we save on storage and gain performance — and that pays off for the whole system.
As we know, memory and the registers are the only things the CPU deals with directly to take data from, and here we care about memory. Memory is made up of words, and each word has an address — pay close attention to this address business, because we'll be working with it throughout the chapter. While writing a program we don't get access to the memory's real address, for security and other reasons; so we use a logical address and the CPU translates it to the memory's address. A pointer, for example: when we point at data in memory with a pointer, we're pointing with a logical address, and the CPU translates it to the physical address — and that step is called binding....

Okay, but why do I even need to know about addresses and all that talk?!
As we know, the CPU doesn't run just one task and that's it — no, it runs more than one task at the same time. When I have more than one program to execute together, I have to load them into memory so the CPU can work on them. So first thing: once I move a program into memory, it's called a process. Now, maybe I want to run 100 processes, each taking a certain size in memory, each with a different size, and memory sometimes can't hold them all together — so I have to remove one process and put another into memory. To do that, I need to know where each process sits in memory and which places are free and which are occupied — and that's why I need the address. And since we're talking about removing and placing processes, this whole operation needs management of the memory, so I know which places are free and which are filled with processes. And we'll find more than one way to place a process in memory, aiming for the fastest time and the least storage possible.........................

Everything above is an introduction to the whole chapter; now we'll explain every word and every slide on its own, God willing.

Background:

As we said, we bring the program from the hard disk and load it into memory, where it's called a process, so that the CPU can execute that process's instructions...

Basic Hardware:

Each register is read by the CPU in a single cycle — it's very fast; the CPU needs only one cycle to read it — while memory takes much, much longer. That's why I'm forced to put a memory between the registers and main memory, namely the cache. And I don't think it needs more explanation than that :D


And I also need:

Protection of memory required to ensure correct operation

Base and Limit Registers


As I said a moment ago, I need to protect memory, because memory holds processes stored one after another, and the CPU executes them one by one. So I must make the CPU start a process from its beginning and, when it reaches its end, stop — so it doesn't execute any instruction from another process. How do I do that??
I'll have 2 registers that nobody but the OS itself can change. In the first, the Base, I store the address of the first instruction of the process currently executing; in the second, the Limit, I store the size of that process. So when the CPU starts executing a process, it fetches instructions from memory starting at the physical address in the base register and executes all the instructions up to the size stored in the limit register — and with that I've got protection. If any program tries to access a memory address it isn't allowed to, the OS raises a trap. It's like when we program in Java and access an array index larger than its length: it throws an exception, which is really a trap made by the OS because we accessed memory beyond the limit size.
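The base/limit check described above can be sketched like this — a minimal sketch, where the register values are made-up numbers, not ones from the notes:

```python
# Every address the running process touches must satisfy:
#   base <= address < base + limit
# otherwise the hardware traps to the OS (like an array-bounds exception).
def check_access(base, limit, address):
    if base <= address < base + limit:
        return True
    raise MemoryError("trap: addressing outside process bounds")

base, limit = 300040, 120900               # made-up register contents
print(check_access(base, limit, 300040))   # first legal address -> True
# check_access(base, limit, 420940) would trap: one past the last byte
```

The last legal address is base + limit - 1; exactly at base + limit the check fails, which is the trap the notes compare to Java's out-of-bounds exception.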

Binding of Instructions and Data to Memory:

Here, as we said, as a programmer I don't have the memory's physical address. Suppose I have a somewhat big program that I split into processes so I can execute it; whenever a process finishes, I remove it from memory to put another process in its place. Say the process I want to remove has some physical address — the OS won't let me deal with that address. That's why I'll have another address called the logical address, which is usually a counter, e.g., 0, 54, 45677; then the compiler comes along and binds my logical address, converting it to a physical address...
What does binding mean?????
For example: my logical address is 5; it comes and adds, say, 500 to it, and it becomes 505 = 500 + 5 — so 505 is the physical address..
We do the binding at three different times, depending on several conditions:
1- At compile time:
If, while writing the program's code, I already know where the process will be placed in memory, then the compiler is the one that does the binding, producing absolute code after compilation...
2- At load time:
If, while writing the code, I don't know where the process will be placed in memory, then I let the process's addresses be bound as it is being loaded into memory, and my code is then called relocatable code....
3- At execution time:
If my process will be removed from and placed back into memory more than once during execution, then its address gets determined anew each time, so the binding is done at execution time. This is the prevailing case in all general-purpose OSes, and there must be a piece of hardware supporting the operation...
There's a figure in the book on page 333 illustrating this.

Logical vs. Physical Address Space:

Now, any address coming out of the CPU is called a logical address, and its value ranges, say, from 0 -> max. That's the address our program deals with and sees — which is also why it's called a virtual address.
Memory itself deals only with one address, the physical address, which we get after doing the binding on the logical address; if the binding adds a value m, say, then the physical addresses range from 0+m -> max+m..
And so we'll notice that at compile time and at load time the two addresses are the same; they only differ from each other at execution time, after the binding.

Memory-Management Unit (MMU)

Bottom line: this is the piece of hardware that does the binding. As we said before, we have base and limit registers; now the base register gets called the relocation register. Why?
Because the MMU takes the value inside it and adds it to the logical address, and the result is called the physical address..
So in the end, our program deals only with logical addresses: any address coming out of the CPU is called logical, and any address coming out of the MMU is called physical..

Dynamic Loading:
We agreed that if the program is big we split it into subprograms, and each subprogram becomes a process once we move it to memory. Sometimes the program is way too big to put entirely in memory — so here, we place in memory only the process we want to execute right now, and when we need a new process we load it from the hard disk into memory. That's called Dynamic Loading.

This way, whatever I don't need from the program never gets placed in memory, which saves me memory and performance..

And it needs nothing from the OS, because all of this is the programmer's responsibility...



Dynamic Linking:
This is similar to dynamic loading, but with the program's libraries. How??
We know that when we compile a program, it links all the libraries it will use — you can see the figure in the book on page 333...
That's called static linking. But I'm not obliged to link all the libraries I don't currently need for the process I'm running on; instead I make a small piece of code called a stub — it executes itself and then replaces itself with the address of the library routine.
Here I do need the OS to step in, to make sure that routine is still within my process, for security...
That's called Dynamic Linking,
and a system that does these things is called Shared Libraries.

Swapping:
This part talks about the process I remove from memory, when I want to remove it...
I remove it and put it in a place on the hard disk called the backing store, until I bring it back into memory and continue executing it, if it still has execution left..
The backing store must be fast to access and big enough so I can store in it all the processes in memory, for all the users in the system..
Okay, so which process do I remove and which do I bring in? Here we have something called roll out, roll in: each process has a priority, so I remove the low-priority process and put a high-priority process in its place.
The major part of the time swapping takes is the transfer time to the hard disk, so the size of the process plays the main role in swap time...
And finally, swapping exists in most OSes, like UNIX and Windows.

Contiguous Allocation:
Quite simply, this says I divide main memory into two parts, the first for the OS and the second for the user processes — I put the OS in low memory and the processes in high memory.
And I divide the whole user-process memory into holes; in each hole I put a process, and the OS knows which holes are free and which are full..

Dynamic Storage-Allocation Problem:

So: we have free holes the moment the OS starts up; then I keep placing a process here, finishing a process there, removing one and putting in another — and things end up untidy. I'll find a free hole, then a full one, then a free one, each with a size different from the next. So how do I put a process in the middle of all this mess? It says I have three methods:

1- First fit:
I take the process I want to put in memory, look for the first free hole whose size fits my process, and go ahead and place the process in it...
2- Best fit:
I look for the smallest free hole whose size still fits my process, and place the process in it..
3- Worst fit:
I look for the biggest hole whose size fits my process, and place the process in it...

‫‪First-fit and best-fit better than worst-fit in terms of speed and storage utilization.‬‬

Fragmentation:
This means I have a place in memory I'm getting no benefit from, and next to it places holding processes that I am benefiting from — i.e., the free places are not adjacent to each other. We have two kinds:
External:
This means the free space could hold a process, but the free spaces in memory are not contiguous with each other.
Internal:
This means I have a free space that no process can fit into — space left over because I placed a process into a spot slightly bigger than it, leaving me a tiny free remainder. For example:

I have a free space in memory after removing a process, say 14,364 bytes, and a process I want to put in that place whose size is 14,362 bytes. The difference between the free spot and the process's size is 2 bytes, and I can't put any process into that space because it's so tiny — I get no benefit from it. That's the problem called internal fragmentation.

Paging:
This is a new way I'll place processes in memory. I'll start with two views of memory: the first is logical, in my head, and the second is physical, the real one. Given a process, I divide logical memory into pages; each page has a fixed size, roughly from 512 bytes up to 8,192 bytes. So I place the process into the pages: for example, say I have a process of 4 kilobytes and my page size is 512 bytes — then the process gets stored in 4K/512, which is 8 pages. Then I go to physical memory and divide it into frames, each frame with the same size as a page — so here I also need 8 frames to store this process. Okay, but how do I know which page is in which frame???
First off, as a user I'll know nothing about the frames :D because that's the OS's job. But it builds a table in which it maps each page to each frame..

Like this example: inside the table, each page has its frame..

With this method I've gotten rid of external fragmentation, but internal fragmentation will still happen to me. Why?
Because the process's size might need, say, 10.5 pages — so to store this process I need 11 frames, and of the last frame
I'll be using only half, with the other half free and unused!!!
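The arithmetic above — how many pages a process needs, and how much of the last frame goes to waste — can be sketched as:

```python
import math

# How many fixed-size pages does a process need, and how many bytes of the
# last frame are wasted (internal fragmentation)?
def pages_needed(process_bytes, page_size):
    return math.ceil(process_bytes / page_size)

def internal_fragmentation(process_bytes, page_size):
    return pages_needed(process_bytes, page_size) * page_size - process_bytes

print(pages_needed(4 * 1024, 512))             # 4 KB / 512 B = 8 pages
print(internal_fragmentation(4 * 1024, 512))   # exact fit -> 0 wasted bytes
print(pages_needed(5376, 512))                 # 10.5 pages -> 11 frames
print(internal_fragmentation(5376, 512))       # half a frame (256 B) wasted
```

The 5,376-byte process is a made-up size chosen to land exactly on the "10.5 pages" case the notes mention.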

In the earlier relocation method we had two registers (base, limit) to know where each process was; now what I need is the logical address..

Address Translation Scheme

Here I split the logical address into two parts. The first is a number into the page table that tells me the base of the physical address — i.e., where the frame for this page is!! The second is the page offset, which tells me where a particular instruction or word sits within the page...
Example:
Suppose my logical address is 12112. I take the first three digits — that's the base part, 121 — which I take to the table, and it tells me which frame this page is stored in. Then I take the remaining 12, which is the offset, telling me that within this page (or frame) I want the word number 12...

And it's best for the page size, the frame size, and everything else to be a power of 2.

Paging Hardware
Dealing with pages and doing this translation is usually done by hardware: there's a piece of hardware that performs the mapping via the page table and produces the physical address.

With the paging method, the process won't necessarily be stored all together in memory: 5 of its pages might be stored at the start of memory, 3 in the middle, and 4 at the end, with pages of other processes in between. As long as I have the page table for each process, I can access every process and know how to fetch it.

Implementation of Page Table:

Now we'll talk a bit about the page table:

As we said, access to the page table goes through a piece of hardware dedicated to it, and the page table itself lives in memory. So, to reach it, I keep a register called the page-table-base-register in which I put the page table's address in memory, and a second register called the page-table length register in which I put the table's size — the same idea as the base and limit registers!!!
Now, to reach any page in that page table I need the page number. But what if the page table is big — and it usually is, say a million pages inside it, each page also containing lots of instructions? Then to reach a particular instruction in a particular page I'd have to search for the page among the million pages and then search inside it for that instruction — and that would take me forever. So they did the same thing as the cache memory, and built a small table on top of the big one called:

associative memory or translation look-aside buffers (TLBs).

When I come along with a page and want to fetch its frame, first I look in the TLB. If I find it there, I take the frame and continue on to access memory. If it's not in there — that's called a TLB miss — I go and look the page up in the big page table (oh well :D), access memory once I know the frame number, then come back and put the page and frame into the TLB. If there's room, I put them in; if not, I tell the OS to free me a spot, and it decides whom to swap out according to some policy...

Effective Access Time:

After all that, I want to know: how much time will it take me to access memory?!!!

I need to know the hit ratio, which is the fraction of lookups where I find the page in the TLB, and I'll denote it α.

This formula is the lecturer's; I'll explain it — its explanation is in the book on page 348..
First, I have two cases. Case one: I find the page in the TLB, so the time is the TLB access time plus the time to access memory. Case two: I don't find the page in the TLB, so the time is the TLB search time plus the page-table search time plus the memory access time. The formula multiplies each case by its ratio and adds them together, and out comes the Effective Access Time.
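A worked version of that formula — note this is a sketch: the 20 ns TLB lookup and 100 ns memory access are commonly used textbook numbers assumed here, not values from the notes:

```python
# Effective access time:
#   EAT = hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + mem + mem)
# A miss pays an extra memory access to walk the page table.
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    hit = tlb_ns + mem_ns              # page found in the TLB
    miss = tlb_ns + mem_ns + mem_ns    # extra memory access for the page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80, 20, 100))  # 80% hit ratio -> 140.0 ns
print(effective_access_time(0.98, 20, 100))  # 98% hit ratio -> 122.0 ns
```

Notice how a better hit ratio pushes the effective time down toward the pure hit cost (120 ns here).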

Memory Protection:
As the name says, this is responsible for protection and all that. How can I keep memory safe so that every time I use a page and fetch a frame, I don't fetch the wrong frame or write over the wrong one? It marks out an area of memory for me to stay within, so no page steps on another page, and likewise no frame on another. Okay, how does it do that?!!:
First, it puts an extra bit on every page, through which it states whether this page (or its corresponding frame) is read-only or read-write.
More importantly, it also puts an extra bit on every page stating whether this page is valid for me or invalid — and that bit is called the Valid-Invalid bit.
We agreed that memory is in two parts, one section for the OS and the remaining section for the user's processes; so a page I'm allowed to use will have this bit Valid, and otherwise it's Invalid, like this example:

Pages 6 and 7 I'm not allowed to use — that's why they have invalid next to them.

Shared Pages:
Having agreed to organize memory as pages and frames, we found we can benefit from this technology: with this method we can have processes that execute common code execute the same code at the same time — i.e., share the code. In practice we share the pages, but there have to be conditions. First, the code must be reentrant, meaning read only — it's not allowed for me to change it while processes are using it. Example:

If I have a system serving 40 users, and those 40 all use an editor of size 150 K, and each of them works on data of size 50 K, then I need (50+150)*40, i.e., 8000 kilobytes, in memory. Okay, so why don't I keep the editor's code fixed in memory, with each process using it on its own data? Then I need only one editor — that is, I'll need 150 + 50*40, which comes to 2150 kilobytes only... The picture on slide 33 illustrates this with a small example of three processes...

Structure of the Page Table:

Now we'll talk about the page table and how it's built on the inside. We have several shapes and methods it's built with:

Hierarchical Page Tables:

On most systems with 32-bit addresses the page table will be very big and not efficient, so with this method we split it into small parts — i.e., we do paging on the page table itself. For example, I have a page table with a million entries: I split them into groups, and those groups I place in an outer page, and so on...

Two-Level Page-Table Scheme:

Nothing new — as I said a moment ago, I split the table into groups and put them in a second page table, so now I've made two page tables, i.e., two-level. Slide 36..
Okay, now how do I reach the page??!!
I take the logical address, which is 32 bits, and split it into two parts:

1- Page number:
This gets me the page. How??
I split it again into two parts: the first, p1, gets me the index into the outer table — i.e., the group the page belongs to —
and the second, p2, gets me the page within the page table, i.e., inside the group..
2- Page offset:
This is the page offset we talked about — what I use to fetch from within the page..
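Splitting the 32-bit logical address into p1, p2, and offset can be sketched with bit operations. The 10/10/12-bit split below is the common textbook layout for 4 KB pages — an assumption for illustration, not stated in the notes:

```python
# Decompose a 32-bit logical address into (p1, p2, offset) for a
# two-level page table: 10 bits of outer index, 10 bits of inner index,
# 12 bits of offset (4 KB pages) -- an assumed but typical layout.
def split_two_level(addr):
    offset = addr & 0xFFF          # low 12 bits: offset inside the page
    p2 = (addr >> 12) & 0x3FF      # next 10 bits: index in the inner table
    p1 = (addr >> 22) & 0x3FF      # top 10 bits: index in the outer table
    return p1, p2, offset

print(split_two_level(0x00402ABC))  # -> (1, 2, 2748)
```

The lookup then goes outer_table[p1] to find the inner table, inner_table[p2] to find the frame, and offset to find the byte inside it.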

Three-level Paging Scheme:

I can do more than one level — three, four, ......
See slide 39.

Hashed Page Tables:

Here I build a hash table and hash the page number. Each element of the hash table holds a linked list, so that if more than one page number points at the same slot, each one's frame sits in the list. Each list element holds three things:

1- virtual page number: this is to make sure this page is the one I want
2- value of the mapped page frame: this is to take the frame from and go access memory
3- pointer to the next element in the linked list

So I hash the page number and take the first element of the list: I check whether its page number is the same as the page I have. If it is, I take the
value of the mapped page frame and go access memory; if it's not, I look at the pointer and go to the next element
and do the same story with it, until I find my page..
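A tiny sketch of that lookup, using Python lists as the per-slot chains. The hash function (page mod table size) and the table size are assumptions chosen for illustration:

```python
# Hashed page table: each slot chains (virtual_page, frame) pairs.
TABLE_SIZE = 8                          # assumed small size for the demo

def slot(page):                         # assumed trivial hash function
    return page % TABLE_SIZE

def insert(table, page, frame):
    table[slot(page)].append((page, frame))

def lookup(table, page):
    for vpn, frame in table[slot(page)]:  # walk the chain at this slot
        if vpn == page:                   # confirm it's the page we want
            return frame
    raise KeyError("page fault: page not in table")

table = [[] for _ in range(TABLE_SIZE)]
insert(table, 3, 17)
insert(table, 11, 42)     # 11 % 8 == 3: collides with page 3's slot
print(lookup(table, 11))  # walks the chain past (3, 17), prints 42
```

The collision between pages 3 and 11 is exactly the case the chained list exists for: both land in slot 3, and the stored virtual page number disambiguates them.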

Inverted Page Table:

Honestly, I didn't understand this one well :)
But what I did get is: I build one page table for all the processes in the system, instead of every process having its own page table. In the table, for each physical address, I record the page that may be stored there; and when I want to do a mapping I search by the page number — and I can use a hash table to improve the search..
You can read about this in the book on page 355..

Segmentation:
Here I divide my program into several parts, for example:

main program,
procedure, function,
method, object,
local variables, global variables,
common block, stack,
symbol table, arrays

I call each part a segment, and I build a table recording where each segment sits in memory. Each entry in the table has:

1- Base: where is this segment located?
2- Limit: what is its size?

And I also have two registers through which I determine where my table itself sits in memory:
Segment-table base register (STBR): points to the segment table's location in memory.
Segment-table length register (STLR): indicates the number of segments used by a program.

If it ever happens that the segment number I produced is greater than the number of segments I have, the OS raises a trap.
This is explained well in the book on page 356....
Chapter Eight OS

Memory Management
Program must be brought into memory and placed within a process for it to be run.
User programs go through several steps before being Run
Job queue:- collection of processes on the disk that are waiting to be run
You can never have enough memory to hold the OS and all the processes
need to transfer blocks between secondary and main memory as needed
Memory organization
Part reserved for OS
Rest is divided among processes
Try to put as many processes in memory as possible, to keep the CPU busy
Memory management objective:
To pack as many processes into memory as possible, needs to be allocated efficiently
Requirements for Memory Management to satisfy:
Relocation – Protection – Sharing - Physical Organization
Relocation
Programmers don't know where the program will be placed in memory
While the program is being executed it may be swapped to the disk and brought back
to main memory at a different location
Restriction:- its addresses are recorded by the CPU, so when the location changes the
CPU would have difficulty executing the program
Relocation:- the ability to swap a program from disk into any location in memory
Memory references in the code must be translated to actual physical memory addresses
Protection
Prevent processes from interfering with the OS; a process can't reference another
process's memory without permission
Protect process from accidental or intentional interference:
Can't know all memory references in advance anyway (arrays, data structures)
Must check at run time (when mapping is done) and therefore when operating system
is not active "handled by hardware"
Sharing
Allow several processes to access the same portion of memory (access control)
Processes executing same program
Sharing data structures, i.e., producer/consumer
Physical Organization
Allocation of physical memory
Moving data between secondary and main memory
Memory management is the responsibility of the OS and not the programmer
programmer doesn’t know how much space will be available
OS can make it transparent to user, simplify programmer's job
Swapping
A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution
Backing store:- a disk fast and large enough to accommodate copies of all memory images
for all users; must provide direct access to these memory images
Roll out, roll in:- swapping based on priority: a lower-priority process is swapped out
so that a higher-priority one can be loaded
Transfer time: total transfer time is directly proportional to the amount of memory
swapped // my own note: the smaller the amount of memory swapped, the lower the
transfer time
The criteria for decisions regarding swapping
Fairness: processes that take a lot of CPU usage are swapped out to give the others a chance
Creating a good job mix:
jobs that compete with each other will be swapped out more than jobs that
complement each other.
For example, if there are two CPU-bound jobs and one I/O-bound job, it makes sense
to swap out the CPU-bound jobs alternately, and leave the I/O-bound job memory
resident.
Logical vs. Physical Address Space
Logical address: generated by the CPU; also referred to as virtual address
Physical address: – address seen by the memory unit
Memory Management Unit (MMU)
Hardware device that maps virtual to physical address
In the MMU scheme, the value in the relocation register is added to every address
generated by a user process at the time it is sent to memory
The user program deals with Logical addresses; it never sees the physical addresses
Memory Allocation Mechanisms
Contiguous Allocation
Paging
Segmentation
Contiguous Allocation
Main memory usually into two partitions:
Resident operating system, usually held in low memory with interrupt vector
Rest of memory is left for programs
Establish programs partition once at system initialization
Relocation-register scheme used to protect user processes from each other, and from
changing operating-system code and data
Relocation register contains value of smallest physical address; limit register contains
range of logical addresses – each logical address must be less than the limit register
Registers Used during Execution
Base register:-Starting address for the process
Limit register:-Ending location of the process
These values are set when the process is loaded or when the process is swapped in
The value of the base register is added to a relative address to produce an absolute
address
The resulting address is compared with the value in the bounds register
If the address is not within limits, an interrupt is generated to the operating system
Memory Partitioning
Fixed (Equal-Sized) Partitions:-Any process whose size is less than or equal to
the partition size can be loaded into an available partition
If all partitions are full and no resident process is ready, swap one process out so a new process can be swapped in
Limits size of a process
Inefficient use of memory for small processes ( internal fragmentation)
Dynamic (Variable-Sized) Partitions:-Partitions are of variable length and
number
Handle larger programs with larger partitions
Each program allocated exactly as much memory as it requires
Eventually holes appear in main memory between the partitions
We call this "External Fragmentation"
Dynamic Partitions
Hole:– block of available memory; holes of various size are scattered in memory
When a process arrives, it is allocated a hole large enough to accommodate it
a) allocated partitions b) free partitions (hole)
Dynamic Storage Allocation Problem
Operating system must decide which free block to allocate to a process
How to satisfy a request of size N from a list of free holes
First-fit: Allocate the First hole that is big enough
Best-fit: Allocate the Smallest hole that is big enough;
must search entire list, unless ordered by size. Produces the smallest leftover hole.
Worst-fit: Allocate the Largest hole; must also search entire list Produces the
largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
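The three placement strategies can be sketched over a list of free hole sizes — a minimal sketch; the hole sizes below are made-up:

```python
# Pick a hole index for a request of `size` from a list of free hole sizes.
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:              # first hole that is big enough
            return i
    return None

def best_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i], default=None)  # smallest fit

def worst_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i], default=None)  # largest fit

holes = [100, 500, 200, 300, 600]  # made-up free hole sizes
print(first_fit(holes, 212))   # index 1 (500): the first that fits
print(best_fit(holes, 212))    # index 3 (300): smallest leftover hole
print(worst_fit(holes, 212))   # index 4 (600): largest leftover hole
```

Note that best fit and worst fit both scan the whole list (unless it is kept sorted by size), which is why first fit tends to be the fastest of the three.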
Fragmentation
External Fragmentation:- free memory spaces exist but are not contiguous
Internal Fragmentation:- the memory allocated to a process may be larger than the
requested size, and the leftover inside the partition goes unused
Reduce external fragmentation by compaction
Shuffle memory contents to place all free memory together in one large block
Compaction is possible Only if relocation is dynamic, and is done at execution time
I/O problem
Latch the job in memory while it is involved in I/O (it can't be swapped out while a
device is transferring into its address space)
Do I/O only into OS buffers
Paging
Support Needed for Virtual Memory
Hardware must support paging and/or segmentation
Operating system must manage the movement of pages and/or segments between
secondary memory and main memory
Logical address space of a process can be noncontiguous;process is allocated physical
memory whenever the latter is available
Divide physical memory into fixed-sized blocks called frames (size is power of 2,
between 512 bytes and 8192 bytes)
Divide logical memory into blocks of the same size called pages. Keep track of all
free frames
Operating system maintains a page table for each process contains the frame location
for each page in the process
To run a program of size N pages, need to find N free frames and load program
Set up a page table to translate logical to physical addresses; a memory address
consists of a page number and an offset within the page
// note: a page is the same size as a frame; the process is divided into pages, and
these pages are spread over frames, not always contiguously
Address Translation Scheme in Paging
Address generated by CPU is divided into:
Page number (p) – an index into the page table, which contains the base address of each page in physical memory
Page offset (d) – combined with that base address to define the physical memory address
Meaning: the logical address consists of two parts. The first is the page number, an
index into the page table that gives the location of the page (its frame) in physical memory; the
other part is the page offset, which passes into the physical address unchanged.
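The translation step can be sketched in a few lines (the page size and the page-table contents are hypothetical values chosen for illustration):

```python
PAGE_SIZE = 1024                 # must be a power of 2
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (made-up values)

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE   # page number: index into the page table
    d = logical_addr % PAGE_SIZE    # offset: copied through unchanged
    frame = page_table[p]           # look up which frame holds this page
    return frame * PAGE_SIZE + d    # frame base address + offset

print(translate(2100))  # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```

Because the page size is a power of 2, the hardware extracts p and d by simply splitting the address bits; the division and modulo above are the same operation spelled out.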
Page Size
Small page size
less internal fragmentation // good – less space is wasted inside a process's last page
larger page table
more pages in main memory
fewer page faults, since the resident pages contain recent references // good
Large page size
more internal fragmentation // bad
smaller page table
secondary memory is designed to efficiently transfer large blocks of data // good
more page faults, since memory holds unreferenced data
Shared Pages
Shared code
One copy of read-only (reentrant) code shared among processes (e.g., text editors,
compilers, window systems).
It must appear in same location in the logical address space of all processes
Private code and data
Each process keeps a separate copy of the code and data
The pages for the private code and data can appear anywhere in the logical address
space // look at page 39.
Segmentation
Let programmer view memory as a number of dynamically-sized partitions A
program is a collection of segments. A segment is a logical unit such as: main
program, procedure, function, method, object,local variables, global variables
There is a maximum segment length
Addressing consists of two parts – a segment number and an offset
Since segments are not equal, segmentation is similar to dynamic partitioning
Advantages
independent linking/loading of modules
sharing
protection
// Note: segmentation is similar in spirit to paging, but segments are variable-sized,
programmer-visible logical units, unlike fixed-size frames and pages :D
Segmentation Architecture
Logical address consists of a two-tuple: <segment-number, offset>
Segment table: maps two-dimensional logical addresses to one-dimensional physical addresses; each
table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a
program;
Relocation:-Dynamic by segment table
Sharing:-shared segments, same segment number
Allocation:-first fit/best fit, external fragmentation
Protection:-With each entry in segment table associate:
validation bit = 0 → illegal segment
read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level
Since segments vary in length, memory allocation is a dynamic storage-allocation
problem
A segmentation example is shown in the following diagram .
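A minimal sketch of the segment-table lookup, including the limit check that triggers the protection trap (the base/limit values below are hypothetical):

```python
segment_table = {            # segment number -> (base, limit); made-up values
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        # Addressing beyond the segment's length: trap to the OS.
        raise MemoryError("segmentation fault")
    return base + offset     # physical address = segment base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
print(translate(0, 999))  # 1400 + 999 = 2399
```

Unlike paging, the offset here must be checked against a per-segment limit, because segments vary in length.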
Operating Systems
Chapter 9: Virtual Memory Management

OstazOnline
http://www.facebook.com/Ostaz.Online

Abu Hurairah (may Allah be pleased with him) reported that the Messenger of Allah (peace and blessings be upon him) said: "The strong believer is better and more beloved to Allah than the weak believer, though there is good in both. Be keen on what benefits you, seek help from Allah, and do not give up."
Ch. 9: Virtual Memory Management

Introduction

From Chapter 8 we know there is something called the logical address space and something called the physical address space, and we know that a program may need a large amount of memory to run. But the program does not have to be loaded into memory in its entirety – we only load the part we actually need, and that is exactly the definition of virtual memory:

Virtual memory is a technique that allows a process to run even if it is
not totally contained/loaded in memory.

With virtual memory, only the required part is loaded (active, in memory) and
the rest is passive (stored on the hard disk).

Benefits of virtual memory:

1- Allowing a process to run even if the physical space is not
enough for it (by loading only the required part)
The virtual memory technique lets me run any program even if memory is not
big enough, because I load only the part of the program that is needed.

2- Increasing the number of active processes, thus increasing the
degree of multiprogramming
I increase the number of processes resident in memory, and with that I have
increased the number of programs running together – which is multiprogramming.

There are two techniques for implementing virtual memory:

1- Demand paging    2- Demand segmentation

Demand paging: Paging + Swapping

Only required pages are loaded for the program. If a page does not exist
in main memory, it is loaded into a memory frame; if no frame is free, we
need to swap out a page to provide a free frame in memory.

The same story repeats itself: I use the paging concept from Ch. 8 (the program
is divided into pages), and add one more piece on top – I swap in a page the program
needs, and swap out a chosen page according to policies we will see at the end of the chapter.

Now an important question: how will the program know whether a page is in memory
or not? Wrong! We said in Chapter 8 that the program only sees the logical
address space, in which the whole program appears to be loaded (virtually). It is
the MMU that performs the logical address mapping – and the validation as well.
First of all, validation happens during the mapping, just like protection. How?
Remember the page table – we add one more column to it called the
valid–invalid bit, a bit with two possible values:

Value 1: the page is valid and is residing in memory right now.

Value 0: the page is either invalid (not in the logical address space), or
valid but not currently in memory.

Fine – now suppose the program asks the MMU for some page, and that page is
not in memory. Here something called a page fault occurs.

Page fault: a situation in which the required page is not residing in the
main memory.

How do we handle a page fault?

1- Check an internal table for the process to know whether the page has
an invalid address or is just not in memory.

If the page has an invalid address -> interrupt the OS with an addressing error.

Else:

2- Find a free frame.

3- Swap the page into that free frame.

4- Update the page table (set the valid bit to one).

5- Restart the instruction that was stopped because it didn't find the page.

The rest of the chapter is essentially an explanation of step 2: finding a free frame.

I now have two possible situations: either there really is a free frame – great,
everything is fine, we put the page inside it – or memory is completely full, in which
case we swap one page out so we can bring the required one in. That swapping move
involves several steps, grouped under the name page replacement.

Page replacement: swapping out a page from memory (the victim page),
after writing it to a backing store (hard disk); then the required page can
be swapped in to memory.

We can add an optimization here: instead of writing the evicted page to the hard
disk every single time – the page may not have been modified at all – I use something
called the modify/dirty bit. It tells me whether the page has changed or not: if it
changed, fine, I write it to the hard disk; if it hasn't changed, great, I don't need to write it.

Now for the part everyone has been waiting for since this morning – but first go
review what came before from the lecturer's slides :)
Page replacement algorithms

(FIFO – Optimal – LRU)

1- FIFO: first in first out

This nice fellow tells you: if you need a free frame, evict pages in arrival order. If page 1
entered and then page 2, we evict 1 first and then evict 2. And don't forget: if a requested page is already
inside main memory, I don't need to evict anything, because it is already there.

Note: this is considered one of the worst replacement algorithms because it increases the page faults.
Let's walk through the lecture example:

First we get page 7 – of course not in memory – so we put it in.

We get 0 – not in memory, but there is a free frame – put it in.

We get 1 – not in memory, but there is a free frame – put it in.

We get 2 – not in memory and no free frame – fine, evict something – evict the first one that entered,
which is 7.

We get 0 – already in memory – great, no eviction needed.

We get 3 – not in memory and no free frame – fine, evict the next one whose turn has come,
which is 0.

And so on.
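The FIFO walk-through above can be simulated in a few lines (a hypothetical sketch; the reference string is the lecture's 7, 0, 1, 2, 0, 3, ... example continued):

```python
from collections import deque

def fifo_faults(refs, n_frames=3):
    frames = deque()          # oldest (first-in) page sits at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue          # hit: the page is already resident
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()  # evict the page that entered first
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs))  # 15 page faults with 3 frames
```

Note that a hit does NOT refresh the page's position in the queue – that is exactly what makes FIFO worse than LRU.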
2- Optimal

This one achieves the best performance because it minimizes page faults – but don't get too excited :)
because it cannot actually be implemented; as they say, it is infeasible. It needs to know all
the pages that will be requested in the future, which is of course impossible, so it is used for research
comparisons and nothing more.

How does it work? Here is the slide's sentence:

Swap out the page that will not be required for the longest time.

Meaning: I look for the resident page whose turn to be requested comes last, and that is the one I evict.
"What is this? I don't understand anything" – let me show you with an example.

Say memory currently holds pages 1, 2 and 3, I want to evict one of them, and suppose these are
the pages that will be requested:

9 8 7 9 8 7 6 5 7 3 1 1 1 2 3 1 2 3

Focus now: we said memory holds 1, 2 and 3, so I need to know the order of their appearance in
the requested pages.

I find that 3 is requested, then 1 is requested, then 2 is requested – so the last one to be requested is
page 2, therefore page 2 is the one to evict.

Careful not to fall into the trap of scanning to the very end. What does that mean? Once each resident page
has appeared once, stop – don't keep going to the end and say "no, 3 is the last one
to be requested". As soon as all three have appeared, stop immediately, see which of them appeared
last, and evict it. Here is the lecture example:

The first 3 columns need no explanation – they are 7, 0 and 1.

Now we say 2 arrived and there is no room. Using the same method as above, which of
the pages currently inside memory do I evict now? The upcoming references are:

2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

The last of {7, 0, 1} to be requested is 7, so 7 goes out and 2 comes in instead.

Now 0 arrived – already in memory – fine, continue.

Now 3 arrived – not in memory and no room either – again, who will leave memory?

3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Right – 1 is the last of {2, 0, 1} to be requested, so we evict 1 and put 3 in its place.

Now 0 arrived and it's in memory – continue.

Now 4 arrived – not in memory – same conversation: we check which of the pages currently
inside memory will be requested last, so we can evict it.

4 2 3 0 3 2 1 2 0 1 7 0 1

The last of {2, 0, 3} to be requested is 0, so I evict it.

And so on.

If you understood this well, then great – you'll understand the next one too :)
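The (infeasible) optimal policy can still be simulated offline, since in a simulation we do know the whole reference string. A hypothetical sketch, run on the same lecture reference string:

```python
def optimal_faults(refs, n_frames=3):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                     # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)          # still a free frame
            continue
        # Evict the resident page whose next use lies farthest in the
        # future (never used again counts as infinitely far).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs))  # 9 faults -- the minimum possible with 3 frames
```

The `next_use` scan is exactly the "look forward and find who is requested last" step from the example above.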
3- LRU (Least Recently Used): swap out the page that has not been
used for the longest time.

It is exactly, exactly, exactly like Optimal – but instead of looking forward I look backward,
because I don't know what is coming, but I certainly know what has already been requested.

For example, if memory holds 1, 2 and 3, and the pages requested before now were:

3 3 3 2 3 1 8 9 7 3

We find that page 2 is the one whose most recent request lies furthest in the past (there in Optimal
we looked for the page that will be requested last; here we look for the page that was
requested least recently).

And with that, we evict page 2 :)

One small piece is still missing: this algorithm carries some overhead,
because of course you have to know, for every page, when it was last requested. There are two ways:

1- assign a time stamp to each page – bits (space) overhead

Meaning: with each page I store the time it was last requested, and so I increase the space
used per process.

2- put page numbers in a stack – time overhead

Meaning: I keep page numbers in a stack, so I know which one hasn't been requested for
a while – but of course here I lose time searching/updating the stack.
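The "stack" bookkeeping can be sketched with an ordered dictionary, where every hit moves the page to the most-recent end (a hypothetical simulation, same lecture reference string):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames=3):
    frames = OrderedDict()    # least recently used page sits at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # hit: refresh this page's recency
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)     # evict the least recently used
        frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs))  # 12 faults -- between optimal (9) and FIFO (15)
```

The `move_to_end` call is the only difference from the FIFO simulation – and it is also where the time overhead mentioned above comes from.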
Chapter Nine OS

Virtual Memory Background


Two important characteristics of paging and segmentation
Run-time translation of logical address to physical address enables relocation // the
logical address indexes the page table, and the table entry holds the physical location, so
relocating a page only requires updating its table entry before translation
Divide address space into pieces that don't need to be contiguous
No need to store all pages/segments of a process in main memory during execution!
Only need current instruction and current memory reference in memory
Virtual memory– separation of user logical memory from physical memory.
Only part of the program needs to be in memory for execution.
Resident set: portion of pages/segments in memory at a time
Allows address spaces to be shared by several processes.
For each memory reference, if it is in the working set, use it, otherwise
interrupt for memory fault and bring in new page, then continue
Virtual memory can be implemented via:
Demand paging
Demand segmentation
// the idea: there is no need to keep the whole program in memory – only the currently
executing part; the rest stays on the backing store (disk)
Virtual Memory Improvement
More processes may be maintained in main memory
improves utilization because there is a greater chance that some process is ready
A process may be larger than all of main memory
programmer doesn't have to be aware of how much memory is available
Demand Paging
Bring a page into memory only when it is needed. Benefits:
Less I/O needed
Less memory needed
Faster response
More users
Page is needed→reference to it
invalid reference→abort
not-in-memory→bring to memory
Valid / Invalid Bit
With page table entry a valid–invalid bit is associated(1in-memory,0 not-in-memory)
Initially valid invalid bit is set to 0 on all entries
During address translation, if valid–invalid bit in page table entry is 0 page fault
Page Fault
If there is ever a reference to a page, the first reference to it will trap to the
OS→page fault
OS looks at another table to decide:
Invalid reference→abort.
If just not in memory:
Get empty frame.
Swap page into frame.
Reset tables, validation bit = 1.
Restart instruction (special care is needed for instructions, such as a block move, that span pages)
// for the page-fault handling steps, see the diagram on page 13 of the slides
What happens if there is no free frame?
Page replacement– find some page in memory, but not really in use,swap it out
Algorithm
performance – need algorithm which will result in minimum number of page faults
Same page may be brought into memory several times
Use modify (dirty) bit to reduce overhead of page transfers – only
modified pages are written to disk
Page replacement completes separation between logical memory and physical
memory – large virtual memory can be provided on a smaller physical memory
Basic Page Replacement
1-Find the location of the desired page on disk
2-Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to select a victim frame
3- Read the page into the (newly) free frame. Update the page and frame tables.
4- Restart the process
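The four steps above can be sketched as a toy model (a hypothetical illustration, not the real kernel path; the disk, memory map, and victim-selection callback are all made up for the example):

```python
def handle_page_fault(page, free_frames, memory, pick_victim, disk):
    # 1- Find the location of the desired page on disk (modeled as a dict).
    data = disk[page]
    # 2- Find a free frame, or select a victim frame if none is free.
    if free_frames:
        frame = free_frames.pop()
    else:
        victim = pick_victim(memory)      # the replacement algorithm decides
        frame = memory.pop(victim)        # victim would be written back if dirty
    # 3- Read the page into the (newly) free frame; update page/frame tables.
    memory[page] = frame
    # 4- The faulting instruction would now be restarted.
    return frame

disk = {"A": "dataA", "B": "dataB"}
memory, free = {}, [0, 1]
handle_page_fault("A", free, memory, lambda m: next(iter(m)), disk)
handle_page_fault("B", free, memory, lambda m: next(iter(m)), disk)
print(memory)  # {'A': 1, 'B': 0}
```

Passing the replacement algorithm in as a callback mirrors how step 2 is the pluggable part that the FIFO/Optimal/LRU section fills in.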
The rest I can't understand, so I will leave it and study it before the exam :D