Computer Architecture

Name: Kondwani Nyanga

Student No.: 2023001588

Process Scheduling
Words Used in Process Scheduling:

 Arrival Time: The time at which the process arrives in the ready queue.
 Completion Time: The time at which the process completes its execution.
 Burst Time: The CPU time needed by a process for its execution.
 Turnaround Time: The difference between completion time and arrival time.

Turnaround Time = Completion Time − Arrival Time

 Waiting Time: The difference between turnaround time and burst time.

Waiting Time = Turnaround Time − Burst Time
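
As a quick worked example of the two formulas above (the arrival, burst, and completion
times here are made up purely for illustration), the calculation for a single process could
be sketched in Python as follows:

# Worked example of the turnaround and waiting time formulas (made-up values).
arrival_time = 2       # process enters the ready queue at t = 2
burst_time = 5         # process needs 5 time units of CPU
completion_time = 12   # process finishes at t = 12, after waiting behind others

turnaround_time = completion_time - arrival_time   # 12 - 2 = 10
waiting_time = turnaround_time - burst_time        # 10 - 5 = 5

print(turnaround_time, waiting_time)               # prints: 10 5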

The process of selecting a process to execute is performed by the short-term (CPU) scheduler.


The scheduler selects from among the processes in memory that are ready to execute and
allocates the CPU to one of them.

Definition of a Process
A process is an instance of a computer program that is being executed by one or more threads. It
contains the program instructions. Depending on the logical design of the system, a process can
consist of multiple threads of execution.

Process Memory
The process memory is divided into four sections for efficient operation:

 Text Section: This part is made up of the compiled program code, which is read from
non-volatile storage when the program is launched.
 Data Section: This component is made up of global and static variables, which are
allocated and initialized before the main function executes.
 Heap: This component is used for flexible or dynamic memory allocation and is managed
by calls to new, delete, malloc, free, etc.
 Stack: This component comprises the space reserved for local variables.

Definition of Process Scheduling


Process Scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a
particular strategy.
There are three types of process scheduling:

 Long term or Job Scheduler
 Short term or CPU scheduler
 Medium-term Scheduler

Reasons for Scheduling Processes:

 Allows programs to run smoothly on the CPU.
 Process scheduling allocates time to processes to provide faster response times.
 Gives users the illusion of running many applications simultaneously through context
switching. (Context switching is the capability of a computer to run multiple programs by
launching and stopping them and switching from one program to another.)

Purpose of Scheduling Algorithms:

 Utilize the CPU as much as possible by keeping it as busy as possible.
 Allocation of the CPU must be fair.
 The number of processes that complete execution per unit of time should be as high as
possible. This number is usually called the 'Throughput'.
 The time taken by a process to finish execution should be as low as possible. This time is
usually called the 'Turnaround Time'.
 A process should not starve in the ready queue. The amount of time a process waits to be
executed is called the 'Waiting Time'.
 Response time should be kept to a minimum.

Things to put into consideration when designing a process scheduling algorithm:

 CPU Utilization
 Throughput
 Turnaround Time
 Waiting Time
 Response Time

Types of Scheduling Algorithms:

 Preemptive Scheduling (when a process switches from the running state to the ready state
or from the waiting state to the ready state)
 Non-Preemptive Scheduling (when a process switches from the running state to the waiting
state)

Common Scheduling Algorithms:

 First Come First Serve (FCFS) (Non-Preemptive):
o The process that requests the CPU first is allocated the CPU first; this is
implemented through a First In First Out (FIFO) queue.
o First Come First Serve is usually implemented as a non-preemptive algorithm,
although preemptive variations exist.
o Tasks are always allocated on a First Come, First Serve basis.
o It is easy to implement and use, but it is not very efficient in performance and
the waiting time is quite high.
o First Come First Serve suffers from the Convoy effect.
 Shortest Job First (SJF) (Non-Preemptive):
o Selects the waiting process with the smallest execution (burst) time.
o May or may not be preemptive.
o Significantly reduces the average waiting time for processes.
o May cause starvation of long processes if shorter processes keep arriving.
o It is important to note that it is often difficult to predict the length of the
upcoming CPU burst.
o A small worked comparison of FCFS and SJF is sketched after this list.
 Longest Job First (LJF) (Non-Preemptive):
o The process with the largest burst time is processed first.
o It is non-preemptive in its basic form, although a preemptive variant (Longest
Remaining Time First) also exists.
o If two processes have the same burst time, First In First Out ordering is used.
o All jobs finish at approximately the same time.
o It has a very high average waiting time and average turnaround time for a given
set of processes.
o It may lead to the Convoy effect.
 Priority Scheduling (Preemptive)
 Round Robin (Preemptive)
 Shortest Remaining Time First (SRTF) (Preemptive)
 Longest Remaining Time First (LRTF) (Preemptive)
 Highest Response Ratio Next (HRRN) (Non-Preemptive)
 Multiple Queue Scheduling
 Multilevel Feedback Queue Scheduling
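
The following is a minimal, illustrative sketch (not part of the original notes) of how
non-preemptive FCFS and SJF could be simulated to compare average waiting times; the
process names, arrival times, and burst times below are invented.

def schedule(processes, pick):
    """Run a non-preemptive schedule. processes: list of (name, arrival, burst);
    pick: function choosing the next process from the ready list."""
    time, finished = 0, []
    remaining = list(processes)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = pick(ready)
        time += burst                          # run the chosen process to completion
        turnaround = time - arrival
        finished.append((name, turnaround, turnaround - burst))
        remaining.remove((name, arrival, burst))
    return finished

procs = [("P1", 0, 7), ("P2", 0, 3), ("P3", 0, 5)]             # made-up workload
fcfs = schedule(procs, lambda r: min(r, key=lambda p: p[1]))   # earliest arrival first
sjf = schedule(procs, lambda r: min(r, key=lambda p: p[2]))    # shortest burst first
for label, result in (("FCFS", fcfs), ("SJF", sjf)):
    avg_wait = sum(w for _, _, w in result) / len(result)
    print(label, result, "average waiting time:", avg_wait)

With all three processes arriving at time 0, FCFS gives an average waiting time of about
5.67 time units, while SJF reduces it to about 3.67, which illustrates why SJF significantly
lowers the average waiting time.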

Interrupts
Definition of an Interrupt
An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. Interrupts inform the processor of a high-priority process requiring
immediate attention by interrupting the currently executing process.

Types of Interrupts
 Software Interrupts
 Hardware Interrupts
o Maskable Interrupt
o Spurious Interrupt
When an Interrupt Request (IRQ) occurs:
1) A device raises an Interrupt Request (IRQ).
2) The processor interrupts the program currently being executed.
3) The device sending the request signal is informed that the request has been received and
recognized, so the device stops asserting the request signal.
4) The requested action is performed (the interrupt is serviced).
5) Interrupts are re-enabled and the interrupted program is resumed.
A more detailed explanation of this process is given below (a small sketch of vector-table
dispatch follows the steps):
Step 1: Whenever an interrupt is raised, it may be either an I/O interrupt or a system
interrupt.
Step 2: The current state, comprising the registers and the program counter, is stored in
order to preserve the state of the interrupted process.
Step 3: The current interrupt and its handler are identified through the interrupt vector
table in the processor.
Step 4: Control now shifts to the interrupt handler, which is a function located in kernel
space.
Step 5: Specific tasks essential to managing the interrupt are performed by the Interrupt
Service Routine (ISR).
Step 6: The saved state is restored so that the interrupted process can continue from the
point at which it was stopped.
Step 7: Control is then shifted back to the process that was pending and normal execution
continues.
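
Steps 2 through 7 can be pictured with a small, purely illustrative sketch of vector-table
dispatch; the IRQ numbers, register names, and handler functions below are invented and do
not correspond to real kernel code.

saved_context = {}                     # where the interrupted state is kept (Step 2)

def timer_handler():                   # hypothetical ISR for IRQ 0
    print("timer tick serviced")

def keyboard_handler():                # hypothetical ISR for IRQ 1
    print("key press serviced")

# Step 3: the interrupt vector table maps an interrupt number to its handler.
interrupt_vector_table = {0: timer_handler, 1: keyboard_handler}

def handle_interrupt(irq, registers, program_counter):
    # Step 2: save the current state (registers and program counter).
    saved_context.update(registers=dict(registers), pc=program_counter)
    # Steps 3-5: look up the handler and let the ISR perform its work.
    interrupt_vector_table[irq]()
    # Steps 6-7: restore the saved state so the interrupted process can resume.
    return saved_context["registers"], saved_context["pc"]

regs, pc = handle_interrupt(0, {"AX": 5, "BX": 9}, program_counter=42)
print("resuming at instruction", pc, "with registers", regs)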

Interrupts and the Management of Multiple Devices:


When multiple interrupts are raised, the following methods are used to decide which interrupt
to select:
 Polling
 Vectored Interrupt
 Interrupt Nesting

Definition of Interrupt Latency


The amount of time between the generation of an interrupt and its handling is known as
interrupt latency. It is determined and affected by:
 The number of created interrupts.
 The number of enabled interrupts.
 The number of interrupts that may be handled.
 The time required to handle each interrupt.
How the CPU reacts to interrupts:
 Interrupt Detection
 Interrupt Acknowledgment
 Interrupt Handling
 Context Saving
 Transfer Control
 Interrupt Servicing

Triggering Methods:
 Level-Trigger
 Edge-Trigger

Benefits and Functions of Interrupts


 Real-time Responsiveness
 Efficient Resource usage
 Multitasking and Concurrency
 Improved system Throughput

Instruction Set of Microprocessors


Definition of an Instruction Set

The instruction set can be defined as the group of commands that a microprocessor uses to
perform tasks. It is also the set of instructions and commands used to develop software for
the processor.

Classification of an instruction set


 Data Movement Instructions. E.g. MOV.
 Arithmetic Instructions. E.g. ADD.
 Logic Instructions. E.g. AND.
 Control Transfer Instructions. E.g. JMP
 String Instructions. E.g. MOVSB.
 Input /Output Instructions. E.g. IN.
 Flag Control Instructions. E.g. CLC.
 Etc.

Process Control Instructions are used to control the order of execution in a program and in
processes. They involve commands to branch, loop, and call functions or subroutines. (A small
interpreter sketch after the list below illustrates these ideas.)

Note that:
 Branching instructions transfer the flow of execution, either on certain conditions or
unconditionally, to another part of the program.
 Looping instructions can be used to repeatedly execute a block of code, either
conditionally or unconditionally.
 Subroutine instructions are used to call and return from subroutines, enhancing code
modularity and reusability.
 Unconditional jumps are instructions included in the program flow that jump to a
predetermined location without taking any condition into account.
 Conditional jumps move the control flow to a specific address depending on the truth
value of a condition.
 Subroutine call instructions transfer control to a subroutine, allowing task execution
while promoting code modularity and reusability.
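
To make the jump, loop, and subroutine ideas above concrete, here is a small hypothetical
interpreter sketch; the mnemonics mirror typical ones (JMP, JNZ, CALL, RET) but the program
and its encoding are made up purely for illustration.

registers = {"CX": 3, "AX": 0}   # CX acts as the loop counter, AX as an accumulator
call_stack = []                  # return addresses for subroutine calls

program = [
    ("CALL", 5),        # 0: subroutine call -> remember return address, jump to line 5
    ("DEC", "CX"),      # 1: loop body: decrement the counter
    ("JNZ", 0),         # 2: conditional jump: repeat while CX is not zero
    ("JMP", 4),         # 3: unconditional jump to the halt instruction
    ("HLT",),           # 4: stop execution
    ("ADD", "AX", 10),  # 5: subroutine body: AX = AX + 10
    ("RET",),           # 6: subroutine return to the saved address
]

pc = 0
while True:
    op, *args = program[pc]
    if op == "HLT":
        break
    elif op == "JMP":                    # unconditional jump
        pc = args[0]
    elif op == "JNZ":                    # conditional jump on CX != 0
        pc = args[0] if registers["CX"] != 0 else pc + 1
    elif op == "CALL":                   # push return address, enter subroutine
        call_stack.append(pc + 1)
        pc = args[0]
    elif op == "RET":                    # pop return address, resume caller
        pc = call_stack.pop()
    elif op == "DEC":
        registers[args[0]] -= 1
        pc += 1
    elif op == "ADD":
        registers[args[0]] += args[1]
        pc += 1

print(registers)                         # prints: {'CX': 0, 'AX': 30}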

Addressing Modes
Addressing modes in computer architecture are the ways of specifying the operand(s) of an
instruction. These modes define how the processor finds the data it needs to execute a
command. Examples of the addressing modes in the Intel 8086 include: Immediate Addressing,
Register Addressing, Direct Addressing, Indirect Addressing, Indexed Addressing, Based
Addressing, Based Indexed Addressing, etc.
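
As a rough illustration only (not an accurate model of the 8086), the sketch below shows how
a processor might resolve an operand under a few of these addressing modes; the register
names, addresses, and memory contents are invented.

registers = {"AX": 7, "BX": 0x10, "SI": 2}       # made-up register file
memory = {0x10: 99, 0x12: 55, 0x20: 123}         # made-up memory contents

def resolve(mode, operand):
    """Return the operand value under a given addressing mode."""
    if mode == "immediate":           # the operand is the value itself
        return operand
    if mode == "register":            # the value is held in a register
        return registers[operand]
    if mode == "direct":              # the operand is a memory address
        return memory[operand]
    if mode == "register_indirect":   # a register holds the memory address
        return memory[registers[operand]]
    if mode == "indexed":             # effective address = base + index register
        base, index_reg = operand
        return memory[base + registers[index_reg]]
    raise ValueError("unknown addressing mode")

print(resolve("immediate", 42))              # 42
print(resolve("register", "AX"))             # 7
print(resolve("direct", 0x20))               # 123
print(resolve("register_indirect", "BX"))    # memory[0x10] -> 99
print(resolve("indexed", (0x10, "SI")))      # memory[0x10 + 2] -> 55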
