Comp Architecture
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
Memory Management
Processor Management
Device Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
An Operating System provides services to both the users and to the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by
switching between them. Switches occur so frequently that the users may interact with
each program while it is running.
Multiprogramming
Sharing the processor when two or more programs reside in memory at the same
time is referred to as multiprogramming. Multiprogramming assumes a single shared
processor. Multiprogramming increases CPU utilization by organizing jobs so that the
CPU always has one to execute.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ
between operating systems, and the names of these states are not standardized. A
minimal code sketch of these states follows the list.
1 Start
This is the initial state when a process is first started/created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the operating system so that
they can run. A process may come into this state after the Start state, or while
running, when it is interrupted by the scheduler so that the CPU can be assigned to
some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from
main memory.
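As a rough illustration (the names below are illustrative, not taken from any particular kernel), the five states could be represented in C as a simple enumeration:

```c
/* Illustrative sketch: the five process states described above, as an enum
   an OS might store in each process control block (names are assumptions). */
enum proc_state {
    PROC_NEW,         /* Start: process just created              */
    PROC_READY,       /* waiting for the CPU to be allocated       */
    PROC_RUNNING,     /* instructions being executed on a CPU      */
    PROC_WAITING,     /* blocked on I/O or another resource        */
    PROC_TERMINATED   /* finished, waiting to be removed from RAM  */
};
```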
1. What are the main purposes of an operating system?
Answer: The main purposes of OS are processor management, memory management, file
management, I/O handling, security, error detection, control over system performance, and
job accounting.
Answer: A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming language.
A process is basically a program in execution. We write our computer programs in a text file
and when we execute this program, it becomes a process which performs all the tasks
mentioned in the program.
The OS maintains a Process Control Block (PCB) for each process. In the PCB, the OS keeps
all the information it needs to track the running process. The OS generally keeps the
following information in the PCB (a simplified sketch follows the list).
Process ID
State
Pointer
Priority
Program counter
CPU registers
I/O information
Accounting information
etc.
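A simplified, illustrative sketch of what such a PCB might look like in C (real kernels, e.g. Linux's task_struct, are far larger, and all field names here are assumptions):

```c
/* Hypothetical, simplified PCB covering the fields listed above. */
struct pcb {
    int           pid;              /* Process ID                               */
    int           state;            /* e.g. NEW, READY, RUNNING, WAITING        */
    struct pcb   *next;             /* pointer linking PCBs in a queue          */
    int           priority;         /* scheduling priority                      */
    unsigned long program_counter;  /* address of the next instruction          */
    unsigned long registers[16];    /* saved CPU registers                      */
    int           open_files[16];   /* I/O information: open file descriptors   */
    unsigned long cpu_time_used;    /* accounting information                   */
};
```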
Answer: The long-term scheduler determines which processes are admitted to the system for
processing. It selects processes from the job queue and loads them into memory for
execution, so that they are available for CPU scheduling. The long-term scheduler is
involved when a process changes state from new to ready.
The CPU (short-term) scheduler selects a process from among the processes that are ready
to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers,
make the decision of which process to execute next. Short-term schedulers are faster than
long-term schedulers.
6. What is dispatcher?
Answer: The short-term scheduler, also known as the dispatcher, makes the decision of which
process to execute next: it selects a process from among the processes that are ready to
execute and allocates the CPU to it.
Answer: When a process requests I/O, it becomes suspended or drops in priority. The
medium-term scheduler performs swapping: it temporarily removes (rolls out) suspended or
lower-priority processes from main memory to secondary memory, making space to bring
(roll in) higher-priority processes. This is the concept of roll-in and roll-out with
respect to swapping.
Answer: The process of switching from one process to another is called a context switch. A
context switch is the mechanism to store and restore the state, or context, of a process in
its Process Control Block so that the process's execution can be resumed from the same
point at a later time.
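One way to see the save/restore idea in user space is the POSIX <ucontext.h> API, where swapcontext() stores the current register state in one context object and restores another, much as a kernel context switch saves and restores state via the PCB. This is only a sketch of the concept (the task function and stack size are arbitrary):

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16 * 1024];

static void task(void) {
    printf("task: running\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task's context, resume main */
    printf("task: resumed where it left off\n");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;   /* return here when task ends */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);       /* save main, restore task */
    printf("main: back, switching to task again\n");
    swapcontext(&main_ctx, &task_ctx);       /* resume task from its saved point */
    printf("main: done\n");
    return 0;
}
```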
9. Which of the following are non-preemptive? FIFO, Shortest job first or Round
Robin?
Answer: FIFO and Shortest Job First are non-preemptive. Round Robin is preemptive.
A thread is a segment of a process: a process can have multiple threads, and these threads
are contained within the process. A thread is a flow of execution through the process code.
Answer: A cluster system is created when two or more computers are merged, all the
computers share common storage, and the system works together. Cluster systems resemble
parallel systems in that they possess multiple CPUs.
13. Differentiate between user level and kernel level threads?
Answer: User-level threads are created and managed by a thread library in user space,
without kernel involvement, so they are fast to create and switch but the kernel schedules
the whole process as a single unit. Kernel-level threads are created and scheduled directly
by the operating system kernel, which can run them on different processors, but their
management is slower because it requires kernel intervention.
Answer: Traps and interrupts are two types of events. A trap (software interrupt) is raised by a user
program whereas an interrupt/hardware interrupt is raised by a hardware device such as
keyboard, timer, etc. A trap passes the control to the trap handler and the interrupt passes the
control to an interrupt handler.
Answer: When a program needs to access the operating system's kernel, it makes a
system call. The system call uses an Application Program Interface (API) to expose the
operating system's services to user programs. It is the only method to access the kernel
system. All programs or processes that require resources for execution must use system calls,
as they serve as an interface between the operating system and user programs.
There are commonly five types of system calls (a process-control example follows the list). These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
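As a sketch of the process-control category on a POSIX system, the well-known fork(), execvp(), and waitpid() calls create a child process, replace its image with another program, and let the parent wait for it (the "ls -l" command is just an example):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  /* process control: create a child */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                      /* child process */
        char *argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);              /* replace the child with the ls program */
        perror("execvp");                /* reached only if exec failed */
        exit(EXIT_FAILURE);
    }
    int status;
    waitpid(pid, &status, 0);            /* parent waits for the child to exit */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```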
Answer: We have two types of addresses: the logical address and the physical address. The
logical address is a virtual address and can be viewed by the user, whereas the user cannot
view the physical address directly. The logical address is used like a reference to access
the physical address. The fundamental difference between the two is that the logical
address is generated by the CPU during program execution, whereas the physical address
refers to a location in the memory unit.
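A minimal sketch of how a logical address can map to a physical one under simple base-and-limit relocation (the base and limit values below are made up for the example; real MMUs use paging or segmentation hardware):

```c
#include <stdio.h>

#define BASE  0x4000u   /* relocation (base) register: start of the process in RAM */
#define LIMIT 0x1000u   /* limit register: size of the process's logical space     */

/* Translate a logical address; print a trap message and return 0 if out of range. */
static unsigned translate(unsigned logical) {
    if (logical >= LIMIT) {
        printf("trap: logical address 0x%x exceeds the limit\n", logical);
        return 0;
    }
    return BASE + logical;   /* physical address = base + logical offset */
}

int main(void) {
    printf("logical 0x0346 -> physical 0x%x\n", translate(0x0346));
    translate(0x2000);       /* beyond the limit: the MMU would trap to the OS */
    return 0;
}
```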
19. What is memory management unit?
Answer: A memory management unit (MMU) is a computer hardware component that
handles all memory and caching operations associated with the processor.
Loading:
Bringing the program from secondary memory to main memory is called
Loading.
Dynamic Loading
In dynamic loading, modules are loaded only as they are needed. The developer provides a
reference to all of them, and the rest of the work is done at execution time: loading of
code and data takes place piece by piece at run time, the linking process takes place
dynamically in relocatable form, and data is loaded into memory only when the program
needs it. Processing can be slower because modules are loaded at the time of processing.
Static loading: loading the entire program into main memory before the start of program execution.
Dynamic loading: loading the program into main memory on demand.
Answer: The stub is a small piece of code that indicates how to locate the appropriate
memory-resident library routine or how to load the library if the routine is not already
present.
22. What is the concept of dynamic linking?
Answer: Establishing the links between all the modules or all the functions of the program
so that program execution can continue is called linking. In dynamic linking, the links
between a module and the program are established on demand at run time, rather than being
fully resolved at compile/link time.
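A small sketch of run-time dynamic linking using the POSIX dlopen()/dlsym() interface: the math library is opened on demand and the cos symbol is resolved only when it is needed (the library name "libm.so.6" is Linux-specific, and the program is linked with -ldl):

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load the library on demand */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);                                  /* release the library */
    return 0;
}
```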
Answer: Multiprocessing is the use of two or more central processing units within a single
computer system. Asymmetric Multiprocessing and Symmetric Multiprocessing are two
types of multiprocessing.
24. What is deadlock? Or what are the deadlock characterizations? Or what are the four
necessary conditions for a deadlock to exist? How can deadlocks be prevented?
Answer: If a process is in the waiting state and is unable to change its state because the
resources required by the process are held by some other waiting process, then the system is
said to be in deadlock.
What is Deadlock?
Deadlock is a situation where two or more processes are waiting for
each other. For example, let us assume, we have two processes P1 and
P2. Now, process P1 is holding the resource R1 and is waiting for the
resource R2. At the same time, the process P2 is having the resource
R2 and is waiting for the resource R1. So, the process P1 is waiting for
process P2 to release its resource and at the same time, the process
P2 is waiting for process P1 to release its resource. And no one is
releasing any resource. So, both are waiting for each other to release
the resource. This leads to infinite waiting and no work is done here.
This is called Deadlock. A deadlock can arise only if four conditions hold
simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait.
Deadlocks can be prevented by ensuring that at least one of these conditions cannot
hold, for example by requiring processes to acquire resources in a fixed global order
so that circular wait is impossible.
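The P1/P2 example above can be sketched with two pthread mutexes standing in for R1 and R2, locked in opposite order; the sleep() calls just make the bad interleaving likely, and once it happens the two threads wait for each other forever:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

static void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r1);      /* P1 holds R1 ...      */
    sleep(1);
    pthread_mutex_lock(&r2);      /* ... and waits for R2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

static void *p2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r2);      /* P2 holds R2 ...      */
    sleep(1);
    pthread_mutex_lock(&r1);      /* ... and waits for R1: circular wait, deadlock */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);       /* never returns once the deadlock occurs */
    pthread_join(t2, NULL);
    return 0;
}
```

Making both threads lock r1 before r2 (a fixed global order) removes the circular wait and prevents the deadlock.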
Starvation is the problem that occurs when high-priority processes keep executing and
low-priority processes are blocked for an indefinite time. In a heavily loaded computer
system, a steady stream of higher-priority processes can prevent a low-priority process
from ever getting the CPU; resources are continuously used by the high-priority
processes. The problem of starvation can be resolved using aging, in which the priority
of long-waiting processes is gradually increased.
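A minimal sketch of aging: a periodic pass over the waiting tasks that boosts the priority of any task that has waited too long (the struct fields and threshold are illustrative assumptions, and a smaller number is assumed to mean higher priority):

```c
struct task { int priority; int waiting_ticks; };

#define AGE_THRESHOLD 100   /* ticks a task may wait before being boosted */
#define MAX_PRIORITY  0     /* assumed: smaller number = higher priority  */

/* Called periodically (in this sketch) by the scheduler to prevent starvation. */
void age_tasks(struct task *tasks, int n) {
    for (int i = 0; i < n; i++) {
        tasks[i].waiting_ticks++;
        if (tasks[i].waiting_ticks >= AGE_THRESHOLD &&
            tasks[i].priority > MAX_PRIORITY) {
            tasks[i].priority--;           /* raise the long-waiting task */
            tasks[i].waiting_ticks = 0;    /* restart its waiting counter */
        }
    }
}
```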
Answer: The critical section is a code segment in which shared variables are accessed.
Mutual exclusion is required in a critical section, i.e. only one process can execute in
its critical section at a time; all the other processes have to wait to execute in their
critical sections.
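A small sketch of a critical section protected by a pthread mutex: only one thread at a time executes the increment of the shared counter, so the final value is exactly what is expected:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                                 /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry section    */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section     */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```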
A computer can address more memory than the amount physically installed on the system.
This extra memory is called virtual memory, and it is a section of a hard disk that is set
up to emulate the computer's RAM. The paging technique plays an important role in
implementing virtual memory. In fact, the characteristics of virtual memory are different
from those of physical memory: the key difference is that RAM is very much faster than
virtual memory.
28. What do you mean a busy waiting semaphore or spinlock? Or what is spinlock?
How to avoid spinlock or busy waiting semaphore?
Processes waiting on a semaphore must constantly check to see whether the semaphore is non-zero. This
continual looping is clearly a problem in a real multiprogramming system; it is called busy waiting and
it wastes CPU cycles. A semaphore that busy-waits in this way is called a spinlock.
To avoid busy waiting, a semaphore may use an associated queue of processes that are waiting on the
semaphore, allowing the semaphore to block the process and then wake it when the semaphore is
incremented.
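A bare-bones sketch of a busy-waiting spinlock built on a C11 atomic flag; the loop in spin_lock() is exactly the busy waiting described above, whereas a blocking semaphore would instead put the waiting process to sleep on a queue:

```c
#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* Busy waiting: keep testing-and-setting until the flag was previously clear. */
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ;   /* spin here, wasting CPU cycles while the lock is held */
}

void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}
```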
Concurrent access to shared data can lead to inconsistency of that data: a change made by
one process is not necessarily reflected when other processes access the same shared data.
To avoid this kind of inconsistency, the processes need to be synchronized with each other.
30. What are the three different stages/times when the address can be bound to
instruction and data?
Answer: Address binding can happen at compile time, at load time, or at execution (run) time.
The disk controller is the controller circuit which enables the CPU to communicate with a
hard disk, floppy disk or other kind of disk drive. It also provides an interface between the
disk drive and the bus connecting it to the rest of the system.
Thrashing occurs when the system spends more time servicing page faults (swapping pages in
and out of memory) than executing processes. A high degree of multiprogramming and a lack
of frames are two main causes of thrashing in an operating system.
Answer: Dispatch latency is the amount of time it takes the dispatcher to stop one process
and start another running, i.e. the time for the system to respond to a request for a
process to begin operation.
Safe state: a state is safe if the system can allocate all resources requested by all
processes (up to their stated maximums) without entering a deadlock state.
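As a sketch, the safety check at the heart of the Banker's algorithm asks whether some order exists in which every process can obtain its remaining need and finish; the matrices in main() are made-up example values:

```c
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes      */
#define R 2   /* number of resource types */

/* Returns true if a safe sequence exists for the given allocation state. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int  work[R];
    bool finished[P] = { false };
    for (int j = 0; j < R; j++) work[j] = avail[j];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* process i could finish...    */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];      /* ...and release its resources */
                finished[i] = true;
                progressed  = true;
                done++;
            }
        }
        if (!progressed) return false;           /* no process can proceed: unsafe */
    }
    return true;
}

int main(void) {
    int avail[R]    = { 1, 1 };
    int alloc[P][R] = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]  = { {1, 1}, {1, 0}, {0, 0} };
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}
```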
Throughput is the amount of work completed in a given period of time; in CPU scheduling it
is measured as the number of processes completed per unit time.
Response time refers to the average time it takes for the computer to respond to whatever
you requested it to do, such as opening a file or moving the cursor.
What is the difference between "progress" and "bounded waiting," which are two of the
requirements for handling the critical section problem in an operating system?
Progress means that if no process is executing in its critical section and some processes
wish to enter, the decision of which process enters next cannot be postponed indefinitely.
Bounded waiting means that no process should wait for a resource for an infinite amount of
time: there is a bound on how many times other processes may enter their critical sections
after a process has made its request.
Process affinity - the set of processors on which the threads of a given process may run.