Comp Architecture


Operating System (OS)

An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.

Following are some of the important functions of an operating system.

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
An operating system provides services both to users and to programs.

 It provides programs an environment to execute.
 It provides users the services to execute programs in a convenient manner.
Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Multitasking
Multitasking is when the CPU executes multiple jobs by switching between them.
The switches occur so frequently that users can interact with each program while
it is running.

Multiprogramming
Sharing the processor when two or more programs reside in memory at the same
time is referred to as multiprogramming. Multiprogramming assumes a single shared
processor. Multiprogramming increases CPU utilization by organizing jobs so that the
CPU always has one to execute. The following figure shows the memory layout for a
multiprogramming system.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ
in different operating systems, and the names of these states are also not
standardized.

S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are
waiting for the operating system to allocate the processor to them so that
they can run. A process may enter this state after the Start state, or while
running, when the scheduler interrupts it to assign the CPU to some other process.

3 Running
Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from
main memory.
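The five states above can be sketched as a small state machine. This is a reading of the descriptions in the table, not a standard API; the names and transition table are illustrative:

```python
from enum import Enum

class State(Enum):
    START = "start"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions taken from the five-state model described above.
TRANSITIONS = {
    State.START: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

def move(current, nxt):
    """Return the new state, or raise if the transition is illegal."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt
```

Note that a waiting process cannot jump straight back to running: it must re-enter the ready queue and be picked by the scheduler again.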
1. What are the main purposes of an operating system?

Answer: The main purposes of OS are processor management, memory management, file
management, I/O handling, security, error detection, control over system performance, and
job accounting.

2. What do you know about process and program?

Answer: A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming language.
A process is basically a program in execution. We write our computer programs in a text file
and when we execute this program, it becomes a process which performs all the tasks
mentioned in the program.

3. What is a process? What information does an OS generally need to keep about
running processes in order to execute them? Or: what is a PCB?
Answer: We write our computer programs in a text file and when we execute this program, it
becomes a process which performs all the tasks mentioned in the program.

The OS maintains a Process Control Block (PCB) for each process. In the PCB, the OS keeps
all the information needed to track the running process. The OS generally keeps the
following information in the PCB.

 Process ID
 State
 Pointer
 Priority
 Program counter
 CPU registers
 I/O information
 Accounting information
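A toy PCB holding the fields listed above might look like this. The field names and types are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block with the fields listed above.
    Names are illustrative; real kernels store far more."""
    pid: int                       # process ID
    state: str = "ready"           # current process state
    priority: int = 0              # scheduling priority
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O information
    cpu_time_used: float = 0.0                      # accounting information
```

On a context switch, the OS saves the running process's registers and program counter into its PCB and restores them from the PCB of the next process.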

4. What is Ready Queue?


Answer: The ready queue is a process scheduling queue. It holds the set of all processes
that reside in main memory, ready and waiting to execute. A new process is always put
in the ready queue.
5. Differentiate between long term scheduler and short term scheduler?

Answer: The long-term scheduler determines which processes are admitted to the system for
processing. It selects processes from the job queue and loads them into memory for
execution, where they become candidates for CPU scheduling. When a process changes state
from new to ready, the long-term scheduler is involved.

The CPU scheduler selects a process from among the processes that are ready to execute and
allocates the CPU to it. Short-term schedulers, also known as dispatchers, decide which
process to execute next. Short-term schedulers are faster than long-term schedulers.

6. What is dispatcher?

Answer: Short-term schedulers, also known as dispatchers, decide which process to execute
next. The short-term scheduler selects a process from among the processes that are ready
to execute and allocates the CPU to it.

7. Define Roll in and Roll out with respect to swapping?

Answer: When a process requests I/O it becomes suspended, or its priority may be lowered.
The medium-term scheduler performs swapping: it temporarily removes (rolls out) suspended
or lower-priority processes from main memory to secondary memory, making space to bring
in (roll in) higher-priority processes. This is the concept of roll in and roll out with
respect to swapping.

8. What is context switch?

Answer: The process of switching from one process to another is called context switch. A
context switch is the mechanism to store and restore the state or context of a Process in
Process Control block so that a process execution can be resumed from the same point at a
later time.

9. Which of the following are non-preemptive? FIFO, Shortest job first or Round
Robin?

Answer: FIFO and Shortest Job First are non-preemptive. Round Robin is preemptive.

10. What is the difference between preemptive and non-preemptive multitasking?

Answer: Scheduling algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state, it cannot be
preempted until it terminates or blocks, whereas preemptive scheduling is based on
priority: the scheduler may preempt a low-priority running process at any time when a
high-priority process enters the ready state.
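The preemptive/non-preemptive distinction can be sketched with two toy schedulers. This is a minimal simulation, assuming all jobs arrive at time 0; the names and return format are illustrative:

```python
from collections import deque

def fcfs(bursts):
    """Non-preemptive FCFS: each job runs to completion in arrival order.
    Returns each job's finish time."""
    t, finish = 0, {}
    for name, burst in bursts:
        t += burst
        finish[name] = t
    return finish

def round_robin(bursts, quantum):
    """Preemptive round robin: a job is preempted after `quantum` ticks
    and sent to the back of the ready queue."""
    t, finish = 0, {}
    queue = deque(bursts)
    while queue:
        name, left = queue.popleft()
        run = min(left, quantum)
        t += run
        if left - run > 0:
            queue.append((name, left - run))  # preempted, not finished
        else:
            finish[name] = t
    return finish
```

With jobs A (burst 3) and B (burst 1), FCFS makes B wait behind A, while round robin with quantum 1 preempts A so B finishes at time 2.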

11. What is the difference between process and thread?

Answer: A process is basically a program in execution. We write our computer programs in a
text file and when we execute this program, it becomes a process which performs all the
tasks mentioned in the program.

A thread is a segment of a process: a process can have multiple threads, and these threads
are contained within the process. A thread is a flow of execution through the process code.

12. What is a Clustered system?

Answer: A clustered system is created when two or more computers are joined so that they
share common storage and work together as one system. Clustered systems resemble parallel
systems in that they possess multiple CPUs.
13. Differentiate between user level and kernel level threads?

Answer: Types of Thread

Threads are implemented in following two ways −

 User Level Threads − User-managed threads.
 Kernel Level Threads − Operating-system-managed threads acting on the kernel, the
operating system core.

14. How can we differentiate an interrupt from a trap?

Answer: Traps and interrupts are two types of events. A trap (software interrupt) is raised
by a user program, whereas an interrupt (hardware interrupt) is raised by a hardware device
such as a keyboard or timer. A trap passes control to the trap handler and an interrupt
passes control to an interrupt handler.

15. What is a system call? Or what is the purpose of system calls?

Answer: When a program needs to access the operating system's kernel, it makes a
system call. The system call uses an Application Program Interface (API) to expose the
operating system's services to user programs. It is the only method to access the kernel
system. All programs or processes that require resources for execution must use system calls,
as they serve as an interface between the operating system and user programs.

There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
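As a rough illustration, Python's os module is a thin wrapper over the kernel's system-call interface. The usual POSIX call is noted in each comment; the exact calls made vary by platform:

```python
import os
import tempfile

# Each line below ultimately issues one or more system calls.
pid = os.getpid()              # process control (getpid)
fd, path = tempfile.mkstemp()  # file management (open/creat)
os.write(fd, b"hello")         # I/O (write)
os.close(fd)                   # file management (close)
os.remove(path)                # file management (unlink)
cwd = os.getcwd()              # information maintenance (getcwd)
```

A user program never touches the disk or the process table directly; every one of these operations crosses into the kernel through the system-call interface.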

16. Name the techniques to manage free space


 External fragmentation can be reduced by compaction: shuffling memory contents
to place all free memory together in one large block. To make compaction
feasible, relocation should be dynamic.
 Internal fragmentation can be reduced by assigning the smallest partition
that is still large enough for the process.
 Swapping is also known as a technique for memory compaction.

17. What is the purpose of compaction?

Answer: Swapping is also known as a technique for memory compaction. Swapping is a
mechanism in which a process can be swapped temporarily out of main memory to secondary
storage (disk), making that memory available to other processes. At some later time, the
system swaps the process back from secondary storage to main memory. Though performance
is usually affected by swapping, it helps in running multiple large processes in
parallel, which is the reason it is used.

18. What is the difference between physical and logical address?

Answer: We have two types of addresses that are logical address and physical address. The
logical address is a virtual address and can be viewed by the user. The user can’t view the
physical address directly. The logical address is used like a reference, to access the physical
address. The fundamental difference between logical and physical address is that logical
address is generated by CPU during a program execution whereas, the physical address refers
to a location in the memory unit.
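A minimal sketch of the translation, assuming a simple base/limit relocation scheme as a stand-in for what the MMU does in hardware (the function name and error type are illustrative):

```python
def to_physical(logical, base, limit):
    """Translate a CPU-generated logical address into a physical address
    using base/limit relocation: the base register holds where the process
    was loaded, the limit register bounds its address space."""
    if not 0 <= logical < limit:
        raise MemoryError(f"logical address {logical} out of bounds")
    return base + logical
```

For example, if a process is loaded at physical address 14000 with a limit of 1000, logical address 346 maps to physical address 14346, and logical address 1000 is rejected as a protection fault.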
19. What is memory management unit?

Answer: A memory management unit (MMU) is a computer hardware component that
handles all memory and caching operations associated with the processor.

20. Define the concept of dynamic loading? Or what is dynamic loading?

Loading: 
Bringing the program from secondary memory to main memory is called
Loading. 

Dynamic Loading
All the modules are loaded dynamically. The developer provides a reference to all of
them and the rest of the work is done at execution time. Code and data are loaded
piece by piece at run time. Linking takes place dynamically, in relocatable form:
data is loaded into memory only when the program needs it. Processing speed is
slower because modules are loaded at the time of processing.

Static loading: Loading the entire program into main memory before the start of
program execution. Memory is utilized inefficiently, because the entire program is
brought into main memory whether it is required or not. Program execution is faster.

Dynamic loading: Loading the program into main memory on demand. Memory is utilized
efficiently. Program execution is slower.
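In application code, Python's importlib gives a user-level analogue of dynamic loading: a module is brought into memory only when first requested. This is a sketch of the idea, not an OS-level loader:

```python
import importlib

def load_on_demand(module_name):
    """Load a module only when it is first needed: nothing is brought
    into memory until this call runs (a user-level analogue of
    on-demand dynamic loading)."""
    return importlib.import_module(module_name)

# The json module is not loaded until this call executes.
json = load_on_demand("json")
```

The trade-off matches the table above: the first use pays the loading cost at run time, but modules the program never touches are never loaded at all.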

21. What is the purpose of ‘stub’ in dynamic loading?

Answer: The stub is a small piece of code that indicates how to locate the appropriate
memory-resident library routine or how to load the library if the routine is not already
present.
22. What is concept of dynamic linking?

Answer: Establishing the links between all the modules or functions of a program so that
execution can continue is called linking. In dynamic linking, the links between a module
and the program are established on demand at load or run time, rather than at compile time.

23. Discuss the difference between symmetric and asymmetric multiprocessing. What is
symmetric multiprocessing? What are the two types of multiprocessing?

Answer: Multiprocessing is the use of two or more central processing units within a single
computer system. Asymmetric Multiprocessing and Symmetric Multiprocessing are two
types of multiprocessing. 

An asymmetric multiprocessing system is a multiprocessor computer system where not all of
the multiple interconnected central processing units (CPUs) are treated equally. In
asymmetric multiprocessing, only a master processor runs the tasks of the operating
system.
SMP (symmetric multiprocessing) is the processing of programs by multiple processors that
share a common operating system and memory. In symmetric (or "tightly coupled")
multiprocessing, the processors share memory and the I/O bus or data path. A single copy of
the operating system is in charge of all the processors.

24. What is deadlock? Or: what are the deadlock characterizations? Or: what are the four
necessary conditions for deadlock to exist? How can deadlocks be prevented?

Answer: If a process is in the waiting state and is unable to change its state because the
resources required by the process are held by some other waiting process, then the system
is said to be in deadlock.

Deadlock characterization describes the distinctive features that cause deadlock.
Deadlock is a condition in a multiprogramming environment where executing processes get
stuck in the middle of execution, waiting for resources held by other waiting processes,
thereby preventing the processes from making progress. The four conditions that must hold
at the same time for a deadlock to occur are: mutual exclusion, hold and wait, no
preemption, and circular wait.

We can prevent Deadlock by eliminating any of the above four conditions. 

 Eliminate mutual exclusion
 Eliminate hold and wait
 Eliminate no preemption
 Eliminate circular wait

What is Deadlock?
Deadlock is a situation where two or more processes are waiting for
each other. For example, let us assume, we have two processes P1 and
P2. Now, process P1 is holding the resource R1 and is waiting for the
resource R2. At the same time, the process P2 is having the resource
R2 and is waiting for the resource R1. So, the process P1 is waiting for
process P2 to release its resource and at the same time, the process
P2 is waiting for process P1 to release its resource. And no one is
releasing any resource. So, both are waiting for each other to release
the resource. This leads to infinite waiting and no work is done here.
This is called Deadlock.

Necessary Conditions of Deadlock


There are four different conditions that result in Deadlock. These
four conditions are also known as Coffman conditions and these
conditions are not mutually exclusive. Let's look at them one by one.
 Mutual Exclusion: A resource can be held by only one
process at a time. In other words, if a process P1 is using some
resource R at a particular instant of time, then some other
process P2 can't hold or use the same resource R at that
particular instant of time. The process P2 can make a request
for that resource R but it can't use that resource simultaneously
with process P1.

 Hold and Wait: A process can hold a number of resources at a
time and at the same time, it can request for other resources
that are being held by some other process. For example, a
process P1 can hold two resources R1 and R2 and at the same
time, it can request some resource R3 that is currently held by
process P2.

 No preemption: A resource can't be forcefully preempted from one
process by another process. For example, if a process
P1 is using some resource R, then some other process P2 can't
forcefully take that resource. If that were allowed, what would be the need for
the various scheduling algorithms? The process P2 can request for
the resource R and can wait for that resource to be freed by the
process P1.
 Circular Wait: Circular wait is a condition when the first
process is waiting for the resource held by the second process,
the second process is waiting for the resource held by the third
process, and so on. At last, the last process is waiting for the
resource held by the first process. So, every process is waiting
for each other to release the resource and no one is releasing
their own resource. Everyone is waiting here for getting the
resource. This is called a circular wait.
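The circular-wait condition can be checked mechanically on a wait-for graph. A minimal sketch, assuming each blocked process waits on exactly one other process (the dictionary representation is illustrative):

```python
def has_circular_wait(wait_for):
    """wait_for maps each blocked process to the process holding the
    resource it needs. Any cycle means the circular-wait condition
    holds, so deadlock is possible."""
    for start in wait_for:
        visited = set()
        node = start
        # Follow the chain of waits; a revisited node means a cycle.
        while node in wait_for:
            if node in visited:
                return True
            visited.add(node)
            node = wait_for[node]
    return False
```

In the P1/P2 example above, `{"P1": "P2", "P2": "P1"}` contains a cycle, so all four Coffman conditions can hold and the system may deadlock.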

Starvation is the problem that occurs when high-priority processes keep
executing and low-priority processes are blocked for an indefinite time. In a heavily
loaded computer system, a steady stream of higher-priority processes can
prevent a low-priority process from ever getting the CPU. In starvation,
resources are continuously utilized by high-priority processes. The problem of
starvation can be resolved using aging: in aging, the priority of long-waiting
processes is gradually increased.
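The aging remedy can be sketched as a tiny priority-boosting step applied on each scheduling tick. The `boost` and `cap` parameters are illustrative, not standard values, and higher numbers mean higher priority here:

```python
def age(ready_queue, boost=1, cap=100):
    """Apply one aging step: bump the priority of every waiting process
    so that a long-waiting low-priority process eventually runs."""
    return {pid: min(prio + boost, cap) for pid, prio in ready_queue.items()}
```

Repeated ticks guarantee that even a process that started at the lowest priority eventually reaches a level where the scheduler must pick it, eliminating starvation.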

25. What is ISR?

An interrupt service routine (ISR) is a software routine that hardware invokes in response
to an interrupt. The ISR examines the interrupt, determines how to handle it, executes the
handling, and then returns a logical interrupt value. If no further handling is required,
the ISR notifies the kernel with a return value.

26. What is critical section?

Answer: The critical section is a code segment where the shared variables can be
accessed. An atomic action is required in a critical section i.e. only one process can execute
in its critical section at a time. All the other processes have to wait to execute in their critical
sections.

27. What are the benefits of virtual memory?

Virtual memory is an area of a computer system's secondary storage
space (such as a hard disk or solid state drive) which acts as if it were a part of the
system's RAM or primary memory.

A computer can address more memory than the amount physically installed on the system.
This extra memory is called virtual memory, and it is a section of the hard disk set up
to emulate the computer's RAM. The paging technique plays an important role in
implementing virtual memory. But in fact the characteristics of virtual memory are
different from those of physical memory: the key difference is that RAM is very much
faster than virtual memory.

28. What do you mean a busy waiting semaphore or spinlock? Or what is spinlock?
How to avoid spinlock or busy waiting semaphore?

Processes waiting on a semaphore must constantly check whether the semaphore has become
nonzero. This continual looping is called busy waiting; it is clearly a problem in a real
multiprogramming system because it wastes CPU cycles. A semaphore that does this is
called a spinlock.
To avoid busy waiting, a semaphore may use an associated queue of processes that are
waiting on it, allowing the semaphore to block a process and then wake it when the
semaphore is incremented.
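A toy contrast between the two, assuming CPython's threading module. The SpinLock class is illustrative: it approximates an atomic test-and-set with a non-blocking Lock.acquire, while threading.Semaphore puts the waiter to sleep instead of spinning:

```python
import threading

class SpinLock:
    """Toy spinlock: acquire() busy-waits until the flag is free.
    Real spinlocks use an atomic test-and-set instruction; here a
    non-blocking Lock.acquire approximates one (illustrative only)."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Busy waiting: this loop burns CPU cycles until the flag frees.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

# A blocking semaphore avoids the spin: the kernel sleeps the waiter
# and wakes it when the semaphore is incremented (released).
sem = threading.Semaphore(1)

def with_semaphore(fn):
    with sem:  # blocks without spinning if another thread is inside
        return fn()
```

Both provide mutual exclusion; the difference is what a waiter does, spin (wasting its time slice) versus sleep (yielding the CPU to useful work).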

29. Differentiate between Entry Section and Remainder Section?

Answer: Sections of a Program

 Entry Section: It is the part of the code in which a process requests
permission to enter its critical section.
 Critical Section:  The critical section is a code segment where the shared variables
can be accessed. An atomic action is required in a critical section i.e. only one process
can execute in its critical section at a time. All the other processes have to wait to
execute in their critical sections.
 Exit Section: Exit section allows the other processes that are waiting in the
Entry Section, to enter into the Critical Sections. It also checks that a process
that finished its execution should be removed through this Section.
 Remainder Section: All other parts of the Code, which is not in Critical,
Entry, and Exit Section, are known as the Remainder Section.
What is Process Synchronization?
Process synchronization is the task of coordinating the execution of processes so that
no two processes simultaneously access the same shared data and resources.

Unsynchronized access can lead to inconsistency of shared data, so a change made by one
process is not necessarily reflected when other processes access the same shared data.
To avoid this type of inconsistency, the processes need to be synchronized with each
other.
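The four sections can be annotated on a small sketch, with Python's threading.Lock standing in for the entry/exit protocol (names and structure are illustrative):

```python
import threading

lock = threading.Lock()
shared = []  # shared data, touched only inside the critical section

def process(item):
    lock.acquire()       # entry section: request permission to enter
    shared.append(item)  # critical section: access the shared variable
    lock.release()       # exit section: let a waiting process proceed
    return item * 2      # remainder section: private work, no shared data
```

Without the lock, two threads could interleave their appends and corrupt the shared list; with it, only one process executes its critical section at a time.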

30. What are the three different stages/times when the address can be bound to
instruction and data?

Address Binding divided into three types as follows.

 Compile-time Address Binding
 Load-time Address Binding
 Execution-time Address Binding

31. What is disk controller?

The disk controller is the controller circuit which enables the CPU to communicate with a
hard disk, floppy disk or other kind of disk drive. It also provides an interface between the
disk drive and the bus connecting it to the rest of the system.

32. What is garbage collection?


Garbage collection (GC) is a dynamic approach to automatic memory management and heap
allocation that identifies dead memory blocks and reclaims their storage for reuse. It is
a function of an operating system or programming language that reclaims memory no longer
in use.

33. What is the cause of thrashing?

Thrashing occurs when a system spends more time servicing page faults than executing
processes, with pages being swapped in and out continuously. A high degree of
multiprogramming and a lack of free frames are the two main causes of thrashing in an
operating system.

34. What is meant by dispatch latency?

Answer: Dispatch latency is the time it takes the dispatcher to stop one process and
start another running, i.e. the delay before a scheduled process begins execution.

Safe State: A state is safe if the system can allocate all resources requested by all
processes (up to their stated maximums) without entering a deadlock state.

Throughput is the amount of work completed in a unit of time; for CPU scheduling, it is
the number of processes completed per unit of time.

Response time refers to the average time it takes for the computer to begin responding
to a request, such as opening a file or moving the cursor.

What is the difference between "progress" and "bounded waiting," which are the two
requirements for handling the critical section problem in an operating system?

Progress means that if no process is executing in its critical section and some processes
wish to enter, the selection of the next process to enter cannot be postponed indefinitely.

Bounded waiting means no process should wait for a resource for an infinite amount of
time: there is a bound on the number of times other processes may enter their critical
sections after a process has made a request and before that request is granted.

Process affinity - Process affinity refers to the processors that the threads of a given
process may run on.

What is interprocess communication in an operating system?


Interprocess communication is the mechanism provided by the operating system
that allows processes to communicate with each other. This communication could
involve a process letting another process know that some event has occurred or the
transferring of data from one process to another.
Advantages of using CICS Inter Process Communication
 Use of shared memory for communication limits Remote Procedure Call
communication to the local machine.
 Only users with access to the shared memory can view the calls.
 Uses OS-provided authentication in the absence of DCE security.
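A pipe is one of the simplest IPC mechanisms an OS provides. A minimal sketch using Python's os.pipe; both ends live in one process here to keep it portable, though on POSIX the same descriptors connect a parent and child across fork():

```python
import os

# The OS gives us two file descriptors: bytes written to the write end
# come out of the read end, letting one process signal another that an
# event has occurred or hand it data.
read_end, write_end = os.pipe()
os.write(write_end, b"event occurred")
os.close(write_end)            # closing the write end signals end-of-data
msg = os.read(read_end, 1024)  # the receiver reads the transferred bytes
os.close(read_end)
```

Pipes, shared memory, and message queues all serve the same purpose described above: letting processes exchange data or notify each other of events through the kernel.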
