
Q. What is an operating system? Explain its functions.

An operating system is an organized collection of programs that controls the
execution of application programs and acts as an interface between the computer
user and the computer hardware.
1. Memory management - the operating system deals with the allocation of
main memory and other storage areas to system programs as well as
user programs and data.
2. Processor management - the operating system deals with the
assignment of processors to the different tasks being performed by the
computer system.
3. Device management - the operating system deals with the coordination and
assignment of the different input and output devices while one or more
programs are being executed.
4. Security - the OS keeps the system and programs safe and secure through
authentication.

Q. What is a multiprocessor operating system? State its advantages.

A multiprocessor OS uses two or more processors within a single
computer system. There are two types of multiprocessor system:
1) Asymmetric multiprocessing.
2) Symmetric multiprocessing.
Advantages:
• More work is done in a shorter period of time.
• It can also save cost compared to multiple single-processor systems.
• It increases reliability.
• The system provides higher performance due to parallel processing.

Q.Operating system structure?

Q. What is a Kernel?
The kernel manages all operations of the computer and its hardware, and acts as a
bridge between the user and the resources of the system. The kernel is the
innermost layer and the central controlling part of the operating system, through
which all access to computer resources passes.

Q. What is a microkernel? What are its advantages?

A microkernel is a piece of software that contains the near-minimum amount
of functions and features required to implement an operating system.
The main function of a microkernel is to provide communication facilities
between the client program and the various services that run in user mode;
communication is achieved by message passing.
Advantages:
• It provides high security and reliability.
• Extending the operating system is easier.
• The microkernel structure is portable due to the small size of the kernel.
• It is easy to port the operating system from one hardware platform to another.

Q. Define system boot?
Booting the system is done by loading the kernel into main memory and
starting its execution. The process of starting a computer by loading the kernel
is known as system booting.

Q. What is the function of the bootstrap loader?

The bootstrap loader locates the kernel, loads it into main memory, and starts its
execution.

Q. Types of system calls?

• Process control - these system calls deal with processes. A process is
nothing but a program in execution. Some examples of these system calls
are: end, create process, terminate process, load, execute.
• File management - these system calls are used to handle or deal with
files. Eg: create file, delete file, open, close, read, write.
• Device management - these system calls are used to deal with device
manipulation. Eg: request device, read and write device operations.
• Information maintenance - these system calls handle information and its
transfer between the operating system and the user program.
• Communication - these system calls are useful for interprocess
communication.
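The file-management calls listed above can be sketched with Python's os module, whose low-level functions are thin wrappers over the corresponding system calls (the file name and contents here are arbitrary examples):

```python
# Sketch: exercising file-management system calls (create, write,
# close, open, read, delete) through Python's os module wrappers.
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # create/open
os.write(fd, b"hello via system calls")                    # write
os.close(fd)                                               # close

fd = os.open(path, os.O_RDONLY)                            # open
data = os.read(fd, 100)                                    # read
os.close(fd)
os.remove(path)                                            # delete
print(data.decode())
```

Each os call here maps onto one of the system-call categories described in the answer, which is why the same verbs (open, read, write, close) appear in both.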

Q. List any two examples of the many-to-many model?

Solaris 2, IRIX, HP-UX, Tru64 UNIX.

Q. Explain PCB with a proper diagram?

A Process Control Block is a data structure that stores information about a
particular process. A PCB keeps all the information needed to keep track of a
process, as listed below:
1. Process state: the current state of the process, i.e. whether it is running,
waiting, etc.
2. Process privileges: required to allow/disallow access to system
resources.
3. Process ID: a unique identification for each process in the operating
system.
4. Pointer: a pointer to the parent process.
5. Program counter: a pointer to the address of the next instruction to be
executed for this process.
6. CPU registers: the various CPU registers whose contents must be saved
so the process can resume execution in the running state.
7. CPU scheduling information: process priority and other scheduling
information required to schedule the process.
8. Memory management information: this includes information on the page
table, memory limits, and segment table, depending on the memory scheme
used by the operating system.
9. Accounting information: this includes the amount of CPU time used for
process execution, time limits, execution ID, etc.
10. I/O status information: this includes a list of I/O devices allocated to the
process.
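The fields above can be sketched as a Python dataclass; the field names are illustrative (real PCBs are C structs inside the kernel, and layouts differ per OS):

```python
# Hypothetical sketch of a PCB; field numbers in comments refer to
# the list above. Not any real kernel's struct.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                           # 3: unique process ID
    state: str = "new"                                 # 1: process state
    parent: "PCB | None" = None                        # 4: pointer to parent
    program_counter: int = 0                           # 5: next instruction
    registers: dict = field(default_factory=dict)      # 6: saved CPU registers
    priority: int = 0                                  # 7: scheduling info
    page_table_base: int = 0                           # 8: memory-mgmt info
    cpu_time_used: float = 0.0                         # 9: accounting info
    open_devices: list = field(default_factory=list)   # 10: I/O status

p = PCB(pid=42, state="ready", priority=5)
```

A scheduler-style structure like this is what lets the OS suspend a process and later resume it exactly where it left off.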

Q. Write the primary function of the medium-term scheduler?

The medium-term scheduler temporarily removes processes from main
memory and places them in secondary storage, or vice versa. This is referred to as
"swapping out" and "swapping in".

Q. State the scalability and responsiveness benefits of multithreading?

Multithreading improves a program's responsiveness by allowing it to keep
running even if part of it is blocked, or if the process is performing a
lengthy operation.

Q. State the benefits of multithreaded programming?

• Resource sharing.
• Responsiveness.
• Economy.
• Efficient communication.

Q. What is a process state? Explain in brief the different process states.

A process is defined as a program under execution, which competes for
CPU time and other resources. When a process executes, it passes through
different states.
In general, a process can be in one of the following five states at a time:
• Start —> this is the initial state when a process is first started/created.
• Ready —> a process is said to be in the ready state if it is ready for
execution and waiting for the CPU to be allocated to it.
• Running —> a process is said to be in the running state if the CPU has
been allocated to it and it is currently being executed.
• Waiting or blocked —> a process is said to be in the waiting state if it has
been blocked by some event. Unless that event occurs, the process cannot
continue its execution.
• Terminated —> a process is said to be in the terminated state if it has
completed its execution normally, or has been terminated by the operating
system because of some error, or has been killed by some other process.
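The five states and the legal transitions between them can be sketched as a small state machine (the transition table is illustrative, drawn from the bullet list above):

```python
# Sketch: process life-cycle as a transition table. An illegal move
# (e.g. waiting -> running directly) raises an error.
from enum import Enum

class State(Enum):
    START = "start"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

TRANSITIONS = {
    State.START: {State.READY},                                  # admitted
    State.READY: {State.RUNNING},                                # dispatched
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},                                # event occurred
    State.TERMINATED: set(),
}

def move(current, nxt):
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

s = move(State.START, State.READY)
s = move(s, State.RUNNING)
```

Note that a waiting process must pass through the ready state again before it can run; it never goes straight back to running.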
Q. List any two operating system examples that use the one-to-one model?
OS/2, Windows NT, Windows 2000.

Q. Give examples of operating systems which are used with Pthreads?

Linux, Solaris, macOS and Tru64 UNIX.


Q. Differentiate between user-level and kernel-level threads?

USER level                                 KERNEL level
User-level threads are faster to create    Kernel-level threads are slower to
and manage.                                create and manage.
Implemented by a thread library at the     Supported directly by the operating
user level.                                system as kernel threads.
User-level threads can run on any          Kernel-level threads are specific to
operating system.                          the operating system.
Multithreaded applications cannot take     Kernel routines themselves can be
advantage of multiprocessing.              multithreaded.

Q. Thread?
A thread is a unit of concurrency within a process that has access to the entire
code and data parts of the process.

Q. What is the ready queue?

The ready queue keeps the set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.

Q. What is the job queue?

It keeps all the processes in the system. As a process enters the system, it
is put into the job queue.

Q. Which scheduler controls the degree of multiprogramming?

The long-term scheduler.

Q. Explain the many-to-many multithreading model?

• In this model, many user-level threads are multiplexed onto a smaller or
equal number of kernel threads.
• The number of kernel threads may be specific to either a particular
application or a particular machine.
• Solaris 2, IRIX, HP-UX and Tru64 UNIX support the many-to-many thread
model.

Q. Compare the long-term, short-term and medium-term schedulers.

Long-term scheduler           Short-term scheduler          Medium-term scheduler
It is a job scheduler.        It is a CPU scheduler.        It is a process-swapping
                                                            scheduler.
It controls the degree of     It provides lesser control    It reduces the degree of
multiprogramming.             over the degree of            multiprogramming.
                              multiprogramming.
It is almost absent or        It is also minimal in a       It is a part of a
minimal in a time-sharing     time-sharing system.          time-sharing system.
system.
Its speed is lesser than      Its speed is the fastest      Its speed is in between
the short-term scheduler's.   among the three.              those of the short- and
                                                            long-term schedulers.
It deals with main memory     It deals with the CPU.        It deals with main memory,
for loading processes.                                      removing and reloading
                                                            processes as required.

Q. What will happen if all processes in the system are CPU bound?

A process which spends most of its time in computation with the CPU, and very
rarely with the input/output devices, is called a CPU-bound process. If all
processes are CPU bound, the I/O devices sit idle and their waiting queues stay
empty, so the system's resources are used in an unbalanced way.

Q. Define dispatch latency?
The time taken by the dispatcher to stop one process and start another process
running is called the dispatch latency time.

Q. Define turnaround time?
It is the difference between the time a process enters the system and the
time it exits the system.

Q. State and explain the criteria for comparing various scheduling algorithms?

A scheduling algorithm is one which selects some job from the ready queue based
on some criteria and submits it to the CPU.

CPU Utilization
The percentage of time the CPU is busy executing processes.

Throughput
If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes that are completed per time unit, called
throughput.
Turnaround Time
It is the difference between the time a process enters the system and the time it
exits the system.

Waiting Time
The time spent by the process while waiting in the ready queue.

Balanced Utilization
The percentage of time all the system resources are busy.

Response Time
The time elapsed between the moment when a user initiates a request and the
instant when the system starts responding to this request.
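Turnaround and waiting time are easy to work out by hand for a first-come, first-served schedule. A sketch with assumed burst times (all processes arriving at time 0):

```python
# Worked FCFS example: turnaround = completion - arrival,
# waiting = turnaround - burst. Arrival time is 0 for all here.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # assumed CPU burst times

clock = 0
waiting, turnaround = {}, {}
for name, burst in bursts.items():
    waiting[name] = clock        # time already spent in the ready queue
    clock += burst
    turnaround[name] = clock     # completion time (arrival was 0)

avg_wait = sum(waiting.values()) / len(waiting)        # (0+24+27)/3 = 17
avg_tat = sum(turnaround.values()) / len(turnaround)   # (24+27+30)/3 = 27
print(avg_wait, avg_tat)
```

Running the short jobs first instead would cut the average waiting time sharply, which is exactly the kind of comparison these criteria are for.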

Q. State the functions of the Dispatcher

- Loading the registers of the process.
- Switching the operating system to user mode.
- Restarting the program by jumping to the proper location in the user program.

Q. Define preemptive and non-preemptive scheduling and state their
advantages and disadvantages

Preemptive scheduling allows a higher-priority process to replace a currently
running process, even if its time slot is not completed or it has not requested any
I/O. Non-preemptive scheduling, on the other hand, does not allow a running
process to be interrupted until it completes its allotted time. The advantage of
preemptive scheduling is that it allows real multiprogramming, while the advantage
of non-preemptive scheduling is that it is simple and cannot lead to race conditions.
The disadvantage of preemptive scheduling is that it is complex and can lead to
race conditions, while the disadvantage of non-preemptive scheduling is that it does
not allow real multiprogramming.

Q. Write any two disadvantages of priority scheduling


Two disadvantages of priority scheduling are that it can lead to starvation of lower
priority processes and it may not be suitable for applications with varying time and
resource requirements.

Q. Short note: multilevel queue scheduling

Multilevel queue scheduling is a CPU scheduling algorithm that partitions the ready
queue into separate queues based on process properties such as memory size,
process type, or priority. Each queue follows a separate scheduling algorithm, and
processes are permanently assigned to one queue. This algorithm is particularly
useful for systems with different types of processes that have different response-
time requirements. The related multilevel feedback queue variant can prevent
starvation by moving a lower-priority process to a higher-priority queue if it has
been waiting for too long. However, the turnaround time for long processes may
increase significantly, and it is the most complex scheduling algorithm.
Q. What is the critical section problem? How is it solved?
The critical section problem is a situation where several processes execute
concurrently, sharing some data, and the result of execution depends on the
particular order in which the access to shared data takes place. The shared data is
called a critical resource, and the portion of the program that uses this critical
resource is called a critical section. To synchronize cooperating processes, the
main task is to solve the critical section problem.

A solution to the critical-section problem must satisfy three requirements:
• Mutual exclusion - if a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress - if no process is executing in the critical section and other
processes are waiting outside it, then only those processes that are not
executing in their remainder section can participate in deciding which will
enter the critical section next, and the selection cannot be postponed
indefinitely.
• Bounded waiting - a bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is granted.

There are several methods to solve the critical section problem, including
Peterson's solution, semaphores, and hardware-based solutions.

Q.What is a semaphore?
Semaphore is a synchronization tool used to control access to a common resource
by multiple processes and avoid critical section problems. It is an integer variable
that can be accessed only through two operations, Wait() and Signal(). The Wait()
operation decrements the value of the semaphore, and if it is positive, the process
can enter the critical section. If the value is negative or zero, the process waits until
the semaphore value becomes positive. The Signal() operation increments the
value of the semaphore, indicating that the process has left the critical section and
the resource is available for other processes. Semaphores can be either binary or
counting, and they can be used to solve various synchronization problems such as
the critical section problem, the producer-consumer problem, the reader-writer
problem, and the dining philosopher problem. Semaphores are more efficient than
some other methods of synchronization and do not waste resources due to busy
waiting.
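Python's threading.Semaphore maps directly onto this description: acquire() is Wait() and release() is Signal(). A sketch of a counting semaphore limiting a resource to two concurrent users (the sleep duration and thread count are arbitrary):

```python
# Sketch: a counting semaphore initialized to 2 lets at most two
# threads hold the resource at once. acquire() ~ Wait(), release() ~ Signal().
import threading
import time

sem = threading.Semaphore(2)
in_use, peak = 0, 0
guard = threading.Lock()           # protects the two counters themselves

def use_resource():
    global in_use, peak
    with sem:                      # Wait(): decrement, block if count is 0
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.05)           # hold the resource briefly
        with guard:
            in_use -= 1
    # leaving the 'with sem' block is Signal(): increment the count

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Although five threads contend, `peak` never exceeds 2, which is the semaphore's initial count; a binary semaphore is simply the same mechanism initialized to 1.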

Q. What is mutual exclusion?


Mutual exclusion is a concept in process synchronization where only one process is
allowed to access a shared resource or critical section at a time, to prevent conflicts
and ensure data consistency.
Q.Types of semaphore
There are two types of semaphores: binary semaphore and counting semaphore. A
binary semaphore is restricted to values of zero (0) or one (1), while a counting
semaphore can assume any non-negative integer value.

Q. Explain internal and external fragmentation in detail separately


Internal fragmentation occurs when a process is allocated a memory block that is
larger than the size of the process. As a result, some part of the memory is left
unused, causing internal fragmentation. For example, a process that needs 57
bytes of memory may be allocated a block that contains 60 bytes or even 64 bytes.
The extra bytes that the process does not need go to waste, and over time, these
tiny chunks of unused memory can build up and create large quantities of memory
that cannot be put to use by the allocator. Because all of these useless bytes are
inside larger memory blocks, the fragmentation is considered internal.

External fragmentation occurs when the total memory space is enough to satisfy
a request or to accommodate a process, but it is not contiguous, so it cannot be
used. For example, suppose holes of 20K and 10K are available in a multiple-
partition allocation scheme, and the next process requests 30K of memory. In total,
30K of memory is free, which satisfies the request, but the free space is not
contiguous.
Therefore, there is external fragmentation of memory. Compaction is a method
used to overcome the external fragmentation problem. All free blocks are brought
together as one large block of free space.
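The two examples above reduce to simple arithmetic, sketched here with the same numbers the text uses:

```python
# Internal fragmentation: a 57-byte process in a 64-byte block wastes
# the leftover bytes inside the block.
block, need = 64, 57
internal_waste = block - need          # 7 bytes wasted inside the block

# External fragmentation: holes of 20K and 10K total 30K, enough for a
# 30K request, but no single hole is contiguous enough to hold it.
holes, request = [20, 10], 30
total_free = sum(holes)                # 30K free in total
fits = any(h >= request for h in holes)

print(internal_waste, total_free, fits)  # 7 30 False
```

Compaction fixes the external case by merging the holes into one 30K block, after which `fits` would become true; it does nothing for the internal case, since that waste lives inside an allocated block.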

Q. What are the various dynamic memory allocation management methods?


Dynamic allocation memory management methods include dynamic partitioning,
paging, segmentation, and swapping. In dynamic partitioning, memory is divided
into variable-sized partitions and allocated to processes as needed. Paging divides
memory into fixed-sized pages and allocates them to processes as needed.
Segmentation divides memory into variable-sized segments and allocates them to
processes as needed. Swapping moves entire processes in and out of memory as
needed.

Q. Explain physical address in detail


A physical address is a memory address that identifies a physical location of
required data in a memory. It is the actual address that is used by the memory unit
to access data in the memory. In contrast, a logical address or virtual address is
generated by the CPU while a program is running and used as a reference to
access the physical memory location by the CPU. The set of all physical addresses
corresponding to these logical addresses is referred to as a physical address
space. The physical address space is the range of physical addresses that are
available for a program or process to use. The size of the physical address space is
determined by the number of bits used to represent physical addresses. The
physical address space is divided into fixed-size blocks called frames, which are
used to store data and code. The size of a frame is determined by the hardware
and is generally a power of 2 varying between 512 bytes and 16MB per page. The
mapping of logical addresses to physical addresses is done by the memory
management unit (MMU) using techniques such as paging and segmentation.
Q.Explain MVT
MVT stands for Multiprogramming with Variable number of Tasks. It is a dynamic
memory partitioning technique used in operating systems. In MVT, the size of the
partition is not fixed and can vary dynamically according to the requirement of the
process. The operating system maintains a table indicating which parts of the
memory are free and which are allocated. Whenever a job requests for the memory,
the table indicating the current status of the memory is referred. If memory is
available, then the job is assigned to a partition according to job scheduling policy
and the table is updated to reflect the new status. When the job terminates, it
releases the memory occupied by it.
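The table of free and allocated memory that MVT maintains can be sketched as a first-fit free list (the hole sizes and first-fit policy are assumptions for illustration; MVT implementations also used best-fit and worst-fit):

```python
# Hypothetical sketch of MVT-style variable partitioning. The OS table
# is just a list of (start, size) holes; first-fit carves requests out
# of the first hole large enough.
free_list = [(0, 100)]            # one 100K hole initially

def allocate(size):
    """First-fit: return the start address of the allocation, or None."""
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)                       # hole used exactly
            else:
                free_list[i] = (start + size, hole - size)  # shrink hole
            return start
    return None                   # no single hole is large enough

a = allocate(30)                  # -> 0;  remaining hole (30, 70)
b = allocate(50)                  # -> 30; remaining hole (80, 20)
c = allocate(40)                  # -> None: only 20K left
```

The failed 40K request with 20K still free is the external fragmentation problem from the previous answers appearing naturally in MVT.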

Q. What is fragmentation?
Fragmentation is a problem in memory management where the available memory
space is broken into little pieces, making it difficult to allocate memory to processes.
There are two types of fragmentation: internal fragmentation, where memory
allocated to a program is not fully utilized by it, and external fragmentation, where
free memory areas existing in systems are too small to be allocated to processes.
Compaction and dynamic memory partitioning are some of the techniques used to
overcome fragmentation.

Q.Define logical address


A logical address, also known as a virtual address, is an address generated by the
CPU while a program is running and used as a reference to access a physical
memory location by the CPU. It identifies a location in the logical address space of
a program.
Q. What is a page table? Give its contents
A page table is a data structure used by the operating system to map virtual
addresses to physical addresses in a paging system. It contains an entry for each
page and has the base address of each page in physical memory, which is
combined with the page offset to define the physical memory address. The content
of a page table includes the page number, frame number, and valid/invalid bit for
each page. It is used to map a page on a frame and is implemented in different
ways such as dedicated registers, main memory, or associative registers.
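The mapping described above (page number indexes the table, offset is appended to the frame's base address) can be sketched as follows, assuming a 1 KB page size and a made-up table:

```python
# Sketch: logical-to-physical address translation through a page table.
# Page size and the table contents are illustrative assumptions.
PAGE_SIZE = 1024
# page number -> (frame number, valid bit)
page_table = {0: (5, True), 1: (9, True), 2: (None, False)}

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split the logical address
    frame, valid = page_table[page]
    if not valid:
        raise RuntimeError("page fault")        # page is not in memory
    return frame * PAGE_SIZE + offset           # frame base + offset

phys = translate(1_030)   # page 1, offset 6 -> frame 9 -> 9*1024 + 6 = 9222
```

This is exactly the page number / frame number / valid-invalid bit content listed in the answer, with the offset carried through unchanged.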

Q. Explain demand paging with an example


Demand paging is a method of virtual memory management where pages are only
loaded when they are demanded during program execution. Pages that are never
accessed are thus never loaded into physical memory. For example, when a
process is to be swapped in, the pager guesses which pages will be used before
the process is swapped out again. Instead of swapping in a whole process, the
pager brings only those necessary pages into memory. Thus, it avoids reading into
memory pages that will not be used in any way, decreasing the swap time and the
amount of physical memory needed.

Q. What is memory management?


Memory management is the process of controlling and coordinating computer
memory. It involves improving the utilization of CPU and the speed of its response
to users by keeping several or multiple processes in memory. Main memory is
usually too small to accommodate all the data and programs permanently, so the
computer system must provide secondary storage (disks) to backup main memory.
Memory is central to the operation of a modern computer system, and to execute a
program, it must be brought into memory. Depending on the memory management
scheme in use, the process may be moved between disk and memory during its
execution.

Q. What is swapping?
Swapping is a memory management technique in which a process can be
temporarily moved out of main memory to a backing store, such as a hard disk, and
then brought back into memory for continued execution. It involves performing two
tasks called swap-in and swap-out. The task of placing the process from the hard
disk to the main memory is called swap-in. The task of removing the process from
main memory to the hard disk is called swap-out. Swapping is capable of offering
direct access to memory images and is used to manage multiple processes within a
single main memory.

Q. Role of the valid and invalid bits in demand paging (short answer)


In demand paging, the valid-invalid bit is used to distinguish between the pages that
are in memory and the pages that are on the disk. When the bit is set to "valid", the
associated page is legal and in main memory. When the bit is set to "invalid", the
page may not be valid or the page is valid but currently on the disk. This bit is used
to check whether a page is in memory or not, and if not, a page fault occurs.
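Demand paging with the valid-invalid bit can be sketched as follows: referencing an invalid page triggers a page fault, the page is loaded into a free frame, and the bit is set (the 4-page process and reference string are assumed for illustration; no page replacement is modeled):

```python
# Sketch: demand paging. Every page starts invalid (on disk); the first
# reference to each page faults, loads it, and marks it valid.
page_table = {p: {"frame": None, "valid": False} for p in range(4)}
next_free_frame = 0
faults = 0

def access(page):
    global next_free_frame, faults
    entry = page_table[page]
    if not entry["valid"]:            # valid bit clear -> page fault
        faults += 1
        entry["frame"] = next_free_frame   # load page into a free frame
        entry["valid"] = True              # now legal and in main memory
        next_free_frame += 1
    return entry["frame"]

for p in [0, 1, 0, 2, 1]:             # reference string
    access(p)
print(faults)  # 3: pages 0, 1 and 2 each fault once; repeats are hits
```

Page 3 is never referenced, so it is never loaded, which is the core saving of demand paging described in the earlier answer.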
