Operating System
What is an operating system?
The operating system is a software program that enables the computer hardware to communicate and operate with the computer software. It is the most important part of a computer system; without it, the computer is just a box.

What is the main purpose of an operating system?
It is designed to make sure that a computer system performs well by managing its computational activities. It provides an environment for the development and execution of programs.

What are the different types of operating systems?
Batch operating systems
Distributed operating systems
Time-sharing operating systems
Multiprogrammed operating systems
Real-time operating systems

What is a socket?
A socket is used to make a connection between two applications. The endpoints of the connection are called sockets.

What is a real-time system?
A real-time system is used when rigid time requirements are placed on the operation of a processor. It has well-defined, fixed time constraints.

What is a kernel?
The kernel is the core and most important part of an operating system; it provides basic services for all other parts of the OS.

What is a monolithic kernel?
A monolithic kernel is a kernel in which all operating system code is contained in a single executable image.

What do you mean by a process?
An executing program is known as a process (a minimal process-creation sketch in C appears a little further below). There are two types of processes:
Operating system processes
User processes

What are the different states of a process?
New state: the first state of a process; the process has just been created.
Ready state: after creation, the process is placed in the ready queue, where it waits for its turn to be executed.
Ready suspended state: when many processes are in the ready state, memory constraints may force some of them to be moved from the ready state to the ready suspended state.
Running state: one process from the ready queue is selected by the CPU for execution; that process is now in the running state.
Waiting or blocked state: if, during execution, the process performs an I/O operation such as writing to a file, or a higher-priority process arrives, the running process moves to the blocked or waiting state.

What is the difference between a microkernel and a macrokernel?
Microkernel: a kernel that runs only the minimal set of operating system services; all other operations are performed in user space.
Macrokernel: a combination of the microkernel and monolithic kernel approaches.

What is the difference between a process and a program?
A program while running or executing is known as a process.

What is the use of paging in an operating system?
Paging is used to solve the external fragmentation problem in an operating system. This technique ensures that the data you need is available as quickly as possible.

What is the concept of demand paging?
Demand paging specifies that if an area of memory is not currently being used, it is swapped to disk to make room for an application's needs.

What is the advantage of a multiprocessor system?
As the number of processors increases, throughput increases considerably. It is also cost-effective because the processors can share resources, so overall reliability increases.

What is virtual memory?
Virtual memory is a very useful memory management technique that enables a process to execute even when it is not entirely in physical memory; it is especially useful when an executing program cannot fit in physical memory.
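As an illustration of the process concept and process states described above, here is a minimal sketch (assuming a POSIX system such as Linux) in which fork() creates a new process: the child runs as an independent process while the parent blocks, i.e. enters a waiting state, until the child terminates.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a new process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* Child: a brand-new process, scheduled independently of the parent. */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        _exit(0);
    }
    /* Parent: blocks (waiting state) until the child terminates. */
    int status;
    waitpid(pid, &status, 0);
    printf("parent: child %d exited with status %d\n", pid, WEXITSTATUS(status));
    return 0;
}
```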
What are the four necessary and sufficient conditions behind deadlock?
Mutual exclusion condition: the resources involved are non-sharable.
Hold and wait condition: there must be a process holding at least one resource already allocated to it while waiting for additional resources that are currently held by other processes.
No preemption condition: resources cannot be taken away from a process while it is using them.
Circular wait condition: an elaboration of the second condition; the processes in the system form a circular list or chain in which each process waits for a resource held by the next process in the chain.

What is a thread?
A thread is a basic unit of CPU utilization. It consists of a thread ID, a program counter, a register set, and a stack.

What is FCFS?
FCFS stands for First Come, First Served. It is a scheduling algorithm: the process that requests the CPU first is allocated the CPU first. Its implementation is managed by a FIFO queue.

What is SMP?
SMP stands for Symmetric Multiprocessing. It is the most common type of multiprocessor system. In SMP, each processor runs an identical copy of the operating system, and these copies communicate with one another when required.

What is deadlock? Explain.
Deadlock is a situation in which two processes are each waiting for the other to complete before they can proceed. As a result, both of them hang.

What is Banker's algorithm?
Banker's algorithm is a deadlock-avoidance method. It is named after the banking system, in which a bank never allocates its available cash in such a way that it can no longer satisfy the requirements of all of its customers.

What is the difference between logical address space and physical address space?
A logical address is the address generated by the CPU, whereas a physical address is the address seen by the memory unit.

What is fragmentation?
Fragmentation is a phenomenon of memory wastage. It reduces capacity and performance because space is used inefficiently.

How many types of fragmentation occur in an operating system?
Internal fragmentation: occurs in systems that use fixed-size allocation units.
External fragmentation: occurs in systems that use variable-size allocation units.

What is spooling?
Spooling is a process in which data is temporarily gathered to be used and executed by a device, program, or system. It is commonly associated with printing: when different applications send output to the printer at the same time, spooling keeps all these jobs in a disk file and queues them for the printer.

What is the difference between internal commands and external commands?
Internal commands are a built-in part of the operating system, while external commands are separate program files stored in a separate folder or directory.

What is a semaphore?
A semaphore is a protected variable or abstract data type used to lock a resource that is being used. The value of the semaphore indicates the status of a common resource. There are two types:
Binary semaphores
Counting semaphores

What is a binary semaphore?
A binary semaphore takes only 0 and 1 as values and is used to implement mutual exclusion and to synchronize concurrent processes.
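The following is a minimal sketch of a binary semaphore used for mutual exclusion between two threads, using POSIX semaphores and pthreads (assumes a POSIX system; compile with -pthread). Initializing the semaphore to 1 makes it binary: only one thread at a time can be inside the critical section that updates the shared counter.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;          /* binary semaphore: 1 = free, 0 = locked */
static long counter = 0;     /* shared resource protected by the semaphore */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* enter the critical section */
        counter++;
        sem_post(&mutex);    /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);  /* initial value 1 -> binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    sem_destroy(&mutex);
    return 0;
}
```

Without the semaphore, the two threads would race on the counter and the final value would usually be less than 200000.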
What is Belady's Anomaly?
Belady's anomaly is also called the FIFO anomaly. Usually, increasing the number of frames allocated to a process's virtual memory makes its execution faster, because fewer page faults occur. Sometimes the reverse happens: the execution time increases even though more frames are allocated to the process. This is Belady's anomaly, and it occurs for certain page-reference patterns.

What is starvation in an operating system?
Starvation is a resource-management problem in which a waiting process does not get the resources it needs for a long time because the resources keep being allocated to other processes.

What is aging in an operating system?
Aging is a technique used to avoid starvation in a resource-scheduling system.

What are the advantages of multithreaded programming?
Enhanced responsiveness to users.
Resource sharing within the process.
Economical.
Full utilization of multiprocessor architectures.

What are overlays?
Overlays allow a process to be larger than the amount of memory allocated to it. They ensure that only the instructions and data needed at any given time are kept in memory.

When does thrashing occur?
Thrashing is an instance of high paging activity. It happens when the system spends more time paging than executing.

What are the different kinds of operations that are possible on a semaphore?
Wait()
Signal()

What do you mean by process synchronization?
Process synchronization is a way to coordinate processes that use shared resources or data. It is essential for ensuring synchronized execution of cooperating processes so that data consistency is maintained. Its main purpose is to share resources without interference, using mutual exclusion. From this point of view there are two types of processes:
Independent processes
Cooperative processes

What is IPC? What are the different IPC mechanisms?
IPC (Interprocess Communication) is a mechanism that uses resources, such as memory, shared between processes or threads. With IPC, the OS allows different processes to communicate with each other. It is used for exchanging data between multiple threads in one or more programs or processes, with the approval of the OS. The different IPC mechanisms are (a pipe-based sketch appears at the end of this section):
Pipes
Message queues
Semaphores
Sockets
Shared memory
Signals

What is the difference between main memory and secondary memory?
Main memory (RAM) is volatile, is accessed directly by the CPU, and is faster but smaller; it holds the programs and data currently in use. Secondary memory (such as a hard disk or SSD) is non-volatile, is not accessed directly by the CPU, and is slower but larger; it stores programs and data permanently.
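Of the IPC mechanisms listed above, pipes are the simplest to demonstrate. Below is a minimal sketch (assuming a POSIX system) in which a parent process sends a message to its child through an anonymous pipe.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {
        /* Child: reads whatever the parent writes into the pipe. */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }

    /* Parent: writes a message and waits for the child to finish. */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```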
What is Context Switching?
Context switching is the process of saving the context of one process and loading the context of another (a user-space sketch of the idea appears at the end of this section). It is a cost-effective and time-saving measure executed by the CPU because it allows multiple processes to share a single CPU, so it is considered an important part of a modern OS. The OS uses this technique to switch a process from one state to another, for example from the running state to the ready state. It also allows a single CPU to handle and control several different processes or threads without the need for additional resources.

Common CPU-scheduling criteria:
Throughput: the number of processes that complete their execution per time unit.
Turnaround time: the amount of time needed to execute a particular process.
Waiting time: the amount of time a process has been waiting in the ready queue.
Response time: the amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).

What is the difference between a process and a thread?
A process is an independent program in execution with its own address space, whereas a thread is a unit of execution within a process that shares the process's address space, code, and open files with the other threads of that process. Creating and switching between threads is cheaper than creating and switching between processes.
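Context switching proper is performed by the kernel, but the core idea of saving one execution context and loading another can be sketched in user space with the legacy POSIX ucontext API. The example below is only an analogy for the kernel mechanism, not how the OS actually switches processes; it assumes a Linux/glibc system where <ucontext.h> is available.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;

/* Runs on its own stack; switching back to main_ctx is a user-space
 * analogue of the kernel saving one context and loading another. */
static void worker(void) {
    printf("worker: running, switching back to main\n");
    swapcontext(&worker_ctx, &main_ctx);    /* save worker, load main */
    printf("worker: resumed, finishing\n");
}

int main(void) {
    static char stack[16384];               /* stack for the worker context */

    getcontext(&worker_ctx);                /* initialise the context */
    worker_ctx.uc_stack.ss_sp = stack;
    worker_ctx.uc_stack.ss_size = sizeof stack;
    worker_ctx.uc_link = &main_ctx;         /* where to go when worker returns */
    makecontext(&worker_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx);    /* save main, load worker */
    printf("main: back, switching to worker again\n");
    swapcontext(&main_ctx, &worker_ctx);
    printf("main: done\n");
    return 0;
}
```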
What is RAID? What are the different RAID levels?
A Redundant Array of Independent Disks (RAID) stores the same data redundantly across multiple disks to improve overall performance and reliability. The main RAID levels are:
RAID 0 – Striped disk array without fault tolerance. Data is striped across different disks and can be accessed from them in parallel. It offers the best performance but does not provide fault tolerance.
RAID 1 – Mirroring and duplexing. This provides fault tolerance because the data is duplicated on different disks; if one disk fails, the data can be accessed from the other.
RAID 3 – Bit-interleaved parity. RAID 3 is rarely used. Data is divided evenly and stored on two or more disks, plus a dedicated drive holds the parity information.
RAID 5 – Block-interleaved distributed parity. Data is divided evenly and stored on two or more disks, and the parity is distributed across the drives.
RAID 6 – P+Q redundancy. Data is divided evenly and stored on two or more disks, and two independent parity blocks (P and Q) are distributed across the drives.

What is cache memory?
Cache memory is a small, volatile memory, which means its contents are stored temporarily. Because it is small, it is faster than main memory and secondary memory, so accessing data from the cache is fast. Whenever the CPU wants to access data, it first checks the cache memory; if the data is not in the cache, the CPU goes to main memory (a small cache-locality sketch appears at the end of this section).

What is thrashing?
Thrashing is a situation in which the performance of a computer degrades or collapses. It occurs when the system spends more time processing page faults than executing transactions. While processing page faults is necessary in order to realize the benefits of virtual memory, thrashing has a negative effect on the system: as the page-fault rate increases, more requests go to the paging device, its queue grows, and the service time for a page fault increases.

List the Coffman conditions that lead to a deadlock.
1. Mutual exclusion: only one process may use a critical resource at a time.
2. Hold and wait: a process may hold some resources while waiting for others.
3. No preemption: no resource can be forcibly removed from a process holding it.
4. Circular wait: a closed chain of processes exists such that each process holds at least one resource needed by the next process in the chain.
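A simple way to observe the effect of cache memory is to traverse the same array in a cache-friendly and a cache-unfriendly order. The sketch below (the array size and the use of clock() for timing are arbitrary choices for illustration) typically shows the row-major traversal running several times faster, because consecutive elements are served from the cache.

```c
#include <stdio.h>
#include <time.h>

#define N 4096
static int a[N][N];
static volatile long sink;           /* keeps the compiler from removing the loops */

int main(void) {
    long sum;
    clock_t t;

    /* Row-major traversal: consecutive addresses, mostly cache hits. */
    t = clock(); sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    sink = sum;
    printf("row-major:    %.3f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);

    /* Column-major traversal: strided addresses, far more cache misses. */
    t = clock(); sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    sink = sum;
    printf("column-major: %.3f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);
    return 0;
}
```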
What is CPU Scheduler?
The CPU scheduler selects one of the processes in memory that are ready to execute and allocates the CPU to it. CPU-scheduling decisions may take place when a process:
1. Switches from the running to the waiting state.
2. Switches from the running to the ready state.
3. Switches from the waiting to the ready state.
4. Terminates.

What are the different types of scheduling algorithms?
1. First Come, First Served (FCFS): the process that arrives first is served first (a small worked example appears at the end of this section).
2. Round Robin (RR): each process is given a fixed quantum of CPU time in turn.
3. Shortest Job First (SJF): the process with the lowest execution time is given preference.
4. Priority Scheduling (PS): a priority value (the nice value) is used to select the process; in this scheme it ranges from 0 to 99, with 0 the highest priority and 99 the lowest.

What is the basic difference between pre-emptive and non-pre-emptive scheduling?
In pre-emptive scheduling, the CPU can be taken away from a running process, for example when its time quantum expires or a higher-priority process arrives. In non-pre-emptive scheduling, once a process is allocated the CPU, it keeps it until it terminates or voluntarily switches to the waiting state.
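As a worked example of FCFS and of the waiting-time and turnaround-time criteria defined earlier, the sketch below simulates three hypothetical processes; the arrival and burst times are made up purely for illustration.

```c
#include <stdio.h>

/* One process with its arrival and CPU-burst time (hypothetical numbers). */
struct proc { const char *name; int arrival, burst; };

int main(void) {
    /* Processes listed in arrival order, which is the order FCFS serves them. */
    struct proc p[] = { {"P1", 0, 5}, {"P2", 1, 3}, {"P3", 2, 8} };
    int n = sizeof p / sizeof p[0];
    int time = 0;
    double total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)                      /* CPU idle until the process arrives */
            time = p[i].arrival;
        int waiting = time - p[i].arrival;            /* time spent in the ready queue */
        int turnaround = waiting + p[i].burst;        /* arrival -> completion */
        printf("%s: waiting=%d turnaround=%d\n", p[i].name, waiting, turnaround);
        total_wait += waiting;
        total_turnaround += turnaround;
        time += p[i].burst;                           /* run to completion (non-preemptive) */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           total_wait / n, total_turnaround / n);
    return 0;
}
```

For these numbers it prints an average waiting time of 3.33 and an average turnaround time of 8.67.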
What are the deadlock avoidance algorithms?
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demand of each process. There are two such algorithms:
Resource-allocation graph algorithm
Banker's algorithm, which consists of:
  o the safety algorithm (a small sketch appears at the end of this section)
  o the resource-request algorithm

What are the various scheduling queues?
Job queue: when a process enters the system, it is placed in the job queue.
Ready queue: the processes that reside in main memory and are ready and waiting to execute are kept in a list called the ready queue.
Device queue: the list of processes waiting for a particular I/O device is called the device queue.
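Below is a small sketch of the safety algorithm at the heart of Banker's algorithm. The three-process, three-resource example and its matrices are hypothetical, and the resource-request part is omitted: the function only checks whether the current allocation state is safe.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define P 3   /* number of processes (hypothetical example) */
#define R 3   /* number of resource types */

/* Returns true if the system is in a safe state, i.e. some ordering of the
 * processes lets every one of them obtain its remaining need and finish. */
static bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof work);

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* pretend process i runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];      /* it releases everything it holds */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;             /* nobody can finish: unsafe state */
    }
    return true;
}

int main(void) {
    int available[R] = { 3, 3, 2 };
    int alloc[P][R]  = { {0,1,0}, {2,0,0}, {3,0,2} };
    int need[P][R]   = { {5,3,3}, {1,2,2}, {0,0,2} };
    printf("state is %s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}
```

For these numbers the state is safe: P1 can finish first, then P2, and finally P0.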