System Programming and Operating System Notes
Q1. a) Explain Differences between static link library and dynamic link library.
Answer: Static link libraries and dynamic link libraries differ primarily in how they are linked to
programs and used. A static link library is incorporated into the executable file at compile (link) time, which increases the size of the executable but ensures that all the required
functions are embedded, eliminating the need for external dependencies at runtime. This
approach leads to faster execution since all the necessary code is already available within the
executable file. However, any modification to the library necessitates recompiling the program.
Dynamic link libraries, on the other hand, are linked at runtime. This means the executable file
remains smaller as the library code is kept separate, which allows multiple programs to use the
same library concurrently. Updates to the library do not require recompilation of the dependent
programs. However, runtime linking may slightly slow down execution and demands the
presence of the required dynamic libraries during program execution.
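To make the runtime-linking idea concrete, the sketch below loads a shared library explicitly at runtime on a POSIX system. It is only an illustration: the library name libmylib.so and the exported function greet are hypothetical.

/* Minimal sketch: explicit runtime linking of a hypothetical shared library. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("./libmylib.so", RTLD_LAZY);   /* link at runtime */
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    void (*greet)(void) = (void (*)(void))dlsym(handle, "greet");
    if (greet)
        greet();                  /* call the dynamically resolved symbol */
    dlclose(handle);              /* unload the library when done */
    return 0;
}

On Linux this is typically compiled with cc main.c -ldl; the program fails cleanly at runtime if the shared library is missing, which illustrates the runtime dependency described above.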
Q1. b) What are the different types of loaders? Explain the compile-and-go loader in detail.
Answer: Loaders are essential for loading programs into memory for execution. The common
types of loaders include compile-and-go loader, absolute loader, relocating loader, and direct
linking loader. The compile-and-go loader directly translates the source program into machine
code and loads it into memory for execution. This loader simplifies the execution process as no
intermediate object file is created. However, it wastes memory because the translator remains resident in memory along with the program, and the entire source must be retranslated every time the program is run or modified.
This loader is particularly useful for small and simple systems where execution speed is
prioritized over memory utilization. Despite its simplicity, the compile-and-go loader is not
efficient for larger systems with complex programs due to its inability to handle program
relocation or modularity effectively.
Q2. a) Explain the different loader schemes.
Answer: Loader schemes are categorized based on how they load programs into memory and
resolve addresses. The absolute loader places the object code at a predetermined location in
memory and assumes no address modification is required. This simplicity comes at the cost of
flexibility, as the program must be designed for a specific memory location.
The relocating loader overcomes this limitation by adjusting the addresses in the object code
based on the allocated memory location. This scheme ensures that programs can be loaded
anywhere in memory, improving resource utilization.
Direct linking loaders enhance functionality by linking multiple object modules and resolving
external references during the loading process. This approach uses data structures such as
external symbol tables and relocation tables to manage symbol resolution and address
adjustment. Finally, dynamic linking loaders link required libraries at runtime, reducing memory
usage and allowing for easy updates to shared libraries without recompiling dependent programs.
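The address adjustment performed by a relocating loader can be sketched in a few lines of C. The flat code array and relocation list below are illustrative only, not an actual object-module format.

#include <stddef.h>
#include <stdint.h>

/* Add the actual load address to every word that the relocation list
   marks as address-bearing. */
void relocate(uint32_t code[], const size_t reloc[], size_t n, uint32_t load_base) {
    for (size_t i = 0; i < n; i++)
        code[reloc[i]] += load_base;
}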
Q2. b) Explain Design of Direct linking loaders and explain required data structures.
Answer: The design of direct linking loaders is centered around linking and resolving external
references during program loading. The loader reads the object modules, resolves symbols using
the external symbol table (EST), and modifies addresses using the relocation table. The EST
stores symbols and their corresponding memory addresses, ensuring proper linkage between
modules. The relocation table identifies addresses that require adjustment based on the allocated
memory locations. Additionally, the load map provides an overview of memory allocation and
symbol resolution.
Direct linking loaders are efficient for handling modular programs, enabling multiple object files
to be linked seamlessly. This design minimizes runtime overhead by performing all linking and
address resolution during the loading phase, ensuring faster execution of programs.
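A rough sketch of how these two tables might be declared is shown below; the field names and sizes are assumptions for illustration, not the layout of any particular loader.

#define MAX_NAME 16

struct est_entry {              /* external symbol table (EST) entry */
    char     name[MAX_NAME];    /* external symbol, e.g. a routine name */
    unsigned segment;           /* object module that defines the symbol */
    unsigned address;           /* address assigned at load time */
};

struct reloc_entry {            /* relocation table entry */
    unsigned segment;           /* module containing the word to patch */
    unsigned offset;            /* location of the address constant */
    int      sign;              /* +1 or -1: how the symbol value is applied */
};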
Unit 4
Q3. a) Differentiate between a compiler and an interpreter.
Answer: Compilers and interpreters both translate high-level source code into machine code, but
they do so in different ways. A compiler translates the entire program into machine code before
execution. This results in faster execution since the machine code is directly executed by the
hardware. However, errors in the source code are only identified after the entire compilation
process is complete, which may delay debugging.
Interpreters, on the other hand, translate and execute source code line by line. This approach
allows for immediate error detection, making debugging more interactive and user-friendly.
However, interpreters are slower in execution as they perform translation at runtime. While
compilers are suitable for performance-intensive applications, interpreters are often used in
scenarios requiring rapid prototyping or dynamic execution.
Q3. b) What is LEX? Explain the working of LEX.
Answer: LEX is a widely used tool for generating lexical analyzers, which are essential
components of a compiler. It processes the source code to identify and categorize tokens such as
keywords, operators, and identifiers. The working of LEX involves defining patterns and actions
in a LEX specification file. The LEX tool then converts these specifications into a C program
that performs lexical analysis.
When executed, the generated program reads the input source code and matches it against
defined patterns to produce a stream of tokens. This process is crucial for the subsequent phases
of compilation, as tokens serve as the basic building blocks for syntax and semantic analysis.
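A minimal LEX specification along these lines might look as follows; the token classes and the printf-based actions are illustrative assumptions, not a complete scanner.

%{
#include <stdio.h>
%}
%%
"int"|"if"|"while"         { printf("KEYWORD: %s\n", yytext); }
[0-9]+                     { printf("NUMBER: %s\n", yytext); }
[A-Za-z_][A-Za-z0-9_]*     { printf("IDENTIFIER: %s\n", yytext); }
"="|"+"|"-"|"*"|"/"|";"    { printf("OPERATOR: %s\n", yytext); }
[ \t\n]+                   { /* skip whitespace */ }
.                          { printf("LEXICAL ERROR: %s\n", yytext); }
%%
int main(void) { yylex(); return 0; }
int yywrap(void) { return 1; }

Running LEX on this file produces lex.yy.c, which is then compiled with a C compiler to obtain a scanner that emits the token stream.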
Q4. a) Define token, pattern, lexemes & lexical error.
Answer: A token is the smallest unit of a program that holds meaning, such as a keyword,
operator, or identifier. A pattern is a rule that specifies the structure of a token. Lexemes are the
actual sequences of characters in the source code that match a given pattern, forming the
corresponding tokens. A lexical error occurs when the input sequence does not conform to any
defined pattern, indicating invalid or unexpected input that the lexical analyzer cannot process. For example, in the statement "count = 10;", the lexeme "count" matches the identifier pattern and yields an identifier token, whereas a stray character such as "@" would be reported as a lexical error.
Q4. b) What is a compiler? Explain any two phases of compiler with suitable diagram.
Answer: A compiler is a software tool that translates high-level source code into machine code,
enabling the program to be executed by a computer. The compilation process involves multiple
phases, two of which are:
1. Lexical Analysis: This phase tokenizes the source code, breaking it into meaningful units
such as keywords, identifiers, and operators. For example, the statement "int x = 5;" is divided into the tokens "int", "x", "=", "5", and ";".
2. Syntax Analysis: This phase checks the structure of the token stream against the
grammar of the programming language. It constructs a parse tree to represent the
hierarchical structure of the source code, ensuring the syntax is correct before proceeding
to semantic analysis.
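For the example statement "int x = 5;", a simplified parse tree might be sketched as follows (the node names are illustrative, not from a specific grammar):

declaration
 +-- type: int
 +-- identifier: x
 +-- operator: =
 +-- constant: 5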
Unit 5
Q5. a) What is a semaphore? Explain the types of semaphores and their operations.
Answer: A semaphore is an integer synchronization variable that is accessed only through the atomic operations wait() and signal() and is used to control access to shared resources.
Binary semaphores, which can take values 0 or 1, are used for mutual exclusion, while counting
semaphores are used for managing access to a limited number of identical resources. Semaphore
operations include wait() to decrement the counter and block processes if the counter is zero,
and signal() to increment the counter and unblock waiting processes. This mechanism ensures
orderly and controlled access to shared resources.
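A minimal sketch of mutual exclusion with a POSIX binary semaphore is given below; the thread count and the shared counter are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                  /* initialised to 1: a binary semaphore */
static int shared_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&mutex);                /* wait(): decrement, block if already 0 */
    shared_counter++;                /* critical section */
    sem_post(&mutex);                /* signal(): increment, wake a waiter */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}

On Linux the program is compiled with -pthread; initialising the semaphore with a larger count instead of 1 turns the same code into a counting semaphore guarding a pool of identical resources.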
Q5. b) What is Operating System? Explain various operating system services in detail.
Answer: An operating system (OS) acts as an intermediary between users and computer
hardware, managing system resources and providing a user-friendly environment. Key services
provided by an OS include process management, which handles process creation, scheduling,
and termination. Memory management allocates and deallocates memory for active processes,
while file management oversees the creation, storage, and retrieval of files.
Device management ensures efficient communication between hardware devices and software
applications. Security services protect data and system resources from unauthorized access. By
offering these services, the OS ensures efficient and secure operation of the computer system.
Q6. a) What is preemptive scheduling? Explain with a suitable example.
Answer: Preemptive scheduling allows the CPU to interrupt a running process and allocate
resources to another process, enabling better resource utilization and responsiveness. For
instance, the Round Robin scheduling algorithm assigns a fixed time slice to each process,
preempting them if the time slice expires.
Q6. b) Explain Round Robin and Shortest Job First scheduling with suitable examples.
Answer: Round Robin scheduling assigns a fixed time quantum to each process in a cyclic
manner. For example, with processes P1, P2, and P3 having burst times of 5, 3, and 8 units
respectively, and a time quantum of 2, the CPU switches between processes after every 2 units,
ensuring fairness.
Shortest Job First (SJF) scheduling prioritizes processes with the shortest burst time. For
instance, if P1 has a burst time of 6, P2 has 2, and P3 has 4, SJF executes P2, followed by P3,
and finally P1, minimizing the average waiting time.
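Assuming, as a worked illustration, that all three processes in each example arrive at time 0:

Round Robin (bursts 5, 3, 8; quantum 2):
Gantt chart: P1(0-2) P2(2-4) P3(4-6) P1(6-8) P2(8-9) P3(9-11) P1(11-12) P3(12-14) P3(14-16)
Completion times: P1 = 12, P2 = 9, P3 = 16
Waiting time = completion - burst: P1 = 7, P2 = 6, P3 = 8; average = 21 / 3 = 7 units

SJF (bursts 6, 2, 4):
Order: P2(0-2) P3(2-6) P1(6-12)
Waiting times: P2 = 0, P3 = 2, P1 = 6; average = 8 / 3 ≈ 2.67 units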
Unit 6
Q7. a) What is virtual memory? Explain address translation in a paging system.
Answer: Virtual memory management allows a computer to use more memory than is physically
available by using disk space to simulate additional RAM. This technique enables large
programs to run on systems with limited physical memory. Address translation in a paging
system involves dividing the logical address space into fixed-size pages and mapping them to
physical memory frames using a page table. The page table stores the frame number
corresponding to each page, facilitating efficient translation from logical to physical addresses.
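A minimal sketch of this translation in C is shown below; the page size, the page-table contents, and the example address are illustrative assumptions.

#include <stdio.h>

#define PAGE_SIZE 4096u                       /* 4 KB pages */

int main(void) {
    unsigned page_table[] = {5, 7, 2, 9};     /* page number -> frame number */
    unsigned logical = 0x1234;                /* example logical address */

    unsigned page     = logical / PAGE_SIZE;  /* page number = 1 */
    unsigned offset   = logical % PAGE_SIZE;  /* offset within the page */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;

    printf("page %u, offset 0x%X -> physical 0x%X\n", page, offset, physical);
    return 0;
}

Here the logical address 0x1234 falls in page 1; the page table maps page 1 to frame 7, so the physical address is 7 * 4096 + 0x234 = 0x7234.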
Q7. b) Write proper examples and explain memory allocation strategies first fit, best fit
and worst fit. Also explain their advantages and disadvantages.
Answer: The first fit strategy allocates the first available memory block large enough to
accommodate a process. For example, with memory blocks of 100, 200, and 300 units and a
process of size 150, the process is allocated to the 200-unit block. This method is fast but may
lead to fragmentation.
The best fit strategy allocates the smallest block that can accommodate the process, minimizing
wasted space. For example, with the same memory blocks, the process of size 150 would also
take the 200-unit block. However, this method is slower due to the need to search for the best fit.
The worst fit strategy allocates the largest available block, aiming to leave larger free blocks for
future allocations. For example, the 300-unit block would be used for the 150-unit process. This
method may lead to more fragmentation compared to best fit.
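A minimal sketch of the first fit strategy is shown below; the block sizes follow the example above, while the function and its return convention are illustrative assumptions. Best fit would instead scan all blocks and keep the smallest one that still fits, and worst fit would keep the largest.

#include <stdio.h>

/* Return the index of the first free block large enough, or -1 if none fits. */
int first_fit(const int blocks[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (blocks[i] >= request)
            return i;
    return -1;
}

int main(void) {
    int blocks[] = {100, 200, 300};
    int idx = first_fit(blocks, 3, 150);
    if (idx >= 0)
        printf("150 units placed in block %d (%d units)\n", idx, blocks[idx]);
    return 0;
}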
Q8. a) Explain FIFO and LRU page replacement algorithms with examples.
Answer: FIFO (First-In-First-Out) replaces the oldest page in memory when a new page needs
to be loaded. For example, if pages 1, 2, 3, and 4 are loaded sequentially, and page 5 arrives,
page 1 is replaced.
LRU (Least Recently Used) replaces the page that has not been accessed for the longest time.
For example, if pages 1, 2, 3, and 4 are in memory and page 2 was last used longest ago, it is
replaced when page 5 arrives. LRU offers better performance compared to FIFO but is more
complex to implement.
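As a worked illustration (the reference string and frame count are assumptions), consider 3 frames and the reference string 1, 2, 3, 1, 2, 4, 1, 2. FIFO faults on 1, 2, and 3, hits on 1 and 2, then page 4 evicts 1, page 1 evicts 2, and page 2 evicts 3, giving 6 faults in total. LRU also faults on 1, 2, and 3 and hits on 1 and 2, but page 4 evicts the least recently used page 3, after which 1 and 2 remain hits, giving only 4 faults.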
Q8. b) What is a TLB? Explain the paging system with the use of a TLB. What are the advantages of a TLB?
Answer: A Translation Lookaside Buffer (TLB) is a specialized cache that stores recent page
table entries, reducing the time required for address translation. In a paging system, the CPU
checks the TLB for the desired page table entry before accessing the main memory. If the entry
is found (a TLB hit), the frame number is retrieved directly. If not (a TLB miss), the page table is
accessed, and the entry is added to the TLB.
The TLB significantly improves performance by reducing the number of memory accesses
needed for address translation, especially in programs with high locality of reference.
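As a worked illustration with assumed timings (a 10 ns TLB lookup, a 100 ns memory access, and a 90% hit ratio):
Effective access time = 0.9 x (10 + 100) + 0.1 x (10 + 100 + 100) = 99 + 21 = 120 ns,
compared with about 200 ns if every reference required a page-table access in memory before the data access.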