Lab Report
OBJECTIVE:
i) To learn about creating, controlling and terminating processes in a program.
THEORY:
Processes are created by a parent process using system calls like fork(), forming a parent-child relationship. The child process inherits certain attributes from its parent, such as open files and execution context, and may execute independently or load a new program using system calls like exec().
All processes are organized in a hierarchical tree structure within the system. A process terminates either voluntarily, using calls like exit(), or involuntarily due to errors or external signals such as kill. After termination, the operating system cleans up the resources allocated to the process, and the parent process is notified of the termination through status codes or signals.
SOURCE CODE:
#include <stdio.h>
int main()
{
int a;
a = 4 + 5;
printf("The result is: %d\n", a);
return 0;
}
OUTPUT:
OBJECTIVE:
i) To learn about creating and terminating threads for efficient multitasking and
resource management.
THEORY:
Thread creation and termination are fundamental concepts in multithreaded programming.
Threads are lightweight processes that share the same memory space, enabling efficient
multitasking within an application. Threads can be created using system calls or library
functions, such as pthread_create() in C or Thread in Java.
During creation, each thread is assigned a unique identifier and executes a specific task or
function. A thread terminates either after completing its assigned task or through explicit termination calls such as pthread_exit() in C; in Java, Thread.interrupt() requests termination, which the target thread must handle itself.
Proper synchronization mechanisms, such as mutexes or semaphores, are essential to manage
shared resources and avoid race conditions. Efficient thread creation and termination enhance
application performance and resource utilization.
SOURCE CODE:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
// Thread function
void* thread_function(void* arg)
{
int thread_id = *((int*)arg);
printf("Thread %d: Starting\n", thread_id);
sleep(2); // Simulate some work
printf("Thread %d: Ending\n", thread_id);
pthread_exit(NULL); // Exit the thread
}
int main()
{
pthread_t threads[3];
int thread_ids[3];
int i;
// Creating threads
for (i = 0; i < 3; i++)
{
thread_ids[i] = i + 1;
if (pthread_create(&threads[i], NULL, thread_function, &thread_ids[i]) != 0)
{
perror("Failed to create thread");
exit(EXIT_FAILURE);
}
printf("Main: Created thread %d\n", thread_ids[i]);
}
// Joining threads (wait for them to finish)
for (i = 0; i < 3; i++)
{
if (pthread_join(threads[i], NULL) != 0)
{
perror("Failed to join thread");
exit(EXIT_FAILURE);
}
printf("Main: Joined thread %d\n", thread_ids[i]);
}
printf("Main: All threads finished. Exiting.\n");
return 0;
}
OUTPUT:
OBJECTIVE:
i) To understand inter-process communication (IPC) techniques for efficient data exchange and synchronization between processes.
THEORY:
Inter-process communication (IPC) enables processes to share data and coordinate their
actions. IPC techniques include pipes, message queues, shared memory, and sockets, each
suited for specific use cases. Pipes provide a unidirectional communication channel, while
message queues allow asynchronous message exchange.
Shared memory facilitates direct memory access between processes for faster data sharing,
but it requires proper synchronization. Sockets enable communication between processes
over a network, supporting distributed systems. Semaphores and mutexes are often used
alongside IPC methods to prevent race conditions and ensure safe resource access.
IPC mechanisms are crucial for parallel processing and resource sharing in modern systems.
Proper implementation ensures data integrity, synchronization, and efficient communication
between processes.
SOURCE CODE:
#include <windows.h>
#include <stdio.h>
#include <tchar.h>
int main()
{
HANDLE hRead, hWrite;
SECURITY_ATTRIBUTES sa = {sizeof(SECURITY_ATTRIBUTES), NULL, TRUE};
char write_msg[] = "Hello from parent!";
char read_msg[100];
DWORD bytes_written, bytes_read;
// Create a pipe
if (!CreatePipe(&hRead, &hWrite, &sa, 0))
{
fprintf(stderr, "Pipe creation failed.\n");
return 1;
}
// Write to and read from the pipe (single process, for brevity)
WriteFile(hWrite, write_msg, sizeof(write_msg), &bytes_written, NULL);
ReadFile(hRead, read_msg, sizeof(read_msg), &bytes_read, NULL);
printf("Read from pipe: %s\n", read_msg);
CloseHandle(hRead);
CloseHandle(hWrite);
return 0;
}
OUTPUT:
OBJECTIVE:
i) To analyze different process scheduling algorithms for optimizing CPU utilization and system performance.
THEORY:
Process scheduling algorithms determine the order in which processes are executed by the
CPU. Common algorithms include First-Come-First-Serve (FCFS), Shortest Job First (SJF),
Shortest Remaining Time First (SRTF), Round Robin (RR), and Priority Scheduling.
FCFS schedules processes based on their arrival time, leading to potential inefficiencies like
the "convoy effect." SJF schedules processes with the shortest burst time first, minimizing
waiting time but suffering from starvation for longer processes. SRTF is the preemptive version of SJF: whenever a newly arrived process has a shorter remaining time than the running process, the CPU switches to it.
Similarly, Round Robin allocates a fixed time slice to each process, ensuring fairness but
potentially increasing turnaround time. Priority Scheduling executes processes based on
priority levels, which can be preemptive or non-preemptive. Proper implementation of these
algorithms ensures efficient CPU utilization, minimizes waiting and turnaround times, and
improves overall system performance.
First-Come-First-Serve (FCFS) CPU Scheduling:
SOURCE CODE:
#include <stdio.h>
int main()
{
int n, bt[20], wt[20], tat[20], avwt = 0, avtat = 0, i, j;
SOURCE CODE:
#include <stdio.h>
int main()
{
int n, bt[20], p[20], wt[20] = {0}, tat[20], i, j, total_wt = 0, total_tat = 0;
SOURCE CODE:
#include <stdio.h>
#include <limits.h> // Required for INT_MAX
int main()
{
int n, bt[20], rt[20], wt[20] = {0}, tat[20];
int time = 0, completed = 0, smallest;
int i, min_rt; // Declare all variables at the top
float total_wt = 0, total_tat = 0;
total_wt += wt[smallest];
total_tat += tat[smallest];
printf("P%d\t%d\t\t%d\t\t%d\n", smallest + 1, bt[smallest],
wt[smallest], tat[smallest]);
}
}
printf("\nAverage Waiting Time: %.2f", total_wt / n);
printf("\nAverage Turnaround Time: %.2f\n", total_tat / n);
return 0;
}
OUTPUT:
Round Robin CPU Scheduling:
SOURCE CODE:
#include <stdio.h>
int main()
{
int n, tq, bt[20], rt[20], wt[20] = {0}, tat[20] = {0};
int time = 0, completed = 0, i;
float total_wt = 0, total_tat = 0;
OUTPUT:
Priority CPU Scheduling
SOURCE CODE:
#include <stdio.h>
int main()
{
int n, i, j, bt[10], p[10], pr[10], wt[10] = {0}, tat[10] = {0}, temp;
float total_wt = 0, total_tat = 0;
OUTPUT:
RESULT AND CONCLUSION:
In this lab, we demonstrated our comprehension of process scheduling algorithms by writing C programs that compute the average waiting time and average turnaround time of processes using the standard C library.
TITLE: IMPLEMENTATION OF BOUNDED BUFFER PROBLEM.
OBJECTIVE:
i) To learn about the Bounded Buffer problem and demonstrate synchronization
between producer and consumer processes using appropriate synchronization
mechanisms.
THEORY:
The Bounded Buffer problem involves managing a fixed-size buffer shared by producer and
consumer processes. The producer generates data and places it into the buffer, while the
consumer retrieves and processes the data. The challenge is to prevent the buffer from being
overfilled or emptied, which can lead to synchronization issues.
Key synchronization mechanisms like semaphores, mutexes, or condition variables are used
to ensure that the producer waits when the buffer is full, and the consumer waits when the
buffer is empty. The solution often involves two semaphores: one for counting the available
spaces and another for counting the number of items in the buffer.
Proper synchronization ensures that both processes can operate concurrently without data corruption or race conditions, improving the efficiency and reliability of the system in a multitasking environment.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>
#define BUFFER_SIZE 5
#define NUM_ITEMS 5
int buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
sem_t empty;
sem_t full;
pthread_mutex_t mutex;
pthread_join(prod_thread, NULL);
pthread_join(cons_thread, NULL);
sem_destroy(&empty);
sem_destroy(&full);
pthread_mutex_destroy(&mutex);
return 0;
}
OUTPUT:
OBJECTIVE:
i) To learn about deadlock avoidance algorithms, which prevent deadlock by making resource-allocation decisions in advance.
THEORY:
Deadlock avoidance algorithms ensure that a system never enters a deadlock state by
carefully controlling resource allocation. One widely used algorithm is the Banker's
Algorithm, which checks the safety of resource allocation before granting a request. It uses
information about maximum resource needs, currently allocated resources, and available
resources to determine if granting a request will lead to a safe or unsafe state.
A safe state is one where all processes can eventually complete without causing a deadlock,
while an unsafe state may lead to deadlock. Deadlock avoidance algorithms require processes
to declare their maximum resource requirements in advance, allowing the system to make
informed decisions.
Although these algorithms can prevent deadlocks, they may reduce system resource
utilization and efficiency. Proper implementation helps maintain system stability by avoiding
deadlocks while ensuring processes can still execute efficiently.
SOURCE CODE:
#include <stdio.h>
#include <stdbool.h>
#define MAX_PROCESSES 10
#define MAX_RESOURCES 10
// Global state used by the routines below
int n, m; // Number of processes and resource types
int max[MAX_PROCESSES][MAX_RESOURCES];
int allocation[MAX_PROCESSES][MAX_RESOURCES];
int need[MAX_PROCESSES][MAX_RESOURCES];
int available[MAX_RESOURCES];
void calculateNeed()
{
int i, j;
for (i = 0; i < n; i++)
{
for (j = 0; j < m; j++)
{
need[i][j] = max[i][j] - allocation[i][j];
}
}
}
bool isSafe()
{
int work[MAX_RESOURCES], finish[MAX_PROCESSES] = {0};
int safeSequence[MAX_PROCESSES], index = 0;
int i, j, k;
OBJECTIVE:
i) To learn about the different memory allocation techniques for efficient memory
management in a system.
THEORY:
Memory allocation techniques are used to assign memory blocks to processes in an efficient
manner. Contiguous memory allocation assigns a single contiguous block of memory to a
process, simplifying access but potentially leading to fragmentation.
Paging divides memory into fixed-size pages, allowing non-contiguous allocation and
reducing fragmentation, but introducing overhead for page management. Segmentation
divides memory into segments of varying sizes, where each segment corresponds to a logical
unit, such as a function or data array.
Virtual memory, typically implemented with paging (sometimes combined with segmentation), enables the system to use disk space as an extension of RAM, thus supporting processes larger than physical memory. The
choice of memory allocation technique depends on the system's needs, with the goal of
reducing fragmentation, improving memory utilization, and ensuring efficient execution of
processes. Proper memory allocation techniques enhance system performance, resource
utilization, and scalability.
First Fit Memory Allocation Technique
SOURCE CODE:
#include <stdio.h>
void firstFit(int blockSize[], int m, int processSize[], int n)
{
int allocation[n];
int i, j; // Declare variables outside the for loop for older C standards
for (i = 0; i < n; i++)
allocation[i] = -1; // Initially no process is allocated
for (i = 0; i < n; i++)
{
for (j = 0; j < m; j++)
{
if (blockSize[j] >= processSize[i])
{
allocation[i] = j; // Allocate the first block that fits
blockSize[j] -= processSize[i];
break;
}
}
}
printf("\nProcess No.\tProcess Size\tBlock no.\n");
for (i = 0; i < n; i++)
{
printf(" %d\t\t%d\t\t", i + 1, processSize[i]);
if (allocation[i] != -1)
printf("%d\n", allocation[i] + 1);
else
printf("Not Allocated\n");
}
}
int main()
{
int m, n, i;
printf("Enter the number of memory blocks: ");
scanf("%d", &m);
int blockSize[m];
printf("Enter the sizes of the memory blocks in order: ");
for (i = 0; i < m; i++)
{
scanf("%d", &blockSize[i]);
}
printf("Enter the number of processes: ");
scanf("%d", &n);
int processSize[n];
printf("Enter the sizes of the processes in order: ");
for (i = 0; i < n; i++)
{
scanf("%d", &processSize[i]);
}
firstFit(blockSize, m, processSize, n);
return 0;
}
OUTPUT:
Best Fit Memory Allocation Technique
SOURCE CODE:
#include <stdio.h>
void bestFit(int blockSize[], int m, int processSize[], int n)
{
int allocation[10]; // Array to hold allocation results
int i, j; // Declare variables outside the for loop
OUTPUT:
Worst Fit Memory Allocation Technique
SOURCE CODE:
#include <stdio.h>
#include <string.h>
void worstFit(int blockSize[], int m, int processSize[], int n)
{
int allocation[10];
memset(allocation, -1, sizeof(allocation)); // Initially no process is allocated
// Pick each process and find the largest block that fits its size
for (int i = 0; i < n; i++)
{
int wstIdx = -1;
for (int j = 0; j < m; j++)
{
if (blockSize[j] >= processSize[i])
{
if (wstIdx == -1 || blockSize[wstIdx] < blockSize[j])
{
wstIdx = j;
}
}
}
// If a block is found for the current process
if (wstIdx != -1)
{
allocation[i] = wstIdx;
blockSize[wstIdx] -= processSize[i];
}
}
printf("\nProcess No.\tProcess Size\tBlock no.\n");
for (int i = 0; i < n; i++)
{
printf(" %d\t\t%d\t\t", i + 1, processSize[i]);
if (allocation[i] != -1)
printf("%d", allocation[i] + 1);
else
printf("Not Allocated");
printf("\n");
}
}
int main()
{
int blockSize[] = {100, 500, 200, 300, 600};
int processSize[] = {212, 417, 112, 426};
int m = sizeof(blockSize) / sizeof(blockSize[0]);
int n = sizeof(processSize) / sizeof(processSize[0]);
worstFit(blockSize, m, processSize, n);
return 0;
}
OUTPUT:
OBJECTIVE:
i) To learn about the free space management techniques for efficient allocation and de-
allocation of memory or disk space.
THEORY:
Free space management is crucial for tracking and managing unused memory or disk space in
systems, ensuring that resources are efficiently allocated and de-allocated. Common
techniques include the bit vector method, where each bit represents a block of memory or
disk, with 0 indicating free space and 1 indicating allocated space.
Linked list allocation maintains a list of free blocks, where each block contains a pointer to
the next free block, making allocation and de-allocation more flexible. Counting keeps track
of the number of contiguous free blocks, allowing for faster allocation of large contiguous
spaces. Buddy system allocation divides memory into blocks of varying sizes, pairing them
into "buddies" to optimize space usage and reduce fragmentation.
These methods ensure that free space is managed effectively, minimizing fragmentation and
optimizing system performance. Proper implementation of free space management techniques
leads to better resource utilization and faster allocation and de-allocation operations, which results in smooth functioning of the system.
Bit Vector Free Space Management Technique
SOURCE CODE:
#include <stdio.h> // For printf
#include <stdbool.h> // For bool, true, false
#define BLOCKS 16 // Number of blocks
unsigned char bitmap[BLOCKS / 8] = {0};
OUTPUT:
Linked List Free Space Management Technique
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Simulate allocation
allocate(&freeList, 20);
display(freeList);
allocate(&freeList, 30);
display(freeList);
// Simulate deallocation
deallocate(&freeList, 20, 10);
display(freeList);
deallocate(&freeList, 50, 10);
display(freeList);
return 0;
}
OUTPUT:
SOURCE CODE:
#include <stdio.h>
#include <stdbool.h>
#define TOTAL_BLOCKS 16 // Total number of blocks
// Global variables
int index_block[TOTAL_BLOCKS]; // Indexed free space table
bool block_status[TOTAL_BLOCKS]; // Track block status: true = allocated, false = free
int index_count = 0; // Number of free blocks in the index block
// Initialize free blocks
void initialize_blocks()
{
for (int i = 0; i < TOTAL_BLOCKS; i++)
{
index_block[i] = i; // Add all blocks to the index
block_status[i] = false; // Mark all blocks as free
}
index_count = TOTAL_BLOCKS; // Initialize free block count
}
// Allocate a block
int allocate_block()
{
if (index_count == 0)
{
printf("No free blocks available.\n");
return -1; // No free blocks
}
int allocated_block = index_block[--index_count]; // Get a free block
block_status[allocated_block] = true; // Mark as allocated
printf("Block %d allocated.\n", allocated_block);
return allocated_block;
}
// Free a block
void free_block(int block)
{
if (block < 0 || block >= TOTAL_BLOCKS || !block_status[block])
{
printf("Invalid or already free block: %d\n", block);
return;
}
block_status[block] = false; // Mark as free
index_block[index_count++] = block; // Add back to the index
printf("Block %d freed.\n", block);
}
// Display the current index block
void display_index_block()
{
printf("Index Block (Free Blocks): ");
for (int i = 0; i < index_count; i++)
{
printf("%d ", index_block[i]);
}
printf("\n");
}
// Display block allocation status
void display_block_status()
{
printf("Block Status: ");
for (int i = 0; i < TOTAL_BLOCKS; i++)
{
printf("%d[%c] ", i, block_status[i] ? 'A' : 'F'); // A = Allocated, F = Free
}
printf("\n");
}
int main()
{
initialize_blocks(); // Initialize the blocks
display_index_block();
// Allocate two blocks
int b = allocate_block();
allocate_block();
display_block_status();
// Free a block
free_block(b);
display_index_block();
display_block_status();
return 0;
}
OUTPUT:
RESULT AND CONCLUSION:
In this lab, we demonstrated our comprehension of free space management techniques by writing C programs that track and display the allocation status of blocks using the standard C library.
TITLE: IMPLEMENTATION OF PAGE REPLACEMENT ALGORITHMS.
OBJECTIVE:
i) To learn about the page replacement algorithms that optimize memory management
by determining which pages to swap in and out of physical memory.
THEORY:
Page replacement algorithms are used in virtual memory systems to decide which pages
should be swapped out when physical memory is full. Common algorithms include First-In-
First-Out (FIFO), which replaces the oldest page in memory, regardless of how frequently it
is used. Least Recently Used (LRU) replaces the page that has not been used for the longest
period, aiming to keep the most recently accessed pages in memory.
Similarly, Optimal Page Replacement chooses the page that will not be used for the longest
time in the future, providing the best possible performance but being impractical to
implement in real systems since future page requests are unknown. Clock is a more efficient
approximation of LRU, using a circular buffer and a reference bit to track page usage.
Page replacement algorithms aim to minimize page faults and optimize memory utilization,
enhancing system performance by ensuring that frequently used pages are readily available in
memory while less frequently used pages are swapped out. Proper implementation reduces
overhead and increases system responsiveness.
First-In-First-Out (FIFO) Page Replacement Algorithm
SOURCE CODE:
#include <stdio.h>
int main()
{
int capacity, n, page_faults = 0, page_hits = 0, front = 0;
printf("Enter the number of frames: ");
scanf("%d", &capacity); // Number of frames available (capacity of memory)
printf("Enter the number of page requests: ");
scanf("%d", &n); // Number of pages (sequence length)
SOURCE CODE:
#include <stdio.h>
struct Page
{
int value; // Page value
int frequency; // Frequency of usage
int last_used; // Last used time for LFU tie-breaking
};
int main()
{
int capacity, n, page_faults = 0, page_hits = 0, time = 0;
OUTPUT:
Optimal Page Replacement Algorithm
SOURCE CODE:
#include <stdio.h>
int find_farthest(int pages[], int frames[], int n, int current_index, int capacity)
{
int farthest_index = -1;
int farthest_distance = -1;
int i, j; // Declare loop variables here
for (i = 0; i < capacity; i++)
{
// Check how far this page is used in the future
for (j = current_index + 1; j < n; j++)
{
if (frames[i] == pages[j])
{
if (j > farthest_distance)
{
farthest_distance = j;
farthest_index = i;
}
break; // Break if the page is found
}
}
// If the page is not used in the future at all
if (j == n)
{
return i; // Replace this page since it's never used again
}
}
return (farthest_index == -1) ? 0 : farthest_index;
}
int main()
{
int capacity, n, page_faults = 0, page_hits = 0;
int filled = 0; // Number of pages in frames currently filled
OUTPUT:
SOURCE CODE:
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>
OBJECTIVE:
i) To learn about the file allocation techniques for efficient storage, retrieval, and
management of files on a disk.
THEORY:
File allocation techniques manage how files are stored on a disk to optimize space utilization
and access time. Contiguous allocation stores a file in consecutive blocks on the disk,
providing fast access but leading to fragmentation and limited flexibility.
Linked allocation uses a linked list where each file block points to the next, eliminating
fragmentation but slowing access due to sequential traversal. Indexed allocation maintains an
index block for each file, containing pointers to its data blocks, combining efficient access
and flexibility but requiring extra storage for index blocks. Advanced techniques like multi-
level indexing and hashed allocation further improve performance and reliability in specific
scenarios.
Each technique has its advantages and trade-offs in terms of speed, space utilization, and
complexity. Implementing these techniques helps to optimize file storage and ensures
efficient disk space management in a file system so that each required file can be accessed easily.
Sequential File Allocation Technique
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define MAX_FILES 10
#define MAX_NAME_LENGTH 20
#define DISK_SIZE 100
OUTPUT:
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define MAX_FILES 10
#define DISK_SIZE 20
#define MAX_NAME_LENGTH 20
displayDiskState(&disk);
return 0;
}
OUTPUT:
Indexed File Allocation Technique
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define MAX_FILES 10
#define DISK_SIZE 20
#define MAX_NAME_LENGTH 20
#define MAX_INDEX_BLOCKS 5 // Max number of blocks a single file can use
displayDiskState(&disk);
return 0;
}
OUTPUT:
RESULT AND CONCLUSION:
In this lab, we demonstrated our comprehension of file allocation techniques by writing C programs that track where files are placed on a simulated disk using the standard C library.
TITLE: IMPLEMENTATION OF DISK SCHEDULING ALGORITHMS.
OBJECTIVE:
i) To learn about the disk scheduling algorithms for optimizing disk access time and
improving overall system performance.
THEORY:
Disk scheduling algorithms determine the order in which disk I/O requests are processed to
minimize seek time and improve throughput. First-Come-First-Serve (FCFS) processes
requests in the order they arrive, which is simple but may lead to long seek times. Shortest
Seek Time First (SSTF) selects the request closest to the current head position, reducing seek
time but potentially causing starvation for distant requests.
Similarly, SCAN and C-SCAN move the disk head in a specific direction, servicing requests
along the way, and reverse or reset direction at the end, providing fairness and reducing
variance in access time. LOOK and C-LOOK are variants of SCAN and C-SCAN that only
move as far as the furthest request, further optimizing performance.
These algorithms aim to balance efficiency, fairness, and responsiveness in handling disk I/O.
Proper implementation ensures reduced latency, improved disk utilization, and better end-
user experience in storage-intensive applications.
First-Come-First-Serve (FCFS) Disk Scheduling Algorithm
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
void FCFS(int requests[], int n, int head)
{
int total_head_movement = 0;
int i; // Declare the loop variable outside the for loop
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
void SSTF(int requests[], int n, int head)
{
int total_head_movement = 0;
int completed[100] = {0}; // To track completed requests
int current_index, i;
OUTPUT:
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
void SCAN(int requests[], int n, int head, int direction)
{
int total_head_movement = 0;
int current_index, i;
OUTPUT:
C-SCAN Disk Scheduling Algorithm
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
void C_SCAN_Top_Bottom(int requests[], int n, int head, int disk_size)
{
int total_head_movement = 0;
int sorted_requests[n + 2]; // Two extra slots for the boundary points 0 and disk_size - 1
int i, j;
// Copy the requests and add the boundary points (0 and disk_size - 1)
for (i = 0; i < n; i++)
sorted_requests[i] = requests[i];
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
void LOOK_Top_Bottom(int requests[], int n, int head)
{
int total_head_movement = 0;
int sorted_requests[n]; // Array to store sorted requests
int i, j;
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
void C_LOOK_Top_Bottom(int requests[], int n, int head)
{
int total_head_movement = 0;
int sorted_requests[n]; // Array to store sorted requests
int i, j;
OUTPUT:
OBJECTIVE:
i) To learn about the architecture, features, and functionalities of the Linux operating
system and its applications in real-world scenarios.
THEORY:
Linux is an open-source, Unix-like operating system widely used in servers, desktops, mobile
devices, and embedded systems. Its modular architecture comprises a monolithic kernel
responsible for managing hardware resources, process scheduling, and memory management.
Linux supports multitasking, multiuser operations, and file system management, offering
flexibility and efficiency.
Key features include support for multiple file systems (ext4, XFS, FAT), robust security
through permissions and SELinux, and dynamic library linking for efficient application
execution. Linux distributions such as Ubuntu, CentOS, and Debian cater to diverse use cases,
from general-purpose computing to enterprise-level servers. The OS is extensively used in
cloud computing, containerization (via Docker), and high-performance computing clusters
due to its scalability and reliability.
Its package management systems (like apt, yum) simplify software installation and updates.
A notable example is its role in powering the majority of web servers, including those for
companies like Google and Facebook, emphasizing its stability and performance. This case
study highlights Linux's versatility and dominance in modern computing environments.