Os Lab
#!/bin/bash
filename="addressbook.txt"

while true
do
    echo ""
    echo "===== Address Book Menu ====="
    echo "1. Create Address Book"
    echo "2. View Address Book"
    echo "3. Insert a Record"
    echo "4. Delete a Record"
    echo "5. Modify a Record"
    echo "6. Exit"
    echo "=============================="
    read -p "Choose an option (1-6): " choice

    case $choice in
        1)
            > "$filename"
            echo "Address book created (cleared if it existed)!"
            ;;
        2)
            if [ -e "$filename" ]; then
                echo "---- Address Book ----"
                cat "$filename"
            else
                echo "Address book does not exist. Create it first!"
            fi
            ;;
        3)
            read -p "Enter Name: " name
            read -p "Enter Phone: " phone
            read -p "Enter Email: " email
            echo "$name | $phone | $email" >> "$filename"
            echo "Record inserted!"
            ;;
        4)
            read -p "Enter Name to Delete: " delname
            grep -v "^$delname |" "$filename" > temp.txt && mv temp.txt "$filename"
            echo "Record deleted (if it existed)."
            ;;
        5)
            read -p "Enter Name to Modify: " modname
            grep -v "^$modname |" "$filename" > temp.txt && mv temp.txt "$filename"
            read -p "Enter New Name: " newname
            read -p "Enter New Phone: " newphone
            read -p "Enter New Email: " newemail
            echo "$newname | $newphone | $newemail" >> "$filename"
            echo "Record modified!"
            ;;
        6)
            echo "Exiting. Bye!"
            break
            ;;
        *)
            echo "Invalid option. Try again!"
            ;;
    esac
done
3. Implement process scheduling algorithms.

Given burst times (all processes arrive at time 0):

Process  BT
P1       6
P2       8
P3       7
P4       3
Now let’s solve this with FCFS, SJF, and Round Robin.
FCFS (First Come, First Served):

Process  BT  WT  TAT
P1       6   0   6
P2       8   6   14
P3       7   14  21
P4       3   21  24
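The FCFS numbers above can be reproduced with a short Python sketch (the list and function name are illustrative, not part of the lab's required code):

```python
# FCFS: run processes in arrival order; each one waits for all earlier bursts.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]

def fcfs(procs):
    """Return (name, BT, WT, TAT) rows; all processes arrive at t = 0."""
    rows, clock = [], 0
    for name, bt in procs:
        wt = clock            # waiting time = work done before this process
        clock += bt           # run the burst to completion
        rows.append((name, bt, wt, clock))  # TAT = completion time (arrival 0)
    return rows

for row in fcfs(processes):
    print(*row)
```

Running it prints exactly the four table rows above.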
SJF (Shortest Job First) runs the shortest burst first, giving the order:

P4 3
P1 6
P3 7
P2 8
Step-by-step:

Process  BT  WT  TAT
P4       3   0   3
P1       6   3   9
P3       7   9   16
P2       8   16  24
Notice: the average waiting time drops from 10.25 under FCFS to 7 here. That is why SJF is better for batch processing.
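Non-preemptive SJF is just FCFS applied to processes sorted by burst time; a minimal sketch (names are illustrative):

```python
# SJF (non-preemptive): sort by burst time, then accumulate as in FCFS.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]

def sjf(procs):
    rows, clock = [], 0
    for name, bt in sorted(procs, key=lambda p: p[1]):  # shortest burst first
        rows.append((name, bt, clock, clock + bt))      # (name, BT, WT, TAT)
        clock += bt
    return rows

rows = sjf(processes)
print(rows)
print("Average WT:", sum(r[2] for r in rows) / len(rows))  # 7.0, vs 10.25 for FCFS
```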
Round Robin (Time Quantum = 4):

Process  BT  CT  TAT  WT
P1       6   17  17   11
P2       8   21  21   13
P3       7   24  24   17
P4       3   15  15   12
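The Round Robin table can be checked by simulating the ready queue with a quantum of 4 (a sketch; the function name is illustrative):

```python
from collections import deque

# Round Robin with time quantum 4; all processes arrive at t = 0.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]
QUANTUM = 4

def round_robin(procs, quantum):
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    clock, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # run one time slice (or less)
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)               # not finished: go to the back
        else:
            completion[name] = clock
    # arrival time is 0, so TAT = CT and WT = TAT - BT
    return [(n, bt, completion[n], completion[n], completion[n] - bt)
            for n, bt in procs]

for row in round_robin(processes, QUANTUM):
    print(*row)
```

The simulation reproduces the CT/TAT/WT columns above row for row.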
Final Verdict: FCFS is simplest but makes short jobs wait behind long ones; SJF gives the lowest average waiting time (7 here); Round Robin has the highest average waiting time here (13.25) but guarantees fair, responsive time-sharing.
4. Banker's Algorithm (deadlock avoidance).

Given:
Process  Allocated  Max
         A B C      A B C
P0       0 1 0      7 5 3
P1       2 0 0      3 2 2
P2       3 0 2      9 0 2
P3       2 1 1      2 2 2
P4       0 0 2      4 3 3

Available = [3, 3, 2]
Need (Max - Allocated):

Process  A B C
P0       7 4 3
P1       1 2 2
P2       6 0 0
P3       0 1 1
P4       4 3 1
Safe Sequence:
P1 → P3 → P4 → P0 → P2
This means no deadlock will happen! The system is SAFE.
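The safe sequence can be verified with a small checker (a sketch; the helper name is illustrative): it replays the sequence, confirming each process's Need fits in Work before that process finishes and releases its Allocation.

```python
# Banker's safety check: replay a candidate sequence against Need and Work.
allocated = {"P0": [0, 1, 0], "P1": [2, 0, 0], "P2": [3, 0, 2],
             "P3": [2, 1, 1], "P4": [0, 0, 2]}
maximum   = {"P0": [7, 5, 3], "P1": [3, 2, 2], "P2": [9, 0, 2],
             "P3": [2, 2, 2], "P4": [4, 3, 3]}
available = [3, 3, 2]

def is_safe_order(order, alloc, mx, avail):
    need = {p: [m - a for m, a in zip(mx[p], alloc[p])] for p in alloc}
    work = list(avail)
    for p in order:
        if any(n > w for n, w in zip(need[p], work)):
            return False                                 # p cannot finish here
        work = [w + a for w, a in zip(work, alloc[p])]   # p finishes, releases
    return True

print(is_safe_order(["P1", "P3", "P4", "P0", "P2"],
                    allocated, maximum, available))  # True
```

Note that a system may have several safe sequences; this checker validates one specific ordering.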
5. Program to create two threads: one to increment the value of a shared
variable and a second to decrement it, with no synchronization
(demonstrates a race condition).

import threading

shared_variable = 0
iterations = 100000

def increment():
    global shared_variable
    for _ in range(iterations):
        shared_variable += 1

def decrement():
    global shared_variable
    for _ in range(iterations):
        shared_variable -= 1

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()

print("Expected final value: 0")
print(f"Final value is: {shared_variable}")
print("Any nonzero result is caused by the race on shared_variable")
6. Program to create two threads: one to increment the value of a shared
variable and a second to decrement it. Both threads use a mutex lock so
that only one thread executes in its critical section at a time (process
synchronization using mutex locks).
import threading

shared_variable = 0
iterations = 100000
lock = threading.Lock()

def increment():
    global shared_variable
    for _ in range(iterations):
        lock.acquire()  # acquire lock before the critical section
        shared_variable += 1
        lock.release()  # release lock after the critical section

def decrement():
    global shared_variable
    for _ in range(iterations):
        lock.acquire()  # acquire lock before the critical section
        shared_variable -= 1
        lock.release()  # release lock after the critical section

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()

print("Initial value: 0")
print(f"Final value: {shared_variable}")
print("Note: With proper synchronization, the final value should always be 0")
7. Program to create two threads: one to increment the value of a shared
variable and a second to decrement it. Both threads use a semaphore so
that only one thread executes in its critical section at a time (process
synchronization using a semaphore).
import threading

shared_variable = 0
iterations = 100000
semaphore = threading.Semaphore(1)

def increment():
    global shared_variable
    for _ in range(iterations):
        semaphore.acquire()
        shared_variable += 1
        semaphore.release()

def decrement():
    global shared_variable
    for _ in range(iterations):
        semaphore.acquire()
        shared_variable -= 1
        semaphore.release()

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()

print("Initial value: 0")
print(f"Final value: {shared_variable}")
Producer-Consumer problem using semaphores (bounded buffer).

import threading
import time
import random

BUFFER_SIZE = 5
buffer = []
mutex = threading.Semaphore(1)
empty = threading.Semaphore(BUFFER_SIZE)
full = threading.Semaphore(0)

class Producer(threading.Thread):
    def run(self):
        global buffer
        for i in range(10):
            item = random.randint(1, 100)
            empty.acquire()   # wait for a free slot
            mutex.acquire()   # lock the buffer
            buffer.append(item)
            print(f"Produced {item}, Buffer: {buffer}")
            mutex.release()
            full.release()    # signal a filled slot
            time.sleep(random.random())

class Consumer(threading.Thread):
    def run(self):
        global buffer
        for i in range(10):
            full.acquire()    # wait for a filled slot
            mutex.acquire()   # lock the buffer
            item = buffer.pop(0)
            print(f"Consumed {item}, Buffer: {buffer}")
            mutex.release()
            empty.release()   # signal a free slot
            time.sleep(random.random())

producer = Producer()
consumer = Consumer()
producer.start()
consumer.start()
producer.join()
consumer.join()
7. Live RAM Content Capture and Analysis with Volatility
2. Volatility Analysis

After capturing memory, analyze it with Volatility:

Basic System Information:
volatility -f memory.dmp imageinfo
volatility -f memory.dmp --profile=Win10x64_19041 pslist

Process Analysis Commands:
# List all processes as a tree
volatility -f memory.dmp --profile=Win10x64_19041 pstree

Advanced Analysis:
# Check for API hooks in a given process (PID 1234)
volatility -f memory.dmp --profile=Win10x64_19041 apihooks -p 1234
#!/usr/bin/env python3
import os
import subprocess
Key Features: