Smu CS7343 Midterm
b. P1: 20-0 = 20, P2: 80-25 = 55, P3: 90-30 = 60, P4: 75-60 = 15, P5: 120-100 = 20, P6: 115-105 = 10
c. P1: 0, P2: 40, P3: 35, P4: 0, P5: 10, P6: 0
d. 105/120 = 87.5 percent.
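The arithmetic in parts (b) and (d) can be sanity-checked with a short script; the (completion, arrival) pairs below are read directly from the differences shown in part (b):

```python
# (completion_time, arrival_time) pairs for P1..P6, taken from part (b)
times = {"P1": (20, 0), "P2": (80, 25), "P3": (90, 30),
         "P4": (75, 60), "P5": (120, 100), "P6": (115, 105)}

# Turnaround time = completion time - arrival time
turnaround = {pid: done - arrived for pid, (done, arrived) in times.items()}
print(turnaround)  # matches part (b): P1=20, P2=55, P3=60, P4=15, P5=20, P6=10

# CPU utilization = busy time / total time, as in part (d)
busy_time, total_time = 105, 120
print(f"{busy_time / total_time:.1%}")  # 87.5%
```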
import heapq
from collections import deque
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Process:
    # Per-process bookkeeping; the original Process definition was elided.
    pid: int
    arrival_time: int
    burst_time: int
    priority: int = 0
    waiting_time: int = 0
    turnaround_time: int = 0
    response_time: int = -1
    remaining_time: int = 0

    def __lt__(self, other: "Process") -> bool:
        return self.burst_time < other.burst_time  # min-heap order for SJF

class CPUScheduler:
    def __init__(self):
        self.processes: List[Process] = []

    def add_process(self, pid: int, arrival_time: int, burst_time: int, priority: int = 0):
        self.processes.append(Process(pid, arrival_time, burst_time, priority))

    def reset_processes(self):
        for process in self.processes:
            process.waiting_time = 0
            process.turnaround_time = 0
            process.response_time = -1
            process.remaining_time = process.burst_time

    def fcfs_scheduling(self) -> Dict[str, float]:
        # First-Come, First-Served: run processes in arrival order.
        self.reset_processes()
        current_time = 0
        for process in sorted(self.processes, key=lambda p: p.arrival_time):
            current_time = max(current_time, process.arrival_time)
            process.response_time = current_time - process.arrival_time
            process.waiting_time = current_time - process.arrival_time
            current_time += process.burst_time
            process.turnaround_time = current_time - process.arrival_time
        return self._calculate_metrics()

    def sjf_scheduling(self) -> Dict[str, float]:
        # Non-preemptive Shortest Job First: min-heap ordered by burst time.
        self.reset_processes()
        remaining_processes = sorted(self.processes, key=lambda p: p.arrival_time)
        ready_queue: List[Process] = []
        completed_processes: List[Process] = []
        current_time = 0
        while remaining_processes or ready_queue:
            while remaining_processes and remaining_processes[0].arrival_time <= current_time:
                heapq.heappush(ready_queue, remaining_processes.pop(0))
            if not ready_queue:
                current_time = remaining_processes[0].arrival_time
                continue
            process = heapq.heappop(ready_queue)
            process.response_time = current_time - process.arrival_time
            process.waiting_time = current_time - process.arrival_time
            current_time += process.burst_time
            process.turnaround_time = current_time - process.arrival_time
            completed_processes.append(process)
        self.processes = completed_processes
        return self._calculate_metrics()

    def round_robin_scheduling(self, quantum: int = 2) -> Dict[str, float]:
        # Preemptive Round Robin with a fixed time quantum.
        self.reset_processes()
        remaining_processes = sorted(self.processes, key=lambda p: p.arrival_time)
        ready_queue: deque = deque()
        current_time = 0
        while remaining_processes or ready_queue:
            while remaining_processes and remaining_processes[0].arrival_time <= current_time:
                ready_queue.append(remaining_processes.pop(0))
            if not ready_queue:
                current_time = remaining_processes[0].arrival_time
                continue
            process = ready_queue.popleft()
            if process.response_time == -1:
                process.response_time = current_time - process.arrival_time
            run_time = min(quantum, process.remaining_time)
            current_time += run_time
            process.remaining_time -= run_time
            # Admit any processes that arrived while this time slice ran.
            while remaining_processes and remaining_processes[0].arrival_time <= current_time:
                ready_queue.append(remaining_processes.pop(0))
            if process.remaining_time > 0:
                ready_queue.append(process)  # re-queue an unfinished process
            else:
                process.turnaround_time = current_time - process.arrival_time
                process.waiting_time = process.turnaround_time - process.burst_time
        return self._calculate_metrics()

    def priority_scheduling(self) -> Dict[str, float]:
        # Non-preemptive priority scheduling (lower value = higher priority).
        self.reset_processes()
        remaining_processes = sorted(self.processes, key=lambda p: p.arrival_time)
        ready_queue: List = []
        completed_processes: List[Process] = []
        current_time = 0
        while remaining_processes or ready_queue:
            while remaining_processes and remaining_processes[0].arrival_time <= current_time:
                next_process = remaining_processes.pop(0)
                heapq.heappush(ready_queue, (next_process.priority, next_process))
            if not ready_queue:
                current_time = remaining_processes[0].arrival_time
                continue
            _, process = heapq.heappop(ready_queue)
            process.response_time = current_time - process.arrival_time
            process.waiting_time = current_time - process.arrival_time
            current_time += process.burst_time
            process.turnaround_time = current_time - process.arrival_time
            completed_processes.append(process)
        self.processes = completed_processes
        return self._calculate_metrics()

    def _calculate_metrics(self) -> Dict[str, float]:
        n = len(self.processes)
        avg_turnaround = sum(p.turnaround_time for p in self.processes) / n
        avg_waiting = sum(p.waiting_time for p in self.processes) / n
        avg_response = sum(p.response_time for p in self.processes) / n
        return {
            "avg_turnaround_time": round(avg_turnaround, 2),
            "avg_waiting_time": round(avg_waiting, 2),
            "avg_response_time": round(avg_response, 2)
        }
# Print results
print("\nPerformance Metrics Comparison:")
print("-" * 70)
print(f"{'Algorithm':<15} {'Avg Turnaround Time':<20} {'Avg Waiting Time':<20} {'Avg Response Time'}")
print("-" * 70)
print(f"{'FCFS':<15} {fcfs_metrics['avg_turnaround_time']:<20.2f} {fcfs_metrics['avg_waiting_time']:<20.2f} {fcfs_metrics['avg_response_time']:.2f}")
print(f"{'SJF':<15} {sjf_metrics['avg_turnaround_time']:<20.2f} {sjf_metrics['avg_waiting_time']:<20.2f} {sjf_metrics['avg_response_time']:.2f}")
print(f"{'Round Robin':<15} {rr_metrics['avg_turnaround_time']:<20.2f} {rr_metrics['avg_waiting_time']:<20.2f} {rr_metrics['avg_response_time']:.2f}")
print(f"{'Priority':<15} {priority_metrics['avg_turnaround_time']:<20.2f} {priority_metrics['avg_waiting_time']:<20.2f} {priority_metrics['avg_response_time']:.2f}")

if __name__ == "__main__":
    run_simulation()
o Higher latency
o Serialization overhead
o More complex programming model
Recommendation: Hybrid approach
• Use shared memory within nodes
• Use message passing between nodes
• Implement smart buffering and batching
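The buffering-and-batching idea for inter-node messaging can be sketched as follows; `batch_messages` and its parameters are hypothetical, not part of the original answer. Draining many queued items into one send amortizes the serialization overhead and latency listed above:

```python
import queue
import time

def batch_messages(q: "queue.Queue", max_batch: int = 32, max_wait: float = 0.05) -> list:
    # Collect up to max_batch items, waiting at most max_wait seconds total,
    # so one inter-node send carries many messages instead of one.
    batch = []
    deadline = time.monotonic() + max_wait
    while len(batch) < max_batch:
        timeout = deadline - time.monotonic()
        if timeout <= 0:
            break
        try:
            batch.append(q.get(timeout=timeout))
        except queue.Empty:
            break
    return batch

q = queue.Queue()
for i in range(5):
    q.put(i)
print(batch_messages(q))  # → [0, 1, 2, 3, 4], since all items were already queued
```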
f) Real-time Constraints Strategy:
1. Priority-based scheduling:
python
class TaskPriority:
    CRITICAL = 0  # Real-time requirements
    HIGH = 1      # Near real-time
    NORMAL = 2    # Standard processing
    BATCH = 3     # Background processing
2. Deadline-aware scheduling:
o Track processing time for each pipeline stage
o Use estimated completion time for scheduling
o Implement earliest deadline first (EDF) scheduling
3. Resource reservation:
o Reserve capacity for critical tasks
o Implement admission control
o Dynamic resource allocation
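The EDF idea in item 2 can be sketched with a deadline-ordered min-heap; `EDFScheduler` and the task names are illustrative assumptions, not part of the original answer:

```python
import heapq

class EDFScheduler:
    # Earliest-deadline-first dispatch: always run the task whose
    # absolute deadline is nearest.
    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker so tasks never compare directly

    def submit(self, deadline: float, task) -> None:
        heapq.heappush(self._heap, (deadline, self._count, task))
        self._count += 1

    def next_task(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

edf = EDFScheduler()
edf.submit(30, "resize")
edf.submit(10, "thumbnail")
edf.submit(20, "ocr")
print(edf.next_task())  # → "thumbnail" (earliest deadline runs first)
```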
g) Dynamic Scaling Strategy:
1. Metrics-based scaling:
o Monitor CPU, memory, queue length, and processing latency
o Define scaling thresholds based on SLAs
o Implement predictive scaling based on historical patterns
2. Cost optimization:
o Use spot instances for non-critical processing
o Implement workload-aware node shutdown
o Batch processing for non-real-time tasks
3. Implementation:
python
def scale_decision(metrics):
    return {
        'scale_up': metrics.queue_length > QUEUE_THRESHOLD
                    and metrics.processing_latency > LATENCY_THRESHOLD,
        'scale_down': metrics.node_utilization < UTILIZATION_THRESHOLD
                      and metrics.queue_length < MIN_QUEUE_LENGTH,
        'recommended_nodes': calculate_needed_nodes(metrics)
    }
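A minimal, hypothetical version of the `calculate_needed_nodes` helper referenced above could size the cluster so the backlog drains within a target window; the `Metrics` fields, `per_node_throughput`, and the 60-second window are assumptions, not from the original answer:

```python
import math
from dataclasses import dataclass

@dataclass
class Metrics:
    queue_length: int
    processing_latency: float
    node_utilization: float
    per_node_throughput: int  # items/sec one node can drain (assumed)

def calculate_needed_nodes(metrics: Metrics, target_drain_seconds: int = 60,
                           min_nodes: int = 1) -> int:
    # Enough nodes that the current backlog drains within the target window.
    needed = metrics.queue_length / (metrics.per_node_throughput * target_drain_seconds)
    return max(min_nodes, math.ceil(needed))

print(calculate_needed_nodes(Metrics(12000, 2.5, 0.4, 10)))  # 12000/(10*60) → 20
```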
The architecture provides a solid basis for managing large-scale image processing while leaving room for further scaling and modification. The key point, however, is balancing performance and reliability against resource utilization, which is achieved through careful system design and the right choice of algorithms and policies.
2. Create the Module Source File: In this directory, create a C source file named my_module.c:
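The file's contents were not reproduced in this write-up; a minimal placeholder module consistent with the load/unload steps below might look like this (an assumed hello-world module, not the original file):

```c
/* my_module.c - minimal sketch; the original source was not included here. */
#include <linux/init.h>
#include <linux/module.h>

static int __init my_module_init(void)
{
    pr_info("my_module: loaded\n");   /* appears in the kernel log on insmod */
    return 0;
}

static void __exit my_module_exit(void)
{
    pr_info("my_module: unloaded\n"); /* appears in the kernel log on rmmod */
}

module_init(my_module_init);
module_exit(my_module_exit);

MODULE_LICENSE("GPL");
```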
3. Create the Makefile: In the same directory, create a file called Makefile:
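The Makefile itself was not reproduced; a standard kbuild Makefile for an out-of-tree module (assumed) is:

```makefile
obj-m += my_module.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```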
Compile the Kernel Module
1. From your ~/my_kernel_module directory, open a terminal and run:
2. This command compiles your kernel module and creates an output file called
my_module.ko.
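The command was omitted above; with the kbuild Makefile described in step 3, the compile step is presumably just:

```shell
cd ~/my_kernel_module
make    # builds against the running kernel's headers and emits my_module.ko
```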
Setting Up QEMU
1. Create a Virtual Disk Image:
2. Boot QEMU:
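The QEMU commands were not included in this guide; typical invocations look like the following, where the image name, size, memory, and ISO path are all assumptions to adapt to your setup:

```shell
# 1. Create a virtual disk image for the guest
qemu-img create -f qcow2 test-disk.qcow2 8G

# 2. Boot the guest (here from an installer ISO)
qemu-system-x86_64 -m 2G -hda test-disk.qcow2 -cdrom linux.iso -boot d
```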
1. Load the Module: Once you are inside the QEMU guest, you can load your module with:
2. Check the Kernel Messages: Inspect the kernel log with:
3. Unload the Module: After you finish testing, remove the module from memory with:
4. Check Again: Look at the kernel messages once more to verify that the module has been unloaded:
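The commands referenced in steps 1-4 were omitted; inside the guest they are presumably the standard module utilities (run as root):

```shell
insmod my_module.ko   # 1. load the module
dmesg | tail          # 2. check kernel messages for the init printk
rmmod my_module       # 3. unload the module
dmesg | tail          # 4. verify the exit printk appears
```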
Clean Up
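With the kbuild Makefile assumed earlier, cleanup is presumably its clean target:

```shell
make clean   # removes my_module.ko and intermediate build artifacts
```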
Conclusion
This guide presents a basic framework for kernel module creation, focused on process
management and synchronization. The module can be extended with far more complicated
process-management features or synchronization mechanisms, such as spinlocks,
mutexes, and others.