Not True Is A Ring Based Security System F
Only the parent and child processes can use named pipes for communication. F
Seek time and rotational latency are the two components of positioning time for a disk head. T
Question 6
a. Give two advantages of Virtual Memory compared to a simple memory system with load-time binding and
base and limit registers.
i. Process isolation and per-page protection: each process has its own virtual address space, and individual pages can be protected (e.g. read-only code).
ii. A process can be larger than physical memory, since only the pages actually in use need to be resident (demand paging), and physical memory suffers no external fragmentation.
b. Give two disadvantages of Virtual Memory compared to a simple memory system with load-time binding and
base and limit registers
i. More complicated: extra hardware (MMU, page table, TLB) and OS support are needed, and every memory access goes through address translation.
ii. Thrashing: if the working sets of the running processes do not fit in physical memory, the system can spend most of its time paging rather than executing.
c. Give one advantage of a static priority algorithm for real-time scheduling, and give one example of such an
algorithm.
i. Advantage: priorities are fixed, so the scheduler is simple, has very low run-time overhead, and schedulability can be analysed (and guaranteed) offline using the utilisation bound. Example: Rate-Monotonic scheduling for real-time systems.
d. Give one advantage of a dynamic priority algorithm for real-time scheduling, and give one example of such an
algorithm.
i. Advantage: theoretically optimal - deadlines can be met at up to 100% CPU utilisation (assuming context switching is free). Example: Earliest-Deadline-First (EDF).
e. Give two advantages of using interrupts rather than polling for interaction between a host computer and a
device controller.
i. The CPU does not waste cycles busy-waiting on a device that is not ready; it can do useful work until the device raises an interrupt.
ii. The device is serviced promptly when it actually has something to report, even if its events are infrequent or unpredictable.
f. Give one disadvantage of using interrupts rather than polling for interaction between a host computer and a
device controller.
i. Each interrupt forces a context switch (saving and restoring state), which becomes expensive if the device interrupts very frequently.
Question 6 (alternative answers)
a. Advantage of virtual memory: abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.
d. Advantage of a dynamic priority algorithm: priorities are assigned at run time, so the task that most urgently needs the CPU (e.g. the one with the earliest deadline) gets the highest priority. Example: Earliest-Deadline-First.
e. Advantages of interrupts: see (e) above.
f. Disadvantages of interrupts: see (f) above.
Question 7
a.
i. Hold and Wait
1. Prevention method: ensure that the hold-and-wait condition (a process holding at least one resource while waiting to acquire additional resources held by other processes) can never hold, e.g. by requiring each process to request and be allocated all of its resources before it begins execution.
2. Disadvantages:
a. Resource utilisation is low, since resources may be allocated but unused for long periods.
b. Starvation is possible. A process that needs several popular resources may have to wait indefinitely, because at least one of the resources it needs is always allocated to some other process.
ii. No Preemption
1. Prevention method: if a process holding some resources requests another resource that cannot be allocated immediately, the resources it currently holds are released (preempted) and it must request them again (see the sketch after this list).
2. Disadvantage: this only works well for resources whose state can easily be saved and restored later (e.g. CPU registers, memory contents); it cannot generally be applied to resources such as mutex locks and semaphores protecting partially completed work.
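A minimal sketch (not from the notes, just an illustration) of the no-preemption idea above, using two hypothetical POSIX mutexes as the resources: if the second resource cannot be taken immediately, the process gives up the one it already holds and retries, so it never blocks while holding a resource.

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;  /* hypothetical resource 1 */
static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;  /* hypothetical resource 2 */

void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&res1);
        if (pthread_mutex_trylock(&res2) == 0)
            return;                      /* now holding both resources */
        pthread_mutex_unlock(&res1);     /* cannot get res2: release what we hold */
        sched_yield();                   /* back off, then try again from scratch */
    }
}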
b. Access List
i. Attached to each object; lists who can access it and how. Revoking a user's access is easy: just remove that user from the list.
ii. Advantage: suits the case where a small number of files is shared by a lot of users with different requirements.
c. Capability List
i. A capability list is per user and says what that user can do overall. Capabilities can be transferred, because one process can run on behalf of another.
ii. Advantage: suits the case where a large number of users each access a small number of files with different requirements.
(1). EDF can use the full computational power of the processor (CPU utilisation up to 100%), whereas rate-monotonic only guarantees deadlines up to a utilisation bound of U = n(2^(1/n) - 1), where n is the number of processes to be scheduled (see the sketch after this list).
(2). EDF assigns priorities dynamically, so the priority levels of tasks can change as time goes on. Rate-monotonic assigns fixed priorities to tasks, so it can miss deadlines for task sets that EDF could still schedule.
Some more:
1. Rate monotonic requires that processes are periodic, EDF does not.
2. EDF can schedule processes without missing deadlines that rate monotonic can’t due to static priorities.
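A small sketch of the schedulability test implied by the formula above, for a made-up task set (the C and T values are purely illustrative): rate-monotonic guarantees all deadlines only while U <= n(2^(1/n) - 1), whereas EDF only requires U <= 1.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* hypothetical task set: (computation time C, period T) */
    double c[] = {20, 35, 40};
    double t[] = {100, 150, 350};
    int n = 3;

    double u = 0.0;                                    /* total utilisation */
    for (int i = 0; i < n; i++) u += c[i] / t[i];

    double rm_bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.78 for n = 3 */
    printf("U = %.3f, RM bound = %.3f, EDF bound = 1.0\n", u, rm_bound);
    printf("RM guaranteed: %s, EDF guaranteed: %s\n",
           u <= rm_bound ? "yes" : "no", u <= 1.0 ? "yes" : "no");
    return 0;
}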
(b) What are the two types of latency that affect the performance of real-time systems and how can these be
reduced?
(4 marks)
Deadlock avoidance means that whenever a particular process requests a particular resource, the system looks at the currently available resources, the resources already allocated, and the future resource needs of the processes, and determines whether granting the request could lead to a deadlock. If it could, the request is not granted (the process waits); otherwise the resource is granted.
Deadlock prevention: make sure that at least one of the necessary conditions for deadlock is never fulfilled. This is achieved by constraining the way resources are requested and granted in the system.
Similar to a police officer controlling traffic (Avoidance) vs. Traffic lights (Prevention)
Source: http://stackoverflow.com/questions/2485608/what-the-difference-between-deadlock-avoidance-and-deadlock-
prevention
Deadlock prevention prevents deadlocks from occurring by restricting how resource requests can be made, ensuring that at least one of the four necessary conditions can never hold.
Deadlock avoidance requires the system to have advance information about each process's resource requirements. The resource-allocation state is examined dynamically, whenever a request is made, to ensure that a circular wait (one of the necessary conditions) can never arise.
tl;dr deadlock prevention = make deadlock impossible by ensuring a necessary condition can never be satisfied; deadlock avoidance = at run time, refuse a resource allocation if it could lead to deadlock.
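As a concrete illustration of avoidance, here is a minimal sketch of the safety check at the heart of the Banker's algorithm, assuming the usual Available/Allocation/Need matrices from the textbook; the sizes and names are illustrative only. A request is only granted if the state it would lead to passes this check.

#include <stdbool.h>

#define N 5  /* processes      */
#define M 3  /* resource types */

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = available[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* pretend process i runs to completion and releases its resources */
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;  /* no process can finish: unsafe state */
    }
    return true;                        /* a safe sequence exists */
}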
(d) How is a limit register used for protecting main memory? (4 marks)
They are used to ensure that a user process only accesses memory that 'belongs' to that particular process. Loading the base and limit registers is a privileged operation, so only the kernel can change their contents. If a user program tries to access memory outside the range defined by the base and limit registers, the hardware traps to the operating system, which treats the attempt as a fatal error.
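A tiny sketch of the check described above; the function and register names are illustrative, but the logic is what the hardware performs on every memory access.

#include <stdbool.h>

/* true if the address falls inside [base, base + limit) */
bool access_ok(unsigned long addr, unsigned long base, unsigned long limit) {
    return addr >= base && addr < base + limit;
}

/* conceptually, on every access the hardware does:
     if (!access_ok(addr, base, limit))
         trap to the kernel, which treats it as a fatal addressing error
*/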
Question 6 (continued)
(e) Give two scenarios where using polling may be better than using interrupts for interaction between a host
computer and a device controller. (4 marks)
Polling might be better in cases where the polls frequently 'hit', i.e. the device controller usually has ready what the host is asking for - this reduces the number of context switches when a device needs to be serviced very frequently.
If the context-switch time on a machine is very large, polling may be the better option, to avoid context switches where possible.
In bare-metal code with no thread support, the CPU is often just looping until a task is received; instead of doing nothing in that spare time, a better alternative can be to poll the device controller (see the sketch below).
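A bare-metal style sketch of that polling loop; the device registers and addresses are entirely made up, the point is only that the CPU spins on a status register instead of taking an interrupt.

#include <stdint.h>

#define DEV_STATUS (*(volatile uint32_t *)0x40001000)  /* hypothetical status register */
#define DEV_DATA   (*(volatile uint32_t *)0x40001004)  /* hypothetical data register   */
#define STATUS_READY 0x1u

uint32_t poll_read(void) {
    while ((DEV_STATUS & STATUS_READY) == 0)
        ;               /* busy-wait: cheap if the device is almost always ready */
    return DEV_DATA;    /* no interrupt, no context switch */
}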
Disadvantage - starvation is possible: the disk head may stay in one busy area of the disk. Requests can arrive at any time, so a burst of short-seek requests could starve a long-seek request, since the short-seek requests keep being serviced as they arrive. (Remember that requests can arrive while the head is moving around the platter.)
(b) Name five common file attributes that an operating system keeps track of and associates with a file? (5
marks)
(c) What is the one main disadvantage to using a linear list to implement a directory structure? What is
one step that can be taken to compensate for this problem? (3 marks)
Linear lists are slow to search - O(n) in the worst case. This is not ideal, as the directory structure is used frequently by users. Implementing a cache could compensate for this problem by storing regularly used directory information and improving search time.
p. 552: the textbook suggests fixing the O(n) search by storing the entries in a more sophisticated data structure such as a balanced tree.
You can also use a hash table alongside the linear list, with the file name as the hash key, to do fast lookups of an entry in the list (see the sketch below).
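A minimal sketch of the hash-table-plus-list idea above; the structure, sizes and hash function are illustrative, not taken from any real file system.

#include <string.h>
#include <stddef.h>

#define TABLE_SIZE 128

struct dir_entry {
    char name[32];
    unsigned inode;
    struct dir_entry *next;   /* chaining for hash collisions */
};

static struct dir_entry *table[TABLE_SIZE];

static unsigned hash(const char *name) {
    unsigned h = 5381;
    while (*name) h = h * 33 + (unsigned char)*name++;
    return h % TABLE_SIZE;
}

struct dir_entry *dir_lookup(const char *name) {
    for (struct dir_entry *e = table[hash(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;          /* found without scanning the whole directory */
    return NULL;
}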
(d) Give one advantage and one disadvantage of having access rights managed by the kernel rather
than by the user. (4 marks)
Advantage -
It is considered more secure: users cannot bypass the checks through system calls, and a user cannot (accidentally or deliberately) hand full access privileges to other, potentially malicious users.
Disadvantage -
The kernel has dictatorial control over all access rights: it has full control, so a compromised or malicious kernel could do anything. It is also inconvenient to have only one central place managing all access rights.
Question 7 (continued)
(e) Explain two key differences between hardware virtualization and hardware emulation. (4 marks)
Virtualization - only parts of the computer's hardware are simulated; most operations run on the actual hardware of the computer for performance reasons. This is why virtualization is faster than emulation.
Emulation - the virtual machine simulates the entire computer in software. This provides interoperability: it allows one computer to run an OS built for a different architecture, as long as an emulator has been written for that architecture. So even if the computer architecture is different, you can still run the OS on the other machine.
You have been asked to design an operating system for a new domestic robot that uses a multiprocessor
computer to respond to voice commands and uses video cameras to navigate around the house.
I don’t like this answer much, but I’m not in a position to improve it. It’s a multiprocessor computer and these scheduling
algos are shit. https://en.wikipedia.org/wiki/Multiprocessor_scheduling#Algorithms
Maybe just pick a good and a bad algo and compare those instead.
Given that it's a multiprocessor, perhaps you could specify asymmetric or symmetric multiprocessing alongside a scheduling algorithm for the OS as a whole, i.e. SMP + EDF - EDF is suggested since robots generally have real-time requirements.
(The file system stores things like maps of the house, dictionaries for commands, and individual programs for tasks like cleaning.)
(i).
For this I wasn’t sure whether it meant directory structure or file system structure.
For the file system, I chose (i) Unix, then (ii) linked allocation with a File Allocation Table, with (iii) as: a map of a particular room may change when furniture is moved, which means files will need to be modified in the middle (see the FAT sketch below).
For the directory structure I would have: (i) general graph structure, (ii) acyclic-graph structure, with (iii) as: a room may be connected to many other rooms and the connections may form a loop (a cycle), which an acyclic graph cannot represent.
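A small sketch of how linked allocation with a FAT behaves, assuming a made-up in-memory fat[] table: each entry holds the number of the file's next block, so changing a file "in the middle" only means following (and updating) the chain rather than shuffling contiguous blocks.

#define FAT_EOF 0xFFFFFFFFu
#define MAX_BLOCKS 4096

static unsigned fat[MAX_BLOCKS];   /* fat[b] = block that follows b, or FAT_EOF */

/* visit every block of a file whose first block is 'start' */
void walk_file(unsigned start, void (*visit)(unsigned block)) {
    for (unsigned b = start; b != FAT_EOF; b = fat[b])
        visit(b);
}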
(c) Processor Affinity (i.e., whether particular tasks prefer particular processors). (5 marks)
I think we should be talking about process affinity (Soft Affinity / Hard Affinity / No Affinity)
For this, I would probably choose:
(i) Soft Affinity
(ii) Hard Affinity
(iii) Try to keep each process on one processor, to avoid the cost of migrating it (its cached data has to be rebuilt on the new CPU); however, soft affinity still allows processes to be moved to another processor to help relieve an overloaded CPU.
There are lots of ways you can spin this, but I'd go hard affinity as my main choice - it guarantees that the process will run on a specific processor (or set of processors), unlike soft affinity. This also avoids slow access to another processor's memory. Note that this could potentially reduce the overall computational power available to the process (video processing could need more than one CPU's worth of power).
I would agree with hard affinity: since you could assume the OS has real-time requirements, you could probably argue those are easier to meet with hard affinity - a CPU dedicated to control can't be overloaded by people trying to stream cat videos or some shit.
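For what it's worth, on Linux hard-pinning looks roughly like the sketch below, using sched_setaffinity(2) with pid 0 meaning the calling process; this is just to illustrate the hard-affinity choice, not part of any model answer.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                      /* only allow CPU 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");       /* pid 0 = this process */
    return 0;
}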
(5 marks)
Question 6
a) Describe what a context switch does and list 3 values that are involved.
It is when the CPU switches to another process: it saves the state of the current process into its PCB and reads the saved state from the other process's PCB.
STATE, IDENTIFIER (PID), PROGRAM COUNTER, CPU REGISTERS, ADDRESS SPACE, PARENT ID, CHILD ID.
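A rough sketch of the kind of record a PCB is, i.e. the values a context switch saves and restores; the field names are illustrative, not taken from any real kernel.

struct pcb {
    int            pid;             /* process identifier         */
    int            parent_pid;      /* parent process             */
    int            state;           /* new/ready/running/waiting  */
    unsigned long  program_counter; /* next instruction to run    */
    unsigned long  registers[16];   /* saved CPU registers        */
    void          *page_table;      /* address-space information  */
};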
b) For memory swapping, describe the difference between Backing Store and Roll Out Roll In.
Backing store is a fast disk large enough to hold copies of all memory images, while roll out, roll in is a swapping variant used with priority-based scheduling: a lower-priority process is swapped out (rolled out) so a higher-priority process can be loaded and run, then swapped back in (rolled in) afterwards.
c) What is the difference between internal and external Fragmentation?
Internal: allocated memory may be slightly larger than requested memory.
External: total memory space exists to satisfy a request, but it is not contiguous.
d) What are atomic instructions and give an example of how they are used for synchronisation?
An atomic increment, for example, can be used with counting semaphores for synchronisation. Atomic instructions are hardware-level CPU instructions that cannot be interrupted and appear indivisible even across multiple CPUs (i.e. in a multicore or multiprocessor system). They can be used to implement mutexes or locks,
such as by the following pseudocode:
/* shared variable: bool lock = false; */
do {
    while (test_and_set(&lock))
        ;                  /* spin until the lock is acquired */
    /* critical section */
    lock = false;          /* release the lock */
    /* remainder section */
} while (true);
test_and_set returns the current value of the lock and then sets it to true, atomically.
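For reference, the usual textbook definition of test_and_set's behaviour written as ordinary C; on real hardware the whole function executes as a single atomic instruction.

#include <stdbool.h>

bool test_and_set(bool *target) {
    bool rv = *target;   /* remember the old value of the lock          */
    *target = true;      /* mark the lock as held                       */
    return rv;           /* old value: false means we acquired the lock */
}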
Question 5
EXAMPLE: Linux is an open source operating system T/F
The program counter of a single threaded process specifies the next memory location of the local variables to be fetched FALSE
A multi threaded process has one program counter per thread TRUE
The process control block must contain the values of all CPU registers TRUE
A context switch only occurs when the process changes state TRUE
For a many to one threading model, multiple kernel threads are mapped to one user thread FALSE
For bounded waiting, the number of times that other processes are allowed to enter the critical sections is limited after a process has made a request to enter its critical section and before that request is granted TRUE
Roll out, roll in is a swapping variant used for time sliced based scheduling algorithms FALSE
Major part of swap time is execution time; total execution time is directly proportional to the amount of memory swapped FALSE (the major part of swap time is transfer time, which is proportional to the amount of memory swapped)
If a page has the valid/invalid bit set to "invalid" the page is not in the physical address space of the process TRUE
A lazy swapper only swaps a page into memory, when needed TRUE
A page fault means that an error has occurred with the current page being used by the process FALSE (it means the referenced page is not currently in memory, not that a program error has occurred)
C-LOOK treats the sectors as a circular list that wraps around from the last sectors to the first one TRUE
To recover from crashes, a log file system records each metadata update to the file system as a transaction TRUE
Read, write and seek commands are used with character based IO (streamed) devices FALSE
Question 8
a) Scheduling Algorithm
i) Solution: First come first served
ii) Less favoured: Round robin
iii) Two advantages of solution: First come first served reduces context switching
which is a waste of processing, especially on a battery powered device. Round
robin involves unnecessary context switches which is a waste of battery life.
b) File System Implementation
i) Solution: Extent-based system
ii) Less favoured: Contiguous
iii) Two advantages of solution: Allows allocation of large blocks of disk space for
faster writing. Also allows for easier reading via large blocks.