OS Midterm Review

An operating system (OS) is essential software that facilitates communication between users and computer hardware, managing resources like memory and processes. It operates through a kernel, which handles tasks such as process management, memory management, and I/O operations, while employing techniques like multitasking and multiprogramming to optimize CPU usage. The document also discusses various system architectures, storage hierarchies, and the importance of security and protection in computing environments.


Chapter 1 Introduction

What is an Operating System (OS)?


Software that helps you use your computer hardware. It lets you talk to the machine.
Why is it important? Because we can't talk to hardware directly; the OS makes that communication possible.

OS Goals:
- Run programs
- Make computers easy to use
- Use hardware efficiently

Kernel
The main core of the OS that connects users and hardware. Always running in the background.
Everything else (apps, system tools) runs on top of it.
It handles:
- Running programs (processes)
- Memory
- Devices
- Security
- System calls

Computer System Structure Components:


- Hardware – CPU, RAM, storage, input/output devices
- OS – Controls how hardware is used
- Apps – Solve problems for the user
- User – People, machines, other computers

Hardware basics:
- CPU – Runs instructions
- RAM – Temporary data storage
- Storage – Permanent storage (HDD, SSD)
- Input/Output – Keyboard, mouse, screen, printer, etc.

System layout: the CPU and device controllers share memory via a common bus and compete for memory cycles.

System Operations
- CPU and devices can work at the same time
- Each device has a controller with its own buffer
- Data moves from the device into its controller's buffer, then into RAM
- Devices notify CPU when they’re done via interrupts
*Buffer = reserved memory to hold data temporarily to avoid delays.

OS as a Resource Manager
The OS controls everything (via CPU) — memory, storage, etc.

Startup (Booting)
- Bootstrap program runs when you turn on the computer
- Loads the OS
- Stored in ROM/EPROM
- Starts kernel and daemons (background helpers)
*In UNIX, `init` is the first process

Interrupts
- Signals that tell the CPU to stop and handle something
- Triggered by hardware or software
- Trap/Exception = software-generated interrupt
How the CPU finds which device interrupted:
- Polling – the CPU checks each device in turn
- Vectored – the interrupt carries a number that points the CPU straight to the right handler

Storage Concepts
- Word = unit of data in a computer (1+ bytes)
- Storage is in bytes, networking uses bits

Storage types:
- RAM – fast, temporary, volatile
- Hard disk – slow, big, permanent
- SSD – faster, no moving parts
* Volatile: data gone without power / * Non-volatile: data stays even if off

Storage Hierarchy:
Faster = more expensive → RAM
Slower = cheaper → HDD
Caching = copying data to faster memory (RAM caches disk)
Device Driver = makes hardware talk to the OS smoothly

System Architecture
- Most systems use one main CPU
- Many also use special processors
- Multiprocessor systems = multiple CPUs → more power, better reliability

Clustered Systems = multiple computers work together


- Share storage (SAN)
- Can handle failures better
- Asymmetric = 1 backup node
- Symmetric = all work and help each other
- HPC clusters → super fast computing

Multiprogramming
- Keeps the CPU busy by having several programs in memory
- If one waits (e.g., for disk), switch to another

Multitasking (Time Sharing)


- CPU switches between users really fast
- Feels like programs run at the same time
- Response time < 1 second
- Uses CPU scheduling
- Swapping moves programs in/out of memory
- Virtual memory lets programs run even if not fully in RAM
Dual-Mode Operation
- Two modes: User mode and Kernel mode
- Protects the system from bad programs
- Only kernel mode can do powerful tasks (like accessing hardware)
- Mode bit switches between user ↔ kernel during system calls
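
A minimal sketch of the user ↔ kernel crossing, assuming a POSIX system: the program below runs in user mode, and the write() system call is what traps into kernel mode to perform the privileged I/O.

```c
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* We are in user mode here. write() traps into the kernel (mode bit set to
     * kernel), the kernel performs the privileged I/O on the terminal device,
     * and control returns to user mode with the number of bytes written. */
    ssize_t n = write(STDOUT_FILENO, msg, sizeof msg - 1);
    return n < 0;   /* non-zero exit status if the system call failed */
}
```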

Process Management
- A process = program that’s running
- Program = passive, Process = active
- Needs CPU, memory, I/O
- When it ends, system takes back resources
Activities:
- Create/delete processes
- Pause/resume
- Sync/communicate
- Handle deadlocks

Memory Management
- Programs and data must be in RAM to run
- OS decides what goes in/out of RAM
- Tracks usage, allocates/frees memory

Storage Management
OS shows storage as files/directories. Handles:
- Creating/deleting files
- File access and permissions
- Saving files to disk
- Backups

I/O Subsystem
The OS hides hardware details from the user. Handles:
- Buffering – temp memory for smooth transfers
- Caching – fast access
- Spooling – queueing print jobs, etc.
Provides drivers to talk to devices

Protection & Security


- Protection = who can access what
- Security = protecting system from attacks
Each user has an ID
Can also belong to groups
Privilege escalation = temporarily gaining more power

Computer Environments
- Traditional – PCs, workstations
- Mobile – smartphones, tablets
- Client-Server – clients request services, servers respond
- P2P – all devices are equal, share stuff (e.g., Skype, torrents)
- Virtualization – run multiple OSes on one machine
- Cloud computing – services over the internet
SaaS – apps online (e.g., Google Docs)
PaaS – platforms for devs
IaaS – rent virtual computers
- Real-time Embedded – small systems that do critical tasks on time (e.g., car brakes, pacemakers)

Summary
• An operating system is software that manages the computer hardware, as well as providing an
environment for application programs to run.
• Interrupts are a key way in which hardware interacts with the operating system. A hardware device
triggers an interrupt by sending a signal to the CPU to alert the CPU that some event requires attention.
The interrupt is managed by the interrupt handler.
• For a computer to do its job of executing programs, the programs must be in main memory, which is the
only large storage area that the processor can access directly.
• The main memory is usually a volatile storage device that loses its contents when power is turned off or
lost. Nonvolatile storage is an extension of main memory and is capable of holding large quantities of
data permanently.
• The most common nonvolatile storage device is a hard disk, which can provide storage of both
programs and data.
• The wide variety of storage systems in a computer system can be organized in a hierarchy according to
speed and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the
cost per bit generally decreases, whereas the access time generally increases.
• Modern computer architectures are multiprocessor systems in which each CPU contains several
computing cores.
• To best utilize the CPU, modern operating systems employ multiprogramming, which allows several
jobs to be in memory at the same time, thus ensuring that the CPU always has a job to execute.
• Multitasking is an extension of multiprogramming wherein CPU scheduling algorithms rapidly switch
between processes, providing users with a fast response time.
• To prevent user programs from interfering with the proper operation of the system, the system
hardware has two modes: user mode and kernel mode.
• Various instructions are privileged and can be executed only in kernel mode. Examples include the
instruction to switch to kernel mode, I/O control, timer management, and interrupt management.
• A process is the fundamental unit of work in an operating system. Process management includes
creating and deleting processes and providing mechanisms for processes to communicate and
synchronize with each other.
• An operating system manages memory by keeping track of what parts of memory are being used and
by whom. It is also responsible for dynamically allocating and freeing memory space.
• Storage space is managed by the operating system; this includes providing file systems for
representing files and directories and managing space on mass-storage devices.
• Operating systems provide mechanisms for protecting and securing the operating system and users.
Protection measures control the access of processes or users to the resources made available by the
computer system.
• Virtualization involves abstracting a computer’s hardware into several different execution environments.
• Data structures that are used in an operating system include lists, stacks, queues, trees, and maps.
• Computing takes place in a variety of environments, including traditional computing, mobile computing,
client–server systems, peer-to-peer systems, cloud computing, and real-time embedded systems.
• Free and open-source operating systems are available in source-code format. Free software is licensed
to allow no-cost use, redistribution, and modification. GNU/Linux, FreeBSD, and Solaris are examples of
popular open-source systems.
Chapter 3 Processes
Process Management
- Process = a program in execution.
- Thread = the smallest unit of execution inside a process. A process can have multiple threads.

Process States
- New – just created.
- Running – executing instructions.
- Waiting – paused, waiting for an event.
- Ready – ready to run but waiting for CPU.
- Terminated – finished execution.

Process Control Block (PCB)


OS uses a PCB to keep track of each process:
- Process ID – unique identifier.
- State – current status
- Program Counter – next instruction address.
- CPU Registers – process-specific data.
- Memory Info – memory boundaries.
- Scheduling Info – priority, queues, etc.
- Accounting Info – resources used.
- I/O Info – devices and files used.
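
To make the fields concrete, here is a hypothetical, heavily simplified PCB as a C struct; the field names and sizes are invented for illustration, and a real kernel structure (e.g., Linux's task_struct) holds far more state.

```c
#include <stddef.h>
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Simplified, illustrative process control block. */
struct pcb {
    int             pid;              /* Process ID: unique identifier       */
    enum proc_state state;            /* State: current status               */
    uint64_t        program_counter;  /* next instruction address            */
    uint64_t        registers[16];    /* saved CPU registers                 */
    void           *mem_base;         /* Memory info: base of address space  */
    size_t          mem_limit;        /* Memory info: size of address space  */
    int             priority;         /* Scheduling info                     */
    uint64_t        cpu_time_used;    /* Accounting info: resources used     */
    int             open_fds[16];     /* I/O info: open files and devices    */
};
```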

Process Scheduling
- Multiprogramming = keep CPU busy all the time.
- Time-sharing = switch CPU fast so users feel it’s responsive.
- Scheduler picks which process runs next.
- On a single CPU, only one process runs at a time. Others wait.

Scheduling Queues
- Job Queue – all system processes.
- Ready Queue – processes loaded in memory, waiting to run.

Context Switch
- Context = process state info (stored in PCB).
- When switching processes, OS saves one’s context and loads another’s.
- It takes time and does no useful work (pure overhead); how fast it is depends on the hardware.

Process Creation
- Parent creates child processes, forming a tree.
Execution options:
1. Parent runs concurrently with children.
2. Parent waits for children to finish.
Address space options:
1. Child duplicates parent.
2. Child loads new program.
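
A minimal sketch of both options, assuming a POSIX system: fork() gives the child a duplicate of the parent's address space, the child then loads a new program with exec() ("ls -l" is just an example command), and the parent waits for it to finish.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child: duplicate of the parent */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                     /* child: replace itself with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec failed */
        exit(1);
    }
    wait(NULL);                         /* parent waits for the child to finish */
    printf("child complete\n");
    return 0;
}
```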

Process Termination
- Ends with `exit()` system call.
- Returns a status (usually an integer).
- Resources get cleaned up.
A parent can kill its child:
- Too many resources used.
- No longer needed.
- Parent dies → child dies.
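
A minimal sketch of termination, assuming a POSIX system (the status value 7 is arbitrary): the child ends with exit(), and the parent collects the status.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(7);                          /* child terminates with status 7 */

    int status;
    waitpid(pid, &status, 0);             /* parent reaps the child; its resources are freed */
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    /* A parent that no longer needs a child could instead terminate it early
     * with kill(pid, SIGTERM). */
    return 0;
}
```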

Interprocess Communication (IPC)


Independent vs. Cooperating Processes
- Independent – no interaction.
- Cooperating – share data/resources.
Why cooperate?
- Share info.
- Speed up work.
- Modularity.
- Ease of use.

IPC Models
1. Shared Memory
- Shared region in RAM.
- Processes read/write there.
- OS must allow memory sharing.
Producer-Consumer Problem
- Producer creates data.
- Consumers use data.
- Need a buffer to hold items.
Buffer Types:
- Unbounded – no limit, producer never blocked.
- Bounded – fixed size, must coordinate access.
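
A minimal sketch of a bounded buffer in shared memory, assuming a POSIX system where mmap() supports MAP_ANONYMOUS; the buffer size and item count are arbitrary. Real code would coordinate with semaphores or mutexes instead of busy-waiting, and the classic in/out index scheme below works only for a single producer and a single consumer.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUFFER_SIZE 8                   /* bounded buffer: fixed number of slots */

struct shared {
    int buffer[BUFFER_SIZE];
    volatile int in;                    /* next free slot, advanced by the producer */
    volatile int out;                   /* next full slot, advanced by the consumer */
};

int main(void) {
    /* One shared region, visible to both parent and child after fork(). */
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }
    s->in = s->out = 0;

    if (fork() == 0) {                  /* child = producer */
        for (int item = 1; item <= 20; item++) {
            while ((s->in + 1) % BUFFER_SIZE == s->out)
                ;                       /* buffer full: wait for the consumer */
            s->buffer[s->in] = item;
            s->in = (s->in + 1) % BUFFER_SIZE;
        }
        _exit(0);
    }
    for (int n = 0; n < 20; n++) {      /* parent = consumer */
        while (s->in == s->out)
            ;                           /* buffer empty: wait for the producer */
        printf("consumed %d\n", s->buffer[s->out]);
        s->out = (s->out + 1) % BUFFER_SIZE;
    }
    wait(NULL);
    return 0;
}
```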

2. Message Passing
- Useful in distributed systems.
- Uses `send()` and `receive()`.
Message size:
- Fixed – easy for OS, harder for programming.
- Variable – easier to code, harder to manage.

Naming
Direct Communication
- Processes name each other directly.
- 1 link per process pair.
- May use symmetric addressing (both processes name each other) or asymmetric addressing (only the sender names the recipient).
Indirect Communication
- Use mailboxes (shared message holders).
- Can involve many processes.
- Which process receives a message? Depends on the system’s delivery method.

Synchronization
Blocking:
- Send – waits until message is received.
- Receive – waits until message arrives.
Non-blocking:
- Send – continues after sending.
- Receive – returns immediately (may get null).

Buffering Methods
- Zero Capacity – no queue, sender always blocks.
- Bounded Capacity – limited queue, sender blocks if full.
- Unbounded – never blocks.
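
A concrete instance of the send()/receive() model with bounded capacity is a POSIX message queue. A minimal sketch, assuming a system that provides <mqueue.h> (e.g., Linux; older glibc needs -lrt at link time); the queue name "/demo_mq" and the sizes are arbitrary.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Bounded capacity: at most 8 queued messages of up to 64 bytes each. */
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    if (mq_send(q, msg, strlen(msg) + 1, 0) == -1)     /* blocks if the queue is full */
        perror("mq_send");

    char buf[64];
    if (mq_receive(q, buf, sizeof buf, NULL) >= 0)     /* blocks until a message arrives */
        printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_mq");                             /* remove the queue name */
    return 0;
}
```
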
Sockets (Client-Server)
- Each end = a socket (IP + port number).
- Server listens on a known port.
- Client is assigned an arbitrary (ephemeral) port by its host.
- Ports <1024 are well-known ports for standard services.
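
A minimal loopback sketch of this model, assuming a POSIX system (port 5000 and the message are arbitrary choices, and most error checking is omitted): the parent listens on a fixed port, and the forked child connects and is handed an ephemeral port by the kernel.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                   /* server's fixed, known port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* 127.0.0.1 */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    if (fork() == 0) {                             /* child plays the client */
        int cli = socket(AF_INET, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof addr);
        struct sockaddr_in local; socklen_t len = sizeof local;
        getsockname(cli, (struct sockaddr *)&local, &len);
        printf("client got ephemeral port %d\n", ntohs(local.sin_port));
        write(cli, "hi", 3);
        close(cli);
        _exit(0);
    }
    int conn = accept(srv, NULL, NULL);            /* blocks until the client connects */
    char buf[8];
    read(conn, buf, sizeof buf);
    printf("server got: %s\n", buf);
    close(conn);
    close(srv);
    wait(NULL);
    return 0;
}
```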

Remote Procedure Calls (RPC)


- Lets a program on one machine call a function on another.
- Works like function calls, but over the network.
How it works:
1. Client calls stub (local function).
2. Stub sends message to server with function name + parameters.
3. Server executes and replies.
4. Stub returns result to client.
Problems & Solutions:
1. Different data formats – use XDR (external data representation); a byte-order sketch follows this list.
2. Failures/repeats – OS must ensure exactly-once execution.
3. Finding ports – use:
- Fixed ports
- Matchmaker (rendezvous daemon) to get port info.
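
A tiny sketch of problem 1, assuming POSIX headers: converting to network byte order with htonl()/ntohl() stands in here for a full external data representation such as XDR.

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t param = 42;              /* parameter the client stub must marshal  */
    uint32_t wire  = htonl(param);    /* host order -> machine-independent order */
    /* ... wire would be copied into the RPC message and sent to the server ...  */
    uint32_t back  = ntohl(wire);     /* server stub converts back on receipt    */
    printf("marshalled 0x%08x, unmarshalled %u\n", (unsigned)wire, (unsigned)back);
    return 0;
}
```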

RPC Flow Example


1. User asks kernel to send RPC to `procedure X`.
2. Kernel asks matchmaker for server port.
3. Matchmaker replies with port `P`.
4. Kernel adds port `P` to message and sends it.
5. Server (daemon) receives, processes, and replies.

Summary
• A process is a program in execution, and the status of the current activity of a process is represented
by the program counter, as well as other registers.
• The layout of a process in memory is represented by four different sections: (1) text, (2) data, (3) heap,
and (4) stack.
• As a process executes, it changes state. There are four general states of a process: (1) ready, (2)
running, (3) waiting, and (4) terminated.
• A process control block (PCB) is the kernel data structure that represents a process in an operating
system.
• The role of the process scheduler is to select an available process to run on a CPU.
• An operating system performs a context switch when it switches from running one process to running
another.
• The fork() and CreateProcess() system calls are used to create processes on UNIX and Windows
systems, respectively.
• When shared memory is used for communication between processes, two (or more) processes share
the same region of memory. POSIX provides an API for shared memory.
• Two processes may communicate by exchanging messages with one another using message passing.
The Mach operating system uses message passing as its primary form of interprocess communication.
Windows provides a form of message passing as well.
• A pipe provides a conduit for two processes to communicate. There are two forms of pipes, ordinary
and named. Ordinary pipes are designed for communication between processes that have a parent–child
relationship. Named pipes are more general and allow several processes to communicate.
• UNIX systems provide ordinary pipes through the pipe() system call. Ordinary pipes have a read end
and a write end. A parent process can, for example, send data to the pipe using its write end, and the
child process can read it from its read end (a minimal sketch follows this summary). Named pipes in UNIX are termed FIFOs.
• Windows systems also provide two forms of pipes—anonymous and named pipes. Anonymous pipes
are similar to UNIX ordinary pipes. They are unidirectional and employ parent–child relationships
between the communicating processes. Named pipes offer a richer form of interprocess communication
than the UNIX counterpart, FIFOs.
• Two common forms of client–server communication are sockets and remote procedure calls (RPCs).
Sockets allow two processes on different machines to communicate over a network. RPCs abstract the
concept of function (procedure) calls in such a way that a function can be invoked on another process
that may reside on a separate computer.
• The Android operating system uses RPCs as a form of interprocess communication using its binder
framework.
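
A minimal sketch of the ordinary-pipe bullet above, assuming a POSIX system (the message text is arbitrary): the parent writes into the pipe's write end, and the child reads from the read end.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child: reader */
        close(fd[1]);                 /* close the unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }
    close(fd[0]);                     /* parent: writer, close the unused read end */
    const char *msg = "greetings via pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```
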
Chapter 4 Threads and Concurrency
A thread is the basic unit of CPU utilization. It comprises:
a thread ID, a program counter, a register set, and a stack.
It shares with the other threads belonging to the same process its code section, data section, and other
resources such as open files and signals.
A traditional (heavyweight) process has a single thread of control. If a process has multiple threads of
control, it can perform more than one task at a time.
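
A minimal pthreads sketch, assuming a POSIX system (compile with -pthread): the two threads each get their own stack and registers but share the global counter, which is why the mutex is needed.

```c
#include <pthread.h>
#include <stdio.h>

int counter = 0;                        /* shared by all threads in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* shared data needs synchronization */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* 200000, thanks to the mutex */
    return 0;
}
```
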
Benefits
- Responsiveness: multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user
- Resource sharing: threads share the memory and resources of the process to which they belong by
default. This allows an application to have several different threads of activity within the same
address space
- Economy: allocating memory and resources for process creation is costly. It is more
economical to create and context-switch threads because of their resource sharing
- Utilization of multiprocessor architectures: in a multiprocessor architecture, threads may
run in parallel on different processors. A single-threaded process can only run on one CPU, no
matter how many are available. Multithreading on a multi-CPU machine increases concurrency

Multithreading models
Types of threads
- User threads: supported above the kernel and managed without kernel support
- Kernel threads: supported and managed directly by the OS
There must exist a relationship between the two. There are three ways of establishing this relationship:
1. Many-to-one model:
   a. Maps many user-level threads to one kernel thread
   b. Thread management is done by the thread library in user space, so it is efficient
   c. The entire process will block if a thread makes a blocking system call
   d. Because only one thread can access the kernel at a time, multiple threads are unable to
      run in parallel on multiprocessors
2. One-to-one model
   a. Maps each user thread to a kernel thread
   b. Provides more concurrency than the many-to-one model by allowing another thread to run
      when a thread makes a blocking system call
   c. Allows multiple threads to run in parallel on multiprocessors
   d. Creating a user thread requires creating the corresponding kernel thread
   e. Because the overhead of creating kernel threads can burden the performance of an
      application, most implementations of this model restrict the number of threads supported
      by the system
3. Many-to-many model
   a. Multiplexes many user-level threads to a smaller or equal number of kernel threads
   b. The number of kernel threads may be specific to either a particular application or a
      particular machine
   c. Developers can create as many user threads as necessary, and the corresponding kernel
      threads can run in parallel on a multiprocessor
   d. When a thread performs a blocking system call, the kernel can schedule another thread for
      execution

Hyperthreading or simultaneous multithreading (SMT)


Hyperthreaded systems expose each physical core's resources as multiple logical processors to improve
performance.
This enables the processor to execute two threads, or two streams of instructions, at the same time. Since
hyperthreading allows two streams to be executed in parallel, it is almost like having two separate
processors working together.
Fork and exec system calls
The fork() system call is used to create a separate, duplicate process
When an exec() system call is invoked, the program specified in the parameter to exec() will replace the
entire process, including all threads.
exec() replaces the current process image with a new program and keeps the same process ID, because we are
not creating a new process but replacing an existing one.

Total number of processes = 2^n, where n is the number of fork() system calls
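
A quick way to see the 2^n growth (a minimal sketch, assuming a POSIX system): with n = 2 fork() calls, the message prints four times, once per process.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    fork();                               /* 1st fork: 2 processes */
    fork();                               /* 2nd fork: 4 processes */
    printf("hello from pid %d\n", (int)getpid());  /* printed 2^2 = 4 times */
    while (wait(NULL) > 0)                /* each process reaps any children it has */
        ;
    return 0;
}
```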

Threading issues
