Lecture 3 - Structure of OS

OPERATING SYSTEM STRUCTURE

Bilal Ahmed
STRUCTURE OF OPERATING SYSTEM
• A common approach is to partition the task into small components, or modules, rather
than have one monolithic system. Each of these modules should be a well-defined portion
of the system, with carefully defined inputs, outputs, and functions.

• Simple Structure - Many operating systems do not have well-defined
structures. Frequently, such systems started as small, simple, and limited
systems and then grew beyond their original scope. MS-DOS is an example of
such a system.
• MS-DOS is not divided into modules; its interfaces, levels, and
functionality are not well separated.
STRUCTURE OF OPERATING SYSTEM
• Layered Approach - With proper hardware support, operating systems can be broken into pieces. The operating
system can then retain much greater control over the computer and over the applications that make use of that
computer.
• Implementers have more freedom in changing the inner workings
of the system and in creating modular operating systems.
• Under a top-down approach, the overall functionality and
features are determined and are separated into components.
• Information hiding is also important, because it leaves
programmers free to implement the low-level routines as they
see fit.
• A system can be made modular in many ways. One method
is the layered approach, in which the operating system is
broken into a number of layers (levels). The bottom layer
(layer 0) is the hardware; the highest (layer N) is the user
interface.
MICRO-KERNEL APPROACH
• In the mid-1980s, researchers at Carnegie Mellon University developed an operating system
called Mach that modularized the kernel using the microkernel approach.
• This method structures the operating system by removing all nonessential components from
the kernel and implementing them as system and user-level programs. The result is a smaller
kernel.
• One benefit of the microkernel approach is that it makes extending the operating system easier. All new
services are added to user space and consequently do not require modification of the kernel.
• When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a
smaller kernel.
• The MINIX 3 microkernel, for example, has only approximately 12,000 lines of code. It was
developed by Andrew S. Tanenbaum.
MONOLITHIC KERNEL VS MICROKERNEL
MONOLITHIC KERNEL
In a monolithic kernel, all essential operating system services run in kernel
space. This includes managing hardware devices, memory management, file
system management, and other OS services.
Key Features of Monolithic Kernel:
Single large process: All functions and services (e.g., device drivers, file
system management, process management) run within a single address
space (kernel space).
Fast execution: Since all OS services are part of the same process,
communication between components is very fast because there's no need
for message passing.
Less modular: Adding or updating functionality often requires recompiling
and restarting the entire kernel, which can make it less flexible and more
prone to crashes if a bug occurs in one of the kernel modules.
MONOLITHIC KERNEL
Advantages:
Efficient performance: Direct communication within the kernel makes it
faster because no context switching or inter-process communication (IPC) is
required.
Disadvantages:
Lack of isolation: If one service crashes, it could crash the entire system
since all services run in the same memory space.
Difficult to maintain: Changing or debugging the kernel requires working on a
large codebase, which increases complexity.
Example:
Linux and Unix use monolithic kernels.
MICRO KERNEL
In a microkernel, only the most essential services run
in kernel space, such as inter-process communication (IPC), memory
management, and scheduling. Other services like device drivers, file system
management, and networking are run in user space, outside the kernel.
Key Features of Microkernel:
Minimal kernel functionality: Only the core functions (like IPC, basic
scheduling, and memory management) run in kernel mode.
Modular design: Other services such as device drivers, file systems, and
networking are moved out of the kernel and executed as user-space
processes, meaning they run separately from the core kernel.
Inter-process communication (IPC): The microkernel uses message
passing (IPC) to communicate between the core kernel and the external
services running in user space.
MICRO KERNEL
Advantages:
Improved stability: Since the essential kernel functions are small, the system is
more stable. If a user-space service (like a device driver) crashes, it doesn’t crash
the entire system.
Modular and easier to maintain: New services or changes can be added without
modifying or recompiling the kernel.
Disadvantages:
Slower performance: Because services must communicate with the kernel
via IPC, the overhead can result in slower performance compared to
monolithic kernels.
Complexity in message-passing: The IPC mechanism adds complexity and
potential performance bottlenecks.
Example:
Minix and QNX use microkernels. macOS and Windows NT use hybrid kernels.
SO…….?

What about Windows??


WINDOWS
Before the introduction of the Windows NT series, Microsoft used a simpler, monolithic
kernel model in its earlier operating systems, particularly in the MS-DOS and early
Windows (1.x, 2.x, and 3.x) versions.

Key Characteristics of the Kernel Models Before Windows NT:

MS-DOS (1981-1995):
• Monolithic Kernel: MS-DOS was a single-tasking operating system with a monolithic
design. The OS had minimal kernel functionality, providing direct access to hardware
without the abstraction or protection found in modern kernels.
• No Protected Mode: MS-DOS ran entirely in real mode, which meant there was no
memory protection or multitasking. Programs had unrestricted access to system
memory and hardware, which made the system prone to crashes and instability.
• Single User, Single Task: MS-DOS was designed for single-user, single-task
operations, meaning only one program could run at a time.
WINDOWS
Early Windows (1.x, 2.x, and 3.x):

• MS-DOS Underpinnings: These versions of Windows (starting with Windows 1.0 in 1985)
were essentially graphical shells running on top of MS-DOS. While they introduced a
graphical user interface (GUI), they still relied on the underlying MS-DOS kernel for many
system operations.
• Cooperative Multitasking: Windows 1.x to 3.x introduced basic multitasking, but it was
cooperative multitasking. In this model, programs themselves had to yield control back
to the OS for the next program to run, which often led to poor stability if a program failed
to cooperate.
• No True 32-bit Processing: These early versions of Windows ran in 16-bit mode and had
limited access to the advanced features of modern CPUs, such as protected memory or
preemptive multitasking.
WINDOWS
Transition to Windows NT:

• By the early 1990s, the limitations of MS-DOS and early Windows (like lack of
multitasking, memory protection, and stability) became increasingly problematic as
computer hardware advanced.
• Windows NT, introduced in 1993, was a complete rewrite of the Windows operating
system, designed from scratch to be more robust and modern. It introduced the hybrid
kernel architecture and supported preemptive multitasking, protected memory, and 32-
bit processing, marking a significant departure from the earlier MS-DOS-based systems.

In summary, the kernel model used before Windows NT was primarily a monolithic kernel in
MS-DOS, with early versions of Windows acting as graphical extensions running on top of it.
These systems lacked the advanced kernel features that were later introduced in Windows
NT and subsequent versions.
USER AND OPERATING-SYSTEM INTERFACE
• There are several ways for users to interface with the operating system. Here,
we discuss two fundamental approaches.
• Command-line interface, or command interpreter.
• Graphical User Interfaces.

COMMAND-LINE INTERFACE, OR COMMAND INTERPRETER.
• Command Interpreters - Some operating
systems include the command interpreter
in the kernel.
• Others, such as Windows and UNIX, treat
the command interpreter as a special
program that is running when a job is
initiated or when a user first logs on (on
interactive systems).
• On systems with multiple command
interpreters to choose from, the
interpreters are known as shells. For
example, on UNIX and Linux systems, a
user may choose among several different
shells, including the Bourne shell, C shell,
Bourne-Again shell, Korn shell, and others.
GRAPHICAL USER INTERFACES
• Graphical User Interfaces - A second
strategy for interfacing with the operating
system is through a user- friendly graphical
user interface, or GUI. Here, users employ
a mouse-based window- and-menu system
characterized by a desktop.
• The user moves the mouse to position its
pointer on images, or icons, on the screen
(the desktop) that represent programs, files,
directories, and system functions.
• Depending on the mouse pointer’s location,
clicking a button on the mouse can invoke a
program, select a file or directory—known as
a folder —or pull down a menu that contains
commands.
NEXT LEVEL OF GUI

• Because a mouse is impractical for most mobile systems, smartphones and
handheld tablet computers typically use a touchscreen interface.
• Here, users interact by making gestures on the touchscreen—for example,
pressing and swiping fingers across the screen.
WHICH ONE IS FOR YOU?
• The choice of whether to use a command-line or GUI interface is mostly one of personal preference.
• System administrators who manage computers and power users who have deep knowledge of a system
frequently use the command-line interface. For them, it is more efficient, giving them faster access to
the activities they need to perform.
• Indeed, on some systems, only a subset of system functions is available via the GUI, leaving the less
common tasks to those who are command-line knowledgeable.
SYSTEM CALL
• System calls provide the means for a user program
to ask the operating system to perform tasks
reserved for the operating system on the user
program’s behalf.
• System calls provide an interface to the services
made available by an operating system.
• These calls are generally available as routines
written in C and C++.
• The API specifies a set of functions that are available
to an application programmer, including the
parameters that are passed to each function and the
return values the programmer can expect.

Any Example?
• Types of System Calls - System calls can be grouped roughly into six major categories: process
control, file manipulation, device manipulation, information maintenance, communications,
and protection.
• Process control
1. end, abort
2. load, execute
3. create process, terminate process
4. get process attributes, set process attributes
5. wait for time
6. wait event, signal event
7. allocate and free memory
• File management
1. create file, delete file
2. open, close
3. read, write, reposition
4. get file attributes, set file attributes
• Device management
1. request device, release device
2. read, write, reposition
3. get device attributes, set device attributes
4. logically attach or detach devices
• Information maintenance
1. get time or date, set time or date
2. get system data, set system data
3. get process, file, or device attributes
4. set process, file, or device attributes
• Communications
1. create, delete communication connection
2. send, receive messages
3. transfer status information
• Protection
1. get file permissions, set file permissions


Mode
• We need two separate modes of operation: User mode and Kernel mode (also called
supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to
the hardware of the computer to indicate the current mode: kernel (0) or user (1).
• When the computer system is executing on behalf of a user application, the system is in
user mode. However, when a user application requests a service from the operating
system (via a system call), the system must transition from user to kernel mode to fulfill the
request.
Multithreading
• Multithreading is a technique in which a process, executing an application, is
divided into threads that can run concurrently. We can make the following
distinction:
• Thread: A dispatchable unit of work. It includes a processor context (which
includes the program counter and stack pointer) and its own data area for a
stack (to enable subroutine branching). A thread executes sequentially and is
interruptible so the processor can turn to another thread.
• Process: A collection of one or more threads and associated system resources
(such as memory containing both code and data, open files, and devices). This
corresponds closely to the concept of a program in execution. By breaking a
single application into multiple threads, the programmer has great control over
the modularity of the application and the timing of application-related events.
Multithreading
• Multithreading is useful for applications that perform a number of essentially
independent tasks that do not need to be serialized.
• An example is a database server that listens for and processes numerous client
requests. With multiple threads running within the same process, switching back
and forth among threads involves less processor overhead than a major process
switch between different processes.
Symmetric multiprocessing (SMP)
• Symmetric multiprocessing (SMP) is a term that refers to a computer hardware
architecture and also to the OS behavior that exploits that architecture. The OS
of an SMP schedules processes or threads across all of the processors. SMP has
a number of potential advantages over uniprocessor architecture
• Performance: If the work to be done by a computer can be organized so some
portions of the work can be done in parallel, then a system with multiple
processors will yield greater performance than one with a single processor.
• Availability: In a symmetric multiprocessor, because all processors can perform
the same functions, the failure of a single processor does not halt the system.
Instead, the system can continue to function at reduced performance.
• Incremental growth: A user can enhance the performance of a system by adding
an additional processor.
• Scaling: Vendors can offer a range of products with different price and
performance characteristics based on the number of processors.
Symmetric multiprocessing (SMP)
• Multithreading and SMP are often discussed together, but the two are
independent facilities.
• Even on a uniprocessor system, multithreading is useful for structuring
applications and kernel processes.
• An SMP system is useful even for nonthreaded processes, because several
processes can run in parallel.
• However, the two facilities complement each other, and can be used effectively
together.
OS Design Considerations for Multiprocessor & Multicore
Symmetric Multiprocessor OS Considerations
• In an SMP system, the kernel can execute on any processor, and typically each
processor does self-scheduling from the pool of available processes or threads.
• The kernel can be constructed as multiple processes or multiple threads,
allowing portions of the kernel to execute in parallel.
• The SMP approach complicates the OS. The OS designer must deal with the
complexity due to sharing resources and coordinating actions (such as accessing
devices) from multiple parts of the OS executing at the same time.
• An SMP operating system manages processor and other computer resources so
the user may view the system in the same fashion as a multiprogramming
uniprocessor system
OS Design Considerations for Multiprocessor & Multicore
Multicore OS Considerations
• Hardware parallelism may or may not be exploited by application programmers
and compilers.
• Without strong and effective OS support for the last two types of parallelism
mentioned, hardware resources will not be efficiently used.
OS Design Considerations for Multiprocessor & Multicore
Parallelism within Applications
• Most applications can, in principle, be subdivided into multiple tasks that can
execute in parallel, with these tasks then being implemented as multiple
processes, perhaps each with multiple threads.
• The difficulty is that the developer must decide how to split up the application
work into independently executable tasks. That is, the developer must decide what
pieces can or should be executed asynchronously or in parallel. It is primarily the
compiler and the programming language features that support the parallel
programming design process.
• But the OS can support this design process, at minimum, by efficiently allocating
resources among parallel tasks as defined by the developer.
• One of the most effective initiatives to support developers is Grand Central
Dispatch (GCD), implemented in the latest release of the UNIX-based Mac OS X and
the iOS operating systems
OS Design Considerations for Multiprocessor & Multicore
Parallelism within Applications
• GCD is a thread pool mechanism, in which the OS maps tasks onto threads
representing an available degree of concurrency
• Windows also has a thread pool mechanism (since Windows 2000)
How GCD works
• Thread Pool Mechanism: GCD works like a manager that has a pool of workers
(threads). When you give it a task, it picks one of these workers to run it. You
don’t need to worry about managing these workers yourself, which saves time and
effort.
• Anonymous Functions (Blocks): A big innovation in GCD is how it allows
developers to write small, self-contained pieces of work called blocks. Instead
of worrying about how these blocks get executed, the system (GCD) handles it for
you, ensuring that the work is done when there are available resources.
• Concurrency and Islands of Serialization: Even though many tasks may run at
the same time (concurrency), GCD allows developers to specify the order in which
certain tasks need to be done. This helps avoid issues like two tasks trying to
access the same data at the same time (which can cause errors like data
corruption).
OS Design Considerations for Multiprocessor & Multicore
Virtual Machine Approach
• If we allow one or more cores to be dedicated to a particular process, and then
leave the processor alone to devote its efforts to that process, we avoid much of
the overhead of task switching and scheduling decisions.
• The multicore OS could then act as a hypervisor that makes a high-level decision to
allocate cores to applications, but does little in the way of resource allocation
beyond that.
WINDOWS ARCHITECTURE
• Windows has a highly modular architecture
• One component handles one function of the OS
• Key system data can be accessed through that function only
• Any module can be removed, upgraded, or replaced without rewriting the entire system
• The kernel-mode components of Windows are the following:
• Executive: Contains the core OS services, such as memory management, process
and thread management, security, I/O, and interprocess communication.
• Kernel: Controls execution of the processors. The Kernel manages thread
scheduling, process switching, exception and interrupt handling, and multiprocessor
synchronization. Unlike the rest of the Executive and the user levels, the Kernel’s own
code does not run in threads.
• Hardware abstraction layer (HAL): Maps between generic hardware commands
and responses and those unique to a specific platform. It isolates the OS from
platform-specific hardware differences. The HAL makes each computer’s system bus,
direct memory access (DMA) controller, interrupt controller, system timers, and
memory controller look the same to the Executive and kernel components. It also
delivers the support needed for SMP.
WINDOWS ARCHITECTURE
• The kernel-mode components of Windows are the following: (continued….)
• Device drivers: Dynamic libraries that extend the functionality of the Executive.
These include hardware device drivers that translate user I/O function calls into
specific hardware device I/O requests, and software components for implementing
file systems, network protocols, and any other system extensions that need to run in
kernel mode.
• Windowing and graphics system: Implements the GUI functions, such as dealing
with windows, user interface controls, and drawing.
WINDOWS ARCHITECTURE (figure: kernel-mode components)
WINDOWS ARCHITECTURE (figure: user-mode processes)
ANDROID ARCHITECTURE
• The Android operating system is a Linux-based system originally designed for mobile
phones. It is the most popular mobile OS by a wide margin: Android handsets outsell
Apple’s iPhones globally by about 3 to 1

• Initial Android OS development was done by Android, Inc., which was bought by Google
in 2005.
• The open-source nature of Android has been the key to its success.
ANDROID ARCHITECTURE
Dalvik Virtual Machine (DVM)
Dalvik was Android's original runtime environment from its launch up until Android 4.4
(KitKat). It is a type of virtual machine designed to run Android applications, which are
compiled into bytecode (similar to Java bytecode).
Key Concepts:
• Just-In-Time (JIT) Compilation:
• Dalvik uses a JIT compiler. This means that parts of the code are compiled into
machine code right before they are executed. The JIT compiler compiles code on-the-
fly as needed, rather than compiling the entire app beforehand.
• This results in quicker initial startup times for apps, but lower overall
performance, since the code is recompiled during each execution.
• Bytecode Execution:
• Android apps are written in Java (or Kotlin), which is compiled into Java bytecode
(.class files). These bytecode files are then converted into Dalvik Executable (DEX)
format. Dalvik executes this DEX bytecode.
Dalvik Virtual Machine (DVM)
Continued…

• Register-based:
• Unlike the Java Virtual Machine (JVM), which is stack-based, Dalvik is register-
based. This architectural difference is optimized for mobile devices and is more
memory-efficient for the limited resources available on smartphones.
• Memory Efficiency:
• Dalvik was designed to minimize memory usage, which is important for mobile
devices. Multiple instances of the Dalvik VM could run simultaneously, allowing for
multiple apps to run concurrently without consuming excessive memory.
Limitations:
• Since Dalvik uses JIT compilation, it has some performance drawbacks, particularly
during startup and runtime, as the JIT compiler has to compile code dynamically during
execution.
Android Runtime (ART)
Android Runtime (ART) was introduced as an experimental runtime in Android 4.4 (KitKat)
and became the default runtime in Android 5.0 (Lollipop) and later. ART was designed to
replace Dalvik and address its performance shortcomings.
Key Concepts:
• Ahead-of-Time (AOT) Compilation:
• ART uses Ahead-of-Time (AOT) compilation, which compiles the entire app's code
into machine code when the app is installed. This means that the app's code doesn't
need to be recompiled during runtime, improving performance and reducing the time
it takes to start an app.
• AOT helps by eliminating the need for JIT compilation and results in better overall app
performance, smoother execution, and faster app startup times.
• Improved Garbage Collection:
• ART introduced a more efficient garbage collection mechanism. This results in
smoother app performance because garbage collection (reclaiming memory that is
no longer in use) causes fewer delays or pauses during app execution.
Android Runtime (ART)
Continued…
•Better Debugging and Profiling:
• ART provides enhanced debugging and profiling tools for developers, making it easier
to track down performance bottlenecks and improve app efficiency.
• Memory Management Improvements:
• ART optimizes memory usage more effectively than Dalvik, reducing memory
fragmentation and providing a more stable experience when running multiple apps or
large applications.
• Larger Disk Space Usage:
• One of the trade-offs of ART is that the pre-compilation (AOT) requires more disk
space. Since the entire app is compiled to native code at installation time, apps take
up more storage space compared to Dalvik.
Android Runtime (ART)
Continued…

Advantages over Dalvik:


• Performance: Apps generally run faster under ART due to AOT compilation.
• Battery Life: Since code is not recompiled during execution (unlike JIT in Dalvik), ART can
help conserve battery power.
• Fewer Runtime Overheads: With less dynamic compilation happening during runtime,
ART reduces the overall system load, which improves responsiveness and stability.
LIFETIME OF AN APK
