Module 4: Protection
Protection
Goals:
Protection is especially important in a multiuser environment, where multiple users share computer
resources such as the CPU and memory. It is the operating system's responsibility to offer a
mechanism that protects each process from other processes. In a multiuser environment, all
assets that require protection are classified as objects, and those that wish to access these objects
are referred to as subjects. The operating system grants different 'access rights' to different
subjects.
Protection refers to a mechanism that controls the access of programs, processes, or users to the
resources defined by a computer system. It serves multiprogramming operating systems by
allowing multiple users to safely share a common logical namespace, such as a directory or files.
Computer resources such as software, memory, and the processor all need protection. Protection
is achieved by maintaining confidentiality, integrity, and availability in the OS, and it is critical
to secure the system from unauthorized access, viruses, worms, and other malware. The role of
protection can be summarized as follows:
1. Protection policies define how processes access the computer system's resources, such as the
CPU, memory, software, and even the operating system. Setting these policies is the
responsibility of both the operating system designer and the application programmer, and the
policies may be modified at any time.
2. Protection is a technique for safeguarding data and processes from malicious or unintended
interference. It encompasses protection policies established by the system itself, set by
management, or imposed individually by programmers to ensure that their programs are
protected to the greatest extent possible.
3. It also provides a multiprogramming OS with the security that its users expect when
sharing common space such as files or directories.
Its main role is to provide a mechanism for implementing the policies that define how resources
in a computer system are used. Some policies are set during the system's design, while others are
defined by system administrators to secure their files and programs.
Every program has distinct policies for using resources, and these policies may change over time.
Therefore, protection is not the responsibility of the system's designer alone; the application
programmer must also use protection techniques to guard their programs against infiltration.
Access Matrix:
The Access Matrix is a security model of a computer system's protection state, described as a
matrix. An access matrix specifies the permissions of each process running in a domain for each
object. The rows of the matrix represent domains, whereas the columns represent objects. Every
matrix cell reflects a set of access rights granted to the processes of a domain, i.e., each entry
(i, j) describes the set of operations that a process in domain Di may invoke on object Oj.
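As a minimal sketch (not from the original notes; the domain, object, and right names below are hypothetical), an access matrix can be held as a nested dictionary and consulted before each operation:

```python
# Sketch: an access matrix as a nested dict. Rows are domains, columns are
# objects, and entry (i, j) is the set of operations Di may invoke on Oj.
access_matrix = {
    "D1": {"O1": {"read"}, "O2": {"read", "write"}},
    "D2": {"O1": {"execute"}},
}

def may_invoke(domain, obj, operation):
    """Return True iff a process in `domain` may perform `operation` on `obj`."""
    return operation in access_matrix.get(domain, {}).get(obj, set())

print(may_invoke("D1", "O2", "write"))   # True
print(may_invoke("D2", "O2", "read"))    # False
```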
There are various methods of implementing the access matrix in the operating system. These
methods are as follows:
Global Table
Access Lists for Objects
Capability Lists for Domains
Lock-Key Mechanism
Lock-Key Mechanism
It is a compromise between access lists and capability lists. Each object has a list of locks,
which are special bit patterns, while each domain has a set of keys, which are also special bit
patterns. A process executing in a domain can access an object only if that domain has a key that
matches one of the locks on the object. Processes are not allowed to modify their own keys.
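A hedged sketch of the lock-key idea follows; the bit patterns and names are invented for illustration. Access is granted only when some key held by the domain equals some lock on the object:

```python
# Sketch: lock-key access check. Locks and keys are opaque bit patterns;
# a domain may access an object only if one of its keys matches a lock.
object_locks = {
    "F1": {0b1010, 0b0110},   # hypothetical lock patterns on object F1
    "printer": {0b0001},
}
domain_keys = {
    "D1": {0b1010},           # D1 holds a key matching one of F1's locks
    "D2": {0b0001},
}

def can_access(domain, obj):
    """Grant access iff some key of `domain` equals some lock of `obj`."""
    return bool(domain_keys.get(domain, set()) & object_locks.get(obj, set()))

print(can_access("D1", "F1"))       # True
print(can_access("D1", "printer"))  # False
```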
Now, let's take an example to understand the implementation of an access matrix in the operating
system.
In this example, the matrix has 4 domains and 4 objects: 3 files (F1, F2, and F3) and one printer.
Files F1 and F3 can be read by a process running in domain D1. A process running in domain D4
has the same rights as one in D1, but it may also write to the files. Only a process running in
domain D2 can use the printer. The access matrix mechanism is made up of various policies and
semantic properties. Specifically, we must ensure that a process running in domain Di may
access only the objects listed in row i.
The protection policies in the access matrix determine which rights are included in the (i, j)th
entry. We must also choose the domain in which each process runs; the OS usually decides this
policy. Users determine the contents of the access-matrix entries.
The association between a domain and a process may be static or dynamic. The access matrix
provides a mechanism for controlling this association. When we switch a process from one
domain to another, we perform a switch operation on an object (the domain itself). We can
control domain switching by including domains among the objects of the access matrix: a
process can switch from one domain to another only if it holds the switch right for the target
domain. The sketch below encodes this example with illustrative switch entries.
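Continuing the earlier sketch (the switch entries are illustrative additions, not specified in the text), the matrix from this example can be encoded as data, with domains also appearing as objects so that switch is granted like any other right:

```python
# Sketch of the example matrix: D1 reads F1 and F3; D4 has D1's rights plus
# write; D2 alone may use the printer. Domains appear as objects so "switch"
# can be recorded as a right (the switch entries here are illustrative).
example_matrix = {
    "D1": {"F1": {"read"}, "F3": {"read"}, "D4": {"switch"}},
    "D2": {"printer": {"print"}},
    "D3": {},
    "D4": {"F1": {"read", "write"}, "F3": {"read", "write"}},
}

def can_switch(src, dst):
    """A process may move from `src` to `dst` iff entry (src, dst) holds 'switch'."""
    return "switch" in example_matrix.get(src, {}).get(dst, set())

print(can_switch("D1", "D4"))  # True (illustrative entry)
print(can_switch("D2", "D1"))  # False
```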
Access Control:
In computer security, access control includes the authorization, authentication, and auditing of
the entity trying to gain access. Access control models have a subject and an object. The subject
(typically a human user) is the one trying to gain access to the object (usually software or data).
In computer systems, an access control list contains a list of permissions and the users to whom
these permissions apply.
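As a minimal, hypothetical sketch (names invented), an access control list can be kept per object as a mapping from users to permission sets; revocation then amounts to removing or narrowing an entry, which is why ACL revocation is straightforward, as noted later in this section:

```python
# Sketch: per-object access control lists mapping users to permission sets.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def is_allowed(user, obj, perm):
    """Check the object's ACL for the user's permission."""
    return perm in acl.get(obj, {}).get(user, set())

def revoke(user, obj, perm=None):
    """Revoke one permission (partial) or all permissions (total), immediately."""
    entry = acl.get(obj, {})
    if user in entry:
        if perm is None:
            del entry[user]            # total revocation
        else:
            entry[user].discard(perm)  # partial revocation

print(is_allowed("bob", "payroll.db", "read"))   # True
revoke("bob", "payroll.db")
print(is_allowed("bob", "payroll.db", "read"))   # False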
Authentication Mechanism:
Two-factor authentication
One-time passwords
Three-factor authentication
Biometrics
Hard tokens
Soft tokens
Contextual authentication
Device identification
Different access control models are used depending on the compliance requirements and the
security level of the information technology to be protected. Broadly, access control is of two
types:
Physical Access Control: Physical access control restricts entry to campuses, buildings, rooms
and physical IT assets.
Logical Access Control: Logical access control limits connections to computer networks,
system files and data.
Identity-Based Access Control (IBAC): By using this model network administrators can more
effectively manage activity and access based on individual requirements.
Mandatory Access Control (MAC): A control model in which access rights are regulated by a
central authority based on multiple levels of security. Security Enhanced Linux is implemented
using MAC on the Linux operating system.
Organization-Based Access control (OrBAC): This model allows the policy designer to define
a security policy independently of the implementation.
Role-Based Access Control (RBAC): RBAC grants access based on job title and largely
eliminates discretion when providing access to objects. For example, a human resources
specialist should not have permission to create network accounts.
Rule-Based Access Control (RAC): The RAC method is largely context-based. An example
would be allowing students to use the labs only during certain times of day.
Revocation of Access Rights:
In a dynamic protection system, we may sometimes need to revoke access rights to objects
shared by different users. Several questions about revocation arise:
Immediate versus delayed - If delayed, can we determine when the revocation will take place?
Selective versus general - Does revocation of an access right to an object affect all users who
have that right, or only some users?
Partial versus total - Can a subset of rights for an object be revoked, or are all rights revoked at
once?
Temporary versus permanent - If rights are revoked, is there a mechanism for processes to re-
acquire some or all of the revoked rights?
With an access list scheme revocation is easy, immediate, and can be selective, general, partial,
total, temporary, or permanent, as desired.
With capabilities lists the problem is more complicated, because access rights are distributed
throughout the system. A few schemes that have been developed include:
Reacquisition - Capabilities are periodically revoked from each domain, which must then re-
acquire them.
Back-pointers - A list of pointers is maintained from each object to each capability which is
held for that object.
Indirection - Capabilities point to an entry in a global table rather than to the object. Access
rights can be revoked by changing or invalidating the table entry, which may affect multiple
processes, which must then re-acquire access rights to continue.
Keys - A unique bit pattern is associated with each capability when created, which can be neither
inspected nor modified by the process.
When a capability is created, its key is set to the object's master key.
As long as the capability's key matches the object's master key, the capability remains valid.
The object master key can be changed with the set-key command, thereby invalidating all current
capabilities.
More flexibility can be added to this scheme by implementing a list of keys for each object,
possibly in a global table.
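Here is a sketch of the key-based revocation scheme just described (the names and key format are invented for illustration): each capability stores the master key current at creation time, and changing the object's master key invalidates every outstanding capability at once.

```python
import secrets

# Sketch: key-based revocation of capabilities. A capability is valid only
# while its stored key equals the object's current master key.
master_keys = {"F1": secrets.token_bytes(8)}

def create_capability(obj):
    """Issue a capability whose key is the object's current master key."""
    return {"object": obj, "key": master_keys[obj]}

def is_valid(cap):
    return cap["key"] == master_keys[cap["object"]]

def set_key(obj):
    """Change the master key, invalidating every outstanding capability."""
    master_keys[obj] = secrets.token_bytes(8)

cap = create_capability("F1")
print(is_valid(cap))  # True
set_key("F1")
print(is_valid(cap))  # False: revoked
```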
Virtual Machines:
A virtual machine (VM) is a virtual environment which functions as a virtual computer system
with its own CPU, memory, network interface, and storage, created on a physical hardware
system.
VMs are isolated from the rest of the system, and multiple VMs can exist on a single piece of
hardware, such as a server. In other words, a VM is a simulated image of application software
and an operating system that executes on a host computer or server. Each VM has its own
operating system and software, while the host provides the resources for the virtual computers.
Multiple operating systems use the same hardware, partitioning resources between the virtual
computers.
Each virtual computer has a separate security and configuration identity.
Virtual computers can be moved between physical host computers as fully encapsulated files.
(Diagram omitted: a single OS with no VM versus multiple OSes running in VMs on the same
hardware.)
Benefits
Let us see the major benefits of virtual machines for operating-system designers and users which
are as follows −
Multiple operating-system environments can exist simultaneously on the same machine, isolated
from each other.
A virtual machine can offer an instruction set architecture that differs from that of the real
computer.
Virtual machines allow easy maintenance, application provisioning, availability, and convenient
recovery.
Virtual machines let users go beyond the limitations of the hardware to achieve their goals.
The operating system achieves virtualization with the help of specialized software called a
hypervisor, which completely emulates the PC client or server CPU, memory, hard disk, network,
and other hardware resources, enabling virtual machines to share those resources. The hypervisor
can emulate multiple virtual hardware platforms that are isolated from each other, allowing
virtual machines to run Linux and Windows Server operating systems on the same underlying
physical host.
1. System Virtual Machine: This type of virtual machine provides a complete system platform on
which a complete operating system can execute. Like VirtualBox, a system virtual machine
provides an environment in which an OS can be installed in full. The hardware of the real
machine is divided between the simulated operating systems by a virtual machine monitor, and
programs and processes then run separately on the share of hardware allocated to each simulated
machine.
2. Process Virtual Machine: A process virtual machine, unlike a system virtual machine, does
not let us install a complete virtual operating system. Instead, it creates a virtual environment for
a single app or program, and that environment is destroyed as soon as we exit the app. Some apps
run directly on the main OS, while process virtual machines are created to run other apps: when
those programs require a different OS, the process virtual machine provides it for as long as the
programs are running. Example – the Wine software on Linux helps run Windows applications.
Virtual Machine Language: This is a type of language that can be understood by different
operating systems; it is platform-independent. Just as running a program written in C, Python, or
Java requires a compiler that converts the source code into an executable form, a virtual machine
language is compiled into an intermediate form (known as bytecode) that the virtual machine
executes. If we want code that can run on different operating systems (Windows, Linux, etc.), a
virtual machine language is helpful.
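As a small illustration of the idea, CPython itself compiles source into platform-independent bytecode that its virtual machine interprets; the standard `dis` module makes this visible:

```python
import dis

def add(a, b):
    return a + b

# The same bytecode is produced on Windows, Linux, or macOS; the Python
# virtual machine on each platform interprets it.
dis.dis(add)
```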
Robustness:- Robustness in a distributed system refers to its ability to maintain stable and
reliable operation even in the presence of various types of faults, failures, or unexpected events.
A robust distributed system is designed to handle failures gracefully, minimize disruptions, and
ensure that the system remains operational and responsive. Achieving robustness in distributed
systems involves several key strategies and considerations:
Fault Tolerance: Distributed systems should be designed to tolerate faults such as hardware
failures, network outages, and software errors. Replication of data and services across multiple
nodes helps ensure that if one node fails, another can take over without service interruption.
Isolation and Resource Management: Proper resource allocation and isolation prevent resource-
hogging processes or nodes from affecting the overall system performance. Resource
management policies ensure that each node receives its fair share of resources.
Graceful Degradation: A robust system can gracefully degrade its services under heavy load or
resource constraints, prioritizing critical functions and delaying or reducing non-essential ones.
Failure Detection and Recovery: Distributed systems need mechanisms to detect failures quickly
and initiate recovery processes. Heartbeat mechanisms and health checks are used to monitor the
status of nodes and services.
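A minimal, hypothetical sketch of heartbeat-based failure detection (node names and the timeout value are invented): each node records the time of its last heartbeat, and a monitor declares any node that misses the deadline as failed.

```python
import time

# Sketch: heartbeat-based failure detection. Nodes periodically report in;
# a monitor flags any node whose last heartbeat is older than the timeout.
HEARTBEAT_TIMEOUT = 5.0          # seconds (illustrative value)
last_heartbeat = {}              # node name -> timestamp of last heartbeat

def record_heartbeat(node):
    last_heartbeat[node] = time.monotonic()

def failed_nodes():
    """Return nodes that have missed the heartbeat deadline."""
    now = time.monotonic()
    return [n for n, t in last_heartbeat.items()
            if now - t > HEARTBEAT_TIMEOUT]

record_heartbeat("node-1")
record_heartbeat("node-2")
time.sleep(0.1)
print(failed_nodes())  # [] while both nodes are still fresh
```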
Self-Healing Mechanisms: Automated processes that can detect faults and initiate corrective
actions help maintain system stability without manual intervention. Auto-scaling mechanisms can
dynamically adjust the number of resources based on demand.
Data Integrity and Consistency: Ensuring data integrity is crucial in a distributed system.
Techniques such as replication, data partitioning, and distributed databases help maintain data
consistency even in the presence of failures.
Error Handling and Reporting: Clear error messages and logging mechanisms aid in identifying
issues and diagnosing problems when they occur.
Testing and Validation: Rigorous testing, including failure testing and stress testing, helps
uncover vulnerabilities and weaknesses in the system's design.
Reduction of Single Points of Failure: Identifying and eliminating single points of failure is
essential to avoid situations where a single component failure brings down the entire system.
Security and Access Control: Ensuring proper security measures, including access control,
authentication, and encryption, helps prevent malicious attacks and unauthorized access that
could compromise system robustness.
Achieving robustness in distributed systems is an ongoing process that requires careful design,
continuous monitoring, and proactive maintenance. By considering fault scenarios and planning
for graceful degradation and recovery, distributed systems can offer reliable and consistent
services even in the face of challenges.
Design issues:- Designing a distributed system is a complex task that involves addressing various
challenges and design issues to ensure the system's effectiveness, scalability, reliability, and
performance. Here are some of the key design issues that need to be considered when designing
a distributed system:
Concurrency Control: Managing concurrent access to shared resources is critical for maintaining
data consistency and avoiding race conditions. Decisions about locking mechanisms, transaction
management, and isolation levels need to be made.
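A minimal sketch of lock-based concurrency control (the variable names are hypothetical): without the lock, the two threads below could interleave their read-modify-write steps and lose updates, the classic race condition.

```python
import threading

# Sketch: a lock serializes read-modify-write access to a shared counter,
# preventing the race condition that unsynchronized threads would cause.
balance = 0
balance_lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with balance_lock:          # critical section
            balance += amount

t1 = threading.Thread(target=deposit, args=(1, 100_000))
t2 = threading.Thread(target=deposit, args=(1, 100_000))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # always 200000 when the lock is held
```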
Consistency and Replication: Ensuring data consistency across distributed nodes is complex,
especially when replication is involved. Designing mechanisms for maintaining consistency,
resolving conflicts, and managing replica synchronization is essential.
Fault Tolerance and Reliability: Designing mechanisms for fault detection, fault tolerance, and
recovery strategies is crucial. Redundancy, data backup, replication, and failover mechanisms
need to be considered.
Security and Privacy: Distributed systems face security threats such as unauthorized access, data
breaches, and attacks. Incorporating authentication, authorization, encryption, and secure
communication protocols is vital.
Naming and Directory Services: Designing a naming and directory service to locate resources in
a distributed environment is important. Decisions about how to organize and manage names and
addresses are essential for efficient resource discovery.
Distributed Algorithms: Designing efficient and reliable distributed algorithms for tasks such as
consensus, mutual exclusion, leader election, and distributed coordination is challenging due to
network uncertainties and asynchrony.
Data Storage and Retrieval: Efficiently storing and retrieving data in distributed environments
involves decisions about data partitioning, distribution, caching, and indexing.
Load Balancing: Designing load balancing mechanisms to distribute workloads evenly across
nodes while considering factors like network latency, resource availability, and node capabilities
is crucial for system performance.
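A hypothetical sketch of one simple load-balancing policy: route each request to the node with the fewest outstanding requests. Real balancers also weigh latency, capacity, and node health, as the paragraph above notes.

```python
# Sketch: least-loaded selection. `loads` tracks outstanding requests per node.
loads = {"node-a": 0, "node-b": 0, "node-c": 0}

def pick_node():
    """Choose the node currently carrying the least load."""
    return min(loads, key=loads.get)

def dispatch(request):
    node = pick_node()
    loads[node] += 1        # request assigned; decrement again on completion
    return node

for i in range(5):
    print(dispatch(f"req-{i}"))  # spreads requests across the nodes
```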
Distributed File Systems: Designing distributed file systems that provide a unified view of files
and directories across multiple nodes while ensuring consistency, data integrity, and fault
tolerance is a complex task.
Middleware and APIs: Designing middleware and APIs that abstract the complexities of
distributed communication and interaction can simplify application development while
maintaining system robustness.
Monitoring and Management: Incorporating mechanisms for monitoring and managing
distributed components, including performance monitoring, debugging, and logging, is essential
for diagnosing and resolving issues.
Network Topology: Decisions about network topology, communication patterns, and data flow
significantly impact system performance and communication efficiency.
Designing a distributed system requires a deep understanding of these issues and careful
consideration of trade-offs. Design choices made at each level can have cascading effects on the
overall system behavior and performance. A well-designed distributed system balances these
concerns to create a robust, scalable, and reliable infrastructure.
Distributed file system:- A distributed file system (DFS) is a type of file system that enables the
storage, management, and access of files across multiple computers or nodes in a network.
Distributed file systems provide a unified view of files and directories, allowing users and
applications to interact with files as if they were stored on a single machine, regardless of their
actual physical location. This technology is particularly useful in distributed computing
environments where resources are spread across different machines. Here are some key concepts
and characteristics of distributed file systems:
Transparency: Distributed file systems aim to provide transparency to users and applications.
This means that users are unaware of the underlying complexities of file distribution, replication,
and data placement.
Location Independence: Users can access their files from any node in the network, regardless of
where the files are physically stored. This provides greater flexibility and convenience.
Replication: Distributed file systems often replicate files across multiple nodes to improve data
availability and fault tolerance. If one node fails, users can still access the replicated copies on
other nodes.
Scalability: Distributed file systems are designed to scale horizontally by adding more storage
nodes as needed. This enables the system to accommodate growing data storage requirements.
Caching: Caching mechanisms are used to store frequently accessed data in memory, reducing
the need to retrieve data from disk and improving performance.
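As an illustration, a client-side cache can memoize reads so repeated access to the same block avoids a remote fetch; `fetch_block` below is a hypothetical stand-in for a network read, not a real DFS API.

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fetch_block(path, block_no):
    """Hypothetical remote read; cached so repeated reads skip the network."""
    print(f"fetching {path}[{block_no}] from a remote node")
    return b"..." * 4                     # placeholder data

fetch_block("/shared/report.txt", 0)      # miss: performs the remote read
fetch_block("/shared/report.txt", 0)      # hit: served from the local cache
print(fetch_block.cache_info())
```

A real distributed file system must also invalidate such cached copies when another client writes, which is the coherency concern discussed next.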
Consistency and Coherency: Maintaining data consistency and coherency across multiple
replicas of a file is crucial. Distributed file systems use various strategies to ensure that changes
made to one replica are propagated to others.
Access Control: Distributed file systems implement access control mechanisms to manage user
permissions and ensure that users can only access files they are authorized to access.
Namespace Management: Providing a unified namespace for files and directories, regardless of
their physical location, is a key feature of distributed file systems.
Security: Ensuring data security during storage and transmission is essential. Distributed file
systems may use encryption and authentication mechanisms to protect data.
Data Migration: Some distributed file systems support data migration, which involves moving
data between nodes to optimize storage utilization and performance.
Several well-known distributed file systems illustrate these ideas:
Andrew File System (AFS): One of the earliest distributed file systems, AFS provides a global
namespace and supports client caching, replication, and location independence.
Network File System (NFS): NFS is a widely used distributed file system protocol developed by
Sun Microsystems. It allows remote file access over a network and provides transparency and
access control.
Microsoft Distributed File System (DFS): DFS is a feature in Windows Server that allows
organizations to create a distributed file system structure, providing seamless access to files from
different servers.
Hadoop Distributed File System (HDFS): HDFS is designed for storing and processing large
datasets in a distributed computing environment. It's a core component of the Apache Hadoop
framework.
Google File System (GFS): GFS is a distributed file system developed by Google to support its
data-intensive applications. It focuses on high availability, fault tolerance, and scalability.
Distributed file systems play a critical role in modern distributed computing environments,
enabling efficient data storage, access, and management across networks of interconnected
nodes.
Case studies
THE LINUX SYSTEM:- Linux, being an open-source and widely adopted operating
system, has numerous case studies showcasing its versatility and impact in various domains.
Here are a few notable case studies that highlight the use of the Linux system:
Google relies heavily on Linux for its server infrastructure: customized Linux systems power its
vast data centers and services, and Google has also maintained its own internal Linux
distribution, known as "Goobuntu," for employee workstations. Linux's flexibility, scalability,
and open-source nature align well with Google's need for a reliable and customizable operating
system to handle its massive workload.
Android, the world's most popular mobile operating system, is built on a Linux kernel. Android
devices, including smartphones, tablets, and smart TVs, use Linux as the foundation for
providing a rich and diverse ecosystem of applications and services.
Amazon Web Services (AWS), one of the leading cloud computing platforms, extensively uses
Linux as the base for its virtual servers and cloud services. Linux's compatibility with
virtualization and containerization technologies makes it an ideal choice for building and
managing cloud infrastructure.
Automotive: Tesla
Tesla's electric vehicles use Linux-based operating systems for various purposes, including
infotainment systems, driver assistance features, and autonomous driving capabilities. Linux's
stability, security, and support for embedded systems make it suitable for automotive
applications.
NASA's Mars rovers, such as Curiosity and Perseverance, use a customized version of Linux to
operate on the Martian surface. Linux's adaptability allows engineers to tailor the operating
system to the specific requirements of space exploration.
WordPress, one of the most popular content management systems for websites, can be hosted
on Linux-based web servers. Many web hosting providers offer Linux-based environments for
hosting WordPress sites due to their stability, security, and performance.
Entertainment: Netflix
Netflix runs its streaming service on Linux-based servers. The scalability and open-source
nature of Linux enable Netflix to manage the massive amount of data and requests generated by
its millions of users.
These case studies highlight the diverse range of applications and industries where Linux plays a
pivotal role. Its flexibility, robustness, and adaptability have made it a foundational component
of modern computing across various domains.
WINDOWS 10:- Windows 10, as a widely used operating system, has numerous case studies
showcasing its impact and applications in various industries. Here are a few notable case studies
that highlight the use of Windows 10:
Baltimore County Public Schools (BCPS) in Maryland, USA, deployed Windows 10 devices
for students and teachers. Windows 10's compatibility with education-focused applications and
security features contributed to BCPS's efforts to enhance digital learning environments.
St. Luke's University Health Network, a healthcare organization, adopted Windows 10 devices
to improve patient care. Windows 10's security features, compatibility with electronic health
record (EHR) systems, and device management capabilities contributed to enhanced healthcare
workflows.
LEGO Group, a global toy company, implemented Windows 10 devices to enhance its retail
experiences. Windows 10's user interface, touch capabilities, and compatibility with retail
software facilitated interactive customer engagement and sales processes.
Manufacturing: Rolls-Royce
Rolls-Royce, an aerospace and defense company, deployed Windows 10 devices for its
manufacturing operations. Windows 10's security features, device management capabilities, and
compatibility with manufacturing applications contributed to streamlined operations.
Windows 10 offers integration with the Xbox ecosystem, allowing gamers to connect their
Xbox consoles and PCs. The Xbox Game Bar, Xbox Live integration, and DirectX
enhancements provide gamers with a unified gaming experience.
JPMorgan Chase & Co., a global financial services firm, adopted Windows 10 to enhance
security and compliance across its operations. Windows 10's security features, including
Windows Hello and BitLocker, contribute to safeguarding sensitive financial data.
The City of Barcelona in Spain deployed Windows 10 devices and Microsoft productivity tools
to modernize its public services. Windows 10's compatibility with cloud services and
collaboration tools contributed to improved citizen services.
These case studies illustrate the diverse applications of Windows 10 across various industries,
highlighting its role in enhancing productivity, security, innovation, and user experiences.
Windows 10's features and capabilities have made it a valuable operating system choice for
organizations and individuals worldwide.