
An Operating System (OS) is an interface between a computer user and computer hardware.

An operating system is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

An operating system is software that enables applications to interact with a computer's hardware. The software
that contains the core components of the operating system is called the kernel.

The primary purposes of an Operating System are to enable applications (software) to interact with a computer's hardware and to manage a system's hardware and software resources.

Some popular Operating Systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.

Operating System Generations

Operating systems have been evolving over the years. We can categorise this evolution into different generations, briefly described below:
0th Generation
The term 0th generation refers to the period of computing that began with Charles Babbage's design of the Analytical Engine and culminated in John Atanasoff's electronic computer around 1940, whose hardware component technology was the electronic vacuum tube. No operating system was available for the computers of this generation, and programs were written in machine language. These computers were inefficient and dependent on the varying competencies of the individual programmers who served as operators.
First Generation (1951-1956)
The first generation marked the beginning of commercial computing including the introduction of Eckert and
Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701.
System operation was performed with the help of expert operators, without the benefit of an operating system, though for a time programs began to be written in higher-level, procedure-oriented languages, and thus the operator's routine expanded. Later, mono-programmed operating systems were developed, which eliminated some of the human intervention in running jobs and provided programmers with a number of desirable functions. These systems still operated under the control of a human operator, who followed a number of steps to execute a program. Programming languages such as FORTRAN, developed under John W. Backus in the mid-1950s, appeared in this period.
Second Generation (1956-1964)
The second generation of computer hardware was most notably characterised by transistors replacing vacuum tubes as the hardware component technology. One of the first operating systems, GM-NAA I/O (often called GMOS), was developed by General Motors for the IBM 704. It was a single-stream batch processing system: similar jobs were collected into groups or batches on punch cards and submitted to the operating system, which ran them one after another. After completing one job, the operating system cleaned up, then read in and initiated the next job from the punch cards.
Researchers began to experiment with multiprogramming and multiprocessing in their computing services called
the time-sharing system. A noteworthy example is the Compatible Time Sharing System (CTSS), developed at
MIT during the early 1960s.
Third Generation (1964-1979)
The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of
computers. Hardware technology began to use integrated circuits (ICs) which yielded significant advantages in
both speed and economy.
Operating system development continued with the introduction and widespread adoption of multiprogramming.
The idea of taking fuller advantage of the computer’s data channel I/O capabilities continued to develop.
Another advance, which led to the personal computers of the fourth generation, was the development of minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time indeed for the development of both computer hardware and the accompanying operating systems.
Fourth Generation (1979 – Present)
The fourth generation is characterised by the appearance of the personal computer and the workstation. The component technology of the third generation was replaced by very large scale integration (VLSI). Many operating systems still in use today, such as Windows, Linux and macOS, were developed in the fourth generation.
Following are some of the important functions of an operating system.

 Memory Management
 Processor Management
 Device Management
 File Management
 Network Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a large array
of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management −
 Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
 In multiprogramming, the OS decides which process will get memory when and how much.
 Allocates the memory when a process requests it to do so.
 De-allocates the memory when a process no longer needs it or has been terminated.
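The bookkeeping described above can be sketched in a few lines. This is only an illustration, not how a real OS allocator works: it models main memory as fixed-size frames and records which (hypothetical) process owns each one.

```python
# Illustrative sketch of memory-management bookkeeping: track which frames
# of main memory are free and which process owns each allocated frame.

class MemoryManager:
    def __init__(self, num_frames):
        self.owner = [None] * num_frames     # None means the frame is free

    def allocate(self, pid, n):
        """Give process `pid` n free frames, or return None if not enough."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < n:
            return None
        for i in free[:n]:
            self.owner[i] = pid
        return free[:n]

    def deallocate(self, pid):
        """Release every frame held by a terminated process."""
        for i, o in enumerate(self.owner):
            if o == pid:
                self.owner[i] = None

mm = MemoryManager(8)
print(mm.allocate("P1", 3))      # [0, 1, 2]
mm.deallocate("P1")
print(mm.owner.count(None))      # 8 — all frames free again
```

A real allocator must also deal with fragmentation, protection, and variable-sized requests, which this sketch deliberately ignores.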

Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management −
 Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when it is no longer required by a process.
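As a rough illustration of process scheduling, the sketch below simulates round-robin dispatch, one common scheduling policy (the text above does not prescribe any particular one). The process names and CPU burst times are made up.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin dispatch: `bursts` maps pid -> remaining CPU time.
    Returns the order in which processes finish."""
    ready = deque(bursts.items())
    finished = []
    while ready:
        pid, remaining = ready.popleft()     # allocate the CPU
        remaining -= min(quantum, remaining) # run for at most one quantum
        if remaining > 0:
            ready.append((pid, remaining))   # de-allocate, back to ready queue
        else:
            finished.append(pid)
    return finished

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))  # ['B', 'C', 'A']
```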

Device Management

An Operating System manages device communication via their respective drivers. It does the following activities
for device management −
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device, when, and for how much time.
 Allocates devices in an efficient way.
 De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
An Operating System does the following activities for file management −
 Keeps track of information, location, uses, status etc. These collective facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −
 Security − By means of password and similar other techniques, it prevents unauthorized access to programs
and data.
 Control over system performance − Recording delays between request for a service and response from the
system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.
 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
An Operating System has various components that perform well-defined tasks. Though most operating systems differ in structure, logically they have similar components. Each component must be a well-defined portion of the system that appropriately describes its functions, inputs, and outputs.
There are the following 8 components of an Operating System:

1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System

The following sections explain all the above components in more detail:

Process Management

A process is a program, or a fraction of a program, that is loaded into main memory. A process needs certain resources
including CPU time, Memory, Files, and I/O devices to accomplish its task. The process management component
manages the multiple processes running simultaneously on the Operating System.
A program in running state is called a process.
The operating system is responsible for the following activities in connection with process management:

 Create, load, execute, suspend, resume, and terminate processes.


 Switch the system among multiple processes in main memory.
 Provide communication mechanisms so that processes can communicate with each other.
 Provide synchronization mechanisms to control concurrent access to shared data, to keep shared data consistent.
 Allocate/de-allocate resources properly to prevent or avoid deadlock situations.

I/O Device Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user.
I/O Device Management provides an abstraction level over hardware devices and keeps the details away from applications, to ensure proper use of devices, to prevent errors, and to provide users with a convenient and efficient programming environment.
Following are the tasks of I/O Device Management component:

 Hide the details of H/W devices


 Manage main memory for the devices using caching, buffering, and spooling
 Maintain and provide custom drivers for each device.

File Management

File management is one of the most visible services of an operating system. Computers can store information in
several different physical forms; magnetic tape, disk, and drum are the most common forms.
A file is defined as a set of correlated information, and it is defined by the creator of the file. Files mostly represent data, source and object forms, and programs. Data files can be of any type, such as alphabetic, numeric, or alphanumeric.
A file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and user.
The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are normally organized into directories to ease their use. These directories may contain files and other directories, and so on.
The operating system is responsible for the following activities in connection with file management:

 File creation and deletion


 Directory creation and deletion
 The support of primitives for manipulating files and directories
 Mapping files onto secondary storage
 File backup on stable (nonvolatile) storage media
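The file and directory primitives listed above map naturally onto calls that every operating system exposes. The sketch below uses Python's standard library as a stand-in for those system calls; the file name is arbitrary.

```python
# File-management primitives, via the OS interfaces Python's stdlib wraps.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())   # directory creation
f = root / "notes.txt"
f.write_text("hello")             # file creation (and a write primitive)
print(f.read_text())              # hello
f.unlink()                        # file deletion
root.rmdir()                      # directory deletion
```

Mapping files onto secondary storage and backup are handled below this interface, by the file system and storage drivers.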

Network Management

The definition of network management is often broad, as network management involves several different
components. Network management is the process of managing and administering a computer network. A
computer network is a collection of various types of computers connected with each other.
Network management comprises fault analysis, maintaining the quality of service, provisioning of networks, and
performance management.
Network management is the process of keeping your network healthy for an efficient communication between
different computers.
Following are the features of network management:

 Network administration
 Network maintenance
 Network operation
 Network provisioning
 Network security

Main Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data
shared by the CPU and I/O devices.
Main memory is a volatile storage device which means it loses its contents in the case of system failure or as soon
as system power goes down.
The main motivation behind Memory Management is to maximize memory utilization on the computer system.
The operating system is responsible for the following activities in connection with memory management:

 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes to load when memory space becomes available.
 Allocate and deallocate memory space as needed.

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with the data they
access, must be in main memory during execution. Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory.
Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.
Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk
until loaded into memory, and then use the disk as both the source and destination of their processing.
The operating system is responsible for the following activities in connection with disk management:

 Free space management


 Storage allocation

Security Management
The operating system is primarily responsible for all tasks and activities that happen in the computer system. The various processes in an operating system must be protected from each other's activities. For that purpose, various mechanisms are used to ensure that the files, memory segments, CPU and other resources can be operated on only by those processes that have gained proper authorization from the operating system.
Security management refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by the computer system. It specifies the controls to be imposed, together with some means of enforcement.
For example, memory-addressing hardware ensures that a process can only execute within its own address space. The timer ensures that no process can gain control of the CPU without eventually relinquishing it. Finally, no process is allowed to do its own I/O directly, to protect the integrity of the various peripheral devices.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system.
The Command Interpreter System executes a user command by calling one or more underlying system programs or system calls.
Command Interpreter System allows human users to interact with the Operating System and provides convenient
programming environment to the users.
Many commands are given to the operating system as control statements. A program that reads and interprets control statements is executed automatically. This program is called the shell; examples include the Windows/DOS command window and the Bash and C shells of Unix/Linux.
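A toy version of the read-interpret-execute loop a shell performs might look like the sketch below. The command table and the commands themselves (`echo`, `rev`) are invented for illustration; a real shell dispatches to system programs and system calls rather than Python functions.

```python
# Toy command interpreter: parse a control statement, look up the matching
# "system program", and dispatch to it with the remaining words as arguments.

def make_shell(commands):
    def run(line):
        name, *args = line.split()
        prog = commands.get(name)
        if prog is None:
            return f"{name}: command not found"
        return prog(*args)
    return run

shell = make_shell({
    "echo": lambda *a: " ".join(a),   # print its arguments
    "rev":  lambda s: s[::-1],        # reverse a string
})
print(shell("echo hello world"))  # hello world
print(shell("rev abc"))           # cba
```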

TYPES OF OPERATING SYSTEM
Operating systems have been present since the very first computer generation, and they keep evolving with time. Some of the important types of operating systems that are most commonly used are discussed below.

Batch operating system

The users of a batch operating system do not interact with the computer directly. Each user prepares his job on an
off-line device like punch cards and submits it to the computer operator. To speed up processing, jobs with similar
needs are batched together and run as a group. The programmers leave their programs with the operator and the
operator then sorts the programs with similar requirements into batches.
The problems with Batch Systems are as follows −
 Lack of interaction between the user and the job.
 CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.
 Difficult to provide the desired priority.

Time-sharing operating systems

Time-sharing is a technique that enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming. Processor time shared among multiple users simultaneously is termed time-sharing.
The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in case of
Multiprogrammed batch systems, the objective is to maximize processor use, whereas in Time-Sharing Systems,
the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum in turn. When the user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
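A back-of-the-envelope model of the quantum idea above: with n active users, a quantum q, and some per-switch overhead s, the worst case a user waits between two of their quanta is roughly n × (q + s). The overhead value used here is an assumption for illustration.

```python
# Rough worst-case response-time model for a time-sharing system:
# every other user runs one full quantum (plus switch overhead) before
# this user's turn comes around again.

def worst_case_response(n_users, quantum_ms, switch_ms=0.1):
    return n_users * (quantum_ms + switch_ms)

# 50 users, 10 ms quantum: about half a second — still feels "immediate".
print(round(worst_case_response(50, quantum_ms=10), 1))  # 505.0
```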
Advantages of Timesharing operating systems are as follows −
 Provides the advantage of quick response.
 Avoids duplication of software.
 Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −
 Problem of reliability.
 Question of security and integrity of user programs and data.
 Problem of data communication.

Distributed operating System

Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.
Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as high-speed buses or
telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
 With resource sharing facility, a user at one site may be able to use the resources available at another.
 Speeds up the exchange of data via electronic mail.
 If one site fails in a distributed system, the remaining sites can potentially continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.

Network operating System

A Network Operating System runs on a server and provides the server the capability to manage data, users,
groups, security, applications, and other networking functions. The primary purpose of the network operating
system is to allow shared file and printer access among multiple computers in a network, typically a local area
network (LAN), a private network or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows Server
2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
The advantages of network operating systems are as follows −
 Centralized servers are highly stable.
 Security is server managed.
 Upgrades to new technologies and hardware can be easily integrated into the system.
 Remote access to servers is possible from different locations and types of systems.
The disadvantages of network operating systems are as follows −
 High cost of buying and running a server.
 Dependency on a central location for most operations.
 Regular maintenance and updates are required.

Real Time operating System

A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it can control its environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much shorter than in online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they are often used as control devices in dedicated applications. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary
storage is limited or missing and the data is stored in ROM. In these systems, virtual memory is almost never
found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains the
priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.
Following are the different properties of an Operating System.

1. Batch processing
2. Multitasking
3. Multiprogramming
4. Interactivity
5. Real Time System
6. Distributed Environment
7. Spooling

Batch processing
Batch processing is a technique in which an Operating System collects the programs and data together in a batch
before processing starts. An operating system does the following activities related to batch processing −
 The OS defines a job, which has a predefined sequence of commands, programs and data bundled as a single unit.
 The OS keeps a number of jobs in memory and executes them without any manual intervention.
 Jobs are processed in order of submission, i.e., in first-come, first-served fashion.
 When a job completes its execution, its memory is released and the output for the job is copied into an output spool for later printing or processing.
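The first-come, first-served behaviour described in these bullets can be sketched as a simple queue. The jobs here are placeholder Python callables standing in for punched-card programs.

```python
from collections import deque

# FCFS batch sketch: jobs run to completion in submission order, with no
# user interaction; each job's output goes to an output spool afterwards.

def run_batch(jobs):
    queue = deque(jobs)                       # order of submission
    spool = []
    while queue:
        name, work = queue.popleft()
        spool.append(f"{name}: {work()}")     # run the job to completion
    return spool

spool = run_batch([("job1", lambda: 2 + 2), ("job2", lambda: "ok")])
print(spool)  # ['job1: 4', 'job2: ok']
```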

Advantages
 Batch processing shifts much of the operator's work to the computer.
 Increased performance, as a new job gets started as soon as the previous job finishes, without any manual intervention.
Disadvantages

 Programs are difficult to debug.


 A job could enter an infinite loop.
 Due to the lack of a protection scheme, one batch job can affect pending jobs.

Multitasking

Multitasking is when multiple jobs are executed by the CPU simultaneously by switching between them. Switches
occur so frequently that the users may interact with each program while it is running. An OS does the following
activities related to multitasking −
 The user gives instructions to the operating system or to a program directly, and receives an immediate
response.
 The OS handles multitasking in the way that it can handle multiple operations/executes multiple programs
at a time.
 Multitasking Operating Systems are also known as Time-sharing systems.
 These Operating Systems were developed to provide interactive use of a computer system at a reasonable
cost.
 A time-shared operating system uses the concept of CPU scheduling and multiprogramming to provide
each user with a small portion of a time-shared CPU.
 Each user has at least one separate program in memory.
 A program that is loaded into memory and is executing is commonly referred to as a process.
 When a process executes, it typically executes for only a very short time before it either finishes or needs
to perform I/O.
 Since interactive I/O typically runs at slower speeds, it may take a long time to complete. During this time,
a CPU can be utilized by another process.
 The operating system allows the users to share the computer simultaneously. Since each action or
command in a time-shared system tends to be short, only a little CPU time is needed for each user.
 As the system switches CPU rapidly from one user/program to the next, each user is given the impression
that he/she has his/her own CPU, whereas actually one CPU is being shared among many users.

Multiprogramming

Sharing the processor when two or more programs reside in memory at the same time is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU
utilization by organizing jobs so that the CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.
An OS does the following activities related to multiprogramming.
 The operating system keeps several jobs in memory at a time.
 This set of jobs is a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in the memory.
 Multiprogramming operating systems monitor the state of all active programs and system resources using memory management programs, to ensure that the CPU is never idle unless there are no jobs to process.
Advantages

 High and efficient CPU utilization.


 The user feels that many programs are allotted the CPU almost simultaneously.

Disadvantages

 CPU scheduling is required.


 To accommodate many jobs in memory, memory management is required.
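A classic back-of-the-envelope argument (not stated in the text above, but standard) for why multiprogramming raises CPU utilization: if each job waits on I/O a fraction p of the time, and n independent jobs are kept in memory, the CPU idles only when all n are waiting at once, so utilization is roughly 1 − p^n.

```python
# Multiprogramming utilization model: with n jobs each idle on I/O a
# fraction p of the time (assumed independent), the CPU is busy unless
# all n jobs are waiting simultaneously.

def cpu_utilization(p_io_wait, n_jobs):
    return 1 - p_io_wait ** n_jobs

print(round(cpu_utilization(0.8, 1), 2))  # 0.2  — one I/O-bound job alone
print(round(cpu_utilization(0.8, 4), 2))  # 0.59 — four jobs in memory
```

The independence assumption is optimistic, but the model captures why keeping several jobs resident pays off.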

Interactivity

Interactivity refers to the ability of users to interact with a computer system. An Operating system does the
following activities related to interactivity −

 Provides the user an interface to interact with the system.


 Manages input devices to take inputs from the user. For example, keyboard.
 Manages output devices to show outputs to the user. For example, Monitor.

The response time of the OS needs to be short, since the user submits a request and waits for the result.

Real Time System

Real-time systems are usually dedicated, embedded systems. An operating system does the following activities
related to real-time system activity.

 In such systems, Operating Systems typically read from and react to sensor data.
 The Operating system must guarantee response to events within fixed periods of time to ensure correct
performance.

Distributed Environment

A distributed environment refers to multiple independent CPUs or processors in a computer system. An operating
system does the following activities related to distributed environment −
 The OS distributes computation logic among several physical processors.
 The processors do not share memory or a clock. Instead, each processor has its own local memory.
 The OS manages the communications between the processors. They communicate with each other through
various communication lines.

Spooling
Spooling is an acronym for Simultaneous Peripheral Operations On-Line. Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on a hard disk which is accessible to I/O devices.
An operating system does the following activities related to spooling −
 Handles I/O device data spooling as devices have different data access rates.
 Maintains the spooling buffer which provides a waiting station where data can rest while the slower device
catches up.
 Enables parallel computation: because of spooling, a computer can perform I/O in parallel with computation. It becomes possible to have the computer read data from a tape, write data to disk, and write output to a printer while it is doing its computing task.

Advantages

 The spooling operation uses a disk as a very large buffer.


 Spooling is capable of overlapping I/O operation for one job with processor operations for another job.
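The buffering idea can be sketched with a queue: submitting a job is a fast buffer write, and the slow device drains the buffer independently later. The job sizes are arbitrary.

```python
from collections import deque

# Spooling sketch: the CPU deposits whole print jobs in a buffer at full
# speed; the slow printer drains the buffer at its own pace afterwards.

spool_buffer = deque()

def submit_print_job(pages):       # fast: just a buffer write
    spool_buffer.append(pages)

def printer_drain():               # the slower device catches up here
    printed = []
    while spool_buffer:
        printed.append(spool_buffer.popleft())
    return printed

for job in (3, 1, 5):
    submit_print_job(job)
print(printer_drain())  # [3, 1, 5] — jobs print in submission order
```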

Process

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we execute this program, it
becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. The following image shows a simplified layout of a process inside main memory.
1. Stack − The process stack contains temporary data such as method/function parameters, return addresses and local variables.

2. Heap − This is memory that is dynamically allocated to a process during its run time.

3. Text − This includes the compiled program code, together with the current activity represented by the value of the program counter and the contents of the processor's registers.

4. Data − This section contains the global and static variables.
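As a rough illustration, the four sections can be mapped onto pieces of a Python program. The mapping is only conceptual: CPython manages memory in its own way, but the roles line up.

```python
# Conceptual mapping of a process's four sections onto Python constructs.

counter = 0                 # data section: global/static variables

def work(n):                # the function's code lives in the text section
    local = n * 2           # stack: parameters, locals, return address
    buf = [0] * n           # heap: memory allocated at run time
    return local + len(buf)

print(work(4))  # 12
```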

Device Management in Operating System


Device management in an operating system means controlling Input/Output devices like disks, microphones, keyboards, printers, magnetic tapes, USB ports, camcorders, scanners and other accessories, as well as supporting units like control channels. A process may require various resources, including main memory, file access, access to disk drives, and others. If resources are available, they can be allocated and control returned to the CPU; otherwise, the process has to be postponed until adequate resources become available. The system has multiple devices, and in order to handle these physical or virtual devices, the operating system requires a separate program known as a device controller. It also determines whether the requested device is available.

The fundamentals of I/O devices may be divided into three categories:

1. Block Device
2. Character Device
3. Network Device

Block Device
It stores data in fixed-size blocks, each with its own unique address. For example, disks.

Character Device
It transmits or accepts a stream of characters, none of which can be addressed individually. For instance,
keyboards, printers, etc.

Network Device
It is used for transmitting data packets.

Functions of the device management in the operating system


The operating system (OS) handles communication with the devices via their drivers. The OS component gives a
uniform interface for accessing devices with various physical features. There are various functions of device
management in the operating system. Some of them are as follows:

1. It keeps track of data, status, location, uses, etc. These collective facilities are often referred to as the file system.
2. It enforces the pre-determined policies and decides which process receives the device when and for how
long.
3. It improves the performance of specific devices.
4. It monitors the status of every device, including printers, storage drivers, and other devices.
5. It allocates and effectively deallocates devices. De-allocation happens at two levels: first, when an I/O command is issued, the device is temporarily freed; second, when the job is completed, the device is permanently released.
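The two-level de-allocation in point 5 can be sketched as follows; the device and job names are hypothetical.

```python
# Sketch of dedicated-device allocation with temporary and permanent release.

class Device:
    def __init__(self, name):
        self.name, self.holder = name, None

    def allocate(self, job):
        if self.holder is not None:
            return False              # dedicated device: one job at a time
        self.holder = job
        return True

    def release(self, permanent=False):
        self.holder = None            # temporary release after an I/O command
        if permanent:                 # permanent release when the job ends
            print(f"{self.name} permanently released")

printer = Device("printer0")
print(printer.allocate("jobA"))   # True
print(printer.allocate("jobB"))   # False — already dedicated to jobA
printer.release(permanent=True)
```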

Types of devices
There are three types of Operating system peripheral devices: dedicated, shared, and virtual. These are as follows:

1. Dedicated Device
In device management, some devices are allocated or assigned to only one task at a time until that job releases
them. Devices such as plotters, printers, tape drivers, and other similar devices necessitate such an allocation
mechanism because it will be inconvenient if multiple people share them simultaneously. The disadvantage of
such devices is the inefficiency caused by allocating the device to a single user for the whole duration of task
execution, even if the device is not used 100% of the time.
2. Shared Devices
These devices can be assigned to several processes. A disk (DASD) can be shared by multiple processes
simultaneously by interleaving their requests. The Device Manager carefully controls the interleaving, and
predetermined policies must resolve any conflicts.

3. Virtual Devices
Virtual devices are a hybrid of the two: dedicated devices that have been transformed into shared devices. For
example, a printer can be made shareable by using a spooling program that redirects all print requests to a disk. A
print job is not sent directly to the printer; it is kept on the disk until it is fully prepared with all of the required
sequencing and formatting, at which point it is transmitted to the printer. This approach can turn a single printer
into numerous virtual printers, improving performance and ease of use.
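A minimal sketch of this spooling idea in Python (class and method names are illustrative, not a real OS API): processes submit jobs to a spool queue instead of the printer, and the jobs are drained to the device one at a time.

```python
import queue

class PrintSpooler:
    """Turns one dedicated printer into a shared (virtual) device."""

    def __init__(self):
        self.spool = queue.Queue()  # stands in for the spool area on disk

    def submit(self, job):
        # Processes never touch the printer directly; jobs go to the spool.
        self.spool.put(job)

    def drain(self):
        # Only fully prepared jobs are sent on to the real printer,
        # strictly one after another, in submission order.
        printed = []
        while not self.spool.empty():
            printed.append(self.spool.get())
        return printed

spooler = PrintSpooler()
spooler.submit("report.pdf")
spooler.submit("invoice.txt")
print(spooler.drain())  # ['report.pdf', 'invoice.txt']
```

Each process sees its own "virtual printer"; only the spooler ever talks to the physical device.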

Features of Device Management


Various features of the device management are as follows:

1. The OS interacts with the device controllers via the device drivers while allocating the device to the
multiple processes executing on the system.
2. Device drivers can also be thought of as system software programs that bridge processes and device
controllers.
3. The device management function's other key job is to implement the API.
4. Device drivers are software programs that allow an operating system to control the operation of numerous
devices effectively.
5. The device controller used in device management operations mainly contains three registers: command,
status, and data.
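The three registers can be sketched as follows (a hypothetical toy model, not a real driver API: the driver writes the command register, polls the status register, then reads the data register).

```python
class DeviceController:
    """Toy model of a controller with command, status, and data registers."""

    def __init__(self):
        self.command = None   # command register: what the device should do
        self.status = "IDLE"  # status register: IDLE / BUSY / DONE
        self.data = None      # data register: bytes moving in or out

    def issue(self, command):
        self.command = command
        self.status = "BUSY"
        # A real controller finishes asynchronously and raises an interrupt;
        # this sketch completes the command immediately.
        if command == "READ":
            self.data = "block-0 contents"
        self.status = "DONE"

ctrl = DeviceController()
ctrl.issue("READ")
print(ctrl.status, ctrl.data)  # DONE block-0 contents
```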

Operating System as Resource Manager


Here are a few points that describe how the operating system works as a Resource Manager.
 Nowadays, all modern computers consist of processors, memories, timers, network interfaces, printers,
and many other devices.
 The operating system provides for an orderly and controlled allocation of the processors, memories, and
I/O devices among the various programs in the bottom-up view.
 Operating system allows multiple programs to be in memory and run at the same time.
 Resource management includes multiplexing or sharing resources in two different ways: in time and in
space.
 In time multiplexing, different programs take turns using the CPU: one uses the resource, then the next
one that is ready in the queue, and so on. For example: sharing the printer one job after another.
 In space multiplexing, instead of taking turns, each program gets part of the resource. For
example, main memory is divided among several running programs, so each can be resident at the same
time.
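The two sharing styles can be sketched as follows (function names and numbers are illustrative):

```python
def time_multiplex(programs, turns):
    # Time multiplexing: the CPU is given to each program in turn
    # (simple round-robin order).
    order = []
    for t in range(turns):
        order.append(programs[t % len(programs)])
    return order

def space_multiplex(memory_kb, programs):
    # Space multiplexing: each resident program holds its own equal
    # partition of main memory at the same time.
    share = memory_kb // len(programs)
    return {p: share for p in programs}

print(time_multiplex(["P1", "P2", "P3"], 6))  # each program gets the CPU twice
print(space_multiplex(1024, ["P1", "P2"]))    # each holds 512 KB simultaneously
```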

The diagram given below shows the functioning of OS as a resource manager −


DATABASE MANAGEMENT SYSTEMS
Data:

Data is a collection of information: facts that can be recorded and that have an implicit meaning.

Example:

Customer ----- cname, cno, ccity

Database:

It is a collection of interrelated data. These data can be stored in the form of tables.


A database can be of any size and varying complexity.

A database may be generated and manipulated manually or it may be computerized.

Example:

Customer database consists the fields as cname, cno, and ccity

Cname Cno Ccity

Database System:

It is a computerized system whose overall purpose is to maintain information and to make that
information available on demand.

Advantages:

1. Redundancy can be reduced.

2. Inconsistency can be avoided.

3. Data can be shared.

4. Standards can be enforced.

5. Security restrictions can be applied.

6. Integrity can be maintained.

7. Data gathering can be possible.

8. Requirements can be balanced.

Database Management System (DBMS):

It is a collection of programs that enables users to create and maintain a database. In other words, it is
general-purpose software that provides users with the facilities for defining, constructing, and manipulating a
database for various applications.
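As a concrete sketch of the defining, constructing, and manipulating roles, Python's built-in sqlite3 module can stand in for a DBMS (the fields follow the customer example above; the values are made up):

```python
import sqlite3

# An in-memory database is enough for a sketch.
conn = sqlite3.connect(":memory:")

# Defining: declare the structure of the customer table.
conn.execute("CREATE TABLE customer (cno INTEGER PRIMARY KEY, cname TEXT, ccity TEXT)")

# Constructing: populate the database with data.
conn.execute("INSERT INTO customer VALUES (1, 'Ravi', 'Hyderabad')")
conn.execute("INSERT INTO customer VALUES (2, 'Laxmi', 'Chennai')")

# Manipulating: query the stored data.
rows = conn.execute("SELECT cname FROM customer WHERE ccity = 'Chennai'").fetchall()
print(rows)  # [('Laxmi',)]
conn.close()
```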

Disadvantages in File Processing

 Data redundancy and inconsistency.


 Difficulty in accessing data.
 Data isolation.
 Data integrity.
 Concurrent access is not possible.
 Security Problems.

Advantages of DBMS:

1. Data Independence.

2. Efficient Data Access.

3. Data Integrity and Security.

4. Data Administration.

5. Concurrent Access and Crash Recovery.

6. Reduced Application Development Time.

Applications

Database Applications:

Banking: all transactions

Airlines: reservations, schedules

Universities: registration, grades

Sales: customers, products, purchases

Online retailers: order tracking, customized recommendations

Manufacturing: production, inventory, orders, supply chain

Human resources: employee records, salaries, tax deductions.

DATA MODELS

The entire structure of a database can be described using a data model. A data model is a collection of
conceptual tools for describing data, data relationships, data semantics, and consistency constraints.

Data models can be classified into following types.

1. Object Based Logical Models. 2. Record Based Logical Models. 3. Physical Models.

1. Object Based Logical Models:

These models can be used in describing the data at the logical and view levels.

These models are having flexible structuring capabilities classified into following types.

a) The entity-relationship model.

b) The object-oriented model.

c) The semantic data model.

Entity-Relationship Model (E-R Model)


The E-R model can be used to describe the data involved in a real world enterprise in terms of objects
and their relationships.

An Entity-Relationship model is a high-level data model that describes the structure of the
database in a pictorial form which is known as ER-diagram. In simple words, an ER diagram is
used to represent logical structure of the database easily.

ER model develops a conceptual view of the data hence it can be used as a blueprint to implement the
database in the future.
Developers can easily understand the system just by looking at ER diagram. Let's first have a look at
the components of an ER diagram.

 Entity - Anything that has an independent existence about which we collect the data.

They are represented as rectangles in the ER diagram. For example - Car, house, employee.

 Entity Set - A set of the same type of entities is known as an entity set. For example - Set of students
studying in a college.

 Attributes - Properties that define entities are called attributes. They are represented by an ellipse shape.

 Relationships - A relationship in DBMS is used to describe the association between entities. They are
represented as diamond or rhombus shapes in the ER diagram.

Uses:

1. These models can be used in database design.

2. It provides useful concepts that allow us to move from an informal description to a precise description.

3. This model was developed to facilitate database design by allowing the specification of overall logical
structure of a database.
4. It is extremely useful in mapping the meanings and interactions of real world enterprises onto a conceptual
schema.

5. These models can be used for the conceptual design of database applications.

Object-Oriented Data model

The object-oriented data model is a combination of object-oriented programming and relational data
model. In this data model, the data and their relationship are represented in a single structure which is
known as an object.

Since data is stored as objects we can easily store audio, video, images, etc in the database which was
very difficult and inconvenient to do in the relational model. As shown in the image below objects are
connected with each other through links.

 Here Transport, Bus, Ship, and Plane are objects.

 Bus has Road Transport as the attribute.

 Ship has Water Transport as the attribute.

 Plane has Air Transport as the attribute.

 The Transport object is the base object and the Bus, Ship, and Plane objects derive from it.
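The Transport example above can be sketched directly as Python classes, where derivation expresses the object links (the `mode` attribute name is illustrative):

```python
class Transport:
    """Base object of the hierarchy."""
    mode = None

class Bus(Transport):
    mode = "Road Transport"

class Ship(Transport):
    mode = "Water Transport"

class Plane(Transport):
    mode = "Air Transport"

# Each derived object carries its own attribute, and all of them
# remain linked to the base Transport object.
for cls in (Bus, Ship, Plane):
    print(cls.__name__, "->", cls.mode, "| derives from Transport:",
          issubclass(cls, Transport))
```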

Semantic Data Model

The semantic data model is a method of structuring data in order to represent it in a specific logical way. It is a
conceptual data model that includes semantic information that adds a basic meaning to the data and the
relationships that lie between them. This approach to data modeling and data organization allows for the easy
development of application programs and also for the easy maintenance of data consistency when data is updated.

A semantic data model may be illustrated graphically through an abstraction hierarchy diagram, which shows data
types as boxes and their relationships as lines. This is done hierarchically so that types that reference other types
are always listed above the types that they are referencing, which makes it easier to read and understand.

An SDM employs the following three different types of abstraction:-


 Classification: This classifies different objects in objective reality by using "instance of" relations, such as
creating groups of objects with similar characteristics, for example, a group of employees.

 Aggregation: Aggregation defines a new object from a set of objects that become its components using
"has a" relations. For example, an employee with characteristics such as name, age, or contact.

 Generalization: Generalization defines a subset relationship between occurrences of two or more
objects by using "is a" relations. For example, an employee is a generalization of managers.

This example visualizes the relationship between real-world objects in the music industry. Between each object
are defined relationships and the direction of object dependence.

2. Record Based Logical Models:

These models can also be used in describing the data at the logical and view levels.

These models can be used both to specify the overall logical structure of the database and to provide a
higher-level description of its implementation.

These models can be classified into,

1. Relational model.

2. Network model.

3. Hierarchical model

Relational Model

This is the most widely accepted data model. In this model, the database is represented as a collection of
relations in the form of rows and columns of a two-dimensional table. Each row is known as a tuple (a tuple
contains all the data for an individual record) while each column represents an attribute. For example -
 Any given row of the relation indicates a student i.e., the row of the table describes a real-world entity.

 The columns of the table indicate the attributes related to the entity. In this case, the roll number, CGPA,
and the name of the student.

NOTE: A database implemented and organized in terms of the relational model is known as a relational database
management system (RDBMS). Hence, the relational model describes how data is stored in relational databases.
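A minimal sketch of a relation in Python: rows are tuples and columns are named attributes (the attributes follow the student example above; the values are made up):

```python
# The relation "student" as a two-dimensional table:
# columns are the attributes, each row (tuple) is one real-world entity.
columns = ("roll_no", "name", "cgpa")
student = [
    (101, "Asha", 8.9),
    (102, "Vijay", 7.5),
]

# Pairing a row with the column names recovers the individual record.
for row in student:
    record = dict(zip(columns, row))
    print(record["roll_no"], record["name"], record["cgpa"])
```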

Hierarchical Model

The Hierarchical Model was the first database management system model. This concept uses a hierarchical tree
structure to organize the data. The hierarchy begins at the root, which contains root data, and then grows into a
tree as child nodes are added to the parent node.

Features of a Hierarchical Model

1. Parent-Child Relationship

Each child node has exactly one parent node, while a parent node may have several child nodes. A child is not
permitted to have more than one parent.

2. One-to-many Relationship

The data is organised in a tree-like form, and the data types have one-to-many relationships. There is only one
path from the root to any node. For example, in the preceding example, there is only one way to reach the
node 'sneakers', which is through the 'men's shoes' node.
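The parent-child rule can be sketched in Python: because each node stores exactly one parent link, the path from any node up to the root is unique (node names follow the shoes example; the tree itself is illustrative):

```python
# Each child maps to its single parent; the root ("shoes") has no entry.
parent = {
    "men's shoes": "shoes",
    "women's shoes": "shoes",
    "sneakers": "men's shoes",
    "boots": "men's shoes",
}

def path_to_root(node):
    path = [node]
    while path[-1] in parent:          # climb the one-and-only parent link
        path.append(parent[path[-1]])
    return path

# Only one way to reach 'sneakers': through "men's shoes".
print(path_to_root("sneakers"))  # ['sneakers', "men's shoes", 'shoes']
```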
Network Model

The network model is a generalization of the hierarchical data model: it allows many-to-many relationships, so
in this model a record can have more than one parent.

The network model in DBMS can be represented as a graph and hence it replaces the hierarchical tree with a
graph in which object types are the nodes and relationships are the edges.

The node student has two parents, CSE Department and Library, in the example below. In the hierarchical model,
this was previously impossible.
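A small Python sketch of the same graph: unlike a tree, a node may list more than one parent (names follow the example above):

```python
# Adjacency map of the network model: each node lists ALL its parents,
# which a hierarchical tree cannot express.
parents = {
    "Student": ["CSE Department", "Library"],
    "CSE Department": ["College"],
    "Library": ["College"],
}

print(parents["Student"])            # ['CSE Department', 'Library']
print(len(parents["Student"]) > 1)   # True: more than one parent
```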
3. Physical Models:

These models can be used in describing the data at the lowest level, i.e. physical level. These
models can be classified into

1. Unifying model

2. Frame memory model.

RDBMS

RDBMS stands for Relational Database Management System. RDBMS data is structured in
database tables, fields and records. Each RDBMS table consists of database table rows. Each
database table row consists of one or more database table fields. RDBMS store the data into
collection of tables, which might be related by common fields (database table columns). RDBMS
also provide relational operators to manipulate the data stored in the database tables. Most
RDBMS use SQL as the database query language. The most popular RDBMS are MS SQL Server,
DB2, Oracle and MySQL. The relational model is an example of a record-based model. Record-based
models are so named because the database is structured in fixed-format records of several
types. Each table contains records of a particular type, and each record type defines a fixed number
of fields, or attributes. The columns of the table correspond to the attributes of the record type.
The relational data model is the most widely used data model, and a vast majority of current
database systems are based on the relational model. The relational model was designed by the
IBM research scientist and mathematician Dr. E. F. Codd. Many modern DBMS do not conform
to Codd's definition of an RDBMS, but they are nonetheless still considered RDBMS.
Two of Dr. Codd's main focal points when designing the relational model were to further reduce
data redundancy and to improve data integrity within database systems.

The relational model originated from a paper authored by Dr. Codd entitled "A Relational Model
of Data for Large Shared Data Banks", written in 1970. This paper introduced the following
concepts that apply to database management systems for relational databases. The relation is the
only data structure used in the relational data model to represent both entities and the relationships
between them. Rows of a relation are referred to as its tuples, and its columns are its
attributes. The values of each attribute are drawn from a set of values known as its domain: the
domain of an attribute contains the set of values that the attribute may assume. From a
historical perspective, the relational data model is relatively new; the first database systems
were based on either the network or the hierarchical model. The relational data model has established
itself as the primary data model for commercial data processing applications. Its success in this
domain has led to its application outside data processing, in systems for computer-aided design
and other environments.
DIFFERENCE BETWEEN DBMS & RDBMS

The main differences between DBMS and RDBMS are given below:

1) DBMS applications store data as files. RDBMS applications store data in tabular form.

2) In DBMS, data is generally stored in either a hierarchical form or a navigational form. In RDBMS, the
tables have an identifier called a primary key, and the data values are stored in the form of tables.

3) Normalization is not present in DBMS. Normalization is present in RDBMS.

4) DBMS does not apply any security with regard to data manipulation. RDBMS defines integrity constraints
for the purpose of the ACID (Atomicity, Consistency, Isolation and Durability) properties.

5) DBMS uses the file system to store data, so there will be no relation between the tables. In RDBMS, data
values are stored in the form of tables, so the relationships between these data values are stored in the form of
tables as well.

6) DBMS has to provide some uniform methods to access the stored information. RDBMS supports a tabular
structure of the data and relationships between them to access the stored information.

7) DBMS does not support distributed databases. RDBMS supports distributed databases.

8) DBMS is meant for small organizations and deals with small data; it supports a single user. RDBMS is
designed to handle large amounts of data and supports multiple users.

9) Examples of DBMS are file systems, XML, etc. Examples of RDBMS are MySQL, PostgreSQL, SQL
Server, Oracle, etc.

SQL

SQL stands for Structured Query Language. SQL lets you access and manipulate databases.
SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the
International Organization for Standardization (ISO) in 1987.

What Can SQL do?

 SQL can execute queries against a database

 SQL can retrieve data from a database

 SQL can insert records in a database

 SQL can update records in a database

 SQL can delete records from a database

 SQL can create new databases

 SQL can create new tables in a database

 SQL can create stored procedures in a database

 SQL can create views in a database

 SQL can set permissions on tables, procedures, and views
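A few of the operations listed above can be tried with Python's built-in sqlite3 module (the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Create a new table in a database.
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

# Insert a record.
conn.execute("INSERT INTO accounts VALUES (1, 1000)")

# Update a record.
conn.execute("UPDATE accounts SET balance = 900 WHERE id = 1")
after_update = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()

# Delete a record.
conn.execute("DELETE FROM accounts WHERE id = 1")
after_delete = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()

conn.close()
print(after_update, after_delete)  # (900,) (0,)
```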

Transaction

A transaction is an event which occurs on the database. Generally a transaction reads a value
from the database or writes a value to the database. If you are familiar with operating
systems, we can say that a transaction is analogous to a process.
Although a transaction can both read and write on the database, there are some fundamental
differences between these two classes of operations. A read operation does not change the image
of the database in any way. But a write operation, whether performed with the intention of
inserting, updating or deleting data from the database, changes the image of the database. That is,
we may say that these transactions bring the database from an image which existed before the
transaction occurred (called the Before Image or BFIM) to an image which exists after the
transaction occurred (called the After Image or AFIM).

The Four Properties of Transactions:

Every transaction, for whatever purpose it is being used, has the following four properties.
Taking the initial letters of these four properties we collectively call them the ACID Properties.

1. Atomicity: This means that either all of the instructions within the transaction will be reflected
in the database, or none of them will be reflected.

Say, for example, we have two accounts A and B, each containing Rs 1000/-. We now start a
transaction to transfer Rs 100/- from account A to account B.

Read A; A = A - 100; Write A; Read B; B = B + 100; Write B;

The transaction has 6 instructions to extract the amount from A and submit it to B. The AFIM
will show Rs 900/- in A and Rs 1100/- in B.
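The transfer can be sketched in Python. Note that the sketch only performs the steps in order; a real DBMS adds logging and rollback so that a failure midway leaves neither write applied:

```python
def transfer(accounts, src, dst, amount):
    # The six instructions of the transaction, in order:
    a = accounts[src]      # Read A
    a = a - amount         # A = A - 100
    accounts[src] = a      # Write A
    b = accounts[dst]      # Read B
    b = b + amount         # B = B + 100
    accounts[dst] = b      # Write B

accounts = {"A": 1000, "B": 1000}   # the Before Image (BFIM)
transfer(accounts, "A", "B", 100)
print(accounts)  # {'A': 900, 'B': 1100}, the After Image (AFIM)
```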

2. Consistency: Whether we execute a particular transaction in isolation or together with other
transactions (i.e. in a multiprogramming environment), the transaction will yield the
same expected result.

To give better performance, every database management system supports the execution of
multiple transactions at the same time, using CPU Time Sharing. Concurrently executing
transactions may have to deal with the problem of sharable resources, i.e. resources that multiple
transactions are trying to read/write at the same time. For example, we may have a table or a
record on which two transactions are trying to read or write at the same time. Careful mechanisms
are created in order to prevent mismanagement of these sharable resources, so that there should
not be any change in the way a transaction performs. A transaction which deposits Rs 100/-to
account A must deposit the same amount whether it is acting alone or in conjunction with
another transaction that may be trying to deposit or withdraw some amount at the same time.

3. Isolation: In case multiple transactions are executing concurrently and trying to access a
sharable resource at the same time, the system should create an ordering in their execution so
that they should not create any anomaly in the value stored at the sharable resource.
There are several ways to achieve this and the most popular one is using some kind of locking
mechanism. Again, if you recall the operating systems concept of semaphores, remember how a
process uses one to mark a resource busy before starting to use it, and how it releases the
resource after the usage is over. Other processes intending to access that same resource must
wait during this time. Locking is very similar. It states that a
transaction must first lock the data item that it wishes to access, and release the lock when the
accessing is no longer required. Once a transaction locks the data item, other transactions
wishing to access the same data item must wait until the lock is released.
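A minimal sketch of lock-based isolation using Python's threading.Lock: two concurrent "transactions" deposit into a shared balance, and the lock ensures no read-modify-write is interleaved (amounts and counts are illustrative):

```python
import threading

balance = 0                 # the shared data item
lock = threading.Lock()     # the lock guarding it

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:             # lock the data item before access...
            balance += amount  # ...read-modify-write under the lock...
        # ...and release the lock when the access is over, letting
        # the other waiting transaction proceed.

threads = [threading.Thread(target=deposit, args=(100, 1000)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 200000: no update was lost
```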

4. Durability: It states that once a transaction has completed, the changes it has made should
be permanent.

As we have seen in the explanation of the Atomicity property, the transaction, if completes
successfully, is committed. Once the COMMIT is done, the changes which the transaction has
made to the database are immediately written into permanent storage. So, after the transaction
has been committed successfully, there is no question of any loss of information even if the
power fails. Committing a transaction guarantees that the AFIM has been reached.

Data center
A data center is a physical location that stores computing machines and their related hardware
equipment. It contains the computing infrastructure that IT systems require, such as servers, data
storage drives, and network equipment. It is the physical facility that stores any company’s
digital data.

Why are data centers important?

Every business needs computing equipment to run its web applications, offer services to
customers, sell products, or run internal applications for accounts, human resources, and
operations management. As the business grows and IT operations increase, the scale and amount
of required equipment also increase exponentially. Equipment that is distributed across several
branches and locations is hard to maintain. Instead, companies use data centers to bring their
devices to a central location and manage them cost-effectively. Rather than keeping equipment
on premises, they can also use third-party data centers.

Data centers bring several benefits, such as:

 Backup power supplies to manage power outages

 Data replication across several machines for disaster recovery

 Temperature-controlled facilities to extend the life of the equipment


 Easier implementation of security measures for compliance with data laws

Cloud
The term 'cloud' describes a group of services: a global or individual network of servers, each
with a unique function. The cloud is not a physical entity; it is a group or network of remote
servers linked together to operate as a single entity for an assigned task.

In a nutshell, the cloud is backed by buildings full of computer systems. We access the cloud via
the internet because cloud providers offer the cloud as a service.

A common confusion is whether the cloud is the same as cloud compute. The answer is no:
cloud services like compute run in the cloud. The compute service offered by the cloud lets
users 'rent' computer systems in a data center over the internet. Another example of a cloud
service is storage. Now, what exactly is cloud computing? AWS says, "Cloud computing is the on-demand
delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning,
and maintaining physical data centers and servers, you can access technology services, such as
computing power, storage, and databases, on an as-needed basis from a cloud provider like
Amazon Web Services (AWS).”

Types of Cloud: Businesses use different methods of cloud resources, mainly there are four of
them:

 Public Cloud: It is a cloud methodology that is open to all over the Internet on the pay-per-usage
method.

 Private Cloud: It is a cloud methodology used by organizations to build their own data centers,
accessible only with the permission of the organization.

 Hybrid Cloud: It is a cloud methodology that is a combination of public and private clouds. It
serves the different needs of an organization for its services.

 Community Cloud: It is a cloud methodology that provides services to a group of people in an
organization or a single community.
Difference between Cloud and Data Center:

1. Cloud: a virtual resource that helps businesses store, organize, and operate data efficiently.
Data Center: a physical resource that helps businesses store, organize, and operate data efficiently.

2. Cloud: scaling requires a smaller investment.
Data Center: scaling requires a huge investment compared to the cloud.

3. Cloud: the maintenance cost is lower because service providers maintain it.
Data Center: the maintenance cost is high because the organization's developers do the maintenance.

4. Cloud: a third party must be trusted with the organization's stored data.
Data Center: the organization's own developers are trusted with the data stored in data centers.

5. Cloud: performance is high compared with the investment.
Data Center: performance is lower compared with the investment.

6. Cloud: it requires a plan to customize.
Data Center: it is easily customizable without any hard plan.

7. Cloud: it requires a stable internet connection to function.
Data Center: it may or may not require an internet connection.

8. Cloud: easy to operate and considered a viable option.
Data Center: requires experienced developers to operate and is considered less viable.

9. Cloud: data is generally collected over the internet.
Data Center: data is collected from the organization's own network.

10. Cloud: used in scenarios where security is not a critical aspect; hence, small web applications
can be hosted easily.
Data Center: used in scenarios where the project requires a high level of security.
