COMPUTER FUNDAMENTAL MDU BBA Unit-3 and Unit-4

UNIT – 3 OPERATING SYSTEM

3.1 CONCEPT

Operating System and its structure


The operating system is a system program that serves as an interface between the computing
system and the end-user. Operating systems create an environment in which the user can run
programs or communicate with software and applications in a comfortable and well-organized
way.
Furthermore, an operating system is a software program that manages and controls the execution
of application programs, software resources and computer hardware. It also helps manage
software/hardware resources through services such as file management, memory management,
input/output handling and control of peripheral devices like disk drives, printers, etc. Popular
operating systems include Linux, Windows, macOS, VMS, OS/400, etc.

FIG 1. STRUCTURE OF OPERATING SYSTEM

3.2 HISTORY
Generations of Operating System
The First Generation (1940 to early 1950s)
When the first electronic computers were developed in the 1940s, they were created without any
operating system. In those early days, users had full access to the computer machine and wrote a
program for each task in absolute machine language. During this generation programmers could
perform only simple mathematical calculations, and such calculations did not require an
operating system.
The Second Generation (1955 - 1965)
The first operating system (OS) was created in the early 1950s and was known as GMOS;
General Motors developed it for an IBM computer. Second-generation operating systems were
based on single-stream batch processing: similar jobs were collected into groups or batches and
submitted to the operating system on punch cards to be completed one after another on the
machine. On the completion of each job (whether normal or abnormal), control transferred back
to the operating system, which cleaned up after the finished job and then read and initiated the
next job from the punch cards. The new machines of this era were called mainframes; they were
very large and were used by professional operators.
The Third Generation (1965 - 1980)
During the late 1960s, operating system designers developed systems that could perform
multiple tasks at the same time on a single computer, a technique called multiprogramming.
The introduction of multiprogramming played a very important role in the development of
operating systems because it allowed the CPU to be kept busy at all times by switching among
different tasks. The third generation also saw the phenomenal growth of minicomputers, starting
in 1961 with the DEC PDP-1. These PDP machines eventually led to the creation of personal
computers in the fourth generation.
The Fourth Generation (1980 - Present Day)
The fourth generation of operating systems is tied to the development of the personal
computer. The personal computer is very similar to the minicomputers of the third generation,
but its cost was only a small fraction of the cost of a minicomputer. A major factor in the rise of
personal computers was the birth of Microsoft and the Windows operating system. Microsoft
was founded in 1975 by Bill Gates and Paul Allen, who had the vision of taking personal
computers to the next level. They introduced MS-DOS in 1981; however, its cryptic commands
were difficult for ordinary users to understand, and the first graphical version of Windows
followed in 1985. Today, Windows has become the most popular and most commonly used
operating system family. Over the years Microsoft released various versions such as
Windows 95, Windows 98, Windows XP and Windows 7; currently, most Windows users use
the Windows 10 operating system. Besides the Windows operating system, another popular
operating system is Apple's, built in the 1980s under Steve Jobs, a co-founder of Apple; it was
named the Macintosh OS, or Mac OS.
3.3 FUNCTIONS OF OPERATING SYSTEM
(Refer to Computer Fundamentals by Nasib Gill, pages 192-193.)

3.4 TYPES OF OPERATING SYSTEM (to be read from pages 195 to 199 of Nasib Gill; a copy of the pages has already been shared on the group)

3.5 Memory Management


Memory Management in Operating System
The term Memory can be defined as a collection of data in a specific format. It is used to
store instructions and process data. The memory comprises a large array or group of
words or bytes, each with its own location. The primary motive of a computer system is
to execute programs. These programs, along with the information they access, should be
in the main memory during execution. The CPU fetches instructions from memory
according to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important. Many memory management methods exist, reflecting various
approaches, and the effectiveness of each algorithm depends on the situation.

Memory management plays several roles in a computer system
(ROLE/ADVANTAGES/FEATURES/NATURE/CHARACTERISTICS OF MEMORY MANAGEMENT):
o The memory manager keeps track of the status of each memory location, whether it is
free or allocated. It addresses primary memory by providing abstractions so that software
perceives that a large block of memory has been allocated to it.
o The memory manager permits computers with a small amount of main memory to execute
programs larger than the amount of available memory. It does this by moving
information back and forth between primary memory and secondary memory using
the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each
process from being corrupted by another process. If this is not ensured, the system
may exhibit unpredictable behavior.
o The memory manager should enable sharing of memory space between processes. Thus,
two programs can reside at the same memory location, although at different times.

Memory management Techniques:

The memory management techniques can be classified into the following main categories:

o Contiguous memory management schemes


o Non-Contiguous memory management schemes
Contiguous memory management schemes:
In a Contiguous memory management scheme, each program occupies a single
contiguous block of storage locations, i.e., a set of memory locations with consecutive
addresses.

Single contiguous memory management schemes:

The single contiguous memory management scheme is the simplest memory management
scheme, used in the earliest generation of computer systems. In this scheme, the main memory is
divided into two contiguous areas or partitions. The operating system resides permanently in one
partition, generally at the lower memory addresses, and the user process is loaded into the other
partition.

Advantages of Single contiguous memory management schemes:

o Simple to implement.
o Easy to manage and design.
o In a single contiguous memory management scheme, once a process is loaded, it is
given the full processor time, and no other process will interrupt it.

Disadvantages of Single contiguous memory management schemes:

o Wastage of memory space due to unused memory as the process is unlikely to use all
the available memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main
memory.
o A program cannot be executed if it is too large to fit into the entire available main memory
space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs
simultaneously.

Multiple Partitioning:

The single contiguous memory management scheme is inefficient as it limits computers
to executing only one program at a time, resulting in wastage of memory space and CPU
time. The problem of inefficient CPU use can be overcome using multiprogramming, which
allows more than one program to run concurrently. To switch between two processes,
the operating system needs to load both processes into the main memory. It therefore
divides the available main memory into multiple parts so that multiple processes can
reside in the main memory simultaneously.

The multiple partitioning schemes can be of two types:

o Fixed Partitioning
o Dynamic Partitioning

Fixed Partitioning

The main memory is divided into several fixed-sized partitions in a fixed partition
memory management scheme or static partitioning. These partitions can be of the same
size or different sizes. Each partition can hold a single process. The number of partitions
determines the degree of multiprogramming, i.e., the maximum number of processes in
memory. These partitions are made at the time of system generation and remain fixed
after that.

Advantages of Fixed Partitioning memory management schemes:

o Simple to implement.
o Easy to manage and design.

Disadvantages of Fixed Partitioning memory management schemes:

o This scheme suffers from internal fragmentation.


o The number of partitions is specified at the time of system generation.

Dynamic Partitioning

Dynamic partitioning was designed to overcome the problems of the fixed
partitioning scheme. In a dynamic partitioning scheme, each process occupies only as
much memory as it requires when loaded for processing. Requesting processes are
allocated memory until the entire physical memory is exhausted or the remaining space
is insufficient to hold the requesting process. In this scheme the partitions are of
variable size, and the number of partitions is not defined at system generation time.

Advantages of Dynamic Partitioning memory management schemes:


o Simple to implement.
o Easy to manage and design.

Disadvantages of Dynamic Partitioning memory management schemes:

o This scheme suffers from external fragmentation, because the holes left between
variable-sized partitions may be too small to hold a new process.
o Allocation and de-allocation of memory are more complex than in fixed partitioning.

Non-Contiguous memory management schemes:

In a non-contiguous memory management scheme, the program is divided into
different blocks and loaded at different portions of the memory that need not
necessarily be adjacent to one another. This scheme can be classified according to
the size of the blocks and whether the blocks reside in the main memory or not.

What is paging?

Paging is a technique that eliminates the requirement of contiguous allocation of main
memory. In paging, the main memory is divided into fixed-size blocks of physical memory
called frames. The size of a frame is kept the same as that of a page so that main memory is
used efficiently and external fragmentation is avoided.

Advantages of paging:

o Paging reduces external fragmentation.
o Simple to implement.
o Memory efficient.
o Because all frames are of equal size, swapping becomes very easy.
o It allows faster access to data.

What is Segmentation?

Segmentation is a technique that eliminates the requirement of contiguous allocation
of main memory. In segmentation, the main memory is divided into variable-size blocks of
physical memory called segments. It is based on the way the programmer structures the
program. With segmented memory allocation, each job is divided into several segments of
different sizes, one for each module. Functions, subroutines, the stack, arrays, etc., are
examples of such modules.
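
To make the idea concrete, here is a minimal sketch (in Python) of how a segment table could map a logical address of the form (segment number, offset) to a physical address; the segment numbers, base addresses and limits below are invented purely for illustration.

# Hypothetical segment table: segment number -> (base address, limit).
# All values are made-up example numbers.
segment_table = {
    0: (1400, 1000),   # e.g. the main program
    1: (6300, 400),    # e.g. a subroutine
    2: (4300, 1100),   # e.g. the stack
}

def translate(segment, offset):
    """Map a logical address (segment, offset) to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                 # protection check against the segment limit
        raise MemoryError("offset lies outside the segment")
    return base + offset

print(translate(1, 53))    # -> 6353
print(translate(2, 1000))  # -> 5300

An offset beyond a segment's limit is rejected, which is how segmentation also provides memory protection.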

What is Main Memory:

The main memory is central to the operation of a modern computer. Main Memory is a
large array of words or bytes, ranging in size from hundreds of thousands to billions.
Main memory is a repository of rapidly available information shared by the CPU and I/O
devices. Main memory is the place where programs and information are kept when the
processor is effectively utilizing them. Main memory is associated with the processor, so
moving instructions and information into and out of the processor is extremely fast.
Main memory is also known as RAM (Random Access Memory). It is a volatile memory:
RAM loses its data when a power interruption occurs.
What is Memory Management :

In a multiprogramming computer, the operating system resides in a part of memory and
the rest is used by multiple processes. The task of subdividing the memory among
different processes is called memory management. Memory management is the method
the operating system uses to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient
utilization of memory.

Why Memory Management is required:

 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity while a process is executing.
We now discuss the concepts of logical address space and physical address space:

Logical and Physical Address Space:

Logical address space: An address generated by the CPU is known as a "logical
address". It is also known as a virtual address. The logical address space can be defined as
the size of the process. A logical address can be changed.
Physical address space: An address seen by the memory unit (i.e., the one loaded into the
memory address register of the memory) is commonly known as a "physical address". A
physical address is also known as a real address. The set of all physical addresses
corresponding to the logical addresses is known as the physical address space. A physical
address is computed by the MMU. The run-time mapping from virtual to physical addresses
is done by a hardware device called the Memory Management Unit (MMU). The physical
address always remains constant.
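
As a minimal sketch of the run-time mapping described above, assume the simplest form of MMU: a relocation (base) register whose value is added to every CPU-generated logical address, together with a limit register for protection. The register values used here are arbitrary example numbers.

RELOCATION_REGISTER = 14000   # base address loaded by the OS (example value)
LIMIT_REGISTER = 3000         # size of the process's logical address space (example value)

def mmu_map(logical_address):
    """Translate a CPU-generated logical address into a physical address."""
    if logical_address >= LIMIT_REGISTER:          # protection check
        raise MemoryError("addressing error: beyond the process limit")
    return RELOCATION_REGISTER + logical_address   # physical = base + logical

print(mmu_map(346))   # -> 14346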

Static and Dynamic Loading:

Loading a process into main memory is done by a loader. There are two different
types of loading:
 Static loading: The entire program is loaded into a fixed address. It requires more
memory space, since the whole program and all of its data must be in physical memory
for the process to execute, so the size of a process is limited to the size of physical memory.
 Dynamic loading: To obtain better memory utilization, dynamic loading is used. In
dynamic loading, a routine is not loaded until it is called. All routines reside on
disk in a relocatable load format. One advantage of dynamic loading is that an
unused routine is never loaded. This kind of loading is useful when a large amount of
code is needed only to handle cases that occur infrequently.

Static and Dynamic linking:

To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
 Static linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some operating
systems support only static linking, in which system language libraries are treated like
any other object module.
 Dynamic linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, a "stub" is included for each library-routine
reference. A stub is a small piece of code. When the stub is executed, it checks
whether the needed routine is already in memory. If it is not, the
program loads the routine into memory (a minimal sketch of this lazy-loading idea follows).
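
The stub idea can be sketched in a few lines of Python, using a hypothetical routine as a stand-in for a library function; this is only an analogy for how dynamic linking defers loading until first use, not how a real linker is implemented.

_loaded_routine = None            # nothing loaded yet

def _load_real_routine():
    # Stand-in for loading a library routine from disk into memory.
    def real_routine(x):
        return x * 2
    return real_routine

def routine_stub(x):
    """Stub: load the real routine on first use, then delegate to it."""
    global _loaded_routine
    if _loaded_routine is None:   # check whether the routine is already in memory
        _loaded_routine = _load_real_routine()
    return _loaded_routine(x)

print(routine_stub(21))   # first call triggers loading -> 42
print(routine_stub(5))    # later calls go straight to the loaded routine -> 10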

Swapping :

When a process is executed, it must reside in main memory. Swapping is the process of
temporarily moving a process from main memory to secondary storage and later bringing it
back into main memory, which is fast compared with secondary storage. Swapping allows
more processes to be run than can fit into memory at one time. The major part of swap time
is transfer time, and the total transfer time is directly proportional to the amount of memory
swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and
wants service, the memory manager can swap out a lower-priority process and then load and
execute the higher-priority process. After the higher-priority work finishes, the lower-priority
process is swapped back into memory and continues its execution.
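
To see why swap time grows with the amount of memory moved, here is a small back-of-the-envelope calculation with assumed numbers (a 100 MB process image and a 50 MB/s backing-store transfer rate; real values depend on the device).

process_size_mb = 100           # assumed size of the process image
transfer_rate_mb_per_s = 50     # assumed backing-store transfer rate

swap_out = process_size_mb / transfer_rate_mb_per_s   # seconds to roll the process out
swap_in = process_size_mb / transfer_rate_mb_per_s    # seconds to roll it back in

print(f"swap out: {swap_out:.1f} s, swap in: {swap_in:.1f} s, "
      f"total: {swap_out + swap_in:.1f} s")            # doubling the size doubles the time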
Contiguous Memory Allocation :

The main memory must accommodate both the operating system and the various user
processes. Therefore, the allocation of memory becomes an important task of the
operating system. The memory is usually divided into two partitions: one for the resident
operating system and one for the user processes. We normally need several user
processes to reside in memory simultaneously, so we need to consider how to
allocate available memory to the processes that are in the input queue waiting to be
brought into memory. In contiguous memory allocation, each process is contained in a
single contiguous segment of memory.
Memory allocation:

To gain proper memory utilization, memory must be allocated in an efficient
manner. One of the simplest methods for allocating memory is to divide memory into
several fixed-sized partitions, where each partition contains exactly one process. The
degree of multiprogramming is thus determined by the number of partitions.
Multiple partition allocation: In this method, a process is selected from the input queue
and loaded into a free partition. When the process terminates, the partition becomes
available for other processes.
Variable partition allocation: In this method, the operating system maintains a table that
indicates which parts of memory are available and which are occupied by processes.
Initially, all memory is available for user processes and is considered one large block of
available memory, known as a "hole". When a process arrives and needs memory, we
search for a hole that is large enough to store this process. If one is found, we allocate only
as much memory as is needed, keeping the rest available to satisfy future requests. While
allocating memory, the dynamic storage allocation problem arises: how to satisfy a request
of size n from a list of free holes. There are several solutions to this problem:
First fit:
In first fit, the first available free hole that fulfills the requirement of the process is
allocated.
For example, if process A (size 25 KB) is to be placed and the first two free holes are too
small, then a 40 KB block, being the first free hole large enough to store process A, is
allocated to it.
Best fit:
In best fit, the smallest hole that is big enough for the process's requirement is allocated.
For this, we search the entire list, unless the list is ordered by size.
In the corresponding example, the complete list is traversed and a 25 KB hole, exactly
matching process A (size 25 KB), is found to be the best-suited hole.
In this method memory utilization is maximized as compared to the other allocation
techniques.
Worst fit:
In worst fit, the largest available hole is allocated to the process. This method
produces the largest leftover hole.
In the corresponding example, process A (size 25 KB) is allocated to the largest available
memory block, which is 60 KB. Inefficient memory utilization is a major issue with worst fit.
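
The three placement strategies can be compared with a short Python sketch. The list of free hole sizes and the 25 KB request below echo the spirit of the examples above, but the exact numbers are assumed for illustration.

def first_fit(holes, request):
    """Return the index of the first hole large enough for the request."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole that still fits the request."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole (leaves the biggest leftover)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

free_holes = [10, 20, 40, 25, 60]   # free hole sizes in KB (assumed values)
request = 25                        # process A needs 25 KB

for strategy in (first_fit, best_fit, worst_fit):
    i = strategy(free_holes, request)
    print(f"{strategy.__name__}: hole #{i} of {free_holes[i]} KB, "
          f"leftover {free_holes[i] - request} KB")

With these numbers, first fit chooses the 40 KB hole, best fit finds the exact 25 KB hole, and worst fit picks the 60 KB hole, leaving the largest leftover.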

Fragmentation:

Fragmentation occurs when processes are loaded into and removed from memory after
execution, leaving behind small free holes. These holes cannot be assigned to new processes
because they are not combined or do not fulfill the memory requirement of a process.
To achieve a good degree of multiprogramming, we must reduce this waste of memory, i.e.,
the fragmentation problem. In operating systems there are two types of fragmentation:
Internal fragmentation:
Internal fragmentation occurs when a memory block allocated to a process is larger
than its requested size. The unused space that is left over inside the block creates the
internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation and memory contains
blocks of size 3 MB, 6 MB and 7 MB. Now a new process p4 of size 2 MB arrives and
demands a block of memory. It gets a memory block of 3 MB, but 1 MB of that block is
wasted and cannot be allocated to any other process. This is called internal fragmentation.
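
The wasted space in this example can be computed directly; the partition sizes and the 2 MB request are taken from the example above.

partitions_mb = [3, 6, 7]    # fixed partition sizes in MB (from the example)
process_p4_mb = 2            # process p4 needs 2 MB

# p4 is placed in the smallest partition that can hold it (the 3 MB block).
chosen_partition_mb = min(p for p in partitions_mb if p >= process_p4_mb)
internal_fragmentation_mb = chosen_partition_mb - process_p4_mb
print(f"internal fragmentation: {internal_fragmentation_mb} MB")   # -> 1 MB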
External fragmentation:
In external fragmentation, free memory blocks exist, but they cannot be assigned to a
process because the blocks are not contiguous.
Example: Continuing the above example, suppose three processes p1, p2 and p3 arrive with
sizes 2 MB, 4 MB and 7 MB respectively, and they are allocated the memory blocks of size
3 MB, 6 MB and 7 MB respectively. After allocation, p1 leaves 1 MB unused and p2 leaves
2 MB unused. Suppose a new process p4 arrives and demands a 3 MB block of memory. The
total free space is available, but we cannot assign it because the free memory is not
contiguous. This is called external fragmentation.
Both the first-fit and best-fit memory allocation strategies are affected by external
fragmentation. To overcome the external fragmentation problem, compaction is used: all the
free memory space is combined into one large block, so this space can be used effectively by
other processes.
Another possible solution to external fragmentation is to allow the logical address
space of a process to be non-contiguous, thus permitting the process to be allocated physical
memory wherever it is available.

Paging:

Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by the
CPU
 Logical Address Space or Virtual Address Space (represented in words or bytes): The
set of all logical addresses generated by a program
 Physical Address (represented in bits): An address actually available on a memory
unit
 Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses
The mapping from virtual to physical addresses is done by the memory
management unit (MMU), which is a hardware device, and this mapping is known
as the paging technique (a brief sketch of the translation appears after the list below).
 The Physical Address Space is conceptually divided into several fixed-size
blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
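
Here is a brief sketch of the translation the MMU performs, assuming a page size of 1 KB and a made-up page table: a logical address is split into a page number and an offset, and the page table supplies the corresponding frame number.

PAGE_SIZE = 1024                     # assumed page size of 1 KB; frame size is the same

# Hypothetical page table: page number -> frame number (example values).
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    """Split the logical address and look up the frame in the page table."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7 * 1024 + 52 = 7220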

3.6 File Management System

File Management Tools

File management tools are utility software that manage the files of a computer system. Since files
are an important part of the system, as all data is stored in files, this utility software helps to
browse, search, arrange, find information in, and quickly preview the files of the system.
Utility software, or system utilities, is a type of system software that helps in the proper and smooth
functioning of a computer system. It assists the operating system to manage, organize,
maintain and optimize the functioning of a computer system. A file management system has a
limited set of capabilities and works on a file or a group of files. We can also call it a file
manager.

Functions of the File Management System

Functions of a file management system are as follows (a minimal sketch of a few of these
operations follows this list):

 Storing, arranging and accessing files on a disk or other storage locations.
 Creating new files.
 Displaying existing files.
 Adding and editing the data in files.
 Moving files from one location to another.
 Sorting files according to given criteria, for example file size, file location, modified date,
creation date, etc.
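
Here is a minimal sketch of a few of these operations using Python's standard pathlib and shutil modules; the file and folder names are examples created in a temporary scratch directory, not anything referenced in these notes.

import shutil
import tempfile
from pathlib import Path

workspace = Path(tempfile.mkdtemp())            # scratch folder for the demonstration

# Creating a new file and adding data to it.
report = workspace / "report.txt"
report.write_text("Quarterly sales figures\n")

# Editing the file by appending more data.
with report.open("a") as f:
    f.write("Updated totals\n")

# Moving the file to another location.
archive = workspace / "archive"
archive.mkdir()
shutil.move(str(report), str(archive / "report.txt"))

# Sorting the files in the archive folder by size.
files_by_size = sorted(archive.iterdir(), key=lambda p: p.stat().st_size)
print([p.name for p in files_by_size])

shutil.rmtree(workspace)                        # clean up the scratch folder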

Features of the File Management System

A file management system has the following features:

 Arranging the files and folders hierarchically.


 Report generation
 Notes
 Status
 Assigning documents for processing in a queue.
 Add or edit metadata of files.
 Create, modify, delete, or manage other file operations.
 A simple interface to access and manage files.
 Managing different types of files with extensions such as .xls, .pdf, .doc, etc.

For file management in the operating system or to make the operating system
understand a file, the file must be in a predefined structure or format. There are mainly
three types of file structures present in the operating systems:

1. Text file: A text file is a non-executable file containing a sequence of numbers,
symbols, and letters organized in the form of lines.
2. Source file: A source file contains a series of functions and processes written in a
programming language. In simple terms, a source file is a file that contains the
instructions of a program.

3. Object file: An object file is a file that contains object code in the form of
assembly language code or machine language code. In simple terms, object files
contain program instructions as a series of bytes organized into blocks.

Objectives of File Management in Operating System


 The file management in operating system allows users to create a new file, and modify
and delete the old files present at different locations of the computer system.
 The operating system file management software manages the locations of the file store so
that files can be extracted easily.
 As we know, processes share files, so one of the most important features of file
management in an operating system is to make files sharable between processes. It helps
the various processes to securely access the required information from a file.
 The operating system's file management software also manages the files so that there is
very little chance of data loss or data destruction.
 The file management in the operating system provides input-output operation support for
the files so that data can be written to, read from, or extracted from the file(s).
 It also provides a standard input-output interface for the user and system processes. The
simple interface allows easy and fast modification of data.
 The file management in the operating system also manages the various user permissions
on a file. There are three user permissions provided by the operating system:
read, write, and execute.
 The file management in the operating system supports various types of storage
devices such as flash drives, hard disk drives (HDD), magnetic tapes, optical disks,
etc., and allows users to store and retrieve files from them conveniently.
 It also organizes the files hierarchically in the form of files and folders
(directories) so that managing these files is easier from the user's perspective as
well.

Properties of File Management in Operating System


After learning about the objectives of file management in operating systems, let us now learn
about its properties.
1. The files are arranged or grouped into a more complex structure, i.e., a tree, which reflects
the relationships between the various files. File systems work in a similar way to the way
that libraries organize books. Hierarchical file systems usually have a special directory at
the root, and the whole structure can be imagined as a tree.

2. Every file is associated with some name and some access permissions that tell us who
can access the file and in which mode (read or write).

Consider, for example, an access permission string such as -rwxr-xr-- for a particular file.
Here r tells us that the file is readable, w tells us that the file is writeable, and x tells us that
the file is executable, while the dash (-) symbol tells us that no permission is given.

In Linux-based operating systems there are three permission groups (owner, group,
and other). The first character tells us the type of entry (file or directory). The
next three characters represent the permissions of the owner, the next three
represent the permissions of the group, and the last three characters represent the
permissions of others.
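
A small Python sketch can decode a permission string of the kind shown above; the string -rwxr-xr-- is just an illustrative value.

def decode_permissions(perm_string):
    """Decode an ls-style permission string such as '-rwxr-xr--'."""
    kind = "directory" if perm_string[0] == "d" else "file"
    result = {"type": kind}
    triplets = (perm_string[1:4], perm_string[4:7], perm_string[7:10])
    for name, triplet in zip(("owner", "group", "other"), triplets):
        result[name] = {
            "read": triplet[0] == "r",
            "write": triplet[1] == "w",
            "execute": triplet[2] == "x",
        }
    return result

print(decode_permissions("-rwxr-xr--"))
# owner: read/write/execute, group: read/execute, other: read only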

3. Whenever a user logs off, files stored on secondary storage devices are not erased,
whereas data stored in primary memory such as RAM is lost.
Functions of File Management in Operating System
Now let us talk about some of the most important functions of file management in operating
systems.

 Allows users to create, modify and delete files on the computer system.
 Manages the locations of files present on the secondary memory or primary memory.
 Manages and handles the permissions of a particular file for various users and groups.
 Organizes the files in a tree-like structure for better visualization.
 Provides interface to I/O operations.
 Secures files from unauthorized access and hackers.

Advantages of File Management in OS


Some of the main advantages that the file system in the operating system provides are:

 Protection of the files from unauthorized access.


 Recovers the free space created when files are removed or deleted from the hard disk.
 Assigns the disk space to various files with the help of disk management software of the
operating system.
 As we know, a file may be stored at various locations in the form of segments so the file
management in operating system also keeps track of all the blocks or segments of a
particular file.
 Helps to manage the various user permission so that only authorized persons can perform
the file modifications.
 It also keeps our files secure from hackers with the help of security management in the
operating system.

Disadvantages of File Management in OS


Some of the main disadvantages of the file system in the operating system are:

 If the size of the files becomes large then the management takes a good amount of time
due to hierarchical order.
 To get more advanced management features, we need an advanced version of the file
management system. One of the advanced features can be the document management
feature (DMS) that can organize important documents.
 The file system in operating system can only manage the local files present in the
computer system.
 Security can sometimes be an issue, as a virus in one file can spread across various other
files due to the tree-like (hierarchical) structure.
 Due to the hierarchical structure, file access can sometimes be slow.
Examples of File Management in Operating System
An example of file management in an operating system is a file manager or file browser. File
browsers are user interfaces developed to manage the various files and folders present in the
operating system.

Some of the most common operations provided by the file browser of almost every operating
system are:

1. file creation.
2. file modification.
3. file deletion.
4. file transfer.
5. file renaming.
6. file copying and moving.
7. changing file creation.

Examples of file browsers are:

1. Windows file manager (This PC).


2. Finder.
3. Dolphin.
4. One Drive.
5. GNOME Files, etc.

Conclusion
 File management is one of the basic but important features provided by the operating system.
File management in the operating system is software that handles or manages the files
present in the computer system.
 The file system in the operating system is capable of managing individual as well as a group of
files present in the computer system.
 The file system in operating system tells us about the location, owner, time of creation and
modification, type, and state of a file present on the computer system.
 The file management in operating system allows users to create a new file, and modify and
delete the old files present at different locations of the computer system.
 Processes share files so the file management in the operating system makes files sharable
between processes. It helps the various processes to securely access the required information
from a file.
 The file management in the operating system provides input-output operation support to the
files so that the data can be written, read, or extracted from the file(s).
 It organizes the files in a hierarchical manner in the form of files and folders (directories) so that
management of these files can be easier from the user's perspective as well.
UNIT-4
Computer application in offices
Computers are used in many ways in an office. They are used to make calculations that would
take too much time to do by hand, including financial, economic, technical, mathematical and
many other kinds of calculations.

Computers are also used to keep track of money flows, handle administration, make drawings for
mechanics, build websites like Quora and carry out maintenance and monitoring of buildings.

Some of the popular uses of computers in offices include the preparation of word documents
such as letters and reports, the processing of work documents such as work orders and financial
reports, the presentation of reports and proposals to and on behalf of executives and higher-level
office personnel, the management of email services to maintain and sustain business and
communication services, the filing, storage and retrieval of business information, and support for
internal and external business services that require messaging, faxes, printing, photocopying,
video and electronic transmissions.

Modern computer networks facilitate business interactions among offices in small and large
businesses and also promote productive two-way or multi-path business flows with other
remote business, governmental, legal, commercial and almost any other relevant or related
business concerns.

Application Of Computers In Books Publication


Computers have transformed the making, promotion and reading of books and magazines.
Publishers use computers to design and produce hard-copy books and e-books, market books to
readers and track sales. Readers download books and magazines to their phones, laptops and
tablets to read wherever they go.


Some Of The Popular Uses Of Computers In Publishing
 Designing the Word

Computers make designing books both faster and more complex. Designers and self-published
writers use page-layout and illustration software to pull together illustrations, cover designs,
layouts and typefaces in a fraction of the time it would take by hand. If the book needs revision,
it is easy to revise and make changes to the digital files.

 Behold the E-book

Digital books make up 30 percent of the book-publishing marketplace at the time of publication.
Special-interest magazines for niche markets thrive online, where they save on hard-copy printing
and paper costs. Digital publishing makes it easier for writers to release their work without a
traditional publisher's support.

 In publishing, as in most industries, computers and the Internet are vital marketing tools.

Publishers email customers with newsletters about new releases. Magazines announce when
the next issue goes on sale. Publishers and authors rely on social media for promotion --
tweeting about new books, creating Facebook pages or promoting books on Good Reads.
Self-published authors use email and social media to publicize their work or send out digital
copies to book-review websites.
 The Booksellers

Online bookselling works well for publishers who can sell to readers anywhere in the world,
but it's had the side effect of forcing many independent bookstores and a national chain to
shut their doors.

Application Of Computers In Data analysis


Data analysis is a process of collecting, transforming, cleaning, and modeling data with the
goal of discovering the required information. The results so obtained are communicated,
suggesting conclusions and supporting decision-making. Data visualization is at times used
to portray the data for ease of discovering useful patterns in it. The terms data
modeling and data analysis are often used interchangeably.

The data analysis process consists of the following phases, which are iterative in nature:

 Data Requirements Specification

 Data Collection
 Data Processing

 Data Cleaning

 Data Analysis

 Communication

All of the above data analysis phases are performed using computer systems; a toy sketch of the pipeline follows.
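
Below is a toy Python sketch of these phases on a made-up list of sales records; the field names, values and cleaning rule are all invented for illustration.

# Data collection: raw records gathered from some source (invented values).
raw_records = [
    {"region": "North", "sales": "1200"},
    {"region": "South", "sales": None},      # a missing value
    {"region": "East", "sales": "950"},
]

# Data processing / cleaning: drop incomplete rows and convert types.
clean = [
    {"region": r["region"], "sales": float(r["sales"])}
    for r in raw_records
    if r["sales"] is not None
]

# Data analysis: a simple summary statistic.
total = sum(r["sales"] for r in clean)
average = total / len(clean)

# Communication: report the result.
print(f"total sales: {total:.0f}, average per region: {average:.1f}")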

Role of Computers in Accounting:


Labor Saving:
Labor saving is the main aim of introducing computers in accounting. It refers to annual
savings in labor cost or an increase in the volume of work handled by the existing staff.
Time Saving:
Saving time is another objective of computerization. Computers should be used whenever it
is important to save time, especially for jobs that must be completed within a specified time,
such as the preparation of payrolls and statements of accounts.
Accuracy:
Accuracy in accounting statements and books of accounts is of utmost importance in business.
With the help of computers this work can be done without errors or mistakes, and computers
also help to locate errors and frauds very easily.
Minimization of Frauds:
Computers are mainly installed to minimize the chances of fraud committed by employees,
especially in maintaining the books of accounts and handling cash.
Effect on Personnel:
Computers relieve manual drudgery, reduce the tedium of work and fatigue, and to that
extent improve the morale of the employees.

Role of computers in banks


Banks use software to manage their trading desks and their clients' accounts. These systems
often connect to financial markets such as securities exchanges or to third-party providers
such as financial data vendors.

For example, Bloomberg is a financial software, news and data company that offers financial
software tools, such as analytics and an equity trading platform, to financial companies around
the world through the Bloomberg Terminal.


COMPUTER APPLICATIONS IN MEDICAL FIELD
