Computer Application UNIT 1 & 2
(Com-109)
UNIT I
Computer
A computer is an electronic device, operating under the control of instructions stored in its own memory that
can accept data (input), process the data according to specified rules, produce information (output), and store
the information for future use.
Functionalities of a computer
Takes data as input
Stores the data/instructions in its memory and use them when required
Processes the data and converts it into useful information
Generates the output
Controls all the above four steps
Computer Components
Any kind of computer consists of hardware and software.
Hardware:
Computer hardware is the collection of physical elements that constitutes a computer system. Computer
hardware refers to the physical parts or components of a computer such as the monitor, mouse, keyboard,
computer data storage, hard drive disk (HDD), system unit (graphic cards, sound cards, memory, motherboard
and chips), etc. all of which are physical objects that can be touched.
Input Devices:
An input device is any peripheral (piece of computer hardware equipment) used to provide data
and control signals to an information processing system such as a computer or other
information appliance.
Input devices translate data from a form that humans understand to one that the computer can
work with. The most common are the keyboard and mouse.
Example of Input Devices:-
1. Keyboard 2. Mouse (pointing device) 3. Microphone
CPU (Central Processing Unit)
The CPU is the brain of a computer. It is responsible for all functions and processes. In terms
of computing power, the CPU is the most important element of a computer system.
The CPU is comprised of three main parts:
* Control Unit (CU): Directs the operation of the processor; it tells the memory, ALU, and
input/output devices how to respond to the instructions of the program.
* Arithmetic Logic Unit (ALU): Executes all arithmetic and logical operations: arithmetic
calculations such as addition, subtraction, multiplication and division, and logical operations
such as comparing numbers, letters, or special characters.
* Registers: Store the data that is to be executed next; a "very fast storage area".
Primary Memory:-
1. RAM: Random Access Memory (RAM) is a memory scheme within the computer system
responsible for storing data on a temporary basis, so that it can be promptly accessed by the
processor as and when needed. It is volatile in nature, which means that data will be erased
once the power supply to the storage device is turned off. RAM stores data randomly and the processor
accesses these data randomly from the RAM storage. RAM is considered "random access"
because you can access any memory cell directly if you know the row and column that
intersect at that cell.
2. ROM (Read Only Memory): ROM is a permanent form of storage. ROM stays active
regardless of whether power supply to it is turned on or off. ROM devices do not allow data
stored on them to be modified.
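The "row and column" addressing described for RAM above can be sketched in code. This is only a toy model (the grid size and stored values are invented), but it shows why access is direct rather than sequential:

```python
# A toy model of "random access": any cell is reached directly from its
# row and column, with no scanning. Grid size and values are illustrative.

NUM_ROWS, NUM_COLS = 4, 8                   # a tiny 4x8 grid of memory cells
memory = list(range(NUM_ROWS * NUM_COLS))   # cell i holds the value i

def read_cell(row: int, col: int) -> int:
    """Return the value stored at (row, col) in one direct step."""
    return memory[row * NUM_COLS + col]     # single index computation, no search

print(read_cell(2, 5))  # row 2, col 5 -> address 2*8 + 5 = 21
```

Any cell is reached with one index computation, regardless of which cell was read before; that is what makes the access "random".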
Secondary Memory:-
Stores data and programs permanently: data is retained even after the power is turned off.
1. Hard drive (HD): A hard disk is part of a unit, often called a "disk drive," "hard drive," or
"hard disk drive," that stores and provides relatively quick access to large amounts of data on
an electromagnetically charged surface or set of surfaces.
2. Optical Disk: An optical disc drive (ODD) is a disk drive that uses laser light as part of the
process of reading or writing data to or from optical discs. Some drives can only read from
discs, but recent drives are commonly both readers and recorders, also called burners or
writers. Compact discs, DVDs, and Blu-ray discs are common types of optical media which
can be read and recorded by such drives. Optical drive is the generic name; drives are usually
described as "CD", "DVD", or "Blu-ray", followed by "drive", "writer", etc. There are three
main types of optical media: CD, DVD, and Blu-ray disc. CDs can store up to 700 megabytes
(MB) of data and DVDs can store up to 8.4 GB of data. Blu-ray discs, which are the newest
type of optical media, can store up to 50 GB of data. This storage capacity is a clear
advantage over the floppy disk storage media (a magnetic media), which only has a capacity
of 1.44 MB.
3. Flash Disk: A storage module made of flash memory chips. Flash disks have no
mechanical platters or access arms, but the term "disk" is used because the data are accessed
as if they were on a hard drive. The disk storage structure is emulated.
Output devices
An output device is any piece of computer hardware equipment used to communicate the
results of data processing carried out by an information processing system (such as a
computer) which converts the electronically generated information into human-readable
form.
Example on Output Devices:
1. Monitor 2. LCD Projection Panels
3. Plotters 4. Speaker(s)
5. Projector
Software
Software is a generic term for organized collections of computer data and instructions, often
broken into two major categories: system software that provides the basic non-task-specific
functions of the computer, and application software which is used by users to accomplish
specific tasks.
Software Types
A. System software is responsible for controlling, integrating, and managing the individual
hardware components of a computer system so that other software and the users of the system
see it as a functional unit without having to be concerned with the low-level details such as
transferring data from memory to disk, or rendering text onto a display. Generally, system
software consists of an operating system and some fundamental utilities such as disk
formatters, file managers, display managers, text editors, user authentication (login) and
management tools, and networking and device control software.
B. Application software is used to accomplish specific tasks other than just running the
computer system. Application software may consist of a single program, such as an image
viewer; a small collection of programs (often called a software package) that work closely
together to accomplish a task, such as a spreadsheet or text processing system; a larger
collection (often called a software suite) of related but independent programs and packages
that have a common user interface or shared data format, such as Microsoft Office, which
consists of closely integrated word processor, spreadsheet, database, etc.; or a software
system, such as a database management system, which is a collection of fundamental
programs that may provide some service to a variety of other independent applications.
Computers can be generally classified by size and power, though there is
considerable overlap.
Characteristics of Computer
Speed, accuracy, diligence, storage capability and versatility are some of the key
characteristics of a computer. A brief overview of these characteristics is given below:
• Speed: The computer can process data very fast, at the rate of millions of instructions per
second. Some calculations that would have taken hours and days to complete otherwise, can
be completed in a few seconds using the computer. For example, calculation and generation
of salary slips of thousands of employees of an organization, weather forecasting that requires
analysis of a large amount of data related to temperature, pressure and humidity of various
places, etc.
• Accuracy: The computer provides a high degree of accuracy. For example, it can
accurately give the result of dividing any two numbers up to 10 decimal places.
• Diligence: When used for a longer period of time, the computer does not get tired or
fatigued. It can perform long and complex calculations with the same speed and accuracy
from the start till the end.
• Storage Capability: Large volumes of data and information can be stored in the computer
and also retrieved whenever required. A limited amount of data can be stored, temporarily, in
the primary memory. Secondary storage devices like floppy disk and compact disk can store
a large amount of data permanently.
• Versatility: Computer is versatile in nature. It can perform different types of tasks with the
same ease. At one moment you can use the computer to prepare a letter document and in the
next moment you may play music or print a document. Computers have several limitations
too. Computer can only perform tasks that it has been programmed to do.
COMPUTER MEMORY
A memory is just like a human brain. It is used to store data and instructions. Computer
memory is the storage space in the computer, where data is to be processed and instructions
required for processing are stored. The memory is divided into a large number of small parts
called cells. Each location or cell has a unique address, which varies from zero to memory
size minus one. For example, if the computer has 64k words, then this memory unit has 64 *
1024 = 65536 memory locations. The address of these locations varies from 0 to 65535.
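The address-range arithmetic in the example above can be checked directly:

```python
# Sketch of the address-range arithmetic above: a 64K-word memory has
# addresses running from 0 up to size minus one.

words = 64 * 1024          # 64K words
first_address = 0
last_address = words - 1   # memory size minus one

print(words)               # 65536 memory locations
print(last_address)        # addresses run from 0 to 65535
```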
Memory is primarily of three types −
Cache Memory
Primary Memory/Main Memory
Secondary Memory
Cache Memory
Cache memory is a very high speed semiconductor memory which can speed up the CPU. It
acts as a buffer between the CPU and the main memory. It is used to hold those parts of data
and program which are most frequently used by the CPU. The parts of data and programs
are transferred from the disk to cache memory by the operating system, from where the CPU
can access them.
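The buffering role of cache described above can be illustrated with a small sketch. Real caches are hardware; the `TinyCache` class, its capacity, and its eviction policy here are invented purely for illustration of "keep the most frequently used data close to the CPU":

```python
from collections import OrderedDict

# A toy software model of a cache sitting between the "CPU" (our reads)
# and "main memory" (a dict). Repeated accesses hit the cache; new
# addresses miss and evict the least recently used entry.

class TinyCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()            # address -> value, oldest first
        self.hits = self.misses = 0

    def read(self, address: int, main_memory: dict) -> int:
        if address in self.data:             # cache hit: fast path
            self.hits += 1
            self.data.move_to_end(address)
        else:                                # miss: fetch from main memory
            self.misses += 1
            self.data[address] = main_memory[address]
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)   # evict least recently used
        return self.data[address]

main_memory = {addr: addr * 10 for addr in range(100)}
cache = TinyCache(capacity=2)
for addr in [1, 2, 1, 1, 3]:
    cache.read(addr, main_memory)
print(cache.hits, cache.misses)  # 2 hits (the repeated address 1), 3 misses
```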
Advantages
The advantages of cache memory are as follows −
Cache memory is faster than main memory.
It consumes less access time as compared to main memory.
It stores data for temporary use so the CPU can reach it quickly.
Disadvantages
The disadvantages of cache memory are as follows −
Cache memory has limited capacity.
It is very expensive.
RAM
RAM (Random Access Memory) is the internal memory of the CPU for storing data,
programs, and program results. It is a read/write memory which stores data as long as the
machine is working. As soon as the machine is switched off, the data is erased.
Access time in RAM is independent of the address, that is, each storage location inside the
memory is as easy to reach as other locations and takes the same amount of time. Data in the
RAM can be accessed randomly but it is very expensive.
RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is a
power failure. Hence, a backup Uninterruptible Power System (UPS) is often used with
computers. RAM is small, both in terms of its physical size and in the amount of data it can
hold.
RAM is of two types −
Static RAM (SRAM)
Dynamic RAM (DRAM)
Characteristics of Static RAM
Long life
No need to refresh
Faster
Used as cache memory
Large size
Expensive
High power consumption
Advantages of ROM
Non-volatile in nature
Cannot be accidentally changed
Cheaper than RAMs
Easy to test
More reliable than RAMs
Static and do not require refreshing
Contents are always known and can be verified
UNIT- II
OPERATING SYSTEM
An operating system (OS) is a program that manages the computer hardware and acts as an
intermediary between the user and the hardware. It can be examined from two viewpoints:
1. User's View
2. System View
The major functions of an operating system include:
1. Processor management, which involves putting tasks into order and pairing them into
manageable size before they go to the CPU.
2. Memory management which coordinates data to and from RAM (random-access
memory) and determines the necessity for virtual memory.
3. Device management which provides interface between connected devices.
4. Storage management which directs permanent data storage.
5. Application which allows standard communication between software and your computer.
6. User interface which allows you to communicate with your computer.
Types of Operating Systems
1) Batch Operating Systems
In this type of system, there is no direct interaction between the user and the computer.
The user has to submit a job (written on cards or tape) to a computer operator.
Then computer operator places a batch of several jobs on an input device.
Jobs are batched together by type of languages and requirement.
Then a special program, the monitor, manages the execution of each program in the batch.
The monitor is always in the main memory and available for execution.
In this, the operating system picks up and begins to execute one of the jobs from memory.
Once this job needs an I/O operation, the operating system switches to another job (the CPU
and OS are always kept busy).
Jobs in memory are always fewer than the number of jobs on disk (the Job Pool).
If several jobs are ready to run at the same time, then the system chooses which one to run
through the process of CPU Scheduling.
In a non-multiprogrammed system, there are moments when the CPU sits idle and does not do
any work.
In a multiprogramming system, the CPU will never be idle and keeps on processing.
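The switching behaviour described above can be sketched as a toy simulation. The job names and their CPU/I/O step patterns are invented; the point is only that when a job blocks for I/O, the CPU is handed to the next ready job instead of sitting idle:

```python
from collections import deque

# Each job is a queue of steps: "cpu" (needs the processor) or "io"
# (blocks for input/output). The scheduler runs CPU steps until the
# current job blocks or finishes, then switches to the next ready job.

jobs = {"A": deque(["cpu", "io", "cpu"]), "B": deque(["cpu", "cpu"])}
ready = deque(["A", "B"])
trace = []

while ready:
    name = ready.popleft()
    steps = jobs[name]
    while steps and steps[0] == "cpu":       # run until I/O or completion
        trace.append(f"{name}:cpu")
        steps.popleft()
    if steps and steps[0] == "io":
        trace.append(f"{name}:io")           # job blocks; CPU switches away
        steps.popleft()
        ready.append(name)                   # rejoin ready queue after I/O

print(trace)  # ['A:cpu', 'A:io', 'B:cpu', 'B:cpu', 'A:cpu']
```

Note that B runs while A waits for its I/O, so the CPU is never idle.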
2) Time-Sharing Systems
Time-sharing systems are very similar to multiprogrammed batch systems. In fact, time-
sharing systems are an extension of multiprogramming systems.
In Time sharing systems the prime focus is on minimizing the response time, while in
multiprogramming the prime focus is to maximize the CPU usage.
3) Multiprocessor Systems
A Multiprocessor system consists of several processors that share a common physical
memory. Multiprocessor system provides higher computing power and speed. In
multiprocessor system, all processors operate under a single operating system. The multiplicity of
the processors, and how they act together, is transparent to the user.
The benefits of multiprocessor systems include:
1. Enhanced performance.
2. Execution of several tasks by different processors concurrently increases the system's
throughput without speeding up the execution of a single task.
3. If possible, the system divides a task into many subtasks, which can then be executed
in parallel on different processors, thereby speeding up the execution of single tasks.
4) Desktop Systems
Earlier, CPUs and PCs lacked the features needed to protect an operating system from user
programs. PC operating systems therefore were neither multiuser nor multitasking.
However, the goals of these operating systems have changed with time; instead of
maximizing CPU and peripheral utilization, the systems opt for maximizing user convenience
and responsiveness. These systems are called Desktop Systems and include PCs
running Microsoft Windows and the Apple Macintosh. Operating systems for these
computers have benefited in several ways from the development of operating systems
for mainframes.
Microcomputers were immediately able to adopt some of the technology developed for
larger operating systems. On the other hand, the hardware costs for microcomputers are
sufficiently low that individuals have sole use of the computer, and CPU utilization is no
longer a prime concern. Thus, some of the design decisions made in operating systems for
mainframes may not be appropriate for smaller systems.
5) Distributed Systems
In a distributed system, multiple computer systems communicate over a network and share
their resources. Advantages include:
1. As there are multiple systems involved, a user at one site can utilize the resources of
systems at other sites for resource-intensive tasks.
2. Fast processing.
3. Less load on the Host Machine.
Types of Distributed Operating Systems
Following are the two types of distributed operating systems used:
1. Client-Server Systems
2. Peer-to-Peer Systems
Client-Server Systems
Centralized systems today act as server systems to satisfy requests generated by client
systems. The general structure of a client-server system is depicted in the figure below:
Server Systems can be broadly categorized as: Compute Servers and File Servers.
Compute Server systems, provide an interface to which clients can send requests to
perform an action, in response to which they execute the action and send back results to
the client.
File Server systems, provide a file-system interface where clients can create, update,
read, and delete files.
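The file-server interface described above (create, update, read, delete) can be sketched in-process. A real file server would accept these requests over a network; here the `FileServer` class is an invented stand-in and a plain dictionary plays the role of the server's storage:

```python
# A minimal in-process sketch of a file-server interface. Clients call
# create/update/read/delete; the server holds the files.

class FileServer:
    def __init__(self):
        self.files = {}              # filename -> contents

    def create(self, name: str, content: str = "") -> None:
        self.files[name] = content

    def update(self, name: str, content: str) -> None:
        self.files[name] = content

    def read(self, name: str) -> str:
        return self.files[name]

    def delete(self, name: str) -> None:
        del self.files[name]

server = FileServer()                        # the server side
server.create("notes.txt", "hello")          # client requests
server.update("notes.txt", "hello, world")
print(server.read("notes.txt"))              # hello, world
server.delete("notes.txt")
```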
Peer-to-Peer Systems
The growth of computer networks - especially the Internet and World Wide Web (WWW) –
has had a profound influence on the recent development of operating systems. When PCs
were introduced in the 1970s, they were designed for personal use and were generally
considered standalone computers. With the beginning of widespread public use of the
Internet in the 1990s for electronic mail and FTP, many PCs became connected to computer
networks.
In contrast to the Tightly Coupled systems, the computer networks used in these applications
consist of a collection of processors that do not share memory or a clock. Instead, each
processor has its own local memory. The processors communicate with one another through
various communication lines, such as high-speed buses or telephone lines. These systems are
usually referred to as loosely coupled systems (or distributed systems). The general structure
of such a loosely coupled system is depicted in the figure below:
6) Clustered Systems
Like parallel systems, clustered systems gather together multiple CPUs to accomplish
computational work.
Clustered systems differ from parallel systems, however, in that they are composed of two
or more individual systems coupled together.
The definition of the term clustered is not concrete; the generally accepted definition is that
clustered computers share storage and are closely linked via LAN networking.
Clustering is usually performed to provide high availability.
A layer of cluster software runs on the cluster nodes. Each node can monitor one or more
of the others. If the monitored machine fails, the monitoring machine can take ownership
of its storage, and restart the application(s) that were running on the failed machine. The
failed machine can remain down, but the users and clients of the application would only
see a brief interruption of service.
Asymmetric Clustering - In this, one machine is in hot standby mode while the other is
running the applications. The hot standby host (machine) does nothing but monitor the
active server. If that server fails, the hot standby host becomes the active server.
Symmetric Clustering - In this, two or more hosts are running applications, and they are
monitoring each other. This mode is obviously more efficient, as it uses all of the
available hardware.
Parallel Clustering - Parallel clusters allow multiple hosts to access the same data on the
shared storage. Because most operating systems lack support for this simultaneous data
access by multiple hosts, parallel clusters are usually accomplished by special versions of
software and special releases of applications.
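The failover behaviour of asymmetric clustering described above can be sketched as a toy simulation. The node structures and application names are invented; the point is that the standby node takes ownership of a failed node's applications:

```python
# A toy sketch of asymmetric clustering: a hot-standby node monitors the
# active node and takes over its applications when it fails.

def failover(active: dict, standby: dict) -> None:
    """Standby takes ownership of the failed node's applications."""
    if not active["alive"]:
        standby["apps"].extend(active["apps"])   # restart the apps here
        active["apps"].clear()

active = {"alive": True, "apps": ["db", "web"]}
standby = {"alive": True, "apps": []}

active["alive"] = False        # the monitored machine fails
failover(active, standby)
print(standby["apps"])         # ['db', 'web'] -- service continues
```

Clients of "db" and "web" would see only a brief interruption while the standby node restarts them.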
Clustered technology is rapidly changing. Clustered system usage and features should
expand greatly as Storage Area Networks (SANs) become more widespread. SANs allow easy
attachment of multiple hosts to multiple storage units. Current clusters are usually limited to
two or four hosts due to the complexity of connecting the hosts to shared storage.
8) Handheld Systems
Handheld systems include Personal Digital Assistants (PDAs), such as Palm-
Pilots, or cellular telephones with connectivity to a network such as the Internet. They are
usually of limited size, due to which most handheld devices have a small amount of memory,
slow processors, and small display screens.
Many handheld devices have between 512 KB and 8 MB of memory. As a result, the
operating system and applications must manage memory efficiently. This includes
returning all allocated memory back to the memory manager once the memory is no
longer being used.
Currently, many handheld devices do not use virtual memory techniques, thus forcing
program developers to work within the confines of limited physical memory.
Processors for most handheld devices often run at a fraction of the speed of a processor in
a PC. Faster processors require more power. To include a faster processor in a handheld
device would require a larger battery that would have to be replaced more frequently.
The last issue confronting program designers for handheld devices is the small display
screens typically available. One approach for displaying the content in web pages is web
clipping, where only a small subset of a web page is delivered and displayed on the
handheld device.
Some handheld devices may use wireless technology such as Bluetooth, allowing remote
access to e-mail and web browsing. Cellular telephones with connectivity to the Internet fall
into this category. Their use continues to expand as network connections become more
available and other options such as cameras and MP3 players, expand their utility.
BOOTING PROCESS
When we start our computer, an operation is performed automatically by the computer; this
operation is called booting. During booting, the system checks all the hardware and software
that are installed on or attached to the system, and it also loads all the files that are needed
to run the system.
During booting, the system reads the instructions stored in the ROM chip and executes them
to start the system. The instructions that are necessary to start the system are read at the
time of booting, and after booting the system displays its information on the screen.
There are two types of booting:
1) Cold Booting: When the system starts from its initial, powered-off state, this is called
cold booting. In cold booting the user presses the power button, the instructions are read
from the ROM, and the operating system is automatically loaded into the system.
2) Warm Booting: Warm booting is when the system restarts while it is already running, for
example when a power fluctuation causes the system to restart automatically. The chances of
damage to the system are higher in this case: the system does not start from its initial state,
and some files may be damaged because they were not properly saved.
LANGUAGE TRANSLATORS
A language translator is a program that converts code written in one computer language into
another.
These include translations between high-level and human-readable computer languages such
as C++, Java and COBOL, intermediate-level languages such as Java bytecode, low-level
languages such as the assembly language and machine code, and between similar levels of
language on different computing platforms, as well as from any of these to any other of
these.
1. COMPILERS
A compiler is a computer program (or set of programs) that transforms source code written in
a programming language (the source language) into another computer language (the target
language, often having a binary form known as object code).
The most common reason for converting a source code is to create an executable program.
The name "compiler" is primarily used for programs that translate source code from a high-
level programming language to a lower level language (e.g., assembly language or machine
code).
If the compiled program can run on a computer whose CPU or operating system is different
from the one on which the compiler runs, the compiler is known as a cross-compiler. More
generally, compilers are a specific type of translators.
A program that translates from a low level language to a higher level one is a decompiler.
A program that translates between high-level languages is usually called a source-to-source
compiler or transpiler.
A language rewriter is usually a program that translates the form of expressions without a
change of language.
The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used
to help create the lexer and parser.
A compiler typically performs many or all of the following operations:
1. Lexical analysis,
2. Preprocessing,
3. Parsing,
4. Semantic analysis (syntax-directed translation),
5. Code generation, and code optimization.
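Phases 1, 3, and 5 above can be illustrated with a toy compiler for arithmetic expressions. This is only a sketch: the token patterns, the tiny grammar, and the stack-machine instructions (PUSH/ADD/MUL) are invented for illustration, not taken from any real compiler:

```python
import re

# Lexical analysis: split the source text into tokens (numbers, + and *).
def lex(source: str):
    return re.findall(r"\d+|[+*]", source)

# Parsing + code generation: parse "term (+ term)*", where a term is
# "number (* number)*", and emit instructions for a simple stack machine.
def compile_expr(tokens):
    code = []

    def term():
        code.append(("PUSH", int(tokens.pop(0))))
        while tokens and tokens[0] == "*":
            tokens.pop(0)
            code.append(("PUSH", int(tokens.pop(0))))
            code.append(("MUL",))

    term()
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        term()
        code.append(("ADD",))
    return code

print(compile_expr(lex("2 + 3 * 4")))
# [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('MUL',), ('ADD',)]
```

Note that the emitted code respects operator precedence: the multiplication is performed before the addition.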
Program faults caused by incorrect compiler behavior can be very difficult to track down and
work around; therefore, compiler implementors invest significant effort to ensure compiler
correctness.
Compilers enabled the development of programs that are machine-independent.
Before the development of FORTRAN, the first higher-level language, in the 1950s,
machine-dependent assembly language was widely used.
While assembly language provides more abstraction than machine code on the same
architecture, just as with machine code, it has to be modified or rewritten if the program is to
be executed on a different computer hardware architecture.
With the advent of high-level programming languages that followed FORTRAN, such as
COBOL, C, and BASIC, programmers could write machine-independent source programs. A
compiler translates the high-level source programs into target programs in machine languages
for the specific hardware. Once the target program is generated, the user can execute the
program.
ADVANTAGES OF COMPILER
1. Source code is not included, therefore compiled code is more secure than
interpreted code.
2. Tends to produce faster code than interpreting source code.
3. Produces an executable file, and therefore the program can be run without
need of the source code.
DISADVANTAGES OF COMPILER
1. Object code needs to be produced before a final executable file; this can be a
slow process.
2. The source code must be 100% correct for the executable file to be produced.
2. INTERPRETERS
In computer science, an interpreter is a computer program that directly executes,
i.e. performs, instructions written in a programming or scripting language, without
previously compiling them into a machine language program.
An interpreter is a program that reads in as input a source program, along with data for the
program, and translates the source program instruction by instruction.
EXAMPLE
The Java interpreter java translates a .class file into code that can be executed natively
on the underlying machine.
The program VirtualPC interprets programs written for the Intel Pentium architecture
(IBM-PC clone) on the PowerPC architecture (Macintosh). This enables Macintosh users to
run Windows programs on their computers.
An interpreter generally uses one of the following strategies for program execution:
1. parse the source code and perform its behavior directly.
2. translate source code into some efficient intermediate representation and
immediately execute this.
3. explicitly execute stored precompiled code made by a compiler which is part
of the interpreter system.
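Strategy 1 above (parse the source code and perform its behaviour directly) can be sketched with a toy interpreter. The tiny SET/ADD/PRINT language here is invented for illustration; each line is parsed and its effect is carried out immediately, with no machine-code step:

```python
# A toy interpreter: read each source line, decide what it means, and
# perform the action directly.

def interpret(program: str) -> list:
    variables, output = {}, []
    for line in program.strip().splitlines():
        op, *args = line.split()
        if op == "SET":                        # SET x 5  -> x = 5
            variables[args[0]] = int(args[1])
        elif op == "ADD":                      # ADD x 3  -> x = x + 3
            variables[args[0]] += int(args[1])
        elif op == "PRINT":                    # PRINT x  -> output x
            output.append(variables[args[0]])
    return output

print(interpret("SET x 5\nADD x 3\nPRINT x"))  # [8]
```

Because each instruction is translated and executed one at a time, an error on a given line is reported as soon as that line is reached, which matches the per-instruction error behaviour of interpreters described later.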
ADVANTAGES OF INTERPRETER
1. Easier to debug (check errors) than a compiler.
2. Easier to create multi-platform code, as each different platform would have an
interpreter to run the same code.
3. Useful for prototyping software and testing basic program logic.
DISADVANTAGES OF INTERPRETER
1. Source code is required for the program to be executed, and this source code
can be read making it insecure.
2. Interpreters are generally slower than compiled programs due to the per-line
translation method.
3. ASSEMBLERS
An assembler translates assembly language into machine code.
An assembler is a program that creates object code by translating combinations of
mnemonics and syntax for operations and addressing modes into their numerical equivalents.
Assembly language
It consists of mnemonics for machine opcodes, so assemblers perform a 1:1 translation
from each mnemonic to a machine instruction.
An assembly language (or assembler language) is a low-level programming
language for a computer, or other programmable device, in which there is a very strong
(generally one-to-one) correspondence between the language and the architecture's machine
code instructions.
Each assembly language is specific to a particular computer architecture, in contrast
to most high-level programming languages, which are generally portable across multiple
architectures, but require interpreting or compiling.
Assembly language is converted into executable machine code by a utility
program referred to as an assembler; the conversion process is referred to as assembly,
or assembling the code.
For example, the assembly mnemonic ADD typically assembles to exactly one machine
instruction. Conversely, one instruction in a high-level language will translate to one or more
instructions at machine level.
TYPES OF ASSEMBLERS
There are two types of assemblers based on how many passes through the source are needed
to produce the executable program.
1. One-pass assemblers go through the source code once. Any symbol used
before it is defined will require "errata" at the end of the object code (or, at least, no
earlier than the point where the symbol is defined) telling the linker or the loader to
"go back" and overwrite a placeholder which had been left where the as yet undefined
symbol was used.
2. Multi-pass assemblers create a table with all symbols and their values in the
first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of each instruction on the
initial passes in order to calculate the addresses of subsequent symbols.
This means that if the size of an operation referring to an operand defined later depends on
the type or distance of the operand, the assembler will make a pessimistic estimate when first
encountering the operation, and if necessary pad it with one or more "no-operation"
instructions in a later pass or the errata. In an assembler with peephole optimization,
addresses may be recalculated between passes to allow replacing pessimistic code with code
tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was speed of assembly – often a
second pass would require rewinding and rereading a tape or rereading a deck of cards.
With modern computers this has ceased to be an issue. The advantage of the multi-pass
assembler is that the absence of errata makes the linking process (or the program load if the
assembler directly produces executable code) faster.
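The multi-pass approach can be sketched with a toy two-pass assembler. The instruction set, opcode values, and program here are invented; the point is that pass 1 builds the symbol table so pass 2 can resolve a forward reference (JMP to a label defined later) without errata:

```python
# A minimal two-pass assembler for an invented toy instruction set.
OPCODES = {"LOAD": 0x01, "JMP": 0x02, "HALT": 0xFF}   # invented opcodes

def assemble(lines):
    # Pass 1: record each label's address (each instruction is one word).
    symbols, address = {}, 0
    for line in lines:
        if line.endswith(":"):
            symbols[line[:-1]] = address
        else:
            address += 1
    # Pass 2: emit (opcode, operand) pairs, resolving labels via the table.
    code = []
    for line in lines:
        if line.endswith(":"):
            continue
        mnemonic, *operand = line.split()
        value = operand[0] if operand else "0"
        resolved = symbols.get(value)
        code.append((OPCODES[mnemonic],
                     resolved if resolved is not None else int(value)))
    return code

program = ["LOAD 7", "JMP end", "LOAD 9", "end:", "HALT"]
print(assemble(program))  # [(1, 7), (2, 3), (1, 9), (255, 0)]
```

The JMP on line 2 refers to `end`, which is not defined until later; a one-pass assembler would have to leave a placeholder there and patch it afterwards.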
ADVANTAGES OF ASSEMBLER:
1. Very fast in translating assembly language to machine code, because of the 1-to-1
relationship.
2. Assembly code is often very efficient (and therefore fast) because it is a low
level language.
3. Assembly code is fairly easy to understand due to the use of English-like
mnemonics.
DISADVANTAGES OF ASSEMBLERS:
1. Assembly language is written for a certain instruction set and/or processor.
2. Assembly tends to be optimised for the hardware it's designed for, meaning it
is often incompatible with different hardware.
3. Lots of assembly code is needed to do relatively simple tasks, and complex
programs require lots of programming time.
4. ERRORS: An interpreter displays errors for every instruction interpreted (if any). A
compiler displays errors after the entire program is checked. Error messages generated during
an assembly may originate from the assembler, from a higher-level language such as C (many
assemblers are written in C), or from the operating system environment.
Around the world, language is a source of communication among human beings. Similarly, in
order to communicate with a computer, the user also needs a language that is understandable
by the computer. For this purpose, different languages have been developed for performing
different types of work on the computer.
Machine Language
Machine language is the lowest and most elementary language, and it was the first type of
programming language to be developed. Machine language is basically the only language a
computer can understand. In fact, a manufacturer designs a computer to obey just one
language, its machine code, which is represented inside the computer by a string of binary
digits (bits) 0 and 1. The symbol 0 stands for the absence of an electric pulse and 1 for the
presence of an electric pulse. Since a computer is capable of recognizing electric signals, it
understands machine language.
The set of binary codes which can be recognized by the computer is known as the machine
code instruction set. A machine language instruction consists of an operation code and one or
more operands. The operation code specifies the operation that is to be performed, e.g. read,
record, etc. The operands identify the quantities to be operated on, e.g. the numbers to be
added or the locations where data are stored. However, it is almost impossible to write
programs directly in machine code. For this reason, programs are normally written in
assembly or high-level languages and then translated into machine language by different
translators.
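The opcode/operand structure described above can be illustrated with an invented 16-bit instruction format (real formats differ between machines; the field widths here are purely for illustration):

```python
# A sketch of the opcode/operand split: in this invented format, the top
# 4 bits of a 16-bit instruction are the operation code and the low 12
# bits are the operand (e.g. a memory address).

def decode(instruction: int):
    opcode = instruction >> 12          # top 4 bits: what to do
    operand = instruction & 0x0FFF      # low 12 bits: what to do it to
    return opcode, operand

# 0b0001 000000101010 -> opcode 1 (say, a "read"), operand 42
print(decode(0b0001_000000101010))      # (1, 42)
```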
Advantages
1. It makes fast and efficient use of the computer
2. It requires no translator to translate the code i.e. directly understood by the computer.
Disadvantages
1. All operation codes have to be remembered
2. All memory addresses have to be remembered
3. It is hard to amend or find errors in a program written in the machine language
4. These languages are machine dependent i.e. a particular machine language can be
used on only one type of computer.
Assembly Languages
Assembly language was developed to overcome some of the many inconveniences of machine
language. It is another low-level but very important language, in which operation codes and
operands are given in the form of alphanumeric symbols instead of 0s and 1s. These
alphanumeric symbols are known as mnemonic codes and can have up to five-letter
combinations, e.g. ADD for addition, SUB for subtraction, START, LABEL, etc. Because of
this feature it is also known as a "Symbolic Programming Language". The language is still
difficult and needs a lot of practice to master, because only very limited English-like
support is provided. The instructions of the assembly language are converted to machine
codes by a language translator (an assembler) before they can be executed by the computer.
Advantages
1. It is easier to understand and use as compared to machine language
2. It is easy to locate and correct errors
3. It is modified easily
Disadvantages
1. Like machine language it is also machine dependent
2. Since it is machine dependent, the programmer should also have knowledge of the
hardware.
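The translation step mentioned above can be illustrated with a toy two-pass assembler. This is only a sketch: the mnemonics, opcodes, and one-word instruction format are invented for the example, not taken from any real machine. Pass one records where each symbolic label falls; pass two replaces every mnemonic and label with a number:

```python
# Toy two-pass assembler: mnemonics and labels in, numeric codes out.
OPCODES = {"LOAD": 1, "ADD": 2, "SUB": 3, "JMP": 4, "HALT": 5}  # hypothetical

def assemble(lines):
    labels, stripped, addr = {}, [], 0
    # Pass 1: note the address of each label (lines ending in ":").
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            stripped.append(line)
            addr += 1
    # Pass 2: translate each mnemonic and operand into numbers.
    code = []
    for line in stripped:
        parts = line.split()
        op = OPCODES[parts[0]]
        arg = parts[1] if len(parts) > 1 else "0"
        value = int(arg) if arg.isdigit() else labels[arg]
        code.append((op, value))
    return code

print(assemble(["START:", "LOAD 5", "JMP START", "HALT"]))
```

The symbolic source is far easier to read and amend than the numeric output, which is exactly the advantage assembly language offers over raw machine code.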
2) Computer High Level Languages
High level computer languages give formats close to English language and the purpose of
developing high level languages is to enable people to write programs easily and in their own
native language environment (English). High-level languages are basically symbolic
languages that use English words and/or mathematical symbols rather than mnemonic codes.
Each instruction in the high level language is translated into many machine language
instructions thus showing one-to-many translation.
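The one-to-many translation described above can be sketched as follows. The low-level instruction names (LOAD, ADD, STORE) are hypothetical, chosen only to show how a single high-level assignment statement expands into several machine-level steps:

```python
# Sketch: one high-level statement -> several machine-like instructions.
def translate_assignment(dest, left, op, right):
    """Translate e.g. 'total = price + tax' into low-level steps."""
    ops = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}
    return [f"LOAD {left}", f"{ops[op]} {right}", f"STORE {dest}"]

# One source statement becomes three instructions:
for step in translate_assignment("total", "price", "+", "tax"):
    print(step)
```

A real compiler performs this expansion for every statement, which is why high-level programs are short to write but produce many machine instructions.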
Many languages have been developed for a variety of tasks; some are fairly
specialized, while others are quite general-purpose. They are categorized according to their use as:
Business Data Processing. These languages emphasize capabilities for data
processing procedures and file-handling problems. Examples are:
1. COBOL (Common Business Oriented Language)
2. RPG (Report Program Generator).
String and List Processing. These are used for string manipulation, including searching for
patterns and inserting and deleting characters. An example is LISP (List Processing).
Multipurpose Language. A general purpose language used for algebraic procedures, data
and string processing. Examples are:
1. Pascal (after the name of Blaise Pascal).
2. PL/1 (Programming Language, version 1).
3. C language.
Disadvantages: High-level languages also have certain disadvantages. In spite of these,
high-level languages have proved their worth: the advantages far outweigh the disadvantages
for most applications. The disadvantages are:
1. A high-level language has to be translated into machine language by a translator,
and thus a price in computer time is paid.
2. The object code generated by a translator might be inefficient compared to an
equivalent assembly language program.
INTRODUCTION TO GUI
A graphical user interface is fondly called a "GUI," pronounced "gooey." The word
"graphical" means pictures; "user" means the person who uses it; "interface" means what you
see on the screen and how you work with it. A graphical user interface, then, means that
you (the user) get to work with little pictures on the screen to boss the computer around,
rather than typing in lines of codes and commands.
A GUI is an interactive outer layer presented by a computer software product (for
example, an operating system) to make it easier to use by operating through pictures as well
as words. Graphical user interfaces employ visual metaphors, in which objects drawn on the
computer's screen mimic in some way the behaviour of real objects, and manipulating the
screen object controls part of the program.
A graphical user interface uses menus and icons (pictorial representations) to choose
commands, start applications, make changes to documents, store files, delete files, etc. You
can use the mouse to control a cursor or pointer on the screen to do these things, or you can
alternatively use the keyboard to do most actions. A graphical user interface is considered
user-friendly.
The most popular GUI metaphor requires the user to point at pictures on the screen with an
arrow pointer steered by a mouse or similar input device. Clicking the mouse buttons
while pointing to a screen object selects or activates that object, and may enable it to be
moved across the screen by dragging, as if it were a real object.
GUIs have many advantages and some disadvantages. They make programs much easier to
learn and use, by exploiting natural hand-to-eye coordination instead of numerous obscure
command sequences. They reduce the need for fluent typing skills, and make the operation of
software more comprehensible and hence less mysterious and anxiety-prone. For
visually-oriented tasks such as word processing, illustration and graphic design they have
proved revolutionary.
GUIs can also present great difficulties for people with visual disabilities, and their
interactive nature makes it difficult to automate repetitive tasks by batch processing. Neither
do GUIs automatically promote good user interface design. Hiding 100 poorly-chosen
commands behind the tabs of a property sheet is no better than hiding them among an
old-fashioned menu hierarchy; the point is to reduce them to 5 more sensible ones.
Historically, the invention of the GUI must be credited to Xerox PARC, where the first
GUI-based workstations, the Xerox Star and Xerox Dorado, were designed in the
1970s. These proved too expensive and too radical for commercial exploitation, but
following a visit to PARC by Steve Jobs in the early 1980s, Apple released the Lisa, the
first commercial GUI computer, and later the more successful Macintosh. It was only
after the 1990 release of Windows version 3.0 that GUIs became ubiquitous on
IBM-compatible PCs.