Computer Application Unit 1
The computer was born not for entertainment or email but out of a need to solve a serious number-crunching
crisis. By 1880, the U.S. population had grown so large that it took more than seven years to tabulate the
U.S. Census results. The government sought a faster way to get the job done, giving rise to punch-card based
computers that took up entire rooms.
Today, we carry more computing power on our smartphones than was available in these early models. The
following brief history of computing is a timeline of how computers evolved from their humble beginnings
to the machines of today that surf the Internet, play games and stream multimedia in addition to crunching
numbers.
1801: In France, Joseph Marie Jacquard invents a loom that uses punched wooden cards to automatically
weave fabric designs. Early computers would use similar punch cards.
1822: English mathematician Charles Babbage conceives of a steam-driven calculating machine that would be able to compute tables of numbers. The project, funded by the English government, is a failure. More than a century later, however, the world's first computer was actually built.
1890: Herman Hollerith designs a punch-card system to tabulate the 1890 census, accomplishing the task in just three years and saving the government $5 million. He establishes a company that would ultimately become IBM.
1936: Alan Turing presents the notion of a universal machine, later called the Turing machine, capable of
computing anything that is computable. The central concept of the modern computer was based on his ideas.
1943-1944: Two University of Pennsylvania professors, John Mauchly and J. Presper Eckert, build the Electronic Numerical Integrator and Computer (ENIAC). Considered the grandfather of digital computers, it fills a 20-foot by 40-foot room and has 18,000 vacuum tubes.
1946: Mauchly and Eckert leave the University of Pennsylvania and receive funding from the Census Bureau to build the UNIVAC, the first commercial computer for business and government applications.
1953: Grace Hopper develops the first computer language, which eventually becomes known as COBOL.
Thomas Johnson Watson Jr., son of IBM CEO Thomas Johnson Watson Sr., conceives the IBM 701 EDPM
to help the United Nations keep tabs on Korea during the war.
1954: The FORTRAN programming language, an acronym for FORmula TRANslation, is developed by a
team of programmers at IBM led by John Backus, according to the University of Michigan.
1969: A group of developers at Bell Labs produce UNIX, an operating system that addressed compatibility
issues. Written in the C programming language, UNIX was portable across multiple platforms and became
the operating system of choice among mainframes at large companies and government entities. Due to the
slow nature of the system, it never quite gained traction among home PC users.
1974-1977: A number of personal computers hit the market, including the Scelbi, the Mark-8, the Altair, the IBM 5100, Radio Shack's TRS-80 — affectionately known as the "Trash 80" — and the Commodore PET.
1981: The first IBM personal computer, code-named "Acorn," is introduced. It uses Microsoft's MS-DOS operating system. It has an Intel chip, two floppy disk drives and an optional color monitor. Sears & Roebuck and Computerland sell the machines, marking the first time a computer is available through outside distributors. It also popularizes the term PC.
1983: Apple's Lisa is the first personal computer with a GUI. It also features a drop-down menu and icons. It flops but eventually evolves into the Macintosh. The Gavilan SC is the first portable computer with the familiar flip form factor and the first to be marketed as a "laptop."
1985: Microsoft announces Windows, according to Encyclopedia Britannica. This was the company's response to Apple's GUI. Commodore unveils the Amiga 1000, which features advanced audio and video capabilities.
1985: The first dot-com domain name is registered on March 15, years before the World Wide Web would
mark the formal beginning of Internet history. The Symbolics Computer Company, a small Massachusetts
computer manufacturer, registers Symbolics.com. More than two years later, only 100 dot-coms had been
registered.
1986: Compaq brings the Deskpro 386 to market. Its 32-bit architecture provides speed comparable to mainframes.
1990: Tim Berners-Lee, a researcher at CERN, the high-energy physics laboratory in Geneva, develops
HyperText Markup Language (HTML), giving rise to the World Wide Web.
1993: The Pentium microprocessor advances the use of graphics and music on PCs.
1994: PCs become gaming machines as "Command & Conquer," "Alone in the Dark 2," "Theme Park," "Magic Carpet," "Descent" and "Little Big Adventure" are among the games to hit the market.
1996: Sergey Brin and Larry Page develop the Google search engine at Stanford University.
1997: Microsoft invests $150 million in Apple, which was struggling at the time, ending Apple's court case against Microsoft in which it alleged that Microsoft copied the "look and feel" of its operating system.
1999: The term Wi-Fi becomes part of the computing language and users begin connecting to the Internet
without wires.
2001: Apple unveils the Mac OS X operating system, which provides protected memory architecture and
pre-emptive multi-tasking, among other benefits. Not to be outdone, Microsoft rolls out Windows XP, which
has a significantly redesigned GUI.
2003: The first 64-bit processor, AMD's Athlon 64, becomes available to the consumer market.
2004: Mozilla's Firefox 1.0 challenges Microsoft's Internet Explorer, the dominant Web browser. Facebook,
a social networking site, launches.
2005: YouTube, a video sharing service, is founded. Google acquires Android, a Linux-based mobile phone
operating system.
2006: Apple introduces the MacBook Pro, its first Intel-based, dual-core mobile computer, as well as an
Intel-based iMac. Nintendo‘s Wii game console hits the market.
2010: Apple unveils the iPad, changing the way consumers view media and jumpstarting the dormant tablet
computer segment.
2011: Google releases the Chromebook, a laptop that runs the Google Chrome OS.
2015: Apple releases the Apple Watch. Microsoft releases Windows 10.
2016: The first reprogrammable quantum computer is created. "Until now, there hasn't been any quantum-computing platform that had the capability to program new algorithms into their system. They're usually each tailored to attack a particular algorithm," said study lead author Shantanu Debnath, a quantum physicist and optical engineer at the University of Maryland, College Park.
2017: The Defense Advanced Research Projects Agency (DARPA) is developing a new "Molecular Informatics" program that uses molecules as computers. "Chemistry offers a rich set of properties that we may be able to harness for rapid, scalable information storage and processing," Anne Fischer, program manager in DARPA's Defense Sciences Office, said in a statement. "Millions of molecules exist, and each molecule has a unique three-dimensional atomic structure as well as variables such as shape, size, or even color. This richness provides a vast design space for exploring novel and multi-value ways to encode and process data beyond the 0s and 1s of current logic-based, digital architectures."
Types of Computer
There are three types of computers.
Analog computers.
Digital computers.
Hybrid computers.
Analog Computer
Let us begin with analog computers. These computers were specifically designed to process analog data. For readers who are not familiar with the term, analog data is continuous data that changes constantly and does not have discrete values.
It can also be said that analog computers are used when the user does not need exact values of quantities like temperature, speed, current, and pressure. An intriguing feature of analog computers is that they accept data directly from the measuring device without first converting it into codes and numbers.
This feature allows analog computers to measure continuous changes in a physical quantity. In most cases, the output of these computers is read on a dial or scale. Some examples of analog computers are the mercury thermometer and the speedometer.
There are many advantages of using analog computers. Some of those advantages are as follows.
These computers allow real-time computation and operation at the same time. Further, they continuously represent all data within the range of the analog system.
In some applications, analog computers help perform calculations without using transducers to convert the inputs and outputs to and from a digital electronic form.
Programmers can also scale the problem to the dynamic range of the analog computer. This provides excellent insight into the actual situation, and it also helps in learning about errors and their effects.
Digital Computer
Digital computers were invented to perform calculations and logical operations at very high speed. These computers accept raw data as input in the form of binary digits (0 and 1).
After that, the device processes the data with programs that are already stored in the device's memory in order to generate the output. Some examples of digital computers include laptops, desktops, and other electronic devices like smartphones.
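To make this concrete, here is a minimal Python sketch (the sample values are arbitrary) showing that both numbers and text reduce to binary digits inside a digital computer:

    # Digital computers store every kind of data as binary digits (bits).
    number = 42
    text = "Hi"

    # Integers have a direct binary representation.
    print(bin(number))                    # 0b101010

    # Characters are stored as numeric codes, which are binary underneath.
    for ch in text:
        print(ch, ord(ch), bin(ord(ch)))  # e.g. H 72 0b1001000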
There are many advantages of digital computers. Some of those advantages are mentioned below.
Digital computers allow users to store a large amount of information. The stored information
can be retrieved whenever it is required.
New features can easily be added to the digital systems.
Ability to change the program without making any changes in the hardware of the system.
The cost of the hardware of digital computers is often less because of the advancement in the
Integrated Circuit (IC) technology.
These systems process data digitally at a very high speed.
Digital computers use error-correcting codes, which makes these systems very reliable (a minimal parity-check sketch follows this list).
The output is not affected by humidity, noise, temperature, or other natural properties, which leads to highly reproducible results.
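The following minimal Python sketch illustrates the idea behind such codes with a single parity bit; real systems use far stronger error-correcting codes (for example, Hamming codes), so this is only a toy illustration of detecting a corrupted bit:

    # Append one parity bit so the total number of 1s is even.
    def add_parity(bits):
        return bits + [sum(bits) % 2]

    # An even count of 1s means no single-bit error was detected.
    def is_valid(bits_with_parity):
        return sum(bits_with_parity) % 2 == 0

    data = [1, 0, 1, 1]
    sent = add_parity(data)   # [1, 0, 1, 1, 1]
    print(is_valid(sent))     # True

    sent[2] ^= 1              # flip one bit to simulate corruption
    print(is_valid(sent))     # False, the error is detected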
Hybrid Computers
Hybrid computers are devices that have features of both digital and analog computers. These devices match the speed of analog computers and the memory and accuracy of digital computers.
Hybrid computers can process both discrete and continuous data. These devices work by accepting analog signals and converting those signals into a digital form before processing. This is why these devices are popularly used in specialized applications where both analog and digital data have to be processed.
For example, the processors used in petrol pumps convert fuel flow into values for both quantity and
price. Similar devices are used in hospitals, airplanes, and many scientific applications.
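As a rough illustration of the analog-to-digital step described above, the Python sketch below samples a continuous signal (a sine wave standing in for a sensor reading) and quantizes each sample to one of a small number of discrete levels; the 8-level, 8-sample setup is purely an assumption made for illustration:

    import math

    LEVELS = 8  # a 3-bit converter has 2**3 = 8 discrete levels

    # Map a continuous value in [-1.0, 1.0] to the nearest discrete level.
    def quantize(value):
        step = 2.0 / (LEVELS - 1)
        return round((value + 1.0) / step)

    # Sample the continuous input at 8 points and digitize each sample.
    for n in range(8):
        analog = math.sin(2 * math.pi * n / 8)  # stand-in for a sensor signal
        print(n, round(analog, 3), quantize(analog))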
There are many benefits of using hybrid computers. A few of those benefits are mentioned below.
The computing speed of hybrid computers is very high. This is due to the all-parallel
configuration of the analog subsystem.
These computers help in online data processing.
Hybrid computers can manage and solve large equations in real-time.
The results are produced quickly and efficiently, and the final results are both accurate and useful.
Supercomputer
Supercomputers are among the fastest computers in the world. These computers are costly and are only employed for specialized applications that require a large number of mathematical calculations, or number crunching.
For example, typical supercomputer tasks include animated graphics, scientific simulations, weather forecasting, geological data analysis in industries like petrochemical prospecting, fluid dynamics calculations, nuclear energy research, and electronic design.
It is exciting to note that supercomputers can process trillions of instructions in a single second! This is mainly because these devices have thousands of interconnected processors. The first supercomputer, the Cray-1, was developed in 1976 by Seymour Cray.
Did you know that supercomputers can be used to crack passwords? This is done to test and improve password protection for security reasons. Supercomputers also produce excellent animations and are valuable in the virtual testing of nuclear weapons and in critical medical tests.
Supercomputers are also used for extracting useful information from data storage centers or cloud systems; insurance companies are an excellent example. Supercomputers also play an essential role in managing the online world of currencies like the stock market and Bitcoin.
Mainframe computer
A mainframe can be described as a costly and extensive computer system. A mainframe is usually capable of supporting hundreds or even thousands of users at the same time, concurrently executing many different programs.
Due to these features, mainframe computers are usually used in large organizations that need to process and manage high volumes of data, for example, in the telecom and banking industries.
Mainframe computers usually have a very long life. A mainframe device can run smoothly for up to
50 years after its installation. It can also provide excellent performance with large-scale memory
management.
Mainframe computers can also distribute or share their workload among other processors and input/output terminals. There are fewer chances of errors in these devices, and if any error occurs, it is quickly fixed by the system. These devices protect the stored data and any ongoing exchange of data or information.
From this extensive description, it must be quite evident that mainframe computers have a lot of applications. We have created a list of some of those applications, and that list is given below.
In the field of defense, mainframe computers allow defense departments to share a large
amount of sensitive information with other branches of defense.
In the retail sector, large retail organizations often have a vast customer base. This is why
departments use mainframe computers to execute and handle information related to their
customer management, inventory management, and huge transactions within a short period.
Minicomputer
A minicomputer is a midsize, multi-processing system. Minicomputers are capable of supporting up
to 250 users at the same time. Usually, these devices have two or more processors.
It is common for minicomputers to be employed in institutes and departments for tasks such as accounting, inventory management, and billing. Some experts also describe minicomputers as lying somewhere between a microcomputer and a mainframe, because minicomputers are smaller than a mainframe but larger than a microcomputer.
Minicomputers are light in weight, can easily fit anywhere, and are portable. These devices are less expensive and very fast for their size. Minicomputers can run for long intervals on a charge and can function in environments without controlled operating conditions.
You might also want to learn that minicomputers are primarily used to perform three functions.
These three functions are mentioned below.
Process Control
Minicomputers are mainly used for process control in manufacturing. These devices perform the functions of collecting data and providing feedback. If any abnormality occurs during the process, the minicomputer detects it and makes the necessary adjustments to fix the situation.
Managing Data
Small organizations use minicomputers to collect, store, and share data. For example, local hotels
and hospitals use minicomputers to record their customers and patients, respectively.
Microcomputer
A microcomputer is also known as a personal computer. These devices can be described as general-purpose computers that are ideal for individual use. A microcomputer has a microprocessor as its central processing unit, along with an input unit, storage, memory, and an output unit.
Some examples of microcomputers are desktop computers and laptops. These devices are usually used to prepare assignments, watch movies, or handle business tasks and office work.
Microcomputers are the smallest of all the types of computers. Only one user can use a microcomputer at a time. These computers are less expensive and easier to use.
Users do not require any special training or skills to use these computers. These devices are also
often equipped with a single semiconductor chip. These devices can scan, browse, print, watch
videos, and perform many other tasks.
Characteristics of Computers
1. Speed
Speed means the time a computer system requires to fulfil a task or complete an activity. It is well known that computers need far less time than humans to complete a task. Generally, humans take a second or a minute as their unit of time.
Computer systems, however, operate so fast that their unit of time is a fraction of a second. Today, computers are capable of doing 100 million calculations per second, which is why the industry has adopted Millions of Instructions Per Second (MIPS) as the criterion for classifying computers by speed.
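As a rough illustration of how such a "per second" speed figure is obtained, the Python sketch below times a loop of simple additions; interpreter overhead means the figure vastly understates what the hardware itself can do, so treat it only as a demonstration of the measurement idea:

    import time

    N = 1_000_000
    start = time.perf_counter()
    total = 0
    for i in range(N):
        total += i  # one simple operation per iteration
    elapsed = time.perf_counter() - start

    # Millions of simple operations completed per second.
    print(f"{N / elapsed / 1e6:.1f} million additions per second")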
2. Accuracy
Accuracy means the level of precision with which calculations are made and tasks are performed. One may spend years detecting errors in computer calculations or updating a wrong record. A large part of the mistakes in a Computer Based Information System (CBIS) occur due to bad programming, erroneous data, and deviation from rules. Humans cause these mistakes.
Errors attributable to hardware are generally detected and corrected by the computer system itself. Computers rarely commit errors and perform all types of tasks precisely.
3. Diligence
A computer can perform millions of tasks or calculations with the same consistency and accuracy. It doesn't feel any fatigue or lack of concentration. Its memory also makes it superior to human beings.
4. Versatility
Versatility refers to the capability of a computer to perform different kinds of work with the same accuracy and efficiency.
5. Reliability
Reliability is the quality that allows the user to depend on the computer. Computer systems are well suited to repetitive tasks. They never get tired, bored, or fatigued; hence, they are far more reliable than humans. Still, a computer system can fail due to internal and external reasons.
Any failure of the computer in a highly automated industry is disastrous. Hence, industry in such situations maintains a backup facility to take over tasks without losing much time.
6. Memory
Storage is the ability of the computer to store data in itself for accessing it again in the future. Nowadays, apart from having instantaneous access to data, computers have a huge capacity to store data in a small physical space.
A typical computer system can store and provide online access to millions of characters and thousands of pictures. It is obvious from the above discussion that computer capabilities outperform human capabilities. Therefore, a computer, when used rightly, will increase the effectiveness of an organization tenfold.
7. Adaptability
Adaptability of a computer system means its ability to complete different types of tasks, simple as well as complex. Computers are normally versatile unless designed for a specific operation. Overall, a general-purpose computer can be used in any area of application: business, industry, science, statistics, technology, and so on.
A general-purpose computer, when introduced in a company, can replace the jobs of multiple specialists because it is so versatile.
Limitations of Computers
Limitations are the areas in which humans outperform computer systems.
1. Lack of Common Sense
This is one of the major limitations of computer systems. No matter how efficient, fast, and reliable computer systems might be, they do not have any common sense, because no foolproof algorithm has been designed to program common sense into them. As computers function based on stored programs, they simply lack common sense.
2. Zero IQ
Another limitation of computer systems is that they have zero Intelligence Quotient (IQ). They are unable to see and decide which actions to perform in a particular situation unless that situation has already been programmed into them. Computers must be programmed for each and every task, however small it may be.
3. Lack of Decision-making
Computers can be programmed to take decisions that are purely procedure-oriented. If a computer has not been programmed for a particular decision situation, it will not take a decision, due to its lack of wisdom and evaluation faculties. Human beings, on the other hand, possess this great power of decision-making. Computers are incapable of decision-making because they do not possess the essential elements necessary to take a decision, i.e. knowledge, information, wisdom, intelligence, and the ability to judge.
In any type of research, ideas play a vital role. In this context, computers cannot express their own ideas. Though computers are helpful in storing data and can even hold the contents of encyclopedias, only humans can decide and implement policies.
Basic Computer Organization
1. CPU OPERATION
The program is represented by a series of numbers that are kept in some kind of computer memory.
There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and write
back.
(i) Fetch
The instruction is retrieved from program memory at the address held in the program counter, which is then incremented to point to the next instruction.
(ii) Decode
The instruction is broken up into parts that have significance to other portions of the
CPU.
The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA).
The opcode indicates which operation to perform.
The remaining parts of the number usually provide information required for that
instruction, such as operands for an addition operation.
Such operands may be given as a constant value or as a place to locate a value: a register
or a memory address, as determined by some addressing mode.
(iii) Execute
During this step, various portions of the CPU are connected so they can perform the
desired operation.
If, for instance, an addition operation was requested, an arithmetic logic unit (ALU) will
be connected to a set of inputs and a set of outputs.
The inputs provide the numbers to be added, and the outputs will contain the final sum.
If the addition operation produces a result too large for the CPU to handle, an arithmetic
overflow flag in a flags register may also be set.
(iv) Write back
This step simply "writes back" the results of the execute step to some form of memory.
Very often the results are written to some internal CPU register for quick access by
subsequent instructions.
In other cases results may be written to slower, but cheaper and larger, main memory.
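To tie the four steps together, here is a toy CPU simulator in Python; the two-register machine and its instruction format are invented purely for illustration and are far simpler than any real ISA:

    # Each instruction is a tuple: (opcode, destination register, operand).
    LOAD, ADD, HALT = 0, 1, 2

    program = [
        (LOAD, 0, 5),   # R0 <- 5
        (LOAD, 1, 7),   # R1 <- 7
        (ADD,  0, 1),   # R0 <- R0 + R1
        (HALT, 0, 0),
    ]

    registers = [0, 0]
    pc = 0  # program counter

    while True:
        instruction = program[pc]            # 1. fetch the instruction
        pc += 1
        opcode, dest, operand = instruction  # 2. decode it into its parts

        if opcode == HALT:
            break
        elif opcode == LOAD:                 # 3. execute the operation
            result = operand
        else:                                # ADD
            result = registers[dest] + registers[operand]

        registers[dest] = result             # 4. write back the result

    print(registers)  # [12, 7]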
2. INPUT DEVICES
An input device is a hardware or peripheral device used to send data to a computer. An input device allows
users to communicate and feed instructions and data to computers for processing, display, storage and/or
transmission.
An input device enables the user to send data, information, or control signals to a computer. The Central Processing Unit (CPU) of a computer receives the input and processes it to produce the output.
Keyboard
Mouse
Scanner
Joystick
Light Pen
Digitizer
Microphone
Magnetic Ink Character Recognition (MICR)
Optical Character Reader (OCR)
(i) Keyboard
The keyboard is a basic input device that is used to enter data into a computer or any other electronic device
by pressing keys. It has different sets of keys for letters, numbers, characters, and functions. Keyboards are
connected to a computer through USB or a Bluetooth device for wireless communication.
(ii) Mouse
The mouse is a hand-held input device which is used to move the cursor or pointer across the screen. It is designed to be used on a flat surface and generally has a left and a right button with a scroll wheel between them. Laptop computers come with a touchpad that works as a mouse. It lets you control the movement of the cursor or pointer by moving your finger over the touchpad. Some mice come with integrated features such as extra buttons that can be assigned different functions.
The mouse was invented by Douglas C. Engelbart in 1963. Early mice had a roller ball integrated as a movement sensor underneath the device. Modern mouse devices use optical technology, which controls cursor movement by a visible or invisible light beam.
(iii) Scanner
The scanner takes pictures and pages of text as input. It scans the picture or a document, and the scanned picture or document is then converted into a digital format or file and displayed on the screen as output. Scanners use optical character recognition techniques to convert scanned text into digital text.
(iv) Joystick
A joystick is also a pointing input device like a mouse. It is made up of a stick with a spherical base. The
base is fitted in a socket that allows free movement of the stick. The movement of stick controls the cursor
or pointer on the screen.
The first joystick was invented by C. B. Mirick at the U.S. Naval Research Laboratory. A joystick can be of different types, such as displacement joysticks, finger-operated joysticks, hand-operated joysticks, and isometric joysticks. With a joystick, the cursor keeps moving in the direction the stick is tilted until the stick is upright again, whereas with a mouse the cursor moves only when the mouse moves.
(v) Light Pen
A light pen is a computer input device that looks like a pen. The tip of the light pen contains a light-sensitive
detector that enables the user to point to or select objects on the display screen. Its light sensitive tip detects
the object location and sends the corresponding signals to the CPU. It is not compatible with LCD screens,
so it is not in use today. It also helps you draw on the screen if needed. The first light pen was invented
around 1955 as a part of the Whirlwind project at the Massachusetts Institute of Technology (MIT).
(vi) Digitizer
Digitizer is a computer input device that has a flat surface and usually comes with a stylus. It enables the
user to draw images and graphics using the stylus as we draw on paper with a pencil. The images or
graphics
drawn on the digitizer appear on the computer monitor or display screen. The software converts the touch
inputs into lines and can also convert handwritten text to typewritten words.
It can be used to capture handwritten signatures and data or images from taped papers. Furthermore, it is
also used to receive information in the form of drawings and send output to a CAD (Computer-aided design)
application and software like AutoCAD. Thus, it allows you to convert hand-drawn images into a format
suitable for computer processing.
(vii) Microphone
The microphone is a computer input device that is used to input sound. It receives sound vibrations and converts them into audio signals or sends them to a recording medium. The audio signals are converted into digital data and stored in the computer. The microphone also enables the user to telecommunicate with
others. It is also used to add sound to presentations and with webcams for video conferencing.
(viii) Magnetic Ink Character Recognition (MICR)
The MICR input device is designed to read text printed with magnetic ink. MICR is a character recognition technology that makes use of special magnetized ink which is sensitive to magnetic fields. It is widely used in banks to process cheques, and in other organizations where security is a major concern. It can process three hundred cheques a minute with hundred-percent accuracy. The details on the bottom of a cheque (the MICR number) are written with magnetic ink. A laser printer with MICR toner can be used to print the magnetic ink.
The device reads the details and sends them to a computer for processing. A document printed in magnetic ink has to pass through a machine which magnetizes the ink, and the magnetic information is then translated into characters.
(ix) Optical Character Reader (OCR)
The OCR input device is designed to convert scanned images of handwritten, typed, or printed text into digital text. It is widely used in offices and libraries to convert documents and books into electronic files.
It processes and copies the physical form of a document using a scanner. After copying the document, the OCR software converts it into a two-color (black-and-white) version called a bitmap. The bitmap is then analyzed for light and dark areas: the dark areas are recognized as characters, and the light areas are identified as background. OCR is widely used to convert hard-copy legal or historic documents into PDFs. The converted documents can be edited if required, just like documents created in MS Word.
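As a small sketch of the binarization step described above, the Python snippet below uses the Pillow imaging library to reduce a scanned page to a two-color bitmap; the file name scan.png and the threshold value 128 are assumptions made for illustration:

    from PIL import Image  # the Pillow imaging library

    # Load the scanned page (hypothetical file) and convert it to grayscale.
    page = Image.open("scan.png").convert("L")

    # Threshold: dark pixels become black (characters), light ones white.
    bitmap = page.point(lambda p: 255 if p > 128 else 0)

    bitmap.save("scan_bw.png")  # the two-color bitmap OCR then analyzes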
3. OUTPUT DEVICES
An output device is any device used to send data from a computer to another device or user. Most computer
data output that is meant for humans is in the form of audio or video. Thus, most output devices used by
humans are in these categories. Examples include monitors, projectors, speakers, headphones and printers.
Monitors
Speaker
Printer
(i) Monitors
Monitors, commonly called Visual Display Units (VDUs), are the main output device of a computer. A monitor forms images from tiny dots, called pixels, that are arranged in a rectangular grid. The sharpness of the image depends upon the number of pixels.
(ii) Speaker
Speakers are output devices that allow you to hear sound from your computer. Computer speakers are just
like stereo speakers. There are usually two of them and they come in various sizes.
(iii) Printers
Printers produce hard copies of computer output on paper. They fall into two categories:
Impact Printers
Non-Impact Printers
Purpose of Storage
The fundamental components of a general-purpose computer are the arithmetic and logic unit, control circuitry, storage space, and input/output devices. If storage were removed, the device we had would be a simple calculator instead of a computer. The ability to store the instructions that form a computer program, and the information that the instructions manipulate, is what makes stored-program computers versatile.
Primary Storage
Primary storage is directly connected to the central processing unit of the computer. It must be present for the CPU to function correctly, just as, in a biological analogy, the lungs must be present (for oxygen intake) for the heart to function (to pump and oxygenate the blood). Primary storage typically consists of three kinds of storage:
Processor Registers
Registers are internal to the central processing unit. They contain information that the arithmetic and logic unit needs to carry out the current instruction. They are technically the fastest of all forms of computer storage.
Main memory
It contains the programs that are currently being run and the data the programs are operating on. The arithmetic and logic unit can very quickly transfer information between a processor register and locations in main storage, also known as "memory addresses". In modern computers, electronic solid-state random access memory is used for main storage, and it is directly connected to the CPU via a "memory bus" and a "data bus".
Cache memory
It is a special type of internal memory used by many central processing units to increase their performance
or "throughput". Some of the information in the main memory is duplicated in the cache memory, which is slightly slower but of much greater capacity than the processor registers, and faster but much smaller than main memory.
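The caching principle (keep a copy of frequently used data in a small, fast store so repeated requests skip the slow lookup) can be illustrated in software; the Python sketch below uses functools.lru_cache as an analogy only, since hardware caches work on memory contents rather than function results:

    from functools import lru_cache

    # The decorator keeps recent results in a small, fast "cache".
    @lru_cache(maxsize=64)
    def slow_lookup(address):
        # Stand-in for a slow main-memory (or disk) access.
        return address * 2

    slow_lookup(10)                  # miss: computed and stored in the cache
    slow_lookup(10)                  # hit: served straight from the cache
    print(slow_lookup.cache_info())  # reports hits=1, misses=1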
Memory
Memory is often used as a shorter synonym for Random Access Memory (RAM). This kind of memory is
located on one or more microchips that are physically close to the microprocessor in your computer. Most
desktop and notebook computers sold today include at least 512 megabytes of RAM (which is really the minimum needed to install an operating system). RAM is upgradeable, so you can add more when your computer runs really slowly.
STORAGE DEVICES
The purpose of storage in a computer is to hold data or information and get that data to the CPU as quickly
as possible when it is needed. Computers use disks for storage: hard disks that are located inside the
computer, and floppy or compact disks that are used externally.
Secondary storage is the computer's method of storing data and information on a long-term basis, i.e., the data is retained even after the PC is switched off. Its main characteristics are:
It is non-volatile.
It can easily be removed and moved, and attached to some other device.
Memory capacity can be extended to a great extent.
It is cheaper than primary memory.
The two basic operations performed on a storage device are writing data and reading data.
The floppy disk drive (FDD) was invented at IBM by Alan Shugart in 1967. The first floppy drives used an 8-inch disk (later called a "diskette" as it got smaller), which evolved into the 5.25-inch disk that was used on the first IBM Personal Computer in August 1981. The 5.25-inch disk held 360 kilobytes compared to the 1.44 megabyte capacity of today's 3.5-inch diskette.
The 5.25-inch disks were dubbed "floppy" because the diskette packaging was a very flexible plastic envelope, unlike the rigid case used to hold today's 3.5-inch diskettes.
Your computer uses two types of memory: primary memory which is stored on chips located on the
motherboard, and secondary memory that is stored in the hard drive. Primary memory holds all of the
essential memory that tells your computer how to be a computer. Secondary memory holds the information
that you store in the computer.
Generations of Computers
A computer is an electronic device that manipulates information or data. It has the ability to store, retrieve,
and process data.
Nowadays, a computer can be used to type documents, send email, play games, and browse the Web. It can also be used to edit or create spreadsheets, presentations, and even videos. But the evolution of this complex system started around 1946 with the first generation of computers and has continued ever since, shaping in particular the generation that has grown up amid the global desktop and laptop revolution since the 1980s.
The history of the computer goes back several decades, however, and there are five definable generations of computers.
Each generation is defined by a significant technological development that changes fundamentally how
computers operate – leading to more compact, less expensive, but more powerful, efficient and robust
machines.
First Generation: Vacuum Tubes (1940-1956)
These early computers used vacuum tubes as circuitry and magnetic drums for memory. As a result they were enormous, literally taking up entire rooms and costing a fortune to run. Vacuum tubes were inefficient components that consumed huge amounts of electricity and generated a lot of heat, which caused ongoing breakdowns.
These first generation computers relied on 'machine language' (the most basic programming language, which can be understood directly by computers). These computers were limited to solving one problem at a time. Input was based on punched cards and paper tape, and output came out on printouts. The two notable machines of this era were the UNIVAC and ENIAC machines; the UNIVAC was the first ever commercial computer, and was purchased in 1951 by the US Census Bureau.
Advantages:
They were capable of performing arithmetic and logical operations.
They used electronic valves in place of the key punch machines or the unit record machines.
Disadvantages:
They were too big in size and very slow, with a low level of accuracy and reliability.
They consumed a lot of electricity, generated a lot of heat, and broke down frequently.
Second Generation: Transistors (1956-1963)
The replacement of vacuum tubes by transistors saw the advent of the second generation of computing. Although first invented in 1947, transistors weren't used significantly in computers until the end of the 1950s. Despite still subjecting computers to damaging levels of heat, they were hugely superior to the vacuum tubes, making computers smaller, faster, cheaper, and less heavy on electricity use. They still relied on punched cards for input and printouts for output.
The language evolved from cryptic binary language to symbolic ('assembly') languages, which meant programmers could create instructions in words. At about the same time, high-level programming languages were being developed (early versions of COBOL and FORTRAN). Transistor-driven machines were the first computers to store instructions in their memories, moving from magnetic drum to magnetic core technology. The early versions of these machines were developed for the atomic energy industry.
Advantages:
They required very little space and were very fast, reliable, and dependable.
They used less power, dissipated less heat, and had a large storage capacity.
They used better peripheral devices, like card readers and printers.
Disadvantages:
They did not have any operating system and used assembly languages.
They lacked intelligence and decision-making ability and needed constant upkeep and maintenance.
Third Generation: Integrated Circuits (1964-1971)
By this phase, transistors were being miniaturised and put on silicon chips (called semiconductors). This led to a massive increase in the speed and efficiency of these machines. These were the first computers where users interacted using keyboards and monitors which interfaced with an operating system, a significant leap from punched cards and printouts. This enabled these machines to run several applications at once, using a central program which monitored memory.
As a result of these advances, which again made machines cheaper and smaller, a new mass market of users emerged during the 1960s.
Advantages:
They were very small in size and comparatively less costly, being built with thousands of transistors, which were very cheap.
They used faster, better devices for storage, called auxiliary, backing, or secondary storage.
They used an operating system for better resource management and used the concepts of time sharing and multiprogramming.
Disadvantages:
They created a lot of problems for their manufacturers in the initial stages.
They lacked thinking power and decision-making capability.
They could not provide any insight into their internal workings.
Fourth Generation: Microprocessors (1971-Present)
This revolution can be summed up in one word: Intel. The chip-maker developed the Intel 4004 chip in 1971, which positioned all computer components (CPU, memory, input/output controls) onto a single chip. What filled a room in the 1940s now fit in the palm of the hand. The Intel chip housed thousands of integrated circuits. The year 1981 saw the first ever computer (IBM) specifically designed for home use, and 1984 saw the Macintosh introduced by Apple. Microprocessors even moved beyond the realm of computers and into an increasing number of everyday products.
The increased power of these small computers meant they could be linked, creating networks, which ultimately led to the development, birth, and rapid evolution of the Internet. Other major advances during this period have been the graphical user interface (GUI), the mouse, and, more recently, the astounding advances in laptop capability and hand-held devices.
Advantages:
They were very small in size, and their cost of operation was very low.
They were very compact, fast, and reliable, as they used very-large-scale integrated circuits.
They were capable of facilitating interactive online remote programming, by which a user sitting at a distant place could get his programs executed by a centrally located computer.
Disadvantages:
They are less powerful and slower than mainframe computers.
Fifth Generation: Artificial Intelligence (Present and Beyond)
Computer devices with artificial intelligence are still in development, but some of these technologies are beginning to emerge and be used, such as voice recognition.
AI is a reality made possible by using parallel processing and superconductors. Looking to the future, computers will be radically transformed again by quantum computation, molecular computing, and nanotechnology. The essence of the fifth generation will be using these technologies to ultimately create machines which can process and respond to natural language, and which have the capability to learn and organise themselves.
Advantages:
They are oriented towards integrated database development to provide decision models.
They are faster, very cheap, and have the highest possible storage capacity.
They have thinking power and decision-making capability, and thereby will be able to aid executives in management.
Disadvantages:
They need very low-level languages; they may replace the human workforce and cause grievous unemployment problems.