Sumitcomputer

Uploaded by Sumit Kumar
© Attribution Non-Commercial (BY-NC)

Computer

A computer is a programmable machine designed to read and execute
sequentially a list of instructions that make it perform arithmetical and logical
operations on binary numbers. Conventionally a computer consists of some form
of short- or long-term memory for data storage and a central processing unit, which
functions as a control unit and contains the arithmetic logic unit. Peripherals (for
example a keyboard, mouse or graphics card) can be connected to allow the
computer to receive outside input and display output.
A computer's processing unit executes a series of instructions that make it read,
manipulate and then store data. Test and jump instructions allow the program to
move within the program space and therefore to execute different instructions as a
function of the current state of the machine or its environment.
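The test-and-jump mechanism can be illustrated with a toy machine. The three-instruction set below (ADD, DEC, JNZ) is invented for this sketch and does not correspond to any real CPU:

```python
def run(program, reg=None):
    """Execute instructions sequentially; JNZ jumps based on machine state."""
    reg = dict(reg or {})
    pc = 0                          # program counter: position in the program space
    while program[pc][0] != "HALT":
        op, *args = program[pc]
        if op == "ADD":             # reg[a] += reg[b]
            a, b = args
            reg[a] += reg[b]
        elif op == "DEC":           # reg[a] -= 1
            reg[args[0]] -= 1
        elif op == "JNZ":           # the "test and jump": branch if reg[a] != 0
            a, target = args
            if reg[a] != 0:
                pc = target
                continue
        pc += 1
    return reg

# Multiply 6 * 4 by repeated addition, looping while the counter is nonzero.
prog = [
    ("ADD", "acc", "x"),   # 0: acc += x
    ("DEC", "n"),          # 1: n -= 1
    ("JNZ", "n", 0),       # 2: if n != 0, jump back to instruction 0
    ("HALT",),             # 3
]
result = run(prog, {"acc": 0, "x": 6, "n": 4})
print(result["acc"])  # 24
```

The jump at instruction 2 is what turns a straight-line instruction list into a loop whose behaviour depends on the machine's state.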
History of computing

 First Generation (1951-1958)

John W. Mauchly and J. Presper Eckert

(1951) The first generation of computers started with the UNIVAC I (Universal
Automatic Computer) built by Mauchly and Eckert. It was sold to the U.S. Census
Bureau. This machine was dedicated to business data processing and not military or
scientific purposes.

Characteristics of First Generation Computers

Use of vacuum tubes in electronic circuits: These tubes controlled internal operations
and were huge. As a consequence the machines were large.

Magnetic drum as primary internal-storage medium: Data was recorded as
magnetized spots on the surface of a rotating metal drum, read and written by
fixed heads as the drum spun.

Limited main-storage capacity.

Slow input/output, punched-card-oriented: Operators performed input and output
operations through the use of punched cards.
Low-level symbolic-language programming: The computer used machine language,
which was cumbersome: instructions had to be written as long strings of zeroes
and ones. In 1952, Dr. Grace Hopper developed a symbolic language based on
mnemonics (instructions written with symbolic codes). Rather than writing
instructions as zeroes and ones, programmers wrote mnemonics that were
translated into binary code. Dr. Hopper developed the first set of programs, or
instructions, to tell computers how to translate the mnemonics.

Heat and maintenance problems: The tubes gave off tremendous amounts of heat,
so special air-conditioning and maintenance were required for the machines.

Applications: payroll processing and record keeping, though machines were still
oriented more toward scientific applications than business data processing.

Examples: IBM 650, UNIVAC I
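The mnemonic-to-binary translation described above under symbolic-language programming can be sketched as a minimal assembler. The mnemonic names and opcode values below are hypothetical, chosen only for illustration:

```python
# Minimal assembler sketch: translate symbolic mnemonics into numeric machine code.
# The mnemonics and opcode values are invented for this example.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

def assemble(lines):
    """Turn 'MNEMONIC operand' lines into (opcode, operand) number pairs."""
    code = []
    for line in lines:
        mnemonic, *rest = line.split()
        operand = int(rest[0]) if rest else 0
        code.append((OPCODES[mnemonic], operand))
    return code

machine_code = assemble(["LOAD 7", "ADD 8", "STORE 9", "HALT"])
print(machine_code)  # [(1, 7), (2, 8), (3, 9), (15, 0)]
```

The programmer writes the symbolic form; a translator program of this kind produces the strings of numbers the machine actually executes.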

 Second Generation Computers (1959-1964)

Characteristics of Second Generation Computers

Use of transistors for internal operations: Tiny solid-state transistors replaced vacuum
tubes in computers. The heat problem was thus minimized and computers could be
made smaller and faster.

Magnetic core as primary internal-storage medium: Electric currents pass through
wires which magnetize the core to represent on and off states. Data in the cores can
be found and retrieved for processing in a few millionths of a second.

Increased main-storage capacity: The internal or main storage was supplemented by
use of magnetic tapes for external storage. These tapes substituted for punched cards
or paper. Magnetic disks were also developed that stored information on circular
tracks resembling phonograph records. The disks provided direct or random
access to records in a file.

Faster input/output; tape orientation: Devices could be connected directly to the
computer and were considered "on-line". This allowed for faster printing and faster
detection and correction of errors.

High-level programming languages (COBOL, FORTRAN): These languages
resembled English. FORTRAN (FORmula TRANslator) was the first high-level
language to be widely accepted. This language was used mostly for scientific
applications. COBOL (COmmon Business-Oriented Language) was developed in
1959-60 for business data processing. Its main features include file processing,
editing, and input/output capabilities.

Increased speed and reliability: Modular hardware was developed through the
design of electronic circuits. Complete modules, called "breadboards", could be
replaced if malfunctions occurred or the machine "crashed". This decreased lost
time, and new modules could also be added for features such as file processing,
editing, and input/output.

Batch-oriented applications (billing, payroll processing, updating inventory files):
Batch processing allowed data to be collected over a period of time and then
processed in one computer run. The results were then stored on magnetic tapes.

Examples: IBM 1401 (the most popular business-oriented computer), Honeywell 200,
CDC 1604

 Third Generation Computers (1965-1970)

Characteristics of Third Generation Computers:

Use of integrated circuits: Integrated circuits (ICs) replaced the transistors
of the second-generation machines. The circuits are etched and printed, and hundreds
of electronic components could be put on silicon chips less than one-eighth of
an inch square.

Magnetic core and solid-state main storage: Greater storage capacity was
developed.

More flexibility with input/output; disk-oriented.

Smaller size and better performance and reliability: Advances in solid-state
technology allowed for the design and building of smaller and faster computers.
Breadboards could easily be replaced on the fly.

Extensive use of high-level programming languages: The software industry evolved
during this time. Many users found it more cost-effective to buy pre-programmed
packages than to write the programs themselves. Programs from the second
generation had to be rewritten, since many were based on second-generation
architecture.
Emergence of minicomputers: The minicomputers offered many of the same
features as the mainframe computers, only on a smaller scale. These machines filled
the needs of the small business owner.

Remote processing and time-sharing through communication: Computers could then
perform several operations at the same time. Remote terminals were developed to
communicate with a central computer from distant geographic locations. Time-
sharing environments were established.

Availability of operating systems (software) to control I/O and do tasks handled by
human operators: Software was developed to take care of routine tasks required of
the computer, freeing up the human operator.

Applications such as airline reservation systems, market forecasting, and credit card
billing: The applications also included inventory control and the scheduling of labor
and materials. Multitasking was also accomplished; both scientific and business
applications could be run on the same machine.

Examples: IBM System/360 NCR 395 Burroughs B6500

 Fourth Generation (1970-)

Characteristics of Fourth Generation Computers:

Use of large scale integrated circuits

Increased storage capacity and speed.

Modular design and compatibility between equipment

Special application programs

Versatility of input/ output devices

Increased use of minicomputers

Introduction of microprocessors and microcomputers

Applications: mathematical modeling and simulation, electronic funds transfer,


computer-aided instruction and home computers. Internet Explosion.
 Stored-program architecture
Several developers of ENIAC, recognizing its flaws, came up with a far more
flexible and elegant design, which came to be known as the "stored program
architecture" or von Neumann architecture. This design was first formally
described by John von Neumann in the paper First Draft of a Report on the
EDVAC, distributed in 1945. A number of projects to develop computers based on
the stored-program architecture commenced around this time, the first of these
being completed in Great Britain. The first working prototype to be demonstrated
was the Manchester Small-Scale Experimental Machine (SSEM or "Baby") in
1948. The Electronic Delay Storage Automatic Calculator (EDSAC), completed a
year after the SSEM at Cambridge University, was the first practical, non-
experimental implementation of the stored program design and was put to use
immediately for research work at the university. Shortly thereafter, the machine
originally described by von Neumann's paper—EDVAC—was completed but did
not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program
architecture, making it the single trait by which the word "computer" is now
defined. While the technologies used in computers have changed dramatically
since the first electronic, general-purpose computers of the 1940s, most still use the
von Neumann architecture.
Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay
Brusentsov conducted research on ternary computers, devices that operated on a
base three numbering system of −1, 0, and 1 rather than the conventional binary
numbering system upon which most computers are based. They designed
the Setun, a functional ternary computer, at Moscow State University. The device
was put into limited production in the Soviet Union, but supplanted by the more
common binary architecture.
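The balanced-ternary digit system the Setun used can be sketched in a few lines. This conversion routine illustrates only the encoding with digits −1, 0, 1, not the Setun's actual circuitry:

```python
def to_balanced_ternary(n):
    """Represent an integer with digits -1, 0, 1 (most significant first),
    the base-three system used by ternary machines like the Setun."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:              # a digit of 2 becomes -1 with a carry upward
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits[::-1]         # most significant digit first

print(to_balanced_ternary(5))   # [1, -1, -1], i.e. 9 - 3 - 1 = 5
```

Every integer, positive or negative, has exactly one such representation, which is one reason balanced ternary was attractive: no separate sign bit is needed.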
 Semiconductors and microprocessors
Computers using vacuum tubes as their electronic elements were in use throughout
the 1950s, but by the 1960s had been largely replaced by transistor-based machines,
which were smaller, faster, cheaper to produce, required less power, and were
more reliable. The first transistorised computer was demonstrated at the University
of Manchester in 1953. In the 1970s, integrated circuit technology and the
subsequent creation of microprocessors, such as the Intel 4004, further decreased
size and cost and further increased speed and reliability of computers. By the late
1970s, many products such as video recorders contained dedicated computers
called microcontrollers, and they started to appear as a replacement to mechanical
controls in domestic appliances such as washing machines. The 1980s
witnessed home computers and the now ubiquitous personal computer. With the
evolution of the Internet, personal computers are becoming as common as
the television and the telephone in the household.
Modern smartphones are fully programmable computers in their own right, and as
of 2009 may well be the most common form of such computers in existence.
 Further topics
Artificial intelligence

A computer solves problems in exactly the way it is programmed to, without regard
to efficiency, alternative solutions, possible shortcuts, or possible errors in the code.
Computer programs that learn and adapt are part of the emerging fields of artificial
intelligence and machine learning.
Hardware
The term hardware covers all of those parts of a computer that are tangible
objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are
all hardware.

History of computing hardware

First Generation (Mechanical/Electromechanical)
    Calculators: Antikythera mechanism, Difference engine, Norden bombsight
    Programmable devices: Jacquard loom, Analytical engine, Harvard Mark I, Z3

Second Generation (Vacuum Tubes)
    Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
    Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental
    Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury,
    CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third Generation (Discrete transistors and SSI, MSI, LSI integrated circuits)
    Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
    Minicomputers: PDP-8, PDP-11, IBM System/32, IBM System/36

Fourth Generation (VLSI integrated circuits)
    Minicomputers: VAX, IBM System i
    4-bit microcomputers: Intel 4004, Intel 4040
    8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809,
    MOS Technology 6502, Zilog Z80
    16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
    32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM architecture
    64-bit microcomputers: Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64
    Embedded computers: Intel 8048, Intel 8051
    Personal computers: Desktop computer, Home computer, Laptop computer, Personal
    digital assistant (PDA), Portable computer, Tablet PC, Wearable computer
    Theoretical/experimental: Quantum computer, Chemical computer, DNA computing,
    Optical computer, Spintronics-based computer

Other Hardware Topics

Peripheral devices (input/output)
    Input: Mouse, Keyboard, Joystick, Image scanner, Webcam, Graphics tablet, Microphone
    Output: Monitor, Printer, Loudspeaker
    Both: Floppy disk drive, Hard disk drive, Optical disc drive, Teleprinter

Computer buses
    Short range: RS-232, SCSI, PCI, USB
    Long range (computer networking): Ethernet, ATM, FDDI

Software
Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs,
data, protocols, etc. When software is stored in hardware that cannot easily be modified (such
as BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it
falls into an uncertain area somewhere between hardware and software.

Computer software

Operating system
    Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD
    operating systems
    GNU/Linux: List of Linux distributions, Comparison of Linux distributions
    Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP,
    Windows Vista, Windows 7
    DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, DR-DOS, FreeDOS
    Mac OS: Mac OS classic, Mac OS X
    Embedded and real-time: List of embedded operating systems
    Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library
    Multimedia: DirectX, OpenGL, OpenAL
    Programming library: C standard library, Standard Template Library

Data
    Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
    File format: HTML, XML, JPEG, MPEG, PNG

User interface
    Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE,
    GEM, Aqua
    Text-based user interface: Command-line interface, Text user interface

Application
    Office suite: Word processing, Desktop publishing, Presentation program, Database
    management system, Scheduling & Time management, Spreadsheet, Accounting software
    Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
    Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant
    management, Robotic manufacturing, Supply chain management
    Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor,
    3D computer graphics, Video editing, Image processing
    Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
    Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated
    development environment, Software performance analysis, Revision control, Software
    configuration management
    Educational: Edutainment, Educational game, Serious game, Flight simulator
    Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively
    multiplayer, Interactive fiction
    Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package
    management systems, File manager
Programming languages
Main article: Programming language

Programming languages provide various ways of specifying programs for computers to run.
Unlike natural languages, programming languages are designed to permit no ambiguity and to be
concise. They are purely written languages and are often difficult to read aloud. They are
generally either translated into machine code by a compiler or an assembler before being run, or
translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid
method of the two techniques. There are thousands of different programming languages—some
intended to be general purpose, others useful only for highly specialized applications.
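The translate-then-run pipeline described above can be sketched with a toy compiler and interpreter for postfix arithmetic. The instruction names are invented for this illustration and do not correspond to any real compiler's output:

```python
# Toy pipeline: "compile" a postfix expression into an instruction list once,
# then "interpret" that instruction list on a simple stack machine.
def compile_expr(src):
    """Translate postfix source like '2 3 + 4 *' into stack-machine instructions."""
    ops = {"+": "ADD", "-": "SUB", "*": "MUL"}
    return [("PUSH", int(tok)) if tok not in ops else (ops[tok], None)
            for tok in src.split()]

def run(code):
    """Execute the compiled instruction list."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b, "MUL": a * b}[op])
    return stack.pop()

code = compile_expr("2 3 + 4 *")   # translated once...
print(run(code))                   # ...executed later: prints 20
```

A compiler does the translation ahead of time and saves the result; an interpreter performs translation and execution together at run time. Real systems often mix the two, as the text notes.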

Programming languages

    Lists of programming languages: Timeline of programming languages, List of programming
    languages by category, Generational list of programming languages, List of programming
    languages, Non-English-based programming languages
    Commonly used assembly languages: ARM, MIPS, x86
    Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL,
    Fortran, Java, Lisp, Pascal, Object Pascal
    Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations


As the use of computers has spread throughout society, an increasing number of careers
involve computers.

Computer-related professions

    Hardware-related: Electrical engineering, Electronic engineering, Computer engineering,
    Telecommunications engineering, Optical engineering, Nanoengineering
    Software-related: Computer science, Desktop publishing, Human–computer interaction,
    Information technology, Information systems, Computational science, Software
    engineering, Video game industry, Web design
The need for computers to work well together and to be able to exchange
information has spawned the need for many standards organizations, clubs and
societies of both a formal and informal nature.

MICROCOMPUTERS:-
A microcomputer is a computer with a microprocessor as its central processing unit. They are
physically small compared to mainframe and minicomputers. Many microcomputers (when
equipped with a keyboard and screen for input and output) are also personal computers (in the
generic sense).[2][3]

The abbreviation "micro" was common during the 1970s and 1980s,[4] but has now fallen out of
common usage.
Origins

The term "microcomputer" came into popular use after the introduction of the minicomputer,
although Isaac Asimov used the term microcomputer in his short story "The Dying Night" as
early as 1956 (published in "The Magazine of Fantasy and Science Fiction" in July that year).
Most notably, the microcomputer replaced the many separate components that made up the
minicomputer's CPU with one integrated microprocessor chip. The earliest models such as
the Altair 8800 were often sold as kits to be assembled by the user, and came with as little as
256 bytes of RAM, and no input/output devices other than indicator lights and switches, useful
as a proof of concept to demonstrate what such a simple device could do. However, as
microprocessors and semiconductor memory became less expensive, microcomputers in turn
grew cheaper and easier to use:

 Increasingly inexpensive logic chips such as the 7400 series allowed cheap dedicated
circuitry for improved user interfaces such as keyboard input, instead of simply a row of
switches to toggle bits one at a time.
 Use of audio cassettes for inexpensive data storage replaced manual re-entry of a
program every time the device was powered on.
 Large cheap arrays of silicon logic gates in the form of read-only memory
and EPROMs allowed utility programs and self-booting kernels to be stored within
microcomputers. These stored programs could automatically load further, more complex
software from external storage devices without user intervention, to form an inexpensive
turnkey system that does not require a computer expert to understand or to use the device.
 Random access memory became cheap enough to afford dedicating approximately 1-2
kilobytes of memory to a video display controller frame buffer, for a 40x25 or 80x25 text
display or blocky color graphics on a common household television. This replaced the slow,
complex, and expensive teletypewriter that was previously common as an interface to
minicomputers and mainframes.
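The 1-2 kilobyte figure in the last point follows directly from the display geometry, since a character-cell display needs roughly one byte per cell:

```python
# Frame buffer size for character-cell text displays: one byte per character cell.
def text_buffer_bytes(cols, rows):
    """One byte holds one character code, so the buffer is simply cols * rows."""
    return cols * rows

for cols, rows in [(40, 25), (80, 25)]:
    n = text_buffer_bytes(cols, rows)
    print(f"{cols}x{rows}: {n} bytes (about {n / 1024:.1f} KB)")
# 40x25 needs 1000 bytes and 80x25 needs 2000 bytes - hence the 1-2 KB figure.
```

Color attributes or bitmapped graphics raise the requirement, which is why early machines with "blocky color graphics" still stayed within a few kilobytes.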
History

Although they contained no microprocessors and were instead built around transistor-transistor
logic (TTL), Hewlett-Packard calculators as far back as 1968 had various levels of
programmability such that they could be called microcomputers. The HP 9100B (1968) had
rudimentary conditional (if) statements, statement line numbers, jump statements (go to),
registers that could be used as variables, and primitive subroutines. The programming language
resembled Assembly language in many ways. Later models incrementally added more features,
including the BASIC programming language (HP 9830A in 1971). Some models had tape
storage and small printers. However, displays were limited to one line at a time.[1] The HP
9100A was referred to as a personal computer in an advertisement in a
1968 Science magazine,[5] but that advertisement was quickly dropped.[6] It is suspected that
HP was reluctant to call them "computers" because doing so would have complicated
government procurement and export procedures.

The Datapoint 2200, made by CTC in 1970, is perhaps the best candidate for the title of "first
microcomputer". While it contained no microprocessor, its CPU was built from custom TTL,
and its instruction set was the basis for the Intel 8008, so for practical purposes the system
behaves approximately as if it contains an 8008. This is because Intel was the contractor in
charge of developing the Datapoint's CPU, but ultimately CTC rejected the 8008 design
because it needed 20 support chips.[7]

Another early system, the Kenbak-1, was released in 1971. Like the Datapoint 2200, it used
discrete transistor–transistor logic instead of a microprocessor, but functioned like a
microcomputer in most ways. It was marketed as an educational and hobbyist tool, but was not a
commercial success; production ceased shortly after introduction.[2]

In 1972 a Sacramento State University team led by Bill Pentz built the Sac State 8008
computer,[8] able to handle thousands of patients' medical records. The Sac State 8008 was
designed around the Intel 8008 8-bit microprocessor. It had a full set of hardware and software
components: a disk operating system included in a series of programmable read-only memory
chips (PROMs); 8 kilobytes of RAM; IBM's Basic Assembly Language (BAL); a hard drive; a
color display; printer output; a 150 bps serial interface for connecting to a mainframe; and even
the world's first microcomputer front panel.[9]

Another system of note is the Micral-N, introduced in 1973 by a French company and powered
by the 8008; it was the first microcomputer sold completely assembled and not as a construction
kit.

Virtually all early microcomputers were essentially boxes with lights and switches; one had to
read and understand binary numbers and machine language to program and use them (the
Datapoint 2200 was a striking exception, bearing a modern design based on a monitor, keyboard,
and tape and disk drives). Of the early "box of switches"-type microcomputers, the MITS Altair
8800 (1975) was arguably the most famous. Most of these simple, early microcomputers were
sold as electronic kits: bags full of loose components which the buyer had to solder together
before the system could be used.

The period from about 1971 to 1976 is sometimes called the first generation of microcomputers.
These machines were for engineering development and hobbyist personal use. In 1975,
the Processor Technology SOL-20 was designed, which consisted of one board which included
all the parts of the computer system. The SOL-20 had built-in EPROM software which
eliminated the need for rows of switches and lights. The MITS Altair just mentioned played an
instrumental role in sparking significant hobbyist interest, which itself eventually led to the
founding and success of many well-known personal computer hardware and software companies,
such as Microsoft and Apple Computer. Although the Altair itself was only a mild commercial
success, it helped spark a huge industry.

1977 saw the introduction of the second generation, known as home computers. These were
considerably easier to use than their predecessors, whose operation often demanded thorough
familiarity with practical electronics. The ability to connect to a monitor (screen) or TV set
allowed visual manipulation of text and numbers. The BASIC language, which was easier to
learn and use than raw machine language, became a standard feature. These features were
already common in minicomputers, with which many hobbyists and early producers were
familiar.

1979 saw the launch of the VisiCalc spreadsheet (initially for the Apple II) that first turned the
microcomputer from a hobby for computer enthusiasts into a business tool. After the 1981
release by IBM of their IBM PC, the term personal computer became generally used for
microcomputers compatible with the IBM PC architecture (PC compatible).
List of early microcomputers
This is a list of early microcomputers encompassing the microprocessor-based development
system/hobbyist microcomputers being made and sold as "DIY" kits or pre-built machines in
relatively small numbers in the mid-1970s, before the advent of the later, simpler to operate,
significantly hotter-selling home computers (listed in List of home computers). Most early
micros came without keyboards or displays, which had to be provided by the user, usually at
great expense. RAM was typically 4–16 KB.
Early microcomputers

 Micral (1973) was the earliest commercial, non-kit personal computer based on the Intel
8008 microprocessor.[2]
 SCELBI (company formed 1973, kit advertised 1974) was the earliest commercial kit
personal computer based on the Intel 8008 microprocessor.[3]
 Mark-8, 8008-based kit (1974)[4]
 MITS Altair 8800, introduced 1975, Intel 8080, introduced S-100 bus
 IMSAI 8080, Intel 8080
 MOS Technology KIM-1, introduced 1975, MOS Technology 6502
 Apple I, introduced 1976 MOS Technology 6502
 Rockwell AIM-65, MOS Technology 6502
 ECD Micromind, introduced 1977 MOS Technology 6512 (6502 w/ external clock)
 Cromemco Z-1, introduced 1976
 Motorola MEK6800D2, introduced 1976, with the Motorola 6800 microprocessor
 COSMAC ELF, introduced 1976, RCA 1802
 Intel SDK-85 based on the Intel 8085 (1977)
 Nascom, Nascom 1 introduced 1977, Nascom 2 followed in 1979 based on the Zilog Z80
 Netronics ELF II, RCA 1802
 Newbear 77-68, introduced 1977, Motorola 6800
 Quest SuperELF, RCA 1802
 Tesla PMI-80
 Electronics Australia Educ-8
 Electronics Australia 77up2 aka "Baby 2650"
 Elektor TV Games Computer, with the Signetics 2650 microprocessor
 Sinclair's MK14, a SC/MP based system
 The System 68, from a design published in Electronics Today International.
 The PSI comp 80 by Powertran from a design in the magazine Wireless World

Universally accepted microcomputer hardware

 The workings of universally accepted microcomputer hardware are elaborated in the
following section.

      In the field of information technology, hardware for a microcomputer system consists of a
variety of different devices. This physical equipment falls into three basic categories: system
unit, input/output, and secondary storage. Let us understand these step by step.

      The system unit, also known as the system cabinet or chassis, is a container that houses
most of the electronic components that make up a computer system. Two important components
of the system unit are the microprocessor and memory. The microprocessor controls and
manipulates data to produce information; many times it is contained within a protective
cartridge. Memory, also known as primary storage or random access memory (RAM), holds
data and program instructions for processing the data. It also holds the processed information
before it is output. Memory is sometimes referred to as temporary storage because its contents
will typically be lost if the electrical power to the computer is disrupted.

      Input devices translate data and programs that humans can understand into a form that the
computer can process. The most common input devices are the keyboard and the mouse.
Output devices translate the processed information from the computer into a form that humans
can understand. The most common output devices are monitors (video display screens) and
printers.

      Unlike memory, secondary storage devices hold data and programs even after electrical
power to the computer system has been turned off. The most important kinds of secondary
media are floppy, hard, and optical disks. Floppy disks are widely used to store and transport
data from one computer to another; they are called floppy because data is stored on a very thin,
flexible plastic disk. Hard disks are typically used to store programs and very large data files.
Using rigid metallic platters, hard disks have a much greater capacity and can access
information much faster than floppy disks. Optical disks use laser technology and have the
greatest capacity. The two basic types of optical disks are compact discs (CDs) and digital
versatile discs (DVDs), which are now commonly used.
MICROPROCESSOR:-

A microprocessor incorporates most or all of the functions of a computer's central processing
unit (CPU) on a single integrated circuit (IC, or microchip).

The first microprocessors emerged in the early 1970s and were used for electronic calculators,
using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and
8-bit microprocessors, such as terminals, printers, various kinds of automation etc., followed
soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-
purpose microcomputers from the mid-1970s on.
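Single-digit BCD addition on 4-bit words can be sketched as follows. The decimal-adjust step (adding 6 to skip the unused codes 10-15) is the standard trick in BCD arithmetic, though the function itself is only an illustration, not any particular chip's logic:

```python
def bcd_add_digit(a, b, carry_in=0):
    """Add two BCD digits (0-9) held in 4-bit words.
    A 4-bit word has 16 codes but BCD uses only 0-9, so sums above 9
    are corrected by adding 6, which pushes the carry into the next nibble."""
    s = a + b + carry_in
    if s > 9:
        s += 6                         # decimal adjust: skip codes 10-15
        return (s >> 4) & 1, s & 0xF   # (carry_out, result digit)
    return 0, s

print(bcd_add_digit(7, 5))  # (1, 2): 7 + 5 = 12, i.e. carry 1, digit 2
```

Chaining such digit adders, carry to carry, is how a 4-bit calculator chip works through a multi-digit decimal sum.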

During the 1960s, computer processors were often constructed out of small and medium-scale
ICs containing from tens to a few hundred transistors. The integration of a whole CPU onto a
single chip greatly reduced the cost of processing power. From these humble beginnings,
continued increases in microprocessor capacity have rendered other forms of computers almost
completely obsolete (see history of computing hardware), with one or more microprocessors
used in everything from the smallest embedded systems and handheld devices to the
largest mainframes and supercomputers.

Since the early 1970s, the increase in capacity of microprocessors has been a consequence
of Moore's law, which suggests that the number of transistors that can be fitted onto a chip
doubles every two years. Although originally calculated as a doubling every year,[3] Moore later
refined the period to two years. It is often incorrectly quoted as a doubling of transistors every 18
months.
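The doubling rule lends itself to a quick back-of-the-envelope calculation. The sketch below is my own illustration, not part of the source; the 4004's transistor count of roughly 2,300 is a commonly cited figure:

```python
# Projecting transistor counts under Moore's law with a
# two-year doubling period (an idealized model).

def moores_law(initial_count, years, doubling_period=2):
    """Projected transistor count after `years` years."""
    return initial_count * 2 ** (years / doubling_period)

# The Intel 4004 (1971) had roughly 2,300 transistors.
# Twenty years is ten doublings, i.e. a 1024x increase:
projected = moores_law(2300, 20)
print(f"{projected:,.0f}")  # 2,355,200
```

Real processor scaling tracked this only approximately; the function models the idealized two-year doubling the paragraph describes.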
About Microprocessor Chips (MPU)
Microprocessor chips (MPU) are silicon devices that serve as the central processing unit (CPU)
in computers. They contain thousands of electronic components and use a collection of machine
instructions to perform mathematical operations and move data from one memory location to
another. Microprocessors contain an address bus that sends addresses to memory, read and write
lines, and a data bus that can send data to memory or receive data from memory. They also
include a clock line that enables a clock pulse to sequence the processor and a reset line that
resets the program counter and restarts execution. Basic microprocessor chip components include
one or more arithmetic logic units (ALU) and shift registers. There are two system architectures
for microprocessor chips. Devices that use a reduced instruction set computer (RISC) design
process a few simple instructions instead of many complex ones in order to speed operations. By
contrast, devices that use a complex instruction set computer (CISC) design provide variable
length instructions, multiple addressing forms, and contain only a small number of general-
purpose registers. 
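The address bus, data bus, read/write lines, clock, reset line, and program counter described above can be pictured with a toy fetch-decode-execute model. This is my own minimal sketch, not a description of any real chip; the opcodes and memory layout are invented for illustration:

```python
# A toy model of the bus signals a microprocessor uses: the
# processor drives the address bus, asserts the read or write
# line, and moves data over the data bus; the reset line
# returns the program counter to zero.

class ToyProcessor:
    def __init__(self, memory):
        self.memory = memory      # modeled as a simple list of bytes
        self.pc = 0               # program counter
        self.acc = 0              # single accumulator register

    def reset(self):
        """The reset line: restart execution from address 0."""
        self.pc = 0
        self.acc = 0

    def read(self, address):
        """Address bus out, read line asserted, data bus in."""
        return self.memory[address]

    def write(self, address, value):
        """Address bus out, write line asserted, data bus out."""
        self.memory[address] = value

    def step(self):
        """One clock-sequenced fetch-decode-execute cycle."""
        opcode = self.read(self.pc)        # fetch instruction
        operand = self.read(self.pc + 1)   # fetch its operand
        self.pc += 2
        if opcode == 0x01:                 # LOAD addr -> acc
            self.acc = self.read(operand)
        elif opcode == 0x02:               # ADD addr -> acc
            self.acc += self.read(operand)
        elif opcode == 0x03:               # STORE acc -> addr
            self.write(operand, self.acc)

# Program: load mem[8], add mem[9], store the sum to mem[10].
mem = [0x01, 8, 0x02, 9, 0x03, 10, 0, 0, 5, 7, 0]
cpu = ToyProcessor(mem)
for _ in range(3):
    cpu.step()
print(mem[10])  # 12
```

Each `step()` models one clock-sequenced cycle: the program counter supplies the address, the read line fetches an instruction over the data bus, and the ALU-style operations update the accumulator.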

Input/output (I/O) ports and interfaces are connections that provide a data path between
microprocessor chips (MPU) and external devices such as keyboards, displays, and readers.
The number of I/O ports is equal to the number of input, output, and general-purpose ports
(lines) combined. Communication controllers manage data inputs and outputs. They also convert
data outputs for transmission over communication lines and perform all of the necessary control
functions, error checking, and synchronization. Interfaces for microprocessor chips include
transmission control protocol/internet protocol (TCP/IP), serial peripheral interface (SPI), inter-IC
(I2C) bus, infrared data association (IrDA), synchronous data link control (SDLC), high-level
data link control (HDLC), and pulse width modulation (PWM). Microprocessor chips (MPU)
that use system management bus (SMBus), controller area network bus (CANbus), and universal
serial bus (USB) ports are also available.

Important specifications to consider when selecting microprocessor chips (MPU) include data
bus, microprocessor family, supply voltage, clock speed, random access memory (RAM), power
dissipation, and operating temperature. Most microprocessor chips are available with an 8-
bit, 16-bit, 24-bit, 32-bit, 64-bit, 128-bit, or 256-bit data bus. Products from many proprietary
microprocessor families are commonly available. Supply voltages range from -5 V to 5 V and
include intermediate voltages such as -4.5 V, -3.3 V, -3 V, 1.2 V, 1.5 V, 1.8 V, 2.5 V, 3 V, 3.3 V,
and 3.6 V. Clock speed, the frequency that determines how fast devices connected to the
system bus operate, is generally expressed in megahertz (MHz). RAM is usually expressed in
kilobytes (kB) or megabytes (MB). Power dissipation, the device's total power consumption,
is generally expressed in watts (W) or milliwatts (mW). Operating temperature is specified as the
full required operating range.

Microprocessor chips (MPU) are available in a variety of integrated circuit (IC) package types
and with different numbers of pins. Basic IC package types include ball grid array (BGA), quad
flat package (QFP), single in-line package (SIP), and dual in-line package (DIP). Many
packaging variants are available. For example, BGA variants include plastic-ball grid array
(PBGA) and tape-ball grid array (TBGA). Fine-pitch land grid array (FLGA) packages are also
common. QFP variants include low-profile quad flat package (LQFP) and thin quad flat package
(TQFP). DIPs are available in either ceramic (CDIP) or plastic (PDIP). Other IC package types
for microprocessor chips (MPU) include small outline package (SOP), thin small outline package
(TSOP), shrink small outline package (SSOP), shrink zigzag inline package (SZIP), and thin
very small outline package (TVSOP). Small outline J-lead (SOJ), plastic leaded chip carrier
(PLCC), and leadless ceramic chip carrier (LCCC) packages are also available.

HISTORY
Intel 4004

The Intel 4004 is generally regarded as the first microprocessor, and cost thousands of
dollars. The first known advertisement for the 4004 is dated November 1971 and appeared
in Electronic News. The project that produced the 4004 originated in 1969, when Busicom, a
Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop
calculators. Busicom's original design called for a programmable chip set consisting of seven
different chips. Three of the chips were to make a special-purpose CPU with its program stored
in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer
assigned to evaluate the project, believed the Busicom design could be simplified by using
dynamic RAM storage for data, rather than shift register memory, and a more traditional general-
purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip
for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit
central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated
into a single chip. This chip would later be called the 4004 microprocessor.

The architecture and specifications of the 4004 came from the interaction of Hoff with Stanley
Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima,
during 1969. In April 1970, Intel hired Federico Faggin to lead the design of the four-chip set.
Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild
Semiconductor and designed the world's first commercial integrated circuit using SGT, the
Fairchild 3708, had the right background to lead the project, since it was SGT that made it
possible to implement a single-chip CPU with the proper speed, power dissipation, and cost.
Faggin also developed the new methodology for random logic design, based on silicon gate, that
made the 4004 possible. Production units of the 4004 were first delivered to Busicom in March
1971 and shipped to other customers in late 1971.
TMS 1000
The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in
creating the first microcontroller (also called a microcomputer) in 1971. The result of their work
was the TMS 1000, which went commercial in 1974.
TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications,
introducing a version called the TMS1802NC on September 17, 1971 which implemented a
calculator on a chip.

TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for
the single-chip microprocessor architecture on September 4, 1973. It may never be known which
company actually had the first working microprocessor running on the lab bench. In both 1971
and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying
royalties to TI for the microprocessor patent. A detailed history of these events is contained in court
documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of
the microprocessor patent.

A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core
(CPU), some program memory and read/write memory, and I/O (input/output) lines onto
one chip. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S.
Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this
patent, the standard meaning of microcomputer is a computer using one or more microprocessors
as its CPU(s), while the concept defined in the patent is more akin to a microcontroller.

Pico/General Instrument

In 1971 Pico Electronics[11] and General Instrument (GI) introduced their first collaboration in
ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This
chip could also arguably lay claim to be one of the first microprocessors or microcontrollers
having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of
the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time
given the complexity of the chip.

Pico was a spinout by five GI design engineers whose vision was to create single chip calculator
ICs. They had significant previous design experience on multiple calculator chipsets with both
GI and Marconi-Elliott.[12] The key team members had originally been tasked by Elliott
Automation to create an 8 bit computer in MOS and had helped establish a MOS Research
Laboratory in Glenrothes, Scotland in 1967.

Calculators were becoming the largest single market for semiconductors and Pico and GI went
on to have significant success in this burgeoning market. GI continued to innovate in
microprocessors and microcontrollers with products including the PIC1600, PIC1640 and
PIC1650. In 1987 the GI Microelectronics business was spun out into the very successful PIC
microcontroller business.
8-bit designs

The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor.
The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a
separate design project at Intel, arising from a contract with Computer Terminals Corporation, of
San Antonio, TX, for a chip for a terminal they were designing,[24] the Datapoint 2200 —
fundamental aspects of the design came not from Intel but from CTC. In 1968, CTC's Austin O.
“Gus” Roche developed the original design for the instruction set and operation of the processor.
In 1969, CTC contracted two companies, Intel and Texas Instruments, to make a single-chip
implementation, known as the CTC 1201.[25] In late 1970 or early 1971, TI dropped out, being
unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use their
own implementation in the Datapoint 2200, using traditional TTL logic instead (thus the first
machine to run “8008 code” was not in fact driven by a microprocessor at all!). Intel's version of the 1201
microprocessor arrived in late 1971, but was too late, slow, and required a number of additional
support chips. CTC had no interest in using it. CTC had originally contracted Intel for the chip,
and would have owed them $50,000 for their design work.[25] To avoid paying for a chip they did
not want (and could not use), CTC released Intel from their contract and allowed them free use
of the design.[25] Intel marketed it as the 8008 in April 1972, as the world's first 8-bit
microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the
magazine Radio-Electronics in 1974.

The 8008 was the precursor to the very successful Intel 8080 (1974), which offered much
improved performance over the 8008 and required fewer support chips, and in turn to the
Zilog Z80 (1976) and derivative Intel 8-bit processors. The competing Motorola 6800 was released in
August 1974 and
the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502
rivaled the Z80 in popularity during the 1980s.

A low overall cost, small packaging, simple computer bus requirements, and sometimes the
integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home
computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive
machines as the Sinclair ZX-81, which sold for US$99.

The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the
design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as
well as in implantable-grade medical pacemakers and defibrillators, and in automotive, industrial, and
consumer devices. WDC pioneered the licensing of microprocessor designs, later followed
by ARM and other microprocessor Intellectual Property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978, an ambitious and well thought-through 8-bit design,
source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent
16-bit microprocessors typically used microcode to some extent, as CISC design requirements
were becoming too complex for hard-wired logic alone.)

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of
interest due to its innovative and powerful instruction set architecture.

A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802,
RCA COSMAC), introduced in 1976, which was used on board the Galileo probe to Jupiter
(launched 1989, arrived 1995). The RCA COSMAC was the first microprocessor to implement CMOS technology.
The CDP1802 was used because it could be run at very low power, and because a variant was
available fabricated using a special production process (Silicon on Sapphire), providing much
better protection against cosmic radiation and electrostatic discharges than that of any other
processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-
hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be
made arbitrarily low, even to 0 Hz, a total stop condition. This let the Galileo spacecraft use
minimum electric power for long uneventful stretches of a voyage. Timers or sensors would
awaken the processor or raise its performance in time for important tasks, such as navigation
updates, attitude control, data acquisition, and radio communication.

12-bit designs

The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of
peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8
minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since
it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By
virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into
some military designs until the early 1980s.

16-bit designs

The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced
in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8.

Other early multi-chip 16-bit microprocessors include one used by Digital Equipment
Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer,
and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to
1976 timeframe.
In 1975, National introduced the first 16-bit single-chip microprocessor, the National
Semiconductor PACE, which was later followed by an NMOS version, the INS8900.

Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible
with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer,
the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was
packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the
Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on
chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit
instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address
16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the
99105 and 99110.

The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the
WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs
and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit
designs of all time.

Intel followed a different path, having no minicomputers to emulate, and instead "upsized" their
8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most
modern PC type computers. Intel introduced the 8086 as a cost effective way of porting software
from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a
version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM
PC, the model 5150. Following up their 8086 and 8088, Intel released the 80186, 80286 and, in
1985, the 32-bit 80386, cementing their PC market dominance with the processor family's
backwards compatibility. The 8086 and 80186 had a crude method of segmentation, while the
80286 introduced a full-featured segmented memory management unit (MMU), and the 80386
introduced a flat 32-bit memory model with paged memory management.
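The contrast between the 8086's crude segmentation and the 80386's flat model can be sketched numerically. The helper names below are mine, for illustration only; the arithmetic (physical address = segment × 16 + offset, wrapping at 1 MB) is the well-known real-mode scheme:

```python
# Contrasting 8086 real-mode segmentation with the 80386's
# flat 32-bit linear address model.

def real_mode_address(segment, offset):
    """8086 real mode: 16-bit segment shifted left 4 bits, plus
    a 16-bit offset, giving a 20-bit (1 MB) physical space."""
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at 1 MB

def flat_address(address):
    """80386 flat model: the 32-bit address is used directly
    as the linear address (paging translates it underneath)."""
    return address & 0xFFFFFFFF

# Many segment:offset pairs alias the same physical byte —
# 0040:0000 and 0000:0400 both name physical address 0x400:
print(hex(real_mode_address(0x0040, 0x0000)))  # 0x400
print(hex(real_mode_address(0x0000, 0x0400)))  # 0x400
```

That aliasing, and the 64 KB limit on any single offset, is why real-mode segmentation is considered crude next to the 80386's single flat 4 GB address space.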

32-bit designs

16-bit designs had only been on the market briefly when 32-bit implementations started to
appear.

The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it
was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external
data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described
it as a 16-bit processor, though it clearly has a 32-bit architecture. The combination of high
performance, large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the
most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the
68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore
Amiga.

The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses,
and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and
general production in 1982.[26][27] After the divestiture of AT&T in 1984, it was renamed the WE
32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE
32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2,
the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop
computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-
pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX
System V operating system.

Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a
commercial success. It had an advanced capability-based object-oriented architecture, but poor
performance compared to contemporary architectures such as Intel's own 80286 (introduced
1982), which was almost four times as fast on typical benchmark tests. However, the iAPX 432's
poor results were partly due to a rushed and therefore suboptimal Ada compiler.

In the late 1980s, "microprocessor wars" started killing off some of the microprocessors.
Apparently, with only one major design win, Sequent, the NS 32032 just faded out of
existence, and Sequent switched to Intel microprocessors.

From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in the desktop,
laptop, and server markets, and these microprocessors became faster and more capable.
Intel had licensed early versions of the architecture to other companies, but declined to
license the Pentium, so AMD and Cyrix built later versions of the architecture based on their
own designs. During this span, these processors increased in complexity (transistor count)
and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium
line is probably the most famous and recognizable 32-bit processor model, at least with the
public at large.

64-bit designs in personal computers

While 64-bit microprocessor designs have been in use in several markets since the early 1990s,
the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.
With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also
called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions
(first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both
versions can run 32-bit legacy applications without any performance penalty as well as new 64-
bit software. With operating systems such as Windows XP x64, Windows Vista x64, Windows
7 x64, Linux, BSD, and Mac OS X running natively in 64-bit mode, the software ecosystem is also
geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an
increase in register size from IA-32, as it also doubles the number of general-purpose registers. The move
to 64 bits by PowerPC processors had been intended since the processors' design in the early 90s
and was not a major cause of incompatibility. Existing integer registers are extended as are all
related data pathways, but, as was the case with IA-32, both floating point and vector units had
been operating at or above 64 bits for several years. Unlike what happened when IA-32 was
extended to x86-64, no new general purpose registers were added in 64-bit PowerPC, so any
performance gained when using the 64-bit mode for applications making no use of the larger
address space is minimal.
