01 History of Computer
The creation of the modern computer has changed the face of the planet. Today, there are more devices
fitted with a microchip than there are human beings. The idea of a “computer” cannot be attributed to a
single person. Rather, it has been the combined contribution of many innovative and forward-thinking
scientists, mathematicians, philosophers and engineers that has brought us to what we now refer to as
“the computer age”. This is their story…
It is no coincidence that the decimal number system rolls over after a count of 10. During a time when
numbers did not yet exist, fingers, along with twigs and pebbles, were the most convenient way of
tracking quantities.
ABACUS
Around 500 B.C., the Babylonians made advances
in accounting and devised the abacus — a small
contraption with a few rods and free-moving
beads. This device operated by accepting data
(quantities), a set of instructions (add or subtract)
and in return provided an answer — it captured
the essence of a computer. The abacus is the earliest known example of a computing device: a primitive calculator that, guided by its operator, could translate instructions into meaningful answers.
Unfortunately, such adding machines performed only one mathematical function at a time. Charles Babbage devoted his life's work to remedying that. Starting in 1837, Babbage worked on the design of a
general purpose computer called the Analytical Engine. His efforts eventually earned him the title
“father of computing”.
A working model of the Analytical Engine never materialized due to financial constraints, but the design appeared to be sound. Had it ever been constructed, the Analytical Engine would have been some 30 meters long and 10 meters wide, powered by a steam engine, and would have accepted not only data (for example, the dimensions of a triangle) but also programs or functions (for computing the area of that triangle). The input would be fed through hole-punched cards, an idea borrowed from the textile industry, which used such cards to guide weaving patterns automatically. The results were designed to be output through a curve plotter or a bell. The Analytical Engine was the first design for a programmable computer and, as such, it laid out the fundamental principles found in any modern computer: input, a program, a data store for intermediate answers, output, and an arithmetic unit performing the basic functions underlying all computation (addition, subtraction, multiplication and division).
Babbage's principles found a purpose in the 1890s. The US Census Office realized that manually counting the results of the current census would take more than 10 years, by which time another census would be due. As part of a competition to come up with a solution, Herman Hollerith, an employee of the census department, devised a machine that took in punch cards similar to those of Babbage's Analytical Engine and tallied the results for over 62 million people in only six weeks. The idea for the system came to
Hollerith from railroad operators who punched tickets in a certain manner to indicate whether the ticket
holder was tall, dark, male, et cetera.
Mechanical computational engines continued to evolve, and devices such as chronometers and watches became marvels of mechanical orchestration and miniaturization. Containing anywhere from a few dozen to a few hundred moving parts, including clutches, cogs, gears, springs and coils, these contraptions could keep a steady beat for years or even decades with remarkable precision. The complex orchestration of these parts also explains the higher price tags on some of the more sophisticated of these watches compared to their much cheaper digital counterparts of the modern era. Mechanical contraptions, however, have a physical limit to how small they can get, constrained by the number and size of their parts, friction, weight, portability, power requirements and precision.
Fortunately, science in typical fashion made a leap during the second half of the 1800s. Thomas Alva Edison's pioneering work in the field of electricity helped harness it for widespread practical use. With the control of electricity came the light bulb, the radio, electrical wiring and other invaluable inventions.
As physics paved the way for electrical innovation, scientists discovered in electrical charge a way to represent data. The beads of the abacus were replaced by bits in the modern computer: a bit, or ‘binary digit’, is essentially a small electrical charge that represents a 1 or a 0. The creation of the bit marked a transition from the decimal system used by humans (ten digits, from zero to nine) to a binary system for computers (only two digits, zero and one).
Binary arithmetic provided the foundation for operating with bits. It was the contribution of Gottfried Leibniz, a prodigy in mathematics, philosophy, linguistics, law, politics and logic. In fact, he posited that every argument could be reduced to numbers using binary logic. Hence, 1s and 0s in binary arithmetic are also referred to as “true” and “false” (or “on” and “off”, owing to their application in electronic switches).
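To make the two systems concrete, here is a small sketch in Python; the particular numbers are arbitrary illustrations rather than anything from the history above.

# The same quantity written in decimal and in binary.
n = 13                     # decimal: one ten and three ones
print(bin(n))              # prints '0b1101', i.e. 1*8 + 1*4 + 0*2 + 1*1

# Binary addition follows Leibniz's simple rules:
# 0+0=0, 0+1=1, 1+1=10 (write 0, carry 1).
a, b = 0b0110, 0b0011      # 6 and 3 written in binary
print(bin(a + b))          # prints '0b1001', i.e. 9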
George Boole, a mathematician and philosopher, relied on binary arithmetic to advance his theories of
logic, at the time still a branch of philosophy. The field would later evolve into Boolean algebra for
managing and calculating the outcomes of many bits interacting with one another. The simplest piece of Boolean logic might take the following form: a light switch toggles a light bulb; flipping the switch turns the light on if, and only if, it is off, and vice versa. In the modern computer, however, many millions of such switches are attached to a single circuit, and flipping a combination of these switches can achieve results that can only be managed using the techniques of Boolean algebra.
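A toy sketch of such Boolean switching, in Python; the light-switch scenario follows the example above, while the specific three-switch circuit is invented purely for illustration.

# A single switch toggling a bulb is just logical negation (NOT).
bulb = False               # the light starts off
bulb = not bulb            # flip the switch: now on
bulb = not bulb            # flip again: off once more

# Boolean algebra lets us reason about many switches at once. Here a bulb
# comes on only when switches a AND b are both on, OR when override c is on.
def bulb_is_on(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

print(bulb_is_on(True, True, False))    # True
print(bulb_is_on(True, False, False))   # False
print(bulb_is_on(False, False, True))   # True (the override switch wins)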
At its core, a computer is doing just that, switching a galaxy of bits on or off. A ballet of bits is constantly
playing out and each flip of the switch results in a domino-like chain reaction. Each ballet of the bits is
used to represent an outcome and must be orchestrated with absolute accuracy.
During a 3D game, for example, the tiniest movement of the mouse turns the ball, which turns a wheel
that is being monitored by a chip whose purpose is translating this movement into an electronic signal.
The chip changes a few thousand bits and causes a chain reaction down the wire of the mouse
connected to the computer. The reaction eventually ends up in the computer’s main processor, which in
turn tells the graphics processor that the mouse has moved one millimetre. The graphics card does a
few thousand mathematical computations to calculate the shadow, lighting, shading and angle of light,
and generates a new image corresponding to the movement of the mouse. While it does all this, the computer's memory remembers the previous position, on the basis of which the next image is calculated. The graphics card renders the new image on the monitor by changing the state of a
few billion bits on the screen and producing a massive collage of a few million pixels — all this within a
fraction of a second.
The language of bits was not always the language of choice. The idea came from the early days of
telephone companies when they used switches with “on” and “off” states to connect or disconnect a
circuit. Human operators made the connections by manually operating the switches. For long distance
calls, a local operator connected to a foreign telephone exchange, which in turn connected to its own
local exchange and created a link between the calling parties. A computer uses the same principle of
using switches to control bits and direct the flow of information.
In his 1937 master's thesis at the Massachusetts Institute of Technology, Claude Shannon proved that the management of a large number of these switches could be simplified using Boolean algebra. Conversely, he also showed that switches could be used to solve Boolean algebraic problems. This meant that if a set of bits interacted in a particular way, they would, as if by magic, produce the answer, in the same way that mixing red and green light produces yellow.
How this interaction could be orchestrated was the pioneering work of Alan Turing, the father of modern computing. In 1936, a year before Shannon's thesis, Turing laid out the fundamental theoretical model
for all modern computers by detailing the Turing Machine. Its basic idea is quite simple: in a perfectly
well choreographed ballet, for example, a dancer does not need to keep track of the entire ballet.
Instead, she may need to keep track of only a few simple cues: step forward if the dancer to the left steps back; spin synchronously with the lead dancer; stop dancing when the dancer in front stops dancing.
Each dancer in the ballet follows their own set of cues, which creates a chain reaction among other
dancers. The ballet is initiated (or brought to a halt) by a single dancer responsible for starting the chain
reaction.
Similarly, bits react to cues and influence each other. When the ballet of bits concludes, the new state of the bits (for example, 111, 001 or 010) represents the result. Turing's contribution is remarkable both for the pioneering nature of the work and for the thought experiments that led him to develop such a system.
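To make the idea concrete, here is a minimal sketch of a Turing-style machine in Python; the particular machine, its states and its tape are invented for illustration and are not taken from Turing's paper.

# A tape, a read/write head, a current state and a table of cues:
# "in state S reading symbol X, write Y, move the head, switch to state T".
# This particular machine simply inverts a string of bits, then halts.
def run(tape):
    rules = {
        ("invert", "0"): ("1", +1, "invert"),
        ("invert", "1"): ("0", +1, "invert"),
        ("invert", " "): (" ", 0, "halt"),   # a blank cell: stop the ballet
    }
    tape = list(tape) + [" "]
    head, state = 0, "invert"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(run("0110"))   # prints "1001"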
Turing's work added to centuries of advances and breakthroughs in engineering, mathematics, physics and logic, and to an endless pursuit of the human spirit, all of which would manifest themselves in the form of a 30-tonne machine called the Electronic Numerical Integrator And Computer (ENIAC).
The ENIAC was the first fully programmable machine capable of solving almost any mathematical
problem. Built for the US Army and completed in 1946, the ENIAC was capable of adding 5,000 numbers per second. It was powered by 18,000 vacuum tubes and 6,000 switches, contained around five million hand-soldered joints and took three years to build. This marvel of a machine was, however, initially programmed for one specific task: calculating, in a matter of hours, the trajectories of artillery shells aimed at enemy targets, a job that would otherwise have taken days to compute.
The vacuum tubes used in the ENIAC were vaguely similar to light bulbs in both function and form but
with metal casings instead of glass. These vacuum tubes represented data using electrical charge but were problematic and kept burning out. The heat and light of the ENIAC attracted moths, which in turn caused short circuits. Computer problems, so the popular story goes, henceforth came to be known as “bugs” and fixing them, “debugging”. Because of these problems, the ENIAC could sometimes be down for half a day at a time and required many hands to keep it up and running.
While the input data could be stored on the ENIAC, the program that operated on the input had to be set up through plugboard wiring. Programming it was cumbersome, and each program required unplugging and re-plugging hundreds of wires. This method of programming was almost as primitive as Babbage's
punch cards. And the limitation meant that computers, although programmable, were restricted by the
complexity of the process.
It was the mathematician John von Neumann who, shortly after the ENIAC, introduced the concept of a
stored-program computer. Storing the program in the computer's memory meant that the ENIAC's system of semi-permanent plugboard wiring could be done away with. Bits would represent not only data, but also the programs themselves that consumed the data: bits controlled by bits.
The stored-program design had profound implications. Prior to this breakthrough, computers accepted
ordinary data as input and passed it on to programs which operated on it. However, if the program itself was an input, then operating on this program would require another, master program. Turing's Universal Machine described such a master program, and von Neumann provided the practical design that has now become the model for nearly all computers.
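A toy sketch of the stored-program idea, in Python; the miniature instruction set and program are invented purely for illustration.

# One memory holds both the program (instructions) and the data it operates
# on; a master loop fetches and interprets it: bits controlled by bits.
memory = [
    ("LOAD", 7),      # address 0: load the value stored at address 7
    ("ADD", 8),       # address 1: add the value stored at address 8
    ("PRINT", None),  # address 2: print the accumulator
    ("HALT", None),   # address 3: stop
    None, None, None,
    5,                # address 7: data
    37,               # address 8: data
]

pc, acc = 0, 0        # program counter and accumulator
while True:
    op, arg = memory[pc]
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "PRINT":
        print(acc)    # prints 42
    elif op == "HALT":
        break
    pc += 1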
Even with the programmable architecture well in place, it was doubtful if vacuum tubes would allow
computers to scale. These deficient vacuum tubes set the backdrop for the most important invention of
the digital age: the transistor, whose three co-inventors, William Shockley, John Bardeen and Walter Brattain, went on to receive the Nobel Prize in Physics in 1956.
Transistors are microscopically small in contrast to the finger-sized vacuum tubes, require far less power and are capable of switching states (1 to 0 and 0 to 1) much faster. Their beauty also lies in their composition: as solid-state semiconductors, they are built from material that can be made either to conduct electrical charge, like a metal, or to block it, like rubber.
To deliver on the promise of transistors, Shockley went on to head the Shockley Semiconductor Laboratory in Northern California, where he recruited a group of brilliant young scientists and engineers. Many of them eventually left because of his paranoid and competitive nature (once, an employee cut her finger, which Shockley suspected was actually a plot targeted at him, and to find the culprit he forced a lie detector test upon all his employees).
Eight of these scientists quit Shockley Semiconductor together. The “traitorous eight”, as Shockley came to regard them, went on to form Fairchild Semiconductor in the same region and adopted the more abundant silicon as the semiconducting material of choice. This marked the beginnings of Silicon Valley, which today is the epicenter of computers and high-tech businesses.
Transistors, which represent the bits in a computer, needed to be wired together to interact. Common configurations of wiring came together as integrated circuits, or microchips, an idea that Robert Noyce, one of the “traitorous eight”, helped to invent. If transistors are the characters of an alphabet, microchips are the words formed from those characters and computers are compositions of dozens of these microchips. All digital electronic devices are composed of microchips, with many of them sharing a common subset of chips.
Robert Noyce, along with Gordon Moore, would go on to form Integrated Electronics, now better known as Intel. It was at Intel that Noyce oversaw the work of Ted Hoff, who invented the greatest microchip of them all. The microprocessor, or Central Processing Unit (CPU), found in all personal computers (PCs) is a single, highly complex microchip that functions as the computer's brain.
Co-founder Gordon Moore, meanwhile, gained fame for his observation that the number of transistors on a microchip would double roughly every two years. Moore's speculation became Moore's Law and has largely held up since it was first posited in 1965. Current Intel Pentium 4 processors have the muscle of over 100 million transistors fitted inside a matchbox-sized chip that is capable of adding over 5,000 million numbers per second. Contrast this with the nearly 18,000 vacuum tubes in the 30-tonne ENIAC, which could add only 5,000 numbers per second, and the significance of transistor technology becomes clear. If the ancient Greeks had had an Intel Pentium 4, they could have saved themselves centuries of mathematical labouring.
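A rough back-of-the-envelope check of that growth rate, in Python; the starting point uses the commonly cited figure of about 2,300 transistors on Intel's first microprocessor, the 4004 of 1971, as an illustrative assumption.

# Assume ~2,300 transistors in 1971 and a doubling every two years.
start_count, start_year, end_year = 2300, 1971, 2004
doublings = (end_year - start_year) // 2    # 16 doublings in 33 years
estimate = start_count * 2 ** doublings
print(f"{estimate:,}")                      # ~150 million: the same order of
                                            # magnitude as the Pentium 4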
Intel processors started their legacy in 1975 by powering the first commercial personal computer, the MITS Altair 8800, which was built around the Intel 8080 processor. Microsoft founders Bill Gates and Paul Allen would go on to develop Altair BASIC, its first programming language. Interestingly enough, in the same year Advanced Micro Devices (AMD), also formed by a group of Fairchild defectors, reverse engineered the Intel 8080 processor and started the long-running Intel-AMD rivalry.
While the Altair was sold as a hobbyist kit, the Apple I, developed around the same time by hobbyist Steve Wozniak and sold with the help of his close friend Steve Jobs, was the first fully assembled computer. The two subsequently founded Apple Computer in Jobs' family garage. Today, 30 years later, Jobs serves as the visionary and CEO of Apple Computer Incorporated.
It was, however, the non-commercial Xerox Alto, developed in 1973, that took the title of first personal computer. The Alto, developed at Xerox PARC (Palo Alto Research Center) in Palo Alto, California, was one of a dozen inventions to come out of the research centre, including colour graphics, object-oriented programming and the wide application of the mouse. After seeing a demo of the Alto, Apple engineers purportedly adopted the concept for their own commercial computer, the Lisa, which eventually proved to be too expensive and ahead of its time. The lack of commercial demand meant that over 2,000 Lisas would end up buried in a landfill.
Contrary to IBM chief Thomas Watson's reputed speculation in 1943 that “there is a world market for maybe five computers,” personal computers found widespread demand in a growing market that has today reached nearly two billion units. This figure primarily represents PCs, but their siblings and cousins (cellphones, PDAs, laptops) far exceed even the human population of the planet. Whether in the form of GPS tracking devices, rain-sensing windshield wipers or electronic hearing implants, microchips continue to shrink and integrate into our lives.
While the hunger for more powerful and smaller chips is insatiable, Moore's Law seems to be giving way, as the current generation of microprocessors is showing signs of plateauing. Even though the
natural laws of physics dictate that bits can be as small as the atoms in which they are stored, we are far
from reaching this atomic threshold. The problem lies in the economics of miniaturisation as increasingly
expensive fabrication plants for producing smaller chips yield disproportionately diminishing returns.
Nonetheless, all hope is not yet lost as scientists are already exploring the frontiers of sub-atomic
particles.
Atoms are composed of electrons orbiting a nucleus of protons and neutrons. Removing or adding electrons changes the charge of the atom to positive (1) or negative (0), allowing atoms to act as bits. Protons and neutrons are in turn made up of three quarks each. Understanding the nature of these quarks and their influence on protons and neutrons could unlock the power to make today's most powerful supercomputers pale in comparison. If such quantum computers ever materialise, they will in theory be able to compute in a matter of days what would, with today's computing ability, take a few million years.
While the shape, form and power of computing devices continue to evolve, a parallel evolution has
been taking place in the related field of communication technology.
The first electric telegraphs were already communicating in the 1830s, a century before the ENIAC, and wireless telegraphy followed before the end of that century. George Stibitz, a researcher at Bell Labs during the 1930s and 1940s, used a teletypewriter
(essentially a typewriter hooked up to a telephone line) to communicate with a calculator on the other
end and receive results for remote computation. This was the first time a computer had ever been
operated remotely over a phone line.
The US Department of Defense's Advanced Research Projects Agency (DARPA) duly noted this missing link between computers and initiated efforts to fill the void. Around 1962, a series of memos about the “Galactic
Network” laid the conceptual foundations of the internet. Shortly thereafter, Vinton Cerf received a “request for proposal” from DARPA to design a packet-switched network. Cerf's research efforts lent themselves heavily to the design of the first network of computers and earned him the title “father of the internet”.
The resilience of the internet derives from the packet-switched network Cerf detailed. In such a model, all information is divided into tiny packets. Each packet is transmitted separately and embarks on a journey to find its destination on the internet. Its only strategy for getting there is to ask intermediate routers (which conduct traffic on the internet) for directions to the next router that might lead the way, and so on, until the last router points it to its final destination. Anyone who has ever gotten lost and asked for directions can probably relate to a packet. For these packets, a dozen things can and do go wrong. They often get lost in transit, are captured by a hacker or arrive at their destination out of order relative to other packets.
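A toy sketch of that hop-by-hop idea, in Python; the routers, addresses and routing tables are invented for illustration, and real routers run far richer protocols.

# Each router only knows the next hop towards a destination, never the
# whole path, much like asking strangers for directions street by street.
routing_tables = {
    "A": {"D": "B"},    # router A: to reach D, hand the packet to B
    "B": {"D": "C"},
    "C": {"D": "D"},    # C is directly connected to the destination
}

def forward(packet, start):
    hop, path = start, [start]
    while hop != packet["dst"]:
        hop = routing_tables[hop][packet["dst"]]   # ask for the next hop
        path.append(hop)
    return path

packet = {"dst": "D", "payload": "hello"}
print(forward(packet, "A"))    # prints ['A', 'B', 'C', 'D']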
The research and prototyping for refining the packet-switched network began at the University of California at Los Angeles (UCLA), where Cerf was doing graduate work. By 1969, the Advanced Research Projects Agency Network (ARPANET) would take shape as UCLA, the University of California at Santa Barbara, the Stanford Research Institute and the University of Utah came together to form a network.
Along with the contributions of dozens of other individuals, Cerf would go on to develop the
Transmission Control Protocol (TCP) at his new home, Stanford, where he had taken up an assistant professorship in computer science and electrical engineering. After four iterations, the TCP suite was finalised in 1978, following an exciting demonstration in July 1977 when a packet was sent on a 94,000-mile round trip on the ARPANET without losing a single bit. As a result of its resilient design and infrastructure, TCP/IP (TCP together with the Internet Protocol, IP) became the standard for transferring data across networks.
Relying on TCP/IP, the ARPANET grew into the internet and has since continued to scale unchecked to
become what it is today.
The internet was primarily used for transferring and sharing data. It handled documents but webpages
as such did not exist until Tim Berners-Lee, an independent contractor at CERN, became frustrated with
the lack of ability to easily share and update information between researchers. He transformed the
internet landscape by introducing the concept of hyperlinks — the links on webpages that allow them to
point to each other with the click of a mouse. These hyperlinks created a ‘global web’ of linked pages
commonly referred to as the World Wide Web (WWW).
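A toy picture of such a web of linked pages, in Python; the page names and links are invented for illustration.

# Each page is just a name plus the hyperlinks it contains; following the
# links from any page reaches the rest of this tiny "global web".
web = {
    "home.html":     ["research.html", "contact.html"],
    "research.html": ["paper1.html", "home.html"],
    "contact.html":  ["home.html"],
    "paper1.html":   [],
}

def reachable(start):
    seen, todo = set(), [start]
    while todo:
        page = todo.pop()
        if page not in seen:
            seen.add(page)
            todo.extend(web[page])
    return seen

print(sorted(reachable("home.html")))   # all four pages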
As far as communication networks go, the internet overshadows the telephone network, integrates television, radio and newspapers, and challenges even our physical social realm. Its humble beginnings
ultimately brought the communication revolution to all its glory not only for humans but also for
devices.
Through microchips, electronic devices became aware of their own function. A chip acting as the brain
inside a cellphone encodes every bit of relevant information about the host. Relying on exacting
communication protocols, these devices suddenly become aware of the existence of other devices made
up of similar microchips and can speak to them in a similar language.
This new species of electronic beings is continually evolving and trying to overcome its cultural differences so that an alarm clock can talk to the coffee maker in the morning or a health monitor can check our vital signs during recovery. These modern-day slaves encapsulate tiny worker atoms which manage for us what our preoccupied minds would rather not. The quality of life during our brief welcome on
this planet has been elevated because of them and in return for doing everything they are told, they ask
for nothing. If we are God’s creatures, then computers are ours: a manifestation of the human spirit and
potential.