(English) Electronic Computing - Crash Course Computer Science #2

Our last episode brought us to the start of the 20th century, where early, special purpose computing devices, like tabulating machines, were a huge boon to governments and business - aiding, and sometimes replacing, rote manual tasks. But the scale of human systems continued to increase at an unprecedented rate. The first half of the 20th century saw the world’s population almost double. World War 1 mobilized 70 million people, and World War 2 involved more than 100 million.


Global trade and transit networks became interconnected like never before, and the sophistication of our engineering and scientific endeavors reached new heights – we even started to seriously consider visiting other planets. And it was this explosion of complexity, bureaucracy, and ultimately data, that drove an increasing need for automation and computation. Soon those cabinet-sized electro-mechanical computers grew into room-sized behemoths that were expensive to maintain and prone to errors. And it was these machines that would set the stage for future innovation.

INTRO

One of the largest electro-mechanical computers built was the Harvard Mark I, completed in 1944 by IBM for the Allies during World War 2. It contained 765,000 components, three million connections, and five hundred miles of wire. To keep its internal mechanics synchronized, it used a 50-foot shaft running right through the machine, driven by a five horsepower motor. One of the earliest uses for this technology was running simulations for the Manhattan Project.

The brains of these huge electro-mechanical beasts were relays: electrically-controlled mechanical switches. In a relay, there is a control wire that determines whether a circuit is opened or closed. The control wire connects to a coil of wire inside the relay. When current flows through the coil, an electromagnetic field is created, which in turn attracts a metal arm inside the relay, snapping it shut and completing the circuit. You can think of a relay like a water faucet. The control wire is like the faucet handle. Open the faucet, and water flows through the pipe. Close the faucet, and the flow of water stops. Relays are doing the same thing, just with electrons instead of water. The controlled circuit can then connect to other circuits, or to something like a motor, which might increment a count on a gear, like in Hollerith's tabulating machine we talked about last episode.
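
To make the relay's behavior concrete, here is a minimal Python sketch - purely illustrative, with invented names, not anything from the episode - modeling a relay as an electrically controlled switch:

    # Illustrative model of a relay: current through the coil closes the
    # controlled circuit; cutting the current opens it again.
    class Relay:
        def __init__(self):
            self.circuit_closed = False  # the metal arm starts open

        def set_control(self, current_flowing):
            # The coil's electromagnetic field pulls the arm shut,
            # completing (or, without current, breaking) the circuit.
            self.circuit_closed = current_flowing

    relay = Relay()
    relay.set_control(True)
    print(relay.circuit_closed)  # True - current can now drive, say, a motor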


Unfortunately, the mechanical arm inside of a relay *has mass*, and therefore can’t move instantly between opened and closed states. A good relay in the 1940s might be able to flick back and forth fifty times in a second. That might seem pretty fast, but it’s not fast enough to be useful at solving large, complex problems. The Harvard Mark I could do 3 additions or subtractions per second; multiplications took 6 seconds, and divisions took 15. And more complex operations, like a trigonometric function, could take over a minute.

In addition to slow switching speed, another limitation was wear and tear. Anything mechanical that moves will wear over time. Some things break entirely, and other things start getting sticky, slow, and just plain unreliable. And as the number of relays increases, the probability of a failure increases too. The Harvard Mark I had roughly 3500 relays. Even if you assume a relay has an operational life of 10 years, this would mean you’d have to replace, on average, one faulty relay every day! That’s a big problem when you are in the middle of running some important, multi-day calculation.
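
The arithmetic behind that estimate is easy to check - a quick sketch, assuming failures spread evenly over each relay's ten-year life:

    # 3,500 relays, each lasting ~10 years, works out to about one
    # failure somewhere in the machine every day.
    relays = 3500
    lifetime_days = 10 * 365               # ~3,650 days per relay
    failures_per_day = relays / lifetime_days
    print(round(failures_per_day, 2))      # ~0.96, i.e. roughly one a day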

And that’s not all engineers had to contend with. These huge, dark, and warm machines also attracted insects. In September 1947, operators on the Harvard Mark II pulled a dead moth from a malfunctioning relay. Grace Hopper, who we’ll talk more about in a later episode, noted, “From then on, when anything went wrong with a computer, we said it had bugs in it.” And that’s where we get the term computer bug.

It was clear that a faster, more reliable alternative to electro-mechanical relays was needed if computing was going to advance further, and fortunately that alternative already existed!

In 1904, English physicist John Ambrose Fleming developed a new electrical component called a thermionic valve, which housed two electrodes inside an airtight glass bulb - this was the first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons – a process called thermionic emission. The other electrode could then attract these electrons to create the flow of our electric faucet, but only if it was positively charged - if it had a negative or neutral charge, the electrons would no longer be attracted across the vacuum, so no current would flow.

An electronic component that permits the one-way flow of current is called a diode, but what was really needed was a switch to help turn this flow on and off. Luckily, shortly after, in 1906, American inventor Lee de Forest added a third “control” electrode that sits between the two electrodes in Fleming’s design. By applying a positive charge to the control electrode, it would permit the flow of electrons as before. But if the control electrode was given a negative charge, it would prevent the flow of electrons. So by manipulating the control wire, one could open or close the circuit. It’s pretty much the same thing as a relay - but importantly, vacuum tubes have no moving parts. This meant there was less wear, and more importantly, they could switch thousands of times per second.
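
In the same illustrative spirit as the relay sketch above - the function and argument names are invented for this example - the triode's switching logic boils down to:

    # Electrons cross the vacuum only toward a positively charged plate,
    # and a negatively charged control grid repels them before they arrive.
    def triode_conducts(plate_positive, grid_negative):
        return plate_positive and not grid_negative

    print(triode_conducts(True, False))  # True  - circuit closed, current flows
    print(triode_conducts(True, True))   # False - circuit open, no current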


These triode vacuum tubes would become the basis of radio, long distance telephone, and many other electronic devices for nearly a half century. I should note here that vacuum tubes weren’t perfect - they’re kind of fragile, and can burn out like light bulbs - but they were a big improvement over mechanical relays. Also, initially vacuum tubes were expensive – a radio set often used just one, but a computer might require hundreds or thousands of electrical switches.

But by the 1940s, their cost and reliability had improved to the point where they became feasible for use in computers… at least by people with deep pockets, like governments. This marked the shift from electro-mechanical computing to electronic computing. Let’s go to the Thought Bubble.

The first large-scale use of vacuum tubes for computing was the Colossus Mk 1, designed by engineer Tommy Flowers and completed in December of 1943. The Colossus was installed at Bletchley Park, in the UK, and helped to decrypt Nazi communications. This may sound familiar because, two years prior, Alan Turing, often called the father of computer science, had created an electromechanical device, also at Bletchley Park, called the Bombe. It was an electromechanical machine designed to break Nazi Enigma codes, but the Bombe wasn’t technically a computer, and we’ll get to Alan Turing’s contributions later. Anyway, the first version of Colossus contained 1,600 vacuum tubes, and in total, ten Colossi were built to help with code-breaking.

Colossus is regarded as the first programmable, electronic computer. Programming was done by plugging hundreds of wires into plugboards, sort of like old school telephone switchboards, in order to set up the computer to perform the right operations. So while “programmable”, it still had to be configured to perform a specific computation.

Enter the Electronic Numerical Integrator and Calculator – or ENIAC – completed a few years later in 1946 at the University of Pennsylvania. Designed by John Mauchly and J. Presper Eckert, this was the world's first truly general purpose, programmable, electronic computer. ENIAC could perform 5000 ten-digit additions or subtractions per second, many, many times faster than any machine that came before it.
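
To put “many, many times faster” in rough numbers, here is a back-of-the-envelope comparison against the Harvard Mark I figures from earlier:

    # ENIAC vs. Harvard Mark I addition rates, using the figures above.
    eniac_adds_per_sec = 5000
    mark1_adds_per_sec = 3
    print(eniac_adds_per_sec / mark1_adds_per_sec)  # ~1,667x faster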


It was operational for ten years, and is estimated to have done more arithmetic than the entire human race up to that point. But with that many vacuum tubes, failures were common, and ENIAC was generally only operational for about half a day at a time before breaking down.

Thanks, Thought Bubble. By the 1950s, even vacuum-tube-based computing was reaching its limits. The US Air Force’s AN/FSQ-7 computer, which was completed in 1955, was part of the “SAGE” air defense computer system we’ll talk more about in a later episode.

To reduce cost and size, as well as improve reliability and speed, a radical new electronic switch would be needed. In 1947, Bell Laboratory scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, and with it, a whole new era of computing was born! The physics behind transistors is pretty complex, relying on quantum mechanics, so we’re going to stick to the basics.

A transistor is just like a relay or vacuum tube - it’s a switch that can be opened or closed by applying electrical power via a control wire. Typically, transistors have two electrodes separated by a material that sometimes can conduct electricity, and other times resist it – a semiconductor. In this case, the control wire attaches to a “gate” electrode. By changing the electrical charge of the gate, the conductivity of the semiconducting material can be manipulated, allowing current to flow or be stopped – like the water faucet analogy we discussed earlier. Even the very first transistor at Bell Labs showed tremendous promise – it could switch between on and off states 10,000 times per second.
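
Continuing the illustrative switch sketches from above (names invented for the example), the transistor reduces to the same abstraction, with a gate in place of a coil or control grid:

    # Charging the gate makes the semiconductor between the two electrodes
    # conductive; removing the charge stops the current.
    def transistor_conducts(gate_charged):
        return gate_charged

    print(transistor_conducts(True))   # True  - current flows
    print(transistor_conducts(False))  # False - current is stopped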

Further, unlike vacuum tubes made of glass and with carefully suspended, fragile components, transistors were made of solid material, known as solid state components. Almost immediately, transistors could be made smaller than the smallest possible relays or vacuum tubes. This led to dramatically smaller and cheaper computers, like the IBM 608, released in 1957 – the first fully transistor-powered, commercially-available computer. It contained 3000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, every second. IBM soon transitioned all of its computing products to transistors, bringing transistor-based computers into offices, and eventually, homes.

Today, computers use transistors that are smaller than 50 nanometers in size – for reference, a sheet of paper is roughly 100,000 nanometers thick. And they’re not only incredibly small, they’re super fast – they can switch states millions of times per second, and can run for decades.
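
A quick sketch of that size comparison:

    # How many 50 nm transistors span the thickness of a sheet of paper?
    transistor_nm = 50
    paper_nm = 100_000
    print(paper_nm // transistor_nm)  # 2000 - paper is ~2,000x thicker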

A lot of this transistor and semiconductor development happened in the Santa Clara Valley, between San Francisco and San Jose, California. As the most common material used to create semiconductors is silicon, this region soon became known as Silicon Valley. Even William Shockley moved there, founding Shockley Semiconductor, whose employees later founded Fairchild Semiconductor, whose employees later founded Intel - the world’s largest computer chip maker today.

Ok, so we’ve gone from relays to vacuum tubes to transistors. We can turn electricity on and off really, really, really fast. But how do we get from transistors to actually computing something, especially if we don’t have motors and gears? That’s what we’re going to cover over the next few episodes.

Thanks for watching. See you next week.
