Unit 2 Computer Architecture
● A word? - A humble 5 letter word has 26^5 possible combinations. That’s almost 12 million.
● A letter? - 26 possibilities (52 if we distinguish between upper and lower case)
● A numerical digit? - Still 10 possibilities
● A light bulb! - just 2 possible states: on or off.
In Computer Science, this on or off value is known as a bit, and can be thought of as the literal presence or absence of an electrical charge within a transistor. In programming we tend to simplify it to a 1 or a 0.
Our modern computer processors are made of millions, even billions, of these transistors, each of which is on or off at a given moment in time, and it is on this foundation that our entire world of computing is built! Computers take the simple on/off values of transistors and scale them up to create the complexity we know and love.
Given each bulb can have two states, “on or off”, or 2 possible combinations, how many total possible
combinations are possible with 8 light bulbs?
If we replace the off/on of electrical charge with 0/1, we can establish a number system. The number system
built around bits is known as the binary number system.
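You can experiment with these ideas directly in Python (a small sketch using only the built-in bin() and int() functions):
# Python
# Each bit doubles the number of combinations: 8 bits gives 2**8 of them.
print(2 ** 8)              # 256 possible combinations for 8 light bulbs / bits

# Converting between decimal and binary with built-in functions.
print(bin(77))             # 0b1001101
print(int("11001000", 2))  # 200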
Hexadecimal
● Hexadecimal is a base 16 numbering system. Base 16 means it has 16 numerals for each column.
● The least significant column is the 1s, the next column is the number of 16s, and the column after that is the number of 256s (16^2).
● Hexadecimal is even more convenient than octal because each hexadecimal numeral represents a 4-digit binary number, so a two-digit hexadecimal number can convey one full byte!
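To see the "two hex digits per byte" relationship for yourself, here is a small Python sketch using the built-in hex(), int() and format() functions:
# Python
print(hex(255))                   # 0xff -> one byte is exactly two hex digits
print(int("ff", 16))              # 255
print(format(0b10110101, "02x"))  # b5 -> each group of 4 bits maps to one hex numeral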
Practice conversions!
140
71
0110 1010
1001 1111
300
123
222
A4
8F
B9
2. Representing data
2.1 ASCII
As powerful as computers can be using bits and bytes, humans don't intuitively operate on that level; we use characters and words, so there needed to be a way for letters (and thereby words) to be represented within a computer.
To achieve this, an arbitrary lookup table called the ASCII table was agreed on in the 1960s. This table still forms the basis of much of today's character representation. The significance of this table is that a "string of letters" is really just a sequence of binary that, according to this lookup table, represents alphanumeric text.
1. Weiman, D. (2010), ASCII Conversion Chart: http://web.alfredstate.edu/faculty/weimandn/miscellaneous/ascii/ascii_index.html
Practice: Convert the following ASCII to its binary representation.
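If you want to check your conversions, Python's built-in ord() exposes the ASCII code of a character (a small sketch, using format() to show the binary):
# Python
for letter in "Hi":
    code = ord(letter)                     # ASCII code of the character
    print(letter, code, format(code, "08b"))
# H 72 01001000
# i 105 01101001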
2.2 Unicode
ASCII was incredibly useful and opened up a world of computing to a lot of people, but it has significant limitations. ASCII was built on an 8 bit / 1 byte conversion table, which means there are only 256 possible characters available. While this is generally fine for Latin based languages such as English, it restricts how multilingual computing is capable of being.
The solution was the development of the UNICODE standard, first published in 1991. The original UNICODE standard is a 16 bit lookup table (65,536 possible values). While this means it takes 2 bytes to store every letter, the cost of data storage has fallen far enough that this is not a major problem. The upside is that characters from Asian and other non-Latin scripts can now be represented.
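Python strings are Unicode by default, so you can explore this directly; the sketch below uses the UTF-16 encoding, which stores each of these characters in the 2 bytes described above:
# Python
word = "日本"                          # two characters outside the ASCII range
for ch in word:
    print(ch, ord(ch))                 # the Unicode code point of each character
print(len(word.encode("utf-16-be")))   # 4 -> 2 bytes per character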
In the examples we have been looking at so far, we have used the entire byte to represent a number: 8 bits to
represent values 0 to 255. In reality computers need to be able to cater to negative values as well, so the
most significant bit is actually reserved to indicate the sign (positive or negative) of the number. This system
is known as two's complement, or having a signed integer. To use the full size of the binary number for a
positive only value is known as having an unsigned integer.
0000 0011 3
0000 0010 2
0000 0001 1
0000 0000 0
1111 1111 -1
1111 1110 -2
1111 1101 -3
1111 1100 -4
Notice that this means the number range is greater for the negatives than the positives. For an 8 bit signed integer, the decimal values range from -128 to +127.
Converting two's complement binary to negative decimal (example using 1110 1110):
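To find the magnitude of a negative two's complement value, invert every bit and then add 1. For 1110 1110: inverting gives 0001 0001 (17), adding 1 gives 0001 0010 (18), so 1110 1110 represents -18.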
Addition of 68 + 12
1 1 (carry over row)
0100 0100 (68)
+ 0000 1100 (12)
0101 0000 (80)
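You can check binary arithmetic like this in a couple of lines of Python:
# Python
total = 0b01000100 + 0b00001100      # 68 + 12
print(total, format(total, "08b"))   # 80 01010000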
For our purposes, you don't need to be able to do manual conversions with floating point numbers; you just need to understand the concept, its limitations, and its workarounds (as Tom Scott outlines in his video).
A good illustration of the problems with floating point numbers is to run the following code:
# Python
a = 0.1
b = a + a + a
print(b) # What do you expect to print? What actually prints?
What is the cause of the unexpected output from the above code snippet? As a programmer, how should you
mitigate this in your applications?
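One common mitigation (a sketch of a few options, not the only approach) is to avoid comparing floats for exact equality and round only for display, or to use the decimal module when exact decimal arithmetic matters:
# Python
import math
from decimal import Decimal

a = 0.1
b = a + a + a
print(b == 0.3)              # False - never rely on exact float equality
print(math.isclose(b, 0.3))  # True  - compare with a tolerance instead
print(round(b, 2))           # 0.3   - round for display
print(Decimal("0.1") * 3)    # 0.3   - exact decimal arithmetic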
Internally, floating point numbers are stored like scientific notation by recording two integers: the first represents the value component (the significand), the second represents the power of the exponent. In this way they can store very large and very small numbers, but with a limited degree of accuracy. The 64 bit floating point number uses 1 bit for the sign (positive/negative), 11 bits for the exponent, and 52 bits for the significand.
For example, the speed of light may be represented as 3.0 x 10^8 (m/s) using scientific notation, so as a floating point number this would be the integers 3 and 8. The catch is to remember everything is using binary numbers. This means the bits in the "value" component represent 1/2, then 1/4, then 1/8 and so forth; and the exponent is for powers of 2 rather than 10. Our example number of 3.0 x 10^8 is actually 1.00011110000110100011 x 2^28.
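If you are curious, you can peek at the three fields of a 64 bit float from Python with the struct module (a sketch; the 1/11/52 field split is the one described above):
# Python
import struct

bits = struct.unpack(">Q", struct.pack(">d", 3.0e8))[0]  # the raw 64 bits of the float
sign     = bits >> 63
exponent = ((bits >> 52) & 0x7FF) - 1023   # stored with a bias of 1023
fraction = bits & ((1 << 52) - 1)
print(sign, exponent)                 # 0 28 -> positive, and 2^28 as in the example above
print(format(fraction, "052b")[:20])  # 00011110000110100011 -> first bits of the significand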
What is the value of the number?
Colours in the computer are actually split into RGB – Red, Green, Blue – with one unsigned byte (256 values) for each.
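A tiny Python sketch of the RGB idea: three unsigned bytes, commonly written as a six-digit hex code:
# Python
r, g, b = 255, 165, 0                 # one unsigned byte (0-255) per channel
print(f"#{r:02X}{g:02X}{b:02X}")      # #FFA500 - the web colour code for orange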
2.6 Time
Computers store time internally as the number of seconds that have elapsed since an arbitrarily agreed epoch (zero-point) of midnight, 1st January 1970 UTC.
32 bit computers take their name from the fact that their internal calculations are performed using an integer size of 32 bits. A signed 32 bit integer has a range of −2,147,483,648 to 2,147,483,647.
That means that a little after 2 billion seconds have elapsed from the start of the 1970s, a 32 bit computer would be unable to accurately store an integer that represents the time! In fact, it would clock over from being 1970 plus 2 billion seconds to being 1970 minus 2 billion seconds! When do we reach this limit?
03:14:07 UTC on 19 January 2038!
The subsequent second, any computer still running a 32 bit signed system will clock over to 13 December
1901, 20:45:52.
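You can verify both of these dates with a couple of lines of Python:
# Python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
limit = 2**31 - 1                           # largest signed 32 bit integer
print(epoch + timedelta(seconds=limit))     # 2038-01-19 03:14:07+00:00
print(epoch + timedelta(seconds=-2**31))    # 1901-12-13 20:45:52+00:00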
While your personal computer may be a 64 bit system, so you might think you are safe, there are a lot of systems we all still rely on that have 32 bit internals. This is particularly true of embedded systems in transportation infrastructure, electrical grid control, pumps for water and sewer systems, internal chips in cars and other machinery, and even a lot of Android mobile phones (though admittedly the chances of one of those still being in use in 20 years are slim!). If you research the "2038 problem" you'll discover just how many critical systems are still vulnerable.
Practice
Looking for more practice? This website has a number of online quizzes for you to convert between number
systems and practice your binary arithmetic.
http://www.free-test-online.com/binary/binary_numbers.htm
3. Logic gates & circuits
So we've seen that binary can be used to store numbers, text, and colour codes, but we can use binary for much more than storing values; binary also forms the basis of all the logic functionality that occurs within computers.
We do this through what are commonly known as logic gates. All gates can be simplified down to the three basic gates of AND, OR and NOT, but there are six gates we will learn to love in this course:
AND OR NOT
NAND NOR XOR
The following video provides a great introduction into how you can easily create your own logic gates and
how they work.
The six logic gates, their symbols, and their truth tables are as follows:
To help you try to remember what the various symbols look like, it might be helpful to remember the ANDroid
way (cheesy I know)...
Like PEMDAS in mathematics, an order of precedence exists for equations involving gates. The order of
precedence is:
1. NOT
2. AND (NAND)
3. OR (NOR, XOR)
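Python's boolean operators happen to follow the same order of precedence (not, then and, then or), so you can experiment with it directly:
# Python
A, B, C = True, False, True
print(not A and B or C)        # parsed as ((not A) and B) or C -> True
print(not (A and (B or C)))    # explicit brackets change the grouping -> False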
Logic equations can either use the written name of the relevant logic gate, or be expressed using boolean notation as per the following table. Unfortunately there are several different notations that you may come across.
As a result, the following are all ways you could represent an AND gate:
X = A AND B
X = AB
X = A & B
X = A ∩ B
X = A • B
The truth table for the AND gate:
A B X
0 0 0
0 1 0
1 0 0
1 1 1
You may be required to convert from any one of these methods to any other method,
i.e.: logic diagram <--> logic equation <--> truth table
Practice
Question 1. Convert this truth table to logic diagram and logic equation
A B C X
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 1
Question 2. Convert this truth table to logic diagram and logic equation
A B C X
0 0 0 0
0 0 1 1
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1
Question 3. Determine the truth table and logic equation.
A B C X
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 0
1 1 1 1
Question 6. Find the values for Y in the truth table
A B C X
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 0
1 1 1 1
X = (A | B) & (not C | B)
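If you want to check your answers, a short Python sketch can enumerate the truth table of any logic equation (here using the equation above, with | and & standing in for OR and AND):
# Python
from itertools import product

print("A B C X")
for A, B, C in product([0, 1], repeat=3):
    X = (A | B) & ((not C) | B)     # the equation from Question 6
    print(A, B, C, X)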
4. The CPU
"A computer processor does moronically simple things — it moves a byte from memory to register, adds a
byte to another byte, moves the result back to memory. The only reason anything substantial gets completed
is that these operations occur very quickly. To quote Robert Noyce, ‘After you become reconciled to the
nanosecond, computer operations are conceptually fairly simple.’”
(Code: The Hidden Language of Computer Hardware and Software by Charles Petzold)
The functions it can perform are not complicated; its power comes from its speed. These functions are simply created through millions or billions of logic gates working together.
What is the speed of a typical CPU today? What does that speed “mean”?
A CPU can add, subtract, multiply, divide, load from memory, and save to memory. From those simple building blocks we get the computers we have today.
4.1 Structure of the CPU
The internal structure of a CPU can be represented by the pink area in the diagram below:
● Fetch - Each instruction is stored in memory and has its own address. The processor takes this
address number from the program counter, which is responsible for tracking which instructions the
CPU should execute next.
● Decode - All programs to be executed are translated into Assembly instructions. Assembly code must
be decoded into binary instructions, which are understandable to your CPU. This step is called
decoding.
● Execute - While executing instructions the CPU can do one of three things: Do calculations with its
ALU, move data from one memory location to another, or jump to a different address.
● Store - The CPU must give feedback after executing an instruction and the output data is written to
the memory.
From https://turbofuture.com/computers/What-are-the-basic-functions-of-a-CPU
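To make the cycle concrete, here is a very small Python sketch of a fetch-decode-execute loop; the instruction set is entirely made up for illustration and does not correspond to any real CPU:
# Python
# A toy machine: each instruction is (operation, operand).
# Real CPUs decode binary opcodes, not tuples - this is only a sketch of the cycle.
program = [
    ("LOAD", 5),     # put 5 into the accumulator
    ("ADD", 3),      # add 3 to the accumulator
    ("STORE", 0),    # write the accumulator to memory address 0
    ("HALT", None),
]
memory = [0] * 8      # a tiny block of RAM
acc = 0               # accumulator register
pc = 0                # program counter

while True:
    op, arg = program[pc]     # FETCH the instruction the program counter points at
    pc += 1
    if op == "LOAD":          # DECODE and EXECUTE
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":       # STORE the result back to memory
        memory[arg] = acc
    elif op == "HALT":
        break

print(acc, memory)            # 8 [8, 0, 0, 0, 0, 0, 0, 0]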
Storage | Speed | Size | Cost | Typical use
CPU register | Very fast | Very small (a few bytes) | Very expensive (built into the CPU) | For immediate use
Cache | Very fast | Small (a few MB) | Very expensive | Immediate use
RAM | Fast | Large-ish (8 GB) | About USD 1c/MB | Short term (seconds to minutes)
SSD | Moderate | Large (100s of GB) | About USD 0.2c/MB | Long term, non-volatile
HDD | Slow | Large (TBs) | Cheap! About USD 0.005c/MB | Long term, non-volatile
An Operating System (OS) can be defined as a set of programs that manages computer hardware resources and provides common services for application software. The operating system acts as an interface between the hardware and the programs requesting I/O.
For now, there is just one question: What are the main functions of an operating system?
What are the hardware resources that require managing? What are some of the services an OS provides to
applications? Rather than repeating myself, see my notes in unit 6 for this as it is all addressed there.
Finally, after we have an operating system to manage the hardware, we get to run our application software to
"do stuff"!
What are some of the common application uses available with computing? Some common categories
include:
● Word processors
● Spreadsheets
● Database management systems
● Email
● Web browsers
● Computer aided design (CAD)
● Graphic processing
● Video & audio editing
Names of a few of the main ones for each category? A key differentiator between each?
What are some of the more common features across most applications?
Which features are provided by the OS, and which by the application?
Past paper questions for review
(refer to separate document)