Lecture 01

CHAPTER 1 - Introduction to computer systems
Assoc. Prof. Dr. Ezgi Deniz Ülker
European University of Lefke
Department of Computer Engineering
COMP333- Computer Organization and Architecture
A chronology of early computing
• 3000 BC: The first abacus is likely invented and used by the Babylonians.
• 1800 BC: Algorithms are developed by a Babylonian mathematician as an aid in solving numerical problems.
• 1614: Napier's bones, a manually operated calculating device, are created by John Napier for calculating products and quotients of numbers.
A chronology of early computing
• 1642: The first mechanical calculating machine, the Pascaline, is built by Blaise Pascal, who was 19 years old.
• 1833: The Analytical Engine, which is considered the first general-purpose computer, is designed by Charles Babbage.
• 1884: A patent application for a punch-card tabulating machine is filed by Herman Hollerith.
A chronology of early computing
• 1936: Alan Turing presents the notion of a universal machine, later called the Turing machine, capable of computing anything that is computable. The central concept of the modern computer is based on this idea.
• 1938: Hewlett-Packard Co. is founded by David Packard and Bill Hewlett in a garage in California.
• 1939: John Atanasoff and his graduate student Clifford Berry design a computer that can solve 29 equations simultaneously. This marks the first time a computer is able to store information in its main memory.
A chronology of early computing
• 1941: The Z3, the first programmable computer, is built by Konrad Zuse.
• 1945: ENIAC (Electronic Numerical Integrator and Computer) is considered the grandfather of digital computers. It fills a 20-foot by 40-foot room and has 18,000 vacuum tubes, but programming it via plug-board is tedious and slow.
The silicon age
• 1954: The first silicon transistor is developed at Bell Labs.
• 1959: Jack Kilby and Robert Noyce independently develop the first integrated circuits; Kilby was awarded the Nobel Prize for this work in 2000.
• 1964: Douglas Engelbart shows a prototype of the modern computer with a mouse and a GUI. This marks an evolution of the computer from a specialized machine for scientists and mathematicians to technology that is more accessible to the general public.
The silicon age
• 1969: A group of developers at Bell Labs produce UNIX, an operating system that addressed compatibility issues; it was later rewritten in the C programming language. Due to the slow nature of the system, it never quite gained traction among home PC users.
• 1971: Alan Shugart leads a team of IBM engineers who invent the floppy disk, allowing data to be shared among computers.
• 1975: Paul Allen and Bill Gates offer to write software for the Altair 8080 using the new BASIC language. They go on to build their own software company, Microsoft.
• 1976: Steve Jobs and Steve Wozniak start Apple Computer on April Fools' Day.
The Internal organization of computers
• ALU (Arithmetic Logic Unit): executes arithmetic and logical operations. It is the fundamental building block of the central processing unit of a computer. Modern CPUs contain very powerful and complex ALUs.
• CPU (Central Processing Unit): the combination of the control logic, associated registers, and the arithmetic logic unit (the "brains" of the computer).
The Internal organization of computers
• A processor is "the part of a computer (a microprocessor chip) that does most of the data processing"; it carries out a series of simple instructions such as fetch instruction, decode it, fetch required data, execute instruction, and store the results.
The Internal organization of computers:
Von Neumann architecture (1945)
• A processing unit that contains an arithmetic logic unit and processor
registers.
• A control unit that contains an instruction register and program
counter.
• Memory that stores data and instructions.
• External mass storage.
• Input and output mechanisms.
The Internal organization of computers:
Von Neumann architecture (1945)
• It is generally called a stored-program computer because it stores program instructions in electronic memory.
• An instruction fetch and a data operation cannot occur at the same time, because they share a common bus.
• This is called the von Neumann bottleneck and often limits performance.
• Sequential instruction processing does not allow truly parallel execution of a program; parallel execution is instead simulated later by the OS.
The Internal organization of computers:
Harvard architecture (1939)
• Separate storage and signal pathways for instructions and data.
• Instruction storage is entirely contained within the CPU, and there is no direct access to instruction storage as data.
• Having two memories allows parallel access to data and instructions.
• If there is free space in data memory, it cannot be used as instruction memory.
Comparison of Von-Neumann and
Harvard Architectures
• In von Neumann, instructions and data are stored in the same memory, so instructions are fetched over the same data path used to fetch data. This means the CPU cannot simultaneously read an instruction and read/write data from or to memory.
• In Harvard, the CPU can read an instruction and perform a data memory access at the same time, even without a cache. Harvard is faster for a given circuit complexity because instruction fetches and data accesses do not compete for a single memory.
• A modified Harvard architecture is generally used in small embedded computers and signal processing, while a modified von Neumann architecture is preferred for desktops, laptops, workstations, and high-performance computers.
Main Computer Buses
• A bus is a set of wires that connects the computer components. Buses are responsible for the movement of data from input devices to output devices and from/to the CPU and memory.
** One of the main objectives of a bus is to minimize the lines that are needed for communication.**
Data Bus: carries the data that needs processing.
Address Bus: determines where data should be sent.
Control Bus: determines how the data is processed.
• The interconnection diagram for a simple computer is shown in the figure. The majority of system buses are made up of 50 to 100 distinct lines for communication.
Modern Computer components
• Control bus: used by the CPU to communicate with devices contained within the computer. This occurs through physical connections such as cables or printed circuits.
• Address bus: used to identify which device or memory location a transfer refers to. Devices are identified by the hardware address of the physical memory; the address is stored in the form of binary numbers to enable the data bus to access memory storage.
• Data bus: a system within a computer or device, consisting of a connector or set of wires, that provides transportation for data.

A more detailed view


Modern Computer Components
Processor: CPU that executes the programs.
Control unit: It tells the computer’s memory or I/O devices how
to respond to the instructions that have been sent to the
processor.
Execution unit (ALU): performs operations and calculations
instructed by the program.
Super I/O: SIO is an integrated circuit on a computer
motherboard that handles the slower and less prominent
input/output devices. Game port, Infrared, Keyboard and mouse
(non-USB), temperature sensor and fan speed are handled by SIO.
Bus: connects all components together.
Memory: stores programs, data and program instructions.
Frame buffer: An area of memory used to hold the frame of data
that is continuously being sent to the screen. The buffer is the size
of the maximum image that can be displayed and may be a
separate memory bank on the graphics card or a reserved part of
regular memory.
Sound card: An expansion card for producing sound on a
computer that can be heard through speakers or headphones.
Memory Hierarchy
In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. There are 4 major storage levels:
1. Primary storage: registers, cache and RAM.
2. Secondary storage: hard drive.
3. Tertiary storage: cloud.
4. Off-line storage: CD, DVD, USB flash disk.

REGISTERS: small memory units. These are the fields that hold information about how a processor, microcontroller, or integrated processor performs operations, such as the memory section to be checked before an operation, or values found as a result of calculations.

CACHE: When trying to read from or write to a location in main memory, the processor checks whether the data from that location is already in the cache. If so, the processor will read from or write to the cache instead of the much slower main memory (to reduce time or energy).
1. Data cache: to speed up data fetch and store.
2. Instruction cache: to speed up executable instruction fetch.

• SRAM and DRAM are the two modes of RAM; SRAM is comparatively faster than DRAM, hence SRAM is used for cache memory while DRAM is used for main memory.
• Cache is used between main memory and registers so that the slower DRAM is accessed less often.
Memory Hierarchy
Registers and Register Files

• Registers are temporary storage locations inside the CPU that hold data and addresses.
• The register file is the component that contains all the general-purpose registers of the microprocessor.
• Computers are all about operating on information:
- information arrives into memory from input devices.
- memory is a large "byte array" which can hold anything we want.
• Conceptually, the computer takes values from memory, performs whatever operations are needed, and then stores the results back.
• In practice, the CPU operates on registers:
- a register is an extremely fast piece of on-chip memory.
- modern CPUs have between 8 and 128 registers, each 32/64 bits wide.
- data values are loaded from memory into registers before an operation.
- the result goes into a register and is eventually stored back to memory.
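The load/operate/store pattern in the bullets above might be sketched like this; the addresses and register names are made up purely for illustration.

```python
# Sketch of the load-store model: values move memory -> register,
# are operated on in registers, and the result goes back to memory.
memory = {0x00: 5, 0x04: 7, 0x08: 0}     # a tiny "main memory"
registers = {"r0": 0, "r1": 0, "r2": 0}  # small, fast, on-chip

registers["r0"] = memory[0x00]                       # LOAD  r0, [0x00]
registers["r1"] = memory[0x04]                       # LOAD  r1, [0x04]
registers["r2"] = registers["r0"] + registers["r1"]  # ADD   r2, r0, r1
memory[0x08] = registers["r2"]                       # STORE [0x08], r2

print(memory[0x08])  # -> 12
```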
Registers and Register Files

• A register is like a box of chocolates, and a computer usually has many. (The number depends on the architecture, and on whether the computer is diabetic or not.) The slots you see are the registers, and the pieces of chocolate are analogous to the bits, which are zero or one in computer language. So either you can get a dark chocolate (a zero) or a milk chocolate (a one) in there.
Now what can these registers be used for?
- They can be used for storing instructions.
- They can be used for storing results of operations.
- They can be used to manipulate the data stored in them.

So basically any program you write in a high-level language, say Java, is translated into some basic instructions (specific to that particular processor) by the compiler of that language. These basic operations will let you add, subtract, multiply, shift, and compare (think of if conditions in the programming language of your choice).
Registers and Register Files

• Besides the fact that registers are the only reasonable way to interact with most data on many processor architectures, the main advantage of processor registers is that they can be accessed much, much more quickly than main memory.
• If a program has to manipulate more data than can fit in these registers, then it needs to use cache or main memory instead, which slows things down.
Fetch-Decode-Execute Cycle

• The main job of the CPU is to execute programs using the fetch-decode-
execute cycle (also known as the instruction cycle). This cycle begins as
soon as you turn on a computer.
• To execute a program, the program code is copied from secondary storage
into the main memory. The CPU's program counter is set to the memory
location where the first instruction in the program has been stored, and
execution begins. The program is now running.
• In a program, each machine code instruction takes up a slot in the main
memory. These slots (or memory locations) each have a unique memory
address. The program counter stores the address of each instruction and
tells the CPU in what order they should be carried out.
• When a program is being executed, the CPU performs the fetch-decode-
execute cycle, which repeats over and over again until reaching the STOP
instruction.
Fetch-Decode-Execute Cycle

1. The processor checks the program counter


to see which instruction to run next.
2. The program counter gives an address
value in the memory of where the next
instruction is.
3. The processor fetches the instruction
value from this memory location.
4. Once the instruction has been fetched, it
needs to be decoded and executed. For
example, this could involve taking one
value, putting it into the ALU, then taking
Fetch – gets the next program command from a different value from a register and
the computer’s memory
adding the two together.
Decode – deciphers what the program is telling 5. Once this is complete, the processor goes
the computer to do back to the program counter to find the
Execute – carries out the requested action
next instruction.
Store – saves the results to a Register or 6. This cycle is repeated until the program
Memory. ends.
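The cycle can be sketched as a toy interpreter. The three-instruction set used here (LOAD, ADD, STOP) is invented purely for illustration and does not correspond to any real ISA.

```python
# Toy fetch-decode-execute loop over a program held in "memory".
memory = [
    ("LOAD", 0, 30),   # put 30 into register 0
    ("LOAD", 1, 7),    # put 7 into register 1
    ("ADD", 2, 0, 1),  # r2 = r0 + r1
    ("STOP",),
]
registers = [0] * 4
pc = 0  # program counter: address of the next instruction

while True:
    instruction = memory[pc]  # FETCH the instruction the PC points at
    pc += 1                   # PC now points at the next instruction
    opcode = instruction[0]   # DECODE
    if opcode == "LOAD":      # EXECUTE (and store the result)
        registers[instruction[1]] = instruction[2]
    elif opcode == "ADD":
        registers[instruction[1]] = (registers[instruction[2]]
                                     + registers[instruction[3]])
    elif opcode == "STOP":    # repeat until the STOP instruction
        break

print(registers[2])  # -> 37
```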
Computer Languages and Levels
• We program computers to do certain tasks by teaching them to act according to a set of rules (algorithms) whenever they receive input of predefined type(s), in order to produce the expected output. For all such purposes we use programming languages.
• Programming languages can be broadly classified into three categories: machine languages, assembly languages, and high-level languages.
1. MACHINE LANGUAGES: Imagine them as the "native tongue" of the computer, the language closest to the hardware itself. Each unique computer has a unique machine language. A machine language program is made up of a series of binary patterns (e.g., 01011100) which represent simple operations that can be accomplished by the computer (e.g., add two operands, move data to a memory location). Machine language programs are executable, meaning that they can be run directly. Programming in machine language requires memorization of the binary codes and can be difficult for the human programmer.
Computer Languages and Levels

2. ASSEMBLY LANGUAGES: They represent an effort to make programming easier for the human. The machine language instructions are replaced with simple mnemonic abbreviations (e.g., ADD, MOV). Thus assembly languages are unique to a specific computer (machine). Prior to execution, an assembly language program requires translation to machine language. This translation is accomplished by a computer program known as an assembler. Assemblers are written for each unique machine language.
3. HIGH LEVEL LANGUAGES: High-level languages, like C, C++, Java etc., are more English-like and, therefore, make it easier for programmers to "think" in the programming language. High-level languages also require translation to machine language before execution. This translation is accomplished by either a compiler or an interpreter. Compilers translate the entire source code program before execution (e.g., C++, Java). Interpreters translate source code programs one line at a time (e.g., Python). Interpreters are more interactive than compilers.
Number representation
• In computer systems, numbers are represented in binary, octal, and hexadecimal.
• Unsigned numbers represent values from 0 to 2^n − 1.
• For large numbers binary is unwieldy: use hexadecimal (base 16).
• To convert hexadecimal to binary, each hex digit is converted into a group of 4 bits.
• Sign magnitude is a very simple representation of negative numbers. In sign magnitude the first bit is dedicated to representing the sign and hence is called the sign bit.
• Sign bit '0' represents a positive sign ('+') and sign bit '1' represents a negative sign ('−').
For example (6 bits):
+25 = 011001, where 11001 = 25 and the leading 0 means '+'
−25 = 111001, where 11001 = 25 and the leading 1 means '−'
Number representation
• 2's complement: It is used in computing as a method of signed number representation.
• To represent a negative number in this form, first take the 1's complement of the number represented in simple positive binary form, and then add 1 to it.
Example: (8)10 = (1000)2
1's complement of 1000 = 0111
Adding 1 to it: 0111 + 1 = 1000
So, (-8)10 = (1000)2

• Please don't get confused by (8)10 = 1000 and (-8)10 = 1000. With 4 bits, we can't represent a positive number greater than 7, so 1000 represents -8 only. (4 bits is not enough to represent a positive number above 7.)
• In practice, all modern computers use 2's complement.
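The invert-then-add-one recipe can be sketched directly; the function name and the 4-bit default width are illustrative choices.

```python
# Two's complement of a value: take the one's complement
# (flip every bit), then add 1, keeping only `bits` bits.
def twos_complement(value, bits=4):
    ones = value ^ ((1 << bits) - 1)    # flip all `bits` bits
    return (ones + 1) & ((1 << bits) - 1)

print(format(8, "04b"))                   # (8)10  as a pattern: 1000
print(format(twos_complement(8), "04b"))  # (-8)10 as a pattern: 1000
print(format(twos_complement(5), "04b"))  # (-5)10 as a pattern: 1011
```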
Unsigned arithmetic

Bitwise addition rules:
0 + 0 = 0 with no carry
1 + 0 = 1 with no carry
0 + 1 = 1 with no carry
1 + 1 = 0 with carry 1

Example (6 bits):
  011110   (30)
+ 000111   ( 7)
  ------
  100101   (37)
A 5-bit register is not enough to keep this result.
(Figure: computing the 2's complement of 27.)
Representation of signed numbers
Signed Addition Overflow Rule
• If two 2's complement numbers are added and they both have the same sign (both positive or both negative), then overflow occurs if and only if the result has the opposite sign. Overflow never occurs when adding operands with different signs.
i.e. adding two positive numbers must give a positive result, and adding two negative numbers must give a negative result.
RULE: overflow occurs if
(+A) + (+B) = -C
(-A) + (-B) = +C

Example 1: -5 + (-3) = -8 (try using 4 bits, then 5 bits):
4 bits: 1011 + 1101 = (1)1000; discarding the carry leaves 1000 = -8. Both operands and the result are negative, so by the rule there is no overflow: -8 is exactly the most negative value 4 bits can represent.
5 bits: 11011 + 11101 = (1)11000; discarding the carry leaves 11000 = -8.

Example 2: -7 + (-6) = -13 (n = 4):
1001 + 1010 = (1)0011; discarding the carry leaves 0011 = +3. Two negative operands produced a positive result: Overflow!! (The largest negative number we can represent in 4 bits is -8, and -13 is out of range.)
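The overflow rule can be sketched as a small checker; the function name is invented for illustration, and operands are given as raw bit patterns.

```python
# Two's complement signed addition with the overflow rule:
# overflow occurs iff both operands have the same sign bit
# and the result has the opposite sign bit.
def add_signed(a, b, bits):
    mask = (1 << bits) - 1
    result = (a + b) & mask   # drop any carry-out beyond the width
    sign = 1 << (bits - 1)    # mask selecting the sign bit
    overflow = ((a & sign) == (b & sign)) and ((a & sign) != (result & sign))
    return result, overflow

# -7 + (-6) in 4 bits: 1001 + 1010 = 0011 (+3), overflow!
print(add_signed(0b1001, 0b1010, 4))  # -> (3, True)
# -5 + (-3) in 4 bits: 1011 + 1101 = 1000 (-8), no overflow
print(add_signed(0b1011, 0b1101, 4))  # -> (8, False)  (pattern 1000 = -8)
```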
Signed Addition – Self Examples
Bus hierarchy
In practice, there are lots of different buses with different characteristics, e.g. data width, maximum number of devices, and maximum length.
Mainly, there are 2 types of buses:

Synchronous buses:
• A bus used to interconnect devices that comprise a computer system where the timing of transactions between devices is under the control of a synchronizing clock signal.
• A device connected to a synchronous bus must guarantee to respond to a command within a period set by the frequency of the clock signal, or a transmission error will occur.

Asynchronous buses:
• A bus that interconnects devices of a computer system where information transfers between devices are self-timed rather than controlled by a synchronizing clock signal.
• A connected device indicates its readiness for a transfer by activating a request signal. A responding device indicates the completion of this transfer by activating an acknowledge signal.
• The time required to complete the transaction is determined by the response times of the devices and the delays along the interconnecting bus, and may vary for different devices.
Synchronous Buses

• The start of a synchronous protocol is always at a rise of the bus clock signal.
• The last event in a synchronous protocol is usually planned to occur at a fall of the bus clock signal (so that the next protocol can start at a rising edge).
• Advantage: quite simple to implement.
• Disadvantage: if memory is slow, the memory operations will fail.
Asynchronous Buses
• The bus protocol makes sure that devices know each other's state of affairs when they take the next step.
• In an asynchronous protocol, the communicating devices signal each other using
special control signals.
• A simple asynchronous communication protocol can be constructed by using two
control signals:
• MSYNC: Master Sync signal
• Signals that the "master" device is ready (ready for what will depend on the steps of the communication
protocol)
• SSYNC: Slave Sync signal
• Signals that the "slave" device is ready (again, ready for what will depend on the steps of the
communication protocol)
• In data communication, the device that initiates the communication is called
the Master Device. The device that the master device wants to communicate
with is called the Slave Device.
• In a read operation (where the CPU requests that the memory sends it some
data), the CPU is the master device and the memory is the slave device.
Asynchronous Buses
• Asynchronous buses have no shared clock; instead they use handshaking. E.g. CPU as master, memory as slave:
- CPU puts the address onto the address lines and, after they settle, asserts the control lines
- next, CPU asserts /SYN to say everything is ready
- once memory notices /SYN, it fetches the data from the address and puts it onto the bus
- memory then asserts /ACK to say the data is ready
- CPU latches the data, then deasserts /SYN
- finally, memory deasserts /ACK
• More handshaking is needed if the address and data lines are multiplexed.
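The handshake above can be traced as a simple sequence of events. This is a pencil-and-paper sketch of the protocol order, not a model of real bus timing; the function name is invented for illustration.

```python
# Trace of one asynchronous read transaction (CPU = master, memory = slave).
def read_transaction(memory, address):
    events = []
    events.append("CPU: address on bus, control lines asserted")
    events.append("CPU: assert /SYN")
    data = memory[address]   # memory notices /SYN and fetches the data
    events.append("MEM: data on bus, assert /ACK")
    events.append("CPU: latch data, deassert /SYN")
    events.append("MEM: deassert /ACK")
    return data, events

data, trace = read_transaction({0x40: 123}, 0x40)
print(data)        # -> 123
print(len(trace))  # -> 5
```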
Interrupts

• Bus reads and writes are transaction based: the CPU requests something and waits until it happens.
• But, e.g., reading a block of data from a hard disk takes ~2 ms, which might be over 10,000,000 clock cycles!
• Interrupts provide a way to separate CPU requests from device responses.
1. CPU uses the bus to make a request (e.g. writes some special values to addresses decoded by some device).
2. Device goes off to get the info.
3. Meanwhile the CPU continues doing other stuff.
4. When the device finally has the information, it raises an interrupt.
5. CPU uses the bus to read the info from the device.
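The five steps can be simulated roughly as below. The Device class and its latency counter are invented for illustration; a real interrupt is a hardware signal, not a flag checked in a loop.

```python
# Simulation of interrupt-driven I/O: the CPU issues a request,
# keeps working, and only reads the data once the device "interrupts".
class Device:
    def __init__(self, latency):
        self.latency = latency       # cycles until the data is ready
        self.remaining = 0
        self.interrupt = False
        self.data = None

    def request(self):               # step 1: CPU asks for a disk block
        self.remaining = self.latency

    def tick(self):                  # step 2: device works in the background
        if self.remaining > 0:
            self.remaining -= 1
            if self.remaining == 0:
                self.data = "disk block"
                self.interrupt = True   # step 4: raise the interrupt

device = Device(latency=3)
device.request()
other_work = 0
while not device.interrupt:          # step 3: CPU keeps doing other work
    device.tick()
    other_work += 1
result = device.data                 # step 5: CPU reads info from the device
print(result, other_work)  # -> disk block 3
```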
Interrupts
• In digital computers, an interrupt is an input
signal to the processor indicating an event that
needs immediate attention.
• An interrupt signal alerts the processor and
serves as a request for the processor to interrupt
the currently executing code, so that the event
can be processed in a timely manner. If the
request is accepted, the processor responds by
suspending its current activities, saving its state,
and executing a function called an interrupt
handler (or an interrupt service routine, ISR) to
deal with the event.
• This interruption is temporary, and, unless the
interrupt indicates a fatal error, the processor
resumes normal activities after the interrupt
handler finishes.
• Interrupts are commonly used by hardware
devices to indicate electronic or physical state
changes that require attention. Interrupts are
also commonly used to implement computer
multitasking, especially in real time computing.
Systems that use interrupts in these ways are
said to be interrupt-driven.
Direct Memory Access (DMA)
• Direct memory access (DMA) is a feature of computer systems that allows
certain hardware subsystems to access main system memory (RAM)
independent of the CPU.
• DMA allows for a direct transfer between a device and main memory.
• The device driver provides an address in main memory.
• The DMA controller copies the data to or from memory, starting at this location.
• The CPU can now work in parallel so long as the system bus is not required.
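The sequence above might be sketched as follows. The function and callback names are invented for illustration; a real DMA controller is hardware that moves the bytes without any CPU loop at all.

```python
# Sketch of a DMA transfer: the driver hands over a start address,
# the controller copies the device's data into main memory, and an
# "interrupt" callback fires when the transfer is done.
def dma_copy(device_buffer, main_memory, start, on_done):
    for offset, byte in enumerate(device_buffer):  # the DMAC does the copying
        main_memory[start + offset] = byte
    on_done()                                      # interrupt: transfer complete

ram = [0] * 16
done = []
dma_copy([10, 20, 30], ram, start=4, on_done=lambda: done.append(True))
print(ram[4:7])  # -> [10, 20, 30]
print(done)      # -> [True]
```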
Direct Memory Access (DMA)
• Without DMA, when the CPU is using programmed I/O, it is typically
fully occupied for the entire duration of the read or write operation,
and is thus unavailable to perform other work.
• With DMA, the CPU first initiates the transfer, then it does other
operations while the transfer is in progress, and it finally receives
an interrupt from the DMA controller (DMAC) when the operation is
done.
• This feature is useful at any time that the CPU cannot keep up with
the rate of data transfer, or when the CPU needs to perform work
while waiting for a relatively slow I/O data transfer.
End of the chapter.
Thank you all.
Assoc. Prof. Dr. Ezgi Deniz Ülker
