Chapter 3 Von Neumann O Level

Chapter 3 discusses the Von Neumann architecture, focusing on the Central Processing Unit (CPU) as the core component of modern computers, which includes the Control Unit (CU), Arithmetic and Logic Unit (ALU), and registers. It explains the Fetch-Decode-Execute cycle, memory operations, and the role of buses in data transmission, alongside factors affecting CPU performance such as clock speed, cache memory, and multi-core configurations. The chapter concludes with an overview of instruction sets and embedded systems, highlighting their significance in computing.


CHAPTER 3

Von Neumann Architecture (Processor)

Note: This is the intellectual property of https://navidsaqib.com/o-level-cs-2210/
Central Processing Unit (CPU):
The heart of any contemporary computer system, including tablets and smartphones, is the
central processing unit (CPU), sometimes referred to as a microprocessor or processor. The
CPU is frequently implemented as an integrated circuit on a single chip. The CPU is
responsible for executing or processing every instruction and every piece of data in a
computer program. As depicted in Figure 3.1, the CPU is composed of:
●​ control unit (CU)
●​ arithmetic and logic unit (ALU)
●​ registers and buses.

Von Neumann architecture:


Early computers required significant human intervention: data had to be supplied to the
machine while it was actually running, and neither programs nor data could be stored. In the
mid-1940s, John von Neumann developed the concept of the 'stored program computer', which
has served as the foundation of computer design ever since. Before then, none of the
following key innovations of the von Neumann architecture were present in computers:
●​ the concept of a processor, known as the central processing unit (CPU)
●​ the CPU having direct access to the memory
●​ computer memory that could store both data and programs
●​ stored program instructions that could be executed in any order.

There are many diagrams of von Neumann CPU architecture in other textbooks and on the
internet. The following diagram is one example of a simple representation of von Neumann
architecture:
Components of the central processing unit (CPU):
The three primary components of the CPU are the Control Unit (CU), the Arithmetic & Logic
Unit (ALU), and the system clock.

Arithmetic & Logic Unit (ALU):


The Arithmetic & Logic Unit (ALU) performs the logic operations (AND, OR) and arithmetic
operations (+, −, and shifting) required during the execution of a program. A computer may
contain multiple ALUs to carry out different tasks. Multiplication and division are
performed using addition, subtraction, and logical left or right shifts.
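The shift-and-add technique mentioned above can be sketched in Python. The `multiply` function below is an illustrative model of what an ALU does with only addition and shifting, not a description of any particular hardware:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only addition and
    logical shifts, mimicking how an ALU builds multiplication
    out of simpler operations."""
    result = 0
    while b > 0:
        if b & 1:        # if the lowest bit of b is set...
            result += a  # ...add the current shifted value of a
        a <<= 1          # logical left shift: a doubles each pass
        b >>= 1          # logical right shift: move to the next bit of b
    return result

print(multiply(13, 11))  # 143
```

Each pass examines one bit of the multiplier, so a 16-bit multiplication needs at most 16 additions and shifts.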

Control Unit (CU):


The control unit reads an instruction from memory. The Program Counter (PC) contains the
address of the location where the instruction is stored. The instruction is then interpreted
using the Fetch–Decode–Execute cycle (see later in this section). During that process,
signals are generated along the control bus to tell the other computer components what to
do. The control unit ensures that program instructions and the flow of data are coordinated
across the system. To guarantee this crucial synchronization, the system clock generates
timing signals on the control bus; without the clock, the computer would simply fail!
All of the data and programs that the CPU needs to access are stored in RAM. RAM is also
commonly called the Immediate Access Store (IAS). The CPU temporarily loads data and
programs from backing storage (such as a hard disk drive) into RAM, because reading from and
writing to RAM is far faster than reading from and writing to a backing store. As a result,
any important data that an application needs is kept temporarily in RAM to greatly speed up
processing.

Registers:
The registers are among the most essential parts of the von Neumann system. Registers may
have a specific function or be broad. Only the special purpose registrations will be taken into
account. Table provides a summary of all the registers utilized in this textbook. The Fetch–
Decode–Execute cycle provides a more thorough explanation of how to use these registers (see
later in this section).

Special purpose registers

Register | Abbreviation | Function
Current Instruction Register | CIR | stores the instruction currently being decoded and executed
Memory Address Register | MAR | stores the address of the memory location currently being read from or written to
Memory Data Register | MDR | stores the data just read from memory or about to be written to memory
Program Counter | PC | stores the address of the next instruction to be fetched

System buses and memory:


Earlier on, Figure 3.1 referred to some components labelled as buses. Figure shows how these
buses are used to connect the CPU to the memory and to input/ output devices.
System buses and memory

Memory:
Computer memory is made up of a number of partitions; each partition consists of an address
and its contents. In the table, each address is 8 bits and each content is 8 bits; in a real
computer memory, both the address and its contents are much larger.
Every location in memory is uniquely identified by its address, and the binary value stored
in each location is its contents.
Now let us look at two examples of read and write operations to and from memory, carried
out using the MAR and MDR registers.
Consider the READ operation first. We will use the section of memory shown in Table 3.2. To
read the contents of memory address 1111 0001, the two registers are used as follows:
●​ the address of location 1111 0001 to be read from is first written into the MAR (memory
address register):

●​ a ‘read signal’ is sent to the computer memory


●​ the contents of memory location 1111 0001 are then put into the MDR (memory data
register):

Now let us consider the WRITE operation. Again, we will use the memory section
shown in Table 3.2. Suppose this time we want to show how the value 1001 0101 is
written into memory location 1111 1101:
●​ the data to be stored is first written into the MDR (memory data register):
●​ this data has to be written into location with address: 1111 1101; so this address is
now written into the MAR:

●​ finally, a ‘write signal’ is sent to the computer memory and the value 1001 0101 will then
be written into the correct memory location.
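The READ and WRITE steps above can be sketched as a small Python model. Since Table 3.2 itself is not reproduced here, the stored values are illustrative; the step-by-step use of MAR and MDR follows the bullet points:

```python
# A tiny model of 8-bit memory; the initial contents are illustrative,
# using the addresses from the worked example (Table 3.2 is not shown here).
memory = {0b11110001: 0b10101010, 0b11111101: 0b00000000}

def read(address: int) -> int:
    mar = address      # step 1: the address is written into the MAR
    mdr = memory[mar]  # steps 2-3: 'read signal' sent; contents copied into the MDR
    return mdr

def write(address: int, data: int) -> None:
    mdr = data         # step 1: the data is written into the MDR
    mar = address      # step 2: the target address is written into the MAR
    memory[mar] = mdr  # step 3: 'write signal' stores the MDR value at the MAR address

write(0b11111101, 0b10010101)
print(format(read(0b11111101), "08b"))  # 10010101
```

Note how the WRITE operation loads the MDR before the MAR, exactly as in the bullet points above.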

Input and output devices:


In Section 3.2, the input and output devices will be discussed in further detail.
They are the primary means of loading and unloading data from computer systems. Input
devices, such as keyboards, touch screens, and microphones, transform external data into a
format that computers can comprehend and analyze. Output devices, such as printers, displays,
and loudspeakers, display computer processing results in a format that is comprehensible to
humans.

(System) buses:
Computers employ (system) buses as parallel transmission components: each wire in the bus
carries a single bit of data. The three buses used in the von Neumann architecture are the
address bus, the data bus, and the control bus.

Address bus:
The address bus, as its name implies, carries addresses around the computer system. The
address bus is unidirectional: bits can flow in one direction only, from the CPU to memory.
This prevents addresses from being carried back to the CPU, which would be an undesirable
feature.
The width of a bus is important. The wider the bus, the more memory locations can be
directly addressed at any given time; for example, a bus of width 16 bits can address 2^16
(65,536) memory locations, whereas a bus of width 32 bits can address 2^32 (4,294,967,296)
memory locations. Even this is not large enough for contemporary computers, but the
technologies behind even wider buses are outside the scope of this book.
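The relationship between bus width and addressable locations is simply 2 raised to the width in bits, which a one-line Python function makes concrete:

```python
def addressable_locations(bus_width_bits: int) -> int:
    """Number of memory locations a bus of the given width can
    directly address: 2 raised to the power of the width."""
    return 2 ** bus_width_bits

print(addressable_locations(16))  # 65536
print(addressable_locations(32))  # 4294967296
```

Doubling the bus width squares, rather than doubles, the number of addressable locations, which is why a 32-bit bus addresses so much more memory than a 16-bit one.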

Data bus:
The data bus is bidirectional: data can travel along it in both directions. This means data
can move between the CPU and memory, and between the CPU and input/output devices. It is
important to remember that the data can be a number, an address, or an instruction. As with
the address bus, the width of the data bus is crucial: the wider the bus, the longer the
words that can be carried. (A word is a group of bits treated as a single unit; the most
common word lengths are 16, 32, or 64 bits.) Longer words improve the overall performance
of the computer.

Control bus:

The control bus is also bidirectional. It transmits signals from the control unit (CU) to
every other part of the computer. Its typical width is eight bits; since it only carries
control signals, there is no real need for it to be any wider.

Fetch–Decode–Execute cycle:
Before executing a sequence of instructions, the CPU fetches the necessary data and
instructions from memory and places them in the appropriate registers, using both the
address bus and the data bus. Each instruction then needs to be decoded before finally
being executed. This is known as the Fetch–Decode–Execute cycle.

Fetch:
The MDR can hold both instructions and data. During the fetch stage of the cycle, the next
instruction is fetched from the memory address currently held in the MAR and stored in the
MDR. The contents of the MDR are then copied into the Current Instruction Register (CIR).
Finally, the PC is incremented (by 1), ready for the next instruction to be processed.
Decode:
The instruction is then decoded so that it can be interpreted in the next part of the cycle.
Execute:
The CPU passes the decoded instruction as a set of control signals to the appropriate
components within the computer system. This allows each instruction to be carried out in its
logical sequence.
Figure shows how the Fetch–Decode–Execute cycle is carried out in the von Neumann computer
model.
Fetch–Decode–Execute cycle flowchart
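The whole cycle can be sketched as a toy simulator in Python. The LDA/ADD/STA/HLT mnemonics, addresses and data values here are illustrative inventions, not a real instruction set; the point is the movement of values between PC, MAR, MDR and CIR:

```python
# Toy memory: (opcode, operand) pairs for instructions, plain numbers for data.
memory = {
    0: ("LDA", 4),   # load the contents of address 4 into the accumulator
    1: ("ADD", 5),   # add the contents of address 5 to the accumulator
    2: ("STA", 6),   # store the accumulator at address 6
    3: ("HLT", 0),   # halt
    4: 20, 5: 22, 6: 0,
}

pc, acc = 0, 0
while True:
    mar = pc               # FETCH: PC copied into the MAR
    mdr = memory[mar]      #        instruction read from memory into the MDR
    cir = mdr              #        MDR copied into the CIR
    pc += 1                #        PC incremented, ready for the next instruction
    opcode, operand = cir  # DECODE: split the instruction into opcode and operand
    if opcode == "LDA":    # EXECUTE: carry out the decoded instruction
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STA":
        memory[operand] = acc
    elif opcode == "HLT":
        break

print(memory[6])  # 42
```

Notice that the PC is incremented during the fetch stage, before execution, exactly as described in the fetch paragraph above.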

Cores, cache and internal clock:


We will now consider the factors that affect a CPU's performance, starting with the role of
the system clock. The clock establishes the synchronized clock cycle for all computer
activities; as previously indicated, timing signals are sent along the control bus to keep
everything perfectly synchronized. The higher the clock speed, the faster the computer
processes instructions (a typical contemporary figure is 3.5 GHz, that is, 3.5 billion
clock cycles per second).
However, a higher clock speed does not necessarily improve a computer's overall performance,
even though the processing speed may have increased. Other factors must be taken into
account, such as:
1.​ The width of the address bus and data bus, both of which affect computer
performance.
2.​ Consideration should be given to overclocking. By going into the BIOS (Basic
Input/Output System) and adjusting the settings, one can modify the clock speed.
Nevertheless, employing a clock speed greater than what the machine was intended to
support can cause issues, such as:
i.​ execution of instructions outside design limits can lead to seriously unsynchronized
operations (i.e. an instruction is unable to complete in time before the next one is
due to be executed) – the computer would frequently crash and become unstable.
ii.​ overclocking can lead to serious overheating of the CPU again leading to unreliable
performance.

3.​ Cache memory utilization can also enhance CPU performance. Cache memory offers
substantially faster data access times than RAM since it is housed inside the CPU, as
opposed to RAM. CPU performance is increased by using cache memory to store
frequently used instructions and data that must be accessed more quickly. A CPU that
wants to read data from memory will first look in the cache, and if that doesn't have the
necessary information, it will then go on to main memory or RAM. The CPU performs
better the greater the cache memory size.
4.​ Increasing the number of cores in a computer can enhance its performance.
One core is made up of an ALU, a control unit, and registers. Many computers have
either a dual core CPU, which has two cores, or a quad core CPU, which has four
cores. Increasing the number of cores reduces the need to keep raising clock
speeds. However, because the CPU must communicate with every core, doubling the
number of cores does not automatically double the computer's performance; some of
the potential gain is lost.
For instance, a dual core CPU uses a single channel for communication between its
two cores, which limits part of the performance gain that could otherwise be
possible:

while, with a quad core CPU, communication between all four cores requires six
channels, reducing the potential gain even further:
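The channel counts quoted above (one channel for two cores, six for four) match a simple model in which every core needs a direct channel to every other core: n(n−1)/2 channels for n cores. A short Python check, under that assumption:

```python
def channels(cores: int) -> int:
    """Channels needed if every core has a direct link to every
    other core: n * (n - 1) / 2.  An illustrative model consistent
    with the dual-core and quad-core figures in the text."""
    return cores * (cores - 1) // 2

print(channels(2))  # 1
print(channels(4))  # 6
print(channels(8))  # 28
```

The channel count grows roughly with the square of the core count, which is one reason performance does not scale linearly as cores are added.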
It is therefore necessary to examine each of these elements while evaluating computer
performance. Recapitulating these ideas:
●​ increasing bus width (data and address buses) increases the performance and speed of a
computer system.
●​ increasing clock speed will potentially increase the speed of a computer.
●​ a computer’s performance can be changed by altering bus width, clock speed and use of
multi-core CPUs.
●​ use of cache memories can also speed up a CPU’s performance.

Instruction set:
Instructions are operations that are decoded in sequence by a computer system. Each
operation instructs the CPU's ALU and CU. An operation is made up of an opcode and an
operand:

●​ The opcode informs the CPU what operation needs to be done.
●​ The operand is the data which needs to be acted on, or it can refer to a register or a
location in memory.

Only a limited number of opcodes can be used; these are referred to as the instruction set,
because the computer must understand an operation before it can be carried out. Every piece
of computer software is a set of instructions (which must be translated into binary). The
fetch–decode–execute cycle is the process by which the CPU processes each instruction one
after the other.
The X86 instruction set is an example of a widespread CPU standard found in many
contemporary systems. If the computer is built around the X86 CPU, then all designs will have
nearly the same instruction set, even though various computer manufacturers will use their
own internal circuitry designs. For instance, while being built on very distinct electrical
architectures, the X86 instruction sets used by AMD Athlon and Intel Pentium CPUs are nearly
identical.
Instruction sets are the low-level language instructions that tell the CPU how to carry out
an operation; be careful not to confuse them with program code. Interpreters or compilers
are required to translate program code into an instruction set that a computer can
understand. ADD, JMP, and LDA are a few examples of instruction set operations.
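The opcode/operand split can be illustrated with a small decoder. The mnemonic list here is a hypothetical toy set (real instruction sets such as X86 are vastly larger), but the splitting logic is the same idea:

```python
# A hypothetical instruction set for illustration only.
INSTRUCTION_SET = {"ADD", "JMP", "LDA", "STA", "HLT"}

def decode(instruction: str) -> tuple:
    """Split a textual instruction into its opcode and (optional) operand,
    rejecting opcodes the machine does not understand."""
    parts = instruction.split(maxsplit=1)
    opcode = parts[0]
    if opcode not in INSTRUCTION_SET:
        raise ValueError(f"Unknown opcode: {opcode}")
    operand = parts[1] if len(parts) > 1 else None
    return opcode, operand

print(decode("LDA 23"))  # ('LDA', '23')
print(decode("HLT"))     # ('HLT', None)
```

An instruction whose opcode is outside the set is rejected, mirroring the point that a CPU can only execute operations its instruction set defines.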
Embedded systems:
A combination of hardware and software designed to carry out a specific set of tasks is
called an embedded system. The hardware may be electronic, electrical, or
electro-mechanical. Embedded systems are built on:
microcontrollers: a CPU together with some RAM, ROM and other peripherals, all on one
single chip