
1.1 Characteristics of Contemporary Processors
ALU – The Arithmetic Logic Unit completes all of the arithmetic and logical
operations required by the CPU.

CU – The control unit directs the operation of the CPU. It does so by:

 Controlling and coordinating the activities of the CPU
 Accepting the next instruction
 Decoding instructions
 Controlling the flow of data inside and outside of the CPU
 Loading data to the correct memory address

Registers – Small memory cells which operate at very high speeds. All of
the arithmetic, logical and shift operations carried out by the CPU take
place in these registers.

PC – The Program Counter stores the address of the next instruction to
be executed.

Accumulator – Stores the results of calculations.

MAR – The Memory Address Register stores the memory address of the data
that is to be read from, or written to, memory.

MDR – The Memory Data Register stores the actual data which has just
been read from, or is about to be written to, memory.

CIR – The Current Instruction Register holds the current instruction that is
being executed divided up into opcode and operand.

Buses – A set of parallel wires which connect two or more components in
a computer. The data bus, address bus and control bus are collectively
known as the system bus.

Data Bus – A bi-directional bus which transmits data and instructions
between components.

Address Bus – A bus which transmits the memory locations to which data
is to be sent or from which it is to be retrieved. The width of the
address bus determines the number of addressable memory locations: an
n-bit address bus can address 2^n locations.

Control Bus – A bi-directional bus which transmits control signals
between internal and external components. It manages the use of the
address bus and data bus, and provides status information between
components.
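The relationship between address-bus width and addressable memory described above can be sketched in a few lines of Python (the bus widths chosen are just illustrative):

```python
# An n-bit address bus can carry 2**n distinct addresses, so the number
# of addressable memory locations grows exponentially with bus width.
def addressable_locations(bus_width_bits):
    return 2 ** bus_width_bits

print(addressable_locations(16))  # 65536
print(addressable_locations(32))  # 4294967296
```

Note that doubling the bus width squares, rather than doubles, the number of addressable locations — which is why "proportional" is not quite the right word for this relationship.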

Pipelining (A-Level Only):

 The process of completing the fetch, decode and execute cycles of
three separate instructions simultaneously, holding appropriate data
in a buffer in close proximity to the CPU. While one instruction is
being executed, another can be decoded and a third fetched.
 The aim of pipelining is to reduce the amount of time for which the
processor is kept idle. It is separated into instruction pipelining
and arithmetic pipelining. Instruction pipelining splits the
instruction into three stages – fetching, decoding and executing.
Arithmetic pipelining breaks down arithmetic operations and overlaps
them as they are performed.
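The time saving from a three-stage pipeline can be sketched by counting clock cycles (a simplified model that assumes one cycle per stage and no pipeline stalls):

```python
def cycles_without_pipelining(n_instructions, stages=3):
    # Each instruction must finish all three stages before the next begins.
    return n_instructions * stages

def cycles_with_pipelining(n_instructions, stages=3):
    # The first instruction takes `stages` cycles to fill the pipeline;
    # after that, one instruction completes every cycle.
    return stages + (n_instructions - 1)

print(cycles_without_pipelining(10))  # 30
print(cycles_with_pipelining(10))     # 12
```

For long instruction streams the pipelined cycle count approaches one cycle per instruction, which is the idealised speed-up pipelining aims for.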

FDE Cycle:

 Fetch:
o The address is copied from the PC to the MAR
o The PC is incremented by 1
o The address held in the MAR is placed onto the address bus
o A read signal is sent along the control bus
o RAM copies the data from the location specified on the address
bus onto the data bus
o The data on the data bus is copied into the MDR
o The data is then copied from the MDR into the CIR
 Decode:
o The instruction in the CIR is split up into the opcode and the
operand
o The instruction is then decoded by the CU
 Execute:
o The instruction is executed.
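The steps above can be sketched as a toy simulator. The memory layout, the opcodes (LOAD/ADD/HALT) and the use of a Python dictionary for memory are illustrative assumptions, not a real instruction set:

```python
# Minimal sketch of the fetch-decode-execute cycle for a toy machine.
memory = {
    0: ("LOAD", 10),   # copy memory[10] into the accumulator
    1: ("ADD", 11),    # add memory[11] to the accumulator
    2: ("HALT", None), # stop execution
    10: 5,
    11: 7,
}

pc = 0        # Program Counter
acc = 0       # Accumulator
running = True

while running:
    # Fetch: PC -> MAR, memory contents -> MDR -> CIR, PC incremented.
    mar = pc
    mdr = memory[mar]
    cir = mdr
    pc += 1
    # Decode: split the instruction in the CIR into opcode and operand.
    opcode, operand = cir
    # Execute.
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "HALT":
        running = False

print(acc)  # 12
```

Note how the registers (PC, MAR, MDR, CIR, accumulator) each play exactly the role described in the definitions earlier in these notes.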

Factors affecting performance:

 Clock speed – The clock speed is the number of clock cycles the
processor completes per second, measured in hertz (the time taken
for one cycle is the clock period). The clock speed is determined
by the system clock, an electronic device which generates signals
switching between 0 and 1. All processor activities begin on a
clock pulse, and each CPU operation starts as the signal changes
from 0 to 1.
 Number of Cores – A core is an independent processor that can run
its own fetch-execute cycle. This means that a device with multiple
cores can complete multiple fetch-execute cycles simultaneously
meaning it is theoretically faster. However, not all software can
utilise all of the cores available, so it is not always the case that
having more cores makes a device faster.
 Amount and type of cache – Cache is small, very fast memory located
close to (or on) the CPU. More cache, and faster levels of cache,
mean the processor can avoid slower fetches from main memory, which
improves performance.
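As a rough worked example of how clock speed relates to execution time, assuming the simplification of one instruction per clock cycle (the figures below are made up for illustration):

```python
# Execution time = instruction count / clock speed, under the
# simplifying assumption of one instruction completed per cycle.
instructions = 2_000_000_000     # 2 billion instructions
clock_speed_hz = 2_000_000_000   # 2 GHz = 2 billion cycles per second

time_seconds = instructions / clock_speed_hz
print(time_seconds)  # 1.0

# Doubling the clock speed halves the execution time, all else equal.
print(instructions / (2 * clock_speed_hz))  # 0.5
```

In practice the gain is smaller: memory access times, cache misses and instruction mix all affect real performance, which is why clock speed alone is a crude comparison between processors.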

Von Neumann Architecture vs Harvard Architecture

Definition:
 Von Neumann – This architecture includes the basic components of the
computer and processor. The core principle is that data and
instructions share the same memory and buses. It is built on the
stored program concept.
 Harvard – This architecture uses physically separate memories and
buses for data and instructions.

Advantages:
 Von Neumann – Cheaper to develop, as the CU is less complicated and
hence easier to design. Programs can be optimised in size.
 Harvard – Quicker execution, as data and instructions can be fetched
in parallel. The two memories can be of different sizes, which can
make more efficient use of space.

Use Case:
 Von Neumann – General purpose computers.
 Harvard – Embedded systems.

Contemporary Processing – Contemporary processors use a combination of
both Von Neumann and Harvard architecture. Von Neumann architecture is
used with main memory, whilst Harvard architecture is used for cache.

Reduced Instruction Set Computers (RISC):

 In these processors there is a small instruction set, with each
instruction taking up approximately one line of machine code and
therefore completing in roughly one clock cycle.
Complex Instruction Set Computers (CISC):

 These processors have a large instruction set, with the aim of
accomplishing as much as possible in as few lines of assembly code
as possible. Complex instructions are built into the hardware. CISC
processors are generally used in desktop and laptop computers,
whereas RISC processors are common in embedded systems.
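The RISC/CISC trade-off can be sketched with a toy example: computing 5 × 4 with a single complex instruction versus a loop of simple ones. The "instruction sets" here are invented purely for illustration:

```python
def cisc_multiply(a, b):
    # CISC: one complex MULT instruction does the whole job in hardware.
    trace = ["MULT"]
    return a * b, trace

def risc_multiply(a, b):
    # RISC: only simple instructions are available, so multiplication
    # becomes repeated ADDs -- more instructions are executed, but each
    # one is simple and completes in roughly one clock cycle.
    result, trace = 0, []
    for _ in range(b):
        result += a
        trace.append("ADD")
    return result, trace

print(cisc_multiply(5, 4))  # (20, ['MULT'])
print(risc_multiply(5, 4))  # (20, ['ADD', 'ADD', 'ADD', 'ADD'])
```

Both approaches reach the same answer; the difference is whether the complexity lives in the hardware (CISC) or in a longer sequence of simple instructions (RISC).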

Multi-Core and parallel systems:

 Multi-core processors are able to distribute workload across
multiple processor cores, thus achieving significantly higher
performance by performing several tasks in parallel.
 They are therefore known as parallel systems.
 Many personal computers are dual-core or quad-core.

Using parallel processing:

 The software has to be written to take advantage of multiple cores.
 For example, browsers such as Google Chrome can run several
concurrent processes.
 Using tabbed browsing, different cores can work simultaneously on
different tabs.
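The idea of software written to exploit parallelism can be sketched with Python's standard `concurrent.futures` module. The "tab rendering" function is a made-up placeholder; a ThreadPoolExecutor is used for simplicity, though genuinely CPU-bound Python work would usually need a ProcessPoolExecutor to run on separate cores:

```python
from concurrent.futures import ThreadPoolExecutor

def render_tab(name):
    # Placeholder for the work a browser does for one tab.
    return f"rendered {name}"

tabs = ["site-a", "site-b", "site-c", "site-d"]

# Submit all tabs to a pool of workers; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_tab, tabs))

print(results)
```

The key point mirrors the notes above: the program only benefits from multiple cores because it was explicitly structured as independent tasks that can run concurrently.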
Co-processor systems:

 A co-processor is an extra processor used to supplement the
functions of the primary processor, such as a GPU.
 It may be used to perform floating point arithmetic, graphics
processing, etc.
 It generally carries out a limited range of functions.

GPU:

 A Graphics Processing Unit (GPU) is a specialised electronic circuit
which is very efficient at manipulating computer graphics and image
processing.
 It consists of thousands of small, efficient cores designed for
parallel processing.
 It can process large blocks of visual data simultaneously.
 A GPU can act together with a CPU to accelerate scientific,
engineering and other applications.
