Types of Computer Architecture

The document discusses various types of computer architecture, including Harvard, Von Neumann, ARM, Service Oriented Architecture (SOA), and Pipeline Architecture, highlighting their characteristics and uses. It also covers Intel and AMD architectures, as well as other notable architectures like SPARC and PowerPC. Additionally, it provides an introduction to computer architecture, detailing the components and hierarchical structure of a computer system.

Types of Architecture

Harvard Architecture:

A computer architecture with physically separate storage and signal paths for instructions and data.

As of 2016, most processors implement such separate signal paths for performance reasons, but in reality they implement a modified Harvard architecture, so that they can support tasks such as loading a program from a disk drive as data for later execution.

Modern uses of Harvard architecture:

- Texas Instruments TMS320 C55x processors.
- Atmel Corp.'s AVR microcontrollers and Microchip Technology, Inc.'s PIC.

Von Neumann Architecture:

Traditionally, microprocessor systems are based on this architecture, in which the central processing unit (CPU) is connected to a single main memory (almost always just RAM) where program instructions and data are stored. This memory is accessed through a single bus system (control, address and data).

In a system with Von Neumann architecture, the size of a data unit or instruction is determined by the width of the bus that connects the memory to the CPU. Thus an 8-bit microprocessor with an 8-bit bus will have to handle data and instructions in units of one or more 8-bit bytes. If it needs to access an instruction or data item that is more than one byte in length, it will have to perform more than one memory access.

Having a single bus makes the microprocessor slower in its response, since it cannot fetch a new instruction from memory until the data transfers of the previous instruction are completed.
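
The extra memory traffic described above can be sketched with a toy model (the class and method names here are illustrative, not any real CPU's interface): fetching a 16-bit value over an 8-bit bus costs two bus transactions.

```python
# Toy model of a Von Neumann memory attached to an 8-bit bus.
# Every access moves exactly one byte, so wider values require
# multiple bus transactions (a sketch, not a real ISA).

class Bus8:
    def __init__(self, memory):
        self.memory = memory      # single shared memory for code and data
        self.accesses = 0         # count of bus transactions

    def read_byte(self, addr):
        self.accesses += 1
        return self.memory[addr]

    def read_word16(self, addr):
        # A 16-bit read needs two 8-bit accesses (little-endian here).
        lo = self.read_byte(addr)
        hi = self.read_byte(addr + 1)
        return (hi << 8) | lo

bus = Bus8([0x34, 0x12, 0xFF])
value = bus.read_word16(0)
print(hex(value), bus.accesses)   # 0x1234 after 2 bus accesses
```

A 32-bit quantity on the same bus would cost four accesses, which is the penalty the paragraph above refers to.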

ARM Architecture:

ARM is a 32-bit RISC architecture developed by ARM Holdings; with the arrival of its ARMv8-A version it is also a 64-bit architecture.

The ARM architecture is supported by a wide range of embedded and real-time operating systems, including Windows CE, Windows RT, .NET Micro Framework, Symbian, ChibiOS/RT, FreeRTOS, eCos, Integrity, Nucleus PLUS, MicroC/OS-II, QNX, RTEMS, BRTOS, RTXC Quadros, ThreadX, Unison Operating System, uTasker, VxWorks, MQX, and OSE.

Systems that conform to the UNIX standard specification and support the ARM architecture include Solaris and Apple OS X (in progress).
Service Oriented Architecture (SOA):

Service Oriented Architecture (SOA) is an architectural paradigm for designing and developing distributed systems. SOA solutions are created to meet business objectives, which include ease and flexibility of integration with legacy systems, direct alignment to business processes (reducing implementation costs), innovation in customer services, and agile adaptation to change, including early reaction to competitive pressure.

It allows the creation of highly scalable information systems that reflect the organization's business, while providing a well-defined way of exposing and invoking services (commonly, but not exclusively, web services), which facilitates interaction between different in-house and third-party systems.

SOA provides a methodology and framework for documenting business capabilities and can support integration and consolidation activities.

There are several standards related to web services, including the following:

 XML
 HTTP
 SOAP
 REST
 WSDL
 UDDI

Note, however, that a SOA system does not need to use these standards to be “Service Oriented”, but their use is highly recommended.
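
The core idea of exposing and invoking services through well-defined names can be sketched in-process, without any of the web-service standards listed above (the `register`/`invoke` names and the service names are invented for illustration):

```python
# Minimal in-process sketch of service orientation: capabilities are
# registered under well-defined names and invoked only through the
# registry, never called directly (illustrative; not SOAP or REST).

services = {}

def register(name, func):
    services[name] = func

def invoke(name, **kwargs):
    if name not in services:
        raise LookupError(f"unknown service: {name}")
    return services[name](**kwargs)

# A legacy system and a third-party system expose capabilities uniformly:
register("billing.total", lambda items: sum(items))
register("crm.greet", lambda customer: f"Hello, {customer}")

print(invoke("billing.total", items=[10, 20, 5]))  # 35
print(invoke("crm.greet", customer="ACME"))        # Hello, ACME
```

In a real SOA deployment the registry role is played by something like UDDI and the invocation by SOAP or REST calls, but the decoupling between caller and implementation is the same.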

Pipeline Architecture:

Based on filters, it consists of transforming a data flow into a process comprised of several sequential phases, the input of each being the output of the previous one.

This architecture is very common in programs written for the command interpreter, since commands can easily be concatenated with pipes.

It is also a very natural architecture in the functional programming paradigm, since it is equivalent to the composition of mathematical functions.
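
The pipe-and-filter idea above can be sketched as plain function composition (the stage names are invented for illustration): each stage consumes the previous stage's output, like shell commands chained with '|'.

```python
# Pipe-and-filter sketch: a pipeline is the composition of its stages,
# each stage's input being the previous stage's output.

def pipeline(*stages):
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# Three filters: split text into words, keep the long ones, count them.
split_words = lambda text: text.split()
keep_long   = lambda words: [w for w in words if len(w) > 3]
count       = lambda words: len(words)

process = pipeline(split_words, keep_long, count)
print(process("the quick brown fox jumps"))  # 3
```

This is the same structure as the shell command `cat file | grep ... | wc -l`, which is why the architecture is so natural for command interpreters.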

32 AND 64-BIT INTEL AND AMD ARCHITECTURES:

IA-32 (Intel Architecture, 32-bit): the instruction set architecture of Intel's 80x86 processors and early AMD microprocessors. IA-32 is a 32-bit extension, first implemented in 1985 on the Intel 80386 processor, of the older 16-bit Intel 8086, 8088, 80186 and 80286 processors, and the common denominator for all subsequent 80x86 designs (80486, 80586, 80686). For this reason it is also known generically as the i386, x86-32 or x86 architecture, although the last name (x86), together with x86-16, is often taken to include the 16-bit Intel processors as well.

Intel's IA-64 (Intel Architecture, 64-bit), released in 1999, is not directly compatible with the IA-32 instruction set (except under software emulation), unlike the Intel 64 and AMD64 architectures, which are. IA-64 is the architecture used by the Itanium and Itanium 2 line of processors, which is why it was initially known as the Intel Itanium Architecture.
OTHER ARCHITECTURES:

In addition to the Intel and AMD architectures, there are many others; among the best known:

- SPARC (Scalable Processor ARChitecture): a RISC architecture originally designed by Sun Microsystems, found in processors from Sun (now Oracle).

- PowerPC (usually abbreviated PPC): a RISC-type computer architecture developed by IBM, Motorola and Apple. Processors in this family were primarily used in Apple Macintosh computers until 2006 and in several IBM models.

- ARM (Advanced RISC Machines): a family of RISC microprocessors designed by Acorn Computers and developed by Advanced RISC Machines Ltd., a company spun off from the former.

- PA-RISC: a microprocessor architecture developed by Hewlett-Packard and VLSI Technology Operation. It is based on the RISC model and on PA (Precision Architecture), and is also often referred to as HPPA (Hewlett Packard Precision Architecture). The first PA-RISC devices were 32-bit; the design was updated in 1996, resulting in version 2.0 of the architecture, a 64-bit architecture.

- Alpha: a microprocessor architecture designed by DEC and introduced in 1992 under the name AXP. It has a 64-bit RISC instruction set, but can also handle 32-bit, 16-bit and 8-bit data.
1. Introduction to Computer Architecture

Computer architecture is the conceptual design and fundamental operational structure of the system that makes up a computer. That is, it is a model and functional description of the design requirements and implementations for the various parts of a computer, with special interest in how the central processing unit (CPU) works internally and accesses memory addresses.

The architecture of a computer describes the location of its components and makes it possible to determine the capabilities of a computer system, in a given configuration, to perform the operations for which it will be used. The basic architecture of any complete computer is made up of only 5 basic components: processor, RAM, hard drive, input/output devices and software.

1.1 Initial Concepts of Computer Architecture

A computer is a complex synchronous sequential system that processes binary information, using only the logical digits '1' and '0'. These binary logic values correspond to electrical voltage levels, such that a logic '1' corresponds to a high level of around 5 volts and a logic '0' to a low level close to 0 volts; the exact voltages depend on the technology used by the computer's devices.

1.1.1 Processor
It is the brain of the system, responsible for processing all data and information. Although it is a very sophisticated device, it cannot do anything on its own; to make it work we need additional components such as memories, disk drives, input/output devices and programs. The processor or central core is made up of millions of transistors and electronic components of microscopic size. It performs its tasks on timescales of nanoseconds, making the transistors it contains switch at clock rates on the order of MHz. Binary information enters through peripheral devices that serve as an interface between the outside world and the user. These peripherals translate the information the user enters into electrical signals, which are interpreted as ones and zeros; machine language uses binary code, which the computer interprets directly.

A hierarchical system is a set of interrelated systems, each organized hierarchically, one after another, down to the lowest level of elementary subsystem. A possible classification is:

1. Component Level. The elements at this level are P-type and N-type impurity diffusions in silicon, crystalline polysilicon, and metal layers that are used to build transistors.

2. Electronic Level. The components are transistors, resistors, capacitors and diodes built with the diffusions of the previous level. This very-large-scale integration (VLSI) technology is the one used in the manufacture of integrated circuits. At this level, logic gates are built from transistors.

3. Digital Level. Logic gates, flip-flops and other combinational and sequential modules are described by ones and zeros. This level is the application of Boolean algebra and the properties of digital logic.

4. RTL Level. The register transfer level (RTL) is the preferred level for describing computers. Typical elements at this level of abstraction are registers and arithmetic combinational modules.

5. PMS Level. This level is the highest in the hierarchy. The acronym PMS comes from Processor, Memory, Switch. Elements at this level are buses, memories, processors and other high-level modules.

1.2 Classic Architecture of a Von Neumann Model Computer

Von Neumann architecture has its origins in the work of mathematician John von Neumann, developed with John Mauchly and John P. Eckert and presented in 1945 at the Moore School of the University of Pennsylvania, United States, where the EDVAC (Electronic Discrete Variable Automatic Computer) was introduced. From this arose the architecture of the program stored in memory and the sequential fetching and execution of instructions. Generally speaking, a computer has to perform 3 functions:

 Data Processing
 Data Storage
 Data transfer

Just as a PC (Personal Computer) must process data, transforming the information received, it must also store data as the final result of that processing. It must likewise transfer data between itself and its environment. The architecture of a computer refers to the organization of its elements into modules with a defined functionality and the interaction between them. The diagram in Figure 1.1 shows the basic Von Neumann structure that a computer must have for its correct operation.

FIGURE 1.1: BASIC STRUCTURE OF A COMPUTER.

· CPU (acronym for central processing unit): The central processing unit is the heart of the computer. It controls the flow of data, processes it, and governs the sequencing of actions throughout the system. To do this it needs an external oscillator or clock that synchronizes operations and sets the processing speed; the clock governs the evolution of the CPU and determines its operating speed. The CPU clock frequency is limited by the technology of the CPU and of the computer as a whole, including its peripherals, graphics cards, memories, etc. Excessive use of the computer's resources can therefore result in overheating that partially or totally damages the CPU.

· Memory: is responsible for data storage.

· Input/Output: transfers data between the outside environment and the computer. It contains
the peripheral controllers that form the interface between the peripherals, memory and the
processor.

· Interconnection system: Buses; this is the mechanism that allows the flow of data between
the CPU, memory and input/output modules. Here the electrical signals are propagated and
interpreted as logical ones and zeros.

· Peripherals: These devices are those that allow data to enter the computer, and the
information to be output once it has been processed. A group of peripherals can be
understood as a set of transducers between external physical information and binary
information interpretable by the computer. Examples of these devices are the keyboard,
monitor, mouse, hard drive, and network cards.

1.2.1 Central Processing Unit

It controls the operation of the elements of a computer. Since the system is powered by an electric current, it does not stop processing information until the power supply is cut off. The CPU is the most important part of the processor, because it performs all the computer's operations and calculations. The CPU has an internal structure of its own, shown in Figure 1.2.

FIGURE 1.2: CPU STRUCTURE AND ITS CONNECTION WITH MEMORY.

· Control Unit (UC): The control unit is responsible for reading from memory the instructions to be executed and for sequencing the access to data and the operations to be performed by the processing unit. The UC generates the control signals that establish the flow of data throughout the computer and internally within the CPU. An instruction is nothing more than a combination of ones and zeros containing a binary operation code. To execute an instruction, the UC stores it in a special register, interprets its operation code and executes the appropriate sequence of actions; in short, it decodes the instruction.

· Arithmetic Logic Unit or ALU (for its acronym in English Arithmetic Logic Unit): It is the part
of the CPU in charge of performing data transformations. Governed by the UC, the ALU
consists of a series of modules that perform arithmetic and logical operations. The UC is
responsible for selecting the operation to be performed by enabling the data paths between
the various ALU operators and between the internal registers.

· Internal Registers: storing the results of instruction execution in main memory would be slow and would place an excessive amount of traffic on the interconnection with memory, which would lower performance. The internal configuration of the CPU and the result of the last ALU operation are likewise stored in internal registers. The main registers of a CPU are:

1. Program counter: responsible for storing the address of the next instruction to be executed.

2. Instruction Register: stores the instruction captured in memory and the one being executed.

3. Status Register: composed of a series of bits that report the result obtained in the last ALU
operation.

4. Accumulator Register: some CPUs perform arithmetic operations in a register called an accumulator; its function is to store the results of arithmetic and logical operations.

The cycle to execute any instruction is divided into a fetch cycle and an execution cycle, as illustrated in the schematic in Figure 1.3. In the former, the CPU generates the appropriate signals to access memory and read the instruction; the latter is similar, except that the actions performed depend on the opcode of each instruction.
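
The registers and the fetch/execute loop described above can be sketched with a toy CPU (the three-instruction LOAD/ADD/HALT instruction set is invented for illustration, not a real ISA):

```python
# Toy fetch/execute loop using the registers described above:
# PC (program counter), IR (instruction register), ACC (accumulator),
# and a zero flag standing in for the status register.

LOAD, ADD, HALT = 0, 1, 2

def run(memory):
    pc, acc, zero_flag = 0, 0, False
    while True:
        ir = memory[pc]            # fetch cycle: read the instruction at PC
        pc += 1                    # PC now holds the next instruction's address
        opcode, operand = ir
        if opcode == LOAD:         # execute cycle: decode opcode, then act
            acc = operand
        elif opcode == ADD:
            acc += operand
        elif opcode == HALT:
            break
        zero_flag = (acc == 0)     # status register reflects the last result
    return acc, zero_flag

program = [(LOAD, 5), (ADD, 7), (HALT, 0)]
print(run(program))  # (12, False)
```

The fetch phase is identical for every instruction; only the execute phase differs according to the opcode, which is exactly the distinction drawn in Figure 1.3.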

FIGURE 1.3: CYCLES OF THE VON NEUMANN MACHINE

1.2.2 Memory

The program and data that the CPU is going to execute are stored in memory. Instructions are binary codes interpreted by the control unit; data is likewise stored in binary form.

The various storage technologies differ in their data access times; therefore, a hierarchical design of the system memory is used so that data can be accessed quickly. The memory system is designed on the principle of making memory appear faster, with speeds closer to that of the CPU. The main memory of computers has a structure similar to that shown in the diagram in Figure 1.4: it can be considered as a matrix of cells in which the memory can access data randomly.

FIGURE 1.4: DIAGRAM OF A RANDOM ACCESS MEMORY.


This matrix is organized into words, each of which is assigned an address that indicates its position. Each word is made up of a series of cells that are accessed in parallel; each cell stores one bit, and together they hold the stored instructions and data.
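
The word/cell organization above can be sketched directly (the class and word size are illustrative):

```python
# Sketch of a random-access memory as a matrix of words: each word
# has an address, and each word is a row of cells (bits) that are
# read and written in parallel.

WORD_BITS = 8

class RAM:
    def __init__(self, words):
        self.cells = [[0] * WORD_BITS for _ in range(words)]  # all cells start at 0

    def write(self, addr, value):
        # one bit per cell, most significant bit first
        bits = format(value, f"0{WORD_BITS}b")
        self.cells[addr] = [int(b) for b in bits]

    def read(self, addr):
        # every cell of the addressed word is accessed in parallel
        return int("".join(map(str, self.cells[addr])), 2)

ram = RAM(words=16)
ram.write(3, 0b10110001)
print(ram.read(3))   # 177 — any address can be read in the same time
```

"Random access" here means exactly this: the cost of `read` does not depend on which address is requested.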

1.2.3 Input/Output

As we know, a computer has input and output devices such as those contained in the cabinet: hard drive, motherboard, CD or DVD drives, etc. The main problem is that their technology and characteristics differ from those of the CPU, so they need an interface through which to communicate with it; the processor and the peripheral controller exchange data across this interface.

Figure 1.5 shows how each peripheral controller has a unique address in the system. The I/O interface decodes the address bus to detect when the CPU is addressing it. Addressing is very similar to that of memories. The data bus is used to pass data between the peripheral and the memory, and special control lines coordinate and synchronize the transfer.

FIGURE 1.5: DIAGRAM OF AN INPUT/OUTPUT INTERFACE.

1.2.4 Interconnection System: Buses.

The connection of the various components of a computer, such as hard drives, motherboards, CD drives, keyboards, mice, etc., is carried out through buses. A bus is defined as a shared communication link that uses multiple wires to connect subsystems. Each line is capable of transmitting an electrical voltage that represents a '1' or a '0'. When there are several devices on the same bus, one of them can send a signal that will be processed by the other modules. If two devices send data at the same time, the result is an error known as bus contention, so access must be arbitrated and one of them denied. Depending on functionality, buses are divided into:

· Data buses: are used to transmit data between the different devices of the computer.

· Address buses: used to indicate the position of the data that needs to be accessed.

· Control bus: used to select the sender and receiver in a bus transaction.

· Power bus: used to provide different voltages to devices.
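
A shared bus and the contention rule described above can be sketched like this (a toy model; real buses resolve contention with arbitration hardware rather than an exception):

```python
# Sketch of a shared bus: only one device may drive the lines per
# cycle; two simultaneous senders produce bus contention.

class SharedBus:
    def cycle(self, senders):
        # senders: list of (device, value) pairs trying to drive the bus
        if len(senders) > 1:
            raise RuntimeError("bus contention: access denied")
        if senders:
            device, value = senders[0]
            return value           # the single sender's value is seen by all
        return None                # idle cycle

bus = SharedBus()
print(bus.cycle([("cpu", 0x2A)]))          # 42
try:
    bus.cycle([("cpu", 1), ("dma", 2)])    # two drivers at once
except RuntimeError as e:
    print(e)                               # bus contention: access denied
```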

1.2.5 Peripherals.

This refers to all those devices needed to supply data to the computer or to display results. Peripherals are connected via a special bus to their controller or I/O module.

Among the input peripherals are the keyboard, mice and digitizers, among others. Other peripheral devices fundamental to the interaction between the user and the computer are video terminals and graphics cards.

1.3 Computer technology.

Technological trends advance over time, and in computing and electronics ever-faster digital integrated circuits keep appearing, which also carries a high monetary cost; upgrades to a computer system can be relatively expensive depending on the technology being implemented. Integrated circuits as we know them today keep getting smaller thanks to advances in miniaturization sciences such as micro- and nanotechnology: devices that once occupied an entire room are now so small that they fit in the palm of our hands. What is meant by systems moving forward is captured by the following terms:

· Technology: the transistors originally used by computer devices were bipolar junction transistors, or BJTs, which in turn gave rise to technological families such as TTL. This technology has the advantage of supplying current easily and quickly, but the disadvantage of high energy consumption compared to CMOS; this second technology is based on field-effect transistors and is currently the choice for manufacturing most CPUs. Another technology, BiCMOS, combines BJT and CMOS transistors in a single technological process, trying to combine the advantages of both.

· Speed: refers to the response time and the inevitable delays that occur in operation, which make even the simplest ICs dependent on the technology used. The trade-off is that parallel execution requires more circuitry, so the circuit becomes larger.

· Integration Scale: CMOS ICs (Integrated Circuits) are built by lithography, applying masks that project the silhouettes of the polygons that form the transistors. The wafer is chemically treated and the transistors are formed in the various diffusions; these are divided into segments that can be microns in size or smaller. The better and more precise the process of creating the diffusions, the smaller the sizes will be, and therefore more logic can be included on the same silicon surface.

· Size: depends on the manufacturing of the IC, whether it is simple or how complex it may be
for the operations for which it was programmed.

1.3.1 Memory Circuits.

Information is stored in memory devices that hold it in binary form so that the data can later be recovered. These devices form a hierarchy in which the fastest are closest to the CPU and the slowest are further away. The most important parameters for characterizing memory circuits are:

· Access Time: is the time required to recover information from memory devices.

· Information density: depends on the technology used since they occupy a different space for
each bit of information.

· Volatility: refers to the loss of information if the circuit is not powered; this information must
be automatically recovered when the power is reconnected and the computer begins to
operate.

a) Asynchronous static RAM.

It is a volatile, fast-access memory that can store and read information. This characteristic makes it well suited for use as main memory in computers. The SRAM storage cell contains 4 MOS transistors that hold a 1 or a 0 for as long as the circuit is powered.

b) Synchronous static RAM

It uses the same technology as SRAM, making them volatile and quick to access. The
difference is that there is a clock signal that synchronizes the reading and writing process. The
external cache memories of some microprocessors are of this type to facilitate data access in
burst mode and speed up the process of accessing memory blocks.

c) Dynamic RAM.

DRAM cells store their contents on capacitors accessed through a single transistor, rather than in cells with multiple transistors. The problem is that the capacitors are discharged by transistor leakage current, and DRAMs are also slow compared to SRAM. They have a matrix-shaped structure, with the address multiplexed into a row part and a column part, and they offer faster access modes in which the high (row) part of the address is supplied only once; this access mode is called page mode and speeds up access by not requiring the full address for each access.
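
Page mode can be sketched with a toy timing model (the cost constants are invented; real DRAM timings come from the device datasheet):

```python
# Sketch of DRAM page-mode access: the address is multiplexed into a
# row part and a column part; accesses that stay inside the open row
# (the "page") skip the row-select step.

ROW_SELECT_COST, COLUMN_COST = 3, 1    # arbitrary time units

class DRAM:
    def __init__(self):
        self.open_row = None
        self.time = 0

    def access(self, row, col):
        if row != self.open_row:       # full access: supply row, then column
            self.time += ROW_SELECT_COST
            self.open_row = row
        self.time += COLUMN_COST       # page-mode hit: column only
        return (row, col)

mem = DRAM()
for col in range(4):
    mem.access(row=7, col=col)         # one row select, four column accesses
print(mem.time)                        # 3 + 4*1 = 7, instead of 4*(3+1) = 16
```

Four accesses within one page cost a single row select plus four column cycles, which is where the speedup comes from.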

d) ROM memories

Read-only memories: once they have been written or programmed, the contents of the cells can only be read. They are usually used to store the code that allows systems to start up, and are manufactured for mass-market applications with silicon masks. There are 3 types of ROM memories that can be programmed in the lab, some of which can be erased.

· PROM memory: ROM memories that are electrically programmable using a special programmer that generates high-voltage pulses, which physically melt fuses, recording the contents permanently on the device. Their disadvantage is that they cannot be erased and that a special card is required to read them.

· EPROM memory: also programmed with a programming device connected to the computer. The difference from PROM is that these can be erased, using UV light. For this purpose, EPROMs have a small transparent quartz window through which the cell matrix is exposed, as shown in Figure 1.6. Once programmed, this window must be covered with a label to prevent the memory from being accidentally erased.

FIGURE 1.6 EPROM MEMORY

· EEPROM memory: these are programmable and erasable memories using a special device
that is connected to the computer.

e) FLASH memory

These memories are read in much the same way as SRAM, but writing is different: a block must first be erased and then written. This type of memory has an internal instruction register and a state machine that generates the signals needed to erase/write a block or the entire memory.

Memory is divided into several layers or levels with a structure whose shape may remind us of a pyramid. Table 1.1 below shows the typical size and access time of each level of this hierarchy.

Name            Maximum Size         Access Time
Registers       Up to 200 Bytes      Less than 10 nanoseconds
Cache Memory    Up to 512 KB         Between 10 and 30 nanoseconds
Main Memory     More than 1 Gigabyte Between 30 and 100 nanoseconds

Table 1.1. Layers into which memory is divided.

1.4 The Best Configuration.

The first thing we must take into account when configuring our equipment is what it
will be used for, that is, what programs will be used on it. For example, a PC used in an office
that handles Word, Excel and the Internet does not need to have a powerful processor, but it is
essential to provide it with good RAM and a fast hard drive for reading and writing data.
However, when a computer is intended for heavy applications or for games with three-
dimensional graphics, the main thing is to have a fast processor combined with a good, fast
graphics card.

1.4.1 The Motherboard

It is the main component and therefore has to be chosen with the utmost care, so that the computer has excellent quality and performance in the execution of tasks. When purchasing a motherboard, we must check what type of processor it supports and whether it has enough expansion slots for the peripherals we want to install. A board is made using a technique called MPCB (Multiple Layer Contact Board), in which several boards are stacked as if they were one; this type of board must be manufactured with great care, since a minimal error in the position of the tracks would cause interference and make the board unstable. The quality of a motherboard does not depend solely on the brand, but we should take note of the brand we are purchasing, since it will be easier to find drivers for the devices on the motherboard on the manufacturer's website.

1.4.2 RAM memory

If the computer has little RAM, the system will have to use the hard drive to store the programs that do not fit in RAM; this is called Virtual Memory, and the resulting overhead can make the system very slow. On the other hand, installing more RAM than is needed is a waste, because it will not make the system faster; you will notice that you need more when the system becomes slow. For example, for simple office applications the minimum amount of RAM would be 64MB, but the ideal would be 128MB; if you run several programs at the same time, 256MB is enough. Actual RAM usage also depends on the Operating System: over the years these evolve, running increasingly complex applications, which is why more RAM is needed. The more RAM the PC has, the faster it will remain for longer, since applications grow more complex over time and make the system more demanding.

1.4.3 Processor

It depends on what the computer is going to be used for, for example if it is going to
be used for games it would be worth investing in a processor like an Athlon or Pentium 4. If it
is for small applications, a Duron processor is more than enough with enough RAM.

1.4.4 Hard Drive

It is important to know the access time, rotation speed and density of the hard drive. The access time determines how long it takes the read head to find the data to be read. Rotation speed is measured in rpm, revolutions per minute. The density, or amount of data that fits on each platter, also determines performance, since more densely packed data can be located more quickly.

1.4.5 Graphic card

There are 2D and 3D cards. There are also 3D accelerator cards that must be used
with a regular 2D card. There are also “combo” graphics cards, which perform both 2D and 3D
functions. Today, even on-board graphics cards (referring to devices that are integrated into
the motherboard) come with 3D capabilities, although their performance is nowhere near that
of a quality graphics card.

1.4.6 Sound Card

It has no influence on the performance of the equipment; it only determines the audio quality. For normal use, Sound Blasters with Yamaha chipsets are generally used. More
expensive sound cards make all the difference if you want to work in music editing, or if you
want to listen to MIDI music in the highest quality. There are also 3D sound cards, such as the
Sound Blaster Live, which generate sounds that seem to come from all directions. This effect is
widely used in home theaters, to hear the sound in a more realistic way.

1.5 Extensions and Updates.

Upgrading means changing some components of an old computer in order to improve its performance. However, the computer is often so old that almost all the components would need to be replaced to achieve acceptable performance; in that case it would be better to buy a new computer with the most recent components. The secret to a successful upgrade is to detect the “weak points” of the configuration: the components that hold back the performance of the rest. In addition, you have to know how to choose components of adequate quality. It is worth mentioning that upgrading a very outdated device is rarely worthwhile; in that case it is best to buy a new machine that satisfies the user's needs at that time, and thus have a current, well-updated device.
