Computer Architecture
Introduction
The architecture of a computer defines its physical and logical components and makes it
possible to determine whether a computer system, with a certain configuration, can perform
the operations for which it will be used.
Any user who wishes to purchase a computer system, whether a large company or an
individual, must answer a series of preliminary questions: What is the new computer system
to be used for? What objectives should it achieve? What software will be most appropriate
for achieving those objectives? What impact will the introduction of the new computer
system have on the organization (work or personal)?
Finally, when these questions have been answered, the user will have an approximate idea of
the objectives that the different computer systems to be evaluated must meet.
Computer Architecture
Classic Architecture
These architectures were developed for the first electromechanical and vacuum-tube
computers. They are still used in low-end embedded processors and are the basis of most
modern architectures.
The main disadvantage of this architecture is that the single data and address bus becomes a
bottleneck through which all the information read from or written to memory must pass,
forcing all accesses to it to be sequential. This limits the degree of parallelism (actions
that can be performed at the same time) and therefore the performance of the computer.
This effect is known as the von Neumann bottleneck.
Segmented Architectures
Segmented (pipelined) architectures seek to improve performance by carrying out several
stages of the instruction cycle at the same time. The processor is divided into several
independent functional units, and the processing of instructions is divided among them.
Multiprocessing Architectures
When you want to increase performance beyond what the pipelining technique allows (a
theoretical limit of one instruction per clock cycle), it is necessary to use more than one
processor to execute the application program.
Multiprocessing CPUs:
Vector processors: These are computers designed to apply the same numerical algorithm to a
series of matrix data. They are used especially in the simulation of complex physical systems,
such as weather prediction, atomic explosions, and complex chemical reactions, where the
data are represented as large arrays on which the same numerical algorithm must be applied.
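As a point of reference, the kind of computation a vector processor accelerates is an
elementwise loop such as the following C sketch (a scalar version with illustrative names;
a vector machine applies the same operation to whole blocks of elements in a single
instruction):

    #include <stddef.h>

    /* Apply the same numerical operation to every element of two arrays
       (matrices stored in flattened form). A vector processor executes
       this kind of loop on many elements per instruction rather than
       one element at a time. */
    void saxpy(size_t n, float a, const float *x, float *y) {
        for (size_t i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];  /* same algorithm, datum by datum */
        }
    }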
In SMP (Symmetric Multiprocessing) systems, several processors share the same main memory
and I/O peripherals, usually connected by a common bus. They are known as symmetric
because no processor takes the role of master with the others as slaves; all of them have
equivalent rights of access to memory and peripherals, and all are managed by the operating
system.
Component Analysis
CPU
The central processing unit, or CPU (for the acronym in English of central
processing unit), or simply the processor or microprocessor, is the component
of the computer and other programmable devices that interprets the
instructions contained in programs and processes the data. CPUs provide
the fundamental feature of the digital computer (programmability) and are one
of the necessary components found in computers of any era, along with
primary storage and input/output devices. A CPU manufactured as an
integrated circuit is known as a microprocessor. Since the mid-1970s, single-
chip microprocessors have almost completely replaced all other types of CPUs, and
today the term "CPU" is usually applied to all microprocessors.
The central processing unit (CPU) implements the fundamentals of the von Neumann
architecture, the computer model in use since the 1940s.
Memory
Memory is a device based on circuits that enable limited storage of information and its
subsequent retrieval.
The main classification of memories is into RAM and ROM. These memories are used for
primary storage.
Input/output management
In computing, input/output, abbreviated I/O, is the collection of
interfaces used by the different functional units (subsystems) of an information
processing system to communicate with one another, or the signals
(information) sent through those interfaces. The inputs are the signals received
by a unit, while the outputs are the signals it sends.
A programming language defines functions that allow your programs to perform
I/O through streams; that is, they allow you to read data into and write data out of
your programs.
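For instance, C's standard library follows this stream model. A minimal sketch (the program
is illustrative, not taken from the original): it reads numbers from the standard input
stream and writes their sum to the standard output stream.

    #include <stdio.h>

    int main(void) {
        double value, sum = 0.0;
        /* stdin and stdout are streams: the program reads data from one
           and writes data to the other through the same interface. */
        while (scanf("%lf", &value) == 1) {
            sum += value;
        }
        printf("sum = %f\n", sum);
        return 0;
    }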
Input and Output devices allow communication between the computer and the
user.
First of all, we will talk about input devices, which, as their name indicates,
are used to enter data (information) into the computer for processing. Data is
read from the input devices and stored in central (internal) memory.
Input devices convert information into electrical signals that are stored in central
memory. The typical input device is the keyboard; others are optical pens,
joysticks, CD-ROM drives, compact discs (CDs), etc. Nowadays it is very common for
the user to use an input device called a mouse, which moves an electronic pointer
over the screen and facilitates user-machine interaction.
Secondly, we have the output devices, which allow us to present the results
(output) of data processing. The typical output device is the screen or monitor.
Other output devices are printers (which print results on paper), graphic plotters,
and speakers, among others.
Buses
In the first electronic computers, all the buses were of the parallel type, so that
communication between the parts of the computer was done by means of ribbon cables or many
tracks on the printed circuit board, in which each conductor has a fixed function and the
connection is simple, requiring only input and output ports for each device.
The trend in recent years has been to use serial buses such as USB and FireWire for
communication with peripherals, replacing parallel buses, even for the connection between
the microprocessor and the chipset on the motherboard.
There are two main types classified by the method of sending information: parallel bus or
serial bus.
There are differences in performance, and until a few years ago it was considered that the
appropriate choice depended on the physical length of the connection: the parallel bus for
short distances, the serial bus for long ones.
Parallel bus
This is a bus in which data is sent several bits at a time (a byte or more per transfer),
with the help of several lines that have fixed functions. The amount of data sent is quite
large at a moderate frequency, and equals the data width times the operating frequency. In
computers it has been used intensively, from the processor bus to hard drive buses,
expansion and video cards, and printers.
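As a worked example of that relationship (with illustrative figures, not ones taken from
the text):

    throughput = data width x operating frequency

so a parallel bus 32 bits (4 bytes) wide clocked at 100 MHz can move at most
4 bytes x 100,000,000 transfers per second = 400 MB/s.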
The front-side bus of Intel processors is a bus of this type, and like any bus it has
certain functions on dedicated lines:
The address lines are responsible for indicating the memory location or the device with
which communication is to be established.
The control lines are responsible for sending arbitration signals between the devices.
Among the most important are the interrupt lines, DMA lines, and status indicators.
The data lines transmit the bits of data; typically, a bus has a width that is a power of
2.
A parallel bus has complex physical connections, but the logic is simple, which makes it
useful in systems with little computing power. In the first microcomputers, the bus was
simply the extension of the processor bus, and the other integrated circuits "listened" to
the address lines, waiting to receive instructions. In the original IBM PC, the bus design
was decisive in choosing a processor with 8-bit I/O (the Intel 8088) over one with 16-bit
I/O (the 8086), because it was possible to use hardware designed for other processors,
making the product cheaper.
Serial bus
In a serial bus, data is sent bit by bit and reconstructed through registers or software
routines. It is made up of few conductors, and its bandwidth depends on the frequency. It
has been used for less than ten years in buses for hard drives, solid-state drives,
expansion cards, and even the processor bus.
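A software picture of that reconstruction, assuming one bit arrives per step (a toy shift
register in C; real serial buses add clocking, framing, and error checking):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of serial reception: bits arrive one at a time and are
       shifted into a register until a full byte has been reconstructed. */
    uint8_t receive_byte(const int bits[8]) {
        uint8_t reg = 0;
        for (int i = 0; i < 8; i++) {
            reg = (uint8_t)((reg << 1) | (bits[i] & 1)); /* shift in next bit */
        }
        return reg;
    }

    int main(void) {
        int wire[8] = {0, 1, 0, 0, 0, 0, 0, 1};  /* 0x41 = 'A', MSB first */
        printf("reconstructed: %c\n", receive_byte(wire));
        return 0;
    }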
Interrupts
Operating system routines called device handlers usually handle the interrupts generated by
devices. Operating systems use interrupts to implement time sharing. They rely on a device
called a timer that generates an interrupt after a specific interval of time. The operating
system initializes the timer before updating the program counter to execute a user program.
When the timer expires, it generates an interrupt, causing the CPU to execute the timer
interrupt service routine.
A signal is a software notification that an event has occurred, usually issued by the
operating system as a response. For example, ctrl-C generates an interrupt for the device
handler that manages the keyboard. The handler notifies the appropriate process by sending
it a signal. The operating system can also send signals to a process to notify it of the
completion of an I/O operation or of an error.
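The ctrl-C example can be seen from the process side with POSIX signal handling; a minimal
sketch, assuming a Unix-like system:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    /* Called when the keyboard handler turns ctrl-C into SIGINT. */
    static void on_sigint(int signo) {
        (void)signo;
        got_sigint = 1;
    }

    int main(void) {
        signal(SIGINT, on_sigint);   /* register for the notification */
        while (!got_sigint) {
            pause();                 /* sleep until some signal arrives */
        }
        printf("received SIGINT (ctrl-C)\n");
        return 0;
    }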
Interrupts can be produced by hardware or software.
Hardware interrupts are produced by a device and travel over the system bus.
Software interrupts are produced by the execution of a special operation known as a
"system call", or by errors produced within a process, also known as exceptions.
I/O interrupt
In order to initiate an I/O operation, the CPU loads the appropriate registers in the device
controller; the controller in turn examines the contents of these registers to determine
what action to take. For example, if it encounters a read request, the controller will begin
transferring data from the device to its local buffer; when it has finished doing so, the
controller informs the CPU that it has completed its operation. This communication takes
place by means of an interrupt.
Program interrupts
Software interrupts are caused by programs using a special language function. Their purpose
is to have the CPU execute some type of function. When this function finishes executing,
the program that caused the interrupt continues executing.
Interrupts of this kind are mainly subroutines of the BIOS or DOS that can be called by a
program; their function is to control the hardware and to serve as an interface between
programs and the functions of the BIOS and DOS.
External interrupts
The use of interrupts helps in creating programs: programs that use them are shorter,
easier to understand, and usually perform better, due in large part to their smaller size.
External interrupts are generated by peripheral devices such as keyboards, printers, and
communication cards; they are also generated by coprocessors.
These interrupts are not sent directly to the CPU, but to an integrated circuit (the
interrupt controller) whose sole function is to handle this type of interrupt.
Below is how a processor is organized; for this, the following requirements must be
considered:
Fetch instruction: The processor reads an instruction from memory (register, cache or
main memory).
Interpret instruction: The instruction is decoded to determine what action is necessary.
Fetch data: Executing an instruction may require reading data from memory or from an
I/O module.
Process data: Executing an instruction may require performing some arithmetic or
logical operation on the data.
Write data: The results of an execution may require writing data to memory or to an
I/O module.
To do these things, the processor needs to store instructions and data temporarily while an
instruction is executing; in other words, the processor needs a small internal memory.
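These steps can be made concrete with a toy fetch-decode-execute loop in C; the one-byte
instruction format and the opcodes below are invented for illustration and do not
correspond to any real machine:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy machine: each instruction is one byte, high nibble = opcode,
       low nibble = memory address of the operand. */
    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

    int main(void) {
        uint8_t mem[16] = {
            0x1A,   /* LOAD  mem[10] -> ACC */
            0x2B,   /* ADD   mem[11] to ACC */
            0x3C,   /* STORE ACC -> mem[12] */
            0x00,   /* HALT                 */
        };
        mem[10] = 7; mem[11] = 5;

        uint8_t pc = 0, acc = 0;
        for (;;) {
            uint8_t ir = mem[pc++];                 /* fetch instruction, bump PC */
            uint8_t op = ir >> 4, addr = ir & 0x0F; /* interpret (decode)         */
            if      (op == OP_LOAD)  acc = mem[addr];               /* fetch data   */
            else if (op == OP_ADD)   acc = (uint8_t)(acc + mem[addr]); /* process   */
            else if (op == OP_STORE) mem[addr] = acc;               /* write data   */
            else break;                                             /* HALT         */
        }
        printf("mem[12] = %d\n", mem[12]);  /* prints 12 */
        return 0;
    }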
Register structure
CPU registers fall into two broad categories: user-visible registers and control and status
registers, although there is no clear separation between the two. For example, on Intel
processors, the program counter (PC) is visible to the user; on IBM PowerPC processors, it
is not.
A user-visible register is one that can be referenced by the machine language executed by
the CPU. These registers can be categorized as follows:
General-purpose registers: They can be used for a variety of functions by the
programmer. Sometimes their use is orthogonal within the instruction set, meaning that
any of them can be used to hold the operands of any instruction. However, there are
often restrictions; for example, there may be registers dedicated to floating-point
operations and to stack operations. In some cases, general-purpose registers can be used
for addressing functions, for example, to specify indirect displacements. In other cases,
there is a clear distinction and separation between registers for data and registers for
addresses.
Data registers can be used only to store data, not to calculate the address of an
operand.
Address registers may be partly general-purpose, or they may be dedicated to a
particular addressing mode. For example:
9
o Index registers: Used for indexed addressing, and may be auto-indexed.
o Stack pointer: If there is user-visible stack addressing on the machine,
then the stack is in main memory and there is a register dedicated to pointing
to the top of the stack. This allows implicit addressing, so typical stack
operations, such as push and pop, do not require explicit stack
operands.
Status registers (flags or condition codes): Condition codes or flags are bits whose
values are set by the CPU hardware based on the result of executing instructions. For
example, an instruction that implements an arithmetic operation can produce a positive,
negative, zero, or overflow result. In addition to the result of the operation being
stored in a register or in memory, a flag or condition code is also set. The flag can
later be tested as part of a conditional jump operation.
There is a variety of CPU registers used to control the operation of the CPU. Most of
these, on most machines, are not visible to the user. Some may be visible to machine
instructions executed in a control or operating-system mode.
Of course, different machines will have different register organizations and use different
terminology to refer to them. Below is a reasonably complete list of register types and
their descriptions.
Program Counter (PC – Program Counter): Contains the address of the next instruction
to be executed, the one that has to be read from memory using a fetch operation.
Instruction Register (IR – Instruction Register): Contains the instruction to be executed,
the one most recently fetched from memory.
Memory Address Register (MAR – Memory Address Register): Contains the address of a
memory location.
Memory Buffer Register (MBR – Memory Buffer Register): Contains a word of data that will
be written to memory or that has just been read from memory.
Typically, the CPU updates the PC after each instruction fetch, so that the PC always
points to the next instruction to execute. A jump instruction also modifies the contents
of the PC, storing in it the address corresponding to the jump specified in the
instruction.
The instruction fetched from memory is stored in the IR, where the operation code (the
part of the binary representation of the instruction that defines the operation to be
performed) and the operand specifiers are analyzed.
Data is exchanged with memory using the MAR and MBR registers. In a bus-organized system,
the MAR connects directly to the address bus, and the MBR connects directly to the data
bus. User-visible registers consequently exchange data directly with the MBR.
The four registers mentioned above are used for data transfer between the CPU and memory.
Within the CPU, data must be passed to the ALU for processing. The ALU may have direct
access to the MBR and the user-visible registers. Alternatively, there may be additional
buffer registers at the edges of the ALU; these serve as input and output registers for
the ALU and exchange data with the MBR and the user-visible registers.
All CPU designs include one or more registers known as the PSW (Program Status Word),
which contain information about the status of program execution. Typically, PSW registers
contain codes for special program-execution conditions and information required by the
system. The most common fields or flags in these registers are:
Sign: Contains the sign bit of the result of the last arithmetic operation.
Zero: Set according to whether the result of the last operation is zero or not.
Carry: Set if an addition produced a carry, or a subtraction produced a borrow, out of
the high-order bit of the result. It is used in multiword arithmetic operations.
Equal: Set if a logical comparison of the data results in equality.
Overflow: Used to indicate an arithmetic overflow condition.
Enable/Disable Interrupts: Used to enable or disable interrupts.
Supervisor: Indicates whether the CPU is working in supervisor or user mode. Some
privileged instructions can be executed only in supervisor mode, and certain areas of
memory can be accessed only in supervisor mode.
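As an illustration of how such flags are derived, here is a small C simulation of an 8-bit
addition setting the sign, zero, carry, and overflow bits (a sketch only; real CPUs set
these bits in hardware, and the function name is invented):

    #include <stdint.h>
    #include <stdio.h>

    /* Compute sign, zero, carry and overflow flags for an 8-bit addition,
       the way a PSW would record them. */
    void add8_flags(uint8_t a, uint8_t b) {
        uint16_t wide = (uint16_t)a + b;
        uint8_t r = (uint8_t)wide;
        int sign  = (r & 0x80) != 0;      /* sign bit of the result */
        int zero  = (r == 0);             /* result is zero         */
        int carry = (wide & 0x100) != 0;  /* carry out of bit 7     */
        /* signed overflow: operands share a sign the result lacks  */
        int ovf   = (~(a ^ b) & (a ^ r) & 0x80) != 0;
        printf("%u + %u = %u  S=%d Z=%d C=%d V=%d\n",
               (unsigned)a, (unsigned)b, (unsigned)r, sign, zero, carry, ovf);
    }

    int main(void) {
        add8_flags(200, 100);  /* carry set: 300 does not fit in 8 bits */
        add8_flags(100, 100);  /* signed overflow: 200 > 127            */
        return 0;
    }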
It is possible to find other registers related to status and control in some CPU designs. In
addition to the PSW registers, there may be a pointer to a block of memory that contains
additional status information; for example, process control blocks.
Instruction Cycle
An instruction cycle (also called the fetch-and-execute cycle or fetch-decode-execute
cycle) is the period of time the central processing unit (CPU) takes to execute a machine
language instruction.
It comprises a specific sequence of actions that the CPU must carry out to execute each
instruction in a program. Each instruction in a CPU's instruction set may require a different
number of instruction cycles to execute. An instruction cycle is made up of one or more
machine cycles.
For any data-processing system based on a microprocessor (for example a computer) or a
microcontroller (for example an MP3 player) to perform a task (program), it must first
fetch each instruction from main memory and then execute it.
Segmentation
Segmentation, or pipelining, is a processor implementation technique in which the
execution of instructions is overlapped. Today it is the key technique for building fast
CPUs. The basic idea of segmentation can be taken from a car assembly line. Cars are not
assembled one by one; instead, construction is divided into successive phases, and the
assembly of each car proceeds as it advances through those phases. This way, each phase is
working simultaneously on building a different car. Building a single car takes the same
time as before, but the frequency with which cars are completed is much greater (as many
at a time as there are construction phases). Each of these phases is called a segment or
pipeline stage. As with cars, the productivity of a computer depends on the number of
instructions it completes per unit of time, not on how long an individual instruction
takes.
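The gain can be quantified with an idealized model (an illustration that ignores the
hazards and stalls real pipelines suffer): with k stages and n instructions, a segmented
processor needs about k + (n - 1) cycles instead of the n x k cycles of a non-segmented
one, so

    speedup = (n x k) / (k + n - 1)

which approaches k, the number of stages, as n grows.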
Instruction set
An instruction set, instruction repertoire, or ISA (Instruction Set Architecture) is a
specification that details the instructions that a computer's CPU can understand and
execute, or the set of all the commands implemented by a particular CPU design. The term
describes the aspects of the processor generally visible to a programmer, including native
data types, instructions, registers, memory architecture, and interrupts, among other
aspects.
There are mainly three types: CISC (Complex Instruction Set Computer), RISC (Reduced
Instruction Set Computer) and SISC (Simple Instruction Set Computing).
Instruction set architecture (ISA) is sometimes used to distinguish this set of characteristics
from microarchitecture, which are the elements and techniques used to implement the
instruction set. Among these elements are microinstructions and cache systems.
Processors with different internal designs can share an instruction set; for example, the
Intel Pentium and the AMD Athlon implement almost identical versions of the x86 instruction
set even though their internal designs are completely different.
The main characteristics that an instruction set is intended to have are the following:
Complete: Any task executable with a computer (computable or decidable) can be carried
out in a finite time.
Efficient: It allows high calculation speed without requiring high complexity in the
control unit and ALU and without consuming excessive resources (memory); that is, it must
complete its task in a reasonable time, minimizing the use of resources.
Self-contained: That is, they contain within themselves all the information necessary to be
executed.
In theory, a repertoire as reduced as the following would already be complete (essentially
that of a Turing machine):
-> write
-> move one position to the right and read
-> stop
RISC architectures are based on this idea. However, with such a minimal set the efficiency
of the instruction repertoire cannot be achieved, so in practice the set is usually broader
in order to achieve better performance, both in resource use and in time consumption.
The operation field of an instruction specifies the operation to be performed. This
operation must be executed on data stored in computer registers or in memory words, that
is, on the operands. The addressing mode specifies how the information contained in each
operand field is to be interpreted in order to locate the operand. Computers use
addressing techniques for the following purposes:
- To provide programming versatility to the user through facilities such as indexes,
indirect addresses, etc.; this versatility helps in handling complex data structures such
as vectors, matrices, and so on.
To the inexperienced user, the variety of addressing modes on a computer may seem overly
complicated. However, the availability of different addressing schemes gives the experienced
programmer flexibility to write programs that are more efficient in terms of number of
instructions and execution time.
The importance of addressing modes is such that the power of a machine is measured both by
its repertoire of instructions and by the variety of addressing modes it is capable of supporting.
Definition: The addressing modes of a computer are the different ways of transforming the
operand field of the instruction into the address of the operand.
In this definition the term address should be interpreted in its most general sense of location
of the operand, anywhere, and not in the strictest sense of memory address.
We will call the address obtained from the previous transformations the effective address. This
address, if it is a memory address, is the one that will be loaded into the MAR or memory
address register.
Calling x the information in the operand field and Aef the effective address, the function
f that yields Aef from x constitutes the addressing mode used:
Aef = f(x)
Information other than that present in the operand field of the instruction may be
involved in the evaluation of the function f. This information may reside in processor
registers or in memory.
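To make Aef = f(x) concrete, here is a small C sketch of a few classic modes (a simulation
for illustration; the mode names follow common textbook usage, not any particular machine):

    #include <stdio.h>

    enum mode { IMMEDIATE, DIRECT, INDIRECT, INDEXED };

    int mem[32];
    int index_reg = 4;

    /* Return the operand designated by field x under the given mode:
       the effective-address computation Aef = f(x). */
    int fetch_operand(enum mode m, int x) {
        switch (m) {
        case IMMEDIATE: return x;                  /* operand is x itself */
        case DIRECT:    return mem[x];             /* Aef = x             */
        case INDIRECT:  return mem[mem[x]];        /* Aef = mem[x]        */
        case INDEXED:   return mem[x + index_reg]; /* Aef = x + index     */
        }
        return 0;
    }

    int main(void) {
        mem[3] = 10;  mem[10] = 99;  mem[7] = 55;
        printf("immediate: %d\n", fetch_operand(IMMEDIATE, 3)); /* 3  */
        printf("direct:    %d\n", fetch_operand(DIRECT, 3));    /* 10 */
        printf("indirect:  %d\n", fetch_operand(INDIRECT, 3));  /* 99 */
        printf("indexed:   %d\n", fetch_operand(INDEXED, 3));   /* 55 */
        return 0;
    }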
The addressing-mode specification can be in the opcode or in the field of each operand. It
is normally encoded in the opcode if the number of modes is small; otherwise it is encoded
with each operand, a form of encoding that favors orthogonality.
Chipset
The auxiliary integrated circuit set, or chipset, is a set of integrated circuits
responsible for performing the functions that the microprocessor delegates to them.
Chipset literally translated from English means a set of integrated circuits. An auxiliary
integrated circuit is one that is peripheral to a system but necessary for its operation.
Most systems require more than one auxiliary integrated circuit; however, the term chipset
is most often used today when talking about IBM PC motherboards.
In common processors, the chipset is made up of two circuits auxiliary to the main
processor:
The north bridge is used as a link bridge between the processor and memory. It controls
access to and between the microprocessor, RAM, the AGP graphics port, and communications
with the south bridge.
The south bridge controls associated devices such as the IDE disk controller, USB ports,
FireWire, SATA, RAID, PCI slots, the AMR and CNR slots, infrared ports, the floppy drive,
the LAN, and a long list of all the elements we can imagine integrated into the
motherboard. The south bridge is responsible for communication between the processor and
the rest of the peripherals.
Applications
Input/output
Modern electronic computers are an essential tool in many areas: industry, government,
science, education, and indeed almost every field of our lives.
The role played by peripheral devices is essential; without such devices a computer would
not be fully useful. Through peripheral devices we can enter into the computer the data
needed to solve a problem and, in turn, obtain the results of those operations; that is,
we can communicate with the computer.
Storage
Floppy drives
No matter how old and outdated a computer is, it always has at least one of these devices.
Their capacity is totally insufficient for current needs, but they have the advantage that
comes from the many years in which they were the absolute standard for portable storage.
3.5" DS/DD, 720 KB: two-sided, double density. Outdated but very common.
Service environments
Business
This document presents a short description of the development, from the 1940s to the
present day, of the historical evolution of the computational tool and its impact on the
different blocks of the administrative process, moving in time from linear, isolated
models to the systems-theory applications that characterize administrative computing today
and in the immediate future.
It also presents a discussion of the scope of strategic planning schemes within the
framework of current management problems.
Electronic Commerce
The development of these technologies and of telecommunications has caused data exchange
to grow to extraordinary levels, becoming ever simpler and creating new forms of commerce;
it is within this framework that electronic commerce develops.
Parallel processing
Basics of Parallel Computing
Parallel computing is a form of computing in which many instructions are executed
simultaneously, operating on the principle that large problems can often be divided into
smaller ones, which are then solved simultaneously (in parallel). There are several different
forms of parallel computing: bit-level parallelism, instruction-level parallelism, data parallelism,
and task parallelism. Parallelism has been used for many years, especially in high-performance
computing, but interest in it has grown recently due to physical limitations that prevent
frequency increases. As power consumption—and therefore heat generation—of computers
has become a concern in recent years, parallel computing has become the dominant paradigm
in computer architecture, primarily in the form of multicore processors.
Types of parallel computing
Bit-level parallelism
From the advent of very-large-scale integration (VLSI) as a chip-manufacturing technology
in the 1970s until around 1986, speedups in computer architecture were largely achieved by
doubling the computer's word size, the amount of information the processor can handle per
cycle. Increasing the word size reduces the number of instructions the processor must
execute to operate on variables whose sizes are greater than the word length. For example,
when an 8-bit processor must add two 16-bit integers, it must first add the 8 low-order
bits of each integer with the addition instruction, then add the 8 high-order bits using
an add-with-carry instruction that takes into account the carry bit of the low-order
addition. An 8-bit processor thus requires two instructions to complete a single
operation, where a 16-bit processor requires only one.
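That two-instruction sequence can be written out explicitly. A C sketch of what the 8-bit
processor must do (the function is illustrative; ADD and ADC name the two instructions
described above):

    #include <stdint.h>
    #include <stdio.h>

    /* Add two 16-bit integers using only 8-bit operations:
       first the low bytes (ADD), then the high bytes plus the carry (ADC). */
    uint16_t add16_on_8bit(uint16_t a, uint16_t b) {
        uint8_t alo = a & 0xFF, ahi = a >> 8;
        uint8_t blo = b & 0xFF, bhi = b >> 8;

        uint8_t rlo = (uint8_t)(alo + blo);         /* ADD: low-order bytes    */
        uint8_t carry = rlo < alo;                  /* carry out of low byte   */
        uint8_t rhi = (uint8_t)(ahi + bhi + carry); /* ADC: high bytes + carry */

        return (uint16_t)((uint16_t)rhi << 8 | rlo);
    }

    int main(void) {
        printf("%u\n", (unsigned)add16_on_8bit(300, 500));  /* prints 800 */
        return 0;
    }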
Historically, 4-bit microprocessors were replaced by 8-bit, then 16-bit and 32-bit ones.
This general trend came to an end with the introduction of 64-bit processors, which have
been the standard in general-purpose computing during the last decade.
Instruction-level parallelism
Modern processors have multi-stage instruction pipelines. Each stage in the pipeline
corresponds to a different action the processor performs on the instruction in that stage;
a processor with an N-stage pipeline can have up to N different instructions at different
stages of completion. The canonical example of a pipelined processor is a RISC processor,
with five stages: instruction fetch, decode, execute, memory access, and write-back. The
Pentium 4 processor had a pipeline with many more stages.
A superscalar processor with a five-stage pipeline, capable of executing two instructions
per cycle, can have two instructions in each stage of the pipeline, for a total of up to
ten instructions executing simultaneously.
In addition to the instruction-level parallelism of pipelining, some processors can execute more
than one instruction at a time. These are known as superscalar processors. Instructions can be
grouped together only if there is no data dependency between them. Scoreboarding and the
Tomasulo algorithm—which is similar to scoreboarding but makes use of register renaming—
are two of the most common techniques for implementing out-of-order execution and
instruction-level parallelization.
Data parallelism
Data parallelism is the parallelism inherent in program loops, which focuses on
distributing the data across the different computational nodes that must process it in
parallel. "Parallelization of loops often leads to similar sequences of operations (not
necessarily identical) or functions being performed on the elements of a large data
structure." Many scientific and engineering applications exhibit data parallelism.
A loop-carried dependency is the dependence of one iteration of a loop on the output of
one or more previous iterations. Loop-carried dependencies prevent the parallelization of
loops. For example, consider the following pseudocode that calculates the first few
Fibonacci numbers:
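A reconstruction in C, following the variable names PREV1, PREV2, and CUR used below (the
loop bound of 100 is illustrative):

    int PREV2 = 0, PREV1 = 1, CUR = 1;
    while (CUR < 100) {          /* compute Fibonacci numbers below 100 */
        CUR = PREV1 + PREV2;     /* each iteration needs the previous one */
        PREV2 = PREV1;
        PREV1 = CUR;
    }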
This loop cannot be parallelized because CUR depends on itself (through PREV1 and PREV2),
which are computed in each iteration of the loop. Since each iteration depends on the
result of the previous one, they cannot be done in parallel. As the size of a problem
grows, the available data parallelism generally grows as well.
Task parallelism
Task parallelism is the characteristic of a parallel program in which "entirely different
calculations can be performed on either the same or different sets of data". This
contrasts with data parallelism, where the same calculation is performed on the same or
different sets of data. Task parallelism generally does not scale with the size of a
problem.
Flynn's Taxonomy
Michael J. Flynn created one of the earliest classification systems for parallel and
sequential computers and programs, now known as Flynn's taxonomy. Flynn classified
programs and computers according to whether they operate on one or more streams of
instructions and whether those instructions are applied to one or more streams of data.
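For reference (the four categories are the standard ones, though they are not enumerated
in this text), crossing those two criteria yields:
- SISD (single instruction, single data): a conventional sequential uniprocessor.
- SIMD (single instruction, multiple data): the same instruction applied to many data
items, as in vector processors.
- MISD (multiple instruction, single data): rarely implemented in practice.
- MIMD (multiple instruction, multiple data): independent processors operating on
independent data, as in multiprocessors.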
According to David A. Patterson and John L. Hennessy, "Some machines are hybrids of these
categories, of course, but this classic model has survived because it is simple, easy to
understand, and gives a good first approximation. It is also, perhaps because of its
understandability, the most widely used scheme."
Such an architecture presents two important difficulties: the currently high cost of this
type of hardware, and the poor portability involved in migrating a program coded for a
multicomputer system to a shared-memory platform.
Distributed shared memory (DSM) is an abstraction used to share data between computers
that do not share physical memory. Processes access DSM for reading and updating, within
their address spaces, what appears to be the normal internal memory allocated to a
process. However, an underlying runtime system transparently ensures that processes
running on different computers observe the updates made by one another. It is as if the
processes accessed a single shared memory, but in fact the physical memory is distributed.
The main characteristic of DSM is that it spares the programmer everything related to
message passing when writing applications, something that would otherwise have to be kept
in mind. DSM is primarily a tool for parallel applications, or for distributed
applications or groups of applications, in which the individual data they share can be
accessed directly. In general, DSM is less appropriate for client-server systems, since
clients see the server as a manager of resources in the form of abstract data accessed
through requests (for reasons of modularity and protection). However, servers can provide
DSM shared between clients. For example, memory-mapped files that are shared, and for
which a certain degree of consistency is managed, are a form of DSM.
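DSM proper spans several machines, but the programming model (plain reads and writes
instead of message passing) can be illustrated on one machine with a shared memory
mapping; a minimal POSIX sketch, assuming a Unix-like system:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Anonymous shared mapping: visible to the child after fork(). */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }

        *shared = 0;
        pid_t pid = fork();
        if (pid == 0) {            /* child: update through a plain store */
            *shared = 42;
            return 0;
        }
        waitpid(pid, NULL, 0);     /* parent: observe the child's update */
        printf("value written by child: %d\n", *shared);
        munmap(shared, sizeof(int));
        return 0;
    }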
Bibliography
https://sites.google.com/site/sistemasoperativospaty/unit-4/unit-4-memoria-compartida-distribuida
http://www.fing.edu.uy/cluster/grupo/cluster_memoria_distribuida.pdf
http://www.cimec.org.ar/ojs/index.php/mc/article/viewFile/126/113
http://users.dcc.uchile.cl/~rbaeza/cursos/proyarq/choviedo/numa_definicion.html
http://www.fdi.ucm.es/profesor/rhermida/aic.htm
http://www.aliatuniversidades.com.mx/bibliotecasdigitales/pdf/sistemas/Arquitectura_de_computadoras_II.pdf
http://es.wikipedia.org/wiki/Computaci%C3%B3n_paralela
http://publiespe.espe.edu.ec/articulos/sistemas/arquitectura/arquitectura.htm
http://seleccionensambledeequipos.blogspot.mx/2010/12/unidad-3-seleccion-de-componentes-para.html
http://www.itescam.edu.mx/principal/sylabus/fpdb/recursos/r62976.PDF
http://www.sites.upiicsa.ipn.mx/polilibros/portal/Polilibros/P_terminados/PolilibroFC/Unidad_V/Unidad%20V_2.htm
http://www.emagister.com/curso-arquitectura-ordenadores/modos-directión-formatos
http://es.wikipedia.org/wiki/Instruction_Set
http://www.uv.es/varnau/AEC_610.pdf
http://es.wikipedia.org/wiki/Ciclo_de_instrucci%C3%B3n
http://orgaproject.galeon.com/6uc/CONTENTS/6uc-ciclos.pdf
http://www.ac.uma.es/~sromero/so/Capitulo1.pdf
http://wwwdi.ujaen.es/~mcdiaz/docencia/cur04_05/fi/teoria/04_Ordenador.pdf
http://www.fing.edu.uy/inco/cursos/arqsis2/teorico/clase03.pdf
http://html.rincondelvago.com/estructura-de-microprocessadores.html
http://redesej.tripod.com/organizacionyarquitectura.html
http://ditec.um.es/~jpujante/documentos/Tema4-slides.pdf
http://arquidecomp.galeon.com/unidad03.htm
http://arquitecturadecomputadorass.blogspot.mx/2012/09/organizacion-del-procesador.html
http://redes-mg.blogspot.mx/
http://www.taringa.net/posts/ciencia-educacion/13019000/El-computador-Estructura-y-Funcionamiento.html
http://es.wikipedia.org/wiki/Bus_(inform%C3%A1tica)
http://www.itescam.edu.mx/principal/sylabus/fpdb/recursos/r49114.PDF
http://tics-arquitectura.blogspot.mx/2012/03/modelos-de-arquitecturas-de-computo.html
http://antares.itmorelia.edu.mx/~mfraga/arqui/apuntes%20unidad%201.pdf