
INSTITUTO TECNOLÓGICO DE CANCÚN

Arquitectura De Computadoras

Ingeniería en Sistemas Computacionales

Yessenia Ramos Martínez


Contents

Introduction
Computer Architecture
    Computer Architecture Model
        Classic Architecture
        Segmented Architectures
        Multiprocessing architectures
    Component Analysis
        CPU
        Memory
        Input/output management
        Buses
        Interrupts
    Structure and operation of the CPU
        Processor organization
        Register structure
        Instruction Cycle
        Segmentation
        Instruction set
        Addressing modes and formats
    Selection of components for assembly of computing equipment
        Chipset
        Applications
        Service environments
    Parallel processing
        Basics of Parallel Computing
        Types of parallel computing
    Shared memory systems
    Distributed memory systems

Introduction

The architecture of a computer defines its physical and logical components, and makes it
possible to determine whether a computer system, in a given configuration, can perform the
operations for which it is intended.

Any user who wishes to purchase a computer system, whether a large company or an
individual, must answer a series of preliminary questions: What is the new computer system
to be used for? What objectives should it achieve? What software will be most appropriate
for those objectives? What impact will the introduction of the new computer system have on
the organization (professional or personal)?

Once these questions have been answered, the user will have an approximate idea of the
requirements that the different computer systems under evaluation must meet.

Computer Architecture

Computer Architecture Model

Classic Architecture
These architectures were developed for the first electromechanical and vacuum-tube
computers. They are still used in low-end embedded processors and are the basis of most
modern architectures.

Mauchly-Eckert architecture (von Neumann)

The main disadvantage of this architecture is that the single data and address bus becomes a
bottleneck through which all information read from or written to memory must pass, forcing
all accesses to it to be sequential. This limits the degree of parallelism (the number of actions
that can be performed at the same time) and therefore the performance of the computer.
This effect is known as the von Neumann bottleneck.

Segmented Architectures
Segmented (pipelined) architectures seek to improve performance by performing several
stages of the instruction cycle in parallel. The processor is divided into several independent
functional units, and the processing of instructions is divided among them.

Multiprocessing architectures
When performance must be increased beyond what pipelining allows (a theoretical limit of
one instruction per clock cycle), it is necessary to use more than one processor to execute
the application program.

Multiprocessing CPUs (following Flynn's classification, described later):

SISD – (Single Instruction, Single Data) independent computers
SIMD – (Single Instruction, Multiple Data) vector processors
MISD – (Multiple Instruction, Single Data) rarely implemented
MIMD – (Multiple Instruction, Multiple Data) SMP systems, clusters

Vector processors – These are computers designed to apply the same numerical algorithm to a
series of matrix data, especially in the simulation of complex physical systems such as
weather prediction, atomic explosions, or complex chemical reactions, where the data are
represented as large arrays on which the same numerical algorithm must be applied.

In SMP (Symmetric Multiprocessing) systems, several processors share the same main memory
and I/O peripherals, usually connected by a common bus. They are called symmetric because
no processor takes the role of master with the others as slaves: all have similar rights of
access to memory and peripherals, and all are managed by the operating system.

3
Component Analysis
CPU
The central processing unit (CPU), or simply the processor or microprocessor, is the
component of a computer or other programmable device that interprets the instructions
contained in programs and processes data. CPUs provide the fundamental characteristic
of the digital computer (programmability) and are one of the necessary components
found in computers of any era, along with primary storage and input/output devices.
A CPU manufactured as an integrated circuit is known as a microprocessor. Since the
mid-1970s, single-chip microprocessors have almost completely replaced all other types
of CPUs, and today the term "CPU" is usually applied to all microprocessors.

The term "central processing unit" is, broadly speaking, a description of a


certain class of logic machines that can execute complex computer programs.
This broad definition can easily be applied to many of the early computers that
existed long before the term "CPU" was in wide use. However, the term itself
and its acronym have been in use in the computer industry since at least the
early 1960s. The shape, design, and implementation of CPUs has changed
dramatically since the earliest examples, but their fundamental operation has
remained fairly similar.
The first CPUs were custom designed as part of a larger computer, usually a
one-of-a-kind computer. However, this expensive method of custom-designing
CPUs for a particular application has largely disappeared, replaced by the
development of cheap, standardized classes of processors tailored for one or
many purposes. This standardization trend generally began in the era of
discrete transistors, mainframes, and microcomputers, and was rapidly
accelerated with the popularization of the integrated circuit (IC), which has
allowed more complex CPUs to be designed and manufactured in small spaces
(on the order of millimeters). Both the miniaturization and standardization of
CPUs have increased the presence of these digital devices in modern life far
beyond the limited applications of dedicated computing machines. Modern
microprocessors appear in everything from cars, televisions, refrigerators,
calculators, airplanes, to mobile or cell phones, toys, among others.
Memory
In computing, memory (also called storage) refers to the components of a computer
that retain data for some interval of time. Computer memories provide one of the
main functions of modern computing: the retention or storage of information. Memory
is one of the fundamental components of all modern computers and, coupled to a
central processing unit (CPU), implements the von Neumann architecture model,
in use since the 1940s.

Memory is a device based on circuits that enable limited storage of information and its
subsequent retrieval.

Memories are usually quick access, and can be volatile or non-volatile.

The main classes of memory are RAM and ROM; these memories are used for primary
storage.
Input/output management
In computing, input/output (abbreviated I/O) is the collection of interfaces used by the
different functional units (subsystems) of an information processing system to communicate
with one another, or the signals (information) sent through those interfaces. The inputs are
the signals received by a unit, while the outputs are the signals it sends.

The term can also describe an action: "to perform an input/output" refers to executing an
input or output operation. I/O devices are used by a person or another system to
communicate with a computer. Keyboards and mice are considered input devices of a
computer, while monitors and printers are seen as output devices. Typical devices for
communication between computers perform both operations, input and output; examples
are modems and network cards.

It is important to note that the designation of a device as input or output changes with the
perspective from which it is viewed. Keyboards and mice take as input the physical movement
that the user produces as output, and convert it to an electrical signal that the computer can
understand; the output of these devices is an input to the computer. Similarly, monitors and
printers take as input the signals that the computer produces as output, and convert those
signals into representations that the user can interpret, for example through sight.

In computer architecture, the combination of the central processing unit (CPU) and main
memory (the memory the CPU can read and write directly with individual instructions) is
considered the heart of the computer, and any movement of information to or from that
combination is considered input/output. The CPU and its complementary circuitry provide
the input/output methods used in low-level programming for the implementation of
device drivers.

Higher-level operating systems and programming languages provide different, more abstract
input/output concepts and primitives. For example, an operating system provides applications
with the concept of files, and the C programming language defines functions that allow
programs to perform I/O through streams, that is, to read data from and write data to named
sources and destinations.
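As an illustration, here is a minimal C sketch of stream-based I/O; the file names are
arbitrary examples:

    #include <stdio.h>

    int main(void) {
        FILE *in  = fopen("input.txt", "r");   /* open a stream for reading  */
        FILE *out = fopen("copy.txt", "w");    /* open a stream for writing  */
        char line[256];

        if (in == NULL || out == NULL)
            return 1;                          /* a file could not be opened */

        /* Read from one stream and write to the other, line by line. */
        while (fgets(line, sizeof line, in) != NULL)
            fputs(line, out);

        fclose(in);
        fclose(out);
        return 0;
    }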

Input and Output devices allow communication between the computer and the
user.

First, the input devices, as their name indicates, are used to enter data (information) into the
computer for processing. Data is read from the input devices and stored in central (internal)
memory.

Input devices convert information into electrical signals that are stored in central memory.
The typical input device is the keyboard; others are optical pens, joysticks, CD-ROM drives,
compact discs (CDs), etc. Nowadays the user commonly relies on an input device called a
mouse, which moves an electronic pointer over the screen and eases user-machine
interaction.

Second, the output devices allow us to present the results (output) of data processing. The
typical output device is the screen or monitor; others are printers (which print results on
paper), graphic plotters, and speakers, among others.
Buses
In the first electronic computers, all buses were parallel, so communication between the
parts of the computer was done by means of ribbon cables or multiple traces on the printed
circuit board, in which each conductor has a fixed function and the connection is simple,
requiring only input and output ports for each device.

The trend in recent years has been to use serial buses such as USB and FireWire for
communication with peripherals, replacing parallel buses, even for the connection between
the microprocessor and the chipset on the motherboard.

There are two main types classified by the method of sending information: parallel bus or
serial bus.

There are differences in performance, and until a few years ago the appropriate choice was
considered to depend on the physical length of the connection: parallel buses for short
distances, serial buses for long ones.

Parallel bus

A parallel bus sends data several bits at a time, with the help of several lines that have fixed
functions. The amount of data sent is quite large at a moderate frequency, and equals the
width of the data times the operating frequency. In computers it has been used intensively,
from the processor bus and hard drive buses to expansion cards, video cards, and printers.
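As a rough illustration, a 32-bit parallel bus clocked at 100 MHz could move up to
4 bytes × 100 million transfers per second = 400 MB/s; the figures are hypothetical and
ignore arbitration and wait states.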

Diagram of a Backplane Bus as an extension of the processor bus.

The front-side bus of Intel processors is a bus of this type and like any bus it has some
functions on dedicated lines:

 The address lines are responsible for indicating the memory location or the device with
which you wish to establish communication.
 The control lines are responsible for sending arbitration signals between the devices.
Among the most important are interrupt lines, DMA, and status indicators.
 Data lines carry the actual data bits; a bus typically has a width that is a power of 2.

A parallel bus has complex physical connections, but its logic is simple, which makes it useful
in systems with little computing power. In the first microcomputers, the bus was simply the
extension of the processor bus, and the other integrated circuits "listened" on the address
lines, waiting to receive instructions. In the original IBM PC, the bus design was decisive in
choosing a processor with 8-bit I/O (the Intel 8088) over one with 16 (the 8086), because it
was possible to reuse hardware designed for other processors, making the product cheaper.

Serial bus

In a serial bus, data is sent bit by bit and reconstructed by means of registers or software
routines. It is made up of few conductors, and its bandwidth depends on the frequency. In
recent years it has been adopted for hard drives, solid-state drives, expansion cards, and
even the processor bus.

Interrupts

An interrupt is a mechanism that allows a block of instructions to be executed, suspending
the execution of a program and then restoring it without affecting it directly. In this way, a
program can be temporarily interrupted to attend to some urgent need of the computer and
then continue executing normally, as if nothing had happened.

Interrupts generated by a device are usually handled by operating system routines called
device handlers. Operating systems use interrupts to implement time sharing: a device called
a timer generates an interrupt after a specific interval of time. The operating system
initializes the timer before loading the program counter to run a user program; when the
timer expires, it generates an interrupt that causes the CPU to execute the timer interrupt
service routine.

A signal is a software notification that an event has occurred, usually issued by the operating
system. For example, Ctrl-C generates an interrupt for the device handler that manages the
keyboard; the handler then notifies the appropriate process by sending it a signal. The
operating system can also send signals to a process to notify it that an I/O operation has
completed or that an error has occurred.
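As a minimal sketch in C (POSIX), a process can install a handler for the signal that Ctrl-C
produces (SIGINT):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    volatile sig_atomic_t got_sigint = 0;   /* flag set from the handler */

    /* Runs asynchronously when the OS delivers SIGINT (e.g., Ctrl-C). */
    void on_sigint(int signo) {
        (void)signo;
        got_sigint = 1;
    }

    int main(void) {
        signal(SIGINT, on_sigint);  /* register the handler */
        while (!got_sigint)
            pause();                /* sleep until a signal arrives */
        printf("SIGINT received, exiting cleanly\n");
        return 0;
    }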

 Interrupts can be produced by hardware or software.
 Hardware interrupts are produced by a device and travel over the system bus.
 Software interrupts are produced by the execution of a special operation known as a
"system call," or by errors produced within a process, also known as exceptions.

I/O interrupt
To initiate an I/O operation, the CPU loads the appropriate registers of the device controller;
the controller in turn examines the contents of these registers to determine what action to
take. For example, if it finds a read request, the controller begins transferring data from the
device to its local buffer; when it has finished, it informs the CPU that the operation is
complete. This communication takes place by means of an interrupt.

Program interrupts
Software interrupts are caused by programs through a special language function. Their
purpose is to have the CPU execute some type of service routine; when that routine finishes,
the program that caused the interrupt continues executing.

Classic examples are the BIOS and DOS service routines that a program can call: their function
is to control the hardware and to serve as the interface between programs and the facilities
of the BIOS and DOS.

External interrupts
The use of interrupts helps in writing programs: programs that use them are shorter and
easier to understand, and they usually perform better, largely because of their smaller size.

External interrupts are generated by peripheral devices, such as keyboards, printers, and
communications cards; they are also generated by coprocessors.

These interrupts are not sent directly to the CPU but to an integrated circuit (the interrupt
controller) whose sole function is to handle this type of interrupt.

Structure and operation of the CPU

Processor organization
A processor includes both user-visible registers and control/status registers. The registers
visible to the user may be general-purpose or have a special use, while the control and status
registers are used to control the operation of the processor; a clear example is the program
counter.

Processors use instruction pipelining to speed up execution. Pipelining divides the instruction
cycle into several separate stages that operate in sequence, such as instruction fetch,
instruction decode, operand address calculation, instruction execution, and write-back of the
result operand.

Below is how a processor is organized, for this the following requirements must be considered:

 Fetch instructions: The processor reads an instruction from memory (register, cache or
main memory).
 Interpret Instruction: The instruction is decoded to determine what action is necessary.
 Fetch data: Executing an instruction may require reading data from memory or an I/O
module.
 Process data: The execution of an instruction may require performing some arithmetic
or logical operation on the data.
 Write Data: The results of an execution may require writing data to memory or the I/O
module.

To do these things, the processor needs to store instructions and data temporarily while an
instruction is executing, in other words the processor needs a small internal memory.

Register structure

There is no clear separation between these two categories of registers. For example, on Intel
processors the program counter (PC) is visible to the user; on IBM PowerPC processors it is
not.

Registers Visible to the User

A user-visible register is one that can be referenced by the machine language executed by the
CPU. These registers can be categorized as follows:

 General Purpose Registers: They can be used for a variety of functions by the
programmer. Sometimes their use is orthogonal within the instruction set, meaning that
any of them can hold the operands of any instruction. However, there are often
restrictions; for example, there may be registers dedicated to floating-point operations
and stack operations. In some cases, general-purpose registers can be used for
addressing functions, for example to specify displacements in indirect addressing. In
other cases, there is a clear distinction and separation between registers for data and
registers for addresses.

 Data registers could be used only to store data and not to calculate the address of an
operand.

 Address registers may be partly general-purpose registers, or they may be used only
for a particular mode of addressing. For example:

o Segment pointers: On a machine with segmented addressing, a segment
register stores the base (start) address of the segment. There may be multiple
segment registers: for example, one for the operating system and one for the
running process.

o Index registers: Used for indexed addressing; they may support autoindexing.
o Stack pointer: If there is user-visible stack addressing on the machine,
then the stack is in main memory and there is a register dedicated to pointing
to the top of the stack. This allows implicit addressing, so typical stack
operations, such as push and pop, do not require explicit stack operands.

 Status Registers (Flags or Condition Codes): Condition codes or flags are bits whose
values are set by the CPU hardware based on the result of executing an instruction. For
example, an instruction that implements an arithmetic operation can produce a
positive, negative, zero, or overflow result. In addition to the result itself being stored
in a register or in memory, a flag or condition code is set. The flag can later be tested as
part of a conditional jump operation.

Control and Status Registers

A variety of CPU registers are used to control the operation of the CPU. Most of these, on
most machines, are not visible to the user. Some may be visible only to machine instructions
executed in a control (operating system) mode.

Of course, different machines will have different register organizations and use different
terminology. Below is a reasonably complete list of register types, with a description of each.

Four registers are essential for instruction execution:

 Program Counter (PC – Program Counter): Contains the address of the next instruction
to be executed, which has to be fetched from memory.
 Instruction Register (IR – Instruction Register): Contains the instruction being executed,
the most recently fetched from memory.
 Memory Address Register (MAR – Memory Address Register): Contains the address of a
memory location.
 Memory Buffer Register (MBR – Memory Buffer Register): Contains a word of data that
will be written to memory or that has just been read from memory.

Typically, the CPU updates the PC after each instruction fetch, so that the PC always points to
the next instruction to execute. A jump instruction also modifies the contents of the PC,
storing in it the target address specified by the instruction.

The instruction fetched from memory is stored in the IR, where the operation code (the part
of the binary representation of the instruction that defines the operation to be performed)
and the operand specifiers are analyzed.

Data is exchanged with memory using the MAR and MBR registers. In a bus-organized system,
the MAR connects directly to the address bus and the MBR connects directly to the data bus;
user-visible registers consequently exchange data with memory through the MBR.

The four registers mentioned above are used for data transfer between the CPU and memory.
Within the CPU, data must be passed to the ALU for processing. The ALU can have direct
access to the MBR and the user-visible registers. Alternatively, there may be additional buffer
registers at the boundary of the ALU; these serve as input and output registers for the ALU
and exchange data with the MBR and the user-visible registers.

All CPU designs include one or more registers known as the PSW (Program Status Word),
which contain information about the status of program execution. The PSW typically contains
condition codes together with other status information required by the system. The most
common fields or flags are:

 Sign: Contains the sign bit of the result of the last arithmetic operation.
 Zero: Set when the result of the last operation is zero.
 Carry: Set when an addition produces a carry, or a subtraction a borrow, out of the
high-order bit of the result. It is used in multiword arithmetic operations.
 Equal: Set when a logical comparison finds its operands equal.
 Overflow: Used to indicate an overflow condition.
 Enable/Disable Interrupts: Used to enable or disable interrupts.
 Supervisor: Indicates whether the CPU is working in supervisor or user mode. Some
privileged instructions can be executed only in supervisor mode, and certain areas of
memory can be accessed only in supervisor mode.
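As a sketch, the behavior of these flags for an 8-bit addition can be reproduced in C; the 8-bit
width and the test values are illustrative assumptions:

    #include <stdint.h>
    #include <stdio.h>

    /* Compute PSW-style flags for the 8-bit addition a + b. */
    void add8_flags(uint8_t a, uint8_t b) {
        uint16_t wide = (uint16_t)a + b;       /* keep the 9th bit for the carry */
        uint8_t  res  = (uint8_t)wide;

        int sign  = (res & 0x80) != 0;         /* bit 7 of the result */
        int zero  = (res == 0);
        int carry = (wide & 0x100) != 0;       /* carry out of bit 7  */
        /* Signed overflow: the operands share a sign that the result lacks. */
        int overflow = (~(a ^ b) & (a ^ res) & 0x80) != 0;

        printf("%3u + %3u = %3u   S=%d Z=%d C=%d V=%d\n",
               a, b, res, sign, zero, carry, overflow);
    }

    int main(void) {
        add8_flags(0x7F, 0x01);   /* 127 + 1: sets Sign and Overflow */
        add8_flags(0xFF, 0x01);   /* 255 + 1: sets Zero and Carry    */
        return 0;
    }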

It is possible to find other registers related to status and control in some CPU designs. In
addition to the PSW registers, there may be a pointer to a block of memory that contains
additional status information; for example, process control blocks.

Instruction Cycle
An instruction cycle (also called a fetch-and-execute or fetch-decode-execute cycle) is the
period the central processing unit (CPU) takes to execute a machine language instruction.

It comprises the specific sequence of actions that the CPU must carry out to execute each
instruction in a program. Different instructions in a CPU's instruction set may require different
numbers of instruction cycles to execute, and an instruction cycle is made up of one or more
machine cycles.

For any data processing system based on a microprocessor (for example a computer) or a
microcontroller (for example an MP3 player) to perform a task (program), it must first fetch
each instruction from main memory and then execute it.
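The cycle can be made concrete with a toy simulator in C; the three-instruction machine
(LOAD, ADD, HALT), its memory layout, and the register names are invented for illustration:

    #include <stdio.h>

    enum { HALT = 0, LOAD = 1, ADD = 2 };   /* invented opcodes */

    int main(void) {
        /* "Main memory": instructions as {opcode, operand address} pairs,
           followed by data at addresses 10 and 11. */
        int mem[16] = { LOAD, 10, ADD, 11, HALT, 0,
                        0, 0, 0, 0, 7, 5 };
        int pc = 0, acc = 0;

        for (;;) {
            int ir      = mem[pc];       /* fetch the opcode           */
            int operand = mem[pc + 1];   /* ...and its operand address */
            pc += 2;                     /* PC now points to the next instruction */

            switch (ir) {                /* decode and execute */
            case LOAD: acc = mem[operand];  break;
            case ADD:  acc += mem[operand]; break;
            case HALT: printf("ACC = %d\n", acc); return 0;   /* prints 12 */
            }
        }
    }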

Segmentation
Segmentation or pipelining is a processor implementation technique in which the execution
of instructions overlaps. It is nowadays the key technique for building fast CPUs. The basic
idea can be taken from a car assembly line: cars are not assembled one by one; rather,
construction is divided into successive phases, and a car is assembled as it progresses through
them, so each phase works simultaneously on a different car. Building one car still takes the
same time as before, but the rate at which cars are completed is much higher (with up to as
many cars in progress as there are phases of construction). Each of these phases is called a
segment or pipeline stage. As with cars, the productivity of a computer depends on the
number of instructions it completes per unit of time, not on the cost of an individual
instruction.
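In ideal conditions (no stalls), a pipeline with k stages completes n instructions in
k + (n - 1) cycles, instead of the k·n cycles of non-pipelined execution; for example, with
k = 5 and n = 1000 that is 1,004 cycles rather than 5,000, approaching the theoretical limit
of one instruction per clock cycle mentioned earlier.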

Instruction set
An instruction set, instruction repertoire, or ISA (Instruction Set Architecture) is a
specification that details the instructions that a computer's CPU can understand and execute,
that is, the set of all commands implemented by a particular CPU design. The term describes
the aspects of the processor generally visible to a programmer, including native data types,
instructions, registers, memory architecture, and interrupts, among other aspects.

There are mainly three types: CISC (Complex Instruction Set Computer), RISC (Reduced
Instruction Set Computer), and SISC (Simple Instruction Set Computing).

The term instruction set architecture (ISA) is sometimes used to distinguish this set of
characteristics from the microarchitecture, which comprises the elements and techniques
used to implement the instruction set, among them microinstructions and cache systems.

Processors with different internal designs can share an instruction set; for example, the Intel
Pentium and AMD Athlon implement almost identical versions of the x86 instruction set
despite having completely different internal designs.

An instruction set is mainly expected to have four characteristics:

 Complete: Any task executable by a computer (computable or decidable) can be carried
out in a finite time.

 Efficient: It allows high calculation speed without requiring great complexity in the
control unit (CU) and ALU and without consuming excessive resources (memory); that is,
it completes its task in a reasonable time while minimizing the use of resources.

 Self-contained: Instructions contain within themselves all the information necessary to
be executed.

 Independent: Instructions do not depend on the execution of any other instruction.

It can be seen that for a set of instructions to be complete, only four instructions are needed:

-> write

-> move left one position and read

-> move right one position and read

-> stop

RISC architectures are based on this idea; however, with such a minimal set the efficiency of
the instruction repertoire cannot be achieved, so in practice the set is usually broader in
order to obtain better performance, both in resource use and in time consumption.

Addressing modes and formats

The operation field of an instruction specifies the operation to be performed. The operation
must be executed on data stored in computer registers or in memory words, that is, on the
operands. The addressing mode specifies the way to interpret the information contained in
each operand field in order to locate the operand. Computers use addressing techniques for
the following purposes:

- To provide programming versatility to the user, through facilities such as indexing and
indirect addressing; this versatility helps in handling complex data structures such as vectors
and matrices.

- To reduce the number of bits in the operand field.

To the inexperienced user, the variety of addressing modes on a computer may seem overly
complicated. However, the availability of different addressing schemes gives the experienced
programmer flexibility to write programs that are more efficient in terms of number of
instructions and execution time.

The importance of addressing modes is such that the power of a machine is measured both by
its repertoire of instructions and by the variety of addressing modes it is capable of supporting.

Definition: The addressing modes of a computer are the different ways of transforming the
operand field of an instruction into the address of the operand.

In this definition, the term "address" should be interpreted in its most general sense of the
location of the operand, anywhere, and not in the strict sense of a memory address.

We will call the address obtained from these transformations the effective address. This
address, if it is a memory address, is the one that will be loaded into the MAR, or memory
address register.

Calling x the information in the operand field and Aef the effective address, the function f
that yields Aef from x constitutes the addressing mode used:

Aef = f(x)

Other information, in addition to that present in the operand field of the instruction, may be
involved in evaluating the function f. This information may reside in processor registers or in
memory.

The addressing-mode specification can be in the opcode or in the field of each operand. It is
normally encoded in the opcode if the number of modes is small; otherwise it is encoded
with each operand, a form of encoding that favors orthogonality.
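The most common modes can be sketched in C; the toy memory, the register R1, and the
field values are illustrative assumptions:

    #include <stdio.h>

    int main(void) {
        int mem[16] = {0};   /* toy main memory   */
        int R1 = 3;          /* an index register */

        mem[4] = 99;         /* the datum to reach                 */
        mem[8] = 4;          /* a cell holding the datum's address */

        int imm      = 4;              /* immediate: the field is the operand */
        int direct   = mem[4];         /* direct: Aef = x (the field itself)  */
        int indirect = mem[mem[8]];    /* indirect: Aef = mem[x]              */
        int indexed  = mem[1 + R1];    /* indexed: Aef = x + R1               */

        printf("%d %d %d %d\n", imm, direct, indirect, indexed);  /* 4 99 99 99 */
        return 0;
    }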

Selection of components for assembly of computing equipment

Chipset
The auxiliary integrated circuit, or chipset, is a set of integrated circuits responsible for
performing the functions that the microprocessor delegates to them. Chipset, literally
translated from English, means a set of integrated circuits. An auxiliary integrated circuit is
one that is peripheral to a system but necessary for its operation. Most systems require more
than one auxiliary integrated circuit; however, the term chipset is most often used today
when talking about IBM PC motherboards.

In common processors, the chipset is made up of two circuits auxiliary to the main processor:

The north bridge serves as the link between the processor and memory. It controls access to
and between the microprocessor, the RAM, the AGP graphics port, and the communications
with the south bridge.

The south bridge controls the associated devices, such as the IDE disk controller, USB ports,
FireWire, SATA, RAID, PCI slots, the AMR and CNR slots, infrared ports, the floppy drive, and
the LAN, plus a long list of other elements integrated into the motherboard. The south bridge
is responsible for communicating the processor with the rest of the peripherals.

Applications
Input/output

Modern electronic computers are an essential tool in many areas: industry, government,
science, education, and indeed almost every field of our lives.

The role played by peripheral devices is essential; without them, a computer would not be
fully useful. Through peripheral devices we can enter into the computer the data needed to
solve a problem and, in turn, obtain the results of those operations, that is, communicate
with the computer.

The computer needs inputs to generate outputs, and these pass through two types of
peripheral devices:

• Peripheral input devices.

• Peripheral output devices.

Storage

Floppy drives.

However old a computer may be, it always has at least one of these drives. Their capacity is
totally insufficient for current needs, but they have the advantage conferred by the many
years in which they were the absolute standard for portable storage.

Size    Disk type  Capacity  Explanation

5.25"   SS/DD      180 KB    Single-sided, double density. Obsolete
5.25"   DS/DD      360 KB    Double-sided, double density. Obsolete
5.25"   DS/HD      1.2 MB    Double-sided, high density. Outdated but useful
3.5"    DS/DD      720 KB    Double-sided, double density. Outdated but very common
3.5"    DS/HD      1.44 MB   Double-sided, high density. The current standard.

Service environments

Business

This section briefly describes the historical evolution of the computing tool from the 1940s to
the present day, and its impact on the different blocks of the administrative process, moving
over time from linear, isolated models to the systems-theory applications that characterize
administrative computing today and in the immediate future.

It also discusses the scope of strategic planning schemes within the framework of current
management problems.

Electronic Commerce

The development of these technologies and of telecommunications has caused data
exchanges to grow to extraordinary levels and become ever simpler, creating new forms of
commerce; it is within this framework that electronic commerce has developed.

Parallel processing
Basics of Parallel Computing
Parallel computing is a form of computing in which many instructions are executed
simultaneously, operating on the principle that large problems can often be divided into
smaller ones, which are then solved simultaneously (in parallel). There are several different
forms of parallel computing: bit-level parallelism, instruction-level parallelism, data parallelism,
and task parallelism. Parallelism has been used for many years, especially in high-performance
computing, but interest in it has grown recently due to physical limitations that prevent
frequency increases. As power consumption—and therefore heat generation—of computers
has become a concern in recent years, parallel computing has become the dominant paradigm
in computer architecture, primarily in the form of multicore processors.

Types of parallel computing

Bit-level parallelism
From the advent of very-large-scale integration (VLSI) as a chip-manufacturing technology in
the 1970s until around 1986, speed-up in computer architecture was largely achieved by
doubling the computer's word size, the amount of information the processor can handle per
cycle. Increasing the word size reduces the number of instructions the processor must
execute to operate on variables whose sizes are greater than the word length. For example,
when an 8-bit processor must add two 16-bit integers, it must first add the 8 low-order bits of
each integer with the add instruction, then add the 8 high-order bits using an add-with-carry
instruction that takes into account the carry bit of the lower-order addition; an 8-bit
processor thus requires two instructions to complete a single operation, where a 16-bit
processor requires only one.
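The two-instruction sequence just described can be sketched in C, treating uint8_t values as
the 8-bit machine's registers (the operand values are arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t a = 0x12C8, b = 0x0F5C;   /* two 16-bit operands */

        /* Split each operand into the 8-bit halves an 8-bit CPU sees. */
        uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
        uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

        /* Instruction 1: ADD the low bytes; record the carry out. */
        uint16_t lo_sum = (uint16_t)a_lo + b_lo;
        uint8_t  lo     = (uint8_t)lo_sum;
        uint8_t  carry  = lo_sum >> 8;               /* 0 or 1 */

        /* Instruction 2: ADD-with-carry on the high bytes. */
        uint8_t hi = (uint8_t)(a_hi + b_hi + carry);

        printf("0x%04X\n", (hi << 8) | lo);          /* 0x2224, same as a + b */
        return 0;
    }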

Historically, 4-bit microprocessors were replaced by 8-bit, then 16-bit and 32-bit ones. This
general trend came to an end with the introduction of 64-bit processors, which have been a
standard in general-purpose computing during the last decade.

Parallelism at the instruction level

A canonical five-stage pipeline in a RISC machine (IF = Instruction Fetch, ID = Instruction
Decode, EX = Execute, MEM = Memory Access, WB = Write-back)

A computer program is, in essence, a sequence of instructions executed by a processor. These
instructions can be reordered and combined into groups that are then executed in parallel
without changing the result of the program. This is known as instruction-level parallelism.
Advances in instruction-level parallelism dominated computer architecture from the mid-
1980s to the mid-1990s.

Modern processors have multi-stage instruction pipelines. Each stage in the pipeline
corresponds to a different action that the processor performs on the instruction in that stage;
a processor with an N-stage pipeline can have up to N different instructions at different
stages of completion. The canonical example of a pipelined processor is a RISC processor,
with five stages: instruction fetch, decode, execute, memory access, and write-back. The
Pentium 4 processor, by contrast, had a much deeper pipeline.

(Figure: a superscalar processor with a five-stage pipeline, capable of executing two
instructions per cycle; with two instructions in each stage of the pipeline, up to 10
instructions execute simultaneously.)

In addition to the instruction-level parallelism of pipelining, some processors can execute more
than one instruction at a time. These are known as superscalar processors. Instructions can be
grouped together only if there is no data dependency between them. Scoreboarding and the
Tomasulo algorithm—which is similar to scoreboarding but makes use of register renaming—
are two of the most common techniques for implementing out-of-order execution and
instruction-level parallelization.

Data parallelism
Data parallelism is the parallelism inherent in programs with loops; it focuses on distributing
the data across the different computational nodes that must process it in parallel.
"Parallelizing loops often leads to similar (not necessarily identical) sequences of operations
or functions being performed on the elements of a large data structure." Many scientific and
engineering applications exhibit data parallelism.

A loop-carried dependency is the dependency of one iteration of a loop on the output of one
or more previous iterations; such dependencies prevent the parallelization of loops. For
example, consider the following loop, sketched here in C, which calculates the first few
Fibonacci numbers:
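    #include <stdio.h>

    int main(void) {
        int PREV1 = 0, PREV2 = 1, CUR = 0;

        /* Each new value depends on the two computed just before it,
           so iteration i cannot begin until iteration i-1 has finished. */
        while (CUR < 100) {
            CUR   = PREV1 + PREV2;
            PREV1 = PREV2;
            PREV2 = CUR;
            printf("%d\n", CUR);
        }
        return 0;
    }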

This loop cannot be parallelized because CUR depends on itself (PREV2) and on PREV1, which
are computed in each iteration. Since each iteration depends on the result of the previous
one, the iterations cannot be executed in parallel. As the size of a problem grows, the
available data parallelism generally grows as well.

Task parallelism
Task parallelism is the characteristic of a parallel program in which "completely different
calculations can be performed on either the same or different sets of data." It contrasts with
data parallelism, where the same calculation is performed on the same or different sets of
data. Task parallelism generally does not scale with the size of a problem.

Flynn's Taxonomy
Michael J. Flynn created one of the first classification systems for parallel and sequential
computers and programs, now known as Flynn's taxonomy. Flynn classified programs and
computers according to whether they operate with one or several streams of instructions,
and whether those instructions operate on one or several streams of data.

The single-instruction, single-data (SISD) classification is equivalent to a fully sequential
program. The single-instruction, multiple-data (SIMD) classification is analogous to
performing the same operation repeatedly over a large data set, as is commonly done in
signal-processing applications. Multiple-instruction, single-data (MISD) is a classification that
is rarely used; although computer architectures in this category were designed (such as
systolic arrays), very few applications materialized. Multiple-instruction, multiple-data
(MIMD) programs are the most common type of parallel programs.

According to David A. Patterson and John L. Hennessy, "Some machines are hybrids of these
categories; of course, this classic model has survived because it is simple, easy to understand,
and gives a good first approximation. Furthermore, it is, perhaps because of its
understandability, the most widely used scheme."

Shared memory systems

In the shared-memory model, the processors communicate with the memory through a
high-speed bus or a system of switches. This allows them to achieve better performance than
distributed-memory systems. Another advantage of this model is its more efficient use of
memory, since there is no need for data replication. However, this type of architecture
presents two important difficulties: the currently high cost of this kind of hardware, and the
poor portability involved in migrating a program coded for a multicomputer system to a
shared-memory platform.
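At the programming level, the model can be illustrated with a minimal POSIX-threads sketch
in C, in which two threads communicate through a single shared variable protected by a lock
(the iteration counts are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;   /* one variable in memory shared by all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread increments the shared counter; the mutex serializes access. */
    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 */
        return 0;
    }

(On typical systems this is compiled with the -pthread flag; without the mutex, the two
threads would race on the shared variable.)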

Distributed memory systems

Distributed shared memory (DSM) systems are a hybrid of two branches of parallel
computing: distributed-memory multiprocessor systems and distributed systems. They
provide the abstraction of shared memory on systems with physically distributed memories,
and consequently combine the best features of both approaches. For this reason, distributed
shared memory is recognized as one of the most attractive approaches for building scalable,
high-performance multiprocessor systems.

Distributed shared memory (DSM) is an abstraction used to share data between computers
that do not share physical memory. Processes access DSM for reading and updating, within
their address spaces, what appears to be the ordinary memory allocated to a process.
However, an underlying runtime system transparently ensures that processes running on
different computers observe one another's updates. It is as if the processes accessed a single
shared memory, when in fact the physical memory is distributed.

The main characteristic of DSM is that it spares the programmer everything related to
message passing when writing applications, a concern that would otherwise have to be
handled explicitly. DSM is primarily a tool for parallel applications, or for distributed
applications or groups of applications, in which the individual shared data can be accessed
directly. In general, DSM is less appropriate for client-server systems, since clients see the
server as a manager of resources in the form of abstract data accessed through requests (for
reasons of modularity and protection). Nevertheless, servers can provide DSM shared among
their clients: for example, memory-mapped files that are shared and kept consistent to a
certain degree are a form of DSM.
