Computer Architecture
In computer science and engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements and design implementations for the various parts of a computer, focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.
It may also be defined as the science and art of selecting and interconnecting hardware
components to create computers that meet functional, performance and cost goals.
Computer architecture comprises at least three main subcategories:
• Instruction set architecture, or ISA, is the abstract image of a computing system that is
seen by a machine language (or assembly language) programmer, including the
instruction set, word size, memory addressing modes, processor registers, and address and
data formats (a toy sketch of this programmer-visible view follows this list).
• Microarchitecture, also known as computer organization, is a lower-level, more concrete
description of the system that describes how the constituent parts are interconnected and
how they interoperate in order to implement the ISA.
• System design, which includes all of the other hardware components within a computing
system, such as system interconnects (e.g., computer buses and switches), memory
controllers and hierarchies, and CPU off-load mechanisms such as direct memory access
(DMA).
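As an illustration of what an ISA fixes, independently of any particular hardware implementation, the following is a minimal Python sketch of decoding an instruction word for a hypothetical 16-bit ISA (the encoding is invented for illustration, not taken from any real machine):

    # Hypothetical 16-bit ISA: 4-bit opcode, two 4-bit register fields,
    # and a 4-bit immediate. The field layout is what the ISA specifies.
    OPCODES = {0x1: "ADD", 0x2: "SUB", 0x3: "LOAD", 0x4: "STORE"}

    def decode(word: int) -> str:
        """Split a 16-bit instruction word into its ISA-defined fields."""
        opcode = (word >> 12) & 0xF
        rd = (word >> 8) & 0xF   # destination register
        rs = (word >> 4) & 0xF   # source register
        imm = word & 0xF         # immediate operand
        return f"{OPCODES.get(opcode, '???')} r{rd}, r{rs}, #{imm}"

    print(decode(0x1230))  # ADD r2, r3, #0

Any machine that decodes and executes these formats identically implements the same ISA, however different its internal microarchitecture may be.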
Once both ISA and microarchitecture have been specified, the actual device needs to be designed
into hardware. This design process is called the implementation. Implementation is usually not
considered architectural definition, but rather hardware design engineering.
Implementation can be further broken down into three (not fully distinct) pieces: logic
implementation (design of the blocks defined in the microarchitecture, primarily at the
register-transfer and gate levels), circuit implementation (transistor-level design of basic
elements such as gates and latches), and physical implementation (drawing and placing the
physical circuits on the chip).
For CPUs, the entire implementation process is often called CPU design.
The term is also applied to wider-scale hardware architectures, such as cluster computing
and Non-Uniform Memory Access (NUMA) architectures.
History
The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson
and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in
IBM’s main research center. Johnson had the opportunity to write a
proprietary research communication about Stretch, an IBM-developed supercomputer for Los
Alamos Scientific Laboratory. In attempting to characterize his chosen level of detail for
discussing the luxuriously embellished computer, he noted that his description of formats,
instruction types, hardware parameters, and speed enhancements was at the level of “system
architecture” – a term that seemed more useful than “machine organization”. Subsequently,
Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System:
Project Stretch, ed. W. Buchholz, 1962) by writing, “Computer architecture, like other
architecture, is the art of determining the needs of the user of a structure and then designing to
meet those needs as effectively as possible within economic and technological constraints”.
Brooks went on to play a major role in the development of the IBM System/360 line of
computers, where “architecture” gained currency as a noun with the definition “what the user
needs to know”. Later the computer world would employ the term in many less-explicit ways.
The first mention of the term architecture in the refereed computer literature is in a 1964 article
describing the IBM System/360.[3] The article defines architecture as the set of “attributes of a
system as seen by the programmer, i.e., the conceptual structure and functional behavior, as
distinct from the organization of the data flow and controls, the logical design, and the physical
implementation”. In the definition, the programmer perspective of the computer’s functional
behavior is key. The conceptual structure part of an architecture description makes the functional
behavior comprehensible and extrapolatable to a range of use cases. Only later on did ‘internals’
such as “the way by which the CPU performs internally and accesses addresses in memory,”
mentioned above, slip into the definition of computer architecture.
Computer architectures
The quantum computer architecture holds the most promise to revolutionize computing.[4]
Sub-definitions
Some practitioners of computer architecture at companies such as Intel and AMD use finer
distinctions, separating, for example, the macroarchitecture (the architectural layers more
abstract than the microarchitecture) from the microarchitecture itself and from the pin
architecture (the functions the processor presents to the surrounding hardware platform).
Computer architecture can also be defined as the design of the task-performing part of a
computer, i.e. how the various gates and transistors are interconnected and made to carry out
the functions specified by the assembly language programmer's instructions.
Addressing modes are the ways in which instructions locate their operands.
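For instance, the same conceptual operation, loading a value, may name its operand in several ways. The following is a minimal sketch of four common addressing modes; the mode names are standard, but the machine state is invented for illustration:

    # Toy machine state: a register file and a flat memory.
    registers = {"r1": 0, "r2": 8}
    memory = {8: 42, 16: 99}

    def fetch_operand(mode, operand):
        """Resolve an operand under four common addressing modes."""
        if mode == "immediate":          # the operand is the value itself
            return operand
        if mode == "register":           # the operand names a register
            return registers[operand]
        if mode == "direct":             # the operand is a memory address
            return memory[operand]
        if mode == "register_indirect":  # a register holds the address
            return memory[registers[operand]]
        raise ValueError(f"unknown addressing mode: {mode}")

    print(fetch_operand("immediate", 7))             # 7
    print(fetch_operand("register", "r2"))           # 8
    print(fetch_operand("direct", 16))               # 99
    print(fetch_operand("register_indirect", "r2"))  # 42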
Computer Organization
Computer organization helps plan the selection of a processor for a particular project.
Multimedia projects may need very rapid data access, while supervisory software may need fast
interrupts.
Sometimes certain tasks need additional components as well. For example, a computer capable of
virtualization needs virtual memory hardware so that the memory of different simulated
computers can be kept separated.
The computer organization and features also affect the power consumption and the cost of the
processor.
Design goals
The exact form of a computer system depends on the constraints and goals for which it was
optimized. Computer architectures usually trade off standards, cost, memory capacity, latency
and throughput. Sometimes other considerations, such as features, size, weight, reliability,
expandability and power consumption are factors as well.
The most common scheme carefully chooses the bottleneck that most reduces the computer's
speed. Ideally, the cost is allocated proportionally to assure that the data rate is nearly the same
for all parts of the computer, with the most costly part being the slowest. This is how skillful
commercial integrators optimize personal computers.
Performance
Computer performance is often described in terms of clock speed (usually in MHz or GHz). This
refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat
misleading, as a machine with a higher clock rate may not necessarily have higher performance.
As a result, manufacturers have moved away from clock speed as a measure of performance.
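One standard way to see why is the classic CPU-time equation: execution time = instruction count × cycles per instruction (CPI) ÷ clock rate. The machines and numbers below are hypothetical, chosen only to make the point concrete:

    # Two hypothetical CPUs running the same 1-billion-instruction program.
    # A higher clock does not guarantee a shorter execution time if each
    # instruction takes more cycles on average (a higher CPI).
    INSTRUCTIONS = 1_000_000_000

    def cpu_time(cpi, clock_hz):
        return INSTRUCTIONS * cpi / clock_hz

    fast_clock = cpu_time(cpi=2.0, clock_hz=4.0e9)  # 4 GHz, CPI 2.0
    slow_clock = cpu_time(cpi=1.0, clock_hz=3.0e9)  # 3 GHz, CPI 1.0

    print(f"4 GHz machine: {fast_clock:.2f} s")  # 0.50 s
    print(f"3 GHz machine: {slow_clock:.2f} s")  # 0.33 s -- slower clock wins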
Computer performance can also be measured by the amount of cache a processor has. To use a
rough analogy: if clock speed is how fast a car can go, the cache is its gas tank; however fast
the car is, it still has to stop for gas, and a bigger tank means fewer stops. Other things being
equal, a higher clock speed combined with a larger cache makes a processor run faster.
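A more quantitative way to capture the effect of a cache is the average memory access time (AMAT) model: AMAT = hit time + miss rate × miss penalty. The latencies below are hypothetical but representative in order of magnitude:

    # Average memory access time under a simple one-level cache model.
    # Latencies are in CPU cycles; all figures are illustrative.
    def amat(hit_time, miss_rate, miss_penalty):
        return hit_time + miss_rate * miss_penalty

    small_cache = amat(hit_time=2, miss_rate=0.10, miss_penalty=200)
    large_cache = amat(hit_time=2, miss_rate=0.02, miss_penalty=200)

    print(f"small cache: {small_cache:.0f} cycles per access")  # 22
    print(f"large cache: {large_cache:.0f} cycles per access")  # 6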
Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a
program. Other factors influence speed, such as the mix of functional units, bus speeds, available
memory, and the type and order of instructions in the programs being run.
There are two main types of speed, latency and throughput. Latency is the time between the start
of a process and its completion. Throughput is the amount of work done per unit time. Interrupt
latency is the guaranteed maximum response time of the system to an electronic event (e.g. when
the disk drive finishes moving some data). Performance is affected by a very wide range of
design choices — for example, pipelining a processor usually makes latency worse (slower) but
makes throughput better. Computers that control machinery usually need low interrupt latencies.
These computers operate in a real-time environment and fail if an operation is not completed in a
specified amount of time. For example, computer-controlled anti-lock brakes must begin braking
almost immediately after they have been instructed to brake.
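The pipelining trade-off can be made concrete with a back-of-the-envelope model. Suppose, hypothetically, an instruction takes 8 ns as a single monolithic unit, versus a 4-stage pipeline whose stages take 2 ns each plus 0.5 ns of latch overhead per stage:

    # Latency vs. throughput for a hypothetical 4-stage pipeline.
    # Per-stage latch overhead makes single-instruction latency worse,
    # but overlapping instructions raises overall throughput.
    stages, stage_ns, overhead_ns = 4, 2.0, 0.5
    cycle_ns = stage_ns + overhead_ns              # pipeline clock period

    unpipelined_latency = stages * stage_ns        # 8.0 ns per instruction
    pipelined_latency = stages * cycle_ns          # 10.0 ns per instruction

    # Steady state: one instruction finishes per cycle vs. one per 8 ns.
    unpipelined_rate = 1000 / unpipelined_latency  # instructions per microsecond
    pipelined_rate = 1000 / cycle_ns

    print(f"latency: {unpipelined_latency} ns -> {pipelined_latency} ns (worse)")
    print(f"throughput: {unpipelined_rate:.0f} -> {pipelined_rate:.0f} instr/us (better)")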
The performance of a computer can be measured using other metrics, depending upon its
application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in
a web-serving application) or memory bound (as in video editing). Power consumption has
become important in servers and portable devices like laptops.
Benchmarking tries to take all these factors into account by measuring the time a computer takes
to run through a series of test programs. Although benchmarking shows strengths, it may not help
one to choose a computer. Often the measured machines split on different measures. For example,
one system might handle scientific applications quickly, while another might play popular video
games more smoothly. Furthermore, designers have been known to add special features to their
products, whether in hardware or software, which permit a specific benchmark to execute quickly
but which do not offer similar advantages to other, more general tasks.
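As a minimal illustration of the idea (not any standard benchmark suite), a harness times a fixed set of representative workloads and reports each result separately, since a single aggregate score can hide exactly the splits described above:

    import time

    # Two toy workloads standing in for different application mixes.
    def compute_bound():
        return sum(i * i for i in range(1_000_000))

    def memory_bound():
        data = list(range(1_000_000))
        return sum(data[::-1])

    # Report per-workload times rather than a single combined number.
    for workload in (compute_bound, memory_bound):
        start = time.perf_counter()
        workload()
        elapsed = time.perf_counter() - start
        print(f"{workload.__name__}: {elapsed:.3f} s")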
Power consumption
Power consumption is another criterion that factors into the design of modern computers.
Power efficiency can often be traded for performance or cost benefits. With the increasing power
density of modern circuits as the number of transistors per chip scales (Moore's law), power
efficiency has increased in importance. Recent processor designs such as the Intel Core 2 put
more emphasis on increasing power efficiency. Also, in the world of embedded computing,
power efficiency has long been and remains the primary design goal next to performance.
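To first order, the dynamic power of a CMOS circuit scales as P ≈ C × V² × f (switched capacitance × supply voltage squared × clock frequency). Because supply voltage can usually be lowered along with frequency, a modest clock reduction yields an outsized power saving; the chip parameters below are hypothetical:

    # First-order CMOS dynamic power model: P = C * V^2 * f.
    # Hypothetical chip with 1 nF of effective switched capacitance.
    def dynamic_power(cap_farads, volts, freq_hz):
        return cap_farads * volts**2 * freq_hz

    nominal = dynamic_power(1e-9, 1.2, 3.0e9)  # 1.2 V at 3.0 GHz
    scaled = dynamic_power(1e-9, 1.0, 2.4e9)   # 1.0 V at 2.4 GHz (20% slower)

    print(f"nominal: {nominal:.2f} W")  # 4.32 W
    print(f"scaled:  {scaled:.2f} W")   # 2.40 W -- ~44% less power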
See also
• Computer hardware
• CPU design
• Orthogonal instruction set
• Software architecture
• Computer organization
• von Neumann architecture
• Influence of the IBM-PC on the personal computer market