RTSEC Documentation
RELEVANT TOOLS, STANDARDS AND/OR ENGINEERING CONSTRAINTS
Design goals
Cost
Performance
The clock speed of a computer is often used to describe its performance (usually
in MHz or GHz). This is the number of cycles per second at which the CPU's main
clock runs. However, this measure can be deceiving, because a system with a
higher clock rate does not always perform better. As a result, manufacturers have
largely moved away from clock speed as the sole indicator of performance.
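As a rough illustration, consider two hypothetical CPUs running the same program. Using the standard relation CPU time = (instruction count × CPI) / clock rate, with numbers invented purely for illustration, the machine with the lower clock rate can still finish first:

```c
#include <stdio.h>

/* Sketch with hypothetical numbers: CPU time depends on instruction
 * count and cycles-per-instruction (CPI), not on clock rate alone.
 * cpu_time = (instructions * cpi) / clock_hz                        */
int main(void) {
    double instructions = 1e9;           /* same program on both CPUs */

    double clock_a = 4.0e9, cpi_a = 2.0; /* CPU A: 4 GHz, CPI 2.0 */
    double clock_b = 3.0e9, cpi_b = 1.2; /* CPU B: 3 GHz, CPI 1.2 */

    printf("A: %.3f s\n", instructions * cpi_a / clock_a); /* 0.500 s */
    printf("B: %.3f s\n", instructions * cpi_b / clock_b); /* 0.400 s */
    return 0;              /* the slower-clocked CPU B finishes first */
}
```

Here CPU B wins despite running at 3 GHz rather than 4 GHz, because it needs fewer cycles per instruction.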
The speed of a computer is also affected by its cache; the amount of cache a
processor has can likewise be used to assess performance. If the clock rate,
measured in MHz or GHz, were the speed of a vehicle, the cache would be the
traffic lights along its route: a green light never stops the vehicle, no matter
how fast it is driving, just as a cache hit never stalls the processor. A fast
clock therefore pays off fully only when paired with a sufficiently large cache.
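To make the analogy concrete, the effect of the cache is commonly quantified as the average memory access time, AMAT = hit time + miss rate × miss penalty. The sketch below uses illustrative numbers only; a hit is the green light, a miss the red one:

```c
#include <stdio.h>

/* Illustrative AMAT calculation: a cache hit (the "green light")
 * costs 1 cycle, while a miss stalls the processor for the full
 * miss penalty, no matter how fast the clock is.                 */
int main(void) {
    double hit_time     = 1.0;   /* cycles on a cache hit            */
    double miss_rate    = 0.05;  /* 5% of accesses miss              */
    double miss_penalty = 100.0; /* cycles to fetch from main memory */

    printf("AMAT = %.1f cycles\n",
           hit_time + miss_rate * miss_penalty);  /* 6.0 cycles */
    return 0;
}
```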
Power consumption
Transistor counts have increased by about 32 to 40% every year, thanks to Moore's
Law. Moore's Law was proposed by Gordon Moore of Intel in 1965; he predicted that
transistor densities would double every 18 to 24 months, and that prediction has
largely held true. Memory capacity has also grown by about 60% per year. All these
technological advances make room for better or entirely new applications. The
applications demand more and more, the processors keep getting better, and each
drives the other in a continuing cycle. Performance improved greatly from 1978 to
2005. After 2005, performance growth actually slowed down because of what are
called the power wall and the memory wall.
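The power wall is usually explained with the standard first-order model of dynamic power, P = α·C·V²·f, where α is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. The numbers in this sketch are illustrative, not measured:

```c
#include <stdio.h>

/* First-order dynamic power model behind the power wall:
 * P = alpha * C * V^2 * f.  All values are illustrative.  */
int main(void) {
    double alpha = 0.5;    /* activity factor               */
    double c     = 1.0e-9; /* switched capacitance (farads) */
    double v     = 1.0;    /* supply voltage (volts)        */
    double f     = 3.0e9;  /* clock frequency (hertz)       */

    printf("P     = %.2f W\n", alpha * c * v * v * f);       /* 1.50 W */
    /* Doubling the frequency alone doubles the power draw.  */
    printf("P(2f) = %.2f W\n", alpha * c * v * v * (2 * f)); /* 3.00 W */
    return 0;
}
```

Since voltage enters quadratically and frequency linearly, simply raising the clock quickly exhausts the chip's power and cooling budget, which is exactly the wall described above.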
The main driving forces of computer systems are energy and cost. Today everybody
is striving to design computer systems that minimize energy and cost. We also
have to look at the different types of parallelism that applications exhibit and
try to exploit that parallelism in the computer systems we design; that becomes
the primary driving force of a computer system. The different types of
parallelism that programs may exhibit are called data-level parallelism and
task-level parallelism, and you need to design systems that exploit them.
Processors use different techniques to exploit parallelism. Even in sequential
execution, there are techniques available to exploit instruction-level
parallelism (ILP), i.e., executing independent instructions in parallel. When
data-level parallelism is available in programs, vector processors and
SIMD-style architectures try to exploit it, as the sketch below shows.
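A minimal sketch of data-level parallelism, with an illustrative function name: every iteration below is independent, so a SIMD unit, or a vectorizing compiler such as gcc at -O3, can process several elements per instruction:

```c
#include <stddef.h>

/* Data-level parallelism: no iteration depends on another, so the
 * loop can be executed element-wise in SIMD fashion.              */
void vec_add(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];   /* no loop-carried dependence */
}
```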
Processors also support multiple threads of execution. Thread-level parallelism
is exploited more in terms of task-level parallelism, and when that is done in a
more loosely coupled architecture, we call it request-level parallelism. So
applications exhibit different types of parallelism, and the computer hardware
you design should try to exploit that parallelism to deliver better performance;
a minimal threading sketch follows.
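A minimal thread-level parallelism sketch using POSIX threads; the task function is just a placeholder for any independent unit of work. Compile with -lpthread:

```c
#include <pthread.h>
#include <stdio.h>

/* Two independent tasks run concurrently on separate threads. */
static void *task(void *arg) {
    long id = (long)arg;
    printf("task %ld running\n", id);  /* placeholder workload */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, (void *)1L);
    pthread_create(&t2, NULL, task, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}
```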
To summarize, in this module we pointed out why you need to study computer
architecture, that is, the motivation for the course, and what you are going to
study in it. We then described the functional units of a digital computer, how
they are interconnected, and what is meant by a traditional von Neumann
architecture. Last of all, we covered the different classes of computer systems
and the driving forces pushing us to come up with better and better computer
architectures, in order to exploit the parallelism available among the various
applications and to bring down energy and cost.