ASOGWA SOCHIOMA C.-Architecture
ENUGU STATE
AN ASSIGNMENT SUBMITTED
BY
ASOGWA SOCHIOMA C.
REG NO: 2021/SD/37773
COURSE CODE: COS 331
COURSE TITLE: COMPILER CONSTRUCTION
LEVEL: 300/400
Questions:
1. List the functions of the following in a microprocessor.
i. ALU
ii. Internal Registers
iii. Flag Register
iv. Instruction Register
v. Control Sequencer
2. What is pipelining in computer organization?
3. What are serial and parallel computation in computer organization?
4. Explain the concept of in-order and out-of-order architectural design in computer organization.
5. List four advantages of parallel computation.
SEPTEMBER, 2023
No 1 (i)
ALU
The ALU performs simple addition, subtraction, multiplication, division, and logic
operations, such as OR and AND. The memory stores the program’s instructions and
data. The control unit fetches data and instructions from memory.
Functions of the ALU
ALUs perform arithmetic and logic functions. An ALU diagram shows inputs, processes,
outputs, and storage registers.
The NOT gate is a one-transistor, single-input logic gate. It creates an output that is the opposite of its input; for example, an input of 1 becomes an output of 0.
The OR gate has multiple transistors and two inputs. The output equals 1 if either the first or the second input is 1; the OR gate outputs 0 only if both inputs are 0.
The AND Gate has multiple transistors and two inputs. Output equals 1 only if both the
first and second inputs are 1.
Arithmetic Operations
ALUs perform addition, subtraction, and multiplication using sequences of logic gates.
Division operations are typically performed by a floating-point unit (FPU), because division may result in a fraction.
Addition: The CPU acquires operands, often from a storage register, and routes them to the ALU's input registers. The control unit (CU) supplies an opcode input that says "perform addition." The operation adds the first operand to the second using sequences of OR, AND, and XOR gates, and the integer result is placed in the appropriate storage register.
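As a rough illustration, here is a minimal Python sketch (not the 8086's actual circuit; the function names are illustrative) of an 8-bit ripple-carry adder built from the same AND, OR, and XOR operations:

```python
def full_adder(a, b, carry_in):
    """Add two bits plus a carry-in using XOR, AND, and OR gates."""
    sum_bit = a ^ b ^ carry_in                  # XOR gates form the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR gates form the carry
    return sum_bit, carry_out

def alu_add(x, y, width=8):
    """Ripple the carry through `width` full adders, least significant bit first."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry  # a carry out of the top bit would set the carry flag

print(alu_add(0b00101101, 0b01010011))  # (128, 0): 45 + 83 = 128, no carry out
```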
Subtraction: The CPU acquires operands from the register and sends them to the ALU input registers, while the CU supplies an opcode that says "perform subtraction." The operation subtracts the second operand from the first, and the result is placed in the appropriate storage register.
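In hardware, the adder is commonly reused for subtraction by adding the two's complement of the second operand (invert its bits and add 1). A minimal Python sketch, assuming 8-bit operands and hypothetical helper names:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def alu_sub(x, y):
    """Compute x - y as x + (~y) + 1, modulo 2**WIDTH."""
    result = (x + ((~y) & MASK) + 1) & MASK
    borrow = int(x < y)  # the 8086 reports a borrow through the carry flag
    return result, borrow

print(alu_sub(83, 45))  # (38, 0)
print(alu_sub(45, 83))  # (218, 1): -38 modulo 256, with the borrow set
```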
Multiplication: The CPU acquires operands, sends them to the ALU input registers, and the CU creates an opcode that says "multiply."
ALUs typically do not perform division, because the result may be a fraction rather than an integer; the floating-point unit (FPU) manages these operations.
No 1 (ii)
INTERNAL REGISTERS
Registers support the three key steps of instruction processing: fetching, decoding, and execution. Data and instructions from the user are collected and stored at specific locations in the registers; the instructions are then interpreted and processed so that the desired output is produced. The information must be fully processed so that the user receives and understands the results as expected. The interpreted tasks are stored in computer memory, and when the user asks for them again they are supplied, with processing done according to the user's needs.
Functions
Frequently used data, instructions, and their addresses and locations are held in registers so that the CPU can fetch them whenever needed. The instructions the CPU is currently processing are held in a register, and any data to be processed must pass through registers before processing; registers are therefore the point through which user data enters the CPU.
Data is quickly accepted, stored, and transferred in registers, and each type of register performs the specific functions required by the CPU. Users need not know much about registers, since the CPU uses them for buffering data and as temporary memory.
Registers act as buffers for data copied from main memory so that the processor can fetch it whenever needed. Holding an address in a register, such as the instruction pointer (IP), lets the processor know the location of the next item it needs.
The base register can modify computer operations or operands as needed: its contents are added to the address portion of an instruction to form the operand's effective address. A minimal sketch of registers staging data between memory and the ALU follows this list.
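To make the buffering role concrete, here is a minimal, hypothetical Python sketch of registers staging operands between main memory and the ALU (the register names and addresses are illustrative, not from any real instruction set):

```python
memory = {0x10: 7, 0x14: 5}      # toy main memory: address -> value
registers = {"AX": 0, "BX": 0}   # toy register file

# Fetch: copy operands from memory into registers before processing.
registers["AX"] = memory[0x10]
registers["BX"] = memory[0x14]

# Execute: the ALU reads its operands from registers, not from memory directly.
registers["AX"] = registers["AX"] + registers["BX"]

# Write back: store the buffered result to memory.
memory[0x18] = registers["AX"]
print(memory[0x18])  # 12
```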
No 1 (iii)
FLAG REGISTER
The flag register is a 16-bit register in the Intel 8086 microprocessor that contains
information about the state of the processor after executing an instruction. It is
sometimes referred to as the status register because it contains various status flags
that reflect the outcome of the last operation executed by the processor.
The flag register is divided into various bit fields, with each bit representing a specific
flag. Some of the important flags in the flag register include the carry flag (CF), the
zero flag (ZF), the sign flag (SF), the overflow flag (OF), the parity flag (PF), and the
auxiliary carry flag (AF). These flags are used by the processor to determine the
outcome of conditional jump instructions and other branching instructions.
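As an illustration, a minimal Python sketch that derives a few of these 8086-style flags after an 8-bit addition (real hardware produces them directly from the adder circuit):

```python
def add_with_flags(x, y, width=8):
    mask = (1 << width) - 1
    raw = x + y
    result = raw & mask
    flags = {
        "CF": int(raw > mask),              # carry out of the top bit
        "ZF": int(result == 0),             # result is zero
        "SF": (result >> (width - 1)) & 1,  # copy of the sign bit
        "PF": int(bin(result & 0xFF).count("1") % 2 == 0),  # even parity, low byte
    }
    return result, flags

print(add_with_flags(0x80, 0x80))  # (0, {'CF': 1, 'ZF': 1, 'SF': 0, 'PF': 1})
```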
No 1 (iv)
Instruction Register
An instruction register holds a machine instruction that is currently being executed. In
general, a register sits at the top of the memory hierarchy. A variety of registers serve
different functions in a central processing unit (CPU); the function of the instruction register is to hold the currently queued instruction for use.
No 1 (v)
CONTROL SEQUENCER
It directs the flow of data between the processor and other devices.
It interprets instructions and controls the flow of data within the processor.
It generates the sequence of control signals from the instructions or commands received from the instruction register.
It is responsible for controlling the execution units, such as the ALU, data buffers, and registers, in the CPU of a computer.
It can fetch and decode instructions, handle their execution, and store results (see the sketch after this list).
It cannot process or store data itself.
To transfer data, it communicates with the input and output devices and controls all the units of the computer.
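A minimal, hypothetical Python sketch of that fetch-decode-execute sequencing over a toy three-instruction program (the opcodes are illustrative, not from a real instruction set):

```python
program = [("LOAD", 7), ("ADD", 5), ("STORE", 0)]
accumulator = 0
memory = {}
pc = 0                                      # program counter

while pc < len(program):
    instruction_register = program[pc]      # fetch: latch the next instruction
    opcode, operand = instruction_register  # decode: split opcode and operand
    if opcode == "LOAD":                    # execute: raise the control signals
        accumulator = operand               # appropriate for each opcode
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        memory[operand] = accumulator       # store the result back to memory
    pc += 1

print(memory[0])  # 12
```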
No 2
PIPELINING
Pipelining is an implementation technique in which the execution of multiple instructions is overlapped. The processor is divided into stages (for example fetch, decode, execute, memory access, and write-back), and each stage works on a different instruction at the same time, much as an assembly line works on several products at once. Pipelining does not shorten the time any single instruction takes; instead it increases instruction throughput, so the processor completes more instructions per unit of time.
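A minimal sketch of the throughput argument, assuming a four-stage pipeline in which every stage takes one clock cycle:

```python
stages = 4        # fetch, decode, execute, write-back
instructions = 8

sequential_cycles = stages * instructions       # one instruction at a time
pipelined_cycles = stages + (instructions - 1)  # fill the pipe, then one per cycle

print(f"sequential: {sequential_cycles} cycles")  # 32 cycles
print(f"pipelined:  {pipelined_cycles} cycles")   # 11 cycles
```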
No 3
SERIAL COMPUTING
Serial computing, also known as sequential computing, is the use of a single processor to execute a program: the program is divided into a sequence of instructions, and the instructions are processed one at a time. Traditionally, software has been written this way because the sequential approach is simpler, but the speed of a single processor significantly limits how fast the series of instructions can be executed. Uniprocessor machines also use sequential data structures, whereas parallel computing environments require concurrent data structures.
PARALLEL COMPUTING
Parallel computing, also known as parallel processing, uses multiple processing units at once. Its primary objective is to increase the available computation power for faster application processing and task resolution. A parallel computing infrastructure typically stands within a single facility, where many processors are installed in one or several interconnected servers. It is generally implemented in operational environments and scenarios that require massive computation or processing power.
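A minimal Python sketch contrasting the two models on the same task, using the standard multiprocessing module (the speedup observed depends on the number of cores and the cost of each task):

```python
from multiprocessing import Pool

def work(n):
    """A stand-in for a CPU-heavy task."""
    return sum(i * i for i in range(n))

inputs = [200_000] * 8

if __name__ == "__main__":
    serial_results = [work(n) for n in inputs]     # one instruction stream

    with Pool() as pool:                           # several worker processes
        parallel_results = pool.map(work, inputs)  # same work, split across cores

    print(serial_results == parallel_results)      # True: same answer, less wall time
```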
6
No 4
In-order execution processes instructions strictly in program order: if the current instruction cannot complete, every later instruction waits behind it. Out-of-order execution is the opposite behaviour: (1) it executes instructions in non-sequential order; (2) even if the current instruction has not completed, it executes the next instruction, provided the next instruction does not depend on the result of the current one; and (3) it therefore achieves faster execution.
This is mostly useful when an instruction has to wait for memory to be read. An in-order implementation would simply stall until the data becomes available, whereas an out-of-order implementation can (provided there are instructions ahead that can be executed independently) get something else done while the processor waits for the data to be delivered from memory.
Note that both compilers and (if the compiler is not clever enough) programmers can take advantage of this by moving potentially expensive reads from memory as far away as possible from the point where the data is actually used. This makes no difference for an in-order implementation, but it can help hide memory latency in an out-of-order implementation and therefore makes the code run faster.
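A minimal toy model of this latency hiding, assuming a load takes four cycles while two independent multiplies and one dependent add take one cycle each:

```python
load_latency = 4        # cycles until the loaded data arrives
independent_work = 2    # two multiplies that do not need the loaded value
dependent_work = 1      # one add that does need it

# In-order: everything behind the load stalls until its data arrives.
in_order = load_latency + independent_work + dependent_work

# Out-of-order: the independent multiplies run during the load's wait cycles.
out_of_order = max(load_latency, independent_work) + dependent_work

print(in_order, out_of_order)  # 7 5
```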
7
No 5
1. Increased Performance
By dividing a task among several processing units that work simultaneously, parallel computing completes large computations in far less time than a single processor could.
2. Scalability
Parallel computing offers excellent scalability, meaning it can efficiently handle larger workloads as the number of processing units increases. As technology advances and more powerful processors become available, parallel computing can take full advantage of these resources, enabling faster and more efficient processing of data and tasks (a rough bound on this scaling is sketched after this list).
3. Real-time Processing
By spreading work across many units, parallel systems can meet the tight timing deadlines of real-time applications such as simulation, video processing, and sensor analysis.
4. Resource Utilization
Parallel computing keeps all available processors and cores busy rather than leaving them idle, making better use of the hardware already present in the system.
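A rough way to quantify the scalability advantage is Amdahl's law, which bounds the speedup of a program whose fraction p can be parallelized across n processing units; a minimal Python sketch:

```python
def amdahl_speedup(p, n):
    """Speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n units."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 2 1.82
# 4 3.08
# 8 4.71
# 16 6.4
```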