
UNIVERSITY OF NIGERIA, NSUKKA

ENUGU STATE

FACULTY OF VOCATIONAL AND TECHNICAL EDUCATION
DEPARTMENT OF COMPUTER AND ROBOTICS EDUCATION

AN ASSIGNMENT SUBMITTED
BY

ASOGWA SOCHIOMA C.
REG NO: 2021/SD/37773
COURSE CODE: COS 331
COURSE TITLE: COMPILER CONSTRUCTION

LEVEL: 300/400
Questions:
1. List the functions of the following in a microprocessor.
i. ALU
ii. Internal Registers
iii. Flag Register
iv. Instruction Register
v. Control Sequencer
2. What is pipeline in computer organization?
3. What is serial and parallel computation in computer organization?
4. Explain the concept of in order – out order architectural design in computer organization
5. List four advantages of parallel computation.

LECTURER: DR. U. P. UZOCHUKWU

SEPTEMBER, 2023

No1 (i)
ALU
The ALU performs simple addition, subtraction, multiplication, division, and logic
operations, such as OR and AND. The memory stores the program’s instructions and
data. The control unit fetches data and instructions from memory.
Function of ALU
ALUs perform arithmetic and logic functions. An ALU diagram shows inputs, processes,
outputs, and storage registers.
The NOT Gate is one transistor and one input logic gate. NOT gates create outputs that
are the opposite of the input. For example, an input 1 would become an output 0.
The OR Gate has multiple transistors and two inputs. Output equals 1 only if the first or
second input is a 1. The OR gate creates an output of 0 only if both inputs are 0.
The AND Gate has multiple transistors and two inputs. Output equals 1 only if both the
first and second inputs are 1.
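The gate behaviour described above can be sketched in Python as a toy bit-level model (an illustration, not a hardware description), where every input and output is 0 or 1:

```python
# Toy 1-bit logic gate models: inputs and outputs are 0 or 1.
def not_gate(a):
    return 1 - a                      # output is the opposite of the input

def or_gate(a, b):
    return 1 if (a or b) else 0       # 0 only if both inputs are 0

def and_gate(a, b):
    return 1 if (a and b) else 0      # 1 only if both inputs are 1

# Truth-table spot checks
print(not_gate(1))      # 0
print(or_gate(0, 0))    # 0
print(and_gate(1, 1))   # 1
```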
Arithmetic Operations
ALUs perform addition, subtraction, and multiplication using sequences of logic gates.
Division operations are typically performed by a floating-point unit (FPU), because
division may result in a fraction rather than an integer.
Addition: The CPU acquires operands, often from a storage register, and routes them
to the ALU's input registers. The control unit (CU) supplies an opcode input that says
"perform addition." The operation adds the first operand to the second operand using
sequences of OR, AND, and XOR gates, and the integer result is placed in the
appropriate storage register.
Subtraction: The CPU acquires operands from the register and sends them to the ALU
input registers, while the CU tells the opcode input to "perform subtraction." The
operation subtracts the first operand from the second operand, and the result is placed
in the appropriate storage register.
Multiplication: The CPU acquires operands, sends them to the ALU input registers, and
the CU creates an Opcode that says "multiply."
ALUs typically do not perform division operations because the result may not be an
integer but a fraction; FPUs manage these operations.
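As a sketch of how addition is built from gate sequences, here is a hypothetical ripple-carry adder in Python: XOR gates produce each sum bit, while AND and OR gates produce the carry, exactly as the gate-sequence description above suggests. This is a teaching model, not how any particular ALU is wired.

```python
def full_adder(a, b, carry_in):
    """Add three 1-bit inputs; return (sum_bit, carry_out).
    sum uses XOR gates; carry uses AND and OR gates."""
    partial = a ^ b                       # XOR of the two operand bits
    sum_bit = partial ^ carry_in          # XOR in the incoming carry
    carry_out = (a & b) | (partial & carry_in)
    return sum_bit, carry_out

def ripple_add(x, y, width=8):
    """Add two integers bit by bit, as a chain of full adders."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry                  # carry is the final carry-out

print(ripple_add(100, 55))    # (155, 0)
print(ripple_add(200, 100))   # (44, 1): 300 wraps to 44 with a carry out
```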

No 1(ii)
INTERNAL REGISTERS

The three important functions supported by computer registers are fetching, decoding,
and execution. Data and instructions from the user are collected and stored in specific
register locations. The instructions are then interpreted and processed so that the
desired output is produced, and the information is fully processed so that the user
gets and understands the results as expected. Intermediate results are held in
registers and written back to computer memory, from which they are returned
whenever the user requests them.
Functions
&#61623; Frequently used data, instructions, and their addresses are held in the registers so
that the CPU can fetch them whenever needed. CPU processing instructions are
held in the registers, and any data to be processed must pass through the
registers first. Hence, registers are the point through which user data enters the
CPU for processing.
&#61623; Data is quickly accepted, stored, and transferred in the registers, and each type of
register performs a specific function required by the CPU. Users need not know
much about the registers, as the CPU uses them for buffering data and as
temporary memory.
&#61623; Registers are buffers that store data copied from main memory so that the
processor can fetch it whenever needed. Registers such as the instruction pointer
(program counter) hold the address of the next instruction to be executed.
&#61623; The base register can modify computer operations or operands as needed, and its
contents can be added to the address portion of an instruction.
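The buffering role described above can be sketched as a minimal register-file model in Python. The register names (AX, BX, ...) follow x86 convention purely for illustration; the class itself is invented for this sketch.

```python
class RegisterFile:
    """Toy model: named registers buffering data between memory and the ALU."""
    def __init__(self, names):
        self.regs = {name: 0 for name in names}   # all registers start at 0

    def load(self, name, value):
        self.regs[name] = value    # data enters a register before processing

    def read(self, name):
        return self.regs[name]     # the CPU fetches operands from registers

rf = RegisterFile(["AX", "BX", "CX", "DX"])
rf.load("AX", 42)
rf.load("BX", rf.read("AX") + 8)   # an ALU-style operation routed via registers
print(rf.read("BX"))               # 50
```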

No1 (iii)
FLAG REGISTER
The flag register is a 16-bit register in the Intel 8086 microprocessor that contains
information about the state of the processor after executing an instruction. It is
sometimes referred to as the status register because it contains various status flags
that reflect the outcome of the last operation executed by the processor.

The flag register is an important component of the 8086 microprocessor because it is
used to determine the behavior of many conditional jump and branch instructions. The
various flags in the flag register are set or cleared based on the result of arithmetic,
logic, and other instructions executed by the processor.

The functions of the flag register in the microprocessor are:

1. Efficient conditional branching: The flag register enables efficient conditional
branching in assembly language programming. Programmers can use
conditional jump instructions to make decisions based on the state of the flags
in the flag register, allowing for more efficient and optimized code.
2. Improved arithmetic and logic operations: The flag register is used to store
the results of arithmetic and logic operations, allowing for more complex
calculations to be performed efficiently. The various flags in the flag register
provide information about the outcome of these operations, such as whether a
result is negative or zero, or whether there was an overflow or carry.
3. Easy access to processor status information: The flag register provides a
convenient way to access important information about the status of the
processor after executing an instruction. This information can be used to debug
programs and to optimize performance.
4. Improved error handling: The flag register can be used to detect errors and
exceptions, such as overflow or divide-by-zero errors. This allows programs to
handle these errors gracefully and to take appropriate corrective action.

The flag register is divided into various bit fields, with each bit representing a specific
flag. Some of the important flags in the flag register include the carry flag (CF), the
zero flag (ZF), the sign flag (SF), the overflow flag (OF), the parity flag (PF), and the
auxiliary carry flag (AF). These flags are used by the processor to determine the
outcome of conditional jump instructions and other branching instructions.

No 1(iv)
Instruction Register
An instruction register holds a machine instruction that is currently being executed. In
general, a register sits at the top of the memory hierarchy. A variety of registers serve
different functions in a central processing unit (CPU) – the function of the instruction
register is to hold that currently queued instruction for use.

In a typical CPU, in addition to an accumulator, there are registers such as an address
register, a data register and an index register, along with the instruction register. The
CPU performs fetch, decode and execute operations on memory units according to its
use of the registers. All of this serves the purpose of the memory processing that is at
the heart of the CPU’s raison d’etre, which is why some experts call registers “the
most important part of the CPU.” In a sense, the instruction register is particularly
important in that it holds the “active” memory value that is being worked on at a given
time.

No 1 (v)

CONTROL SEQUENCER

The functions of the control sequencer in a microprocessor include the following.

 It directs the flow of data sequence between the processor and other devices.
 It can interpret the instructions and controls the flow of data in the processor.
 It generates the sequence of control signals from the received instructions or
commands from the instruction register.
 It has the responsibility to control the execution units such as ALU, data buffers,
and registers in the CPU of a computer.
 It has the ability to fetch, decode, handle the execution, and store results.
&#61623; It does not itself process or store data; it only directs the units that do.

 To transfer the data, it communicates with the input and output devices and
controls all the units of the computer.
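The fetch, decode, and execute responsibilities listed above can be sketched as a toy control loop in Python. The three-instruction accumulator ISA here is invented purely for illustration:

```python
def run(program):
    """Toy control sequencer over a tiny accumulator machine.
    Each instruction is an (opcode, operand) tuple."""
    acc, pc = 0, 0
    while pc < len(program):
        opcode, operand = program[pc]   # fetch the instruction at the PC
        pc += 1                         # advance to the next instruction
        if opcode == "LOAD":            # decode the opcode and execute it
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "SUB":
            acc -= operand
    return acc                          # final accumulator value

print(run([("LOAD", 10), ("ADD", 5), ("SUB", 3)]))   # 12
```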

NO 2

PIPELINING

Pipelining is the process of accumulating instructions from the processor through a
pipeline. It allows storing and executing instructions in an orderly process. It is also
known as pipeline processing.

Before moving forward with pipelining, check these topics out to understand the
concept better:

 Memory Organization
 Memory Mapping and Virtual Memory
 Parallel Processing

Pipelining is a technique where multiple instructions are overlapped during execution.
The pipeline is divided into stages, and these stages are connected with one another
to form a pipe-like structure. Instructions enter at one end and exit from the other.

Pipelining increases the overall instruction throughput.

In a pipelined system, each segment consists of an input register followed by a
combinational circuit. The register holds data, and the combinational circuit performs
operations on it. The output of the combinational circuit is applied to the input register
of the next segment.
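The overlap described above can be sketched as a cycle-by-cycle model in Python, assuming a hypothetical three-stage Fetch/Decode/Execute pipe with no stalls:

```python
def pipeline(instructions, stages):
    """Cycle-by-cycle model of a linear pipeline.
    Each cycle, every instruction in the pipe advances one stage."""
    timeline = []                         # (cycle, instruction, stage) records
    depth = len(stages)
    total_cycles = len(instructions) + depth - 1   # fill time + drain time
    for cycle in range(total_cycles):
        for s in range(depth):
            i = cycle - s                 # instruction index occupying stage s
            if 0 <= i < len(instructions):
                timeline.append((cycle + 1, instructions[i], stages[s]))
    return timeline, total_cycles

tl, cycles = pipeline(["I1", "I2", "I3"], ["Fetch", "Decode", "Execute"])
print(cycles)   # 5 cycles, versus 9 if the 3 instructions ran unpipelined
```

While I1 is being decoded, I2 is already being fetched, which is where the throughput gain comes from.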

No 3

SERIAL COMPUTING

Serial computing, also known as sequential computing, refers to the use of a single
processor to execute a program: the program is divided into a sequence of
instructions, and each instruction is processed one at a time. Sequentially
programmed software offers a simpler approach, but the processor's speed
significantly limits how quickly each series of instructions can be executed. In
addition, uniprocessor machines use sequential data structures, whereas parallel
computing environments rely on concurrent data structures.

PARALLEL COMPUTING

Parallel computing is the use of two or more processors (cores, computers) in
combination to solve a single problem. It is a type of computing architecture in which
several processors execute or process an application or computation simultaneously.
Traditional computing follows a sequential execution model, where tasks are executed
one after the other. In contrast, parallel computing breaks down complex problems
into smaller, independent tasks that can be processed simultaneously. These tasks
are distributed among multiple processing units, such as CPU cores or computer
nodes, to achieve a substantial reduction in processing time. Most supercomputers
employ parallel computing principles to operate.

This type of computing is also known as parallel processing. The primary objective of
parallel computing is to increase the available computation power for faster application
processing or task resolution. Parallel computing infrastructure typically stands within
a single facility where many processors, installed in one or more interconnected
servers, work together. It is generally implemented in operational
environments/scenarios that require massive computation or processing power.
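The decompose-and-combine idea above can be sketched with Python's standard thread pool. This illustrates task decomposition only: because of Python's GIL, CPU-bound code like this would need processes (e.g. ProcessPoolExecutor) rather than threads to gain a real speedup.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum_of_squares(data):
    # One instruction stream, processing items one at a time.
    return sum(x * x for x in data)

def parallel_sum_of_squares(data, workers=4):
    # Break the problem into independent chunks, process them concurrently,
    # then combine the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(serial_sum_of_squares, chunks)
    return sum(partials)

data = list(range(1000))
print(serial_sum_of_squares(data) == parallel_sum_of_squares(data))  # True
```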

No 4

THE CONCEPT OF IN-ORDER – OUT-OF-ORDER ARCHITECTURAL DESIGN IN
COMPUTER ORGANIZATION

In-order execution processes instructions strictly in program order: if the current
instruction has not completed, the processor stalls rather than starting the next one.
Out-of-order execution is the opposite behaviour: (1) it executes instructions in a
non-sequential order; (2) even if the current instruction is not completed, it executes
the next instruction (but only if the next instruction does not depend on the result of
the current one); (3) it achieves faster execution speed.

Out-of-order execution "greedily" executes every instruction it can as quickly as
possible, without waiting for previous instructions to finish, unless an instruction
depends on the result of an as-yet unfinished instruction.

This is mostly useful when an instruction waits for memory to be read. An in-order
implementation would simply stall until the data becomes available, whereas an out-
of-order implementation can (provided there are instructions ahead that can be
executed independently) get something else done while the processor waits for the
data to be delivered from memory.
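This greedy issue behaviour can be sketched with a simplified Python model. The instruction format (name, registers read, registers written, latency in cycles) and the hazard check are invented for illustration; real out-of-order cores use far more elaborate mechanisms (register renaming, reorder buffers).

```python
def out_of_order_issue(instructions):
    """Greedy issue model: each cycle, issue every instruction whose
    source and destination registers are not held by in-flight work."""
    pending = list(instructions)
    in_flight = []                     # (finish_cycle, written_registers)
    order, cycle = [], 0
    while pending or in_flight:
        cycle += 1
        in_flight = [f for f in in_flight if f[0] > cycle]   # retire done work
        busy = {r for f in in_flight for r in f[1]}
        for ins in list(pending):
            name, reads, writes, lat = ins
            if not (set(reads) | set(writes)) & busy:        # no hazard: issue
                order.append(name)
                in_flight.append((cycle + lat, writes))
                busy |= set(writes)    # later instructions must now wait on it
                pending.remove(ins)
    return order

# LOAD r1 takes 3 cycles; ADD depends on r1, but SUB is independent.
prog = [("LOAD r1", [], ["r1"], 3),
        ("ADD r2,r1", ["r1"], ["r2"], 1),
        ("SUB r3,r4", ["r4"], ["r3"], 1)]
print(out_of_order_issue(prog))   # SUB issues before the dependent ADD
```

An in-order core would run ADD before SUB and stall behind the slow load; here the independent SUB fills that gap.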

Note that both compilers and (if the compiler is not clever enough) programmers can
take advantage of this by moving potentially expensive reads from memory as far
away as possible from the point where the data is actually used. This makes no
difference for an in-order implementation, but it can help hide memory latency in an
out-of-order implementation and therefore makes the code run faster.

The downside, of course, is that out-of-order implementations tend to be more
complex and more power-hungry because of all the book-keeping involved.

No 5

ADVANTAGES OF PARALLEL COMPUTING

1. Increased Performance

One of the primary advantages of parallel computing is its ability to significantly
improve performance. By distributing tasks across multiple processing units, parallel
computing can handle complex calculations and data-intensive operations much faster
than sequential computing. This is particularly advantageous in tasks like scientific
simulations, data analysis, and rendering high-quality graphics.

2. Scalability

Parallel computing offers excellent scalability, meaning it can efficiently handle larger
workloads as the number of processing units increases. As technology advances and
more powerful processors become available, parallel computing can take full
advantage of these resources, enabling faster and more efficient processing of data
and tasks.

3. Real-time Processing

Certain applications, such as video processing, real-time simulations, and online
gaming, require rapid and continuous processing of data. Parallel computing allows
these applications to meet the demands of real-time processing, ensuring seamless
and responsive user experiences.

4. Resource Utilization

Parallel computing optimizes resource utilization by leveraging multiple processing
units concurrently. It ensures that no processing power goes unused, maximizing the
efficiency of hardware resources.

