COA QP - A

A

1) What is an Instruction Register?


An instruction register (IR) is a component of the central processing unit (CPU) of a
computer that stores the instruction presently being executed. The IR is a small storage
location within the CPU that temporarily holds the current instruction fetched from
memory until it is decoded and executed.
The CPU loads instructions from memory into the IR, where they are decoded and executed
by the CPU. The instruction is stored in the IR in a manner that the CPU can easily decipher.
Following the execution of the instruction, the contents of the IR are updated with the next
instruction to be executed. The instruction cycle (the process through which a CPU collects,
decodes, and executes instructions) relies heavily on the IR.
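The instruction cycle described above can be sketched in a few lines of Python. The tiny instruction set (LOAD, ADD, HALT) and the memory layout are illustrative assumptions, not a real ISA; the point is the role the IR plays in each iteration.

```python
# A minimal sketch of the fetch-decode-execute cycle, showing the role of
# the instruction register (IR). The instruction set and memory layout are
# assumptions for illustration only.

def run(memory):
    pc = 0          # program counter: address of the next instruction
    acc = 0         # accumulator register
    while True:
        ir = memory[pc]          # fetch: copy the instruction into the IR
        pc += 1
        opcode, operand = ir     # decode: split the IR into its fields
        if opcode == "LOAD":     # execute: perform the decoded operation
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "HALT":
            return acc

# Program: load memory[4], add memory[5]; data stored after the code.
program = [("LOAD", 4), ("ADD", 5), ("HALT", 0), None, 10, 32]
print(run(program))  # 42
```

Note how the IR holds exactly one instruction at a time, and is overwritten by the next fetch once execution finishes.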

2) Which are the different fields in instruction formats?


Ans.) Instruction formats are a way of encoding the information required to specify an instruction in
a computer's instruction set architecture (ISA). The specific fields included in an instruction format
can vary depending on the design of the ISA, but some common fields include:

1) Opcode field: This field specifies the operation to be performed, such as addition or subtraction.

2) Operand fields: These fields specify the data on which the operation is to be performed, such as
registers or memory locations.

3) Immediate field: This field contains a constant or literal value that is used as an operand in the
instruction.

4) Address field: This field contains the memory address of an operand or the target of a branch or
jump instruction.

5) Control field: This field contains bits that control the behavior of the instruction or the processor
itself, such as specifying whether an interrupt is enabled or disabled.

6) Condition code field: This field contains bits that specify the condition under which a branch or
jump instruction is taken.

7) Register specifier field: This field specifies which register is used as an operand or result.

8) Mode field: This field specifies the addressing mode used to access operands, such as direct or
indirect addressing.

The specific fields and their sizes can vary depending on the ISA's design goals, such as optimizing for
code size or performance, and can also evolve over time as new instructions are added or existing
instructions are modified.
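To make the field idea concrete, the sketch below extracts fields from a 16-bit instruction word. The particular layout (4-bit opcode, 1-bit mode, 11-bit address) is a common textbook format, assumed here purely for illustration.

```python
# A hedged sketch of splitting a 16-bit instruction word into its fields.
# The 4/1/11 bit layout is an assumption, not a specific real ISA.

def decode(word):
    opcode  = (word >> 12) & 0xF   # bits 15-12: operation to perform
    mode    = (word >> 11) & 0x1   # bit  11:    addressing mode (0 = direct)
    address = word & 0x7FF         # bits 10-0:  operand address
    return opcode, mode, address

word = 0b0010_1_00000101010        # opcode = 2, indirect mode, address = 42
print(decode(word))                # (2, 1, 42)
```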

3) What is byte addressability?


A) Byte addressability means the hardware can address each individual byte of memory.
Computers with byte addressing are sometimes called byte machines, in contrast to
word-addressable architectures (word machines), which access memory one word at a time.

4) What is a bus ?
Ans ) A bus refers to a communication pathway or a set of wires that allow multiple components
within a computer system to exchange information and signals. These components may include the
processor, memory, input/output devices, and other peripherals. Buses can vary in size, speed, and
functionality, and can be classified as system buses, expansion buses, or local buses depending on
their usage and scope.

5) What is the purpose of using a status register?


Ans . Status registers are used in computer architecture to store the state of the processor or other
components in a computer system. These registers are used to indicate various conditions and
outcomes of instructions and operations, and can be used to control the flow of a program or
diagnose problems.

6) Write the classification of computer instructions?


Ans. A basic computer has three instruction code formats which are:

Memory - reference instruction.

Register - reference instruction.

Input-Output instruction.

7) What is the use of condition code bits?


Ans. Condition code bits, also known as flags, are a set of status bits in a computer’s central processing
unit (CPU) that indicate the outcome of arithmetic or logical operations. These bits are used to determine
whether a particular condition has occurred or not, and to make decisions based on the results of those
operations. Some common condition code bits include the zero flag (ZF), which is set if the result of an
operation is zero; the carry flag (CF), which is set if an arithmetic operation results in a carry or borrow;
the sign flag (SF), which is set if the result of an operation is negative; and the overflow flag (OF), which is
set if an arithmetic operation results in a signed overflow.
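The four flags above can be derived mechanically from an operation's result. The sketch below does this for an 8-bit addition; the register width and the x86-style flag names follow the text, but the helper itself is an illustration, not a real CPU's flag logic.

```python
# Deriving ZF, CF, SF, and OF after an 8-bit addition (illustrative sketch).

def add8_flags(a, b):
    full = a + b
    result = full & 0xFF                  # keep the low 8 bits
    zf = int(result == 0)                 # zero flag: result is zero
    cf = int(full > 0xFF)                 # carry flag: carry out of bit 7
    sf = int(result >> 7)                 # sign flag: sign bit of the result
    # overflow flag: both operands share a sign, but the result's differs
    of = int((a >> 7) == (b >> 7) and (a >> 7) != (result >> 7))
    return result, dict(ZF=zf, CF=cf, SF=sf, OF=of)

print(add8_flags(0x7F, 0x01))  # 127 + 1 wraps to 0x80: SF and OF are set
```

A conditional branch instruction then simply tests one of these bits to decide whether to jump.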

8) Differentiate between RAM and ROM


Ans.) RAM (Random Access Memory) is a form of computer memory that can be read and
changed in any order, typically used to store working data and machine code.
ROM (Read-Only Memory) is a type of non-volatile memory used in computers and other
electronic devices.

The major differences between RAM and ROM are:

- RAM stands for Random Access Memory; ROM stands for Read-Only Memory.
- RAM is expensive when compared to ROM; ROM is cheaper.
- The speed of RAM is higher when compared to ROM.
- RAM has a higher capacity when compared to ROM.
- Data in RAM can be read, modified, or erased; data in ROM can only be read, not modified or erased.
- The data stored in RAM is used by the CPU to process the current instructions; the data stored in ROM is used to bootstrap the computer.
- Data on RAM can be accessed by the CPU directly; if the CPU needs data on ROM, the data must first be transferred to RAM before the CPU can access it.
- RAM is volatile: its data exists only as long as there is no interruption in power. ROM is non-volatile: its data remains unchanged even when there is a disruption in the power supply.

9) Compare static and dynamic RAM.


ans. SRAM (static RAM) is a type of random access memory (RAM) that retains data bits in its
memory as long as power is being supplied, using flip-flop storage cells that need no
refreshing. DRAM (dynamic RAM) stores each bit as a charge on a capacitor and must be
continuously refreshed. Because it avoids refresh cycles, SRAM is faster and is typically
used for cache memory, but it is more expensive and less dense than DRAM, which is
therefore used for main memory.

10) What are the features of PROM?

Ans) A programmable read-only memory (PROM) is a form of digital memory whose contents
can be written once after the device is manufactured; the data is then permanent and cannot
be changed.

11) What are multiprocessor systems?


Ans) A multiprocessor is a computer system in which two or more central processing units
(CPUs) share full access to a common RAM. The main objective of using a multiprocessor is to
boost the system's execution speed, with other objectives being fault tolerance and
application matching.

12) How the efficiency of a pipeline can be measured?


Ans) The efficiency of a pipeline with n stages is defined as the ratio of the actual
speedup to the maximum speedup. The maximum speedup that can be achieved is always equal to
the number of stages, and is reached only when efficiency is 100%. Practically, efficiency
is always less than 100%, so the speedup is always less than the number of stages in a
pipelined architecture.

The formula is E = m / (n + m - 1), where n is the number of stages and m is the number of tasks.
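The formula can be checked numerically. For an n-stage pipeline processing m independent tasks, the pipelined execution takes n + m - 1 cycles; the values below (4 stages, 100 tasks) are chosen arbitrarily for illustration.

```python
# Numerical illustration of pipeline speedup and efficiency for n stages
# and m tasks (values are arbitrary examples).

def pipeline_metrics(n, m):
    cycles = n + m - 1              # first task fills the pipe, then 1 per cycle
    speedup = (n * m) / cycles      # vs. n*m cycles without pipelining
    efficiency = m / cycles         # = speedup / n, the formula above
    return speedup, efficiency

s, e = pipeline_metrics(4, 100)
print(round(s, 2), round(e, 3))  # 3.88 0.971
```

As m grows, the speedup approaches n and the efficiency approaches 100%, matching the statement above.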

13) Explain the basic operational concept between memory and processor.

The processor retrieves instructions and data from memory and conducts computations and
operations on them; this is the core operational relationship between memory and processor
in a computer.

When the processor needs to execute an instruction, it reads it from memory and saves it in the
instruction register (IR). The instruction is subsequently decoded, and the processor determines
what operation is required.

If the operation requires the use of data, the processor retrieves it from memory and stores it in a
register. The CPU then executes the operation on the data in the register and stores the result in
memory or another register.

This process is repeated as the processor runs through each instruction in the program. The
processor reads data from memory, executes operations on it, and then saves the result in
memory or a register.

Overall, the memory-processor relationship is crucial to the operation of a computer system,
since the processor relies on memory to fetch program instructions and perform calculations.

14) How does a microprocessor differentiate between data and instructions? Explain.

At the most basic level, an instruction's address comes from the PC (Program Counter),
while a data address does not come from the PC.

Ans.) In modern microprocessors, the differentiation between data and instructions is
typically handled by the processor's instruction decoder.
The instruction decoder is a component within the processor that is responsible for reading the
binary instructions stored in memory and translating them into a set of signals that control the
various components of the processor.

When the processor retrieves an instruction from memory, the instruction decoder examines the
bits in the instruction to determine whether the instruction is a data instruction or an instruction
to be executed.

In most microprocessors, data instructions are encoded using a different bit pattern than
instructions that perform operations. The instruction decoder can recognize this difference in
encoding and route the instruction to the appropriate component within the processor for
execution.

Once an instruction has been identified as an operation instruction, the instruction decoder will
pass it on to the appropriate processing unit within the processor, such as the arithmetic logic unit
(ALU), to be executed.

Data instructions, on the other hand, are typically processed by separate units within the
processor, such as the memory management unit (MMU), which is responsible for managing the
processor's access to memory.

In summary, modern microprocessors use a combination of instruction decoding and specialized
processing units to differentiate between data and instructions, allowing them to effectively
execute the program stored in memory.

15) Explain the use of timing and control signals, giving examples.

A) The timing for all registers in the basic computer is controlled by a master clock
generator. The clock pulses are applied

to all flip-flops and registers in the system, including the flip-flops and registers in the control unit.
The clock pulses do not change the state of a register unless the register is enabled by a control
signal. The control signals are generated in the control unit and provide control inputs for the
multiplexers in the common bus, control inputs in processor registers, and microoperations for the
accumulator.

There are two major types of control organization:

hardwired control and

microprogrammed control.

In the hardwired organization, the control logic is implemented with gates, flip-flops, decoders,
and other digital circuits. It has the advantage that it can be optimized to produce a fast mode of
operation. In the microprogrammed organization, the control information is stored in a control
memory. The control memory is programmed to initiate the required sequence of
microoperations. A hardwired control, as the name implies, requires changes in the wiring among
the various components if the design has to be modified or changed.

In the microprogrammed control, any required changes or modifications can be done by updating
the microprogram in control memory.

A hardwired control unit consists of two decoders, a sequence counter, and a number of
control logic gates.

16) Explain register addressing mode with example.


Ans) In the register addressing mode, the operands are stored in registers rather than
memory. The instruction specifies the register that holds the operand.

For example, consider the following instruction in assembly language: MOV AX, BX

In this example, the instruction moves the contents of register BX to register AX. Here, the register
BX is the source operand and the register AX is the destination operand.

Another example of the register addressing mode is: ADD AX, CX

In this example, the instruction adds the contents of register CX to register AX. The
register CX is the source operand and the register AX is the destination operand. The
register addressing mode is efficient, as the data is accessed and processed quickly since
it is stored in registers located in the processor itself. However, the number of registers
is limited, and this mode cannot be used when there is more data than the number of
available registers. Moreover, as the data is stored in registers, it can be lost if the
program is terminated or if there is a power outage.
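The two assembly examples above can be traced with a toy interpreter over a register file. The initial register values and the two-operand parsing are assumptions made for this sketch.

```python
# A toy interpreter for the register-addressing examples MOV AX, BX and
# ADD AX, CX. Initial register contents are illustrative assumptions.

regs = {"AX": 0, "BX": 7, "CX": 5}

def execute(instr):
    op, dst, src = instr.replace(",", "").split()
    if op == "MOV":
        regs[dst] = regs[src]              # copy source register to destination
    elif op == "ADD":
        regs[dst] = regs[dst] + regs[src]  # add source into destination

execute("MOV AX, BX")   # AX = BX = 7
execute("ADD AX, CX")   # AX = 7 + 5 = 12
print(regs["AX"])       # 12
```

Every operand here is a register name, never a memory address, which is exactly what makes this the register addressing mode.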

17) Explain memory hierarchy

Ans. Memory hierarchy refers to the organization of computer memory into different levels, each
with varying speeds, capacities, and costs. The memory hierarchy is designed to provide a balance
between performance and cost, by using faster but more expensive memory at the higher levels
and slower but cheaper memory at the lower levels.

The memory hierarchy typically consists of several levels, including:

1. Registers: Registers are the fastest and smallest type of memory in a computer system. They are
used to store the most frequently accessed data and instructions directly inside the processor,
allowing for very fast access times.

2. Cache: Cache memory is a small amount of memory that is located close to the processor, and is
used to store frequently accessed data and instructions. Cache memory is faster than main
memory, but is more expensive and has a smaller capacity.

3. Main memory: Main memory, also known as RAM (Random Access Memory), is the primary
memory in a computer system. It is used to store data and instructions that are currently being
used by the processor. Main memory is slower than cache memory, but has a larger capacity and is
less expensive.

4. Secondary storage: Secondary storage, such as hard disk drives and solid-state drives, is
used to store data and programs that are not currently being used by the processor.
Secondary storage is slower than main memory, but has a much larger capacity and is less
expensive.

5. Tertiary storage: Tertiary storage, such as tape drives, is used for long-term storage of
data that is not frequently accessed. Tertiary storage is much slower than secondary
storage, but has a much larger capacity and is less expensive.

The memory hierarchy allows a computer system to quickly access frequently used data and
instructions, while still providing enough storage capacity for less frequently used data and
programs. By using a combination of different memory types, the memory hierarchy is able to
provide an optimal balance between performance and cost.
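The balance the hierarchy strikes can be quantified with the standard effective-access-time formula over two adjacent levels. The hit ratio and access times below are illustrative assumptions, not measurements of any real system.

```python
# Effective access time across two levels of the hierarchy: on a cache hit
# we pay the cache latency, on a miss the main-memory latency. The numbers
# used in the example are assumptions for illustration.

def effective_access_time(hit_ratio, cache_ns, memory_ns):
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

print(round(effective_access_time(0.95, 2, 100), 1))  # 6.9
```

With a 95% hit ratio, the average access time (6.9 ns) is far closer to the 2 ns cache than to the 100 ns main memory, which is why small fast levels near the processor pay off.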

18) Distinguish between cache and associative memory.

Ans. Cache and associative memory are both memory units used to store data.

Cache memory is very fast and stores frequently used instructions, from where the CPU can
access them immediately if needed, whereas associative memory is comparatively slow and uses
data or content to perform searches.

DIFFERENCE BETWEEN ASSOCIATIVE MEMORY AND CACHE MEMORY:

- Associative memory, as its other name CAM (Content Addressable Memory) suggests, is addressed by content; cache memory is a very small amount of memory that speeds up the access time of the main memory.
- Associative memory serves as a parallel data search mechanism; cache memory carries data between the main memory and the processor, which increases access speed.
- In associative memory, the supplied content is matched against the existing data in the memory, and the memory unit whose data matches is accessed; cache memory sits between the CPU and main memory, and its job is to implement a mapping of the pieces of data currently stored in main memory.
- Associative memory is considerably cheaper than cache memory.
- Associative memory is considerably slower than cache memory.

ASSOCIATIVE MEMORY:

The type of memory in which part of the content is used to access the memory unit is called
Associative memory. It is commonly known as CAM (Content Addressable Memory).

In associative memory, read and write operations on a memory location are done on the basis
of its content. For a write operation, the memory has the ability to find an empty location
by itself to store the data, without requiring any physical address. To perform a read
operation, a part of the related content is supplied, which is then used to retrieve all the
matching contents.
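These content-based read and write operations can be sketched behaviorally. The slot count and record format below are assumptions, and a Python loop stands in for the parallel match hardware.

```python
# A behavioral sketch of CAM operations: write finds a free slot by itself,
# and read matches on part of the content. Slot count and record layout are
# illustrative assumptions.

class CAM:
    def __init__(self, slots=4):
        self.cells = [None] * slots

    def write(self, record):
        i = self.cells.index(None)      # memory finds an empty location itself
        self.cells[i] = record          # no physical address supplied by caller

    def read(self, key, value):
        # every cell is compared against the partial content; in hardware the
        # comparisons happen in parallel, here a loop stands in for that
        return [r for r in self.cells if r and r.get(key) == value]

cam = CAM()
cam.write({"tag": 0x1A, "data": 111})
cam.write({"tag": 0x2B, "data": 222})
print(cam.read("tag", 0x2B))  # [{'tag': 43, 'data': 222}]
```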

CACHE MEMORY:

The type of memory which is very fast and considerably small in size, and which is used to
speed up access to the main memory, is called cache memory.

Cache memory stores recent instructions and data from main memory in order to make them
readily available to the CPU. It lies between the registers and main memory. It usually
stores only limited data, such as instructions needed by the CPU, user inputs, and more.

It is possible that data demanded by the CPU is not present in the cache; this is referred
to as a cache miss. In that case, main memory comes into the picture and provides the
particular data block to the cache, which is then handed over to the CPU.

19) What is virtual memory? How it is useful?


Ans. Virtual memory is a technique used by modern operating systems to allow a computer to use
more memory than it physically has available. It works by temporarily transferring some of the
data stored in RAM (Random Access Memory) to a hard disk or SSD (Solid State Drive), and then
transferring it back into RAM when it is needed again.

The primary benefit of virtual memory is that it allows a computer to run more applications
simultaneously than would otherwise be possible. Without virtual memory, the amount of RAM in
a computer would place a hard limit on the number of applications that could be run at the
same time. With virtual memory, however, applications can be loaded into RAM as needed, with
the data that is not being actively used swapped out to disk. This means that a computer can
effectively run more applications than it has physical RAM available.

Virtual memory can also be useful in situations where an application requires more memory
than is available on the computer. Rather than crashing the application or the entire system,
virtual memory can allow the application to continue running by temporarily swapping data in
and out of RAM as needed.
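The mechanism underneath this is paged address translation: a virtual address is split into a page number and an offset, and a page table maps pages to physical frames, with absent pages triggering a page fault. The 4 KiB page size and the tiny page table below are assumptions for illustration.

```python
# A minimal sketch of the address translation behind virtual memory.
# Page size and the page-table contents are illustrative assumptions;
# None marks a page currently swapped out to disk.

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: None}   # virtual page -> physical frame

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]
    if frame is None:
        # in a real OS this page fault would trigger a swap-in from disk
        raise RuntimeError("page fault")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # page 1 maps to frame 3: 0x3abc
```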

20) What is parallel processing?


Ans.) Parallel processing is a computing technique in which multiple instructions or
tasks are executed simultaneously or in parallel, using multiple processors or
computing cores. This technique is used to improve the overall performance and
efficiency of a system by reducing the time required to complete a task.
In parallel processing, the tasks are divided into smaller subtasks that can be
executed independently, and then distributed across multiple processors or
computing cores. Each processor or core then works on its assigned subtask
simultaneously with the others. Once all the subtasks are completed, the results are
combined to produce the final output.
Parallel processing is commonly used in many applications, including scientific
simulations, data analytics, and image and video processing. It can also be used in
large-scale computing systems such as supercomputers and cloud computing
platforms to process large amounts of data or perform complex computations.
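The divide/distribute/combine pattern described above can be sketched with a worker pool. Summing a list stands in for any divisible workload; note that CPython threads illustrate the structure, but because of the GIL, true CPU parallelism would need processes or native code.

```python
# Divide a task into subtasks, distribute them to workers, combine results.
# The workload (summing a range) and worker count are illustrative choices.
from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    return sum(chunk)                           # each worker handles one piece

def parallel_sum(data, workers=4):
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers - 1)]
    chunks.append(data[(workers - 1) * size:])  # last chunk takes the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(subtask, chunks)    # subtasks run concurrently
    return sum(partials)                        # combine the partial results

print(parallel_sum(list(range(1, 101))))  # 5050
```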

21) List and explain some techniques to prevent pipeline conflicts.

Ans. Hardware Interlocks − Hardware interlocks are electronic circuits that detect
instructions whose source operands are destinations of instructions further up in the
pipeline. After detecting this situation, the instruction whose source is not available is
delayed by a suitable number of clock periods. In this way, the conflict is resolved.

Operand Forwarding − This procedure needs special hardware to identify a conflict and then
prevent it by routing the result directly between pipeline segments. This approach requires
additional hardware paths through MUXs (multiplexers).

Delayed Branching − In this procedure, the compiler is responsible for resolving the
pipelining conflicts. The compiler identifies the branch instructions and rearranges the
machine-language code sequence, inserting appropriate instructions that keep the pipeline
operating without obstruction.

Branch Prediction − This method utilizes some sort of intelligent
forecasting through appropriate logic. A pipeline with branch prediction
guesses the result of a conditional branch instruction before it is
implemented. The pipeline fetches the stream of instructions from a
predicted path, thus saving the time that is wasted by branch penalties.
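The "intelligent forecasting" mentioned above is often realized as a 2-bit saturating-counter predictor. The state encoding (0-3) is the standard textbook scheme, used here purely as an illustrative sketch.

```python
# A 2-bit saturating-counter branch predictor: one mispredict does not
# immediately flip a strongly-held prediction. States 0-1 predict not
# taken, states 2-3 predict taken (textbook encoding, assumed here).

class TwoBitPredictor:
    def __init__(self):
        self.state = 2   # start at "weakly taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # move toward "taken" or "not taken", saturating at the ends
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
hits = 0
for taken in [True, True, True, False, True, True]:  # a loop-like pattern
    hits += (p.predict() == taken)
    p.update(taken)
print(hits)  # 5 of 6 predictions correct
```

The single not-taken outcome (the loop exit) costs one misprediction, but the predictor stays biased toward taken, which is why this scheme works well on loops.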

22) Explain stack organisation in detail.


In computer science, a stack is a type of data structure that is used to store and manage data in a
last-in, first-out (LIFO) manner. In a stack, data is added and removed from one end only, called the
top of the stack.

The organization of a stack typically involves two operations: push and pop. The push operation adds
a new element to the top of the stack, while the pop operation removes the top element from the
stack. Other common operations on a stack include peek, which returns the value of the top element
without removing it, and is_empty, which checks whether the stack is empty.

Stacks are often implemented as arrays or linked lists. In an array implementation, the stack is
represented as a fixed-size array, and a variable called the top pointer points to the top element of
the stack. When an element is pushed onto the stack, the top pointer is incremented, and the new
element is added to the top of the stack. When an element is popped from the stack, the top pointer
is decremented, and the top element is removed.

In a linked list implementation, the stack is represented as a linked list of nodes, where each node
contains a value and a pointer to the next node in the list. The top of the stack is represented by the
first node in the list, and new elements are added to the top by creating a new node and setting its
next pointer to the current top node. When an element is popped from the stack, the top pointer is
updated to point to the next node in the list, and the top node is removed.

Stacks are used in many programming languages and applications, including function calls,
expression evaluation, and parsing. For example, in function calls, the stack is used to keep track of
the order in which functions are called, and to store local variables and function arguments. In
expression evaluation and parsing, the stack is used to keep track of operators and operands, and to
ensure that expressions are evaluated in the correct order.
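The array implementation with a top pointer described above can be sketched directly; the fixed capacity and overflow/underflow errors are assumptions of this sketch.

```python
# The push/pop/peek/is_empty operations on a fixed-size array with a top
# pointer, mirroring the array implementation described in the text.

class Stack:
    def __init__(self, capacity=8):
        self.data = [None] * capacity
        self.top = -1                     # -1 means the stack is empty

    def is_empty(self):
        return self.top == -1

    def push(self, value):
        if self.top + 1 == len(self.data):
            raise OverflowError("stack overflow")
        self.top += 1                     # increment, then write the new top
        self.data[self.top] = value

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        value = self.data[self.top]
        self.top -= 1                     # decrement to discard the top
        return value

    def peek(self):
        return self.data[self.top]        # read without removing

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop(), s.peek(), s.is_empty())  # 3 2 False  (last in, first out)
```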

23) Explain and distinguish magnetic storage devices and optical storage devices.

Magnetic vs Optical Storage

Magnetic storage devices and optical storage devices are two types of data storage technologies
that are commonly used today.

Magnetic storage devices use magnetic fields to store and retrieve data. They consist of a
spinning disk coated with a magnetic material, which stores the data as magnetized regions.
Examples of magnetic storage devices include hard disk drives (HDDs), floppy disks, and
magnetic tape.

On the other hand, optical storage devices use lasers to read and write data. They use a reflective
layer on a plastic disc or a glass disc to store the data, which is encoded as tiny pits and bumps on
the surface of the disc. Examples of optical storage devices include CD-ROMs, DVDs, and Blu-ray
discs.

One of the main advantages of magnetic storage devices is that they typically offer larger storage
capacities and faster data transfer rates than optical storage devices. Additionally, magnetic
storage devices are typically less expensive than optical storage devices.

Optical storage devices, on the other hand, are more durable and resistant to damage from
physical wear and tear. They are also less susceptible to data loss from magnetic interference,
making them ideal for long-term storage of important data. Additionally, optical storage devices
can be read by a wider range of devices, including computers, DVD players, and gaming consoles.

In summary, magnetic storage devices and optical storage devices both have their own advantages
and disadvantages, and are used in different contexts based on their respective strengths.

24) Explain Flynn's architectural classification scheme.

Flynn's architectural classification scheme is a taxonomy of computer architectures proposed by
Michael J. Flynn in 1966. The scheme categorizes computer architectures based on the number of
instruction streams and data streams that can be processed simultaneously.

There are four classifications in Flynn's taxonomy:

Single Instruction, Single Data (SISD): This is the most basic type of architecture, where the
computer processes only one instruction and one data stream at a time. This is similar to the von
Neumann architecture, which is used in most conventional computers.

Single Instruction, Multiple Data (SIMD): In this architecture, multiple data streams are processed
simultaneously by the same instruction. This type of architecture is commonly used in parallel
processing and vector processing applications, such as graphics processing units (GPUs) and digital
signal processors (DSPs).

Multiple Instruction, Single Data (MISD): In this architecture, multiple instructions are applied to a
single data stream simultaneously. This type of architecture is not commonly used in practice, as it
is difficult to find applications that require this type of processing.

Multiple Instruction, Multiple Data (MIMD): This architecture allows for multiple instruction
streams and data streams to be processed simultaneously, and is commonly used in parallel
computing and distributed computing systems. This type of architecture can be further divided
into two subcategories: shared memory and distributed memory.

In shared memory MIMD systems, all processors have access to a shared memory space, and can
communicate with each other through that shared memory. In distributed memory MIMD
systems, each processor has its own memory space, and communication between processors is
achieved through message passing.
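The SISD/SIMD distinction above can be illustrated in software: the same "add 1" operation applied to one datum per step, versus one operation logically applied across a whole vector of data. The list comprehension is only a stand-in for vector hardware such as a GPU lane or an SSE register.

```python
# SISD vs SIMD, illustrated in software (the data values are arbitrary).

data = [10, 20, 30, 40]

# SISD: one instruction stream processes one data item per step
sisd = []
for x in data:
    sisd.append(x + 1)

# SIMD: one instruction conceptually applied to many data lanes at once
simd = [x + 1 for x in data]

print(sisd == simd, simd)  # True [11, 21, 31, 41]
```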

Overall, Flynn's classification scheme provides a useful framework for understanding the basic
types of computer architectures and their respective strengths and weaknesses.

25) What is an array processor? Explain with the help of neat diagrams.
A. An array processor is a type of parallel computing architecture that uses a large number of
processing elements (PEs) to work together on a single task. The PEs are typically simple
processors that are connected to each other in a regular or irregular array structure, and are
controlled by a central controller. In an array processor, the data is divided into small pieces and
distributed among the PEs. Each PE performs the same operation on its assigned piece of data
simultaneously, which results in a massive speedup in processing time. This makes array
processors ideal for tasks that require a large amount of data to be processed in a short amount of
time, such as image and signal processing.

The architecture of an array processor can be visualized using a diagram like the one below:

 ___________________________
|         PE Array          |
|___________________________|
|      Central Control      |
|___________________________|

In this diagram, the PE array consists of a regular grid of processing elements, which are connected
to each other through a communication network. Each processing element is responsible for
performing a small part of the overall computation. The central control unit is responsible for
controlling the flow of data between the PEs and ensuring that the computation is performed
correctly.

Overall, array processors offer a highly parallel approach to computing that can deliver massive
speedups in processing time. However, designing and programming these systems can be complex,
and they may not be well-suited for all types of applications.
