
Brown’s Town Community College

CAPE Computer Science Unit 1 Notes

Prepared by: Ms. G. Sawyers November 17, 2020

Combinational Circuits

A combinational circuit is a connected arrangement of logic gates with a set of inputs and
outputs. At any given time, the binary values of the outputs are a function of the binary
combination of the inputs. The n binary input variables come from an external source, the m
binary output variables go to an external destination, and in between there is an interconnection
of logic gates. A combinational circuit transforms binary information from the given input data
to the required output data. Combinational circuits are employed in digital computers for
generating binary control decisions and for providing digital components required for data
processing.
A combinational circuit can be described by a truth table showing the binary
relationship between the n input variables and the m output variables. The truth table lists the
corresponding output binary values for each of the 2^n input combinations. A combinational
circuit can also be specified with m Boolean functions, one for each output variable. Each
output function is expressed in terms of the n input variables.
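
As a rough sketch in Python, a combinational circuit can be modelled as a pure function of its inputs; enumerating all 2^n input combinations reproduces its truth table. The circuit below (a three-input majority gate) is just an illustrative example:

from itertools import product

def circuit(a, b, c):
    # An example combinational circuit: the output is 1 when
    # at least two of the three inputs are 1 (a majority gate).
    return (a & b) | (a & c) | (b & c)

# List all 2^3 = 8 input combinations to build the truth table.
for a, b, c in product([0, 1], repeat=3):
    print(a, b, c, "->", circuit(a, b, c))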

Half-Adder
The most basic digital arithmetic circuit is the addition of two binary digits. A combinational
circuit that performs the arithmetic
addition of two bits is called a half-adder. One that performs the addition of three bits (two
significant bits and a previous carry) is called a full-adder. The name of the former stems from
the fact that two half-adders are needed to implement a full-adder.
The input variables of a half-adder are called the augend and addend bits. The output
variables are called the sum and carry.
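
Their Boolean equations are sum = A XOR B and carry = A AND B. A minimal Python sketch that checks all four input combinations:

def half_adder(a, b):
    # The sum is the XOR of the inputs; the carry is their AND.
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "-> sum", s, "carry", c)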
Full-Adder
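A full-adder adds two significant bits and the carry from the previous stage. As noted above, it can be built from two half-adders plus an OR gate; a sketch reusing the half_adder function above:

def full_adder(a, b, cin):
    # First half-adder adds the two significant bits.
    s1, c1 = half_adder(a, b)
    # Second half-adder adds the previous carry to the partial sum.
    s2, c2 = half_adder(s1, cin)
    # A carry from either half-adder produces the final carry.
    return s2, c1 | c2

print(full_adder(1, 1, 1))  # (1, 1), i.e. 1 + 1 + 1 = 11 in binary
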
Flip Flops
The storage elements employed in clocked sequential circuits are called flip-flops. A flip-flop
is a binary cell capable of storing one bit of information. It has two inputs, one for the normal
value and one for the complement value of the bit stored in it. A flip-flop maintains a binary
state until directed by a clock pulse to switch states. The difference among various types of
flip-flops is in the number of inputs they possess and in the manner in which the inputs affect
the binary state. The most common flip-flops are:
SR, D (data), JK, T (toggle), and edge-triggered.
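
As a conceptual sketch (not any particular hardware), a D flip-flop can be modelled as a cell that captures its input only when a clock pulse arrives:

class DFlipFlop:
    def __init__(self):
        self.q = 0  # the stored bit (the normal output)

    def clock(self, d):
        # The binary state changes only on a clock pulse.
        self.q = d
        return self.q, 1 - self.q  # Q and its complement

ff = DFlipFlop()
print(ff.clock(1))  # (1, 0)
print(ff.clock(0))  # (0, 1)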

Sequential Circuits
A sequential circuit is an interconnection of flip-flops and gates. There are two types of
sequential circuits, synchronous and asynchronous. Their classification depends on the timing
of their signals. In an asynchronous circuit the outputs depend upon the order in which the input
variables change and can be affected at any instant of time. These are basically combinational
circuits with feedback paths. A synchronous system uses storage elements called flip-flops
that are employed to change their binary state value only at discrete instants of time, at a clock
pulse.

A register is a digital circuit used within the CPU to store one or more bits of information. Two basic
types of registers are commonly used: parallel registers and shift registers.

A parallel register consists of a set of 1-bit memories that can be read or written
simultaneously. It is used to store data.

A shift register accepts and/or transfers information serially. Shift registers are used to
interface with serial I/O devices. In addition, they can be used within the ALU to perform logical
shift and rotate functions.
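
A minimal sketch of a 4-bit shift register with serial input and serial output (the shift direction here is an arbitrary choice):

class ShiftRegister:
    def __init__(self, n=4):
        self.bits = [0] * n  # n 1-bit cells

    def shift_in(self, bit):
        # On each clock pulse every bit moves one position along;
        # one bit enters serially and one bit leaves at the far end.
        out = self.bits.pop()
        self.bits.insert(0, bit)
        return out

sr = ShiftRegister()
for b in (1, 0, 1, 1):
    sr.shift_in(b)
print(sr.bits)  # [1, 1, 0, 1]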

Counters
A counter is a register whose value is easily incremented by 1 modulo the capacity of the
register. Thus a register made up of n flip-flops can count to 2^n – 1. When the counter is
incremented beyond its maximum value, it is set to 0.
They can be designated as asynchronous or synchronous, depending on the way in which they
operate. Asynchronous counters are relatively slow because the output of one flip-flop
triggers a change in the status of the next flip-flop. This type of counter is referred to as a
ripple counter, because the change that occurs to increment the counter starts at one end and
‘ripples’ through to the other end. In a Synchronous counter, all the flip-flops change state at
the same time. Synchronous counters are much faster, so they are used in CPUs. A counter that follows a binary
sequence is called a binary counter. E.g. 0000 to 0001 to 0010 to 0011
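
The wrap-around behaviour is easy to sketch: a counter built from n flip-flops counts modulo 2^n:

def increment(count, n):
    # Increment modulo the capacity of an n-flip-flop register.
    return (count + 1) % (2 ** n)

count = 0
for _ in range(5):
    count = increment(count, 2)   # a 2-bit counter counts 0..3
    print(format(count, "02b"))   # 01, 10, 11, 00, 01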

Data Transfer
Binary information received from an external device is usually stored in memory for later
processing. Information transferred from the central computer into an external device
originates in the memory unit. Data transfer between the CPU and an I/O device is initiated by the
CPU. The CPU merely executes the I/O instructions and may accept the data temporarily, but
the ultimate source or destination is the memory unit.
This transfer may be handled in a variety of modes.
Modes of Data Transfer
Data transfer to and from peripherals may be handled in one of three possible modes:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)
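
As a conceptual sketch of the first mode, programmed I/O has the CPU itself poll the device's status flag and move every word; the device object and its methods here are hypothetical:

def programmed_io_read(device, buffer):
    # The CPU busy-waits on the status flag and transfers each
    # word itself; no interrupts and no DMA are involved.
    while not device.done():
        if device.ready():            # poll the status flag
            buffer.append(device.read_word())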

Address Space and Memory Space


An address used by a programmer will be called a virtual address, and the set of such
addresses the address space. An address in memory is called a location or physical address.
The set of such locations is called the memory space. Thus the address space is the set of
addresses generated by programs as they reference instructions and data, and the memory space
is the set of memory locations directly addressable for processing. In most computers the address and memory spaces are
identical. In computers with virtual memory the address space is allowed to be larger than the
memory space.
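
For example, under a hypothetical paging scheme with 1K-word pages, a virtual address is split into a page number and an offset, and a page table maps the page onto a physical frame:

PAGE_SIZE = 1024  # assumed page size in words

def virtual_to_physical(vaddr, page_table):
    # Split the virtual address into page number and offset,
    # then look up the physical frame that holds the page.
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}   # hypothetical page-to-frame mapping
print(virtual_to_physical(1030, page_table))  # page 1, offset 6 -> 2054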

I/O Bus and Interface Modules


The I/O bus consists of data lines, address lines and control lines. For example, if a magnetic
disk, printer, and terminal are connected to a computer, each has its own interface unit. Each
interface decodes the address and control received from the I/O bus, interprets them for the
peripheral, and provides signals for the peripheral controller. It also synchronizes the data flow
and supervises the transfer between peripheral and controller. There are four types of
commands that an interface may receive. They are classified as control, status, data output, and
data input.

The memory bus also contains data, address, and read/write control lines. The computer buses
can be made to communicate with memory and I/O in three ways:
1. Use two separate buses, one for memory and the other for I/O
2. Use one common bus for both memory and I/O but separate control lines for each
3. Use one common bus for memory and I/O with common control lines

When all the I/O interface addresses are isolated from the addresses assigned to memory, the
configuration is referred to as the isolated I/O method for assigning addresses to the common bus.
When the computer employs only one set of read and write signals and does not distinguish
between memory and I/O addresses, the configuration is referred to as memory-mapped I/O.
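
A rough sketch of the memory-mapped case: one read signal serves both units, and only the address decides which one responds (the address range here is made up):

IO_BASE = 0xFF00  # assumed start of the I/O address range

def bus_read(addr, memory, io_registers):
    # With memory-mapped I/O the same read signal is used for
    # both; the address alone selects memory or an interface.
    if addr >= IO_BASE:
        return io_registers[addr - IO_BASE]
    return memory[addr]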

Instruction Formats

The basic computer has three instruction code formats, each 16 bits long. The operation
code (opcode) part of the instruction contains three bits and the meaning of the remaining 13
bits depends on the operation code encountered. The three instruction formats are: memory-
reference instruction, register-reference instruction, and input-output instruction.
The bits of the instruction are divided into groups called fields. The most common fields found
in instruction formats are:
1. An operation code field that specifies the operation to be performed.
2. An address field that designates a memory address or a processor register.
3. A mode field that specifies the way the operand or the effective address is determined.
Computers have instructions of several different lengths containing varying numbers of
addresses. The number of address fields in the instruction format of a computer depends on the
internal organization of its registers. Most computers fall into one of three types of CPU
organizations:
1. Single accumulator organization
2. General register organization
3. Stack organization
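
As an illustration of the third organization, a stack machine evaluates an expression such as (A + B) * (C + D) entirely with push and pop operations; a minimal sketch with example values:

stack = []

def push(x): stack.append(x)
def add():   push(stack.pop() + stack.pop())
def mul():   push(stack.pop() * stack.pop())

# (A + B) * (C + D) with A=2, B=3, C=4, D=5
push(2); push(3); add()   # stack holds [5]
push(4); push(5); add()   # stack holds [5, 9]
mul()                     # stack holds [45]
print(stack.pop())        # 45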

Addressing Modes

The operation field of an instruction specifies the operation to be performed. This operation
must be executed on some data stored in computer registers or memory words. The way the
operands are chosen during program execution is dependent on the addressing mode of the
instruction. The addressing mode specifies a rule for interpreting or modifying the address
fields of the instruction before the operand is actually referenced. Computers use addressing
mode techniques for the purposes of accommodating one or both of the following provisions:
1. To give programming versatility to the user by providing such facilities as pointers to
memory, counters for loop control, indexing of data, and program relocation.
2. To reduce the number of bits in the addressing field of the instruction.

The control unit of a computer is designed to go through an instruction cycle that is divided
into four major phases:
1. Fetch the instruction from memory
2. Decode the instruction
3. Read the effective address from memory if the instruction has an indirect address.
4. Execute the instruction

The various addressing modes include:


1. Implied mode: this mode needs no address at all. The operands are specified implicitly
in the definition of the instruction.
2. Immediate mode: in this mode the instruction has an operand field rather than an
address field; the operand is specified in the instruction itself.
3. Register mode: in this mode the operands are in registers that reside within the CPU.
The particular register is selected from a register field in the instruction. A k-bit field
can specify any one of 2^k registers.
4. Register Indirect mode: in this mode the instruction specifies a register in the CPU
whose contents give the address of the operand in memory.
5. Auto increment or Auto decrement mode: This is similar to the register indirect mode
except that the register is incremented or decremented after (or before) its value is used
to access memory.
6. Direct Address Mode: in this mode the effective address is equal to the address part of
the instruction.
7. Indirect Address Mode: In this mode the address field of the instruction gives the
address where the effective address is stored in memory.
For the last three modes below, Effective address = address part of the instruction + content of a CPU register.
8. Relative address mode: in this mode the content of the program counter is added to the
address part of the instruction in order to obtain the effective address.
9. Indexed Address Mode: In this mode the content of the index register is added to the
address part of the instruction to obtain the effective address.
10. Base Register Addressing Mode: In this mode the content of a base register is added to
the address part of the instruction to obtain the effective address.
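
A sketch of how several of these modes form the effective address, assuming simple dictionaries stand in for the registers and memory:

def effective_address(mode, addr_field, regs, memory):
    # Effective-address calculation for a few of the modes above.
    if mode == "direct":
        return addr_field
    if mode == "indirect":
        return memory[addr_field]     # memory holds the address
    if mode == "relative":
        return regs["PC"] + addr_field
    if mode == "indexed":
        return regs["XR"] + addr_field
    if mode == "base":
        return regs["BR"] + addr_field
    raise ValueError(mode)

regs = {"PC": 200, "XR": 100, "BR": 400}   # hypothetical contents
memory = {500: 800}
print(effective_address("indirect", 500, regs, memory))  # 800
print(effective_address("relative", 50, regs, memory))   # 250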

Address Field

The address field (AD field) specifies the value for the address field of the microinstruction in
one of three possible ways:
a. With a symbolic address, which must also appear as a label
b. With the symbol NEXT to designate the next address in sequence
c. When the BR field contains a RET or MAP symbol, the AD field is left empty and is
converted to seven zeros by the assembler.

Components of a Basic Computer

The basic computer consists of the following hardware components:


1. A memory unit with 4096 words of 16 bits each
2. Nine registers: AR, PC, DR, AC, IR, TR, OUTR, INPR, and SC
3. Seven flip-flops: I, S, E, R, IEN, FGI, and FGO
4. Two decoders: 3 x 8 operation decoder and 4 x 16 timing decoder
5. A 16-bit common bus
6. Control logic gates
7. Adder and logic circuit connected to the input of AC

Floating Point
A floating-point number has four parts: a sign, a mantissa, a radix, and an exponent. The sign is
either a 1 or -1. The mantissa, always a positive number, holds the significant digits of the
floating-point number. The exponent indicates the positive or negative power of the radix that
the mantissa and sign should be multiplied by. The four components are combined as follows
to get the floating-point value:
sign * mantissa * radix^exponent
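
A quick check of this formula in Python:

def float_value(sign, mantissa, radix, exponent):
    # value = sign * mantissa * radix^exponent
    return sign * mantissa * radix ** exponent

print(float_value(-1, 0.5, 10, 1))   # -5.0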

Floating-point numbers have multiple representations, because one can always multiply the
mantissa of any floating-point number by some power of the radix and change the exponent to
get the original number. For example, the number -5 can be represented equally by any of the
following forms in radix 10:

Forms of -5
Sign    Mantissa    Radix    Exponent
-1      50          10       -1
-1      5           10       0
-1      0.5         10       1
-1      0.05        10       2

For each floating-point number there is one representation that is said to be normalized. A
floating-point number is normalized if its mantissa is within the range defined by the following
relation:
1/radix <= mantissa < 1
A normalized radix 10 floating-point number has its decimal point just to the left of the first
non-zero digit in the mantissa. The normalized floating-point representation of -5 is
-1 * 0.5 * 10^1. In other words, a normalized floating-point number's mantissa has no non-zero
digits to the left of the decimal point and a non-zero digit just to the right of the decimal point.
Any floating-point number that doesn't fit into this category is said to be denormalized. Note
that the number zero has no normalized representation, because it has no non-zero digit to put
just to the right of the decimal point. "Why be normalized?" is a common exclamation among
zeros.
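
Normalization can be sketched as repeatedly scaling the mantissa into the range [1/radix, 1) while adjusting the exponent so the value is unchanged:

def normalize(mantissa, exponent, radix=10):
    if mantissa == 0:
        return mantissa, exponent   # zero has no normalized form
    while mantissa >= 1:
        mantissa /= radix           # shrink the mantissa...
        exponent += 1               # ...and compensate
    while mantissa < 1 / radix:
        mantissa *= radix
        exponent -= 1
    return mantissa, exponent

print(normalize(50, -1))   # (0.5, 1): 50 * 10^-1 = 0.5 * 10^1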

Floating-Point Arithmetic Procedures

The algorithm for addition and subtraction is divided into four parts:
1. Check for zeros
2. Align the mantissas
3. Add or subtract the mantissas
4. Normalize the result
The multiplication of two floating-point numbers requires that we multiply the mantissas
and add the exponents. The multiplication algorithm can be subdivided into four parts:
1. Check for zeros
2. Add the exponents
3. Multiply the mantissas
4. Normalize the product

Floating-point division requires that the exponents be subtracted and the mantissas
divided. The division algorithm can be subdivided into five parts:
1. Check for zeros
2. Initialize registers and evaluate the sign
3. Align the dividend
4. Subtract the exponents
5. Divide the mantissas
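
A sketch of the multiplication procedure, reusing the normalize function above and leaving sign handling aside for clarity:

def fp_multiply(m1, e1, m2, e2, radix=10):
    # 1. Check for zeros.
    if m1 == 0 or m2 == 0:
        return 0, 0
    # 2. Add the exponents.  3. Multiply the mantissas.
    mantissa, exponent = m1 * m2, e1 + e2
    # 4. Normalize the product.
    return normalize(mantissa, exponent, radix)

print(fp_multiply(0.5, 1, 0.5, 1))   # (0.25, 2), i.e. 5 * 5 = 25
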
Clock Speed
The clock speed of a CPU is defined as the frequency at which a processor executes instructions
or processes data. This clock speed is measured in millions of cycles per second, or
megahertz (MHz). The clock itself is actually a quartz crystal that vibrates at a certain
frequency when electricity is passed through it. Each vibration sends out a pulse or beat, like a
metronome, to each component that's synchronized with it.

Cache Memory

A cache memory, often simply called a cache, is a fast local memory used as a buffer for a more
distant, larger and slower memory in order to improve the average memory access speed. Although
relatively small, cache memories rely on the principle of locality, which indicates that if they retain
information recently referenced, they will tend to contain much of the information needed in the
near future.
There are two principal types of cache memory in a conventional computer architecture.
A data cache buffers transfers of data and instructions between the main memory and the
processor.
A paging cache, also known as the Translation Lookaside Buffer or TLB, is used in
a virtual memory architecture to buffer the recently referenced entries from page and segment
tables, thus avoiding an extra memory reference when these items are needed again.

It can also be said that a virtual memory itself is a type of cache system in which the main memory
is a buffer for the secondary memory.
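
The locality principle can be sketched with a tiny direct-mapped data cache: each address maps to one cache line, and a repeated reference to a recent address hits in the cache (the sizes are made up):

LINES = 8   # assumed number of cache lines

cache = {}  # line index -> (tag, value)

def cache_read(addr, memory):
    line, tag = addr % LINES, addr // LINES
    if cache.get(line, (None, None))[0] == tag:
        return cache[line][1], "hit"    # recently used: fast path
    value = memory[addr]                # miss: go to main memory
    cache[line] = (tag, value)
    return value, "miss"

memory = {i: i * i for i in range(64)}
print(cache_read(3, memory))   # (9, 'miss')
print(cache_read(3, memory))   # (9, 'hit')  -- temporal locality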
Comparison of Cache Types
This table summarizes the characteristics of the three types of "cache" memories used in a
typical storage system: data cache, paging cache, and the virtual memory itself.
