
Figure 41 : Floating-point: addition.

A single addition of two floating-point numbers implies 8 steps needed to prepare the fraction and exponent parts. The mantissas can be added only after comparing the (biased) exponents and adjusting for their difference, which leads to repositioning (shifting) the mantissa of the smaller operand. The implicit leading one must be restored before the addition and removed again when building the result.
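As a rough illustration of these steps, the following Python sketch adds two positive numbers in a simplified format; the field names (exp, mant, frac_bits) are hypothetical, and the sign handling, rounding and special cases treated by real hardware are ignored.

def fp_add(exp_a, mant_a, exp_b, mant_b, frac_bits=23):
    # Step 1: compare the exponents and keep the larger operand in (exp_a, mant_a).
    if exp_a < exp_b:
        exp_a, mant_a, exp_b, mant_b = exp_b, mant_b, exp_a, mant_a
    # Step 2: shift the mantissa of the smaller operand right by the exponent difference.
    mant_b >>= (exp_a - exp_b)
    # Step 3: add the aligned mantissas (both still carry their explicit leading one).
    mant, exp = mant_a + mant_b, exp_a
    # Step 4: renormalize if the sum overflowed past the leading-one position.
    while mant >= (1 << (frac_bits + 1)):
        mant >>= 1
        exp += 1
    # Step 5: remove the leading one to obtain the stored fraction of the result.
    return exp, mant - (1 << frac_bits)

exp, frac = fp_add(1, 0b11 << 22, 0, 1 << 23)   # 1.5 * 2^1 + 1.0 * 2^0 = 3.0 + 1.0
print(exp, hex(frac))                           # exp=2, frac=0 -> 1.0 * 2^2 = 4.0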

2.7.4 Error Correction Codes


Data may be modified in memory by accident or during transmission. In order to avoid failures due to unexpected changes in data bits, additional bits are used to allow the detection of errors during the storage or the transmission of data; these are the error correction codes. When we add r control bits to a sequence of m bits of data, we obtain an n-bit coded word (n = m + r), i.e., a set of 2^n words in which only 2^m words will be valid. With these control bits, it is possible to detect the change in value of one or more bits, or even, with more elaborate methods, to correct errors.
The parity bit. The simplest mechanism to detect errors in transmitted or stored data is to check the parity of the word. The parity bit is an additional bit allowing the detection of an error in a set of bits. It is added to obtain an even (even parity) or odd (odd parity) number of bits at 1 in a word. It only allows the detection of a simple error (the change of a single bit); it cannot be used to determine which of the bits has changed, nor to detect when two or more bits have changed. Even parity means adding a control bit (the even parity bit) so as to obtain an even number of bits at 1; odd parity adds a control bit (the odd parity bit) so as to obtain an odd number of bits at 1. The cyclic redundancy check CRC, performed during data transmission, uses in its most basic form the parity bit (or division by the polynomial x+1).
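As a small illustration (a Python sketch, not taken from the document), the even parity bit of a 7-bit data word can be computed by counting the bits at 1, and an error is detected when the parity of the received word is no longer even.

def even_parity_bit(word):
    # control bit that makes the total number of bits at 1 even
    return bin(word).count("1") % 2

def parity_ok(word_with_parity):
    # True when the full set of bits (data + parity) has an even number of 1s
    return bin(word_with_parity).count("1") % 2 == 0

data = 0b1011001                              # 7 data bits, four bits at 1
coded = (even_parity_bit(data) << 7) | data   # parity bit is 0: the count is already even
print(parity_ok(coded))                       # True
print(parity_ok(coded ^ 0b0000100))           # False: a single flipped bit is detected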

Figure 42 : The parity bit. (a) The parity bit is a control bit added to a data word; its value is set so that the full set of bits has even or odd parity. An error, the change of a single bit in the set, will be detected by just checking the parity. (b) The example shows the even and odd parity of a set of 8 bits (1 parity control bit + 7 data bits). The three examples show the values of this control bit in the case of even parity (an even number of bits at 1) or odd parity (an odd number of bits at 1).

The Hamming distance. To generalize error detection, there are rules giving the number of control bits that need to be added to a data word and the errors that can then be detected and/or corrected. The Hamming distance represents the number of bits that need to change to transform a correct word into another equally correct word (see Figure 43). This distance is a measure of the robustness of the coding. The Hamming distance d between two words indicates the number of differing bits between these two words. To determine it, we apply the exclusive-OR operation XOR between the two words; the number of bits at 1 in the result gives the distance d. The greater this distance, the more separated the valid words will be, which makes errors easier to detect; we therefore obtain a more robust code.
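A minimal Python sketch of this computation, using XOR and a population count:

def hamming_distance(a, b):
    # XOR the two words, then count the bits at 1 (the differing positions)
    return bin(a ^ b).count("1")

print(hamming_distance(0b1011, 0b1110))   # 2: bits 0 and 2 differ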

Figure 43 : The Hamming distance. The Hamming distance d between two words indicates the number of differing bits between these two words. The example shows that to have a Hamming distance d=3, we need at least 2^3 - 2 = 6 non-valid words between two valid words.

By using control bits added to a valid word, it is possible to detect or to correct an error, i.e., the change of one or multiple bits in the set.
Error detection. To be able to detect an error on e bits, it is necessary to use words coded with a Hamming distance d = e + 1. It will therefore be necessary to modify (e+1) bits to transform a valid word into another equally valid word. The parity bit allows the detection of an error on just one bit, thus e = 1 and the distance d = 2, which is why only one control bit was added (see Figure 42).

Figure 44 : Error detection using the parity bit. The parity bit allows the detection of a simple error e=1. The parity bit is added to the word to protect. Two different valid words will have a Hamming distance d = 2 = e + 1, which means that a single bit change in one of the received words will be detected. Note the two examples where only one bit changes: by just checking the parity, the error is detected (the parity of the received word no longer matches the expected parity).

Error correction. When it is required not just to detect an error but also to correct it, more control bits must be added. To be able to correct an error on e bits, it is necessary to use words coded with a Hamming distance d = 2e + 1. In that case, when an error is detected, the received word is replaced by the valid word whose Hamming distance to it is the smallest. For example, to correct a double error (e = 2), it is therefore necessary to use a code with Hamming distance d = 5 (2*2 + 1).
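The replacement by the closest valid word can be sketched in a few lines of Python, assuming a hypothetical set of valid words separated by a sufficient Hamming distance:

def correct(received, valid_words):
    # choose the valid word at the smallest Hamming distance from the received one
    return min(valid_words, key=lambda w: bin(w ^ received).count("1"))

valid = [0b0000000, 0b1110100]            # hypothetical valid words, distance 4 apart
print(bin(correct(0b0000100, valid)))     # a single flipped bit -> corrected back to 0b0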

Figure 45 : Error correction. In order to detect and correct a simple error e=1, it is required to have a Hamming distance d = 3 = 2e + 1. The example shows two valid symbols with a Hamming distance d=5 between them; in that case an error on e=2 bits (d = 2e + 1 = 5) can be detected and corrected. The error is detected by computing the distance of the received value to the valid symbols. The example shows the second valid symbol received with some changed bits: in the first example, 2 bits changed, in the second one 3 bits changed and in the third one 4 bits changed. Only in the first case is the right symbol identified.

Control bits. To correct a simple error (e = 1), a code with a Hamming distance d = 3 (i.e., 2e + 1) between two valid words is required. Since we need a Hamming distance d = 2e + 1, each valid word requires n non-valid words around it (the n words obtained by flipping one of its n bits); therefore, the total number of words must satisfy 2^n >= n*2^m + 2^m = (n + 1)*2^m. Given that the resulting number of bits (valid word + control) is n = m + r, the minimum number of control bits r to be added must satisfy 2^r >= m + r + 1 (see the table in Figure 46).

Figure 46 : Error correction: control bits needed to correct a simple error e=1. The number of control bits r that need to be added to 2^m words of m bits must satisfy the inequality 2^r >= m + r + 1. Note that in the case of just 2 valid words (m=1), r=2 control bits must be added, and the resulting number of bits n=3 represents an overhead of 200%. However, the overhead decreases as the number of bits to protect grows. In the case of words of 512 bits, just r=10 control bits must be added, producing an overhead of only about 2%.
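The inequality can be evaluated directly; the short Python sketch below reproduces some rows of the table (the function name min_control_bits is an assumption used only for this illustration):

def min_control_bits(m):
    # smallest r such that 2**r >= m + r + 1
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (1, 4, 8, 16, 512):
    r = min_control_bits(m)
    print(m, r, f"overhead {100 * r / m:.0f}%")   # m=1 -> r=2 (200%), m=512 -> r=10 (~2%)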

The Hamming algorithm (1950). The principle of operation of Hamming's algorithm can be illustrated with a Venn diagram. The example shows the coding of 4-bit words to detect and correct a single error (1-bit error). In this diagram, a 4-bit valid word is represented by bits placed at the intersections of the three sets A, B and C, and r=3 control bits are added in the remaining regions; the total number of bits becomes n = m + r = 7. The value of each control bit is computed so that each set has even parity. In the case of a single altered bit, the sets where the parity is no longer even indicate the place where the changed bit is located. In the example (see Figure 47), the parity changed on the intersection AC but not on the intersection ABC; that is, AC - ABC indicates the place where the bit value changed.

Figure 47 : Error correction: the Hamming algorithm. The 4 bits of a given value (0100) are represented in the intersections AB, AC, BC and ABC (one bit in each intersection) of three circles A, B and C. In each empty region, we add a parity bit so that the parity of each circle is even (bits in red); 3 control bits will therefore be used (see the table of the number of control bits required). When a bit is modified (for example the AC intersection bit), the parity of regions A and C will be affected. However, the intersection ABC was not affected; thus, the place of the erroneous bit is AC - ABC.

The Hamming code. Each control bit r sets the parity of a certain number of bits of the code word. Thus, a bit at position b is controlled by the parity bits at positions b1, b2, ..., bj such that b1 + b2 + ... + bj = b. In the example, for a 16-bit word (m=16) and r=5 control bits, the bit at position 5 is controlled by the parity bits at positions 1 and 4, since 1 + 4 = 5. Bits are numbered starting from 1, with bit 1 being leftmost (most significant). The parity bits occupy the positions corresponding to powers of 2. In the example (see Figure 48), bits 1, 2, 4, 8 and 16 are parity bits and the others are data bits. Another way to find the error is to first detect the bit positions with incorrect parity; the sum of those position values gives the position of the defective bit.

Figure 48 : Error correction: Hamming coding. Bit 1 checks bits 1, 3, 5, 7, 9, 11, 13, 15, 17, 19 and 21. Bit 2 checks bits 2, 3, 6, 7, 10, 11, 14, 15, 18, and 19. In the example, if bit 5 is inverted, parity bits 1 and 4 show incorrect parity. By intersecting the bits checked by 1 and 4, we find that the error is among bits 5, 7, 13, 15 and 21. But, considering bits 2, 8, and 16, where the parity is correct, we know that bits 7, 13, 15 and 21 are correct. Another way to find the error is to first detect the parity-bit positions with incorrect parity; the sum of the incorrect position values gives the position of the defective bit. In the example, since parity bits 1 and 4 are incorrect, bit 5 = 1 + 4 has been inverted.
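A small Python sketch of this check, assuming even parity, positions numbered from 1 and parity bits at the power-of-2 positions, applied here to a 7-bit code word rather than the 21-bit word of the figure:

def hamming_syndrome(bits):
    # bits[1..n] is the received code word (index 0 is unused);
    # returns 0 when all parities are correct, otherwise the position of the flipped bit
    syndrome, p = 0, 1
    while p < len(bits):
        covered = [i for i in range(1, len(bits)) if i & p]   # positions checked by parity bit p
        if sum(bits[i] for i in covered) % 2 != 0:            # even parity expected
            syndrome += p
        p <<= 1
    return syndrome

received = [None, 0, 1, 1, 0, 1, 1, 1]   # valid word 0110011 with bit 5 flipped
print(hamming_syndrome(received))        # 5: parity bits 1 and 4 fail, 1 + 4 = 5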

2.8 The memory: basic concepts


The processor requires a memory to store the execution program as well as data. Indeed, the memory stores the programs, in the form of instructions to be executed, as well as the data they use.
The memory consists of an array of bits arranged in rows and columns (see Figure 49). A regular memory is therefore made up of a number C of cells of a certain size k in bits (cells of 1 bit, for example). A bit is a unit of memory which can hold a binary (base 2) value of 0 or 1. Binary coding is very efficient in terms of data storage capacity.
Bits can be organized into groups to form another type of cell. For instance, 1-bit cells, grouped in a row, constitute a k-bit data word. A word is the smallest amount of addressable information, or addressable unit. It can consist of one or more bits (for example 8 bits, i.e., 1 byte). The quantity or amount of information, for a word of k bits, is given by the number of different values that this word can store, in our example 2^k different possible values. For example, a word of 8 bits, a byte, can represent 2^8 (256) possible values. Note that 1 mega-byte (or MB) in base 10 represents 10^6 bytes, but in base 2 it is 1,048,576 bytes (= 1024^2 bytes = 2^20 bytes). When organizing the bits in a word, the most significant bits MSB are sometimes (we will see it later) placed at the extreme left and the least significant bits LSB at the extreme right (see Figure 51).
Memory words can be accessed using an address. The address refers to the position of a k-bit word in memory. For a C-word memory, we need C different addresses (0 to C-1) encoded using n-bit addresses (n = log2(C)). Thus, to access a single word in a memory of 1024 words we need a 10-bit address (10 = log2(1024)).
It is important to mention that the size in bits of a data word generally determines the size of the data registers used by the processor, as well as how processors are named, for example an 8-bit or 32-bit processor. In the past, processors used registers of different sizes, sometimes related to their processing power; however, today's processors mostly use multiples of bytes (see Figure 49).

Figure 49 : Memory words and addresses basics. (a) A simple memory as an array of 4 lines of 3 bits each can store 4 words of 3 bits. (b) Generic memory and example showing the relation between the number of words, the number of required address bits and the data size. (c) Examples of some well-known traditional processors and their register sizes (note that today's desktop processors are 32 to 64 bits; however, 4 to 8-bit processors are still in use for embedded, peripheral and low-power tasks).

Sometimes the memory is not well matched to the size of the data words used by the processor, as can happen with external memories. In this case, we may notice that some of the memory or some addresses remain unused. Thus, the memory organization is very important, as it determines the best relationship between the number of bits per word (or information cell) and the number of addressing bits. In the past, the number of bits, and therefore the organization used by computer manufacturers, was not standardized (see Figure 49). For example, IBM used 8- to 16-bit cells, while DEC used 12 bits. The number of address bits sometimes limits the size of the memory; for instance, 8-bit processors such as the Intel 8051, which use 16-bit addresses, can only address 65K memory positions (2^16 = 65,536). Matching address, memory and data size remains a common problem, particularly important not only during design, but also when the storage capacity of an existing processor needs to be expanded. For instance, the PIC 16F is an 8-bit microprocessor, that is, it has an 8-bit data size, but depending on the sub-family, the instructions can have a size of 12, 14 or 16 bits. For now, we will just see what happens when this size relationship is not fully achieved.
The following example shows the size relationship that must exist between the memory, data, and address to have a perfect match (no loss of memory, i.e., no unused memory or address locations). Let us consider a memory of a certain size M in bits which must be organized as a number C of cells of k bits. The choice of the size k of bits per cell with respect to the size M of the memory determines the number of bits necessary for addressing the cells. For an M = C*k organization, we will need A = log2(C) address bits. For example, for a memory of M = 96 bits and 8 bits per word, we will have to address 12 words (96/8) and we will need at least 4-bit addresses (log2(12) = 3.6, rounded up to 4); we therefore lose 4 addresses (2^4 - 12 = 16 - 12 = 4 unused addresses). Similarly, for the same memory size and 16-bit words, we will have to address 6 words (96/16) and we will need at least 3-bit addresses (log2(6) = 2.58, rounded up to 3), so we lose 2 addresses (2^3 - 6 = 8 - 6 = 2 unused addresses). For that memory, the perfect match is to use 12-bit words: we then have to address 8 cells (96/12) using 3-bit addresses (log2(8) = 3), so we use all the available addresses (2^3 = 8). Now, even though this is out of the scope of the example, you can imagine similar matching problems when the address or the word size cannot change, so other design techniques must be applied.
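The same arithmetic can be sketched in a few lines of Python (the function name address_fit is an assumption used only for this illustration):

import math

def address_fit(M, k):
    # M bits organized as C words of k bits; returns words, address bits, unused addresses
    C = M // k
    A = math.ceil(math.log2(C))
    return C, A, 2 ** A - C

print(address_fit(96, 8))    # (12, 4, 4) -> 4 unused addresses
print(address_fit(96, 12))   # (8, 3, 0)  -> perfect match
print(address_fit(96, 16))   # (6, 3, 2)  -> 2 unused addresses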

Figure 50 : Memory, words and addresses: size relationship example for a memory of 96 bits. (a) 12 8-bit words require 4-bit addresses, i.e., 4 addresses are unused. (b) 8 12-bit words require 3-bit addresses (all addresses are used). (c) 6 16-bit words require 3-bit addresses, i.e., 2 addresses are unused.

2.8.1 The memory: endianness or byte ordering


In a word, the bits can be ordered from left to right or from right to left. That order is not always the same; it depends on the type and family of processor we are using, and problems will arise if we do not know that order. In the first case, the most significant bits are placed starting from the left side of the word and the least significant bits are placed at the right.
The same can happen with the bytes of a word composed of more than 8 bits that is placed in an 8-bit-wide memory. In that case, we can place the Most Significant Bytes MSB first, then the Least Significant Bytes LSB, or vice versa. Another case is when we use a memory that can contain long words of multiple bytes: we need to know how those bytes are ordered in the memory, to retrieve the right value when reading it (see Figure 51). These two different orderings of bits or bytes are referred to as the endianness. We thus distinguish the Big Endian and the Little Endian orders.

Figure 51 : Memory words and addresses are not always related. A memory word is composed of several bits. In the example, a word of 16 bits (2 bytes) is represented. The bits are organized by placing the most significant bits MSB to the left and the least significant bits LSB to the right. The address is used to access each one of the memory words. In the example, the memory is organized in bytes, so two address positions are used to store a single word of 2 bytes.

The Big Endian (used by Motorola, MIPS and Sun Microsystems) is the ordering of the bytes of a word starting with the leftmost byte, the MSB, written from left to right towards the LSB (see Figure 52 (b)). That is why it is named Big Endian: we start from the most significant byte, which is normally placed at the left side of the word. However, the reverse order also exists.
The Little Endian (used by Intel) is the ordering of the bytes starting with the rightmost byte, the LSB, written from right to left towards the MSB (see Figure 52 (c)).
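A quick way to visualize the two orderings is Python's int.to_bytes; this is only a sketch of the resulting byte sequences, not of any particular processor:

word = 0x0A0B0C0D
print(word.to_bytes(4, "big").hex())      # 0a0b0c0d -> the MSB is stored first
print(word.to_bytes(4, "little").hex())   # 0d0c0b0a -> the LSB is stored first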

Figure 52 : Endianness: Big and Little Endian. (a) A memory word composed of multiple bytes can be ordered and read from the left, the most significant bytes MSB, to the right, the least significant bytes LSB. (b) The Big Endian ordering starts by writing to memory the MSB, thus proceeding from left to right. In that case, the original word, which fits perfectly in a memory word of 4 bytes, will be placed in a memory of two-byte words by writing the two MSB at the first address and the last two bytes (the LSB) at the second address. (c) The Little Endian ordering starts by writing the least significant bytes. In that case, the original word, when placed in a memory of 4-byte words, will contain the MSB at the right side of the word and the LSB at the left. When using a memory of 2-byte words, Little Endian starts by writing the two LSB at the first address, and the last two bytes (the MSB) are placed at the second address position.

The endianness becomes a problem when writing words that do not fit in a single memory position. Indeed, depending on the size of a word in memory, the bytes of a single long word, for instance of 4 bytes, will be split and placed at multiple address positions according to the endianness. This can become a problem when retrieving the word from memory: if we do not know the order in which the bytes were stored, we will not know how to rebuild the original word (see Figure 53).

Figure 53 : Endianness and memory word size. (a) A memory word composed of multiple bytes can be ordered and read from the left, the most significant bytes MSB, to the right, the least significant bytes LSB. (b) In a byte-addressable memory, a memory that contains 1-byte words, the Big Endian ordering will place the MSB at the first address and the LSB at the last one. (c) In a 2-byte addressable memory using Big Endian ordering, the two MSB will be placed at the first two memory positions and the two LSB at the end. Here, each address contains 2 bytes, also ordered according to Big Endianness. (d) In a 4-byte memory, the full word can be perfectly placed and we will have no problem reading it, but only if the processor is a big-endian one.

The endianness becomes a problem when using a memory shared between processors of different endianness.
In that case reading and writing can completely change the meaning or value of a word stored in memory (see Figure
54).
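A minimal Python sketch of this mismatch: a word stored in big-endian order and read back assuming little-endian order yields a different value.

word = 0x0A0B0C0D
stored = word.to_bytes(4, "big")            # written by a big-endian machine, MSB first
misread = int.from_bytes(stored, "little")  # read back assuming little-endian order
print(hex(misread))                         # 0xd0c0b0a: the bytes are interpreted reversed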

Figure 54 : Endianness and data transfer. (a) A big-endian machine will write the memory by placing the MSB at the first address and the consecutive bytes, down to the LSB, at the following addresses. Thus, the MSB is sent first through the memory bus to be placed at the first memory position. However, if that memory is read by a little-endian machine, it will consider the first byte as the LSB, and the order of the bytes in the retrieved word will be the opposite. (b) The little-endian machine will start by transferring the LSB, which will be placed at the first address location of the memory. The MSB will be placed at the last memory location. When retrieving that word from memory using a big-endian machine, it will consider, wrongly, that the byte at the first memory position is the MSB, and the retrieved word will have its bytes reversed.

2.8.2 Memory: types, contents and examples


A memory can contain both instructions and data, in which case it is a unified memory, or just one of these two things. Based on this, we will see how memory is used and also the terminology that goes with it.
A memory that just contains instructions will only be used in a read-only mode; thus, the term used is ROM or Read-Only Memory. Normally this type of memory just contains the program that will be executed by the processor, written only once and stored permanently. Because the program will not change, at least until its next upgrade, and it must remain in memory whether or not the processor is powered, the program is written to a non-volatile memory. Thus, a non-volatile memory NVM does not need a constant power supply to retain its data, which remains even with the processor powered off. As a program, which is composed of instructions, can also contain constant values, those values are also stored in the ROM memory designated for instructions.
A memory can also contain only data, which is produced or consumed; in that case, we need to read and write that memory. The traditional term used is RAM or Random-Access Memory, i.e., a read-write memory. In the general case, such working data does not need to be kept in a non-volatile memory. However, some data used to configure the processor, for instance to define the frequency, the memory size and location, or the number of available I/O, must be preserved in a non-volatile memory, even though this memory only contains data.
We can see some examples of memory organization according to its content by looking at the basic architectures, Von Neumann and Harvard. As the Von Neumann architecture uses a unified memory that contains data and instructions, the memory locations can be grouped into instructions (the program), placed in a ROM, and data, placed in a RAM. In the case of the Harvard architecture (see Figure 55), the memory is split into an instruction ROM and a data RAM using two separate buses, one for instructions, going to the control unit, and another for data, used by the arithmetic and logic unit. In that case we have two parallel paths: the control path (composed of the instruction memory and the control unit) and the data path (composed of the ALU and the data memory).
The addressing, type and content of the memory will depend on the processor architecture. For instance, processors like the 8-bit Zilog Z8 will have, depending on the version, a 12-bit or a 13-bit address to access 4K (2^12 = 4096) or 8K (2^13 = 8192) non-volatile ROM or Flash memory positions to store the program, and just 64 to 512 positions of volatile RAM memory to store variables and data.
General Purpose Processors GPP need to be able to load the programs of several users; therefore, they use the RAM memory for these programs, and the bootloader, stored in ROM memory, redirects the execution to the users' programs (see Figure 55). The bootloader is thus a small program that first manages the upload of the user program, the application, into a non-volatile RAM memory; after that, the user program is executed by the control unit.

Figure 55 : The Harvard architecture and two possible memory organizations: (a) the application program stored in a non-volatile read-only memory ROM and a separate data RAM memory for the data path. (b) A bootloader program stored in a non-volatile ROM memory (to upload the user program) and the application program in a second read-write non-volatile RAM memory.
The Intel 8051 processor (a Von Neumann architecture) uses a 16-bit address bus (64K memory positions) to address instructions and data (see Figure 56). The instructions are stored in the external ROM memory and the data in the external RAM memory. Both exchange with the processor via a single 8-bit data bus. The instructions are of variable size and are therefore composed of 1 or more groups of 8 bits. The instructions placed in the external ROM are addressed using the PC+AC address (PC, the Program Counter, and AC, the Accumulator, a dedicated register). That ROM can contain the program or the bootloader. In the second case, the external RAM can contain the application or user program in addition to data. Data placed in the external RAM is addressed using DPTR+AC (DPTR, the data pointer, a 16-bit register used to access data in external memory). Instructions placed in the external RAM are addressed using the PC+AC address. An internal RAM is available for registers (4 banks of 8 registers of 8 bits each), bit operands, the stack and the special function registers.

Figure 56 : Intel 8051 processor memory organization. The Intel 8051 processor uses a 16-bit address bus (64K memory positions) to address the external memory containing instructions (CODE) and data. Both are exchanged via a single 8-bit bus. The instructions, of variable size, are composed of 1 or more groups of 8 bits. The PC+AC address (PC, the Program Counter, and AC, the Accumulator) is used for instructions. The DPTR+AC address (DPTR, the data pointer, a 16-bit register) is used for data. The ROM can contain the program or the bootloader. In the second case, the external RAM can contain the application or user program. An internal RAM is available for registers (4 banks of 8 registers of 8 bits each), bit operands, the stack and the special function registers.

The Microchip PIC16 uses three memories with different word sizes according to the type of content (see Figure 57). As the PIC16 is basically a Harvard architecture, it has the instructions, of 12, 14 or 16-bit size depending on the family, stored in an 8K ROM, constant values in a small separate 256x8 EEPROM, and data in a 368x8-bit RAM. You will also notice that it has a small RAM used as a stack for temporary data, pointed to by the stack pointer. The first addresses of the ROM contain the interrupt vector, the addresses of the routines to be used to process interrupts (see the Interrupts Section), in particular the address of the reset interrupt routine (the routine taking care of the reset interrupt) placed at the address 0h; the rest of that memory contains the program.

Figure 57 : The Microchip PIC16 memories. The PIC16 is a Harvard architecture, with ROM and EEPROM memories for instructions and constants, and RAM memories for data and for a stack of data (pointed to by the stack pointer). The instructions, of 12, 14 or 16-bit size depending on the family, are stored in an 8K ROM, constant values in a small separate 256x8 EEPROM, and data in a 368x8-bit RAM. The first addresses of the ROM contain the interrupt vector, the addresses of the routines used to process interrupts; the address of the reset interrupt routine (the routine taking care of the reset interrupt) is placed at the address 0h. The Program Counter PC points to the address of the next instruction to be performed. During a reset of the processor, the PC points to the first position of this memory (generally, the address 0000h), where the first instruction to be performed is located, or to a pointer (an address placed at this location) to the next instruction or to the position where the program is actually located (in the example, the address 005h). In this last position, we can place a bootloader (this is the case for the 8051, as well as for the PIC). The stack is placed in RAM memory and is used to keep the execution order during the calls and returns of functions and interrupts (we will see this later).

2.8.3 Hierarchy of memories


Different types of memories are used by the microprocessor for multiple reasons, for instance speed, storage capacity, price, or size, requirements that a single type of memory cannot always satisfy. Memories are therefore classified according to their proximity to the heart of the processor, in a hierarchical manner that depends on speed, price and data storage capacity (see Figure 58).
The registers are used inside the processor to store data; they are at the heart of the processor. A register is a fast memory that can store a certain number of bits. The most common registers used by the control unit are the Program Counter PC, used to point to the next instruction to execute, and the Instruction Register IR, used to store the instruction being executed. The data path also has some dedicated registers, such as the accumulator ACC, to store input/output operands/results used/produced by the ALU; the stack pointer SP, a mechanism that provides a stack organization of data; and the status register STA, which contains flags indicating the status of the ALU after performing a certain operation, or incoming events. Some registers are more generic, used indistinctly by the ALU or the control unit, and grouped in a bank of registers. Others are related to I/O peripherals (for instance, registers for the UART transmission/reception of data, ADC/DAC, timers, or generic I/O ports). Registers are the most expensive but fastest memory elements, which is why it is better to use instructions operating on registers rather than any other type of memory access. Every single bit of a register is a volatile bistable, a latch or a master-slave combination of latches named a flip-flop.

Figure 58 : Hierarchy of memories. The fastest memories, flip-flops and latches, are used within the processor. Multiple flip-flops are combined to form dedicated registers of a certain size in bits. Register banks and stacks are then provided by using static read/write volatile SRAMs. SRAMs can also be used as cache memories. Dynamic read/write RAMs, slower but with higher storage capacity, are also used as cache memories. Non-volatile memories, read-only ROM and Flash, are used to store even larger sets of data, including multiple programs, as external memories. Magnetic disks and CDs are also still in use as cheap, high-storage-capacity memories.
