Memory Selection of ES
Hariharan AP/EEE
• What is a machine cycle?
A machine cycle is the unit of time that the microcontroller requires to carry out one basic step (fetch, decode, execute, or store) of instruction processing.
Step 1. Fetch: obtain the program instruction or data item from memory.
Step 2. Decode: translate the instruction into processor commands.
Step 3. Execute: carry out the command (ALU and control unit).
Step 4. Store: write the result to memory.
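The same four steps can be sketched in software. Below is a minimal, purely illustrative C model of a fetch-decode-execute-store loop for a made-up toy instruction set; the opcodes and memory layout are invented for this example and do not correspond to any real processor.

/* Toy fetch-decode-execute-store loop; not a real instruction set. */
#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD = 0, OP_ADD = 1, OP_STORE = 2, OP_HALT = 3 };

int main(void) {
    /* Program: load mem[10], add mem[11], store to mem[12], halt. */
    uint8_t mem[16] = { OP_LOAD, 10, OP_ADD, 11, OP_STORE, 12, OP_HALT,
                        0, 0, 0, 5, 7, 0, 0, 0, 0 };
    uint8_t pc = 0, acc = 0;

    for (;;) {
        uint8_t op = mem[pc++];                    /* Step 1: fetch the opcode          */
        if (op == OP_HALT) break;                  /* Step 2: decode it                 */
        uint8_t arg = mem[pc++];                   /*         fetch its operand address */
        switch (op) {                              /* Step 3: execute the command       */
        case OP_LOAD:  acc = mem[arg];                  break;
        case OP_ADD:   acc = (uint8_t)(acc + mem[arg]); break;
        case OP_STORE: mem[arg] = acc;                  break;  /* Step 4: store result */
        }
    }
    printf("result at mem[12] = %u\n", (unsigned)mem[12]);      /* prints 12 */
    return 0;
}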
Important considerations for a system designer when selecting a processor:
• Instruction set
• Maximum operand size in bits (8, 16, or 32).
• Clock frequency in MHz and processing speed in Millions of Instructions Per Second (MIPS).
• Ability of the processor to handle complex algorithms.
Design Process for Embedded System
Parameters
I. Secondary Memory
II. Primary Memory
   a) RAM
      i. SRAM
      ii. DRAM
   b) ROM
      i. PROM
      ii. EPROM
   c) Hybrid
      i. EEPROM
      ii. NVRAM
      iii. Flash Memory
   d) Cache Memory
   e) Virtual Memory
The computer usually uses its input/output channels to access secondary storage and transfers the desired data through an intermediate area in primary storage. Secondary storage does not lose its data when the device is powered down; it is non-volatile. Per unit, it is also typically an order of magnitude less expensive than primary storage.
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories. The file system also records additional information (called metadata) describing the owner of a file, its access time, its access permissions, and so on. Hard disks are usually used as secondary storage.
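The metadata mentioned above can also be read programmatically. The sketch below assumes a POSIX host (not a bare-metal target) and a placeholder file name example.txt; it uses the standard stat() call to print a file's owner, size, permissions, and last access time.

/* Reading file-system metadata (owner, size, permissions, access time). */
#include <sys/stat.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    struct stat sb;
    if (stat("example.txt", &sb) != 0) {      /* "example.txt" is only a placeholder */
        perror("stat");
        return 1;
    }
    printf("owner uid    : %ld\n",  (long)sb.st_uid);
    printf("size (bytes) : %lld\n", (long long)sb.st_size);
    printf("permissions  : %o\n",   (unsigned)(sb.st_mode & 0777));
    printf("last access  : %s",     ctime(&sb.st_atime));
    return 0;
}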
Primary storage (or main memory or internal memory), often referred
to simply as memory, is the only one directly accessible to the CPU.
The CPU continuously reads instructions stored there and executes
them as required.
Main memory is directly or indirectly connected to the CPU via a memory bus. This bus is actually two buses: an address bus and a data bus. The CPU first sends a number over the address bus, called the memory address, that indicates the desired location of the data. It then reads or writes the data itself over the data bus.
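In C, this address-then-data sequence is what happens behind an ordinary pointer access. The sketch below reads and modifies a memory-mapped location through a volatile pointer; the address 0x40000000 is a made-up placeholder, since real addresses come from the target's memory map.

/* Memory-mapped access: the CPU drives the address bus, then the data bus. */
#include <stdint.h>

#define DEVICE_REG ((volatile uint32_t *)0x40000000u)   /* hypothetical address */

void example(void) {
    uint32_t value = *DEVICE_REG;   /* send address, then read the data bus  */
    *DEVICE_REG = value | 0x1u;     /* send address, then write the data bus */
}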
Primary memory is divided into RAM and ROM.
The RAM family includes two important memory devices: static RAM (SRAM) and dynamic RAM (DRAM). The primary difference between them is the lifetime of the data they store.
1) SRAM retains its contents as long as electrical power is applied to the chip. If the power is turned off, even temporarily, its contents are lost forever.
2) DRAM, on the other hand, has an extremely short data lifetime, typically about four milliseconds. This is true even when power is applied constantly. A DRAM controller is used to refresh the data before it expires, so the contents of memory can be kept alive for as long as they are needed; in this way DRAM is as useful as SRAM after all.
Double Data Rate synchronous dynamic random access memory, also known as DDR (or DDR1) SDRAM, is a class of memory integrated circuits used in computers. The interface uses double pumping (transferring data on both the rising and falling edges of the clock signal) to reach a given bandwidth at a lower clock frequency. One advantage of keeping the clock frequency down is that it reduces the signal-integrity requirements on the circuit board connecting the memory to the controller.
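For example, a DDR module clocked at 200 MHz performs 400 million data transfers per second; with a 64-bit (8-byte) data bus this gives a peak bandwidth of about 3.2 GB/s, a rate that a single-data-rate interface would need a 400 MHz clock to reach.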
DDR2 memory is fundamentally similar to DDR SDRAM; however, while DDR SDRAM transfers data across the bus twice per clock, DDR2 SDRAM performs four transfers per clock. DDR2 uses the same memory cells but doubles the bandwidth through a multiplexing technique.
The DDR2 memory cell is still clocked at the same frequency as DDR SDRAM and SDRAM cells, but the input/output buffers of DDR2 SDRAM run at a higher frequency. The bus that connects the memory cells to the buffers is twice as wide as in DDR.
Thus, the I/O buffers perform multiplexing: data comes in from the memory cells along a wide bus and goes out of the buffers on a bus of the same width as in DDR SDRAM, but at twice the frequency. This increases the memory bandwidth without increasing the operating frequency of the memory cells.
Memories in the ROM family are distinguished by the methods
used to write new data to them (usually called programming), and the
number of times they can be rewritten.
This
classification reflects the evolution of ROM devices from hardwired
to programmable to erasable-and-programmable. A common
feature is their ability to retain data and programs forever, even
during a power failure.
In the earliest devices (masked ROM), the contents had to be specified before chip production, so that the actual data could be used to arrange the transistors inside the chip.
PROM
One step up from the masked ROM is the PROM (programmable ROM), which is purchased in an unprogrammed state and can be written to once by the user.
As memory technology has matured in recent years, the line
between RAM and ROM has blurred. Now,
several types of memory
combine features of both.
These devices do not belong to either group and can be collectively
referred to as hybrid memory devices. Hybrid memories can be read and
written as desired, like RAM, but maintain their contents without electrical
power, just like ROM.
Two of the hybrid devices, EEPROM and flash, are descendants of
ROM devices. These are typically used to store code. The third hybrid,
NVRAM, is a modified version of SRAM. NVRAM usually holds persistent
data.
EEPROMs are electrically erasable and programmable. Internally, they are similar to EPROMs, but the erase operation is accomplished electrically rather than by exposure to ultraviolet light. Any byte within an EEPROM may be erased and rewritten.
Once written, the new data will remain in the device forever, or at least until it is electrically erased. The primary tradeoff for this improved functionality is higher cost; write cycles are also significantly longer than writes to a RAM, so you wouldn't want to use an EEPROM for your main system memory.
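As a rough illustration of byte-wise EEPROM programming, the sketch below writes one byte and then polls a busy flag until the slow internal write finishes. All register names, addresses, and bit values are invented for this example; a real part's datasheet defines the actual sequence.

/* Hypothetical byte-wise EEPROM write with busy-flag polling. */
#include <stdint.h>

#define EE_ADDR (*(volatile uint16_t *)0x40001000u)   /* hypothetical address register */
#define EE_DATA (*(volatile uint8_t  *)0x40001004u)   /* hypothetical data register    */
#define EE_CTRL (*(volatile uint8_t  *)0x40001005u)   /* hypothetical control register */
#define EE_CTRL_WRITE 0x01u                           /* start a write                 */
#define EE_CTRL_BUSY  0x02u                           /* write still in progress       */

void eeprom_write_byte(uint16_t addr, uint8_t value) {
    EE_ADDR = addr;                    /* select the byte to rewrite           */
    EE_DATA = value;                   /* supply the new contents              */
    EE_CTRL = EE_CTRL_WRITE;           /* trigger the internal erase-and-write */
    while (EE_CTRL & EE_CTRL_BUSY) {
        /* EEPROM writes are slow (often milliseconds), so poll until done. */
    }
}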
Flash memory combines the best features of the memory
devices described thus far. Flash memory devices are high
density, low cost, nonvolatile, fast (to read, but not to write),
and electrically reprogrammable. These advantages are
overwhelming and, as a direct result, the use of flash
memory has increased dramatically in embedded systems.
From a software viewpoint, flash and EEPROM technologies
are very similar. The major difference is that flash devices
can only be erased one sector at a time, not byte-by-byte.
Typical sector sizes are in the range 256 bytes to 16KB.
Despite this disadvantage, flash is much more popular than
EEPROM and is rapidly displacing many of the ROM devices
as well.
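Because a byte cannot be rewritten in place, updating flash typically means a read-modify-write of a whole sector. The following self-contained C sketch simulates that flow over a RAM array; the 256-byte sector size and the 0xFF erased value are typical but device-specific assumptions.

/* Simulated flash sector: erase the whole sector, then reprogram it. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define SECTOR_SIZE 256u
static uint8_t flash_sector[SECTOR_SIZE];              /* stands in for one flash sector */

static void flash_erase_sector(void) { memset(flash_sector, 0xFF, SECTOR_SIZE); }
static void flash_program(uint32_t off, const uint8_t *src, uint32_t len) {
    memcpy(&flash_sector[off], src, len);               /* real parts program word by word */
}

/* Update one byte: copy the sector to RAM, change the byte, erase, reprogram. */
static void flash_update_byte(uint32_t off, uint8_t value) {
    uint8_t buf[SECTOR_SIZE];
    memcpy(buf, flash_sector, SECTOR_SIZE);
    buf[off] = value;
    flash_erase_sector();
    flash_program(0, buf, SECTOR_SIZE);
}

int main(void) {
    flash_erase_sector();
    flash_update_byte(3, 0x42);
    printf("byte 3 = 0x%02X\n", (unsigned)flash_sector[3]);   /* prints 0x42 */
    return 0;
}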
The third member of the hybrid memory class is NVRAM (non-volatile
RAM). Non-volatility is also a characteristic of the ROM and hybrid
memories discussed previously. However, an NVRAM is physically very
different from those devices. An NVRAM is usually just an SRAM with a
battery backup.
When the power is turned on, the NVRAM operates just like any other
SRAM. When the power is turned off, the NVRAM draws just enough power
from the battery to retain its data. NVRAM is fairly common in embedded
systems.
However, it is expensive (even more expensive than SRAM, because of the battery), so its applications are typically limited to the storage of a few hundred bytes of system-critical information that can't be stored in any better way.
A CPU cache is a cache used by the central processing unit of a computer
to reduce the average time to access memory. The cache is a smaller,
faster memory which stores copies of the data from the most frequently
used main memory locations. As long as most memory accesses are
cached memory locations, the average latency of memory accesses will be
closer to the cache latency than to the latency of main memory.
When the processor needs to read from or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache, which is much faster than reading from or writing to main memory.
Cache Memory
Consider two memories, the cache and main memory. Each location in each memory holds a datum (a cache line), which in different designs ranges in size from 8 to 512 bytes. The size of the cache line is usually larger than the size of the usual access requested by a CPU instruction, which ranges from 1 to 16 bytes.
Each location in each memory also has an index, a unique number used to refer to that location. The index for a location in main memory is called an address.
Each location in the cache has a tag that contains the index of the datum in main memory that has been cached. In a CPU's data cache these entries are called cache lines or cache blocks.
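One practical consequence of cache lines is that access patterns matter. The sketch below sums the same 2-D array twice: the row-major loop walks consecutive addresses, so each fetched cache line is used fully, while the column-major loop strides across memory and tends to miss repeatedly. The array size is arbitrary; the effect grows with it.

/* Cache-friendly vs. cache-unfriendly traversal of the same array. */
#include <stdio.h>

#define N 1024
static int a[N][N];

static long sum_row_major(void) {      /* sequential addresses: good line reuse */
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

static long sum_col_major(void) {      /* large strides: poor line reuse */
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%ld %ld\n", sum_row_major(), sum_col_major());  /* same sums, different speed */
    return 0;
}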
Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory (an address space), while in fact the memory may be physically fragmented and may even overflow onto disk storage.
Computer operating systems generally use virtual memory techniques for ordinary applications such as word processors, spreadsheets, multimedia players, accounting software, etc., except where the required hardware support (memory management unit) is unavailable or insufficient.
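On a host with an MMU, virtual memory can be observed directly. The sketch below assumes a POSIX system (so it is not typical of small embedded targets without memory management hardware) and asks mmap() for a large contiguous address range; physical pages are assigned only to the pages that are actually touched.

/* A large contiguous virtual address range, lazily backed by physical pages. */
#include <sys/mman.h>
#include <stdio.h>

int main(void) {
    size_t len = 1024UL * 1024 * 1024;               /* 1 GiB of virtual addresses */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    p[0] = 1;                                        /* only touched pages get frames */
    p[len - 1] = 2;
    printf("%d %d\n", p[0], p[len - 1]);
    munmap(p, len);
    return 0;
}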
Type    Volatile?  Writeable?  Erase Size  Max Erase Cycles  Cost (per Byte)  Speed
SRAM    Yes        Yes         Byte        Unlimited         Expensive        Fast
Direct Memory Access (DMA)
Least Processor Intervention
• DMA transfer is controlled by a DMA controller (DMAC), which requests control of the bus from the CPU.
A bus with DMA Controller
DMA requires two bus signals: (i) Bus Request, (ii) Bus Grant.
Data transfer occurs, for example, between the hard disk and system memory.
• Single transfer (one byte at a time) – the I/O bus hold is released after each byte transferred.
• Burst transfer (kilobytes at a time) – the I/O bus hold is released after each burst.
• Bulk transfer – the I/O bus hold is released only after the whole transfer is completed.
The DMAC is first initialized with (see the sketch below):
i). Read or write direction.
ii). Mode of DMA transfer.
iii). Total number of bytes to be transferred.
iv). Starting memory address.
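A rough sketch of such an initialization sequence is shown below; the register names, addresses, and bit values are invented purely for illustration, since every DMA controller defines its own.

/* Hypothetical DMAC set-up following the four items above. */
#include <stdint.h>

#define DMAC_CTRL  (*(volatile uint32_t *)0x40002000u)   /* hypothetical control register        */
#define DMAC_COUNT (*(volatile uint32_t *)0x40002004u)   /* hypothetical byte-count register     */
#define DMAC_MADDR (*(volatile uint32_t *)0x40002008u)   /* hypothetical memory-address register */

#define DMAC_DIR_READ   0x01u    /* (i)  direction: peripheral to memory */
#define DMAC_MODE_BURST 0x02u    /* (ii) burst-transfer mode             */
#define DMAC_ENABLE     0x80u    /* start the channel                    */

void dmac_start_read(uint32_t dest_addr, uint32_t nbytes) {
    DMAC_COUNT = nbytes;                                         /* (iii) total bytes to transfer */
    DMAC_MADDR = dest_addr;                                      /* (iv)  starting memory address */
    DMAC_CTRL  = DMAC_DIR_READ | DMAC_MODE_BURST | DMAC_ENABLE;  /* (i) + (ii), then go           */
}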
• The 8085 and 8086 processors do not have on-chip DMAC units.
• The 8051-family member 83C152JA and the MC68340 microcontroller have two DMA channels on chip.
Memory Management
• Memory is an important resource; there is constant interaction and communication between it and the processor.
• Semiconductor memory is the primary
memory
• Hard disk is the secondary memory
• All programs run with the data and code in the
RAM
IN-CIRCUIT EMULATOR (ICE)
An in-circuit emulator emulates the microprocessor or microcontroller of the target circuit from within a target-emulating circuit.