12-5 Cache Memory

Analysis of a large number of typical programs has shown that the references to memory at any given interval of time tend to be confined within a few localized areas in memory. This phenomenon is known as the property of locality of reference. The reason for this property may be understood by considering that a typical computer program flows in a straight-line fashion with program loops and subroutine calls encountered frequently. When a program loop is executed, the CPU repeatedly refers to the set of instructions in memory that constitute the loop. Every time a given subroutine is called, its set of instructions is fetched from memory. Thus loops and subroutines tend to localize the references to memory for fetching instructions. To a lesser degree, memory references to data also tend to be localized. Table-lookup procedures repeatedly refer to that portion of memory where the table is stored. Iterative procedures refer to common memory locations, and arrays of numbers are confined within a local portion of memory. The result of all these observations is the locality of reference property, which states that over a short interval of time, the addresses generated by a typical program refer to a few localized areas of memory repeatedly, while the remainder of memory is accessed relatively infrequently.

If the active portions of the program and data are placed in a fast small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast small memory is referred to as a cache memory. It is placed between the CPU and main memory in the memory hierarchy, and its access time is much closer to processor speed than that of main memory. The fundamental idea of cache organization is that by keeping the most frequently accessed instructions and data in the fast cache memory, the average memory access time will approach the access time of the cache. Although the cache is only a small fraction of the size of main memory, a large fraction of memory requests will be found in the fast cache memory because of the locality of reference property of programs.

The basic operation of the cache is as follows. When the CPU needs to access memory, the cache is examined. If the word is found in the cache, it is read from the fast memory. If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the word, and a block of words containing the one just accessed is then transferred from main memory to cache memory. In this manner, some data are transferred to cache so that future references to memory find the required words in the fast cache memory.

The performance of cache memory is frequently measured in terms of a quantity called hit ratio. When the CPU refers to memory and finds the word in cache, it is said to produce a hit. If the word is not found in cache, it is in main memory and it counts as a miss. The ratio of the number of hits divided by the total CPU references to memory (hits plus misses) is the hit ratio. The hit ratio is best measured experimentally by running representative programs in the computer and measuring the number of hits and misses during a given interval of time. Hit ratios of 0.9 and higher have been reported. This high ratio verifies the validity of the locality of reference property.

The average memory access time of a computer system can be improved considerably by use of a cache. If the hit ratio is high enough that most of the time the CPU accesses the cache instead of main memory, the average access time is closer to the access time of the fast cache memory. For example, a computer with a cache access time of 100 ns, a main memory access time of 1000 ns, and a hit ratio of 0.9 produces an average access time of 200 ns. This is a considerable improvement over a similar computer without a cache memory, whose access time is 1000 ns.
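As a quick check of the 200 ns figure, here is a minimal sketch of the average-access-time calculation, assuming (as the example above implies) that a miss pays for the cache lookup plus the subsequent main-memory access:

```python
# Average access time with a cache; a miss is charged the cache lookup time
# plus the main-memory access time, consistent with the 200 ns example above.
def average_access_time(hit_ratio, t_cache_ns, t_main_ns):
    miss_ratio = 1.0 - hit_ratio
    return hit_ratio * t_cache_ns + miss_ratio * (t_cache_ns + t_main_ns)

print(average_access_time(0.9, 100, 1000))  # 200.0 ns
```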
The transformation of data from main memory to cache memory is referred to as a mapping process. Three types of mapping procedures are of practical interest when considering the organization of cache memory:

1. Associative mapping
2. Direct mapping
3. Set-associative mapping

To help in the discussion of these three mapping procedures we will use a specific example of a memory organization, as shown in Fig. 12-10. The main memory can store 32K words of 12 bits each. The cache is capable of storing 512 of these words at any given time. For every word stored in cache, there is a duplicate copy in main memory.

Figure 12-10  Example of cache memory: the CPU communicates with a 32K x 12 main memory and a 512 x 12 cache memory.

The CPU communicates with both memories. It first sends a 15-bit address to cache. If there is a hit, the CPU accepts the 12-bit data from cache. If there is a miss, the CPU reads the word from main memory and the word is then transferred to cache.

Associative Mapping

The fastest and most flexible cache organization uses an associative memory. This organization is illustrated in Fig. 12-11. The associative memory stores both the address and the content (data) of the memory word. This permits any location in cache to store any word from main memory. The diagram shows three words presently stored in the cache. The address value of 15 bits is shown as a five-digit octal number and its corresponding 12-bit word is shown as a four-digit octal number. A CPU address of 15 bits is placed in the argument register and the associative memory is searched for a matching address.

Figure 12-11  Associative mapping cache (all numbers in octal): the argument register holds the 15-bit CPU address; the cache stores address-data pairs such as 01000-3450 and 02777-6710.

If the address is found, the corresponding 12-bit data word is read and sent to the CPU. If no match occurs, the main memory is accessed for the word. The address-data pair is then transferred to the associative cache memory. If the cache is full, an address-data pair must be displaced to make room for a pair that is needed and not presently in the cache. The decision as to what pair is replaced is determined from the replacement algorithm that the designer chooses for the cache. A simple procedure is to replace cells of the cache in round-robin order whenever a new word is requested from main memory. This constitutes a first-in first-out (FIFO) replacement policy.
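A minimal sketch of the associative (fully associative) lookup just described, using a Python dict as a stand-in for the address-data associative array and the simple round-robin/FIFO replacement mentioned above; the 512-entry capacity and the main_memory list are illustrative assumptions:

```python
from collections import OrderedDict

CACHE_SIZE = 512                      # words the cache can hold (Fig. 12-10)
main_memory = [0] * (32 * 1024)       # 32K words of 12 bits (illustrative contents)
cache = OrderedDict()                 # address -> data; insertion order = FIFO order

def read(address):
    """Associative lookup: any address may occupy any cache location."""
    if address in cache:              # hit: the data comes from the fast cache
        return cache[address]
    data = main_memory[address]       # miss: read the word from main memory
    if len(cache) >= CACHE_SIZE:      # cache full: FIFO (round-robin) replacement
        cache.popitem(last=False)     # evict the oldest address-data pair
    cache[address] = data             # bring the new pair into the cache
    return data
```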
Direct Mapping

Associative memories are expensive compared to random-access memories because of the added logic associated with each cell. The possibility of using a random-access memory for the cache is investigated in Fig. 12-12. The CPU address of 15 bits is divided into two fields. The nine least significant bits constitute the index field and the remaining six bits form the tag field. The figure shows that main memory needs an address that includes both the tag and the index bits. The number of bits in the index field is equal to the number of address bits required to access the cache memory.

Figure 12-12  Addressing relationships between main and cache memories: the 15-bit address is split into a 6-bit tag and a 9-bit index; the 9-bit index addresses the 512 x 12 cache.

In the general case, there are 2^k words in cache memory and 2^n words in main memory. The n-bit memory address is divided into two fields: k bits for the index field and n - k bits for the tag field. The direct mapping cache organization uses the n-bit address to access the main memory and the k-bit index to access the cache. The internal organization of the words in the cache memory is as shown in Fig. 12-13(b). Each word in cache consists of the data word and its associated tag. When a new word is first brought into the cache, the tag bits are stored alongside the data bits. When the CPU generates a memory request, the index field is used for the address to access the cache. The tag field of the CPU address is compared with the tag in the word read from the cache. If the two tags match, there is a hit and the desired data word is in cache. If there is no match, there is a miss and the required word is read from main memory. It is then stored in the cache together with the new tag, replacing the previous value. The disadvantage of direct mapping is that the hit ratio can drop considerably if two or more words whose addresses have the same index but different tags are accessed repeatedly. However, this possibility is minimized by the fact that such words are relatively far apart in the address range (multiples of 512 locations in this example).

Figure 12-13  Direct mapping cache organization: (a) main memory, with, for example, address 00000 holding 1220 and address 02000 holding 5670; (b) cache memory, holding a tag and a data word at each index.

To see how the direct-mapping organization operates, consider the numerical example shown in Fig. 12-13. The word at address zero is presently stored in the cache (index = 000, tag = 00, data = 1220). Suppose that the CPU now wants to access the word at address 02000. The index address is 000, so it is used to access the cache. The two tags are then compared. The cache tag is 00 but the address tag is 02, which does not produce a match. Therefore, the main memory is accessed and the data word 5670 is transferred to the CPU. The cache word at index address 000 is then replaced with a tag of 02 and data of 5670.

The direct-mapping example just described uses a block size of one word. The same organization, but using a block size of 8 words, is shown in Fig. 12-14. The index field is now divided into two parts: the block field and the word field. In a 512-word cache there are 64 blocks of 8 words each, since 64 x 8 = 512. The block number is specified with a 6-bit field and the word within the block is specified with a 3-bit field. The tag field stored within the cache is common to all eight words of the same block. Every time a miss occurs, an entire block of eight words must be transferred from main memory to cache memory. Although this takes extra time, the hit ratio will most likely improve with a larger block size because of the sequential nature of computer programs.

Figure 12-14  Direct mapping cache with block size of 8 words: the 9-bit index is split into a 6-bit block field and a 3-bit word field.
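A minimal sketch of the direct-mapped lookup with the 9-bit index / 6-bit tag split used in the example above (single-word blocks); the main_memory list is again an illustrative stand-in:

```python
INDEX_BITS = 9                         # 512-word cache
main_memory = [0] * (1 << 15)          # 32K words (illustrative contents)
cache = [None] * (1 << INDEX_BITS)     # each entry holds a (tag, data) pair or None

def read(address):
    index = address & ((1 << INDEX_BITS) - 1)   # low 9 bits select the cache word
    tag = address >> INDEX_BITS                 # high 6 bits identify the source block
    entry = cache[index]
    if entry is not None and entry[0] == tag:   # tags match: hit
        return entry[1]
    data = main_memory[address]                 # miss: fetch the word from main memory
    cache[index] = (tag, data)                  # replace the previous tag and data
    return data
```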
Set-Associative Mapping

It was mentioned previously that the disadvantage of direct mapping is that two words with the same index in their addresses but with different tag values cannot reside in cache memory at the same time. A third type of cache organization, called set-associative mapping, is an improvement over the direct-mapping organization in that each word of cache can store two or more words of memory under the same index address. Each data word is stored together with its tag, and the number of tag-data items in one word of cache is said to form a set. An example of a set-associative cache organization for a set size of two is shown in Fig. 12-15. Each index address refers to two data words and their associated tags. Each tag requires six bits and each data word has 12 bits, so the word length is 2(6 + 12) = 36 bits. An index address of nine bits can accommodate 512 words. Thus the size of the cache memory is 512 x 36. It can accommodate 1024 words of main memory since each word of cache contains two data words. In general, a set-associative cache of set size k will accommodate k words of main memory in each word of cache.

Figure 12-15  Two-way set-associative mapping cache: index 000 holds tag 01 with data 3450 and tag 02 with data 5670; index 777 holds tag 02 with data 6710 and tag 00 with data 2340.

The octal numbers listed in Fig. 12-15 are with reference to the main memory contents illustrated in Fig. 12-13(a). The words stored at addresses 01000 and 02000 of main memory are stored in cache memory at index address 000. Similarly, the words at addresses 02777 and 00777 are stored in cache at index address 777. When the CPU generates a memory request, the index value of the address is used to access the cache. The tag field of the CPU address is then compared with both tags in the cache to determine if a match occurs. The comparison logic is done by an associative search of the tags in the set, similar to an associative memory search: thus the name "set-associative." The hit ratio will improve as the set size increases because more words with the same index but different tags can reside in cache. However, an increase in the set size increases the number of bits in each word of cache and requires more complex comparison logic.

When a miss occurs in a set-associative cache and the set is full, it is necessary to replace one of the tag-data items with a new value. The most common replacement algorithms used are random replacement, first-in first-out (FIFO), and least recently used (LRU). With the random replacement policy the control chooses one tag-data item for replacement at random. The FIFO procedure selects for replacement the item that has been in the set the longest. The LRU algorithm selects for replacement the item that has been least recently used by the CPU. Both FIFO and LRU can be implemented by adding a few extra bits to each word of cache.

Writing into Cache

An important aspect of cache organization is concerned with memory write requests. When the CPU finds a word in cache during a read operation, the main memory is not involved in the transfer. However, if the operation is a write, there are two ways that the system can proceed.

The simplest and most commonly used procedure is to update main memory with every memory write operation, with cache memory being updated in parallel if it contains the word at the specified address. This is called the write-through method. This method has the advantage that main memory always contains the same data as the cache. This characteristic is important in systems with direct memory access transfers. It ensures that the data residing in main memory are valid at all times, so that an I/O device communicating through DMA would receive the most recent updated data.

The second procedure is called the write-back method. In this method only the cache location is updated during a write operation. The location is then marked by a flag so that later, when the word is removed from the cache, it is copied into main memory. The reason for the write-back method is that during the time a word resides in the cache, it may be updated several times; however, as long as the word remains in the cache, it does not matter whether the copy in main memory is out of date, since requests for the word are filled from the cache. It is only when the word is displaced from the cache that an accurate copy need be rewritten into main memory. Analytical results indicate that the number of memory writes in a typical program ranges between 10 and 30 percent of the total references to memory.
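Tying together the two-way set-associative search, the LRU replacement, and the write-back flag described above, here is a minimal sketch under the running example's sizes (9-bit index, two entries per set); the main_memory list and the write-allocate-on-miss behavior are illustrative assumptions, not something the text specifies:

```python
INDEX_BITS, SET_SIZE = 9, 2
main_memory = [0] * (1 << 15)                  # illustrative 32K-word main memory
sets = [[] for _ in range(1 << INDEX_BITS)]    # each set: list of [tag, data, dirty]

def access(address, write_data=None):
    tag, index = address >> INDEX_BITS, address & ((1 << INDEX_BITS) - 1)
    entries = sets[index]
    for entry in entries:
        if entry[0] == tag:                    # hit: associative compare within the set
            entries.remove(entry)
            entries.append(entry)              # most recently used moves to the back
            if write_data is not None:         # write-back: update the cache copy only
                entry[1], entry[2] = write_data, True
            return entry[1]
    data = main_memory[address]                # miss: bring the word in from main memory
    if len(entries) == SET_SIZE:               # set full: evict the least recently used
        old_tag, old_data, dirty = entries.pop(0)
        if dirty:                              # flagged word is copied back on removal
            main_memory[(old_tag << INDEX_BITS) | index] = old_data
    entries.append([tag, data if write_data is None else write_data,
                    write_data is not None])
    return entries[-1][1]
```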
Cache Initialization

One more aspect of cache organization that must be taken into consideration is the problem of initialization. The cache is initialized when power is applied to the computer or when the main memory is loaded with a complete set of programs from auxiliary memory. After initialization the cache is considered to be empty, but in effect it contains some nonvalid data. It is customary to include with each word in cache a valid bit to indicate whether or not the word contains valid data.

The cache is initialized by clearing all the valid bits to 0. The valid bit of a particular cache word is set to 1 the first time this word is loaded from main memory and stays set unless the cache has to be initialized again. The introduction of the valid bit means that a word in cache is not replaced by another word unless the valid bit is set to 1 and a mismatch of tags occurs. If the valid bit happens to be 0, the new word automatically replaces the invalid data. Thus the initialization condition has the effect of forcing misses from the cache until it fills with valid data.

12-6 Virtual Memory

In a memory hierarchy system, programs and data are first stored in auxiliary memory. Portions of a program or data are brought into main memory as they are needed by the CPU. Virtual memory is a concept used in some large computer systems that permits the user to construct programs as though a large memory space were available, equal to the totality of auxiliary memory. Each address that is referenced by the CPU goes through an address mapping from the so-called virtual address to a physical address in main memory. Virtual memory is used to give programmers the illusion that they have a very large memory at their disposal, even though the computer actually has a relatively small main memory. A virtual memory system provides a mechanism for translating program-generated addresses into correct main memory locations. This is done dynamically, while programs are being executed in the CPU. The translation or mapping is handled automatically by the hardware by means of a mapping table.

Address Space and Memory Space

An address used by a programmer will be called a virtual address, and the set of such addresses the address space. An address in main memory is called a location or physical address. The set of such locations is called the memory space. Thus the address space is the set of addresses generated by programs as they reference instructions and data; the memory space consists of the actual main memory locations directly addressable for processing. In most computers the address and memory spaces are identical. The address space is allowed to be larger than the memory space in computers with virtual memory.

As an illustration, consider a computer with a main-memory capacity of 32K words (K = 1024). Fifteen bits are needed to specify a physical address in memory since 32K = 2^15. Suppose that the computer has available auxiliary memory for storing 2^20 = 1024K words. Thus auxiliary memory has a capacity for storing information equivalent to the capacity of 32 main memories. Denoting the address space by N and the memory space by M, we then have for this example N = 1024K and M = 32K.

In a multiprogram computer system, programs and data are transferred to and from auxiliary memory and main memory based on demands imposed by the CPU.
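A minimal arithmetic sketch of the address widths in this example, just to make the 15-bit/20-bit split explicit:

```python
from math import log2

memory_space = 32 * 1024          # M = 32K physical words
address_space = 1024 * 1024       # N = 1024K virtual words (auxiliary memory)

print(int(log2(memory_space)))    # 15 bits needed for a physical address
print(int(log2(address_space)))   # 20 bits needed for a virtual address
print(address_space // memory_space)  # 32: main memories that fit in the address space
```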
Suppose that program 1 is currently being executed in the CPU. Program 1 and a portion of its associated data are moved from auxiliary memory into main memory, as shown in Fig. 12-16. Portions of programs and data need not be in contiguous locations in memory, since information is being moved in and out, and empty spaces may be available in scattered locations in memory.

Figure 12-16  Relation between address and memory space in a virtual memory system: auxiliary memory holds several programs and their data, while main memory holds program 1 and part of its data.

In a virtual memory system, programmers are told that they have the total address space at their disposal. Moreover, the address field of the instruction code has a sufficient number of bits to specify all virtual addresses. In our example, the address field of an instruction code will consist of 20 bits, but physical memory addresses must be specified with only 15 bits. Thus the CPU will reference instructions and data with a 20-bit address, but the information at this address must be taken from physical memory, because access to auxiliary storage for individual words will be prohibitively long. (Remember that for efficient transfers, auxiliary storage moves an entire record to main memory.) A table is then needed, as shown in Fig. 12-17, to map a virtual address of 20 bits to a physical address of 15 bits. The mapping is a dynamic operation, which means that every address is translated immediately as a word is referenced by the CPU.

Figure 12-17  Memory table for mapping a virtual address: the 20-bit virtual address register is translated through a memory mapping table into a 15-bit main memory address register, and data passes through the main memory buffer register.

The mapping table may be stored in a separate memory, as shown in Fig. 12-17, or in main memory. In the first case, an additional memory unit is required, as well as one extra memory access time. In the second case, the table takes space from main memory and two accesses to memory are required, with the program running at half speed. A third alternative is to use an associative memory, as explained below.

Address Mapping Using Pages

The table implementation of the address mapping is simplified if the information in the address space and the memory space are each divided into groups of fixed size. The physical memory is broken down into groups of equal size called blocks, which may range from 64 to 4096 words each. The term page refers to groups of address space of the same size. For example, if a page or block consists of 1K words, then, using the previous example, the address space is divided into 1024 pages and main memory is divided into 32 blocks. Although both a page and a block are split into groups of 1K words, a page refers to the organization of address space, while a block refers to the organization of memory space. The programs are also considered to be split into pages. Portions of programs are moved from auxiliary memory to main memory in records equal to the size of a page. The term "page frame" is sometimes used to denote a block.

Consider a computer with an address space of 8K and a memory space of 4K. If we split each into groups of 1K words we obtain eight pages and four blocks, as shown in Fig. 12-18. At any given time, up to four pages of address space may reside in main memory in any one of the four blocks.

Figure 12-18  Address space (N = 8K, pages 0 through 7) and memory space (M = 4K, blocks 0 through 3) split into groups of 1K words.

The mapping from address space to memory space is facilitated if each virtual address is considered to be represented by two numbers: a page number and a line within the page. In a computer with 2^p words per page, p bits are used to specify a line address and the remaining high-order bits of the virtual address specify the page number. In the example of Fig. 12-18, a virtual address has 13 bits. Since each page consists of 2^10 = 1024 words, the high-order three bits of a virtual address will specify one of the eight pages and the low-order 10 bits give the line address within the page. Note that the line address in address space and memory space is the same; the only mapping required is from a page number to a block number.

The organization of the memory mapping table in a paged system is shown in Fig. 12-19. The memory-page table consists of eight words, one for each page. The address in the page table denotes the page number and the content of the word gives the block number where that page is stored in main memory. The table shows that pages 1, 2, 5, and 6 are now available in main memory in blocks 3, 0, 1, and 2, respectively. A presence bit in each location indicates whether the page has been transferred from auxiliary memory into main memory. A 0 in the presence bit indicates that this page is not available in main memory.

Figure 12-19  Memory table in a paged system: the 3-bit page number of the 13-bit virtual address indexes the memory page table; the block number read from the table is concatenated with the 10-bit line number to form the main memory address.

The CPU references a word in memory with a virtual address of 13 bits. The three high-order bits of the virtual address specify a page number and also an address for the memory-page table. The content of the word in the memory-page table at the page number address is read out into the memory table buffer register. If the presence bit is a 1, the block number thus read is transferred to the two high-order bits of the main memory address register. The line number from the virtual address is transferred into the 10 low-order bits of the memory address register. A read signal to main memory transfers the content of the word to the main memory buffer register, ready to be used by the CPU. If the presence bit in the word read from the page table is 0, it signifies that the content of the word referenced by the virtual address does not reside in main memory. A call to the operating system is then generated to fetch the required page from auxiliary memory and place it into main memory before resuming computation.
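A minimal sketch of the paged translation just described, using the 8-page / 4-block example (10-bit lines, 3-bit page numbers); the page-table contents follow Fig. 12-19, and the PageFault exception is an illustrative stand-in for the call to the operating system:

```python
LINE_BITS = 10                                   # 1K words per page
# memory-page table: page number -> (presence bit, block number), per Fig. 12-19
page_table = {0: (0, None), 1: (1, 3), 2: (1, 0), 3: (0, None),
              4: (0, None), 5: (1, 1), 6: (1, 2), 7: (0, None)}

class PageFault(Exception):
    """Stand-in for the call to the operating system on an absent page."""

def map_address(virtual_address):
    page = virtual_address >> LINE_BITS              # high-order 3 bits
    line = virtual_address & ((1 << LINE_BITS) - 1)  # low-order 10 bits
    present, block = page_table[page]
    if not present:
        raise PageFault(page)
    return (block << LINE_BITS) | line               # block number ++ line number

# page 5 is in block 1, so the block bits 01 are prefixed to the 10-bit line
print(format(map_address(0b101_0101010011), '012b'))  # '010101010011'
```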
Associative Memory Page Table

A random-access memory page table is inefficient with respect to storage utilization. In the example of Fig. 12-19 we observe that eight words of memory are needed, one for each page, but at least four words will always be marked empty because main memory cannot accommodate more than four blocks. In general, a system with n pages and m blocks would require a memory-page table of n locations, of which up to m will be marked with block numbers and all others will be empty. As a second numerical example, consider an address space of 1024K words and a memory space of 32K words. If each page or block contains 1K words, the number of pages is 1024 and the number of blocks 32. The capacity of the memory-page table must be 1024 words and only 32 locations may have a presence bit equal to 1. At any given time, at least 992 locations will be empty and not in use.

A more efficient way to organize the page table would be to construct it with a number of words equal to the number of blocks in main memory. In this way the size of the memory is reduced and each location is fully utilized. This method can be implemented by means of an associative memory, with each word in memory containing a page number together with its corresponding block number. The page field in each word is compared with the page number in the virtual address. If a match occurs, the word is read from memory and its corresponding block number is extracted.

Figure 12-20  An associative memory page table: the virtual address supplies the page number to the argument register; each of the four associative memory words holds a page number and its block number.

Consider again the case of eight pages and four blocks as in the example of Fig. 12-19. We replace the random-access memory page table with an associative memory of four words, as shown in Fig. 12-20. Each entry in the associative memory array consists of two fields. The first three bits specify a field for storing the page number. The last two bits constitute a field for storing the block number. The virtual address is placed in the argument register. The page number bits in the argument register are compared with all page numbers in the page field of the associative memory. If the page number is found, the 5-bit word is read out from memory. The corresponding block number, being in the same word, is transferred to the main memory address register. If no match occurs, a call to the operating system is generated to bring the required page from auxiliary memory.

Page Replacement

A virtual memory system is a combination of hardware and software techniques. The memory management software system handles all the software operations for the efficient utilization of memory space. It must decide (1) which page in main memory ought to be removed to make room for a new page, (2) when a new page is to be transferred from auxiliary memory to main memory, and (3) where the page is to be placed in main memory. The hardware mapping mechanism and the memory management software together constitute the architecture of a virtual memory.

When a program starts execution, one or more pages are transferred into main memory and the page table is set to indicate their position. The program is executed from main memory until it attempts to reference a page that is still in auxiliary memory. This condition is called a page fault. When a page fault occurs, the execution of the present program is suspended until the required page is brought into main memory. Since loading a page from auxiliary memory to main memory is basically an I/O operation, the operating system assigns this task to the I/O processor. In the meantime, control is transferred to the next program in memory that is waiting to be processed in the CPU. Later, when the memory block has been assigned and the transfer completed, the original program can resume its operation.

When a page fault occurs in a virtual memory system, it signifies that the page referenced by the CPU is not in main memory. A new page is then transferred from auxiliary memory to main memory. If main memory is full, it would be necessary to remove a page from a memory block to make room for the new page. The policy for choosing pages to remove is determined from the replacement algorithm that is used. The goal of a replacement policy is to try to remove the page least likely to be referenced in the immediate future.

Two of the most common replacement algorithms used are first-in, first-out (FIFO) and least recently used (LRU). The FIFO algorithm selects for replacement the page that has been in memory the longest time. Each time a page is loaded into memory, its identification number is pushed into a FIFO stack. FIFO will be full whenever memory has no more empty blocks. When a new page must be loaded, the page least recently brought in is removed. The page to be removed is easily determined because its identification number is at the top of the FIFO stack. The FIFO replacement policy has the advantage of being easy to implement. It has the disadvantage that under certain circumstances pages are removed and loaded from memory too frequently.

The LRU policy is more difficult to implement but has been more attractive on the assumption that the least recently used page is a better candidate for removal than the least recently loaded page as in FIFO. The LRU algorithm can be implemented by associating a counter with every page that is in main memory. When a page is referenced, its associated counter is set to zero. At fixed intervals of time, the counters associated with all pages presently in memory are incremented by 1. The least recently used page is the page with the highest count. The counters are often called aging registers, as their count indicates their age, that is, how long ago their associated pages have been referenced.

12-7 Memory Management Hardware

In a multiprogramming environment where many programs reside in memory it becomes necessary to move programs and data around the memory, to vary the amount of memory in use by a given program, and to prevent a program from changing other programs. The demands on computer memory brought about by multiprogramming have created the need for a memory management system. A memory management system is a collection of hardware and software procedures for managing the various programs residing in memory. The memory management software is part of an overall operating system available in many computers. Here we are concerned with the hardware unit associated with the memory management system.

The basic components of a memory management unit are:

1. A facility for dynamic storage relocation that maps logical memory references into physical memory addresses
2. A provision for sharing common programs stored in memory by different users
3. Protection of information against unauthorized access between users and preventing users from changing operating system functions

The dynamic storage relocation hardware is a mapping process similar to the paging system described in Sec. 12-6. The fixed page size used in the virtual memory system causes certain difficulties with respect to program size and the logical structure of programs.
Othe system programs se=iding in memory are also shared by all users in a multiprogramming syste™ ovng to produce maple cps e third amie in mulliprog-amming is protecting one unwanted interaction with anevtierAn-example of unwanted interaction fp ‘one user's unauthorized copying of another user’s program. Another br seh protection is concered with preventing the occasional wser from performing: operating system functions and thereby interrupting the orderly sequence Of ‘operations in a computer installation, The secrecy of certain programs mus be kept from unauthorized personnel to: prevent abuses in the confidential activijes of an organization sie alder generated bya segmented program is called nga adres ING is simular to a virtual address except that logical address space is assoc ated with variable-length segments rather than fixed-length pages. The logical address may be larger than the physical memory address as in virtual mem- ory, but it may also be equaly and sometimes even smaller than the length of the physical memory address In adsition to relocation information, each seg: ment has protection information associated with st. Shared programs are placed in a unique seginent in each user's logical address space s0 that a sin- Ele physical copy can be shared. The function of the memory management fini is to map logical addresses into physical addresses similar to Uhe virtual memory mapping concept. Of programs, Jt is moxe convenient t0 ites (© logical parts called segments. A segment is @ 560 °F Oot” program from ‘Segmented-Page Mapping It was already mentioned that the property of logical space is that it uses variable-length segments. The length of each segment is allowed to grow and Contract according to the needs of the program being executed. One way of specilying the length of a segment is by associating with it a number of equal-size pages. To see how this is done, consider the logical address shown in Fig. 12-21. The logical address is partitioned into three fields. The segment field specifies a segment number. The page field specifies the page within the segment and the word field gives the specific word within the page. A page field of & bits can specify up to 2" pages. A segment number may be associated with just one480 CHNPTER-TWEL, Memory Organization ———————— | | Jame | mm a | ae lis beet] Tog Wo] (eae ere Argument register ee a = (b) Associative memory translation look-asie buffer (TB) Figure 12-21 Mapping in segmented-page memory management unit. page or with as many as 2' pages. Thus the length of a segment would vary according to the number of pages that are assigned to it. ‘The mapping of the logical address into a physical address is done by means of two tables, as shown in Fig. 12-21(a), The segment number of the logical address specifies the address for the segment table. The entry in the481 Seth 127 Meanory Managment Hania Segment table is a pointer address for a page table base. The page table base is added tothe page number given in the log adress The sum produces «pointer adress an entry in the page table. The value foun in the Pose table provides the block number in physi! memory. The concatenation of the block eld withthe word field produces te inal physical mapped adres ‘The two mapping tables may be stored in two separate small me or in main memory. In ether case, a memory reference from the CPU wit require three accesses to memory: one from the segment table, ane from the page table, and the third from main memory. 
This would slow the system 518° ieasty when compared toa conventional system that requies oly one oa exence to memory. To avoid this speed penalty, a fast associative memory is used to hold the most recently relerenced able ensies (This type of memory is sometimes called a translation lonkaside bufr, abbreviated TLB.) The first time a given block is referenced, its value together with the corresponding s¢s ‘ment and page numbers are entered into the associative memory as shown i Fig. 12-21(b) Thus the mapping process is fist attempted by associative search with the given segment and page numbers. fit succeeds, the mapping delay is only that ofthe associative memory. I'ao match occurs, the slower table ‘mapping of Fig, 12-2) is used and the result transformed into the associative ‘memory for future reference. Numerical Example ‘A numerical example may clarify the operation of the memory management unit, Consider the 20-bit logical address specified in Fig. 12-22(a). The 4-bit segment number specifies one of 16 possible segments. The &-bit page number can specify up to 256 pages, and the 8-bit word field implies a page size of 256 words. This configuration allows each segment to have any num- ber of pages up to 256. The smallest possible segment will have one page ot 256 words. The largest possible segment will have 256 pages, for a total of 256 X 256 = 64K words, ‘The physical memory shown in Fig. 12:22(b) consists of 2® words of S2bits each. The 20-bit address is divided into two fields: a 12-bit block number and an 8bit word number. Thus, physical memory is divided into 4096 blocks of 256 words each. A page in a logical address has a correspon- ding block in physical memory. Note that both the logical and physical address have 20 bits. Inthe absence of a memory management unit, the 20-bit address from the CPU can be used to access physical memory directly Consider a program loaded into memory that requires five pages. The operating system may assign to this program segment 6 and pages O through 4, as shown in Fig. 12-23(a). The total logical address range for the program is from hexadecimal 60000 to 604FF. When the program is loaded into physical ‘memory, itis distributed among five blocks in physical memory where the operating system finds empty spaces. The correspondence between each ‘memory block and logical page number is then entered in a table as shown in482 CHATTER TWELVE Memory Onniztion (a) Logealaddeess format: 16 segments of 256 pages each, cath page has 250 words Bee 8 ears tH eT sa ATT TF] 2 x 39 Pays ey EEEHICE address format: 4096 blocks of 256 words each, ford has 32 bis Figure 12-22 An example of logical and physical addresses. Fig. 12-23(b). The information from this table is entered in the segment and page tables as shown in Fig, 12-24ia). Now consider the specific logical address given in Fig. 12-24. The 20-bit address is listed as a five-digit hexadecimal number. It refers to word number 7E of page 2 in segment 6. The base of segment 6 in the page table is at address 35. Segment 6 has associated with it five pages, as shown in the page {able at addresses 35 through 39. Page 2 of segment 6 is at address 35 +9 39 The physical memory block is found in the page table to be 019. Word 7E in block 19 gives the 20-bit physical address 0197E, Note that page 0 of segment & maps into block 12 and page 1 maps into block 0. The associative memory Figure 12-23 Example of logical and physical memory address assignment. 
Numerical Example

A numerical example may clarify the operation of the memory management unit. Consider the 20-bit logical address specified in Fig. 12-22(a). The 4-bit segment number specifies one of 16 possible segments. The 8-bit page number can specify up to 256 pages, and the 8-bit word field implies a page size of 256 words. This configuration allows each segment to have any number of pages up to 256. The smallest possible segment will have one page, or 256 words. The largest possible segment will have 256 pages, for a total of 256 x 256 = 64K words.

The physical memory shown in Fig. 12-22(b) consists of 2^20 words of 32 bits each. The 20-bit address is divided into two fields: a 12-bit block number and an 8-bit word number. Thus, physical memory is divided into 4096 blocks of 256 words each. A page in a logical address has a corresponding block in physical memory. Note that both the logical and the physical address have 20 bits. In the absence of a memory management unit, the 20-bit address from the CPU can be used to access physical memory directly.

Figure 12-22  An example of logical and physical addresses: (a) logical address format, 16 segments of 256 pages each, each page of 256 words; (b) physical address format, 4096 blocks of 256 words each, each word of 32 bits.

Consider a program loaded into memory that requires five pages. The operating system may assign to this program segment 6 and pages 0 through 4, as shown in Fig. 12-23(a). The total logical address range for the program is from hexadecimal 60000 to 604FF. When the program is loaded into physical memory, it is distributed among five blocks wherever the operating system finds empty spaces. The correspondence between each memory block and logical page number is then entered in a table, as shown in Fig. 12-23(b).

Figure 12-23  Example of logical and physical memory address assignment: (a) logical address assignment, pages 0 through 4 of segment 6 at hexadecimal addresses 60000 through 604FF; (b) segment-page versus memory block assignment, with segment 6 pages 00 through 04 in blocks 012, 000, 019, 053, and A61.

The information from this table is entered in the segment and page tables, as shown in Fig. 12-24(a). Now consider the specific logical address given in Fig. 12-24. The 20-bit address is listed as a five-digit hexadecimal number. It refers to word number 7E of page 2 in segment 6. The base of segment 6 in the page table is at address 35. Segment 6 has five pages associated with it, as shown in the page table at addresses 35 through 39. Page 2 of segment 6 is therefore at address 35 + 2 = 37. The physical memory block is found in the page table to be 019. Word 7E in block 019 gives the 20-bit physical address 0197E. Note that page 0 of segment 6 maps into block 012 and page 1 maps into block 000. The associative memory in Fig. 12-24(b) shows entries for pages 2 and 4 of segment 6, whose block numbers 019 and A61 are therefore available without a table walk.

Figure 12-24  Logical to physical memory mapping example (all numbers are in hexadecimal): (a) logical address 6027E translated through the segment table (entry 6 = 35) and the page table (entry 37 = block 019) to physical address 0197E; (b) the associative memory (TLB) holding segment 6, page 02 -> block 019 and segment 6, page 04 -> block A61.
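A minimal sketch of the two-table walk for this example; the dict contents below encode just the entries described above (segment 6 at page-table base 35, pages 0-4 in blocks 012, 000, 019, 053, A61):

```python
segment_table = {0x6: 0x35}                       # segment number -> page table base
page_table = {0x35: 0x012, 0x36: 0x000, 0x37: 0x019,
              0x38: 0x053, 0x39: 0xA61}           # pointer -> block number

def table_walk(segment, page):
    return page_table[segment_table[segment] + page]

def map_logical(logical_address):
    segment = logical_address >> 16               # 4-bit segment field
    page = (logical_address >> 8) & 0xFF          # 8-bit page field
    word = logical_address & 0xFF                 # 8-bit word field
    block = table_walk(segment, page)
    return (block << 8) | word                    # 12-bit block ++ 8-bit word

print(hex(map_logical(0x6027E)))                  # 0x197e: word 7E of block 019
```

This table_walk is exactly the slow path that the TLB sketch shown earlier falls back to on a miss.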
It be referenced or during the instruction fetch phase: the execute phase. Thus it allows the users to execute the seg- ‘ment program instructions but prevents them from reading the instructions as data for the purpose of copying their content Portions ofthe operating system wi These system programs must be protected b Unauthorized users. The operating system protection condition is placed in the descriptors ofall operating system programs to prevent the oceasional user {rom accessing operating system segments “hs __ 12-1, How many 128 8 RAM chip are needed to provide a memory capac 122, A computer uses RAM chips of 1024 X 1 capacity, 4 How many chips are needed, end how should their addres lines be com> nected to provide a memory eapacity of 1024 bytes? 1. How many chips are needed to. provide & memory capacity of 16K bytes? Explain in words how the chips are to be connected 2 addess bus