Unit 5
MULTIPROCESSOR

A multiprocessor is a computer system with two or more central processing units (CPUs), each sharing the common main memory as well as the peripherals. This allows simultaneous processing of programs. The key objective of using a multiprocessor is to boost the system's execution speed; other objectives are fault tolerance and application matching. A good illustration of a multiprocessor is a single central tower attached to two computer systems. A multiprocessor is regarded as a means to improve computing speed, performance and cost-effectiveness, as well as to provide enhanced availability and reliability.

Characteristics of Multiprocessors

A multiprocessor system is an interconnection of two or more CPUs, together with memory and input-output equipment. As defined earlier, multiprocessors fall under the MIMD category. The term multiprocessor is sometimes confused with the term multicomputer. Though both support concurrent operations, there is an important difference between a system with multiple computers and a system with multiple processors. In a multicomputer system there are multiple computers, each with its own operating system, which communicate with each other, if needed, through communication links. A multiprocessor system, on the other hand, is controlled by a single operating system, which coordinates the activities of the various processors, either through shared memory or through interprocessor messages. The multiprocessor system must have more than one processing element, and the capabilities of all the processing elements should be nearly the same.
The advantages of multiprocessor systems are:
* Increased reliability because of redundancy in processors
* Increased throughput because of execution of
  - Multiple jobs in parallel
  - Portions of the same job in parallel

[Figure 1.6: Symmetric multiprocessing architecture. Several CPUs, each with its own registers and cache, are connected to a shared memory.]

Tightly or closely coupled multiprocessor

It is a type of multiprocessing system in which there is shared memory. In a tightly coupled multiprocessor system, the data rate is higher than in a loosely coupled multiprocessor system. There is a global common memory that all CPUs may access, and information may be shared between the CPUs by placing it in the common global memory. In a tightly coupled multiprocessor system, modules are connected through PMIN, IOPIN and ISIN networks.

Some of the characteristics of tightly coupled multiprocessors are:
* There is shared memory
* High data rate
* Higher cost
* Memory conflicts occur
* High degree of interaction between tasks
* Applications are in parallel processing systems

Loosely coupled multiprocessor

A loosely coupled multiprocessor system is a type of multiprocessing in which the individual processors are configured with their own memory and are capable of executing user and operating system instructions independently of each other. This type of architecture paves the way for parallel processing. Loosely coupled multiprocessor systems are connected via high-speed communication networks. They are also known as distributed memory systems, as the processors do not share physical memory and have their own I/O channels.
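The shared-global-memory model of a tightly coupled system can be illustrated with a small Python analogy, in which threads stand in for CPUs and a shared list stands in for the common global memory. This is only a sketch; the names `shared_memory` and `cpu_task` are invented for the example, and a real system would involve hardware, not threads.

```python
import threading

# The common global memory that every simulated "CPU" (thread) can access.
shared_memory = [0] * 4
lock = threading.Lock()  # stand-in for hardware arbitration of memory access

def cpu_task(cpu_id, increments):
    """Each simulated CPU shares information by writing into common memory."""
    for _ in range(increments):
        with lock:  # mutual exclusion avoids memory conflicts
            shared_memory[cpu_id] += 1

threads = [threading.Thread(target=cpu_task, args=(i, 1000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(shared_memory))  # 4000: every CPU communicated through one memory
```

The single lock mirrors the memory-conflict behaviour noted above: a high degree of interaction between tasks serializes on the shared resource.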
Some of the characteristics of loosely coupled multiprocessors are:
* Distributed memory
* Low contention
* High scalability
* High delay
* Low data rate
* Low cost
* Static interconnection
* Capable of running multiple OSs
* Low throughput
* Low security
* Increased space requirements
* High power consumption
* Reusable and flexible components

Structure of Multiprocessor

A multiprocessor is a computer system in which two or more central processing units (CPUs) share full access to a common RAM. The main objective of using a multiprocessor is to boost the system's execution speed, with other objectives being fault tolerance and application matching. There are two types of multiprocessors: shared memory multiprocessors and distributed memory multiprocessors. In shared memory multiprocessors, all the CPUs share the common memory, but in a distributed memory multiprocessor, every CPU has its own private memory.

Different types of interconnection structures

The different types of interconnection structures are as follows:

* TIME SHARED COMMON BUS
A time-shared common bus multiprocessor system consists of a number of processors connected through a common path to a memory unit.

[Figure: a memory unit connected to CPU1, CPU2, CPU3, IOP1 and IOP2 over a single time-shared bus.]

* MULTIPORT MEMORY SYSTEM
A multiport memory system employs separate buses between each memory module and each CPU. Each processor bus is connected to each memory module. A processor bus consists of the address, data, and control lines required to communicate with memory. The memory module is said to have four ports, and each port accommodates one of the buses. The module must have internal control logic to determine which port will have access to memory at any given time. Memory access conflicts are resolved by assigning fixed priorities to each memory port. The priority for memory access associated with each processor may be established by the physical port position that its bus occupies in each module.
The advantage of the multiport memory organization is the high transfer rate that can be achieved because of the multiple paths between processors and memory. The disadvantage is that it requires expensive memory control logic and a large number of cables and connectors. As a consequence, this interconnection structure is usually appropriate only for systems with a small number of processors.

[Figure: multiport memory organization, with each CPU bus connected to every memory module.]

* CROSSBAR SWITCH SYSTEM
Switched networks give dynamic interconnections between the inputs and outputs. Small and medium-size systems mostly use crossbar networks. A crossbar switch is a single-stage network. Though a single-stage network is cheaper to build, multiple passes may be needed to establish certain connections.

* MULTISTAGE AND COMBINING NETWORK SYSTEM
Multistage interconnection networks are a class of high-speed computer networks mainly composed of processing elements on one end of the network and memory elements on the other end, connected by switching elements. These networks are used to build larger multiprocessor systems. Examples include the Omega network, the Butterfly network and many more.

INTERPROCESSOR COMMUNICATION

The various processors in a multiprocessor system must be provided with a facility for communicating with each other. A communication path can be established through common input-output channels. In a shared memory multiprocessor system, the most common procedure is to set aside a portion of memory that is accessible to all processors. In computer science, inter-process communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to manage shared data. Typically, applications that use IPC are categorized as clients and servers, where the client requests data and the server responds to client requests.
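The client/server pattern just described can be sketched with Python's `multiprocessing.Pipe`, an OS-provided IPC channel. For a self-contained example, two threads stand in for the client and server processes; the structure is the same when real processes are used.

```python
import threading
from multiprocessing import Pipe

server_conn, client_conn = Pipe()  # bidirectional IPC channel

def server():
    """Server side: service client requests until told to stop."""
    while True:
        request = server_conn.recv()
        if request == "quit":
            break
        server_conn.send(request.upper())  # respond to the request

t = threading.Thread(target=server)
t.start()

client_conn.send("hello")    # client requests data
reply = client_conn.recv()   # server responds
client_conn.send("quit")
t.join()

print(reply)  # HELLO
```

The same send/recv exchange is the message-passing alternative to shared variables mentioned in the next paragraph.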
Many applications are both clients and servers, as commonly seen in distributed computing. Methods for doing IPC are divided into categories which vary based on software requirements, such as performance and modularity requirements, and on system circumstances, such as network bandwidth and latency. In order to cooperate, concurrently executing processes must communicate and synchronize. Interprocess communication is based on the use of shared variables (variables that can be referenced by more than one process) or on message passing.

INTERPROCESSOR ARBITRATION

A computer system needs buses to facilitate the transfer of information between its various components. For example, even in a uniprocessor system, if the CPU has to access a memory location, it sends the address of the memory location on the address bus. This address activates a memory chip. The CPU then sends a read signal through the control bus, in response to which the memory puts the data on the data bus. Similarly, in a multiprocessor system, if any processor has to read a memory location from the shared areas, it follows the same routine.

There are buses that transfer data between the CPUs and memory; these are called memory buses. An I/O bus is used to transfer data to and from input and output devices. A bus that connects the major components in a multiprocessor system, such as CPUs, I/O and memory, is called the system bus. A processor in a multiprocessor system requests access to a component through the system bus. If no other processor is accessing the bus at that time, it is given control of the bus immediately. If a second processor is utilizing the bus, then this processor has to wait for the bus to be freed.
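The bus-granting decision under fixed priorities can be sketched as follows. This is an illustration only: real arbiters are hardware circuits, and the `arbitrate` helper and processor names are invented for the example.

```python
def arbitrate(requests, priority):
    """Grant the bus to the requesting processor with the highest fixed
    priority (earlier position in `priority` means higher priority)."""
    for cpu in priority:
        if cpu in requests:
            return cpu
    return None  # no requests: the bus stays idle

# CPU0 has the highest priority, CPU3 the lowest.
priority = ["CPU0", "CPU1", "CPU2", "CPU3"]

print(arbitrate({"CPU2", "CPU3"}, priority))  # CPU2 wins the conflict
print(arbitrate(set(), priority))             # None
```

A fixed-priority scheme is simple but can starve low-priority processors; rotating (round-robin) priority is a common hardware refinement.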
If at any time the services of the bus are requested by more than one processor, arbitration is performed to resolve the conflict. A bus controller is placed between the local bus and the system bus to handle this.

INTERPROCESSOR SYNCHRONIZATION

The instruction set of a multiprocessor contains basic instructions that are used to implement communication and synchronization between cooperating processes. Communication refers to the exchange of data between different processes. For example, parameters passed to a procedure in a different processor constitute interprocessor communication. Synchronization refers to the special case where the data used to communicate between processors is control information. Synchronization is needed to enforce the correct sequence of processes and to ensure mutually exclusive access to shared writable data.

Multiprocessor systems usually include various mechanisms to deal with the synchronization of resources. Low-level primitives are implemented directly by the hardware. These primitives are the basic mechanisms that enforce mutual exclusion for more complex mechanisms implemented in software. A number of hardware mechanisms for mutual exclusion have been developed. One of the most popular methods is the use of a binary semaphore.

CACHE COHERENCE

The primary advantage of cache is its ability to reduce the average access time in uniprocessors. When the processor finds a word in cache during a read operation, main memory is not involved in the transfer. If the operation is a write, there are two commonly used procedures to update memory. In the write-through policy, both cache and main memory are updated with every write operation. In the write-back policy, only the cache is updated and the location is marked so that it can be copied later into main memory. In a shared memory multiprocessor system, all the processors share a common memory.
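The two write policies described above can be contrasted with a toy model. The `Cache` class below is invented for illustration and is greatly simplified: a real cache tracks dirty bits per line and handles replacement, not a flush-on-demand dict.

```python
class Cache:
    """Toy one-level cache sitting in front of a main-memory dict."""

    def __init__(self, memory, write_through):
        self.memory = memory            # main memory (shared)
        self.write_through = write_through
        self.data = {}                  # cached copies
        self.dirty = set()              # locations marked for later copy-back

    def write(self, addr, value):
        self.data[addr] = value
        if self.write_through:
            self.memory[addr] = value   # write-through: memory updated every write
        else:
            self.dirty.add(addr)        # write-back: only mark the location

    def flush(self):
        for addr in self.dirty:         # copy marked locations into main memory
            self.memory[addr] = self.data[addr]
        self.dirty.clear()

mem_wt, mem_wb = {}, {}
Cache(mem_wt, write_through=True).write(0x10, 42)
wb = Cache(mem_wb, write_through=False)
wb.write(0x10, 42)
print(mem_wt.get(0x10), mem_wb.get(0x10))  # 42 None: memory is stale under write-back
wb.flush()
print(mem_wb.get(0x10))  # 42 once the dirty data is copied back
```

The stale `None` in the write-back case is exactly the window in which another processor could read an outdated value, which is the coherence problem discussed next.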
In addition, each processor may have a local memory, part or all of which may be a cache. The compelling reason for having separate caches for each processor is to reduce the average access time in each processor. The same information may then reside in a number of copies in some caches and in main memory. To ensure that the system executes memory operations correctly, the multiple copies must be kept identical. This requirement imposes the cache coherence problem. A memory scheme is coherent if the value returned on a load instruction is always the value given by the latest store instruction with the same address. Without a proper solution to the cache coherence problem, caching cannot be used in bus-oriented multiprocessors with two or more processors.

Memory organization in multiprocessors

Each processor generally contains a primary cache and a secondary cache to exploit the locality-of-reference phenomenon. Each processor module is connected to the communication network as shown in the figure.

[Figure: processor module with primary cache, secondary cache and network interface, attached to the interconnection network.]

The memory modules are accessed in a single global address space, where a range of physical addresses is allocated to each memory module. The processors access all memory modules in the same way in such a shared memory system. This is the simplest use of the address space from the software standpoint. In a NUMA-organized multiprocessor, each node has a processor and a portion of the memory, as shown in the figure.

It is also convenient to use a single global address space in this case. Again, the processor accesses all memory modules in the same way, but accesses to the local memory component of the global address space take less time to complete than accesses to remote memory modules. In the third organization, each processor directly accesses only its own local memory. Hence there is no global address space; each memory module constitutes the private address space of one processor.
In this last organization, any interaction between processes or programs running on different processors is implemented by sending messages from one processor to another. Each processor views the interconnection network as an I/O device used for communication, and each node in such a system behaves as a complete computer, in the same manner as a uniprocessor machine. This type of system is known as a multicomputer.

If data are shared among many processors, we must ensure that the processors observe the same value for a given data item. A problem arises in this respect due to the presence of different caches in a shared memory system.

Parallel Processing

Instead of processing each instruction sequentially, a parallel processing system provides concurrent data processing to reduce the execution time. Parallel processing can be described as a class of techniques which enable the system to perform simultaneous data-processing tasks and so increase the computational speed of a computer system. A parallel processing system can carry out simultaneous data processing to achieve a faster execution time. For instance, while an instruction is being processed in the ALU component of the CPU, the next instruction can be read from memory. The primary purpose of parallel processing is to enhance the computer's processing capability and increase its throughput, i.e. the amount of processing that can be accomplished during a given interval of time.

A parallel processing system can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously. The data can be distributed among the multiple functional units. The following diagram shows one possible way of separating the execution unit into eight functional units operating in parallel.
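The idea of independent functional units working on different data at the same time can be illustrated with a thread pool, where each submitted operation stands in for one functional unit. This is an analogy only; real functional units are hardware circuits inside the execution unit, and the function names below are invented for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

# Each function plays the role of one functional unit.
def adder(a, b):       return a + b
def multiplier(a, b):  return a * b
def shifter(a, n):     return a << n   # one number can be shifted...
def incrementer(a):    return a + 1    # ...while another is incremented

with ThreadPoolExecutor(max_workers=4) as pool:
    # The four "units" operate concurrently on independent data.
    futures = [pool.submit(adder, 2, 3),
               pool.submit(multiplier, 4, 5),
               pool.submit(shifter, 1, 4),
               pool.submit(incrementer, 9)]
    results = [f.result() for f in futures]

print(results)  # [5, 20, 16, 10]
```

Because the operands are independent, no unit ever waits on another, which is the condition that makes this kind of parallelism profitable.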
The operation performed in each functional unit is indicated in each block of the diagram:
* The adder and integer multiplier perform arithmetic operations on integer numbers.
* The floating-point operations are separated into three circuits operating in parallel.
* The logic, shift, and increment operations can be performed concurrently on different data. All units are independent of each other, so one number can be shifted while another number is being incremented.

Flynn's Taxonomy

M.J. Flynn proposed a classification of computer system organization based on the number of instruction and data items that are manipulated simultaneously. The sequence of instructions read from memory constitutes an instruction stream. The operations performed on the data in the processor constitute a data stream. Parallel processing may occur in the instruction stream, in the data stream, or in both. Flynn's classification divides computers into four major groups:

* SISD
SISD stands for 'Single Instruction and Single Data stream'. It represents the organization of a single computer containing a control unit, a processor unit, and a memory unit. Instructions are executed sequentially, and the system may or may not have internal parallel processing capabilities. Most conventional computers have SISD architecture, like the traditional von Neumann computers. Parallel processing, in this case, may be achieved by means of multiple functional units or by pipeline processing. Instructions are decoded by the control unit, which then sends them to the processing units for execution. The data stream flows between the processors and memory bidirectionally.

Examples: older generation computers, minicomputers, and workstations.

* SIMD
SIMD stands for 'Single Instruction and Multiple Data stream'.
It represents an organization that includes many processing units under the supervision of a common control unit. All processors receive the same instruction from the control unit but operate on different items of data. The shared memory unit must contain multiple modules so that it can communicate with all the processors simultaneously. SIMD is mainly dedicated to array processing machines; however, vector processors can also be seen as part of this group.

* MISD
MISD stands for 'Multiple Instruction and Single Data stream'. The MISD structure is only of theoretical interest, since no practical system has been constructed using this organization. In MISD, multiple processing units operate on one single data stream, each processing unit operating on the data independently via a separate instruction stream.

Example: the experimental Carnegie Mellon C.mmp computer (1971).

* MIMD
MIMD stands for 'Multiple Instruction and Multiple Data stream'. In this organization, all processors in a parallel computer can execute different instructions and operate on different data at the same time. In MIMD, each processor has a separate program, and an instruction stream is generated from each program.

Examples: Cray T90, Cray T3E, IBM SP2.

Pipelining and its advantages

The term pipelining refers to a technique of decomposing a sequential process into sub-operations, with each sub-operation being executed in a dedicated segment that operates concurrently with all other segments. The most important characteristic of a pipeline technique is that several computations can be in progress in distinct segments at the same time. The overlapping of computation is made possible by associating a register with each segment in the pipeline. The registers provide isolation between segments so that each can operate on distinct data simultaneously.
[Block diagram: a pipeline in which each segment consists of an input register followed by a combinational circuit.]

The structure of a pipeline organization can be represented simply by including an input register for each segment followed by a combinational circuit. Registers R1, R2, R3, and R4 hold the data, and the combinational circuits perform the operation of each particular segment. The output generated by the combinational circuit in a given segment is applied to the input register of the next segment. For instance, from the block diagram we can see that register R3 is used as one of the input registers for the combinational adder circuit.

In general, the pipeline organization is applicable to two areas of computer design:
* Arithmetic pipeline
* Instruction pipeline

ADVANTAGES OF PIPELINING:
* The cycle time of the processor is reduced.
* It increases the throughput of the system.
* It makes the system reliable.

SPACE-TIME DIAGRAM

A space-time diagram for a four-segment pipeline shows the time it takes to process eight tasks. The vertical axis displays the segment number and the horizontal axis gives the time in clock cycles. The diagram represents 8 tasks, T1 through T8, executed in 4 segments. Initially, task T1 is handled by segment 1. After the first clock cycle, segment 2 is busy with T1 and segment 1 is busy with T2. Continuing in this manner, the first task T1 is completed after the fourth clock cycle. From the fourth clock cycle onward, the pipeline completes one task every clock cycle. Thus, no matter how many segments there are in the system, once the pipeline is full it takes only one clock period to produce an output.

Let us assume a k-segment pipeline with a clock cycle time tp is used to execute n tasks. Because there are k segments in the pipe, task T1 needs a time equal to k*tp to finish.
The remaining n - 1 tasks emerge from the pipe at the rate of one task per clock cycle, and they will be finished after a time equal to (n - 1)tp. Thus, completing n tasks using a k-segment pipeline requires k + (n - 1) clock cycles. For the space-time diagram above, with 4 segments and 8 tasks:

k = 4, n = 8
k + (n - 1) = 4 + (8 - 1) = 11, i.e. 11 clock cycles.

Efficiency in terms of Pipeline

Pipeline performance can be measured by its throughput in terms of millions of instructions executed per second (MIPS). Another popular measure of performance is the number of clock cycles per instruction (CPI). These quantities are related by the equation

CPI = f / MIPS

where f is the pipeline's clock frequency in MHz, and the values of CPI and MIPS are average figures that can be determined experimentally by processing suites of representative programs. The minimum value of CPI for a pipeline is one, making the pipeline's maximum possible throughput equal to f.

A space-time diagram is a useful way to visualize pipeline behavior; it shows the activity of each pipeline stage as a function of time. In general, a space-time diagram for an m-stage pipeline has the form of an m x n grid, where n is the number of clock cycles needed to complete the processing of some sequence of N instructions of interest.

Another general measure of pipeline performance is the speedup S(m), defined by

S(m) = T(1) / T(m)

where T(m) is the execution time for some target workload on an m-stage pipeline, and T(1) is the execution time for the same workload on a similar, non-pipelined processor. It is reasonable to assume that T(1) <= m*T(m), in which case S(m) <= m.

Difference between arithmetic and instruction pipeline

1. An arithmetic pipeline divides an arithmetic operation into sub-operations for execution in the pipeline segments. | An instruction pipeline is a technique for implementing instruction-level parallelism within a single processor.
2. Arithmetic pipelines are used for floating-point operations, multiplication of fixed-point numbers, etc. | An instruction pipeline performs operations like fetch, decode and execute on instructions.
3. Arithmetic pipelining is the overlapping of computation within the ALU (arithmetic logic unit) or FPU (floating-point unit). | Instruction pipelining is the overlapping of instructions in the CPU data path, based primarily on the defined instruction cycle for the processor.
4. Arithmetic pipelines are mostly used in high-speed computers. | Digital computers with complex instructions require an instruction pipeline.

Four-Segment Instruction Pipeline

A four-segment instruction pipeline combines two or more of the smaller segments into a single one. For instance, the decoding of the instruction can be combined with the calculation of the effective address into one segment. The following block diagram shows an example of a four-segment instruction pipeline, in which the instruction cycle is completed in four segments:

* FI: segment 1, which fetches the instruction.
* DA: segment 2, which decodes the instruction and calculates the effective address.
* FO: segment 3, which fetches the operands.
* EX: segment 4, which executes the instruction.

The space-time diagram for the 4-segment instruction pipeline is:

Instruction   1    2    3    4    5    6    7    8    9
     1        FI   DA   FO   EX
     2             FI   DA   FO   EX
     3                  FI   DA   FO   EX
     4                       FI   DA   FO   EX
     5                            FI   DA   FO   EX
     6                                 FI   DA   FO   EX

Fig: timing diagram for the 4-segment instruction pipeline

Pipeline Conflicts

There are some factors that cause the pipeline to deviate from its normal performance. Some of these factors are given below.

Timing Variations: All stages cannot take the same amount of time. This problem generally occurs in instruction processing, where different instructions have different operand requirements and thus different processing times.
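The (k + n - 1)-cycle result derived earlier for a k-segment pipeline can be checked with a short simulation of the space-time diagram. The `pipeline_cycles` helper is a name invented for this sketch.

```python
def pipeline_cycles(k, n):
    """Simulate n tasks flowing through a k-segment pipeline, one new task
    issued per cycle, and return the total clock cycles needed."""
    cycles, launched, stages = 0, 0, []
    while launched < n or stages:
        cycles += 1
        stages = [s + 1 for s in stages]       # every in-flight task advances
        if launched < n:
            stages.append(1)                   # the next task enters segment 1
            launched += 1
        stages = [s for s in stages if s < k]  # tasks finishing segment k retire
    return cycles

print(pipeline_cycles(4, 8))  # 11, matching k + (n - 1) = 4 + 7
```

Running the simulation over a range of k and n values confirms the closed-form count for any pipeline depth and task count.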
Data Hazards: When several instructions are in partial execution and they reference the same data, a problem arises. We must ensure that the next instruction does not attempt to access the data before the current instruction has finished with it, because this would lead to incorrect results.

Branching: In order to fetch and execute the next instruction, we must know what that instruction is. If the present instruction is a conditional branch, and its result will lead us to the next instruction, then the next instruction may not be known until the current one is processed.

Interrupts: Interrupts insert unwanted instructions into the instruction stream and so affect the execution of instructions.

Data Dependency: It arises when an instruction depends upon the result of a previous instruction which is not yet available.

VECTOR PROCESSING

Vector processing performs arithmetic operations on large arrays of integers or floating-point numbers. It operates on all the elements of the array in parallel, provided each pass is independent of the others. Vector processing avoids the overhead of the loop control mechanism that occurs in general-purpose computers.

We need computers that can quickly solve mathematical problems for us, including arithmetic operations on large arrays of integers or floating-point numbers. A general-purpose computer would use loops to operate on an array of integers or floating-point numbers, but for a large array the loop itself imposes overhead on the processor. To avoid the overhead of processing loops and to speed up the computation, some kind of parallelism must be introduced. Vector processing operates on the entire array in just one operation, i.e. it operates on the elements of the array in parallel. But vector processing is possible only if the operations performed in parallel are independent.
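The contrast between loop-based and vector-style processing can be sketched in Python. The function names below are invented for the example; real vector hardware would apply the elementwise operation in a single machine operation rather than through an interpreted map.

```python
import operator

# Loop-based (general-purpose) approach: explicit loop control overhead
# (increment, compare, branch) on every iteration.
def loop_add(a, b):
    result = []
    for i in range(len(a)):
        result.append(a[i] + b[i])
    return result

# Vector-style approach: one elementwise operation over whole arrays,
# with no explicit loop control in user code.
def vector_add(a, b):
    return list(map(operator.add, a, b))

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
print(loop_add(a, b))    # [11, 22, 33, 44]
print(vector_add(a, b))  # [11, 22, 33, 44]
```

The two produce identical results; the vector form is valid precisely because each element-wise addition is independent of the others.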
Applications of Vector Processing:
* Petroleum exploration
* Medical diagnosis
* Artificial intelligence and expert systems
* Image processing
* Long-range weather forecasting
* Seismic data analysis
* Aerodynamics and space flight simulations
* Mapping the human genome

ARRAY PROCESSING

Array processors are also known as multiprocessors or vector processors. They perform computations on large arrays of data, and are thus used to improve the performance of the computer.

WHY USE AN ARRAY PROCESSOR?
* Array processors increase the overall instruction processing speed.
* As most array processors operate asynchronously from the host CPU, they improve the overall capacity of the system.
* Array processors have their own local memory, providing extra memory for systems with little memory.

There are basically two types of array processors:

1. Attached Array Processors: An attached array processor is a processor which is attached to a general-purpose computer, and its purpose is to enhance and improve the performance of that computer in numerical computational tasks. It achieves high performance by means of parallel processing with multiple functional units.

2. SIMD Array Processors: SIMD is the organization of a single computer containing multiple processors operating in parallel. The processing units are made to operate under the control of a common control unit, thus providing a single instruction stream and multiple data streams.

Super Computers

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of millions of instructions per second (MIPS). It is a commercial computer with vector instructions and pipelined floating-point arithmetic. Supercomputers are very powerful, high-performance machines employed mostly for scientific computations.
The components are tightly packed together to reduce the distance the electronic signals have to travel and so speed up operation. Special techniques are used by supercomputers to remove heat from the circuits and prevent them from burning out.

A supercomputer is a system with high computational speed and fast, large memory systems, and it uses parallel processing extensively. It is designed with multiple functional units, and each unit has its own pipeline configuration. The instruction set of supercomputers consists of standard data transfer, data manipulation and program control instructions; in addition, it processes vectors and combinations of scalars and vectors. Thus it is capable of the general-purpose applications found in all computers, but it is specifically optimized for the type of numerical calculations involving vectors and matrices of floating-point numbers.

Because of their high price, supercomputers have limited use. They are confined to a small number of scientific applications, such as numerical weather forecasting, seismic wave analysis and space research. The first supercomputer, the CRAY-1, was introduced in 1976. It employed vector processing with 12 different functional units in parallel. Supercomputers with multiprocessor configurations are known as the CRAY X-MP and CRAY Y-MP. The CRAY-2 supercomputer is 12 times more powerful than the CRAY-1.

RISC

- RISC stands for Reduced Instruction Set Computer. It is a type of microprocessor that has a limited number of instructions.
- RISC processors have a small set of instructions with a fixed format (32 bits).
- They can execute their instructions very fast because the instructions are small and simple. RISC chips require fewer transistors, which makes them cheaper to design and produce.
- In RISC, the instruction set contains simple and basic instructions from which more complex instructions can be composed.
- Most instructions complete in one cycle, which allows the processor to handle many instructions at the same time.
- Instructions are register-based, and data transfer takes place from register to register.
- Pipelining of instructions is easily achieved.
- Emphasis is on software.
- The control unit is hardwired rather than microprogrammed.

Examples of RISC processors: DEC's Alpha 21064, 21164 and 21264; Sun's SPARC and UltraSPARC; PowerPC processors, etc.

CISC

- CISC stands for Complex Instruction Set Computer. It was first developed by Intel.
- It contains a large number of complex instructions.
- Instructions are not register-based.
- Instructions cannot be completed in one machine cycle.
- Data transfer is from memory to memory.
- A microprogrammed control unit is found in CISC.
- Instruction formats are variable (16-64 bits per instruction).
- Emphasis is on hardware, with less load on the programmer.
- Pipelining of instructions is difficult.
- Many instructions are, in reality, not used frequently.

Examples of CISC processors: Intel 386, 486, Pentium, Pentium Pro, Pentium II, Pentium III; Motorola's 68000, 68020, 68030, 68040, etc.

Intel and AMD Dual Core Processors

A multi-core processor is a single computing component with two or more independent processing units (called "cores"), which are the units that read and execute program instructions. The instructions are ordinary CPU instructions such as add, move data, and branch, but the multiple cores can run multiple instructions at the same time, increasing overall speed for programs amenable to parallel computing.

Intel and AMD have the same design philosophy but different approaches in their microarchitecture and implementation. AMD technology uses more cores than Intel, but Intel uses Hyper-Threading technology to augment its multi-core technology. AMD uses HyperTransport technology to connect one processor to another and Non-Uniform Memory Access (NUMA) to access memory.
Intel, on the other hand, uses QuickPath Interconnect technology to connect processors to one another and a Memory Controller Hub for memory access. AMD supports virtualization using Rapid Virtualization Indexing and its Direct Connect architecture, while Intel's virtualization technology uses a Virtual Machine Monitor; AMD ranks higher in virtualization support than Intel. Moreover, the QuickPath Interconnect in Intel ProLiant servers has self-healing links and clock failover, so Intel's technology focuses more on data security, while AMD ProLiant servers focus more on power management.
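To make the RISC points from the earlier section concrete, a fixed 32-bit, register-to-register instruction can be packed and unpacked with simple bit operations. The field layout below (an 8-bit opcode and three 5-bit register numbers) is hypothetical, not any real ISA; it only illustrates why fixed-width, register-based instructions are fast to decode.

```python
# Sketch of a hypothetical fixed 32-bit RISC-style instruction format:
#   bits 31..24 = opcode, 23..19 = rd, 18..14 = rs1, 13..9 = rs2,
#   bits 8..0 unused. Every instruction is the same width, so the
#   decoder extracts fields with constant shifts and masks.

def encode(opcode, rd, rs1, rs2):
    # Pack the fields into one 32-bit word.
    return (opcode << 24) | (rd << 19) | (rs1 << 14) | (rs2 << 9)

def decode(word):
    # Unpack the same fields; no variable-length parsing is needed.
    return ((word >> 24) & 0xFF,
            (word >> 19) & 0x1F,
            (word >> 14) & 0x1F,
            (word >> 9) & 0x1F)

add_r3_r1_r2 = encode(0x01, 3, 1, 2)   # e.g. ADD r3, r1, r2
```

A variable-format CISC instruction, by contrast, would need to examine the opcode before it even knows how many bytes the rest of the instruction occupies.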