Distributed Systems Lectures
Lecture 1: Parallel and Distributed Computing
Concept of Parallel and Distributed Computing

Distributed Systems Definition
A distributed system, also known as distributed computing, is a system with multiple components located on different machines that communicate and coordinate their actions in order to appear as a single coherent system to the end user. Although distributed systems can sometimes be obscure, they usually have two primary characteristics: all components run concurrently, and all components fail independently of each other.

Benefits of distributed systems
• Horizontal scalability: since computing happens independently on each node, it is easy and generally inexpensive to add nodes and functionality as necessary.
• Reliability: most distributed systems are fault-tolerant, as they can be made up of hundreds of nodes that work together. The system generally does not experience any disruption if a single machine fails (see the failover sketch at the end of this lecture).
• Performance: distributed systems are extremely efficient because workloads can be broken up and sent to multiple machines.
• Sharing of data/resources: allows systems to use each other's resources.

Disadvantages of distributed systems
• Difficulty of developing distributed software: how should operating systems, programming languages, and applications look?
• Networking problems: the network infrastructure creates several problems that must be dealt with, such as loss of messages and overloading.
• Security problems: private resources are exposed to a wider range of potential attackers, with unauthorized access possible from any computer connected to the system.

Examples of distributed systems
• The Internet
• Google.com
• Facebook

Serial Computing
Traditionally, software has been written for serial computation:
• A problem is broken into a discrete series of instructions.
• Instructions are executed sequentially, one after another.
• Execution takes place on a single processor.
• Only one instruction may execute at any moment in time.

Parallel Computing
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
• A problem is broken into discrete parts that can be solved concurrently.
• Each part is further broken down into a series of instructions.
• Instructions from each part execute simultaneously on different processors.
• An overall control/coordination mechanism is employed (see the serial-vs-parallel sketch at the end of this lecture).

[Figure: a problem decomposed into discrete parts, each part a series of instructions executing on its own processor.]

The parallel computational problem should be able to:
• Be broken apart into discrete pieces of work that can be solved simultaneously.
• Execute multiple program instructions at any moment in time.
• Be solved in less time with multiple compute resources than with a single compute resource.

The compute resources are typically:
• A single computer with multiple processors/cores.
• An arbitrary number of such computers connected by a network.

Reduced Instruction Set Computer (RISC) vs. Complex Instruction Set Computer (CISC)
RISC is a CPU design strategy that uses only simple instructions: a complex operation is divided into several such instructions, each achieving a low-level operation within a single clock (CLK) cycle, as the name "Reduced Instruction Set" suggests.
CISC is a CPU design in which a single instruction can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store), or is capable of multi-step operations or addressing modes within a single instruction (see the decomposition sketch at the end of this lecture).

[Figure: CISC places the greater complexity in the hardware; RISC shifts the greater complexity to code generation (the compiler).]
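To make the reliability claim concrete, here is a minimal failover sketch. It is not from the original lecture: the node names, the failure probability, and the query function are illustrative assumptions, with a simulated network standing in for real machines. It shows why independent failures do not disrupt the system as a whole.

```python
# A minimal sketch of fault tolerance: nodes fail independently, so a
# client that retries against replicas still gets an answer when one
# node is down. Node names and the failure simulation are hypothetical.
import random

NODES = ["node-a", "node-b", "node-c"]   # hypothetical replicas

def query(node, key):
    """Pretend network call; each node can fail independently."""
    if random.random() < 0.3:            # simulate an independent crash
        raise ConnectionError(f"{node} is unreachable")
    return f"value-of-{key}"             # replicas hold the same data

def reliable_get(key):
    # Try replicas in turn; a single machine failing does not
    # disrupt the system, only total failure of all replicas does.
    for node in NODES:
        try:
            return query(node, key)
        except ConnectionError:
            continue                     # fail over to the next replica
    raise RuntimeError("all replicas unavailable")

print(reliable_get("user:42"))
```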
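The serial and parallel models above can also be contrasted in code. The following serial-vs-parallel sketch, assuming Python's standard multiprocessing module and an illustrative sum-of-squares problem, breaks the work into discrete chunks and lets a process pool act as the overall control/coordination mechanism.

```python
# A sketch contrasting serial and parallel execution of one problem:
# summing squares over a range. Chunking and worker count are
# illustrative choices, not prescribed by the lecture.
from multiprocessing import Pool

def partial_sum(bounds):
    """Solve one discrete part of the problem: squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial(n):
    # Serial computing: a single processor executes one instruction
    # stream sequentially over the whole problem.
    return partial_sum((0, n))

def parallel(n, workers=4):
    # Parallel computing: the problem is broken into discrete parts,
    # each part runs in its own process, and the pool coordinates
    # and combines the partial results.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 10_000_000
    assert serial(n) == parallel(n)  # same answer, different model
```

The coordination cost is real: for small n the serial version wins, which is why the lecture stresses that the problem must be large enough to be "solved in less time with multiple computing resources".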
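Finally, the RISC/CISC contrast can be shown with a toy decomposition sketch. The instruction names (ADDM, LOAD, ADD, STORE), the register file, and the memory contents below are hypothetical, not drawn from any real ISA; the point is only that one multi-step CISC-style instruction decomposes into several single-cycle RISC-style instructions.

```python
# Hypothetical illustration: the single CISC-style instruction
#   ADDM a, b      meaning MEM[a] = MEM[a] + MEM[b]
# expressed as the sequence of simple RISC-style instructions
# it decomposes into, one instruction per clock cycle.
mem = {0: 7, 1: 5}   # toy memory
regs = {}            # toy register file

def risc_step(instr):
    """Execute one simple RISC-style instruction per cycle."""
    op, *args = instr
    if op == "LOAD":          # register <- memory
        rd, addr = args
        regs[rd] = mem[addr]
    elif op == "ADD":         # register-to-register arithmetic only
        rd, rs1, rs2 = args
        regs[rd] = regs[rs1] + regs[rs2]
    elif op == "STORE":       # memory <- register
        rs, addr = args
        mem[addr] = regs[rs]

# The one CISC instruction becomes four RISC instructions/cycles:
program = [
    ("LOAD", "r1", 0),            # r1 = MEM[0]
    ("LOAD", "r2", 1),            # r2 = MEM[1]
    ("ADD",  "r1", "r1", "r2"),   # r1 = r1 + r2
    ("STORE", "r1", 0),           # MEM[0] = r1
]
for instr in program:
    risc_step(instr)
assert mem[0] == 12
```

Each tuple stands for one clock cycle in the RISC model, which is exactly the trade-off in the figure above: simpler hardware per instruction, more work for code generation.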
Exercise: Compare RISC and CISC in terms of:
• Architecture (using figures)
• Memory unit
• Time
• Decoding
• External memory
