Parallel Processing
Parallel processing is a computing method that divides large tasks into smaller parts to be completed simultaneously using multiple CPUs, improving performance and reducing task completion time. It can be categorized into four types of computer organization: SISD, SIMD, MISD, and MIMD, each with distinct characteristics. The major hardware architectures for multiprocessing include Symmetric Multiprocessing (SMP), Massively Parallel Processing (MPP), and Non-Uniform Memory Access (NUMA), each offering unique benefits and challenges.


What is parallel processing?

Parallel processing, or multiprocessing, is a computing method that handles large tasks by separating them into smaller parts and completing those parts simultaneously on two or more central processing units (CPUs). This type of processing improves performance and reduces the time needed to complete a task. You can perform multiprocessing on any operating system that supports multiple CPUs, such as one running on a multi-core processor.
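To make the idea concrete, here is a minimal sketch in Python using the standard library's multiprocessing module: a large task (summing the squares of a million numbers) is divided into chunks that worker processes complete simultaneously. The chunk sizes and worker count are illustrative choices, not requirements.

```python
# Divide one large task into parts and complete them simultaneously
# on multiple CPUs, as described above.
from multiprocessing import Pool

def sum_of_squares(chunk):
    """One worker's share of the task: sum n*n over its assigned range."""
    start, stop = chunk
    return sum(n * n for n in range(start, stop))

if __name__ == "__main__":
    # Split the range 0..1,000,000 into four parts, one per worker process.
    chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)
    total = sum(partial_sums)
    # The parallel result matches the sequential computation.
    assert total == sum(n * n for n in range(1_000_000))
```

The speed-up you actually observe depends on how many CPU cores the machine has and on the per-task overhead of starting worker processes.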

Types of multiprocessing
You may divide parallel processors into four groups based on their instruction and data streams, a classification known as Flynn's taxonomy. These groups include:

SISD computer organization

SISD means single instruction stream, single data stream. This computer organization includes a processing unit, a control unit, and a memory unit. Like a serial computer, a SISD machine executes instructions sequentially, though instructions may overlap during their execution stages. In addition, a SISD computer may have more than one functional unit, but all units operate under the administration of a single control unit. You can achieve a degree of parallelism in these systems through multiple functional units or pipeline processing.

SIMD computer organization

SIMD refers to single instruction stream, multiple data streams. This organization includes several processing elements functioning under the administration of a single control unit. The processors all receive the same instruction from the control unit but execute it on different data items. In addition, the shared memory subsystem contains multiple modules so it can communicate with all the processors simultaneously. You can further divide SIMD systems into bit-slice and word-slice organizations.
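A short conceptual sketch can make the SIMD idea clearer. The snippet below simulates the behaviour in plain Python: one instruction is broadcast to several "processing elements", each holding its own data item. Real SIMD happens in hardware (for example, in vector registers); this simulation only illustrates the semantics.

```python
# Conceptual simulation of SIMD: a SINGLE instruction is applied to
# MULTIPLE data items in lockstep, one item per processing element.
def simd_step(instruction, data_items):
    # Every processing element executes the same instruction
    # on a different data item.
    return [instruction(item) for item in data_items]

# The control unit broadcasts one instruction ("multiply by 2") to
# four processing elements, each with its own data item.
doubled = simd_step(lambda x: x * 2, [1, 2, 3, 4])
# doubled == [2, 4, 6, 8]
```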

MISD computer organization

MISD means multiple instruction streams, single data stream. This organization includes several processing units that receive different instructions but operate on the same data flow, with the output of one processor becoming the input of the next. It's important to note that developers implemented this structure mainly for theoretical interest.

MIMD computer organization

MIMD refers to multiple instruction streams, multiple data streams. In this computer organization, the processors in the parallel system execute different instructions and operate on different data simultaneously. Each processor can run a separate program, generating a unique instruction stream for that program.
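You can sketch MIMD behaviour with Python processes: each worker runs a different program (instruction stream) on different data, simultaneously. The worker functions below are illustrative examples, not part of any standard API.

```python
# MIMD sketch: two worker processes execute DIFFERENT instructions
# on DIFFERENT data at the same time.
from concurrent.futures import ProcessPoolExecutor

def total(numbers):
    """Program 1: sum a list of numbers."""
    return sum(numbers)

def longest(words):
    """Program 2: find the longest word in a list."""
    return max(words, key=len)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(total, [1, 2, 3])             # stream 1, data 1
        f2 = pool.submit(longest, ["cpu", "memory"])   # stream 2, data 2
    print(f1.result(), f2.result())  # prints: 6 memory
```

Because each processor follows its own program, MIMD is the most flexible of the four organizations, and most modern multiprocessor systems behave this way.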

Types of multiprocessing hardware architecture


The major multiprocessing hardware architectures in the server market include:

Symmetric multiprocessing (SMP)

This architecture is a single machine with several processors managed by one operating system, all sharing the same memory and disk. An SMP system typically has eight to 32 processors, a large memory, a parallel database, and a capable disk subsystem, and it usually performs well for a medium-sized warehouse. It's important that the database runs its processes in parallel and that the data warehouse design can take advantage of these parallel capabilities.

Generally, the processors can quickly access shared resources, but the shared access path can become a bottleneck as the system scales. The SMP machine is also a single entity, so it can become a single point of failure in the warehouse. To address this problem, hardware companies developed techniques that allow you to link multiple SMP machines to each other.

Massively parallel processing (MPP)

These systems consist of independent computers, each with its own disks, operating system, and memory, that coordinate by exchanging information with each other. This design is relatively fast and solves problems efficiently. Its major advantage is the ability to link hundreds of machine nodes and apply them to a problem with a brute-force approach. For instance, suppose you want to perform a full scan of a large table: on a 100-node MPP system, each node scans 1/100th of the table.
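The table-scan example can be sketched as a simple partitioning calculation. The row counts below are hypothetical, and the helper function is an illustration of the idea rather than how any particular MPP database assigns work.

```python
# Split a full table scan across MPP nodes: each node receives a
# contiguous, near-equal range of rows to scan.
def node_ranges(total_rows, nodes):
    """Divide total_rows into `nodes` contiguous, near-equal ranges."""
    base, extra = divmod(total_rows, nodes)
    ranges, start = [], 0
    for i in range(nodes):
        stop = start + base + (1 if i < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

# A 1,000,000-row table on a 100-node system: each node scans
# 1/100th of the table (10,000 rows), and together they cover it all.
parts = node_ranges(1_000_000, 100)
assert all(stop - start == 10_000 for start, stop in parts)
```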

Non-uniform memory access (NUMA)

Non-uniform memory access is a hybrid of MPP and SMP: it attempts to combine the parallel speed of MPP with the shared-disk adaptability of SMP. This relatively new approach may be suitable for high-end data warehousing. The architecture is conceptually similar to a cluster of SMP machines, but with greater coordination among nodes, more bandwidth, and tighter connections. You can consider using the NUMA architecture if you can divide the data warehouse into independent groups and put each group on its own node.

Benefits of multiprocessing
Here are some of the benefits of using multiprocessing methods in your workplace:

 Supports multiprocessors: This type of processing allows you to use multiple processors or different processors connected through a network.
 Executes code efficiently: Multiprocessing is an efficient means of executing code and helps reduce computing time.
 Solves larger programming problems: Multiprocessing can help you resolve larger programming problems in a shorter period.
 Simplifies complex or large data: This processing method helps you analyze data sets that are too complex or too large to analyze sequentially.
 Reduces data analysis costs: Implementing multiprocessing helps you save costs in the long run by giving you better cost per performance. You can also build multiprocessing computers from relatively cheap components.
 Increases data organization: This processing method also helps you properly organize the organization's data and makes communication and data sharing easier.
 Enhances data storage capabilities: Multiprocessing helps you optimize the company's data storage facilities.
 Real-world applications: Unlike sequential processing, you can use multiprocessing for simulating, understanding, and modelling real-world phenomena.


Disadvantages of multiprocessing
Here are some challenges to note before creating parallel systems in your workplace:

 Complex parallel structures: Writing programs that target parallel structures may be challenging because of their complexity.
 Increased costs: You may incur extra costs due to synchronization, data transfers, thread creation and destruction, and communication. For instance, multi-core processors may require considerable power to function effectively, increasing electricity costs.
 Code adjustments for various target architectures: A parallel system may require different code tuning to perform well on different target architectures.
 Data clusters may require additional cooling: A parallel system may require better cooling technologies for your data clusters.
 Long debugging and implementation times: Solutions in parallel systems may be harder to prove correct, debug, or implement, and may not perform optimally due to coordination and communication overhead.
