
Definition: Parallel computing is the simultaneous use of multiple computing resources to solve large computational problems. It uses two or more processors in combination to solve a single problem, in order to reduce computation time for algorithms or for datasets too large to fit on a single computer.

Reasons for parallel computing


•Serial performance improvements have slowed, while parallel
hardware has become ubiquitous
•Parallel programs are typically harder to write and debug than
serial programs.
• “Speedup” is a measure of performance improvement, given by speedup = time_old / time_new.
• For a parallel program, we can run with an arbitrary number of cores, n.
• Parallel speedup is therefore a function of the number of cores p: speedup(p) = time_old / time_new(p).
This is how a simple parallel for loop works:
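Below is a minimal illustrative sketch (not from the original notes), assuming a C++ compiler with OpenMP support (e.g. g++ -fopenmp): the same loop is timed serially and then as a parallel for loop, and the ratio of the two times gives the measured speedup.

    // Minimal sketch: estimate speedup = time_old / time_new(p) for a parallel for loop.
    #include <cmath>
    #include <cstdio>
    #include <omp.h>

    int main() {
        const int N = 50000000;

        double t0 = omp_get_wtime();
        double serial_sum = 0.0;
        for (int i = 0; i < N; ++i)                          // serial loop: time_old
            serial_sum += std::sin(i * 1e-6);
        double time_old = omp_get_wtime() - t0;

        t0 = omp_get_wtime();
        double parallel_sum = 0.0;
        #pragma omp parallel for reduction(+:parallel_sum)   // parallel loop: time_new(p)
        for (int i = 0; i < N; ++i)
            parallel_sum += std::sin(i * 1e-6);
        double time_new = omp_get_wtime() - t0;

        std::printf("sums: %.3f %.3f\n", serial_sum, parallel_sum);
        std::printf("cores = %d, speedup = %.2f\n",
                    omp_get_max_threads(), time_old / time_new);
        return 0;
    }

On a typical quad-core machine the measured speedup would land somewhere below 4, since threading overhead and memory bandwidth keep it under the ideal value.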

Types of CPU hardware

1. Single Core
Characteristics: No parallelism

2. Multicore hardware

•Each processor core runs independently


•All cores can access system memory
•Common in desktops, laptops, smartphones, probably
toasters…

3. Multicore, Multiprocessors

•Each processor core runs independently


•All cores can access system memory
•Common in workstations and servers

4. Accelerators

An accelerator, such as a GPU, is a separate chip with many simple cores.


•GPU memory is separate from system memory
•Not all GPUs are suitable for research computing tasks
(need support for APIs, decent floating-point performance)
5. Hardware Clusters

•Several independent computers, linked via network


•System memory is distributed (i.e. each core cannot access
all cluster memory)
•Bottlenecks: inter-processor and inter-node
communications, contention for memory, disk, network
bandwidth, etc.

Many tasks can be divided into parts that can be executed separately (a minimal Monte Carlo sketch follows the examples below).


Examples:
 Monte Carlo Integration
 Bootstrap
 Cross-Validation
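As an illustration of such a divisible task, here is a minimal sketch (an invented example, not from the original text) of Monte Carlo integration in C++: the samples are split into independent chunks, each chunk is evaluated in its own thread via std::async, and the partial results are combined at the end.

    // Monte Carlo integration of f(x) = 4 / (1 + x^2) on [0, 1] (exact value: pi),
    // with the samples divided into parts that are executed separately.
    #include <cstdio>
    #include <future>
    #include <random>
    #include <vector>

    double partial_estimate(unsigned seed, long samples) {
        std::mt19937_64 rng(seed);                           // independent random stream per part
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double sum = 0.0;
        for (long i = 0; i < samples; ++i) {
            double x = u(rng);
            sum += 4.0 / (1.0 + x * x);
        }
        return sum / samples;
    }

    int main() {
        const int parts = 4;
        const long samples_per_part = 2000000;

        std::vector<std::future<double>> futures;
        for (int p = 0; p < parts; ++p)                      // each part runs on its own
            futures.push_back(std::async(std::launch::async,
                                         partial_estimate, 1234u + p, samples_per_part));

        double estimate = 0.0;
        for (auto& f : futures) estimate += f.get() / parts; // combine the partial results

        std::printf("pi estimate = %.6f\n", estimate);
        return 0;
    }

The bootstrap and cross-validation parallelize in the same way: each resample or fold is an independent part.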

Communication between processes


 Forking (a minimal sketch appears after this list)
 Threading
 OpenMP (shared-memory multiprocessing; good for multicore machines)
 PVM (Parallel Virtual Machine)
 MPI (Message Passing Interface; the de facto standard for large-scale parallel computations)
 For “Big Data”: Hadoop and related approaches

Matlab parallel computing environment


Matlab is one of the most widely used mathematical computing tools in technical computing. It provides an interactive environment with high-performance computing (HPC) procedures and is easy to use. Parallel computing with Matlab has been an area of interest for parallel computing researchers for a number of years. In this study, we present some of the past and present attempts at parallel computing with Matlab.

Most software has been written for serial computation. That means it runs on a single computer having a single Central Processing Unit (CPU). Usually, the problem is divided into a series of instructions, which are executed sequentially. Parallel computing is a computing method that executes many computations (processes) simultaneously: a large problem is divided into smaller pieces, which are then solved concurrently on multiple CPUs. Figure 1 (a) and (b) shows how a problem is divided sequentially and in parallel. In fact, the main advantages of parallel computing are: 1) saving time and money; 2) solving larger problems; 3) providing concurrency; 4) using non-local resources; and 5) overcoming the limits of serial computing.

Graphics Processing Unit (GPU)
Graphical Processing Units (GPUs) were introduced in 1999 by NVIDIA. A GPU is a highly parallel computing device. It is designed to accelerate the analysis of large datasets, such as image, video, and voice processing, and to increase performance in graphics rendering and computer games. In the last ten years, the GPU has undergone major development and has come to be used in many applications, such as the iterative solution of PDEs, video processing, machine learning, and 3D medical imaging. The GPU has gained significant popularity as a powerful tool for high-performance computing (HPC) because of its low cost, flexibility, and accessibility [22]. Figure 10 illustrates architectural differences between GPUs and CPUs; the GPU has a large number of threads, where each thread can execute a different program.

In object-oriented programming (OOP) languages, the ability to encapsulate the software concerns of the dominant decomposition in objects is the key to reaching high modularity and reduced complexity in large-scale designs. However, distributed-memory parallelism tends to break modularity, encapsulation, and the functional independence of objects, since parallel computations cannot be encapsulated in individual objects, which reside in a single address space; this motivates approaches for reconciling object-orientation and distributed-memory parallelism. The better cost-benefit of parallel computing platforms for High Performance Computing (HPC), due to the success of off-the-shelf distributed-memory parallel computing platforms such as clusters [1] and grids [2], has motivated the emergence of new classes of applications from computational sciences and engineering. Besides high performance requirements, these applications introduce stronger requirements of modularity, abstraction, safety, and productivity for the existing parallel programming tools [3]. Unfortunately, parallel programming is still hard to incorporate into the usual large-scale software development platforms that have been developed to deal with such kinds of requirements [4]. Also, automatic parallelization is useful only in restricted contexts, such as in scientific computing libraries [5]. Skeletal programming [6], a promising alternative for high-level parallel programming, has not achieved the acceptance expected [7]. These days, libraries of message-passing subroutines that conform to the MPI (Message Passing Interface) standard [8] are widely adopted by parallel programmers, offering expressiveness, portability, and efficiency across a wide range of parallel computing platforms. However, they still present a low level of abstraction and modularity in dealing with the requirements of the emerging large-scale applications in HPC domains.
In the context of corporate applications, object-oriented programming (OOP) has been consolidated as the main programming paradigm for promoting development productivity and software quality. Object-orientation is the result of two decades of research in programming tools and techniques, motivated by the need to deal with the increasing levels of software complexity since the software crisis of the 1960s [9]. Many programming languages have been designed to support OOP, such as C++, Java, C#, Smalltalk, Ruby, Objective-C, and so on. Despite their success in the software industry, object-oriented languages are not popular in HPC, which is still dominated by Fortran and C, as a consequence of the high level of abstraction and modularity offered by these languages. When parallelism comes onto the scene, the situation is worse, due to the lack of safe ways to incorporate explicit message-passing parallelism into these languages without breaking important principles, such as the functional independence of objects and their encapsulation.
OOPP (Object-Oriented Parallel Programming) is a style of parallel programming in which objects are intrinsically parallel and are deployed across a set of nodes of a distributed-memory parallel computer, and in which communication is distinguished in two layers: intra-object communication, for common process interaction by message passing, and inter-object communication, for usual object coordination by method invocations. In OOPP, objects are called p-objects (parallel objects). The decision to support C++ comes from the wide acceptance of C++ in HPC. However, OOPP might be supported by other OO languages, such as Java and C#.

Context and contributions

• Message-Passing (MP), intended for HPC applications, which


have stronger performance requirements as the main driving
force, generally found in scientific and engineering domains;
• Object-Orientation (OO), intended for large-scale applications,
which have stronger productivity requirements for development
and maintenance, generally found in corporate domains.
The following sections review concepts of the two programming techniques above that are important in the context of this work, and also provide a discussion of the strategies that have been applied for their integration.

2.1. Parallel programming and message passing with MPI


MPI is a standard specification for libraries of subroutines for
message-passing parallel programming that are portable across
distributed-memory parallel computing platforms. MPI was
developed in the mid-1990s by a consortium integrating
representatives from academia and industry, interested in a
message-passing interface that could be implemented
efficiently in virtually any distributed parallel computer
architecture, replacing the myriad of proprietary interfaces
developed at that time by supercomputer vendors for the
specific features and needs of their machines. It was observed
that such diversity results in higher costs for users of high-end
parallel computers, due to the poor portability of their
applications between architectures from distinct vendors. Also,
the lack of standard practices hinders the technical evolution
and dissemination of computer architectures and programming
techniques for parallel computing. MPI was initially proposed as
a kind of parallel programming ‘‘assembly’’, on top of which
specific purpose, higher-level parallel programming interfaces
could be developed, including parallel versions of successful
libraries of subroutines for scientific computing and
engineering.
However, MPI is now mostly used to develop final applications. The MPI specification is maintained by the MPI Forum (http://www.mpi-forum.org).
MPI is now the main representative of message-passing parallel
programming. Perhaps it is the only parallel programming
interface, both portable and general purpose, to efficiently
exploit the performance of high-end distributed parallel
computing platforms. Since the end of the 1990s, any newly installed cluster or MPP has supported some implementation
of MPI. In fact, most vendors of parallel computers adopt MPI as
their main programming interface, offering tuned implementations for their own machines.
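As an illustration of this message-passing style (a sketch only, not taken from the text; it assumes an MPI implementation such as MPICH or Open MPI, compiled with mpicxx and launched with mpirun), each process below computes a partial value and sends it to process 0 with explicit MPI_Send/MPI_Recv calls.

    // Minimal MPI sketch: each process computes a partial value and sends it
    // to rank 0 with explicit message passing (MPI_Send / MPI_Recv).
    #include <cstdio>
    #include <mpi.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

        double partial = rank * 1.0;            // stand-in for a real computation

        if (rank != 0) {
            MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else {
            double total = partial;
            for (int src = 1; src < size; ++src) {
                double value = 0.0;
                MPI_Recv(&value, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                total += value;
            }
            std::printf("sum of partial values from %d processes: %f\n", size, total);
        }

        MPI_Finalize();
        return 0;
    }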

Principles of object-oriented languages


Object-orientation is an influential data abstraction mechanism
whose basis was introduced in the mid-1960s, with the
Simula’67 programming language. Following Simula’67, the
most prominent object-oriented language was Smalltalk,
developed at Xerox PARC in the 1970s. The designers of
Smalltalk adopted the pervasive use of objects as a
computation basis for the language, being the first to coin the
term object-oriented programming (OOP). During the 1990s, OOP became mainstream in programming, mostly influenced by the rise in popularity of graphical user interfaces (GUIs), where OOP techniques were extensively applied. However, the interest in OOP rapidly surpassed its use in GUIs, as software engineers and programmers recognized the power of OOP principles in dealing with the increasing complexity and scale of software. Today, the most used OOP languages are C++, Java, and C#.
Modern object-oriented languages are powerful programming
artifacts. Often, their rich syntax, complex semantics, and
comprehensive set of libraries hide the essential principles of
object-orientation.

2.2.1. Objects
In an imperative programming setting, a pure object is a
runtime software entity consisting of the following parts:
• a state defined by a set of internal objects called attributes;
• a set of subroutines called methods, which define the set of
valid messages the object may accept. The methods of an
object define the object’s valid state transformations, which
define the computational meaning of the object. The signatures
of the methods of an object form the object’s interface.
An object-oriented program comprises a set of objects that
coordinate their tasks by exchanging messages in the form of
method invocations. From the software architecture point of
view, each object addresses a concern in the dominant
decomposition of the application. Thus, coordination of objects
results in the implementation of the overall application
concern.
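A minimal C++ sketch of these definitions (the Counter class is invented purely for illustration): the object's attribute forms its state, its methods define the valid state transformations, the method signatures form its interface, and the private attribute anticipates the encapsulation principle discussed below.

    // Illustrative sketch: an object with private state (attributes) and
    // methods defining its valid state transformations; other code uses
    // only its interface.
    #include <cstdio>

    class Counter {
    public:
        void increment() { ++count_; }            // valid state transformation
        void reset()     { count_ = 0; }
        int  value() const { return count_; }     // part of the object's interface
    private:
        int count_ = 0;                            // internal state (attribute)
    };

    int main() {
        Counter c;                                 // an object: instance of Counter
        c.increment();
        c.increment();
        std::printf("count = %d\n", c.value());    // interaction via method invocation
        return 0;
    }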
Concerns in software engineering. The different programming paradigms created in recent decades have primarily tried to break the complexity of a problem by recursively dividing it into a set of smaller subproblems that are easier to understand and solve. This is software modularization. From this perspective, software is recursively broken into a set of modules, whose relations and interactions are specified.
Each module must address a software concern, which is
defined as a conceptual part of a solution such that the
composition of concerns may define the solution needed by the
software. The modularization process based on concerns is
called separation of concerns (SoC). The concrete notion of
module in a programming environment depends on the
programming paradigm. For example, object-oriented
programming uses objects to describe concerns, whereas
functional programming uses functions in a mathematical
sense.

Encapsulation
The most primitive principle behind object-oriented programming (OOP) is encapsulation, also called information hiding, which states that an object which knows the interface of another object does not need to make assumptions about its internal details in order to use its functionality. It only needs to concentrate on the interfaces of the objects it depends on. In fact, an OOP language statically prevents an object from accessing the internal state of another object, by exposing only its interface. Encapsulation frees programmers from concentrating on irrelevant details about the internal structure of a particular implementation of an object. In fact, the implementation details and attributes of an object may be completely modified without affecting the parts of the software that depend on the object, provided its interface, as well as its behavior, is preserved (Pinho, E.G. and De Carvalho Junior, F.H. (2014). Science of Computer Programming, pp. 65–90). In this sense, encapsulation is an important property of OOP in dealing with software complexity and scale. More importantly, encapsulation brings to programmers the possibility of working at higher levels of safety and security, by allowing only essential and valid accesses to be performed on critical subsets of the program state.

Classes
A class is defined as a set of similar objects, presenting a set of
similar attributes and methods. Classes may also be introduced
as the programming-time counterparts of objects, often called
prototypes or templates, specifying the attributes and methods
that objects instantiated from them must carry at run time. Let A be a class with a set of attributes α and a set of methods μ. A parallel programmer may derive a new class from A, called A′, with a set of attributes α′ and a set of methods μ′, such that α ⊆ α′ and μ ⊆ μ′. This is called inheritance. A is a superclass (generalization) of A′, whereas A′ is a subclass (specialization)
of A. By the substitution principle, an object of class A′ can be
used in a context where an object of class A is required. Thus,
in a good design, all the valid internal states and state
transformations of A are also valid in A′. Such a safety
requirement cannot be enforced by the usual OOP languages.
Inheritance of classes can be single or multiple. In single
inheritance, a derived class has exactly one superclass,
whereas in multiple inheritance a class may be derived from a
set of superclasses. Modern OOP languages, such as Java and C#, have abolished multiple inheritance, which is still supported by C++, by adopting the single-inheritance mechanism once supported by Smalltalk. To deal with use cases of multiple
inheritance, Java introduced the notion of interface. An
interface declares a set of methods that must be supported by
objects that implement it. Interfaces define a notion of type for
objects and classes.
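The following C++ sketch (invented classes, purely illustrative) shows inheritance and the substitution principle described above: Circle plays the role of the subclass A′, Shape the role of the superclass A, and an A′ object is used where an A is required.

    // Illustrative sketch of inheritance and the substitution principle:
    // Circle (a subclass/specialization) can be used where Shape (the
    // superclass/generalization) is expected.
    #include <cstdio>

    class Shape {                       // class A: attributes and methods
    public:
        virtual double area() const { return 0.0; }
        virtual ~Shape() = default;
    };

    class Circle : public Shape {       // class A': inherits A and adds members
    public:
        explicit Circle(double r) : radius_(r) {}
        double area() const override { return 3.14159265 * radius_ * radius_; }
    private:
        double radius_;                 // attribute added by the subclass
    };

    void printArea(const Shape& s) {    // context that requires an object of A
        std::printf("area = %f\n", s.area());
    }

    int main() {
        Circle c(2.0);
        printArea(c);                   // substitution: an A' used as an A
        return 0;
    }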

Abstraction
Classes and inheritance bring four important abstraction mechanisms to OOP:
• Classification/instantiation constitutes the essence of the use
of classes. As already defined, classes group objects with
similar structure (methods and attributes). Objects represent
instances of classes.
• Aggregation/decomposition comes from the ability to have
objects as attributes of other objects. Thus, a concept
represented by an object may be described by their constituent
parts, also defined as objects, forming a recursive hierarchy of
objects that represent the structure behind the concept.
• Generalization/specialization comes from inheritance, making
it possible to recognize commonalities between different
classes of objects by creating superclasses from them. Such an
ability makes possible a kind of polymorphism that is typical in
modern OO languages, where an object reference, or variable,
that is typed with a class may refer to an object of any of its
subclasses.
• Grouping/individualization is supported due to the existence of collection classes, which allow objects with common interests to be grouped together according to the application needs. With polymorphism, collections holding objects of related classes (related by inheritance) are valid, as the sketch below illustrates.
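The sketch below (invented classes, purely illustrative) combines generalization/specialization with grouping/individualization: a collection typed by the superclass holds objects of different subclasses, and polymorphism lets each element respond to the same call in its own way.

    // Illustrative sketch: a collection typed by the superclass groups objects
    // of different subclasses; polymorphism lets each object answer the same call.
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Sensor {                              // generalization (superclass)
        virtual double read() const = 0;
        virtual ~Sensor() = default;
    };

    struct TemperatureSensor : Sensor {          // specializations (subclasses)
        double read() const override { return 21.5; }
    };
    struct PressureSensor : Sensor {
        double read() const override { return 101.3; }
    };

    int main() {
        std::vector<std::unique_ptr<Sensor>> sensors;   // grouping by common interest
        sensors.push_back(std::make_unique<TemperatureSensor>());
        sensors.push_back(std::make_unique<PressureSensor>());

        for (const auto& s : sensors)                   // individualization: each object
            std::printf("reading: %f\n", s->read());    // responds with its own behavior
        return 0;
    }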

Modularity
Modularity is a way of managing complexity in software, by
promoting the division of large scale and complex systems into
collections of simple and manageable parts. There are some
accepted criteria in classifying the level of modularity achieved
by a programming method: decomposability, composability,
understandability, continuity, and protection.
OOP promotes the organization of the software in classes from
which the objects that perform the application will be
instantiated at run time. In fact, classes will be the building
blocks of OOP software. In a good design, classes capture
simple and well-defined concepts in the application domain,
orchestrating them to perform the application in the form of
objects (decomposability). Classes promote the reuse of
software parts, since the concept captured by a class of objects
may be present in several applications (composability). Indeed, abstraction mechanisms make it possible to reuse only those class parts that are common between objects in distinct applications. Encapsulation and a high degree of functional independence promote independence between classes, making it possible to understand the meaning of a class without examining the code of the other classes it depends on (understandability). Also, they avoid the propagation of modifications in the requirements of a given class implementation to other classes (continuity). Finally, exception mechanisms make it possible to restrict the scope of the effect of an error condition at runtime (protection).

Functional independence
Functional independence is an important property of objects to
be achieved in the design of their classes. It is a measure of the
independence among the objects that constitute the
application. It is particularly important for the purposes of this
paper. Functional independence is calculated by two means:
cohesion and coupling. The cohesion of a class measures the
degree to which the tasks its objects perform define a
meaningful unit. Thus, a highly cohesive class addresses a
single and well-defined concern. The coupling of a class
measures its degree of dependency in relation to other classes.
Low coupling means that modifications in a class tend to cause only minor effects in the classes that depend on it. Also, low coupling minimizes the propagation of errors from defective classes to the classes that depend on them. From the discussion above, we may conclude that functional independence improves as high cohesion and low coupling are achieved.

Parallelism support in object-oriented languages


As a result of the wide acceptance of MPI among parallel
programmers in HPC domains, implementations of MPI for
object-oriented languages have been developed. Some of these
implementations are simple wrappers for native MPI implementations, whereas others adopt an object-oriented style. Both approaches present two drawbacks. First of all, they go against the original MPI designers' intention that MPI serve as just a portability layer for message passing in parallel implementations of specific-purpose scientific libraries. Secondly, MPI promotes the decomposition of a program along the dimension of processes, causing concerns to be broken across a set of cooperating objects, or classes, increasing their coupling and, as a consequence, sacrificing functional independence.
Charm++ is a library for message-passing on top of C++,
portable across distributed and shared-memory parallel
architectures. A program is built from a set of parallel
processes called chares, which can be dynamically created by
other
chares. They can communicate via explicit message-passing
and share data through special kinds of objects. The placement
of chares is defined by a dynamic load balancing strategy
implemented in the run-time system. Chares are special
objects, bringing the drawback of using objects to encapsulate
processes instead of concerns. A common approach, supported
by JavaParty, ParoC++, and POP-C++, relies on deploying objects,
representing processes, across the nodes of the parallel
computer, where they can interact through method invocations
instead of message-passing. Indeed, such an approach may be
supported by any OO language with some form of remote
method invocation. Despite avoiding the backdoor
communication promoted by raw message-passing, method
invocations promote client–server relations between the objects
that act as processes, whereas most of the parallel algorithms
assume peer-to-peer relations among processes. For this
reason, ParoC++ proposed forms of asynchronous method
invocations in order to improve the programmability of
common process interaction patterns. POP-C++ extended
ParoC++ for grid computing. With the advent of virtual
execution machines, another common approach is to
implement parallel virtual machines, where parallelism is
managed implicitly by the execution environment [26].
However, we argue that such implicit approaches will never
reach the level of performance supported by explicit message-
passing in the general case, since efficient and scalable parallel
execution depends on specific features of the architectures and
applications, such as the strategy of distributing the data
across nodes of a cluster in order to promote data locality and
minimize the amount of communication.

Another parallel programming model that has been proposed


for object-oriented languages is PGAS (Partitioned Global
Address Space), supported by languages like X10, Chapel,
Fortress, and Titanium. Most of these languages have been
developed under the HPCS (High Productivity Computing Systems) program of DARPA since 2002. The HPCS program
has two goals: to boost the performance of parallel computers
and increase their usability. In PGAS, the programmer can
work with shared and local variables without explicitly sending
and receiving messages. Thus, there is neither a notion of
parallel object nor a notion of message-passing interaction
between objects, as in the other approaches and the approach
proposed in this paper. Objects interact through the partitioned
address space, despite being placed in distinct nodes of the
parallel computer. Such an approach makes parallel
programming easier, but may incur performance penalties
since memory movements are controlled by the system. Each
language has its own way of expressing task and data
parallelism, through different forms, such as: asynchronous
method invocation, explicit process spawn, dynamic parallelism
to handle ‘‘for loops’’ and partitioned arrays.
Initially, object-orientation and parallelism originated and
developed as separate and relatively independent areas.
During the last decade, however, more and more researchers
were attracted by the benefits from a potential marriage of the
two powerful paradigms. Numerous research projects and an
increasing number of practical applications were aimed at
different forms of amalgamation of parallelism with object-
orientation. It has been realized that parallelism is an inherently
needed enhancement for the traditional object-oriented
programming (OOP) paradigm, and that object orientation can
add significant flexibility to the parallel programming paradigm.

Why add parallelism to OOP? Primary OOP concepts such as
objects, classes, inheritance, and dynamic typing were first
introduced in the Simula language and were initially intended
to serve specific needs of real-world modelling and simulation.
Object-orientation developed further as an independent
general-purpose paradigm which strives to analyze, design and
implement computer applications through modelling of real-
world objects. From a programming perspective, object
orientation originated as a specific method for modelling
through programming but evolved to a general approach to
programming through modelling. Many real-world objects
perform concurrently with other objects, often forming
distributed systems. Because modelling of real-world objects is
the backbone of the object-oriented paradigm and because
real-world objects are often parallel, this paradigm needs to be
extended with appropriate forms of parallelism.

Why add object-orientation to parallel programming? The high cost of
specialized high performance SIMD and MIMD machines is still
an obstacle for some potential users. However, the rapid
development of ATM networks and other fast connections
opens opportunities to integrate existing workstations into
relatively cheap distributed computing resources. Nowadays,
diverse parallel processing platforms are becoming increasingly
available to application programmers. The expanding parallel
applications cover not only traditional areas, such as scientific
computations, but also new domains, such as, for example,
multimedia. It has been widely recognized, however, that
parallel languages and language environments are behind the
needs of parallel programmers and users. For example, parallel
software reuse and portability are particularly important areas
that need improvement. Reuse of existing parallel software is
very important because of the significant diversity of new
parallel platforms and the short lifetime of existing ones.
Because parallel languages and
compilers are architecture-oriented, parallel applications are
difficult to port. Researchers believe that parallel programming
can benefit from object-orientation in the same way as
traditional serial programming does. For example, object-
oriented languages can provide better reuse of parallel
software through the mechanisms of inheritance and
delegation. Object-orientation can also help with portability of
parallel applications because OOP languages support high level
of abstraction through separation of services from
implementation. Parallel applications can be consistently and
naturally developed through object-oriented analysis, design,
and programming.

Current State of Parallel OOP


Very active research has been conducted in the last decade in
the area of parallel OOP. A significant number of parallel OOP
languages have been designed, implemented, and tested in
diverse parallel applications. Despite the progress in the area
of parallel OOP, many difficulties and open issues still exist.
Existing parallel OOP languages are often compromised in
important areas, such as inheritance capability, ease of use,
efficiency, heterogeneity, automatic memory management, and
debugging.
1) Inheritance capability. Many of the proposed languages
fail to provide inheritance for objects which can be
distributed in a network and which can accept and handle
messages concurrently. Even languages that permit some
amalgamation of parallelism with inheritance tend to
support only single-class inheritance. Most languages are
weak in providing inheritance for the synchronization code
of parallel objects.
2) Efficiency and ease of use. The largest group of
experimental languages for parallel OOP consists of C++
extensions. Such extensions can be very large and
complex, and therefore, not easy to learn, use, or
implement efficiently. An alternative group includes
interpretation-oriented languages (e.g. Smalltalk, Self, actor-based languages, Java), which do not provide high run-time efficiency and lack reliable static type checking.
3) Heterogeneity. Nowadays computing environments are
becoming more and more heterogeneous. Users typically
have access to a variety of platforms (such as
workstations, PCs, specialized high-performance
computers) which are networked locally or are
geographically distributed. Most parallel OOP languages,
however, are targeted at specific high-performance
platforms, or implemented for homogeneous networks.
There are no compilation-oriented languages that can
convert heterogeneous networks into a single high-
performance object-oriented computing environment in
which the peculiarities of the specific architectures are
transparent for the user.

Researchers have proposed diverse approaches to the


problematic points of parallel OOP. For example, the following
alternatives have been advocated:
1) Enhancing popular serial OOP languages with parallelism versus designing entirely new parallel OOP languages; explicit parallelism versus parallelizing compilers; parallel OOP languages versus parallel OOP libraries; and so on.
The numerous proposals to integrate objects and
parallelism typically follow two distinct scenarios: a)
design a new parallel OOP language with built-in
parallelism, or b) use an existing OOP language and
enhance it with parallelism mechanisms. Most recent
proposals follow the second approach. Its proponents take
the object-oriented paradigm as given (on the basis of its
contribution to the production of quality software) and
investigate how to enhance it so that it covers parallel
applications as well. The transition from a sequential
language to its parallel enhancement would be easier and
less frustrating than the transition to a new language
designed from scratch. The problem is not so much that of
learning a new language as it is of rewriting a 100,000-line
program. For entirely practical reasons like the above one,
the parallel extension of an existing language may have
better utility than a new language designed from scratch.
2) Some researchers assume that the programmer is
interested in and capable of specifying parallelism in
explicit form. Because OOP means programming by
modelling, and because real-world objects may exist and
do things concurrently, OOP languages may have to
provide explicit support to modelling parallelism.
3) Other researchers adhere to the idea that parallelism
should be transparent to the programmer, and that a
parallelizing compiler should take the burden of finding
and exploiting potential parallelism. It seems possible to
combine both approaches in certain beneficial proportions.
A sequential language can be enhanced with explicit
parallelism by means of a special library of low-level
parallelism primitives, such as fork and join for example.
Alternatively, the language can be extended with
additional linguistic constructs that implement higher-level
parallelism abstractions, such as monitors. The main
motivation for the library approach is that the underlying
sequential language does not need to be changed.
External library primitives can provide flexible but
uncontrolled access to data and hardware resources;
unfortunately, such access can result in unreliable
programs. Some authors overcome this difficulty by encapsulating the library services in special classes. Parallel class libraries are then extended by end users and adapted to their specific needs (a minimal sketch of this idea appears below).
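Here is a minimal C++ sketch of that library approach (invented names; it assumes only the standard <thread> library): low-level fork/join primitives are encapsulated in a small class, so client code gains explicit parallelism without any change to the underlying sequential language.

    // Illustrative sketch of the "library approach": a small class encapsulating
    // fork (thread creation) and join, so client code never touches raw threads.
    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    class TaskGroup {
    public:
        void fork(std::function<void()> task) {      // start a task in a new thread
            workers_.emplace_back(std::move(task));
        }
        void join_all() {                             // wait for every forked task
            for (auto& t : workers_) t.join();
            workers_.clear();
        }
        ~TaskGroup() { join_all(); }
    private:
        std::vector<std::thread> workers_;
    };

    int main() {
        TaskGroup group;
        for (int i = 0; i < 4; ++i)
            group.fork([i] { std::printf("task %d running\n", i); });
        group.join_all();                             // explicit join point
        return 0;
    }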

Parallel OOP has proven to be, or is expected to be, a beneficial


implementation approach in a number of application areas,
such as modelling and simulation, scientific computing,
management information systems, telecommunications, and
banking. New developments in the domain of parallel OOP
occur in close interaction with other computer science fields,
such as database management and artificial intelligence.
More research is needed to further improve the design and
implementation of parallel OOP platforms, and to increase their
applicability.
