What is Distributed Computing
Due to its ability to provide parallel processing between multiple systems, distributed
computing can increase performance, resilience and scalability, making it a
common computing model in database systems and application design.
Parallel execution. Once subtasks are assigned, each node executes its subtask
concurrently with the other nodes. This parallel processing enables faster
computation of complex tasks than sequential processing.
Communication. Nodes in a distributed system communicate with one another
to share resources, coordinate tasks and maintain synchronization. This
communication can take place through a variety of network protocols.
An example showing how networks, servers and computers are structured in
distributed computing.
Types of distributed computing architecture
In enterprise settings, distributed computing generally puts various steps in
business processes at the most efficient places in a computer network. For
example, a typical distributed deployment follows a three-tier model that organizes
applications into the presentation tier -- or user interface -- the application tier and the data tier.
These tiers function as follows:
Efficiency. Complex requests can be broken down into smaller pieces and
distributed among different systems. This way, the request is simplified and
worked on as a form of parallel computing, reducing the time needed to
compute requests.
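As a minimal sketch of this idea (an illustration, not something from the article), the following snippet splits a "request" -- squaring a list of numbers -- into subtasks and computes them in parallel with Python's standard-library multiprocessing module:

```python
# Sketch: break a request into subtasks and compute them in parallel.
from multiprocessing import Pool

def square(n):
    # One subtask: each worker process handles a small piece of the request.
    return n * n

if __name__ == "__main__":
    numbers = list(range(10))
    with Pool(processes=4) as pool:
        # The full request is distributed among the worker processes and
        # the partial results are reassembled in order.
        results = pool.map(square, numbers)
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Here `pool.map` plays the role of the distributor: it hands subtasks to the workers and collects their results, reducing the time needed to serve the whole request.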
Cost. While distributed systems can be cost-effective in the long run, they often
incur high deployment costs initially. This can be an issue for some
organizations, especially when compared to the relatively lower upfront costs
of centralized systems.
Grid computing
Grid computing involves a distributed architecture of multiple computers
connected to solve a complex problem. Servers or PCs run independent
tasks and are linked loosely by the internet or low-speed networks. In the
grid computing model, individual participants can contribute some of their
computer's processing time to solving complex problems.
Distributed computing
Grid computing and distributed computing are similar concepts that can be
hard to tell apart. Generally, distributed computing has a broader definition
than grid computing. Grid computing is typically a large group of dispersed
computers working together to accomplish a defined task.
Cloud computing
Cloud computing is also similar in concept to distributed computing. Cloud
computing is a general term for anything that involves delivering hosted
services and computing power over the internet. These services, however,
are divided into three main types: infrastructure as a service, platform as a
service and software as a service. Cloud computing is also divided
into private and public clouds. A public cloud sells services to anyone on the internet,
while a private cloud is a proprietary network that supplies a hosted service
to a limited number of people, with specific access and permissions
settings. Cloud computing aims to provide easy, scalable access to
computing resources and IT services.
For instance, distributed computing is used in the field of genomics to analyze large-
scale DNA sequences. The Human Genome Project, which mapped the entire
human genome, is a prime example of this. The project involved processing and
analyzing vast amounts of genetic data, which was distributed across multiple
machines for faster computation.
3. Financial Services
In the financial services sector, examples of distributed computing are plentiful. Financial
institutions deal with vast amounts of data, from customer transactions to market
data. Processing and analyzing this data in real-time is critical to making informed
decisions.
Distributed computing is used in IoT to manage and process this data. For instance,
it is used in smart home systems to control and monitor various devices, such as
thermostats and security systems. By distributing data and computations across
multiple devices, these systems can operate more efficiently and effectively.
Also, a major difference is that SOAP cannot make use of REST, whereas
REST can make use of SOAP. You can also read about the difference
between REST API and SOAP API.
What is API (Application Programming Interface)
Integration?
API (Application Programming Interface) integration is the connection
between two or more applications, via their APIs, that lets them exchange
data. It is the medium through which applications share data and
communicate with one another. With the rise of cloud-based products, API
integration has become very important.
What is API (Application Programming Interface)
Testing?
API (Application Programming Interface) testing is a kind of software testing
that analyzes an API in terms of its functionality, security, performance,
and reliability. It is very important to test an API to check whether it is
working as expected. If it is not, changes are made to the architecture and it
is re-verified.
APIs are at the center of software development for exchanging data across
applications. API testing involves sending requests to one or more API
endpoints and validating the responses. It focuses mainly on business logic,
data responses, security and performance bottlenecks.
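The request-and-validate loop can be illustrated with a small, self-contained sketch. It stands up a hypothetical /status endpoint (an assumption for illustration, not one of the tools listed below) using only Python's standard library, then sends a request to it and checks the response:

```python
# Sketch of API testing: request an endpoint, validate status and body.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical endpoint under test: GET /status returns JSON.
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output clean

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The test itself: send a request to the endpoint and validate the response.
with urlopen(f"http://127.0.0.1:{server.server_port}/status") as resp:
    status_code = resp.status
    payload = json.loads(resp.read())
server.shutdown()

assert status_code == 200
assert payload == {"status": "ok"}
```

Real tools such as Postman or JMeter automate the same pattern -- send a request, then assert on functionality, security and performance of the response.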
Types of Testing:
Unit Testing
Integration Testing
Security Testing
Performance Testing
Functional Testing
Must Read: API Testing in Software Testing
API Testing Tools:
Postman
Apigee
JMeter
Ping API
Soap UI
vREST
How to Create APIs?
Creating an API is an easy task as long as you are clear on the basic
concepts. It's an iterative process (based on feedback) that involves just a
few easy steps:
Plan your goal and the intended users
Design the API architecture
Develop (Implement the code) and Test API
Monitor its working and work on feedback
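The steps above can be sketched as a minimal, hypothetical "greeting" API (the endpoint, names and behavior are all illustrative assumptions) written as a WSGI application using only Python's standard library:

```python
# Sketch of the create-an-API steps:
# 1. Plan: users want a greeting for a given name.
# 2. Design: GET /greet?name=<name> returns a JSON greeting.
# 3. Develop and test: implement the handler below and call it.
# 4. Monitor: watch logs and user feedback, then iterate.
import json
from urllib.parse import parse_qs

def greet_app(environ, start_response):
    # Parse the query string agreed on in the design step.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    body = json.dumps({"greeting": f"Hello, {name}!"}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

Such an app could be served locally with `wsgiref.simple_server.make_server("127.0.0.1", 8000, greet_app)` while you test it and gather feedback.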
Must Read: Tips for Building an API
Restrictions of Using APIs
When an API (Application Programming Interface) is made, it is not usually
released as software for download. Instead, policies govern its use or
restrict who may use it. There are three main types of policies governing
APIs:
Private: These APIs are made for a single person or entity (such as a
company that has spent the resources to build them or bought them).
Partner: As the name suggests, these APIs can be used only by selected
partners of the entity that owns them.
Public: These APIs are openly available for anyone to use. An example of a
public API is the 'Windows API' by Microsoft; for more public APIs you can
visit this GitHub repository -
> https://github.com/public-apis/public-apis .
Advantages of APIs
Efficiency: APIs produce quicker and more reliable results than outputs
produced manually in an organization.
Flexible delivery of services: API provides fast and flexible delivery of
services according to developers’ requirements.
Integration: The best feature of APIs is that they allow the movement of
data between various sites and thus enhance the integrated user
experience.
Automation: Because APIs let computers rather than humans do the work,
they produce better and more automated results.
New functionality: While using APIs, developers discover new tools and
functionality for API exchanges.
Disadvantages of APIs
Cost: Developing and implementing an API can be costly and may require
significant maintenance and support from developers.
Security issues: Using an API adds another attack surface, so security
risks are a common problem with APIs.
Conclusion
By now, you should have a clear idea of what an API is, how it works, its
types, the testing tools used, and so on. After understanding these
concepts, you can try applying some of them in your own projects.
Theoretical knowledge alone is not enough; you should also get practical
experience by working with APIs. Developers must have a deep
understanding of APIs in order to implement them.
This programmer's reference describes an interface to the transport layer of the Basic
Reference Model of Open Systems Interconnection (OSI). Although the API is capable of
interfacing to proprietary protocols, the Internet open network protocols are the intended
providers of the transport service. This document uses the term "open" to emphasize that
any system conforming to one of these standards can communicate with any other system
conforming to the same standard, regardless of vendor. These protocols are contrasted with
proprietary protocols that generally support a closed community of systems supplied by a
single vendor.
External Data Representation and Marshalling
The information stored in running programs is represented as data structures – for example,
by sets of interconnected objects – whereas the information in messages consists of
sequences of bytes. Irrespective of the form of communication used, the data structures
must be flattened (converted to a sequence of bytes) before transmission and rebuilt on
arrival.
The individual primitive data items transmitted in messages can be data values of many
different types, and not all computers store primitive values such as integers in the same
order. The representation of floating-point numbers also differs between architectures. To
support communication between such systems, any data type that can be passed as an
argument or returned as a result must be able to be flattened, and the individual primitive
data values must be represented in an agreed format.
External data representation– an agreed standard for the representation of data structures
and primitive values
Marshalling– the process of taking a collection of data items and assembling them into a
form suitable for transmission in a message
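As a small illustration of these two definitions (a sketch, not CORBA's CDR or any particular standard), Python's struct module can flatten a structured value into an agreed big-endian external representation and rebuild it on arrival:

```python
# Sketch of marshalling/unmarshalling with an agreed external
# representation: two 32-bit big-endian ("network order") integers.
import struct

def marshal(point):
    # Flatten the data structure into a sequence of bytes.
    x, y = point
    return struct.pack(">ii", x, y)

def unmarshal(data):
    # Rebuild the data structure from the flat byte sequence.
    return tuple(struct.unpack(">ii", data))

message = marshal((3, 7))
assert message == b"\x00\x00\x00\x03\x00\x00\x00\x07"
assert unmarshal(message) == (3, 7)
```

Because both sides agree on the `>ii` format, the receiver rebuilds the same value even if its machine stores integers in a different byte order.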
Group Communication
What is a group?
A number of processes which cooperate to provide a service.
An abstract identity to name a collection of processes.
Client and server communication takes place when both are connected to each
other via a network. The client and the server are two individual computing systems,
each having its own operating system, applications and functions. When connected
via a network, they are able to share their applications with each other.
It is not necessary that the client and server use the same operating system
platform; many varied operating systems can be connected with each other for
communication using a communication protocol. The responsibility for
implementing the communication protocol lies with an application known as
communication software.
Using the features of communication software, the client and server can exchange
files and data for effective communication.
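A minimal sketch of such an exchange, with the standard socket module standing in for the communication software (the loopback address and the ACK message are illustrative assumptions):

```python
# Sketch: a client and server exchanging data over a TCP connection.
import socket
import threading

# The server: bind, listen, accept one client, acknowledge its request.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]       # the address the client must know

def serve_one():
    conn, _ = srv.accept()        # wait for the client's connection
    with conn:
        request = conn.recv(1024)           # receive the client's data
        conn.sendall(b"ACK:" + request)     # reply over the same protocol

threading.Thread(target=serve_one, daemon=True).start()

# The client: connect over the network and exchange data.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024)
srv.close()
print(reply)  # b'ACK:hello'
```

Both ends here happen to run the same platform, but because they agree only on the protocol (TCP and the message format), either side could be replaced by a different operating system or language.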
The sender may not know how many processes receive the message or
their identities; this depends on the implementation of the multicast
mechanism.
Multicast communication can be achieved using a broadcast
mechanism. UDP is an example of a protocol that supports broadcast
directly, but not multicast. In this case, transport layer ports can be used
as a means of group message filtering by arranging that only the subset
of processes that are members of the group listen on the particular port.
The group membership action join group can be implemented locally by
the process binding to the appropriate port and issuing a receive-from
call.
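This port-based group filtering can be sketched as follows; loopback stands in for a real broadcast address so the example is self-contained (with true broadcast you would also set SO_BROADCAST on the sending socket):

```python
# Sketch: "join group" = bind to the agreed port and issue receive-from.
import socket

# A member joins the group by binding to the group's port.
member = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
member.bind(("127.0.0.1", 0))         # port 0: OS picks a demo port
member.settimeout(5)                  # avoid blocking forever in a demo
group_port = member.getsockname()[1]  # the port the group agrees on

# The sender addresses the group's port; only processes bound to this
# port receive the datagram, which is how the filtering works.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"group message", ("127.0.0.1", group_port))

data, _ = member.recvfrom(1024)       # the receive-from call
member.close()
sender.close()
print(data)  # b'group message'
```

A process that has not bound to `group_port` simply never sees the message, which is the filtering the text describes; leaving the group is just closing the socket.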
In both types of multicast communication, that is, directly supported by
the communication protocol or fabricated by using a broadcast
mechanism, there can be multiple groups, and each individual process
can be a member of several different groups. This provides a useful way
to impose some control and structure on the communication at the higher
level of the system or application. For example, the processes concerned
with a particular functionality or service within the system can join a
specific group related to that activity.