CC All
3. Rapid elasticity:
Cloud services should have IT resources that are able to scale out and back in quickly, on an as-needed basis. Resources are provisioned whenever the user requires them and released as soon as the requirement is over.
4. Resource pooling:
The provider's IT resources (e.g., networks, servers, storage, applications, and services) are pooled and shared across multiple applications and tenants in a dynamic, non-dedicated manner. Multiple clients are served from the same physical resources.
5. Measured services:
Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for purposes such as monitoring, billing, and effective use of resources.
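The metering idea behind measured services can be sketched in a few lines of Python. The resource names and per-unit rates below are hypothetical, chosen only to illustrate per-tenant tracking and billing:

```python
from collections import defaultdict

class UsageMeter:
    """Tracks per-tenant resource consumption so that both the provider
    and the user have an account of what has been used."""

    def __init__(self, rates):
        # rates: price per unit of each metered resource (hypothetical values)
        self.rates = rates
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, amount):
        self.usage[tenant][resource] += amount

    def bill(self, tenant):
        # charge = sum over resources of (units used x unit rate)
        return sum(amount * self.rates[resource]
                   for resource, amount in self.usage[tenant].items())

meter = UsageMeter({"cpu_hours": 0.05, "gb_stored": 0.02})
meter.record("tenant-a", "cpu_hours", 100)
meter.record("tenant-a", "gb_stored", 50)
print(meter.bill("tenant-a"))   # 100*0.05 + 50*0.02 = 6.0
```

The same usage record serves both purposes named above: the provider bills from it, and the tenant audits it.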
1. Cost Savings
Cost saving is one of the biggest Cloud Computing benefits. It helps you to save
substantial capital cost as it does not need any physical hardware investments. Also,
you do not need trained personnel to maintain the hardware. The buying and
managing of equipment is done by the cloud service provider.
2. Strategic edge
Cloud computing offers a competitive edge over your competitors. It is one of the
best advantages of Cloud services that helps you to access the latest applications
any time without spending your time and money on installations.
3. High Speed
Cloud computing allows you to deploy your service quickly, in just a few clicks. This faster deployment allows you to get the resources required for your system within a few minutes.
4. Back-up and restore data
Once the data is stored in the cloud, it is easier to back up and restore it, which is otherwise a very time-consuming process on-premises.
5. Automatic Software Integration
In the cloud, software integration is something that occurs automatically. Therefore,
you don’t need to make additional efforts to customize and integrate your applications as per your preferences.
Disadvantages
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
• Application Virtualization:
o Application virtualization allows a user to access an application remotely from a server. The server stores all personal information and other characteristics of the application, yet the application can still run on a local workstation through the internet.
o An example would be a user who needs to run two different versions of the same software. Technologies that use application virtualization include hosted applications and packaged applications.
• Network Virtualization:
o The ability to run multiple virtual networks, each with a separate control and data plane, co-existing on top of one physical network. The virtual networks can be managed by individual parties that do not necessarily trust each other.
o Network virtualization provides a facility to create and provision virtual networks, such as logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security, within days or even weeks.
• Desktop virtualization:
o Desktop virtualization allows the user's OS to be stored remotely on a server in the data centre. Users can access their desktops virtually, from any location and from any machine.
o Users who want specific operating systems other than Windows Server will need a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
• Storage Virtualization:
o Storage virtualization presents an array of servers managed by a virtual storage system. The servers are not aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
• Emulation –
Guest programs are executed within an environment that is controlled by the
virtualization layer, which ultimately is a program. Also, a completely different
environment with respect to the host can be emulated, thus allowing the execution
of guest programs requiring specific characteristics that are not present in the
physical host.
TAXONOMY OF VIRTUALIZATION:
Virtualization techniques can be categorized into different types based on the level of
abstraction and the components they virtualize. Here is a taxonomy of virtualization
techniques:
1. Hardware Virtualization:
• Full Virtualization (Type 1 Hypervisor): This involves running a
hypervisor directly on the hardware to create and manage virtual machines.
Guest operating systems run on top of the hypervisor without modification.
Examples include VMware ESXi and Microsoft Hyper-V.
• Para-virtualization (Type 1 Hypervisor): The guest operating systems are
aware of the virtualization layer, and their kernels are modified to interact
more efficiently with the hypervisor. This approach aims to reduce the
performance overhead associated with full virtualization. Xen is an example
of a para-virtualization hypervisor.
• Hardware-Assisted Virtualization (Type 1 Hypervisor): This type
leverages hardware extensions, such as Intel VT-x or AMD-V, to enhance
virtualization performance. It allows virtual machines to execute certain
instructions directly on the hardware, improving efficiency. Examples include
KVM (Kernel-based Virtual Machine) and Hyper-V with hardware-assisted
virtualization.
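Whether a given machine offers these hardware extensions is advertised through CPU feature flags. A small sketch, assuming a Linux-style /proc/cpuinfo text (the sample string below is made up for illustration):

```python
def supports_hw_virtualization(cpuinfo_text):
    """Return the detected hardware-virtualization extension, if any.
    'vmx' is the Intel VT-x flag and 'svm' the AMD-V flag, as reported
    on the flags line of /proc/cpuinfo on Linux."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

# Illustrative sample; on a real Linux host you would pass
# open("/proc/cpuinfo").read() instead.
sample = "processor : 0\nflags : fpu vme de pse vmx ssse3"
print(supports_hw_virtualization(sample))   # Intel VT-x
```

Hypervisors such as KVM refuse to start hardware-assisted guests when neither flag is present.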
2. Operating System Virtualization (Type 2 Hypervisor):
• Full Virtualization (Type 2 Hypervisor): In this case, a hypervisor runs on
a host operating system, and virtual machines are created and managed
within the host OS. Examples include VMware Workstation and Oracle
VirtualBox.
• Para-virtualization (Type 2 Hypervisor): Similar to the Type 1 para-
virtualization, the guest operating systems are aware of the virtualization
layer, and their kernels are modified to interact more efficiently with the
hypervisor. However, in this case, the hypervisor runs on top of the host
operating system.
3. Application Virtualization:
• Application Layer Virtualization: This technique virtualizes individual
applications, separating them from the underlying operating system.
Applications run in isolated containers, allowing for better portability and
simplified deployment. Docker and Kubernetes are popular examples.
4. Network Virtualization:
• Network Function Virtualization (NFV): NFV involves decoupling network
functions from dedicated hardware and running them as software-based
instances on commodity hardware. This enhances flexibility and scalability in
network management.
• Software-Defined Networking (SDN): SDN separates the control plane from
the data plane in networking, providing a programmable and centralized
approach to network management. It allows for more efficient resource
utilization and dynamic network configuration.
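The control-plane/data-plane split that SDN introduces can be sketched with two tiny classes: a centralized controller that decides routes, and switches that only do flow-table lookups. All names here are illustrative, not any real SDN API:

```python
class Switch:
    """Data plane: forwards packets purely by flow-table lookup."""
    def __init__(self):
        self.flow_table = {}          # destination address -> output port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: a central program that decides routes and
    pushes them down to every switch it manages."""
    def __init__(self, switches):
        self.switches = switches

    def install_route(self, dst, port):
        for sw in self.switches:
            sw.flow_table[dst] = port

sw1, sw2 = Switch(), Switch()
ctrl = Controller([sw1, sw2])
ctrl.install_route("10.0.0.5", "port-2")
print(sw1.forward("10.0.0.5"))   # port-2
print(sw2.forward("10.0.0.9"))   # drop (no rule installed)
```

Reconfiguring the network then means changing one program (the controller) rather than logging into each device.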
5. Storage Virtualization:
• Storage Area Network (SAN) Virtualization: This involves abstracting and
pooling physical storage resources, allowing for centralized management and
better utilization of storage capacity.
• Network-Attached Storage (NAS) Virtualization: Similar to SAN
virtualization, NAS virtualization abstracts and pools storage resources but is
designed for network-attached storage environments.
As we know, cloud computing technology is used by both small and large organizations to store information in the cloud and access it from anywhere, at any time, over an internet connection. Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture. It is divided into the following two parts – 1) Front End 2) Back End.
Front End
The front end is used by the client. It contains the client-side interfaces and applications that are required to access cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet Explorer, etc.), thin & fat clients, tablets, and mobile devices.
Back End
The back end is used by the service provider. It manages all the resources that are required
to provide cloud computing services. It includes a huge amount of data storage, security
mechanism, virtual machines, deploying models, servers, traffic control mechanisms, etc.
2. Application: The application may be any software or platform that a client wants to
access.
3. Service: A cloud service manages which type of service you access according to the client’s requirement. Cloud computing offers the following three types of services: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
4. Runtime Cloud: Runtime Cloud provides the execution and runtime environment to
the virtual machines.
6. Infrastructure: It provides services on the host level, application level, and network
level. Cloud infrastructure includes hardware and software components such as servers,
storage, network devices, virtualization software, and other storage resources that are
needed to support the cloud computing model.
9. Internet: The Internet is the medium through which the front end and back end interact and communicate with each other.
IAAS:
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud computing platform. It allows customers to outsource their IT infrastructure such as servers, networking, processing, storage, virtual machines, and other resources.
Customers access these resources on the Internet using a pay-as-per use model. In
traditional hosting services, IT infrastructure was rented out for a specific period of time,
with pre-determined hardware configuration. The client paid for the configuration and
time, regardless of the actual use. With the help of the IaaS cloud computing platform
layer, clients can dynamically scale the configuration to meet changing requirements and
are billed only for the services actually used. IaaS cloud computing platform layer
eliminates the need for every organization to maintain its own IT infrastructure. IaaS is offered in three models: public, private, and hybrid cloud. A private cloud implies that the infrastructure resides on the customer's premises. In the case of a public cloud, it is located at the cloud computing platform vendor's data center, and a hybrid cloud is a combination of the two in which the customer selects the best of both public and private clouds.
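The billing difference between traditional hosting and IaaS can be made concrete with a small sketch. The rates below are hypothetical:

```python
def fixed_rental_cost(monthly_rate, months):
    """Traditional hosting: pay for the configured period,
    regardless of actual use."""
    return monthly_rate * months

def pay_per_use_cost(hourly_rate, hours_used):
    """IaaS: billed only for the hours the resources actually ran."""
    return hourly_rate * hours_used

# Hypothetical rates: a server rented at $200/month vs. the same
# capacity on IaaS at $0.25/hour, used 8 hours a day for 22 days.
print(fixed_rental_cost(200, 1))        # 200
print(pay_per_use_cost(0.25, 8 * 22))   # 44.0
```

For bursty or part-time workloads the pay-per-use bill is much lower; for machines that run flat out around the clock, the comparison can go the other way.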
SAAS:
IaaS vs. PaaS vs. SaaS:
• IaaS provides a virtual data center to store information and create platforms for app development, testing, and deployment.
• PaaS provides virtual platforms and tools to create, test, and deploy apps.
• SaaS provides web software and apps to complete business tasks.
TYPES OF CLOUD:
Public cloud: Public clouds are managed by third parties which provide cloud services
over the internet to the public, these services are available as pay-as-you-go billing models.
They offer solutions for minimizing IT infrastructure costs and become a good option for
handling peak loads on the local infrastructure. Public clouds are the go-to option for
small enterprises, which can start their businesses without large upfront investments by
completely relying on public infrastructure for their IT needs. The fundamental characteristic of public clouds is multitenancy. A public cloud is meant to serve multiple users, not a single customer. Each user requires a virtual computing environment that is separated, and most likely isolated, from other users.
Private cloud: Private clouds are distributed systems that work on private infrastructure and provide users with dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may use other schemes that manage cloud usage and bill the different departments or sections of an enterprise proportionally. Private cloud providers include HP Data Centers, Ubuntu, Elastic-Private cloud, Microsoft, etc.
Open challenges
1. Data security and privacy: Data security is a major concern when switching to cloud
computing. User or organizational data stored in the cloud is critical and private. Even if
the cloud service provider assures data integrity, it is your responsibility to carry out user
authentication and authorization, identity management, data encryption, and access
control. Security issues on the cloud include identity theft, data breaches, malware
infections, and a lot more which eventually decrease the trust amongst the users of your
applications.
2. Cost management: Even though almost all cloud service providers have a “Pay As You Go” model, which reduces the overall cost of the resources being used, there are times when an enterprise using cloud computing incurs huge costs. Under-optimized resources, say servers that are not being used to their full potential, add up to hidden costs. Degraded application performance and sudden spikes or overages in usage also add to the overall cost. Unused resources are one of the other main reasons why costs go up.
7. Lack of knowledge and expertise: Due to its complex nature and the high demand for expertise, working with the cloud often ends up being a highly tedious task. It requires immense knowledge and wide expertise on the subject. Although there are a lot of professionals in the field, they need to constantly update themselves. Cloud computing is
a highly paid job due to the extensive gap between demand and supply. There are a lot of
vacancies but very few talented cloud engineers, developers, and professionals. Therefore,
there is a need for upskilling so these professionals can actively understand, manage and
develop cloud-based applications with minimum issues and maximum reliability.
Unit 3:
Xaas:
XaaS is an acronym that stands for "Anything as a Service". It refers to the delivery of
various services, such as software, infrastructure, and platform services, over the internet,
on a subscription basis. The idea behind XaaS is to allow organizations to access the
services they need, when they need them, without having to make a large upfront
investment in hardware, software, or IT infrastructure.
Storage as a service:
✓ Storage as a Service (SaaS) is a cloud business model in which a company leases or
rents its storage infrastructure to another company or individuals to store data.
✓ Small companies and individuals often find this to be a convenient methodology for
managing backups, and providing cost savings in personnel, hardware and physical
space.
✓ As an alternative to storing magnetic tapes offsite in a vault, IT administrators are meeting their storage and backup needs through Service Level Agreements (SLAs) with a SaaS provider, usually on a cost-per-gigabyte-stored and cost-per-data-transferred basis. The client transfers the data meant for storage to the service provider on a set schedule over the SaaS provider's wide area network or over the Internet.
✓ The storage provider supplies the client with the software required to access their stored data. Clients use the software to perform standard storage tasks, including data transfers and data backups. Corrupted or lost company data can easily be restored.
Process as a service:
✓ Business Process as a Service (BPaaS) is a cloud-based delivery model where a service
provider manages the operations and processes of a business organization on behalf
of the client.
✓ This includes activities such as HR management, financial management, customer
service, and other business operations.
✓ The service provider uses its technology and expertise to automate, streamline, and
manage these processes, freeing up the client's time and resources to focus on other
areas of the business.
✓ The BPaaS model allows businesses to outsource specific business processes,
reducing the cost and effort associated with managing them in-house, and
improving overall efficiency and productivity.
Database as a service:
✓ Database as a Service (DBaaS) is a cloud business model in which a company leases
or rents Database services to another company or individuals to store their data.
✓ Database-as-a-Service (DBaaS) is the fastest growing cloud service.
✓ The term "Database-as-a-Service" (DBaaS) refers to software that enables users to
provision, manage, consume, configure, and operate database software using a
common set of abstractions (primitives), without having to know or care
about the exact implementations of those abstractions for the specific database
software.
✓ Database-as-a-Service (DBaaS) is a cloud computing service model in which a third-
party provider hosts and manages the infrastructure and maintenance of a
customer's database. This eliminates the need for the customer to manage their own
hardware and software, freeing up resources and reducing costs.
✓ The provider manages the database servers, backup and recovery, and security,
allowing customers to access and use their data through APIs or web interfaces.
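The "common set of abstractions" idea can be sketched with a toy service that provisions and queries databases through a couple of primitives. Here the stdlib sqlite3 module merely stands in for the provider's managed database servers; real DBaaS offerings expose similar operations through REST APIs:

```python
import sqlite3

class TinyDBaaS:
    """Toy DBaaS illustration: tenants provision and use databases
    through generic primitives, never touching the underlying engine."""

    def __init__(self):
        self.instances = {}

    def provision(self, name):
        # A real provider would allocate servers, backups, security, etc.
        self.instances[name] = sqlite3.connect(":memory:")

    def execute(self, name, sql, params=()):
        conn = self.instances[name]
        cur = conn.execute(sql, params)
        rows = cur.fetchall()   # [] for non-SELECT statements
        conn.commit()
        return rows

svc = TinyDBaaS()
svc.provision("crm")
svc.execute("crm", "CREATE TABLE customers (id INTEGER, name TEXT)")
svc.execute("crm", "INSERT INTO customers VALUES (?, ?)", (1, "Acme"))
print(svc.execute("crm", "SELECT name FROM customers"))   # [('Acme',)]
```

The customer only ever sees `provision` and `execute`; which engine, host, or backup scheme sits behind them is the provider's concern.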
Information as a service:
✓ Information as a Service (INaaS) is a model where information or data is provided as a service over the internet. This service provides access to a wide range of information such as news, weather, financial data, and market trends, among others. INaaS offers a centralized platform for users to access and manage the information they need, without the need for software or hardware installations.
✓ INaaS is designed to be flexible, scalable, and cost-effective, making it a popular choice for organizations of all sizes. The data is stored in cloud-based servers and accessed through a secure web portal. The service providers are responsible for managing and maintaining the infrastructure, security, and performance of the platform.
✓ With INaaS, organizations can reduce their costs associated with data management and gain access to the latest information in real time. This can help them make more informed decisions and stay ahead of their competitors. The services can be customized based on the specific needs of the organizations, and the data can be accessed from anywhere, at any time, through any device with internet access.
Integration as a service
✓ Integration as a Service (iPaaS) in the cloud refers to a cloud-based platform
for integrating different applications, services, and data sources. It provides
a centralized solution for connecting, integrating, and managing various
systems, which helps organizations automate business processes and
streamline data exchange across the enterprise. iPaaS enables users to build,
deploy, and manage integrations without the need for extensive technical
expertise or on-premise infrastructure.
✓ Some common use cases for iPaaS in the cloud include data migration, data
integration, system integration, application integration, and business process
automation. iPaaS solutions typically provide a range of connectivity options,
including APIs, pre-built connectors, and custom code options. Additionally,
many iPaaS solutions offer monitoring, management, and security features
that help organizations manage and secure their integration environments.
Testing as a service
✓ Testing as a Service is an outsourcing model, in which testing activities are
outsourced to a third party that specializes in simulating real world testing
environments as per client requirements. It is also abbreviated as TaaS.
Types of TaaS
Functional Testing.
Performance Testing.
Security Testing.
✓ Functional Testing as a Service
o TaaS functional testing may include UI/GUI testing, regression, integration, and automated User Acceptance Testing (UAT), though these need not all be part of functional testing.
✓ Performance Testing as a Service
o Multiple users access the application at the same time. TaaS mimics a real-world user environment by creating virtual users and performing load and stress tests.
✓ Security Testing as a Service
o TaaS scans applications and websites for any vulnerabilities.
Features of TaaS
✓ Self-service portal for running functional and load tests against an application.
✓ Test library with full security controls that keeps all the test assets available to end users.
✓ Sharing of cloud hardware to maximize hardware utilization.
✓ On-demand availability of complete test labs, including the ability to deploy complex multi-tier applications, test scripts, and test tools.
✓ Monitoring of the application under test to detect bottlenecks and solve problems.
✓ Metering capabilities that allow tracking of, and charging for, the services used by the customer.
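The virtual-user idea behind performance testing as a service can be sketched with threads: each thread plays one virtual user and records response times for the application under test (`app` below is a stand-in for a real application):

```python
import threading
import time

def load_test(target, virtual_users, requests_each):
    """Crude sketch of TaaS-style performance testing: spawn virtual
    users as threads, have each call the application under test, and
    record every response time."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_each):
            start = time.perf_counter()
            target()
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

def app():
    # Stand-in for the application under test.
    time.sleep(0.001)

results = load_test(app, virtual_users=5, requests_each=4)
print(len(results))            # 20 samples, one per simulated request
print(max(results))            # worst-case observed latency
```

A TaaS provider does the same at far larger scale, distributing the virtual users across cloud hardware and reporting bottlenecks back to the customer.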
Scaling of Cloud:
Vertical Scaling
To understand vertical scaling, imagine a 20-story hotel. There are innumerable rooms
inside this hotel from where the guests keep coming and going. Often there are spaces
available, as not all rooms are filled at once. People can move easily as there is space for
them. As long as the capacity of this hotel is not exceeded, no problem. This is vertical
scaling. With computing, you can add or subtract resources, including memory or storage,
within the server, as long as the resources do not exceed the capacity of the machine.
Although it has its limitations, it is a way to improve your server and avoid latency and
extra management. Like in the hotel example, resources can come and go easily and
quickly, as long as there is room for them.
Horizontal Scaling
Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars travel
smoothly in each direction without major traffic problems. But then the area around the
highway develops - new buildings are built, and traffic increases. Very soon, this two-lane
highway is filled with cars, and accidents become common. Two lanes are no longer
enough. To avoid these issues, more lanes are added, and an overpass is constructed.
Although it takes a long time, it solves the problem. Horizontal scaling refers to adding
more servers to your network, rather than simply adding resources like with vertical
scaling. This method tends to take more time and is more complex, but it allows you to
connect servers together, handle traffic efficiently and execute concurrent workloads.
Diagonal Scaling
Diagonal scaling is a mixture of both horizontal and vertical scalability, where resources are added both vertically and horizontally, allowing for the most efficient infrastructure scaling. When you combine vertical and horizontal, you simply grow within your existing server until you hit its capacity. Then you can clone that server as necessary and continue the process, allowing you to deal with a lot of requests and traffic concurrently.
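The diagonal strategy (grow vertically until the machine's capacity is hit, then clone servers horizontally) can be sketched as follows; capacities are in arbitrary units, and the 64-unit ceiling is hypothetical:

```python
MAX_PER_SERVER = 64  # hypothetical per-machine capacity ceiling

def scale_up(servers, extra_needed):
    """Diagonal scaling sketch: 'servers' is a list of per-server
    capacities in use. Grow the first server vertically up to the
    machine's limit, then add whole new servers for the rest."""
    headroom = MAX_PER_SERVER - servers[0]
    grown = min(extra_needed, headroom)
    servers[0] += grown                  # vertical step
    remaining = extra_needed - grown
    while remaining > 0:                 # horizontal steps
        size = min(remaining, MAX_PER_SERVER)
        servers.append(size)
        remaining -= size
    return servers

print(scale_up([16], 32))   # [48] -> fits in one box, vertical only
print(scale_up([48], 40))   # [64, 24] -> vertical to the cap, then horizontal
```

Vertical growth is cheap and fast while there is headroom; only once the single machine is full does the sketch pay the extra complexity of adding servers.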
Unit 4
1. Application Model:
• Components: Applications in Aneka are composed of tasks, which are the
basic units of computation. Tasks are submitted to the Aneka framework for
execution.
2. Task Execution Environment:
• Components: Aneka provides a runtime environment for executing tasks,
including libraries, APIs, and execution policies.
• Responsibilities: Manages the execution of tasks, ensuring parallelism and
distributed computing.
3. Resource Manager:
• Components: Aneka includes a Resource Manager responsible for
managing and allocating resources across the cloud infrastructure.
• Responsibilities: Monitors the availability and performance of resources,
dynamically allocating resources based on application requirements.
4. Communication Middleware:
• Components: A communication middleware facilitates communication
between tasks and resources in a distributed environment.
• Responsibilities: Ensures efficient and reliable communication among
tasks and between different nodes in the Aneka framework.
5. Task Scheduling and Load Balancing:
• Components: Aneka incorporates algorithms for task scheduling and load
balancing.
• Responsibilities: Optimizes the distribution of tasks across available
resources to maximize efficiency and minimize execution time.
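A minimal sketch of the load-balancing idea, not Aneka's actual scheduler: each incoming task goes to the node with the least accumulated work. Node names and task costs are made up:

```python
def schedule(tasks, nodes):
    """Greedy load balancing: hand each task to the node with the
    least accumulated work so far."""
    load = {n: 0 for n in nodes}
    placement = []
    for name, cost in tasks:
        target = min(load, key=load.get)   # least-loaded node
        placement.append((name, target))
        load[target] += cost
    return placement, load

tasks = [("t1", 4), ("t2", 2), ("t3", 3), ("t4", 1)]
placement, load = schedule(tasks, ["worker-1", "worker-2"])
print(load)   # {'worker-1': 5, 'worker-2': 5}
```

Real schedulers also weigh node speed, data locality, and deadlines, but the objective is the same: spread work so no node sits idle while another is saturated.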
6. Service-Oriented Architecture (SOA):
• Components: Aneka is designed with a service-oriented architecture,
allowing developers to expose their applications as services.
• Responsibilities: Enables the development of scalable and modular
applications using a service-oriented approach.
7. Elasticity and Auto-Scaling:
• Components: Aneka supports elasticity and auto-scaling features.
• Responsibilities: Allows the Aneka framework to dynamically scale
resources up or down based on demand, optimizing resource utilization.
8. Security:
• Components: Security features are integrated to ensure the protection of
data and resources.
• Responsibilities: Manages access control, authentication, and encryption
to secure the execution environment.
9. Cloud Integration:
• Components: Aneka can integrate with various cloud providers and
platforms.
• Responsibilities: Facilitates the deployment of applications on public,
private, or hybrid cloud environments.
10. Developer Tools:
• Components: Aneka provides tools and APIs for developers to build, deploy,
and manage applications.
• Responsibilities: Aids developers in creating parallel and distributed
applications efficiently.
1. Fabric Services:
Fabric Services define the lowest level of the software stack, which represents the container. They provide access to the resource-provisioning subsystem and to the monitoring features implemented in Aneka.
2. Foundation Services:
Foundation Services are the core services of the Aneka Cloud and define the infrastructure management features of the system. They are concerned with the logical management of a distributed system built on top of the infrastructure and provide ancillary services for delivering applications.
3. Application Services:
Application services manage the execution of applications and constitute a layer that
varies according to the specific programming model used to develop distributed
applications on top of Aneka.
Logical organization of aneka:
• The logical organization of Aneka Clouds can be very diverse, since it strongly
depends on the configuration selected for each of the container instances belonging
to the Cloud. The most common scenario is to use a master-worker configuration
with separate nodes for storage.
• The master node features all the services that are most likely to be present in one
single copy and that provide the intelligence of the Aneka Cloud.
• A common configuration of the master node is as follows:
o Index Service (master copy)
o Heartbeat Service
o Logging Service
o Reservation Service
o Resource Provisioning Service
o Accounting Service
o Reporting and Monitoring Service
• The master node also provides connection to an RDBMS facility where the state
of several services is maintained. For the same reason, all the scheduling services
are maintained in the master node. They share the application store that is
normally persisted on the RDBMS in order to provide a fault-tolerant
infrastructure.
• The worker nodes constitute the workforce of the Aneka Cloud and are generally
configured for the execution of applications.
• Storage nodes are optimized to provide storage support to applications. They
feature, among the mandatory and usual services, the presence of the Storage
Service. The number of storage nodes strictly depends on the predicted workload
and storage consumption of applications. Storage nodes mostly reside on machines
that have considerable disk space to accommodate a large quantity of files.
Infrastructure Organization:
Infrastructure organization refers to the structured management and
arrangement of physical and virtual components that make up the underlying
foundation of an IT environment. This organization is crucial for ensuring
that IT resources are efficiently utilized, maintained, and optimized to
support the overall goals and operations of an organization
Private cloud deployment mode: A private deployment mode is mostly constituted by local physical resources and infrastructure management software providing access to a local pool of nodes, which might be virtualized. In this scenario Aneka Clouds are created by harnessing a heterogeneous pool of resources such as desktop machines, clusters, or workstations. These resources can be partitioned into different groups, and Aneka can be configured to leverage these resources according to application needs. Moreover, leveraging the Resource Provisioning Service, it is possible to integrate virtual nodes provisioned from a local resource pool managed by systems such as XenServer, Eucalyptus, and OpenStack.
Public cloud deployment mode: Public Cloud deployment mode features
the installation of Aneka master and worker nodes over a completely
virtualized infrastructure that is hosted on the infrastructure of one or more
resource providers such as Amazon EC2 or GoGrid. In this case it is possible
to have a static deployment where the nodes are provisioned beforehand and
used as though they were real machines. This deployment merely replicates
a classic Aneka installation on a physical infrastructure without any dynamic
provisioning capability. More interesting is the use of the elastic features of
IaaS providers and the creation of a Cloud that is completely dynamic.
Hybrid cloud deployment mode: The hybrid deployment model constitutes
the most common deployment of Aneka. In many cases, there is an existing
computing infrastructure that can be leveraged to address the computing
needs of applications. This infrastructure will constitute the static
deployment of Aneka that can be elastically scaled on demand when
additional resources are required. This scenario constitutes the most
complete deployment for Aneka that is able to leverage all the capabilities of
the framework:
• Dynamic Resource Provisioning
• Resource Reservation
• Workload Partitioning
• Accounting, Monitoring, and Reporting
AWS : Amazon Web Services (AWS) is a cloud computing platform that provides a
wide range of services to help organizations and businesses build, deploy, and
manage applications and services in the cloud. It is one of the leading cloud
computing platforms and is used by many organizations and businesses worldwide.
With AWS, you only pay for the services you use, and you can easily scale up or
down as your needs change. Additionally, AWS provides a highly secure and
reliable infrastructure, with multiple layers of security and compliance built in.
AWS offers a variety of services, including compute, storage, databases,
networking, security, analytics, machine learning, mobile, and application services.
Amazon Web Services offers a wide range of global cloud-based products for different business purposes. The products include storage, databases, analytics, networking, mobile, development tools, and enterprise applications, with a pay-as-you-go pricing model.
AWS Compute Services
Here are the cloud compute services offered by Amazon:
1. EC2 (Elastic Compute Cloud)- EC2 is a virtual machine in the cloud over which you have OS-level control. You can run this cloud server whenever you want.
2. LightSail- This cloud computing tool automatically deploys and manages the
computer, storage, and networking capabilities required to run your applications.
3. AWS Lambda- This AWS service allows you to run functions in the cloud. The tool is a big cost saver because you pay only when your functions execute.
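The pay-only-when-it-runs model revolves around a handler function. The sketch below follows the AWS Lambda Python handler signature (event, context); the event contents are hypothetical, and locally we can simply invoke the handler directly:

```python
def handler(event, context):
    """Lambda-style Python handler: the service invokes this function
    with the triggering event, and billing covers only the time it runs.
    'context' carries runtime metadata and is unused in this sketch."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally we can call it the same way Lambda would on an invocation:
print(handler({"name": "cloud"}, None))
# {'statusCode': 200, 'body': 'Hello, cloud!'}
```

In a real deployment the event would come from a trigger such as an API gateway request or a storage upload, and AWS would supply the context object.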
Storage services:
1. Amazon Glacier- It is an extremely low-cost storage service. It offers secure and
fast storage for data archiving and backup.
2. Amazon Elastic Block Store (EBS)- It provides block-level storage to use with
Amazon EC2 instances. Amazon Elastic Block Store volumes are network-attached
and remain independent from the life of an instance.
3. AWS Storage Gateway- This AWS service connects on-premises software applications with cloud-based storage. It offers secure integration between the company’s on-premises and AWS’s storage infrastructure.
Database Services:
1. Amazon RDS- This database AWS service makes it easy to set up, operate, and scale a relational database in the cloud.
2. Amazon DynamoDB- It is a fast, fully managed NoSQL database service. It is a simple service which allows cost-effective storage and retrieval of data. It also allows you to serve any level of request traffic.
Advantages of AWS:
1. Cost Savings: Pay-as-you-go pricing model, allowing you to only pay for the
resources you use, rather than investing in expensive hardware.
2. Scalability: The ability to quickly and easily scale up or down as your computing
needs change.
3. Reliability: Built on a global infrastructure with multiple data centers and
redundant systems, ensuring high availability and reliability.
4. Flexibility: A wide range of services, making it easy to integrate with existing
systems and workflows.
What is ERP?
ERP is an abbreviation for Enterprise Resource Planning. It is software, similar to CRM, that is hosted on cloud servers and helps enterprises manage and manipulate their business data as per their needs and user requirements. ERP software follows a pay-per-use methodology of payment; that is, at the end of the month, the enterprise pays an amount based on the cloud resources it utilized. There are various ERP vendors available, such as Oracle, SAP, Epicor, SAGE, Microsoft Dynamics, Lawson Software, and many more.