UNIT 5 CLOUD COMPUTING ARCHITECTURE
Preetha V, AP/CSE, SRIT
UNIT 5 SYLLABUS
Fundamental Cloud Architectures: Workload Distribution Architecture - Resource Pooling Architecture - Dynamic Scalability Architecture - Elastic Resource Capacity Architecture - Service Load Balancing Architecture. Advanced Cloud Architectures: Hypervisor Clustering Architecture - Load Balanced Virtual Server Instances Architecture - Non-Disruptive Service Relocation Architecture - Zero Downtime Architecture - Cloud Balancing Architecture - Case Study.
UNIT OUTCOME
Cloud technology architectures formalize functional domains within cloud environments by establishing well-defined solutions composed of interactions, behaviors, and distinct combinations of cloud computing mechanisms and other specialized cloud technology components.
FUNDAMENTAL CLOUD ARCHITECTURES
WORKLOAD DISTRIBUTION ARCHITECTURE
• IT resources can be horizontally scaled via the addition of one or more identical IT resources, and a load balancer that provides runtime logic capable of evenly distributing the workload among the available IT resources.
• The resulting workload distribution architecture reduces both IT resource overutilization and underutilization to an extent dependent upon the sophistication of the load balancing algorithms and runtime logic.
This fundamental architectural model can be applied to any IT resource, with workload distribution commonly carried out in support of distributed virtual servers, cloud storage devices, and cloud services.
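To make the runtime logic concrete, the following is a minimal Python sketch of a round-robin load balancer distributing requests across identical virtual servers; the class and server names are illustrative, and production load balancers typically apply more sophisticated algorithms (least-connections, weighted distribution, health-aware routing).

from itertools import cycle

class RoundRobinLoadBalancer:
    """Evenly distributes incoming requests across identical IT resources."""

    def __init__(self, servers):
        self._servers = list(servers)
        self._rotation = cycle(self._servers)  # endless round-robin iterator

    def route(self, request):
        server = next(self._rotation)  # pick the next server in turn
        return server

# Usage: three identical virtual servers take turns handling requests.
lb = RoundRobinLoadBalancer(["vs-1", "vs-2", "vs-3"])
for i in range(6):
    print(lb.route(f"request-{i}"))  # vs-1, vs-2, vs-3, vs-1, vs-2, vs-3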
• In addition to the base load balancer mechanism, and the virtual server and cloud storage device mechanisms to which load balancing can be applied, the following mechanisms can also be part of this cloud architecture:
• Audit Monitor – When distributing runtime workloads, the type and geographical location of the IT resources that process the data can determine whether monitoring is necessary to fulfill legal and regulatory requirements.
• Cloud Usage Monitor – Various monitors can be involved to carry out runtime workload tracking and data processing.
• Hypervisor – Workloads between hypervisors and the virtual servers that they host may require distribution.
• Logical Network Perimeter – The logical network perimeter isolates cloud consumer network boundaries in relation to how and where workloads are distributed.
• Resource Cluster – Clustered IT resources in active/active mode are commonly used to support workload balancing between different cluster nodes.
• Resource Replication – This mechanism can generate new instances of virtualized IT resources in response to runtime workload distribution demands.
Resource Pooling Architecture
• A resource pooling architecture is based on the use of one or more resource pools, in which identical IT resources are grouped and maintained by a system that automatically ensures that they remain synchronized.
• Dedicated pools can be created for each type of IT resource, and individual pools can be grouped into a larger pool, in which case each individual pool becomes a sub-pool.
Provided here are common examples of resource pools:
• Physical server pools are composed of networked servers that have been installed with operating systems and other necessary programs and/or applications and are ready for immediate use.
• Virtual server pools are usually configured using one of several available templates chosen by the cloud consumer during provisioning. For example, a cloud consumer can set up a pool of mid-tier Windows servers with 4 GB of RAM or a pool of low-tier Ubuntu servers with 2 GB of RAM.
• Storage pools, or cloud storage device pools, consist of file-based or block-based storage structures that contain empty and/or filled cloud storage devices.
• Network pools (or interconnect pools) are composed of different preconfigured network connectivity devices. For example, a pool of virtual firewall devices or physical network switches can be created for redundant connectivity, load balancing, or link aggregation.
• CPU pools are ready to be allocated to virtual servers, and are typically broken down into individual processing cores.
• Memory pools consist of physical RAM that can be used in newly provisioned physical servers or to vertically scale physical servers.
Resource pools can become highly complex, with multiple pools created for specific cloud consumers or applications. A hierarchical structure can be established to form parent, sibling, and nested pools in order to facilitate the organization of diverse resource pooling requirements.
In the nested pool model, larger pools are divided into smaller pools that individually group the same type of IT resources together. Nested pools can be used to assign resource pools to different departments or groups in the same cloud consumer organization.
• In addition to cloud storage devices and virtual servers, which are commonly pooled mechanisms, the following mechanisms can also be part of this cloud architecture: Audit Monitor, Cloud Usage Monitor, Hypervisor, Logical Network Perimeter, Pay-Per-Use Monitor, Remote Administration System, Resource Management System, and Resource Replication.
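A minimal Python sketch of the parent/nested pool idea is shown below; the pool contents and department names are hypothetical, and a real resource management system would also handle synchronization, quotas, and reclamation.

class ResourcePool:
    """A named group of identical IT resources; pools may contain nested sub-pools."""

    def __init__(self, name, resources=None):
        self.name = name
        self.resources = list(resources or [])
        self.sub_pools = {}

    def create_sub_pool(self, name, count):
        """Carve `count` resources out of this pool into a nested sub-pool."""
        if count > len(self.resources):
            raise ValueError("parent pool does not have enough free resources")
        carved, self.resources = self.resources[:count], self.resources[count:]
        self.sub_pools[name] = ResourcePool(name, carved)
        return self.sub_pools[name]

    def allocate(self):
        """Hand out one resource; raise if the pool is exhausted."""
        if not self.resources:
            raise RuntimeError(f"pool '{self.name}' is exhausted")
        return self.resources.pop()

# Usage: a parent pool of virtual servers split into per-department nested pools.
parent = ResourcePool("virtual-servers", [f"vs-{i}" for i in range(8)])
hr = parent.create_sub_pool("hr", 3)
finance = parent.create_sub_pool("finance", 3)
print(hr.allocate())          # 'vs-2' – one server taken from the hr sub-pool
print(len(parent.resources))  # 2 resources still unassigned in the parent pool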
Dynamic Scalability Architecture
• The dynamic scalability architecture is an architectural model based on a system of predefined scaling conditions that trigger the dynamic allocation of IT resources from resource pools.
• The automated scaling listener is configured with workload thresholds that dictate when new IT resources need to be added to the workload processing.
• Dynamic allocation enables variable utilization as dictated by usage demand fluctuations, since unnecessary IT resources are efficiently reclaimed without requiring manual interaction.
• Other mechanisms used in this architecture include the Cloud Usage Monitor, Hypervisor, and Pay-Per-Use Monitor.
The following types of dynamic scaling are commonly used:
• Dynamic Horizontal Scaling – IT resource instances are scaled out and in to handle fluctuating workloads. The automated scaling listener monitors requests and signals resource replication to initiate IT resource duplication, as per requirements and permissions.
• Dynamic Vertical Scaling – IT resource instances are scaled up and down when there is a need to adjust the processing capacity of a single IT resource.
• Dynamic Relocation – The IT resource is relocated to a host with more capacity. For example, a database may need to be moved from a tape-based SAN storage device with 4 GB per second I/O capacity to a disk-based SAN storage device with 8 GB per second I/O capacity.
Process of Dynamic Horizontal Scaling (figure)
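A minimal sketch of the automated scaling listener's decision logic is given below, assuming hypothetical threshold values and a simple requests-per-instance metric; the real listener is a platform-supplied service agent, and the mechanics of adding or removing instances are handled by resource replication.

class AutomatedScalingListener:
    """Compares the current workload against predefined thresholds and decides
    whether instances should be added (scale out) or removed (scale in)."""

    def __init__(self, scale_out_at, scale_in_at, min_instances=1, max_instances=10):
        self.scale_out_at = scale_out_at  # load per instance above which capacity is added
        self.scale_in_at = scale_in_at    # load per instance below which capacity is reclaimed
        self.min_instances = min_instances
        self.max_instances = max_instances

    def evaluate(self, requests_per_instance, current_instances):
        if requests_per_instance > self.scale_out_at and current_instances < self.max_instances:
            return "scale_out"   # signal resource replication to add an instance
        if requests_per_instance < self.scale_in_at and current_instances > self.min_instances:
            return "scale_in"    # reclaim an unnecessary instance
        return "steady"

# Usage: 120 requests per instance exceeds the 100-request threshold, so scale out.
listener = AutomatedScalingListener(scale_out_at=100, scale_in_at=30)
print(listener.evaluate(requests_per_instance=120, current_instances=2))  # 'scale_out'
print(listener.evaluate(requests_per_instance=20, current_instances=2))   # 'scale_in'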
Elastic Resource Capacity Architecture
• The elastic resource capacity architecture is primarily related to the dynamic provisioning of virtual servers (resources are deployed flexibly to match a customer's fluctuating demands), using a system that allocates and reclaims CPUs and RAM in immediate response to the fluctuating processing requirements of hosted IT resources.
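A minimal sketch of this vertical scaling behavior follows, using a hypothetical VIM client whose resize method is an assumption rather than a real product API; the utilization thresholds are likewise illustrative.

class HypotheticalVIMClient:
    """Stand-in for a virtual infrastructure manager (VIM) API; the method name
    and signature are illustrative, not taken from any specific product."""

    def resize_virtual_server(self, server_id, vcpus, ram_gb):
        print(f"{server_id}: resized to {vcpus} vCPUs / {ram_gb} GB RAM")

def adjust_capacity(vim, server_id, cpu_utilization, current_vcpus, current_ram_gb):
    """Vertically scale a virtual server in response to its measured CPU load."""
    if cpu_utilization > 0.85:
        # allocate additional CPU and RAM from the resource pool
        vim.resize_virtual_server(server_id, current_vcpus * 2, current_ram_gb * 2)
    elif cpu_utilization < 0.25 and current_vcpus > 1:
        # reclaim capacity that is no longer needed
        vim.resize_virtual_server(server_id, max(1, current_vcpus // 2), max(1, current_ram_gb // 2))

# Usage: a heavily loaded server is scaled up from 2 vCPUs / 4 GB to 4 vCPUs / 8 GB.
adjust_capacity(HypotheticalVIMClient(), "vs-42", cpu_utilization=0.92,
                current_vcpus=2, current_ram_gb=4)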
• Resource pools are used by scaling technology that interacts with the hypervisor and/or VIM to retrieve and return CPU and RAM resources at runtime.
• The virtual server and its hosted applications and IT resources are vertically scaled in response.
• This type of cloud architecture can be designed so that the intelligent automation engine script sends its scaling request via the VIM instead of to the hypervisor directly.
• Additional mechanisms that can be included are the Cloud Usage Monitor, Pay-Per-Use Monitor, and Resource Replication.
Service Load Balancing Architecture
• The service load balancing architecture can be considered a specialized variation of the workload distribution architecture that is geared specifically for scaling cloud service implementations.
• Redundant deployments of cloud services are created, with a load balancing system added to dynamically distribute workloads.
• The duplicate cloud service implementations are organized into a resource pool, while the load balancer is positioned as either an external or built-in component, allowing the host servers to balance the workloads themselves.
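As with the workload distribution architecture, the balancing logic can be sketched compactly; the following hypothetical Python snippet routes requests to the least-loaded of several redundant cloud service implementations (the instance names and policy are illustrative, and whether the balancer runs externally or as a built-in component of the host servers is a deployment choice).

class ServiceLoadBalancer:
    """Routes service requests to the redundant implementation with the fewest
    active requests (a simple least-loaded policy)."""

    def __init__(self, service_instances):
        # track the number of in-flight requests per cloud service implementation
        self.active = {instance: 0 for instance in service_instances}

    def dispatch(self, request):
        instance = min(self.active, key=self.active.get)  # least-loaded instance
        self.active[instance] += 1
        return instance

    def complete(self, instance):
        self.active[instance] -= 1  # request finished; free that slot

# Usage: two redundant implementations of the same cloud service share the workload.
slb = ServiceLoadBalancer(["service-a.example", "service-b.example"])
print(slb.dispatch("req-1"))  # service-a.example
print(slb.dispatch("req-2"))  # service-b.example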