Lecture 2: Cloud Computing Platforms and Infrastructure
The document provides an overview of major cloud service providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), highlighting their strengths, services, and ideal use cases. It discusses cloud architecture, resource management, load balancing, auto-scaling, and networking concepts essential for efficient cloud operations. Additionally, it touches on emerging technologies like edge and fog computing that enhance real-time data processing capabilities.
Cloud Computing Platforms and Infrastructure
Cloud Computing, Spring 2025
2.1 Overview of Major Cloud Service Providers
• Cloud computing has revolutionized the way organizations deploy, manage, and scale their IT infrastructure.
• The three dominant players in the cloud computing market are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
• Each of these providers offers a comprehensive suite of services, but they differ in their strengths, pricing models, and target audiences.

Amazon Web Services (AWS)
AWS, launched in 2006, is the most mature and widely adopted cloud platform. It offers over 200 services, including computing power (EC2), storage (S3), databases (RDS), machine learning (SageMaker), and more.
Strengths: AWS is known for its extensive global infrastructure, scalability, and a vast ecosystem of third-party integrations. It is particularly popular among startups and enterprises for its flexibility and pay-as-you-go pricing model.
Use Cases: AWS is ideal for businesses looking for a broad range of services, from web hosting to big data analytics and IoT.
AWS offers a broad range of services, including:
• Compute Services: EC2 (Elastic Compute Cloud), Lambda (serverless computing)
• Storage Services: S3 (Simple Storage Service), EBS (Elastic Block Store)
• Networking: VPC (Virtual Private Cloud), Route 53 (DNS service)
• Database Services: RDS (Relational Database Service), DynamoDB (NoSQL database)
• AI/ML: SageMaker, Rekognition
• SageMaker is a managed service for building, training, and deploying machine learning models.
• Rekognition provides image recognition and video analysis for applications without requiring machine learning (ML) expertise.

Microsoft Azure
Azure, launched in 2010, is Microsoft's cloud platform. It provides a wide array of services, including virtual machines, AI and machine learning (Azure ML), and enterprise applications like Office 365 and Dynamics 365.
Strengths: Azure is deeply integrated with Microsoft's software ecosystem, making it a natural choice for enterprises already using Windows Server, Active Directory, or other Microsoft products. It also offers strong hybrid cloud capabilities, allowing seamless integration between on-premises and cloud environments.
Use Cases: Azure is well-suited for enterprises with existing Microsoft ecosystems.
Key Azure services include:
• Compute: Virtual Machines, Azure Functions (serverless computing)
• Storage: Blob Storage, Azure Files
• Networking: Virtual Network, Azure Load Balancer
• Database: Azure SQL Database, Cosmos DB
• AI & Analytics: Azure Machine Learning, Power BI integration

Google Cloud Platform (GCP)
GCP, launched in 2011, is Google's cloud offering. It is known for its strengths in data analytics, machine learning (TensorFlow), and container orchestration (Kubernetes).
Strengths: GCP excels in big data and machine learning, leveraging Google's expertise in these areas. It also offers competitive pricing and strong performance for data-intensive applications.
Use Cases: GCP is ideal for organizations focused on data analytics, AI/ML, and containerized applications.

Containerized Applications
• Containerized applications are applications that run in isolated packages of code called containers.
• Containers package all the dependencies an application needs to run on any host operating system, such as libraries, binaries, configuration files, and frameworks, into a single lightweight executable.
• Software developers use containerization to deploy applications in multiple environments without rewriting the program code.
• They build an application once and deploy it on multiple operating systems; for example, the same containers run on both Linux and Windows hosts.
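As a small illustration of working with containers programmatically, the sketch below uses the Docker SDK for Python (the third-party docker package) to pull an image and run a throwaway container. It assumes the package is installed and a local Docker daemon is running; the image and command are arbitrary examples, not part of the lecture material.

```python
import docker  # pip install docker; requires a running Docker daemon

client = docker.from_env()  # connect using the local Docker environment

# Pull a small public image and run a one-off command in an isolated container.
client.images.pull("alpine:latest")
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from an isolated container"],
    remove=True,  # clean up the container after it exits
)
print(output.decode().strip())
```

The same image could be run unchanged on any host with a container runtime, which is the portability point made above.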
Key GCP services include:
• Compute: Compute Engine, Cloud Functions
• Storage: Cloud Storage, Persistent Disks
• Networking: Cloud Load Balancing, VPC
• Database: BigQuery, Cloud Spanner
• AI & ML: TensorFlow, AutoML

2.2 Cloud Architecture and Data Centers
• Cloud architecture refers to the design and structure of cloud environments, including the components and subcomponents required for cloud computing.
• At the heart of cloud architecture are data centers, which house the physical infrastructure (servers, storage, networking equipment) that powers cloud services.

Cloud Architecture Layers
Cloud computing infrastructure is built on a network of global data centers that provide scalable, highly available services. Cloud architecture typically consists of the following layers:
• Infrastructure Layer: Physical hardware, data centers, networking components
• Virtualization Layer: Hypervisors, virtual machines (VMs), containers
• Platform Layer: Middleware, APIs, and orchestration tools
• Application Layer: SaaS applications, cloud-native services

Key Components of Cloud Architecture
1. Front-End: The client-side interface that users interact with (e.g., web browsers, mobile apps).
2. Back-End: The cloud infrastructure, including servers, storage, and databases.
3. Network: The communication channels that connect front-end and back-end components.
4. Middleware: Software that enables communication and data management between applications.

Data Centers and Regions
Cloud providers operate data centers globally, categorized into:
• Regions: Geographically distinct areas with multiple data centers
• Availability Zones (AZs): Multiple data centers within a region, ensuring redundancy
• Edge Locations: Content delivery and caching points for faster access
Data Center Design: Modern data centers are designed for high availability, scalability, and energy efficiency. They are often distributed across multiple geographic regions to ensure redundancy and low latency.
Global Infrastructure: Major cloud providers operate data centers in multiple regions and availability zones. For example, AWS has regions in North America, Europe, Asia, and more, each consisting of multiple isolated data centers.
Sustainability: Cloud providers are increasingly focusing on renewable energy and energy-efficient designs to reduce the environmental impact of data centers.

2.3 Resource Management in the Cloud
• Resource management in the cloud involves allocating and optimizing computing resources such as CPU, memory, storage, and network bandwidth to meet application demands efficiently.
Cloud providers enable efficient resource management through:
• Elasticity: The ability to scale resources up or down as demand fluctuates
• Multi-tenancy: Shared infrastructure among multiple users while ensuring security and isolation
• Monitoring & Optimization: Tools like AWS CloudWatch, Azure Monitor, and Google Stackdriver help track resource usage and performance

Key Aspects of Resource Management
• Provisioning: Allocating resources to applications or users based on demand.
• Monitoring: Tracking resource usage to identify bottlenecks or underutilization.
• Optimization: Adjusting resource allocation to improve performance and reduce costs.
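To make the monitoring side of resource management concrete, here is a minimal sketch using the AWS SDK for Python (boto3) to read an EC2 instance's recent CPU utilization from CloudWatch. The region, instance ID, and one-hour window are illustrative assumptions, and configured AWS credentials are assumed.

```python
from datetime import datetime, timedelta

import boto3  # pip install boto3; assumes AWS credentials are configured

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average CPU utilization, in 5-minute buckets, for a hypothetical instance.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```

Readings like these are what feed the provisioning and optimization decisions listed above, whether made manually or by an auto-scaler.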
• Cost Management: Using tools like AWS Cost Explorer or Azure Cost Management to monitor and control cloud spending.

Challenges of Resource Management
• Over-Provisioning: Allocating more resources than necessary, leading to higher costs.
• Under-Provisioning: Allocating insufficient resources, resulting in poor performance.
• Dynamic Workloads: Managing resources for applications with fluctuating demand.

Virtual Machines vs. Containers
• Virtual Machines (VMs): Provide full hardware-level virtualization with a complete guest operating system, enabling strongly isolated environments
• Containers: Lightweight, portable environments that share the host OS kernel (e.g., Docker, Kubernetes), with faster deployment and scaling

2.4 Load Balancing and Auto-Scaling
Load Balancing: Load balancing distributes incoming network traffic across multiple servers to ensure no single server is overwhelmed. This improves application availability, reliability, and performance.
Cloud-based load balancing options include:
• AWS Elastic Load Balancer (ELB)
• Azure Load Balancer
• GCP Cloud Load Balancing

Types of Load Balancers
• Application Load Balancer (ALB): Operates at the application layer (Layer 7) and is ideal for HTTP/HTTPS traffic.
• Network Load Balancer (NLB): Operates at the transport layer (Layer 4) and is suitable for TCP/UDP traffic.
• Global Load Balancer: Distributes traffic across multiple regions for global applications.
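As a purely illustrative sketch of the core idea shared by these balancers, the Python snippet below rotates incoming requests across a pool of backend addresses in round-robin fashion. The addresses are made up, and real cloud load balancers layer health checks, TLS termination, and routing rules on top of this basic distribution step.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy load balancer: hand each new request to the next backend in turn."""

    def __init__(self, backends):
        self.backends = list(backends)          # pool of backend servers
        self._rotation = cycle(self.backends)   # endless round-robin iterator

    def route(self, request_id):
        backend = next(self._rotation)
        return f"request {request_id} -> {backend}"

# Hypothetical backend addresses; a real ALB/NLB would target instances or pods.
balancer = RoundRobinBalancer(["10.0.1.10", "10.0.1.11", "10.0.1.12"])
for i in range(6):
    print(balancer.route(i))
```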
Auto-Scaling
Auto-scaling automatically adjusts the number of compute resources based on real-time demand. This ensures optimal performance during peak times and cost savings during low traffic.
• Vertical Scaling: Increasing the capacity of existing resources (e.g., adding more CPU or memory).
• Horizontal Scaling: Adding more instances of a resource (e.g., launching additional servers).
Major cloud auto-scaling services include:
• AWS Auto Scaling
• Azure Scale Sets
• GCP Managed Instance Groups
(A toy sketch of the scaling decision behind these services appears just before the conclusion.)
By leveraging these cloud infrastructure components, organizations can achieve high availability, scalability, and cost efficiency in their IT operations.

2.5 Cloud Networking: VPCs, Subnets, and Firewalls
Cloud networking involves the configuration and management of network resources in the cloud.
• Virtual Private Cloud (VPC): A VPC is a logically isolated section of the cloud where you can launch resources in a virtual network.
• Subnets: Subdivisions of a VPC that allow you to segment resources for security and performance.
• Firewalls: Security groups and network access control lists (ACLs) that control inbound and outbound traffic to resources.
Key Features:
• Private and Public Subnets: Public subnets allow internet access, while private subnets are isolated for sensitive resources.
• VPN and Direct Connect: Secure connections between on-premises networks and the cloud.

Cloud Service Orchestration and Automation Tools
Automation is crucial for managing cloud infrastructure efficiently. Popular tools and related concepts include:
• Terraform
• Kubernetes
• Edge Computing
• Fog Computing
• CloudFormation & ARM Templates

Terraform
Terraform is an infrastructure-as-code (IaC) tool that enables you to define and provision cloud resources using declarative configuration files.
Benefits: Version control, repeatability, and consistency in resource provisioning.
Use Cases: Automating the deployment of complex cloud environments.
Declarative configuration management refers to the class of tools that let operators declare a desired state for some system (a physical machine, an EC2 VPC, an entire Google Cloud account, or anything else); the tool then compares that desired state with the current state and automatically updates the managed system to match the declaration.

Kubernetes
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Key Features:
• Pods: The smallest deployable units in Kubernetes.
• Services: Enable communication between pods.
• Scaling: Automatically adjusts the number of pods based on demand.

Edge Computing
Edge computing brings computation and data storage closer to the devices where data is generated, reducing latency and bandwidth usage. Data is processed at or near the source (e.g., IoT devices, autonomous vehicles) instead of in centralized cloud data centers.
Use Cases: IoT, real-time analytics, and autonomous vehicles.
Examples: AWS IoT Greengrass, Azure IoT Edge.

Fog Computing
Fog computing extends cloud computing to the edge of the network, enabling data processing at intermediate points between the cloud and end devices. It distributes computing resources between edge devices and the cloud to improve efficiency and scalability.
Use Cases: Smart cities, industrial automation.
Examples: Cisco IOx, OpenFog Consortium.
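The sketch referenced in the auto-scaling discussion above illustrates, in plain Python, the target-tracking style rule that horizontal auto-scalers commonly apply: choose a fleet size that brings average utilization back toward a target. The function name, thresholds, and instance counts are illustrative assumptions, not the actual logic of AWS Auto Scaling, Azure Scale Sets, or Managed Instance Groups, which add cooldowns, warm-up periods, and per-metric policies.

```python
import math

def desired_capacity(current_instances, avg_cpu, target_cpu=60.0,
                     min_instances=1, max_instances=10):
    """Hypothetical target-tracking rule: pick a fleet size that brings average
    CPU back toward the target (desired = ceil(current * actual / target)),
    clamped to the configured minimum and maximum."""
    desired = math.ceil(current_instances * (avg_cpu / target_cpu))
    return max(min_instances, min(max_instances, desired))

# Heavy load: 4 instances averaging 90% CPU -> scale out to 6.
print(desired_capacity(4, avg_cpu=90.0))
# Light load: 4 instances averaging 30% CPU -> scale in to 2.
print(desired_capacity(4, avg_cpu=30.0))
```

Horizontal scaling adjusts the instance count as shown here, whereas vertical scaling would instead resize each instance's CPU or memory.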
Conclusion
Cloud computing platforms and infrastructure form the backbone of modern IT systems. Understanding the offerings of major cloud providers, the architecture of data centers, and the tools for resource management, networking, and automation is essential for designing and managing scalable, efficient, and secure cloud environments. As cloud technologies continue to evolve, concepts like edge and fog computing are becoming increasingly important, enabling new possibilities for real-time data processing and IoT applications.