UNIT-II
Cloud Enabling Technologies: Service-Oriented Architecture
1. Service-Oriented Architecture (SOA)

Service-Oriented Architecture (SOA) is a design pattern in which software components, known as "services," are developed and deployed as independent units. These services interact over a network (often via web services), allowing loose coupling between components.

Key characteristics:
o Interoperability: Services are platform-agnostic and can be used by any system, regardless of language or platform.
o Loose Coupling: Each service is independent, reducing dependencies and simplifying maintenance.
o Reusability: Services can be reused across different applications.

SOA often relies on Web Services for communication between services.

2. REST (Representational State Transfer)

REST is an architectural style for designing networked applications. It uses standard HTTP methods such as GET, POST, PUT, DELETE, and PATCH to perform operations on resources, which are identified by URLs.

Key principles of REST:
o Stateless: Each request from a client to a server must contain all the information needed to understand and process it.
o Uniform Interface: A consistent, predefined interface makes it easier to interact with resources.
o Cacheable: Responses can be explicitly marked as cacheable or non-cacheable to improve performance.
o Layered System: A REST API can be composed of multiple layers (e.g., load balancers, caching servers) without the client needing to know about the underlying layers.

3. Web Services and the Publish-Subscribe Model

Web services let applications communicate over a network, usually the internet, using standard protocols such as HTTP and SOAP, or the REST style. The publish-subscribe model is a messaging pattern used in distributed systems: services (publishers) send messages to a "topic" or "channel," and other services (subscribers) receive those messages. Communication is asynchronous, meaning publishers do not need to know who the subscribers are or wait for responses. This model decouples the services, enhancing scalability and fault tolerance. In the context of web services, publish-subscribe is often used in message queues and event-driven architectures (e.g., Kafka, RabbitMQ).

4. Basics of Virtualization

Virtualization is the creation of a virtual (rather than physical) version of something, such as an operating system, server, storage device, or network resource. It abstracts hardware or software resources to make them more flexible, efficient, and scalable. A hypervisor is used to create and manage virtual machines (VMs) on physical hardware.

5. Types of Virtualization

Virtualization can occur at several levels within an IT environment:
o Hardware Virtualization: Creates virtual machines that each run an independent operating system on a physical host, typically managed by a hypervisor (e.g., VMware ESXi, Microsoft Hyper-V, KVM).
o Operating System Virtualization (Containerization): Allows multiple isolated user spaces to be created on a single host OS (e.g., Docker, LXC).
o Storage Virtualization: Aggregates multiple physical storage devices into a single logical unit to improve manageability and efficiency (e.g., SAN, NAS).
o Network Virtualization: Combines multiple physical networks into a virtualized network infrastructure (e.g., SDN, NFV).

6. Implementation Levels of Virtualization

Virtualization can be implemented at different layers of an IT infrastructure:
o Hardware Level: The most fundamental level, where the physical hardware is abstracted by a hypervisor (bare-metal or hosted) to run multiple virtual machines.
o Operating System Level: Virtualized instances share the host OS, with the kernel isolating the workloads (e.g., containers).
o Application Level: Virtualization is done at the application layer, often using middleware to create isolated instances of applications or resources.
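As a hedged illustration of the REST principles in section 2, the sketch below maps the four main HTTP verbs onto create/read/update/delete operations against an in-memory resource store. The `BookStore` class, its route comments, and the sample data are all hypothetical, invented for this example; a real service would dispatch these methods through an HTTP framework.

```python
# Minimal sketch of a REST-style uniform interface (hypothetical in-memory
# store; not from the notes). Each HTTP verb maps to one CRUD operation.

class BookStore:
    """Maps the four main HTTP verbs onto operations on /books/<id>."""

    def __init__(self):
        self._books = {}
        self._next_id = 1

    def post(self, data):            # POST /books        -> create
        book_id = self._next_id
        self._next_id += 1
        self._books[book_id] = data
        return book_id

    def get(self, book_id):          # GET /books/<id>    -> read
        return self._books.get(book_id)

    def put(self, book_id, data):    # PUT /books/<id>    -> replace
        self._books[book_id] = data

    def delete(self, book_id):       # DELETE /books/<id> -> remove
        self._books.pop(book_id, None)

store = BookStore()
bid = store.post({"title": "Cloud Computing"})
print(store.get(bid))   # {'title': 'Cloud Computing'}
store.delete(bid)
print(store.get(bid))   # None
```

Note that the interface is stateless in the REST sense: every call carries the resource identifier it operates on, so no conversation state is kept between requests.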
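The publish-subscribe decoupling described in section 3 can be sketched in a few lines. The `Broker` class and the topic name below are invented for illustration; a production system would use a message broker such as Kafka or RabbitMQ rather than in-process callbacks.

```python
from collections import defaultdict

class Broker:
    """Toy topic-based broker: publishers and subscribers share only a topic name."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # The publisher never learns who (if anyone) is listening.
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("vm.events", received.append)
broker.publish("vm.events", "vm-42 started")
print(received)   # ['vm-42 started']
```

Because the publisher only names a topic, subscribers can be added or removed without touching publisher code, which is the decoupling that gives the pattern its scalability and fault-tolerance benefits.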
7. Virtualization Structures

Several components and layers are involved in virtualization:
o Hypervisor (Virtual Machine Monitor): The software that creates and runs virtual machines. It sits between the hardware and the operating systems, managing the allocation of resources to each virtual machine.
o Virtual Machines (VMs): Independent operating systems running on virtualized hardware.
o Virtual Networks: Network resources created and managed by virtualization tools to isolate and segment virtualized environments.

8. Tools and Mechanisms for Virtualization

Various tools and platforms are used to create and manage virtualized environments:
o VMware vSphere/ESXi: A popular hypervisor that provides virtualization management.
o Microsoft Hyper-V: Microsoft's virtualization technology for creating virtualized computing environments.
o KVM (Kernel-based Virtual Machine): An open-source virtualization module in the Linux kernel.
o Docker, Kubernetes: Popular tools for container-based (operating-system-level) virtualization.
o VMware vCenter, OpenStack: Cloud management platforms for managing large-scale virtualized environments.

9. Virtualization of CPU, Memory, and I/O Devices

o CPU Virtualization: The hypervisor allocates CPU resources to virtual machines, so each VM behaves as if it had its own dedicated processor while actually sharing the physical CPUs.
o Memory Virtualization: Virtual memory management lets each virtual machine believe it has its own memory, though its pages are mapped from a shared physical memory pool.
o I/O Device Virtualization: I/O resources (e.g., storage, networking) are abstracted and shared among multiple virtual machines. Virtual devices are presented to the VMs, while the hypervisor maps their requests to physical devices.
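To make the memory-virtualization idea concrete, here is a toy sketch, assuming a hypervisor that hands out frames from a shared physical pool. All class and method names are hypothetical; real memory virtualization relies on hardware-assisted structures such as nested page tables, not Python dictionaries.

```python
# Toy model: two VMs each ask for "their" guest page 0, and the hypervisor
# transparently maps those requests onto different frames of shared memory.

class PhysicalMemory:
    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))

class Hypervisor:
    def __init__(self, memory):
        self.memory = memory
        self.page_tables = {}   # vm_id -> {guest_page: host_frame}

    def map_page(self, vm_id, guest_page):
        # Take the next free physical frame and record the mapping.
        frame = self.memory.free_frames.pop(0)
        self.page_tables.setdefault(vm_id, {})[guest_page] = frame
        return frame

    def translate(self, vm_id, guest_page):
        # Each VM believes page 0 is its own; the hypervisor knows better.
        return self.page_tables[vm_id][guest_page]

phys = PhysicalMemory(num_frames=4)
hv = Hypervisor(phys)
hv.map_page("vm1", 0)
hv.map_page("vm2", 0)
print(hv.translate("vm1", 0), hv.translate("vm2", 0))   # 0 1
```

The same indirection idea underlies CPU and I/O virtualization: the guest sees a private, simple resource, while the hypervisor multiplexes the real one.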
10. Virtualization Support for Disaster Recovery

Virtualization plays a significant role in disaster recovery and business continuity:
o Snapshots: Hypervisors can take snapshots of virtual machines, enabling recovery to a known good state after a failure.
o Live Migration: Virtual machines can be moved from one physical host to another without downtime, helping with load balancing and ensuring high availability.
o Replication: Virtual machines can be replicated across data centers so that, after a disaster, they can be quickly restored from the replicated copies.
o Automated Failover: Virtualized environments can automate the failover process, minimizing disruption and reducing recovery times.
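The snapshot mechanism above can be mimicked in a toy model. The `VirtualMachine` class below is hypothetical and greatly simplified; it only shows why saving a deep copy of VM state lets a hypervisor roll back to a known good configuration after a bad change.

```python
import copy

class VirtualMachine:
    """Toy VM whose whole state is a dict; snapshots are deep copies of it."""

    def __init__(self, name):
        self.name = name
        self.state = {"disk": [], "powered_on": True}
        self._snapshots = {}

    def snapshot(self, label):
        # Deep-copy so later changes cannot alter the saved state.
        self._snapshots[label] = copy.deepcopy(self.state)

    def restore(self, label):
        self.state = copy.deepcopy(self._snapshots[label])

vm = VirtualMachine("web-01")
vm.state["disk"].append("config-v1")
vm.snapshot("known-good")
vm.state["disk"].append("bad-update")   # simulate a failed change
vm.restore("known-good")
print(vm.state["disk"])   # ['config-v1']
```

Replication and automated failover extend the same idea across machines: the saved state is copied to another host, which takes over when the primary fails.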