Icc Final

Experiment - 1

1. Title: - A case study of AWS cloud services, mainly AWS EC2 and AWS S3

2. Outcome: - Enhanced scalability, flexibility, and cost-efficiency, optimizing
resource allocation and facilitating efficient data management and accessibility.

3. Objectives: - To showcase how leveraging AWS EC2 and S3 services can enhance
scalability, flexibility, and cost-efficiency while optimizing resource allocation and
facilitating efficient data management for businesses.

4. Description: - AWS stands for Amazon Web Services. It is a comprehensive and widely
used cloud computing platform offered by Amazon.com. AWS provides a variety of cloud
services, including computing power, storage solutions, networking capabilities,
databases, machine learning, artificial intelligence, and more, all delivered as on-demand
services over the internet.

List of AWS cloud services:

- Compute: EC2, Lambda
- Storage: S3, EBS
- Database: RDS, DynamoDB
- Networking: VPC, Route 53
- Security: IAM, KMS
- Management: CloudFormation, CloudWatch
- Analytics: Athena, EMR
- Machine Learning: SageMaker, Rekognition
- IoT: IoT Core, FreeRTOS
- Developer Tools: CodeCommit, CodeBuild
- Mobile: Amplify, Pinpoint
- AR/VR: Sumerian
- Customer Engagement: Connect

AWS EC2 (Elastic Compute Cloud):


- Virtual Servers: EC2 offers virtual servers in the cloud, enabling users to run
applications and workloads without investing in physical hardware.
- Compute Capacity: Users can rent compute capacity on-demand, allowing
for flexible and scalable infrastructure.
- Instance Types: EC2 provides a wide selection of instance types optimized
for different use cases, including general-purpose, compute-optimized,
memory-optimized, and storage-optimized instances.
- Operating Systems: Supports various operating systems, including Linux and
Windows, allowing users to choose the environment that best suits their needs.
- Auto Scaling: EC2 offers Auto Scaling, which automatically adjusts the
number of instances based on traffic demand, ensuring optimal performance
and cost-efficiency.
- Elastic IP Addresses: Users can assign Elastic IP addresses to EC2 instances,
providing consistent public-facing IP addresses even if instances are stopped
and restarted.
- Use Cases: EC2 is suitable for hosting web applications, databases,
development and testing environments, batch processing, and more.

AWS EC2 Operations:


1. Provisioning Instances:
- Create EC2 instances through the AWS Management Console, CLI, or SDKs.
- Select instance type, operating system, and configuration options.
- Configure security groups, key pairs, and networking settings.
2. Managing Instances:
- Start, stop, reboot, or terminate instances as needed.
- Monitor instance performance and health using Amazon CloudWatch.
- Scale instances vertically or horizontally based on workload demands.
3. Automating Deployment:
- Use AWS CloudFormation or other automation tools to provision and manage
EC2 resources using Infrastructure as Code (IaC).
- Implement auto-scaling policies to automatically adjust the number of
instances based on demand.
4. Ensuring Security:
- Apply IAM roles and policies to control access to EC2 resources.
- Configure security groups and network ACLs to restrict inbound and
outbound traffic.
- Encrypt data at rest and in transit using AWS Key Management Service
(KMS) and SSL/TLS.
5. Monitoring and Troubleshooting:
- Set up CloudWatch alarms to monitor instance health, CPU utilization, and
other metrics.
- Use AWS Systems Manager for remote management, patching, and
troubleshooting of EC2 instances.
- Enable CloudTrail logging to track API activity and audit changes made to
EC2 resources.
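The provisioning step above is usually performed with the console, CLI, or an SDK. The sketch below shows the kind of parameter set one might pass to the EC2 RunInstances call (boto3-style parameter names); the AMI ID, key pair, and security group ID are hypothetical placeholders, not real values:

```python
# Sketch of parameters for provisioning one EC2 instance via the SDK.
# The AMI ID, key pair name, and security group ID below are placeholders --
# substitute values from your own account before using them.
import json

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",            # placeholder AMI
    "InstanceType": "t3.micro",                    # general-purpose instance type
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "my-key-pair",                      # placeholder key pair
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
    "TagSpecifications": [{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-web-server"}],
    }],
}

# In a real session this dict would be passed to
# boto3.client("ec2").run_instances(**run_instances_params).
print(json.dumps(run_instances_params, indent=2))
```

Launching with MinCount/MaxCount both set to 1 requests exactly one instance; Auto Scaling, described above, would manage this count automatically instead.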

AWS S3 (Simple Storage Service):


- Object Storage: S3 is an object storage service designed to store and retrieve
any amount of data from anywhere on the web.
- Highly Durable: S3 automatically replicates data across multiple Availability
Zones within a region, providing 99.999999999% (11 nines) durability of
objects.
- Scalable: S3 scales automatically to accommodate any amount of data,
allowing users to store and retrieve large volumes of data without worrying
about capacity constraints.
- Storage Classes: Offers multiple storage classes, including Standard,
Infrequent Access (IA), One Zone-IA, Glacier, and Glacier Deep Archive,
allowing users to optimize storage costs based on data access patterns.
- Versioning: Supports versioning, allowing users to keep multiple versions of
an object in the same bucket, protecting against accidental deletion or
modification.
- Lifecycle Policies: Users can define lifecycle policies to automatically
transition objects to different storage classes or delete them after a specified
period, helping optimize storage costs.
- Use Cases: S3 is used for a wide range of use cases, including data backup
and archiving, static website hosting, content distribution, data lakes, and
analytics.

AWS S3 Operations:
1. Creating Buckets:
- Create S3 buckets through the AWS Management Console, CLI, or SDKs.
- Choose a region, bucket name, and set access permissions.
2. Uploading and Managing Objects:
- Upload objects (files) to S3 buckets using the console, CLI, or SDKs.
- Manage object permissions, including public access, using bucket policies and
access control lists (ACLs).
- Set object metadata, tags, and lifecycle policies to automate data management
tasks.
3. Data Lifecycle Management:
- Define lifecycle policies to automatically transition objects between storage
classes based on access patterns and retention requirements.
- Use versioning to maintain multiple versions of objects and protect against
accidental deletion or modification.
4. Security and Access Control:
- Implement bucket policies and IAM policies to control access to S3 buckets
and objects.
- Encrypt data at rest using server-side encryption (SSE) with S3-managed keys
(SSE-S3) or customer-provided keys (SSE-C).
- Secure data in transit by enabling SSL/TLS for connections to S3.
5. Monitoring and Analytics:
- Monitor S3 bucket metrics and events using CloudWatch.
- Enable S3 Access Logs to track requests made to buckets and objects for
auditing and analysis.
- Use S3 Inventory to generate reports on the objects stored in buckets for
compliance and governance purposes.
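The lifecycle-management operations above can be captured in a single lifecycle configuration document. Below is a minimal sketch in the JSON shape that S3's lifecycle API accepts; the rule name, prefix, and day thresholds are illustrative assumptions:

```python
# Sketch of an S3 lifecycle configuration: transition objects under "logs/"
# to Infrequent Access after 30 days, to Glacier after 90, and expire them
# after 365. The rule ID and thresholds are illustrative, not prescriptive.
import json

lifecycle_configuration = {
    "Rules": [{
        "ID": "archive-then-expire",       # hypothetical rule name
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},     # apply only to objects under logs/
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }],
}

# In a real session: boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)
print(json.dumps(lifecycle_configuration, indent=2))
```

Tiering data this way is how the storage classes listed earlier (Standard, IA, Glacier) translate into cost savings for data that is accessed less often as it ages.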
Experiment - 2

1. Title: - Create a virtual machine on Oracle VirtualBox or the VMware tool
and try to migrate the virtual machine

2. Outcome: - Create a virtual machine in Oracle VirtualBox or VMware, then export
it as an .ova or .ovf file respectively. Import the exported file into the other
virtualization platform to complete the migration process. Ensure compatibility and
test functionality post-migration.

3. Objectives: - The objective is to demonstrate the process of creating a virtual machine
using either Oracle VirtualBox or VMware, then migrating the virtual machine
between the two virtualization platforms. This exercise aims to showcase
interoperability and migration capabilities between different virtualization solutions,
ensuring a seamless transition and functionality across platforms.

4. Description and steps to create a virtual machine: -

▪ Install VirtualBox

1. Download VirtualBox from https://www.virtualbox.org/wiki/Downloads to your
Windows machine. Accept the default install options, which include network interfaces
and USB.
2. Click Finish to close the installer and open VirtualBox.

Install OS in Virtual Box


• To set up a virtual machine in VirtualBox, click New to create a new virtual
machine.

• Name the virtual machine and click Next. VirtualBox will automatically
suggest the Type and Version of the OS used in the virtual machine from the
name. In our example we are using kali-linux-2024, so VirtualBox set the
Type to Linux and the Version to Ubuntu.

• Select a new virtual hard disk for the VM and click Create. This will start the
process of creating a file that will hold the VM's operating system.

• Set the location of the VDI hard disk and set the size of the file, then click
Create. We set the size to 30.72 GB, large enough for a Kali Linux 2024 install.
The size is the maximum that the VM can use, so make sure to set a reasonable
amount of space.
Installation of Kali Linux: -
▪ Click Start to install Kali Linux

▪ Click on Graphical install

▪ Set the language and location, and configure the keyboard

▪ Set the username and password

▪ Click on “Guided - use entire disk”

▪ Select “SCSI3” and “All files in one partition”, which was recommended

▪ Click “Continue” to apply the changes

▪ Click “Yes” to write the changes to disk

▪ Click “Continue” to install the system software

▪ Click “Continue” to install the GRUB boot loader

▪ Now, click Continue to reboot the system

▪ Kali Linux opens in the system

▪ Enter the username and password

▪ The Kali Linux OS is open.


Experiment - 3

1. Title: - Perform any collaborative cloud application like Google Drive, Google Sheets,
or Google Calendar and write its advantages and features.
2. Outcome: - Collaborative use of Google Sheets enhances teamwork with real-time
editing and centralized data storage, fostering productivity and streamlined
communication among users.
3. Objectives: - The objective of collaborative cloud applications like Google Sheets is
to facilitate seamless teamwork by providing real-time editing capabilities, centralized
data storage, and streamlined communication, ultimately enhancing productivity and
collaboration among users.
4. Description and Steps to create Spreadsheets: -
Steps to create Google sheets: -
Step 1: - Start Google Sheets on the web and create a new blank spreadsheet.

Step 2: - Fill some data in the spreadsheet and navigate to File > Share > Share with others.

Step 3: - Enter the email addresses of the collaborators and send the invite with/without a message.

Step 4: - The collaborators will receive the invite through mail.

Step 5: - Here is the result of two people working on a single spreadsheet.

Advantages: -
1. Real-time Collaboration: Multiple users can work on the same document
simultaneously from different locations. Changes made by one user are instantly visible
to others, enabling seamless collaboration without the need for version control or manual
file sharing.

2. Access Anywhere, Anytime: Cloud-based applications can be accessed from any
internet-enabled device, allowing users to work on documents from home, the office, or on
the go. This flexibility enhances productivity and enables remote work.

3. Version History and Tracking: Collaborative tools often maintain a version history of
documents, allowing users to review changes, revert to previous versions, and track edits
made by collaborators. This feature enhances transparency and accountability.

4. Reduced IT Overhead: Cloud-based applications typically do not require installation or


maintenance of software on individual devices. Updates and patches are applied
automatically by the service provider, reducing IT overhead and ensuring users always
have access to the latest features.

5. Scalability and Flexibility: Cloud-based applications scale automatically to


accommodate changing user needs and data volumes. Users can easily add or remove
collaborators, adjust access permissions, and increase storage capacity as needed without
the need for additional infrastructure.
6. Integration and Compatibility: Collaborative cloud applications often integrate
seamlessly with other cloud services and tools, enabling users to streamline workflows,
share data between applications, and automate tasks. This interoperability enhances
productivity and efficiency.

7. Enhanced Security: Cloud-based applications typically offer robust security features,


including data encryption, access controls, and compliance certifications. Service
providers invest heavily in security measures to protect user data from unauthorized
access, breaches, and data loss.

8. Cost Savings: Cloud-based applications often follow a subscription-based pricing model,


allowing users to pay only for the resources and features they use. This pay-as-you-go
model eliminates upfront capital expenditures on software licenses and hardware, making
collaborative tools cost-effective for businesses of all sizes.

Some key features of collaborative cloud applications like Google Sheets:

1. Real-Time Collaboration: Multiple users can work on the same spreadsheet


simultaneously, seeing changes in real-time, enabling seamless collaboration and
teamwork.

2. Comments and Discussions: Users can add comments to specific cells or sections of the
spreadsheet, facilitating discussions, feedback, and collaboration among team members.

3. Revision History: Google Sheets maintains a detailed revision history, allowing users to
view previous versions of the spreadsheet, track changes, and revert to earlier versions if
needed.

4. Access Control: Administrators can control access to spreadsheets by setting


permissions, allowing users to view, edit, or comment based on their role or specific
needs.

5. Sharing and Collaboration: Spreadsheets can be easily shared with individuals or


groups via email, links, or by embedding them in websites, facilitating collaboration with
internal and external stakeholders.
6. Templates and Add-ons: Google Sheets offers a variety of templates for different use
cases, such as budgeting, project management, and inventory tracking. Users can also
enhance functionality with add-ons for tasks like data analysis, visualization, and
automation.

7. Data Import and Export: Users can import data from various sources, including CSV
files, Excel spreadsheets, and databases, making it easy to consolidate and analyze data.
Similarly, data can be exported in different formats for sharing or further analysis.

8. Data Visualization: Google Sheets provides tools for creating charts, graphs, and pivot
tables to visualize data and gain insights quickly. Users can customize visualizations and
apply formatting options to enhance clarity and presentation.

9. Offline Access: Google Sheets offers offline access through the use of browser
extensions or mobile apps, allowing users to view and edit spreadsheets even when not
connected to the internet. Changes are synced automatically once the connection is
restored.

10. Integration with Other Google Services: Google Sheets integrates seamlessly with
other Google services such as Google Drive, Gmail, Google Calendar, and Google Docs,
enabling users to share data, collaborate on documents, and automate workflows across
different applications.
Experiment - 4

1. Title: - Write a case study on Microsoft Azure services in detail

2. Outcome: - Implementing Microsoft Azure services enabled streamlined digital
transformation, fostering innovation, scalability, and cost-efficiency while ensuring
enhanced security and regulatory compliance for the retail corporation.

3. Objectives: - The objective of this case study on Microsoft Azure services is to
demonstrate how adopting Azure solutions facilitates digital transformation for
businesses, driving innovation, scalability, cost-efficiency, and security enhancements
to maintain a competitive edge in the market.

4. Description: - Microsoft Azure is a comprehensive cloud computing platform
offered by Microsoft. It provides a wide range of cloud services, including computing
power, storage solutions, networking capabilities, databases, artificial intelligence
(AI), machine learning, Internet of Things (IoT), and more. Azure enables
organizations to build, deploy, and manage applications and services through
Microsoft's global network of data centers.

Some key Microsoft Azure services:

1. Compute Services:
- Azure Virtual Machines (VMs): On-demand, scalable compute instances running
Windows or Linux.
- Azure Kubernetes Service (AKS): Managed Kubernetes service for deploying,
managing, and scaling containerized applications.
- Azure Functions: Serverless compute service for running event-driven code without
managing infrastructure.
2. Storage Services:
- Azure Blob Storage: Scalable object storage for unstructured data like images,
documents, and videos.
- Azure File Storage: Fully managed file shares for cloud or hybrid deployments.
- Azure Disk Storage: Persistent, high-performance block storage for VMs and
applications.

3. Database Services:
- Azure SQL Database: Fully managed relational database service with built-in high
availability and security features.
- Azure Cosmos DB: Globally distributed, multi-model database service for building
planet-scale applications.
- Azure Database for MySQL/PostgreSQL: Managed databases for MySQL and
PostgreSQL with automatic backups and updates.

4. Networking Services:
- Azure Virtual Network (VNet): Isolated network environments with customizable IP
address ranges, subnets, and security policies.
- Azure Load Balancer: Distributes incoming network traffic across multiple VMs or
instances.
- Azure VPN Gateway: Securely connect on-premises networks to Azure VNet over the
internet.

5. Identity and Access Management:


- Azure Active Directory (AAD): Cloud-based identity and access management service
for managing users and groups.
- Azure AD B2C: Identity and access management service for customer-facing
applications.
- Azure AD Domain Services: Provides managed domain services like domain join,
LDAP, and Kerberos.
6. AI and Machine Learning:
- Azure Machine Learning: Cloud-based platform for building, training, and deploying
machine learning models.
- Azure Cognitive Services: Pre-built AI APIs for vision, speech, language, and decision-
making capabilities.
- Azure Bot Service: Build, connect, and deploy intelligent bots that interact with users
across multiple channels.

7. Internet of Things (IoT):


- Azure IoT Hub: Managed service for bi-directional communication between IoT devices
and Azure services.
- Azure IoT Central: Fully managed IoT application platform for building and scaling IoT
solutions.
- Azure IoT Edge: Deploy and run AI, Azure services, and custom logic directly on IoT
devices.

8. Security Services:
- Azure Security Center: Unified security management and advanced threat protection for
hybrid cloud workloads.
- Azure Sentinel: Cloud-native security information and event management (SIEM)
service for threat detection and response.
- Azure Key Vault: Securely store and manage cryptographic keys, secrets, and
certificates used by cloud applications and services.

Some advantages of using Microsoft Azure:

1. Scalability: Azure allows you to scale resources up or down quickly based on demand,
ensuring optimal performance and cost-efficiency.

2. Global Presence: With data centers located in regions around the world, Azure provides
low-latency access to services and data for users globally.
3. Hybrid Capabilities: Azure supports hybrid cloud deployments, enabling seamless
integration between on-premises environments and the cloud.

4. Security and Compliance: Azure offers robust security features and compliance
certifications, including encryption, identity management, and regulatory compliance.

5. Flexible Pricing: Azure provides various pricing options, including pay-as-you-go,


reserved instances, and spot instances, allowing you to optimize costs based on your needs.

6. High Availability: Azure ensures high availability and reliability through redundancy, fault
tolerance, and disaster recovery features.

7. Integration: Azure integrates seamlessly with other Microsoft products and services, as
well as third-party tools and platforms, enabling seamless workflows and interoperability.

8. AI and Machine Learning: Azure offers a suite of AI and machine learning services,
including cognitive services, Azure Machine Learning, and AI-powered analytics,
empowering organizations to leverage AI capabilities easily.

9. Developer Tools: Azure provides a rich set of developer tools, including Azure DevOps,
Visual Studio integration, and support for popular programming languages and frameworks,
enabling rapid application development and deployment.

10. Management and Monitoring: Azure offers comprehensive monitoring and management
capabilities, including Azure Monitor, Azure Resource Manager, and Azure Policy,
empowering organizations to effectively manage and monitor their cloud resources.

11. Big Data and Analytics: Azure provides a range of big data and analytics services,
including Azure Synapse Analytics, Azure Data Factory, and Azure HDInsight, for processing
and analyzing large volumes of data.
12. IoT: Azure offers IoT solutions for connecting, monitoring, and managing IoT devices, as
well as analytics and machine learning capabilities for IoT data, enabling organizations to
harness the power of IoT for their business needs.

Some key operational aspects of using Microsoft Azure:

1. Provisioning Resources:
- Azure allows users to provision various resources such as virtual machines, databases,
storage accounts, and networking components through the Azure Portal, Azure CLI, or
Azure Resource Manager (ARM) templates.

2. Managing Resources:
- Users can manage and monitor their Azure resources through the Azure Portal,
PowerShell, Azure CLI, or third-party management tools.
- Azure Resource Manager (ARM) provides a unified management layer to organize and
manage resources in groups called resource groups.

3. Scaling Resources:
- Azure enables users to scale resources vertically (increasing or decreasing resource
capacity) or horizontally (adding or removing instances) based on workload demands.
- Azure Autoscale allows automatic scaling of resources based on predefined metrics or
schedules.

4. Monitoring and Logging:


- Azure provides monitoring and logging capabilities through Azure Monitor, which
allows users to collect, analyze, and act on telemetry data from Azure resources.
- Azure Monitor provides insights into performance, health, and usage of resources,
enabling proactive management and troubleshooting.
5. Backup and Disaster Recovery:
- Azure offers backup and disaster recovery solutions for protecting data and applications
against accidental deletion, corruption, or outages.
- Azure Backup provides backup-as-a-service for Azure VMs, files, and databases, while
Azure Site Recovery enables replication and failover of on-premises or Azure-based
workloads.

6. Security and Compliance:


- Azure provides built-in security features such as role-based access control (RBAC),
network security groups (NSGs), Azure Security Center, and Azure Policy for managing
security and compliance.
- Azure Security Center provides threat detection, security recommendations, and
compliance management across hybrid cloud workloads.

7. Cost Management:
- Azure Cost Management helps users optimize and manage cloud spending by providing
visibility into resource usage and costs, cost analysis tools, budgeting, and cost
allocation capabilities.
- Azure Cost Management also offers cost-saving recommendations and cost alerts to help
users control and optimize their Azure spending.

8. Identity and Access Management (IAM):


- Azure Active Directory (AAD) enables centralized identity and access management for
Azure resources, allowing users to control access permissions, manage user identities,
and enforce security policies.
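The ARM templates mentioned under "Provisioning Resources" are JSON documents declaring the desired resources. Below is a minimal sketch of one, built as a Python dict; the storage-account name is a hypothetical placeholder and the API version shown may not be the latest:

```python
# Sketch of a minimal ARM template declaring a single storage account.
# The account name is a placeholder and must be globally unique in reality;
# the apiVersion may need updating to a current one.
import json

arm_template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2022-09-01",
        "name": "demostorageacct123",             # hypothetical placeholder name
        "location": "[resourceGroup().location]",  # inherit the resource group's region
        "sku": {"name": "Standard_LRS"},           # locally redundant storage
        "kind": "StorageV2",
    }],
}

# Such a template would typically be saved as template.json and deployed with
# the Azure CLI, e.g.:
#   az deployment group create --resource-group my-rg --template-file template.json
print(json.dumps(arm_template, indent=2))
```

Because the template is declarative, re-deploying it is idempotent: ARM compares the declared state with what exists and only applies the difference.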
Experiment - 5

1. Title: - Deploy any mobile app on any cloud platform like Google Cloud, AWS, or
Heroku.
2. Outcome: - Deploying a mobile app on Google Cloud ensures seamless scalability,
global accessibility, and optimized performance, enhancing user experience and
simplifying management for developers.
3. Objectives: - The objective is to utilize cloud platforms like Google Cloud to deploy
mobile apps, enhancing scalability, reliability, and user experience while streamlining
development and operational processes for faster delivery and innovation.
4. Description and establishing a Google Cloud project: - Deploying a mobile
app on Google Cloud Platform involves leveraging its scalable infrastructure and managed
services to host the backend, manage user data securely, and ensure high availability,
enabling developers to focus on building innovative features while delivering a seamless
experience to users.

Establishing a Google Cloud project for deployment


Step 1: - Once signed in, create a new project.

Step 2: - In the Navigation bar hover over APIs & Services and select library.
Enabling required API’s
Step 1: - Search for the Cloud Build API and enable it. Click on Create Credentials (to use
this service we must create credentials first).

Step 2: - Select application data and click next.

Step 3: - Name the service account and click Create and continue.

Step 4: - Click on Add role and select Storage Admin from Cloud Storage. Add one more role
(Storage Object Viewer) to the service account. Click Continue and Done.

Step 5: - Search and enable App Engine Admin API. Search and enable App Engine API.

Creating Application on GAE:


Step 1: - From Navigation bar go to App Engine. Click on create application.

Step 2: - Choose a suitable region and select the service account. Click Next.

Step 3: - Change the Environment field from Static to Flexible.


Deploying Application to the Google Cloud Platform GAE service.
Step 1: - Now add app.yaml inside app folder.
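A minimal app.yaml for the Flexible environment chosen above might look like the sketch below; the runtime, entrypoint, and scaling settings are illustrative assumptions and depend on the app being deployed:

```yaml
# Sketch of an app.yaml for the App Engine Flexible environment.
# The entrypoint assumes a Flask-style app object in main.py -- adjust to
# match your own project.
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

manual_scaling:
  instances: 1   # keep costs low for a demo deployment
```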

Step 2: - Open terminal and run “gcloud init” command.


Step 3: - Enter 1 to re-initialize the previous configuration with new settings, 2 to create a
new configuration, or 3 to continue with the existing configuration with no changes.

Step 4: - Choose the account with which you want to proceed.

Step 5: - Choose the project with which you want to proceed.


Step 6: - Run the command “gcloud app deploy”. Type ‘y’ and press Enter when prompted. After
successful deployment you can access the app using the highlighted link
(https://portfolio-418812.uc.r.appspot.com).
Experiment - 6

1. Title: - Write a case study on an open-source cloud platform like OpenStack or Salesforce.

2. Outcome: - Implementing OpenStack for enterprise cloud infrastructure resulted in
enhanced agility, cost savings, and vendor flexibility, fostering collaboration and
innovation within the organization.

3. Objectives: - To show how adopting an open-source cloud platform like OpenStack
empowers organizations with greater control, cost-effectiveness, and vendor
flexibility, while fostering collaboration and innovation within the organization for
continuous improvement in cloud operations.

4. Description: - OpenStack is an open-source cloud computing platform that enables
organizations to build and manage private and public clouds. It provides a set of software
tools for creating and managing cloud infrastructure, including compute, storage, and
networking resources.
The operation of OpenStack:
1. Deployment:
- Installation: OpenStack is typically installed on physical servers or virtual machines
running Linux. Deployment tools like DevStack, Packstack, or OpenStack-Ansible can
automate the installation process.
- Configuration: After installation, administrators configure each OpenStack component
according to their requirements, including networking, storage, authentication, and service
endpoints.
2. Management:
- Component Management: OpenStack components such as Nova (compute), Neutron
(networking), Cinder (block storage), and Keystone (identity) are managed through their
respective configuration files and administrative APIs.
- Resource Allocation: Administrators allocate resources such as compute instances, storage
volumes, and network resources to users and projects using role-based access control
(RBAC) and quota management features.
- Monitoring and Logging: OpenStack provides monitoring and logging capabilities through
services like Ceilometer (telemetry) and logging drivers. Administrators monitor resource
usage, performance metrics, and system logs to ensure optimal operation and troubleshoot
issues.
3. Scaling:
- Horizontal Scaling: OpenStack allows horizontal scaling by adding more compute, storage,
or network nodes to the cloud infrastructure to accommodate increased demand.
- Vertical Scaling: Administrators can vertically scale individual components by adjusting
resource allocations or upgrading hardware to improve performance or capacity.
4. Security:
- Access Control: OpenStack's identity service (Keystone) provides authentication and
authorization mechanisms to control access to cloud resources based on user roles and
permissions.
- Network Security: Neutron provides network security features such as security groups and
network ACLs to enforce security policies and restrict traffic between VMs and networks.
- Data Security: OpenStack components like Cinder and Swift offer encryption and access
control features to protect sensitive data stored in block storage volumes or object storage
containers.
5. Automation:
- Orchestration: OpenStack's orchestration service (Heat) enables the automation of resource
provisioning and deployment through templates. Administrators define infrastructure-as-
code (IaC) using Heat templates to automate complex deployment scenarios.
- Lifecycle Management: OpenStack supports lifecycle management features for automating
tasks such as VM provisioning, scaling, and decommissioning through APIs and
orchestration tools.
6. Maintenance and Upgrades:
- Patch Management: Administrators apply patches and updates to OpenStack components
regularly to address security vulnerabilities and bug fixes.
- Version Upgrades: OpenStack supports rolling upgrades and version compatibility checks
to ensure smooth upgrades between different releases without disrupting service
availability.
The core services provided by OpenStack:
1. Compute Service (Nova):
- Nova is the compute service in OpenStack, responsible for provisioning and managing
virtual machines (VMs) on demand. It provides APIs for managing compute instances,
including launching, terminating, resizing, and migrating VMs.
2. Networking Service (Neutron):
- Neutron is the networking service in OpenStack, responsible for creating and managing
virtual networks, subnets, routers, and security groups. It enables connectivity between
VMs and external networks while enforcing network security policies.
3. Block Storage Service (Cinder):
- Cinder is the block storage service in OpenStack, providing persistent storage volumes for
VMs and applications. It offers features such as snapshots, cloning, and volume resizing,
allowing users to manage their storage resources effectively.
4. Object Storage Service (Swift):
- Swift is the object storage service in OpenStack, designed for storing and retrieving large
amounts of unstructured data, such as images, videos, and documents. It offers scalability,
durability, and data redundancy across distributed storage clusters.
5. Identity Service (Keystone):
- Keystone is the identity service in OpenStack, providing authentication and authorization
services for users and services. It enables centralized identity management and access
control across the OpenStack cloud.
6. Dashboard (Horizon):
- Horizon is the web-based dashboard for OpenStack, offering a graphical user interface
(GUI) for administrators and users to manage and monitor their cloud resources. It provides
visibility into compute, storage, and networking components, as well as user authentication
and role-based access control (RBAC).
7. Orchestration Service (Heat):
- Heat is the orchestration service in OpenStack, enabling users to automate the deployment
and management of cloud resources through templates. It allows users to define
infrastructure-as-code (IaC) using the YAML syntax, making it easier to provision complex
environments consistently.
8. Telemetry Service (Ceilometer):
- Ceilometer is the telemetry service in OpenStack, providing metering and monitoring
capabilities for tracking resource usage, performance metrics, and billing data. It collects
data from various OpenStack components and stores it in a centralized database for
analysis and reporting.
9. Image Service (Glance):
- Glance is the image service in OpenStack, responsible for managing virtual machine
images and snapshots. It allows users to upload, store, and share images across the
OpenStack cloud, making it easier to deploy new VMs and applications.
10. Database Service (Trove):
- Trove is the database service in OpenStack, offering database-as-a-service (DBaaS)
capabilities for provisioning and managing relational and NoSQL databases. It simplifies
database deployment and management tasks, such as backup, scaling, and replication.
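As an illustration of the orchestration service (Heat) described above, a minimal HOT template that launches a single Nova server might look roughly like the sketch below. This is an illustrative fragment, not a production template; the image name is a placeholder you would replace with an image registered in Glance.

```yaml
heat_template_version: 2018-08-31
description: Illustrative sketch - launch a single Nova server
parameters:
  image:
    type: string
    description: Name or ID of a Glance image (placeholder)
  flavor:
    type: string
    default: m1.small
resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
outputs:
  server_ip:
    value: { get_attr: [demo_server, first_address] }
```

Passing this template to Heat provisions the server through Nova, pulling the image from Glance, which is exactly the kind of repeatable infrastructure-as-code deployment described in point 7.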
Experiment – 7

1. Title: - Deploy a virtual machine on the AWS or Microsoft Azure cloud platform.

2. Outcome: - Deploying a virtual machine on AWS provides scalable, reliable cloud
computing, with careful cost management and security configuration being crucial
for optimal performance and resource utilization.

3. Objectives: - To leverage cloud infrastructure for scalable and reliable computing,
optimizing resource utilization while ensuring cost-efficiency and maintaining robust
security configurations.

4. Description and steps to create a virtual machine: -
What is a Virtual Machine (VM)?
A virtual machine is a software emulation of a physical computer. It operates like an independent
computer system with its own operating system and applications but runs on virtualized
hardware provided by a physical host machine. VMs offer several advantages, including resource
isolation, flexibility, and scalability.
Amazon Web Services (AWS) EC2
Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity
in the cloud. It allows users to rent virtual machines (known as instances) and run applications on
them. EC2 is a fundamental building block of AWS, offering a wide range of instance types
optimized for various use cases.
Key Concepts:
1. Amazon Machine Image (AMI): An AMI is a template that contains the software
configuration (including the operating system and additional software) required to launch an
instance. Users can choose from a variety of pre-configured AMIs provided by AWS or create
custom AMIs.
2. Instance Types: Instance types define the computing power, memory, storage, and networking
capacity of an EC2 instance. AWS offers a diverse range of instance types optimized for different
workloads, such as general-purpose, compute-optimized, memory-optimized, and storage-
optimized instances.
3. Security Groups: Security groups act as virtual firewalls that control inbound and outbound
traffic for EC2 instances. Users can define rules to allow or deny traffic based on protocols,
ports, and IP addresses.
4. Key Pairs: Key pairs are used for securely accessing EC2 instances via SSH (for Linux
instances) or RDP (for Windows instances). Users create a key pair and download the private key
file, which is used to authenticate and establish secure communication with the instance.
5. Elastic Block Store (EBS): EBS provides block-level storage volumes that can be attached to
EC2 instances. It offers features such as snapshotting, encryption, and different volume types
(such as General Purpose SSD, Provisioned IOPS SSD, and Magnetic).
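The key concepts above map directly onto the parameters of an EC2 launch request. The sketch below assembles such a request as a plain dictionary; the AMI ID, key-pair name, and security-group ID are placeholders. With the boto3 library installed and credentials configured, the same dictionary could be passed as `ec2.run_instances(**params)`, but the helper itself runs without any AWS access.

```python
def build_launch_params(ami_id, instance_type, key_name, sg_id, volume_gib):
    """Assemble an EC2 RunInstances-style request from the key concepts:
    AMI, instance type, key pair, security group, and an EBS root volume."""
    return {
        "ImageId": ami_id,              # 1. Amazon Machine Image
        "InstanceType": instance_type,  # 2. Instance type (CPU/memory/network)
        "SecurityGroupIds": [sg_id],    # 3. Security group (virtual firewall)
        "KeyName": key_name,            # 4. Key pair for SSH/RDP access
        "MinCount": 1,
        "MaxCount": 1,
        "BlockDeviceMappings": [{       # 5. EBS root volume
            "DeviceName": "/dev/sda1",
            "Ebs": {"VolumeSize": volume_gib, "VolumeType": "gp3"},
        }],
    }

# Placeholder identifiers for illustration only:
params = build_launch_params("ami-0abcdef1234567890", "t2.micro",
                             "vm_key", "sg-0123456789abcdef0", 8)
```

Keeping the request in one place like this also makes the launch configuration easy to review before anything is actually provisioned.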
Steps to create a virtual machine:
• Sign in to the AWS Management Console.
• Navigate to the EC2 Dashboard.
• Click on "Launch Instance" to start the process.
• Choose an Amazon Machine Image (AMI) based on your requirements.
• Select an Instance Type that fits your workload needs.
• Create a Key Pair for secure access to your instance.
• Configure instance settings such as network, storage, and security.
• Add storage as needed for your instance.
• Optionally, add tags to organize and identify your instance.
• Configure a Security Group to control inbound and outbound traffic.
• Review the instance configuration and click "Launch" to launch the instance.
• Access your instance using SSH (for Linux) or RDP (for Windows) with the
appropriate credentials and the downloaded key pair file:
▪ Go to Instances and click on the instance.
▪ Click "Connect".
▪ Open the "SSH client" tab.
▪ Open a command prompt and navigate to the location of the key pair file.
▪ Run the command generated by AWS in the command prompt: ssh -i "vm_key.pem" ubuntu@ec2-54-196-184-
248.compute-1.amazonaws.com
▪ The Ubuntu shell now appears, confirming the connection.
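The final connection step can also be assembled programmatically. The small helper below rebuilds the SSH invocation shown above from its parts; the key file name, user, and public DNS name are whatever the AWS console generated for your particular instance.

```python
def ssh_command(key_file, user, host):
    """Build the 'ssh -i' command AWS shows on the Connect tab."""
    return f'ssh -i "{key_file}" {user}@{host}'

# Values taken from the console example above:
cmd = ssh_command("vm_key.pem", "ubuntu",
                  "ec2-54-196-184-248.compute-1.amazonaws.com")
print(cmd)
```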


Experiment - 8

1. Title: - Deploy a storage service such as AWS S3 or Microsoft Azure Storage.
2. Outcome: - Deploying a storage service like AWS S3 provides scalable, durable,
and highly available storage for various data types, enabling efficient data
management and accessibility for applications and users.
3. Objectives: - The objective is to leverage storage services like AWS S3 to provide
a scalable and reliable solution for storing and accessing data, ensuring efficient data
management, high availability, and durability to meet the needs of applications and
users.
4. Description: - Amazon S3 (Simple Storage Service) is an object storage service that
stores data as objects inside buckets. An object consists of the data itself, a unique
key, and optional metadata, while a bucket is the container that holds objects. S3 is
designed for very high durability and availability, making it suitable for backups,
static assets, and large volumes of unstructured data.

Deploying the AWS S3 storage service:

Step 1: - Navigate to the AWS Console.
Step 2: - Search for and select the S3 service.
Step 3: - Click on "Create bucket" to create a new storage bucket.
Step 4: - Enter a name for the bucket and create it with the remaining default parameters.
Step 5: - The bucket is created successfully. Now click on the bucket name.
Step 6: - All the files stored in the bucket are listed here. Click on "Upload" to store files or
folders in the bucket.
Step 7: - Drag and drop, or choose files and folders from local storage.
Step 8: - The PDF file is successfully uploaded.
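The console steps above can also be performed programmatically. S3 addresses every uploaded object by its bucket name and key, from which the virtual-hosted-style URL can be derived. The helper below runs without AWS access; the bucket name is a placeholder, and the commented-out lines assume the boto3 library and configured credentials.

```python
def object_url(bucket, key, region="us-east-1"):
    """Virtual-hosted-style URL for an S3 object."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# With boto3 installed and credentials configured, Steps 3-8 correspond
# roughly to (placeholder bucket and file names):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.create_bucket(Bucket="my-demo-bucket")                     # Steps 3-4
#   s3.upload_file("report.pdf", "my-demo-bucket", "report.pdf")  # Steps 6-8

print(object_url("my-demo-bucket", "report.pdf"))
```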
Experiment - 9

1. Title: - Write about a new and evolving service or technology on a cloud platform.

2. Outcome: - New and evolving services on cloud platforms drive efficiency, scalability,
cost savings, innovation, global accessibility, enhanced security, and sustainability,
fostering rapid digital transformation and competitive advantage for businesses.

3. Objectives: - The primary objectives of new and evolving services on cloud platforms
are to optimize operations and resources while driving innovation and maintaining
competitive advantage in the market.

4. Description: -One of the most intriguing and rapidly evolving technologies in the
cloud platform space is serverless computing. Serverless computing, also known as
Function as a Service (FaaS), is a cloud computing model where cloud providers
automatically manage the infrastructure required to run code, allowing developers to
focus solely on writing and deploying functions or microservices.
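In the FaaS model just described, the developer writes only the function body and the platform provisions everything else. A minimal AWS Lambda-style handler in Python looks like the sketch below; the event shape is a simplified assumption, since the real payload depends on the trigger (API Gateway, S3, queue, etc.).

```python
import json

def handler(event, context):
    """Entry point the platform invokes once per event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally the handler is just a function call; in the cloud the platform
# invokes it in response to an HTTP request or other trigger.
print(handler({"name": "cloud"}, None))
```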

Recent developments and advancements in serverless computing include:
1. Increased Adoption: Serverless computing has seen significant adoption across various
industries due to its scalability, cost-effectiveness, and ease of use. Organizations are leveraging
serverless platforms like AWS Lambda, Azure Functions, and Google Cloud Functions to build
and deploy applications faster and more efficiently.
2. Expanded Use Cases: Initially popular for event-driven and microservices architectures,
serverless computing is now being used for a broader range of use cases, including web
applications, real-time data processing, IoT applications, and machine learning inference.
3. Multi-cloud Support: Cloud providers are enhancing their serverless offerings to support
multi-cloud deployments, allowing developers to build and deploy serverless applications across
multiple cloud platforms seamlessly. This enables organizations to avoid vendor lock-in and
leverage the strengths of different cloud providers.
4. Container Integration: Serverless platforms are increasingly integrating with container
technologies like Docker and Kubernetes, enabling developers to package and deploy serverless
functions as containers. This approach provides more flexibility and control over the execution
environment while retaining the benefits of serverless computing.
5. Hybrid Cloud Support: Serverless computing is extending beyond public cloud
environments to support hybrid and edge computing use cases. Cloud providers are offering
serverless solutions that can run on-premises or at the edge, allowing organizations to deploy
serverless applications closer to where the data is generated or consumed.
6. Event-Driven Architectures: Serverless computing is driving the adoption of event-driven
architectures, where applications respond to events or triggers in real-time. Developers can
leverage serverless functions to process events from various sources, such as message queues,
databases, IoT devices, and external APIs, enabling reactive and scalable applications.
7. DevOps Integration: Serverless computing is reshaping DevOps practices by enabling
developers to automate and streamline the deployment and management of serverless
applications using infrastructure as code (IaC) tools and continuous integration/continuous
deployment (CI/CD) pipelines. This accelerates the development lifecycle and improves
collaboration between development and operations teams.
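The event-driven pattern from point 6 can be sketched concretely: a serverless function triggered by an object upload receives an event describing the new objects. The function below parses a simplified version of the S3 notification shape (`Records[].s3.bucket.name` and `Records[].s3.object.key`); real events carry many more fields.

```python
def process_s3_event(event):
    """Extract (bucket, key) pairs from an S3-style event notification,
    as a serverless function triggered by object uploads would."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

sample_event = {  # simplified shape of an S3 put-notification
    "Records": [
        {"s3": {"bucket": {"name": "my-demo-bucket"},
                "object": {"key": "report.pdf"}}}
    ]
}
print(process_s3_event(sample_event))  # [('my-demo-bucket', 'report.pdf')]
```

Because the function is stateless and driven entirely by its input event, the platform can scale it out to as many concurrent invocations as there are events, which is what makes the architecture reactive and scalable.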