GCCF Unit 3

Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purposes of RMK Group of Educational Institutions. If you have received this document through email in error, please notify the system manager. This document contains proprietary information and is intended only for the respective group or learning community. If you are not the addressee, you should not disseminate, distribute, or copy it through e-mail. Please notify the sender immediately by e-mail if you have received this document by mistake, and delete it from your system. If you are not the intended recipient, you are notified that disclosing, copying, distributing, or taking any action in reliance on the contents of this information is strictly prohibited.
20CS929
GOOGLE CLOUD COMPUTING FOUNDATIONS

Department : CSE

Batch/Year : 2021-2025 / III Year

Created by:

Ms. A. Jasmine Gilda, Assistant Professor / CSE


Ms. T. Sumitha, Assistant Professor / CSE

Date : 04.09.2023
1. CONTENTS

S. No. Contents

1 Contents

2 Course Objectives

3 Pre-Requisites

4 Syllabus

5 Course outcomes

6 CO- PO/PSO Mapping

7 Lecture Plan

8 Activity based learning

9 Lecture Notes

10 Assignments

11 Part A Questions & Answers

12 Part B Questions

13 Supportive online Certification courses

14 Real time Applications

15 Contents beyond the Syllabus

16 Assessment Schedule

17 Prescribed Text Books & Reference Books

18 Mini Project Suggestions


2. COURSE OBJECTIVES

 To describe the different ways a user can interact with Google Cloud.
 To discover the different compute options in Google Cloud and implement a variety of structured and unstructured storage models.
 To discuss the different application managed service options in the cloud and outline how security in the cloud is administered in Google Cloud.
 To demonstrate how to build secure networks in the cloud and identify cloud automation and management tools.
 To determine a variety of managed big data services in the cloud.
3. PRE-REQUISITES

• Pre-requisite Chart

20CS929 – GOOGLE CLOUD COMPUTING FOUNDATIONS

Pre-requisites:
• 20CS404 – OPERATING SYSTEMS
• 20IT403 – DATABASE MANAGEMENT SYSTEMS
4. SYLLABUS
20CS929 – GOOGLE CLOUD COMPUTING FOUNDATIONS (L T P C: 3 0 0 3)
UNIT I INTRODUCTION TO GOOGLE CLOUD 9
Cloud Computing - Cloud Versus Traditional Architecture - IaaS, PaaS, and SaaS - Google Cloud
Architecture - The GCP Console - Understanding projects - Billing in GCP - Install and configure
Cloud SDK - Use Cloud Shell - GCP APIs - Cloud Console Mobile App.

UNIT II COMPUTE AND STORAGE 9
Compute options in the cloud - Exploring IaaS with Compute Engine - Configuring elastic apps
with autoscaling - Exploring PaaS with App Engine - Event driven programs with Cloud Functions
- Containerizing and orchestrating apps with Google Kubernetes Engine - Storage options in the
cloud - Structured and unstructured storage in the cloud - Unstructured storage using Cloud
Storage - SQL managed services - Exploring Cloud SQL - Cloud Spanner as a managed service
-NoSQL managed service options - Cloud Datastore, a NoSQL document store - Cloud Bigtable
as a NoSQL option.
UNIT III APIs AND SECURITY IN THE CLOUD 9
The purpose of APIs - Cloud Endpoints - Using Apigee Edge - Managed message services - Cloud
Pub/Sub - Introduction to security in the cloud - The shared security model - Encryption options
- Authentication and authorization with Cloud IAM - Identify Best Practices for Authorization
using Cloud IAM.
UNIT IV NETWORKING, AUTOMATION AND MANAGEMENT TOOLS 9
Introduction to networking in the cloud - Defining a Virtual Private Cloud - Public and private IP
address basics - Google’s network architecture - Routes and firewall rules in the cloud - Multiple
VPC networks - Building hybrid clouds using VPNs, interconnecting, and direct peering - Different
options for load balancing - Introduction to Infrastructure as Code - Cloud Deployment Manager
- Monitoring and managing your services, applications,
and infrastructure - Stackdriver.
UNIT V BIG DATA AND MACHINE LEARNING SERVICES 9
Introduction to big data managed services in the cloud - Leverage big data operations with
Cloud Dataproc - Build Extract, Transform, and Load pipelines using Cloud Dataflow - BigQuery,
Google’s Enterprise Data Warehouse - Introduction to machine learning in the cloud - Building
bespoke machine learning models with AI Platform - Cloud AutoML - Google’s pre-trained
machine learning APIs.
TOTAL: 45 PERIODS
5. COURSE OUTCOMES

At the end of this course, the students will be able to:

CO1: Describe the different ways a user can interact with Google Cloud.
CO2: Discover the different compute options in Google Cloud and implement a variety of structured and unstructured storage models.
CO3: Discuss the different application managed service options in the cloud and outline how security in the cloud is administered in Google Cloud.
CO4: Demonstrate how to build secure networks in the cloud and identify cloud automation and management tools.
CO5: Discover a variety of managed big data services in the cloud.
6. CO - PO / PSO MAPPING

| CO (HKL) | PO-1 | PO-2 | PO-3 | PO-4 | PO-5 | PO-6 | PO-7 | PO-8 | PO-9 | PO-10 | PO-11 | PO-12 | PSO1 | PSO2 | PSO3 |
| (PO level) | K3 | K4 | K5 | K5 | K3,K4,K5 | A3 | A2 | A3 | A3 | A3 | A3 | A2 | | | |
| C306.1 (K3) | 2 | 1 | 1 | - | - | - | - | - | - | 2 | 2 | 2 | 2 | 2 | 2 |
| C306.2 (K3) | 3 | 3 | 3 | - | - | - | - | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| C306.3 (K3) | 3 | 3 | 3 | - | - | 2 | - | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| C306.4 (K3) | 3 | 3 | 3 | - | - | - | - | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| C306.5 (K3) | 3 | 3 | 3 | - | - | 2 | - | - | 2 | 2 | 2 | 2 | 2 | 2 | 2 |

Correlation Level:
1. Slight (Low)
2. Moderate (Medium)
3. Substantial (High)
If there is no correlation, put "-".
7. LECTURE PLAN

| Sl. No. | Topic | Periods | Proposed Date | Actual Date | CO | Taxonomy Level | Mode of Delivery |
| 1 | The purpose of APIs | 1 | 04.09.2023 | | CO3 | K2 | Chalk & talk |
| 2 | Cloud Endpoints | 1 | 07.09.2023 | | CO3 | K3 | PPT/Demo |
| 3 | Using Apigee Edge - Managed message services | 1 | 08.09.2023 | | CO3 | K2 | PPT/Demo |
| 4 | Cloud Pub/Sub | 1 | 20.09.2023 | | CO3 | K3 | PPT/Demo |
| 5 | Introduction to security in the cloud - The shared security model | 1 | 21.09.2023 | | CO3 | K2 | PPT |
| 6 | Encryption options | 1 | 22.09.2023 | | CO3 | K2 | PPT/Demo |
| 7 | Authentication and authorization with Cloud IAM | 1 | 23.09.2023 | | CO3 | K3 | PPT/Demo |
| 8 | Lab: User Authentication: Cloud Identity-Aware Proxy | 1 | 27.09.2023 | | CO3 | K3 | Demo |
| 9 | Identify Best Practices for Authorization using Cloud IAM | 1 | 29.09.2023 | | CO3 | K2 | PPT/Demo |
8. ACTIVITY BASED LEARNING

Role play on the topic authentication and authorization with IAM.
9. UNIT III - LECTURE NOTES
APIs AND SECURITY IN THE CLOUD

THE PURPOSE OF APIs:

Introduction to APIs:

 An application programming interface (API) is a way for two or more computer programs to communicate with each other. It is a type of software interface, offering a service to other pieces of software.
 A document or standard that describes how to build or use such a connection or interface is called an API specification.
 APIs are used to simplify the way different, disparate software resources communicate.

How an API works:

 An API is a set of defined rules that explain how computers or applications communicate with one another.
 APIs sit between an application and the web server, acting as an intermediary layer that processes data transfer between systems.
 A client application initiates an API call to retrieve information—also known as a request. The request travels from the application to the web server via the API's Uniform Resource Identifier (URI) and includes a request verb, headers, and sometimes a request body.
 After receiving a valid request, the API makes a call to the external program or web server. The server sends a response to the API with the requested information.
 The API transfers the data to the initial requesting application.
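The request/response cycle above can be sketched as a tiny simulation. All names here (handle_request, BOOKS) are hypothetical illustrations, not a real framework:

```python
# A tiny simulation of the request/response cycle described above.
# handle_request and BOOKS are hypothetical names used only for illustration.

BOOKS = {"1": {"title": "Cloud Basics"}}   # stands in for the backend data store

def handle_request(verb, uri, headers=None, body=None):
    """The 'API layer': validates the request, then talks to the backend."""
    if not headers or "Authorization" not in headers:
        return {"status": 401, "body": "missing credentials"}   # reject early
    resource_id = uri.rsplit("/", 1)[-1]
    if verb == "GET":
        record = BOOKS.get(resource_id)
        if record is None:
            return {"status": 404, "body": None}
        return {"status": 200, "body": record}   # data flows back to the client
    return {"status": 405, "body": "method not allowed"}

# The client initiates an API call: verb + URI + headers (and sometimes a body).
response = handle_request("GET", "/books/1", headers={"Authorization": "Bearer t0k3n"})
print(response["status"])   # 200
print(response["body"])     # {'title': 'Cloud Basics'}
```

Note that the client never touches BOOKS directly; the API layer decouples it from the backend, which is the abstraction the next paragraph describes.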


APIs offer security by design because their position as middleman facilitates the
abstraction of functionality between two systems—the API endpoint decouples the
consuming application from the infrastructure providing the service. API calls usually
include authorization credentials to reduce the risk of attacks on the server, and an
API gateway can limit access to minimize security threats. Also, during the exchange,
HTTP headers, cookies, or query string parameters provide additional security layers
to the data.

For example, consider an API offered by a payment processing service. Customers can
enter their card details on the frontend of an application for an ecommerce store. The
payment processor doesn’t require access to the user’s bank account; the API creates a
unique token for this transaction and includes it in the API call to the server. This ensures
a higher level of security against potential hacking threats.
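The tokenization idea in the payment example can be sketched as follows. This is a hedged toy, not a real payment API; all names (tokenize, charge, _vault) are made up for illustration:

```python
# Hypothetical sketch of the tokenization pattern described above: the
# e-commerce backend only ever handles an opaque token, never card details.
import secrets

_vault = {}   # held by the payment processor, invisible to the merchant

def tokenize(card_number):
    """Exchange sensitive card details for a single-use opaque token."""
    token = secrets.token_hex(16)   # random hex string, unrelated to the card
    _vault[token] = card_number
    return token

def charge(token, amount_cents):
    """The API call to the server carries only the token; the processor
    resolves it internally. The token is consumed on first use."""
    return _vault.pop(token, None) is not None

token = tokenize("4111 1111 1111 1111")
print(charge(token, 2500))    # True  (first use succeeds)
print(charge(token, 2500))    # False (single-use token already consumed)
```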

REST APIs:

 REpresentational State Transfer, or REST, is currently the most popular architectural style for services.
 It outlines a key set of constraints and agreements that a service must comply with. If a service complies with these REST constraints, it's said to be RESTful.
 APIs intended to be spread widely to consumers and deployed to devices with limited computing resources, like mobile, are well suited to a REST structure.
 REST APIs use HTTP requests to perform GET, PUT, POST, and DELETE
operations.
 For example, a REST API would use a GET request to retrieve a record, a POST
request to create one, a PUT request to update a record, and a DELETE request
to delete one.
 All HTTP methods can be used in API calls. A well-designed REST API is similar to
a website running in a web browser with built-in HTTP functionality.
 The state of a resource at any particular instant, or timestamp, is known as the
resource representation.
 This information can be delivered to a client in virtually any format including
JavaScript Object Notation (JSON), HTML, XLT, Python, PHP, or plain text.
 JSON is popular because it’s readable by both humans and machines—and it is
programming language-agnostic.
 Request headers and parameters are also important in REST API calls because
they include important identifier information such as metadata, authorizations,
uniform resource identifiers (URIs), caching, cookies and more.

 Request headers and response headers, along with conventional HTTP status
codes, are used within well-designed REST APIs.
 One of the main reasons REST APIs work well with the cloud is their stateless nature. State information does not need to be stored or referenced for the API to run.
 An authorization framework like OAuth 2.0 can help limit the privileges of third-
party applications.
 Using a timestamp in the HTTP header, an API can also reject any request that
arrives after a certain time period.
 Parameter validation and JSON Web Tokens are other ways to ensure that only
authorized clients can access the API.
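The verb semantics above (POST to create, GET to retrieve, PUT to update, DELETE to delete) can be sketched with a minimal in-memory store. The names records and rest_call are illustrative, not a real framework:

```python
# A minimal in-memory sketch of REST verb semantics: POST creates,
# GET retrieves, PUT updates, DELETE removes. Illustrative names only.
import json

records = {}
_next_id = 1

def rest_call(method, path, body=None):
    global _next_id
    rid = path.rsplit("/", 1)[-1]
    if method == "POST":                      # create a record
        rid = str(_next_id); _next_id += 1
        records[rid] = body
        return 201, json.dumps({"id": rid})
    if method == "GET":                       # retrieve a representation
        return (200, json.dumps(records[rid])) if rid in records else (404, "")
    if method == "PUT":                       # update (replace representation)
        records[rid] = body
        return 200, json.dumps(body)
    if method == "DELETE":                    # delete the record
        return (204, "") if records.pop(rid, None) is not None else (404, "")
    return 405, ""

status, payload = rest_call("POST", "/records", {"name": "alpha"})
print(status)                                     # 201
rid = json.loads(payload)["id"]
print(rest_call("GET", f"/records/{rid}")[0])     # 200
print(rest_call("DELETE", f"/records/{rid}")[0])  # 204
print(rest_call("GET", f"/records/{rid}")[0])     # 404
```

The responses are JSON here, matching the point above that JSON is a popular, language-agnostic representation format.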

Challenges of deploying and managing APIs:

When deploying and managing APIs on your own, there are several issues to consider.

 Interface Definition
 Authentication and Authorization
 Logging and Monitoring
 Management and Scalability
CLOUD ENDPOINTS:

Endpoints is an API management system that helps you secure, monitor, analyze, and set quotas on your APIs using the same infrastructure Google uses for its own APIs.

Cloud Endpoints is a distributed API management system that uses a distributed Extensible Service Proxy, a service proxy that runs in its own Docker container. It helps to create and maintain the most demanding APIs with low latency and high performance. After you deploy your API to Endpoints, you can use the Cloud Endpoints Portal to create a developer portal, a website that users of your API can access to view documentation and interact with your API. Cloud Endpoints provides an API console, hosting, logging, monitoring, and other features to help you create, share, maintain, and secure your APIs. Cloud Endpoints supports applications running in App Engine, Google Kubernetes Engine, and Compute Engine. Clients include Android, iOS, and JavaScript.

The Endpoints options:

To have your API managed by Cloud Endpoints, you have three options, depending on
where your API is hosted and the type of communications protocol your API uses:

 Cloud Endpoints for OpenAPI
 Cloud Endpoints for gRPC
 Cloud Endpoints Frameworks for the App Engine standard environment

Cloud Endpoints for OpenAPI

 Endpoints works with the Extensible Service Proxy (ESP) and the Extensible Service
Proxy V2 (ESPv2) to provide API management.
 Endpoints supports version 2 of the OpenAPI Specification, the industry standard for defining REST APIs.
 Cloud Endpoints supports APIs that are described using version 2.0 of the OpenAPI
specification. API can be implemented using any publicly available REST
framework such as Django or Jersey. API can be described in a JSON or YAML file
referred to as an OpenAPI document.
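For illustration, a minimal OpenAPI 2.0 document of the kind described above might look like the YAML sketch below. The API title and host are hypothetical placeholders, not values from this course:

```yaml
# Minimal illustrative OpenAPI 2.0 (Swagger) document sketch.
swagger: "2.0"
info:
  title: Example Echo API            # hypothetical name
  version: "1.0.0"
host: echo-api.endpoints.example-project.cloud.goog   # hypothetical host
schemes:
  - https
paths:
  /echo:
    post:
      operationId: echoMessage
      responses:
        "200":
          description: Echoed message
```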

Extensible Service Proxy:

The Extensible Service Proxy (ESP) is an NGINX-based high-performance, scalable proxy that runs in front of an OpenAPI or gRPC API backend and provides API management features such as authentication, monitoring, and logging.

Extensible Service Proxy V2:

The Extensible Service Proxy V2 (ESPv2) is an Envoy-based high-performance, scalable proxy that runs in front of an OpenAPI or gRPC API backend and provides API management features such as authentication, monitoring, and logging.

ESPv2 supports version 2 of the OpenAPI Specification and gRPC Specifications.

Cloud Endpoints for gRPC

gRPC is a high-performance, open-source universal RPC framework developed by Google. In gRPC, a client application can directly call methods on a server application on a different machine as if it were a local object, making it easier to create distributed applications and services.

With Endpoints for gRPC, you can use the API management capabilities of Endpoints to
add an API console, monitoring, hosting, tracing, authentication, and more to your gRPC
services. In addition, once you specify special mapping rules, ESP and ESPv2 translate
RESTful JSON over HTTP into gRPC requests. This means that you can deploy a gRPC
server managed by Endpoints and call its API using a gRPC or JSON/HTTP client, giving
you much more flexibility and ease of integration with other systems.
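As an illustration of the interface-definition side of gRPC, a minimal service description in proto3 might look like this sketch (the service and message names are hypothetical):

```proto
// Minimal illustrative gRPC service definition (proto3); names are hypothetical.
syntax = "proto3";

package demo;

service Greeter {
  // A client on another machine can call this as if it were a local method.
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

With the mapping rules mentioned above, ESP/ESPv2 can expose a method like SayHello to plain JSON/HTTP clients as well as native gRPC clients.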
Cloud Endpoints Frameworks

Cloud Endpoints Frameworks is a web framework for the App Engine standard Python 2.7
and Java 8 runtime environments. Cloud Endpoints Frameworks provides the tools and
libraries that allow you to generate REST APIs and client libraries for your application.

Endpoints Frameworks includes a built-in API gateway that provides API management
features that are comparable to the features that ESP provides for Endpoints for OpenAPI
and Endpoints for gRPC.

Endpoints Frameworks intercepts all requests and performs any necessary checks (such as authentication) before forwarding the request to the API backend. When the backend responds, Endpoints Frameworks gathers and reports telemetry. Metrics for the API can be viewed on the Endpoints Services page in the Google Cloud console.

Endpoints Frameworks can be used with or without API management functionality.


APIGEE EDGE:

Apigee Edge is a platform for developing and managing APIs. By fronting services with a
proxy layer, Edge provides an abstraction or facade for your backend service APIs and
provides security, rate limiting, quotas, analytics, and more.

Apigee is an API gateway management framework owned by Google which helps in


exchanging data between different cloud applications and services. Many services and
sites available to the users are delivered through RESTful APIs, API gateways act as a
medium to connect these sites and services with data and feeds, and proper
communication capabilities. In simple words, Apigee is a tool to manage an API gateway
for developing, deploying, and producing user-friendly apps.

High-level architecture of Apigee:


Apigee consists of the following primary components:

 Apigee services: The APIs that you use to create, manage, and deploy your API
proxies.

 Apigee runtime: A set of containerized runtime services in a Kubernetes cluster


that Google maintains. All API traffic passes through and is processed by these
services.

In addition, Apigee uses other components including:

 GCP services: Provides identity management, logging, analytics, metrics, and


project management functions.

 Back-end services: Used by your apps to provide runtime access to data for
your API proxies.

Flavors of Apigee:

Apigee comes in the following flavors:

Apigee: A cloud version hosted by Apigee in which Apigee maintains the environment, allowing you to concentrate on building your services and defining the APIs to those services.

Apigee hybrid: A hybrid version consisting of a runtime plane installed on-premises or in a cloud provider of your choice, and a management plane running in Apigee's cloud. In this model, API traffic and data are confined within your own enterprise-approved boundaries.
Make services available through Apigee:

Apigee enables you to provide secure access to your services with a well-defined API that
is consistent across all of your services, regardless of service implementation. A consistent
API:

 Makes it easy for app developers to consume your services.
 Enables you to change the backend service implementation without affecting the public API.
 Enables you to take advantage of the analytics, developer portal, and other features built into Apigee.

The following image shows an architecture with Apigee handling the requests from client
apps to your backend services:
Rather than having app developers consume your services directly, they access an API
proxy created on Apigee. The API proxy functions as a mapping of a publicly available
HTTP endpoint to your backend service. By creating an API proxy, you let Apigee handle
the security and authorization tasks required to protect your services, as well as to analyze
and monitor those services.

Because app developers make HTTP requests to an API proxy, rather than directly to
your services, developers do not need to know anything about the implementation of
your services. All the developer needs to know is:

 The URL of the API proxy endpoint.
 Any query parameters, headers, or body parameters passed in a request.
 Any required authentication and authorization credentials.
 The format of the response, including the response data format, such as XML or JSON.
The API proxy isolates the app developer from your backend service. Therefore, you are
free to change the service implementation as long as the public API remains consistent.
For example, you can change a database implementation, move your services to a new
host, or make any other changes to the service implementation. By maintaining a
consistent frontend API, existing client apps will continue to work regardless of changes
on the backend.

API Gateway:

API Gateway enables you to provide secure access to your backend services through a well-defined REST API that is consistent across all of your services, regardless of the service implementation. Clients consume your REST APIs to implement standalone apps for a mobile device or tablet, through apps running in a browser, or through any other type of app that can make a request to an HTTP endpoint.
MANAGED MESSAGE SERVICES:

Messaging services provide the interconnectivity between components and applications that are written in different languages and hosted in the same cloud, multiple clouds, or on-premises.

PUB/SUB:

Pub/Sub allows services to communicate asynchronously, with latencies on the order of 100 milliseconds.

Pub/Sub is used for streaming analytics and data integration pipelines to ingest and
distribute data. It's equally effective as a messaging-oriented middleware for service
integration or as a queue to parallelize tasks.

Pub/Sub enables you to create systems of event producers and consumers, called
publishers and subscribers. Publishers communicate with subscribers asynchronously by
broadcasting events, rather than by synchronous remote procedure calls (RPCs).

Publishers send events to the Pub/Sub service, without regard to how or when these
events are to be processed. Pub/Sub then delivers events to all the services that react to
them. In systems communicating through RPCs, publishers must wait for subscribers to
receive the data. However, the asynchronous integration in Pub/Sub increases the
flexibility and robustness of the overall system.

Types of Pub/Sub services:

Pub/Sub consists of two services:

Pub/Sub service: This messaging service is the default choice for most users and
applications. It offers the highest reliability and largest set of integrations, along with
automatic capacity management. Pub/Sub guarantees synchronous replication of all data
to at least two zones and best-effort replication to a third additional zone.

Pub/Sub Lite service: A separate but similar messaging service built for lower cost. It
offers lower reliability compared to Pub/Sub. It offers either zonal or regional topic
storage. Zonal Lite topics are stored in only one zone. Regional Lite topics replicate data
to a second zone asynchronously. Also, Pub/Sub Lite requires you to pre-provision and
manage storage and throughput capacity. Consider Pub/Sub Lite only for applications
where achieving a low cost justifies some additional operational work and lower reliability.

The Basics of a Publish/Subscribe Service:

 Topic. A named resource to which messages are sent by publishers.
 Subscription. A named resource representing the stream of messages from a single, specific topic, to be delivered to the subscribing application.
 Message. The combination of data and (optional) attributes that a publisher
sends to a topic and is eventually delivered to subscribers.
 Message attribute. A key-value pair that a publisher can define for a message.
For example, key iana.org/language_tag and value en could be added to

messages to mark them as readable by an English-speaking subscriber.


 Publisher. An application that creates and sends messages to a single or multiple
topics.
 Subscriber. An application with a subscription to a single or multiple topics to
receive messages from it.
 Acknowledgment (or "ack"). A signal sent by a subscriber to Pub/Sub after it
has received a message successfully. Acknowledged messages are removed from
the subscription message queue.
 Push and pull. The two message delivery methods. A subscriber receives
messages either by Pub/Sub pushing them to the subscriber chosen endpoint, or
by the subscriber pulling them from the service.
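The vocabulary above (topic, subscription, message, attributes, ack, pull) can be modeled with a small toy. The classes and methods below are illustrative sketches, not the real google-cloud-pubsub client library:

```python
# Toy model of the Pub/Sub concepts above: topics fan messages out to every
# attached subscription; subscribers pull and then ack. Illustrative only.
from collections import deque

class Subscription:
    def __init__(self, name):
        self.name = name
        self.queue = deque()                 # messages awaiting acknowledgment

    def pull(self):
        """Pull delivery: the subscriber asks for the next message (or None)."""
        return self.queue[0] if self.queue else None

    def ack(self, message):
        """Acknowledged messages are removed from the subscription's queue."""
        if self.queue and self.queue[0] == message:
            self.queue.popleft()

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscriptions = []

    def subscribe(self, name):
        sub = Subscription(name)
        self.subscriptions.append(sub)
        return sub

    def publish(self, data, attributes=None):
        """Every subscription attached to the topic gets a copy (fan-out)."""
        message = {"data": data, "attributes": attributes or {}}
        for sub in self.subscriptions:
            sub.queue.append(message)

topic = Topic("orders")
billing = topic.subscribe("billing")
shipping = topic.subscribe("shipping")
topic.publish("order-42", {"iana.org/language_tag": "en"})

msg = billing.pull()
print(msg["data"])              # order-42
billing.ack(msg)
print(billing.pull())           # None (acked messages are removed)
print(shipping.pull()["data"])  # order-42 (each subscription has its own copy)
```

Acking on one subscription does not affect the other, which mirrors the diagram discussion that follows: each subscription independently receives the full message stream.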

The following diagram shows the basic flow of messages through Pub/Sub:

In this scenario, there are two publishers publishing messages on a single topic. There
are two subscriptions to the topic. The first subscription has two subscribers, meaning
messages will be load-balanced across them, with each subscriber receiving a subset
of the messages. The second subscription has one subscriber that will receive all of
the messages. The bold letters represent messages. Message A comes from Publisher
1 and is sent to Subscriber 2 via Subscription 1, and to Subscriber 3 via Subscription
2. Message B comes from Publisher 2 and is sent to Subscriber 1 via Subscription 1
and to Subscriber 3 via Subscription 2.

Integrations:

Pub/Sub has many integrations with other Google Cloud products to create a fully
featured messaging system:

Stream processing and data integration. Supported by Dataflow, including Dataflow templates and SQL, which allow processing and data integration into BigQuery and data lakes on Cloud Storage. Dataflow templates for moving data from Pub/Sub to Cloud Storage, BigQuery, and other products are available in the Pub/Sub and Dataflow UIs in the Google Cloud console. Integration with Apache Spark, particularly when managed with Dataproc, is also available. Visual composition of integration and processing pipelines running on Spark + Dataproc can be accomplished with Data Fusion.

Monitoring, Alerting and Logging. Supported by Monitoring and Logging products.

Authentication and IAM. Pub/Sub relies on a standard OAuth authentication used by other Google Cloud products and supports granular IAM, enabling access control for individual resources.

APIs. Pub/Sub uses standard gRPC and REST service API technologies along with client
libraries for several languages.

Triggers, notifications, and webhooks. Pub/Sub offers push-based delivery of messages as HTTP POST requests to webhooks. You can implement workflow automation using Cloud Functions or other serverless products.

Orchestration. Pub/Sub can be integrated into multistep serverless Workflows declaratively. Big data and analytic orchestration is often done with Cloud Composer, which supports Pub/Sub triggers. You can also integrate Pub/Sub with Application Integration (Preview), an Integration-Platform-as-a-Service (iPaaS) solution. Application Integration provides a Pub/Sub trigger to start integrations.

Integration Connectors. (Preview) These connectors let you connect to various data
sources. With connectors, both Google Cloud services and third-party business
applications are exposed to your integrations through a transparent, standard interface.
For Pub/Sub, you can create a Pub/Sub connection for use in your integrations.
Publisher-subscriber relationships can be one-to-many (fan-out), many-to-one (fan-in),
and many-to-many, as shown in the following diagram:

The following diagram illustrates how a message passes from a publisher to a subscriber.
For push delivery, the acknowledgment is implicit in the response to the push request,
while for pull delivery it requires a separate RPC.
Pub/Sub Basic Architecture:

The system is designed to be horizontally scalable, where an increase in the number of topics, subscriptions, or messages can be handled by increasing the number of instances of running servers.

Pub/Sub servers run in all Google Cloud regions around the world. This allows the service
to offer fast, global data access, while giving users control over where messages are
stored. Cloud Pub/Sub offers global data access in that publisher and subscriber clients
are not aware of the location of the servers to which they connect or how those services
route the data.

Pub/Sub’s load balancing mechanisms direct publisher traffic to the nearest Google Cloud
data center where data storage is allowed.
Any individual message is stored in a single region. However, a topic may have messages
stored in many regions. When a subscriber client requests messages published to this
topic, it connects to the nearest server which aggregates data from all messages
published to the topic for delivery to the client.

Pub/Sub is divided into two primary parts: the data plane, which handles moving
messages between publishers and subscribers, and the control plane, which handles
the assignment of publishers and subscribers to servers on the data plane. The servers
in the data plane are called forwarders, and the servers in the control plane are called
routers. When publishers and subscribers are connected to their assigned forwarders,
they do not need any information from the routers (as long as those forwarders remain
accessible). Therefore, it is possible to upgrade the control plane of Pub/Sub without
affecting any clients that are already connected and sending or receiving messages.

Control Plane:

The Pub/Sub control plane distributes clients to forwarders in a way that provides
scalability, availability, and low latency for all clients. Any forwarder is capable of serving
clients for any topic or subscription. When a client connects to Pub/Sub, the router
decides the data centers the client should connect to based on shortest network distance,
a measure of the latency on the connection between two points.

The router provides the client with an ordered list of forwarders it can consider connecting
to. This ordered list may change based on forwarder availability and the shape of the
load from the client.

A client takes this list of forwarders and connects to one or more of them. The client prefers connecting to the forwarders most recommended by the router, but also takes into consideration any failures that have occurred.
Data Plane:

The data plane receives messages from publishers and sends them to clients.

In general, a message goes through these steps:

1. A publisher sends a message.
2. The message is written to storage.
3. Pub/Sub sends an acknowledgement to the publisher that it has received the message and guarantees its delivery to all attached subscriptions.
4. At the same time as writing the message to storage, Pub/Sub delivers it to subscribers.
5. Subscribers send an acknowledgement to Pub/Sub that they have processed the message.
6. Once at least one subscriber for each subscription has acknowledged the message, Pub/Sub deletes the message from storage.
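The six-step lifecycle above can be sketched as follows. This is a simplified toy model (the names publish, subscriber_ack, storage, and pending_acks are hypothetical), showing only how a message stays in storage until every subscription has acknowledged it:

```python
# Sketch of the data-plane steps above: store the message, ack the publisher,
# deliver, collect per-subscription acks, then delete from storage.
# All names are illustrative, not the real service internals.

storage = {}          # message_id -> message data (step 2)
pending_acks = {}     # message_id -> subscriptions that have not acked yet
_next_id = 0

def publish(message, subscriptions):
    """Steps 1-4: store the message, ack the publisher, deliver to subscribers."""
    global _next_id
    _next_id += 1
    msg_id = _next_id
    storage[msg_id] = message                    # step 2: written to storage
    pending_acks[msg_id] = set(subscriptions)    # delivery owed to each sub
    return msg_id                                # step 3: publisher is acked

def subscriber_ack(msg_id, subscription):
    """Steps 5-6: record the subscriber ack; delete once every subscription acked."""
    pending_acks[msg_id].discard(subscription)
    if not pending_acks[msg_id]:
        del storage[msg_id]                      # step 6: removed from storage
        del pending_acks[msg_id]

mid = publish("hello", ["sub-1", "sub-2"])
subscriber_ack(mid, "sub-1")
print(mid in storage)     # True  (sub-2 has not acknowledged yet)
subscriber_ack(mid, "sub-2")
print(mid in storage)     # False (all subscriptions acked; message deleted)
```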

Different messages for a single topic and subscription can flow through many publishers,
subscribers, publishing forwarders, and subscribing forwarders. Publishers can publish to
multiple forwarders simultaneously and subscribers may connect to multiple subscribing
forwarders to receive messages. Therefore, the flow of messages through connections
among publishers, subscribers, and forwarders can be complex. The following diagram
shows how messages could flow for a single topic and subscription, where different colors indicate the different paths messages may take from publishers to subscribers:
INTRODUCTION TO SECURITY IN THE CLOUD:

Cloud security refers to a broad set of policies, technologies, applications, and controls
utilized to protect virtualized IP, data, applications, services, and the associated
infrastructure of cloud computing.

The five layers of protection Google provides to keep customers' data safe:

1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
At the hardware infrastructure layer:

Hardware design and provenance: Both the server boards and the networking
equipment in Google data centers are custom designed by Google. Google also designs
custom chips, including a hardware security chip that's currently being deployed on both
servers and peripherals.

Secure boot stack: Google server machines use various technologies to ensure that
they are booting the correct software stack, such as cryptographic signatures over the
BIOS, bootloader, kernel, and base operating system image.

Premises security: Google designs and builds its own data centers, which incorporate
multiple layers of physical security protections. Access to these data centers is limited to
only a small fraction of Google employees. Google also hosts some servers in third-party
data centers, where we ensure that there are Google-controlled physical security
measures on top of the security layers provided by the data center operator.

At the service deployment layer:

Encryption of inter-service communication: Google’s infrastructure provides cryptographic privacy and integrity for remote procedure call (“RPC”) data on the network. Google’s services communicate with each other using RPC calls. The infrastructure automatically encrypts all infrastructure RPC traffic which goes between data centers. Google has started to deploy hardware cryptographic accelerators that will allow it to extend this default encryption to all infrastructure RPC traffic inside Google data centers.

User identity: Google’s central identity service, which usually manifests to end users as
the Google login page, goes beyond asking for a simple username and password. The
service also intelligently challenges users for additional information based on risk factors
such as whether they have logged in from the same device or a similar location in the
past. Users also have the option of employing secondary factors when signing in,
including devices based on the Universal 2nd Factor (U2F) open standard.

At the storage services layer:

Encryption at rest: Most applications at Google access physical storage (in other words,
“file storage”) indirectly by using storage services, and encryption (using centrally
managed keys) is applied at the layer of these storage services. Google also enables
hardware encryption support in hard drives and SSDs.

At the internet communication layer:

Google Front End (GFE): Google services that want to make themselves available on
the internet register themselves with an infrastructure service called the Google Front
End, which ensures that all TLS connections are terminated using a public-private key
pair and an X.509 certificate from a Certificate Authority (CA), and follows best
practices such as supporting perfect forward secrecy. The GFE also applies protections
against Denial-of-Service attacks.

Denial of Service (DoS) protection: The sheer scale of its infrastructure enables
Google to simply absorb many DoS attacks. Google also has multi-tier, multi-layer DoS
protections that further reduce the risk of any DoS impact on a service running behind a
GFE.
Finally, at Google’s operational security layer:

Intrusion detection: Rules and machine intelligence give Google’s operational security
teams warnings of possible incidents. Google conducts Red Team exercises to measure
and improve the effectiveness of its detection and response mechanisms.

Reducing insider risk: Google aggressively limits and actively monitors the activities of
employees who have been granted administrative access to the infrastructure.

Employee U2F use: To guard against phishing attacks against Google employees,
employee accounts require use of U2F-compatible security keys.

Software development practices: Google employs central source control and requires
two-party review of new code. Google also provides its developers with libraries that
prevent them from introducing certain classes of security bugs. Google also runs a
Vulnerability Rewards Program where we pay anyone who can discover and inform us of
bugs in our infrastructure or applications.

THE SHARED SECURITY MODEL

Cloud computing and storage provide users with capabilities to store and process
their data in third-party data centers. Organizations use the cloud in a variety of different
service models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models
(private, public, hybrid, and community).

Security concerns associated with cloud computing are typically categorized in two ways:
as security issues faced by cloud providers (organizations providing software-, platform-
, or infrastructure-as-a-service via the cloud) and security issues faced by their customers
(companies or organizations who host applications or store data on the cloud).

Security responsibilities are shared between the customer and Google Cloud.
The responsibility is shared, however, and is often detailed in a cloud provider's "shared
security responsibility model" or "shared responsibility model." The provider must ensure
that their infrastructure is secure and that their clients’ data and applications are
protected, while the user must take measures to fortify their application and use strong
passwords and authentication measures.

When a customer deploys an application to their on-premises infrastructure, they are
responsible for the security of the entire stack: from the physical security of the hardware
and the premises in which they are housed, through to the encryption of the data on
disk, the integrity of the network, all the way up to securing the content stored in those
applications.

But when they move an application to Google Cloud, Google handles many of the lower
layers of security, like the physical security, disk encryption, and network integrity.

The upper layers of the security stack, including the securing of data, remain the
customer’s responsibility. Google provides tools like the resource hierarchy and IAM to
help them define and implement policies, but ultimately this part is their responsibility.

Data access is usually the customer’s responsibility. They control who or what has access
to their data. Google Cloud provides tools that help them control this access, such as
Identity and Access Management, but these tools must be properly configured to protect
their data.

GOOGLE CLOUD ENCRYPTION OPTIONS

Several encryption options are available on Google Cloud, ranging from simple options
with limited control to options that offer greater control and flexibility at the cost of
more complexity.

The simplest option is Google Cloud default encryption, followed by customer-managed
encryption keys (CMEK), and the option that provides the most control:
customer-supplied encryption keys (CSEK).
A fourth option is to encrypt your data locally before you store it in the cloud. This is
often called client-side encryption.

Google Cloud encrypts data in transit and at rest by default. Data in transit is encrypted
using Transport Layer Security (TLS), and data at rest is encrypted with AES 256-bit
keys. The encryption happens automatically.

Customer-managed encryption keys (CMEK):

 With customer-managed encryption keys, you manage your encryption keys that
protect data on Google Cloud.
 Cloud Key Management Service, or Cloud KMS, automates and simplifies the
generation and management of encryption keys. The keys are managed by the
customer and never leave the cloud.


 Cloud KMS supports encryption, decryption, signing, and verification of data. It
supports both symmetric and asymmetric cryptographic keys and various popular
algorithms.

 Cloud KMS lets you rotate keys manually and also automate key rotation on a
time-based interval.
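The time-based rotation idea can be illustrated with a short sketch: given a key's creation time and a rotation period, compute when the next automatic rotation would occur. This is a conceptual illustration only; real rotation schedules are configured on the Cloud KMS key resource, and the function below is hypothetical.

```python
from datetime import datetime, timedelta

def next_rotation(created, rotation_period_days):
    """Return when the next time-based rotation would fire for a key
    created at `created` with the given rotation period (sketch)."""
    return created + timedelta(days=rotation_period_days)

created = datetime(2023, 1, 1)
print(next_rotation(created, 90))  # 2023-04-01 00:00:00
```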

Customer-supplied encryption keys (CSEK):

 Customer-supplied encryption keys give users more control over their keys, but
with greater management complexity.

 With CSEK, users supply their own AES 256-bit encryption keys and are responsible
for generating these keys.
 Users are responsible for storing the keys and providing them as part of Google
Cloud API calls.
 Google Cloud will use the provided key to encrypt the data before saving it. Google
guarantees that the key only exists in-memory and is discarded after use.
Persistent disks, such as those that back virtual machines, can be encrypted with
customer-supplied encryption keys. With CSEK for persistent disks, the data is encrypted
before it leaves the virtual machine. Even without CSEK or CMEK, persistent disks are still
encrypted. When a persistent disk is deleted, the keys are discarded, and the data is
rendered irrecoverable by traditional means.

Other encryption options:

To have more control over persistent disk encryption, users can create their own
persistent disks and redundantly encrypt them.

And finally, client-side encryption is always an option. With client-side encryption, users
encrypt data before they send it to Google Cloud. Neither the unencrypted data nor the
decryption keys leave their local device.
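As a minimal sketch of the client-side idea (encrypt locally, keep the key on the device, upload only ciphertext), the toy stream cipher below derives a keystream by hashing a key, a nonce, and a counter. This construction is for illustration of the workflow only and is not AES; production client-side encryption should use a vetted library (for example AES-GCM from the cryptography package, or Tink).

```python
import hashlib
import secrets

def keystream(key, nonce, length):
    # Derive a pseudo-random byte stream from (key, nonce, counter).
    # Toy construction for illustration only -- not a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key, blob):
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(32)           # the key never leaves the client
blob = encrypt(key, b"sensitive data")  # only 'blob' is uploaded to the cloud
assert decrypt(key, blob) == b"sensitive data"
```

The point of the sketch is the data flow: Google Cloud only ever sees `blob`, so neither the plaintext nor the key leaves the local device.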

AUTHENTICATION AND AUTHORIZATION WITH CLOUD IAM

Authentication is the process of determining the identity of the principal attempting to
access a resource.

Authorization is the process of determining whether the principal or application
attempting to access a resource has been authorized for that level of access.

Google provides many APIs and services, which require authentication to access.

Identity and Access Management (IAM):

 IAM lets you grant granular access to specific Google Cloud resources and helps
prevent access to other resources. IAM lets you adopt the security principle of
least privilege, which states that nobody should have more permissions than they
actually need.
 With IAM, you manage access control by defining who (identity) has what access
(role) for which resource. For example, Compute Engine virtual machine instances,
Google Kubernetes Engine (GKE) clusters, and Cloud Storage buckets are all
Google Cloud resources.
 The organizations, folders, and projects that you use to organize your resources
are also resources.
 In IAM, permission to access a resource isn't granted directly to the end user.
Instead, permissions are grouped into roles, and roles are granted to authenticated
principals.

 An allow policy, also known as an IAM policy, defines and enforces what roles are
granted to which principals. Each allow policy is attached to a resource. When an
authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is permitted.

Access management has three main parts:

 Principal. A principal can be a Google Account (for end users), a service account
(for applications and compute workloads), a Google group, or a Google Workspace
account or Cloud Identity domain that can access a resource. Each principal has
its own identifier, which is typically an email address.

 Role. A role is a collection of permissions. Permissions determine what operations
are allowed on a resource. When you grant a role to a principal, you grant all the
permissions that the role contains.

 Policy. The allow policy is a collection of role bindings that bind one or more
principals to individual roles. When you want to define who (principal) has what
type of access (role) on a resource, you create an allow policy and attach it to the
resource.
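The evaluation described above (principal, role binding, role, permissions) can be sketched as follows. The role and permission names mirror real IAM identifiers, but the role-to-permission mapping shown is an illustrative subset, and the evaluation logic is simplified (it ignores, for example, deny policies and resource hierarchy inheritance).

```python
# Illustrative subset of role -> permissions (not the full IAM catalog).
ROLES = {
    "roles/pubsub.publisher": {"pubsub.topics.publish"},
    "roles/viewer": {"pubsub.topics.get", "storage.objects.get"},
}

# An allow policy is a collection of role bindings attached to a resource.
allow_policy = {
    "bindings": [
        {"role": "roles/pubsub.publisher",
         "members": ["serviceAccount:app@my-project.iam.gserviceaccount.com"]},
        {"role": "roles/viewer",
         "members": ["user:alice@example.com"]},
    ]
}

def is_allowed(policy, principal, permission):
    """Simplified check: is the principal bound to any role that
    contains the requested permission?"""
    for binding in policy["bindings"]:
        if principal in binding["members"]:
            if permission in ROLES.get(binding["role"], set()):
                return True
    return False

print(is_allowed(allow_policy,
                 "serviceAccount:app@my-project.iam.gserviceaccount.com",
                 "pubsub.topics.publish"))   # True
print(is_allowed(allow_policy,
                 "user:alice@example.com",
                 "pubsub.topics.publish"))   # False - Viewer cannot publish
```

This mirrors the statement above: permission is never granted to the principal directly, only through a role named in a binding.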
Concepts related to identity:

In IAM, you grant access to principals. Principals can be of the following types:

 Google Account
 Service account
 Google group
 Google Workspace account
 Cloud Identity domain
 All authenticated users
 All users

 Google Account - represents a developer, an administrator, or any other person
who interacts with Google Cloud. Any email address that's associated with a Google
Account can be an identity, including gmail.com or other domains.
 Service account - an account that is designed to be used only by a service or
application, not by a regular user.
 Google group - A Google group is a named collection of Google Accounts and
service accounts. Every Google group has a unique email address that's associated
with the group. Google Groups are a convenient way to apply access controls to
a collection of users. You can grant and change access controls for a whole group
at once instead of granting or changing access controls one at a time for individual
users or service accounts.
 Google Workspace account - A Google Workspace account represents a virtual
group of all of the Google Accounts that it contains. Google Workspace accounts
are associated with your organization's internet domain name, such
as example.com. When you create a Google Account for a new user, such
as [email protected], that Google Account is added to the virtual group for
your Google Workspace account.
 Cloud Identity domain - A Cloud Identity domain is like a Google Workspace
account, because it represents a virtual group of all Google Accounts in an
organization. However, Cloud Identity domain users don't have access to Google
Workspace applications and features.
 All authenticated users - The value allAuthenticatedUsers is a special identifier
that represents all service accounts and all users on the internet who have
authenticated with a Google Account.

 All users - The value allUsers is a special identifier that represents anyone who is
on the internet, including authenticated and unauthenticated users.

Concepts related to access management:

When an authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is allowed.

Resource

 If a user needs access to a specific Google Cloud resource, you can grant the user
a role for that resource.

 IAM permissions can be granted at the project level.
 The permissions are then inherited by all resources within that project.
Permissions

Permissions determine what operations are allowed on a resource. In the IAM world,
permissions are represented in the form of service.resource.verb, for example,
pubsub.subscriptions.consume.

Permissions are not granted to users directly. Instead, the roles that contain the
appropriate permissions are identified, and then the roles are granted to the user.
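The service.resource.verb shape can be made concrete with a small parser. The helper below is hypothetical (not part of any Google library) and assumes the permission string has at least three dot-separated parts, as in the examples above.

```python
def parse_permission(permission):
    """Split an IAM permission string of the form service.resource.verb
    into its three parts (sketch; assumes well-formed input)."""
    service, resource, verb = permission.split(".", 2)
    return {"service": service, "resource": resource, "verb": verb}

print(parse_permission("pubsub.subscriptions.consume"))
# {'service': 'pubsub', 'resource': 'subscriptions', 'verb': 'consume'}
```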
Roles

A role is a collection of permissions. You cannot grant a permission to the user directly.
Instead, you grant them a role. When you grant a role to a user, you grant them all the
permissions that the role contains.

There are several kinds of roles in IAM:

 Basic roles: Basic roles are highly permissive roles that existed prior to the
introduction of IAM. Basic roles can be used to grant principals broad access to
Google Cloud resources. These roles are Owner, Editor, and Viewer.

roles/viewer (Viewer): Permissions for read-only actions that do not affect state, such
as viewing (but not modifying) existing resources or data.

roles/editor (Editor): All viewer permissions, plus permissions for actions that modify
state, such as changing existing resources. Note: the Editor role contains permissions
to create and delete resources for most Google Cloud services, but it does not contain
permissions to perform all actions for all services.

roles/owner (Owner): All Editor permissions, plus permissions for the following actions:

 Manage roles and permissions for a project and all resources within the project.
 Set up billing for a project.

 Predefined roles: Predefined roles give granular access to specific Google Cloud
resources. These roles are created and maintained by Google. For example, the
predefined role Pub/Sub Publisher (roles/pubsub.publisher) provides access to
only publish messages to a Pub/Sub topic.

Example Compute Engine roles:

Compute Admin (roles/compute.admin): Full control of all Compute Engine resources.
Permissions: compute.*, resourcemanager.projects.get, resourcemanager.projects.list,
serviceusage.quotas.get, serviceusage.services.get, serviceusage.services.list

Compute Image User (roles/compute.imageUser):
Permissions: compute.images.get, compute.images.getFromFamily, compute.images.list,
compute.images.useReadOnly, resourcemanager.projects.get,
resourcemanager.projects.list, serviceusage.quotas.get, serviceusage.services.get,
serviceusage.services.list

Compute Instance Admin (beta) (roles/compute.instanceAdmin):
Permissions: compute.acceleratorTypes.*, compute.addresses.createInternal,
compute.addresses.deleteInternal, compute.addresses.get, compute.addresses.list,
compute.disks.create, compute.disks.createSnapshot, compute.disks.delete,
compute.disks.get, compute.disks.list, compute.disks.resize,
compute.instanceGroupManagers.*, compute.instanceGroups.*,
compute.instanceTemplates.*, compute.instances.*, compute.regions.*, compute.zones.*

Compute Load Balancer Admin (roles/compute.loadBalancerAdmin):
Permissions: compute.addresses.*, compute.backendBuckets.*,
compute.backendServices.*, compute.forwardingRules.*, compute.globalAddresses.*,
compute.globalForwardingRules.*, compute.globalNetworkEndpointGroups.*,
compute.healthChecks.*, compute.httpHealthChecks.*, compute.httpsHealthChecks.*

 Custom roles: Roles that you create to tailor permissions to the needs of your
organization when predefined roles don't meet your needs. IAM also lets you
create custom IAM roles. Custom roles help you enforce the principle of least
privilege, because they help to ensure that the principals in your organization have
only the permissions that they need.

Service Accounts:

A service account is a special type of Google account intended to represent a non-human
user that needs to authenticate and be authorized to access data in Google APIs.

A service account is used by an application or compute workload, such as a Compute
Engine virtual machine (VM) instance, rather than a person. Applications use service
accounts to make authorized API calls, authorized as either the service account itself, or
as Google Workspace or Cloud Identity users through domain-wide delegation.

A service account is identified by its email address, which is unique to the account.

Service accounts are used in scenarios such as:

 Running workloads on virtual machines (VMs).
 Running workloads on on-premises workstations or data centers that call Google
APIs.

 Running workloads which are not tied to the lifecycle of a human user.

Types of service accounts:

User-managed service accounts

 You can create user-managed service accounts in your project using the IAM API,
the Google Cloud console, or the Google Cloud CLI. You are responsible for
managing and securing these accounts.

 By default, you can create up to 100 user-managed service accounts in a project.


 When you create a user-managed service account in your project, you choose a
name for the service account. This name appears in the email address that
identifies the service account, which uses the following format:

[email protected]
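The documented format for a user-managed service account's identifying email is SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com. The small helper below illustrates that format; the function itself is hypothetical, not part of any Google API.

```python
def service_account_email(name, project_id):
    """Build the email address that identifies a user-managed service
    account, following the documented format."""
    return f"{name}@{project_id}.iam.gserviceaccount.com"

print(service_account_email("build-bot", "my-project"))
# build-bot@my-project.iam.gserviceaccount.com
```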

Default service accounts

When you enable or use some Google Cloud services, they create user-managed service
accounts that enable the service to deploy jobs that access other Google Cloud resources.
These accounts are known as default service accounts.

Google-managed service accounts

 Some Google Cloud services need access to your resources so that they can act
on your behalf. For example, when you use Cloud Run to run a container, the
service needs access to any Pub/Sub topics that can trigger the container.
 To meet this need, Google creates and manages service accounts for many Google
Cloud services. These service accounts are known as Google-managed service
accounts. You might see Google-managed service accounts in your project's allow
policy, in audit logs, or on the IAM page in the Google Cloud console.
 Google-managed service accounts are not listed in the Service accounts page in
the Google Cloud console.

Managing service accounts:


 Service accounts can be thought of as both a resource and as an identity.
 When thinking of the service account as an identity, you can grant a role to a
service account, allowing it to access a resource (such as a project).
 When thinking of a service account as a resource, you can grant roles to other
users to access or manage that service account.

BEST PRACTICES FOR AUTHORIZATION USING CLOUD IAM:

 Use projects to group resources that share the same trust boundary.
 Check the policy granted on each resource and ensure to recognize the
inheritance.

 Because of inheritance, use the principle of least privilege when you grant roles.
 Finally, audit policies by using Cloud Audit Logs and audit the memberships of
groups that are used in policies.
10. ASSIGNMENT

1. Create a list of Pub/Sub topics and Pub/Sub subscriptions. Publish messages to
the topics. Use a pull subscriber to output individual topic messages.

2. Create 2 users. Login with the first user and assign a role to a second
user and remove assigned roles associated with Cloud IAM. More
specifically, you sign in with 2 different sets of credentials to
experience how granting and revoking permissions works from
Google Cloud Project Owner and Viewer roles.
11. PART A QUESTIONS AND ANSWERS
1. Define API.
 An application programming interface (API) is a way for two or more computer
programs to communicate with each other. It is a type of software interface,
offering a service to other pieces of software.


 A document or standard that describes how to build or use such a connection
or interface is called an API specification.

 APIs are used to simplify the way different, disparate, software resources
communicate.
2. How API works?
 A client application initiates an API call to retrieve information—also known as
a request. This request is processed from an application to the web server via
the API’s Uniform Resource Identifier (URI) and includes a request verb,
headers, and sometimes, a request body.
 After receiving a valid request, the API makes a call to the external program or
web server. The server sends a response to the API with the requested
information.

 The API transfers the data to the initial requesting application.


3. What is REST API?
 REpresentational State Transfer, or REST, is currently the most popular
architectural style for services.

 It outlines a key set of constraints and agreements that a service must comply
with. If a service complies with these REST constraints, it’s said to be RESTful.
 APIs intended to be spread widely to consumers and deployed to devices with
limited computing resources, like mobile, are well suited to a REST structure.
 REST APIs use HTTP requests to perform GET, PUT, POST, and DELETE
operations.
4. List the challenges in deploying and managing APIs.
When deploying and managing APIs on your own, there are several issues to consider.

 Interface Definition
 Authentication and Authorization
 Logging and Monitoring
 Management and Scalability
5. What is Cloud Endpoint?
Endpoints is an API management system that helps you secure, monitor, analyze,
and set quotas on your APIs using the same infrastructure Google uses for its own
APIs.
Cloud Endpoints is a distributed API management system that uses a distributed
Extensible Service Proxy, which is a service proxy that runs in its own Docker
container. It helps to create and maintain the most demanding APIs with low
latency and high performance.
6. What are the cloud endpoints options to manage API?
To have your API managed by Cloud Endpoints, you have three options, depending
on where your API is hosted and the type of communications protocol your API uses:

 Cloud Endpoints for OpenAPI


 Cloud Endpoints for gRPC
 Cloud Endpoints Frameworks for the App Engine standard environment
7. What is ESP?
The Extensible Service Proxy (ESP) is an Nginx-based high-performance, scalable
proxy that runs in front of an OpenAPI or gRPC API backend and provides API
management features such as authentication, monitoring, and logging.
8. What is Cloud Endpoint framework?
Cloud Endpoints Frameworks is a web framework for the App Engine
standard Python 2.7 and Java 8 runtime environments. Cloud Endpoints
Frameworks provides the tools and libraries that allow you to generate REST APIs
and client libraries for your application.

9. Define Apigee Edge.


Apigee Edge is a platform for developing and managing APIs. By fronting services
with a proxy layer, Edge provides an abstraction or facade for your backend service
APIs and provides security, rate limiting, quotas, analytics, and more. Apigee is an
API gateway management framework owned by Google which helps in exchanging
data between different cloud applications and services.

10. What are the components of Apigee?

Apigee consists of the following components:

 Apigee services: The APIs that you use to create, manage, and deploy your API
proxies.

 Apigee runtime: A set of containerized runtime services in a Kubernetes cluster


that Google maintains. All API traffic passes through and is processed by these
services.

 GCP services: Provides identity management, logging, analytics, metrics, and


project management functions.

 Back-end services: Used by your apps to provide runtime access to data for
your API proxies.

11. How do you make the services available through Apigee?


Apigee enables you to provide secure access to your services with a well-defined
API that is consistent across all of your services, regardless of service
implementation. A consistent API:

 Makes it easy for app developers to consume your services.


 Enables you to change the backend service implementation without affecting the
public API.
 Enables you to take advantage of the analytics, developer portal, and other
features built into Apigee.

12. What is API Gateway?


API Gateway enables you to provide secure access to your backend services
through a well-defined REST API that is consistent across all of your services,
regardless of the service implementation. Clients consume your REST APIS to
implement standalone apps for a mobile device or tablet, through apps running in
a browser, or through any other type of app that can make a request to an HTTP
endpoint.

13. What is the use of pub/sub?


Pub/Sub allows services to communicate asynchronously, with latencies on the
order of 100 milliseconds. Pub/Sub is used for streaming analytics and data
integration pipelines to ingest and distribute data. Pub/Sub enables you to create
systems of event producers and consumers, called publishers and subscribers.

Publishers communicate with subscribers asynchronously by broadcasting events.


14. What are the types of pub/sub services?
Pub/Sub consists of two services:
Pub/Sub service: This messaging service is the default choice for most users
and applications. It offers the highest reliability and largest set of integrations,
along with automatic capacity management.
Pub/Sub Lite service: A separate but similar messaging service built for lower
cost. It offers lower reliability compared to Pub/Sub. It offers either zonal or
regional topic storage.

15. Define topic and subscription.


Topic is a named resource to which messages are sent by publishers.
Subscription is a named resource representing the stream of messages from a
single, specific topic, to be delivered to the subscribing application.

16. Define publisher and subscriber.


Publisher is an application that creates and sends messages to a single or
multiple topics.
Subscriber is an application with a subscription to a single or multiple topics to
receive messages from it.

17. List the five layers of protection provided by Google.


The five layers of protection Google provides to keep customers' data safe:

1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
18. What is shared security model?
Security concerns associated with cloud computing are typically categorized in two
ways: as security issues faced by cloud providers and security issues faced by their
customers. Security responsibilities are shared between the customer and Google
Cloud. The provider must ensure that their infrastructure is secure and that their
clients’ data and applications are protected, while the user must take measures to
fortify their application and use strong passwords and authentication measures.

19. What are the encryption options available in google cloud?


Several encryption options are available on Google Cloud, ranging from simple options
with limited control to options that offer greater control and flexibility at the cost of
more complexity. The simplest option is Google Cloud default encryption, followed by
customer-managed encryption keys (CMEK), and the option that provides the most
control: customer-supplied encryption keys (CSEK).
20. What is the role of KMS?
 Cloud Key Management Service, or Cloud KMS, automates and simplifies the
generation and management of encryption keys. The keys are managed by the
customer and never leave the cloud.
 Cloud KMS supports encryption, decryption, signing, and verification of data. It
supports both symmetric and asymmetric cryptographic keys and various popular
algorithms.

 Cloud KMS lets you both rotate keys manually and automate the rotation of keys
on a time-based interval.
21. Define IAM.
IAM grants granular access to specific Google Cloud resources and helps prevent
access to other resources. IAM lets you adopt the security principle of least privilege,
which states that nobody should have more permissions than they actually need.
With IAM, you manage access control by defining who (identity) has what
access (role) for which resource.
22. How a role is related to permission?
A role is a collection of permissions. Permissions determine what operations are
allowed on a resource. When you grant a role to a principal, you grant all the
permissions that the role contains.
23. Define policy in IAM.
The allow policy is a collection of role bindings that bind one or more principals to
individual roles. When you want to define who (principal) has what type of access
(role) on a resource, you create an allow policy and attach it to the resource.

24. What are the types of roles in IAM?


 Basic roles: Basic roles are highly permissive roles that existed prior to the
introduction of IAM. Basic roles can be used to grant principals broad access to
Google Cloud resources. These roles are Owner, Editor, and Viewer.
 Predefined roles: Predefined roles give granular access to specific Google Cloud
resources. These roles are created and maintained by Google.
 Custom roles: Roles that you create to tailor permissions to the needs of your
organization when predefined roles don't meet your needs. IAM also lets you
create custom IAM roles.

25. Define service accounts?


A service account is a special type of Google account intended to represent a non-
human user that needs to authenticate and be authorized to access data in Google
APIs. A service account is used by an application or compute workload, such as a
Compute Engine virtual machine (VM) instance, rather than a person. Applications
use service accounts to make authorized API calls, authorized as either the service
account itself, or as Google Workspace or Cloud Identity users through domain-wide
delegation. A service account is identified by its email address, which is unique to the
account.
26. When service accounts can be used?
Service accounts are used in scenarios such as:

 Running workloads on virtual machines (VMs).


 Running workloads on on-premises workstations or data centers that call Google
APIs.
 Running workloads which are not tied to the lifecycle of a human user.
27. What are the types of service accounts?
 Default service accounts
 User-managed service accounts
 Google-managed service accounts
28. List the best practices for authorization using cloud IAM.
 Use projects to group resources that share the same trust boundary.
 Check the policy granted on each resource and ensure to recognize the
inheritance.

 Because of inheritance, use the principle of least privilege when you grant roles.
 Finally, audit policies by using Cloud Audit Logs and audit the memberships of
groups that are used in policies.

12. PART B QUESTIONS

1. Explain the purpose of API and list the challenges in deploying and managing the
APIs.

2. Explain how Cloud Endpoints are used in API management.


3. Explain in detail about Apigee Edge.
4. What is managed message service. Briefly explain PUB/SUB.
5. How security is implemented in Google cloud?
6. Explain in detail about IAM.
13. ONLINE CERTIFICATIONS

1. Cloud Digital Leader

Cloud Digital Leader | Google Cloud

2. Associate Cloud Engineer:

Associate Cloud Engineer Certification | Google Cloud

3. Google Cloud Computing Foundations Course

https://onlinecourses.nptel.ac.in/noc20_cs55/preview

4. Google Cloud Computing Foundations

https://learndigital.withgoogle.com/digitalgarage/course/gcloud-computing-foundations
14. REAL TIME APPLICATIONS

Modernizing Twitter's ad engagement analytics platform

As part of the daily business operations on its advertising platform, Twitter serves billions
of ad engagement events, each of which potentially affects hundreds of downstream
aggregate metrics. To enable its advertisers to measure user engagement and track ad
campaign efficiency, Twitter offers a variety of analytics tools, APIs, and dashboards that
can aggregate millions of metrics per second in near-real time.

The Twitter Revenue Data Platform engineering team, led by Steve Niemitz, migrated its
on-prem architecture to Google Cloud to boost the reliability and accuracy of Twitter's ad
analytics platform.

Over the past decade, Twitter has developed powerful data transformation pipelines to
handle the load of its ever-growing user base worldwide. The first deployments for those
pipelines were initially all running in Twitter's own data centers.

To accommodate the projected growth in user engagement over the next few years
and streamline the development of new features, the Twitter Revenue Data Platform
engineering team decided to rethink the architecture and deploy a more flexible and
scalable system in Google Cloud.

Six months after fully transitioning its ad analytics data platform to Google Cloud, Twitter
has already seen huge benefits. Twitter's developers have gained in agility as they can
more easily configure existing data pipelines and build new features much faster. The
real-time data pipeline has also greatly improved its reliability and accuracy, thanks to
Beam's exactly-once semantics and the increased processing speed and ingestion
capacity enabled by Pub/Sub, Dataflow, and Bigtable.
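The "exactly-once semantics" mentioned above can be approximated by making the aggregation idempotent: because message delivery in systems like Pub/Sub is at-least-once, the same event may arrive more than once, and deduplicating on a message ID keeps the aggregate correct. A toy sketch of the idea (not Twitter's or Beam's actual implementation; the event fields are hypothetical):

```python
# Toy illustration of exactly-once-style aggregation: at-least-once delivery
# means a message can be redelivered, so we deduplicate on a message ID
# before counting. Not the actual Beam/Dataflow pipeline code.

def aggregate_engagements(events):
    """Sum engagement counts per campaign, ignoring duplicate message IDs."""
    seen_ids = set()
    totals = {}
    for event in events:
        if event["id"] in seen_ids:      # redelivered message: skip it
            continue
        seen_ids.add(event["id"])
        campaign = event["campaign"]
        totals[campaign] = totals.get(campaign, 0) + event["count"]
    return totals

events = [
    {"id": "m1", "campaign": "spring", "count": 3},
    {"id": "m2", "campaign": "spring", "count": 2},
    {"id": "m1", "campaign": "spring", "count": 3},  # duplicate delivery
]
print(aggregate_engagements(events))  # {'spring': 5}
```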

Twitter’s data transformation pipelines for ads | Google Cloud Blog


16. ASSESSMENT SCHEDULE

• Tentative schedule for the assessments during the 2023-2024 odd semester

S.No   Name of the Assessment   Start Date    End Date      Portion
1      IAT 1                    09.09.2023    15.09.2023    UNIT 1 & 2
2      IAT 2                    26.10.2023    01.11.2023    UNIT 3 & 4
3      MODEL                    15.11.2023    25.11.2023    ALL 5 UNITS


17. PRESCRIBED TEXTBOOKS AND REFERENCES

REFERENCES:

1. https://cloud.google.com/docs
2. https://www.cloudskillsboost.google/course_templates/153
3. https://nptel.ac.in/courses/106105223
18. MINI PROJECT
You are just starting your junior cloud engineer role with Jooli Inc. So far you have been
helping teams create and manage Google Cloud resources.

You are now asked to help a newly formed development team with some of their initial
work on a new project around storing and organizing photographs, called memories. You
have been asked to assist the memories team with the initial configuration of their
application development environment; you receive a request to complete the following
tasks:

 Create a bucket for storing the photographs.
 Create a Pub/Sub topic that will be used by a Cloud Function you create.
 Create a Cloud Function.
 Remove the previous cloud engineer's access from the memories project.
Some Jooli Inc. standards you should follow:
 Create all resources in the us-east1 region and us-east1-b zone, unless otherwise
directed.
 Use the project VPCs.
 Naming is normally team-resource; e.g., an instance could be named kraken-webserver1.
 Allocate cost-effective resource sizes. Projects are monitored, and excessive
resource use will result in the containing project's termination (and possibly yours),
so beware. This is the guidance the monitoring team is willing to share: unless
directed otherwise, use f1-micro for small Linux VMs and n1-standard-1 for Windows or
other applications such as Kubernetes nodes.
Thank you
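The four tasks above map naturally onto gcloud and gsutil commands. As a starting point, the helper below assembles (but does not run) one plausible command per task; every resource name, region, runtime, role, and email here is a hypothetical placeholder, so verify the flags against the current gcloud documentation before use:

```python
# Assembles (but does not execute) candidate gcloud/gsutil commands for the
# four mini-project tasks. All names and the revoked role are hypothetical.

def plan_commands(project_id, region="us-east1"):
    bucket = f"memories-{project_id}"      # bucket names must be globally unique
    topic = "memories-topic"
    function = "memories-function"
    old_engineer = "user:old-engineer@example.com"
    return [
        # 1. Bucket for storing the photographs
        f"gsutil mb -l {region} gs://{bucket}",
        # 2. Pub/Sub topic used by the Cloud Function
        f"gcloud pubsub topics create {topic}",
        # 3. Cloud Function triggered by the topic
        f"gcloud functions deploy {function} --region={region} "
        f"--runtime=python39 --trigger-topic={topic} --entry-point=hello_pubsub",
        # 4. Revoke the previous engineer's role (role shown is an example)
        f"gcloud projects remove-iam-policy-binding {project_id} "
        f"--member={old_engineer} --role=roles/viewer",
    ]

for cmd in plan_commands("jooli-memories"):
    print(cmd)
```

Note that the bucket and topic must exist before the function deploy, so the commands should be run in the order listed.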

