GCCF Unit 3
Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document through
email in error, please notify the system manager. This document contains
proprietary information and is intended only to the respective group / learning
community as intended. If you are not the addressee you should not disseminate,
distribute or copy through e-mail. Please notify the sender immediately by e-mail
if you have received this document by mistake and delete this document from your
system. If you are not the intended recipient you are notified that disclosing,
copying, distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
20CS929
Department : CSE
Created by:
Date : 04.09.2023
1. CONTENTS

S. No.  Contents
1       Contents
2       Course Objectives
3       Pre-Requisites
4       Syllabus
5       Course Outcomes
7       Lecture Plan
9       Lecture Notes
10      Assignments
12      Part B Questions
16      Assessment Schedule
• Pre-requisite Chart
CO      Level  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
C306.1  K3     2   1   1   -   -   -   -   -   -   2    2    2    2    2    2
C306.2  K3     3   3   3   -   -   -   -   2   2   2    2    2    2    2    2
C306.3  K3     3   3   3   -   -   2   -   2   2   2    2    2    2    2    2
C306.4  K3     3   3   3   -   -   -   -   2   2   2    2    2    2    2    2
C306.5  K3     3   3   3   -   -   2   -   -   2   2    2    2    2    2    2
Correlation Level:
1. Slight (Low)
2. Moderate (Medium)
3. Substantial (High)
If there is no correlation, put "-".
7. LECTURE PLAN
Sl. No. | Topic of Lecture | Number of Periods | Proposed Date | Actual Date | CO | Taxonomy Level | Mode of Delivery
1 | The purpose of APIs | 1 | 04.09.2023 | | CO3 | K2 | Chalk & talk
2 | Cloud Endpoints | 1 | 07.09.2023 | | CO3 | K3 | PPT/Demo
3 | Using Apigee Edge - Managed message services | 1 | 08.09.2023 | | CO3 | K2 | PPT/Demo
4 | Cloud Pub/Sub | 1 | 20.09.2023 | | CO3 | K3 | PPT/Demo
5 | Introduction to security in the cloud - The shared security model | 1 | 21.09.2023 | | CO3 | K2 | PPT
6 | Encryption options | 1 | 22.09.2023 | | CO3 | K2 | PPT/Demo
7 | Authentication and authorization with Cloud IAM | 1 | 23.09.2023 | | CO3 | K3 | PPT/Demo
8 | Lab: User Authentication: Cloud Identity-Aware Proxy | 1 | 27.09.2023 | | CO3 | K3 | Demo
9 | Identify Best Practices for Authorization using Cloud IAM | 1 | 29.09.2023 | | CO3 | K2 | PPT/Demo
9. LECTURE NOTES
Introduction to APIs:
For example, consider an API offered by a payment processing service. Customers can
enter their card details on the frontend of an application for an ecommerce store. The
payment processor doesn’t require access to the user’s bank account; the API creates a
unique token for this transaction and includes it in the API call to the server. This ensures
a higher level of security against potential hacking threats.
REST APIs:
Request headers and response headers, along with conventional HTTP status
codes, are used within well-designed REST APIs.
One of the main reasons REST APIs work well with the cloud is due to their
stateless nature. State information does not need to be stored or referenced for
the API to run.
An authorization framework like OAuth 2.0 can help limit the privileges of third-
party applications.
Using a timestamp in the HTTP header, an API can also reject any request that
arrives after a certain time period.
Parameter validation and JSON Web Tokens are other ways to ensure that only
authorized clients can access the API.
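The JWT and timestamp checks described above can be sketched in a few lines. The following is a minimal illustration, not production code: it builds an HS256-style token (a base64url-encoded header and payload signed with HMAC-SHA256, which is how JWT's HS256 scheme works) and rejects requests whose signature is wrong or whose issued-at timestamp is stale. The secret and claim values are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-secret"  # illustrative key, not a real credential

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(payload: dict) -> str:
    """Build a JWT-style token signed with HMAC-SHA256 (the HS256 scheme)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, now: float, max_age: float = 300.0) -> bool:
    """Reject tokens with a bad signature or a stale issued-at timestamp."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    return now - claims["iat"] <= max_age

issued = time.time()
token = make_token({"sub": "client-42", "iat": issued})
print(verify(token, now=issued + 60))    # True: fresh and untampered
print(verify(token, now=issued + 3600))  # False: issued too long ago
```

A real API would usually delegate this to a library and framework rather than hand-rolling the checks, but the mechanism is the same: signature proves the client's identity, and the timestamp bounds replay.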
When deploying and managing APIs on your own, there are several issues to consider.
Interface Definition
Authentication and Authorization
Logging and Monitoring
Management and Scalability
CLOUD ENDPOINTS:
Endpoints is an API management system that helps you secure, monitor, analyze, and
set quotas on your APIs using the same infrastructure Google uses for its own APIs.
Endpoints supports APIs hosted on App Engine, Google Kubernetes Engine, and Compute Engine. Clients include Android, iOS, and JavaScript.
To have your API managed by Cloud Endpoints, you have three options, depending on
where your API is hosted and the type of communications protocol your API uses:
Endpoints works with the Extensible Service Proxy (ESP) and the Extensible Service
Proxy V2 (ESPv2) to provide API management.
Endpoints supports version 2 of the OpenAPI Specification —the industry standard
for defining REST APIs.
Cloud Endpoints supports APIs that are described using version 2.0 of the OpenAPI
specification. The API can be implemented using any publicly available REST
framework, such as Django or Jersey, and is described in a JSON or YAML file
referred to as an OpenAPI document.
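As an illustration of what such a description contains, here is a minimal OpenAPI v2 document built as a Python dictionary and serialized to JSON. The API title, host, and path are hypothetical; a real Endpoints deployment would use the service name assigned to your project.

```python
import json

# A minimal OpenAPI v2 (Swagger) document; host and paths are hypothetical.
openapi_doc = {
    "swagger": "2.0",
    "info": {"title": "Echo API", "version": "1.0.0"},
    "host": "echo-api.example.com",  # hypothetical host
    "schemes": ["https"],
    "paths": {
        "/echo": {
            "post": {
                "operationId": "echo",
                "parameters": [
                    {"name": "body", "in": "body", "schema": {"type": "object"}}
                ],
                "responses": {"200": {"description": "Echoed message"}},
            }
        }
    },
}

# Serialize to JSON; the same structure could equally be written as YAML.
print(json.dumps(openapi_doc, indent=2)[:60])
```

The same structure can be authored directly in YAML; Endpoints accepts either form of the OpenAPI document.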
With Endpoints for gRPC, you can use the API management capabilities of Endpoints to
add an API console, monitoring, hosting, tracing, authentication, and more to your gRPC
services. In addition, once you specify special mapping rules, ESP and ESPv2 translate
RESTful JSON over HTTP into gRPC requests. This means that you can deploy a gRPC
server managed by Endpoints and call its API using a gRPC or JSON/HTTP client, giving
you much more flexibility and ease of integration with other systems.
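The transcoding idea can be made concrete with a toy rule table: each rule maps an HTTP verb and URL template to a fully qualified gRPC method, and path variables become fields of the gRPC request message. The Bookstore names below follow the style of Google's gRPC samples but are used purely for illustration; real ESP configuration uses `google.api.http` annotations, not a Python dict.

```python
# Hypothetical mapping rules in the spirit of ESP's HTTP-to-gRPC transcoding.
RULES = {
    ("GET", "/v1/shelves/{shelf}"): "endpoints.examples.bookstore.Bookstore.GetShelf",
    ("POST", "/v1/shelves"): "endpoints.examples.bookstore.Bookstore.CreateShelf",
}

def transcode(verb: str, path: str):
    """Match a concrete HTTP request against the rule table and extract
    the path variables that become fields of the gRPC request message."""
    for (rule_verb, template), method in RULES.items():
        if rule_verb != verb:
            continue
        t_parts = template.strip("/").split("/")
        p_parts = path.strip("/").split("/")
        if len(t_parts) != len(p_parts):
            continue
        params = {}
        for t, p in zip(t_parts, p_parts):
            if t.startswith("{") and t.endswith("}"):
                params[t[1:-1]] = p      # capture a path variable
            elif t != p:
                break                     # literal segment mismatch
        else:
            return method, params
    return None  # no rule matched

print(transcode("GET", "/v1/shelves/42"))
```

A JSON/HTTP client calling `GET /v1/shelves/42` is thus routed to the `GetShelf` gRPC method with `shelf = "42"`, which is the flexibility the paragraph above describes.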
Cloud Endpoints Frameworks
Cloud Endpoints Frameworks is a web framework for the App Engine standard Python 2.7
and Java 8 runtime environments. Cloud Endpoints Frameworks provides the tools and
libraries that allow you to generate REST APIs and client libraries for your application.
Endpoints Frameworks includes a built-in API gateway that provides API management
features that are comparable to the features that ESP provides for Endpoints for OpenAPI
and Endpoints for gRPC.
Endpoints Frameworks intercepts all requests and performs any necessary checks (such
as authentication) before forwarding the request to the API backend. When the backend
responds, Endpoints Frameworks gathers and reports telemetry. Metrics can be viewed
for API on the Endpoints Services page in the Google Cloud console.
USING APIGEE EDGE:
Apigee Edge is a platform for developing and managing APIs. By fronting services with a
proxy layer, Edge provides an abstraction or facade for your backend service APIs and
provides security, rate limiting, quotas, analytics, and more.
Apigee services: The APIs that you use to create, manage, and deploy your API
proxies.
Back-end services: Used by your apps to provide runtime access to data for
your API proxies.
Flavors of Apigee:
Apigee: A cloud version hosted by Apigee in which Apigee maintains the environment,
allowing you to concentrate on building your services and defining the APIs to those
services.
Apigee enables you to provide secure access to your services with a well-defined API that
is consistent across all of your services, regardless of the service implementation.
The following image shows an architecture with Apigee handling the requests from client
apps to your backend services:
Rather than having app developers consume your services directly, they access an API
proxy created on Apigee. The API proxy functions as a mapping of a publicly available
HTTP endpoint to your backend service. By creating an API proxy, you let Apigee handle
the security and authorization tasks required to protect your services, as well as to analyze
and monitor those services.
Because app developers make HTTP requests to an API proxy, rather than directly to
your services, developers do not need to know anything about the implementation of
your services. All the developer needs to know is:
API Gateway:
API Gateway enables you to provide secure access to your backend services through a
well-defined REST API that is consistent across all of your services, regardless of the
service implementation. Clients consume your REST APIs to implement standalone apps
for a mobile device or tablet, through apps running in a browser, or through any other
type of app that can make a request to an HTTP endpoint.
MANAGED MESSAGE SERVICES:
PUB/SUB:
Pub/Sub is used for streaming analytics and data integration pipelines to ingest and
distribute data. It's equally effective as a messaging-oriented middleware for service
integration or as a queue to parallelize tasks.
Pub/Sub enables you to create systems of event producers and consumers, called
publishers and subscribers. Publishers communicate with subscribers asynchronously by
broadcasting events, rather than by synchronous remote procedure calls (RPCs).
Publishers send events to the Pub/Sub service, without regard to how or when these
events are to be processed. Pub/Sub then delivers events to all the services that react to
them. In systems communicating through RPCs, publishers must wait for subscribers to
receive the data. However, the asynchronous integration in Pub/Sub increases the
flexibility and robustness of the overall system.
Pub/Sub service: This messaging service is the default choice for most users and
applications. It offers the highest reliability and largest set of integrations, along with
automatic capacity management. Pub/Sub guarantees synchronous replication of all data
to at least two zones and best-effort replication to a third additional zone.
Pub/Sub Lite service: A separate but similar messaging service built for lower cost. It
offers lower reliability compared to Pub/Sub. It offers either zonal or regional topic
storage. Zonal Lite topics are stored in only one zone. Regional Lite topics replicate data
to a second zone asynchronously. Also, Pub/Sub Lite requires you to pre-provision and
manage storage and throughput capacity. Consider Pub/Sub Lite only for applications
where achieving a low cost justifies some additional operational work and lower reliability.
In this scenario, there are two publishers publishing messages on a single topic. There
are two subscriptions to the topic. The first subscription has two subscribers, meaning
messages will be load-balanced across them, with each subscriber receiving a subset
of the messages. The second subscription has one subscriber that will receive all of
the messages. The bold letters represent messages. Message A comes from Publisher
1 and is sent to Subscriber 2 via Subscription 1, and to Subscriber 3 via Subscription
2. Message B comes from Publisher 2 and is sent to Subscriber 1 via Subscription 1
and to Subscriber 3 via Subscription 2.
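The scenario above can be simulated in-process. This toy model captures only the delivery semantics — every subscription receives its own copy of each message, and subscribers within a single subscription share the load — using round-robin for the load balancing, which the real service does not guarantee.

```python
from collections import defaultdict
from itertools import cycle

class Topic:
    def __init__(self):
        self.subscriptions = []

    def publish(self, message):
        # Every subscription attached to the topic gets a copy of the message.
        for sub in self.subscriptions:
            sub.deliver(message)

class Subscription:
    def __init__(self, topic, subscribers):
        self.subscribers = cycle(subscribers)  # round-robin load balancing
        topic.subscriptions.append(self)

    def deliver(self, message):
        inbox[next(self.subscribers)].append(message)

inbox = defaultdict(list)
topic = Topic()
Subscription(topic, ["subscriber-1", "subscriber-2"])  # load-balanced pair
Subscription(topic, ["subscriber-3"])                  # receives everything

topic.publish("A")  # from publisher 1
topic.publish("B")  # from publisher 2

print(dict(inbox))
# subscriber-1 and subscriber-2 each receive one message;
# subscriber-3 receives both A and B
```

The two publishers never wait on any subscriber: publishing only hands the message to the topic, which is the asynchronous decoupling described above.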
Integrations:
Pub/Sub has many integrations with other Google Cloud products to create a fully
featured messaging system:
APIs. Pub/Sub uses standard gRPC and REST service API technologies along with client
libraries for several languages.
Integration Connectors. (Preview) These connectors let you connect to various data
sources. With connectors, both Google Cloud services and third-party business
applications are exposed to your integrations through a transparent, standard interface.
For Pub/Sub, you can create a Pub/Sub connection for use in your integrations.
Publisher-subscriber relationships can be one-to-many (fan-out), many-to-one (fan-in),
and many-to-many, as shown in the following diagram:
The following diagram illustrates how a message passes from a publisher to a subscriber.
For push delivery, the acknowledgment is implicit in the response to the push request,
while for pull delivery it requires a separate RPC.
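The explicit acknowledgment used by pull delivery can be made concrete with a toy pull subscription: a pulled message stays outstanding until the subscriber sends a separate ack, and if the ack deadline expires first, the message is redelivered. This is a conceptual model only, not the Pub/Sub client API.

```python
from collections import deque

class PullSubscription:
    """Toy model of pull delivery with explicit acknowledgment."""

    def __init__(self):
        self.queue = deque()     # messages awaiting delivery
        self.outstanding = {}    # delivered but not yet acknowledged

    def publish(self, msg_id, data):
        self.queue.append((msg_id, data))

    def pull(self):
        if self.queue:
            msg_id, data = self.queue.popleft()
            self.outstanding[msg_id] = data
            return msg_id, data
        return None

    def ack(self, msg_id):
        self.outstanding.pop(msg_id, None)  # message is now done

    def expire_deadline(self):
        # Simulate ack-deadline expiry: redeliver everything outstanding.
        for msg_id, data in self.outstanding.items():
            self.queue.append((msg_id, data))
        self.outstanding.clear()

sub = PullSubscription()
sub.publish("m1", "hello")
sub.pull()               # delivered, but not acknowledged
sub.expire_deadline()    # deadline expires without an ack...
print(sub.pull())        # ...so the same message is delivered again
sub.ack("m1")
print(sub.pull())        # nothing left: None
```

With push delivery the ack is folded into the HTTP response to the push request, so no equivalent of the separate `ack` call is needed.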
Pub/Sub Basic Architecture:
Pub/Sub servers run in all Google Cloud regions around the world. This allows the service
to offer fast, global data access, while giving users control over where messages are
stored. Cloud Pub/Sub offers global data access in that publisher and subscriber clients
are not aware of the location of the servers to which they connect or how those servers
route the data.
Pub/Sub’s load balancing mechanisms direct publisher traffic to the nearest Google Cloud
data center where data storage is allowed.
Any individual message is stored in a single region. However, a topic may have messages
stored in many regions. When a subscriber client requests messages published to this
topic, it connects to the nearest server which aggregates data from all messages
published to the topic for delivery to the client.
Pub/Sub is divided into two primary parts: the data plane, which handles moving
messages between publishers and subscribers, and the control plane, which handles
the assignment of publishers and subscribers to servers on the data plane. The servers
in the data plane are called forwarders, and the servers in the control plane are called
routers. When publishers and subscribers are connected to their assigned forwarders,
they do not need any information from the routers (as long as those forwarders remain
accessible). Therefore, it is possible to upgrade the control plane of Pub/Sub without
affecting any clients that are already connected and sending or receiving messages.
Control Plane:
The Pub/Sub control plane distributes clients to forwarders in a way that provides
scalability, availability, and low latency for all clients. Any forwarder is capable of serving
clients for any topic or subscription. When a client connects to Pub/Sub, the router
decides the data centers the client should connect to based on shortest network distance,
a measure of the latency on the connection between two points.
The router provides the client with an ordered list of forwarders it can consider connecting
to. This ordered list may change based on forwarder availability and the shape of the
load from the client.
A client takes this list of forwarders and connects to one or more of them. The client
prefers connecting to the forwarders most recommended by the router, but also takes
into consideration any failures that have occurred.
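The client's failure-aware choice from the router's ranked list can be sketched as follows; the forwarder names are invented for illustration.

```python
def pick_forwarder(ranked, failed):
    """Walk the router's ranked list and connect to the most-preferred
    forwarder that has not recently failed; fall back to the top choice
    if every candidate is currently marked as failed."""
    for forwarder in ranked:
        if forwarder not in failed:
            return forwarder
    return ranked[0]  # retry the best candidate rather than give up

ranked = ["fwd-us-east1-a", "fwd-us-east1-b", "fwd-us-central1-a"]
print(pick_forwarder(ranked, failed=set()))               # fwd-us-east1-a
print(pick_forwarder(ranked, failed={"fwd-us-east1-a"}))  # fwd-us-east1-b
```

Because the client only needs the list, not the router itself, once connected it can keep working even while the control plane is upgraded, as noted above.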
Data Plane:
The data plane receives messages from publishers and sends them to clients.
Different messages for a single topic and subscription can flow through many publishers,
subscribers, publishing forwarders, and subscribing forwarders. Publishers can publish to
multiple forwarders simultaneously and subscribers may connect to multiple subscribing
forwarders to receive messages. Therefore, the flow of messages through connections
among publishers, subscribers, and forwarders can be complex. The following diagram
shows how messages could flow for a single topic and subscription, where different colors
indicate the different paths messages may take from publishers to subscribers:
INTRODUCTION TO SECURITY IN THE CLOUD:
Cloud security refers to a broad set of policies, technologies, applications, and controls
utilized to protect virtualized IP, data, applications, services, and the associated
infrastructure of cloud computing.
The five layers of protection Google provides to keep customers' data safe:
1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
At the hardware infrastructure layer:
Hardware design and provenance: Both the server boards and the networking
equipment in Google data centers are custom designed by Google. Google also designs
custom chips, including a hardware security chip that's currently being deployed on both
servers and peripherals.
Secure boot stack: Google server machines use various technologies to ensure that
they are booting the correct software stack, such as cryptographic signatures over the
BIOS, bootloader, kernel, and base operating system image.
Premises security: Google designs and builds its own data centers, which incorporate
multiple layers of physical security protections. Access to these data centers is limited to
only a small fraction of Google employees. Google also hosts some servers in third-party
data centers, where we ensure that there are Google-controlled physical security
measures on top of the security layers provided by the data center operator.
User identity: Google’s central identity service, which usually manifests to end users as
the Google login page, goes beyond asking for a simple username and password. The
service also intelligently challenges users for additional information based on risk factors
such as whether they have logged in from the same device or a similar location in the
past. Users also have the option of employing secondary factors when signing in,
including devices based on the Universal 2nd Factor (U2F) open standard.
Encryption at rest: Most applications at Google access physical storage (in other words,
“file storage”) indirectly by using storage services, and encryption (using centrally
managed keys) is applied at the layer of these storage services. Google also enables
hardware encryption support in hard drives and SSDs.
Google Front End (GFE): Google services that want to make themselves available on
the internet register themselves with an infrastructure service called the Google Front
End, which ensures that all TLS connections are terminated with a public-private key pair
and an X.509 certificate from a certificate authority (CA), and follows best practices such
as supporting perfect forward secrecy. The GFE also applies protections against denial-
of-service attacks.
Denial of Service (DoS) protection: The sheer scale of its infrastructure enables
Google to simply absorb many DoS attacks. Google also has multi-tier, multi-layer DoS
protections that further reduce the risk of any DoS impact on a service running behind a
GFE.
Finally, at Google’s operational security layer:
Intrusion detection: Rules and machine intelligence give Google’s operational security
teams warnings of possible incidents. Google conducts Red Team exercises to measure
and improve the effectiveness of its detection and response mechanisms.
Reducing insider risk: Google aggressively limits and actively monitors the activities of
employees who have been granted administrative access to the infrastructure.
Employee U2F use: To guard against phishing attacks against Google employees,
employee accounts require use of U2F-compatible security keys.
Software development practices: Google employs central source control and requires
two-party review of new code. Google also provides its developers with libraries that
prevent them from introducing certain classes of security bugs. Google also runs a
Vulnerability Rewards Program where we pay anyone who can discover and inform us of
bugs in our infrastructure or applications.
Cloud computing and storage provide users with capabilities to store and process
their data in third-party data centers. Organizations use the cloud in a variety of different
service models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models
(private, public, hybrid, and community).
Security concerns associated with cloud computing are typically categorized in two ways:
as security issues faced by cloud providers (organizations providing software-, platform-
, or infrastructure-as-a-service via the cloud) and security issues faced by their customers
(companies or organizations who host applications or store data on the cloud).
Security responsibilities are shared between the customer and Google Cloud.
This split is often detailed in a cloud provider's "shared security responsibility model" or
"shared responsibility model." The provider must ensure that its infrastructure is secure
and that its clients' data and applications are protected, while the customer must take
measures to fortify their applications and use strong passwords and authentication
measures.
When customers move an application to Google Cloud, Google handles many of the lower
layers of security, such as physical security, disk encryption, and network integrity.
The upper layers of the security stack, including the securing of data, remain the
customer's responsibility. Google provides tools like the resource hierarchy and IAM to
help customers define and implement policies, but ultimately this part is their responsibility.
Data access is usually the customer's responsibility: they control who or what has access
to their data. Google Cloud provides tools that help control this access, such as
Identity and Access Management, but these tools must be properly configured to protect
the data.
Several encryption options are available on Google Cloud. These range from simple
options with limited control to options that offer greater control and flexibility at the
cost of added complexity.
Google Cloud encrypts data in transit and at rest by default. Data in transit is encrypted
using Transport Layer Security (TLS), and data at rest is encrypted with AES-256 keys.
The encryption happens automatically.
With customer-managed encryption keys, you manage your encryption keys that
protect data on Google Cloud.
Cloud Key Management Service, or Cloud KMS, automates and simplifies the
generation and management of encryption keys. The keys are managed by the
customer in Cloud KMS.
Cloud KMS lets you both rotate keys manually and automate the rotation of keys
on a time-based interval.
Cloud KMS also supports both symmetric keys and asymmetric keys for encryption
and signing.
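The time-based rotation that Cloud KMS automates amounts to simple schedule bookkeeping: given the most recent rotation time and a rotation period, compute when the next rotation is due. The sketch below models only the schedule, not the Cloud KMS API; the dates are illustrative.

```python
from datetime import datetime, timedelta

def next_rotation(last_rotated: datetime, period_days: int, now: datetime) -> datetime:
    """Return the next scheduled rotation strictly after `now`,
    given the most recent rotation time and a fixed rotation period."""
    due = last_rotated + timedelta(days=period_days)
    while due <= now:
        # Skip forward over any rotation times that have already passed.
        due += timedelta(days=period_days)
    return due

last = datetime(2023, 1, 1)   # when the key version was last rotated
now = datetime(2023, 9, 4)
print(next_rotation(last, 90, now))  # 2023-09-28 00:00:00
```

With automatic rotation, Cloud KMS performs this scheduling for you and creates a new primary key version each period; older versions remain available for decrypting data they protected.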
Customer-supplied encryption keys give users more control over their keys, but
with greater management complexity.
With CSEK, users use their own AES 256-bit encryption keys. They are responsible
for generating these keys.
Users are responsible for storing the keys and providing them as part of Google
Cloud API calls.
Google Cloud will use the provided key to encrypt the data before saving it. Google
guarantees that the key only exists in-memory and is discarded after use.
Persistent disks, such as those that back virtual machines, can be encrypted with
customer-supplied encryption keys. With CSEK for persistent disks, the data is encrypted
before it leaves the virtual machine. Even without CSEK or CMEK, persistent disks are still
encrypted. When a persistent disk is deleted, the keys are discarded, and the data is
rendered irrecoverable by traditional means.
To have more control over persistent disk encryption, users can create their own
persistent disks and redundantly encrypt them.
And finally, client-side encryption is always an option. With client-side encryption, users
encrypt data before they send it to Google Cloud. Neither the unencrypted data nor the
decryption keys leave their local device.
Google provides many APIs and services, which require authentication to access.
IAM lets you grant granular access to specific Google Cloud resources and helps
prevent access to other resources. IAM lets you adopt the security principle of
least privilege, which states that nobody should have more permissions than they
actually need.
With IAM, you manage access control by defining who (identity) has what access
(role) for which resource. For example, Compute Engine virtual machine instances,
Google Kubernetes Engine (GKE) clusters, and Cloud Storage buckets are all
Google Cloud resources.
The organizations, folders, and projects that you use to organize your resources
are also resources.
In IAM, permission to access a resource isn't granted directly to the end user.
Instead, permissions are grouped into roles, and roles are granted to authenticated
principals.
An allow policy, also known as an IAM policy, defines and enforces what roles are
granted to which principals. Each allow policy is attached to a resource. When an
authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is allowed.
Principal. A principal can be a Google Account (for end users), a service account
(for applications and compute workloads), a Google group, or a Google Workspace
account or Cloud Identity domain that can access a resource. Each principal has
its own identifier, which is typically an email address.
Policy. The allow policy is a collection of role bindings that bind one or more
principals to individual roles. When you want to define who (principal) has what
type of access (role) on a resource, you create an allow policy and attach it to the
resource.
Concepts related to identity:
In IAM, you grant access to principals. Principals can be of the following types:
Google Account
Service account
Google group
Google Workspace account
Cloud Identity domain
All authenticated users
All users
All users - The value allUsers is a special identifier that represents anyone who is
on the internet, including authenticated and unauthenticated users.
When an authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is allowed.
Resource
If a user needs access to a specific Google Cloud resource, you can grant the user
a role for that resource.
Permissions determine what operations are allowed on a resource. In the IAM world,
permissions are represented in the form of service.resource.verb, for example,
pubsub.subscriptions.consume.
Permissions are not granted to users directly. Instead, the roles that contain the
appropriate permissions are identified, and then the roles are granted to the user.
Roles
A role is a collection of permissions. You cannot grant a permission to the user directly.
Instead, you grant them a role. When you grant a role to a user, you grant them all the
permissions that the role contains.
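The role and allow-policy model above can be illustrated with a small sketch. The role-to-permission sets are drastically simplified stand-ins for the real predefined roles, and the principals are hypothetical; the permission strings follow the real `service.resource.verb` form.

```python
# Hypothetical, heavily trimmed role definitions; real roles bundle many
# more permissions, each in the service.resource.verb form.
ROLES = {
    "roles/pubsub.publisher": {"pubsub.topics.publish"},
    "roles/pubsub.subscriber": {"pubsub.subscriptions.consume"},
    "roles/viewer": {"pubsub.topics.get", "pubsub.subscriptions.get"},
}

# An allow policy: a collection of bindings, each tying a role to principals.
allow_policy = [
    {"role": "roles/pubsub.publisher",
     "members": ["serviceAccount:app@example-project.iam.gserviceaccount.com"]},
    {"role": "roles/viewer",
     "members": ["user:alice@example.com"]},
]

def is_allowed(policy, principal: str, permission: str) -> bool:
    """A permission is granted only if some binding gives the principal
    a role whose permission set contains it."""
    return any(
        principal in binding["members"] and permission in ROLES[binding["role"]]
        for binding in policy
    )

print(is_allowed(allow_policy, "user:alice@example.com", "pubsub.topics.get"))      # True
print(is_allowed(allow_policy, "user:alice@example.com", "pubsub.topics.publish"))  # False
```

Note that the check never asks "does this user have this permission?" directly: it walks the role bindings, which is exactly why permissions are granted via roles rather than to users.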
Basic roles: Basic roles are highly permissive roles that existed prior to the
introduction of IAM. Basic roles can be used to grant principals broad access to
Google Cloud resources. These roles are Owner, Editor, and Viewer.
roles/viewer (Viewer): Permissions for read-only actions that do not affect state,
such as viewing (but not modifying) existing resources or data.
roles/editor (Editor): All viewer permissions, plus permissions for actions that
modify state, such as changing existing resources.
roles/owner (Owner): All editor permissions, plus permissions for the following actions:
Predefined roles: Predefined roles give granular access to specific Google Cloud
resources. These roles are created and maintained by Google. For example, the
predefined role Pub/Sub Publisher (roles/pubsub.publisher) provides access to
only publish messages to a Pub/Sub topic.
Custom roles: Roles that you create to tailor permissions to the needs of your
organization when predefined roles don't meet your needs. IAM also lets you
create custom IAM roles. Custom roles help you enforce the principle of least
privilege, because they help to ensure that the principals in your organization have
only the permissions that they need.
Service Accounts:
A service account is identified by its email address, which is unique to the account.
Service accounts are typically used for running workloads that are not tied to the
lifecycle of a human user.
You can create user-managed service accounts in your project using the IAM API,
the Google Cloud console, or the Google Cloud CLI. You are responsible for
managing and securing these accounts.
When you enable or use some Google Cloud services, they create user-managed service
accounts that enable the service to deploy jobs that access other Google Cloud resources.
These accounts are known as default service accounts.
Some Google Cloud services need access to your resources so that they can act
on your behalf. For example, when you use Cloud Run to run a container, the
service needs access to any Pub/Sub topics that can trigger the container.
To meet this need, Google creates and manages service accounts for many Google
Cloud services. These service accounts are known as Google-managed service
accounts. You might see Google-managed service accounts in your project's allow
policy, in audit logs, or on the IAM page in the Google Cloud console.
Google-managed service accounts are not listed in the Service accounts page in
the Google Cloud console.
Use projects to group resources that share the same trust boundary.
Check the policy granted on each resource and make sure you understand how
inheritance applies.
Because of inheritance, use the principle of least privilege when you grant roles.
Finally, audit policies by using Cloud Audit Logs and audit the memberships of
groups that are used in policies.
10. ASSIGNMENT
2. Create 2 users. Log in with the first user, assign a role to the second
user, and then remove the assigned roles associated with Cloud IAM. More
specifically, you sign in with 2 different sets of credentials to
experience how granting and revoking permissions works for the
Google Cloud Project Owner and Viewer roles.
11. PART A QUESTIONS AND ANSWERS
1. Define API.
An application programming interface (API) is a way for two or more computer
programs to communicate with each other. It is a type of software interface;
APIs are used to simplify the way different, disparate software resources
communicate.
2. How does an API work?
A client application initiates an API call to retrieve information—also known as
a request. This request is processed from an application to the web server via
the API’s Uniform Resource Identifier (URI) and includes a request verb,
headers, and sometimes, a request body.
After receiving a valid request, the API makes a call to the external program or
web server. The server sends a response to the API with the requested
information.
3. What is REST?
REST (Representational State Transfer) outlines a key set of constraints and
agreements that a service must comply with. If a service complies with these
REST constraints, it's said to be RESTful.
APIs intended to be spread widely to consumers and deployed to devices with
limited computing resources, like mobile, are well suited to a REST structure.
REST APIs use HTTP requests to perform GET, PUT, POST, and DELETE
operations.
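The verb semantics in the answer above can be illustrated with a toy in-memory handler. No real HTTP is involved; the resource IDs and status strings are illustrative.

```python
# A toy in-memory resource store illustrating REST verb semantics
# (method names mirror the HTTP verbs).
store = {}

def handle(verb: str, resource_id: str, body=None):
    if verb == "GET":
        return store.get(resource_id, "404 Not Found")
    if verb in ("PUT", "POST"):
        store[resource_id] = body  # create or replace the resource
        return "201 Created" if verb == "POST" else "200 OK"
    if verb == "DELETE":
        removed = store.pop(resource_id, None)
        return "204 No Content" if removed is not None else "404 Not Found"
    return "405 Method Not Allowed"

print(handle("POST", "orders/1", {"item": "book"}))  # 201 Created
print(handle("GET", "orders/1"))                     # {'item': 'book'}
print(handle("DELETE", "orders/1"))                  # 204 No Content
print(handle("GET", "orders/1"))                     # 404 Not Found
```

Because the handler keeps no per-client session state — each call carries everything it needs — it also illustrates the statelessness that makes REST APIs a good fit for the cloud.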
4. List the challenges in deploying and managing APIs.
When deploying and managing APIs on your own, there are several issues to consider.
Interface Definition
Authentication and Authorization
Logging and Monitoring
Management and Scalability
5. What is Cloud Endpoint?
Endpoints is an API management system that helps you secure, monitor, analyze,
and set quotas on your APIs using the same infrastructure Google uses for its own
APIs.
Cloud Endpoints is a distributed API management system that uses a distributed
Extensible Service Proxy, which is a service proxy that runs in its own Docker
container. It helps to create and maintain the most demanding APIs with low
latency and high performance.
6. What are the cloud endpoints options to manage API?
To have your API managed by Cloud Endpoints, you have three options, depending
on where your API is hosted and the type of communications protocol your API uses:
Cloud Endpoints for OpenAPI: for APIs described with version 2.0 of the OpenAPI
specification, managed through the Extensible Service Proxy (ESP) or ESPv2.
Cloud Endpoints for gRPC: for gRPC services, with ESP and ESPv2 translating
RESTful JSON over HTTP into gRPC requests.
Cloud Endpoints Frameworks: a web framework with a built-in API gateway for the
App Engine standard Python 2.7 and Java 8 runtime environments.
1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
18. What is shared security model?
Security concerns associated with cloud computing are typically categorized in two
ways: as security issues faced by cloud providers and security issues faced by their
customers. Security responsibilities are shared between the customer and Google
Cloud. The provider must ensure that their infrastructure is secure and that their
clients’ data and applications are protected, while the user must take measures to
fortify their application and use strong passwords and authentication measures.
Cloud KMS lets you both rotate keys manually and automate the rotation of keys
on a time-based interval.
21. Define IAM.
IAM grants granular access to specific Google Cloud resources and helps prevent
access to other resources. IAM lets you adopt the security principle of least privilege,
which states that nobody should have more permissions than they actually need.
With IAM, you manage access control by defining who (identity) has what
access (role) for which resource.
22. How a role is related to permission?
A role is a collection of permissions. Permissions determine what operations are
allowed on a resource. When you grant a role to a principal, you grant all the
permissions that the role contains.
23. Define policy in IAM.
The allow policy is a collection of role bindings that bind one or more principals to
individual roles. When you want to define who (principal) has what type of access
(role) on a resource, you create an allow policy and attach it to the resource.
12. PART B QUESTIONS
1. Explain the purpose of API and list the challenges in deploying and managing the
APIs.
https://onlinecourses.nptel.ac.in/noc20_cs55/preview
https://learndigital.withgoogle.com/digitalgarage/course/gcloud-computing-foundations
14. REAL TIME APPLICATIONS
As part of the daily business operations on its advertising platform, Twitter serves billions
of ad engagement events, each of which potentially affects hundreds of downstream
aggregate metrics. To enable its advertisers to measure user engagement and track ad
campaign efficiency, Twitter offers a variety of analytics tools, APIs, and dashboards that
can aggregate millions of metrics per second in near-real time.
The Twitter Revenue Data Platform engineering team, led by Steve Niemitz, migrated its
on-prem architecture to Google Cloud to boost the reliability and accuracy of Twitter's ad
analytics platform.
Over the past decade, Twitter has developed powerful data transformation pipelines to
handle the load of its ever-growing user base worldwide. The first deployments for those
pipelines were initially all running in Twitter's own data centers.
To accommodate the projected growth in user engagement over the next few years
and streamline the development of new features, the Twitter Revenue Data Platform
engineering team decided to rethink the architecture and deploy a more flexible and
scalable system in Google Cloud.
Six months after fully transitioning its ad analytics data platform to Google Cloud, Twitter
has already seen huge benefits. Twitter's developers have gained in agility as they can
more easily configure existing data pipelines and build new features much faster. The
real-time data pipeline has also greatly improved its reliability and accuracy, thanks to
Beam's exactly-once semantics and the increased processing speed and ingestion
capacity enabled by Pub/Sub, Dataflow, and Bigtable.
16. ASSESSMENT SCHEDULE

S.NO | Name of the Assessment | Start Date | End Date | Portion
REFERENCES:
1. https://cloud.google.com/docs
2. https://www.cloudskillsboost.google/course_templates/153
3. https://nptel.ac.in/courses/106105223
18. MINI PROJECT
You are just starting your junior cloud engineer role with Jooli Inc. So far you have been
helping teams create and manage Google Cloud resources.
You are now asked to help a newly formed development team with some of their initial
work on a new project around storing and organizing photographs, called Memories. You
have been asked to assist the Memories team with the initial configuration of their
application development environment; you receive a request to complete the following
tasks: