Nptel Ques
1 Which is/are not a primary function of a typical command interpreter?
a) to provide the interface between the API and application program
b) to handle the files in the operating system
c) to get and execute the next user-specified command
d) to validate the command provided by the user
Correct Answer: (a), (b)
Detailed Solution: The primary function of a command interpreter is to get and execute the next
user-specified command. The command interpreter checks whether the command is valid and then runs
it; otherwise it reports an error.
2 Which device(s) need(s) the physical addressing (MAC address) system to forward/route
network packets?
a) Hub
b) Router
c) Bridge
d) Switch
Correct Answer: (b), (c), (d)
Detailed Solution: Routers, bridges, and switches use the physical addressing system to route/forward packets.
3 Which of the following is FALSE?
a) Kernel level threads cannot share the code segment.
b) User level threads are not scheduled by the kernel.
c) Context switching between user level threads is faster than context switching between
kernel level threads.
d) When a user level thread is blocked, all other threads of its process are blocked.
Correct Answer: (a)
Detailed Solution: Kernel-level threads can share code segments. So, A is FALSE. User-level
threads are scheduled by the thread library and the kernel is not involved. So, B is TRUE. Context
switching between user-level threads is faster because no kernel-mode switch is needed and very little
state is saved, whereas for kernel-level threads the registers, PC, and SP must be saved and restored.
So, C is TRUE. When a user-level thread blocks, all other threads of its process are blocked. So, D is
TRUE.
4 In classful addressing, the IP address 172.16.52.63 belongs to which class?
a) Class A
b) Class B
c) Class C
d) Class D
Correct Answer: (b)
Detailed Solution: In Class B, IP addresses range from 128.0.0.0 to 191.255.255.255.
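As a quick sanity check on the classful ranges, here is a small Python sketch (purely illustrative, not part of the original solution) that classifies an IPv4 address by its first octet:

    def ip_class(address):
        # Classful addressing: A = 0-127, B = 128-191, C = 192-223, D = 224-239, E = 240-255
        first_octet = int(address.split(".")[0])
        if first_octet <= 127:
            return "A"
        if first_octet <= 191:
            return "B"
        if first_octet <= 223:
            return "C"
        if first_octet <= 239:
            return "D"
        return "E"

    print(ip_class("172.16.52.63"))  # prints "B", since 172 falls in 128-191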
6 In Operating Systems, which of the following is/are CPU scheduling algorithms?
a) Priority
b) Minimum Cost Spanning Tree
c) Shortest Job First
d) Shortest Path
Correct Answer: (a) and (c)
Detailed Solution: In Operating Systems, CPU scheduling algorithms are:
i) First Come First Served scheduling; ii) Shortest Job First scheduling; iii) Priority scheduling; iv)
Round Robin scheduling; v) Multilevel Queue scheduling; vi) Multilevel Feedback Queue
Scheduling
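To make one of these algorithms concrete, the following Python sketch simulates non-preemptive Shortest Job First for a hypothetical set of burst times (the numbers are illustrative, not from the question):

    def sjf_average_wait(burst_times):
        # Non-preemptive SJF: run jobs in increasing order of burst time,
        # assuming all jobs arrive at time 0; return the average waiting time.
        waiting_total = 0
        elapsed = 0
        for burst in sorted(burst_times):
            waiting_total += elapsed  # this job waited for everything scheduled before it
            elapsed += burst
        return waiting_total / len(burst_times)

    print(sjf_average_wait([6, 8, 7, 3]))  # jobs run as 3, 6, 7, 8 -> average wait 7.0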
7 Which of the following is/are example(s) of DBMS?
a) MySQL
b) Tableau
c) Microsoft Access
d) Google search engine
Correct Answer: (a) and (c)
Detailed Solution: MySQL and Microsoft Access are database management systems, while Google
is a search engine and Tableau is data visualization and analytics software. MySQL is an open-source
relational database management system; Microsoft Access is a tool that is part of Microsoft Office
used to store data.
8 In OSI network architecture, the error handling is managed by:
a) Network Layer
b) Transport Layer
c) Data Link Layer
d) Session Layer
Correct Answer: (c)
Detailed Solution: In the OSI network architecture, error handling is managed by the data link layer
using error-detecting codes such as checksums and CRCs applied to each frame.
Week 2
Which of the following GCP services helps us run Windows and Linux-based virtual machines?
a) Compute engine
b) Google App engine
c) Kubernetes Engine
d) All of the above
Correct Answer: (a)
Explanation: Compute Engine is the GCP service that lets us run Windows and Linux-based
virtual machines.
What is the maximum size of the persistent disk that can be attached to a virtual machine in
Google Compute Engine?
a) 32GB
b) 32TB
c) 64GB
d) 64TB
Correct Answer: (d)
Explanation: A maximum of 64 TB of network storage can be attached to a VM as a persistent disk.
Containerized applications can be deployed, managed, and scaled on Google using which service?
a) Compute engine
b) Google App engine
c) Kubernetes Engine
d) None of the above
Correct Answer: (c)
Explanation: Containerized applications can be deployed, managed, and scaled on Google
Kubernetes Engine.
Say, five VMs are running in a managed group with CPU utilization of 80%, 70%, 75%, 85%, and
90% respectively. How many VMs would the autoscaler add if the targeted CPU utilization is
50%?
a) 0
b) 1
c) 2
d) 3
Correct Answer: (d)
Explanation: Here the actual utilization is {(80+70+75+85+90)/5}% = 80%. As the actual utilization
is more than the targeted, autoscaler would add VMs.
If it adds one VM then the actual utilization would be {(80+70+75+85+90)/6}% = 66.67%.
If it adds two VMs then the actual utilization would be {(80+70+75+85+90)/7}% = 57.14%
If it adds three VMs then the actual utilization would be {(80+70+75+85+90)/8}% = 50%
The correct answer is 3 as the targeted utilization can be reached only after adding 3 VMs.
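The same reasoning can be written as a short Python sketch; the utilization figures come from the question, and the ceiling division simply finds the smallest group size whose average meets the target:

    import math

    utilizations = [80, 70, 75, 85, 90]   # per-VM CPU utilization in percent
    target = 50                           # targeted average utilization in percent

    total_load = sum(utilizations)                 # 400 "percent-units" of work to spread out
    required_vms = math.ceil(total_load / target)  # 400 / 50 = 8 VMs in total
    vms_to_add = required_vms - len(utilizations)  # 8 - 5 = 3
    print(vms_to_add)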
Which of the following could be an event that can trigger cloud functions?
a) Files added to the google storage
b) A new virtual machine instance is created
c) Changes occurred in database
d) None of the above
Correct Answer: (a), (b), and (c)
Explanation: Cloud Functions can be triggered by events such as a file added to Cloud Storage, the
creation of a new VM instance, changes in a database, requests to HTTP endpoints, etc.
Which cloud service model provides the most control over the computing environment while still
abstracting the underlying hardware?
a) IaaS (Infrastructure as a service )
b) PaaS (Platform as a service)
c) SaaS(Software as a service)
d) FaaS (Function as a Service)
Correct Answer: (a)
Explanation: Infrastructure as a Service (IaaS) provides the most control over the computing
environment because it offers virtualized computing resources over the internet. Users have control
over operating systems, storage, and deployed applications, while the underlying hardware is
abstracted and managed by the service provider. This allows for significant flexibility and
customization compared to PaaS, SaaS, and FaaS.
Consider the following statements:
1. Serverless computing abstracts infrastructure management tasks from the developer.
2. Serverless computing allows developers to run code without provisioning or managing
servers.
Which of the following is correct?
a) Only 1
b) Only 2
c) Both 1 and 2
d) Neither is correct
Correct Answer: (c)
Explanation: Serverless computing abstracts infrastructure management tasks from the developer,
allowing them to focus on writing code. This model also enables developers to run code without
provisioning or managing servers, as the cloud provider automatically handles the scaling and
infrastructure management. This makes both statements correct.
How can Cloud Functions integrate with other services in the Google Cloud Platform?
a) Through direct API calls.
b) By subscribing to events from services like Pub/Sub or Cloud Storage.
c) By making SQL queries to managed databases.
d) By accessing virtual machine instances using SSH.
Correct Answer: (b)
Explanation: Cloud Functions are event-driven, serverless functions that execute in response to
specific events. By subscribing to events from services like Pub/Sub or Cloud Storage, Cloud
Functions can automatically trigger and execute code based on events such as messages published
to a Pub/Sub topic or object changes in a Cloud Storage bucket.
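As a rough illustration, a first-generation background Cloud Function written in Python and subscribed to Cloud Storage object-finalize events might look like the sketch below; the function name, bucket, and deployment flags are assumptions for illustration only:

    def on_new_object(event, context):
        # Triggered when a file is added (finalized) in the configured bucket.
        # "event" carries the object metadata; "context" carries the event metadata.
        print(f"New object {event['name']} in bucket {event['bucket']}")
        print(f"Event ID: {context.event_id}, type: {context.event_type}")

    # Deployed, for example, with:
    #   gcloud functions deploy on_new_object \
    #       --runtime=python310 \
    #       --trigger-event=google.storage.object.finalize \
    #       --trigger-resource=my-bucket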
Kubernetes is __________ and Docker is a _________.
a) virtual operating system, container
b) container, container orchestration tool
c) container orchestration tool, container
d) container, virtual operating system
Correct Answer: (c)
Explanation: Docker is a container technology, an open platform for developing, shipping, and running
applications, whereas Kubernetes is a container orchestration tool.
Which statement best describes Google App Engine (GAE)?
a) GAE allows you to manage virtual machines directly.
b) GAE automatically scales your application based on incoming traffic.
c) GAE supports only Python programming language.
d) GAE requires you to manage the underlying infrastructure and scaling manually.
Correct Answer: (b)
Explanation: Google App Engine (GAE) is a serverless platform that automatically manages
scaling based on incoming traffic. It abstracts away the underlying infrastructure, allowing
developers to focus on writing code and deploying applications without worrying about managing
virtual machines directly or scaling manually.
Week 3
Cloud SQL can scale up to _____ processor cores and ______ of storage capacity.
a) 64, 16TB
b) 64, 10TB
c) 32, 10TB
d) 32, 16TB
Correct Answer: (b)
Detailed Solution: Cloud SQL can scale up to 64 processor cores and 10 TB of storage capacity.
What is the difference between a bucket and an object in cloud storage?
a) A bucket is a physical location for objects, while an object is a logical container for data.
b) A bucket is a logical container for objects, while an object is a physical location for data.
c) A bucket and an object are the same thing.
d) A bucket and an object are different types of resources in cloud storage.
Correct Answer: (b)
Detailed Solution: Buckets are logical containers, while objects are physical files. Buckets are used
to organize objects and control access to them. Objects are the actual data that is stored in cloud
storage.
Which of the following is considered structured data?
a) A collection of social media posts
b) Customer names and addresses stored in a database
c) A repository of company policies in PDF format
d) Audio recordings of customer service calls
Correct Answer: (b)
Detailed Solution: Customer names and addresses stored in a database are considered structured
data because they are organized in a defined manner, often in rows and columns, which allows for
easy retrieval and analysis. Social media posts, PDF documents, and audio recordings are examples
of unstructured data.
Which of the following is NOT a type of NoSQL database?
a) Key-value stores.
b) Relational databases.
c) Document databases.
d) Graph databases.
Correct Answer: (b)
Detailed Solution: The four main types of NoSQL databases are key-value stores, document
databases, wide-column stores, and graph databases.
Which of the following requirements is best suited for nearline storage?
a) Data that is rarely accessed but needs to be retrieved within milliseconds
b) Frequently accessed data
c) Data that needs long-term retention with infrequent access
d) Data backup and disaster recovery
Correct Answer: (a)
Detailed Solution: Nearline storage is designed for data that is accessed less frequently but still
needs to be retrieved quickly when needed, making it ideal for data that is rarely accessed but needs
to be retrieved within milliseconds. Frequently accessed data would be better suited for standard
storage, while long-term retention with infrequent access fits coldline storage, and disaster recovery
can be appropriate for both coldline and nearline depending on the access frequency requirements.
What types of SQL databases can you manage with Google Cloud SQL?
a) MySQL, MongoDB, Oracle
b) PostgreSQL, SQL Server, MySQL
c) PostgreSQL, Cassandra, SQLite
d) SQL Server, MongoDB, MariaDB
Correct Answer: (b)
Detailed Solution: Google Cloud SQL supports the management of PostgreSQL, SQL Server, and
MySQL databases, providing flexibility for different application requirements.
Which of the following is a unique feature of Cloud Spanner compared to traditional relational
databases?
a) Support for SQL queries
b) Strongly consistent transactions
c) Globally distributed data with horizontal scalability
d) Indexing capabilities
Correct Answer: (c)
Detailed Solution: Cloud Spanner's unique feature is its ability to provide globally distributed data
with horizontal scalability while maintaining strong consistency and support for SQL queries.
What type of data model does Cloud Bigtable use?
a) Document-based
b) Graph-based
c) Wide-column
d) Key-value
Correct Answer: (c)
Detailed Solution: Cloud Bigtable uses a wide-column data model, which allows for efficient
storage and retrieval of large datasets by organizing data into rows and columns with high
scalability.
In terms of data consistency, which statement is true comparing Google Cloud Datastore with a
relational database?
a) Google Cloud Datastore provides strong consistency for all transactions.
b) Relational databases provide eventual consistency by default.
c) Both Google Cloud Datastore and relational databases provide strong consistency.
d) Consistency models depend solely on the application's configuration.
Correct Answer: (d)
Detailed Solution: Google Cloud Datastore offers eventual consistency by default, meaning that
reads may not immediately reflect the most recent write. Relational databases can provide strong
consistency depending on how they are configured, often defaulting to stronger consistency models
than NoSQL databases.
Week 4
In a RESTful API, which HTTP status code indicates that a resource has been successfully created?
a) 200
b) 201
c) 204
d) 400
Correct Answer: (b)
Detailed Solution: In the context of RESTful APIs, different HTTP status codes are used to indicate
the result of a client's request. The status code 201 Created specifically indicates that a new resource
has been successfully created as a result of the client's request.
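For illustration only, a minimal Flask handler (Flask is an assumed example framework, not mentioned in the question) that returns 201 Created on success and 400 Bad Request otherwise could look like this:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    books = []

    @app.route("/books", methods=["POST"])
    def create_book():
        payload = request.get_json(silent=True)
        if not payload or "title" not in payload:
            return jsonify({"error": "title is required"}), 400  # malformed request
        book = {"id": len(books) + 1, "title": payload["title"]}
        books.append(book)
        return jsonify(book), 201  # resource successfully created

    if __name__ == "__main__":
        app.run()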
Which of the following best describes Google Cloud Endpoints?
a) A managed service to store and retrieve large amounts of data
b) A tool for monitoring and logging applications
c) A distributed database service
d) A fully managed service for creating, deploying, and managing APIs
Correct Answer: (d)
Detailed Solution:
Google Cloud Endpoints is a fully managed service that makes it easy to create, deploy, and manage
APIs on Google Cloud Platform (GCP). It allows developers to develop APIs that can be consumed
by different clients such as mobile apps, web apps, and other services. Cloud Endpoints provides
features like authentication, monitoring, logging, and support for both REST and gRPC protocols,
which help in managing the lifecycle of APIs efficiently. This service helps ensure that APIs are
secure, scalable, and easy to maintain.
Which of the following tasks can be facilitated by managed messaging services?
a) Distributing real-time analytics reports
b) Triggering automated backups
c) Processing and routing real-time events
d) Storing large volumes of static data
Correct Answer: (c)
Detailed Solution: Managed messaging services like Google Cloud Pub/Sub or Amazon SQS are
designed to handle the processing and routing of real-time events and messages between
applications.
They facilitate asynchronous communication and event-driven architectures, making them suitable
for scenarios where real-time events need to be processed and distributed across systems.
Which of the following is/are applicable for Cloud Pub/Sub?
a) It is a software application that collects data from users
b) It exchanges data directly between publisher and subscriber
c) It cannot handle multiple applications
d) None of the above
Correct Answer: (d)
Detailed Solution: Cloud Pub/Sub is a messaging middleware that enables asynchronous
communication between independent applications. It acts as a message bus that allows publishers
to send messages to a topic, which are then asynchronously delivered to subscribers. This decouples
the sender and receiver, allowing them to operate independently and improving scalability and
reliability. Cloud Pub/Sub can handle multiple applications by facilitating communication and acting
as a buffer between them, ensuring message delivery even if the receiver is temporarily unavailable.
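A minimal publishing sketch with the google-cloud-pubsub Python client (the project and topic IDs are hypothetical, and credentials are assumed to be configured) might look like this:

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "orders-topic")  # hypothetical IDs

    # publish() returns a future; the broker stores and forwards the message to
    # subscribers even if they are temporarily unavailable.
    future = publisher.publish(topic_path, data=b"order created", origin="checkout-service")
    print("Published message ID:", future.result())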
We can use Cloud Pub/Sub while ___________ data in a big-data processing model.
a) analysing
b) processing
c) ingesting
d) storing
Correct Answer: (c)
Detailed Solution: Cloud Pub/Sub is used for ingesting data in the big-data processing model.
Which Google Cloud service is used for managing user identities and access to resources?
a) Cloud Storage
b) Cloud IAM
c) Cloud SQL
d) Cloud Pub/Sub
Correct Answer: (b)
Detailed Solution: Cloud IAM (Identity and Access Management) is the service used to manage user
identities and control access to Google Cloud Platform resources. It allows administrators to grant
specific permissions to users, groups, or service accounts at the project level or for specific resources
within a project.
Which statement best describes Customer-Supplied Encryption Keys (CSEK) in Google Cloud?
a) Encryption keys managed and provided by Google Cloud for customer data
b) Keys that are automatically generated for each Cloud Storage bucket
c) Keys provided by customers and used to encrypt data stored in Google Cloud
d) Keys used only for encrypting VM instances in Google Compute Engine
Correct Answer: (c)
Detailed Solution: Customer-Supplied Encryption Keys (CSEK) are encryption keys provided by
customers to encrypt data stored in Google Cloud services like Cloud Storage. Google Cloud uses
these keys
to encrypt and decrypt customer data, providing an additional layer of control over data encryption.
Which encryption standard does Google Cloud use to encrypt data at rest?
a) AES-128
b) RSA-2048
c) DES
d) RSA-4096
Correct Answer: (a)
Detailed Solution: Google Cloud Platform uses AES-128 (Advanced Encryption Standard with a
128-bit key) to encrypt data at rest by default. AES-128 is widely recognized for its security and
efficiency in encrypting sensitive data.
What does CMEK stand for in Google Cloud Platform (GCP) encryption?
a) Cloud Managed Encryption Keys
b) Customer Managed Encryption Keys
c) Centralized Managed Encryption Keys
d) Certified Managed Encryption Keys
Correct Answer: (b)
Detailed Solution: Customer Managed Encryption Keys (CMEK) allow customers to create, manage,
and bring their own encryption keys to encrypt data stored in Google Cloud services like Cloud
Storage and BigQuery. This gives customers greater control over their data encryption and security.
Which Google Cloud service provides integration with GSuite for managing user identities and
access?
a) Cloud IAM
b) Cloud Storage
c) Cloud Identity
d) Cloud Functions
Correct Answer: (c)
Detailed Solution: Cloud Identity is a Google Cloud service that provides identity and access
management capabilities. It integrates with G Suite to manage user identities, access controls, and
security policies across Google Cloud services and applications. Cloud Identity allows organizations
to centralize user management, enforce security policies, and enable single sign-on (SSO) for G Suite
and other cloud-based applications.
Week 5
If a host on a network has the address 172.16.45.14/30, what is the subnetwork this host belongs
to?
a) 172.16.45.0
b) 172.16.45.4
c) 172.16.45.8
d) 172.16.45.12
Correct Answer: (d)
Detailed Solution: A /30 mask, regardless of the class of address, has 252 in the fourth octet
(255.255.255.252). This gives a block size of 4, so the subnets are 0, 4, 8, 12, 16, and so on. Host
address .14 therefore falls in the 172.16.45.12 subnet.
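The same arithmetic can be checked with Python's standard ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.16.45.14/30")
    print(iface.network)                  # 172.16.45.12/30
    print(iface.network.network_address)  # 172.16.45.12, the subnetwork this host belongs to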
Consider the following statements and select the correct option:
1) In auto subnet mode, the range of addresses is expandable up to /16 only.
2) Custom subnet mode comes with predefined IP ranges.
a) Only 1 is true.
b) Only 2 is true.
c) Both are true.
d) Both are false.
Correct Answer: (a)
Detailed Solution: Statement 1 is true. Statement 2 is false because custom subnet mode gives complete
control over subnets and IP ranges rather than predefined ones.
Correct Answer: (c)
Detailed Solution: The UDP header's port number is a 16-bit integer, whereas an Ethernet MAC address
is 48 bits, the IPv6 Next Header field is 8 bits, and the TCP header's sequence number is 32 bits.
Which of the following GCP services helps you to create a secure environment for the application
deployments?
a) Compute engine.
b) Google App engine.
c) Cloud load balancing.
d) VPC.
Correct Answer: (d)
Detailed Solution: VPC is a kind of GCP service that helps you to create a secure environment for
application deployments.
Traffic in a VPN is not _______
a) invisible from public network.
b) logically separated from other traffic.
c) accessible from unauthorized public networks.
d) restricted to a single protocol in IPsec.
Correct Answer: (c)
Detailed Solution: Traffic in a VPN is not accessible from unauthorized public networks because it
travels through an encrypted tunnel, which masks the internal IP addresses and keeps the traffic
logically separated from other traffic.
What are some of the factors that can affect the performance of dynamic routing?
a) Receiver's MAC address.
b) The number of routers in the network.
c) The type of dynamic routing protocol being used.
d) Ciphers used for encryption of data.
Correct Answer: (b) and (c)
Detailed Solution: The more routers in a network, the more complex the routing table will be, and
the more time it will take for routers to update their routing tables when a change occurs in the
network. Different dynamic routing protocols have different characteristics, such as the frequency
with which they exchange routing information, the method they use to calculate the shortest path,
and the amount of resources they consume.
In a direct peering connection, what is the maximum capacity per link?
a) 1 Gbps.
b) 4 Gbps.
c) 8 Gbps.
d) 10 Gbps.
Correct Answer: (d)
Detailed Solution: In a direct peering connection, the maximum capacity per link is 10 Gbps.
How many types of global load balancers are there in GCP?
a) 3
b) 4
c) 5
d) 6
Correct Answer: (a)
Detailed Solution: The global load balancers are the HTTP(S) Load Balancer, the SSL Proxy, and the
TCP Proxy.
Which protocol does Google Cloud Platform (GCP) primarily use for assigning internal IP
addresses to virtual machines (VMs) within a Virtual Private Cloud (VPC)?
a) DHCP
b) APIPA
c) RFC-1918
d) None of the above
Correct Answer: (a)
Detailed Solution:
Google Cloud Platform (GCP) uses DHCP (Dynamic Host Configuration Protocol) for assigning
internal IP addresses to virtual machines (VMs) within a Virtual Private Cloud (VPC). DHCP
dynamically assigns IP addresses from a pool configured within the VPC subnet to VM instances
when they are started or restarted. This allows for efficient IP address management and ensures that
VMs within the VPC have unique and routable IP addresses for communication.
Peering between two VPCs is possible only when
a) Source allows connectivity to the destination
b) Destination allows traffic from the source
c) Source and destination are within the same project
d) Source and destination VPCs belong to the same organization
Correct Answer: (a) and (b)
Detailed Solution: Peering between two VPCs requires acceptance from both ends. However, peering is
possible between two VPCs regardless of whether they belong to the same or different projects or
organizations.
Cloud VPN connects an on-premise network to a VPC through ____________
a) TCP/IP protocol
b) Peering
c) Shared network connectivity
d) IPSec VPN tunnel
Correct Answer: (d)
Detailed Solution: Cloud VPN connects an on-premise network to a VPC through IPSec VPN
tunnel.
Which of the following is true?
a) Subnets are regional features whereas VPCs are global
b) VPCs are regional and subnets are zonal
c) VPCs are regional but subnets are not used in GCP
d) Both VPCs and subnets are global
Correct Answer: (a)
Detailed Solution: In GCP, subnets can be configured within a region whereas VPCs are global.
Week 7
Which cloud service provides a centralized platform for monitoring and managing resources in
Google Cloud Platform (GCP)?
a) Cloud Functions
b) Cloud Pub/Sub
c) Cloud Storage
d) Cloud Monitoring
Correct Answer: (d)
Detailed Solution: Cloud Monitoring is the cloud service in Google Cloud Platform (GCP) that
provides a centralized platform for monitoring and managing resources.
Stackdriver is a ____________ provided by Google Cloud Platform for monitoring, logging, and
____________ of applications and infrastructure.
a) programming language, debugging
b) cloud service, management
c) database, scaling
d) virtual machine, deployment
Correct Answer: (b)
Detailed Solution: Stackdriver is a cloud service provided by Google Cloud Platform (GCP) for
monitoring, logging, and management of applications and infrastructure. It offers tools and features
for collecting and analyzing metrics, monitoring logs, setting up alerting policies and creating
dashboards for visualizing the health and performance of GCP resources. Stackdriver helps users
gain insights into their applications and infrastructure, enabling effective management and
troubleshooting.
Which cloud service provides a fully managed big data processing and analytics platform?
a) Cloud Storage
b) Cloud Spanner
c) BigQuery
d) Cloud Pub/Sub
Correct Answer: (c)
Detailed Solution: BigQuery is a fully managed big data processing and analytics platform
provided by Google Cloud. It allows you to analyze large datasets using SQL queries and provides
high-performance querying capabilities.
Which of the following are features of Cloud Dataproc?
a) Fully managed Apache Hadoop and Apache Spark service
b) Auto-scaling to handle varying workloads
c) Integrated machine learning capabilities
d) Real-time data streaming
Correct Answer: (a) and (b)
Detailed Solution: Cloud Dataproc is a fully managed service provided by Google Cloud for
running Apache Hadoop and Apache Spark clusters. It automates the provisioning, management,
and scaling of these clusters, allowing you to focus on your data processing and analytics tasks. Auto-
scaling is a feature of Cloud Dataproc that dynamically adjusts the cluster size based on the
workload, ensuring optimal resource utilization and cost efficiency.
What is YAML?
a) YAML is a human-readable data serialization language.
b) YAML is a programming language.
c) YAML is a markup language.
d) YAML is a configuration file format.
Correct Answer: (a)
Detailed Solution: YAML is a human-readable data serialization language that is often used for
writing configuration files.
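A small sketch using the PyYAML library (an assumed dependency, installed with pip install pyyaml) shows how a YAML document maps directly onto ordinary Python data structures:

    import textwrap
    import yaml  # PyYAML

    config_text = textwrap.dedent("""
        app:
          name: demo-service
          replicas: 3
          regions:
            - us-central1
            - europe-west1
    """)

    config = yaml.safe_load(config_text)
    print(config["app"]["replicas"])  # 3 -- mappings become dicts, sequences become lists
    print(config["app"]["regions"])   # ['us-central1', 'europe-west1']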
Which of the following is NOT a feature of Stackdriver?
a) Monitoring
b) Logging
c) Error reporting
d) Tracing
Correct Answer: (c)
Detailed Solution: Error reporting. Stackdriver provides monitoring, logging, and tracing, but not
error reporting.
Which big data managed service is used to analyze streaming data in real time?
a) Cloud Dataproc
b) Cloud Dataflow
c) BigQuery
d) None of the above
Correct Answer: (b)
Detailed Solution: Cloud Dataflow is a fully managed service for executing Apache Beam
pipelines within the Google Cloud Platform ecosystem. It can be used to process both batch and
streaming data, and it supports a variety of programming languages, including Python, Java, and
Go.
Which of the following is the main purpose of BigQuery?
a) To store and analyze large amounts of data very quickly.
b) To provide a Hadoop-based data warehouse.
c) To provide a NoSQL database.
d) To provide a relational database.
Correct Answer: (a)
Detailed Solution: To store and analyze large amounts of data very quickly. BigQuery is a very
powerful tool for analyzing large amounts of data, and it can be used to answer a wide variety of
questions.
Which of the following is NOT a characteristic of a data warehouse?
a) It is a collection of data from different sources.
b) It is stored in a way that makes it easy to analyze.
c) It is designed to store historical data.
d) It is updated in real time.
Correct Answer: (d)
Detailed Solution: It is updated in real time. A data warehouse is typically updated on a regular
basis, but it is not updated in real time.
Week 8
Variables in TensorFlow are also known as ___________
a) tensor variable
b) tensor keywords
c) tensor attributes
d) tensor objects
Correct Answer: (d)
Detailed Solution: Variables in TensorFlow are also known as tensor objects. These objects hold
the values which can be modified during the execution of the program.
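A minimal TensorFlow 2.x sketch (assuming TensorFlow is installed) showing that a tf.Variable is a tensor object whose value can be modified during execution:

    import tensorflow as tf

    weights = tf.Variable([[1.0, 2.0], [3.0, 4.0]], name="weights")
    weights.assign_add(tf.ones_like(weights))  # update the variable in place
    print(weights.numpy())                     # [[2. 3.] [4. 5.]]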
What is the full form of XLA in TensorFlow?
a) Xtreme Linear Algebra
b) Accelerated Linear Algebra
c) Unknown Linear Algebra
d) X Linear Algebra
Correct Answer: (b)
Detailed Solution: XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear
algebra that can accelerate TensorFlow models with potentially no source code changes.
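A hedged sketch of requesting XLA compilation in TensorFlow 2.x (assuming a version where tf.function accepts the jit_compile flag):

    import tensorflow as tf

    @tf.function(jit_compile=True)  # ask XLA to compile this function
    def scaled_dot(x, y):
        return tf.reduce_sum(x * y) * 2.0

    x = tf.random.normal([1024])
    y = tf.random.normal([1024])
    print(scaled_dot(x, y).numpy())  # same result as without XLA, potentially faster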
The last layer of deep neural network is called the ___________
a) hidden layer
b) output layer
c) input layer
d) none of the above
Correct Answer: (b)
Detailed Solution: The first layer is called the Input Layer. The last layer is called the Output
Layer. All layers in between are called Hidden Layers.
Which of the following is NOT a feature of Cloud Vision API?
a) Detecting objects in an image
b) Analyzing text in an image
c) Translating text in an image
d) Converting audio into text
Correct Answer: (d)
Detailed Solution: Cloud Vision API does not support converting audio into text. This feature is
provided by the Cloud Speech API.
What is the purpose of sentiment analysis in natural language processing?
a) To identify the author of a text
b) To determine the tone or emotion expressed in a text
c) To translate a text from one language to another
d) To summarize the main points of a text
Correct Answer: (b)
Detailed Solution: Sentiment analysis is an NLP task that involves analyzing a text to determine the
overall tone or emotion expressed, such as positive, negative, or neutral.
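As an illustration, a sentiment request using the google-cloud-language Python client (the library, its installation, and configured credentials are all assumptions here) might look like this:

    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="The support team resolved my issue quickly. Great service!",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    sentiment = response.document_sentiment
    print(sentiment.score, sentiment.magnitude)  # score > 0 suggests a positive tone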
Which GCP service enables scalable and distributed training of ML models using TensorFlow?
a) Google Cloud Dataflow
b) Google Cloud Pub/Sub
c) Google Cloud BigQuery
d) Google Cloud ML Engine
Correct Answer: (d)
Detailed Solution: Google Cloud ML Engine is the GCP service that enables scalable and
distributed training of ML models using TensorFlow, an open-source machine learning framework.
What is the purpose of TensorFlow's eager execution mode?
a) To enable dynamic graph construction and execution
b) To enable distributed training across multiple GPUs
c) To optimize memory usage during model training
d) To provide a high-level API for building ML models
Correct Answer: (a)
Detailed Solution: TensorFlow's eager execution mode is a feature that allows for immediate
evaluation and execution of operations in TensorFlow without explicitly building a static
computational graph. In the traditional TensorFlow workflow, users would define a graph of
operations and then execute the graph in a separate session.
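A small TensorFlow 2.x sketch illustrating eager execution, where operations run immediately and return concrete values without building a graph or opening a session:

    import tensorflow as tf

    print(tf.executing_eagerly())    # True by default in TensorFlow 2.x
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    print(tf.matmul(a, b).numpy())   # [[11.]] -- the result is available right away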
Cloud AutoML is a suite of _________ services offered by Google Cloud Platform.
a) Machine learning
b) Data analytics
c) Big data
d) Cloud computing
Correct Answer: (a)
Detailed Solution: Cloud AutoML is a suite of machine learning services provided by Google
Cloud Platform (GCP). It offers a range of services and tools that enable users to build and deploy
custom machine learning models without requiring extensive expertise in machine learning or
programming.
Fill in the blanks: Machine learning is a way to use standard _________ to derive _________
insights from data and make repeated decisions.
a) intelligence, technical
b) techniques, intelligent
c) predictions, algorithmic
d) algorithms, predictive
Correct Answer: (d)
Detailed Solution: As per the definition, machine learning is a way to use standard algorithms to
derive predictive insights from data and make repeated decisions.