NPTEL Assignment 1

The document contains multiple choice questions related to edge computing, covering topics such as data ingestion, machine learning model execution, and the role of M2M brokers. It highlights the advantages of edge computing over cloud computing, including reduced latency and data sovereignty. Key concepts like real-time data processing paths and federated learning are also discussed.


Multiple Choice Questions

1. Which of the following is a building block of edge computing?

a) Data ingestion and stream processing

b) Centralized data centers

c) High-bandwidth CDN

d) Traditional three-tier architecture

Answer: a) Data ingestion and stream processing

Solution: Edge computing requires efficient data ingestion (e.g., using Kafka) and
stream processing for real-time data analysis. These are key building blocks for
processing data at the edge, as opposed to sending data to the cloud for processing.
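As a minimal sketch of such an ingestion step, the snippet below publishes one sensor reading to an edge-local Kafka topic using the kafka-python client; the broker address, topic name, and message fields are illustrative assumptions, and a stream processor would consume this topic in real time.

```python
import json
from kafka import KafkaProducer  # kafka-python client

# Connect to an edge-local Kafka broker (address and topic are placeholders).
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Ingest one sensor reading; downstream stream processing reads from this topic.
reading = {"sensor_id": "temp-01", "value": 72.4}
producer.send("edge-sensor-readings", reading)
producer.flush()
```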

2. In edge computing, which tier is responsible for running machine learning models?

a) Data Source Tier

b) Storage Tier

c) Actionable Insight Tier

d) Intelligence Tier

Answer: d) Intelligence Tier

Solution: The Intelligence Tier in edge computing is responsible for running machine
learning models. While the cloud may handle model training, the edge handles
model inferencing, providing real-time insights based on data from edge devices.
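A minimal sketch of edge-side inferencing, assuming a model already trained in the cloud and exported to ONNX (the file name, input shape, and feature values are placeholders), using ONNX Runtime:

```python
import numpy as np
import onnxruntime as ort  # lightweight inference runtime often used on edge devices

# Load a cloud-trained model exported to ONNX (file name is a placeholder).
session = ort.InferenceSession("anomaly_detector.onnx")
input_name = session.get_inputs()[0].name

# Run inference locally on freshly collected sensor features; no cloud round trip.
features = np.array([[72.4, 0.91, 13.2]], dtype=np.float32)
outputs = session.run(None, {input_name: features})
print("edge inference result:", outputs[0])
```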

3. What is the role of M2M brokers in edge computing?

a) Data storage management

b) Enabling machine-to-machine communication

c) Training machine learning models

d) Real-time data conversion

Answer: b) Enabling machine-to-machine communication

Solution: M2M (Machine-to-Machine) brokers orchestrate communication between
devices in edge computing environments, enabling devices to exchange data without
relying on centralized cloud servers.
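A common way to realize this is an MQTT broker (e.g., Mosquitto) running at the edge. The sketch below uses the paho-mqtt client (1.x-style constructor) to have one device publish a reading that another device receives directly through the local broker; the broker address and topic are assumptions.

```python
import paho.mqtt.client as mqtt

BROKER, TOPIC = "localhost", "factory/line1/temperature"  # placeholders

def on_message(client, userdata, msg):
    # A peer machine's reading arrives via the local broker, not a cloud server.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()              # paho-mqtt 1.x-style constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.publish(TOPIC, "72.4")       # the same client doubles as publisher in this sketch
client.loop_forever()
```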
4. What is a limitation of the current cloud system for AI use cases?

a) It offers only local processing capabilities

b) It has a low capacity for data storage

c) It cannot provide real-time responses due to latency

d) It lacks programmability of the network stack

Answer: c) It cannot provide real-time responses due to latency

Solution: Cloud computing, due to its centralized nature, suffers from latency issues
when AI models need to respond in real time. This limitation makes it unsuitable for
mission-critical AI applications that require immediate feedback, which edge
computing can address by processing data locally.

5. Which component is responsible for real-time queries and data processing in edge
computing?

a) Stream Processing

b) Function as a Service

c) Object Storage

d) M2M Brokers

Answer: a) Stream Processing

Solution: Stream processing allows real-time data analysis and immediate response
to incoming data. In edge computing, this capability ensures that data is processed
and acted upon as it arrives, reducing the delay in decision-making.
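A minimal, dependency-free sketch of the idea: keep a sliding window over the incoming stream and react to each reading the moment it arrives (the window size, threshold, and sample values are arbitrary).

```python
from collections import deque
from statistics import mean

WINDOW = deque(maxlen=10)  # sliding window over the most recent readings

def on_reading(value, threshold=80.0):
    # Process each value as it arrives instead of batching it for later analysis.
    WINDOW.append(value)
    rolling_avg = mean(WINDOW)
    if rolling_avg > threshold:
        print(f"alert: rolling average {rolling_avg:.1f} exceeds {threshold}")
    return rolling_avg

for v in [70, 75, 82, 88, 91]:      # simulated incoming stream
    on_reading(v)
```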

6. How does edge computing mimic public cloud capabilities?

a) By centralizing data storage in remote data centers

b) By providing capabilities like device management and stream analytics near data
sources

c) By reducing the need for hardware innovations

d) By utilizing client-server architecture for processing

Answer: b) By providing capabilities like device management and stream analytics near data sources

Solution: Edge computing mimics public cloud capabilities by offering features like
device management, stream analytics, and even running machine learning models
close to where the data is generated, thus enhancing real-time decision-making.

7. What is the primary purpose of the actionable insight layer in edge computing?

a) Storing unstructured data

b) Running machine learning training models

c) Sending alerts and controlling actuators

d) Performing real-time data ingestion

Answer: c) Sending alerts and controlling actuators

Solution: The actionable insight layer is responsible for converting insights from the
intelligence layer into actions, such as sending alerts to stakeholders, updating
dashboards, or controlling actuators to respond to events immediately.
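As an illustration, the sketch below turns an insight flagged by the intelligence layer into an alert and an actuator command; the HTTP endpoints are hypothetical placeholders for whatever alerting and actuator interfaces a deployment exposes.

```python
import requests

ALERT_URL = "http://localhost:8080/alerts"          # hypothetical alert webhook
ACTUATOR_URL = "http://localhost:8080/valve/close"  # hypothetical actuator endpoint

def act_on_insight(insight):
    # Convert an insight from the intelligence layer into concrete actions.
    if insight.get("anomaly"):
        requests.post(ALERT_URL, json={"msg": "anomaly detected", **insight})
        requests.post(ACTUATOR_URL)                  # e.g., close a valve immediately

act_on_insight({"anomaly": True, "sensor_id": "temp-01", "value": 95.2})
```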

8. What is the primary advantage of edge computing over cloud computing?

a) High latency

b) Centralized processing

c) Data sovereignty

d) Limited scalability

Answer: c) Data sovereignty

Solution: Edge computing processes data locally, close to the source, which keeps sensitive
data within local or regional boundaries and thus ensures data sovereignty; reduced latency
is a further benefit of this local processing.

9. Which IoT data flow path processes real-time data immediately upon generation?

a) Cold path

b) Warm path

c) Batch path

d) Hot path

Answer: d) Hot path

Solution: The hot path in IoT systems refers to the processing of real-time data as it is
generated. This is essential for applications that require immediate insights or
actions, such as real-time monitoring of industrial systems.
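A small sketch of how a gateway might split the two paths: the hot path evaluates each reading immediately, while the cold path simply archives it for later batch analysis (the file-based store and threshold are placeholders).

```python
import json
import time

COLD_STORE = "readings.jsonl"   # placeholder standing in for batch/object storage

def handle_reading(reading):
    # Hot path: act on the reading the moment it is generated.
    if reading["value"] > 90:
        print("hot path: immediate action for", reading)
    # Cold path: archive every reading for later batch analytics.
    with open(COLD_STORE, "a") as f:
        f.write(json.dumps({**reading, "ts": time.time()}) + "\n")

handle_reading({"sensor_id": "temp-01", "value": 95.2})
```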

10. Which of the following is a key feature of Federated Learning?

a) Training occurs on centralized data

b) Data remains decentralized while models are aggregated

c) IoT data is processed only in the cloud

d) Training is skipped in federated models

Answer: b) Data remains decentralized while models are aggregated

Solution: Federated learning ensures that data remains decentralized, with the
machine learning models being trained on local devices. Only model updates are
aggregated at a central server, preserving privacy by keeping sensitive data on the
local devices.
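A toy sketch of the federated averaging idea on synthetic data: each device takes a local gradient step on its own data, and only the resulting weights are averaged by the server (the linear model, learning rate, and data are illustrative).

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One gradient-descent step on a linear model, run locally on the device's data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(weight_list):
    # The server sees only model weights, never the raw device data.
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(3)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(5):                  # a few federated rounds
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = federated_average(updates)

print("aggregated model weights:", global_w)
```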
