BDAM - Assignment 1 - Group 2
ASSIGNMENT - 1
SUBMITTED BY:
GROUP 2
Dikshika Arya (19PT1-07)
Jigyasa Monga (19PT1-12)
Pankhuri Bhatnagar (19PT1-18)
1. How does in-memory computing work?
In-memory computing uses middleware software that stores data in RAM, which is
much faster than a traditional spinning disk, across a cluster of computers and
processes it in parallel.
RAM storage and parallel distributed processing are the two fundamental pillars of
in-memory computing.
A single modern computer can hold a sizeable dataset in RAM, but that is not
enough for many of today's operational datasets, which easily measure in
terabytes; this is why the data is spread across a cluster.
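The two pillars above can be sketched in plain Python: a list held in RAM stands in for a partitioned in-memory dataset, and a process pool stands in for the cluster. This is illustrative only; real in-memory platforms distribute partitions across machines, not just across local worker processes.

```python
from multiprocessing import Pool

def partition(data, n_parts):
    """Split an in-memory dataset into n_parts roughly equal chunks."""
    size = max(1, len(data) // n_parts)
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(chunk):
    """Worker: aggregate one partition entirely in RAM."""
    return sum(chunk)

if __name__ == "__main__":
    dataset = list(range(1_000_000))      # dataset resident in RAM
    with Pool(4) as pool:                 # 4 parallel workers
        partials = pool.map(process_chunk, partition(dataset, 4))
    total = sum(partials)                 # combine the partial results
    print(total)                          # 499999500000
```

The pattern is the same map-then-combine scheme a distributed in-memory grid applies across nodes.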
● Cons: -
o High Price.
o It does not handle external dependencies.
3. Cloud Bigtable
A fully managed, scalable NoSQL database service for large analytical and
operational workloads.
It is a compressed, high performance, proprietary data storage system built on
Google File System, Chubby Lock Service, SSTable and a few other Google
technologies.
● Pros: -
o Consistent sub-10 ms latency while handling millions of requests per second.
o Ideal for use cases such as personalization, ad tech, fintech, digital media, and IoT.
o Seamlessly scale to match your storage needs; no downtime during reconfiguration.
o Designed with a storage engine for machine-learning applications, leading to better
predictions.
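Bigtable's data model can be pictured as a sparse, sorted map of row key → column family → column qualifier → timestamped cell values. The toy class below sketches that model in plain Python for illustration; it is not the real `google-cloud-bigtable` client API, and the names are hypothetical.

```python
from collections import defaultdict

class ToyBigtable:
    """Conceptual sketch of Bigtable's sparse, wide-column data model."""
    def __init__(self):
        # row key -> column family -> (qualifier, timestamp) -> value
        self.rows = defaultdict(lambda: defaultdict(dict))

    def set_cell(self, row_key, family, qualifier, value, ts):
        self.rows[row_key][family][(qualifier, ts)] = value

    def read_row(self, row_key):
        return {fam: dict(cols)
                for fam, cols in self.rows.get(row_key, {}).items()}

    def scan(self, start_key, end_key):
        """Range scan over lexicographically sorted row keys,
        mirroring Bigtable's ordered row space."""
        return [(k, self.read_row(k)) for k in sorted(self.rows)
                if start_key <= k < end_key]

table = ToyBigtable()
table.set_cell("user#42", "profile", "name", "Ada", ts=1)
table.set_cell("user#42", "profile", "name", "Ada L.", ts=2)  # newer version
print(table.read_row("user#42"))
```

Row-key design drives performance in the real system, since reads and scans are served from this sorted row space.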
Storage virtualization
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device.
The process involves abstracting and hiding the internal functions of a storage
device from the host application, host servers or a general network in order to
facilitate the application- and network-independent management of storage.
Storage virtualization is a key building block of cloud storage services.
Some of the benefits of storage virtualization include automated management,
easier expansion of storage capacity, less time spent on manual supervision, easier
updates and reduced downtime.
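The pooling idea above can be sketched as a thin layer that maps one logical block address space onto several physical devices. All names here are hypothetical, and a real virtualization layer would also handle striping, redundancy and hot-swapping.

```python
class PhysicalDevice:
    """Stand-in for one physical disk: a fixed array of blocks."""
    def __init__(self, capacity):
        self.blocks = [None] * capacity

class VirtualVolume:
    """Presents many devices as a single contiguous block device."""
    def __init__(self, devices):
        self.devices = devices

    def _locate(self, logical_block):
        # Map a logical block number onto (device, local block number).
        for dev in self.devices:
            if logical_block < len(dev.blocks):
                return dev, logical_block
            logical_block -= len(dev.blocks)
        raise IndexError("logical block out of range")

    def write(self, logical_block, data):
        dev, local = self._locate(logical_block)
        dev.blocks[local] = data

    def read(self, logical_block):
        dev, local = self._locate(logical_block)
        return dev.blocks[local]

    @property
    def capacity(self):
        return sum(len(d.blocks) for d in self.devices)

vol = VirtualVolume([PhysicalDevice(100), PhysicalDevice(50)])
vol.write(120, b"hello")   # lands on the second device, local block 20
print(vol.capacity)        # 150
```

The host sees one 150-block volume; which physical device actually holds a block is hidden behind `_locate`, which is exactly the abstraction the section describes.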
Functions of VM manager:
o Create virtual machines from installation media or from a virtual machine template.
o Delete virtual machines.
o Power off virtual machines.
o Import virtual machines.
o Deploy and clone virtual machines.
o Perform live migration of virtual machines.
o Import and manage ISOs.
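The operations listed above can be sketched as a minimal manager interface. The class and method names are illustrative, not any vendor's API, and real managers do far more (scheduling, snapshots, memory-page copying during live migration).

```python
class VMManager:
    """Toy sketch of the VM-manager operations listed above."""
    def __init__(self):
        self.vms = {}

    def create(self, name, template=None):
        """Create a VM from installation media or from a template."""
        if name in self.vms:
            raise ValueError(f"{name} already exists")
        self.vms[name] = {"state": "stopped", "template": template}
        return name

    def clone(self, source, new_name):
        if source not in self.vms:
            raise KeyError(source)
        return self.create(new_name, template=source)

    def power_on(self, name):
        self.vms[name]["state"] = "running"

    def power_off(self, name):
        self.vms[name]["state"] = "stopped"

    def delete(self, name):
        del self.vms[name]

    def live_migrate(self, name, target_host):
        """Sketch only: a real manager copies memory pages to the
        target host while the VM keeps running."""
        self.vms[name]["host"] = target_host

mgr = VMManager()
mgr.create("web-1")
mgr.power_on("web-1")
mgr.live_migrate("web-1", "host-b")
print(mgr.vms["web-1"]["state"])   # running
```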
Hyper-V technology
By using stream processing technology, data streams can be processed, stored, analyzed,
and acted upon as they are generated, in real time.
A streaming stack can be assembled from open-source and proprietary solutions to
specific problems, including stream processing, storage, data integration and real-time
analytics.
The message broker is the component that takes data from a source, called a producer,
translates it into a standard message format, and streams it on an ongoing basis.
Other components can then listen in and consume the messages passed on by the broker.
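That producer/broker/consumer flow can be sketched in a few lines. This is an in-process toy for illustration only; real brokers such as Kafka or Pulsar add persistence, partitioning and delivery guarantees.

```python
import json
import time
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: producers publish raw events, the broker
    wraps them in a standard envelope, subscribers consume them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Translate the producer's payload into a standard message format.
        message = json.dumps({"topic": topic,
                              "ts": time.time(),
                              "payload": payload})
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("clicks", lambda m: received.append(json.loads(m)))
broker.publish("clicks", {"user": 42, "page": "/home"})
print(received[0]["payload"])   # {'user': 42, 'page': '/home'}
```

Note that the producer never talks to the consumer directly; decoupling the two through the broker is what lets other components "listen in" independently.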
Various data-storage options are used for streaming data: a database or data
warehouse, the message broker itself, or a data lake. The data lake is a flexible
and inexpensive option for storing event data; however, it comes with its own
technical challenges.
Modern streaming architectures are also being adopted that rely on a full-stack approach
(in contrast to patching together open-source technologies), which further provides benefits
such as performance, high availability, fault tolerance and flexibility.