Unit 2 Bda
NoSQL
NoSQL is a non-relational database management system that does not require a fixed schema, avoids joins, and is easy to scale. NoSQL databases are used for distributed data stores with humongous data storage needs. NoSQL is used for Big Data and real-time web applications, for example by companies like Twitter, Facebook, and Google that collect terabytes of user data every single day.
SQL
SQL programming can be effectively used to insert, search, update, and delete database records.
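As a minimal sketch of these four operations from Java via JDBC (the connection URL, credentials, and the users table with its name column are all hypothetical, and a suitable JDBC driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SqlCrudExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical database URL and credentials.
        try (Connection c = DriverManager.getConnection("jdbc:mysql://localhost/testdb", "user", "pass")) {
            // INSERT a record.
            try (PreparedStatement ps = c.prepareStatement("INSERT INTO users (name) VALUES (?)")) {
                ps.setString(1, "alice");
                ps.executeUpdate();
            }
            // SEARCH (SELECT) records.
            try (PreparedStatement ps = c.prepareStatement("SELECT id, name FROM users WHERE name = ?")) {
                ps.setString(1, "alice");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                    }
                }
            }
            // UPDATE a record.
            try (PreparedStatement ps = c.prepareStatement("UPDATE users SET name = ? WHERE name = ?")) {
                ps.setString(1, "alicia");
                ps.setString(2, "alice");
                ps.executeUpdate();
            }
            // DELETE a record.
            try (PreparedStatement ps = c.prepareStatement("DELETE FROM users WHERE name = ?")) {
                ps.setString(1, "alicia");
                ps.executeUpdate();
            }
        }
    }
}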
Designing a distributed system is neither easy nor straightforward. A number of challenges need to be overcome in order to arrive at the ideal system. The major challenges in distributed systems are listed below:
1. Heterogeneity:
The Internet enables users to access services and run applications over a heterogeneous collection of computers and networks. Heterogeneity (that is, variety and difference) applies to all of the following:
o networks
o computer hardware
o operating systems
o programming languages
o implementations by different developers
4. Concurrency
Both services and applications provide resources that can be shared by clients in a distributed
system. There is therefore a possibility that several clients will attempt to access a shared resource
at the same time. For example, a data structure that records bids for an auction may be accessed
very frequently when it gets close to the deadline time. For an object to be safe in a concurrent
environment, its operations must be synchronized in such a way that its data remains consistent.
This can be achieved by standard techniques such as semaphores, which are used in most operating
systems.
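To make the auction example concrete, here is a minimal Java sketch of one such standard technique, a semaphore. The AuctionBids class and its fields are hypothetical; a single-permit semaphore serializes access so that concurrent clients cannot leave the shared data inconsistent.

import java.util.concurrent.Semaphore;

public class AuctionBids {
    // One permit, so at most one thread can touch the shared state at a time.
    private final Semaphore lock = new Semaphore(1);
    private int highestBid = 0;

    // Returns true if this bid became the new highest bid.
    public boolean placeBid(int amount) throws InterruptedException {
        lock.acquire(); // block until the resource is free
        try {
            if (amount > highestBid) {
                highestBid = amount;
                return true;
            }
            return false;
        } finally {
            lock.release(); // always release, even if an exception occurs
        }
    }
}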
5. Security
Many of the information resources that are made available and maintained in distributed systems
have a high intrinsic value to their users. Their security is therefore of considerable importance.
Security for information resources has three components:
o confidentiality (protection against disclosure to unauthorized individuals)
o integrity (protection against alteration or corruption)
o availability for the authorized (protection against interference with the means to access the resources)
6. Scalability
Distributed systems must remain scalable as the number of users increases. Scalability is defined by B. Clifford Neuman as follows:
"A system is said to be scalable if it can handle the addition of users and resources without suffering a noticeable loss of performance or increase in administrative complexity."
Scalability has 3 dimensions:
o Size: the number of users and resources to be processed; the associated problem is overloading.
o Geography: the distance between users and resources; the associated problem is communication reliability.
o Administration: as the size of a distributed system increases, many parts of the system need to be controlled; the associated problem is an administrative mess.
7. Failure Handling
Computer systems sometimes fail. When faults occur in hardware or software, programs may
produce incorrect results or may stop before they have completed the intended computation. The
handling of failures is particularly difficult.
Hadoop
Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models. A Hadoop framework application works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
Hadoop Architecture
At its core, Hadoop has two major layers: a processing/computation layer (MapReduce) and a storage layer (the Hadoop Distributed File System, HDFS).
MapReduce
MapReduce is a parallel programming model for writing distributed applications that process large amounts of data in parallel on large clusters of commodity hardware in a reliable, fault-tolerant manner.
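As an illustration of the model, here is the classic word-count example written against the Hadoop MapReduce Java API (org.apache.hadoop.mapreduce). This is the standard textbook sketch; input and output paths are taken from the command line.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map stage: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce stage: keys arrive sorted; sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}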
Hadoop Distributed File System
The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and provides a distributed file system that is designed to run on commodity hardware. It has many similarities with existing distributed file systems, but it is highly fault-tolerant and designed to be deployed on low-cost hardware.
It is quite expensive to build bigger servers with heavy configurations to handle large-scale processing. As an alternative, you can tie together many commodity single-CPU computers as a single functional distributed system; in practice, the clustered machines can read the dataset in parallel and provide much higher throughput, and this is cheaper than one high-end server. So the first motivational factor behind Hadoop is that it runs across clustered, low-cost machines.
Hadoop runs code across a cluster of computers. This process includes the following core tasks that Hadoop performs:
o Data is initially divided into directories and files. Files are divided into uniformly sized blocks of 128 MB or 64 MB (preferably 128 MB); writing such a file from a client is shown in the sketch after this list.
o These files are then distributed across various cluster nodes for further processing.
o HDFS, being on top of the local file system, supervises the processing.
o Blocks are replicated for handling hardware failure.
o Checking that the code was executed successfully.
o Performing the sort that takes place between the map and reduce stages.
o Sending the sorted data to a certain computer.
o Writing the debugging logs for each job.
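Here is the client-side sketch referenced above: a minimal Java program that writes a file to HDFS using the org.apache.hadoop.fs.FileSystem API. The NameNode address hdfs://namenode:9000 and the file path are hypothetical; the block size and replication factor shown are the common defaults.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.blocksize", "134217728"); // 128 MB blocks (the usual default)
        conf.set("dfs.replication", "3");       // three replicas per block
        // hdfs://namenode:9000 is a hypothetical NameNode address.
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:9000"), conf);
        // HDFS splits the file into blocks and replicates them across DataNodes.
        try (FSDataOutputStream out = fs.create(new Path("/user/demo/sample.txt"))) {
            out.writeBytes("hello hadoop\n");
        }
        fs.close();
    }
}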
Advantages of Hadoop
The Hadoop framework allows the user to quickly write and test distributed systems. It is efficient, and it automatically distributes the data and work across the machines and, in turn, utilizes the underlying parallelism of the CPU cores.
YARN
YARN divides the work of resource management and job scheduling/monitoring between separate daemons: a global ResourceManager and a per-application ApplicationMaster. An application can be either a single job or a DAG of jobs.
The ResourceManager has two components: the Scheduler and the ApplicationsManager.
The Scheduler is a pure scheduler, i.e., it does not track the status of running applications; it only allocates resources to the various competing applications. It also does not restart jobs that fail due to hardware or application failures. The Scheduler allocates resources based on the abstract notion of a container. A container is simply a bundle of resources such as CPU, memory, disk, and network.
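As an illustration of the container notion, here is a minimal sketch using YARN's client-side AMRMClient API, through which an ApplicationMaster asks the Scheduler for containers. The resource sizes and the priority value are illustrative assumptions.

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class ContainerRequestSketch {
    public static ContainerRequest buildRequest() {
        // A container is a bundle of resources: here 1024 MB of memory and 2 virtual cores.
        Resource capability = Resource.newInstance(1024, 2);
        Priority priority = Priority.newInstance(1); // illustrative priority
        // null node and rack lists mean "place the container anywhere in the cluster".
        return new ContainerRequest(capability, null, null, priority);
    }
}

An ApplicationMaster would pass such a request to AMRMClient.addContainerRequest and receive allocated containers from the ResourceManager on subsequent heartbeats.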
The following are the tasks of the ApplicationsManager:
o accepting job submissions
o negotiating the first container for executing the application-specific ApplicationMaster
o restarting the ApplicationMaster container on failure
YARN can scale beyond a few thousand nodes via YARN Federation. Federation allows multiple sub-clusters to be wired together into a single massive cluster, so many independent clusters can be used together for a single large job. It can be used to achieve a large-scale system.
So what stores data in HDFS? One prominent example is HBase, which stores its data in HDFS.
HBase
o HBase is a NoSQL (non-relational) database.
o HBase is important and mainly used when you need random, real-time read/write access to your Big Data.
o It supports a high volume of data and high throughput.
o In HBase, a table can have thousands of columns.
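As an illustration of random, real-time read/write access, here is a minimal sketch using the HBase Java client. The user_actions table with column family cf, the row key, and the cell contents are hypothetical; the table is assumed to already exist.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseReadWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("user_actions"))) {
            // Write: one row keyed by user id, one cell in the "cf" column family.
            Put put = new Put(Bytes.toBytes("user42"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("last_action"), Bytes.toBytes("login"));
            table.put(put);
            // Random read: fetch the row back by its key.
            Result result = table.get(new Get(Bytes.toBytes("user42")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("last_action"));
            System.out.println(Bytes.toString(value));
        }
    }
}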