
Cloud Storage Infrastructures

Prof. NIKHILKUMAR B SHARDOOR

Department of Computer Science Engineering.


School of Engineering, MIT ADT University, Pune.
Content
• Introduction to Cloud Storage Infrastructure.
• Direct-Attached Storage (DAS) architecture.
• Storage Area Network (SAN) attributes: components, topologies, connectivity options and zoning.
• SANs: FC protocol stack, addressing, flow control.
• Network Attached Storage (NAS) components, protocols.
• IP Storage Area Network (IP SAN): iSCSI, FCIP and FCoE architecture.
• Content Addressed Storage (CAS) elements, storage, and retrieval processes.
• Server architectures: stand-alone, blades, stateless, clustering.
• Cloud file systems: GFS and HDFS, BigTable, HBase and Dynamo.
Introduction
Cloud Storage: Cloud storage is a service model in which data is maintained, managed and backed up remotely, and made available to users over a network (typically the Internet).

Cloud Storage Infrastructure : A cloud storage infrastructure is the hardware and software
framework that supports the computing requirements of a private or public cloud storage service.
Both public and private cloud storage infrastructures are known for their elasticity, scalability and
flexibility.

Cloud General Architecture:

Cloud storage architectures are primarily about delivering storage on demand in a highly scalable and multi-tenant way. Such architectures consist of a front end that exports an API through which clients access the storage.
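As a toy illustration of this front-end-plus-back-end split, the hedged Python sketch below exposes a minimal put/get/delete API over an in-memory, per-tenant object store. The class and method names are illustrative assumptions; a real cloud storage front end exposes such operations over HTTP/REST with authentication and a durable back end.

class CloudStorageFrontEnd:
    """Toy front end exporting a put/get/delete API over a multi-tenant back-end store."""

    def __init__(self):
        # Back end: one object namespace per tenant (multi-tenancy in miniature).
        self._backend = {}

    def put(self, tenant, key, data):
        self._backend.setdefault(tenant, {})[key] = data

    def get(self, tenant, key):
        return self._backend[tenant][key]

    def delete(self, tenant, key):
        del self._backend[tenant][key]

if __name__ == "__main__":
    storage = CloudStorageFrontEnd()
    storage.put("tenant-a", "reports/q1.txt", b"quarterly data")
    print(storage.get("tenant-a", "reports/q1.txt"))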
Cloud Storage Architecture
Characteristics of a cloud storage architecture:
• Manageability: the ability to manage the system with minimal resources.
• Access method: the protocol through which the cloud storage is exposed.
• Performance: performance as measured by bandwidth and latency.
• Multi-tenancy: support for multiple users (or tenants).
• Scalability: the ability to scale gracefully to meet higher demand or load.
• Data availability: a measure of the system's uptime.
• Control: the ability to control the system, in particular to configure it for cost, performance, or other characteristics.
• Storage efficiency: a measure of how efficiently the raw storage is used.
• Cost: a measure of the cost of the storage (commonly in dollars per gigabyte).

Fig. General Cloud Architecture
Cloud Storage Types
• DAS – Direct Attached Storage.

• NAS – Network Attached Storage.

• SAN – Storage Area Network.

Which storage technology should I use for my business application?


Cloud Storage Infrastructure – Direct Attached Storage (DAS)

• DAS – Direct Attached Storage.

• DAS stands for Direct Attached Storage and, as the name suggests, it is an architecture where storage connects directly to hosts.

• Examples of DAS include hard drives, SSDs, optical disc drives and external storage drives.

• DAS is ideal for localized data access and sharing in environments with a small number of servers, for instance small businesses and departments.

• Block-level access protocols are used by applications to access the data, and DAS can also be used in combination with SAN and NAS (a minimal sketch of block-level versus file-level access follows).
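The hedged Python sketch below contrasts block-level access (how applications address DAS and SAN storage) with file-level access (how NAS is addressed). The device path /dev/sdb, the 4 KiB block size and the file path are illustrative assumptions, not values from the slides; running it requires a real device or file and suitable permissions.

BLOCK_SIZE = 4096  # assumed block size; real devices commonly use 512-byte or 4 KiB sectors

def read_block(device_path, block_number):
    # Block-level access: address the storage by block number, not by file name.
    with open(device_path, "rb") as dev:
        dev.seek(block_number * BLOCK_SIZE)  # jump straight to the block's byte offset
        return dev.read(BLOCK_SIZE)

def read_file(path):
    # File-level access (what NAS exposes): address the storage by file name.
    with open(path, "rb") as f:
        return f.read()

if __name__ == "__main__":
    block = read_block("/dev/sdb", block_number=10)   # hypothetical raw device
    data = read_file("/data/reports/report.txt")      # hypothetical file path
    print(len(block), len(data))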
Cloud Storage Infrastructure – Direct Attached Storage (DAS)

Based on the location of the storage devices with respect to the host, DAS can be classified as internal or external.

Internal DAS: the storage device is internally connected to the host by serial or parallel buses.

Most internal buses have distance limitations, can only be used for short-distance connectivity, and can connect only a limited number of devices. Internal devices also hamper maintenance because they occupy a large amount of space inside the server.

External DAS: the server connects directly to external storage devices. SCSI or FC protocols are used to communicate between the host and the storage devices.

External DAS overcomes the distance and device-count limitations of internal DAS and also provides central administration of the storage devices.
Cloud Storage Infrastructure – Direct Attached Storage (DAS)
Why and why not to go for DAS?
Why to go for DAS:

• It requires a lower investment than other storage networking architectures.

• Less hardware and software are needed to set up and operate DAS.

• Configuration is simple and it can be deployed easily.

• Managing DAS is easy because host-based tools such as the host OS are used.

Why not to go for DAS:

• The major limitation of DAS is that it does not scale well; it restricts the number of hosts that can be directly connected to the storage.

• Limited bandwidth in DAS restricts the available I/O processing capability, and when that capability is reached, service availability may be compromised.

• It does not make optimal use of resources because front-end ports cannot be shared.
Cloud Storage Infrastructure – Network Attached Storage (NAS)
NAS is a file-level computer data storage server connected to a network that provides data access to a diverse group of clients.

NAS is specialized for its task by its hardware, its software or both, and it provides the advantage of server consolidation by removing the need for multiple file servers.

NAS also uses its own OS, which works with its own peripheral devices.

A NAS operating system is optimized for file I/O and therefore performs file I/O better than a general-purpose server. NAS uses protocols such as TCP/IP, CIFS and NFS for data transfer and for accessing the remote file service (see the sketch after the component list).

Components of NAS

• NAS head, which is basically a CPU and memory.
• One or more Network Interface Cards (NICs).
• An optimized operating system.
• Protocols for file sharing (NFS or CIFS).
• Protocols to connect to and manage storage devices, such as ATA, SCSI, or FC.
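To make the file-level access model concrete, the hedged Python sketch below writes and reads a file on a NAS share using ordinary file operations; the NAS head translates these file requests into device requests. The mount point /mnt/nas and the file names are illustrative assumptions, and the NFS or CIFS export is assumed to be already mounted by the operating system.

from pathlib import Path

NAS_MOUNT = Path("/mnt/nas")  # assumed mount point of an NFS or CIFS export

def save_report(name, content):
    # File-level write: the client only names a file; the NAS resolves it to storage blocks.
    target = NAS_MOUNT / "reports" / name
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return target

def load_report(name):
    # File-level read over the same share.
    return (NAS_MOUNT / "reports" / name).read_text()

if __name__ == "__main__":
    save_report("q1-summary.txt", "Quarterly storage usage summary")
    print(load_report("q1-summary.txt"))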
Cloud Storage Infrastructure – Network Attached Storage (NAS)

Fig: Network Attached Storage

• A centralized storage device for storing data on a network.
• Typically has multiple hard drives in a RAID configuration.
• Attaches directly to a switch or router on the network.
• Commonly used in small businesses.

Drawbacks
• Single point of failure.
Cloud Storage Infrastructure – Storage Area Network (SAN)
• A storage area network (SAN) provides access to consolidated, block-level data storage that is accessible by applications running on any of the networked servers.

• It carries data between servers (hosts) and storage devices through Fibre Channel switches.

• A SAN helps organizations connect geographically isolated hosts and provides robust communication between hosts and storage devices.

• In a SAN, each server and storage device is linked through a switch, and the SAN provides features such as storage virtualization, quality of service, security, remote sensing, etc.

Components of SAN: cabling, Host Bus Adapters (HBAs) and switches.

• Cabling is the physical medium used to establish a link between SAN devices.

• An HBA, or Host Bus Adapter, is an expansion card that fits into an expansion slot in a server.

• A switch handles and directs traffic between different network devices: it accepts traffic and then transmits it to the desired endpoint device.
Cloud Storage Infrastructure – Storage Area Network (SAN)
• A special high-speed network that stores and provides access to large amounts of data.
• SANs are fault tolerant.
• Data is shared among several disk arrays.
• Servers access the data as if they were accessing it from a local drive.
• iSCSI (cheaper) and FC (more expensive) protocols are used.
• SANs are not affected by general network traffic.
• Highly scalable, highly redundant and high speed (interconnected with Fibre Channel).
• Expensive.

Fig: Storage Area Network
Cloud Storage Infrastructure – Key Differences between DAS, NAS and SAN
• DAS – Direct Attached Storage.
-Usually disk or tape.
-Directly attached by a cable to the computer processor (the hard disk drive inside a PC or a tape drive attached to a single server are simple types of DAS).
-I/O requests (also called protocols or commands) access the devices directly.

• NAS – Network Attached Storage.
-A NAS device ("appliance"), usually an integrated processor plus disk storage, is attached to a TCP/IP-based network (LAN or WAN) and accessed using specialized file access/file sharing protocols.
-File requests received by the NAS are translated by its internal processor into device requests.

• SAN – Storage Area Network.
-Storage resides on a dedicated network.
-I/O requests access the devices directly.
-Uses Fibre Channel media, providing an any-to-any connection for processors and storage on that network.
-Ethernet media using an I/O protocol called iSCSI is also emerging.
DAS, NAS, SAN – Best Case Scenario vs Worst Case Scenario

DAS
• Best case: DAS is ideal for small businesses that only need to share data locally, have a defined, non-growth budget to work with, and have little to no IT support to maintain a complex system.
• Worst case: DAS is not a good choice for businesses that are growing quickly, need to scale quickly, need to share across distance and collaborate, or support a lot of system users and activity at once.

NAS
• Best case: NAS is perfect for SMBs and organizations that need a minimal-maintenance, reliable and flexible storage system that can quickly scale up as needed to accommodate new users or growing data.
• Worst case: Server-class devices at enterprise organizations that need to transfer block-level data supported by a Fibre Channel connection may find that NAS can't deliver everything that's needed. Maximum data transfer issues could be a problem with NAS.

SAN
• Best case: SAN is best for block-level data sharing of mission-critical files or applications at data centers or large-scale enterprise organizations.
• Worst case: SAN can be a significant investment and is a sophisticated solution that's typically reserved for serious large-scale computing needs. A small-to-midsize organization with a limited budget and few IT staff or resources likely wouldn't need a SAN.
Storage Networking (FC, iSCSI, FCoE)
Fibre Channel (FC) is a technology for transmitting data between computer devices at data rates of up to 20 Gbps at present, with higher rates expected in the near future.

• Fibre Channel began in the late 1980s as part of the IPI (Intelligent Peripheral Interface) Enhanced Physical Project to increase the capabilities of the IPI protocol. That effort widened to investigate other interface protocols as candidates for augmentation. In 1998, Fibre Channel was approved as a project and has since become an industry standard.

iSCSI (Internet Small Computer System Interface) is a storage networking standard used to link different storage facilities.

• iSCSI is used to transmit data over local area networks, wide area networks or the Internet. It enables location-independent data storage and retrieval and is one of the two main approaches to storage data transmission over IP networks.

FCIP (Fibre Channel over IP) translates Fibre Channel control codes and data into IP packets for transmission between geographically distant Fibre Channel SANs.
FCoE
• Mapping of Fibre Channel frames over Ethernet.
• Fibre Channel enabled to run on a lossless Ethernet network.
• Interoperates with existing Fibre Channel SANs.
• No gateway; stateless.
FCoE benefits: wire the server only once; fewer cables and adapters; software provisioning of I/O.

iSCSI
• SCSI transport protocol that operates over TCP.
• Encapsulation of SCSI command descriptor blocks and data in TCP/IP byte streams.
• Broad industry support; OS vendors support their iSCSI drivers, gateways (routers, bridges), and native iSCSI storage arrays.
iSCSI benefits: wire the server only once; fewer cables and adapters; new operational model.
Difference between FCIP and FCoE
• FCIP uses a TCP/IP tunnel to carry Fibre Channel (SCSI) traffic between networks.

• FCoE was developed to simplify switches and consolidate I/O in comparison with FCIP. It replaces FC links with high-speed Ethernet links between the devices that support the network.

• iFCP is a newer standard that broadens the way data can be transferred over the Internet. It combines the FCIP and iSCSI protocols.

For more details refer to these links:

Link 1: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/white_paper_c11-495142.html
Link 2: http://www.provision.ro/storage-infrastructure/storage-networking-fc-iscsi-fcoe
Summary:
• FCoE was not designed to make iSCSI obsolete. iSCSI has many applications that FCoE does not cover, in particular in low-
end systems and in small, remote branch offices, where IP connectivity is of paramount importance.

• Some customers have limited I/O requirements in the 100-Mbps range, and iSCSI is just the right solution for them. This is
why iSCSI has taken off and is so successful in the SMB market: it is cheap, and it gets the job done.

• Large enterprises are adopting virtualization, have much higher I/O requirements, and want to preserve their investments and
training in Fibre Channel. For them, FCoE is probably a better solution.

• FCoE will take a large share of the SAN market. It will not make iSCSI obsolete, but it will reduce its potential market.
Cloud File System
A cloud file system is a distributed file system that allows many clients to access data and supports operations on that data.

A file system also ensures security in terms of confidentiality, availability and integrity.

Types of Cloud File System

• GFS - Google File System.

• HDFS- Hadoop Distributed File System.

• BigTable

• HBase

• Dynamo
Cloud File System: Google File System
• GFS is a proprietary distributed file system developed by Google for its own use.

• GFS is used to store and process huge volumes of data in a distributed manner.

• GFS consists of a single master and multiple chunk servers.

• Files are divided into fixed-size chunks.

• Each chunk holds 64 MB of data.

• Each chunk is replicated on multiple chunk servers (3 by default). Even if a chunk server crashes, the data file will still be present on the other chunk servers.

Fig: Architecture of GFS
Cloud File System: Google File System

Files are divided into fixed-size chunks of 64 MB.

Each chunk is replicated on multiple chunk servers (3 by default). Even if a chunk server crashes, the data file will still be present on the other chunk servers. A minimal sketch of this chunking and replication scheme is given below.
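The hedged Python sketch below illustrates the chunking and replication idea from the slide: a file is split into fixed-size 64 MB chunks and each chunk is assigned to three chunk servers. The chunk-server names and the simple round-robin placement are illustrative assumptions; real GFS placement also considers rack awareness, disk utilization and load.

CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks, as in GFS
REPLICAS = 3                    # default replication factor

# Hypothetical chunk servers; a real cluster would have hundreds.
CHUNK_SERVERS = ["cs-01", "cs-02", "cs-03", "cs-04", "cs-05"]

def number_of_chunks(file_size):
    # How many fixed-size chunks a file of file_size bytes needs (round up).
    return (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE

def place_chunks(file_size):
    # Assign each chunk index to REPLICAS chunk servers (simple round-robin placement).
    placement = {}
    for chunk_index in range(number_of_chunks(file_size)):
        start = chunk_index % len(CHUNK_SERVERS)
        placement[chunk_index] = [
            CHUNK_SERVERS[(start + r) % len(CHUNK_SERVERS)] for r in range(REPLICAS)
        ]
    return placement

if __name__ == "__main__":
    # A 200 MB file needs 4 chunks; losing any single server still leaves 2 copies of every chunk.
    for chunk, servers in place_chunks(200 * 1024 * 1024).items():
        print(f"chunk {chunk}: replicas on {servers}")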
Cloud File System: HDFS
• HDFS is an Apache project; Yahoo, Facebook, IBM and others run platforms based on HDFS.

• HDFS is the storage unit of Hadoop and is used to store and process huge volumes of data on multiple data nodes.

• It is designed to run on low-cost hardware and provides data access across Hadoop clusters.

• It has high fault tolerance and high throughput.

• A large file is broken down into small blocks of data, with a default block size of 128 MB, which can be increased as per requirements.

• Multiple copies of each block are stored in the cluster in a distributed manner on different nodes. A hedged example of reading from HDFS is shown below.

Fig: Architecture of HDFS
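As an illustration of how clients talk to HDFS, the hedged sketch below lists a directory and reads a file through the WebHDFS REST API using the requests library. It assumes WebHDFS is enabled on the NameNode; the host name is hypothetical, 9870 is the default HTTP port in Hadoop 3, and the paths are illustrative.

import requests

# Hypothetical NameNode address; 9870 is the default WebHDFS port in Hadoop 3.
NAMENODE = "http://namenode.example.com:9870"

def list_directory(path):
    # List the entries of an HDFS directory via WebHDFS.
    url = f"{NAMENODE}/webhdfs/v1{path}"
    resp = requests.get(url, params={"op": "LISTSTATUS"})
    resp.raise_for_status()
    statuses = resp.json()["FileStatuses"]["FileStatus"]
    return [entry["pathSuffix"] for entry in statuses]

def read_file(path):
    # Read a file; the NameNode redirects the request to a DataNode holding the blocks.
    url = f"{NAMENODE}/webhdfs/v1{path}"
    resp = requests.get(url, params={"op": "OPEN"}, allow_redirects=True)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    print(list_directory("/user/hadoop"))
    print(read_file("/user/hadoop/sample.txt")[:100])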
Cloud File System: GFS vs HDFS
GFS and HDFS are similar in many aspects and are both used for storing large data sets.

There are, however, a few aspects in which they differ from each other.

The key differing aspects are below:

Load division
• GFS: consists of a single Master node and multiple Chunk Servers.
• HDFS: has a single NameNode and multiple DataNodes in the file system.

Size of the blocks
• GFS: stores its data in chunks; the default chunk size is 64 MB.
• HDFS: divides data into blocks; the default block size is 128 MB.

Data chunk's storage location
• GFS: the master checks all the chunk servers at startup and does not maintain a persistent record of the replica locations of any particular data chunk.
• HDFS: maintains a record of all the DataNodes' block information in the NameNode.
Cloud File System: GFS vs HDFS
Atomic record appends
• GFS: provides an append option along with an offset option; users can append to the same file at different offsets. This approach gives GFS a random read and write ability that HDFS lacks.
• HDFS: can append to a file but does not provide an offset option.

Data integrity
• GFS: chunk servers use checksums to detect corruption of the stored data; corruption can also be detected by comparing the replicated files.
• HDFS: checks the contents of HDFS files for corruption; it uses client software that applies checksum checking.

Deletion
• GFS: the resources of deleted files are not reclaimed immediately as they are in HDFS; instead, deleted files are kept under a different (hidden) name and are permanently removed if they have not been restored within three days.
• HDFS: deleted files are moved into a particular (trash) folder and are then removed by a garbage collector.

Snapshot
• GFS: allows individual files and directories to be snapshotted.
• HDFS: supports up to 65,536 snapshots for each snapshottable directory.
Cloud File System: BigTable
• Bigtable is a compressed, high-performance, proprietary data storage system built on the Google File System, developed by Google.
• Designed to scale to a very large size
• Petabytes of data across thousands of servers
• Used for many Google projects
• Web indexing, Personalized Search, Google Earth, Google Analytics, Google Finance
• Flexible, high-performance solution for all of Google's products

Goals
• Want asynchronous processes to be continuously updating different pieces of data
• Want access to most current data at any time
• Need to support:
• Very high read/write rates (millions of ops per second)
• Efficient scans over all or interesting subsets of data
• Efficient joins of large one-to-one and one-to-many datasets
• Often want to examine data changes over time
• E.g. Contents of a web page over multiple crawls
Building Blocks
• Building blocks:
• Google File System (GFS): Raw storage
• Scheduler: schedules jobs onto machines
• Lock service: distributed lock manager
• MapReduce: simplified large-scale data processing

• BigTable uses of building blocks:


• GFS: stores persistent data (SSTable file format for storage of data)
• Scheduler: schedules jobs involved in BigTable serving
• Lock service: master election, location bootstrapping
• MapReduce: often used to read/write BigTable data
Basic Data Model
• A BigTable is a sparse, distributed persistent multi-dimensional sorted map
(row, column, timestamp) -> cell contents

• Good match for most Google applications


WebTable Example

• Want to keep copy of a large collection of web pages and related information
• Use URLs as row keys
• Various aspects of web page as column names
• Store contents of web pages in the contents: column under the timestamps when they were fetched.
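A hedged, in-memory Python sketch of this data model follows: the table is modeled as a map keyed by (row, column, timestamp), and the WebTable example stores page contents in the contents: column under the fetch timestamps. The helper names and dict representation are illustrative assumptions, not Bigtable's actual implementation.

# Illustrative in-memory model of Bigtable's map: (row, column, timestamp) -> cell contents
table = {}

def put(row, column, timestamp, value):
    table[(row, column, timestamp)] = value

def most_recent(row, column):
    # Bigtable keeps multiple versions of a cell; return the one with the latest timestamp.
    versions = [(ts, v) for (r, c, ts), v in table.items() if r == row and c == column]
    return max(versions)[1] if versions else None

if __name__ == "__main__":
    # WebTable example: the row key is the (reversed) URL, and each crawl of the page
    # is stored in the contents: column under the timestamp when it was fetched.
    put("com.cnn.www", "contents:", 1, "<html>first crawl</html>")
    put("com.cnn.www", "contents:", 2, "<html>second crawl</html>")
    put("com.cnn.www", "anchor:cnnsi.com", 2, "CNN")
    print(most_recent("com.cnn.www", "contents:"))  # -> <html>second crawl</html>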
Rows

• Name is an arbitrary string


• Access to data in a row is atomic
• Row creation is implicit upon storing data
• Rows ordered lexicographically
• Rows close together lexicographically usually on one or a small number of machines
• Reads of short row ranges are efficient and typically require communication with a small number of
machines.
• Can exploit this property by selecting row keys so they get good locality for data access.
• Example:
math.gatech.edu, math.uga.edu, phys.gatech.edu, phys.uga.edu
VS
edu.gatech.math, edu.gatech.phys, edu.uga.math, edu.uga.phys
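A tiny hedged check of the row-key example above: Bigtable stores rows in lexicographic order, so with the reversed-host form all pages from the same domain sort next to each other, and short row-range scans stay on a small number of machines. The key lists simply reproduce the example from the slide.

plain_keys = ["math.gatech.edu", "math.uga.edu", "phys.gatech.edu", "phys.uga.edu"]
reversed_keys = ["edu.gatech.math", "edu.gatech.phys", "edu.uga.math", "edu.uga.phys"]

# Rows are kept in lexicographic order, so adjacent keys usually live on the same machine(s).
print(sorted(plain_keys))     # gatech and uga pages interleave (grouped by subdomain first)
print(sorted(reversed_keys))  # all gatech pages are adjacent, then all uga pages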
Columns

• Columns have two-level name structure:


• family:optional_qualifier
• Column family
• Unit of access control
• Has associated type information
• Qualifier gives unbounded columns
• Additional levels of indexing, if desired
Timestamps

• Used to store different versions of data in a cell


• New writes default to current time, but timestamps for writes can also be set explicitly by clients
• Lookup options:
• “Return most recent K values”
• “Return all values in timestamp range (or all values)”
• Column families can be marked with attributes:
• “Only retain most recent K values in a cell”
• “Keep values until they are older than K seconds”
Cloud File System :HBase and Dynamo
• HBase is a distributed column-oriented database built on top
of the Hadoop file system. It is an open-source project and is
horizontally scalable.

• HBase is a data model similar to Google's Bigtable, designed to provide quick random access to huge amounts of structured data. It leverages the fault tolerance provided by the Hadoop Distributed File System (HDFS).

• It is a part of the Hadoop ecosystem that provides random, real-time read/write access to data in the Hadoop File System.

• One can store data in HDFS either directly or through HBase. Data consumers read or access the data in HDFS randomly using HBase. HBase sits on top of the Hadoop File System and provides read and write access.
Features of HBase
• HBase is linearly scalable.
• It has automatic failure support.
• It provides consistent reads and writes.
• It integrates with Hadoop, both as a source and as a destination.
• It has an easy Java API for clients.
• It provides data replication across clusters.

Where to Use HBase
• Apache HBase is used for random, real-time read/write access to Big Data.
• It hosts very large tables on top of clusters of commodity hardware.
• Apache HBase is a non-relational database modeled after Google's Bigtable. Just as Bigtable acts upon the Google File System, Apache HBase works on top of Hadoop and HDFS.

Applications of HBase
• It is used whenever there is a need for write-heavy applications.
• HBase is used whenever we need to provide fast random access to available data.
• Companies such as Facebook, Twitter, Yahoo, and Adobe use HBase internally. A hedged client example is shown below.
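To illustrate HBase's column-family model from a client's point of view, the hedged sketch below writes and reads one row using the happybase Python library, which talks to HBase through its Thrift gateway. The host name, the table name "webtable" and its column families are illustrative assumptions, and the table and Thrift server are assumed to already exist.

import happybase

# Hypothetical Thrift gateway host; HBase's Thrift server must be running for this to work.
connection = happybase.Connection("hbase-thrift.example.com")
table = connection.table("webtable")  # assumed existing table with 'contents' and 'anchor' families

# Write a row: a row key plus column-family:qualifier -> value pairs.
table.put(b"com.cnn.www", {
    b"contents:html": b"<html>...</html>",
    b"anchor:cnnsi.com": b"CNN",
})

# Fast random read of a single row by its key.
row = table.row(b"com.cnn.www")
print(row[b"contents:html"])

connection.close()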
Architecture of HBase
• HBase has three major components: the client library, a
master server, and region servers.

• Region servers can be added or removed as per requirement.


The master server -
• Assigns regions to the region servers and takes the
help of Apache ZooKeeper for this task.
• Handles load balancing of the regions across region
servers. It unloads the busy servers and shifts the
regions to less occupied servers.
• Maintains the state of the cluster by negotiating the
load balancing.
• Is responsible for schema changes and other metadata
operations such as creation of tables and column
families.
Regions
• Regions are nothing but tables that are split up and spread across the region servers.
Region server
• The region servers have regions that -
• Communicate with the client and handle data-related operations.
• Handle read and write requests for all the regions under it.
• Decide the size of the region by following the region size thresholds.
Zookeeper
• Zookeeper is an open-source project that provides services like maintaining configuration information,
naming, providing distributed synchronization, etc.

• Zookeeper has ephemeral nodes representing different region servers. Master servers use these nodes to
discover available servers.

• In addition to availability, the nodes are also used to track server failures or network partitions.

• Clients communicate with region servers via zookeeper.

• In pseudo and standalone modes, HBase itself will take care of zookeeper.
Dynamo
• Amazon DynamoDB is a fully managed NoSQL database service that allows you to create database tables that can store and retrieve any amount of data.

• It automatically manages the data traffic of tables over multiple servers and maintains performance.

• It also relieves customers of the burden of operating and scaling a distributed database.

• Hardware provisioning, setup, configuration, replication, software patching, cluster scaling, etc. are managed by Amazon. A hedged usage example is shown below.
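To show what this managed-service model looks like to a developer, the hedged sketch below writes and reads one item with the boto3 SDK; the client never configures servers, replication or partitioning. The region, the table name "Users", its partition key "user_id" and the item values are illustrative assumptions, and valid AWS credentials plus an existing table are required.

import boto3

# Assumed region and an existing table named "Users" with partition key "user_id".
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Users")

# Write an item; beyond the key, attributes can vary per item and may be multi-valued.
table.put_item(Item={
    "user_id": "u-1001",
    "name": "Nikhil",
    "interests": ["storage", "cloud"],
})

# Read the item back by its key.
response = table.get_item(Key={"user_id": "u-1001"})
print(response.get("Item"))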
Benefits of DynamoDB
• Managed service − Amazon DynamoDB is a managed service. There is no need to hire experts to
manage NoSQL installation. Developers need not worry about setting up, configuring a distributed
database cluster, managing ongoing cluster operations, etc. It handles all the complexities of
scaling, partitions and re-partitions data over more machine resources to meet I/O performance
requirements.

• Scalable − Amazon DynamoDB is designed to scale. There is no need to worry about predefined
limits to the amount of data each table can store. Any amount of data can be stored and retrieved.
DynamoDB will spread automatically with the amount of data stored as the table grows.

• Fast − Amazon DynamoDB provides high throughput at very low latency. As datasets grow,
latencies remain stable due to the distributed nature of DynamoDB's data placement and request
routing algorithms.
• Durable and highly available − Amazon DynamoDB replicates data across at least three different data centers. The system operates and serves data even under various failure conditions.

• Flexible: Amazon DynamoDB allows creation of dynamic tables, i.e. the table can
have any number of attributes, including multi-valued attributes.

• Cost-effective: Payment is for what we use without any minimum charges. Its
pricing structure is simple and easy to calculate.
