CC Unit-4,5

UNIT-IV

Cloud Resource Management and Scheduling:


Policies and Mechanisms for Resource Management, Applications of Control Theory to Task Scheduling on a Cloud, Stability of a Two-Level Resource Allocation Architecture, Feedback Control Based on Dynamic Thresholds, Coordination of Specialized Autonomic Performance Managers, Resource Bundling, Scheduling Algorithms for Computing Clouds - Fair Queuing, Start-Time Fair Queuing.
UNIT-4

Cloud Resource
Management and Scheduling
Contents
 Resource management and scheduling.
 Policies and mechanisms.
 Applications of control theory to cloud resource allocation.
 Stability of a two-level resource allocation architecture.
 Proportional thresholding.
 Coordinating power and performance management.
 A utility-based model for cloud-based Web services.
 Resource bundling and combinatorial auctions.
 Scheduling algorithms.
 Fair queuing.
 Start-time fair queuing.
 Borrowed virtual time.
 Cloud scheduling subject to deadlines.
Resource management and scheduling
 Critical function of any man-made system.
 It affects the three basic criteria for the evaluation of a system:
 Functionality.
 Performance.
 Cost.
 Scheduling in a computing system  deciding how to allocate
resources of a system, such as CPU cycles, memory, secondary
storage space, I/O and network bandwidth, between users and
tasks.
 Policies and mechanisms for resource allocation.
 Policy  principles guiding decisions.
 Mechanisms  the means to implement policies.

Motivation
 Cloud resource management:
 Requires complex policies and decisions for multi-objective
optimization.
 It is challenging - the complexity of the system makes it impossible to
have accurate global state information.
 Affected by unpredictable interactions with the environment, e.g.,
system failures, attacks.
 Cloud service providers are faced with large fluctuating loads which
challenge the claim of cloud elasticity.

 The strategies for resource management for IaaS, PaaS, and SaaS
are different.

Cloud resource management (CRM) policies
1. Admission control  prevent the system from accepting workload in violation of high-level system policies.
2. Capacity allocation  allocate resources for individual activations of a service.
3. Load balancing  distribute the workload evenly among the servers.
4. Energy optimization  minimization of energy consumption.
5. Quality of service (QoS) guarantees  ability to satisfy timing or other conditions specified by a Service Level Agreement.

Mechanisms for the implementation of resource management policies

 Control theory  uses the feedback to guarantee system stability and predict transient behavior.
 Machine learning  does not need a performance model of the system.
 Utility-based  require a performance model and a mechanism to correlate user-level performance with cost.
 Market-oriented/economic  do not require a model of the system, e.g., combinatorial auctions for bundles of resources.

Tradeoffs
 To reduce cost and save energy we may need to concentrate the
load on fewer servers rather than balance the load among them.
 We may also need to operate at a lower clock rate; the
performance decreases at a lower rate than does the energy.

Control theory application to cloud resource management (CRM)

 The main components of a control system:
 The inputs  the offered workload and the policies for admission control, capacity allocation, load balancing, energy optimization, and QoS guarantees in the cloud.
 The control system components  sensors used to estimate relevant measures of performance and controllers which implement various policies.
 The outputs  the resource allocations to the individual applications.

Feedback and Stability
 Control granularity  the level of detail of the information used to
control the system.
 Fine control  very detailed information about the parameters
controlling the system state is used.
 Coarse control  the accuracy of these parameters is traded for the
efficiency of implementation.
 The controllers use the feedback provided by sensors to stabilize
the system. Stability is related to the change of the output.
 Sources of instability in any control system:
 The delay in getting the system reaction after a control action.
 The granularity of the control, the fact that a small change enacted by
the controllers leads to very large changes of the output.
 Oscillations, when the changes of the input are too large and the
control is too weak, such that the changes of the input propagate
directly to the output.
The structure of a cloud controller

[Figure: external traffic enters a predictive filter whose forecast λ(k) feeds an optimal controller; the controller computes the optimal input u*(k), which drives the queuing dynamics of the system; the system is also subject to a disturbance, and the state q(k) is fed back to the controller.]

The controller uses the feedback regarding the current state and the estimation
of the future disturbance due to environment to compute the optimal inputs over
a finite horizon. r and s are the weighting factors of the performance index.
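
As an illustration of the performance index mentioned in the caption (the quadratic form, the horizon N, and the reference queue length q̂ are assumptions made here, not taken from the slides), the controller might minimize over a finite horizon a cost such as

$$ J = \sum_{k=0}^{N-1} \Big[ s\,\big(q(k) - \hat{q}\big)^2 + r\,u^2(k) \Big], $$

subject to the queuing dynamics that map the state q(k), the control u(k), and the forecast λ(k) into the next state q(k+1); s weights deviations of the state from its reference and r weights the control effort.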

Two-level cloud controller

[Figure: a two-level control architecture. Each application, Application 1 through Application n, runs in its own VMs under its own SLA (SLA 1 ... SLA n) and has an application controller consisting of a monitor, a decision module, and an actuator; the application controllers interact with a cloud controller that manages the shared cloud platform.]

Lessons from the two-level experiment

 The actions of the control system should be carried out in a rhythm


that does not lead to instability.
 Adjustments should only be carried out after the performance of the
system has stabilized.
 If an upper and a lower threshold are set, instability occurs when they are too close to one another, the variations of the workload are large enough, and the time required to adapt does not allow the system to stabilize.
 The actions consist of the allocation/deallocation of one or more virtual machines. Sometimes the allocation or deallocation of a single VM required by one of the thresholds may cause the other threshold to be crossed; this is another source of instability.

Control theory application to CRM
 Regulate the key operating parameters of the system based on
measurement of the system output.
 The feedback control assumes a linear time-invariant system
model, and a closed-loop controller.
 The system transfer function satisfies stability and sensitivity
constraints.
 A threshold  the value of a parameter related to the state of a
system that triggers a change in the system behavior.
 Thresholds  used to keep critical parameters of a system in a
predefined range.
 Two types of policies:
1. threshold-based  upper and lower bounds on performance trigger
adaptation through resource reallocation; such policies are simple and
intuitive but require setting per-application thresholds.
2. sequential decision  based on Markovian decision models.
Design decisions

 Is it beneficial to have two types of controllers?
 application controllers  determine whether additional resources are needed.
 cloud controllers  arbitrate requests for resources and allocate the physical resources.

 Choose fine versus coarse control.

 Are dynamic thresholds based on time averages better than static ones?

 Use a high and a low threshold versus a high threshold only.

Proportional thresholding

 Algorithm
 Compute the integral value of the high and the low threshold as
averages of the maximum and, respectively, the minimum of the
processor utilization over the process history.
 Request additional VMs when the average value of the CPU utilization
over the current time slice exceeds the high threshold.
 Release a VM when the average value of the CPU utilization over the
current time slice falls below the low threshold.

 Conclusions
 Dynamic thresholds perform better than the static ones.
 Two thresholds are better than one.
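
A minimal sketch of this adaptation loop, in Python (the class and method names, the history window, the bootstrap thresholds, and the one-VM-per-decision step are illustrative assumptions, not part of the published algorithm):

```python
from collections import deque

class ProportionalThresholdController:
    """Dynamic-threshold controller: the thresholds track the observed
    CPU-utilization history instead of being fixed constants."""

    def __init__(self, history_len=100):
        self.history = deque(maxlen=history_len)   # utilization samples per past slice

    def thresholds(self):
        # High/low thresholds derived from the utilization history:
        # averages of the per-slice maxima and minima seen so far.
        if not self.history:
            return 0.8, 0.3                         # assumed bootstrap values
        high = sum(max(s) for s in self.history) / len(self.history)
        low = sum(min(s) for s in self.history) / len(self.history)
        return high, low

    def decide(self, slice_samples):
        """slice_samples: CPU-utilization samples (0..1) for the current time slice.
        Returns +1 (request a VM), -1 (release a VM), or 0 (no action)."""
        high, low = self.thresholds()
        self.history.append(list(slice_samples))
        avg = sum(slice_samples) / len(slice_samples)
        if avg > high:
            return +1
        if avg < low:
            return -1
        return 0

controller = ProportionalThresholdController()
print(controller.decide([0.85, 0.9, 0.95]))   # +1: request an additional VM
```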

Coordinating power and performance management

 Use separate controllers/managers for the two objectives.


 Identify a minimal set of parameters to be exchanged between the
two managers.
 Use a joint utility function for power and performance.
 Set up a power cap for individual systems based on the utility-
optimized power management policy.
 Use a standard performance manager modified only to accept
input from the power manager regarding the frequency determined
according to the power management policy.
 Use standard software systems.
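
A minimal sketch of how the two managers might be coordinated through a joint utility function and a small set of exchanged parameters (the utility shape, the weight epsilon, the response-time target, and the frequency levels are assumptions made for illustration; they are not prescribed by the slides):

```python
def joint_utility(response_time, power, epsilon=0.005, target_rt=0.2):
    """Joint utility: reward good performance, penalize power draw.
    The performance term decays once the response time exceeds the target."""
    performance_utility = 1.0 if response_time <= target_rt else target_rt / response_time
    return performance_utility - epsilon * power

def choose_frequency(levels, predict):
    """Pick the CPU frequency whose predicted (response_time, power)
    maximizes the joint utility; the chosen frequency (and the implied
    power cap) is the parameter handed to the performance manager."""
    return max(levels, key=lambda f: joint_utility(*predict(f)))

# Hypothetical predictor: higher frequency -> lower response time, higher power.
levels = [1.2, 1.8, 2.4, 3.0]                 # GHz
predict = lambda f: (0.5 / f, 40 + 25 * f)    # (seconds, watts), illustrative model
print(choose_frequency(levels, predict))      # picks an intermediate frequency
```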

Communication between
autonomous managers

[Figure: the performance manager and the power manager each implement their own control policy; the performance manager receives performance data and controls the workload generator and the workload distribution across the blades, while the power manager receives power data and controls the power assignment to the blades; the two managers exchange a small set of parameters.]

Autonomous performance and power managers cooperate to ensure prescribed performance and energy optimization; they are fed with performance and power data and implement the performance and power management policies.
A utility-based model for cloud-based web services

 A service level agreement (SLA)  specifies the rewards as well as penalties associated with specific performance metrics.
 The SLA for cloud-based web services uses the average response
time to reflect the Quality of Service.
 We assume a cloud providing K different classes of service, each
class k involving Nk applications.
 The system is modeled as a network of queues with multi-queues
for each server.
 A delay center models the think time of the user after the
completion of service at one server and the start of processing at
the next server.

Utility function U(R)

[Figure: U(R) plotted against the response time R; the vertical axis runs from penalty (negative utility) to reward (positive utility), with jumps at R = R0, R1, R2.]

The utility function U(R) is a series of step functions with jumps corresponding to the response times R = R0, R1, R2, when the reward and the penalty levels change according to the SLA. The dotted line shows a quadratic approximation of the utility function.
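
As a concrete illustration of such a step utility (the number of levels and the ordering of the values are assumptions chosen to match the figure; the actual breakpoints and reward/penalty levels come from the SLA):

$$
U(R) =
\begin{cases}
u_0, & 0 \le R < R_0 \quad \text{(full reward)},\\
u_1, & R_0 \le R < R_1,\\
u_2, & R_1 \le R < R_2,\\
u_3, & R \ge R_2 \quad \text{(penalty)},
\end{cases}
\qquad u_0 > u_1 > u_2 > u_3 .
$$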

[Figure: (a) the revenue function vk of the response time rk, bounded by vkmax and rkmax; (b) a network of multiqueues with servers Si1, Si2, ..., Si6.]

(a) The utility function: vk, the revenue (or the penalty), as a function of the response time rk for a request of class k.
(b) A network of multiqueues.
The model requires a large number of parameters

Resource bundling
 Resources in a cloud are allocated in bundles.
 Users get maximum benefit from a specific combination of resources: CPU cycles, main memory, disk space, network bandwidth, and so on.
 Resource bundling complicates traditional resource allocation models and has generated an interest in economic models and, in particular, in auction algorithms.
 The bidding process aims to optimize an objective function f(x,p).
 In the context of cloud computing, an auction is the allocation of resources to the highest bidder.

Combinatorial auctions for cloud resources
 Users provide bids for desirable bundles and the price they are willing to pay.
 Prices and allocation are set as a result of an auction.
 Ascending Clock Auction (ASCA)  the current price for each resource is represented by a "clock" seen by all participants at the auction.
 The algorithm involves user bidding in multiple rounds; to simplify this process, user proxies automatically adjust their demands on behalf of the actual bidders.

[Figure: users u1, u2, u3, ..., uU are represented by proxies that submit the bids x1(t), x2(t), ..., xU(t) to the auctioneer; the auctioneer checks whether there is excess demand and announces the updated price vector p(t+1).]

The schematics of the ASCA algorithm; to allow for a single round auction
users are represented by proxies which place the bids xu(t). The auctioneer
determines if there is an excess demand and, in that case, it raises the price of
resources for which the demand exceeds the supply and requests new bids.
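
A minimal sketch of the ASCA price-update loop described above (the proxy demand functions, the price increment, and the stopping rule are assumptions made for illustration):

```python
def ascending_clock_auction(supply, demand_fns, price_step=1.0, max_rounds=1000):
    """supply:     dict resource -> available quantity.
       demand_fns: list of proxy functions; each maps the current price vector
                   to that user's demanded bundle (dict resource -> quantity).
       Raises the price of every over-demanded resource until there is no
       excess demand, then returns the final prices and bundles."""
    prices = {r: 0.0 for r in supply}
    for _ in range(max_rounds):
        bids = [f(prices) for f in demand_fns]                       # one round of proxy bids
        total = {r: sum(b.get(r, 0) for b in bids) for r in supply}  # aggregate demand
        over = [r for r in supply if total[r] > supply[r]]           # excess demand?
        if not over:
            return prices, bids
        for r in over:                                               # the "clock" ticks up
            prices[r] += price_step
    return prices, bids

# Illustrative proxies: demand shrinks as the price of CPU rises.
supply = {"cpu": 10, "mem": 64}
proxies = [
    lambda p: {"cpu": 6 if p["cpu"] < 3 else 3, "mem": 16},
    lambda p: {"cpu": 6 if p["cpu"] < 5 else 4, "mem": 16},
]
print(ascending_clock_auction(supply, proxies))
```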
Pricing and allocation algorithms
 A pricing and allocation algorithm partitions the set of users in two
disjoint sets, winners and losers.
 Desirable properties of a pricing algorithm:
 Be computationally tractable; traditional combinatorial auction algorithms, e.g., Vickrey-Clarke-Groves (VCG), are not computationally tractable.
 Scale well - given the scale of the system and the number of requests
for service, scalability is a necessary condition.
 Be objective - partitioning in winners and losers should only be based on
the price of a user's bid; if the price exceeds the threshold then the user
is a winner, otherwise the user is a loser.
 Be fair - make sure that the prices are uniform, all winners within a
given resource pool pay the same price.
 Indicate clearly at the end of the auction the unit prices for each
resource pool.
 Indicate clearly to all participants the relationship between the supply
and the demand in the system.
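
A minimal sketch of a pricing/allocation step with two of the properties listed above: objective (winners are determined only by comparing bids against a threshold) and fair (all winners in a resource pool pay the same uniform price). The specific threshold rule used here, the lowest winning unit price that keeps demand within supply, is an illustrative assumption:

```python
def uniform_price_allocation(bids, supply):
    """bids: list of (user, quantity, unit_price) for a single resource pool.
       Winners are the highest unit-price bids that fit within the supply;
       every winner pays the same uniform price (the lowest winning bid)."""
    winners, used = [], 0
    for user, qty, price in sorted(bids, key=lambda b: b[2], reverse=True):
        if used + qty <= supply:
            winners.append((user, qty, price))
            used += qty
    if not winners:
        return [], 0.0
    clearing_price = min(p for _, _, p in winners)   # uniform price for the pool
    return [(u, q) for u, q, _ in winners], clearing_price

bids = [("u1", 4, 2.5), ("u2", 5, 3.0), ("u3", 6, 1.0)]
print(uniform_price_allocation(bids, supply=10))   # u2 and u1 win, both pay 2.5
```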
Cloud scheduling algorithms
 Scheduling  responsible for resource sharing at several levels:
 A server can be shared among several virtual machines.
 A virtual machine could support several applications.
 An application may consist of multiple threads.
 A scheduling algorithm should be efficient, fair, and starvation-free.
 The objectives of a scheduler:
 Batch system  maximize throughput and minimize turnaround time.
 Real-time system  meet the deadlines and be predictable.
 Best-effort: batch applications and analytics.
 Common algorithms for best effort applications:
 Round-robin.
 First-Come-First-Serve (FCFS).
 Shortest-Job-First (SJF).
 Priority algorithms.

Cloud scheduling algorithms (cont’d)

 Multimedia applications (e.g., audio and video streaming)


 Have soft real-time constraints.
 Require statistically guaranteed maximum delay and throughput.
 Real-time applications have hard real-time constraints.
 Scheduling algorithms for real-time applications:
 Earliest Deadline First (EDF).
 Rate Monotonic Algorithms (RMA).
 Algorithms for integrated scheduling of several classes of
applications:
 Resource Allocation/Dispatching (RAD) .
 Rate-Based Earliest Deadline (RBED).

[Figure: scheduling policies placed on a plane whose axes are the quantity of resources and the timing constraints, each ranging from loose to strict; best-effort policies sit near the loose end of both axes, soft-requirements policies in between, and hard-requirements (hard real-time) policies at the strict end.]

Best-effort policies  do not impose requirements regarding either the amount of resources allocated to an application, or the timing when an application is scheduled.
Soft-requirements policies  require statistically guaranteed amounts and timing constraints.
Hard-requirements policies  demand strict timing and precise amounts of resources.
Fair queuing - schedule multiple flows through a switch
[Figure, two panels, showing the start and finish tags of packet i of flow a against R(t):]

$$ F_a^i(t_a^i) = S_a^i(t_a^i) + P_a^i $$

with the start tag given by

$$ S_a^i(t_a^i) = R_a^i(t_a^i) \ \text{in case (a)}, \qquad S_a^i(t_a^i) = F_a^{i-1}(t_a^{i-1}) \ \text{in case (b)}. $$

The transmission of packet i of a flow can only start after the packet is
available and the transmission of the previous packet has finished.
(a) The new packet arrives after the previous has finished.
(b) The new packet arrives before the previous one was finished.
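
A minimal sketch of how the start and finish tags above drive a fair-queuing scheduler (representing R(t) as a running value passed in at arrival time, and the per-flow state kept here, are illustrative assumptions):

```python
import heapq

class FairQueuing:
    """Tag-based fair queuing: each arriving packet gets a start tag
    S = max(finish tag of the flow's previous packet, current R) and a
    finish tag F = S + P; packets are transmitted in increasing F order."""

    def __init__(self):
        self.last_finish = {}     # flow -> finish tag of its previous packet
        self.queue = []           # min-heap ordered by finish tag

    def arrive(self, flow, length, r_now):
        start = max(self.last_finish.get(flow, 0.0), r_now)
        finish = start + length
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, flow, length))

    def transmit_next(self):
        finish, flow, length = heapq.heappop(self.queue)
        return flow, length, finish

fq = FairQueuing()
fq.arrive("a", length=3, r_now=0.0)
fq.arrive("b", length=1, r_now=0.0)
fq.arrive("a", length=3, r_now=0.5)   # arrives before the previous packet of 'a' finishes
print([fq.transmit_next() for _ in range(3)])   # 'b' goes first (smallest finish tag)
```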

Start-time fair queuing

 Organize the consumers of the CPU bandwidth in a tree structure.

 The root node is the processor and the leaves of this tree are the
threads of each application.
 When a virtual machine is not active, its bandwidth is reallocated to the
other VMs active at the time.
 When one of the applications of a virtual machine is not active, its
allocation is transferred to the other applications running on the same VM.
 If one of the threads of an application is not runnable, its allocation is transferred to the other threads of the application.
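
A minimal sketch of the start-time fair queuing tag computation applied to one level of this tree (the weights, the virtual-time rule, and the request granularity are illustrative assumptions):

```python
import heapq

class SFQNode:
    """One level of an SFQ hierarchy: children with weights share the parent's
    capacity. A request of a child gets a start tag
    S = max(virtual time v, finish tag of the child's previous request)
    and a finish tag F = S + cost / weight; requests are served in increasing
    start-tag order, and v tracks the start tag of the request in service."""

    def __init__(self, weights):
        self.weights = weights            # child -> weight
        self.last_finish = {c: 0.0 for c in weights}
        self.v = 0.0                      # virtual time
        self.ready = []                   # min-heap of (start, child, finish)

    def submit(self, child, cost):
        start = max(self.v, self.last_finish[child])
        finish = start + cost / self.weights[child]
        self.last_finish[child] = finish
        heapq.heappush(self.ready, (start, child, finish))

    def dispatch(self):
        start, child, finish = heapq.heappop(self.ready)
        self.v = start                    # virtual time follows the request in service
        return child

node = SFQNode({"VM1": 1, "VM2": 3})      # weights as in the SFQ tree example
for _ in range(4):
    node.submit("VM1", cost=1)
    node.submit("VM2", cost=1)
print([node.dispatch() for _ in range(8)])   # VM2 is dispatched ~3x as often early on
```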

[Figure: the SFQ tree. The server is the root; its children are VM1 (weight 1) and VM2 (weight 3), which run the applications A1 (weight 3), A2 (weight 1), and A3 (weight 1); the leaves are the threads t1,1, t1,2, t1,3 and t2 and the virtual servers vs1, vs2, vs3, each with weight 1.]

The SFQ tree for scheduling when two virtual machines VM1 and VM2 run
on a powerful server

Borrowed virtual time (BVT)
 Objective - support low-latency dispatching of real-time applications,
and weighted sharing of CPU among several classes of applications.
 A thread i has
 an effective virtual time, Ei.
 an actual virtual time, Ai.
 a virtual time warp, Wi.
 The scheduler thread maintains its own scheduler virtual time (SVT)
defined as the minimum actual virtual time of any thread.
 The threads are dispatched in the order of their effective virtual time, a policy called Earliest Virtual Time (EVT).
 Context switches are triggered by events such as:
 the running thread is blocked waiting for an event to occur.
 the time quantum expires.
 an interrupt occurs.
 when a thread becomes runnable after sleeping.
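
A minimal sketch of BVT dispatching (the mcu-based accounting, the warp handling, and the weights are illustrative assumptions consistent with the figures that follow):

```python
class BVTScheduler:
    """Borrowed virtual time: dispatch the thread with the smallest effective
    virtual time E = A - (W if warped else 0); the actual virtual time A
    advances inversely to the thread's weight."""

    def __init__(self):
        self.threads = {}   # name -> dict(A, W, weight, warped)

    def add(self, name, weight, warp=0):
        # A newly runnable thread starts at the scheduler virtual time (SVT),
        # the minimum actual virtual time of any thread.
        svt = min((t["A"] for t in self.threads.values()), default=0)
        self.threads[name] = {"A": svt, "W": warp, "weight": weight, "warped": False}

    def effective(self, t):
        return t["A"] - (t["W"] if t["warped"] else 0)

    def run_quantum(self, quantum=90):
        # Earliest Virtual Time: pick the smallest effective virtual time.
        name = min(self.threads, key=lambda n: self.effective(self.threads[n]))
        self.threads[name]["A"] += quantum // self.threads[name]["weight"]
        return name

sched = BVTScheduler()
sched.add("a", weight=2)        # weight wa = 2 wb, as in the example figure
sched.add("b", weight=1)
print([sched.run_quantum() for _ in range(6)])   # 'a' runs twice as often as 'b'
```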
[Figure, two panels. Top: the successive activations of thread a (virtual times 0, 12, 24, 36, ...) and of thread b (virtual times 0, 3, 6, 9, 12, ...) against the real time t, including the interval during which thread b is suspended and then reactivated. Bottom: the virtual time as a function of the real time.]

Top  the virtual startup time and the virtual finish time and function of the real time t
for each activation of threads a and b.
Bottom  the virtual time of the scheduler v(t) function of the real time
[Figure: effective virtual time (vertical axis, mcu) versus real time (horizontal axis, mcu).]
The effective virtual time and the real time of the threads a
(solid line) and b (dotted line) with weights wa = 2 wb when
the actual virtual time is incremented in steps of 90 mcu.

[Figure: effective virtual time (vertical axis, mcu; the warped thread dips to -60) versus real time (horizontal axis, mcu).]

The effective virtual time and the real time of the threads a (solid line), b (dotted line), and c, which has real-time constraints (thick solid line). Thread c wakes up periodically at times t = 9, 18, 27, 36, ..., is active for 3 units of time, and has a time warp of 60 mcu.
Cloud scheduling subject to deadlines

 Hard deadlines  if the task is not completed by the deadline, other tasks which depend on it may be affected and there are penalties; a hard deadline is strict and expressed precisely, e.g., in milliseconds or possibly seconds.
 Soft deadlines  more of a guideline and, in general, there are no penalties; soft deadlines can be missed by fractions of the units used to express them, e.g., minutes if the deadline is expressed in hours, or hours if the deadline is expressed in days.
 We consider only aperiodic tasks with arbitrarily divisible workloads.

Workload partition rules

 Optimal Partitioning Rule (OPR)  the workload is partitioned to ensure the earliest possible completion time, and all tasks are required to complete at the same time.
 Equal Partitioning Rule (EPR)  assigns an equal workload to individual worker nodes.

S0 1 2 3 n

S1 1

S2 2

S3 3

Sn n

The timing diagram for the Optimal Partitioning Rule; the algorithm requires
worker nodes to complete execution at the same time. The head node, S0,
distributes sequentially the data to individual worker nodes.
S0 1 2 3 n

S1 1

S2 2

S3 3

Sn n

The timing diagram for the Equal Partitioning Rule; the algorithm assigns an equal
workload to individual worker nodes.
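
A minimal sketch comparing the two rules under a simple linear cost model: the head node sends worker i its fraction alpha[i] of the load sequentially, sending one unit of load costs z time units, and computing one unit costs w time units. The cost model and the resulting recursion are illustrative assumptions, not taken from the slides:

```python
def finish_times(alpha, z, w):
    """Completion time of each worker when the head node distributes the
    fractions alpha sequentially; a worker computes after its data arrives."""
    times, sent = [], 0.0
    for a in alpha:
        sent += a * z                 # communication for this chunk finishes
        times.append(sent + a * w)    # then the worker computes its chunk
    return times

def epr(n):
    """Equal Partitioning Rule: identical fractions."""
    return [1.0 / n] * n

def opr(n, z, w):
    """Optimal Partitioning Rule under this model: choose fractions so that
    consecutive workers finish at the same time, i.e.
    alpha[i] * w = alpha[i+1] * (z + w), then normalize to sum to 1."""
    alpha = [1.0]
    for _ in range(n - 1):
        alpha.append(alpha[-1] * w / (z + w))
    total = sum(alpha)
    return [a / total for a in alpha]

n, z, w = 4, 1.0, 10.0
print(max(finish_times(epr(n), z, w)))        # EPR: the last worker finishes latest
print(max(finish_times(opr(n, z, w), z, w)))  # OPR: all finish earlier, at the same time
```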

UNIT-V
Storage Systems:
Evolution of storage technology, storage models, file systems and databases, distributed file systems, general parallel file systems, the Google File System.
General Parallel File System
Introduction
 File System
 A way to organize data that is expected to be retained after a program terminates, by providing procedures to store, retrieve, and update the data, as well as to manage the available space on the device that contains it.
Types of File System

Type                       | Examples
Disk file system           | FAT, exFAT, NTFS, ...
Optical disc file system   | CD, DVD, Blu-ray
Tape file system           | IBM's Linear Tape
Database file system       | DB2
Transactional file system  | TxF, Valor, Amino, TFFS
Flat file system           | Amazon's S3
Cluster file system        | NFS, CIFS, AFS, SMB, GFS, GPFS, LUSTRE, PAS
  (distributed, shared, SAN, and parallel file systems)
In the HPC world
 Equally large applications.
 Large input data sets (e.g., astronomy data).
 Parallel execution on large clusters.

 Use parallel file systems for scalable I/O,
 e.g., IBM's GPFS, Sun's Lustre FS, PanFS, and the Parallel Virtual File System (PVFS).
General Parallel File System
 Cluster: 512 nodes today, fast, reliable communication.
 Shared disk: all data and metadata on disks accessible from any node through a disk I/O interface (i.e., "any to any" connectivity).
 Parallel: data and metadata flow from all of the nodes to all of the disks in parallel.
 RAS: reliability, availability, serviceability.
History of GPFS
 Shark video server
 Video streaming from single RS/6000
 Complete system, included file system, network driver, control server
 Large data blocks, admission control, deadline scheduling
 Bell Atlantic video-on-demand trial (1993-94)
 Tiger Shark multimedia file system
 Multimedia file system for RS/6000 SP
 Data striped across multiple disks, accessible from all nodes
 Hong Kong and Tokyo video trials, Austin video server products
 GPFS parallel file system
 General purpose file system for commercial and technical computing
on RS/6000 SP, AIX and Linux clusters.
 Recovery, online system management, byte-range locking, fast pre-
fetch, parallel allocation, scalable directory, small-block random
access.
 Released as a product 1.1 - 05/98.
What is Parallel I/O?
 Multiple processes (possibly on multiple nodes) participate in the I/O.
 Application-level parallelism.
 A "file" is stored on multiple disks on a parallel file system.

[Figure: compute nodes connect through an interconnect to I/O server nodes, which manage the disks.]
What does a parallel file system support?
 A parallel file system must support
 Parallel I/O
 Consistent global name space across all nodes of the cluster
 Including maintaining a consistent view across all nodes
for the same file
 Programming model allowing programs to access file data
 Distributed over multiple nodes
 From multiple tasks running on multiple nodes
 Physical distribution of data across disks and network
entities eliminates bottlenecks both at the disk interface and
the network, providing more effective bandwidth to the I/O
resources
Why use general parallel file systems?
 Native AIX file system
 No file sharing - an application can only access files on its own node.
 Applications must do their own data partitioning.
 Distributed file system
 Application nodes (DCE clients) share files on a server node.
 The switch is used as a fast LAN.
 Coarse-grained (file- or segment-level) parallelism.
 The server node is a performance and capacity bottleneck.
 GPFS parallel file system
 GPFS file systems are striped across multiple disks on multiple storage nodes.
 Independent GPFS instances run on each application node.
 GPFS instances use storage nodes as "block servers" - all instances can access all disks.
Performance advantages with GPFS file system
 Allowing multiple processes or applications on all nodes in the cluster simultaneous access to the same file using standard file system calls.
 Increasing aggregate bandwidth of your file system by spreading reads and writes across multiple disks.
 Balancing the load evenly across all disks to maximize their combined throughput; one disk is no more active than another.
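
A minimal sketch of the block-striping idea behind the bandwidth and load-balancing advantages above (round-robin placement of file blocks across disks is an illustrative simplification, not GPFS's actual allocation policy):

```python
def stripe_blocks(file_size, block_size, disks):
    """Round-robin striping: block i of the file is placed on disk i mod N,
    so large sequential reads and writes are spread across all disks."""
    n_blocks = -(-file_size // block_size)          # ceiling division
    layout = {d: [] for d in range(disks)}
    for i in range(n_blocks):
        layout[i % disks].append(i)
    return layout

# A 4 MB file with 256 KB blocks striped over 4 disks: each disk holds 4 blocks.
print(stripe_blocks(4 * 1024 * 1024, 256 * 1024, 4))
```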
Performance advantages with GPFS
file system (cont.)
 Supporting very large file and file system sizes.
 Allowing concurrent reads and writes from multiple
nodes.
 Allowing for distributed token (lock) management. Distributing token management reduces system delays associated with a lockable object waiting to obtain a token.
 Allowing for the specification of other networks for
GPFS daemon communication and for GPFS
administration command usage within your cluster.
GPFS Architecture Overview
 Implications of Shared Disk Model
 All data and metadata on globally accessible disks
(VSD)
 All access to permanent data through disk I/O
interface
 Distributed protocols, e.g., distributed locking,
coordinate disk access from multiple nodes
 Fine-grained locking allows parallel access by
multiple clients
 Logging and Shadowing restore consistency after
node failures
GPFS Architecture Overview (cont.)
 Implications of Large Scale
 Support up to 4096 disks of up to 1 TB each (4
Petabytes)
 The largest system in production is 75 TB
 Failure detection and recovery protocols to
handle node failures
 Replication and/or RAID protect against disk /
storage node failure
 On-line dynamic reconfiguration (add, delete,
replace disks and nodes; rebalance file system)
GPFS Architecture - Special Node Roles
 Three types of nodes:
File system nodes
Manager nodes
Storage nodes
Disk Data Structures:
 Large block size allows efficient use of disk bandwidth.
 Fragments reduce space overhead for small files.
 No designated "mirror", no fixed placement function:
 Flexible replication (e.g., replicate only metadata, or only important files).
 Dynamic reconfiguration: data can migrate block by block.
 Multi-level indirect blocks.
 Each disk address is a list of pointers to replicas; each pointer consists of a disk id and a sector number.
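
A minimal sketch of the disk-address structure just described (the field names and the replica count are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DiskPointer:
    """One replica location: which disk, and where on it."""
    disk_id: int
    sector: int

@dataclass
class DiskAddress:
    """A block address in file metadata: a list of pointers, one per replica."""
    replicas: List[DiskPointer]

# A data block replicated on two disks (e.g., when data replication is enabled).
block_addr = DiskAddress(replicas=[DiskPointer(disk_id=3, sector=81920),
                                   DiskPointer(disk_id=7, sector=40960)])
print(block_addr)
```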
Availability and Reliability
 Eliminates single points of failure.
 Designed to transparently fail over token (lock) operations.
 Supports data replication to increase availability in the event of a storage media failure.
 Offers time-tested reliability and has been installed on thousands of nodes across industries.
 Basis of many cloud storage offerings.
GPFS’s Achievement
 Used on six of the ten most powerful
supercomputers in the world, including the largest
(ASCI white)
 Installed at several hundred customer sites, on
clusters ranging from a few nodes with less than a TB
of disk, up to 512 nodes with 140 TB of disk in 2 file
systems
 20 filed patents
 ASC Purple Supercomputer which is composed of
more than 12,000 processors and has 2 PB of total
disk storage spanning more than 11,000 disks.
Conclusion
 Efficient for managing data volumes
 Provides world-class performance, scalability and
availability for your file data
 Designed to optimize the use of storage
 Provides a highly available platform for data-intensive applications.
 Delivers on real business needs by streamlining data workflows, improving services, reducing costs, and managing risks.
