FlashArray Foundation and Administration
@purestorage
Training Approach
FlashArray Dark Site training comprises seven modules.
The Pure Storage products and programs described in this documentation are distributed under a license
agreement restricting the use, copying, distribution, and decompilation/reverse engineering of the products. No
part of this documentation may be reproduced in any form by any means without prior written authorization from
Pure Storage, Inc. and its licensors, if any. Pure Storage may make improvements and/or changes in the Pure
Storage products and/or the programs described in this documentation at any time without notice.
THIS DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS
AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE
HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL
DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE
INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
Pure Storage
Overview
@purestorage Date
Learning Goals
Upon completion of this training, you will:
• Gain exposure to the history of disk and how it shaped Pure Storage
• Learn more about Pure products, solutions, and core principles
Agenda
02 Founding Vision
03 Solutions Portfolio
04 Core Principles
06 Evergreen
Timeline
• 1956: First HDD (5MB, $50,000)
• 1978: RAID patented
• 1980: First 1GB HDD (550 lbs, $40,000)
• 1986: SCSI developed
• 2009: Pure Storage founded
Modernize Storage + Re-Imagine the Storage Experience
• Modernize Storage: 100% flash; performance, efficiency & density; lower cost
• Re-Imagine the Storage Experience: simple; cloud-based management; Evergreen model; predictive support; built for cloud
Remember When? Webscale Storage: Engineering + Go-to-market
© Pure Storage 2023
Our Branding Story
The next challenge was the color scheme.
Why Pure Storage? Business Model, Customer Experience, Culture
• Gartner MQ Leader
• NPS 82.0: best in B2B technology, in the top 1% of B2B companies (Pure Storage 82 vs. industry average 24)
2018, Certified by Owen CX
Consolidate Safely · Copy & Share Openly · Protect Globally · Automate Clouds · Manage Effortlessly · Proactively Prevent Issues · Predict the Future
Core Technologies: Evergreen Architecture, Metadata Fabric, Meta AI Engine, DirectFlash, Purity
Solutions Portfolio
• FlashArray: all-flash performance block and vVol for mission-critical apps & databases, virtual machines, and Tier 1 workloads
• FlashBlade: unified, concurrent file & object for real-time analytics, AI/ML, EDA, HPC, scientific research, and rapid restore at the biggest scale
• Cloud Block Store for AWS: everything you need to build an all-flash cloud, including block, file, vVol, archive, backup, and cloud backup
No more…
• Setting and managing RAID or FTT
• Host/array block alignment issues
• Troubleshooting performance
• Tuning/tweaking
• Day-long RAID re-builds
• Sizing/managing multiple tiers of storage
• Rat’s nest cabling
• Noisy neighbors
• Painful upgrades
CONFIDENTIAL - INTERNAL USE ONLY
Manage Effortlessly
Seamless management from anywhere
• One box
• One chassis
• Six cables
• No manual
• No tuning
Average Downtime Per Year
• 99.99%: 53 minutes
• 99.999%: 5.2 minutes
• 99.9999% (including upgrades): 31.5 seconds
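The downtime figures above follow directly from the availability percentages. A quick sketch (plain arithmetic, not Pure tooling):

```python
def downtime_per_year(availability: float) -> float:
    """Minutes of downtime per year implied by an availability fraction."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability) * minutes_per_year

print(round(downtime_per_year(0.9999), 1))         # 52.6 (minutes, ~53)
print(round(downtime_per_year(0.99999), 1))        # 5.3  (minutes, ~5.2)
print(round(downtime_per_year(0.999999) * 60, 1))  # 31.5 (seconds)
```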
Ultimate Availability
• Non-disruptive operations
• No downtime
• No performance loss (even during upgrades/replacements/failures)
• No performance variability
• Run multiple differing workloads
Efficient Data Reduction
Efficiency translates into a better data reduction ratio, achieved through these methods:
• Pattern removal
• 512B alignment
• Variable dedupe
• Inline compression
• Post-process deep compression
• Copy reduction
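To make the methods above concrete, here is a toy sketch of an inline reduction pipeline (illustrative only; the function name, single-pass ordering, and data structures are assumptions, and post-process deep compression is a separate later pass in Purity):

```python
import hashlib
import zlib

def reduce_block(block: bytes, dedupe_table: dict):
    """Toy inline reduction: pattern removal, dedupe by hash, then
    compression. Returns (action, payload)."""
    # Pattern removal: a block of one repeated byte is stored as metadata only.
    if block == block[:1] * len(block):
        return ("pattern", None)
    # Dedupe: an identical block already seen is stored as a pointer, not data.
    digest = hashlib.sha256(block).hexdigest()
    if digest in dedupe_table:
        return ("dedupe", None)
    dedupe_table[digest] = True
    # Inline compression before the block is written to flash.
    return ("write", zlib.compress(block))

table = {}
print(reduce_block(b"\x00" * 512, table)[0])  # pattern
print(reduce_block(b"ab" * 256, table)[0])    # write
print(reduce_block(b"ab" * 256, table)[0])    # dedupe
```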
The result: 10X less space, less power, more performance, and more simplicity.
Support
• Managed upgrades
• Instant L2 support
• 4-hour on-site break/fix
• Global, mobile monitoring & reporting
Right Size Guarantee™
We guarantee your effective capacity, risk-free for 6 months*
• Any workload data is fine, we just need to know about it. The more we know, the more efficient the configuration.
• Expected data reductions by workload are derived from Pure's global installed base and applied to the requirements.
• If we do not deliver the guaranteed capacity, we will make it right with free added flash, non-disruptively.
*Specific written guarantee based on customer workloads
Knowledge Check: Question 2
• Effortless
• Evergreen
• Efficient
• Flexible
14 / 40 / 70 / 17
@purestorage Date
Upon completion of this training, you will be
able to:
• Explain what NVM Express™ is
Objectives • Summarize key benefits of using
DirectFlash®
• Summarize key differentiators in the FlashArray family of products (including the importance of DirectMemory™)
02 FlashArray Family
Agenda
03 DirectMemory
04 Interactive App
The Bottleneck
[Diagram: disk is slow (20 ms, SATA interfaces, ~1 TB management scale) while flash is fast (100 µs, 60 TB). Legacy disk interfaces and management are the bottleneck; Pure is 100% NVMe.]
DirectFlash Module
• Fast, parallel access over PCIe or RoCE
• No hidden flash: 100% of raw NAND is available to Purity
• Deterministic latency: Purity has visibility down to the NAND level
FlashArray Family
Deliver next-gen analytics (//X and //C) and multi-cloud data protection and DR
• //X: latency optimization
• //C: capacity optimization
Front Panel
Modular and upgradeable design & 100% flash-optimized hardware
• Up to 20 DFM or SAS FM in chassis, 28 in shelves
• 2 or 4 HA NVRAM (X70 or X90)
• Up to 8 in chassis: X70 or X90
Back Panel
Modular and upgradeable design & 100% flash-optimized hardware: 2 HA controllers; 50 Gb/s NVMe and/or 12 Gb/s SAS; 6 slots for FC and/or Ethernet
Flexible Expansion:
• NVMe over PCIe
• Serial-Attached SCSI**
• Up to 6 ports of 25/40/50/100* Gb/s Ethernet
• Up to 10 ports of 16 & 32 Gb/s Fibre Channel (supports NVMe over FC)
* 50 & 100GE for NVMe over RoCE
** SAS modules/shelves are not supported on FlashArray//C
[Diagram: controller PCIe slots PCIe0 through PCIe3 with ports 1 and 2; PCIe3 is reserved for NDU (IB) paths.]
DirectFlash Shelf
Native NVMe-oF expansion shelf
• 28 x DirectFlash modules
• Up to 512 TB raw (1.5 PB effective) with 18.3 TB DirectFlash modules
© Pure Storage 2021
Max Shelf Rules
Max NVMe DirectFlash-only shelves per model: 0 / 1 / 2 / 2 / 2
** 5/6 shelves allowed only if NDU from existing arrays with 5 or 6 shelves
All-QLC FlashArray//C
• Flash Performance at Disk Economics: QLC architecture enables Tier 2 applications to benefit from the performance of all-flash with predictable 2−4ms latency; 5.2PBs in 9U delivers 10X consolidation for racks and racks of disk
• Optimized End-to-End for QLC Flash: deep integration from software to QLC NAND solves QLC wear concerns and delivers market-leading economics; includes the same Evergreen maintenance and wear replacement as every FlashArray
• "No Compromise" Enterprise Experience: built for the same 99.9999%+ availability, Pure1 cloud management, API automation, and AI-driven predictive support of every array
• Flash for Every Data Workflow: policy-driven replication, snapshots, and migration between arrays and clouds; use flash for application tiering, DR, test/dev, backup, and retention
Use Cases: data protection and disaster recovery; policy-based VM-tiering; multicloud test/dev
DirectMemory
Fast just got faster with DirectMemory Cache using Storage Class Memory (SCM)
• Evergreen: available on FlashArray//X70 and //X90 with no forklift or data migration
• Pure1 Meta™: caching intelligence based on customer workloads
[Diagram: a read IO served from the flash read cache must be decompressed; a read IO served from DirectMemory Cache needs no decompression.]
[Diagram: DirectMemory module placement in chassis slots alongside DFMs (or SAS FMs); 127/256TB configurations only.]
• Chrome/Desktop: http://m.kaon.com/c/ps
• Android: https://play.google.com/store/apps/details?id=com.kaon.android.lepton.purestorag
• iOS: https://itunes.apple.com/us/app/pure-storage-flasharray-tour/id964925778?mt=8
• Optimization
• Effectiveness
• Adaptability
• Growth
@purestorage Date
Upon completion of this training, you will be able to:
Purity: Storage Software Built for Flash, All Software Included
• Pillars: Reduce, Assure, Protect, Optimize
• Features: FlashCare, thin provisioning, compression, RAID-HA, snapshots, QoS, ActiveCluster™, FlashProtect, FlashRecover™, Purity CloudSnap™
• Purity Core: Variable Block Engine, flash reliability, global flash management, global garbage collection, metadata, DirectFlash™, cloud
• Pure1: Meta® AI, Pure Cloud Block Store™
Always-On Encryption with FlashProtect
[Diagram: a multipath-connected host sends IO to both controllers. The write request (CMD, LBA, LEN) receives Xfer_Rdy; the block lands in the DRAM buffers of the primary (CT1) and secondary (CT0) via the block driver as part of a write group; the host then receives an ACK.]
Purity FlashReduce Data Reduction
Comprehensive, high-performance data reduction that reduces usable $/GB
• 5:1 average data reduction rate
• 10:1 average total efficiency
Source: 2019 ESG Economic Validation Report
Deduplication and Metadata Management
Metadata table management: in-memory dedupe hashes (recent hashes in DRAM, full dedupe hashes on flash) allow fast inline dedupe while protecting flash.
1. Sample: compare with content on flash to confirm a duplicate.
2. Check adjacent sectors on physical flash to identify additional duplicate data.
3. Expand the selection at 512B increments until a mismatch occurs at either end. Duplicate sectors do not need to be on 4K boundaries.
4. Add a metadata pointer to this existing data for the new client reference.
Purity allows variable block-size dedupe at individual sector granularity, up to 32K (64 sectors).
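The 512B expansion step can be sketched as follows (a simplified model; `expand_match` and the aligned-offset comparison are illustrative assumptions, since Purity works on hashes and physical flash layout rather than raw buffers):

```python
SECTOR = 512       # dedupe granularity in bytes
MAX_SECTORS = 64   # 32K upper bound (64 sectors)

def expand_match(new: bytes, old: bytes, start: int):
    """Given a confirmed duplicate sector at index `start` (in sector units),
    expand the match one 512B sector at a time in both directions until a
    mismatch occurs or the 64-sector limit is hit. Returns (first, last+1)."""
    def same(i: int) -> bool:
        s = slice(i * SECTOR, (i + 1) * SECTOR)
        return 0 <= i and (i + 1) * SECTOR <= min(len(new), len(old)) and new[s] == old[s]

    lo, hi = start, start + 1
    while same(lo - 1) and hi - (lo - 1) <= MAX_SECTORS:
        lo -= 1                      # grow the match to the left
    while same(hi) and (hi + 1) - lo <= MAX_SECTORS:
        hi += 1                      # grow the match to the right
    return lo, hi

old = b"A" * (512 * 4)
new = b"B" * 512 + b"A" * 1024 + b"B" * 512
print(expand_match(new, old, 1))  # (1, 3): two duplicate sectors
```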
Typical data reduction by workload:
• Analytics/Transactions: 3 to 4:1
• DW: 2 to 4:1
• Media: 1.2 to 1.5:1
• OLTP: 3 to 4:1
• VSI: 5 to 8:1
• VDI: 7 to 12:1
• Email: 4 to 6:1
Data Reduction Efficiency: Pattern Removal
[Diagram: user data blocks (U) are laid out into write groups with parity (P) and metadata (M) blocks using RAID-HA.]
Requires that the drive sizes be the same across the entire enclosure.
With SAS Modules:
• Up to 10 drives per write group
• 6+2 RAID segments
• 25% RAID overhead
With DirectFlash Modules (Wide Write Groups):
• Up to 28 drives per write group
• 14+2 RAID segments
• 12.5% RAID overhead
• Can still have 1 data pack; 25% RAID overhead applies
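The overhead percentages are simple parity arithmetic, the fraction of each write group consumed by parity:

```python
def raid_overhead(data_segments: int, parity_segments: int) -> float:
    """Fraction of a write group's capacity consumed by parity."""
    return parity_segments / (data_segments + parity_segments)

print(raid_overhead(6, 2))   # 0.25  -> 25% with SAS modules (6+2)
print(raid_overhead(14, 2))  # 0.125 -> 12.5% with DirectFlash wide write groups (14+2)
```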
System-Wide Garbage Collection (Space Reclamation)
1. Metadata pointers are updated as data is changed or deleted by the host.
2. As data in a segment becomes invalid, the segment becomes more attractive for garbage collection.
3. Valid data from the old segment is rewritten in a further compressed form to a new segment.
4. Valid data from other garbage-collected segments is also rewritten to fill the new segment, then the old segments are unmapped.
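The garbage-collection steps described above can be sketched as a toy collector (the dict-of-segments and live-reference-set data structures are illustrative assumptions, and re-compression of surviving data is elided):

```python
def collect(segments: dict, live: set) -> dict:
    """Toy system-wide GC: visit segments starting with those holding the
    least valid data, rewrite still-referenced blocks into a new segment,
    then unmap the old segments."""
    new_segment = {}
    # Segments with the least valid data are the most attractive to collect.
    order = sorted(segments, key=lambda n: sum(1 for b in segments[n] if (n, b) in live))
    for name in order:
        for blk, data in segments[name].items():
            if (name, blk) in live:                    # only valid data survives
                new_segment[len(new_segment)] = data   # (re-compression elided)
        del segments[name]                             # unmap the old segment
    segments["new"] = new_segment
    return segments

segs = {"s1": {0: b"a", 1: b"b"}, "s2": {0: b"c"}}
print(collect(segs, {("s1", 0)}))  # {'new': {0: b'a'}}
```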
Knowledge Check:
Question 1
What is the block size Purity will analyze for duplicate data?
Snapshots and Async
Replication
@purestorage Date
Upon completion of this training, you will be
able to:
• Describe FlashRecover™ snapshots
• Explain how metadata operates on Pure
volumes
• Describe protection groups and
Objectives protection policies
• Explain how FlashRecover replication
leverages snapshots
• Explain how snapshots can be offloaded to NFS and cloud targets
02 Metadata in Action
03 Protection Groups
Agenda
04 Protection Policies
05 FlashRecover Replication
Data protection continuum, local to remote: Orchestration; Local Backup and Restore; Local Continuous Protection; Availability; Remote Protection; Remote Backup and Restore; Archiving
• Pure protection policies with SAP, Oracle, and SQL integration
• Backup software integration on both local and remote ends
• Snap to NFS and Snap to cloud
[Diagram: RPO/RTO timeline from years down to seconds, local on the left and remote on the right]
[Diagram: Volume1 connected to a host, with a snapshot of Volume1 taken @ 1:00pm]
FlashRecover replication, source to target:
• Difference: metadata analysis determines differences on both source and target (previous snapshot vs. new/transferring snapshot, plus retained snapshots), including what has been replicated previously
• Read: compressed, deduplicated user data is read from SSD
• Send: compressed changes to data are sent on the wire
• Reduce: incoming replication streams are deduped
• Protect: create RAID-HA segments, encrypt, write user data to flash
• FlashCare™: global wear leveling and refresh, global deletion management, integrity checking, continuous optimization
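The "Difference" step can be sketched as a metadata-level comparison of per-block content hashes (the dict-of-hashes representation is an assumption for illustration; Purity diffs snapshot metadata rather than reading user data):

```python
def snapshot_diff(previous: dict, new: dict) -> dict:
    """Metadata-level diff of two snapshots (block address -> content hash).
    Only blocks whose hash changed, or which are new, need to be replicated."""
    return {addr: h for addr, h in new.items() if previous.get(addr) != h}

prev = {0: "h-aaa", 1: "h-bbb", 2: "h-ccc"}
new  = {0: "h-aaa", 1: "h-zzz", 2: "h-ccc", 3: "h-ddd"}
print(sorted(snapshot_diff(prev, new)))  # [1, 3]: only changed/new blocks go on the wire
```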
Failing Over Using Protection Group Snapshots
[Diagram: vol1 and its snap1 are captured in a pgroup snapshot and replicated to the target; the replicated snapshot is then cloned to produce a usable vol1 on the target.]
CloudSnap to AWS S3
• Purity 5.1: Snap to NFS was the first feature based on our Portable Snapshot Technology.
• Purity 5.2: Technology extended to the public cloud. First cloud target was AWS S3.
• Purity 5.3: Multi-cloud backup capability introduced. Azure Blob Storage added as an offload target.
Purity FA Snap to NFS Cont.
• Data is backed up in Pure proprietary format.
• Data has to be restored/recovered to a FlashArray™ before it can be used.
• All data and metadata required for recovery resides on the NFS target.
• Data sent over the wire and stored on NFS is:
• Compressed
• Not Deduped (in the current version)
• Not Encrypted (in the current version)
[Diagram: a FlashArray with Controller A (primary) and Controller B (secondary) offloads snapshots, within <11ms RTT, to targets: Snap to NFS, CloudSnap, remote snaps, and the DeltaSnap API and integrations.]
• One
• Two
• Five
• Ten
@purestorage Date
Upon completion of this training, you will be able
to:
• Identify the features and functions of Purity//FA
ActiveCluster
• Describe a pod and its usage
• Explain the uniform and non-uniform
Objectives configuration options
• Provide an overview of the on-premises
mediator and the various failover preferences
• Explain ActiveDR near-sync replication
functionality
• Explain ActiveCluster setup, monitoring, and
resynchronization
• Explain Purity//FA management and administration of ActiveCluster and ActiveDR
01 ActiveCluster Overview
02 Configuration Options
03 Performance Optimization
04 High Availability
06 Administration
08 Replication Interconnect
Multi-Site Active/Active
Zero RPO, zero RTO, zero cost, zero additional hardware
• Active compute at both sites, with an on-premises mediator
• Spans 2-rack HA within a data center, metro distances (up to 11ms roundtrip), and global distances (anywhere)
[Diagram: a stretched pod serves VMs from both arrays; when the replication link fails and is later restored, the arrays resynchronize automatically using dedupe-aware async replication until resync is complete.]
Recovery
• Automatic recovery after short event, e.g., after a local HA event
• Never went out of sync
Resynchronization
• Automated recovery after long event, e.g., power outage
• Leverage periodic replication to transfer increments until near-sync, then
forward individual IOs and transfer incremental snap
• No IO pause or log replay for ‘final incremental’
Resynchronization flow:
1. Snapshots (Snap 1 through Snap 4) are sent asynchronously until the arrays are nearly in sync.
2. Forwarded IOs and the final snap are merged into the target.
3. The arrays are fully in sync, with no pause in IO for the final sync.
Failover Preferences (5.1.3)
• Configure a failover preference per pod
• Helps to align pod failover with host application layouts
• Preferred array: races to the mediator immediately
• Non-preferred array: races to the mediator after a short delay (6 sec lead time) and can keep the pod online if the preferred array fails
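The race can be sketched as a toy model (the 6-second lead time is from the slide; the mediator latency value is an arbitrary assumption for illustration):

```python
def pod_winner(preferred_alive: bool, nonpreferred_alive: bool,
               lead_time_s: float = 6.0, mediator_latency_s: float = 0.1) -> str:
    """Toy mediator race: the preferred array races immediately, while the
    non-preferred array waits `lead_time_s` before racing, so the preferred
    array keeps the pod online unless it has failed."""
    bids = []
    if preferred_alive:
        bids.append((mediator_latency_s, "preferred"))
    if nonpreferred_alive:
        bids.append((lead_time_s + mediator_latency_s, "non-preferred"))
    return min(bids)[1] if bids else "pod offline"

print(pod_winner(True, True))   # preferred
print(pod_winner(False, True))  # non-preferred
```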
Pre-Election Setup
• If neither array can reach the mediator, a pre-elected array can be elected to stay online.
• If no pod preference is set, Purity will pick one to stay online.
• Standard mediation race behavior occurs when at least one array is in contact with the mediator and contact with the mediator is re-established by at least one array post-election.
Pre-Election Behavior Comparison
Solution component failure: One Array UP, Other Array DOWN, Replication Link UP, Mediator UP
• Access to stretched pod volumes, starting with Purity 5.2: available on one array
• Access to stretched pod volumes, starting with Purity 5.3: available on one array
CLI
• Under Mediator Status, (pre-elected) signifies the pod has pre-elected that array.
root@vm-connor-brooks-ct0:~# purepod list --mediator
Name Source Mediator Mediator Version Array Status Frozen At Mediator Status
p1 - purestorage - vm-connor-brooks online - unreachable
vm-connor-brooks2 online - unreachable (pre-elected)
root@vm-connor-brooks-ct0:~#
Leader and Follower Write Path (1ms RTT between arrays)
A write received by the follower: send to the leader and request NVRAM order (0.5ms), write to NVRAM (0.1ms), then ack to the host: 1.2ms total.
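The timing roughly decomposes as one inter-array round trip plus the NVRAM write (a simplified model; the remaining ~0.1ms of the 1.2ms total is processing overhead not captured here):

```python
def follower_write_latency(rtt_ms: float, nvram_write_ms: float = 0.1) -> float:
    """Rough latency model for a write landing on the follower array: one
    round trip to the leader (0.5ms each way at 1ms RTT, to request NVRAM
    ordering and receive the reply) plus the NVRAM write itself."""
    return rtt_ms + nvram_write_ms

# With a 1ms inter-array RTT this lands near the ~1.2ms total shown above.
print(follower_write_latency(1.0))  # 1.1
```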
Physical Replication Ports
The replication interconnect (required for ActiveCluster) supports asynchronous and synchronous replication: multiple active connections between primary controllers and multiple ready connections to remote secondary controllers.
[Diagram: eth2/eth3 replication ports on primary and secondary controllers, addresses x.y.z.2 through x.y.z.8]
Note: Arrays must be connected via switches to allow controller failovers.
HA Failover
Failover uses existing connections to the remote secondary controller to establish connections faster.
[Diagram: after failover, the surviving controller's eth2/eth3 ports (x.y.z.1 through x.y.z.8) carry the replication traffic.]
Note: Arrays must be connected via switches to allow controller failovers.
[Diagram: Site 1 and Site 2, each with Switch A and Switch B, connected by long-distance Ethernet infrastructure; each controller's replication ports (2 and 3 on ct0 and ct1) attach to both switches.]
Note: Each port on one array must be able to connect or route to every port on the other array.
[Diagram: the on-premises mediator is reached through each array's management interfaces (vir0, vir1*).]
Note: Arrays use the management network to connect to the mediator and establish the replication connection between the arrays.
• Both arrays
• Neither array
• The array that reaches the mediator first
• The array where the pod was first configured
ActiveDR uses different pods on source and target; replication is directional and reversible, based on snapshots.
ActiveDR flows between production (Pod1, volume SN1) and DR (Pod2, volume SN2):
• Normal replication flow: hosts write to Pod1; data replicates to Pod2.
• Test failover: hosts write test data to Pod2; when the test ends, an .undo snapshot discards the test writes.
• Failover: hosts fail over to Pod2 and write there.
• Reverse replication: make the original source read-only (demote) and reverse the replication direction.
ActiveDR Performance Sizing and Replication Network
• Max peak replication throughput: 60% of array maximum
• Max sustained replication throughput: 30-40% of array maximum
• Max replication network latency: <50ms
• Minimum bandwidth required to support bursts and allow for resync: 1.3x the incoming maximum write rate
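These rules can be captured in a small sizing helper (a sketch; the function and its units are assumptions, and throughput and write rate just need to share a unit such as MB/s):

```python
def activedr_sizing(array_max_throughput: float, incoming_write_rate: float,
                    network_latency_ms: float) -> dict:
    """Check a proposed ActiveDR deployment against the sizing guidance:
    peak replication at most 60% of array max, sustained 30-40%, network
    latency under 50ms, and bandwidth of at least 1.3x the incoming write
    rate to absorb bursts and allow resync."""
    return {
        "max_peak_replication": 0.60 * array_max_throughput,
        "max_sustained_replication": (0.30 * array_max_throughput,
                                      0.40 * array_max_throughput),
        "min_bandwidth": 1.3 * incoming_write_rate,
        "latency_ok": network_latency_ms < 50,
    }

sizing = activedr_sizing(array_max_throughput=10_000,
                         incoming_write_rate=2_000,
                         network_latency_ms=20)
print(sizing["max_peak_replication"])  # 6000.0
print(sizing["min_bandwidth"])         # 2600.0
print(sizing["latency_ok"])            # True
```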
Troubleshooting
Latency
@purestorage Date
Upon completion of this training, you will be
able to:
Agenda
02 Troubleshooting Space
Common Space Problems: Data Reduction, SSD Persistence
Name Capacity Parity Provisioned Size Thin Provisioning Data Reduction Total Reduction Volumes Snapshots Shared System Total
purearray 57.06T 100% 1240T 29% 16.3 to 1 22.9 to 1 36.83T 1.31T 18.36T 4.57T 61.07T
Examine Data Reduction (cont.)
Checking data reduction via CLI:
• Provides insight into the compression ratio experienced on a specific volume
• Randomly samples provisioned volumes and reports back thin provisioning and compression ratio
Array-Wide Data Reduction:
$ purearray list --space
Name Capacity Parity Provisioned Size Thin Provisioning Data Reduction Total Reduction Volumes Snapshots Shared System Total
purearray 66.90T 100% 650T 29% 12.8 to 1 18.1 to 1 24.22T 761.08G 12.10T 37.33T 74.38T
Name Size Thin Provisioning Data Reduction Total Reduction Volume Snapshots Shared Space System Total
DB-VOL01 4T 43% 11.7 to 1 20.5 to 1 163.78G 0.00 - - 163.78G
DB-VOL02 4T 43% 11.7 to 1 20.5 to 1 163.79G 0.00 - - 163.79G
DEPOT-VOL01 8T 6% 20.2 to 1 21.4 to 1 223.41G 0.00 - - 223.41G
DEPOT-VOL02 8T 6% 20.2 to 1 21.4 to 1 223.21G 0.00 - - 223.21G
DB-BACKUP-VOL01 4T 42% 3.2 to 1 5.6 to 1 725.61G 285.16G - - 1010.77G
DB-BACKUP-VOL02 4T 42% 3.2 to 1 5.6 to 1 725.80G 285.04G - - 1010.83G
DB-ARC01 4T 26% 9.3 to 1 12.7 to 1 288.08G 0.00 - - 288.08G
DB-ARC02 4T 26% 9.3 to 1 12.7 to 1 287.94G 0.00 - - 287.94G
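The Total Reduction column is consistent with combining the data reduction ratio with thin provisioning savings: divide the data reduction ratio by the fraction of provisioned space actually written. A quick check against the rows above:

```python
def total_reduction(data_reduction: float, thin_provisioning: float) -> float:
    """Total reduction combines the data reduction ratio with thin
    provisioning savings: dividing by the fraction of provisioned space
    actually written (1 - thin) yields the overall ratio."""
    return data_reduction / (1.0 - thin_provisioning)

# DB-VOL01 above: 43% thin provisioning, 11.7 to 1 data reduction
print(round(total_reduction(11.7, 0.43), 1))  # 20.5
# Array-wide: 29% thin provisioning, 12.8 to 1 data reduction
print(round(total_reduction(12.8, 0.29), 1))  # 18.0 (the table shows 18.1; inputs are rounded)
```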
True
False
Pure1® Unplugged
@purestorage Date
Upon completion of this training, you will be
able to:
Objectives • Explain key features of Pure1 Unplugged
• Describe additional features
Agenda
02 Additional Features
Pure1 Unplugged
A comprehensive suite of key features: Alert, Troubleshoot, Plan
[Architecture diagram: web and mobile clients connect over HTTPS to an HTTP server fronting static web content and the API server; auth requests go to the Pure1-ds auth server; the Pure1-ds monitor collects from the fleet into MongoDB; metrics flow through Elasticsearch and Kibana; the stack runs on Kubernetes with PSO.]
Unplugged vs. SaaS: the SaaS version adds analytics (forecast, VM topology) and protections.
• Fleet management
• Available without internet connectivity
• Centralized dashboard
• View all your Pure arrays plus messages
and alerts in one single pane
• Array views
• Monitor array capacity and health
• Analytics
• Assess performance and capacity
• Create custom reports
https://blog.purestorage.com/products/customize-pure1-dashboard/
Centralized dashboard
Capacity planning
Support for local and AD/LDAP authentication
@purestorage
The Legacy Storage Status Quo
Customers are trapped in an endless cycle of storage re-buys & refreshes. Either:
• Pay more money to keep your old storage array and forgo advances in performance, density, and features, OR
• Total re-purchase every few years: migrate all your TB and risk performance/availability loss, OR
• Total re-purchase over a multi-year rolling cycle, adding new nodes and removing older nodes, and risk performance loss
Buy Storage Once, Use It Forever (10+ Years)
• No planned downtime or data migrations
• On-demand and included hardware upgrades
• Continually improving data services
• Ability to respond to business change
• Predictable costs, investments protected
Evergreen Architecture + Evergreen Subscription = Subscription to Innovation
• Long-life chassis: 10+ year lifespan
• Upgradable controllers & blades for performance (e.g., S500, S200, XL170, XL130, C60, C40)
• Upgradable, expandable flash for capacity, density & expansion
• Upgradable software
• Always-upgradeable controllers across generations
• Upgrade everything online, with no downtime, performance loss, or data migration
© 2022 Pure Storage, Inc.
Evergreen//Forever: Subscription to Innovation
Delivering an always-improving experience, even with traditional CAPEX purchases
World-Class Customer Experience
• Love Your Storage satisfaction guarantee
• Right-Size Guarantee
[Chart: capacity growth without data migration across generations: FA-320 (11TB), FA-420 (22TB), //M50 (44TB), //M70 (109TB), //X90R2 (294TB), //X90R3 (805TB)]
• No data migrations
• Flat and fair subscription renewals
• 7+ year depreciation schedules
• Improved EPS
• Best-in-class efficiency & space savings
• Included & on-demand controller or blade upgrades, no re-buy required
[Chart: over a 9-year span, 10 TB grows with an added 38 TB.*]
*Requires Evergreen//Forever subscription. Restarts the Ever Modern clock. See subscription terms for details.
Evergreen Subscriptions Work
• Software subscription: 1/3 of arrays running ActiveCluster were purchased before we released it
• Hardware subscription: 10k+ controllers upgraded via Evergreen
[Timeline: 2012 through 2022, with flash modules growing from 8TB to 17TB to 52TB R3]
All-inclusive subscription model: termed subscription with metering and billing via Pure1 (effective used capacity). See Purestorage.com/Evergreen.