Proof Of Concept Guide
for
Virtual SAN

Cormac Hogan
VMware Technical Marketing
September 2013
Version 1.0a
Contents

Introduction
Requirements
Workflow
Proof Of Concept Lab Setup
Step 1 - Setup the VSAN Network
Step 2 - Enable VSAN on the Cluster
  2.1 Verifying the VSAN datastore
  2.2 Verifying the Storage Provider status
Step 3 - Scale-Out Storage
  3.1 Create New Disk Group
Step 4 - Build VM Storage Policies
  4.1 Deploy VMs
  4.2 Modifying VM Storage Policies
Step 5 - vMotion & Storage vMotion
  5.1 Storage vMotion from NFS to vsanDatastore
  5.2 vMotion from host with local storage to host without local storage
Step 6 - vSphere HA & VSAN Interoperability
  6.1 Check base-sles object layout
  6.2 Enable HA on the cluster
  6.3 Host Failure - No running VMs
  6.4 Host Failure - Running VMs
Introduction

This document defines different use cases and test scenarios that a customer can evaluate with the Virtual SAN (VSAN) solution. Sample customer scenarios we expect to drive adoption of VSAN are VDI, test & development, and DR-as-a-target. There may of course be other use cases.
Requirements

Let us begin this proof of concept (PoC) guide by highlighting the equipment required to run the complete set of tests described in this document:

Done  Requirement

[ ]   5 hosts (as per the HCL) with ESXi version 5.5 installed.

[ ]   4 hosts must contain local storage. Local storage must comprise at least one empty HDD and one empty SSD (again, as per the HCL). One host does not have to contain any local storage. In this PoC, each of the 4 hosts has 1 x SSD and 2 x HDD. The ESXi boot disk cannot be used for VSAN.

[ ]   Each ESXi host must contain a Host Bus Adapter (HBA) or a pass-thru RAID controller as per the HCL. The RAID controller must be able to present disks directly to the host without a RAID configuration.

[ ]   1 x vCenter Server with vSphere Enterprise Plus licensing (an evaluation license can be used if only vSphere Enterprise is available).

[ ]   4 ESXi hosts added to a cluster. No services should be enabled on the cluster initially (no HA, no DRS and no VSAN).

[ ]   A 10Gb network is preferable and highly recommended for production environments, but VSAN can and will work with a 1Gb network. This PoC used a 1Gb network.

[ ]   A distributed switch should be configured for all ESXi traffic.

[ ]   A vMotion VMkernel port must be configured for each host. If IP storage is available for virtual machine templates or ISOs, this must also be configured. This PoC has an NFS datastore available.

[ ]   A virtual machine with an installed Guest OS should be available. If one is not available, a new virtual machine will have to be created and a Guest OS installed. This PoC has a virtual machine with an installed Guest OS available.
Workflow

This guide will take customers through a number of workflows, such as configuring the VSAN cluster, the creation of various VM Storage Policies and the deployment of VMs with those policies. The test cases are written in the following sequence:

1. Setup the VSAN network
2. Create the VSAN cluster
3. Verify operation of VSAN
4. Create VM Storage Policies
5. Deploy VMs
6. Verify that VM Storage Policy requirements are met by VSAN
7. Check availability on errors
Proof Of Concept Lab Setup

The PoC requires a total of 5 ESXi hosts, all running vSphere 5.5.

Four of the hosts must contain empty local disks (both HDD and SSD) that can be consumed by VSAN. There must be at least one empty SSD and two empty HDDs per host. One of the hosts (esx-04a) does not contain any local storage.

Four of the hosts (esx-01a, esx-02a, esx-03a and esx-04a) are already in a cluster called Cluster Site A. A fifth host (esx-05a) is a stand-alone host that is not part of the cluster.

In this guide, one NFS datastore, called ds-site-a-nfs01, is present and available to all hosts in the PoC. A single VM called base-sles resides on this NFS datastore. This is not necessary for a successful PoC, but having a VM template available in your environment will speed it up. Optionally, you may have to deploy a virtual machine from an ISO image to complete the exercises.

A final note regarding network configuration: in this guide, a distributed switch is configured. While a distributed switch is not necessary for a successful deployment of VSAN (VSAN supports both VSS and VDS), it allows us to use Network I/O Control to provide Quality of Service (QoS) on the VSAN traffic.
Step 1 - Setup the VSAN Network

The first step is to set up communication between each of the ESXi hosts in the cluster. In this PoC, there are already 3 VMkernel ports configured on each ESXi host (these can also be verified from the command line, as shown after the list). They are:

1. Management Network
2. vMotion Network
3. Storage Network
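If you prefer to check the existing VMkernel configuration from the ESXi shell, the following is a minimal sketch; the interface names returned will vary by environment:

    esxcli network ip interface list
                                      (lists all VMkernel adapters, e.g. vmk0, vmk1, vmk2)
    esxcli network ip interface ipv4 get
                                      (shows the IPv4 address and addressing type of each adapter)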
Navigate from the Home view to vCenter > Hosts & Clusters. Select the first of the four ESXi hosts and then select the Manage tab. Select Networking and then the Virtual Adapters view to review the adapters listed above.

We must now add the VSAN network. Setting up the VSAN network involves steps identical to setting up any other VMkernel network. Click on the icon to add a new virtual adapter (VMkernel Network Adapter), then select the distributed port group called VSAN Network that has already been created.
At the point where you select the port properties, select Virtual SAN traffic.

All other settings may be left at the default (Obtain IPv4 settings automatically); the VSAN interfaces will be assigned their IP addresses via DHCP. This must be repeated for the four ESXi hosts in the cluster (esx-01a through esx-04a). We will deal with ESXi host esx-05a separately in Step 3 of the lab.
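Although this guide drives everything through the vSphere Web Client, you can optionally confirm the VSAN network configuration from the ESXi shell of each host. This is a minimal sketch; the interface name (vmk3) and the peer address (192.168.1.102) are placeholders for whatever your environment assigns:

    esxcli vsan network list
                                      (shows the VMkernel interface tagged for Virtual SAN traffic)
    vmkping -I vmk3 192.168.1.102
                                      (tests connectivity to another host's VSAN interface)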
Key Takeaway: VSAN requires a network connection between hosts for communication and I/O purposes.
Step 2 - Enable VSAN on the Cluster

At this point, the networking is in place on all nodes, and the four ESXi hosts are already in a cluster. We can now go ahead and enable VSAN on the cluster, and point out how easy it is to set up.

Select the cluster object, then navigate to the Manage tab > Settings > Virtual SAN > General. Next, click on the Edit button located to the right of the window. Do not click OK until told to do so.
By default, VSAN will automatically add new disks to the VSAN cluster (the disk claim policy). Ensure that the setting 'Add disks to storage' is set to Manual, as we will manually add disks to the cluster.

Remember that there is a requirement to have at least 1 SSD per host. In production environments, SSD will make up at least 10% of all storage, the remaining 90% being HDD of course. In this lab, there is a single SSD and two HDDs on three of the four ESXi hosts in the cluster. Click OK.

After clicking OK, you may have to refresh the vSphere Web Client for these changes to appear.
Go to the VSAN Disk Management view. Here, you will see that there are 4 hosts in the cluster, but no disk groups have been created. This is because Manual mode was selected when the cluster was created, so you have to create the disk groups manually.

Select host esx-01a and click on the icon (with the green plus symbol) to create a new disk group.
Each disk group that you create may contain only one SSD. The SSD is used as a read cache/write buffer, and the HDDs are used for data disks/capacity. I have selected the only SSD and both of the HDDs on these hosts to be part of VSAN (although a 1:10 ratio of SSD to HDD capacity is what is deemed a best practice).

Repeat this operation for all three hosts that have storage (esx-01a, esx-02a and esx-03a). Host esx-04a does not have any local storage. When completed, this cluster will have three disk groups.
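If you wish to cross-check the disk group configuration from the ESXi shell of each host, the following sketch may help:

    vdq -q
                                      (queries which local disks are eligible for use by VSAN, and flags which are SSDs)
    esxcli vsan storage list
                                      (lists the SSD and HDDs claimed into this host's disk group)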
At this point, all four ESXi hosts in the cluster should be able to successfully see the VSAN datastore, labeled vsanDatastore.
Key Takeaway: The steps to set up VSAN are very simple, and are akin to the steps to set up vSphere HA and DRS clusters.
2.1 Verifying the VSAN datastore

At this point, we will verify that the VSAN datastore has been successfully created and that its capacity correctly reflects the total local storage capacity of the ESXi hosts. Once the network is created, physical storage is added to disk groups and the VSAN cluster is created, a single VSAN datastore is built. Navigate to the Storage view and check the status of the vsanDatastore.

The capacity is an aggregate of the HDDs taken from each of the 3 ESXi hosts in the cluster that contribute storage. That is 3 x 2 x 20GB = 120GB (less some vsanDatastore overheads). The 3 x SSDs (1 on each of the 3 ESXi hosts with storage) are not considered in the capacity calculation.
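The datastore and the cluster state can also be confirmed from the ESXi shell. A short sketch (exact output varies by build):

    esxcli storage filesystem list
                                      (the vsanDatastore should appear as a mounted volume of type 'vsan')
    esxcli vsan cluster get
                                      (shows the cluster UUID, the local node state - master, backup or agent - and the member count)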
Key Takeaway: VSAN uses the SSDs for read caching and write buffering. HDDs are used for capacity.
2.2 Verifying the Storage Provider status

To learn about the capabilities of VSAN, and to communicate between vCenter and the storage layer, a storage provider needs to be configured. Each ESXi host has a storage provider.

When a VSAN cluster is formed, the storage providers are registered automatically by vCenter with SMS, the Storage Management Service. However, it is best to verify that the storage provider on one of the ESXi hosts has successfully registered and is active, and that the storage providers on the remaining ESXi hosts in the cluster are registered and in standby mode.

Navigate to the vCenter Server > Manage tab > Storage Providers to check the status.

In this four-node cluster, one of the VSAN providers is online and active, while the other three are in standby. Each ESXi host participating in the VSAN cluster has a provider, but only one needs to be active to provide vsanDatastore capability information. Should the active provider fail for some reason, one of the standby storage providers will take over.

Key Takeaway: The storage provider, which surfaces the VSAN capabilities up to vCenter, is highly available.
Step 3 - Scale-Out Storage

In this section of the lab, we are going to look at the ability to add another ESXi host with storage to the VSAN cluster, and observe the scale-out capabilities of the product.

At this point, we have four ESXi hosts in the cluster, although only three are contributing local storage to the VSAN datastore. The VM, base-sles, should currently reside on a host that contributes local storage: esx-01a, esx-02a or esx-03a.

Let's check the status of the vsanDatastore. Navigate to the vsanDatastore Summary tab and note the current capacity.

You have now reached the conclusion that you would like to add more compute and storage to the VSAN cluster, which involves adding a new ESXi host that contains additional disks. There is a fifth ESXi host (esx-05a) in your inventory that has not yet been added to the cluster. We will do that now and examine how the vsanDatastore seamlessly grows to include the new capacity.

Navigate to the cluster object in the inventory, right click and select the action 'Move Hosts into Cluster'. From the list of available hosts (you should only see esx-05a), select the host and click OK.
The next step is to add a VSAN network to this host. As per the procedure outlined in Step 1 of this lab guide, create a VSAN VMkernel network adapter on this host using the distributed port group called VSAN Network. Make sure you select the Virtual SAN traffic service, and let DHCP provide the IP settings.
3.1 Create New Disk Group

When that is completed, since the cluster was set up in Manual mode, we need to create a new disk group using the disks on host esx-05a. Select Cluster > Manage > Settings > Disk Management, and select the host just added. You will notice that 0 of 3 disks are in use.

Now create a new disk group, and add the disks (one SSD and two HDDs) to the disk group. These disks are all 10GB in size. Click OK and wait for the new disk group to be created.

When it is created, revisit the vsanDatastore Summary view and check whether the size has increased with the addition of the new host and disks. You should observe that the capacity of the datastore has seamlessly increased from 118GB to 138GB with the addition of two 10GB HDDs (remember that SSDs do not contribute towards capacity).
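For reference, the disk group creation performed above in the UI has a command-line equivalent on the new host. This is a sketch only; the naa.* device identifiers are placeholders you would take from the output of 'vdq -q' on esx-05a:

    esxcli vsan storage add -s naa.<ssd-id> -d naa.<hdd-id-1> -d naa.<hdd-id-2>
                                      (claims one SSD (-s) and two HDDs (-d) into a new disk group)
    esxcli vsan cluster get
                                      (the Sub-Cluster Member Count should now report 5 hosts)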
As you can see, increasing storage and compute in VSAN is relatively simple. Note that if VSAN had been set up in Automatic mode, the steps to create a disk group would not be necessary.

Key Takeaway: Scaling out storage and compute in a VSAN cluster is as simple as adding a new host to the cluster.
Step 4 - Build VM Storage Policies

Once the storage provider is added, capabilities made available from VSAN will be visible in the VM Storage Policies view, which can be found in Home > Rules & Profiles. VM Storage Policies are similar in some respects to the vSphere 5.0 & 5.1 Profile Driven Storage feature. There are two icons in this view, representing 'Create VM Storage Policies' and 'Enable VM Storage Policies' respectively. The first step is to enable VM Storage Policies. (It is envisioned that Storage Policies will be automatically enabled on the VSAN cluster in the final product, but for now they must still be enabled manually.)

Click on the icon with the check mark to enable VM Storage Policies per compute resource, and then click Enable.

Once enabled, you may close the window. The capabilities of the vsanDatastore should now be visible during VM Storage Policy creation. By using a subset of these capabilities, a vSphere admin will be able to create a storage policy for their VMs to guarantee Quality of Service (QoS). Click on the icon with the plus sign, representing 'Create New VM Storage Policy', to begin.
The first step is to give the VM Storage Policy a name. I will call it VDI-Desktops for the purposes of this example.

Next we get a description of rule sets. Rule-sets are a way of using storage from different vendors. For example, you can have a single "bronze" policy which contains two separate rule-sets, one of which is a VSAN rule-set and the other a 3rd-party storage vendor rule-set. When "bronze" is chosen as the VM Storage Policy at VM deployment time, both VSAN and the 3rd-party storage are checked to see if they match the requirements in the policy.

The next step is to select a subset of the vsanDatastore capabilities. Refer to the official documentation for a full description of the capabilities. To begin, you need to select the vendor; in this case it is called vSan.
The next step is to add the capabilities required for the virtual machines that you wish to deploy in your environment. In this particular example, I wish to specify an availability requirement. In this case, I want the VMs which have this policy associated with them to tolerate at least one failure (host, network or disk).
The nice thing about this is that I can immediately tell whether any datastores are capable of understanding the requirement via the Matching Resources window. As you can see, my vsanDatastore is capable of understanding the requirements that I have placed in the VM Storage Policy.

Note that this is no guarantee that the datastore can meet the requirements in the VM Storage Policy. It simply means that the requirements in the VM Storage Policy can be understood by the datastores which show up in the matching resources.

This is where we start to define the requirements for our VMs and the applications running in them. We simply tell the storage layer what our requirements are by selecting the appropriate VM Storage Policy during VM deployment, and the storage layer takes care of deploying the VM in such a way that it meets those requirements.

Complete the creation of the VM Storage Policy. The new policy should now appear in the list of VM Storage Policies.
4.1 Deploy VMs

Create a virtual machine which uses the VDI-Desktops policy created earlier.

Since DRS is not enabled, you will have to choose a host for this VM. Choose host esx-01a, esx-02a or esx-03a; do not use esx-04a at this time. When it comes to selecting storage, you can now specify a VM Storage Policy (in this case VDI-Desktops). This will show that the vsanDatastore is Compatible as a storage device, meaning once again that it understands the requirements placed in the storage policy. (It does not mean that the vsanDatastore will implicitly be able to accommodate the requirements, just that it understands them. This is an important point to understand about Virtual SAN.)

Continue with the creation of this virtual machine, selecting the defaults for the remaining steps, including compatibility with ESXi 5.5 and later, and Windows 2008 R2 (64-bit) as the Guest OS.

When you get to the '2f. Customize hardware' step, in the Virtual Hardware tab, expand the New Hard Disk virtual hardware and you will see the VM Storage Policy set to VDI-Desktops. Reduce the hard disk size to 5GB so that it can be replicated across hosts (the default size is 40GB; we want to reduce this as this is a small lab environment).
Complete the wizard. When the VM is created, look at its Summary tab and check the compliance state in the VM Storage Policies window. It should say Compliant with a green check mark.

As a final step, you might be interested in seeing how your virtual machine's objects have been placed on the vsanDatastore. To view the placement, select your virtual machine > Manage tab > VM Storage Policies. If you select one of the objects, the Physical Disk Placement view will show you on which hosts the components of your objects reside.
The RAID 1 indicates that the VMDK has a replica. This is to tolerate a failure, the value that was set to 1 in the policy, so we can continue to run if there is a single failure in the cluster. The witness is there to act as a tiebreaker: if one host fails and one component is lost, the witness allows a quorum of components to still exist in the cluster.

Notice that all three components are on different hosts for this exact reason. At this point, we have successfully deployed a virtual machine with a level of availability that can be used as the base image for our VDI desktops.

Examining the layout of the object above, we can see that a RAID-1 configuration has been put in place by VSAN, placing each replica on a different host. This means that in the event of a host, disk or network failure on one of the hosts, the virtual machine will still be available.

If a host on which the VM does not reside fails, then no action is required. If the host on which the VM resides fails, then vSphere HA can be used to automatically bring the VM online on one of the remaining hosts in the cluster. We will examine this interoperability with vSphere HA in a later module.
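If you have the Ruby vSphere Console (RVC) that ships with vCenter Server 5.5, the same object layout can also be dumped from the command line. A sketch, assuming a datacenter named DC (adjust the inventory paths and the VM name to suit your environment):

    rvc administrator@vcenter.corp.local
                                      (log in to RVC on the vCenter Server)
    vsan.vm_object_info /localhost/DC/vms/<vm-name>
                                      (prints each object - VM Home, Hard Disk 1 - with its RAID-1/RAID-0 component tree and witnesses)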
Key Takeaway: Policies enable software-driven storage. For the first time, administrators can communicate their storage requirements on a per-VM basis to the storage layer from vCenter.
4.2 Modifying VM Storage Policies

Scenario: A customer notices that the VM deployed with the VDI-Desktops policy is getting a 90% read cache hit rate. This implies that 10% of reads need to be serviced from HDD. At peak time, this VM is doing 3000 IOPS; therefore, there are 300 reads that need to be serviced from HDD. The specifications of the HDDs imply that each disk can do 150 IOPS, meaning that a single disk cannot service these additional 300 IOPS. Meeting the I/O requirements of the VM therefore implies that a stripe width of two disks should be implemented.
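Spelled out, the sizing arithmetic is:

    3000 IOPS x 10% read cache misses = 300 IOPS to be serviced from HDD
    300 IOPS / 150 IOPS per HDD       = 2 HDDs, i.e. a stripe width of 2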
Edit Profile

The first step is to edit the VDI-Desktops policy created earlier and add a stripe width requirement to it. Navigate back to Rules & Profiles, select VM Storage Policies, select the VDI-Desktops policy and click Edit.
Add Stripe Width Capability

In Rule-Set 1, add a new capability called 'Number of disk stripes per object' and set the value to 2. This is the number of disks that the stripe will span.

Click OK. You will observe a popup which states that the policy is already in use; we will need to synchronize the virtual machine with the policy after saving the changes. Change 'Reapply to VMs' to Now and click Yes.
Resync Virtual Machine with Policy Changes

Staying on the VDI-Desktops policy, click on the Monitor tab. In the VMs & Virtual Disks view, you will see that the Compliance Status is 'Out of Date'.

Click on the Reapply Policy icon (3rd from the left) to reapply the policy to all out-of-date entities, and answer Yes to the popup. The compliance state should change once the updated policy is applied.

As a final step, we will now re-examine the layout of the storage object to see if the request for a stripe width of 2 has been implemented. Return to the Virtual Machine view > Manage > VM Storage Policies and select the Hard Disk 1 object.
Now we can see that the disk layout has changed significantly. Because we have requested a stripe width of two, the components that make up each stripe are placed in a RAID-0 configuration. Since we still have our failures-to-tolerate requirement, the RAID-0 stripes must be mirrored by a RAID-1. Now that we have multiple components distributed across hosts, additional witnesses are needed in case of a host failure.
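Reapplying the policy causes VSAN to build the new components in the background and synchronize them. If you want to watch this reconfiguration progress, RVC offers a resync view; the cluster inventory path below is an assumed example:

    vsan.resync_dashboard /localhost/DC/computers/<cluster>
                                      (shows the bytes remaining to sync while the new components are built)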
We are not going to install a Guest OS in this virtual machine. Instead, we will focus our attention on another small virtual machine available in your environment for the remaining tests.
Key Takeaway: As virtual machine storage requirements change, VSAN administrators can simply update the policy. Compare this to a physical SAN or NAS infrastructure, where a new datastore would have to be provisioned to satisfy changing virtual machine I/O requirements.
Step 5 - vMotion & Storage vMotion

In this step, we will examine the interoperability of VSAN with core vSphere features such as vMotion and Storage vMotion. Power on the virtual machine called base-sles, which resides on host esx-01a. This is a very small virtual machine, but it will be sufficient for the purposes of this lab. Wait until the VMware Tools show as running before continuing; this should only take a moment or two.
5.1 Storage vMotion from NFS to vsanDatastore

This VM currently resides on an NFS datastore called ds-site-a-nfs01. We will migrate it to the vsanDatastore. With the virtual machine base-sles selected in the inventory, choose the Migrate option from the Actions list.

Choose the option to 'Change datastore'. At the Select Datastore window, change the VM Storage Profile to VDI-Desktops. This will show the vsanDatastore as Compatible.

Finish the migration process and wait for the VM to migrate (a few minutes). This demonstrates that you can migrate from traditional datastore formats such as NFS and VMFS to the new vsanDatastore format.
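To double-check where the VM's files now live, you can query the host on which the VM is registered. A small sketch from the ESXi shell:

    vim-cmd vmsvc/getallvms
                                      (the 'File' column should now show [vsanDatastore] base-sles/base-sles.vmx)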
Once the VM has been successfully migrated to the vsanDatastore, examine its layout. It should have the same layout as the VDI-Desktops VM that we created earlier, i.e. a mirror and stripe configuration.

Navigate to the Virtual Machine view > Manage > VM Storage Policies. On first observation, you may see the VM Home and Hard Disk 1 objects state that the compliance state is Not Applicable. Simply click on the Check Compliance State icon (the middle icon), and this should make them compliant.

Now select the Hard Disk 1 object and have a look at the physical disk placement. You should be able to see the stripe width of 2 (RAID-0) and the replication/mirror of each stripe (RAID-1). And of course, we also have the tie-breaker witness disks.
5.2 vMotion from host with local storage to host without local storage

Now we will show how hosts which are in the VSAN cluster, but do not have any local storage, can still use the vsanDatastore to run VMs.

At this point, the virtual machine base-sles resides on the vsanDatastore. The VM is currently on a host that contributes local storage to the vsanDatastore (esx-01a.corp.local). We will now move it to a host (esx-04a.corp.local) that does not have any local storage.

Once again select the base-sles virtual machine from the inventory. From the Actions drop-down menu, once again select Migrate. This time we choose the option to 'Change host'.

At step 3 of the wizard, where a host selection needs to be made, select host esx-04a.corp.local. Complete the migration by selecting the defaults in the remaining steps of the migration wizard.

When the migration has completed, you will see how hosts that do not contribute any local storage to the vsanDatastore can still run virtual machines. This means that VSAN can be scaled out on a compute basis.

To complete this section, migrate the VM back to a host that has local storage making up the VSAN datastore, e.g. esx-01a, esx-02a or esx-03a. Leave the VM residing on the vsanDatastore.

Key Takeaway: VMs can be migrated between a vsanDatastore and traditional VMFS and NFS datastores.
Step 6 - vSphere HA & VSAN Interoperability

This final section will provide details on how to evaluate VSAN with vSphere HA.

6.1 Check base-sles object layout

First, let's examine the object layout of the virtual machine, starting with the VM Home object. This storage object has 3 components, two of which are replicas making up a RAID-1 mirror. The third is a witness disk that is used for tie-breaking.

The next object is the hard disk, which we have looked at a number of times already. Just to recap: it has a stripe width set to 2, so there is a RAID-0 stripe component across two disks. There is no magic here; to mirror an object with a stripe width of 2, 4 disks are required. Again, since ComponentFailuresToTolerate is set to 1, there is also a RAID-1 configuration to replicate the stripe. So we have two RAID-0 (stripe) configurations, and a RAID-1 to mirror the stripes. The witnesses are once again used for tie-breaking in the event of failures.
The next step is to invoke some failures in the cluster, to see not only how they impact the components that make up our virtual machine storage objects, but also how VSAN and vSphere HA interoperate to enable availability.

6.2 Enable HA on the cluster

Navigate to the Cluster and select the Manage tab > Settings. Select the vSphere HA service; vSphere HA is currently Turned OFF. Click on the Edit button, and click on the checkbox to Turn On vSphere HA.
By default, vSphere HA Admission Control is set to tolerate a single host failure. You can examine this if you wish by expanding the Admission Control settings. When satisfied, click on the OK button to enable HA.

After enabling HA, you may see a warning about insufficient resources to satisfy the vSphere HA failover level. This is a transient warning and will go away after a few moments, once the HA cluster has finished configuring. You can refresh from time to time to remove it.

The cluster Summary tab should now show a vSphere HA overview.
6.3 Host Failure - No running VMs

In this first failure scenario, we will take one of the hosts out of the cluster. This host does not have any running VMs, but we will use it to examine how the VSAN replicas provide continuous availability for the VM, and how the Admission Control setting in vSphere HA and the ComponentFailuresToTolerate policy setting are honored.

In this step, host esx-02a is rebooted. Select the Reboot option from the ESXi host Actions menu.

In a short time, we see warnings and errors related to the fact that vCenter can no longer reach the HA agent, and then we see errors related to host connection and power status. If we check the other hosts in the cluster, we see VSAN communication issues.
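From the ESXi shell of any surviving host, the change in cluster membership is easy to observe. A quick sketch:

    esxcli vsan cluster get
                                      (the Sub-Cluster Member Count should drop from 5 to 4 while esx-02a reboots)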
With one host out of the cluster, any object components that were held on that host are displayed as 'Absent - Object not found'. First look at the VM Home object, then at the Hard Disk object.
Basically, any components on the rebooted host show up as Absent. When the host rejoins the cluster, all components are put back into an Active state. A bitmap of the blocks that have changed since a component went absent is maintained, so the resync process only needs to resync the changed blocks. Here we can see one part of the availability aspect of VSAN: virtual machines continue to run even when components go absent. If the host remains absent for more than 30 minutes, the missing components are rebuilt (reconfigured) on the remaining hosts and disks in the cluster.
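The 30-minute rebuild delay mentioned above is governed by a per-host advanced setting. It can be inspected (or, with care, changed) from the ESXi shell; treat the option path below as an assumption based on the VSAN release shipped with vSphere 5.5:

    esxcli system settings advanced list -o /VSAN/ClomRepairDelay
                                      (shows the delay, in minutes, before absent components are rebuilt elsewhere)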
6.4 Host Failure - Running VMs

Wait for the host to reboot from the previous test before continuing. Remember that we have only set ComponentFailuresToTolerate to 1. In this next example, we will halt the ESXi host (in this example, esx-03a) which contains the running VM base-sles. Here we will see the interoperability between HA and VSAN.

From the Control Center desktop Start button, navigate to All Programs > PuTTY > PuTTY. Select the host esx-03a from the list and launch an SSH session. Log in as root with the VMware1! password. Type the command halt in the shell.
Once again, you will see vSphere HA detect the error, and as before, vCenter reports on the host connection and power state.

If you go immediately to look at the VM's storage object layout, you might find that you can no longer query the component state, since the host that owned the object has gone down. Once the VM has successfully failed over to an alternate host, you will once again be able to query the object layout. You should see vSphere HA kicking in and failing over the VM from the failed host to another host in the cluster.
Finally, check the status of the VM. In my test, the VM was successfully restarted on host esx-04a (although it may be restarted on a different host in your environment). Note that you should refresh the UI periodically to see these changes occur.

With the VM running, examine the object layout again. It should reveal that not all components are present. However, there is still a quorum of components available, enabling the VM to tolerate the host failure. If the failure persists for longer than 30 minutes, the components will be rebuilt on the remaining disks in the cluster.
This completes the PoC.

Key Takeaway: While the policy can provide highly available virtual machine storage objects, the interoperability between VSAN and vSphere HA provides high availability at both the storage AND compute layers.