Training VNX - File

This document describes the components and configuration of a Dell VNX storage array: the management and data connections of Control Stations and Data Movers, the states those components can be in, failover and failback policies when a primary Data Mover faults, and how file systems are created from file pools and mounted on Data Movers for access over the NFS and CIFS protocols.



Management Connection (responsible for management of CS and SP):

Control Station ports: eth0, eth1, eth2, eth3 >>>

eth0 >> connects to the primary DM (server_2)
eth1 >> connects to the secondary CS (CS1)
eth2 >> connects to the secondary DM (server_3)
eth3 >> external connection (for Control Station login)

The Data Mover management switch consists of:

mge0: receives the connection from the primary CS
mge1: receives the connection from the secondary CS
mge2: management connection between the DM and SP, or between the DM and a new Data Mover enclosure

Data Mover data connection (responsible for data flow):

>> The backend connection between DM and SP is across FC ports, also called SP ports. Each DM has a connection to both SPs via:
HBA 0 -- connects to SP A (bound)
HBA 1 -- connects to SP B (bound)

Data Mover:
Type: 1 (NAS) >> active/primary, 4 (standby) >> secondary
State:
0 (Reset) >> down
1 (DOS booted / BIOS)
2 (POST failure)
3 (Loaded)
4 (Configured)
5 (Contacted) >> fully functional

CS state:
0 (Reset)
6 (Ready state)
10 (Active state of the primary CS)
11 (Active state of the secondary CS)

Other states such as 7 and 13 represent a rolling reboot or a panic due to a software issue with the Data Mover operating environment (DART). States 17, 18, 19, 21 and 23 represent a hardware issue with the Data Mover.
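
These states can be checked from the Control Station. A commonly used command is getreason (a sketch; the path can vary by NAS code release), with output similar to:

/nasmcd/sbin/getreason
10 - slot_0 primary control station
 5 - slot_2 contacted
 5 - slot_3 contacted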

DART (Data Access in Real Time): the operating environment for the Data Mover (NAS).

Failover: when the primary Data Mover is faulted, the Control Station initiates the failover to the standby Data Mover.
Failover policies:

Auto: immediately activate the standby Data Mover.
Retry: the Control Station first tries to recover the primary Data Mover; if the recovery fails, it activates the standby.
Manual (default): the Control Station shuts down the primary Data Mover and takes no other action; the standby must be activated manually.

Failback: reverting the services from the standby Data Mover to the primary Data Mover. Failback is always manual.

Reboot: restarting the Data Mover.
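
Example commands (a sketch; verify options against your release) for manual failover and failback with server_standby:

server_standby server_2 -activate mover    >> manual failover: activate the standby for server_2
server_standby server_2 -restore mover     >> failback: restore services to the primary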

Virtual Data Mover (VDM): basically a root file system (an internal file system) which acts as a Data Mover server.
A VDM can be used to create CIFS servers and mount file systems on them.
By using VDMs, it is possible to separate CIFS and/or NFS servers from each other and from the associated environment.
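
Example (a sketch; option spelling may vary by release) to create a VDM on server_2 and load it:

nas_server -name vdm_new -type vdm -create server_2 -setstate loaded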

File system: the file resource used to access the file data (e.g. test_fs).
Mount point: the address/location where the file resource is mounted in the DART system (e.g. /test_fs).
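
Example (a sketch; the pool name is a placeholder) to create a file system and mount it on a Data Mover:

nas_fs -name test_fs -create size=10G pool=<file_pool_name>
server_mount server_2 test_fs /test_fs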

File Pool: a file storage space container.

+ The file pool is used to create file resources such as file systems.
+ The file pool receives its space from a block pool (block LUNs).
+ Block LUNs need to be allocated from VNX Block (SP) to the file host (NAS/CS/Data Mover).
+ Once the LUNs are allocated, they need to be scanned from the file side using the command: nas_diskmark -m -a
+ After the scan, the space will be available in the file pool as potential space.
+ Potential space denotes newly assigned space to the file pool; once the space is used it shows under the total space of the pool.
+ Once LUNs are assigned to the file side they are called disk volumes (dvols).
+ Different file pools are created depending on the LUN configuration, e.g.:
Pool 1 (allocated LUNs created from SAS drives, RAID 5 (4+1))
Pool 2 (NL-SAS, RAID 6)
Pool 3 (SAS, RAID 5 (8+1))
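
To check the pools and their available/potential space (a sketch; verify with the man page):

nas_pool -list                 >> lists all file pools
nas_pool -size <pool_name>     >> shows used, available and potential space for a pool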

++ Once a file system is created from the file pool, it needs to be mounted on a physical DM or a VDM. Unless the file system is mounted, it cannot be used.
++ By default the FS will be mounted in read-write mode. You can change it to read-only using the command below:
server_mount server_2 -option ro test_fs /test_fs

 For other commands, refer to the command document shared.



File: Interface:

Interface/IP
Initiator >> WWN/IQN (for block)
Host >> name/IP (for file)
+++++
File side: all hosts (Linux, Windows, etc.) will connect to the Data Mover via IPs/interfaces.
Interfaces >> created on the ports of IO modules

IO port: Ethernet port (physical device)

cge: copper gigabit Ethernet: 1 Gb: 4 ports >> IO module

cge0: 10.10.10.10 (interface)
cge1: 10.10.10.11
cge2: 10.10.10.12
cge3: 10.10.10.13

fxg: 10 Gb: 2 ports >> IO module

Command to create an interface on a physical port:

server_ifconfig server_2 -create -Device cge0 -name test -protocol IP 10.241.169.140 255.255.255.128 10.241.169.255

>> Virtual ports or virtual devices (on top of physical ports):

LACP: Link Aggregation Control Protocol
Trunk: grouping of physical ports
FSN: Fail-Safe Network

These offer redundancy and load balancing at the port level.

LACP: aggregation of links (link: connection with the switch)


cge0 --L1--+
cge1 --L2--+-- LACP device (the interface will be created on the LACP port)
cge2 --L3--+
cge3 --L4--+

Command to create an interface on a logical port:

server_ifconfig server_2 -create -Device LACP01 -name test -protocol IP 10.241.169.140 255.255.255.128 10.241.169.255

Here the logical device can be: LACP/FSN/Trunk.

FSN: primary=cge0, secondary=cge1, OR
FSN: primary=Trk1, secondary=Trk2
where Trk1 and Trk2 are trunk ports.

Trunk: grouping of ports

Trunk1: cge0 + cge1
Trunk2: cge2 + cge3
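
Example commands (a sketch; the option strings are assumptions, verify with the server_sysconfig man page) to create the virtual devices described above:

server_sysconfig server_2 -virtual -name LACP01 -create trk -option "device=cge0,cge1 protocol=lacp"
server_sysconfig server_2 -virtual -name FSN01 -create fsn -option "primary=cge0 device=cge0,cge1"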

NFS: Network File System (a client-server protocol)

+ Primarily used by Linux/Unix-based hosts/clients.
+ Client-specific: permissions have to be assigned from the storage.
+ It is a client-server protocol used to access file systems over the network (TCP/IP).

Permissions from the storage:

root >> full (root) access
read-write (rw)
read-only (ro)
access >> basic access list (the access= option of server_export)

FS >> client (via NFS protocol) ... export

Client/user/host/NFS host (Unix, Linux, AIX) >> all mean the same thing.
The procedure of provisioning or assigning a file system to an NFS host is called exporting, and the file system itself is called an export.
The host/client/OS which uses the NFS protocol to access the file system is called an NFS host.

The file system will be referred to as the export;
the interface will be referred to as the NFS server:
Host1: Red Hat Linux: 192.168.19.10 >> wipro1 (root)
Host2: Unix: 192.168.19.20 >> wipro2 (ro)
Host3: 192.168.19.30

Command to create an NFS export on the storage:

server_export server_2 -Protocol nfs -option root=192.168.19.10,rw=192.168.19.10,ro=192.168.19.20 /Fs_test

>> This ensures Host1 gets root access and Host2 gets read-only (RO) access.
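
To verify, the existing exports can be listed (a sketch; option spelling assumed, check the man page):

server_export server_2 -list    >> lists the exports on server_2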

+ The host will use one of the interfaces below to access the NFS server:

server_ifconfig server_2 -all >> to get the list of interfaces
NFS-test: 10.10.10.20 >> device=cge0 >> select any interface for NFS
++++++
Linux (NFS client): Host1: Red Hat Linux: 192.168.19.10 >> wipro1 (root)
Export path (nfs_interface:/fs_mount_point): 10.10.10.20:/Fs_test

Command used on the host end to mount the NFS export:

mount -t nfs 10.10.10.20:/Fs_test /tmp_new_fs
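
To make the mount persistent across host reboots, an /etc/fstab entry like the following can be used (the mount point here is just an example):

10.10.10.20:/Fs_test  /tmp_new_fs  nfs  defaults,_netdev  0 0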

CIFS: Common Internet File System: the protocol used to access file storage available over the network.

Windows terminology: >>>

Domain: a computer network where user accounts, groups and computer objects can be accessed and managed with common rules and principles. It is administered by a centralised server called the domain controller.

Domain controller: the key server of your Active Directory environment. Domain controllers keep the user information, perform security authentication and manage GPOs (group policy objects), i.e. the rules and principles. Commonly termed the DC.

Active Directory: Microsoft's technology to manage computers and other devices over a network. It runs different services (such as DC, LDAP, DNS etc.) which manage user permissions, access, authentication etc.

DNS: the Domain Name System. It translates domain names to IPs and vice versa. It is because of DNS that you can access a CIFS server via both name and IP.

NTP: Network Time Protocol. It keeps all the Active Directory servers (DC, DNS, LDAP etc.) in time sync with each other. More than 300 seconds of time difference can cause communication issues with these servers.

All of the above servers are essential for the smooth functioning of CIFS. The users who access the CIFS shares or CIFS servers are generally referred to as CIFS clients or Windows clients.
=============Windows side requirement ================
wipro.com >> domain
Domain controller servers >> dc1 (10.10.10.20), dc2 (10.10.10.30)
DNS >> DNS1 (10.10.10.20), DNS2 (10.10.10.40)
NTP >> NTP1 (10.10.10.20), NTP2 (10.10.10.40)
================
Different naming conventions:
CIFS server / SMB server (Server Message Block) / NAS server / NetBIOS name / compname >> all refer to the same thing.

Various SMB versions available per Windows OS:

SMBv1 >> Windows 2003 and earlier
SMBv2 >> Windows 2008 (Vista)
SMBv2.1 >> Windows 2008 R2 (Windows 7)
SMBv3 >> Windows 2012 (Windows 8)
SMBv3.02 >> Windows 2012 R2 (Windows 8.1)

Storage:
==========
File system >> share
Interface >> CIFS server
++ Join the CIFS server to the domain (storage - Windows).

Interface: cifs_int: 10.10.20.30

CIFS server creation involves two steps:

1. Creating the CIFS entry
2. Joining the CIFS server to the domain (the domain join operation)

server_cifs server_2 -add netbios=CIFS_new,domain=wipro.com,interface=cifs_int

server_cifs server_2 -Join netbios=CIFS_new,domain=wipro.com,admin=sa288633,ou="ou=Computers:ou=Engineering:ou=new"
>> password:

where,
netbios/compname: the CIFS server name
domain: the domain info
interface: the interface carrying the CIFS server IP

OU: organizational unit
admin: domain administrator user
password: domain administrator user password

+ Once the CIFS server is joined to the domain, a computer object entry (also called a leaf object entry) is created under:
AD/DC >> EMC >> CIFS server name (CIFS_new)
+ If an OU is used, the leaf entry is created under the OU path.

FQDN: fully qualified domain name (CIFS server name + domain name), e.g. CIFS_new.wipro.com.

CIFS server output once it is joined to the domain:
server_cifs server_2
server_2 :

DOMAIN NASDOCS FQDN=nasdocs.emc.com SITE=Default-First-Site-Name RC=3

SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff

>DC=WINSERVER1(172.24.102.66) ref=3 time=1 ms (Closest Site)

CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)

Alias(es): DM112-CGEA1

Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM

Comment='EMC Celerra'

if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f

wins=172.24.102.25:172.24.103.25

FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS) >> retrying to DNS

Password change interval: 30 minutes

Last password change: Thu Oct 27 15:59:17 2005



Access of CIFS:
Any domain user (from a domain joined with the storage) can access the CIFS server like:
\\CIFS_server_ip
\\CIFS_server_name
\\FQDN

\\CIFS_server_ip\share1

Share:
+ If the customer uses the CIFS protocol to access a file system, the file system is made available in the form of shares.
+ Shares are paths within the file system.
+ The first share is always on the mount point of the file system.
+ Further shares can be created on sub-folders.
+ Deleting a share does not delete the data; only the specific share (path) becomes unavailable.
E.g. share1 (path: /Test_FS), share2 (path: /Test_FS/Folder1), share3 (path: /Test_FS/Folder1/Folder2)
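
Example command (a sketch; verify options against your release) to create a CIFS share on the storage:

server_export server_2 -Protocol cifs -name share1 -option netbios=CIFS_new /Test_FS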

Standalone CIFS server:

+ A CIFS server not associated with any domain.
+ It uses a workgroup. A workgroup is a group/container of non-domain users.
+ Used for a smaller group of users; no need for a domain.
+ Once created, the temporary password assigned at creation time needs to be changed.
+ A standalone CIFS server can be managed from Computer Management (MMC) or the CIFS MMC tool.

Requirements: CIFS interface, workgroup, name, user/password.

Command to create a standalone CIFS server:

server_cifs server_2 -add standalone=dm112cge0,workgroup=NASDOCS,interface=CIFS_int
server_2 : Enter Password:********
Enter Password Again:********

Procedure to change the default administrator password for a standalone CIFS server:

1. After logging in to a Windows host, press CTRL + ALT + Delete, and then select Change Password.
2. In the "Log on to" field, type the IP address (or name) of the Celerra CIFS server (administrator\standalone_CIFS_IP).
3. Type the original password used to create the Local Users Support, and then type a new password and confirm. No entry is made in the server log for a successful password change, but a popup will display if successful: "Your password has been changed."
4. Access the CIFS server from the standard Computer Management Microsoft MMC snap-in tool, but use the procedure outlined in 47697 or else access to the local groups database will be denied.

Snapshot/Checkpoint:
+ A point-in-time copy of the data.
+ It is not a replica of the data; rather it keeps pointers to the data.
+ ro and rw snapshots can be created. The maximum limit is 96 RO and 16 RW checkpoints per file system.
(Snapshot: block), (checkpoint: file system)
+ Snapshots are used for backup purposes (restore and copy).
>> root_checkpt >> system checkpoint
SavVol space: the space used to keep the checkpoints.

Access from Windows:

1. From the share properties >> under the Previous Versions tab.
2. \\CIFS_server_ip\<share>\.ckpt

Access from Linux:

mount -t nfs NFS-interface:/FS_test /new_fs
cd /new_fs
ll >> lists all the files and directories
cd .ckpt (.ckpt is a hidden directory from where the checkpoints can be accessed)
ll >> lists the checkpoints

Manual checkpoint: select the FS, size, pool and name (use command: fs_ckpt).

Scheduled checkpoint:
+ Checkpoints are auto-created as per the configured schedule.
+ A schedule can be created on an hourly, daily, weekly or monthly basis.
+ The retention period (keep) ensures the expiry of the snapshots.
E.g. schedule: every day @ 12 PM, take a checkpoint of FS test_new, retention period 7 days.
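
Example commands (a sketch; option names assumed from memory, verify with the man pages) for a manual checkpoint and the schedule described above:

fs_ckpt test_new -name ckpt1 -Create    >> manual checkpoint of FS test_new
nas_ckpt_schedule -create daily_noon -filesystem test_new -recurrence daily -every 1 -runtimes 12:00 -keep 7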

Quota: a limitation on usage of the file system.

+ Three types of quota can be configured on a file system.
+ Quotas limit the usage of the file system at the user level or folder level.
1. User quota
2. Group quota
3. Tree quota

1. User quota: sets the usage limitation for individual users (e.g. user rauts2, or userid 1000).
2. Group quota: sets the usage limitation for an entire group (e.g. admin group, domain group).
3. Tree quota: sets the usage limitation on folders inside the file system (e.g. /FS_new/Finance/salary/Scholarship (100 GB)).

User limit = 500 GB (hard limit), soft limit = 450 GB

Hard quota: the hard limit of the usage.
Soft quota: the threshold limit; it triggers a warning about the usage.
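
A sketch of the nas_quotas command family (the option order and block units are assumptions; check the man page before use):

nas_quotas -on -user -fs FS_new    >> enable user quotas on the file system
nas_quotas -edit -user -fs FS_new -block 524288000:471859200 1000    >> assumed form: hard:soft block limits (KB, here 500 GB:450 GB) for uid 1000
nas_quotas -on -tree -fs FS_new -path /Finance    >> enable a tree quota on a folder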

Replication: >> replicating data from one unit to another.

>> Site A (Mumbai) >> Site B (Bangalore)
>> source/primary >> destination/secondary
>> production >> DR (disaster recovery)

Types of replication (asynchronous):

Between two different storage arrays: remote replication.
Within the same storage array: local replication.

Remote replication:

>> Between two storage arrays. Supported configurations are:

VNX to VNX (except the VNX5100, which is block-only)
VNX to VNXe
>> Remote replication is always created from source to destination.

Requirements/requisites:

1. Connection (source system - destination system):
A. Management connection (CS - CS connection / cel)
B. Data connection (DM - DM / interconnect)
2. FS size (exact or larger FS size on the destination)
3. Sufficient pool space available on both ends.
4. The FS on the source can be read-write (rw); the destination FS has to be read-only (ro).
5. The destination FS should be empty.

Management connection:
On source storage array A, we need to add:
>> the destination CS IP (B)
>> a passphrase (e.g. admin123)

On destination storage array B, we need to add:
>> the source CS IP (A)
>> the same passphrase as on the source (admin123)
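
Example (a sketch; the cel names are just labels) of adding the management connection with nas_cel, run on each side with the same passphrase:

nas_cel -create siteB -ip <destination_CS_IP> -passphrase admin123    >> run on source array A
nas_cel -create siteA -ip <source_CS_IP> -passphrase admin123    >> run on destination array B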

Data connection / interconnects:
Between server_2 of the source and server_2 of the destination (e.g. interconnect id=20002), and likewise for the other Data Movers.

Source: server_2, server_3 <<>> Destination: server_2, server_3

+ Each interconnect can have one or more interface IPs.
+ In the above layout there are 4 interconnects, two for each Data Mover.
+ An interconnect can be validated (connectivity check) using: nas_cel -interconnect -v id=<interconnect_id>
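
Creation of an interconnect (a sketch following the documented option names; verify on your release):

nas_cel -interconnect -create NYs3_LAs2 -source_server server_2 -destination_system siteB -destination_server server_2 -source_interfaces ip=10.6.3.190 -destination_interfaces ip=10.6.3.173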

Replication is always created on the source end.

Command format (for VDM replication, the respective source and destination VDMs can be used):

nas_replicate -create new_session -source -vdm vdm_new -destination -vdm vdm_replica -interconnect NYs3_LAs2 -source_interface ip=10.6.3.190 -destination_interface ip=10.6.3.173 -max_time_out_of_sync 60

max_time_out_of_sync: the RPO (recovery point objective): ensures the next sync of the data (value ranges from 5 minutes to 24 hours).

+ Once the replication is created, two root replication checkpoints will be created, one on the source and one on the destination end, e.g. root_rep_ckpt.
+ Command to verify the checkpoints: fs_ckpt <fs_name> -l -a

Output for a replication session:

nas_replicate -info new_session

ID = 184_APM00064600086_0000_173_APM00072901601_0000
Name = new_session
Source Status = OK
Network Status = ok
Destination Status = OK
Last Sync Time = 4th June 2020
Type = filesystem
Celerra Network Server = cs110
Dart Interconnect = 20003
Peer Dart Interconnect = 20004
Replication Role = source
Source Filesystem = ufs1
Source Data Mover = server_2
Source Interface = 10.6.3.190
Source Control Port = 0
Source Current Data Port = 0
Destination Filesystem = ufs1_replica3
Destination Data Mover = server_2

Destination Interface = 10.6.3.173


Destination Control Port = 5081
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 60 ( RPO)
Next Transfer Size (Kb) = 0
Latest Snap on Source =
Latest Snap on Destination =
Current Transfer Size (KB) = 10045 (~10 MB)
Current Transfer Remain (KB) = 1000447 (~1 GB)
Estimated Completion Time = 0
Current Transfer is Full Copy = Yes
Current Transfer Rate (KB/s) = 76
Current Read Rate (KB/s) = 11538
Current Write Rate (KB/s) = 580
Previous Transfer Rate (KB/s) = 0
Previous Read Rate (KB/s) = 0
Previous Write Rate (KB/s) = 0
Average Transfer Rate (KB/s) = 6277
Average Read Rate (KB/s) = 0
Average Write Rate (KB/s) = 0

Local replication:

>> Replication within the same storage array.
>> The destination FS can be created in the same pool or a different pool, on the same DM or a different DM.
>> A loopback interconnect is used in the creation of local replication.
>> No connection testing (validation) is required; the rest of the replication requisites apply the same as for remote replication.

How to make the destination array accessible:

Switchover >> for DR (disaster recovery) testing

>> The source/primary is not actually down.
>> Switchover needs to be initiated from the source (using the -switchover option).
>> It syncs the outstanding replication data and then switches over the service.
>> The source FS becomes read-only and the destination FS becomes read-write.
>> The -reverse option reverts the changes to the original form; this is called failback.

Failover >> actual disaster

>> The source array is down.
>> Initiated from the destination.
>> Any data not yet synchronized will be lost.
>> The destination site becomes production (the FS changes to read-write).
>> Once the source is available, the -reverse option reverts the changes.
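
Example commands (a sketch) matching the options above:

nas_replicate -switchover new_session    >> DR test, initiated from the source
nas_replicate -failover new_session    >> actual disaster, initiated from the destination
nas_replicate -reverse new_session    >> failback once the original source is available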

CAVA (virus checking):

CAVA: Common AntiVirus Agent (the virus-checking service of the storage).

 The CAVA/VC service is available on the storage by default; it needs to be enabled/started once the AV server is configured.

AV: the antivirus server, a Windows system (e.g. Windows 2012) identified to the storage by its IP.
- The AV server needs to be configured on the Windows system.
- The CEE package (Common Event Enabler) has to be installed on the AV server along with the AV agent (e.g. McAfee). It can be downloaded from support.emc.com and includes:
+ CAVA software
+ CEPA package
+ CIFS auditing
- The CAVA software needs to be installed and configured on the AV server; the storage (VNX File) side keeps the corresponding virus-checking configuration.

+ Once the AV server is configured, the AV agent needs to be installed (e.g. McAfee, Kaspersky).
+ The AV agent is the antivirus software that actually does the scanning job.
+ The CAVA service needs to be started either from the AV server end or from the storage end:
+ From the AV server end: Control Panel - Services - EMC CAVA (start).
+ From the storage end:
server_setup server_2 -P viruschk -o start

CAVA output from Storage :

server_viruschk server_2
server_2 :
10 threads started

1 Checker IP Address(es):
172.24.102.18 offline at Mon Jan 31 18:35:43 2005 (GMT-00:00)
RPC program version: 3
CAVA release: 3.3.5, AV Engine: Network Associates
Last time signature updated: Thu Jan 27 19:38:35
2005 (GMT-00:00)
31 File Mask(s):
*.exe *.com *.doc *.dot *.xl? *.md? *.vxd *.386 *.sys *.bin *.rtf *.obd
*.dll

*.scr *.obt *.pp? *.pot *.ole *.shs *.mpp *.mpt *.xtp *.xlb *.cmd *.ovl
*.dev
*.zip *.tar *.arj *.arc *.z
No File excluded
Share \\DM112-CGE0\CHECK$
RPC request timeout=25000 milliseconds
RPC retry timeout=5000 milliseconds
High water mark=200
Low water mark=50
Scan all virus checkers every 60 seconds
When all virus checkers are offline:
Continue to work without CIFS and Viruschecking ( shutdown ): NO
Scan on read if access Time less than Thu Jan 27 19:38:35 2005 (GMT-
00:00)

++ All of the above info is included in the viruschecker.conf file on the storage and configured on the AV end.
++ The shutdown value controls the storage behavior once the AV server is down/offline. If shutdown is set to:
no: no impact on CIFS access
cifs: CIFS access will be blocked
viruschecking: AV will attempt to start, and if that fails then access will be blocked

Audit output ( file scan count )

server_viruschk server_2 -audit


server_2 :
Total Requests : 138
Requests in progress : 25
NO ANSWER from the Virus Checker Servers: 0
ERROR_SETUP : 0
FILE_NOT_FOUND : 0
ACCESS_DENIED : 0
FAIL : 0
TIMEOUT : 0
Total Infected Files : 875
Deleted Infected Files : 64
Renamed Infected Files : 0
Modified Infected Files : 811
min=70915 uS, max=1164891 uS, average=439708 uS

++ From the storage you will only get counts of files that were scanned, are in progress, or were found to be infected.
++ Detailed file information regarding the scan is available on the AV agent or AV server end.

NDMP backup:

+ The Network Data Management Protocol (NDMP) is used for backup and recovery operations between two systems using a separate Data Management Application (DMA) (backup software, e.g. Commvault, NetBackup, Avamar).
+ Both 2-way and 3-way NDMP are supported on VNX systems.
+ Block LUN backup and recovery: host-level backup.
+ File system backup and recovery: NDMP.

There are three major components of an NDMP setup:

+ Storage
+ DMA (backup software / backup server)
+ Tape device / tape library unit (TLU)

Supported features:
+ Full backups
+ Incremental backups
+ Restores

Backup server (e.g. Commvault, NetBackup, Avamar):

+ Backup management is done from the backup server.
+ It does 80% of the operations related to the NDMP backup (e.g. starting a backup/restore, stopping it, progress checks).
+ It has control and data connections with the storage and the tape unit.

VNX: the backup server needs the below info from the VNX storage for a successful connection:
1. NDMP user (username/password)
2. CIFS server information
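
Example (a sketch; the path and options are assumptions, verify with your release) of creating the NDMP user on the Data Mover:

/nas/sbin/server_user server_2 -add -md5 -passwd ndmpuser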

TLU: tape library unit

+ Also called the backup client.
+ It is secondary storage (usually cheaper storage).
+ Backups are saved on tapes (e.g. HP tape library, IBM tape library).
+ If the TLUs are directly attached to the storage over FC, it is called a 2-way NDMP setup.
+ The VNX connects to the TLUs via the AUX ports or backup ports (HBA 2 and HBA 3).
+ If the TLUs are connected over the network, it is called a 3-way NDMP setup.

Logs:
For File: support materials
Script: ./collect_support_materials (under /nas/tools)
Location: /nas/var/emcsupport

>> Can also be collected from Unisphere and transferred using WinSCP.

For Block: SP Collects

Script: ./.get_spcollect (under /nas/tools)
Location: /nas/var/log

>> Can also be collected from Unisphere (get diagnostic files) or from the setup page of the SP (e.g. http://<SP_A_IP>/setup).

Important files for log analysis:

server_log and sys_log for File
Triage_analysis.txt and Triage_splogs from Block
