Delphix User Guide 5
User Guide
January, 2016
1. _Release
1.1 What's New for Delphix Engine
1.2 Release Notes
1.2.1 Release 3.1 - 3.1.x.x Known Issues and Changes
1.2.2 Release 3.2 - 3.2.x.x Known Issues and Changes
1.2.3 Release 4.0 - 4.0.x.x Known Issues and Changes
1.2.4 Release 4.1 - 4.1.x.x Known Issues and Changes
1.2.5 Release 4.2 - 4.2.x.x Known Issues and Changes
1.2.6 Release 4.3 - 4.3.x.x Known Issues and Changes
1.2.7 Release 5.0 - 5.0.x.x Known Issues and Changes
2. _QuickStart
2.1 Quick Start Guide for The Delphix Engine
2.1.1 Oracle Quick Start Topics
2.1.1.1 Set Up an Oracle Single Instance or RAC Environment
2.1.1.2 Link an Oracle Data Source
2.1.1.3 Provision an Oracle VDB
2.1.2 PostgreSQL Quick Start Topics
2.1.2.1 Add a PostgreSQL Environment
2.1.2.2 Link a PostgreSQL Data Source
2.1.2.3 Provision a PostgreSQL VDB
2.1.3 MySQL Quick Start Topics
2.1.3.1 Add a MySQL Environment
2.1.3.2 Link a MySQL dSource
2.1.3.3 Provision a MySQL VDB
2.1.4 SQL Server Quick Start Topics
2.1.4.1 Set Up a SQL Server Target Environment
2.1.4.2 Set Up a SQL Server Source Environment
2.1.4.3 Link a SQL Server Data Source
2.1.4.4 Provision a SQL Server VDB
2.1.5 SAP ASE Quick Start Topics
2.1.5.1 Add an SAP ASE Environment
2.1.5.2 Link an SAP ASE Data Source
2.1.5.3 Provision an SAP ASE VDB
2.1.6 Create a Group
2.1.7 Delete a dSource
2.1.8 Delete a VDB
2.1.9 Disable a dSource
3. _SysAdmin
3.1 System Installation, Configuration, and Management
3.1.1 Installation and Initial Configuration Requirements
3.1.1.1 Supported Web Browsers and Operating Systems
3.1.1.2 Virtual Machine Requirements for VMware Platform
3.1.1.3 Virtual Machine Requirements for AWS/EC2 Platform
3.1.1.4 General Network and Connectivity Requirements
3.1.1.5 Checklist of Information Required for Installation and Configuration
3.1.1.6 Virtual Machine Requirements for OpenStack with the KVM Hypervisor
3.1.2 Installation and Initial System Configuration
3.1.2.1 The delphix_admin and sysadmin User Roles
3.1.2.2 Using HostChecker to Confirm Source and Target Environment Configuration
3.1.2.3 Installing the Delphix Engine
3.1.2.4 Setting Up Network Access to the Delphix Engine
3.1.2.5 Customizing the Delphix Engine System Settings
3.1.2.6 Setting Up the Delphix Engine
3.1.2.7 Retrieving the Delphix Engine Registration Code
3.1.3 Factory Reset
3.2 Managing System Administrators
3.2.1 System Administrators and Delphix Users
3.2.2 Adding New System Administrators
3.2.3 Changing System Administrator Passwords
3.2.4 Deleting and Suspending System Administrators
3.2.5 Reinstating System Administrators
3.3 Capacity and Resource Management
3.3.1 An Overview of Capacity and Performance Information
3.3.2 Setting Quotas
3.3.3 Deleting Objects to Increase Capacity
3.3.4 Changing Snapshot Retention to Increase Capacity
3.3.5 Delphix Storage Migration
3.3.6 Adding and Expanding Storage Devices
3.4 System Monitoring
3.4.1 Configuring SNMP
3.4.2 Viewing Action Status
_Release
Integrated Masking
Masked VDB Provisioning
You can now create masked copies of data at VDB provision time, using masking jobs defined on the masking engine that run when you provision or refresh the VDB. This makes it even easier to mask copies of production data and deliver secure data across teams. From one streamlined workflow, admins can define what needs to be masked and how, who can access the data, and how that masked data is distributed. For more information about masked VDB provisioning, see Provisioning Masked VDBs.
Selective Data Distribution
You can now replicate masked data directly to a target Delphix engine, while ensuring unmasked sensitive
data does not leave the production site. This feature is critical for implementing a hybrid cloud deployment in which you want only
masked data in the cloud, as well as other cases in which you want only masked data in target systems, such as offshore QA and outsourced
analytics. For more information about selective data distribution, see Selective Data Distribution Overview.
DB2 Support
DB2 LUW
DB2 LUW is available for single-machine, single-partition databases on versions 10.1 and above. Customers are supported on AIX 6.1+ and Red Hat 6.5+. For more information about DB2 LUW, see DB2 on Delphix: An Overview.
Now Jet Stream users can share ownership of a single data container. For more information on multi-owner data containers, see Jet Stream
Data Concepts and Working with Data Operations and Sources in a Container.
Technical Improvements
UX Change
Faster Start
For users with a large number of databases, application startup time will be significantly faster.
ZFS Improvements
Compressed ZFS Send/Receive
Performance of replication across a WAN (for example, to the cloud) is now improved with send stream pre-compression. This will lower CPU use
and improve bandwidth in cases where CPU performance was a bottleneck, or where compression was not previously enabled for replication. All
replications are now sent compressed so there is no longer a "compressed" checkbox in the replication UI. There is no additional CPU cost,
because the data is compressed when it is first written, rather than as it is being replicated. Reported replication throughput may be lower
because the amount of compressed data sent is reported, rather than the amount of uncompressed data. For more information, see Configuring Replication.
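As a rough illustration of the reporting change (the numbers below are hypothetical, not measured values), the reported throughput figure now corresponds to the compressed bytes sent over the wire:

```python
# Hypothetical example: replication now reports compressed bytes on the wire,
# not the logical (uncompressed) size of the data being replicated.
logical_gb = 100.0        # assumed size of the snapshot data to replicate
compression_ratio = 0.4   # assumed on-disk compression ratio (illustrative only)

reported_gb = logical_gb * compression_ratio
print(reported_gb)  # 40.0 -- smaller than the logical size, as the text notes
```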
Release Notes
Welcome to the 5.0 release of the Delphix Engine database virtualization system.
5.0 Upgrade Matrix
Tested Browser Configuration Matrix
Supported Oracle DBMS Versions and Operating Systems for Source and Target Environments
Supported DBMS Versions
Supported Operating Systems
Supported SQL Server Versions, Operating Systems, and Backup Software
Supported Versions of Windows OS
Supported Versions of SQL Server
Supported SQL Server Backup Software
Supported PostgreSQL Versions and Operating Systems
Supported DBMS Versions
Supported Operating Systems
Supported SAP ASE Versions and Operating Systems
Supported MySQL Versions and Operating Systems
Supported DBMS Versions
Supported MySQL Storage Engine
Licenses and Notices
Supported?     VDB Downtime (1)
No             N/A
Yes            Required
Yes            Not Required
Yes            Optional (2)
1. VDB Downtime is caused by a reboot of the Delphix Engine when DelphixOS is modified by an upgrade.
2. VDB Downtime may be optional for an upgrade when a release contains DelphixOS changes that are also optional. In such a scenario,
the DelphixOS changes may be deferred (see documentation on Deferred OS Upgrade).
                 Browsers Supported   Adobe Flash/Flex   Minimum Memory
Windows 7        Firefox, Chrome      10.x               4GB
Windows 7 x64    Firefox, Chrome      10.x               4GB
Mac OS X         Firefox, Chrome      9.0.3 (6531.9)     4GB
Supported Oracle DBMS Versions and Operating Systems for Source and Target Environments
Source and Target OS and DBMS Compatibility
The source and target must be running the same DBMS/Operating System combination (for example, Oracle 10.2.0.4 on RHEL 5.2) in
order to successfully provision a VDB to the target. If the DBMS versions are compatible, the OS version on a target host can be
different from the OS version on the source host.
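A minimal sketch of this rule, treating "compatible DBMS versions" as "identical" for simplicity (the function and field names are illustrative, not part of any Delphix API):

```python
# Illustrative check of the provisioning rule above: source and target must run
# the same DBMS version and the same operating system; only the OS *version*
# may differ between the two hosts. Not a real Delphix API.
def can_provision(source, target):
    return source["dbms"] == target["dbms"] and source["os"] == target["os"]

src = {"dbms": "Oracle 10.2.0.4", "os": "RHEL", "os_version": "5.2"}
tgt = {"dbms": "Oracle 10.2.0.4", "os": "RHEL", "os_version": "5.8"}
print(can_provision(src, tgt))  # True: same DBMS/OS, differing OS versions only
```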
VDB
SnapSync           Yes / No
LogSync            No / No
Rewind             Not Applicable / No
                   Yes / No
RAC                No / No
Standby Database   No / No
Oracle 10.2.0.4
The Delphix Engine does not support Oracle 10.2.0.4 databases using Automatic Storage Management (ASM) that do not have the patch set for Oracle Bug 7207932. This bug is fixed from patch set 10.2.0.4.2 onward.
Version                                  Processor Family
Solaris                                  SPARC, x86_64
5.3 - 5.11, 6.0 - 6.6                    x86_64
AIX                                      Power
HP-UX 11i v2 (11.23), 11i v3 (11.31)     IA64
Delphix supports all 64-bit OS environments for source and target, though 64-bit Linux environments also require that a 32-bit version of glibc be installed.
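One way to verify the 32-bit glibc prerequisite is to look for the 32-bit dynamic loader. A small sketch (the loader path below is the conventional location; distributions may place it elsewhere, and a package-manager query such as one for glibc.i686 is an alternative check):

```python
# Sketch: detect a 32-bit glibc on a 64-bit Linux environment by checking for
# the 32-bit dynamic loader at its conventional path. The root parameter exists
# only so the check can be exercised against a test directory tree.
import os

def has_32bit_glibc(root="/"):
    loader = os.path.join(root, "lib", "ld-linux.so.2")
    return os.path.exists(loader)

print(has_32bit_glibc())
```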
Required HP-UX patch for Target Servers
PHNE_37851 - resolves a known bug in the HP-UX NFS client prior to HP-UX 11.31.
Delphix supports only 64-bit versions of Windows on target hosts and validated-sync-target hosts.
Target hosts and validated-sync-target hosts running Windows Server 2003 SP2 or 2003 R2 must install the hotfix documented in KB
943043.
Platform: x64
Location: http://hotfixv4.microsoft.com/Windows%207/Windows%20Server2008%20R2%20SP1/sp2/Fix385766/7600/free/441351_intl_x64_zip.exe
Updates MSISCI.sys
Platform: x64
Location: http://hotfixv4.microsoft.com/Windows%207/Windows%20Server2008%20R2%20SP1/sp2/Fix388733/7600/free/440675_intl_x64_zip.exe
Expand -f:* c:\TEST\<full name of the .msu file>.msu c:\TEST
Expand -f:* c:\TEST\<full name of the .cab file>.cab c:\TEST
pkgmgr /ip /m:c:\Test\update-bf.mum
There are further restrictions on supported Windows and SQL Server versions for SQL Server Failover Cluster target environments.
See Adding a SQL Server Failover Cluster Target Environment for details.
Delphix supports SQL Server AlwaysOn Availability Groups as a dSource, but creation of a VDB on AlwaysOn Availability Groups is not supported. Delphix supports Windows Server Failover Cluster (WSFC) as a dSource and also as a target (VDB).
In versions 4.3.3.0 and newer, Delphix supports encrypted backups; older versions of the Delphix Engine do not support encrypted backups.
Version            Processor Family
PostgreSQL         x86_64

Version            Processor Family
                   x86_64

DBMS Versions

Processor Family
x86_64
Description
24248
24339
Should not be allowed to resume initial load while the dSource is disabled.
24471
Confusing error message during Oracle cluster discovery when users have a database with duplicate db unique name in
another environment.
24528
24532
24549
Cannot log in to the CLI via console when the stack is down
24618
Powering off Delphix Engine while snapsync is running causes zero blocks in datafiles
24622
24688
24689
24694
IndexOutOfBoundsException when enabling a dSource after deleting its most recent snapshot
24707
sysadmin and delphix_admin are able to sftp into the delphix appliance
24714
24764
24791
24804
24833
24836
24840,25189
24871
Space in the shared backup location breaks sync from existing backup
24879
24881
24888
24890
24894
24895
24922
24952
24962
24965
24969
24981
db_domain not used in JDBC connection entry when using wildcard notation in VDB config
24988
24999
25000
25001
25012
25050
SQL Server Linking from Environment Management screen does not select the database
25065
25066
25067
25072
Refreshing the source environment gets rid of the LiteSpeed version on the source
25107
25108
25199
25381
Delphix VM shoots up to 100% utilization, with a large number of UCP java threads spinning on locks
25465
Retain the original time zone specified during initial configuration.
Source and Target Environment Issues
If the size of SGA_TARGET is larger than /dev/shm, the administrator should reduce SGA_TARGET in the VDB configuration parameter, and save
a named template for use in provisioning other VDBs.
Alternatively, increase the /dev/shm size in /etc/fstab.
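The sizing check behind this advice can be sketched as follows (parse_size is an illustrative helper for suffixed values such as "4G"; in practice the inputs would come from the VDB configuration and the mounted size of /dev/shm):

```python
# Sketch of the rule above: SGA_TARGET must fit within /dev/shm, otherwise
# reduce SGA_TARGET in the VDB configuration (or enlarge /dev/shm in /etc/fstab).
def parse_size(value):
    """Convert a size string like '4G' or '512M' to bytes (illustrative helper)."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}
    value = value.strip().upper()
    if value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)

def sga_fits_in_shm(sga_target, dev_shm_size):
    return parse_size(sga_target) <= parse_size(dev_shm_size)

print(sga_fits_in_shm("2G", "4G"))  # True: SGA_TARGET fits
print(sga_fits_in_shm("6G", "4G"))  # False: reduce SGA_TARGET or grow /dev/shm
```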
In some cases, it may be possible to add TNS_ADMIN to the ssh environment that the Delphix Engine uses:
1. Set PermitUserEnvironment to yes in sshd_config.
2. Restart the sshd daemon.
3. Add TNS_ADMIN=<loc> to ~/.ssh/environment for the respective OS user used by Delphix.
Add the IP address of the Delphix Engine to the list of invited nodes in $ORACLE_HOME/network/admin.
Make sure all files in the target path are readable by the OS user given to Delphix Engine.
Remove the Oracle sample schemas from the source database before provisioning VDBs.
Provisioning to a higher SQL Server version if the source is SQL Server 2005
If the source for a VDB is SQL Server 2005, then you can't provision to SQL Server 2008 or 2008R2 directly.
Running the manual recovery script Provision.ps1 after V2P may produce the following error message:
The term 'dlpxzfree' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
This occurs because the utility dlpxzfree.exe is not in the PATH. It does not affect the execution or functionality of the script.
This error message will not be shown in a future release of the product.
Description
26227 28456 31134 31135 31136 31137 31220 31221 31223 31226 31461
32266 32268 32269 32342 32290
31142
Security fix
31153
31908
31989
Description
28221
30812, 30576
Replication fixes
30763
Fixed an issue where provisioning a single instance dSource to a RAC target would fail
30617
30450
Fixed an issue where CLI validation of non-sys user fields failed on an existing valid connection string
30412
30366
30161
Fixed an issue where the management stack could run out of memory
29964
Fixed an issue with displaying times and SCNs from the latest archive logs
29960
29905
29850, 30552
29698
28622, 30027
29373
Description
29978
Fixes an issue related to Oracle standby database where datafiles are added during a dSource SnapSync
30109
Fixes an issue where connecting to a VDB (created from a standby dSource) fails when using a non-sys user
30147
Fixed an issue where provision from the last SCN of a dSource (created from a standby database) might fail
30148
30149
Fixes an issue where provisioning may fail when using file mapping with a large number of datafiles with long names
30245
Fixes an issue where the VDB status is shown as unknown on Solaris and HP-UX platforms.
Description
29499
Fixed an issue with SQL Server VDBs not starting automatically following a reboot of the target host.
VDBs are now stopped at 95% of storage capacity and automatically restarted once storage capacity drops below
90%.
dSources will stop pulling new data from sources at 85%. Once the usage goes below 82%, we will resume pulling
data again.
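The thresholds above form a simple hysteresis. A sketch, with thresholds taken from the text but a state machine that is purely illustrative (not the engine's actual implementation):

```python
# Illustrative hysteresis for the capacity thresholds described above:
# VDBs stop at 95% usage and restart below 90%; dSources pause ingestion
# at 85% and resume below 82%.
THRESHOLDS = {"vdb": (95, 90), "dsource": (85, 82)}

def next_running_state(kind, usage_pct, running):
    stop_at, resume_below = THRESHOLDS[kind]
    if running and usage_pct >= stop_at:
        return False          # usage crossed the stop threshold
    if not running and usage_pct < resume_below:
        return True           # usage dropped enough to resume
    return running            # otherwise keep the current state

print(next_running_state("vdb", 96, True))       # False: stopped at >= 95%
print(next_running_state("vdb", 92, False))      # False: stays stopped until < 90%
print(next_running_state("dsource", 81, False))  # True: resumes below 82%
```

The gap between the stop and resume thresholds prevents flapping when usage hovers near a single cutoff.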
29359
Fixed an issue with iSCSI target being offlined due to task abort timeouts
29662
29156
Fixed a problem where you would get exception.oracle.dbc.query.failed during SnapSync if v$rman_configuration
has more than one entry for snapshot controlfile
29050, 29049
29539
Fixed an issue where a running job would not be recorded in job history
29881, 29696
Fixes to address naming and structure of Oracle data files and temp files
30010
Fixed an issue where RAC VDB rollback would fail due to "Failed to apply logs in database recovery"
27633, 29540
29207
Fixed a GUI issue where updating the source database user credential from the dSource cards could give an error
message
29687
Fix to work around Oracle note 387210.1, which restricts the value of MAXLOGHISTORY on Oracle versions 10.2-10.2.0.0 and 11.1-11.1.0.6
29321
Fixed a GUI issue with updating the target principal of an existing replication configuration
29697
Fixed an issue where VDB log retention could fail to delete a log
27478, 27388
Performance improvements
29274, 29275,
Description
29386
29273
Fixed an issue where certain characters in VDB config templates would cause provisioning failures
Description
29301
29286
Description
29100
28707
Fixed an issue with SQL Server LogSync where provisioning needed the stopat to be in the source's
timezone
28474
28962
28904
Provisioning a VDB from a standby should allow the user to specify a non-SYS user
28821
28741, 28742
28466
28870, 28894
28934
Fixed an issue where the management stack could run out of memory
28916
Fixed an issue where the GUI could disable the staging source instead of the linked source
28867
Fixed an issue where the database management screen would display garbled data
28684
Fixed an issue where the GUIs might not handle timezones with half-hour offsets properly
28878
28779
28526
Description
28559, 28050
Bug Fixes
Bug Number
Description
28435
Fixed an issue where the GUI could show an action script error during the provisioning wizard
28364, 28373
28261
SQL Server now supports backup paths which include $ and ' characters
28208
28160, 27881
27953
Fixed an issue where an exception would be raised in some cases when detecting database privileges
27926
27892
Fixed an issue where Delphix would pick the incorrect archive logs, causing provisioning to fail
27827
27789
Monitor SQL Server VDBs to check if new data/log files have been added to non-Delphix storage
27738
Fixed an issue where Environment Management does not show correct version for SQL Server environment
27737
Fixed an issue where environment discovery would not identify disk space problem
27736
Fixed an issue with umask requirements when not using Oracle user
27484
Fixed an issue where VDB enable would fail if the file list changed since the last snapshot
27432, 27386
26951
Fixed an issue where system under extreme load could run out of heap space
Description
28186
Fixed an issue with provisioning from VDB snapshots created in Delphix 2.7.x or earlier
27808
Fixed an issue when upgrading with domain and system users with the same name
Bug Fixes
Bug Number
Description
27810
27808
27770, 27750,
27613
Fixed an issue where log retention on Windows did not free up space
27657
Fixed an issue where ORA-01152 error messages during provisioning would incorrectly display warnings
27636
Fixed an issue when doing initial load from an Oracle 9i database would fail
27624
Fixed an issue where the Delphix Engine could crash while receiving a replication update
27616
Fixed an issue with the SCN End stamp not displaying when taking a snapshot
27595
Fixed an issue where cached browser data could cause incorrect strings to be displayed in the GUI after upgrading a
Delphix Engine
27582
Fixed an issue where resource monitor workers were not removed when restarting the management stack
27530
27492
Fixed an issue where SQL Server pre-provisioning fails if a file is renamed on the source
27449
Fixed an issue where tab navigation skips "Toolkit Path" when adding "Standalone Host" in the "Add Environment" wizard
27445, 27208
Fixed an issue where an initial load does not generate a fault on a NOLOGGING operation
27443
Fixed an issue with not properly checking for X$KCCFE privileges on source databases
27420
Fixed an issue with deleting a namespace after replication failover when doing circular replication
27353
Fixed an issue where provisioning from SQL Server 2005 to SQL Server 2008 would be allowed
27261
Fixed an issue where the GUI would no longer require the email address to be set for delphix_admin
27230
Fixed an issue with the SCN range not displaying correctly on snapshots
26423
Fixed an issue where upgrading the staging instance would not be properly detected
24037
Fixed an issue when multiple SQL files have the same physical file name
Retain the original time zone specified during initial configuration.
Source and Target Environment Issues
If the size of SGA_TARGET is larger than /dev/shm, the administrator should reduce SGA_TARGET in the VDB configuration parameter, and save
a named template for use in provisioning other VDBs.
Alternatively, increase the /dev/shm size in /etc/fstab.
Solution
In some cases, it may be possible to add the TNS_ADMIN to the ssh environment that Delphix Engine uses:
1. Set PermitUserEnvironment to yes in sshd_config.
2. Restart the sshd daemon.
3. Add TNS_ADMIN=<loc> to ~/.ssh/environment for the respective OS user used by Delphix.
Add the IP address of the Delphix Engine to the list of invited nodes in $ORACLE_HOME/network/admin.
Make sure all files in the target path are readable by the OS user given to Delphix Engine.
Remove the Oracle sample schemas from the source database before provisioning VDBs.
Provisioning to a higher SQL Server version if the source is SQL Server 2005
If the source for a VDB is SQL Server 2005, then you can't provision to SQL Server 2008 or 2008R2 directly.
Currently, running the manual recovery script Provision.ps1 after V2P may produce the following error message:
The term 'dlpxzfree' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
This occurs because the utility dlpxzfree.exe is not in the PATH. It does not affect the execution or functionality of the script.
This error message will not be shown in a future release of the product.
Single Quotation Marks (') in File Names and File Paths
We currently don't support single quotation marks (') used in Delphix connector installation paths and database backup file names and paths.
Description
37126
Delphix Engine fails to boot following deferred OS upgrade from 4.0.3.0 or later
36983
restarting a canceled or suspended initial SnapSync does not resume from where it left off
37149
internal metadata database race condition may cause failure during upgrade from 3.1 or 3.2
Description
32231
no input validation for timeflowPoint.location when creating a bookmark results in a server error when rolling back to the bookmark
32232
32233
duplicate Bookmark names for the same dSource time flow can be erroneously created
32457
33449
35602
35605
35780
switchover to standby with Oracle flashback can result in duplicate snapshots displayed in Delphix Engine GUI
35989
36087
36231
36235
36244
source continuity for Oracle dSources (allow SnapSync to continue after source is rolled back)
36284
36412
36462
shutdown of an Oracle VDB with LogSync enabled can cause SCN gaps in timeflow
36497
36736
Description
35961,36089
Delphix Engine metadata can be corrupted when the system is restarted (see technical bulletin)
35554
ASM source datafiles can be deleted when provisioning a VDB back to a source environment (see technical bulletin)
33451
enable the configuration of SNMP TRAP instead of INFORM for Delphix alerts to work around environments where
TRAPs are not acknowledged
34030
35045,35449
fix a possible upgrade issue with Delphix Engines that have SQLServer dSources or VDBs
34220
GUI not rendering sources and groups properly after having popped up an error dialog
34332
cannot delete a VDB template when the template name is too long
35381
faults count on the menu bar does not match the faults count in the active faults list
35466
35491
35540
increase SnapSync policy timeout limit from 24 hours to 168 hours (1 week)
35701
blank error popups sometimes displayed and eventually result in runtime exceptions
35704
GUI does not show error message when VDB provisioning fails due to a mount failure
35712
34888
network outage can cause Oracle V2P job to fail with a server error
34884
34584
Oracle logsync can erroneously start fetching old logs resulting in missing log alerts
35439
Delphix Engine can generate a flood of alert emails if an alert profile contains an invalid email address
34786
34471
34583
the Finish and Back buttons do not work when provisioning a replicated VDB
35400
35402,35488,35489
32735
35312
34766
33717
Oracle VDB provisioning via CLI uses a listener from the wrong environment
34851
Oracle RAC attachSource failing with "snapshot control file must be accessible to all nodes"
35713
add the ability to launch a job via policy with no time limit
35044
DelphixOS Fixes
Bug Number
Description
33924
TCP performance problem causing low throughput for connections traversing one or more routers
35174
Bug Fixes
Bug Number
Description
34716
35012, 35018
Bug Number
Description
32600
32952
32955
33306
33338
33351
33607, 33965
33652
33686
"object already exists" error when failing over appdata with builtin toolkit
33788
Oracle logsync can fail with an internal error after upgrade to 4.0
33837
allow user with 'sudo mount' to be different from owner of provisioned appdata files
33864
non-domain-admin users don't get a prompt back when they issue a DB_DELETE job (last notification does not arrive)
33903
internal error trying to provision appdata to s10 target because id doesn't support the -u option
33904
33937
updating AppData VDB parameters on vFiles card in GUI overwrites password with "******"
33978
need to preserve case when editing and saving VDB config template contents
33989
34058
after the currently selected VDB Template is saved, it should always be auto-refreshed
34059
34064
34073
34075
dSource name is shown instead of VDB name in Refresh VDB confirmation dialog
34388
34427
EBS-app toolkit missing expect logic for db domain name, file system owner and startup
34509
stack-only upgrade
Description
31280
31454
31695
container update notifications not being sent for enabling or disabling of Oracle dSources
32053
32059
cannot add an environment if it contains an Oracle database whose db_unique_name is equal to an existing dSource
32061
32063
32420
fix out-of-range issue when upgrading from 3.2.3.0 or older and SQL Server environments are in different timezones than the
Delphix Engine
32593
32703
32733
32862
32973
32992
expanding a group or container details should not expand any group folder
32994
33002
VDB migration fails when parent's archive logs have been removed
33072
can't start SQL Server VDB after upgrading with the VDB in a stopped state
33129
33209
snapsync can fail with an internal error when linking to an Oracle database on AIX
33251
unable to disable VDB when its database on MSSQL does not exist
33252
add a delayed retry to SQL Server transaction log pickup before generating an alert
33352
33373
33388
33389
Description
31278
31279
31281
31392
31398
fix a problem with RECOVERY_PENDING SQL Server VDBs not being restarted after a target reboot
31459
retention policy should use last change time instead of creation time for calculating snapshot retention eligibility
31648
31649
hostchecker should not query for BCT when Oracle version is 9.X
31650
31687
analytics screen disk I/O graph loses its lower summary row when selecting a specific latency range
31728
31754
disabling or enabling a system user is not reflected in the GUI until the browser is refreshed
31825
cannot manage both Postgres and SQL Server dSources from the GUI
32042
32110
'excludes' and 'followSymlinks' properties of AppDataLinkParameters do not appear in CLI while linking
32115
32129
32151
32188
errors during the export phase of cross-platform provisioning are not displayed
32227
reduce CPU impact of SQL command run on Oracle targets used to discover database user privileges
32250
32305
32306
32362
32376
SQL Server snapshot corruption occurs if a source is disabled before Delphix Engine upgrade
32413
32435
failure provisioning an Oracle VDB if the source contains a datafile with spaces in the filename
32508
cross-platform provisioning experiences internal error if user script output is less than 256 characters
32512
32525
32526
32527
32528
navigating on the analytics timeline erases graph data from the screen
32537
32614
32644
32704
scalability issue in the GUI that caused "Flash plugin not responding" popups in the browser
32766
32858
32859
32873
32875
Description
32755
(used to run Oracle and Postgres hooks) have been re-architected to use the protocol. This places new network connectivity requirements
on the product and the hosts that interact with Delphix Engines. See the Network and Connectivity Requirements section of the
documentation for details.
VDB Pre/Post-Scripts have been superseded by the new Hook Operations feature. Any post-scripts configured on existing VDBs will
automatically be converted to Configure Clone hook operations as part of the upgrade to 4.0. Pre-scripts are no longer supported and will
be removed on upgrade.
Delphix Engine upgrade images are now signed by Delphix, and signatures are verified prior to upgrade. This ensures that only updates
authorized by Delphix can be applied to a Delphix Engine.
The Oracle and PostgreSQL VDB provisioning wizard includes a screen for configuring user-defined hooks to be run during specified
VDB operations. See the documentation for further details.
A summary of storage capacity metrics is now displayed on the main screen after login.
Most of the performance monitoring functionality that was previously accessible via the Performance screen has been re-implemented
and moved to the new Performance Analytics screen.
New advanced data management options are available from the Oracle dSource wizard. See the documentation for further details.
Policies may now be expressed using cron format. The Delphix Engine uses expressions compatible with the Quartz CronTrigger
scheduler.
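For illustration (generic Quartz examples, not taken from the product documentation), Quartz CronTrigger expressions include a leading seconds field, and exactly one of the day-of-month/day-of-week fields must be "?":

```shell
# Quartz cron fields: seconds minutes hours day-of-month month day-of-week
0 0 2 * * ?          # every day at 02:00
0 30 3 ? * MON-FRI   # weekdays at 03:30
```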
New VDB Configuration Templates GUI screen.
Doing cross-platform provisioning of a VDB from a replicated dSource fails with an internal error. To work around this, create a VDB of the
replicated dSource, and do a cross-platform provision of the VDB.
VDB Refresh Takes a Long Time
The time taken to refresh a cross-platform provisioned VDB is similar to the time taken for cross-platform provisioning. This is because the
refresh process re-provisions the VDB, including much of the cross-platform provisioning logic. We are investigating how to improve this in a
future release.
Detaching an Application Data dSource fails with an internal error. There is no workaround.
Oracle RAC Environments Not Supported
If any dSource is disabled prior to upgrade and enabled after upgrade, the following issues are seen:
Validated sync might fail with a fault stating that the most recent transaction log failed to be restored.
Even if validated sync succeeds, provisioning a VDB from a snapshot after the upgrade will fail with an internal error as the VDB cannot
be recovered. Provisioning from any snapshot taken prior to upgrade continues to work.
If a dSource is disabled after the upgrade, the subsequent enable can fail with an error stating that the dSource could not be enabled as
the corresponding staging source could not be enabled.
This can be resolved by doing a sync on the dSource after the upgrade.
Issues With Upgrades From Delphix 3.2.3.0 or Older
If the source host and the Delphix Engine are in different timezones, provisioning VDBs after upgrade from snapshots taken before the upgrade
may fail with timestamp out-of-range errors. Provisioning from snapshots taken after the upgrade works correctly.
PostgreSQL Issues
Replication is not Supported
There are some problems associated with provisioning a VDB from a replicated PostgreSQL dSource. Replication is not yet fully supported with
PostgreSQL.
Staging Environment Reboot Not Handled Correctly
If a staging environment is rebooted, the pg_receivexlog process starts writing log files to the local filesystem instead of the NFS directory
mounted from Delphix. This results in missing logs, and the inability to re-enable the staging environment after it has been disabled.
The output of user scripts is not included in the job information unless the script fails (exits with a non-zero exit code). This can make it difficult to
diagnose problems with scripts if they are doing something unexpected but not failing.
Statistics for network interface bytes/sec and packets/sec occasionally include invalid negative values. This is exhibited in the GUI as large spikes
in the respective graphs. This has only been observed on systems with multiple network interfaces.
Other Issues
Spurious Job in the Job History
When the Delphix Engine starts up, a job is always run with the summary, "Restore the application containers to a consistent state in the
event of a failure during an operation." This job is spurious, does not affect any system state, and can safely be ignored.
Description
DLPX-34837
DLPX-35993
DLPX-35671
Check for Oracle bug 13075226 fails on 11.2.0.3 with patch installed
DLPX-35672
Oracle snapsync prescript fails if the script returns successful status but stderr has content
DLPX-34581
DLPX-35855
MSSQL provisioning with LogSync creates VDB with recovery model of FULL
DLPX-35809
DLPX-35872
DLPX-35746
DLPX-35667
DLPX-36015
DLPX-35668
ASE GUI does not set loadBackupServerName when Remote Backup Server selected
DLPX-34958
DLPX-34824
RFE: see the template name on the back of the VDB card with a pencil for edit
DLPX-35888
DLPX-32228
After applying a policy to a VDB or Group, the server needs to notify the client of the changes
DLPX-35334
Waiting SNMP listener threads caused Delphix to run out of memory and hang
DLPX-32792
DLPX-35931
DLPX-35358
DLPX-32985
DLPX-34854
DLPX-29991
hs_err_pid files from java crashes are removed when stack restarts
DLPX-29992
DLPX-35484
DLPX-34865
DLPX-35948
Description
DLPX-32566
Can't create Jet Stream branch with latest data from the template
DLPX-32865
DLPX-32585
DLPX-35304
DLPX-31885
Creating a Jet Stream branch at now doesn't include latest changes on Oracle VDBs
DLPX-34757
DLPX-34649
DLPX-34191
DLPX-34414
Jet Stream only tracks initial timeflows of data template's data sources
DLPX-34390
Refactor Jet Stream time drift calculation for engine time API
DLPX-32565
DLPX-34117
DLPX-32958
DLPX-34620
Oracle SnapSync failed for read-only datafile on dSource; regression introduced by the fix for 34064 (Remove SnapSync reliance on
RFN)
DLPX-34396
Oracle SnapSync job stuck at zero percent complete, client Java in lwp_cond_wait
DLPX-31765
Oracle SnapSync fails when database has '_fix_control string 5909305:ON' set to non-null value
DLPX-33317
DLPX-33122
Oracle 12c failed vPDB provision or failed vPDB enable due to open vPDB failures not being handled; a partially provisioned vPDB is
left around
DLPX-33121
DLPX-32991
DLPX-33320
DLPX-34335
When archived logs are in recovery area, directories can be created with incorrect permissions.
DLPX-33123
DLPX-34622
DLPX-34011
DLPX-34009
DLPX-33198
DLPX-33294
DLPX-32519
Refresh/Provision can fail for MSSQL during standby phase if exclusive lock fails
DLPX-32572
MSSQL Provisioning should only switch to standby and back when doing point-in-time restores
DLPX-32571
MSSQL Provisioning should only mount source-archive when doing a point in time restore
DLPX-34423
DLPX-34010
Able to delete Primary User when environment is an AG cluster and no databases are linked
DLPX-34007
DLPX-31383
We should inform the customer when we detect that the iSCSI initiator is not running
DLPX-31081
Continuation of "MSSQL Backup set appears to have been deleted for a snapshot"
DLPX-34808
DLPX-28057
Error message when ppt MSSQL instance owner can't read backup location can be improved
DLPX-33049
Workaround from 38187 leaves MSSQL VDB in restoring state after disable/enable
DLPX-34084
Able to break MSSQL provisioning by connecting to VDB before provisioning had completed
DLPX-34712
DLPX-34005
DLPX-34728
DLPX-34650
DLPX-31395
DLPX-32894
DLPX-33001
DLPX-34793
Refreshing environment after target server rebuild results in spurious connection error
DLPX-34044
DLPX-34214
DLPX-33303
DLPX-35323
DLPX-33248
DLPX-34662
DLPX-31685
DLPX-34658
DLPX-34640
DLPX-32665
DLPX-34745
DelphixOS Fixes
Bug Number
Description
DLPX-31490, DLPX-34665
DLPX-34215
Description
DLPX-32834
V2ASM for RAC database should use SRVCTL stop instead of SQLPLUS shutdown abort
DLPX-32832
move-to-asm.sh fails with "Use Oracle install user to run this script" error
DLPX-32666
DLPX-31381
DLPX-33045
DLPX-32726
fix internal error while discovering MSSQL cluster environment backup software
DLPX-32518
DLPX-31978
DLPX-31945
MSSQL discovery does not detect Redgate backup software when the Redgate GUI client is not installed
DLPX-32684
getting ASE instance ports fails when client character set is different from server character set
DLPX-31827
DLPX-33262
DLPX-33281
JetStream should not come up in IE7 mode when actually in IE9 compatibility mode
DLPX-32913
hard to see the pencil to switch from scn to level based backups on back of dSource card
DLPX-32821
DLPX-32806
DLPX-32607
DLPX-32284
provide a warning banner on the login page to warn people when they are using too old a browser
DLPX-32209
DLPX-30951
GUI can get confused during storage configuration resulting in spurious "in use" error
DLPX-29317
DLPX-33293
standardize long, float, and double API types into integer and number
DLPX-33292
DLPX-34069
DLPX-34051
DLPX-33190
DLPX-33062
fix internal error in incremental replication on MSSQL / ASE due to "dataset is busy"
DLPX-32388
fix internal error in TransactionalFilesystemManager during stack startup when deleting deadbeats
DLPX-32672
DLPX-32824
Description
39605
remove instrumentation which causes benign memory free to crash the management server
DelphixOS Fixes
Bug Number
Description
39598
Description
39193
DelphixOS Fixes
Bug Number
Description
39198
Bug Number
Description
38548
38046
Hook execution not generating job events nor updating completion percentage
38007
38200
37893
Need to verify compatibility before plugging Oracle 12c vPDB into a target CDB
37369
37712
Oracle provisioning failed while creating file under the datafile mount
37697
Internal error during initial SnapSync of Oracle 12c PDB when environment user is changed from the environment
37817
Logs needed for Oracle snapshot: compare deleted logs on dSource to missing logs in snapshot
38364
38436
38219
38470
SQL Server provision fails when source DB was in read-only mode when backed up
38201
Could not redo log record when sync'ing SQL Server dSource
37756
Failure during refresh where SSMS cannot drop database because it is currently in use
37663
38361, 38820
38006
38316
AppData SnapSync jobs stuck at 0% when the connector does not start
37997
37460
38730
38488
37981
38329
38480
38489
38490
38194
37532
Bumping the API version in the PAM module shouldn't require an OS upgrade
38162
38772
38670
37883
Jet Stream is not clearing the previous segment field of a segment when that object is deleted
37749
Add a link to Jet Stream Capacity Information KB article on the Capacity page
36798
37884
Generating a support bundle may use a non-admin user, resulting in incomplete bundle data
38098
38203
SNMP trap varbind data is out of order which confuses the Tivoli Netcool SNMP implementation
38287
37998
DelphixOS Fixes
Bug Number
Description
37965
Storage LUNs failing to expand, although visible in "Sysadmin > Capacity" screen
38349
Description
35193
Provisioning fails with "Failed to rename datafile" when dSource has no valid tempfiles
37541
Linking and provisioning PDBs on SPARC fails; provision against PDB into SPARC CDB hangs
37618
36877
36917
36899
36292
Need to leave the auxiliary CDB around when PDB provisioning fails
37818
37162
36780
36778
36752
36289
37062
36611
37055
SnapSync hangs after archived log backups when LogSync is disabled and no archived logs need backup
36120
Check and set umask before switching archive logs as part of SnapSync
37668
MSSQL dSources in simple mode not able to pull in new full backups
36379
MSSQL provisioning fails when requested from API version 1.1.1 or lower
37158
SAP ASE warning is not sent if "Discover SAP ASE" option was not set
37157
Log backup for SAP ASE changes the snapshot time of the first backup when not required
36846
Internal error during sync on a replicated SAP ASE dSource after failover
36640
Change JDBC driver to jConnect for SAP ASE databases for better progress reporting
36466
37297
37242
Fix V2P failure due to "Could not change permissions for file"
37476
37437
36797
37054
37493
37492
37371
37060
37044
Add tool to help support engineers create host privilege elevation profiles
37236
37494
Creating a Jet Stream bookmark with LATEST_TIMESTAMP doesn't work as intended for Oracle dSources
37043
Deleting Jet Stream container can leave mount points with stale file handles
37357
Jet Stream bookmark at now does not actually create a bookmark at now
37299
36766
DelphixOS Fixes
Bug Number
Description
37402
Description
36710
36178
36565
37207
End timestamp for a log fetched by LogSync in Archive Redo mode can be incorrect
36256
35789
36735
36982
SnapSync resumed initial load will back up files that have already been backed up
36796
36175
36398
36109
36745, 36936
37240
37239
Depending on the order of datafiles retrieved from database, XPP will fail with internal error
36624
36519
36054
36631
35604
35878
35853
35323
NPM-enabled VDBs will not be mounted after Delphix reboots if the VDB was disabled earlier
36935
JVM hung in forkAndExec on Solaris host due to deadlock in PKCS11 crypto library
36174
36413
36567
36060
PKCS11 consumes too much native memory on Delphix for SSL sessions
36719
35621
Upgrading Delphix with an LDAP server using MD5 authentication makes LDAP unconfigurable
36146
36900
36960
36974
36975
36613
37125
36850
Upgrading stack-only from 4.1 to 4.1.1 fails because PostgreSQL times out
37122
35988
DelphixOS Fixes
Bug Number
Description
37219
Description
DLPX-38636
DLPX-38542
DLPX-38774
DLPX-38773
DLPX-39109
NPE in FaultManagerImpl#postErrorEvent
DLPX-38970
DLPX-38955
DLPX-38954
DLPX-38953
DLPX-38885
AppData AIX mount options include cio and intr when not needed
DLPX-38884
DLPX-38874
AppData NFS mounting options should include "noac" when Additional Mount Points in use
DLPX-38861
Upgrade to 4.2.4.1 gets dSource out of NPM mode but associated filesystems are not mounted - zfs state
DLPX-38657
DLPX-38775
DLPX-38368
Race between retention and settings of previousTimeflow during refresh leads to NPE in replication
DLPX-38219
DLPX-37701
DLPX-38563
Description
DLPX-38490
Setting 'Data Operator' and 'Reader' privileges via GUI fails after upgrade
DLPX-38462
DLPX-38442
DLPX-38408
GUI - new user privileges dropdown menu can be glitchy on Chrome and IE
DLPX-38407
DLPX-38406
GUI - AppData vFiles with 'Reader' privileges still has a 'snapshot' button
DLPX-38405
DLPX-38387
DLPX-38386
DLPX-38356
DLPX-38272
DLPX-38243
Create GUI for new data and read only user roles
DLPX-38235
DLPX-38195
DLPX-38137
Bump API version to 1.5.3 for 4.2.4 after exposing device removal
DLPX-37875
DLPX-37864
DLPX-37845
DLPX-37831
DLPX-37768
DLPX-37698
DLPX-37676
DLPX-37675
DLPX-37637
Installer might get stuck without error log instead of running the silent installer
DLPX-37548
DLPX-37517
DLPX-37492
DLPX-37465
windows connector cannot be installed on hosts that do not have mssql installed
DLPX-37371
DLPX-37135
DLPX-37073
DelphixOS Fixes
Bug Number
Description
DLPX-38193
DLPX-37766
Description
DLPX-38031
Description
DLPX-37687
DLPX-37667
DLPX-37595
DLPX-37499
Null pointer error when viewing admin app due to free version check
DLPX-37458
DLPX-37422
Delphix Express
DLPX-37420
DLPX-37414
DLPX-37412
DLPX-37411
The maximum number of entries in the pie graph on usage overview page should be 10
DLPX-37382
DLPX-37378
[IE-11] drop-down menu for owner on container creation page is not visible
DLPX-37376
DLPX-37375
DLPX-37374
DLPX-37373
DLPX-37372
DLPX-37307
DLPX-37306
DLPX-37302
API to map REFRESH, RESTORE, RESET operation to the time for the previous snapshot
DLPX-37300
Jet Stream UI needs to use new API to get time for last tickmark prior to REFRESH, RESET, RESTORE
DLPX-37282
DLPX-37279
DLPX-37277
DLPX-37256
DLPX-37251
DLPX-37230
DLPX-37205
After upgrading to 4.2.1.1 VDB configuration template parameters do not display during provisioning
DLPX-37201
DLPX-37196
DLPX-37195
DLPX-37193
After upgrade to 4.2 stack doesn't start up because a fault has no message associated with it
DLPX-37181
DLPX-37152
DLPX-37151
DLPX-37150
DLPX-37147
DLPX-37146
DLPX-37145
DLPX-37085
DLPX-37078
DLPX-37053
DLPX-37042
DLPX-36883
Oracle provision scripts affected adversely by customer turning SET TIMING ON in their SQLPLUS init file
DLPX-36869
DLPX-36850
DLPX-36780
DLPX-36774
DLPX-36743
VDB provision takes long time in doRenameDatafiles, dSource has ASM datafiles, target host does not have ASM
DLPX-36660
DLPX-36632
DLPX-36521
DLPX-36441
DLPX-36299
DLPX-36298
DLPX-36164
DLPX-36001
Oracle Validated Sync fails with ORA-01157 during post provision query
DLPX-35932
iSCSI CHAP
DLPX-35892
DLPX-32667
DelphixOS Fixes
Bug Number
Description
DLPX-37443
DLPX-37430
Description
DLPX-37083
DLPX-37254
DLPX-36921
UEM raising faults for permissions on CRS home for single instances
DLPX-37506
Description
DLPX-36658
DLPX-37040
DLPX-37032
DLPX-37025
DLPX-36944
DLPX-36896
DLPX-36894
DLPX-36873
DLPX-36867
NPE(s) after upgrade to 4.2.1.1 preventing Faults from being shown in GUI
DLPX-36781
DLPX-36768
NPE in MSSqlPreProvisioningWorker#raiseFault
DLPX-36723
DLPX-36721
DLPX-36715
DLPX-36714
DLPX-36699
DLPX-36685
DLPX-36682
MongoDB should be restarted periodically to prevent it from consuming too much memory
DLPX-36671
DLPX-36659
DLPX-36656
DLPX-36616
DLPX-36592
DLPX-36571
DLPX-36569
After upgrading to 4.2.1.0 debug log files roll over in 3 hours due to logging MDS queries
DLPX-36545
DLPX-36524
After upgrading to 4.2.1.0 debug log files roll over in 3 hours due to logging MDS queries
DLPX-36488
DLPX-36483
Support ASE 16
DLPX-36429
DLPX-36418
If host IP address exists in duplicate environments (RAC and standalone), disable of one prevents refresh of other
DLPX-36413
DLPX-36407
DLPX-36403
DLPX-36352
DLPX-36337
DLPX-36300
dSource card layout allows drawing confirmation buttons out of visible area
DLPX-36287
DLPX-36281
NPE in test_validate_xpp_with_invalid_timeflow_point
DLPX-36279
AppData staging should not allow you to choose an incompatible staging environment
DLPX-36277
Windows Appdata staging dsource card contents don't fit within box
DLPX-36247
DLPX-36190
DLPX-36182
DLPX-36128
hostchecker.sh does not extract and use bundled jdk when it should
DLPX-36108
Oracle 12c - PdbPlug and PdbOpen exception handling made wrong assumption, causing incomplete clean up after provision
failure
DLPX-36079
stack on upgraded replication target does not come up after vm is unregistered and reregistered
DLPX-36021
DLPX-35992
DLPX-35985
DLPX-35983
NPE in MSSqlPreProvisioningWorker.java
DLPX-35935
DLPX-35934
Pages scroll bar only displays up to the first 4 pages when dSource is selected
DLPX-35933
Long MSSql LSNs create scroll bar on dSource and VDB snapshots
DLPX-35705
DLPX-35638
DLPX-35559
DLPX-35524
DLPX-34562
startLiveSourceResync hang against multiple Live Source almost at the same time
DLPX-34557
DLPX-34518
DLPX-32668
DLPX-35883
DelphixOS Fixes
Bug Number
Description
DLPX-36529
DLPX-35303
DLPX-36416
DLPX-36511
DLPX-36189
Description
DLPX-36535
Bug Number
Description
DLPX-36355
Engine becomes slow after storage migration and removing device from ESX
DLPX-36349
DLPX-36348
Capacity API calls get extremely slow with large number of snapshots
DLPX-36347
DLPX-36336
DLPX-36321
DLPX-36319
DLPX-36262
DLPX-36259
Trying to edit the database user for a vPDB fails with "virtual database is enabled"
DLPX-36251
DLPX-36236
DLPX-36233
DLPX-36227
Updating the env user for an AppData Staging dsource leads to crash
DLPX-36221
DLPX-36215
DLPX-36214
DLPX-36196
DLPX-36195
DLPX-36194
DLPX-36186
DLPX-36173
DLPX-36168
DLPX-36115
DLPX-36098
DLPX-36082
"read" in dlpx_pfexec script may not accept empty input from /dev/null, need terminating "\n"
DLPX-36071
Pause and resume of Oracle V2P during database recovery phase results in crash dump
DelphixOS Fixes
Bug Number
Description
DLPX-36111
DLPX-36017
Description
DLPX-36165
DLPX-36163
DLPX-36162
DLPX-36135
DLPX-36085
DLPX-36073
DLPX-36052
DLPX-36042
DLPX-36018
DLPX-36014
DLPX-35963
NPE in TrileadC3ConnectionImpl.java
DLPX-35957
DLPX-35799
DLPX-35691
DLPX-35646
DelphixOS Fixes
Bug Number
Description
DLPX-36111
DLPX-36017
Description
DLPX-35669
ASE GUI does not set loadBackupServerName when Remote Backup Server selected
DLPX-35856
DLPX-35867
DLPX-35938
DLPX-35649
Interim solution for bug DLPX-30538: Check for Oracle bug 13075226 fails on 11.2.0.3 with patch installed
DLPX-35711
Adding more than one hook operation template is slow and the templates window doesn't update
DLPX-35709
DLPX-35758
DLPX-35954
DLPX-35787
DLPX-35979
Windows Create vFiles wizard does not allow PowerShell scripts for Hooks
DLPX-35871
DLPX-35710
DLPX-35757
Cluster VDBs failing on vDTully30s and 32s when lower numeric Node is owner of the SQL instance
DLPX-35885
DLPX-35953
DelphixOS Fixes
Bug Number
Description
DLPX-35826
Description
DLPX-35522
DLPX-35549
Windows AppData Replication tests fail with 'Cannot untar tar file'
DLPX-35557
DLPX-35560
DLPX-35561
flex doesn't show up after rebuild delphix engine with localized properties files
DLPX-35562
MSSQL cluster VDB provision/export must change disk signature on non-cluster host
DLPX-35564
test_data_container_disable_enable fails Create Jet Stream data container for Appdata on Windows
DLPX-35565
java.lang.StringIndexOutOfBoundsException: String index out of range: -1 when adding live source to dsource with altered
log_archive_config
DLPX-35566
DLPX-35589
DLPX-35614
DLPX-35616
DLPX-35631
DLPX-35680
DLPX-35703
DLPX-35731
Show the template name on the back of the VDB card with a pencil for edit
DLPX-35738
Snapsync fails with internal error when offline tablespace is made online
DLPX-35750
User exception in the environment monitor check when VDB is on a clustered SQL instance
DLPX-35751
Querying iSCSI LU number should take iSCSI view into account for MSSQL cluster VDBs
DLPX-35752
MSSQL cluster VDB provision/export must change disk signature on non-cluster host
DLPX-35753
DLPX-35760
DLPX-35761
Horizontal scroll bar on back of vFiles card when viewing hook scripts
DLPX-35762
Hook operations gui for empty vfiles should not have before and after refresh hooks
DLPX-35763
AppData linking wizard summary screen has scrollbars with a long 'path to exclude'
DLPX-35780
ASE: Temporary VDB used by V2P is left around after V2P completed
DLPX-35784
DLPX-35810
DLPX-35811
StorageUtilTest#getSnapshotCapacityBucketsPolicyOrManual failed
DLPX-35823
DLPX-35824
Limit file printing to one per line in the datafile info message during snapsync
DLPX-35829
DLPX-35832
DLPX-35842
DLPX-35848
MSSql provisioning with LogSync creates VDB with recovery model of FULL
DLPX-35853
DLPX-35863
DLPX-35880
DelphixOS Fixes
Bug Number
Description
DLPX-35797
A new feature has been added to the CLI for showing and fetching missing logs on a timeflow. See TimeFlow Patching for more
information.
The ability to ignore persistent diagnostic faults and to mark all active faults as resolved has been added. See System Faults for more
information.
VDB refresh and rewind operations can now be undone.
The queries run against source databases by Oracle LogSync have been made more efficient and buffered writing has been added to
improve LogSync's write performance.
EBS support has been expanded to include EBS 12.2 and EBS 11i.
The historical capacity data API has been augmented to allow obtaining capacity data at arbitrary intervals.
Database config templates can be associated with a repository and a container such that any time the data in the container is deployed
on the associated repository, we fall back on the config template if no template has been explicitly specified. This feature can be used to
enable Oracle validated sync on a staging environment that is under-equipped relative to its source. See Provisioning Oracle VDBs: An
Overview#RepositoryTemplates for more information.
Description
DLPX-41399
DLPX-41649
DLPX-40390
DLPX-41936
DLPX-41582
Delphix can't find the ASE dump file when dumping to device with compression syntax
DLPX-39767
DLPX-40041
DLPX-41525
DLPX-40350
Cannot provision Oracle VDB from source with datafile names that only differ by a space char at the end.
DLPX-41195
DLPX-40458
doRenameDatafiles.sh in AIX with /bin/sh fails with out of memory error when database has 12K+ datafiles
DLPX-41570
DLPX-41569
[Gonzales] Turn off fault list and user profile UI until needed
DLPX-41140
DLPX-40686
DLPX-41209
If the user name of the owner of the dataserver process is too long, ps outputs the uid, causing add of dSource to fail with internal error
DLPX-41408
Provision against datafile with more than one space at the end causes provision to fail
DLPX-41402
V2P against a dSource with an extra space at the end of a datafile name got a server restart
DLPX-41411
DLPX-41650
DLPX-41349
DLPX-41581
DLPX-41373
DLPX-41619
DLPX-41629
DLPX-41490
ase backup file discovery croaks on hidden subdirectories with identical files
DLPX-41659
DLPX-41679
DLPX-41681
Provisioning from snapshot that does not exist leads to NPE instead of Delphix Error
DLPX-41696
DLPX-41726
DLPX-41730
DLPX-41762
Delphix can't find the ASE trans dump file with compression syntax when log sync is enabled
DLPX-41820
Description
DLPX-41316
DLPX-41256
TCP stats collection causes analytics compression to consume all of the stack's memory
DLPX-41255
DLPX-41224
DLPX-41211
DLPX-40616
Bump API version to 1.6.1 for 4.3.3.0 after introducing a preRollback hook
DLPX-40601
DLPX-40600
DLPX-40591
DLPX-40576
DLPX-40575
DLPX-40568
Operation Durations do not appear unless "Create Branch" operation has occurred
DLPX-40546
DLPX-40523
DLPX-40512
DLPX-40473
DLPX-40460
DLPX-40448
DLPX-40372
DLPX-40349
DLPX-40323
DLPX-40315
DLPX-40289
DLPX-40270
DLPX-40263
DLPX-40210
DLPX-40204
On a resumed initial backup if many datafiles need backup, the backup command is too long and causes RMAN to fail
DLPX-40194
Weekly operation counts and durations stop working beyond one week
DLPX-40129
DLPX-40125
DLPX-40115
DLPX-40066
DLPX-40043
DLPX-40042
DLPX-40036
Appdata virtual source status does not automatically resolve when status is fixed
DLPX-39511
MSSQL virtual source enable after upgrade failed saying primary database file is incorrect
DLPX-38893
DLPX-37832
attachsource on 4.2 now requires postSync/preSync parameters by default, and they are confusing to set.
Bug Number
Description
DLPX-40142
DLPX-40231
4.3.x does not properly check for minimum supported upgrade version
Bug Number
Description
DLPX-40092
DLPX-40064
DLPX-40032
DLPX-40031
DLPX-40030
DLPX-40004
DLPX-39957
DLPX-39939
DLPX-39886
DLPX-39865
DLPX-39714
DLPX-39703
CREATE_CONTROL_FILE_ERROR in V2P/DB_EXPORT
DLPX-39568
DLPX-39540
AppData vFiles card boolean sliders are too long and card contents glitch and disappear
DLPX-39531
DLPX-39526
DLPX-39518
DLPX-39512
DLPX-39503
DLPX-39489
ASE ValidatedSync rollback logic should attempt to use UNMOUNT before falling back to DROP DATABASE
DLPX-39458
DLPX-39440
DLPX-39436
CLI objname.js should only list APIs that are visible to the user
DLPX-39413
Don't leak notification channels when creating new APISessionDO for existing HttpSession
DLPX-39383
DLPX-39342
DLPX-39237
DLPX-39209
DLPX-38889
DLPX-40152
DelphixOS Fixes
Bug Number
Description
DLPX-39572
DLPX-39179
DLPX-39167
Bug Number
Description
DLPX-39385
DLPX-39384
DLPX-39358
CLONE - Configs with toolkit defined params should not be manually created or updated
DLPX-39352
Upgrade to 4.2.4.1 gets dSource out of NPM mode but associated filesystems are not mounted - mds state
DLPX-39351
Upgrade to 4.2.4.1 gets dSource out of NPM mode but associated filesystems are not mounted - zfs state
DLPX-39350
xpp validation fails when delphix database user does not have 'select any dictionary' privilege
DLPX-39348
DLPX-39317
DLPX-39306
DLPX-39277
DLPX-39274
Failure in forceSendReceiveTest
DLPX-39269
Existing AppData repositories in 4.2 and upgrade to 4.3 leads to duplicated repositories
DLPX-39249
Race between serialization point becoming inactive and reaper checking for holds
DLPX-39239
DLPX-39195
Initial setup can't proceed past Storage Setup or Setup Summary screens
DLPX-39114
Stack crashes when trying to create more than 800 worker threads
DLPX-39024
Replication fails with LDAP error on target, but user auth works
DLPX-38819
DLPX-38818
DLPX-38712
DLPX-38703
DLPX-38699
EBS appsTier vFiles GUI card contents can overflow with a long INST_TOP value
DLPX-36278
Windows Appdata staging provision wizard slightly cuts off content on the right
DLPX-39528
DLPX-39399
DLPX-39482
DLPX-39502
DLPX-39522
Bug Number
Description
DLPX-39224
DLPX-39200
NPE in ObjectReaperTest
DLPX-39198
DLPX-39174
DLPX-39155
DLPX-39147
Finding the first- and latest backup sets for an mssql timeflow grows very slow over time
DLPX-39138
DLPX-39134
DLPX-39127
DLPX-39107
Got Exception During Linking/SnapSync a dSource on MySQL 5.7 Installation with GTID Enabled
DLPX-39105
DLPX-38994
network analytic code is making too many DNS queries, blocks dtrace reading threads
DLPX-38993
DLPX-38872
Click Next button from Add dSource but did not go to next screen
DLPX-38858
DLPX-38853
DLPX-38851
DLPX-38714
DLPX-38705
Restoration Dataset sourceConfigs should be filtered from environment page and dSource wizard
DLPX-38701
DelphixOS Fixes
Bug Number
Description
DLPX-39146
Bug Number
Description
DLPX-39088
DLPX-39080
DLPX-39078
Delphix Express Update phone home to call home an hour after stack startup if first time or if it hasn't been a week
DLPX-39003
DLPX-38997
DLPX-38969
DLPX-38835
DLPX-38716
DLPX-38713
DLPX-38696
MSSQL VDB refresh can fail trying to set recovery mode to 'UNKNOWN'
DLPX-38693
DLPX-38691
DLPX-38686
unrevert DLPX-28695 sql upgrade scripts need to be valid HyperSQL and PostgreSQL at the same time
DLPX-38673
Unmount and unexport unused LUNs for mssql staging and target dbs
DLPX-38672
Bug Number
Description
DLPX-38959
DLPX-38906
DLPX-38877
DLPX-38873
AppData NFS mounting options should include "noac" when Additional Mount Points in use
DLPX-38852
java.lang.NullPointerException
com.delphix.appliance.node.webapp.ApplicationInitializer.onStartup(ApplicationInitializer.java:218)
DLPX-38842
DLPX-38815
DLPX-38807
DLPX-38771
DLPX-38711
AppData linking wizard summary screen has scrollbars with a long 'path to exclude'
DLPX-38707
Provision vFiles wizard Target Environment and Summary views don't show scrollbar with many dynamic params
DLPX-38697
DLPX-38681
DLPX-38668
DelphixOS Fixes
Bug Number
Description
DLPX-38822
Delphix VMs configured with DHCP networking default fail to configure network interfaces if no DHCP server is present
DLPX-38683
Delphix now uses CHAP authentication to secure iSCSI connections, eliminating the possibility of unauthorized connections. For
more information, see iSCSI Configuration.
DSP integration with SOCKS leaves the firewall in control of applications and provides a clean connection across a firewall for data
transfer. For more information, refer to Configuring Network in Replication.
Delphix now supports SAP ASE on the AIX operating platform. Customers using ASE on AIX can now integrate Delphix with this
platform.
Delphix now fully supports SAP ASE version 16 with this update.
Delphix now supports PostgreSQL 9.3 and 9.4 on supported operating systems.
Delphix now provides support for the AWS GovCloud region.
Users can customize the redo log size while provisioning a VDB and can disable archive log mode. This improves VDB provision time
and runtime performance.
Duplicate data source names are no longer allowed in JetStream. Existing duplicate names will be made unique on upgrade.
JetStream users can now go back to the last snapshot before a REFRESH, RESTORE, or RESET operation.
Oracle 12c databases with APEX will no longer cause Unix-to-Linux validation to fail.
Cross-Site Request Forgery (CSRF) headers are now required on all browser requests. This is handled automatically by the Delphix GUI.
If you see a "403 Forbidden" error you may need to refresh the page or clear the browser cache.
The User Roles for accessing and viewing Delphix objects have changed. Please see User Roles for more details.
The IO Report Card has been modified to include IOPS and throughput (MBps), along with avg/min/max/stddev latency. For more
information, see the IO Report Card documentation.
Ignored faults no longer generate email notifications. Please see Faults for more information on fault handling.
A warning will now be raised if an MSSQL Server source changes its recovery model.
Integrated Masking
Masked VDB Provisioning
You can now create masked copies of data at VDB provision time, using masking jobs defined on the masking engine that run when you
provision or refresh the VDB. It is now even easier to mask copies of production data and deliver secure data across teams. From one
streamlined workflow, admins can define how and what needs to be masked, who can access the data, and distribute that masked data.
For additional information about masked VDB provisioning, see Provisioning Masked VDBs.
Selective Data Distribution
You can now replicate masked data directly to a target Delphix engine, while ensuring unmasked sensitive
data does not leave the production site. This feature is critical for implementing a hybrid cloud deployment in which you want only
masked data in the cloud, as well as other cases in which you want only masked data in target systems, such as offshore QA and outsourced
analytics. For more information about selective data distribution, see Selective Data Distribution Overview.
DB2 Support
DB2 LUW
DB2 LUW support is available for single-machine, single-partition databases on versions 10.1 and above, for customers running AIX 6.1+
and Red Hat 6.5+. For more information about DB2 LUW, see DB2 on Delphix: An Overview.
Technical Improvements
UX Change
Faster Start
For users with a large number of databases, application startup time will be significantly faster.
ZFS Improvements
Compressed ZFS Send/Receive
Performance of replication across a WAN (for example, to the cloud) is now improved with send stream pre-compression. This will lower CPU use
and improve bandwidth in cases where CPU performance was a bottleneck, or where compression was not previously enabled for replication. All
replications are now sent compressed so there is no longer a "compressed" checkbox in the replication UI. There is no additional CPU cost,
because the data is compressed when it is first written, rather than as it is being replicated. Reported replication throughput may be lower
because the amount of compressed data sent is reported, rather than the amount of uncompressed data. For more information, see
Configuring Replication.
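Delphix manages replication internally, but the underlying ZFS capability can be illustrated with the stock zfs utility. The dataset, snapshot, and target names below are placeholders, and the pipeline is printed rather than executed because it requires a live pool:

```shell
SNAP='dpool/dsource@daily-1'   # placeholder dataset and snapshot names
# -c/--compressed sends blocks as they are stored on disk, so data that was
# compressed at write time is not decompressed and recompressed in transit.
echo "zfs send -c $SNAP | ssh target zfs receive dpool/replica"
```

The savings come from skipping recompression: the stream carries the on-disk compressed blocks, which is why reported throughput reflects compressed bytes.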
Bug 19637186: RAC OPTION MISMATCH PRODUCES ERROR VIOLATION DURING PDB PLUG IN
_QuickStart
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the Plus icon next to Environments.
5. In the Add Environment dialog, select Unix/Linux.
6. Select Standalone Host or Oracle Cluster, depending on the type of environment you are adding.
7. For standalone Oracle environments enter the Host IP address.
8. For Oracle RAC environments, enter the Node Address and Cluster Home.
9. Enter an optional Name for the environment.
10. Enter the SSH port.
The default value is 22.
11. Enter a Username for the environment.
See Requirements for Oracle Target Hosts and Databases for more information on the required privileges for the environment user.
12. Select a Login Type.
For Password, enter the password associated with the user in Step 11.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
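For reference, steps b and c above amount to the following commands on the environment host; the key string here is a placeholder for the one shown in the View Public Key dialog:

```shell
# Placeholder for the key copied from the engine's View Public Key dialog.
ENGINE_KEY='ssh-rsa AAAAB3NzaC1yc2E...placeholder delphix-engine'
mkdir -p ~/.ssh
# Append the engine's key; this also creates authorized_keys if it is missing.
echo "$ENGINE_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # read/write for your user only
chmod 755 ~                        # home directory writable only by your user
```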
13. For Password Login, click Verify Credentials to test the username and password.
14. Enter a Toolkit Path.
The toolkit directory stores scripts used for Delphix Engine operations, and should have a persistent working directory rather than a
temporary one. The toolkit directory will have a separate sub-directory for each database instance. The toolkit path must have 0770
permissions and at least 345MB of free space.
15. Click OK.
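The toolkit directory requirements in step 14 can be prepared and verified from a shell before running the wizard; the path below is illustrative, not a Delphix default:

```shell
TOOLKIT="$HOME/delphix_toolkit"   # illustrative path; use the one you entered
mkdir -p "$TOOLKIT"
chmod 0770 "$TOOLKIT"             # the toolkit path must have 0770 permissions
df -k "$TOOLKIT"                  # confirm at least 345 MB free on this filesystem
```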
Post-Requisites
After you create the environment, you can view information about it:
1. Click Manage.
2. Select Environments.
3. Select the environment name.
Related Links
Requirements for Oracle Target Hosts and Databases
Supported Operating Systems and DBMS Versions for Oracle Environments
Prerequisites
Make sure you have the correct user credentials for the source environment, as described in Requirements for Oracle Target Hosts
and Databases.
If you are linking a dSource to an Oracle or Oracle RAC physical standby database, you should read the topic Linking Oracle Physical
Standby Databases.
If you are using Oracle Enterprise Edition, you must have Block Change Tracking (BCT) enabled as described in Requirements for
Oracle Source Hosts and Databases.
The source database should be in ARCHIVELOG mode and the NOLOGGING option should be disabled as described in Requirements
for Oracle Source Hosts and Databases.
You may also want to read the topic Advanced Data Management Settings for Oracle dSources.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing Oracle Environment Users.
6. Enter your login credentials for the source database and click Verify Credentials.
If you are linking a mounted standby, click Advanced and enter non-SYS login credentials as well. Click Next. See the topics under
Linking Oracle Physical Standby Databases for more information about how the Delphix Engine uses non-SYS login credentials.
7. In the Add dSource/Add Environment wizard, the Toolkit Path can be set to /tmp (or any unused directory).
8. Select a Database Group for the dSource, and then click Next.
Adding a dSource to a database group lets you set Delphix Domain user permissions for that database and its objects, such as
snapshots. See the topics under Users, Permissions, and Policies for more information.
9. Select an Initial Load option.
By default, the initial load takes place upon completion of the linking process. Alternatively, you can set the initial load to take place
according to the SnapSync policy, for example if you want the initial load to take place when the source database is not in use, or after a
set of operations have taken place.
10. Select whether the data in the database is Masked.
This setting is a flag to the Delphix Engine that the database data is in a masked state. Selecting this option will not mask the data.
11. Select a SnapSync policy.
See Advanced Data Management Settings for Oracle dSources for more information.
12. Click Advanced to edit LogSync, Validated Sync, and Retention policies.
See Advanced Data Management Settings for Oracle dSources for more information.
13. Click Next.
14. Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs, DB_Link and DB_Sync, to create the dSource. You can monitor these jobs by clicking Active
Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have successfully completed, the database icon will
change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under
its assigned group.
and permissions. In the Databases panel, click on the Open icon to view the front of the dSource card. The card will then flip, showing
you information such as the Source Database and Data Management configuration. For more information, see Advanced Data
Management Settings for Oracle dSources.
Related Links
Advanced Data Management Settings for Oracle dSources
Requirements for Oracle Source Hosts and Databases
Requirements for Oracle Target Hosts and Databases
Linking dSources from an Encrypted Oracle Database
Linking Oracle Physical Standby Databases
Users, Permissions, and Policies
Managing Oracle Environment Users
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a dSource snapshot.
See Provisioning by Snapshot and LogSync in this topic for more information on provisioning options.
You can take a snapshot of the dSource to provision from by clicking the Camera icon on the dSource card.
7. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a
point in time within a snapshot.
You can provision from the most recent log entry by opening the snapshot timeline, and then clicking the red Arrow icon next to
the LogSync Slider.
8. Click Provision.
The Provision VDB panel will open, and the fields Installation Home, Database Unique Name, SID, Database Name, Mount Base,
and Environment User will auto-populate with information from the dSource.
9. If you need to add a new target environment for the VDB, click the green Plus icon next to the Filter Target field, and follow the
instructions in Adding an Oracle Single Instance or RAC Environment.
10. Review the information for Installation Home, Database Unique Name, SID, and Database Name and edit as necessary.
11. Review the Mount Base and Environment User and edit as necessary.
The Environment User must have permissions to write to the specified Mount Base, as described in Requirements for Oracle Target
Hosts and Databases. You may also want to create a new writeable directory in the target environment with the correct permissions,
and use that as the Mount Base for the VDB.
12. Select Provide Privileged Credentials if you want to use login credentials on the target environment other than those associated with
the Environment User.
13. Click Advanced to select Oracle Node Listeners or enter any VDB configuration settings or file mappings.
For more information, see Customizing Oracle VDB Configuration Settings and Customizing VDB File Mappings.
If you are provisioning to a target environment that is running a Linux OS, you will need to compare the SGA_TARGET configuration
parameter with the shared memory size in /dev/shm. The shared memory configured on the target host should match
the SGA memory target. You can check this by opening the Advanced settings, and then finding the value for SGA_TARGET
under DB Configuration.
Provisioning by Snapshot
Provision by Time: You can provision to the start of any snapshot by selecting that snapshot card from the TimeFlow view, or by entering a
value in the time entry fields below the snapshot cards. The values you enter will snap to the beginning of the nearest snapshot.
Provision by SCN: You can use the Slide to Provision by SCN control to open the SCN entry field. Here, you can type or paste in the SCN
you want to provision to. After entering a value, it will snap to the start of the closest appropriate snapshot.
Provisioning by LogSync
When provisioning by LogSync information, you can provision to any point in time, or to any SCN, within a particular snapshot. The TimeFlow
view for a dSource shows multiple snapshots by default. To view the LogSync data for an individual snapshot, use the Slide to Open LogSync
control at the top of an individual snapshot card.
Provision by SCN: Use the Slide to Open LogSync and Slide to Provision by SCN controls to view the range of SCNs within that snapshot.
Drag the red triangle to the SCN that you want to provision from. You can also type or paste in the specific SCN you want to provision to.
Note that if the SCN doesn't exist, you will see an error when you provision.
Provision by Time: Use the Slide to Open LogSync control to view the time range within that snapshot. Drag the red triangle to the point in
time that you want to provision from. You can also enter a date and time directly.
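The /dev/shm comparison noted in step 13 can be scripted on a Linux target. Reading SGA_TARGET itself is done inside the database, so only the shared-memory side is checked here:

```shell
# Kilobytes of tmpfs backing /dev/shm (empty if not mounted on this host).
SHM_KB=$(df -k /dev/shm 2>/dev/null | awk 'NR==2 {print $2}')
echo "shared memory available: ${SHM_KB:-0} KB"
# This value should be at least the VDB's SGA_TARGET, which you would read
# with a query such as "show parameter sga_target" in SQL*Plus (not run here).
```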
Related Links
Linking an Oracle Data Source
Requirements for Oracle Target Hosts and Databases
Customizing Oracle VDB Configuration Settings
Provisioning a VDB from an Encrypted Oracle Database
Adding an Oracle Single Instance or RAC Environment
Customizing VDB File Mappings
Prerequisites
Make sure your environment meets the requirements described in the following topics:
Requirements for PostgreSQL Source Hosts and Databases
Requirements for PostgreSQL Target Hosts and Databases
Supported Operating Systems and Database Versions for PostgreSQL Environments
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Unix/Linux in the operating system menu.
6. Select Standalone Host.
7. Enter the Host IP address.
8. Enter an optional Name for the environment.
9. Enter the SSH port.
The default value is 22.
10. Enter a Username for the environment.
For more information about the environment user requirements, see Requirements for PostgreSQL Target Hosts and Databases and
Requirements for PostgreSQL Source Hosts and Databases.
11. Select a Login Type.
For Password, enter the password associated with the user in Step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
12. For Password Login, click Verify Credentials to test the username and password.
13. Enter a Toolkit Path.
See Requirements for PostgreSQL Target Hosts and Databases and Requirements for PostgreSQL Source Hosts and Databases
for more information about the toolkit directory requirements.
14. Click OK.
As the new environment is added, you will see two jobs running in the Delphix Admin Job History, one to Create and Discover an
environment, and another to Create an environment. When the jobs are complete, you will see the new environment added to the list in
the Environments panel. If you don't see it, click the Refresh icon in your browser.
Post-Requisites
After you create the environment, you can view information about it by selecting Manage > Environments, and then select the
environment name.
Related Links
Setting Up PostgreSQL Environments: An Overview
Requirements for PostgreSQL Source Hosts and Databases
Requirements for PostgreSQL Target Hosts and Databases
Supported Operating Systems and Database Versions for PostgreSQL Environments
Adding an Installation to a PostgreSQL Environment
Prerequisites
Make sure you have the correct user credentials for the source environment, as described in Requirements for PostgreSQL Source
Hosts and Databases
You may also want to read the topic Advanced Data Management Settings for PostgreSQL Data Sources.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing PostgreSQL Environment Users.
6. Enter your login credentials for DB Cluster User and DB Cluster Password.
7. Click Advanced to enter a Connection Database.
The Connection Database will be used when issuing SQL queries from the Delphix Engine to the linked database. It can be any existing
database that the DB Cluster User has permission to access.
8. Click Next.
9. Select a Database Group for the dSource, and then click Next.
Adding a dSource to a database group lets you set Delphix Domain user permissions for that database and its objects, such as
snapshots. See the topics under Users, Permissions, and Policies for more information.
10. Select a SnapSync Policy, and, if necessary, a Staging Installation for the dSource.
The Staging installation represents the PostgreSQL binaries that will be used on the staging target to backup and restore the linked
database to a warm standby.
11. Click Advanced to select whether the data in the data sources is Masked, to select a Retention Policy, and to indicate whether any
pre- or post-scripts should be executed during dSource creation.
For more information, see Advanced Data Management Settings for PostgreSQL Data Sources and Using Pre- and Post-Scripts
with PostgreSQL dSources.
12. Click Next.
13. Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs, DB_Link and DB_Sync, to create the dSource. You can monitor these jobs by clicking Active
Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have successfully completed, the database icon will
change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under
its assigned group.
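Before linking, the Connection Database from step 7 can be sanity-checked with a stock psql client. The host, port, user, and database names below are placeholders, and the command is printed rather than executed since it needs a reachable cluster:

```shell
CONN_DB='postgres'   # any existing database the DB Cluster User can access
# A successful "SELECT 1" confirms the engine's credentials would work.
echo "psql -h source-host -p 5432 -U db_cluster_user -d $CONN_DB -c 'SELECT 1;'"
```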
Related Links
Advanced Data Management Settings for PostgreSQL Data Sources
Requirements for PostgreSQL Target Hosts and Databases
Using Pre- and Post-Scripts with PostgreSQL dSources
Users, Permissions, and Policies
Prerequisites
You will need to have linked a dSource from a source database, as described in Linking a PostgreSQL dSource, or have already
created a VDB from which you want to provision another VDB.
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a dSource snapshot.
See Provisioning by Snapshot and LogSync in this topic for more information on provisioning options.
You can take a snapshot of the dSource to provision from by clicking the Camera icon on the dSource card.
7. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a
point in time within a snapshot.
8. Click Provision.
The VDB Provisioning Wizard will open, and the fields Installation, Mount Base, and Environment User will auto-populate with
information from the environment configuration.
9. Enter a Port Number.
The TCP port upon which the VDB will listen.
10. Click Advanced to enter any VDB configuration settings.
For more information, see Customizing PostgreSQL VDB Configuration Settings.
11. Click Next to continue to the VDB Configuration tab.
12. Modify the VDB Name if necessary.
13. Select a Target Group for the VDB.
14. Click the green Plus icon to add a new group, if necessary.
15. Select a Snapshot Policy for the VDB.
16. Click the green Plus icon to create a new policy, if necessary.
17. Click Next to continue to the Hooks tab.
18. Specify any Hooks to be used during the provisioning process.
For more information, see Customizing PostgreSQL Management with Hook Operations.
19.
Related Links
Linking a PostgreSQL dSource
Requirements for PostgreSQL Target Hosts and Databases
Using Pre- and Post-Scripts with dSources and VDBs
Customizing PostgreSQL VDB Configuration Settings
Prerequisites
Make sure your environment meets the requirements described in the following topics:
Requirements for MySQL Source Hosts and Databases
Requirements for MySQL Target/Staging Hosts and Databases
Supported Operating Systems and Database Versions for MySQL Environments
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Unix/Linux in the operating system menu.
6. Select Standalone Host.
7. Enter the Host IP address.
8. Enter an optional Name for the environment.
9. Enter the SSH port.
The default value is 22.
10. Enter a Username for the environment.
For more information about the environment user requirements, see Requirements for MySQL Target/Staging Hosts and Databases
and Requirements for MySQL Source Hosts and Databases.
11. Select a Login Type.
For Password, enter the password associated with the user in Step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
12. For Password Login, click Verify Credentials to test the username and password.
13. Enter a Toolkit Path.
For more information about the toolkit directory requirements, see Requirements for MySQL Target/Staging Hosts and Databases
and Requirements for MySQL Source Hosts and Databases.
14. Click OK.
As the new environment is added, you will see two jobs running in the Delphix Admin Job History, one to Create and Discover an
environment, and another to Create an environment. When the jobs are complete, you will see the new environment added to the list in
the Environments tab. If you do not see it, click the Refresh icon in your browser.
Post-Requisites
To view information about an environment after you have created it:
1. Click Manage.
2. Select Environments.
3. Select the environment name.
Related Links
Setting Up MySQL Environments: An Overview
Requirements for MySQL Source Hosts and Databases
Requirements for MySQL Target/Staging Hosts and Databases
Supported Operating Systems and Database Versions for MySQL Environments
Adding an Installation to a MySQL Environment
Prerequisites
Make sure you have the correct user credentials for the source environment, as described in Requirements for MySQL Source Hosts
and Databases
You may also want to read the topic Advanced Data Management Settings for MySQL Data Sources.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing MySQL Environment Users.
Related Links
Prerequisites
You must have already:
linked a dSource from a source database, as described in Linking a MySQL dSource
or,
created a VDB from which you want to provision another VDB
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Click My Databases.
4. Select a dSource.
5. Select a dSource snapshot.
For more information on provisioning options, see Provisioning by Snapshot or LogSync below.
6. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a point
in time within a snapshot.
7. Click Provision.
The VDB Provisioning Wizard will open, and the fields Installation, Mount Base, and Environment User will auto-populate with
information from the environment configuration.
8. Enter a Port Number. This is the TCP port upon which the VDB will listen.
9. Click Advanced, and then click the green Plus icon (Add Parameter) to add new or update existing VDB configuration settings in
the template provided.
For more information, see Customizing MySQL VDB Configuration Settings.
10. Click Next to continue to the VDB Configuration tab.
11. Modify the VDB Name if necessary.
12. Select a Target Group for the VDB.
13. If necessary, click the green Plus icon to add a new group.
14. Select a Snapshot Policy for the VDB.
15. If necessary, click the green Plus icon to create a new policy.
16. Select the LogSync option to enable the LogSync process for point-in-time provisioning and refresh.
17. Click Next to continue to the Hooks tab.
18. Specify any Hooks to be used during the provisioning process.
For more information, see Customizing MySQL Management with Hook Operations.
19. Click Next.
20. Verify all the information displayed for the VDB is correct.
21. Click Finish.
When provisioning starts, you can view progress of the job in the Databases panel or in the Job History panel of the Dashboard. When
provisioning is complete, the VDB will be included in the group you designated, and listed in the Databases panel. If you select the VDB in the
Databases panel and click the Open icon, you can view its card, which contains information about the database and its Data Management settings.
Related Links
Linking a MySQL dSource
Requirements for MySQL Target/Staging Hosts and Databases
Using Pre- and Post-Scripts with dSources and VDBs
Customizing MySQL VDB Configuration Settings
Prerequisites
Make sure that your target environment meets the requirements described in Requirements for SQL Server Target Hosts and
Databases.
On the Windows machine that you want to use as a target, you will need to download the Delphix Connector software through the
Delphix Engine interface, install it, and then register that machine with the Delphix Engine.
Procedure
Flash Player Required for Connector Download
A Flash player must be available on the target host to download the Delphix Connector when using the Delphix GUI. If the target host
does not have a Flash player installed, you can download the connector directly from the Delphix Engine by navigating to this URL:
http://<name of your Delphix Engine>/connector/DelphixConnectorInstaller.msi
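As a sketch, the download URL can be assembled from the engine's hostname and fetched with any HTTP client; "delphix.example.com" below is a placeholder for the name of your Delphix Engine:

```shell
# Build the connector download URL from the engine hostname.
# "delphix.example.com" is a placeholder, not a real engine name.
ENGINE="delphix.example.com"
URL="http://${ENGINE}/connector/DelphixConnectorInstaller.msi"
echo "$URL"
# To download the installer, run for example:
#   curl -f -o DelphixConnectorInstaller.msi "$URL"
```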
1. From the machine that you want to use as a target, start a browser session and connect to the Delphix Engine GUI using the
delphix_admin login.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Windows in the operating system menu.
6. Select Target.
7. Select Standalone.
8. Click the download link for the Delphix Connector Installer.
The Delphix Connector will download to your local machine.
9. On the Windows machine that you want to use as a target, run the Delphix Connector installer. Click Next to advance through
each of the installation wizard screens.
The installer will only run on 64-bit Windows systems. 32-bit systems are not supported.
a. For Connector Configuration, make sure there is no firewall in your environment blocking traffic to the port on the target
environment that the Delphix Connector service will listen on.
b. For Select Installation Folder, either accept the default folder, or click Browse to select another.
c. Click Next on the installer's final 'Confirm Installation' dialog to complete the installation process, and then click Close to exit the
Delphix Connector Install Program.
10. Return to the Delphix Engine interface.
11. Enter the Host Address, Username, and Password for the target environment.
12. Click Validate Credentials.
13. Click OK to complete the target environment addition request.
Post-Requisites
1. On the target machine, in the Windows Start Menu, click Services.
2. Select Extended Services.
3. Ensure that the Delphix Connector service has a Status of Started.
4. Ensure that the Startup Type is Automatic.
Related Links
Setting Up SQL Server Environments: An Overview
Requirements for SQL Server Target Hosts and Databases
Prerequisites
You must have already set up SQL Server target environments, as described in Adding a SQL Server Standalone Target
Environment
You will need to specify a target environment that will act as a proxy for running SQL Server instance and database discovery on
the source, as explained in Setting Up SQL Server Environments: An Overview
Make sure your source environment meets the requirements described in Requirements for SQL Server Target Hosts and Databases
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Windows in the operating system menu.
6. Select Source.
a. If you are adding a Windows Server Failover Cluster (WSFC), add the environment based on which WSFC feature the source
databases use:
i. Failover Cluster Instances
Add the environment as a standalone source using the cluster name or address.
ii. AlwaysOn Availability Groups
Add the environment as a cluster source using the cluster name or address.
b. Otherwise, add the environment as a standalone source.
7. Select a Connector Environment.
Connector environments are used as proxies for running discovery on the source. If no connector environments are available for selection,
you will need to set them up as described in Adding a SQL Server Standalone Target Environment. Connector environments must:
have the Delphix Connector installed
be registered with the Delphix Engine from the host machine where they are located.
8. Enter the Host Address, Username, and Password for the source environment.
9. Click Validate Credentials.
10. Click OK, and then click Yes to confirm the source environment addition request.
As the new environment is added, you will see multiple jobs running in the Delphix Admin Job History to Create and Discover an
environment. In addition, if you are adding a cluster environment, you will see jobs to Create and Discover each node in the cluster and
their corresponding hosts. When the jobs are complete, you will see the new environment added to the list in the Environments panel. If
you don't see it, click the Refresh icon.
Related Links
Setting Up SQL Server Environments: An Overview
Adding a SQL Server Standalone Target Environment
Adding a SQL Server Failover Cluster Target Environment
Requirements for SQL Server Target Hosts and Databases
Prerequisites
Be sure that the source database meets the requirements described in Requirements for SQL Server Target Hosts and Databases
You must have already set up a staging target environment as described in Setting Up SQL Server Environments: An Overview and
Adding a Windows Target Environment
Maximum Size of a Database that Can Be Linked
If the staging environment uses the Windows 2003 operating system, the largest size of database that you can link to the
Delphix Engine is 2TB. This is also the largest size to which a virtual database (VDB) can grow.
For all other Windows versions, the maximum size for databases and VDBs is 32TB.
In both cases, the maximum size of the database and resulting VDBs is determined by the operating system on the staging target host.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials or as the owner of the database from which you want to
provision the dSource.
2. Click Manage.
3. Select Databases.
4. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing SQL Server Environment Users.
14. Select a standalone SQL Server instance on the target environment for hosting the staging database.
15. Select whether the data in the database is Masked.
16. Select whether you want LogSync enabled for the dSource. For more information, see Advanced Data Management Settings for SQL
Server dSources.
LogSync Disabled by Default
For SQL Server data sources, LogSync is disabled by default. For more information about how LogSync functions with SQL
Server data sources, see Managing SQL Server Data Sources.
17. Click Advanced to edit retention policies and specify pre- and post-scripts. For details on pre- and post-scripts, refer to Customizing
SQL Server Management with Pre- and Post-Scripts. Additionally, if the source database's backups use LiteSpeed or RedGate
password-protected encryption, you can supply the encryption key the Delphix Engine should use to restore those backups.
18. Click Next.
19. Review the dSource Configuration and Data Management information.
20. Click Finish.
The Delphix Engine will initiate two jobs to create the dSource, DB_Link and DB_Sync. You can monitor these jobs by clicking Active Jobs in
the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the database icon will change to a
dSource icon on the Environments > Databases screen, and the dSource will appear in the list of My Databases under its assigned group.
You can view the current state of Validated Sync for the dSource on the dSource card itself.
The dSource Card
After you have created a dSource, the dSource card allows you to view information about it and make modifications to its policies and
permissions. In the Databases panel, click the Open icon to view the front of the dSource card. You can then flip the card to see
information such as the Source Database and Data Management configuration. For more information, see the topic Advanced Data
Management Settings for SQL Server dSources.
Related Links
Users, Permissions, and Policies
Setting Up SQL Server Environments: An Overview
Linking a dSource from a SQL Server Database: An Overview
Advanced Data Management Settings for SQL Server dSources
Adding a SQL Server Standalone Target Environment
Requirements for SQL Server Target Hosts and Databases
Using Pre- and Post-Scripts with SQL Server dSources
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a means of provisioning.
See Provisioning by Snapshot and LogSync in this topic for more information.
7. Click Provision.
The Provision VDB panel will open, and the Database Name and Recovery Model will auto-populate with information from the
dSource.
8. Select a target environment from the left pane.
9. Select an Instance to use.
10. If the selected target environment is a Windows Failover Cluster environment, select a drive letter from Available Drives. This drive will
contain volume mount points to Delphix storage.
11. Specify any Pre or Post Scripts that should be used during the provisioning process.
For more information, see Using Pre- and Post-Scripts with SQL Server dSources.
12. Click Next.
13. Select a Target Group for the VDB.
Click the green Plus icon to add a new group, if necessary.
14. Select a Snapshot Policy for the VDB.
Click the green Plus icon to create a new policy, if necessary.
15. Click Next.
16. If your Delphix Engine system administrator has configured the Delphix Engine to communicate with an SMTP server, you will be able to
specify one or more people to notify when the provisioning is done. You can choose other Delphix Engine users, or enter email
addresses.
17. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard.
When provisioning is complete, the VDB will be included in the group you designated, and listed in the Databases panel. If you select the
VDB in the Databases panel and click the Open icon, you can view its card, which contains information about the database and its Data
Management settings.
You can select a SQL Server instance that has a higher version than the source database and the VDB will be automatically upgraded.
For more information about compatibility between different versions of SQL Server, see SQL Server Operating System Compatibility
Matrices.
Provisioning By Snapshot
Provision by Time: You can provision to the start of any snapshot by selecting that snapshot card from the TimeFlow view, or by entering a value
in the time entry fields below the snapshot cards. The values you enter will snap to the beginning of the nearest snapshot.
Provision by LSN: You can use the Slide to Provision by LSN control to open the LSN entry field. Here, you can type or paste in the LSN you
want to provision to. After entering a value, it will "snap" to the start of the closest appropriate snapshot.
If LogSync is enabled on the dSource, you can provision by LogSync information. When provisioning by LogSync information, you can provision
to any point in time, or to any LSN, within a particular snapshot. The TimeFlow view for a dSource shows multiple snapshots by default. To view
the LogSync data for an individual snapshot, use the Slide to Open LogSync control at the top of an individual snapshot card.
Provisioning By LogSync
Provision by Time: Use the Slide to Open LogSync control to view the time range within that snapshot. Drag the red triangle to the point in time
that you want to provision from. You can also enter a date and time directly.
Provision by LSN: Use the Slide to Open LogSync and Slide to Provision by LSN controls to view the range of LSNs within that snapshot. You
must type or paste in the specific LSN you want to provision to. Note that if the LSN doesn't exist, you will see an error when you provision.
Related Links
Linking a SQL Server dSource
Adding a SQL Server Standalone Target Environment
Adding a SQL Server Failover Cluster Target Environment
Requirements for SQL Server Target Hosts and Databases
Setting Up SQL Server Environments: An Overview
Using Pre- and Post-Scripts with dSources and SQL Server VDBs
Prerequisites
See Requirements for SAP ASE Source Hosts and Databases.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the Plus icon next to Environments.
5. In the Add Environment dialog, select Unix/Linux.
6. Select Standalone Host.
7. Enter the Host IP address.
8.
12. For Password, enter the password associated with the user in Step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
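The steps in the note above can be sketched as shell commands on the source environment; the key text below is a placeholder for the key displayed by View Public Key:

```shell
# Append the Delphix Engine's public key to the environment user's
# authorized_keys file, creating the file if it does not exist.
# "ssh-rsa AAAAB3... delphix" is a placeholder for the real key text.
mkdir -p ~/.ssh
echo "ssh-rsa AAAAB3... delphix" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # read/write for your user only
chmod 755 ~                        # home directory writable only by your user
```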
13. For Password Login, click Verify Credentials to test the username and password.
14. Enter a Toolkit Path.
The toolkit directory stores scripts used for Delphix Engine operations. It must have a persistent working directory rather than a temporary
one. The toolkit directory will have a separate sub-directory for each database instance. The toolkit path must have 0770 permissions.
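As a sketch of the requirement above, the toolkit directory can be created with the required mode as follows; the demo path is an example, not a Delphix default:

```shell
# Create a persistent toolkit directory with the required 0770 permissions.
# "./delphix_toolkit" is a demo path; in practice use a persistent absolute
# path (for example /opt/delphix/toolkit) owned by the environment user.
TOOLKIT=./delphix_toolkit
mkdir -p "$TOOLKIT"
chmod 0770 "$TOOLKIT"
ls -ld "$TOOLKIT"
```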
15. Click the Discover SAP ASE checkbox.
16. Enter a Username for an instance on the environment.
17. Enter the Password associated with the user in Step 16.
18. Click OK.
Post-Requisites
After you create the environment, you can view information about it by selecting Manage > Environments and then selecting the environment
name.
Related Links
Link an SAP ASE Data Source
Prerequisites
1. Ensure that the source and target environments are set up correctly, as described in Managing SAP ASE Environments.
2. Before you can link a data source in a Veritas Cluster Server (VCS) environment, a static configuration parameter must be added to the
Delphix Engine manually by a Delphix support contact to avoid failure. Each node in a VCS environment typically has more than one IP
address for failover purposes, and by default the Delphix Engine will only interface with a single IP address from the source host unless
the following configuration is added:
PRO.RESTRICT_TARGET_IP=false
3. The Delphix Engine configuration file can be found at the following path:
/var/delphix/server/etc/delphix_config_override.properties
4. Finally, the Delphix Engine stack must be restarted manually for the new configuration to take effect.
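Steps 2-4 can be sketched as shell commands, assuming shell access to the engine (this change is normally performed by Delphix support); the demo below appends the parameter to a local copy of the file rather than the live path:

```shell
# On a real engine the override file is:
#   /var/delphix/server/etc/delphix_config_override.properties
# This demo appends the parameter to a local copy instead.
CONF=./delphix_config_override.properties
echo "PRO.RESTRICT_TARGET_IP=false" >> "$CONF"
grep "RESTRICT_TARGET_IP" "$CONF"
# After editing the real file, restart the Delphix Engine stack so the
# new configuration takes effect.
```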
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing SAP ASE Environment Users.
12. Enter the Backup Location. This is the directory where the database backups are stored. Delphix recursively searches this location, so
the database backups or transaction logs can reside in any subdirectories below the path entered.
13. Optionally, enter the Load Backup Server Name. If you have multiple backup servers in your staging environment, you can specify the
name of the backup server here to load database dumps and transaction logs into the staging database. If you leave this parameter
empty, the server designated as "SYB_BACKUP" will be used.
14. Select the environment and ASE instance name.
Related Links
Managing SAP ASE Environments
Requirements for SAP ASE Target Hosts and Databases
Managing SAP ASE Environment Users
Users, Permissions, and Policies
Prerequisites
Before you provision an SAP ASE VDB, you must:
Have linked a dSource from a source database, as described in Linking an SAP ASE Data Source, or have already created a VDB from
which you want to provision another VDB
Have set up target environments as described in Adding an SAP ASE Environment
Ensure that you have the required privileges on the target environment as described in Requirements for SAP ASE Target Hosts and
Databases
If you are provisioning to a target environment that is different from the one in which you set up the staging database, you must make
sure that the two environments have compatible operating systems, as described in Requirements for SAP ASE Target Hosts and
Databases. For more information on the staging database and the validated sync process, see Managing SAP ASE Environments: An
Overview.
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. Select a dSource.
6. Select a means of provisioning.
For more information, see Provisioning by Snapshot and LogSync.
7. Click Provision.
The Provision VDB panel will open, and the Instance and Database Name fields will auto-populate with information from the dSource.
8. Select whether to enable the Truncate Log on Checkpoint database option for the VDB.
9. Click Next.
10. Select a Target Group for the VDB.
Click the green Plus icon to add a new group, if necessary.
11. Select a Snapshot Policy for the VDB.
Click the green Plus icon to create a new policy, if necessary.
12. Click Next.
13. Specify any Hooks to be used during the provisioning process.
For more information, see Customizing SAP ASE Management with Hook Operations.
14. If your Delphix Engine system administrator has configured the Delphix Engine to communicate with an SMTP server, you will be able to
specify one or more people to notify when the provisioning is done. You can choose other Delphix Engine users or enter email
addresses.
15. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard.
When provisioning is complete, the VDB will be included in the group you designated, and it will be listed in the Databases panel. If you
select the VDB in the Databases panel and click the Open icon, you can view its card, which contains information about the database
and its Data Management settings.
Provisioning by Snapshot
You can provision to the start of any snapshot by selecting that snapshot card from the TimeFlow view, or by entering a value in the time entry
fields below the snapshot cards. The values you enter will snap to the beginning of the nearest snapshot.
Provisioning by LogSync
If LogSync is enabled on the dSource, you can provision by LogSync information. When provisioning by LogSync information, you can provision
to any point in time within a particular snapshot. The TimeFlow view for a dSource shows multiple snapshots by default. To view the LogSync
data for an individual snapshot, use the Slide to Open LogSync control at the top of an individual snapshot card. Drag the red triangle to the
point in time from which you want to provision. You can also enter a date and time directly.
Related Links
Linking an SAP ASE Data Source
Adding an SAP ASE Environment
Requirements for SAP ASE Target Hosts and Databases
Managing SAP ASE Environments: An Overview
Customizing SAP ASE Management with Hook Operations
Create a Group
Before you can link to a dSource or provision a VDB, you will need to create a group that will contain your database objects. Permissions and
policies for database objects are also determined within the group, as described in Users, Groups, and Permissions: An Overview.
When you first start up the Delphix Engine, a default group, <New Group>, is already defined. You can edit the name of this group, as well as
the policies and permissions associated with it, to use as your first group, or you can create a group as described in the following steps.
Groups for dSources and VDBs
Since policies and permissions for database objects are set by the group they belong to, you may want to create two groups, one for
dSources and one for VDBs, so that you can set policies and permissions by object type.
Delete a dSource
Prerequisites
You cannot delete a dSource that has dependent VDBs. Before deleting a dSource, make sure all dependent VDBs have been deleted
as described in Delete a VDB.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Select Manage.
3. Select Databases.
4. Select My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Deleting a dSource will also delete all snapshots, logs, and descendant VDB Refresh policies for that database. The deletion
cannot be undone.
Delete a VDB
This topic describes how to delete a VDB.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Click My Databases.
4. Select the VDB you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Disable a dSource
This topic describes how to enable and disable dSources for operations such as backup and restore.
For certain processes, such as backing up and restoring the source database, you may want to temporarily disable your dSource. Disabling a
dSource turns off communication between the dSource and the source database, but does not tear down the configuration that enables
communication and data updating to take place. When a disabled dSource is later enabled, it will resume communication and incremental data
updates from the source database according to the original policies and data management configurations that you set.
Disabling a dSource is also a prerequisite for several other operations, like database migration and upgrading the dSource after upgrade of the
associated data source.
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
_SysAdmin
Browsers Supported / Adobe Flash/Flex / Minimum Memory:
Firefox, Chrome; Flash 10.x; 4GB
Flash 10.x; 4GB
Windows 7: Flash 10.x; 4GB
Windows 7: Firefox, Chrome; Flash 10.x; 4GB
Windows 7 x64: Flash 10.x; 4GB
Windows 7 x64: Firefox, Chrome; Flash 10.x; 4GB
Mac OS X: Firefox, Chrome; Flash 9.0.3 (6531.9); 4GB
Requirements
Virtualization Platform
Virtual CPUs: 8 vCPUs
Memory
Network
SCSI Controller
When adding virtual disks, make sure that they are evenly distributed across 4 virtual SCSI controllers. Spreading the disks across all available
SCSI controllers will ensure optimal I/O performance from the disks. For example, a VM with 4 SCSI controllers and 6 virtual disks should
distribute the disks across the controllers as follows:
disk1 = SCSI(0:0)
disk2 = SCSI(0:1)
disk3 = SCSI(1:0)
disk4 = SCSI(1:1)
disk5 = SCSI(2:0)
disk6 = SCSI(3:0)
General Storage Configuration
Delphix VM Configuration
Storage
Delphix Engine System Disk Storage
Database Storage
Requirements
Instance Types
Storage optimized instances: i2.2xlarge, i2.4xlarge, i2.8xlarge
Notes:
The Delphix Engine most closely resembles a storage appliance and performs best when provisioned using a storage optimized instance type.
Larger instance types provide more CPU, which can prevent resource shortfalls under high I/O throughput conditions.
Larger instances also provide more memory, which the Delphix Engine uses to cache database blocks. More memory will provide better read
performance.
Network Configuration
Virtual Private Cloud
You must deploy the Delphix Engine and all of the source and target environments in a VPC network to ensure that private IP addresses are
static and do not change when you restart instances. When adding environments to the Delphix Engine, you must use the hosts' VPC (static
private) IP addresses.
Static Public IP
The EC2 Delphix instance must be launched with a static public IP address; however, the default behavior for VPC instances is to launch with a
dynamic public IP address, which can change whenever you restart the instance. A static public IP address can only be achieved by using
assigned AWS Elastic IP Addresses.
Security Group Configuration
The default security group will only open port 22 for secure shell (SSH) access. You must modify the security group to allow access to all of the
networking ports used by the Delphix Engine and the various source and target platforms. See General Network and Connectivity
Requirements for information about specific port configurations, and Network Performance Configuration Options for information about network
performance tuning.
EBS Configuration
EBS Optimized Instances (except the i2.8xlarge instance type)
All attached storage devices must be EBS volumes. Delphix does not support the use of instance store volumes. Because EBS volumes are
connected to EC2 instances via the network, other network activity on the instance can affect throughput to EBS volumes. EBS optimized
instances provide guaranteed throughput to EBS volumes and are required (for instance types that support it) in order to provide consistent
and predictable storage performance. The i2.8xlarge instance type does not support EBS optimization; however, this instance type supports
10 Gigabit networking, which often provides suitable performance.
EBS Provisioned IOPS Volumes
Use EBS volumes with provisioned IOPS in order to provide consistent and predictable performance. The number of provisioned IOPS
depends on the estimated I/O workload on the Delphix Engine. Provisioned IOPS volumes must be configured with at least 1 GiB of volume
size for every 30 provisioned IOPS. For example, a volume provisioned with 3,000 IOPS must be configured with at least 100 GiB.
I/O requests of up to 256 kilobytes (KB) are counted as a single I/O operation (IOP) for provisioned IOPS volumes. Each volume can be
configured for up to 4,000 IOPS.
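Working from the 3,000 IOPS / 100 GiB example above, the ratio comes to 30 IOPS per GiB, so a minimum volume size can be sketched with shell arithmetic:

```shell
# Minimum volume size (GiB) for a desired provisioned IOPS level,
# assuming the 30 IOPS-per-GiB provisioning ratio described above.
IOPS=3000
MIN_GIB=$(( (IOPS + 29) / 30 ))   # ceiling of IOPS / 30
echo "${IOPS} provisioned IOPS requires at least ${MIN_GIB} GiB"
```

For 3,000 IOPS this reproduces the 100 GiB minimum from the example.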
General Storage Configuration
Allocate initial storage equal to the size of the physical source databases. For high redo rates and/or high DB change rates, allocate an
additional 10-20% storage.
Add storage when storage capacity approaches 30% free.
Keep all EBS volumes the same size. Add new storage by provisioning new volumes of the same size.
Maximize Delphix Engine RAM for a larger system cache to service reads.
Use at least 3 EBS volumes to maximize performance. This enables the Delphix File System (DxFS) to make sure that its file systems are
always consistent on disk without additional serialization. This also enables the Delphix Engine to achieve higher I/O rates by queueing more
I/O operations to its storage.
See Optimal Storage Configuration Parameters for the Delphix Engine.
Port Numbers and Use
TCP 25: SMTP
TCP/UDP 53: DNS
UDP 123: NTP
UDP 162: SNMP traps
TCP 443: SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP 636: LDAP over SSL (LDAPS)
TCP 8415: Delphix Session Protocol (DSP) connections
TCP 50001: Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See
Network Performance Tool.
Port Number and Use
TCP 22: SSH
TCP 80: HTTP
UDP 161: SNMP
TCP 443: HTTPS
TCP 8415: Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for Oracle, V2P, and
the Delphix Connector.
TCP 50001: Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance
Tool.
TCP/UDP 32768-65535: Required for NFS mountd and status services from the target environment, only if the firewall between Delphix and
the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target
environment dynamically opens ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd
configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
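A quick way to check a host for the disallowed entries is to grep its sshd configuration. The sketch below wraps the check in a small function and demonstrates it on a sample file; on a real host you would point it at /etc/ssh/sshd_config:

```shell
# Report whether an sshd configuration file contains the disallowed
# keepalive entries listed above.
check_sshd() {
  if grep -Eq '^[[:space:]]*ClientAlive(Interval|CountMax)' "$1"; then
    echo "disallowed entries present"
  else
    echo "ok"
  fi
}

# Demo against a sample file; on a real host run:
#   check_sshd /etc/ssh/sshd_config
printf 'Port 22\nClientAliveInterval 300\n' > sample_sshd_config
check_sshd sample_sshd_config
```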
Related Links
Network and Connectivity Requirements for DB2 Environments
Network and Connectivity Requirements for Unix Environments
Network and Connectivity Requirements for PostgreSQL Environments
Network and Connectivity Requirements for SQL Server Environments
Network and Connectivity Requirements for Oracle Environments
Network and Connectivity Requirements for SAP ASE Environments
Network and Connectivity Requirements for MySQL Environments
Network and Connectivity Requirements for Windows Environments
Related Links
Virtual Machine Requirements for VMware Platform
The delphix_admin and sysadmin User Roles
Virtualization Platform
Virtual CPUs
8 vCPUs. vCPUs must be model Westmere (preferred if supported by the physical CPU), Nehalem, Penryn, Conroe, or kvm64.
To set the vCPU model for your compute node, add the following lines to the [libvirt] section of nova.conf (acceptable cpu_model values are
listed above):
cpu_mode = custom
cpu_model = Westmere
virt_type = kvm
Memory
The Delphix Engine uses its memory to cache database blocks. More memory will provide better read performance.
Memory overcommit should be disabled on the compute node where the Delphix VM is running, if possible. Overcommit causes the Delphix
Engine to stall while waiting for its memory to be paged in by the compute node. You can disable overcommit by adding the following line to the
[DEFAULT] section of nova.conf:
ram_allocation_ratio = 1.0
Alternatively, you can simply run the Delphix Engine as the sole VM on the OpenStack Compute node where it is located.
Network
Delphix
Engine
System Disk
Storage
Database
Storage
Configuration
135
136
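As a sanity check, the overcommit setting can be read back from nova.conf. This is a minimal sketch assuming a simple key = value layout; the helper name and sample path are illustrative:

```shell
# check_overcommit: read ram_allocation_ratio from a nova.conf-style
# file (helper name and sample path are illustrative).
check_overcommit() {
    awk -F= '$1 ~ /^ram_allocation_ratio[ \t]*$/ { gsub(/[ \t]/, "", $2); print $2 }' "$1"
}

# Demo against a sample [DEFAULT] fragment:
printf '[DEFAULT]\nram_allocation_ratio = 1.0\n' > /tmp/nova_sample.conf
check_overcommit /tmp/nova_sample.conf   # prints "1.0"
```

A value of 1.0 means the compute node will not overcommit memory; any larger value can stall the Delphix Engine as described above.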
Related Links
Managing System Administrators
Prerequisites
Read the requirement and support information in the Installation and Initial Configuration Requirements topics.
Post-Requisites
After installing the server, follow the procedures in these topics to specify and customize the Delphix Engine network and to make modifications to
the memory size, number of CPUs, and number of disks used for storage.
Setting Up Network Access to the Delphix Engine
Customizing the Delphix Engine System Settings
Prerequisites
Follow the initial installation instructions in Installing the Delphix Engine.
NAT Configuration
Delphix communicates its IP address in application layer data, which cannot be translated by NAT.
You can configure a Delphix Engine to use either a dynamic (DHCP) IP address or a static IP address.
Procedure
1. Power on the Delphix Engine and open the Console.
2. Wait for the Delphix Management Service and Delphix Boot Service to come online.
3. Press F2 to access the sysadmin console.
4. Enter sysadmin@SYSTEM for the username and sysadmin for the password.
5. You will be presented with a description of available network settings and instructions for editing.
dhcp — Boolean value indicating whether DHCP should be used for the primary interface. Setting this value to 'true' will cause all other properties (address, hostname, and DNS) to be derived from the DHCP response.
dnsDomain — DNS domain for the system.
dnsServers — DNS server(s) as a list of IP addresses (e.g. '1.2.3.4,5.6.7.8').
hostname — Hostname of the system (e.g. 'myserver').
primaryAddress — Address of the primary interface in CIDR notation (e.g. '1.2.3.4/22').
Current settings:
defaultRoute: 192.168.1.1
dhcp: false
dnsDomain: example.com
dnsServers: 192.168.1.1
hostname: Delphix
primaryAddress: 192.168.1.100/24
6. Configure the hostname. If you are using DHCP, this step can be skipped.
7. Configure DNS. If you are using DHCP, this step can be skipped.
8. Configure the network interface, using either DHCP or a static address.
DHCP Configuration
delphix network setup update *> set dhcp=true
Static Configuration
delphix network setup update *> set dhcp=false
delphix network setup update *> set primaryAddress=<address>/<prefix-len>
The static IP address must be specified in CIDR notation (for example, 192.168.1.2/24)
9. Configure a default gateway. If you are using DHCP, this step can be skipped.
delphix> exit
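The CIDR prefix length used in the static configuration above maps to a conventional dotted-quad netmask. This sketch converts one to the other; the helper name is illustrative:

```shell
# prefix_to_netmask: expand a CIDR prefix length into a dotted-quad
# netmask (helper name is illustrative).
prefix_to_netmask() {
    p=$1
    mask=""
    for _ in 1 2 3 4; do
        if [ "$p" -ge 8 ]; then
            o=255
            p=$((p - 8))
        else
            o=$((256 - (1 << (8 - p))))
            p=0
        fi
        if [ -z "$mask" ]; then mask=$o; else mask="$mask.$o"; fi
    done
    echo "$mask"
}

prefix_to_netmask 24   # 192.168.1.2/24 -> 255.255.255.0
prefix_to_netmask 22   # 1.2.3.4/22    -> 255.255.252.0
```

So a primaryAddress of 192.168.1.2/24 is the address 192.168.1.2 with netmask 255.255.255.0.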
Prerequisites
Follow the initial installation instructions in Installing the Delphix Engine.
Procedure
1. Shut down the guest operating system and power off the Delphix Engine.
2. Under Getting Started, select Edit Virtual Machine Settings.
3. You can now customize the system settings.
Setting — Options
Memory Size — Set to 64GB or larger based on sizing analysis. In the Resource Allocation panel, ensure that Reserve all guest memory is checked.
Number of CPUs — Allocate 8 vCPUs or more based on your Delphix licensing. vCPUs should be fully reserved to ensure that the Delphix Engine does not compete for CPU cycles on an overcommitted host.
Disks for Data Storage — Add virtual disks to provide storage for user data such as dSources and VDBs. The underlying storage must be redundant. Add a minimum of 150GB per storage disk. All virtual disks should be the same size and have the same performance characteristics. If using VMFS, use thick provisioned, eager zeroed disks. To alleviate IO bottlenecks at the virtual controller layer, spread the virtual disks across all 4 virtual SCSI controllers.
Data Storage Multipathing Policy — For EMC storage, the multipathing policy should always be set to round robin (the default in 5.x). Additionally, change the IO Operation Limit from the default of 1000 to 1. This should be strongly considered for other storage platforms as well. See the VMware KB article EMC VMAX and DMX Symmetrix Storage Array Recommendations for Optimal Performance on VMware ESXi/ESX.
Network — The network configuration is set to use a VMXNET3 network adapter, a tuned network interface included with the VMware Tools provided in the OVA file. By default it is assigned to VM Network.
Jumbo Frames — VMXNET3 supports Ethernet jumbo frames; you can use this to maximize throughput and minimize CPU utilization.
Adding Link Aggregation via VMware NIC Teaming — To increase throughput or for failover, add multiple physical NICs to the vSwitch that is connected to the Delphix Engine. To increase throughput, NIC Teaming must use the Route Based on IP Hash protocol for load balancing. See the VMware KB article Troubleshooting IP-Hash outbound NIC selection.
Dedicate Physical NICs to the Delphix Engine — For best performance, assign the Delphix Engine to network adapters that are used exclusively by Delphix.
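One way to validate jumbo frames end to end is an unfragmented large ping from a source or target host to the engine. The arithmetic below derives the maximum ICMP payload for a 9000-byte MTU; the ping flags shown are Linux iputils syntax, and the engine IP is a placeholder:

```shell
# A 9000-byte MTU minus the 20-byte IP header and 8-byte ICMP header
# leaves the largest payload that must pass unfragmented.
mtu=9000
payload=$((mtu - 20 - 8))
# Print the check to run from a source/target host (IP is a placeholder):
echo "ping -M do -s $payload -c 3 192.168.1.100"
```

If the ping fails with a "message too long" error, some hop in the path (vSwitch, physical switch, or host NIC) is not configured for jumbo frames.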
Post-Requisites
After making any changes to the system settings, power on the Delphix Engine again and proceed with the initial system configuration as
described in Setting Up the Delphix Engine.
Prerequisites
You must have Delphix sysadmin privileges to perform the following procedures. For more information, see The delphix_admin and sysadmin
User Roles .
Click on the Server Setup link in the lower left corner of the default Delphix login dialog box, and log in to the Delphix Setup user interface as sysadmin, as shown here...
Procedure
The setup procedure uses a wizard process to take you through five configuration screens:
System Time
Network
Storage
Serviceability
Authentication Service
1. Connect to the Delphix Engine at http://<Delphix Engine>/login/index.html#serverSetup.
The ServerSetup application will launch when you connect to the server.
Enter your sysadmin login credentials, which initially default to the username sysadmin, with the initial default password of sysadmin.
On first login, you will be prompted to change the initial default password.
2. Click Next.
System Time
1. Select an option for maintaining the system time.
Option — Notes
Set NTP Server — After selecting this option, select an NTP server from the list, or click Add NTP Server to manually enter a server. Be aware that you can highlight more than one NTP Server entry in order to select more than one. When configuring a Delphix Engine on VMware, be sure to configure the NTP client on the host to use the same servers that you enter here. On a vSphere client, the NTP client is set in the Security Profile section of the configuration process.
Manually Select Time and Date — Click Use Browser Time and Date to set the system time, or select the date and time by using the calendar and clock displays, then select the Time Zone. If you select Use Browser Time and Date, the date and time will persist as your local time, even if you change the time zone. Snapshots from dSources and VDBs reflect the time zone of the source or target environment, not that of the Delphix Engine.
2. Be sure to choose the appropriate time zone for the Delphix Server, using the drop-down list in the lower left-hand corner of this page.
3. Click Next.
Network Configuration
The initial out-of-the-box network configuration in the OVA file is set to use a VMXNET3 network adapter.
1. Under Network Interfaces, click Settings.
2. The first Network Interface is Enabled by default.
3. Select DHCP or Static network addressing.
For Static addressing, enter an IP Address and Subnet Mask.
The static IP address must be specified in CIDR notation (for example, 192.168.1.2/24)
Storage
The Delphix Engine automatically discovers and displays storage devices. For each device, set the Usage Assignment to Data and set the Storage Profile to Striped.
You can associate additional storage devices with the Delphix Engine after initial configuration, as described in Adding and Expanding Storage
Devices.
Storage Disk Usage Assignment Options
There are three options for storage disk usage assignment:
Data
Once you set the storage unit assignment for a disk to Data and save the configuration, you cannot change it again.
Unassigned
These are disks being held for later use.
Unused
These disks can be configured later to add capacity for existing data disks.
Serviceability
1. If a Web Proxy Server is necessary for your environment, select Use a Web Proxy and enter the required information.
When a critical fault occurs with the Delphix Engine, it will automatically send an email alert to the delphix_admin user. Make sure that
you configure the SMTP server so that alert emails can be sent to this user. See System Faults for more information.
Authentication Service
To avoid configuration issues, consult with your lightweight directory access protocol (LDAP) administrator before attempting to set up LDAP authentication of users for the Delphix Engine.
1. Select Use LDAP to enable LDAP authentication of users.
2. Enter the LDAP Server IP address or hostname, and Port number.
3. Select the Authentication method.
4. Select whether you want to Protect LDAP traffic with SSL/TLS.
If you select this option, you must import the server certificate.
5. When you are done with the LDAP configuration, click Test Connection.
6. Click Next.
Registration
If the Delphix Engine has access to the external Internet (either directly or through a web proxy), then you can auto-register the Delphix Engine:
1. Enter your Support Username and Support Password.
2. Click Register.
If external connectivity is not immediately available, you must perform manual registration.
1. Copy the Delphix Engine registration code in one of two ways:
a. Manually highlight the registration code and copy it to clipboard. Or,
b. Click Copy Registration Code to Clipboard.
2. Transfer the Delphix Engine's registration code to a workstation with access to the external network Internet. For example, you could
e-mail the registration code to an externally accessible e-mail account.
3. On a machine with access to the external Internet, use your browser to navigate to the Delphix Registration Portal at http://register.delphix.com.
4. Login with your Delphix support credentials (username and password).
5. Paste the Registration Code.
6. Click Register.
While your Delphix Engine will work without registration, we strongly recommend that you register each Delphix Engine as part of setup.
Failing to register the Delphix Engine will impact its supportability and security in future versions.
Summary
1. The final summary form will enable you to review your configurations for System Time, Network, Storage, Serviceability, and
Authentication. Click Modify to change the configuration for any of these server settings.
2. If you are ready to proceed, then click Finish.
3. Click Yes to confirm that you want to save the configuration.
Post-Requisites
After configuration is complete, the Delphix Engine will restart and launch the browser-based Delphix Admin user interface. The URL for
this will be http://<Delphix Engine>/login/index.html#delphixAdmin.
After the Delphix Admin interface launches, the delphix_admin can login using the initial default username delphix_admin and the initial
default password delphix. On first login, you will be prompted to change the initial password.
You can access the Server Setup interface at any time by navigating to http://<Delphix Engine>/login/index.html#serverSetup and entering the sysadmin credentials.
Related Links
The delphix_admin and sysadmin User Roles
System Faults
Adding Delphix Users and Privileges
Adding and Expanding Storage Devices
Procedure
1. You can retrieve the Delphix Engine Registration Code through the ServerSetup application after logging in with the sysadmin credentials.
2. In the Registration panel, click View.
3. The Registration Code is displayed in the bottom half of the Registration window.
4. If your local machine is connected to the external Internet, you can auto-register the Delphix Engine:
a. Enter your Support Username and Support Password.
b. Click Register.
5. If external connectivity is not immediately available, you must register manually.
a. Copy the Delphix Engine registration code by either manually highlighting and copying to clipboard or clicking Copy
Registration Code to Clipboard.
b. Transfer the Delphix Engine's registration code to a location with an external network connection. For example, you could e-mail
the registration code to an externally accessible e-mail account.
c. On a machine with external network access, use your browser to navigate to the Delphix Registration Portal at https://register.delphix.com.
d. Login with your support credentials.
e. Paste the Registration Code.
f. Click Register.
Post-Requisites
Following registration, you will receive an e-mail confirming the registration of your Delphix Engine.
Factory Reset
This topic describes the process for returning the Delphix Engine to "factory default" settings. This completely removes all DATA and
CONFIGURATION.
Prerequisites
It is recommended to shut down and remove all VDBs before resetting the Delphix Engine. Failure to do so could lead to stale data mounts in target environments (stale NFS mounts in *nix environments, or iSCSI I/O errors in Windows environments). For the same reason, disable all dSources that use pre-provisioning (all SQL Server dSources, and any Oracle dSources with validated sync enabled).
Use Factory Reset only when a complete reset and reconfiguration of the Delphix Engine is necessary, as all Delphix Engine objects
will be de-allocated.
Procedure
System Administrators
Delphix system administrator users are responsible for managing the Delphix Engine itself, but not the objects (environments, dSources, VDBs) within the server. For example, a system administrator is responsible for setting the time on the Delphix Engine and its network address, restarting it, creating new system administrator users (but not Delphix users), and other similar tasks.
The sysadmin user is the default system administrator user. While this user can be suspended, it may not be deleted.
System administrators administer the Delphix Engine through the ServerSetup interface, which is accessed through a Web browser at http://<Delphix Engine>/ServerSetup.html, as well as through the command line interface accessible via ssh.
Delphix Users
Delphix users are responsible for managing the objects within the Delphix Engine. These include:
dSources
VDBs
Groups
Policies
Space and Bandwidth
Replication Services
Backup and Restore
A Delphix user can be marked as a Delphix Admin. Delphix Admins have three special privileges:
They can manage other Delphix users
They implicitly have Owner privileges for all Delphix objects
They can create new groups and new environments
The delphix_admin is the default Delphix user provided with a Delphix Engine and is a Delphix Admin. Like the sysadmin user, delphix_admin cannot be deleted. When the Delphix Admin interface launches, the delphix_admin can log in using the username delphix_admin and password delphix.
A Delphix Admin user accesses objects with the Delphix Engine Admin Interface, which is accessed through a Web browser at http://<Delphix Engine>/Server.html.
Updating Credentials
System administrator users can change the password of any other system administrator user. Delphix Admin users can change the password of
any other Delphix user (including other Delphix Admins). Regular Delphix users can change their own passwords but must provide their old
password to do this.
Procedure
1. Launch the ServerSetup application and log in using sysadmin level credentials.
2. In the System User Management panel, click +.
3. Enter the required information.
4. Click Save.
Procedure
1. Launch the ServerSetup application and log in using sysadmin level credentials.
2. In the System User Management panel, click the user whose password you want to change.
3. Select Change Password?
4. Enter the new password in the New Password and Verify New Password fields.
5. Click OK.
Procedure
1. Launch the ServerSetup application and log in using the sysadmin (or other system administrator) credentials.
2. In the System User Management panel, click the user you want to suspend or delete.
3. Suspend the user by clicking the red, crossed circle icon in the lower left corner of the System User Management panel.
4. Delete the user by clicking the trash can icon in the lower left corner of the panel.
Procedure
1. Launch the ServerSetup application and log in using system administrator credentials.
2. In the System User Management panel, click on the name of the user you want to reinstate.
3. Reinstate the user by clicking the yellow checkmark icon in the lower left corner of the System User Management panel.
Capacity Management - the amount of physical storage available and what is currently used
TimeFlow Ratio - see above
VDB Ratio - a measure of the amount of physical space that would be occupied by the database content against the amount of storage
occupied by that same database content as VDBs.
Performance Management - the amount of network bandwidth available and the amount that VDBs are currently utilizing, as well as
information about specific VDB network usage
Notes
Name — Name of the group or database object. Click the expand icon next to a group name to see the objects in that group.
Date
Quota — The maximum amount of storage space allocated to the group or object, also known as the ceiling. See Setting Quotas for more information. You can see quota allocations for groups and objects in the Graph view of the Capacity screen.
Used — The amount of storage space currently used by the group or object.
Unvirtualized — Estimated amount of space that the group or object would occupy in an unvirtualized state.
Ratio — The amount of storage space occupied by the group or object in the unvirtualized state as opposed to the amount of space it occupies as a virtual object. This can also be thought of as the "de-duplication ratio."
Keep Until — For Snapshots, the number of days a Snapshot is retained as set by the Snapshot Retention Policy. See the topics under Managing Policies for more information.
Summary
Metric — Notes
dSource Ratio — The total amount of storage space occupied by the sources of all dSources as opposed to the amount of storage space occupied by the dSources themselves.
VDB Ratio — The total amount of storage space occupied by the databases that are the sources for the VDBs as opposed to the amount of storage space occupied by the VDBs.
TimeFlow Ratio — The total amount of storage space occupied by all snapshots multiplied by their unvirtualized size as opposed to the amount of storage space occupied by the virtualized snapshots, archive logs, and temp files.
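As a worked example of the ratio arithmetic (the figures below are hypothetical, chosen only to illustrate the calculation):

```shell
# Hypothetical figures: 500 GB of unvirtualized source data held in
# 20 GB of Delphix storage yields a 25.0x ratio.
unvirtualized_gb=500
virtualized_gb=20
ratio=$(awk -v u="$unvirtualized_gb" -v v="$virtualized_gb" \
    'BEGIN { printf "%.1f", u / v }')
echo "${ratio}x"
```

A higher ratio indicates greater storage savings from virtualization.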
Related Links
Adding and Expanding Storage Devices
Changing Snapshot Retention to Increase Capacity
Deleting Objects to Increase Capacity
Managing Policies
Setting Quotas
Setting Quotas
This topic describes how to set quotas for database objects.
Procedure
1. Log into the Delphix Admin application with delphix_admin credentials.
2. Select Resources > Capacity.
3. In the Quotas column, click next to the group or object for which you want to set a quota.
4. Enter the amount of storage space you want to allocate for a quota.
5. Click outside the column again to set the amount.
Procedure
1. Log into the Delphix Admin application using delphix_admin credentials.
2. Select Resources > Capacity.
3. Select the groups or objects you want to delete.
As you select items, you will see them added to the Total Capacity of Objects Selected for Deletion.
4. Click Delete.
Dependencies
If there are dependencies on the SnapShot, you will not be able to delete the SnapShot to free space; the dependencies rely on the data associated with the SnapShot.
Procedure
1. Log into the Delphix Admin application using delphix_admin credentials.
2. Select Resources > Capacity.
3. Click in the Keep Until column for the snapshot you want to edit.
4. Select the number of days you want to preserve the snapshot.
5. Click outside the column to set the change.
Getting Started
Delphix storage migration is a new feature available in Delphix Engine version 4.3. This feature allows you to remove storage devices from your
Delphix Engine, provided there is sufficient unused space, thus allowing you to repurpose the removed storage device to a different server. You
can also use this feature to migrate your Delphix Engine to different storage by adding the new storage devices and then removing the old storage
devices.
areece-test1.dcenter 'Disk10:2'> ls
Properties
type: ConfiguredStorageDevice
name: Disk10:2
bootDevice: false
configured: true
expandableSize: 0B
model: Virtual disk
reference: STORAGE_DEVICE-6000c293733774b7bb0e4aea83513b36
serial: 6000c293733774b7bb0e4aea83513b36
size: 8GB
usedSize: 7.56MB
vendor: VMware
b. To migrate the Delphix Engine to new storage, add storage devices backed by the new storage to the Delphix Engine, then
remove all the devices on the old storage.
2. Use the Delphix command line interface (CLI) to initiate the removal of your selected device.
3. Data will be migrated from the selected storage device to the other configured storage devices. This process will take longer the more
data there is to move; for very large disks, it could potentially take hours. You can cancel this step if necessary.
4. The status of the device changes from configured to unconfigured and an alert is generated to inform you that you can safely detach
the storage device from the Delphix Engine. After this point, it is not possible to undo the removal, although it is possible to add the
storage device back to the Delphix Engine.
5. Use the hypervisor to detach the storage device from the Delphix Engine. After this point, the Delphix Engine is no longer using the
storage device, and you can safely re-use or destroy it.
User Interface
Delphix storage migration is currently available exclusively via the CLI. There are 3 entry points:
storage/remove — The status of the current or most recent removal, including the total memory used by all removals up to this point.
storage/device $device/removeVerify — Verifies whether the device can be removed, and reports the free space and mapping memory that removal would produce.
storage/device $device/remove — Initiates removal of the device.
4. Type cd storage/device.
5. (VMware only) Confirm that your disk selection is correct by validating that the serial matches your UUID:
areece-test1.dcenter storage device 'Disk10:2'> ls
Properties
type: ConfiguredStorageDevice
name: Disk10:2
bootDevice: false
configured: true
expandableSize: 0B
model: Virtual disk
reference: STORAGE_DEVICE-6000c2909ccd9d3e4b5d62d733c5112f
serial: 6000c2909ccd9d3e4b5d62d733c5112f
size: 8GB
usedSize: 8.02MB
vendor: VMware
6. Execute removeVerify to confirm that removal will succeed. Validate the amount of memory/storage used by the removal:
areece-test1 storage device 'Disk10:2'> removeVerify
areece-test1 storage device 'Disk10:2' removeVerify *> commit
type: StorageDeviceRemovalVerifyResult
newFreeBytes: 15.85GB
newMappingMemory: 3.14KB
oldFreeBytes: 23.79GB
oldMappingMemory: 0B
7. Execute the remove command to begin evacuating data from the device.
9. Once the device evacuation has completed, the job will finish and a fault will be generated. Detach the disk from your hypervisor and the
fault will clear on its own. An example of the fault created is seen below.
When using VMDKs, deleting the wrong VMDK could cause data loss. Therefore, it is highly advisable to detach the device, then verify
that the Delphix Engine continues to operate correctly, and lastly delete the VMDK.
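The serial-to-UUID comparison in step 5 can also be scripted before detaching anything. This is a sketch; both values are illustrative, copied from the listing above:

```shell
# Compare the CLI-reported serial with the VMDK UUID from the hypervisor
# (both values here are illustrative, taken from the listing above).
cli_serial="6000c2909ccd9d3e4b5d62d733c5112f"
vmdk_uuid="6000c2909ccd9d3e4b5d62d733c5112f"
if [ "$cli_serial" = "$vmdk_uuid" ]; then
    echo "match: safe to proceed"
else
    echo "MISMATCH: do not remove this device"
fi
```

Making the check explicit guards against deleting the wrong VMDK, which as noted above could cause data loss.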
Getting the UUID of a RDM Disk from VMware, via the vSphere GUI
1. In the ESX graphical user interface (GUI), select your VM.
2. Click Edit settings.
3. If not already displayed, select the Hardware tab.
4. Select the device you want to remove.
5. The UUID of the device appears in the title bar, as seen below.
Related Links
Adding and Expanding Storage Devices
Prerequisites
If you are expanding a storage device after initial configuration, first make sure to add capacity to it using the storage management tools available
through the device's operating system. In vSphere, for example, you can add capacity using Edit System Settings.
Procedure
1. Launch the ServerSetup application and log in using the sysadmin credentials.
2. In the Storage section of the Server Setup Summary screen, click Modify.
3. The Delphix Engine should automatically detect any new storage devices.
If a newly added storage device does not appear in the Storage section of the Server Setup Summary screen, click Rediscover.
4. Select Expand for each device that you want to expand.
The Expand checkbox appears next to the name of devices that have added capacity (in other words, the underlying LUN has been
expanded), and the Unused column indicates how much capacity is available for each device. Newly-added devices will have a
drop-down in the Usage Assignment column. Set the Usage Assignment to DATA for newly-added devices that you wish to add to the
storage pool.
5. Click OK.
Related Links
Setting Up the Delphix Engine
System Monitoring
These topics describe system monitoring features.
Configuring SNMP
Viewing Action Status
System Faults
Viewing System Events
Accessing Audit Logs
Creating Support Logs
Setting Support Access Control
Setting SysLog Preferences
Diagnosing Connectivity Errors
Email (SMTP) Alerts
Configuring SNMP
This topic describes how to configure SNMP.
SNMP is a standard protocol for managing devices on IP networks. The Delphix Engine can be configured to send alerts to an external SNMP
manager.
Prerequisites
At least one SNMP manager must be available, and must be configured to accept SNMPv2 InformRequest notifications.
Delphix's MIB (Management Information Base) files must be installed on the SNMP manager or managers. These MIB files describe the
information that the Delphix Engine will send out. They are attached to this topic.
Procedure
1. Choose the Server Setup option at the Delphix Engine login screen.
2. Log into the Server Setup application of the Delphix Engine using the sysadmin username and password in the Delphix Setup login
screen.
3. Select Engine Setup.
4. On the Delphix Engine Setup screen, select Delphix Engine Preferences > SNMP Configuration.
5. Set the severity level of the messages you want to be sent to the SNMP manager(s).
6. Click the + icon.
7. Enter an SNMP Manager hostname / IP address.
Provide a community string and adjust the port number if necessary.
8. Click Save.
The newly-entered manager will appear in the list.
9. An attempt will be made to connect with the SNMP manager by transmitting an informational level message. If a response is received from the manager within 20 seconds, a checkmark will appear along with the manager entry. If not, a red X will appear; check your settings and try again.
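On the manager side, a Net-SNMP snmptrapd daemon is one common way to receive these notifications. The fragment below is a minimal configuration sketch; Net-SNMP tooling and the community string "public" are assumptions for illustration, not Delphix requirements:

```
# /etc/snmp/snmptrapd.conf (sample)
# Accept SNMPv2c informs sent with this community string:
authCommunity log,execute,net public
```

With this in place and snmptrapd started (e.g. with logging to stderr via -Lo), the informational test message from step 9 should appear in the daemon's log.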
This topic describes how to view the status of actions for the Delphix Engine.
To view the status of actions that are currently running on the Delphix Engine, open the Action sidebar. To view details of currently-running and
completed jobs, open the Dashboard.
Description
The Action sidebar consists of two sections. The top section lists actions that are currently running on the Delphix Engine. The bottom section,
labeled Recently completed, contains actions which have recently completed.
Each action is initially collapsed and only presents the title of the action. Click an action to expand it and see more details such as progress,
elapsed time, and a description of the operation in progress.
The following is an example of the Action sidebar when a Link action is running.
Sub-action
Each action may contain one or several sub-actions which represent the execution of a subset of the action itself. Click an action to see its
sub-actions and their respective details. Note that the list of sub-actions is created dynamically during the execution of the action.
The following is an example of an Environment Refresh action and its sub-actions.
Errors
When an error condition occurs during the execution of an action, the background color of the action's box becomes red, and the action remains
in the top section until you dismiss it.
1. Click the action title to expand it.
The action will expand to display a description of the error, suggestions to resolve it, and sometimes the raw output of command
execution.
To dismiss the action:
1. Click the X next to the action displaying an error.
The following is an example of an action failure displayed in the Action sidebar.
System Faults
Overview
Viewing Faults
Addressing Faults
Fault Lifecycle Example
Related Links
Overview
This topic describes the purpose and function of system faults.
System faults describe states and configurations that may negatively impact the functionality of the Delphix Engine and can only be resolved through active user intervention. When you log in to the Delphix Admin application as a delphix_admin, the number of outstanding system faults appears on the right-hand side of the navigation bar at the top of the screen. Faults serve as a record of all issues impacting the Delphix Engine and can never be deleted. However, ignored and resolved faults are not displayed in the faults list.
Viewing Faults
To view the list of active system faults:
1. Click Faults on the right-hand side of the navigation bar.
2. Click any fault in the list to expand it and see its details.
Each fault comprises six parts:
Severity — How much of an impact the fault will have on the system. A fault may have a severity of either Warning or Critical. A Warning fault implies that the system can continue despite the fault but may not perform optimally in all scenarios. A Critical fault describes an issue that breaks certain functionality and must be resolved before some or all functions of the Delphix Engine can be performed.
Date — The date the fault was diagnosed by the Delphix Engine.
Target Object — The object that the fault was posted against.
Title — A short descriptive summary of the fault.
Details — A detailed summary of the cause of the fault.
User Action — The action you can take to resolve the fault.
Addressing Faults
After viewing a fault and deciding on the appropriate course of action, you can address the fault through the user interface (UI). You can mark a
fault as Ignored or Resolved. If you have fixed the underlying cause of the fault, mark it as Resolved. Note that if the fault condition persists, it
will be detected in the future and re-diagnosed. You can mark the fault as Ignored if it meets the following criteria:
The fault is caused by a well-understood issue that cannot be changed
Its impact to the Delphix Engine is well understood and acceptable
In this case, the fault will not be re-diagnosed even if the fault condition persists. You will receive no further notifications.
To address a fault follow the steps below.
1. In the top menu bar, click Faults.
2. In the list of faults, click a fault date/name to view the fault details.
3. If the fault condition has been resolved, click Mark Resolved.
Note that if the fault condition persists it will be detected in the future and re-diagnosed.
4. If the fault condition describes a configuration with well-understood impact to the Delphix Engine that cannot be changed, you can ignore
the fault by clicking Ignore.
Note that an ignored fault will not be diagnosed again even if the underlying condition persists.
When a critical fault occurs, the Delphix Engine immediately sends an email to the delphix_admin. Make sure you have
configured an SMTP server so that this email can be sent. See Setting Up the Delphix Engine for more information.
Below is an image of the fault card for the fault "TCP slot table entries below recommended minimum."
The Details section of the fault explains that the sunrpc.tcp_slot_table_entries property on frodo.dcenter.delphix.com is set to a value that is
below the recommended minimum of 128. The User Action section instructs you to adjust the value of the sunrpc.tcp_slot_table_entries property
upward to the recommended minimum. The process for adjusting this property differs between operating systems. To resolve the underlying
issue, search for "how to adjust sunrpc.tcp_slot_table_entries" with a search engine; the results include a link to the Delphix
community forum describing how to resolve this issue. After following the instructions applicable to your operating system, return to the Delphix UI
and mark the fault Resolved.
Related Links
Setting Up the Delphix Engine
Procedure
1. Launch the Delphix Admin application and log in with delphix_admin credentials.
2. Select System > Event Viewer.
3. Select a time range.
4. Click Search.
Procedure
1. Log into the Delphix Admin application using delphix_admin credentials.
2. Select System > Audit Logs.
3. Select an audit log time range.
4. Click Search.
Procedure
Using the GUI:
1. Log into the Delphix Admin application using delphix_admin credentials.
2. Select System > Support Logs
3. Select Transfer or Download.
a. If you select Download, then the support bundle will be downloaded as a ".tar" file into a folder on your workstation
b. If you select Transfer, then the support bundle will be uploaded over HTTPS to Delphix Support. If you have configured an
HTTP proxy, it will be used to send the support bundle.
c. If there is a support case involved, please enter the case number to associate the bundle to the case
4. Click OK.
a. If you had selected Download and have the ".tar" file in a folder on your workstation, please upload that file to Delphix Support
via the website at "http://upload.delphix.com".
b. If there is a support case involved, please enter the case number (again) to associate the bundle to the case
You can also access support log functionality in the ServerSetup application using sysadmin credentials. Click Support Bundles in
the top menu bar.
Using the CLI:
1. Connect to the Delphix Engine CLI using ssh:
ssh <delphix_user>@<delphixengine>
2. Run the upload operation:
delphix > service
delphix service > support
delphix service support > bundle
delphix service support bundle > upload
Procedure
1. Log into the ServerSetup application using sysadmin credentials.
2. Select Server Preferences > Support Access.
3. Click Enable.
4. Set the time period during which you want to allow Delphix Support to have access to your instance of the Delphix Engine.
5. Click Generate Token.
Provide the token to Delphix Support to enable access to your server.
Procedure
1. Log into the ServerSetup application using sysadmin credentials.
2. Select Server Preferences > Syslog Configuration.
3. Select the severity level of the messages you want sent to the SysLog server.
4. Click Add Server.
5. Enter the SysLog server hostname/IP address and port number.
6. Click Add.
7. Click Enable.
This shows a popup message with more information about the problem and what actions to take to resolve it. For some errors, the Delphix Engine
will be able to diagnose the problem further and display this extra information under Diagnosing Information. In the screenshot above, the job
failed because the Delphix Engine was unable to look up the host address.
Viewing Active Faults
A fault symbolizes a condition that can affect the performance or functionality of the Delphix Engine and must be addressed. Faults can be either
warnings or critical failures that prevent the Delphix Engine from functioning normally. For example, a problem with a source or target environment
can cause SnapSync or LogSync policy jobs to fail. A fault remains active until:
The error stops occurring, or
You manually resolve or ignore it
For example, if a background job fails, it will create a fault that describes the problem. To view any active faults:
1. In the top right-hand corner of the Delphix UI, click Faults.
This brings up a popup box listing all active faults.
The screenshot above illustrates a fault with regard to database network connectivity. The Delphix Engine will mark an object with a warning
triangle to indicate that it is affected by an external problem. You can view more details of the fault by looking at the active faults and their fault
effects.
Overview
The configuration for SMTP-based alerts has two components:
Configuration of an SMTP gateway by the Delphix system administrator
Configuration of one or more alert profiles (if needed)
Alert Profiles
The Delphix Engine sends out notifications based on alert profiles. An alert profile contains various filtering options; if all conditions are met, a
notification is sent.
Alert profiles can define the following things:
Where to send the alert (email address)
Which alert severity to send (Critical, Warning, Informational, Audit)
Which objects to monitor (dSources, VDBs)
Which event types to monitor
By default the Delphix Engine has a single alert profile configured with the following parameters:
Where to send the alert: Email Address defined for user delphix_admin
Which alert severity to send: CRITICAL or WARNING
Which objects to monitor: ALL
Which event types to monitor: ALL
Using the CLI, it is possible to:
Modify the system default alert profile
Create additional profiles in addition to the default one
Set multiple actions for a single profile, such as "email delphix_admin" and "email [email protected]".
1. Connect to the Delphix Engine CLI using ssh:
ssh delphix_admin@yourdelphixengine
2. Go into your alerts and list the alerts you already have.
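The listing step can be sketched as a CLI session (illustrative; the exact prompt paths are an assumption modeled on the CLI examples elsewhere in this guide):

```text
delphix > alert
delphix alert > profile
delphix alert profile > list
```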
Profile Filters
As seen above, you can define what types of alerts are sent out to the various email addresses using profile filters. The three filter
types are:

Filter          Example                    Purpose
severityFilter  severityFilter=CRITICAL    This would only send out alerts with CRITICAL severity
eventFilter     eventFilter=fault.*        This would only send alerts out based on faults generated on the engine
targetFilter    targetFilter=Group/DB      This would only send out alerts related to the database DB located in the group Group
Action Types
When the action type is set to AlertActionEmailUser, the alert is created for the email address of the user currently logged into the command line
interface. The "actions.0.addresses" array is not available for this type.
Set action type to AlertActionEmailList in order to create an alert for any number of users. When this type is selected, an email address may be
defined in each element of the "actions.0.addresses" array as illustrated above.
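As a sketch, configuring an AlertActionEmailList action from the CLI might look like the following session (illustrative; the create/commit flow and prompt text are assumptions, the actions.0 property names are taken from the text above, and the addresses are placeholders):

```text
delphix > alert profile create
delphix alert profile create *> set actions.0.type=AlertActionEmailList
delphix alert profile create *> set actions.0.addresses.0="[email protected]"
delphix alert profile create *> set actions.0.addresses.1="[email protected]"
delphix alert profile create *> commit
```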
Performance Tuning
These topics describe how to use the performance analytics tool to improve performance of the Delphix Engine, along with specific
configuration recommendations for hosts, networks, and storage to improve performance.
Configuration Options for Improved Performance
Network Performance Configuration Options
Optimal Network Configuration Parameters for the Delphix Engine
Network Operations Using the Delphix Session Protocol
Network Performance Tool (iPerf)
Network Performance Tool notes - Restricted
Storage Performance Configuration Options
Optimal Storage Configuration Parameters for the Delphix Engine
Storage Performance Tool (fio)
Storage Performance Tool notes - Restricted
Host Performance Configuration Options
Target Host Configuration Options for Improved Performance
Performance Analytics
Performance Analytics Tool Overview
Working with Performance Analytics Graphs in the Graphical User Interface
Performance Analytics Statistics Reference
Performance Analytics Tool API Reference
Performance Analytics Case Study: Using a Single Statistic
Performance Analytics Case Study: Using Multiple Statistics
Network    Bandwidth
100Mb      ~=10MB/sec
1Gb        ~=100MB/sec
10Gb       ~=1GB/sec
Low network throughput can impact the Delphix Engine in a number of ways:
Increasing the amount of time it takes to perform a SnapSync operation, both for initial load and subsequent regular snapshots
Managing LogSync operations in a high change environment
Poor VDB performance when an application is performing large sequential I/O operations, such as sequential table scans for reporting or
business intelligence, or RMAN backups of the VDB.
Delphix Engine throughput must exceed the sum of the peak I/O loads of all VDBs. Delphix incorporates an I/O-Collector toolkit to collect I/O data
from each production source database and pre-production server.
Best practices to improve network throughput include:
Overview
Delphix Session Protocol, or DSP, is a communication protocol that operates at the session and presentation layer in the Open Systems
Interconnection (OSI) model.
DSP supports the request-reply pattern for communication between two networked peers.
Key Concepts
The foundation of DSP is built on top of a few key abstractions, namely, exchange, task, nexus, and service. For an overview of how DSP
works and the features it provides, let's start with these abstractions.
An exchange refers to an application-defined protocol data unit, which may be a request or a response. DSP supports the request-response
pattern for communication. For each request sent, there is a corresponding response which describes the result of the execution. An application
protocol is made up of a set of exchanges.
A nexus (a.k.a., session) refers to a logical conduit between the client and server application. In contrast, a transport connection (a.k.a.,
connection) refers to a physical link. A nexus has a separate naming scheme from the connection, which allows it to be uniquely and persistently
identified independent of the physical infrastructure. A nexus has a different lifecycle than the connection. It is first established over a leading
connection. After it comes into existence, new connections may be added and existing ones removed. It must have at least one connection to
remain operational but may live on even after all connections are lost. Nexus lifecycle management actions, such as create, recover, and destroy,
are always initiated by the client with the server remaining passive.
A nexus has dual channels, namely, the fore channel and the back channel. The fore channel is used for requests initiated from the client to the
server; and the back channel from the server to the client. From a request execution perspective, the nexus is full duplex and the channels are
functionally identical, modulo the operational parameters that may be negotiated independently for each channel. A channel supports a number of
features for request processing, such as ordered delivery, concurrent execution, remote cancellation, exactly-once semantics, and throughput
throttling.
A service refers to a contract that consists of all exchanges (both the requests and the corresponding responses) defined in an application
protocol. Given the full duplex nature of request execution in DSP, part of the service is fulfilled by the server and the remaining by the client,
where the client and server roles are defined from the nexus management perspective.
A task implements a workflow that typically involves multiple requests executed in either or both directions over the nexus. A task is a
self-contained building block, available in the form of a sharable module including both the protocol exchanges and implementation, that can be
easily integrated into other application protocols. A library of tasks may significantly simplify distributed application development by making it more
of an assembly experience.
The following is a diagram that illustrates the key abstractions and how they are related to each other.
Security
As a network protocol, DSP is designed with security in mind from the outset. It supports strong authentication as well as data encryption. It follows
a session based authentication model which requires each connection to authenticate before it is allowed to join the session. Authentication is
performed using the Simple Authentication and Security Layer (SASL) framework, a standards-based pluggable security framework. The currently
supported SASL mechanisms include DIGEST-MD5, PLAIN with TLS, CRAM, and ANONYMOUS. Optionally, TLS encryption may be negotiated
between the client and the server for data privacy.
Performance
DSP offers a number of features to enable the support for high performance network applications. For example, it allows multiple requests to be
exchanged in both directions simultaneously, which provides effective pipelining of data transfer to minimize the impact of network latency while
ensuring the total ordering at the same time. It supports trunking that can effectively aggregate the throughput across multiple connections, which
is crucial for long fat networks (LFN) and 10GigE. It also provides optional compression support, which boosts performance over bandwidth-limited
networks. We have observed, through both internal benchmarking and in customer environments, DSP-based applications delivering multiple gigabits per second in
an ideal environment and getting a performance boost of as much as 10x in bandwidth-limited settings.
Resiliency
DSP automatically recovers from transient connection loss without any application involvement. It can also detect random data corruption on the
wire and automatically recover from it. In both cases, outstanding requests are retried once the fault condition is resolved.
DSP offers control over a remotely executing request. Once a request is initiated, the application may cancel it at any time before completion. In
the rare event of a session loss, a new session creation request will be held until the old session has been reinstated. It ensures that we never
leave any unknown or unwanted activities on the remote side and provides better predictability and consistency guarantees over an otherwise
unreliable network.
Diagnosability
Application exceptions encountered during remote execution of a request are communicated back to the initiator through DSP. A standard Java
API is used to facilitate the handling of remote exceptions, which is in many ways identical to the handling of local ones.
DSP provides detailed information and statistics at the session level. The information may be used to examine the state of the session as well as
diagnose performance problems. It is currently exposed via an internal support tool called jmxtool.
Supported Applications
Replication is the first feature to take advantage of DSP. It has been rebuilt on top of DSP and shipping in the field since 3.1. In the latest release,
a number of host based applications, such as SnapSync, V2P, and Delphix connector, use DSP as well.
Prerequisites
The network performance tool measures network performance between a Delphix Engine and an environment host. You must have added an
environment in order to use this tool. At this time, this tool only supports Unix environments; Windows environments must be tested
manually.
This transmission control protocol (TCP) throughput test uses TCP port 50001 by default. The port can also be configured on a
per-test-run basis. For the duration of a given throughput test, a server on the receiver will be listening on this port. For a transmit test,
the receiver is the remote host; for a receive test, the receiver is the Delphix Engine.
A copy of each Source Database
Unique Block Changes in VDBs: When changes are made to a VDB, the Delphix Engine stores the changes in new blocks associated with only that VDB. The
new blocks are compressed.
TimeFlow for dSources and VDBs: The TimeFlow kept for each dSource and VDB comprises snapshots of the database (blocks changed since the previous
snapshot) and archive logs. The retention period for this history of changes is determined by policies established for each
dSource and VDB. The TimeFlow is compressed.
In addition to the storage for these items, the Delphix Engine requires 30% free space in its storage for best performance. See An Overview of
Capacity and Performance Information and related topics for more details on managing capacity for the Delphix Engine.
Best practices for storage performance include:
Initial storage equal to the size of the physical source databases. For high redo rates and/or high DB change rates, allocate an additional
10-20% storage.
Add storage when storage capacity approaches 30% free
Use physical LUNS allocated from storage pools or RAID groups that are configured for availability
Never share physical LUNs between the Delphix Engine and other storage clients.
Keep all physical LUNs the same size. Add new storage by provisioning new LUNs of the same size.
Provision storage using VMDKs or RDMs operating in virtual compatibility mode.
VMDKs should be Thick Provisioned, Eager Zeroed. The underlying physical LUNs can be thin provisioned.
Physical LUNs used for RDMs should be thick provisioned.
Measure or estimate the required IOPS and manage the storage disks to provide this capacity. It is common to use larger numbers of
spindles to provide the IOPS required.
Physical LUNs carved from RAID 1+0 groups or pools with dedicated spindles provide higher IOPS performance than other
configurations
Maximize Delphix Engine vRAM for a larger system cache to service reads
Example
There are two production dSources, totaling 5 TB in size. 5 VDBs will be created for each. The sum of read and write rates on the production source
databases is moderate (1000 IOPS), the sum of VDB read rates is moderate (950 IOPS), and the VDB update rate is low (50 IOPS).
Initial storage equal to 5TB, provisioned as 5 x 1 TB physical LUNs, Thin Provisioned. Allow for expansion of the LUNs to 2TB.
Provision as 5 x 950 GB Virtual Disks. VMDKs must be Thick Provisioned, Eager Zeroed. Using 1 TB LUNs allows expansion to 2 TB
(ESX 5.1 limit).
The storage provisioned to the Delphix Engine must be able to sustain 1000 IOPS (950 + 50). For this reason, each physical LUN
provisioned to the Delphix Engine must be capable of sustaining 200 IOPS. IOPS on the source databases are not relevant to the Delphix
Engine.
64GB Delphix Engine vRAM for a large system cache
Related Topics
Optimal Network Configuration Parameters for the Delphix Engine
An Overview of Capacity and Performance Information
Prerequisites
Prior to setting up the Delphix Engine, the admin can log in to the Delphix CLI using a sysadmin account to launch the Storage Performance Tool.
Because the test is destructive, it will only run against storage which has not been allocated to Delphix for use by the engine. If the storage has
already been allocated but it is acceptable to lose all the data on Delphix, a factory reset can be used to wipe out all data and configuration,
allowing the Delphix-assigned storage to be re-tested.
1. Log in as the sysadmin user to the Delphix Engine CLI using ssh.
a. If the Delphix Engine has not been set up yet, the network setup prompt appears. Dismiss the prompt.
Sample output from the Storage Performance Tool includes per-test latency results (average and 95th percentile, each assigned a letter grade),
load scaling grades, an IO summary listing IOPS, throughput (MBps), and latency statistics (average, min, max, and standard deviation) for each
random and sequential read and write test, and an IO histogram showing the distribution of operation latencies. The grading key used in the
report:

Grading Key:
Test Name            A+      A       A-      B       B-      C       C-      D
------------------   -----   -----   -----   -----   -----   -----   -----   ------
Small Random Reads   2.0     4.0     6.0     8.0     10.0    12.0    14.0    > 14.0
Large Seq Reads      12.0    14.0    16.0    18.0    20.0    22.0    24.0    > 24.0
Small Seq Writes     0.5     1.0     1.5     2.0     2.5     3.0     3.5     > 3.5
Large Seq Writes     2.0     4.0     6.0     8.0     10.0    12.0    14.0    > 14.0
When exclusively using Oracle's Direct NFS Feature (dNFS), it is unnecessary to tune the native NFS client. However, tuning network
parameters is still relevant and may improve performance.
On systems using Oracle Solaris Zones, the kernel NFS client can only be tuned from the global zone.
On Solaris, by default the maximum I/O size used for NFS read or write requests is 32K. When Oracle does I/O larger than 32K, the I/O is broken
down into smaller requests that are serialized. This may result in poor I/O performance. To increase the maximum I/O size:
1. As superuser, add to the /etc/system file:
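The entries themselves did not survive in this copy. A typical setting, assuming the 1 MB (1048576-byte) maximum NFS I/O size used for other platforms later in this section, would be:

```text
* /etc/system entries to raise the maximum NFS I/O size (assumed values)
set nfs:nfs3_bsize=1048576
set nfs:nfs4_bsize=1048576
```

A reboot is required for /etc/system changes to take effect.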
On systems using Oracle Solaris Zones, TCP parameters, including buffer sizes, can only be tuned from the global zone or in
exclusive-IP non-global zones. Shared-IP non-global zones always inherit TCP parameters from the global zone.
Solaris 10
It is necessary to install a new Service Management Facility (SMF) service that will tune TCP parameters after every boot. These are samples of
the files needed to create the service:
File           Installation location
dlpx-tcptune   /lib/svc/method/dlpx-tcptune
dlpx-tune.xml  /var/svc/manifest/site/dlpx-tune.xml
1. As superuser, download the files and install in the path listed in the Installation location in the table.
2. Run the commands to import the manifest and enable the service, for example:
# svccfg import /var/svc/manifest/site/dlpx-tune.xml
# svcadm enable dlpx-tcptune
Verify that the SMF service ran after being enabled by running the command:
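The verification command was not preserved here; a typical check, assuming the service is named after the dlpx-tcptune method script installed above, is:

```text
# svcs -l dlpx-tcptune
```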
# ipadm set-prop -p max_buf=16777216 tcp
# ipadm set-prop -p _cwnd_max=4194304 tcp
# ipadm set-prop -p send_buf=4194304 tcp
# ipadm set-prop -p recv_buf=16777216 tcp
Linux/RedHat/CentOS
Tuning the Kernel NFS Client
In Linux, the number of simultaneous NFS requests is limited by the Remote Procedure Call (RPC) subsystem. The maximum number of
simultaneous requests defaults to 16. Maximize the number of simultaneous requests by changing the kernel tunable
sunrpc.tcp_slot_table_entries value to 128.
RHEL4 through RHEL5.6
1. As superuser, run the following command to change the instantaneous value of simultaneous RPC commands:
# sysctl -w sunrpc.tcp_slot_table_entries=128
2. Edit the file /etc/modprobe.d/modprobe.conf.dist and change the line:
# sysctl -w sunrpc.tcp_slot_table_entries=128
2. If it doesn't already exist, create the file /etc/modprobe.d/rpcinfo with the following contents:
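The file contents shown in the original were not preserved; a module options line consistent with the sysctl value above would be:

```text
options sunrpc tcp_slot_table_entries=128
```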
In more recent releases, while the sunrpc.tcp_slot_table_entries tunable still exists, it has a default value of 2, instead of 16 as in prior releases. The maximum number of
simultaneous requests is determined by the new tunable, sunrpc.tcp_max_slot_table_entries, which has a default value of 65535.
1. As superuser, add the following entries to /etc/sysctl.conf:
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 16777216 16777216
net.ipv4.tcp_wmem = 4096 4194304 16777216
2. Run the command:
/sbin/sysctl -p
IBM AIX
Tuning the Kernel NFS Client
On AIX, by default the maximum I/O size used for NFS read or write requests is 64K. When Oracle does I/O larger than 64K, the I/O is broken
down into smaller requests that are serialized. This may result in poor I/O performance. IBM can provide an Authorized Program Analysis Report
(APAR) that allows the I/O size to be configured to a larger value.
1. Determine the appropriate APAR for the version of AIX you are using:
AIX Version
APAR Name
6.1
IV24594
7.1
IV24688
4. Configure the maximum read and write sizes using the commands below:
# nfso -p -o nfs_max_read_size=524288
# nfso -p -o nfs_max_write_size=524288
5. Confirm the correct settings using the command:
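The confirmation command was not preserved here; one way to display the current values, assuming standard nfso usage, is:

```text
# nfso -o nfs_max_read_size
# nfso -o nfs_max_write_size
```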
By default AIX implements a 200ms delay for TCP acknowledgements. However, it has been found that disabling this behaviour can provide
significant performance improvements.
To disable delayed ACKs on AIX the following command can be used:
# /usr/sbin/no -o tcp_nodelayack=1
To make the change permanent use:
# /usr/sbin/no -o -p tcp_nodelayack=1
HP-UX
Tuning the Kernel NFS Client
On HP-UX, by default the maximum I/O size used for NFS read or write requests is 32K. When Oracle does I/O larger than 32K, the I/O is broken
down into smaller requests that are serialized. This may result in poor I/O performance.
1. As superuser, run the following command:
# /usr/sbin/kctune nfs3_bsize=1048576
2. Confirm the changes have occurred and are persistent by running the following command and checking the output:
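The confirmation command did not survive in this copy; a typical check, assuming standard kctune usage, is to query the tunable by name:

```text
# /usr/sbin/kctune nfs3_bsize
```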
1. As superuser, edit the /etc/rc.config.d/nddconf file, adding or replacing the following entries:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_recv_hiwater_def
NDD_VALUE[0]=16777216
#
TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_xmit_hiwater_def
NDD_VALUE[1]=4194304
In this example, the array indices are shown as 0 and 1. In the actual configuration file, each index used must be strictly
increasing, with no missing entries. See the comments at the beginning of /etc/rc.config.d/nddconf for more
information.
2. Apply the settings by running:
/usr/bin/ndd -c
3. Confirm the settings:
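The confirmation command was not preserved here; the values can be displayed with ndd, assuming the standard HP-UX syntax:

```text
# /usr/bin/ndd -get /dev/tcp tcp_recv_hiwater_def
# /usr/bin/ndd -get /dev/tcp tcp_xmit_hiwater_def
```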
Parameter           Type        Default  Recommended
MaxTransferLength   REG_DWORD   262144   131072
MaxBurstLength      REG_DWORD   262144   131072
MaxPendingRequests  REG_DWORD   255      512
Related Links
SQL Server Target Host iSCSI Configuration Parameter Recommendations
Set Up a SQL Server Target Environment
Performance Analytics
These topics describe how to use the Performance Analytics tool to optimize the performance of a Delphix Engine deployment.
Performance Analytics Tool Overview
Working with Performance Analytics Graphs in the Graphical User Interface
Performance Analytics Statistics Reference
Performance Analytics Tool API Reference
Performance Analytics Case Study: Using a Single Statistic
Performance Analytics Case Study: Using Multiple Statistics
Introduction
The performance analytics tool allows introspection into how the Delphix Engine is performing. The introspection techniques it provides are tuned
to allow an iterative investigation process, helping to narrow down the cause associated with the performance being measured. Performance
analytics information can be accessed through the Delphix Admin application, as described in Working with Performance Analytics Graphs in
the Graphical User Interface, as well as the CLI and the web services API, as described in other topics in this section. The default statistics that
are being collected on the Delphix Engine include CPU utilization, network utilization, and disk, NFS, and iSCSI IO operations (see Performance
Analytics Statistics Reference for details).
The performance tool operates with two central concepts: statistics and statistic slices.
Statistics
Each statistic describes some data that can be collected from the Delphix Engine. The first piece of information a statistic provides is its type,
which you will use as a handle when creating a statistic slice. It also gives the minimum collection interval, which puts an upper bound on the
frequency of data collection. The actual data a statistic can collect is described through a set of axes, each of which describes one "dimension" of
that statistic. For example, the statistic associated with Network File System (NFS) operations has a latency axis, as well as an operation type
axis (among many others), which allows users to see NFS latencies split by whether they were reads or writes.
Each axis has some important information embedded in it.
The name of the axis provides a short description of what the axis collects and is used when creating a statistic slice
A value type, which tells you what kind of data will be collected for this axis. The different value types are integer, boolean, string, and
histogram. The first three are straightforward, but statistic axes with a histogram type can collect a distribution of all the values
encountered during each collection interval. This means that instead of seeing an average NFS operation latency every collection
interval, you can see a full distribution of operation latencies during that interval. This allows you to see outliers as well as the average,
and observe the effects of caching on the performance of your system more easily.
A constraint type, which is only relevant while creating a statistic slice, and will be described in more detail below
One last bit of information that an axis provides makes the most sense after seeing how datapoints are queried. In the most basic situation, you
would only collect one axis of a statistic, such as the latency axis from the NFS operations statistic. When you ask for data, you would get back a
datapoint for every collection interval in the time range you requested. These datapoints would be grouped into a single stream.
However, if you had collected the operation type axis as well as the latency axis, you would get two streams of datapoints: one for read
operations, and one for write operations.
Because the operation axis applies to many datapoints, the datapoints returned are split into two streams, and the operation axis is stored with
the top-level stream instead of with each datapoint in the streams. However, the latency axis will be different for each datapoint in a stream, so it
is not an attribute of the stream, but instead an attribute of the datapoint.
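The stream-splitting behavior described above can be sketched in a few lines of Python. This is an illustrative model, not the engine's implementation; the record fields and axis names are assumptions chosen to mirror the NFS example:

```python
from collections import defaultdict

# Hypothetical raw collection: each record carries every collected axis.
raw = [
    {"op": "read",  "latency": 2100, "timestamp": 0},
    {"op": "write", "latency": 5300, "timestamp": 0},
    {"op": "read",  "latency": 1900, "timestamp": 1},
    {"op": "write", "latency": 4800, "timestamp": 1},
]

def to_streams(records, stream_axes):
    """Group datapoints into streams keyed by the stream-level axes.

    Stream-level axes (e.g. "op") are stored once on the stream;
    per-datapoint axes (e.g. "latency") stay on each datapoint.
    """
    streams = defaultdict(list)
    for rec in records:
        key = tuple((axis, rec[axis]) for axis in stream_axes)
        point = {k: v for k, v in rec.items() if k not in stream_axes}
        streams[key].append(point)
    return [{"axes": dict(key), "datapoints": pts}
            for key, pts in streams.items()]

streams = to_streams(raw, stream_axes=["op"])
# Two streams result: one for reads and one for writes, each holding
# two datapoints that keep only the per-datapoint axes.
```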
Statistic Slices
Statistics describe what data can be collected and are auto-populated by the system, but statistic slices are responsible for actually collecting the
data, and you must create them manually when you want to collect some performance data. Each slice is an instantiation of exactly one statistic,
and can only gather data which is described by that statistic. "Slices" are so named because each one provides a subset of the information
available from the parent statistic it is associated with. A statistic can be thought of as describing the axes of a multidimensional space, whereas
you typically will only want to collect a simpler slice of that space due to the large number of axes available.
When you specify a slice, there are several fields which you must supply:
The statistic type this slice is associated with. This must be the same type as the statistic of which this is an instantiation.
The collection interval, which must be greater than the minimum collection interval the parent statistic gives
The axes of the parent statistic this slice will collect
Finally, a slice can place constraints on axes of its parent statistic, allowing you to limit the data you get back. For instance, if you're trying to
narrow down the cause of some high NFS latency outliers, it may be useful to filter out any NFS latencies which are shorter than one second. To
do this, you would place a constraint on the latency axis of an NFS operation slice that states that the values must be higher than one second.
You can constrain any axis in the same fashion, and each axis' description in the parent statistic gives a constraint type which can be applied to it.
This allows you to place different types of constraints on the latency axis (which is a number measured in nanoseconds) than the operation type
axis (which is an enum that can take the values "read" or "write").
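As a sketch of what such a constrained slice amounts to, the snippet below builds a slice description with an IntegerGreaterThanConstraint on the latency axis and evaluates it against sample values. The dictionary field names (axisName, greaterThan, and so on) are illustrative assumptions; consult the Performance Analytics Tool API Reference for the actual schema:

```python
# Illustrative sketch of a constrained slice. The constraint type names
# come from the table of constraint types; the exact wire format of the
# fields is an assumption, not the documented web-services schema.
ONE_SECOND_NS = 1_000_000_000  # the latency axis is measured in nanoseconds

slice_spec = {
    "type": "StatisticSlice",
    "statisticType": "NFS_OPS",
    "collectionInterval": 1,            # seconds; must exceed the statistic's minimum
    "collectionAxes": ["latency", "op"],
    "axisConstraints": [{
        "type": "IntegerGreaterThanConstraint",  # applies to the integer latency axis
        "axisName": "latency",
        "greaterThan": ONE_SECOND_NS,
    }],
}

def matches(constraint, value):
    """Evaluate one constraint against a single axis value (sketch)."""
    if constraint["type"] == "IntegerGreaterThanConstraint":
        return value > constraint["greaterThan"]
    raise NotImplementedError(constraint["type"])

# Only operations slower than one second would be collected by this slice.
slow = matches(slice_spec["axisConstraints"][0], 2_500_000_000)   # True
fast = matches(slice_spec["axisConstraints"][0], 4_000_000)       # False
```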
Related Links
The Performance Analytics Tool API Reference provides a detailed list of all statistics which can be collected, what their axes
represent, and how those axes can be constrained, and outlines all management operations which are available.
Working with Performance Analytics Graphs in the Graphical User Interface
Control Name
Usage

Graph Selector
View a graph by selecting the checkbox next to its name. To hide a graph, clear the checkbox.

Zoom Level

Shown Data Timeline

Available Data Timeline

Timeline Selector
Drag the Timeline Selector to view statistics for a specific time in the past, or click the scroll bar arrows to view the desired time period. You can also use the slider controls within the Timeline Selector to change the length of time for which data is displayed.
When the Timeline Selector is aligned to the right of the timeline, it represents live data that is updated every second. If the Timeline Selector is moved away from right alignment with the timeline, the data displayed is historical and no live updates are shown. To resume live data updates, move the Timeline Selector back to the right-aligned position representing the current time. The data will be refreshed to the latest data, and live updates will resume every second.

Graph Legend
To hide a set of information, click the set name within the Graph Legend. Data representing that set is removed from the graph, and the set's name is greyed out. To show a set that has been hidden, click the set name again.

Timeline Page Left/Right Buttons
When the Zoom Level is set to Minute, click Timeline Page Left to change the Available Data Timeline to show the previous hour.

Graph Value Tooltip

Latency Range Selector (shown on latency heatmaps only)
Drag the lower and upper controls to drill down into a specific range of latency buckets. Latency buckets that fall outside the selected range are summarized: the lower row represents latency buckets below the lower limit, and the upper row represents latency buckets above the upper limit of the selector. Use the Latency Range Selector to view a more detailed distribution of latencies for a specific range.

Latency Outlier Selector (shown on latency heatmaps only)
Related Links
Performance Analytics Statistics Reference
Statistic
Description
CPU
Utilization
Total CPU utilization for all CPUs. This statistic includes both kernel and user time.
Network
Throughput
Measures throughput in bytes and packets, broken down by sent vs. received data and by network interface. Each network
interface shows four graphed lines: bytes sent, bytes received, packets sent, and packets received. To help easily correlate
bytes and packets, the same color is used for both bytes and packet values.
Disk IO
Measures the number of I/O operations, and the latencies and throughput, of the underlying storage layer. The statistic is
represented by three graphs: a column chart for I/O operations, a heat map for latency distribution, and a line chart for throughput.
I/O operations are grouped into reads and writes. A shaded rectangle on a latency heat map represents the I/O operations (reads or
writes) whose latencies fall within a particular range (bucket). The shading of a rectangle depends on the number of I/O operations
that fall within its bucket: the higher the count, the darker the shading.
NFS
Measures the number of I/O operations, and the latencies and throughput, of the NFS server layer in the Delphix Engine. Its
graphical representation is similar to the Disk IO graph. It is useful for diagnosing the performance of dSources and VDBs that use
NFS mounts (Oracle, PostgreSQL).
iSCSI
Measures the number of I/O operations, and the latencies and throughput, of the iSCSI server layer in the Delphix Engine. Its
graphical representation is similar to the Disk IO graph. It is useful for diagnosing the performance of Microsoft SQL Server dSources
and VDBs.
Related Links
Working with Performance Analytics Graphs in the Graphical User Interface
Performance Analytics Tool Overview
Statistic Types
More documentation about each statistic type is available through the CLI and web services API, but the following table describes how
the similar I/O stack statistic types relate to each other.
Statistic Type
Description
NFS_OPS
Provides information about Network File System operations. This is the entrypoint to the Delphix Engine for all
Oracle database file accesses.
iSCSI_OPS
Provides information about iSCSI operations. This is the entrypoint to the Delphix Engine for all SQL Server file
accesses.
VFS_OPS
This layer sits immediately below NFS_OPS and iSCSI_OPS, and should give almost exactly the same latencies,
assuming no unexpected behavior is occurring.
DxFS_OPS
This layer sits immediately below VFS_OPS, and the two of them should give almost exactly the same latencies.
DxFS_IO_QUEUE_OPS
This layer sits below DxFS_OPS, but the latencies will differ from that layer because this layer batches together
operations to increase throughput.
DISK_OPS
This layer sits below DxFS_IO_QUEUE_OPS at the bottom of the I/O stack, and measures interactions the Delphix
Engine has with disks.
CPU_UTIL
This is unrelated to the layers of the I/O stack - it measures CPU utilization on the Delphix Engine.
TCP_STATS
This is also unrelated to the I/O stack - it provides TCP statistics for network connections to and from the Delphix
Engine.
Value Type
Description
INTEGER
The value is returned as an integer. For information about what units the integer is measured in, read the documentation for the
related datapoint or datapoint stream type.
BOOLEAN
The value is returned as a boolean (true or false).
STRING
The value is returned as a string. This is used for enum values as well, although the set of strings which can be returned is
limited.
HISTOGRAM
The value is returned as a log-scale histogram: each bucket covers a range of values whose maximum is double its
minimum. Histograms are returned as JSON maps, where each key is the minimum value in a bucket and each value is the
height of that bucket.
Here is an example histogram. Notice that buckets with a height of zero are not included in the JSON object, and that keys and
values are represented as strings.
{
"32768": "10",
"65536": "102",
"262144": "15",
"524288": "2"
}
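Because each key is a bucket's minimum and each bucket's maximum is double its minimum, summary values can be estimated directly from a histogram. The sketch below assumes the bucket midpoint (1.5 times the minimum) is a fair stand-in for the samples in that bucket and computes an approximate mean:

```python
import json

# The example histogram from above; keys and heights arrive as strings.
histogram_json = """{
    "32768": "10",
    "65536": "102",
    "262144": "15",
    "524288": "2"
}"""

def estimate_mean(hist):
    """Estimate the mean of a log-scale histogram.

    Each key is a bucket's minimum value; the bucket spans [min, 2*min),
    so its midpoint is 1.5 * min. The estimate weights each midpoint by
    the bucket's height.
    """
    total = weighted = 0
    for min_str, height_str in hist.items():
        lo, height = int(min_str), int(height_str)
        weighted += 1.5 * lo * height
        total += height
    return weighted / total

hist = json.loads(histogram_json)
mean = estimate_mean(hist)  # in the same units as the axis (e.g. nanoseconds)
```

The estimate is coarse because buckets double in width, but it is usually adequate for spotting shifts in the distribution between collection intervals.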
Axis constraints are used to limit the data which a slice can collect. Each axis specifies a constraint type which can be used to limit that axis'
values.
Constraint Type
Description
BooleanConstraint
A superclass which constraints on boolean values must extend. Currently, the only subclass is BooleanEqualConstraint,
which requires that a boolean axis equal either true or false (depending on user input).
EnumConstraint
A superclass which constraints on enum values must extend. Currently, the only subclass is EnumEqualConstraint,
which requires that an enum axis be equal to a user-specified value.
IntegerConstraint
A superclass which constraints on integer values must extend. Subclasses include IntegerLessThanConstraint,
IntegerGreaterThanConstraint, and IntegerEqualConstraint, which map to the obvious comparators for
integers.
NullConstraint
This class signifies that an axis cannot be constrained. This makes the most sense for axes which provide an average
value - placing a constraint on an average doesn't make sense because you are not able to include or discard a
particular operation based on what its effects would be on the average of all operations.
PathConstraint
A superclass which constraints on file path values must extend. Currently, the only subclass is PathDescendantConstraint,
which requires that a path value be a descendant of the specified path (it must be contained within it).
This only applies to paths on the Delphix Engine itself, and all paths used must be canonical Unix paths starting from
the root of the filesystem.
StringConstraint
A superclass which constraints on string values must extend. Currently, the only subclass is StringEqualsConstraint,
which requires that a string value equal a user-specified string.
getData
This is used to fetch data from a statistic slice which has been collecting data for a while. It returns a datapoint
set, which is composed of datapoint streams, which contain datapoints. For a full description, see the Performance
Analytics Tool Overview.
rememberRange
This is used to ensure that data collected during an ongoing investigation doesn't get deleted unexpectedly. If this
is not used, data is only guaranteed to be persisted for 24 hours. If it is used, data will be remembered until a
corresponding call to stopRememberingRange is made.
stopRememberingRange
This is used to allow previously-remembered data to be forgotten. The data will be forgotten on the same
schedule as brand new data, so you will have at least 24 hours before data which you have stopped remembering
is deleted. This undoes the rememberRange operation.
pause
This command pauses the collection of a statistic slice, causing no data to be collected until resume is called.
resume
This command resumes the collection of a statistic slice, undoing a pause operation.
Related Links
The Performance Analytics Tool Overview gives an overview of how all the pieces on this page interact.
The case studies (Performance Analytics Case Study: Using a Single Statistic, Performance Analytics Case Study: Using
Multiple Statistics) give command-by-command examples with extensive explanation.
Introduction
The Delphix Engine uses Network File System (NFS) as the transport for Oracle installations. An increase in the NFS latency could be causing
sluggishness in your applications running on top of Virtual Databases. This case study illustrates how this pathology can be root-caused using the
analytics infrastructure. This performance investigation uses one statistic to debug the issue, and uses the many axes of that statistic to narrow
down the probable cause of the issue. This technique uses an approach of iteratively drilling down by inspecting new axes of a single statistic,
and filtering the data to only include information about the operations that appear slow. This technique is valuable for determining which use
patterns of a resource might be causing the system to be sluggish. If you isolate a performance issue using this approach, but aren't sure what is
causing it or how to fix it, Delphix Support can provide assistance for your investigation.
The following example inspects the statistic which provides information about NFS I/O operations on the Delphix Engine. This statistic can be
collected a maximum of once every second, and the axes it can collect, among others, are:
latency, a histogram of wait times between NFS requests and NFS responses
size, a histogram of the NFS I/O sizes requested
op, whether the NFS requests were reads or writes
client, the network address of the NFS client which was making requests
Roughly the same performance information can be obtained from the iSCSI interface as well.
Related Links
The Performance Analytics Tool API Reference gives a full list of the commands, axes, and data types used by the analytics tool.
Introduction
This case study illustrates an investigation involving more than one metric. In typical performance investigations you will need to peel back multiple
layers of the stack in order to observe the component causing the actual performance pathology. This case study specifically examines sluggish
application performance caused by slow I/O responses from the disk subsystem. This example will demonstrate a technique of looking at the
performance of each layer in the I/O stack to find which layer is responsible for the most latency, then looking for constrained resources that the
layer might need to access. This technique is valuable for finding the most-constrained resource in the system, potentially giving actionable
information about resources that can be expanded to increase performance.
For the following example, we will inspect latency at two layers: the Network File System (NFS) layer on the Delphix Engine, and the disk layer
below it. Both of these layers provide the latency axis, which gives a histogram of wait times for the clients of each layer.
Investigation
The analytics infrastructure enables users to observe the latency of multiple layers of the software stack. This investigation will examine the
latency of both layers, and then draw conclusions about the differences between the two.
Setup
To measure this data, create two slices. When attempting to correlate data between two different statistics, it can be easier to determine
causation when collecting data at a relatively high frequency. The fastest that each of these statistics will collect data is once per second, so that
is the value used.
1. A slice collecting the latency axis for the statistic type NFS_OPS.
/analytics
create
set name=slice1
set statisticType=NFS_OPS
set collectionInterval=1
set collectionAxes=latency
commit
2. A slice collecting the latency axis for the statistic type DISK_OPS.
/analytics
create
set name=slice2
set statisticType=DISK_OPS
set collectionInterval=1
set collectionAxes=latency
commit
After a short period of time, read the data from the first statistic slice.
select slice1
getData
setopt format=json
commit
setopt format=text
The same process works for the second slice. The setopt steps are optional but allow you to see the output better via the CLI. The output for the
first slice might look like this:
{
"type": "DatapointSet",
"collectionEvents": [],
"datapointStreams": [{
"type": "NfsOpsDatapointStream",
"datapoints": [{
"type": "IoOpsDatapoint",
"latency": {
"512": "100",
"1024": "308",
"2048": "901",
"4096": "10159",
"8192": "2720",
"16384": "642",
"32768": "270",
"65536": "50",
"131072": "11",
"524288": "64",
"1048576": "102",
"2097152": "197",
"4194304": "415",
"8388608": "320",
"16777216": "50",
"33554432": "20",
"67108864": "9",
"268435456": "2"
},
"timestamp": "2013-05-14T15:51:40.000Z"
}, {
"type": "IoOpsDatapoint",
"latency": {
"512": "55",
"1024": "130",
"2048": "720",
"4096": "6500",
"8192": "1598",
"16384": "331",
"32768": "327",
"65536": "40",
"131072": "14",
"262144": "87",
"524288": "42",
"1048576": "97",
"2097152": "662",
"4194304": "345",
"8388608": "280",
"16777216": "22",
"33554432": "15",
"134217728": "1"
},
"timestamp": "2013-05-14T15:51:41.000Z"
}, ...]
}],
"resolution": 1
}
For the second slice, it might look like this:
{
"type": "DatapointSet",
"collectionEvents": [],
"datapointStreams": [{
"type": "DiskOpsDatapointStream",
"datapoints": [{
"type": "IoOpsDatapoint",
"latency": {
"262144": "1",
"524288": "11",
"1048576": "13",
"2097152": "34",
"4194304": "7",
"8388608": "2",
"16777216": "3",
"33554432": "1"
},
"timestamp": "2013-05-14T15:51:40.000Z"
}, {
"type": "IoOpsDatapoint",
"latency": {
"262144", "5",
"524288", "10",
"1048576", "14",
"2097152", "26",
"4194304", "7",
"8388608", "4",
"16777216", "2"
},
"timestamp": "2013-05-14T15:51:41.000Z"
}, ...]
}],
"resolution": 1
}
The data is returned as a set of datapoint streams. Streams hold the fields that would otherwise be shared by all the datapoints they contain, but
only one is used in this example because there are no such fields. Streams are discussed in more detail in the Performance Analytics Tool
Overview. The resolution field indicates how many seconds each datapoint corresponds to, which in our case matches the requested
collectionInterval. The collectionEvents field is not used in this example, but lists when the slice was paused and resumed, to distinguish
between moments when no data was collected because the slice was paused and moments when there was no data to collect.
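The nested structure just described (datapoint set, then datapoint streams, then datapoints) is straightforward to walk programmatically. The sketch below assumes the getData output has already been parsed into a Python dictionary, and sums each datapoint's latency histogram to recover the operation count per collection interval:

```python
def ops_per_datapoint(datapoint_set):
    """Return (stream type, [(timestamp, total ops), ...]) per stream.

    Each datapoint's latency histogram maps bucket minimums to counts
    (both encoded as strings), so summing the values yields the number
    of operations observed during that collection interval.
    """
    results = []
    for stream in datapoint_set["datapointStreams"]:
        counts = [(dp["timestamp"],
                   sum(int(v) for v in dp["latency"].values()))
                  for dp in stream["datapoints"]]
        results.append((stream["type"], counts))
    return results

# Applied to the outputs above, the first NFS datapoint sums to 16,340
# operations versus only 72 for the corresponding disk datapoint, the
# write-batching effect discussed in the Analysis section.
```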
Graphically, these four histograms across two seconds look like this:
Analysis
Because the NFS layer sits above the disk layer, all NFS operations that use the disk synchronously (synchronous writes and uncached reads)
will have latencies which are slightly higher than those of their corresponding disk operations. Usually, because disks have very high seek times
compared to the time the NFS server spends on CPU, disk operations are responsible for almost all of the latency of these NFS operations. In the
graphical representation, you can see this by looking at how the slower cluster of NFS latencies (around 2ms-8ms) have similar latencies to the
median of the disk I/O (around 2ms-4ms). Another discrepancy between the two plots is that the number of disk operations is much lower than the
corresponding number of NFS operations. This is because the Delphix filesystem batches together write operations to improve performance.
If database performance is not satisfactory and almost all of the NFS operation time is spent waiting for the disks, it suggests that the disk is the
slowest piece of the I/O stack. In this case, disk resources (the number of IOPS to the disks, the free space on the disks, and the disk throughput)
should be investigated more thoroughly to determine if adding more capacity or a faster disk would improve performance. However, care must be
taken when arriving at these conclusions, as a shortage of memory or a recently-rebooted machine can also cause the disk to be used more
heavily due to fewer cache hits.
Sometimes, disk operations will not make up all of the latency, which suggests that something between the NFS server and the disk (namely,
something in the Delphix Engine) is taking a long time to complete its work. If this is the case, it is valuable to check whether the Delphix Engine is
resource-constrained, and the most common areas of constraint internal to the Delphix Engine are CPU and memory. If either of those is too
limited, you should investigate whether expanding the resource would improve performance. If no resources appear to be constrained or more
investigation is necessary to convince you that adding resources would help the issue, Delphix Support is available to help debug these issues.
While using this technique, you should take care to recognize the limitations that caching places on how performance data can be interpreted. In
this example, the Delphix Engine uses a caching layer for the data it stores, so asynchronous NFS writes will not go to disk quickly because they
are being queued into larger batches, and cached NFS reads won't use the disk at all. This causes these types of NFS operations to return much
more quickly than any disk operations are able to, resulting in a very large number of low-latency NFS operations in the graph above. For this
reason, caching typically creates a bimodal distribution in the NFS latency histograms, where the first cluster of latencies is associated with
operations that only hit the cache, and the second cluster of latencies is associated with fully or partially uncached operations. In this case,
cached NFS operations should not be compared to the disk latencies because they are unrelated. It is possible to use techniques described in the
first example to filter out some of the unrelated operations to allow a more accurate mapping between disk and NFS latencies.
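One way to keep cached operations out of the NFS-versus-disk comparison is to split each NFS latency histogram at the gap between its two clusters. In the slice1 output above, the first datapoint has an empty region around 262,144 ns, so that value is used as the cutoff below; the cutoff is a judgment call made by inspecting the data, not a value the API provides:

```python
def split_bimodal(latency_hist, cutoff_ns):
    """Split a latency histogram into (below, above) a cutoff.

    Bucket keys are minimum latencies in nanoseconds, encoded as strings.
    Operations below the cutoff are presumed cache hits; operations above
    it are presumed to have waited on the disk.
    """
    below = sum(int(n) for b, n in latency_hist.items() if int(b) < cutoff_ns)
    above = sum(int(n) for b, n in latency_hist.items() if int(b) >= cutoff_ns)
    return below, above

# First NFS datapoint from the slice1 output above.
nfs_latency = {
    "512": "100", "1024": "308", "2048": "901", "4096": "10159",
    "8192": "2720", "16384": "642", "32768": "270", "65536": "50",
    "131072": "11", "524288": "64", "1048576": "102", "2097152": "197",
    "4194304": "415", "8388608": "320", "16777216": "50",
    "33554432": "20", "67108864": "9", "268435456": "2",
}

cached, uncached = split_bimodal(nfs_latency, cutoff_ns=262144)
# cached -> 15161 presumed cache hits; uncached -> 1179 disk-backed operations
```

Filtering with an IntegerGreaterThanConstraint on the latency axis achieves a similar effect at collection time rather than after the fact.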
Related Links
The Performance Analytics Tool API Reference gives a full list of the commands, axes, and data types used by the analytics tool.
_DelphixAdmin
Related Links
User Roles in the Delphix Domain
Linking an Oracle Data Source
Creating Policy Templates
Object Privileges
Group Privileges
Provisioner
Owner
Can provision VDBs from all dSources and VDBs in the group
Can refresh or rollback all VDBs in the group
Can snapshot all dSources and VDBs in the group
Can perform Virtual to Physical (V2P) from owned dSources
Can apply Custom policies to dSources and VDBs
Can create Template policies for the group
Can assign Owner privileges for dSources and VDBs
Can access the same statistics as a Provisioner, Data Operator, or Reader
Data
Operator
Reader
Related Links
Adding Delphix Users
Prerequisites
If you intend to validate user logins using LDAP authentication, make sure a system administrator has configured LDAP as described in Setting
Up the Delphix Engine.
Procedure
1. Launch the Delphix Admin application and log in as delphix_admin with the password delphix.
2. Select Manage > Users.
3. Click Add User.
A new user profile will open on the right side.
4. Enter user name, email, and password information for the new user.
5. Clear the Delphix Admin selection, if necessary, and click Save.
A Privileges tab will be added to the user profile. See User Privileges for Delphix Objects for more information about privileges.
6. Assign the user Owner or Auditor privileges for appropriate Delphix objects.
Assigning Owner and Auditor Privileges
Assigning Owner privileges at the Group level conveys ownership privileges over all objects in that group. Click the expand
icon next to each group name to see all objects in that group.
You can also assign ownership privileges only for specific objects in a group. You do not have to assign owner or auditor
privileges for all Delphix objects, only those for which you want to grant the user specific access.
Related Links
Setting Up the Delphix Engine
Adding Delphix Admin Users
User Privileges for Delphix Objects
Procedure
1. Launch the Delphix Admin application and log in as a Delphix Admin user.
2. Select Manage > Users.
3. Click the user's name to open the user's profile panel.
4. Edit the user's profile information or object privileges as necessary.
5. Click the suspend icon to suspend that user.
6. Click the trash can icon to delete the user.
Deleting a user cannot be undone.
Procedure
1. Log into the Delphix Admin application as a user with Delphix Admin privileges.
2. Select Manage > Users.
3. For an existing user, click the user name to open the User Profile manager.
4. Click the Privileges tab.
5. Assign Owner or Auditor rights for groups or objects within groups.
You do not have to assign a specific owner or auditor right for each object.
6. Click Commit when finished.
7. For new users, follow the instructions in Adding Delphix Users and Privileges. When you click Save, the User Profile manager will
reload, and then you can follow steps 4 - 6 to assign privileges.
Related Links
Adding Delphix Users and Privileges
User Privileges for Delphix Objects
Adding a Group
1. Log into the Delphix Admin application as a user with Delphix Admin privileges.
2. In the Databases menu, select Add New Group.
3. Enter a Group Name and an optional description.
4. Click OK.
Deleting a Group
1. Log into the Delphix Admin application as a user with Delphix Admin privileges or group OWNER privileges for the target group.
2. Open the group card in the Databases panel by selecting the target group.
3. Click the Trash Can icon.
4. Click OK.
Prerequisites
You must be a Delphix Admin user to create another Delphix Admin user.
Procedure
1. Launch the Delphix Admin application and log in.
2. Select Manage > Users.
3. Click Add User.
A new user profile panel will open on the right side.
4. Enter user name, email, and password information for the new user.
5. Select Delphix Admin.
Unlike ordinary Delphix users, Delphix Admin users are not shown a Privileges tab. This is because they have full privileges over all
objects.
6. Click Save.
Procedure
1. After logging in, click your name in the menu bar.
2. Click Profile.
3. Edit profile information as necessary.
4. Select options for the event level that will trigger a notification email.
5. Select a time period for Session Timeout.
6. Click Password to edit your password.
7. Click OK when finished.
8. Click Privileges to see your privileges (Auditor or Owner) for Delphix objects.
Managing Policies
These topics describe creating and managing SnapSync, LogSync, Retention, and VDB Refresh policies.
Managing Policies: An Overview
Creating Custom Policies
Creating Policy Templates
Policies and Time Zones
Configuring Retention on Individual Snapshots
VDB Snapshot How often snapshots are taken of the virtual database (VDB).
Retention How long snapshots and log files are retained for dSources and VDBs.
VDB Refresh A destructive process that is used only if you need to re-provision VDBs from their sources at regular intervals. The
default setting for this policy is None.
Setting the VDB Refresh Policy Interval
Since VDB Refresh is a re-provisioning process, it is important to set the policy interval for an amount of time that will allow the
VDB to fully re-provision before another refresh takes place. For example, if you set the VDB Refresh policy to initiate a refresh
every 15 minutes, it is possible that the VDB will not fully re-provision in that amount of time, and your refresh process will fail.
There can additionally be default, custom, or template policies for each of these categories.
Policy
Type
Description
Default
Default policies exist at the domain level and are applied across all objects in a category. You can modify the
settings for a default policy in a category, but you cannot change the name "default".
Users with
Delphix
Admin
credentials
Custom
Custom policies can only be applied to a specific database object. These cannot be saved to be used with other
objects. You can create custom policies for dSources during the dSource linking process, as described in the Lin
king and Advanced Data Management Settings topics for each database platform type. See also Creating
Custom Policies.
Users with
Delphix
Admin
credentials
Group and
object
owners
Template
Template policies are named policies that can be saved and applied to other database objects and to groups.
These are created on the Policy Management screen. See Creating Policy Templates for more information.
Users with
Delphix
Admin
credentials
Group and
object
owners
Procedure
1. Log into the Delphix Admin application as a user with Delphix Admin privileges.
2. Click Manage.
3. Select Policies.
4. Select the policy for the object or group you want to modify.
5. Click Apply New Policy.
6. Enter Name for the policy.
7. Select Customized.
8. Enter the cron expressions you want to use for the policy. The expected format is compatible with the Quartz CronTrigger scheduler.
9. Choose either Weekly, Hours or Minutes, or Custom for Scheduled By.
10. Click OK.
Procedure
1. Log into the Delphix Admin application as a user with Delphix Admin privileges.
2. Select Manage > Policies.
3. Click Modify Policy Templates.
4. Under the category where you want to create the template, click Add New Policy.
5. Enter a Name for the template.
6. Enter the cron expressions you want to use for the new policy. The expected format is compatible with the Quartz CronTrigger scheduler.
7. Click OK.
Post-Requisites
You can apply the new policy by selecting the appropriate policy category for an existing object or group, and then selecting the template
policy.
Related Links
Users, Permissions, and Policies
and Quota policies are not schedulable and do not need a time zone.
To maintain the same behavior of the Delphix Engine after upgrade, the upgrade process clones existing policies
with these clones differing only in their time zone. After upgrading, you may notice that the names of policies change
to include the time zones in which they operate.
Note: Default policies are not cloned and always operate under the time zone of the Delphix Engine.
Post-Upgrade
Original Policy
New Policies
UserSnapSync
UserSnapSync (America/Mexico_City)
UserSnapSync (America/New_York)
SnapshotTest
SnapshotTest (America/Mexico_City)
SnapshotTest (America/New_York)
UserRefresh
UserRefresh (America/Mexico_City)
UserRefresh (America/New_York)
After an upgrade, ensure that the policies are configured as expected; it may have been unclear prior to this upgrade when policies were actually
firing.
Also, after upgrading to 4.2 or higher, you may consolidate and clean up the clones, and these changes will persist through future upgrades. On
the policy tab, clicking a policy displays an editable Timezone field. For example, if you had "VDB_SNAP (US/Arizona)" and "VDB_SNAP
(America/Phoenix)", you could delete one of the duplicates (both are from the same time zone in this case), make sure the Timezone field is set
to the desired time zone, and rename the remaining policy to "VDB_SNAP".
Procedure
1. Log into the Delphix Admin application as a user with Delphix Admin privileges.
2. Select Resources > Capacity.
3. Expand the object (dSource or VDB) you want to modify.
4. Expand the snapshots. (It may take a few minutes for the individual snapshots to appear.)
5. In the Keep Until column, configure the desired value: either a number of days, or tick Forever.
VDB
SnapSync
Yes
No
LogSync
No
No
Rewind
Not Applicable
No
Yes
No
RAC
No
No
Standby Database
No
No
Oracle 10.2.0.4
The Delphix Engine does not support Oracle 10.2.0.4 databases using Automatic Storage Management (ASM) that do not have the
patch set for Oracle Bug 7207932. This bug is fixed in patch set 10.2.0.4.2 onward.
Version
Processor Family
Solaris
SPARC
Solaris
x86_64
x86_64
5.3 - 5.11
6.0 - 6.6
x86_64
x86_64
AIX
Power
HP-UX
11i v2 (11.23)
IA64
11i v3 (11.31)
Delphix supports all 64-bit OS environments for source and target, though 64-bit Linux environments also require that a 32-bit version of glibc
be installed.
Required HP-UX patch for Target Servers
PHNE_37851 - resolves a known bug in HP-UX NFS client prior to HP-UX 11.31.
b. Group memberships:
i. The user's primary group must be the UNIX group that is mapped to OSDBA by the Oracle installation. This is typically
the dba group on the host.
Oracle 12c
For Oracle 12c and later versions of Oracle databases, the delphix_os user can also use OSBACKUPDBA as
its primary group. This is typically the backupdba group on the host.
ii. If the Oracle install group (typically oinstall), exists on the host, it should be set as a secondary group for the user.
iii. If the Oracle ASM groups (typically asmadmin and asmdba) exist on the host, they should be assigned to the user as
secondary groups.
2. There must be a directory on the source host where the Delphix Engine Toolkit can be installed, for example: /var/opt/delphix/Toolkit.
a. The delphix_os user must own the directory.
b. The directory must have permissions -rwxrwx--- (0770), but you can also use more permissive settings.
c. The directory should have 1.5GB of available storage: 400MB for the toolkit and 400MB for the set of logs generated by each
client that runs out of the toolkit.
3. The Delphix Engine must be able to make an SSH connection to the source host (typically port 22).
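The toolkit directory requirements in step 2 can be sketched as shell commands. The default path below is a scratch location for illustration; in practice you would use a persistent path such as /var/opt/delphix/Toolkit, and the commented chown (with example user/group names) must be run as root:

```shell
# Sketch: prepare a toolkit directory with the required 0770 permissions.
# TOOLKIT_DIR defaults to a scratch path here; override it for a real install.
TOOLKIT_DIR="${TOOLKIT_DIR:-${TMPDIR:-/tmp}/delphix-toolkit}"
mkdir -p "$TOOLKIT_DIR"
chmod 0770 "$TOOLKIT_DIR"
# chown delphix_os:dba "$TOOLKIT_DIR"   # run as root; user/group names are examples
stat -c '%a' "$TOOLKIT_DIR"             # prints 770 when the mode is set correctly
```

Remember also to confirm roughly 1.5GB of free space on the filesystem holding the directory (for example with `df -h "$TOOLKIT_DIR"`).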
OS Specific Requirements
AIX
None
HP-UX
None
Linux
On 64-bit Linux environments, there must be a 32-bit version of glibc.
Solaris
On a Solaris host, gtar must be installed. Delphix uses gtar to handle long file names when extracting the toolkit files into the toolkit directory on
a Solaris host. The gtar binary should be installed in one of the following directories:
/bin:/usr/bin:/sbin:/usr/sbin:/usr/contrib/bin:/usr/sfw/bin:/opt/sfw/bin:/opt/csw/bin
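A quick way to confirm that gtar is visible on the host is a sketch like the following; it relies on the directories above being on the shell's search path:

```shell
# Sketch: report whether gtar can be found on the host's search path.
if command -v gtar >/dev/null 2>&1; then
  echo "gtar found: $(command -v gtar)"
else
  echo "gtar missing: install it before adding this host to the Delphix Engine"
fi
```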
For an Oracle pluggable database, there must be one database user (delphix_db) for the pluggable database and one
common database user (c##delphix_db) for its container database. The createDelphixDBUser.sh script can create both
users.
3. Enable Block Change Tracking (BCT). (Highly Recommended). Without BCT, incremental SnapSyncs must scan the entire
database.
BCT is an Enterprise Edition feature.
See Linking Oracle Physical Standby Databases for restrictions on enabling BCT on Oracle Physical Standby databases.
alter database enable block change tracking using file '<user specified file>';
4. Enable FORCE LOGGING. (Highly Recommended). This prevents NOLOGGING operations on Source Databases. Oracle requires
FORCE LOGGING for proper management of standby databases.
Enter this command to enable FORCE LOGGING:
ALTER DATABASE FORCE LOGGING;
5. If the online redo log files are located on RAW or ASM devices, then the Delphix Engine LogSync feature can operate in Archive Only mode only.
Example: Retrieving the host name of the cluster node
$ crsctl get hostname
node2
5. All datafiles and archive logs must be located on storage shared by all of the cluster nodes. Each node in the cluster must be able to
access archive logs from all other nodes. This is an Oracle Best Practice, and a requirement for Delphix.
Troubleshooting Linking
For each Oracle Home which you will use with dSources, the delphix_os user should have:
1. Execute permission for the programs in $ORACLE_HOME/bin.
2. The $ORACLE_HOME/bin/oracle executable must have the SETUID and SETGID flags set. Permissions on the oracle binary must
be -rwsr-s--x (06751), but you can also use more permissive settings.
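The permission check above can be sketched as a shell command. The ORACLE_HOME default and the GNU coreutils stat syntax are assumptions; adjust both for your platform:

```shell
# Sketch: verify that the oracle binary carries mode 6751 (-rwsr-s--x).
# The ORACLE_HOME default is an example path; GNU stat syntax is assumed.
ORACLE_BIN="${ORACLE_HOME:-/u01/app/oracle/product/11.2.0/dbhome_1}/bin/oracle"
mode=$(stat -c '%a' "$ORACLE_BIN" 2>/dev/null || echo "missing")
if [ "$mode" = "6751" ]; then
  echo "oracle binary permissions OK"
else
  echo "unexpected mode '$mode'; fix with: chmod 6751 $ORACLE_BIN"
fi
```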
Related Links
Requirements for Oracle Target Hosts and Databases
Using HostChecker to Confirm Source and Target Environment Configuration
Sudo File Configurations
Sudo Privilege Requirements
b. Group memberships:
i. The user's primary group must be the UNIX group that is mapped to OSDBA by the Oracle installation. This is typically
the dba group on the host.
Oracle 12c
For Oracle 12c and later versions of Oracle databases, the delphix_os user can also use OSBACKUPDBA
as its primary group. This is typically the backupdba group on the host.
ii. If the Oracle install group (typically oinstall), exists on the host, it should be set as a secondary group for the user.
iii. If the Oracle ASM groups (typically asmadmin and asmdba) exist on the host, they should be assigned to the user as
secondary groups.
2. There must be a directory on the source host where the Delphix Engine Toolkit can be installed, for example: /var/opt/delphix/Too
lkit.
a. The delphix_os user must own the directory.
b. The directory must have permissions -rwxrwx--- (0770), but you can also use more permissive settings.
c. The directory should have 1.5GB of available storage: 400MB for the toolkit and 400MB for the set of logs generated by each
client that runs out of the toolkit.
3. There must be an empty directory (e.g. /delphix or /mnt/provision/ ) that will be used as a container for the mount points that are created
when provisioning a VDB to the target host. The group associated with the directory must be the primary group of the delphix_os user
(typically dba). Group permissions for the directory should allow read, write, and execute by members of the group.
4. The following permissions are usually granted via sudo authorization of the commands. See Sudo Privilege Requirements for further
explanation of the commands, and Sudo File Configurations for examples of the /etc/sudoers file on different operating systems.
a. Permission to run mount, umount, mkdir, rmdir, ps as super-user.
b. Permission to run pargs on Solaris hosts and ps on AIX, HP-UX, Linux hosts, as super-user.
c. If the target host is an AIX system, permission to run the nfso command as super-user.
5. Write permission to the $ORACLE_HOME/dbs directory
6. An Oracle listener process should be running on the target host. The listener's version should be equal to or greater than the highest
Oracle version that will be used to provision a VDB.
7. NFS client services must be running on the target host.
8. The Delphix Engine must be able to make an SSH connection to the target host (typically port 22).
OS Specific Requirements
AIX, HP-UX
None
Linux
Solaris
On a Solaris host, gtar must be installed. Delphix uses gtar to handle long file names when extracting the toolkit files into the toolkit directory on
a Solaris host. The gtar binary should be installed in one of the following directories:
/bin:/usr/bin:/sbin:/usr/sbin:/usr/contrib/bin:/usr/sfw/bin:/opt/sfw/bin:/opt/csw/bin
Example: This shows that the group dba has read/write/execute permission on the
database resources
$ crsctl getperm resource ora.trois.db
Name: ora.trois.db
owner:ora112:rwx,pgrp:dba:rwx,other::r--
5. All datafiles and archive logs must be located on storage shared by all of the cluster nodes. Each node in the cluster must be able to
access archive logs from all other nodes. This is an Oracle Best Practice, and a requirement for Delphix.
LDAP/NIS User
Troubleshooting Provisioning
1. The $ORACLE_HOME/bin/oracle executable must have the SETUID and SETGID flags set. Permissions on the oracle
binary must be -rwsr-s--x (06751), but more permissive settings can also be used.
Related Links
Requirements for Oracle Source Hosts and Databases
Using HostChecker to Validate Oracle Source and Target Environments
Network and Connectivity Requirements for Oracle Environments
Sudo Privilege Requirements
Sudo File Configurations
Protocol / Port Numbers / Use
TCP 25
TCP/UDP 53
UDP 123
UDP 162
HTTPS 443 - SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP 636
TCP 8415
TCP 50001 - Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.
Protocol / Port Number / Use
TCP 22
TCP 80
UDP 161
TCP 443
TCP 8415 - Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for Oracle, V2P, and the Delphix Connector.
TCP 50001 - Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP 32768 - 65535 - Required for NFS mountd and status services from the target environment only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
the virtual database (VDB) target environments. If the Delphix Engine is separated from a source environment by a firewall, the firewall must be
configured to permit network connections between the Delphix Engine and the source environments for the application protocols (ports) listed
above.
Intrusion detection systems (IDSs) should also be configured to be permissive of the Delphix Engine deployment, and should be made aware of
the anticipated high volumes of data transfer between dSources and the Delphix Engine.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd
configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
scp Availability
The scp program must be available in the environment in order to add an environment.
The Delphix Engine makes use of the following network ports for Oracle dSources and VDBs:
Protocol / Port Numbers / Use
TCP 22
TCP xxx - Connections to the Oracle SQL*Net Listener on the source and target environments (typically port 1521)
Protocol / Port Number / Use
TCP/UDP 111 - Remote Procedure Call (RPC) port mapper used for NFS mounts. Note: RPC calls in NFS are used to establish additional ports, in the high range 32768-65535, for supporting services. Some firewalls interpret RPC traffic and open these ports automatically. Some do not; see below.
TCP 1110 - NFS Server daemon status and NFS server daemon keep-alive (client info)
TCP/UDP 2049
TCP 4045
TCP 8341
TCP 8415
UDP 33434 - 33464 - Traceroute from source and target database servers to the Delphix Engine (optional)
UDP/TCP 32768 - 65535 - NFS mountd and status services, which run on a random high port. Necessary when a firewall does not dynamically open ports.
Configuring sudo Access on Solaris SPARC for Source and Target Environments
Sudo access to pargs on the Solaris operating system is required for the detection of listeners with non-standard configurations on both source
and target environments. Super-user access level is needed to determine the TNS_ADMIN environment variable of the user running the listener
(typically oracle, the installation owner). From TNS_ADMIN, the Delphix OS user delphix_os can derive connection parameters.
This example restricts the delphix_os user's use of sudo privileges to the directory /oracle.
Note that wildcards are allowed for the options on mount and umount because those commands expect a fixed number of arguments after the
options. The option wildcard on the mount command also makes it possible to specify the file-system being mounted from the Delphix Engine.
But wildcards are not acceptable on mkdir and rmdir because they can have any number of arguments after the options. For those commands
you are required to specify the exact options (-p, -p -m 755) used by the Delphix Engine.
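The restrictive entries described above can be staged in a scratch file and syntax-checked before they are installed. The file name, the /oracle mount base, and installing the result as a sudoers drop-in are all example choices; validate with visudo -cf where the sudo package is available:

```shell
# Sketch: stage the restrictive delphix_os sudoers entries in a scratch file.
# The file name and /oracle mount base are examples, not requirements.
cat > delphix_os.sudoers <<'EOF'
Defaults:delphix_os !requiretty
delphix_os ALL=(root) NOPASSWD: \
/bin/mount * /oracle/*, \
/bin/umount * /oracle/*, \
/bin/umount /oracle/*, \
/bin/mkdir -p /oracle/*, \
/bin/mkdir -p -m 755 /oracle/*, \
/bin/mkdir /oracle/*, \
/bin/rmdir /oracle/*, \
/bin/ps
EOF
# visudo -cf delphix_os.sudoers   # syntax check before installing (requires sudo package)
grep -c '^delphix_os\|^Defaults' delphix_os.sudoers   # counts the two non-continuation entry lines
```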
Example /etc/sudoers File Configuration on the Target Environment for sudo Privileges on the
VDB Mount Directory Only
Defaults:delphix_os !requiretty
delphix_os ALL=(root) NOPASSWD: \
/bin/mount * /oracle/*, \
/bin/umount * /oracle/*, \
/bin/umount /oracle/*, \
/bin/mkdir -p /oracle/*, \
/bin/mkdir -p -m 755 /oracle/*, \
/bin/mkdir /oracle/*, \
/bin/rmdir /oracle/*, \
/bin/ps
Example 2
This example restricts the delphix_os user's use of sudo privileges to the directory /mnt/delphix.
This example demonstrates a very restrictive syntax for the mount and umount commands. The umount command allows no user-specified
options. The mount command specifies the Delphix Engine's server name (or IP address) so as to limit which file
systems can be mounted.
A Second Example of Configuring the /etc/sudoers File on the Target Environment for Privileges
on the VDB Mount Directory Only
Defaults:delphix_os !requiretty
delphix_os ALL=(root) NOPASSWD: \
/usr/sbin/mount <delphix-server-name>* /mnt/delphix/*, \
/usr/sbin/mount * <delphix-server-name>* /mnt/delphix/*, \
/usr/sbin/mount <delphix-server-ip>* /mnt/delphix/*, \
/usr/sbin/mount * <delphix-server-ip>* /mnt/delphix/*, \
/usr/sbin/mount "", \
/usr/sbin/umount /mnt/delphix/*, \
/usr/sbin/umount * /mnt/delphix/*, \
/usr/sbin/umount "", \
/usr/bin/mkdir [*] /mnt/delphix/*, \
/usr/bin/mkdir /mnt/delphix/*, \
/usr/bin/mkdir -p /mnt/delphix/*, \
/usr/bin/mkdir -p -m 755 /mnt/delphix/*, \
/usr/bin/rmdir /mnt/delphix/*, \
/usr/bin/ps, \
/bin/ps
Considerations for sudo access and account locking
The Delphix Engine tests its ability to run the mount command using sudo on the target environment by issuing the sudo mount
command with no arguments. Many of the examples shown in this topic do not allow that, and in those cases the attempt will be blocked. In
most situations, this does not cause a problem.
Similarly, the ps or pargs command is used for target environment operations such as initial discovery and refresh. The most
restrictive sudo setups might not allow the commands ps (pargs), mkdir, and rmdir; strictly speaking, Delphix can still function
without these privileges (see Sudo Privilege Requirements for a full explanation).
However, some users configure the security on the target environments to monitor sudo failures and lock out the offending account
after some threshold. In those situations, the delphix_os account can become locked. One work-around for this situation is to increase
the threshold for locking out the user account. Another option is to modify /etc/sudoers to permit the delphix_os user to run the ps
(pargs), mkdir, rmdir, and mount commands without parameters.
Related Links
Sudo Privilege Requirements
Requirements for Oracle Source Hosts and Databases
Requirements for Oracle Target Hosts and Databases
Privilege / Sources / Targets / Rationale
ps (pargs on Solaris)
Sources: Optional, Strongly Recommended. Targets: Optional, Strongly Recommended.
Delphix auto-discovery uses the TNS_ADMIN environment variable of Oracle Listener processes with non-standard configurations to derive
their connection parameters. An Oracle Listener is normally owned by a different user (oracle) than the delphix_os user. The Delphix Engine
needs sudo access to pargs on the Solaris OS or ps on other OSes to examine the environment variables of those Listener processes.
This privilege is optional in all cases, since you can manually configure dSources and VDBs. It is also optional when using a standard
TNS_ADMIN location.
mkdir/rmdir
Sources: Not Required. Targets: Optional.
Delphix dynamically makes and removes directories under the provisioning directory during VDB operations. This privilege is optional,
provided the provisioning directory permissions allow the delphix_os user to make and remove directories.
mount/umount
Sources: Not Required. Targets: Required.
Delphix dynamically mounts and unmounts directories under the provisioning directory during VDB operations. This privilege is required
because mount and umount are typically reserved for superuser.
nfso (AIX only)
Sources: Not Required. Targets: Required.
Delphix monitors NFS read and write sizes on an AIX target host. It uses the nfso command to query the sizes in order to optimize NFS
performance for VDBs running on the target host. Only a superuser can issue the nfso command.
Related Links
Requirements for Oracle Source Hosts and Databases
Requirements for Oracle Target Hosts and Databases
Sudo File Configurations
What is HostChecker?
The HostChecker is a standalone program which validates that host machines are configured correctly before the Delphix Engine uses them as
data sources and provision targets.
Please note that HostChecker does not communicate changes made to hosts back to the Delphix Engine. If you reconfigure a host, you must
refresh the host in the Delphix Engine in order for it to detect your changes.
You can run the tests contained in the HostChecker individually, or all at once. You must run these tests on both the source and target hosts to
verify their configurations. As the tests run, you will either see validation messages that the test has completed successfully, or error messages
directing you to make changes to the host configuration.
The Oracle HostChecker is distributed as a set of Java files and executables. You can find these files and executables in 5 distinct tarballs, each
containing a different jdk corresponding to a particular platform (OS + processor). Together, these tarballs comprise the set of platforms supported
by Delphix. When validating Oracle hosts during a new deployment, it is important to download the appropriate tarball for the host you are
validating. Tarballs follow the naming convention "hostchecker_<OS>_<processor>.tar." For example, if you are validating a linux x86 host, you
should download the tarball named hostchecker_linux_x86.tar.
The Oracle HostChecker is also included in the Delphix Toolkit which is pushed to every environment managed by the Delphix Engine. It can be
found in /<toolkit-path>/<Delphix_COMMON>/client/hostchecker.
Prerequisites
Make sure your Oracle environment meets the requirements described in Requirements for Oracle Source Hosts and Databases and
Requirements for Oracle Target Hosts and Databases
At minimum, the hostchecker requires Java 6 to run. However, the Java 6 binaries are included in each of the platform specific
hostchecker tarballs and will be extracted if necessary.
Procedure
1. Download the appropriate HostChecker tarball for your platform from https://download.delphix.com/. Tarballs follow the naming
convention "hostchecker_<OS>_<processor>.tar". For example, if you are validating a linux x86 host you should download the
hostchecker_linux_x86.tar tarball.
2. Create a working directory and extract the HostChecker files from the HostChecker tarball.
mkdir dlpx-host-checker
cd dlpx-host-checker/
tar -xf hostchecker_linux_x86.tar
3. Run the sh script contained within:
sh hostchecker.sh
This will extract the JDK included in the tarball (if necessary) and invoke the hostchecker.
ora10205@bbdhcp:/home/ora10205/hostchecker-> sh hostchecker.sh
Installed version of Java (version: 1.4.2) is not compatible with the hostchecker.
Java version 1.6 or greater required.
Using the JDK from the included tarball (already extracted).
7. Error or warning messages will explain any possible problems and how to address them. Resolve the issues that the HostChecker
describes. Don't be surprised or undo your work if more errors appear the next time you run HostChecker, because the error you just
fixed may have been masking other problems.
8. Repeat steps 3 - 7 until all the checks return no errors or warnings.
Non-Interactive Mode
The Java hostchecker can also be invoked in non-interactive mode. Each check is associated with a numeric flag; the association can be
displayed using the -help input flag. To run a particular check, pass in the associated flag.
Check / Applicable Platforms / Description
Check Host Secure Shell (SSH) Connectivity
All
Verifies that the host can be accessed via SSH using public key authentication. If you do not need this feature, you can ignore the results of
this check, or you can choose not to run it.
Check Home Directory Permissions
All
Verifies that the toolkit installation location is suitable, for example, that it has the proper ownership, permissions, and enough free space.
Check Inventory Access
All
Verifies that the current user has access to the Oracle inventory file.
Check Oracle Installation
All
Verifies basic information about the Oracle installation on the system, including that various files are in expected locations, that they are
formatted properly, and that they have the correct permissions.
Check ORATAB File
All
Verifies that the oratab file is in an expected location and is formatted appropriately. You only need to run this on source machines.
Check Oracle DB Instance
All
Verifies more specific information both about the installation of Oracle on the system and about the various databases. Information includes
not only file locations, formatting, and permissions, but also the presence of DB listeners, database settings, Oracle versions, Oracle user
permissions, and more. You only need to run this on source machines.
Check Oracle CRS Installation
All
Verifies settings related to Oracle CRS. You only need to run this on machines that have CRS set up.
Check OS User Privileges
All
Verifies that the operating system user can execute certain commands with necessary privileges via sudo. You only need to run this on
target environments. See the topic Sudo Privilege Requirements for more information.
Check SnapSync Connectivity
All
Verifies that the source host is able to connect to the Delphix Engine at port 8415 for SnapSync.
Check transmission control protocol (TCP) slot table entries
Linux RHEL 4.0-5.6
Checks that the maximum number of (TCP) RPC requests that can be in flight is at least 128.
Related Links
Requirements for Oracle Source Hosts and Databases
Requirements for Oracle Target Hosts and Databases
Prerequisites
See the topics Requirements for Oracle Target Hosts and Databases and Supported Operating Systems and DBMS Versions for
Oracle Environments
There can be one Oracle unique database name (DB_UNIQUE_NAME) per Delphix Engine. For example, if you provision a VDB with a
database unique name "ABC" and later try to add an environment which has a source database that also has a database unique name of
"ABC", errors will occur.
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the Plus icon next to Environments.
5. In the Add Environment dialog, select Unix/Linux.
6. Select Standalone Host or Oracle Cluster, depending on the type of environment you are adding.
7. For standalone Oracle environments enter the Host IP address.
8. For Oracle RAC environments, enter the Node Address and Cluster Home.
9. Enter an optional Name for the environment.
10. Enter the SSH port.
The default value is 22.
11. Enter a Username for the environment.
See Requirements for Oracle Target Hosts and Databases for more information on the required privileges for the environment user.
12. Select a Login Type.
For Password, enter the password associated with the user entered in Step 11.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
13. For Password Login, click Verify Credentials to test the username and password.
14. Enter a Toolkit Path.
The toolkit directory stores scripts used for Delphix Engine operations, and should have a persistent working directory rather than a
temporary one. The toolkit directory will have a separate sub-directory for each database instance. The toolkit path must have 0770
permissions and at least 345MB of free space.
15. Click OK.
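The public-key configuration in step 12 can be sketched as shell commands on the environment host. The key string below is a placeholder for the key displayed under View Public Key:

```shell
# Sketch: append the Delphix Engine's public key for the environment user.
# The key value is a placeholder; paste the key shown under "View Public Key".
ENGINE_KEY="ssh-rsa AAAAB3...placeholder... delphix-engine"
mkdir -p "$HOME/.ssh"
echo "$ENGINE_KEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"   # readable and writable only by this user
chmod 755 "$HOME"                        # home directory writable only by this user
```

As noted above, the key needs to be appended only once per user and per environment.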
Post-Requisites
After you create the environment, you can view information about it:
1. Click Manage.
2. Select Environments.
3.
Related Links
Requirements for Oracle Target Hosts and Databases
Supported Operating Systems and DBMS Versions for Oracle Environments
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Select Manage > Environments.
3. Click Databases.
4. Click the green Plus icon next to Add Installation Home.
5. Enter the Installation Home.
6. Click the Check icon when finished.
Related Links
Adding a Database to an Oracle Environment
Prerequisites
Make sure your source database meets the requirements described in Requirements for Oracle Source Hosts and Databases, as well
as general database user requirements as described in Requirements for Oracle Target Hosts and Databases.
Before adding a database, the installation home of the database must exist in the environment. If the installation home does not exist in
the environment, follow the steps in Adding a Database Installation Home to an Oracle Environment.
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. Choose the installation home where the database is installed. Click the Up icon next to the installation home path to show details if
needed.
6. Click the green Plus icon next to Add Databases.
7. Enter the Database Unique Name, Database Name, and Instance Name.
8. When finished, click the Check icon.
Related Links
Requirements for Oracle Source Hosts and Databases
Requirements for Oracle Target Hosts and Databases
Adding a Database Installation Home to an Oracle Environment
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Select Manage > Environments.
3. Click Databases.
4. Choose the installation which has the multitenant container database and click the Up icon next to the installation path to show
details.
5. Click "Discover CDB" next to the multitenant container database.
6. Enter the credentials for the multitenant container database and click the Check icon.
7. After pluggable databases are discovered, an Up button appears next to the container database. Click on it to see all discovered
pluggable databases.
Related Links
Requirements for Oracle Source Environments and Databases
Adding a Database to an Oracle Environment
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Select Manage > Environments.
3. In the Environments panel, click on the name of an environment to view its basic information.
4. Next to Listeners, click the green Plus icon to add a Listener Service.
5. Enter a Name for the new Listener Service, and an IP address for its Endpoint.
6. Click the green Plus icon next to Add Endpoints to enter additional endpoints.
7. Click the Check icon to save your changes.
Changing the Host Name or IP Address for Oracle Source and Target Environments
This topic describes how to change the host name or IP address for source and target environments, and for the Delphix Engine.
Procedure
For Source Environments
For VDB Target Environments
For the Delphix Engine
Procedure
For Source Environments
1. Disable the dSource as described in Enabling and Disabling SAP ASE dSources.
2. If the Host Address field contains an IP address, edit the IP address.
3. If the Host Address field contains a host name, update your Domain Name Server to associate the new IP address to the host name.
The Delphix Engine will automatically detect the change within a few minutes.
4. In the Environments screen of the Delphix Engine, refresh the host.
5. Enable the dSource.
Procedure
1. Log into the Delphix Admin application with Delphix Admin credentials or as the owner of an environment.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of an environment to view its attributes.
5. Under Attributes, click the Pencil icon to edit an attribute.
6. Click the Check icon to save your edits.
Description
Environment
Users
The users for that environment. These are the users who have permission to ssh into an environment, or access the
environment through the Delphix Connector. See the Requirements topics for specific data platforms for more information on
the environment user requirements.
Host
Address
Notes
Oracle Attributes
Attribute
Description
Environment
Name (RAC)
The Environment Name field under Attributes is used to provide the name of the environment host in the case of cluster
environments. This field defaults to the IP address of the host unless you specify another name.
Cluster User
(RAC)
Virtual IP
(RAC)
The IP address that will failover to another node in the cluster when a failure is detected. Click the green + to add another
virtual IP domain and IP address.
Listeners
The listener used to connect incoming client requests to the database. See Adding a Listener to an Oracle Environment for more information.
SSH Port
Toolkit Path
Remote Listener: a network name that resolves to an address or address list of Oracle Net remote listeners. Click the
green + to add a remote listener.
SCAN: Single Client Access Name that is used to allow clients to access cluster databases. Click the green + to add a
SCAN.
SCAN Listener: Listener used with SCAN to establish client connections to the database. Click the green + to add a
SCAN listener name and endpoints.
Prerequisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the name of an environment to open the environment information screen.
5. Under Basic Information, click the green Plus icon to add a user.
6. Enter the Username and Password for the OS user in that environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. To enable or disable staging, slide the button next to Use as Staging to Yes or No.
6. To enable or disable provisioning, slide the button next to Allow Provisioning to On or Off.
Prerequisites
You cannot delete an environment that has any dependencies, such as dSources or virtual databases (VDBs). These must be deleted before you
can delete the environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, select the environment you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of the environment you want to refresh.
5. Click the Refresh icon.
To refresh all environments, click the Refresh icon next to Environments.
Prerequisites
Make sure you have the correct user credentials for the source environment, as described in Requirements for Oracle Target Hosts
and Databases.
If you are linking a dSource to an Oracle or Oracle RAC physical standby database, you should read the topic Linking Oracle Physical
Standby Databases.
If you are using Oracle Enterprise Edition, you must have Block Change Tracking (BCT) enabled as described in Requirements for
Oracle Source Hosts and Databases.
The source database should be in ARCHIVELOG mode and the NOLOGGING option should be disabled as described in Requirements
for Oracle Source Hosts and Databases.
You may also want to read the topic Advanced Data Management Settings for Oracle dSources.
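Assuming SQL*Plus access as a DBA on the source host, the ARCHIVELOG and Block Change Tracking prerequisites above can be checked with standard Oracle queries before linking (the BCT file path below is an example only):

```sql
-- check archive log mode and forced logging on the source database
SELECT log_mode, force_logging FROM v$database;

-- check whether Block Change Tracking is enabled (Enterprise Edition only)
SELECT status, filename FROM v$block_change_tracking;

-- enable BCT if it is not already enabled (example path)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.f';
```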
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing Oracle Environment Users.
6. Enter your login credentials for the source database and click Verify Credentials.
If you are linking a mounted standby, click Advanced and enter non-SYS login credentials as well. Click Next. See the topics under Linking Oracle Physical Standby Databases for more information about how the Delphix Engine uses non-SYS login credentials.
7. In Add dSource/Add Environment wizard, the Toolkit Path can be set to /tmp (or any unused directory).
8. Select a Database Group for the dSource, and then click Next.
Adding a dSource to a database group lets you set Delphix Domain user permissions for that database and its objects, such as
snapshots. See the topics under Users, Permissions, and Policies for more information.
9. Select an Initial Load option.
By default, the initial load takes place upon completion of the linking process. Alternatively, you can set the initial load to take place
according to the SnapSync policy, for example if you want the initial load to take place when the source database is not in use, or after a
set of operations have taken place.
10. Select whether the data in the database is Masked.
This setting is a flag to the Delphix Engine that the database data is in a masked state. Selecting this option will not mask the data.
11. Select a SnapSync policy.
See Advanced Data Management Settings for Oracle dSources for more information.
12. Click Advanced to edit LogSync, Validated Sync, and Retention policies.
See Advanced Data Management Settings for Oracle dSources for more information.
13. Click Next.
14. Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs, DB_Link and DB_Sync, to create the dSource. You can monitor these jobs by clicking Active
Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have successfully completed, the database icon will
change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under
its assigned group.
and permissions. In the Databases panel, click on the Open icon to view the front of the dSource card. The card will then flip, showing
you information such as the Source Database and Data Management configuration. For more information, see Advanced Data
Management Settings for Oracle dSources.
Related Links
Advanced Data Management Settings for Oracle dSources
Requirements for Oracle Source Hosts and Databases
Requirements for Oracle Target Hosts and Databases
Linking dSources from an Encrypted Oracle Database
Linking Oracle Physical Standby Databases
Users, Permissions, and Policies
Managing Oracle Environment Users
Prerequisites
Make sure the Delphix Engine has already discovered the multitenant container database and its pluggable databases. If the container
database does not exist in the environment, follow the steps in Adding a Database to an Oracle Environment. If the pluggable
database you want to link does not exist in the environment, follow the steps in Discovering Oracle Pluggable Databases in an Oracle
Environment.
You should have Block Change Tracking (BCT) enabled for the container database, as described in Requirements for Oracle Source
Hosts and Databases.
The container database should be in ARCHIVELOG mode and the NOLOGGING option should be disabled, as described in Requirements for Oracle Source Hosts and Databases.
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Select Manage > Databases > Add dSource.
Alternatively, on the Environment Management screen, you can click Add dSource next to the pluggable database name to start the
dSource creation process.
3. In the Add dSource wizard, select the source pluggable database.
If the container database is shown but the pluggable database is not, select the container database, enter its database
credentials, and click Verify Credentials. The Delphix Engine will discover and list all pluggable databases in the container
database. Select the pluggable database from the list.
4. Enter your login credentials for the source database and click Verify Credentials.
5. Click Next.
6. Select a Database Group for the dSource.
7. Click Next.
8. Select an Initial Load option.
By default, the initial load takes place upon completion of the linking process. Alternatively, you can set the initial load to take place
according to the SnapSync policy. For example, you can set the initial load to take place when the source database is not in use, or after
a set of operations have taken place.
9. Select a SnapSync policy.
See Advanced Data Management Settings for Oracle dSources for more information.
10. Click Advanced to edit Oracle Sync Options Settings and Retention policies.
See Advanced Data Management Settings for Oracle dSources for more information.
11. Click Next.
12. Review the dSource Configuration and Data Management information.
13. Click Finish.
The Delphix Engine will initiate two jobs, DB_Link and DB_Sync, to create the dSource. You can monitor these jobs by clicking Active
Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the database icon will
change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under
its assigned group.
Link/Sync of the Multitenant Container Database
The DB_Link job will also link the pluggable database's multitenant container database if it has not been linked yet.
You can also initiate a DB_Sync job for the container database.
Related Links
Adding a Database to an Oracle Environment
Discovering Oracle Pluggable Databases in an Oracle Environment
Requirements for Oracle Source Hosts and Databases
1. In the Data Management panel of the Add dSource wizard, click Advanced.
On the back of the dSource card
1. Click Manage.
2. Select Policies. This will open the Policy Management screen.
3. Select the policy for the dSource you want to modify.
4. Click Modify.
For more information, see Creating Custom Policies and Creating Policy Templates.
Retention Policies
Retention policies define the length of time that the Delphix Engine retains snapshots and log files to which you can rewind or provision objects
from past points in time. The retention time for snapshots must be equal to, or longer than, the retention time for logs.
To support longer retention times, you may need to allocate more storage to the Delphix Engine. The retention policy in combination with the
SnapSync policy can have a significant impact on the performance and storage consumption of the Delphix Engine.
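The constraint above (snapshot retention must be at least as long as log retention) can be sketched as a simple validation. The function name is illustrative and is not part of any Delphix API:

```python
def retention_policy_is_valid(snapshot_retention_days, log_retention_days):
    """Snapshots must be kept at least as long as the logs that depend on them."""
    return snapshot_retention_days >= log_retention_days

# 30-day snapshots with 14-day logs satisfy the rule; the reverse does not.
valid = retention_policy_is_valid(30, 14)
invalid = retention_policy_is_valid(7, 14)
```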
With LogSync enabled, you can customize both the retention policy and the SnapSync policy to access logs for longer periods of time, enabling
point-in-time rollback and provisioning.
The default OS user for the staging host must have access to the Oracle installation that will be used to perform recovery during
validated sync.
Video
Related Links
CLI Cookbook: Enabling Oracle Validated Sync
Related Links
Linking an Oracle Data Source
Provisioning a VDB from an Encrypted Oracle Database
Oracle Version and Backup Mode / Apply Mode / Notes

10.2.0.x, 11.2.0.4, 12.x in SCN Backup Mode
Archive Apply mode and Real Time Apply mode: No special restrictions.

11.1.0.x, 11.2.0.2, 11.2.0.3 in SCN Backup Mode
Archive Apply mode: Due to Oracle bug 10146187, redo apply must be stopped and the database opened in read-only mode during SnapSync. See the section on Stopping and Restarting Redo Apply.
Real Time Apply mode: Due to Oracle bug 10146187, redo apply must be stopped and the database opened in read-only mode during SnapSync (see the section on Stopping and Restarting Redo Apply). In addition, due to Oracle bug 13075226, which results in a hang during the restart of Redo Apply, BCT must be disabled on the standby database. Both Oracle bugs 10146187 and 13075226 are fixed starting from Oracle 11.2.0.4; there is no need to configure stop and restart of Redo Apply, or to disable BCT, if the physical standby database is at version 11.2.0.4 or above.

11.1.0.x, 11.2.0.2, 11.2.0.3 in Level Backup Mode
Archive Apply mode and Real Time Apply mode: No special restrictions, but see the section on Mandatory Requirements for Using Level Backup Mode.
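For the affected 11.1.0.x, 11.2.0.2, and 11.2.0.3 versions, the stop and restart of Redo Apply referenced above uses standard Data Guard statements. The following is a sketch of the manual sequence, not the Delphix-automated procedure:

```sql
-- on the physical standby, before SnapSync: stop redo apply and open read-only
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- for 11.1.0.x, 11.2.0.2, 11.2.0.3 standbys (bug 13075226), also disable BCT
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

-- after SnapSync completes: restart redo apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```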
backup streams.
However, SCN Backup mode requires stopping/restarting redo apply for Physical Standbys on Oracle versions 11.1.0.x, 11.2.0.2, 11.2.0.3
because of Oracle bug 10146187 listed in the support matrix. For these specific Oracle versions, using Level Backup mode is a better option
provided the requirements below are satisfied.
Failure to meet all of these requirements will cause external RMAN backups to be incomplete or result in corrupt SnapSync snapshots.
Switching from SCN to LEVEL mode will force a new LEVEL 0 backup.
2.
3.
4.
Select the Oracle dSource for which you want to add a non-SYS user.
5.
Click the dSource's Expand icon to open the dSource card, then click the Flip icon on the card to view the
back.
6.
Related Links
Linking an Oracle Data Source
Advanced Data Management Settings for Oracle dSources
Using Pre- and Post-Scripts with Oracle dSources
Example of Attaching and Redirecting External Data Files for Oracle Databases
This example uses two environments:
1. 172.16.200.446 as the source environment
dinosaur as the source database
2. 172.16.200.447 as the target environment
vdino as the target database
Linking a dSource
1. Create an external data directory and an external data file, and attach the directory to the source database.
a. Log into 172.16.200.446 as the environment user.
b. Create a physical directory on the source environment.
$ mkdir /work/extdata
c. Create a directory in Oracle.
$ sqlplus / as sysdba
SQL> create or replace directory extdata as '/work/extdata';
d. Create a text file /work/extdata/exttab.dat.
e. Create an external table over exttab.dat.
$ sqlplus / as sysdba
SQL> create table exttab (id number, text varchar2(10))
  2  organization external (default directory extdata
  3  location('exttab.dat'));
f. Query the table.
Provisioning a VDB
1. Provision vdino from Dinosaur.
2. Modify the directory extdata in vdino
a. Log into the target environment 172.16.200.447
b. Set SID to vdino
$ export ORACLE_SID=vdino
c. A query to exttab will fail.
$ sqlplus / as sysdba
SQL> select * from exttab
select * from exttab
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04063: unable to open log file EXTTAB_23394.log
OS error No such file or directory
ORA-06512: at "SYS.ORACLE_LOADER", line 19
3. Modify directory to the new location.
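Step 3 amounts to recreating the directory object so that it resolves to a path that exists on the target. Assuming the data file is copied to the same path on 172.16.200.447, a sketch is:

```sql
-- on the target host: create /work/extdata and copy exttab.dat into it first,
-- then repoint the Oracle directory object (run as sysdba on vdino)
create or replace directory extdata as '/work/extdata';

-- the query from the source example should now succeed
select * from exttab;
```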
Related Links
Linking an Oracle Data Source
Provisioning an Oracle VDB
Related Links
Linking an Oracle Data Source
Prerequisites
Do not suspend LogSync on the Delphix Engine during an Oracle upgrade of the source environment. LogSync will detect the Oracle version
change, and automatically update this information on the Delphix Engine for all the associated dSources and VDBs. Follow all Oracle instructions
and documentation.
Procedure
There are two ways to apply a PSU (Patch Set Update) or Oracle upgrade:
A) Apply it to the existing ORACLE_HOME (best if on Delphix 4.1.x or higher).
B) Create a new ORACLE_HOME (you can clone the existing one), and then apply the PSU to the new ORACLE_HOME.
For a dSource using option A:
1) Following the Oracle documentation, patch the ORACLE_HOME and the database.
2) Refresh the environment in the GUI.
For a dSource using option B:
1) Refresh the environment from the Delphix GUI and verify that the new ORACLE_HOME is picked up and appears on the Databases tab as an Oracle installation.
2) Following the Oracle documentation, patch the production database.
3) In the Delphix GUI, flip the dSource card over and use the Upgrade icon at the bottom to switch the Oracle installation to the new one (verified in step 1).
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
Detaching a dSource
1. Login to the Delphix Admin application as a user with OWNER privileges on the dSource, group, or domain.
2. Click Manage.
3. Select My Databases.
4. Select the database you want to unlink or delete.
5. Click the Unlink icon.
A warning message will appear.
6. Click Yes to confirm.
Attaching a dSource
Rebuilding Source Databases and Using VDBs
In situations where you want to rebuild a source database, you will need to detach the original dSource and create a new one
from the rebuilt data source. However, you can still provision VDBs from the detached dSource.
1. Detach the dSource as described above.
2. Rename the detached dSource by clicking the Edit icon in the upper left-hand corner of the dSource card, next to its
name.
This is necessary only if you intend to give the new dSource the same name as the original one. Otherwise, you will see an error message.
3. Create the new dSource from the rebuilt database.
You will now be able to provision VDBs from both the detached dSource and the newly created one, but the detached dSource
will only represent the state of the source database prior to being detached.
The attach operation is currently supported only from the command line interface (CLI). Full GUI support will be added in a future release. Only databases that represent the same physical database can be re-attached.
1. Login to the Delphix CLI as a user with OWNER privileges on the dSource, group, or domain.
2. Select the dSource by name using database select <dSource>.
3. Run the attachSource command.
4. Set the source config to which you want to attach using set source.config=<newSource>. Source configs are named by their
database unique name.
5. Set any other source configuration operations as you would for a normal link operation.
6. Run the commit command.
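Put together, steps 1 through 6 look roughly like the following CLI session. The dSource name and source config name are placeholders, and the prompts are abbreviated:

```
delphix> database select "dexample"
delphix database "dexample"> attachSource
delphix database "dexample" attachSource *> set source.config=dexample_config
delphix database "dexample" attachSource *> commit
```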
Prerequisites
You cannot delete a dSource that has dependent virtual databases (VDBs). Before deleting a dSource, make sure that you have deleted all
dependent VDBs as described in Deleting a VDB.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Deleting a dSource will also delete all snapshots, logs, and descendant VDB Refresh policies for that database. You cannot
undo the deletion.
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You must have added a compatible target environment on the target host.
Procedure
1. Login to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Description
There is a critical fault associated with the dSource or VDB. See
the error logs for more information.
You can use and refresh those VDBs to the new snapshot.
The improved user workflow replaces the old one, which directed users to troubleshoot when SnapSync failed. Oracle Source Continuity begins in the following way:
1. The database undergoes a resetlogs operation.
2. If LogSync is enabled, it generates a fault and stops.
3. Start SnapSync. The SnapSync does a full restore of the database to a new timeflow, clears the fault, and restarts LogSync. If you
created VDBs prior to the resetlogs operation, they will still exist after the SnapSync; you can refresh them from the new snapshot.
Version 4.2.0
Once LogSync detects the resetlogs operation and throws the fault, no more changes will be retrieved from the
database until you start a new SnapSync. This SnapSync will take a full backup, clear the fault, and restart
LogSync. Only the new snapshot and timeflow will be visible in the dSource TimeFlow view in the graphical user
interface (GUI). Previous snapshots and timeflow will still exist and be visible through the command line interface
(CLI) and the Capacity Timeflow view of the GUI. The following screenshot shows the same Delphix Engine after a
SnapSync has been performed. Note that the fault has been cleared, LogSync is now active, and only the new
snapshot is visible in the GUI.
Version 4.2.0
The following CLI output shows that the old and new timeflow and snapshots are still available. The name of the original timeflow for the database
is "default." The name of the new timeflow that was created during the SnapSync is "CLONE@2015-01-15T17:07:20."
delphix> /timeflow list display=name,container
NAME                          CONTAINER
'CLONE@2015-01-15T17:07:20'   dbdhcp1
default                       dbdhcp1

delphix> /snapshot list display=name,container,timeflow
NAME                          CONTAINER  TIMEFLOW
'@2015-01-16T00:50:08.784Z'   dbdhcp1    default
'@2015-01-16T00:52:13.685Z'   dbdhcp1    default
'@2015-01-16T00:53:46.873Z'   dbdhcp1    default
'@2015-01-16T00:55:18.079Z'   dbdhcp1    default
'@2015-01-16T01:08:02.411Z'   dbdhcp1    'CLONE@2015-01-15T17:07:20'
The old snapshots and timeflow will still be subject to logfile and snapshot retention policies. You can also delete the snapshots manually. In
addition, you can use the CLI to provision from the old timeflow.
Oracle LiveSources
Oracle LiveSources Overview
Understanding Oracle LiveSources
Understanding How to Use Oracle LiveSources
Oracle LiveSources Quickly Sync with Consistent Snapshots
Oracle LiveSources Use Resync and Apply
LiveSource Resync is a two-step operation consisting of:
Pre-requisites: Configuration and Installation of Staging Environments To Host a Standby Database
Related Links
The Data Age of the LiveSource is displayed on the LiveSource timeflow. A spinning gear, as seen below, indicates
that the LiveSource standby database instance is actively receiving data from the source database. Delphix
continuously monitors the standby instance and notifies users of any abnormalities.
Users can change the Data Age Threshold at any time by flipping the LiveSource card and updating the threshold
value in the card as seen below.
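The monitoring described above amounts to comparing the standby's apply lag against the configured Data Age Threshold. The sketch below is illustrative only and is not the Delphix implementation; times are passed explicitly so the check is deterministic:

```python
from datetime import datetime, timedelta

def data_age_exceeded(now, last_apply_time, threshold):
    """True when the standby's apply lag exceeds the warning threshold."""
    return (now - last_apply_time) > threshold

now = datetime(2016, 1, 15, 12, 0, 0)
# a 15-minute lag is within a 30-minute threshold; a 45-minute lag is not
ok = data_age_exceeded(now, datetime(2016, 1, 15, 11, 45), timedelta(minutes=30))
fault = data_age_exceeded(now, datetime(2016, 1, 15, 11, 15), timedelta(minutes=30))
```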
performed:
There are unresolvable gaps in the log sequence; for example, logs from the source database were deleted before the primary database could ship them to the LiveSource standby.
The source database was taken through a point-in-time recovery or flashback, resulting in a changed incarnation.
The source database contains non-logged changes. In this case, a Resync is needed only if you want to move the non-logged data over to the LiveSource.
The LiveSource is significantly behind the source database due to network communication issues or a large amount of writes.
The Resync operation performs one or more incremental backups from the source database to ensure up-to-date data, and recreates the LiveSource instance while preserving all of its configuration. This operation requires downtime for the LiveSource.
If the prepared resync data is no longer needed, or the resync data has become obsolete (for example, another controlled change has been made on the source database), you can discard the current resync data with Discard Resync Data. The next Resync will re-fetch data from the source database.
The LiveSource feature requires an Active Data Guard license. Delphix uses Active Data Guard to replicate changes
from the source database to a standby database that it creates on the staging environment.
Network Requirements
LiveSource requires a Data Guard connection between the source and the standby database which utilizes TNS listeners associated with the
databases.
Database Requirements
LiveSource requires Enterprise Edition of Oracle Database.
Related Links
Oracle LiveSource User Workflows
To get a live feed to the source database data through the Delphix Engine, you must first link the database to the Delphix Engine to create a
dSource. You can then convert the dSource into a LiveSource by following the steps outlined below:
1.
2.
3.
Click Convert to LiveSource, as highlighted above. This launches the Convert to LiveSource wizard.
Note: The LiveSource database name must be the same as the database name of the primary database; therefore, this value is read-only.
4. Click Next.
Convert to LiveSource, Section 3 of 6 in the LiveSource Wizard
The image below illustrates where a user is to configure virtual database (VDB) templates and DB configuration parameters.
1.
2.
3.
Click Next.
1. The image below illustrates where you will enter the data age warning threshold for the LiveSource.
If the data in the LiveSource lags behind the source database by more than this threshold, the Delphix Engine will raise a fault and notify you.
2.
Click Next.
1. As seen in the image below, you can enter the operations to be performed on initial conversion. These operations are performed after
the Delphix Engine has created the standby database for the LiveSource.
2. Click Next.
2.
Setting up Log Transport between a dSource or Primary Database and a LiveSource or Standby Database
After adding a LiveSource instance, you must configure the log transport between the dSource or primary database and the LiveSource or
standby database. For details on configuring a standby database, refer to the Oracle Data Guard Concepts and Administration guide.
At source/primary database:
1. Configure the LOG_ARCHIVE_CONFIG parameter to enable the sending of redo logs to remote destinations and the receipt of remote
redo logs (the LiveSource instance). For example:
alter system set log_archive_config='DG_CONFIG=(sourcedb,livesource)' scope=both;
2. Configure the LOG_ARCHIVE_DEST_n parameter to point the redo logs to the LiveSource instance. For example:
alter system set log_archive_dest_2='SERVICE=livesource ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=livesource' scope=both;
3. Configure the corresponding LOG_ARCHIVE_DEST_STATE_n parameter to identify whether the log transport is enabled. For example:
alter system set log_archive_dest_state_2='ENABLE' scope=both;
6.
Configure the STANDBY_FILE_MANAGEMENT parameter to enable automatic standby file management. For example:
alter system set standby_file_management='AUTO' scope=both;
At the Staging Environment where the LiveSource standby database environment is running:
1. Configure the FAL_SERVER parameter to point to the primary database for proper fetch archive log function. For example:
ALTER SYSTEM SET fal_server='service="(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sourcedb.dcenter.delphix.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sourcedb)(SERVER=DEDICATED)))"';
2.
Removing a LiveSource
3. Click Convert to dSource, as highlighted in the lower right-hand corner of the LiveSource card below:
As seen in the image below, you can take a snapshot of a LiveSource by clicking the camera icon on the front of the LiveSource card.
LiveSource snapshots are instantaneous, Quick Provision snapshots and don't require an RMAN backup of the source database.
Provisioning from a LiveSource timeflow is the same process as provisioning from a snapshot for dSource
timeflow. The only difference is that you will select a LiveSource and a LiveSource snapshot.
Note: When you enable the LiveSource, the Delphix Engine will recreate the standby database on the staging environment.
Note: Disabling a LiveSource shuts down the standby database that Delphix manages on the staging environment.
You can detach a LiveSource in the same way as detaching a regular dSource. Detaching a LiveSource will implicitly
convert the LiveSource into a regular dSource. After a dSource is re-attached, you can convert it back into a
LiveSource.
Resyncing a LiveSource + Applying the Resync
Resync is a way to refresh the LiveSource to the current point in the linked source. Resync is a multi-phase operation comprised of the following:
Perform Resync
1.
Click Manage.
2.
Select Databases.
3.
Select My Databases.
4.
5. Click the Start Resync Data icon, as highlighted in the image below.
Click the Discard Resync Data icon, as highlighted in the image below.
1. Click Manage.
2. Select Databases.
3. Select My Databases.
4. Flip the LiveSource card.
5.
If the apply resync data process failed, first investigate and resolve the cause of failure, such as a full disk. Then
follow the procedure to start resync.
Migrating a LiveSource
1. Click Manage.
2. Select Databases.
3. Select My Databases.
4. Flip the LiveSource card.
5.
6.
Click the Migrate icon on the lower right-hand side of the LiveSource card, as seen below:
7. Update the environment, user, and repository, as illustrated in the image below:
Note: After the LiveSource is migrated to a different staging environment, you must ensure that the log transport between the source
database and the LiveSource instance on the new staging environment is set up correctly.
Upgrading a LiveSource
If the source database for the LiveSource has been upgraded, you must inform Delphix of the updated Oracle installation and the associated environment user for both the source database and the LiveSource. To do so, follow the steps below:
1. Click Manage.
2. Select Databases.
3. Select My Databases.
4.
5.
6.
On the back of the LiveSource card, click the upgrade icon in the lower right-hand corner, as highlighted in
the image below.
7. Specify the new installation and environment user for the Linked Source and the LiveSource, as illustrated in the image below.
Replication Prerequisites
Delphix Version
Basic Connectivity
Storage
User
Delphix Version
As mentioned previously, the source primary Delphix Engine and the replica target Delphix Engine must be exactly the same Delphix version. If
you attempt to establish replication between non-matching versions, you will receive an error.
Basic Connectivity
Verify that the source primary engine can reach the replica target engine on your network. If there is a firewall between the two, verify that port
8415 is open from the source to the target.
Storage
The target replica Delphix Engine must have space available in order to retain the objects being copied from the source primary Delphix Engine.
In order to verify this, perform the following:
1. On the source primary engine, login to the administrative GUI as delphix_admin or a similar user.
2. Click the Resources menu.
3. Select Capacity.
4. On the resulting Capacity Management screen, make note of the size of any object or group of objects that you plan on replicating to the
target.
5. Add together all objects you plan on replicating to have a total sum of the space required.
6. On the target replica engine, login to the administrative GUI as delphix_admin or a similar user.
7. Click the Manage menu.
8. Select Dashboard.
9. In the upper right-hand corner, under Capacity Management, verify that Available Space has enough capacity to hold the total object
size from the source primary engine plus enough space left over to be under the warning and critical space alerts for your site.
If adding the Primary objects to the replica will put used capacity over the warning or critical disk thresholds for your site, add additional
storage capacity before you configure replication.
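The capacity check in steps 4 through 9 is simple arithmetic. All numbers below are hypothetical placeholders for the sizes shown on your Capacity Management screen, and the 85% warning threshold is an assumed site setting:

```python
# objects on the source primary engine that will be replicated (step 5)
objects_gb = {"dSource_erp": 120.0, "group_finance": 340.5}
required_gb = sum(objects_gb.values())

# capacity on the target replica engine (step 9)
target_total_gb = 2048.0
target_used_gb = 900.0
warning_threshold = 0.85  # assumed site warning level: 85% of total capacity

# replication should leave the target below the warning threshold
projected_used_fraction = (target_used_gb + required_gb) / target_total_gb
enough_capacity = projected_used_fraction < warning_threshold
```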
User
In order to configure replication, you must use a username and password with administrative privileges on the target. This can be delphix_admin
or any other user with administrative privileges on the target.
Replication Verification
Once the initial replication is complete, you can verify that all wanted objects have been replicated to the target by examining the namespace(s)
showing on the target:
1. On the target Delphix Engine, login to the administrative GUI as delphix_admin or another user with administrative privileges.
2. Click System.
3. Select Namespaces.
The network name of the source Delphix engine(s) will appear on the left-hand side of the screen, and the replicated objects related to
the selected engine on the right-hand side.
The example screenshot below shows the objects replicated from a Delphix Engine that has the network name "source".
If the target engine is unable to resolve the IP address of the source Delphix Engine to a name, the namespace will be listed as
the IP address of the source. You can give the namespace a different name by clicking the pencil icon next to the
namespace name, located above Databases.
4. If there is more than one namespace listed, select the namespace on the left that you need to verify.
5. Verify that all desired objects from the source have been replicated to the target by reviewing the databases, groups, and environments
that are showing for the selected namespace.
a. Click Manage.
b. Select Dashboard and monitor the replication job there.
Once the final replication job is complete, you are ready to failover the namespace to the target Delphix Engine.
Provisioning VDBs from Oracle, Oracle RAC, and Oracle PDB Sources
These topics describe concepts and tasks for provisioning a VDB from an Oracle, Oracle RAC or Oracle PDB Source.
Provisioning Oracle VDBs: An Overview
Provisioning an Oracle VDB
Provisioning an Oracle Virtual Pluggable Database
Customizing Oracle VDB Configuration Settings
Customizing Oracle VDB Environment Variables
Customizing VDB File Mappings
Provisioning a VDB from an Encrypted Oracle Database
Time Flows for RAC Provisioning of VDBs
TimeFlow Patching
Enabling and Disabling an Oracle VDB
Provisioning from a Replicated Oracle VDB
Rewinding an Oracle VDB
Refreshing an Oracle VDB
Deleting an Oracle VDB
Migrating an Oracle VDB
Upgrading an Oracle VDB
Migrate a vPDB
Repository Templates
Repository templates are a new feature introduced in the Fhloston release. The primary use case and motivation for this new capability is to
provide the Delphix administrator with control over the Oracle database parameters used during the staging phase of the VDB provisioning
process. It is useful to be able to control these configuration parameters when the physical capabilities of the staging machine, such as CPU
count and memory, are inferior to the physical capabilities of the machines hosting the source database repository.
The repository template is a relationship between three entities:
A database repository: the entity that contains database instances on host environments
A database container: an entity that represents all of the physical data associated with the database
A VDB configuration template: a list of database configuration parameter names and values that you can save on the Delphix Engine to use at a later time
During the staging process, if you do not specify a repository template, then by default the Delphix Engine will use the configuration parameters
taken from the source database to configure the staged database. These parameters may not be appropriate, because the machine used for
staging may be physically inferior to the machine hosting the source database.
Instead, the Delphix administrator can create a VDB configuration template appropriate for the physical machine hosting the staging repository (see Create VDB Config Template). The admin can then create a repository template entry that binds together the VDB configuration template, database repository, and database container. This instructs the Delphix Engine to use configuration parameters from the VDB configuration template, instead of the parameters on the source database, whenever the database container is staged on the specified database repository.
Currently, repository template relations can only be created via the command line interface (CLI) in repository->template.
1. Switch to the repository->template context and create a new template entry.
delphix> repository template
delphix> create
delphix repository template create *> set name=RepositoryTemplate1
delphix repository template create *> set container=DBContainer1
delphix repository template create *> set repository=DBRepository1
delphix repository template create *> set template=DBTemplate1
delphix repository template create *> commit
Related Links
Supported Operating Systems and DBMS Versions for Oracle Environments
Requirements for Oracle Target Hosts and Databases
Customizing Oracle VDB Configuration Settings
Customizing VDB File Mappings
Prerequisites
You must have already done one of the following:
linked a dSource from a source database, as described in Linking an Oracle Data Source
or
created a VDB from which you want to provision another VDB
You will need to have the correct OS User privileges on the target environment, as described in Requirements for Oracle Target Hosts
and Databases.
If you want to use customized database configuration settings, first create a VDB Config Template as described in Customizing Oracle
VDB Configuration Settings.
If you are creating a VDB from a dSource linked to an encrypted database, make sure you have copied the wallet file to the target
environment, as described in Provisioning a VDB from an Encrypted Oracle Database.
Procedure
1. Login to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a dSource snapshot.
See Provisioning by Snapshot and LogSync in this topic for more information on provisioning options.
You can take a snapshot of the dSource from which to provision. To do so, click the Camera icon on the dSource card.
7. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a point
of time within a snapshot.
You can provision from the most recent log entry by opening the snapshot timeline, and then clicking the red Arrow icon next
to the LogSync Slider.
8. Click Provision.
The Provision VDB panel will open, and the fields Installation Home, Database Unique Name, SID, Database Name, Mount Base,
and Environment User will auto-populate with information from the dSource.
9. If you need to add a new target environment for the VDB, click the green Plus icon next to the Filter Target field, and follow the
instructions in Adding an Oracle Single Instance or RAC Environment.
10. Review the information for Installation Home, Database Unique Name, SID, and Database Name. Edit as necessary.
11. Review the Mount Base and Environment User. Edit as necessary.
The Environment User must have permissions to write to the specified Mount Base, as described in Requirements for Oracle Target
Hosts and Databases. You may also want to create a new writeable directory in the target environment with the correct permissions and
use that as the Mount Base for the VDB.
12. Select Provide Privileged Credentials if you want to use login credentials on the target environment that are different from those
associated with the Environment User.
13. Click Advanced to customize the VDB online log size and log groups, archivelog mode, Oracle Node Listeners, and additional VDB configuration settings.
Provisioning by Snapshot

Provision by Time: You can provision to the start of any snapshot by selecting that snapshot card from the TimeFlow view or by entering a value in the time entry fields below the snapshot cards. The values you enter will snap to the beginning of the nearest snapshot.

Provision by SCN: You can use the Slide to Provision by SCN control to open the SCN entry field. Here, you can type or paste in the SCN to which you want to provision. After entering a value, it will snap to the start of the closest appropriate snapshot.

When provisioning by LogSync information, you can provision to any point in time, or to any SCN, within a particular snapshot. The TimeFlow view for a dSource shows multiple snapshots by default. To view the LogSync data for an individual snapshot, use the Slide to Open LogSync control at the top of an individual snapshot card.

Provisioning by LogSync

Provision by Time: Use the Slide to Open LogSync control to view the time range within that snapshot. Drag the red triangle to the point in time from which you want to provision. You can also enter a date and time directly.

Provision by SCN: Use the Slide to Open LogSync and Slide to Provision by SCN controls to view the range of SCNs within that snapshot. Drag the red triangle to the SCN from which you want to provision. You can also type or paste in the specific SCN to which you want to provision. Note that if the SCN does not exist, you will see an error when you provision.
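The snapping behavior described above amounts to choosing the latest snapshot that starts at or before the value you enter. A hypothetical shell sketch of that selection, not Delphix code:

```shell
#!/bin/sh
# Pick the latest snapshot whose starting SCN is <= the requested SCN.
# Snapshot SCNs are passed in ascending order; prints "none" if all are later.
snap_scn() {
    req=$1
    shift
    best=none
    for s in "$@"; do
        if [ "$s" -le "$req" ]; then
            best=$s
        fi
    done
    echo "$best"
}

snap_scn 1500 1000 1200 1600   # snapshots begin at SCNs 1000, 1200, 1600
# → 1200
```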
Procedure
1. After provisioning, you can select the VDB card and flip it to the back to edit the RAC VDB instance configuration.
Related Links
Linking an Oracle Data Source
Requirements for Oracle Target Hosts and Databases
Using Pre- and Post-Scripts with dSources and SQL Server VDBs
Customizing Oracle VDB Configuration Settings
Customizing VDB File Mappings
Prerequisites
You must have done one of the following:
linked a PDB dSource from a multitenant container database, as described in Linking an Oracle Pluggable Database
already created a VPDB from which you want to provision another VPDB
There must be a target environment that has a compatible multitenant container database to host the VPDB you are about to create
You will need to have the correct operating system (OS) user privileges on this target environment. For more information, refer
to Requirements for Oracle Target Hosts and Databases.
The multitenant container databases (CDBs) of the source PDB and the target that will host the VPDB must meet the following
requirements:
They must have the same endian format
They must be in ARCHIVELOG mode
They must have compatible character sets and national character sets, which means:
Every character in the source CDB character set is available in the target CDB character set
Every character in the source CDB character set has the same code point value in the target CDB character set
They must have the same set of database options installed. For example, if the source CDB is a real application cluster (RAC)
database, the target CDB must be a RAC database.
Procedure
1. Log into the Delphix Admin application.
2. Select Manage > Databases > My Databases.
3. Select a PDB dSource or a previously provisioned VPDB.
4. Select a snapshot.
For more information on provisioning options, see the Provisioning by Snapshot or LogSync section in Provisioning an Oracle VDB.
You can take a snapshot of the source database to provision from by clicking the Camera icon on the source card.
5. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a point
of time within a snapshot.
You can provision from the most recent log entry by opening the snapshot timeline and then clicking the red Arrow icon next to the LogSync Slider.
6. Click Provision.
The Provision VDB panel will open, and the provision target fields Installation Home, Container Database, Database Name, Mount
Base, and Environment User will auto-populate. Information from the selected target environment will be highlighted on the left hand
pane.
7. For each selected Installation Home, there can be more than one Container Database. Use the drop down box to further select which
Container Database you are about to provision to host your VPDB.
8. Review the information for Installation Home, Container Database, and Database Name. Change or edit as necessary.
9. Review the Mount Base and Environment User and edit as necessary.
The Environment User must have permissions to write to the specified Mount Base, as described in Requirements for Oracle Target Hosts and Databases. You may also want to create a new writeable directory in the target environment with the correct permissions, and use that as the Mount Base for the VDB.
10. Select Provide Privileged Credentials if you want to use login credentials on the target environment other than those associated with
the Environment User.
11. Click Advanced to enter any file mapping settings for your VPDB.
The container database of the VPDB will be automatically linked if it has not been linked already.
Related Links
Linking an Oracle Pluggable Database
Provision an Oracle VDB
Discovering Oracle Pluggable Databases in an Oracle Environment
Requirements for Oracle Target Hosts and Databases
Customizing VDB File Mappings
Migrate a vPDB
Customizing Oracle Management with Hook Operations
You can also set the template reference during provisioning. See the CLI Cookbook: Provisioning a Single Instance Oracle VDB topic for
more information.
Restricted Parameters
These parameters are restricted for use by the Delphix Engine. Attempting to customize these parameters through the use of a VDB Config
Template will cause an error during the provisioning process.
active_instance_count
cluster_database
cluster_database_instances
cluster_interconnects
control_files
db_block_size
db_create_file_dest
db_create_online_log_dest_1
db_create_online_log_dest_2
db_create_online_log_dest_3
db_create_online_log_dest_4
db_create_online_log_dest_5
db_file_name_convert
db_name
db_recovery_file_dest
db_recovery_file_dest_size
db_unique_name
dg_broker_config_file1
dg_broker_config_file2
dg_broker_start
fal_client
fal_server
instance_name
instance_number
local_listener
log_archive_config
log_archive_dest
log_archive_duplex_dest
log_file_name_convert
spfile
standby_archive_dest
standby_file_management
thread
undo_tablespace
__db_cache_size
__java_pool_size
__large_pool_size
__oracle_base
__pga_aggregate
__sga_target
__shared_io_pool_size
__shared_pool_size
__streams_pool_size
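Because customizing any restricted parameter fails provisioning, a template can be screened against the restricted list before use. A hypothetical shell sketch of such a check (only a few restricted names shown, not Delphix code):

```shell
#!/bin/sh
# Illustrative subset of the restricted parameter list above.
restricted="db_name db_unique_name control_files spfile instance_name"

# Read "name=value" lines on stdin; report the first restricted parameter.
check_template() {
    while IFS='=' read -r name _; do
        for r in $restricted; do
            if [ "$name" = "$r" ]; then
                echo "restricted: $name"
                return 1
            fi
        done
    done
    echo "template ok"
}

printf 'db_cache_size=512M\ndb_name=mydb\n' | check_template
# → restricted: db_name
```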
Customizable Parameters
The default value for these parameters is cleared during the provisioning process. They are removed from the VDB configuration file unless you
set values for them through a VDB Config Template.
audit_file_dest
audit_sys_operations
audit_trail
background_dump_dest
core_dump_dest
db_domain
diagnostic_dest
dispatchers
fast_start_mttr_target
log_archive_dest_1
log_archive_dest_2
log_archive_dest_3
log_archive_dest_4
log_archive_dest_5
log_archive_dest_6
log_archive_dest_7
log_archive_dest_8
log_archive_dest_9
log_archive_dest_10
log_archive_dest_11
log_archive_dest_12
log_archive_dest_13
log_archive_dest_14
log_archive_dest_15
log_archive_dest_16
log_archive_dest_17
log_archive_dest_18
log_archive_dest_19
log_archive_dest_20
log_archive_dest_21
log_archive_dest_22
log_archive_dest_23
log_archive_dest_24
log_archive_dest_25
log_archive_dest_26
log_archive_dest_27
log_archive_dest_28
log_archive_dest_29
log_archive_dest_30
log_archive_dest_31
log_archive_dest_state_1
log_archive_dest_state_2
log_archive_dest_state_3
log_archive_dest_state_4
log_archive_dest_state_5
log_archive_dest_state_6
log_archive_dest_state_7
log_archive_dest_state_8
log_archive_dest_state_9
log_archive_dest_state_10
log_archive_dest_state_11
log_archive_dest_state_12
log_archive_dest_state_13
log_archive_dest_state_14
log_archive_dest_state_15
log_archive_dest_state_16
log_archive_dest_state_17
log_archive_dest_state_18
log_archive_dest_state_19
log_archive_dest_state_20
log_archive_dest_state_21
log_archive_dest_state_22
log_archive_dest_state_23
log_archive_dest_state_24
log_archive_dest_state_25
log_archive_dest_state_26
log_archive_dest_state_27
log_archive_dest_state_28
log_archive_dest_state_29
log_archive_dest_state_30
log_archive_dest_state_31
remote_listener
user_dump_dest
Overview
Certain Oracle database parameters are sensitive to the environment variables present when you start or administer the database. For this
reason, the Delphix Engine allows you to dictate custom environment variables that will be set prior to any administrative action, such as
provision, start, stop, rollback, or refresh.
You can specify environment variables by two different means:
Name-Value Pair: A literal variable name and value to be set
Environment File: An environment file to be sourced
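The Environment File mechanism is ordinary shell sourcing; a minimal sketch of the effect (the file contents here are illustrative assumptions, not required names):

```shell
#!/bin/sh
# Write a throwaway environment file, then source it the way the engine
# would before an administrative action such as start or stop.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
ORACLE_SID=vdb1
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export ORACLE_SID NLS_LANG
EOF

. "$envfile"          # "sourcing": the variables now apply to this session
echo "$ORACLE_SID"    # → vdb1
rm -f "$envfile"
```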
Environment variables for Oracle RAC databases might vary in value between cluster nodes. Therefore, environment variable specifications for
an Oracle RAC database must specify the cluster node to which they apply.
Procedure
1. You can configure custom environment variables in the Provision Wizard.
a. On the Target Environment tab, click Advanced.
or
b. You can also configure these variables on the back of an Oracle VDB card (on the Standard tab) when the VDB is disabled.
2. Click the Plus icon to add an environment variable.
3. Choose a format of environment variable.
a. Name-Value Pair
i. Enter a Name to identify the variable.
ii. Enter the variable's Value.
iii. For Oracle RAC databases, you must also specify the cluster node to which this environment variable applies.
b. Environment File
i. Enter an absolute path to an environment file on the target environment.
This path can be followed by parameters. Paths and parameters are separated by spaces.
Escaping Spaces
To specify literal spaces, escape them with a backslash ("hello\ world" -> "hello world").
To specify literal backslashes, escape them with a backslash ("foo\\" -> "foo\").
Any other character preceded by a backslash will retain both the backslash and the original character ("\b" ->
"\b").
Escaping is done in order from left to right ("part1\\ part2" -> "part1\" "part2" will be two parameters).
ii. For Oracle RAC databases, you must also specify the cluster node to which this environment variable applies.
4. Save the custom environment variables by completing provisioning, or clicking the Confirm icon below the widget on the VDB card.
These environment variables will take effect when you start the Oracle VDB.
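The escaping rules above can be modeled as a single left-to-right scan. A hypothetical shell sketch (not Delphix code) that marks each parameter boundary with "|":

```shell
#!/bin/sh
# Scan the string left to right, applying the documented escape rules:
#   "\ " -> literal space      "\\" -> literal backslash
#   "\X" -> backslash kept     " "  -> parameter boundary (shown as |)
split_params() {
    input=$1
    out=''
    while [ -n "$input" ]; do
        rest=${input#?}
        c=${input%"$rest"}          # first character
        input=$rest
        if [ "$c" = '\' ]; then
            rest=${input#?}
            n=${input%"$rest"}      # character following the backslash
            input=$rest
            case $n in
            ' ' | '\') out="$out$n" ;;   # escaped space or backslash
            *) out="$out\\$n" ;;         # keep backslash and character
            esac
        elif [ "$c" = ' ' ]; then
            out="$out|"                  # unescaped space splits parameters
        else
            out="$out$c"
        fi
    done
    printf '%s\n' "$out"
}

split_params 'hello\ world'     # → hello world
split_params 'part1\\ part2'    # → part1\|part2
```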
Caveats
Environment variables are sourced on provision, start, stop, rollback, and refresh. Custom environment variables are not applicable to
V2P.
Custom environment variables do not propagate to child VDBs and must be set again on provision.
Custom environment variables do not persist after migration.
On migration of a VDB with custom environment variables, an alert will be raised that the custom environment variables have been
removed from the VDB. In order to view the alert, go to System -> Event Viewer.
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/wallets/$ORACLE_SID)))
Procedure
1. Check for any encrypted columns or tablespaces on the source database by using these commands:
$ more sqlnet.ora
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)
(METHOD_DATA=(DIRECTORY=/opt/oracle/oradata/nf/wallet)))
3. If the source database does not use auto-open wallet, create the auto-open wallet at the target environment.
TimeFlow Patching
Introduction
The Delphix Engine provides the ability to link to an external database by creating a dSource within the Delphix system. Once linked, the Delphix Engine maintains a complete history of the database as part of a TimeFlow, limited by the retention policies configured by the administrator. From any time within that TimeFlow, you can provision a virtual database (VDB) from the Delphix Engine. This TimeFlow is maintained through the use of SnapSync and LogSync.
The SnapSync operation pulls over the complete data set of the external database during the initial load. Subsequent SnapSync operations pull and store only incremental changes. At the end of each SnapSync operation, a snapshot is created that serves as the base point for provisioning operations. In addition, LogSync periodically connects to the host(s) running the source database and pulls over any log files associated with the database. These log files are stored separately from the SnapSync data and are used to provision from points in between SnapSync snapshots. Usually SnapSync operates against a live database with changes actively being made to it. Hence the data that it pulls over is fuzzy, and logs must be applied to the data to make it consistent and provisionable. If LogSync is enabled, SnapSync relies on it to copy the logs over. If LogSync is not enabled, SnapSync copies the logs itself. Occasionally, LogSync or SnapSync is not able to retrieve one or more log files from the database. This creates a break in the TimeFlow or can prevent a snapshot from being provisioned. To remedy this situation, the Delphix Engine has tools to repair, or patch, a snapshot and the TimeFlow.
Snapshot Repair
Note that the steps below do not apply if your archive logs are stored on ASM; in that case, the archived logs need to be moved to a supported filesystem directory.
When missing log files prevent a snapshot from being provisioned, you can use the graphical user interface (GUI) to determine the missing logs
and repair the snapshot. The
Delphix Engine will generate a fault whenever missing logs prevent a snapshot from being
provisionable. The fault will likely have the title Cannot provision database from snapshot and will contain a
description of the cause. The most common causes are:
Logs were deleted/moved/archived from the database before the Delphix Engine could retrieve them. In this case, the archive log
retention policy on the source database may be too aggressive. Use the GUI snapshot repair tool to fetch the logs.
LogSync is still fetching the logs. SnapSync is relying on LogSync to fetch the logs needed to make the snapshot consistent. SnapSync
normally will wait up to 15 minutes for LogSync to fetch the logs. If LogSync has not fetched the logs by then, SnapSync will generate a
fault and finish. The best course of action in this case may be to wait for LogSync to fetch the logs.
The source database is a physical standby in real-time apply mode. The changes described in the current online log of the database are
needed to make the snapshot consistent. LogSync cannot retrieve the log until it is archived, and SnapSync cannot force the log to be
archived because the source database is a physical standby. Force a log switch on the primary database or wait until the log is naturally
archived.
Below is a screenshot of a snapshot with missing logs. Hovering the cursor over the (i) symbol on the snapshot card will cause the list of missing
log(s) to be shown. In this example, log sequences 18 and 19 are missing.
Timeflow Patching
When missing log files cause a break in the timeflow, you can use the command line interface (CLI) to determine the
missing logs and patch the timeflow. The Delphix Engine will generate a fault whenever there are missing logs on a portion of the
timeflow. The fault will likely have the title Cannot provision a database from a portion of TimeFlow and will contain a description of the cause.
The most common cause is an overly aggressive archive log retention policy on the source database causing a log to be deleted before LogSync
can fetch it. Other faults can also be generated describing the specific errors encountered when fetching the log(s).
You can use the CLI to list the missing logs and patch the timeflow. The following CLI Cookbook entry demonstrates how to do this: CLI
Cookbook: Repairing a Timeflow.
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the VDB you want to disable.
5. On the back of the VDB card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the VDB again, move the slider control from Disabled to Enabled, and the VDB will continue to function as it did previously.
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You must have added a compatible target environment on the target host.
Procedure
1. Login to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Prerequisites
To rewind a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Owner permissions on the VDB itself
You do NOT need owner permissions for the group that contains the VDB. A user with Delphix Admin credentials can perform a VDB Rewind on
any VDB in the system.
Procedure
1. Login to the Delphix Admin application.
2. Under Databases, select the VDB you want to rewind.
3. Select the rewind point as a snapshot or a point in time.
4. Click Rewind.
5. If you want to use login credentials on the target environment other than those associated with the environment user, click Provide
Privileged Credentials.
6. Click Yes to confirm.
You can use TimeFlow bookmarks as the rewind point when using the CLI. Bookmarks can be useful to:
Mark where to rewind to, for example before starting a batch job on a VDB.
Provide a semantic point to revert to in case the chosen rewind point turns out to be incorrect.
For a CLI example using a TimeFlow bookmark, see CLI Cookbook: Provisioning a VDB from a TimeFlow Bookmark.
Prerequisites
To refresh a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Auditor permissions on the group that contains the VDB
Owner permissions on the VDB itself
A user with Delphix Admin credentials can perform a VDB Refresh on any VDB in the system.
Procedure
1. Login to the Delphix Admin application.
2. Under Databases, select the VDB you want to refresh.
3. Click the Open icon in the upper right hand corner of the VDB card.
4. On the back of the VDB card, click the Refresh VDB icon in the lower right-hand corner.
This will open the screen to re-provision the VDB.
5. Select the desired refresh point snapshot, or slide the LogSync timeline to pick a point in time to refresh from.
6. Click Refresh VDB.
7. Click Yes to confirm.
Related Links
Managing Policies: An Overview
Creating Custom Policies
Creating Policy Templates
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Click My Databases.
4. Select the VDB you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Prerequisites
You should have already set up a new target environment that is compatible with the VDB that you want to migrate.
A VDB from a Single Instance of Oracle cannot be migrated onto a RAC environment; the additional reconfiguration needed when converting a single instance to RAC is only performed during a VDB provision. Provision a new VDB instead.
Procedure
1. Login to your Delphix Engine using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select the VDB you want to migrate.
6. Click the Open icon.
7. Slide the Enable/Disable control to Disabled, and click Yes to confirm.
When the VDB is disabled, its icon will turn grey.
8. On the bottom-right corner of the VDB card, click the VDB Migrate icon.
9. Select the new target environment for the VDB, the user for that environment, and the database installation where the VDB will reside.
10. Click the Check icon to confirm your selections.
11. Slide the Enable/Disable control to Enabled, and click Yes to confirm.
Within a few minutes your VDB will re-start in the new environment, and you can continue to work with it as you would any other VDB.
Recreate the spfile using the new init.ora parameters as recommended by Oracle for the upgrade.
Procedure
Normally a PSU or Oracle upgrade includes both binary changes and scripts to run on the database side.
Before applying a PSU or upgrade to a VDB, take a snapshot of the VDB in case something goes wrong and you need to back it out.
There are three ways to apply a PSU or Oracle upgrade:
A) Apply it to the existing ORACLE_HOME. (You must be on Delphix version 4.1.x or higher.)
B) Create a new ORACLE_HOME (you could clone the existing one), and then apply the PSU to the new ORACLE_HOME.
C) Use refresh on the back of a VDB card to upgrade the VDB after its dSource was upgraded.
Follow the Oracle documentation and run the appropriate scripts and steps on the databases using those ORACLE_HOMEs. For option B, stop the instance (using the old ORACLE_HOME) and restart it with the new ORACLE_HOME from the command line as normal.
None
Migrate a vPDB
There may be situations in which you want to migrate a virtual pluggable database (vPDB) to a new container database on the same or a different
target environment, for example when upgrading the host on which the vPDB resides, or as part of a general data center migration. This is easily
accomplished by first disabling the vPDB, then using the Migrate vPDB feature to select a new container database.
Prerequisites
You should already have set up, and have Delphix discover, a container database either in the same environment as the vPDB or in a different environment to which the vPDB will be migrated.
Procedure
1. Login to your Delphix Engine using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
Related Links
Linking an Oracle Pluggable Database
Provisioning an Oracle Virtual Pluggable Database
Provision an Oracle VDB
Discovering Oracle Pluggable Databases in an Oracle Environment
Requirements for Oracle Target Hosts and Databases
Customizing VDB File Mappings
Customizing Oracle Management with Hook Operations
dSource Hooks

Pre-Sync

Post-Sync
Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations. These operations can undo any changes made by the Pre-Sync hook.

VDB Hooks

Configure Clone
Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started. During a refresh, this hook will run before the Post-Refresh hook.

Pre-Refresh

Post-Refresh
Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run if the refresh or Pre-Refresh hook operations fail. These operations can restore cached data after the refresh completes.

Pre-Rewind

Post-Rewind
Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail. These operations can restore cached data after the rewind completes.

Pre-Snapshot

Post-Snapshot
Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook operations. These operations can undo any changes made by the Pre-Snapshot hook.
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
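A sketch of that failure semantics in shell (illustrative only, not how the engine implements hooks):

```shell
#!/bin/sh
# Run each operation of a hook in order; the first failure aborts the hook,
# and no later operation in that hook runs.
run_hook() {
    for op in "$@"; do
        if ! sh -c "$op"; then
            echo "hook failed at: $op"
            return 1
        fi
    done
    echo "hook succeeded"
}

run_hook 'true' 'false' 'echo this never runs'
# → hook failed at: false
```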
You can construct hook operation lists through the Delphix Admin application or the command line interface (CLI). You can either define the
operation lists as part of the linking or provisioning process or edit them on dSources or virtual datasets that already exist.
*> add
0 *> set type=RunCommandOnSourceOperation
0 *> set command="echo Refresh completed."
0 *> ls
0 *> commit
source "pomme" update operations postRefresh *> add
source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
source "pomme" update operations postRefresh 1 *> back
source "pomme" update operations postRefresh *> unset 1
source "pomme" update operations postRefresh *> commit
Oracle RAC
When linking from, or provisioning to, Oracle RAC environments, hook operations do not run once on each node in the cluster. Instead, the Delphix Engine picks a node in the cluster at random and guarantees that all operations within any single hook will execute serially on this node.
Note that the Delphix Engine does not guarantee that the same node is chosen for the execution of every hook, but it does guarantee that Pre-/Post-hook pairs (such as Pre-Sync and Post-Sync) will execute on the same node.
Shell Operations
RunCommand Operation
The RunCommand operation runs a shell command on a Unix environment using whatever binary is available at /bin/sh. The environment user
runs this shell command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the shell command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
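As a minimal sketch of this exit-code convention (the directory check is an illustrative stand-in for real hook logic, and the path is hypothetical):

```shell
#!/bin/sh
# Hook commands signal success with exit code 0; any other exit code
# is treated by the Delphix Engine as an operation failure.
workdir="/tmp"   # hypothetical directory this hook depends on
if [ -d "$workdir" ] && [ -w "$workdir" ]; then
  exit 0  # success
else
  exit 1  # failure: the captured output is shown for debugging
fi
```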
Examples of RunCommand Operations
You can input the full command contents into the RunCommand operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
if test -d "$remove_dir"; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
If a script already exists on the remote environment and is executable by the environment user, the RunCommand operation can execute this
script directly.
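A sketch of that pattern; the script path and contents here are hypothetical stand-ins, created inline only so the example is self-contained:

```shell
#!/bin/sh
# Create a stand-in for a script that already exists on the remote
# environment and is executable by the environment user.
cat > /tmp/refresh_hook.sh <<'EOF'
#!/bin/sh
echo "refresh hook ran"
exit 0
EOF
chmod +x /tmp/refresh_hook.sh

# The RunCommand operation's command field would then contain only the path:
/tmp/refresh_hook.sh   # prints: refresh hook ran
```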
RunBash Operation
The RunBash operation runs a Bash command on a Unix environment using a bash binary provided by the Delphix Engine. The environment user
runs this Bash command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the Bash command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunBash Operation
You can input the full command contents into the RunBash operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
# Bashisms are safe here!
if [[ -d "$remove_dir" ]]; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
Shell Operation Tips
Using nohup
You can use the nohup command and process backgrounding in order to "detach" a process from the Delphix Engine. However, if
you use nohup and process backgrounding, you MUST redirect stdout and stderr.
Unless you explicitly tell the shell to redirect stdout and stderr in your command or script, the Delphix Engine will keep its connection to the
remote environment open while the process is writing to either stdout or stderr. Redirection ensures that the Delphix Engine will see no more
output and thus not block waiting for the process to finish.
For example, imagine having your RunCommand operation background a long-running Python process. Below are the bad and good ways to do
this.
Bad Examples
nohup python file.py &              # no redirection
nohup python file.py 2>&1 &         # stdout is not redirected
nohup python file.py 1>/dev/null &  # stderr is not redirected
nohup python file.py 2>/dev/null &  # stdout is not redirected
Good Examples
nohup python file.py 1>/dev/null 2>&1 & # both stdout and stderr redirected, Delphix Engine will not block
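The good example above can be sketched end to end; sleep stands in for the long-running process, and with both streams redirected the hook returns immediately instead of holding the Delphix Engine's connection open:

```shell
#!/bin/sh
# Launch a stand-in long-running process, fully detached: nohup ignores
# hangups, & backgrounds it, and both stdout and stderr are redirected
# so no output stream keeps the remote connection open.
nohup sleep 30 >/dev/null 2>&1 &
echo "hook returns immediately; background PID: $!"
exit 0
```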
Other Operations
RunExpect Operation
The RunExpect operation executes an Expect script on a Unix environment. The Expect utility provides a scripting language that makes it easy to
automate interactions with programs which normally can only be used interactively, such as ssh. The Delphix Engine includes a
platform-independent implementation of a subset of the full Expect functionality.
The script is run on the remote environment as the environment user from their home directory. The Delphix Engine captures and logs all output
of the script. If the operation fails, the output is displayed in the Delphix Admin application and CLI to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunExpect Operation
Environment Variable: Description
ORACLE_SID
ORACLE_BASE
ORACLE_HOME
ORAENV_ASK: Always set to NO
DELPHIX_DATABASE_NAME
DELPHIX_DATABASE_UNIQUE_NAME
Environment Variable: Description
ORACLE_SID
ORACLE_BASE
ORACLE_HOME: The home directory for the Oracle software hosting the VDB
CRS_HOME: The home directory for cluster services hosting the RAC VDB
ORAENV_ASK: Always set to NO
DELPHIX_DATABASE_NAME
DELPHIX_DATABASE_UNIQUE_NAME
DELPHIX_MOUNT_PATH
Databases Icon
7. For the source database or VDB which you are investigating, click Show Details to the right of the database name. This will display the
JDBC connection string being used for the given database.
8. To verify that the connection string works, click the checkmark to the right of the connection string. You will then see username and
password text boxes.
9. Enter the Oracle username and password used by the Delphix Engine.
4. Follow the remaining steps in Verifying the JDBC Connection String to validate your newly added connection string.
4. Follow the remaining steps in Verifying the JDBC Connection String to validate your newly added connection string.
Related Links
Supported Operating Systems, Server Versions, and Backup Software for SQL Server
Setting Up SQL Server Environments: An Overview
If a source database comes from a SQL Server 2005 instance, then the target hosts that will host
VDBs from that source must be running either a SQL Server 2005 instance or a SQL Server 2012 instance or
higher.
Upgrading VDBs from SQL Server 2005
You can first provision a VDB to SQL Server 2005 and then upgrade it to a higher version by following the steps described in
the topic Upgrading SQL Server VDBs. See the topic SQL Server Operating System Compatibility Matrices for more
information about compatibility between different versions of SQL Server.
4. The target host must have 64-bit Windows as the operating system. Delphix does not support 32-bit target systems.
5. To add a Windows cluster as a target environment see the topic Adding a SQL Server Failover Cluster Target Environment.
6. If the target host is a VMware virtual machine, then the Windows Server operating system must be configured to use the VMXNET3
network driver.
7. The operating system version on a target host that will be used for the provisioning of VDBs should be equal to or higher than the
operating system on the target that is hosting the staging databases for the dSource from which the VDB is being provisioned. There is
no OS compatibility requirement between source and target hosts. See the topic SQL Server Operating System Compatibility
Matrices for more information.
8. Windows PowerShell 2.0 or higher must be installed.
9. Execution of Windows PowerShell scripts must be enabled on the target host.
While running Windows PowerShell as an Administrator, enter this command to enable script execution: Set-ExecutionPolicy
Unrestricted.
10. For Windows 2003 target hosts, the following should be installed:
a. Windows Server iSCSI initiator (available for download).
b. Hotfix documented in Microsoft Knowledge Base article KB 943043.
11. The Windows iSCSI Initiator Service should have its Startup Type set to Automatic, and the service should be running. See the topic
SQL Server Target Host iSCSI Configuration Parameter Recommendations for configuring the Windows iSCSI Initiator Service.
12. The Delphix Connector must be installed, as described in the topics Setting Up SQL Server Environments: An Overview and Adding
a SQL Server Standalone Target Environment.
Flash Player Required for Connector Download
A Flash player must be available on the target host to download the Delphix Connector when using the Delphix GUI. If the
target host does not have a Flash player installed, you can download the connector directly from the Delphix Engine by
navigating to this URL: http://<name of your Delphix Engine>/connector/DelphixConnectorInstaller.msi
13. Shared Memory must be enabled as a Network Protocol for the SQL instances on the target.
In SQL Server Config Manager, navigate to Client Protocols > Shared Memory to enable this.
14. TCP/IP access must be enabled for each SQL Server instance on the target host to allow remote connections to instances.
In SQL Server Config Manager, navigate to Network Configuration > Protocols > TCP/IP to enable TCP/IP access.
Related Links
Setting Up SQL Server Environments: An Overview
SQL Server Operating System Compatibility Matrices
SQL Server Target Host iSCSI Configuration Parameter Recommendations
Related Links
Setting Up SQL Server Environments: An Overview
SQL Server Operating System Compatibility Matrices
Supported Operating Systems, Server Versions, and Backup Software for SQL Server
Requirements for SQL Server Target Hosts and Databases
Supported Operating Systems, Server Versions, and Backup Software for SQL Server
This topic describes the versions of the Windows operating system and Microsoft SQL Server that Delphix supports.
Supported Versions of Windows OS
Supported Versions of SQL Server
Supported SQL Server Backup Software
Delphix supports only 64-bit versions of Windows on target hosts and validated-sync-target hosts.
Target hosts and validated-sync-target hosts running Windows Server 2003 SP2 or 2003 R2 must install the hotfix documented in KB 943043.
Platform: x64
Location: http://hotfixv4.microsoft.com/Windows%207/Windows%20Server2008%20R2%20SP1/sp2/Fix385766/7600/free/441351_intl_x64_zip.exe
Updates MSISCI.sys
Platform: x64
Location: http://hotfixv4.microsoft.com/Windows%207/Windows%20Server2008%20R2%20SP1/sp2/Fix388733/7600/free/440675_intl_x64_zip.exe
Expand -f:* c:\TEST\(write the complete details of the file with extension .msu).msu c:\TEST
Expand -f:* c:\TEST\(write the complete details of the file with extension .cab).cab c:\TEST
pkgmgr /ip /m:c:\Test\update-bf.mum
There are further restrictions on supported Windows and SQL Server versions for SQL Server Failover Cluster target environments.
See Adding a SQL Server Failover Cluster Target Environment for details.
Delphix supports SQL Server AlwaysOn Availability Groups as a dSource, but creation of a VDB on AlwaysOn Availability Groups is not
supported. Delphix supports Windows Server Failover Cluster (WSFC) as a dSource and also as a target (VDB).
Supported SQL Server Backup Software
The Delphix Engine interacts with source database backups in the following ways:
When linking a new source database, the Delphix Engine can use an existing full backup to load the source database data
When performing a sync of an existing dSource, the Delphix Engine can use an existing full backup
After the dSource is created, the Delphix Engine picks up any new backups that are taken on the source database and applies them to
the copy of the source database on the Delphix Engine. This includes:
Transaction log backups for databases in Full or Bulk-Logged recovery models
Differential and full backups for databases in Simple recovery model
Delphix currently supports the following backup software for source database backups:
SQL Server native backups
Quest/NetVault LiteSpeed
If the source database is backed up with LiteSpeed, the source and the staging environments must have LiteSpeed installed on
them. The version of LiteSpeed on the staging environment must be the same or higher than that on the source. Delphix
currently supports LiteSpeed v5.0.0.0 to v8.x.
Red Gate SQL Backup Pro
If the source database is backed up with SQL Backup Pro, the source and the staging environments must have SQL Backup Pro
installed on them. The version of SQL Backup Pro on the staging environment must be the same as that on the source. Delphix
currently supports SQL Backup Pro v7.3 and onwards.
Delphix supports encrypted backups in versions 4.3.3.0 and newer; if you are running an older version of the Delphix Engine, encrypted backups are not supported.
[Compatibility matrix: source environments running SQL Server 2005, SQL Server 2008, or SQL Server 2008 R2, against target environments running Windows 2008, Windows 2008 R2, Windows 2012, or Windows 2012 R2]
Provisioning to Higher SQL Versions When the Source is SQL Server 2005
For SQL Server 2005, direct provisioning to higher SQL Server versions is only supported for provisioning to SQL Server 2012 or
higher. You can first provision a VDB to SQL Server 2005 and then upgrade it to a higher version by following the steps outlined in the
topic Upgrading SQL Server VDBs.
Registry Value: Registry Key
TcpAckFrequency: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\<Interface GUID>
iSCSIDisableNagle: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters
TimeoutValue
For systems running Windows 2003, see Microsoft Knowledge Base article 815230 for hotfix information regarding changing
TcpAckFrequency.
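For illustration only, the recommendations above might be captured in a .reg fragment like the following sketch; the interface GUID and instance number are placeholders, and the pairing of values to keys and the DWORD data shown (1, 1, and 60 seconds) are assumptions, not values taken from this guide:

```
Windows Registry Editor Version 5.00

; Per-interface TCP tuning (assumed data: TcpAckFrequency=1)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\{INTERFACE-GUID}]
"TcpAckFrequency"=dword:00000001

; iSCSI adapter parameters (assumed data: iSCSIDisableNagle=1, TimeoutValue=0x3c)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters]
"iSCSIDisableNagle"=dword:00000001
"TimeoutValue"=dword:0000003c
```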
Protocol / Port Number / Use
TCP 25: SMTP
TCP/UDP 53: DNS
UDP 123: NTP
UDP 162: SNMP traps
HTTPS 443: SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP 636: LDAP over SSL (LDAPS)
TCP 8415: Delphix Session Protocol
TCP 50001: Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.
Protocol / Port Number / Use
TCP 22: SSH
TCP 80: HTTP
UDP 161: SNMP
TCP 443: HTTPS
TCP 8415: Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for Oracle, V2P, and the Delphix Connector.
TCP 50001: Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP 32768-65535: Required for NFS mountd and status services from the target environment only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
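A quick check for the disallowed entries, as a sketch assuming the stock /etc/ssh/sshd_config location:

```shell
#!/bin/sh
# List any uncommented ClientAliveInterval/ClientAliveCountMax entries;
# these interrupt the long-running ssh connections the Delphix Engine needs.
cfg="/etc/ssh/sshd_config"
if grep -E '^[[:space:]]*(ClientAliveInterval|ClientAliveCountMax)' "$cfg"; then
  echo "disallowed sshd entries found in $cfg"
else
  echo "no disallowed sshd entries in $cfg"
fi
```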
Refer to Setting Up SQL Server Environments: An Overview for information on SQL Server environments. The Delphix Engine makes use of
the following network ports for SQL Server dSources and VDBs:
Protocol / Port Number / Use
TCP 9100
TCP xxxx: JDBC connections to the SQL Server instances on the source environments (typically port 1433)
Protocol / Port Number / Use
TCP 3260: iSCSI target daemon for connections from iSCSI initiators on the target environments to the Delphix Engine
Incoming Connection / Protocol and Port / Use
From the source environment to the staging environment (SMB, port 445): Full backup of the source database during sync directed to the staging environment
From the staging environment to the source environment (SMB, port 445)
It is only possible to enable this feature here at link time. Once a dSource has been linked, you cannot modify the use of this feature. If you enable
this feature, the dSource can only use Delphix-taken copy-only full backups to stay in sync with its source; the Delphix Engine will prohibit syncs
using existing backups. Checking the Enabled box results in the following changes to the Data Management page:
The initial load option is set to a Delphix-taken copy-only full backup
The ability to provide a backup path is removed
A SnapSync selection screen is added
You can select from the list of existing SnapSync policies, or click the green plus to create a new one. Proceeding through the remainder of the
link wizard will create a dSource with Delphix-managed backups enabled. You can confirm that a dSource has the feature by expanding its
dSource card and checking the Delphix Managed Backups section, as displayed below:
SnapSync policies provide you the ability to specify the frequency at which the Delphix Engine should take a copy-only full backup of a source
database. As shown in the section above, selecting an initial SnapSync policy is mandatory at dSource link time. However, you can change the
SnapSync policy applied on a dSource at any time by visiting the policy management screen:
1. Click Manage.
2. Click Policies.
For dSources that have Delphix-managed backups enabled, the current SnapSync policy will be displayed under the SnapSync column. The
rows corresponding to dSources that do not use Delphix-managed backups will be greyed out. Clicking the current SnapSync policy for a
dSource will display a drop-down menu of existing SnapSync policies along with the option to create a new SnapSync policy. Selecting a
SnapSync policy from this list will change the current SnapSync policy for the dSource. When creating a new policy, you will see the following
screen:
Here, you can configure the frequency at which the Delphix Engine takes backups of your source database. You can modify these schedules at
any time by clicking the Modify Policy Templates button in the top right-hand corner of the policy management screen.
The Timeout field above specifies how long a SnapSync job is allowed to run before it is terminated. If a SnapSync job exceeds its timeout
window, the Delphix Engine discards the new backup and rolls back the dSource to the most recent snapshot.
Block Diagram of Linking Architecture between SQL Server Environments and the Delphix Engine
Database discovery is initiated during the environment set up process. When you specify a production source environment that contains the
databases you want to manage with the Delphix Engine, you must also specify a target environment where you have installed the Delphix
Connector to act as a proxy for communication with the source environment. This is necessary because Delphix does not require that you install
the Delphix Connector software on the production source environment. When you register the source environment with the Delphix Engine, the
Delphix Engine uses the Delphix Connector on the proxy environment to discover SQL Server instances and databases on the source. You can
then create dSources from the discovered databases. If you later refresh the source environment, the Delphix Engine will execute instance and
database re-discovery through the proxy host.
SQL Server dSources are backed by a staging database that runs on a target host, as shown in the diagram. There is no requirement for
When you later provision a VDB, you can specify any environment as a target, including the environment that contains the staging database.
However, for best performance, Delphix recommends that you choose a different target environment. The only requirements for the target are:
it must have the Delphix Connector installed
it must have an operating system that is compatible with the one running on the validated host, as described in Requirements for SQL
Server Target Hosts and Databases
Related Links
SQL Server Support and Requirements
What is HostChecker?
The HostChecker is a standalone program that validates that host machines are configured correctly before the Delphix Engine uses them as
data sources and provisioning targets.
Please note that HostChecker does not communicate changes made to hosts back to the Delphix Engine. If you reconfigure a host, you must
refresh the host in the Delphix Engine in order for it to detect your changes.
You can run the tests contained in the HostChecker individually, or all at once. You must run these tests on both the source and target hosts to
verify their configurations. As the tests run, you will either see validation messages that the test has completed successfully, or error messages
directing you to make changes to the host configuration.
Prerequisites
Make sure that your source and target environments meet the requirements specified in SQL Server Support and Requirements
11. Resolve any errors or warnings and run the checks again, since earlier failures may have been masking other problems.
12. Repeat steps 4-9 until all the checks return no errors or warnings.
Tests Run
Check Powershell Version
Check OS User Privileges: For target hosts, verifies that the operating system (OS) user has administrative rights. For source hosts, verifies that the OS user can successfully perform remote registry access from the target host to the source host.
Check host settings: Verifies that the Delphix Engine can discover host environment details from the Windows registry.
Check instance discovery: Verifies that the Delphix Engine can discover SQL Server instances.
Check instance login: For target hosts, verifies that the Windows OS user can be used to log in to the SQL Server instances. For source hosts, verifies that the supplied SQL Server login credentials can be used to log in to the SQL Server instances.
Check database discovery: Verifies that the Delphix Engine can discover SQL Server databases.
Additional options
Run the following to view additional host checker options:
dlpx-host-checker.ps1 -?
Related Links
SQL Server Support and Requirements
Prerequisites
Make sure that your target environment meets the requirements described in Requirements for SQL Server Target Hosts and
Databases.
On the Windows machine that you want to use as a target, you will need to download the Delphix Connector software through the
Delphix Engine interface, install it and then register that machine with the Delphix Engine.
Procedure
Flash Player Required for Connector Download
A Flash player must be available on the target host to download the Delphix Connector when using the Delphix GUI. If the target host
does not have a Flash player installed, you can download the connector directly from the Delphix Engine by navigating to this URL:
http://<name of your Delphix Engine>/connector/DelphixConnectorInstaller.msi
1. From the machine that you want to use as a target, start a browser session and connect to the Delphix Engine GUI using the
delphix_admin login.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Windows in the operating system menu.
6. Select Target.
7. Select Standalone.
8. Click the download link for the Delphix Connector Installer.
The Delphix Connector will download to your local machine.
9. On the Windows machine that you want to use as a target, run the Delphix Connector installer. Click Next to advance through
each of the installation wizard screens.
The installer will only run on 64-bit Windows systems. 32-bit systems are not supported.
a. For Connector Configuration, make sure there is no firewall in your environment blocking traffic to the port on the target
environment that the Delphix Connector service will listen to.
b. For Select Installation Folder, either accept the default folder, or click Browse to select another.
c. Click Next on the installer's final Confirm Installation dialog to complete the installation process, and then click Close to exit the Delphix
Connector Install Program.
10. Return to the Delphix Engine interface.
11. Enter the Host Address, Username, and Password for the target environment.
12. Click Validate Credentials.
13. Click OK to complete the target environment addition request.
Post-Requisites
1. On the target machine, in the Windows Start Menu, click Services.
2. Select Extended Services.
3. Ensure that the Delphix Connector service has a Status of Started.
4. Ensure that the Startup Type is Automatic.
Related Links
Setting Up SQL Server Environments: An Overview
Requirements for SQL Server Target Hosts and Databases
Prerequisites
You must have already set up SQL Server target environments, as described in Adding a SQL Server Standalone Target
Environment
You will need to specify a target environment that will act as a proxy for running SQL Server instance and database discovery on
the source, as explained in Setting Up SQL Server Environments: An Overview
Make sure your source environment meets the requirements described in Requirements for SQL Server Target Hosts and Databases
Procedure
1. Login to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Windows in the operating system menu.
6. Select Source.
a. If you are adding a Windows Server Failover Cluster (WSFC), add the environment based on which WSFC feature the source
databases use:
i. Failover Cluster Instances
Add the environment as a standalone source using the cluster name or address.
ii. AlwaysOn Availability Groups
Add the environment as a cluster source using the cluster name or address.
b. Otherwise, add the environment as a standalone source.
7. Select a Connector Environment.
Connector environments are used as a proxy for running discovery on the source. If no connector environments are available for selection,
you will need to set them up as described in Adding a SQL Server Standalone Target Environment. Connector environments must:
have the Delphix Connector installed
be registered with the Delphix Engine from the host machine where they are located.
8. Enter the Host Address, Username, and Password for the source environment.
9. Click Validate Credentials.
10. Click OK, and then click Yes to confirm the source environment addition request.
As the new environment is added, you will see multiple jobs running in the Delphix Admin Job History to Create and Discover an
environment. In addition, if you are adding a cluster environment, you will see jobs to Create and Discover each node in the cluster and
their corresponding hosts. When the jobs are complete, you will see the new environment added to the list in the Environments panel. If
you don't see it, click the Refresh icon.
Related Links
Setting Up SQL Server Environments: An Overview
Adding a SQL Server Standalone Target Environment
Adding a SQL Server Failover Cluster Target Environment
Requirements for SQL Server Target Hosts and Databases
Changing the Host Name or IQN of a SQL Server Target or Staging Host
This topic describes how to change the host name or iSCSI Qualified Name (IQN) of a Windows target or staging host.
By default, Windows servers generate an IQN based on the host name assigned to the host. Changing the host name will change the host IQN as
well. Because the Delphix Engine exports storage for dSources and VDBs to Windows hosts using iSCSI, changes to the Windows host name
must be made according to the following procedure. If you have set a non-default IQN on a Windows target or staging host, and want to change
that IQN, you must follow these procedures when changing the IQN.
Changing the host name or IQN of a Windows target or staging server requires that you modify the iSCSI Initiator configuration on the
Windows host. Doing so incorrectly can cause failures in dSources, VDBs, or non-Delphix users of iSCSI on the Windows host.
The instructions in this topic describe how to change the IQN using the iscsicli command line utility. Because many people are less
familiar with the iscsicli utility, the instructions also include information for using the iSCSI Initiator graphical user interface.
Failing to carefully follow the steps below in order can cause availability issues for your dSources and VDBs. If you have questions
about the following instructions, please contact Delphix Support for help.
Procedure
1. Disable the dSources as described in Enabling and Disabling dSources.
2. Disable the VDBs as described in Enabling and Disabling Virtual Databases.
If your Windows server has dSources or VDBs from more than one Delphix Engine, you will need to disable the dSources and
VDBs on each Delphix Engine.
a.
b. If you are changing the IQN only, change it through the Microsoft iSCSI Initiator GUI following the instructions in the Microsoft
iSCSI User Guide.
7. Wait for the computer to finish rebooting.
8. Verify the new IQN in the iSCSI initiator.
If you are using the default IQN and have changed the host name, the IQN should include the new host name.
Related Links
Enabling and Disabling dSources
Microsoft TechNet article "Renaming the Computer"
Microsoft iSCSI User Guide (download)
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials or as the owner of an environment.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of an environment to view its attributes.
5. Under Attributes, click the Pencil icon to edit an attribute.
6. Click the Check icon to save your edits.
Attribute: Description
Environment Users: The users for that environment. These are the users who have permission to ssh into an environment, or access the environment through the Delphix Connector. See the Requirements topics for specific data platforms for more information on the environment user requirements.
Host Address
Notes
Attribute: Description
Delphix Connector Port: For target environments, the port used for communication with the Delphix Connector. See Setting Up SQL Server Environments: An Overview for more information.
Connector Host: The host where the Delphix Connector is installed. See Setting Up SQL Server Environments: An Overview and Adding a SQL Server Target Environment for more information.
Prerequisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the name of an environment to open the environment information screen.
5. Under Basic Information, click the green Plus icon to add a user.
6. Enter the Username and Password for the OS user in that environment.
Prerequisites
You cannot delete an environment that has any dependencies, such as dSources or virtual databases (VDBs). These must be deleted before you
can delete the environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, select the environment you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of the environment you want to refresh.
5. Click the Refresh icon.
To refresh all environments, click the Refresh icon next to Environments.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. To enable or disable staging, slide the button next to Use as Staging to Yes or No.
6. To enable or disable provisioning, slide the button next to Allow Provisioning to On or Off.
403
Prerequisites
You must add each node in the Windows Failover Cluster individually as a standalone target environment using a non-cluster address.
See Adding a SQL Server Standalone Target Environment.
A cluster node added as a standalone environment will only have non-clustered SQL Server instances
discovered.
A cluster target environment will only have SQL Server Failover Cluster instances discovered.
Each clustered SQL Server instance must have at least one clustered disk added to the clustered instance resource group which can be
used for creating mount points to Delphix storage.
The clustered drive must have a drive letter assigned to it.
The clustered drive must be formatted using the "GUID Partition Table (GPT)" partition style.
Each node in the cluster must have the Failover Cluster Module for Windows PowerShell feature installed.
An additional target environment that can be used as a Connector Environment must exist. This environment must NOT be a node in
the cluster. See Adding a SQL Server Standalone Target Environment.
Hotfix required for Windows 2008 R2 hosts
The following hotfix is required for Windows 2008 R2 Cluster nodes:
"0x80070490 Element Not found" error when you enumerate a cluster disk resource by using the WMI MSCluster_Disk class
query in a Windows Server 2008 R2-based failover cluster
http://support.microsoft.com/kb/2720218
Best Practices
SQL Server failover cluster instances that will be used with Delphix should not be used to host databases other than Delphix VDBs.
Supported Operating System and SQL Server Versions for Cluster Target Environments
Supported Operating System Versions
Windows 2008 R2
Windows 2012
Windows 2012 R2
Supported SQL Server Versions
SQL Server 2008 (10.0)
SQL Server 2008 R2 (10.5)
SQL Server 2012 (11.0)
SQL Server 2014 (12.0)
Procedure
1. Click Manage.
2. Select Environments.
3.
404
Example Environment
In this example environment, the Delphix Connector was installed on Connector Environment, Cluster Node 1, and Cluster Node 2. Each host
was added to Delphix as a standalone target environment. Next, the Windows Failover Cluster was added as a Windows Target Cluster
environment using the cluster address. Cluster Node 1 is currently the active node for the SQL Server Failover Cluster resource group. Delphix
has exported iSCSI LUs and has created the corresponding Cluster Disk resources for each VDB.
Related Links
Setting Up SQL Server Environments: An Overview
Adding a SQL Server Standalone Target Environment
Adding a SQL Server Source Environment
Requirements for SQL Server Target Hosts and Databases
405
406
Related Topics
Setting Up SQL Server Environments: An Overview
Supported Operating Systems, Server Versions, and Backup Software for SQL Server
407
Prerequisites
Be sure that the source database meets the requirements described in Requirements for SQL Server Target Hosts and Databases.
You must have already set up a staging target environment as described in Setting Up SQL Server Environments: An Overview and
Adding a Windows Target Environment.
Maximum Size of a Database that Can Be Linked
If the staging environment uses the Windows 2003 operating system, the largest size of database that you can link to the
Delphix Engine is 2TB. This is also the largest size to which a virtual database (VDB) can grow.
For all other Windows versions, the maximum size for databases and VDBs is 32TB.
In both cases, the maximum size of the database and resulting VDBs is determined by the operating system on the staging target host.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials or as the owner of the database from which you want to
provision the dSource.
2. Click Manage.
3. Select Databases.
4. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing SQL Server Environment Users.
14. Select a standalone SQL Server instance on the target environment for hosting the staging database.
15. Select whether the data in the database is Masked.
16. Select whether you want LogSync enabled for the dSource. For more information, see Advanced Data Management Settings for SQL
Server dSources.
LogSync Disabled by Default
For SQL Server data sources, LogSync is disabled by default. For more information about how LogSync functions with SQL
Server data sources, see Managing SQL Server Data Sources.
17. Click Advanced to edit retention policies and specify pre- and post-scripts. For details on pre- and post-scripts, refer to Customizing
SQL Server Management with Pre- and Post-Scripts. Additionally, if the source database's backups use LiteSpeed or RedGate
password-protected encryption, you can supply the encryption key the Delphix Engine should use to restore those backups.
18. Click Next.
19. Review the dSource Configuration and Data Management information.
20. Click Finish.
The Delphix Engine will initiate two jobs to create the dSource, DB_Link and DB_Sync. You can monitor these jobs by clicking Active Jobs in
the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the database icon will change to a
dSource icon on the Environments > Databases screen, and the dSource will appear in the list of My Databases under its assigned group.
You can view the current state of Validated Sync for the dSource on the dSource card itself.
The dSource Card
After you have created a dSource, the dSource card allows you to view information about it and make modifications to its policies and
permissions. In the Databases panel, click the Open icon to view the front of the dSource card. You can then flip the card to see
information such as the Source Database and Data Management configuration. For more information, see the topic Advanced Data
Management Settings for SQL Server dSources.
Related Links
Users, Permissions, and Policies
Setting Up SQL Server Environments: An Overview
Linking a dSource from a SQL Server Database: An Overview
Advanced Data Management Settings for SQL Server dSources
Adding a SQL Server Standalone Target Environment
Requirements for SQL Server Target Hosts and Databases
Using Pre- and Post-Scripts with SQL Server dSources
409
Prerequisites
The source SQL Server database has been upgraded by attaching to a higher version of SQL Server instance.
Procedure
1. Refresh all environments.
2. Log in to the Delphix Admin application using delphix_admin credentials.
3. Select Manage > Databases > My Databases.
4. Disable the dSource to be upgraded.
5. Click the Expand icon to open its card.
6. Click the crown icon on the bottom of the dSource card.
The Upgrade Database screen will open. The new instance should appear in the dropdown list. If it does not, go to Manage > Environments,
select a card with the environment containing the new instance, and click Refresh Environment on that card.
7. Select the new SQL Server instance that the source database is attached to.
8. Select the appropriate staging environment and instance. The staging instance must be the same version as the new SQL Server
instance.
9. Click OK.
10. Enable the dSource.
11. Click Snapshot on the dSource card to run SnapSync for the dSource.
Related Links
Refreshing an Environment
Linking a SQL Server dSource
Enabling and Disabling dSources
410
Prerequisites
The dSource for the staging database must be disabled before the migration. Follow the steps in Enabling and Disabling
dSources to disable the dSource.
The target environment for the migrated staging database should already have been added to the Delphix Engine. Follow the steps in
Adding a SQL Server Standalone Target Environment to add the environment as a target environment. The environment should also
meet the requirements for hosting a staging database as described in Requirements for SQL Server Target Hosts and Databases.
Procedure
1. Go to Manage > Databases > My Databases.
2. Select the dSource for the staging source.
3. Modify the Staging Environment for the dSource by clicking the Pencil icon next to it.
4. Select the new target environment for the staging source.
5. Select the SQL Server instance on the new target environment.
6. Accept the change.
Post-Requisites
Enable the dSource following the steps outlined in Enabling and Disabling dSources.
Related Links
Setting Up SQL Server Environments: An Overview
Adding a SQL Server Standalone Target Environment
Enabling and Disabling dSources
Requirements for SQL Server Target Hosts and Databases
411
Prerequisites
The dSource for the staging database must be disabled before the staging target environment can be changed. Follow the steps in Enabling and
Disabling dSources to disable the dSource.
Procedure
1. In the Databases pane, select the dSource for which you want to change the staging target environment.
2. Click the Open icon for the dSource to view its information card.
3. On the front of the information card, click the Flip icon to view the Staging Environment on the back of the dSource card.
4. Click the Pencil icon next to Staging Environment to edit the target server and the SQL Server instance on the server to use for
staging.
5. Click the Check icon to save your changes.
412
1. In the Data Management panel of the Add dSource wizard, click Advanced.
On the back of the dSource card
1. Click Manage.
2. Select Policies. This will open the Policy Management screen.
3. Select the policy for the dSource you want to modify.
4. Click Modify.
For more information, see Creating Custom Policies and Creating Policy Templates.
Retention Policies
Retention policies define the length of time that the Delphix Engine retains snapshots and log files to which you can rewind or provision objects
from past points in time. The retention time for snapshots must be equal to, or longer than, the retention time for logs.
To support longer retention times, you may need to allocate more storage to the Delphix Engine. The retention policy in combination with the
SnapSync policy can have a significant impact on the performance and storage consumption of the Delphix Engine.
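The constraint above, that snapshot retention must be at least as long as log retention, can be sketched as a simple check. The variable names and values below are illustrative only and are not part of the Delphix interface.

```shell
# Sketch: the Delphix Engine requires snapshot retention to be equal to,
# or longer than, log retention. Hypothetical values shown.
SNAPSHOT_RETENTION_DAYS=30
LOG_RETENTION_DAYS=14

if [ "$SNAPSHOT_RETENTION_DAYS" -ge "$LOG_RETENTION_DAYS" ]; then
    RESULT="policy valid"
else
    RESULT="policy invalid: log retention exceeds snapshot retention"
fi
echo "$RESULT"
```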
413
Related Links
Managing SQL Server Data Sources
414
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
415
Detaching a dSource
1. Log in to the Delphix Admin application as a user with OWNER privileges on the dSource, group, or domain.
2. Click Manage.
3. Select My Databases.
4. Select the database you want to unlink or delete.
5. Click the Unlink icon.
A warning message will appear.
6. Click Yes to confirm.
Attaching a dSource
Rebuilding Source Databases and Using VDBs
In situations where you want to rebuild a source database, you will need to detach the original dSource and create a new one
from the rebuilt data source. However, you can still provision VDBs from the detached dSource.
1. Detach the dSource as described above.
2. Rename the detached dSource by clicking the Edit icon in the upper left-hand corner of the dSource card, next to its
name.
This is necessary only if you intend to give the new dSource the same name as the original one. Otherwise, you will see
an error message.
3. Create the new dSource from the rebuilt database.
You will now be able to provision VDBs from both the detached dSource and the newly created one, but the detached dSource
will only represent the state of the source database prior to being detached.
The attach operation is currently only supported from the command line interface (CLI). Full GUI support will be added in a future release. Only
databases that represent the same physical database can be reattached.
1. Log in to the Delphix CLI as a user with OWNER privileges on the dSource, group, or domain.
2. Select the dSource by name using database select <dSource>.
3. Run the attachSource command.
4. Set the source config to which you want to attach using set source.config=<newSource>. Source configs are named by their
database unique name.
5. Set any other source configuration operations as you would for a normal link operation.
6. Run the commit command.
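Assembled from the steps above, a session might look like the following sketch. The dSource name dexample and source config newSource are hypothetical, and the prompts follow the pattern of the other CLI examples in this guide.

```
delphix> database select "dexample"
delphix database "dexample"> attachSource
delphix database "dexample" attachSource *> set source.config=newSource
delphix database "dexample" attachSource *> commit
```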
416
Prerequisites
You cannot delete a dSource that has dependent virtual databases (VDBs). Before deleting a dSource, make sure that you have deleted all
dependent VDBs as described in Deleting a VDB.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Deleting a dSource will also delete all snapshots, logs, and descendant VDB Refresh policies for that database. You cannot
undo the deletion.
417
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You must have added a compatible target environment on the target host.
Procedure
1. Log in to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
418
There is a critical fault associated with the dSource or VDB. See
the error logs for more information.
419
420
421
Once a VDB has been provisioned, you can also take snapshots of it. As with the dSource snapshots, you can find
these when you select the VDB in the My Databases panel. You can then provision additional VDBs from these
VDB snapshots.
SQL Server and SAP ASE VDBs do not have LogSync support. You can only provision from VDB snapshots.
Dependencies
If there are dependencies on the snapshot, you will not be able to delete the snapshot to free space; the dependencies rely on the data
associated with the snapshot.
Related Links
Setting Up SQL Server Environments: An Overview
Provisioning a SQL Server VDB
422
Prerequisites
You will need to have linked a dSource from a source database, as described in Linking a SQL Server dSource, or have already
created a VDB from which you want to provision another VDB.
You should already have set up Windows target environments and installed the Delphix Connector on them, as described in Adding a
SQL Server Standalone Target Environment.
Make sure you have the required privileges on the target environment as described in Requirements for SQL Server Target Hosts and
Databases.
If you are provisioning to a different target environment than the one where the staging database has been set up, you need to make sure
that the two environments have compatible operating systems, as described in Requirements for SQL Server Target Hosts and
Databases. For more information on the staging database and the validated sync process, see Setting Up SQL Server Environments:
An Overview.
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a means of provisioning.
See Provisioning by Snapshot and LogSync in this topic for more information.
7. Click Provision.
The Provision VDB panel will open, and the Database Name and Recovery Model will auto-populate with information from the
dSource.
8. Select a target environment from the left pane.
9. Select an Instance to use.
10. If the selected target environment is a Windows Failover Cluster environment, select a drive letter from Available Drives. This drive will
contain volume mount points to Delphix storage.
11. Specify any Pre or Post Scripts that should be used during the provisioning process.
For more information, see Using Pre- and Post-Scripts with SQL Server dSources.
12. Click Next.
13. Select a Target Group for the VDB.
Click the green Plus icon to add a new group, if necessary.
14. Select a Snapshot Policy for the VDB.
Click the green Plus icon to create a new policy, if necessary.
15. Click Next.
16. If your Delphix Engine system administrator has configured the Delphix Engine to communicate with an SMTP server, you will be able to
specify one or more people to notify when the provisioning is done. You can choose other Delphix Engine users, or enter email
addresses.
17. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard.
When provisioning is complete, the VDB will be included in the group you designated, and listed in the Databases panel. If you select the
VDB in the Databases panel and click the Open icon, you can view its card, which contains information about the database and its Data
Management settings.
You can select a SQL Server instance that has a higher version than the source database and the VDB will be automatically upgraded.
For more information about compatibility between different versions of SQL Server, see SQL Server Operating System Compatibility
Matrices.
423
You can take a new snapshot of the dSource and provision from it by clicking the Camera icon on the dSource card.
Provisioning By Snapshot:
Provision by Time: You can provision to the start of any snapshot by selecting that snapshot card from the TimeFlow view, or by entering a value
in the time entry fields below the snapshot cards. The values you enter will snap to the beginning of the nearest snapshot.
Provision by LSN: You can use the Slide to Provision by LSN control to open the LSN entry field. Here, you can type or paste in the LSN you
want to provision to. After entering a value, it will "snap" to the start of the closest appropriate snapshot.
If LogSync is enabled on the dSource, you can provision by LogSync information. When provisioning by LogSync information, you can provision
to any point in time, or to any LSN, within a particular snapshot. The TimeFlow view for a dSource shows multiple snapshots by default. To view
the LogSync data for an individual snapshot, use the Slide to Open LogSync control at the top of an individual snapshot card.
Provisioning By LogSync:
Provision by Time: Use the Slide to Open LogSync control to view the time range within that snapshot. Drag the red triangle to the point in time
that you want to provision from. You can also enter a date and time directly.
Provision by LSN: Use the Slide to Open LogSync and Slide to Provision by LSN controls to view the range of LSNs within that snapshot. You
must type or paste in the specific LSN you want to provision to. Note that if the LSN doesn't exist, you will see an error when
you provision.
Related Links
Linking a SQL Server dSource
Adding a SQL Server Standalone Target Environment
Adding a SQL Server Failover Cluster Target Environment
Requirements for SQL Server Target Hosts and Databases
Setting Up SQL Server Environments: An Overview
Using Pre- and Post-Scripts with dSources and SQL Server VDBs
424
425
The following extended properties are set on each VDB:
dlpx_server_name
dlpx_server_uuid
dlpx_source_id
These properties can be found under the Extended Properties page of the Properties window for a VDB using the SQL Server Management
Studio tool. They can also be displayed by using the sp_dlpx_vdbinfo stored procedure. This stored procedure can be installed by running the
SQL code contained in <Delphix Connector install path>\etc\sp_dlpx_vdbinfo.sql.
426
Notes
Upgrading a SQL Server 2005 VDB to SQL Server 2008 or 2008 R2 is not supported.
Related Links
Refreshing an Environment
Enabling and Disabling dSources
427
Prerequisites
The VDB must be disabled before migrating it. Follow the steps outlined in Enabling and Disabling Virtual Databases.
The target environment where the VDB is to be migrated should already have been added to the Delphix Engine. Follow the steps
outlined in Adding a SQL Server Standalone Target Environment.
Procedure
1. Select the source associated with the VDB.
delphix> source
delphix source> select "vexample"
2. Select the source config associated with the source.
Post-Requisites
Enable the VDB following the steps outlined in Enabling and Disabling Virtual Databases.
Related Links
Adding a SQL Server Standalone Target Environment
428
Prerequisites
The VDB should be running on the target environment.
The SQL Server instance on the target environment where the VDB is should be up and reachable.
Procedure
1. Select the source associated with the VDB.
delphix> source
delphix source> select "vexample"
2. Select the source config associated with the source.
delphix source "vexample"> sourceconfig
429
Prerequisites
To rewind a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Owner permissions on the VDB itself
You do NOT need owner permissions for the group that contains the VDB. A user with Delphix Admin credentials can perform a VDB Rewind on
any VDB in the system.
Procedure
1. Login to the Delphix Admin application.
2. Under Databases, select the VDB you want to rewind.
3. Select the rewind point as a snapshot or a point in time.
4. Click Rewind.
5. If you want to use login credentials on the target environment other than those associated with the environment user, click Provide
Privileged Credentials.
6. Click Yes to confirm.
You can use TimeFlow bookmarks as the rewind point when using the CLI. Bookmarks can be useful to:
Mark a point to rewind to, for example before starting a batch job on a VDB.
Provide a semantic point to revert to in case the chosen rewind point turns out to be incorrect.
For a CLI example using a TimeFlow bookmark, see CLI Cookbook: Provisioning a VDB from a TimeFlow Bookmark.
430
Using Scripts with SQL Server dSources and Virtual Databases (VDBs)
For SQL Server dSources, pre- and post-scripts are incorporated into the validated sync process.
For SQL Server single instance environments, scripts must exist and be readable on the staging environment.
Scripts can also be run as part of the SQL Server VDB provisioning process, in which case they must exist and be readable on the target
environment.
For SQL Server, both dSource and VDB scripts can be either text or binary executables.
dSource scripts have access to the following environment variables:
SOURCE_INSTANCE_HOST
SOURCE_INSTANCE_PORT
SOURCE_INSTANCE_NAME
SOURCE_DATABASE_NAME
VDB scripts have access to the following environment variables:
VDB_INSTANCE_HOST
VDB_INSTANCE_PORT
VDB_INSTANCE_NAME
VDB_DATABASE_NAME
function die {
Write-Error "Error: $($args[0])"
exit 1
}
function verifySuccess {
if (!$?) {
die "$($args[0])"
}
}
Write-Output "I'd rather be in Hawaii"
verifySuccess "WRITE_OUTPUT_FAILED"
& "C:\Program Files\Delphix\scripts\myscript.ps1"
verifySuccess "MY_SCRIPT_FAILED"
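For comparison, the same fail-fast pattern can be written for Unix-side hook scripts in POSIX shell. This is a sketch: the function names mirror the PowerShell helpers above and are not Delphix-defined, and the script path and messages are hypothetical.

```shell
#!/bin/sh
# Fail-fast helpers mirroring the PowerShell die/verifySuccess pattern.
die() {
    echo "Error: $1" >&2
    exit 1
}
verify_success() {
    # Call immediately after a command; $? still holds that command's status.
    status=$?
    [ "$status" -eq 0 ] || die "$1"
}

echo "I'd rather be in Hawaii"
verify_success "WRITE_OUTPUT_FAILED"
MSG="hook completed"
echo "$MSG"
```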
432
dSource Hooks
Pre-Sync
Post-Sync: Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations.
These operations can undo any changes made by the Pre-Sync hook.
VDB Hooks
Configure Clone: Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started.
During a refresh, this hook will run before the Post-Refresh hook.
Pre-Refresh
Post-Refresh: Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run
if the refresh or Pre-Refresh hook operations fail.
These operations can restore cached data after the refresh completes.
Pre-Rewind
Post-Rewind: Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail.
These operations can restore cached data after the rewind completes.
Pre-Snapshot
Post-Snapshot: Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook
operations.
These operations can undo any changes made by the Pre-Snapshot hook.
433
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
434
*> add
0 *> set type=RunCommandOnSourceOperation
0 *> set command="echo Refresh completed."
0 *> ls
0 *> commit
delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 1 *> back
delphix source "pomme" update operations postRefresh *> unset 1
delphix source "pomme" update operations postRefresh *> commit
435
436
The RunPowershell operation executes a PowerShell script on a Windows environment. The environment user runs this shell command from their
home directory. The Delphix Engine captures and logs all output of the script. If the script fails, the output is displayed in the Delphix
Admin application and command line interface (CLI) to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunPowershell Operation
You can input the full command contents into the RunPowershell operation.
$removedir = $Env:DIRECTORY_TO_REMOVE
if ((Test-Path $removedir) -And (Get-Item $removedir) -is [System.IO.DirectoryInfo]) {
Remove-Item -Recurse -Force $removedir
} else {
exit 1
}
exit 0
SQL Server Environment Variables
Operations that run user-provided scripts have access to environment variables. For operations associated with specific dSources or virtual
databases (VDBs), the Delphix Engine will always set environment variables so that the user-provided operations can use them to access the
dSource or VDB.
dSource Environment Variables:
SOURCE_INSTANCE_HOST
SOURCE_INSTANCE_PORT
SOURCE_INSTANCE_NAME
SOURCE_DATABASE_NAME
VDB Environment Variables:
VDB_INSTANCE_HOST
VDB_INSTANCE_PORT
VDB_INSTANCE_NAME
VDB_DATABASE_NAME
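A hook script can read these variables directly from its environment. The sketch below is a POSIX shell illustration with hypothetical default values so that it runs standalone; in a real hook the Delphix Engine sets the variables, and on Windows targets the equivalent PowerShell form is $Env:VDB_INSTANCE_NAME and so on.

```shell
#!/bin/sh
# Sketch: a hook script consuming the Delphix-provided VDB variables.
# The defaults below are hypothetical and only used if the variables
# are not already set in the environment.
: "${VDB_INSTANCE_HOST:=target1.example.com}"
: "${VDB_INSTANCE_PORT:=1433}"
: "${VDB_INSTANCE_NAME:=MSSQLSERVER}"
: "${VDB_DATABASE_NAME:=vexample}"

SUMMARY="instance ${VDB_INSTANCE_NAME} on ${VDB_INSTANCE_HOST}:${VDB_INSTANCE_PORT}, database ${VDB_DATABASE_NAME}"
echo "$SUMMARY"
```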
437
438
439
440
TCP/IP connectivity to and from the source environment must be configured as described in General Network and Connectivity Requirements.
3. The following changes must be made to postgresql.conf (for more information, see the Server Configuration chapter in the
PostgreSQL documentation):
a. TCP/IP connectivity must be configured to allow the role mentioned above to connect to the source database from the Delphix
Engine and from the standby DBMS instance set up by the Delphix Engine on the staging environment. This can be done by
modifying the listen_addresses parameter, which specifies the TCP/IP addresses on which the DBMS is to listen for
connections from client applications.
listen_addresses Configuration
The simplest way to configure PostgreSQL is to have it listen on all available IP interfaces:
listen_addresses = '*'
# Default is 'localhost'
b. The value of max_wal_senders, which specifies the maximum number of concurrent connections from standby servers or
streaming base backup clients, must be increased from its desired value by four. That is, in addition to the allowance of
connections for consumers other than the Delphix Engine, there must be an allowance for four additional connections from
consumers set up by the Delphix Engine.
max_wal_senders Configuration
The default value of max_wal_senders is zero, meaning replication is disabled. In this configuration, the value of
max_wal_senders must be increased to four for the Delphix Engine:
max_wal_senders = 4
# Default is 0
c. The value of wal_level, which determines how much information is written to the write-ahead log (WAL), must be set to
archive or hot_standby to allow connections from standby servers. The logical wal_level value (introduced in PostgreSQL
9.4) is also supported.
wal_level Configuration
The default value of wal_level is minimal, which writes only the information needed to recover from a crash or
immediate shutdown to the WAL. In this configuration, you must add the logging required for WAL archiving
as follows:
wal_level = archive
# Default is minimal
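For reference, the three changes above combine into a single illustrative postgresql.conf fragment. The values shown are examples; in particular, set max_wal_senders to your existing requirement plus four, as described in step 3b.

```
# postgresql.conf -- combined settings for linking a PostgreSQL source
listen_addresses = '*'      # or a restricted list of interfaces
max_wal_senders  = 4        # existing requirement plus 4 for Delphix
wal_level        = archive  # hot_standby, or logical (9.4+), also work
```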
4. PostgreSQL must be configured to allow PostgreSQL client connections from the Delphix Engine and from the staging target
environment, as well as PostgreSQL replication client connections from the staging target environment by adding the following entries to
pg_hba.conf:
pg_hba.conf Configuration
host  all          <role>  <ip-address_of_delphix_engine>/32  <auth-method>
host  all          <role>  <ip-address_of_staging_target>/32  <auth-method>
host  replication  <role>  <ip-address_of_staging_target>/32  <auth-method>
<auth-method> must be md5 or trust, indicating whether a password is required (md5) or not (trust). For more information on how to
configure pg_hba.conf, see the Client Authentication chapter in the PostgreSQL documentation.
Related Links
General Network and Connectivity Requirements
Server Configuration in the PostgreSQL documentation
Client Authentication in the PostgreSQL documentation
442
4. The pg_xlogdump utility must be installed; it is typically included in the postgresql-contrib package. For PostgreSQL 9.2,
pg_xlogdump was not included in the standard PostgreSQL packages, so a copy is included in the toolkit directory installed by the Delphix Engine.
5. There must be an operating system user with the following privileges:
a. The Delphix Engine must be able to make an SSH connection to the target environment using the operating system user.
b. The operating system user must have read and execute privileges on the PostgreSQL binaries installed on the
target environment.
c. The operating system user must have permission to run mount and umount as the superuser via sudo, with neither a password
nor a TTY, via the following entries in /etc/sudoers:
/etc/sudoers Configuration
Defaults:<username> !requiretty
<username> ALL=NOPASSWD:/bin/mount, /bin/umount
6. There must be a directory on the target environment where the Delphix Engine toolkit can be installed (for
example, /var/tmp) with the following properties:
a. The toolkit directory must be writable by the operating system user mentioned above.
b. The toolkit directory must have at least 256 MB of available storage.
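These two properties can be checked with a short script before adding the environment. The path and the 256 MB threshold come from the text above; the script itself is an illustrative sketch, not part of the Delphix toolkit.

```shell
#!/bin/sh
# Sketch: verify a candidate toolkit directory is writable and has at
# least 256 MB of available storage (262144 one-kilobyte blocks).
TOOLKIT_DIR="${TOOLKIT_DIR:-/var/tmp}"
REQUIRED_KB=262144

if [ ! -w "$TOOLKIT_DIR" ]; then
    TOOLKIT_CHECK="not writable"
else
    AVAIL_KB=$(df -Pk "$TOOLKIT_DIR" | awk 'NR==2 {print $4}')
    if [ "$AVAIL_KB" -ge "$REQUIRED_KB" ]; then
        TOOLKIT_CHECK="ok"
    else
        TOOLKIT_CHECK="insufficient space"
    fi
fi
echo "toolkit directory check: $TOOLKIT_CHECK"
```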
7. There must be a mount point directory (for example, /mnt/provision) that will be used as the base for
mount points that are created when provisioning a VDB, with the following properties:
a. The mount point directory must be writable by the operating system user mentioned above.
b. The mount point directory should be empty.
8. TCP/IP connectivity to and from the source environment must be configured as described in General Network and Connectivity Requirements.
Related Links
Using HostChecker to Confirm Source and Target Environment Configuration
sudoers Manual Page
443
Supported operating system and PostgreSQL versions (x86_64 processor family only).
444
Protocol  Port Numbers  Use
TCP       25
TCP/UDP   53
UDP       123
UDP       162
HTTPS     443     SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP   636
TCP       8415
TCP       50001   Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.
Protocol  Port Number  Use
TCP       22
TCP       80
UDP       161
TCP       443
TCP       8415    Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for Oracle, V2P, and the Delphix Connector.
TCP       50001   Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP   32768 - 65535   Required for NFS mountd and status services from target environment only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd
configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
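A quick way to look for the disallowed entries is to grep the sshd configuration. This is a sketch that assumes the common OpenSSH configuration path; adjust it for your distribution:

```shell
#!/bin/sh
# Flag sshd settings that interrupt long-running Delphix ssh connections.
# /etc/ssh/sshd_config is the usual OpenSSH location; override if yours differs.
SSHD_CONFIG=${SSHD_CONFIG:-/etc/ssh/sshd_config}

if grep -E '^[[:space:]]*(ClientAliveInterval|ClientAliveCountMax)' "$SSHD_CONFIG" 2>/dev/null; then
    echo "Disallowed entries found: comment them out and restart sshd"
else
    echo "No disallowed keepalive entries found"
fi
```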
Refer to Setting Up PostgreSQL Environments: An Overview for information on PostgreSQL environments. The Delphix Engine makes use of
the following network ports for PostgreSQL dSources and VDBs:
Protocol   Port Numbers   Use
TCP        22             SSH connections to the source and target environments
TCP        xxx            PostgreSQL client connections to the PostgreSQL instances on the source and target environments (port 5432 by default)
Protocol   Port Number    Use
TCP/UDP    111            Remote Procedure Call (RPC) port mapper used for NFS mounts
TCP        1110           Network Status Monitor (NSM) client from target hosts to the Delphix Engine
TCP/UDP    2049           NFS server daemon
TCP        4045           Network Lock Manager (NLM) client from target hosts to the Delphix Engine
UDP        33434-33464    Traceroute from source and target database servers to the Delphix Engine (optional)
Incoming, from the Target Environment to the Source Environment:
Protocol: PostgreSQL replication client
Port Number: xxx
1. Since PostgreSQL does not provide a native incremental backup API, a warm standby server (in other words,
one in log-shipping mode) must be created with all database files stored on the Delphix Engine for each
source database. We refer to the creation and maintenance of this staging database as validated sync. During
validated sync, we retrieve data from the source and ensure that all the components necessary for provisioning a VDB have been
validated. The result of validated sync is both a TimeFlow with consistent points from which you can provision a VDB, and a faster
provisioning process, because there is no need for any database recovery when provisioning a VDB. In order to create a staging
database, you must designate a target environment for this task when linking a dSource. During the linking process, database files are
exported over the network to the target environment, where the staging database instance runs as a warm standby server. A target
environment that hosts one or more staging databases is referred to as a staging target for validated sync.
2. Once a staging database has been set up, you can provision virtual databases from any point in time along
the TimeFlow mentioned above to any compatible target environment (for more information, see Requirements
for PostgreSQL Target Hosts and Databases). Database files are exported over the network to the
target environment, where the virtual database instance runs.
What is HostChecker?
The HostChecker is a standalone program that validates that host machines are configured correctly before the Delphix Engine uses them as
data sources and provisioning targets.
Please note that HostChecker does not communicate changes made to hosts back to the Delphix Engine. If you reconfigure a host, you must
refresh the host in the Delphix Engine in order for it to detect your changes.
You can run the tests contained in the HostChecker individually, or all at once. You must run these tests on both the source and target hosts to
verify their configurations. As the tests run, you will either see validation messages that the test has completed successfully, or error messages
directing you to make changes to the host configuration.
Prerequisites
Make sure that your source and target environments meet the requirements specified in PostgreSQL Support and Requirements.
Procedure
1. Download the HostChecker tarball from https://download.delphix.com/ (for
example: delphix_4.0.2.0_2014-04-29-08-38.hostchecker.tar).
2. Create a working directory and extract the HostChecker files from the HostChecker tarball.
mkdir dlpx-host-checker
cd dlpx-host-checker/
tar -xf delphix_4.0.2.0_2014-04-29-08-38.hostchecker.tar
3. Change to the working directory and enter this command. Note that for the target environments, you would change source to target.
4. Select which checks you want to run. We recommend you run all checks if you are running Hostchecker for the first time.
5. Pass in the arguments the checks ask for.
6. Read the output of the check.
7. The error or warning messages will explain any possible problems and how to address them. Resolve the issues that the HostChecker
describes. Don't be surprised or undo your work if more errors appear the next time you run HostChecker, because the error you just
fixed may have been masking other problems.
8. Repeat steps 3-7 until all the checks return no errors or warnings.
Tests Run
Check Host SSH Connectivity
Verifies that the environment is accessible via SSH.

Check Tool Kit Path
Verifies that the toolkit installation location has the proper ownership, proper permissions, and enough free space.

Check Home Directory Permissions
Verifies that the environment can be accessed via SSH using public key authentication. If you don't need this feature, you can ignore the results of this check.

Check OS User Privileges
Verifies that the operating system user can execute certain commands with necessary privileges via sudo. This only needs to be run on target environments. See the topic Requirements for PostgreSQL Target Hosts and Databases for more information.

Check PostgreSQL OS compatibility
Verifies that the environment is running a compatible operating system. See the topic Supported Operating Systems and Database Versions for PostgreSQL Environments for more information.

Check PostgreSQL installations
Attempts to discover existing PostgreSQL installations and validate that they are of a compatible version and that each instance meets the requirements for PostgreSQL source databases. See the topics Requirements for PostgreSQL Source Hosts and Databases and Supported Operating Systems and Database Versions for PostgreSQL Environments for more information.

The checks apply to both PostgreSQL source and PostgreSQL target environments except where noted.
Related Links
PostgreSQL Support and Requirements
Prerequisites
Make sure your environment meets the requirements described in the following topics:
Requirements for PostgreSQL Source Hosts and Databases
Requirements for PostgreSQL Target Hosts and Databases
Supported Operating Systems and Database Versions for PostgreSQL Environments
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Unix/Linux in the operating system menu.
6. Select Standalone Host.
7. Enter the Host IP address.
8. Enter an optional Name for the environment.
9. Enter the SSH port.
The default value is 22.
10. Enter a Username for the environment.
For more information about the environment user requirements, see Requirements for PostgreSQL Target Hosts and Databases and
Requirements for PostgreSQL Source Hosts and Databases.
11. Select a Login Type.
For Password, enter the password associated with the user in Step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 ~/.ssh/authorized_keys to restrict read and write privileges to your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
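Steps b and c above amount to the following shell sketch on the remote environment. The key string is a placeholder for the one copied from View Public Key:

```shell
#!/bin/sh
# Append the Delphix public key to authorized_keys with the permissions
# described above. DELPHIX_PUB_KEY is a placeholder value.
DELPHIX_PUB_KEY="ssh-rsa AAAAB3...example-placeholder delphix@engine"

mkdir -p "$HOME/.ssh"
printf '%s\n' "$DELPHIX_PUB_KEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"   # readable/writable by this user only
chmod 755 "$HOME"                        # home directory writable only by this user
echo "key installed"
```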
12. For Password Login, click Verify Credentials to test the username and password.
13. Enter a Toolkit Path.
See Requirements for PostgreSQL Target Hosts and Databases and Requirements for PostgreSQL Source Hosts and Databases
for more information about the toolkit directory requirements.
14. Click OK.
As the new environment is added, you will see two jobs running in the Delphix Admin Job History, one to Create and Discover an
environment, and another to Create an environment. When the jobs are complete, you will see the new environment added to the list in
the Environments panel. If you don't see it, click the Refresh icon in your browser.
Post-Requisites
After you create the environment, you can view information about it by selecting Manage > Environments, and then select the
environment name.
Related Links
Setting Up PostgreSQL Environments: An Overview
Requirements for PostgreSQL Source Hosts and Databases
Requirements for PostgreSQL Target Hosts and Databases
Supported Operating Systems and Database Versions for PostgreSQL Environments
Adding an Installation to a PostgreSQL Environment
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Select Manage > Environments.
3. Click Databases.
4. Click the green Plus icon next to Add Dataset Home.
5. Under Dataset Home Type, select PostgreSQL.
6. Enter the path to the Installation.
7. Click the Check icon when finished.
Related Links
Adding a Database Cluster to a PostgreSQL Environment
Prerequisites
Make sure your source database meets the requirements described in Requirements for PostgreSQL Source Hosts and Databases a
nd Requirements for PostgreSQL Target Hosts and Databases.
Before adding a database, the installation of the database must exist in the environment. If the installation does not exist in the
environment, follow the steps in Adding an Installation to a PostgreSQL Environment.
Procedure
1. Log into the Delphix Admin application using Delphix Admin credentials.
2. Select Manage > Environments.
3. Click Databases.
4. Choose the installation which has been used to start the database cluster.
Click the Up icon next to the installation path to show details if needed.
5. Click the green Plus icon next to Add DB Cluster.
6. Enter the Path of the data cluster directory.
7. Click the Check icon when finished.
Related Links
Requirements for PostgreSQL Source Hosts and Databases
Requirements for PostgreSQL Target Hosts and Databases
Adding an Installation to a PostgreSQL Environment
Procedure
1. Log in to the Delphix Admin application with Delphix Admin credentials or as the owner of an environment.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of an environment to view its attributes.
5. Under Attributes, click the Pencil icon to edit an attribute.
6. Click the Check icon to save your edits.
Attributes:
Environment Users: The users for that environment. These are the users who have permission to ssh into an environment, or access the environment through the Delphix Connector. See the Requirements topics for specific data platforms for more information on the environment user requirements.
Host Address
Notes

PostgreSQL Attributes:
SSH Port
Toolkit Path
Prerequisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the name of an environment to open the environment information screen.
5. Under Basic Information, click the green Plus icon to add a user.
6. Enter the Username and Password for the OS user in that environment.
Prerequisites
You cannot delete an environment that has any dependencies, such as dSources or virtual databases (VDBs). These must be deleted before you
can delete the environment.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, select the environment you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Procedure
1. Log in to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of the environment you want to refresh.
5. Click the Refresh icon.
To refresh all environments, click the Refresh icon next to Environments.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. To enable or disable staging, slide the button next to Use as Staging to Yes or No.
6. To enable or disable provisioning, slide the button next to Allow Provisioning to On or Off.
Changing the Host Name or IP Address for PostgreSQL Source and Target Environments
This topic describes how to change the host name or IP address for source and target environments, and for the Delphix Engine.
Procedure
For Source Environments
For VDB Target Environments
For the Delphix Engine
Procedure
For Source Environments
1. Disable the dSource as described in Enabling and Disabling dSources.
2. If the Host Address field contains an IP address, edit the IP address.
3. If the Host Address field contains a host name, update your Domain Name Server to associate the new IP address to the host name.
The Delphix Engine will automatically detect the change within a few minutes.
4. In the Environments screen of the Delphix Engine, refresh the host.
5. Enable the dSource.
Related Links
Setting Up PostgreSQL Environments: An Overview
PostgreSQL Support and Requirements
Prerequisites
Make sure you have the correct user credentials for the source environment, as described in Requirements for PostgreSQL Source
Hosts and Databases
You may also want to read the topic Advanced Data Management Settings for PostgreSQL Data Sources.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing PostgreSQL Environment Users.
6. Enter your login credentials for DB Cluster User and DB Cluster Password.
7. Click Advanced to enter a Connection Database.
The Connection Database will be used when issuing SQL queries from the Delphix Engine to the linked database. It can be any existing
database that the DB Cluster User has permission to access.
8. Click Next.
9. Select a Database Group for the dSource, and then click Next.
Adding a dSource to a database group lets you set Delphix Domain user permissions for that database and its objects, such as
snapshots. See the topics under Users, Permissions, and Policies for more information.
10. Select a SnapSync Policy, and, if necessary, a Staging Installation for the dSource.
The Staging installation represents the PostgreSQL binaries that will be used on the staging target to backup and restore the linked
database to a warm standby.
11. Click Advanced to select whether the data in the data sources is Masked, to select a Retention Policy, and to indicate whether any pre
or post scripts should be executed during the dSource creation.
For more information, see Advanced Data Management Settings for PostgreSQL Data Sources and Using Pre- and Post-Scripts
with PostgreSQL dSources.
12. Click Next.
13. Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs, DB_Link and DB_Sync, to create the dSource. You can monitor these jobs by clicking Active
Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have successfully completed, the database icon will
change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under
its assigned group.
Related Links
Advanced Data Management Settings for PostgreSQL Data Sources
Requirements for PostgreSQL Target Hosts and Databases
Using Pre- and Post-Scripts with PostgreSQL dSources
Users, Permissions, and Policies
1. In the Data Management panel of the Add dSource wizard, click Advanced.
On the back of the dSource card:
1. Click Manage.
2. Select Policies. This will open the Policy Management screen.
3. Select the policy for the dSource you want to modify.
4. Click Modify.
For more information, see Creating Custom Policies and Creating Policy Templates.
Retention Policies
Retention policies define the length of time that the Delphix Engine retains snapshots and log files to which you can rewind or provision objects
from past points in time. The retention time for snapshots must be equal to, or longer than, the retention time for logs.
To support longer retention times, you may need to allocate more storage to the Delphix Engine. The retention policy in combination with the
SnapSync policy can have a significant impact on the performance and storage consumption of the Delphix Engine.
Schedule By Settings
In the default SnapSync policy setting, snapshots are taken daily at a set time, with a four-hour period. You can modify the snapshot schedule and
frequency by changing the Schedule By setting.
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
Detaching a dSource
1. Log in to the Delphix Admin application as a user with OWNER privileges on the dSource, group, or domain.
2. Click Manage.
3. Select My Databases.
4. Select the database you want to unlink or delete.
5. Click the Unlink icon.
A warning message will appear.
6. Click Yes to confirm.
Attaching a dSource
Rebuilding Source Databases and Using VDBs
In situations where you want to rebuild a source database, you will need to detach the original dSource and create a new one
from the rebuilt data source. However, you can still provision VDBs from the detached dSource.
1. Detach the dSource as described above.
2. Rename the detached dSource by clicking the Edit icon in the upper left-hand corner of the dSource card, next to its
name.
This is necessary only if you intend to give the new dSource the same name as the original one. Otherwise, you will see
an error message.
3. Create the new dSource from the rebuilt database.
You will now be able to provision VDBs from both the detached dSource and the newly created one, but the detached dSource
will only represent the state of the source database prior to being detached.
The attach operation is currently only supported from the command line interface (CLI). Full GUI support will be added in a future release. Only
databases that represent the same physical database can be re-attached.
1. Log in to the Delphix CLI as a user with OWNER privileges on the dSource, group, or domain.
2. Select the dSource by name using database select <dSource>.
3. Run the attachSource command.
4. Set the source config to which you want to attach using set source.config=<newSource>. Source configs are named by their
database unique name.
5. Set any other source configuration operations as you would for a normal link operation.
6. Run the commit command.
Prerequisites
You cannot delete a dSource that has dependent virtual databases (VDBs). Before deleting a dSource, make sure that you have deleted all
dependent VDBs as described in Deleting a VDB.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Deleting a dSource will also delete all snapshots, logs, and descendant VDB Refresh policies for that database. You cannot
undo the deletion.
Icon      Description
(fault)   There is a critical fault associated with the dSource or VDB. See the error logs for more information.
Related Links
Setting Up PostgreSQL Environments: An Overview
Requirements for PostgreSQL Target Hosts and Databases
Supported Operating Systems and Database Versions for PostgreSQL Environments
Provisioning a PostgreSQL VDB
Prerequisites
You will need to have linked a dSource from a source database, as described in Linking a PostgreSQL dSource, or have already
created a VDB from which you want to provision another VDB.
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a dSource snapshot.
See Provisioning by Snapshot and LogSync in this topic for more information on provisioning options.
You can take a snapshot of the dSource to provision from by clicking the Camera icon on the dSource card.
7. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a
point in time within a snapshot.
8. Click Provision.
The VDB Provisioning Wizard will open, and the fields Installation, Mount Base, and Environment User will auto-populate with
information from the environment configuration.
9. Enter a Port Number.
The TCP port upon which the VDB will listen.
10. Click Advanced to enter any VDB configuration settings.
For more information, see Customizing PostgreSQL VDB Configuration Settings.
11. Click Next to continue to the VDB Configuration tab.
12. Modify the VDB Name if necessary.
13. Select a Target Group for the VDB.
14. Click the green Plus icon to add a new group, if necessary.
15. Select a Snapshot Policy for the VDB.
16. Click the green Plus icon to create a new policy, if necessary.
17. Click Next to continue to the Hooks tab.
18. Specify any Hooks to be used during the provisioning process.
For more information, see Customizing PostgreSQL Management with Hook Operations.
19.
Related Links
Linking a PostgreSQL dSource
Requirements for PostgreSQL Target Hosts and Databases
Using Pre- and Post-Scripts with dSources and VDBs
Customizing PostgreSQL VDB Configuration Settings
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. To enable or disable staging, slide the button next to Use as Staging to Yes or No.
6. To enable or disable provisioning, slide the button next to Allow Provisioning to On or Off.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Click My Databases.
4. Select the VDB you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Prerequisites
You must have already set up a new target environment that is compatible with the VDB that you want to migrate.
Procedure
1. Log in to your Delphix Engine using Delphix Admin credentials.
2. Click Manage.
3. Click My Databases.
4. Select the VDB you want to migrate.
5. Click the Open icon.
6. Slide the Enable/Disable control to Disabled.
7. Click Yes to confirm.
When the VDB is disabled, its icon will turn gray.
8. In the lower right-hand corner of the VDB card, click the VDB Migrate icon.
9. Select the new target environment for the VDB, the user for that environment, and the database installation where the VDB will
reside.
10. Click the Check icon to confirm your selections.
11. Slide the Enable/Disable control to Enabled.
12. Click Yes to confirm.
Within a few minutes, your VDB will re-start in the new environment, and you can continue to work with it as you would with any other
VDBs.
Video
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You must have added a compatible target environment on the target host.
Procedure
1. Log in to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Icon      Description
(fault)   There is a critical fault associated with the dSource or VDB. See the error logs for more information.
VDB Configuration
When you create a VDB, configuration settings are copied from the dSource and used to create the VDB. Most settings are copied directly, and
you can see these settings by clicking the Advanced link in the Target Environment screen in the VDB Provisioning Wizard. When a VDB is
provisioned, you can specify configuration parameters directly. Note, however, that some configuration parameters are restricted and cannot be
customized, while others are stripped out during the provisioning process but can be customized. The lists of restricted and customizable
parameters can be found below.
Restricted Parameters
These parameters are restricted for use by the Delphix Engine. Attempting to customize these parameters will cause an error during the
provisioning process.
archive_command
archive_mode
wal_level
port
data_directory
config_file
hba_file
ident_file
max_stack_depth
wal_segment_size
block_size
lc_ctype
segment_size
wal_block_size
lc_collate
server_version
integer_datetimes
server_encoding
server_version_num
max_identifier_length
max_index_keys
max_function_args
include
include_if_exists
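Before provisioning, you can scan a candidate configuration fragment for the restricted parameters listed above. The following is a sketch, not a Delphix tool; `my_vdb_settings.conf` is a hypothetical file name:

```shell
#!/bin/sh
# Scan a candidate postgresql.conf fragment for parameters reserved by the
# Delphix Engine. CONF is a hypothetical example file name.
CONF=${1:-my_vdb_settings.conf}
RESTRICTED='archive_command|archive_mode|wal_level|port|data_directory|config_file|hba_file|ident_file|max_stack_depth|wal_segment_size|block_size|lc_ctype|segment_size|wal_block_size|lc_collate|server_version|integer_datetimes|server_encoding|server_version_num|max_identifier_length|max_index_keys|max_function_args|include|include_if_exists'

# Print any line that sets one of the restricted parameters.
if grep -nE "^[[:space:]]*(${RESTRICTED})[[:space:]]*=" "$CONF" 2>/dev/null; then
    echo "Remove the settings above before provisioning"
else
    echo "No restricted parameters found"
fi
```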
dSource Hooks

Pre-Sync
Operations performed before a sync.

Post-Sync
Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations.
These operations can undo any changes made by the Pre-Sync hook.

Virtual Dataset Hooks

Configure Clone
Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started.
During a refresh, this hook will run before the Post-Refresh hook.

Pre-Refresh
Operations performed before a refresh.

Post-Refresh
Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run
if the refresh or Pre-Refresh hook operations fail.
These operations can restore cached data after the refresh completes.

Pre-Rewind
Operations performed before a rewind.

Post-Rewind
Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail.
These operations can restore cached data after the rewind completes.

Pre-Snapshot
Operations performed before a snapshot.

Post-Snapshot
Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook
operations.
These operations can undo any changes made by the Pre-Snapshot hook.
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
You can construct hook operation lists through the Delphix Admin application or the command line interface (CLI). You can either define the
operation lists as part of the linking or provisioning process or edit them on dSources or virtual datasets that already exist.
source "pomme" update operations postRefresh *> add
source "pomme" update operations postRefresh 0 *> set type=RunCommandOnSourceOperation
source "pomme" update operations postRefresh 0 *> set command="echo Refresh completed."
source "pomme" update operations postRefresh 0 *> ls
source "pomme" update operations postRefresh 0 *> commit

source "pomme" update operations postRefresh *> add
source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
source "pomme" update operations postRefresh 1 *> back
source "pomme" update operations postRefresh *> unset 1
source "pomme" update operations postRefresh *> commit
Shell Operations
RunCommand Operation
The RunCommand operation runs a shell command on a Unix environment using whatever binary is available at /bin/sh. The environment user
runs this shell command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the shell command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Examples of RunCommand Operations
You can input the full command contents into the RunCommand operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
if test -d "$remove_dir"; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
If a script already exists on the remote environment and is executable by the environment user, the RunCommand operation can execute this
script directly.
RunBash Operation
The RunBash operation runs a Bash command on a Unix environment using a bash binary provided by the Delphix Engine. The environment user
runs this Bash command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the Bash command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of RunBash Operations
You can input the full command contents into the RunBash operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
# Bashisms are safe here!
if [[ -d "$remove_dir" ]]; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
Shell Operation Tips
Using nohup
You can use the nohup command and process backgrounding in order to "detach" a process from the Delphix Engine. However, if
you use nohup and process backgrounding, you MUST redirect stdout and stderr.
Unless you explicitly tell the shell to redirect stdout and stderr in your command or script, the Delphix Engine will keep its connection to the
remote environment open while the process is writing to either stdout or stderr . Redirection ensures that the Delphix Engine will see no more
output and thus not block waiting for the process to finish.
For example, imagine having your RunCommand operation background a long-running Python process. Below are the bad and good ways to do
this.
Bad Examples

nohup python file.py &                   # no redirection
nohup python file.py 2>&1 &              # stdout is not redirected
nohup python file.py 1>/dev/null &       # stderr is not redirected
nohup python file.py 2>/dev/null &       # stdout is not redirected

Good Examples

nohup python file.py 1>/dev/null 2>&1 &  # both stdout and stderr redirected, Delphix Engine will not block
Other Operations
RunExpect Operation
The RunExpect operation executes an Expect script on a Unix environment. The Expect utility provides a scripting language that makes it easy to
automate interactions with programs which normally can only be used interactively, such as ssh. The Delphix Engine includes a
platform-independent implementation of a subset of the full Expect functionality.
The script is run on the remote environment as the environment user from their home directory. The Delphix Engine captures and logs all output
of the script. If the operation fails, the output is displayed in the Delphix Admin application and CLI to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunExpect Operation
Environment Variable   Description
PGDATA                 The path to the VDB data files mounted from the Delphix Engine
PGPORT                 The TCP port on which the VDB listens
PGUSER
PGDATABASE
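A hook script can consume these variables directly. The following is a sketch; the default values are stand-ins for what the Delphix Engine would export at hook run time:

```shell
#!/bin/sh
# Hook-style snippet using the PostgreSQL variables above. The defaults are
# illustrative placeholders; the engine supplies real values when the hook runs.
: "${PGDATA:=/mnt/provision/vdb1/data}"
: "${PGPORT:=5433}"
: "${PGDATABASE:=postgres}"

echo "VDB data directory: $PGDATA"
echo "VDB port: $PGPORT"
echo "connection database: $PGDATABASE"
```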
TCP/IP connectivity to and from the source environment must be configured as described in General Network
and Connectivity Requirements.
Related Links
General Network and Connectivity Requirements
/etc/sudoers Configuration
Defaults:<username> !requiretty
<username> ALL=NOPASSWD:/bin/mount, /bin/umount
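Concretely, these entries can be placed in a sudoers drop-in file. A minimal sketch, assuming the environment user is named delphix_os (substitute your own environment user; nothing here is engine-mandated beyond the two entries above):

```
# /etc/sudoers.d/delphix -- validate with: visudo -cf /etc/sudoers.d/delphix
# "delphix_os" is an assumed user name; substitute your environment user.
Defaults:delphix_os !requiretty
delphix_os ALL=NOPASSWD:/bin/mount, /bin/umount
```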
There must be a directory on the target environment where the Delphix Engine toolkit can be installed (for example, /var/tmp) with the
following properties:
The toolkit directory must have mode 770 and be owned by the operating system user; otherwise the Delphix Engine will raise a fault.
The toolkit directory must have at least 1.5 GB of available storage.
NFS must be running on the host.
There must be a mount point directory (for example, /mnt/provision) that will be used as the base for mount points that are created when
provisioning a VDB. The mount point directory must:
be writable by the operating system user mentioned above.
be empty.
TCP/IP connectivity to and from the source environment must be configured as described in General Network and Connectivity
Requirements.
Java version 6 or greater must be installed on the host.
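The directory requirements above can be scripted as a pre-flight check. A minimal sketch; the paths are placeholders chosen so the script runs unprivileged — substitute the real toolkit directory (for example, /var/tmp) and mount base (for example, /mnt/provision):

```shell
#!/bin/sh
# Pre-flight checks for a Delphix target host (paths are placeholders).
TOOLKIT_DIR=/tmp/delphix-toolkit
MOUNT_BASE=/tmp/delphix-provision

mkdir -p "$TOOLKIT_DIR" "$MOUNT_BASE"
chmod 770 "$TOOLKIT_DIR"

# The toolkit directory must be mode 770 with at least 1.5 GB available.
mode=$(stat -c %a "$TOOLKIT_DIR")
free_kb=$(df -Pk "$TOOLKIT_DIR" | awk 'NR==2 {print $4}')
echo "toolkit mode: $mode (want 770)"
echo "toolkit free: ${free_kb} KB (want >= 1572864)"

# The mount point base must be empty and writable by the environment user.
[ -z "$(ls -A "$MOUNT_BASE")" ] && echo "mount base is empty"
[ -w "$MOUNT_BASE" ] && echo "mount base is writable"

# Java 6 or greater must also be present.
java -version 2>&1 | head -1
```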
Related Links
Using HostChecker to Confirm Source and Target Environment Configuration
sudoers Manual Page
Outbound from the Delphix Engine:
Protocol   Port Number   Use
TCP        25            SMTP connections to a local mail server for email alerts
TCP/UDP    53            DNS lookups
UDP        123           NTP time synchronization
UDP        162           SNMP traps to an SNMP manager
HTTPS      443           SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP    636           Secure LDAP (LDAPS) connections to an LDAP server
TCP        8415          Delphix Session Protocol (DSP) connections, for example for replication
TCP        50001         Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.
Incoming to the Delphix Engine:
Protocol   Port Number     Use
TCP        22              SSH connections to the Delphix Engine
TCP        80              HTTP connections to the Delphix GUI
UDP        161             SNMP messages from an SNMP manager
TCP        443             HTTPS connections to the Delphix GUI
TCP        8415            Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for Oracle, V2P, and the Delphix Connector
TCP        50001           Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP    32768 - 65535   Required for NFS mountd and status services from the target environment, only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
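A quick way to audit a host for these entries is a case-insensitive scan of the sshd configuration. This is only a sketch — the grep pattern and the scratch-copy path are assumptions; on a real host, point it at /etc/ssh/sshd_config:

```shell
#!/bin/sh
# Flag sshd keepalive entries that Delphix disallows.
check_sshd_config() {
    # $1: path to an sshd_config file
    if grep -Eiq '^[[:space:]]*(ClientAliveInterval|ClientAliveCountMax)' "$1"; then
        echo "disallowed entries present"
    else
        echo "ok"
    fi
}

# Demo against a scratch copy; on a real host, check /etc/ssh/sshd_config.
printf 'Port 22\nClientAliveInterval 60\n' > /tmp/sshd_config.copy
check_sshd_config /tmp/sshd_config.copy
```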
The Delphix Engine makes use of the following network ports for MySQL dSources and VDBs:
Protocol   Port Number   Use
TCP        22            SSH connections to the source and target environments
TCP        xxx           MySQL client connections/JDBC connections to the MySQL instances on the source and target environments (port 3306 by default)
Protocol   Port Number     Use
TCP/UDP    111             Remote Procedure Call (RPC) port mapper used for NFS mounts
TCP        1110            Network Status Monitor (NSM) client from target hosts to the Delphix Engine
TCP/UDP    2049            NFS server daemon
TCP        4045            Network Lock Manager (NLM) client from target hosts to the Delphix Engine
UDP        33434 - 33464   Traceroute from source and target database servers to the Delphix Engine (optional)

Incoming
From                 To                   Port Number   Use
Target Environment   Source Environment   xxx           MySQL replication client connections (the source database port)
Processor Family: x86_64
What is HostChecker?
The HostChecker is a standalone program which validates that host machines are configured correctly before the Delphix Engine uses them as
data sources and provision targets.
Please note that HostChecker does not communicate changes made to hosts back to the Delphix Engine. If you reconfigure a host, you must
refresh the host in the Delphix Engine in order for it to detect your changes.
You can run the tests contained in the HostChecker individually, or all at once. You must run these tests on both the source and target hosts to
verify their configurations. As the tests run, you will either see validation messages that the test has completed successfully, or error messages
directing you to make changes to the host configuration.
Prerequisites
Ensure that your source and target environments meet the requirements specified in MySQL Support and Requirements.
Procedure
1. Download the HostChecker tarball for the O/S version that runs on the source or target hosts.
a. For Linux, hostchecker_linux_x86.tar
b. For Solaris, hostchecker_sunos_sparc.tar
c. For HP-UX, hostchecker_hpux_ia64.tar
2. Create a working directory and extract the HostChecker files from the HostChecker tarball.
mkdir dlpx-host-checker
cd dlpx-host-checker/
tar -xf hostchecker_linux_x86.tar
3. Change to the hostchecker sub-directory and enter this command:
$ ./hostchecker.sh
Do Not Run as Root
Do not run the HostChecker as root; this will cause misleading or incorrect results from many of the checks.
Tests Run
Test                               Description
Check Host SSH Connectivity        Verifies that the environment can be accessed via SSH using public key authentication. If you do not need this feature, you can ignore the results of this check.
Check Toolkit Path                 Verifies that the toolkit installation location has the proper ownership, proper permissions, and enough free space.
Check Home Directory Permissions   Verifies that the environment user's home directory exists and has the proper permissions.
Check OS User Privileges           Verifies that the operating system user can execute certain commands with necessary privileges via sudo. This only needs to be run on target environments. For more information, see Requirements for MySQL Target/Staging Hosts and Databases.
Check MySQL Installation           Verifies that the appropriate MySQL binaries are executable by the current user for the specified MySQL installation. For more information, see Requirements for MySQL Source Hosts and Databases and Supported Operating Systems and Database Versions for MySQL Environments.
Related Links
MySQL Support and Requirements
Managing MySQL Environments
Prerequisites
Make sure your environment meets the requirements described in the following topics:
Requirements for MySQL Source Hosts and Databases
Requirements for MySQL Target/Staging Hosts and Databases
Supported Operating Systems and Database Versions for MySQL Environments
Procedure
1. Login to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Unix/Linux in the operating system menu.
6. Select Standalone Host.
7. Enter the Host IP address.
8. Enter an optional Name for the environment.
9. Enter the SSH port.
The default value is 22.
10. Enter a Username for the environment.
For more information about the environment user requirements, see Requirements for MySQL Target/Staging Hosts and Databases and Requirements for MySQL Source Hosts and Databases.
11. Select a Login Type.
For Password, enter the password associated with the user in step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
12. For Password Login, click Verify Credentials to test the username and password.
13. Enter a Toolkit Path.
For more information about the toolkit directory requirements, see Requirements for MySQL Target/Staging Hosts and Databases and Requirements for MySQL Source Hosts and Databases.
14. Click OK.
As the new environment is added, you will see two jobs running in the Delphix Admin Job History, one to Create and Discover an
environment, and another to Create an environment. When the jobs are complete, you will see the new environment added to the list in
the Environments tab. If you do not see it, click the Refresh icon in your browser.
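The public-key steps in the note above can be sketched as a short script. DELPHIX_KEY and the demo home directory are placeholders — on a real host, paste the key copied from View Public Key and operate on the environment user's actual home directory:

```shell
#!/bin/sh
# Install the Delphix public key for an environment user (sketch).
DELPHIX_KEY="ssh-rsa AAAAB3...placeholder delphix"   # paste from "View Public Key"
HOMEDIR=/tmp/demo-home                               # stand-in for the user's home

mkdir -p "$HOMEDIR/.ssh"
printf '%s\n' "$DELPHIX_KEY" >> "$HOMEDIR/.ssh/authorized_keys"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"   # read/write for the user only
chmod 755 "$HOMEDIR"                        # home writable only by the user
```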
Post-Requisites
To view information about an environment after you have created it:
1. Click Manage.
2. Select Environments.
3. Select the environment name.
Related Links
Setting Up MySQL Environments: An Overview
Requirements for MySQL Source Hosts and Databases
Requirements for MySQL Target/Staging Hosts and Databases
Supported Operating Systems and Database Versions for MySQL Environments
Adding an Installation to a MySQL Environment
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. Click the green Plus icon next to Add Dataset Home.
6. Under Dataset Home Type, select MySQL.
7. Enter the path to the installation.
8. When finished, click the Check icon.
Related Links
Adding a MySQL Server to a MySQL Environment
Managing MySQL Environments
Prerequisites
Make sure your source database meets the requirements described in Requirements for MySQL Source Hosts and Databases and Requirements for MySQL Target/Staging Hosts and Databases.
Before adding a database, the installation of the database must exist in the environment. If the installation does not exist in the
environment, follow the steps in Adding an Installation to a MySQL Environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. Choose the installation which has been used to start the database.
If needed, click the Up icon next to the installation path to show details.
6. Click the green Plus icon next to Add Database.
7. Enter the data directory of the database as the Path.
8. Enter the port the server is running on as Port.
9. When finished, click the Check icon.
Related Links
Requirements for MySQL Source Hosts and Databases
Requirements for MySQL Target/Staging Hosts and Databases
Adding an Installation to a MySQL Environment
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials or as the owner of an environment.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of an environment to view its attributes.
5. Under Attributes, click the Pencil icon to edit an attribute.
6. Click the Check icon to save your edits.
Attribute             Description
Environment Users     The users for that environment. These are the users who have permission to ssh into an environment, or access the environment through the Delphix Connector. See the Requirements topics for specific data platforms for more information on the environment user requirements.
Host Address          The host name or IP address of the environment host.
Notes                 Free-form notes about the environment.

MySQL Attributes
Attribute             Description
SSH Port              The port used for SSH connections to the environment host (22 by default).
Toolkit Path          The directory in which the Delphix Engine toolkit is installed on the host.
Related Links
Managing MySQL Environments
Changing the Host Name or IP Address for MySQL Source and Target Environments
This topic describes how to change the host name or IP address for source and target environments, and for the Delphix Engine.
Procedure
For Source Environments
For VDB Target Environments
For the Delphix Engine
Procedure
For Source Environments
1. Disable the dSource as described in Enabling and Disabling MySQL dSources.
2. If the Host Address field contains an IP address, edit the IP address.
3. If the Host Address field contains a host name, update your Domain Name Server to associate the new IP address to the host name.
The Delphix Engine will automatically detect the change within a few minutes.
4. In the Environments screen of the Delphix Engine, refresh the host.
5. Enable the dSource.
Related Links
Enabling and Disabling MySQL dSources
Enabling and Disabling MySQL VDBs
Setting Up Network Access to the Delphix Engine
Managing MySQL Environments
Prerequisites
You cannot delete an environment that has any dependencies, such as dSources or virtual databases (VDBs). These must be deleted before you
can delete the environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, select the environment you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Related Links
Managing MySQL Environments
Prerequisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the name of an environment to open the environment information screen.
5. Under Basic Information, click the green Plus icon to add a user.
6. Enter the Username and Password for the OS user in that environment.
Related Links
Managing MySQL Environments
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of the environment you want to refresh.
5. Click the Refresh icon.
To refresh all environments, click the Refresh icon next to Environments.
Related Links
Managing MySQL Environments
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. To enable or disable staging, slide the button next to Use as Staging to Yes or No.
6. To enable or disable provisioning, slide the button next to Allow Provisioning to On or Off.
Related Links
Managing MySQL Environments
Related Links
Setting Up MySQL Environments: An Overview
MySQL Support and Requirements
Supported Operating Systems and Database Versions for MySQL Environments
Prerequisites
Make sure you have the correct user credentials for the source environment, as described in Requirements for MySQL Source Hosts and Databases.
You may also want to read the topic Advanced Data Management Settings for MySQL Data Sources.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing MySQL Environment Users.
Related Links
1. In the Data Management panel of the Add dSource wizard, click Advanced.
On the back of the dSource card
1. Click Manage.
2. Select Policies. This will open the Policy Management screen.
3. Select the policy for the dSource you want to modify.
4. Click Modify.
For more information, see Creating Custom Policies and Creating Policy Templates.
Retention Policies
Retention policies define the length of time that the Delphix Engine retains snapshots and log files to which you can rewind or provision objects
from past points in time. The retention time for snapshots must be equal to, or longer than, the retention time for logs.
To support longer retention times, you may need to allocate more storage to the Delphix Engine. The retention policy in combination with the
SnapSync policy can have a significant impact on the performance and storage consumption of the Delphix Engine.
Related Links
Creating Custom Policies
Creating Policy Templates
MySQL Data Sources
Prerequisites
You cannot delete a dSource that has dependent virtual databases (VDBs). Before deleting a dSource, make sure that you have deleted all
dependent VDBs as described in Deleting a VDB.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Deleting a dSource will also delete all snapshots, logs, and descendant VDB Refresh policies for that database. You cannot
undo the deletion.
Related Links
MySQL Data Sources
Detaching a dSource
1. Login to the Delphix Admin application as a user with OWNER privileges on the dSource, group, or domain.
2. Click Manage.
3. Select My Databases.
4. Select the database you want to unlink or delete.
5. Click the Unlink icon.
A warning message will appear.
6. Click Yes to confirm.
Attaching a dSource
Rebuilding Source Databases and Using VDBs
In situations where you want to rebuild a source database, you will need to detach the original dSource and create a new one
from the rebuilt data source. However, you can still provision VDBs from the detached dSource.
1. Detach the dSource as described above.
2. Rename the detached dSource by clicking the Edit icon in the upper left-hand corner of the dSource card, next to its
name.
This is necessary only if you intend to give the new dSource the same name as the original one. Otherwise, you will see
an error message.
3. Create the new dSource from the rebuilt database.
You will now be able to provision VDBs from both the detached dSource and the newly created one, but the detached dSource
will only represent the state of the source database prior to being detached.
The attach operation is currently only supported from the command line interface (CLI). Full GUI support will be added in a future release. Only
databases that represent the same physical database can be re-attached.
1. Login to the Delphix CLI as a user with OWNER privileges on the dSource, group, or domain.
2. Select the dSource by name using database select <dSource>.
3. Run the attachSource command.
4. Set the source config to which you want to attach using set source.config=<newSource>. Source configs are named by their
database unique name.
5. Set any other source configuration operations as you would for a normal link operation.
6. Run the commit command.
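The steps above can be sketched as a CLI session. The prompts mirror the CLI transcripts elsewhere in this guide, and the dSource name employee_db and source config name employee_db_new are illustrative only:

```
delphix> database select employee_db
delphix database "employee_db"> attachSource
delphix database "employee_db" attachSource *> set source.config=employee_db_new
delphix database "employee_db" attachSource *> commit
```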
Related Links
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
Related Links
MySQL Data Sources
Description
There is a critical fault associated with the dSource or VDB. See
the error logs for more information.
Related Links
Setting Up MySQL Environments: An Overview
Requirements for MySQL Target/Staging Hosts and Databases
Supported Operating Systems and Database Versions for MySQL Environments
Provisioning a MySQL VDB
Prerequisites
You must have already:
linked a dSource from a source database, as described in Linking a MySQL dSource
or,
created a VDB from which you want to provision another VDB
Procedure
1. Login to the Delphix Admin application.
2. Click Manage.
3. Click My Databases.
4. Select a dSource.
5. Select a dSource snapshot.
For more information on provisioning options, see Provisioning by Snapshot or LogSync below.
6. Optional: Slide the LogSync slider to open the snapshot timeline, and then move the arrow along the timeline to provision from a point
in time within a snapshot.
7. Click Provision.
The VDB Provisioning Wizard will open, and the fields Installation, Mount Base, and Environment User will auto-populate with
information from the environment configuration.
8. Enter a Port Number. This is the TCP port upon which the VDB will listen.
9. Click Advanced followed by clicking the green Plus icon (Add Parameter) to add new or update existing VDB configuration settings on
the template provided.
For more information, see Customizing MySQL VDB Configuration Settings.
10. Click Next to continue to the VDB Configuration tab.
11. Modify the VDB Name if necessary.
12. Select a Target Group for the VDB.
13. If necessary, click the green Plus icon to add a new group.
14. Select a Snapshot Policy for the VDB.
15. If necessary, click the green Plus icon to create a new policy.
16. Click the LogSync option to enable the LogSync process for point-in-time provisioning and refresh.
17. Click Next to continue to the Hooks tab.
18. Specify any Hooks to be used during the provisioning process.
For more information, see Customizing MySQL Management with Hook Operations.
19. Click Next to continue to the Summary tab.
20. Verify all the information displayed for the VDB is correct.
21. Click Finish.
When provisioning starts, you can view progress of the job in the Databases panel or in the Job History panel of the Dashboard. When
provisioning is complete, the VDB will be included in the group you designated, and listed in the Databases panel. If you select the VDB in the
Databases panel and click the Open icon, you can view its card, which contains information about the database and its Data Management settings.
Related Links
Linking a MySQL dSource
Requirements for MySQL Target/Staging Hosts and Databases
Using Pre- and Post-Scripts with dSources and VDBs
Customizing MySQL VDB Configuration Settings
VDB Configuration
When you create a VDB, the Delphix Engine copies configuration settings from the dSource and uses them to create the VDB. Most settings are
copied directly, but you can add or update some of them by clicking the Advanced option in the Target Environment screen of the VDB
Provisioning Wizard. Be aware, however, that some configuration parameters cannot be customized at all, while others are stripped out during
the provisioning process but can be customized. The list of restricted parameters can be found below.
Restricted Parameters
These parameters are restricted for use by the Delphix Engine. Attempting to customize these parameters will cause unexpected behavior in
the VDB.
basedir
log_bin
datadir
log_error
gtid_mode
pid_file
port
relay_log
server_id
tmpdir
innodb_checksum_algorithm
innodb_checksums
innodb_data_file_path
innodb_log_file_size
innodb_log_files_in_group
innodb_page_size
innodb_undo_tablespaces
default_storage_engine
innodb_fast_shutdown
innodb_flush_log_at_trx_commit
innodb_flush_method
sync_binlog
sync_master_info
sync_relay_log
sync_relay_log_info
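By contrast, ordinary server parameters remain fair game for the Advanced option. For illustration, a hypothetical set of additions — the values are placeholders, and none of them appears in the restricted list:

```
[mysqld]
# Illustrative values only; parameters in the restricted list above are
# managed by the Delphix Engine and must not be set here.
max_connections         = 200
innodb_buffer_pool_size = 512M
sql_mode                = STRICT_TRANS_TABLES
```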
Related Links
Provisioning VDBs from MySQL dSources
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You must have added a compatible target environment on the target host.
Procedure
1. Login to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Related Links
Replication Overview
Provisioning VDBs from MySQL dSources
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the VDB you want to disable.
5. On the back of the VDB card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the VDB again, move the slider control from Disabled to Enabled, and the VDB will continue to function as it did
previously.
Related Links
Provisioning VDBs from MySQL dSources
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Click My Databases.
4. Select the VDB you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Related Link
Provisioning VDBs from MySQL dSources
Prerequisites
You must have already set up a new target environment that is compatible with the VDB that you want to migrate.
Procedure
1. Login to your Delphix Engine using Delphix Admin credentials.
2. Click Manage.
3. Click My Databases.
4. Select the VDB you want to migrate.
5. Click the Open icon.
6. Slide the Enable/Disable control to Disabled.
7. Click Yes to confirm.
When the VDB is disabled, its icon will turn gray.
8. In the lower right-hand corner of the VDB card, click the VDB Migrate icon.
9. Select the new target environment for the VDB, the user for that environment, and the database installation where the VDB will
reside.
10. Click the Check icon to confirm your selections.
11. Slide the Enable/Disable control to Enabled.
12. Click Yes to confirm.
Within a few minutes, your VDB will re-start in the new environment, and you can continue to work with it as you would with any other
VDBs.
Related Links
Provisioning VDBs from MySQL dSources
Although the VDB no longer contains the previous contents, the previous Snapshots and TimeFlow still remain in Delphix and
are accessible through the Command Line Interface (CLI).
Prerequisites
To refresh a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Auditor permissions on the group that contains the VDB
Owner permissions on the VDB itself
A user with Delphix Admin credentials can perform a VDB Refresh on any VDB in the system.
Procedure
1. Login to the Delphix Admin application.
2. Under Databases, select the VDB you want to refresh.
3. Click the Open icon in the upper right-hand corner of the VDB card.
4. On the back of the VDB card, click the Refresh VDB icon in the lower right-hand corner.
This will open the screen to re-provision the VDB.
5. Select desired refresh point snapshot or slide the display LogSync timeline to pick a point-in-time from which to refresh.
6. Click Refresh VDB.
7. Click Yes to confirm.
Related Links
Managing Policies: An Overview
Creating Custom Policies
Creating Policy Templates
Description
There is a critical fault associated with the dSource or VDB. See
the error logs for more information.
dSource Hooks
Hook             Description
Pre-Sync         Operations performed before a sync.
Post-Sync        Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations.
                 These operations can undo any changes made by the Pre-Sync hook.

VDB Hooks
Hook             Description
Configure Clone  Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started.
                 During a refresh, this hook will run before the Post-Refresh hook.
Pre-Refresh      Operations performed before a refresh.
Post-Refresh     Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run
                 if the refresh or Pre-Refresh hook operations fail.
                 These operations can restore cached data after the refresh completes.
Pre-Rewind       Operations performed before a rewind.
Post-Rewind      Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail.
                 These operations can restore cached data after the rewind completes.
Pre-Snapshot     Operations performed before a snapshot.
Post-Snapshot    Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook
                 operations.
                 These operations can undo any changes made by the Pre-Snapshot hook.
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
You can construct hook operation lists through the Delphix Admin application or the command line interface (CLI). You can either define the
operation lists as part of the linking or provisioning process or edit them on dSources or virtual datasets that already exist.
delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 0 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 0 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 0 *> ls
delphix source "pomme" update operations postRefresh 0 *> commit

delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 1 *> back
delphix source "pomme" update operations postRefresh *> unset 1
delphix source "pomme" update operations postRefresh *> commit
Shell Operations
RunCommand Operation
The RunCommand operation runs a shell command on a Unix environment using whatever binary is available at /bin/sh. The environment user
runs this shell command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the shell command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Examples of RunCommand Operations
You can input the full command contents into the RunCommand operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
if test -d "$remove_dir"; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
If a script already exists on the remote environment and is executable by the environment user, the RunCommand operation can execute this
script directly.
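For instance, if a cleanup script already lives on the host, the RunCommand operation body can be just the invocation. A sketch that stages a stand-in script — the path and script contents are assumptions; on a real environment the script would already be present and executable by the environment user:

```shell
#!/bin/sh
# Stage a stand-in for a script that would already exist on the host.
cat > /tmp/cleanup.sh <<'EOF'
#!/bin/sh
echo "cleaning $1"
EOF
chmod +x /tmp/cleanup.sh

# The RunCommand operation body would then simply be:
/tmp/cleanup.sh "/tmp/dir-to-remove"
```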
RunBash Operation
The RunBash operation runs a Bash command on a Unix environment using a bash binary provided by the Delphix Engine. The environment user
runs this Bash command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the Bash command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of RunBash Operations
You can input the full command contents into the RunBash operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
# Bashisms are safe here!
if [[ -d "$remove_dir" ]]; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
Shell Operation Tips
Using nohup
You can use the nohup command and process backgrounding in order to "detach" a process from the Delphix Engine. However, if
you use nohup and process backgrounding, you MUST redirect stdout and stderr.
Unless you explicitly tell the shell to redirect stdout and stderr in your command or script, the Delphix Engine will keep its connection to the
remote environment open while the process is writing to either stdout or stderr. Redirection ensures that the Delphix Engine sees no more
output and thus does not block waiting for the process to finish.
For example, imagine having your RunCommand operation background a long-running Python process. Below are the bad and good ways to do
this.
Bad Examples
nohup python file.py & # no redirection
nohup python file.py 2>&1 & # stdout is not redirected
nohup python file.py 1>/dev/null & # stderr is not redirected
nohup python file.py 2>/dev/null & # stdout is not redirected
Good Examples
nohup python file.py 1>/dev/null 2>&1 & # both stdout and stderr redirected, Delphix Engine will not block
Other Operations
RunExpect Operation
The RunExpect operation executes an Expect script on a Unix environment. The Expect utility provides a scripting language that makes it easy to
automate interactions with programs which normally can only be used interactively, such as ssh. The Delphix Engine includes a
platform-independent implementation of a subset of the full Expect functionality.
The script is run on the remote environment as the environment user from their home directory. The Delphix Engine captures and logs all output
of the script. If the operation fails, the output is displayed in the Delphix Admin application and CLI to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunExpect Operation
Environment Variable   Description
MYSQL_ENVUSER          The environment user running the MySQL instance
MYSQL_DATADIR          The path to the data directory of the MySQL instance
MYSQL_INSTALL          The path to the MySQL installation
MYSQL_PORT             The TCP port on which the MySQL instance listens
MYSQL_DBUSER           The database user used to connect to the MySQL instance

Environment Variable   Description
MYSQL_ENVUSER          The environment user running the MySQL instance
MYSQL_DATADIR          The path to the data directory of the MySQL instance
MYSQL_INSTALL          The path to the MySQL installation
MYSQL_PORT             The TCP port on which the MySQL instance listens
MYSQL_DBUSER           The database user used to connect to the MySQL instance
MYSQL_SOCKET_FILE      The path to the socket file of the MySQL instance
MYSQL_CNF_FILE         The path to the configuration (my.cnf) file of the MySQL instance
There must be a directory on the source host where you can install the Delphix Engine toolkit, for
example: /var/opt/delphix/Toolkit
The delphix_os user must own the directory
The directory must have permissions 0770, for example, drwxrwx---. However, you can also use more permissive settings.
The directory should have 256MB of available storage.
The Delphix Engine must be able to make an ssh connection (for example, TCP port 22) to the source host
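A quick way to sanity-check these requirements on the source host is a short shell function. This is a sketch that checks existence, ownership, and the 256 MB of free space; the example path and owner come from this section and should be substituted with your own values:

```shell
# check_toolkit_dir DIR OWNER: verify that the toolkit directory exists,
# is owned by the expected user, and sits on a filesystem with >= 256 MB free.
check_toolkit_dir() {
    dir="$1"
    owner="$2"
    if [ ! -d "$dir" ]; then
        echo "missing directory: $dir"
        return 1
    fi
    actual_owner=$(ls -ld "$dir" | awk '{print $3}')
    if [ "$actual_owner" != "$owner" ]; then
        echo "wrong owner: $actual_owner (expected $owner)"
        return 1
    fi
    # -P keeps df output on one line so the Available column is always $4
    free_mb=$(df -Pm "$dir" | awk 'NR==2 {print $4}')
    if [ "$free_mb" -lt 256 ]; then
        echo "only ${free_mb} MB free (need 256)"
        return 1
    fi
    echo "OK: $dir"
}

# Example (run as root on the source host):
# check_toolkit_dir /var/opt/delphix/Toolkit delphix_os
```

Permissions (0770) can still be confirmed by eye from the `ls -ld` output.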
Sample Script
Sample Script to create delphix_os on Linux
USER=delphix_os
GROUP=sybase

# Create the user if it does not already exist
if ! grep -q "^${USER}:" /etc/passwd
then
    echo "Creating User $USER with no Password"
    adduser --gid $GROUP --home-dir /home/$USER $USER
    mkdir /home/$USER/.ssh
    chmod 755 /home/$USER
    # Single quotes keep $PATH unexpanded so it resolves at login time
    echo 'PATH=$PATH:/opt/sybase/ASE15_0/bin; export PATH' >> /home/$USER/.bashrc
    echo "SYBASE=/opt/sybase; export SYBASE" >> /home/$USER/.bashrc
    chown $USER:$GROUP /home/$USER/.ssh
else
    echo "User $USER Already Exists"
fi

# Create the toolkit directory with the required ownership and 0770 permissions
if [ ! -d /home/$USER/toolkit ]
then
    echo "Creating Toolkit Directory"
    mkdir /home/$USER/toolkit
    chown $USER:$GROUP /home/$USER/toolkit
    chmod 0770 /home/$USER/toolkit
else
    echo "Toolkit Directory already Exists"
fi
Sample Script
Sample script run as sa
Related Links
For more information about using the HostChecker bundle, see Using HostChecker to Validate SAP ASE Source and Target
Environments
Linking an SAP ASE Data Source
Sudo File Configurations
To support multiple VDBs, you may need to increase the ASE configuration parameter "number of alarms".
Delphix uses ASE operations, such as MOUNT and UNMOUNT, that consume alarm structures. The number of alarms limits the
number of these operations that can run concurrently. Various ASE instance failures can occur if the available alarm
structures are exhausted. The amount of memory consumed by increasing the number of alarm structures is small. Delphix
recommends that the "number of alarms" value be increased to 4096.
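The value can be raised with the standard ASE sp_configure procedure from isql. A sketch, where SERVERNAME and the sa login are placeholders for your instance:

```
isql -Usa -SSERVERNAME
1> sp_configure "number of alarms", 4096
2> go
```

Consult your ASE documentation for whether your ASE version requires an instance restart for this parameter.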
Related Links
Using HostChecker to Confirm Source and Target Environment Configuration
sudoers Manual Page
Protocol     Port Numbers   Use
TCP          25             Connections to a local SMTP server for sending email alerts
TCP/UDP      53             Connections to local DNS servers
UDP          123            Connections to an NTP server for time synchronization
UDP          162            Sending SNMP traps to an SNMP manager
TCP (HTTPS)  443            SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP      636            Secure connections to an LDAP server
TCP          8415           Delphix Session Protocol connections to a Delphix replication target
TCP          50001          Connections to source and target environments for network performance tests via the Delphix
                            command line interface (CLI). See Network Performance Tool.
Protocol     Port Number    Use
TCP          22             SSH connections to the Delphix Engine command line interface
TCP          80             HTTP connections to the Delphix Engine web interface
UDP          161            SNMP queries from an SNMP manager to the Delphix Engine
TCP          443            HTTPS connections to the Delphix Engine web interface
TCP          8415           Delphix Session Protocol connections from all DSP-based network services including Replication,
                            SnapSync for Oracle, V2P, and the Delphix Connector.
TCP          50001          Connections from source and target environments for network performance tests via the Delphix
                            CLI. See Network Performance Tool.
TCP/UDP      32768-65535    Required for NFS mountd and status services from the target environment only if the firewall
                            between Delphix and the target environment does not dynamically open ports.
                            Note: If no firewall exists between Delphix and the target environment, or the target environment
                            dynamically opens ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd con
figuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
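A quick check for these entries, sketched as a small shell function; the path /etc/ssh/sshd_config is the usual default and may differ on your platform:

```shell
# check_sshd_keepalive FILE: flag the sshd keepalive entries that Delphix
# disallows, so a host can be verified before it is added as an environment.
check_sshd_keepalive() {
    config_file="$1"
    if grep -Eiq '^[[:space:]]*ClientAlive(Interval|CountMax)' "$config_file" 2>/dev/null; then
        echo "disallowed ClientAlive entries found in $config_file"
        return 1
    fi
    echo "OK: no ClientAlive entries in $config_file"
}

# Example (default path on most Linux distributions):
# check_sshd_keepalive /etc/ssh/sshd_config
```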
Refer to Managing SAP ASE Environments for information on SAP ASE environments. The Delphix Engine makes use of the following network
ports for SAP ASE dSources and VDBs:
Protocol     Port Numbers              Use
TCP          Configuration dependent   Connections to the SAP ASE instances
Protocol     Port Number    Use
UDP          33434-33464    Traceroute from source and target database servers to the Delphix Engine (optional)
TCP/UDP      111            Remote Procedure Call (RPC) port mapper used for NFS mounts
TCP          2049           NFS server from target hosts to the Delphix Engine
TCP          1110           Network Status Monitor (NSM) client from target hosts to the Delphix Engine
TCP          4045           Network Lock Manager (NLM) client from target hosts to the Delphix Engine
Protocol     Port Numbers              Use
TCP          Configuration dependent   SAP ASE Remote Backup Server protocol. Applies if linking using the New Full Backup
                                       option, or if linking with the Remote Backup Server option.
Port Allocation Between Staging Target Environments and Shared Backup Fileserver
Protocol: TCP/UDP
Port Numbers: Portmap (111), NFS (typically 2049), Network Lock Manager (NLM), Network Status Monitor (NSM)
Use: NFS mount point exported by an NFS shared backup fileserver. Applies if linking using the Local Backup Server option.
DBMS Versions
Block Diagram of Linking Architecture between SAP ASE Environments and the Delphix Engine
Environment Setup
SAP ASE dSources are backed by a staging database that runs on a target host, as shown in the diagram. There is no requirement for additional
local storage on this host, as the storage is mounted over NFS from the Delphix Engine. At Delphix, we refer to the creation and maintenance of
this staging database on the staging host as "validated sync," because it prepares the dSource data on the Delphix Engine for provisioning VDBs
later on. After the Delphix Engine creates the staging database, it continuously monitors the source database for new transaction log dumps.
When it detects a new transaction log dump, it loads that dump to the staging database. The result is a TimeFlow with consistent points from
which you can provision a virtual database (VDB), and a faster provisioning process, because there is no need for any database recovery during
provisioning.
When you later provision a VDB, you can specify any environment as a target, including the environment that contains the staging database.
However, for best performance, Delphix recommends that you choose a different target environment. The target must have an operating system
that is compatible with the one running on the validated host, as described in Requirements for SAP ASE Target Hosts and Databases.
Related Links
SAP ASE Support and Requirements
What is HostChecker?
The HostChecker is a standalone program which validates that host machines are configured correctly before the Delphix Engine uses them as
data sources and provision targets.
Please note that HostChecker does not communicate changes made to hosts back to the Delphix Engine. If you reconfigure a host, you must
refresh the host in the Delphix Engine in order for it to detect your changes.
You can run the tests contained in the HostChecker individually, or all at once. You must run these tests on both the source and target hosts to
verify their configurations. As the tests run, you will either see validation messages that the test has completed successfully, or error messages
directing you to make changes to the host configuration.
Prerequisites
Make sure that your source and target environments meet the requirements specified in SAP ASE Support and Requirements.
Procedure
1. Download the HostChecker tarball from https://download.delphix.com/ (for example:
delphix_4.0.2.0_2014-04-29-08-38.hostchecker.tar).
2. Create a working directory and extract the HostChecker files from the HostChecker tarball.
mkdir dlpx-host-checker
cd dlpx-host-checker/
tar -xf delphix_4.0.2.0_2014-04-29-08-38.hostchecker.tar
3. Change to the working directory and enter this command. Note that for the target environments, you would change source to target.
4. Select which checks you want to run. We recommend you run all checks if you are running HostChecker for the first time.
5. Pass in the arguments the checks ask for.
6. Read the output of the check.
7. The error or warning messages will explain any possible problems and how to address them. Resolve the issues that the HostChecker
describes. Don't be surprised or undo your work if more errors appear the next time you run HostChecker, because the error you just
fixed may have been masking other problems.
8. Repeat steps 3 through 7 until all the checks return no errors or warnings.
Tests Run
The following tests apply to both ASE source and ASE target environments, except where noted.
Check Host SSH Connectivity
Check Toolkit Path: Verifies that the toolkit installation location has the proper ownership, proper permissions, and enough free
space.
Check OS User Privileges: Verifies that the operating system user can execute certain commands with necessary privileges via
sudo. This only needs to be run on target environments. See the topic Requirements for SAP ASE Target Hosts and Databases for
more information.
Check OS ASE Environment: Checks that the proper ASE environment variables are defined and the isql executable can be found.
Check ASE Installations: Attempts to discover all ASE instances and backup servers, makes sure backup server log files can be
read, and verifies that the user has proper database permissions. See the topic SAP ASE Support and Requirements for more
information.
Related Links
SAP ASE Support and Requirements
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the Plus icon next to Environments.
5. In the Add Environment dialog, select Unix/Linux.
6. Select Standalone Host.
7. Enter the Host IP address.
8.
12. For Password, enter the password associated with the user in Step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 authorized_keys to enable read and write privileges for your user.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
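The append-and-chmod sequence above can be done in one pass on the remote host. A sketch, where the key string is a placeholder for the key copied from View Public Key:

```shell
# Run as the environment user on the remote host.
# The key below is a placeholder; paste the actual key shown by View Public Key.
DELPHIX_KEY="ssh-rsa AAAAB3NzaC1yc2E...placeholder delphix@engine"

mkdir -p ~/.ssh
echo "$DELPHIX_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # read and write privileges for your user only
chmod 755 ~                        # home directory writable only by your user
```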
13. For Password Login, click Verify Credentials to test the username and password.
14. Enter a Toolkit Path.
The toolkit directory stores scripts used for Delphix Engine operations. It must have a persistent working directory rather than a temporary
one. The toolkit directory will have a separate sub-directory for each database instance. The toolkit path must have 0770 permissions.
15. Click the Discover SAP ASE checkbox.
16. Enter a Username for an instance on the environment.
17. Enter the Password associated with the user in Step 16.
18. Click OK.
Post-Requisites
After you create the environment, you can view information about it by selecting Manage > Environments and then selecting the environment
name.
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials or as the owner of an environment.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of an environment to view its attributes.
5. Under Attributes, click the Pencil icon to edit an attribute.
6. Click the Check icon to save your edits.
Attribute            Description
Environment Users    The users for that environment. These are the users who have permission to ssh into an environment, or
                     access the environment through the Delphix Connector. See the Requirements topics for specific data
                     platforms for more information on the environment user requirements.
Host Address
Notes
DB User
DB Password
Procedure
For Source Environments
1. Disable the dSource as described in Enabling and Disabling dSources.
2. If the Host Address field contains an IP address, edit the IP address.
3. If the Host Address field contains a host name, update your Domain Name Server to associate the new IP address to the host name.
The Delphix Engine will automatically detect the change within a few minutes.
4. In the Environments screen of the Delphix Engine, refresh the host.
5. Enable the dSource.
Prerequisites
You cannot delete an environment that has any dependencies, such as dSources or virtual databases (VDBs). These must be deleted before you
can delete the environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, select the environment you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Prerequisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the name of an environment to open the environment information screen.
5. Under Basic Information, click the green Plus icon to add a user.
6. Enter the Username and Password for the OS user in that environment.
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click the name of the environment you want to refresh.
5. Click the Refresh icon.
To refresh all environments, click the Refresh icon next to Environments.
This topic describes how to enable and disable provisioning and linking for SAP ASE databases.
Before a database can be used as a dSource, you must first make sure that you have enabled linking to it. Similarly, before you can provision a
VDB to a target database, you must make sure that you have enabled provisioning to it.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click Databases.
5. Slide the button next to Allow Provisioning to On or Off to enable or disable provisioning for that instance.
6. Click show details for the database.
7. Slide the button next to Allow Linking to On or Off to enable or disable linking.
Related Links
Link an SAP ASE Data Source
Add an SAP ASE Environment
Prerequisites
1. Ensure that the source and target environments are set up correctly, as described in Managing SAP ASE Environments.
2. Before you can link a data source in a Veritas Cluster Server (VCS) environment, a support contact must manually add the
following static configuration parameter to the Delphix Engine to avoid failure. Each node in a VCS environment typically
has more than one IP address for failover purposes, and by default the Delphix Engine will only interface with a single IP
address from the source host unless the following configuration is added:
PRO.RESTRICT_TARGET_IP=false
3. The Delphix Engine configuration override file can be found at the following path:
/var/delphix/server/etc/delphix_config_override.properties
4. Finally, the Delphix Engine stack must be manually restarted for the new configuration to take effect.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a database name to start the dSource creation
process.
5. In the Add dSource wizard, select the source database.
Changing the Environment User
If you need to change or add an environment user for the source database, see Managing SAP ASE Environment Users.
12. Enter the Backup Location. This is the directory where the database backups are stored. Delphix recursively searches this location, so
the database backups or transaction logs can reside in any subdirectories below the path entered.
13. Optionally, enter the Load Backup Server Name. If you have multiple backup servers in your staging environment, you can specify the
name of the backup server here to load database dumps and transaction logs into the staging database. If you leave this parameter
empty, the server designated as "SYB_BACKUP" will be used.
14. Select environment and ASE instance name.
Related Links
Managing SAP ASE Environments
Requirements for SAP ASE Target Hosts and Databases
Managing SAP ASE Environment Users
Users, Permissions, and Policies
1. In the Data Management panel of the Add dSource wizard, click Advanced.
On the back of the dSource card
1. Click Manage.
2. Select Policies. This will open the Policy Management screen.
3. Select the policy for the dSource you want to modify.
4. Click Modify.
For more information, see Creating Custom Policies and Creating Policy Templates.
Retention Policies
Retention policies define the length of time that the Delphix Engine retains snapshots and log files to which you can rewind or provision objects
from past points in time. The retention time for snapshots must be equal to, or longer than, the retention time for logs.
To support longer retention times, you may need to allocate more storage to the Delphix Engine. The retention policy in combination with the
SnapSync policy can have a significant impact on the performance and storage consumption of the Delphix Engine.
Property              Usage
Staging environment
Backup path           Path to the directory, relative to the staging environment, where backups can be found
Environment Variable
ASE_ENVUSER
ASE_DBUSER
ASE_DATABASE
ASE_INSTANCE
ASE_PORT

/opt/app/product/10.2.0.5/db_1/dbs/myscript.sh one "second argument in double quotes" 'third argument in single quotes'
Prerequisites
You cannot delete a dSource that has dependent VDBs. Before deleting a dSource, make sure all dependent VDBs have been deleted
as described in Deleting an SAP ASE VDB.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Detaching a dSource
1. Login to the Delphix Admin application as a user with OWNER privileges on the dSource, group, or domain.
2. Click Manage.
3. Select My Databases.
4. Select the database you want to unlink or delete.
5. Click the Unlink icon.
A warning message will appear.
6. Click Yes to confirm.
Attaching a dSource
Rebuilding Source Databases and Using VDBs
In situations where you want to rebuild a source database, you will need to detach the original dSource and create a new one
from the rebuilt data source. However, you can still provision VDBs from the detached dSource.
1. Detach the dSource as described above.
2. Rename the detached dSource by clicking the Edit icon in the upper left-hand corner of the dSource card, next to its
name.
This is necessary only if you intend to give the new dSource the same name as the original one. Otherwise, you will see
an error message.
3. Create the new dSource from the rebuilt database.
You will now be able to provision VDBs from both the detached dSource and the newly created one, but the detached dSource
will only represent the state of the source database prior to being detached.
The attach operation is currently only supported from the command line interface (CLI). Full GUI support will be added in a future release. Only
databases that represent the same physical database can be re-attached.
1. Login to the Delphix CLI as a user with OWNER privileges on the dSource, group, or domain.
2. Select the dSource by name using database select <dSource>.
3. Run the attachSource command.
4. Set the source config to which you want to attach using set source.config=<newSource>. Source configs are named by their
database unique name.
5. Set any other source configuration operations as you would for a normal link operation.
6. Run the commit command.
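Assuming a dSource named dexample, the session might look like the following sketch; the names and the exact prompt strings are illustrative, not verbatim output:

```
delphix> database select dexample
delphix database "dexample"> attachSource
delphix database "dexample" attachSource *> set source.config=dexample
delphix database "dexample" attachSource *> commit
```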
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
Description
There is a critical fault associated with the dSource or VDB. See the error logs for more information.
There is a warning fault associated with the dSource or VDB. See the error logs for more information.
The Delphix Engine is checking the VDB status.
The dSource has been deleted or the Source status is UNKNOWN.
The state of the VDB is unknown. This is often associated with a connection error.
The VDB is inactive.
The dSource has been unlinked from the source database.
The VDB is disabled, is in the process of being created, or the creation process has been canceled or failed. For more information,
see Enabling and Disabling SAP ASE VDBs.
The VDB is running normally
The dSource is disabled. For more information, see Enabling and Disabling SAP ASE dSources.
Procedure
1. Login to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. Select a dSource.
6. Select a means of provisioning.
See Provisioning by Snapshot and LogSync in this topic for more information.
7. Click Provision.
The Provision VDB panel will open, and the Instance and Database Name fields will auto-populate with information from the dSource.
8. Specify any Pre or Post Scripts that should be used during the provisioning process.
9. Click Next.
10. Select a Target Group for the VDB.
Click the green Plus icon to add a new group, if necessary.
11. Select a Snapshot Policy for the VDB.
Click the green Plus icon to create a new policy, if necessary.
12. Click Next.
13. If your Delphix Engine system administrator has configured the Delphix Engine to communicate with an SMTP server, you will be able to
specify one or more people to notify when the provisioning is done. You can choose other Delphix Engine users, or enter email
addresses.
14. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard.
When provisioning is complete, the VDB will be included in the group you designated, and it will be listed in the Databases panel. If you
select the VDB in the Databases panel and click the Open icon, you can view its card, which contains information about the database
and its Data Management settings.
Prerequisites
Before you provision an SAP ASE VDB, you must:
Have linked a dSource from a source database, as described in Linking an SAP ASE Data Source, or have already created a VDB from
which you want to provision another VDB
Have set up target environments as described in Adding an SAP ASE Environment
Ensure that you have the required privileges on the target environment as described in Requirements for SAP ASE Target Hosts and
Databases
If you are provisioning to a target environment that is different from the one in which you set up the staging database, you must make
sure that the two environments have compatible operating systems, as described in Requirements for SAP ASE Target Hosts and
Databases. For more information on the staging database and the validated sync process, see Managing SAP ASE Environments: An
Overview.
Procedure
1. Login to the Delphix Admin application.
2. Click Manage
3. Select Databases.
4. Click My Databases.
5. Select a dSource.
6. Select a means of provisioning.
For more information, see Provisioning by Snapshot and LogSync.
7. Click Provision.
The Provision VDB panel will open, and the Instance and Database Name fields will auto-populate with information from the dSource.
8. Select whether to enable Truncate Log on Checkpoint database option for the VDB.
9. Click Next.
10. Select a Target Group for the VDB.
Click the green Plus icon to add a new group, if necessary.
11. Select a Snapshot Policy for the VDB.
Click the green Plus icon to create a new policy, if necessary.
12. Click Next.
13. Specify any Hooks to be used during the provisioning process.
For more information, see Customizing SAP ASE Management with Hook Operations.
14. If your Delphix Engine system administrator has configured the Delphix Engine to communicate with an SMTP server, you will be able to
specify one or more people to notify when the provisioning is done. You can choose other Delphix Engine users or enter email
addresses.
15. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard.
When provisioning is complete, the VDB will be included in the group you designated, and it will be listed in the Databases panel. If you
select the VDB in the Databases panel and click the Open icon, you can view its card, which contains information about the database
and its Data Management settings.
Provisioning by Snapshot
You can provision to the start of any snapshot by selecting that snapshot card from the TimeFlow view, or by entering a value in the time entry
fields below the snapshot cards. The values you enter will snap to the beginning of the nearest snapshot.
Provisioning by LogSync
If LogSync is enabled on the dSource, you can provision by LogSync information. When provisioning by LogSync information, you can provision
to any point in time within a particular snapshot. The TimeFlow view for a dSource shows multiple snapshots by default. To view the LogSync
data for an individual snapshot, use the Slide to Open LogSync control at the top of an individual snapshot card. Drag the red triangle to the
point in time from which you want to provision. You can also enter a date and time directly.
Related Links
Linking an SAP ASE Data Source
Adding an SAP ASE Environment
Requirements for SAP ASE Target Hosts and Databases
Managing SAP ASE Environments: An Overview
Customizing SAP ASE Management with Hook Operations
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview
You must have added a compatible target environment on the target host
Procedure
1. Login to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the VDB you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the VDB again, move the slider control from Disabled to Enabled, and the VDB will continue to function as it did
previously.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. Select the VDB you want to delete.
6. Click the Trash icon.
7. Click Yes to confirm.
This topic describes how to migrate a Virtual Database (VDB) from one target environment to another.
There may be situations in which you want to migrate a virtual database to a new target environment, for example when upgrading the host on
which the VDB resides, or as part of a general data center migration. This is easily accomplished by first disabling the database, then using the
Migrate VDB feature to select a new target environment.
Prerequisites
You should have already set up a new target environment that is compatible with the VDB that you want to migrate.
Procedure
1. Login to your Delphix Engine using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. Select the VDB you want to migrate.
6. Click the Open icon.
7. Slide the Enable/Disable control to Disabled.
8. Click Yes to confirm.
When the VDB is disabled, its icon will turn gray.
9. In the lower right-hand corner of the VDB card, click the VDB Migrate icon.
10. Select the new target environment for the VDB, the user for that environment, and the database instance where the VDB will reside.
11. Click the Check icon to confirm your selections.
12. Slide the Enable/Disable control to Enabled.
13. Click Yes to confirm.
Within a few minutes, your VDB will re-start in the new environment, and you can continue to work with it as you would any other VDB.
Although the VDB no longer contains the previous contents, the previous Snapshots and TimeFlow still remain in
Delphix and are accessible through the Command Line Interface (CLI).
Prerequisites
To refresh a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Auditor permissions on the group that contains the VDB
Owner permissions on the VDB itself
A user with Delphix Admin credentials can perform a VDB Refresh on any VDB in the system.
Procedure
1. Login to the Delphix Admin application.
2. Under Databases, select the VDB you want to refresh.
3. Click the Open icon to open the VDB's card.
4. On the back of the VDB card, click the Refresh VDB icon in the lower right-hand corner.
This will open the screen to re-provision the VDB.
5. Select the refresh point as a snapshot or a point in time.
6. Click Refresh VDB.
7. If you want to use login credentials on the target environment other than those associated with the environment user, click Provide
Privileged Credentials.
8. Click Yes to confirm.
Although the VDB no longer contains changes after the rewind point, the rolled over Snapshots and TimeFlow still remain in
Delphix and are accessible through the Command Line Interface (CLI). See the topic CLI Cookbook: Rolling Forward a VDB
for instructions on how to use these snapshots to refresh a VDB to one of its later states after it has been rewound.
Prerequisites
To rewind a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Owner permissions on the VDB itself
You do NOT need owner permissions for the group that contains the VDB. A user with Delphix Admin credentials can perform a VDB Rewind on
any VDB in the system.
Procedure
1. Login to the Delphix Admin application.
2. Under Databases, select the VDB you want to rewind.
3. Select the rewind point as a snapshot or a point in time.
4. Click Rewind.
5. If you want to use login credentials on the target environment other than those associated with the environment user, click Provide
Privileged Credentials.
6. Click Yes to confirm.
You can use TimeFlow bookmarks as the rewind point when using the CLI. Bookmarks can be useful to:
Mark a point to rewind to, for example before starting a batch job on a VDB.
Provide a semantic point to revert back to in case the chosen rewind point turns out to be incorrect.
For a CLI example using a TimeFlow bookmark, see CLI Cookbook: Provisioning a VDB from a TimeFlow Bookmark.
dSource Hooks
Hook
Description
Pre-Sync
Post-Sync
Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations.
These operations can undo any changes made by the Pre-Sync hook.
VDB Hooks
Configure Clone: Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started. During a refresh, this hook will run before the Post-Refresh hook.
Pre-Refresh: Operations performed before a refresh.
Post-Refresh: Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run if the refresh or Pre-Refresh hook operations fail. These operations can restore cached data after the refresh completes.
Pre-Rewind: Operations performed before a rewind.
Post-Rewind: Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail. These operations can restore cached data after the rewind completes.
Pre-Snapshot: Operations performed before a snapshot.
Post-Snapshot: Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook operations. These operations can undo any changes made by the Pre-Snapshot hook.
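To make the undo pattern concrete, here is a minimal sketch (not from the original guide; the file names and the scratch directory are illustrative) of a Pre-Sync hook body that caches state and a matching Post-Sync hook body that restores it:

```shell
# A minimal sketch of the Pre-Sync / Post-Sync undo pattern described
# above. Real hooks run as separate operations on the dSource
# environment; a scratch directory stands in for application state here.
workdir=$(mktemp -d)
echo "maintenance=off" > "$workdir/app.conf"

# Pre-Sync hook body: cache the config, then put the app in maintenance mode.
cp "$workdir/app.conf" "$workdir/app.conf.presync"
echo "maintenance=on" > "$workdir/app.conf"

# Post-Sync hook body: restore the cached config regardless of sync outcome.
cp "$workdir/app.conf.presync" "$workdir/app.conf"
cat "$workdir/app.conf"   # back to maintenance=off
```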
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
You can construct hook operation lists through the Delphix Admin application or the command line interface (CLI). You can either define the
operation lists as part of the linking or provisioning process or edit them on dSources or virtual datasets that already exist.
delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 0 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 0 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 0 *> ls
delphix source "pomme" update operations postRefresh 0 *> commit

delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 1 *> back
delphix source "pomme" update operations postRefresh *> unset 1
delphix source "pomme" update operations postRefresh *> commit
Shell Operations
RunCommand Operation
The RunCommand operation runs a shell command on a Unix environment using whatever binary is available at /bin/sh. The environment user
runs this shell command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the shell command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Examples of RunCommand Operations
You can input the full command contents into the RunCommand operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
if test -d "$remove_dir"; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
If a script already exists on the remote environment and is executable by the environment user, the RunCommand operation can execute this
script directly.
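As a sketch of that pattern, the operation body can simply be the path to the script. Here a scratch script stands in for one that would already exist on the remote environment; the name and location are illustrative:

```shell
# Sketch only: create a stand-in for a script that would already exist
# on the remote environment, then invoke it the way a RunCommand
# operation body would (a single path, exiting 0 on success).
script_dir=$(mktemp -d)
cat > "$script_dir/cleanup.sh" <<'EOF'
#!/bin/sh
echo "cleanup ran"
exit 0
EOF
chmod +x "$script_dir/cleanup.sh"
"$script_dir/cleanup.sh"
```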
RunBash Operation
The RunBash operation runs a Bash command on a Unix environment using a bash binary provided by the Delphix Engine. The environment user
runs this Bash command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the Bash command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of RunBash Operations
You can input the full command contents into the RunBash operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
# Bashisms are safe here!
if [[ -d "$remove_dir" ]]; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
Shell Operation Tips
Using nohup
You can use the nohup command and process backgrounding in order to "detach" a process from the Delphix Engine. However, if
you use nohup and process backgrounding, you MUST redirect stdout and stderr.
Unless you explicitly tell the shell to redirect stdout and stderr in your command or script, the Delphix Engine will keep its connection to the
remote environment open while the process is writing to either stdout or stderr. Redirection ensures that the Delphix Engine will see no more
output and thus not block waiting for the process to finish.
For example, imagine having your RunCommand operation background a long-running Python process. Below are the bad and good ways to do
this.
Bad Examples
nohup python file.py & # no redirection
nohup python file.py 2>&1 & # stdout is not redirected
nohup python file.py 1>/dev/null & # stderr is not redirected
nohup python file.py 2>/dev/null & # stdout is not redirected
Good Examples
nohup python file.py 1>/dev/null 2>&1 & # both stdout and stderr redirected, Delphix Engine will not block
Other Operations
RunExpect Operation
The RunExpect operation executes an Expect script on a Unix environment. The Expect utility provides a scripting language that makes it easy to
automate interactions with programs which normally can only be used interactively, such as ssh. The Delphix Engine includes a
platform-independent implementation of a subset of the full Expect functionality.
The script is run on the remote environment as the environment user from their home directory. The Delphix Engine captures and logs all output
of the script. If the operation fails, the output is displayed in the Delphix Admin application and CLI to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunExpect Operation
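A minimal sketch of an Expect script follows. The program, prompt text, and response are all illustrative, and which Expect commands are available depends on the Delphix Engine's Expect subset; this is not a verbatim example from the product documentation.

```
# Illustrative only: answer a confirmation prompt from an interactive
# program. The tool path and prompt text are hypothetical.
spawn /opt/app/maintenance_tool --prepare
expect "Proceed? (y/n)"
send "y\r"
expect eof
```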
Environment Variables
Description
ASE_ENVUSER
ASE_DBUSER
ASE_DATABASE
ASE_INSTANCE
ASE_PORT
Environment Variables
Description
ASE_ENVUSER
ASE_DBUSER
ASE_DATABASE
ASE_INSTANCE
ASE_PORT
Introduction to DB2
DB2 for Linux, UNIX and Windows is a database server product developed by IBM. Sometimes called DB2 LUW for brevity, it is part of the DB2
family of database products. DB2 LUW is the "Common Server" product member of the DB2 family, designed to run on most popular operating
systems. By contrast, all other DB2 products are specific to a single platform.
DB2 LUW was initially called DB2 Universal Database (UDB), but over time IBM marketing started to use the same term for other database
products, notably mainframe (z-Series) DB2. Thus the DB2 for Linux, UNIX and Windows moniker became necessary to distinguish the common
server DB2 LUW product from single-platform DB2 products.
The current DB2 LUW product runs on multiple Linux and UNIX distributions, such as Red Hat Linux, SUSE Linux, AIX, HP-UX, and Solaris, and
most Windows systems. Multiple editions are marketed for different sizes of organization and uses. The same code base is also marketed without
the DB2 name as IBM InfoSphere Warehouse edition.
The version numbers in DB2 are non-sequential with v10.1 and 10.5 being the two most recent releases. Specifics of DB2 versions and platforms
supported on Delphix are located in the DB2 Compatibility Matrix.
DB2 Authentication
Authentication is the process of validating a supplied user ID and password using a security mechanism. User and group authentication is
managed in a facility external to DB2 LUW, such as the operating system, a domain controller, or a Kerberos security system. This is different
from other database management systems (DBMSs), such as Oracle and SQL Server, where user accounts may be defined and authenticated in
the database itself, as well as in an external facility such as the operating system.
Any time a user ID and password is explicitly provided to DB2 LUW as part of an instance attachment or database connection request, DB2
attempts to authenticate that user ID and password using this external security facility. If no user ID or password is provided with the request, DB2
implicitly uses the user ID and password that were used to log in to the workstation where the request originated. More information on DB2
authentication and authorization is available via IBM documentation.
Delphix DB2 authentication
Delphix for DB2 requires that the staging and target hosts already have the necessary users and authentication systems
created/installed on them. Delphix will neither create users nor change database passwords as part of the provisioning process.
DB2 Instances
A DB2 instance is a logical database manager environment that can catalog databases and set configuration parameters. Depending on specific
needs, customers can create multiple instances on the same physical server to provide a unique database server environment for each instance.
Associated with an instance is the concept of an instance owner. This is the user that "owns" that instance and has SYSADM authority over the
instance and all databases inside that instance. SYSADM authority is the highest level of authority in DB2 and lets this user perform several
database management activities, such as upgrade, restore, and editing configurations. More information about instances can be found in the IBM
Knowledge Center.
Delphix DB2 Instances
Delphix operates at the instance level and requires that the staging and target hosts have empty instances created prior to
Delphix using them, and the instance owners added as environment users. It is important to note that Delphix dSources and VDBs are
entire instances and NOT specific databases inside an instance.
Delphix HADR
HADR replication takes place at a database level, not at the instance level. Therefore, a standby instance can have multiple databases
from multiple different primary servers/instances on it. If the instance ID on the Delphix standby is NOT the same as the instance ID on
the primary, the Delphix standby instance ID MUST have database permissions secadm and dbadm granted to it on the primary
database. These permissions, and all HADR settings, must be implemented on the primary database BEFORE you take the backup on
the primary database.
Log Transmitting
All changes that take place at the primary database server are written into log files. The individual log records within the log files are then
transmitted to the secondary database server, where the recorded changes are replayed to the local copy of the database. This procedure
ensures that the primary and the secondary database servers are in a synchronized state. Using two dedicated TCP/IP communication ports and
a heartbeat, the primary and the standby databases track where they are processing currently, the current state of replication, and whether the
standby database is up-to-date with the status of the primary database. When a log record is "closed" (still in memory, but has yet to be written to
disk on the primary), it is immediately transmitted to the HADR standby database(s). Transmission of the logs to the standbys may also be
time-delayed.
Multiple Standby
Beginning in DB2 v10.1, the HADR feature supports multiple standby databases. This enables an advanced topology where you can deploy HADR
in multiple standby mode, with up to three standby databases for a single primary. One of the databases is designated as the principal HADR
standby database; the others are termed auxiliary HADR standby databases. As with the standard HADR deployment, both types of HADR
standbys are synchronized with the HADR primary database through a direct TCP/IP connection. Furthermore, both types support the reads on
standby feature and can be configured for time-delayed log replay. It is possible to issue a forced or non-forced takeover on any standby,
including the Delphix auxiliary standby. However, you should never use the Delphix auxiliary standby as a primary, because this will impact
Delphix performance.
Version           Processor Family
6.5, 6.6          x86_64
AIX 7.1           Power
TCP/UDP 111: Remote Procedure Call (RPC) port mapper used for NFS mounts.
Note: RPC calls in NFS are used to establish additional ports, in the high range 32768-65535, for supporting services.
Some firewalls interpret RPC traffic and open these ports automatically; some do not.
TCP 1110: NFS Server daemon status and NFS server daemon keep-alive (client info).
TCP/UDP 2049: NFS server daemon.
TCP 4045: NFS lock daemon.
UDP 33434-33464: Traceroute from standby and target hosts to the Delphix Engine (optional).
UDP/TCP 32768-65535: NFS mountd and status services, which run on a random high port. Necessary when a firewall does not dynamically
open ports.
TCP 873: rsync.
TCP xxxx: DSP connections used for monitoring and script management. Typically DSP runs on port 8415.
TCP 22: SSH connections.
The HADR ports set for HADR_LOCAL_SVC and HADR_REMOTE_SVC on the DB2 Master and Standby hosts. The specific ports used are at the
customer's discretion and need to be specified during the linking process. It is highly recommended that these ports also be defined in the
/etc/services file to ensure that they are only used by DB2 for the specified databases.
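For example, the /etc/services reservation might look like the fragment below. The service names, port numbers, and database name are purely illustrative; use whatever ports you selected for HADR_LOCAL_SVC and HADR_REMOTE_SVC.

```
# Hypothetical /etc/services entries reserving the HADR ports
db2_hadr_mydb_ls    51012/tcp    # HADR_LOCAL_SVC for database MYDB
db2_hadr_mydb_rs    51013/tcp    # HADR_REMOTE_SVC for database MYDB
```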
TCP 25: SMTP.
TCP/UDP 53: DNS.
UDP 123: NTP.
UDP 162: SNMP traps.
HTTPS 443: SSL connections from the Delphix Engine to the Delphix Support upload server.
TCP/UDP 636: Secure LDAP (LDAPS).
TCP 8415: Delphix Session Protocol (DSP) connections.
TCP 50001: Connections to source and target environments for network performance tests via the Delphix command line interface
(CLI). See Network Performance Tool.
TCP 22: SSH connections to the Delphix Engine.
TCP 80: HTTP connections to the Delphix GUI.
UDP 161: SNMP.
TCP 443: HTTPS connections to the Delphix GUI.
TCP 8415: Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for
Oracle, V2P, and the Delphix Connector.
TCP 50001: Connections from source and target environments for network performance tests via the Delphix CLI. See Network
Performance Tool.
TCP/UDP 32768-65535: Required for NFS mountd and status services from the target environment only if the firewall between Delphix and the
target environment does not dynamically open ports.
Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens
ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd
configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
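As a quick check, you can scan an sshd_config for these entries before adding the environment. The sketch below uses a sample file; in practice, point the grep at the environment's /etc/ssh/sshd_config.

```shell
# Sketch: count the disallowed keep-alive entries in an sshd_config.
# A sample file is used here; in practice inspect /etc/ssh/sshd_config.
cat > sample_sshd_config <<'EOF'
Port 22
ClientAliveInterval 300
ClientAliveCountMax 3
EOF
grep -cE '^[[:space:]]*ClientAlive(Interval|CountMax)' sample_sshd_config  # prints 2 for this sample
```

A nonzero count means the environment's sshd configuration needs to be cleaned up before the Delphix Engine can reliably hold long-running connections to it.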
Related Links
DB2 Compatibility Matrix
Setting Up DB2 Environments: An Overview
Delphix uses a standby server model along with DB2's High Availability Disaster Recovery (HADR) feature to ingest data and stay in sync with the
source database. The standby server is then snapshotted by the Delphix Engine, and the snapshots can be provisioned out to one or more target
servers.
The snapshot and provision process occurs at the instance level: all databases that exist on the standby server will be provisioned out
to the target machines. Similarly, actions such as bookmark, rewind, and refresh will simultaneously apply to all the databases in the
instance.
Block Diagram of Linking Architecture Between DB2 Environments and the Delphix Engine
The linking process converts an empty DB2 instance on the standby server into an HADR standby for the primary database. In order to do this, the
staging instance must have access to a recent backup copy of all the databases that you wish to add to the dSource. Once the restoration process
is complete, Delphix will begin issuing the HADR standby commands on each database and ensure that the health of the HADR connection stays
within the acceptable threshold values you set.
database level. If you have only one database to be replicated, it is a direct 1-1 mapping between instances and databases. However, it may be
advantageous to collect multiple databases needed for an application into a single Delphix staging instance, which can then be used to snapshot
and provision all the databases simultaneously. It is also possible to set up databases from multiple primary hosts to use the same
standby/dSource.
The choice of databases on the staging server should also take into account the expected network traffic that HADR will create between the
source and staging environments.
Related Links
DB2 Support and Requirements
Prerequisites
Make sure that the staging environment in question meets the requirements described in Requirements for DB2 Hosts and Databases
Procedure
1. Login to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Unix/Linux in the operating system menu.
6. Select Standalone Host
7. Enter the Host IP address.
8. Enter an optional Name for the environment.
9. Enter the SSH port.
The default value is 22.
10. Enter a Username for the environment.
For more information about the environment user requirements, see Requirements for DB2 Hosts and Databases.
11. Select a Login Type.
For Password, enter the password associated with the user in step 10.
Using Public Key Authentication
If you want to use public key encryption for logging into your environment:
a. Select Public Key for the Login Type.
b. Click View Public Key.
c. Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file
does not exist, you will need to create it.
i. Run chmod 600 ~/.ssh/authorized_keys to restrict the file to read and write by your user only.
ii. Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained
in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
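The file and permission setup above can be sketched as follows. A scratch directory and a placeholder key stand in for the environment user's home directory and the engine's real public key:

```shell
# Consolidated sketch of the authorized_keys setup. On a real
# environment, home_dir is the environment user's home and the key
# text is the public key copied from the Delphix Engine.
home_dir=$(mktemp -d)
mkdir -p "$home_dir/.ssh"
echo "ssh-rsa AAAAexamplekey delphix-engine" >> "$home_dir/.ssh/authorized_keys"
chmod 600 "$home_dir/.ssh/authorized_keys"   # readable/writable by the user only
chmod 755 "$home_dir"                        # home writable only by the user
```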
12. For Password Login, click Verify Credentials to test the username and password.
13. Enter a Toolkit Path.
For more information about the toolkit directory requirements, see Requirements for DB2 Hosts and Databases.
14. Click OK.
As the new environment is added, you will see two jobs running in the Delphix Admin Job History, one to Create and Discover an
environment, and another to Create an environment. When the jobs are complete, you will see the new environment added to the list in
the Environments tab. If you do not see it, click the Refresh icon in your browser.
Post-Requisites
To view information about an environment after you have created it:
1. Click Manage.
2. Select Environments.
3. Select the environment name.
Related Links
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials or as the owner of an environment.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of an environment to view its attributes.
5. Under Attributes, click the Pencil icon to edit an attribute.
6. Click the Check icon to save your edits.
Description
Environment
Users
The users for that environment. These are the users who have permission to ssh into an environment, or access the
environment through the Delphix Connector. See the Requirements topics for specific data platforms for more information on
the environment user requirements.
Host
Address
Notes
View Instances
1. Login to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of the environment you want to view.
5. Click on Databases to see a list of all DB2 instances found in the environment.
Prerequisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Click the name of an environment to open the environment information screen.
5. Under Basic Information, click the green Plus icon to add a user.
6. Enter the Username and Password for the OS user in that environment.
Prerequisites
You cannot delete an environment that has any dependencies, such as dSources or virtual databases (VDBs). These must be deleted before you
can delete the environment.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, select the environment you want to delete.
5. Click the Trash icon.
6. Click Yes to confirm.
Procedure
1. Login to the Delphix Admin application with Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. In the Environments panel, click on the name of the environment you want to refresh.
5. Click the Refresh icon.
To refresh all environments, click the Refresh icon next to Environments.
Data Ingest
DB2 for Delphix ingests data by using a Standby instance of DB2 to create the necessary data files that represent the data. This Standby instance
is converted to a Delphix dSource by going through the linking process during which it is given access to a full backup of each of the databases
that are to be added to the dSource. Delphix then runs an automated redirected restore process on the backup file in order to convert the data
files to a format and structure that is compatible with Delphix. All of the data files and log files from this backup are stored on a single NFS mount
created by the Delphix Engine, which allows it to snapshot the dSource as necessary.
A single standby instance can contain data from multiple source databases.
Data Synchronization
During the linking process, you can optionally set up an HADR connection between the original source databases and their copies on the Standby
instance. By doing this, the Standby instance will always keep its databases in sync with the source databases, using HADR for log shipping. It is
important to note that a single Standby instance (dSource) can contain multiple databases from multiple different servers and instances, as long as
each database has a unique name.
Delphix HADR standby maintains a different structure from the production server and should never be used as a DR failover from
production.
Related Topics
Requirements for DB2 Hosts and Databases
Prerequisites
Be sure that the source and staging instances meet the host requirements and the databases meet the container requirements
described in Requirements for DB2 Hosts and Databases.
Delphix uses the DB2 instance owner account on the dSource for many things, including verifying that data in the databases is
accessible. If the instance name on the dSource is different from the source instance name then you must explicitly grant DBADM and
SECADM to the dSource instance owner on the source instance using the following steps:
1. Connect to the source databases as the source instance owner.
a. connect to <DB_NAME> user <INSTANCE_OWNER>
2. Issue database grant command
a. grant DBADM, SECADM on database to user <DSOURCE_INSTANCE_OWNER>
3. Repeat step 2 for every database to be included in the dSource, on the corresponding source database.
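The steps above can be sketched as the following DB2 command-line session. The database name (MYDB), source instance owner (db2inst1), and dSource instance owner (dlpxinst) are all illustrative:

```
# Run as the source instance owner; names are hypothetical.
db2 connect to MYDB user db2inst1
db2 "grant DBADM, SECADM on database to user dlpxinst"
db2 connect reset
```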
Determine if your dSource will be a non-HADR instance, an HADR single standby instance, or an HADR multiple standby instance. Non-HADR
dSources can only be updated via a full dSource resync from a newer backup file
Non-HADR Database
Ensure that the source database has the necessary user permissions for the provisioned VDBs as
described in Database Permissions for Provisioned DB2 VDBs
This assumes a single standby database HADR setup already exists. The existing standby will be referred to as the main standby.
The new Delphix standby will be referred to as the auxiliary standby.
1. The following database configuration settings must be set on the primary database:
a. update db cfg for <DB_NAME> using HADR_SYNCMODE <SYNC MODE> immediate
Set whichever sync mode you wish to use on your main standby.
b. update db cfg for <DB_NAME> using HADR_TARGET_LIST
"<MAIN_STANDBY_IP:MAIN_STANDBY_PORT|AUXILIARY_STANDBY_IP:AUXILIARY_STANDBY_PORT>"
immediate
i. You may have up to two auxiliary standbys defined, separated by a '|'; one of which must be the Delphix
dSource.
2. stop hadr on db <DB_NAME>
3. start hadr on db <DB_NAME> as primary by force
4. Take a full online backup as defined in the "Backup Source Database" section below. While this backup is running, you may
continue with step 5.
5. The following database configuration settings must be set on the existing main standby database:
a. update db cfg for <DB_NAME> using HADR_SYNCMODE <same mode as defined in 1.a above> It must be the
same value used for the primary database.
b. update db cfg for <DB_NAME> using HADR_TARGET_LIST
"<PRIMARY_IP:PRIMARY_PORT|MAIN_STANDBY_IP:MAIN_STANDBY_PORT>"
6. stop hadr on db <DB_NAME>
7. start hadr on db <DB_NAME> as standby
8. Record the following information, as it must be entered on the Delphix Engine while creating the dSource (the auxiliary standby
database):
a. HADR Primary hostname
b. HADR Primary SVC
c. HADR Standby SVC (auxiliary standby port)
d. HADR_TARGET_LIST <PRIMARY_IP:PRIMARY_PORT|MAIN_STANDBY_IP:MAIN_STANDBY_PORT>
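The primary-side commands above can be consolidated into a session like the one below. The database name, sync mode, and all host:port values are illustrative only; substitute your own standby addresses and ports:

```
# Consolidated sketch of the primary-side HADR settings. MYDB,
# NEARSYNC, and the host:port values are hypothetical.
db2 "update db cfg for MYDB using HADR_SYNCMODE NEARSYNC immediate"
db2 "update db cfg for MYDB using HADR_TARGET_LIST \"10.0.0.2:51012|10.0.0.3:51013\" immediate"
db2 stop hadr on db MYDB
db2 start hadr on db MYDB as primary by force
```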
In order to complete the linking process, the Standby dSource must have access to a full backup of the source DB2 databases on disk. This
should be a compressed online DB2 backup that is accessible to the dSource instance owner on disk. Delphix is currently not set up to
accept DB2 backups taken using third-party sources such as NetBackup or TSM. Both HADR and Non-HADR backups must also include logs.
Example backup command: db2 backup database <DB_NAME> online compress include logs
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials or as the owner of the database from which you want to
provision the dSource.
2. Click Manage.
3. Select My Databases.
4. Select the green + icon to Add dSource.
Alternatively, on the Databases tab of the Environment Management screen, you can click Add dSource next to an instance name to start
the dSource creation process.
5. In the Add dSource wizard, select the source instance on the left of the screen.
7. Click the Plus icon once for each database you wish to restore into this instance. Each time you click it, the following box will be added to the
screen:
8. The database name is mandatory and must be unique for a given instance. This must be the name the database had on the instance it was
backed up from, and it will keep the same name in this instance.
9. Enter the complete Backup Path where the database backup file resides. If no value is entered, the default value used is the instance
home directory. If there are multiple backup files for a database on the backup path, the most current one will be used.
10. Enter the Log Archive Method1 you wish to use for the database. If no value is entered, the default value used is
DISK:/mountpoint/dbname/arch.
11. If the dSource is to use HADR, enter the following fields. If it will not use HADR, skip ahead to step 13. For more information about
HADR, see Linking a dSource from a DB2 Database: An Overview.
a. Enter a fully qualified HADR Primary Hostname. This is a required field for HADR and must match the value set for
HADR_LOCAL_HOST on the master.
b. Enter the port or /etc/services name for the HADR Primary SVC. This is a required field for HADR and uses the value set for
HADR_LOCAL_SVC on the master.
c. Enter the port or /etc/services name for the HADR Standby SVC. This is a required field for HADR and uses the value set for
HADR_REMOTE_SVC on the master.
12. Enter the value for Max Heartbeat Misses. If any of the HADR connections exceed this number of missed heartbeats, the Delphix Engine
will throw a fault. This value will be used for all databases defined within this instance.
13. Click Next.
14. Select a dSource Name and Database Group for the dSource.
15. Click Next.
Adding a dSource to a database group lets you set Delphix Domain user permissions for that database and its objects, such as
snapshots. For more information, see the topics under Users, Permissions, and Policies.
16. Set the Staging Environment to be the same as the dSource host.
17. Select the Staging Environment User to be the same as the instance owner of the dSource instance.
If you need to change or add an environment user for the dSource instance, see Managing DB2 Users and Instance Owners
.
18. Set the Staging Mount Base to be <DSOURCE_INSTANCE_HOME_DIR>/<DSOURCE_INSTANCE_NAME>. In the example below, the
instance name is db2inst1 while the instance home is /home/db2inst1.
19. Set the desired Snapsync Policy for the dSource. For more information on policies see Advanced Data Management Settings for DB2
dSources.
20. Click Next.
21. Specify any desired pre- and post-scripts. For details on pre- and post-scripts, refer to Customizing DB2 Management with Hook
Operations.
22. Click Next.
23. Review the dSource Configuration and Data Management information.
24. Click Finish.
The Delphix Engine will initiate two jobs to create the dSource, DB_Link and DB_Sync. You can monitor these jobs by clicking Active Jobs in
the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the database icon will change to a
dSource icon on the Environments > Databases screen, and the dSource will appear in the list of My Databases under its assigned group.
Related Links
Requirements for DB2 Hosts and Databases
Linking a dSource from a DB2 Database: An Overview
Users, Permissions, and Policies
Managing DB2 Users and Instance Owners
Advanced Data Management Settings for DB2 dSources
Customizing DB2 Management with Hook Operations
Prerequisites
You cannot delete a dSource that has dependent virtual databases (VDBs). Before deleting a dSource, make sure that you have deleted all
dependent VDBs as described in Deleting a VDB.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the Databases panel, select the dSource you want to delete.
6. Click the Trash Can icon.
7. Click Yes to confirm.
Deleting a dSource will also delete all snapshots, logs, and descendant VDB Refresh policies for that database. You cannot
undo the deletion.
Procedure
1. Click Manage.
2. Select Databases.
3. Click My Databases.
4. Select the dSource you want to disable.
5. On the back of the dSource card, move the slider control from Enabled to Disabled.
6. Click Yes to acknowledge the warning.
When you are ready to enable the dSource again, move the slider control from Disabled to Enabled, and the dSource will continue to function as
it did previously.
Retention Policies
Retention policies define the length of time the Delphix Engine retains snapshots within its storage. To support longer retention times, you may
need to allocate more storage to the Delphix Engine. The retention policy, in combination with the SnapSync policy, can have a significant impact
on the performance and storage consumption of the Delphix Engine.
Related Links
Customizing DB2 Management with Hook Operations
Prerequisites
You must have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You must have added a compatible target environment on the target host.
Procedure
1. Login to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Click My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB you want to provision.
6. The provisioning process is now identical to the process for provisioning standard objects.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Related Links
Setting Up DB2 Environments: An Overview
Provisioning a DB2 VDB
Database Permissions for Provisioned DB2 VDBs
Prerequisites
You will need to have linked a dSource from a staging instance, as described in Linking a DB2 dSource, or have created a VDB from
which you want to provision another VDB
You should have set up the DB2 target environment with necessary requirements as described in Requirements for DB2 Hosts and
Databases
Make sure you have the required Instance Owner permissions on the target instance and environment as described in Managing DB2
Users and Instance Owners
The method for Database Permissions for Provisioned DB2 VDBs is decided before the provisioning
You can take a new snapshot of the dSource by clicking the Camera icon on the dSource card. Once the snapshot is complete you can
provision a new VDB from it.
Procedure
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. Select a dSource.
6. Select a snapshot you wish to provision from.
7. Click Provision to open Provision VDB panel.
8. Select a target environment from the left pane.
9. Select an Installation to use from the dropdown list of available DB2 instances on that environment.
10. Set the Environment User to be the Instance Owner.
11. If the target machine has less memory than the staging instance, set the BufferPool Override value in number of pages. If this is not
set and the target machine does not have enough memory, the target server may crash. If the value is set to zero, the existing value
of that instance will not be overridden.
Related Links
Linking a DB2 dSource
Provisioning DB2 VDBs: An Overview
Database Permissions for Provisioned DB2 VDBs
Customizing DB2 Management with Hook Operations
DB2 Authentication
Authentication is the process of validating a supplied user ID and password using a security mechanism. User and group authentication is
managed in a facility external to DB2 LUW, such as the operating system, a domain controller, or a Kerberos security system. This is different
from other database management systems (DBMSs), such as Oracle and SQL Server, where user accounts may be defined and authenticated in
the database itself, as well as in an external facility such as the operating system.
Any time a user ID and password are explicitly provided to DB2 LUW as part of an instance attachment or database connection request, DB2
attempts to authenticate that user ID and password using this external security facility. If no user ID or password is provided with the request, DB2
implicitly uses the user ID and password that were used to log in to the workstation where the request originated. More information on DB2
authentication and authorization is available in the IBM documentation.
Delphix DB2 authentication
Delphix for DB2 requires that the staging and target hosts already have the necessary users and authentication systems
created/installed on them. Delphix will neither create users nor change database passwords as part of the provisioning process.
While the terminology used within the Delphix GUI refers to a VDB, the ingest, snapshot, and provisioning processes for DB2 on Delphix always
operate at the instance level. Thus, when a virtual DB2 instance is provisioned by Delphix, it contains all the DB2 databases that were in the
source instance, with the same user permissions they had on the source. This means that if the target instance name differs from the
source instance name, the target instance owner will NOT have DBADM or SECADM permissions unless they were specifically granted to that
instance owner on the source instance. The instance owner will, however, always have SYSADM permissions on all databases in the instance.
LDAP Authentication
If your DB2 instances and applications use LDAP authentication, they will work seamlessly as long as LDAP has been configured on the VDB
target instance.
OS Authentication
If your DB2 instances and applications use OS authentication, it is important to ensure that the relevant OS accounts exist on the target
machine.
Generic Accounts
If the DB2 applications use generic (non-instance-owner) accounts, they can continue using them as long as those OS
accounts exist on the host machine. It is important to note that the passwords for the same account may differ across hosts, or when hosts
use different LDAP servers (e.g., prod vs. dev LDAP servers).
Related Topics
Prerequisites
To rewind a VDB, you must have the following permissions:
Auditor permissions on the dSource associated with the VDB
Owner permissions on the VDB itself
You do NOT need owner permissions for the group that contains the VDB. A user with Delphix Admin credentials can perform a VDB Rewind on
any VDB in the system.
Procedure
1. Log in to the Delphix Admin application.
2. Under Databases, select the VDB you want to rewind.
3. Select the rewind point as a snapshot or a point in time.
4. Click Rewind.
5. If you want to use login credentials on the target environment other than those associated with the environment user, click Provide
Privileged Credentials.
6. Click Yes to confirm.
You can use TimeFlow bookmarks as the rewind point when using the CLI. Bookmarks can be useful to:
Mark where to rewind to - before starting a batch job on a VDB for example.
Provide a semantic point to revert back to in case the chosen rewind point turns out to be incorrect.
For a CLI example using a TimeFlow bookmark, see CLI Cookbook: Provisioning a VDB from a TimeFlow Bookmark.
dSource Hooks
Hook
Description
Pre-Sync
Operations performed before a sync.
Post-Sync
Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations.
These operations can undo any changes made by the Pre-Sync hook.
Virtual Dataset Hooks
Hook
Description
Configure Clone
Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started.
During a refresh, this hook will run before the Post-Refresh hook.
Pre-Refresh
Operations performed before a refresh.
Post-Refresh
Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run
if the refresh or Pre-Refresh hook operations fail.
These operations can restore cached data after the refresh completes.
Pre-Rewind
Operations performed before a rewind.
Post-Rewind
Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail.
These operations can restore cached data after the rewind completes.
Pre-Snapshot
Operations performed before a snapshot.
Post-Snapshot
Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook
operations.
These operations can undo any changes made by the Pre-Snapshot hook.
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
You can construct hook operation lists through the Delphix Admin application or the command line interface (CLI). You can either define the
operation lists as part of the linking or provisioning process or edit them on dSources or virtual datasets that already exist.
source "pomme" update operations postRefresh *> add
source "pomme" update operations postRefresh 0 *> set type=RunCommandOnSourceOperation
source "pomme" update operations postRefresh 0 *> set command="echo Refresh completed."
source "pomme" update operations postRefresh 0 *> ls
source "pomme" update operations postRefresh 0 *> commit
source "pomme" update operations postRefresh *> add
source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
source "pomme" update operations postRefresh 1 *> back
source "pomme" update operations postRefresh *> unset 1
source "pomme" update operations postRefresh *> commit
Shell Operations
RunCommand Operation
The RunCommand operation runs a shell command on a Unix environment using whatever binary is available at /bin/sh. The environment user
runs this shell command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the shell command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
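As a quick illustration of this convention (not Delphix-specific), a shell reports the exit code of the last command in `$?`:

```shell
# Exit code 0 signals success; any other code would be treated as an
# operation failure by the engine.
sh -c 'exit 0'
echo "status after success: $?"   # prints: status after success: 0

sh -c 'exit 3'
echo "status after failure: $?"   # prints: status after failure: 3
```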
Examples of RunCommand Operations
You can input the full command contents into the RunCommand operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
if test -d "$remove_dir"; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
If a script already exists on the remote environment and is executable by the environment user, the RunCommand operation can execute this
script directly.
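As a sketch (the path and script name here are hypothetical, and the stand-in script is created only so the example is self-contained), the operation body then reduces to a single invocation:

```shell
# Create a stand-in for a script that would already exist on the remote
# environment and be executable by the environment user.
script=/tmp/demo_hook.sh
cat > "$script" <<'EOF'
#!/bin/sh
echo "hook ran"
exit 0
EOF
chmod +x "$script"

# The RunCommand operation body would then simply be:
"$script"
```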
RunBash Operation
The RunBash operation runs a Bash command on a Unix environment using a bash binary provided by the Delphix Engine. The environment user
runs this Bash command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the Bash command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of RunBash Operations
You can input the full command contents into the RunBash operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
# Bashisms are safe here!
if [[ -d "$remove_dir" ]]; then
rm -rf "$remove_dir" || exit 1
fi
exit 0
Shell Operation Tips
Using nohup
You can use the nohup command and process backgrounding in order to "detach" a process from the Delphix Engine. However, if
you use nohup and process backgrounding, you MUST redirect stdout and stderr.
Unless you explicitly tell the shell to redirect stdout and stderr in your command or script, the Delphix Engine will keep its connection to the
remote environment open while the process is writing to either stdout or stderr. Redirection ensures that the Delphix Engine will see no more
output and thus not block waiting for the process to finish.
For example, imagine having your RunCommand operation background a long-running Python process. Below are the bad and good ways to do
this.
Bad Examples
nohup python file.py & # no redirection
nohup python file.py 2>&1 & # stdout is not redirected
nohup python file.py 1>/dev/null & # stderr is not redirected
nohup python file.py 2>/dev/null & # stdout is not redirected
Good Examples
nohup python file.py 1>/dev/null 2>&1 & # both stdout and stderr redirected, Delphix Engine will not block
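Putting the pieces together, a detached background job might be launched as below; `sleep` stands in for a real long-running program (such as `python file.py`), and the log path is hypothetical:

```shell
# Redirect both streams so the engine's connection is not held open,
# then background the job with nohup.
nohup sleep 1 >/tmp/demo_job.log 2>&1 &
pid=$!
echo "detached pid $pid"
```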
Other Operations
RunExpect Operation
The RunExpect operation executes an Expect script on a Unix environment. The Expect utility provides a scripting language that makes it easy to
automate interactions with programs which normally can only be used interactively, such as ssh. The Delphix Engine includes a
platform-independent implementation of a subset of the full Expect functionality.
The script is run on the remote environment as the environment user from their home directory. The Delphix Engine captures and logs all output
of the script. If the operation fails, the output is displayed in the Delphix Admin application and CLI to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunExpect Operation
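The original example is not reproduced here; as a rough sketch only (the host, prompt, and environment variable below are hypothetical, and the engine implements only a subset of Expect, so exact command support may vary), such a script follows the usual spawn/expect/send pattern:

```
# Sketch only: automate an interactive ssh login (hypothetical values).
spawn ssh user@dbhost
expect "password:"
send "$env(MY_PASSWORD)\r"
expect "$ "
send "exit\r"
```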
Environment Variable
Description
DLPX_DATA_DIRECTORY
Failure Points
Before devising a strategy, you must first have a set of requirements by which the resulting solution can be evaluated. What failures are you trying
to protect against, and what are your recovery goals in the event of failure?
Storage Failure
The Delphix Engine uses LUNs from a storage array provided through the VMware hypervisor. The storage array may have redundant disks
and/or controllers to protect against single points of failure within the array. However, the Delphix Engine can still be affected by a failure of the
entire array, the SAN path between the Delphix Engine and the array, or by a failure of the LUNs in the array that are assigned to the Delphix
Engine.
Recommendation: Replication
Site Failure
When an entire site or datacenter goes down, all servers, storage, and infrastructure are lost. This will affect not only the Delphix Engine, but any
production databases and target servers in the datacenter.
Recommendation: Replication
Administrative Error
If an administrator mistakenly deletes a VDB or takes some other irreversible action, there is no method of recovery built into the Delphix Engine.
Recommendation: Snapshots
Recovery Objectives
Once infrastructure fails, some amount of work is required to restore the Delphix Engine to an operational state. Clients won't have access to the
Delphix Engine during this time, and the point to which the system is recovered is dependent on the mechanism being used. These qualitative
aspects of recovery can be captured by the following metrics. As these metrics are often directly associated with cost, it is important to think not
just about the desired metrics, but also the minimum viable goals.
Deployment Architecture
This topic describes components of the Delphix deployment architecture.
Delphix operates in a virtual environment with several core systems working in concert, each with its own set of capabilities. Understanding this
architecture is critical in evaluating how solutions can be applied across the components, and the tradeoffs involved.
Architectural Components
This diagram illustrates Delphix's recommended best practices for deploying the Delphix Engine in a VMware environment:
This architecture is designed to isolate I/O traffic to individual LUNs while using the most commonly deployed VMware components. In this
example each VMDK file is placed in a separate VMFS volume. Each volume is exported to every node in the ESX cluster, allowing the Delphix
Engine to run on any physical host in the cluster.
Server Clustering
Clustering provides a standby server that can take over in the event of failure. A given clustering solution may or may not provide high availability
guarantees, though all provide failover capabilities, provided that an identical passive system is available.
Snapshots
Snapshots preserve a point-in-time copy of data that can be used later for rollback or to create writable copies. Creating a snapshot is typically
low cost in terms of space and time. Because they use the storage allocated to the array, snapshots restore quickly, but they do not protect
against failures of the array.
Replication
Data replication works by sending a series of updates from one system to another in order to recreate the same data remotely. This stream can
be synchronous, but due to performance considerations is typically asynchronous, where some data loss is acceptable. Replication has many of
the same benefits of backup, in that the data is transferred to a different fault domain, but has superior recovery time given that the data is
maintained within an online system. The main drawback of replication is that the data is always current - any logical data error in the primary
system is also propagated to the remote target. The impact of such a failure is less when replication is combined with snapshots, as is often the
case with continuous data protection (CDP) solutions.
Backup
Like snapshots, backup technologies preserve a point-in-time copy of a storage dataset, but then move that copy to offline storage. Depending on
the system, both full and incremental backups may be supported, and the backup images may or may not be consistent. Backup has the
advantage that the data itself is stored outside the original fault domain, but comes at high cost in terms of complexity, additional infrastructure,
and recovery time.
Feature Capabilities
Based on these failure points and recovery features, you can use the following table to map requirements to architectural components: VMware
(V), Delphix (D), or storage (S). This can drive implementation based on infrastructure capabilities and recovery objectives.
Fault Recovery Features (V = VMware, S = storage, D = Delphix)
Server Failure: VSD
Storage Failure: VSD, VS
Site Failure: VSD, VS
Administrative Error: VS, VS
Time
Clustering: Zero. All changes committed to disk are automatically propagated to the new server. Any pending changes in memory are lost.
Replication: Near zero. Most solutions offer scheduled replication, but many can offer continuous replication with near-zero data loss.
Snapshots: Snapshot period (for example, one hour). Given their relatively low cost, snapshots tend to be taken at a higher frequency than a traditional backup schedule.
Backup: Backup period (for example, one day). Backup policies can be configured in a variety of ways, but even with incremental backups, most deployments operate no more frequently than once a day because of the cost of full backups, and the impact of incremental backups on recovery time.
Time
Clustering: Near zero. VM clustering with the Delphix Engine provides near zero downtime in the event of failure, but clients may be briefly paused or interrupted.
Replication: 15 minutes. The target side environment is kept in hot standby mode, so it is relatively quick to switch over to the target environment. Depending on the scope of the failure, however, some configuration information may need to be changed on the target side prior to enabling operation.
Snapshots: 15 minutes. The Delphix Engine can be rolled back to a previous state. Changes made to systems external to the Delphix Engine (for example, deleting a VDB) can cause inconsistencies after rollback.
Backup: Hours or days. Restoring a full backup can be very time consuming. In addition to having to read, transfer, and write all of the data, the same process will need to be run for each incremental backup to reach the objective point.
Granularity
Clustering: None.
Replication: None. Only the nearest replicated state can be recovered, unless combined with snapshots.
Snapshots
Backup
Clustering
VMware vSphere high availability provides the ability to have a VM configuration shared between multiple physical ESX servers. Once the
storage has been configured on all physical servers, any server can run the Delphix Engine VM. This allows ESX clusters to survive physical
server failure. In the event of failure, the VM is started on a different server, and appears to clients as an unexpected reboot with non-zero but
minimal downtime. Depending on the length of the outage, this may cause a short pause in I/O and database activity, but longer outages can
trigger timeouts at the protocol and database layers that result in I/O and query errors. Such long outages are unlikely to occur in a properly
configured environment.
Automatic detection of failure in an HA environment does not work in all circumstances, and there are cases where the host, storage, or network
can hang such that clients are deprived access, but the systems continue to appear functional. In these cases, a manual failover of the systems
may be required.
When configuring a cluster, it is important to provide standby infrastructure with equivalent resources and performance characteristics.
Asymmetric performance capabilities can lead to poor performance in the event of a failover. In the worst case of an over-provisioned server, this
can cause widespread workload failure and inability to meet performance SLAs.
Snapshots
VMware provides storage-agnostic snapshots that are managed through the VMware Snapshot Manager. Use of VMware snapshots can,
however, cause debilitating performance problems for write-heavy workloads due to the need to manage snapshot redo-log metadata. In order to
provide an alternative snapshot implementation, while retaining the existing management infrastructure, VMware has created an API to allow
storage vendors to supply their own snapshot implementation. This is only supported in ESX 5.1. Furthermore, the array must support the
vStorage APIs. Consult the VMware documentation for supported storage solutions and the performance and management implications.
Storage-based snapshots, by virtue of being implemented natively in the storage array, typically do not suffer from such performance problems,
and are preferred over VMware snapshots when available. When managing storage-based snapshots, it is critical that all LUNs backing a single
VM be part of the same consistency group. Consistency groups provide write order consistency across multiple LUNs and allow snapshots to be
taken at the same point in time across the LUNs. This must include all VM configuration, system VMDKs, and VMDKs that hold the dSources and
VDBs. Each storage vendor presents consistency groups in a different fashion; consult your storage vendor documentation for information on how
to configure and manage snapshots across multiple LUNs.
If a snapshot recovery becomes necessary, ensure that the Delphix Engine VM is powered off for the duration of the snapshot
recovery. Failure to do so can lead to filesystem corruption, because blocks would be changing underneath a running system.
Replication
Site Recovery Manager (SRM) is a VMware product that provides replication and failover of virtual machines within a vSphere environment. It is
primarily an orchestration framework, with the actual data replication performed by a native VMware implementation, or by the storage array
through a storage replication adapter (SRA). A list of supported SRAs can be found in the VMware documentation. There is some performance
overhead in the native solution, but not of the same magnitude as the VMware snapshot impact. SRAs provide better performance, but require
that the same storage vendor be used as both source and target, and require resynchronization when migrating between storage vendors.
Storage-based replication can also be used in the absence of SRM, though this will require manual coordination when re-configuring and starting
up VMs after failover. The VM configuration, as well as the storage configuration within ESX, will have to be recreated using the replicated
storage.
The Delphix Engine also provides native replication within Delphix. This has the following benefits:
The target system is online and active
VDBs can be provisioned on the target from replicated objects
A subset of objects can be replicated
On failover, the objects are started in a disabled state. This allows configuration to be adjusted to reflect the target environment prior to
triggering policy-driven actions.
Multiple sources can be replicated to a single target
Note that the Delphix Engine currently only replicates data objects (dSources and VDBs) and environments (source and target services). It does
not replicate system configuration, such as users and policies. This provides more flexibility when mapping between disparate environments, but
requires additional work when instantiating an identical copy of a system after failover.
Backup
There is a large ecosystem of storage and VM-based backup tools, each with its own particular advantages and limitations. VMware provides
Data Protector, but there are size limitations (linked to a maximum of 2TB of deduped data) that make it impractical for most Delphix Engine
deployments. Most third-party backup products, such as Symantec NetBackup, EMC Networker, and IBM Tivoli Storage Manager, have solutions
designed specifically for backup of virtual machines. Because the Delphix Engine is packaged as an appliance, it is not possible to install
third-party backup agents. However, any existing solution that can back up virtual machines without the need for an agent on the system should be
applicable to Delphix as well. Check with your preferred backup vendor to understand what capabilities exist.
Some storage vendors also provide native backup of LUNs. Backup at the storage layer reduces overhead by avoiding data movement across the
network, but loses some flexibility by not operating within the VMware infrastructure. For example, recreating the VM storage configuration from
restored LUNs is a manual process when using storage based recovery.
Replication
These topics describe concepts and procedures for replicating data from one Delphix Engine to another.
Replication Overview
Replication Use Cases
Replication User Interface
Configuring Replication
Enabling Replicated Objects
Replicas and Failover
Failing Over a Replica
Updating Replication User Credentials from Previous Versions
Provisioning from Replicated Data Sources or VDBs
Replication Overview
Table of Contents
Replication Overview
Delphix allows data objects to be replicated between Delphix Engines. These engines must be running identical Delphix versions, but otherwise
they can be asymmetric in terms of engine configuration. In the event of a failure that destroys the source engine, you can bring up the target
engine in a state matching that of the source. In addition, you can provision VDBs from replicated objects, allowing for geographical distribution of
data and remote provisioning.
Replication can be run ad hoc, but it is typically run according to a predefined schedule. After the initial update, each subsequent update sends
only the changes incurred since the previous update. Replication does not provide synchronous semantics, which would otherwise guarantee that
all data is preserved on the target engine. When there is a failover to a replication target, some data is lost, equivalent to the last time a replication
update was sent.
Replication is not generally suited for high-availability configurations where rapid failover (and failback) is a requirement. Failing over a replication
target requires a non-trivial amount of time and is a one-way operation; to fail back requires replicating all data back to the original source. For
cases where high availability is necessary, it is best to leverage features of the underlying hypervisor or storage platform. See the topics under
Backup and Recovery Strategies for the Delphix Engine for more information on how to evaluate the use of Delphix Engine replication for your
data recovery requirements.
Replication Features
As virtual appliances, it is possible to back up, restore, replicate, and migrate data objects between Delphix Engines using features of VMware
and the underlying storage infrastructure. Data objects include groups, dSources, VDBs, Jet Stream data templates and data containers, and
associated dependencies. In addition to the replication capabilities provided by this infrastructure, native Delphix Engine replication provides
further capabilities, such as the ability to replicate a subset of objects, replicate multiple sources to a single target, and provision VDBs from
replicated dSources and VDBs without affecting ongoing updates. The topics under Backup and Recovery Strategies for the Delphix Engine
provide more information on how to evaluate features of the Delphix Engine in relation to your backup and recovery requirements.
Replication is configured on the source Delphix Engine and copies a subset of dSources and VDBs to a target Delphix Engine. It then sends
incremental updates manually or according to a schedule. For more information on configuring replication, see Configuring Replication.
You can use replicated dSources and VDBs to provision new VDBs on the target side. You can refresh these VDBs to data sent as part of an
incremental replication update, as long as you do not destroy the parent object on the replication source. For more information, see Provisioning
from Replicated Data Sources or VDBs.
During replication, replicated dSources and VDBs are maintained in an alternate replica and are not active on the target side. In the event of a
disaster, a failover operation can break the replication relationship. For more information on how to activate replicated objects, see Replicas and
Failover.
Replication Details
When you select objects for replication, the engine will automatically include any dependencies, including parent objects, such as groups, and
data dependencies such as VDB sources. This means that replicating a VDB will automatically include its group, the parent dSource, and the
group of the dSource, as well as any environments associated with those databases. When replicating an entire engine, all environments will be
included. When replicating a database or group, only those environments with the replicated databases are included.
During replication, the Delphix Engine will negotiate an SSL connection with its server peer to use SSL_RSA_WITH_RC4_128_MD5 as the cipher
suite, and TLSv1 as the protocol.
Only database objects and their dependencies are copied as part of a backup or replication operation, including:
dSources
VDBs
Groups
Jet Stream Data Templates and Data Containers
Environments
Environment configuration (users, database instances, and installations)
The following objects are not copied as part of a backup or replication operation:
Users and roles
Policies
VDB (init.ora) configuration templates
Events and faults
Job history
System services settings, such as SMTP
After failover, these settings must be recreated on the target.
Resumable Replication
Resumable replication enhances the current replication feature by allowing you to restart large, time-consuming initial replication or incremental
updates from an intermediate point. A single replication instance can fail for a number of environmental and internal reasons. Previously, when
you restarted a failed replication instance, replication required a full resend of all data transmitted prior to the failure. With resumable replication,
no data is retransmitted. Replication is resumable across machine reboot, stack restart, and network partitions.
For example, suppose a replication profile has already been configured from a source to a target. A large, full send begins between the two that is
expected to take weeks to complete. Halfway through, a power outage at the datacenter that houses the source causes the source machine to go
down and only come back up after a few hours. On startup, the source will detect a replication was ongoing, automatically re-contact the target,
and resume the replication where it left off. In the user interface (UI) on the source, the same replication send job will appear as active and
continue to update its progress. However, in the UI of the target, a new replication receive job will appear but will track its progress as a
percentage of the entire replication.
In 4.1 and earlier releases, the replication component would always clean up after failed jobs to ensure that the Delphix Engine was kept in a
consistent state and that no storage was wasted on unused data. With the addition of resumability, the target and source can choose to retain
partial replication state following a failure to allow future replications to complete from that intermediate point. In the current release, the target and
source will only choose to retain partial replication state following failures that leave them out of network contact with each other: for
example, a source restart, a target restart, or a network partition. Once network contact is re-established, the ongoing replication will be automatically
detected and resumed. The resumable replication feature is fully automated and does not require or allow any user intervention.
Replication will not resume after failures that leave the source and target connected. For example, if a storage failure on the target, such as
an out-of-space error, causes a replication to fail, the source and target remain connected. As a result, the engine will conservatively discard all MDS and
ZFS data associated with the failed replication. Replication will, however, still resume after a source reboot, a target reboot, or a
network partition.
Related Topics
Backup and Recovery Strategies for the Delphix Engine
Replication User Interface
Configuring Replication
Provisioning from Replicated Data Sources or VDBs
Replicas and Failover
Disaster Recovery
Replication is traditionally used to provide recovery in the event of disaster, where a datacenter or site is completely destroyed. Delphix replication
may not be the only recovery solution in this scenario; consult the Backup and Recovery Strategies for the Delphix Engine topic to determine
if it meets your requirements.
In a disaster recovery scenario, the target is kept in a passive state until the source system is lost. At this point, a failover is performed that breaks
subsequent replication updates and activates objects so that they can be managed on the target side.
You can reconfigure environments on the target prior to failover if the infrastructure uses a different network topology or set of systems. Whether
or not this is required depends on the nature of the failure at the primary site. If only the Delphix Engine is affected, and all of the source
databases and target environments are unaffected, then the target can enable dSources and VDBs and reconnect to the original systems. If, on
the other hand, the failure also destroyed the source and target systems, then those environments will have to be adjusted to point to the new
systems on the target side. If there is not a 1:1 mapping, then you can migrate the VDBs to new systems on the target, and you can detach
dSources and attach them to the standby system in the target environment.
Follow the best practices below to simplify failover and meet performance expectations in the event of a disaster:
The target environment should be as close to identical to the source as possible when it comes to available resources
Target hosts and systems should exist at the target that match those at the source
The Delphix Engine should be provisioned with identical resources
The network and storage topologies should be the same
A 1:1 relationship between source and targets should be maintained
The target should remain passive and not be actively used for other workloads
Configuration of non-replicated objects, such as policies and users, should be retrieved via the command line interface (CLI) and saved
so that they can be recreated after failover.
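For example, non-replicated configuration can be listed from a CLI session on the source and saved for later re-creation on the target. The sketch below is illustrative only: the host name is a placeholder, and the exact context names should be verified against the CLI Cookbook for your version.

```
ssh delphix_admin@source-engine
cd user
list
cd /policy
list
```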
Because there is no failover, this technology can support more complex topologies such as 1-to-many and many-to-1. Chained replication
(replicating from Site A -> Site B -> Site C) is not supported.
For geographical distribution, follow these best practices:
Because each replication stream induces load on the source system:
Minimize the number of simultaneous replication updates
If possible, avoid heavy VDB workloads on the source
Provision only from sources that are effectively permanent. Otherwise, remote VDBs cannot be refreshed once the source is deleted.
Provision additional storage capacity on the target
Remotely provisioned VDBs can consume shared storage on the target even when the parent is deleted on the source
Migration
You can use replication to perform one-time migration of resources from one Delphix Engine to another. While the hypervisor provides tools to
move virtual appliances between physical systems, there are times when migration is necessary, such as:
Migrating between different physical storage
Consolidating or distributing workloads across Delphix Engines
In these cases, replication can be used to copy a subset of objects across asymmetric topologies.
Configuration Options
Configuration options for the selected replication profile. These include:
Description: Free text description of the profile
Target Engine: The Delphix Engine on the receiving end of this replication pair
Automatic Replication: If enabled, shows the frequency and time that regular replication will be run
Traffic options: Summarizes the traffic options with which this profile has been configured
Object Selection Tree
Shows all of the objects, such as groups, dSources, VDBs, and Jet Stream data layouts, that you have selected for replication in this
replication profile. If you select Entire Delphix Engine, all objects on the engine will be replicated, and thus the tree is collapsed.
Replicate Now Button
Begins the replication process
Delete Button
Allows you to delete the current profile
8. Under Traffic Options, select whether you want to Encrypt traffic or Limit bandwidth during replication updates.
9. In the right-hand column, under Objects Being Replicated, click the boxes next to the objects you want to replicate.
Selected Objects
Some selected objects may have dependencies on other objects that will be pulled into replication because they share data. For
more details, see what's copied. Objects that will be replicated are confirmed with a blue chain link icon.
Note that this is not guaranteed to be the full set of dependent objects, but rather is a best guess. The full set
of objects and their dependents will be calculated at the time of replication.
When selecting objects, you can select the entire server (Entire Delphix Engine) or a set of groups, dSources, VDBs, and Jet
Stream data layouts.
When replicating a group, all dSources and VDBs currently in the group, or added to the group at a later time, will be included.
If you select a Jet Stream data template, all data containers created from that template will be included. Likewise, if you select
a data container, its parent data template will be included.
If you select the entire server, all groups and Jet Stream objects will be included.
Regardless of whether you select a VDB individually or as part of a group, the parent dSource or VDB (and any parents in its
lineage) are automatically included. This is required because VDBs share data with their parent object. In addition, any
environments containing database instances used as part of a replicated dSource or VDB are included as well.
When replicating individual VDBs, only those database instances and repositories required to represent the replicated VDBs
are included. Other database instances that may be part of the environment, such as those for other VDBs, are not included.
10. Click Create Profile to submit the new profile. This saves the replication profile details. If you leave the Create page prior to submitting
the profile, the draft replication profile will be discarded.
Status Box
Similar to the Replication Profile status box, this shows the most up-to-date status information for the replica on the target
Replicated Environments
Replicating a dSource or VDB will automatically replicate any environments associated with those objects. For more information, see
Replication Overview.
8. Under Traffic Options, select whether you want to Encrypt traffic or Limit bandwidth during replication updates.
9. In the right-hand column, under Objects Being Replicated, click the boxes next to the objects you want to replicate.
Selected Objects
Some selected objects may have dependencies on other objects that will be pulled into replication because they share data. For
more details, see what's copied. Objects that will be replicated are confirmed with a blue chain link icon.
Note that this is not guaranteed to be the full set of dependent objects, but rather is a best guess. The full set
of objects and their dependents will be calculated at the time of replication.
When selecting objects, you can select the entire server (Entire Delphix Engine) or a set of groups, dSources, VDBs, and Jet
Stream data layouts.
When replicating a group, all dSources and VDBs currently in the group, or added to the group at a later time, will be included.
If you select a Jet Stream data template, all data containers created from that template will be included. Likewise, if you select
a data container, its parent data template will be included.
If you select the entire server, all groups and Jet Stream objects will be included.
Regardless of whether you select a VDB individually or as part of a group, the parent dSource or VDB (and any parents in its
lineage) are automatically included. This is required because VDBs share data with their parent object. In addition, any
environments containing database instances used as part of a replicated dSource or VDB are included as well.
When replicating individual VDBs, only those database instances and repositories required to represent the replicated VDBs
are included. Other database instances that may be part of the environment, such as those for other VDBs, are not included.
10. Click Create Profile to submit the new profile. This saves the replication profile details. If you leave the Create page prior to submitting
the profile, the draft replication profile will be discarded.
Related Links
CLI Cookbook: Replication
Configuring Replication
Replicas and Failover
Failing Over a Replica
Configuring Replication
This topic describes how to configure data replication between Delphix Engines.
Prerequisites
Configuring the Network
Configuring the Replication Source Delphix Engine
Configuring the Target Delphix Engine
Related Links
Prerequisites
The replication source and the replication target must be running identical versions of the Delphix Engine (for example, Delphix Engine
version 5.0).
The target Delphix Engine must be reachable from the source engine.
The target Delphix Engine must have sufficient free storage to receive the replicated data.
The user must have administrative privileges on the source and the target engines.
See also: Replication Prerequisites
set socks.enabled=true
set socks.host=10.2.3.4
set socks.username=someuser
set socks.password=somepassword
commit
Note that SOCKS port 1080 is used by default, but can be overridden.
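As an illustrative sketch of overriding the default port, a property can be set alongside the others shown above. The property name below is an assumption based on the pattern of the settings already shown; verify it against your engine's CLI before use.

```
set socks.port=1081
commit
```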
9. Under Traffic Options, select whether you want to Encrypt traffic or Limit bandwidth during replication updates.
Encrypting Traffic
By default, replication streams are sent unencrypted. This provides maximum performance on a secure network. If the network
is insecure, you can enable encryption. Note that encrypting the replication stream will consume additional CPU resources and
may limit the maximum bandwidth that can be achieved.
Compressing Traffic
All replication streams are compressed via the same mechanism that compresses the data on disk. In environments where
network bandwidth is a constrained resource, compression has been shown to conserve bandwidth and optimize overall
throughput achieved by replication. Because the data is already stored compressed, there is no CPU overhead for
compressing replication streams.
Limiting Bandwidth
By default, replication will run at the maximum speed permitted by the underlying infrastructure. In some cases, particularly
when a shared network is being used, replication can increase resource contention and may impact the performance of other
operations. This option allows administrators to specify maximum bandwidth that replication can consume.
10. In the right-hand column, under Objects Being Replicated, click the checkboxes next to the objects you want to replicate.
Selected Objects
Some selected objects may have dependencies on other objects that will be pulled into replication because they share
data. For more details, see Replication Overview. Objects that will be replicated are confirmed with a blue chain link
icon.
When selecting objects, you can select the entire server (Entire Delphix Engine) or a set of groups, dSources, VDBs,
and Jet Stream data layouts.
Note that this is not guaranteed to be the full set of dependent objects, but rather is a best guess. The
full set of objects and their dependents will be calculated at the time of replication.
When replicating a group, all dSources and VDBs currently in the group, or added to the group at a later time, will be
included.
If you select a Jet Stream data template, all data containers created from that template will be included. Likewise, if
you select a data container, its parent data template will be included.
If you select the entire server, all groups and Jet Stream objects will be included.
Regardless of whether you select a VDB individually or as part of a group, the parent dSource or VDB (and any
parents in its lineage) are automatically included. This is required because VDBs share data with their parent object. In
addition, any environments containing database instances used as part of a replicated dSource or VDB are included
as well.
When replicating individual VDBs, only those database instances and repositories required to represent the replicated
VDBs are included. Other database instances that may be part of the environment, such as those for other VDBs, are
not included.
11. Click Create Profile to submit the new profile. This saves the replication profile details. If you leave the Create page prior to submitting
the profile, the draft replication profile will be discarded.
Related Links
CLI Cookbook: Replication
Replication Overview
Replication User Interface
Replicas and Failover
Failing Over a Replica
4. Click View Public Key in order to display the public key for the Delphix Engine.
5. Highlight the public key string (starts with ssh-rsa) and copy the key to your clipboard (Ctrl+C in Windows).
6. On each source and target host within your defined environments, paste the public key into the environment user's authorized_keys file
(normally located in the user's ~/.ssh/ directory).
Once the public key has been copied to the hosts making up your environments, you are ready to enable the remaining objects.
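Step 6 can be scripted on each host. The sketch below is minimal and assumes the key string has already been copied from the UI; the key shown is a placeholder, and SSH_DIR defaults to the environment user's ~/.ssh directory.

```shell
# Placeholder public key copied from the Delphix Engine UI (not a real key).
KEY='ssh-rsa AAAAB3NzaC1yc2EPLACEHOLDER delphix@engine'

# Target the environment user's ~/.ssh by default.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"

# Append the key and lock down permissions, as sshd requires.
echo "$KEY" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```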
Enabling Environments
In order to begin using the environments and related dSources and VDBs, the environments must first be started.
1. Log into the Delphix Engine as delphix_admin or another user with administrative privileges.
2. Navigate to Manage/Environments.
3. Select an environment which you want to enable.
4. Click or slide the slide bar, currently displaying Disabled, to initiate jobs that will refresh and enable the environment.
5. Repeat these steps for each environment which you want to enable.
Enabling dSources
In order to again have actively syncing dSources, you must enable them.
1. Log onto the Delphix Engine administrative GUI as delphix_admin or another user with administrative privileges.
2. Click the Delphix logo in the upper-left corner, or navigate to Manage/Databases/My Databases.
3. Hover over the dSource you wish to enable, and click the open arrow.
4. Click or slide the slide bar in the lower-left corner to enable the dSource.
Enabling VDBs
The final step in the failover is to enable any needed VDBs.
1. Log into the Delphix Engine as delphix_admin or another user with administrative privileges.
2. Click on the VDB that needs to be enabled.
3. To the right of the VDB name, click the open icon.
At this point the failover is complete. All objects previously running on the former source Delphix Engine should now be running on the
Delphix Engine to which you failed over.
Replicas
A replica contains a set of replicated objects. These objects are read-only and disabled while replication is ongoing. To view replicated objects,
select the System > Replication menu item and select the replica under Received Replicas (or namespace in the CLI). On this screen you can
browse the contents of replicas, as well as fail over or delete individual replicas. As described in the Replication Overview topic,
databases (dSources and VDBs) and environments are included within the replica.
Deleting or failing over a replica will sever any link with the replication source. Subsequent incremental updates will fail, requiring the
source to re-establish replication. Failover should only be triggered when no further updates from the source are possible (as in a
disaster scenario).
Multiple replicas can exist on the system at the same time. Active objects can exist in the system alongside replicas without interfering with
replication updates. VDBs and dSources within a replica can also be used as a source when provisioning. For more information, see Provisioning
from Replicated Data Sources or VDBs.
Prerequisites
A Delphix system that contains a replica is required; see the Replicas and Failover topic for an overview of what replicas are and what failover
implies. For more information on configuring replication, please refer to the Configuring Replication topics.
Procedure
1. Locate the replica to fail over. The list of replicas can be accessed via the Received Replicas section of the System > Replication
screen. Each replica has a default name that is the hostname of the source that sent the update. These names may be customized if desired.
Each replica will list the databases and environments it contains.
If this replica is the result of a replication update, check to see if the source Delphix appliance is still active. If so, then disable
any dSource or VDB that is part of the replica being failed over, to ensure that only one instance is enabled. dSources and
VDBs can be disabled by going to the Databases > My Databases screen, finding the appropriate database, and toggling the
enabled slider.
Related Topics
Replication User Interface
Replicas and Failover
Configuring Replication
Prerequisites
A source and target replication host that were configured with a release prior to 3.2 and subsequently upgraded.
On the target system, a Delphix user with domain privileges is required.
Procedure
1. On the source host, click System.
2. Select Replication.
3. Enter the name and password for a Delphix user on the target who has domain privileges.
4. Save the configuration.
Related Links
Configuring Replication
Prerequisites
You will need to have replicated a dSource or a VDB to the target host, as described in Replication Overview.
You will need to have added a compatible target environment on the target host, as described in Provisioning VDBs: An Overview.
On the target host, you will need to install any toolkits that the replicated objects depend on.
Procedure
1. Log in to the Delphix Admin application for the target host.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5. In the list of replicas, select the replica that contains the dSource or VDB to be provisioned.
6. The provisioning process is now identical to the process for provisioning standard objects. For the details of how to provision VDBs for
specific platforms, consult Provisioning VDBs: An Overview.
Post-Requisites
Once the provisioning job has started, the user interface will automatically display the new VDB in the live system.
Related Links
Replication Overview
Database Provisioning Overview
Provisioning VDBs: An Overview
Provisioning VDBs from Oracle and Oracle RAC dSources
Provisioning VDBs from SQL Server dSources
Provisioning VDBs from PostgreSQL dSources
For example, you can resume a large, time-consuming initial distribution or incremental update after it is interrupted. Suppose a selective data
distribution profile has already been configured from a source to a target. A large, full send from the source begins that is expected to take weeks
to complete. Halfway through, a power outage at the data center that houses the source causes the source machine to go down and only come
back up after a few hours. On startup, the source will detect that a selective data distribution was ongoing, automatically re-contact the target, and
resume the distribution where it left off. In the user interface (UI) on the source, the same selective data distribution send job will appear as active
and continue to update its progress. However, in the UI of the target, a new distribution receive job will appear, although it will track its progress
as a percentage of the entire replication.
Selective data distribution will not resume after failures that leave the source and target connected. For example, if a storage failure on the
target, such as an out-of-space error, causes a distribution to fail, the source and target remain connected. As a result, the Delphix Engine will
conservatively throw away all MDS and ZFS data associated with the failed operation.
Related Links
Delphix Masking Overview
Selective Data Distribution Use Cases
Selective Data Distribution User Interface
Configuring Selective Data Distribution
Selective Data Distribution and Failover
Provisioning from Replicated Data Sources or VDBs
Because there is no failover, this technology can support more complex topologies such as 1-to-many and many-to-1. Chained distribution
(replicating from Site A -> Site B -> Site C) is not supported.
Best Practices
For geographical distribution, follow these best practices:
Because each replication stream induces load on the source system:
Minimize the number of simultaneous replication updates
If possible, avoid heavy VDB workloads on the source
On the target, provision only from sources that are effectively permanent. If a source is deleted, remote VDBs can no longer be refreshed.
Provision additional storage capacity on the target
Remotely provisioned VDBs can consume shared storage on the target even when the parent is deleted on the source
Migration
You cannot use selective data distribution for data migration. A full replication is needed for data migration.
Related Links
Selective Data Distribution Overview
Selective Data Distribution User Interface
Status Box
Shows the distribution status of the selected profile, including:
The result of the most recent or current distribution event
Statistics for the distribution run, such as data transferred, duration, and average throughput
In the upper left-hand corner, an icon summarizes the distribution status. There are four possible status icons:
Type
Shows the type of the selected profile or replica.
Configuration Options
Configuration options for the selective data distribution profile you have selected. These include:
Description: Free text description of the profile
Target Engine: The Delphix Engine on the receiving end of this data distribution pair
Automatic Replication: If enabled, shows the frequency and time that regular distribution will be run
Traffic options: Summarizes the traffic options with which this profile has been configured
Object Selection Tree
Shows all of the masked objects that you have selected for distribution in this selective data distribution profile.
Replicate Now Button
Begins the distribution process
Delete Button
Allows you to delete the current profile
Interacting with the Create New Selective Data Distribution Profile Section
1. In the left-hand navigation section, click the Create Profile icon
2. Enter the name of the selective data distribution profile and an optional description.
3. Select type Selective Data Distribution.
4. For Target Engine, enter the hostname or IP address for the target Delphix Engine.
5. Enter the username and password of a user who has Delphix Admin-level credentials on the target Delphix Engine. If the username and
password change on the target Delphix Engine, you must update these settings on the source Delphix Engine.
6. By default, automatic replication is disabled, meaning that you must trigger distribution updates manually. To enable automatic
distribution, click the Enabled checkbox.
7. In the Automatic Replication field, enter the Frequency and Starting Time for distribution updates to the target Delphix Engine. Once
you have entered and saved your distribution settings, you will also see an option to trigger distribution immediately with the Replicate
Now button.
Automatic replication uses Quartz for scheduling. Starting with Delphix version 4.2, the Quartz-formatted string is editable via
the Advanced option. Refer to the screenshot below.
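As an illustrative sketch (the expression format is defined by Quartz, not Delphix), a Quartz cron expression has space-separated fields for seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year; day-of-month and day-of-week cannot both be "*", so one is given as "?". For example:

```
0 30 1 ? * SUN    Every Sunday at 01:30:00
0 0 */4 * * ?     Every four hours, on the hour
```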
8. Under Traffic Options, select whether you want to Encrypt traffic or Limit bandwidth during distribution updates.
9. In the right-hand column, under Objects Being Replicated, click the boxes next to the objects you want to distribute.
Selected Objects
10. Click Create Profile to submit the new profile. This saves the selective data distribution profile details. If you leave the Create page
prior to submitting the profile, the draft selective data distribution profile will be discarded.
Click the screenshot below for an enlarged view of the Replica section. The descriptions below provide more details of the functionality of this
section.
Status Box
Similar to the Replication Profile status box, this shows the most up-to-date status information for the replica on the target.
Type
A read-only field of the type of the replica, which is Selective Data Distribution.
Replicated Environments
Distributing a masked VDB will automatically distribute any environments associated with those objects. For more information, see
Selective Data Distribution Overview.
Replicated Objects Tree
A read-only view of the distributed objects in this replica
Delete Button
Deletes this replica on the target. This does not have an effect on the corresponding profile on the source engine.
Related Links
Selective Data Distribution Overview
Selective Data Distribution Use Cases
Configuring Selective Data Distribution
Selective Data Distribution and Failover
Delphix Masking Overview
Prerequisites
The replication source and the replication target must be running identical versions of the Delphix Engine (for example, Delphix Engine
version 5.0).
The target Delphix Engine must be reachable from the source engine.
The target Delphix Engine must have sufficient free storage to receive the replicated data.
The user must have administrative privileges on the source and the target engines.
See also: Replication Prerequisites
set socks.enabled=true
set socks.host=10.2.3.4
set socks.username=someuser
set socks.password=somepassword
commit
Note that SOCKS port 1080 is used by default, but you can override it.
10. Under Traffic Options, select whether you want to Encrypt traffic or Limit bandwidth during replication updates.
Encrypting Traffic
By default, replication streams are sent unencrypted. This provides maximum performance on a secure network. If the network
is insecure, encryption can be enabled. Note that encrypting the replication stream will consume additional CPU resources and
may limit the maximum bandwidth that can be achieved.
Limiting Bandwidth
By default, replication will run at the maximum speed permitted by the underlying infrastructure. In some cases, particularly
when a shared network is being used, replication can increase resource contention and may impact the performance of other
operations. This option allows you to specify the maximum bandwidth that replication can consume.
11. In the right-hand column, under Objects Being Replicated, click the checkboxes next to the objects you want to replicate.
Selected Objects
You can only select masked VDBs for selective data distribution.
The parent dSource or VDB (and any parents in its lineage) are NOT automatically included. Some of the data from
the parent may be included for disk space optimization. In addition, any environments containing database instances
used as part of a replicated VDB are included as well.
When replicating individual VDBs, only those database instances and repositories required to represent the replicated
VDBs are included. Other database instances that may be part of the environment, such as those for other VDBs, are
not included.
12. Click Create Profile to submit the new profile. This saves the replication profile details. If you leave the Create page prior to submitting
the profile, the draft replication profile will be discarded.
Related Links
Selective Data Distribution Overview
Selective Data Distribution Use Cases
Selective Data Distribution User Interface
Selective Data Distribution and Failover
Delphix Masking Overview
CLI Cookbook: Replication
Replication Prerequisites
Replication User Interface
Replicas
A replica contains a set of replicated objects. These objects are read-only and disabled while replication is ongoing. To view replicated objects:
1. Click System.
2. Select Replication.
3. Under Received Replicas, select the replica. On this screen, you can browse the contents of replicas or delete individual replicas. As
described in the Selective Data Distribution Overview, VDBs and environments are included within the replica.
You can also view replicated objects under namespace in the command line interface (CLI).
Deleting a replica will sever the link with the replication source. Subsequent incremental updates will fail, requiring the source to
re-establish replication.
Multiple replicas can exist on the system at the same time. Active objects can exist in the system alongside replicas without interfering with
replication updates. VDBs within a replica can also be used as a source when provisioning. For more information, see Provisioning from
Replicated Data Sources or VDBs.
Failover
Selective data distribution does not support failover of replicas.
Related Topics
Selective Data Distribution Overview
Selective Data Distribution Use Cases
Selective Data Distribution User Interface
Configuring Selective Data Distribution
Selective Data Distribution and Failover
Provisioning from Replicated Data Sources or VDBs
Delphix Masking Overview
Procedure
1. Log in to the Delphix Admin application as a Delphix Admin user, or as a group or object owner.
2. Select the dSource or VDB you want to export.
3. Select the Snapshot of the dSource or VDB state you want to export.
4. If you want to export the state of the database from a specific point in time, slide the LogSync slider on the top of the snapshot card to
the right, and then select the point in time from which you want to create the export.
5. Click V2P or Deploy (if you have the Delphix Modernization Engine, you will see "Deploy").
6. Select the target environment.
7. Enter the Target Directory for the export.
The target directory you enter here must exist in the target environment, and you must have permission to write to it. For more
information on user requirements for target environments, see Requirements for Oracle Target Hosts and Databases.
8. Select whether or not to Open Database After Recovery.
If you do not select this option, the Oracle database will not undergo open resetlogs, and the database will not be available for read/write
access. This can be useful if the files are to be used to restore an existing data file for recovery purposes. You can use the scripts that
are created in the target environment to complete the database open process at a later time. For more information, see Manually
Recovering a Database after V2P.
9. Click Advanced to customize data transfer settings, customize the target directory layout, enter any database configuration
parameters, or enter file mappings from the source environment to the target. For more information, see Customizing Target Directory
Structure for Database Export, Customizing Oracle VDB Configuration Settings, and Customizing VDB File Mappings. The data
transfer settings are described below:
Compression: Enable compression of data sent over the network. Default is Off.
Encryption: Enable encryption of data sent over the network. Default is Off.
Bandwidth Limit: Select the network bandwidth limit in units of megabytes per second (MB/s) between the Delphix Engine and
the target environment. Default is 0, which means no bandwidth limit is enforced.
Number of Connections: Select the number of transmission control protocol (TCP) connections to use between the Delphix
Engine and the target environment. Multiple connections may improve network throughput, especially over long-latency and
highly-congested networks. Default is 1.
Number of Files to Stream Concurrently: Select the number of files that V2P should stream concurrently from the Delphix
Engine to the target environment. Default is 3.
10. Click Next.
11. Select whether you want to have an email sent to you when the export process completes.
12. Click Finish.
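The benefit of the Number of Connections setting above comes from the fact that a single TCP connection's throughput is roughly bounded by its window size divided by the round-trip time. A quick back-of-the-envelope check (the window and RTT figures are illustrative, not Delphix defaults):

```shell
# Single-connection ceiling ≈ TCP window / round-trip time.
# Example: a 128 KB window over an 80 ms round trip.
awk 'BEGIN { window_mb = 128 / 1024; rtt_s = 0.080;
             printf "%.1f MB/s per connection\n", window_mb / rtt_s }'
```

At that rate, several parallel connections are needed to fill even a modest WAN link, which is why raising the connection count can help on long-latency paths.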
Post-Requisites
If you did not select Open Database After Recovery, follow the instructions in Manually Recovering a Database after V2P to complete the
database open process.
Resumable V2P
Resumable V2P is a capability that allows you to suspend a V2P operation and then resume it at a later time, without redoing any of the work
already completed. For example, any portion of a file that has already been transferred to the target environment is not re-sent; if an entire file
has already been transferred, none of it is re-sent.
The image below presents a progress bar, a stop button and a pause button while a V2P is running. To manually suspend a V2P operation:
1. Click the pause button.
The next image represents a message alert generated after a V2P job has been suspended. To manually resume the job:
1. Click the play button.
Recoverable Errors
Broadly speaking, a "recoverable error" is an error condition caused by a disruption in the environment or on the target host, not errors in the
actual V2P operation. Examples of recoverable errors include:
A timeout due to a network outage
Running out of disk space on the target environment
An inability to create directories or files on the target environment
You can often address recoverable errors by taking some action to fix the problem, such as freeing up space on the target environment.
Auto-Suspend
A V2P operation that encounters a recoverable error is auto-suspended: it appears as a suspended job in the user interface (UI), with a message
detailing the error condition. Once you have fixed the error, you can simply resume the job. Alternatively, you can cancel the job. Just as when
you manually suspend and resume a job, any portion of a file that has been transferred, including possibly the entire file itself, is not re-sent when
the job resumes.
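Conceptually, auto-suspend amounts to classifying each failure before deciding the job's fate. This is a hypothetical sketch of that decision; the error names and states are invented, not the engine's internal values:

```python
# Invented error categories for illustration; the engine's own
# classification of recoverable errors is internal to the product.
RECOVERABLE = {"network_timeout", "target_disk_full", "target_permission_denied"}

def next_state(error_kind):
    """A recoverable environment error suspends the job so the operator
    can fix the problem and resume; any other error fails the job."""
    return "suspended" if error_kind in RECOVERABLE else "failed"
```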
Related Links
Requirements for Oracle Target Hosts and Databases
Prerequisites
Provision a VDB on the target machine that is running Oracle ASM or Exadata
Create an ASM disk group that will contain all the database files. Optionally create a separate disk group for redo log files.
Where multiple disk groups are used for datafiles, the reference move-to-asm.sh script will need to be modified. Oracle best practices
recommend a single datafile disk group.
Oracle Versions
This procedure applies to all Oracle RDBMS Versions supported by Delphix.
Procedure
1. Download the reference shell script move-to-asm.sh on the target machine where the VDB instance and ASM instance are running.
2. Ensure that the Oracle environment variables ORACLE_HOME, ORACLE_SID, and CRS_HOME (RAC only) are correctly set for the VDB that
needs to be moved.
3. Execute the script move-to-asm.sh as the Environment User who provisioned the single instance VDB.
For a RAC VDB, the Environment User selected to execute move-to-asm.sh must be the Oracle installation owner. This is due to an
Oracle restriction that only the installation owner can invoke srvctl to add or remove database configurations.
move-to-asm.sh [-noask] [-parallel #] [-dbunique db_unique_name] <data_diskgroup> [<redo_diskgroup>]
Parameters
-noask [optional]
Do not prompt for confirmation before moving the VDB. Default is to prompt.
-parallel [optional]
Degree of parallelism (number of RMAN channels) to use when moving the datafiles.
-dbunique [optional]
Database unique name for the resulting physical database. Default is the VDB unique name.
<data_diskgroup> [required]
Target ASM disk group for data, server parameter, and control files.
<redo_diskgroup> [optional]
Target ASM disk group for redo log files. Default is data_diskgroup.
$ /home/ora1120/scripts/delphix/move-to-asm.sh
Usage: move-to-asm.sh [-noask] [-parallel #] [-dbunique db_unique_name]
<data_diskgroup> [<redo_diskgroup>]
$ /home/ora1120/scripts/delphix/move-to-asm.sh -noask -dbunique davis +DATA +LOG
============================================================
Virtual-to-ASM script (move-to-asm.sh v1.5) for Delphix 3.x
Copyright (c) 2013 by Delphix.
============================================================
Moving database db52temp to ASM: started at Mon Jun 10 11:48:31 EDT 2013
db_unique_name => db52
ORACLE_SID => db52
ORACLE_HOME => /opt/app/oracle/product/11.2.0/dbhome_1
Datafile diskgroup => +VIIL
RMAN Channels => 8
Generate script to move tempfiles to ASM
Generate script to drop old tempfiles
Generate script to drop offline tablespaces
Generate script to make read-only tablespaces read-write
Make read-only tablespaces read-write
Remove offline tablespaces
Updating server parameter file with ASM locations
Move spfile to ASM
Move datafiles to ASM: started at Mon Jun 10 11:49:25 EDT 2013
Move datafiles to ASM: completed at Mon Jun 10 11:56:09 EDT 2013
Startup database with updated parameters
Move tempfiles into ASM
Move Online logs
Restore any read-only tablespaces
Remove old tempfiles
Database db52 moved to ASM: completed at Mon Jun 10 11:57:19 EDT 2013
Final Steps to complete the move to ASM:
1) Delete VDB on Delphix.
2) Copy new init.ora: cp
/home/ora1120/scripts/delphix/initdb52_run8396_moveasm.ora
/opt/app/oracle/product/11.2.0/dbhome_1/dbs/initdb52.ora
3) Startup database instance.
4) Modify initialization parameters to match source and restart.
Source parameters are restored at
/home/ora1120/scripts/delphix/source_initdb52.ora
4. Alternatively, enter move-to-asm.sh as a Post Script when provisioning the VDB. This will provision and move the VDB into ASM
diskgroups in a single flow.
You must specify the -noask option to execute in non-interactive mode. For example:
/delphix/scripts/move-to-asm.sh -noask -parallel 10 +DATA +REDO
See Using Pre- and Post-Scripts with dSources and SQL Server VDBs for more information.
Post-Requisites
Final steps to be manually executed are displayed when the script completes and are written to the execution output log.
1. Delete the Delphix VDB that was moved.
2. For Single Instance only: copy generated init.ora parameter file to the default $ORACLE_HOME/dbs/init<$ORACLE_SID>.ora
3. For Single Instance only: startup the physical database that will now run on ASM.
A RAC database is automatically started up by the move-to-asm.sh script using srvctl.
4. Modify initialization parameters to match the original source database parameters, if necessary.
As a convenience to assist with this step, the source database parameters are restored as source_init<$ORACLE_SID>.ora
Related Links
Requirements for Oracle Target Hosts and Databases
Procedure
1. Log into the Delphix Admin application as a Delphix Admin user, or as a group or object owner.
2. Select the dSource or VDB you want to export.
3. Select the snapshot of the dSource or VDB state you want to export.
4. If you want to export the state of the database from a specific point in time, slide the LogSync slider on the top of the snapshot card to
the right, and then select the point in time from which you want to create the export.
5. Click V2P or Deploy.
6. Select the target environment.
7. Enter the Target Directory for the export.
The target directory you enter here must exist in the target environment, and you must have permission to write to it. See Requirements
for SQL Server Target Hosts and Databases for more information on user requirements for target environments.
8. Select an option for Run recovery after V2P.
If you select No, you can use the scripts that are created in the target environment to manually recover the database at a later time. See
Manually Recovering a Database after V2P for more information.
9. Click Advanced to customize the target directory layout. See Customizing Target Directory Structure for Database Export for more
information.
10. Click Next.
11. Select whether you want to have an email sent to you when the export process completes, and then click Finish.
Post-Requisites
If you selected No for Run Recovery after V2P, follow the instructions in Manually Recovering a Database after V2P to complete the V2P
process.
Related Links
Requirements for SQL Server Target Hosts and Databases
Manually Recovering a Database after V2P
Procedure
1. Log into the Delphix Admin application as a Delphix Admin user, or as a group or object owner.
2. Select the dSource or VDB you want to export.
3. Select the snapshot of the dSource or VDB state you want to export.
4. Click V2P or Deploy.
5. Select the target environment.
6. Enter the Target Directory for the export.
The target directory you enter here must exist in the target environment, and you must have permission to write to it. See Requirements
for PostgreSQL Target Hosts and Databases for more information on user requirements for target environments.
7. Enter a Port Number.
This is the TCP port the exported database will listen on.
8. Click Advanced to customize the target directory layout, or enter any database configuration parameters.
See Customizing Target Directory Structure for Database Export, Customizing PostgreSQL VDB Configuration Settings for more
information.
9. Click Next.
10. Review the Target Environment configuration information, and then click Finish.
Related Links
Requirements for PostgreSQL Target Hosts and Databases
Customizing Target Directory Structure for Database Export
Customizing VDB Configuration Settings
Requirements
Before you perform the V2P operation, you must have created a database on the target instance into which you will load the exported data. The
database must be large enough to hold the exported data, and you must have created it with the SAP ASE for load option.
The Delphix Engine will initiate a load command using the database specified. The V2P operation will overwrite any existing data in this
database.
Procedure
1. Log into the Delphix Admin application as a Delphix Admin user, or as a group or object owner.
2. Select the dSource or VDB you want to export.
3. Select the snapshot of the dSource or VDB state you want to export.
4. Click V2P.
5. Select the target environment.
6. Under Installation, select the instance to which you want to export.
7. Enter the Name of the database on the target instance into which you want to load the exported data.
8. Select whether or not to Run Recovery After V2P. When this option is set, the Delphix Engine will bring the database online when the export
is done.
9. Click Next.
10. Select whether you want to have an email sent to you when the export process completes.
11. Click Finish.
Related Links
Requirements for SAP ASE Environments
Procedure
1. In the V2P target environment, navigate to the scripts directory for your exported database instance.
You can find the scripts in a sub-directory named for that specific database instance. For Oracle databases, the path is
<target_directory>/<db_unique_name>/script/<instance name>. For SQL Server databases, the path is <target_directory>\<db_name>\scripts.
2. For Oracle databases, locate the scripts recover-vdb.sh and open-vdb.sh. Run them in that order.
For SQL Server databases, locate the script Provision.ps1 and run it.
3. For SQL Server databases, when the script completes, Refresh the target environment for it to discover the recovered database.
For Oracle databases, add the recovered database to /etc/oratab and Refresh the target environment for it to discover the recovered
database.
Procedure
1. During the virtual to physical export process, click Advanced in the V2P Wizard to see the target directory options.
2. You can customize any of the following:
Data Directory
Archive Directory
Temp Directory
External Directory
Script Directory
3. Each directory is then concatenated to the Target Directory with the appropriate path separator.
Any one of Target Directory, Data Directory, Archive Directory, Temp Directory, External Directory, and Script Directory can
be blank. However, each combination of the fields must form an absolute path.
Data files: <target directory>/<data directory>
Archive files: <target directory>/<archive directory>
Temp files: <target directory>/<temp directory>
External files: <target directory>/<external directory>
Script files: <target directory>/<script directory>
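The concatenation rule can be expressed as a small path-building function. This sketch reflects the documented behavior (blank components are allowed, but each combination must form an absolute path); the function name is illustrative, not part of the product.

```python
def build_export_path(target_dir, sub_dir):
    """Concatenate the Target Directory and one sub-directory field.

    Either field may be blank, but the combined result must be an
    absolute path; otherwise the layout is invalid."""
    if target_dir and sub_dir:
        combined = target_dir.rstrip("/") + "/" + sub_dir.lstrip("/")
    else:
        combined = target_dir or sub_dir or ""
    if not combined.startswith("/"):
        raise ValueError("INVALID: combined path must be absolute")
    return combined
```

With Target Directory blank and Data Directory /mydata, data files land under /mydata; with both fields blank, the layout is rejected as invalid.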
Examples
Target directory is not empty
This means all target directories have a common root.
Input:
Target Directory:
Data Directory: /mydata
Input:
Target Directory:
Data Directory:
Archive Directory: /myarchive
Data files:
/filesystem1/a.dbf
/filesystem2/b.dbf
File mappings:
a.dbf : /filesystem1/a.dbf
b.dbf : /filesystem2/b.dbf
Target directory is empty and one of the sub-directories is empty, which results in an error
Input:
Target Directory:
Data Directory: /mydata
Archive Directory:
Temp Directory: /mytarget/temp
External Directory: /mytarget/myexternal
Script Directory: /myscript
Final Directories: INVALID
Procedure
1. Log into the Delphix Admin application as a Delphix Admin user, or as a group or object owner.
2. Select the dataset you want to export.
3. Select the snapshot you want to export.
4. Click V2P or Deploy.
5. Select the target environment.
6. Enter the Mount Path for the export.
The directory you enter here must exist in the target environment, and you must have permission to write to it. See Managing Unix
Environments for more information on user requirements for target environments.
7. Click Next.
8. Review the Target Environment configuration information, and then click Finish.
Related Links
Managing Unix Environments
Virtual to Physical: An Overview
Procedure
1. Log into the Delphix Admin application as a Delphix Admin user, or as a group or object owner.
2. Select the dSource or VDB you want to export.
3. Select the snapshot of the dSource or VDB state you want to export.
4. Click V2P or Deploy.
5. Select the target environment.
6. Enter the Target Directory for the export.
The target directory you enter here must exist in the target environment, and you must have permission to write to it. See Requirements
for MySQL Server Target/Staging Hosts and Databases for more information on user requirements for target environments.
7. Enter a Port Number.
This is the TCP port the exported database will listen on.
8. Click Advanced to customize the target directory layout, or enter any database configuration parameters.
See Customizing Target Directory Structure for Database Export, Customizing MySQL VDB Configuration Settings for more
information.
9. Click Next.
10. Review the Target Environment configuration information, and then click Finish.
Related Links
Requirements for MySQL Server Target/Staging Hosts and Databases
Customizing Target Directory Structure for Database Export
_Delphix Masking
Installation and deployment details
Initial Requirements for all Masking Activities
To mask data
Masking jobs can be created in the Masking Engine GUI and run via the GUI or API, just like on standalone Masking Engines.
You can also use the Data as a Service Engine GUI to provision masked virtual databases (VDBs). See Provisioning Masked VDBs for more details.
Post-Masking Features
Related Links
Masking Engine Terms Overview
Masking Engine Install, System Configuration, and Network Setup
Prepare Data for Masking
Masking Engine Activities
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflows
Provision Data
Delphix allows you to provision data from a linked source to the target you choose. This flexibility empowers development and testing teams to
procure fresh, secure data from a source environment and move it to a non-production environment whenever they need it.
Understanding Connections
Delphix stores JDBC database connection information in an object called a "connector." You can discover a list of connectors within an
environment by going to Environment Overview and then clicking the Connector tab. The connection includes fields such as database name,
host, user ID and password, and port, and is specific to the DBMS type you select. This builds a connector between the source database and the
masking interface.
Understanding Profiling
Profiling is a major component of the Masking Engine. The objective of profiling is to identify the location of Non-Public Information (NPI) or
sensitive data if you are unsure of what data needs to be masked in the first place. Profiling data is not necessary when you have already
identified the sensitive data you need to mask.
The Delphix profiler uses two different methods to identify the location of sensitive data:
Searching through the column names in the target database by querying the database catalog (metadata)
Looking at the data itself, using a sampling algorithm, to see whether there is any sensitive data. This is especially useful for files and
comment and notes fields in a database.
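The two methods can be sketched as a column-name match against the catalog plus a regex scan of sampled values. The patterns and names below are illustrative examples, not the Delphix profiler's actual expressions:

```python
import re

# Example search expressions; a real profile set would be much larger.
NAME_PATTERNS = [re.compile(p, re.I) for p in (r"ssn", r"first.?name", r"last.?name")]
DATA_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # value shaped like a US SSN

def flag_by_name(column_name):
    """Method 1: match the column name taken from the database catalog."""
    return any(p.search(column_name) for p in NAME_PATTERNS)

def flag_by_sample(values, threshold=0.5):
    """Method 2: sample the data itself and flag the column when enough
    of the sampled values look sensitive."""
    if not values:
        return False
    hits = sum(any(p.search(str(v)) for p in DATA_PATTERNS) for v in values)
    return hits / len(values) >= threshold
```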
Understanding Inventory
The Delphix Engine automatically stores the masking rules for each sensitive column in the Delphix repository database, in the environment's
"inventory." When you select a table to mask, its columns appear, and you can select them for masking. You can then assign each selected
column the algorithm required for masking.
Understanding Algorithms
Algorithms are how the Masking Engine masks sensitive data. From the Settings tab, click Algorithm on the left-hand side, and the list of
algorithms appears for you to select. The following algorithms are the most commonly used methods for masking:
Secure Lookup Algorithm Uses a lookup file to assign masked values in a consistent manner.
Segmented Mapping Algorithm Replaces data values based on segment definitions. For example, an ACCOUNT NUMBER algorithm
might keep the first segment of an account number but replace the remaining segments with a random number.
Secure Shuffle Algorithm A user-defined algorithm assigned to a specific column. Secure Shuffle automates the creation of a secure
lookup algorithm by building a list of replacement values from the existing unique values in the target column and creating a secure
lookup using those values. In that respect, it simply shuffles the values.
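The first two algorithms can be illustrated with a short sketch. The function names, lookup values, and segment handling below are invented for illustration; the engine's real implementations differ:

```python
import hashlib
import random

def secure_lookup(value, lookup_values):
    """Secure lookup: hash the input to pick a replacement from the lookup
    list, so the same input always maps to the same masked value."""
    digest = hashlib.sha256(value.encode()).hexdigest()
    return lookup_values[int(digest, 16) % len(lookup_values)]

def segmented_mapping(account_number, keep_segments=1, sep="-", rng=None):
    """Segmented mapping: keep the leading segment(s) of an account number
    and replace each remaining segment with random digits of equal length."""
    rng = rng or random.Random()
    segments = account_number.split(sep)
    kept = segments[:keep_segments]
    masked = ["".join(str(rng.randrange(10)) for _ in seg)
              for seg in segments[keep_segments:]]
    return sep.join(kept + masked)
```

Because secure_lookup derives the replacement from a hash of the input, the same input always produces the same masked value, which preserves referential consistency across tables; segmented_mapping keeps the leading segment and randomizes the rest.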
Related Links
Quick Start Masking Engine Overview
Masking Engine Install, System Configuration, and Network Setup
Prepare Data for Masking
Masking Engine Activities
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
Installation Overview
Installations of Delphix 5.0 and above include the Delphix Masking Engine. This combination of the Delphix Data as a Service Engine and
Masking Engine provides tight integration and enables additional features such as Selective Data Distribution.
Both installation types require:
A Delphix Support Account
The appropriate installation file for your supported hypervisor (e.g. VMware) and installation type from Delphix Downloads
If you are installing Delphix 5.0 or above, start here: Delphix and Masking Engine Installation
If you are not running Delphix 5.0 or above, and want to install the latest Masking Engine, start here: Standalone Masking Engine Installation
If you are unsure of which Masking Engine is right for you, please contact your Professional Services team or Delphix Support.
h. /etc/init.d/network restart
i. Edit /etc/resolv.conf.
j. DHCP may have populated this correctly; otherwise, modify the nameserver entries to point to the correct DNS server(s).
k. If necessary, edit the firewall.
l. By default, the firewall is disabled, meaning that /etc/init.d/iptables status shows not running. If the firewall needs to be
enabled, open port 8282 for access to the UI, as well as any ports needed to connect to your database servers.
m. Stop and start the Masking Engine from the root prompt with the following commands:
i. cd /opt/dmsuite
ii. ./stop_all.sh
iii. ./start_all.sh
4. Connect to the Masking Engine at: http://<IP or DNS name>:8282/dmsuite as the user delphix_admin and password Delphix_123.
Once your Masking Engine is installed and enabled, proceed to the next steps.
Next Steps
Prepare Data for Masking
Masking Engine Activities
Related Links
Quick Start Masking Engine Overview
Prepare Data for Masking
Masking Engine Activities
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
Installing the Delphix Engine
Next Steps
Masking Engine Activities
Create Data Masking Rule Sets, Algorithms, and Inventories
Mask Data
Related Links
Quick Start Masking Engine Overview
Prepare Data for Masking
Masking Engine Activities
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
Link an Oracle Data Source
User Roles
The Masking Engine has a built-in Administrator role, which gives you complete access to masking functions. As an administrator, you can
access, update, and delete all environments, and all objects within those environments. You can also add roles in the roles settings.
Note: Defining new environments and connections requires different privileges than building masking jobs.
Once logged into the Masking Engine, you can complete the activities needed for masking under the Environments tab, seen below:
Next Steps
Add an Application and Create a New Environment and Connector
Create Data Masking Rule Sets, Algorithms, and Inventories
Mask Data
Previous Steps
Prepare Data for Masking
Related Links
Quick Start Masking Engine Overview
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
Add an Application
In order to mask, you first need to add an application and create an environment to store the connection information and the masking rules for the
data store.
1. Click Add Application.
2. Enter an Application Name.
3. In the upper right-hand side of the screen, click Add Environment. The screen prompts you for the following items:
a. From the Application Name drop-down menu, select the name of the application associated with this environment, for
informational purposes. An integrated test environment can have multiple applications.
b. Enter an Environment Name.
This will be the display name of the new environment.
c. From the Purpose drop-down menu, select Mask.
4. Either:
Click Save to return to the Environments List/Summary screen,
or
Click Save & View to display the Environment Overview screen.
After you create an environment and connectors, you need to define a rule set. See the following activity for how to do this.
Next
Create Data Masking Rule Sets, Algorithms, and Inventories
Related Links
Masking Engine Activities
Create Data Masking Rule Sets, Algorithms, and Inventories
Mask Data
3. In the upper right-hand corner of the Rule Set tab, click the Edit (pencil) icon for the rule set you want to edit.
The Create Rule Set screen appears, allowing you to specify which tables belong in the Rule Set.
a. Enter a Name for your rule set.
b. Select a Connector name from the drop-down menu.
The list of tables for that connector appears.
c. To select individual tables, click their names in the list to the right. Alternatively, click Select All in the bottom left to select all the
tables.
d. Click Save.
You are returned to the Rule Set screen.
4. To see the list of tables that you selected, click the name of the newly-created rule set.
5. Optionally, for each table, if there is no primary key for that table, click Edit Table and define the logical key, as seen in the screenshot
below:
The following section describes how to define the columns to mask for each table in the rule set.
Profiling Data
1. Create a profiling job using the steps above.
2. Run the profiling job you just created. When you run this profiling job, it updates/populates an inventory.
6. To view the inventory, click the Inventory tab while in an Environment Overview.
7. Examine the inventory to ensure that the profiling job has included everything you want to mask. For example, if you selected a First
Name field, you probably want the Last Name field as well. You can see which columns were selected for masking by selecting the
associated rule set. Make sure that you have included all sensitive data elements, such as personal identifying information, from the table
that you want to mask.
8. Modify the inventory, if necessary.
When a profiling job runs, it automatically updates the inventory for the given rule set. If you do not want the Profiler to automatically update the
inventory, change the ID Method to User.
Next
Mask Data
Related Links
Masking Activities
Add an Application and Create a New Environment and Connector
Mask Data
Mask Data
Create a New Masking Job
Run a New Masking Job
Validate a New Masking Job
Next Steps
Related Links
Rule Set Select a rule set against which this job will execute.
2. When you are finished, click Save.
Next Steps
Provisioning Masked VDBs
Related Links
Quick Start Masking Engine Overview
Masking Engine Activities
Add an Application and Create a New Environment and Connector
Create Data Masking Rule Sets, Algorithms, and Inventories
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
Prerequisites
Delphix Masking Pre-Configuration Activities
If you are configuring Delphix Masking for the first time, you must complete all of the activities below in order.
1. Install the Combined OVA.
2. Prepare Your Data.
3. Configure, Create, and Test a Simple Masking Job.
a. Add an Application.
b. Create Data Masking Rule Sets.
VDB Snapshot Required
Take a VDB snapshot before masking data. This is required to bring the changes into Delphix if you are going to be
provisioning masked VDBs.
c. Mask Data.
Mask Data for Provisioned Masked VDBs
A masking job must be Multi Tenant to use it when creating a masked virtual database (VDB).
Restrictions
Unique masking jobs cannot be selected and run on multiple VDBs simultaneously. The user interface will allow you to assign the same
masking job to multiple VDBs, but if you provision or refresh multiple VDBs using the same selected masking job and ruleset, errors may
occur with the masking jobs. To avoid any issues with provisioning or refreshing, if you are using the same masking ruleset on multiple
VDBs, be sure to create a unique job for each VDB.
Provisioning masked VDBs through the Delphix Engine does not currently work with DB2. In order to mask DB2, you should use the
Masking Engine interface.
You cannot apply additional masking jobs to a masked VDB or its children.
If a masking job has been applied to a VDB, you cannot create an unmasked snapshot of that VDB.
If an existing VDB has not had a masking job applied to it, then you cannot mask that particular VDB at any point in the future. All the
data within the VDB and its parents will be accessible if it is replicated or distributed.
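The first restriction can be checked before provisioning or refreshing: no masking job should be assigned to more than one VDB. A hypothetical pre-flight check follows; the names are invented for illustration and are not part of the product:

```python
from collections import Counter

def shared_masking_jobs(assignments):
    """Given {vdb_name: masking_job_name}, return the jobs assigned to more
    than one VDB; each of these needs its own per-VDB copy of the job."""
    counts = Counter(assignments.values())
    return sorted(job for job, n in counts.items() if n > 1)
```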
To provision a masked VDB, you must first indicate that the masking job you are using is complete and applicable to a particular database. You
do this by associating the masking job with a dSource.
1. Open the database card for the dSource to which the masking job is applicable and with which it will be associated.
2. Click the Masking tab.
3. Click the pencil icon to edit. All masking jobs on this Delphix Engine that have not been associated with another dSource will be listed on
the right-hand side.
6. Repeat for any other jobs that you want to associate with this dSource at this time.
7. Click the yellow checkmark to confirm.
The Delphix Engine now considers this masking job to be applicable to this dSource and ready for use. When provisioning from snapshots of this
dSource, this masking job will now be available.
Note: Masking jobs can also be associated with virtual sources in addition to dSources.
8. Click Next.
9. Specify any Pre or Post Scripts that should be used during the provisioning process. If the VDB was configured before running the
masking job using scripts that impact either user access or the database schema, those same scripts should also be used here.
10. Click Next.
11. Click Finish.
In the Actions sidebar on the right-hand side of the window, there will be an action indicating that masking is running. You can verify this and
monitor progress by going to the Masking Engine page and clicking the Monitor tab.
Once you have created a masked VDB, you can provision its masked data to create additional VDBs, in the same way that you can
provision normal VDBs. Because the parent masked VDB contains masked data, descendant VDBs will contain only masked data. This is a
time- and space-efficient way to distribute multiple independent copies of masked data.
4. You will then be brought to the timeflow of the dSource from which the VDB was provisioned.
5. Select the snapshot or point in time to which you want to refresh.
6. Click Refresh.
Delphix will now update the masked VDB with the new data and mask it using the masking job with which this Masked VDB was provisioned.
Alter the database to contain masked data from a previous point in time.
Refresh
Get new data from the parent dSource and mask it.
Disable
Turn off the database and remove it from the host system.
Enable
Related Links
Troubleshoot Provisioning Errors for Masked VDBs
Quick Start Masking Engine Overview
Masking Engine Activities
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
Masking Engine Install, System Configuration, and Network Setup
If the error message is insufficient to diagnose the problem, you can view the full Masking Engine logs.
1. From the Masking Engine page, click Admin.
2. On the left-hand side of the screen, click Logs.
Related Links
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Procedure
1. Login User GET dmsuite/apiV4/login?user={userID}&password={encrypted passwd}
a. Returns authorization token in HTTP header that should be used in subsequent operations
2. Get Application GET /dmsuite/apiV4/applications
a. Returns the applications, and the environments associated with each application, in the response body. For example:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ApplicationsResponse>
<ResponseStatus>
<Status>SUCCESS</Status>
</ResponseStatus>
<Applications>
<Application>
<Name>demo</Name>
<Link href="applications/demo" rel="details"/>
<Environments>
<Link href="environments/1" rel="SAP"/>
</Environments>
<Environments>
<Link href="environments/37" rel="TEST"/>
</Environments>
</Application>
</Applications>
</ApplicationsResponse>
3. Get Job GET dmsuite/apiV4/applications/{applicationID}/jobs
a. Returns jobs in the response body. For example:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<JobsResponse>
<ResponseStatus>
<Status>SUCCESS</Status>
</ResponseStatus>
<Jobs>
<Profiles>
<Profile>
<Name>OracleProfile</Name>
<Link rel="details" href="applications/demo/profilejobs/0"/>
<Status>Succeeded</Status>
</Profile>
</Profiles>
<Provisions/>
<Maskings>
<Masking>
<Name>OracleMasking</Name>
<Link rel="details" href="applications/demo/maskingjobs/1"/>
<Status>Succeeded</Status>
</Masking>
</Maskings>
<Certifys/>
</Jobs>
</JobsResponse>
4. Run Job POST dmsuite/apiV4/applications/{applicationID}/maskingjobs/{maskingjobID}/run
a. Returns the job launch status in the response body. For example:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<MaskingsResponse>
<ResponseStatus>
<Status>SUCCESS</Status>
</ResponseStatus>
</MaskingsResponse>
b. For on-the-fly masking, pass the target connector in the request body: environments/{environmentID}/connectors/{connectorId}?DataSource={Database,File,Mainframe}
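The four calls above share a common URL scheme. The helper below assembles the endpoint URLs exactly as listed in this procedure; the base URL, application, and job identifiers are placeholders you supply, and the helper itself is a convenience sketch, not part of the product API:

```python
def masking_api_urls(base):
    """Build the apiV4 endpoint URLs used in this procedure.
    `base` is the engine address, e.g. "http://<IP or DNS name>:8282"."""
    root = base.rstrip("/") + "/dmsuite/apiV4"
    return {
        # 1. Login (returns the authorization token in an HTTP header)
        "login": lambda user, pw: f"{root}/login?user={user}&password={pw}",
        # 2. Get applications
        "applications": f"{root}/applications",
        # 3. Get jobs for an application
        "jobs": lambda app: f"{root}/applications/{app}/jobs",
        # 4. Run a masking job
        "run": lambda app, job: f"{root}/applications/{app}/maskingjobs/{job}/run",
    }
```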
Related Links
Quick Start Masking Engine Overview
Masking Engine Activities
Provisioning Masked VDBs
Masking API Calls to Run a Masking Job
Advanced Integrated Delphix Masking Workflow
To learn more or to get started, go to the Jet Stream Admin Guide and follow the procedures for the following activities:
Related Links
Selective Data Distribution Overview
Selective Data Distribution Use Cases
Selective Data Distribution User Interface
Configuring Selective Data Distribution
Selective Data Distribution and Failover
Jet Stream Admin Guide
Selecting Masked Data Sources in Data Templates
Selecting Masked Data Sources for Data Containers
Managing Connectors
The Connector List
Creating or Editing a Connector
Deleting Connectors
Database Connectors
File Connectors
Managing Jobs
Jobs on the Environment Overview Screen
Creating New Jobs
Creating a New Profiling Job
Creating a New Masking Job
Creating a New Certify Job
Creating a New Provisioning Job
Running and Stopping Jobs from the Environment Overview Screen
File Masking
Overview
File Formats
1. Mainframe and XML Files
2. Delimited, Excel, Fixed Files
Create a File Connector
Create a Ruleset
File Inventory
Tokenization
Creating a Tokenization Algorithm
Create a Domain
Create a Tokenization Environment
Create a Connection and Rule Set
Create the Rule Set and Apply File Format
Apply the Tokenization Algorithm
Create and Execute a Tokenization Job
Result Snapshot
Steps to Re-Identify
Result Snapshot
Monitor Jobs
Scheduler Tab
Scheduling Job(s) to Run
Job Completion E-mail Message
Settings Tab
Algorithms
Domains (Masking)
Profiler
Mapping
File Format
Remote Server
Admin Tab
Users
About
Risk Tab
Adhoc Reporting on the Delphix Repository
Index of Terms
Certify Data
After completing the other tasks and masking your data, you should create a job to certify your data on an ongoing basis. This alerts you if
unmasked data is introduced to a masked database.
You need to certify your data on a regular basis.
Certifying Data
1. Create a Certification job, as described in "Creating a New Certify Job."
2. Run the Certify job.
3. Confirm that no unmasked data has been introduced.
For detailed information about certifying data, see "About Certifying Data."
Click Create Rule Set to the upper right of the Rule Set screen.
The Create Rule Set screen appears (Figure 1.5 on next page). This screen lets you specify which tables belong in the Rule Set.
a. Enter a Name for your Rule Set.
b. Select a Connector name from the dropdown.
c. The list of tables/files for that connector appears.
d. Click individual tables/file names in the list to the right to select them, or click Select All in the bottom left to select all the tables.
e. Click Save.
You are returned to the Rule Set screen.
3. You may then need to define the Rule Set by modifying the table settings. For example:
For a table, you may want to filter data from the table.
For a file, you must select a File Format to use.
4. Either click Save to return to the Environments List/Summary screen, or click Save & View to display the Environment Overview screen (see "The Environment Overview Screen").
Importing an Environment
1. Click Import Environment at the upper right of the screen.
For detailed information about masking data, see "About Masking Data."
Profiling Data
1. Create a profiling job as described in "Creating a New Profiling Job."
2. Run the profiling job.
3. To view the inventory, click the Inventory tab while in an Environment Overview.
4. Examine the inventory to ensure that the profiling job has included everything you want to mask. (For example, if you selected a First Name field, you probably want the Last Name field as well.) You can see which columns were selected for masking by selecting the associated rule set.
5. Make sure you have included all sensitive data elements (for example, personal identifying information) from the table that you want to mask.
When a profiling job runs, it automatically updates the inventory for the given rule set. If you do not want the Profiler to automatically update the inventory, change the ID Method to User. For detailed information about profiling data, see "About Profiling Data."
What you do next depends on how you plan to mask your data. If you plan to mask your data in-place and you want to use Delphix Masking to provision your data, proceed with "Provision Data" next. If you plan to mask data on-the-fly, or have already provisioned your data outside of Delphix Masking, continue with "Mask Data."
Provision Data
If you are using a source file, instead of a source database, you do NOT provision your data. Skip this section.
Before you can provision or subset data, you must first create your source environment in Delphix Agile Data Masking (Create or Import an
Environment), define connections (Define Connections), define a rule set (Define a Rule Set), and create a target environment (Create or
Import an Environment).
Provisioning Data
1. Create a provisioning job as described in "Creating a New Provisioning Job."
2. Run the provisioning job.
3. Ensure you have the information you need.
For detailed information about provisioning data, see "About Provisioning (Subsetting Data)."
Start Masking
Delphix Masking is a Web application that you use within a browser window.
1. Enter your User Id and Password. User Id is not case-sensitive; Password is case-sensitive. If this is your first time logging in, the default Administrator is username delphix_admin, password Delphix_123.
2. Click Login. The Environments List/Summary screen appears. For detailed information about the Environments List/Summary screen, see "The Environment List/Summary Screen." The username appears in the upper right corner of the screen.
3. To log out of Delphix Masking, click Log Out to the right of the username.
Profiling Data
Profiling is one of the major components in Delphix. The objective of profiling is to identify the location of Non-Public Information (NPI) or sensitive
data.
The Delphix profiler uses two different methods to identify the location of sensitive data:
1. Search through the column names in the target database, by querying the database catalog (metadata).
2. Look at the data itself using a sampling algorithm, to see whether there is any sensitive data. This is especially useful for files, and for comment and notes fields in a database.
After you have defined an environment and a connection for your data source, you can profile the data. To do so, you create a profiling job (see
Creating a New Profiling Job ).
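The two identification methods can be illustrated with a small sketch. The search expressions, domains, and sampling threshold below are hypothetical stand-ins; the real expressions live in the Profiler settings.

```python
import re
import random

# Hypothetical search expressions, similar in spirit to the profiler's
# column-name matching; real expressions are configured in the Profiler.
NAME_PATTERNS = {
    "FIRST_NAME": re.compile(r"f(irst)?_?name", re.IGNORECASE),
    "SSN": re.compile(r"ssn|social", re.IGNORECASE),
}
SSN_VALUE = re.compile(r"^\d{3}-?\d{2}-?\d{4}$")

def profile_by_name(columns):
    # Method 1: match catalog metadata (column names) against the patterns.
    hits = {}
    for col in columns:
        for domain, pattern in NAME_PATTERNS.items():
            if pattern.search(col):
                hits[col] = domain
    return hits

def profile_by_sample(values, sample_size=100, threshold=0.8, seed=0):
    # Method 2: sample the data itself and flag the column if most sampled
    # values look like SSNs; useful for files and free-text comment fields.
    rng = random.Random(seed)
    sample = rng.sample(values, min(sample_size, len(values)))
    matches = sum(1 for v in sample if SSN_VALUE.match(v))
    return matches / len(sample) >= threshold
```

Here `profile_by_name(["CUST_FNAME", "ADDR"])` flags only the first column, while `profile_by_sample` flags a column of mostly SSN-shaped values even if its name reveals nothing.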
Masking Data
After you create an environment, connection, rule set, and inventory, you mask data.
To maintain Referential Integrity (RI), Delphix masks each field on itself. This repeatable masking automatically maintains RI (for verbatim
matches), even if it's between applications or platforms.
For example, if you want to match the values between a parent and children, simply select the same algorithm to mask them. This ensures that referential integrity is maintained within the same database. Furthermore, Delphix creates the integrity across database platforms (between SQL Server and DB2, for example) or across files (tab-delimited files) and relational data (a column in a SQL Server database): just select the same masking algorithm.
As a practical example, assume you have an SSN column in a Microsoft SQL Server database, an SSN column in a DB2 database, and an SSN
field in a tab-delimited file. If the SSN value was 111111111 across the two databases and the file, and you use the same SSN algorithm for all
three, the masked value (for example, 801-01-0838) will be the same for all three.
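The property that makes this work is determinism: the same input always produces the same masked output, regardless of where the value lives. The following sketch illustrates the idea with a keyed hash; it is illustrative only, not Delphix's built-in SSN algorithm, and the key shown is a placeholder.

```python
import hmac
import hashlib

def mask_ssn(value, key=b"demo-key"):
    # Deterministic, repeatable mapping: the same input always yields the
    # same masked output, so matching values stay matched across sources.
    # (Illustrative only; not Delphix's built-in SSN algorithm.)
    digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
    n = int.from_bytes(digest[:4], "big") % 1_000_000_000
    s = f"{n:09d}"
    return f"{s[:3]}-{s[3:5]}-{s[5:]}"
```

Applying `mask_ssn` to the same SSN stored in a SQL Server column, a DB2 column, and a tab-delimited file yields the same masked value in all three places, which is exactly how referential integrity is preserved verbatim.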
There are two ways to mask data. You can mask data on-the-fly or you can provision it first and then mask it. The following sections explain these
two options.
Figure 6 Delphix In-Place Masking Option
Masking In-Place
With in-place masking, production data that already exists in a nonproduction environment is masked, in place.
Advantages/Disadvantages:
The main advantage of in-place masking applies when you have provisioned data to a non-production environment that contains some production data. Delphix can mask the data in those existing environments. In-place masking masks only the columns you flag in the inventory, leaving the other columns alone.
The main disadvantage is that production data is copied potentially into a nonproduction environment while the masking takes place, so sensitive
data might exist in the nonproduction environment until the masking is complete.
On-The-Fly Masking
With on-the-fly masking, you specify the source of the information to be masked, and where the masked data will be loaded. On-the-fly masking is
an Extract Transform Load (ETL) process.
Delphix extracts the data from a source environment, such as a production copy, gold copy, or disaster recovery copy (Delphix reads only from a database, not from an archived file).
Delphix transforms, or masks, the data in the memory of the application server on which it resides, and then loads the masked data to the target
environment. Delphix does not modify the original source data; only the target data changes.
Advantages/Disadvantages:
One advantage to on-the-fly masking is that sensitive production data doesn't get persisted in any nonproduction environment. This method only
requires a production source and nonproduction target environment. Because on-the-fly masking uses all insert statements, it typically performs
better than in-place masking, which uses updates.
The main disadvantage to on-the-fly masking is that it requires an active connection to a source production environment or copy.
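The ETL flow described above can be sketched as follows, using SQLite purely for illustration; Delphix itself works against the supported DBMSs, and the table and masking function here are hypothetical.

```python
import sqlite3

def mask(value):
    # Stand-in for a masking algorithm (here: reverse the string).
    return value[::-1]

def mask_on_the_fly(source_db, target_db):
    # Extract rows from the source, transform (mask) them in memory, and
    # load them into the target with inserts. The source is never modified,
    # which is why inserts (not updates) make this faster than in-place.
    src = sqlite3.connect(source_db)
    tgt = sqlite3.connect(target_db)
    tgt.execute("CREATE TABLE IF NOT EXISTS customer (name TEXT)")
    rows = src.execute("SELECT name FROM customer").fetchall()
    tgt.executemany("INSERT INTO customer (name) VALUES (?)",
                    [(mask(name),) for (name,) in rows])
    tgt.commit()
    src.close()
    tgt.close()
```

The source connection is read-only in spirit: only SELECTs are issued against it, and all writes go to the target as bulk inserts.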
Certifying Data
After profiling and masking data, you want to monitor or audit the process (also known as certifying your data). This alerts you if unmasked data is
introduced to a masked database.
For example, if you mask your master customer database once a week, and an input file of unmasked data is introduced by mistake, you want to be
able to detect that. The purpose of the Delphix certification module is to identify such a situation. To do so, you create a Certification job against
that database (see Creating a New Certify Job).
The Certifying job goes through every row in the tables in a rule set and verifies that every value designated for masking in the inventory is
masked. The Certification job output lists the fields designated for masking, along with the result of the certification: Clean, Polluted, or Not
Applicable. Polluted data indicates that Delphix encountered a value that could potentially be an unmasked production value. Not Applicable
indicates that Delphix was unable to determine whether the value is masked.
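Conceptually, a Certify job reduces to a scan that classifies every value designated for masking. A minimal sketch follows; the looks_masked check is a hypothetical stand-in for the engine's own detection logic.

```python
def certify(values, looks_masked):
    # Walk every value designated for masking and classify it, mirroring
    # the Clean / Polluted / Not Applicable results of a Certify job.
    results = {"Clean": 0, "Polluted": 0, "Not Applicable": 0}
    for v in values:
        verdict = looks_masked(v)  # True, False, or None (undeterminable)
        if verdict is None:
            results["Not Applicable"] += 1
        elif verdict:
            results["Clean"] += 1
        else:
            results["Polluted"] += 1
    return results
```

A Polluted count above zero corresponds to the alert described above: a value that could potentially be an unmasked production value has entered the masked database.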
Managing Environments
Environments define the scope of work in Delphix. In order to mask or provision databases and files within Delphix, you first need to create an
Environment in which Delphix will store the connection information and masking and provisioning rules for those data stores. An environment can
contain multiple database connections and multiple file connections.
Exporting an Environment
You can export an environment from the Environment List/Summary screen. You can later import that environment to a different instance of
Delphix, such as a development test instance or a production instance.
To export an environment:
1. Click the Export icon.
2. The popup fills in the following items:
a. Environment Name
b. File Name.
3. Click Export.
All the information for the specified environment (connectors, rule sets, inventory, jobs, and so on) is exported to an XML file.
A status popup appears. When the export operation is complete, you can click on the Download file name to access the XML file.
This screen gives an overview of the Environment and the Environment Status. The left of the screen displays the environment Name, Purpose
(for example, DEV or QA), and the Application Name. The Environment Status lists the Current Status, and dates for Last Data Refresh, Last
Masked, Last Certified, and Last Profiled. The files listed on the right side of the window are PDFs of the last certification job (C) and the last
masking job inventory (M).
The body of the page displays all jobs currently defined for this environment, along with the status of the jobs (created, running, succeeded, or
failed). For information about Jobs and the icons on this screen, see Managing Jobs.
You can use the icons in the Jobs heading to create new jobs. See Monitor Jobs.
Managing Connectors
Delphix stores database connection information in an object called a "Connector." When in an Environment Overview, click the Connector tab to
view the list of connectors within an environment.
For each connection, you must manually define a corresponding connector with the same name.
Deleting Connectors
To delete a connector:
Click the Delete icon to the far right of the connector name.
When you delete a connector, you also delete its rule sets and inventory data.
Database Connectors
The fields that appear are specific to the DBMS Type you select. If you need assistance determining these values, please contact your database
administrator. All required fields are marked with an asterisk on the screen.
You can only create connectors for the databases and/or files listed. If your database or file type is not listed here, you cannot create a connector for it.
Kerberos Authentication (Sybase, Oracle, or DB2 only, optional) Whether to use a Kerberos connection to the database. This box is
clear by default. If this box is checked, the application code makes a Kerberos connection to the database instead of using a
login/password.
Connection Type (Oracle or MS SQL Server only) Choose a connection type:
Basic Basic connection information.
Advanced The full JDBC connect string.
Connection Name The name of the database connector (specific for your Delphix application).
For each Connection Name, you must manually define a corresponding connector with the same name.
Schema Name The schema that contains the tables that this connector will access.
Database Name The name of the database to which you are connecting.
Host Name / IP or Hostname/IP The network host name or IP address of the database server.
Username (Oracle only)
ODBC DNS Name (ODBC and Microsoft Access only)
Login ID The user login this connector will use to connect to the database.
Password The password associated with the Login ID or Username. (This password is stored encrypted.)
System Number (SAP only)
SAP Client (SAP only)
Language (SAP only)
Port The TCP port of the server.
SID (Oracle only) Oracle System ID (SID).
Instance Name (MS SQL Server only) The name of the instance. This is optional. If the instance name is specified, the connector
ignores the specified "Port" and attempts to connect to the "SQL Server Browser Service" on port 1434 to retrieve the connection
information for the SQL Server instance. If the instance name is provided, be sure to make exceptions in the firewall for port 1434 as well
as the particular port that the SQL Server instance listens to.
Server Name (Informix only) The name of the Informix server.
Custom Driver Name (Adabas and SQL Anywhere only) The name of the custom driver.
Custom JDBC URL (Adabas and SQL Anywhere only) The name of the custom JDBC URL.
All database types have a Test Connection button at the bottom left of the New Connector window. We highly recommend that you test your connection before you save it and before you leave this window. When you click Test Connection, Delphix uses the information in the form to attempt a database connection. When finished, a status message appears indicating success or failure.
File Connectors
The values that appear correlate to the File Type you select. All required fields are marked with an asterisk on the screen.
Connector Name The name of the file connector (specific to your Delphix application and unrelated to the file itself).
Connection Mode Local Files, SFTP, FTP, HTTP & HTTPS.
Path The path to the directory where the file(s) are located.
Operating System Choose the operating system on which the file resides: Windows or Linux. (This value does not appear for
Mainframe Copybooks.)
If you select SFTP or FTP for Connection Mode, the following additional values appear:
Server Name The name of the server used to connect to the file.
User Name The User Name to connect to the server.
Public Key Authentication (Optional) (Only appears for SFTP.) Check this box to specify a public key.
When you check this box, the Available Keys dropdown appears. Choose a key from the dropdown. (The path on the server to the location that contains the keys is configured in a Delphix property file.)
Password The associated Password for the server.
Port The Port used to connect to the server.
Mainframe Connectors
The fields that appear correlate to the File Type you select. If you need assistance determining these values, please contact your MVS administrator. All required fields are marked with an asterisk on the screen.
Connection Name The name of the file connector (specific to your application and unrelated to the file itself).
Host Name / IP The network host name or IP address of the PDS server.
FileName The source file fully qualified data set name, including "(0)" for generation data group files.
FileType The source file type: normal, VSAM, or GDG.
UserID The user login this connector will use to connect to the mainframe host system to access the PDS copybook files. For VSAM
files, use the cluster name.
Password The password associated with the Login ID or Username. (This password is stored encrypted.)
File DCB RECFM The source file record format; possible values: F, FB, FBA, V, VB, VBA.
BLKSIZE The source file block size, from 1 to 32760.
File DCB LRECL The source file logical record length, from 1 to 32760. If record format is fixed, must be a divisor of block size.
Header/Trailer Code The number of records to skip (copy and not mask) at the beginning and end of the source file. The format is in
the form "H,x,T,y", "H,x", or "T,y" where:
H and T are constants.
x is the number of header rows to skip.
y is the number of trailer rows to skip.
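The "H,x,T,y" format above can be parsed into header and trailer counts with a few lines; this is an illustration of the stated rule, not engine code.

```python
def parse_header_trailer(code):
    # Parse a Header/Trailer Code of the form "H,x,T,y", "H,x", or "T,y"
    # into (header_rows, trailer_rows) to skip at the start and end of
    # the source file.
    header, trailer = 0, 0
    parts = code.split(",")
    i = 0
    while i < len(parts):
        tag, count = parts[i].strip().upper(), int(parts[i + 1])
        if tag == "H":
            header = count
        elif tag == "T":
            trailer = count
        else:
            raise ValueError(f"unexpected tag {tag!r} in {code!r}")
        i += 2
    return header, trailer
```

For example, "H,1,T,2" means skip one header row and two trailer rows, while "T,4" skips only four trailer rows.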
Normal file types have a Test Connection button at the bottom left of the New Connector window. VSAM and GDG file types do not have a Test
Connection button.
We highly recommend that you test your connection before you leave this window. When you click the Test Connection button, Delphix uses the information in the form to attempt a mainframe connection. When finished, a status message appears indicating whether the attempt was successful or failed.
VSAM files are treated sequentially as ordinary files. IDCAMS uses the cluster name to create an ordinary file. This GOLDCOPY is used to load
the target dataset later.
To allocate the new file, the process uses record format, record length, and block size, as follows:
If the VSAM source record is fixed length, use FB and record length, and make block size the largest multiple of record length less than
27,998 (half-track blocking).
If the VSAM record is variable length, use VB, and use the maximum record length +4 as the record length, and use block size of 27,990.
This information pertains to processing the source records and allocating the GOLDCOPY; it does not apply to system records about the original
source.
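The allocation rules above amount to a small calculation; sketched in Python for illustration (the function name is hypothetical):

```python
def goldcopy_dcb(record_length, fixed):
    # Apply the allocation rules above: for fixed-length VSAM records use
    # FB with the largest multiple of the record length below 27,998
    # (half-track blocking); for variable-length records use VB, the
    # maximum record length + 4, and a block size of 27,990.
    if fixed:
        blksize = (27_997 // record_length) * record_length
        return ("FB", record_length, blksize)
    return ("VB", record_length + 4, 27_990)
```

For an 80-byte fixed record this yields a block size of 27,920 (349 records per block), the largest multiple of 80 under 27,998.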
Logical Key
If your table has no primary keys defined in the database, and you are using an In-Place strategy, you must specify an existing column or columns
to be a logical key. This logical key does not change the target database; it only provides information to Delphix. For multiple columns, separate
each column using a comma. Note: If no primary key is defined and a logical key is not defined, an identity column will be created.
To enter a logical key:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table whose filter you wish to edit.
3. On the left, select Logical Key.
4. Edit the text for this property.
5. To remove any existing code, click Delete.
6. Click Save.
Edit Filter
Use this function to specify a filter to run on the data before loading it to the target database.
To add a filter to a database rule set table or edit a filter:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table you want.
3. On the left, select Edit Filter.
4. Edit the properties of this filter by entering or changing values in the Where field.
Be sure to specify the column name with the table name prefix (for example, customer.cust_id < 1000).
5. To remove an existing filter, click Delete.
6. Click Save.
Custom SQL
Use this function to use SQL statements to filter data for a table.
To add or edit SQL code:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table you want.
3. On the left, select Custom SQL.
4. Enter custom SQL code for this table.
Delphix will run the query to subset the table based on the SQL you specify.
5. To remove any existing code, click Delete.
6. Click Save.
Table Suffix
To set a table suffix for a rule set:
1. In the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table for which you wish to set the suffix.
3. On the left, select Table Suffix.
4. The Original Table Name will already be filled in.
5. (Optional) Enter a Suffix date Pattern (for example, mmyy).
6. (Optional) Enter a Suffix Value, if you want to append a specific value.
7. (Optional) Enter a Separator (for example, _). This value will be inserted before the suffix value (for example, tablename_0131).
8. Click Save.
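The resulting table name is simply the original name, the separator, and the suffix. A sketch of the composition follows; the interpretation of pattern tokens such as mm and yy is an assumption based on the example above.

```python
from datetime import date

def table_name_with_suffix(original, date_pattern=None, value=None,
                           separator="", on=None):
    # Compose the target table name from the Table Suffix settings.
    # date_pattern uses tokens such as "mmyy" (two-digit month, then
    # two-digit year); this token mapping is an assumption.
    on = on or date.today()
    suffix = ""
    if date_pattern:
        suffix += (date_pattern.lower()
                   .replace("mm", f"{on.month:02d}")
                   .replace("dd", f"{on.day:02d}")
                   .replace("yy", f"{on.year % 100:02d}"))
    if value:
        suffix += value
    return f"{original}{separator}{suffix}" if suffix else original
```

For example, with pattern "mmyy", separator "_", and a January 2016 date, "customer" becomes "customer_0116".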
Add Column
Use this function to select a column or columns from a table when you don't want to load data to all the columns in a table.
To add a column to a database rule set table or edit a column:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table you want.
3. On the left, select Add Column.
4. Select one or more column names to include in the table. To remove a column, deselect it.
You can also choose Select All or Select None.
5. Select Save.
Join Table
Use this function to specify a SQL join condition so you can define primary key/foreign key relationships between tables.
To define or edit the join condition for a table:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table you want.
3. On the left, select Join Table.
4. Edit the properties for this join condition.
5. To remove an existing join condition, click Delete.
6. Click Save.
List
Use this function to select a list to use for filtering data in a table.
To add or edit a list:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the green edit icon to the right of the table you want.
3. On the left, select List.
4. Edit the text file properties for this list.
a. Select a column.
b. Enter or browse for a filename.
c. Files that have already been specified appear next to Existing File.
5. To remove an existing list file, click Delete.
6. Click Save.
Removing a Table
To remove a table from the rule set:
1. From the Rule Set screen, click the name of the desired rule set.
2. Click the red delete icon to the right of the table you want to remove.
If you remove a table from a rule set and that table has an inventory, that inventory will also be removed.
Inventory Settings
To specify your inventory settings:
1. On the left-hand side of the screen, select a Rule Set from the drop-down menu.
2. Below this, Contents lists all the tables or files defined for the rule set.
Figure 14 Inventory Screen
3. Select a table or file for which you would like to create or edit the inventory of sensitive data.
The Columns or Fields for that specific table or file appear.
4. If a column is a primary key (PK), Foreign Key (FK) or an index (IDX), an icon indicating this will appear to the left of the column name. If
there is a Note for the column, a Note icon will appear. To read the note, click the icon.
5. If a table, metadata for the column appears: Data Type and Length (in parentheses). This information is read-only.
6. Choose how you would like to view the inventory:
a. All Fields Displays all columns in the table or all fields in the file (allowing you to mark new columns or fields to be masked).
b. Masked Fields Filters the list to just those columns or fields that are already marked for masking.
7. Choose how to determine whether to mask/unmask a column:
a. Auto The default value. The profiling job can determine or update the algorithm assigned to a column and whether to mask the column.
b. User The user's choice overrides the profiling job. The user manually updates the algorithm assignment and the mask/unmask option of the column. The Profiler will ignore the column, so it will not be updated as part of the Profiling job. In order to use the Secure Shuffle algorithm, the user would select it as a User-defined algorithm and assign it to the specific column. Secure Shuffle automates the creation of a secure lookup algorithm by building a list of replacement values based on the existing unique values in the target column and creating a secure lookup using those values. In that respect, it is simply shuffling the values.
Defining Fields
Note: A file system must be selected from the Select Rule Set dropdown menu on the left, not a database.
To create new fields:
1. From an Environment's Inventory tab, click Define fields to the far right.
The Edit Fields window appears.
Figure 15 Define Fields Window
Once this is done, you will see the XML fields show up on the Inventory page so you can set up your masking. This is not a database.
To add a new Record Format:
1. From an Environment's Inventory tab, click Record Types towards the upper right.
The Record Type window appears.
2. Click +Add a Record Type towards the bottom of the window.
The Add Record Type window appears.
Managing Jobs
Delphix creates "jobs" to profile, provision, mask, and certify data.
1. Click Profile.
The Create Profiling Job window appears.
Figure 17 Create Profile Job
p. Feedback Size (optional) The number of rows to process before writing a message to the logs. Set this parameter to the
appropriate level of detail required for monitoring your job. For example, if you set this number significantly higher than the actual
number of rows in a job, the progress for that job will only show 0 or 100%.
q. Bulk Data (optional) For In-Place masking only. The default is for this check box to be clear. If you are masking very large
tables in-place and require performance improvements, check this box. Delphix will mask data to a flat file, and then use inserts
instead of updates to bulk load the target table.
r. Disable Constraint (optional) Whether to automatically disable database constraints. The default is for this check box to be clear and therefore not perform automatic disabling of constraints. For more information about database constraints, see Enabling and Disabling Database Constraints.
s. Batch Update (optional) Enable or disable use of a batch for updates. A job's statements can either be executed individually,
or can be put in a batch file and executed at once, which is faster.
t. Disable Trigger (optional) Whether to automatically disable database triggers. The default is for this check box to be clear
and therefore not perform automatic disabling of triggers.
u. Drop Index (optional) Whether to automatically drop indexes on columns which are being masked and automatically re-create
the index when the masking job is completed. The default is for this check box to be clear and therefore not perform automatic
dropping of indexes.
v. Prescript (optional) Specify the full pathname of a file that contains SQL statements to be run before the job starts, or click Browse to specify a file. If you are editing the job and a prescript file is already specified, you can click the Delete button to remove the file. (The Delete button only appears if a prescript file was already specified.) For information about creating your own prescript files, see Creating SQL Statements to Run Before and After Jobs.
w. Postscript (optional) Specify the full pathname of a file that contains SQL statements to be run after the job finishes, or click
Browse to specify a file. If you are editing the job and a postscript file is already specified, you can click the Delete button to
remove the file. (The Delete button only appears if a postscript file was already specified.) For information about creating your
own postscript files, see Creating SQL Statements to Run Before and After Jobs.
x. Comments (optional) Add comments related to this masking job.
y. Email (optional) Add e-mail address(es) to which to send status messages.
2. When you are finished, click Save.
For information about running jobs, see Running and Stopping Jobs from the Environment Overview Screen.
k. Max memory (MB) (optional) Maximum amount of memory to allocate for the job, in megabytes.
l. Commit Size (optional) The number of rows to process before issuing a commit to the database.
m. Feedback Size (optional) The number of rows to process before writing a message to the logs. Set this parameter to the
appropriate level of detail required for monitoring your job. For example, if you set this number significantly higher than the actual
number of rows in a job, the progress for that job will only show 0 or 100%.
n. Disable Constraint (optional) Whether to automatically disable database constraints. The default is for this check box to be clear and therefore not perform automatic disabling of constraints. For more information about database constraints, see Enabling and Disabling Database Constraints.
o. Batch Update (optional) Enable or disable use of a batch for updates. A job's statements can either be executed individually,
or can be put in a batch file and executed at once, which is faster.
p. Truncate (optional) Whether to truncate target tables before loading them with data. If this box is selected, the tables will be
"cleared" before the operation. If this box is clear, data is appended to tables, which potentially can cause primary key violations.
This box is clear by default.
q. Disable Trigger (optional) Whether to automatically disable database triggers. The default is for this check box to be clear
and therefore not perform automatic disabling of triggers.
r. Prescript (optional) Specify the full pathname of a file containing SQL statements to be run before the job starts, or click Browse to specify a file. If you are editing the job and a prescript file is already specified, you can click the Delete button to remove the file. (The Delete button only appears if a prescript file was already specified.) For information about creating your own prescript files, see Creating SQL Statements to Run Before and After Jobs.
s. Postscript (optional) Specify the full pathname of a file containing SQL statements to be run after the job finishes, or click Browse to specify a file. If you are editing the job and a postscript file is already specified, you can click the Delete button to remove the file. (The Delete button only appears if a postscript file was already specified.) For information about creating your own postscript files, see Creating SQL Statements to Run Before and After Jobs.
t. Comments (optional) Add comments related to this provisioning job.
u. Email (optional) Add e-mail address(es) to which to send status messages.
2. When you are finished, click Save.
File Masking
Delphix will mask a number of different file types and formats. These include fixed, delimited, Excel, Mainframe/VSAM, XML, Word, and PowerPoint. The purpose of this document is to provide an overview of general guidelines on how to successfully mask files using Delphix. This document does not replace Delphix training or the Delphix manual set; it is in addition to these items.
Overview
Delphix supports two masking methodologies: In-Place and On-The-Fly. In-Place requires a single file connection; Delphix will read from that file, mask the data in memory, and update the file with the masked data. On-The-Fly requires two file connections: one connection for the source file, and one connection to the target where the masked file will be placed. The target file name must exist. In this scenario, Delphix will read the file from the source connection, mask it in memory, and write the masked data to the target file.
File Formats
Unlike databases, files for the most part do not have built-in metadata to describe the format of the fields in the file. You must provide this to Delphix so it can update the file appropriately. This is done through the Settings tab, where you will see a menu item on the left for File Format. Select File Format and you will see options to create a file format or import a file format. Which you choose will depend on the type of file and how you want to let Delphix know the format of the file.
These connection modes (other than local) require additional information. We provide a Test Connection button to test the validity of the connection. If you are doing in-place masking, the file(s) will be masked and updated in the directory pointed to by the connector. If you plan to do on-the-fly masking, you will need to create a separate environment and connector to be the source for the files to be masked. The masked files will be placed in the directory pointed to by the connector you created previously (the target). However, the file name must exist in the target directory. It does not have to be a copy of the file, just an entry in the directory with the same name. It will be replaced by the masked file.
Create a Ruleset
Once you create a connector, you can click the Ruleset tab and create a ruleset. Click Create Ruleset, give it a name, and select the file
connector you previously created. Once you do this, you will see a list of the files the connector points to. You can select a single file, multiple
files, or all of the files. When you save, the ruleset is stored with the selected file or files.
Once you create a ruleset with a file or set of files, you will need to assign those files to their appropriate file formats. This is accomplished by
editing the ruleset. When you click the edit button for a file, a popup screen called Edit File will appear with the file name. There is a
dropdown for the format, so you can select the proper format for the file. Select the end-of-record to let Delphix know whether the file is in
Windows/DOS format (CR+LF) or Linux format (LF). If the file is a delimited file, you will have a space to enter the delimiter. If the file is a
Mainframe/VSAM file with a copybook, you will see a checkbox to signify whether the file is variable length. If there are multiple files in the ruleset, you will
have to edit each one individually and assign it the appropriate file format.
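The end-of-record setting matters because the engine must write the file back with the same convention it was read with. As a quick illustrative check (not a Delphix feature), the convention a file uses can be sniffed from its raw bytes:

```python
def detect_end_of_record(path: str) -> str:
    # Read raw bytes so line endings are not translated by text mode.
    with open(path, "rb") as f:
        data = f.read()
    return "CR+LF (Windows/DOS)" if b"\r\n" in data else "LF (Linux)"
```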
File Inventory
For XML or Copybook files, once you select the ruleset and the file you will see the inventory for the file and you can edit this inventory with the
appropriate masking settings like any Delphix data source by either using the profiler or setting this manually.
For Excel, delimited, or fixed files, if you created the file format by importing it, the format for the file is already set. When you go to the inventory
page and select the ruleset and file, you will see a line showing all the records, which you can expand to see the inventory. If your file has a
header and/or footer, you will need to click Record Type, click Add Record Type, and select Header and/or Footer from the dropdown box.
Then enter a name and the number of rows/lines. Now you can assign the appropriate masking algorithms, either by running the profiler or by
setting them manually in the All Records section.
If you did not import a format and instead created a file format with the Create Format button, you will have to enter the actual layout of the file into
Delphix. This can be done for Excel, delimited, and fixed files, as follows:
1. Navigate to the inventory screen and select the appropriate ruleset and file.
2. Click the Record Type button and add the appropriate record types. You will have to add a Body record type: select Body from the dropdown and give it a name (for example, Body).
3. If you have a file with the columns defined, you can import it using the Import button. If not, just save.
4. If your file has a header or footer, you can add those next.
5. If you did not import the format, you will have to enter it manually. To do this, click the Define Fields button; when the screen pops up, enter the field name, and choose the record type (Body) and the position in the record. If the file is a fixed-length file, you will also have to enter the length of the field.
6. You can optionally set the masking here as you enter the fields, or you can do this later with the profiler. Enter all the fields and you will be set.
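The name, position, and length entered on the Define Fields screen are exactly what is needed to slice a fixed-length record apart. The steps above can be sketched as follows; the `layout` and its field names are hypothetical, not from a real format:

```python
from typing import NamedTuple

class Field(NamedTuple):
    name: str
    position: int  # 1-based position, as on the Define Fields screen
    length: int    # required for fixed-length files

def parse_fixed_record(record: str, fields: "list[Field]") -> "dict[str, str]":
    # Slice each field out of the record by its 1-based position and length.
    return {
        f.name: record[f.position - 1 : f.position - 1 + f.length].strip()
        for f in fields
    }

# Hypothetical 25-byte layout: account (1-8), name (9-20), zip (21-25).
layout = [Field("account", 1, 8), Field("name", 9, 12), Field("zip", 21, 5)]
```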
Tokenization
This section describes how to create and manage tokenization jobs. Tokenization uses reversible algorithms so that the data can be returned to its original
state. Tokenization is a form of encryption in which the actual data, such as names and addresses, is converted into tokens that have similar
properties to the original data (text, length, etc.) but no longer convey any meaning.
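As an illustration of the reversible, property-preserving idea only (this is not the algorithm Delphix uses), a ROT13 transform keeps the length and character class of a value while destroying its meaning, and applying it a second time restores the original:

```python
import codecs

def tokenize(value: str) -> str:
    # ROT13 keeps length and character class (letters stay letters), so the
    # token "looks like" the original but no longer conveys meaning.
    return codecs.encode(value, "rot13")

def re_identify(token: str) -> str:
    # ROT13 is its own inverse, so re-identification is the same transform.
    return codecs.decode(token, "rot13")
```

A production tokenization algorithm would use keyed, cryptographically strong transforms; the point here is only the round trip.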
Create a Domain
Once you have created an algorithm, you will need to associate it with a domain.
1. From the Home page, click Settings.
2. Select Domains.
3. Click Add Domain. You will see the popup below:
Note
This environment will be used to re-identify your data when required.
Result Snapshot
Here is a snapshot of the data before and after tokenization to give you an idea of what it will look like.
Before Tokenization
After Tokenization
Steps to Re-Identify
Use the Tokenize/Re-Identify environment.
1. From the Home page, click Environments.
2. Click Re-Identify.
3. Create a Re-Identify job and execute it.
Result Snapshot
Here is a snapshot of the data before and after re-identification to give you an idea of what to expect.
Before Re-Identification
After Re-Identification
Monitor Jobs
Click the Monitor tab at the top of the screen to display all of the jobs defined to Delphix.
This screen provides an overview of job activity within the entire Delphix application, and also provides a mechanism to view execution results
and to run or rerun jobs. You will only see jobs associated with environments for which you have the appropriate role definition. If any job does not
succeed, you can correct the errors and then rerun the job.
Figure 21 Monitor Screen
Scheduler Tab
Scheduling Job(s) to Run
To schedule new job(s)
To edit a schedule
To delete a schedule
Enabling and Disabling Database Constraints
Creating SQL Statements to Run Before and After Jobs (For Distributed Environment)
Click the Scheduler tab at the top of the screen to display the list of jobs scheduled to run. This screen provides an overview of scheduled jobs
and lets the user configure schedules for jobs to run.
The following columns appear on the Scheduler screen:
Groups
Status
Start
End
Frequency
Edit
Delete
To search for a job group:
1. Enter a job group name in the Search field.
2. Click Search.
Upon completion of each job, an e-mail message that contains the job start and end times, along with the completion status, is sent to the user whose
e-mail address is specified in the Email field (in the Edit Job window).
To edit a schedule
1. From the Scheduler tab, click the Edit icon to the right of the schedule you want.
To delete a schedule
1. Click the Delete icon to the right of the schedule you want.
Creating SQL Statements to Run Before and After Jobs (For Distributed Environment)
When you create a masking job or a certification job, you can specify SQL statements to run before the job runs (a prescript) and/or after
the job has completed (a postscript). For example, if you want to provision a schema from the source to a target, you would use a prescript (SQL
statements) to disable constraints and truncate data on the target.
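As an illustrative sketch of that scenario (the table and constraint names are hypothetical), a prescript file might contain the following, and the naive split shows how semicolon-separated statements come apart into individually executable statements:

```python
# Hypothetical prescript contents for the provisioning scenario above.
prescript = (
    "ALTER TABLE customers DISABLE CONSTRAINT fk_customers_region;"
    "TRUNCATE TABLE customers"
)

# Statements are separated by semicolons; splitting on [;] and dropping
# empty entries yields one executable statement per item.
statements = [s.strip() for s in prescript.split(";") if s.strip()]
```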
You create prescripts and postscripts by creating a text document with the SQL statement(s) to execute. If the text file contains more than one SQL
statement, each statement must be separated by a semicolon [;], EXCEPT when variables are being used in the script. Any time variables are
used, a semicolon should not be used between statements until those variables are no longer needed. For example:
Settings Tab
This user guide only gives an overview of the Delphix settings. For more detailed information, see the Delphix Administrator's Guide.
Click the Settings tab at the top of the screen to view or change Delphix settings.
There are several areas to which settings are applied:
Algorithms
Types of Algorithms
Domains (Masking)
Profiler
Mapping
File Format
Remote Server
Algorithms
The main methods used by Delphix algorithms are secure lookup and segmented mapping. Delphix also includes some algorithms for specific
types of data, such as phone numbers and dates. These standard Delphix algorithms are available if you select Delphix as the generator.
From the Settings tab, if you click Algorithm to the left, the list of algorithms will be displayed.
Types of Algorithms
Secure Lookup Algorithm: Uses a lookup file to assign masked values in a consistent manner. The design of the algorithm introduces
intentional collisions.
Segmented Mapping Algorithm: Replaces data values based on segment definitions. For example, an ACCOUNT NUMBER algorithm
might keep the first segment of an account number but replace the remaining segments with a random number.
Mapping Algorithm: Sequentially maps original data values to masked values that are pre-populated to a lookup table through the
Delphix user interface.
Binary Lookup Algorithm: Much like the Secure Lookup Algorithm, but used when entire files are stored in a specific column.
Tokenization Algorithm: Replaces the data value with an algorithmically generated token that can be reversed. These are only used
when you create a tokenization environment.
Min/Max Algorithm: Allows you to make sure all the values in the database are within a specified range. It prevents
unique identification of individuals by characteristics that are outside the normal range, such as age over 99.
Data Cleansing Algorithm: If the target data needs to be put in a standard format prior to masking, you can use this algorithm. For
example, Ariz, Az, and Arizona can all be cleansed to AZ.
Free Text Redaction Algorithm: This algorithm masks or redacts free text columns of files. It uses either a whitelist or a blacklist to
determine which words are masked or not masked. This algorithm may require additional configuration to work in the manner you desire.
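The data cleansing idea amounts to a normalization lookup applied before masking. A minimal sketch, using the Arizona example above with a made-up cleansing map:

```python
# Hypothetical cleansing map standardizing state spellings before masking.
CLEANSING_MAP = {"ariz": "AZ", "az": "AZ", "arizona": "AZ"}

def cleanse(value: str) -> str:
    # Normalize case and whitespace, then standardize; values not covered
    # by the map pass through unchanged.
    return CLEANSING_MAP.get(value.strip().lower(), value)
```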
These Delphix Algorithm Frameworks give you the ability to quickly and easily define the algorithms you want, directly on the Settings tab. Then
you can immediately propagate them; anyone in your organization who has Delphix can then access them.
Domains (Masking)
Domains specify certain data to be masked with a certain algorithm.
From the Settings tab, if you click Domains to the left, the list of domains will be displayed. From here, you can add, edit, or delete domains. You
can set a domain's algorithm here; to change a domain's selection expressions or to group domains into sets, continue to the next section.
Profiler
The profiler is used to group domains into Profile Sets (aka Profiles) and assign expressions to domains. Profile Sets can be used in Profiling
Jobs.
From the Settings tab, if you click Profiler to the left, a list of expressions will be displayed. From here, you can work with:
Expressions: Expressions are used to specify what data is sensitive. This can be done at either the column or data level. Each
expression is assigned to a domain, and domains can be grouped into profile sets. You can add, edit, or delete expressions from the
Profiler Settings page.
Profile Sets (Profiles): A profile set is a group of domains. Said another way, it is a group of certain data to be masked in certain ways.
You can add, edit, or delete profiles by clicking + Profiler Set at the top of the Profiler Settings page.
Mapping
From the Settings tab, if you click Mapping to the left, the list of mappings will be displayed. From here you can add, edit, or delete mappings.
File Format
File formats are a way of organizing the types of files to be masked. Before a file can be masked, it needs to have a file format assigned to it.
From the Settings tab, if you click File Format to the left, the list of file formats will be displayed. From here, you can add or delete file formats.
To assign a file format:
1. In the Rule Set screen, select a rule set.
2. Click the green edit icon to the right of a file.
Remote Server
The Delphix Engine typically executes jobs on a local instance. Remote servers are for executing jobs elsewhere.
From the Settings tab, if you click Remote Server to the left, the list of remote servers will be displayed. From here, you can add, edit, or delete
remote servers.
Admin Tab
For more detailed Admin information, see the Delphix Administrator's Guide.
Click the Admin tab at the top of the screen for administrator settings and information.
Users
From the Admin tab, if you click Users to the left, the list of users will be displayed. From here you can add, edit, or delete users.
Along with regular user information (name, username, email etc.), users have permissions. You can set these permissions by the user's role and
what environments they can access.
About
From the Admin tab, if you click About to the left, the list of information about your current Delphix installation is shown:
Delphix Version
Operating System
Application Server
Database
Masking
Java Version
Expiration Date
Licensed Data Sources
The following figure shows a sample of the information displayed in the About section:
Figure 23 About Section
Risk Tab
The screenshot below shows user entitlement information for Oracle databases.
Utilization Reports
The Utilization Screen
Security
Storing Database Passwords
Authenticating Users
Authorizing Users (Roles)
Configuration
Configuring Masking Engine to use Active Directory
Configuring Log File Locations
Configuring the Default Port
Restarting Masking Engine
Troubleshooting
Memory Usage
Stack Traces
Application Server Down
Database Server Down
Backups and Recovery
Administration
As a Masking Engine Administrator, you specify what information (data elements) to mask, how to mask the data (the algorithms to use), the
location of the data to mask (regular expressions and profiler settings), and the roles or privileges for Masking Engine users. You perform all this
within Masking Engine and you can then propagate it across all of your organization's departments.
A domain is a virtual representation of a data element. An integral part of the data masking process is to use algorithms to mask each data
element. The way you specify which algorithm to use on each individual data element is by creating a unique domain for each element. You do
this on the Domains tab. You define a unique domain for each element and then associate the classification and algorithm you want to use for
each domain.
In addition to using the Domain settings to determine your inventory of what to mask, a Profiling job uses expressions to identify the data you are
seeking. A regular expression is a special text string that defines a search pattern. You can also group expressions into profiler sets, which are
defined for a given target, such as financial services or health care.
Masking Engine has a built-in Administrator role, which gives a user complete access to Masking Engine functions. You can also add roles to the
Roles Settings. Perhaps you want to define an analyst or developer role, so someone can create masking jobs, or an operator role, to make sure
jobs are run consistently.
Managing Settings
The Settings Screen
Display the Settings screen by clicking the Settings tab at the top of any Masking Engine screen.
You must have the appropriate user privileges to see this screen.
The Settings screen has the following tabs:
Algorithm: Define the algorithms to use to mask your data.
Domains: Define domains and choose their classification and default masking algorithm.
Profiler: Define expressions and groupings of expressions used to create your inventory.
Roles: Define user roles and privileges, such as edit and delete.
Mapping: Define mapping rules.
File Format: Define the file format definitions and format types.
Remote Server: This is an add-on feature for Masking Engine Standard Edition. Define the remote server(s) that will execute jobs.
To add an algorithm:
1. Click Add Algorithm at the top right of the Algorithm tab.
Select Algorithm Type Popup
Note
Masking Engine supports lookup files saved in ASCII or UTF-8 format only. If the lookup file contains foreign alphabet characters, the
file must be saved in UTF-8 format for Masking Engine to read the Unicode text correctly.
Segmented mapping algorithms let you create unique masked values by dividing a target value into separate segments and masking each segment
individually. Optionally, you can preserve the semantically rich part of a value while providing a unique value for the remainder. This is especially
useful for primary keys or columns that need to be unique because they are part of a unique index.
When using segmented mapping algorithms for primary and foreign keys, to make sure they match, you must use the same segmented mapping
algorithm for each.
Numeric segment type: If you do not specify a range, Masking Engine uses the full range. For example, for a 4-digit segment, Masking Engine uses 0-9999.
Alpha-Numeric segment type:
Min#: A number from 0 to 9; the first value in the range.
Max#: A number from 0 to 9; the last value in the range.
MinChar: A letter from A to Z; the first value in the range.
MaxChar: A letter from A to Z; the last value in the range.
Range#: A range of alphanumeric characters; separate values in this field with a comma (,). Individual values can be a number
from 0 to 9 or an uppercase letter from A to Z. (For example, B,C,J,K,Y,Z or AB,DE.)
If you do not specify a range, Masking Engine uses the full range (A-Z, 0-9). If you do not know the format of the input, leave the range fields
empty. If you know the format of the input (for example, always alphanumeric followed by numeric), you can enter range values such as A2 and
S9.
When determining a numeric or alphanumeric range, remember that a narrow range will likely generate duplicate values, which will cause
your job to fail.
To ignore specific characters, enter one or more characters in the Ignore Character List box. Separate values with a comma.
To ignore the comma character (,), select the Ignore comma (,) check box.
To ignore control characters, select Add Control Characters.
The Add Control Characters window appears.
Add Control Characters Window
Select the individual control characters that you would like to ignore, or choose Select All or Select None.
When you are finished, click Save.
You are returned to the Segmented Mapping pane.
9. Preserve Original Values by entering Starting position and length values. (Position starts at 1.)
For example, to preserve the second, third, and fourth values, enter Starting position 2 and length 3.
If you need additional value fields, click Add.
10. When you are finished, click Save.
11. Before you can use the algorithm (specify it in a profiling or masking job), you must add it to a domain. If you are not using the Masking
Engine Profiler to create your inventory, you do not need to associate the algorithm with a domain.
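Steps 9 and 10 above can be pictured as a function that keeps a 1-based span of the original value and replaces every other position. This sketch uses random digits and is not the engine's actual implementation:

```python
import random

def segment_mask(value: str, preserve_start: int, preserve_len: int,
                 rng: random.Random) -> str:
    # Keep the span beginning at the 1-based preserve_start, for
    # preserve_len characters; replace every other position with a
    # random digit, keeping the overall length intact.
    lo = preserve_start - 1
    hi = lo + preserve_len
    return "".join(
        ch if lo <= i < hi else str(rng.randrange(10))
        for i, ch in enumerate(value)
    )
```

With Starting position 2 and length 3, the second, third, and fourth characters survive masking unchanged, matching the example in step 9.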
Mapping Algorithm
A mapping algorithm sequentially maps original data values to masked values that are pre-populated to a lookup table through the Masking
Engine user interface. With the mapping algorithm, you must supply, at minimum, the same number of values as the number of unique values you
are masking; more is acceptable. For example, if there are 10,000 unique values in the column you are masking, you must give the mapping
algorithm at least 10,000 values.
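That sizing requirement can be expressed as a simple pre-flight check. This is an illustrative helper, not part of the product:

```python
def check_lookup_size(originals: "list[str]", lookup_values: "list[str]") -> bool:
    # A sequential mapping needs at least one pre-populated lookup value
    # for every unique original value; more than that is acceptable.
    return len(lookup_values) >= len(set(originals))
```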
Tokenization Algorithm
Tokenization uses reversible algorithms so that the data can be returned to its original state. Tokenization is a form of encryption in which the actual
data (e.g., names and addresses) is converted into tokens that have similar properties to the original data (text, length, etc.) but no longer convey
any meaning.
Create a Domain
3. Select Tokenize/Re-Identify as the purpose and click Save. Note: This environment will be used to re-identify your data when required.
4. Set up a Tokenize job using the tokenization method, then execute the job.
Result Snapshot
Here is a snapshot of the data before and after tokenization to give you an idea of what it will look like.
Before Tokenization
After Tokenization
Masking Engine includes several default domains and algorithms. These appear the first time you display the Masking Settings tab. Each
domain has a classification and masking method assigned to it. You might choose to assign a different algorithm to a domain, but each domain
name is unique and can only be associated with one algorithm.
If you create additional algorithms, they will appear in the Algorithms dropdown. Because each algorithm used must have a unique domain, you
need to add a domain or reassign an existing domain in order to use any other algorithms. If you create mapplets, you need to follow the
mapplet integration instructions to integrate them and add them to the Algorithms dropdown list.
To add a Domain:
1. Click Add Domain at the top of the Domains tab.
A new domain will be created in-line.
Add Domain Window
5. Click Save.
Column Description (column-name patterns):
(i:ad(ddress)_line1
ad(ddress)1
city_ad(ddress)
Data Description (data-value patterns):
(.[\s]+b(ou)?l(e)?v(ar)?d[\s].*)
(.[\s]+st[.]?(reet)?[\s].*)
(.[\s]+ave[.]?(nue)?[\s].*)
(.[\s]+r(oa)?d[\s].*)
(.[\s]+l(a)?n(e)?[\s].*)
(.[\s]+cir(cle)?[\s].*)
(?i)(.[\s]*ap(ar)?t(ment)?[\s]+.)
(.[\s]*s(ui)?te[\s]+.)
(c(are)?[\s][\\\\\]?[/]?o(f)?[\s]+.)
For sample expressions and tools, see http://www.regular-expressions.info/ or perform an Internet search for "regular expressions".
(Disclaimer: We have provided this resource as a suggestion. Axis Technology does not endorse this or any other related site.)
To add an expression:
1. Click Add Expression at the top of the Profiler tab. A new expression will be created in-line.
2. Select a domain from the Domain dropdown.
Note: Only the default Masking Engine domains and the domains you have defined appear in this dropdown. If you need to add a domain, see
page 28.
3. Enter the following information for that domain:
Expression Name: The field name used to select this expression as part of a profiler set.
Expression Text: The regular expression used to identify the location of the sensitive data.
4. Select an Expression Level for the domain:
Column Level: To identify sensitive data based on column names.
Data Level: To identify sensitive data based on data values, not column names.
5. When you are finished, click Save.
Add Expression Window
To delete an expression:
Click the Delete icon to the far right of the name.
If the expression is at a data level, you can look for common names such as John and Mary:
(([Jj][Oo][Hh][Nn])|([Mm][Aa][Rr][Yy]))
This expression looks for the names John and Mary in the database. If Masking Engine finds any, it identifies that as a First Name column.
You can also search based on format. For instance, you can look for a Social Security number by looking for nine digits of data with two hyphens
(at positions 4 and 7): ^\d{3}-\d{2}-\d{4}$
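Both expressions can be exercised against sample values with Python's re module. The name pattern below is a balanced form of the expression shown above, and the SSN pattern includes the two hyphens the text describes:

```python
import re

# Data-level expression matching the names John and Mary (any case).
FIRST_NAME = re.compile(r"(([Jj][Oo][Hh][Nn])|([Mm][Aa][Rr][Yy]))")
# Format-based expression: nine digits with hyphens at positions 4 and 7.
SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

samples = ["john", "MARY", "123-45-6789", "alice"]
flagged = [s for s in samples if FIRST_NAME.search(s) or SSN.match(s)]
# flagged -> ['john', 'MARY', '123-45-6789']
```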
Adding Roles
To add a role:
1. Click Add Roles near the top of the Roles tab.
2. Enter a Role Name. The types of privileges appear across the top of the table, corresponding to the columns of check boxes:
View
Add
Update
Delete
Copy
Import
Export
The far-left column lists the items for which you can set privileges.
3. Select the check boxes for the corresponding privileges that you want to apply. If there is no check box, that privilege is not available.
For example, if you want this Role to have View, Add, Update, and Run privileges for Masking jobs, select the corresponding check boxes
in the Masking Job row.
4. When you are finished assigning privileges for this Role, click Submit.
Add Role Window
Adding Mappings
To add a new mapping:
1. Click Add Mapping at the upper right. The Add Mapping Rule window appears.
2. Select a Mapping Type.
3. Enter a Mapping Name.
4. Enter values for Input and Output.
5. Select a Mapping File from the filesystem.
6. Click Submit.
Add Mapping Rule Window
2. Enter a name for the remote server in the Remote Server Name field.
3. In the Host Name/IP field, enter the name of the remote server host or the IP address of the remote server.
4. Enter the Port on which the remote client is listening for job requests.
5. Enter a User Name to access the remote server.
6. Enter the Password for the specified User Name.
7. In the Remote Application Home field, enter the path on the remote server to the home directory for the Masking Engine client.
8. Click Submit.
Click the Edit icon to the right of the Remote Server Name.
Managing Users
The Users Screen
Click the Admin tab at the top and then the Users tab on the left of the screen to display the list of users defined in the Masking Engine
installation.
Users Tab
To edit a user
1. Click on the User Name in the user list. The User Profile pane appears.
2. Modify the settings as you would for a new user.
3. Click Save.
To delete any user
Utilization Reports
The Utilization Screen
Click the Admin tab at the top and then the Utilization tab on the left to bring up the utilization screen.
Utilization Screen
Security
The following sections describe security actions:
Storing Database Passwords
Authenticating Users
Authorizing Users (Roles)
Configuring a Boot Password
Configuring a Security Banner
Authenticating Users
If you choose to use Masking Engine internal authentication, Masking Engine uses encryption and stores passwords for each user encrypted in
the Masking Engine relational repository.
When a user logs in to Masking Engine and enters their username and password, Masking Engine verifies that the user is an active user with
Masking Engine, and then authenticates their password.
Optionally, Masking Engine can integrate with external authentication software (Microsoft Active Directory, CA SiteMinder, or LDAP) to
authenticate users. If you integrate with external authentication software, Masking Engine will validate that the user has rights to access the
application and will log in the user automatically. (No additional Masking Engine password will be required.)
Procedure
2. Switch to the service security context and execute the update command:
delphix service security update *> set banner="Use is subject to license terms."
The banner is in plain text. HTML or other markup is not supported.
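Taken together with the CLI conventions used elsewhere in this guide, the full session might look like the following sketch. The login step and the final commit are assumptions based on the standard Delphix CLI workflow and are not part of the original steps:

```
ssh delphix_admin@<Delphix Engine>
delphix> service security
delphix service security> update
delphix service security update *> set banner="Use is subject to license terms."
delphix service security update *> commit
```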
User Security
The following sections describe security actions:
Storing Database Passwords
Authenticating Users
Authorizing Users (Roles)
Configuration
Configuring Masking Engine to use Active Directory
Masking Engine can be configured to use the Active Directory environment to manage the login process. This is accomplished by modifying one
of the Masking Engine property files with the appropriate information to communicate with the Active Directory infrastructure.
Configuration Steps
1. Before you configure Masking Engine to use AD, create a user in Masking Engine using your AD username. The Masking Engine username must exactly match your AD username, because this is what will be sent to AD for validation. You must enter a password, but it will not be used once AD is turned on. This user should be an administrator in Masking Engine, because it will be the only valid user until more AD users are created.
2. Once this user is created, bring down Masking Engine.
3. Once Masking Engine is stopped, edit the dm-util.properties file, located in the <Masking Engine_home>\conf directory.
4. Scroll down in the file until you come to the following section:
#LDAP CONFIGURATION.
LDAP_ENABLE=0
LDAP_HOST=10.10.10.31
LDAP_PORT=389
LDAP_BASEDN=DC=tbspune,DC=com
LDAP_FILTER=(&(objectClass=person)(sAMAccountName=?))
LDAP_ANONYMOUS=false
MSAD_DOMAIN=AD
LDAP_KERBEROS_AUTH=true
LDAP_USERID_ATTR=msfwid
5. Set the following entries:
LDAP_ENABLE=1
LDAP_HOST=xxx.xxx.xxx.xxx (your AD host IP address)
LDAP_PORT=389 (your AD host port, this is normally 389)
LDAP_BASEDN=xxx (your AD environment's base DN)
LDAP_KERBEROS_AUTH=false (disable Kerberos if your environment does not use it)
6. Save the file.
7. Restart Masking Engine.
8. Once Masking Engine comes up, you should be able to log in using your AD login and password. If this does not work, there are a few possible causes:
a) You did not enter your username in Masking Engine exactly the way AD expects it. To fix this, bring Masking Engine down, edit the dm-util.properties file to set LDAP_ENABLE=0, and save the file. Restart Masking Engine, log in with internal authentication, and correct the AD username. Edit the property file again to set LDAP_ENABLE=1 and save the file. Restart Masking Engine and try the login again.
b) It is possible that your Active Directory environment is customized. We have run into this before; in that case you will need to open a support ticket and have your Active Directory support people available for consultation.
Setting Notes:
Multiple AD domains can be supported by specifying the MSAD_DOMAIN property as a comma-separated list of AD domains. For example: MSAD_DOMAIN=AD,TEST,DEMO
LDAP_USERID_ATTR is only used when LDAP_KERBEROS_AUTH=true
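The property edits in steps 4-5 can also be scripted. The following is a minimal sketch, not part of the product, assuming dm-util.properties uses simple KEY=VALUE lines; the host and base DN values are placeholders for your environment:

```python
import re

def set_properties(text, updates):
    """Replace KEY=VALUE lines in a simple .properties file body.

    Keys not present in the file are appended at the end; all other
    lines (comments, unrelated keys) pass through unchanged.
    """
    remaining = dict(updates)
    lines = []
    for line in text.splitlines():
        m = re.match(r"([A-Za-z0-9_.]+)=", line)
        if m and m.group(1) in remaining:
            key = m.group(1)
            lines.append(f"{key}={remaining.pop(key)}")
        else:
            lines.append(line)
    lines.extend(f"{k}={v}" for k, v in remaining.items())
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    original = (
        "#LDAP CONFIGURATION.\n"
        "LDAP_ENABLE=0\n"
        "LDAP_HOST=10.10.10.31\n"
        "LDAP_PORT=389\n"
        "LDAP_KERBEROS_AUTH=true\n"
    )
    print(set_properties(original, {
        "LDAP_ENABLE": "1",
        "LDAP_HOST": "10.0.0.5",             # placeholder AD host address
        "LDAP_BASEDN": "DC=example,DC=com",  # placeholder base DN
        "LDAP_KERBEROS_AUTH": "false",
    }))
```

Run it against a copy of the file first; it does not handle multi-line property values.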
/conf/log4j.properties
2. Modify the following key in the file:
log4j.appender.R.File =
For example:
log4j.appender.R.File = C:/Tomcat 6.0/logs/Masking Engine/Masking Engine.log
3. Save and close the properties file.
4. Restart your application server.
1. Go to the following location, where <bea_server_root> is the location of your application server root folder:
<bea_server_root>/userprojects/domains/Masking Engine_domain/bin
For example:
bea_Masking Engine/apache_tomcat_6.0.18/userprojects/domains/Masking Engine_domain/bin
2. Execute the startupWebLogic.cmd file.
1. Go to the following location, where <tomcat_home> is the directory with the tomcat installation:
/<tomcat_home>/conf
For example:
Masking Engine/apache_tomcat_6.0.18/conf
2. The conf folder is at the same level as the bin folder.
3. Modify the port attribute in the following line of the server.xml file:
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
For example, to change the default port from 8080 to 8443, set port="8443".
4. Save and close the file.
5. Restart your application server.
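Step 3's edit can also be made programmatically. This is a sketch using Python's standard-library XML parser, illustrative only; it assumes the stock server.xml structure, and attribute formatting may differ slightly after rewriting:

```python
import xml.etree.ElementTree as ET

def change_connector_port(xml_text, old_port, new_port):
    """Return server.xml content with the matching Connector's port changed."""
    root = ET.fromstring(xml_text)
    for connector in root.iter("Connector"):
        if connector.get("port") == str(old_port):
            connector.set("port", str(new_port))
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    # A trimmed-down stand-in for a real server.xml.
    server_xml = (
        '<Server><Service name="Catalina">'
        '<Connector port="8080" protocol="HTTP/1.1" '
        'connectionTimeout="20000" redirectPort="8443" />'
        '</Service></Server>'
    )
    print(change_connector_port(server_xml, 8080, 8082))
```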
1. Go to the following location, where <tomcat_home> is the directory with the tomcat installation:
/<tomcat_home>/bin
For example:
Masking Engine/apache_tomcat_7.0.27/bin
2. Execute the startup.bat file.
To restart your Masking Engine application for WebLogic Server:
1. Go to the following location, where <bea_server_root> is the location of your application server root folder:
<bea_server_root>/userprojects/domains/Masking Engine_domain/bin
For example:
bea_Masking Engine/apache_tomcat_6.0.18/userprojects/domains/Masking Engine_domain/bin
2. Execute the startupWebLogic.cmd file.
To restart your Masking Engine application for IBM WebSphere:
1. Select Programs > IBM WebSphere > Application Server ... > Profiles > newly created profile > Start the server.
For example, if the default profile created when you installed WebSphere was AppSrv01, your newly created profile might be
AppSrv02:
Programs > IBM WebSphere > Application Server ... > Profiles > AppSrv02 > Start the server.
Troubleshooting
Memory Usage
Masking Engine masking operations can be memory- and processor-intensive. Therefore, the number of jobs that can run in parallel and the
speed with which they run varies depending on processor and RAM.
Initially, we recommend that you allocate at least 1 GB for the Tomcat application server instance. Other application servers might require more
memory; follow the suggested guidelines for your server. If you encounter memory issues, you might need to increase your memory allocation.
32-bit Java Virtual Machines (JVMs) have a maximum memory setting (1.5 GB) that you cannot exceed. 64-bit JVMs do not have this
restriction.
If you do not allocate enough memory initially, you could have issues if you try to allocate memory as needed. To avoid this problem, we suggest
that you set your Java Xms and Xmx values to the same number. This ensures that all necessary memory is reserved and available for the job at
the beginning. Otherwise, your operating system might attempt to terminate some lower priority processes to free up memory, which could halt
your higher priority processes. We recommend allocating 1 GB per job.
For information on system requirements, see Masking Engine System Requirements.
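For a Tomcat instance, the equal-Xms/Xmx recommendation can be expressed as a JVM options fragment like the following sketch. The CATALINA_OPTS variable name assumes Tomcat's standard startup scripts; the 1024m value follows the 1 GB-per-job guideline and should be scaled to your concurrent job count:

```
# Reserve the full heap up front by making initial and maximum sizes equal
CATALINA_OPTS="-Xms1024m -Xmx1024m"
export CATALINA_OPTS
```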
Stack Traces
If an unhandled exception occurs in code, you might get a stack trace. If this happens, do the following:
1. Restart the server.
2. Ensure that the database is up.
3. If the problem persists, contact Customer Support.
Known Issues
Notes
License
The license must be installed separately. The license file goes under the dm_license directory under <DMSUITEHOME> (e.g., /opt/dmsuite/dm_license or C:/dmsuite/dm_license).
The license is valid for installation only on the date specified and cannot be used on a different date.
The license is bound to the MAC address provided for installation.
Ports
Port 5432 (Linux and Windows) should be available on the machine, if using the bundled PostgreSQL repository.
Port 8282 should be available on the machine, if using the default Apache Tomcat port.
Port 8443 should be available if Tomcat is to be used in HTTPS mode.
Extraction of the fast-stack archive [DMSuite-4.7.1-2.tar.bz2] is done in <Current folder>/dmsuite.
For Linux, move/copy the extracted dmsuite subfolder and its contents to /opt/ for the default paths to work (the absolute path should be /opt/dmsuite).
For Windows, move/copy the extracted dmsuite subfolder and its contents to C:\ for the default paths to work (the absolute path should be C:\dmsuite).
Configuration updates may be needed as per your environment to the /opt/dmsuite/conf/dm-util.properties file or
C:/dmsuite/conf/dm-util.properties (see Step 3 under Basic Install Instructions below for more information):
OUTPUT_FOLDER_PATH property should be:
OUTPUT_FOLDER_PATH=/opt/ or OUTPUT_FOLDER_PATH=C:
APPLICATOR_HOME property should be:
APPLICATOR_HOME=/opt/dmsuite/DMSApplicator/ or APPLICATOR_HOME=C:/dmsuite/DMSApplicator/
In the current minor release, bundled Mappings will be functional only if the installation path is /opt/dmsuite.
DMSuite Repository
DMSuite includes PostgreSQL as its database repository. To use another database as the DMSuite repository, one must comment/uncomment
the sections in the <DMSUITEHOME>/conf/dmsuite-dao.properties file applicable to the repository type being used (do not forget to comment
out/modify the PostgreSQL section).
The dmsuite-dao.properties for PostgreSQL appears as shipped:
/opt/conf/dmsuite-dao.properties or C:/dmsuite/conf/dmsuite-dao.properties
Entry:
## ------------ DATABASE: PostgreSQL -----------
## ---- FIXED ITEMS
database.dialect=org.hibernate.dialect.PostgreSQLDialect
database.driver=org.postgresql.Driver
database.instancename=POSTGRESQL
database.Prefix=
## ---- VARIABLE ITEMS
database.ownername=dmsuite
database.username=dmsuite
database.password=ENC(LRFCy6TouiWKGkE0VL5WJlc41biWuGCf)
database.url=jdbc:postgresql://127.0.0.1:5432/dmsuite
Install
If your install directory (<DMSUITEHOME>) is not /opt/, you must edit the following 2 files:
a. <DMSUITEHOME>/conf/dmsuite-log4j.properties
E.g.: /opt/conf/dmsuite-log4j.properties or C:/dmsuite/conf/dmsuite-log4j.properties
Entry: local path
b. <DMSUITEHOME>/conf/dm-util.properties
File: /opt/conf/dm-util.properties or C:/dmsuite/conf/dm-util.properties
Entry: Change the appropriate paths (e.g., /opt/ to c:/dmsuite) - approximately 49 changes, so use replace all
Entry: PROCESS_WAIT_FOR_VALUE=137#143 (the default for Linux); change the value to PROCESS_WAIT_FOR_VALUE=1#-1073741510 for Windows
4. Create an environment variable DMSUITEHOMEDIR with the corresponding local value, e.g. DMSUITEHOMEDIR=C:/dmsuite or DMSUITEHOMEDIR=/opt/dmsuite
5. Execute Start_all.bat or Start_all.sh from the <DMSUITEHOME> directory to start the bundled PostgreSQL repository and Tomcat server.
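Step 4's environment variable can be sanity-checked before running the startup script. This is a minimal sketch, not part of the product; the two conf files it checks are the ones called out in the install notes above:

```python
import os
import tempfile

EXPECTED_CONF_FILES = (
    "conf/dm-util.properties",
    "conf/dmsuite-dao.properties",
)

def check_dmsuite_home(env=None):
    """Return a list of problems with the DMSUITEHOMEDIR setting."""
    env = os.environ if env is None else env
    home = env.get("DMSUITEHOMEDIR")
    if not home:
        return ["DMSUITEHOMEDIR is not set"]
    problems = []
    for rel in EXPECTED_CONF_FILES:
        path = os.path.join(home, *rel.split("/"))
        if not os.path.isfile(path):
            problems.append(f"missing {path}")
    return problems

if __name__ == "__main__":
    # Demonstrate against a throwaway layout standing in for /opt/dmsuite.
    with tempfile.TemporaryDirectory() as home:
        os.mkdir(os.path.join(home, "conf"))
        open(os.path.join(home, "conf", "dm-util.properties"), "w").close()
        print(check_dmsuite_home({"DMSUITEHOMEDIR": home}))
```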
Install
If you get an error message indicating "The program can't start because MSVCR120.dll is missing from your computer. Try
reinstalling the program to fix this problem.", download and run vcredist_x64.exe to load the Visual C++ Redistributable
Package for Visual Studio 2013 (see http://www.microsoft.com/en-us/download/details.aspx?id=40784).
_XPP
The Delphix Engine's cross-platform provisioning functionality utilizes algorithms that are unique to the Delphix File System (DxFS) to detect similarities between the Unix datafiles and converted Linux datafiles, allowing the converted database to be stored in less than 5/100 of the space that would normally be required.
Requirements
The underlying Oracle technology used to transform to Linux imposes several requirements, including:
Encryption cannot be used
Tablespace Transport Set must be self-contained
Tablespaces with XML types cannot be used before Oracle version 11.2
Advanced queues versions 8 or later
Spatial indexes cannot be used before Oracle version 11.2
These requirements are checked by Transformation Validation, as described in Enabling Oracle dSources for Cross-Platform Provisioning. Creating Scripts for Cross-Platform Provisioning describes how to modify the database to meet these requirements.
Related Links
Enabling Oracle dSources for Cross-Platform Provisioning
Creating Scripts for Cross-Platform Provisioning
Prerequisites
A source Unix Oracle database
This can be a dSource or a VDB.
A Unix staging environment
This environment must be the same platform and Oracle version as the source database. See Enabling Validated Sync for Oracle for
information on designating a staging environment.
The default OS user for the staging host must have access to the Oracle installation that will be used as the staging
environment.
Procedure
1. Log into the Delphix Admin application using delphix_admin credentials.
2. In the Manage menu, select Databases > My Databases.
3. Select the Oracle dSource that you want to use for cross-platform provisioning.
4. Click the dSource's Expand icon to open the dSource card, then click the Flip icon on the card to view the back.
5. On the back of the dSource card, click the Linux tab.
6. In the lower right corner of the dSource card, click the green Validate Transformation button.
The validation process will create a temporary VDB on the Unix staging environment, and run SQL commands against it to verify that the
database structure meets the requirements of the underlying Oracle platform conversion technology. Depending on the size of the
dSource, this may take several minutes. See Cross-Platform Provisioning of Oracle dSources: Overview for more information about the specific database requirements that will be checked during this process.
7. If the validation process is successful, green check marks will appear next to each validation requirement, and a gold database icon will
appear next to the dSource name in the Databases panel. If the dSource does not pass the validation process, a red X will appear next
to the requirement. See Creating Scripts for Cross-Platform Provisioning for more information on how to correct these violations of
the cross-platform provisioning requirements.
Related Links
Enabling Validated Sync for Oracle
General Network and Connectivity Requirements
Network Performance Configuration Options
Creating Scripts for Cross-Platform Provisioning
Prerequisites
A Unix Oracle dSource or VDB that has passed the validation checks for cross-platform provisioning as described in Enabling Oracle
dSources for Cross-Platform Provisioning
A Unix staging environment
This environment must be the same platform and Oracle version as the source database. See Enabling Validated Sync for Oracle for
information on designating a staging environment.
A Linux provisioning environment
This environment must be the same Oracle version as the source database. We recommend that this environment have a fast network
link to the Delphix Engine, because it needs to process all blocks in the database when converting a database to Linux. See Network
and Connectivity Requirements and Network Performance Configuration Options for general information about network requirements
and configuration for the Delphix Engine.
Procedure
1. Log into the Delphix Admin application using delphix_admin credentials.
2. In the Manage menu, select Databases > My Databases if the Databases panel is not visible.
3. In the Databases panel, select an Oracle dSource that has passed the validation checks for cross-platform provisioning.
Eligible dSources will have a gold database icon next to the dSource name, as shown in the dSource Icon Reference.
4. Select a provision point for the virtual database.
See Provisioning an Oracle VDB for information on using Snapshots, LogSync, and SCN Numbers as provision points.
5. Click Transform to Linux.
6. In the Linux Transformation VDB wizard, select a Linux environment where you want to provision the VDB, and follow the steps for
configuring the new VDB as described in Provisioning an Oracle VDB.
When the Linux transformation process completes, a VDB will be created with the transformed database running on Linux. You should be aware that the transformation process can be time- and resource-intensive, because Oracle must read and convert all blocks in the database.
Related Links
Enabling Oracle dSources for Cross-Platform Provisioning
Enabling Oracle Pre-Provisioning
Network and Connectivity Requirements
Network Performance Configuration Options
dSource Icon Reference
Provisioning an Oracle VDB
Procedure
1. Log into the Delphix Admin application using delphix_admin credentials.
2. If the Databases panel is not visible, select Manage > Databases > My Databases.
3. In the Databases panel, select the dSource that did not pass the cross-platform provisioning validation checks.
4. Click the Expand icon for the dSource to view its card.
5. Click the Flip icon to view the back of the card.
6. Click the Linux tab.
7. Click the Upload Transformation Script icon in the lower-right corner of the card.
8. Click Choose a File to Upload and navigate to the location of the script, then click Choose.
The file will automatically upload when you click Choose.
The Transformation Script must be a SQL or plain text file; otherwise, the upload will fail.
9. Click the Validate Transformation icon to execute the script against the temporary virtual database.
Related Links
Enabling Oracle dSources for Cross-Platform Provisioning
_Unstructured Files
As with other data types, you can configure a dSource to sync periodically with a set of unstructured files external to the Delphix Engine. The dSource is a copy of these physical files stored on the Delphix Engine. On Unix platforms, dSources are created and periodically synced by an implementation of the rsync utility. On Windows, files are synced using the robocopy utility, which is distributed with Windows.
From dSources, you can provision vFiles, which are fully functional read/write virtual copies of the original source files. You can mount vFiles across one target environment or many.
Version                 Processor Family
Solaris 9, 10, 11       SPARC
Solaris 10, 11          x86_64
5.3 - 5.10              x86, x86_64
6.0 - 6.5               x86_64
AIX                     Power
HP-UX 11i v2 (11.23)    IA64
HP-UX 11i v3 (11.31)    IA64
Delphix supports all 64-bit OS environments for source and target, though 64-bit Linux environments also require that a 32-bit version of glibc is
installed.
Required HP-UX patch for Target Servers
PHNE_37851 - resolves a known bug in HP-UX NFS client prior to HP-UX 11.31.
Additional Source Environment Requirements
The following permissions are usually granted via sudo authorization of the commands.
See Sudo Privilege Requirements for further explanation of the commands and for examples of the
/etc/sudoers file on different operating systems.
nfso command as a super-user
There must be a directory on the source environment where you can install the Delphix Engine Toolkit, for example /var/opt/delphix/toolkit.
The delphix_os user must own the directory.
The directory must have permissions -rwxrwx--- (0770), but you can also use more permissive settings.
The delphix_os user must have read and execute permissions on each directory in the path leading to the toolkit directory. For example, when the toolkit is stored in /var/opt/delphix/toolkit, the permissions on /var, /var/opt, and /var/opt/delphix should allow read and execute for "others," such as -rwxr-xr-x.
The directory should have a total of at least 800MB of storage, plus 1MB of storage per vFile that will be provisioned to the target.
On a Solaris host, gtar must be installed. Delphix uses gtar to handle long file names when extracting the toolkit files into the toolkit directory on a Solaris host. The gtar binary should be installed in one of the following directories:
/bin:/usr/bin:/sbin:/usr/sbin:/usr/contrib/bin:/usr/sfw/bin:/opt/sfw/bin:/opt/csw/bin
There must be an empty directory (/delphix) that will be used as a container for the mount points that are
created when provisioning a vFile to the target environment. The group associated with the directory must be
the primary group of the delphix_os user. Group permissions for the directory should allow read, write, and
execute by members of the group.
The Delphix Engine must be able to initiate an SSH connection to the target environment
NFS client services must be running on the target environment
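The permission rules above (0770 on the toolkit directory, read and execute for "others" on each parent directory) lend themselves to a quick audit script. A sketch using only the standard library; the stop_at parameter is added here purely for illustration so the walk can be bounded:

```python
import os
import stat
import tempfile

def check_toolkit_path(toolkit_dir, stop_at="/"):
    """Check the toolkit directory rules: rwx for owner and group on the
    directory itself, and read+execute for "others" on each parent
    directory, walking up until stop_at (exclusive) or the root."""
    problems = []
    mode = stat.S_IMODE(os.stat(toolkit_dir).st_mode)
    if mode & 0o770 != 0o770:
        problems.append(f"{toolkit_dir}: owner and group need rwx, found {oct(mode)}")
    stop_at = os.path.abspath(stop_at)
    parent = os.path.dirname(os.path.abspath(toolkit_dir))
    while parent != stop_at and parent != os.path.dirname(parent):
        pmode = stat.S_IMODE(os.stat(parent).st_mode)
        if pmode & 0o005 != 0o005:
            problems.append(f"{parent}: 'others' need read+execute, found {oct(pmode)}")
        parent = os.path.dirname(parent)
    return problems

if __name__ == "__main__":
    # Demonstrate against a throwaway tree; a real check would use
    # /var/opt/delphix/toolkit with the default stop_at="/".
    with tempfile.TemporaryDirectory() as tmp:
        toolkit = os.path.join(tmp, "delphix", "toolkit")
        os.makedirs(toolkit)
        os.chmod(toolkit, 0o770)
        os.chmod(os.path.join(tmp, "delphix"), 0o750)  # o+rx missing on purpose
        print(check_toolkit_path(toolkit, stop_at=tmp))
```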
The Delphix Engine makes use of the following network ports for unstructured files dSources and vFiles:
Inbound to the Delphix Engine Port Allocation
Protocol   Port Number     Use
TCP        873             rsync connections used to sync unstructured files
TCP/UDP    111             Remote Procedure Call (RPC) port mapper used for NFS mounts. Note: RPC calls in NFS are used to establish additional ports, in the high range 32768-65535, for supporting services. Some firewalls interpret RPC traffic and open these ports automatically. Some do not.
TCP        1110            NFS Server daemon status and NFS server daemon keep-alive (client info)
TCP/UDP    2049            NFS server daemon
TCP        4045            NFS lock daemon
UDP        33434 - 33464   Traceroute from source and target database servers to the Delphix Engine (optional)
UDP/TCP    32768 - 65535   NFS mountd and status services, which run on a random high port. Necessary when a firewall does not dynamically open ports.
Protocol   Port Numbers    Use
TCP        873             rsync connections
TCP        xxxx            DSP connections used for monitoring and script management during SnapSync. Typically DSP runs on port 8415.

Protocol   Port Numbers    Use
TCP        22              SSH connections

Protocol   Port Numbers    Use
TCP        873             rsync connections
TCP        xxxx            DSP connections used for monitoring and script management. Typically DSP runs on port 8415.

Protocol   Port Numbers    Use
TCP        22              SSH connections
Protocol   Port Numbers    Use
TCP        25              SMTP
TCP/UDP    53              DNS
UDP        123             NTP
UDP        162             SNMP traps
HTTPS      443             SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP    636             LDAP over SSL
TCP        8415            Delphix Session Protocol (DSP)
TCP        50001           Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.
Protocol   Port Number     Use
TCP        22              SSH connections to the Delphix Engine
TCP        80              HTTP connections to the Delphix GUI
UDP        161             SNMP
TCP        443             HTTPS connections to the Delphix GUI
TCP        8415            Delphix Session Protocol connections from all DSP-based network services including Replication, SnapSync for Oracle, V2P, and the Delphix Connector.
TCP        50001           Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP    32768 - 65535   Required for NFS mountd and status services from the target environment, only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
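When verifying firewall rules against the port tables above, individual TCP ports can be spot-checked from a source or target host. A minimal sketch; the engine host name and sample port list are placeholders for your environment:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    engine = "127.0.0.1"  # replace with your Delphix Engine host
    for port in (22, 443, 873, 8415):  # sample ports drawn from the tables above
        print(port, tcp_port_open(engine, port, timeout=0.5))
```

Note this only confirms TCP reachability; UDP services and the dynamic RPC range need protocol-specific checks.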
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh. The Delphix Engine expects to maintain long-running, high-performance ssh connections with remote Unix environments. The following sshd configuration entries can interfere with these ssh connections and are therefore disallowed:
Privilege         Sources        Targets    Rationale
mkdir/rmdir       Not Required   Optional   Delphix dynamically makes and removes directories under the provisioning directory during vFiles operations. This privilege is optional, provided the provisioning directory permissions allow the delphix_os user to make and remove directories.
mount/umount      Not Required   Required   Delphix dynamically mounts and unmounts directories under the provisioning directory during vFiles operations. This privilege is required because mount and umount are typically reserved for a super-user.
nfso (AIX only)   Not Required   Required   Delphix monitors NFS read and write sizes on an AIX target host. It uses the nfso command to query the sizes in order to optimize NFS performance for vFiles running on the target host. Only a super-user can issue the nfso command.
On a Solaris target, sudo access to mount, umount, mkdir, and rmdir is required. In this customer example, super-user privilege is restricted to the virtual dataset mount directory /delphix.
On a Linux target, sudo access to mount, umount, mkdir, and rmdir is required. In this customer example, super-user privilege is restricted to the virtual database mount directory /delphix.
In addition to sudo access to the mount, umount, mkdir, and rmdir commands on AIX target hosts, Delphix also
requires sudo access to nfso. This is required on target hosts for Delphix to monitor the NFS read / write sizes
configured on the AIX system. Super-user access level is needed to run the nfso command.
Example: AIX /etc/sudoers File for a Delphix Target
Defaults:delphix_os !requiretty
delphix_os ALL=NOPASSWD: \
/bin/mount, \
/bin/umount, \
/bin/mkdir, \
/bin/rmdir, \
/usr/sbin/nfso
Configuring sudo Access on HP-UX for Target Environments
On the HP-UX target, as with other operating systems, sudo access to mount, umount, mkdir, and rmdir is required.
12. For Password Login, click Verify Credentials to test the username and password.
13. Enter a Toolkit Path.
The toolkit directory stores scripts used for Delphix Engine operations. It should be a persistent working directory rather than a temporary one.
14. Click OK.
Post-Requisites
After you create the environment, you can view information about it by doing the following:
1. Click Manage.
2. Select Environments.
3. Select the environment name.
Related Links
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
Procedure for Adding and Installing the Delphix Connector for Windows
All Windows environments that will communicate with Delphix must have the Delphix Connector installed. The instructions in this topic cover
downloading Delphix Connector, running the Delphix Connector installer on the Windows machine, and then registering the environment with the
Delphix Engine.
Procedure
Downloading the Delphix Connector
Delphix Connector software supplied by Delphix Engine versions before 4.2.4.0 required that the Windows machine had SQL Server
installed. If you are using a Windows machine that does not have SQL Server installed, you must download the Delphix Connector from
a Delphix Engine of version 4.2.4.0 or higher.
The Delphix Connector can be downloaded through the Delphix Engine Interface, or by directly accessing its URL.
Using the Delphix Engine Interface
A Flash player must be available on the Windows host in order to download Delphix Connector using the Delphix GUI.
1. From the Windows machine that you want to use, start a browser session and connect to the Delphix Engine GUI using the
delphix_admin login.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the Add Environment dialog, select Windows in the operating system menu.
6. Select Target.
7. Select Standalone.
1. You can download the Delphix Connector directly by navigating to this URL: http://<name of your Delphix
Engine>/connector/DelphixConnectorInstaller.msi
Installing Delphix Connector
On the Windows machine that you want to use, run the Delphix Connector installer. Click Next to advance through each of the installation wizard screens.
The installer will only run on 64-bit Windows systems. 32-bit systems are not supported.
1. For Connector Configuration, make sure there is no firewall in your environment blocking traffic to the port on the Windows environment on which the Delphix Connector service will listen.
2. For Select Installation Folder, either accept the default folder, or click Browse to select another.
3. Click Next on the installer's final Confirm Installation dialog to complete the installation process, and then click Close to exit the Delphix Connector Install Program.
4. Note: At this point, you can close the Delphix GUI dialog by clicking Cancel.
Registering Environment With Delphix Engine
1. On the Windows machine, navigate to the folder where the Delphix Connector was installed, for example, C:\Program Files\Delphix\DelphixConnector.
2. Run this batch script as Administrator: <Delphix Connector installation folder>\Delphix\DelphixConnector\connector\addhostgui.cmd.
3. When the Add Windows Environment Wizard launches, provide the Host IP Address, Delphix Engine IP Address, your login
credentials, and the environment user on the Windows host.
4. After providing this information, click Submit.
5. Click Yes to confirm the environment addition request.
6. In the Delphix Engine interface, you will see a new icon for the environment, and two jobs running in the Delphix Admin Job History,
one to Create and Discover an environment, and another to Create an environment. When the jobs are complete, click on the icon for
the new environment, and you will see the details for the environment.
Post-Requisites
On the Windows machine, in the Windows Start Menu, go to Services > Extended Services, and make sure that the Delphix
Connector service has a Status of Started, and that the Startup Type is Automatic.
The Delphix Engine makes use of the following network ports for unstructured files dSources and VDBs:
Outbound from the Delphix Engine

Protocol   Port Numbers   Use
TCP        xxxx           Delphix Connector connections to source and target environments. Typically the Delphix Connector runs on port 9100.

Protocol   Port Number    Use
TCP        3260           iSCSI target daemon for connections from iSCSI initiators on the target environments to the Delphix Engine

Protocol   Port Numbers   Use
TCP        80             HTTP
TCP        xxxx           DSP connections used for monitoring and script management. Typically DSP runs on port 8415.

Protocol   Port Numbers   Use
TCP        xxxx           Delphix Connector connections to source environments. Typically the Delphix Connector runs on port 9100.

Protocol   Port Numbers   Use
TCP        25             SMTP
TCP/UDP    53             DNS
UDP        123            NTP
UDP        162            SNMP traps
HTTPS      443            SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP    636            LDAP over SSL
TCP        8415           Delphix Session Protocol (DSP)
TCP        50001          Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.

Protocol   Port Number    Use
TCP        22             SSH
TCP        80             HTTP
UDP        161            SNMP
TCP        443            HTTPS
TCP        8415           Delphix Session Protocol connections from all DSP-based network services, including Replication, SnapSync for Oracle, V2P, and the Delphix Connector.
TCP        50001          Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP    32768-65535    Required for NFS mountd and status services from the target environment only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
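Before linking or provisioning, it can help to confirm from a shell that the ports in the tables above are actually reachable. A minimal sketch, assuming `nc` is available and using a placeholder hostname and a representative port list (this is not an official Delphix tool):

```shell
# Representative TCP ports from the tables above; adjust for your deployment.
REQUIRED_TCP_PORTS="22 80 443 3260 8415 50001"

check_port() {
  # nc -z attempts only the TCP handshake; exit 0 means the port answered.
  if nc -z -w 3 "$1" "$2" 2>/dev/null; then
    echo "open   $1:$2"
  else
    echo "closed $1:$2"
  fi
}

# Example sweep against a placeholder engine host:
# for p in $REQUIRED_TCP_PORTS; do check_port delphix-engine.example.com "$p"; done
```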
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
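A quick way to audit an environment for these entries is to grep its sshd_config; a minimal sketch (the helper and its output format are illustrative, not part of Delphix):

```shell
# Flag sshd directives that interfere with the Delphix Engine's long-running
# ssh connections. Returns nonzero if any are active (uncommented).
check_sshd_config() {
  cfg="$1"
  bad=0
  for key in ClientAliveInterval ClientAliveCountMax; do
    # Only uncommented occurrences of the directive count.
    if grep -Eq "^[[:space:]]*$key" "$cfg"; then
      echo "disallowed: $key is set in $cfg"
      bad=1
    fi
  done
  return "$bad"
}

# Typical usage on a source or target environment:
# check_sshd_config /etc/ssh/sshd_config
```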
All Windows source and target environments containing unstructured files must have the Delphix Connector
installed to enable communication between the environment and the Delphix Engine. The instructions in this topic
cover initiating the Add Target process in the Delphix Engine interface, running the Delphix Connector installer on
the environment, and verifying that the environment has been added in the Delphix Engine interface.
Prerequisites
Make sure your source and target environments meet the requirements described in Requirements for Windows Environments.
Procedure
1. From the machine that you want to use, log in to the Delphix Admin application.
2. Click Manage.
3. Select Environments.
4. Next to Environments, click the green Plus icon.
5. In the operating system menu, select Windows.
6. Select Target.
7. Select Standalone.
8. Click the download link for the Delphix Connector Installer.
The Delphix Connector will download to your local machine.
9. On the Windows machine that you want to use, run the Delphix Connector installer. Click Next to advance through each of the installation
wizard screens.
The installer will only run on 64-bit Windows systems. 32-bit systems are not supported.
a. For Connector Configuration, make sure there is no firewall in your environment blocking traffic to the port on the target
environment to which the Delphix Connector service will listen.
b. For Select Installation Folder, either accept the default folder or click Browse to select another.
c. Click Close to complete the installation process.
d. Run this batch script as Administrator: <Delphix Connector installation
folder>\Delphix\DelphixConnector\connector\addhostgui.cmd.
When the Add Windows Target Environment Wizard launches, enter the Target Host IP Address, Delphix Engine IP
Address, your login credentials, and the environment user on the Windows host.
e. After providing this information, click Submit.
f. Click Yes to confirm the target environment addition request.
10. In the Delphix Engine interface, you will see a new icon for the environment and two jobs running in the Delphix Admin Job History,
one to Create and Discover an environment, and another to Create an environment. When the jobs are complete, click the icon for the
new environment, and you will see the details for the environment.
Post-Requisites
Users that you add to an environment must meet the requirements for that environment as described in the platform-specific Requirements topics.
Procedure
Protocol   Port Numbers   Use
TCP        xxxx           Delphix Connector connections to source and target environments. Typically, the Delphix Connector runs on port 9100.

Protocol   Port Number    Use
TCP        3260           iSCSI target daemon for connections from iSCSI initiators on the target environments to the Delphix Engine

Protocol   Port Numbers   Use
TCP        80             HTTP
TCP        xxxx           DSP connections used for monitoring and script management. Typically, DSP runs on port 8415.

Protocol   Port Numbers   Use
TCP        xxxx           Delphix Connector connections to source environments. Typically, the Delphix Connector runs on port 9100.

Protocol   Port Numbers   Use
TCP        25             SMTP
TCP/UDP    53             DNS
UDP        123            NTP
UDP        162            SNMP traps
HTTPS      443            SSL connections from the Delphix Engine to the Delphix Support upload server
TCP/UDP    636            LDAP over SSL
TCP        8415           Delphix Session Protocol (DSP)
TCP        50001          Connections to source and target environments for network performance tests via the Delphix command line interface (CLI). See Network Performance Tool.

Protocol   Port Number    Use
TCP        22             SSH
TCP        80             HTTP
UDP        161            SNMP
TCP        443            HTTPS
TCP        8415           Delphix Session Protocol connections from all DSP-based network services, including Replication, SnapSync for Oracle, V2P, and the Delphix Connector.
TCP        50001          Connections from source and target environments for network performance tests via the Delphix CLI. See Network Performance Tool.
TCP/UDP    32768-65535    Required for NFS mountd and status services from the target environment only if the firewall between Delphix and the target environment does not dynamically open ports. Note: If no firewall exists between Delphix and the target environment, or the target environment dynamically opens ports, this port range is not explicitly required.
SSHD Configuration
Both source and target Unix environments are required to have sshd running and configured such that the Delphix Engine can connect over ssh.
The Delphix Engine expects to maintain long-running, highly performant ssh connections with remote Unix environments. The following sshd configuration entries can interfere with these ssh connections and are therefore disallowed:
Disallowed sshd Configuration Entries
ClientAliveInterval
ClientAliveCountMax
Prerequisites
The source environment must meet the requirements outlined in Unstructured Files Environment Requirements.
The Delphix Engine must have access to an environment user. This user should have read permissions on all files to be cloned.
Unstructured Files on Cluster Environments
Unstructured files cannot be linked from, or provisioned to, any form of cluster environment, such as an Oracle RAC environment. To
link or provision unstructured files from a host that is part of a cluster, add the host as a standalone environment. Then, link from or
provision to this standalone host.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Environments.
4. Select the environment containing the unstructured files you want to link.
If you have not already added the environment, see the Managing Unix Environments and Managing Windows Environments topics
for more information about adding environments.
5. Click the Environment Details tab.
6. If the environment user described in the Prerequisites section is not already added to the Delphix Engine, add the user.
See the Managing Unix Environments and Managing Windows Environments topics for more information about adding environment
users.
7. Click the Databases tab.
8. Click the Plus icon next to Add Dataset Home.
Adding the files as a dataset home will register the type and location of the files with the Delphix Engine.
9. Select Unstructured Files as the Dataset Home Type.
10. Enter a Name to help identify the files.
11. Enter the Path to the root directory of the files. On Windows, this may be a local path or a UNC name.
12. Click the Check icon to save your dataset home. Scroll down the list of dataset homes to view and edit this dataset home if necessary.
13. Click Manage.
14. Select Databases.
15. Select Add dSource.
Alternatively, on the Environment Management screen, you can click Link next to a dataset name to start the dSource creation process.
20. If you are linking files from a Unix environment, enter Paths of Symlinks to Follow.
Related Links
Unstructured Files - Getting Started
Provisioning Unstructured Files as vFiles
Customizing vFiles with Hook Operations
Prerequisites
You will need an unstructured files dSource, as described in Linking Unstructured Files, or an existing vFiles from which you want to
provision another.
The target environment must meet the requirements outlined in Unstructured Files Environment Requirements.
Unstructured Files on Cluster Environments
Unstructured files cannot be linked from, or provisioned to, any form of cluster environment, such as an Oracle RAC environment. To
link or provision unstructured files from a host that is part of a cluster, add the host as a standalone environment. Then, link from or
provision to this standalone host.
Procedure
1. Log in to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5.
6. Select a snapshot.
7. Click Provision.
The Provision vFiles panel will open, and the field Mount Path will auto-populate with the path to the files on the source environment.
8.
9.
10. Click Advanced.
11.
12.
13. Click Next.
14.
15.
16.
17. Click Next.
18. Enter any operations that should be run at Hooks during the lifetime of the vFiles.
See Customizing Oracle VDB Configuration Settings for more information.
19. Click Next.
20. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard. When provisioning is complete, the vFiles will be included in the group you designated and listed in the Databases panel. If you select the vFiles in the Databases panel and click the Open icon, you can view its card, which contains information about the vFiles and its Data Management settings.
Related Links
Linking Unstructured Files
Managing Data Operations for vFiles
Creating Empty vFiles from the Delphix Engine
dSource Hooks

Hook            Description
Pre-Sync
Post-Sync       Operations performed after a sync. This hook will run regardless of the success of the sync or Pre-Sync hook operations. These operations can undo any changes made by the Pre-Sync hook.

Hook            Description
Configure Clone Operations performed after initial provision or after a refresh. This hook will run after the virtual dataset has been started. During a refresh, this hook will run before the Post-Refresh hook.
Pre-Refresh
Post-Refresh    Operations performed after a refresh. During a refresh, this hook will run after the Configure Clone hook. This hook will not run if the refresh or Pre-Refresh hook operations fail. These operations can restore cached data after the refresh completes.
Pre-Rewind
Post-Rewind     Operations performed after a rewind. This hook will not run if the rewind or Pre-Rewind hook operations fail. These operations can restore cached data after the rewind completes.
Pre-Snapshot
Post-Snapshot   Operations performed after a snapshot. This hook will run regardless of the success of the snapshot or Pre-Snapshot hook operations. These operations can undo any changes made by the Pre-Snapshot hook.
Operation Failure
If a hook operation fails, it will fail the entire hook: no further operations within the failed hook will be run.
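This short-circuit behavior can be illustrated outside Delphix with a plain shell sketch; run_hook below is a hypothetical stand-in for the engine's hook runner, not part of the product:

```shell
# Run an ordered list of operations; the first nonzero exit fails the whole
# hook and skips everything after it, mirroring the behavior described above.
run_hook() {
  for op in "$@"; do
    if ! sh -c "$op"; then
      echo "hook failed at: $op"
      return 1
    fi
  done
  echo "hook succeeded"
}

# Example: the third operation is never reached.
# run_hook "true" "false" "echo never reached"
```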
You can construct hook operation lists through the Delphix Admin application or the command line interface (CLI). You can either define the
operation lists as part of the linking or provisioning process or edit them on dSources or virtual datasets that already exist.
delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 0 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 0 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 0 *> ls
delphix source "pomme" update operations postRefresh 0 *> commit

delphix source "pomme" update operations postRefresh *> add
delphix source "pomme" update operations postRefresh 1 *> set type=RunCommandOnSourceOperation
delphix source "pomme" update operations postRefresh 1 *> set command="echo Refresh completed."
delphix source "pomme" update operations postRefresh 1 *> back
delphix source "pomme" update operations postRefresh *> unset 1
delphix source "pomme" update operations postRefresh *> commit
Shell Operations
RunCommand Operation
The RunCommand operation runs a shell command on a Unix environment using whatever binary is available at /bin/sh. The environment user
runs this shell command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the
output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the shell command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Examples of RunCommand Operations
You can input the full command contents into the RunCommand operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
if test -d "$remove_dir"; then
    rm -rf "$remove_dir" || exit 1
fi
exit 0
If a script already exists on the remote environment and is executable by the environment user, the RunCommand operation can execute this
script directly.
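For example, the operation body can be a single line that invokes such a script. The sketch below first creates a stand-in script so it is self-contained; the path, the script contents, and the use of the DLPX_DATA_DIRECTORY variable are assumptions for illustration:

```shell
# Stand-in for a script that already exists on the remote environment and is
# executable by the environment user (path and contents are assumed).
cat > /tmp/dlpx_cleanup.sh <<'EOF'
#!/bin/sh
echo "cleaning $1"
EOF
chmod +x /tmp/dlpx_cleanup.sh

# The RunCommand operation body would then be just the invocation:
/tmp/dlpx_cleanup.sh "${DLPX_DATA_DIRECTORY:-/var/tmp/demo}"
```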
RunBash Operation
The RunBash operation runs a Bash command on a Unix environment using a bash binary provided by the Delphix Engine. The environment user runs this Bash command from their home directory. The Delphix Engine captures and logs all output from this command. If the script fails, the output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the Bash command must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of RunBash Operations
You can input the full command contents into the RunBash operation.
remove_dir="$DIRECTORY_TO_REMOVE_ENVIRONMENT_VARIABLE"
# Bashisms are safe here!
if [[ -d "$remove_dir" ]]; then
    rm -rf "$remove_dir" || exit 1
fi
exit 0
Shell Operation Tips
Using nohup
You can use the nohup command and process backgrounding in order to "detach" a process from the Delphix Engine. However, if you use nohup and process backgrounding, you MUST redirect stdout and stderr.
Unless you explicitly tell the shell to redirect stdout and stderr in your command or script, the Delphix Engine will keep its connection to the remote environment open while the process is writing to either stdout or stderr. Redirection ensures that the Delphix Engine will see no more output and thus not block waiting for the process to finish.
For example, imagine having your RunCommand operation background a long-running Python process. Below are the bad and good ways to do
this.
Bad Examples
nohup python file.py & # no redirection
nohup python file.py 2>&1 & # stdout is not redirected
nohup python file.py 1>/dev/null & # stderr is not redirected
nohup python file.py 2>/dev/null & # stdout is not redirected
Good Examples
nohup python file.py 1>/dev/null 2>&1 & # both stdout and stderr redirected, Delphix Engine will not block
Other Operations
RunExpect Operation
The RunExpect operation executes an Expect script on a Unix environment. The Expect utility provides a scripting language that makes it easy to
automate interactions with programs which normally can only be used interactively, such as ssh. The Delphix Engine includes a
platform-independent implementation of a subset of the full Expect functionality.
The script is run on the remote environment as the environment user from their home directory. The Delphix Engine captures and logs all output
of the script. If the operation fails, the output is displayed in the Delphix Admin application and CLI to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunExpect Operation
RunPowershell Operation
The RunPowershell operation executes a Powershell script on a Windows environment. The environment user runs this script from their home directory. The Delphix Engine captures and logs all output of the script. If the script fails, the output is displayed in the Delphix Admin application and command line interface (CLI) to aid in debugging.
If successful, the script must exit with an exit code of 0. All other exit codes will be treated as an operation failure.
Example of a RunPowershell Operation
You can input the full command contents into the RunPowershell operation.
$removedir = $Env:DIRECTORY_TO_REMOVE
if ((Test-Path $removedir) -And (Get-Item $removedir) -is [System.IO.DirectoryInfo]) {
    Remove-Item -Recurse -Force $removedir
} else {
    exit 1
}
exit 0
Unstructured Files Environment Variables
Operations that run user-provided scripts have access to environment variables. For operations associated with specific dSources or virtual
databases (VDBs), the Delphix Engine will always set environment variables so that the user-provided operations can use them to access the
dSource or VDB.
dSource Environment Variables

Environment Variable    Description
DLPX_DATA_DIRECTORY

VDB Environment Variables

Environment Variable    Description
DLPX_DATA_DIRECTORY
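As an illustration, a Post-Sync hook could use this variable to record what a sync captured. A minimal sketch; the manifest name is an assumption, and the fallback path exists only so the example runs outside a real hook:

```shell
# DLPX_DATA_DIRECTORY is set by the Delphix Engine for dSource operations;
# the fallback here only exists so this sketch can run outside a hook.
DATA_DIR="${DLPX_DATA_DIRECTORY:-/tmp/dlpx_demo}"
mkdir -p "$DATA_DIR"

# Record the files present after this sync (hypothetical manifest name).
ls -1 "$DATA_DIR" > "$DATA_DIR/.sync_manifest"
echo "manifest written to $DATA_DIR/.sync_manifest"
```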
You must have already provisioned a vFiles. For more information, see Provisioning Unstructured Files as vFiles.
Procedure
2. Click Manage.
3. Select Databases.
4. Select My Databases.
5.
6. On the back of the card, move the slider control from Enabled to Disabled.
When you are ready to enable the vFiles again, move the slider control from Disabled to Enabled, and the vFiles will continue to function as it did previously.
Disabling the vFiles will unmount it from target environments. This unmount will fail if there are processes still accessing the vFiles.
Related Links
Rewinding vFiles
This topic describes how to rewind a vFiles.
Prerequisites
You must have already provisioned a vFiles. For more information, see Provisioning Unstructured Files as vFiles.
Procedure
Related Links
Managing Data Operations for vFiles
Provisioning Unstructured Files as vFiles
Refreshing vFiles
This topic describes how to manually refresh a vFiles.
Refreshing a vFiles will re-provision the vFiles from its parent. As with the normal provisioning process, you can choose to refresh the vFiles from
any snapshot available in its parent. However, you should be aware that refreshing a vFiles will delete any changes that have been made to it
over time. When you refresh a vFiles, you are essentially resetting it to the state you select during the Refresh process. You can refresh a vFiles
manually, as described in this topic, or you can set a vFiles refresh policy, as described in the topics Managing Policies: An Overview, Creating
Custom Policies, and Creating Policy Templates.
Although the VDB no longer contains the previous contents, the previous Snapshots and TimeFlow still remain in the Delphix Engine
and are accessible through the Command Line Interface (CLI).
Prerequisites
You must have already provisioned a vFiles. For more information, see Provisioning Unstructured Files as vFiles.
Procedure
On the back of the vFiles card, click the Refresh icon in the lower right-hand corner.
Related Links
Managing Data Operations for vFiles
Provisioning Unstructured Files as vFiles
Deleting vFiles
This topic describes how to delete a vFiles.
Prerequisites
You must have already provisioned a vFiles. For more information, see Provisioning Unstructured Files as vFiles.
Procedure
Deleting a vFiles may fail if it cannot be unmounted successfully from all target environments. You can use the Force Delete option to
ignore all failures during unmount.
Related Links
Migrating vFiles
This topic describes how to migrate a vFiles from one target environment to another.
Prerequisites
You must have already provisioned a vFiles. For more information, see Provisioning Unstructured Files as vFiles.
Procedure
Related Links
You can create vFiles in two ways: by provisioning from an existing dataset (that is, from a dSource or from another vFiles) or by creating an empty vFiles and filling it with data.
Creating an empty vFiles places an initially-empty mount on target environments, hence the term "empty vFiles." This mount is useful when you have no existing files to copy into the Delphix Engine, but you do have files which you will generate, track, and copy with vFiles.
vFiles created without dSources are almost identical to those created by provisioning. The only thing you cannot do with them is refresh. Refreshing a dataset means overwriting the dataset's content with new data that is pulled in from the dataset's parent. If you create new vFiles from scratch, that newly-created dataset will not have a parent. Therefore, it cannot be refreshed. All other functionality is identical: you can provision from such a dataset, rewind, take snapshots, and so forth.
Prerequisites
The target environment must meet the requirements outlined in Unstructured Files Environment Requirements.
Unstructured Files on Cluster Environments
You cannot create vFiles on any form of cluster environment, such as an Oracle RAC environment. To create a vFiles on a host that is
part of a cluster, add the host as a standalone environment. Then, create the vFiles on this standalone host.
Procedure
To create new vFiles without provisioning:
1. Log in to the Delphix Admin application.
2. Click Manage.
3. Select Databases.
4. Select Create vFiles.
Related Links
Managing Data Operations for vFiles
Provisioning Unstructured Files as vFiles
Prerequisites
You will need an unstructured files dSource, as described in Linking Unstructured Files, or an existing vFiles from which you want to
provision another.
The target environment must meet the requirements outlined in Unstructured Files Environment Requirements.
Procedure
1. Log in to the command line interface (CLI) using Delphix Admin credentials.
2. Navigate to database and select createRestorationDataset.
delphix> database
delphix database> createRestorationDataset
3. View parameter information using list.
delphix database createRestorationDataset *> set name=restoration
delphix database createRestorationDataset *> set group=Untitled
delphix database createRestorationDataset *> set sourceConfig.name=restoration
delphix database createRestorationDataset *> set sourceConfig.repository=builtin:files
delphix database createRestorationDataset *> set timeflowPointParameters.container=source
delphix database createRestorationDataset *> commit
Supported Operations
Because restoration datasets are not fully provisioned like normal virtual datasets, they do not support the full set of management features
available through the Delphix Engine. Restoration datasets support the following operations:
delete
refresh
switchTimeflow
undo
enable
disable
All other source and database operations will result in errors when executed against a restoration dataset.
Related Links
Unstructured Files - Getting Started
Provisioning Unstructured Files as vFiles
Customizing vFiles with Hook Operations
dbTechStack Dataset
The source dbTechStack is linked using the Delphix Engine's EBS support: the linking process automatically runs pre-clone logic to ensure EBS
configuration is always appropriately staged at the time of data capture. When you provision EBS, the Delphix Engine automates post-clone
configuration such that a copy of the dbTechStack is available for use on the target dbTier server with no additional effort. You can add this copy
of the dbTechStack to the Delphix Engine as an Oracle installation home and use it to host an EBS virtual database (VDB).
Database Dataset
The database dSource is linked using the Delphix Engine's support for Oracle databases. This dSource contains database data files that EBS is
currently using. For more information about managing Oracle databases, see Managing Oracle, Oracle RAC, and Oracle PDB Data Sources.
When you provision EBS, you will use the Delphix Engine to set up a copy of the EBS database on the target dbTier server. This copy of the database will back the appsTier of virtual EBS instances.
appsTier Dataset
The appsTier is linked using the Delphix Engine's EBS support: the linking process automatically runs pre-clone logic to ensure EBS configuration
is always appropriately staged at the time of data capture. When you provision EBS, the Delphix Engine will automate post-clone configuration
such that a copy of the appsTier is available for use on the target appsTier server. This virtual copy of the appsTier will connect to the provisioned
EBS virtual database (VDB).
Related Links
Managing Oracle, Oracle RAC, and Oracle PDB Data Sources
Oracle EBS R12.2
Oracle EBS R12.1
Virtual EBS Instance Recipes
Operating Systems
Linux
Solaris
AIX and HP-UX Not Supported
The Delphix Engine does not support linking EBS instances running on AIX or HP-UX.
dbTier Requirements
Supported Topologies
Oracle SI dbTechStack and Database
Oracle RAC dbTechStack and Database
appsTier Requirements
Supported Topologies
Single-node appsTier
Multi-node appsTier with a shared APPL_TOP
Non-shared APPL_TOP Not Supported
The Delphix Engine does not provide support for linking a multi-node appsTier where the APPL_TOP is not shared between nodes.
Caveats
The Delphix Engine does not provide support for linking an EBS R12.2 instance utilizing custom context variables maintained in the EBS context
file.
Related Links
Virtual EBS R12.2 Instance Requirements
Operating Systems
Linux
Solaris
AIX and HP-UX Not Supported
The Delphix Engine does not support provisioning EBS instances to AIX or HP-UX.
dbTier Requirements
Supported Topologies
Oracle SI dbTechStack and Database
Oracle RAC Not Supported
The Delphix Engine does not support provisioning Oracle RAC dbTechStack for use with Oracle Enterprise Business Suite.
However, you may provision an Oracle SI dbTier from a linked EBS RAC dbTier instance. During the provisioning process, the Delphix
Engine will relink the dbTechStack for use with an Oracle SI database and scale down the Oracle RAC database to an Oracle SI
database.
appsTier Requirements
Supported Topologies
Single-node appsTier
Multi-node appsTier with a shared APPL_TOP
Non-shared APPL_TOP Not Supported
The Delphix Engine does not provide support for provisioning a multi-node appsTier where the APPL_TOP is not shared between
nodes.
Caveats
The Delphix Engine will only register the default WLS-managed server for each server type during multi-node appsTier provisioning. Additional
WLS managed servers need to be registered manually or as a Configure Clone hook.
Related Links
Source EBS R12.2 Instance Requirements
oracle User
The Delphix Engine must have access to an oracle user on the dbTier.
This user should be a member of both the EBS dba and oinstall groups.
The user should have read permissions on all dbTechStack and database files that will be cloned.
applmgr User
The Delphix Engine must have access to an applmgr user on the appsTier.
This user should be a member of the EBS oinstall group.
The user should have read permissions on all appsTier files to be cloned.
Related Links
Source EBS R12.2 Instance Requirements
Provisioning a Virtual EBS R12.2 Instance
Oracle Support and Requirements
Requirements for Unix Environments
Prerequisites
Prepare your source EBS R12.2 instance for linking by following the outline in Preparing a Source EBS R12.2 Instance for Linking.
Procedure
Link the Oracle Database
1. Link the Oracle database used by EBS, as outlined in Linking an Oracle Data Source.
5.
6. If the oracle environment user described in Preparing a Source EBS R12.2 Instance for Linking is not already added to the Delphix Engine, add the user.
For more information about adding environment users, see the Managing Unix Environment Users topics.
7.
8.
9. For your Dataset Home Type, select E-Business Suite R12.2 dbTechStack.
When you select this type of dataset home, the Delphix Engine will know to automate pre-clone logic. Specifically, the Delphix Engine will run adpreclone.pl dbTier prior to every SnapSync of the dbTechStack. During dSource creation, you will be able to enter additional pre-clone steps as Pre-Sync hook operations.
10.
11. In the Add dSource wizard, select the dbTechStack files source you just created.
15.
16. Click Advanced.
17. Exclude the EBS database's data files if they are stored underneath the Oracle base install directory.
These data files will be linked with the database instead of with the dbTechStack. Add the relative path to the data files to the Paths to Exclude list.
18. Click Next.
19. Adding a dSource to a database group enables you to set Delphix Domain user permissions for that dSource's objects, such as snapshots. For more information, see the topics under Users, Permissions, and Policies.
22. Select a SnapSync policy.
23. Click Next.
24. Enter any custom pre-sync or post-sync logic as Pre-Sync or Post-Sync hook operations.
Remember that adpreclone.pl dbTier is already run prior to every SnapSync of the dbTechStack. The Pre-Sync hook operations will be run prior to running the adpreclone.pl tool. For more information, see Customizing Unstructured Files with Hook Operations.
25. Click Next.
26. Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs to create the dSource, DB_Link and DB_Sync. You can monitor these jobs by clicking Active Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the files icon will change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under its assigned group.
Related Links
Preparing a Source EBS R12.2 Instance for Linking
Linking an Oracle Data Source
Preparing a Source EBS R12.2 Instance for Linking
Managing Unix Environment Users
Users, Permissions, and Policies
Customizing Unstructured Files with Hook Operations
Managing Unix Environments
oracle User
The Delphix Engine must have access to an oracle user on the dbTier.
This user should be a member of both the EBS dba and oinstall groups.
This user will be given proper permissions to manage the dbTechStack and database.
oraInst.loc
An oraInst.loc file must exist on the dbTier prior to provisioning. This file will specify where the oraInventory directories live or where they should
be created if they do not already exist.
The oraInst.loc file is typically located at /etc/oraInst.loc on Linux or /var/opt/oracle/oraInst.loc on Solaris. Ensure that the
oraInventory to which this file points is writeable by the oracle user.
Consult Oracle EBS documentation for more information about where to place this file on your dbTier and what this file should contain.
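As a quick sanity check before provisioning, a sketch like the following can confirm that the oraInventory named in an oraInst.loc file is writeable. The file locations and sample contents in the comments are illustrative, not Delphix-mandated; adjust them for your environment.

```shell
# Sketch: verify that the oraInventory directory named in an oraInst.loc
# file is writeable. A typical oraInst.loc contains two lines, e.g.:
#   inventory_loc=/u01/oracle/oraInventory
#   inst_group=oinstall
# Paths here are illustrative; on Linux the file is usually /etc/oraInst.loc.
check_orainst() {
    orainst=$1
    [ -f "$orainst" ] || { echo "missing $orainst"; return 1; }
    inv_dir=$(sed -n 's/^inventory_loc=//p' "$orainst")
    if [ -n "$inv_dir" ] && [ -w "$inv_dir" ]; then
        echo "oraInventory $inv_dir is writeable"
    else
        echo "oraInventory $inv_dir is not writeable"
        return 1
    fi
}

# Run as the oracle user, e.g.:
#   check_orainst /etc/oraInst.loc
```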
applmgr User
The Delphix Engine must have access to an applmgr user on the appsTier.
This user should be a member of the EBS oinstall group.
This user will be given proper permissions to manage the appsTier.
oraInst.loc
An oraInst.loc file must exist on every appsTier node prior to provisioning. This file will specify where the oraInventory directories live or where
they should be created if they do not already exist.
The oraInst.loc file is typically located at /etc/oraInst.loc on Linux or /var/opt/oracle/oraInst.loc on Solaris. Ensure that the
oraInventory to which this file points is writeable by the applmgr user.
Delphix recommends that this file specify an oraInventory location under the virtual appsTier mount path.
If you are provisioning a single-node appsTier, this recommendation is OPTIONAL; putting the oraInventory directories on
Delphix-provided storage merely eases administration of the virtual EBS instance.
If you are provisioning a multi-node appsTier, this recommendation is REQUIRED; the Delphix Engine's automation requires that all
nodes in the appsTier have access to the oraInventory directories via Delphix-provided storage.
Consult Oracle EBS documentation for more information about where to place this file on your appsTier and what this file should contain.
Related Links
Virtual EBS R12.2 Instance Requirements
Requirements for Unix Environments
Oracle Support and Requirements
Prerequisites
You must have already linked a source instance of EBS R12.2. For more information, see Linking a Source EBS R12.2 Instance.
Prepare your target EBS R12.2 environments for provisioning by following the outline in Preparing Target EBS R12.2 Environments for
Provisioning.
Snapshot Coordination
Changes applied to EBS and picked up only in certain dSource snapshots may make certain combinations of snapshots across the
appsTier and dbTier incompatible. When provisioning, refreshing, or rewinding a virtual EBS instance, be sure the points in time you
select for each dataset are compatible with each other.
Procedure
Provision the EBS dbTechStack
1.
2. Click Manage.
3. Select My Databases.
4.
5.
6. Click Provision.
The Provision vFiles wizard will open.
7. Select an Environment.
This environment will host the virtual dbTechStack and be used to execute hook operations specified in step 16.
8. Select an Environment User.
This user should be the oracle user outlined in Preparing Target EBS R12.2 Environments for Provisioning.
9. Enter a Mount Path for the virtual dbTechStack files.
10. Enter the EBS-specific parameters for the virtual dbTechStack. A subset of these parameters are discussed in more detail below.
a. Ensure that the Target DB Hostname value is the short hostname, not the fully-qualified hostname.
b. The APPS Password is required to configure the virtual dbTechStack.
This password is encrypted when stored within the Delphix Engine and is available as an environment variable to the adcfgclone process.
c. Enable the Disable RAC option if you want to permit the Delphix Engine to automatically disable the RAC option for the binaries
when applicable.
This option is necessary if provisioning from a dSource with a RAC dbTier, because the binaries are relinked with the rac_on option even after running adcfgclone. If the source binaries already have the RAC option disabled (also the case for SI dbTier), the Delphix Engine ignores this option.
d. Enable the Cleanup Before Provision option if you want to permit the Delphix Engine to automatically clean up stale EBS
configuration during a refresh. This option is recommended, but only available if your Oracle Home is patched with Oracle
Universal Installer (OUI) version 10.2 or above.
i. With this option enabled, the Delphix Engine will inspect the target environment's oraInventory prior to refreshing this
virtual dbTechStack. If any Oracle Homes are already registered within the specified Mount Path, the Delphix Engine
will detach them from the inventory prior to running adcfgclone. These homes must be detached prior to running
post-clone configuration, or else adcfgclone will fail, citing conflicting oraInventory entries as an issue.
ii. Without this option enabled, Oracle Homes found to conflict with the specified Mount Path will be reported in an error
instead of automatically detached. For refresh to succeed, conflicting Oracle Homes must be manually detached prior to
refresh.
11. Click Next.
12. Enter a vFiles Name.
13.
15.
Click Next.
16.
Enter any custom hook operations that are needed to help correctly manage the virtual dbTechStack files.
For more information about these hooks, when they are run, and how operations are written, see Customizing Unstructured Files with Hook Operations.
The Configure Clone hook will be run after the adcfgclone.pl tool has both mounted and configured the
dbTechStack.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History
panel of the Dashboard. When provisioning is complete, the dbTechStack vFiles will be included in the group
you designated and listed in the Databases panel. If you select the dbTechStack vFiles in the Databases
panel and click the Open icon, you can view its card, which contains information about the virtual files and its
Data Management settings.
19.
For tips on monitoring the progress of dbTechStack provisioning, see Monitoring EBS R12.2 dbTechStack
Provisioning Progress.
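If Cleanup Before Provision is disabled and a refresh fails on conflicting oraInventory entries (step 10 above), the conflicting Oracle Home can be detached from the central inventory manually with Oracle Universal Installer's runInstaller -detachHome invocation. The sketch below only builds the command string so it can be inspected first; the home path in the usage comment is hypothetical.

```shell
# Sketch: build the standard OUI command that detaches an Oracle Home from
# the central inventory. The home path passed in is caller-supplied; the
# example path in the usage comment is hypothetical.
detach_home_cmd() {
    home=$1
    echo "$home/oui/bin/runInstaller -silent -detachHome ORACLE_HOME=$home"
}

# On the target, review and then run the printed command as the oracle user:
#   detach_home_cmd /u01/oracle/VIS/db/tech_st/11.2.0
```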
2. Click Manage.
3. Select Environments.
4.
5.
6.
7.
8.
Click the Refresh button in the bottom right-hand corner of the environment card.
2.
3.
Enter the same SID as the one provided to the virtual dbTechStack you just added to the Delphix Engine.
4. Click Advanced.
5. Select the correct Oracle Node Listeners value.
This should be the listener corresponding to the virtual dbTechStack you just added to the Delphix Engine.
6. Add the EBS R12.2 dbTier environment file as a Custom Environment Variables entry.
This file can be specified as an Environment File with Path Parameters of $ORACLE_HOME/<CONTEXT_NAME>.env.
Replace <CONTEXT_NAME> with the virtual EBS instance's context name. The Delphix Engine will expand the $ORACLE_HOME variable
at runtime.
For more information, see Customizing Oracle VDB Environment Variables.
7.
Add a Run Bash Shell Command operation to the Configure Clone hook to ensure that adcfgclone is run
against the newly provisioned database. Typically, this operation will look similar to the script below.
# NOTE: Ensure the below environment variables will be set up correctly by the
# shell. If not, hardcode or generate the values below.
# CONTEXT_NAME=${ORACLE_SID}_$(hostname -s)
# APPS_PASSWD=<passwd>
. ${ORACLE_HOME}/${CONTEXT_NAME}.env
sqlplus "/ as sysdba" <<EOF
@${ORACLE_HOME}/appsutil/install/${CONTEXT_NAME}/adupdlib.sql so
EOF
perl ${ORACLE_HOME}/appsutil/clone/bin/adcfgclone.pl dbconfig ${ORACLE_HOME}/appsutil/${CONTEXT_NAME}.xml <<EOF
${APPS_PASSWD}
EOF
8. Set up a Pre-Snapshot hook Run Bash Shell Command operation to run any pre-clone steps necessary and specific to your EBS
database. Normally, these steps will include running Oracle's adpreclone tool. Below is an example of a simple Run Bash Shell
Command hook operation:
# NOTE: Ensure the below environment variables will be set up correctly by the
# shell. If not, hardcode or generate the values below.
# CONTEXT_NAME=${ORACLE_SID}_$(hostname -s)
# APPS_PASSWD=<passwd>
. ${ORACLE_HOME}/${CONTEXT_NAME}.env
perl ${ORACLE_HOME}/appsutil/scripts/${CONTEXT_NAME}/adpreclone.pl database <<-EOF
${APPS_PASSWD}
EOF
5.
6. Click Provision.
The Provision vFiles wizard will open.
7. Select an Environment.
This environment will host the virtual appsTier and be used to execute hook operations specified in a few steps. This environment will
also run the WebLogic Admin server (Web Administration service) for the virtual appsTier.
If you are provisioning a multi-node appsTier, you will be able to specify additional environments to host the virtual appsTier in a few
steps.
8. Select an Environment User.
This user should be the applmgr user outlined in Preparing Target EBS R12.2 Environments for Provisioning.
9. Enter a Mount Path for the virtual appsTier files.
If you are provisioning a multi-node appsTier, this mount path will be used across all target environments.
10. Enter the EBS-specific parameters for the virtual appsTier. A subset of these parameters are discussed in more detail below.
a. Ensure that the Target Application Hostname and Target DB Server Node values are the short hostnames, not the
fully-qualified hostnames.
b. The APPS Password is required to configure and manage the virtual appsTier.
This password is encrypted when stored within the Delphix Engine and is available as an environment variable to the adcfgclone, adstrtal, and adstpall processes.
c. Enable the Cleanup Before Provision option if you want to permit the Delphix Engine to automatically clean up stale EBS
configuration during a refresh. This option is recommended, but only available if your Oracle Home is patched with Oracle
Universal Installer (OUI) version 10.2 or above.
i. With this option enabled, the Delphix Engine will inspect the target environment's oraInventory prior to refreshing this
virtual appsTier. If any Oracle Homes are already registered within the specified Mount Path, the Delphix Engine will
detach them from the inventory prior to running adcfgclone. These homes must be detached prior to running
post-clone configuration, or else adcfgclone will fail, citing conflicting oraInventory entries as an issue. The Delphix
Engine will also remove any conflicting INST_TOP directories left on the environment. Non-conflicting INST_TOP
directories will not be modified.
ii. Without this option enabled, Oracle Homes or INST_TOP directories found to conflict with the specified Mount Path or
desired INST_TOP location will be reported in errors instead of automatically cleaned up. For refresh to succeed,
conflicting Oracle Homes must be manually detached and conflicting INST_TOP directories must be manually removed
prior to refresh.
d. Delphix recommends specifying an Instance Home Directory under the Mount Path so that instance-specific EBS files live on
Delphix-provided storage.
For example, if the provided Mount Path is /u01/oracle/VIS, then providing an Instance Home Directory of /u01/oracle/VIS
would allow EBS to generate virtual application INST_TOP in /u01/oracle/VIS/fs1/inst/apps/<CONTEXT_NAME>
and /u01/oracle/VIS/fs2/inst/apps/<CONTEXT_NAME>.
i. If you are provisioning a single-node appsTier, this recommendation is OPTIONAL; putting instance-specific EBS files
on Delphix-provided storage merely eases administration of the virtual EBS instance.
ii. If you are provisioning a multi-node appsTier, this recommendation is REQUIRED; the Delphix Engine's automation
requires that all nodes in the appsTier have access to instance-specific files via Delphix-provided storage.
e. If you are provisioning a multi-node appsTier, enter additional appsTier nodes as Additional Nodes.
i. The Environment User for each node should be the applmgr user outlined in Preparing Target EBS R12.2
Environments for Provisioning.
ii. Ensure that the Hostname value for each node is the short hostname, not the fully-qualified hostname.
iii. The Mount Path is not configurable for each node individually. The Mount Path provided for the primary environment
will be used for each additional node.
11. Click Next.
12.
13.
14.
15.
Click Next.
16.
Enter any custom hook operations that are needed to help correctly manage the virtual appsTier.
For more information about these hooks, when they are run, and how operations are written, see Customizing Unstructured Files with Hook Operations.
The Configure Clone hook will be run after the adcfgclone.pl tool has both mounted and configured the
appsTier.
All hook operations run against the environment specified for provision. For a multi-node appsTier, hook
operations never run against additional nodes specified.
Related Links
Linking a Source EBS R12.2 Instance
Preparing Target EBS R12.2 Environments for Provisioning
Customizing Unstructured Files with Hook Operations
Monitoring EBS R12.2 dbTechStack Provisioning Progress
Provisioning an Oracle VDB
Customizing Oracle VDB Environment Variables
Monitoring EBS R12.2 appsTier Provisioning Progress
Procedure
1. Connect to the target dbTier environment using SSH or an alternative utility.
2. Change directories to the <ORACLE_HOME>/appsutil/log/<CONTEXT_NAME>/.
Replace <ORACLE_HOME> with the path to the dbTechStack's Oracle Home. This path will be under the mount path specified
during provisioning.
Replace <CONTEXT_NAME> with the virtual EBS instance's context name.
3. After adcfgclone has begun running, a file matching ApplyDBTechStack_*.log will exist. Identify this log file and use tail or an
equivalent utility to monitor it.
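The identify-and-tail step above can be sketched as a small helper that picks the most recently modified log matching the pattern. The Oracle Home path and context name (VIS_ebsdb) in the usage comment are illustrative.

```shell
# Sketch: print the most recently modified file matching a glob in a
# directory, so it can be followed with tail. Paths and the context name
# in the usage comment are illustrative.
newest_match() {
    dir=$1; pattern=$2
    # ls -t sorts by modification time, newest first
    ls -t "$dir"/$pattern 2>/dev/null | head -n 1
}

# Usage on the dbTier, e.g.:
#   tail -f "$(newest_match $ORACLE_HOME/appsutil/log/VIS_ebsdb 'ApplyDBTechStack_*.log')"
```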
Related Links
Monitoring EBS R12.2 appsTier Provisioning Progress
Procedure
1. Connect to the target appsTier environment using SSH or an alternative utility.
2. Change directories to the <INST_TOP>/admin/log/.
Replace <INST_TOP> with the value of INST_TOP on the virtual EBS instance.
3. After adcfgclone has begun running, a file matching ApplyAppsTier_*.log will exist. Identify this log file and use tail or an
equivalent utility to monitor it.
Related Links
Monitoring EBS R12.2 dbTechStack Provisioning Progress
Operating Systems
Linux
Solaris
AIX and HP-UX Not Supported
The Delphix Engine does not support linking EBS instances running on AIX or HP-UX.
dbTier Requirements
Supported Topologies
Oracle SI dbTechStack and Database
Oracle RAC dbTechStack and Database
appsTier Requirements
Supported Topologies
Single-node appsTier
Multi-node appsTier with a shared APPL_TOP
Non-shared APPL_TOP Not Supported
The Delphix Engine does not provide support for linking a multi-node appsTier where the APPL_TOP is not shared between nodes.
Caveats
The Delphix Engine does not provide support for linking an EBS R12.1 instance utilizing custom context variables (that is, custom
variables maintained in the EBS context file).
Related Links
Virtual EBS R12.1 Instance Requirements
Operating Systems
Linux
Solaris
AIX and HP-UX Not Supported
The Delphix Engine does not support provisioning EBS instances to AIX or HP-UX.
dbTier Requirements
Supported Topologies
Oracle SI dbTechStack and Database
Oracle RAC Not Supported
The Delphix Engine does not support provisioning Oracle RAC dbTechStack for use with Oracle E-Business Suite.
However, you can provision an Oracle SI dbTier from a linked EBS instance with an Oracle RAC dbTier. During the provisioning
process, the Delphix Engine will relink the dbTechStack for use with an Oracle SI database and scale down the Oracle RAC database
to an Oracle SI database.
appsTier Requirements
Supported Topologies
Single-node appsTier
Multi-node appsTier with a shared APPL_TOP
Non-shared APPL_TOP Not Supported
The Delphix Engine does not provide support for provisioning a multi-node appsTier where the APPL_TOP is not shared between
nodes.
Related Links
Preparing a Source EBS R12.1 Instance for Linking
oracle User
The Delphix Engine must have access to an oracle user on the dbTier.
This user should be a member of both the EBS dba and oinstall groups.
The user should have read permissions on all dbTechStack and database files that will be cloned.
applmgr User
The Delphix Engine must have access to an applmgr user on the appsTier.
This user should be a member of the EBS oinstall group.
The user should have read permissions on all appsTier files that will be cloned.
Related Links
Source EBS R12.1 Instance Requirements
Requirements for Unix Environments
Provisioning a Virtual EBS R12.1 Instance
Oracle Support and Requirements
Prerequisites
Prepare your source EBS R12.1 instance for linking by following the outline in Preparing a Source EBS R12.1 Instance for Linking.
Procedure
Link the Oracle Database
1.
Link the Oracle database used by EBS as outlined in Linking an Oracle Data Source.
5.
6.
If the oracle environment user described in Preparing a Source EBS R12.1 Instance for Linking is not
already added to the Delphix Engine, add the user.
For more information about adding environment users, see the Managing Unix Environment Users topics.
7.
8.
9.
For your Dataset Home Type, select E-Business Suite R12.1 dbTechStack.
When you select this type of dataset home, the Delphix Engine will know to automate pre-clone logic.
Specifically, adpreclone.pl dbTier will be run prior to every SnapSync of the dbTechStack. During
dSource creation, you will be able to enter additional pre-clone steps as Pre-Sync hook operations.
10.
If necessary, scroll down the list of dataset homes to view and edit this dataset home.
12.
Click Manage.
13.
14.
In the Add dSource wizard, select the dbTechStack files source you just created.
15.
16.
Click Advanced.
17.
Exclude the EBS database's data files if they are stored underneath the Oracle base install directory.
These data files will be linked with the database instead of with the dbTechStack. Add the relative path to the data files to the Paths to
Exclude list.
18.
Click Next.
19.
20.
21.
Click Next.
Adding a dSource to a database group enables you to set Delphix Domain user permissions for
that dSource's objects, such as snapshots. For more information, see the topics under Users, Permissions,
and Policies.
22.
23.
Click Next.
24.
Enter any custom pre- or post-sync logic as Pre-Sync or Post-Sync hook operations.
Remember that adpreclone.pl dbTier is already run prior to every SnapSync of the dbTechStack.
The Pre-Sync hook operations will be run prior to running the adpreclone.pl tool.
For more information, see Customizing Unstructured Files with Hook Operations.
25.
Click Next.
26.
Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs to create the dSource, DB_Link and DB_Sync. You can monitor these
jobs by clicking Active Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs
have completed successfully, the files icon will change to a dSource icon on the Environments > Databases
screen, and the dSource will be added to the list of My Databases under its assigned group.
Related Links
Preparing a Source EBS R12.1 Instance for Linking
Linking an Oracle Data Source
Managing Unix Environments
Preparing a Source EBS R12.1 Instance for Linking
Managing Unix Environment Users
Users, Permissions, and Policies
Customizing Unstructured Files with Hook Operations
oracle User
The Delphix Engine must have access to an oracle user on the dbTier.
This user should be a member of both the EBS dba and oinstall groups.
This user will be given proper permissions to manage the dbTechStack and database.
oraInst.loc
An oraInst.loc file must exist on the dbTier prior to provisioning. This file will specify where the oraInventory directories live or where they should
be created if they do not already exist.
The oraInst.loc file is typically located at /etc/oraInst.loc on Linux or /var/opt/oracle/oraInst.loc on Solaris. Ensure that the
oraInventory to which this file points is writeable by the oracle user.
Consult Oracle EBS documentation for more information about where to place this file on your dbTier and what this file should contain.
applmgr User
The Delphix Engine must have access to an applmgr user on the appsTier.
This user should be a member of the EBS oinstall group.
This user will be given proper permissions to manage the appsTier.
oraInst.loc
An oraInst.loc file must exist on every appsTier node prior to provisioning. This file will specify where the oraInventory directories live or where
they should be created if they do not already exist.
The oraInst.loc file is typically located at /etc/oraInst.loc on Linux or /var/opt/oracle/oraInst.loc on Solaris. Ensure the
oraInventory to which this file points is writeable by the applmgr user.
Delphix recommends that this file specify an oraInventory location under the virtual appsTier mount path.
If you are provisioning a single-node appsTier, this recommendation is OPTIONAL; putting the oraInventory directories on
Delphix-provided storage merely eases administration of the virtual EBS instance.
If you are provisioning a multi-node appsTier, this recommendation is REQUIRED; the Delphix Engine's automation requires that all
nodes in the appsTier have access to the oraInventory directories via Delphix-provided storage.
Consult Oracle EBS documentation for more information about where to place this file on your appsTier and what this file should contain.
Related Links
Virtual EBS R12.1 Instance Requirements
Requirements for Unix Environments
Oracle Support and Requirements
Requirements for Unix Environments
Prerequisites
You must have already linked a source instance of EBS R12.1. For more information, see Linking a Source EBS R12.1 Instance.
Prepare your target EBS R12.1 environments for provisioning by following the outline in Preparing Target EBS R12.1 Environments for
Provisioning.
Snapshot Coordination
Changes applied to EBS and picked up only in certain dSource snapshots may make certain combinations of snapshots across the
appsTier and dbTier incompatible. When provisioning, refreshing, or rewinding a virtual EBS instance, be sure the points in time you
select for each dataset are compatible with each other.
Procedure
Provision the EBS dbTechStack
1.
2. Click Manage.
3. Select My Databases.
4.
5.
6.
Click Provision.
The Provision vFiles wizard will open.
7.
Select an Environment.
This environment will host the virtual dbTechStack and be used to execute hook operations specified in a few
steps.
8.
9.
10.
Enter the EBS-specific parameters for the virtual dbTechStack. A subset of these parameters are discussed
in more detail below.
a. Ensure that the Target DB Hostname value is the short hostname, not the fully-qualified hostname.
b. The APPS Password is required to configure the virtual dbTechStack.
This password is encrypted when stored within the Delphix Engine and is available as an environment variable to the adcfgclone process.
c. Enable the Disable RAC option if you want to permit the Delphix Engine to automatically disable the RAC option for the binaries
when applicable.
This option is necessary if provisioning from a dSource with a RAC dbTier, because the binaries are relinked with the rac_on option even after running adcfgclone. If the source binaries already have the RAC option disabled (also the case for SI dbTier),
the Delphix Engine ignores this option.
d. Enable the Cleanup Before Provision option if you want to permit the Delphix Engine to automatically clean up stale EBS
configuration during a refresh. This option is recommended, but only available if your Oracle Home is patched with Oracle
Universal Installer (OUI) version 10.2 or above.
i. With this option enabled, the Delphix Engine will inspect the target environment's oraInventory prior to refreshing this
virtual dbTechStack. If any Oracle Homes are already registered within the specified Mount Path, the Delphix Engine
will detach them from the inventory prior to running adcfgclone. These homes must be detached prior to running
post-clone configuration, or else adcfgclone will fail, citing conflicting oraInventory entries as an issue.
ii. Without this option specified, Oracle Homes found to conflict with the specified Mount Path will be reported in an error
instead of automatically detached. For refresh to succeed, conflicting Oracle Homes must be manually detached prior to
refresh.
14.
15.
Click Next.
16.
Enter any custom hook operations that are needed to help correctly manage the virtual dbTechStack files.
For more information about these hooks, when they are run, and how operations are written, see Customizing Unstructured Files with Hook Operations.
The Configure Clone hook will be run after the adcfgclone.pl tool has both mounted and configured the
dbTechStack.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History
panel of the Dashboard. When provisioning is complete, the dbTechStack vFiles will be included in the group
you designated and listed in the Databases panel. If you select the dbTechStack vFiles in the Databases
panel and click the Open icon, you can view its card, which contains information about the virtual files and its
Data Management settings.
19. See Monitoring EBS R12.1 dbTechStack Provisioning Progress for tips on monitoring the progress of dbTechStack provisioning.
2. Click Manage.
3. Select Environments.
4.
5.
6.
7.
8.
Click the Refresh button in the bottom right-hand corner of the environment card.
2.
3.
Enter the SID. This should be the same as what is provided to the virtual dbTechStack you just added to the
Delphix Engine.
4.
Click Advanced.
5.
7.
Add a Run Bash Shell Command operation to the Configure Clone hook to ensure that adcfgclone is run
against the newly provisioned database. Typically, this operation will look similar to the script below.
# NOTE: Ensure the below environment variables will be set up correctly by the
# shell. If not, hardcode or generate the values below.
# CONTEXT_NAME=${ORACLE_SID}_$(hostname -s)
# APPS_PASSWD=<passwd>
. ${ORACLE_HOME}/${CONTEXT_NAME}.env
sqlplus "/ as sysdba" <<EOF
@${ORACLE_HOME}/appsutil/install/${CONTEXT_NAME}/adupdlib.sql so
EOF
perl ${ORACLE_HOME}/appsutil/clone/bin/adcfgclone.pl dbconfig ${ORACLE_HOME}/appsutil/${CONTEXT_NAME}.xml <<EOF
${APPS_PASSWD}
EOF
8. Set up a Pre-Snapshot hook Run Bash Shell Command operation to run any pre-clone steps necessary and specific to the virtual EBS
database. Normally, these steps will include running Oracle's adpreclone tool. Below is an example of a simple Run Bash Shell
Command hook operation:
# NOTE: Ensure the below environment variables will be set up correctly by the
# shell. If not, hardcode or generate the values below.
# CONTEXT_NAME=${ORACLE_SID}_$(hostname -s)
# APPS_PASSWD=<passwd>
. ${ORACLE_HOME}/${CONTEXT_NAME}.env
perl ${ORACLE_HOME}/appsutil/scripts/${CONTEXT_NAME}/adpreclone.pl database <<-EOF
${APPS_PASSWD}
EOF
Select My Databases.
4.
5.
6.
Click Provision.
The Provision vFiles wizard will open.
7.
Select an Environment.
This environment will host the virtual appsTier and be used to execute hook operations specified in a few
steps.
If you are provisioning a multi-node appsTier, you will be able to specify additional environments to host the
virtual appsTier in a few steps.
8. Select an Environment User.
This user should be the applmgr user outlined in Preparing Target EBS R12.1 Environments for Provisioning.
11.
Enter the EBS-specific parameters for the virtual appsTier. A subset of these parameters is discussed in
more detail below.
a. Ensure that the Target Application Hostname and Target DB Server Node values are the short
hostnames, not the fully-qualified hostnames.
b. The APPS Password is required to configure and manage the virtual appsTier.
This password is encrypted when stored within the Delphix Engine and is available as an environment variable to the adcfgclone, adstrtal, and adstpall processes.
c. Enable the Cleanup Before Provision option if you want to permit the Delphix Engine to automatically clean up stale EBS
configuration during a refresh. This option is recommended, but only available if your Oracle Home is patched with Oracle
Universal Installer (OUI) version 10.2 or above.
i. With this option enabled, the Delphix Engine will inspect the target environment's oraInventory prior to refreshing this
virtual appsTier. If any Oracle Homes are already registered within the specified Mount Path, the Delphix Engine will
detach them from the inventory prior to running adcfgclone. These homes must be detached prior to running
post-clone configuration, or else adcfgclone will fail, citing conflicting oraInventory entries as an issue. The Delphix
Engine will also remove any conflicting INST_TOP directories left on the environment. Non-conflicting INST_TOP
directories will not be modified.
ii. Without this option enabled, Oracle Homes or INST_TOP directories found to conflict with the specified Mount Path or
desired INST_TOP location will be reported in errors instead of being automatically cleaned up. For refresh to succeed,
conflicting Oracle Homes must be manually detached and conflicting INST_TOP directories must be manually removed
prior to refresh.
d. Delphix recommends specifying an Instance Home Directory under the Mount Path so that instance-specific EBS files live on
Delphix-provided storage.
For example, if the provided Mount Path is /u01/oracle/VIS, then providing an Instance Home Directory of /u01/oracle/VIS/inst
would allow EBS to generate virtual application INST_TOP in /u01/oracle/VIS/inst/apps/<CONTEXT_NAME>.
i. If you are provisioning a single-node appsTier, this recommendation is OPTIONAL; putting instance-specific EBS files
on Delphix-provided storage merely eases administration of the virtual EBS instance.
ii. If you are provisioning a multi-node appsTier, this recommendation is REQUIRED; the Delphix Engine's automation
requires that all nodes in the appsTier have access to instance-specific files via Delphix-provided storage.
e. If you are provisioning a multi-node appsTier, enter additional appsTier nodes as Additional Nodes.
i. The Environment User for each node should be the applmgr user outlined in Preparing Target EBS R12.1
Environments for Provisioning.
ii. Ensure that the Hostname value for each node is the short hostname, not the fully-qualified hostname.
iii. The Mount Path is not configurable for each node individually. The Mount Path provided for the primary environment
will be used for each additional node.
12. Click Next.
13.
14.
15.
16.
Click Next.
17.
Enter any custom hook operations that are needed to help correctly manage the virtual appsTier.
For more information about these hooks, when they are run, and how operations are written, see Customizing Unstructured Files with Hook Operations.
The Configure Clone hook will be run after the adcfgclone.pl tool has both mounted and configured the
appsTier.
All hook operations run against the environment specified for provision. For a multi-node appsTier, hook
operations never run against additional nodes specified.
Related Links
Linking a Source EBS R12.1 Instance
Preparing Target EBS R12.1 Environments for Provisioning
Procedure
1. Connect to the target dbTier environment using SSH or an alternative utility.
2. Change directories to the <ORACLE_HOME>/appsutil/log/<CONTEXT_NAME>/.
Replace <ORACLE_HOME> with the path to the dbTechStack's Oracle Home: this path will be under the mount path specified
during provisioning.
Replace <CONTEXT_NAME> with the virtual EBS instance's context name.
3. After adcfgclone has begun running, a file matching ApplyDBTechStack_*.log will exist. Identify this log file and use tail or an
equivalent utility to monitor it.
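The steps above can be sketched as shell commands. The Oracle Home and context name below are hypothetical placeholders, not values from this guide; substitute your own.

```shell
# Hypothetical values -- substitute the Oracle Home under your Delphix mount
# path and your virtual instance's context name.
ORACLE_HOME=/u01/oracle/VIS/db/tech_st/11.2.0
CONTEXT_NAME=VIS_dbhost

# Step 2: build the log directory path.
LOG_DIR="${ORACLE_HOME}/appsutil/log/${CONTEXT_NAME}"
echo "${LOG_DIR}"

# Step 3: once adcfgclone has begun running, follow the newest matching log, e.g.:
#   cd "${LOG_DIR}" && tail -f "$(ls -t ApplyDBTechStack_*.log | head -n 1)"
```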
Related Links
Monitoring EBS R12.1 appsTier Provisioning Progress
Procedure
1. Connect to the target appsTier environment using SSH or an alternative utility.
2. Change directories to <INST_TOP>/admin/log/.
Replace <INST_TOP> with the value of INST_TOP on the virtual EBS instance.
3. After adcfgclone has begun running, a file matching ApplyAppsTier_*.log will exist. Identify this log file and use tail or an
equivalent utility to monitor it.
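The steps above can be sketched as shell commands. The INST_TOP value below is a hypothetical placeholder, not one from this guide.

```shell
# Hypothetical INST_TOP -- substitute the value from your virtual EBS instance.
INST_TOP=/u01/oracle/VIS/inst/apps/VIS_appshost

# Step 2: build the log directory path.
LOG_DIR="${INST_TOP}/admin/log"
echo "${LOG_DIR}"

# Step 3: once adcfgclone has begun running, follow the newest matching log, e.g.:
#   cd "${LOG_DIR}" && tail -f "$(ls -t ApplyAppsTier_*.log | head -n 1)"
```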
Related Links
Monitoring EBS R12.1 dbTechStack Provisioning Progress
Operating Systems
Linux (32-bit or 64-bit)
Solaris (32-bit or 64-bit)
AIX and HP-UX Not Supported
The Delphix Engine does not support linking EBS instances running on AIX or HP-UX.
dbTier Requirements
Supported Topologies
Oracle SI dbTechStack and Database
Oracle RAC Not Supported
The Delphix Engine does not support linking EBS 11i instances with an Oracle RAC database.
appsTier Requirements
Supported Topologies
Single-node appsTier
Multi-node appsTier Not Supported
The Delphix Engine does not support linking EBS 11i instances with a multi-node appsTier.
Caveats
The Delphix Engine does not provide support for linking an EBS 11i instance utilizing custom context variables (custom variables
maintained in the EBS context file).
Operating Systems
Linux (32-bit or 64-bit)
Solaris (32-bit or 64-bit)
AIX and HP-UX Not Supported
The Delphix Engine does not support provisioning EBS instances to AIX or HP-UX.
dbTier Requirements
Supported Topologies
Oracle SI dbTechStack and Database
Oracle RAC Not Supported
The Delphix Engine does not support provisioning Oracle RAC dbTechStack for use with Oracle Enterprise Business Suite.
appsTier Requirements
Supported Topologies
Single-node appsTier
Multi-node appsTier Not Supported
The Delphix Engine does not support provisioning EBS 11i instances with a multi-node appsTier.
oracle User
The Delphix Engine must have access to an oracle user on the dbTier. This user should be a member of both the EBS dba and oinstall groups. The user should have read permissions on all dbTechStack and database files to be cloned.
applmgr User
The Delphix Engine must have access to an applmgr user on the appsTier. This user should be a member of the EBS oinstall group. The
user should have read permissions on all appsTier files to be cloned.
If you plan to utilize the Cleanup Before Provision option available during appsTier provisioning, the Delphix Engine requires that the iAS Oracle Home be patched with Oracle Universal Installer (OUI) version 10.2 or above. You can read more about this provisioning option in Provisioning a Virtual EBS 11i Instance.
Note that provisioning is still possible without this option specified, but you will need to manage the target appsTier's Oracle Inventory manually to ensure conflicting entries do not cause provisions to fail.
Prerequisites
Prepare your source EBS 11i instance for linking by following the outline in Preparing a Source EBS 11i Instance for Linking.
Procedure
Link the Oracle Database
1. Link the Oracle database used by EBS, as outlined in Linking an Oracle Data Source.
Select Environments.
4.
5.
6. If the oracle environment user described in Preparing a Source EBS 11i Instance for Linking is not already added to the Delphix Engine, add the user.
For more information about adding environment users, see the Managing Unix Environment Users topics.
7.
8.
9.
10.
11.
12. Click Manage.
13. Click Add dSource.
14. In the Add dSource wizard, select the dbTechStack files source you just created.
15.
16. Click Advanced.
17. Exclude the EBS database's data files if they are stored underneath the Oracle base install directory.
These data files will be linked with the database instead of with the dbTechStack. Add the relative path to the data files to the Paths to Exclude list.
18. Click Next.
19.
20.
Adding a dSource to a database group enables you to set Delphix Domain user permissions for that
dSource's objects, such as snapshots. For more information, see the topics under Users, Permissions, and
Policies.
22. Select a SnapSync policy.
23. Click Next.
24. Enter any custom pre-sync or post-sync logic as Pre-Sync or Post-Sync hook operations.
Remember that adpreclone.pl dbTier is already run prior to every SnapSync of the dbTechStack. The Pre-Sync hook operations will be run prior to running the adpreclone.pl tool.
For more information, see Customizing Unstructured Files with Hook Operations.
25. Click Next.
26. Review the dSource Configuration and Data Management information, and then click Finish.
The Delphix Engine will initiate two jobs to create the dSource: DB_Link and DB_Sync. You can monitor these jobs by clicking Active Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the files icon will change to a dSource icon on the Environments > Databases screen, and the dSource will be added to the list of My Databases under its assigned group.
oracle User
The Delphix Engine must have access to an oracle user on the dbTier. This user should be a member of both the EBS dba and oinstall groups. This user will be given proper permissions to manage the dbTechStack and database.
oraInst.loc
An oraInst.loc file must exist on the dbTier prior to provisioning. This file will specify where the oraInventory directories live or should be created if
they do not already exist.
The oraInst.loc file is typically located at /etc/oraInst.loc or /var/opt/oracle/oraInst.loc on Linux and Solaris respectively. Ensure
the oraInventory pointed to by this file is writable by the oracle user.
Consult Oracle EBS documentation for more information about where to place this file on your dbTier and what this file should contain.
oratab
An oratab file must exist on the dbTier prior to provisioning. This file must be writable by the oracle user.
The oratab file is typically located at /etc/oratab or /var/opt/oracle/oratab on Linux and Solaris, respectively.
Consult Oracle EBS documentation for more information about where to place this file on your dbTier and what this file should contain.
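As a non-authoritative illustration, typical contents of these two files might look like the sketch below. The inventory location, group, Oracle Home, and SID are all assumptions; consult Oracle EBS documentation for the correct values on your platform.

```shell
# Hypothetical oraInst.loc contents (/etc/oraInst.loc on Linux,
# /var/opt/oracle/oraInst.loc on Solaris):
ORAINST='inventory_loc=/u01/oracle/oraInventory
inst_group=oinstall'
echo "${ORAINST}"

# Hypothetical oratab entry (/etc/oratab on Linux, /var/opt/oracle/oratab on
# Solaris) -- one SID:ORACLE_HOME:<startup flag> line per database:
ORATAB='VIS:/u01/oracle/VIS/db/tech_st/11.2.0:N'
echo "${ORATAB}"

# Both the oraInventory directory and the oratab file must be writable by the
# oracle user, e.g.:
#   su - oracle -c 'test -w /u01/oracle/oraInventory && test -w /etc/oratab'
```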
The appsTier must meet the target requirements outlined in Requirements for Unix Environments. These requirements are generic to all target
Unix environments added to the Delphix Engine.
applmgr User
The Delphix Engine must have access to an applmgr user on the appsTier. This user should be a member of the EBS oinstall group. This
user will be given proper permissions to manage the appsTier.
oraInst.loc
An oraInst.loc file must exist on the appsTier prior to provisioning. This file will specify where the oraInventory directories live or should be created
if they do not already exist.
The oraInst.loc file is typically located at /etc/oraInst.loc or /var/opt/oracle/oraInst.loc on Linux and Solaris respectively. Ensure
the oraInventory pointed to by this file is writable by the applmgr user.
Consult Oracle EBS documentation for more information about where to place this file on your appsTier and what this file should contain.
oratab
An oratab file must exist on the appsTier prior to provisioning. This file must be writable by the applmgr user.
The oratab file is typically located at /etc/oratab or /var/opt/oracle/oratab on Linux and Solaris, respectively.
Consult Oracle EBS documentation for more information about where to place this file on your appsTier and what this file should contain.
Prerequisites
You must have already linked a source instance of EBS 11i. For more information, see Linking a Source EBS 11i Instance .
Prepare your target EBS 11i environments for provisioning by following the outline in Preparing Target EBS 11i Environments for
Provisioning.
Snapshot Coordination
Changes applied to EBS and picked up only in certain dSource snapshots may make certain combinations of snapshots across the
appsTier and dbTier incompatible. When provisioning, refreshing or rewinding a virtual EBS instance, be sure the points in time you
select for each dataset are compatible with each other.
Procedure
Provision the EBS dbTechStack
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4.
5.
6. Click Provision.
The Provision vFiles wizard will open.
7. Select an Environment.
This environment will host the virtual dbTechStack and be used to execute hook operations specified in a few steps.
8.
9.
10. Enter the EBS-specific parameters for the virtual dbTechStack. A subset of these parameters are discussed in more detail below.
a. Ensure that the Target DB Hostname value is the short hostname, not the fully-qualified hostname.
b. The APPS Password is required to configure the virtual dbTechStack.
This password is encrypted when stored within the Delphix Engine and is available as an environment variable to the adcfgclone process.
c. Enable the Cleanup Before Provision option to permit the Delphix Engine to automatically cleanup stale EBS configuration
during a refresh. This option is recommended, but only available if your Oracle Home is patched with Oracle Universal Installer
(OUI) version 10.2 or above.
i. With this option specified, the Delphix Engine will inspect the target environment's oraInventory prior to refreshing this
virtual Oracle Home. If any Oracle Homes are already registered within the specified Mount Path, the Delphix Engine
will detach them from the inventory prior to running adcfgclone. These homes must be detached prior to running
post-clone configuration, or else adcfgclone will fail citing conflicting oraInventory entries as an issue. The Delphix
Engine will also inspect the target environment's oratab file, and cleanup any conflicting entries registered within the
specified Mount Path.
ii. Without this option specified, Oracle Homes found to conflict with the specified Mount Path will be reported in an error
instead of automatically detached. For refresh to succeed, these Oracle Homes must be manually detached prior to
refresh.
11. Click Next.
12. Enter a vFiles Name.
13.
15. Click Next.
16. Enter any custom hook operations that are needed to help correctly manage the virtual dbTechStack files.
For more information about these hooks, when they are run, and how operations are written, see Customizing Unstructured Files with Hook Operations.
The Configure Clone hook will be run after the adcfgclone.pl tool has both mounted and configured the dbTechStack.
17. Click Next.
18. Click Finish.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard. When provisioning is complete, the dbTechStack vFiles will be included in the group you designated and listed in the Databases panel. If you select the dbTechStack vFiles in the Databases panel and click the Open icon, you can view its card, which contains information about the virtual files and its Data Management settings.
19. See Monitoring EBS 11i dbTechStack Provisioning Progress for tips for monitoring the progress of dbTechStack provisioning.
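When the Cleanup Before Provision option is not enabled (step 10c above), conflicting Oracle Homes must be detached from the oraInventory and removed from oratab by hand before a refresh can succeed. The sketch below is a non-authoritative illustration: the path is hypothetical, and the runInstaller invocation is shown as a comment because it requires the Oracle installer on the target host.

```shell
# Hypothetical Oracle Home registered inside the Delphix mount path.
CONFLICTING_HOME=/u01/oracle/VIS/db/tech_st/11.2.0

# 1. Detach the home from the oraInventory (requires OUI 10.2+ on the target;
#    shown as a comment):
#    ${CONFLICTING_HOME}/oui/bin/runInstaller -silent -detachHome ORACLE_HOME=${CONFLICTING_HOME}

# 2. Remove any oratab entry pointing inside the mount path (demonstrated on a
#    throwaway copy, not the real /etc/oratab):
printf 'VIS:%s:N\nOTHER:/u01/app/oracle/product/12.1.0:N\n' "${CONFLICTING_HOME}" > /tmp/oratab.demo
FILTERED=$(grep -v "${CONFLICTING_HOME}" /tmp/oratab.demo)
echo "${FILTERED}"
```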
2. Click Manage.
3. Select Environments.
4.
5.
6.
7.
8.
9.
10. Click the Refresh button in the bottom right-hand corner of the environment card.
1. Provision the EBS database to the target dbTier environment by following the steps outlined in Provisioning an Oracle VDB.
EBS SnapSync Conflicts
When a SnapSync is running against the dbTechStack, database, or appsTier, the Delphix Engine also executes pre-clone logic to ensure the latest configuration is staged in the captured snapshots. Unfortunately, if multiple SnapSyncs are running against the same EBS instance concurrently, this pre-clone logic may fail and produce bad snapshots.
To avoid SnapSync conflicts, spread out your SnapSync policies for an EBS instance by one hour or more.
2.
3. Enter the same SID as the one provided to the virtual dbTechStack you just added to the Delphix Engine.
4. Click Advanced.
5. This file can be specified as an Environment File with Path Parameters of $ORACLE_HOME/<CONTEXT_NAME>.env.
See Customizing Oracle VDB Environment Variables for more information.
Replace <CONTEXT_NAME> with the virtual EBS instance's context name. The $ORACLE_HOME variable will be expanded by the Delphix Engine at runtime.
7. Add a Run Bash Shell Command operation to the Configure Clone hook to ensure adcfgclone is run against the newly provisioned
database. Typically, this operation will look similar to the below script.
# NOTE: Ensure the environment variables below are set up correctly by the
# shell. If not, hardcode or generate the values below.
# CONTEXT_NAME=${ORACLE_SID}_$(hostname -s)
# APPS_PASSWD=<passwd>

. ${ORACLE_HOME}/${CONTEXT_NAME}.env

sqlplus "/ as sysdba" <<EOF
@${ORACLE_HOME}/appsutil/install/${CONTEXT_NAME}/adupdlib.sql so
EOF

perl ${ORACLE_HOME}/appsutil/clone/bin/adcfgclone.pl dbconfig \
  ${ORACLE_HOME}/appsutil/${CONTEXT_NAME}.xml <<EOF
${APPS_PASSWD}
EOF
8. Set up a Pre-Snapshot hook Run Bash Shell Command operation to run any pre-clone steps necessary and specific to the target EBS database. Normally, these steps will include running Oracle's adpreclone tool. Below is an example of a simple Run Bash Shell Command hook operation:

# NOTE: Ensure the environment variables below are set up correctly by the
# shell. If not, hardcode or generate the values below.
# CONTEXT_NAME=${ORACLE_SID}_$(hostname -s)
# APPS_PASSWD=<passwd>

. ${ORACLE_HOME}/${CONTEXT_NAME}.env

perl ${ORACLE_HOME}/appsutil/scripts/${CONTEXT_NAME}/adpreclone.pl database <<-EOF
${APPS_PASSWD}
EOF
2. Click Manage.
3. Select My Databases.
4.
5.
6. Click Provision.
The Provision vFiles wizard will open.
7. Select an Environment.
This environment will host the virtual appsTier and be used to execute hook operations specified in a few steps.
8.
9.
10. Enter the EBS-specific parameters for your target appsTier. A subset of these parameters are discussed in more detail below.
a. Ensure the Target Application Hostname and Target DB Server Node values are the short hostnames, not the fully-qualified
hostnames.
b. The APPS Password is required to configure and manage the virtual appsTier.
This password is encrypted when stored within the Delphix Engine and is available as an environment variable to the adcfgclone, adstrtal, and adstpall processes.
c. Enable the Cleanup Before Provision option to permit the Delphix Engine to automatically cleanup stale EBS configuration
during a refresh. This option is recommended, but only available if your Oracle Home is patched with Oracle Universal Installer
(OUI) version 10.2 or above.
i. With this option specified, the Delphix Engine will inspect the target environment's oraInventory prior to refreshing this
virtual appsTier. If any Oracle Homes are already registered within the specified Mount Path, the Delphix Engine will
detach them from the inventory prior to running adcfgclone. These homes must be detached prior to running
post-clone configuration, or else adcfgclone will fail citing conflicting oraInventory entries as an issue. The Delphix
Engine will also inspect the target environment's oratab file, and cleanup any conflicting entries registered within the
specified Mount Path.
ii. Without this option specified, Oracle Homes found to conflict with the specified Mount Path will be reported in an error
instead of automatically cleaned up. For refresh to succeed, conflicting Oracle Homes must be manually detached and
removed from oratab prior to refresh.
11. Click Next.
12.
13.
14.
15. Click Next.
16. Enter any custom hook operations that are needed to help correctly manage the virtual appsTier.
For more information about these hooks, when they are run, and how operations are written, see Customizing Unstructured Files with Hook Operations.
The Configure Clone hook will be run after the adcfgclone.pl tool has both mounted and configured the
appsTier.
17. Click Next.
18. Click Finish.
dbTier Must Be Accessible During appsTier Provisioning
Post-clone configuration will fail if the appsTier cannot connect to the database. Ensure that the target dbTier is accessible to
the appsTier during the provisioning process. Ensure that both the target database and database listener are running.
When provisioning starts, you can review progress of the job in the Databases panel, or in the Job History panel of the Dashboard.
When provisioning is complete, the appsTier vFiles will be included in the group you designated and listed in the Databases panel. If you
select the appsTier vFiles in the Databases panel and click the Open icon, you can view its card, which contains information about the
virtual files and its Data Management settings.
19. See Monitoring EBS 11i appsTier Provisioning Progress for tips for monitoring the progress of appsTier provisioning.
Once all three EBS virtual datasets have been provisioned successfully, your virtual EBS instance should be running and accessible.
Stopping
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4.
Starting
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4. Select the dbTechStack vFiles hosting your virtual EBS database.
5. In the lower right-hand corner, click Start.
Starting the dbTechStack will start the database listener.
6. Select the VDB utilized by your EBS instance.
7. In the lower right-hand corner, click Start.
Starting the database will open the database.
8. Select the appsTier vFiles for your EBS instance.
9. In the lower right-hand corner, click Start.
Starting the appsTier will run Oracle's adstrtal.sh utility.
For multi-node appsTier, the primary node will be started first, followed by secondary nodes sequentially.
Be careful! Services running on the appsTier depend on the availability of services on the dbTier. The steps below are explicitly ordered
with these dependencies in mind. Executing steps out of order may lead to errors in accessing the virtual EBS instance.
Prerequisites
The appsTier Instance Home Directory of the virtual EBS instance must reside under the specified Mount Path.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4. Select the appsTier vFiles for your EBS instance.
5. Click the Stop icon to shut down the appsTier services.
6. Select the VDB utilized by your EBS instance.
7. Click the Stop icon to shut down the database.
8. Select the dbTechStack vFiles hosting your virtual EBS database.
9. Click the Stop icon to shut down the database listener.
10. Rewind the dbTechStack vFiles.
a. Select a snapshot.
b. Click the Rewind button below the snapshots.
11. Rewind the EBS VDB.
a. Select a snapshot.
b. Click the Rewind button below the snapshots.
12. Rewind the appsTier vFiles.
a. Select a snapshot.
b. Click the Rewind button below the snapshots.
Once you have rewound all three EBS virtual datasets successfully, your virtual EBS instance should be running and accessible.
Be careful! Services running on the appsTier depend on the availability of services on the dbTier. The steps below are explicitly ordered
with these dependencies in mind. Executing steps out of order may lead to errors in accessing the virtual EBS instance.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4. Select the appsTier vFiles for your EBS instance.
5. Click the Stop icon to shut down the appsTier services.
6. Select the VDB utilized by your EBS instance.
7. Click the Stop icon to shut down the database.
8. Select the dbTechStack vFiles hosting your virtual EBS database.
9. Click the Stop icon to shut down the database listener.
Clean Up Might Be Required
If you did NOT specify the Cleanup Before Provision option for either your virtual dbTechStack or appsTier, you must
manually clean up your target environments prior to refresh. If you have specified this option for both datasets, no manual work
is required.
Disabling
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4. Select the appsTier vFiles for your EBS instance.
5. On the back of the card, move the slider control from Enabled to Disabled.
Disabling the appsTier vFiles will stop the appsTier services and unmount the appsTier files.
Stopping the appsTier may take a long time. The Delphix Engine will wait for all Oracle application processes to exit before
declaring the appsTier as stopped.
Enabling
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4. Select the dbTechStack vFiles hosting your virtual EBS database.
5. On the back of the card, move the slider control from Disabled to Enabled.
Enabling the dbTechStack vFiles will mount the dbTechStack files and start the database listener.
6. Select the VDB utilized by your EBS instance.
7. On the back of the card, move the slider control from Disabled to Enabled.
Enabling the VDB will mount the data files and start the database instance.
8. Select the appsTier vFiles hosting your virtual EBS database.
9. On the back of the card, move the slider control from Disabled to Enabled.
Enabling the appsTier vFiles will mount the appsTier files and start the application services.
Once you have enabled all three EBS virtual datasets successfully, your virtual EBS instance should be running and accessible.
Procedure
1. Login to the Delphix Admin application using Delphix Admin credentials.
2. Click Manage.
3. Select My Databases.
4. Select the appsTier vFiles for your EBS instance.
5. Delete the appsTier vFiles by clicking the Trash Can icon in the lower left-hand corner.
6. Select the VDB utilized by your EBS instance.
7. Delete the VDB by clicking the Trash Can icon in the lower left-hand corner.
8. Click Manage.
9. Select Environments.
10. Select the target dbTier environment.
11. Click the Databases tab.
12. In the list of Installation Homes on the environment, click the Trash Can icon next to the dbTechStack Oracle Home you want to delete.
13. Click Manage.
14. Select My Databases.
15. Select the dbTechStack vFiles for your EBS instance.
16. Delete the dbTechStack vFiles by clicking the Trash Can icon in the lower left-hand corner.
17. Clean up any files that the virtual EBS instance might have created outside of the Delphix mount points on the target
environments. These typically include the instance-specific directories, oraInventory files, and oraTab entries.
Once you have deleted all three EBS virtual datasets successfully, your virtual EBS instance should be fully removed from the target
environments.
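Step 17's manual cleanup might look like the following sketch. Every path here is a hypothetical example, not a value from this guide; verify each one against your own environment before deleting anything.

```shell
# Hypothetical instance-specific directory created outside the Delphix mount points.
INST_TOP=/u01/oracle/VIS/inst/apps/VIS_appshost
rm -rf "${INST_TOP}"   # succeeds silently if the directory does not exist

# Remove the virtual SID's oratab entry (demonstrated on a throwaway copy,
# not the real /etc/oratab):
printf 'VIS:/u01/oracle/VIS/db/tech_st/11.2.0:N\n' > /tmp/oratab.demo
REMAINING=$(sed '/^VIS:/d' /tmp/oratab.demo)
test -z "${REMAINING}" && echo "oratab entry removed"
```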
Procedure
1. Disable the virtual EBS instance by following the procedure outlined in Enabling and Disabling a Virtual EBS Instance.
2. Select the appsTier vFiles for your EBS instance.
3. On the back of the card, modify the EBS-specific parameters for the virtual appsTier.
This process will normally entail adding or deleting Additional Nodes and configuring their corresponding Services.
4. Apply the configuration changes by refreshing the entire virtual EBS instance.
Follow the procedure outlined in Refreshing a Virtual EBS Instance.
Related Links
Enabling and Disabling a Virtual EBS Instance
Refreshing a Virtual EBS Instance
Replication
1. Login to the source Delphix Engine's Delphix Admin application using Delphix Admin credentials.
2. Configure replication between the source Delphix Engine and a target Delphix Engine.
For a detailed outline of the replication process, see Configuring Replication.
3. Select the dbTechStack vFiles, VDB, and appsTier vFiles objects to be replicated.
These objects have dependencies on all other Delphix Engine objects relevant to the virtual EBS instance: you do not need to specify
any additional objects for EBS replication.
4. Schedule or perform the replication.
Failover
1. Login to the target Delphix Engine's Delphix Admin application using Delphix Admin credentials.
2. Failover the replica dbTechStack vFiles, VDB, and appsTier vFiles. Failing over these objects will sever future replication, but will not enable the datasets.
For a detailed outline of the failover process, see Failing Over a Replica.
3. Enable the dbTier and appsTier environments.
a. Click Manage.
b. Select Environments.
c. For each environment, move the slider control from Disabled to Enabled .
4. Enable the virtual EBS instance by following the procedure outlined in Enabling and Disabling a Virtual EBS Instance.
Related Links
Configuring Replication
Failing Over a Replica
Enabling and Disabling a Virtual EBS Instance
Virtualizing Oracle Enterprise Business Suite
Provisioning from Replicated Data Sources or VDBs
Procedure
1. Disable the virtual EBS instance by following the procedure outlined in Enabling and Disabling a Virtual EBS Instance.
2. Once you have safely disabled the virtual EBS instance , upgrade the Delphix Engine by following the procedure outlined in Upgrading
the Delphix Engine.
3. Enable the virtual EBS instance by following the procedure outlined in Enabling and Disabling a Virtual EBS Instance.
Related Links
Enabling and Disabling a Virtual EBS Instance
Upgrading the Delphix Engine
Procedure
1. Create a Jet Stream data template by following the procedure outlined in Understanding Jet Stream Data Templates.
a. For EBS, the data template will have three data sources: the dbTechStack, the database, and the appsTier.
2. Be sure to set the following ordering of the data sources when creating the data template. This ordering will ensure that Jet Stream
operations do not violate the EBS dataset dependencies.
Order  Dataset
1      dbTechStack
2      Database
3      appsTier
Once you have created a Jet Stream data template, you can configure Jet Stream data containers to manage virtual EBS instances. Jet Stream data containers will follow the ordering of data sources configured in the template. All Jet Stream operations should work as expected for virtual EBS instances.
Related Links
Managing Data Operations of Virtual EBS Instances
Refreshing a Virtual EBS Instance
Understanding Jet Stream Data Templates
Refreshing or Rewinding
The APPS password stored across individual snapshots of a virtual EBS instance will not be consistent after a password change. Old snapshots
of EBS data will refer to a different APPS password than new snapshots of EBS data. To perform a refresh or rewind, you must explicitly
manipulate the Delphix Engine's copy of the APPS password to ensure that the virtual EBS instance is being accessed with the correct APPS
password at every step.
Recipe Not Needed If Password Not Changed
These steps are only necessary if the virtual EBS instance has a different APPS password than the snapshots being targeted by the
refresh or rewind.
If the APPS password has not been changed, follow the instructions in Refreshing a Virtual EBS Instance or Rewinding a Virtual
EBS Instance.
1. Before refreshing or rewinding the virtual EBS instance, disable the entire virtual EBS instance.
For an outline of this process, see Enabling and Disabling a Virtual EBS Instance.
2. Identify the APPS password for the snapshots being targeted by the refresh or rewind. Modify the virtual EBS instance to refer to this
password.
a. Change the APPS password on the dbTechStack.
i. Select the dbTechStack vFiles hosting your virtual EBS database.
2.
Related Links
Enabling and Disabling a Virtual EBS Instance
Refreshing a Virtual EBS Instance
Rewinding a Virtual EBS Instance
Procedure
1. Prior to refreshing a target dbTier environment, ensure that the virtual dbTechStack is both enabled and started.
For more information, see Enabling and Disabling a Virtual EBS Instance and Starting and Stopping a Virtual EBS Instance.
2. Refresh the target dbTier environment.
For more information, see Refreshing an Oracle Environment.
Related Links
Enabling and Disabling a Virtual EBS Instance
Starting and Stopping a Virtual EBS Instance
Refreshing an Oracle Environment
Getting Started
Welcome to Delphix Mission Control
User Roles and Permissions
System Requirements
Supported Browsers
Delphix Engine Configuration
Activity One: Install Mission Control
Logging In
Mission Control Toolbar
Activity Two: Add Delphix Engines to Mission Control
Adding Users
Change a User Password
Search and Run Reports
Activity Three: Access a List of Reports
Filter, Organize, and Extract Reports
Tagging
Activity Four: Apply Tags
Filtering
Activity Five: Extracting Data from Reports
Understanding the Graphs Interface
Activity Six: Viewing Graphs in the Breakdown Tab
Working with Total Storage Graphs
Working with Source Usage Graphs
Activity Seven: Working with Graphs in the Historical Tab
Mission Control Maintenance
Managing the Operating System
Upgrading Mission Control
Activity Eight: Self-Service Upgrade of Mission Control
Activity Nine: Generate and Upload MC Support Bundles
Resources
Support
Auditor User
Auditor users can only view report data. Admin users can also assign auditor users a set of tags (arbitrary text strings) to restrict which report data
they can view. There is no default auditor account. The first Delphix Administrator will need to create the auditor users and will be responsible for
creating their User IDs and Passwords.
System Requirements
The VM guest where you install Mission Control has the following requirements:
VMware ESX: 4.x or greater
Two Virtual CPUs
4 GB of Memory
50 GB of Storage
Mission Control supports Delphix Engine 4.0 or later.
Supported Browsers
The following are the minimum supported browser versions for accessing the Mission Control console:
Chrome 37
Safari 7
Firefox 32
Internet Explorer 11
Activity One: Import the OVA file for Mission Control into a VM guest
1. Using the vSphere client, login to the vSphere server where you want to install Mission Control.
2. Click File.
3. Select Deploy OVA Template.
4. Select the Mission Control OVA file.
6.
a.
Logging In
1. Access Mission Control by opening a web browser using the IP address or DNS-qualified hostname. Mission Control does not currently support SSL connections, so you should use http, not https.
2. Mission Control ships with one generic Delphix Admin User. The User ID is delphix_admin and the password is delphix.
Once logged in as the Delphix Admin User, change your password. You can find instructions to do this in the Change a User Password section
found below.
Viewing Reports
The View Reports tab provides aggregated data across all connected Delphix Engines and presents it as a set of different reports. You can select these reports from the drop-down menu. Mission Control has automated features that check for updates across all Delphix Engines and sync these updates into reports every 10 minutes. To refresh the currently displayed report manually, click Refresh.
Configure Reports
The Reports tab is the central place to configure settings, create scripts, and email reports in Mission Control. There are three sections that
include Report scripts, Script configuration (tunables), and Email reports. To learn more about how to navigate and work in each of these
sections, please continue reading.
To navigate to the Report configuration tab:
1. Click the configuration icon
2. Click Reports.
Report Scripts
Enable/disable individual reports to determine which ones are available in the reports drop-down menu
Delete reports
Deleted reports are no longer generated in Mission Control
Upload new reports
This is an experimental feature. Please contact Delphix if you are interested in customizing existing reports or creating new ones.
Script Configuration
Configure tunable parameters for specific reports
Click the field in the value column to make it editable
Email Reports
Configure email reports which automatically send tabular data to any number of email addresses
Send emails on daily, weekly, or monthly schedules
Customize the way the data is presented in emails by choosing the sort column and limiting the number of rows.
1. Click the configuration icon.
2. Click Reports.
3. Scroll down to Email Reports.
1. Click the configuration icon.
2. Click System.
3. Scroll down to Email and click Edit Settings.
a. The Report field provides a selection of the specific Mission Control report you would like to use for the Email Report function.
Note: Only tabular reports are available for email.
b. Sort by Selection provides a drop-down of the column you wish to sort by, which varies based on the report you have selected
above, and whether the results should be ascending or descending.
c. In the Limit the Report To fields, choose whether to run and email the report with all data rows or with only the number of data
rows you specify.
d. In the Schedule field, select the day and time at which you want the report to be sent.
e. In the Send to field, enter the email addresses to which you want to send the report. Note: Use a comma to separate email
addresses.
Once you have configured all of the fields above, save the information by clicking Add Email Report. The newly added report will appear, with
controls to edit it, send it now, or delete it by clicking the X button.
Optional: Click the Edit button to change or enter new information in any of the Add Email Report configuration fields.
Optional: Click the Send Now button to either:
1. Send a test email report while configuring an email report, in order to verify the report's settings and design, or
2. Send a one-off email outside of the automated schedule.
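The configuration above can be pictured as a small data structure. The sketch below is illustrative only — the field names and helper are hypothetical and are not Mission Control's API; it simply shows how the comma-separated Send to field, the sort column, and the row limit shape the emailed output.

```python
# Hypothetical model of an email report configuration; names are
# illustrative only and do not reflect Mission Control internals.
def build_email_report(rows, sort_by, descending=True, row_limit=None, send_to=""):
    # Recipients are entered as a comma-separated string in the Send to field.
    recipients = [addr.strip() for addr in send_to.split(",") if addr.strip()]
    # Sort by the chosen column, ascending or descending.
    ordered = sorted(rows, key=lambda r: r[sort_by], reverse=descending)
    # Limit the report to all data rows, or only the first N rows.
    if row_limit is not None:
        ordered = ordered[:row_limit]
    return {"recipients": recipients, "rows": ordered}

report = build_email_report(
    rows=[{"engine": "e1", "used_gb": 120}, {"engine": "e2", "used_gb": 300}],
    sort_by="used_gb",
    row_limit=1,
    send_to="[email protected], [email protected]",
)
```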
Configure Engines
To navigate to the Engines screen:
1. Click the configuration icon.
2. Click Engines.
The Engines tab lists all Delphix Engines that you have added to Mission Control. The Status column shows whether Mission Control is
connected to each Engine; it displays a specific error message if it is unable to connect. To remove an engine from Mission Control:
1. Click the X icon next to the engine you wish to delete.
Configure Users
To navigate to the Users screen:
1. Click the configuration icon.
2. Click Users.
The Users tab displays the set of user accounts that have permission to access Mission Control. You can assign tags to auditor users to restrict
which Delphix Engines and containers they can see. For more information, refer to the How to Assign Tags activity in a later section.
1. Click the configuration icon.
2. Click Users.
3. Click Add user.
4. Enter a username and password.
5. Select auditor or admin.
6. Inform the newly created user of their user ID and password login credentials.
1. Click the configuration icon.
2. Click Users.
3. Click the name in the upper right-hand corner.
4. Click change password.
Health Reports
Active Faults: Presents a consolidated view of faults across all Delphix Engines, along with suggested actions (in the Action column) to resolve
the fault. When you have identified and fixed a fault, an administrator can go to the affected engine and mark the fault as resolved through the
GUI or CLI.
Source Reports
dSource Usage: Shows a list of dSources with the following information for each:
Actual disk capacity the dSource uses
Unvirtualized capacity, that is, the disk space that would be required if not using Delphix Engines
Percentage storage saved
Number of VDBs that are currently provisioned from the dSource
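As a sketch of how the savings figure relates to the other two columns, the percentage saved can be derived from the actual and unvirtualized capacities. This derivation is an assumption for illustration; the report computes the value for you.

```python
def percent_storage_saved(actual_gb, unvirtualized_gb):
    # Savings = space that would have been needed minus space actually used,
    # expressed as a percentage of the unvirtualized footprint.
    return 100.0 * (unvirtualized_gb - actual_gb) / unvirtualized_gb

# e.g. a dSource using 50 GB on disk that would need 500 GB unvirtualized
saving = percent_storage_saved(50, 500)  # 90.0 percent saved
```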
SnapSync Summary: Allows you to validate that SnapSync is occurring as expected and to compare the current and average duration of
SnapSync operations. The duration of SnapSync operations may vary based on the size of the database, available network bandwidth, and
database configuration, for example, whether change block tracking (CBT) is enabled. You can use this report to easily find the dSources for
which SnapSyncs take the longest.
Storage Reports
Storage Breakdown
Using the information displayed with the Total button, you can:
Determine which engines have the most free space and identify good candidates for new dSources/VDBs
Determine which engines have the least free space, identify which engines need additional storage or require storage to be freed, and
identify which engines may require different retention policies
Determine which engines have the most space used by VDBs and take actions such as refreshing VDBs or removing unneeded VDBs
and/or VDB snapshots
Determine which engines have the most space used by dSources and identify source breakdown to see how capacity is used for dSource
data. If needed, you can make appropriate changes to free up space.
Using the information displayed with the Source button, you can determine which engines have the most space used for logs and snapshots and
modify retention policies or refresh VDBs to release old snapshots.
VDB Reports
VDB Inventory: Shows a consolidated list of all virtual datasets (VDBs and vFiles) that have been provisioned from a data source using the
Delphix Engine. This report contains the same data as the top-level Containers tab. You can use this report to easily identify where each virtual
database is located.
Tagging
You can tag Delphix Engines in Mission Control with a set of arbitrary text strings. You can then filter reports to show only data from Delphix
Engines with a certain tag. You can also use tags to restrict auditor users so that they can only view data from Delphix Engines with that tag.
1. Click the configuration icon.
2. Click Users.
3. Click the space under the Tag headline.
4. Enter any text string.
5. Click OK.
1. Click the configuration icon.
2. Click Users.
3. Click in the space under the Tag headline.
4. Enter the tag category configured for the Auditor User.
5. Click OK.
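The tag semantics described above — reports and auditor visibility restricted to engines carrying a given tag — amount to a simple set-membership filter. A minimal sketch follows; the data shapes and engine names are hypothetical, not Mission Control's internal representation.

```python
def engines_visible_to(engines, required_tag):
    # An engine is visible if any of its tags matches the required tag.
    return [e["name"] for e in engines if required_tag in e["tags"]]

engines = [
    {"name": "engine-a", "tags": {"prod", "oracle"}},
    {"name": "engine-b", "tags": {"dev"}},
]
visible = engines_visible_to(engines, "prod")  # ["engine-a"]
```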
Filtering
Each report contains a free-text filter field. This filter searches all displayed columns and returns all rows that have at least one
match. Examples of report filtering include:
Identifying certain types of faults
Identifying all assets related to an engine
Locating a virtual database by name
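The filter behavior described — search every displayed column and keep rows with at least one match — can be sketched as below. A case-insensitive substring match is assumed for illustration; the actual matching rules may differ.

```python
def filter_rows(rows, query):
    # Keep a row if the query appears (case-insensitively) in any column value.
    q = query.lower()
    return [row for row in rows if any(q in str(v).lower() for v in row.values())]

rows = [
    {"engine": "nyc-engine", "fault": "Space low"},
    {"engine": "sfo-engine", "fault": "SnapSync failed"},
]
matches = filter_rows(rows, "snapsync")  # only the sfo-engine row matches
```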
The View Report Drop-Down Menu in the Mission Control Toolbar, Version 1.3.
Graphical Visualization of Storage Capacity Breakdown for All Delphix Engines - Version 1.3.
Graphical Visualization of Storage Capacity Breakdown for All Engines by Source, Mission Control Version 1.3.
To display engines according to a particular category:
1. Click a category in the Category Legend Key.
The engines will appear in order according to the category you chose to prioritize. In the screenshot above, Active Source Data has been
prioritized.
Storage History for Top Five Engines, Mission Control Version 1.3.
By default, the above graph shows historical details of the top five engines, based on the most recent data point. However, you can choose which
engine's details to display by selecting it from the drop-down menu.
The screenshot below illustrates using the scroll bar at the bottom to home in on a particular time and date of capacity use. Hover your mouse
over points of interest on the graph; a rollover box will appear with specific storage information.
1. Click the configuration icon.
2. Click System.
Here you can view the current version of Mission Control.
1. Click the configuration icon.
2. Click System.
3. Scroll down to the Upgrade section.
4. Click Choose file.
5. Select the upgrade script.
6. Click Upload & Install.
1. Click the configuration icon.
2. Click System.
3. Scroll down to the Support section.
4. Enter the case number if provided by Delphix Support.
5. Click Submit.
_JetStream
Bookmarks
Capacity
Understanding Bookmarks
Bookmarks Overview
Using Bookmarks in Data Templates
Resources
Support
Admin User
Admin users have full access to all report data and can configure Jet Stream. Additionally, they can use the Delphix data platform to add/delete
Delphix Engines, add/delete reports, add/delete users, change tunable settings, add/delete tags, and create and assign data templates and
containers.
Login
1. Access Jet Stream by opening a web browser and navigating to the IP address or DNS-qualified host name.
2. Log in with the Delphix Admin User ID and Password provided for you.
Data Sources
A data source in Delphix can represent a database, an application, or a set of unstructured files. Delphix administrators configure the Delphix
Engine to link to data sources, which pulls the data of these sources into Delphix. The Delphix Engine will periodically pull in new changes to the
data, based on a specific policy. This, in turn, begins building a custom timeline for each data source. Additionally, the Delphix Engine can rapidly
provision new data sources that are space-efficient copies, allowing users to work in parallel without impacting each other.
Data Templates
Data templates are the backbone of the Jet Stream data container. They are created by you, the Delphix administrator, and consist of the data
sources users need in order to manage their data playground and their testing and/or development environments. Data templates serve as the
parent for a set of data containers that the administrator assigns to Jet Stream users. Additionally, data templates enforce the boundaries for how
data is shared. Data can only be shared directly with other users whose containers were created from the same parent data template.
Data Containers
A Jet Stream data container allows data users to access and manage their data in powerful ways. Their data can consist of application binaries,
supporting information, and even the entire database(s) that underlie it.
A Jet Stream data container allows users to:
Undo any changes to their application data in seconds or minutes
Have immediate access to any version of their data over the course of their project
Share their data with other people on their team, without needing to relinquish control of their own container
Refresh their data from production data without waiting for an overworked DBA
A Jet Stream data container consists of one or more data sources, such as databases, application binaries, or other application data. The user
controls the data made available by these data sources. Just like data sources in a template, changes that the user makes will be tracked,
providing the user with their own data history.
The Jet Stream Data Container Interface lets users view the details and status of their data container and its associated data sources, as well
as manipulate which data is in those sources. The Data Container Interface includes a section called the Data Container Report Panel, which
displays details about each source, including the connection information needed to access it - for example, the Java Database Connectivity (JDBC)
string for a database. This connection information is persistent and stable for the life of the data container, regardless of what data the
sources are hosting.
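As an illustration of the kind of connection information the report panel exposes, an Oracle thin-driver JDBC string generally has the form `jdbc:oracle:thin:@host:port/service`. The sketch below pulls the host, port, and service name out of such a string; the hostname and service values are made-up examples, not values produced by the product.

```python
import re

def parse_oracle_jdbc(url):
    # Matches the common thin-driver form: jdbc:oracle:thin:@host:port/service
    m = re.match(r"jdbc:oracle:thin:@([^:/]+):(\d+)/(.+)", url)
    if not m:
        raise ValueError("unrecognized JDBC URL: " + url)
    host, port, service = m.groups()
    return {"host": host, "port": int(port), "service": service}

# Hypothetical example values for illustration only
info = parse_oracle_jdbc("jdbc:oracle:thin:@dbhost.example.com:1521/orclpdb")
```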
Additionally, Jet Stream data users are able to set, bookmark, and share data points in their container with other Jet Stream data users of other
data containers, as long as all the data containers were created from the same parent data template.
JetStream Delphix Admin User Interface Drop Down Menu, Version 1.0.0
also help you navigate to areas where you can complete specific tasks, such as creating a new template or container, working with data
timeflows, assigning users to containers, and bookmarking important points in time.
For more details about how to use this interface, please refer to the Jet Stream Data User Guide. The screenshot below illustrates the data user
interface.
The user is now a Jet Stream user! This means that the user can now log in to the Jet Stream user interface, and you can make the user the
owner of a data container.
Notes
Jet Stream users will only be able to access the Jet Stream Data Management page. They will not be able to access the other portions
of the Jet Stream interface, nor the Admin App.
A Delphix admin user cannot be made a JS-Only User. However, admins can still use Jet Stream and own a data container. Admins are
also able to manage all data containers.
A user who owns one or more data containers cannot be deleted.
For the list of data containers that a given user owns, see Jet Stream User Details.
You cannot revoke a user's JS-Only role if they own any data containers.
1. From the drop-down menu in the upper right-hand corner of the Delphix UI, select Jet Stream.
You have the option of setting the ordering of data sources in a data template. This option minimizes the time needed to complete Jet
Stream operations by running them in parallel on each data source. You cannot change this setting after the data template has been
created. If you want default behavior, do NOT select the box highlighted in the image above.
When your template has ordering constraints, as with Oracle EBS, you must set the startup order for each data source. Check the Set
startup order of data sources box. The Delphix Engine will select the data source with order 1 as the first source started and the last
one to be stopped. The data source with order 2 will be selected as the second source started, and this sequence will continue until the
last data source is selected and ordered. Note that it is not possible to have operations performed in parallel on a subset of data
sources and sequentially on a different subset of data sources.
For Oracle EBS, the vFiles dbTechStack will have order 1, the Oracle database order 2, and the vFiles appsTier order 3. For more
information about EBS, see the EBS documentation.
Once you have created a template, you cannot change the set of data sources in it. Any VDBs or dSources being used as data sources
in Jet Stream will appear with a special badge in the Admin App.
When creating a data template with masked data sources, select the parent masked VDBs as sources to use in the data template.
Notes
Each tile corresponds to a data template and contains high-level information about that data template. For example, the number of child
data containers is visible under the name of the container.
You can search, sort, and filter the data template tiles, making it easy to manage a large number of data templates in Jet Stream.
Summary
Use this tile to get an overview of the data template and its child data containers.
Containers
Use this tile to create, view, and delete child data containers from this data template.
Sources
In this tile, you can view the data sources that this data template uses. Each data source has a Jet Stream user-visible name, a description, and a
set of properties that consist of arbitrary key/value pairs. This information will be included in the data containers provisioned from this template.
Properties
Use this tile to edit the data template's properties. Properties are arbitrary key/value pairs associated with the data template. These values will be
propagated to all data containers provisioned from this template. This provides a way for you to annotate data templates and data containers with
whatever information is relevant to their use case.
Bookmarks
Use this tile to create and manage bookmarks on the data template. A bookmark represents a given point in time that is protected against
retention. Bookmarks created on a data template are visible to all of the data containers provisioned from it. For more details, refer to the
Bookmarks section in the Jet Stream Data User Guide.
Capacity
Use this tile to get information about the storage associated with the data template and its child containers.
Refresh
This is the same basic concept as Refresh in VDBs. In Jet Stream, Refresh will update the data on the active branch of a user's data container.
The user will then have the latest data in the sources of the data template from which the container was provisioned.
Restore
Restore allows a Jet Stream user to update the data on the active branch of their data container to any point in time on the data container, the
data template from which the container was provisioned, or a bookmark. This operation effectively means, "Take me to the data at this time."
Reset
Reset is a simplified version of Restore built to support the notion of "undo." It allows a user to reset the state of their application container to the
latest operation. This can be useful for testing workflows where, after each test, users want to reset the state of their environment.
Branch
A Jet Stream branch represents a logical timeline, effectively a task on which a user is working. Only one branch can be active at a time, but a
user can use multiple branches to track logically separate tasks. Jet Stream branches do not require the allocation of a new VDB; instead, they
are composed of a collection of timeflows within a VDB.
Activate
This allows the user to select which branch they want to be active. Only a single branch within a data container can be active at a time.
Bookmark
This creates a semantic name for a point in time and prevents this data from being removed by the retention policy. Bookmarks can be annotated
with tags to make them easier to search for. In addition to tags, bookmarks allow a user to enter a description of what the bookmark represents.
Share
Bookmarks can be shared, which allows them to be seen by users who own data containers that have been provisioned from the same data
template. This allows users to share data, providing a way for other users to either restore their existing timeline or create a new branch from
these shared points.
4. It is acceptable to have multiple Owners for each data container. Select the VDBs to use for this container's data sources. The available VDBs
have the following constraints:
They have been provisioned from the dSources/VDBs belonging to the parent data template
They are not already part of another Jet Stream data template or container
Note: If there are no VDBs that meet these constraints, you may see a message informing you that you do not have any compatible VDBs.
Click Create.
Procedure
Once a child masked VDB is selected for the data container, the admin user can see the parent-child relationship as a masked source under data
sources.
Additionally, an admin user can select both masked and unmasked data sources in both Jet Stream templates and data containers.
Jet Stream users will not know whether the data in their containers and branches is masked or unmasked. All Jet Stream functionality remains the
same regardless of whether a data source is masked or unmasked.
When performing the Delete Container operation, you can uncheck the Delete associated VDBs and vFiles box in the dialog window to keep
these data sources intact after the Data Container is deleted.
Jet Stream Data Management Interface Shortcut in Jet Stream Data Template, Version 1.0.0
Coordinating Users
The opportunity for disruption increases as more owners share a single container. Sharing a container works best when users can communicate
with each other, such as when they are part of a team, or when they work with the container at different times. Jet Stream users cannot
see the other users with whom they share the container. Work with Jet Stream users to ensure they know who they are sharing with.
You can see which user performed which action in the History tab of the Data Management page in Jet Stream. Be aware that operation counts in
the template view are currently counted per container, not per user performing the operation.
Understanding Bookmarks
Bookmarks Overview
Bookmarks are a way to mark and name a particular moment of data on a timeline. You can restore the active branch's timeline to the moment of
data marked with a bookmark. You can also share bookmarks with other Jet Stream users, which allows them to restore their own active
branches to the moment of data in your container. The data represented by a bookmark is protected and will not be deleted until the bookmark is
deleted. To help manage the space used by this data, users can set an optional expiration date for a bookmark. At the end of the set date, the
bookmark will automatically be deleted. Once created, you can easily locate a bookmark through one of the bookmark viewers in the interface. To
understand how to use bookmarks in Jet Stream, please refer to the Jet Stream Data User Guide.
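The expiration behavior described above — a bookmark with an optional expiration date is automatically deleted once that date has passed — can be sketched as a simple pruning pass. The data shape here is hypothetical and for illustration only; Jet Stream performs this cleanup itself.

```python
from datetime import date

def prune_expired(bookmarks, today):
    # Keep bookmarks with no expiration, or whose expiration date has not passed.
    return [b for b in bookmarks if b["expires"] is None or b["expires"] >= today]

bookmarks = [
    {"name": "pre-upgrade", "expires": None},             # protected indefinitely
    {"name": "test-run-42", "expires": date(2016, 1, 1)},
]
kept = prune_expired(bookmarks, today=date(2016, 2, 1))  # only "pre-upgrade" remains
```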
Using Bookmarks in Data Templates
An admin user can create a bookmark on a template, which will then be automatically shared with all containers created from that template.
Additionally, an admin user can create a bookmark on the master template timeline at the point in time you are interested in. The bookmark is
always protected from retention policies, and a new branch can be created from it.
Bookmarks: The amount of space used by the bookmarks on this data template. This is the space that will be freed if you delete all
bookmarks on the template.
Unvirtualized: The amount of space that would be used by the data in this template and its child data containers without Delphix
virtualization.
The pie chart and table graphs can help you analyze storage usage information.
You can locate the Usage tile at the bottom of the Jet Stream navigation sidebar, as seen in the image below. Usage summaries are available for
templates, containers, and users. For example, when you click the Usage tile on the Template Details page, the usage details you interact with
will be in the context of the selected data template. The same is true when you are navigating the Data Management page for the data
containers, and the User Details page for users.
The Usage tile in the Jet Stream navigation sidebar, Version 4.2
Shared (self data): The amount of space that cannot be freed on this data container because it is also being referenced by sibling data
containers due to Restore or Create Branch operations, via shared bookmarks.
Unvirtualized: The amount of space that would be used by the data in this container without Delphix virtualization.
Resources
Access more resources at http://docs.delphix.com/display/DOCS50/Delphix+Engine+4.1+Documentation
Support
Ask the community for support at https://community.delphix.com/delphix. If you are seeing an issue
that cannot be resolved with help from the community, file a support case as appropriate.
Resources
Support
Getting Started
Welcome to Delphix Jet Stream
Jet Stream grants access to the data that users need, whenever they need it. Once users have been assigned a Jet Stream data container, they
can control the data available within it. This means they can refresh to the latest production data, roll back to a previous point in the data
container's timeline, and share data with another Jet Stream user without requiring any involvement from Information Technology or database
administrators (DBAs). Self-service data management allows developers to be more productive while using fewer resources, dramatically
improving operational efficiency.
Admin User
Admin users have full access to all report data and can configure Jet Stream. Additionally, they can use the Delphix data platform to add/delete
Delphix Engines, add/delete reports, add/delete users, change tunable settings, add/delete tags, and create and assign data templates and
containers.
Login
1. Access Jet Stream by opening a web browser using the IP address or DNS qualified host name.
2. Login with the User ID and Password the Delphix Administrator has provided for you.
The Data Container Workspace contains all the tools, actions, and view panels needed to begin using Jet Stream features. For example,
the workspace allows a user to view the history of their data on a branch, and to refresh, reset, and restore that data.
The user login icon in the upper right-hand corner of the screen provides a drop-down menu with options to change your password
and/or log out.
The Container drop-down menu in the upper right-hand region of the screen allows you to change which data container (or data
template) is shown in the page. Users can own multiple data containers and can select whichever data containers they want to browse.
The Data Container View Panel, found on the left-hand side of the screen, is divided into three tabbed sections: Time, Branches, and
Bookmarks. These tabs allow you to find and select data that you are interested in. Based on user selections made in the view panel, the
corresponding branch timeline can change.
The Data Container Self-Service Toolbar allows you to perform tasks and activities with data in the current container by clicking the
following user action icons:
Activate will make a branch active.
Bookmark will mark an interesting point of data on a branch timeline.
Branch will create a branch that supports one task. A branch is a group of data time segments called a "timeline."
Share will share a bookmark with users of other data containers from the same template.
Refresh will refresh each source in the data container on a branch timeline to the latest data in the corresponding source of the data
template.
Restore will restore the data to a point in time from the template, the container, or a shared bookmark.
Reset will reset to the last interesting moment of data time on the current data timeline.
Stop will stop a data container.
Start will start a data container.
Branch Timeline
Use this to view the timeline associated with a branch. Note that this only shows the timeline for a single branch. The branch timeline is
how a user interacts with data in the container to mark, stamp, and perform tasks that occur at various points in time.
Data Container Report Panel (Bottom Half of the Jet Stream Interface)
The Data Container Report Panel consists of a series of tile buttons that report on activities completed in the Data Container.
They are summarized below as Summary, Sources, History, Bookmarks, and Usage.
Summary
The Summary tile allows you to see an overview identifying what data sources are in the data container, properties associated with the
data container, and information about operations performed in the data container.
Sources
The Sources tile in the upper left-hand panel bar provides information about each data source, such as the description, name, and
properties that the administrator has placed inside the data container. In particular, you can get the connection information to access them from
here.
History
The History tile reveals a list of actions performed in this data container. Using the filter control on the upper right-hand side of the
page is an easy way to find specific activities completed over time.
Bookmarks
The Bookmarks tile allows you to view and edit details about bookmarks within this data container and bookmarks accessible from it.
Usage
The Usage tile allows you to view information about how much storage capacity this container has used.
The Delphix administrator adds the data sources that development and quality assurance (QA) teams need to a Jet Stream data template. This data template acts as a parent source to create the data containers that
the administrator will assign to Jet Stream data users. Data sources flow from the Delphix Engine into a data template and downstream into a
data container, where a Jet Stream data user or users will use the data sources to complete tasks. The data container acts as a self-contained
testing environment and playground for the Jet Stream data user. Additionally, Jet Stream data users are able to set, bookmark, and share data
points in their container with other Jet Stream data users of other data containers, as long as all the data containers were created from the same
parent data template.
Understanding Branches
You can organize data in the data container into task-specific groupings, called "branches." For example, you can use a branch to group all the
data you have used while addressing a particular bug, testing a new feature in an application, or exploring a business analytics scenario. By
default, Jet Stream automatically creates the first branch of source data for you when you log in to Jet Stream for the first time. You can view the
default branch and any additional branches that you create over time by clicking the Branch tab. Additionally, to the right of the default branch,
you will see an interconnected branch timeline unique to whichever branch is currently active. The illustration below displays both the default
branch in the Branch tab of the Data Container View Panel and the default branch timeline.
Jet Stream Branch View Panel and Branch Timeline, Version 1.0.0
A branch is used to track a logical task, and contains a timeline of the historical data for that task. One branch is the "active" branch, which means
that it is the branch that is currently being updated with new data from the data sources. At any time, you can change which branch is active and
thus change which data is in the associated data sources.
Understanding Timelines
Branch Timeline
A branch timeline acts as a dynamic point-in-time interface for user actions within the branch. You can interact with the source data in the active
branch by using both the branch timeline and icons along the Self-Service Toolbar at specific points in time. Common activities include resetting
data sources to run a test, refreshing the data container with the most current source data, and bookmarking data to share or track interesting
moments of time along the branch timeline. Users work with one branch at a time to perform a series of actions related to a particular testing or
debugging task such as data updates or starting and stopping data. As you work within your data container, you can create more branches over
time to run or complete separate tasks. Additionally, the data container tracks each branch and the corresponding actions you perform on the
branches. To view the actions completed over the life of a branch, see the container timeline in the Time tab of the Data Container View Panel.
Jet Stream Branch with Timeline Segments Over the Life of the Branch, Version 1.0.0
Container Timeline
The Time tab displays the data container's timeline, which acts as a wall clock of time. It shows continuous real time across all branches and
timeline segments. You can scroll up and down in the container timeline to find the point of time that interests you.
3. Click on the data operation button on the toolbar that you want to perform at this point in time. Data operations can include Reset, Create
Branch, and Create a Bookmark.
Note: The flyout will not let you pick a date that is before the first point of data time in the container, or after the present moment.
Jet Stream Self Service Toolbar with a Point In Time selected on an Active Branch Timeline, Version 1.0.0
The Self-Service Toolbar is dynamic and will change based on the tasks a user performs in Jet Stream. These workflows influence how and
when self-service actions become available on the toolbar.
Branch 2: Reset a branch to the last action (e.g., refresh) on the timeline, and use
In the above illustrations, an individual branch's timeline shows all actions performed on the branch while the branch was active. The active
branch timeline can be interrupted and deactivated when a user chooses to perform actions such as switching to another branch, Create Branch,
Activate, or Stop a data container. Additionally, a user will only be able to view actions on a single branch at a time. A better way to manage
multiple branches is to go to the Time tab in the Data Container View Panel. The Time tab allows you to access the container timeline, which
becomes useful as you toggle back and forth between branches to complete different tasks. The container timeline allows you to view all the
continuous data points of time, with all actions taken on all branches in a single data container.
underlying data sources, these times may be different. Not all data sources track changes at the same granularity, as illustrated below.
Understanding Bookmarks
Bookmarks are a way to mark and name a particular moment of data on a timeline. Once created, you can easily locate a bookmark through one
of the bookmark viewers in the interface. You can restore the active branch's timeline to the point of data marked with a bookmark. You can also
share bookmarks with other Jet Stream users, which allows them to restore their own active branches to the point of data in your container.
Bookmark Appearance
A bookmark that is private
Getting Started
Data Containers can be shared between multiple Jet Stream users. In this situation, Jet Stream users should coordinate with their co-owners
when performing data operations that could disrupt other users' workflows, such as stopping or refreshing the Data Container.
Create a Bookmark
1. Select a Data Point on a branch's timeline.
2. Click the Bookmark icon on the Self-Service Toolbar.
Activity Three: Using Refresh to Get the Latest Data From a Data Template
Start a new timeline segment with the most recent point of data from the Data Container's Data Template.
1. Click the Refresh icon.
Refresh creates a new timeline segment on the active branch. This refreshes each source in the data container to the latest data in the
corresponding source of the data template.
If you restore data back to a point in time on the data template master timeline, Jet Stream will ask you which data container to restore into. It will
then:
Reflect the selected point of data into a new timeline segment on the active branch
Copy the moment of data into the data sources
If the timeline segment on a branch timeline was created by a Restore operation, then the segment starts with the moment of data from the
branch that was selected when the Restore operation was done. This is illustrated below.
Note: The parent branch for this segment can be the same branch of which this segment is a part. It is possible to restore the active branch from
a point in time on the same branch.
The branch timeline will now show the timeline for the parent template.
2. Select one of the following:
A point of data on the timeline
A bookmark on the timeline
A bookmark under the Bookmarks tile in the Data Container Report Panel
3. Click the Restore icon.
4. A dialog will pop up. Use it to select the container you'd like to restore.
Active Branch
Within a single data container, only one branch is active at any given time. The data located at the red star of the active branch's timeline is the
newest copy of the data from the data container's data sources.
The active branch is distinguished by a red star, which appears at the far right of the timeline, alongside its name in the Branch Name area, and
in the Branch tab.
Active Branch
Inactive Branch
Share a Bookmark
1. Select a bookmark by clicking one of the following:
a. The bookmark's bubble on the branch timeline.
b. The Bookmarks tab in the data container workspace.
c. The Bookmarks tile in the Data Container Report Panel.
2. Click the Share icon.
Note: You cannot share a bookmark that you or another user have already shared.
Un-share a Bookmark
1. Select a bookmark by clicking one of the following:
a. The bookmark's bubble on the branch timeline.
b. The Bookmarks tab in the data container workspace.
c. The Bookmarks tile in the Data Container Report Panel.
2. Click the Unshare icon.
Note: You cannot unshare a bookmark that is already private or a bookmark which someone else has shared.
Delete a Bookmark
1. Select a bookmark by clicking one of the following:
a. The bookmark's bubble on the branch timeline.
b. The Bookmarks tile in the Data Container Report Panel.
2. Click the Delete icon.
1. Select a bookmark by clicking the Bookmarks tile in the Data Container Report Panel.
2. Click the Edit icon to the right of its name.
3. Check the "Will be deleted after" checkbox.
4. Pick a new date using the date selector and click the checkmark to the right of the date selector.
Finding Bookmarks
In either the Bookmarks tab in the data container workspace or the Bookmarks tile in the Data Container Report Panel:
1. Type into the Filter field.
This will only show bookmarks that have names or tags that match the text you have entered.
The stacked bar graph shows information about the top 10 space users. You can re-sort the graph based on the fields in the Sort by legend on
the top right-hand corner of the screen as seen in the image above. For example, if you want to know which data containers are sharing the most
data with others, you can un-select Shared (others data) and Unique by clicking them in the legend.
Note: When the legend items are not selected, their corresponding colored boxes turn gray and the data is removed from the chart. The data and
name will reappear when you re-select them by clicking on the preferred grayed-out category.
The field categories display the following information:
Unique The amount of space that will be freed if you delete this data container. This assumes that you also delete the underlying data
sources.
Shared (others data) The amount of space that cannot be freed on the parent data template (or sibling data containers) because it is
also being referenced by this data container due to Restore or Create Branch operations. The snapshots on the template or sibling
container are what use up the space.
Shared (self data) The amount of space that cannot be freed on this data container because it is also being referenced by sibling data
containers due to Restore or Create Branch operations, via shared bookmarks.
Unvirtualized The amount of space that would be used by the data in this container without Delphix virtualization.
by neighboring bookmarks or branches that have been created or restored from this bookmark
Externally Referenced The amount of space referenced by this bookmark that cannot be freed by deleting this bookmark because it
is also being referenced outside of Jet Stream, for example, by a retention policy.
_Reference
delphix>
While both delphix_admin and sysadmin produce the same prompt once logged in, be aware that the two users have different menus and
different functional areas.
Sysadmin Menu
delphix> ls
Children
network
service
storage
system
user
Operations
version
delphix>
Delphix Admin Menu
delphix> ls
Children
alert
audit
authorization
connectivity
database
environment
fault
group
host
job
namespace
network
policy
replication
repository
service
session
snapshot
source
sourceconfig
system
timeflow
user
Operations
version
delphix>
Individual commands passed as arguments to the SSH client will be interpreted as if they had been read from the terminal. More complex scripts
can be passed as input to the SSH command. When running SSH in non-interactive mode via these mechanisms, the command line prompt will
be suppressed, as will terminal font decorations such as underline and bold.
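As a hedged illustration of this stdin-driven, non-interactive use, the following sketch pipes a multi-command script into a command. Here cat stands in for the actual ssh delphix_admin@engine invocation so that the sketch runs anywhere; the engine address is a placeholder assumption.

```shell
# Sketch only: `cat` stands in for `ssh delphix_admin@engine`, which
# would forward these CLI commands to the engine in non-interactive mode
# (prompt and font decorations suppressed, as described above).
CLI_SCRIPT='version
ls
exit'
OUT=$(printf '%s\n' "$CLI_SCRIPT" | cat)
echo "$OUT"
```

Against a real engine, the piped commands would be interpreted exactly as if they had been typed at the terminal.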
The CLI is also available from the serial terminal console should the network be unavailable. Consult your VM platform documentation for
information on how to connect to the terminal console. Once connected, log in using your Delphix user credentials just as you would over SSH.
If the management service is unavailable due to a software bug or other problem, the CLI can still be accessed as a system user provided that
user is locally authenticated (not via LDAP) and has logged in at least once before. While in this state, only the system commands are available,
including restart, which will attempt to restart the management service without rebooting the entire server. If this problem persists, please
contact Delphix support.
The topic CLI Cookbook: Configuring Key-Based SSH Authentication for Automation shows an example of how to connect to the CLI using
SSH key exchange instead of the standard password-based authentication.
CLI Contexts
This topic explains the concept of contexts within the Delphix Engine command line interface.
The CLI is built on the concept of modal contexts that represent an administrative point for interacting with the web service APIs. These contexts
can be divided into the following types:
Context
Description
Static Children
These contexts exist for the purpose of navigating between points in the hierarchy, but have no properties of their own and do not
correspond to any server-side object. The root context is an example of this, as are most of the top-level contexts, such as database
or group.
Object
These contexts represent an object on the server, either a specific object (such as a database) or system-wide state (such as
SMTP configuration). These contexts have properties that can be retrieved via the get command.
Operation
These contexts represent a request to the server. Commands may or may not require input and may or may not change state on
the server, but in all cases require an explicit commit operation to execute the command. When in command context, the prompt
includes a trailing asterisk (*) to indicate that commit or discard is required before exiting the context.
Users can move between contexts by typing the name of the context. To move to a previous context, the up or back commands can be used. In
addition, the CLI supports UNIX-like aliases for cd and ls, allowing navigation similar to a UNIX filesystem. For more information on these
commands, see the Command Reference section.
Managing Objects
This topic describes the use of objects in the Delphix Engine command line interface, and provides a list of the object management operations.
The Delphix Engine represents state through objects. These objects are typically managed through the following operations, covered in more
detail in the Command Reference topics.
The topic CLI Cookbook: Changing the Default Group Name illustrates the use of object management commands such as list and get.
Operation
Description
list
For a given object type (represented by a static context such as database), list the objects on the system, optionally constrained
by some set of attributes. Some objects are global to the system and do not support this operation.
select
Select a particular object by name to get properties or perform an operation on the object. See the Delphix Objects section for
more information on object naming.
get
Get all properties (with no arguments) or a particular property of the selected object.
update
Enter a command context to change one or more properties of an object after selecting. Not all objects support this operation, and
only properties that can be edited are shown when in the update command context.
create
Create a new instance of the object type from the root static context. Not all objects can be created in this simplified fashion.
Databases, for example, are created through the link and provision commands.
delete
Deletes an object that has been selected. Not all objects can be deleted.
In contexts where there are multiple objects of a given type, the list command can be used to display available objects, and the select
command can select an object for subsequent operation.
When listing objects, each context has its own set of default columns to display. The display option can be used to control what columns are
displayed to the user. This is a comma-separated list of property names as they would be retrieved by the get command. It is possible to specify
properties that do not exist in order to accommodate lists of objects of varying types, and untyped objects.
The topic CLI Cookbook: Listing Data Source Sizes provides an example of using the list command.
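As a sketch of how a script might drive list with a custom column set, the following builds the one-line CLI command a script would pass to SSH. The engine address, the user name, and the runtime.status property name are illustrative assumptions, not values taken from this guide.

```shell
# Hedged sketch: construct the CLI command string for a list with
# explicit display columns. "automation@engine" and the property names
# are placeholder assumptions.
SSH_CMD="ssh automation@engine"
CLI="cd database; list display=name,runtime.status"
echo "$SSH_CMD \"$CLI\""
# A real run would execute:  ${SSH_CMD} "$CLI"
```

Properties that do not exist on some listed objects are permitted in the display list, which is useful when objects of varying types appear in one listing.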
Managing Properties
This topic describes the use of properties in relation to objects in the Delphix Engine command line interface.
Object properties are represented as a hierarchy of typed name/value pairs. The get command by itself will display the complete hierarchy for a
particular object. This hierarchy is displayed with each nested object indented by an additional level. The set of available properties depends on
the command context, and may change if the type of an object is changed.
Property State
Properties are typically set to a specific value, but they can also be unset. Unset properties indicate there is no known value, either because it
hasn't been provided yet, or because it has been explicitly removed. Properties in this state are displayed via the following means:
(unset) The property is not currently set. It may never have been given a value, or it may have been explicitly unset through the unset command.
(required) This has the same underlying semantics as (unset), but indicates that the property must be set before the current
command can be committed. Failure to do so will result in a validation error at the time the commit operation is attempted. Required
properties are displayed in bold.
In addition, all objects have a default state when in command context. A property that has been modified is noted with an asterisk (*), and can be
reverted to its default state through the revert command.
When updating properties, only the modified properties are sent to the server. The exception is arrays and untyped objects, covered in Array Properties
and Untyped Object Properties. These objects are always sent in their entirety, so changing any one element will send the entire object.
Basic Properties
Most properties are displayed and input as a string, though the underlying type may be more specific. The following are some of the basic types:
String An arbitrary string. This may be subject to additional validation (such as an IP address) that is enforced at the time the property
is set.
Number An integer number.
Boolean Either true or false.
Enumeration A string that must be chosen from a known set of options.
Nested Properties
Some properties are in fact other objects, and are represented as a nested set of properties. These properties can be manipulated in one of two
ways: by specifying a dot-delimited name, or changing the context via the edit command.
A dot (.) in a property name indicates that the portion to the left of the dot is the parent object name, and the portion to the right is a child of that
object. For example, sourcingPolicy.logsyncDisabled denotes the logsyncDisabled property within the sourcingPolicy property.
These dots can be arbitrarily nested. An alternative syntax using brackets to enclose property names (sourcingPolicy[logsyncDisabled])
is also supported for familiarity with other programming languages.
The edit command, in contrast, will change the current context such that all properties are relative to the specified object. This can be useful
when changing many nested properties at once, or when the complete set of properties can be confusing to manage all at once.
The topic CLI Cookbook: Disabling LogSync for a dSource provides an example of manipulating nested properties.
Array Properties
This topic describes the use of array properties in the Delphix Engine command line interface.
Some Delphix objects represent properties as arrays. Arrays are effectively objects whose namespace is a contiguous set of integers. While they
behave like objects and their properties can be referenced via the same object property notation, they differ in several key areas.
Arrays can be divided into two types: arrays of primitive types (strings, integers, etc.) and arrays of objects. Arrays of objects can be managed like
other objects via nested property names and the edit command, but differ in the following respects:
When an array element is unset, it removes the element from the array and shifts all other elements down to preserve the contiguous
index space.
New array elements can only be appended to the end of the array by specifying an index that is one more than the maximum index of the
array.
When displaying a property that is an array, if the length is greater than 3, then it is displayed only as [ ]. The complete contents of
the array can be displayed by getting or editing that particular property.
Arrays of primitive types can be managed as arrays of objects, but also support an inline notation using comma-separated notation. This allows
single-element arrays to be set as a standard property, and for arrays of strings to be set on a single line instead of having to edit each element.
Regardless of element type, arrays are sent as complete objects when updated. When any array element is changed and subsequently
committed, the complete array is sent to the server. When a single array element is reverted, the entire contents of the array are reverted.
The topic CLI Cookbook: Setting Multiple Addresses for a Target Host provides an example of working with a property that is an array of
strings.
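The inline comma-separated notation lends itself to scripting. A minimal sketch follows; the addresses property name echoes the cookbook topic referenced above, and the address values themselves are made up for illustration.

```shell
# Hedged sketch: turn a whitespace-separated shell list into the
# comma-separated inline form the CLI accepts for arrays of strings.
ADDRS="192.168.1.10 192.168.1.11 192.168.1.12"   # hypothetical values
LIST=$(printf '%s' "$ADDRS" | tr ' ' ',')
SET_CMD="set addresses=$LIST"
echo "$SET_CMD"
# In a CLI session this line would be entered after selecting the host.
```

Because arrays are sent in their entirety on commit, setting the whole list in one line like this avoids editing each element individually.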
CLI Automation
This topic describes using automation with both the Delphix Engine command line interface (CLI) and the web service API.
All functionality is available in both, because the CLI is built upon the web services API. The CLI enables you to create scripts for simple
automation, and it is a useful aid in the development of more complex code that uses the web service API.
DELPHIX_ENGINE=172.16.180.33
SSH_CMD="ssh automation@${DELPHIX_ENGINE}"
${SSH_CMD}
Backward Compatibility
Both the CLI and web services API are versioned to support backwards compatibility. Future Delphix versions are guaranteed to support clients
that explicitly set a version provided the major version identifier is compatible. For more information, see the Web Service API Guide. The CLI
will always connect with the latest version, but the version command can be used to both display the current version and explicitly bind to a
supported version.
Users building a stable set of scripts can run version to get the current version. Scripts can then run the version <id> command to
guarantee that their scripts will be supported on future versions. For more information on the different API versions and how they map to Delphix
versions, see the API Version Information section.
Using version command to create a stable script for Delphix 4.2.1.0 (API Version 1.5.0)
DELPHIX_ENGINE=172.16.180.33
SSH_CMD="ssh automation@${DELPHIX_ENGINE}"
${SSH_CMD}
DELPHIX_ENGINE=172.16.180.33
SSH_CMD="ssh automation@${DELPHIX_ENGINE}"
env_array=(`${SSH_CMD} "version 1.5.0; cd environment; list display=name" | grep -v
NAME` )
for i in "${env_array[@]}"
do
${SSH_CMD} "version 1.5.0; cd environment; select $i; disable; commit"
done
This script works, but it will be slow on systems with many environments, since each SSH command will start a new session.
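One way to reduce that overhead is to batch every per-environment operation into a single CLI command string and make one SSH call. The sketch below assumes the same engine and user as the loop above; the environment names are placeholders for the values the list call would return.

```shell
# Hedged sketch: build ONE semicolon-separated command string so a single
# SSH session performs all of the disables, instead of one session per
# environment. "env1 env2 env3" stand in for real environment names.
ENVS="env1 env2 env3"
CMDS="version 1.5.0"
for e in $ENVS; do
  CMDS="$CMDS; cd /environment; select $e; disable; commit"
done
echo "$CMDS"
# A real run would then execute:  ${SSH_CMD} "$CMDS"
```

The absolute-style cd /environment leans on the CLI's UNIX-like path support described in the Command Reference, so each iteration returns to the environment listing before selecting the next object.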
The web service APIs are superior when performing many operations as a single logical unit. The web service APIs also provide substantially
more data with a single call than what is shown in the CLI output, which can greatly simplify your code and avoid multiple round trips.
However, the input and output of web service API calls is JSON data, and it can be difficult to quickly determine what the input and output will look
like.
For this reason, the CLI provides two options which can greatly assist you in the development of complex automations: JSON Output and Tracing.
(setopt format=json) changes the CLI to output all results as parseable JSON (JavaScript Object Notation). This is the fastest and easiest
way to quickly see what the JSON output will look like when executed via the Web Service APIs. The JSON format has wide support in a variety
of programming languages; see http://www.json.org for more information.
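A sketch of how a script might combine version binding with JSON output in a single non-interactive call; the engine address and user name are placeholders, not values from this guide.

```shell
# Hedged sketch: one SSH invocation that binds the API version, switches
# the session to JSON output, and lists users. Names are placeholder
# assumptions; parse the resulting JSON in your tool of choice.
SSH_CMD="ssh automation@engine"
CLI="version 1.5.0; setopt format=json; cd user; list"
echo "$SSH_CMD \"$CLI\""
# A real run would execute:  ${SSH_CMD} "$CLI"
```

Because setopt applies only to the current session, the non-interactive call must include it each time JSON output is wanted.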
(setopt trace=true) will display the underlying HTTP calls being made with each operation and their JSON payload. This allows you to
determine the GET and POST calls, and their JSON payloads, which perform the actions that you need to power your automation.
(setopt format=text) changes the CLI back into its regular output mode. (setopt trace=false) turns off the trace display.
The fastest way to develop complex automation is to experiment with the CLI and copy the underlying API calls to a
custom system for better control over behavior.
Delphix Objects
These topics describe the object model for the Delphix Engine command line interface.
The Delphix object model is a flexible system for describing arbitrary hierarchies and relationships of objects. In order to enable current and future
functionality of the system, the relationship between objects is not always immediately obvious. The CLI is merely a veneer atop the web services
layer to ensure that the full complement of functionality expressed by the API is always available, but this requires users to have some
understanding of how objects are represented in the system.
Object Type Hierarchy
Object Names and References
Databases and Environments
Asynchronous Jobs
Environment Components
An environment is the root of the representation of external state that manages database instances. An environment could be a single host (UnixHostEnvironment) or an Oracle cluster (OracleClusterEnvironment). Environments exist to contain repositories, and each environment
may have any number of repositories associated with it. A repository is the entity that contains database instances. Repositories are typically
installation directories (OracleInstall) within an environment. Within each repository are any number of SourceConfig objects, which
represent known database instances. The source config exists independently of Delphix, and could represent a possible dSource (in which case
there is no associated database object), or could be managed entirely by Delphix (for VDBs). The source config contains intrinsic properties of the
database instance, while the source (described below) contains information specific to Delphix and only exists when the source config is linked to
a dSource or VDB.
Most environment objects are created through the act of discovery. By specifying a host, Delphix will attempt to automatically discover all
environments, repositories, and source configs. These objects can also be added manually after the fact in cases where discovery fails.
The environment hierarchy can be represented this way:
The generic type is listed in the top portion of each box, with an example of the Oracle single instance objects in the lower portion of each box.
Each of these objects can contain multiple child objects within it.
Database Components
The core of all databases within Delphix is the Container that contains all the physical data associated with the database, whether it is a
dSource or VDB. Within each container is a Timeflow, which represents a single timeline of change within the database history. Currently, a
container can only have one timeflow, though this limitation may be relaxed in a future release. Within a timeflow are two important objects:
TimeflowSnapshot objects and TimeflowRange objects. Timeflow ranges represent the provisionable ranges within the history of the timeflow, while
timeflow snapshots represent points at which a snapshot was taken, and which are therefore more likely to provision in a short amount of time. The resulting
data hierarchy can be represented this way:
Each container may be associated with a Source. A source is the Delphix representation of an external database when it is associated with a
container, and contains information specific to managing that source. Not all source configs within an environment have a source associated with
them (as is the case with linkable databases), but all sources must have a source config. Containers may have no sources associated with them if
they are unlinked; sources can be manually attached at a later point. Currently, each container can have at most one source associated with it,
though this may change in a future release.
Asynchronous Jobs
This topic describes conditions under which command line interface operations may spawn jobs that run in the background, and using the wait option to wait for job completion.
Not all operations can be performed in the context of a single web service API call. For cases where there is a long running operation that cannot
be executed quickly and transactionally, a job may be dispatched to do the remaining work in the background. For more information on jobs and
their semantics, see the topic Viewing Action Status. Within the CLI, any command can potentially result in an asynchronous operation. The
default behavior is to wait for any such job to complete, and display its progress in the CLI.
In the event that you do not want to wait for the operation to complete, the global wait option can be set (setopt wait=false). If disabled,
the CLI will display the reference to any job that was dispatched, but not wait for it to complete.
Command Reference
These topics describe the core built-in commands within the CLI. It is not an exhaustive list of all commands in all contexts. For object- or type-specific commands, consult the API documentation.
CLI Help and Display Commands
CLI Context Commands
CLI Object Commands
CLI Property Commands
CLI Miscellaneous Commands
Description
children
Display all statically defined children valid for the current context. These children can be targets of the cd command.
commands
help
Display all commands and properties valid for the current context. Specifying a command or property will provide more
information about that command or object. When nested properties are present, only top-level properties are displayed by
default, though specifying a particular property will display the entire hierarchy.
ls
Display children, commands, objects, and operations valid in the current context. Only those sections that are relevant in the
current context are displayed.
operations
Display available context-specific operations. These operations require an explicit commit command to execute the operation,
or discard to abort it.
Description
back
Return to the previously visited valid context. This history only tracks contexts that were actually visited, so running database example
followed by back will return you to the root context, not the database context (because the two were executed as part of one action
and never actually visited). If a previous context was deleted or is no longer valid, this command will skip over it.
cd
Switch to the given child. This is identical to typing the name of the child itself, but also supports UNIX-style directory structures,
such as / and .., allowing contexts to be chained, as in cd ../database/template.
history
Display the history of input to the shell. The shell supports the ability to move back and forth in the history using the up and down
arrows.
up
This is an alias for cd .. for the benefit of those less familiar with UNIX filesystem navigation. Unlike back, which returns to
the previous context only if it was visited, and may return to a child context, this command will always return to the immediate
parent context.
Description
list
List all objects of a particular type when in the appropriate root context. Different contexts may support different options to the list
command to constrain the output; run help list to see possibilities.
select
Description
commit
When in operation context, commit the changes and execute the operation.
discard
When in operation context, discard any changes and abort the operation.
edit
Change the current context to be relative to a particular object property when in operation context.
get
Get all properties (with no arguments) or a particular property of the current object.
revert
Revert a particular property to its default value, either the value of the underlying object during an update, or the default command
input value.
set
Set the value of one or more properties. These properties can be specified as name=value, or as simply the property name.
When only the property name is specified the CLI will prompt for the value to use, optionally obscuring the input if the property is a
password.
unset
Clear the current value of a property. This is not the same as reverting the property, though this can have semantically identical
behavior in the case that the default value is unset.
Description
echo
exit
Exit from the current CLI session. This is equivalent to sending the EOF control character (typically Ctrl-D) or closing your client
SSH application.
getopt
Get the current value of a global configuration option. The list of global options can be retrieved by running help getopt, but
they include options for controlling JSON output (format), tracing HTTP calls (trace), and enabling synchronous job semantics (wait).
setopt
Set the value of a global configuration option; the list of available options is the same as for getopt.
version
Display the current API version or bind to a particular version. See the CLI Automation section for more information.
Procedure
1. Consult your client documentation for information on generating a public/private key pair. The ssh-keygen program is typical on UNIX
platforms. If you need details on ssh-keygen usage or have unique requirements (such as named RSA keys), see Third Party SSH Key
Generation Example. If you already have a public/private key pair generated on your system, you can skip to step 2.
2. Connect as the user you wish to configure or as a Delphix administrator.
Connecting to Namespaces
When you connect to the Delphix Engine with the CLI, you should specify the appropriate namespace (either DOMAIN or
SYSTEM). See Connecting to the CLI for more information.
3. Select the current user, or select a specific user if configuring another user as an administrator.
5. Paste the contents of the public key configured on your client and commit the result.
These operations are performed as a command line user on a non-Delphix host where SSH is installed. In the remainder of the document we will
use:
username - to refer to the existing command line user on the non-Delphix host
host name - to refer to the existing non-Delphix host
These examples should work with a variety of SSH distributions; however, your distribution may behave differently. If you are unable to follow
these instructions successfully, consult your system administrator and/or your operating system or SSH client vendor.
Procedure
4.
Related Links
CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users
This topic describes adding public key authentication for a UNIX environment user, thus allowing the Delphix server to connect to your UNIX
Environments without an explicit password. This method uses the Delphix CLI in order to set up the environment user and gather SSH public
keys. It is also possible to perform these actions in the Delphix Engine Admin interface by navigating to Manage > Environments and selecting
Public Key as the Login Type for the environment (see Managing Environments with Agile Data Masking for details).
UNIX host environments (and Oracle cluster environments) can have users configured to use SSH-key based public key authentication instead of
the traditional password authentication method. Within Delphix, there is a per-system SSH public key that can be placed into the
~/.ssh/authorized_keys file of the remote user. Once this has been done, the Delphix environment user can be configured to use the private key instead of
an explicit password.
Prerequisites
You must be able to log into the remote host (or all hosts of an Oracle cluster) and have write access to the ~/.ssh/authorized_keys
file within the desired user's home directory.
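The procedure below boils down to appending the engine's per-system public key to the remote user's authorized_keys file with the permissions sshd requires. A sketch, using a temporary directory as a stand-in for the remote user's home directory and a placeholder key string:

```shell
# Stand-in for the remote user's home directory; replace with the real one.
HOMEDIR=$(mktemp -d)

# sshd refuses keys kept in overly permissive locations:
# 700 on ~/.ssh and 600 on authorized_keys are the safe defaults.
mkdir -p "$HOMEDIR/.ssh"
chmod 700 "$HOMEDIR/.ssh"

# Placeholder public key -- paste the Delphix Engine's real per-system key here.
echo "ssh-rsa AAAAB3...placeholder... delphix@engine" >> "$HOMEDIR/.ssh/authorized_keys"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"
```

Once the key is in place, the environment user can be switched from password to public-key authentication as described below.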
Procedure
1. Get the current system public key:
$ mkdir ~/.ssh
b. If creating the file or directory as an administrator:
Related Topics
Procedure
1. Add a VMXNET3 virtual network adapter to the Delphix VM and reboot the VM.
A reboot is required because the Delphix Engine does not dynamically recognize newly added network devices.
2. Log in to the Delphix Engine as the sysadmin user and switch to the network interface context. Then use the list command to view the
available network interfaces, and select the new interface to be configured.
Procedure
1. Log in to the Delphix Engine as the sysadmin user and switch to the network route context.
OUTINTERFACE
vmxnet3s0
vmxnet3s1
vmxnet3s0
3. Commit the operation.
Procedure
1. Switch to the group context and list groups on the system.
delphix> group
delphix group> list
NAME
DESCRIPTION
<New Group>
2. Select the default group and show current properties.
ssh delphix_admin@delphix
2. Go to Users and select the user whose password you would like to change.
delphix > user
delphix user > ls
delphix user > select example_user
delphix user "example_user" > ls
3. Select updateCredential to change the password and set a new one.
ssh delphix_admin@delphixengine
delphixengine > user
delphixengine user > ls
Objects
NAME
EMAILADDRESS
sysadmin
delphix_admin
test_user
Operations
create
current
delphixengine user > select test_user
delphixengine user "test_user" > ls
Properties
type: User
name: test_user
authenticationType: NATIVE
credential:
type: PasswordCredential
password: ********
emailAddress: [email protected]
enabled: true
firstName: (unset)
homePhoneNumber: (unset)
isDefault: true
lastName: (unset)
locale: en_US
mobilePhoneNumber: (unset)
passwordUpdateRequested: false
principal: test_user
publicKey: (unset)
reference: USER-2
sessionTimeout: 30min
userType: DOMAIN
workPhoneNumber: (unset)
Operations
delete
update
disable
enable
updateCredential
delphixengine user "test_user" > updateCredential
delphixengine user "test_user" updateCredential *> set newCredential.password=<new password>
delphixengine user "test_user" updateCredential *> commit
Procedure
1. ssh into your engine using your delphix_admin username and password
ssh delphix_admin@yourdelphixengine
2. Go into your alerts and list the alerts you already have
Example:
ssh delphix_admin@yourengine
delphix > alert
delphix alert> ls
Objects
REFERENCE  TIMESTAMP                 EVENTTITLE                                                                        TARGETNAME
ALERT-102  2015-01-14T21:00:04.380Z  Job complete                                                                      ASE/pubs2
ALERT-101  2015-01-14T20:55:57.880Z  Job complete                                                                      ASE/pubs2VDB
ALERT-100  2015-01-14T19:35:32.958Z  Job complete                                                                      ASE/pubs2VDB
ALERT-99   2015-01-14T19:35:32.850Z  Job complete                                                                      ASE/pubs2VDB
ALERT-98   2015-01-14T19:34:58.744Z  Error during job execution                                                        ASE/pubs2
ALERT-97   2015-01-14T18:12:01.928Z  Job complete                                                                      ASE/pubs2
ALERT-96   2015-01-14T18:03:10.664Z  Job complete                                                                      ASE/pubs2
ALERT-95   2015-01-14T17:16:07.464Z  Job complete                                                                      ASE/pubs2
ALERT-94   2015-01-14T17:15:55.298Z  Job complete                                                                      ASE/market
ALERT-93   2015-01-14T17:15:45.995Z  Job complete                                                                      ASE/pubs2VDB
ALERT-92   2015-01-14T16:39:33.133Z  Job complete                                                                      nstacksolase2.acme.com-2015-01-14T16:39:13.821Z
ALERT-91   2015-01-14T16:38:33.719Z  Job complete                                                                      nstacksolase2.acme.com
ALERT-90   2015-01-14T15:47:35.005Z  Validated sync failed for dSource                                                 market
ALERT-89   2015-01-14T15:45:40.895Z  Validated sync failed for dSource                                                 pubs2
ALERT-88   2015-01-14T15:02:14.874Z  Job complete                                                                      ASE/market
ALERT-87   2015-01-14T11:33:28.766Z  Job complete                                                                      ASE/pubs2VDB
ALERT-86   2015-01-13T23:11:46.838Z  Job complete                                                                      ASE/market
ALERT-85   2015-01-13T11:30:01.154Z  Job complete                                                                      ASE/pubs2VDB
ALERT-84   2015-01-13T11:07:04.385Z  Backup detection failed                                                           pubs2
ALERT-83   2015-01-12T22:35:18.774Z  Backup detection failed                                                           pubs2
ALERT-82   2015-01-12T11:30:00.063Z  Unable to connect to remote database during virtual database policy enforcement   ASE/pubs2VDB
ALERT-81   2015-01-12T11:30:00.054Z  Unable to connect to remote database during dSource policy enforcement            ASE/pubs2
ALERT-80   2015-01-12T08:38:26.983Z  Backup detection failed                                                           pubs2
ALERT-79   2015-01-12T06:04:34.666Z  Validated sync failed for dSource                                                 pubs2
ALERT-78   2015-01-11T11:30:03.393Z  Job complete                                                                      ASE/pubs2VDB
Children
profile
delphix alert> select ALERT-98
delphix alert "ALERT-98"> ls
Properties
type: Alert
event: alert.jobs.failed.object
eventAction: Create the database on the target host.
eventDescription: DB_EXPORT job for "ASE/pubs2" failed due to an error during
execution: Could not find database "pubs2VDB" on target instance "SRC_157_4K",
environment "ASE".
eventSeverity: CRITICAL
eventTitle: Error during job execution
reference: ALERT-98
target: ASE/pubs2
targetName: ASE/pubs2
targetObjectType: ASEDBContainer
timestamp: 2015-01-14T19:34:58.744Z
delphix alert> profile
delphix alert profile> select ALERT_PROFILE-1
delphix alert profile "ALERT_PROFILE-1"> ls
Properties
type: AlertProfile
actions:
0:
type: AlertActionEmailList
addresses: [email protected]
format: HTML
eventFilter: (empty)
reference: ALERT_PROFILE-1
severityFilter: CRITICAL,WARNING
targetFilter: (empty)
Operations
delete
update
delphix alert profile> create
delphix alert profile create *> set actions.0.type=AlertActionEmailList
The last piece of the alert profile that needs to be configured is the targetFilter. This is an array, so you can define
multiple targets. In the following example, the user wants to define an alert on a dSource named "pubs2". If
they set the filter to just the name of the dSource itself ("pubs2"), the CLI warns that this is ambiguous and
gives a hint on how to fully qualify it:
delphix alert profile create *> set targetFilter=pubs2
The name 'pubs2' is ambiguous, specify one of: [ "ASE/pubs2", "pubs2/pubs2",
"SRC_157_4K/pubs2" ].
delphix alert profile create *> set targetFilter.0=pubs2/pubs2
delphix alert profile create *> set targetFilter.1=ASE/pubs2
delphix alert profile "ALERT_PROFILE-34" update *> commit
Use the Tab key freely to autocomplete and to see available options. For instance, while changing the severityFilter property, you can use the
Tab key like so:
DELPHIX-4221.dcenter alert profile 'ALERT_PROFILE-1' update *> set severityFilter= <I HIT
TAB HERE TO SEE OPTIONS BELOW>
AUDIT
CRITICAL
INFORMATIONAL
WARNING
Procedure
1. Switch to the capacity system context.
Procedure
1. Create a new environment and set the parameter type to be a UNIX host.
The default is a UNIX host, but for completeness this demonstrates how one would add another type of environment (Oracle cluster or
Windows host).
Procedure
Enter these commands through the command line interface:
/environment;
create;
set type=HostEnvironmentCreateParameters;
set hostEnvironment.type=WindowsHostEnvironment;
set hostEnvironment.name=<Source environment name>;
set hostEnvironment.proxy=<target host name>;
set hostParameters.type=WindowsHostCreateParameters;
set hostParameters.host.type=WindowsHost;
set hostParameters.host.addresses="<Source host IP address or hostname>";
set primaryUser.name="<domain\username>";
set primaryUser.credential.type=PasswordCredential;
set primaryUser.credential.password=<password>;
commit;
Example
The CLI commands for adding source host "mssql_source_1" using target host "mssql_target_1" as proxy and environment user "ad\delphix_user"
would be:
/environment;
create;
set type=HostEnvironmentCreateParameters;
set hostEnvironment.type=WindowsHostEnvironment;
set hostEnvironment.name="mssql_source_1";
set hostEnvironment.proxy="mssql_target_1";
set hostParameters.type=WindowsHostCreateParameters;
set hostParameters.host.type=WindowsHost;
set hostParameters.host.addresses="mssql_source_1";
set primaryUser.name="ad\delphix_user";
set primaryUser.credential.type=PasswordCredential;
set primaryUser.credential.password="i_am_the_password";
commit;
Related Links
Setting Up SQL Server Environments: An Overview
Procedure
1. Select the host to update
delphix> host
delphix host> select example
delphix host "example"> update
2. Set the address:
ssh delphix_admin@delphix
2. Go to Environment and find the environment you would like to update.
4. Set primaryUser to the new user you would like to use for the environment.
ssh delphix_admin@delphix
delphix > environment
delphix environment > ls
Objects
NAME
DESCRIPTION
Demo
Children
oracle
user
Operations
create
delphix environment > select Demo
delphix environment "Demo" > update
delphix environment "Demo" update *> ls
Properties
type: UnixHostEnvironment
name: Demo
description:
primaryUser: delphix
delphix environment "Demo" update *> set primaryUser=<new user>
delphix environment "Demo" update *> commit
Prerequisites
A dSource can only be attached to a new data source once it has been unlinked.
When attaching an Oracle dSource to a new data source, the new data source must be the same logical database satisfying the following
constraints:
Same dbid
Same dbname
Same creation time
Same resetlogs SCN
Same resetlogs time
Same redo stream, where a log must exist with
Same sequence
Same thread
Same end SCN
For Oracle dSources, you can use this procedure to initially link from a standby server where linking is faster or less disruptive, unlink the dSource, and
then attach it to the production server for subsequent incremental SnapSync operations. When you perform the attach operation, you will need the
source config name of an unlinked database.
Procedure
1. Select the dSource.
Procedure
1. Select the dSource to be changed and run the update command.
delphix>/repository/select '/u01/app/ora10205/product/10.2.0/db_1'
2. Execute the update command.
delphix>/database/select redsox1
2. Execute the update command.
CLI Cookbook: Linking a Microsoft SQL Server Database Loading from a Specific Full Backup of the
Source Database
This topic describes how to use the command line interface to link a SQL Server database by loading from a specific full backup of the source
database as indicated by the backup UUID.
Prerequisites
You can find the fullBackupUUID (referenced in the CLI commands below) in the msdb.dbo.backupset table on the source database, for
example by using the following query:
Use master
select backupset.database_name,
backupset.type,
backupset.backup_set_id,
backupset.backup_set_uuid,
backupset.family_guid,
backupset.position,
backupset.first_lsn,
backupset.last_lsn,
backupset.database_backup_lsn,
backupset.name,
backupset.has_bulk_logged_data,
backupset.is_damaged,
backupset.begins_log_chain,
backupset.is_copy_only,
backupset.backup_finish_date,
backupset.database_version,
backupset.database_guid,
mediafamily.logical_device_name,mediafamily.physical_device_name
from msdb.dbo.backupmediafamily mediafamily join msdb.dbo.backupset backupset
on mediafamily.media_set_id = backupset.media_set_id where backupset.database_name =
N'<Database Name>'
order by backupset.backup_finish_date desc
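One way to run the query above from a shell and capture the backup_set_uuid is via sqlcmd. This is a sketch: the instance name, login, and password are placeholders, and the query is trimmed to the two columns needed for this procedure.

```shell
# Placeholder connection details for the source SQL Server instance.
SQLSERVER=mssql_source_1
DBNAME="<Database Name>"

# Stage a trimmed version of the query; $(dbname) is a sqlcmd scripting
# variable, substituted by the -v flag below.
cat > /tmp/last_backups.sql <<'EOF'
select backupset.backup_set_uuid, backupset.backup_finish_date
from msdb.dbo.backupmediafamily mediafamily
join msdb.dbo.backupset backupset
  on mediafamily.media_set_id = backupset.media_set_id
where backupset.database_name = N'$(dbname)'
order by backupset.backup_finish_date desc
EOF

# Run it against the source instance (commented out: requires a live instance).
# sqlcmd -S "$SQLSERVER" -U sa -P '<password>' -v dbname="$DBNAME" -i /tmp/last_backups.sql
```

The first row of the result is the most recent full backup; its backup_set_uuid is the value to pass as fullBackupUUID.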
Procedure
Enter these commands through the Delphix Engine command line interface:
/database;
link;
set type=MSSqlLinkParameters;
set container.type=MSSqlDatabaseContainer;
set container.name=<dSource name>;
set container.group=<group name>;
set container.sourcingPolicy.loadFromBackup=true;
set source.type=MSSqlLinkedSource;
set source.config=<source database>;
set source.sharedBackupLocation="<source database backup location>";
set pptRepository=<SQL instance on the staging server>;
set container.sourcingPolicy.type=SourcingPolicy;
set dbUser=<source database login with SQL authentication>;
set dbCredentials.type=PasswordCredential;
set dbCredentials.password=<password for the database login>;
set fullBackupUUID=859FD1F1-1590-4FCB-A341-5D2D13852E2E;
commit;
CLI Cookbook: Linking a Microsoft SQL Server Database Loading from the Last Full Backup of the
Source Database
This topic describes how to use the command line interface to link a SQL Server database by loading from the last full backup of the source
database.
Procedure
Enter the following commands in the Delphix Engine command line interface:
/database;
link;
set type=MSSqlLinkParameters;
set container.type=MSSqlDatabaseContainer;
set container.name=<dSource name>;
set container.group=<group name>;
set container.sourcingPolicy.loadFromBackup=true;
set source.type=MSSqlLinkedSource;
set source.config=<source database>;
set source.sharedBackupLocation="<source database backup location>";
set pptRepository=<SQL instance on the staging server>;
set container.sourcingPolicy.type=SourcingPolicy;
set dbUser=<source database login with SQL authentication>;
set dbCredentials.type=PasswordCredential;
set dbCredentials.password=<password for the database login>;
commit;
Prerequisites
You will need the following information:
The name of the dSource you want to create.
The group in which you want to create the dSource.
The database unique name of the Oracle database you want to link to.
The database username/password with sufficient privileges as described in the Delphix User Guide.
The host environment user with sufficient privileges as described in the Delphix User Guide.
Procedure
1. Execute the database link command.
delphix> source
delphix source> list
NAME      CONTAINER  VIRTUAL  CONFIG
example   example    false    example
vexample  vexample   true     vexample
delphix> source
delphix source> list display=name,virtual,runtime.databaseSize
NAME      VIRTUAL  RUNTIME.DATABASESIZE
example   false    12784
vexample  true     12842
When attaching a SQL Server dSource to a new data source, the new data source must be the same database,
satisfying the following constraints:
Same database name
Same recovery fork UUID
pptRepository needs to be set to the name of the SQL instance on the staging server. The unlink operation removes the database from
the SQL instance on the staging server and unmounts the iSCSI LUNs; reattaching the dSource via the CLI remounts the iSCSI LUNs and
puts the database back.
Procedure
1. Select the dSource.
2. Run the detachSource command, specifying the currently active source. This step can be skipped if the dSource
has already been detached through the GUI.
delphix database "dexample"> detachSource
delphix database "dexample" detachSource *> set source=dexample
delphix database "dexample" detachSource *> commit
3. Run the attachSource command.
ssh delphix_admin@delphixengine
2. Go to sourceconfig and find the database whose password you need to update.
ssh delphix_admin@example
delphix > sourceconfig
delphix sourceconfig > ls
Objects
NAME   REPOSITORY                  LINKINGENABLED
meta1  '/u01/oracle/10.2.0.4/ee1'  true
Operations
create
delphix sourceconfig > select meta1
delphix sourceconfig "meta1" > ls
Properties
type: OracleSIConfig
name: meta1
credentials:
type: PasswordCredential
password: ********
databaseName: meta1
discovered: true
environmentUser: delphix
instance:
type: OracleInstance
instanceName: meta1
instanceNumber: 1
linkingEnabled: true
nonSysCredentials: (unset)
nonSysUser: (unset)
reference: ORACLE_SINGLE_CONFIG-1
repository: '/u01/oracle/10.2.0.4/ee1'
services:
0:
type: OracleService
discovered: true
jdbcConnectionString: jdbc:oracle:thin:@172.16.100.69:1525:meta1
1:
type: OracleService
discovered: true
jdbcConnectionString: jdbc:oracle:thin:@172.16.100.69:1521:meta1
uniqueName: meta1
user: delphix
Operations
delete
update
validateCredentials
Procedure
1. Stop the VDB through the GUI and log in to the Delphix CLI.
2. Select the sourceconfig of the RAC VDB whose instances you would like to rename.
kfc-manual.dcenter> sourceconfig
kfc-manual.dcenter sourceconfig> select Vchicago_BEB
3. Use the update command to change the properties of the sourceconfig
5. Use the set command to change the values for instanceName and instanceNumber for each instance.
Restrictions
The following restrictions apply when migrating a VDB between repositories:
When migrating a RAC VDB, the host of each OracleRACInstance must be updated as well.
The mount point of the VDB source cannot be changed.
The database_unique_name and db_name cannot be changed.
The new environment and repository must be a compatible target environment.
Procedure
1. Select the source associated with the VDB. By default, sources are named the same as the VDB.
Prerequisites
You will need the following information:
The name of the VDB you want to create
The group in which to create the VDB
The Oracle database name
The Oracle database unique name
The Oracle database instance number
The Oracle database instance name
The source dSource or VDB from which you wish to provision
The semanticLocation, SCN, or timestamp of the point you want to provision from. You can run these commands to get the list of
snapshots or timeflow ranges:
Procedure
1. Execute the database provisioncommand.
delphix database provision *> ...
Prerequisites
You will need the following information:
The name of the VDB you want to create
The group in which to create the VDB
The SQL Server database name for the VDB
The source dSource or VDB from which you wish to provision
The semanticLocation, LSN, or timestamp of the point you want to provision from. You can run these commands to get the list of
snapshots or timeflow ranges:
Procedure
1. Execute the database provision command.
Prerequisites
You will need the following information:
The name of the timeflow bookmark you want to create
The name of the VDB you want to create
The group in which to create the VDB
The Oracle database name
The Oracle database unique name
The Oracle database instance number
The Oracle database instance name
The source dSource or VDB from which you wish to provision
The SCN, or timestamp of the point you want to provision from. You can run these commands to get the list of snapshots or timeflow
ranges:
delphix database provision *> ...
Procedure
1. Use the ls command to find the VDB you want to roll forward.
In this example the TimeFlows and their associated containers are listed. The VDB called PVDB will be the one to roll forward.
delphix timeflow> ls
Objects
NAME                                CONTAINER  PARENTPOINT.LOCATION  PARENTPOINT.TIMESTAMP  PARENTPOINT.TIMEFLOW
hrprod/default                      hrprod     -                     -                      -
erpprod/default                     erpprod    -                     -                      -
'DB_PROVISION@2013-11-25T17:37:06'  PVDB       657925                -                      erpprod/default
'DB_ROLLBACK@2013-11-25T18:24:16'   PVDB       678552                -                      'DB_PROVISION@2013-11-25T17:37:06'
Related Links
Rewinding a VDB
Prerequisites
You will need the following information:
The VDB name
The Timeflow location, SCN, or timestamp of the point you want to provision from.
Login to CLI:
$ ssh delphix_admin@<delphixengine>
Determine the TimeFlow
Run:
> database
> select <VDB name>
> refresh
> set timeflowPointParameters.type= (one of TimeflowPointBookmark,
TimeflowPointBookmarkTag, TimeflowPointLocation, TimeflowPointSemantic,
TimeflowPointTimestamp as appropriate)
> set timeflowPointParameters.location= (the location, timestamp, or bookmark you wish to
refresh to)
> set timeflowPointParameters.timeflow= (the timeflow associated with location)
> commit
Perform the Refresh from Latest
> database
> select <yourdatabase>
> refresh
> set timeflowPointParameters.container= (Parent of VDB)
> commit
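The interactive sequence above can also be scripted: the CLI reads commands from standard input, so a refresh can be driven non-interactively over ssh. A sketch — the engine host and the object names ("vexample", "example") are placeholders, and the ssh line is left commented so the snippet only stages the commands rather than contacting an engine:

```shell
# Stage the CLI commands for a refresh-from-parent in a file.
cat > /tmp/refresh.cli <<'EOF'
database
select "vexample"
refresh
set timeflowPointParameters.container=example
commit
EOF

# Feed the staged commands to the engine's CLI over ssh (placeholder host):
# ssh delphix_admin@delphixengine < /tmp/refresh.cli
```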
Procedure
1. Log into the Delphix Engine as an Admin user. Go to timeflow and then list. Find the timeflow that needs to be repaired.
ssh delphix_admin@<yourengine>
delphix > timeflow
delphix timeflow > ls
2. Go to oracle/log and list the missing logs for the timeflow. The maximum number of logs reported is controlled by the value of the pageSize
argument; if there is a very large number of missing logs, this may need to be increased. Note the start and end SCN of the missing log.
delphix timeflow oracle log fetch *> ...
NOTE: There may be more than one timeflow visible for a given container/source. If so, you can verify the current timeflow being used with:
Procedure
1. Log into the Delphix Engine as delphix_admin or a user with Admin privileges.
2. Go to source and select the name of the VDB whose parameters you would like to change.
3. Run update and edit the configParams.
4. Set sga_target to the correct value.
5. Commit the operation so that it saves.
Example
ssh delphix_admin@engine
delphix > source
delphix source > select "vdb_example"
delphix source "vdb_example" > update
delphix source "vdb_example" *> set configParams.sga_target=<new value>
delphix source "vdb_example" *> commit
ssh delphix_admin@delphix_engine
2. List Timeflows for the database that you want to rollback to
de > ls
de > timeflow
de timeflow > ls
3. Switch to the VDB you want to rollback
de database 'vdb_example' > rollback
de database 'vdb_example' rollback *> set timeflowPointParameters.container=
de database 'vdb_example' rollback *> set timeflowPointParameters.location=
de database 'vdb_example' rollback *> commit
ssh delphix@<yourengine>
2. Retrieve the database and timeflow information that you would like to rewind/rollback to.
delphix > ls
delphix database > select "dexample"
delphix database "dexample" > get currentTimeflow
3. Rollback/Rewind VDB
Procedure
1. ssh into your Delphix Engine using delphix_admin credentials.
ssh delphix_admin@<yourdelphixengine>
2. Go to database and then template and then create
Example:
ssh delphix_admin@testengine
testengine > database template
testengine database template > create
testengine database template create *> set name=test_template
testengine database template create *> set parameters.none
testengine database template create *> set sourceType=OracleVirtualSource
testengine database template create *> ls
Properties
type: DatabaseTemplate
name: test_template (*)
description: (unset)
parameters:
none: none (*)
sourceType: OracleVirtualSource (*)
testengine database template create *> commit
Procedure
1. ssh into your Delphix Engine using delphix_admin credentials
ssh delphix_admin@delphixengine
delphix > ls
2. Go to policies and createAndApply, and set your policy parameters. (Note that in the CLI you cannot just create a policy; you must
createAndApply. In the GUI you have the option to just create.)
delphix policy createAndApply *> ...
3. Set your target parameters, such as the container or group the policy applies to.
Prerequisites
You will need the following information:
The name of the VDB you want to create
The group in which to create the VDB
The SAP ASE database name for the VDB
The source dSource or VDB from which you wish to provision
The semanticLocation, LSN, or timestamp of the point you want to provision from (if not using the most recent). You can run these
commands to get the list of snapshots or timeflow ranges:
Procedure
1. Execute the database provision command.
6. Set the source configuration properties on the target SAP ASE instance
Procedure:
1. ssh into the Delphix Engine using delphix_admin credentials.
2. Go into database and select the VDB or dSource you would like to take a snapshot of.
ssh delphix_admin@dengine
delphix > database
delphix database > select vdb_test
3. Run sync and commit the operation.
Related Articles:
CLI Cookbook: Creating Policies
ssh delphix_admin@<server_ip>
2. Select the VDB.
delphix> database
delphix database> ls
Objects
NAME      PARENTCONTAINER  DESCRIPTION
dSource1  -                -
dSource2  -                -
VDB1      dSource1         -
VDB2      dSource2         -
VDB3      dSource1         -
delphix database> select VDB1
3. List the VDB parameters, and make a note of the currentTimeflow value.
type: OracleTimeflow
name: VDB1/default
container: VDB1
parentPoint:
type: OracleTimeflowPoint
location: 141285148
timeflow: dSource1/default
parentSnapshot: @2013-02-14T15:07:28.491Z
reference: ORACLE_TIMEFLOW-92572
Prerequisites
A dSource can only be attached to a new data source once it has been unlinked.
When attaching an SAP ASE dSource to a new data source, the new data source must be the same logical database satisfying the following
constraints:
Same dbid
Same dbname
Same creation time
You must also make sure that you follow the normal prerequisites for an SAP ASE data source found in Requirements for SAP ASE Source
Hosts and Databases.
Procedure
Detach a dSource
1. Login to the CLI as delphix_admin or a user with Admin privileges
2. Select the dSource.
delphix database "dexample" > attachSource
delphix database "dexample" attachSource *> ls
Properties
type: ASEAttachSourceParameters
dbCredentials:
type: PasswordCredential
password: ******** (*)
dbUser: sa (*)
source:
type: ASELinkedSource
name: source_ASE_servername_example (*)
config: dexample (*)
externalFilePath: (unset)
loadBackupPath: /tmp/backups (*)
loadBackupServerName: source_backupserver_name_example (*)
monitorLocation: (unset)
operations:
type: LinkedSourceOperations
postSync:
0:
type: RunCommandOnSourceOperation (*)
command: # (*)
preSync:
0:
Prerequisites
You should review the topic Replication Overview to understand which objects are copied as part of a backup or restore operation, as
well as the dependencies between objects.
Procedure
1. Switch to the replication spec context.
delphix> cd replication/spec
delphix replication spec> ls
Operations
create
2. Create a new replication spec.
Procedure
1. Switch to the replication spec context and list the specs on the system.
delphix> cd replication/spec
delphix replication spec> ls
Objects
REFERENCE
TARGETHOST
REPLICATION_SPEC-1 exampleHost.mycompany.com
Operations
create
2. Select the replication spec to remove.
Procedure
1. Switch to the namespace context and list the namespaces on the system
delphix> cd namespace
delphix namespace> ls
Objects
NAME
[172.16.203.93]
Operations
lookup
2. Select the namespace to failover
Procedure
1. Switch to the replication spec context and list the specs on the system.
delphix> cd replication/spec
delphix replication spec> ls
Objects
REFERENCE
TARGETHOST
REPLICATION_SPEC-1 exampleHost.mycompany.com
Operations
create
2. Select the replication spec to execute.
Procedure:
ssh delphix@<yourengine>
2. Navigate to the JetStream container that you want to delete
3. Delete the container. Note that you need to set whether you want to delete the VDBs in the container (false will preserve the VDBs;
true will delete the VDBs along with the container).
1. Log into your Delphix Engine using delphix_admin (or admin privileged account)
ssh delphix_admin@<yourengine>
2. Find the template you want to delete
delphix > jetstream
delphix jetstream template > ls
delphix jetstream template > select 'TEMPLATE_X'
delphix jetstream template 'TEMPLATE_X' >
Delphix Version    API Version  Major Changes
3.0.x.x            1.0.0        http://<engine-address>/api/json/versions/1.0.0/delphix.json
3.1.0.x, 3.1.1.x   1.1.0        http://<engine-address>/api/json/versions/1.1.0/delphix.json
3.1.2+             1.1.1        http://<engine-address>/api/json/versions/1.1.1/delphix.json
3.2.0.x            1.2.0        http://<engine-address>/api/json/versions/1.2.0/delphix.json
3.2.1.x            1.2.1        http://<engine-address>/api/json/versions/1.2.1/delphix.json
3.2.2.x, 3.2.3.x   1.2.2        http://<engine-address>/api/json/versions/1.2.2/delphix.json
3.2.4+             1.2.3        http://<engine-address>/api/json/versions/1.2.3/delphix.json
4.0.0.x            1.3.0        http://<engine-address>/api/json/versions/1.3.0/delphix.json
4.0.1.x - 4.0.2.x  1.3.1        http://<engine-address>/api/json/versions/1.3.1/delphix.json
4.0.3+             1.3.2        http://<engine-address>/api/json/versions/1.3.2/delphix.json
4.1.0.x            1.4.0        http://<engine-address>/api/json/versions/1.4.0/delphix.json
4.1.1.x            1.4.1        http://<engine-address>/api/json/versions/1.4.1/delphix.json
4.1.2.x, 4.1.3.x   1.4.2        http://<engine-address>/api/json/versions/1.4.2/delphix.json
4.1.4+             1.4.3        http://<engine-address>/api/json/versions/1.4.3/delphix.json
4.2.1.x            1.5.0        http://<engine-address>/api/json/versions/1.5.0/delphix.json
4.2.2.x            1.5.1        http://<engine-address>/api/json/versions/1.5.1/delphix.json
4.2.3.x            1.5.2        http://<engine-address>/api/json/versions/1.5.2/delphix.json
4.2.4.x, 4.2.5.x   1.5.3        http://<engine-address>/api/json/versions/1.5.3/delphix.json
4.3.1.x, 4.3.2.x   1.6.0        http://<engine-address>/api/json/versions/1.6.0/delphix.json
4.3.3.x            1.6.1        http://<engine-address>/api/json/versions/1.6.1/delphix.json
4.3.4.x            1.6.2        http://<engine-address>/api/json/delphix.json
Related Links
Migrating from the Delphix Engine 2.7 CLI
Object Types
All objects in the Delphix API are "typed objects." All such objects have a type field that indicates the type of the object and its associated
semantics. This allows for object inheritance and polymorphism without requiring separate APIs for each type, and allows generic client-specific
semantic encoding and decoding without having to be aware of the context. This means that even APIs that operate on only a specific type (such as
the Group API) still require a type field to be specified as part of the input, and will continue to report the type of objects when listing or retrieving
objects. This allows the APIs to evolve in a backwards-compatible fashion through the introduction of new types.
Certain "root" object types (groups, containers, sources, etc.) have an associated API. This API is rooted at a particular point under /resources/
json/delphix, but all APIs follow a standard format beneath this namespace. The CLI namespace is a direct reflection of this URL, and the API
reference has an index both by object type and by object (CLI) path. These APIs may operate on different extended types (such as
OracleSIConfig and OracleRACConfig), but the base operations remain the same regardless of input type.
Object References
Some objects returned by the APIs are pure typed objects, in that they don't represent persistent state on the Delphix Engine but are rather
calculated and returned upon request. Most of the objects in the system, however, are "persistent objects." Persistent objects (of type
PersistentObject) have a stable reference that uniquely identifies the object on the system. This reference is separate from its name, so that objects can
be renamed without affecting the programmatic API. More information about object names and how they can be represented generically can be
found in the CLI documentation. Object references are opaque tokens; while they can be stored and re-used later, any interpretation of
their contents is unstable and may break in a future release.
The Delphix object hierarchy is stitched together by these object references. When fetching an object that refers to another object, the member
will be returned as a reference, rather than being inserted directly within the parent object. This allows consumers to control when and how links
are resolved, and makes it clear when an object may change independently from its parent. The per-object APIs outlined below all operate on
object references.
Note that some Delphix objects are singleton objects, and there is only one such object on the system. These objects do not have references
because the API URL uniquely identifies the object on the system.
API Operations
All APIs optionally support the following operations:
CREATE - Create a new instance of the given object type. This is used for most objects, but more complicated objects, such as dSources
and VDBs, must be created through a dedicated link or provision operation. The input to this operation is typically the object itself,
though some objects may have specialized parameters used during the creation of objects. An example of this is HostEnvironmentCreateParameters.
UPDATE - Update properties of the given object, specified as an object reference in the URL.
DELETE - Delete a particular object, specified as an object reference in the URL. These operations are typically done as HTTP DELETE
operations, but it is also possible to do a POST operation to the /delete API to do the same thing. The POST form allows for
delete-specific parameters, such as OracleDeleteParameters.
GET - Get the properties for a particular object, specified as an object reference in the URL.
LIST - List all objects of the given type within the system. These APIs typically take optional query parameters that allow the set of
results to be constrained, filtered, paginated, or sorted.
In addition, the following non-CRUD operations may be supported:
Root Operation - A POST or GET operation to the root of an API namespace, not associated with a particular object. This can be
used for singleton objects, such as NDMPConfig, operations that create objects, such as link, and operations that operate on multiple
objects at once.
Per-object Operation - A POST operation to a particular object reference. These operations are typically read-write, but are not required to be so; read-only operations that cannot be modeled as CRUD operations, or that require complex input, also use per-object operations.
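The URL structure these operations share can be sketched as a small helper. The helper itself and its names are illustrative, not part of the Delphix API; only the URL shapes it produces come from this guide:

```python
def api_url(object_type, ref=None, operation=None,
            base="/resources/json/delphix"):
    """Build a Delphix web service URL.

    Root operations omit both ref and operation; per-object operations
    supply a ref; named operations append a final path component.
    """
    parts = [base, object_type]
    if ref is not None:
        parts.append(ref)        # e.g. "ORACLE_DB_CONTAINER-3"
    if operation is not None:
        parts.append(operation)  # e.g. "refresh", "delete", "link"
    return "/".join(parts)

# Root operation (LIST via GET, CREATE via POST)
print(api_url("database"))
# Root non-CRUD operation, such as linking a dSource
print(api_url("database", operation="link"))
# Per-object operation on a reference
print(api_url("database", "ORACLE_DB_CONTAINER-3", "refresh"))
```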
attached "sources". More information about how Database objects are modeled within Delphix can be found in the CLI documentation.
Asynchronous Execution
All APIs are designed to be transactionally safe and "quick." However, there are operations that may take a long period of time, or may need to
reach out to external hosts or databases such that they cannot be done safely within the context of a single API call. Such operations will dispatch
a Job to handle asynchronous execution of the operation. Any API can potentially spawn a job, and which APIs spawn jobs and which do not
may differ between object types or releases. If you are developing a full-featured automation system, it is recommended that you build a generic
infrastructure to handle job monitoring, rather than encoding the behavior of particular APIs that may change over time.
Every operation, except for LIST and GET, which are guaranteed to be read-only, can potentially spawn a job. This is represented by the job field of the APIResult object. If this field is null, then the action can be completed within the bounds of the API call. Otherwise, a reference to a
dispatched job is returned.
Jobs can spawn other jobs for especially complex operations, such as provisioning to an Oracle cluster environment. The job returned in the API
invocation is the root job, and overall success or failure of the operation is determined by the state of this job. Some operations may succeed even
if a child job fails. An example would be provisioning to an Oracle cluster where one node failed, but others were successful.
Progress can be monitored by examining the JobEvent objects in the Job object returned through the job API.
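The generic job-monitoring infrastructure recommended above might look like the following sketch. The fetch_job callable stands in for whatever HTTP GET retrieves a Job object from the job API; the jobState field name and the COMPLETED/FAILED/CANCELED state values are assumptions to verify against the /api/#Job reference on your engine:

```python
import time

def wait_for_job(fetch_job, job_ref, poll_interval=1.0):
    """Poll a job until it reaches a terminal state and return the final Job.

    fetch_job(job_ref) must return the Job as a dict; the state names
    checked here are assumptions, not confirmed by this guide.
    """
    while True:
        job = fetch_job(job_ref)
        if job["jobState"] in ("COMPLETED", "FAILED", "CANCELED"):
            return job
        time.sleep(poll_interval)

def finish(api_result, fetch_job):
    """Handle an APIResult: a null job field means the action already
    completed synchronously; otherwise wait for the dispatched job."""
    if api_result.get("job") is None:
        return api_result
    return wait_for_job(fetch_job, api_result["job"], poll_interval=0.5)
```

Because any API may start spawning jobs in a future release, routing every APIResult through a helper like finish() is safer than special-casing individual calls.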
Introduction
The Delphix Engine provides a set of public, stable web service APIs (application programming interfaces). The web services form the basis upon which the GUI and CLI are built, and are designed to be public and stable. This guide covers the basic operation of the protocol, concepts, and
considerations when building layered infrastructure. It is not a reference for all available APIs. For more information on available APIs, go to the
'/api' URL of a Delphix appliance, which will provide a complete reference of all available APIs for the version of Delphix running on that system.
http://<server>/api
The CLI is a thin veneer over the web services. If you are new to the web services, it is recommended you first test out operations with the CLI,
and use the setopt trace=true option to dump the raw data being sent and received to see the API in action.
Protocol Operation
The Delphix web services are a RESTful API with loose CRUD semantics using JSON encoding.
The following HTTP methods are supported:
GET - Retrieve data from the server where complex input is not needed. All GET requests are guaranteed to be read-only, but not all
read-only requests are required to use GET. Simple input (strings, numbers, boolean values) can be passed as query parameters.
POST - Issue a read/write operation, or make a read-only call that requires complex input. The optional body of the call is expressed as
JSON.
DELETE - Delete an object on the system. For languages that don't provide a native wrapper for DELETE, or for delete operations with
optional input, all delete operations can also be invoked as POST to the same URL with /delete appended to it.
Regardless of the operation, the result is returned as a JSON-encoded result, the contents of which are covered below. The Session Establishment section below demonstrates establishing a new Session with curl.
Session Establishment
Login involves establishing a session and then authenticating to the Delphix Engine. Only authenticated users can access the web APIs. Each
user must establish a session prior to making any other API calls. This is done by sending a Session object to the URL /resources/json/delphix/session. This session object will specify the APIVersion to use for communication between the client and server. If the server doesn't
support the version requested due to an incompatible change reflected in the API major version number, an error will be returned.
The resulting session object encodes the native server version, which can be different than the version requested by the client. If the server is
running a more recent but compatible version, any translation of input and output to the native version is handled automatically. More information
on versioning can be found in the documentation for the APIVersion object within the API reference on a Delphix system. If the client supports
multiple versions, the appropriate type can be negotiated by trying to establish a session with each major version supported, and then inspecting
the version returned.
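That negotiation strategy — try each major version the client supports, highest first, and inspect what the server returns — can be sketched as a pure helper. Here try_session stands in for the actual session-establishment call, and the version tuples are illustrative:

```python
def negotiate_version(client_majors, try_session):
    """Return the server's APIVersion for the newest compatible major.

    try_session(major) attempts to establish a session at that major
    version and returns the server's (major, minor, micro) tuple, or
    None when the server rejects that major as incompatible.
    """
    for major in sorted(client_majors, reverse=True):
        server_version = try_session(major)
        if server_version is not None:
            return server_version
    raise RuntimeError("no compatible API major version")

# Example: a server that only speaks major version 1
print(negotiate_version([1, 2], lambda m: (1, 4, 1) if m == 1 else None))
```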
The session will also return an identifier through browser cookies that can be reused in subsequent calls to use the same session credentials and
state without having to re-authenticate. The format of this cookie is private to the server and may change at any point. Sessions do not persist
across a server restart or reboot. The mechanism by which this cookie is preserved and sent with subsequent requests is client-specific. The
following demonstrates invoking the session API call using curl and storing the cookies in the file ~/cookies.txt for later use:
$ curl -s -X POST -k --data @- http://delphix-server/resources/json/delphix/session \
-c ~/cookies.txt -H "Content-Type: application/json" <<EOF
{
"type": "APISession",
"version": {
"type": "APIVersion",
"major": 1,
"minor": 4,
"micro": 1
}
}
EOF
Authentication
Once the session has been established, the next step is to authenticate to the server by executing the LoginRequest API. Unauthenticated
sessions are prohibited from making any API calls other than this login request. The username can be either a system user or domain user, and
the backend will authenticate using the appropriate method. This example illustrates logging in via curl using cookies created when the session was established:
$ curl -s -X POST -k --data @- http://delphix-server/resources/json/delphix/login \
-b ~/cookies.txt -H "Content-Type: application/json" <<EOF
{
"type": "LoginRequest",
"username": "delphix_username",
"password": "delphix_password"
}
EOF
POST /resources/json/delphix/database/provision
All operations in the CLI (those that require an explicit commit command) are modeled as POST HTTP calls. This is an example of a "root
operation", as they are invoked not on any particular object, but across the system as a whole. Operations that are invoked on a particular object
use a URL specific to that object:
POST /resources/json/delphix/database/ORACLE_DB_CONTAINER-3/refresh
While the CLI uses names to refer to objects, at the API level we use references. Persistent objects (those with a permanent unique identity) have
a reference field that is used in all cases to refer to the object. This allows references to remain constant even if objects are renamed.
For the standard CRUD (create, read, update, delete) operations, the mapping is slightly different:

CLI Operation | HTTP API
database list | GET /resources/json/delphix/database
database create | POST /resources/json/delphix/database
database "name" get | GET /resources/json/delphix/database/<reference>
database "name" update | POST /resources/json/delphix/database/<reference>
database "name" delete | DELETE /resources/json/delphix/database/<reference>
database "name" delete | POST /resources/json/delphix/database/<reference>/delete

The DELETE operation has an optional POST form that can take complex input for clients that don't support sending a payload as part of a DELETE operation.
dSource Operations

GUI | CLI | API Topic | Input Object | Web Services
Link | database link | Container | LinkParameters | POST /resources/json/delphix/database/link
Show configuration | database "name" get; source "name" get | Container | - | GET /resources/json/delphix/database/{ref}; GET /resources/json/delphix/source/{ref}
Sync | database "name" sync | Container | SyncParameters | POST /resources/json/delphix/database/{ref}/sync
Update | database "name" update | Container | Container | POST /resources/json/delphix/database/{ref}
Delete | database "name" delete | Container | DeleteParameters | POST /resources/json/delphix/database/{ref}/delete
Detach | database "name" detachSource | Container | DetachSourceParameters | POST /resources/json/delphix/database/{ref}/detachSource
Attach | database "name" attachSource | Container | AttachSourceParameters | POST /resources/json/delphix/database/{ref}/attachSource
Disable | source "name" disable | Source | SourceDisableParameters | POST /resources/json/delphix/source/{ref}/disable
Enable | source "name" enable | Source | SourceEnableParameters | POST /resources/json/delphix/source/{ref}/enable
VDB Operations

GUI | CLI | API Topic | Input Object | Web Services
Provision | database provision | Container | ProvisionParameters | POST /resources/json/delphix/database/provision
V2P | database export | Container | ExportParameters | POST /resources/json/delphix/database/export
Refresh | database "name" refresh | Container | RefreshParameters | POST /resources/json/delphix/database/{ref}/refresh
Snapshot | database "name" sync | Container | SyncParameters | POST /resources/json/delphix/database/{ref}/sync
Update | database "name" update | Container | Container | POST /resources/json/delphix/database/{ref}
Delete | database "name" delete | Container | DeleteParameters | POST /resources/json/delphix/database/{ref}/delete
Start | source "name" start | Source | StartParameters | POST /resources/json/delphix/source/{ref}/start
Stop | source "name" stop | Source | StopParameters | POST /resources/json/delphix/source/{ref}/stop
Enable | source "name" enable | Source | SourceEnableParameters | POST /resources/json/delphix/source/{ref}/enable
Disable | source "name" disable | Source | SourceDisableParameters | POST /resources/json/delphix/source/{ref}/disable
Environment Operations

GUI | CLI | API Topic | Input Object | Web Services
Add environment | environment create | SourceEnvironment | SourceEnvironmentCreateParameters | POST /resources/json/delphix/environment
Update | | | | POST /resources/json/delphix/environment/...
Delete | | | | DELETE /resources/json/delphix/environmen...
Refresh | | | | POST /resources/json/delphix/environment/...
Enable | | | | POST /resources/json/delphix/environment/...
Disable | | | | POST /resources/json/delphix/environment/...
Add manual repository | repository create | SourceRepository | SourceRepository | POST /resources/json/delphix/repository
Update repository | repository "name" update | SourceRepository | SourceRepository | POST /resources/json/delphix/repository/...
Remove manual repository | repository "name" delete | SourceRepository | | DELETE /resources/json/delphix/repository...
Show host details | host "name" get | Host | | GET /resources/json/delphix/host/{ref}
Add cluster node | | | | POST /resources/json/delphix/environment/...
Update cluster node | | | | POST /resources/json/delphix/environment/oracl...
Remove cluster node | | | | DELETE /resources/json/delphix/environment/oracl...
Add listener | environment oracle listener create | OracleListener | OracleListener | POST /resources/json/delphix/environment/...
Update listener | environment oracle listener "name" update | OracleListener | OracleListener | POST /resources/json/delphix/environment/oracl...
Remove listener | environment oracle listener "name" delete | OracleListener | | DELETE /resources/json/delphix/environment/oracl...
Delphix Login
$ curl -s -X POST -k --data @- http://delphix-server/resources/json/delphix/login \
-b ~/cookies.txt -H "Content-Type: application/json" <<EOF
{
"type": "LoginRequest",
"username": "delphix_username",
"password": "delphix_password"
}
EOF
NOTE: It is generally recommended to set the API session version to the highest level supported by your Delphix engine.
List Environment
curl -X GET -k http://delphix-server/resources/json/delphix/environment \
-b ~/cookies.txt -H "Content-Type: application/json"
For single-host environments, the reference can be used to get information about the associated host. It is also possible to get information about
all hosts (regardless of whether they are in a single-host environment or cluster) by omitting the environment query parameter.
For more information about the content of host objects, see the /api/#Host reference on your local Delphix Engine. Depending on the type of
the host, additional information may be available through the following types:
UnixHost
WindowsHost
List Alerts
$ curl -X GET -k http://delphix-server/resources/json/delphix/alert \
-b ~/cookies.txt -H "Content-Type: application/json"
For more information about the structure of an alert object, see the "/api/#Alert" link on your local Delphix Engine.
List Databases
$ curl -X GET -k http://delphix-server/resources/json/delphix/database
-b ~/cookies.txt -H "Content-Type: application/json"
For more information on the structure of a database object, see the /api/#Container reference on your local Delphix Engine. The following
sub-types are available depending on the type of database:
OracleDatabaseContainer
MSSqlDatabaseContainer
Each database has zero or one source associated with it. This source could be a linked source, indicating that the database is a dSource, or it
could be a virtual source, indicating that it is a VDB. If there are no sources, it is a detached dSource. The parentContainer property indicates
the reference to the parent container, also indicating that the database is a VDB. To get runtime information about the source associated with the
dSource or VDB, use the Source API with a database parameter set to the reference of the database in question.
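The classification rules in the paragraph above can be captured as a tiny predicate. The source objects here are illustrative dicts carrying the virtual flag described under List Sources; a real client would build them from the source API's JSON:

```python
def classify_database(sources):
    """Classify a database by its associated source objects (zero or one).

    sources: the Source objects returned for this database, as dicts.
    """
    if not sources:
        return "detached dSource"  # no source at all
    # A virtual source means the database is a VDB; a linked source
    # means it is a dSource.
    return "VDB" if sources[0].get("virtual") else "dSource"

print(classify_database([]))                    # no sources
print(classify_database([{"virtual": True}]))   # virtual source
print(classify_database([{"virtual": False}]))  # linked source
```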
List Sources
$ curl -X GET -k http://delphix-server/resources/json/delphix/source?database=DB_CONTAINER-13 \
-b ~/cookies.txt -H "Content-Type: application/json"
If the virtual flag is true, the source is a VDB; otherwise it is a dSource. For more information about the contents of a source object, see the /api/#Source reference on your local Delphix Engine. The following sub-types are available depending on the type of source:
OracleSource
OracleLinkedSource
OracleVirtualSource
MSSqlSource
MSSqlLinkedSource
MSSqlVirtualSource
List Snapshots
curl -X GET -k http://services.cloud.skytap.com:23173/resources/json/delphix/snapshot?database=ORACLE_DB_CONTAINER-15 \
-b ~/cookies.txt -H "Content-Type: application/json"
For more information about the structure of a snapshot object, see the /api/#TimeflowSnapshot reference on your local Delphix Engine.
Snapshots, while representing the points where provisioning will be most efficient, are not the only provisionable points within a database. To get a list
of all provisioning points, use the timeflowRange API. This API is based on a timeflow, which is the representation of one timeline within a
database. Currently, all databases have a single timeflow, though this may change in the future. To query for the ranges for a particular database,
you will need to use the currentTimeflow member of the target database.
See the List Snapshots topic for information on how to determine provisionable points in the parent database. The TimeflowPointBookmark type allows you to reference a previously created timeflow bookmark.
Refresh VDB
curl -v -X POST -k --data @- http://delphix-server/resources/json/delphix/database/ORACLE_DB_CONTAINER-13/refresh \
-b ~/cookies.txt -H "Content-Type: application/json" <<EOF
{
"type": "OracleRefreshParameters",
"timeflowPointParameters": {
"type": "TimeflowPointSemantic",
"container": "ORACLE_DB_CONTAINER-1",
"timeflow": "ORACLE_TIMEFLOW-13",
"location": "LATEST_SNAPSHOT"
}
}
EOF
For more information about the content of refresh parameters, see the /api/#RefreshParameters reference on your local Delphix Engine.
Depending on the type of the database, the following parameter types are available:
OracleRefreshParameters
MSSqlRefreshParameters
To provision a VDB you will need to supply the following:
Group reference -
VDB name -
Mount path -
DB/unique names - The Oracle DB and unique names, often the same as the VDB name.
Instance name/number - The Oracle instance name and number to use (dictated by your environment, but often ...)
Timeflow point - See API Cookbook: List Snapshots for information on finding a timeflow point, as well as the reference at "/api/#TimeflowPointParameters".
You will need to use the structure of the OracleProvisionParameters object to fill it out; see "/api/#OracleProvisionParameters" for details on which fields are mandatory or optional.
Here is a minimal example using curl to communicate with the API, provisioning a VDB called "EGVDB" (authentication omitted):
curl -X POST -k --data @- http://delphix1.company.com/resources/json/delphix/database/provision \
-b cookies.txt -H "Content-Type: application/json" <<EOF
{
"container": {
"group": "GROUP-2",
"name": "EGVDB",
"type": "OracleDatabaseContainer"
},
"source": {
"type": "OracleVirtualSource",
"mountBase": "/mnt/provision"
},
"sourceConfig": {
"type": "OracleSIConfig",
"databaseName": "EGVDB",
"uniqueName": "EGVDB",
"repository": "ORACLE_INSTALL-3",
"instance": {
"type": "OracleInstance",
"instanceName": "EGVDB",
"instanceNumber": 1
}
},
"timeflowPointParameters": {
"type": "TimeflowPointLocation",
"timeflow": "ORACLE_TIMEFLOW-123",
"location": "3043123"
},
"type": "OracleProvisionParameters"
}
EOF
#!/bin/bash
#
# sample script to start or stop a VDB.
#
# set this to the FQDN or IP address of the Delphix Engine
DE="192.168.2.131"
# set this to the Delphix admin user name
DELPHIX_ADMIN="delphix_admin"
# set this to the password for the Delphix admin user
DELPHIX_PASS="delphix"
# set this to the object reference for the VDB
VDB="ORACLE_VIRTUAL_SOURCE-5"
#
# create our session
curl -s -X POST -k --data @- http://${DE}/resources/json/delphix/session \
-c ~/cookies.txt -H "Content-Type: application/json" <<EOF
{
"type": "APISession",
"version": {
"type": "APIVersion",
"major": 1,
"minor": 4,
"micro": 1
}
}
EOF
echo
#
# authenticate to the DE
curl -s -X POST -k --data @- http://${DE}/resources/json/delphix/login \
-b ~/cookies.txt -H "Content-Type: application/json" <<EOF
{
"type": "LoginRequest",
"username": "${DELPHIX_ADMIN}",
"password": "${DELPHIX_PASS}"
}
EOF
echo
#
# start or stop the vdb based on the argument passed to the script
case $1 in
start)
curl -s -X POST -k http://${DE}/resources/json/delphix/source/${VDB}/start \
-b ~/cookies.txt -H "Content-Type: application/json"
;;
stop)
curl -s -X POST -k http://${DE}/resources/json/delphix/source/${VDB}/stop \
-b ~/cookies.txt -H "Content-Type: application/json"
;;
*)
echo "Unknown option: $1"
;;
esac
echo
Installation
The Delphix Python API, delphixpy, is available through PyPI; you can install it with pip (pip install delphixpy).
Python API
CLI Operation | Python API
group list | group.get_all(engine)
group create | group.create(engine, group=<delphixpy.web.vo.Group>)
group "name" get | group.get(engine, <reference>)
group "name" delete | group.delete(engine, <reference>)
Asynchronous Mode
The Python API runs in synchronous mode by default. If you wish to perform operations asynchronously, there is a context manager that allows you to do that. If you need to track job progress in asynchronous mode, you can get the reference of the last job started from engine.last_job. When the async context manager exits, it waits for all jobs started within the context to finish; if a job fails, exceptions.JobError is raised.
Here is how you would perform a sync operation on all databases asynchronously.
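A sketch of that pattern follows, assuming the asynchronous context manager is exposed as job_context.asyncly and that the database module follows the same layout as the group calls above; verify both names against your delphixpy version before relying on them:

```python
from delphixpy import job_context
from delphixpy.web import database

def sync_all_databases(engine):
    """Dispatch a sync job for every database, then wait for all of them.

    Inside the asyncly context each call returns as soon as its job is
    dispatched; on exit the context manager waits for every job started
    within it and raises exceptions.JobError if any of them failed.
    """
    with job_context.asyncly(engine):
        for container in database.get_all(engine):
            database.sync(engine, container.reference)
            print("dispatched job:", engine.last_job)
```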
Procedure
1. Create new environment creation parameters and initialize the structure for a UNIX host.
2. Set the host addresses and port to use when connecting to the host.
host_environment_create_parameters_vo.host_parameters.host.addresses = ["192.168.1.2"]
host_environment_create_parameters_vo.host_parameters.host.port = 22
3. Set the toolkit path.
This is where Delphix will store temporary binaries used while the host is configured as part of Delphix.
host_environment_create_parameters_vo.host_parameters.host.toolkit_path = "/var/delphix"
4. Set the username and password to use when connecting over SSH.
This user must have the privileges described in the Delphix Administration Guide. To use SSH key authentication instead, change the credential object to SystemKeyCredential.
host_environment_create_parameters_vo.primary_user.name = "oracle"
host_environment_create_parameters_vo.primary_user.credential.password = "my secret password"
5. Commit the result. A reference to your new environment will be returned from the create call.
The environment discovery process will execute as an asynchronous job. The default behavior is to wait for the result, so progress will be
updated until the discovery process is complete or fails.
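Putting the steps above together, a consolidated sketch might look like the following. The VO class names and the DelphixEngine constructor arguments are assumptions, not confirmed by this guide — check /api/#HostEnvironmentCreateParameters and your delphixpy version before relying on them:

```python
from delphixpy.delphix_engine import DelphixEngine
from delphixpy.web import environment, vo

# Step 1: build the creation parameters for a UNIX host (class names assumed)
params = vo.HostEnvironmentCreateParameters()
params.host_environment = vo.UnixHostEnvironment()
params.host_parameters = vo.UnixHostCreateParameters()
params.host_parameters.host = vo.UnixHost()

# Step 2: host address and SSH port
params.host_parameters.host.addresses = ["192.168.1.2"]
params.host_parameters.host.port = 22

# Step 3: toolkit path for the temporary binaries Delphix stores on the host
params.host_parameters.host.toolkit_path = "/var/delphix"

# Step 4: SSH credentials (use SystemKeyCredential for key-based auth)
params.primary_user = vo.EnvironmentUser()
params.primary_user.name = "oracle"
params.primary_user.credential = vo.PasswordCredential()
params.primary_user.credential.password = "my secret password"

# Step 5: commit; create returns a reference to the new environment and,
# by default, waits for the asynchronous discovery job to finish
engine = DelphixEngine("delphix-server", "delphix_admin", "password", "DOMAIN")
reference = environment.create(engine, params)
```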