Divar Ip 7000 Raid Controller - 20!01!2019

Uploaded by Alaa M

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

RAID Levels Intro


XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams


RAID stands for “Redundant Array of Inexpensive (Independent) Disks”.
In most situations you will be using one of the following four levels of RAID.

- RAID 0
- RAID 1
- RAID 5
- RAID 10 (also known as RAID 1+0)

This article explains the main differences between these RAID levels, along with easy-to-understand diagrams.

In all the diagrams mentioned below:


- A, B, C, D, E and F – represent blocks
- p1, p2, and p3 – represent parity
RAID LEVEL 0

Following are the key points to remember for RAID level 0:


- Minimum 2 disks.
- Excellent performance (as blocks are striped).
- No redundancy (no mirror, no parity).
- Don’t use this for any critical system.
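As a rough illustration (a Python sketch, not controller code), RAID 0's striping rule simply sends block i to disk i mod N:

```python
# Illustrative sketch of RAID 0 striping: block i is written to disk i % N.
def raid0_layout(blocks, n_disks):
    """Return, per disk, the blocks that land on it under striping."""
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

# Blocks A..F striped over 2 disks, as in the diagrams of this article:
print(raid0_layout(["A", "B", "C", "D", "E", "F"], 2))
# [['A', 'C', 'E'], ['B', 'D', 'F']]
```

Because consecutive blocks land on different disks, reads and writes are spread over all spindles, which is where the "excellent performance" comes from.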
RAID LEVEL 1

Following are the key points to remember for RAID level 1:


- Minimum 2 disks.
- Good performance (no striping, no parity).
- Excellent redundancy (as blocks are mirrored).
RAID LEVEL 5

Following are the key points to remember for RAID level 5:


- Minimum 3 disks.
- Good performance (as blocks are striped).
- Good redundancy (distributed parity).
- Most cost-effective option providing both performance and redundancy. Use this for databases that are heavily read-oriented. Write operations will be slow.
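At its core, the distributed parity is a bytewise XOR across the blocks of a stripe. A minimal Python sketch (illustrative only, not a real RAID implementation) shows how a lost block is rebuilt:

```python
# Sketch of RAID 5 parity: parity = XOR of all data blocks in a stripe,
# so any single missing block is the XOR of the survivors plus parity.
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
parity = xor_blocks(stripe)

# The disk holding the second block fails; rebuild it from the rest + parity:
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == stripe[1])            # True
```

The same XOR also explains the slow writes: every small write must read old data and old parity before the new parity can be computed.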

RAID LEVEL 10

Following are the key points to remember for RAID level 10:
- Minimum 4 disks.
- This is also called a “stripe of mirrors”.
- Excellent redundancy (as blocks are mirrored).
- Excellent performance (as blocks are striped).
- If you can afford the cost, this is the BEST option for any mission-critical applications (especially databases).
RAID 2, RAID 3, RAID 4, RAID 6 Explained with Diagram
In most critical production servers, you will be using either RAID 5 or RAID 10.
However, there are several non-standard RAID levels, which are not used except in some rare situations. It is good
to know what they are.
This article explains with a simple diagram how RAID 2, RAID 3, RAID 4, and RAID 6 works.
RAID 2

 This uses bit-level striping, i.e. instead of striping the blocks across the disks, it stripes the bits across the disks.
 In the above diagram b1, b2, b3 are bits. E1, E2, E3 are error correction codes.
 You need two groups of disks. One group of disks are used to write the data; another group is used
to write the error correction codes.
 This uses Hamming error correction code (ECC), and stores this information in the redundancy disks.
 When data is written to the disks, it calculates the ECC code for the data on the fly, and stripes the
data bits to the data-disks, and writes the ECC code to the redundancy disks.
 When data is read from the disks, it also reads the corresponding ECC code from the redundancy
disks, and checks whether the data is consistent. If required, it makes appropriate corrections on the
fly.
 This uses a lot of disks and can be configured in different disk configurations. Some valid configurations are: 1) 10 disks for data and 4 disks for ECC; 2) 4 disks for data and 3 disks for ECC.
 This is not used anymore. It is expensive, implementing it in a RAID controller is complex, and ECC is redundant nowadays, as hard disks themselves can do this.
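The Hamming ECC idea behind RAID 2 can be illustrated with the classic Hamming(7,4) code: four data bits gain three parity bits, and the parity checks locate (and fix) any single flipped bit, analogous to the "4 disks for data and 3 disks for ECC" configuration above. This is a toy illustration, not RAID 2 controller code:

```python
# Toy illustration of the Hamming ECC behind RAID 2, using Hamming(7,4):
# four data bits get three parity bits; the parity checks locate any single
# flipped bit so it can be corrected on the fly.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                  # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                  # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                  # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3         # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                           # flip one bit, as if one disk mis-read
print(hamming74_correct(word) == hamming74_encode([1, 0, 1, 1]))  # True
```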
RAID 3

 This uses byte-level striping, i.e. instead of striping the blocks across the disks, it stripes the bytes across the disks.
 In the above diagram B1, B2, B3 are bytes. p1, p2, p3 are parities.
 Uses multiple data disks, and a dedicated disk to store parity.
 The disks have to spin in sync to get to the data.
 Sequential read and write will have good performance.
 Random read and write will have the worst performance.
 This is not commonly used.
RAID 4

 This uses block-level striping.


 In the above diagram B1, B2, B3 are blocks. p1, p2, p3 are parities.
 Uses multiple data disks, and a dedicated disk to store parity.
 Minimum of 3 disks (2 disks for data and 1 for parity)
 Good random reads, as the data blocks are striped.
 Bad random writes, as for every write, it has to write to the single parity disk.
 It is somewhat similar to RAID 3 and 5, but a little different.
 This is just like RAID 3 in having the dedicated parity disk, but this stripes blocks.
 This is just like RAID 5 in striping the blocks across the data disks, but this has only one parity disk.
 This is not commonly used.
RAID 6

 Just like RAID 5, this does block-level striping. However, it uses dual parity.
 In the above diagram A, B, C are blocks. p1, p2, p3 are parities.
 This creates two parity blocks for each stripe of data.
 Can handle two disk failures.
 This RAID configuration is complex to implement in a RAID controller, as it has to calculate two sets of parity data for each stripe.
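A sketch of one common dual-parity construction (the one used, for example, by the Linux RAID 6 driver): P is the XOR of the data blocks and Q a weighted sum in GF(2^8), so two lost data blocks can be solved for. One-byte "blocks" keep the arithmetic visible; real implementations are heavily optimized:

```python
# Sketch of RAID 6 dual parity over GF(2^8): P = XOR of data, Q = sum of
# g^k * D_k with g = 2. Losing data blocks i and j leaves two equations in
# two unknowns, which the recovery below solves.
def gf_mul(a, b):
    """Multiply in GF(2^8) with the RAID 6 polynomial x^8+x^4+x^3+x^2+1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def pq(data):
    """Compute the P (XOR) and Q (GF-weighted) parity bytes for a stripe."""
    P = Q = 0
    for k, d in enumerate(data):
        P ^= d
        Q ^= gf_mul(gf_pow(2, k), d)
    return P, Q

def recover_two(data, i, j, P, Q):
    """Rebuild data[i] and data[j]; only the surviving entries are read."""
    pxy, qxy = P, Q
    for k, d in enumerate(data):
        if k not in (i, j):
            pxy ^= d                        # pxy = D_i ^ D_j
            qxy ^= gf_mul(gf_pow(2, k), d)  # qxy = g^i*D_i ^ g^j*D_j
    gi, gj = gf_pow(2, i), gf_pow(2, j)
    inv = next(x for x in range(1, 256) if gf_mul(gi ^ gj, x) == 1)
    di = gf_mul(qxy ^ gf_mul(gj, pxy), inv)
    return di, pxy ^ di

data = [0x41, 0x42, 0x43, 0x44]             # one byte per data disk
P, Q = pq(data)
print(recover_two(data, 1, 3, P, Q) == (data[1], data[3]))  # True
```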
RAID 10 Vs RAID 01 (RAID 1+0 Vs RAID 0+1) Explained with Diagram

RAID 10 is not the same as RAID 01.


This article explains the difference between the two with a simple diagram.
I’m going to keep this explanation very simple for you to understand the basic concepts well. In the following
diagrams A, B, C, D, E and F represent blocks.

RAID 10

 RAID 10 is also called RAID 1+0.


 It is also called a “stripe of mirrors”.
 It requires a minimum of 4 disks.
 To understand this better, group the disks in pairs of two (for mirroring). For example, if you have a total of 6 disks in RAID 10, there will be three groups–Group 1, Group 2, Group 3–as shown in the above diagram.
 Within the group, the data is mirrored. In the above example, Disk 1 and Disk 2 belong to Group 1. The data on Disk 1 will be exactly the same as the data on Disk 2. So, block A written on Disk 1 will be mirrored on Disk 2. Block B written on Disk 3 will be mirrored on Disk 4.
 Across the group, the data is striped. i.e. Block A is written to Group 1, Block B is written to Group 2,
Block C is written to Group 3.
 This is why it is called “stripe of mirrors”. i.e. the disks within the group are mirrored. But, the groups
themselves are striped.
If you are new to this, make sure you understand how RAID 0, RAID 1, RAID 5 and RAID 2, RAID 3, RAID 4,
RAID 6 work.
RAID 01

 RAID 01 is also called RAID 0+1.


 It is also called a “mirror of stripes”.
 It requires a minimum of 3 disks, but in most cases it will be implemented with a minimum of 4 disks.
 To understand this better, create two groups. For example, if you have a total of 6 disks, create two groups with 3 disks each. In the above example, Group 1 has 3 disks and Group 2 has 3 disks.
 Within the group, the data is striped. i.e. In the Group 1 which contains three disks, the 1st block will
be written to 1st disk, 2nd block to 2nd disk, and the 3rd block to 3rd disk. So, block A is written to
Disk 1, block B to Disk 2, block C to Disk 3.
 Across the group, the data is mirrored. i.e. The Group 1 and Group 2 will look exactly the same. i.e.
Disk 1 is mirrored to Disk 4, Disk 2 to Disk 5, Disk 3 to Disk 6.
 This is why it is called “mirror of stripes”. i.e. the disks within the groups are striped. But, the groups
are mirrored.

Main difference between RAID 10 vs RAID 01

 Performance on both RAID 10 and RAID 01 will be the same.


 The storage capacity on these will be the same.
 The main difference is the fault tolerance level. On most implementations of RAID controllers, RAID 01 fault tolerance is lower. On RAID 01, since we have only two groups of RAID 0, if two drives (one in each group) fail, the entire RAID 01 will fail. In the above RAID 01 diagram, if Disk 1 and Disk 4 fail, both groups will be down, so the whole RAID 01 will fail.
 RAID 10 fault tolerance is higher. On RAID 10, since there are many groups (as each individual group is only two disks), even if three disks fail (one in each group), the RAID 10 is still functional. In the above RAID 10 example, even if Disk 1, Disk 3 and Disk 5 fail, the RAID 10 will still be functional.
 So, given a choice between RAID 10 and RAID 01, always choose RAID 10.
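The difference in fault tolerance can be checked by brute force. The following Python sketch (illustrative only; it assumes the 6-disk groupings from the diagrams above) enumerates every two-disk failure:

```python
# Enumerate all two-disk failures for 6 disks and count which ones each
# layout survives (disk groupings taken from the diagrams above).
from itertools import combinations

def raid10_survives(failed):
    # Three mirror pairs: (0,1), (2,3), (4,5). Fails only if a whole pair fails.
    pairs = [(0, 1), (2, 3), (4, 5)]
    return all(not set(p) <= failed for p in pairs)

def raid01_survives(failed):
    # Two striped groups: disks 0-2 and 3-5. Needs one fully intact group.
    groups = [{0, 1, 2}, {3, 4, 5}]
    return any(not (g & failed) for g in groups)

two_disk = list(combinations(range(6), 2))
print(sum(raid10_survives(set(f)) for f in two_disk))  # 12 of 15 survived
print(sum(raid01_survives(set(f)) for f in two_disk))  # 6 of 15 survived
```

RAID 10 survives 12 of the 15 possible two-disk failures, RAID 01 only 6, which is the quantitative version of "always choose RAID 10".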
Bosch Security Academy Training
RAID
• Redundant array of independent disks
• Data storage scheme using multiple disks
• RAID combines multiple hard disk drives into a single logical hard drive. Thus, instead of seeing
several different hard drives, the operating system sees only one.
• RAID is typically used on server computers, and is usually (but not necessarily) implemented with
identically sized disk drives.

Raid 0

• RAID 0: Striped Set (2 disk minimum) without parity: provides improved performance and additional
storage but no fault tolerance from disk errors or disk failure.
• Any disk failure destroys the array, which becomes more likely with more disks in the array.

Raid 1

RAID 1: Mirrored Set (2 disks minimum) without parity: provides fault tolerance from a single disk failure.
Array continues to operate with one failed drive
Raid 5

• RAID 5: Striped Set (3 disk minimum) with Distributed Parity: Distributed parity requires all but one
drive to be present to operate; drive failure requires replacement, but the array is not destroyed by
a single drive failure.
• The array will have data loss in the event of a second drive failure and is vulnerable until the data
that was on the failed drive is rebuilt onto a replacement drive.

Raid 4
RAID-DP (Non-standard RAID levels)

Raid DP

 DP = Double Parity
 Can handle two disk failures in one RAID group without loss of data.
 Comparable to RAID 6 for a cost only a little higher than RAID 4.
 RAID-DP is only available in combination with NetApp filers, and therefore on iSCSI only.

RAID-DP implements RAID 4, except with an additional disk that is used for a second parity, so it has the
same failure characteristics of a RAID 6.[3] The performance penalty of RAID-DP is typically under 2% when
compared to a similar RAID 4 configuration

How it works:
Data ONTAP uses RAID-DP (also referred to as "double", "dual", or "diagonal" parity). It is a form of RAID 6,
but unlike usual RAID 6 implementations it does not use distributed parity as in RAID 5. Instead, two
unique parity disks with separate parity calculations are used. This is a modification of RAID 4 with an extra
parity disk.

A “horizontal stripe” (for the first parity disk) and a “diagonal stripe” (for the second) are used.

By using these two algorithms together, data can be read and reconstructed
even in the event of a dual drive failure.
Spare drives
 It is possible to assign a drive as a spare drive.
 In case of a drive failure the spare drive takes over the functionality of the failed drive. Via the parity bits on the remaining drives, the data of the failed drive is rebuilt on the spare drive.
 The failed drive has to be replaced and then automatically becomes the new spare drive (configuration setting).
 If a drive is assigned as a spare drive, there is no need to swap the failed drive immediately. The system at this point does not get the status “vulnerable”. (However, during rebuilding of the data the system is vulnerable.)
 Vulnerable means: if another drive breaks down, no data can be retrieved from the array.

Disk arrays
 Recording is done on iSCSI disk arrays.
 A disk array is a unit consisting of hard drives which are managed by a built-in controller.
 Based on the configuration, the unit has the capability to recover data from one or two broken disks.
 This redundancy level is expressed as a RAID level.
 RAID = Redundant array of independent disks
 Data storage scheme using multiple disks
 RAID combines multiple hard disk drives into a single logical hard drive. Thus, instead of
seeing several different hard drives, the operating system sees only one.
 RAID is usually implemented with identically sized disk drives.

iSCSI array
• An iSCSI device is a storage device consisting of a number of hard drives in one housing
• A DVSA has to be configured
– This can be done via different software communication tools. Depending on the type and
brand these can be:
• Telnet
• Hyperterminal
– Command line interface
• Raid-watch software.
• FilerView
• If the DVSA is properly configured, then IP devices can stream their recordings to it.

• An iSCSI device is a standalone unit that is able to record data coming from encoders without any
help of additional servers and/or NVR software. This makes iSCSI recording more fault tolerant
compared to e.g. NVR recordings, since no additional NVR server is required.
• All BOSCH VIP encoders are able to record directly to iSCSI

• All recordings are done directly to iSCSI.


• Within direct-to-iSCSI recording two recording principles can be defined:
– VRM managed recording and
– VRM unmanaged recording.
• In case we have VRM unmanaged recordings, every IP encoder needs a dedicated LUN for recording.
A LUN (Logical Unit Number) can be seen as a local recording partition on a virtual hard drive.

• LUNs can be of different sizes, with a maximum of 2 TB (terabytes).


• Each LUN records cameras from one IP address
• One IP address = 1-4 cameras depending on encoder model
– Dinion IP and VIP X1 have 1 camera / IP address
– VIP X2 has 2 cameras / IP address
– Each VIP X1600 module has 4 cameras / IP address
– Fully loaded VIP X1600:
• 4 modules (total 16 cam) ;4 IP addresses; needs 4 LUNs.
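The rule above can be sketched in a few lines (illustrative only; the cameras-per-IP figures are the ones listed above, and one LUN is needed per IP address in unmanaged recording):

```python
# Sketch of unmanaged direct-to-iSCSI planning: one LUN per IP address,
# cameras per IP address depending on the encoder model (values from the
# list above).
CAMERAS_PER_IP = {"Dinion IP": 1, "VIP X1": 1, "VIP X2": 2, "VIP X1600 module": 4}

def plan(devices):
    """devices: {model: unit count}. Returns (total cameras, LUNs needed)."""
    cams = sum(CAMERAS_PER_IP[m] * n for m, n in devices.items())
    luns = sum(devices.values())   # each unit/module has its own IP address
    return cams, luns

# Fully loaded VIP X1600: 4 modules -> 16 cameras, 4 IP addresses, 4 LUNs.
print(plan({"VIP X1600 module": 4}))  # (16, 4)
```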
Design parameters
There are three types of parameters that should be taken into account when applying an iSCSI array:
1- Number of iSCSI sessions
(communication sessions between an IP encoder and the iSCSI array, or between a playback client and the iSCSI array).
In direct recording (not managed by VRM) every encoder has one iSCSI session open to one iSCSI array.
2- Max data rate in Mb/s
3- The Maximum necessary recording storage
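The third parameter can be estimated from the stream's data rate and the required retention time. A small illustrative helper (decimal units assumed; real sizing should also account for overhead):

```python
# Illustrative sizing helper: storage needed for continuous recording at a
# given data rate over a given retention period (decimal units).
def required_storage_gb(data_rate_mbps, retention_days):
    seconds = retention_days * 24 * 3600
    megabits = data_rate_mbps * seconds
    return megabits / 8 / 1000    # Mb -> MB -> GB

# A single 1 Mbps stream kept for 10 days:
print(round(required_storage_gb(1, 10), 1))  # 108.0 (GB)
```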

iSCSI with and without VRM


iSCSI Only
 No software license
 Fixed storage per camera
 No load balancing
 No failover
 2 TB LUN limit per IP address

iSCSI with VRM


 Software license
 Flexible storage per camera
 Video distributed across all storage
 Automatic iSCSI & VRM failover
 No space limit (max LUN size is 2TB)
Protecting against failure scenarios
 iSCSI RAID fails
 VRM redirects IP streams to 1 GB blocks on another dedicated iSCSI disk array, or to existing
disk arrays
 VRM server fails
 Recording continues until the IP camera has consumed its buffer of 128 pointers to 1 GB
blocks
 A 1 Mbps camera has about 10 camera-days of storage left.
 Each VRM server can have a backup VRM server for redundancy
 Network fails
 Use direct-attached iSCSI for independence from network interruptions
 Direct-attached iSCSI uses a network-attached iSCSI array for failover
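The "about 10 camera-days" figure above can be sanity-checked with a quick calculation (illustrative; decimal gigabytes assumed, which is why the result comes out slightly above the quoted value):

```python
# Rough check of the buffer figure: 128 pointers to 1 GB blocks, consumed
# by a 1 Mbps camera stream.
GB_PER_DAY_AT_1MBPS = 1 * 24 * 3600 / 8 / 1000   # Mb/s -> GB per day
print(round(128 / GB_PER_DAY_AT_1MBPS, 1))        # 11.9 (days)
```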

What can stop recording?


• Two drives fail in a RAID 4 or RAID 5 configuration.
• One drive fails in a RAID 4 or 5 configuration and by mistake a good drive is removed.
• Three drives in a RAID-DP configuration fail.
• The number of iSCSI sessions is exceeded.
• The max data rate is exceeded.
• Storage space is insufficient and the minimum retention time is set too long.

Troubleshooting 1
 What happens if the VRM server stops?
 If the VRM server stops, then recording will continue until the assigned storage blocks are used up.
 If the server comes back online, everything works as normal again and block assignment is done by
the VRM.
 If the VRM server is offline, the archive player does not start up, because playback runs via the
VRM server.

Troubleshooting 2
 What happens if the VRM server fails and the physical server (HW) needs to be replaced?
 New hardware
 Server software needs to be reinstalled
 Licenses need to be transferred (help of support-team)
 Cameras need to be assigned again to the new server.
 iSCSI arrays need to be added to the server.

Note:
“Restore database” needs to be checked in order to use the existing recordings. If an iSCSI array was already
formatted, the LUN will appear in the device tree as “Locked by …….”. This has no influence on the
performance of the LUN. The VRM server is able to distribute the 1 GB blocks on this LUN as on any other LUN.

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Nested RAID levels

Nested RAID levels, also known as hybrid RAID, combine two or more of the standard RAID levels (where
"RAID" stands for "redundant array of independent disks") to gain performance, additional redundancy or
both, as a result of combining properties of different standard RAID layouts.[1][2]

Nested RAID levels are usually numbered using a series of numbers, where the most commonly used levels
use two numbers. The first number in the numeric designation denotes the lowest RAID level in the "stack",
while the rightmost one denotes the highest layered RAID level; for example, RAID 50 layers the data striping
of RAID 0 on top of the distributed parity of RAID 5. Nested RAID levels include RAID 01, RAID 10, RAID 100,
RAID 50 and RAID 60, which all combine data striping with other RAID techniques; as a result of the layering
scheme, RAID 01 and RAID 10 represent significantly different nested RAID levels.[3]


RAID 01 (RAID 0+1)

A nested RAID 01 configuration


A hybrid RAID 01 configuration

RAID 01, also called RAID 0+1, is a RAID level using a mirror of stripes, achieving both replication and sharing
of data between disks. The usable capacity of a RAID 01 array is the same as in a RAID 1 array made of the
same drives, in which one half of the drives is used to mirror the other half: (N/2) × S_min, where N is the total
number of drives and S_min is the capacity of the smallest drive in the array.

At least four disks are required in a standard RAID 01 configuration, but larger arrays are also used.
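That capacity rule, (N/2) times the smallest drive, is easy to express in code (an illustrative sketch):

```python
# Usable capacity of a RAID 01 (or RAID 10) array: half the drives hold
# mirrors, and the segment size is capped by the smallest drive.
def raid01_capacity(drive_sizes_gb):
    n = len(drive_sizes_gb)
    return (n // 2) * min(drive_sizes_gb)

print(raid01_capacity([250, 250, 400, 400]))  # 500 (GB usable)
```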

RAID 03 (RAID 0+3)

A typical RAID 03 configuration

RAID 03, also called RAID 0+3 and sometimes RAID 53, is similar to RAID 01 with the exception that byte-level
striping with dedicated parity is used instead of mirroring.
RAID 10 (RAID 1+0)

A typical RAID 10 configuration

RAID 10, also called RAID 1+0 and sometimes RAID 1&0, is similar to RAID 01 with the exception that the two
standard RAID levels used are layered in the opposite order; thus, RAID 10 is a stripe of mirrors.

RAID 10, as recognized by the storage industry association and as generally implemented by RAID
controllers, is a RAID 0 array of mirrors, which may be two- or three-way mirrors, and requires a minimum of
four drives. However, a nonstandard definition of "RAID 10" was created for the Linux MD driver; Linux
"RAID 10" can be implemented with as few as two disks. Implementations supporting two disks such as Linux
RAID 10 offer a choice of layouts. Arrays of more than four disks are also possible.

According to manufacturer specifications and official independent benchmarks, in most cases RAID 10
provides better throughput and latency than all other RAID levels except RAID 0 (which wins in throughput).
Thus, it is the preferable RAID level for I/O-intensive applications such as database, email, and web servers,
as well as for any other use requiring high disk performance.

RAID 50 (RAID 5+0)

A typical RAID 50 configuration. A1, B1, etc. each represent one data block; each column represents one
disk; Ap, Bp, etc. each represent parity information for each distinct RAID 5 and may represent different
values across the RAID 5 (that is, Ap for A1 and A2 can differ from Ap for A3 and A4).
RAID 50, also called RAID 5+0, combines the straight block-level striping of RAID 0 with the distributed parity
of RAID 5. As a RAID 0 array striped across RAID 5 elements, a minimal RAID 50 configuration requires six
drives. As an example, three collections of 120 GB RAID 5 sets can be striped together to make
720 GB of total storage space.
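The arithmetic of this example can be sketched as follows (illustrative Python, not controller code):

```python
# RAID 50 capacity: each RAID 5 set loses one drive's worth to parity,
# and the sets are striped together.
def raid5_capacity(n_drives, drive_gb):
    return (n_drives - 1) * drive_gb

def raid50_capacity(n_sets, drives_per_set, drive_gb):
    return n_sets * raid5_capacity(drives_per_set, drive_gb)

# Three RAID 5 sets of three 120 GB drives each:
print(raid50_capacity(3, 3, 120))  # 720 (GB)
```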

One drive from each of the RAID 5 sets can fail without loss of data; for example, a RAID 50 configuration
including three RAID 5 sets can tolerate a maximum of three drive failures (one per set). Because the reliability
of the system depends on quick replacement of the bad drive so the array can rebuild, it is common to
include hot spares that can immediately start rebuilding the array upon failure. However, this does not
address the issue that the array is put under maximum strain reading every bit to rebuild the array at the
time when it is most vulnerable.

RAID 50 improves upon the performance of RAID 5 particularly during writes, and provides better fault
tolerance than a single RAID level does. This level is recommended for applications that require high fault
tolerance, capacity and random access performance. As the number of drives in a RAID set increases, and as
the capacity of the drives increases, the fault-recovery time lengthens correspondingly, because the interval
for rebuilding the RAID set increases.

RAID 60 (RAID 6+0)

A typical RAID 60 configuration consisting of two sets of four drives each

RAID 60, also called RAID 6+0, combines the straight block-level striping of RAID 0 with the distributed double
parity of RAID 6, resulting in a RAID 0 array striped across RAID 6 elements. It requires at least eight disks.

RAID 100 (RAID 10+0)


A typical RAID 100 configuration

RAID 100, sometimes also called RAID 10+0, is a stripe of RAID 10s. This is logically equivalent to a wider
RAID 10 array, but is generally implemented using software RAID 0 over hardware RAID 10. Being "striped
two ways", RAID 100 is described as a "plaid RAID".

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Selecting the Best RAID Level
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

When you create arrays (or logical drives) for the Sun StorageTek SAS RAID External HBA, you can assign a
RAID level to protect data.

Each RAID level offers a unique combination of performance and redundancy. RAID levels also vary by the
number of disk drives they support.

This appendix describes the RAID levels supported by the HBA, and provides a basic overview of each to help
you select the best level of protection for your data storage.
The appendix contains the following sections:

 Understanding Drive Segments


 Nonredundant Arrays (RAID 0)
 RAID 1 Arrays
 RAID 1 Enhanced Arrays
 RAID 10 Arrays
 RAID 5 Arrays
 RAID 5EE Arrays
 RAID 50 Arrays
 RAID 6 Arrays
 RAID 60 Arrays
 Selecting the Best RAID Level
 Migrating RAID Levels

Understanding Drive Segments


A drive segment is a disk drive or portion of a disk drive that is used to create an array. A disk drive can
include both RAID segments (segments that are part of an array) and available segments. Each segment can
be part of only one logical device at a time. If a disk drive is not part of any logical device, the entire disk is an
available segment.

Nonredundant Arrays (RAID 0)


An array with RAID 0 includes two or more disk drives and provides data striping, where data is distributed
evenly across the disk drives in equal-sized sections. However, RAID 0 arrays do not maintain redundant
data, so they offer no data protection.
Compared to an equal-sized group of independent disks, a RAID 0 array provides improved I/O performance.
Drive segment size is limited to the size of the smallest disk drive in the array. For instance, an array with two
250 GB disk drives and two 400 GB disk drives can create a RAID 0 drive segment of 250 GB, for a total of
1000 GB for the volume, as shown in this figure.
FIGURE F-1 RAID 0 Array

RAID 1 Arrays
A RAID 1 array is built from two disk drives, where one disk drive is a mirror of the other (the same data is
stored on each disk drive). Compared to independent disk drives, RAID 1 arrays provide improved
performance, with twice the read rate of single disks and a write rate equal to that of a single disk. However,
capacity is only 50 percent of that of independent disk drives.
If the RAID 1 array is built from different-sized disk drives, the drive segment size is the size of the smaller
disk drive, as shown in this figure.

FIGURE F-2 RAID 1 Array


RAID 1 Enhanced Arrays

A RAID 1 Enhanced (RAID 1E) array--also known as a striped mirror--is similar to a RAID 1 array except that
data is both mirrored and striped, and more disk drives can be included. A RAID 1E array can be built from
three or more disk drives.

In this example, the large bold numbers represent the striped data, and the smaller, non-bold numbers
represent the mirrored data stripes.

FIGURE F-3 RAID 1 Enhanced Array

RAID 10 Arrays
A RAID 10 array is built from two or more equal-sized RAID 1 arrays. Data in a RAID 10 array is both striped
and mirrored. Mirroring provides data protection, and striping improves performance.

Drive segment size is limited to the size of the smallest disk drive in the array. For instance, an array with two
250 GB disk drives and two 400 GB disk drives can create two mirrored drive segments of 250 GB, for a total
of 500 GB for the array, as shown in this figure.

FIGURE F-4 RAID 10 Array


RAID 5 Arrays
A RAID 5 array is built from a minimum of three disk drives, and uses data striping and parity data to provide
redundancy. Parity data provides data protection, and striping improves performance.
Parity data is an error-correcting redundancy that’s used to re-create data if a disk drive fails. In RAID 5
arrays, parity data (represented by Ps in the next figure) is striped evenly across the disk drives with the
stored data.
Drive segment size is limited to the size of the smallest disk drive in the array. For instance, an array with two
250 GB disk drives and two 400 GB disk drives can contain 750 GB of stored data and 250 GB of parity data,
as shown in this figure.
FIGURE F-5 RAID 5 Array
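The figure's arithmetic can be sketched as (illustrative Python):

```python
# RAID 5 capacity: the segment size is capped by the smallest drive, and
# one segment's worth of space across the array holds parity.
def raid5_usable(drive_sizes_gb):
    seg = min(drive_sizes_gb)
    n = len(drive_sizes_gb)
    return seg * (n - 1), seg   # (stored data GB, parity GB)

# Two 250 GB and two 400 GB drives, as in the example above:
print(raid5_usable([250, 250, 400, 400]))  # (750, 250)
```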

RAID 5EE Arrays


A RAID 5EE array--also known as a hot space--is similar to a RAID 5 array except that it includes a distributed
spare drive and must be built from a minimum of four disk drives.

Unlike a hot-spare, a distributed spare is striped evenly across the disk drives with the stored data and parity
data, and can’t be shared with other logical disk drives. A distributed spare improves the speed at which the
array is rebuilt following a disk drive failure.

A RAID 5EE array protects data and increases read and write speeds. However, capacity is reduced by two
disk drives’ worth of space, which is for parity data and spare data.
In this figure, S represents the distributed spare, P represents the distributed parity data.

FIGURE F-6 RAID 5EE Array

RAID 50 Arrays
A RAID 50 array is built from six to forty-eight disk drives configured as two or more RAID 5 arrays, and
stripes stored data and parity data across all disk drives in both RAID 5 arrays. (For more information, see
RAID 5 Arrays.)

The parity data provides data protection, and striping improves performance. RAID 50 arrays also provide
high data transfer speeds.

Drive segment size is limited to the size of the smallest disk drive in the array. For example, three 250 GB disk
drives and three 400 GB disk drives comprise two equal-sized RAID 5 arrays with 500 GB of stored data and
250 GB of parity data. The RAID 50 array can therefore contain 1000 GB (2 x 500 GB) of stored data and 500
GB of parity data.

In this figure, P represents the distributed parity data.


FIGURE F-7 RAID 50 Array

RAID 6 Arrays
A RAID 6 array--also known as dual drive failure protection--is similar to a RAID 5 array because it uses data
striping and parity data to provide redundancy. However, RAID 6 arrays include two independent sets of
parity data instead of one. Both sets of parity data are striped separately across all disk drives in the array.

RAID 6 arrays provide extra protection for data because they can recover from two simultaneous disk drive
failures. However, the extra parity calculation slows performance (compared to RAID 5 arrays).

RAID 6 arrays must be built from at least four disk drives. Maximum stripe size depends on the number of
disk drives in the array.
FIGURE F-8 RAID 6 Array

RAID 60 Arrays
Similar to a RAID 50 array (see RAID 50 Arrays), a RAID 60 array--also known as dual drive failure protection--
is built from at least eight disk drives configured as two or more RAID 6 arrays, and stripes stored data and
two sets of parity data across all disk drives in the RAID 6 arrays.
Two sets of parity data provide enhanced data protection, and striping improves performance. RAID 60
arrays also provide high data transfer speeds.

Selecting the Best RAID Level


Use this table to select the RAID levels that are most appropriate for the logical drives on your storage space,
based on the number of available disk drives and your requirements for performance and reliability.
TABLE F-1 Selecting the Best RAID Level

RAID Level   Redundancy   Disk Drive Usage   Read Performance   Write Performance   Built-in Hot-Spare   Minimum Disk Drives
RAID 0       No           100%               www                www                 No                   2
RAID 1       Yes          50%                ww                 ww                  No                   2
RAID 1E      Yes          50%                ww                 ww                  No                   3
RAID 10      Yes          50%                ww                 ww                  No                   4
RAID 5       Yes          67 - 94%           www                w                   No                   3
RAID 5EE     Yes          50 - 88%           www                w                   Yes                  4
RAID 50      Yes          67 - 94%           www                w                   No                   6
RAID 6       Yes          50 - 88%           ww                 w                   No                   4
RAID 60      Yes          50 - 88%           ww                 w                   No                   8

Disk drive usage, read performance, and write performance depend on the number of drives in the logical
drive. In general, the more drives, the better the performance.

Migrating RAID Levels


As your storage space changes, you can migrate existing RAID levels to new RAID levels that better meet
your storage needs. You can perform these migrations through the Sun StorageTek RAID Manager software.
For more information, see the Sun StorageTek RAID Manager Software User’s Guide. TABLE F-2 lists the
supported RAID level migrations.
TABLE F-2 Supported RAID Level Migrations

Existing RAID Level   Supported Migration RAID Level

Simple volume         RAID 1
RAID 0                RAID 5, RAID 10
RAID 1                Simple volume, RAID 0, RAID 5, RAID 10
RAID 5                RAID 0, RAID 5EE, RAID 6, RAID 10
RAID 6                RAID 5
RAID 10               RAID 0, RAID 5

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

RAID 1E types
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

RAID 1E types

There are several types of RAID 1E: RAID 1E near, RAID 1E interleaved and RAID 1E far. Usually, the RAID 1E type
is not specified in controller documentation. Any type of RAID 1E can survive a failure of one member disk
or of any number of nonadjacent disks (more info on RAID 1E failures). Let's consider each RAID 1E type in
detail.

RAID 1E near

RAID 1E near can be created only using an odd number of disks. If the number of disks is even, RAID 1E near turns
into RAID 10. It is possible to create RAID 1E near using md-raid (Linux).

RAID 1E interleaved

RAID 1E interleaved can be created by a controller, e.g. IBM or Promise. This RAID 1E type can be
implemented using either an even or an odd number of member disks.

RAID 1E far

This RAID 1E type can be created using any number of disks. In fact, RAID 1E far is the same as RAID 10 far. As
of now, ReclaiMe Free RAID Recovery cannot automatically recover the configuration of RAID 1E far. To recover
such a RAID you should use the host protected area (HPA) to cut off half of each member disk and then recover it as RAID 0.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The differences between RAID 1E, RAID 5E, RAID 5EE
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RAID 1E, where the letter E stands for "Enhanced", is an extended version of the RAID 1 mirror and
belongs to the so-called non-standard RAID levels. This mode is also called striped mirroring,
enhanced mirroring, or hybrid mirroring.
This array is a combination of RAID 1 and RAID 0.
Data is divided into strips, as in RAID 0, and spread across all disks in the array, but each set of strips
is assigned a copy (as in RAID 1) which is recorded with a shift of one block/disk. RAID 1E requires a
minimum of three disks, and with an even number of disks the distribution of stripes is identical to
RAID 10. The advantage of RAID 1E over RAID 1 is better performance, though this is mainly due to the
larger number of disks.
This advantage increases if the controller does not support simultaneous reads from several drives
in RAID 1, or does so poorly. The drawbacks are a higher minimum number of disks needed to build
the array, the 50% reduction in usable space, and the fact that a comparable number of disks in RAID 1
can survive more failures (a three-disk RAID 1E survives one, a three-disk RAID 1 survives two).
Enhanced mirroring is also, in practice, implemented only in newer models of hardware controllers.
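The shifted-mirror layout described above can be illustrated with a short sketch that prints the block placement for a RAID 1E (near-style) array. This is our own illustration, not tied to any particular controller:

```python
def raid1e_layout(n_disks, n_blocks):
    """Lay out data blocks and their mirror copies, RAID 1E style: each
    stripe of data blocks is followed by a copy shifted by one disk."""
    rows = []
    block = 0
    while block < n_blocks:
        stripe = [f"D{block + i}" for i in range(n_disks)]
        rows.append(stripe)                     # data stripe
        rows.append(stripe[-1:] + stripe[:-1])  # mirror stripe, shifted by one disk
        block += n_disks
    return rows

# Three-disk RAID 1E: every block and its copy land on different disks.
for row in raid1e_layout(3, 6):
    print(" ".join(row))
```

Reading the output column by column shows why one disk (and only nonadjacent disks) may fail: each disk shares every block it holds with exactly its two neighbors.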

In theory, the fault tolerance of RAID 1E with an even number of disks is similar to RAID 10; that is, up to
half of the disks may fail.
As our tests show, this is not entirely true: on the Adaptec 6405, during a simulated failure of two of the
four drives in a RAID 1E array, the array passed into a state of total failure regardless of which drives
were removed.
For configurations with an odd number of disks, up to (n-1)/2 drives may fail, where n is the number of
disks in the array; for a three-disk array that gives one disk, and for a five-disk array, two.
Failed disks must not be adjacent to each other, which reduces the number of multi-disk failure
combinations the array can survive.
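As a rough sketch of the theoretical limits above (our own helper, covering both the even and odd cases):

```python
def raid1e_max_failures(n_disks):
    """Theoretical maximum number of failed (nonadjacent) disks a RAID 1E
    array can survive: n/2 for even n, (n-1)/2 for odd n — i.e. n // 2."""
    if n_disks < 3:
        raise ValueError("RAID 1E requires at least 3 disks")
    return n_disks // 2

print(raid1e_max_failures(3))  # 1
print(raid1e_max_failures(5))  # 2
print(raid1e_max_failures(4))  # 2 (theoretical; see the Adaptec 6405 test caveat)
```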
RAID 5EE is an enhanced form of RAID 5E, although the two are often treated as equivalent.
Both extend a RAID 5 array with a hot-spare drive built into its structure, and both are non-standard
RAID levels. RAID 5E and RAID 5EE therefore require a minimum of four drives.
In theory, they can tolerate the failure of up to two drives. In practice there is a catch: the drives may
only fail one at a time, with an interval between failures long enough for so-called array compression.
The point is that when a disk fails in a 5E or 5EE array, the built-in spare capacity is used to compress
the array into an ordinary RAID 5.
Only after that can a second drive fail, because an ordinary RAID 5 tolerates a single failure.
Depending on the implementation, after compression (i.e., after one disk failure) the array either
remains RAID 5 permanently, or is expanded back from RAID 5 to 5E/5EE once the failed drive is
replaced. The latter is the case on the Adaptec controller being tested.
The main advantage of these arrays over RAID 5 is, of course, the built-in hot-spare drive.
The second advantage is higher performance than RAID 5, at the cost of an increased minimum
number of disks (four) and a smaller usable capacity (for the same number of disks as RAID 5).

Both levels, as already mentioned, have a spare disk built into their structure,
but there is a difference between them.
RAID 5E distributes its blocks at the beginning of all component disks (like RAID 5 on four disks), leaving
free space at the end of each disk which, in total, equals the size of one disk. This is the built-in
hot-spare drive.
By spreading the strips over four disks instead of three, as would be the case for RAID 5 with an ordinary
(dedicated or global) hot spare, the performance of such an array is simply higher, just as RAID 0 on four
disks is faster than on three.
The fact that all "working" areas sit at the start of the disks has a marginal but measurable positive
impact on the speed of the array, because the slowest area of a regular HDD, at the end of the platters,
is reserved as the spare.
The main decision when choosing between RAID 5E/5EE and RAID 5 on three drives with a dedicated hot spare
is whether performance matters more, or the fact that a dedicated hot-spare drive is at rest, may even be
completely powered down by the controller, and thus consumes nothing.

RAID 5EE differs from RAID 5E in that the spare area, instead of sitting at the end of the component
disks, is distributed in the form of strips next to the parity strips (as in RAID 6, but with strips of
empty space instead of a second set of parity strips).

This has two implications. Compression of such an array is faster than with RAID 5E,

but the speed benefit described in the previous paragraph is absent. Depending on the
implementation, both extended RAID 5 types allow a maximum of eight to sixteen drives,
although the diminishing speed gain and the I/O effort associated with compressing and expanding the
array push the practical maximum toward the lower value.

In addition, both arrays allow only one logical drive to be created on them.

The last disadvantage of these arrays is their limited availability in controllers, similarly to RAID 1E.
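The difference in spare placement between 5E and 5EE can be visualized with a short sketch. This is a simplified model of our own (real controllers rotate parity according to their own schemes); `D` is data, `P` parity, `-` empty spare space:

```python
def raid5e_layout(n_disks, n_stripes, spare_stripes):
    """RAID 5E: ordinary rotated-parity RAID 5 stripes, with the spare
    capacity left as empty rows at the END of every disk."""
    rows = [["P" if d == (n_disks - 1 - s % n_disks) else "D"
             for d in range(n_disks)] for s in range(n_stripes)]
    rows += [["-"] * n_disks for _ in range(spare_stripes)]  # spare at the end
    return rows

def raid5ee_layout(n_disks, n_stripes):
    """RAID 5EE: an empty spare strip rotates through every stripe next to
    the parity strip, like a second parity set in RAID 6."""
    rows = []
    for s in range(n_stripes):
        row = ["D"] * n_disks
        row[(n_disks - 1 - s) % n_disks] = "P"  # rotated parity strip
        row[(n_disks - s) % n_disks] = "-"      # rotated spare strip
        rows.append(row)
    return rows

for label, layout in (("5E", raid5e_layout(4, 4, 2)), ("5EE", raid5ee_layout(4, 4))):
    print(label)
    for row in layout:
        print(" ".join(row))
```

In the 5E output the spare is a solid block of empty rows at the bottom; in the 5EE output every stripe carries one data-free strip, which is why compression after a failure is faster there.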


XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Promise
Benefits of Using RAID 50 or 60 in Single High Capacity
RAID Array Volumes Greater than 16 Disk Drives
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1- Abstract
This document addresses the misconception that there is a benefit to creating single RAID 5
or RAID 6 array volumes of more than 16 SAS/SATA disk drives.

It explains the logic behind using fewer than 16 disk drive spindles in a RAID 5 or RAID 6 array, as well as
the benefit of using RAID 50 or RAID 60 for RAID arrays with more than 16 disk drive spindles.

All of these RAID levels accomplish the same objective: data protection.

2- Discussion
The reason for the 16 drive design maximum in the Promise V3 and V4 RAID Engine for RAID 5 and 6 is due
to the increased probability of disk drive spindle or bad block failures when multiple drives are configured in
the same RAID 5 or RAID 6 array unit. The V4 and V3 RAID Engine are currently used in our VTrak E-Class,
VTrak M-Class and SuperTrak EX line of products.

The type of failures can range from simple bad blocks to catastrophic failures, where the disk drive is no
longer operational, such as motor failure or head crash.

Although these types of issues are less likely to happen with current, constantly improving SAS/SATA
drives, bad blocks are not uncommon, especially on larger-capacity disk drives.

The larger the capacity of a disk drive, the greater the probability of hitting a bad block.

Contributing factors include platter size, density, or both. Since RAID 5 tolerates only a single drive
failure (remaining operational in Critical mode), losing an additional drive results in data loss, i.e. an
Offline RAID 5 array.

RAID 6 allows for a two drive failure before resulting in a Critical mode thus reducing the chances of the
array being in an Offline state.

Although RAID 6 has its benefits, RAID 50 and RAID 60 are much more redundant in that the more axles
(more explanation on axles in the next section) in an array the more drives can fail before an array will go
into an Offline state.

Using RAID 50 and 60 also allows the user to surpass the 16 disk drive limitations of RAID 5 and RAID 6.

If you are creating a single drive volume of up to 60 disk drives, RAID 50 or 60 is a better alternative.

Below Figure 1 displays the RAID Level and number of drives that are configurable.
Expansion units are only applicable on E-Class product configurations with use of J-Class.
Figure 1. Depicts possible RAID Level Combinations with minimum drive requirements

3- E-Class RAID Level Support of Interest

RAID 5 – Block and Parity Stripe

RAID 5 organizes block data and parity data across the physical drives. Generally, RAID 5
tends to exhibit lower random write performance due to the heavy workload of parity
recalculation for each I/O.
RAID 5 is generally considered to be the most versatile RAID level.

Figure 2. RAID 5 Stripes all drives with data and parity information

The capacity of a RAID 5 disk array is the smallest drive size multiplied by the number of drives
less one. Hence, a RAID 5 disk array with (4) 100 GB hard drives will have a capacity of 300 GB.
A disk array with (8) 120 GB hard drives and (1) 100 GB hard drive will have a capacity of 800 GB.
RAID 5 requires a minimum of three physical drives and a maximum of 16.

Recommended applications: File and Application Servers; WWW, E-mail, News servers,
Intranet Servers

RAID 6 – Block and Double Parity Stripe

RAID 6 stores dual parity data rotated across the physical drives along with the block data.
A RAID 6 disk logical drive can continue to accept I/O requests when any two physical drives
fail.

Figure 3. RAID 6 Block and Double Parity Stripe

The total capacity of a RAID 6 disk logical drive is the smallest physical drive times the number of
physical drives, minus two.

Hence, a RAID 6 disk logical drive with (6) 100 GB hard drives will have a capacity of 400 GB.

A disk logical drive with (4) 100 GB hard drives will have a capacity of 200 GB. RAID 6 becomes
more capacity efficient in terms of physical drives as the number of physical drives increases.
RAID 6 offers double fault tolerance: your logical drive remains available when up to two
physical drives fail. RAID 6 is generally considered to be the safest RAID level.
RAID 6 requires a minimum of four physical drives and a maximum of 16.

Recommended applications: Accounting, financial, and database servers; any application
requiring very high availability.

RAID 50 – (Nested RAID) RAID 5+0 – Striping of Distributed Parity

RAID 50 combines both RAID 5 and RAID 0 features. Data is striped across disks as in RAID 0, and
it uses distributed parity as in RAID 5.

RAID 50 provides data reliability, good overall performance, and supports larger volume sizes.

Figure 4. RAID 50 Striping of Distributed Parity disk arrays

RAID 50 also provides high reliability because data is still available even if multiple disk drives fail
(one in each axle).

The greater the number of axles, the greater the number of disk drives that can fail without the
RAID 50 array going offline. RAID 50 arrays consist of six or more physical drives.

Recommended applications: File and Application Servers, Transaction Processing, Office applications
with many users accessing small files.
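The capacity rules quoted above can be expressed as a small helper. This is our own sketch (sizes in GB, smallest-drive semantics as described): RAID 5 loses one drive of capacity, RAID 6 loses two, and the nested levels lose that many per axle.

```python
def raid_capacity(drive_sizes_gb, level, axles=1):
    """Usable capacity in GB, using the smallest drive as the per-drive size.
    RAID 5 loses one drive per array, RAID 6 loses two;
    RAID 50/60 lose one/two drives per axle."""
    n = len(drive_sizes_gb)
    smallest = min(drive_sizes_gb)
    lost = {"RAID 5": 1, "RAID 6": 2, "RAID 50": axles, "RAID 60": 2 * axles}[level]
    return smallest * (n - lost)

print(raid_capacity([100] * 4, "RAID 5"))            # 300
print(raid_capacity([120] * 8 + [100], "RAID 5"))    # 800
print(raid_capacity([100] * 6, "RAID 6"))            # 400
print(raid_capacity([100] * 8, "RAID 60", axles=2))  # 400
```

The first three calls reproduce the worked examples in the RAID 5 and RAID 6 sections above.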

RAID 50 Axles
When you create a RAID 50, you must specify the number of axles. An axle refers to a single RAID
5 array that is striped with other RAID 5 arrays to make RAID 50.

An axle can have three or more physical drives, depending on the number of physical
drives in the array (E-Class has been tested up to 60 physical drives, 1 E310f + 4 J300s).

Example:

Although not depicted in the table below, in a 60 disk drive configuration you could set up to 16
drives per axle for a total of 4 axles. Three of the four axles would have 16 disk drives and one axle
would have 12 disk drives, resulting in an unbalanced RAID 50 configuration.

The chart below shows RAID 50 arrays with 6 to 16 disk drives, the available number of axles, and the
resulting distribution of disk drives on each axle. The VTrak attempts to distribute the number of disk drives
equally among the axles but in some cases, one axle will have more disk drives than another.
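The even-as-possible distribution described above can be sketched like this (our own illustration of the behavior, not Promise firmware code):

```python
def distribute_drives(n_drives, n_axles):
    """Spread drives across axles as evenly as possible; when the count does
    not divide evenly, earlier axles get one extra drive."""
    base, extra = divmod(n_drives, n_axles)
    return [base + (1 if i < extra else 0) for i in range(n_axles)]

print(distribute_drives(60, 4))  # [15, 15, 15, 15] — balanced
print(distribute_drives(14, 4))  # [4, 4, 3, 3] — one axle pair runs short
```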

RAID 60 – (Nested RAID) RAID 6+0 – Striping of Double Parity

RAID 60 combines both RAID 6 and RAID 0 features. Data is striped across disks as in RAID 0, and
it uses double distributed parity as in RAID 6.

RAID 60 provides data reliability, good overall performance and supports larger volume sizes.
Figure 5. RAID 60 Striping of Double Distributed Parity disk arrays

RAID 60 also provides very high reliability because data is still available even if multiple disk
drives fail (two in each axle).
The greater the number of axles, the greater the number of disk drives that can fail without the
RAID 60 array going offline.
RAID 60 arrays consist of eight or more physical drives.

Recommended applications: Accounting, financial, and database servers; any application


requiring very high availability.

RAID 60 Axles
When you create a RAID 60, you must specify the number of axles.

An axle refers to a single RAID 6 array that is striped with other RAID 6 arrays to make RAID 60.

An axle can have four or more physical drives, depending on the number of physical drives in
the array (E-Class has been tested up to 60 physical drives, 1 E310f + 4 J300s).

Example:
Although not depicted in the table below, in a 60 disk drive configuration you could set up to 16
drives per axle for a total of 4 axles. Three of the four axles would have 16 disk drives and one axle
would have 12 disk drives, resulting in an unbalanced RAID 60 configuration.

The chart below shows RAID 60 arrays with 8 to 16 physical drives, the available number of axles,
and the resulting distribution of disk drives on each axle. VTrak attempts to distribute the number
of disk drives equally among the axles but in some cases, one axle will have more disk drives than
another.
4- Benefits

RAID 5

Advantages:
- High read data transaction rate
- Medium write data transaction rate
- Good aggregate transfer rate

Disadvantages:
- Disk failure has a medium impact on throughput
- 16-drive limitation per single RAID 5 volume

Recommended Applications for RAID 5:
- File and Application servers
- WWW, E-mail, and News servers
- Intranet servers
- Most versatile RAID level

RAID 6

Advantages:
- High read data transaction rate
- Medium write data transaction rate
- Good aggregate transfer rate
- Safest RAID level

Disadvantages:
- High disk overhead – equivalent of two drives used for parity
- Slightly lower performance than RAID 5
- 16-drive limitation per single RAID 6 volume

Recommended Applications for RAID 6:
- Accounting and Financial
- Database Servers
- Any application requiring very high availability
RAID 50

Advantages:
- Greater than 16 disk drives in a single volume for high capacity (up to 60 disk drives using J300s)
- High read data transaction rate
- Medium write data transaction rate
- Good aggregate transfer rate
- High reliability

Disadvantages:
- Higher disk overhead than RAID 5

Recommended Applications for RAID 50:
- File and Application servers
- Transaction Processing
- Office applications with many users accessing small files
- High Capacity Data Storage

RAID 60

Advantages:
- Greater than 16 disk drives in a single volume for high capacity (up to 60 disk drives using J300s)
- High read data transaction rate
- Medium write data transaction rate
- Good aggregate transfer rate
- Safest RAID level

Disadvantages:
- High disk overhead – equivalent of two drives used for parity
- Slightly lower performance than RAID 50

Recommended Applications for RAID 60:
- Accounting and Financial
- Database Servers
- Any application requiring very high availability
- High Capacity Data Storage
5- Conclusion

RAID 50 or RAID 60 is better suited than RAID 5 or RAID 6 for high-capacity volume arrays with more
than 16 disk drives in a single RAID configuration, due to the increased probability of disk drive spindle
or bad block failures.

Using RAID 50 in these types of configurations allows for one drive failure per axle.

RAID 60 allows you to lose up to two disks per axle, making RAID 50 and RAID 60 the arrays of choice
for safer, more redundant, high-capacity single RAID volumes with more than 16 disk drives.
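The best-case fault tolerance follows directly from the axle layout; a small sketch of our own:

```python
def max_failures(level, axles):
    """Best-case number of drive failures survivable before the array goes
    offline: one per axle for RAID 50, two per axle for RAID 60 (assuming
    the failures are spread across the axles)."""
    per_axle = {"RAID 50": 1, "RAID 60": 2}[level]
    return per_axle * axles

# A 60-drive array split into 4 axles:
print(max_failures("RAID 50", 4))  # 4 — one per axle, best case
print(max_failures("RAID 60", 4))  # 8 — two per axle, best case
```

The worst case is what the single-level numbers give: a RAID 50 goes offline if the second failure lands in an already-degraded axle, which is why the spread across axles matters.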
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DIVAR IP 7000 RAID Controller
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DIVAR IP 7000 2U Revision 2:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Hard Drives:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

1- Intel(R) Rapid Storage Technology enterprise SATA Controller
OS: 2 x 132.7 GB SSD in a RAID 1 (Mirror) configuration
SSD flash drives are used to fast-boot the operating system (Microsoft Windows
Storage Server 2012 R2 Standard)
Flash drive 1:
Intel SSDSC2BB15 139.75GB Member
Flash drive 2:
Intel SSDSC2BB15 139.75GB Member

Formatted Disk:
Disk 1 (Basic) Online 132.57 GB GPT contains two partitions

Partition 1 (Bootable): OS (C:) 100.00 GB NTFS


Partition 2 (Non-Bootable): Updates (E:) 32.47 GB NTFS
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
2- LSI 3108 MegaRAID Array Controller
RAID 5 (8 x 3 TB Drives Total Capacity 21.828 TB)
SAS RAID Card 8 ports LSI 3108 SAS3 controller

Formatted Disk:
Disk 0 (Basic) Online: Data (D:), formatted capacity 19.0 TB (19558.00 GB), GPT, NTFS
Contains LUNs (virtual or logical drives): 12 x 1.88 TB
D:\vhd_D1 to D:\vhd_D12
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Boot Sequence (Boot Device Priority)
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Choosing the Bootable controller:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DIVAR IP 7000 Revision 2
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
You must configure the BIOS to first boot from the RAID 1 controller
(flash drives SSD cards) [UEFI Hard Disk: Windows Boot Manager] which contains the operating system.

Press Delete key once when the Bosch logo is displayed


(Press DEL to run Setup)
Boot
Boot mode select: [UEFI]
FIXED BOOT ORDER Priorities:

Boot Option #1: [UEFI CD/DVD]


Boot Option #2: [UEFI Hard Disk: Windows Boot Manager] >>> RAID 1 Controller
Boot Option #3: [UEFI USB Key: UEFI: USB 2.0 USB Flash Driver 1100] >>>
Default is Disabled
Boot Option #4: [Disabled]
Boot Option #5: [Disabled]
Boot Option #6: [Disabled]
Boot Option #7: [Disabled]

We don't have the option to boot from the LSI 3108 MegaRAID RAID 5 Array Controller
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DIVAR IP 7000 Revision 1
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Boot > Boot Device Priority
1st Boot Device is [CD/DVD:SS-TEAC DV-]
2nd Boot Device is [SATA-PM-INTEL SSDS] >> Flash drive containing
operating system, Windows Server 2008 R2
Boot > Hard Disk Drives
1st Drive is [SATA:PM-INTEL SSDS] >>> Boot operating system
2nd Drive is [SATA:4M-ATP Veloci] >>> Disk-On-Module (System Recovery)

We don't have the option to boot from the LSI 3108 MegaRAID SAS-NFI RAID 5 Array Controller
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
To configure the RAID controllers:

RAID 1 (Mirror) > Press <Ctrl> <S> during the BIOS power-on self-test
to enter the Setup Menu.

RAID 5 > On DIVAR IP 7000 revision 2, press <Ctrl> <R> during the BIOS
power-on self-test to run the MegaRAID Configuration Utility; on revision 1,
press <Ctrl> <H> to enter the WebBIOS page.

After Windows Server 2012 has loaded, we can also use the LSI MegaRAID Storage
Manager program to configure the RAID 5 (add drives, remove drives, configure
spare disks, repair hard drive errors, etc.)

We can manually create or modify LUNs and iSCSI Target from the Server Manager.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Read and Write Cache Policies
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1-Write Cache Policies
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Normally, the controller cache should be set to "write back" only when the
controller cache is buffered by a battery.

The cache of the RAID disks should be set to "write through" if no UPS is active.
Either way, both caches have a great influence on the RAID performance.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Write Back >>> You must have a UPS (faster performance)

-Data is stored in the cache and written to the PDs when possible >>>> Promise

-The controller tells your OS the data has been written as soon as it is in
the controller cache (not yet on the PD, the physical disk). Once it is in
cache memory the controller does not need to write to disk straight away and
can perform other tasks. This can speed up reads as well, since reads are
served straight from the data in cache rather than from disk.

-This caching technique improves the subsystem's response time to write requests
by allowing the controller to declare the write operation 'complete' as soon as
the data reaches its cache memory. The controller performs the slower operation
of writing the data to the disk drives at a later time.

-If you lose power and the controller does not have a battery backup,
the writes in cache will be lost, so there is a chance of stale data or
corruption here: the data in the cache (buffer) that has not yet been
flushed to disk is lost. After power comes back, the host will not re-send
the data (because the host was never notified that the data had not been
saved to disk), so the data that was in cache is lost.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Write Through >>> If there is no UPS (slower performance)

-Data is written to the PDs immediately (no write cache) >>>>> Promise

-When the controller receives a write request from the host, it writes the
data to the PDs immediately without storing it in the cache; as soon as it is
saved to disk, the controller notifies (acknowledges to) the host that the
write operation is complete.

-This process is called write-through caching because the data actually passes
through the cache memory (and is stored in it) on its way to the disk drives.

-In case of a power failure, the host will not have received the acknowledgment
from the controller that the data was saved to storage, so after power comes
back the host will re-send the data (because it knows it has not yet been
written to the disk).
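The acknowledgment difference between the two policies can be modeled in a few lines. This is a toy model of our own, not controller firmware:

```python
class Controller:
    """Toy cache model: write-back acknowledges on cache insert; write-through
    acknowledges only after the data has reached the disk."""
    def __init__(self, policy):
        self.policy = policy
        self.cache, self.disk = [], []

    def write(self, data):
        if self.policy == "write-back":
            self.cache.append(data)  # ack immediately, flush to disk later
            return "ack"
        self.disk.append(data)       # write-through: disk first, then ack
        return "ack"

    def power_failure(self):
        self.cache.clear()           # unflushed (dirty) cache data is lost

wb, wt = Controller("write-back"), Controller("write-through")
wb.write("A"); wt.write("A")
wb.power_failure(); wt.power_failure()
print(wb.disk)  # [] — acknowledged data lost (no battery, never flushed)
print(wt.disk)  # ['A'] — data survived: it was on disk before the ack
```

This is exactly the failure the BBU (backup battery unit) protects against: with a charged battery, the write-back cache survives the outage and is flushed on power-up.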
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Dirty-Cache

-'Dirty cache data' refers to user data that has been written into cache memory
but has not yet been flushed to the drives. When there is dirty data in cache,
the battery must be enabled so that the data will not be lost if there is a
power failure.

-When there has been no dirty data in cache for a certain length of time
(the current setting is 5 seconds), the battery is disabled to save power
and prolong battery life.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
2-Read Cache Policies >>>> Promise
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
>NoCache

-Data is always read from the PDs

-The write cache is switched to WriteThru automatically

>ReadCache

-Suitable for small access patterns or random accesses

>ReadAhead

-Suitable for large sequential accesses, e.g. large video files


XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RAID 5 Parameters:
RAID Level: RAID 5
Strip Size: 64 KB
Access Policy: RW
Read Policy: Ahead
Write Policy: Always Write Back
IO Policy: Cached
Drive Cache: NoChange
Disable BGI: No
Select Size: 12.726 TB

Apply the parameters above.

Click on the Accept button.
Click on the Yes button to confirm "Always Write Back" mode.
Click on the Next button.

Message:
Always Write Back policy ensures optimal performance at all times.
Using this policy might result in loss of cached data when a power failure
occurs and there is no backup battery or the battery charge is low.

It is highly recommended to use a backup battery or an uninterruptible power
supply to prevent the loss of cached data in case of a power failure.

Write Through will eliminate risk of losing cached data in case of power failure.
But it may result in slower performance.

Write Back with BBU (Backup Battery Unit) policy enables Write Back caching
when BBU is installed and charged. It provides optimal balance between data
safety and performance.

Are you sure you want to select Always Write Back mode?
Press Yes Then Save the Configuration.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Hard Disk Slot 3 Error
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RAID 5 (data): hard disk in slot 3 shows a red error LED and the server emits
a continuous beep.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Problem description:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
One drive shows a red LED and the server emits a continuous beep.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
This does not affect the booting RAID 1 (Mirror) SSD flash drives,
which contain the boot operating system;
it affects only the RAID 5 data LUNs.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The RAID 5 size is still the same, 19.00 TB, but the array is degraded.
Drive D (Data): 19.0 TB Free 44.6 MB
Total no. of Luns = 11
Drive E (Updates): 32.4 GB
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Steps for problem resolving:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1- Run the LSI MegaRAID utility on the server, under Program Files
of Windows 2012 R2
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
2- Login as administrator
Username: BVRAdmin
Password: GIZA4bosch
or
Username: admin
Password: exsrv90

Note:
You can access the server from Remote Desktop or from the console:
while the BVMS default screen is shown, press CTRL+ALT+DEL, then hold down
the SHIFT key for about 5 seconds while clicking the Switch User option,
and press CTRL+ALT+DEL again.

Total capacity: 21.828 TB


Configured Capacity: 19.100 TB
Unconfigured Capacity: 2.729 TB
Status: Need attention

Logical tab:
Logical Group: 0 RAID 5
Virtual Drive(s):
Virtual Drive: 0, 19.100 TB, Degraded

Note:
It is degraded because there is one unconfigured bad drive in slot # 3.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
3- Right click the drive:
Backplane, Slot: 3 SATA, 2.729 TB, (Foreign) Unconfigured BAD, (512 B).
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
4- Choose Start Locating Drive.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
5- Choose Prepare for Removal.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
6- Remove the bad drive from server.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
7- Re-insert the drive again
(drives are hot-pluggable; this is done while the server is running).
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
8- Right click on the drive Backplane, Slot: 3 SATA, 2.729 TB, Offline (512 B)
and choose Make Drive Online.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
9- Click Confirm and choose Yes. The drive will work
normally and is added back to the backplane; the RAID 5 size
remains 19.100 TB but the status changes from
Degraded to Optimal.

Total capacity: 21.828 TB


Configured Capacity: 21.828 TB
Unconfigured Capacity: 0 TB
Status: Optimal

Logical tab:
Logical Group: 0 RAID 5
Virtual Drive(s):
Virtual Drive: 0, 19.100 TB, Optimal
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
10- Re-start the server.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
11- New RAID 5 logical drive (Virtual drive) status is:

Drive D (Data): 19.0 TB Free 2.38 TB


Luns : 11
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
12- Now you can create or expand luns under drive D.

We will create a new LUN:

d:\vhd_D12 = 1950 GB (the maximum size supported by Microsoft is 2 TB;
the default size from the Bosch factory is 1.88 TB / 1880 GB).

We will expand the existing lun no. 11


Existing size d:\vhd_D11 = 258 GB
New size d:\vhd_D11 = 700 GB

Drive D (Data): 19.0 TB Free 52 GB


Total No. of Luns = 12

Very Important Note:

In case of expanding LUN 11: the iSCSI initiators (VRM and cameras)
are currently using this LUN. To prevent data corruption, first delete
the connection between this LUN and any iSCSI initiator using it.
This is done in the BVMS under VRM > Pools > iSCSI Storage: delete the
LUN with size 258 GB, then save and activate the BVMS configuration.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
13- Re-start the server.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
14- Open BVMS 7.5 Config Client.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
15- Right click on iSCSI Storage and choose Scan Target.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
16- Select the newly added LUN and the expanded LUN:
LUN 12 (newly added)
LUN 11 (expanded)
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
17- Choose Format LUN.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
18- Save configuration and activate.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
System Recovery
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Very Important Note:

1- System recovery will not solve any RAID 5 errors.

2- The system recovery procedure must only be used when there are no RAID 5 problems
and no disk errors present.

3- If you run it while a disk error is present, it will restore the system,
but the RAID 5 (drive D:) remains degraded and the LUNs will not be restored properly;
after you fix the disk error and the RAID 5 is optimal again, you have to manually
create and prepare the LUNs.

4- The DOM (Disk-On-Module) SSD flash card or DVD 1 (7 GB) contains only the operating
system image (bootable drive C:); we cannot recover drive
D: (the 19 TB RAID 5), which contains the LUNs.

5- The DIVAR IP 7000 revision 1 DOM operating system image is Windows Server 2008 R2

6- The DIVAR IP 7000 revision 2 DVD 1 operating system image version 1.01 is Windows Storage Server
2012 R2
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DIVAR IP 7000 Revision 1
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The DIVAR IP 7000 Rev 1 has SATA DOM (Disk-On-Module) for recovery
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
When the Bosch Logo has displayed, quickly press F11 to enter boot device selection menu.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The boot device selection menu is displayed.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Please select boot device:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
SATA: 4PM-INTEL SSDSA2BW080G3
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CD/DVD:SS-TEAC DV-W28S-W
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
SATA: 4PM-ATP SATA DOM
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RAID: (Bus 01 Dev 00) PCI RAID Ad
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Select the SATA: 4PM-ATP Velocity MI SATA DOM entry, then press the Enter key.

Initial Factory Setup (all data on the system will be lost!)
(restores to factory default image and deletes all data on the HDDs)

Initial Setup Disk Array Only Mode

System recovery (Back to factory defaults)
(restores to factory default image; data on the HDDs will not be deleted)

Update a Bosch System Image

Wipe Disks

Show volumes and physical disks

Console

Reboot
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DIVAR IP 7000 Revision 2
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The DIVAR IP 7000 Rev 2 doesn't have a SATA DOM (Disk-On-Module) for recovery;
it uses DVDs for system recovery.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
When the Bosch logo is displayed, quickly press F11 to enter the boot device selection menu.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The boot device selection menu is displayed:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- UEFI CD/DVD >>> Use bootable DVD 1 System Recovery (OS image version 1.01).
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- Windows Boot Manager >>> Flash drive 1: Intel SSDSC2BB15 139.75GB Member
Previously installed Microsoft Windows Storage Server 2012 R2 Standard.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- Windows Boot Manager >>> Flash drive 2: Intel SSDSC2BB15 139.75GB Member
Previously installed Microsoft Windows Storage Server 2012 R2 Standard.
Note:
Flash drive 1 and Flash drive 2 are RAID1 mirrored drives.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- UEFI: USB 2.0 USB Flash Driver1100 >>> Use a USB flash drive containing
the OS image (create an ISO image file from the bootable DVD 1 using PowerISO,
then create a bootable USB disk from that ISO image file using Rufus 2.7).
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- Enter Setup
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
We can use either UEFI CD/DVD or UEFI: USB 2.0 USB Flash Driver1100,
but the USB flash drive is faster than the DVD-ROM drive.
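If you build the bootable USB stick from an ISO rip of DVD 1, it is worth confirming that the image was copied intact before writing it with Rufus. A small Python sketch that hashes a large file in chunks so it can be compared against a hash of the original (the file name shown is hypothetical):

```python
# Compute a SHA-256 checksum of an ISO image so copies can be compared.
# Hashing in chunks avoids loading a multi-GB image into memory at once.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical file name):
# print(sha256_of("DIP7000_Recovery_DVD1.iso"))
```

If the digest of the ripped ISO and the copy on the preparation machine match, the image transferred without corruption.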
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
System Management Utility
DIVAR IP 7000
Revision 2
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Initial Factory Setup (all data on the system will be lost!)
(restores to factory default image and deletes all data on the HDDs)

System recovery (Back to factory defaults)
(restores to factory default image; data on the HDDs will not be deleted)

Wipe Disks

Show volumes and physical disks

Console

Reboot
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
After that, insert DVD 2 (or the other USB flash disk) containing
BVMS 6.5; you can upgrade to BVMS 7.5 later on.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Service and repair
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

The storage system is backed by a 3-year warranty. Issues will be handled according to Bosch Support and
Service guidelines.

The standard Bosch support way of working applies.

The following modules may be replaced on-site in case of a failure without returning the unit:

– Hard drives: Only original Bosch hard drives are supported as replacements;
otherwise the warranty is void. Replacement drives come with the carrier included.
– Power supply: Only original Bosch replacement is supported.
– Fan: Only original Bosch replacement is supported.
– DOM: Disk on Module with the operating system image.
– Chassis w/o hard drives: Fully equipped unit without the hard drives.

Note:
You must provide the purchase order, the invoice, the number of the part to
be replaced, and its serial number, and fill in the form. Any faulty part
must be sent to Bosch, with the shipping cost charged to the freight company
along with the invoice, so that no customs duties are paid when receiving
the new replacement part.
Request for RMA (Return Merchandise Authorization)

Please request an RMA for failed parts from one of the following Bosch RMA contacts.
– RMA Contact AMEC
Bosch ST, RMA Swapstock, 8601 East Cornhusker Hwy, Lincoln, NE 68507 -USA
Phone: +1(402)467-6610
Fax: n.a.
E-mail: [email protected]
Opening Hours: Monday to Friday, 06:00 – 16:30
– RMA Desk APR
Robert Bosch (SEA) Pte Ltd, 11 Bishan Street 21, (level 5, from service lift), Singapore
573943
Phone: +65 6571 2872
Fax: n.a.
Email: [email protected]
Opening Hours: Monday to Friday, 08:30 – 17:45
– RMA contact China
Bosch (Zhuhai) Security Systems Co. Ltd. Ji Chang Bei Road 20#, Qingwan Industrial
Estate; Sanzao Town, Jinwan District, Zhuhai; P.R. China; Postal Code: 519040
Phone: +86 756 7633117 / 121
Fax: n.a.
Email: [email protected]
Opening Hours: Monday to Friday, 08:30 – 17:30
– RMA Contact EMEA
Bosch Security Systems, C/o EVI Audio GmbH, Ernst-Heinkel Str. 4, 94315 Straubing,
GERMANY
Contact person: RA Desk Supervisor
Phone: +49(9421)706-366
Fax: n.a.
E-mail: [email protected]
Opening Hours: Monday to Friday, 07:00 – 18:00

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

End !!!
