
A Review of Oracle ASM Disk I/O Benchmarks

The document discusses using the dd command in Linux to test redo log write performance on an Oracle ASM cluster. It provides examples of running dd on individual nodes and in parallel to identify any bottlenecks. Recommendations are given for carefully planning I/O performance tests to ensure accurate results.

Oracle ASM Disk I/O Benchmark Tools

Testing Cluster Redo Log Write Performance

The dd command in Linux is a useful tool for DBAs: it lets us write a file directly to a disk and then evaluate I/O performance, simulate disk corruption to rehearse recovery scenarios, or modify specific blocks at a given location on a disk.

Here I illustrate a simple I/O benchmark of a redo disk group using dd in a 3-node RAC environment:

• Note: this is a destructive test!
• Don't run it in a production environment, or at the very least use a newly created disk that holds no data!
• Use bs=128k to write 128 KB blocks.
• Use oflag=sync so each write is flushed to disk rather than absorbed by the buffer cache (Oracle's log writer likewise writes synchronously).
• It is best to run this right after the ASM disks are created, to get a baseline of the IOPS your storage LUNs can deliver.
• If you need to check disk performance while the database is under load, instead review AWR reports, check trace files for the problematic process, and use system statistics views or Linux commands such as sar, ioping, iotrace, ...
• SLOB is a good option for I/O benchmarks.
• You can also use iostat inside the asmcmd environment.
• If your database resides on local disks (no ASM), you can use the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure (a sqlplus sketch follows this list).
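
For reference, here is a minimal sketch of the CALIBRATE_IO call, driven from the shell through sqlplus. The num_physical_disks value (4 here) is an assumption you must adjust to your storage; the call needs SYSDBA and asynchronous I/O enabled:

sqlplus -S / as sysdba <<'EOF'
SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  -- Calibrate and print what the database measured
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops: ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps: ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('actual_latency: ' || l_latency);
END;
/
EOF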

Test the instances one after the other to get an idea of your single-instance performance (run each test three times).
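
For example, a small loop repeats the write test three times on one node (a sketch; it writes to the same disposable test device used below, so it is destructive):

for i in 1 2 3; do
  dd if=/dev/zero of=/dev/asmdisk1_test bs=128k oflag=sync count=5000
done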

Instance raca1:
# dd if=/dev/zero of=/dev/asmdisk1_test bs=128k oflag=sync count=5000
5000+0 records in
5000+0 records out
655360000 bytes (655 MB) copied, 61.1947 s, 10.7 MB/s
--> Instance raca1: 81.96 IOPS, transfer rate: 10.7 MB/s

Instance raca2:
# dd if=/dev/zero of=/dev/asmdisk2_test bs=128k oflag=sync count=5000
5000+0 records in
5000+0 records out
655360000 bytes (655 MB) copied, 41.4141 s, 15.8 MB/s
--> Instance raca2: 121.95 IOPS, transfer rate: 15.8 MB/s

Instance raca3:
# dd if=/dev/zero of=/dev/asmdisk3_test bs=128k oflag=sync count=5000
5000+0 records in
5000+0 records out
655360000 bytes (655 MB) copied, 62.8516 s, 10.4 MB/s
--> Instance raca3: 80.64 IOPS, transfer rate: 10.4 MB/s
We therefore expect a cumulative write rate of about 285 IOPS (the sum of the three single-instance figures) for the whole cluster!
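
The IOPS figures are simply the write count divided by the elapsed time (the numbers above appear to use whole seconds, e.g. 5000 / 41 s for raca2); a quick check from the shell:

echo "5000 41" | awk '{printf "%.2f IOPS\n", $1 / $2}'
121.95 IOPS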

Next, test all instances by running the dd command in parallel to uncover any bottlenecks (a scripted version of this parallel run follows the results below):

[root@raca1 dev]# dd if=/dev/zero of=/dev/asmdisk1_test bs=128k oflag=sync count=5000
5000+0 records in
5000+0 records out
655360000 bytes (655 MB) copied, 97.604 s, 6.7 MB/s

[root@raca2 ~]# dd if=/dev/zero of=/dev/asmdisk2_test bs=128k oflag=sync count=5000
5000+0 records in
5000+0 records out
655360000 bytes (655 MB) copied, 97.645 s, 6.7 MB/s

[root@raca3 ~]# dd if=/dev/zero of=/dev/asmdisk3_test bs=128k oflag=sync count=5000
5000+0 records in
5000+0 records out
655360000 bytes (655 MB) copied, 118.749 s, 5.5 MB/s

--> Cluster-wide IOPS: 15,000 total writes / 118 s ≈ 127 IOPS

• IOPS dropped from the expected rate of about 285 IOPS to 127 IOPS.
• There is a controller and/or disk bottleneck that needs further review by the storage team.
• If you are suffering from the "LOG FILE SYNC" wait event, sequential write IOPS is the limiting factor, and you should review whether your hardware is scaling well.
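
To make the parallel run repeatable from a single node, the three writes can be launched over ssh and awaited together. A sketch, assuming passwordless root ssh between the nodes and the same disposable test devices as above:

# Start all three writes at (roughly) the same time, then wait for them all
for n in 1 2 3; do
  ssh root@raca$n "dd if=/dev/zero of=/dev/asmdisk${n}_test bs=128k oflag=sync count=5000" &
done
wait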

Some recommendations for I/O performance tests:

SLOB is a great tool for testing Oracle I/O performance.

Plan your performance tests carefully to ensure that you really are testing I/O and not Oracle
cache performance or the performance of your servers.
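
For example, when reading a test device back with dd, iflag=direct keeps the Linux page cache out of the measurement (same hypothetical device as above):

# dd if=/dev/asmdisk1_test of=/dev/null bs=128k iflag=direct count=5000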

Create and test enough data to exceed the database and array caches.

Repeat tests several times to identify anomalies in your tests. Single tests prove nothing.

Use AWR to see what Oracle observed as I/O performance.


I/O performance measured at other points in the stack may not be representative of what
Oracle sees.
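
To pull an AWR report covering the benchmark window, the standard script can be run from sqlplus (it prompts for the report type and the begin/end snapshot IDs):

sqlplus / as sysdba @?/rdbms/admin/awrrpt.sql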

You may not be able to achieve the hero numbers vendors publish.
Remember that many vendors use a 4 KB block size and a 100% read workload to achieve the high IOPS numbers they publish.
That is not representative of a real-world workload.

Unity all-flash compression allows for impressive space savings, up to 4.5:1.


Unity compression does impact I/O performance, especially in write-heavy environments.

This makes compression a good fit for ASM disk groups such as the FRA, where a lower level of performance may be acceptable, but it should be used with caution where I/O performance is the highest priority.

I hope this article was useful for you.

Alireza Kamrani.
