
US011436210B2

(12) United States Patent
Prahlad et al.

(10) Patent No.: US 11,436,210 B2
(45) Date of Patent: Sep. 6, 2022
(54) CLASSIFICATION OF VIRTUALIZATION DATA

(71) Applicant: Commvault Systems, Inc., Tinton Falls, NJ (US)

(72) Inventors: Anand Prahlad, Bangalore (IN); Rahul S. Pawar, Marlboro, NJ (US); Prakash Varadharajan, Morganville, NJ (US); Pavan Kumar Reddy Bedadala, Piscataway, NJ (US)

(73) Assignee: Commvault Systems, Inc., Tinton Falls, NJ (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 61 days.

(21) Appl. No.: 917,591

(22) Filed: Jun. 30, 2020

(65) Prior Publication Data
US 2020/0334221 A1    Oct. 22, 2020

Related U.S. Application Data

(60) Continuation of application No. 15/679,560, filed on Aug. 17, 2017, now Pat. No. 10,754,841, which is a (Continued)

(51) Int. Cl.
G06F 16/22 (2019.01)
G06F 11/14 (2006.01)
(Continued)

(52) U.S. Cl.
CPC .... G06F 16/2272 (2019.01); G06F 9/455 (2013.01); G06F 11/1453 (2013.01); (Continued)

(58) Field of Classification Search
CPC .... G06F 16/2272; G06F 9/455; G06F 11/1453; G06F 11/1458; G06F 11/1469; (Continued)

(56) References Cited

U.S. PATENT DOCUMENTS

4,084,231 A 4/1978 Capozzi et al.
4,267,568 A 5/1981 Dechant et al.
(Continued)

FOREIGN PATENT DOCUMENTS

EP 0259912 3/1988
EP 0405926 1/1991
(Continued)

OTHER PUBLICATIONS

U.S. Appl. No. 61/100,686, filed Sep. 26, 2008, Kottomtharayil.
(Continued)

Primary Examiner — Edward J. Dudek, Jr.
Assistant Examiner — Sidney Li
(74) Attorney, Agent, or Firm — Commvault Systems, Inc.

(57) ABSTRACT

A method and system are described herein for classifying data of virtual machines in a heterogeneous computing environment comprising virtual machines and non-virtual machines. The system may access a secondary copy of data stored by a virtual machine, create metadata associated with that data, store the metadata in an index that comprises metadata associated with data stored on non-virtual machines, use a journal file to determine modified data objects within the data stored by the virtual machine, access or create metadata associated with the modified data objects, and update the index accordingly.

24 Claims, 21 Drawing Sheets
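The indexing flow in the abstract (harvest metadata from a virtual machine's secondary copy into an index shared with non-virtual machines, then use a journal of modified data objects to refresh that index) can be sketched in miniature. This is an illustrative sketch only, not the patented implementation; every function and field name below is an assumption.

```python
# Sketch of the abstract's flow (illustrative names, not Commvault's code):
# 1. create metadata for data objects found in a VM's secondary copy,
# 2. merge it into an index that also holds non-VM metadata,
# 3. apply a journal of modified objects to keep the index current.

def build_metadata(data_objects):
    """Create metadata records for data objects read from a secondary copy."""
    return {obj["path"]: {"size": obj["size"], "source": obj["source"]}
            for obj in data_objects}

def update_index(index, vm_copy, journal):
    """Merge VM metadata into the shared index, then apply journaled changes."""
    index.update(build_metadata(vm_copy))
    for entry in journal:  # the journal lists data objects modified since the copy
        index[entry["path"]] = {"size": entry["size"], "source": entry["source"]}
    return index

# The shared index already holds metadata for a non-virtual machine.
index = {"/srv/doc.txt": {"size": 10, "source": "physical-host"}}
vm_copy = [{"path": "/vm/a.log", "size": 5, "source": "vm-1"}]
journal = [{"path": "/vm/a.log", "size": 7, "source": "vm-1"}]  # a.log grew

update_index(index, vm_copy, journal)
```

Because records are keyed on the object path, a journaled modification overwrites the stale record captured from the earlier secondary copy instead of duplicating it.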
[Representative drawing: block diagram of a storage operation cell (1050) showing a storage manager (1005) with management agent, jobs agent, interface agent, and index; clients with data agents and metabases; primary data on primary storage; secondary copies (SC) on secondary storage; and secondary storage computing devices (1065) attached to storage devices (1015).]

Related U.S. Application Data

continuation of application No. 14/275,381, filed on May 12, 2014, now Pat. No. 9,740,723, which is a division of application No. 13/667,890, filed on Nov. 2, 2012, now Pat. No. 8,725,973, which is a division of application No. 12/553,294, filed on Sep. 3, 2009, now Pat. No. 8,307,177.

(60) Provisional application No. 61/169,515, filed on Apr. 15, 2009, provisional application No. 61/121,383, filed on Dec. 10, 2008, provisional application No. 61/094,753, filed on Sep. 5, 2008.

(51) Int. Cl.
G06F 9/455 (2018.01)
G06F 16/383 (2019.01)
G06F 16/14 (2019.01)

(52) U.S. Cl.
CPC .... G06F 11/1458 (2013.01); G06F 11/1469 (2013.01); G06F 11/1435 (2013.01); G06F 11/1456 (2013.01); G06F 16/14 (2019.01); G06F 16/383 (2019.01); G06F 2201/815 (2013.01); G06F 2201/84 (2013.01)

(58) Field of Classification Search
CPC .... G06F 16/14; G06F 16/383; G06F 11/1435; G06F 11/1456; G06F 2201/815; G06F 2201/84
USPC .... 707/696
See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

4,283,787 A 8/1981 Chambers
4,417,321 A 11/1983 Chang et al.
4,641,274 A 2/1987 Swank
4,654,819 A 3/1987 Stiffler et al.
4,686,620 A 8/1987 Ng
4,912,637 A 3/1990 Sheedy et al.
4,995,035 A 2/1991 Cole et al.
5,005,122 A 4/1991 Griffin et al.
5,093,912 A 3/1992 Dong et al.
5,133,065 A 7/1992 Cheffetz et al.
5,193,154 A 3/1993 Kitajima et al.
5,212,772 A 5/1993 Masters
5,226,157 A 7/1993 Nakano et al.
5,239,647 A 8/1993 Anglin et al.
5,241,668 A 8/1993 Eastridge et al.
5,241,670 A 8/1993 Eastridge et al.
5,276,860 A 1/1994 Fortier et al.
5,276,867 A 1/1994 Kenley et al.
5,287,500 A 2/1994 Stoppani, Jr.
5,301,286 A 4/1994 Rajani
5,321,816 A 6/1994 Rogan et al.
5,333,315 A 7/1994 Saether et al.
5,347,653 A 9/1994 Flynn et al.
5,410,700 A 4/1995 Fecteau et al.
5,420,996 A 5/1995 Aoyagi
5,448,724 A 9/1995 Hayashi et al.
5,454,099 A 9/1995 Myers et al.
5,491,810 A 2/1996 Allen
5,495,607 A 2/1996 Pisello et al.
5,504,873 A 4/1996 Martin et al.
5,544,345 A 8/1996 Carpenter et al.
5,544,347 A 8/1996 Yanai et al.
5,559,957 A 9/1996 Balk
5,559,991 A 9/1996 Kanfi
5,594,901 A 1/1997 Andoh
5,619,644 A 4/1997 Crockett et al.
5,638,509 A 6/1997 Dunphy et al.
5,642,496 A 6/1997 Kanfi
5,664,204 A 9/1997 Wang
5,673,381 A 9/1997 Huai et al.
5,699,361 A 12/1997 Ding et al.
5,729,743 A 3/1998 Squibb
5,751,997 A 5/1998 Kullick et al.
5,758,359 A 5/1998 Saxon
5,761,677 A 6/1998 Senator et al.
5,764,972 A 6/1998 Crouse et al.
5,778,395 A 7/1998 Whiting et al.
5,812,398 A 9/1998 Nielsen
5,813,009 A 9/1998 Johnson et al.
5,813,017 A 9/1998 Morris
5,875,478 A 2/1999 Blumenau
5,887,134 A 3/1999 Ebrahim
5,901,327 A 5/1999 Ofek
5,924,102 A 7/1999 Perks
5,950,205 A 9/1999 Aviani, Jr.
5,974,563 A 10/1999 Beeler, Jr.
6,021,415 A 2/2000 Cannon et al.
6,026,414 A 2/2000 Anglin
6,052,735 A 4/2000 Ulrich et al.
6,076,148 A 6/2000 Kedem et al.
6,094,416 A 7/2000 Ying
6,101,585 A 8/2000 Brown et al.
6,131,095 A 10/2000 Low et al.
6,131,190 A 10/2000 Sidwell
6,148,412 A 11/2000 Cannon et al.
6,154,787 A 11/2000 Urevig et al.
6,161,111 A 12/2000 Mutalik et al.
6,167,402 A 12/2000 Yeager
6,212,512 B1 4/2001 Barney et al.
6,260,069 B1 7/2001 Anglin
6,269,431 B1 7/2001 Dunham
6,275,953 B1 8/2001 Vahalia et al.
6,301,592 B1 10/2001 Aoyama et al.
6,324,581 B1 11/2001 Xu et al.
6,328,766 B1 12/2001 Long
6,330,570 B1 12/2001 Crighton et al.
6,330,642 B1 12/2001 Carteau
6,343,324 B1 1/2002 Hubis et al.
RE37,601 E 3/2002 Eastridge et al.
6,356,801 B1 3/2002 Goodman et al.
6,389,432 B1 5/2002 Pothapragada et al.
6,397,242 B1 5/2002 Devine et al.
6,418,478 B1 7/2002 Ignatius et al.
6,421,711 B1 7/2002 Blumenau et al.
6,487,561 B1 11/2002 Ofek et al.
6,519,679 B2 2/2003 Devireddy et al.
6,538,669 B1 3/2003 Lagueux, Jr. et al.
6,542,972 B2 4/2003 Ignatius et al.
6,564,228 B1 5/2003 O'Connor
6,581,076 B1 6/2003 Ching et al.
6,658,436 B2 12/2003 Oshinsky et al.
6,658,526 B2 12/2003 Nguyen et al.
6,721,767 B2 4/2004 De Meno et al.
6,760,723 B2 7/2004 Oshinsky et al.
6,772,290 B1 8/2004 Bromley et al.
6,820,214 B1 11/2004 Carbrera et al.
7,003,641 B2 2/2006 Prahlad et al.
7,035,880 B1 4/2006 Crescenti et al.
7,076,270 B2 7/2006 Jaggers et al.
7,107,298 B2 9/2006 Prahlad et al.
7,107,385 B2 9/2006 Rajan et al.
7,130,970 B2 10/2006 Devassy et al.
7,162,496 B2 1/2007 Amarendran et al.
7,174,433 B2 2/2007 Kottomtharayil et al.
7,219,162 B2 5/2007 Donker et al.
7,246,207 B2 7/2007 Kottomtharayil et al.
7,315,923 B2 1/2008 Retnamma et al.
7,324,543 B2 1/2008 Wassew et al.
7,343,356 B2 3/2008 Prahlad et al.
7,343,453 B2 3/2008 Prahlad et al.
7,346,751 B2 3/2008 Prahlad et al.
7,356,817 B1 4/2008 Cota-Robles et al.
7,376,895 B2 5/2008 Tsao
7,383,463 B2 6/2008 Hayden et al.
7,386,744 B2 6/2008 Barr et al.
7,389,311 B1 6/2008 Crescenti et al.
7,395,282 B1 7/2008 Crescenti et al.
7,440,982 B2 10/2008 Lu et al.

(56) References Cited

U.S. PATENT DOCUMENTS

8,156,086 B2 4/2012 Lu et al.
8,156,301 B1 4/2012 Khandelwal et al.
8,170,995 B2 5/2012 Prahlad et al.
8,185,893 B2 5/2012 Hyser et al.
7,448,079 B2 11/2008 Tremain 8,191,063 B2 5/2012 Shingai et al .
7,454,569 B2 11/2008 Kavuri et al. 8,200,637 B1 6/2012 Stringham
7,475,282 B2 1/2009 Tormasov et al . 8,209,680 B1 6/2012 Le et al .
7,484,208 B1 1/2009 Nelson 8,219,524 B2 7/2012 Gokhale
7,490,207 B2 2/2009 Amarendran et al . 8,219,653 B1 7/2012 Keagy et al .
7,500,053 B1 3/2009 Kavuri et al . 8,219,769 B1 7/2012 Wilk
7,502,820 B2 3/2009 Manders et al . 8,225,133 B1 7/2012 Lyadvinsky et al .
7,529,782 B2 5/2009 Prahlad et al . 8,229,896 B1 7/2012 Narayanan
7,536,291 B1 5/2009 Vijayan Retnamma et al . 8,229,954 B2 7/2012 Kottomtharayil et al .
7,543,125 B2 6/2009 Gokhale 8,230,195 B2 7/2012 Amarendran et al .
7,546,324 B2 6/2009 Prahlad et al . 8,230,256 B1 7/2012 Raut
7,552,279 B1 6/2009 Gandler 8,234,236 B2 7/2012 Beaty et al.
7,561,899 B2 7/2009 Lee 8,234,641 B2 7/2012 Fitzgerald et al .
7,568,080 B2 7/2009 Prahlad 8,266,099 B2 9/2012 Vaghani
7,603,386 B2 10/2009 Amarendran et al . 8,266,406 B2 9/2012 Kavuri
7,606,844 B2 10/2009 Kottomtharayil 8,285,681 B2 10/2012 Prahlad et al .
7,613,752 B2 11/2009 Prahlad et al . 8,307,177 B2 11/2012 Prahlad et al .
7,617,253 B2 11/2009 Prahlad et al . 8,307,187 B2 11/2012 Chawla et al .
7,617,262 B2 11/2009 Prahlad et al . 8,315,992 B1 11/2012 Gipp et al.
7,620,710 B2 11/2009 Kottomtharayil et al. 8,346,726 B2 1/2013 Liu et al .
7,631,351 B2 12/2009 Erofeev 8,364,652 B2 1/2013 Vijayan et al .
7,636,743 B2 12/2009 Erofeev 8,370,542 B2 2/2013 Lu et al .
7,640,406 B1 12/2009 Hagerstrom et al . 8,386,798 B2 2/2013 Dodgson et al.
7,651,593 B2 1/2010 Prahlad et al . 8,396,838 B2 3/2013 Brockway et al .
7,657,550 B2 2/2010 Prahlad et al . 8,407,190 B2 3/2013 Prahlad et al.
7,660,807 B2 2/2010 Prahlad et al . 8,433,679 B2 4/2013 Crescenti et al.
7,661,028 B2 2/2010 Erofeev 8,434,131 B2 4/2013 Varadharajan et al .
7,668,884 B2 2/2010 Prahlad et al . 8,438,347 B1 5/2013 Tawri et al .
7,685,177 B1 3/2010 Hagerstrom et al . 8,453,145 B1 5/2013 Naik
7,694,070 B2 4/2010 Mogi et al . 8,458,419 B2 6/2013 Basler et al .
7,716,171 B2 5/2010 Kryger 8,473,594 B2 6/2013 Astete et al .
7,721,138 B1 5/2010 Lyadvinsky et al. 8,473,652 B2 6/2013 Amit et al .
7,725,893 B2 5/2010 Jaeckel et al . 8,473,947 B2 6/2013 Goggin et al .
7,730,035 B2 6/2010 Berger et al. 8,489,676 B1 7/2013 Chaplin et al .
7,734,669 B2 6/2010 Kottomtharayil et al. 8,495,108 B2 7/2013 Nagpal et al .
7,739,527 B2 6/2010 Rothman et al . 8,554,981 B2 10/2013 Schmidt et al .
7,747,579 B2 6/2010 Prahlad et al . 8,560,788 B1 10/2013 Sreedharan et al .
7,756,835 B2 7/2010 Pugh 8,577,845 B2 11/2013 Nguyen et al.
7,756,964 B2 7/2010 Madison, Jr. et al. 8,578,120 B2 11/2013 Attarde et al .
7,765,167 B2 7/2010 Prahlad et al . 8,578,126 B1 11/2013 Gaonkar et al .
7,778,984 B2 8/2010 Zhang et al . 8,578,374 B2 11/2013 Kane
7,788,665 B2 8/2010 Oshins 8,578,386 B1 11/2013 Bali et al .
7,792,789 B2 9/2010 Prahlad et al . 8,612,439 B2 12/2013 Prahlad et al.
7,793,307 B2 9/2010 Gokhale et al . 8,620,870 B2 12/2013 Dwarampudi et al .
7,801,864 B2 9/2010 Prahlad et al . 8,635,429 B1 1/2014 Naftel et al .
7,802,056 B2 9/2010 Demsey et al . 8,667,171 B2 3/2014 Guo et al .
7,809,914 B2 10/2010 Kottomtharayil et al. 8,706,867 B2 4/2014 Vijayan
7,822,967 B2 10/2010 Fung 8,707,070 B2 4/2014 Muller
7,823,145 B1 10/2010 Le et al . 8,725,973 B2 5/2014 Prahlad et al .
7,840,537 B2 11/2010 Gokhale et al . 8,751,857 B2 6/2014 Frenkel et al .
7,861,234 B1 12/2010 Lobo et al . 8,769,048 B2 7/2014 Kottomtharayil
7,873,700 B2 1/2011 Pawlowski et al . 8,776,043 B1 7/2014 Thimsen et al.
7,882,077 B2 2/2011 Gokhale et al . 8,799,431 B2 8/2014 Pabari
7,890,467 B2 2/2011 Watanable et al . 8,805,788 B2 8/2014 Gross , IV et al.
7,899,788 B2 3/2011 Chandhok et al . 8,831,202 B1 9/2014 Abidogun et al .
7,904,482 B2 3/2011 Lent et al . 8,849,955 B2 9/2014 Prahlad et al .
7,917,617 B1 3/2011 Ponnapur et al . 8,850,146 B1 9/2014 Majumdar
7,925,850 B1 4/2011 Waldspurger et al . 8,904,008 B2 12/2014 Calder et al .
7,937,421 B2 5/2011 Mikesell et al. 8,904,081 B1 12/2014 Kulkarni
7,937,612 B1 5/2011 Lyadvinsky et al . 8,909,774 B2 12/2014 Vijayan
7,970,965 B2 6/2011 Kedem et al . 8,924,967 B2 12/2014 Nelson
8,001,277 B2 8/2011 Mega et al . 8,930,543 B2 1/2015 Ashok et al .
8,037,016 B2 10/2011 Odulinski et al . 8,938,481 B2 1/2015 Kumarasamy et al .
8,037,028 B2 10/2011 Prahlad et al . 8,938,643 B1 1/2015 Karmarkar et al .
8,037,032 B2 10/2011 Pershin et al . 8,954,446 B2 2/2015 Vijayan Retnamma et al.
8,046,550 B2 10/2011 Feathergill 8,954,796 B1 2/2015 Cohen et al .
8,055,745 B2 11/2011 Atluri 8,959,509 B1 2/2015 Sobel et al .
8,060,476 B1 11/2011 Afonso et al . 8,966,318 B1 2/2015 Shah
8,069,271 B2 11/2011 Brunet et al . 9,015,181 B2 4/2015 Kottomtharayil et al .
8,099,391 B1 1/2012 Monckton 9,020,895 B1 4/2015 Rajashekar
8,108,427 B2 1/2012 Prahlad et al . 9,020,900 B2 4/2015 Vijayan Retnamma et al.
8,112,605 B2 2/2012 Kavuri 9,021,459 B1 4/2015 Qu
8,117,492 B1 2/2012 Searls et al . 9,026,498 B2 5/2015 Kumarasamy
8,135,930 B1 3/2012 Mattox et al. 9,069,587 B2 6/2015 Agarwal et al.
8,140,786 B2 3/2012 Bunte et al. 9,098,457 B2 8/2015 Towstopiat et al .

(56) References Cited

U.S. PATENT DOCUMENTS

10,228,962 B2 3/2019 Dornemann et al.
10,379,892 B2 8/2019 Kripalani
10,387,073 B2 8/2019 Bhagi et al.
10,417,102 B2 9/2019 Sanakkayala et al .
9,098,495 B2 8/2015 Gokhale 10,437,505 B2 10/2019 Dornemann et al .
9,098,514 B2 8/2015 Dwarampudi et al . 10,445,186 B1 10/2019 von Thenen
9,116,633 B2 8/2015 Sancheti et al . 10,452,303 B2 10/2019 Dornemann et al .
9,124,633 B1 9/2015 Eizadi et al . 10,474,483 B2 11/2019 Kottomtharayil et al .
9,141,529 B2 9/2015 Klein et al . 10,474,542 B2 11/2019 Mitkar et al.
9,146,755 B2 9/2015 Lassonde et al . 10,474,548 B2 11/2019 Sanakkayala et al .
9,213,706 B2 12/2015 Long et al . 10,481,984 B1 11/2019 Semyonov et al .
9,223,597 B2 12/2015 Deshpande et al . 10,496,547 B1 12/2019 Naenko
9,235,474 B1 1/2016 Petri et al . 10,565,067 B2 2/2020 Dornemann
9,235,582 B1 1/2016 Madiraju Varadaraju et al. 10,572,468 B2 2/2020 Dornemann et al .
9,239,687 B2 1/2016 Vijayan et al. 10,592,350 B2 3/2020 Dornemann
9,239,762 B1 1/2016 Gunda et al . 10,650,057 B2 5/2020 Pawar et al .
9,246,996 B1 1/2016 Brooker 10,678,758 B2 6/2020 Dornemann
9,268,602 B2 2/2016 Prahlad et al . 10,684,883 B2 6/2020 Deshpande et al.
9,280,378 B2 3/2016 Shah 10,733,143 B2 8/2020 Pawar et al .
9,286,086 B2 3/2016 Deshpande et al . 10,747,630 B2 8/2020 Sanakkayala et al .
9,286,110 B2 3/2016 Mitkar et al . 10,754,841 B2 8/2020 Prahlad et al.
9,292,350 B1 3/2016 Pendharkar et al. 10,768,971 B2 9/2020 Dornemann et al .
9,298,715 B2 3/2016 Kumarasamy et al. 10,776,209 B2 9/2020 Pawar et al .
9,311,121 B2 4/2016 Deshpande et al . 2002/0069369 Al 6/2002 Tremain
9,311,248 B2 4/2016 Wagner 2002/0095609 Al 7/2002 Tokunaga
9,342,537 B2 5/2016 Kumarasamy 2002/0194511 A1 12/2002 Swoboda
9,378,035 B2 6/2016 Kripalani 2003/0031127 A1 2/2003 Saleh et al .
9,397,944 B1 7/2016 Hobbs et al . 2003/0037211 Al 2/2003 Winokur
9,405,763 B2 8/2016 Prahlad et al . 2003/0126494 A1 7/2003 Strasser
9,417,968 B2 8/2016 Dornemann et al . 2003/0182427 Al 9/2003 Halpern
9,424,136 B1 8/2016 Teater et al . 2003/0204597 Al 10/2003 Arakawa et al .
9,436,555 B2 9/2016 Dornemann et al . 2004/0030668 A1 2/2004 Pawlowski et al .
9,451,023 B2 9/2016 Sancheti et al . 2004/0030822 Al 2/2004 Rajan et al.
9,461,881 B2 10/2016 Kumarasamy et al . 2004/0049553 Al 3/2004 Iwamura et al .
9,471,441 B1 10/2016 Lyadvinsky et al . 2004/0205152 A1 10/2004 Yasuda
9,477,683 B2 10/2016 Ghosh 2004/0230899 A1 11/2004 Pagnano et al.
9,489,244 B2 11/2016 Mitkar et al . 2005/0060356 Al 3/2005 Saika
9,495,370 B1 11/2016 Chatterjee et al . 2005/0060704 A1 3/2005 Bulson
9,495,404 B2 11/2016 Kumarasamy et al. 2005/0080970 A1 4/2005 Jeyasingh et al .
9,563,514 B2 2/2017 Dornemann 2005/0108709 Al 5/2005 Sciandra
9,575,789 B1 2/2017 Rangari et al . 2005/0198303 A1 9/2005 Knauerhase et al .
9,575,991 B2 2/2017 Ghosh 2005/0216788 A1 9/2005 Mani-Meitav et al.
9,588,847 B1 3/2017 Natanzon et al . 2005/0262097 A1 11/2005 Sim - Tang
9,588,972 B2 3/2017 Dwarampudi et al . 2006/0058994 Al 3/2006 Ravi et al .
9,594,636 B2 3/2017 Mortensen et al. 2006/0064555 Al 3/2006 Prahlad et al .
9,606,745 B2 3/2017 Satoyama et al . 2006/0101189 Al 5/2006 Chandrasekaran et al .
9,612,966 B2 4/2017 Joshi et al. 2006/0155712 A1 7/2006 Prahlad et al.
9,632,882 B2 4/2017 Kumarasamy et al. 2006/0184935 Al 8/2006 Abels et al.
9,633,033 B2 4/2017 Vijayan et al. 2006/0195715 A1 8/2006 Herington
9,639,274 B2 5/2017 Maranna et al . 2006/0224846 A1 10/2006 Amarendran
9,639,426 B2 5/2017 Pawar et al . 2006/0225065 Al 10/2006 Chandhok et al .
9,652,283 B2 5/2017 Mitkar et al . 2006/0230136 Al 10/2006 Ma
9,684,535 B2 6/2017 Deshpande et al . 2006/0259908 Al 11/2006 Bayer
9,684,567 B2 6/2017 Derk et al . 2007/0027999 Al 2/2007 Allen et al .
9,703,584 B2 7/2017 Kottomtharayil et al. 2007/0043870 A1 2/2007 Ninose
9,710,465 B2 7/2017 Dornemann et al . 2007/0089111 A1 * 4/2007 Robinson G06F 21/53
9,766,989 B2 7/2017 Mitkar et al. 718/1
9,740,702 B2 8/2017 Pawar et al . 2007/0100792 Al 5/2007 Lent et al .
9,740,723 B2 8/2017 Prahlad et al . 2007/0179995 A1 * 8/2007 Prahlad G06F 16/119
9,760,398 B1 9/2017 Pai 2007/0198802 Al 8/2007 Kavuri
9,760,448 B1 9/2017 Per et al . 2007/0203938 Al 8/2007 Prahlad et al .
9,766,825 B2 9/2017 Bhagi et al. 2007/0208918 A1 9/2007 Harbin et al .
9,823,977 B2 11/2017 Dornemann et al . 2007/0220319 Al 9/2007 Desai et al .
9,852,026 B2 12/2017 Mitkar et al . 2007/0234302 Al 10/2007 Suzuki et al .
9,904,598 B2 2/2018 Kumarasamy 2007/0239804 Al 10/2007 Armstrong et al .
9,928,001 B2 3/2018 Dornemann et al . 2007/0260831 A1 11/2007 Michael et al .
9,939,981 B2 4/2018 White et al . 2007/0266056 A1 11/2007 Stacey et al .
9,965,316 B2 5/2018 Deshpande et al . 2007/0288536 A1 12/2007 Sen et al .
9,977,687 B2 5/2018 Kottomtharayil et al. 2008/0005146 A1 1/2008 Kubo et al .
9,983,936 B2 5/2018 Dornemann et al . 2008/0007765 Al 1/2008 Ogata et al.
9,996,287 B2 6/2018 Dornemann et al . 2008/0028408 Al 1/2008 Day
9,996,534 B2 6/2018 Dornemann et al . 2008/0059704 A1 3/2008 Kavuri
10,048,889 B2 8/2018 Dornemann et al . 2008/0071841 Al 3/2008 Okada et al .
10,061,657 B1 8/2018 Chopra 2008/0091655 Al 4/2008 Gokhale et al .
10,061,658 B2 8/2018 Long et al . 2008/0126833 Al 5/2008 Callaway et al.
10,108,652 B2 10/2018 Kumarasamy et al. 2008/0133486 A1 * 6/2008 Fitzgerald G06F 21/6218
10,152,251 B2 12/2018 Sancheti et al . 2008/0134175 A1 6/2008 Fitzgerald
10,162,528 B2 12/2018 Sancheti et al . 2008/0134177 A1 6/2008 Fitzgerald et al .
10,162,873 B2 12/2018 Desphande et al . 2008/0141264 A1 6/2008 Johnson

(56) References Cited

U.S. PATENT DOCUMENTS

2010/0325471 A1 12/2010 Mishra et al.
2010/0325727 A1 12/2010 Neystad et al.
2010/0332401 A1 12/2010 Prahlad
2010/0332454 Al 12/2010 Prahlad et al.
2008/0163206 Al 7/2008 Nair 2010/0332456 Al 12/2010 Prahlad et al.
2008/0189468 Al 8/2008 Schmidt et al . 2010/0332479 A1 12/2010 Prahlad
2008/0195639 Al 8/2008 Freeman et al. 2010/0332629 Al 12/2010 No et al .
2008/0228771 A1 9/2008 Prahlad et al . 2010/0332818 Al 12/2010 Prahlad
2008/0228833 A1 9/2008 Kano 2010/0333100 Al 12/2010 Miyazaki et al.
2008/0229037 A1 9/2008 Bunte 2010/0333116 Al 12/2010 Prahlad
2008/0235479 A1 9/2008 Scales et al . 2011/0004586 Al 1/2011 Cherryholmes et al .
2008/0243855 Al 10/2008 Prahlad 2011/0010515 A1 1/2011 Ranade
2008/0243947 Al 10/2008 Kaneda 2011/0016467 A1 1/2011 Kane
2008/0244028 A1 10/2008 Le et al . 2011/0022811 A1 1/2011 Kirihata et al .
2008/0244068 A1 10/2008 Iyoda et al . 2011/0023114 Al 1/2011 Diab et al .
2008/0244177 A1 10/2008 Crescenti et al . 2011/0035620 A1 2/2011 Vitaly et al .
2008/0250407 Al 10/2008 Dadhia et al . 2011/0047541 A1 2/2011 Yamaguchi et al .
2008/0270564 A1 10/2008 Rangegowda et al . 2011/0061045 A1 3/2011 Phillips
2008/0275924 A1 11/2008 Fries 2011/0072430 A1 3/2011 Mani
2008/0282253 Al 11/2008 Huizenga 2011/0087632 A1 4/2011 Subramanian et al .
2008/0313371 A1 12/2008 Kedem et al . 2011/0093471 A1 4/2011 Brockway et al .
2008/0320319 Al 12/2008 Muller 2011/0107025 A1 5/2011 Urkude et al .
2009/0006733 Al 1/2009 Gold et al . 2011/0107331 A1 5/2011 Evans et al .
2009/0037680 A1 2/2009 Colbert et al . 2011/0161299 Al 6/2011 Prahlad
2009/0113109 Al 4/2009 Nelson et al . 2011/0179414 Al 7/2011 Goggin et al .
2009/0144416 Al 6/2009 Chatley et al. 2011/0185355 Al 7/2011 Chawla et al.
2009/0157882 A1 6/2009 Kashyap 2011/0191559 Al 8/2011 Li et al .
2009/0210427 A1 8/2009 Eidler et al. 2012/0096149 A1 4/2012 Sunkara et al.
2009/0210458 A1 8/2009 Glover et al. 2011/0202734 Al 8/2011 Dhakras et al .
2009/0216816 Al 8/2009 Basler et al . 2011/0208928 A1 8/2011 Chandra et al .
2009/0222496 A1 9/2009 Liu et al . 2011/0213754 Al 9/2011 Bindal
2009/0228669 A1 9/2009 Siesarev et al . 2011/0219144 A1 9/2011 Amit et al .
2009/0234892 Al 9/2009 Anglin et al. 2011/0225277 Al 9/2011 Freimuth et al .
2009/0240904 A1 9/2009 Austruy et al . 2011/0239013 Al 9/2011 Muller
2009/0248762 Al 10/2009 Prahlad et al . 2011/0246430 A1 10/2011 Prahlad et al.
2009/0249005 A1 10/2009 Bender et al . 2011/0252208 Al 10/2011 Ali et al .
2009/0282404 A1 11/2009 Khandekar et al . 2011/0264786 A1 10/2011 Kedem et al .
2009/0287665 A1 11/2009 Prahlad 2012/0016840 A1 1/2012 Lin et al .
2009/0300023 Al 12/2009 Vaghani 2012/0017027 A1 1/2012 Baskakov et al .
2009/0300057 A1 12/2009 Friedman 2012/0017043 A1 1/2012 Aizman et al .
2009/0307166 Al 12/2009 Routray et al . 2012/0017114 A1 1/2012 Timashev et al .
2009/0313260 A1 12/2009 Mimatsu 2012/0054736 A1 3/2012 Arcese et al .
2009/0313447 Al 12/2009 Nguyen 2012/0072685 Al 3/2012 Otani
2009/0313503 Al 12/2009 Atluri et al. 2012/0079221 A1 3/2012 Sivasubramanian et al .
2009/0319534 A1 12/2009 Gokhale 2012/0084262 Al 4/2012 Dwarampudi et al .
2009/0319585 Al 12/2009 Gokhale 2012/0084769 Al 4/2012 Adi et al .
2009/0320029 A1 12/2009 Kottomtharayil 2012/0096149 A1 4/2012 Sunkara et al .
2009/0320137 Al 12/2009 White et al. 2012/0110328 A1 5/2012 Pate et al .
2009/0327471 A1 12/2009 Astete et al . 2012/0131295 Al 5/2012 Nakajima
2009/0327477 A1 12/2009 Madison , Jr. et al . 2012/0131578 Al 5/2012 Ciano et al.
2010/0011178 A1 1/2010 Feathergill 2012/0136832 A1 5/2012 Sadhwani
2010/0017647 A1 1/2010 Callaway et al . 2012/0137292 A1 5/2012 Iwamatsu
2010/0030984 Al 2/2010 Erickson 2012/0150815 Al 6/2012 Parfumi
2010/0049929 A1 2/2010 Nagarkar et al . 2012/0150818 A1 6/2012 Vijayan Retnamma et al.
2010/0049930 A1 2/2010 Pershin 2012/0150826 A1 6/2012 Vijayan Retnamma et al.
2010/0070466 Al 3/2010 Prahlad et al . 2012/0151084 A1 6/2012 Stathopoulos et al .
2010/0070474 Al 3/2010 Lad 2012/0159232 A1 6/2012 Shimada et al.
2010/0070726 A1 3/2010 Ngo et al. 2012/0167083 A1 6/2012 Suit
2010/0082672 Al 4/2010 Kottomtharayil 2012/0209812 A1 8/2012 Bezbaruah
2010/0094948 A1 4/2010 Ganesh et al . 2012/0221843 A1 8/2012 Bak et al .
2010/0106691 A1 4/2010 Preslan et al . 2012/0233285 Al 9/2012 Suzuki
2010/0107158 A1 4/2010 Chen et al . 2012/0254119 Al 10/2012 Kumarasamy
2010/0107172 A1 4/2010 Calinescu et al . 2012/0254364 Al 10/2012 Vijayan
2010/0161919 A1 6/2010 Dodgson et al . 2012/0254824 Al 10/2012 Bansold
2010/0186014 A1 7/2010 Vaghani et al . 2012/0278287 A1 11/2012 Wilk
2010/0211829 A1 8/2010 Ziskind et al . 2012/0278571 Al 11/2012 Fleming et al .
2010/0218183 A1 8/2010 Wang 2012/0278799 Al 11/2012 Starks et al .
2010/0228913 Al 9/2010 Czezatke et al . 2012/0290802 A1 11/2012 Wade et al .
2010/0242096 Al 9/2010 Varadharajan et al. 2012/0324183 A1 12/2012 Chiruvolu et al .
2010/0250767 A1 9/2010 Barreto et al . 2012/0331248 Al 12/2012 Kono et al .
2010/0257523 A1 10/2010 Frank 2013/0024641 A1 1/2013 Talagala et al.
2010/0262586 Al 10/2010 Rosikiewicz et al . 2013/0024722 A1 1/2013 Kotagiri
2010/0262794 Al 10/2010 DeBeer 2013/0036418 A1 2/2013 Yadappanavar et al .
2010/0274981 A1 10/2010 Ichikawa 2013/0042234 A1 2/2013 Deluca et al .
2010/0280999 Al 11/2010 Atluri et al . 2013/0047156 Al 2/2013 Jian et al .
2010/0299309 Al 11/2010 Maki et al . 2013/0054533 Al 2/2013 Hao et al .
2010/0299666 A1 11/2010 Agbaria et al. 2013/0074181 Al 3/2013 Singh
2010/0306173 A1 12/2010 Frank 2013/0080841 A1 3/2013 Reddy et al.
2010/0306486 Al 12/2010 Balasubramanian et al . 2013/0086580 A1 4/2013 Simonsen et al .

(56) References Cited

U.S. PATENT DOCUMENTS

2015/0378758 A1 12/2015 Duggan et al.
2015/0378771 A1 12/2015 Tarasuk-Levin
2015/0378833 A1 12/2015 Misra et al.
2015/0378849 Al 12/2015 Liu et al .
2013/0097308 Al 4/2013 Le 2015/0381711 A1 12/2015 Singh et al.
2013/0117744 Al 5/2013 Klein et al . 2016/0004721 A1 1/2016 Iyer
2013/0173771 A1 7/2013 Ditto et al . 2016/0019317 Al 1/2016 Pawar et al .
2013/0204849 Al 8/2013 Chacko 2016/0070623 A1 3/2016 Derk
2013/0227558 A1 8/2013 Du et al . 2016/0092467 Al 3/2016 Lee et al .
2013/0232215 A1 9/2013 Gupta et al. 2016/0154709 A1 6/2016 Mitkar et al .
2013/0232480 A1 9/2013 Winterfeldt et al . 2016/0170844 A1 6/2016 Long et al .
2013/0238562 A1 9/2013 Kumarasamy 2016/0188413 A1 6/2016 Abali et al .
2013/0238785 Al 9/2013 Hawk et al. 2016/0202916 A1 7/2016 Cui et al .
2013/0262390 A1 10/2013 Kumarasamy et al. 2016/0283335 Al 9/2016 Yao et al .
2013/0262638 A1 10/2013 Kumarasamy et al . 2016/0306651 A1 10/2016 Kripalani
2013/0262801 A1 10/2013 Sancheti et al . 2016/0306706 A1 10/2016 Pawar et al .
2013/0263113 Al 10/2013 Cavazza 2016/0308722 A1 10/2016 Kumarasamy
2013/0268931 A1 10/2013 O'Hare et al . 2016/0335007 A1 11/2016 Ryu et al .
2013/0290267 A1 10/2013 Dwarampudi et al . 2016/0350391 A1 12/2016 Vijayan et al .
2013/0311429 Al 11/2013 Agetsuma 2016/0373291 A1 12/2016 Dornemann
2013/0326260 A1 12/2013 Wei et al . 2017/0090972 A1 3/2017 Ryu et al .
2013/0332685 Al 12/2013 Kripalani 2017/0090974 A1 3/2017 Dornemann
2014/0006858 A1 1/2014 Helfman et al . 2017/0123939 A1 5/2017 Maheshwari et al.
2014/0007097 Al 1/2014 Chin et al . 2017/0168903 A1 6/2017 Dornemann et al .
2014/0007181 A1 1/2014 Sarin et al . 2017/0185488 A1 6/2017 Kumarasamy et al .
2014/0019769 Al 1/2014 Pittelko 2017/0192866 Al 7/2017 Vijayan et al .
2014/0040892 A1 2/2014 Baset 2017/0193003 A1 7/2017 Vijayan et al .
2014/0052892 Al 2/2014 Klein et al . 2017/0235647 A1 8/2017 Kilaru et al .
2014/0059380 Al 2/2014 Krishnan 2017/0242871 A1 8/2017 Kilaru et al .
2014/0075440 A1 3/2014 Prahlad et al . 2017/0249220 A1 8/2017 Kumarasamy et al .
2014/0089266 A1 3/2014 Une et al. 2017/0262204 Al 9/2017 Dornemann et al .
2014/0095816 A1 4/2014 Hsu et al . 2017/0264589 Al 9/2017 Hunt et al.
2014/0115285 Al 4/2014 Arcese et al . 2017/0286230 A1 10/2017 Zamir
2014/0136803 A1 5/2014 Qin 2017/0318111 A1 11/2017 Dornemann
2014/0156684 A1 6/2014 Zaslavsky et al . 2017/0371547 A1 12/2017 Fruchtman et al .
2014/0181038 A1 6/2014 Pawar et al . 2018/0011885 Al 1/2018 Prahlad
2014/0181044 A1 6/2014 Pawar et al . 2018/0089031 Al 3/2018 Dornemann et al .
2014/0181046 A1 6/2014 Pawar et al . 2018/0113623 A1 4/2018 Sancheti
2014/0188803 A1 7/2014 James et al . 2018/0143880 A1 5/2018 Dornemann
2014/0196038 Al 7/2014 Kottomtharayil et al. 2018/0181598 A1 6/2018 Pawar et al .
2014/0196039 Al 7/2014 Kottomtharayil et al . 2018/0253192 A1 9/2018 Varadharajan et al.
2014/0196056 Al 7/2014 Kottomtharayil et al. 2018/0267861 Al 9/2018 Iyer
2014/0201151 A1 7/2014 Kumarasamy et al. 2018/0276022 A1 9/2018 Mitkar et al .
2014/0201157 Al 7/2014 Pawar et al . 2018/0276083 Al 9/2018 Mitkar et al .
2014/0201162 A1 7/2014 Kumarasamy et al . 2018/0276084 Al 9/2018 Mitkar et al .
2014/0201170 A1 7/2014 Vijayan et al. 2018/0276085 Al 9/2018 Mitkar et al .
2014/0237537 A1 8/2014 Manmohan et al . 2018/0285202 A1 10/2018 Bhagi et al.
2014/0244610 Al 8/2014 Raman et al . 2018/0285209 Al 10/2018 Liu
2014/0259015 A1 9/2014 Chigusa et al . 2018/0285215 A1 10/2018 Ashraf
2014/0278530 A1 9/2014 Bruce et al. 2018/0285353 A1 10/2018 Rao et al .
2014/0282514 A1 9/2014 Carson et al . 2018/0329636 A1 11/2018 Dornemann et al .
2014/0330874 A1 11/2014 Novak et al . 2019/0012339 Al 1/2019 Kumarasamy et al .
2014/0337295 A1 11/2014 Haselton et al . 2019/0026187 A1 1/2019 Gulam et al.
2014/0344323 Al 11/2014 Pelavin et al . 2019/0065069 Al 2/2019 Sancheti et al.
2014/0344805 A1 11/2014 Shu 2019/0090305 Al 3/2019 Hunter et al .
2014/0372384 Al 12/2014 Long et al . 2019/0324791 A1 10/2019 Kripalani
2015/0058382 Al 2/2015 St. Laurent 2019/0340088 A1 11/2019 Sanakkayala et al .
2015/0067393 Al 3/2015 Madani et al . 2019/0347120 A1 11/2019 Kottomtharayil et al .
2015/0074536 A1 3/2015 Varadharajan et al. 2019/0369901 A1 12/2019 Dornemann et al .
2015/0081636 Al 3/2015 Schindler 2019/0391742 A1 12/2019 Bhagi et al.
2015/0120928 A1 4/2015 Gummaraju et al . 2020/0034252 A1 1/2020 Mitkar et al.
2015/0121122 A1 4/2015 Towstopiat et al. 2020/0142612 A1 5/2020 Dornemann et al .
2015/0134607 A1 5/2015 Magdon-Ismail et al. 2020/0142783 A1 5/2020 Dornemann
2015/0142745 A1 5/2015 Tekade et al . 2020/0142783 A1 5/2020 Dornemann
2015/0160884 A1 6/2015 Scales et al . 2020/0174894 A1 6/2020 Dornemann
2015/0161015 Al 6/2015 Kumarasamy et al. 2020/0174895 A1 6/2020 Dornemann
2015/0163172 A1 6/2015 Mudigonda et al. 2020/0183728 A1 6/2020 Deshpande et al .
2015/0212897 Al 7/2015 Kottomtharayil 2020/0241908 A1 7/2020 Dornemann et al .
2015/0227438 A1 8/2015 Jaquette 2020/0265024 Al 8/2020 Pawar et al .
2015/0227602 A1 8/2015 Ramu 2020/0301891 Al 9/2020 Dornemann
2015/0242283 Al 8/2015 Simoncelli et al . 2020/0327163 Al 10/2020 Pawar et al .
2015/0248333 A1 9/2015 Aravot 2020/0334113 Al 10/2020 Sanakkayala et al .
2015/0293817 A1 10/2015 Subramanian et al . 2020/0334201 Al 10/2020 Pawar et al .
2015/0317216 A1 11/2015 Hsu et al . 2020/0341945 A1 10/2020 Pawar et al .
2015/0347165 Al 12/2015 Lipchuk et al .
2015/0347430 A1 12/2015 Ghosh
2015/0363413 A1 12/2015 Ghosh
2015/0366174 A1 12/2015 Burova
2015/0370652 A1 12/2015 He et al.

FOREIGN PATENT DOCUMENTS

EP 0467546 1/1992
EP 0541281 A2 5/1993

( 56 ) References Cited International Preliminary Report on Patentability and Written Opin


ion for PCT/US2011 / 054374 , dated Apr. 11 , 2013 , 6 pages .
FOREIGN PATENT DOCUMENTS International Search Report and Written Opinion for PCT /US2011 /
EP 0774715 5/1997
054378 , dated May 2 , 2012 , 9 pages .
EP 0809184 11/1997 Jander, M. , “ Launching Storage - Area Net ,” Data Communications,
EP 0899662 3/1999 US , McGraw Hill , NY, vol . 27 , No. 4 (Mar. 21 , 1998 ) , pp . 64-72 .
EP 0981090 2/2000 Microsoft Corporation, “ How NTFS Works , ” Windows Server
WO W09513580 5/1995 TechCenter, updated Mar. 28 , 2003 , internet accessed Mar. 26 ,
WO W09912098 3/1999 2008, 26 pages.
WO WO 2006/052872 5/2006 Rosenblum et al . , “ The Design and Implementation of a Log
Structured File System ,” Operating Systems Review SIGOPS, vol .
>>

OTHER PUBLICATIONS 25 , No. 5 , New York , US , pp . 1-15 (May 1991 ) .


Sanbarrow.com , “ Disktype - table ,” < http://sanbarrow.com/vmdk/
U.S. Appl. No. 61 / 164,803 , filed Mar. 30 , 2009 , Muller. disktypes.html> , internet accessed on Jul. 22 , 2008 , 4 pages .
Armstead et al . , “ Implementation of a Campwide Distributed Mass Sanbarrow.com , “ Files Used by a VM , ” < http://sanbarrow.com/
Storage Service : The Dream vs. Reality , ” IEEE , Sep. 11-14 , 1995 , vmx/vmx - files -used -by - a - vm.html> , internet accessed on Jul. 22 ,
pp . 190-199 . 2008, 2 pages .
Arneson , “Mass Storage Archiving in Network Environments, " Sanbarrow.com , “ Monolithic Versus Split Disks,” < http : // sanbarrow .
>>

Digest of Papers, Ninth IEEE Symposium on Mass Storage Sys com /vmdk /monolithicversusspllit.html> , internet accessed on Jul.
tems , Oct. 31 , 1988 -Nov. 3 , 1988 , pp . 45-50 , Monterey, CA. 14, 2008 , 2 pages.
Brandon, J., "Virtualization Shakes Up Backup Strategy," <http://www.computerworld.com>, internet accessed on Mar. 6, 2008, 3 pages.
Vmware, Inc., "Open Virtual Machine Format," <http://www.vmware.com/appliances/learn/ovf.html>, internet accessed on May 6, 2008, 2 pages.
Cabrera et al., "ADSM: A Multi-Platform, Scalable, Backup and Archive Mass Storage System," Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA.
VMware, Inc., "OVF, Open Virtual Machine Format Specification, version 0.9," White Paper, <http://www.vmware.com>, 2007, 50 pages.
VMware, Inc., "The Open Virtual Machine Format Whitepaper for OVF Specification, version 0.9," White Paper, <http://www.vmware.com>, 2007, 16 pages.
CommVault Systems, Inc., "A CommVault White Paper: VMware Consolidated Backup (VCB) Certification Information Kit," 2007, 23 pages.
CommVault Systems, Inc., "CommVault Solutions - VMware," <http://www.commvault.com/solutions/vmware/>, internet accessed Mar. 24, 2008, 2 pages.
VMware, Inc., "Understanding VMware Consolidated Backup," White Paper, <http://www.vmware.com>, 2007, 11 pages.
VMware, Inc., "Using VMware Infrastructure for Backup and Restore," Best Practices, <http://www.vmware.com>, 2006, 20 pages.
VMware, Inc., "Virtual Disk API Programming Guide," <http://www.vmware.com>, Revision Apr. 11, 2008, 2008, 44 pages.
CommVault Systems, Inc., "Enhanced Protection and Manageability of Virtual Servers," Partner Solution Brief, 2008, 6 pages.
Commvault, "Automatic File System Multi-Streaming," http://documentation.commvault.com/hds/release 700/books online 1/english us/feature, downloaded Jun. 4, 2015, 4 pages.
Davis, D., "3 VMware Consolidated Backup (VCB) Utilities You Should Know," Petri IT Knowledgebase, <http://www.petri.co.il/vmware-consolidated-backup-utilities.htm>, internet accessed on Jul. 14, 2008, 7 pages.
VMware, Inc., "Virtual Disk Format 1.1," VMware Technical Note, <http://www.vmware.com>, Revision Nov. 13, 2007, Version 1.1, 2007, 18 pages.
VMware, Inc., "Virtual Machine Backup Guide, ESX Server 3.0.1 and VirtualCenter 2.0.1," <http://www.vmware.com>, updated Nov. 21, 2007, 74 pages.
VMware, Inc., "Virtual Machine Backup Guide, ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5," <http://www.vmware.com>, updated Feb. 21, 2008, 78 pages.
Davis, D., "Understanding VMware VMX Configuration Files," Petri IT Knowledgebase, <http://www.petri.co.il/virtual_vmware_vmx_configuration_files.htm>, internet accessed on Jun. 19, 2008, 6 pages.
Davis, D., "VMware Server & Workstation Disk Files Explained," Petri IT Knowledgebase, <http://www.petri.co.il/virtual_vmware_files_explained.htm>, internet accessed on Jun. 19, 2008, 5 pages.
VMware, Inc., "Virtualized iSCSI SANS: Flexible, Scalable Enterprise Storage for Virtual Infrastructures," White Paper, <http://www.vmware.com>, Mar. 2008, 13 pages.
VMware, Inc., "VMware Consolidated Backup, Improvements in Version 3.5," Information Guide, <http://www.vmware.com>, 2007, 11 pages.
Davis, D., "VMware Versions Compared," Petri IT Knowledgebase, <http://www.petri.co.il/virtual_vmware_versions_compared.htm>, internet accessed on Apr. 28, 2008, 6 pages.
VMware, Inc., "VMware Consolidated Backup," Product Datasheet, <http://www.vmware.com>, 2007, 2 pages.
VMware, Inc., "VMware ESX 3.5," Product Datasheet, <http://www.vmware.com>, 2008, 4 pages.
Eitel, "Backup and Storage Management in Distributed Heterogeneous Environments," IEEE, Jun. 12-16, 1994, pp. 124-126.
Gait, J., "The Optical File Cabinet: A Random-Access File System For Write-Once Optical Disks," IEEE Computer, vol. 21, No. 6, pp. 11-22 (Jun. 1988).
VMware, Inc., "VMware GSX Server 3.2, Disk Types: Virtual and Physical," <http://www.vmware.com/support/gsx3/doc/disks_types_gsx.html>, internet accessed on Mar. 25, 2008, 2 pages.
VMware, Inc., "VMware OVF Tool," Technical Note, <http://www.vmware.com>, 2007, 4 pages.
Hitachi, "Create A Virtual Machine - VM Lifecycle Management - Vmware," http://documentation.commvault.com/hds/v10/article?p=products/vs vmware/vm provisio ..., downloaded Apr. 28, 2015, 2 pages.
Hitachi, "Frequently Asked Questions - Virtual Server Agent for Vmware," http://documentation.commvault.com/hds/v10/article?p=products/vs vmware/faqs.htm, downloaded Apr. 28, 2015, 11 pages.
Hitachi, "Overview - Virtual Server Agent for VMware," http://documentation.commvault.com/hds/v10/article?p=products/vs vmware/overview.htm, downloaded Apr. 28, 2015, 3 pages.
Hitachi, "Recover Virtual Machines or VM Files - Web Console," http://documentation.commvault.com/hds/v10/article?p=products/vs vmware/vm archivin ..., downloaded Apr. 28, 2015, 2 pages.
VMware, Inc., "VMware Workstation 5.0, Snapshots in a Linear Process," <http://www.vmware.com/support/ws5/doc/ws_preserve_sshot_linear.html>, internet accessed on Mar. 25, 2008, 1 page.
VMware, Inc., "VMware Workstation 5.0, Snapshots in a Process Tree," <http://www.vmware.com/support/ws5/doc/ws_preserve_sshot_tree.html>, internet accessed on Mar. 25, 2008, 1 page.
VMware, Inc., "VMware Workstation 5.5, What Files Make Up a Virtual Machine?" <http://www.vmware.com/support/ws55/doc/ws_learning_files_in_a_vm.html>, internet accessed on Mar. 25, 2008, 2 pages.
Wikipedia, "Cluster (file system)," <http://en.wikipedia.org/wiki/Cluster_%28file_system%29>, internet accessed Jul. 25, 2008, 1 page.
Dell Power Solutions, Dell, Inc., Aug. 2007 in 21 pages.
Deng, et al., "Fast Saving and Restoring Virtual Machines with Page Compression", 2011, pp. 150-157.
Doering, et al., "Guide to Novell NetWare 5.0 5.1 Network Administration", Course Technology, 2001 in 40 pages.
Edwards, "Discovery Systems in Ubiquitous Computing", IEEE Pervasive Computing, issue vol. 5, No. 2, Apr.-Jun. 2006 in 8 pages.
Eldos Callback File System product information from https://www.eldos.com/clients/104-345.php retrieved on Dec. 30, 2016 in 2 pages.
Eldos Usermode filesystem for your Windows applications - Callback File System® (CBFS®) - Create and manage virtual filesystems and disks from your Windows applications, retrieved from https://eldos.com/cbfs on Dec. 30, 2016 in 4 pages.
EsxRanger can not connect to VirtualCentre, VMware Technology Network community message board thread, Jun. 28, 2007 in 2 pages.
EsxRanger Professional, version 3.15, Reference Manual, Copyright Vizioncore Inc, 2006 in 102 pages.
EsxRanger Professional, version 3.15, Reference Manual, Copyright Vizioncore Inc, 2006 in 103 pages.
Techopedia. "Restore Point". Jan. 13, 2012 snapshot via Archive.org. URL Link: <https://www.techopedia.com/definition/13181/restore-point>. Accessed Jul. 2019. (Year: 2012).
Wikipedia, "Cylinder-head-sector," <http://en.wikipedia.org/wiki/Cylinder-head-sector>, internet accessed Jul. 22, 2008, 6 pages.
Wikipedia, "File Allocation Table," <http://en.wikipedia.org/wiki/File_Allocation_Table>, internet accessed on Jul. 25, 2008, 19 pages.
Wikipedia, "Logical Disk Manager," <http://en.wikipedia.org/wiki/Logical_Disk_Manager>, internet accessed Mar. 26, 2008, 3 pages.
Wikipedia, "Logical Volume Management," <http://en.wikipedia.org/wiki/Logical_volume_management>, internet accessed on Mar. 26, 2008, 5 pages.
Wikipedia, "Storage Area Network," <http://en.wikipedia.org/wiki/Storage_area_network>, internet accessed on Oct. 24, 2008, 5 pages.
Wikipedia, "Virtualization," <http://en.wikipedia.org/wiki/Virtualization>, internet accessed Mar. 18, 2008, 7 pages.
TechTarget. "raw device mapping (RDM)". Last updated Feb. 2012. URL Link: <https://searchvmware.techtarget.com/definition/raw-device-mapping-RDM>. Accessed Jul. 2019. (Year: 2012).
Prahlad, et al., U.S. Appl. No. 12/553,294 Published as 2010/0070725 A1 Now U.S. Pat. No. 8,307,177, filed Sep. 3, 2009, Systems and Methods for Management of Virtualization Data.
Prahlad, et al., U.S. Appl. No. 13/667,890 Published as 2013/0061014 A1 Now U.S. Pat. No. 8,725,973, filed Nov. 2, 2012, Systems and Methods for Management of Virtualization Data.
Prahlad, et al., U.S. Appl. No. 14/275,381 Published as 2014/0250093 A1 Now U.S. Pat. No. 9,740,723, filed May 12, 2014, Systems and Methods for Management of Virtualization Data.
Prahlad, et al., U.S. Appl. No. 15/679,560 Published as 2018/0011885 A1 Now U.S. Pat. No. 10,754,841, filed Aug. 17, 2017, Systems and Methods for Management of Virtualization Data.
Prahlad, et al., U.S. Appl. No. 16/917,591 Published as 2020/0334221 A1, filed Jun. 30, 2020, Classification of Virtualization Data.
File Wrapper of U.S. Pat. No. 9,740,723 in 594 pages.
Fraser, et al., "Safe Hardware Access With the Xen Virtual Machine Monitor", 1st Workshop on Operating System and Architectural Support for the On-demand IT Infrastructure (OASIS), 2004, pp. 1-10.
Galan et al. "Service Specification in Cloud Environments Based on Extension to Open Standards" COMSWARE Jun. 16-19, 2009, Dublin, Ireland, ACM.
Gibson, et al., "Implementing Preinstallation Environment Media for Use in User Support," 2007, pp. 129-130.
Gourley et al., "HTTP The Definitive Guide", O'Reilly, 2002, in 77 pages.
Granger, et al., "Survivable Storage Systems", 2001, pp. 184-195.
Gupta, et al., "GPFS-SNC: An enterprise storage framework for virtual-machine clouds", 2011, pp. 1-10.
Guttman et al., "Service Templates and Services Schemes", RFC2609, Standards Track, Jun. 1999 in 33 pages.
Haselhorst, et al., "Efficient Storage Synchronization for Live Migration in Cloud Infrastructures", 2011, pp. 511-518.
Bastiaansen, "Robs Guide to Using Vmware: Covers Workstation, ACE, GSX and ESX Server", Second Edition, Books4Brains, 2005, 178 pages.
Bastiaansen, "Robs Guide to Using VMWare: Covers Workstation, ACE, GSX and ESX Server", Second Edition, reviewed Amazon.com, printed on Mar. 22, 2021 in 5 pages.
Bastiaansen, "Robs Guide to Using VMWare: Covers Workstation, ACE, GSX and ESX Server", Second Edition, Sep. 2005 in 28 pages.
Braswell, et al., Abstract for "Server Consolidation with VMware ESX Server", IBM Redpaper, Jan. 2005 in 2 pages.
Brooks, "esxRanger Ably Backs Up VMs", eWeek, May 2, 2007 in 6 pages.
Carrier, "File System Forensic Analysis", Pearson Education, 2005 in 94 pages.
Celesti, et al., "Improving Virtual Machine Migration in Federated Cloud Environments", 2010, pp. 61-67.
Chan, et al., "An Approach to High Availability for Cloud Servers with Snapshot Mechanism," 2012, pp. 1-6.
Chen et al., "When Virtual Is Better Than Real", IEEE 2001, pp. 133-138.
Chervenak, et al., "Protecting File Systems A Survey of Backup Techniques," 1998, pp. 17-31.
Chiappetta, Marco, "ESA Enthusiast System Architecture," <http://hothardware.com/Articles/NVIDIA-ESA-Enthusiast-System-Architecture/>, Nov. 5, 2007, 2 pages.
Cover and table of contents for Cluster Computing, vol. 9, Issue 1, Jan. 2006 in 5 pages.
Cully, et al., "Remus: High Availability via Asynchronous Virtual Machine Replication", 2008, pp. 161-174.
Data Protection for Large Vmware and Vblock Environments Using EMC Avamar Applied Technology, Nov. 2010, EMC Corporation, 26 pages.
Hirofuchio, Takahiro et al., "A live storage migration mechanism over wan and its performance evaluation," 2009, pp. 67-74.
Hirofuchi, et al., "Enabling Instantaneous Relocation of Virtual Machines with a Lightweight VMM Extension", 2010, pp. 73-83.
Howorth, Vizioncore esxEssentials Review, ZDNet, Aug. 21, 2007 in 12 pages.
Hu, et al., "Virtual Machine based Hot-spare Fault-tolerant System", 2009, pp. 429-432.
Hu, Wenjin et al., "A Quantitative Study of Virtual Machine Live Migration," 2013, pp. 1-10.
Huff, "Data Set Usage Sequence Number," IBM Technical Disclosure Bulletin, vol. 24, No. 5, Oct. 1981, New York, US, pp. 2404-2406.
Ibrahim, Shadi et al., "CLOUDLET: Towards MapReduce Implementation on Virtual Machines," 2009, pp. 65-66.
Informationweek, Issue 1,101, Aug. 14, 2006 in 17 pages.
Infoworld, vol. 28, Issue 7, Feb. 13, 2006 in 17 pages.
Infoworld, vol. 28, Issue 10, Mar. 6, 2006 in 18 pages.
Infoworld, vol. 28, Issue 15, Apr. 10, 2006 in 18 pages.
Infoworld, vol. 28, Issue 16, Apr. 17, 2006 in 4 pages.
Infoworld, vol. 28, Issue 18, May 1, 2006 in 15 pages.
Infoworld, vol. 28, Issue 39, Sep. 25, 2006 in 19 pages.
Infoworld, vol. 29, Issue 6, Feb. 5, 2007 in 22 pages.
Infoworld, vol. 29, Issue 7, Feb. 12, 2007 in 20 pages.
Ismail et al., Architecture of Scalable Backup Service For Private Cloud, IEEE 2013, pp. 174-179.
IT Professional Technology Solutions for the Enterprise, IEEE Computer Society, vol. 9, No. 5, Sep.-Oct. 2007 in 11 pages.
Javaraiah, et al., "Backup for Cloud and Disaster Recovery for Consumers and SMBs," 2008, pp. 1-3.
Jhawar et al., "Fault Tolerance Management in Cloud Computing: A System-Level Perspective", IEEE Systems Journal 7.2, 2013, pp. 288-297.
"Vizioncore Offers Advice to Help Users Understand VCB for VMware Infrastructure 3", Business Wire, Jan. 23, 2007 in 3 pages.
VMware VirtualCenter Users Manual, Version 1.2, Copyright 1998-2004, VMware, Inc. in 466 pages.
Jo, et al., "Efficient Live Migration of Virtual Machines Using Shared Storage", 2013, pp. 1-10.
Vmware, Inc., "VMware Solution Exchange (VSX)" <http://www.vmware.com/appliances/learn/ovf.html>, 2014, 3 pages.
Kashyap, "RLC - A Reliable approach to Fast and Efficient Live Migration of Virtual Machines in the Clouds", IEEE 2014, IEEE Computer Society.
Kim, et al., "Availability Modeling and Analysis of a Virtualized System," 2009, pp. 365-371.
Kuo, et al., "A Hybrid Cloud Storage Architecture for Service Operational High Availability", 2013, pp. 487-492.
Vmware, Inc., "VMware Workstation 5.5, What Files Make Up a Virtual Machine?" <http://www.vmware.com/support/ws55/doc/ws_learning_files_in_a_vm.html>, 2014, 2 pages.
VMware White Paper, "Virtualization Overview", Copyright 2005, VMware, Inc., 11 pages.
Vmware White Paper, "VMware Infrastructure 3, Consolidated Backup in VMware Infrastructure 3", Vmware, Inc. in 6 pages.
Li et al. "Comparing Containers versus Virtual Machines for Achieving High Availability" 2015 IEEE.
VMware White Paper, "Understanding VMware Consolidated Backup", Copyright 2007, VMware, Inc., in 11 pages.
Liang, et al., "A virtual disk environment for providing file system recovery", 2006, pp. 589-599.
Listing of Reviews on ZDNet.com in 33 pages.
Vrable, et al., "Cumulus: Filesystem Backup to the Cloud", 2009, pp. 1-28.
Little et al., "Digital Data Integrity", Wiley, 2007 in 24 pages.
Lu et al. "Virtual Machine Memory Access Tracing with Hypervisor Exclusive Cache", Usenix Annual Technical Conference, 2007, pp. 29-43.
Mao, et al., "Read-Performance Optimization for Deduplication-Based Storage Systems in the Cloud", 2014, pp. 1-22.
Microsoft Computer Dictionary, Microsoft Press, 5th edition, 2002 in 3 pages.
VSphere Storage vMotion: Storage Management & Virtual Machine Migration. http://www.vmware.com/products/vsphere/features/storage-vmotion Retrieved Aug. 12, 2014; 6 pages.
Wikipedia, "Cloud computing," <http://en.wikipedia.org/wiki/Cloud_computing>, 2009, 11 pages.
Wolf, "Lets Get Virtual A Look at Todays Server Virtualization Architectures", Data Center Strategies, Burton Group, Version 1.0, May 14, 2007 in 42 pages.
Migrate a Virtual Machine with Storage vMotion in the vSphere Client. http://pubs.vmware.com/vsphere-51/advanced/print/jsp?topic=com.vmware.vsphere.vcent ... Retrieved Aug. 12, 2014; 2 pages.
Muller, "Scripting Vmware TM: Power Tools Automating Virtual Infrastructure Administration", Syngress, 2006 in 66 pages.
Muller et al., "Scripting VmwareTM: Power Tools Automating Virtual Infrastructure Administration", Syngress, 2006 in 19 pages.
Wood, et al., "Disaster Recovery as a Cloud Service: Economic Benefits & Deployment Challenges", 2010, pp. 1-7.
Yang, et al., "Toward Reliable Data Delivery for Highly Dynamic Mobile Ad Hoc Networks," 2012, pp. 111-124.
Yang, et al., "TRAP-Array: A Disk Array Architecture Providing Timely Recovery to Any Point-in-time," 2006, pp. 1-12.
Yoshida et al., "Orthros: A High-Reliability Operating System with Transmigration of Processes," 2013, pp. 318-327.
Nance et al., "Virtual Machine Introspection: Observation or Interference?", 2008 IEEE.
Newman, et.al, "Server Consolidation with VMware ESX Server", IBM Redpaper, Jan. 12, 2005 in 159 pages.
Ng, Chun-Ho et al., "Live Deduplication Storage of Virtual Machine Images in an Open-Source Cloud," 2011, pp. 80-99.
Zhao, et al., "Adaptive Distributed Load Balancing Algorithm based on Live Migration of Virtual Machines in Cloud", 2009, pp. 170-175.
Zhao, et al., Supporting Application-Tailored Grid File System Sessions with WSRF-Based Services, Advanced Computing and Information Systems Laboratory (ACIS), pp. 24-33.
Nicolae, Bogdan et al., "A Hybrid Local Storage Transfer Scheme for Live Migration of I/O Intensive Workloads," 2012, pp. 85-96.
Reingold, B et al., "Cloud Computing: The Intersection of Massive Scalability, Data Security and Privacy (Part I)," LegalWorks, a Thomson Business, Jun. 2009, 5 pages.
Reingold, B et al., "Cloud Computing: Industry and Government Developments (Part II)," LegalWorks, Sep. 2009, 5 pages.
Reingold, B et al., "Cloud Computing: Whose Law Governs the Cloud? (Part III)," LegalWorks, Jan.-Feb. 2010, 6 pages.
Results of search for Roger Howorth on ZDNet.com in 3 pages.
Rosenblum et al. "Virtual Machine Monitors Current Technology and Future Trends", IEEE, May 2005 in 9 pages.
Rule, Jr., "How to Cheat at Configuring VMware ESX Server", Elsevier, Inc., 2007 in 16 pages.
Somasundaram et al., Information Storage and Management, 2009, pp. 251-281.
Sriram Subramaniam et al., Snapshots in a Flash with ioSnap, In Proceedings of the Ninth European Conference on Computer Systems (EuroSys '14), Association for Computing Machinery, New York, NY, USA, Article 23, pp. 1-14, DOI: https://doi.org/10.1145/2592798.2592824 (Year: 2014).
TechTarget News, Week of May 20, 2007, "Moonwalk's plans to float over the chasm" in 39 pages.
Tran, et al., "Efficient Cooperative Backup with Decentralized Trust Management", 2012, pp. 1-25.
Travostino, et al., "Seamless live migration of virtual machines over the MAN/WAN", 2006, pp. 901-907.
Tudoran, Radu et al., "Adaptive File Management for Scientific Workflows on the Azure Cloud," 2013, pp. 273-281.
Vaghani, "Virtual Machine File System", 2010, pp. 57-70.
"Vizioncore Inc. Releases First Enterprise-Class Hot Backup and Recovery Solution for VMware Infrastructure 3", Business Wire, Aug. 31, 2006 in 2 pages.
Zhao, et al., Abstract for "Distributed File System Virtualization Techniques Supporting On-Demand Virtual Machine Environments for Grid Computing", Cluster Computing, 9, pp. 45-56, 2006.
Zhao, et al., "Distributed File System Virtualization Techniques Supporting On-Demand Virtual Machine Environments for Grid Computing", Cluster Computing, 9, pp. 45-56, 2006.
International Preliminary Report on Patentability and Written Opinion for PCT/US2011/054374, dated Apr. 2, 2013, 9 pages.
Affidavit of Duncan Hall and Exhibit B in regarding of Internet Archive on Mar. 3, 2021 in 16 pages.
First Affidavit of Duncan Hall and Exhibit A in regarding of Internet Archive on Jan. 20, 2021 in 106 pages.
Second Affidavit of Duncan Hall and Exhibit A in regarding of Internet Archive on Jan. 27, 2021 in 94 pages.
Complaint for Patent Infringement, Commvault Systems, Inc., Plaintiff, v. Rubrik Inc., Defendant, Case No. 1:20-cv-00524-MN, U.S. District Court, District of Delaware, filed on Apr. 21, 2020 in 29 pages.
Commvault Systems, Inc. v. Cohesity Inc., Civil Action No. 1:20-CV-00525, U.S. District Court, District of Delaware, Complaint filed on Apr. 21, 2020.
Declaration of Benjamin Dowell in support of Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, Rubrik, Inc., Petitioners, v. Commvault Systems, Inc., Patent Owner, dated Oct. 15, 2020, in 3 pages.
Declaration of Dr. H.V. Jagadish in support of Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, dated Mar. 31, 2021, in 200 pages.
Declaration of Dr. H.V. Jagadish in support of Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, dated Mar. 16, 2021, in 191 pages.
PTAB-IPR2021-00609 - Exhibit 1006 - US20150212895A1 (Pawar), Publication Date Jul. 30, 2015, in 60 pages.
Declaration of Sylvia Hall-Ellis, Ph.D. in support of Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, dated Feb. 15, 2021, in 55 pages.
PTAB-IPR2021-00609 - Exhibit 1007 - U.S. Pat. No. 9,665,386 (Bayapuneni), Issue Date May 30, 2017, in 18 pages.
PTAB-IPR2021-00609 - Exhibit 1008 - Popek and Golberg, Jul. 1974, in 10 pages.
PTAB-IPR2021-00609 - Exhibit 1009 - Virtualization Essentials, First Edition (2012), Excerpted, 2012, in 106 pages.
Declaration of Sylvia Hall-Ellis, Ph.D. in support of Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, dated Mar. 30, 2021, in 291 pages.
PTAB-IPR2021-00609 - Exhibit 1010 - Virtual Machine Monitors Current Technology and Future Trends, May 2005, in 9 pages.
PTAB-IPR2021-00609 - Exhibit 1011 - Virtualization Overview, 2005, in 11 pages.
Petitioner's Explanation of Multiple Petitions Challenging U.S. Pat. No. 9,740,723, filed by petitioner Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, Case No. IPR2021-00674, dated Mar. 31, 2021, in 9 pages.
Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, filed by petitioner Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, Case No. IPR2021-00674, dated Mar. 31, 2021, in 87 pages.
Petition for Inter Partes Review of U.S. Pat. No. 9,740,723, filed by petitioner Rubrik, Inc., Petitioner v. Commvault Systems, Inc., Patent Owner, Case No. IPR2021-00673, dated Mar. 17, 2021, in 98 pages.
U.S. Appl. No. 60/920,847, filed Mar. 29, 2007 in 70 pages.
PTAB-IPR2021-00609 - Exhibit 1012 - A Let's Get Virtual: Look at Today's Server Virtualization Architectures, May 14, 2007, in 42 pages.
PTAB-IPR2021-00609 - Exhibit 1013 - Virtual Volumes, Jul. 22, 2016, in 2 pages.
PTAB-IPR2021-00609 - Exhibit 1014 - Virtual Volumes and the SDDC - Virtual Blocks, Internet Archives on Sep. 29, 2015, in 4 pages.
PTAB-IPR2021-00609 - Exhibit 1015 - NEC White Paper - VMWare vSphere Virtual Volumes (2015), Internet Archives Dec. 4, 2015 in 13 pages.
PTAB-IPR2021-00609 - Exhibit 1016 - EMC Storage and Virtual Volumes, Sep. 16, 2015 in 5 pages.
Scheduling Order, Commvault Systems, Inc., Plaintiff v. Rubrik, Inc., Case No. 1:20-cv-00524-MN, filed Feb. 17, 2021 in 15 pages.
Case 1:20-cv-00525-MN, Joint Claim Construction Statement, DDE-1-20-cv-00525-119, filed Oct. 29, 2021 in 12 pages.
Case 1:20-cv-00525-MN, Letter from Kelly Farnan, DDE-1-20-cv-00525-111, filed Oct. 6, 2021 in 2 pages.
Case 1:20-cv-00525-MN-CJB, Letter from Kelly Farnan Exhibit A, DDE-1-20-cv-00525-111-1, filed Oct. 6, 2021 in 7 pages.
Case No. 1:20-cv-00525-MN, Joint Claim Construction Brief, DDE-1-20-cv-00525-107, filed Oct. 1, 2021 in 79 pages.
Case No. 1:20-cv-00525-MN, Joint Claim Construction Brief Exhibits, DDE-1-20-cv-00525-107-1, filed Oct. 1, 2021 in 488 pages in 7 parts.
Case No. 1:20-cv-00525-MN, First Amended Answer, DDE-1-20-cv-00525-95, filed Jul. 23, 2021, in 38 pages.
Case No. 1-20-cv-00525, Oral Order, DDE-1-20-cv-00524-86_DDE-1-20-cv-00525-87, filed Jun. 29, 2021, in 1 page.
Case No. 1-20-cv-00525-MN, Oral Order, DDE-1-20-cv-00524-78_DDE-1-20-cv-00525-77, filed May 24, 2021, in 1 page.
Case No. 1:20-cv-00525-MN, Order, DDE-1-20-cv-00525-38_DDE-1-20-cv-00524-42, filed Feb. 10, 2021, in 4 pages.
Case No. 1:20-cv-00525-MN, Amended Complaint, DDE-1-20-cv-00525-15, filed Jul. 27, 2020 in 30 pages.
Case No. 1:20-cv-00525-MN, Complaint, DDE-1-20-cv-00525-1, filed Apr. 21, 2020 in 28 pages.
Case 1:20-cv-00525-MN, Stipulation of Dismissal, dated Jan. 27, 2022 in 2 pages.
Case 1:20-cv-00525-MN, Joint Appendix of Exhibits, 157, DDE-1-20-cv-00525-119, filed Jan. 13, 2022 in 54 pages.
Case 1:20-cv-00525-MN, Joint Appendix of Exhibits, 158, DDE-1-20-cv-00525-119, filed Jan. 13, 2022 in 2 pages.
Case 1:20-cv-00525-MN, Joint Appendix of Exhibits, 158-1, DDE-1-20-cv-00525-119, filed Jan. 13, 2022 in 224 pages.
PTAB-IPR2021-00609 - ('048) POPR Final, filed Jun. 16, 2021, in 28 pages.
PTAB-IPR2021-00609 - Mar. 10, 2021 IPR Petition - pty, Mar. 10, 2021, in 89 pages.
PTAB-IPR2021-00609 - Exhibit 1001 - U.S. Appl. No. 10/210,048, Issue Date Feb. 19, 2019, in 49 pages.
PTAB-IPR2021-00609 - Exhibit 1002 - Sandeep Expert Declaration, dated Mar. 10, 2021, in 176 pages.
PTAB-IPR2021-00609 - Exhibit 1003 - U.S. Pat. No. 9,354,927 (Hiltgen), Issue Date May 31, 2016, in 35 pages.
PTAB-IPR2021-00609 - Exhibit 1004 - U.S. Pat. No. 8,677,085 (Vaghani), Issue Date Mar. 18, 2014, in 44 pages.
PTAB-IPR2021-00609 - Exhibit 1005 - U.S. Pat. No. 9,639,428 (Boda), Issue Date May 2, 2017, in 12 pages.
PTAB-IPR2021-00609 - Exhibit 1017 - U.S. Pat. No. 8,621,460 (Evans), Issue Date Dec. 31, 2013, in 39 pages.
PTAB-IPR2021-00609 - Exhibit 1018 - U.S. Pat. No. 7,725,671 (Prahlad), Issue Date May 25, 2010, in 48 pages.
PTAB-IPR2021-00609 - Exhibit 1019 - Assignment - Vaghani to VMWare, Feb. 8, 2012, in 8 pages.
PTAB-IPR2021-00609 - Exhibit 1020 - Assignment Docket - Vaghani, Nov. 11, 2011, in 1 page.
PTAB-IPR2021-00609 - Exhibit 1021 - Dive into the VMware ESX Server hypervisor - IBM Developer, Sep. 23, 2011, in 8 pages.
PTAB-IPR2021-00609 - Exhibit 1022 - MS Computer Dictionary Backup labeled, 2002 in 3 pages.
PTAB-IPR2021-00609 - Exhibit 1023 - Jul. 7, 2014 VMware vSphere Blog, Jun. 30, 2014, 4 pages.
PTAB-IPR2021-00609 - Exhibit 1024 - CommVault v. Rubrik Complaint, filed on Apr. 21, 2020, in 29 pages.
PTAB-IPR2021-00609 - Exhibit 1025 - CommVault v. Cohesity Complaint, filed on Apr. 21, 2020, in 28 pages.
PTAB-IPR2021-00609 - Exhibit 1026 - Feb. 17, 2021 (0046) Scheduling Order, filed on Feb. 17, 2021, in 15 pages.
PTAB-IPR2021-00609 - Exhibit 2001 - Prosecution History_Part1, Issue Date Feb. 19, 2019, in 300 pages, Part 1 of 2.
PTAB-IPR2021-00609 - Exhibit 2001 - Prosecution History_Part2, Issue Date Feb. 19, 2019, in 265 pages, Part 2 of 2.
PTAB-IPR2021-00609 - Exhibit 2002 - Jones Declaration, dated Jun. 16, 2021, in 38 pages.
PTAB-IPR2021-00609 - Exhibit 3001 - RE_IPR2021-00535, 2021-00589, 2021-00590, 2021-00609, 2021-00673, 2021-00674, 2021-00675, dated Aug. 30, 2021, in 2 pages.
PTAB-IPR2021-00609 - Joint Motion to Terminate, filed Aug. 31, 2021, in 7 pages.
PTAB-IPR2021-00609 - Joint Request to Seal Settlement Agreement, filed Aug. 31, 2021, in 4 pages.
PTAB-IPR2021-00609 - Termination Order, Sep. 1, 2021, in 4 pages.
Case No. 1:20-cv-00524-MN, Order Dismissing with Prejudice All Claims, DDE-1-20-cv-00524-101, filed Jul. 26, 2021, in 1 page.
Case No. 1:20-cv-00524-MN, First Amended Answer, DDE-1-20-cv-00524-96, filed Jul. 23, 2021, in 41 pages.
Case No. 1:20-cv-00524-MN, Stipulation, DDE-1-20-cv-00524-93, filed Jul. 14, 2021, in 3 pages.
Case No. 1:20-cv-00524-MN, Oral Order, DDE-1-20-cv-00524-86_DDE-1-20-cv-00525-87, filed Jun. 29, 2021, in 1 page.
Case No. 1:20-cv-00524-MN, Answer, DDE-1-20-cv-00524-45, filed Feb. 16, 2021, in 25 pages.
Case No. 1:20-cv-00524-MN, Order, DDE-1-20-cv-00525-38_DDE-1-20-cv-00524-42, filed Feb. 10, 2021, in 4 pages.
PTAB-IPR2021-00674 - Joint Motion to Terminate, Filed Aug. 31, 2021, in 7 pages.
PTAB-IPR2021-00674 - Response to Notice Ranking Petitions Final, filed Jul. 8, 2021, in 7 pages.
Case No. 1:20-cv-00524-MN, Amended Complaint, DDE-1-20-cv-00524-13, filed Jul. 27, 2020, in 30 pages.
Case No. 1:20-cv-00524-MN, Complaint, DDE-1-20-cv-00524-1, filed Apr. 21, 2020 in 29 pages.
PTAB-IPR2021-00674 - ('723) POPR Final, filed Jul. 8, 2021, in 70 pages.
PTAB-IPR2021-00674 - Mar. 31, 2021 723 Petition, filed Mar. 31, 2021, in 87 pages.
PTAB-IPR2021-00674 - Mar. 31, 2021 Explanation for Two Petitions, filed Mar. 31, 2021, in 9 pages.
PTAB-IPR2021-00674 - Exhibit 1001 - U.S. Pat. No. 9,740,723, Issue Date Aug. 22, 2017, in 51 pages.
PTAB-IPR2021-00674 - Exhibit 1002 - Jagadish Declaration, dated Mar. 31, 2021, in 200 pages.
PTAB-IPR2021-00674 - Exhibit 1003 - U.S. Pat. No. 9,740,723 file history, Issue Date Aug. 22, 2017, in 594 pages.
PTAB-IPR2021-00674 - Exhibit 1004 - Virtual Machine Monitors Current Technology and Future Trends, May 2005, in 9 pages.
PTAB-IPR2021-00674 - Exhibit 1005 - Virtualization Overview, 2005, 11 pages.
PTAB-IPR2021-00674 - Exhibit 1006 - Let's Get Virtual Final Stamped, May 14, 2007, in 42 pages.
PTAB-IPR2021-00674 - Exhibit 1007 - U.S. Pat. No. 8,458,419 - Basler, Issue Date Jun. 4, 2013, in 14 pages.
PTAB-IPR2021-00674 - Exhibit 1008 - US20080244028A1 (Le), Publication Date Oct. 2, 2008, in 22 pages.
PTAB-IPR2021-00674 - Exhibit 1009 - 60920847 (Le Provisional), Filed Mar. 29, 2007, in 70 pages.
PTAB-IPR2021-00674 - Exhibit 1010 - Discovery Systems in Ubiquitous Computing (Edwards), 2006, in 8 pages.
PTAB-IPR2021-00674 - Exhibit 1011 - HTTP The Definitive Guide excerpts (Gourley), 2002, in 77 pages.
PTAB-IPR2021-00674 - Exhibit 1012 - VCB White Paper (Wayback Mar. 21, 2007), retrieved Mar. 21, 2007, Copyright Date 1998-2006, in 6 pages.
PTAB-IPR2021-00674 - Exhibit 1013 - Scripting VMware excerpts (Muller), 2006, in 66 pages.
PTAB-IPR2021-00673 - Termination Order, filed Sep. 1, 2021, in 4 pages.
PTAB-IPR2021-00673 - ('723) POPR Final, filed Jun. 30, 2021, in 70 pages.
PTAB-IPR2021-00673 - ('723) Sur-Reply Final, filed Aug. 16, 2021, in 7 pages.
PTAB-IPR2021-00673 - 723 patent IPR - Reply to POPR, filed Aug. 9, 2021, in 6 pages.
PTAB-IPR2021-00673 - Mar. 17, 2021 Petition 723, filed Mar. 17, 2021, in 98 pages.
PTAB-IPR2021-00673 - Exhibit 1001 - U.S. Pat. No. 9,740,723, Issue Date Aug. 22, 2017, in 51 pages.
PTAB-IPR2021-00673 - Exhibit 1002 - Declaration_Jagadish_EXSRanger, filed Mar. 16, 2021, in 191 pages.
PTAB-IPR2021-00673 - Exhibit 1003 - FH 9740723, Issue Date Aug. 22, 2017, in 594 pages.
PTAB-IPR2021-00673 - Exhibit 1004 - esxRanger ProfessionalUserManual v.3.1, 2006 in 102 pages.
PTAB-IPR2021-00673 - Exhibit 1005 - VC_Users_Manual_11 NoRestriction, Copyright date 1998-2004, in 466 pages.
PTAB-IPR2021-00673 - Exhibit 1006 - U.S. Pat. No. 8,635,429 - Naftel, Issue Date Jan. 21, 2014, in 12 pages.
PTAB-IPR2021-00673 - Exhibit 1007 - US20070288536A1 - Sen, Issue Date Dec. 13, 2007, in 12 pages.
PTAB-IPR2021-00673 - Exhibit 1008 - US20060224846A1 - Amarendran, Oct. 5, 2006, in 15 pages.
PTAB-IPR2021-00673 - Exhibit 1009 - U.S. Pat. No. 8,209,680 - Le, Issue Date Jun. 26, 2012, in 55 pages.
PTAB-IPR2021-00673 - Exhibit 1010 - Virtual Machine Monitors Current Technology and Future Trends, May 2005 in 9 pages.
PTAB-IPR2021-00673 - Exhibit 1011 - Virtualization Overview, Copyright 2005, VMware, Inc., 11 pages.
PTAB-IPR2021-00673 - Exhibit 1012 - Let's Get Virtual A Look at Today's Virtual Server, May 14, 2007 in 42 pages.
PTAB-IPR2021-00673 - Exhibit 1013 - U.S. Pat. No. 8,135,930 - Mattox, Issue Date Mar. 13, 2012, in 19 pages.
PTAB-IPR2021-00674 - Exhibit 1014 - Rob's Guide to Using
PTAB-IPR2021-00673 - Exhibit 1014 - U.S. Pat. No. 8,060,476 -
VMWare excerpts ( Bastiaansen ), Sep. 2005 , in 178 pages . Afonso, Issue Date Nov. 15 , 2011 , in 46 pages .
PTAB -IPR2021-00674 — Exhibit 1015 — Carrier, 2005 in 94 pages . PTAB - IPR2021-00673 — Exhibit 1015 — U.S . Pat. No. 7,823,145—
PTAB - IPR2021-00674 - Exhibit 1016 — U.S . Pat. No. 7,716,171 Le 145 , Issue Date Oct. 26 , 2010 , in 24 pages.
(Kryger ), Issue Date May 11 , 2010 , in 18 pages . PTAB - IPR2021-00673 –Exhibit 1016_US20080091655A1
PTAB - IPR2021-00674 — Exhibit 1017 — RFC2609, Jun . 1999 , in 33 Gokhale, Publication Date Apr. 17 , 2008 , in 14 pages .
pages. PTAB - IPR2021-00673 —Exhibit 1017_US20060259908A1
PTAB -IPR2021-00674_Exhibit 1018 — MS Dictionary excerpt, 2002 , Bayer, Publication Date Nov. 16 , 2006 , in pages .
in 3 pages. PTAB - IPR2021-00673 – Exhibit 1018 – U.S . Pat. No. 8,037,016—
PTAB - IPR2021-00674 – Exhibit 1019 - Commvaultv. Rubrik Com Odulinski, Issue Date Oct. 11 , 2011 , in 20 pages .
plaint , Filed Apr. 21 , 2020 , in 29 pages. PTAB - IPR2021-00673 — Exhibit 1019 — U.S . Pat. No. 7,925,850—
PTAB - IPR2021-00674_Exhibit 1020 — Commvault v. Rubrik Sched Waldspurger, Issue Date Apr. 12 , 2011 , in 23 pages .
uling Order, Filed Feb. 17 , 2021 , in 15 pages . PTAB - IPR2021-00673 – Exhibit 1020 — U.S . Pat. No. 8,191,063—
PTAB - IPR2021-00674 - Exhibit 1021 — Duncan Affidavit, Dated Shingai, May 29 , 2012 , in 18 pages .
Mar. 3 , 2021 , in 16 pages . PTAB - IPR2021-00673 — Exhibit 1021_US8959509B1- Sobel, Issue
PTAB - IPR2021-00674_Exhibit 1022 — Hall - Ellis Declaration , dated Date Feb. 17 , 2015 , in 9 pages .
Mar. 30 , 2021 , in 291 pages . PTAB - IPR2021-00673 — Exhibit 1022 — U.S . Pat. No. 8,458,419
PTAB - IPR2021-00674 — Exhibit 1023 — Digital_Data_Integrity_ Basler, Issue Date Jun . 4 , 2013 , in 14 pages.
2007_Appendix_A_UMCP, 2007 ,, in 24 pages . PTAB - IPR2021-00673 — Exhibit 1023 — D . Hall_Internet Archive
PTAB - IPR2021-00674_Exhibit 1024 — Rob's Guide Amazon review Affidavit & Ex . A , dated Jan. 20 , 2021 , in 106 pages.
( Jan. 4 , 2007 ) , retrieved Jan. 4 , 2007 , in 5 pages . PTAB - IPR2021-00673 — Exhibit 1024 - esxRanger Profes
PTAB - IPR2021-00674 — Exhibit 2001 esxRanger, 2006 , in 102 sionalUserManual, 2006 , in 103 pages .
pages. PTAB - IPR2021-00673 – Exhibit 1025 — D.Hall Internet Archive Affi
PTAB -IPR2021-00674 – Exhibit 2002 — Want, 1995 , in 31 pages . davit & Ex . A ( source html view) , dated Jan. 27 , 2021 , in 94 pages.
PTAB - IPR2021-00674 — Exhibit 2003 —Shea , retrieved Jun . 10 , PTAB - IPR2021-00673 — Exhibit 1026_Scripting VMware ( excerpted )
2021 , in 5 pages . ( GMU) , 2006 , in 19 pages .
PTAB - IPR2021-00674 — Exhibit 2004 — Jones Declaration, Dated PTAB - IPR2021-00673 — Exhibit 1027 — How to cheat at configur
Jul. 8, 2021 , in 36 pages. ing VMware ESX server ( excerpted ), 2007 , in 16 pages.
PTAB - IPR2021-00674 – Exhibit 3001 , dated Aug. 30 , 2021 , in 2 PTAB - IPR2021-00673 — Exhibit 1028 — Robs Guide to Using VMware
pages. ( excerpted ), Sep. 2005 in 28 pages.
PTAB -IPR2021-00674_Exhibit IPR2021-00674 Joint Request to PTAB -IPR2021-00673 – Exhibit 1029_Hall - Ellis Declaration , dated
Seal Settlement Agreement, dated Aug. 31 , 2021 , in 4 pages . Feb. 15 , 2021 , in 55 pages.
US 11,436,210 B2
Page 12

( 56 ) References Cited PTAB - IPR2021-00673 – Exhibit 1049 — Dell Power Solutions


Aug. 2007 ( excerpted ), Aug. 2007 in 21 pages .
OTHER PUBLICATIONS PTAB - IPR2021-00673 – Exhibit 1050 — communities -vmware -t5
VI-VMware -ESX -3-5 -Discussions, Jun . 28 , 2007 , in 2 pages .
PTAB - IPR2021-00673 — Exhibit 1030 - B . Dowell declaration , dated PTAB - IPR2021-00673 — Exhibit 1051 — Distributed_File_System
Oct. 15 , 2020 , in 3 pages . Virtualization, Jan. 2006 , pp . 45-56 , in 12 pages .
PTAB - IPR2021-00673 — Exhibit 1031 - Vizioncore esxEssentials
Review ZDNet , Aug. 21 , 2007 , in 12 pages . PTAB - IPR2021-00673 — Exhibit 1052 — Distributed File System
PTAB -IPR2021-00673 – Exhibit 1032— ZDNet Search on_howorth Virtualization article abstract, 2006 , in 12 pages .
p . 6_, printed on Jan. 15 , 2021 , ZDNet 3 pages . PTAB - IPR2021-00673 — Exhibit 1053 —Cluster Computing_vol. 9 ,
PTAB - IPR2021-00673 – Exhibit 1033— ZDNet_Reviews_ZDNet , issue 1 , Jan. 2006 in 5 pages .
printed on Jan. 15,02021 , in 33 pages . PTAB - IPR2021-00673 — Exhibit 1054 — redp3939 — Server Consoli
PTAB - IPR2021-00673 – Exhibit 1034_Understanding VMware Con dation with VMware ESX Server, Jan. 12 , 2005 in 159 pages .
solidated Backup , 2007 , 11 pages. PTAB - IPR2021-00673 — Exhibit 1055 – Server Consolidation with
PTAB - IPR2021-00673 — Exhibit 1035—techtarget.com news links VMware ESX Server_Index Page , Jan. 12 , 2005 in 2 pages .
May 2007 , May 20 , 2007 , in 39 pages. PTAB - IPR2021-00673 — Exhibit 1056 — Apr. 21 , 2020 [ 1 ] Com
PTAB - IPR2021-00673 — Exhibit 1036_ITPro 2007 Issue 5 ( excerpted ), plaint, filed Apr. 21 , 2020 , in 300 pages .
Sep.-Oct. 2007 in 11 pages . PTAB - IPR2021-00673 — Exhibit 1057 - Feb . 17 , 2021 ( 0046 ) Sched
PTAB -IPR2021-00673 – Exhibit 1037 — InfoWorld — Feb . 13 , 2006 , uling Order, filed Feb. 17 , 2021 , in 15 pages .
Feb. 13 , 2006 , in 17 pages . PTAB - IPR2021-00673 — Exhibit 1058 — Novell Netware 5.0-5.1 Net
PTAB - IPR2021-00673 – Exhibit 1038 — Info World — Mar. 6 , 2006 , work Administration ( Doering ) , Copyright 2001 in 40 pages .
Mar. 6 , 2006, in 18 pages . PTAB - IPR2021-00673 — Exhibit 1059_US20060064555A1 (Prahlad
PTAB - IPR2021-00673 — Exhibit 1039 InfoWorld — Apr . 10 , 2006 , 555 ) , Publication Date Mar. 23 , 006 , in 33 pages.
Apr. 10 , 2006 , in 18 pages . PTAB - IPR2021-00673 — Exhibit 1060— Carrier Book , 2005 in 94
PTAB - IPR2021-00673 — Exhibit 1040 — Info World — Apr. 17 , 2006 , pages .
Apr. 17 , 2006 , in 4 pages . PTAB - IPR2021-00673 – Exhibit 2001 Jones Declaration, filed Jun .
PTAB - IPR2021-00673 — Exhibit 1041 — InfoWorld — May 1 , 2006 , 30, 2021 , in 35 pages.
May 1 , 2006 , in 15 pages. PTAB - IPR2021-00673 — Exhibit 2002 VM Backup Guide 3.0.1 ,
PTAB -IPR2021-00673 — Exhibit 1042 — InfoWorld — Sep . 25 , 2006 , updated Nov. 21 , 2007 , 74 pages .
Sep. 25 , 2006 , in 19 pages . PTAB - IPR2021-00673 – Exhibit 2003 VM Backup Guide 3.5 , updated
PTAB - IPR2021-00673 — Exhibit 1043 — InfoWorld — Feb . 5 , 2007 , Feb. 21 , 2008, 78 pages.
Feb. 5 , 2007 , in 22 pages . PTAB - IPR2021-00673 — Exhibit 3001 RE_IPR2021-00535, 2021
PTAB - IPR2021-00673 — Exhibit 1044 — InfoWorld — Feb . 12 , 2007 , 00589 , 2021-00590 , 2021-00609 , 2021-00673 , 2021-00674 , 2021
Feb. 12 , 2007 , in 20 pages . 00675 , Aug. 30 , 2021 , in 2 pages .
PTAB -IPR2021-00673— Exhibit 1045 — Information Week — Aug. 14 , PTAB - IPR2021-00673 — Joint Motion to Terminate , filed Aug. 31 ,
2006 , Aug. 14 , 2006 , in 17 pages . 2021 , in 7 pages .
PTAB - IPR2021-00673 — Exhibit 1046 — esxRanger Ably Backs Up PTAB - IPR2021-00673 — Joint Request to Seal Settlement Agree
VMs , May 2 , 2007 in 6 pages . ment, filed Aug. 31 , 2021 , in 4 pages.
PTAB - IPR2021-00673 – Exhibit 1047 — Businesswire — Vizioncore PTAB - IPR2021-00673-673 674 Termination Order, Sep. 1 , 2021 ,
Inc. Releases First Enterprise -Class Hot Backup and Recovery in 4 pages.
Solution for VMware Infrastructure, Aug. 31 , 2006 in 2 pages . PTAB - IPR2021-00673 — Patent Owner Mandatory Notices , filed
PTAB - IPR2021-00673 – Exhibit 1048 — Vizioncore Offers Advice Apr. 7 , 2021 , 6 pages .
to Help Users Understand VCB for VMwar, Jan. 23 , 2007 in 3
pages . * cited by examiner
[FIG. 1A (Sheet 1 of 21): block diagram of environment 100 — a virtual machine storage manager with a data agent 155 and components including a copy manager 150, a LUN driver 152, a virtual machine mount component 154, an integration component 157, and a data analyzer component 160; a primary storage data store 135 holding virtual disks 140a-b; a SAN 130; a network 180; a virtual machine host 105 running virtual machines 110a-b, each with an app and an OS; and a secondary storage data store 175.]

[FIG. 1B (Sheet 2 of 21): block diagram of environment 101 — the same components as FIG. 1A, with the virtual machines storing data locally rather than on SAN-attached storage.]

[FIG. 2 (Sheet 3 of 21): block diagram of another environment — adds a virtual machine manager 200 with virtual machine management and API components (205, 210, 215), managing virtual machine hosts 105a-b and their virtual machines 110a-d.]
[FIG. 3 (Sheet 4 of 21): flow diagram of a copy process — receive an indication to perform a copy (305); determine whether there is a virtual machine manager (310); if so, query the virtual machine manager to determine the virtual machines (325); otherwise select each virtual machine host in turn (315) and query it to determine its virtual machines (320); for each virtual machine (330), copy its data according to the indication (335) and perform other processing of the virtual machine data (337); loop while more virtual machines remain (340) and, if there is no virtual machine manager, while more virtual machine hosts remain (345).]
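The discovery flow recovered from FIG. 3 can be sketched as a short routine. This is an illustrative sketch only — the `list_virtual_machines` interface and the object names are assumptions for the example, not part of the patent or of any vendor API:

```python
def discover_virtual_machines(vm_manager, vm_hosts):
    """Return the virtual machines to copy, per the FIG. 3 flow.

    If a virtual machine manager is available, query it once for all
    the virtual machines it manages or facilitates management of;
    otherwise fall back to querying each virtual machine host directly.
    """
    if vm_manager is not None:
        # Step 325: the manager knows every virtual machine it manages.
        return list(vm_manager.list_virtual_machines())
    discovered = []
    for host in vm_hosts:
        # Steps 315/320: no manager, so query each host in turn.
        discovered.extend(host.list_virtual_machines())
    return discovered
```

Because discovery runs at copy time rather than from a script prepared in advance, transient virtual machines that were created after the copy was scheduled are still found.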
[FIG. 4 (Sheet 5 of 21): example auto-discovery configuration dialog — an option to enable auto discovery (405), with rules to match virtual machine names by regular expression (410a) or by virtual machine host affinity (410b), each with a Configure control (415a, 415b).]

[FIG. 5A (Sheet 6 of 21): example "Backup Set Property" dialog 500 — identifies the virtual machine storage manager (512), application (Virtual Machine Host) (514), instance name (516), and backup set name (518); options to automatically add new virtual machines that don't qualify for membership in any sub-client (520) and rule-based discovery (522) by regular expression (524a) or virtual machine host affinity (524b), each with a Configure control (526a, 526b).]

[FIG. 5B (Sheet 7 of 21): example backup set configuration dialog — a table of discovered virtual machines listing virtual machine host (542), virtual machine name (544), and assigned sub-client name (546), with a Discover control (552), an option to change all selected virtual machines to a given sub-client (554), and an Apply control (556).]

[FIG. 6 (Sheet 8 of 21): example credentials dialog 600 — a table of virtual machine name, user name, and password entries (610) with Add (612), Edit (614), and Remove (616) controls.]
[FIG. 7 (Sheet 9 of 21): flow diagram for copying virtual machine data according to the indication — quiesce the virtual machine file systems (705), create a snapshot of the virtual machine (710), unquiesce the file systems, and determine how to copy according to the indication: file-level, volume-level, or disk-level (720). File-level: determine a mount point on the virtual machine storage manager (722), determine the volumes of the virtual machine (724), mount the determined volumes at the mount point (726), copy the files on the determined volumes to the secondary storage data store (728), unmount the volumes (730), and remove the snapshot (732). Volume-level: the same mount steps (722-726), then extract metadata, copy the determined volumes to the secondary storage data store (734), unmount the volumes (730), and remove the snapshot (732). Disk-level: determine a copy point on the virtual machine storage manager (746), determine the virtual disk and configuration files for the virtual machine (748), copy them to the virtual machine storage manager (750), remove the snapshot (752), extract metadata (754), copy the virtual disk and configuration files to the secondary storage data store, and remove the copied files from the virtual machine storage manager.]
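The copy flow recovered from FIG. 7 can be sketched as follows; the `vm` interface and its method names are illustrative assumptions for the example. The key ordering is that the file systems are quiesced only long enough to take a snapshot, and the copy itself then runs against the snapshot while the virtual machine keeps running:

```python
def copy_virtual_machine(vm, level):
    """Sketch of the FIG. 7 copy flow; returns the ordered actions taken."""
    actions = []
    vm.quiesce_file_systems()
    actions.append("quiesce")            # step 705
    snapshot = vm.create_snapshot()
    actions.append("snapshot")           # step 710
    vm.unquiesce_file_systems()
    actions.append("unquiesce")
    if level in ("file", "volume"):
        # Steps 722-734: mount the snapshot's volumes on the virtual
        # machine storage manager, copy individual files (file-level)
        # or whole volumes (volume-level) to the secondary storage
        # data store, then unmount.
        actions.append("mount-volumes")
        actions.append("copy-%s-level" % level)
        actions.append("unmount-volumes")
    elif level == "disk":
        # Steps 746-754: copy the virtual disk and configuration files
        # themselves, extracting metadata so that individual files can
        # later be restored granularly from the image-level copy.
        actions.append("copy-disk-and-config-files")
        actions.append("extract-metadata")
    vm.remove_snapshot(snapshot)
    actions.append("remove-snapshot")
    return actions
```

Note the design choice the flow embodies: a disk-level copy is as fast as an image-level backup, yet the metadata extracted alongside it preserves file-level granularity for restore.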
[FIG. 8 (Sheet 10 of 21): flow diagram 800 for extracting metadata — access configuration files to determine parent-child relationships between virtual disk files (805), determine relationships between virtual disk files (810), determine how volumes are structured on the virtual disks (815), determine the location of the master file table (820), and store the determined relationships, the determined structure of volumes, and the determined location of the master file table (825).]
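The metadata that FIG. 8 extracts and stores can be modeled as a simple record; the field names here are illustrative assumptions, not terms from the patent. Capturing this metadata at copy time is what later lets a restore rebuild individual files out of a disk-level copy:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractedMetadata:
    """Illustrative container for the metadata stored at step 825."""
    # Steps 805/810: parent-child links between virtual disk files
    # (e.g. a base disk file and its delta/snapshot disk files).
    disk_relationships: dict = field(default_factory=dict)
    # Step 815: how volumes are structured across the virtual disks.
    volume_layout: list = field(default_factory=list)
    # Step 820: the location of each volume's master file table.
    mft_locations: dict = field(default_factory=dict)
```

A restore at file granularity (FIG. 9) would walk `disk_relationships` to reconstruct the virtual disks, `volume_layout` to reconstruct the volumes, and `mft_locations` to enumerate the files on each volume.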
[FIG. 9 (Sheet 11 of 21): flow diagram for restoring virtual machine data — receive an indication to restore a file, volume, virtual disk, or virtual machine (905); determine how the copy was performed: file-level, volume-level, or disk-level; mount the copy set (915). File-level: restore the file (920). Volume-level: access the stored metadata (945), reconstruct files using the master file table, and restore the file (920) or volume (930). Disk-level: access the stored metadata (945), reconstruct the virtual disk using the relationships between virtual disk files (950), reconstruct the volumes using the determination of how volumes are structured on the virtual disks (955), reconstruct files using the master file table, and restore a file (920), a volume (930), the virtual machine (965), or one or more virtual disks (970).]
[FIG. 10 (Sheet 12 of 21): block diagram of an example data storage enterprise 1050 — a storage manager 1005 with a management agent 1031, a jobs agent 1025, an interface agent 1030, and an index; clients 1070, each with a data agent 1095 and a metadata base 1060; primary storage holding sub-client data; and secondary storage computing devices 1065 with single-instance databases writing to secondary storage devices.]
[FIG. 11 (Sheet 13 of 21): example "Browse Options" dialog 1100 — browse the latest data or specify a browse time and time zone (1105); select the virtual machine storage manager and secondary storage computing device (1115); choose image browsing and the type of intended restore: individual files/folders, entire volume, or virtual machines/virtual disks (1120).]

[FIG. 12 (Sheet 14 of 21): example browse-and-restore interface 1200 — a tree of backup sets, virtual machines, volumes, and folders (1205, 1208) alongside folder contents, and a "Restore Options" dialog 1250 offering Restore ACLs (1210), Unconditional Overwrite (1215), destination computer and folder selection (1220), and source-path preservation options (1225).]

[FIGS. 13A-13B (Sheets 15-16 of 21): example "Restore Options" dialogs 1300 — restore as a physical volume, VHD files, or VMDK files (1305); destination computer selection (1310); a source-volume to destination-volume mapping (1315, 1320, 1340) with a mount-point browse dialog (1325); and destination folder selection with Browse (1330, 1335).]

[FIGS. 14A-14B (Sheets 17-18 of 21): example "Restore Options" dialogs 1400 — restore as virtual machines or virtual disks (1405, 1410); destination computer (1430) and destination folder (1435); virtual machine restore options (1415) including virtual machine name (1420), server name, restore via a virtual machine manager or directly to a virtual machine host (1425, 1440), and authentication credentials (1445).]

[FIG. 15 (Sheet 19 of 21): example "Sub-client Properties" dialog 1500 — identifies the virtual machine storage manager, data agent, backup set, and sub-client name, with a number-of-data-readers setting (1505); a copy type of file level, volume level, or disk level (1515); an option to keep snapshots between failed attempts for disk-level restartability (1520); and selection of which virtual machine storage manager to use.]
[FIG. 16 (Sheet 20 of 21): flow diagram 1600 of a block-level copy process — access the virtual disk's internal data structures (1605), determine the blocks used in the virtual disk (1610), access a block identifier data structure (1615), and generate an identifier for each used block (1620); if the identifier differs from the block's previous identifier (1625), copy the block (1630) and update the block identifier data structure with the new identifier (1635); repeat while more blocks remain (1640).]
[FIG. 17 (Sheet 21 of 21): example block identifier data structure 1700 — a table mapping each block identifier (1702), e.g. blocks 490, 491, and 492, to a substantially unique identifier (1704), e.g. 0xA1B3FG, 0xFG329A, and 0xC1D839 (1706).]
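Taken together, FIGS. 16 and 17 describe change detection by per-block fingerprints: a block is copied only when its freshly computed identifier differs from the one recorded in the block identifier data structure. A minimal sketch, assuming SHA-256 as the "substantially unique identifier" (the figures do not mandate a particular function), with the callables standing in for disk and secondary-storage I/O:

```python
import hashlib

def copy_changed_blocks(used_blocks, read_block, block_ids, copy_block):
    """Copy only blocks whose identifier changed since the last copy.

    used_blocks: block numbers in use on the virtual disk (step 1610)
    read_block:  callable returning a block's bytes
    block_ids:   dict mapping block number -> previous identifier,
                 i.e. the FIG. 17 data structure (step 1615)
    copy_block:  callable that copies one block to secondary storage
    """
    copied = []
    for block_no in used_blocks:
        data = read_block(block_no)
        ident = hashlib.sha256(data).hexdigest()   # step 1620
        if block_ids.get(block_no) != ident:       # step 1625
            copy_block(block_no, data)             # step 1630
            block_ids[block_no] = ident            # step 1635
            copied.append(block_no)
    return copied
```

On the first pass every used block is copied; on later passes only blocks whose contents changed are copied, which is what makes repeated disk-level copies incremental.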
CLASSIFICATION OF VIRTUALIZATION DATA

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/679,560 filed Aug. 17, 2017, which is a continuation of U.S. patent application Ser. No. 14/275,381 filed May 12, 2014, now U.S. Pat. No. 9,740,723, which is a divisional of U.S. patent application Ser. No. 13/667,890 filed Nov. 2, 2012, now U.S. Pat. No. 8,725,973, which is a divisional of U.S. patent application Ser. No. 12/553,294 filed Sep. 3, 2009, now U.S. Pat. No. 8,307,177, which claims priority to U.S. Provisional Patent Application No. 61/094,753 filed Sep. 5, 2008, U.S. Provisional Patent Application No. 61/121,383 filed Dec. 10, 2008, and U.S. Provisional Patent Application No. 61/169,515 filed Apr. 15, 2009, each of which is incorporated by reference herein in its entirety.

BACKGROUND

In general, virtualization refers to the simultaneous hosting of one or more operating systems on a physical computer. Such virtual operating systems and their associated virtual resources are called virtual machines. Virtualization software sits between the virtual machines and the hardware of the physical computer. One example of virtualization software is ESX Server, by VMware, Inc. of Palo Alto, Calif. Other examples include Microsoft Virtual Server and Microsoft Windows Server Hyper-V, both by Microsoft Corporation of Redmond, Wash., and Sun xVM by Sun Microsystems Inc. of Santa Clara, Calif.

Virtualization software provides to each virtual operating system virtual resources, such as a virtual processor, virtual memory, a virtual network device, and a virtual disk. Each virtual machine has one or more virtual disks. Virtualization software typically stores the data of virtual disks in files on the file system of the physical computer, called virtual machine disk files (in the case of VMware virtual servers) or virtual hard disk image files (in the case of Microsoft virtual servers). For example, VMware's ESX Server provides the Virtual Machine File System (VMFS) for the storage of virtual machine disk files. A virtual machine reads data from and writes data to its virtual disk much the same way that an actual physical machine reads data from and writes data to an actual disk.

Traditionally, virtualization software vendors have enabled the backup of virtual machine data in one of two ways. A first method requires the installation of backup software on each virtual machine having data to be backed up, and typically uses the same methods used to back up the data of physical computers. A second method backs up the files that store the virtual disks of the virtual machines, and may or may not require the installation of backup software on each virtual machine whose data is to be backed up.

As an example of the second method, VMware Consolidated Backup (VCB), also by VMware, Inc., enables the backup of the data of virtual machines on ESX Server without having to install backup software on the virtual machines. VCB consists of a set of utilities and scripts that work in conjunction with third-party backup software to back up virtual machine data. VCB and the third-party backup software are typically installed on a backup proxy server that uses the Microsoft Windows Server 2003 operating system by Microsoft Corporation. VCB supports file-level backups (backups at the level of files and directories) for virtual machines using Microsoft Windows operating systems. In a file-level backup, the granularity of the backup is at the level of individual files and/or directories of the virtual machine. A file-level backup allows copies of individual files on virtual disks to be made. File-level backups can be full backups, differential backups, or incremental backups.

VCB also supports image-level backups for virtual machines using any operating system (e.g., Microsoft Windows operating systems, Linux operating systems, or other operating systems that may be installed upon ESX Server). In an image-level backup, the granularity of the backup is at the level of a virtual machine (i.e., the entire virtual machine, including its current state, is backed up). For an image-level backup, typically the virtual machine is suspended, all virtual disk and configuration files associated with the virtual machine are backed up, and then the virtual machine is resumed.

An administrator would typically choose to perform a file-level backup of a Microsoft Windows virtual machine because of the potential need to restore individual files or directories from the backed-up virtual machine. However, VCB may not perform a file-level backup of a Microsoft Windows virtual machine as quickly as an image-level backup. Accordingly, a system that enables a backup of a Microsoft Windows virtual machine to be performed at least as quickly as a file-level backup and that enables granular restoration of any data (e.g., individual files or directories) from the backed-up virtual machine would have significant utility.

Because VCB only supports file-level backups for virtual machines using Microsoft Windows operating systems, a file-level backup cannot be performed using VCB for virtual machines using other operating systems (e.g., Linux operating systems). An administrator must back up a non-Microsoft Windows virtual machine using an image-level backup. Therefore, in order to granularly restore data (e.g., an individual file or directory) from the backed-up non-Microsoft Windows virtual machine, the entire virtual machine must be restored. This may require overwriting the original virtual machine with the backed-up virtual machine, or re-creating the original virtual machine on a different physical machine. This may be a laborious and time-intensive process, and may result in loss of virtual machine data. Accordingly, a system that enables the granular restoration of any data (e.g., individual files or directories) within a virtual machine using any type of operating system would have significant utility.

Another challenge posed by the use of VCB to perform backups of virtual machines is that such backups require an administrator to manually identify or specify the virtual machines that are to be backed up, typically via a script created in advance of the backup operation. However, because virtual machines may be easily set up and torn down, they may be less permanent in nature than actual physical machines. Due to this potential transience of virtual machines, it may be more difficult for the administrator to identify, in advance of the backup operation, all of the virtual machines that are to be backed up. Accordingly, a system that provides automatic identification, at the time of the backup operation, of the virtual machines that are to be backed up would have significant utility.

The need exists for a system that overcomes the above problems, as well as one that provides additional benefits. Overall, the examples herein of some prior or related systems and their associated limitations are intended to be illustrative and not exclusive. Other limitations of existing or prior systems will become apparent to those of skill in the art upon reading the following Detailed Description.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A and 1B are block diagrams illustrating environments in which aspects of the invention may be configured to operate.

FIG. 2 is a block diagram illustrating another environment in which aspects of the invention may be configured to operate.

FIG. 3 is a flow diagram illustrating a process for discovering one or more virtual machines.

FIGS. 4-6 are display diagrams illustrating example interfaces provided by aspects of the invention.

FIG. 7 is a flow diagram illustrating a process for copying virtual machine data.

FIG. 8 is a flow diagram illustrating a process for extracting metadata from virtual volumes and/or virtual disk and configuration files.

FIG. 9 is a flow diagram illustrating a process for restoring virtual machine data.

FIG. 10 is a block diagram illustrating an example of a data storage enterprise that may employ aspects of the invention.

FIGS. 11-15 are display diagrams illustrating example interfaces provided by aspects of the invention.

FIG. 16 is a flow diagram illustrating a process for copying virtual machine data.

FIG. 17 is a diagram illustrating a suitable data structure that may be employed by aspects of the invention.

DETAILED DESCRIPTION

The headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claimed invention.

Overview

Described in detail herein is a method of copying data of one or more virtual machines being hosted by one or more non-virtual machines. The method includes receiving an indication that specifies how to perform a copy of data of one or more virtual machines hosted by one or more virtual machine hosts. The method further includes determining whether the one or more virtual machines are managed by a virtual machine manager that manages or facilitates management of the virtual machines. If so, the virtual machine manager is dynamically queried to automatically determine the virtual machines that it manages or that it facilitates . . .

. . . ration files to the proxy server, extracting metadata from the virtual disk and configuration files, and copying the virtual disk and configuration files and the extracted metadata to the secondary storage data store.

Various examples of aspects of the invention will now be described. The following description provides specific details for a thorough understanding and enabling description of these examples. One skilled in the relevant art will understand, however, that aspects of the invention may be practiced without many of these details. Likewise, one skilled in the relevant art will also understand that aspects of the invention may have many other obvious features not described in detail herein. Additionally, some well-known structures or functions may not be shown or described in detail below, so as to avoid unnecessarily obscuring the relevant description.

The terminology used below is to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of aspects of the invention. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section.

Unless described otherwise below, aspects of the invention may be practiced with conventional data processing systems. Thus, the construction and operation of the various blocks shown in FIGS. 1A, 1B and 2 may be of conventional design, and need not be described in further detail herein to make and use aspects of the invention, because such blocks will be understood by those skilled in the relevant art. One skilled in the relevant art can readily make any modifications necessary to the blocks in FIGS. 1A, 1B and 2 (or other embodiments or figures) based on the detailed description provided herein.

Aspects of the invention will now be described in detail with respect to FIGS. 1 through 17. FIGS. 1A, 1B and 2 are block diagrams illustrating various environments in which aspects of the invention may be configured to operate. FIG. 1A illustrates aspects of the invention interacting with virtual machines (e.g., VMware virtual machines or Microsoft virtual machines) storing data on a storage device connected to the virtual machine via a Storage Area Network (SAN), and FIG. 1B illustrates aspects of the invention interacting with virtual machines storing data locally. FIG. 2 illustrates aspects of the invention interacting with a virtual machine manager (e.g., a VMware Virtual Center server or a Microsoft System Center Virtual Machine Manager), which manages virtual machines. FIG. 3 is a flow diagram illustrating a process for discovering one or more virtual machines in one or more of the environments illustrated in
management of. If not , a virtual machine host is dynamically FIGS . 1A , 1B and 2 ( or in other environments ).
queried to automatically determine the virtual machines that FIGS . 4-6 are display diagrams illustrating example inter
it hosts . The data of each virtual machine is then copied faces provided by aspects of the invention . An administrator
according to the specifications of the received indication . 55 (or other user) may use the example interfaces to administer
Under one example of the method , a file - level, volume- storage operations, such as the copying of data of virtual
level or disk - level copy of aa virtual machine is performed. machines. FIG . 7 is a flow diagram illustrating a process
Performing a file - level copy involves determining volumes used by aspects of the invention to copy data of a virtual
of the virtual machine , mounting the volumes on a proxy machine . FIG . 8 is a flow diagram illustrating a process for
server ,and copying files from the volumes mounted on the 60 extracting metadata from virtual volumes and/ or virtual disk
proxy server to a secondary storage data store . Performing and configuration files. FIG . 9 is a flow diagram illustrating
a volume - level copy involves determining volumes of the a process for restoring virtual machine data . FIG . 10 is a
virtual machine, mounting the volumes on a proxy server, block diagram illustrating an example of a data storage
and copying the volumes mounted on the proxy server to the enterprise in which aspects of the invention may be config
secondary storage data store . Performing a disk - level copy 65 ured to operate .
involves determining virtual disk and configuration files of FIGS . 11 , 12 , 13A , 13B , 14A , and 14B are display
the virtual machine , copying the virtual disk and configu- diagrams illustrating example interfaces provided by aspects
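The discovery-and-copy flow summarized in the Overview above can be sketched in Python. This is an illustrative sketch only; the function names, the `indication` dictionary, and the `copy_fn` callback are invented for illustration and are not part of the patented system:

```python
# Illustrative sketch of the Overview's flow: prefer the virtual machine
# manager for discovery when one exists, otherwise query each virtual
# machine host directly, then copy every discovered virtual machine as
# the received indication specifies. All names here are hypothetical.

def discover_virtual_machines(manager, hosts):
    """Return the virtual machines to copy."""
    if manager is not None:
        # The manager knows the VMs it manages or facilitates management of.
        return list(manager.list_virtual_machines())
    vms = []
    for host in hosts:
        # No manager available: ask each host which VMs it hosts.
        vms.extend(host.list_virtual_machines())
    return vms


COPY_LEVELS = ("file", "volume", "disk")


def perform_copy(indication, manager, hosts, copy_fn):
    """Copy each discovered VM's data per the indication.

    `copy_fn(vm, level)` stands in for the file-, volume-, or
    disk-level copy routines described in the text."""
    level = indication["level"]
    if level not in COPY_LEVELS:
        raise ValueError("unknown copy level: %s" % level)
    copied = []
    for vm in discover_virtual_machines(manager, hosts):
        copy_fn(vm, level)
        copied.append(vm)
    return copied
```

In this sketch the manager, when present, is treated as the authoritative source of the virtual machine list, mirroring the "if so / if not" branching of the Overview.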
US 11,436,210 B2
FIGS. 11, 12, 13A, 13B, 14A, and 14B are display diagrams illustrating example interfaces provided by aspects of the invention. The administrator may also use these example interfaces to administer storage operations, such as the restoration of data previously copied from virtual machines. FIG. 16 is a flow diagram illustrating a process that may be used in a storage operation to perform incremental copies of blocks of virtual machine data. FIG. 17 is a diagram illustrating a suitable data structure that may be used during the process of FIG. 16.

Suitable Environments

FIGS. 1A, 1B and 2 and the discussion herein provide a brief, general description of certain exemplary suitable computing environments in which aspects of the invention can be implemented. Although not required, aspects of the invention are described in the general context of computer-executable instructions, such as routines executed by a general-purpose computer, e.g., a server computer, wireless device, or personal computer. Those skilled in the relevant art will appreciate that aspects of the invention can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, hand-held devices (including personal digital assistants (PDAs)), wearable computers, all manner of wireless devices, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, and the like. Indeed, the terms "computer," "host," and "host computer" are generally used interchangeably herein, and refer to any of the above or similar devices and systems, as well as any data processor.

Aspects of the invention can be embodied in a special-purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions explained in detail herein. Aspects of the invention can also be practiced in distributed computing environments where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network (LAN), a Wide Area Network (WAN), a SAN, a Fibre Channel network, or the Internet. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

Aspects of the invention may be stored or distributed on tangible computer-readable media, including magnetically or optically readable computer discs, hard-wired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other tangible or physical data storage media. In some aspects of the system, computer implemented instructions, data structures, screen displays, and other data under aspects of the invention may be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme). Those skilled in the relevant art will recognize that portions of aspects of the invention may reside on a server computer, while corresponding portions reside on a client computer such as a mobile or portable device, and thus, while certain hardware platforms are described herein, aspects of the invention are equally applicable to nodes on a network.

FIG. 1A is a block diagram illustrating an environment 100 in which aspects of the invention may be configured to operate. The environment 100 includes a virtual machine host 105 operating on or being hosted by a computing device 125, which may be a server. The environment 100 also includes a primary storage data store 135 connected to the computing device 125 via a SAN 130. The environment 100 also includes a virtual machine storage manager 145 operating on or being hosted by another computing device 170, which may be another server, and a secondary storage data store 175 connected to the computing device 170. The computing devices 125 and 170 are connected to each other via a network 180, which may be a LAN, a WAN, the public Internet, some other type of network, or some combination of the above.

The virtual machine host 105 hosts one or more virtual machines 110 (shown individually as virtual machines 110a and 110b). Each virtual machine 110 has its own operating system 120 (shown individually as operating systems 120a and 120b) and one or more applications 115 executing on the operating system or loaded on the operating system (shown individually as applications 115a and 115b). The operating systems 120 may be any type of operating system 120 (e.g., Microsoft Windows 95/98/NT/2000/XP/2003/2008, Linux operating systems, Sun Solaris operating systems, UNIX operating systems, etc.) that can be hosted by the virtual machine host 105. The applications 115 may be any applications (e.g., database applications, file server applications, mail server applications, web server applications, transaction processing applications, etc.) that may run on the operating systems 120. The virtual machines 110 are also connected to the network 180.

The computing device 125 is connected to the primary storage data store 135 via the SAN 130, which may be any type of SAN (e.g., a Fibre Channel SAN, an iSCSI SAN, or any other type of SAN). The primary storage data store 135 stores the virtual disks 140 (shown individually as virtual disks 140a and 140b) of the virtual machines 110 hosted by the virtual machine host 105. Virtual disk 140a is used by virtual machine 110a, and virtual disk 140b is used by virtual machine 110b. Although each virtual machine 110 is shown with only one virtual disk 140, each virtual machine 110 may have more than one virtual disk 140 in the primary storage data store 135. As described in more detail herein, a virtual disk 140 corresponds to one or more files (e.g., one or more *.vmdk or *.vhd files) on the primary storage data store 135. The primary storage data store 135 stores a primary copy of the data of the virtual machines 110. Additionally or alternatively, the virtual disks 140 may be stored by other storage devices in the environment 100.

A primary copy of data generally includes a production copy or other "live" version of the data that is used by a software application and is generally in the native format of that application. Primary copy data may be maintained in a local memory or other high-speed storage device (e.g., on the virtual disks 140 located in the primary storage data store 135) that allows for relatively fast data access if necessary. Such primary copy data may be intended for short-term retention (e.g., several hours or days) before some or all of the data is stored as one or more secondary copies, for example, to prevent loss of data in the event a problem occurs with the data stored in primary storage.

In contrast, secondary copies include point-in-time data and are typically intended for long-term retention (e.g., weeks, months, or years depending on retention criteria, for example, as specified in a storage or retention policy) before some or all of the data is moved to other storage or discarded. Secondary copies may be indexed so users can browse and restore the data at another point in time. After certain primary copy data is backed up, a pointer or other location indicia, such as a stub, may be placed in the primary copy to indicate the current location of that data.
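The stub mechanism just described can be pictured with a small sketch. The data structure below is hypothetical (the field names and the dictionary standing in for primary storage are invented for illustration); it only shows the idea of leaving location indicia behind once data has been copied to secondary storage:

```python
from dataclasses import dataclass


@dataclass
class Stub:
    """Hypothetical location indicia left in the primary copy after the
    underlying data has been copied to secondary storage."""
    original_path: str       # where the data lived in primary storage
    secondary_location: str  # current location of the copied data
    copied_at: str           # when the storage operation ran


def replace_with_stub(primary_store, path, secondary_location, copied_at):
    """Swap the primary data for a stub pointing at the secondary copy."""
    primary_store[path] = Stub(path, secondary_location, copied_at)


def current_location(primary_store, path):
    """Return where the data for `path` currently resides."""
    entry = primary_store[path]
    if isinstance(entry, Stub):
        return entry.secondary_location  # follow the pointer
    return path                          # still live in primary storage
```

A restore operation would follow the stub's `secondary_location` to retrieve the data from the secondary copy.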
The secondary storage data store 175 stores one or more secondary copies of the data of the virtual machines 110.

The virtual machine storage manager 145 includes a virtual machine storage operation component 150, which includes a Virtual Logical Unit Number (VLUN) driver 152 (for accessing virtual disks 140, described in more detail herein) and a virtual machine mount component 154 (for mounting virtual machines, described in more detail herein). The virtual machine storage manager 145 also includes a data agent 155. The data agent 155 includes an integration component 157 that provides functionality for the virtual machine storage operation component 150. The data agent 155 also includes a virtual disk analyzer component 160 that examines the virtual disk and configuration files corresponding to the virtual disks 140 and extracts metadata from the virtual disk and configuration files. For example, the integration component 157 may include a set of scripts that the data agent 155 causes to be run prior to, during, and/or following a copy of virtual machine data. As another example, the integration component 157 may be a component that encapsulates or wraps the virtual machine mount component 154 and provides an Application Programming Interface (API) with functions for accessing the virtual machine mount component 154. The virtual machine storage manager 145 also includes a data store 165 that maintains data used by the virtual machine storage manager 145, such as data used during storage operations, and configuration data.

The secondary storage data store 175 is connected to the computing device 170. The secondary storage data store 175 may be any type of storage suitable for storing one or more secondary copies of data, such as Directly-Attached Storage (DAS) such as hard disks, storage devices connected via another SAN (e.g., a Fibre Channel SAN, an iSCSI SAN, or any other type of SAN), Network-Attached Storage (NAS), a tape library, optical storage, or any other type of storage. The secondary storage data store 175 stores virtual machine data that is copied by the virtual machine storage manager 145. Accordingly, the secondary storage data store 175 stores one or more secondary copies of the data of the virtual machines 110. A secondary copy can be in one or more various formats (e.g., a copy set, a backup set, an archival set, a migration set, etc.).

FIG. 1B is a block diagram illustrating another environment 101 in which aspects of the invention may be configured to operate. The environment 101 is substantially the same as the environment 100 illustrated in FIG. 1A, except that primary storage data store 135 resides in the computing device 125 hosting the virtual machine host 105 (the primary storage data store 135 is local storage). The local primary storage data store 135 includes a virtual disk 140a for use by virtual machine 110a, and a virtual disk 140b for use by virtual machine 110b. In addition to or as an alternative to the primary storage data stores 135 illustrated in FIGS. 1A and 1B, the virtual machine host 105 may use other methods of storing data, such as Raw Device Mapping (RDM) on a local or network-attached device (NAS) or on storage devices connected via another SAN.

FIG. 2 is a block diagram illustrating yet another environment 200 in which aspects of the invention may be configured to operate. The environment 200 includes two computing devices 125 (shown individually as computing devices 125a and 125b), each hosting a virtual machine host 105 (shown individually as virtual machine hosts 105a and 105b). The primary storage data store 135 includes two additional virtual disks 140c and 140d that store the data of virtual machines 110c and 110d, respectively.

The environment 200 also includes a virtual machine manager 202 operating on a computing device 215 (e.g., a server). The virtual machine manager 202 includes a virtual machine management component 205 which enables administrators (or other users with the appropriate permissions; the term administrator is used herein for brevity) to manage the virtual machines 110. The virtual machine manager 202 also includes an Application Programming Interface (API) component 210, which provides functions that enable the data agent 155 to programmatically interact with the virtual machine manager 202 and the virtual machines 110. The virtual machine hosts 105 may also each include an API component. The virtual machine manager 202 and/or the virtual machine hosts 105 may expose or provide other APIs not illustrated in FIGS. 1A, 1B or 2, such as an API for accessing and manipulating virtual disks 140, and APIs for performing other functions related to management of virtual machines 110.

The environments 100, 101 and 200 may include components other than those illustrated in FIGS. 1A, 1B and 2, respectively, and the components illustrated may perform functions other than or in addition to those described herein. For example, the virtual machine storage manager 145 may include a public key certificate (e.g., an X.509 public key certificate) that the virtual machine storage manager 145 provides to the virtual machine host 105 or the virtual machine manager 202. The virtual machine host 105 or the virtual machine manager 202 can then use the X.509 public key of the certificate to encrypt data that is to be transmitted to the virtual machine storage manager 145. As another example, the network 180 may include a firewall that sits between the virtual machine host 105 and the virtual machine storage manager 145, and data being copied may have to pass through the firewall. If this is the case, the virtual machine storage manager 145 may use the systems and methods described in commonly-assigned U.S. patent application Ser. No. 10/818,747 (entitled SYSTEM AND METHOD FOR PERFORMING STORAGE OPERATIONS THROUGH A FIREWALL), the entirety of which is incorporated by reference herein.

As another example, a secondary storage computing device (which is described in more detail herein, e.g., with reference to FIG. 10) may be connected to the virtual machine storage manager 145 and to the secondary storage data store 175. The secondary storage computing device may assist in the transfer of copy data from the virtual machine storage manager 145 to the secondary storage data store 175. The secondary storage computing device may perform functions such as encrypting, compressing, single or variable instancing, and/or indexing data that is transferred to the secondary storage data store 175. As another example, one or more agents (e.g., a file system agent and/or a proxy host agent) as well as a set of utilities (e.g., VMware Tools if the virtual machines 110 are VMware virtual machines) may reside on each virtual machine 110 to provide functionality associated with copying and restoring virtual machine data. As another example, the environments 100, 101 and 200 may include components or agents that perform various functions on virtual machine and other data, such as classifying data, indexing data, and single or variable instancing or deduplicating data at different phases of storage operations performed on virtual machine and other data.
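The single-instancing (deduplication) function mentioned above can be illustrated with a toy content-hash store. This sketches the general idea only, not the mechanisms of the incorporated single-instancing applications; the class and method names are invented for illustration:

```python
import hashlib


class SingleInstanceStore:
    """Toy single-instance store: identical data objects are physically
    kept once, keyed by content hash; duplicates only bump a reference
    count. Names here are illustrative, not from the patent."""

    def __init__(self):
        self._blocks = {}  # digest -> stored data (one instance each)
        self._refs = {}    # digest -> reference count

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._blocks:
            self._blocks[digest] = data  # first instance: store it
        self._refs[digest] = self._refs.get(digest, 0) + 1
        return digest                    # caller indexes the copy by digest

    def get(self, digest: str) -> bytes:
        return self._blocks[digest]

    def instances_stored(self) -> int:
        return len(self._blocks)
```

Storing the same file from many virtual machines therefore consumes the space of a single instance, which is the property the single-instance storage devices described below provide.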
As another example, the secondary storage data store 175 may include one or more single instance storage devices that store only a single instance of multiple instances of data (e.g., only a single instance of multiple instances of identical files or data objects stored on one or more computing devices). If this is the case, the secondary storage data store 175 may include one or more single instance storage devices as described in one or more of the following commonly-assigned U.S. patent applications: 1) U.S. patent application Ser. No. 11/269,512 (entitled SYSTEM AND METHOD TO SUPPORT SINGLE INSTANCE STORAGE OPERATIONS); 2) U.S. patent application Ser. No. 12/145,347 (entitled APPLICATION-AWARE AND REMOTE SINGLE INSTANCE DATA MANAGEMENT); 3) U.S. patent application Ser. No. 12/145,342 (entitled APPLICATION-AWARE AND REMOTE SINGLE INSTANCE DATA MANAGEMENT); 4) U.S. patent application Ser. No. 11/963,623 (entitled SYSTEM AND METHOD FOR STORING REDUNDANT INFORMATION); 5) U.S. patent application Ser. No. 11/950,376 (entitled SYSTEMS AND METHODS FOR CREATING COPIES OF DATA SUCH AS ARCHIVE COPIES); or 6) U.S. Pat. App. No. 61/100,686 (entitled SYSTEMS AND METHODS FOR MANAGING SINGLE INSTANCING DATA), each of which is incorporated by reference herein in its entirety.

As a further example, the secondary storage data store 175 may include one or more variable instance storage devices that store a variable number of instances of data (e.g., a variable number of instances of identical files or data objects stored on one or more computing devices). If this is the case, the secondary storage data store 175 may include one or more variable instance storage devices as described in the following commonly-assigned U.S. Pat. App. No. 61/164,803 (entitled STORING A VARIABLE NUMBER OF INSTANCES OF DATA OBJECTS).

Example Layouts of Virtual Disks

Virtual disks 140, as used in the systems described in FIGS. 1A, 1B and 2, may have various configurations. As previously described, a virtual disk 140 corresponds to one or more virtual disk files (e.g., one or more *.vmdk or *.vhd files) on the primary storage data store 135. A virtual machine host 105 may support several types of virtual disks 140. For example, a virtual disk 140 may be either: 1) a growable virtual disk 140 contained in a single virtual disk file that can grow in size (e.g., a monolithic sparse virtual disk that starts at 2 GB and grows larger); 2) a growable virtual disk 140 split into multiple virtual disk files (e.g., a split sparse virtual disk comprising multiple 2 GB virtual disk files), the aggregation of which can grow in size by adding new virtual disk files; 3) a preallocated virtual disk 140 contained in a single virtual disk file (e.g., a monolithic flat virtual disk, the size of which does not change); or 4) a preallocated virtual disk 140 split into multiple virtual disk files (e.g., a split flat virtual disk comprising multiple 2 GB virtual disk files, the number of which and the size of each of which does not change). Where a virtual disk 140 is split into multiple virtual disk files, each individual virtual disk file is called an extent. A virtual machine host 105 may also support types of virtual disks 140 other than these types. Those of skill in the art will understand that a virtual disk 140 can be structured in a wide variety of configurations, and that virtual disks 140 are not limited to the configurations described herein.

A virtual machine host 105 may support snapshotting, or taking a snapshot of a virtual machine 110. The virtual machine host 105 can snapshot a virtual machine 110 in a linear fashion (in which there is only one branch of snapshots from the original state of the virtual machine 110, and each snapshot in the branch linearly progresses from prior snapshots) or in a process tree (in which there are multiple branches of snapshots from the original state of the virtual machine 110, and two snapshots may or may not be in the same branch from the original state of the virtual machine 110). When a snapshot is taken of a virtual machine 110, the virtual machine 110 stops writing to its virtual disks 140 (e.g., stops writing to the one or more *.vmdk files). The virtual machine 110 writes future writes to a delta disk file (e.g., a *delta.vmdk file) using, for example, a copy-on-write (COW) semantic. As the virtual machine host 105 can snapshot a virtual machine 110 repeatedly, there can be multiple delta disk files. The virtual disk and delta disk files can be analogized to links in a chain. Using this analogy, the original disk file is a first link in the chain. A first child delta disk file is a second link in the chain, and a second child delta disk file is a third link in the chain, and so forth.

Also as previously described, a virtual machine 110 generally has associated configuration files that a virtual machine host 105 uses to store configuration data about the virtual machine 110. These configuration files may include a *.vmx file, which stores data about the parent-child relationships created between virtual disk files and delta disk files when a snapshot of a virtual machine 110 is taken. These configuration files may also include a disk descriptor file (e.g., a *.vmdk file). In some embodiments, instead of using a disk descriptor file, the disk descriptor is embedded into a virtual disk file (e.g., embedded in a *.vmdk file).

The disk descriptor file generally stores data about the virtual disk files that make up a virtual disk 140. This data includes information about the type of the virtual disk 140. For example, the virtual disk 140 may be a monolithic flat virtual disk, a monolithic sparse virtual disk, a split flat virtual disk, a split sparse virtual disk or another type of a virtual disk. This data also includes an identifier of the parent of the virtual disk file, if it has one (if the virtual machine 110 has been snapshotted, its original virtual disk file will have a child virtual disk file), a disk database describing geometry values for the virtual disk 140 (e.g., cylinders, heads and sectors) and information describing the extents that make up the virtual disk 140. Each extent may be described by a line in the disk descriptor file having the following format:

[type of access] [size] [type] [file name of extent]

Following is an example of a line in the disk descriptor file describing an extent:

RW 16777216 VMFS "test-flat.vmdk"

This line describes an extent for which read/write access is allowed, of size 16777216 sectors, of type VMFS (e.g., for use on a primary storage data store 135), and the filename of the virtual disk file "test-flat.vmdk."

A virtual machine host 105 provides an abstraction layer such that the one or more virtual disk files (and any delta disk files) of the virtual disks 140 appear as one or more actual disks (e.g., one or more hard disk drives) to a virtual machine 110. Because the virtual machine host 105 abstracts the virtual disk 140 so that it appears as an actual disk to an operating system 120 executing on the virtual machine 110, the operating system 120 can generally use its standard file system for storing data on a virtual disk 140. The various structures used by the file system and the operating system 120 (e.g., the partition table(s), the volume manager database(s) and the file allocation table(s)) are stored in the one or more virtual disk files that make up a virtual disk 140.
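An extent line in the format described above (e.g., RW 16777216 VMFS "test-flat.vmdk") could be parsed with a sketch like the following. The regular expression assumes the four-field layout shown in the text and the access values RW, RDONLY and NOACCESS; real descriptor files contain other sections, which this sketch ignores:

```python
import re

# One extent line: access mode, size in sectors, extent type, and a quoted
# file name, e.g.  RW 16777216 VMFS "test-flat.vmdk"
_EXTENT_RE = re.compile(r'^(RW|RDONLY|NOACCESS)\s+(\d+)\s+(\S+)\s+"([^"]+)"')


def parse_extent(line):
    """Parse a disk descriptor extent line into a dict, or return None if
    the line is not an extent line (descriptor files hold other data too)."""
    match = _EXTENT_RE.match(line.strip())
    if match is None:
        return None
    access, sectors, extent_type, filename = match.groups()
    return {
        "access": access,         # [type of access], e.g. RW
        "sectors": int(sectors),  # [size], in sectors
        "type": extent_type,      # [type], e.g. VMFS
        "filename": filename,     # [file name of extent]
    }
```

Applied to the example line above, the sketch yields an extent of 16777216 sectors backed by the virtual disk file test-flat.vmdk.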
on the virtual machine host 105. The single virtual disk file For example, an administrator may create a storage policy
may be named < virtual machine name > -flat.vmdk . There for copying data of virtual machines 110 and perform the
would also be a disk descriptor file for the single virtual disk copying of their data to the secondary storage data store 175
file that would typically be named < virtual machine name > according to the storage policy. This storage policy may
.vmdk . A snapshot taken of the virtual machine 110 would 5 specify that the virtual machine storage manager 145 is to
result in an additional delta disk file being created that is a perform a file - level copy of certain files ( e.g. , all files in a
specific directory or satisfying selection criteria ) on multiple
single virtual disk file ( e.g. , a single * .vmdk file ), which is virtual machines 110. As yet another example, the storage
a growable virtual disk 140 (a monolithic sparse virtual policy may
disk ). The delta disk file would typically be named < virtual 10 145 is to specify that the virtual machine storage manager
perform a volume - level copy of all virtual
disk name > »-delta.vmdk , where <## ##> is a
number indicating the sequence of the snapshot. There machines 110 on multiple virtual machine hosts 105. As
another example, the storage policy may specify that the
would also be aa disk descriptor file for the single virtual disk virtual machine storage manager 145 is to perform a disk
file that would typically be named < virtual disk name > level copy of all virtual machines 110 on all virtual machine
< ###### > -.vmdk , again, where < #### is a number indi 15 hosts 105 associated with a virtual machine manager 202 .
cating the sequence of the snapshot. File - level , volume - level and disk- level copying is discussed
Process for Discovering Virtual Machines in more detail herein , for example, with reference to FIG . 7 .
FIG . 3 is a flow diagram illustrating a process for dis- At decision step 310 the data agent 155 determines ( e.g. ,
covering one or more virtual machines 110 (e.g. , for an by reading a stored indication of the virtual machine man
operation to copy their data ) . In general, for ease in describ- 20 ager 202 , or by scanning a network for a virtual machine
ing features of the invention , aspects of the invention will manager 202 ) whether there is a virtual machine manager
now be described in terms of a user ( e.g. , an administrator ) 202 managing the virtual machine hosts 105 and associated
interacting with the server computer via his or her user virtual machines 110. If there is a virtual machine manager
computer. As implemented, however, the user computer 202 , the process 300 continues at step 325 , where the data
receives data input by the user and transmits such input data 25 agent 155 queries the virtual machine manager 202 to
to the server computer. The server computer then queries the determine the virtual machines 110 that it manages and to
database, retrieves requested pages , performs computations receive an ordered or unordered list of virtual machines 110 .
and / or provides output data back to the user computer, The data agent 155 may call a function of the API compo
typically for visual display to the user . Thus, for example, nent 210 to determine the virtual machines 110 managed by
under step 305 , a user provides input specifying that a copy 30 the virtual machine manager 202 and receive an ordered or
operation is to be performed and how to perform the copy unordered list of virtual machines 110 .
operation. The data agent 155 receives this input and per If there is not a virtual machine manager 202 , the process
forms the copy operation according the input. 300 continues at step 315 , where the data agent 155 selects
The process 300 begins at step 305 when the data agent the next virtual machine host 105 , which, on the first loop ,
155 receives an indication specifying that the data agent 155 35 is the first determined virtual machine host 105. The virtual
is to perform a copy operation and how to perform the copy machine hosts 105 may be dynamically determined (e.g. , by
operation. The indication may be received from the admin- scanning the network 180 ) or determined statically (e.g. , by
istrator ( e.g. , a manually - specified indication to perform a reading a stored indication of the virtual machine hosts 105 ) .
copy operation ) or be triggered automatically ( e.g. , by an More details as to the detection of virtual machine hosts 105
automated schedule ). The indication may be received as a 40 and virtual machines 110 are described herein for example ,
result of a storage policy that specifies how and / or when to with reference to FIGS . 4-6 . The steps 325 and 320 are not
copy data from one or more virtual machines 110 to the to be understood as mutually exclusive . For example, the
secondary storage data store 175 . data agent 155 may determine a first set of virtual machines
A storage policy is generally a data structure or other 110 by accessing the virtual machine manager 202 , and a
information source that includes a set of preferences and 45 second set of virtual machines by accessing one or more
other storage criteria associated with performing a storage virtual machine hosts 105 .
operation . The preferences and storage criteria may include, At step 320 the data agent 155 queries the virtual machine
but are not limited to , a storage location , relationships host 105 to determine the virtual machines 110 that it hosts .
between system components, network pathways to utilize in The data agent 155 may call a function of the API compo
a storage operation, retention policies , data characteristics, 50 nent 210 to determine the virtual machines 110 hosted by the
compression or encryption requirements, preferred system virtual machine host 105 and to receive an ordered or
components to utilize in a storage operation , a single- unordered list of virtual machines 110. At step 330 , the data
instancing or variable instancing policy to apply to the data , agent 155 begins looping through the list of virtual machines
and other criteria relating to a storage operation . For 110 that it determined in either or both of steps 320 or 325
example, a storage policy may indicate that certain data is to 55 and selects the next virtual machine 110 on the list, which,
be stored in the secondary storage data store 175 , retained on the first loop , is the first determined virtual machine 110 .
for a specified period of time before being aged to another At step 335 the data agent 155 copies the data of the virtual
tier of secondary storage , copied to the secondary storage machine , for example , according to the indication received
data store 175 using a specified number of data streams, etc. in step 305 , or according to a storage policy. This process is
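By way of rough illustration only (the field names below are invented for this sketch and are not drawn from the specification), a storage policy of the kind described above can be modeled as a simple table of preferences that a storage operation consults at run time:

```python
# Illustrative sketch of a storage policy as a plain data structure.
# All field names are hypothetical; the specification describes the
# policy only abstractly as "a set of preferences and other storage
# criteria associated with performing a storage operation."
storage_policy = {
    "storage_location": "secondary_storage_data_store_175",
    "retention_days": 90,       # retain before aging to another tier
    "data_streams": 4,          # number of streams used for the copy
    "compression": True,
    "encryption": False,
    "instancing": "single",     # single- or variable-instancing policy
}

def retention_expired(policy, age_days):
    """Return True when data is old enough to be aged to another tier."""
    return age_days > policy["retention_days"]
```

A jobs agent of the kind mentioned above could, for example, use such a predicate when deciding which previously copied data to migrate.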
A storage policy may be stored in a database of a storage 60 described in more detail herein , for example, with reference
manager ( see , e.g. , FIG . 10 and accompanying description ), to FIG . 7 .
to archive media as metadata for use in restore operations or At step 337 , other processing of virtual machine data may
other storage operations, or to other locations or components be performed. For example, the data agent 155 ( or another
of the system . The storage manager may include a jobs agent agent, such as a data classification agent) may analyze and
that monitors the status of some or all storage operations 65 classify the virtual machine data . To do so , the data agent
previously performed, currently being performed, or sched- 155 may use techniques such as those described in com
uled to be performed. monly assigned U.S. patent application Ser. No. 11 / 564,119
US 11,436,210 B2
13 14
( entitled SYSTEMS AND METHODS FOR CLASSIFY- formats , such as email or character / code -based formats ,
ING AND TRANSFERRING INFORMATION IN A STOR- algorithm -based formats (e.g. , vector generated ), or matrix
AGE NETWORK) , the entirety of which is incorporated by or bit - mapped formats. While aspects of the invention are
reference herein . As another example, the data agent 155 (or described herein using a networked environment, some or
another agent, such as an indexing agent) may create an 5 all features may be implemented within a single -computer
index of the virtual machine data . To do so , the data agent environment.
155 may use techniques such as those described in com- FIG . 4 is a display diagram illustrating an example
monly - assigned U.S. patent application Ser. No. 11 / 694,869 interface 400 provided by aspects of the invention . The
( entitled METHOD AND SYSTEM FOR OFFLINE interface 400 enables an administrator to specify options for
INDEXING OF CONTENT AND CLASSIFYING 10 the data agent 155 to discover virtual machines 110 for
STORED DATA ), the entirety of which is incorporated purposes of adding them to a sub - client. Clients and sub
herein . As a final example, the data agent 155 may single or purposes of adding them to a sub - client. Clients and sub
variable instance or de - duplicate the virtual machine data. and 5B . The administrator can specify that the data agent
To do so , the data agent 155 may use techniques described 155 is to automatically discover virtual machines 110 by
in one or more of previously - referenced U.S. patent appli- 15 selecting check box 405 , which enables two options . The
cation Ser. Nos . 11 / 269,512 , 12 / 145,347 , 12 / 145,342 , first option , which can be chosen by selecting radio button
11 / 963,623 , 11 / 950,376 , 61 / 100,686 , and 61 / 164,803 . At 410a and using the button 415a labeled " Configure ," speci
decision step 340 , the data agent 155 determines whether fies that the data agent 155 is to discover virtual machine
there are more virtual machines 110 for which the data is to hosts 105 or virtual machines 110 that match a regular
be copied . If so , the data agent 155 returns to step 330 , where 20 expression (e.g. , an expression that describes a set of
the next virtual machine 110 is selected . strings ). The second option, which can be chosen by select
If there are no more virtual machines 110 for which the ing radio button 410b and using the button 415b labeled
data is to be copied ( e.g. , if the data agent 155 has looped “ Configure , ” specifies that the data agent 155 is to discover
through the list of all the virtual machines 110 determined in virtual machines 110 associated with one or more specified
either or both of steps 320 or 325 ) , the process continues at 25 virtual machine hosts 105. The two options allow the
step 345. At decision step 345 , if there is not a virtual administrator to specify one or more criteria (e.g. , based on
machine manager 202 ( e.g. , as determined in decision step names of the virtual machines 110 ) that discovered virtual
310 ) , the data agent 155 determines whether there are more machine 110 should meet in order to be associated with a
virtual machine hosts 105 (e.g. , if more than one virtual storage policy, and this to have storage operations performed
machine hosts 105 was specified in the indication received 30 upon their data . Additionally or alternatively, data of virtual
in step 305 ) . If there are more virtual machine hosts 105 , the machines 110 can be classified or categorized (e.g. , using
data agent 155 returns to step 315. If not , the process 300 techniques described in the previously referenced U.S. pat
concludes . ent application Ser. No. 11 / 564,119 ) and the one or more
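The discovery-and-copy loop of process 300 can be sketched as follows. This is a minimal illustration of the control flow only; the helper objects and the `copy_fn` callback are invented stand-ins for the API component 210 and the copy operation of step 335, not actual interfaces of the described system:

```python
# Hypothetical sketch of process 300: discover virtual machines either
# from a virtual machine manager 202 (step 325) or from each virtual
# machine host 105 (steps 315/320), then copy each one (steps 330-340).
def discover_and_copy(vm_manager, hosts, copy_fn):
    vms = []
    if vm_manager is not None:
        # Step 325: a single query to the virtual machine manager.
        vms.extend(vm_manager["vms"])
    else:
        # Steps 315/320: loop over hosts, querying each for its VMs.
        for host in hosts:
            vms.extend(host["vms"])
    copied = []
    for vm in vms:          # steps 330-340: loop over the combined list
        copy_fn(vm)         # step 335: copy this VM's data
        copied.append(vm)
    return copied
```

As the text notes, steps 320 and 325 are not mutually exclusive; a fuller sketch could merge both sources into one list before the copy loop.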
Interfaces for Configuring Storage Operations for Virtual criteria can use these classifications or categorizations.
Machine Data 35 Detected virtual machines 110 that meet these one or more
Referring to FIGS . 4 through 6 , representative computer criteria ( or having data that meets these one or more criteria )
displays or web pages for configuring storage operations to can be associated with a storage policy. This allows storage
be performed for virtual machine data will now be operations to be performed upon their data .
described . The screens of FIGS . 4 through 6 may be imple- Buttons 418 enable the administrator to confirm or cancel
mented in any of various ways , such as in C ++ or as web 40 the selections and / or view help regarding the interface 400 .
pages in XML (Extensible Markup Language ), HTML (Hy- The interface 400 may enable discovery of virtual machines
per Text Markup Language ), or any other scripts or methods 110 by both regular expression matching and by association
of creating displayable data, such as the Wireless Access with one or more specified virtual machine hosts 105. For
Protocol (" WAP ” ). The screens or web pages provide facili- example, the interface 400 could be configured to discover
ties to present information and receive input data , such as a 45 all virtual machines 110 associated with a specific virtual
form or page with fields to be filled in, pull-down menus or machine host 105 , as well as an additional number of virtual
entries allowing one or more of several options to be machines 110 having names that match a regular expression
selected , buttons , sliders, hypertext links or other known (e.g. , “ virtual /A ” ) .
user interface tools for receiving user input. While certain FIG . 5A is a display diagram illustrating another example
ways of displaying information to users are shown and 50 interface 500 provided by aspects of the invention . Tab 510
described with respect to certain Figures, those skilled in the specifies general options that may be configurable by an
relevant art will recognize that various other alternatives administrator. Tab 510 also specifies a virtual machine
may be employed . The terms “ screen , ” “ web page ” and storage manager 145 , name 512 , an application 514 , an
“ page ” are generally used interchangeably herein . instance name 516 , and a backup set name 518. The name
When implemented as web pages , the screens are stored 55 512 corresponds to the virtual machine storage manager 145
as display descriptions, graphical user interfaces, or other that hosts the data agent 155. The administrator can establish
methods of depicting information on a computer screen one or more sub - clients for the virtual machine storage
( e.g. , commands, links, fonts, colors , layout , sizes and manager 145. A sub -client is a portion of a client, and can
relative positions , and the like) , where the layout and contain either all of the client's data or a designated subset
information or content to be displayed on the page is stored 60 thereof. A default sub - client may be established for the data
in a database typically connected to a server. In general, a agent 155 that provides for protection of substantially all of
" link ” refers to any resource locator identifying a resource the client's data ( e.g. , the data of the virtual machines 110 ) .
on a network , such as a display description provided by an Protection of data generally refers to performing a storage
organization having a site or node on the network . A " display operation on a primary copy of data to produce one or more
description , ” as generally used herein , refers to any method 65 secondary copies of the data . Storage operations performed
of automatically displaying information on a computer to protect data may include copy operations , backup opera
screen in any of the above -noted formats, as well as other tions , snapshot operations, Hierarchical Storage Manage
ment ( HSM ) operations, migration operations, archive list box 570 listing three different sub - clients that may be
operations, and other types of storage operations known to selected for the virtual machine 110 named
those of skill in the art. " rack0102rh4x64.” The virtual machine 110 is currently part
An administrator can also establish additional sub - clients of the sub -client“ Sub -client_test ,” but other sub - clients may
to provide a further level of protection of virtual machine 5 be selected . The virtual machines 110 that are part of the
data. For example, for a virtual machine 110 upon which is same sub - client have the same storage policy applied to
loaded a mail application (e.g. , a Microsoft Exchange mail protect their data .
server) and a database application ( e.g. , an Oracle database Other virtual machines 110 that are part of the groups 562
application ), the administrator could establish one sub - client or 564 are shown as being part of other sub - clients, such as
for protection of the data of the mail application (e.g. , user 10 the virtual machine 110 named “ VM2” that is part of a
mailboxes ) and one sub - client for protection of the data of sub -client 572 named “ Database_SC ," which may be a
the database application ( e.g. , databases, datafiles and / or sub - client directed toward protecting data of a database
tablespaces ) . As another example, the administrator could application, and the virtual machine 110 named “ VM3” that
establish sub - clients for organizational groupings ( e.g. , a is part of a sub - client 574 named “ Filesrv_SC , ” which may
sub - client for a marketing group , a sub - client for a sales 15 be a sub - client directed toward protecting data on a file
group , etc. ) and/or for virtual machines 110 based upon their server . Similarly, the virtual machines 110 named “ SG111”
purpose ( e.g. , a sub -client for virtual machines 110 used in
and “ SG3 ( 1 ) ” are both part of a sub - client 576 named
production settings, a sub -client for virtual machines 110 “ Marketing Sales_SC ,” which may be a sub -client directed
used in test and / or development settings, etc. ) . Those of skill toward protecting data of marketing and sales organizations.
in the art will understand that an administrator may establish 20 The virtual machine 110 named “ W2K8_SC ” is also part of
sub - clients according to various groupings. the sub - client 574. Accordingly, two different virtual
An administrator can specify that any newly discovered machines 110 on two different virtual machine hosts 105
virtual machines 110 that do not qualify for membership in may be part of the same sub -client.
an established sub - client group are to be added to the default The sub - client 574 may also include other, non - virtual
sub - client by selecting check box 520. The administrator can 25 machines . (Non -virtual machines can be defined broadly to
also select that the data agent 155 is to discover virtual include operating systems on computing devices that are not
machines 110 and add them to particular sub - clients based virtualized . For example, the operating systems of the virtual
upon rules by selecting check box 522. Since check box 522 machine hosts 105 , the virtual machine manager 202 , and
is not selected , the options below it may not be selectable . the virtual machine storage manager 145 can be considered
However, selecting check box 522 allows the administrator 30 to be non - virtual machines . ) In this case , the same storage
to select radio buttons 524 ( shown individually as radio policy would be applied to protect data of both the associ
buttons 524a and 524b ) and buttons 526 ( shown individually ated virtual machines 115 and the non - virtual machines . An
as buttons 526a and 526b) , which enable functionality administrator can select one or more virtual machine hosts
similar to that discussed with reference to the interface 400 105 and select a sub - client using the listbox 554 , and then
of FIG . 4. For example, selection of the radio button 524a 35 select the button 556 labeled “ Apply ” to change all of the
and the button 526a can enable the administrator to specify selected virtual machine hosts 105 to a selected sub - client.
that all virtual machines 110 that match a regular expression When the administrator selects the button 552 labeled “ Dis
( e.g. , the regular expression “ ^[a-g] ” could be used to match cover,” an automated process for discovering virtual
any virtual machines 110 ( or any virtual machine hosts 105 ) machine hosts 105 and / or virtual machines 110 is started .
beginning with names for which the first character begins 40 When it concludes , the interface 550 displays any virtual
with any character in the range of “ a ” to “ g ” ), are to be added machine hosts 105 and / or virtual machines 110 discovered
to a particular sub - client. As another example , selection of by the process . Buttons 578 enable the administrator to
the radio button 524b and the button 526b can enable the confirm or cancel the selections and /or view help regarding
administrator to specify that all virtual machines 110 that are the interface 550 .
associated with a particular virtual machine host 105 (that is 45 FIG . 6 is a display diagram illustrating another example
identified by e.g. , name, IP address, and / or other identifier ) interface 600 provided by aspects of the invention . The
are to be added to a particular sub - client. Buttons 528 enable interface 600 enables the administrator to specify virtual
the administrator to confirm or cancel the selections and / or machine hosts 105 and / or virtual machines 110 and security
view help regarding the interface 500 . credentials for the data agent 155 to use when accessing the
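The two discovery rules behind radio buttons 524a and 524b, plus the default sub-client fallback of check box 520, can be sketched as a small matching routine. The rule format and names here are invented for illustration; only the behavior (regular-expression match on VM name, or match on hosting virtual machine host) follows the description above:

```python
import re

# Sketch of sub-client assignment: a VM joins a sub-client when its
# name matches a rule's regular expression or it runs on a rule's
# named host; otherwise it falls into the default sub-client.
def assign_subclient(vm_name, vm_host, rules, default="default_subclient"):
    for rule in rules:
        if "pattern" in rule and re.match(rule["pattern"], vm_name):
            return rule["subclient"]
        if "host" in rule and vm_host == rule["host"]:
            return rule["subclient"]
    return default  # check box 520 behavior: unmatched VMs use the default
```

For example, with a name rule using the `^[a-g]` expression discussed above and a host rule for a hypothetical host `host105a`, a VM named "alpha" would match the name rule, while "zeta" on `host105a` would match the host rule.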
FIG . 5B is a display diagram illustrating another example 50 virtual machines 110. For example, the virtual machine hosts
interface 550 provided by aspects of the invention that is 105 and / or the virtual machines 110 may use well- known
shown when tab 530 is selected . Tab 530 specifies the authentication and authorization techniques (e.g. , username
configurations of virtual machine hosts 105 , virtual and password , and / or access control lists ( ACLs )) to control
machines 110 , and sub - clients . The tab 530 displays three access to virtual machine data . The interface 600 includes a
columns, column 542 , labeled “ Virtual machine host, ” col- 55 region 610 in which a listing of virtual machine hosts 105
umn 544 , labeled “ Virtual machine , " and column 546 , and / or virtual machines 110 can be shown . The administra
labeled “ Sub - client name.” Column 542 contains a listing of tor can add, edit and / or remove virtual machine hosts 105
discovered virtual machine hosts 105. Column 544 contains and /or virtual machines 110 by selecting buttons 612 , 614
a listing of discovered virtual machines 110 , grouped by and / or 616 , respectively . Buttons 618 enable the adminis
their associated virtual machine hosts 105. Column 546 60 trator to confirm or cancel the selections and / or view help
contains a listing of sub -client names that are associated regarding the interface 600 .
with each virtual machine 110. The virtual machine hosts Process for Copying Virtual Machine Data
105 are divided into three groups 560 , 562 , and 564. Several FIG . 7 is a flow diagram illustrating a process 700 for
of the virtual machines 110 hosted by the first group of copying data of a virtual machine 110. ( E.g. , according to the
virtual machine hosts 105 have sub - client names of 65 indication received in step 305 of the process 300 of FIG . 3 ,
“ Sub -client_test, ” indicating that they are part of this sub- or according to a storage policy. ) One or more of the entities
client. The last virtual machine 110 in group 560 displays a illustrated in the figures (e.g. , FIGS . 1A , 1B , 2 , and / or 10 )
may perform different aspects of the process 700. In some 165 on the virtual machine storage manager 145 (e.g. , by
examples, a storage manager 1005 instigates the process 700 dynamically determining an available mount point or by
by sending an indication specifying the storage operation to reading a stored indication of a mount point to use ) . For
the data agent 155 on the virtual machine storage manager example , the mount point may be C:\mount\<virtual
145. The data agent 155 performs the copying of the data of 5 machine name> on the data store 165. At step 724 the data
the virtual machine 110. The data agent 155 sends the data agent 155 determines the volumes of the virtual machine 110
to a secondary storage computing device 1065 , which then (e.g. , by calling an API function of the integration compo
stores the data on one or more storage devices 1015 (e.g. , the nent 157 or by calling a function of the API component 210 ) .
secondary storage data store 175 ) . In some examples, less For example , a virtual machine 110 using a Microsoft
than all of these entities may be involved in performing the 10 Windows operating system may have a C :\volume, a D :\vol
storage operation . The processes described herein are indi- ume , and so forth . At step 726 the data agent 155 mounts the
cated as being performed by the data agent 155 , although determined volumes containing files at the determined
those of skill in the art will understand that aspects of the mount point of the data store 165 (e.g. , by again calling an
process 700 may be performed by any one of the entities API function of the integration component 157 or by calling
described herein ( e.g. , the storage manager 1005 , the sec- 15 a function of the API component 210 ) .
ondary storage computing device 1065 , etc. ). As previously described , a virtual disk 140 corresponds to
As previously described , the integration component 157 one or more files ( e.g. , one or more * .vmdk or * .vhd files ),
encapsulates the virtual machine mount component 154 and called virtual disk files, on the primary storage datastore
provides an API for accessing the virtual machine mount 135. A volume may span one or more virtual disks 140 , or
component 154. For example, if the virtual machines 110 are 20 one or more volumes may be contained within a virtual disk
VMware virtual machines , the virtual machine mount com- 140. When the data agent 155 mounts the determined
ponent 154 may be VMware's vcbMounter command - line volumes, the primary storage data store 135 sends to the
tool , and the integration component 157 may encapsulate the VLUN driver 152 a block list of the virtual disk files
functionality provided by vcbMounter into an API and corresponding to the virtual disks 140 of the determined
redirect the output of the vcbMounter tool . At step 705 , the 25 volumes . The VLUN driver 152 uses the block list infor
data agent 155 calls an API function of the integration mation to present the determined volumes ( e.g. , as read -only
component 157 to quiesce the file systems of the virtual volumes or as read -write volumes ) to the operating system
machine 110. Quiescing the file systems ensures that no file of the virtual machine storage manager 145. The data agent
system writes are pending at the time a snapshot of a virtual 155 communicates with the VLUN driver 152 to mount the
machine 110 is taken , thereby allowing the creation of 30 determined volumes at the mount point of the virtual
filesystem -consistent copies. The data agent 155 may, prior machine storage manager 145. Using the previous examples
to quiescing the file systems in step 705 , also quiesce of a virtual machine 110 with a C : \volume and a D : \volume,
applications that are executing on the virtual machine 110 or the data agent 155 would mount these volumes at the
are loaded on the virtual machine 110 . following respective locations:
At step 710 , the data agent 155 calls an API function of 35 C:\mount\<virtual machine name>\letters\C
the integration component 157 to put the virtual machine C:\mount\<virtual machine name>\letters\D
110 into snapshot mode . Alternatively, the data agent 155 After mounting the determined volumes , the data agent
may call a function of the API component 210 to put the 155 can present to an administrator an interface displaying
virtual machine 110 into snapshot mode. When the virtual the mounted volumes and the directories and files on the
machine 110 is put into snapshot mode, the virtual machine 40 mounted volumes , to enable the administrator to select
110 stops writing to its virtual disks 140 (e.g. , stops writing which files and / or directories are to be copied . Alternatively,
to the one or more *.vmdk files or *.vhd files ) on the primary files and /or directories can be automatically selected in
storage data store 135. The virtual machine 110 writes future accordance with a storage policy determined by the virtual
writes to a delta disk file ( e.g. , a *-delta.vmdk file) on the machine's 110 membership in a sub - client, or in accordance
primary storage data store 135. Putting the virtual machine 45 with a set of criteria or rules . At step 728 the data agent 155
110 into snapshot mode enables the virtual machine 110 to copies the selected files and / or directories on the determined
continue operating during the process 700. At step 715 the volumes to the secondary storage data store 175 ( e.g. , via a
data agent 155 calls an API function of the integration secondary storage computing device ) . The data agent 155
component 157 to unquiesce the file systems of the virtual does so by providing an indication of a file and / or directory
machine 110. The data agent 155 may, subsequent to unqui- 50 that is to be copied to the VLUN driver 152 , which requests
escing the file systems in step 715 , also unquiesce any the blocks corresponding to the selected file and / or directory
applications that were previously quiesced. in the virtual disk files 140 on the primary storage datastore
At step 720 the data agent 155 determines ( e.g. , based 135. The mapping between blocks and files / directories may
upon the indication received in step 305 of the process 300 be maintained by the primary storage data store 135 ( e.g. , in
of FIG . 3 ) , how to copy the data of the virtual machine 110. 55 a table or other data structure ).
For example, the data agent 155 may copy the data of the After completing the copy, the data agent 155 at step 730
virtual machine 110 in one of three ways: 1 ) a file - level unmounts the determined volumes from the virtual machine
copy ; 2 ) a volume - level copy ; or 3 ) a disk- level copy . storage manager 145 ( e.g. , by calling an API function of the
File -Level Copy integration component 157 or by calling a function of the
If the indication specifies that the data agent 155 is to 60 API component 210 ) . At step 732 the data agent 155 calls an
perform a file - level copy, the process 700 branches to the API function of the integration component 157 to take the
file - level copy branch . For example, an administrator may virtual machine 110 out of snapshot mode . Alternatively, the
provide that a file - level copy is to be performed if the data agent 155 may call a function of the API component
administrator wishes to copy only certain files on a volume 210 to take the virtual machine 110 out of snapshot mode .
of a virtual disk 140 (e.g. , only files within a certain 65 Taking the virtual machine 110 out of snapshot mode
directory or files that satisfy certain criteria ). At step 722 , the consolidates the writes from the delta disk file ( e.g. , any
data agent 155 determines a mount point of the data store intervening write operations to the virtual disk 140 between
the time the virtual machine 110 was put into snapshot mode Disk -Level Copy
and the time it was taken out of snapshot mode ) to the virtual The file -level copy and the volume - level copy can be
disk file of the virtual disk 140. In this way , performing a thought of as operating at the virtual level . In other words ,
copy operation on a primary copy of virtual machine data the data agent 155 may have to utilize data structures ,
does not affect the virtual machine's 110 use of the data. 5 functions or other information or aspects exposed or pro
Rather, operations can pick up at the point where they left vided by a virtual machine 110 ( or the virtual machine host
off . The process 700 then concludes . 105 ) in order to copy the data of the virtual machine 110 to
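The ordering of the file-level branch of process 700 (quiesce, enter snapshot mode, unquiesce, mount, copy, unmount, exit snapshot mode) can be summarized in a short sketch. Every helper name below is a hypothetical stand-in for a call on the integration component 157 or VLUN driver 152; the sketch illustrates only the sequence of steps:

```python
# Hypothetical walk-through of the file-level branch of process 700.
# The log records the order of operations; none of these names are
# real APIs of the described system.
def file_level_copy(vm, files, log):
    log.append("quiesce")       # step 705: no file system writes pending
    log.append("snapshot_on")   # step 710: writes redirected to delta disk
    log.append("unquiesce")     # step 715: VM resumes normal operation
    log.append("mount")         # steps 722-726: mount volumes at mount point
    for f in files:             # step 728: copy the selected files
        log.append(f"copy:{f}")
    log.append("unmount")       # step 730: unmount the volumes
    log.append("snapshot_off")  # step 732: consolidate delta-disk writes
    return log
```

Note that the virtual machine runs normally for the entire copy phase; only the brief quiesce-and-snapshot window at the start interrupts it.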
Volume-Level Copy the secondary storage data store 175. For example , in order
If the indication specifies that the data agent 155 is to to perform a file - level or volume- level copy of the data of a
perform a volume - level copy, the process 700 branches to 10 virtual machine 110 , the data agent 155 utilizes some
the volume- level copy branch . The process for performing a information or aspect of the virtual machine 110 to deter
volume - level copy is similar to that for performing a file- mine its files and directories and / or volumes . The data agent
level copy, and steps 722 through 726 are effectively the 155 does so in order to present the determined files and
same for this second branch of the process 700. At step 754 directories or volumes for their selection by an administra
the data agent 155 analyzes the virtual volume and extracts 15 tor, or in order to apply, implement or execute storage
metadata from the virtual volume. This process is described operations according to a storage policy. In contrast, a
in more detail herein , e.g. , with reference to FIG . 8 . disk- level copy can be thought of as operating at a non
After mounting the determined volumes ( step 726 of the virtual level (e.g. , at a level of the physical computer hosting
volume - level copy branch ), the data agent 155 can present the virtual machine 110 and /or the physical storage media
to an administrator an interface displaying the mounted 20 upon which the virtual machine data is stored ). In other
volumes ( optionally, the files and / or directories on the words, the data agent 155 can directly access the physical
mounted volumes can also be displayed) to enable the storage media storing the data of the virtual machine 110
administrator to select which volumes are to be copied . (e.g. , the primary storage data store 135 connected via the
Alternatively, volumes can be automatically selected in SAN 130 ) to copy the virtual disks 140 of the virtual
accordance with a storage policy determined by the virtual 25 machine 110. Because the data agent 155 is copying the
machine's 110 membership in a sub -client, or in accordance virtual disks 140 without necessarily determining files and
with a set of criteria or rules . At step 734 the data agent 155 directories or volumes of the virtual machine 110 , the data
copies the selected volumes at the mount point on the virtual agent 155 does not necessarily have to utilize information or
machine storage manager 145 to the secondary storage data aspects of the virtual machine 110 or the virtual machine
store 175 (e.g. , via a secondary storage computing device ). 30 host 105 .
The data agent 155 does so by providing an indication of a If the indication specifies that the data agent 155 is to
volume that is to be copied to the VLUN driver 152 , which perform a disk- level copy, the process branches to the
requests the blocks corresponding to the selected volumes in disk- level copy branch . At step 746 , the data agent 155
the virtual disk files 140 on the primary storage datastore determines a copy point on the data store 165. For example ,
135. The mapping between blocks and volumes may be 35 the copy point may be C:\copy\<virtual machine name>
maintained by the primary storage data store 135 (e.g. , in a copyvirtualmachinel on the data store 165. At step 748 the
table or other data structure ). data agent 155 determines the virtual disk and any associ
After copying , the data agent 155 at step 730 unmounts ated configuration files (e.g. , the * .vmx file and / or the disk
the determined volumes from the virtual machine storage descriptor files) of the virtual machine 110 ( e.g. , by calling
manager 145 (e.g. , by calling an API function of the inte- 40 an API function of the integration component 157 or by
gration component 157 or by calling a function of the API calling a function of the API component 210 ) . The primary
component 210 ) . At step 732 the data agent 155 calls an API storage data store 135 sends to the VLUN driver 152 a block
function of the integration component 157 to take the virtual list of the virtual disk and configuration files. At step 750 , the
machine 110 out of snapshot mode . Alternatively, the data data agent 155 copies these files to the copy point on the
agent 155 may call a function of the API component 210 to 45 datastore 165 on the virtual machine storage manager 145 .
take the virtual machine 110 out of snapshot mode . Taking The data agent 155 does so by providing an indication of the
the virtual machine 110 out of snapshot mode consolidates virtual disk and configuration files to the VLUN driver 152 ,
the writes to the delta disk file to the virtual disk file of the which requests the blocks corresponding to the virtual disk
virtual disk 140. The process 700 then concludes . and configuration files from the primary storage datastore
One advantage of performing copy operations at the 50 135. The mapping between blocks and files / directories may
file - level or the volume - level is that the data agent 155 can be maintained by the primary storage data store 135 ( e.g. , in
copy the virtual machine data from the primary storage a table or other data structure ).
datastore 135 to the secondary storage data store 175 with- At step 752 the data agent 155 calls an API function of the
out having to copy it to the datastore 165 on the virtual integration component 157 to take the virtual machine 110
machine storage manager 145. Stated another way, the data 55 out of snapshot mode (or calls a function of the API
agent 155 can obtain the virtual machine data from the component 210 ) . At step 754 the data agent 155 analyzes the
primary storage datastore 135 , perform any specified opera- virtual disk and configuration files and extracts metadata
tions upon it ( e.g. , compress it , single or variable instance it , from the virtual disk and configuration files. This process is
encrypt it , etc. ) , and stream the virtual machine data to the described in more detail herein , e.g. , with reference to FIG .
secondary storage data store 175 (e.g. , via a secondary 60 8. At step 756 the data agent 155 copies the virtual disk and
storage computing device 1065 ) , without staging or caching configuration files to the secondary storage data store 175 .
the data at the virtual machine storage manager 145. This At step 758 the data agent 155 removes the copied virtual
allows the data agent 155 to copy the data directly to the disk and configuration files from the data store 165 on the
secondary storage data store 175 , without first copying it to virtual machine storage manager 145. The process 700 then
an intermediate location . Accordingly, the data agent 155 65 concludes.
can quickly and efficiently perform file - level and volume- Because a disk - level copy operates essentially at a non
level copies of data of virtual machines 110 . virtual level, it may not have to utilize information or aspects
US 11,436,210 B2
21 22
of the virtual machine 110 (or the virtual machine host 105) in order to copy its data to the secondary storage data store 175. Therefore, a disk-level copy may not necessarily involve much of the overhead involved in a file-level copy or a volume-level copy. Rather, a disk-level copy can directly access the physical storage media storing the data of the virtual machine 110 (e.g., the primary storage data store 135) to copy the virtual disks 140 of the virtual machine 110. Because a disk-level copy can directly access the primary storage data store 135, the volumes on the virtual disks 140 do not need to be mounted. Accordingly, a disk-level copy may be performed faster and more efficiently than a file-level copy or a volume-level copy.

Process for Extracting Metadata

Certain steps in the following process for extracting metadata from the virtual volumes and/or the virtual disk and configuration files are described below using a configuration of a virtual machine 110 having a single virtual disk 140 comprised in a single virtual disk file. Those of skill in the art will understand that the process is not limited in any way to this configuration. Rather, the following process may be used to extract metadata from virtual disk and configuration files that are arranged or structured in a wide variety of configurations, such as multiple virtual disks 140 spanning multiple virtual disk files. Generally, metadata refers to data or information about data. Metadata may include, for example, data relating to relationships between virtual disk files, data relating to how volumes are structured on virtual disks 140, and data relating to a location of a file allocation table or a master file table. Metadata may also include data describing files and data objects (e.g., names of files or data objects, timestamps of files or data objects, ACL entries, and file or data object summary, author, source or other information). Metadata may also include data relating to storage operations or storage management, such as data locations, storage management components associated with data, storage devices used in performing storage operations, index data, data application type, or other data. Those of skill in the art will understand that metadata may include data or information about data other than the examples given herein.

FIG. 8 is a flow diagram illustrating a process 800 performed by the virtual disk analyzer component 160 for extracting metadata (e.g., file location metadata and metadata describing virtual disks 140 and/or files and/or volumes within virtual disks 140) from virtual volumes and/or virtual disk and configuration files. The process 800 begins at step 805, where the virtual disk analyzer component 160 of the data agent 155 accesses the configuration files to determine if there are any parent-child relationships between virtual disk files (e.g., the virtual disk analyzer component 160 determines how many links in a chain of virtual disk files there are). The virtual disk analyzer component 160 performs this step by reading and analyzing the virtual disk and/or configuration files.

For example, for a VMware virtual machine 110, the virtual disk analyzer component 160 may read and analyze the *.vmx configuration files and/or the *.vmdk disk descriptor files. In this example, the parent virtual disk 140 may be named "basedisk.vmdk." The parent virtual disk 140 may have a *.vmdk disk descriptor file with an entry that uniquely identifies the parent virtual disk 140 having the following syntax:

[identifier-name]=[identifier-value]

For example, the entry CID=daf6cf10 comports with this syntax. A first child virtual disk 140 (e.g., a first snapshot) may be named "basedisk-000001.vmdk." The first child virtual disk 140 may have a *.vmdk disk descriptor file with an entry that uniquely identifies its parent having the following syntax:

[parentidentifier-name]=[parentidentifier-value]

For example, the entry parentCID=daf6cf10 comports with this syntax. The virtual disk analyzer component 160 may identify parent-child relationships between virtual disk files in other ways, such as by observing access to virtual disk files and inferring the relationships by such observations. At step 810 the virtual disk analyzer component 160 determines the relationships between virtual disk files (the virtual disk analyzer component 160 determines how the virtual disk 140 is structured: how many extents make up each link in the chain). The virtual disk analyzer component 160 performs this step by reading and analyzing the disk descriptor file if it is a separate file or by reading the disk descriptor information if it is embedded into the virtual disk file. The virtual disk analyzer component 160 may determine the relationships between virtual disk files in other ways, such as by observing access to virtual disk files and inferring the relationships from such observations.

At step 815 the virtual disk analyzer component 160 determines how the partitions and volumes are structured on the virtual disks 140. The virtual disk analyzer component 160 does this by reading the sectors of the virtual disks 140 that contain the partition tables to determine how the virtual disk 140 is structured (e.g., whether it is a basic or a dynamic disk). The virtual disk analyzer component 160 also reads the sectors of the virtual disks 140 that contain the logical volume manager databases. Because the locations of these sectors (e.g., the sectors of the partition tables and the logical volume manager databases) are well-known and/or can be dynamically determined, the virtual disk analyzer component 160 can use techniques that are well-known to those of skill in the art to read those sectors and extract the necessary data. The virtual disk analyzer component 160 is thus able to determine how the virtual disks 140 are partitioned by the operating system of the virtual machine 110 and how the volumes are laid out in the virtual disks 140 (e.g., if there are simple volumes, spanned volumes, striped volumes, mirrored volumes, and/or RAID-5 volumes, etc.).

At step 820 the virtual disk analyzer component 160 determines the location of the Master File Table (MFT) or similar file allocation table for each volume. As with the partition tables and the logical volume manager databases, the locations of the sectors containing the MFT are well-known and/or can be dynamically determined. Therefore, the virtual disk analyzer component 160 can use techniques that are well-known to those of skill in the art to determine the location of the MFT. At step 825 the virtual disk analyzer component 160 stores the determined parent-child relationships and relationships between virtual disk files, the determined structure of volumes of the virtual disks 140, and the determined location of the MFT in a data structure, such as a table. For example, a table having the following schema may be used to store this information:

Virtual Machine ID | Virtual disk file relationships | Volume structures | Location of MFT
E.g., a substantially unique identifier for the virtual machine 110 | E.g., description of the parent-child relationships, such as by a hierarchical description | E.g., partition information and how virtual volumes are laid out on virtual disks | E.g., the location of the MFT within each volume
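The table above can be held in any convenient store; the following is a minimal, hypothetical sketch using an in-memory SQLite database. The table name, column names, and sample values are illustrative assumptions, since the patent does not prescribe a concrete schema.

```python
# Hypothetical sketch of the metadata table stored at step 825. The table
# name, column names, and sample values are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE vm_metadata (
        vm_id              TEXT PRIMARY KEY,  -- substantially unique VM identifier
        disk_relationships TEXT,              -- hierarchical parent-child description
        volume_structures  TEXT,              -- partition info and volume layout
        mft_locations      TEXT               -- location of the MFT within each volume
    )
""")
conn.execute(
    "INSERT INTO vm_metadata VALUES (?, ?, ?, ?)",
    ("vm-110",
     "basedisk.vmdk <- basedisk-000001.vmdk",
     "basic disk; one simple volume (C:)",
     "C: MFT at sector 6291456"),
)
row = conn.execute("SELECT vm_id, mft_locations FROM vm_metadata").fetchone()
print(row)  # ('vm-110', 'C: MFT at sector 6291456')
```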
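The parent-child discovery of steps 805 and 810 can be pictured with a short sketch that reads the CID and parentCID entries out of disk-descriptor text and links each snapshot back to its parent. This is illustrative only: the bare key=value parsing, the ffffffff "no parent" marker, and the sample descriptors are assumptions rather than the patent's implementation.

```python
# Illustrative sketch of steps 805-810: parse *.vmdk descriptor text for
# CID / parentCID entries and map each child disk file to its parent file.

def parse_descriptor(text):
    """Return a dict of key=value entries found in disk-descriptor text."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            entries[key.strip()] = value.strip().strip('"')
    return entries

def build_chain(descriptors):
    """Given {filename: descriptor_text}, map each child to its parent file.

    A child's parentCID matches the CID of its parent; a parentCID of
    ffffffff conventionally marks a base (parentless) disk.
    """
    parsed = {name: parse_descriptor(text) for name, text in descriptors.items()}
    by_cid = {entries.get("CID"): name for name, entries in parsed.items()}
    chain = {}
    for name, entries in parsed.items():
        parent_cid = entries.get("parentCID")
        if parent_cid and parent_cid != "ffffffff":
            chain[name] = by_cid.get(parent_cid)  # None if parent not found
    return chain

descriptors = {
    "basedisk.vmdk": "CID=daf6cf10\nparentCID=ffffffff\n",
    "basedisk-000001.vmdk": "CID=8ee06ab7\nparentCID=daf6cf10\n",
}
print(build_chain(descriptors))  # {'basedisk-000001.vmdk': 'basedisk.vmdk'}
```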
The virtual disk analyzer component 160 may use other data structures to store this information in addition or as an alternative to the preceding table. The virtual disk analyzer component 160 may store this information in the secondary storage data store 175 or in another data store. The virtual disk analyzer component 160 may also collect other metadata, such as metadata describing virtual disks 140 and/or metadata describing files and/or data objects within virtual disks 140. For example, instead of storing the determined location of the MFT, the virtual disk analyzer component 160 could store the locations of files or data objects within virtual disks 140. After storing this metadata, the process 800 then concludes.

Process for Restoring Data of Virtual Machines

FIG. 9 is a flow diagram illustrating a process 900 performed by the secondary storage computing device 1065 for restoring virtual machine data. One or more of the entities illustrated in the figures (e.g., FIGS. 1A, 1B, 2, and/or 10) may perform different aspects of the process 900. In some examples, an administrator at a management console instigates the process 900 by sending an indication to restore virtual machine data to the secondary storage computing device 1065. The secondary storage computing device 1065 accesses the index 1061 to locate the virtual machine data, and accesses the storage devices 1015 (e.g., the secondary storage data store 175) upon which the virtual machine data is located. The secondary storage computing device 1065 restores the data from the storage devices 1015 to a specified location (e.g., a location specified by the administrator).

The process 900 begins at step 905 where the secondary storage computing device 1065 receives an indication to restore data of one or more virtual machines 110. The indication can be to restore one or more files, one or more volumes, one or more virtual disks of a virtual machine 110, or an entire virtual machine 110. At step 910 the secondary storage computing device 1065 determines how (e.g., by analyzing the index 1061) the data agent 155 originally copied the virtual machine data, either: 1) a file-level copy; 2) a volume-level copy; or 3) a disk-level copy.

Restore from a File-Level Copy

If the data agent 155 originally performed a file-level copy, the process 900 branches to the file-level restore branch. At step 915, the secondary storage computing device 1065 mounts a copy set corresponding to the files to be restored from the secondary storage data store 175. The copy set may be manually selected by an administrator or automatically selected based on an association between the copy set and the virtual machine from which the data in the copy set came. Additionally or alternatively, the copy set may be automatically determined based upon the metadata extracted and stored (described with reference to, e.g., FIG. 8) or based upon other metadata (e.g., metadata stored in index 1061).

Because the data agent 155 originally performed a file-level copy (of selected files and/or directories), the secondary storage computing device 1065 generally restores files and/or directories out of the copy set. At step 920 the secondary storage computing device 1065 restores one or more files or directories (e.g., a single file) out of the copy set. For example, the secondary storage computing device 1065 can call a function of an API exposed by a virtual machine 110 or its hosting virtual machine host 105 to restore the one or more files or directories to the virtual machine 110. As another example, the secondary storage computing device 1065 can copy the one or more files or directories to the primary storage data store 135. The secondary storage computing device 1065 can restore the one or more files or directories to the original virtual machine 110 from which they were originally copied, to a different virtual machine 110, to a non-virtual machine, and/or to another storage device 1015. The process 900 then concludes.

Restore from a Volume-Level Copy

If the data agent 155 originally performed a volume-level copy, the process 900 branches to the volume-level restore branch. At step 915, the secondary storage computing device 1065 mounts a copy set corresponding to the files or volumes to be restored from the secondary storage data store 175. The copy set may be manually selected by an administrator or automatically selected based on an association between the copy set and the virtual machine 110 from which the data in the copy set came. Additionally or alternatively, the copy set may be automatically determined based upon the metadata extracted and stored (described with reference to, e.g., FIG. 8) or based upon other metadata (e.g., metadata stored in index 1061).

At step 945 the secondary storage computing device 1065 accesses metadata corresponding to the data that is to be restored (e.g., the determined location of the MFT). This is the metadata that was stored in step 825 of the process 800. At step 960 the secondary storage computing device 1065 uses the determined location of the MFT to access the MFT and uses the entries in the MFT to determine where the files and directories on the virtual disk 140 are located (e.g., on which sectors of the virtual disk 140 a particular file is located).

Because the data agent 155 originally performed a volume-level copy (of selected volumes including files and/or directories within the volumes), the secondary storage computing device 1065 can generally restore both files and/or directories and entire volumes (e.g., an entire C:\ volume, an entire D:\ volume, etc.) out of the copy set. If the secondary storage computing device 1065 is to restore a file, the process 900 branches to step 920. At this step the secondary storage computing device 1065 restores one or more files or directories out of the copy set (e.g., a single file). The secondary storage computing device 1065 can restore the one or more files or directories to the original virtual machine 110 from which they were originally copied, to a different virtual machine 110, to a non-virtual machine, and/or to another storage device 1015. For example, if the original virtual machine 110 no longer exists, the one or more files or directories may be restored to its replacement.

If instead the secondary storage computing device 1065 is to restore a volume, the process 900 branches to step 930. At this step the secondary storage computing device 1065 restores one or more volumes out of the copy set. The secondary storage computing device 1065 can restore the one or more volumes to the original virtual machine 110 from which they were originally copied, to a different virtual machine 110, to a non-virtual machine, and/or to another storage device 1015. For example, a C:\ volume may be restored out of a copy set to the original virtual machine 110 from which it was copied, thus overwriting its existing C:\ volume. As another example, a D:\ volume may be restored out of a copy set to another virtual machine 110, thus replacing its current D:\ volume.

The secondary storage computing device 1065 may restore the files, directories and/or volumes to various locations. For example, the secondary storage computing device 1065 can copy the files, directories and/or volumes to the primary storage data store 135. The secondary storage computing device 1065 can restore the one or more volumes to the original virtual machine 110 from which they were
originally copied, to a different virtual machine 110, to a non-virtual machine (e.g., to a physical machine), and/or to another storage device 1015. For example, an entire D:\ volume from an original virtual machine 110 may be restored to the original virtual machine 110, to another virtual machine 110 and/or to a non-virtual machine (e.g., to a physical machine). As described in more detail herein, a volume of a virtual machine 110 may be restored in its original format (e.g., if the volume came from a VMware virtual machine 110, it can be restored as a volume in the VMware format, such as a *.vmdk file) or converted to another format (e.g., if the volume came from a VMware virtual machine 110, it can be restored as a volume in the Microsoft format, such as a *.vhd file). The secondary storage computing device 1065 can also restore the volume as a container file, from which the volume can be extracted. After steps 920 and/or 930, the process 900 then concludes.

Restore from a Disk-Level Copy

If the data agent 155 originally performed a disk-level copy, the process 900 branches to the disk-level restore branch. At step 915, the secondary storage computing device 1065 mounts a copy set corresponding to the virtual disks, files, volumes, and/or virtual machines 110 to be restored from the secondary storage data store 175. The copy set may be manually selected by an administrator or automatically selected based on an association between the copy set and the virtual machine from which the data in the copy set came. Additionally or alternatively, the copy set may be automatically determined based upon the metadata extracted and stored (described herein, e.g., with reference to FIG. 8) or based upon other metadata (e.g., metadata stored in index 1061).

At step 945 the secondary storage computing device 1065 accesses metadata corresponding to the data that is to be restored (e.g., the determined parent-child relationships and relationships between virtual disk files, the determined structure of volumes of the virtual disks 140, and the determined location of the MFT). This is the metadata that was stored in step 825 of the process 800. At step 950 the secondary storage computing device 1065 uses the determined parent-child relationships and relationships between virtual disk files to reconstruct the virtual disks 140. For example, if a virtual disk 140 is comprised of numerous virtual disk files, the secondary storage computing device 1065 uses the determined relationships between them to link them together into a single virtual disk file. In so doing, the secondary storage computing device 1065 may access grain directories and grain tables within virtual disk files. Grain directories and grain tables are data structures located within virtual disk files that specify the sectors (blocks) within virtual disks 140 that have been allocated for data storage. The secondary storage computing device 1065 may access these data structures to locate data within virtual disks 140.

At step 955 the secondary storage computing device 1065 uses the determined structure of volumes of the virtual disks 140 to reconstruct the volumes. At step 960 the secondary storage computing device 1065 uses the determined location of the MFT to access the MFT and uses the entries in the MFT to determine where the files and directories on the virtual disk 140 are located (e.g., on which sectors of the virtual disk 140 a particular file is located).

Because the data agent 155 originally performed a disk-level copy (of virtual disk and configuration files), the secondary storage computing device 1065 can restore files or directories, entire volumes (e.g., an entire C:\ volume, an entire D:\ volume, etc.) as well as an entire virtual machine 110 out of the copy set. If an entire virtual machine 110 is to be restored, the process 900 branches to step 965. The secondary storage computing device 1065 can copy all the virtual disk and configuration files to the location where the entire virtual machine 110 is to be restored. This can be the original location of the virtual machine 110 (on the original virtual machine host 105), or it can be a new location where the virtual machine had not originally been located (e.g., on a new virtual machine host 105). If the virtual disk and configuration files are copied to the original virtual machine host 105, the virtual machine host 105 should be able to restart the virtual machine 110, which can then recommence operating in the state it existed in when its virtual disk and configuration files were originally copied.

Similarly, if the virtual disk and configuration files are copied to a new virtual machine host 105, the new virtual machine host 105 should be able to start the virtual machine 110, which can then commence operating in the state it existed in when its virtual disk and configuration files were originally copied. The ability to restore a virtual machine 110 to a new virtual machine host 105 other than its original virtual machine host 105 allows virtual machines 110 to be moved or "floated" from one virtual machine host 105 to another. The secondary storage computing device 1065 can also restore the entire virtual machine 110 as a container file, from which the entire virtual machine 110 can be extracted. After step 965, the process 900 then concludes.

If instead of restoring an entire virtual machine 110, the secondary storage computing device 1065 is to restore a volume, the process 900 branches to step 930. At this step the secondary storage computing device 1065 restores one or more volumes out of the copy set. After step 930, the process 900 then concludes.

If instead of restoring an entire virtual machine 110 or a volume, the secondary storage computing device 1065 is to restore a file, the process 900 branches to step 920. At this step the secondary storage computing device 1065 restores one or more files or directories out of the copy set (e.g., a single file). The secondary storage computing device 1065 can restore the one or more files or directories to the original virtual machine 110 from which they were originally copied, to a different virtual machine 110, to a non-virtual machine, and/or to another storage device 1015. The process 900 then concludes.

If instead of restoring an entire virtual machine 110, a volume, or a file, the secondary storage computing device 1065 is to restore one or more virtual disks, the process 900 branches to step 970. At this step the secondary storage computing device 1065 restores the virtual disk and configuration files corresponding to the one or more virtual disks to be restored out of the copy set. The secondary storage computing device 1065 can restore the one or more virtual disks to the original virtual machine host 105 from which they were originally copied. Additionally or alternatively, the secondary storage computing device 1065 can restore the one or more virtual disks to the original virtual machine 110 from which they were originally copied, to a different virtual machine 110, to a non-virtual machine, and/or to another storage device 1015. If the one or more virtual disks are to be restored to the virtual machine 110, they may overwrite, replace and/or supplement the existing virtual disks of a virtual machine 110. The process 900 then concludes.

Depending upon what the secondary storage computing device 1065 is to restore, certain steps in the process 900 may not need to be performed. For example, if the secondary storage computing device 1065 is to restore an entire virtual machine 110 out of a disk-level copy, the data agent 155 may
not need to access the stored metadata ( step 945 ) or recon- consoles for users or system processes to interface with in
struct the virtual disk 140 , volumes and files ( steps 950 , 955 order to perform certain storage operations on electronic
and 960 ) . The data agent 155 can simply mount the copy set data as further described herein . Such integrated manage
and copy the virtual disk and configuration files to the ment consoles may be displayed at a central control facility
appropriate location . As another example, if the secondary 5 or several similar consoles distributed throughout multiple
storage computing device 1065 is to restore a volume out of network locations to provide global or geographically spe
a disk - level copy, the secondary storage computing device cific network data storage information .
1065 may not need to reconstruct files using the MFT, as In one example , storage operations may be performed
mentioned above . The secondary storage computing device according to various storage preferences, for example , as
1065 can simply reconstruct the volumes and then copy the 10 expressed by a user preference, a storage policy, a schedule
volumes to the appropriate location . Those of skill in the art policy, and / or a retention policy. A “ storage policy” is
will understand that more or fewer steps than those illus- generally a data structure or other information source that
trated in the process 900 may be used to restore data of includes a set of preferences and other storage criteria
virtual machines 110 . associated with performing a storage operation. The prefer
As previously described , one advantage of performing a 15 ences and storage criteria may include, but are not limited to ,
disk- level copy is that it may be quicker and more efficient a storage location , relationships between system compo
than file - level or volume -level copying. Also as previously nents, network pathways to utilize in a storage operation,
described, the process of extracting metadata from the data characteristics , compression or encryption require
virtual disk and configuration files enables the ability to ments, preferred system components to utilize in a storage
restore individual files, directories and /or volumes to the 20 operation, a single instancing or variable instancing policy
virtual machine 110 or to other locations (e.g. , to other to apply to the data , and / or other criteria relating to a storage
virtual machines 110 to non - virtual machines , and / or to operation. For example, a storage policy may indicate that
other storage devices 1015 ) . The combination of a disk - level certain data is to be stored in the storage device 1015 ,
copy and the capability to restore individual files, directories retained for a specified period of time before being aged to
and /or volumes of a virtual machine 110 provides for a fast 25 another tier of secondary storage , copied to the storage
and efficient process for duplicating primary copies of data , device 1015 using a specified number of data streams, etc.
while still enabling granular access (e.g. , at the individual A “ schedule policy ” may specify a frequency with which
file or data object level ) to the duplicated primary data to perform storage operations and aa window of time within
( granular access to the secondary copies of data is enabled ). which to perform them . For example , a schedule policy may
This combination optimizes the aspect of virtual machine 30 specify that a storage operation is to be performed every
data management that is likely performed most frequently Saturday morning from 2:00 a.m. to 4:00 a.m. In some cases ,
( duplication of primary copies of data ), but not at the the storage policy includes information generally specified
expense of the aspect that is likely performed less often by the schedule policy. (Put another way, the storage policy
( restoration of secondary copies of data ), because granular includes the schedule policy. ) Storage policies and / or sched
access to duplicated primary copies of data is still enabled . 35 ule policies may be stored in a database of the storage
Suitable Data Storage Enterprise manager 1005 , to archive media as metadata for use in
FIG . 10 illustrates an example of one arrangement of restore operations or other storage operations, or to other
resources in a computing network , comprising a data storage locations or components of the system 1050 .
system 1050. The resources in the data storage system 1050 The system 1050 may comprise
? a storage operation cell
may employ the processes and techniques described herein . 40 that is one of multiple storage operation cells arranged in a
The system 1050 includes a storage manager 1005 , one or hierarchy or other organization . Storage operation cells may
more data agents 1095, one or more secondary storage computing devices 1065, one or more storage devices 1015, one or more clients 1030, one or more data or information stores 1060 and 1062, a single instancing database 1023, an index 1011, a jobs agent 1020, an interface agent 1025, and a management agent 1031. The system 1050 may represent a modular storage system such as the CommVault QiNetix system, and also the CommVault GALAXY backup system, available from CommVault Systems, Inc. of Oceanport, N.J., aspects of which are further described in the commonly-assigned U.S. patent application Ser. No. 09/610,738, now U.S. Pat. No. 7,035,880, the entirety of which is incorporated by reference herein. The system 1050 may also represent a modular storage system such as the CommVault Simpana system, also available from CommVault Systems, Inc.

The system 1050 may generally include combinations of hardware and software components associated with performing storage operations on electronic data. Storage operations include copying, backing up, creating, storing, retrieving, and/or migrating primary storage data (e.g., data stores 1060 and/or 1062) and secondary storage data (which may include, for example, snapshot copies, backup copies, HSM copies, archive copies, and other types of copies of electronic data stored on storage devices 1015). The system 1050 may provide one or more integrated management

be related to backup cells and provide some or all of the functionality of backup cells as described in the assignee's U.S. patent application Ser. No. 09/354,058, now U.S. Pat. No. 7,395,282, which is incorporated herein by reference in its entirety. However, storage operation cells may also perform additional types of storage operations and other types of storage management functions that are not generally offered by backup cells.

Storage operation cells may contain not only physical devices, but also may represent logical concepts, organizations, and hierarchies. For example, a first storage operation cell may be configured to perform a first type of storage operations such as HSM operations, which may include backup or other types of data migration, and may include a variety of physical components including a storage manager 1005 (or management agent 1031), a secondary storage computing device 1065, a client 1030, and other components as described herein. A second storage operation cell may contain the same or similar physical components; however, it may be configured to perform a second type of storage operations, such as storage resource management ("SRM") operations, and may include monitoring a primary data copy or performing other known SRM operations.

Thus, as can be seen from the above, although the first and second storage operation cells are logically distinct entities configured to perform different management functions
(HSM and SRM, respectively), each storage operation cell may contain the same or similar physical devices. Alternatively, different storage operation cells may contain some of the same physical devices and not others. For example, a storage operation cell configured to perform SRM tasks may contain a secondary storage computing device 1065, client 1030, or other network device connected to a primary storage volume, while a storage operation cell configured to perform HSM tasks may instead include a secondary storage computing device 1065, client 1030, or other network device connected to a secondary storage volume and not contain the elements or components associated with and including the primary storage volume. (The term "connected" as used herein does not necessarily require a physical connection; rather, it could refer to two devices that are operably coupled to each other, communicably coupled to each other, in communication with each other, or more generally, refer to the capability of two devices to communicate with each other.) These two storage operation cells, however, may each include a different storage manager 1005 that coordinates storage operations via the same secondary storage computing devices 1065 and storage devices 1015. This "overlapping" configuration allows storage resources to be accessed by more than one storage manager 1005, such that multiple paths exist to each storage device 1015, facilitating failover, load balancing, and promoting robust data access via alternative routes.

Alternatively or additionally, the same storage manager 1005 may control two or more storage operation cells (whether or not each storage operation cell has its own dedicated storage manager 1005). Moreover, in certain embodiments, the extent or type of overlap may be user-defined (through a control console) or may be automatically configured to optimize data storage and/or retrieval.

Data agent 1095 may be a software module or part of a software module that is generally responsible for performing storage operations on the data of the client 1030 stored in data store 1060/1062 or other memory location. Each client 1030 may have at least one data agent 1095, and the system 1050 can support multiple clients 1030. Data agent 1095 may be distributed between client 1030 and storage manager 1005 (and any other intermediate components), or it may be deployed from a remote location or its functions approximated by a remote process that performs some or all of the functions of data agent 1095.

The overall system 1050 may employ multiple data agents 1095, each of which may perform storage operations on data associated with a different application. For example, different individual data agents 1095 may be designed to handle Microsoft Exchange data, Lotus Notes data, Microsoft Windows 2000 file system data, Microsoft Active Directory Objects data, and other types of data known in the art. Other embodiments may employ one or more generic data agents 1095 that can handle and process multiple data types rather than using the specialized data agents described above.

If a client 1030 has two or more types of data, one data agent 1095 may be required for each data type to perform storage operations on the data of the client 1030. For example, to back up, migrate, and restore all the data on a Microsoft Exchange 2000 server, the client 1030 may use one Microsoft Exchange 2000 Mailbox data agent 1095 to back up the Exchange 2000 mailboxes, one Microsoft Exchange 2000 Database data agent 1095 to back up the Exchange 2000 databases, one Microsoft Exchange 2000 Public Folder data agent 1095 to back up the Exchange 2000 Public Folders, and one Microsoft Windows 2000 File System data agent 1095 to back up the file system of the client 1030. These data agents 1095 would be treated as four separate data agents 1095 by the system even though they reside on the same client 1030.

Alternatively, the overall system 1050 may use one or more generic data agents 1095, each of which may be capable of handling two or more data types. For example, one generic data agent 1095 may be used to back up, migrate, and restore Microsoft Exchange 2000 Mailbox data and Microsoft Exchange 2000 Database data, while another generic data agent 1095 may handle Microsoft Exchange 2000 Public Folder data and Microsoft Windows 2000 File System data, etc.

Data agents 1095 may be responsible for arranging or packing data to be copied or migrated into a certain format such as an archive file. Nonetheless, it will be understood that this represents only one example, and any suitable packing or containerization technique or transfer methodology may be used if desired. Such an archive file may include metadata, a list of files or data objects copied, and the files and data objects themselves. Moreover, any data moved by the data agents may be tracked within the system by updating indexes associated with appropriate storage managers 1005 or secondary storage computing devices 1065. As used herein, a file or a data object refers to any collection or grouping of bytes of data that can be viewed as one or more logical units.

Generally speaking, storage manager 1005 may be a software module or other application that coordinates and controls storage operations performed by the system 1050. Storage manager 1005 may communicate with some or all elements of the system 1050, including clients 1030, data agents 1095, secondary storage computing devices 1065, and storage devices 1015, to initiate and manage storage operations (e.g., backups, migrations, data recovery operations, etc.).

Storage manager 1005 may include a jobs agent 1020 that monitors the status of some or all storage operations previously performed, currently being performed, or scheduled to be performed by the system 1050. Jobs agent 1020 may be communicatively coupled to an interface agent 1025 (e.g., a software module or application). Interface agent 1025 may include information processing and display software, such as a graphical user interface ("GUI"), an application programming interface ("API"), or other interactive interface through which users and system processes can retrieve information about the status of storage operations. For example, in an arrangement of multiple storage operation cells, through interface agent 1025, users may optionally issue instructions to various storage operation cells regarding performance of the storage operations as described and contemplated herein. For example, a user may modify a schedule concerning the number of pending snapshot copies or other types of copies scheduled as needed to suit particular needs or requirements. As another example, a user may employ the GUI to view the status of pending storage operations in some or all of the storage operation cells in a given network or to monitor the status of certain components in a particular storage operation cell (e.g., the amount of storage capacity left in a particular storage device 1015).

Storage manager 1005 may also include a management agent 1031 that is typically implemented as a software module or application program. In general, management agent 1031 provides an interface that allows various management agents 1031 in other storage operation cells to communicate with one another. For example, assume a certain network configuration includes multiple storage operation cells hierarchically arranged or otherwise logically related in a WAN or LAN configuration. With this arrangement, each storage operation cell may be connected to the other through each respective interface agent 1025. This allows each storage operation cell to send and receive certain pertinent information from other storage operation cells, including status information, routing information, information regarding capacity and utilization, etc. These communication paths may also be used to convey information and instructions regarding storage operations.

For example, a management agent 1031 in a first storage operation cell may communicate with a management agent 1031 in a second storage operation cell regarding the status of storage operations in the second storage operation cell. Another illustrative example includes the case where a management agent 1031 in a first storage operation cell communicates with a management agent 1031 in a second storage operation cell to control the storage manager 1005 (and other components) of the second storage operation cell via the management agent 1031 contained in the storage manager 1005.

Another illustrative example is the case where the management agent 1031 in a first storage operation cell communicates directly with and controls the components in a second storage operation cell and bypasses the storage manager 1005 in the second storage operation cell. If desired, storage operation cells can also be organized hierarchically such that hierarchically superior cells control or pass information to hierarchically subordinate cells or vice versa.

Storage manager 1005 may also maintain an index, a database, or other data structure 1011. The data stored in database 1011 may be used to indicate logical associations between components of the system, user preferences, management tasks, media containerization, and data storage information or other useful data. For example, the storage manager 1005 may use data from database 1011 to track logical associations between secondary storage computing devices 1065 and storage devices 1015 (or movement of data as containerized from primary to secondary storage).

Generally speaking, the secondary storage computing device 1065, which may also be referred to as a media agent, may be implemented as a software module that conveys data, as directed by storage manager 1005, between a client 1030 and one or more storage devices 1015 such as a tape library, a magnetic media storage device, an optical media storage device, or any other suitable storage device. In one embodiment, secondary storage computing device 1065 may be communicatively coupled to and control a storage device 1015. A secondary storage computing device 1065 may be considered to be associated with a particular storage device 1015 if that secondary storage computing device 1065 is capable of routing and storing data to that particular storage device 1015.

In operation, a secondary storage computing device 1065 associated with a particular storage device 1015 may instruct the storage device to use a robotic arm or other retrieval means to load or eject a certain storage media, and to subsequently archive, migrate, or restore data to or from that media. Secondary storage computing device 1065 may communicate with a storage device 1015 via a suitable communications path such as a SCSI or Fibre Channel communications link. In some embodiments, the storage device 1015 may be communicatively coupled to the storage manager 1005 via a SAN.

Each secondary storage computing device 1065 may maintain an index, a database, or other data structure 1061 that may store index data generated during storage operations for secondary storage (SS) as described herein, including creating a metabase (MB). For example, performing storage operations on Microsoft Exchange data may generate index data. Such index data provides a secondary storage computing device 1065 or other external device with a fast and efficient mechanism for locating data stored or backed up. Thus, a secondary storage computing device index 1061, or a database 1011 of a storage manager 1005, may store data associating a client 1030 with a particular secondary storage computing device 1065 or storage device 1015, for example, as specified in a storage policy, while a database or other data structure in secondary storage computing device 1065 may indicate where specifically the data of the client 1030 is stored in storage device 1015, what specific files were stored, and other information associated with storage of the data of the client 1030. In some embodiments, such index data may be stored along with the data backed up in a storage device 1015, with an additional copy of the index data written to index cache in a secondary storage device. Thus the data is readily available for use in storage operations and other activities without having to be first retrieved from the storage device 1015.

Generally speaking, information stored in cache is typically recent information that reflects certain particulars about operations that have recently occurred. After a certain period of time, this information is sent to secondary storage and tracked. This information may need to be retrieved and uploaded back into a cache or other memory in a secondary computing device before data can be retrieved from storage device 1015. In some embodiments, the cached information may include information regarding the format or containerization of archives or other files stored on storage device 1015.

One or more of the secondary storage computing devices 1065 may also maintain one or more single instance databases 1023. Single instancing (alternatively called data deduplication) generally refers to storing in secondary storage only a single instance of each data object (or data block) in a set of data (e.g., primary data). More details as to single instancing may be found in one or more of the following previously-referenced U.S. patent application Ser. Nos. 11/269,512, 12/145,347, 12/145,342, 11/963,623, 11/950,376, and 61/100,686.

In some examples, the secondary storage computing devices 1065 maintain one or more variable instance databases. Variable instancing generally refers to storing in secondary storage one or more instances, but fewer than the total number of instances, of each data object (or data block) in a set of data (e.g., primary data). More details as to variable instancing may be found in the previously-referenced U.S. Pat. App. No. 61/164,803.

In some embodiments, certain components may reside and execute on the same computer. For example, in some embodiments, a client 1030 such as a data agent 1095, or a storage manager 1005, coordinates and directs local archiving, migration, and retrieval application functions as further described in the previously-referenced U.S. patent application Ser. No. 09/610,738. This client 1030 can function independently or together with other similar clients 1030.

As shown in FIG. 10, secondary storage computing devices 1065 each have their own associated metabase 1061. Each client 1030 may also have its own associated metabase 1070. However, in some embodiments, each "tier" of storage, such as primary storage, secondary storage, tertiary storage, etc., may have multiple metabases or a centralized metabase, as described herein. For example, rather than a separate metabase or index associated with each client 1030 in FIG. 10, the metabases on this storage tier may be centralized. Similarly, second and other tiers of storage may
have either centralized or distributed metabases. Moreover, mixed architecture systems may be used if desired, that may include a first tier centralized metabase system coupled to a second tier storage system having distributed metabases, and vice versa, etc.

Moreover, in operation, a storage manager 1005 or other management module may keep track of certain information that allows the storage manager 1005 to select, designate, or otherwise identify metabases to be searched in response to certain queries as further described herein. Movement of data between primary and secondary storage may also involve movement of associated metadata and other tracking information as further described herein.

In some examples, primary data may be organized into one or more sub-clients. A sub-client is a portion of the data of one or more clients 1030, and can contain either all of the data of the clients 1030 or a designated subset thereof. As depicted in FIG. 10, the data store 1062 includes two sub-clients. For example, an administrator (or other user with the appropriate permissions; the term administrator is used herein for brevity) may find it preferable to separate email data from financial data using two different sub-clients having different storage preferences, retention criteria, etc.

Detection of Virtual Machines and Other Virtual Resources

As previously noted, because virtual machines 110 may be easily set up and torn down, they may be less permanent in nature than non-virtual machines. Due to this potential transience of virtual machines 110, it may be more difficult to detect them, especially in a heterogeneous or otherwise disparate environment. For example, a virtual machine host 105 may host a number of different virtual machines 110.

Virtual machines 110 may be discovered using the techniques previously described herein. Alternatively or additionally, virtual machines 110 could be detected by periodically performing dynamic virtual resource detection routines to identify virtual machines 110 in the network 180 (or some subset thereof, such as a subnet). For example, the data agent 155 (or other agent) could analyze program behaviors corresponding to known virtual resource behaviors, perform fingerprint, hash, or other characteristic-based detection methods or routines, or query a system datastore (e.g., the Windows registry) or other data structure of the virtual machine host 105 for keys or other identifiers associated with virtual resources. The data agent 155 may use other methods and/or combinations of these methods to detect virtual machines 110.

Once detected, the data agent 155 could maintain virtual machine identifiers in a database or other data structure and use associated program logic to track existing virtual machines 110 in the network 180. Alternatively or additionally, an administrator could manually populate the database, or it could be populated as part of an install or virtual resource creation process, or by an agent or other software module directed to detecting installation of virtual machines. The data agent 155 could update the database to remove a virtual machine identifier upon receiving an affirmative indication that the corresponding virtual machine 110 has been taken down or removed from its virtual machine host 105. Alternatively or additionally, the data agent 155 could periodically poll virtual machines 110 to determine if the virtual machines 110 are still functioning. If a virtual machine 110 does not respond after a certain number of polling attempts, the data agent 155 may assume that the virtual machine 110 is no longer functioning and thus remove its identifier from the database. Alternatively or additionally, the virtual machines 110 could periodically notify the data agent 155 that they are still functioning (e.g., by sending heartbeat messages to the data agent 155). Upon a failure to receive notifications from a virtual machine 110 within a certain time period, the data agent 155 could remove its identifier from the database. The data agent 155 may use other methods and/or combinations of these methods to maintain an up-to-date listing of virtual machine identifiers in the database.

These techniques for detecting virtual machines 110 and maintaining identifiers thereof may also be used to detect virtual resources of virtual machines 110 and maintain identifiers thereof. For example, a virtual machine 110 may be coupled to a virtual storage device such as a virtual NAS device or a virtual optical drive. The data agent 155 could detect these virtual resources and maintain identifiers for them in a database or other data structure. The virtual resources may then be addressed as if they were actual resources. Once detected or identified, storage operations related to the virtual resources could be performed according to non-virtualized storage policies or preferences, according to storage policies or preferences directed specifically to virtual resources, and/or according to combinations of non-virtualized and virtualized storage policies and preferences. As another example, a virtual machine 110 may be coupled to a virtual tape library (VTL). The data agent 155 may perform additional analysis on the nature and structure of the virtual resource which underlies the VTL (e.g., a virtual disk 140). This may allow the data agent 155 to realize additional optimizations relating to storage operations associated with the data of the VTL. For example, even though the virtual resource is a VTL (necessitating sequential access), storage operations might be able to be performed non-linearly or in a random-access fashion, since the underlying virtual resource allows random access. Therefore, rather than sequentially seeking through the VTL data to arrive at a particular point, the data agent 155 could simply go directly to the relevant data on the virtual disk 140 that is the subject of the storage operation.

Indexing Virtual Machine Data

In traditional copy or backup of virtual machines 110, an indexing agent is typically located at each virtual machine 110 or is otherwise associated with each virtual machine 110. The indexing agent indexes data on the virtual machine 110. This results in the creation of one index per virtual machine 110. This facilitates searching of data on a per virtual machine 110 basis, but may make it difficult to search data across multiple virtual machines 110. Moreover, the indexing is performed on the virtual machine 110 and thus uses its resources, which may not be desirable.

In contrast, copying of data of virtual machines 110 using the techniques described herein may use one indexing agent that is associated with multiple virtual machines 110. The sole indexing agent thus indexes multiple virtual machines 110. This results in the creation of one index for the multiple virtual machines 110. The one indexing agent can subdivide or logically separate the single index into multiple sub-indexes for each virtual machine 110. This technique facilitates searching of data using one index across multiple virtual machines 110 and also allows searching on a per virtual machine 110 basis. The sole indexing agent may create the single index using secondary copies of virtual machine data so as not to impact the primary copies or utilize virtual machine resources. The indexed data may be tagged by users. More details as to indexing data are described in the previously-referenced U.S. patent application Ser. No. 11/694,869.
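The single-index arrangement described above, in which one indexing agent maintains one index that is logically subdivided into per-virtual-machine sub-indexes, can be sketched roughly as follows. This is an illustrative sketch, not code from the patent; the class and method names (`IndexingAgent`, `index_document`, `search`) and the keyword-set index structure are assumptions made for the example.

```python
from collections import defaultdict

class IndexingAgent:
    """One indexing agent serving many VMs: a single index,
    logically subdivided into one sub-index per virtual machine."""

    def __init__(self):
        # vm_id -> {file_path: set of keywords}; the outer mapping is the
        # single index, and each value is that VM's sub-index.
        self._index = defaultdict(dict)

    def index_document(self, vm_id, path, text):
        # In the scheme described above, the text would come from a
        # secondary copy, so the running VM's resources are untouched.
        self._index[vm_id][path] = set(text.lower().split())

    def search(self, keyword, vm_id=None):
        """Search one VM's sub-index, or every sub-index when vm_id is None."""
        keyword = keyword.lower()
        scope = [vm_id] if vm_id is not None else list(self._index)
        return [(vm, path)
                for vm in scope
                for path, words in self._index[vm].items()
                if keyword in words]

agent = IndexingAgent()
agent.index_document("vm-01", "/etc/motd", "Welcome to the build server")
agent.index_document("vm-02", "/notes.txt", "quarterly server budget")

print(agent.search("server"))            # matches in both sub-indexes
print(agent.search("server", "vm-02"))   # per-VM search
```

The point of the design is visible in `search`: one data structure answers both cross-VM queries and per-VM queries, which is what the single subdivided index is said to enable.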
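The polling and heartbeat scheme for keeping the virtual machine identifier database current, described earlier in this section, can be sketched as follows. Again this is a hypothetical sketch: the names (`VirtualMachineRegistry`, `record_heartbeat`, `prune`) and the 90-second timeout are invented for the example, not taken from the patent.

```python
import time

class VirtualMachineRegistry:
    """Tracks detected virtual machine identifiers and removes entries
    whose heartbeats (or successful polls) stop arriving."""

    def __init__(self, timeout_seconds=90.0):
        self.timeout = timeout_seconds
        self._last_seen = {}  # vm identifier -> time of last heartbeat

    def record_heartbeat(self, vm_id, now=None):
        # Called when a VM reports in, or when a poll of the VM succeeds.
        self._last_seen[vm_id] = now if now is not None else time.monotonic()

    def prune(self, now=None):
        """Remove identifiers with no heartbeat inside the timeout window;
        returns the identifiers that were removed."""
        now = now if now is not None else time.monotonic()
        stale = [vm for vm, seen in self._last_seen.items()
                 if now - seen > self.timeout]
        for vm in stale:
            del self._last_seen[vm]
        return stale

    def active(self):
        return sorted(self._last_seen)

reg = VirtualMachineRegistry(timeout_seconds=90.0)
reg.record_heartbeat("vm-01", now=0.0)
reg.record_heartbeat("vm-02", now=60.0)
print(reg.prune(now=120.0))   # vm-01 is stale: no heartbeat for 120 s
print(reg.active())           # vm-02 remains
```

A real agent would combine this with the other removal paths mentioned above (affirmative teardown notifications, failed polls after N attempts); the timeout-based pruning shown here is only one of them.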
Classification of Virtual Machine Data

As shown in FIG. 10, clients 1030 and secondary storage computing devices 1065 may each have associated metabases (1070 and 1061, respectively). Each virtual machine 110 may also have its own metabase containing metadata about virtual machine data. Alternatively, one or more virtual machines 110 may be associated with one or more metabases. A classification agent may analyze virtual machines 110 to identify data objects or other files, email, or other information currently stored or present by the virtual machines 110 and obtain certain information regarding the information, such as any available metadata. Such metadata may include information about data objects or characteristics associated with data objects, such as data owner (e.g., the client or user that generates the data or other data manager), last modified time (e.g., the time of the most recent modification), data size (e.g., number of bytes of data), information about the data content (e.g., the application that generated the data, the user that generated the data, etc.), to/from information for email (e.g., an email sender, recipient, or individual or group on an email distribution list), creation date (e.g., the date on which the data object was created), file type (e.g., the format or application type), last accessed time (e.g., the time the data object was most recently accessed or viewed), application type (e.g., the application that generated the data object), location/network (e.g., a current, past, or future location of the data object and network pathways to/from the data object), frequency of change (e.g., a period in which the data object is modified), business unit (e.g., a group or department that generates, manages, or is otherwise associated with the data object), and aging information (e.g., a schedule, which may include a time period, in which the data object is migrated to secondary or long-term storage), etc. The information obtained in this analyzing process may be used to initially create or populate the metabases.

Alternatively or additionally, a journaling agent may populate the metabase with content by accessing virtual machines 110, or by directly accessing virtual resources (e.g., virtual disks 140). The journaling agent may include a virtual filter driver program and may be deployed on a virtual input/output port or data stack and operate in conjunction with a virtual file management program to record a virtual machine's interactions with its virtual data. This may involve creating a data structure such as a record or journal of each interaction. The records may be stored in a journal data structure and may chronicle data interactions on an interaction-by-interaction basis. The journal may include information regarding the type of interaction that has occurred along with certain relevant properties of the data involved in the interaction. The classification agent may analyze and process entries within respective journals associated with journaling agents, and report results to the metabase. More details as to techniques used in the classification of data and journaling of changes to data may be found in the previously-referenced U.S. patent application Ser. No. 11/564,119.

Searching Virtual Machine Data

Once virtual machine data has been indexed and/or classified, users can search for virtual machine data using techniques known to those of skill in the art. The system may provide a single interface directed to enabling the search for virtual machine data (as well as non-virtual machine data). A user can utilize the interface to provide a query which is used to search metabases and/or indices of virtual machine data (as well as non-virtual machine data). The system can in return provide results from the metabases and/or indices relevant to the query that may be segregated based upon their origin (e.g., based upon whether they came from virtual machines or non-virtual machines). The returned results may be optionally analyzed for relevance, arranged, and placed in a format suitable for subsequent use (e.g., with another application), or suitable for viewing by a user, and reported. More details as to techniques for searching data and providing results may be found in commonly-assigned U.S. patent application Ser. No. 11/931,034 (entitled METHOD AND SYSTEM FOR SEARCHING STORED DATA), the entirety of which is incorporated by reference herein.

Single or Variable Instancing Virtual Machine Data

Virtual machine data may be single or variable instanced or de-duplicated in order to reduce the number of instances of stored data, sometimes to as few as one. For example, a virtual machine host 105 may host numerous virtual machines 110 configured identically or with slight variations (e.g., the virtual machines have the same operating system files, but different application data files). As another example, a virtual machine 110 may store substantially the same data in a virtual disk 140 that a non-virtual machine stores on its storage devices (e.g., both a virtual machine 110 and a non-virtual machine may have a C:\Windows directory and corresponding system files, and only one instance of each system file may need to be stored). If only a single instance of each data object in this data (the data of both the virtual machines and the non-virtual machines) can be stored on a single instance storage device, significant savings in storage space may be realized.

To single or variable instance virtual machine data, an agent (e.g., a media agent) may generate a substantially unique identifier (for example, a hash value, message digest, checksum, digital fingerprint, digital signature, or other sequence of bytes that substantially uniquely identifies the file or data object) for each virtual data object. The word "substantially" is used to modify the term "unique identifier" because algorithms used to produce hash values may result in collisions, where two different files or data objects result in the same hash value. However, depending upon the algorithm or cryptographic hash function used, collisions should be suitably rare, and thus the identifier generated for a virtual file or data object should be unique throughout the system.

After generating the substantially unique identifier for the virtual data object, the agent determines whether it should be stored on the single instance storage device. To determine this, the agent accesses a single instance database to determine if a copy or instance of the data object has already been stored on the single instance storage device. The single instance database utilizes one or more tables or other data structures to store the substantially unique identifiers of the data objects that have already been stored on the single instance storage device. If a copy or instance of the data object has not already been stored on the single instance storage device, the agent sends the copy of the virtual data object to the single instance storage device for storage and adds its substantially unique identifier to the single instance database. If a copy or instance of the data object has already been stored, the agent can avoid sending another copy to the single instance storage device. In this case, the agent may add a reference (e.g., to an index in the single instance database, such as by incrementing a reference count in the index) to the already stored instance of the data object. Adding a reference to the already stored instance of the data object enables storing only a single instance of the data
object while still keeping track of other instances of the data object that do not need to be stored.

Redundant instances of data objects may be detected and reduced at several locations or times throughout the operation of the system. For example, the agent may single or variable instance virtual machine data prior to performing any other storage operations. Alternatively or additionally, the agent may single instance virtual machine data after it has been copied to the secondary storage data store 175. The agent may generate a substantially unique identifier and send it across the network 180 to the single instance database to determine if the corresponding virtual data object should be stored, or the agent may send the virtual data object to the single instance database, which then may generate a substantially unique identifier for it. More details as to single instancing data may be found in one or more of the previously-referenced U.S. patent application Ser. Nos. 11/269,512, 12/145,347, 12/145,342, 11/963,623, 11/950,376, 61/100,686, and 61/164,803.

Protecting Virtual Machine Data in Homogenous and Heterogeneous Environments

The techniques described herein are applicable in both homogenous and heterogeneous environments. For example, the techniques described herein can be used to copy and restore data from and to virtual machines 110 operating solely on VMware virtual machine hosts (e.g., VMware ESX servers) or solely on Microsoft virtual machine hosts (e.g., on a Microsoft Virtual Server or a Microsoft Windows Server Hyper-V). As another example, the techniques described herein can be used to copy and restore data from and to virtual machines 110 that are operating in a mixed-vendor environment (e.g., virtual machines from VMware, Microsoft, and/or other vendors). The data agent 155 can perform file-level, volume-level, and/or disk-level copies of virtual machines 110 operating on these Microsoft platforms, and perform restores out of file-level, volume-level and disk-level copies.

For example, virtual machines 110 operating on these Microsoft platforms have their virtual disks 140 in *.vhd files. In performing a disk-level copy of a virtual machine 110 operating on a Microsoft platform, the data agent 155 copies the *.vhd files, extracts metadata (e.g., file, volume, disk relationships metadata) from the *.vhd files and stores this metadata. In restoring out of a disk-level copy, the data agent 155 uses the stored metadata to reconstruct the virtual disks 140, volumes and files to allow the data agent 155 to restore files, volumes or entire virtual machines 110. The techniques described herein can also be used to copy and restore data from and to virtual machines 110 operating on virtual machine hosts 105 from other vendors.

Conversion Between Differing Virtual Machine Formats

In the context of a VMware virtual machine 110, in restoring a volume of a virtual machine 110 (e.g., step 930 of the process 900), the secondary storage computing device 1065 restores the volume as a VMware volume, e.g., to a virtual machine 110 operating on a virtual machine host 105. However, the secondary storage computing device 1065 can also restore the volume as a Microsoft volume, e.g., to a virtual machine 110 operating on Microsoft Virtual Server or Microsoft Windows Server Hyper-V. The secondary storage computing device 1065 can thus convert data in the VMware *.vmdk format to data in the Microsoft *.vhd format. This conversion process can also be performed in the opposite direction, e.g., from the Microsoft *.vhd format to the VMware *.vmdk format.

Similarly, in restoring an entire virtual machine 110 (e.g., step 965 of the process 900), the secondary storage computing device 1065 can restore the entire virtual machine 110 as a virtual machine 110 operating on a Microsoft platform. The secondary storage computing device 1065 does so by converting the data in the *.vmdk format to data in the *.vhd format (and associated configuration files). The secondary storage computing device 1065 can thus convert a virtual machine 110 operating on an ESX Server to a virtual machine 110 operating on Microsoft Virtual Server or Microsoft Windows Server Hyper-V. This conversion process can also be performed in the opposite direction, e.g., from the Microsoft *.vhd format to the VMware *.vmdk format. The conversion process enables virtual machine data originating on VMware platforms to be migrated to other platforms, and for virtual machine data originating on non-VMware platforms to be migrated to the VMware platform. Similar conversions can also be performed for virtual disks 140.

To perform the conversion, the secondary storage computing device 1065 may use APIs or other programmatic techniques. For example, to convert a *.vhd file to a *.vmdk file, the secondary storage computing device 1065 may create the *.vmdk file, create necessary data structures (e.g., grain directories and grain tables) within the *.vmdk file, and copy sectors of the volume of the *.vhd file to the *.vmdk file, going extent by extent and creating necessary entries in the data structures (e.g., entries in the grain directories and grain tables) along the way. The secondary storage computing device 1065 may perform a similar process to convert a *.vmdk file to a *.vhd file. As another example, the secondary storage computing device 1065 may analyze a *.vmdk file using an API function, determine its sectors using another API function, and copy each sector of it to a *.vhd file using a third API function. As another example, the secondary storage computing device 1065 may analyze a *.vhd file using an API function, determine its sectors using another API function, and copy each sector of it to a *.vmdk file using a third API function. The secondary storage computing device 1065 may use other techniques (e.g., third-party toolkits) to perform conversions between *.vmdk and *.vhd formats.

Conversion between other formats is also possible. For example, the secondary storage computing device 1065 can convert data between the VMware format and an Open Virtual Machine Format (OVF) and vice-versa. Those of skill in the art will understand that a wide variety of conversions are possible, and the techniques are not limited to the conversions described herein.

Secondary Storage Computing Device Index

As described herein, a secondary storage computing device may maintain an index, a database, or other data structure that it uses to store index data generated during storage operations. The secondary storage computing device may use this index data to quickly and efficiently locate data that has been previously copied. This index data may be used for various purposes, such as for browsing by an administrator and/or for restoring the previously copied data.

During a storage operation involving multiple virtual machines 110, the secondary storage computing device populates one index with metadata corresponding to all the multiple virtual machines 110 (e.g., a master index). For each of the virtual machines 110, the secondary storage computing device also populates an index with metadata corresponding to that virtual machine 110 (e.g., a sub-index). The master index points to (or refers to) the sub-indices. When an operation to restore virtual machine data is
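The extent-by-extent conversion described above — create the target file, build its block-mapping structures, then copy sectors while filling in mapping entries along the way — can be sketched in outline. The reader and writer classes and their methods below are hypothetical placeholders, not a real *.vhd or *.vmdk API; as the text notes, an actual implementation would rely on vendor APIs or third-party toolkits.

```python
SECTOR_SIZE = 512  # both formats ultimately address 512-byte sectors

class SourceDisk:
    """Hypothetical reader for the source disk (e.g., a parsed *.vhd file)."""
    def __init__(self, sectors):
        self.sectors = sectors              # {sector number: 512-byte payload}
    def allocated_sectors(self):
        return sorted(self.sectors)         # only sectors actually backed by data
    def read_sector(self, n):
        return self.sectors[n]

class TargetDisk:
    """Hypothetical writer for the target disk (e.g., a *.vmdk file)."""
    def __init__(self):
        self.grain_table = {}               # stand-in for grain directories/tables
        self.data = {}
    def write_sector(self, n, payload):
        assert len(payload) == SECTOR_SIZE
        self.data[n] = payload
        self.grain_table[n] = n             # record a mapping entry for the sector

def convert(source):
    """Create the target file, then copy allocated sectors one by one,
    creating the necessary mapping entries along the way."""
    target = TargetDisk()
    for n in source.allocated_sectors():
        target.write_sector(n, source.read_sector(n))
    return target
```

Only allocated sectors are copied, and the target's mapping structures end up describing exactly the sectors that were transferred.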
to be performed, the master index is accessed. Because the master index points to the sub-indices, these can be accessed, and the indexed data is used so as to present the virtual machine data that is available to be restored. This available virtual machine data is displayed to an administrator segregated by individual virtual machines 110, which is a logical distinction that is likely intuitive to the administrator. Accordingly, accessing individual virtual machine index data involves two levels of indirection, one for the master index, and one for the sub-indices.

Additionally or alternatively, the secondary storage computing device can populate a single index that is subdivided or otherwise logically separated into multiple sub-indexes, one sub-index for each virtual machine 110. When an operation to restore virtual machine data is to be performed, the index data populated by the secondary storage computing device can be used to present the virtual machine data segregated by individual virtual machines 110. Other logical separations and/or segregations of virtual machine data (e.g., by file type, by owner, etc.) are of course possible.

Automatic Throttling of Storage Operations

As described herein, a virtual machine host 105 may host multiple virtual machines 110. If a data agent 155 is to perform simultaneous storage operations on a large number of the virtual machines 110, their performance, individually or collectively, may be adversely affected. This potential adverse effect may be attributable to one or more reasons, such as, for example, the snapshotting of virtual machines 110 prior to copying their data (see FIG. 7). There may not necessarily be a linear relationship between the number of storage operations that the data agent 155 performs (or the number of virtual machines 110 upon which the data agent 155 is performing storage operations) and the reduction in performance. For example, performance may decrease linearly with regards to a first number of concurrent storage operations (e.g., ten concurrent storage operations), and then may drastically decrease after surpassing that first number.

Accordingly, it would be beneficial to be able to limit the number of concurrent storage operations being performed upon the virtual machines 110 hosted by a virtual machine host 105. This could be done in one of several ways. First, there could be a hard limit, or threshold, on the number of simultaneous storage operations performed. For example, the data agent 155 could be limited to performing ten simultaneous storage operations (e.g., upon ten different virtual machines 110). The data agent 155 could distribute the ten simultaneous storage operations across the sub-clients corresponding to the virtual machines 110. For example, if a single virtual machine host 105 hosts 50 virtual machines 110 distributed across five sub-clients, the data agent 155 could be limited to performing two simultaneous storage operations (e.g., upon two virtual machines 110) per sub-client.

Second, the number of concurrent storage operations could be limited based upon the performance of one or more individual virtual machines 110 and/or the performance of the virtual machine host 105. The data agent 155 can measure performance using standard metrics (e.g., number of disk writes and/or reads per second, central processing unit (CPU) usage, memory usage, etc.). If the data agent 155 determines that the performances of the virtual machines 110 are below a certain performance threshold, the data agent 155 could reduce the number of simultaneous storage operations that it performs. Alternatively, if the data agent 155 determines that the performances of the virtual machines 110 exceed the certain performance threshold, the data agent 155 could increase the number of simultaneous storage operations that it performs.

Third, the throughput of concurrent storage operations could be reduced so as to utilize less of the resources (e.g., CPU, disk, memory, network bandwidth, etc.) of the virtual machines 110 and/or the virtual machine host 105. This reduction in throughput may lessen the loads placed upon the virtual machines 110 and/or the virtual machine host 105 by the simultaneous storage operations. However, this may also necessitate lengthening the window of time in which the storage operations are performed. In each of these three approaches, if the data agent 155 is unable to perform a storage operation upon a virtual machine 110, the data agent 155 may flag the virtual machine 110 for later performance of a storage operation and move to the next virtual machine 110. These three approaches are not mutually exclusive, and combinations of two or more of the three may be used so as to optimally perform storage operations upon virtual machines 110.

Additional Interfaces for Configuring Storage Operations for Virtual Machine Data

FIG. 11 is a display diagram illustrating an example interface 1100 provided by aspects of the invention. The interface 1100 enables an administrator to browse copied virtual machine data for purposes of restoring it. The administrator can specify that the latest data is to be browsed or specify a point in time before which the data is to be browsed using options 1105. The administrator can also select a virtual machine storage manager 145 using list box 1110 and a secondary storage computing device 1065 using list box 1115. The administrator can also select the intended type of restore using options 1120: either restoration of individual files and/or folders, restoration of entire volumes, or restoration of virtual machines and/or virtual disks.

FIG. 12 is a display diagram illustrating example interfaces 1200 and 1250 provided by aspects of the invention. The interface 1200 may be shown after the administrator has selected to browse the latest data (e.g., reference character 1105 of FIG. 11) and the selected intended restoration is that of individual files and/or folders (e.g., reference character 1120 of FIG. 11). The interface 1200 includes a folder structure 1205 corresponding to the copied virtual machine data. As shown, a folder 1208 within a volume (Volume 1) of a virtual machine (TESTVM111) is selected. The interface 1250 provides the administrator with options for restoring the selected folder. These include an option 1210 to restore ACLs associated with the virtual machine data and an option 1215 to unconditionally overwrite data. The administrator can specify the destination computer and folder in region 1220. The administrator can also specify options for preserving or removing source paths in region 1225.

FIGS. 13A and 13B are display diagrams illustrating example interfaces 1300 and 1340 provided by aspects of the invention. The interface 1300 may be shown after the administrator has selected the intended restoration to be that of an entire volume (e.g., reference character 1120 of FIG. 11). The interface 1300 allows the administrator to select to restore a volume as a physical volume, as a *.vhd file (corresponding to Microsoft virtual machines), or as a *.vmdk file (corresponding to VMware virtual machines) using options 1305. The administrator can also select a destination computer in list box 1310, a source volume to be restored in region 1315, and a destination volume using button 1320. Selecting the button 1320 causes the interface 1340 to be displayed, which allows the administrator to
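The first throttling approach described above — a hard cap on simultaneous storage operations, divided across sub-clients, with unprocessable virtual machines flagged for a later pass — can be sketched as follows. The batching loop is an illustrative, sequential stand-in for a real concurrent scheduler, and the function names are assumptions, not the system's actual code.

```python
def per_subclient_limit(total_limit, num_subclients):
    """Divide a host-wide cap across sub-clients: e.g., a limit of 10
    simultaneous operations over 5 sub-clients allows 2 per sub-client."""
    return max(1, total_limit // num_subclients)

def schedule_batches(vms, limit):
    """Never exceed `limit` simultaneous operations by processing the
    virtual machines in batches of at most that size."""
    return [vms[i:i + limit] for i in range(0, len(vms), limit)]

def copy_all(vms, limit, try_copy):
    """Attempt a storage operation on each VM, batch by batch; a VM whose
    operation cannot be performed is flagged for a later pass."""
    flagged = []
    for batch in schedule_batches(vms, limit):
        for vm in batch:            # a real agent would run these in parallel
            if not try_copy(vm):
                flagged.append(vm)  # retry later rather than blocking the batch
    return flagged
```

For the 50-VM, five-sub-client example in the text, `per_subclient_limit(10, 5)` yields the stated cap of two simultaneous operations per sub-client.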
select a mount point on the selected destination computer from available mount points listed in region 1325.

FIG. 13B illustrates the interface 1300 when the administrator has selected to restore a volume as a *.vhd file from the options 1305. The administrator can select a destination computer in list box 1310 and a destination folder for the *.vhd file can be selected using button 1335. Once selected, the destination folder will be displayed in text box 1330.

FIGS. 14A and 14B are display diagrams illustrating an example interface 1400 provided by aspects of the invention. The interface 1400 may be shown after the administrator has selected the intended restoration to be that of virtual machines or virtual disks (e.g., reference character 1120 of FIG. 11). The interface 1400 allows the administrator to select to restore either a virtual machine or virtual disks. As with the interface 1300, the administrator can select a destination computer in list box 1410 and a destination folder using button 1435. Once selected, the destination folder will be displayed in text box 1430. If restore of virtual machines is selected (FIG. 14A), the administrator can provide the name of the virtual machine to be restored in text box 1415, and the name of the server to which it is to be restored in text box 1420. If the virtual machine is to be restored to a virtual machine host 105, the administrator selects this option 1425 and specifies the name of the virtual machine host 105 in text box 1420. If the virtual machine is to be restored to a virtual machine host managed by a virtual machine manager 202, the administrator selects this option 1425 and provides the name of the virtual machine manager 202 in text box 1420 and the name of the virtual machine host 105 in text box 1440. The administrator also specifies authentication credentials in region 1445.

FIG. 15 is a display diagram illustrating an example interface 1500 provided by aspects of the invention. The interface 1500 allows the administrator to specify options for storage operations for a sub-client. Region 1505 displays information associated with the sub-client. The administrator can specify the number of data readers to use in performing storage operations using spinner 1510. The specified number of data readers corresponds to the number of storage operations to be simultaneously performed on the virtual machines 110 associated with the sub-client. As described herein, the number of simultaneous storage operations may be limited or capped so as not to adversely affect performance of the virtual machines 110.

The administrator can also specify the type of copy operation to be performed using options 1515: either file level, volume level, or disk level. The administrator can also select one or more virtual machine storage managers 145 that are to perform the copy operations using list box 1520. Generally, the administrator has to select at least one virtual machine storage manager 145 to perform the copy operation.

If the administrator selects two or more virtual machine storage managers 145 in the list box 1520, this causes the copy operation, when it commences, to be performed by the selected virtual machine storage managers 145. This can assist in load balancing and provide other benefits. For example, one or more sub-clients could be configured to perform copy operations upon all the virtual machines 110 associated with a specific virtual machine manager 202. This could be a large number of virtual machines 110, and if only one virtual machine storage manager 145 were to perform copy operations upon the one or more sub-clients' virtual machines 110, it could take a lengthy period of time to conclude all the copy operations. Accordingly, distributing copy operations across multiple virtual machine storage managers 145 can shorten the amount of time it takes to conclude all the copy operations. This can be true even in the case of a single virtual machine 110 (for example, when the single virtual machine 110 contains a large amount of data). This workload balancing can provide significant benefits, such as when copy operations need to be performed entirely within a specific window of time (e.g., from 2:00 a.m. to 4:00 a.m.). Moreover, such load balancing only requires a single virtual machine storage manager 145 to coordinate the performance of the copy operations by the multiple virtual machine storage managers 145.

For example, an administrator could select a first virtual machine storage manager 145 that coordinates the copying of data of multiple virtual machines 110. The administrator could also select one or more second virtual machine storage managers 145 to perform the copying of data of multiple virtual machines 110. The first data agent 155 can allocate responsibility for the copying of the data amongst the second virtual machine storage managers 145 such that the copying is more or less evenly distributed based upon selections previously made (static load balancing).

Additionally or alternatively, the first virtual machine storage manager 145 can distribute the copy operations across the second virtual machine storage managers 145 based upon various factors. Consider an example where ten copy operations of the data of ten virtual machines 110 are to be performed, and where two second virtual machine storage managers 145 can be used to perform the copy operations. The first virtual machine storage manager 145 can determine an availability of the second virtual machine storage managers 145, as measured by percentage of CPU usage, percentage of network utilization, disk utilization, average time spent performing storage operations, and/or other factors. For example, if the first virtual machine storage manager 145 determines that one of the second virtual machine storage managers 145 has a percentage of CPU usage of 10%, and that the other second virtual machine storage manager 145 has a percentage of CPU usage of 50%, the storage manager 1005 may allocate eight of the copy operations to the one second virtual machine storage manager 145 and the remaining two copy operations to the other second virtual machine storage manager 145, based upon this measurement of availability (dynamic load balancing). The first virtual machine storage manager 145 may also use other factors known to those of skill in the art to balance the workloads of the two virtual machine storage managers 145. Additionally or alternatively, the storage manager 1005 may perform the load balancing amongst the multiple virtual machine storage managers 145.

Copying of Virtual Machine Data on an Incremental Basis

As described herein, the primary storage data store 135 stores the data of virtual machines 110. The data is organized into multiple blocks of fixed size (e.g., 64 kb, 128 kb, 256 kb, 512 kb, etc.). A data agent 155 can perform full copies of data of virtual machines 110 using the blocks of data. In some instances, it may not be necessary to perform a second full backup of virtual machine data after a first full backup has been performed (at least not until a set period of time has elapsed). Rather, incremental and/or differential backups of virtual machine data may suffice.

FIG. 16 is a flow diagram illustrating a process 1600 for copying virtual machine data on an incremental basis (or a differential basis, but incremental copies are described herein for brevity). The process 1600 may be performed by the data agent 155. The data agent 155 begins at step 1605 by accessing data structures within virtual disk files 140. As described herein, virtual disks 140 can be growable or
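The dynamic allocation in the example above — ten copy operations split eight/two between managers at 10% and 50% CPU usage — can be reproduced by weighting each manager inversely to its CPU usage and apportioning by largest remainder. This particular weighting scheme is an assumption chosen to match the stated example; the text does not prescribe a specific formula.

```python
def allocate_copy_ops(num_ops, cpu_usages):
    """Split `num_ops` copy operations across storage managers, weighting
    each manager inversely to its CPU usage (an assumed availability
    measure) and apportioning by largest remainder."""
    weights = [1.0 / usage for usage in cpu_usages]
    total = sum(weights)
    raw = [num_ops * w / total for w in weights]   # ideal fractional shares
    alloc = [int(share) for share in raw]          # floor each share
    # Hand any remaining operations to the largest fractional parts.
    leftovers = sorted(range(len(raw)),
                       key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in leftovers[:num_ops - sum(alloc)]:
        alloc[i] += 1
    return alloc
```

With managers at 10% and 50% CPU usage, `allocate_copy_ops(10, [10, 50])` yields `[8, 2]`, matching the eight/two split described in the text; equally loaded managers split the work evenly.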
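The central decision of process 1600 — copy a block only when its freshly computed substantially unique identifier differs from, or is absent from, the one recorded in a block identifier table such as table 1700 — can be sketched as follows. The MD5 digest and the dictionary-backed table are assumptions for illustration, not the system's actual data structures.

```python
import hashlib

def incremental_copy(blocks, table, copy_to_storage):
    """Copy only changed or newly allocated blocks.

    blocks          -- {block identifier: bytes} for allocated/in-use blocks
    table           -- {block identifier: substantially unique identifier}
                       recorded at the last full copy (stand-in for table 1700)
    copy_to_storage -- callable invoked for each block that must be copied
    """
    for block_id, data in blocks.items():
        # Generate a substantially unique identifier for the block.
        uid = hashlib.md5(data).hexdigest()
        if table.get(block_id) == uid:
            continue                      # identifiers match: block unchanged
        copy_to_storage(block_id, data)   # changed or newly allocated block
        table[block_id] = uid             # record the new identifier
```

After a full copy populates the table, a later incremental pass copies only the blocks whose contents have changed since then.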
preallocated. In either case, virtual disks 140 may use internal data structures to specify the blocks that have been allocated and/or are being used by the virtual machines 110. For example, VMware virtual machine disk files (*.vmdk files) include grain directories and grain tables, and Microsoft virtual disk files (*.vhd files) include block allocation tables. These internal data structures specify the blocks within virtual disks 140 that have been allocated and/or are being used for data storage.

At step 1610, the data agent 155 determines the blocks that have been allocated and/or are being used within the virtual disks 140. At step 1615 the data agent 155 accesses a block identifier data structure to make the determination of which blocks have changed since the last storage operation involving a full copy of the virtual machine data.

FIG. 17 is a diagram illustrating an example table 1700 that may be employed as a block identifier data structure. The data agent 155 may create the table 1700 during, for example, a storage operation that performs a full copy of all of the data of the virtual machine 110. The table 1700 includes a block identifier column 1702 and a substantially unique identifier column 1704. The block identifier column 1702 stores identifiers of blocks within a virtual disk 140. Blocks may be identified by their order within a virtual disk 140. For example, a first block may have an identifier of one ("1"), a second block may have an identifier of two ("2"), and so forth. The substantially unique identifier column 1704 stores identifiers generated for the block by the data agent 155. For example, substantially unique identifiers could be generated using Message Digest Algorithm 5 (MD5) or Secure Hash Algorithm SHA-512. Although the table 1700 is illustrated as including three rows 1706 of three different blocks, the table 1700 generally includes one row for each block in a virtual disk 140.

Returning to FIG. 16, at step 1620, for each block that the data agent 155 determines has been allocated and/or is in use, the data agent 155 generates a substantially unique identifier. At step 1625, the data agent 155 finds the row in the table 1700 for which the block identifier of column 1702 is the same as the block identifier of the block currently being processed. The data agent 155 then looks up the substantially unique identifier in the column 1704, and compares it to the generated substantially unique identifier. If the two substantially unique identifiers do not match, then the block currently being processed has changed. The process 1600 then continues at step 1630 where the data agent 155 copies the block to a storage device. The data agent 155 then updates the column 1704 of the table 1700 with the generated substantially unique identifier. At step 1640, the data agent 155 determines whether there are more blocks to process. If so, the process 1600 returns to step 1620. If not, the process 1600 concludes. If the block has not changed (step 1625), the process 1600 continues at step 1640. The next time the data agent 155 performs a full copy of all of the data of the virtual machine 110, the data agent 155 can regenerate substantially unique identifiers for blocks of data and repopulate or recreate the table 1700.

If, at step 1625, the data agent 155 cannot find a row in the table 1700 for which the block identifier of column 1702 is the same as the block identifier of the block currently being processed, this generally indicates that the data agent 155 is currently processing a block that has been allocated and/or has been put to use since the time at which the last full copy operation was performed. If this is the case, the data agent 155 will copy the block to the storage device, and at step 1635 the data agent will add a row to the table 1700 with the block identifier and the generated substantially unique identifier.

The process 1600 and the table 1700 thus enable copying of virtual machine data on an incremental basis. This can provide significant advantages in that it allows for only copying the data that has changed while still providing for protection of virtual machine data. Changes can be made to the process 1600 and/or the table 1700 while still retaining the ability to perform storage operations on an incremental basis. For example, a monitoring agent could monitor the blocks of the virtual disks 140 and, each time a block is changed (e.g., due to a write operation), the monitoring agent could set a flag (or bit) for the block in a data structure. When the data agent 155 is to perform an incremental copy, it can access the data structure containing the flags and only copy blocks that have been flagged. As another example, the table 1700 could include a time copied column to store timestamps of when a block was last copied to a storage device. If the difference between the time of the incremental copy operation and the last time copied is greater than a threshold time, the data agent 155 could copy the block to the storage device, regardless of whether the generated substantially unique identifier matches the stored substantially unique identifier.

CONCLUSION

From the foregoing, it will be appreciated that specific embodiments of the storage system have been described herein for purposes of illustration, but that various modifications may be made without deviating from the spirit and scope of the invention. For example, although copy operations have been described, the system may be used to perform many types of storage operations (e.g., backup operations, restore operations, archival operations, copy operations, CDR operations, recovery operations, migration operations, HSM operations, etc.). Accordingly, the invention is not limited except as by the appended claims.

Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." The word "coupled," as generally used herein, refers to two or more elements that may be either directly connected, or connected by way of one or more intermediate elements. Additionally, the words "herein," "above," "below," and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word "or," in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.

The above detailed description of embodiments of the invention is not intended to be exhaustive or to limit the invention to the precise form disclosed above. While specific embodiments of, and examples for, the invention are described above for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative embodiments may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times.

The teachings of the invention provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments.

These and other changes can be made to the invention in light of the above Detailed Description. While the above description details certain embodiments of the invention and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in implementation details, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the invention under the claims.

While certain aspects of the invention are presented below in certain claim forms, the inventors contemplate the various

    determining at least one modified data object within the at least some data stored by the virtual machine that has been modified;
    accessing or creating metadata associated with the at least one modified data object; and
    updating the index with the accessed or created metadata associated with the at least one modified data object.

2. The method of claim 1, wherein the at least some data stored by the virtual machine resides on a filesystem of a virtual machine host hosting the virtual machine.

3. The method of claim 1, further comprising:
    for at least one modified data object:
        generating a substantially unique identifier for the at least one modified data object;
        determining, based on the substantially unique identifier, that an instance of the at least one modified data object has not been stored on a secondary storage device; and
        in response to determining that an instance of the at least one modified data object has not been stored on a secondary storage device, storing the at least one modified data object on a secondary storage device.

4. The method of claim 1, wherein the secondary copy of the at least some data stored by a virtual machine was created according to a storage policy, wherein the storage policy specifies how or when to copy data from one or more virtual machines to a secondary storage data store.

5. The method of claim 4, wherein the storage policy comprises a data structure comprising one or more preferences or criteria associated with performing a storage operation.

6. The method of claim 1, wherein updating the index is performed by a journaling agent, wherein the journaling agent includes a virtual filter driver module.

7. The method of claim 6, wherein the journaling agent is deployed on a virtual I/O port or data stack.
aspects of the invention in any number of claim forms. For 8. The method of claim 6 , wherein the journaling agent
example , while only one aspect of the invention is recited as operates in conjunction with a virtual file management
embodied in a computer - readable medium , other aspects 40 module to record operations executed on the virtual
may likewise be embodied in a computer - readable medium . machine .
As another example, while only one aspect of the invention 9. At least one non -transitory computer - readable medium
is recited as a means -plus- function claim under 35 U.S.C. § carrying instructions, which when executed by at least one
112 , sixth paragraph, other aspects may likewise be embod data processor, executes operations to classify data of virtual
ied as a means -plus - function claim , or in other forms, such 45 machines in a heterogeneous computing environment that
as being embodied in a computer -readable medium . (Any includes virtual machines and non - virtual machines, the
claims intended to be treated under 35 U.S.C. $ 112 , 16 will operations comprising:
begin with the words “ means for ." ) Accordingly, the inven- accessing a secondary copy of the at least some data
tors reserve the right to add additional claims after filing the stored by a virtual machine ;
application to pursue such additional claim forms for other 50 creating metadata associated with the secondary copy of
aspects of the invention . the at least some data of the virtual machine ;
storing the metadata in an index, wherein the index also
We claim : comprises of metadata associated with data stored on at
1. A method of classifying data of virtual machines in a least one non - virtual machine ;
heterogeneous computing comprising virtual machines and 55 accessing a journal file for tracking operations performed
non- virtual machines , wherein the method is performed by on the at least some data stored on the virtual machine ;
one or more computing systems, each computing system determining at least one modified data object within the at
having a processor and memory, the method comprising: least some data stored by the virtual machine that has
accessing a secondary copy of at least some data stored by been modified ;
a virtual machine ; 60 accessing or creating metadata associated with the at least
creating metadata associated with the secondary copy of one modified data object; and
the at least some data of the virtual machine ; updating the index with the accessed or created metadata
storing the metadata in an index , wherein the index also associated with the at least one modified data object.
comprises of metadata associated with data stored on at 10. The at least one non - transitory computer - readable
least one non - virtual machine ; 65 medium of claim 9 , wherein the at least some data stored by
accessing a journal file for tracking operations performed the virtual machine resides on a filesystem of a virtual
on the at least some data stored on the virtual machine ; machine host hosting the virtual machine .
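The method recited above can be pictured, very loosely, in ordinary code: metadata for virtual-machine data is derived from a secondary (backup) copy, placed in an index that also holds non-virtual-machine entries, and then refreshed by replaying a journal of recorded operations. The sketch below is illustrative only; the class and field names (`Index`, `put`, `"op"`, `"path"`) are hypothetical and not drawn from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Index:
    # One index shared by virtual and non-virtual machines, keyed by object path.
    entries: dict = field(default_factory=dict)

    def put(self, path, metadata):
        self.entries[path] = metadata

def classify_vm_data(index, secondary_copy, journal):
    """Classify a VM's data without touching the live machine.

    secondary_copy: mapping of object path -> bytes, read from a backup copy.
    journal: list of recorded operations, e.g. {"op": "write", "path": ..., "size": ...}.
    """
    # 1. Create metadata from the secondary copy and store it in the shared index.
    for path, data in secondary_copy.items():
        index.put(path, {"machine": "vm-1", "virtual": True, "size": len(data)})

    # 2. Replay the journal to find modified objects and refresh their metadata.
    for entry in journal:
        if entry["op"] in ("create", "write"):
            index.put(entry["path"], {"machine": "vm-1", "virtual": True,
                                      "size": entry["size"], "modified": True})
    return index
```

Because the index is shared, an entry for a physical host (`"virtual": False`) and an entry for a VM object coexist in the same structure, which is the heterogeneous aspect the claims emphasize.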
11. The at least one non-transitory computer-readable medium of claim 9, further comprising:
    for at least one modified data object:
        generating a substantially unique identifier for the at least one modified data object;
        determining, based on the substantially unique identifier, that an instance of the at least one modified data object has not been stored on a secondary storage device; and
        in response to determining that an instance of the at least one modified data object has not been stored on a secondary storage device, storing the at least one modified data object on a secondary storage device.

12. The at least one non-transitory computer-readable medium of claim 9, wherein the secondary copy of the at least some data stored by a virtual machine was created according to a storage policy, wherein the storage policy specifies how or when to copy data from one or more virtual machines to a secondary storage data store.

13. The at least one non-transitory computer-readable medium of claim 12, wherein the storage policy comprises a data structure comprising one or more preferences or criteria associated with performing a storage operation.

14. The at least one non-transitory computer-readable medium of claim 9, wherein updating the index is performed by a journaling agent, wherein the journaling agent includes a virtual filter driver module.

15. The at least one non-transitory computer-readable medium of claim 14, wherein the journaling agent is deployed on a virtual I/O port or data stack.

16. The at least one non-transitory computer-readable medium of claim 14, wherein the journaling agent operates in conjunction with a virtual file management module to record operations executed on the virtual machine.

17. A system for classifying data of virtual machines in a heterogeneous computing environment that includes virtual machines and non-virtual machines, the system comprising:
    one or more computing systems, each computing system having a processor and memory, the one or more computing systems configured to:
        access a secondary copy of at least some data stored by a virtual machine;
        create metadata associated with the secondary copy of the at least some data of the virtual machine;
        store the metadata in an index, wherein the index also comprises metadata associated with data stored on at least one non-virtual machine;
        access a journal file for tracking operations performed on the at least some data stored on the virtual machine;
        determine at least one modified data object within the at least some data stored by the virtual machine that has been modified;
        access or create metadata associated with the at least one modified data object; and
        update the index with the accessed or created metadata associated with the at least one modified data object.

18. The system of claim 17, wherein the at least some data stored by the virtual machine resides on a filesystem of a virtual machine host hosting the virtual machine.

19. The system of claim 17, the one or more computing systems further configured to:
    for at least one modified data object:
        generate a substantially unique identifier for the at least one modified data object;
        determine, based on the substantially unique identifier, that an instance of the at least one modified data object has not been stored on a secondary storage device; and
        in response to determining that an instance of the at least one modified data object has not been stored on a secondary storage device, store the at least one modified data object on a secondary storage device.

20. The system of claim 17, wherein the secondary copy of the at least some data stored by a virtual machine was created according to a storage policy, wherein the storage policy specifies how or when to copy data from one or more virtual machines to a secondary storage data store.

21. The system of claim 20, wherein the storage policy comprises a data structure comprising one or more preferences or criteria associated with performing at least one storage operation.

22. The system of claim 17, wherein updating the index is performed by a journaling agent, wherein the journaling agent includes a virtual filter driver module.

23. The system of claim 22, wherein the journaling agent is deployed on a virtual I/O port or data stack.

24. The system of claim 22, wherein the journaling agent operates in conjunction with a virtual file management module to record operations executed on the virtual machine.
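Claims 3, 11, and 19 describe single-instance storage: a "substantially unique identifier" for a modified object gates whether the object is actually written to a secondary storage device. A minimal sketch follows, using a SHA-256 digest as a stand-in for whatever identifier the claimed system generates; the function names and the dict-backed store are hypothetical.

```python
import hashlib

def unique_id(data: bytes) -> str:
    # A cryptographic digest is "substantially unique": collisions are
    # possible in principle but vanishingly unlikely in practice.
    return hashlib.sha256(data).hexdigest()

def store_if_new(secondary_store: dict, data: bytes) -> bool:
    """Store the object only if no instance with the same identifier exists.

    Returns True when the object was actually written to the store."""
    digest = unique_id(data)
    if digest in secondary_store:
        return False  # an instance is already on secondary storage; skip it
    secondary_store[digest] = data
    return True
```

Writing the same bytes twice stores a single instance, which is the space-saving behavior these dependent claims recite.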
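Claims 4-5, 12-13, and 20-21 characterize a storage policy as a data structure of preferences or criteria governing how or when virtual-machine data is copied to a secondary data store. One way such a structure might look, with all field names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    # Preferences/criteria associated with performing a storage operation.
    name: str
    schedule: str = "daily"             # when to copy
    copy_type: str = "incremental"      # how to copy
    sources: list = field(default_factory=list)  # which virtual machines
    destination: str = "secondary-store-1"       # where the copy lands

    def applies_to(self, vm_name: str) -> bool:
        # An empty source list means the policy covers every virtual machine.
        return not self.sources or vm_name in self.sources
```

A scheduler could then select `StoragePolicy` instances whose `applies_to` check passes for a given VM and perform the copy accordingly.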