Tool For Multi Model Interaction
International Journal on Computational Sciences & Applications (IJCSA), Vol. 2, No. 4, August 2012, DOI: 10.5121/ijcsa.2012.2404
1 Computer Science and Engineering, Vel Tech High Tech Engineering College, #42, Avadi-Vel Tech Road, Avadi, Chennai - 600 062, Tamil Nadu
[email protected]
2 Computer Science and Engineering, Vel Tech (Owned by R.S Trust), #42, Avadi-Vel Tech Road, Avadi, Chennai - 600 062, Tamil Nadu
[email protected]
Abstract
The TMI (Tool for Multi Model Interaction) allows applications to be composed hierarchically. Physical and dynamical modules of different models can be defined as separate gridded components. These components are coupled together with a coupler component. All of these are nested within a single master gridded component and work as a single modeling unit through the coupler component. Here the modeling components are RegCM (Regional Climate Model) as the atmospheric gridded component and ROMS (Regional Ocean Modeling System) as the oceanic gridded component. The coupler component acts as an agent that converts data from the atmospheric gridded component to the oceanic gridded component and vice versa, as the algorithm requires. The design of TMI allows switching from interactive (coupled) execution to standalone execution of the respective gridded component according to the user's selection. TMI can be used in data assimilation and climate applications.
Keywords
RegCM, ROMS, MCT, TMI, MPI
1. INTRODUCTION
The upper layer is referred to as the Superstructure; it provides a shell which combines the user code and interconnects the input and output data streams between its components. A detailed description of the generation and execution of these components is given in the following sections. The lower layer is the Infrastructure layer, which provides a foundation that scientific component developers can use to speed up the construction of the coupler component. The elements in this layer help by providing constructs to support parallel processing, data types for RegCM applications, and specialized libraries; they also support performance and time management, scalable I/O, and error-handling tools. TMI has two kinds of components: gridded components, which represent the models, and a coupler component. The model components, such as ROMS or RegCM, implement the physical and dynamical portions and are sandwiched between the two layers. The components include the classes which wrap user code and ensure runtime-persistent interfaces according to the
execution requirements. A hierarchical combination of infrastructure, superstructure, and user-code components is joined together. The combined models can be run concurrently or sequentially. Each model runs concurrently on its own set of processors, which reside on a common cluster, and a driver for the coupled modeling system governs the execution and the data exchanges between the individual models. Such cluster models are becoming popular and use the Message Passing Interface (MPI) distributed-memory communication protocol, which provides the means to pass the overlapping halo regions of the grid tiles between processors on different nodes. The MPI protocol is used in this paper.
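The halo exchange mentioned above can be illustrated with a minimal MPI sketch. This is not code taken from RegCM or ROMS; the one-dimensional tile layout, the tile and halo widths, and the non-periodic neighbour ranks are illustrative assumptions only.

program halo_exchange_sketch
  ! Minimal sketch of passing overlapping halo regions between neighbouring
  ! grid tiles with MPI.  The 1-D layout, tile width and halo width are
  ! illustrative assumptions, not values taken from RegCM or ROMS.
  use mpi
  implicit none
  integer, parameter :: nx = 64, nh = 2            ! interior width, halo width
  real(8) :: tile(1-nh:nx+nh)                      ! local tile with halo cells
  integer :: ierr, rank, nprocs, left, right
  integer :: status(MPI_STATUS_SIZE)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  left  = merge(MPI_PROC_NULL, rank - 1, rank == 0)            ! no neighbour at the edges
  right = merge(MPI_PROC_NULL, rank + 1, rank == nprocs - 1)
  tile  = real(rank, 8)

  ! Exchange: interior edge cells go to the neighbour, halo cells are filled.
  call MPI_SENDRECV(tile(nx-nh+1:nx), nh, MPI_DOUBLE_PRECISION, right, 1, &
                    tile(1-nh:0),     nh, MPI_DOUBLE_PRECISION, left,  1, &
                    MPI_COMM_WORLD, status, ierr)
  call MPI_SENDRECV(tile(1:nh),       nh, MPI_DOUBLE_PRECISION, left,  2, &
                    tile(nx+1:nx+nh), nh, MPI_DOUBLE_PRECISION, right, 2, &
                    MPI_COMM_WORLD, status, ierr)

  call MPI_FINALIZE(ierr)
end program halo_exchange_sketch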
2. MODELS
2.1 Regional Ocean Modeling System (ROMS, v3.3)
ROMS is a public-domain, free-surface, hydrostatic, three-dimensional, primitive-equation ocean circulation model. The model solves the Boussinesq approximation to the Reynolds-averaged form of the Navier-Stokes equations on an orthogonal curvilinear Arakawa C grid in the horizontal and uses stretched terrain-following coordinates in the vertical. The model features second-, third-, and fourth-order horizontal and vertical advection schemes for momentum and tracers, and can use splines to reconstruct the vertical advection profiles. Along with temperature and salinity, ROMS can transport passive tracers, contains algorithms for suspended and bed-load sediment transport, and offers multiple choices of turbulence closure, biological routines, and several types of boundary conditions. We have implemented an algorithm to include the effects of surface wind waves on the currents based on the method of Mellor. For these effects ROMS requires information on wave energy, wavelength, and wave direction. Other processes, such as surface fluxes of turbulent kinetic energy due to breaking waves, bed-load sediment transport, and enhanced bottom friction due to waves, require information on bottom orbital velocity, surface and bottom wave periods, and wave-energy dissipation. These parameters can be obtained directly from an atmospheric model such as RegCM.
3. TMI ARCHITECTURE
Figure 1: Architecture of the TMI and the communication between RegCM and ROMS. RegCM sends short-wave radiation, long-wave radiation, latent heat flux, evaporation, and precipitation as input to ROMS; ROMS sends the sea surface temperature (SST) as input to RegCM.
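As a hedged illustration of how the exchange fields of Figure 1 could be carried by MCT attribute vectors, the sketch below declares one vector per direction. The field tags (SWRAD, LWRAD, LHFX, EVAP, PREC, SST) and the subroutine name are assumed names for illustration, not identifiers taken from the TMI source.

! Hedged sketch only: the field tags and the subroutine name are assumed
! names for the quantities shown in Figure 1, not identifiers from TMI.
subroutine init_exchange_vectors(lsize_atm, lsize_ocn, atm2ocn, ocn2atm)
  use m_AttrVect, only: AttrVect, AttrVect_init => init
  implicit none
  integer,        intent(in)  :: lsize_atm, lsize_ocn   ! local grid points
  type(AttrVect), intent(out) :: atm2ocn, ocn2atm

  ! Fields RegCM passes to ROMS (Figure 1): short-wave radiation, long-wave
  ! radiation, latent heat flux, evaporation and precipitation.
  call AttrVect_init(atm2ocn, rList='SWRAD:LWRAD:LHFX:EVAP:PREC', lsize=lsize_atm)

  ! Field ROMS passes back to RegCM: sea surface temperature.
  call AttrVect_init(ocn2atm, rList='SST', lsize=lsize_ocn)
end subroutine init_exchange_vectors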
4. APPROACHES
The component models in a system joined via MCT (Model Coupling Toolkit) will be referred to as ROMS and RegCM. We present, in a general manner, the methodology implemented to couple these two sets of models; this provides a basic example that could be used for the coupling of other types of models. A common approach used to develop both systems is that the data exchange between the individual component models is performed across the horizontal domain.
The component models run concurrently on dissimilar processors, i.e., the system uses concurrent programming. MCT also allows sequential coupling, but concurrent coupling is the most suitable for our application. During the implementation, the processors allocated for the job are distributed among the selected models by a MASTER program, which calls each model component to initialize, run, and finalize its structure. The MASTER program is organized as follows. MASTER initializes MPI using a standard MPI_INIT call to turn on a common MPI communicator for the complete system. The MPI_COMM_SPLIT function is then used to split the common communicator MPI_COMM_WORLD into multiple communicators, COM1 and COM2 (one for each model), depending on the total number of models that are being coupled. To hand over specific processors to each model, the following is done:
1. MASTER determines the total number of processors requested for the submitted job.
2. It reads the input file to determine the number of processors to be assigned to each model, for instance M processors to ROMS and N processors to RegCM.
3. Every processor ID is determined using the MPI_COMM_RANK function, and MASTER assigns processors to each model depending on the processor ID and the number of processors requested for that model.
In our test case we use five processors, two for ROMS and three for RegCM. The number of processors allocated to ROMS and RegCM is determined by the specific application and is limited only by the number of processors on the computer cluster. The MASTER program then calls the initialize, run, and finalize steps on all processors for both ROMS and RegCM (a minimal sketch of such a driver is given at the end of this subsection).
The modeling components should be structured independently as mentioned above to provide a regular framework and to assist in determining the correct synchronization points. Supplementary subroutines are added to each model to follow these steps, i.e., initialization, communication using MCT during the run phase, and shutting down MCT properly during the finalize phase. These routines are organized in modules called DOMAIN1 (DOM1) and DOMAIN2 (DOM2), for ROMS and RegCM respectively, and look schematically as follows:

module DOM1
   use ...                          ! the required MCT modules
contains
   subroutine MCTinit_DOM1 (COM1, ...)
   subroutine MCTrun_DOM1 (COM1, ...)
   subroutine MCTend_DOM1
end module DOM1

The module DOM1 is placed with the code for ROMS and compiled with that model. A similar module, DOM2, is placed with the code for RegCM and compiled with that model. The structure of each step, initialize, run, and finalize, is explained in detail below.
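A minimal sketch of such a MASTER program is shown below, assuming the two-plus-three processor split of the test case. The routine names run_roms and run_regcm are placeholders for the model-specific initialize/run/finalize sequences, not actual TMI routines.

program master_sketch
  ! Minimal sketch of the MASTER program, assuming the two-plus-three
  ! processor split of the test case.  run_roms and run_regcm are
  ! placeholders for the model-specific initialize/run/finalize sequences.
  use mpi
  implicit none
  integer, parameter :: NPROCS_ROMS = 2              ! processors requested for ROMS
  integer :: ierr, rank, nprocs, color, mycomm

  call MPI_INIT(ierr)                                ! common MPI communicator
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank,   ierr)   ! this processor's ID
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)   ! total processors requested

  ! Ranks 0..NPROCS_ROMS-1 go to ROMS (colour 1), the rest to RegCM (colour 2).
  if (rank < NPROCS_ROMS) then
     color = 1
  else
     color = 2
  end if
  call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, rank, mycomm, ierr)

  if (color == 1) then
     call run_roms(mycomm)     ! mycomm plays the role of COM1
  else
     call run_regcm(mycomm)    ! mycomm plays the role of COM2
  end if

  call MPI_FINALIZE(ierr)

contains

  subroutine run_roms(comm)
    integer, intent(in) :: comm
    ! placeholder for the ROMS initialize / run / finalize sequence
    print *, 'ROMS component running on its own communicator', comm
  end subroutine run_roms

  subroutine run_regcm(comm)
    integer, intent(in) :: comm
    ! placeholder for the RegCM initialize / run / finalize sequence
    print *, 'RegCM component running on its own communicator', comm
  end subroutine run_regcm

end program master_sketch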
4.2. Initialize
In the first phase, each processor determines the grid segments assigned to its model and the model variables, and MCT is prepared for initialization.
The initialization step occurs in the corresponding subroutine MCTinit_DOMn (where n is 1 for ROMS and 2 for RegCM). The basic structure of the initialization is:

subroutine MCTinit_DOM1 (COM1, ncomps, DOM1_ID, DOM2_ID)
   call MCTWorld_init (ncomps, MPI_COMM_WORLD, COM1, DOM1_ID)
   call GlobalSegMap_init (GlobalSegMapDOM1, s, l, r, COM1, DOM1_ID)
   call AttrVect_init (AV1_toDOM2, rList='M1var1:M1var2:M1var3', lsize=AV1size)
   call AttrVect_init (AV1_fromDOM2, rList='M2var1:M2var2:M2var3', lsize=AV2size)
   call Router_init (DOM2_ID, GlobalSegMapDOM1, COM1, Router1)
end subroutine MCTinit_DOM1

If more than one processor is assigned to the model, the model decomposes its gridded domain into segments. In this coupling technique every processor executes the calculation on one segment of the selected model:

call GlobalSegMap_init (GlobalSegMapDOM1, s, l, r, COM1, DOM1_ID)

In this call, GlobalSegMapDOM1 is the GlobalSegMap being created; s (starts) and l (lengths) are arrays containing the local segment start and length values; r (root) is the root for the communicator COM1 on which the decomposition exists; and DOM1_ID is the MCT component ID for this model. The call is a collective operation, and the result is a domain decomposition descriptor containing all the information needed to locate a given element and to perform global-to-local and local-to-global index translation. An MCT Router is a data-type object that allows parallel data transfer between model domain segments residing on different processors; it is an inter-component parallel data transfer scheduler between two GlobalSegMaps. GlobalSegMapDOM1 gives the set of ROMS grid points on a given processor, GlobalSegMapDOM2 does the same for RegCM, and Router1 determines the corresponding grid-point locations of RegCM on other processors and provides the channel for the data that will be transferred. The Router1 table for ROMS is initialized by a Router_init call in the MCTinit_DOM1 subroutine, which connects the ID of the second component (DOM2_ID), the domain decomposition of the calling component (GlobalSegMapDOM1), and the communicator of the calling component (COM1). The Router2 communication table is initialized for the second model in the MCTinit_DOM2 routine.
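As a hedged illustration of this decomposition step, the sketch below builds the start and length arrays for a simple case in which each processor owns one contiguous block of whole rows of an nx-by-ny grid and then makes the collective GlobalSegMap_init call shown above. The single-segment-per-processor layout and the grid dimensions are assumptions for illustration, not the actual ROMS tiling.

! Hedged sketch: one contiguous segment of whole grid rows per processor
! on an nx-by-ny grid is an assumed layout, not the actual ROMS tiling.
subroutine init_segmap_sketch(COM1, DOM1_ID, nx, ny, GlobalSegMapDOM1)
  use mpi
  use m_GlobalSegMap, only: GlobalSegMap, GlobalSegMap_init => init
  implicit none
  integer,            intent(in)  :: COM1, DOM1_ID, nx, ny
  type(GlobalSegMap), intent(out) :: GlobalSegMapDOM1
  integer :: ierr, rank, nprocs, rows, row0
  integer :: s(1), l(1)                      ! this processor's segment start/length

  call MPI_COMM_RANK(COM1, rank,   ierr)
  call MPI_COMM_SIZE(COM1, nprocs, ierr)

  ! Give each processor a contiguous block of whole grid rows.
  rows = ny / nprocs
  row0 = rank * rows
  if (rank == nprocs - 1) rows = ny - row0   ! last processor takes the remainder

  s(1) = row0 * nx + 1                       ! 1-based global start index
  l(1) = rows * nx                           ! number of grid points in the segment

  ! Collective call (root 0): builds the domain-decomposition descriptor.
  call GlobalSegMap_init(GlobalSegMapDOM1, s, l, 0, COM1, DOM1_ID)
end subroutine init_segmap_sketch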
4.3. Run
Each model must define a synchronization point at which it interacts with the other models and the exchange of data through MCT takes place. All models must contain details about the user-defined coupling time (the MCT time). When a component model reaches the MCT time, the subroutine MCTrun_DOMn (n is 1 or 2) is called. All the processors call this routine, which marks the point where the models interact and exchange data. Subroutine MCTrun_DOM1 works on the vectors AV1_toDOM2 and AV1_fromDOM2; similarly, the second component works on the vectors AV2_toDOM1 and AV2_fromDOM1. The data to be exchanged has to be linearized:

call AttrVect_importRAttr (AV1_toDOM2, 'M1var1', avdata)

For example, the data to be exchanged are fed into the attribute vector and transferred to RegCM; this is done by importing the corresponding variables (M1var1, ..., M1varn). Then MCT_Send is called, and the data is transferred from the attribute vector to MCT through the router established earlier. MCT then directs this data to AV2_fromDOM1 through Router2, and MCT_Recv is invoked in this process. This is the fundamental synchronization point where data are exchanged between the respective models. MCT_Send and MCT_Recv are blocking commands: the component model sends the data with a specific tag and waits until the second component receives the transferred data, thus completing the data exchange. Transferring data in each direction (from ROMS to RegCM and from RegCM to ROMS) requires a separate pair of MCT_Send and MCT_Recv calls:

call MCT_Send (AV1_toDOM2, Router1, tag1)      ! in MCTrun_DOM1: the first model sends data via Router1
call MCT_Recv (AV2_fromDOM1, Router2, tag1)    ! in MCTrun_DOM2: the second model receives the data
call MCT_Send (AV2_toDOM1, Router2, tag2)      ! in MCTrun_DOM2: the second model sends data via Router2
call MCT_Recv (AV1_fromDOM2, Router1, tag2)    ! in MCTrun_DOM1: the first model receives the data
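A hedged sketch of the ROMS side of this run phase is given below. The field tags ('SST', 'LHFX'), the numeric message tags, and the subroutine name are illustrative; AV1_toDOM2, AV1_fromDOM2, and Router1 are assumed to have been created in MCTinit_DOM1 as in the listing above, and the exact argument form of AttrVect_exportRAttr is assumed from MCT conventions.

! Hedged sketch of the ROMS-side run phase.  Field tags, message tags and
! the subroutine name are illustrative; the AttrVect and Router arguments
! are assumed to come from MCTinit_DOM1.
subroutine MCTrun_DOM1_sketch(lsize, sst, AV1_toDOM2, AV1_fromDOM2, Router1)
  use m_AttrVect, only: AttrVect, &
                        AttrVect_importRAttr => importRAttr, &
                        AttrVect_exportRAttr => exportRAttr
  use m_Router,   only: Router
  use m_Transfer, only: MCT_Send => send, MCT_Recv => recv
  implicit none
  integer,        intent(in)    :: lsize          ! local linearized field size
  real(8),        intent(in)    :: sst(lsize)     ! local SST computed by ROMS
  type(AttrVect), intent(inout) :: AV1_toDOM2, AV1_fromDOM2
  type(Router),   intent(inout) :: Router1
  real(8), dimension(:), pointer :: avdata
  integer :: nout

  allocate(avdata(lsize))

  ! Pack the linearized SST field into the outgoing attribute vector.
  avdata = sst
  call AttrVect_importRAttr(AV1_toDOM2, 'SST', avdata)

  ! Blocking exchange at the synchronization point; the tags must match the
  ! corresponding MCT_Recv / MCT_Send calls on the RegCM side.
  call MCT_Send(AV1_toDOM2,   Router1, 100)
  call MCT_Recv(AV1_fromDOM2, Router1, 200)

  ! Unpack one of the received RegCM fields (latent heat flux, assumed tag).
  call AttrVect_exportRAttr(AV1_fromDOM2, 'LHFX', avdata, nout)

  deallocate(avdata)
end subroutine MCTrun_DOM1_sketch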
5. CONCLUSION
The Model Coupling Toolkit (MCT) is employed to develop a coupled modelling system of two models. In this kind of system, each model component is organized into an initialize, run, and finalize structure. A MASTER program controls the execution of the modelling system and assigns the processors to each model. A new module comprising several subroutines has to be written for each component model; the structure and function of these modules and subroutines have been explained in detail. The routines create global segment maps, which describe the division of the grid among the different processors for each model. Each module also creates attribute vectors and routers: the attribute vectors store the data, and the routers transfer it. The models are initialized, embedded into MCT, and data is transferred using MPI-based MCT protocols to efficiently transmit and receive model fields during model execution.
6. REFERENCES
[1] John C. Warner, Natalie Perlin and Eric D. Skyllingstad, Using the Model Coupling Toolkit to couple earth system models, Environmental Modelling & Software 23 (2008) 1240-1249.
[2] Giorgi, F., and G. T. Bates, The climatological skill of a regional model over complex terrain, Mon. Wea. Rev., 117, 2325-2347, 1989.
[3] Jacob, R., Larson, J., Ong, E., 2005. M x N Communication and Parallel Interpolation in Community Climate System Model Version 3 using the Model Coupling Toolkit. International Journal of High Performance Computing Applications 19, 293-307.
[4] Zeng, X., M. Zhao, and R. E. Dickinson, Intercomparison of bulk aerodynamic algorithms for the computation of sea surface fluxes using TOGA COARE and TAO data, J. Climate, 11, 2628-2644, 1998.
[5] Jay Larson, Robert Jacob, and Everest Ong, The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models, April 27, 2005, Mathematics and Computer Science Division, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439, pp. 16-30.
[6] Hill, C., DeLuca, C., Balaji, V., Suarez, M., da Silva, A., the ESMF Joint Specification Team, 2004. The architecture of the Earth System Modeling Framework. Computing in Science and Engineering 6, 18-28.
[7] Decyk, V. K., Norton, C. D., and Syzmanski, B. K., 1996. Introduction to object-oriented concepts using Fortran90. http://www.cs.rpi.edu/~szymansk/OOF90/F90_Objects.html.
[8] Gropp, W., Lusk, E., and Skjellum, A., 1999. Using MPI: Portable Parallel Programming with the Message Passing Interface, 2nd edition, MIT Press, Cambridge, MA.
[9] Jones, P. W., 1999. First- and Second-Order Conservative Remapping Schemes for Grids in Spherical Coordinates. Monthly Weather Review, 127:2204-2210.
[10] Hill, C., C. DeLuca, Balaji, M. Suarez, and A. Da Silva, 2004. The architecture of the Earth System Modeling Framework. Computing in Science and Engineering, January/February issue, 18-28.
[11] Jones, P. W., 1999. First- and Second-Order Conservative Remapping Schemes for Grids in Spherical Coordinates. Monthly Weather Review, 127:2204-2210.
[12] V. Aslot, M. Domeika, R. Eigenmann, G. Gaertner, W. B. Jones, and B. Parady, SPEComp: A New Benchmark Suite for Measuring Parallel Computer Performance. Proc. of the Workshop on OpenMP Applications and Tools (WOMPAT 2001), Lecture Notes in Computer Science 2104, pp. 1-10, July 2001.
[13] Y. Hu, H. Lu, A. Cox, and W. Zwaenepoel, OpenMP for Networks of SMPs. Journal of Parallel and Distributed Computing, 60(12):1512-1530, December 2000.
[15] A. Kneer, Industrial Mixed OpenMP/MPI CFD Application for Practical Use in Free-surface Flow Calculations. International Workshop on OpenMP Applications and Tools, WOMPAT 2000, http://www.cs.uh.edu/wompat2000/Program.html, 2000.
[16] Message Passing Interface Forum, MPI: A message-passing interface standard. International Journal of Supercomputer Applications, 8(3/4):165-414, 1994.
[17] Interoperable MPI web page, http://impi.nist.gov.
[18] Im, E-S., Ahn, J-B., Kwon, W-T., Giorgi, F., 2007. Multi-decadal scenario simulation over Korea using a one-way double-nested regional climate model system. Part 2: Future climate projection (2021-2050). Climate Dynamics, 30: 239-254.
[19] Fiedler, B., J. Rafal, and C. Hudgin, F90tohtml tool and documentation. http://mensch.org/f90tohtml.
[20] Burk, S.D., Haack, T., Samelson, R.M., 1999. Mesoscale simulation of supercritical, subcritical, and transcritical flow along coastal topography. Journal of Atmospheric Sciences 56, 2780-2795.