Efficient Data Mapping and PyAnsys r2
An Introduction to PyAnsys
————————————
• In this article, we’ll focus on a less ambitious goal: we’ll see how the PyAnsys framework can be used to automate a frequent but cumbersome task: mapping data from one simulation source to another
• We’ll do so by first exploring options without PyAnsys
• We want to transfer a result (like temperature) from one domain onto another
[Figure: source and target domains; the target carries a ‘Remote 0 displacement (flexible: all degrees of freedom)’ boundary condition]
• Browse to the file location here
• Specify what data is stored in each column
• Specify the dimensionality of the data, the coordinate system, and the delimiter type here
• Parsed results are shown here
• One file per load step
• The text file data format used by the Python script differs from the one used by the APDL code simply because the default method for exporting Mechanical results to a text file (ExportToTextFile()) does not provide a delimiter option (getting around this limitation would require considerably more code, so we’ll omit that exercise for now)
• Instead, this function simply creates tab (^t) delimited files
• But Microsoft Excel also knows how to read tab-delimited files, and already associates the file extension ‘xls’ with such a format (which is why we, and presumably the developers, use this extension for exporting text files*)
• The text file data format used by the APDL script is comma-delimited (which is more convenient in that language), but note that both file formats nevertheless use the ‘xls’ file extension, because Excel recognizes both delimiters when encountering this extension
*The xls file extension for spreadsheets is a legacy format (97-2003), and users will always receive a warning when opening one of these files with newer versions of Microsoft Excel
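Because both scripts emit plain text behind the ‘xls’ extension, any delimiter-aware reader can parse them. A minimal sketch in standard Python (the column layout shown is illustrative, not the actual Mechanical/APDL export layout):

```python
import csv
import io

# Sample contents mimicking the two export formats described above
# (these column names are made up for illustration)
tab_data = "NODE\tX\tY\tZ\tTEMP\n1\t0.0\t0.0\t0.0\t25.0\n"
comma_data = "NODE,X,Y,Z,TEMP\n1,0.0,0.0,0.0,25.0\n"

def read_delimited(text):
    """Parse a tab- or comma-delimited export into a list of rows."""
    # Detect the delimiter from the header line, as Excel effectively does
    delimiter = "\t" if "\t" in text.splitlines()[0] else ","
    return list(csv.reader(io.StringIO(text), delimiter=delimiter))

rows_tab = read_delimited(tab_data)
rows_comma = read_delimited(comma_data)
# Both formats parse to identical row structures
```

This is why the choice of delimiter is a convenience issue rather than a compatibility one: the downstream reader only needs to know which character splits the columns.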
We Make Innovation Work
www.padtinc.com
The Problem Statement: Data Mapping
-Automating Data Transfer
• Once the solution data from the ‘source’ model has been created, it then needs to be read into
an External Data object
• As we mentioned on slide 9, doing this manually can be very tedious (it is not uncommon to
generate output for hundreds of solution times), so we’ve supplied users with a Workbench
Journal script for doing this automatically*
loadfiledata.wbjn
*This is still an IronPython script which falls under the common ACT paradigm, but for some reason, the developers
call scripting at the Project Schematic level ‘journaling’ and have adopted a ‘wbjn’ file extension for these scripts. The
only difference between coding at this level and within a specific application (Mechanical, for example) is that you’re
in a different namespace (don’t have access to local modules like ExtAPI)
• The journal file ‘loadfiledata.wbjn’ does most of the ‘heavy lifting’ of both reading in the ASCII text data to be transferred and sending the necessary ACT code to the target system for populating the downstream ‘imported load’ object in the system tree outline
• Lines 12 thru 21 contain global variables which control most of the behavior users may need to modify
• For example, line 14 defines the ‘filepath’ variable, which contains the path to the source file data (the ASCII text files to be read)
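The pattern of a few globals controlling file discovery can be sketched as below. Only ‘filepath’ mirrors a variable actually named in the journal; the pattern variable and file names are assumptions for illustration:

```python
import glob
import os
import tempfile

# Globals of the kind the journal exposes near its top; 'filepath' mirrors
# the variable described above, the rest are illustrative
filepath = tempfile.mkdtemp()      # directory holding the exported text files
file_pattern = "loadstep_*.xls"    # one exported file per load step

# Create two dummy load-step files so this sketch is self-contained
for i in (1, 2):
    with open(os.path.join(filepath, "loadstep_%d.xls" % i), "w") as f:
        f.write("1,0.0,0.0,0.0,25.0\n")

# The journal then loops over every matching file, in load-step order
step_files = sorted(glob.glob(os.path.join(filepath, file_pattern)))
```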
• The data transfer step (running file ‘loadfiledata.wbjn’) took approximately 2 ½ minutes
• But this problem does not scale well (much worse than linear) using this technique
• A common requirement is to map CFD temperatures and fluid pressures onto a structural model. Such models may contain millions of source points over hundreds of load steps. Such a problem could easily take hours (or more) to transfer data using the techniques shown here
• A much more efficient (and compact) method involves the newer PyAnsys tools
(discussed next). We recommend the second option (using ansys.dpf.post to map
temperatures read from text files) for that scenario
• Next, append the path to tmap.py and invoke it with the lines below
[Screenshot: open, run, and debug the files]
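Appending a script’s folder to the module search path and then importing it is the standard pattern here. A self-contained sketch (a stand-in tmap.py is created on the fly; in practice you would point script_dir at the folder that actually contains tmap.py):

```python
import os
import sys
import tempfile

# Stand-in for tmap.py so this sketch runs on its own; the real script's
# contents are not reproduced here
script_dir = tempfile.mkdtemp()
with open(os.path.join(script_dir, "tmap.py"), "w") as f:
    f.write("def run():\n    return 'mapped'\n")

# Append the script's folder to the module search path, then import and invoke it
sys.path.append(script_dir)
import tmap
result = tmap.run()
```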
• Since we use the PyVista grid functionality to interpolate the results, the first step is to fill the
PyVista grid with the source solution
• To understand this a little better, pause for a moment to explore the grid object
• Make a new object called sgrid by typing ‘sgrid = ssol.grid.copy() <enter>’
• We’re getting a contour plot, which is simply plotting the values in the first stored array (because we
didn’t specify), which happens to be the Ansys node number
We Make Innovation Work
www.padtinc.com
Data Mapping with PyAnsys
-ansys.mapdl.reader. How does it work?
• To learn more about what you can do in PyVista, simply type your query at this link
• These lines copy the ‘nodal_temperature’ result (which is a tuple of node numbers and nodal
temperature values for load step i) into arrays labeled ‘TN’, where N is the load step number
(starting from 0)
• Thus, the N temperature results are now also stored in the PyVista grid object. We’ll see why this
is necessary in a moment.
• Type ‘ssol.grid.array_names<enter>’ to see the new arrays we’ve created...
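The copy-each-load-step pattern can be sketched with plain numpy standing in for the reader and the grid (the nodal_temperature stand-in below fabricates values; the real ones come from the result file):

```python
import numpy as np

# Stand-in for the reader's per-load-step result call: each step i returns
# a (node_numbers, temperature_values) tuple, as described above
def nodal_temperature(i):
    nodes = np.array([1, 2, 3])
    temps = np.array([20.0, 25.0, 30.0]) + 10.0 * i
    return nodes, temps

n_steps = 3
grid_arrays = {}  # stands in for the PyVista grid's point-data dictionary

# Copy each load step's temperatures into arrays named T0, T1, ..., mirroring
# the 'TN' naming convention described above
for i in range(n_steps):
    _, temps = nodal_temperature(i)
    grid_arrays["T%d" % i] = temps

array_names = sorted(grid_arrays)  # analogous to ssol.grid.array_names
```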
• Lines 45 thru 60 generate the APDL commands in the form of text files with the node,
temperature, and time data we need to run the analysis
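Writing APDL commands as plain text from Python can be sketched as follows. BF is the standard APDL command for nodal body loads and TIME sets the load-step time; the node numbers, temperatures, and file name here are made up, and the actual script’s output layout may differ:

```python
import os
import tempfile

# Illustrative node numbers and temperatures for one load step
nodes = [101, 102, 103]
temps = [25.0, 30.5, 28.2]
time_value = 1.0

out_dir = tempfile.mkdtemp()
cmd_file = os.path.join(out_dir, "temps_step1.inp")

# Write one TIME command plus one BF (nodal body-force temperature)
# command per node
with open(cmd_file, "w") as f:
    f.write("TIME,%g\n" % time_value)
    for n, t in zip(nodes, temps):
        f.write("BF,%d,TEMP,%g\n" % (n, t))

lines = open(cmd_file).read().splitlines()
```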
[Schematic: the target system model (on which to map data), with its .py files, ds.dat, and file.rst under dp0\SYS-2\MECH]
• In this example,
we’re using the
comma-delimited
files generated by
the APDL scripts
(slides 15 – 17)
• To change that, just
uncomment line 22
• Line 28 gets the node data one time (so that it doesn’t have to keep reading that for every time step)
• Line 29 generates a PyVista PolyData object out of the source nodes (instead of a full mesh)
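Conceptually, the mapping step samples the source point cloud onto the target nodes. A minimal nearest-neighbor sketch in numpy conveys the idea (PyVista/VTK use smarter distance-weighted schemes; all coordinates and values below are made up):

```python
import numpy as np

# Source points (e.g. CFD nodes) with a temperature at each point
src_pts = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
src_temp = np.array([100.0, 200.0, 300.0])

# Target points (structural nodes) onto which we want the temperatures
tgt_pts = np.array([[0.1, 0.0, 0.0],
                    [0.0, 0.9, 0.0]])

# Nearest-neighbor transfer: for each target point, take the value at the
# closest source point
d = np.linalg.norm(tgt_pts[:, None, :] - src_pts[None, :, :], axis=2)
tgt_temp = src_temp[d.argmin(axis=1)]
```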
• First, make sure to install a supported Python version (PyAnsys supports all versions between 3.9 and 3.11 inclusive as of this writing)
• Once downloaded, double-click on the installer to run it with ordinary user privileges (executing it ‘As Administrator’ won’t bring any advantages, and in fact it may be safer to execute it without those privileges)
• If you have no other versions of Python installed (or if this is to be the ‘main’ one used), check both
boxes below and hit ‘Install Now’
• Make a Virtual Environment for this PyAnsys installation (targeted at Ansys2023R2) by entering the
following (followed by <enter>):
python -m venv pyansys2023R2
• By default, Python will place the virtual environment in the current directory location (in this
case, your ‘home’ drive as shown below). You can change this by supplying a path as shown
here
• For future installs, you can obtain a list of available PyAnsys versions by typing:
pip index versions pyansys
• Now, with this installation of PyAnsys, test it out with at least a few examples from the PyAnsys website.
Appendix
Installing PyAnsys: Option 1
• In particular, make sure to test modules that you know you will use in a Python shell as below
• Testing this base installation in January of 2024 on the example found here results in the following.
• That seems to work...
• This happened in January 2024, but may not happen in future releases. If it does happen, the broken modules and dependencies must be repaired (future users may skip the following if nothing’s broken)
• If the above error does occur, exit Python and type and enter the following:
• Certainly not what we want to see, but so far, we haven’t noticed anything still broken
• Proceed to install the IDE
• We’re going to install the Spyder IDE (editor). We recommend doing this outside of any virtual
environment and NOT using pip (we’re going to download the installer). But before we do that, we’ll
use pip within the pyansys environment to supply Spyder with the dependencies it will need to run in
that environment (by the way: you’ll need to do this with all virtual environments you create in the
future)
• Type:
pip install spyder-kernels==2.5.*
• Python version is now 3.11.7