ANSYS 17.2 Executable Paths on Linux


When running on a machine with a Linux operating system, it is not uncommon for users to want to run from the command line or with a shell script. To do this you need to know where the actual executable files are located. Based on a request from a customer, we have coalesced the major ANSYS product executables that can be run via the command line on Linux into a single list:

ANSYS Workbench (Includes ANSYS Mechanical, Fluent, CFX, Polyflow, Icepak, Autodyn, Composite PrepPost, DesignXplorer, DesignModeler, etc.):

/ansys_inc/v172/Framework/bin/Linux64/runwb2

ANSYS Mechanical APDL, a.k.a. ANSYS ‘classic’:

/ansys_inc/v172/ansys/bin/launcher172 (brings up the MAPDL launcher menu)
/ansys_inc/v172/ansys/bin/mapdl (launches ANSYS MAPDL)

CFX Standalone:

/ansys_inc/v172/CFX/bin/cfx5

Autodyn Standalone:

/ansys_inc/v172/autodyn/bin/autodyn172

Note: Autodyn requires the argument -I {ident-name}

Fluent Standalone (Fluent Launcher):

/ansys_inc/v172/fluent/bin/fluent

Icepak Standalone:

/ansys_inc/v172/Icepak/bin/icepak

Polyflow Standalone:

/ansys_inc/v172/polyflow/bin/polyflow/polyflow < my.dat

Chemkin:

/ansys_inc/v172/reaction/chemkinpro.linuxx8664/bin/chemkinpro_setup.ksh

Forte:

/ansys_inc/v172/reaction/forte.linuxx8664/bin/forte.sh

TGRID:

/ansys_inc/v172/tgrid/bin/tgrid

ANSYS Electronics Desktop (for Ansoft tools, e.g. Maxwell, HFSS):

/ansys_inc/v172/AnsysEM/AnsysEM17.2/Linux64/ansysedt

SIWave:

/ansys_inc/v172/AnsysEM/AnsysEM17.2/Linux64/siwave
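For unattended runs, these executables are typically wrapped in a shell script. Below is a minimal sketch for an MAPDL batch run; the install root is the default shown above, while the jobname and input-file names are assumptions for illustration. The -b, -j, -i, and -o options are MAPDL's standard batch-mode, jobname, input-file, and output-file switches:

```shell
#!/bin/sh
# Minimal batch-launch sketch for ANSYS 17.2 on Linux.
# JOBNAME and INPUT below are placeholder names -- adjust for your model.
ANSYS_ROOT=/ansys_inc/v172
JOBNAME=my_analysis
INPUT=model.inp

# MAPDL batch run: -b batch mode, -j jobname, -i input file, -o output file
CMD="$ANSYS_ROOT/ansys/bin/mapdl -b -j $JOBNAME -i $INPUT -o $JOBNAME.out"
echo "$CMD"

# Uncomment to actually run once the paths are verified:
# $CMD

# Other products follow the same pattern, e.g. Autodyn (note the required -I):
# $ANSYS_ROOT/autodyn/bin/autodyn172 -I my_ident
```

The same pattern works for the other executables in the list above; only the path and product-specific arguments change.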

Modeling 3D Printed Cellular Structures: Challenges

In this post, I discuss six challenges that make the modeling of 3D printed cellular structures (such as honeycombs and lattices) a non-trivial matter. In a follow-up post, I will present how some of these problems have been addressed with different approaches.

At the outset, I need to clarify that by modeling I mean the analytical representation of material behavior, primarily for use in predictive analysis (simulation). Here are some reasons why this is a challenging endeavor for 3D printed cellular solids – some of these reasons are unique to 3D printing, others are a result of aspects that are specific to cellular solids, independent of how they are manufactured. I show examples with honeycombs since that is the majority of the work we have data for, but I expect that these ideas apply to foams and lattices as well, just with varying degrees of sensitivity.

1. Complex Geometry with Non-Uniform Local Conditions

I state the most well-appreciated challenge with cellular structures first: they are NOT fully-dense solid materials that have relatively predictable responses governed by straightforward analytical expressions. Consider a dogbone-shaped specimen of solid material under tension: its stress-strain response can be described fairly well using continuum expressions that do not account for geometrical features beyond the size of the dogbone (area and length for stress and strain computations respectively). However, as shown in Figure 1, such is not the case for cellular structures, where local stress and strain distributions are non-uniform. Further, they may have variable distributions of bending, stretching and shear in the connecting members that constitute the structure. So the first question becomes: how does one represent such complex geometry – both analytically and numerically?

Fig 1. Honeycomb structure under compression showing non-uniform local elastic strains [Le & Bhate, under preparation]

2. Size Effects

A size effect is said to be significant when an observed behavior varies as a function of the size of the sample whose response is being characterized even after normalization (dividing force by area to get stress, for example). Here I limit myself to size effects that are purely a mathematical artifact of the cellular geometry itself, independent of the manufacturing process used to make them – in other words this effect would persist even if the material in the cellular structure was a mathematically precise, homogeneous and isotropic material.

It is common in the field of cellular structure modeling to extract an “effective” property – a property that represents a homogenized behavior without explicitly modeling the cellular detail. This is an elegant concept but introduces some practical challenges in implementation – inherent in the assumption is that this property, modulus for example, is equivalent to a continuum property valid at every material point. The reality is that the extraction of this property is strongly dependent on the number of cells involved in the experimental characterization process. Consider experimental work done by us at PADT, shown in Figure 2 below, where we varied the number of both axial and longitudinal cells (see inset for definition) when testing hexagonal honeycomb samples made of ULTEM-9085 with FDM. The extracted effective modulus increases with an increasing number of cells in the axial direction, but reduces (at a lower rate) with an increasing number of cells in the longitudinal direction.

This is a significant challenge and deserves a full post to do it justice (one is forthcoming), but the key thing to remember is that testing a particular cellular structure does not suffice for extracting effective properties. So the second question here becomes: what is the correct specimen design for characterizing cellular properties?

Fig 2. Effective modulus under compression showing a strong dependence on the number of cells in the structure [Le & Bhate, under preparation]
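For context on what such a homogenized "effective" property looks like analytically, the classical Gibson-Ashby expression for the in-plane effective modulus of a regular hexagonal honeycomb relates it to the solid material modulus Es, the wall thickness t, and the wall length l (quoted here as a reference point, not as part of our experimental work):

```latex
\frac{E^*}{E_s} \approx 2.3 \left(\frac{t}{l}\right)^{3}
```

The size-effect data in Figure 2 indicate that an experimentally extracted E* approaches a single continuum value of this kind only as the number of cells grows.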

3. Contact Effects

In the compression test shown in the inset in Figure 2, there is physical contact between the platen and the specimen that creates a local effect at the top and bottom that is different from the experience of the cells closer to the center. This is tied to the size effect discussed above – if you have enough cells in the axial direction, the contribution of this effect should reduce – but I have called it out as a separate effect here for two reasons. Firstly, it raises the question of how best to design the interface for the specimen: should the top and bottom cells terminate in a flat plate, or should the cells extend to the surface of contact (the latter is the case in the image above)? Secondly, it raises the question of how best to model the interface, especially if one is seeking to match simulation results to experimentally observed behavior. Both these ideas are shown in Figure 3 below. This also has implications for product design – how do we characterize and model the lattice-skin interface? As such, independent of addressing size effects, there is a need to account for contact behavior in characterization, modeling and analysis.

Fig 3. Two (of many possible) contact conditions for cellular structure compression – both in terms of specimen design as well as in terms of the nature of contact specified in the simulation (frictionless vs frictional, for example)

4. Macrostructure Effects

Another consideration related to specimen design is demonstrated in an exaggerated manner in the slowed-down video below, which shows a specimen flying off the platens under compression. The point is that for certain dimensions of the specimen being characterized (typically very tall aspect ratios), deformation in the macrostructure can influence what is perceived as cellular behavior; in the video, there is some induced bending at the macro level.

5. Dimensional Errors

While all manufacturing processes introduce some dimensional error, that error can have a very significant effect for cellular structures. A typical industrial 3D printing process has tolerances within 75 microns (0.003″), while cellular structures (micro-lattices in particular) are very often only 250-750 microns in thickness, so the dimensional error in member thickness can reach 10% or more. This was our finding when working with Fused Deposition Modeling (FDM): on a 0.006″ thick wall we measured about a 10% greater true thickness when we scanned the samples optically, as shown in Figure 4. Such large errors in thickness can yield a significant error in measured behavior such as elastic modulus, which often scales with some power of the thickness, amplifying the error. This drives the need for independent measurement of the manufactured cellular structure – itself made challenging by the need to penetrate the structure for internal measurements. X-ray scanning is a popular, if expensive, approach. But the modeler then has the challenge of devising an average thickness for analytical calculations and, furthermore, of representing the geometry in simulation software for efficient analysis.

Fig 4. (Clockwise from top left): FDM ULTEM 9085 honeycomb sample, optical scan image, 12-sample data showing a mean of 0.064″ against a designed value of 0.060″ – a 7% error in thickness
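To make the amplification concrete: if the effective stiffness of a bending-dominated cellular member scales roughly with the cube of wall thickness (an illustrative assumption; the exact exponent depends on the deformation mode), the 7% thickness error in Figure 4 grows to roughly a 22% stiffness error:

```latex
\frac{E_{\text{actual}}}{E_{\text{designed}}}
\approx \left(\frac{t_{\text{actual}}}{t_{\text{designed}}}\right)^{3}
= \left(\frac{0.064}{0.060}\right)^{3} \approx 1.22
```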

6. Mesostructural Effects

The layerwise nature of Additive Manufacturing introduces a set of challenges that are somewhat unique to 3D printed parts. Chief among these is the resulting sensitivity to orientation, as shown for the laser-based powder bed fusion process in Figure 5 with standard materials and parameter sets. Unsupported (overhang) regions tend to have down-facing surfaces with a different morphology from up-facing ones. In the context of cellular structures, this is likely to result in different thickness effects depending on the direction measured.

Fig 5. 3D Printed Stainless Steel Honeycomb structures showing orientation dependent morphology [PADT, 2016]

For the FDM process, in addition to orientation, the toolpaths that effectively determine the internal meso-structure of the part (discussed in greater detail in a previous blog post) have a very strong influence on observed stiffness behavior, as shown in Figure 6. Thus orientation and process parameters are variables that need to be accounted for in the modeling of cellular structures – or held constant, limiting the range of applicability of the model parameters derived from that set of process conditions.

Fig 6. Effects of different toolpath selections in Fused Deposition Modeling (FDM) for honeycomb structure tensile testing  [Bhate et al., RAPID 2016]

Summary

Modeling cellular structures involves the challenges described above, most of which have practical implications for determining the correct specimen design. It is our mission over the next 18 months to address some of these challenges to a satisfactory level through an America Makes grant we have been awarded. While these ideas have been explored in other manufacturing contexts, much remains to be done for the AM community, where cellular structures have singular potential in application.

In future posts, I will discuss some of these challenges in detail and also present different approaches to modeling 3D printed cellular structures – none addresses all the challenges here satisfactorily, but each has its pros and cons. Until then, feel free to send us an email at info@padtinc.com citing this blog post, or connect with me on LinkedIn so you get notified whenever I write a post on this or similar subjects in Additive Manufacturing (1-2 times/month).

ANSYS How To: Result Legend Customization and Reuse

A user was asking how to modify the result legend in ANSYS Mechanical R17, so Ted Harris put together this little How To in PowerPoint:

padt_mechanical_custom_legend_r17.pdf

It shows how to modify the legend to get just what you want, how to save the settings to a file, and then how to use those settings again on a different model. Very simple and powerful.

ansys-mechanical-custom-legend-1

 

 

ansys-mechanical-custom-legend-2

Jet Engines to Golf Clubs – Phoenix Area ANSYS Users Share their Stories

There is nothing better than seeing the powerful and interesting ways that other engineers are using the same tools you use. That is why ANSYS, Inc. and PADT teamed up on Thursday to hold an “ANSYS Arizona Innovation Conference” at ASU SkySong where users could come to share and learn.

The day kicked off with Andy Bauer from ANSYS welcoming everyone and giving them an update on the company and some general overarching direction for the technology. Then Samir Rida from Honeywell Aerospace gave a fantastic keynote sharing how simulation drives the design of their turbine engines. As a former turbine engine guy, I found it fascinating and exciting to see how accurate and detailed their modeling is.

img_1629b

Next up was my talk on the Past, Present, and Future of simulation for product development. The point of the presentation was to take a step back and really think about what simulation is, what we have been doing, and what it needs to look like in the future. We all sort of agreed that we wanted voice activation and artificial intelligence built in now. If you are interested, you can find my presentation here: padt-ansys-innovation-az-2016.pdf.

After a short break ANSYS’s Sara Louie launched into a discussion on some of the new Antenna Systems modeling capabilities, simulating multiple physics and large domains with ANSYS products.  The ability to model the entire interaction of an antenna including large environments was fascinating.

Lunchtime discussions focused on the presentations in the morning as well as people sharing what they were working on.

The afternoon started with a review of the ANSYS AIM product by Hoang Vinh of ANSYS. This was followed by customer presentations. Both Galtronics and ON Semiconductor shared how they drive the design of their RF systems with ANSYS HFSS and related tools. Then Nammo Talley shared how they incorporated simulation into their design process and showed an example of a projectile redesign for a shoulder-launched rocket that was driven by simulation in ANSYS CFX. They had the added advantage of being able to show something that blows up, always a crowd pleaser.

Another break was followed by a great look at how Ping used CFD to improve the design of one of their drivers. They used simulation to understand the drag on the head through an entire swing and then added aerodynamic features that improved the performance of the club significantly. Much of the work is featured in an ANSYS Advantage article.

We wrapped things up with an in depth technical look at Shock and Vibration Analysis using ANSYS Mechanical and Multiphysics PCB Analysis with the full ANSYS product suite.

The best part of the event was seeing how all the different physics in ANSYS products are being used and applied in different industries. We hope to have similar events in the future, so make sure you sign up for our mailings; the “ANSYS – Software Information & Seminars” list will keep you in the loop.

img_1628

 

 

New Second Edition in Paperback and Kindle: Introduction to the ANSYS Parametric Design Language (APDL)

After three years on the market, with signs that sales were increasing year over year, we decided it was time to go through our popular training book “Introduction to the ANSYS Parametric Design Language (APDL)”, make some updates, and reformat it so that it could be published as a Kindle e-book. The new Second Edition includes two additional chapters: APDL Math and Using APDL with ANSYS Mechanical. The fact that we continue to sell more of these useful books is a sign that APDL is still a vibrant and well-used language, and that others out there find power in its simplicity and depth.

Introduction_to_APDL_V2-Kindle-Ipad-1
I’ll be honest, it was cool to see the book in print the first time, but seeing it on my iPad was just as cool.

This book started life as a class that PADT taught for many years. Then over time people asked if they could buy the notes. And then they asked for a real book. The bulk of the content came from Jeff Strain with input from most of our technical staff. Much of the editing and new content was done by Susanna Young and Eric Miller.

Here is the Description from Amazon.com:

The definitive guide to the ANSYS Parametric Design Language (APDL), the command language for the ANSYS Mechanical APDL product from ANSYS, Inc. PADT has converted their popular “Introduction to APDL” class into a guide so that users can teach themselves the APDL language at their own pace. Its 14 chapters include reference information, examples, tips and hints, and eight workshops. Topics covered include:

– Parameters
– User Interfacing
– Program Flow
– Retrieving Database Information
– Arrays, Tables, and Strings
– Importing Data
– Writing Output to Files
– Menu Customization
– APDL Math
– Using APDL in ANSYS Mechanical

At only $75.00 it is an investment that will pay for itself quickly. Even if you are an ANSYS Mechanical user, you can still benefit from knowing APDL, allowing you to add code snippets to your models. We have put some images below and you can also learn more here or go straight to Amazon.com to purchase the paperback or Kindle versions.

Introduction_to_APDL_V2-1_Cover

PADT-Intro-APDL-pg184-185 PADT-Intro-APDL-pg144-145 PADT-Intro-APDL-pg112-113 PADT-Intro-APDL-pg100-101 PADT-Intro-APDL-pg-020-021

 

Video Tips: Changing Multiple Load Step Settings in ANSYS Mechanical

ANSYS Mechanical allows you to specify settings for load steps one at a time. Most users don’t know that you can change settings for any combination of load steps by selecting them in the load step graph. PADT’s Joe Woodward shows you how in this short but informative video.

Video Blog: Copying Time Steps from a Thermal Transient to a Static Structural Model in ANSYS Mechanical

In this The Focus video blog, Joe Woodward shares a nice little trick he found when answering a tech support question: how to take time steps from a transient thermal analysis in ANSYS Mechanical and use the results as loads in a series of static simulations, in just a few mouse clicks.

ANSYS R17 Brings Added Tools to Mechanical Licenses

Some of you have probably already noticed, but ANSYS Mechanical licenses have changed at version 17. First, the license that for years has been known as ANSYS Mechanical is now known as ANSYS Mechanical Enterprise. Further, ANSYS, Inc. has enabled significantly more functionality with this license at version 17 than was available in prior versions. Note that the license task in the ANSYS license files, ‘ansys’, has not changed.

16.2 and Older (task):  ANSYS Mechanical (ansys)
17.0 (task):            ANSYS Mechanical Enterprise (ansys)

The 17.0 ANSYS License Manager unlocks additional capability with this license, on top of the existing Mechanical structural/thermal abilities. Previously, each of these tools was an additional cost. The change also applies to the other “Mechanical-” licenses, e.g. Mech-EMAG and Mech CFD. The new tools enabled with ANSYS Mechanical Enterprise licenses at version 17.0 are:

  • Fatigue Module
  • Rigid Body Dynamics
  • Explicit STR
  • Composite PrepPost (ACP)
  • SpaceClaim
  • DesignXplorer
  • ANSYS Customization Suite
  • AQWA

Additionally, at version 17.1 these tools are included as well:

  • AIM
  • Simplorer Entry

These changes do not apply to the lower level licenses, such as ANSYS Structural and Professional. In fact, these licenses are moving to ‘legacy’ mode at version 17. Two newer products now slot below Mechanical Enterprise. These newer products are ANSYS Mechanical Premium and ANSYS Mechanical Pro. We won’t explain those products here, but your local ANSYS provider can give you more information on these two if needed.

Getting back to the additional capabilities with Mechanical Enterprise, these become available once the ANSYS 17.0 and/or the ANSYS 17.1 license manager is installed. This assumes you have a license file that is current on TECS (enhancements and support). Also, a new license task is needed to enable Simplorer Entry.

Ignoring Simplorer Entry for the moment: once the 17.0/17.1 license manager is installed, the single Mechanical Enterprise license task (ansys) enables several different tools. Note that:

  • Multiple tool windows can be open at once
    • e.g. ANSYS Mechanical and SpaceClaim
  • Only one can be “active” at a time
    • If solving, can’t edit geometry in SpaceClaim
  • Capabilities are then available in older versions, where applicable, once the 17.0/17.1 license manager is installed

Here is a very brief summary of these newly available capabilities:

Fatigue Module:

  • Runs in the Mechanical window
  • Can calculate fatigue lives for ‘simple’ products (linear static analysis)
    • Stress-life for
      • Constant amplitude, proportional loading
      • Variable amplitude, proportional loading
      • Constant amplitude, non-proportional loading
    • Strain-life
      • Constant amplitude, proportional loading
  • Activated by inserting the Fatigue Tool in the Mechanical Solution branch
  • Postprocess fatigue lives as contour plots, etc.
  • Requires fatigue life data as material properties

ansys-rbd-1Rigid Body Dynamics:

  • Runs in the Mechanical window
  • ANSYS, Inc.-developed solver using explicit time integration, energy conservation
  • Use when only concerned about motion due to joints and contacts
    • To determine forces and moments
  • Activated via Rigid Dynamics analysis system in the Workbench window

drop-test-of-mobile-phoneExplicit STR:

  • Runs in the Mechanical window
  • Utilizes the Autodyn solver
  • For highly nonlinear, short duration structural transient problems
    • Drop test simulations, e.g.
    • Lagrangian-only
  • Activated via Explicit Dynamics analysis system in the Workbench window

simulation-of-3d-compositesComposite PrepPost (ACP):

  • Tools for preparing composites models and postprocessing composites solutions
  • Define composite layup
    • Fiber Directions and Orientations
    • Draping
    • Optimize composite design
  • Results evaluation
    • Layer stresses
    • Failure criteria
    • Delamination
    • Wrinkling
  • Activated via ACP (Pre) and ACP (Post) component systems in the Workbench window

SpaceClaim-Model1bSpaceClaim:

  • Geometry creation/preparation/repair/defeaturing tool
  • Try it, learn it, love it
  • A direct modeler so no history tree
    • Just create/modify on the fly
    • Import from CAD or create in SpaceClaim
    • Can be an incredible time saver in preparing geometry for simulation
  • Activated by right clicking on the Geometry cell in the Workbench project schematic

DesignXplorer:

  • Design of Experiments/Design Optimization/Robust Design Tool
  • Allows for variation of input parameters
    • Geometric dimensions including from external CAD, license permitting
    • Material property values
    • Loads
    • Mesh quantities such as shell thickness, element size specifications
  • Track or optimize on results parameters
    • Max or min stress
    • Max or min temperature
    • Max or min displacement
    • Mass or volume
  • Create design of experiments
  • Fit response surfaces
  • Perform goals driven optimizations
    • Reduce mass
    • Drive toward a desired temperature
  • Understand sensitivities among parameters
  • Perform a Design for Six Sigma study to determine probabilities
  • Activated by inserting Design Exploration components into the Workbench project schematic

ANSYS Customization Suite:

  • Toolkit for customization of ANSYS Workbench tools
  • Includes tools for several ANSYS products
    • Top level Workbench
    • DesignModeler
    • Mechanical
    • DesignXplorer
  • Based on Python and XML
  • Wizards and documentation included

AQWA:

  • Offshore tool for ship, floating platform simulation
  • Uses hydrodynamic diffraction for calculations
  • Model up to 50 structures
  • Include effects of moorings, fenders, articulated connectors
  • Solve in static, frequency, and time domains
  • Transfer motion and pressure info to Mechanical
  • Activated via Hydrodynamic Diffraction analysis system in the Workbench window

AIM:

  • New, common user interface for multiphysics simulations
    • Structural
    • Thermal
    • CFD
    • Electromagnetics
  • Capabilities expanding with each ANSYS release (was new at 16.0)
  • Uses SpaceClaim as geometry tool
  • Single window
  • Easy to follow workflow
  • Activated from the ANSYS 17.0/17.1 Start menu

Simplorer Entry:

  • System level simulation tool
  • Simulate interactions such as between
    • Controllers
    • Actuators
    • Sensors
    • Structural Reduced Order Models
    • Simple circuitry
  • Optimize complex system performance
    • Understand interactions and trade offs
  • Entry level tool, limited to 30 models (Simplorer Advanced enables more)
  • Activated from the ANSYS Electromagnetics tools (separate download)
  • Requires an additional license task from ANSYS, Inc.

Where to get more information:

  • Your local ANSYS provider
  • ANSYS Help System
  • ANSYS Customer Portal

PADT and ASU Collaborate on 3D Printed Lattice Research

ASU student team (left to right): Drew Gibson, Jacob Gerbasi, John Reeher, Matthew Finfrock, Deep Patel and Joseph Van Soest

Over the past two academic semesters (2015/16), I had the opportunity to work closely with six senior-year undergraduate engineering students from Arizona State University (ASU) as their industry adviser on an eProject (similar to a Capstone or Senior Design project). The area we wanted to explore with the students was 3D printed lattice structures, and more specifically, the material modeling aspects of these structures. PADT provided access to our 3D printing equipment and materials, ASU to their mechanical testing and characterization facilities; we both used ANSYS for simulation, and we held a weekly meeting with a whiteboard to discuss our ideas.

While there are several ongoing efforts in developing design and optimization software for lattice structures, there has been little progress in developing a robust, validated material model that accurately describes how these structures behave – this is what our eProject set out to do. The complex internal meso- and microstructure of these structures makes them particularly sensitive to process variables such as build orientation, layer thickness, and deposition or fusion width, none of which are accounted for in the models for lattice structures available today. As a result, the use of published values for bulk materials is not accurately predictive of true lattice structure behavior.

In this work, we combined analytical, experimental and numerical techniques to extract and validate material parameters that describe mechanical response of lattice structures. We demonstrated our approach on regular honeycomb structures of ULTEM-9085 material, made with the Fused Deposition Modeling (FDM) process. Our results showed that we were able to predict low strain responses within 5-10% error, compared to 40-60% error with the use of bulk properties.

This work is to be presented in full at the upcoming RAPID conference on May 18, 2016 (details at this link) and has also been accepted for full length paper submission to the SFF Symposium. We are also submitting a research proposal that builds on this work and extends it into more complex geometries, metals and failure modeling. If you are interested in the findings of this work and/or would like to collaborate, please meet us at RAPID or send us an email (info@padtinc.com).

The final poster summarizing our work rests atop the Stratasys Fortus 400mc that we printed all our honeycomb structures on

Webinars: Overview of Add-On Products that Work with ANSYS Mechanical

With the introduction of the new ANSYS Mechanical Enterprise, many add-on products that previously had to be purchased separately are now included. In these webinars, PADT’s engineers will provide an overview of the key applications that users now have easy access to.

Each product will be reviewed by one of PADT’s engineers. They will share the functionality of each tool, discuss some lessons we have learned in using and supporting it, and provide a short demonstration. Each session will have time for questions and answers.

ANSYS-Footer-RBD-STR-ACT

Sign up for the one you want, or all three. Everyone who registers will receive a link to the recording and to a copy of the slides, so register even if you cannot make the specific dates.

Here are the times and links to register:

Overview of ANSYS Rigid Body Dynamics (RBD) and ANSYS Explicit STR
May 19, 2016 (Thu)
11:00 am MST & PDT / 12:00 pm MDT

      REGISTER

Overview of ANSYS SpaceClaim and ANSYS AIM
May 24, 2016 (Tue)
11:00 am MST & PDT / 12:00 pm MDT

    REGISTER

Overview of ANSYS Customization Toolkit (ACT) and ANSYS DesignXplorer (DX)
May 26, 2016 (Thu)
11:00 am MST & PDT / 12:00 pm MDT

     REGISTER

We hope to see you online.  If you have any questions, contact us at support@padtinc.com or call 480.813.4884.

ANSYS_Mechanical_Header

Helpful New Meshing Feature in ANSYS Mechanical 17.0 – Nonlinear Mechanical Shape Checking

Meshing for Nonlinear Structural Problems

Overcoming convergence difficulties in nonlinear structural problems can be a challenge. I’ve written a couple of times previously about tools that can help us overcome those difficulties.

I’m pleased to announce a new tool in the ANSYS Mechanical tool belt in version 17.0: a new meshing option for structural simulations, Nonlinear Mechanical Shape Checking. This option joins the previously available Standard Mechanical Shape Checking and Aggressive Mechanical Shape Checking options. For a nonlinear solution in which elements can become significantly distorted, starting with better-shaped elements means they can undergo larger deformations without encountering errors in element formulation, so we may encounter fewer difficulties as the nodes deflect and the elements become distorted. The nonlinear mechanical setting is more restrictive on element shapes than the other two settings.

We’ve been recommending the aggressive mechanical setting for nonlinear solutions for quite a while. The new nonlinear mechanical setting is looking even better. Anecdotally, I have one highly nonlinear customer model that reached 95% of the applied load before a convergence failure in version 16.2. That was with the aggressive mechanical shape checking. With 17.0, it reached 99% simply by remeshing with the same aggressive setting and solving. That tells you that work has been going on under the hood with the ANSYS meshing and nonlinear technology. By switching to the new nonlinear mechanical shape checking and solving again, the solution now converges for the full 100% of the applied load.

Here are some statistics using just one measure of the ‘goodness’ of our mesh: element quality. You can read about the definition of element quality in the ANSYS Help, but in summary better-shaped elements have a quality value close to 1.0, while poorly shaped elements have a value closer to zero. The following stats are for tetrahedral meshes of a simple turbomachinery blade/rotor sector model (this is not a real part, just something made up) comparing two of the options for element shape checking. The table shows that the new nonlinear mechanical setting produces significantly fewer elements with a quality value of 0.5 or less. Keep in mind this is just one way to look at element quality – other methods or a different cutoff might put things in a somewhat different perspective. However, we can conclude that the Nonlinear Mechanical setting is giving us fewer ‘lower quality’ elements in this case.

Shape Checking Setting    Total Elements    Elements w/ Quality < 0.5    % of Elements w/ Quality < 0.5
Aggressive Mechanical     31683             1831                         5.8
Nonlinear Mechanical      31865             1249                         3.9

Here are images of a portion of the two meshes mentioned above. This is the mesh with the Aggressive Mechanical Shape Checking option set:

ansys-new-meshing-17-01

And this is the mesh with the Nonlinear Mechanical Shape Checking option set:

ansys-new-meshing-17-02

The eyeball test on these two meshes confirms fewer elements at the lower quality contour levels.

So, if you are running nonlinear structural models, we urge you to test out the new Nonlinear Mechanical mesh setting. Since it is more restrictive on element shapes, you may see longer meshing times or encounter some difficulties in meshing complex geometry. The payoff, however, may be nonlinear solutions that converge more easily. Give it a try!

Keypad Shortcuts for Quick Views in Workbench

keypad1Hey, did you know that you can access predefined views in both ANSYS Mechanical and DesignModeler using your numeric keypad? You can! Assuming the front view is looking down the +Z-axis at the X-Y plane, here are the various views you can access via your numeric keypad.

For this to work, make sure you’ve clicked within the graphics window itself—not on the top window bar, or one of the tool bars, but right in the region where the model is displayed. You may need to turn off Num Lock, though it works for me on both my laptop and desktop with Num Lock on or off.

With that out of the way, here are the views:

0) Isometric view, a bit more zoomed in than the standard auto-fit isometric view. This is my preferred level of zoom while still being able to see the whole model, to be honest.

image

1) Front view (looking down the +Z-axis)

image

2) Bottom view (looking down the -Y-axis)

image

3) Right view (looking down the +X-axis)

image

4) Back up to the previous view

5) Isometric view, standard autofit (I don’t like the standard auto-fit—too much empty space. I prefer the keypad 0 level of zoom.)

image

6) Go forward to the next view in the cache

7) Left view (looking down the -X-axis)

image

8) Top view (looking down the +Y-axis)

image

9) Back view (looking down the -Z-axis)

image

Here’s a handy-dandy chart you can print out to refer to when using the numeric keypad to change views in Mechanical or DesignModeler. Share it with your friends.

image

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 3)

ansys-fortran-to-c-cpp-1-00In the last post of this series I illustrated how I handled the nested call structure of the procedural interface to ANSYS’ BinLib routines.  If you recall, any time you need to extract some information from an ANSYS result file, you have to bracket the function call that extracts the information with a *Begin and *End set of function calls.  These two functions setup and tear down internal structures needed by the FORTRAN library.  I showed how I used RAII principles in C++ along with a stack data structure to manage this pairing implicitly.  However, I have yet to actually read anything truly useful off of the result file!  This post centers on the design of a set of C++ iterators that are responsible for actually reading data off of the file.  By taking the time to write iterators, we expose the ANSYS RST library to many of the algorithms available within the standard template library (STL), and we also make our own job of writing custom algorithms that consume result file data much easier.  So, I think the investment is worthwhile.

If you’ve programmed in C++ within the last 10 years, you’ve undoubtedly been exposed to the standard template library.  The design of the library is really rather profound.  This image represents the high level design of the library in a pictorial fashion:

ansys-fortran-to-c-cpp-3-01

On one hand, the library provides a set of generic container objects that provide robust implementations of many of the classic data structures available within the field of computer science.  The collection of containers includes things like arbitrarily sized contiguous arrays (vectors), linked lists, and associative arrays, which are implemented either as binary trees or as hash containers, as well as many more.  The set of containers alone makes the STL quite valuable for most programmers.

On the other hand, the library provides a set of generic algorithms that encompass a whole suite of functionality defined in classic computer science.  Sorting, searching, rearranging, merging, etc… are just a handful of the algorithms provided by the library.  Furthermore, extreme care has been taken within the implementation of these algorithms such that an average programmer would be hard pressed to produce something safer and more efficient on their own.

However, the real gem of the standard library is its iterators.  Iterators bridge the gap between generic containers on one side and the generic algorithms on the other side.  Need to sort a vector of integers, or a double ended queue of strings?  If so, you just call the same sort function and pass it a set of iterators.  These iterators “know” how to traverse their parent container.  (Remember, containers are the data structures.)

So, what if we could write a series of iterators to access data from within an ANSYS result file?  What would that buy us?  Well, depending upon which concepts our iterators model, having them available would open up access to at least some of the STL suite of algorithms.  That’s good.  Furthermore, having iterators defined would open up the possibility of providing range objects.  If we can provide range objects, then all of a sudden range based for loops are possible.  These types of loops are more than just syntactic sugar.  By encapsulating the bounds of iteration within a range, and by using iterators in general to perform the iteration, the burden of a correct implementation is placed on the iterators themselves.  If you spend the time to get the iterator implementation correct, then any loop you write after that using either the iterators or, better yet, the range object will implicitly be correct from a loop construct standpoint.  Range based for loops also make your code cleaner and easier to reason about locally.

Now for the downside…  Iterators are kind of hard to write.  The price for the flexibility they offer is paid for in the amount of code it takes to write them.  Again, though, the idea is that you (or, better yet somebody else) writes these iterators once and then you have them available to use from that point forward.

Because of their flexibility, standard conformant iterators come in a number of different flavors.  In fact, they are very much like an ice cream sundae where you can pick and choose what features to mix in or add on top.  This is great, but again it makes implementing them a bit of a chore.  Here are some of the design decisions you have to answer when implementing an iterator:

Decision                 Options                       Choice for RST Reader Iterators
Dereference data type    Anything you want             Special structs for each type of iterator
Iteration category       1. Forward iterator           Forward, single pass
                         2. Single pass iterator
                         3. Bidirectional iterator
                         4. Random access iterator

Iterators syntactically function much like pointers in C or C++.  That is, like a pointer you can increment an iterator with the ++ operator, you can dereference an iterator with the * operator and you can compare two iterators for equality.  We will talk more about incrementing and comparisons in a minute, but first let’s focus on dereferencing.  One thing we have to decide is what type of data the client of our iterator will receive when they dereference it.  My choice is to return a simple C++ structure with data members for each piece of data.  For example, when we are iterating over the nodal geometry, the RST file contains the node number, the nodal coordinates and the nodal rotations.  To represent this data, I create a structure like this:

ansys-fortran-to-c-cpp-3-02

I think this is pretty self-explanatory.  Likewise, if we are iterating over the element geometry section of an RST file, there is quite a bit of useful information for each element.  The structure I use in that case looks like this:

ansys-fortran-to-c-cpp-3-03

 

Again, pretty self-explanatory.  So, when I’m building a node geometry iterator, I’m going to choose the NodalCoordinateData structure as my dereference type.

The next decision I have to make is what “kind” of iterator I’m going to create.  That is, what types of “iteration” will it support?  The C++ standard supports a variety of iterator categories.  You may be wondering why anyone would ever care about an “iteration category”?  Well, the reason is fundamental to the design of the STL.   Remember that the primary reason iterators exist is to provide a bridge between generic containers and generic algorithms.  However, any one particular algorithm may impose certain requirements on the underlying iterator for the algorithm to function appropriately.

Take the algorithm “sort” for example.  There are, in fact, lots of different “sort” algorithms.  The most efficient versions of the “sort” algorithm require that an iterator be able to jump around randomly in constant time within the container.  If the iterator supports jumping around (a.k.a. random access) then you can use it within the most efficient sort algorithm.   However, there are certain kinds of iterators that don’t support jumping around.  Take a linked list container as an example.  You cannot randomly jump around in a linked list in constant time.  To get to item B from item A you have to follow the links, which means you have to jump from link to link to link, where each jump takes some amount of processing time.  This means, for example, there is no easy way to cut a linked list exactly in half even if you know how many items in total are in the list.  To cut it in half you have to start at the beginning and follow the links until you’ve followed size/2 number of links.  At that point you are at the “center” of the list.  However, with an array, you simply choose an index equal to size/2 and you automatically get to the center of the array in one step.  Many sort algorithms, as an example, obtain their efficiency by effectively chopping the container into two equal pieces and recursively sorting the two pieces.  You lose all that efficiency if you have to walk out to the center.

If you look at the “types” of iterators in the table above you will see that they build upon one another.  That is, at the lowest level, I have to answer the question, can I just move forward one step?  If I can’t even do that, then I’m not an iterator at all.  After that, assuming I can move forward one step, can I only go through the range once, or can I go over the range multiple times?  If I can only go over the range once, I’m a single pass iterator.  Truthfully, the forward iterator concept and the single pass iterator concept form levels 1A and 1B of the iterator hierarchy.  The next higher level of functionality is a bidirectional iterator.  This type of iterator can go forward and backward one step in constant time.  Think of a doubly linked list.  With forward and backward links, I can go either direction one step in constant time.  Finally, the most flexible iterator is the random access iterator.  These are iterators that really are like raw pointers.  With a pointer you can perform pointer arithmetic such that you can add an arbitrary offset to a base pointer and end up at some random location in a range.  It’s up to you to make sure that you stay within bounds.  Certain classes of iterators provide this level of functionality, namely those associated with vectors and deques.

So, the question is what type of iterator should we support?  Perusing through the FORTRAN code shipped with ANSYS, there doesn’t appear to be an inherent limitation within the functions themselves that would preclude random access.  But, my assumption is that the routines were designed to access the data sequentially.  (At least, if I were the author of the functions that is what I would expect clients to do.)  So, I don’t know how well they would be tested regarding jumping around.  Furthermore, disk controllers and disk subsystems are most likely going to buffer the data as it is read, and they very likely perform best if the data access is sequential.  So, even if it is possible to randomly jump around on the result file, I’m not sold on it being a good idea from a performance standpoint.  Lastly, I explicitly want to keep all of the data on the disk, and not buffer large chunks of it into RAM within my library.  So, I settled on expressing my iterators as single pass, forward iterators.  These are fairly restrictive in nature, but I think they will serve the purpose of reading data off of the file quite nicely.

Regarding my choice to not buffer the data, let me pause for a minute and explain why I want to keep the data on the disk. First, in order to buffer the data from disk into RAM you have to read the data off of the disk one time to fill the buffer.  So, that process automatically incurs one disk read.  Therefore, if you only ever need to iterate over the data once, perhaps to load it into a more specialized data structure, buffering it first into an intermediate RAM storage will actually slow you down, and consume more RAM.  The reason for this is that you would first iterate over the data stored on the disk and read it into an intermediate buffer.  Then, you would let your client know the data is ready and they would iterate over your internal buffer to access the data.  They might as well just read the data off the disk themselves! If the end user really wants to buffer the data, that’s totally fine.  They can choose to do that themselves, but they shouldn’t have to pay for it if they don’t need it.

Finally, we are ready to implement the iterators themselves.  To do this I rely on a very high quality open source library called Boost.  Boost’s Iterator library provides a class template called iterator_facade that takes care of supplying most all of the boilerplate code needed to create a conformant iterator.  Using it has proven to be a real time saver.  To define the actual iterator, you derive your iterator class from iterator_facade and pass it a few template parameters.  One is the category defining the type of iterator you are creating.  Here is the definition for the nodal geometry iterator:

ansys-fortran-to-c-cpp-3-04

You can see that there are a few private functions here that actually do all of the work.  The function “increment” is responsible for moving the iterator forward one spot.  The function “equal” is responsible for determining if two different iterators are in fact equal.  And the function “dereference” is used to return the data associated with the current iteration spot.  You will also notice that I locally buffer the single piece of data associated with the current location in the iteration space inside the iterator.  This is stored in the pData member variable.  I also locally store the current index.   Here are the implementations of the functions just mentioned:

ansys-fortran-to-c-cpp-3-05

First you can see that testing iterator equality is easy.  All we do is just look to see if the two iterators are pointing to the same index.  If they are, we define them as equal. (Note, an important nuance of this is that we don’t test to see if their buffered data is equal, just their index.  This is important later on.)  Likewise, increment is easy to understand as well.  We just increase the index by one, and then buffer the new data off of disk into our local storage.  Finally, dereference is easy too.  All we do here is just return a reference to the local data store that holds this one node’s data.  The only real work occurs in the readData() function.  Inside that function you will see the actual call to the CResRdNode() function.  We pass that function our current index and it populates an array of 6 doubles with data and returns the actual node number.  After we have that, we simply parse out of that array of 6 doubles the coordinates and rotations, storing them in our local storage.  That’s all there is to it.  A little more work, but not bad.

With this handful of operations, the boost iterator_facade class can actually build up a fully conformant iterator with all the proper operator overloads, etc… It just works.  Now that we have iterators, we need to provide a “begin” and “end” function just like the standard containers do.  These functions should return iterators that point to the beginning of our data set and to one past the end of our data set.  You may be thinking to yourself, wait a minute, how do we provide an “end” iterator without reading the whole set of nodes?  The reality is, we just need to provide an iterator that “equality tests” as equal to the end of our iteration space.  What does that mean?  Well, it means we just need to provide an iterator that, when compared to another iterator which has walked all the way to the end, passes the “equal” test.  Look at the “equal” function above.  What do two iterators need to have in common to be considered equal?  They need to have the same index.  So, the “end” iterator simply has an index equal to one past the end of the iteration space.  We know how big our iteration space is because that is one of the pieces of metadata supplied by those ResRd*Begin functions.  So, here are our begin/end functions to give us a pair of conformant iterators.

ansys-fortran-to-c-cpp-3-06

Notice that the nodes_end() function creates a NodeIterator and passes it an index that is one past the maximum number of nodes that have coordinate data stored on file.  You will also notice that it does not have a second Boolean argument associated with it.  I use that second argument to determine if I should immediately read data off of the disk when the iterator is constructed.  For the begin iterator, I need to do that.  For the end iterator, I don’t actually need to cache any data.  In fact, no data for that node is defined on disk.  I just need a sentinel iterator that is one past the iteration space.

So, there you have it.  Iterators are defined that implicitly walk over the RST file pulling data off as needed and locally buffering one piece of it.  These iterators are standard conformant and thus can be used with any STL algorithm that accepts a single pass, read only, forward iterator.  They are efficient in time and storage.  There is, though, one last thing that would be nice.  That is to provide a range object so that we can have our cake and eat it too.  That is, so we can write these C++11 range based for loops.  Like this:

ansys-fortran-to-c-cpp-3-07

To do that we need a bit of template magic.  Consider this template and type definition:

ansys-fortran-to-c-cpp-3-08

There is a bit of machinery that is going on here, but the concept is simple.  I just want the compiler to stamp out a new type that has a “begin()” and “end()” member function that actually call my “nodes_begin()” and “nodes_end()” functions.  That is what this template will do.  I can also create a type that will call my “elements_begin()” and “elements_end()” function.  Once I have those types, creating range objects suitable for C++11 range based for loops is a snap.  You just make a function like this:

ansys-fortran-to-c-cpp-3-09

 

This function creates one of these special range types and passes in a pointer to our RST reader.  When the compiler then sees this code:

ansys-fortran-to-c-cpp-3-10

It sees a range object as the return type of the “nodes()” function.  That range object is compatible with the semantics of range based for loops, and therefore the compiler happily generates code for this construction.

Now, after all of this work, the client of the RST reader library can open a result file, select something of interest, and start looping over the items in that category; all in three lines of code.  There is no need to understand the nuances of the binlib functions.  But best of all, there is no appreciable performance hit for this abstraction.  At the end of the day, we’re not computationally doing anything more than what a raw use of the binlib functions would perform.  But, we’re adding a great deal of type safety, and, in my opinion, readability to the code.  But, then again, I’m only slightly partial to C++.  Your mileage may vary…

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 2)

ansys-fortran-to-c-cpp-1-00In the last post in this series I illustrated how you can interface C code with FORTRAN code so that it is possible to use the ANSYS, Inc. BinLib routines to read data off of an ANSYS result file within a C or C++ program.  If you recall, the routines in BinLib are written in FORTRAN, and my solution was to use the interoperability features of the Intel FORTRAN compiler to create a shim library that sits between my C++ code and the BinLib code. In essence, I replicated all of the functions in the original BinLib library, but gave them a C interface. I call this library CBinLib.

You may remember from the last post that I wanted to add a more C++ friendly interface on top of the CBinLib functions.  In particular, I showed this piece of code, and alluded to an explanation of how I made this happen.  This post covers the first half of what it takes to make the code below possible.

ansys-fortran-to-c-cpp-2-01

What you see in the code above is the use of C++11 range based “for loops” to iterate over the nodes and elements stored on the result file.  To accomplish this we need to create conformant STL style iterators and ranges that iterate over some space.  I’ll describe the creation of those in a subsequent post.  In this post, however, we have to tackle a different problem.  The solution to the problem is hidden behind the “select” function call shown in the code above.  In order to provide some context for the problem, let me first show you the calling sequence for the procedural interface to BinLib.  This is how you would use BinLib if you were programming in FORTRAN or if you were directly using the CBinLib library described in the previous post.  Here is the recommended calling sequence:

ansys-fortran-to-c-cpp-2-02

You can see the design pattern clearly in this skeleton code.  You start by calling ResRdBegin, which gives you a bunch of useful data about the contents of the file in general.  Then, if you want to read geometric data, you need to call ResRdGeomBegin, which gives you a little more useful metadata.  After that, if you want to read the nodal coordinate data you call ResRdNodeBegin followed by a loop over nodes calling ResRdNode, which gives you the actual data about individual nodes, and then finally call ResRdNodeEnd.  If at that point you are done with reading geometric data, you then call ResRdGeomEnd.  And, if you are done with the file you call ResRdEnd.

Now, one thing jumps off the page immediately.  It looks like it is important to pair up the *Begin and *End calls.  In fact, if you peruse the ResRd.F FORTRAN file included with the ANSYS distribution you will see that in many of the *End functions, they release resources that were allocated and set up in the corresponding *Begin function.

So, if you forget to call the appropriate *End, you might leak resources.  And, if you forget to call the appropriate *Begin, things might not be setup properly for you to iterate.  Therefore, in either case, if you fail to call the right function, things are going to go badly for you.

This type of design pattern where you “construct” some scaffolding, do some work, and then “destruct” the scaffolding is ripe for abstracting away in a C++ type.  In fact, one of the design principles of C++ known as RAII (Resource Acquisition Is Initialization) maps directly to this problem.  Imagine that we create a class in which in the constructor of the class we call the corresponding *Begin function.  Likewise, in the destructor of the class we call the corresponding *End function.  Now, as long as we create an instance of the class before we begin iterating, and then hang onto that instance until we are done iterating, we will properly match up the *Begin, *End calls.  All we have to do is create classes for each of these function pairs and then create an instance of that class before we start iterating.  As long as that instance is still alive until we are finished iterating, we are good.

Ok, so abstracting the *Begin and *End function pairs away into classes is nice, but how does that actually help us?  You would still have to create an instance of the class, hold onto it while you are iterating, and then destroy it when you are done.  That sounds like more work than just calling the *Begin, *End directly.  Well, at first glance it is, but let’s see if we can use the paradigm more intelligently.  For the rest of this article, I’ll refer to these types of classes as BeginEnd classes, though I call them something different in the code.

First, what we really want is to fire and forget with these classes.  That is, we want to eliminate the need to manually manage the lifetime of these BeginEnd classes.  If I don’t accomplish this, then I’ve failed to reduce the complexity of the *Begin and *End requirements.  So, what I would like to do is to create the appropriate BeginEnd class when I’m ready to extract a certain type of data off of the file, and then later on have it magically delete itself (and thus call the appropriate *End function) at the right time.  Now, one more complication.  You will notice that these *Begin and *End function pairs are nested.  That is, I need to call ResRdGeomBegin before I call ResRdNodeBegin.  So, not only do I want a solution that allows me to fire and forget, but I want a solution that manages this nesting.

Whenever you see nesting, you should think of a stack data structure.  To increase the nesting level, you push an item onto the stack.  To decrease the nesting level, you pop an item off of the stack.  So, we’re going to maintain a stack of these BeginEnd classes.  As an added benefit, when we introduce a container into the design space, we’ve introduced something that will control object lifetime for us.  So, this stack is going to serve two functions for us.  It’s going to ensure we call the *Begin’s and *End’s in the right nested order, and second, it’s going to maintain the BeginEnd object lifetimes for us implicitly.

To show some code, here is the prototype for my pure virtual class that serves as a base class for all of the BeginEnd classes.  (In my code, I call these classes FileSection classes)

ansys-fortran-to-c-cpp-2-03

You can see that it is an interface class by noting the pure virtual function getLevel.  You will also notice that this function returns a ResultFileSectionLevel.  This is just an enum over file section types.  I like to use an enum as opposed to relying on RTTI.  Now, for each BeginEnd pair, I create an appropriate derived class from this base ResultFileSection class.  Within the destructor of each of the derived classes I call the appropriate *End function.  Finally, here is my stack data structure definition:

ansys-fortran-to-c-cpp-2-03p5

 

You can see that it is just a std::stack holding objects of the type SectionPtrT.  A SectionPtrT is a std::unique_ptr for objects convertible to my base section class.  This will enable the stack to hold polymorphic data, and the std::unique_ptr will manage the lifetime of the objects appropriately.   That is, when we pop an object off of the stack, the std::unique_ptr will make sure to call delete, which will call the destructor.  The destructor calls the appropriate *End function as we’ve mentioned before.

At this point, we’ve reduced our problem to managing items on a stack.  We’re getting closer to making our lives easier, trust me!  Let’s look at a couple of different functions to show how we pull these pieces together.  The first function is called reduceToLevelOrBegin(level).  See the code below:

ansys-fortran-to-c-cpp-2-04

The operation of this function is fairly straightforward, yet it serves an integral role in our BeginEnd management infrastructure.   What this function does is it iteratively pops items off of our section stack until it either reaches the specified level, or it reaches the topmost ResRdBegin level.  Again, through the magic of C++ polymorphism, when an item gets popped off of the stack, eventually its destructor is called and that in turn calls the appropriate *End function.  So, what this function accomplishes is it puts us at a known level in the nested section hierarchy and, while doing so, ensures that any necessary *End functions are called appropriately to free resources on the FORTRAN side of things.  Notice that all of that happens automatically because of the type system in C++.  By popping items off of the stack, I implicitly clean up after myself.

The second function to consider is one of a family of similar functions.  We will look at the function that prepares the result file reader to read element geometry data off of the file.  Here it is:

ansys-fortran-to-c-cpp-2-05

You will notice that we start by reducing the nested level to either the “Geometry” level or the “Begin” level.  Effectively what this does is unwind any nesting we have done previously.  This is the machinery that makes “fire and forget” possible.  Whenever we previously requested something to be read off of the result file, we pushed onto the stack a series of objects representing the level needed to read the data in question.  Now that we wish to read something else, we unwind any previously existing nested Begin calls before doing so.   That is, we clean up after ourselves only when we ask to read a different set of data.  By waiting to unwind the stack until we ask to read some new set of data, we implicitly allow the lifetime of our BeginEnd classes to extend beyond iteration.

At this point we have the stack in a known state; either it is at the Begin level or the Geometry level.  So, we simply call the appropriate *Begin functions depending on what level we are at, and push the appropriate type of BeginEnd objects onto the stack to record our traversal for later cleanup.  At this point, we are ready to begin iterating.  I’ll describe the process of creating iterators in the next post.  Clearly, there are lots of different select*** functions within my class.  I have chosen to make all of them private and have a single select function that takes an enum descriptor of what to select and some additional information for specifying result data.

One last thing to note with this design.  Closing the result file is easy. All that is required is that we simply fully unwind the stack.  That will ensure all of the appropriate FORTRAN cleanup code is called in the right order.  Here is the close function:

ansys-fortran-to-c-cpp-2-06

As you can see, cleanup is handled automatically.  So, to summarize, we use a stack of polymorphic data to manage the BeginEnd function calls that are necessary in the FORTRAN interface.  By doing this we ensure a level of safety in our class design.  Also, this moves us one step closer to this code:

ansys-fortran-to-c-cpp-2-07

In the next post I will show how I created iterators and range objects to enable clean for loops like the ones shown above.

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 1)

ansys-fortran-to-c-cpp-1-00Recently, I’ve encountered the need to read the contents of ANSYS Mechanical result files (e.g. file.rst, file.rth) into a C++ application that I am writing for a client. Obviously, these files are stored in a proprietary binary format owned by ANSYS, Inc.  Even if the format were published, it would be daunting to write a parser to handle it.  Fortunately, however, ANSYS supplies a series of routines that are stored in a library called BinLib which allow a programmer to access the contents of a result file in a procedural way.  That’s great!  But, the catch is the routines are written in FORTRAN… I’ve been programming for a long time now, and I’ll be honest, I can’t quite stomach FORTRAN.  Yes, the punch card days were before my time, but seriously, doesn’t a compiler have something better to do than gripe about what column I’m typing on… (Editor’s note: Matt does not understand the pure elegance of FORTRAN’s majestic simplicity. Any and all FORTRAN bashing is the personal opinion of Mr. Sutton and in no way reflects the opinion of PADT as a company or its owners. – EM) 

So, the problem shifts from how to read an ANSYS result file to how to interface between C/C++ and FORTRAN.  It turns out this is more complicated than it really should be, and that is almost exclusively because of the abomination known as CHARACTER(*) arrays.  Ah, FORTRAN… You see, if it weren’t for the shoddy character of CHARACTER(*) arrays, the mapping between the basic data types in FORTRAN and C would be virtually one-to-one, and thus the mechanics of function calls could fairly easily be made identical between the two languages.  If the function call semantics were made identical, then sharing code between the two languages would be quite straightforward.  Alas, because a CHARACTER array has an implicit length associated with it, the compiler has to do some kind of magic within any function signature that passes one or more of these arrays.  Some compilers hide parameters for the lengths and tack them onto the end of the function call.  Some stuff the hidden parameters right after the CHARACTER array in the call sequence.  Some create a structure that combines the length with the actual data into a special type.  And some compilers do who knows what…  The point is, there is no convention among FORTRAN compilers for handling the function call semantics, so there is no clean interoperability between C and FORTRAN.

Fortunately, the Intel FORTRAN compiler provides a markup for FORTRAN that functions as an interoperability framework, enabling FORTRAN to speak C and vice versa.  There are some limitations, however, which I won’t go into detail on here.  If you are interested, you can read about them in the Intel FORTRAN compiler manual.  What I want to do is highlight an example of what this looks like and then describe how I used it to solve my problem.  First, an example:

ansys-fortran-to-c-cpp-1-01

What you see in this image is code for the first function you would call if you want to read an ANSYS result file.  There are a lot of arguments to this function, but in essence what you do is pass in the file name of the result file you wish to open (Fname), and if everything goes well, this function sends back a whole bunch of data about the contents of the file.  Now, this function represents code that I have written, but it is a mirror image of the ANSYS routine stored in the binlib library.

I’ve highlighted some aspects of the code that constitute part of the interoperability markup.  First of all, you’ll notice the markup BIND highlighted in red.  This markup on the FORTRAN function tells the compiler that I want it to generate code that can be called from C, and that I want the name of the C function to be “CResRdBegin”.  This is the first step towards making this function callable from C.  Next, you will see highlighted in blue something that looks like a comment.  This, however, instructs the compiler to generate a stub in the exports library for this routine if you choose to compile it into a DLL.  You won’t get a .lib file when compiling this as a .dll without this attribute.  Finally, you see the ISO_C_BINDING module and the definition of the type of character data we can make interoperable.  That is, instead of a CHARACTER(261) data type, we use an array of single CHARACTER(1) data.  This more closely matches the layout of C and allows the FORTRAN compiler to generate compatible code.  There is a catch here, though, and that is in the Title parameter.  ANSYS, Inc. defines this as an array of CHARACTER(80) data types.  Unfortunately, the interoperability support from Intel doesn’t handle arrays of CHARACTER(*) data types.  So, we flatten it here into a single string.  More on that in a minute.

You will notice, too, that there are markups like (c_int), etc., which the compiler uses to explicitly map each FORTRAN data type to a C data type.  This is just so that everything is explicitly called out and there is no guesswork when it comes to calling the routine.  Now, consider this bit of code:

ansys-fortran-to-c-cpp-1-02

First, I direct your attention to the big red circle. Here you see that all I am doing is collecting up a bunch of arguments and passing them on to the ANSYS, Inc. routine stored in BinLib.lib.  You should also notice the naming convention.  My FORTRAN function is named CResRdBegin, whereas the ANSYS, Inc. function is named ResRdBegin.  I continue this pattern for all of the functions defined in the BinLib library.  So, this function is nothing more than a wrapper around the corresponding binlib routine, but it is annotated and constrained to be interoperable with the C programming language.  Once I compile this function with the FORTRAN compiler, the resulting code will be callable directly from C.

Now, there are a few more items that have to be straightened up.  I direct your attention to the black arrow.  Here, what I am doing is converting the passed-in array of CHARACTER(1) data into a CHARACTER(*) data type.  This is because the ANSYS, Inc. version of this function expects that data type.  Also, the ANSYS, Inc. version needs to know the length of the file path string.  This is stored in the variable ncFname.  You can see that I compute this value using some intrinsics available within the compiler, by searching for the C NULL character.  (Remember that all C strings are null terminated, and the intent is to call this function from C and pass in a C string.)

Finally, after the call to the base function is made, the strings representing the JobName and Title must be converted back to a form compatible with C.  For the jobname, that is a fairly straightforward process.  The only thing to note is how I append the C_NULL_CHAR to the end of the string so that it is a properly terminated C string.

For the Title variable, I have to do something different.  Here I need to take the array of title strings and somehow represent that array as one string.  My choice is to delimit each title string with a newline character in the final output string.  So, there is a nested loop structure to build up the output string appropriately.

After all of this, we have a C function that we can call directly.  Here is a function prototype for this particular function.

ansys-fortran-to-c-cpp-1-03

So, with this technique in place, it’s just a matter of wrapping the remaining 50 functions in binlib appropriately!  Now, I was pleased with my return to the land of C, but I really wanted more.  The architecture of the binlib routines is quite easy to follow and very well laid out; however, it is very, very procedural for my tastes.  I’m writing my program in C++, so I would really like to hide as much of this procedural stuff as I can.   Let’s say I want to read the nodes and elements off of a result file.  Wouldn’t it be nice if my loops could look like this:

ansys-fortran-to-c-cpp-1-04

That is, I just do the following:

  1. Ask to open a file (First arrow)
  2. Tell the library I want to read nodal geometric data (Second arrow)
  3. Loop over all of the nodes on the RST file using C++11 range based for loops
  4. Repeat for elements

Isn’t that a lot cleaner?  What if we could do it without buffering data and have it compile down to something very close to the original FORTRAN code in size and speed?  Wouldn’t that be nice?  I’ll show you how I approached it in Part 2.