Four Different Ways to Add Customization to ANSYS Mechanical

ANSYS Mechanical is a very powerful tool right out of the box.  Long gone are the days when an FEA tool was just a solver and users had to write code to create input files and interpret the results.  Most of the time you never have to write any code to use ANSYS Mechanical effectively. But users can realize significant gains in productivity and access greater functionality through customization. And it is easy to do.

Before we talk about the four options, we need to remember how the tool, ANSYS Mechanical, is actually structured.  The interface that users interact with is a version of ANSYS Workbench called ANSYS Mechanical. The interface allows users to connect to geometry, build and modify their model, set up their solution, submit a solve, and review results. The solve itself is done in ANSYS Mechanical APDL. This is the original ANSYS Multiphysics program. 

When you press the solve button, ANSYS Mechanical writes out commands in the language used by ANSYS Mechanical APDL, called the ANSYS Parametric Design Language, or APDL.  Yes, that is where ANSYS Mechanical APDL got its name. We like to call it MAPDL for short. (Side note: years ago we started a campaign to call it map-dul. It didn’t work.) Once the file is written, MAPDL is started, the file is read in, the solve happens, and all of the requested output files are written. Then ANSYS Mechanical reads those files and shows results to the user.

Customization Tool 1: Command Snippets for Controlling the Solver

Not every capability found in ANSYS Mechanical APDL is exposed in the interface for ANSYS Mechanical.  That is not a problem, because users can use the APDL language inside ANSYS Mechanical to access the full capability of the solver.  These small pieces of code are called Command Snippets, and they are added to the tree of your ANSYS Mechanical model.  When the solver file is written, ANSYS Mechanical inserts your snippets into the command stream.  Simple and elegant.

PADT has a seminar from back in 2011 that lays it all out.  You can find the PowerPoint Presentation here. We do have plans to update this webinar soon.

This approach is used when you want to access solver capabilities that are not supported in the interface, while still keeping track of them from inside your ANSYS Mechanical model.

If you are not familiar with APDL, find a more “seasoned” user to help you. Or you can teach yourself APDL programming with PADT’s Guide to APDL.

Customization Tool 2: ANSYS Customization Toolkit (ACT) for Controlling the User Interface and Accessing the Model

As mentioned above, ANSYS Mechanical is used to define the model and review results.  The ANSYS Customization Toolkit (ACT) is how users customize the user interface, automate tasks in the interface, add tools to the interface, and access the model database. This type of customization can be as simple as a new feature, presented as an app, or it can be used to create a focused tool to streamline a certain type of simulation – what we call a vertical application.

A Vertical Application Written in ANSYS ACT by PADT for Automating the Design of Turbine Disks

Unlike APDL, ACT does not have its own language. It uses Python: ACT is a collection of Application Programming Interface (API) calls made from Python. This is a very powerful toolset that increases in capability at every release.  PADT has written standalone applications using ACT that reduce simulation time significantly. We have also written features and apps for ourselves and for users that make everyday use of ANSYS Mechanical better.

Do note that ACT is supported in most of the major ANSYS products, not just ANSYS Mechanical, and more capability is being added across those programs over time. You can also use ACT to connect ANSYS Mechanical to in-house or third-party software.

Because this is a standard environment, you can share your ACT applications on the ANSYS App Store found here. Take a look and you can see what users have done with ACT across the ANSYS product suite, including ANSYS Mechanical.   PADT has two in the library: one for adding a PID controller to your model and one for saving your ANSYS Mechanical APDL database.

Another great aspect of ACT is that it is fully documented.  If you go to the Customization Suite documentation in the ANSYS help library you can find everything you need.

Customization Tool 3: APDL for Automating the Solve  

With Command Snippets we talked about using APDL to access solver functions that are not supported in the ANSYS Mechanical interface.  You can also use APDL to automate what is going on during the solve.  Every capability in the ANSYS solver is accessible through APDL.

The most common usage of APDL is to create a tool that solves in batch mode. APDL programs are used to carry out tasks without going back to ANSYS Mechanical.  As an example, maybe you want to solve a load step, save some information from the solve, export it, read it into a third-party program, modify it, modify some property in your model, then solve the next load step. You can do all of that with APDL in batch mode.

This is not for the faint of heart; you are getting into complex programming with a custom language. But if you take the time, it can be very powerful.  All of the commands are documented in the ANSYS Mechanical APDL help, and details on the language are in the ANSYS Parametric Design Language Guide.  The PADT blog is full of articles going back over a decade on using APDL in this way.

Customization Tool 4: User Programmable Features in the Solver

One of the most powerful capabilities in the ANSYS Mechanical APDL solver is the ability for end users to add their own subroutines.  These User Programmable Features, or UPFs, allow you to create your own elements, make custom material models, customize loads, or customize contact behavior.

There are other general-purpose FEA tools on the market that heavily publicize their user elements and user materials and use them to differentiate themselves from ANSYS. However, ANSYS Mechanical APDL has always had this capability.  Many universities and companies add new capability to ANSYS using this method.

To learn more about how to create your own custom version of ANSYS, consult the Programmer’s Reference in the ANSYS Help. PADT also has a webinar sharing how to make a custom material here.

Next Steps

The key to successfully customizing ANSYS is to know your options, understand what you really want to do, and use the wide range of tools you have available. Everything is documented in the help, and this blog has some great examples.  Start small with a simple project and work your way up.

Or, you can leverage PADT’s expertise and contract with PADT to do your customization. This is what a half-dozen companies large and small have done over the years.  We understand ANSYS, we get engineering, and we know how to program. A perfect combination.

Regardless of how you customize ANSYS Mechanical, you will find it a rewarding experience.  Greater functionality and more efficient usage are only a few lines of custom code away.

Can ANSYS Mechanical Handle My Required Modeling Precision?

Sometimes you need to use ANSYS Mechanical to model a big part in order to determine a very small deflection.  The most common situation where this happens is optics. A lens that is around a meter in diameter may have nanometer distortions from mechanical or thermal loads that impact the optics. A customer asked if ANSYS Mechanical can handle that.  Please find Alex’s interesting and in-depth response in the attached presentation.   It starts with theory that explains the situation, then gives an example of how to determine whether you can get the information you need, followed by advice on how to view the results.

PADT-ANSYS-Mechanical-Modeling-Precision

 

Secant or Instantaneous CTE? Understanding Thermal Expansion Modeling in ANSYS Mechanical

One of the more common questions we get in tech support on thermal expansion simulations in ANSYS Mechanical and ANSYS Mechanical APDL revolves around how the Coefficient of Thermal Expansion, or CTE, is defined. This comes into play if the CTE of the material you are modeling is set up to change with the temperature of that material.

This detailed presentation explains the differences between the Secant and Instantaneous methods, how to convert between them, and how to deal with extrapolating coefficients beyond the temperatures for which you have data.
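For reference, both definitions tie back to the same thermal strain measured from a reference temperature; the standard relationship (summarized here in general terms, not lifted from the presentation) is:

$$
\varepsilon_{th}(T) \;=\; \alpha_{sec}(T)\,\bigl(T - T_{ref}\bigr) \;=\; \int_{T_{ref}}^{T} \alpha_{inst}(\tau)\, d\tau
\qquad\Longrightarrow\qquad
\alpha_{sec}(T) \;=\; \frac{1}{T - T_{ref}} \int_{T_{ref}}^{T} \alpha_{inst}(\tau)\, d\tau
$$

In other words, the secant value at a given temperature is the average of the instantaneous curve between the reference temperature and that temperature.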

PADT-ANSYS-Secant_vs_Instantaneous_CTE-2017_07_05

You can download a PDF of the presentation here.

ANSYS 17.2 Executable Paths on Linux


ansys-linux-penguin-1
When running on a machine with a Linux operating system, it is not uncommon for users to want to run from the command line or with a shell script. To do this you need to know where the actual executable files are located. Based on a request from a customer, we have tried to coalesce the major ANSYS product executables that can be run via command line on Linux into a single list:

ANSYS Workbench (Includes ANSYS Mechanical, Fluent, CFX, Polyflow, Icepak, Autodyn, Composite PrepPost, DesignXplorer, DesignModeler, etc.):

/ansys_inc/v172/Framework/bin/Linux64/runwb2

ANSYS Mechanical APDL, a.k.a. ANSYS ‘classic’:

/ansys_inc/v172/ansys/bin/launcher172 (brings up the MAPDL launcher menu)
/ansys_inc/v172/ansys/bin/mapdl (launches ANSYS MAPDL)

CFX Standalone:

/ansys_inc/v172/CFX/bin/cfx5

Autodyn Standalone:

/ansys_inc/v172/autodyn/bin/autodyn172

Note: A required argument for Autodyn is -I {ident-name}

Fluent Standalone (Fluent Launcher):

/ansys_inc/v172/fluent/bin/fluent

Icepak Standalone:

/ansys_inc/v172/Icepak/bin/icepak

Polyflow Standalone:

/ansys_inc/v172/polyflow/bin/polyflow/polyflow < my.dat

Chemkin:

/ansys_inc/v172/reaction/chemkinpro.linuxx8664/bin/chemkinpro_setup.ksh

Forte:

/ansys_inc/v172/reaction/forte.linuxx8664/bin/forte.sh

TGRID:

/ansys_inc/v172/tgrid/bin/tgrid

ANSYS Electronics Desktop (for Ansoft tools, e.g. Maxwell, HFSS):

/ansys_inc/v172/AnsysEM/AnsysEM17.2/Linux64/ansysedt

SIWave:

/ansys_inc/v172/AnsysEM/AnsysEM17.2/Linux64/siwave

New Second Edition in Paperback and Kindle: Introduction to the ANSYS Parametric Design Language (APDL)

APDL-Guide-Square-Advert-1

Introduction_to_APDL_V2-Kindle-Ipad-1
I’ll be honest, it was cool to see the book in print the first time, but seeing it on my iPad was just as cool.

After three years on the market and signs that sales were increasing year over year, we decided it was time to go through our popular training book “Introduction to the ANSYS Parametric Design Language (APDL)” and make some updates and reformat it so that it can be published as a Kindle e-book.  The new Second Edition includes two additional chapters: APDL Math and Using APDL with ANSYS Mechanical.  The fact that we continue to sell more of these useful books is a sign that APDL is still a vibrant and well used language, and that others out there find power in its simplicity and depth.

This book started life as a class that PADT taught for many years. Then over time people asked if they could buy the notes. And then they asked for a real book. The bulk of the content came from Jeff Strain with input from most of our technical staff. Much of the editing and new content was done by Susanna Young and Eric Miller.

Here is the Description from Amazon.com:

The definitive guide to the ANSYS Parametric Design Language (APDL), the command language for the ANSYS Mechanical APDL product from ANSYS, Inc. PADT has converted their popular “Introduction to APDL” class into a guide so that users can teach themselves the APDL language at their own pace. Its 14 chapters include reference information, examples, tips and hints, and eight workshops. Topics covered include:

– Parameters
– User Interfacing
– Program Flow
– Retrieving Database Information
– Arrays, Tables, and Strings
– Importing Data
– Writing Output to Files
– Menu Customization
– APDL Math
– Using APDL in ANSYS Mechanical

At only $75.00 it is an investment that will pay for itself quickly. Even if you are an ANSYS Mechanical user, you can still benefit from knowing APDL, allowing you to add code snippets to your models. We have put some images below and you can also learn more here or go straight to Amazon.com to purchase the paperback or Kindle versions.

Introduction_to_APDL_V2-1_Cover

PADT-Intro-APDL-pg184-185 PADT-Intro-APDL-pg144-145 PADT-Intro-APDL-pg112-113 PADT-Intro-APDL-pg100-101 PADT-Intro-APDL-pg-020-021

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 3)

ansys-fortran-to-c-cpp-1-00
In the last post of this series I illustrated how I handled the nested call structure of the procedural interface to ANSYS’ BinLib routines.  If you recall, any time you need to extract some information from an ANSYS result file, you have to bracket the function call that extracts the information with a *Begin and *End set of function calls.  These two functions set up and tear down internal structures needed by the FORTRAN library.  I showed how I used RAII principles in C++ along with a stack data structure to manage this pairing implicitly.  However, I have yet to actually read anything truly useful off of the result file!  This post centers on the design of a set of C++ iterators that are responsible for actually reading data off of the file.  By taking the time to write iterators, we expose the ANSYS RST library to many of the algorithms available within the standard template library (STL), and we also make our own job of writing custom algorithms that consume result file data much easier.  So, I think the investment is worthwhile.

If you’ve programmed in C++ within the last 10 years, you’ve undoubtedly been exposed to the standard template library.  The design of the library is really rather profound.  This image represents the high level design of the library in a pictorial fashion:

ansys-fortran-to-c-cpp-3-01

On one hand, the library provides a set of generic container objects that provide a robust implementation of many of the classic data structures available within the field of computer science.  The collection of containers includes things like arbitrarily sized contiguous arrays (vectors), linked lists, associative arrays, which are implemented as either binary trees or as a hash container, as well as many more.  The set of containers alone make the STL quite valuable for most programmers.

On the other hand, the library provides a set of generic algorithms that encompass a whole suite of functionality defined in classic computer science.  Sorting, searching, rearranging, merging, etc. are just a handful of the algorithms provided by the library.  Furthermore, extreme care has been taken within the implementation of these algorithms such that an average programmer would be hard pressed to produce something safer and more efficient on their own.

However, the real gem of the standard library is its iterators.  Iterators bridge the gap between the generic containers on one side and the generic algorithms on the other side.  Need to sort a vector of integers, or a double ended queue of strings?  If so, you just call the same sort function and pass it a set of iterators.  These iterators “know” how to traverse their parent container.  (Remember, containers are the data structures.)

So, what if we could write a series of iterators to access data from within an ANSYS result file?  What would that buy us?  Well, depending upon which concepts our iterators model, having them available would open up access to at least some of the STL suite of algorithms.  That’s good.  Furthermore, having iterators defined would open up the possibility of providing range objects.  If we can provide range objects, then all of the sudden range based for loops are possible.  These types of loops are more than just syntactic sugar.  By encapsulating the bounds of iteration within a range, and by using iterators in general to perform the iteration, the burden of a correct implementation is placed on the iterators themselves.  If you spend the time to get the iterator implementation correct, then any loop you write after that using either the iterators or better yet the range object will implicitly be correct from a loop construct standpoint.  Range based for loops also make your code cleaner and easier to reason about locally.

Now for the downside…  Iterators are kind of hard to write.  The price for the flexibility they offer is paid for in the amount of code it takes to write them.  Again, though, the idea is that you (or, better yet somebody else) writes these iterators once and then you have them available to use from that point forward.

Because of their flexibility, standard conformant iterators come in a number of different flavors.  In fact, they are very much like an ice cream sundae where you can pick and choose what features to mix in or add on top.  This is great, but again it makes implementing them a bit of a chore.  Here are some of the design decisions you have to make when implementing an iterator:

Decision              | Options                                                                                              | Choice for RST Reader Iterators
Dereference Data Type | Anything you want                                                                                    | Special structs for each type of iterator
Iteration Category    | 1. Forward iterator  2. Single pass iterator  3. Bidirectional iterator  4. Random access iterator  | Forward, Single Pass

Iterators syntactically function much like pointers in C or C++.  That is, like a pointer you can increment an iterator with the ++ operator, you can dereference an iterator with the * operator and you can compare two iterators for equality.  We will talk more about incrementing and comparisons in a minute, but first let’s focus on dereferencing.  One thing we have to decide is what type of data the client of our iterator will receive when they dereference it.  My choice is to return a simple C++ structure with data members for each piece of data.  For example, when we are iterating over the nodal geometry, the RST file contains the node number, the nodal coordinates and the nodal rotations.  To represent this data, I create a structure like this:

ansys-fortran-to-c-cpp-3-02
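The structure itself only appears as an image in the post, so here is a minimal sketch of what such a type might look like based on the description above; the member names are illustrative, not necessarily the ones in the screenshot:

```cpp
// Plain-old-data type handed back when a node iterator is dereferenced.
// The RST file stores a node number, three coordinates, and three rotations.
struct NodalCoordinateData
{
    int    node;              // node number
    double x, y, z;           // nodal coordinates
    double thxy, thyz, thzx;  // nodal rotations
};
```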

I think this is pretty self-explanatory.  Likewise, if we are iterating over the element geometry section of an RST file, there is quite a bit of useful information for each element.  The structure I use in that case looks like this:

ansys-fortran-to-c-cpp-3-03
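Again, the actual structure lives in the screenshot. Purely as an illustrative sketch (the field list is assumed, not taken from the post), element data on a result file generally includes the element number, its attribute IDs, and its connectivity:

```cpp
#include <vector>

// Illustrative sketch only -- the structure in the post carries more fields.
struct ElementData
{
    int              element;   // element number
    int              material;  // material attribute
    int              type;      // element type attribute
    std::vector<int> nodes;     // connectivity (node numbers)
};
```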

 

Again, pretty self-explanatory.  So, when I’m building a node geometry iterator, I’m going to choose the NodalCoordinateData structure as my dereference type.

The next decision I have to make is what “kind” of iterator I’m going to create.  That is, what types of “iteration” will it support?  The C++ standard supports a variety of iterator categories.  You may be wondering why anyone would ever care about an “iteration category”?  Well, the reason is fundamental to the design of the STL.   Remember that the primary reason iterators exist is to provide a bridge between generic containers and generic algorithms.  However, any one particular algorithm may impose certain requirements on the underlying iterator for the algorithm to function appropriately.

Take the algorithm “sort” for example.  There are, in fact, lots of different “sort” algorithms.  The most efficient versions of the “sort” algorithm require that an iterator be able to jump around randomly in constant time within the container.  If the iterator supports jumping around (a.k.a. random access) then you can use it within the most efficient sort algorithm.   However, there are certain kinds of iterators that don’t support jumping around.  Take a linked list container as an example.  You cannot randomly jump around in a linked list in constant time.  To get to item B from item A you have to follow the links, which means you have to jump from link to link to link, where each jump takes some amount of processing time.  This means, for example, there is no easy way to cut a linked list exactly in half even if you know how many items in total are in the list.  To cut it in half you have to start at the beginning and follow the links until you’ve followed size/2 number of links.  At that point you are at the “center” of the list.  However, with an array, you simply choose an index equal to size/2 and you automatically get to the center of the array in one step.  Many sort algorithms, as an example, obtain their efficiency by effectively chopping the container into two equal pieces and recursively sorting the two pieces.  You lose all that efficiency if you have to walk out to the center.

If you look at the “types” of iterators in the table above you will see that they build upon one another.  That is, at the lowest level, I have to answer the question, can I just move forward one step?  If I can’t even do that, then I’m not an iterator at all.  After that, assuming I can move forward one step, can I only go through the range once, or can I go over the range multiple times?  If I can only go over the range once, I’m a single pass iterator.  Truthfully, the forward iterator concept and the single pass iterator concept form level 1A and 1B of the iterator hierarchy.  The next higher level of functionality is a bidirectional iterator.  This type of iterator can go forward and backwards one step in constant time.  Think of a doubly linked list.  With forward and backward links, I can go either direction one step in constant time.  Finally, the most flexible iterator is the random access iterator.  These are iterators that really are like raw pointers.  With a pointer you can perform pointer arithmetic such that you can add an arbitrary offset to a base pointer and end up at some random location in a range.  It’s up to you to make sure that you stay within bounds.  Certain classes of iterators provide this level of functionality, namely those associated with vectors and deques.

So, the question is what type of iterator should we support?  Perusing through the FORTRAN code shipped with ANSYS, there doesn’t appear to be an inherent limitation within the functions themselves that would preclude random access.  But, my assumption is that the routines were designed to access the data sequentially.  (At least, if I were the author of the functions that is what I would expect clients to do.)  So, I don’t know how well they would be tested regarding jumping around.  Furthermore, disk controllers and disk subsystems are most likely going to buffer the data as it is read, and they very likely perform best if the data access is sequential.  So, even if it is possible to randomly jump around on the result file, I’m not sold on it being a good idea from a performance standpoint.  Lastly, I explicitly want to keep all of the data on the disk, and not buffer large chunks of it into RAM within my library.  So, I settled on expressing my iterators as single pass, forward iterators.  These are fairly restrictive in nature, but I think they will serve the purpose of reading data off of the file quite nicely.

Regarding my choice to not buffer the data, let me pause for a minute and explain why I want to keep the data on the disk. First, in order to buffer the data from disk into RAM you have to read the data off of the disk one time to fill the buffer.  So, that process automatically incurs one disk read.  Therefore, if you only ever need to iterate over the data once, perhaps to load it into a more specialized data structure, buffering it first into an intermediate RAM storage will actually slow you down, and consume more RAM.  The reason for this is that you would first iterate over the data stored on the disk and read it into an intermediate buffer.  Then, you would let your client know the data is ready and they would iterate over your internal buffer to access the data.  They might as well just read the data off the disk themselves! If the end user really wants to buffer the data, that’s totally fine.  They can choose to do that themselves, but they shouldn’t have to pay for it if they don’t need it.

Finally, we are ready to implement the iterators themselves.  To do this I rely on a very high quality open source library called Boost.  Boost includes a class template called iterator_facade (part of its iterator library) that takes care of supplying almost all of the boilerplate code needed to create a conformant iterator.  Using it has proven to be a real time saver.  To define the actual iterator, you derive your iterator class from iterator_facade and pass it a few template parameters.  One is the category defining the type of iterator you are creating.  Here is the definition for the nodal geometry iterator:

ansys-fortran-to-c-cpp-3-04

You can see that there are a few private functions here that actually do all of the work.  The function “increment” is responsible for moving the iterator forward one spot.  The function “equal” is responsible for determining if two different iterators are in fact equal.  And the function “dereference” is used to return the data associated with the current iteration spot.  You will also notice that I locally buffer the single piece of data associated with the current location in the iteration space inside the iterator.  This is stored in the pData member variable.  I also locally store the current index.  Here are the implementations of the functions just mentioned:

ansys-fortran-to-c-cpp-3-05

First you can see that testing iterator equality is easy.  All we do is just look to see if the two iterators are pointing to the same index.  If they are, we define them as equal. (Note, an important nuance of this is that we don’t test to see if their buffered data is equal, just their index.  This is important later on.)  Likewise, increment is easy to understand as well.  We just increase the index by one, and then buffer the new data off of disk into our local storage.  Finally, dereference is easy too.  All we do here is just return a reference to the local data store that holds this one node’s data.  The only real work occurs in the readData() function.  Inside that function you will see the actual call to the CResRdNode() function.  We pass that function our current index and it populates an array of 6 doubles with data and returns the actual node number.  After we have that, we simply parse out of that array of 6 doubles the coordinates and rotations, storing them in our local storage.  That’s all there is to it.  A little more work, but not bad.
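Since the implementation only exists as an image in the post, here is a rough sketch of the same ideas using boost::iterator_facade. This is not PADT's actual code: the CResRdNode wrapper signature is an assumption, and the member names differ from the prose above (m_data plays the role of pData):

```cpp
#include <boost/iterator/iterator_facade.hpp>

// The node data struct sketched earlier in the post.
struct NodalCoordinateData { int node; double x, y, z, thxy, thyz, thzx; };

// Assumed C wrapper around the binlib routine: fills six doubles for the
// requested index and returns the node number. Signature is illustrative.
extern "C" int CResRdNode(int* index, double* v);

class NodeIterator
    : public boost::iterator_facade<NodeIterator,
                                    NodalCoordinateData const,
                                    boost::single_pass_traversal_tag>
{
public:
    NodeIterator() : m_index(0) {}
    explicit NodeIterator(int index, bool readNow = false) : m_index(index)
    {
        if (readNow) readData();
    }

private:
    friend class boost::iterator_core_access;

    // Move forward one spot and buffer that node's data locally.
    void increment() { ++m_index; readData(); }

    // Two iterators are equal if they refer to the same index; the locally
    // buffered data is deliberately not part of the comparison.
    bool equal(NodeIterator const& other) const { return m_index == other.m_index; }

    // Hand back a reference to the locally buffered data for this node.
    NodalCoordinateData const& dereference() const { return m_data; }

    // Pull one node's record off of the file into local storage.
    void readData()
    {
        double v[6] = {0.0};
        m_data.node = CResRdNode(&m_index, v);
        m_data.x    = v[0]; m_data.y    = v[1]; m_data.z    = v[2];
        m_data.thxy = v[3]; m_data.thyz = v[4]; m_data.thzx = v[5];
    }

    int                 m_index;
    NodalCoordinateData m_data;
};
```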

With these handful of operations, the boost iterator_facade class can actually build up a fully conformant iterator with all the proper operator overloads, etc.  It just works.  Now that we have iterators, we need to provide a “begin” and “end” function just like the standard containers do.  These functions should return iterators that point to the beginning of our data set and to one past the end of our data set.  You may be thinking to yourself, wait a minute, how do we provide an “end” iterator without reading the whole set of nodes?  The reality is, we just need to provide an iterator that “equality tests” to be equal to the end of our iteration space.  What does that mean?  Well, what it means is that we just need to provide an iterator that, when compared to another iterator which has walked all the way to the end, passes the “equal” test.  Look at the “equal” function above.  What do two iterators need to have in common to be considered equal?  They need to have the same index.  So, the “end” iterator simply has an index equal to one past the end of the iteration space.  We know how big our iteration space is because that is one of the pieces of metadata supplied by those ResRd*Begin functions.  So, here are our begin/end functions to give us a pair of conformant iterators.

ansys-fortran-to-c-cpp-3-06

Notice that the nodes_end() function creates a NodeIterator and passes it an index that is one past the maximum number of nodes that have coordinate data stored on file.  You will also notice that it does not have a second Boolean argument associated with it.  I use that second argument to determine if I should immediately read data off of the disk when the iterator is constructed.  For the begin iterator, I need to do that.  For the end iterator, I don’t actually need to cache any data.  In fact, no data for that node is defined on disk.  I just need a sentinel iterator that is one past the iteration space.
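As a sketch of what that pair might look like, assuming a hypothetical reader class that cached the node count from the ResRd*Begin metadata and 1-based indexing (this is not the code in the screenshot):

```cpp
// Hypothetical owner of the result file session; only the begin/end pair is shown.
class ResultFileReader
{
public:
    NodeIterator nodes_begin() const
    {
        // 'true' asks the iterator to read the first node's data immediately.
        return NodeIterator(1, true);
    }

    NodeIterator nodes_end() const
    {
        // Sentinel one past the last node; no data is ever read for it.
        return NodeIterator(m_maxNodes + 1);
    }

private:
    int m_maxNodes = 0;  // supplied by the ResRd*Begin metadata
};
```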

So, there you have it.  Iterators are defined that implicitly walk over the rst file pulling data off as needed and locally buffering one piece of it.  These iterators are standard conformant and thus can be used with any STL algorithm that accepts a single pass, read only, forward iterator.  They are efficient in time and storage.  There is, though, one last thing that would be nice.  That is to provide a range object so that we can have our cake and eat it too.  That is, so we can write these C++11 range based for loops.  Like this:

ansys-fortran-to-c-cpp-3-07

To do that we need a bit of template magic.  Consider this template and type definition:

ansys-fortran-to-c-cpp-3-08

There is a bit of machinery that is going on here, but the concept is simple.  I just want the compiler to stamp out a new type that has a “begin()” and “end()” member function that actually call my “nodes_begin()” and “nodes_end()” functions.  That is what this template will do.  I can also create a type that will call my “elements_begin()” and “elements_end()” function.  Once I have those types, creating range objects suitable for C++11 range based for loops is a snap.  You just make a function like this:

ansys-fortran-to-c-cpp-3-09

 

This function creates one of these special range types and passes in a pointer to our RST reader.  When the compiler then sees this code:

ansys-fortran-to-c-cpp-3-10

It sees a range object as the return type of the “nodes()” function.  That range object is compatible with the semantics of range based for loops, and therefore the compiler happily generates code for this construction.
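Here is one way the range machinery just described could be put together. This is a sketch built on the iterator and reader types sketched above, not the exact template from the screenshots:

```cpp
// A range type that forwards begin()/end() to a chosen pair of member
// functions on the reader, selected via member-function-pointer template
// parameters.
template <typename Reader,
          typename Iterator,
          Iterator (Reader::*BeginFn)() const,
          Iterator (Reader::*EndFn)() const>
class ReaderRange
{
public:
    explicit ReaderRange(const Reader* reader) : m_reader(reader) {}
    Iterator begin() const { return (m_reader->*BeginFn)(); }
    Iterator end()   const { return (m_reader->*EndFn)();   }
private:
    const Reader* m_reader;
};

// Stamp out a node range type that calls nodes_begin()/nodes_end().
using NodeRange = ReaderRange<ResultFileReader, NodeIterator,
                              &ResultFileReader::nodes_begin,
                              &ResultFileReader::nodes_end>;

// The 'nodes()' function: build a range object over a reader.
inline NodeRange nodes(const ResultFileReader& reader)
{
    return NodeRange(&reader);
}

// Which makes a C++11 range based for loop possible:
//   for (const auto& node : nodes(reader)) { /* use node.x, node.y, ... */ }
```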

Now, after all of this work, the client of the RST reader library can open a result file, select something of interest, and start looping over the items in that category; all in three lines of code.  There is no need to understand the nuances of the binlib functions.  But best of all, there is no appreciable performance hit for this abstraction.  At the end of the day, we’re not computationally doing anything more than what a raw use of the binlib functions would perform.  But, we’re adding a great deal of type safety, and, in my opinion, readability to the code.  But, then again, I’m only slightly partial to C++.  Your mileage may vary…

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 2)

ansys-fortran-to-c-cpp-1-00
In the last post in this series I illustrated how you can interface C code with FORTRAN code so that it is possible to use the ANSYS, Inc. BinLib routines to read data off of an ANSYS result file within a C or C++ program.  If you recall, the routines in BinLib are written in FORTRAN, and my solution was to use the interoperability features of the Intel FORTRAN compiler to create a shim library that sits between my C++ code and the BinLib code. In essence, I replicated all of the functions in the original BinLib library, but gave them a C interface. I call this library CBinLib.

You may remember from the last post that I wanted to add a more C++ friendly interface on top of the CBinLib functions.  In particular, I showed this piece of code, and alluded to an explanation of how I made this happen.  This post covers the first half of what it takes to make the code below possible.

ansys-fortran-to-c-cpp-2-01

What you see in the code above is the use of C++11 range based “for loops” to iterate over the nodes and elements stored on the result file.  To accomplish this we need to create conformant STL style iterators and ranges that iterate over some space.  I’ll describe the creation of those in a subsequent post.  In this post, however, we have to tackle a different problem.  The solution to the problem is hidden behind the “select” function call shown in the code above.  In order to provide some context for the problem, let me first show you the calling sequence for the procedural interface to BinLib.  This is how you would use BinLib if you were programming in FORTRAN or if you were directly using the CBinLib library described in the previous post.  Here is the recommended calling sequence:

ansys-fortran-to-c-cpp-2-02

You can see the design pattern clearly in this skeleton code.  You start by calling ResRdBegin, which gives you a bunch of useful data about the contents of the file in general.  Then, if you want to read geometric data, you need to call ResRdGeomBegin, which gives you a little more useful metadata.  After that, if you want to read the nodal coordinate data you call ResRdNodeBegin followed by a loop over nodes calling ResRdNode, which gives you the actual data about individual nodes, and then finally call ResRdNodeEnd.  If at that point you are done with reading geometric data, you then call ResRdGeomEnd.  And, if you are done with the file you call ResRdEnd.
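To make the nesting easier to see in text form (the skeleton itself is in the image above), here is a sketch of that calling sequence using the C-callable wrapper layer from Part 1. Apart from CResRdBegin, the wrapper names follow the naming pattern described in Part 1 but are my assumption, and the argument lists are reduced to illustrative stubs:

```cpp
// Illustrative stub prototypes only -- the real wrappers take many more arguments.
extern "C" {
    int  CResRdBegin(const char* fileName);   // open the file, get global metadata
    int  CResRdGeomBegin();                   // enter the geometry section
    int  CResRdNodeBegin();                   // prepare to read nodal data
    int  CResRdNode(int* index, double* v);   // read one node's coordinates/rotations
    void CResRdNodeEnd();                     // matching End calls...
    void CResRdGeomEnd();
    void CResRdEnd();
}

// The nesting of *Begin/*End pairs described above.
void readNodesProcedurally(const char* fileName, int numNodes)
{
    CResRdBegin(fileName);
        CResRdGeomBegin();
            CResRdNodeBegin();
            double v[6];
            for (int i = 1; i <= numNodes; ++i)
                CResRdNode(&i, v);
            CResRdNodeEnd();
        CResRdGeomEnd();
    CResRdEnd();
}
```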

Now, one thing jumps off the page immediately.  It looks like it is important to pair up the *Begin and *End calls.  In fact, if you peruse the ResRd.F FORTRAN file included with the ANSYS distribution you will see that in many of the *End functions, they release resources that were allocated and set up in the corresponding *Begin function.

So, if you forget to call the appropriate *End, you might leak resources.  And, if you forget to call the appropriate *Begin, things might not be setup properly for you to iterate.  Therefore, in either case, if you fail to call the right function, things are going to go badly for you.

This type of design pattern, where you “construct” some scaffolding, do some work, and then “destruct” the scaffolding, is ripe for abstracting away in a C++ type.  In fact, one of the design principles of C++ known as RAII (Resource Acquisition Is Initialization) maps directly to this problem.  Imagine that we create a class whose constructor calls the corresponding *Begin function.  Likewise, its destructor calls the corresponding *End function.  Now, as long as we create an instance of the class before we begin iterating, and then hang onto that instance until we are done iterating, we will properly match up the *Begin, *End calls.  All we have to do is create classes for each of these function pairs and then create an instance of the appropriate class before we start iterating.  As long as that instance is still alive until we are finished iterating, we are good.
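A minimal sketch of one such class, assuming the hypothetical CResRdGeomBegin/CResRdGeomEnd wrappers from the sketch above:

```cpp
// RAII wrapper: constructing it enters the geometry section, destroying it
// leaves the section, so the Begin/End pairing can never be forgotten.
class GeometrySection
{
public:
    GeometrySection()  { CResRdGeomBegin(); }
    ~GeometrySection() { CResRdGeomEnd();   }

    // Non-copyable: exactly one object owns each Begin/End pairing.
    GeometrySection(const GeometrySection&) = delete;
    GeometrySection& operator=(const GeometrySection&) = delete;
};
```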

Ok, so abstracting the *Begin and *End function pairs away into classes is nice, but how does that actually help us?  You would still have to create an instance of the class, hold onto it while you are iterating, and then destroy it when you are done.  That sounds like more work than just calling the *Begin, *End directly.  Well, at first glance it is, but let’s see if we can use the paradigm more intelligently.  For the rest of this article, I’ll refer to these types of classes as BeginEnd classes, though I call them something different in the code.

First, what we really want is to fire and forget with these classes.  That is, we want to eliminate the need to manually manage the lifetime of these BeginEnd classes.  If I don’t accomplish this, then I’ve failed to reduce the complexity of the *Begin and *End requirements.  So, what I would like to do is to create the appropriate BeginEnd class when I’m ready to extract a certain type of data off of the file, and then later on have it magically delete itself (and thus call the appropriate *End function) at the right time.  Now, one more complication.  You will notice that these *Begin and *End function pairs are nested.  That is, I need to call ResRdGeomBegin before I call ResRdNodeBegin.  So, not only do I want a solution that allows me to fire and forget, but I want a solution that manages this nesting.

Whenever you see nesting, you should think of a stack data structure.  To increase the nesting level, you push an item onto the stack.  To decrease the nesting level, you pop an item off of the stack.  So, we’re going to maintain a stack of these BeginEnd classes.  As an added benefit, when we introduce a container into the design space, we’ve introduced something that will control object lifetime for us.  So, this stack is going to serve two functions for us.  It’s going to ensure we call the *Begin’s and *End’s in the right nested order, and second, it’s going to maintain the BeginEnd object lifetimes for us implicitly.

To show some code, here is the prototype for my pure virtual class that serves as a base class for all of the BeginEnd classes.  (In my code, I call these classes FileSection classes)

ansys-fortran-to-c-cpp-2-03

You can see that it is an interface class by noting the pure virtual function getLevel.  You will also notice that this function returns a ResultFileSectionLevel.  This is just an enum over file section types.  I like to use an enum as opposed to relying on RTTI.  Now, for each BeginEnd pair, I create an appropriate derived class from this base ResultFileSection class.  Within the destructor of each of the derived classes I call the appropriate *End function.  Finally, here is my stack data structure definition:

ansys-fortran-to-c-cpp-2-03p5

 

You can see that it is just a std::stack holding objects of the type SectionPtrT.  A SectionPtrT is a std::unique_ptr for objects convertible to my base section class.  This will enable the stack to hold polymorphic data, and the std::unique_ptr will manage the lifetime of the objects appropriately.   That is, when we pop an object off of the stack, the std::unique_ptr will make sure to call delete, which will call the destructor.  The destructor calls the appropriate *End function as we’ve mentioned before.
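Pulling the pieces described above together in sketch form (the names track the prose, the declarations in the screenshots may differ, and the enum values beyond Begin and Geometry are placeholders):

```cpp
#include <memory>
#include <stack>

// Enum over file section types, used instead of RTTI.
enum class ResultFileSectionLevel { Begin, Geometry, Nodes, Elements };

// Base class for all BeginEnd (FileSection) classes; each derived class calls
// its matching *End routine in its destructor.
class ResultFileSection
{
public:
    virtual ~ResultFileSection() = default;
    virtual ResultFileSectionLevel getLevel() const = 0;
};

using SectionPtrT   = std::unique_ptr<ResultFileSection>;
using SectionStackT = std::stack<SectionPtrT>;
```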

At this point, we’ve reduced our problem to managing items on a stack.  We’re getting closer to making our lives easier, trust me!  Let’s look at a couple of different functions to show how we pull these pieces together.  The first function is called reduceToLevelOrBegin(level).  See the code below:ansys-fortran-to-c-cpp-2-04

The operation of this function is fairly straightforward, yet it serves an integral role in our BeginEnd management infrastructure.   What this function does is it iteratively pops items off of our section stack until it either reaches the specified level, or it reaches the topmost ResRdBegin level.  Again, through the magic of C++ polymorphism, when an item gets popped off of the stack, eventually its destructor is called and that in turn calls the appropriate *End function.  So, what this function accomplishes is it puts us at a known level in the nested section hierarchy and, while doing so, ensures that any necessary *End functions are called appropriately to free resources on the FORTRAN side of things.  Notice that all of that happens automatically because of the type system in C++.  By popping items off of the stack, I implicitly clean up after myself.
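In sketch form, building on the types sketched above (written here as a free function over the section stack rather than a member function):

```cpp
// Pop sections until we reach the requested level or the topmost Begin level.
// Each pop destroys a unique_ptr'd section, whose destructor calls the
// matching *End routine on the FORTRAN side.
void reduceToLevelOrBegin(SectionStackT& sections, ResultFileSectionLevel level)
{
    while (!sections.empty() &&
           sections.top()->getLevel() != level &&
           sections.top()->getLevel() != ResultFileSectionLevel::Begin)
    {
        sections.pop();
    }
}
```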

The second function to consider is one of a family of similar functions.  We will look at the function that prepares the result file reader to read element geometry data off of the file.  Here it is:

ansys-fortran-to-c-cpp-2-05

You will notice that we start by reducing the nested level to either the “Geometry” level or the “Begin” level.  Effectively what this does is unwind any nesting we have done previously.  This is the machinery that makes “fire and forget” possible.  That is, whenever, in ages past, we requested something to be read off of the result file, we would have pushed onto the stack a series of objects to represent the level needed to read the data in question.  Now that we wish to read something else, we unwind any previously existing nested Begin calls before doing so.   That is, we clean up after ourselves only when we ask to read a different set of data.  By waiting until we ask to read some new set of data to unwind the stack, we implicitly allow the lifetime of our BeginEnd classes to live beyond iteration.

At this point we have the stack in a known state; either it is at the Begin level or the Geometry level.  So, we simply call the appropriate *Begin functions depending on what level we are at, and push the appropriate type of BeginEnd objects onto the stack to record our traversal for later cleanup.  At this point, we are ready to begin iterating.  I’ll describe the process of creating iterators in the next post.  Clearly, there are lots of different select*** functions within my class.  I have chosen to make all of them private and have a single select function that takes an enum descriptor of what to select and some additional information for specifying result data.

One last thing to note with this design.  Closing the result file is easy. All that is required is that we simply fully unwind the stack.  That will ensure all of the appropriate FORTRAN cleanup code is called in the right order.  Here is the close function:

ansys-fortran-to-c-cpp-2-06
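In sketch form, using the same section stack sketched earlier, the close operation is just a full unwind:

```cpp
// Fully unwind the stack; destructors fire in reverse order of the Begins,
// so every *End routine is called exactly once and in the right order.
void closeResultFile(SectionStackT& sections)
{
    while (!sections.empty())
        sections.pop();
}
```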

As you can see, cleanup is handled automatically.  So, to summarize, we use a stack of polymorphic data to manage the BeginEnd function calls that are necessary in the FORTRAN interface.  By doing this we ensure a level of safety in our class design.  Also, this moves us one step closer to this code:

ansys-fortran-to-c-cpp-2-07

In the next post I will show how I created iterators and range objects to enable clean for loops like the ones shown above.

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 1)

ansys-fortran-to-c-cpp-1-00
Recently, I’ve encountered the need to read the contents of ANSYS Mechanical result files (e.g. file.rst, file.rth) into a C++ application that I am writing for a client. Obviously, these files are stored in a proprietary binary format owned by ANSYS, Inc.  Even if the format were published, it would be daunting to write a parser to handle it.  Fortunately, however, ANSYS supplies a series of routines that are stored in a library called BinLib which allow a programmer to access the contents of a result file in a procedural way.  That’s great!  But, the catch is the routines are written in FORTRAN… I’ve been programming for a long time now, and I’ll be honest, I can’t quite stomach FORTRAN.  Yes, the punch card days were before my time, but seriously, doesn’t a compiler have something better to do than gripe about what column I’m typing on… (Editor’s note: Matt does not understand the pure elegance of FORTRAN’s majestic simplicity. Any and all FORTRAN bashing is the personal opinion of Mr. Sutton and in no way reflects the opinion of PADT as a company or its owners. – EM)

So, the problem shifts from how to read an ANSYS result file to how to interface between C/C++ and FORTRAN.  It turns out this is more complicated than it really should be, and that is almost exclusively because of the abomination known as CHARACTER(*) arrays.  Ah, FORTRAN… You see, if it weren’t for the shoddy character of CHARACTER(*) arrays, the mapping between the basic data types in FORTRAN and C would be virtually one for one. And thus, the mechanics of function calls could fairly easily be made to be identical between the two languages.   If the function call semantics were made identical, then sharing code between the two languages would be quite straightforward.  Alas, because a CHARACTER array has a kind of implicit length associated with it, the compiler has to do some kind of magic within any function signature that passes one or more of these arrays.  Some compilers hide parameters for the length and then tack them on to the end of the function call.  Some stuff the hidden parameters right after the CHARACTER array in the call sequence.  Some create a kind of structure that combines the length with the actual data into a special type. And then some compilers do who knows what…  The point is, there is no convention among FORTRAN compilers for handling the function call semantics, so there is no clean interoperability between C and FORTRAN.

Fortunately, the Intel FORTRAN compiler has created this markup language for FORTRAN that functions as an interoperability framework that enables FORTRAN to speak C and vice versa.  There are some limitations, however, which I won’t go into detail on here.  If you are interested you can read about it in the Intel FORTRAN compiler manual.  What I want to do is highlight an example of what this looks like and then describe how I used it to solve my problem.  First, an example:

ansys-fortran-to-c-cpp-1-01

What you see in this image is code for the first function you would call if you want to read an ANSYS result file.  There are a lot of arguments to this function, but in essence what you do is pass in the file name of the result file you wish to open (Fname), and if everything goes well, this function sends back a whole bunch of data about the contents of the file.  Now, this function represents code that I have written, but it is a mirror image of the ANSYS routine stored in the binlib library.

I’ve highlighted some aspects of the code that constitute part of the interoperability markup.  First of all, you’ll notice the markup BIND highlighted in red.  This markup for the FORTRAN function tells the compiler that I want it to generate code that can be called from C and I want the name of the C function to be “CResRdBegin”.  This is the first step towards making this function callable from C.  Next, you will see highlighted in blue something that looks like a comment.  This however, instructs the compiler to generate a stub in the exports library for this routine if you choose to compile it into a DLL.  You won’t get a .lib file when compiling this as a .dll without this attribute.  Finally, you see the ISO_C_BINDING and definition of the type of character data we can make interoperable.  That is, instead of a CHARACTER(261) data type, we use an array of single CHARACTER(1) data.  This more closely matches the layout of C, and allows the FORTRAN compiler to generate compatible code.  There is a catch here, though, and that is in the Title parameter.  ANSYS, Inc. defines this as an array of CHARACTER(80) data types.  Unfortunately, the interoperability stuff from Intel doesn’t support arrays of CHARACTER(*) data types.  So, we flatten it here into a single string.  More on that in a minute.

You will notice too, that there are markups like (c_int), etc… that the compiler uses to explicitly map the FORTRAN data type to a C data type.  This is just so that everything is explicitly called out, and there is no guesswork when it comes to calling the routine.  Now, consider this bit of code:

ansys-fortran-to-c-cpp-1-02

First, I direct your attention to the big red circle. Here you see that all I am doing is collecting up a bunch of arguments and passing them on to the ANSYS, Inc. routine stored in BinLib.lib.  You also should notice the naming convention.  My FORTRAN function is named CResRdBegin, whereas the ANSYS, Inc. function is named ResRdBegin.  I continue this pattern for all of the functions defined in the BinLib library.  So, this function is nothing more than a wrapper around the corresponding binlib routine, but it is annotated and constrained to be interoperable with the C programming language.  Once I compile this function with the FORTRAN compiler, the resulting code will be callable directly from C.

Now, there are a few more items that have to be straightened up.  I direct your attention to the black arrow.  Here, what I am doing is converting the passed in array of CHARACTER(1) data into a CHARACTER(*) data type.  This is because the ANSYS, Inc. version of this function expects that data type.  Also, the ANSYS, Inc. version needs to know the length of the file path string.  This is stored in the variable ncFname.  You can see that I compute this value using some intrinsics available within the compiler by searching for the C NULL character.  (Remember that all C strings are null terminated and the intent is to call this function from C and pass in a C string.)

Finally, after the call to the base function is made, the strings representing the JobName and Title must be converted back to a form compatible with C.  For the jobname, that is a fairly straightforward process.  The only thing to note is how I append the C_NULL_CHAR to the end of the string so that it is a properly terminated C string.

For the Title variable, I have to do something different.  Here I need to take the array of title strings and somehow represent that array as one string.  My choice is to delimit each title string with a newline character in the final output string.  So, there is a nested loop structure to build up the output string appropriately.

After all of this, we have a C function that we can call directly.  Here is a function prototype for this particular function.

ansys-fortran-to-c-cpp-1-03
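The prototype itself appears only as an image. Purely for illustration, it has roughly this shape; everything beyond the file name, job name, and title parameters discussed above is an assumption, and the real argument list is much longer:

```cpp
// Illustrative sketch of the C-callable prototype, not the actual declaration.
extern "C" int CResRdBegin(const char* fileName,    // path to the .rst/.rth file
                           char*       jobName,     // returned as a C string
                           char*       title,       // titles flattened into one
                                                    // newline-delimited C string
                           int*        numNodes,    // assumed metadata out-param
                           int*        numElements  /* ...many more out-params... */);
```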

So, with this technique in place, it’s just a matter of wrapping the remaining 50 functions in binlib appropriately!  Now, I was pleased with my return to the land of C, but I really wanted more.  The architecture of the binlib routines is quite easy to follow and very well laid out; however, it is very, very procedural for my tastes.  I’m writing my program in C++, so I would really like to hide as much of this procedural stuff as I can.   Let’s say I want to read the nodes and elements off of a result file.  Wouldn’t it be nice if my loops could look like this:

ansys-fortran-to-c-cpp-1-04
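Since that image is not reproduced here, this sketch gives the flavor of the client code being aimed for; the class and function names are placeholders that mirror the steps listed just below, not the actual API in the screenshot:

```cpp
ResultFileReader reader("file.rst");        // 1. open the result file
reader.selectNodalGeometry();               // 2. ask for nodal geometric data
for (const auto& node : nodes(reader))      // 3. C++11 range based loop over the nodes
{
    // use node.x, node.y, node.z ...
}

reader.selectElementGeometry();             // 4. repeat for elements
for (const auto& elem : elements(reader))
{
    // use the element connectivity ...
}
```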

That is, I just do the following:

  1. Ask to open a file (First arrow)
  2. Tell the library I want to read nodal geometric data (Second arrow)
  3. Loop over all of the nodes on the RST file using C++11 range based for loops
  4. Repeat for elements

Isn’t that a lot cleaner?  What if we could do it without buffering data and have it compile down to something very close to the original FORTRAN code in size and speed?  Wouldn’t that be nice?  I’ll show you how I approached it in Part 2.

Be a Pinball Wizard with Contact Regions in ANSYS Mechanical

pinball-wizard-pinball-machine-ANSYS-3
A pinball machine based on The Who’s Tommy

I had a very cool music teacher back in 6th or 7th grade in the 1970’s in upstate New York.  Today we’d probably say she was eclectic.  In that class we listened to and discussed fairly recent songs in addition to general music studies.  Two songs I remember in particular are ‘Hurdy Gurdy Man’ by Donovan and ‘Pinball Wizard’ by The Who.  If you’re not familiar with Pinball Wizard, it’s from The Who’s rock opera Tommy, and is about a deaf, mute, blind young man who happens to be adept at the game of pinball.  Yes, he is a Pinball Wizard.  This song popped into my head recently when we had some customer questions here at PADT regarding the pinball region concept as it pertains to ANSYS contact regions.

I’m not sure if the developers at ANSYS, Inc. had this song in mind when they came up with the nomenclature for the 17X (latest and greatest) series of contact elements in ANSYS, but regardless, you too can be a pinball wizard when it comes to understanding contact elements in ANSYS Mechanical and MAPDL.

Fans of this blog may remember one of my prior posts on contact regions in ANSYS that also had a musical theme (bringing to mind Peter Gabriel’s song “I Have the Touch”).

In this current entry we will go more in depth on the pinball region, also known as the pinball radius.  The pinball region is a measure of the distance from contact element to target element in a given contact region.  Outside the pinball region, ANSYS doesn’t bother to check whether the elements on opposite sides of the contact region are touching or not.  The program assumes they are far away from each other and, for the most part, doesn’t perform any additional contact calculations for them.

Here is an illustration.  The gray elements on the left represent the contact body and the red elements on the right represent the target body (assuming asymmetric contact).  Target elements outside the pinball radius will not be checked for contact.  The contact and target elements actually ‘coat’ the underlying solid elements so they are shown as dashed lines slightly offset from the solid elements for the sake of visibility.  Here the pinball radius is displayed as a dashed blue circle, centered on the contact elements, with a radius of 2X the depth of the underlying solid elements.

pinball_radius_contact_illustration

So, outside the pinball region, we know ANSYS doesn’t check to see if the contact and target are actually in contact.  It just assumes they are far away and not in contact.  What about what happens if the contact and target are inside the pinball region?  The answer to that question depends on which contact type we have selected.

For frictionless contact (aka standard contact in MAPDL) and frictional contact, the program will then check to see if the contact and target are truly touching.  If they are touching, the program will check to see if they are sliding or possibly separating.  If they are touching and penetrating, the program will check to see if the penetration exceeds the allowable amount and will make adjustments, etc.  In other words, for frictionless and frictional contact, if the contact and target elements are close enough to be inside the pinball region, the program will make all sorts of checks and adjustments to make sure the contact behavior is adequately captured.

The other scenario is for bonded and no separation contact.  With these contact types, the program’s behavior when the contact and target elements are within the pinball region is different.  For these types, as long as the contact and target are close enough to be within the pinball region, the program considers the contact region to be closed.  So, for bonded and no separation, your contact and target elements do not need to be line on line touching in order for contact to be recognized.  The contact and target pairs just need to be inside the pinball region.  This can be good, in that it allows for some ‘slop’ in the geometry to be automatically ignored, but it also can have a downside if we have a curved surface touching a flat surface for example.  In that case, more of the curved surface may be considered in contact than would be the case if the pinball region was smaller.  This effect is shown in the image below.  Reducing the pinball radius to an appropriate smaller amount would be the fix for eliminating this ‘overconstraint’ if desired.

pinball_radius_bonded_noseparation

There is a default value for the pinball region/radius.  It can be changed if needed.  We’ll add more details in a moment.  First, why is it called the “pinball” region?  I like to think it’s because when it’s visualized in the Mechanical window, it looks like a blue pinball from an actual pinball arcade game, but I’ll admit that the ANSYS terminology may predate the Mechanical interface.  The image below shows what I mean.  The blue balls are the different pinball radii for different contact regions.

pinball_radius_visualization

 

Note that you don’t see the pinball region displayed as shown in the above image unless you have manually changed the pinball size in Mechanical.  The pinball region can be changed in the Mechanical window in the details view for each contact region by changing Pinball Region from Program Controlled to Radius, like this:

pinball_radius_change

In MAPDL, the pinball radius value can be changed by defining or editing the real constant labeled PINB.

By now you’re probably wondering what is the default value for the pinball radius?  The good news is that it is intelligently decided by the program for each contact region.  The default is always a scale factor on the depth of the underlying elements of each contact region.  In the first pinball region image shown near the beginning of this article, the example plot shows the pinball region/radius as two times the depth of the underlying elements.

The table below summarizes the default pinball radius values for most circumstances for 2D and 3D solid element models.  More detailed information is available in the ANSYS Help.

Default Pinball Radius Values

Contact Type                | Large Deflection Off, Flexible-Flexible | Large Deflection On, Flexible-Flexible
Frictionless and Frictional | 1 * Underlying Element Depth            | 2 * Underlying Element Depth
Bonded and No Separation    | 0.25 * Underlying Element Depth         | 0.5 * Underlying Element Depth

Rigid-Flexible Contact: Typically the Default Values are Doubled

Summing it all up:  we have seen how the default values are calculated and also how to change them.  We have seen what they look like as blue balls in a plot of contact regions in Mechanical if the pinball radius has been explicitly defined.  We also discussed what the pinball radius does and how it’s different for frictionless/frictional contact and bonded/no separation contact.

You should be well on your way to becoming a pinball wizard at this point.

Does performing simulation in ANSYS make you think of certain songs, or are there songs you like to listen to while working away on your simulations, in addition to The Who’s “Pinball Wizard” and Peter Gabriel’s “I Have the Touch”?  If so, we’d love to hear about your song preferences in the comments below.

Using External Model to Utilize Legacy Mechanical APDL Models in ANSYS Workbench

For many years we’ve been asked, “Can I use my old Mechanical APDL/ANSYS ‘classic’ model in Workbench?”  Up until version 15.0 our answer had been along the lines of, “Uh, not really, unless you can just use the IGES geometry and start over or use FE Modeler to skin the mesh and basically start over.”  Now with version 15.0 of ANSYS there is a new option that makes legacy models more usable, both in functionality and in the level of effort required.

So what is External Model?

  • A new capability at ANSYS 15.0 to use legacy MAPDL models in Workbench
  • Reads the .cdb file (coded database) created from /PREP7 in MAPDL (CDWRITE command)
  • Builds exterior skin geometry from the existing MAPDL mesh
  • Creates solids from the skin geometry
  • Retains the MAPDL mesh
  • May have trouble for complex meshes, although we’ve been impressed in a couple of trials
  • Has limitations on what is transferred into Mechanical:
      • No material properties, loads, or constraints
      • May give you very large surfaces, making it difficult to apply loads on faces, but you can bring in nodal components from Mechanical APDL as Named Selections in Mechanical as an alternative load application method
  • Allows us to apply new BC’s using geometry in Mechanical

Here is a representative Mechanical APDL Model.  It’s a simple static structural run with loads and constraints.

 p1

p2
p3
To use External Model, we’ll need a .cdb file from Mechanical APDL.  If you’re not familiar with using the CDWRITE command, here are the menu picks in the MAPDL Preprocessor:

Preprocessor > Archive Model > Write.  Enter a name for the .cdb (coded database) file being written and click OK.  Don’t worry about the .iges file.

 p4
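If you would rather skip the menus, the same thing can be done with a couple of commands. This is a minimal sketch; the file name and the optional nodal component are just placeholders:

/prep7
cm,load_face,node          ! optional: component of the currently selected nodes; nodal
                           ! components come into Mechanical as Named Selections
cdwrite,db,my_model,cdb    ! write my_model.cdb, the "coded database" External Model reads
finish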

Next, launch Workbench 15.0 and insert an External Model block from the Toolbox:

 p5
Next right click > Edit or double click on the Setup cell in the External Model block.  Click on the […] button under Location to browse to your .cdb file created in MAPDL.
p6p7
There is a Properties window (View > Properties) in which units can optionally be modified or a coordinate system transformation can be specified.
Next, click on the Workbench Project tab near the top of the Workbench window.  Right click on the Setup cell and choose Update.  You should now have a green check mark next to Setup:
p8
Insert a new (standalone) analysis type to continue your simulation in Mechanical.  Here we insert a Static Structural analysis.  For some reason you can’t drag and drop the new analysis onto the Setup cell, so we establish the link in a separate step shown below.
p9

Drag and drop the Setup cell under External Model to the Model cell under the new analysis block:
p10

Note that the Model cell Properties contain a Tolerance Angle that can be adjusted to help with exterior skin geometry creation from the MAPDL mesh.  Use this to help control where one skin surface starts and stops based on angles between element faces.

The two blocks are now linked as shown by the blue curve connecting Setup to Model:
p11
Double click the Model cell in the new (Static Structural) analysis block to open the Mechanical editor.  It should create geometry over the existing mesh, which is retained.
p12
p13
Although the mesh comes across, no material properties, loads, or constraints, etc. are retained from the MAPDL model:

  • These must be entered separately in Workbench/Mechanical
  • There is no ability to remesh or modify the existing mesh

You can apply (or reapply) loads and constraints directly on geometry, or on nodal components that were defined in MAPDL which become Named Selections in Mechanical:

p14p5

 p14

Solve and postprocess as usual in the Mechanical editor.
p15
In conclusion, ANSYS 15.0 gives us new and enhanced capability for utilizing legacy models, particularly those from MAPDL saved as .cdb file format.  Although not everything is retained, this capability does provide us with additional tools to reuse existing models without having to start from scratch.

ANSYS & 3D Printing: Converting your ANSYS Mechanical or MAPDL Model into an STL File

image
3D printing is all the rage these days.  PADT has been involved in what should be called Additive Manufacturing since our founding twenty years ago.  So people in the ANSYS world often come to us for advice on things 3D Printer’ish.  And last week we got an email asking if we had a way to convert a deformed mesh into an STL file that can be used to print that deformed geometry.  This email caused neurons to fire that had not fired in some time. I remembered writing something, but it was a long time ago.

Fortunately I have Google Desktop on my computer so I searched for ans2stl, knowing that I always called my translators ans2nnn of some kind. There it was.  Last updated in 2001, written in maybe 1995. C.  I guess I shouldn’t complain, it could have been FORTRAN. The notes say that the program has been successfully tested on Windows NT. That was a long time ago.

So I dusted it off and present it here as a way to get results from your ANSYS Mechanical or ANSYS Mechanical APDL model as a deformed STL file.

UPDATE – 7/8/2014

Since this article was written, we have done some more work with STL files. This macro works fine on a tetrahedral mesh, but if you have hex elements, it won’t work – it assumes triangles on the face.  It also requires a macro and some ‘C’ code, which is an extra pain. So we wrote a more generic macro that works with Hex or Tet meshes and writes the file directly. It can be a bit slow, but not annoyingly slow.  We recommend you use this method instead of the ones outlined below.

Here is the macro:  writstl.zip

The Process

An STL file is basically a faceted representation of geometry. Triangles on the surface of your model. So to get an STL file of an FEA model, you simply need to generate triangles on your mesh face, write them out to a file, and convert them to an STL format.  If you want deformed geometry, simply use the UPGEOM command to move your nodes to the deformed position.

The Program

Here is the source code for the windows version of the program:

/*
---------------------------------------------------------------------------

 PADT--------------------------------------------------- Phoenix Analysis &
                                                        Design Technologies

---------------------------------------------------------------------------
                             www.padtinc.com
---------------------------------------------------------------------------

       Package: ans2stl

          File: ans2stl.c
          Args: rootname
        Author: Eric Miller, PADT
		(480) 813-4884 
		eric.miller@padtinc.com

	Simple program that takes the nodes and elements from the
	surface of an ANSYS FE model and converts it to a binary
	STL file.

	USAGE:
		Create an ANSYS surface mesh one of two ways:
			1: amesh the surface with triangles
			2: esurf an existing mesh with triangles
         	Write the triangle surface mesh out with nwrite/ewrite
		Run ans2stl with the rootname of the *.node and *.elem files
		   as the only argument
		This should create a binary STL file

	ASSUMPTIONS:
		The ANSYS elements are 4 noded shells (MESH200 is suggested)
		in triangular format (nodes 3 and 4 the same)

		This code has been successfully compiled and tested
		on WindowsNT

		NOTE: There is a known issue on UNIX with byte order
				Please contact me if you need a UNIX version

	COMPILE:
		gcc -o ans2stl_win ans2stl_win.c

       10/31/01:       Cleaned up for release to XANSYS and such
       1/13/2014:	Yikes, it's been 12+ years. A little update 
       			and publish on The Focus blog
			Checked it to see if it works with Windows 7. 
			It still compiles with GCC just fine.

---------------------------------------------------------------------------
PADT, Inc. provides this software to the general public as a courtesy.
Neither the company nor its employees are responsible for the use or
accuracy of this software.  In short, it is free, and you get what
you pay for.
---------------------------------------------------------------------------
*/
/*======================================================

   SAMPLE ANSYS INPUT DECK THAT SHOWS USAGE

finish
/clear
/file,a2stest
/PREP7  
!----------
! Build silly geometry
BLC4,-0.6,0.35,1,-0.75,0.55 
SPH4,-0.8,-0.4,0.45 
CON4,-0.15,-0.55,0.05,0.35,0.55 
VADD,all
!------------------------
! Mesh surface with non-solved (MESH200) triangles
et,1,200,4
MSHAPE,1,2D   ! Use triangles for Areas
MSHKEY,0      ! Free mesh
SMRTSIZE,,,,,5
AMESH,all
!----------------------
! Write out nodes and elements
nwrite,a2stest,node
ewrite,a2stest,elem
!--------------------
! Execute the ans2stl program
/sys,ans2stl_win.exe a2stest

======================================================= */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct vertStruct *vert;
typedef struct facetStruct *facets;
typedef struct facetListStruct *facetList;

        int     ie[8][999999];
        float   coord[3][999999];
        int	np[999999];

struct vertStruct {
  float	x,y,z;
  float	nx,ny,nz;
  int  ivrt;
  facetList	firstFacet;
};

struct facetListStruct {
  facets	facet;
  facetList	next;
};

struct facetStruct {
  float	xn,yn,zn;
  vert	v1,v2,v3;
};

facets	theFacets;
vert	theVerts;

char	stlInpFile[80];
float	xmin,xmax,ymin,ymax,zmin,zmax;
float   ftrAngle;
int	nf,nv;  

void swapit();
void readBin();
void getnorm();
long readnodes();
long readelems();

/*--------------------------------*/
main(argc,argv)
     int argc;
     char *argv[];
{
  char nfname[255];
  char efname[255];
  char sfname[255];
  char s4[4];
  FILE	*sfile;
  int	nnode,nelem,i,i1,i2,i3;
  float	xn,yn,zn;

  if(argc <= 1){
        puts("Usage:  ans2stl file_root");
        exit(1);
  }
  sprintf(nfname,"%s.node",argv[1]);
  sprintf(efname,"%s.elem",argv[1]);
  sprintf(sfname,"%s.stl",argv[1]);

  nnode = readnodes(nfname);
  nelem = readelems(efname);
  nf = nelem;

  sfile = fopen(sfname,"wb");
  fwrite("PADT STL File, Solid Binary",80,1,sfile);
  swapit(&nelem,s4);    fwrite(s4,4,1,sfile);

  for(i=0;i<nelem;i++){ 
      i1 = np[ie[0][i]];
      i2 = np[ie[1][i]];
      i3 = np[ie[2][i]];
      getnorm(&xn,&yn,&zn,i1,i2,i3);

      swapit(&xn,s4);	fwrite(s4,4,1,sfile);
      swapit(&yn,s4);	fwrite(s4,4,1,sfile);
      swapit(&zn,s4);	fwrite(s4,4,1,sfile);

      swapit(&coord[0][i1],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[1][i1],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[2][i1],s4);	fwrite(s4,4,1,sfile);

      swapit(&coord[0][i2],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[1][i2],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[2][i2],s4);	fwrite(s4,4,1,sfile);

      swapit(&coord[0][i3],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[1][i3],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[2][i3],s4);	fwrite(s4,4,1,sfile);
      fwrite(s4,2,1,sfile);
  }
  fclose(sfile);
    puts(" ");
  printf("  STL Data Written to %s.stl \n",argv[1]);
    puts("  Done!!!!!!!!!");
  exit(0);
}

void  getnorm(xn,yn,zn,i1,i2,i3)
	float	*xn,*yn,*zn;
	int	i1,i2,i3;
{
	float	v1[3],v2[3];
	int	i;

        for(i=0;i<3;i++){
	  v1[i] = coord[i][i3] - coord[i][i2];
	  v2[i] = coord[i][i1] - coord[i][i2];
	}

	*xn = (v1[1]*v2[2]) - (v1[2]*v2[1]);
	*yn = (v1[2]*v2[0]) - (v1[0]*v2[2]);
	*zn = (v1[0]*v2[1]) - (v1[1]*v2[0]);
}
long readelems(fname)
        char    *fname;
{
        long num,i;
        FILE *nfile;
        char    string[256],s1[7];

        num = 0;
        nfile = fopen(fname,"r");
		if(!nfile){
			puts(" error on element file open, bye!");
			exit(1);
		}
        while(fgets(string,86,nfile)){
          for(i=0;i<8;i++){
            strncpy(s1,&string[6*i],6);
            s1[6] = '\0';
            sscanf(s1,"%d",&ie[i][num]);
          }
          num++;
        }

        printf("Number of element read: %d\n",num);
        return(num);
}

long readnodes(fname)
        char	*fname;
{
        FILE    *nfile;
        long     num,typeflag,nval,ifoo;
        char    string[256];

        num = 0;
        nfile = fopen(fname,"r");
		if(!nfile){
			puts(" error on node file open, bye!");
			exit(1);

		}
        while(fgets(string,100,nfile)){
          sscanf(string,"%d ",&nval);
          switch(nval){
            case(-888):
                typeflag = 1;
            break;
            case(-999):
                typeflag = 0;
            break;
            default:
                np[nval] = num;
                if(typeflag){
                        sscanf(string,"%d %g %g %g",
                           &ifoo,&coord[0][num],&coord[1][num],&coord[2][num]);
                }else{
                        sscanf(string,"%d %g %g %g",
                           &ifoo,&coord[0][num],&coord[1][num],&coord[2][num]);
                        fgets(string,81,nfile);
                }
num++;
            break;
        }

        }
        printf("Number of nodes read %d\n",num);
        return(num);

}

/* A little ditty to copy the 4 bytes; binary STL is little-endian ("for DOS"),
   so no byte swap is actually needed on Windows (see the UNIX note above) */
void swapit(s1,s2)
     char s1[4],s2[4];
{
  s2[0] = s1[0];
  s2[1] = s1[1];
  s2[2] = s1[2];
  s2[3] = s1[3];
}

ans2stl_win_2014_01_28.zip

Creating the Nodes and Elements

I’ve created a little example macro that can be used to make an STL of deformed geometry.  If you do not want the deformed geometry, simply remove or comment out the UPGEOM command.  This macro is good for MAPDL or ANSYS Mechanical; just comment out the last line to use it with MAPDL:

finish                  ! exit whatever preprocessor you're in
! move the RST file to a temp file for the UPGEOM. Comment out if you want
! the original geometry
/copy,file,rst,,stl_temp,rst
/prep7                  ! Go in to PREP7
et,999,200,4            ! Create a dummy triangle element type, non-solved (200)
type,999                ! Make it the active type
esurf,all               ! Surface mesh your model
!
! Update the geometry to the deformed shape
! The first argument is the scale factor, adjust to the appropriate level
! Comment this line out if you don't want deformed geometry
upgeom,1000,,,stl_temp,rst
!
esel,type,999           ! Select those new elements
nelem                   ! Select the nodes associated with them
nwrite,stl_temp,node    ! Write the node file
ewrite,stl_temp,elem    ! Write the element file
! Run the program to convert
! This assumes your executable is in c:\temp. If not, change to the proper
! location
/sys,c:\temp\ans2stl_win.exe stl_temp
! If this is an ANSYS Mechanical code snippet, then copy the resulting STL file up to
! the root directory for the project
! For MAPDL, comment this line out.
/copy,stl_temp,stl,,stl_temp,stl,..\..

An Example

To prove this out using modern computing technology (remember, last time I used this was in 2001) I brought up my trusty valve body model and slammed 5000 lbs on one end, holding it on the top flange.  I then inserted the Commands object into the post processing branch:

image

When the model is solved, that command object will get executed after ANSYS is done doing all of its post processing, creating an STL of the deformed geometry. Here is what it looks like in the output file as APDL executes the various commands:

/COPY FILE FROM FILE= file.rst
  TO FILE= stl_temp.rst
 FILE file.rst COPIED TO stl_temp.rst

 ***** ANSYS - ENGINEERING ANALYSIS SYSTEM RELEASE 15.0 *****
 ANSYS Multiphysics
 65420042 VERSION=WINDOWS x64 08:39:44 JAN 14, 2014 CP= 22.074
 valve_stl--Static Structural (A5)
 Note - This ANSYS version was linked by Licensee

 ***** ANSYS ANALYSIS DEFINITION (PREP7) *****

 ELEMENT TYPE 999 IS MESH200 3-NODE TRIA MESHING FACET
 KEYOPT( 1- 6)= 4 0 0 0 0 0
 KEYOPT( 7-12)= 0 0 0 0 0 0
 KEYOPT(13-18)= 0 0 0 0 0 0
 CURRENT NODAL DOF SET IS UX UY UZ
 THREE-DIMENSIONAL MODEL

 ELEMENT TYPE SET TO 999

 GENERATE ELEMENTS ON SURFACE DEFINED BY SELECTED NODES
 TYPE= 999 REAL= 1 MATERIAL= 1 ESYS= 0
 NUMBER OF ELEMENTS GENERATED= 13648

 USING FILE stl_temp.rst
 THE SCALE FACTOR HAS BEEN SET TO 1000.0
 USING FILE stl_temp.rst

 ESEL FOR LABEL= TYPE FROM 999 TO 999 BY 1
 13648 ELEMENTS (OF 43707 DEFINED) SELECTED BY ESEL COMMAND.

 SELECT ALL NODES HAVING ANY ELEMENT IN ELEMENT SET.
 6814 NODES (OF 53895 DEFINED) SELECTED FROM
 13648 SELECTED ELEMENTS BY NELE COMMAND.

 WRITE ALL SELECTED NODES TO THE NODES FILE.
 START WRITING AT THE BEGINNING OF FILE stl_temp.node
 6814 NODES WERE WRITTEN TO FILE= stl_temp.node

 WRITE ALL SELECTED ELEMENTS TO THE ELEMENT FILE.
 START WRITTING AT THE BEGINNING OF FILE stl_temp.elem
 Using Format = 14(I6)
 13648 ELEMENTS WERE WRITTEN TO FILE= stl_temp.elem

 SYSTEM=
 c:\temp\ans2stl_win.exe stl_temp

 Number of nodes read 6814
 Number of element read: 13648
 STL Data Written to stl_temp.stl
 Done!!!!!!!!!

 /COPY FILE FROM FILE= stl_temp.stl
  TO FILE= ..\..\stl_temp.stl
 FILE stl_temp.stl COPIED TO ..\..\stl_temp.stl

image

The resulting STL file looks great:

image

I use MeshLab to view my STL files because… well it is free.  Do note that the mesh looks coarser.  This is because the ANSYS mesh uses TETS with midside nodes.  When those faces get converted to triangles those midside nodes are removed, so you do get a coarser looking model.

And after getting bumped from the queue a couple of times by “paying” jobs, our RP group printed up a nice FDM version for me on one of our Stratasys uPrint Plus machines:

image

It’s kind of hard to see, so I went out to the parking lot and recorded a short video of the part, twisting it around a bit:

Here is the ANSYS Mechanical project archive if you want to play with it yourself.

Other Things to Consider

Using FE Modeler

You can use FE Modeler in a couple of different ways with STL files. First off, you can read an STL file made using the method above. If you don’t have an STL preview tool, it is an easy way to check your distorted mesh.  Just choose STL as the input file format:

image

You get this:

image

If you look back up at the open dialog you will notice that it reads a bunch of mesh formats. So one thing you could do instead of using my little program is use FE Modeler to make your STL.  Instead of executing the program with a /SYS command, simply use a CDWRITE,DB command and then read the resulting *.CDB file into FE Modeler.  To write out the STL, just set the “Target System” to STL and then click “Write Solver File”.

image

You may know, or may have noticed in the image above, that FE Modeler can read other FEA meshes.  So if you are using some other FEA package, which you should not, then you can make an STL file in FE Modeler as well.

Color Contours

The next obvious question is how to get the color contours from the plot onto the printed part. Right now we don’t have that type of printer here at PADT, but I believe that the dominant color 3D printer out there, the former Z-Corp and now 3D Systems machines, will read ANSYS results files. Stratasys JUST announced a new color 3D Printer that makes usable parts. Right now they don’t have a way to do contours, but as soon as they do we will publish something.

Another option is to use the /SHOW,VRML option and then convert that file to STL with the color information.

Scaling

Scaling is something you should think about. Not only the scaling on your deformed geometry, but the scaling on your model for printing.  Units can be tricky with STL files so make sure you check your model size before you print.

Smoother STL Surfaces

Your FEA mesh may be kind of coarse, and the resulting STL file is even coarser because of the whole midside node thing.  Most of the smoothing tools out there will also get rid of sharp edges, so you don’t want those. Your best bet is to refine your mesh or use a tool like Geomagic.

Making a CAD Model from my Deformed Mesh

Perhaps you stumbled on this posting not wanting to print your model. Maybe you want a CAD model of your deformed geometry.  You would use the same process, and then use Geomagic Studio.  It actually works very well and gives you a usable CAD model when you are done.

The 10 Coolest New Features in R15 of ANSYS Mechanical APDL

On Tuesday we posted on what I thought were the 10 coolest features in ANSYS Mechanical for R15. Now it is time to take a good look at ANSYS Mechanical APDL, or MAPDL (classic ANSYS, black window ANSYS, or my favorite: ANSYS).  The developers have been very busy and added a lot of useful features, and there are a large number of “Oh Yes!” capabilities in this release that different groups of users will be very excited about.  For this posting though, we are going to stay focused on the things that impact larger groups of users and/or expand capability in the code.  As always, you can learn more by attending one of the many upcoming ANSYS webinars or reading the release notes in the help.

image1: Rezoning Enhancements and Additions

This is my favorite change in R15, a mix of some improvements and some new capabilities. The whole idea of rezoning is that when you have a part that sees a large amount of deformation, the mesh often gets very distorted. It often gets so distorted that the elements are no longer accurate, crazy strains are calculated, and the element literally blows up.  Or it turns inside out and generates an error in the solver.

Rezoning has been around for a while, but at this release some holes are plugged and some big advances are made.  The first change was a hole plug: you can now rezone areas that contain surface effect (SURF153/154) elements.  It is very common to have that type of load on highly distorted geometry, so this is welcome.

The next change was adding mesh splitting for 3D  tetrahedral elements. This is used for manual rezoning with the REMESH command. It was available before with 2D elements.  The 2D example from the manual shows it best:

image

The advantage of this approach is that the subsequent stress field that is placed upon the new mesh is already accurate at the nodes that existed for the original mesh, and are fairly accurately interpolated for the new nodes.  When you read in a completely new mesh, you have to interpolate the stress field and then iterate till the stresses are accurate.  This approach can be much faster.

The third and best addition is Automatic Rezoning or Mesh Nonlinear Adaptivity.  This process is completely automatic and does not require the user interaction that rezoning does.  Both splitting and remeshing are used. You can turn remeshing on based upon position, energy levels, or contact conditions.

Here is an example from the user manual:

seal2

And here is an example that ANSYS, Inc. is showing:

Seal_Squish1

image2: Bolt Thread Modeling

Modeling bolt threads.  Classic newbie mistake, right?  They model threads on 37 bolts and then try to set up contact on all of them.  It never goes well, does it?  ANSYS MAPDL has had a bolt modeling capability for some time that allows you to simulate a bolt as a cylinder with preload and everything. But what that approximation missed was the fact that the contact is at an angle that is not normal to the cylinder surface.

At R15 you can now specify your thread geometry and the contact algorithm will calculate the proper normal and contact pattern for the contact forces. Much more efficient. There is a great example in the Technology Demonstration Guide, Section 39, showing all three approaches: model the threads in the mesh, use the new contact threads, or just bond the threaded area.

image

Needless to say we will be doing an in-depth posting on this one in the future.

image3: Mode-Superposition for Harmonic Analysis of Cyclic Structures

I started my career in turbomachinery, and from day one, one of the holy grails was to be able to do a harmonic analysis using blade pressure loads from a CFD run: getting the actual stresses in the blades caused by the varying aerodynamic load as they spun around and dealt with variations caused by passing frequencies and resonance in the flow itself.  It was always doable as a full 360 model on both the CFD and structural side. And you could have done it using the full method for a few releases.  But now we can use cyclic symmetry and mode-superposition.

The ANSYS MAPDL side of things is released in R15.  You can take your complex loading info from CFD and apply it as a load on your blades using the new /MAP pre-processor (see below), something that was a bit of a pain to do in the past.  The other big change was making it all work with mode-superposition.  The grail is almost complete.

image4: Arc Length

This is one of those things buried in the code that the user really doesn’t have to do anything to benefit from. If you are not familiar with the method, it is an approach used on non-linear problems using the Newton-Raphson method. Most solves use other methods,  but for things like non-linear buckling it is a better method. Check out 14.12 in the Theory Reference for the math and all that.

The bottom line at R15 is that they changed the algorithm to use the Crisfield Method and to avoid Driftback.  What it means to you the user is those nasty non-linear buckling problems that always seem to have a hard time converging, or that require really small steps to converge, should converge now or converge faster.

image5: Mode Selection

This is another one of those advanced options that users of other solvers have been asking for in the past. When you do a modal analysis that produces a ton of modes, you often want to ignore the majority of them and focus on the few modes that are strong or that get excited.  In the past you could specify a range only, and only one range. At R15 you can now select which modes to use in modal-superposition analysis.  You can decide which modes to use based on the modal effective mass, the mode coefficient, or the DDAM Procedure. Or, if you have your own criteria, you can use APDL to create a table that specifies a 1 for keep, and a 0 for toss. Very handy.

image6: Acoustics Enhancements

This is really not one new feature, but an overall continuation of adding functionality to the acoustics capability in ANSYS MAPDL. For decades, this capability was not really focused on advanced acoustic modeling. But over the last couple of releases we have seen added functionality that puts it on a par with specialty acoustics codes.

The key enhancements at R15 are:

  • Frequency-dependent acoustic material properties
  • Surface impedance can be frequency-dependent
  • A new boundary layer impedance (BLI) model is available for visco-thermo fluids modeling
  • A wider range of units are now supported for acoustics, including support for user defined units (/UNITS)
  • Many enhancements for coupling acoustics with CFD for FSI
  • New postprocessing commands for calculating acoustics-specific information like sound power level, A-weighted sound pressure level (dBA), return loss, and transmission loss, amongst others.

image

image7: Shape Memory on Beam, Shell and Plane Strain

For whatever reason shape memory alloys have always fascinated me, and being able to simulate them accurately is very important for those that use the material in their products. Development has been adding more and more functionality in this area for many releases.  The material has two unique properties: it is super elastic and it has a memory effect.

With R15 development rounds out the capabilities with full support for beam and shell elements, adding the memory behavior.  This is important, and warrants top 10 status for me, because many of the geometries we have worked on that use Nitinol (the most common shape memory alloy) are made with wires.  In the past you had to model them in 3D, now we can use beams.  Faster, more accurate, etc…

image

image8: RSTMAC to Experimental Data

Something we should all be doing more is comparing our FEA results to experimental data. One excuse we often use is that it is too hard to compare the data from a vibration test to our modal analysis results. Well, that is no longer so. The RSTMAC command has been modified to not only compare to ANSYS result files, but to also read the old Universal file format for results. Yippee!

Why that format? Because back in the days when SDRC was SDRC and IDEAS was their prep/post tool, they had some awesome result comparison tools. So a lot of test software out there writes to the file that IDEAS read, the Universal file.  If your software does not write to a Universal file, the key info you need is in the user manual: Basic Analysis Guide, section 7.3.8.5.  And here is a link to some documentation on it.

image

image9: Mapping Processor

It has been a long time since ANSYS, Inc. has added a new processor to ANSYS Mechanical APDL.  /PREP7, /SOLU, /POST26.  So it was kind of cool to see that they are creating a new mapping processor called /MAP that will be a place for you to do load mapping. At this initial release, it maps surface pressures as a point cloud from a CFD model onto your mechanical model.

Under the hood it is actually just the algorithms used in the *MOPER APDL function.  But now it is exposed through its own set of commands so that users don’t have to script their load mapping. And it supports imaginary loading for that fancy cyclic-symmetry stuff some of us need to do. As you can imagine, this needs its own article, but here are the high points (a short sketch of the command flow follows the list):

  • Enter the processor with /MAP
  • Your model must have surface effect (SURF154) elements paved onto the outside of where you want the pressures.
  • Specify the nodes you want to map the pressures on to with the TARGET command
  • You can provide pressures in a variety of formats (specified with the FTYPE command):
      • CFX Transient Blade Row format, which is made by CFX and contains real and imaginary terms
      • The standard output file from ANSYS CFD-Post
      • A fixed format file that has x, y, z, and pressure; yes, you can specify the actual format in FORTRAN using the READ command
      • And of course the trusty comma-delimited file format: x, y, z, pressure
  • The READ command specifies some other parameters and reads in the point-based pressure data.
  • They have given us a PLGEOM command to view the target nodes and the point cloud on top of each other so you can see if things are aligned
  • A whole slew of /PREP7-like commands to edit and move your point cloud data. Basically the points are treated as nodes and you manipulate them like nodes. They are just nodes with a pressure assigned to them.
  • When everything is good, use MAP to actually do the interpolation.
  • View the results with PLMAP
  • When you are happy, save the pressures as SFE commands using WRITEMAP
 

There is no GUI interface for this yet.  It was put into place to support some advanced FSI modeling of turbomachinery, but it benefits all users.  We hope to see more in this new module in future releases.  Here is an example we were playing with at PADT:

image

image

Image10: Performance Enhancements, Including GPU Stuff

Last but certainly not least are the enhancements to solver performance that we have come to expect. New compilers, optimized code, and new hardware all come together to deliver better bang for your ANSYS buck.  There is a ton in there, all documented in section 2.3.1 of the ANSYS, Inc. Release Notes part of the help.  The highlights are:

  • The sparse solver now has some sophisticated error detection for handling singular or near singular matrices.  This should keep you from solving poorly constrained models, or models with really messed up elements. Do note, some models that ran in the past, maybe with a warning, will now not solve. This is a good thing, since the matrices are not good.
  • Better domain decomposition for distributed ANSYS, especially for larger core counts.
  • The subspace method has been added for solving modal analyses.  It is well suited for larger problems and runs well distributed.
  • Switching to the new Intel compiler has resulted in a 30% faster solve time on some problems when using Sandy Bridge Intel processors.
  • Harmonic analysis solves using the full method have been improved, resulting in up to 40% improvements in solve time.
  • The Intel Xeon Phi coprocessor is now supported; we have no data yet on the performance, but will try and get some as soon as we can.
  • The latest NVIDIA Kepler GPU’s are now supported, and the sparse solver has been improved again to take advantage of the Kepler GPU’s.

CUBE HVPC 512 Core System

Thoughts

The hard part for me in writing this posting was picking the top 10. There are a lot of significant enhancements but few are world changing. Most improve existing technologies, provide functionality for a subset of users, or fill a hole in capability.  Taken as a whole though, they show ANSYS, Inc.’s strong commitment to core technology: new elements, new material models, faster solves, expanded advanced capability, etc…

The end result is giving greater power to the user through greater depth and breadth.  And in the end, isn’t this why we use ANSYS Mechanical APDL in the first place – the incredible breadth and depth of capability it offers?  Scrolling back up through the images you have to admit, this is some pretty cool stuff.

This May Be the Fastest ANSYS Mechanical Workstation we Have Built So Far

The Build Up

It’s 6:30 am and a dark shadow looms in Eric’s doorway. I wait until Eric finishes his Monday morning company updates. “Eric, check this out: the CUBE HVPC w16i-k20x we built for our latest customer scaled ANSYS Mechanical to 16 cores on our test run.” Eric’s left eyebrow rises slightly. I know I have him now; I have his full and complete attention.

Why is this huge news?

This is why: Eric knows, and probably many of you reading this also know, that solving differential equations distributed and in parallel, along with using graphics processing units, makes our hearts skip a beat. The finite element method used for solving these equations is CPU intensive and I/O intensive. This is headline-news type stuff to us geek types. We love scratching our way along the compute processing power grid to squeeze every bit of performance out of our hardware!

Oh, and yes, a lower time to solve is better! No GPU’s were harmed in these tests. Only one NVIDIA TESLA K20X GPU was used during the test.

Take a Deep Breath and Start from the Beginning:

I have been gathering and hoarding years’ worth of ANSYS Mechanical benchmark data. Why? Not sure really; after all, I am a wanna-be ANSYS analyst. However, it wasn’t until a couple weeks ago that I woke up to the why again. My CUBE HVPC team sold a dual socket INTEL Ivy Bridge based workstation to a customer out of Washington state. Once we got the order, our Supermicro reseller‘s phone was bouncing off the desk. After some back and forth, this is how the parts arrive directly from Supermicro, California. Yes, designed in the U.S.A.  And they show up in one big box:

clip_image002[4]

Normal is as Normal Does

As per normal is as normal does, I ran the series of ANSYS benchmarks. You know the type: benchmarks that perform coupled-physics simulations and solve really huge matrices. So I ran ANSYS v14sp-5, the ANSYS FLUENT benchmarks, and some benchmarks for this customer, the types of runs they want to use the new machine for. I was talking these benchmark results over with Eric, and he thought that now was a perfect time to release the flood of benchmark data. Well, some of it; a smidge of the benchmark data. I do admit the data does get overwhelming, so I have tried to trim down the charts and graphs to the bare minimum. So what makes this workstation recipe for the fastest ANSYS Mechanical workstation so special? What is truly exciting enough to tip me over in my overstuffed black leather chair?

The Fastest Ever? Yup we have been Changed Forever

Not only is it the fastest ANSYS Mechanical workstation running on CUBE HVPC hardware, it uses two INTEL CPU’s built at 22 nanometers. Additionally, this is the first time that we have had an INTEL dual socket based workstation continue to gain faster times up to and including its maximum core count when solving in ANSYS Mechanical APDL.

Previously the fastest time was on the CUBE HVPC w16i-GPU workstation listed below. And it peaked at 14 cores. 

Unfortunately we only had time before we shipped the system off to gather two runs: 14 and 16 cores on the new machine. But you can see how fast that was in this table.  It was close to the previous system at 14 cores, but blew past it at 16 whereas the older system actually got clogged up and slowed down:

Run Time (Sec)

Cores Used | Config B | Config C | Config D
14         | 129.1    | 95.1     | 91.7
16         | 130.5    | 99       | 83.5

And here are the results as a bar graph for all the runs with this benchmark:

CUBE-Benchmark-ANSYS-2013_11_01

We can’t wait to build one of these with more than one motherboard, maybe a 32 core system with InfiniBand connecting the two. That should allow some very fast run times on some very, very large problems.

ANSYS V14sp-5 ANSYS R14 Benchmark Details

  • Elements : SOLID187, CONTA174, TARGE170
  • Nodes : 715,008
  • Materials : linear elastic
  • Nonlinearities : standard contact
  • Loading : rotational velocity
  • Other: coupling, symmetric matrix, sparse solver
  • Total DOF : 2.123 million
  • ANSYS 14.5.7

Here are the details and the data of the March 8, 2013 workstation:

Configuration C = CUBE HVPC w16i-GPU

  • CPU: 2x INTEL XEON e5-2690 (2.9GHz 8 core)
  • GPU: NVIDIA TESLA K20 Companion Processor
  • GRAPHICS: NVIDIA QUADRO K5000
  • RAM: 128GB DDR3 1600Mhz ECC
  • HD RAID Controller: SMC LSI 2208 6Gbps
  • HDD: (os and apps): 160GB SATA III SSD
  • HDD: (working directory):6x 600GB SAS2 15k RPM 6Gbps
  • OS: Windows 7 Professional 64-bit, Linux 64-bit
  • Other: ANSYS R14.0.8 / ANSYS R14.5

Here are the details from the new, November 1, 2013 workstation:

Configuration D = CUBE HVPC w16i-k20x

  • CPU: 2x INTEL XEON e5-2687W V2 (3.4GHz)
  • GPU: NVIDIA TESLA K20X Companion Processor
  • GRAPHICS: NVIDIA QUADRO K4000
  • RAM: 128GB DDR3 1600Mhz ECC
  • HDD: (os and apps): 4 x 240GB Enterprise Class Samsung SSD 6Gbps
  • HD RAID CONTROLLER: SMC LSI 2208 6Gbps
  • OS: Windows 7 Professional 64-bit, Linux 64-bit
  • Other: ANSYS 14.5.7

You can view the output from the run on the newer box (Configuration D) here:

Here is a picture of the Configuration D machine with the info on its guts:

clip_image006[4]clip_image008[4]

What is Inside that Chip:

The one (or two) CPU that rules them all: http://ark.intel.com/products/76161/

Intel® Xeon® Processor E5-2687W v2

  • Status: Launched
  • Launch Date: Q3’13
  • Processor Number: E5-2687WV2
  • # of Cores: 8
  • # of Threads: 16
  • Clock Speed: 3.4 GHz
  • Max Turbo Frequency: 4 GHz
  • Cache:  25 MB
  • Intel® QPI Speed:  8 GT/s
  • # of QPI Links:  2
  • Instruction Set:  64-bit
  • Instruction Set Extension:  Intel® AVX
  • Embedded Options Available:  No
  • Lithography:  22 nm
  • Scalability:  2S Only
  • Max TDP:  150 W
  • VID Voltage Range:  0.65–1.30V
  • Recommended Customer Price:  BOX : $2112.00, TRAY: $2108.00

The GPU’s that just keep getting better and better:

Features                                         | TESLA C2075 | TESLA K20X   | TESLA K20
Number and Type of GPU                           | FERMI       | Kepler GK110 | Kepler GK110
Peak double precision floating point performance | 515 Gflops  | 1.31 Tflops  | 1.17 Tflops
Peak single precision floating point performance | 1.03 Tflops | 3.95 Tflops  | 3.52 Tflops
Memory Bandwidth (ECC off)                       | 144 GB/sec  | 250 GB/sec   | 208 GB/sec
Memory Size (GDDR5)                              | 6GB         | 6GB          | 5GB
CUDA Cores                                       | 448         | 2688         | 2496

clip_image012[4]

Ready to Try one Out?

If you are as impressed as we are, then it is time for you to try out this next iteration of the Intel chip, configured for simulation by PADT, on your problems.  There is no reason for you to be using a CAD box or a bloated web server as your HPC workstation for running ANSYS Mechanical and solving in ANSYS Mechanical APDL.  Give us a call, our team will take the time to understand the types of problems you run, the IT environment you run in, and custom configure the right system for you:

http://www.padtinc.com/products/hardware/cube-hvpc,
email: garrett.smith@padtinc.com,
or call 480.813.4884

/HBC: One of Those Little Known Commands

The other day we received a tech support call requesting a way to remove the space between the element faces on a pressure plot.  The caller wanted this so that he could get a contour plot without seeing the contours of the elements on the back side of the part. So I built my trusty test block and applied a pressure. By turning on the pressure load symbols with the /PSF command, also under PlotCtrls > Symbols, you can get plots like this.

image

Face Outlines (/PSF,1,1)

 

image

Arrows (/PSF,1,2)

 

image

Contours (/PSF,1,3)

Of course the customer was using this last contour plot option, but as you can see below, if you have pressure on both sides of the model, then the backside pressures show through the gaps. The plot can get a bit confusing. So after some digging, starting with the /PSF command, and not finding any reference on how to change the plot behavior, I asked around to see if anyone else had a way to do it, other than my first inclination, which was to write a macro. As I reverted to creating a macro to do what should be a simple task, I thought, “No, there HAS to be an easier way.” Of course there is.

image

The one thing I’ve learned over the years… Well, yes, I’ve learned more than ONE thing, but I’m trying to make a point here… The one thing I’ve learned over the years is that no matter how much I learn, there is always someone who knows more than I do.  So I asked Sheldon! (Not the Sheldon on Big Bang Theory; ANSYS, Inc.’s very own Sheldon Imaoka.) I thought, “Surely he will know some undocumented command to save me time.”  It took him all of three minutes to get back to me with the /HBC command. It is a fully documented, but seldom used, command hidden in the recesses of the Command Reference that determines how boundary condition symbols are displayed. When turned on, it will “use an improved pressure contour display.” So you go from the picture on the top to the picture on the bottom.

imageimage
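For reference, the whole fix boils down to a couple of commands. This is a hedged sketch: the /PSF arguments follow the contour option shown in the captions above, and ALL is assumed for the window argument of /HBC.

/psf,pres,norm,3    ! display surface pressures as color filled contours
/hbc,all,on         ! switch to the improved pressure contour display in all windows
eplot               ! replot the elements to see the difference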

So I learned two new things. One is that the /HBC command can give you nicer looking plots. The other, and even more useful, thing is to click the links on the help page at the upper right corner.

image

For if I had, I would have found the /HBC command on my own.

image

It looks like I need to sit down with a nice cup of hot chocolate* and the Command Reference and just scan the listing for commands that I don’t recognize and learn what they do.  Oh, what I go through for you people. Well, I’ll just make sure that it’s really good hot chocolate*.   I’ll write a new post from time to time on cool commands I find useful.

Have a great day!!!

*It’s 85 degrees here this week and I really meant iced tea, but I didn’t want to rub it in. Smile

Making APDL Parameters Available in the ANSYS Parameter Manager or DesignXplorer: Prep, Solve, and Post

This is one of those questions that comes up every once in a while that is not so obvious at first glance, but that is simple once you understand how ANSYS Mechanical interacts with ANSYS Mechanical APDL.  After a couple of email exchanges around a tech support question, we thought it would be good to share with everyone.

Before we get started, if you need a refresher on Command Objects in ANSYS Mechanical, the way in which you send APDL commands to the ANSYS Mechanical APDL solver, here is a seminar from a couple of years ago that covers the whole deal:

The basic problem is this: you have an APDL script you execute as a command object that does some sort of model interrogation or stores the result of some calculation, and you want to use that parameter in the parameter manager or in DesignXplorer.  If you look at the details view for a command object you will notice that it only supports input parameters: ARG1-ARG9.

image

If you look at the example (silly) macro you will see that it does the following (a sketch of a macro like this follows the list):

  1. Grabs component (named selection) END1
  2. Figures out how many nodes are attached to END1 (NMND)
  3. Takes ARG1 as the total applied load
  4. Calculates the per node load by dividing the total load by the number of nodes.
  5. Applies that per node load
  6. Reselects all the nodes

If I want to know how many nodes I put the load on and what the per node load is, I’m kind of stuck here.  Any command object you add to the tree above the Solution branch only allows input parameters.

But a command snippet applied in the Solution branch is different: it allows you to pull parameters back and share them through the parameter manager.

When you first insert a command object you only get input parameters (ARG1-ARG9) as usual, and an empty section called “Results”

image

The way you get result parameters, or what I think should be called “Output Parameters,” is to create a parameter in the command object’s APDL script that starts with “my_”.  When you click outside the text input window the program parses your script, and if it finds any “my_” parameters in the text, it sticks them in the Results section:

image

Note, the default is “my_” but you can change it in the “Output Search Prefix” line in the Definition block.
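As a hedged example, a command object under the Solution branch along these lines would hand two values back to the parameter manager; the component name and the quantities being returned follow the earlier example, and the parameter names are only illustrative.

cmsel,s,END1,node                        ! reselect the loaded nodes in the postprocessor
*get,my_number_of_nodes,node,,count      ! picked up as a result parameter via the "my_" prefix
my_node_load = arg1/my_number_of_nodes   ! per node load, with ARG1 passed in as the total load
allsel,all                               ! clean up the selection for anything that follows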

Initially they will show up pinkish because the model has not been run and they are not defined. Click on the box to make them parameters that get passed outside of the program and then run:

image

If you pop back out to the project view you will see that we now have a Parameter Set bar with both input and output parameters:

image

And if you open the parameter manager up you can see the input and output parameters:

image

This works because all ANSYS Mechanical is doing is making one big batch input file for ANSYS MAPDL.  That file contains any command objects you insert into the tree, and any parameters you tagged in a post-processing command object are extracted and returned to ANSYS Mechanical.