Helpful New Meshing Feature in ANSYS Mechanical 17.0 – Nonlinear Mechanical Shape Checking

ansys-new-mesh-r17

Meshing for Nonlinear Structural Problems

Overcoming convergence difficulties in nonlinear structural problems can be a challenge. I’ve written a couple of times previously about tools that can help us overcome those difficulties:

I’m pleased to announce a new tool in the ANSYS Mechanical tool belt in version 17.0.
With version 17.0 of ANSYS we get a new meshing option for structural simulations: Nonlinear Mechanical Shape Checking. This option joins the previously available Standard Mechanical Shape Checking and Aggressive Mechanical Shape Checking. For a nonlinear solution in which elements can become significantly distorted, starting with better-shaped elements means they can undergo larger deformations before running into element formulation errors, so we should encounter fewer difficulties as the nodes deflect and the elements distort. The nonlinear mechanical setting is more restrictive on element shapes than the other two settings.

We’ve been recommending the aggressive mechanical setting for nonlinear solutions for quite a while, and the new nonlinear mechanical setting looks even better. Anecdotally, I have one highly nonlinear customer model that reached 95% of the applied load before a convergence failure in version 16.2, using aggressive mechanical shape checking. In 17.0 it reached 99% simply by remeshing with the same aggressive setting and solving again, which tells you that work has been going on under the hood in the ANSYS meshing and nonlinear technology. After switching to the new nonlinear mechanical shape checking and solving once more, the solution converges for the full 100% of the applied load.
Here are some statistics using just one measure of the ‘goodness’ of our mesh, element quality. You can read about the definition of element quality in the ANSYS Help, but in summary better shaped elements have a quality value close to 1.0, while poorly shaped elements have a value closer to zero. The following stats are for tetrahedral meshes of a simple turbomachinery blade/rotor sector model (this is not a real part, just something made up) comparing two of the options for element shape checking. The table shows that the new nonlinear mechanical setting produces significantly fewer elements with a quality value of 0.5 or less. Keep in mind this is just one way to look at element quality – other methods or a different cutoff might put things in a somewhat different perspective. However, we can conclude that the Nonlinear Mechanical setting is giving us fewer ‘lower quality’ elements in this case.

| Shape Checking Setting | Total Elements | Elements w/ Quality < 0.5 | % of Elements w/ Quality < 0.5 |
|---|---|---|---|
| Aggressive Mechanical | 31,683 | 1,831 | 5.8 |
| Nonlinear Mechanical | 31,865 | 1,249 | 3.9 |

Here are images of a portion of the two meshes mentioned above. This is the mesh with the Aggressive Mechanical Shape Checking option set:

ansys-new-meshing-17-01

And this is the mesh with the Nonlinear Mechanical Shape Checking option set:

ansys-new-meshing-17-02

The eyeball test on these two meshes confirms fewer elements at the lower quality contour levels.

So, if you are running nonlinear structural models, we urge you to test out the new Nonlinear Mechanical mesh setting. Since it is more restrictive on element shapes, you may see longer meshing times or encounter some difficulties meshing complex geometry, but the payoff can be nonlinear solutions that converge more easily. Give it a try!

Keypad Shortcuts for Quick Views in Workbench

keypad1

Hey, did you know that you can access predefined views in both ANSYS Mechanical and DesignModeler using your numeric keypad? You can! Assuming the front view is looking down the +Z-axis at the X-Y plane, here are the various views you can access via your numeric keypad.

For this to work, make sure you’ve clicked within the graphics window itself—not on the top window bar, or one of the tool bars, but right in the region where the model is displayed. You may need to turn off Num Lock, though it works for me on both my laptop and desktop with Num Lock on or off.

With that out of the way, here are the views:

0) Isometric view, a bit more zoomed in than the standard auto-fit isometric view. This is my preferred level of zoom while still being able to see the whole model, to be honest.

image

1) Front view (looking down the +Z-axis)

image

2) Bottom view (looking down the -Y-axis)

image

3) Right view (looking down the +X-axis)

image

4) Back up to the previous view

5) Isometric view, standard autofit (I don’t like the standard auto-fit—too much empty space. I prefer the keypad 0 level of zoom.)

image

6) Go forward to the next view in the cache

7) Left view (looking down the -X-axis)

image

8) Top view (looking down the +Y-axis)

image

9) Back view (looking down the -Z-axis)

image

Here’s a handy-dandy chart you can print out to refer to when using the numeric keypad to change views in Mechanical or DesignModeler. Share it with your friends.

image

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 3)

ansys-fortran-to-c-cpp-1-00

In the last post of this series I illustrated how I handled the nested call structure of the procedural interface to ANSYS' BinLib routines. If you recall, any time you need to extract some information from an ANSYS result file, you have to bracket the function call that extracts the information with a *Begin and *End set of function calls. These two functions set up and tear down internal structures needed by the FORTRAN library. I showed how I used RAII principles in C++ along with a stack data structure to manage this pairing implicitly. However, I have yet to actually read anything truly useful off of the result file! This post centers on the design of a set of C++ iterators that are responsible for actually reading data off of the file. By taking the time to write iterators, we expose the ANSYS RST library to many of the algorithms available within the standard template library (STL), and we also make our own job of writing custom algorithms that consume result file data much easier. So, I think the investment is worthwhile.

If you’ve programmed in C++ within the last 10 years, you’ve undoubtedly been exposed to the standard template library.  The design of the library is really rather profound.  This image represents the high level design of the library in a pictorial fashion:

ansys-fortran-to-c-cpp-3-01

On one hand, the library provides a set of generic container objects that provide a robust implementation of many of the classic data structures available within the field of computer science.  The collection of containers includes things like arbitrarily sized contiguous arrays (vectors), linked lists, associative arrays, which are implemented as either binary trees or as a hash container, as well as many more.  The set of containers alone make the STL quite valuable for most programmers.

On the other hand, the library provides a set of generic algorithms that encompass a whole suite of functionality defined in classic computer science. Sorting, searching, rearranging, merging, etc… are just a handful of the algorithms provided by the library. Furthermore, extreme care has been taken within the implementation of these algorithms such that an average programmer would be hard pressed to produce something safer and more efficient on their own.

However, the real gems of the standard library are its iterators. Iterators bridge the gap between generic containers on one side and the generic algorithms on the other side. Need to sort a vector of integers, or a double ended queue of strings? If so, you just call the same sort function and pass it a pair of iterators. These iterators "know" how to traverse their parent container. (Remember, containers are the data structures.)

So, what if we could write a series of iterators to access data from within an ANSYS result file? What would that buy us? Well, depending upon which concepts our iterators model, having them available would open up access to at least some of the STL suite of algorithms. That's good. Furthermore, having iterators defined would open up the possibility of providing range objects. If we can provide range objects, then all of a sudden range based for loops are possible. These types of loops are more than just syntactic sugar. By encapsulating the bounds of iteration within a range, and by using iterators in general to perform the iteration, the burden of a correct implementation is placed on the iterators themselves. If you spend the time to get the iterator implementation correct, then any loop you write after that, using either the iterators or, better yet, the range object, will implicitly be correct from a loop-construct standpoint. Range based for loops also make your code cleaner and easier to reason about locally.

Now for the downside…  Iterators are kind of hard to write.  The price for the flexibility they offer is paid for in the amount of code it takes to write them.  Again, though, the idea is that you (or, better yet somebody else) writes these iterators once and then you have them available to use from that point forward.

Because of their flexibility, standard conformant iterators come in a number of different flavors. In fact, they are very much like an ice cream sundae where you can pick and choose what features to mix in or add on top. This is great, but again it makes implementing them a bit of a chore. Here are some of the design decisions you have to answer when implementing an iterator:

| Decision | Options | Choice for RST Reader Iterators |
|---|---|---|
| Dereference data type | Anything you want | Special structs for each type of iterator |
| Iteration category | Forward, single pass, bidirectional, or random access iterator | Forward, single pass |

Iterators syntactically function much like pointers in C or C++. That is, like a pointer you can increment an iterator with the ++ operator, you can dereference an iterator with the * operator, and you can compare two iterators for equality. We will talk more about incrementing and comparisons in a minute, but first let's focus on dereferencing. One thing we have to decide is what type of data the client of our iterator will receive when they dereference it. My choice is to return a simple C++ structure with data members for each piece of data. For example, when we are iterating over the nodal geometry, the RST file contains the node number, the nodal coordinates and the nodal rotations. To represent this data, I create a structure like this:

ansys-fortran-to-c-cpp-3-02
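A minimal sketch of that structure (the member names here are assumed):

```cpp
// Hypothetical sketch of the node iterator's dereference type: the node
// number plus the coordinates and rotations stored on the RST file.
struct NodalCoordinateData {
    int    nodeNumber;        // external node number
    double x, y, z;           // nodal coordinates
    double thxy, thyz, thzx;  // nodal rotation angles
};
```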

I think this is pretty self-explanatory.  Likewise, if we are iterating over the element geometry section of an RST file, there is quite a bit of useful information for each element.  The structure I use in that case looks like this:

ansys-fortran-to-c-cpp-3-03
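A minimal sketch of the element structure (member names assumed; the fields mirror the attribute ids and connectivity that the RST element records carry):

```cpp
#include <vector>

// Hypothetical sketch of the element iterator's dereference type.
struct ElementData {
    int              elementNumber;  // external element number
    int              material;       // MAT id
    int              type;           // ETYPE id
    int              realConstant;   // REAL id
    int              section;        // SECNUM id
    int              coordSystem;    // ESYS id
    std::vector<int> nodes;          // element connectivity
};
```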


Again, pretty self-explanatory.  So, when I’m building a node geometry iterator, I’m going to choose the NodalCoordinateData structure as my dereference type.

The next decision I have to make is what “kind” of iterator I’m going to create.  That is, what types of “iteration” will it support?  The C++ standard supports a variety of iterator categories.  You may be wondering why anyone would ever care about an “iteration category”?  Well, the reason is fundamental to the design of the STL.   Remember that the primary reason iterators exist is to provide a bridge between generic containers and generic algorithms.  However, any one particular algorithm may impose certain requirements on the underlying iterator for the algorithm to function appropriately.

Take the algorithm “sort” for example.  There are, in fact, lots of different “sort” algorithms.  The most efficient versions of the “sort” algorithm require that an iterator be able to jump around randomly in constant time within the container.  If the iterator supports jumping around (a.k.a. random access) then you can use it within the most efficient sort algorithm.   However, there are certain kinds of iterators that don’t support jumping around.  Take a linked list container as an example.  You cannot randomly jump around in a linked list in constant time.  To get to item B from item A you have to follow the links, which means you have to jump from link to link to link, where each jump takes some amount of processing time.  This means, for example, there is no easy way to cut a linked list exactly in half even if you know how many items in total are in the list.  To cut it in half you have to start at the beginning and follow the links until you’ve followed size/2 number of links.  At that point you are at the “center” of the list.  However, with an array, you simply choose an index equal to size/2 and you automatically get to the center of the array in one step.  Many sort algorithms, as an example, obtain their efficiency by effectively chopping the container into two equal pieces and recursively sorting the two pieces.  You lose all that efficiency if you have to walk out to the center.

If you look at the “types” of iterators in the table above you will see that they build upon one another.  That is, at the lowest level, I have to answer the question, can I just move forward one step?  If I can’t even do that, then I’m not an iterator at all.  After that, assuming I can move forward one step, can I only go through the range once, or can I go over the range multiple times?  If I can only go over the range once, I’m a single pass iterator.  Truthfully, the forward iterator concept and the single pass iterator concept form level 1A and 1B of the iterator hierarchy.  The next higher level of functionality is a bidirectional iterator.  This type of iterator can go forward and backwards one step in constant time.  Think of a doubly linked list.  With forward and backward links, I can go either direction one step in constant time.  Finally, the most flexible iterator is the random access iterator.  These are iterators that really are like raw pointers.  With a pointer you can perform pointer arithmetic such that you can add an arbitrary offset to a base pointer and end up at some random location in a range.  It’s up to you to make sure that you stay within bounds.  Certain classes of iterators provide this level of functionality, namely those associated with vectors and deques.

So, the question is what type of iterator should we support?  Perusing through the FORTRAN code shipped with ANSYS, there doesn’t appear to be an inherent limitation within the functions themselves that would preclude random access.  But, my assumption is that the routines were designed to access the data sequentially.  (At least, if I were the author of the functions that is what I would expect clients to do.)  So, I don’t know how well they would be tested regarding jumping around.  Furthermore, disk controllers and disk subsystems are most likely going to buffer the data as it is read, and they very likely perform best if the data access is sequential.  So, even if it is possible to randomly jump around on the result file, I’m not sold on it being a good idea from a performance standpoint.  Lastly, I explicitly want to keep all of the data on the disk, and not buffer large chunks of it into RAM within my library.  So, I settled on expressing my iterators as single pass, forward iterators.  These are fairly restrictive in nature, but I think they will serve the purpose of reading data off of the file quite nicely.

Regarding my choice to not buffer the data, let me pause for a minute and explain why I want to keep the data on the disk. First, in order to buffer the data from disk into RAM you have to read the data off of the disk one time to fill the buffer.  So, that process automatically incurs one disk read.  Therefore, if you only ever need to iterate over the data once, perhaps to load it into a more specialized data structure, buffering it first into an intermediate RAM storage will actually slow you down, and consume more RAM.  The reason for this is that you would first iterate over the data stored on the disk and read it into an intermediate buffer.  Then, you would let your client know the data is ready and they would iterate over your internal buffer to access the data.  They might as well just read the data off the disk themselves! If the end user really wants to buffer the data, that’s totally fine.  They can choose to do that themselves, but they shouldn’t have to pay for it if they don’t need it.

Finally, we are ready to implement the iterators themselves. To do this I rely on a very high quality open source library called Boost. Boost provides a class template called iterator_facade that takes care of supplying almost all of the boilerplate code needed to create a conformant iterator. Using it has proven to be a real time saver. To define the actual iterator, you derive your iterator class from iterator_facade and pass it a few template parameters. One is the category defining the type of iterator you are creating. Here is the definition for the nodal geometry iterator:

ansys-fortran-to-c-cpp-3-04
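In sketch form, with assumed member names, the definition looks roughly like this:

```cpp
#include <cstddef>
#include <boost/iterator/iterator_facade.hpp>

class NodeIterator
    : public boost::iterator_facade<
          NodeIterator,                      // derived class (CRTP)
          NodalCoordinateData const,         // value type
          boost::single_pass_traversal_tag>  // iteration category
{
public:
    NodeIterator() : m_index(0) {}
    NodeIterator(std::size_t index, bool readNow = false) : m_index(index)
    {
        if (readNow) readData();  // begin iterators buffer immediately
    }

private:
    // iterator_facade calls these three functions to implement all of
    // the standard iterator operators.
    friend class boost::iterator_core_access;
    void increment();
    bool equal(NodeIterator const& other) const;
    NodalCoordinateData const& dereference() const;

    void readData();  // pulls one node's record off of the file

    std::size_t         m_index;  // current spot in the iteration space
    NodalCoordinateData m_pData;  // locally buffered data for that spot
};
```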

You can see that there are a few private functions here that actually do all of the work. The function "increment" is responsible for moving the iterator forward one spot. The function "equal" is responsible for determining whether two different iterators are in fact equal. And the function "dereference" is used to return the data associated with the current iteration spot. You will also notice that I locally buffer the single piece of data associated with the current location in the iteration space inside the iterator. This is stored in the pData member variable. I also locally store the current index. Here are the implementations of the functions just mentioned:

ansys-fortran-to-c-cpp-3-05
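A sketch of those functions; the CResRdNode() signature shown is an assumption based on the description that follows:

```cpp
bool NodeIterator::equal(NodeIterator const& other) const
{
    // Equality is defined purely by index; the buffered data is
    // deliberately not compared (important for the end iterator).
    return m_index == other.m_index;
}

void NodeIterator::increment()
{
    ++m_index;   // move forward one spot...
    readData();  // ...and buffer that node's data off of the disk
}

NodalCoordinateData const& NodeIterator::dereference() const
{
    return m_pData;  // hand back the locally buffered record
}

void NodeIterator::readData()
{
    double v[6];  // coordinates and rotations come back in one array
    m_pData.nodeNumber = CResRdNode(static_cast<int>(m_index), v);
    m_pData.x    = v[0]; m_pData.y    = v[1]; m_pData.z    = v[2];
    m_pData.thxy = v[3]; m_pData.thyz = v[4]; m_pData.thzx = v[5];
}
```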

First you can see that testing iterator equality is easy.  All we do is just look to see if the two iterators are pointing to the same index.  If they are, we define them as equal. (Note, an important nuance of this is that we don’t test to see if their buffered data is equal, just their index.  This is important later on.)  Likewise, increment is easy to understand as well.  We just increase the index by one, and then buffer the new data off of disk into our local storage.  Finally, dereference is easy too.  All we do here is just return a reference to the local data store that holds this one node’s data.  The only real work occurs in the readData() function.  Inside that function you will see the actual call to the CResRdNode() function.  We pass that function our current index and it populates an array of 6 doubles with data and returns the actual node number.  After we have that, we simply parse out of that array of 6 doubles the coordinates and rotations, storing them in our local storage.  That’s all there is to it.  A little more work, but not bad.

With these handful of operations, the boost iterator_facade class can build up a fully conformant iterator with all the proper operator overloads, etc… It just works. Now that we have iterators, we need to provide a "begin" and "end" function just like the standard containers do. These functions should return iterators that point to the beginning of our data set and to one past the end of our data set. You may be thinking to yourself, wait a minute, how do we provide an "end" iterator without reading the whole set of nodes? The reality is, we just need to provide an iterator that "equality tests" as equal to the end of our iteration space. What does that mean? It means we need an iterator that passes the "equal" test when compared to another iterator that has walked all the way to the end. Look at the "equal" function above. What do two iterators need to have in common to be considered equal? They need to have the same index. So, the "end" iterator simply has an index equal to one past the end of the iteration space. We know how big our iteration space is because that is one of the pieces of metadata supplied by those ResRd*Begin functions. So, here are our begin/end functions to give us a pair of conformant iterators.

ansys-fortran-to-c-cpp-3-06
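A sketch of that pair, assuming the reader caches the node count from the geometry metadata in a member such as m_maxNodes and that indices are 1-based:

```cpp
// Member functions of the (assumed) reader class. The second constructor
// argument tells the iterator whether to buffer data immediately.
NodeIterator nodes_begin() { return NodeIterator(1, true); }
NodeIterator nodes_end()   { return NodeIterator(m_maxNodes + 1); }  // no read
```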

Notice, that the nodes_end() function creates a NodeIterator and passes it an index that is one past the maximum number of nodes that have coordinate data stored on file.  You will also notice, that it does not have a second Boolean argument associated with it.  I use that second argument to determine if I should immediately read data off of the disk when the iterator is constructed.  For the begin iterator, I need to do that.  For the end iterator, I don’t actually need to cache any data.  In fact, no data for that node is defined on disk.  I just need a sentinel iterator that is one past the iteration space.

So, there you have it. Iterators are defined that implicitly walk over the rst file pulling data off as needed and locally buffering one piece of it. These iterators are standard conformant and thus can be used with any STL algorithm that accepts a single pass, read only, forward iterator. They are efficient in time and storage. There is, though, one last thing that would be nice: a range object, so that we can have our cake and eat it too. That is, so we can write these C++11 range based for loops. Like this:

ansys-fortran-to-c-cpp-3-07
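In sketch form (reader API names assumed):

```cpp
// What we are after: a C++11 range-based for loop over the nodes.
// (Fragment — assumes a constructed reader and <iostream>.)
for (auto const& node : reader.nodes()) {
    std::cout << node.nodeNumber << ": " << node.x << ", "
              << node.y << ", " << node.z << "\n";
}
```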

To do that we need a bit of template magic.  Consider this template and type definition:

ansys-fortran-to-c-cpp-3-08
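A rough equivalent of that machinery, with assumed class names:

```cpp
// Stamps out a type whose begin()/end() forward to a chosen pair of
// member functions on the reader — which is all a range-based for needs.
template <typename Reader, typename Iterator,
          Iterator (Reader::*BeginFn)(), Iterator (Reader::*EndFn)()>
class IteratorRange
{
public:
    explicit IteratorRange(Reader* reader) : m_reader(reader) {}
    Iterator begin() { return (m_reader->*BeginFn)(); }
    Iterator end()   { return (m_reader->*EndFn)(); }
private:
    Reader* m_reader;
};

// One concrete range type per data category (ResultFileReader is an
// assumed name for the RST reader class):
using NodeRangeT =
    IteratorRange<ResultFileReader, NodeIterator,
                  &ResultFileReader::nodes_begin,
                  &ResultFileReader::nodes_end>;
```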

There is a bit of machinery that is going on here, but the concept is simple.  I just want the compiler to stamp out a new type that has a “begin()” and “end()” member function that actually call my “nodes_begin()” and “nodes_end()” functions.  That is what this template will do.  I can also create a type that will call my “elements_begin()” and “elements_end()” function.  Once I have those types, creating range objects suitable for C++11 range based for loops is a snap.  You just make a function like this:

ansys-fortran-to-c-cpp-3-09
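In sketch form, continuing the assumed names from above:

```cpp
// Factory on the reader that hands back a range object bound to *this.
NodeRangeT ResultFileReader::nodes() { return NodeRangeT(this); }
```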


This function creates one of these special range types and passes in a pointer to our RST reader.  When the compiler then sees this code:

ansys-fortran-to-c-cpp-3-10

It sees a range object as the return type of the “nodes()” function.  That range object is compatible with the semantics of range based for loops, and therefore the compiler happily generates code for this construction.

Now, after all of this work, the client of the RST reader library can open a result file, select something of interest, and start looping over the items in that category; all in three lines of code.  There is no need to understand the nuances of the binlib functions.  But best of all, there is no appreciable performance hit for this abstraction.  At the end of the day, we’re not computationally doing anything more than what a raw use of the binlib functions would perform.  But, we’re adding a great deal of type safety, and, in my opinion, readability to the code.  But, then again, I’m only slightly partial to C++.  Your mileage may vary…

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 2)

ansys-fortran-to-c-cpp-1-00

In the last post in this series I illustrated how you can interface C code with FORTRAN code so that it is possible to use the ANSYS, Inc. BinLib routines to read data off of an ANSYS result file within a C or C++ program. If you recall, the routines in BinLib are written in FORTRAN, and my solution was to use the interoperability features of the Intel FORTRAN compiler to create a shim library that sits between my C++ code and the BinLib code. In essence, I replicated all of the functions in the original BinLib library, but gave them a C interface. I call this library CBinLib.

You may remember from the last post that I wanted to add a more C++ friendly interface on top of the CBinLib functions.  In particular, I showed this piece of code, and alluded to an explanation of how I made this happen.  This post covers the first half of what it takes to make the code below possible.

ansys-fortran-to-c-cpp-2-01

What you see in the code above is the use of C++11 range based “for loops” to iterate over the nodes and elements stored on the result file.  To accomplish this we need to create conformant STL style iterators and ranges that iterate over some space.  I’ll describe the creation of those in a subsequent post.  In this post, however, we have to tackle a different problem.  The solution to the problem is hidden behind the “select” function call shown in the code above.  In order to provide some context for the problem, let me first show you the calling sequence for the procedural interface to BinLib.  This is how you would use BinLib if you were programming in FORTRAN or if you were directly using the CBinLib library described in the previous post.  Here is the recommended calling sequence:

ansys-fortran-to-c-cpp-2-02
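In skeleton form (argument lists elided; the shim functions from Part 1 mirror the FORTRAN originals):

```cpp
// Recommended nesting of the Begin/End pairs when reading nodal geometry.
CResRdBegin(/* file name in, file metadata out... */);
  CResRdGeomBegin(/* geometry metadata out... */);
    int numNodes = 0;  // comes back in the geometry metadata
    CResRdNodeBegin();
    for (int i = 1; i <= numNodes; ++i)
        CResRdNode(/* i in, coordinates and rotations out... */);
    CResRdNodeEnd();
  CResRdGeomEnd();
CResRdEnd();
```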

You can see the design pattern clearly in this skeleton code.  You start by calling ResRdBegin, which gives you a bunch of useful data about the contents of the file in general.  Then, if you want to read geometric data, you need to call ResRdGeomBegin, which gives you a little more useful metadata.  After that, if you want to read the nodal coordinate data you call ResRdNodeBegin followed by a loop over nodes calling ResRdNode, which gives you the actual data about individual nodes, and then finally call ResRdNodeEnd.  If at that point you are done with reading geometric data, you then call ResRdGeomEnd.  And, if you are done with the file you call ResRdEnd.

Now, one thing jumps off the page immediately. It looks like it is important to pair up the *Begin and *End calls. In fact, if you peruse the ResRd.F FORTRAN file included with the ANSYS distribution you will see that in many of the *End functions, they release resources that were allocated and set up in the corresponding *Begin function.

So, if you forget to call the appropriate *End, you might leak resources.  And, if you forget to call the appropriate *Begin, things might not be setup properly for you to iterate.  Therefore, in either case, if you fail to call the right function, things are going to go badly for you.

This type of design pattern, where you "construct" some scaffolding, do some work, and then "destruct" the scaffolding, is ripe for abstracting away in a C++ type. In fact, one of the design principles of C++ known as RAII (Resource Acquisition Is Initialization) maps directly to this problem. Imagine that we create a class whose constructor calls the corresponding *Begin function, and whose destructor calls the corresponding *End function. Now, as long as we create an instance of the class before we begin iterating, and then hang onto that instance until we are done iterating, we will properly match up the *Begin, *End calls. All we have to do is create classes for each of these function pairs and then create an instance of that class before we start iterating. As long as that instance is still alive until we are finished iterating, we are good.

Ok, so abstracting the *Begin and *End function pairs away into classes is nice, but how does that actually help us?  You would still have to create an instance of the class, hold onto it while you are iterating, and then destroy it when you are done.  That sounds like more work than just calling the *Begin, *End directly.  Well, at first glance it is, but let’s see if we can use the paradigm more intelligently.  For the rest of this article, I’ll refer to these types of classes as BeginEnd classes, though I call them something different in the code.

First, what we really want is to fire and forget with these classes. That is, we want to eliminate the need to manually manage the lifetime of these BeginEnd classes. If I don't accomplish this, then I've failed to reduce the complexity of the *Begin and *End requirements. So, what I would like to do is to create the appropriate BeginEnd class when I'm ready to extract a certain type of data off of the file, and then later on have it magically delete itself (and thus call the appropriate *End function) at the right time. Now, one more complication. You will notice that these *Begin and *End function pairs are nested. That is, I need to call ResRdGeomBegin before I call ResRdNodeBegin. So, not only do I want a solution that allows me to fire and forget, but I want a solution that manages this nesting.

Whenever you see nesting, you should think of a stack data structure. To increase the nesting level, you push an item onto the stack. To decrease the nesting level, you pop an item off of the stack. So, we're going to maintain a stack of these BeginEnd classes. As an added benefit, when we introduce a container into the design space, we've introduced something that will control object lifetime for us. So, this stack is going to serve two functions: it's going to ensure we call the *Begin's and *End's in the right nested order, and it's going to maintain the BeginEnd object lifetimes for us implicitly.

To show some code, here is the prototype for my pure virtual class that serves as a base class for all of the BeginEnd classes. (In my code, I call these classes FileSection classes.)

ansys-fortran-to-c-cpp-2-03
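A sketch of that interface; the enum values beyond Begin and Geometry are illustrative assumptions:

```cpp
// Enum over the file section types; used instead of RTTI to identify
// which section an object on the stack represents.
enum class ResultFileSectionLevel { Begin, Geometry, Nodes, Elements /*...*/ };

// Pure virtual base class for all of the BeginEnd (FileSection) classes.
// Each derived class calls its matching *End function in its destructor.
class ResultFileSection
{
public:
    virtual ~ResultFileSection() {}
    virtual ResultFileSectionLevel getLevel() const = 0;
};
```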

You can see that it is an interface class by noting the pure virtual function getLevel.  You will also notice that this function returns a ResultFileSectionLevel.  This is just an enum over file section types.  I like to use an enum as opposed to relying on RTTI.  Now, for each BeginEnd pair, I create an appropriate derived class from this base ResultFileSection class.  Within the destructor of each of the derived classes I call the appropriate *End function.  Finally, here is my stack data structure definition:

ansys-fortran-to-c-cpp-2-03p5
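A sketch of those definitions:

```cpp
#include <memory>
#include <stack>

// unique_ptr manages each section's lifetime; the stack enforces the
// nesting order of the Begin/End pairs.
typedef std::unique_ptr<ResultFileSection> SectionPtrT;
typedef std::stack<SectionPtrT>            SectionStackT;
```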


You can see that it is just a std::stack holding objects of the type SectionPtrT. A SectionPtrT is a std::unique_ptr for objects convertible to my base section class. This enables the stack to hold polymorphic data, and the std::unique_ptr will manage the lifetime of the objects appropriately. That is, when we pop an object off of the stack, the std::unique_ptr will make sure to call delete, which will call the destructor. The destructor calls the appropriate *End function, as we've mentioned before.

At this point, we've reduced our problem to managing items on a stack. We're getting closer to making our lives easier, trust me! Let's look at a couple of different functions to show how we pull these pieces together. The first function is called reduceToLevelOrBegin(level). See the code below:

ansys-fortran-to-c-cpp-2-04
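A sketch of that function, assuming the stack lives in a reader member named m_sections:

```cpp
// (Member function of the reader class.) Pop sections until we reach the
// requested level or the topmost ResRdBegin level. Each pop destroys a
// section, which in turn calls the matching *End on the FORTRAN side.
void reduceToLevelOrBegin(ResultFileSectionLevel level)
{
    while (!m_sections.empty() &&
           m_sections.top()->getLevel() != level &&
           m_sections.top()->getLevel() != ResultFileSectionLevel::Begin)
    {
        m_sections.pop();
    }
}
```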

The operation of this function is fairly straightforward, yet it serves an integral role in our BeginEnd management infrastructure.   What this function does is it iteratively pops items off of our section stack until it either reaches the specified level, or it reaches the topmost ResRdBegin level.  Again, through the magic of C++ polymorphism, when an item gets popped off of the stack, eventually its destructor is called and that in turn calls the appropriate *End function.  So, what this function accomplishes is it puts us at a known level in the nested section hierarchy and, while doing so, ensures that any necessary *End functions are called appropriately to free resources on the FORTRAN side of things.  Notice that all of that happens automatically because of the type system in C++.  By popping items off of the stack, I implicitly clean up after myself.

The second function to consider is one of a family of similar functions.  We will look at the function that prepares the result file reader to read element geometry data off of the file.  Here it is:

ansys-fortran-to-c-cpp-2-05
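A sketch of that function; GeometrySection and ElementSection are assumed names for derived section classes whose destructors call ResRdGeomEnd and ResRdElemEnd, and the argument lists are elided:

```cpp
// Unwind to a known level, then push the sections needed to read
// element geometry.
void selectElementGeometry()
{
    reduceToLevelOrBegin(ResultFileSectionLevel::Geometry);
    if (m_sections.top()->getLevel() == ResultFileSectionLevel::Begin)
    {
        CResRdGeomBegin(/* geometry metadata out... */);
        m_sections.push(SectionPtrT(new GeometrySection()));
    }
    CResRdElemBegin(/* element metadata out... */);
    m_sections.push(SectionPtrT(new ElementSection()));
}
```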

You will notice that we start by reducing the nested level to either the "Geometry" level or the "Begin" level. Effectively what this does is unwind any nesting we have done previously. This is the machinery that makes "fire and forget" possible. Whenever we previously requested something to be read off of the result file, we pushed onto the stack a series of objects representing the level needed to read the data in question. Now that we wish to read something else, we unwind any previously existing nested Begin calls before doing so. That is, we clean up after ourselves only when we ask to read a different set of data. By waiting until we ask to read some new set of data to unwind the stack, we implicitly allow the lifetime of our BeginEnd classes to live beyond iteration.

At this point we have the stack in a known state; either it is at the Begin level or the Geometry level.  So, we simply call the appropriate *Begin functions depending on what level we are at, and push the appropriate type of BeginEnd objects onto the stack to record our traversal for later cleanup.  At this point, we are ready to begin iterating.  I’ll describe the process of creating iterators in the next post.  Clearly, there are lots of different select*** functions within my class.  I have chosen to make all of them private and have a single select function that takes an enum descriptor of what to select and some additional information for specifying result data.

One last thing to note with this design.  Closing the result file is easy. All that is required is that we simply fully unwind the stack.  That will ensure all of the appropriate FORTRAN cleanup code is called in the right order.  Here is the close function:

ansys-fortran-to-c-cpp-2-06
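A sketch of close():

```cpp
// Closing the file is just a full unwind: every destructor fires in
// reverse (properly nested) order, calling each *End on the way out.
void close()
{
    while (!m_sections.empty())
        m_sections.pop();
}
```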

As you can see, cleanup is handled automatically.  So, to summarize, we use a stack of polymorphic data to manage the BeginEnd function calls that are necessary in the FORTRAN interface.  By doing this we ensure a level of safety in our class design.  Also, this moves us one step closer to this code:

ansys-fortran-to-c-cpp-2-07

In the next post I will show how I created iterators and range objects to enable clean for loops like the ones shown above.

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 1)

ansys-fortran-to-c-cpp-1-00

Recently, I've encountered the need to read the contents of ANSYS Mechanical result files (e.g. file.rst, file.rth) into a C++ application that I am writing for a client. Obviously, these files are stored in a proprietary binary format owned by ANSYS, Inc. Even if the format were published, it would be daunting to write a parser to handle it. Fortunately, however, ANSYS supplies a series of routines that are stored in a library called BinLib which allow a programmer to access the contents of a result file in a procedural way. That's great! But, the catch is the routines are written in FORTRAN… I've been programming for a long time now, and I'll be honest, I can't quite stomach FORTRAN. Yes, the punch card days were before my time, but seriously, doesn't a compiler have something better to do than gripe about what column I'm typing on… (Editor's note: Matt does not understand the pure elegance of FORTRAN's majestic simplicity. Any and all FORTRAN bashing is the personal opinion of Mr. Sutton and in no way reflects the opinion of PADT as a company or its owners. – EM)

So, the problem shifts from how to read an ANSYS result file to how to interface between C/C++ and FORTRAN. It turns out this is more complicated than it really should be, and that is almost exclusively because of the abomination known as CHARACTER(*) arrays. Ah, FORTRAN… You see, if it weren't for the shoddy character of CHARACTER(*) arrays, the mapping between the basic data types in FORTRAN and C would be virtually one for one. And thus, the mechanics of function calls could fairly easily be made to be identical between the two languages. If the function call semantics were made identical, then sharing code between the two languages would be quite straightforward. Alas, because a CHARACTER array has a kind of implicit length associated with it, the compiler has to do some kind of magic within any function signature that passes one or more of these arrays. Some compilers hide parameters for the length and then tack them on to the end of the function call. Some stuff the hidden parameters right after the CHARACTER array in the call sequence. Some create a kind of structure that combines the length with the actual data into a special type. And then some compilers do who knows what… The point is, there is no convention among FORTRAN compilers for handling the function call semantics, so there is no clean interoperability between C and FORTRAN.

Fortunately, the Intel FORTRAN compiler has created this markup language for FORTRAN that functions as an interoperability framework that enables FORTRAN to speak C and vice versa.  There are some limitations, however, which I won’t go into detail on here.  If you are interested you can read about it in the Intel FORTRAN compiler manual.  What I want to do is highlight an example of what this looks like and then describe how I used it to solve my problem.  First, an example:

ansys-fortran-to-c-cpp-1-01

What you see in this image is code for the first function you would call if you want to read an ANSYS result file.  There are a lot of arguments to this function, but in essence what you do is pass in the file name of the result file you wish to open (Fname), and if everything goes well, this function sends back a whole bunch of data about the contents of the file.  Now, this function represents code that I have written, but it is a mirror image of the ANSYS routine stored in the binlib library.

I've highlighted some aspects of the code that constitute part of the interoperability markup. First of all, you'll notice the markup BIND highlighted in red. This markup for the FORTRAN function tells the compiler that I want it to generate code that can be called from C, and that I want the name of the C function to be "CResRdBegin". This is the first step towards making this function callable from C. Next, you will see highlighted in blue something that looks like a comment. This, however, instructs the compiler to generate a stub in the exports library for this routine if you choose to compile it into a DLL. You won't get a .lib file when compiling this as a .dll without this attribute. Finally, you see the ISO_C_BINDING and the definition of the type of character data we can make interoperable. That is, instead of a CHARACTER(261) data type, we use an array of single CHARACTER(1) data. This more closely matches the layout of C, and allows the FORTRAN compiler to generate compatible code. There is a catch here, though, and that is in the Title parameter. ANSYS, Inc. defines this as an array of CHARACTER(80) data types. Unfortunately, the interoperability stuff from Intel doesn't support arrays of CHARACTER(*) data types. So, we flatten it here into a single string. More on that in a minute.

You will notice too, that there are markups like (c_int), etc… that the compiler uses to explicitly map the FORTRAN data type to a C data type.  This is just so that everything is explicitly called out, and there is no guesswork when it comes to calling the routine.  Now, consider this bit of code:

ansys-fortran-to-c-cpp-1-02

First, I direct your attention to the big red circle. Here you see that all I am doing is collecting up a bunch of arguments and passing them on to the ANSYS, Inc. routine stored in BinLib.lib.  You also should notice the naming convention.  My FORTRAN function is named CResRdBegin, whereas the ANSYS, Inc. function is named ResRdBegin.  I continue this pattern for all of the functions defined in the BinLib library.  So, this function is nothing more than a wrapper around the corresponding binlib routine, but it is annotated and constrained to be interoperable with the C programming language.  Once I compile this function with the FORTRAN compiler, the resulting code will be callable directly from C.

Now, there are a few more items that have to be straightened up.  I direct your attention to the black arrow.  Here, what I am doing is converting the passed in array of CHARACTER(1) data into a CHARACTER(*) data type.  This is because the ANSYS, Inc. version of this function expects that data type.  Also, the ANSYS, Inc. version needs to know the length of the file path string.  This is stored in the variable ncFname.  You can see that I compute this value using some intrinsics available within the compiler by searching for the C NULL character.  (Remember that all C strings are null terminated and the intent is to call this function from C and pass in a C string.)

Finally, after the call to the base function is made, the strings representing the JobName and Title must be converted back to a form compatible with C.  For the jobname, that is a fairly straightforward process.  The only thing to note is how I append the C_NULL_CHAR to the end of the string so that it is a properly terminated C string.

For the Title variable, I have to do something different.  Here I need to take the array of title strings and somehow represent that array as one string.  My choice is to delimit each title string with a newline character in the final output string.  So, there is a nested loop structure to build up the output string appropriately.

After all of this, we have a C function that we can call directly.  Here is a function prototype for this particular function.

ansys-fortran-to-c-cpp-1-03
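As an illustration only (the real shim returns far more metadata than shown, and these parameter names are assumptions), the prototype has roughly this shape:

```cpp
/* Hypothetical shape of the C-callable shim prototype; the actual
   argument list mirrors all of the metadata ResRdBegin returns. */
extern "C" int CResRdBegin(char const* fileName,
                           char*       jobName,
                           char*       title,
                           int*        numNodes,
                           int*        numElements,
                           int*        numResultSets /* , ... */);
```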

So, with this technique in place, it’s just a matter of wrapping the remaining 50 functions in binlib appropriately!  Now, I was pleased with my return to the land of C, but I really wanted more.  The architecture of the binlib routines is quite easy to follow and very well laid out; however, it is very, very procedural for my tastes.  I’m writing my program in C++, so I would really like to hide as much of this procedural stuff as I can.   Let’s say I want to read the nodes and elements off of a result file.  Wouldn’t it be nice if my loops could look like this:

ansys-fortran-to-c-cpp-1-04

That is, I just do the following:

  1. Ask to open a file (First arrow)
  2. Tell the library I want to read nodal geometric data (Second arrow)
  3. Loop over all of the nodes on the RST file using C++11 range based for loops
  4. Repeat for elements

Isn’t that a lot cleaner?  What if we could do it without buffering data and have it compile down to something very close to the original FORTRAN code in size and speed?  Wouldn’t that be nice?  I’ll show you how I approached it in Part 2.

Can I parameterize ANSYS Mechanical material assignments?

So we have known for a long time that we can parameterize material properties in the Engineering Data screen. That works great if we want to adjust the modulus of a material to account for material irregularities. But what if you want to change the entire material of a part from steel to aluminum? Or if you have 5 different types of aluminum to choose from, on several different parts, and you want to run a Design Study to see what combination of materials is best? Well, then you do this. The process involves some extra bodies, some Named Selections, and a single command snippet.
The first thing to do is to add a small body to your model for each different material that you want to swap in and out, and assign the needed material to each. You'll have to add the materials to your Engineering Data prior to this. For my example I added three cubes and put Frictionless Supports on three sides of each cube. This ensures that they are constrained but will not pick up any stresses from thermal loads if you forget and import a thermal profile for "All Bodies".

ansys-material-parameters-01

Next, you make a Named Selection for each cube, named Holder1, Holder2, etc. This allows us to later grab the correct material based on the number of the Holder.

ansys-material-parameters-02

You also make a Named Selection for each group of bodies for which you want to swap the materials. Name these selections MatSwap1, MatSwap2, etc.

ansys-material-parameters-03

The command snippet goes in the Environment Branch. (ex. Static Structural, Steady-State Thermal, etc.)

ansys-material-parameters-04

```apdl
!###############################################################################################################################
! MATSWAP.MAC
! Created by Joe Woodward at PADT, Inc.
! Created on 2/12/2016
!
! Usage: Create Named Selections, Holder1, Holder2, etc., for BODIES using the materials that you want to use.
!        Create Named Selections called MatSwap1, MatSwap2, etc. for the groups of BODIES for which you want to swap materials.
!        Set ARG1 equal to the Holder number that has the material to give to MatSwap1.
!        Set ARG2 equal to the Holder number that has the material to give to MatSwap2.
!        And so on....
!        A value of 0 will not swap materials for that given group.
!
! Use as is. No modification to this command snippet is necessary.
!###############################################################################################################################
/prep7
*CREATE,MATSWAP,MAC
*if,arg1,NE,0,then
  *get,isthere,COMP,holder%arg1%,TYPE
  *get,swapgood,COMP,matswap%arg2%,TYPE
  *if,isthere,eq,2,then
    esel,s,,,holder%arg1%
    *get,newmat,elem,ELNEXT(0),ATTR,MAT
    ! swap the material for the bodies in this MatSwap group
    *if,swapgood,eq,2,then
      esel,s,,,matswap%arg2%
      emodif,all,mat,newmat
    *else
      /COM,The Named Selection MatSwap%arg2% is not set to one or more bodies
    *endif
  *else
    /COM,The Named Selection Holder%arg1% is not set to one or more bodies
  *endif
*endif
*END
MATSWAP,ARG1,1   ! Use the material from the holder given by ARG1 for MatSwap1
MATSWAP,ARG2,2   ! Use the material from the holder given by ARG2 for MatSwap2
MATSWAP,ARG3,3   ! Use the material from the holder given by ARG3 for MatSwap3
MATSWAP,ARG4,4   ! Use the material from the holder given by ARG4 for MatSwap4
MATSWAP,ARG5,5   ! Use the material from the holder given by ARG5 for MatSwap5
MATSWAP,ARG6,6   ! Use the material from the holder given by ARG6 for MatSwap6
MATSWAP,ARG7,7   ! Use the material from the holder given by ARG7 for MatSwap7
MATSWAP,ARG8,8   ! Use the material from the holder given by ARG8 for MatSwap8
MATSWAP,ARG9,9   ! Use the material from the holder given by ARG9 for MatSwap9

alls
/solu
```
Now, each of the arguments in the Command Snippet details corresponds to the 'MatSwap' Named Selection of the same number. ARG1 controls the material assignment for all the bodies in the MatSwap1 Named Selection. The value of the argument is the number of the 'Holder' body with the material that you want to use. A value of zero leaves the material assignment alone and does not change the original material assignment for the bodies of that particular 'MatSwap' Named Selection. There is no limit on the number of 'Holder' bodies and materials that you can use, but there is a limit of nine 'MatSwap' groups that you can modify, because there are only nine ARG variables that you can parameterize in the Command Snippet details.

ansys-material-parameters-05

You can see how the deflection changes for the different material combinations. These three steps, holder bodies, Named Selections, and the command snippet above, will give you design study options that were not available before. Hopefully I’ll have an even simpler way in the future. Stay tuned.

10x with ANSYS 17.0 – Get an Order of Magnitude Impact

The ANSYS 17.0 release improves the impact of driving design with simulation by a factor of 10. This 10x jump spans the physics and delivers real step-change enhancements in how simulation is done and in the improvements that can be realized in products.

ANSYS-R17-Banner
Unless you were disconnected from the simulation world last week, you are aware that ANSYS, Inc. released the latest version of its entire product suite. We wanted to let the initial announcement get out there and spread the word, then come back and talk a little about the details. This blog post is the start of what should be a long line of discussions on how you can realize 10x impact from your investment in ANSYS tools.

As you may have noticed, the theme for this release is 10x. A 10x improvement in speed, efficiency, capability, and impact.  Watch this short video to get an idea of what we are talking about.

Where is the Meat?

We are already seeing this type of improvement here at PADT and with our customers. There is some great stuff in this release that delivers real game-changing efficiency and/or capability. That is fine and dandy, but how is this 10x achieved? There are a lot of little changes and enhancements, but they can mostly be summed up with the following four things:

temperature-on-a-cpu-cooler-bg

Tighter Integration of Multiphysics

Having the best-in-breed simulation tools is worth a lot, and the ANSYS suite leads in almost every physics. But the real power comes when these products can easily work together. At ANSYS 17.0 almost all of the various tools that ANSYS, Inc. has written or acquired can be used together. Multiphysics simulation allows you to remove assumptions and approximations and get a more accurate simulation of your products.

And Multiphysics is about more than doing bi-directional simulation, which ANSYS is very good at. It is about being able to transfer loads, properties, and even geometry between different software tools. It is about being able to look at your full design space across multiple physics and getting more accurate answers in less time.  You can take heat loads generated in ANSYS HFSS and use them in ANSYS Mechanical or ANSYS FLUENT.  You can take the temperatures from ANSYS FLUENT and use them with ANSYS SiWave.  And you can run a full bidirectional fluid-solid model with all the bells and whistles and without the hassles of hooking together other packages.

simplorer-17-1500-modelica-components-sm

To top it all off, the system level modeler ANSYS Simplorer has been improved and integrated further, allowing for true system level Multiphysics virtual prototyping of your entire system. One of the changes we are most excited about is full support for Modelica models – allowing you to stay in Simplorer to model your entire system.

Improved Usability

Speed is always good, and we have come to expect 10%-30% increases in productivity at almost every release. A new feature here, a new module there. This time the developers went a lot further and across the product lines.

clip-regions-with-named-selections-bg

The closer integration of ANSYS SpaceClaim really delivers on a 10x or better speedup for geometry creation and cleanup when compared to other methods. We love SpaceClaim here at PADT and have been using it for some time. Version 17 is not only integrated more tightly, it also introduces scripting that allows users to bring processes they have automated in older and clunkier interfaces into this new, more powerful tool.

One of our other favorites is the new interface in ANSYS Fluent, just making things faster and easier. More capability in the ANSYS Customization Toolkit (ACT) also allows users to get 10x or better improvements in productivity.  And for those who work with electronics, a host of ECAD geometry import tools are making that whole process an order of magnitude faster.

import-ecad-layout-geometry-bg

Industry Specific Workflows

Many of the past releases have been focused on establishing underlying technology, integration, and adding features. This has all paid off and at 17.0 we are starting to see some industry specific workflows that get models done faster and produce more accurate results.

The workflow for semiconductor packaging, the Chip Package System or CPS, is the best example of this. Here is a video showing how it brings together power integrity, signal integrity, and thermal modeling, with integration across tools:

A similar effort was released in turbomachinery, with improvements to advanced blade row simulation, meshing, and HPC performance.

ansys-fluent-hpc-max-cores

Overall Capability Enhancements

A large portion of the improvements at 17.0 are made up of relatively small enhancements that add up to some big benefits. The largest development team in simulation has not been sitting around for a year; they have been hard at work adding and improving functionality. We will cover a lot of these in coming posts, but some of our favorites are:

  1. Improvements to distributed solving in ANSYS Mechanical that show good scaling on dozens of cores
  2. Enhancements to ACT allowing for greater automation in ANSYS Mechanical
  3. ACT is now available to automate your CFD processes
  4. Significant improvements in meshing robustness, accuracy, and speed (if you are using that other CFD package because of meshing, it's time to look at ANSYS Fluent again)
  5. Fracture mechanics
  6. ECAD import in electromagnetic, fluids, and mechanical products.
  7. A new solver in ANSYS Maxwell that solves more than 10x faster for transient runs
  8. ANSYS AIM just keeps gaining more functionality and getting easier to use
  9. A pile of SpaceClaim new and improved features that greatly speed up geometry repair and modification
  10. Improved rigid body dynamics in ANSYS Mechanical

ansys-17-ribbons-UI

More to Come

And a ton more. It may take us all the time between now and ANSYS 18.0 to cover all of the great new stuff here in The Focus, but we will give it a try in the coming weeks and months. ANSYS, Inc. will be hosting some great webinars as well.

If you see something that interests you or something you would like to see that was not there, shoot us an email at support@padtinc.com or call 480.813.4884.

Constitutive Modeling of 3D Printed FDM Parts: Part 2 (Approaches)

In part 1 of this two-part post, I reviewed the challenges in the constitutive modeling of 3D printed parts using the Fused Deposition Modeling (FDM) process. In this second part, I discuss some of the approaches that may be used to enable analyses of FDM parts even in the presence of these challenges. I present them below in increasing order of the detail captured by the model.

  • Conservative Value: The simplest method is to represent the material with an isotropic material model using the most conservative value of the three directions specified in the material datasheet, such as the Stratasys datasheet for ULTEM-9085 shown below, with the lower of the two moduli selected. The conservative value can be chosen based on the desired risk assessment (e.g. the lower modulus if maximum deflection is the key concern). This simplification brings with it a few problems:
    • The material property reported is only good for the specific build parameters, stacking and layer thickness used in the creation of the samples used to collect the data
    • This gives no insight into build orientation or processing conditions that can be improved and as such has limited value to an analyst seeking to use simulation to improve part design and performance
    • Finally, in terms of failure prediction, the conservative value approach disregards inter-layer effects and defects described in the previous blog post and is not recommended to be used for this reason
ULTEM-9085 datasheet from Stratasys – selecting the conservative value is the easiest way to enable preliminary analysis
  • Orthotropic Properties: A significant improvement over an isotropic assumption is to develop a constitutive model with orthotropic properties, which has properties defined in all three directions. Solid mechanicians will recognize the equation below as the compliance matrix representation of Hooke's Law for an orthotropic material, with the strain matrix on the left equal to the compliance matrix multiplied by the stress matrix on the right. The large compliance matrix in the middle is composed of three elastic moduli (E), Poisson's ratios (ν) and shear moduli (G) that need to be determined experimentally, as reproduced below.
Hooke's Law for Orthotropic Materials (Compliance Form)
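For reference, the compliance form referred to above is:

$$
\begin{bmatrix} \varepsilon_{11}\\ \varepsilon_{22}\\ \varepsilon_{33}\\ \gamma_{23}\\ \gamma_{31}\\ \gamma_{12} \end{bmatrix}
=
\begin{bmatrix}
\tfrac{1}{E_1} & -\tfrac{\nu_{21}}{E_2} & -\tfrac{\nu_{31}}{E_3} & 0 & 0 & 0\\
-\tfrac{\nu_{12}}{E_1} & \tfrac{1}{E_2} & -\tfrac{\nu_{32}}{E_3} & 0 & 0 & 0\\
-\tfrac{\nu_{13}}{E_1} & -\tfrac{\nu_{23}}{E_2} & \tfrac{1}{E_3} & 0 & 0 & 0\\
0 & 0 & 0 & \tfrac{1}{G_{23}} & 0 & 0\\
0 & 0 & 0 & 0 & \tfrac{1}{G_{31}} & 0\\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{G_{12}}
\end{bmatrix}
\begin{bmatrix} \sigma_{11}\\ \sigma_{22}\\ \sigma_{33}\\ \tau_{23}\\ \tau_{31}\\ \tau_{12} \end{bmatrix}
$$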

Good agreement between numerical and experimental results can be achieved using orthotropic properties when the structures being modeled are simple rectangular structures under uniaxial loading. Beyond requiring extensive testing to collect this data set (as shown in this 2007 Master's thesis), this approach has a few limitations. Like the isotropic assumption, it is only valid for the specific set of build parameters that were used to manufacture the test samples from which the data was originally obtained. Additionally, since the model has no explicit sense of layers and inter-layer effects, it is unlikely to perform well at stresses leading up to failure, especially for complex loading conditions. This was shown in a 2010 paper that demonstrated these limitations in the analysis of a bracket that was itself built in three different orientations. The authors concluded, however, that there was good agreement at low loads and deflections for all build directions, and that the margin of error as load increased varied across the three build orientations.

An FDM bracket modeled with Orthotropic properties compared to experimentally observed results
  • Laminar Composite Theory: The FDM process results in structures that are very similar to laminar composites, with a stack of plies consisting of individual fibers/filaments laid down next to each other. The main difference is the absence of a matrix binder; in the FDM process, the filaments fuse with neighboring filaments to form a meso-structure. As shown in this 2014 project report, a laminar approach allows one to model different ply raster angles, which is not possible with the orthotropic approach. This is exciting because it could expand insight into optimizing raster angles to improve part performance, and in theory reduce the experimental data sets needed to develop models. At this time, however, there is very limited data validating predicted values against experiments. ANSYS and other software designed for composite modeling (see the image below from ANSYS Composite PrepPost) can be used as starting points to explore this space.
Schematic of a laminate build-up as analyzed in ANSYS Composite PrepPost
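To sketch why the laminar view buys raster-angle flexibility: classical lamination theory (textbook material, not specific to the FDM reports cited above) rotates each ply's reduced stiffness by its raster angle before stacking, so the laminate extensional stiffness becomes

$$ A_{ij} = \sum_{k=1}^{N} \bar{Q}_{ij}^{(k)} \left( z_k - z_{k-1} \right) $$

where \(\bar{Q}^{(k)}\) is the k-th ply's in-plane stiffness transformed by its raster angle and \(z_k\) are the through-thickness ply boundary coordinates. Changing a raster angle only changes the transformation, not the underlying ply data, which is where the hoped-for reduction in test effort comes from.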
  • Hybrid Tool-path Composite Representation: One limitation of the above approach is that it does not model any of the details within the layer. As we saw in part 1 of this post, each layer is composed of tool-paths that leave behind voids and curvature errors that could be significant in simulation, particularly in failure modeling. Perhaps the most promising approach to modeling FDM parts is to explicitly link tool-path information in the build software to the analysis software. Coupling this with existing composite simulation capability is another idea that could help reduce computational expense. I have captured this in the schematic below, which shows one possible way it could be done, using ANSYS Composite PrepPost as an example platform.
Potential approach to blending toolpath information with composite analysis software

Discussion: At the present moment, the orthotropic approach is perhaps the most appropriate method for modeling FDM parts, since it allows some level of build-orientation optimization, as well as meaningful design comparisons and comparison to the bulk properties one may expect from alternative technologies such as injection molding. However, as the use of FDM in end-use parts increases, the demands on simulation are likely to increase as well; one of those demands will be representing these materials more accurately than as homogeneous continuum solids.

Activating Hyperdrive in ANSYS Simulations

punch-it-chewie-ansys
With PADT and the rest of the world getting ready to pile into dark rooms to watch a saga that we've been waiting for 10 years to see, I figured I'd take this opportunity to address a common, yet simple, question that we get:

“How do I turn on HPC to use multiple cores when running an analysis?”

For those that don't know, ANSYS, Inc. devotes significant resources to making its various solvers utilize multiple CPU cores more efficiently with every release. By default, depending on the solver, you can use between 1 and 2 cores without needing HPC licenses.

With the addition of HPC licenses, users can unlock hyperdrive in ANSYS. If you are equipped with HPC licenses, it's just a matter of knowing where to look in each of the ANSYS products to activate it.

ANSYS Mechanical

Whether you are performing a structural, thermal, or explicit simulation, the process to activate multiple cores is identical.

  1. Go to Tools > Solve Process Settings
  2. The Solve Process Settings Window will pop up
  3. Click on Advanced to open up the Advanced Settings window
  4. You will see an option for Max number of utilized cores
  5. Simply change the value to your desired core count
  6. You will see below an option to allow for GPU acceleration (if your computer is equipped with the appropriate hardware)
  7. Select the GPU type from the dropdown and choose how many GPUs you want to utilize
  8. Click OK and close
hyperdrive-ansys-f01
Go to the proper settings dialog
hyperdrive-ansys-f02
Choose Advanced…
hyperdrive-ansys-f03
Specify the number of cores to use

Distributed Solve in ANSYS Mechanical

One other thing you’ll notice in the Advanced Settings Window is the option to turn “Distributed” On or Off using the checkbox.

In many cases a Distributed solution can be significantly faster than the alternative (Shared Memory Parallel). It does require that MPI be configured properly (PADT can help guide you through those steps). Please see this article by Eric Miller that covers GPU usage and Distributed solve in ANSYS Mechanical.

hyperdrive-ansys-f04
Turn on Distributed Solve if MPI is Configured
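For batch users, here is a minimal MAPDL sketch of the same choices (the executable name, file names, and core count are placeholders for your own setup):

```
! Shared-memory parallel (SMP) can be requested from inside MAPDL:
/CONFIG,NPROC,4    ! use 4 cores; cores beyond the included ones consume HPC licenses

! Distributed ANSYS must instead be selected on the command line at launch, e.g.:
!   ansys170 -b -dis -np 4 -i input.dat -o output.out
```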

ANSYS Fluent

Whether launching Fluent through Workbench or standalone you will first see the Fluent Launcher window.  It has several options regarding the project.

  1. Under the Processing Options you will see 2 options: Serial and Parallel
  2. Simply select Parallel and you will see 2 new dropdowns
  3. The first dropdown lets you select the number of processes (equal to the number of cores) to use, not only during Fluent's calculations but also during pre-processing
Default Settings in Fluent Launch Window
Options When Parallel is Picked

ANSYS CFX

For CFX simulations through Workbench, the option to activate HPC exists in the CFX Solver Manager.

  1. Open the CFX Solver Manager
  2. You will see a dropdown for Run Mode
  3. Rather than the default “Serial” option choose from one of the available “Parallel” options.
  4. For example, if running on the same machine select Platform MPI Local Parallel
  5. Once selected, the section below will show the name of the computer and a column called Partitions
  6. Simply type the desired number of cores under the Partitions column and then either click “Save Settings” or “Start Run”
Change the Run Mode
Specify number of cores for each machine

ANSYS Electronics Desktop/HFSS/Maxwell

Regardless of which electromagnetic solver you are using, HFSS or Maxwell, you can change the number of cores by going to the HPC and Analysis Options.

  1. Go to Tools > Options > HPC and Analysis Options.
  2. In the window that pops up you will see a summary of the HPC configuration
  3. Click on Edit and you will see a column for Tasks and a column for Cores.
  4. Tasks relate to job distribution utilizing Optimetrics and DSO licenses
  5. To simply increase the number of cores you want to run the simulation on, change the cores column to your desired value
  6. Click OK on all windows
hyperdrive-ansys-f09
Select the proper settings dialog
hyperdrive-ansys-f10
Select Edit to change the configuration
Specify Tasks and Cores

There you have it.  That’s how easy it is to turn on Hyperdrive in the flagship ANSYS products to advance your simulations and get to your endpoint faster than before.

If you have any questions or would like to discuss the possibility of upgrading your ship with Hyperdrive (HPC capabilities) please feel free to call us at 1-800-293-PADT or email us at support@padtinc.com.

PID Thermostat Boundary Condition ACT Extension for ANSYS Mechanical

ANSYS-ACT-PID-Thermostat
PADT is pleased to announce that we have uploaded a new ACT Extension to the ANSYS ACT App Store. This new extension implements a PID-based thermostat boundary condition that can be used within a transient thermal simulation. This boundary condition is quite general purpose in nature. For example, it can be set up to use any combination of (P)roportional, (I)ntegral, or (D)erivative control. It supports locally monitoring the instantaneous temperature of any piece of geometry within the model. For a piece of geometry that is associated with more than one node, such as an edge or a face, it uses a novel averaging scheme implemented using constraint equations so that the control law references a single temperature value regardless of the reference geometry.
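As a refresher, the generic textbook PID control law (not necessarily the exact implementation inside the extension) computes the applied heat load q(t) from the error between the set-point and monitored temperatures:

$$ e(t) = T_{\mathrm{set}}(t) - T_{\mathrm{mon}}(t), \qquad q(t) = K_p\,e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt} $$

Setting any of the three gains to zero recovers P, PI, PD, and so on, which is what "any combination" means above.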

ANSYS-ACT-PID-Thermostat-img1

The set-point value for the controller can be specified in one of two ways.  First, it can be specified as a simple table that is a function of time.  In this scenario, the PID ACT Extension will attempt to inject or remove energy from some location on the model such that a potentially different location of the model tracks the tabular values.   Alternatively, the PID thermostat boundary condition can be set up to “follow” the temperature value of a portion of the model.  This location again can be a vertex, edge or face and the ACT extension uses the same averaging scheme mentioned above for situations in which more than one node is associated with the reference geometry.  Finally, an offset value can be specified so that the set point temperature tracks a given location in the model with some nonzero offset.

ANSYS-ACT-PID-Thermostat-img2

For thermal models that require some notion of control, the PID thermostat element can be used effectively. Please do note, however, that the extension works best with the SI unit system (m-kg-s).

A Guide to Crawling, Walking, and Running with ANSYS Structural Analysis

crawl-walk-run
At PADT, we apply a Crawl, Walk, Run philosophy to just about everything we do. Start with the basics, build knowledge and capability on that, and then continue to develop your skills throughout your career. Unfortunately, all too often I run across some poor new grad, two weeks out of school, contending with a problem that's more befitting someone with about a decade of experience under his or her belt.

Now, the point of this article isn’t to call anyone out. Rather, I sincerely hope that managers and supervisors see this and use it as a guideline in assigning tasks to their direct reports. Note that the recommendations are relative and general. Some people may be quite competent in the “run” categories after just a few months of usage and study while others may have been using the software for a decade and still have trouble figuring out how to even start it. It’s also possible that, for certain projects, the “crawl” categories may actually end up being more difficult to contend with than the “run” categories.

With those caveats in mind, here is our list of recommendations for Crawling, Walking, and Running with ANSYS. Note that these apply to structural analysis. I fully plan to hit up my colleagues for similar blog posts about heat transfer, CFD, and electrical simulation.

Crawl
simple-stress1

  • Linear static
  • Basic modal
  • Eigenvalue (linear) buckling, but don’t forget to apply a knock-down factor

Walk
struct-techtip6-contacts-between-bolts

  • Nonlinearities
    • Large Deflection
    • Rate-independent plasticity
    • Nonlinear contact (frictionless and frictional)
  • Dynamics
    • Modal with linear perturbation
    • Spectrum analyses (running the analysis is easy; understanding what you’re doing and interpreting results correctly is hard)
      • Shock/Single point response
      • Random Vibration (PSD)
    • Harmonic analysis
  • Fatigue

Run
vibration-pumping platforms

  • Nonlinearities
    • Advanced element options
    • Hyperelasticity
    • Rate-dependent phenomena
      • Creep
      • Viscoelasticity
      • Viscoplasticity
    • Other advanced material models such as shape memory alloy and gaskets
    • Element birth and death
  • Dynamics
    • Transient dynamics (implicit)
    • Explicit dynamics (e.g. LS-Dyna and Autodyn)
    • Rotordynamics
  • Fracture and crack growth

So what’s the best, quickest way to move from crawling to walking or walking to running? Invest in general or consultative (or even better, both) ANSYS training with PADT. We’ll help you get to where you need to be.

Be a Pinball Wizard with Contact Regions in ANSYS Mechanical

pinball-wizard-pinball-machine-ANSYS-3
A pinball machine based on The Who’s Tommy

I had a very cool music teacher back in 6th or 7th grade in the 1970's in upstate New York. Today we'd probably say she was eclectic. In that class we listened to and discussed fairly recent songs in addition to general music studies. Two songs I remember in particular are 'Hurdy Gurdy Man' by Donovan and 'Pinball Wizard' by The Who. If you're not familiar with Pinball Wizard, it's from The Who's rock opera Tommy, and is about a deaf, mute, blind young man who happens to be adept at the game of pinball. Yes, he is a Pinball Wizard. This song popped into my head recently when we had some customer questions here at PADT regarding the pinball region concept as it pertains to ANSYS contact regions.

I’m not sure if the developers at ANSYS, Inc. had this song in mind when they came up with the nomenclature for the 17X (latest and greatest) series of contact elements in ANSYS, but regardless, you too can be a pinball wizard when it comes to understanding contact elements in ANSYS Mechanical and MAPDL.

Fans of this blog may remember one of my prior posts on contact regions in ANSYS that also had a musical theme (bringing to mind Peter Gabriel’s song “I Have the Touch”):

In this current entry we will go more in depth on the pinball region, also known as the pinball radius. The pinball region defines a distance around each contact element within which ANSYS checks for interaction with the target elements of a given contact pair. Outside the pinball region, ANSYS doesn't bother to check whether the elements on opposite sides of the contact region are touching. The program assumes they are far away from each other and, for the most part, doesn't perform any additional calculations.

Here is an illustration.  The gray elements on the left represent the contact body and the red elements on the right represent the target body (assuming asymmetric contact).  Target elements outside the pinball radius will not be checked for contact.  The contact and target elements actually ‘coat’ the underlying solid elements so they are shown as dashed lines slightly offset from the solid elements for the sake of visibility.  Here the pinball radius is displayed as a dashed blue circle, centered on the contact elements, with a radius of 2X the depth of the underlying solid elements.

pinball_radius_contact_illustration

So, outside the pinball region, we know ANSYS doesn't check whether the contact and target are actually in contact. It just assumes they are far away and not in contact. What happens if the contact and target are inside the pinball region? The answer to that question depends on which contact type we have selected.

For frictionless contact (aka standard contact in MAPDL) and frictional contact, the program will then check to see if the contact and target are truly touching.  If they are touching, the program will check to see if they are sliding or possibly separating.  If they are touching and penetrating, the program will check to see if the penetration exceeds the allowable amount and will make adjustments, etc.  In other words, for frictionless and frictional contact, if the contact and target elements are close enough to be inside the pinball region, the program will make all sorts of checks and adjustments to make sure the contact behavior is adequately captured.

The other scenario is for bonded and no separation contact.  With these contact types, the program’s behavior when the contact and target elements are within the pinball region is different.  For these types, as long as the contact and target are close enough to be within the pinball region, the program considers the contact region to be closed.  So, for bonded and no separation, your contact and target elements do not need to be line on line touching in order for contact to be recognized.  The contact and target pairs just need to be inside the pinball region.  This can be good, in that it allows for some ‘slop’ in the geometry to be automatically ignored, but it also can have a downside if we have a curved surface touching a flat surface for example.  In that case, more of the curved surface may be considered in contact than would be the case if the pinball region was smaller.  This effect is shown in the image below.  Reducing the pinball radius to an appropriate smaller amount would be the fix for eliminating this ‘overconstraint’ if desired.

pinball_radius_bonded_noseparation
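For MAPDL users, the contact behavior discussed above is selected per pair with KEYOPT(12) on the CONTA17x elements. A minimal sketch (the element type ID of 3 is just an assumed value for illustration):

```
! Contact behavior via KEYOPT(12) on the contact element type (type ID 3 assumed):
!   0 = standard (frictionless, or frictional when a friction coefficient is defined)
!   2 = no separation (sliding permitted)
!   3 = bonded
KEYOPT,3,12,3    ! make this pair bonded
```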

There is a default value for the pinball region/radius.  It can be changed if needed.  We’ll add more details in a moment.  First, why is it called the “pinball” region?  I like to think it’s because when it’s visualized in the Mechanical window, it looks like a blue pinball from an actual pinball arcade game, but I’ll admit that the ANSYS terminology may predate the Mechanical interface.  The image below shows what I mean.  The blue balls are the different pinball radii for different contact regions.

pinball_radius_visualization


Note that you don’t see the pinball region displayed as shown in the above image unless you have manually changed the pinball size in Mechanical.  The pinball region can be changed in the Mechanical window in the details view for each contact region by changing Pinball Region from Program Controlled to Radius, like this:

pinball_radius_change

In MAPDL, the pinball radius value can be changed by defining or editing the real constant labeled PINB.
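A minimal sketch of that edit (the real constant set number 3 is an assumed pair ID; for the CONTA17x elements, PINB is stored as the sixth real constant):

```
! Modify the pinball radius for the contact pair using real constant set 3.
! A negative value is an absolute radius in model length units; a positive
! value is generally interpreted as a factor on the underlying element depth
! (check the Help for your release).
RMODIF,3,6,-0.5
```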

By now you're probably wondering: what is the default value for the pinball radius? The good news is that it is intelligently decided by the program for each contact region. The default is always a scale factor on the depth of the underlying elements of each contact region. In the first pinball region image shown near the beginning of this article, the example plot shows the pinball region/radius as two times the depth of the underlying elements.

The table below summarizes the default pinball radius values for most circumstances for 2D and 3D solid element models.  More detailed information is available in the ANSYS Help.

Default Pinball Radius Values

Contact Type | Large Deflection Off (Flexible-Flexible) | Large Deflection On (Flexible-Flexible)
Frictionless and Frictional | 1 * Underlying Element Depth | 2 * Underlying Element Depth
Bonded and No Separation | 0.25 * Underlying Element Depth | 0.5 * Underlying Element Depth

Note: for rigid-flexible contact, the default values are typically doubled.

Summing it all up:  we have seen how the default values are calculated and also how to change them.  We have seen what they look like as blue balls in a plot of contact regions in Mechanical if the pinball radius has been explicitly defined.  We also discussed what the pinball radius does and how it’s different for frictionless/frictional contact and bonded/no separation contact.

You should be well on your way to becoming a pinball wizard at this point.

Does performing simulation in ANSYS make you think of certain songs? Are there songs you like to listen to while working away on your simulations, in addition to The Who's "Pinball Wizard" and Peter Gabriel's "I Have the Touch"? If so, we'd love to hear about your song preferences in the comments below.

7 Reasons why ANSYS AIM Will Change the Way Simulation is Done

ANSYS-AIM-Icon1
When ANSYS, Inc. released their ANSYS AIM product they didn't just introduce a better way to do simulation, they introduced a tool that will change the way we all do simulation. A bold statement, but after PADT has used the tool here and worked with customers who are using it, we feel confident that this is a software package that will drive that level of change. It enables the type of change that will drive down schedule time and cost for product development, and allow companies to use simulation more effectively to drive their product development toward better performance and robustness.

It’s Time for a Productivity Increase

AIM-7-old-model
If you have been doing simulation as long as I have (29 years for me) you have heard it before. And sometimes it was true. GUIs on solvers were the first big change I saw. Then came robust 3D tetrahedral meshing, which we coasted on for a while until fully associative and parametric CAD connections made another giant step forward in productivity and simulation accuracy. Then, more recently, robust CFD meshing of dirty geometry. And of course HPC improvements on the solver side.

That was then. Right now everyone is happily working away in their tool of choice, simulating their physics of choice. ANSYS Mechanical for structural, ANSYS Fluent for fluids, and maybe ANSYS HFSS for electromagnetics. Insert your tool of choice; it doesn't really matter. They are all best-in-breed advanced tools for doing a certain type of physical simulation. Most users are actually pretty happy. But if you talk to their managers or methods engineers, you find less happiness. Why? They want more engineers to have access to these great tools, and they also want people to be working together more with less specialization.

Putting it all Together in One Place

AIM-7-valve2-multiphysics
ANSYS AIM is, among many other things, an answer to this need. Instead of one new way of doing something or a new breakthrough feature, it is a product that puts everything together to deliver a step change in productivity. It is built on top of these same world-class, best-in-breed solvers. But from the ground up it is an environment that enables productivity, processes, ease-of-use, collaboration, and automation. All in one tool, with one interface.

Changing the Way Simulation is Done

Before we list where we see things changing, let’s repeat that list of what AIM brings to the table, because those key deliverables in the software are what are driving the change:

AIM-7-pipe-setup
  • Improved Productivity
  • Standardized Processes
  • True Ease-of-Use
  • Inherent Collaboration
  • Intuitive Automation
  • Single Interface

Each of these on its own would be good, but together they allow a fundamental shift in how a simulation tool can be used. And here are the seven ways we predict you will be doing things differently.

1) Standardized processes across an organization

The workflow in ANSYS AIM is process oriented from the beginning, which is a key step in standardizing processes.  This is amplified by tools that allow users, not just programmers, to create templates, capturing the preferred steps for a given type of simulation.  Others have tried this in the past, but the workflows were either too rigid or not able to capture complex simulations.  This experience was used to make sure the same thing does not happen in ANSYS AIM.

2) No more “good enough” simulation done by Design Engineers

Ease-of-use and training issues have kept robust simulation tools out of the hands of design engineers. Programs aimed at that group of users have usually been so watered down, or lack so much functionality, that they simply deliver a quick answer. The math is the same, but it is not as detailed or accurate. ANSYS AIM solves this by giving design engineers a tool they can pick up and use, but one that also gives them access to the most capable solvers on the market.

3) Multiphysics by one user

Multiphysics simulation often involves the use of multiple simulation tools, say a CFD solver and a thermal solver. The problem is that very few users have the time to learn two or more tools, and to learn how to hook them together. So some Multiphysics is done with several experts working together, some in tools that do multiple physics but none of them well, and some by a rare expert who has multi-tool expertise. Because ANSYS AIM is a Multiphysics tool from the ground up, built on high-power physics solvers, those limitations go away and almost any engineer can now do Multiphysics simulation.

AIM-7-study
4) True collaboration

The issues discussed above, with Multiphysics requiring multiple users in most tools, also inhibit true collaboration. Using one user's model in one tool is difficult when another user has a different tool. Collaboration is also difficult when so much differs between processes. The workflow-driven approach in ANSYS AIM lends itself to collaboration, and the consistent look-and-feel makes it happen.

5) Enables use when you need it

This is a huge one. Many engineers do not use simulation tools because they are occasional users. They feel that the time required to re-familiarize themselves with their tools is longer than it takes to do the simulation. The combination of features unique to ANSYS AIM deals with this in an effective manner, making accurate simulation something a user can pick up when they need it, use to drive their design, and then move on to the next task.

6) Stepping away from CAD embedded Simulation

The growth of CAD embedded simulation tools, programs that are built into a CAD product, has been driven by the need to tightly integrate with geometry and provide ease of use for the users who only occasionally need to do simulation. Although the geometry integration was solved years ago, the ease-of-use and process control needed is only now becoming available in a dedicated simulation tool with ANSYS AIM.

7) A Return to home-grown automation for simulation

AIM-7-script
If you have been doing simulation since the 80's like I have, you probably remember a day when every company had scripts and tools they used to automate their simulation process. They were extremely powerful and delivered huge productivity gains. But as tools got more powerful and user interfaces became more mature, the ability to create your own automation tools faded; you needed to be a programmer. ANSYS AIM brings this back with recording and scripting for every feature in the tool, using a common and easy-to-use language: Python.

How does this Impact Me and/or my Company?

It is kind of fun to play prognosticator and try to figure out how a revolutionary advance in our industry is going to impact that industry. But in the end it really does not matter unless the changes improve the product development process. We feel pretty strongly that they do. Because of the changes in how simulation is done, brought about by ANSYS AIM, we feel that more companies will use simulation to drive their product development, more users within a company will have access to those tools, and the impact of simulation will be greater.

AIM-f1_car_pressure_ui

To fully grasp the impact you need to step back and ponder why you do simulation.  The fast cars and crazy parties are just gravy. The core reason is to quickly and effectively test your designs.  By using virtual testing, you can explore how your product behaves early in the design process and answer those questions that always come up.  The sooner, faster, and more accurately you answer those questions, the lower the cost of your product development and the better your final product.

Along comes a product like ANSYS AIM.  It is designed by the largest simulation software company in the world to give the users of today and tomorrow access to the power they need. It enables that “sooner, faster, and more accurately” by allowing us to change, for the better, the way we do virtual testing.

The best way to see this for yourself is to explore ANSYS AIM.  Sign up for our AIM Resource Kit here or contact us and we will be more than happy to show it to you.

AIM_City_CFD

To Use Large Deflection or Not, That Is the Question

Hamlet-Large-Deflection
It seems like I've been explaining large deflection effects a lot recently. Between co-teaching an engineering class at nearby Arizona State University and also having a couple of customer issues regarding the concept, large deflection in structural analyses has been on my mind.

Before I explain any further, the thing you should note if you are an ANSYS Mechanical simulation user is this: If you don’t know if you need large deflection or not, you should turn it on. There is really no way to know for certain if it’s needed or not unless you perform a comparison study with and without it.

So, what are large deflection effects? In simple terms the inclusion of large deflection means that ANSYS accounts for changes in stiffness due to changes in shape of the parts you are simulating. The classic case to consider is the loaded fishing rod.

In its undeflected state, the fishing rod is very flexible at the tip. With a heavy fish on the end of the line, the rod deflects downward and it is then easy to observe that the stiffness of the rod has increased. In other words, when the rod is lightly loaded, a small amount of force will cause a certain downward deflection at the top. When the rod is heavily loaded however, a much larger amount of force will be needed to cause the tip to deflect downward by the same amount.

This change in the force amount required to achieve the same change in displacement implies that we do not have a linear relationship between force and displacement.
Consider Hooke’s law, also known as the spring equation:

F = Kx

Where F is the force applied, K is the stiffness of the structure, and x is the deflection. In a linear system, doubling the force results in double the displacement. In our fishing rod case, though, we have a nonlinear system. We might need to triple the force to double the displacement, depending on how much the rod is loaded relative to its size and other properties, and to double the displacement again we might need four times that force (those numbers are just illustrative).
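Another way to say this: in a nonlinear system the stiffness becomes the local slope of the force-deflection curve, so Hooke's law only holds incrementally:

$$ K_T(x) = \frac{dF}{dx}, \qquad \Delta F \approx K_T(x)\,\Delta x $$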

Ted-rod-fishing1

So, in the case of the fishing rod, Hooke’s law in a linear form does not apply. In order to capture the nonlinear effect we need a way for the stiffness to change as the shape of the rod changes. In our finite element solution in ANSYS, it means that we want to recalculate the stiffness as the structure deflects.

This recalculation of the stiffness as the structure deflects is activated by turning on large deflection effects. Without large deflection turned on, we are constrained to using the linear equation, and no matter how much the structure deflects we are still using the original stiffness.

So, why not just have large deflection on by default and use it all the time? My understanding is that it is off by default because large deflection adds computational expense. It's the same as with a lot of advanced options, such as frictionless or frictional contact vs. the default bonded (simpler) behavior. In other words, turning on large deflection triggers a nonlinear solution, meaning multiple passes through the solver using the Newton-Raphson method instead of the single pass needed for a linear problem.
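For reference, each Newton-Raphson pass solves a linearized system built from the current tangent stiffness, iterating until the residual (the imbalance between applied and internal forces) is acceptably small:

$$ K_T(u_i)\,\Delta u_i = F^{a} - F^{nr}(u_i), \qquad u_{i+1} = u_i + \Delta u_i $$

Here \(F^{a}\) is the applied load vector and \(F^{nr}\) is the internal (restoring) force computed from the current deflected shape.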

Here is an example of a simplified fishing rod. The image shows the undeflected rod (top), which is held fixed on the left side and has a downward force load applied on the right end. The bottom image shows the final deflected shape, with large deflection effects included. The deflection at the tip in this case is 34 inches.

Undeformed_deformed_rod

In comparison, running the same load with large deflection turned off resulted in a tip deflection of 40 inches. Thus, the calculated tip deflection is 15% less with large deflection turned on, since we are now accounting for the change in stiffness with the change in shape as the rod deflects.

Below we have a force (horizontal axis) vs. deflection (vertical axis) plot for a nonlinear simulation of a fishing rod with large deflection turned on. The fact that the curve is not a straight line confirms that this is a nonlinear problem, with the stiffness (slope of the curve) not constant. We can also see that as the force gets higher, the slope of the curve is more horizontal, meaning that more force is needed for each incremental amount of displacement. This matches our observations of the fishing rod behavior.

Force_vs_Deflection

So, getting back to our original point, it’s often the case that we don’t know if we need to include large deflection effects or not. When in doubt, run cases with and without. If you don’t see a change in your key results, you can probably do without large deflection.

Here is an example using an idealized compressor vane. In this case, the deflections and stresses with and without large deflection effects are nearly the same (the stress difference is about 0.2%).

Large Deflection On:blade_large_defl

Small Deflection:blade_small_defl

Bottom line: when in doubt, try it out, with and without large deflection. In ANSYS Mechanical, Large Deflection effects are turned on or off in the details of the Analysis Settings branch.
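For MAPDL users, the equivalent of that Analysis Settings toggle is a single solution command; here is a minimal sketch (the substep values are purely illustrative):

```
/SOLU
NLGEOM,ON          ! include large deflection (geometric nonlinearity) effects
AUTOTS,ON          ! automatic time stepping helps the Newton-Raphson iterations
NSUBST,10,100,5    ! initial, maximum, minimum substeps (illustrative values)
SOLVE
```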

It's worth noting that turning on large deflection in ANSYS actually activates four different behaviors collectively known as large deflection: large rotation, large strain, stress stiffening, and spin softening. All of these involve a change in stiffness due to deformation in one way or another.

If you like this kind of info, or find it useful, we cover topics like this in our training classes. For more info, check out our training pages at http://www.padtinc.com/support/software/training.html.

Donny Don’t – Thin Sweep Meshing

It’s not a series of articles until there’s at least 3, so here’s the second article in my series of ‘what not to do’ in ANSYS…

Just in case you're not familiar with thin sweep meshing, here's an older article that goes over the basics. Long story short, the thin sweep mesher allows you to use multiple source faces to generate a hex mesh. It does this by essentially 'destroying' the backside topology. Here's a dummy board with imprints on the top and bottom surface:

image

If I use the automatic thin sweep mesher, I let the mesher pick which topology to use as the source mesh, and which topology to ‘destroy’.  A picture might make this easier to understand…

image

As you can see, the bottom (right picture) topology now lines up with the mesh, but when I look at the top (left picture) the topology does not line up with the mesh.  If I want to apply boundary conditions to the top of the board (left picture), I will get some very odd behavior:

image

I've fixed three sides of the board (why 3? Because I meant to do 4 but missed one, and was too lazy to go back and re-run the analysis; that explains some of the deflection plots that follow…sorry, that's what you get in a free publication) and then applied a pressure to all of those faces. When I look at the results:

image

Only one spot on the surface has been loaded.  If you go back to the mesh-with-lines picture, you’ll see that there is only a single element face fully contained in the outline of the red lines.  That is the face that gets loaded.  Looking at the input deck, we can see that the only surface effect element (how pressure loads are applied to the underlying solid) is on the one fully-contained element face:

image

If I go back and change my thin sweep to use the top surface topology, things make sense:

image

The top left image shows the thin sweep source definition.  Top right shows the new mesh where the top topology is kept.  Bottom left shows the same boundary conditions.  Bottom right shows the deformation contour.

The same problem occurs if you have contact between the top and bottom of a thin-meshed part.  I’ll switch the model above to a modal analysis and include parts on the top and bottom, with contact regions already imprinted.

image

I'll leave the thin sweep meshing control in place and fix three sides of the board (see previous laziness disclosure). I hit solve and nothing happens:

image

Ah, the dreaded empty contact message. I'll set the variable that lets the solve continue, just to see what's going on. Pro Tip: if you don't want to use that variable, write out the input deck; it will stop writing once it gets to the empty contact set. Then go back and correlate the contact pair ID with the naming convention in the Connections branch.
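If you prefer to interrogate the pairs directly in MAPDL, a quick sketch:

```
/SOLU
CNCHECK,DETAIL    ! report the initial status of every contact pair,
                  ! which makes orphaned/empty pairs easy to spot before solving
```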

The model solves and I get a bunch of 0-Hz (or near-0) modes, indicating rigid body motion:

image

Looking at some of those modes, I can see that the components on one side of my board are not connected:

image

The missing contacts are on the bottom of the board, where there are three surface mounted components (makes sense…I get 18 rigid body modes, or 6 modes per body).  The first ‘correct’ mode is in the bottom right image above, where it’s a flapping motion of a top-mounted component.

So…why don’t we get any contact defined on the bottom surface?  It’s because of the thin meshing.  The faces that were used to define the contact pair were ‘destroyed’ by the meshing:

image

Great…so what's the take-away from this? Thin sweep meshing is great, but if you need to apply loads, apply constraints, define contact…basically interact with ANYTHING on both sides of the part, you may want to use a different meshing technique. You've got several options…

  1. Use the tet mesher.  Hey, 2001 called and wants its model size limits back.  The HPC capabilities of ANSYS make it pretty painless to create larger models and use additional cores and GPUs (if you have a solve-capable GPU).  I used to be worried if my model size was above 200k nodes when I first started using ANSYS…now I don’t flinch until it’s over 1.5M
    image
    Look ma, no 0-Hz modes!
  2. Use the multi-zone mesher.  With each release the multi-zone mesher has gotten better, but for most practical applications you need to manually specify the source faces and possibly define a smaller mesh size in order to handle all the surface blocking features.
    image
    Look pa, no 0-Hz modes!
    Full disclosure…the multi-zone mesher did an adequate job but didn't exactly capture all of the details of my contact patches. It did well enough, with a body sizing and manual source definition, to 'mostly' bond each component to the board.
  3. Use the hex-dominant mesher.  Wow, that was hard for me to say.  I’m a bit of a meshing snob, and the hex dominant mesher was immature when it was released way back when.  There were a few instances when it was good, but for the most part, it typically created a good surface mesh and a nightmare volume mesh.  People have been telling me to give it another shot, and for the most part…they’re right.  It’s much, much better.  However, for this model, it has a hard time because of the aspect ratio.  I get the following message when I apply a hex dominant control:

    image
    The warning is right…the mesh looks decent on the surface, but upon further investigation I get some skewed tets/pyramids. If I reduce the element size I can significantly reduce the number of poorly formed elements:
    image
    That's going on the refrigerator door tonight!
    image
    And…no 0-Hz modes!
  4. Lastly…go back to DesignModeler or SpaceClaim, slice/dice the model, and use a multi-body part.
    image
    3 operations, ~2 minutes of work (I was eating at the same time)

    image
    Modify the connection group to search/sort across parts

    image

    That's a purdy mesh!  (Note: most of the lower-quality elements, 0.5 and under, are there because there are two elements through the thickness; reducing the element size or using a single element through the thickness would fix that right up)

    image
    And…no 0-Hz modes.

Phew…this was a long one.  Sorry about that.  Get me talking about meshing and look what happens.  Again, the take-away from all of this should be that the thin sweeper is a great tool.  Just be aware of its limitations and you’ll be able to avoid some of these ‘odd’ behaviors (it’s not all that odd when you understand what happens behind the scenes).