Donny Don’t – Remote Objects

Nothing like a good ol’ fashioned Simpsons reference.  I’m starting a new series of articles addressing common mistakes and things to avoid, and what better reference than when Bart ‘joined’ the Junior Campers and found out he might get a knife out of the deal. 


For this first article, let’s talk about remote objects (force, displacement, points, joints).  First, remote objects are awesome.  Want to add a rotational DOF to your solid-element model?  Remote Displacement.  Want to apply a load and not worry about force/moment balance?  Remote Force.  Want to apply a load but also constrain a surface?  Remote Point.  Take two points, define open/locked degrees of freedom, and you have a kinematic joint.

The thing to watch out for is how you define these remote points.  ANSYS Mechanical does an amazing job of making a pretty tedious process easy (create pilot node, create constraint-type contact, specify DOFs to include, specify formulation).  In Mechanical, all you need to do is highlight some geometry, right-mouse-click, and insert the appropriate object (remote point, remote force, etc.).  No need to keep track of real constant sets, element TSHAP settings…easy.  Almost too easy if you ask me.

Once you start creating multiple remote objects, you may see the following:

message1

If you dig into the solver output file you may see this:

image

The complaint is that we have multiple overlapping constraint sets.  Let’s take a step back and look at the model I’ve set up:

image

I have a cylinder attached to a body-to-ground spring on one face, a translational joint applied on the OD, and a remote force and moment applied on the opposite end.  I follow the instructions shown in the ANSYS Workbench message about graphically displaying FE Connections (select the ‘Solution Information’ item, click the Graphics tab):

image

We can see that any type of constraint equation is shown in red.  The issue here is that the nodes on the OD edges at the top and bottom of my cylinder belong to multiple constraint equation sets.  On the bottom of my cylinder those nodes are being constrained to the spring end AND the translational joint.  On the top, the nodes on the edge are being constrained to the joint AND the remote force.  When you hit solve, ANSYS needs to figure out how to resolve the conflicting constraint sets (a node cannot be a slave term for two different constraint sets).  I don’t know exactly how the solver manages this, but I like to imagine it’s like two people fighting over who gets to keep a dog…they place the dog in-between them and call for it, and whoever the dog goes to gets to keep it. 

Now for this example, the solver is capable of handling the overconstraint because overall…the model is properly constrained.  The spring can lose some of the edge nodes and still properly connect to the cylinder.  The same goes for the other remote objects (translational joint and remote force/moment).  If we had more objects defined and more overlaps, that would be a different story.  You can introduce a pretty lengthy lag, or outright solver failure, if there are a lot of overconstraint terms in the model. 

So now the question becomes: how do I fix this?  The easiest way is to not fix it and ignore the warning.  If the part behaves properly and we get the reaction forces we’d expect, then odds are the overconstraint terms that are automatically corrected are fine.  If we actually want to remove that warning, we need to make sure remote objects are scoped so they do not touch other remote objects.  We can do this by going into DesignModeler or SpaceClaim and imprinting the surfaces. 

image

In DM, I just extruded the edges with the operation set to Imprint Faces.  In SpaceClaim you would just need to use the ‘Copy Edge’ option on the Pull command:

image

Now this will modify the topology and will ensure we have a separation of nodes for all of our remote objects:

image

When we solve…no warning message about MPC conflicts:

image

And when we look at the FE connectivity, there are no nodes shared by multiple remote objects:

image 

The last thing I’d like to point out is the application of a force and moment on a remote point:

image

Whenever you have two remote objects operating on the same surface (e.g. a moment and force, force and displacement, etc), you should really be using a remote point.  If I were to create two remote objects:

image

I now come right back to my original problem of conflicting constraints.  These two objects share the exact same nodal set but are creating two independent remote points.  If you want to do this, right-mouse-click on one of your remote objects and select ‘promote to remote point’:

image

Then modify the other remote object to use that remote point.  No more conflict. 

Very last point…in R16, Mechanical will now tell you when you have ‘duplicate’ remote objects (like the remote force + displacement shown above). 

image

Hope this helps! 

Thermal Submodeling in ANSYS Workbench Mechanical 15.0

thermal-submodeling-18
If you've been following The Focus for a long time, you may recall my prior article about submodeling using ANSYS Mechanical APDL, which was a 'sub' model of a submarine.  The article, from 2006, begins on page 2 at this link:

Also, Eric Miller here at PADT wrote a Focus blog entry on the new-at-14.5 submodeling capability in ANSYS Workbench Mechanical.

Since both of those articles were about structural submodeling, I decided it was time we published a blog entry on how to perform submodeling in ANSYS Mechanical for thermal simulations.

Submodeling is a technique whereby we can obtain more accurate results in a small, detailed portion of a large model without having to build an incredibly refined and detailed finite element model of our complete system.  In short, we map boundary conditions onto a 'chunk' of interest that is a subset of our full model so that we can solve that 'chunk' in more detail.  Typically we mesh the 'chunk' with a much finer mesh than was used in the original model, and sometimes we add more detail such as geometric features that didn't exist in the original model like fillets.

The ANSYS Workbench Project Schematic for a thermal solution involving submodeling looks like this:

thermal-submodeling-1

Figure 1 – Thermal Submodeling Project Schematic

Note that in the project schematic, the links are automatically established when we set up the submodel after completing the analysis on the coarse model, as we shall see below.

First, here is the geometry of the coarse model.  It's a simple set of cooling fins.  In this idealized model, no fillets have been modeled between the fins and the block.

thermal-submodeling-2

Figure 2 – Coarse Model Geometry, Idealized without Fillets

The boundary conditions consisted of a heat flux due to a thermal source on the base face and convection to ambient air on the cooling fin surfaces.  The heat flux was set up to vary over the course of 3 load steps as follows:

Load Step    Heat Flux (BTU/s*in^2)
    1                0.2
    2                0.5
    3                0.005

Thus, the maximum heat going into the system occurs in load step 2, corresponding to 'time' 2.0 in this steady state analysis.
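As a back-of-the-envelope check on those numbers, the total heat entering the system in each step is just the flux times the base-face area.  A quick sketch in Python; note that the 4 in^2 base area is a made-up value for illustration only (the real area comes from the geometry):

```python
# Total heat input per load step: Q = flux * area.
# NOTE: BASE_AREA_IN2 is a hypothetical value, not taken from the model.
BASE_AREA_IN2 = 4.0  # assumed base-face area, in^2

# Heat flux per load step, from the table above, in BTU/(s*in^2)
heat_flux = {1: 0.2, 2: 0.5, 3: 0.005}

for step, flux in sorted(heat_flux.items()):
    print(f"Load step {step}: {flux * BASE_AREA_IN2:.3f} BTU/s into the base face")

# Load step 2 carries the peak heat input, matching 'time' 2.0 in the results.
peak_step = max(heat_flux, key=heat_flux.get)
```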

thermal-submodeling-3

Figure 3 – Coarse Model Boundary Conditions – Heat Flux and Convection

The coarse model is meshed with relatively large elements in this case.  The mesh refinement for a production model should be sufficient to adequately capture the fields of interest in the locations of interest.  After solving, the temperature results show a max temperature at the base where the heat flux is applied, transitioning to the minimum temperature on the cooling fins where convection is removing heat.

thermal-submodeling-4

Figure 4 – Coarse Model Mesh and Temperature Results for Load Step 2

Our task now is to calculate the temperature in one of these fins with more accuracy.  We will use a finer mesh and also add fillets between the fin and base.  For this example, I isolated one fin in ANSYS DesignModeler, did some slicing, and added a fillet on either side of the base of the fin of interest.

thermal-submodeling-5

Figure 5 – Fine Model (Submodel) Isolated Fin Geometry and Mesh, Including Fillets at Base

 

ANSYS requires that the submodel lie in the exact geometric position it would occupy in the coarse model, so it's a good idea to overlay our fine model geometry onto the coarse model to verify the positioning.

thermal-submodeling-6

Figure 6 – Submodel and Coarse Model Overlaid

thermal-submodeling-7

Figure 7 – Submodel and Coarse Model Overlaid, Showing Addition of Fillet

The next step is to insert the submodel geometry as a stand-alone geometry block in the Project Schematic which already contains the coarse model, as shown in figure 8.  A new Steady-State Thermal analysis is then dragged and dropped onto the geometry block containing the submodel geometry.

thermal-submodeling-8

Figure 8 – Submodel Geometry Added to Project Schematic, New Steady-State Thermal System Dragged and Dropped onto Submodel Geometry

 

Next, we drag and drop the Engineering Data cell from the coarse model to the Engineering Data cell in the submodel block.  This will establish a link so that the material properties will be shared.

thermal-submodeling-9

Figure 9 – Drag and Drop Engineering Data from Coarse Model to Submodel

The final needed link is established by dragging and dropping the Solution cell from the coarse model onto the Setup cell in the submodel.  This step causes ANSYS to recognize that we are performing submodeling, and in fact this will cause a Submodeling branch to appear in the outline tree in the Mechanical window for the submodel.

thermal-submodeling-10

Figure 10 – Solution Cell Dragged and Dropped from Coarse Model to Submodel Setup Cell

After opening the Mechanical editor for the submodel block, we can see that the Submodeling branch has automatically been added to the tree.

thermal-submodeling-11

Figure 11 – Submodeling Branch Automatically Added to Outline Tree

After meshing the submodel I specified that all three load steps should have their temperature data mapped to the submodel from the coarse model.  This was done in the Details view for the Imported Temperature branch, by setting Source Time to All.

thermal-submodeling-12

Figure 12 – Set Imported Temperature Source Time to All to Ensure All Loads Steps Are Mapped

Next I selected the four faces that make up the cut boundaries in the submodel and applied those to the geometry selection for Imported Temperature.

thermal-submodeling-13

Figure 13 – Cut Boundary Faces Selected for Imported Temperature

 

As mentioned above, the Imported Temperature details were set to read in all load steps by setting Source Time to All.  The Imported Temperature branch can now be right-clicked and the resulting imported temperatures viewed.  I also inserted a Validation branch which we will look at after solving.

thermal-submodeling-14

Figure 14 – Setting Source Time to All, Viewing Imported Temperature on Submodel

Any other loads that need to be applied to the submodel are added as well.  For this model, it's convection on the large faces of the fin that are exposed to ambient air.

thermal-submodeling-15

Figure 15 – Submodel Convection Load on Fin Exposed Faces

Since there are three load steps in the coarse model and we told ANSYS to map results from all time points, I set the number of steps to three in Analysis Settings, then solved the submodel.  Results are available for all three load steps.

thermal-submodeling-16

Figure 16 – Submodel Temperature Results for Step 2 (Highest Heat Flux Value in Coarse Model)

Regarding the Validation item under the Imported Temperature branch, this is probably best added after the solution is done.  In my case I had to clear it and recalculate it.  Validation can display either an absolute or relative (percent difference) plot on the nodes at which loads were imported.  Figure 17 shows the relative difference plot, which maxes out at about 6%.  The validation information as well as mapping techniques are described in the ANSYS Help.
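Mechanical computes the Validation plot for you, but conceptually the relative-difference value at each mapped node is just the familiar percent-difference formula.  A toy illustration in Python (the temperature values below are invented, not taken from this model):

```python
# Percent difference between a value mapped onto a submodel node and the
# value interpolated from the coarse-model solution at the same location.
# NOTE: the numbers below are made up purely for illustration.

def percent_difference(mapped, source):
    """Relative difference of a mapped value vs. its source value, in percent."""
    return abs(mapped - source) / abs(source) * 100.0

coarse_value = 150.0   # hypothetical coarse-model temperature at a node location
mapped_value = 141.0   # hypothetical value mapped onto the submodel node

print(f"{percent_difference(mapped_value, coarse_value):.1f}%")  # prints 6.0%
```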

thermal-submodeling-17

Figure 17 – Submodel Imported Temperature Validation Plot – Percent Difference on Mapped Nodes

Looking at the coarse model and submodel results side by side, we see good agreement in the calculated temperatures.  The temperature in the fillets shows a nice, smooth gradient.

thermal-submodeling-18

Figure 18 – Coarse and Submodel Temperature Results Showing Good Agreement

Hopefully this explanation will be helpful to you if you have a need to perform submodeling in a thermal simulation in ANSYS.  There is a Thermal Submodeling Workflow section in the ANSYS 15.0 Help in the Mechanical User's Guide that you may find helpful as well.

 

 

 

ANSYS 2015 Hall of Fame Announced – Los Alamos National Labs and SynCardia Models are Finalists

2015-hall-of-fame-header-closed

Every year for a while now ANSYS, Inc. has chosen models made by users of the ANSYS software tools for their Hall of Fame.  This year had some very cool models across CFD, Structural, and Electromagnetic – including some great Multiphysics applications. Visit the ANSYS website to see all the winners here.

The three commercial winners of "Best in Show" were varied but powerful examples of how simulation can be used to improve the performance and reliability of products:

 best-in-show-2015-ansys-hall-of-fame

Andritz Hydro used ANSYS Mechanical to model their assemblies to see if replacing welds with bolted joints would reduce weight and cost while keeping reliability.  They used sub-modeling, bolted joints, and contact.  

BRP used ANSYS CFX, ICEM CFD, and Mechanical to capture the forces caused by cavitation on their outboard marine engine. This engine pushes a boat at 75 MPH (!!!) through the water, so yes, they get cavitation.  They used ICEM CFD for meshing, CFX to predict the cavitation and capture the cavitation loading, and Mechanical to see how the loading impacted the gear train and shafts. They were able to optimize the design quickly using this process.

Spinologics used ANSYS Mechanical APDL to model the process of using a rod to straighten a deformed spine (scoliosis). They use the scriptability of the APDL to automate the creation of the models.  Very cool stuff.  Check out the video on the link.

We also want to mention two customers that were involved as Finalists.  

syncardia-heart

SynCardia is often mentioned in this blog because, well, they make a frick'n artificial heart that saves lives every day.  We modeled an early iteration of the heart as a multiphysics problem probably 5 or 6 years ago, maybe longer. More recently, Stony Brook University and the University of Arizona did a much more detailed model in ANSYS Fluent that looks at not just pressure and velocity, but platelet dispersion patterns in the artificial heart.  Check out the video here:  https://storage.ansys.com/hof/2015/video/2015-stonybrook.mp4

2015-lanl-bg

Los Alamos National Labs is another long-time PADT customer, and we were fortunate enough to be involved in the study that was recognized as a finalist. They used ANSYS Fluent to model something called vortex-induced motion, or VIM, in off-shore oil rigs.  Basically, waves hit the platform and create big swirling vortices.  These in turn put loads on the structure that can sometimes be very large.  The purpose of this study was to find a way to accurately predict VIM with simulation so they could then evaluate various solutions. It is a true Fluid-Solid Interaction (FSI) problem and, because of the size of the structures and all that turbulence, a High Performance Computing (HPC) problem as well. We hope to publish a paper on some related work this year… watch this space for more.

 This competition is a great way to see what others are doing, and if you submit your models, to show off what you have done.  Contact your ANSYS rep to learn more or drop us a note.

 

Configuring Laptop “Switchable” Graphics for ANSYS Applications

IMG_4894

A lot of laptops these days come with “switchable” graphics.  The idea is that you have a lower capability but also lower power consuming ‘basic’ graphics device in addition to a higher performing but higher power demand graphics device.  By only using the higher performance graphics device when it’s needed, you can maximize the use time of a battery charge. 

A lot of the ANSYS graphics-intensive applications may need the higher end graphics device to display and run correctly.  In this article, we’ll focus on the AMD Firepro as the “higher end” graphics, with Intel HD graphics as the “lower end”.  We will show you how to switch to the AMD card to get around problems or errors in displaying ANSYS user interface windows.

The first step is to identify the small red dot graphics icon at the lower right in the task bar:

fix_laptop_graphics_ansys-01

Figure 1 – AMD Catalyst Icon

 

Next, if you don’t see the switchable graphics option shown two images down, right-click on the icon to bring up the AMD Catalyst Control Center.

fix_laptop_graphics_ansys-02

Figure 2 – AMD Catalyst Control Center Right Click Menu Pick

 

Right-click on the same icon again, if needed, to select “Configure Switchable Graphics,” as shown here:

fix_laptop_graphics_ansys-03

Figure 3 – Select “Configure Switchable Graphics” via Right Click on the Same Icon

 

In the resulting AMD Catalyst Control Center window, click on the Add Application button.

fix_laptop_graphics_ansys-04

Figure 4 – AMD Catalyst Control Center Window

Next browse to the application that needs the higher end graphics capability.  This might take a little trial and error if you don’t know the exact application.  Here we select ANSYS CFD-Post and click Open.

fix_laptop_graphics_ansys-05

Figure 5 – Selecting appropriate executable for switchable graphics

Finally, select the High Performance option from the dropdown for your chosen executable, then click the Apply button.

fix_laptop_graphics_ansys-06

This should get your graphics working properly.  Again, the reason we have the two graphics choices is to allow us to better control power consumption based on the level of graphics that are needed per application.  Hopefully this article helps you to choose the proper graphics settings so that your ANSYS tools behave nicely on your laptop.

ANSYS Workbench Installations and RedHat 6.6 – Error and Workaround

penguin_sh

We were recently alerted by a customer that there is apparently a conflict with ANSYS installations if Red Hat Enterprise Linux 6.6 (RHEL 6.6) is installed. We have confirmed this here at PADT. This affects several versions of ANSYS, including 15.0.7, 14.5, and 14.0. The primary problem seems to be with meshing in the Mechanical or Meshing window.

The errors encountered can be: “A software execution error occurred inside the mesher. The process suffered an unhandled exception or ran out of usable memory.” or “an inter-process communication error occurred while communicating with the MESHER module.”

The error message popup can look like this:
th1

or
th2

th3
Note that the Platform Support page on the ANSYS website does not list RHEL 6.6 as supported. RHEL is only supported up through 6.5 for ANSYS 15.0. This is the link to that page on the ANSYS website:

http://www.ansys.com/staticassets/ANSYS/staticassets/support/r150-platform-support-by-application.pdf

That all being said, there is a workaround that should allow you to continue using ANSYS Workbench with RHEL 6.6 if you encounter the error. It involves renaming a directory in the installation path:

In this directory:

/ansys_inc/v150/commonfiles/MainWin/linx64/mw/lib-amd64-linux/

Rename the folder ‘X11’ to ‘Old-X11’

After that change, you should be able to successfully complete meshes, etc., in ANSYS Workbench. Keep in mind that RHEL 6.6 is not officially supported by ANSYS, Inc., and their recommendation is always to stick with supported levels of operating systems. These are always listed in the ANSYS Help for the particular version you are running, as well as at the link shown above.

Since the renamed directory is contained within the ANSYS installation files, it is believed that this will not affect anything other than ANSYS. Use at your own risk, however. Should you encounter one or more of the errors listed above, we hope this article has provided useful information to keep your ANSYS installations up and running.

From Piles to Power – My First PADT PC Build

Welcome to the PADT IT Department – now build your own PC

[Editors Note: Ahmed has been here a lot longer than 2 weeks, but we have been keeping him busy so he is just now finding the time to publish this. ]

I have been working for PADT for a little over 2 weeks now. After taking the ceremonial office tour that left me with a fine white powder all over my shoes (it’s a PADT Inc. special treat), I was taken to meet my team: David Mastel – My Boss for short – who is the IT commander-in-chief at PADT Inc., and Sam Goff – the all-knowing systems administrator.

I was shown to a cubicle that reminded me of the shady computer “recycling” outfits you’d see on a news report highlighting the vast amounts of abandoned hardware; except there were no CRT (tube) screens or little children working as slave labor.
aa1

Sacred Tradition

This tradition started with Sam, then Manny, and now it was my turn to take this rite of passage. As part of the PADT IT department, I am required by sacred tradition to build my own desktop with my bare hands – and then I was handed a screwdriver.

My background is mixed and diverse, but the places I’ve worked mostly had one thing in common: we usually depended on pre-built servers, systems, and packages. Branded machines carry an embedded promise of reliability, support, and superiority over custom-built machines.

What most people don’t know about branded machines is that they carry two pretty heavy tariffs:
  1. First, you are paying upfront for the support structure, development, R&D, and supply chains that are required to pump out thousands of machines.
  2. Second, because these large companies are trying to maximize their margins, they will look for a proprietary, cost-effective configuration that will:
    1. Most probably fail or become obsolete as close as possible to the 3-year “expected” life-span of computers.
    2. Lock users into buying any subsequent upgrade or spare part from them.

Long story short, the last time I fully built a desktop computer was back in college, when a 2GB hard disk was a technological breakthrough and we could only imagine how many MP3s we could store on it.

The Build

There were two computer cases on the ground. One resembled a 1990 Mercury Sable that was at most tolerable as a new car; the other looked more like a 1990 BMW 325ci – a little old, but carrying a heritage and the potential to be great once again.
aa2

So with my obvious choice for a case I began to collect parts from the different bins and drawers and I was immediately shocked at how “organized” this room really was. So I picked up the following:

There are a few things that I would have chosen differently, but they were either not available at the time of the build or were ridiculous for a work desktop:

  • Replaced 2 drives with SSD disks to hold OS and applications
  • Explored a more powerful Nvidia card (not really required but desired)

So after a couple of hours of fidgeting and checking manuals this is what the build looks like.
aa3

(The case above was the first prototype ANSYS Numerical Simulation workstation in 2010. It has a special place in David’s Heart)

Now to the Good STUFF! – Benchmarking the rebuilt CUBE prototype

ANSYS R15.0.7 FEA Benchmarks

Below are the results for the v15sp5 benchmark running distributed parallel on 4-Cores.
aa4

ANSYS R15.0.7 CFD Benchmarks

Below are the results for the aircraft_2m benchmark using parallel processing on 4-Cores.
aa5

This machine is a really cool sleeper computer that is more than capable of handling whatever I throw at it.

The only thing that worries me is that when Sam handed me the case to get started, David was trying – but failing – to hide a smile that makes me feel there is something obviously wrong with my first build that I failed to catch. I guess I will just wait and see.

You will be Surprised Where Sneeze Germs Travel in an Airplane

sneezing-in-airplane-300x279

Ever been on a flight, heard someone sneeze, and then sat in fear as you imagine millions of tiny infectious germs laughing hysterically as they spread through the cabin of the plane?  In my imagination they are green and drip mucus. In reality they are small liquid particles, and instead of going everywhere, it appears they fall on just a few unlucky people. 

ANSYS, Inc. put out a very cool video showing the results of an in-cabin CFD run done by Purdue University that tracks the pathogens as they leave the sick person’s mouth, get caught in the climate control system’s air stream, and waft right onto the people next to and behind them.  The study was done for the FAA Center for Excellence for Airliner Cabin Environment Research.   

Here is the video; check it out and share it with your friends – especially if you have a friend who sneezes out into the open air:

Visit the ANSYS Blog to learn even more.

#betterlivingthroughsimulation

Home Grown HPC on CUBE Systems

compute-cluster-1

A Little Project Background

Recently I’ve been working on developing a computer vision system for a long-standing customer. We are developing software that enables them to use computers to “see” where a particular object is in space, and accurately determine its precise location with respect to the camera. From that information, they can do all kinds of useful things.

In order to figure out where something is in 3D space from a 2D image you have to perform what is commonly referred to as pose estimation. It’s a highly interesting problem by itself, but it’s not something I want to focus on in detail here. If you are interested in obtaining more information, you can Google pose estimation or PnP problems. There are, however, a couple of aspects of that problem that do pertain to this blog article. First, pose estimation is typically a nonlinear, iterative process. (Not all algorithms are iterative, but the ones I’m using are.) Second, like any algorithm, its output is dependent upon its input; namely, the accuracy of its pose estimate is dependent upon the accuracy of the upstream image processing techniques. Whatever error happens upstream of this algorithm typically gets magnified as the algorithm processes the input.

The Problem I Wish to Solve

You might be wondering where we are going with HPC given all this talk about computer vision. It’s true that computer vision, especially image processing, is computationally intensive, but I’m not going to focus on that aspect. The problem I wanted to solve was this: Is there a particular kind of pattern that I can use as a target for the vision system such that the pose estimation is less sensitive to the input noise? In order to quantify “less sensitive” I needed to do some statistics. Statistics is almost-math, but just a hair shy. You can translate that statement as: My brain neither likes nor speaks statistics… (The probability of me not understanding statistical jargon is statistically significant. I took a p-test in a cup to figure that out…) At any rate, one thing that ALL statistics requires is a data set. A big data set. Making big data sets sounds like an HPC problem, and hence it was time to roll my own HPC.

The Toolbox and the Solution

My problem reduced down to a classic Monte Carlo type simulation. This particular type of problem maps very nicely onto a parallel processing paradigm known as Map-Reduce. The concept is shown below:
matt-hpc-1

The idea is pretty simple. You break the problem into chunks and you “Map” those chunks onto available processors. The processors do some work and then you “Reduce” the solution from each chunk into a single answer. This algorithm is recursive. That is, any single “Chunk” can itself become a new blue “Problem” that can be subdivided. As you can see, you can get explosive parallelism.
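To make the picture concrete, here is a minimal Map-Reduce sketch in Python (not the actual node.js code from the project): each mapper estimates π from a chunk of random samples – a stand-in Monte Carlo problem – and the reducer pools the per-chunk counts into one answer.

```python
# Minimal Map-Reduce Monte Carlo: estimate pi by counting random points
# that land inside the unit quarter-circle. Each chunk is independent,
# so chunks can be mapped onto however many processors are available.
import random
from multiprocessing import Pool

def mapper(args):
    """Process one chunk: return (hits inside the quarter-circle, samples)."""
    seed, n = args
    rng = random.Random(seed)  # per-chunk RNG so chunks are reproducible
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits, n

def reducer(chunks):
    """Combine per-chunk results into a single pi estimate."""
    hits = sum(h for h, _ in chunks)
    total = sum(n for _, n in chunks)
    return 4.0 * hits / total

if __name__ == "__main__":
    work = [(seed, 100_000) for seed in range(8)]   # 8 chunks of work
    with Pool(4) as pool:
        chunk_results = pool.map(mapper, work)      # the "Map" step
    print("pi is roughly", reducer(chunk_results))  # the "Reduce" step
```

Because each chunk result has the same shape as the final answer's inputs, a chunk could itself be split and reduced the same way – the recursion the diagram above describes.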

Now, there are tools that exist for this kind of thing. Hadoop is one such tool. I’m sure it is vastly superior to what I ended up using and implementing. However, I didn’t want to invest at this time in learning a specialized tool for this particular problem. I wanted to investigate a lower level tool on which this type of solution can be built. The tool I chose was node.js (www.nodejs.org).

I’m finding Node to be an awesome tool for hooking computers together in new and novel ways. It acts kind of like the post office in that you can send letters and messages and get letters and messages all while going about your normal day. It handles all of the coordinating and transporting. It basically sends out a helpful postman who taps you on the shoulder and says, “Hey, here’s a letter”. You are expected to do something (quickly) and maybe send back a letter to the original sender or someone else.

More specifically, node turns everything that a computer can do into a “tap on the shoulder”, or an event. Things like, “Hey, go read this file for me.”, turn into, “OK. I’m happy to do that. I tell you what, I’ll tap you on the shoulder when I’m done. No need to wait for me.” So, now, instead of twiddling your thumbs while the computer spins up the hard drive, finds the file and reads it, you get to go do something else you need to do. As you can imagine, this is a really awesome way of doing things when stuff like network latency, hard drives spinning and little child processes that are doing useful work are all chewing up valuable time. Time that you could be using getting someone else started on some useful work.

Also, like all children, these little helpful child processes that are doing real work never seem to take the same time to do the same task twice. However, simply being notified when they are done allows the coordinator to move on to other children. Think of a teacher in a classroom. Everyone is doing work, but not at the same pace. Imagine if the teacher could only focus on one child at a time until that child fully finished. Nothing would ever get done!

Here is a little graph of our internal cluster at PADT cranking away on my Monte Carlo simulation.
matt-hpc-2

It’s probably impossible to read the axes, but that’s 1200+ cores cranking away. Now, here is the real kicker. All of the machines have an instance of node running on them, but one machine is coordinating the whole thing. The CPU on the master node barely nudges above idle. That is, this computer can manage and distribute all this work by barely lifting a finger.

Conclusion

There are a couple of things I want to draw your attention to as I wrap this up.

  1. CUBE systems aren’t only useful for CAE simulation HPC! They can be used for a wide range of HPC needs.
  2. PADT has a great deal of experience in software development both within the CAE ecosystem and outside of this ecosystem. This is one of the more enjoyable aspects of my job in particular.
  3. Learning new things is a blast and can have benefit in other aspects of life. Thinking about how to structure a problem as a series of events rather than a sequential series of steps has been very enlightening. In more ways than one, it is also why this blog article exists. My Monte Carlo simulator is running right now. I’m waiting on it to finish. My natural tendency is to busy wait. That is, spin brain cycles watching the CPU graph or the status counter tick down. However, in the time I’ve taken to write this article, my simulator has proceeded in parallel to my effort by eight steps. Each step represents generating and reducing a sample of 500,000,000 pose estimates! That is over 4 billion pose estimates in a little under an hour. I’ve managed to write 1,167 words…

CUBE_Logo_150w

Default Contact Stiffness Behavior for Bonded Contact

p7
It recently came to my attention that the default contact stiffness factor for bonded contact can change based on other contact regions in a model. This applies both to Mechanical and Mechanical APDL. If all contact regions are bonded, the default contact stiffness factor is 10.0. This means that in a bonded region, the stiffness tending to hold the two sides of contact together is 10 times the stiffness of the underlying solid or shell elements.

However, if there is at least one other contact region that has a type set to anything other than bonded, then the default contact stiffness for ALL contact pairs becomes 1.0. This is the default behavior as documented in the ANSYS Mechanical APDL Help, in section 3.9 of the Contact Technology Guide in the notes for Table 3.1:

“FKN = 10 for bonded. For all other, FKN = 1.0, but if bonded and other contact behavior exists, FKN = 1 for all.”

So, why should we care about this? It’s possible that if you are relying on bonded contact to simulate a connection between one part and another, the resulting stress in those parts could be different in a run with all bonded contact vs. a run with all bonded and one or more contact pairs set to a type other than bonded. The default contact stiffness is now less than it would be if all the contact regions were set to bonded.

This can occur even if the non-bonded contact is in a region of the model that is in no way connected to the bonded region of interest. The mere presence of any non-bonded contact region causes the contact stiffness factor for all contact pairs to default to 1.0 rather than the 10.0 you might expect.

Here is an example, consisting of a simple static structural model. In this model, we have an inner column with a disk on top. There are also two blocks supporting a ring. The inner column and disk are completely separate from the blocks and ring, sharing no load path or other interaction. Initially all contact pairs are set to bonded for the contact type. All default settings are used for contact.
p1

Loading consists of a uniform temperature differential as well as a bearing load on the disk at the top. Both blocks as well as the column have their bases constrained in all degrees of freedom.
p2

After solving, this is the calculated maximum principal stress distribution in the ring. The max value is 41,382.
p3

Next, to demonstrate the behavior described above, we changed the contact type for the connection between the column and the disk from bonded to rough, all else remaining the same.
p4

After solving, we check the stresses in the ring again. The max stress in the ring has dropped from 41,382 to 15,277, as you can see in the figure below. Again, the only change made was in a part of the model that is in no way connected to the ring whose stresses we are checking. The change in stress is due solely to a change in a contact type setting elsewhere in the model. The stress decreased because the stiffness of the bonded connection is lower by a factor of 10, so the bonded region is a softer connection than it was in the original run.

p5

So, what do we as analysts need to do in light of this information? A good practice would be to manually specify the contact stiffness factor for all contact pairs. This behavior only crops up when the default values for contact stiffness factor are utilized. We can define these stiffness factors easily in ANSYS Mechanical in the details view for each contact region. Further, we need to always remember that ANSYS as well as other analytical tools are just that – tools. It’s up to us to ensure that the results of interest we are getting are not sensitive to factors we can adjust, such as mesh density, contact stiffness, weak spring stiffness, stabilization factors, etc.
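For those working at the Mechanical APDL level (or via a command snippet), pinning the stiffness explicitly looks something like the sketch below. The real constant set number is a placeholder for whatever set your contact pair actually uses; FKN is stored as the third real constant for the CONTA17x contact elements.

```apdl
! Pin the normal contact stiffness factor (FKN) for a contact pair so it
! no longer depends on which other contact types exist in the model.
! The real constant set number (3 here) is a placeholder for your model.
RMODIF,3,3,10.0    ! set #3, real constant location 3 (FKN) = 10.0
```

With the factor specified explicitly like this for every pair, the “all bonded vs. mixed” default switch described above simply never comes into play.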

Learn Linux on edX

edx_linux
The balance of Linux vs. Windows for simulation users is always in flux. For some time it was predicted that Windows would win the battle, but in recent years Linux has made a resurgence, especially on clusters and in the cloud. We strongly recommend that ANSYS users who want to be power users gain a good understanding of Linux from both a user and a sysadmin perspective. This is especially true for CFD users, since they are the most likely to be solving on Linux machines. Too many of the people we interface with are left at the mercy of an IT support team that doesn’t know, or even fears, Linux.

The best way to solve this problem is to learn Linux yourself. To help people get there, we have recommended a few books and suggested “learning by doing.” Now we have a better option.

edX offers an Introduction to Linux class that looks outstanding. You can audit it for free or take the course for real for a $250 minimum contribution. The quality of these courses is fantastic, and the material is thorough and practical.

If you do take the class, give us some feedback when you finish in the comments below.

Here is the video describing the course.  

Using Probes to Obtain Contact Forces in ANSYS Mechanical

Recently we have had a few questions on obtaining contact results in ANSYS Mechanical. A lot of contact results can be accessed using the Contact Tool, but to obtain contact forces we use Probes. Since not everyone is familiar with how it’s done, we’ll explain the basics here.

Below is a screen shot of a Mechanical model involving two parts. One part has a load that causes it to be deflected into the other part.

p1

We are interested in obtaining the total force that is being transmitted across the contact elements as the analysis progresses. Fortunately this is easy to do using Probes in Mechanical.

The first thing we do is click on the Solution branch in the tree so we can see the Probes button in the context toolbar. We then click on the Probe drop down button and select Force Reaction, as shown here:

p2

Next, we click on the resulting Force Reaction result item under the Solution branch to continue with the configuration. We first change the Location Method from Boundary Condition to Contact Region:

p3

We then specify the desired contact region for the force calculation from the Contact Region dropdown:

p4

Note that the coordinate system for the force calculation can be either Cartesian or Cylindrical. You can set up a coordinate system wherever you need it, selectable via the Orientation dropdown.

There is also an Extraction dropdown with options for using the contact elements themselves, the elements underlying the contact elements, or the elements underlying the target elements (target elements themselves have no reaction forces or other results calculated). Care must be taken when using underlying elements to make sure we are not also picking up forces from other contact regions that share the same elements, or from applied loads or constraints. In most cases you will want to use either Contact (Underlying Element) or Target (Underlying Element). If contact is asymmetric, only one of these will have nonzero values.

In this case, the Contact (Contact Element) setting gave us appropriate results, based on our contact behavior setting of Asymmetric:

p5

Here are the details including the contact force results:

p6

This is a close-up of the force vs. ‘time’ graphs and table (this was a static structural analysis with a varying pressure load):

p7
p8


***** SUMMATION OF TOTAL FORCES AND MOMENTS IN THE GLOBAL COORDINATE SYSTEM *****

FX = -0.4640219E-04
FY = -251.1265
FZ = -0.1995618E-06
MX = 62.78195
MY = -0.1096794E-04
MZ = -688.9742
SUMMATION POINT= 0.0000 0.0000 0.0000

We hope this information is useful to you in being able to quickly and easily obtain your contact forces.

Flownex and PADT Sponsor University of Houston’s Rankin Rollers Team

rankin-rollers-logo
A group of enthusiastic students at the University of Houston are doing their part to solve that age-old academia problem: not enough hands-on experience. They are designing and building a working steam turbine for the school’s Thermodynamics lab so students can experiment with a Rankine cycle, learn how to take meaningful measurements, and study how to control a real thermodynamic system.

rankin-rollers-facebook
Look! Flownex and PADT on Social Media! Thanks for the plug guys.
After meeting a team member at the 2014 Houston ANSYS User conference, PADT saw a great opportunity to help the team by providing them with access to a full seat of Flownex SE so that they can create a virtual prototype of their steam turbine and the control system they are developing. 

The four team members have the following goals for their project:

    1. Create a fully automated system control
    2. Create a system with a rolling frame for ease of transport
    3. Create a system with dimensions of 4 x 2 x 3.5 ft
    4. Deliver pre-made lab experiments
    5. Produce an aesthetically pleasing product

Flownex should be a great tool for them, allowing the team to simulate the thermodynamics and flow in the system, as well as the system controls, before committing to hardware.

You can learn more about the team on their Facebook page here, or on their website here.

We hope to share their models and what they have learned when their project is complete. If you are interested in using Flownex for your work or school project, contact PADT.

steam-turbine-table-setup
This is the team’s proposed configuration for the final test bench.
flow-schematic
We can’t wait to see this flow diagram translated into Flownex.

A 3D Mouse Testimonial

The following is from an email that I received from Johnathon Wright. I think he likes his brand new 3Dconnexion SpacePilot PRO.
-David Mastel
  IT Manager
  PADT, Inc.

——————-

top-panel-device
Recently PADT became a certified reseller for 3Dconnexion. Shortly after the agreement, a sleek and elegant SpacePilot PRO landed on my desk. Immediately the ergonomic design, LCD display, and blue LED under the space ball appealed to the techie inside of me. As a new 3D mouse user I was a little skeptical about the effectiveness of this little machine, yet it quickly gained my trust as an invaluable tool for any designer or engineer. On a daily basis it allows me to seamlessly transition from CAD to 3D printing software and then to Geomagic scanning software, giving me dynamic control of my models, screen views, hotkeys, and shortcuts.

Outside of its consistency as an exceptional 3D modeling aid, the SpacePilot PRO also has a configurable home screen that allows quick navigation of email, calendar, or tasks. This ensures that I can keep in touch with my team without ever having to leave my engineering programs, which is invaluable to my daily productivity. Whether you are a first-time user looking to try out a 3D mouse or an experienced 3D mouse user looking to upgrade, you need to check out the SpacePilot PRO. I can’t imagine returning to producing CAD models or manipulating scan data without one. Combine the SpacePilot PRO’s cross-compatibility with its programmability and ease of use and you have a quality computer tool for a wide range of users looking for new ways to increase productivity.

Link to YouTube video, showing it do its thing along with a look at my 3D scanning workstation, the GEOCUBE: http://youtu.be/fsfkLPaZJe4

Johnathon Wright
Applications Engineer,
Hardware Solutions
PADT, Inc.

———————————————————————————————-
Editor’s Note:

Not familiar with what a 3D mouse is? It is a device that lets a user control 3D objects on their computer in an intuitive manner. Just as you move a 2D mouse on the plane of your desk, you spin a 3D mouse in all three dimensions. Learn more here.

spacepilot-pro-cad-professional-2-209-p

Integrating ANSYS Fluent and Mechanical with Flownex

Component boundary values generated in Flownex are useful in CFD simulation (inlet velocities, pressures, temperatures, mass flows). Fluid and surface temperature distributions from Flownex can also be useful in many FEA simulations. For this reason the latest release of Flownex SE was enhanced to include several levels of integration with ANSYS.

ANF Import

By simply clicking an Import ANF icon on the Flownex ribbon bar, users can select the file they want to import. The user will be asked whether the file should be imported as 3D geometry, which preserves the coordinate system, or as an isometric drawing.

The user can also select the type of component that should be imported into the Flownex library. Since the import only supports lines and line-related items, this will typically be a pipe component.

Following a similar procedure, a DXF importer allows users to import files from AutoCAD.

This rapid model construction gives Flownex users the ability to create and simulate networks more quickly. With faster model construction, users get to results sooner and spend less time building models.

p1

ANSYS Flow Solver Coupling and Generic Interface

The Flownex library was extended to include components for co-simulation with ANSYS Fluent and ANSYS Mechanical.
p2

These include a flow solver coupling, which checks combined convergence and exchanges data on each iteration, and a generic coupling that can be used when convergence between the two software programs is not necessary.

The general procedure for both the Fluent and Mechanical co-simulation is the same:

    1. Using specified named selections, Flownex replaces values in a Fluent journal file (or in the ds.dat file in the case of Mechanical).
    2. From Flownex, Fluent/Mechanical is then run in batch mode.
    3. The ANSYS results are then written to text files that are used as inputs to Flownex.
    4. When applicable, the specified convergence criteria are checked and the procedure is repeated if necessary.

p3

Learn More

To learn more about Flownex, or about how Flownex and ANSYS Mechanical can work together, contact PADT at 480.813.4884 or roy.haynie@padtinc.com. You can also learn more about Flownex at www.flownex.com.

FDA Opening to Simulation Supported Verification and Validation for Medical Devices

FDA-CDRH-Medical-Devices-Simulation
Bringing new medical device products to market requires verification and validation (V&V) of the product’s safety and efficacy. V&V is required by the FDA as part of their submission/approval process. The overall product development process is illustrated in the chart below; phases 4 and 5 show where verification is used to prove the device meets the design inputs (requirements) and where validation is used to prove the device’s efficacy. Historically, the V&V processes have required extensive and expensive testing. Recently, however, the FDA’s Center for Devices and Radiological Health (CDRH) issued a guidance document that helps companies use computational modeling (e.g., FEA and CFD) to support the medical device submission/approval process.

FDA-Medical-Device-Design-Process-Verification-Validation
Phases and Controls of Medical Device Development Process, Including Verification and Validation

The document, “Reporting of Computational Modeling Studies in Medical Device Submissions,” is a draft guidance document that was issued on January 17th, 2014. It specifically addresses the use of computation in the following areas for verification and/or validation:

    1. Computational Fluid Dynamics and Mass Transport
    2. Computational Solid Mechanics
    3. Computational Electromagnetics and Optics
    4. Computational Ultrasound
    5. Computational Heat Transfer

The guidance specifically outlines the form reports need to take if a device developer is going to use simulation for V&V. By following the guidance, a device sponsor can be assured that all the information required by the FDA is included, and the FDA can work with a consistent set of inputs from various applicants.

drug-delivery-1-large
CFD Simulation of a Drug Delivery System, Used to Verify Uniform Distribution of the Drug

Computational modeling and simulation, or what we usually just call simulation, has always been an ideal tool for reducing the cost of V&V by allowing virtual testing on the computer before physical testing. This reduces the number of physical test iterations and avoids discovering design problems during testing, which happens late in the development process, when making changes is most expensive. In the past, though, you still had to conduct the physical testing. With these new guidelines, you may now be able to submit simulation results to reduce the amount of required testing.
mm_model_stresses
Simulation to Identify Stresses and Loads on Critical Components While Manipulating a Surgical Device

Verification and validation using simulation has been part of the product development process in the aerospace industry for decades and has been very successful in increasing product performance and safety while reducing development costs. It has proven to be a very effective tool when applied properly. Just as with physical testing, it is important that the virtual test be designed to verify and validate specific items in the design, that the simulation makes the right assumptions, and that the results are meaningful and accurate.

PADT is somewhat unique in that we have broad experience with product development, various types of computational modeling and simulation, and the process of submission/approval with the FDA. In addition, we are ISO 13485 certified. We can provide the testing that is needed for the V&V process and employ simulation to accelerate and support that testing, helping our medical device customers get their products to market faster and with less testing cost. We can also work with customers to help them understand the proper application of simulation in their product development process while operating within their quality system.