Parallel Performance: ANSYS FLUENT R13 with Service Pack 2

Recently, PADT has conducted some parallel benchmarks on our Linux cluster.  The model used for the tests is an ANSYS FLUENT *.cas file with 26.5 million cells and 6.5 million nodes.  The physics in this model are fairly simple: external, steady airflow over an object.  Simulations of 10 iterations were conducted using as many as 144 processors over three 48-core systems (the machines, named “cs0”, “cs1”, and “cube48”, are summarized in Table 1).  The first two machines in Table 1 (cs0 and cs1) are connected via Infiniband, so they effectively form a 96-core machine.  Furthermore, cs0 and cs1 are connected to cube48 via GigE network ports.  Each machine has two GigE network ports which connect to a Gigabit switch to allow for trunking.

Table 1 – Benchmark machine specifications

Machine    Processor Type                Processor Count
cs0        2.3 GHz AMD Opteron 6176SE    48
cs1        2.3 GHz AMD Opteron 6176SE    48
cube48     2.2 GHz AMD Opteron 6176SE    48

Two versions of FLUENT were tested: the original version 13 (R13) and version 13 with Service Pack 2 (R13 SP2).  The results were scaled according to the one-processor solve time to compute the “speedup” (defined as the solve time on 1 processor divided by the solve time on “N” processors) and are presented in Figure 1 with a comparison to “Linear”, or ideal, speedup (ideal speedup implies that the speedup value equals the number of processors for a given processor count).
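As a quick illustration of how the speedup and efficiency numbers are computed (the solve times below are made-up placeholders, not the benchmark data), a minimal sketch in Python:

```python
def speedup(t_one, t_n):
    """Speedup: solve time on 1 processor divided by solve time on N."""
    return t_one / t_n

def efficiency(t_one, t_n, n_procs):
    """Fraction of ideal (linear) speedup achieved; 1.0 is ideal."""
    return speedup(t_one, t_n) / n_procs

# Hypothetical solve times, for illustration only
t_1proc = 4800.0    # seconds on 1 processor
t_48proc = 110.0    # seconds on 48 processors
print(speedup(t_1proc, t_48proc), efficiency(t_1proc, t_48proc, 48))
```

Super-linear speedup, as seen on cube48 below, shows up as an efficiency greater than 1.0; it is often attributed to cache effects, where each per-processor partition fits better in cache.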


Figure 1 – Speedup vs. Processors with 6.5 million node ANSYS FLUENT model

The first result in Figure 1 (blue curve) was obtained using as many as 48 processors on the cs0 machine before installing Service Pack 2 for FLUENT version 13.  As illustrated in Figure 1, the speedup values of the blue result fall increasingly short of linear as the number of processors increases.  The next result (green curve with triangles) was calculated using Service Pack 2 for ANSYS FLUENT version 13 on the cube48 machine using as many as 48 processors.  This result on cube48 corresponds almost perfectly with linear speedup, and even displays some values which are “super-linear”.  This behavior led us to suspect that the improvements in Service Pack 2 were the primary cause of the increase in performance demonstrated by the cube48 speedup result.

However, further testing of the cs0 and cs1 machines with Service Pack 2 for ANSYS FLUENT seems to suggest otherwise.  These data are represented by the red squares in Figure 1.  Runs were conducted on as many as 144 processors, which involved a distributed run using 48 processors on cs0, 48 processors on cs1, and 48 processors on cube48.  The general trend of this result is the same as that recorded on cs0 before installing Service Pack 2, suggesting that some other effect (likely machine-related) is present.  Our current hypothesis is that the socket 2 processor (which handles the Infiniband UIO card hardware) is causing the slowdown, primarily because the Infiniband switch is present only on the cs0 + cs1 machines, and not on cube48.  This was suggested by the manufacturer of the Infiniband switch (SuperMicro).  Testing to assess this problem is ongoing.

Retrieving Accurate PSD Reaction Forces in ANSYS Mechanical

We just finished up a tech support call for a customer that wanted a way to get accurate reaction loads from a PSD run in ANSYS Mechanical. Alex Grishin took the call and provided a nice example to the customer, so we thought we would leverage that and share it with all of you.  Even if you are not in need of this particular function, it is a great example of using snippets.  If you are not familiar with this, check out our recent webinar on the subject.

You have to do this because an accurate PSD force calculation is not a simple thing.  The math is a bit complicated, because PSD responses are probabilities of results that lose their sign, and right now the calculation is only available in Mechanical APDL (MAPDL).  This is not a problem, because we can use an APDL command object to get the results from MAPDL and bring them back to ANSYS Mechanical.

Three Simple Steps

There are three very simple steps needed to get this done:

  1. Identify the geometry you want the reaction loads calculated on
    Do this by selecting a face, edge, or vertex and creating a named selection.  You will use that named selection to grab the nodes that sit on the piece of geometry and do an FSUM in MAPDL. In our example, we call the named selection react_area1.
  2. Tell the solver to store the required modal information
    Since ANSYS Mechanical doesn’t do PSD reaction force calculations, it saves disk space by not storing the information needed for them, but we need it.  So add a command object in your modal analysis environment that says save all my results (outres) and expand all my modes (mxpand):
  3. Calculate the reaction force
    Now we simply need to add a command object to the post processing branch that:
    • gets the PSD deflection results (set,3)
    • selects the named selection (cmsel), which is a nodal component in MAPDL
    • selects the elements attached to those nodes (esln)
    • calculates the reaction load (fsum)
    • stores the results in parameters that we return to ANSYS Mechanical (*get,my_).  Remember that any time you create a MAPDL parameter in the post processor that starts with my_, it gets returned to ANSYS Mechanical (that is the default; you can change the prefix)
    • selects everything so that MAPDL can keep post processing like normal

    For our example, it looks like this:

The following figure shows the model tree for our example, and the returned parameters:


Nothing fancy, simple in fact: Make a component, store the required info in the result files, do an FSUM and bring back the results.

You can download the example here.


That was a short article!  And no exciting pictures.  So… if you want, you could check out the travels of The PADT Hat around the world.

How to Get a Cool Grip on Your Data Center Cooling in Three Easy Steps!

CUBE 96 Mini-Cluster

Over the years I have learned to do more with less. When it comes to the information systems world, you all know the equation is often much more with much less. One of my to-do’s over the years, umm, kept getting bumped down the priorities list: the juggling act of making sure that the data center has enough cooling vs. power vs. (yes, again, in AZ) cooling. If you are an IT professional or even an engineer, you really don’t have time to try to convince someone, anyone, that we need to spend more money, even if you use the effective philosophy of Time, Money, and Quality. After dealing with “wish-ware” software vendors this past year, I added a fourth dimension to the above philosophy. It’s called Functionality; here is what Merriam-Webster has to say about functionality: the quality or state of being functional; especially: the set of functions or capabilities associated with computer software or hardware or an electronic device.

  1. Time – Try searching the internet for terms such as data center cooling calculator, data center cooling costs, or how can I save money with our data center cooling? You will suddenly have millions of search results at your beck and call, from video blogs on data center cooling to white papers on optimizing IT strategy for data center cooling. It is endless: 3 million plus hits on just one search term, data center cooling calculator. Wow. Start researching, my young padawan learner; fill out those lead-generating white papers. Keep it simple…
  2. Money – I would prefer that we used our dollars on buying a server with a couple of Intel Xeon E7-8870 processors. Or how about a quad-based AMD FX-8130P processor server! I do not have any budget for a (I am sure fabulous) data center cost-benefit crisis analysis.
  3. Quality – Will I even understand what the resulting white paper says? Will I look like even more of an idiot? This needs to be accurate information. I will have to do it myself or use a third-party data center analysis.
  4. Functionality – Will our current air conditioners hold up this year? What about when we hit 120 degrees? Oh my do I need to add more cooling power?

Wikipedia on British Thermal Unit (BTU) –

So why do I care about a BTU? Because approximately one "ton of cooling", which is a typical way people talk about cooling devices in the USA, is 12,000 BTU/h. This is the amount of power needed to melt one short ton of ice in 24 hours. Locked away in a climate-controlled vault is one of my data centers. “Said vault may or may not contain the following items on any given day.” After all, this is a mobile compute server world these days.

  • 13 Servers
  • 174 Cores (Mix of Windows/Linux servers)
  • 2 – Network Routers
  • 3 – Network Switches
  • Phone System
  • Voice Mail System
  • 1 LCD 20” monitor/KVM

Go Green in the Data Center! First, let’s get a “Cool Grip” on your data center…16,484.058 BTU/h

A couple of years ago, one of our ANSYS Mechanical simulation engineers, Jason Krantz, told me about a watt meter device: a handy little watt meter monitoring device designed by P3 International, the KILL A WATT™. Over the years that little watt meter has become one of my closest friends and allies in IT. Today, I was able to quickly assess (realistically, about four hours of time) just how many watts of power each of our servers, network devices, etc. used. I tried to be as accurate as I could without having to take out a second mortgage. So I made sure and verified that one of our PhD FE analysts or CFD analysts had our servers at or near 100% CPU use.

YOUTUBE VIDEO :: Check out this real-world example of an AMD Opteron 6174, 287-hour electrical cost usage test. The data shown in this video is of a server that has four AMD Opteron 6174 processors installed.

AMD Opteron 6174 Electrical Cost Usage

So, what is your magical number? Ours is 4,831 Watts

Do you know how many watts of power your server room is using? Could you even logically guess what that number is? Our magical number for server room #1 turned out to be 4,831 Watts. I do need to state that I was unable to take some of the devices offline. When that was the case, I used data pulled from the technical documents on the device manufacturer’s website.

So what is your BTU/h number? Ours is 16,484.058 BTU/h. Oh, and I don’t even like math. I know, I know, math was solved and perfected centuries ago. But how do I convert Watts into BTU?

I used a 99 cent app that I bought off of the iTunes App store called “Convert Units”.

  • Step 1 – Convert Watts into BTU/min
  • Step 2 – Multiply by 60 to get that value into BTU/h
  • Step 3 – Speak to your Operations Department, send an email, shout from the rooftops!! We need at least a two-ton air conditioning unit for Server Room #1
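The three steps above boil down to a single unit conversion (1 W ≈ 0.0568690 BTU/min, so 1 W ≈ 3.412 BTU/h); here is a small sketch that reproduces the Server Room #1 numbers:

```python
WATT_TO_BTU_PER_MIN = 0.0568690   # 1 watt expressed in BTU/min
TON_OF_COOLING_BTU_H = 12000      # one "ton of cooling" in BTU/h

def watts_to_btu_per_hour(watts):
    # Step 1: watts -> BTU/min; Step 2: multiply by 60 for BTU/h
    return watts * WATT_TO_BTU_PER_MIN * 60

measured_watts = 4831             # measured load for Server Room #1
btu_h = watts_to_btu_per_hour(measured_watts)
tons = btu_h / TON_OF_COOLING_BTU_H
print(f"{btu_h:.3f} BTU/h is about {tons:.2f} tons of cooling")
```

At roughly 1.37 tons of load, a two-ton unit gives comfortable headroom, which is where the “at least a two-ton” recommendation in Step 3 comes from.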

Now, with the precious BTU/h value in hand, I was able to speak the same language as our Director of Operations & Facilities Manager.

I wish you all could have been there when I walked up to Scott and told him the news. The dialogue went something like this:

“Scott, I wanted to talk to you about server room #1’s cooling situation… (pause for dramatic expression).” Almost immediately you could see Scott’s blood pressure rising, his brain quickly churning through mountains of air conditioning cooling information and data. I quickly calmed his anxiety and said these exact words: “Server Room #1’s BTU/h ratio is approximately 16,484.058 BTU/h.” It took Scott just a moment for this bit of information to register. I do believe that I actually heard the hallelujah chorus in the heavens. I could also see the peace that passes all understanding come across Scott’s face. It was as if I could read his mind, and he was thinking, how is this non-operations/facilities-type humanoid speaking my language? For Scott knew immediately that he had enough cooling power at this moment to cool that data center down all summer long.

DATA CENTER #1 – 274.7343 BTU/min × 60 = 16,484.058 BTU/h

How the heck are you making money today? Step out of the box, step into a CUBE: computers for CFD and FEA simulation.

Workbench and Excel, Part 1: Using the Excel Component System

It is a fact: Microsoft Excel is the most used engineering tool in the world.  If you are like me, everything you can’t do in ANSYS, you do in Excel.  And a lot of the time you wish you could talk directly from Excel to ANSYS – back in the day many of us wrote kludgey VB macros that would write APDL scripts and run ANSYS MAPDL in the background.  In the past couple of releases our friendly neighborhood ANSYS developers have added a lot of different ways to work with Excel: saving tables to a file, Python scripting to talk to Excel, and an Excel system.  I remember reading about these things as they came out, and even wrote about how cool they were, but I never had the opportunity to use them.

Then, last week, I noticed an Excel icon just sitting there in the toolbox, mocking me, taunting me to use it. 


So I dragged it out on my project page and tried to use it… and got nowhere.  My assumptions were not valid; it didn’t work the way I expected because, it turned out, I was not thinking about how it fit into the project correctly.  So I backed up, actually read the help (gasp!), and after some experimenting got it to work, or so I thought.  Then I talked to some folks at ANSYS, Inc. and they reminded me of what this is: an Excel solver system.  It is not a tool that lets you “drive” your project from Excel. It lets you use Excel to calculate values.

This posting is a summary of what I learned. And as I was working through it I thought it would be good to also cover the python interface to Excel and how to save tabular information to Excel. These will be covered in future articles (hence this being Part 1).

What You Need to Know

As I said above, my problem was that I was thinking about how Excel fit into my project all wrong.  The first thing you should do is read the help on the Excel system.  The best way to find it is to type excel into search.  The item with the most hits will take you to the article for component systems; from there, click on Microsoft Office Excel. (I wish I could just put a link in… grumble… grumble…)

To use the Excel system you do the following:

  1. Add the System to your project
  2. Make a spreadsheet and use range names to identify parameters
  3. Attach an Excel spreadsheet
  4. Edit the system and tell the program which parameters are input and which are output
  5. Go into the parameter manager and hook up any derived parameters you want to pass to Excel and use any of the Excel parameters with other parameters as needed
  6. Tell ANSYS to run a VB macro (if you want)
  7. Update your project or Design Points

We will go through the process in detail but first, a few things you should know:

  • The system kind of looks and feels like the parameter manager in Workbench, but it is not.  You have to think of Excel as a “solver” that feeds parameters from and to the parameter manager.
    • I struggled with this because I thought of output parameters as values calculated by Workbench and input parameters as ones that come from Excel, but the opposite is true.
    • Excel Input Parameter: A value calculated in Workbench parameter manager
    • Excel Output Parameter: A value calculated in Excel
    • You need to get your head around this or you will get stuck like I did.  The example should help.
  • Parameters that come from DesignModeler are dimensionless in the parameter manager.
  • This one really held me up for a while.  If you assign a parameter from Excel that has a unit to drive your geometry in DesignModeler, you get an error.
  • The solution is to make sure that you DO NOT use units on Excel parameters that you get from or pass to DesignModeler
  • When you attach the Excel file to your Excel system on the project page Workbench copies the Excel file to your project and buries it in dpall\XLS.
    • You will get burned by this if you go to your original excel file, edit it, then try and update your project.  Your changes will not show up.  That is because it is not linked to the original file, it is linked to a copy stored in that XLS directory.
    • Once you have linked a file you should exit Excel then open the file by RMB on the Excel system and choose  “Open File in Excel” (see below for more on this whole process)
    • I recommend that you start by making your Excel file, save it with the name you want in the C:\Temp directory, attach that file, close Excel, then open from Workbench.
    • Now you have a file to add your stuff to and you don’t have to worry about having an earlier version lurking around.
    • An important side effect of this is if you delete your system, it deletes your Excel file!  So make sure you make a copy or do a save as before you remove the Excel system
  • To get changes in Excel to show up in your project, you need to save the file AND refresh/reload.
    • Making a change to the Excel file will put the system out of date.  A refresh on the project page or a reload on the “Edit Configuration” page will update things.
  • The parameter names in Excel are case sensitive.  So whatever your prefix is in the system properties (WB_ by default) you need to have the same in your Excel spreadsheet for range names.
  • To get a full update, including running any macros and doing any calculation, you have to update the system.   This is kind of obvious, but I kept forgetting to do it.
  • Your Excel file will not update if you use RSM.  Make sure your default for updating your project is to run local and, that if you are using design points, you set that update to run in the foreground.
    • The easiest way to check and change this is to click on the parameter bar and view its properties.  Under Design Point Update Process set Update Option to Run in Foreground.
  • If you want to have your Excel file define both input and output parameters for the same ANSYS simulation, Workbench sees that as a “cyclic dependency” and will not let you do it.
    • Although annoying at first glance, it kind of makes sense.  If you feed a value to Excel and then Excel calculates a new value that affects your ANSYS model, you need to update the ANSYS model, which will change the value that gets passed into Excel, which will change the value that gets passed out, which changes your ANSYS model, which… and so it goes in a loop. This is considered a bad thing.
    • This goes back to the fact that Excel should be used as a solver, not as a “driver” of your simulation.
    • If you do want to drive your analysis from Excel, you’ll need to do some scripting. We’ll cover that in a future article.
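To see why Workbench refuses the cycle, here is a toy sketch (pure Python; the formulas are made up and have nothing to do with actual Workbench internals). If Excel both consumed and produced values for the same simulation, a consistent answer would require iterating updates until they reach a fixed point, which the one-pass project update does not do:

```python
def excel_calc(ansys_result):
    # made-up "Excel formula": derive a new load from the result
    return 0.25 * ansys_result + 10.0

def ansys_solve(load):
    # made-up "simulation": result depends on the load from Excel
    return 2.0 * load

# Resolving the cycle by hand means updating until nothing changes
load = 1.0
for _ in range(100):
    new_load = excel_calc(ansys_solve(load))
    if abs(new_load - load) < 1e-9:
        break
    load = new_load
print(load)   # converges toward the self-consistent value
```

With these made-up formulas the values happen to settle down; in general such a loop may never converge, which is why treating Excel as a one-way solver is the sane design.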

    The Process

    I started this article with a really cool valve demo model, then found that it was just too slow and painful to work with for showing how the Excel system works.  So I went back to my second favorite type of model, a simple “tower of test.”  (My favorite is a FPWHII – a flat plate with a hole in it.)  You can download the project here.

    Add the System to Your Project

    Like every other system in Workbench, you simply drag from the toolbox to the Project Schematic.  Notice how the green “drop zones” are all empty spaces.  You can’t drop it on an existing cell in a system because there is no dependency between other systems and an Excel system.  The Excel system is connected through parameters, which we will see in a bit.


    Once you have dropped it onto the schematic, click on the Top cell (C1 in this case) and check out the properties (RMB Properties if the window is not already open).  From the properties you can see the system ID (XLS) and you can specify an Analysis Type.  You can leave it blank or type in something like “Home Grown Optimization.”

    Then click on the Analysis cell (C2 in this case) and look at the properties.  They are shown here:


    One key thing to note is that the directory where the Excel file will be copied is shown.  I did this once already on this project so it made a XLS-1 directory. If I did it again, I’d see XLS-2, etc…   In fact, by the time I got done with this article and trying all sorts of things, it ended up in XLS-8.

    The most important option under Setup is the “Parameter Key.”  Any Excel named range that begins with this string will get read into the parameter manager.  If you make it blank, all the named ranges will come in.

    Make a Spreadsheet and Use Range Names to Identify Parameters

    Now you need to create your spreadsheet.  You need to plan ahead here a bit.  Figure out what parameters you need Excel to get from your models and what parameters you want to send back.  Come up with good names because that is what gets passed to Workbench.

    What happens when you attach a file is that Workbench goes to the Excel sheet and steps through all the named ranges in the file.  If it finds one with a name that starts with the filter value, it grabs the first value in the range as the parameter value and then grabs the second as the units.  If your range is bigger, it just ignores the rest.

    So this tells us that we need to create a range that has at least one cell, or two if units are important. For our simple example we will be calculating a cost and outputting it using the input Volume, Length, and Width. There is a formula in the cost cell that multiplies those values by pre-set costs per unit volume, length, and width and sums them up to get a cost.
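The attach-time scan described above (filter named ranges by the parameter key, take the first cell as the value, the second as the units, ignore the rest) can be sketched in plain Python; the range contents here are made-up stand-ins for what Workbench reads from the workbook:

```python
def extract_parameters(named_ranges, key="WB_"):
    """Mimic Workbench's scan of a workbook's named ranges.

    named_ranges maps range name -> list of cell values.  Only names
    starting with `key` become parameters; the first cell is the value,
    the second (if present) the units, and any extra cells are ignored.
    An empty key matches every named range.
    """
    params = {}
    for name, cells in named_ranges.items():
        if not name.startswith(key):
            continue                      # filtered out by the key
        value = cells[0]
        units = cells[1] if len(cells) > 1 else None
        params[name] = (value, units)
    return params

# Made-up ranges mirroring the example's inputs and output
ranges = {
    "WB_L": [10.0, "in"],      # length, driven from Workbench
    "WB_W": [2.0, "in"],       # width
    "WB_V": [0.0, "in^3"],     # volume (start inputs at zero)
    "WB_Cost": [125.0],        # cost, calculated in Excel
    "scratch": [99.9],         # ignored: doesn't match the key
}
print(extract_parameters(ranges))
```

Note how this also illustrates the case-sensitivity caveat from the list above: the prefix match is an exact string comparison, so wb_l would not come in.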

    So the laziest thing you can do is select a cell and name it. 

    But it will help you and others if you actually make a table that has a descriptive name, the parameter name (WB_ should be your default), the value for that parameter, and the units, if any.  Note that for an input parameter you can just set the value to zero to get started.  Here is what the tables look like for our example:


    To create a range, you select the value and units for a given parameter, press the Right-Mouse-Button (RMB), and select “Define Range”


    A cool thing that Excel does is to use the value just to the left of the range as the default name of the range.  So by creating the table you save yourself some typing.  Or, if you don’t use a table, just type in the name you want.


    Now just click OK and you have a named range.  You can repeat this for each range, or you can get fancy and use the fact that your data is in a table, with the parameter name to the left, to quickly generate all the ranges at once.

    To do this, select the WB Param, value and unit columns. Then go to –>Formulas –> Defined Names –> Create from Selection.  When the dialog box pops up make sure only “Left column” is checked.  Click OK.


    In one fell swoop you created all your named ranges.  To see, edit, and delete ranges, regardless of how they were created, go to Formulas –> Defined Names –> Name Manager. 


    Take some time to look at this and understand it. When you are debugging and fixing stuff, you will use this window. 

    Now you have an Excel file that Workbench will like!  Time to attach it. Save it (I recommend saving to temp so you don’t get it confused with the copy that Workbench will make).

    Attach an Excel Spreadsheet

    This is the easiest step. Simply RMB on the Analysis cell in the system and browse for the file.

    Now your Analysis cell has a lightning bolt; update to have it read the file and find parameters.  If you have your parameters set up wrong, such that you don’t have any named ranges with the specified prefix, it will generate an error but will still attach the file.

    NOTE:  If you get some weird errors  “Unexpected error…” and “Exception from HRESULT:…” when updating your Excel system, check your Excel file.  Odds are you have an open dialog box or the file is somehow locked. The error generates because Workbench can’t get Excel to talk to it. 

    Edit the System and Tell the Program Which Parameters are Input and Which are Output

    Although you have a green check mark, you will notice that your system is still not connected to your parameters, and therefore it is not connected to the rest of your model.  The way to fix this is to RMB->Edit Configuration. Double-clicking on Analysis also does the same thing.


    This puts you in Outline Mode.  You should be familiar with this mode from the Parameter Manager or Engineering Data. 


    Take some time to explore this outline.  Notice the Setup cell, where you have access to the system properties.  Then its child, the Excel file; click on it to see the properties for the file connection.  Under that is the important stuff: the parameters.

    If you did everything correctly, you will see all of your parameters in alphabetical order.  If you click on one, you will see the properties.  Here they are for the cost value:


    It shows the range, the value and units (C column) and the Quantity name.  Workbench guesses by units.  So PSI comes in as pressure by default.  If it is a stress, you need to change it here.

    But your main task right now is to tell Workbench which of these parameters you want passed to the parameter manager, and what type of parameter, input or output, they are.  Here is where I got screwed up, because an input parameter in the parameter manager is an output parameter here.  Remember, the Excel system is a solver that takes in parameters from the parameter manager and sends back values to drive your models.  So in our example, all the dimensions and the volume are passed from the parameter manager TO Excel, so they are input.  The cost is passed from Excel to the parameter manager, so it is output.


    Now you have hooked up your Excel system. Click on the “Return to Project” at the top of the window and you will go back to the project schematic and see that a Parameters cell has been added to the system and it has been attached to the parameter bar.


    Go Into the Parameter Manager and Hook Things Up

    Although the Geometry and Mechanical systems are connected through the parameters to the Excel system, no relationships exist.  We need to assign some values to our Excel parameters.

    This is what our test model looks like before we do this:


    Our goal is to have the parameters in the first column below to drive those in the second:

    Driving Parameter     Driven Parameter
    P7: Len               P10: WB_L
    P5: W1                P12: WB_W
    P8: Solid Volume      P11: WB_V

    I tried to just click on the value in the Value column (C) and change it from the number it is to the parameter name it should be, but that does not work, because the parameter is set as a constant.  So you need to click anywhere on the row for the parameter you want to set, then go down to the Properties window and change the Expression to the parameter ID you want.  This changes the Expression to be an equation and the Expression Type to be Derived:


    That is it. You now have Excel in your project as a solver. Update your project and the cost will be calculated and presented as a parameter for optimization, DOE studies or whatever you want.

    Tell ANSYS to run a VB macro (if you want)

    One really cool feature is that you can tell the program to run a VB macro on an update. What you do is go to your system, click on Analysis, then RMB-> Edit Configuration.  Then click on the file cell (A3).  The property area now shows info on your file, and has a Use Macro row at the bottom. Click on the checkbox and a Macro Name row will pop up.  Enter the name of a macro in your spreadsheet and you are off.

    Here is a silly example where I use a macro to calculate a value.  For the example I put in the well known equation for deriving the Kafizle of a system:

    1. Create a new row in my table for the Kafizle value to go in
    2. Create a name WB_KF for the value
    3. Write my macro (don’t laugh):  
      Sub CalcKafizle()
             Range("E7") = Rnd(1) + Cos(Range("E5").Value)
      End Sub
    4. Save my sheet and KABOOM. I now need to save it as an xlsm, not xlsx!  I didn’t think about that!
      • This means my Excel connection won’t work.  So you have to delete your system and start again with your macro file.  So plan ahead! I’m glad I did this silly example rather than running into it on a real problem.
    5. Once everything is right again, go into the outline for the excel system and make that new parameter (WB_KF)  an output parameter.
    6. Then click on the File (A3) and go to the properties window and click on the Use Macro checkbox
    7. Put the macro name into the  Macro Name field


    Now you can run your project, and every time you do, the program will calculate a new cost and Kafizle value.  This of course begs the question: what are the proper units of Kafizle?  Here is the Design Point table:


    Thoughts and Conclusions

    I started this effort thinking I would drive my model from Excel, basically replacing the Parameter Manager with Excel.  But that does not work because Excel doesn’t know enough about your project to handle the dependencies that can really cause problems if you don’t solve in the correct order.  So once I figured that out I found some pretty good uses.  Here are some other ideas for how to use the Excel System:

    • Do additional post processing on result values
    • Use formulas or lookup tables to calculate loads. 
    • Just make sure that the values you take from your ANSYS model into Excel (inputs) are also input parameters in the parameter manager. 
  • Use tables and lookups to calculate input values for an analysis
    • A good example would be a “family of parts” application where you put in a part number and Excel does a vlookup() on a table that has all the input parameters listed by part number.
  • To include results from an ANSYS analysis in a system model you have in Excel.
    • You still have to force the update on the ANSYS side, which is not the ideal way to run a system model, but it may be easier than writing scripts and hooking it up that way.

    This is a new feature at R13 and it can be a bit “touchy,” especially if you are rooting around in it like a javelina rooting around in your flower bed (Arizona reference).  If you do something really crazy it can lose its way and start generating errors. I found the best solution at that point was to save a copy of my Excel file, delete the system, and start over.

    This took a lot longer than I thought to write, but the Excel System does a lot more than I thought.  I think as we all start thinking about how to use this tool, people will come up with some pretty cool applications.

    How to Run ANSYS Release 13.0 Workbench on 64-bit Linux

    Getting ANSYS Workbench up and running on Linux at R13 is pretty simple.  You just have to make sure that a few things are in place and some packages are loaded.  Then it works great.  Here is a quick HOW-TO on getting things going:

    Pre-Install Tasks

    • Install CentOS 5.3 or greater or RHEL5
      • Download and install the latest graphics card drivers for your video card. Restart
    • Next, the GNOME Desktop Environment is required for optimum use.

    • Next, using the Linux package manager, select the Development main group and then select all the additional libraries needed. (see images below)



    • Select Optional packages and then select the additional MESA libraries (see below).


    • Next, select the Base System main group, then X Window System, and Legacy Software Support. With Legacy Software Support still selected, click Optional Packages, select the additional package, openmotif22, and click Close.



    • Restart the system

    Post ANSYS Install Setup Tasks

    • Within your terminal session, type:


      • Click Configure Products, then select the products to configure or reconfigure
    • Pro/E Configuration GUI


    • Unigraphics NX install Configuration GUI


    • Click Continue and the product configuration script will run.


    • Click Finish

    How to Launch ANSYS Release 13.0 Workbench

    • Open a Linux terminal session:
    • Change your path to include /ansys_inc/v130/Framework/bin/Linux64


    • Next, launch the program by typing ./runwb2 and pressing Enter


    • Basic opening up of a Design Modeler project
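    The path and launch steps above can be sketched as a short shell session.  The path shown is the typical default R13 install location; adjust it for your system.  The launch line is commented out here since it only works on a machine with a full install:

```shell
# Add the Workbench launcher directory to PATH
# (assumes the default /ansys_inc install location for R13)
WBDIR=/ansys_inc/v130/Framework/bin/Linux64
export PATH="$PATH:$WBDIR"

# Then launch Workbench:
# runwb2
```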

    Here it is: ANSYS 13 Workbench on CentOS 5.5 64-bit Linux



    Dad, What Do You Do at Work?

    I’m sure the question comes up for a lot of us from time to time, whether from one of our own offspring, another relative, or an acquaintance.  “Just what is it that you do, anyway?”  Typical answers might be something short and sweet, such as, “I’m an engineer.”  A more detailed response might be, “I use a technique called finite element simulation which is a computer tool we use to simulate the behavior of parts or systems in their real world environment.”

    You’ll probably find that people’s eyes glaze over and they start looking for someone else to talk to by the time you get to the end of that second quote above.  In fact, I find that my extended family is much more interested in my brother-in-law’s surgery stories from the operating room than they are in my own triumphs and challenges in the engineering simulation world.  Maybe you’ve had that same sort of reaction.  You have probably noticed that there are a whole lot more medical dramas on TV at any one time than there are engineering dramas.  They’ve got many characters from Marcus Welby on up to Dr. Ross on ER, Jack on Lost, to Dr. Grey on Grey’s Anatomy, with more than I can count in between.

    We’ve got, well, Scotty.  And even then I think Dr. McCoy got more air time.

    So when my kids ask me what I do at work, I recall a scene from that late 1980’s to early 1990’s TV show The Wonder Years.  In the episode “My Father’s Office,” Kevin asks his dad what he does for a living.  His father responds in an angry tone, “You know what I do!  I work at NORCOM.”  As if that were a sufficient explanation.  I suppose it was his way of saying, “It’s complicated.  It can be high pressure.  You might find it boring.  It puts food on the table and a roof over our heads, though.”

    Rather than reply that way, I’ve tried to come up with what is hopefully a better response.  In fact, this concept constitutes the first portion of our Engineering with FEA training class, written by Keith DiRienz of FEA Technologies with contributions by yours truly.

    I can’t guarantee that your audience’s eyes won’t glaze over by the end, nor that you’ll become the hit of the party, but this is free and you get what you pay for.  This explanation can obviously be adjusted based on the audience, but it goes something like this:

    Simple explanation:

    –We have equations to solve for stresses and deflections in simply-shaped parts such as cantilevered beams.

    –No such equations exist for complex shaped objects subject to arbitrary loads.

    –So, using finite elements, we break up a complex part into solvable chunks, leading to a finite set of equations with a finite number of unknowns.

    –We solve the equations for the chunks, and that ends up giving us the results for the whole part.

    If we want more details, we can use this:  As an example, here is a simple beam, fixed at one end with a tip load P at the other end.  We have an equation to calculate the tip deflection u for simple cases:


    In the above equation, E is the Young’s modulus, a property of the material being used, and I is the moment of inertia, a property of the shape of the beam cross section.
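    For reference, the classic closed-form answer for this case is u = P·L³/(3·E·I).  A quick sanity check in Python, using made-up numbers in consistent SI units:

```python
# Tip deflection of a cantilever beam with an end load: u = P*L^3 / (3*E*I)
# The numbers below are illustrative only.
P = 100.0        # end load, N
L = 0.5          # beam length, m
E = 200.0e9      # Young's modulus (steel), Pa
I = 8.33e-10     # moment of inertia, m^4 (about right for a 1 cm square section)

u = P * L**3 / (3.0 * E * I)
print(u)  # tip deflection in meters, roughly 0.025 m for these numbers
```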

    For more complex shapes and loading conditions, we don’t have simple equations like that, but we can use the concept by dividing up our complex shape into a bunch of simpler shapes.  Those shapes are called elements.


    A useful equation for us is the linear spring equation, F=Kx, where F is the force exerted on the spring, K is the stiffness of the spring, and x is the deflection of one end of the spring relative to the other.  If we extend that concept into 3D, we can have a spring representation in 3D space, meaning the X, Y, and Z directions.  In fact, the tip deflection equation shown above for the beam fixed at one end can be considered a special case of our linear spring equation, solved for deflection with a known applied force.

    By assembling our complex structure out of these 3D springs, or elements, we can model the full set of geometry for complex shapes.  The process of making the elements is called meshing, because a picture or plot of the elements looks like a mesh.

    Using linear algebra and some calculus (stay in school, kids!) we can set up a big series of equations that takes into account all the little springs in the structure as well as any fixed (unable to move) locations and any loads on the structure.  The equations are too big for normal people to solve by hand, so computers are used to do this.
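    To make the “bunch of little springs” idea concrete, here is a toy one-dimensional sketch: two springs in series, fixed at one end, with a force pulling on the free end.  This is nothing like what ANSYS actually does internally, but it is the same assemble-and-solve idea, with illustrative numbers:

```python
# Toy 1-D finite element assembly: two axial springs in series.
# Node 1 is fixed; a force F pulls on node 3.
# Each spring contributes a 2x2 stiffness block [k, -k; -k, k].
k1, k2 = 1000.0, 2000.0   # spring stiffnesses (N/m), made-up values
F = 10.0                  # applied force (N)

# Assembled 3x3 global stiffness matrix (shown for reference)
K = [[ k1,     -k1,      0.0],
     [-k1,  k1 + k2,    -k2],
     [ 0.0,    -k2,      k2]]

# Applying the fixed support at node 1 drops its row and column,
# leaving a 2x2 system for the free nodes 2 and 3:
#   (k1+k2)*u2 - k2*u3 = 0
#       -k2*u2 + k2*u3 = F
# Solve the 2x2 system by Cramer's rule:
a, b, c, d = k1 + k2, -k2, -k2, k2
det = a * d - b * c
u2 = (0.0 * d - b * F) / det
u3 = (a * F - c * 0.0) / det
print(u2, u3)  # deflections at nodes 2 and 3
```

    A nice check: springs in series give u3 = F/k1 + F/k2, which matches what the assembled system produces.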

    When the computer is done solving we end up with deflection results in each direction for the corner points (called nodes) in each element.  Some elements have extra nodes too.

    From those deflection results, the computer can calculate other quantities of interest, such as stresses and strains.  Further, other types of analyses can be solved in similar fashion, such as temperature calculations and fluid flow.

    Here is an example using a familiar object that practically everyone can relate to.  This plot shows the mesh:


    This is fixed in the blue region at the bottom and has an upward force on the left end.  The idea here is that someone is holding it tightly on the blue surface and is pulling up on the red surface.


    After solving the simulation, we get deflection results like this:


    The picture above shows that the left end of the paper clip has deflected upward, which is what we would expect based on common experience with bending paper clips.  Using our finite element method, we can predict the permanent deflection resulting from bending the paper clip beyond its ‘yield’ point, resulting in what we call plastic deformation.

    Clearly there is a lot more to it than these few sentences describe, but hopefully this is enough to get the point across.

    In sum, not as exciting as my brother-in-law’s medical stories involving nail guns or other gruesome injuries, but hopefully this makes the world of engineering simulation a little more accessible to our friends and family.

    In the Wonder Years episode, Kevin ends up going to work with his father to see for himself what he does.  I won’t spoil the episode, but hopefully you’ll get the chance to show your family and friends what it is that you do from time to time.

    Knowing the ID of Coordinate Systems Created in ANSYS Mechanical – or perhaps – You didn’t know that, everyone knows that…

    During this week’s webinar on Using APDL Snippets in ANSYS Mechanical, a question came up about coordinate systems. I actually don’t remember the original question, but in answering it a question came into my mind: how do you get access to the ID of coordinate systems that you create in ANSYS Mechanical?

    For a lot of items you can add to the ANSYS Mechanical model tree, you can attach a Command Object (snippet) and ANSYS Mechanical passes a parameter with the ID of the thing you want access to (material, contact pair, spring, joint, etc…).  But there is no way to add a Command Object to a coordinate system.

    So I dug into it and found something I didn’t know.  The problem with discovering something like this and sharing it is that you either just uncovered something that can help a lot of users or you are going to embarrass yourself over something that everyone already knows.  The idea of a blog is to be casual and informal, so let’s see which I did.

    If you click on a user-created coordinate system in ANSYS Mechanical, the Details view lists two things in the first grouping, “Definition”: Type and Coordinate System ID.  The default for the ID is “Program Controlled.”  I’d never clicked on it to see what the other options are.  It turns out you can change it to “Manual.”


    Once you do that it gives you a second “Coordinate System ID” line and you can put in whatever number you want there.


    Problem solved.  Just give your coordinate system whatever number you want and use that number in your macro.  Couldn’t be easier.
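    For example, if you set the Coordinate System ID to 500 in the Details view, a Command Object elsewhere in the tree can reference it directly.  A hypothetical MAPDL fragment (500 is just the number chosen above; pick whatever suits you):

```
! Hypothetical snippet: use the manually-assigned coordinate system ID
csys,500        ! make coordinate system 500 the active coordinate system
nrotat,all      ! e.g., rotate the selected nodes' coordinate systems into it
```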

    Hopefully, this was helpful. If so, rate this posting at a 5.

    If you already knew this little factoid, rate it as a 1.

    – Eric

    Files and Info from “Using APDL Snippets in ANSYS Mechanical”

    We just finished the webinar “Using APDL Snippets in ANSYS Mechanical” and wanted to share the files we used and some additional information.

    The presentation can be found here.

    The sample script for plotting mode shapes is here.

    You can view a recording of the presentation here on our WebEx site.

    PADT’s ANSYS Webinar Series is now off on Summer Break. We will be back in August!

    A Further Assessment of Design Assessment

    Last week’s PADT ANSYS Webinar Series webinar was on a little-used feature in ANSYS Mechanical called Design Assessment, or DA.  If you missed it, you can view a recording at:


    And a PDF of the presentation can be found here:

    As promised, this week’s The Focus posting will be a follow-up on that webinar with an in-depth look at the scripts that were used in the example.  But first, let us review what DA is for those that don’t remember, fell asleep, or don’t want to sit through 60+ minutes of me mumbling into the telephone.

    Review of DA

    Design Assessment is a tool in ANSYS Workbench that works with ANSYS Mechanical to take results from multiple time or load steps, perform post processing on those results, and bring the calculated values back into ANSYS Mechanical for viewing as contour plots.  It was developed to allow the ANSYS, Inc. developers to add some special post processing tools needed in the offshore industry, but as they were working on it they saw the value of exposing the Application Programming Interface (API) to the user community so anyone can write their own post processing tools.


    You use it by adding a Design Assessment system to your project.  In its most basic form, the default configuration, it is set up to do load case combinations.  That alone makes it worth knowing how to use.  But if you want to do more, you can point it to a custom set of post processing files and do your own calculations.

    A custom DA is defined by two things.  First is an XML file that tells ANSYS Mechanical what user input you want to capture, how you want to get results out of Mechanical, what you want to do with the results, and how you want them displayed in your model tree.  Second is one or more Python scripts that actually do the work of capturing the user input, getting results, doing the calculation, and sticking the resulting values back in the model.  Both are well documented and, once you get your head around the whole thing, pretty simple.

    Right now DA works with Static and Transient Structural models.  It also only allows access to element stress values.  Lots of good enhancements are coming in R14, but R13 is mature enough to use now.

    If that review was too quick, review the recording or the PowerPoint presentation.

    A Deep Dive Into an Example

    For the webinar we had a pretty simple, and a bit silly, example – the custom post processing tool takes the results from a static stress model and truncates the stress values if they are above or below user-specified values.  Not a lot of calculating, but a good example of how the tool works. 

    Note, this posting is going to be long because there is a lot of code pasted in.  For each section of code I’ll also include a link to the original file so you can download that yourself to use.

    Here is the XML file for the example (Original File):

       1:  <?xml version="1.0" encoding="utf-8"?>
    The first lesson learned was that you have to get all the tags and headers just right. 
    It is case sensitive, and all the version information and other boilerplate has to be there.
    Cutting and pasting from something that works is the best way to go.
       2:  <!-- 
       4:            XML file for ANSYS DA R13
       5:            Demonstration of how to use DA at R13
       6:            Goes through results and sets stresses below floor value to floor
       7:              value and above ceiling value to ceiling value.     
       9:          User adds DA Result to specify Floor and Ceiling
      10:          Attribute group can be used to specify a comment
      12:          Calls and in c:\temp
      14:          Eric Miller
      15:          5/18/2011
      16:  -->
    Everything is in a DARoot tag.
      18:  <DARoot ObjId ="1" Type="CAERep" Ver="2">
    Attributes tags define items you want to ask the user about.  
      19:    <Attributes ObjId="2" Type="CAERepBase" Ver="2">
    This first attribute is a drop down for the user to decide which stress value they want 
    You use <AttributeType> to make it a dropdown then put the values in <Validation>
      20:      <DAAttribute ObjId="101" Type="DAAttribute" Ver="2">
      21:        <AttributeName PropType="string">Stress Value</AttributeName>
      22:        <AttributeType PropType="string">DropDown</AttributeType>
      23:        <Application PropType="string">All</Application>
      24:        <Validation PropType="vector&amp;lt;string>">
      25:               SX,SY,SZ,SXY,SYZ,SXZ,S1,S2,S3,SEQV
      26:        </Validation>
      27:      </DAAttribute>
    Next is the prompt for the stress floor value
      28:      <DAAttribute ObjId="102" Type="DAAttribute" Ver="2">
      29:        <AttributeName PropType="string">Stress Floor</AttributeName>
      30:        <AttributeType PropType="string">Double</AttributeType>
      31:        <Application PropType="string">All</Application>
      32:        <Validation PropType="vector&amp;lt;string>">-1000000,10000000</Validation>
      33:      </DAAttribute>
    Then the Ceiling Value
      34:       <DAAttribute ObjId="103" Type="DAAttribute" Ver="2">
      35:        <AttributeName PropType="string">Stress Ceiling</AttributeName>
      36:        <AttributeType PropType="string">Double</AttributeType>
      37:        <Application PropType="string">All</Application>
      38:        <Validation PropType="vector&amp;lt;string>">-1000000,10000000</Validation>
      39:      </DAAttribute>
    Finally a user comment, just to show how to do a string.
      40:      <DAAttribute ObjId="201" Type="DAAttribute" Ver="2">
      41:        <AttributeName PropType="string">User Comments</AttributeName>
      42:        <AttributeType PropType="string">Text</AttributeType>
      43:        <Application PropType="string">All</Application>
      44:      </DAAttribute>
      45:    </Attributes>
    To expose an attribute, you can put it in an AttributeGroup to get info shared by
    all DA result objects.  This one just does the Comment
      46:    <AttributeGroups ObjId ="3" Type="CAERepBase" Ver="2">
      47:      <DAAttributeGroup ObjId="100002" Type="DAAttributeGroup" Ver="2">
      48:        <GroupType PropType="string">User Comments</GroupType>
      49:        <GroupSubtype PropType="string">Failure Criteria</GroupSubtype>
      50:        <AttributeIDs PropType="vector&amp;lt;unsigned int>">201</AttributeIDs>
      51:      </DAAttributeGroup>
      52:    </AttributeGroups>
    <DAScripts> is the most important part of the file.  It defines the scripts to run for a 
    solve and for an evaluate.  At R13 you do need to specify the whole path to your python files
      53:    <DAScripts ObjId="4" Type="DAScripts" Ver="2">
      54:      <Solve PropType="string">c:\temp\</Solve>
      55:      <Evaluate PropType="string">c:\temp\</Evaluate>
      56:      <DAData PropType="int">1</DAData>
      57:      <CombResults PropType="int">1</CombResults>
      58:      <SelectionExtra PropType="vector&amp;lt;string>">DeltaMin, DeltaMax</SelectionExtra>
      59:    </DAScripts>
    The other way to get user input is to put attributes into a <Results> object.  
    Here we are putting the choice of stresses (101),and the floor and ceiling (102,103)
    into an object
      60:    <Results ObjId="5" Type="CAERepBase" Ver="2">
      61:      <DAResult ObjId ="110000"  Type="DAResult" Ver="2">
      62:        <GroupType PropType="string">Ceiling and Floor Values</GroupType>
      63:        <AttributeIDs PropType="vector&amp;lt;unsigned int>">101,102,103</AttributeIDs>
      64:        <DisplayType PropType="string">ElemCont</DisplayType>
      65:      </DAResult>
      66:    </Results>
      67:  </DARoot>

    I always have to go over XML files a few times to figure them out. There is a lot of information, but only a small amount that you need to pay attention to.  After a while you figure out which is which.
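    If you want a quick way to see just the parts that matter, a few lines of Python can pull the attribute definitions out of a DA XML file.  This is a sketch; the embedded string is a trimmed-down mimic of the structure in the file above:

```python
# Sketch: list the user-input attributes defined in a DA-style XML file.
import xml.etree.ElementTree as ET

xml_text = """<DARoot ObjId="1" Type="CAERep" Ver="2">
  <Attributes ObjId="2" Type="CAERepBase" Ver="2">
    <DAAttribute ObjId="101" Type="DAAttribute" Ver="2">
      <AttributeName PropType="string">Stress Value</AttributeName>
      <AttributeType PropType="string">DropDown</AttributeType>
    </DAAttribute>
    <DAAttribute ObjId="102" Type="DAAttribute" Ver="2">
      <AttributeName PropType="string">Stress Floor</AttributeName>
      <AttributeType PropType="string">Double</AttributeType>
    </DAAttribute>
  </Attributes>
</DARoot>"""

root = ET.fromstring(xml_text)
for att in root.iter("DAAttribute"):
    # Print the ID, prompt text, and widget type of each user input
    print(att.get("ObjId"), att.findtext("AttributeName"),
          att.findtext("AttributeType"))
```

    To read a real file, swap `ET.fromstring(xml_text)` for `ET.parse(path).getroot()`.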

    Now on to the fun part, the Python scripts.  The first one gets executed when the user chooses to solve; it is shown below, and you can get the original here.   The comments inside pretty much explain it all.  It basically does two things:  creates an ANSYS MAPDL macro that extracts all the element stresses and puts them in a text file, then runs MAPDL with that macro.

       1:  import subprocess
       2:  import os
       3:  import shutil
       5:  #======================================================================
       6:  #
       7:  #   ------------------------------------------------------------  PADT
       8:  #
       9:  #
      10:  #
      11:  #         Demonstration python script for Design Assessment in 
      12:  #            ANSYS R13
      13:  #         Called on solve from ANSYS Mechanical
      14:  #         Bulk of code copied and modified from TsaiWu example
      15:  #            provided by ANSYS, Inc.
      16:  #
      17:  #       E. Miller
      18:  #       5/18/2011
      19:  #======================================================================
      20:  def trunc_solve(DesignAssessment) :
      22:      # Get number of elements in model
      25:      # Change directory to current workspace for DA
      26:      originaldir = os.getcwd()
      27:      os.chdir(DesignAssessment.getHelper().getResultPath())
      29:      #Get the path to the results file name and the location of the temp directory
      30:      rstFname = DesignAssessment.Selection(0).Solution(0).getResult().ResultFilePath()
      31:      rstFname = rstFname.rstrip('.rst')
      32:      apath = DesignAssessment.getHelper().getResultPath()
      34:      print "Result File:", rstFname
      35:      print "Apath:", apath
      37:      # Write an ANSYS APDL macro to start ANSYS, resume the result file, grab the stress
      38:      #   results and write them to a file
      40:      macfile = open(DesignAssessment.getHelper().getResultPath()+"\\runda1.inp", "w")
      42:      macfile.write("/batch\n")
      43:      macfile.write("/post1\n")
      44:      macfile.write("file,"+rstFname+"\n")
      45:      macfile.write("set,last\n")
      46:      macfile.write("*get,emx,elem,,num,max\n")
      47:      macfile.write("*dim,evls,,emx,10\n")
      48:      macfile.write("etable,esx,s,x\n")
      49:      macfile.write("etable,esy,s,y\n")
      50:      macfile.write("etable,esz,s,z\n")
      51:      macfile.write("etable,esxy,s,xy\n")
      52:      macfile.write("etable,esyz,s,yz\n")
      53:      macfile.write("etable,esxz,s,xz\n")
      54:      macfile.write("etable,es1,s,1\n")
      55:      macfile.write("etable,es2,s,2\n")
      56:      macfile.write("etable,es3,s,3\n")
      57:      macfile.write("etable,eseqv,s,eqv\n")
      58:      macfile.write("*vget,evls(1, 1),elem,1,etab,  esx\n")
      59:      macfile.write("*vget,evls(1, 2),elem,1,etab,  esy\n")
      60:      macfile.write("*vget,evls(1, 3),elem,1,etab,  esz\n")
      61:      macfile.write("*vget,evls(1, 4),elem,1,etab, esxy\n")
      62:      macfile.write("*vget,evls(1, 5),elem,1,etab, esyz\n")
      63:      macfile.write("*vget,evls(1, 6),elem,1,etab, esxz\n")
      64:      macfile.write("*vget,evls(1, 7),elem,1,etab,  es1\n")
      65:      macfile.write("*vget,evls(1, 8),elem,1,etab,  es2\n")
      66:      macfile.write("*vget,evls(1, 9),elem,1,etab,  es3\n")
      67:      macfile.write("*vget,evls(1,10),elem,1,etab,eseqv\n")
      69:      macfile.write("*cfopen,darsts,txt\n")
   70:      macfile.write("*vwrite,evls(1,1),evls(1,2),evls(1,3),evls(1,4),evls(1,5),evls(1,6),evls(1,7),evls(1,8),evls(1,9),evls(1,10)\n")
   71:      macfile.write("(G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9)\n")
      72:      macfile.write("*cfclose\n")
      73:      macfile.write("finish\n")
      74:      macfile.write("/exit,nosave\n")
      76:      macfile.close()
      78:      # Set up execution of ANSYS MAPDL. 
      79:      #   Note: Right now you need to grab a different license than what Workbench is using
      80:      exelocation = "C:\\Program Files\\ANSYS Inc\\v130\\ansys\\bin\\winx64\\ansys130.exe"
      81:      commandlinestring = " -p ansys -b nolist -i runda1.inp -o runda1.out /minimise"
      83:      #Execute MAPDL and wait for it to finish
      84:      proc = subprocess.Popen(commandlinestring,shell=True,executable=exelocation)
      85:      rc = proc.wait()
   87:      # Read the output from the run and echo it to the DA log file
      88:      File = open("runda1.out","r")
   89:      DesignAssessment.getHelper().WriteToLog(File.read())
      90:      File.close()
      92:      # Go back to the original directory
      93:      os.chdir(originaldir)
   95:  trunc_solve(DesignAssessment)

    Some key things you should note about this script:

    • You have to use MAPDL to get your stress values. Right now there is no method in the API to get the values directly.
    • You need a second MAPDL license in order to run this script.  It does not share the license you are using for ANSYS Mechanical at R13.  This should be addressed in R14.
      • One workaround right now is to use an APDL code snippet in the ANSYS Mechanical run that makes the text file when the original problem is solved.  The SOLVE script is then no longer needed and you can just have an evaluate script.  Not a great solution, but it will work if you only have one seat of ANSYS available.
    • Note the directory changes and getting of result file paths.  This is important: Mechanical puts files all over the place, not in just one directory.
    • Make sure the MAPDL execution stuff is correct for your installation.

    Once the solve is done and our text file, darsts.txt, is written, we can start truncating with the evaluate script (Original File).  This script is a little more sophisticated.  First it simply reads the darsts.txt file into Python arrays.  It then has to go through the list of DA Results objects that the user added to their model and extract the desired stress component as well as the floor and ceiling to truncate to.  For each result object requested, it then loops through all the elements, truncating as needed, and stores the truncated values.
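    Before diving into the full listing, note that the truncation at the heart of the evaluate step is just a clamp.  Stripped of all the DA plumbing, it is essentially one line of Python:

```python
# The core truncation: clamp a stress value between a floor and a ceiling.
def truncate(stress, floor, ceiling):
    return min(max(stress, floor), ceiling)

# Quick checks with made-up numbers:
print(truncate(150.0, 0.0, 100.0))   # above the ceiling -> 100.0
print(truncate(-5.0, 0.0, 100.0))    # below the floor   -> 0.0
print(truncate(42.0, 0.0, 100.0))    # in range, unchanged -> 42.0
```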

       1:  import subprocess
       2:  import os
       3:  import shutil
       4:  import sys
       6:  #======================================================================
       7:  #
       8:  #   ------------------------------------------------------------  PADT
       9:  #
      10:  #
      11:  #
      12:  #         Demonstration python script for Design Assessment in 
      13:  #            ANSYS R13
      14:  #         Called on eval from ANSYS Mechanical
      15:  #         Bulk of code copied and modified from TsaiWu example
      16:  #            provided by ANSYS, Inc.
      17:  #
      18:  # NOTE: Right now it just does SX.  XML and script need to be modified to allow user
      19:  #       to specify component to use (X, Y, or Z)
      20:  #
      21:  #       E. Miller
      22:  #       5/18/2011
      23:  #======================================================================
      24:  def trunc_eval(DesignAssessment) :
      26:      # Change directory to current workspace for DA
      27:      originaldir = os.getcwd()
      28:      os.chdir(DesignAssessment.getHelper().getResultPath())
      30:      # Find number of elements in DA
      31:      Mesh = DesignAssessment.GeometryMeshData()
      32:      Elements = Mesh.Elements()
      33:      Ecount = len(Elements)
      34:      print "DA number of elements is ",Ecount
      35:      print "Number of Result Groups:",DesignAssessment.NoOfResultGroups()
   37:      # get User comment from Attribute Group
   38:      # Note: Assumes one.  Need to use a loop for multiple
      39:      attg = DesignAssessment.AttributeGroups()
      40:      atts = attg[0].Attributes()
      41:      usercomment = atts[0].Value().GetAsString()
      42:      print "User Comment = ", usercomment
      44:          # create arrays for SX/Y/Z values
      45:      sx = []
      46:      sy =  []
      47:      sz = []
      48:      sxy = []
      49:      syz =  []
      50:      sxz = []
      51:      s1 = []
      52:      s2 = []
      53:      s3 = []
      54:      seqv = []
      55:      # read file written during solve phase
      56:      #   append stress values to SX/Y/Z arrays
      57:      File = open("darsts.txt","r")
      58:      for line in File:
      59:          words = line.split()
      60:          sx.append(float(words[0]))
      61:          sy.append(float(words[1]))
      62:          sz.append(float(words[2]))
      63:          sxy.append(float(words[3]))
      64:          syz.append(float(words[4]))
      65:          sxz.append(float(words[5]))
      66:          s1.append(float(words[6]))
      67:          s2.append(float(words[7]))
      68:          s3.append(float(words[8]))
      69:          seqv.append(float(words[9]))
      70:      File.close()
      72:      # Loop over DA Results created by user
      73:      for ResultGroupIter in range(DesignAssessment.NoOfResultGroups()):
      74:          # Get the Result Group
      75:          ResultGroup = DesignAssessment.ResultGroup(ResultGroupIter)
   76:          # Extract Ceiling and Floor for the Group
      77:          strscmp  = ResultGroup.Attribute(0).Value().GetAsString()
      78:          strFloor = float(ResultGroup.Attribute(1).Value().GetAsString())
      79:          strCeil  = float(ResultGroup.Attribute(2).Value().GetAsString())
      80:          print "DA Result", ResultGroupIter+1, ":", strFloor, strCeil
      82:          #Add a set of results to store values in
      83:          ResultStructure = ResultGroup.AddStepResult()
      85:          print "strscmp", strscmp
      87:          # Loop on elements 
      88:          for ElementIter in range(Ecount):
      89:              #Add a place to put the results for this element
      90:              ResultValue = ResultStructure.AddElementResultValue()
   91:              # Get the element number and then grab the stress value that
   92:              #   was read from file
      93:              Element = Mesh.Element(ElementIter).ID()
      95:              if strscmp == "SX":
      96:                  sss = sx[Element-1]
      97:              elif strscmp == "SY":
      98:                  sss = sy[Element-1]
      99:              elif strscmp == "SZ":
     100:                  sss = sz[Element-1]
     101:              elif strscmp == "SXY":
     102:                  sss = sxy[Element-1]
     103:              elif strscmp == "SYZ":
     104:                  sss = syz[Element-1]
     105:              elif strscmp == "SXZ":
     106:                  sss = sxz[Element-1]
     107:              elif strscmp == "S1":
     108:                  sss = s1[Element-1]
     109:              elif strscmp == "S2":
     110:                  sss = s2[Element-1]
     111:              elif strscmp == "S3":
     112:                  sss = s3[Element-1]
     113:              elif strscmp == "SEQV":
     114:                  sss = seqv[Element-1]
     116:              # Compare to Ceiling and Floor and truncate if needed
     117:              if sss > strCeil:
     118:                  sss = strCeil
     119:              if sss < strFloor:
     120:                  sss = strFloor
     121:              # Store the stress value
     122:              ResultValue.setValue(sss)
     123:      # Go back to the original directory
     124:      os.chdir(originaldir)
     126:  trunc_eval(DesignAssessment)

    Some things to note about this script are:

    • The same directory issues hold here.  Make sure you handle them the same way.
    • Always loop on ResultGroups.  You can assume the number of attributes is constant but you never know how many results the user has asked for.
    • In this example it is assumed that the stress label is the first attribute and that the floor and ceiling are the second and third.  This is probably lazy on my part and it should be more general.
      • The way to make it more general is to loop on the attributes in a group and grab their label, then use the label to determine which value it represents.
    • Before you can store values, you have to create the result object and then each result value in that structure:
      • ResultStructure = ResultGroup.AddStepResult() for each result object the user adds to the tree
      • ResultValue = ResultStructure.AddElementResultValue() for each element
      • ResultValue.setValue(sss) to set the actual value

    And that is our example.  It should work with any model, just make sure you get the paths right for where the files are.

    Be a Good Citizen and Share!

    If you have the gumption to go and try this tool out, we do ask that you share what you come up with.  A good place is www.ANSYS.NET.  That is the repository for most things ANSYS.  If you have questions or need help, try or your ANSYS support provider.

    Happy Assessing!

    Files for Design Assessment in R13

    During the Webinar on 5/18/2011 we discussed a simple example we created to show how to use Design Assessment.  Click here to download the Zip file.

    You can get a PDF of the PowerPoint here.

    You can watch a recording of the webinar at:


    I Have the Touch: Check Contact in Workbench Prior to Solving with the Contact Tool

    Song quotes Peter Gabriel, “I Have the Touch”

    The time I like is the rush hour, cos I like the rush
    The pushing of the people – I like it all so much
    Such a mass of motion – do not know where it goes
    I move with the movement and … I have the touch

    Looking back I can see a defining moment in my life when about a month after high school graduation two good friends and I drove four hours from home to see Peter Gabriel in concert.  It’s not that the concert was great, which it was, but it was the trip itself.  It was a first foray after high school, a sort of toe dipping into the freedom of adulthood while in a strange pause between graduating in a small town in the same school system with the same kids and starting engineering school in a big city in the Fall.

    Wanting contact
    I’m wanting contact
    I’m wanting contact with you
    Shake those hands, shake those hands

    What does all that have to do with ANSYS, you ask?  Primarily, it’s hard to get Peter Gabriel’s “I Have the Touch” out of my head whenever I’m working with contact elements.  Someone once said that we are a product of the music of our youth.  As I’ve gotten older and hopefully wiser, I’d like to think we are made up of much more than the product of listening to some songs, but I do find it true that certain songs from years ago tend to stick in my head.  So, while Mr. Gabriel plays in my head, let’s discuss checking our contact status in Workbench Mechanical.

    For those of us familiar with Mechanical APDL, the CNCHECK command has been a good friend for a lot of years now.  This command can be used to interrogate our contact pairs prior to solving to report back which pairs are closed, what the gap distance is for pairs that are near touching, etc.  More recently, this type of capability has become available in Workbench Mechanical by inserting a Contact Tool under the Connections branch.

    Let’s take a look at that in version 13.0.  Here we have inserted a Contact Tool under the Connections branch.  It automatically includes the Initial Information sub-branch, with a yellow lightning bolt indicating that no initial information has yet been calculated.


    By right clicking on the Initial Information sub-branch, we can select Generate Initial Contact Results.  The resulting worksheet view provides significant information on all of the defined contact regions.  By default the information displayed for each contact region includes the name, contact/target side, type (frictionless, no separation, etc.), status, number of elements contacting, initial penetration, initial gap, geometric penetration, geometric gap, pinball radius, and real constant set number.  That last value is useful when reviewing the solver output, as it lists contact info per real constant set number of each contact pair (contact region).


    Further, by right clicking on that table we have the option to display some additional data, or remove fields of data.  The additional fields that can be added are contact depth, normal stiffness, and tangential stiffness.  We can also sort the table by clicking on any of the headings. 

    The colors in the table indicate four possible status values:

      • Red = open status for bonded and no separation contact types
      • Yellow = open status for other (non-bonded/no separation) types
      • Orange = closed, but with a large amount of gap or penetration
      • Gray = inactive, due to an MPC or Normal Lagrange formulation, or to auto asymmetric contact
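
    That legend can be thought of as a simple classification rule.  Below is a minimal Python sketch of it; the function, its arguments, and the status strings are invented for illustration and are not part of any ANSYS API:

```python
# Illustrative sketch of the Contact Tool color legend (NOT an ANSYS API;
# all names and strings here are invented for the example).

def status_color(status, contact_type,
                 formulation="Augmented Lagrange",
                 large_gap_or_penetration=False):
    """Return the worksheet highlight color for one contact region."""
    # Gray: the pair is inactive (e.g., MPC or Normal Lagrange formulation)
    if formulation in ("MPC", "Normal Lagrange"):
        return "gray"
    if status == "open":
        # Open bonded or no-separation pairs usually signal a setup problem
        if contact_type in ("Bonded", "No Separation"):
            return "red"
        return "yellow"  # open can be fine for frictional/frictionless pairs
    if status == "closed" and large_gap_or_penetration:
        return "orange"
    return "none"  # closed and healthy: no warning color

print(status_color("open", "Bonded"))        # red
print(status_color("open", "Frictionless"))  # yellow
```

    In other words, red rows are the ones most likely to stop a solve dead, which is why they are worth checking before launching the solution.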


    If we left click on one or more of the contact regions in the table, we can then right click and “Go to Selected Items in Tree.”  This is a convenient way to view a particular set of contact regions in the graphics view.

    Any social occasion, it’s hello, how do you do
    All those introductions, I never miss my cue
    So before a question, so before a doubt
    My hand moves out and … I have the touch

    So, what do we do with this information?  Ideally it will prevent us from launching a solution that goes off and cranks for a few hours only to fail due to an improper contact setup.  For example, by viewing the initial status for each pair we can hopefully verify that regions that should be initially touching are in fact touching as far as ANSYS is concerned.  If there is an initial gap or penetration, corrective action can be taken by adjusting the contact settings, or even the geometry if needed, prior to initiating the solution.

    Wanting contact
    I’m wanting contact
    I’m wanting contact with you
    Shake those hands, shake those hands

    The Contact Tool > Initial Information is another tool we can use to help us obtain accurate solutions in a timely manner.  If you haven’t had the opportunity to use it, please try it out.  I can’t guarantee that it will trigger fond memories, but maybe you’ll have an enjoyable song playing in your head.

    Murphy’s Law of Convergence

    Ted Harris was working on a tech support problem and got this convergence graph.  Ouch!

    He calls it “Murphy’s Law of Convergence”

    Jason called it “Asymptotic Sisyphean Convergence”

    Doug commented:

    “It’s like a pinball machine.  Just shake the monitor a little, but not too much, don’t want to trip the tilt sensor. “

    I call it annoying.  Share your names or comments for this type of convergence below.


    A Moving Look at a Solid Tool: Rigid Dynamics

    Most of us who have been doing this simulation thing for a while (in fact, if you still call it analysis instead of simulation, you probably fall in this group) always think of ADAMS or DADS when someone brings up rigid body dynamics tools. Most people forget that ANSYS, Inc. has had a strong offering in this area for some time, and at R13 it really is mature and full featured. 


    A good place to start is to step back and describe what Rigid Dynamics is.  That is the name ANSYS, Inc. uses, but many people refer to it as kinematics/dynamics, multibody dynamics, rigid body dynamics (RBD), or motion simulation. In most cases, all of these names refer to a numerical simulation where:

    1. More than one part is connected as an assembly
    2. Joints are defined between the parts
    3. The motion involves large deflections
    4. The motion is time dependent (if not, it is just kinematics)
    5. The parts are rigid (not flexible) in most cases

    Think back to your dynamics class, probably your sophomore year in college.  Linkages, gears, cams, etc.  You are basically solving the full equation of motion, where inertia, time, gyroscopic effects, and all that stuff are taken into consideration.  The key thing to remember is that you can model a rigid system as a set of point masses (bodies) with inertial properties and six degrees of freedom (UX/UY/UZ and ROTX/ROTY/ROTZ).  You then connect them by defining relationships (joints) between their degrees of freedom.  You then apply forces, accelerations, velocities, and/or displacements over time and solve.  To solve, the problem must be fully constrained, in that there may not be any free DOFs.
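
    A quick way to sanity-check that “fully constrained” bookkeeping is the standard Gruebler/Kutzbach count.  The sketch below is illustrative only; the joint names and constraint counts follow the usual rigid body convention, not any ANSYS input format:

```python
# Illustrative DOF bookkeeping for the description above (standard
# Gruebler/Kutzbach counting, not an ANSYS input format).

# DOFs each joint type removes from the 6 per rigid body:
JOINT_CONSTRAINED_DOFS = {
    "fixed": 6,          # nothing free
    "revolute": 5,       # ROTZ free
    "cylindrical": 4,    # UZ, ROTZ free
    "translational": 5,  # UX free
    "spherical": 3,      # ROTX, ROTY, ROTZ free
    "planar": 3,         # UX, UY, ROTZ free
}

def free_dofs(n_moving_bodies, joints):
    """Net free DOFs: 6 per moving body, minus what the joints remove."""
    return 6 * n_moving_bodies - sum(JOINT_CONSTRAINED_DOFS[j] for j in joints)

# A single crank pinned to ground by one revolute joint: 1 free DOF (its spin).
print(free_dofs(1, ["revolute"]))  # 1
```

    Note that the spatial count goes negative for a four-bar built from four revolutes (free_dofs(3, ["revolute"] * 4) gives -2); that is the classic overconstraint paradox, and it is exactly the over/under constraining the joint-setup steps later in this post warn about.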

    You can learn more background on Wikipedia.  The most comprehensive jumping-off point I found for the topic is a website from the University of Waterloo:

    Rigid Dynamics in ANSYS

    You have been able to do Rigid Dynamics in ANSYS MAPDL for a long time.  MPC184 for joints and MASS21 for the bodies have been there since the Spice Girls were popular.  But you had to use the implicit solver in MAPDL, which can take some time to solve a long problem.  So ANSYS, Inc. has been working for a few releases now on an explicit rigid body solver that is much more efficient for this type of problem.  You get access to the kinematic solver (no inertia terms, just motion) for free if you have ANSYS Professional NLS or better.  To solve dynamics problems, you need a license for the Rigid Body Dynamics solver as an add-on to ANSYS Professional NLS or better.

    To create a model you simply insert a “Rigid Dynamics” system into your project:

    Figure 1: Rigid Dynamics System

    It looks and acts a lot like an ANSYS Mechanical system, but is set up a bit differently to deal with the needs of Rigid Dynamics and to take away the things that you need for flexible FEA in ANSYS Mechanical.

    Analysis Process

    A real model can be very complicated and take some time to create, but the process of making it is the same regardless of the complexity:

    1. Do your normal geometry and material setup
    2. Remove auto contacts if you don’t need them (joints should constrain in most cases)
    3. Add any point masses you want to add
    4. Define Joints
      • Fix to ground as needed
      • Key is to make sure joints are correct and you don’t over/under constrain the problem
      • Coordinate system orientation is critical
      • Constrained DOF’s show up on plot for each joint – Blue=free
      • Lots of nice options
    5. Define Springs
      • Between geometry or geometry to ground
    6. Set Up Analysis
      • Define step controls
      • Solver options
      • Nonlinear Controls
      • Output Controls
    7. Apply Loads
      • Acceleration
      • Gravity
      • Joint Loads
      • Remote Displacement
      • Constraint Equations
    8. Solve
    9. Post Process
      • Deformation, Velocity, Acceleration of Parts
      • Total or Aligned
      • Probes on Parts, Joints, Springs
      • Key thing is to get graphs of behavior
      • Also makes a table you can export

    Joints, Springs & Contact

    An important part of any Rigid Dynamics tool is the Joint Library. This defines what you can and can’t connect.  The ANSYS tool has all of the standard options:

    Table 1: Joints (Courtesy of the ANSYS User Manual)

    Fixed – no free DOFs
    Revolute – ROTZ free
    Cylindrical – UZ, ROTZ free
    Translational – UX free
    Slot – UX, ROTX, ROTY, ROTZ free
    Universal – ROTX, ROTZ free
    Spherical – ROTX, ROTY, ROTZ free
    Planar – UX, UY, ROTZ free
    Bushing – user-specified stiffness and damping in all DOFs
    General – user-defined free DOFs, with stiffness and damping on the free DOFs

    You define joints by selecting the Connections branch of the tree and inserting the joint you want.  Joints can be between two bodies or between a body and fixed ground.  Each Details view is a little different, but most look like the one for the revolute joint:


    Figure 2: Revolute Details View

    In most cases, defining the joints is very easy.  You use the geometric feature that is in your real part to define the joint: two holes for a revolute joint, or a slot for a slot joint.  The important thing to pay attention to for joints is which body is the reference body and which is the mobile body.  The first body you click on is the reference, and the second is the mobile one.  If you get it wrong, RMB on the joint in the tree and choose Flip Reference/Mobile.

    Next, pay attention to your coordinate systems for the joint.  By default, the joint coordinate system is defined by the reference geometry specified in the Details view, and it is aligned with the geometry you picked to define the reference.  This is because the coordinate system needs to move with the geometry.  If the reference geometry isn’t exactly what you want, you can change it by picking a new face, line, or point.  Make sure you check the graphics window to verify that the coordinate system is in the right place.

    Once your joints are done, you can insert the other type of connection: a spring.  Springs work the same way, except they define a force on the bodies that is related to their relative position.  You can have a tension-only, compression-only, or normal spring.  They work like joints in that you specify two bodies, or a body and ground, to connect.  You have more control over where the end points of the spring are, by specifying a piece of geometry or a point in a coordinate system.  You can also add a preload.

    Figure 3: Spring in Assembly
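
    The spring behaviors just described reduce to a simple force law.  Here is an illustrative Python sketch; the sign convention and the function are ours, not the solver’s:

```python
# Illustrative force law for the spring types above (our own sign
# convention, not the Rigid Dynamics solver's implementation).

def spring_force(stretch, k, behavior="normal", preload=0.0):
    """Force pulling the two spring ends together.

    stretch = current length minus free length (positive when stretched).
    """
    if behavior == "tension-only" and stretch < 0:
        return 0.0  # slack: a tension-only spring cannot push
    if behavior == "compression-only" and stretch > 0:
        return 0.0  # a compression-only spring cannot pull
    return k * stretch + preload

print(spring_force(0.5, k=200.0))                            # 100.0
print(spring_force(-0.5, k=200.0, behavior="tension-only"))  # 0.0
```

    A normal spring pushes and pulls; the tension-only and compression-only variants simply drop out of the force balance when they go slack.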

    You can also specify contact.  But before you do, make sure that a joint won’t work first.  If contact makes sense, then put it in just like you would in ANSYS Mechanical.

    Figure 4: Contact

    Other Stuff you Should Know

    Sometimes you don’t have geometry?  No worries: you can define a point mass and add in its inertia properties to get the same behavior.

    The Rigid Dynamics solver has its own command language, and just like when you use MAPDL as the solver, you can use code snippets to send it commands.  It uses Python, and the commands are documented in the help.  The most common use for this is creating relationships between DOFs that joints can’t handle.  Gears are the best example: you can specify that body 2 rotates twice for every rotation of body 1, etc.  It is also a good way to define a nonlinear spring or a complicated stop condition.

    Figure 5: Code Snippets
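
    As an example of the kind of relationship such a snippet encodes, here is a 2:1 gear constraint sketched in plain Python.  This is only an illustration of the math, not the Rigid Dynamics solver’s actual command syntax (see the ANSYS help for that):

```python
# Illustration only: the math a gear snippet encodes, in plain Python.
# The real Rigid Dynamics commands are Python too, but their API is
# documented in the ANSYS help and is not reproduced here.

GEAR_RATIO = 2.0  # body 2 rotates twice for every rotation of body 1

def driven_angle(driver_angle, ratio=GEAR_RATIO):
    """Rotation of the driven gear as a function of the driver gear."""
    return ratio * driver_angle

# Sweep the driver through a quarter turn at a time:
for step in range(3):
    theta1 = step * 90.0  # degrees
    print(theta1, driven_angle(theta1))  # the driven gear turns twice as far
```

    The solver enforces this kind of relationship at every time step, which is how two bodies with no joint between them can still move in lockstep.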

    Speaking of stops, you can specify stops and locks on joints to control how far a joint can move and, if it reaches that stop, whether it freezes there (a lock).


    And, without going into too much detail, that is Rigid Dynamics in ANSYS.  You can see some cool examples on the ANSYS web site:

    The key, as always, is to take the time to really read about and understand the tool, then crawl-walk-run.  Don’t start with a 100-part assembly and expect it to work.  Pick a simple assembly or a simple subassembly and go from there.  In fact, regardless of your actual parts, start with a four-bar linkage and get that going first.

    Happy motion!

    Keyframe Animation with CFD-Post

    Creating useful visualizations of your fluid simulation results can sometimes be a challenge. Images of contour plots, isosurfaces, or streamlines may not be sufficient to communicate your point. Sometimes you need to combine results together and use visual effects to transition between them. Cue up keyframe animation. In case you are not familiar with the term, “keyframe” means that multiple visualization techniques (zooming, flying around, fading, time-animation, contours, isosurfaces, etc.) are combined together in one animation file.  Keyframe animations generally require some planning and testing before creating the final result, especially as you get more skilled and start adding more results/effects.  However, if you are willing to put in a bit of extra effort, you may find that you have not only communicated your point well, but have also visually engaged your target audience, whether the audience is your managers, your customers, or that hostile crowd at a technical conference.

    CFD-Post, the post-processing tool of choice for the ANSYS CFD solvers, presents a straightforward interface for creating keyframe animations. If you click on the “film” icon along the top row of the GUI, the Animation Dialog appears.


    Figure 1 – Animation Dialog in CFD-Post

    The basic idea of a keyframe animation is that you create multiple keyframes, which are connected by transition frames.  Each keyframe is essentially a snapshot of whatever is in the GUI window, and when you animate them together CFD-Post interpolates in three-dimensional space to create transitions between your keyframes. 
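
    Conceptually, the interpolation works like the sketch below.  The `transition` helper and the camera parameters are invented for illustration; CFD-Post’s real interpolation is internal to the tool:

```python
# Conceptual sketch of keyframe interpolation (camera parameters are
# invented; CFD-Post's real interpolation is internal to the tool).

def transition(key_a, key_b, n_frames):
    """The n_frames states stepping from key_a toward (and onto) key_b."""
    frames = []
    for i in range(1, n_frames + 1):
        t = i / n_frames  # 0 < t <= 1
        frames.append({p: key_a[p] + t * (key_b[p] - key_a[p]) for p in key_a})
    return frames

# Keyframe 1: global view; Keyframe 2: zoomed in.
global_view = {"pan_x": 0.0, "pan_y": 0.0, "zoom": 1.0}
zoom_view   = {"pan_x": 0.4, "pan_y": 0.1, "zoom": 4.0}

frames = transition(global_view, zoom_view, 80)
print(len(frames))         # 80
print(frames[-1]["zoom"])  # 4.0, the last frame lands on keyframe 2
```

    With 80 intermediate frames, each step moves the view 1/80th of the way, which is what produces the smooth zoom rather than a jump cut.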

    Let’s walk through an example. I want to do a zoom effect on my race car model. I’ll start from a “global” view of the entire model (Figure 2a).


    Figure 2a – Global view of car model

    Next, I’ll insert my first keyframe by clicking the blank page icon in the keyframe animation dialog menu. This keyframe will have 80 distinct frames (interpolation points) between itself and the next specified keyframe.


    Figure 2b – Create first keyframe: 80 intermediate frames

    Now, since I want to animate the “zoom” portion, I’ll zoom in closer (Figure 3a).


    Figure 3a – Zoom view of car model

    Next, create the second keyframe at this position.


    Figure 3b – Create second keyframe: 80 intermediate frames

    Check the “Save Movie” box, and then select the “Play” button to record it. Here is the result:


    Figure 4 – 80-frame animation from Keyframe 1 to Keyframe 2

    For the next part of my keyframe animation example, I’ll add in a CFD result: pressure contours on the surface of the car. First, the solid color on the surface of the car will need to be faded out, and then we can fade in the pressure contour.

    I can create Keyframe 3 to account for the fading out of the solid blue on the surface. First, I need to specify that the transparency = 1 (“0” denotes no transparency, “1” denotes 100% transparent) for the surface group that comprises the surface of the car (Surface Group 1). This will allow the algorithm to start with transparency = 0 at Keyframe 2 (the default), and end with transparency = 1 at Keyframe 3. Next, I need to account for the fading in of the pressure contour. Thus, I create a contour of the static pressure on the surface of the car (Contour 1) and set its transparency = 1. Then, select the new keyframe icon to create Keyframe 3.
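
    The fade bookkeeping can be sketched the same way.  The `fade` helper below is illustrative only, with transparency running from 0 (opaque) to 1 (fully transparent), matching the convention above:

```python
# Sketch of the fade schedule (illustrative only; 0 = opaque,
# 1 = fully transparent, matching the CFD-Post convention above).

def fade(start, end, n_frames):
    """Per-frame transparency values stepping from start to end."""
    return [start + (end - start) * i / n_frames
            for i in range(1, n_frames + 1)]

surface_fade_out = fade(0.0, 1.0, 80)  # solid surface disappears (KF2 -> KF3)
contour_fade_in  = fade(1.0, 0.0, 80)  # pressure contour appears (KF3 -> KF4)

print(surface_fade_out[-1])  # 1.0
print(contour_fade_in[-1])   # 0.0
```

    The trick is simply that each object’s transparency is set at the bounding keyframes, and the intermediate frames interpolate between those two values.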


    Figure 5 – Create Keyframe 3 to represent fade-out of blue surface

    Finally, I can create Keyframe 4 to account for the fading in of the pressure contours on the surface. Now, all that is required is to set the transparency of the pressure contour (Contour 1) to “Transparency = 0”. Then, select the new keyframe icon for Keyframe 4 (reference Figure 6).


    Figure 6 – Create Keyframe 4 to represent fade-in of pressure contour

    Now, to record/save the keyframe animation, select “Save Movie” in the Animation dialog menu shown in Figure 6. I typically select “Loop” and “Repeat” = 1.  The “Bounce” option reverses the animation when it reaches the end.  The finished product is shown in Figure 7.

    Figure 7 – 160-frame animation from Keyframe 1 through Keyframe 4

    Now that I have demonstrated how the keyframe functionality works in CFD-Post, creating your own becomes straightforward. The first step is to plan out the keyframes in the animation; essentially, visualize an image that represents each keyframe. Then, experiment with the transitions between them: fading, zooming, flying around, etc. Finally, put the entire keyframe animation (keyframes plus transitions) together and let your customers, managers, or colleagues watch in awe.

    PADT’s New CUBE HVPC Computers

    Editors Note, 7/30/2014:
    This was an early post on our CUBE systems, we have since updated and improved our offering.  Visit to get the latest information.

    Most of you should have received an e-mail blast yesterday announcing that PADT is now selling a line of computer systems that are configured specifically for CFD and FEA users.  Our CUBE High Value Performance Computers became a product line when a couple of customers asked us about the hardware we run on and whether they could get the same from us.  After doing a couple of systems this way, we decided to go mainstream. 

    You can read our announcement here:

    And you can download a brochure here:


    We also have a new website with more details:

    Give us a call (480.813.4884) or send us an e-mail if you need more information or have questions.

    What is PADT Up To?

    Since The Focus is kind of an engineer-to-engineer exchange, we thought we would use this week’s posting to give the hard-core users out there some details on these systems, along with some things everyone should know.

    The thing we want to point out first is that these are not super-fast high performance systems.  They are based on the new AMD chips, which are fast per cycle but run at a lower clock speed; you would need the latest Intel Nehalem chips for maximum performance.  What they are is a balance between speed and price.  As an example, you can go to the Dell website and configure a 3.0 GHz Intel-based system with 8 cores that is as close as you can get to the CUBE w12 (12 core) system, for about $9,800.  The w12 is $5,800.  That makes the Intel-based system about 69% more expensive, for maybe a 40% speed increase.  And, if you have enough parallel licenses, the two may be about the same speed: 12 AMD cores vs. 8 Intel cores.  That is the kind of tradeoff we look at when configuring these systems.  If you need more than 8 cores, the price difference gets really big.
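
    Running the numbers from that example (prices as quoted in this post; the 40% speed edge is a rough estimate, not a benchmark):

```python
# The price/performance arithmetic from the paragraph above (prices as
# quoted in this post; the 40% speed edge is the author's rough estimate).

intel_price = 9800.0     # Dell configuration, 8 Intel cores
cube_w12_price = 5800.0  # CUBE w12, 12 AMD cores

premium = intel_price / cube_w12_price - 1.0
print(f"Intel system costs about {premium:.0%} more")  # about 69% more

# Even granting the Intel box a 40% throughput edge, cost per unit of
# throughput still favors the w12:
intel_cost_per_throughput = intel_price / 1.40
cube_cost_per_throughput = cube_w12_price / 1.00
print(intel_cost_per_throughput > cube_cost_per_throughput)  # True
```

    That ratio (roughly $7,000 vs. $5,800 per unit of throughput) is the “value” in High Value Performance Computing.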

    That is one way we get the price down.  The other is by cutting back on two big price drivers for someone selling computers: service and inventory.  We don’t just cut back; we pretty much cut them out.  When you buy a system, we order the parts to build it.  This saves us a ton on inventory, and we pass that on to you.  We did this because, in reality, you really don’t need it tomorrow.  In fact, it will probably take 3-4 weeks just to get buy-in from your management.  So why pay a lot of money to get the machine in 2-3 days when you already waited 3-4 weeks? 

    We really cut back on support.  We test your system here with your software and then ship it off.  You get 8 hours of phone support and a 1-year parts-only warranty.  After that, you pay for time or parts.  We did this because, in the years we have been running computers for simulation, we have almost never used the service, and we almost never have a bad part that needs replacing after one year.  Sometimes you get a bad part right out of the box, but if it lasts a year, it will last until it is out of date.

    So, in a nutshell, we are trying to find a sweet spot for users where, for a reasonable amount of money, they can get a reasonable amount of throughput. 

    Some Highlights

    Portable Mini Cluster – 96 Cores, Two Boxes, 256 GB of RAM: $43,250

    One cool thing to point out about our 96 core system is that we built it to be portable.  We felt that if someone needed that much horsepower, they probably wanted to share it.  So we build it into its own rack that we put on wheels.  We also add two UPSs to provide battery-backup power.  So if you need to unplug it and roll it down the hallway, go for it.  This also makes these systems perfect for use in secure areas, where you need to wheel a box around between closed secure rooms as needed.  When you are done with a project, pull the drives and wheel it to the next project.

    Another cool thing about this system is that it uses a special Infiniband interconnect that does not require a switch when you connect only two systems.  This saves a few thousand dollars.

    The “Sweet Spot” – 32 Cores, 128 GB of RAM, $12,300

    We feel that the best deal in the CUBE line is the 32 core system.  Not so much because of the hardware, but because of the ANSYS HPC license packs.  If you buy two packs, you get the ability to run two jobs on 8 cores each, or one job on 32 cores.  It also comes with enough RAM to handle bigger problems (128 GB) at a great price.  Even if you add some bells and whistles, it should still be under $15,000.  You need to run the numbers, but most users will be able to purchase a CUBE w32 and two HPC packs for about the same as or less than two 8-core systems, an interconnect, and 16 parallel tasks, and still run faster.
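
    For reference, ANSYS HPC Packs scale geometrically when combined on a single job, which is what makes 32 cores line up so nicely with two packs.  A quick sketch of that math (pack scaling as it worked at the time of this post):

```python
# HPC Pack scaling as it worked at the time of this post: one pack
# enables 8 cores, and packs combined on a single job multiply.

def hpc_pack_cores(n_packs):
    """Cores enabled when n_packs HPC Packs are stacked on one job."""
    return 2 ** (2 * n_packs + 1)

for n in (1, 2, 3):
    print(n, "pack(s) ->", hpc_pack_cores(n), "cores")  # 8, 32, 128
```

    Used separately, each pack still enables an 8-core job, which is where the “two jobs on 8 cores each” option comes from.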

    What To Do with all Those Dang Files! – The 10TB Fileserver

    We configured this system after a customer that bought a 96 core system from us told us that they were paying something like $20k for a fancy file storage device.  Now granted, that is a very fast I/O machine with software and hardware that let you write files to it during a run.  But honestly, why do that?  Why not just put 1-3 TB on your compute server or mini-cluster, and then get a cheap box with a bunch of inexpensive disk drives in it?  Run on the fast machine and, when you are done, copy your files to the inexpensive file server.  Our pre-configured FS10 has 10 TB of disk space for $5,000.

    But that only solves one problem: what to do with those big files when you need to keep them around.  What do you do for long-term storage?  At PADT we use external hard drive docking stations (eSATA drive bays), and we go out and buy standard 1 TB hard drives.  We plug those guys in, copy our files to the hard drive, unplug it, label the drive, and throw it in a drawer.  To be safe, we often make two copies of the files and actually put one in a fire safe.  The FS10 comes with an external eSATA dual drive bay to solve your long-term archiving problems in a simple and cost-effective way.

    Bottom Line

    So, is a CUBE HVPC system right for you?  You need to decide if giving up some speed, support, and delivery time is worth the savings.  Also, is a system designed for CFD and FEA a better fit than a database server that they label as an HPC machine?  The best way to find out is to contact us, and we’ll help answer those questions.

    Not the most exciting The Focus posting, but we think you will agree that this approach to computing can be very efficient, so we thought we would share it with you.