Happy 10th Birthday: The Focus

Don’t you hate it when you miss someone’s birthday?  I was looking up an old article in The Focus and noticed that the first issue was published on January 13th, 2002.

Happy Belated Birthday!

It was sometime in 2001 that Rod Scholl pointed out that there was no good ANSYS newsletter out there.  People would do one or two issues then get busy and never put out any more, or only publish once in a while.  So we decided that we would not only do a newsletter, but that we would keep it up and publish  on a regular basis. The first issue came out as an HTML email on January 13th of 2002. 


The First Issue of The Focus

And Rod was instrumental in keeping us on track and making sure we stuck with it. Since then we published 74 actual newsletters before switching to a blog format in 2010.  Just before this one goes up, we will have published 59 Focus articles on the blog.

Thank you to everyone who subscribes to The Focus and reads our postings, rates us, and sends us such great comments and questions.  

Here is to 10 more years!

Files for Webinar: A POST26 Primer

We just finished our webinar for 4/12/2012 on the basics of the POST26 Time History Post Processor.  As promised, here are the files used for examples in the webinar, as well as the PowerPoint:

Tower-of-Test (ToT) test model, APDL:

Tower-of-Test (ToT) test model, ANSYS Mechanical with APDL Snippets:

tower-of-test-transient-2013.wbpz (note this is an R15 file.  See why here.)

 

PowerPoint Presentation:

A recording of this webinar will be available on the following site after 4/13/2012:

padtincevents.webex.com

Click on  PADT ANSYS Webinar Series to see all recordings.
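If you want a quick taste before the recording is posted, here is a minimal POST26 sketch (the node number is just a placeholder) that stores and reviews a nodal displacement history after a transient solve:

/post26
nsol,2,1001,u,y     ! store UY at node 1001 as variable 2 (variable 1 is reserved for time)
plvar,2             ! plot variable 2 versus time
prvar,2             ! list it in tabular form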

Some Revolutionary HPC Talk: 208 Cores+896GB < $60k, GPU’s, & ANSYS Distributed

The last couple of weeks a bunch of stuff has gone on in the area of High Performance Computing, or HPC, so we thought we would throw it all into one Focus posting and share some things we have learned, along with some advice, with the greater ANSYS user community.

There is a little bit of a revolution going on in both the FEA and CFD simulation side of things amongst users of ANSYS, Inc. products.  For a while now customers with large numbers of users and big nasty problems to solve have been buying lots of parallel licenses and big monster clusters. The size of the problems these firms, mostly turbomachinery and aerospace, solve has been growing and growing.  That is especially true for CFD jobs, but also for FEA with HFSS and ANSYS Mechanical/Mechanical APDL.  That is where the revolution started.

But where it is gaining momentum, where the greater impact is being seen on how simulation is used around the world, is with the smaller customers.  People with one to five seats of ANSYS products.  In the past they were happy with their two “included” Mechanical shared memory parallel tasks.  Or they might spring for 3 or 4 CFD parallel licenses.  But as 4, 6, and 8 core CPU chips become mainstream, even on the desktop, and as ANSYS delivers greater value from parallel, we are seeing more and more people invest in high performance computing. And they are getting a quick return on that investment.

Affordable High Value Hardware

208 Cores + 896 GB + 25 TB + Infiniband + Mobile Rack = $58k = HOT DAMN!

Yes, this is a commercial for PADT’s CUBE machines.  (www.CUBE-HVPC.com) Even if you would rather be an ALGOR user than purchase hardware from a lowly ANSYS Channel Partner, keep reading. Even if you would rather go to an ANSYS meeting at HQ in the winter than brave asking your IT department if you can buy a machine not made by a major computer manufacturer, keep reading.

Because what we do with CUBE hardware is something you can do yourself, or that you can pressure your name brand hardware provider into doing.

We just got a very large CFD benchmark job for a customer.  They want multiple runs on a piece of “rotating machinery” to generate performance curves.  Lots of runs, and each run can take up to 4 or 5 days on one of our 32 core machines.  So we put together a 208 core cluster.  And we maxed out the RAM on each one to get to just under 900 GB. Here are the details:

Cores: 208 Total
    3 servers x48 2.3 GHz cores each server
    2 servers x32 3.0 GHz cores each server
RAM: 896 GB
    3 servers  x128GB DDR3 1333MHz ECC RAM each server
    2 servers  x256GB DDR3 1600MHz ECC RAM each server
DATA Array:  ~25TB
Interconnect: 5 x Infiniband 40Gbps QDR
Infiniband Switch: 8 port 40Gbps QDR
Mobile Departmental Cluster Rack – 12U

All of this cost us around $58,000 if you add up what we spent on various parts over the past year or so.  That much horsepower for $58,000.  If you look at your hourly burden rate and the impact of schedule slip on project cost, spending $60k on hardware has a quick payback. 

You do need to purchase parallel licenses. But if you go with this type of hardware configuration what you save over a high-priced solution will go a long way towards paying for those licenses.  Even if you do spend $150k on hardware, your payback with the hardware and the license is still pretty quick.

Now this is old hardware (six months to a year – dinosaur bones).  How much would a mini-cluster departmental server cost now, and what would it have inside?

Cores: 320 Total
5 servers x64 2.3 GHz cores each server
RAM: 2.56 TB
5 servers x512 GB DDR3 RAM each server
DATA Array: ~50TB
Interconnect: 5 x Infiniband 40Gbps QDR
Infiniband Switch: 8 port 40Gbps QDR
Mobile Departmental Cluster Rack – 12U

The cost?  Around $80,000.  That is $250/core.  Now you need big problems to take advantage of that many cores.  If your problems are not that big, just drop the number of servers in the mini-cluster, and drop the price proportionally.

The same goes if you are a Mechanical user.  The matrices in FEA just don’t scale in parallel like they do for CFD, so a 300+ core machine won’t be 300 times faster. It might even be slower than, say, 32 cores.  But the cost drop is the same.  See below for some numbers.

Bottom line, hardware cost is now in a range where you will see payback quickly in increased productivity.

GPU’s


NVIDIA’s Tesla GPU
We think they should have some 80’s era super model
draped over the front like those Ferrari posters we
had in our dorm rooms.

For you Mechanical/Mechanical APDL users, let’s talk GPU’s.

We just got an NVIDIA TESLA C2075 GPU.  We are not done testing, but our basic results show that no matter how we couple it with cores and solvers, it speeds things up, anywhere from 50% faster to 3 times faster depending on the problem, shared vs. distributed memory, and how many cores we throw in with the GPU.

That is the fundamental problem with answering the question “How much faster?”: it depends a lot on the problem and the hardware. You need to try it on your problems on your hardware. But we feel comfortable in saying that if you buy an HPC Pack and run on 8 cores with a GPU, the GPU should double your speed relative to just running on 8 cores.  It could even run faster on some problems.

For some, that is a disappointment.  “The GPU has hundreds of processors, why isn’t it 10 or 50 times faster?”  Well, getting the problem broken up and running on all of those processors takes time.  But still, twice as fast for between $2,000 and $3,000 is a bargain. I don’t know what your burden rate is, but it doesn’t take very many hours of saved run time to recover that cost.  And there is no additional license cost because you get a GPU license with an HPC Pack.

Plus, at R14 the solver supports a GPU with Distributed ANSYS, so there are even more improvements.  Add to that support for the unsymmetric and damped eigensolvers and general GPU performance increases at R14.

PADT’s advice? If you are running ANSYS Mechanical or Mechanical APDL, get the HPC Pack and a GPU along with a 12 core machine with gobs of RAM (PADT’s 12 core AMD system with 64GB of RAM and 3TB of disk costs only $6,300 without the GPU, $8,500 with).  You can solve on 8 cores and play Angry Birds on the remaining 4.

Distributed ANSYS

For a long time many users out there have avoided Distributed ANSYS. It was hard to install, and unless you had the right problem you didn’t see much of a benefit for many of the early releases. Shared Memory Parallel, or SMP, was dirt easy – get an HPC license and tell it to run on 8 cores and go.

Well, at R14 of ANSYS Mechanical APDL it is time to go distributed.  First off, they made the install much easier.  To be honest, we found that the install was the biggest deterrent for many small-company users.

Second, at R14 a lot more things are supported in Distributed ANSYS. This has been going on for some time, so most of what people use is supported. At this release they added subspace eigensolving; the TRANS, INFIN, and PLANE121/230 (electrostatics) elements; and SURF251/252.

Some “issues” have been fixed, like restart robustness, and you now have control over when and how multiple restart files are combined after the solve.

All in all, if you have R14, you are solving mechanical problems, and you have an HPC Pack, you should be using distributed most of the time.
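To make that concrete, a typical Distributed ANSYS batch run from the command line looks something like the hedged sketch below; the input and output file names are placeholders, -acc nvidia only applies if you have a supported GPU, and the individual options are covered in more detail in the “Starting ANSYS Products From the Command Line” post later in this compilation.

ansys140 -dis -np 8 -acc nvidia -b -i model.inp -o model.out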

Conclusions

We get a ton of questions from people about what they should buy and how much.  And every situation is different. But for small and medium sized companies, the HPC revolution is here and everyone should be budgeting for taking advantage of HPC:

    • At least one HPC pack
    • Some new faster/cheaper multicore hardware (CUBE anyone?)
    • A GPU. 

STOP!  I know you were scrolling down to the comments section to write some tirade about how ANSYS, Inc overcharges for parallel, how it is on a moral equivalence with drowning puppies, and how much more reasonable competitor A, B, or C is with parallel costs.  Let me save you the time.

HPC delivers huge value.  Big productivity improvements.  And it does not write itself. It is not an enhancement to existing software.  Scores of developers are going into solver code and implementing complex logic that allows efficiency with older hardware, shared memory, distributed memory, and GPU’s. It has to be written, tested, fixed, tested again, and back and forth every release.  That value is real, and there is a cost associated with it.

And the competitors’ pricing model? The only thing they can do to differentiate themselves is charge nothing or very little. They have not put in the effort, or don’t have the expertise, to deliver the kind of parallel performance that the ANSYS, Inc. solvers do.  So they give it away to compete.  Trust me, they are not being nice because they like you. They have the same business drivers as ANSYS, Inc.  They price the way they do because they did not incur as much cost, and they know that if they charged for it you would have no reason to use their solvers.

ANSYS users of the world unite!  Load your multicore hardware with HPC Packs, feed it with a GPU, and join the revolution!

Changing Results Values in ANSYS Mechanical APDL–DNSOL and *VPUT

So it is Friday afternoon and that big, involved, deep-dive into some arcane ANSYS capability is still not written.  So, time for plan B and the list of not so involved but still somewhat arcane capabilities that we like to expose in The Focus.  At the top of that list right now is how to change the results in an ANSYS Mechanical APDL (MAPDL) solve. 

One might ask why you would want to do this.  Well, the most common usage is that you want to use APDL to calculate some result value and then display it in a plot.  Similarly, you may want to do some sort of calculation or post processing on MAPDL results using an external piece of code, but still show the results in ANSYS.  Another common usage is to use MAPDL as a post processor for some external solver.

And, it turns out, it is pretty easy.  And, as you probably have learned by now if you use MAPDL a lot, there is more than one way to do it.

The “Database”

Before we get into how to do this, we need to talk about the “database” in MAPDL. If you read through the documentation on the commands we will use, it will talk about the database.  This is not the jobname.db file.  That is the db file.  The database refers to the representation of your model, including results and the currently active loads, in memory when you are running MAPDL. 

When you do a RESUME command, MAPDL reads the DB file and stores the model, including geometry, mesh, loads, and run parameters, in memory.  When you do a SET command, it then adds the results and related information into memory. 

So when we use the commands we will talk about next, you are changing what is in the database, not what is in the DB file on your disk.  And you are storing it temporarily.  Many different commands cause the database to go back to its original values.  So you need to be very careful in how you use these tools, and don’t assume that once you have used them, the changes are permanent.  

DNSOL

The simplest way to do it is with the DNSOL command: DNSOL, NODE, Item, Comp, V1, V2, V3, V4, V5, V6

So, if you do dnsol,32,u,x,3.145, the value in memory for the X-direction deflection at node 32 will be changed to 3.145.  dnsol,32,s,x,1.1,2.2,3.3 will change the stress on node 32 in the X direction to 1.1, in the Y direction to 2.2, and in the Z direction to 3.3.

The node argument can also be a component name, so you can assign a uniform result value to many nodes at the same time.

Here is an example, very simple, of a block where we set the deflection on the top nodes to 1 in.

finish
/clear
/prep7
blc4,-2,-2,4,4,20
et,1,185
mptemp,1,70
mpdata,ex,1,1,20e6
mpdata,nuxy,1,1,.23
mpdata,dens,1,1,.001

vmesh,all
/view,1,1,1,1
/vup,1,z
/RGB,INDEX,100,100,100, 0
/RGB,INDEX, 80, 80, 80,13
/RGB,INDEX, 60, 60, 60,14
/RGB,INDEX, 0, 0, 0,15
/show,png
eplot

nsel,s,loc,z,0
cm,nbt,node
d,all,all
nsel,s,loc,z,20
cm,ntp,node
f,all,fx,10
f,all,fy,12
allsel
save

/solu
solve
finish
/post1
plnsol,u,sum

dnsol,ntp,u,y,1
plnsol,u,sum

/show,close

The Fancy Mesh

The Normal Solution

Solution with 1” deflection DNSOL’d onto the top nodes

Pretty simple. 

NOTE: One key thing to remember is that you cannot use this with PowerGraphics. You must have /graph,full turned on.

*VPUT

DNSOL is great for a few nodes here and there, or a constant value, but it makes for a big nasty *do loop if you want to do a lot of nodes. So ANSYS, Inc. gives us the *VPUT command:

*VPUT, ParR, Entity, ENTNUM, Item1, IT1NUM, Item2, IT2NUM, KLOOP

As you can see, this command has a lot of options, so read the Help before you use it. Most of the time you have an array (ParR) that stores the values you want stuck on your model.  Then you specify what MAPDL result you want to overwrite and it takes care of it for you.

*vput,nws1(1),node,1,s,1 will place new values for maximum principal stress on all the nodes covered in the array, starting at node 1.

Here is an example of the code, after the same solve as above, to do a *vput instead of a DNSOL:

finish
/post1
plnsol,u,sum

*get,nmnd,node,,num,max
*dim,nwux,,nmnd
*do,i,1,nmnd
    nwux(i) = i
*enddo

*vput,nwux(1),node,1,u,x
plnsol,u,sum


The code places a displacement in the X direction equal to its node number.  So node 80 has a UX of 80. 

A key thing to note is that *VPUT is even more transient than DNSOL. Pretty much any command that involves anything other than plotting or listing blows away the values you assigned with *VPUT.  So we recommend that you do a *vput before every plot or listing you do.

Thoughts, Questions, and Conclusion

Of course you should never use this to fudge results. 

You can get very fancy with this. When I use it I often turn off all the plot controls and then create my own legend. That way I can put hours of life on as temperature or SX and then make my own legend that says LIFE or something of the sort. 

Another thing to note is that if you DNSOL or *VPUT a displacement, then ANSYS will distort your plot by that much.  That is OK if you are changing deflection, but not so good if you are plotting life or some esoteric stress value.

A common question when you play with these commands is whether you can store the modified results in the RST file.  You can for degree of freedom results, but not for stresses.  You use LCDEF and RAPPND.
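Here is a minimal, hedged sketch of that workflow (the component name and the load step/time values are just placeholders; check the RAPPND documentation on your release before relying on it):

/post1
set,1,1             ! read an existing result set into the database
dnsol,ntp,u,y,1     ! modify the DOF results in memory
rappnd,10,10.0      ! append the modified set to the results file as load step 10, time 10.0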

And what about ANSYS Mechanical?  Well, it works totally differently, and better. You can do all of this using Design Assessment, which was covered in a webinar and a Focus article you can find here.

The key to using these two commands is pretty much the same as any APDL command: Start on a small test model, write a macro to do it, and keep things simple. And, read the manual.  Everything you need to know is in there.

Utilizing Element Birth and Death for Contact Elements in Workbench Mechanical

In case you missed it, Doug Oatis wrote a Focus blog entry on general use of element birth and death in ANSYS Workbench back in November, 2011, which you can access here: 
http://www.padtinc.com/blog/post/2011/11/29/Sifting-through-the-wreckage-Element-Birth-and-Death-in-Workbench.aspx

This new article narrows the focus to contact elements specifically.  We recently had a tech support question about how to utilize element birth and death for contact elements in ANSYS Workbench.  So, a simple example was put together and is explained below. 

The main idea is that we need multiple load steps (labeled Steps in Workbench) in order for elements to change status from alive to dead or vice versa.  We also need a way to select the elements so that we can identify which ones will be killed or made alive.

Keep in mind that ANSYS Workbench Mechanical is a newer pre- and post-processor for good old ANSYS Mechanical APDL.  That means we can insert ANSYS commands into the object tree in Workbench Mechanical and those commands will be executed when the solver reads the batch input file that is created when we click the solve button.

So, we need at least one set of Mechanical APDL commands to identify which contact/target pairs or contact regions we need to kill or make alive.  In our example we’ll focus on killing elements, but the same principle applies to making killed elements come alive.  Note that killing elements does not remove them from the model.  Rather, it reduces their stiffness by a default factor of six orders of magnitude so that effectively they do not participate.  The Mechanical APDL commands needed for the contact/target pair identification are scalar parameter assignments.

ANSYS Workbench employs some ‘magic’ parameter names that automatically plug in the integer pointers used behind the scenes for identification of element types and material properties.  In the case of contact and target elements, these parameter names are ‘cid’ for the contact elements and ‘tid’ for the target elements.  Thus, for each contact region we want to be able to kill, we need to create unique scalar parameter names, such as:

mycont=cid
mytarg=tid

If we had more than one pair, we might use

mycont1=cid
mytarg1=tid

and increment the ‘1’ in the parameter names on the left side for each contact pair so that we end up with mycont1, mycont2, etc.

These commands need to be inserted directly under each desired contact region so that they will be located in the appropriate place in the solver batch input file at solution time.


The next command snippet needed is the one that selects the desired contact and target elements and then employs the ANSYS Mechanical APDL command to kill them.  Finally we need to re-select all the elements in the model so that they are all active when the solution takes place.  An example of this command object is:

esel,s,type,,mycont
esel,a,type,,mytarg

!kill selected elements (contact and target)
ekill,all

!select everything
allsel

Note that anything that occurs after a “!” on a line is considered a comment.  This second command object needs to be inserted under the analysis type branch.
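As noted above, the same approach works in reverse. A hedged sketch of a companion command object that would bring the pair back to life in a later load step:

esel,s,type,,mycont
esel,a,type,,mytarg

!reactivate the selected contact and target elements
ealive,all

!select everything
allsel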


Next, we need to tell ANSYS to perform at least 2 steps (load steps).  This is accomplished in the Details view for Analysis Settings.  For Step Controls, number of steps needs to be 2 (or more than 2).  Once 2 load steps are specified, we can tell ANSYS to only activate the EKILL command snippet for load step 2.  This is done in the Details view for the command snippet.  Step Selection Mode can be set to By Number and the Step Number set to 2, meaning that the command object will only be active for load step 2.  This will result in the contact elements that have been selected by the above commands being killed in load step 2.


In our example, we have two semicircular rings, connected by contact elements where they touch.  One side of the interface uses bonded contact, active for both load steps.  The other side of the interface uses frictionless contact, active in load step 1 and killed in load step 2.  We would expect that under a compressive load, the frictionless contacts will prevent penetration in load step 1 but allow penetration in load step 2 since they have been killed.


That is exactly what the results show.  The contact status for the frictionless contact region goes from 2 (sliding) at the end of load step 1 to zero (far or not touching) at the end of load step 2.


Deformation plots indicate that penetration is prevented in load step 1.


In load step 2, penetration is allowed because the contact elements at this location have been killed.


So, here is a fairly simple Workbench Mechanical example of utilizing command objects to select contact and target elements, and to kill those elements using the Mechanical APDL EKILL command.  You can read up on element birth and death in the Mechanical APDL Help for more details.  We hope this is useful information to you.

Workshops for “Intro to Workbench Framework Scripting”


At noon Phoenix time today we will be presenting the webinar “Intro to Workbench Scripting: Controlling Projects, Materials, and Solution Execution with Python.”  This is a very high level, and probably short, introduction to the basics of using Python scripting in Workbench.

To support the talk, I’ve put together four workshops, mostly based on ANSYS material or examples we have presented before here on The Focus.  They should be enough for anyone who is a good programmer or better to start customizing the Workbench Framework.  We also present them here as a tool for those who don’t attend the webinar (and as this week’s Focus posting, because we didn’t have time to write one…)

Warning: These were done in a hurry between meetings and some travel… so they are not great from a grammar or typo standpoint. Regardless, we hope you find them useful.

The files you need to run them are in this zip file:

The Workshop Document is here: 

We hope to add to this document over the next year or so and provide it as a more or less complete tutorial for those who want to automate their analysis.
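To give a flavor of what the scripting looks like, here is a minimal sketch of a Workbench journal; the project path is a placeholder, and the easiest way to get the exact commands for your task is to record a journal from the GUI and then edit it:

# open an existing project, update all out-of-date systems, and save it
Open(FilePath="C:/Work/MyProject.wbpj")
Update()
Save(Overwrite=True)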

Starting ANSYS Products From the Command Line

Sometimes you just get tired of clicking on icons.  Sometimes you just need to feel the control and power of launching your applications from the command line.  You type it in, you hit the enter key, and sometimes you can actually hear the disk spin up or the fan run faster to cool the processor as the program you asked for, the program you took time to type out, leaps to life in front of you. Satisfaction.

OK, maybe not.  More often you are scripting some batch solves.  Or maybe you are using the graphical user interface in Workbench but you need to set options for the solvers you are running from within workbench.  Because most of the solvers in the ANSYS family of products predate such new-fangled concepts as GUI’s, and because they are often run remotely on other machines, they have command line interfaces. And that gives the knowledgeable user more power and control.

General Concepts for Launching from the Command Line

Although the number of options available changes from application to application, there are a few common things you should know.

Paths

The first and most important concept is to be aware of the path.  This is where most errors happen. One of the big changes over the years is that as software gets more complicated, the executable program or script that you use to launch a solver is now buried deep down inside a directory structure.  Since we never run in that actual directory we need to tell the operating system where the executable is. You can do this by including the full directory path in your command line argument, or by adding it to your path. 

On Linux follow the directions in the help:

// Installation and Licensing Documentation // Linux Installation Guide // 5. Post-Installation Instructions

for each of the products you want to run. Generally, you need to set the PATH environmental variable in your .cshrc, .login, or .profile. 
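For example, on a default v140 install, a csh-style .cshrc line might look something like the sketch below; bash users would set PATH with export in .profile, and the linx64 platform directory name is our assumption, so adjust the paths for your install.

setenv PATH ${PATH}:/ansys_inc/v140/ansys/bin:/ansys_inc/v140/Framework/bin/linx64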

On Windows it is not documented, the assumption being that you will be clicking on icons and not typing into a command window.  So a little detective work is needed. Use a file explorer and the Linux documentation on launching to locate the executable for solvers you want to use:

// Installation and Licensing Documentation // Linux Installation Guide // 5. Post-Installation Instructions // 5.3. Launching ANSYS, Inc. Products

The /ansys_inc part is usually replaced with c:\Program Files\ANSYS Inc.  The rest of the path is pretty much the same, swapping forward slashes with backward slashes.  Use these paths in your command line or add to your path by:

  • From the desktop, right-click My Computer and click Properties.
  • In the System Properties window, click on the Advanced tab.
  • In the Advanced section, click the Environment Variables button.
  • Finally, in the Environment Variables window (as shown below), highlight the Path variable in the Systems Variable section and click the Edit button. Add or modify the path lines with the paths you wish the computer to access. Each different directory is separated with a semicolon as shown below.

Windows environment path settings

Important note for Windows:  If you are typing the path in on the command line, you need to put it in double quotes.  Windows directory names often contain spaces, but the command line parser does not handle them unless the path is quoted.  So you will get an error if you type:

    C:\Program Files\ANSYS Inc\v140\ansys\bin\winx64\ansys140.exe

    But if you put it in quotes, it works fine:

    "C:\Program Files\ANSYS Inc\v140\ansys\bin\winx64\ansys140.exe"

Version Numbering

If you look at the example for launching MAPDL above you will notice that 140 is used in the directory path and in the name of the executable.  This will change with version: v130, v145, etc.  Just be aware of that if you are reading this blog posting in 3 years and we are on release 16.5; then you would use:

    "C:\Program Files\ANSYS Inc\v165\ansys\bin\winx64\ansys165.exe"

    Where do you Launch From?

    You of course need a command line to launch a solver.  This is usually a window that lets you type operating system commands: called a Command Prompt in Windows or a shell on Linux.  On Linux it can be an xterm window, a console window, or some other terminal window you have opened. 

    But you can also launch from a script, and that script can be launched from a command prompt or shell, or it can be launched by an icon.  All that needs to happen is that the script needs to be executed with the environmental variables required for the command prompt/script or the GUI.  If you don’t know how to make that happen, contact your IT support or someone who understands your operating system and how it runs processes.

    ANSYS Mechanical APDL

    The solver with the most options and capabilities from the command line is Mechanical APDL. So we will start there.  It is important to know these even if you use Mechanical most of the time.  That is because you can set these, and better control your solves, under Tools->Options->Mechanical APDL.  Here is what that dialog looks like:

The Mechanical APDL options dialog in Workbench

    The most common settings have their own widgets,  and the others can all be accessed by using “- string” command line style arguments in the first text widget aptly named: Command Line Options.

    Here are the options, grouped for your studying pleasure:

-ansexe (Customization): In the ANSYS Workbench environment, activates a custom ANSYS executable.
-custom (Customization): Calls a custom ANSYS executable. See the help on running custom executables for more information.
-acc device (HPC): Enables the use of a GPU compute accelerator. As this is written, nvidia is the only option, but as other cards become available look for this to have other options. Check the help.
-dis (HPC): Enables Distributed ANSYS. See the Parallel Processing Guide for more information.
-machines (HPC): Specifies the machines on which to run a Distributed ANSYS analysis. See Starting Distributed ANSYS in the Parallel Processing Guide for more information.
-mpi (HPC): Specifies the type of MPI to use. See the Parallel Processing Guide for more information.
-mpifile (HPC): Specifies an existing MPI file (appfile) to be used in a Distributed ANSYS run. See Using MPI appfiles in the Parallel Processing Guide for more information.
-np value (HPC): Specifies the number of processors to use when running Distributed ANSYS or shared-memory ANSYS.
-d device (Interface): Specifies the type of graphics device. This option applies only to interactive mode. For Linux systems, graphics device choices are X11, X11C, or 3D. For Windows systems, graphics device options are WIN32, WIN32C, or 3D.
-g (Interface): Launches the ANSYS program with the Graphical User Interface (GUI) on. Linux only. On Windows do a /SHOW and /MENU,ON to get the GUI up.
-l language (Interface): Specifies a language file to use other than US English.
-b [list | nolist] (Launch): Activates the ANSYS program in batch mode. The options -b list or -b by itself cause the input listing to be included in the output. The -b nolist option causes the input listing not to be included.
-i inputname (Launch): Specifies the name of the file to read input into ANSYS for batch processing. On Linux, the preferred method to indicate an input file is <. Required with the -b option.
-j Jobname (Launch): Specifies the initial jobname, a name assigned to all files generated by the program for a specific model. If you omit the -j option, the jobname is assumed to be file.
-o outputname (Launch): Specifies the name of the file to store the output from a batch execution of ANSYS. On Linux, the preferred method to indicate an output file is >.
-p productname (Launch): Defines which ANSYS product will run during the session (ANSYS Multiphysics, ANSYS Structural, etc.). This is how you pull a different license from the default. Very handy if you have multiple licenses to choose from.
-s [read | noread] (Launch): Specifies whether the program reads the start140.ans file at start-up. If you omit the -s option, ANSYS reads the start140.ans file in interactive mode and not in batch mode.
-dir (Launch): Defines the initial working directory. Remember to use double quotes if you have spaces in your directory path name. Using the -dir option overrides the ANSYS140_WORKING_DIRECTORY environment variable.
-db value (Memory): Defines the portion of workspace (memory) to be used as the initial allocation for the database. This and -m are the two most important options. If you ever find that ANSYS is writing a *.PAGE file, up this number.
-m value (Memory): Defines the total memory to reserve for the program. It is always better to reserve it up front rather than letting ANSYS grab it as it needs it.
-schost host name (MFX): Specifies the host machine on which the coupling service is running (to which the co-simulation participant/solver must connect) in a System Coupling analysis.
-scname name of the solver (MFX): Specifies the unique name used by the co-simulation participant to identify itself to the coupling service in a System Coupling analysis. For Linux systems, you need to escape the quotes or escape the space to have a name with a space in it recognized.
-scport port number (MFX): Specifies the port on the host machine upon which the coupling service is listening for connections from co-simulation participants in a System Coupling analysis.
-mfm (MFX): Specifies the master field name in an ANSYS Multi-field solver (MFX) analysis. See Starting and Stopping an MFX Analysis in the Coupled-Field Analysis Guide for more information.
-ser value (MFX): Specifies the communication port number between ANSYS and CFX in an MFX analysis.
-dvt (Other): Enables the ANSYS DesignXplorer advanced task (add-on).
-dyn (Other): Enables LS-DYNA.
-v (Other): Returns the ANSYS release number, update number, copyright date, customer number, and license manager version number. Does not actually run ANSYS MAPDL.
-name value (Params): Defines ANSYS parameters at program start-up. The parameter name must be at least two characters long. These get passed into ANSYS and are used by any APDL scripts you run.

The ones that everyone should know about are -p, -m, and -db.  We find that not using these to define which license to use (-p) or to control how memory is pre-allocated (-m, -db) generates the most tech support questions.  The next most important is the -np option. Use it to define more processors if you have HPC licenses.
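For example, a hedged sketch of a typical batch solve that uses those options (the directory, jobname, and file names are placeholders):

"C:\Program Files\ANSYS Inc\v140\ansys\bin\winx64\ansys140.exe" -b -p ane3flds -np 4 -m 8000 -db 2000 -dir "c:\work\job1" -j job1 -i model.inp -o model.out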

    Cheating – Use the Launcher

Sometimes the options can get long and confusing.  So what I do is use the “ANSYS Mechanical APDL Launcher” and fill in all the forms, then go to Tools > View Display Command Line and I can see all the options.


Here is a fancy command line that got generated that way:

    "C:\Program Files\ANSYS Inc\v140\ANSYS\bin\winx64\ansys140.exe"  -g -p ane3flds -np 2 -acc nvidia -dir "c:\temp" -j "grgewrt1" -s read -m 5000 -db 1000 -l en-us -lstp1 32 -t -d win32 -custom "/temp/myansys.exe"  

It runs interactive (-g), uses a Multiphysics license (-p ane3flds), grabs two processors (-np 2), uses an NVIDIA GPU (-acc nvidia (I don’t have one…)), runs in my temp directory (-dir), uses a job name of grgewrt1 (-j), reads the start.ans file (-s), grabs 5000 MB and 1000 MB of memory (-m, -db), uses English (-l), passes in a parameter called lstp1 and sets it to 32, uses the win32 graphics driver (-d), and runs my custom ANSYS executable (-custom).

    I have no idea what –t is.  Some undocumented option I guess…

    ANSYS Workbench

ANSYS Workbench also has some command line arguments. They are not as rich as what is available in MAPDL, but still powerful. They allow you to run Workbench in batch or interactive mode and supply Python commands as needed.  The key thing to remember is that the Workbench interface is not Mechanical or FLUENT; it is the infrastructure that other programs run on. Scripting in Workbench allows you to control material properties, parameters, and how systems are created and executed.

    Here are the options:

-B: Run ANSYS Workbench in batch mode. In this mode, the user interface is not displayed and a console window is opened. The functionality of the console window is the same as the ANSYS Workbench Command Window.
-R <ANSYS Workbench script file>: Replay the specified ANSYS Workbench script file on start-up. If specified in conjunction with -B, ANSYS Workbench will start in batch mode, execute the specified script, and shut down at the completion of script execution.
-I: Run ANSYS Workbench in interactive mode. This is typically the default, but if specified in conjunction with -B, both the user interface and console window are opened.
-X: Run ANSYS Workbench interactively and then exit upon completion of script execution. Typically used in conjunction with -R.
-F <ANSYS Workbench project file>: Load the specified ANSYS Workbench project file on start-up.
-E <command>: Execute the specified ANSYS Workbench scripting command on start-up. You can issue multiple commands, separated with a semicolon (;), or specify this argument multiple times and the commands will be executed in order.

The big deal in this list is the -B argument.  This allows you to run Workbench, and the applications controlled by the project page, in batch mode.  You will usually use the -R argument to specify the IronPython script you want to run.  In most cases you will also want to throw in a -X to make sure it exits when the script is done.
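For example, on Linux a hedged batch run of a journal file looks something like this (the script name is a placeholder, substitute your platform directory for <platform>, and on Windows apply the same path substitution described in the MAPDL section above):

/ansys_inc/v140/Framework/bin/<platform>/runwb2 -B -R myStudy.wbjn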

    Other ANSYS Products

    Here is where it gets boring.  The other products just don’t have all those options. At least not documented.  So you simply find the executable and run it.  Here is the list for Linux. Use it to find the location on Windows.

Mechanical APDL: /ansys_inc/v140/ansys/bin/ansys140
ANSYS Workbench: /ansys_inc/v140/Framework/bin/<platform>/runwb2
ANSYS CFX: /ansys_inc/v140/CFX/bin/cfx5
ANSYS FLUENT: /ansys_inc/v140/fluent/bin/fluent
ANSYS ICEM CFD: /ansys_inc/v140/icemcfd/<platform>/bin/icemcfd
ANSYS POLYFLOW: /ansys_inc/v140/polyflow/bin/polyman
ANSYS CFD-Post: /ansys_inc/v140/CFD-Post/bin/cfdpost
ANSYS Icepak: /ansys_inc/v140/Icepak/bin/icepak
ANSYS TurboGrid: /ansys_inc/v140/TurboGrid/bin/cfxtg
ANSYS AUTODYN: /ansys_inc/v140/autodyn/bin/autodyn140

    APDL Math–Access to the ANSYS Solver Matrices with APDL

    APDL Math.  It is one of the most powerful, uber-user, deep down under the hood, wrapping your hands around the neck of what FEA is, capabilities in the ANSYS Mechanical APDL (MAPDL) solver.  And most users don’t even know it is there.  It kind of snuck in over time, with the developers adding more and more capability each release.  Now it gives you access that you needed custom FORTRAN code to get to in the past… or NASTRAN DMAP, which is really something none of us ever want to do.

    There is a lot of capability in this tool. This posting is just going to cover the basics so that you know what the tool can be used for in case you need it in the future, and hopefully motivate some of you to take a long look at the help. 

    What is APDL Math?

It is an extension to the APDL command language that drives MAPDL.  Although it runs in a different workspace (chunk of memory in the ANSYS database), it talks to standard APDL by importing and exporting APDL arrays (vectors or matrices).  It consists, at R14, of 18 commands that can be executed at the /SOLU level at any time.  All of the commands start with a * character and look and act like standard APDL commands.

APDL Math is a tool that lets users do two things: 1) view, export, or modify matrices and vectors created by the solver, and 2) import or create matrices and vectors, modify them, and then solve them.  The most common use we have seen is exporting a matrix from ANSYS for use in some other program, usually MATLAB.  The other is working with substructure matrices.

    The entire tool is documented in the Mechanical APDL section of the help under  // ANSYS Parametric Design Language Guide // 4. APDL Math.

    The Commands

Below is a list of the APDL Math commands.  As usual, you really need to read the manual entries to get the full functionality explained.  Just like parameters and arrays in APDL, the matrices and vectors in APDL Math use names to identify them.  Note that you have a set of commands to create the matrix/vector you want, which includes reading from a file or importing an APDL array.  Then you have commands to do basic matrix/vector math like multiplication, dot products, and fast Fourier transforms.  Then there are solver commands.

    Commands to create and delete matrices and vectors

    *DMAT, Matrix, Type, Method, Val1, Val2, Val3, Val4, Val5

    Creates a dense matrix that is complex, double or integer. You can allocate it, resize an existing matrix, copy a matrix, or link to a portion of a matrix. You can also import from a file or an APDL variable.

    *SMAT, Matrix, Type, Method, Val1, Val2, Val3

    Creates a sparse matrix. Double or Complex, copied or imported.

    *VEC, Vector, Type, Method, Val1, Val2, Val3, Val4

    Creates a vector. Double, complex or integer. Similar arguments to *DMAT.

    *FREE, Name,

    Deletes a matrix or a solver object and frees its memory allocation. Important to remember to do.

     

    Commands to manipulate matrices

    *AXPY, vr, vi, M1, wr, wi, M2

    Performs the matrix operation M2= v*M1 + w*M2.

    *DOT, Vector1, Vector2, Par_Real, Par_Imag

    Computes the dot (or inner) product of two vectors.

    *FFT, Type, InputData, OutputData, DIM1, DIM2, ResultFormat

    Computes the fast Fourier transformation of the specified matrix or vector.

    *INIT, Name, Method, Val1, Val2, Val3

Initializes a vector or dense matrix. Used to fill vectors or matrices with zeros, constant values, random values, or values on the diagonal.

    *MULT, M1, T1, M2, T2, M3

    Performs the matrix multiplication M3 = M1(T1)*M2(T2).

    *NRM, Name, NormType, ParR, Normalize

    Computes the norm of the specified vector or matrix.

    *COMP, Matrix, Algorithm, Threshold

Compresses the columns of a matrix using either the singular value decomposition algorithm (default) or the modified Gram-Schmidt algorithm.

     

    Commands to perform solutions

    *LSENGINE, Type, EngineName, Matrix, Option

Creates a linear solver engine and assigns a name to be used when you want to execute the solve. Supports Boeing sparse, MKL sparse, LAPACK, or distributed sparse.

    *LSFACTOR, EngineName, Option

    Performs the numerical factorization of a linear solver system.

    *LSBAC, EngineName, RhsVector, SolVector

    Performs the solve (forward/backward substitution) of a factorized linear system.

    *ITENGINE, Type, EngineName, PrecondName, Matrix, RhsVector, SolVector, MaxIter, Toler

    Performs a solution using an iterative solver.

    *EIGEN, Kmatrix, Mmatrix, Cmatrix, Evals, Evects

    Performs a modal solution with unsymmetric or damping matrices.

     

    Commands to output matrices

    *EXPORT, Matrix, Format, Fname, Val1, Val2, Val3

Exports a matrix to a file in the specified format. Supports Matrix Market, ANSYS SUB, DMIG, Harwell-Boeing, and ANSYS EMAT. Also used to put values in an APDL array or print them in a formatted way to a postscript file.

    *PRINT, Matrix, Fname

    Prints the non-zero matrix values to a text file.

     

    Useful APDL Commands

    /CLEAR, Read

    Wipes all APDL and APDL Math values from memory

    WRFULL, Ldstep

    Stops solution after assembling global matrices. Use this to make matrices you need when you don’t want a full solve

    /SYS, String
    Executes an operating system command. This can be used in APDL Math to do some sort of an external operation on a matrix you wrote out to a file, like running a matlab script. After execution the matrix can be read back in and used.

     

    How You Use APDL Math

    When using APDL math you should follow some basic steps.  I’m always a big proponent of the crawl-walk-run approach to anything, so I also recommend that you start with small, simple models to figure stuff out.

    First Step: Know the Math

The first step, or maybe the zeroth, is to understand your math.  If you charge in and start grabbing the stiffness matrix from the .FULL file and changing values, who knows what you will end up with. Chart out the math you want to do on paper or in a tool like Mathematica.

Then make sure that you understand the math in ANSYS, and that includes the files being used by ANSYS.  A good place to look is the Programmer’s Manual; Chapter 1 lists the various files and what is in them.  It might also not be a bad idea to crack open the theory manual.  We all know that ANSYS solves Kx = F, but how, and with what matrices and vectors?  Section 17.1 explains the static analysis approach used, with lots of links to more detailed explanations.

    Second Step: Create your Matrices/Vectors

Since the whole point of using APDL Math is to do stuff with matrices/vectors, you need to start by creating them.  Note that we are not doing anything with APDL Math yet.  We are using APDL, ANSYS, or an external program to get our matrix/vector so that we can then get it into APDL Math.  There are three types of sources you can get matrices/vectors from:

    1. Use APDL to create an array.  *DIM, *SET, *VREAD, *MOPER, etc… 
2. Use ANSYS to make them as part of a solve, or as part of an “almost solve” using WRFULL (see the sketch after this list).  You can read the .FULL, .EMAT, .SUB, .MODE or .RST files.
    3. Get a file from some other source and put it into a format that APDL Math can read. It supports Harwell-Boeing, Matrix Market, or NASTRAN DMIG format as well as APDL Math’s own format.
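For source 2, here is a minimal sketch of such an “almost solve” for a linear static case; it assembles the matrices and writes file.full without actually solving:

/solu
antype,static
wrfull,1      ! stop the solution after the global matrices are assembled and written
solve         ! writes file.full for load step 1, then stops
finish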

    Third Step: Get the Matrices/Vectors into APDL Math

    Using *DMAT, *SMAT, and *VEC you convert the APDL array, ANSYS file, or external format file into a matrix or a vector.  You can also use *INIT to make one from scratch, filling it with constants, zeros, random numbers or by setting the diagonal or anti-diagonal  values.

    Fourth Step: Manipulate the Matrices/Vectors

    In this step you can modify matrices in a lot of different ways.  The simplest is to use *MULT or *AXPY to do math with several matrices/vectors to get a new matrix/vector. 

Another simple way to change things is to simply refer to the entries in the matrix using APDL.  As an example, to change the damping at I=4 and J=5 in a damping matrix called dmpmtrx, just use dmpmtrx(4,5) = 124.321e4.

You can take that one step further and use the APDL operators that work on arrays, like *SET, *MOPER, *VFUN, and whatever *DO loops you need.

If you can’t do the modification you need in APDL or APDL Math, then you can use *EXPORT to write the matrices out and use an external tool like MATLAB or Mathematica to do your manipulation. Of course you then use *DMAT with an IMPORT argument to read the modified matrix/vector back inside.
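A hedged sketch of that round trip, using the argument orders from the command tables above (the file and script names are placeholders; check the *EXPORT and *DMAT entries in the help for the exact format keywords):

*export,MyMat,MMF,mymat.mmf                 ! write MyMat out in Matrix Market format
/sys,run_my_matlab_script.bat               ! hypothetical external step that writes mymat_mod.mmf
*dmat,MyMatMod,D,IMPORT,MMF,mymat_mod.mmf   ! read the modified matrix back in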

    Fifth Step: Use the Matrix

    Now it is finally time to use your matrices/vectors.  The most common use is to bundle it up in a substructure matrix (*EXPORT,,SUB) and use it in a solve.  What is great about this is that you can also (I know, we don’t want to use the N-word) save the file as a NASTRAN substructure and give it to that annoying DMAP guru who insists that NASTRAN is the only structural analysis code in the world.  He gets his file, and you get to use ANSYS.

You can also solve using APDL Math.  This can require multiple steps depending on what you want to do. A typical solve involves using the *LSENGINE command to define how you want to solve, then factoring your matrix with *LSFACTOR, then solving with *LSBAC.
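Putting the pieces from the command tables together, a minimal hedged sketch of that flow, starting from an existing file.full (the matrix, vector, and engine names are ours):

*smat,MatK,D,IMPORT,FULL,file.full,STIFF    ! stiffness matrix from the .FULL file
*vec,VecF,D,IMPORT,FULL,file.full,RHS       ! load vector from the same file
*vec,VecX,D,COPY,VecF                       ! allocate a solution vector of the same size
*lsengine,BCS,MyEngine,MatK                 ! create a solver engine tied to MatK
*lsfactor,MyEngine                          ! numerical factorization
*lsbac,MyEngine,VecF,VecX                   ! forward/backward substitution, solving for VecX
*print,VecX,solution.txt                    ! write the solution vector to a text file
*free,MatK
*free,VecF
*free,VecX

Keep in mind that vectors pulled from the .FULL file are in the solver’s internal ordering; see the DOF Order note below.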

    There are also ways to continue a solve in ANSYS after changing the EMAT file.  Unfortunately my plans to do an example of this have been thwarted and ANSYS did not provide one, so I’m not 100% sure on the steps required.  But a LSSOLVE that does not force ANSYS to recreate the matrices should work.  Maybe a topic for a future posting. 

    Other Stuff to Know About

There is a lot more to this, and the help is where you can learn more.  But a few things everyone needs to be aware of are listed here.

    DOF Order

    A key area where people have problems is understanding how the DOF’s in your matrix or vector are ordered. This is because ANSYS reorders things for efficiency of memory and solve.  The nice thing is that ANSYS stores a map in the full file that you can use to convert back and forth using the *MULT command. 

    Please read section 4.4 ( // ANSYS Parametric Design Language Guide // 4. APDL Math // 4.4. Degree of Freedom Ordering) in the manual. They have a great description and some examples.

Just remember to deal with DOF ordering.

    Limitations

This is still a new tool set, and as users apply it to real-world problems, ANSYS is adding functionality.  Right now there are some limitations.

• The biggest is that all of this works on linear matrices, so you have to be working on a linear problem. Material or geometric nonlinearities just don’t work.  This makes perfect sense but may be one of those things some users might not figure out until they have invested some serious time.
    • You can not modify a sparse matrix in APDL Math.  You have to write it out using *EXPORT, modify it with something like matlab, then read it back in with *SMAT.
• *MULT can not be used to multiply two sparse matrices. One or both must be dense, and the result is always dense.
• The BCS, DSS, and DSP solvers only work with sparse matrices. Only the LAPACK solver works for a dense matrix.

Real and Imaginary

    Most of the features in APDL math work with matrices that have imaginary terms in them. Be careful and read the help if you are using complex math, it can get tricky.  Especially read 4.3 on how to specify a position in a matrix for real or imaginary numbers.

    Examples

    There are not a lot out there.  The manual has some in section 4.7.  Take a look at these before you do anything. If you can not find a specific example, contact your technical support provider and see if they have one, or if they can ask development for one.

    Give it a Shot, and Share

    This is a new feature, and a power user feature.  So what is happening is some very smart people are doing some very cool things with it, and not sharing.  It is very important that you share your effort with your Channel Partner or ANSYS, Inc so that they can not only see how people use this tool, but also modify the tool to do even more cool things. 

    It would also be cool if you posted what you do on XANSYS for the masses to see.  Very cool.

    Dean Kamen Visits Phoenix

Inventor of medical devices, the man behind the Segway, and FIRST backer Dean Kamen visited the Phoenix area yesterday and today, and we were lucky enough to have people from PADT invited to two different events at which he spoke.  For engineers involved in product development, this is like a visit from an NFL quarterback for most people.  He turned out to be open, engaging, and a very good speaker.

    We could go on in adoration and explore the guilt and envy we feel after seeing all that he has done.  Instead we thought we would highlight two things we learned from his visit:

1. The FIRST program that he started and still heads is making a huge difference in this country and around the world.  PADT has been peripherally involved, focusing instead more on the underwater robot scholastic competitions that are very popular here in Phoenix.  But FIRST is now huge, and it is still growing.  What we learned is the positive impact it is having: students who participate in FIRST are 3 times more likely to become engineers, 30% more likely to attend college, and twice as likely to volunteer in their communities. Those are some positive numbers.  Those of us in the engineering world should take advantage of that and support FIRST.

                                                                        http://www.usfirst.org/

2. He also offered a unique perspective on how engineers see the world.  When he was young he heard the story of David and Goliath.  Most people see a religious message in this story; there are various interpretations. But as a child, Dean Kamen did not see those messages.  What he saw was that David won because he had better technology: he had a sling shot.  That is how he beat the giant.  I found that a very interesting point of view. If you don’t get it, ask an engineer.

If you ever have the chance to explore what his company, DEKA, is currently doing with a revolutionary power generation and water purification solution for areas of the globe without power or clean water, do so.  It is very leading edge Stirling engine and distillation technology.

    ANSYS Training Face to Face

This week’s Focus posting is not going to be very technical. In fact, it is a bit of an editorial.

    Over the past five years or so we have seen a lot of companies who use ANSYS, Inc products move away from traditional face-to-face training with instructors in a classroom.  There are a lot of reasons for this.  The two most common are that 1) the company does not have a travel budget and 2) that training labor hours are considered overhead and managers all have very strict overhead restrictions.  On top of these two, many companies are just plain trying to save money on the cost of training or limiting their overall training budgets.

    What we have seen is a larger number of users either trying to train themselves from manuals or downloaded training material, or people trying to do web-based training.  One can certainly learn how to use an FEA or CFD tool this way, but through our tech support we are starting to see the negative side of this shift: users only understand some of the aspects of the tools and do not have a depth of knowledge that goes beyond the basics.  So when they run into a problem that requires a move beyond those basics, or that might require a more nuanced approach, they struggle or they call tech support for on-the-spot training.

    Interaction

    Even though we engineers are not the most social sub-species of humans, we can still heavily benefit from face-to-face interaction during training.  When PADT teaches a training class we find that a small portion of the time is spent lecturing on and doing workshops for the basics.  Most of the time is spent answering questions that occur to students while they take these basics in.  Some are industry or user specific, some delve deeper into the tool than the training material does.  But they all provide an education to the whole class that never occurs otherwise.

    We have taught, and been students in, web based training classes. The interaction is just not the same.  There are not as many questions and the instructor is not able to use body language clues to see if the class is really getting what they are saying.  In fact, we feel this is the biggest issue. When you are on the phone and sharing a screen you can not even tell if the students are listening.  So the instructor pushes on, the students drift further away, and the true benefits of the class are lost.

    Make a Case for Classroom Training

    The point of all of this is that we feel users out there need to make a case for real classroom training.  When your boss says that there is no travel budget, not enough overhead allocation, or just not enough money, argue strongly that the cost differences of online or self training are not that significant when compared to the productivity problem of not having deep, interactive training.  If you are a boss, admit it, you know we are right.  You should fight a bit harder for the budget because in the long run you will save money.

Another way to look at it is the relative cost of classroom training versus how much you will use the ANSYS tools you are trained on.  Even if we assume that the company you work for kind of sucks and most engineers move out of there in five years, one to three weeks of training is nothing when compared to five years as a user.  If your productivity is just 5% higher during that time, the savings are significant.

    Do the full classroom training.  You will not regret it.

    As a full disclosure:
    We are partly motivated to express this opinion by the fact that we make money doing such training classes, but in reality very few of you reading this will do training with us (although you could use us if you wanted to… hint, hint).  Most of you do your training through other Channel Partners or ANSYS, Inc.  So this posting is not entirely self serving.

    About the pictures:
I find the stock photographs of what are basically models so contrived and stereotypical that they are hilarious.  So I grabbed a few of my favorites to share.  I love how they always have someone crossing their arms, looking thoughtful. 

    Stomp on a Bug in ANSYS Mechanical R14

    image

Okay, so it's not the ugly critter in the picture (that one was found at my house), but the bug in ANSYS R14 can give you the willies just the same. ANSYS, Inc. is working very hard to get the situation remedied, and they will be sending out defect notices shortly.  They provided a quick fix, however, and we wanted to get it out to you as quickly as possible, so I have done some screen captures while running through the fix.

The issue is that if you close a Mechanical, or Meshing, session and then hit 'Save' on the Project window, you may lose the contents of your Mechanical database.   The problem arose in R14 because ANSYS, Inc. decreased the start-up time of Mechanical by pre-loading it when Workbench is started. When you close Mechanical, it stays open in the background as an empty database. There is a roughly one-second 'window of opportunity' between closing the Mechanical editor and saving the project file during which the process threads are not fully synchronized.  If you save during this period, the blank session gets saved on top of the good session and all data is lost.  If you wait longer than a second there shouldn't be an issue, but some customers have reported losing data after longer waits, and ANSYS, Inc. is working with them to find out what is causing the longer times on their machines.

    Luckily the remedy is simple, and hopefully none of you will have to lose any data.  Since the issue is caused by the pre-loading of Mechanical, the remedy is to simply turn the pre-loading off.  This has to be done from  the Tools > Options menu on the Project window. Just uncheck the Pre-Load box in both the Mechanical and Meshing dialog boxes, and then close Workbench after hitting ‘OK’. The next time you open Workbench, the bug will be neutralized.

I just wish it were that easy for that creepy guy in the picture!

      image

    Reducing the Size of your RST File–OUTRES is your Friend

One of the most common questions on XANSYS, and a common tech support question for us, is: "Is there any way to make my result file smaller?"  In fact, we got a support call on that very topic just last week.  So we thought it would be a good topic for The Focus; besides giving the standard quick answer, we can go a bit deeper and explain why.

    Why so Big?

The first thing you need to understand is why the result file is so big. One of the fundamental strengths of the ANSYS solver, going back to the days when John Swanson and his team were writing the early versions, is that the wealth of information available from an FEA solution is not hidden from the user.  They took the approach that whatever can be calculated will be calculated, so the user can get to it without having to solve again.  Most other tools out there do the most common calculations, average them, and store only that in the results file. 

If you go back to your FEA fundamentals and take a look at our sample 2D mesh, you will quickly get an idea of what is going on.

    FEA-Mesh-Parts

When an FEA mesh is solved, the program calculates the degree of freedom (DOF) results at each node in the model. That DOF solution is then used to calculate the derived values, usually stresses and strains, at the integration points of each element, and those values are extrapolated or copied out to the nodes of each element. Then, during post processing, the user decides how they want to average those element nodal values.

So the stress in the X direction for node 2 in our example is different for element 1 and element 2.  Element 1 uses the results it calculated with its own shape functions to get stresses and strains at node 2, and element 2 does the same thing. 

Most programs average these values at solve time and store either an element average or a nodal average.  But ANSYS does not.  It stores the values for each node on each element in its results file.
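
You can see this distinction for yourself in POST1 by plotting the unaveraged element nodal values next to the averaged nodal values. Here is a minimal sketch using standard POST1 commands:

/POST1
SET,LAST                  ! read the last result set
PLESOL,S,X                ! element solution: unaveraged nodal stresses, one value per element at each node
PLNSOL,S,X                ! nodal solution: stresses averaged across the elements attached to each node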

    By default, node 5 in our example will be stored four times in the ANSYS RST file, and each instance will contain 12 result items for a linear run (SX, SY, SXY, S1, S2, S3, SINT, SEQV, EPELX, EPELY, EPELXY, EPELEQV) and even more for 3D, non-linear, and special capability elements.

And now, if you want, let's digress into the nitty-gritty:

You can actually see what is in the RST file because ANSYS Mechanical APDL has awesome documentation.  Modern programs don't even come close to the level of detail you can find in the ANSYS MAPDL help.  Go to the Mechanical APDL section of the help and then follow this path:

    // Programmer’s Manual // I. Guide to Interfacing with ANSYS // 1. Format of Binary Data Files // 1.2. Description of the Results File. 

Look at the file format in section 1.2.3.  There are lots of headers and such, but if you take the time to study it you will be able to see how much space each type of record stored in the results file uses.

First it stores model information: everything that describes the model, such as the geometry and the loads.  If you scroll down to the "Solution data header" (use find to jump there) you get to the actual results. This is the big part.

    Scroll down past the constants to the NSL, or the node solution list: 

    C * NSL    dp       1  nnod*Sumdof  The DOF Solution for each node in the nodal coordinate system.

This says that the size of this record is the number of nodes (nnod) times the number of degrees of freedom (Sumdof).  So for a structural solid model with 10,000 nodes and only UX, UY, and UZ DOFs, it would be 30,000 double precision numbers long, or roughly 240 KB per result set (30,000 values × 8 bytes each).
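
If you want to estimate that number for your own model, a quick parameter calculation does the trick. This is just a sketch; the parameter names are made up for the example and the DOF count is assumed to be 3:

*GET,nnod,NODE,0,COUNT    ! number of currently selected nodes
ndof=3                    ! DOFs per node (3 for a structural solid model)
nslbytes=nnod*ndof*8      ! double precision values are 8 bytes each
*STATUS,nslbytes          ! bytes in the NSL record for one result set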

    After the DOF solutions we have velocity, acceleration, reaction forces, masters, and then boundary conditions and some loads. Even some crack tip info.

Then comes the big part: the element solution information.  Take a look at it. You have miscellaneous data, element forces, ETABLE-type data, and so on. Then there is the ENS record, the element nodal component stresses.  Its size is marked as "varies" because so many variables determine how big this record is. Read the comments; they are long, but they explain the huge amount of information stored here. 

Study this and you will know more than you ever wanted to about what is in the RST file!
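
If you would rather poke at the binary records directly, the AUX2 auxiliary processor will dump them for you. A minimal sketch, assuming the default jobname "file" and an arbitrary record range:

FINISH
/AUX2                     ! enter the binary file dumping processor
FILEAUX2,file,rst         ! point it at file.rst
DUMP,1,10                 ! dump the contents of binary records 1 through 10
FINISH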

    Storing all of this information is the safest option. Everything a user might need during post processing is stored so that they do not have to rerun if they realize they need some other bit of information. But, as is usual with MAPDL, the user is given the option to change away from those defaults, and only store what they want. OUTRES really is your friend.

    You Are in Control: OUTRES

OUTRES is a unique command in that it is cumulative.  Every time you issue the command, it adds or removes what you specify from what gets stored in the RST file.  The basics of the command are shown below, and more information can be found in the online help.  Use OUTRES,STAT to see what the current state is at any point, and always start with OUTRES,ERASE to make sure you have cleared any previous settings, including the defaults.  

Remember that this command affects the current load step. The settings get written to the load step file when you do an LSWRITE.  So if you have multiple load steps and you want the same settings for each, set them on the first one and they will stay in place for all the others.  But if you want to change the settings for a given load step, you can, as in the sketch below.
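
Here is a minimal sketch of what that looks like in an input deck with two load steps. The loads and the rest of the setup are omitted, so treat it as a pattern rather than a complete run:

/SOLU
OUTRES,ERASE              ! clear previous settings, including the defaults
OUTRES,BASIC,LAST         ! load step 1: basic items, last substep only
LSWRITE,1                 ! write load step file 1 with these output controls
OUTRES,BASIC,ALL          ! load step 2: keep every substep
LSWRITE,2                 ! write load step file 2
LSSOLVE,1,2               ! solve both load steps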

    If you are a GUI person you can access this from four different branches in the menus:

    Main Menu>Preprocessor>Loads>Analysis Type>Sol’n Controls>Basic
    Main Menu>Preprocessor>Loads>Load Step Opts>Output Ctrls>DB/Results File
    Main Menu>Solution>Analysis Type>Sol’n Controls>Basic
    Main Menu>Solution>Load Step Opts>Output Ctrls>DB/Results File

    The first thing to play with on the command is the first argument: Item. 

The default is to write everything.  For most runs we recommend that OUTRES,BASIC is good enough.  It tells the program to store displacements, reaction forces, nodal loads, and element stresses. The big thing it skips is the strains. Unless you are looking at strains, why store them?  The same goes for the MISC values; most users never access these. 

The next thing you can use to reduce file size is to not store the results on every element.  Use the Cname argument to specify a component you want results for.  Maybe you have a huge model but you really only care about the stress over time at a few key locations.  Use node and element components to specify which results you want for which parts of the model.  Note that you can't use ALL, BASIC, or RSOL with this option; you need to specify a specific type of result for each component.  Remember, the command is cumulative, so use a series of OUTRES commands to control this.

    OUTRES, Item, Freq, Cname

    ITEM
    Results item for database and file write control:
    ALL — All solution items except SVAR and LOCI. This value is the default.
    CINT — J-integral results.
    ERASE — Resets OUTRES specifications to their default values.
    STAT — Lists the current OUTRES specifications.
    BASIC — Write only NSOL, RSOL, NLOAD, STRS, FGRAD, and FFLUX records to the results file and database.
    NSOL — Nodal DOF solution.
RSOL — Nodal reaction loads.
V — Nodal velocity (applicable to structural full transient analysis only (ANTYPE,TRANS)).
A — Nodal acceleration (applicable to structural full transient analysis only (ANTYPE,TRANS)).
ESOL — Element solution (includes all of the items that follow):
    NLOAD — Element nodal, input constraint, and force loads (also used with the /POST1 commands PRRFOR, NFORCE, and FSUM to calculate reaction loads).
    STRS — Element nodal stresses.
    EPEL — Element elastic strains.
    EPTH — Element thermal, initial, and swelling strains.
    EPPL — Element plastic strains.
    EPCR — Element creep strains.
    EPDI — Element diffusion strains.
    FGRAD — Element nodal gradients.
    FFLUX — Element nodal fluxes.
    LOCI — Integration point locations.
    SVAR — State variables (used only by USERMAT).
    MISC — Element miscellaneous data (SMISC and NMISC items of the ETABLE command).

    Freq
    Specifies how often (that is, at which substeps) to write the specified solution results item. The following values are valid:
     

n — Writes the specified results item every nth (and the last) substep of each load step.

-n — Writes up to n equally spaced solutions (for automatic loading).

NONE — Suppresses writing of the specified results item for all substeps.

ALL — Writes the specified solution results item for every substep. This value is the default for a harmonic analysis (ANTYPE,HARMIC) and for any expansion pass (EXPASS,ON).

LAST — Writes the specified solution results item only for the last substep of each load step. This value is the default for a static (ANTYPE,STATIC) or transient (ANTYPE,TRANS) analysis.

%array% — Where array is the name of an n x 1 x 1 dimensional array parameter defining n key times, the data for the specified solution results item is written at those key times. Key times in the array parameter must appear in ascending order. Values must be greater than or equal to the beginning time values of the load step, and less than or equal to the ending time values of the load step. For multiple-load-step problems, either change the parameter values to fall between the beginning and ending time values of the load step, or erase the current settings and reissue the command with a new array parameter. For more information about defining array parameters, see the *DIM command documentation. (A short example of this option follows the Cname description below.)

    Cname
    The name of the component, created with the CM command, defining the selected set of elements or nodes for which this specification is active. If blank, the set is all entities. A component name is not allowed with the ALL, BASIC, or RSOL items.
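
As promised above, here is a quick sketch of the %array% form of Freq. The array name ktimes and the key time values are made up for the example:

*DIM,ktimes,ARRAY,3       ! 3 x 1 x 1 array of key times
ktimes(1)=0.25
ktimes(2)=0.50
ktimes(3)=1.00
OUTRES,ERASE
OUTRES,NSOL,%ktimes%      ! write the nodal DOF solution only at the key times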

    Use What Works for You

The help on the OUTRES command has a nice example where the user specifies different outputs for different substeps. Check it out to get your head around what is happening:  // Command Reference // XVI. O Commands // OUTRES

Next time you run a small but typical model, play with the options.  Most of the time, when I have a lot of load steps or a large model, I use the following in my input deck:

OUTRES,ERASE              ! clear any previous settings, including the defaults
OUTRES,BASIC              ! store displacements, reactions, nodal loads, and element stresses

Sometimes I only care about surface stresses, so I use the following (of course, LAST can be replaced with ALL or any other frequency):

NSEL,s,ext                ! select the exterior (surface) nodes
ESLN,S,0                  ! select the elements attached to those nodes
CM,nxtrnl,node            ! node component of the exterior nodes
CM,extrnl,elem            ! element component of the attached elements
OUTRES,erase              ! clear previous settings, including the defaults
OUTRES,nsol,last,nxtrnl   ! DOF solution, last substep, exterior nodes only
OUTRES,strs,last,extrnl   ! element nodal stresses, last substep, exterior elements only
OUTRES,nload,last,extrnl  ! nodal loads on those elements, for reaction calculations
OUTRES,stat               ! list the settings to verify

    Use OUTRES on the next couple of runs you do. Try BASIC, try some other things and see if you can save some disk space.

    ANSYS Mesh Connections–Another Tool for Meshing Surface Assemblies

    Anyone who has had to mesh shell assemblies has probably run into trouble with edges that don’t quite line up, edges that meet in the middle of faces and other problems that make the meshing process difficult.  Often geometry operations were required to reconcile those problems and many times significant effort was required to get a continuous mesh.

Another historically used tool for connecting shell assemblies was constraint equations, which tie edge nodes to surface nodes at the finite element level.  More recently, advances in contact technology have allowed nonlinear contact elements to be used to connect shell assembly meshes.  Both of those techniques, while useful, have some drawbacks.  For example, constraint equations do not support large rotations of the geometry, since the direction of application does not change as the nodes rotate.  And contact elements add computational expense that could otherwise be avoided.

    ANSYS, Inc. now provides us with another technique for handling shell assembly meshing, called Mesh Connections.  First available in version 13.0 and enhanced in version 14.0, mesh connections use the mesher itself to connect shell assemblies toward the goal of getting a continuous or conformal mesh across the surface bodies that make up the assembly.

    Consider this boat hull example.  It consists of panel surfaces defining the hull as well as some stiffening ribs.  All geometry is composed of surface bodies. 

    image

Some of the ribs line up with edges in the hull surfaces, while others do not, as shown in the close-up image below.

    image

    We can now create mesh connections in the Connections branch after loading this geometry into the Mechanical editor in Workbench 14.0.

    image

Upon generating the mesh, the mesher will attempt to create a continuous or conformal mesh even though we do not have intersecting geometry. 

    image

    With the default settings, we can see in this image that it did a fairly good job of creating the mesh for the ribs which do not intersect with the hull surfaces.  Nodes on the hull surface were adjusted so that they connect to the rib geometry. 

    image

In this case, with relatively little effort, we were able to obtain a continuous mesh between the ribs and the hull, even though several of the ribs shared no intersections with the hull surfaces.  In fact, the mesh connections were able to overcome small gaps between the geometry as well.

In 14.0, mesh connections are by default performed after the initial mesh is created.  This means that if changes are made only to the mesh connection settings, the remeshing operation is fairly quick, since in most cases the initial mesh does not need to be regenerated.

    Note that mesh connections exist in the Connections branch, not the Mesh branch. The mesh connection setup works in similar fashion to contact region creation in that searching for edges/faces to connect is based on proximity. The proximity value can be controlled via a slider or by entering an explicit distance, both available in the Mesh Connections details window.

    To activate mesh connections, highlight the Connections branch and click on the Connection Group button in the context sensitive menu above the outline tree.  Change the Connection Type to Mesh Connection in the details.

    image

    Next right click on the Connections branch and select Create Automatic Connections.  You may need to adjust the auto detection tolerance in the details to make sure the tolerance distance is large enough to detect desired gaps between edges and faces or edges and edges for the mesh connection to work.

    If any contact regions have been automatically created that you want to replace with mesh connections, delete or suppress them.  You have the choice of automatically creating mesh connections or manually creating them.  Both options are available by right clicking. 

    image

    In the example shown here, mesh connections are edge to face.  Edge to edge mesh connections are also available.

    With a couple of mesh settings added, we can obtain a better mesh:

    image

     

    Note that the hull surface nodes have moved a bit in order to allow for the mesh connections with the ribs.  Here is a view of the outer hull surface in the mesh connection region:

    image

There are other considerations as well, such as which geometric entities should be the master or slave; in general the slave geometry is 'pinched' into the master geometry.  Also, mesh connections can be set up manually for cases where the auto detection is not appropriate or is not providing the desired level of control.  Note that the mesh can end up as an approximation of the geometry, since the mesh will have moved to close gaps.  Here is an example:

    image

In summary, mesh connections are another tool available in the ANSYS meshing capabilities, and one with real value for shell assemblies.  In cases where shell geometry edges do not meet at intersections, we can still obtain a continuous mesh without having to perform additional geometry operations.  Mesh connections can also be faster than using contact elements at the edges.  There are other features and considerations for mesh connections, which are explained in the ANSYS 14.0 Help.  We recommend you give them a try if you are tasked with simulating shell-type structures.

    Bringing the Value of Simulation into Perspective

Those of us who do simulation for a living spend a lot of time focusing on faster, cheaper, better.  But we rarely deal with the hard, cold reality that better often means safer.  Please take some time to read the blog entry below from an ANSYS employee who survived an insane crash because a bunch of nerds at Nissan ran simulations over and over again on his car so that it would protect him.  Warning… it may make you mist up a bit.

    http://blog.ansys.com/2012/01/26/nissan-understand-behind-realize-your-product-promise/

    ANSYS R14 Quick Install Instructions for Windows

Editor's note: For this week's Focus posting I'm just taking a PowerPoint that Ted Harris created and putting it on the blog so the search engines can find it easily.  We share this with our customers to help them quickly install the ANSYS software and hope you find it useful.  You can also download a PDF of the PowerPoint if you wish to keep a copy, share it with your co-workers, or print it out: 


    image

    image

    image

    image

    image

    image

    image