Writing Text files with *VWRITE

A very common need in the world of ANSYS FEA simulation is to write text to a text file from within Mechanical APDL. Sometimes you are running in MAPDL, sometimes you are using ANSYS Mechanical but you still need to write stuff out using APDL with a code snippet. The way most people do that is with *VWRITE. 

Originally written to output data stored in arrays, it is a very flexible and powerful command that can be used to write pretty much any type of formatted output. It is something every ANSYS user should have in their back pocket.

The Command

*VWRITE, Par1, Par2, Par3, Par4, Par5, Par6, Par7, Par8, Par9, Par10, Par11, Par12, Par13, Par14, Par15, Par16, Par17, Par18, Par19

Looks pretty simple, right? Just *VWRITE and a list of what you want printed. But there is a lot more to this command.

A Lot More

First off you need to open up a file to write to.  You have a couple of options.

  1. *CFOPEN, fname, ext, --, Loc
    This opens the specified file for writing with the *CFWRITE and *VWRITE commands. This is the preferred method.
  2. /OUTPUT, fname, ext, Loc
    By default *VWRITE writes to standard output: the jobname.out file (batch) or the command window (interactive). So you can use /OUTPUT to redirect that output to a file instead of the *.out file or the screen. We don't recommend this because other output may get written to the file as well.
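As a minimal sketch of the /OUTPUT approach (the file name and value here are arbitrary examples, not from the original post):

```apdl
/output,vwtest,txt      ! redirect standard output to vwtest.txt
*vwrite,3.14159         ! write a single constant
(F10.5)
/output                 ! restore output to the default destination
```

With *CFOPEN you would instead open the file once and let *VWRITE and *CFWRITE target it until you issue *CFCLOSE.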

Now you have a place to write to, next you need to use *VWRITE to write. *VWRITE is a unique command because it actually uses two lines.  The first contains *VWRITE and a list of parameters and/or arrays to write and the second contains a format statement.  We will cover the first line first, and the format second.

Parameter Arguments for *VWRITE

As you can see from the command, you can list up to 19 parameters on a *VWRITE command.  Par1 through Par19 can be array, scalar, or character parameters. They can also be constants. This is where the real flexibility comes in.  You can do something like this (just look at the *VWRITE line for now, we will talk about the rest further on):

   adiv = ' | '
   *dim,nds,,10
   *dim,temps,,10
   *vfill,nds(1),ramp,1,1
   *vfill,temps(1),rand,70,1500
   *cfopen,vw1.out
   *VWRITE,'Temp: ',nds(1),temps(1),adiv,'TREF: ',70
   (A6,F8.0,g16.8,A3,A6,F10.4)
   *cfclose

This mixes characters, arrays, and constants in one command.  As output you get:

Temp:       1.   429.56308     | TREF:    70.0000
Temp:       2.   263.55403     | TREF:    70.0000
Temp:       3.   1482.8411     | TREF:    70.0000
Temp:       4.   605.95819     | TREF:    70.0000
Temp:       5.   782.33391     | TREF:    70.0000
Temp:       6.   1301.1332     | TREF:    70.0000
Temp:       7.   1119.4253     | TREF:    70.0000
Temp:       8.   202.87298     | TREF:    70.0000
Temp:       9.   1053.4121     | TREF:    70.0000
Temp:      10.   805.71033     | TREF:    70.0000

Array Parameters

The first thing you will notice is that there is no *DO loop.  If you supply an array parameter, *VWRITE loops on the parameter from the given index (1 in this case) to the end of the array.  But if you don't want the whole array written, you can control that by placing *VLEN and/or *VMASK in front of the *VWRITE command:

  • *VLEN,nrow,ninc
    This will only write out nrow times, skipping rows based on ninc (which defaults to 1).
    • As an example, if you want to write just the fourth value in array A(), you would issue *VLEN,1 and then start the *VWRITE at A(4).
  • *VMASK,Par
    You make a mask array of 0's and 1's that is the same size as your array, and supply it to *VMASK.  *VWRITE will only write out values for your array where the mask array is not zero for the same index.
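Here is a hedged sketch of both controls; the array names and values are made up for illustration. Note that *VLEN and *VMASK only apply to the very next *Vxxx command and then reset:

```apdl
*dim,a,,6
*vfill,a(1),ramp,10,10      ! a = 10,20,30,40,50,60

*cfopen,vlen,out
*vlen,1                     ! write only one row...
*vwrite,a(4)                ! ...starting at the fourth value, so just 40
(F8.1)

*dim,msk,,6
msk(2) = 1                  ! only indices 2 and 5 are nonzero
msk(5) = 1
*vmask,msk(1)               ! write only where the mask is nonzero
*vwrite,a(1)                ! should write only 20 and 50
(F8.1)
*cfclose
```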

You can have multiple dimensions in your array, but *VWRITE only increments the first index. So if you store X, Y, and Z coordinates in an array called xyz, dimensioned n x 3, you list each column as its own parameter: xyz(1,1), xyz(1,2), xyz(1,3).

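A hedged sketch of that xyz case (the array name, sizes, and fill values are made up for illustration):

```apdl
*dim,xyz,,5,3                 ! 5 rows x 3 coordinate columns
*vfill,xyz(1,1),ramp,1,1      ! fill the X column
*vfill,xyz(1,2),ramp,10,10    ! fill the Y column
*vfill,xyz(1,3),ramp,100,100  ! fill the Z column

*cfopen,xyz,out
*vwrite,xyz(1,1),xyz(1,2),xyz(1,3)  ! one parameter per column
(3g16.8)                            ! *VWRITE increments the row index
*cfclose
```

Each pass of the implied loop writes one row: the X, Y, and Z values for that row.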
String Parameters

MAPDL being an older program, you are limited in what you can do with character parameters: they can hold at most 8 characters.  To write something longer, you dimension a string parameter and reference it several times, incrementing the starting index by 8:

   *dim,mystring,string,80
   mystring(1) = 'This is a very long sentence'
   *cfopen,vw2.out
   *VWRITE,mystring(1),mystring(9),mystring(17),mystring(25),mystring(33)
   (5A)
   *cfclose

Kind of hokey, but it works.


Integers

Sigh.  This is the one thing that I'm not fond of in *VWRITE. The original command did not support outputting integer values.  That is because the FORTRAN I descriptor was not supported, and ANSYS stores everything as a double precision real anyhow. But people needed to output integer values, so the 'C' format routines for *MSG were made to work with *VWRITE. So you can do a %I.  See the section on 'C' formatting below for more information on this.
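As a hedged sketch of %I (the file name is arbitrary; note that with the 'C' style format, the format line is not wrapped in parentheses):

```apdl
*get,ndcount,node,,count    ! number of currently selected nodes
*cfopen,counts,out
*vwrite,ndcount
Selected nodes: %I
*cfclose
```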

Close the File

Before you can do anything with the file you create, you need to close it.  Not too hard: *CFCLOSE does the trick.

Other Stuff you Need to Know

Don't leave a blank argument in the parameter list.  If you do, *VWRITE stops looking at parameters after the blank.  So if you need a blank in your file, pass a space ' ' as a character constant or use the X FORTRAN descriptor.

Be aware of *CFWRITE as well.  It is a way to write APDL commands to a file. If what you want to do is have your macro write a macro, *CFWRITE is better because you don’t have to do format statements. And those can be a pain when you need commas.
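A minimal sketch of *CFWRITE writing a macro from a macro (the file name and command choices are arbitrary examples):

```apdl
*cfopen,fixbase,mac         ! the file the generated commands go into
*cfwrite,nsel,s,loc,z,0     ! each *cfwrite emits one APDL command line
*cfwrite,d,all,all          ! parameter values are substituted as written
*cfclose
```

Running fixbase.mac later replays those commands, and no format statement is needed.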

If your arrays are of different lengths, *VWRITE will loop for the length of the longest array. Any shorter arrays will be replaced with zeros for number arrays and blanks for character/string arrays.

You cannot use *VWRITE by pasting or typing it into the command line.  It has to be read in from a file.


The Format Statement

The key, and the difficult part, of using *VWRITE is the format statement. We recommend that you use FORTRAN formatting when you are writing out large amounts of columnar data, and the 'C' format if you are writing out text-rich information for a report or to inform the user of something.

Many users today may not even know what a FORTRAN statement looks like.  A good place to look is:

Just remember that you can't use the Integer (I) format.  The list-directed format (*) also does not work.  If you are new to it, also remember that everything goes in parentheses and it has to fit on one line.  It does not have to start in column 8 (if you think that is funny, you are old).

As to 'C' formatting, you have a lot more options, but we have found that the behavior is not as consistent between Linux and Windows as the FORTRAN formatting.  But if you are more comfortable with 'C', do that. Not all of 'C' formatting works in APDL, but what does work is documented under the *MSG command.

Making it Work

We always recommend you work out your *VWRITE issues in a small macro that you can run over and over again as you work out the formatting.  If you are writing a snippet for ANSYS Mechanical, go ahead and save your model, bring it up in MAPDL, then work on your *VWRITE statement till you get it right.

Some other useful suggestions are:

  • Keep things simple. Don't try to format a novel or an HTML page with *VWRITE.  You probably could, but there are better tools for that.
  • Make sure you understand arrays, how they work in APDL, and how you have yours dimensioned.
  • Get familiar with *VMASK and *VLEN. They are useful.

Turning on Beta Features in Workbench

It is a busy week and I’m in between meetings for about 30 minutes, just enough time for a very short Focus posting for the week.  So, I thought I would share something I had to remember for the first time in a long time: How to access beta features in ANSYS Workbench.

First off a word of warning:  Beta features are beta features. They are capabilities in the software that have either not finished testing, are not fully documented, or that have a known issue.  They therefore must be used AT YOUR OWN RISK!!!!   If you find a bug or a problem, report it to your technical support provider, they need that feedback. But don’t call up indignant because it is not working the way you want it, or because the documentation is non-existent.  It is a beta feature.

Set the Option

Not too difficult.  From the Project Schematic page, go to the Tools menu and select Options.


Now in the Options dialog, click on Appearance in the tree on the left.  You will not see Beta Options right away; scroll down, and near the bottom there are a bunch of check boxes. Check "Beta Options".

Now, in your project toolbar you should see (Beta) next to the exposed beta functions:


This will also expose any beta features, if there are any, in the Workbench native applications:  Parameter Manager, Engineering Data, and DesignXplorer.

That is it. I promised short.  Off to another meeting.


Webinar Information: Constraint Equation Primer

Here are the files from the webinar held on Friday, April 27, 2012: A Constraint Equation Primer: How to Tie Degrees of Freedom Together


Link to webinar recording: 


No models or anything on this one.

Happy 10th Birthday: The Focus

Don't you hate it when you miss someone's birthday?  I was looking up an old article in The Focus and noticed that the first issue was published on January 13th, 2002. 

Happy Belated Birthday!

It was sometime in 2001 that Rod Scholl pointed out that there was no good ANSYS newsletter out there.  People would do one or two issues then get busy and never put out any more, or only publish once in a while.  So we decided that we would not only do a newsletter, but that we would keep it up and publish  on a regular basis. The first issue came out as an HTML email on January 13th of 2002. 


The First Issue of The Focus

And Rod was instrumental in keeping us on track and making sure we stuck with it. Since then we published 74 actual newsletters before switching to a blog format in 2010.  Just before this one goes up, we will have published 59 articles on The Focus blog.

Thank you to everyone who subscribes to The Focus and reads our postings, rates us, and sends us such great comments and questions.  

Here is to 10 more years!

Files for Webinar: A POST26 Primer

We just finished our webinar for 4/12/2012 on the basics of the POST26 Time History Post Processor.  As promised, here are the files used for examples in the webinar, as well as the PowerPoint:

Tower-of-Test (ToT) test model, APDL:

Tower-of-Test (ToT) test model, ANSYS Mechanical with APDL Snippets:

tower-of-test-transient-2013.wbpz (note this is an R15 file.  See why here.)


PowerPoint Presentation:

A recording of this webinar will be available on the following site after 4/13/2012:


Click on  PADT ANSYS Webinar Series to see all recordings.

Some Revolutionary HPC Talk: 208 Cores+896GB < $60k, GPU’s, & ANSYS Distributed

The last couple of weeks a bunch of stuff has gone on in the area of High Performance Computing, or HPC, so we thought we would throw it all into one Focus posting and share some things we have learned, along with some advice, with the greater ANSYS user community.

There is a little bit of a revolution going on in both the FEA and CFD simulation side of things amongst users of ANSYS, Inc. products.  For a while now, customers with large numbers of users and big nasty problems to solve have been buying lots of parallel licenses and big monster clusters. The size of the problems these firms, mostly turbomachinery and aerospace, are solving has been growing and growing.  That is even more true for CFD jobs, but it also holds for FEA with HFSS and ANSYS Mechanical/Mechanical APDL.  That is where the revolution started.

But where it is gaining momentum, where the greater impact is being seen on how simulation is used around the world, is with the smaller customers.  People with one to five seats of ANSYS products.  In the past they were happy with their two “included” Mechanical shared memory parallel tasks.  Or they might spring for 3 or 4 CFD parallel licenses.  But as 4, 6, and 8 core CPU chips become mainstream, even on the desktop, and as ANSYS delivers greater value from parallel, we are seeing more and more people invest in high performance computing. And they are getting a quick return on that investment.

Affordable High Value Hardware

208 Cores + 896 GB + 25 TB + Infiniband + Mobile Rack = $58k = HOT DAMN!

Yes, this is a commercial for PADT's CUBE machines (www.CUBE-HVPC.com). Even if you would rather be an ALGOR user than purchase hardware from a lowly ANSYS Channel Partner, keep reading. Even if you would rather go to an ANSYS meeting at HQ in the winter than brave asking your IT department if you can buy a machine not made by a major computer manufacturer, keep reading.

Because what we do with CUBE hardware is something you can do yourself, or that you can pressure your name brand hardware provider into doing.

We just got a very large CFD benchmark job for a customer.  They want multiple runs on a piece of “rotating machinery” to generate performance curves.  Lots of runs, and each run can take up to 4 or 5 days on one of our 32 core machines.  So we put together a 208 core cluster.  And we maxed out the RAM on each one to get to just under 900 GB. Here are the details:

Cores: 208 total
    3 servers x 48 cores @ 2.3 GHz each
    2 servers x 32 cores @ 3.0 GHz each
RAM: 896 GB
    3 servers x 128 GB DDR3 1333 MHz ECC RAM each
    2 servers x 256 GB DDR3 1600 MHz ECC RAM each
Data Array: ~25 TB
Interconnect: 5 x Infiniband 40 Gbps QDR
Infiniband Switch: 8 port, 40 Gbps QDR
Mobile Departmental Cluster Rack: 12U

All of this cost us around $58,000 if you add up what we spent on various parts over the past year or so.  That much horsepower for $58,000.  If you look at your hourly burden rate and the impact of schedule slip on project cost, spending $60k on hardware has a quick payback. 

You do need to purchase parallel licenses. But if you go with this type of hardware configuration what you save over a high-priced solution will go a long way towards paying for those licenses.  Even if you do spend $150k on hardware, your payback with the hardware and the license is still pretty quick.

Now this is old hardware (six months to a year –  dinosaur bones).  How much would a mini-cluster departmental server cost now and what would it have inside:

Cores: 320 total
    5 servers x 64 cores @ 2.3 GHz each
RAM: 2.56 TB
    5 servers x 512 GB DDR3 RAM each
Data Array: ~50 TB
Interconnect: 5 x Infiniband 40 Gbps QDR
Infiniband Switch: 8 port, 40 Gbps QDR
Mobile Departmental Cluster Rack: 12U

The cost?  Around $80,000.  That is $250/core.  Now, you need big problems to take advantage of that many cores.  If your problems are not that big, just drop the number of servers in the mini-cluster, and the price drops proportionally. 

Same if you are a Mechanical user.  The matrices in FEA just don’t scale in parallel like they do for CFD, so a 300+ core machine won’t be 300 times faster. It might even be slower than say 32 cores.  But the cost drop is the same.  See below for some numbers.

Bottom line, hardware cost is now in a range where you will see payback quickly in increased productivity.



We think they should have some 80’s era super model
draped over the front like those Ferrari posters we
had in our dorm rooms.

For you Mechanical/Mechanical APDL users, let’s talk GPU’s.

We just got an NVIDIA TESLA C2075 GPU.  We are not done testing, but our basic results show that no matter how we couple it with cores and solvers, it speeds things up.  Anywhere from 3 times faster to 50% faster depending on the problem, shared vs. distributed memory, and how many cores we throw in with the GPU.  

There is a fundamental problem with answering the question "How much faster?": it depends a lot on the problem and the hardware. You need to try it on your problems on your hardware. But we feel comfortable in saying that if you buy an HPC pack and run on 8 cores with a GPU, the GPU should double your speed relative to just running on 8 cores.  It could do even better on some problems. 

For some, that is a disappointment.  “The GPU has hundreds of processors, why isn’t it 10 or 50 times faster?”  Well, getting the problem broken up and running on all of those processors takes time.  But still, twice as fast for between $2,000 to $3,000 is a bargain. I don’t know what your burden rate is but it doesn’t take very many hours of saved run time to recover that cost.  And there is no additional license cost because you get a GPU license with an HPC pack.

Plus, at R14 the solver supports a GPU with distributed ANSYS, so even more improvements.  Add to that support for the unsymmetrical or damped Eigensolvers and general GPU performance increases at R14.

PADT's advice? If you are running ANSYS Mechanical or Mechanical APDL, get the HPC Pack and a GPU along with a 12 core machine with gobs of RAM (PADT's 12 core AMD system with 64GB of RAM and 3TB of disk costs only $6,300 without the GPU, $8,500 with).  You can solve on 8 cores and play Angry Birds on the remaining 4.

Distributed ANSYS

For a long time many users out there have avoided Distributed ANSYS. It was hard to install, and unless you had the right problem you didn't see much of a benefit in many of the early releases. Shared Memory Parallel, or SMP, was dirt easy: get an HPC license, tell it to run on 8 cores, and go.

Well, at R14 of ANSYS Mechanical APDL it is time to go distributed.  First off, they made the install much easier.  To be honest, we found that this was the biggest deterrent for many small-company users.

Second, at R14 a lot more things are supported in distributed ANSYS. This has been going on for some time so most of what people use is supported. At this release they added subspace eigensolving, TRANS, INFINI and PLANE121/230 elements (electrostatics), and SURF251/252. 

Some “issues” have been fixed like restart robustness and you now have control on when and how multiple restart files are combined after the solve. 

All in all, if you have R14, you are solving mechanical problems, and you have an HPC pack, you should be using distributed most of the time.


We get a ton of questions from people about what they should buy and how much.  And every situation is different. But for small and medium sized companies, the HPC revolution is here and everyone should be budgeting for taking advantage of HPC:

    • At least one HPC pack
    • Some new faster/cheaper multicore hardware (CUBE anyone?)
    • A GPU. 

STOP!  I know you were scrolling down to the comments section to write some tirade about how ANSYS, Inc overcharges for parallel, how it is on a moral equivalence with drowning puppies, and how much more reasonable competitor A, B, or C is with parallel costs.  Let me save you the time.

HPC delivers huge value.  Big productivity improvements.  And it does not write itself. It is not an enhancement to existing software.  Scores of developers are going into solver code and implementing complex logic that allows efficiency with older hardware, shared memory, distributed memory, and GPU’s. It has to be written, tested, fixed, tested again, and back and forth every release.  That value is real, and there is a cost associated with it.

And the competitors' pricing model? The only thing they can do to differentiate themselves is charge nothing or very little. They have not put in the effort, or don't have the expertise, to deliver the kind of parallel performance that the ANSYS, Inc. solvers do.  So they give it away to compete.  Trust me, they are not being nice because they like you. They have the same business drivers as ANSYS, Inc.  They price the way they do because they did not incur as much cost, and they know that if they charged for it you would have no reason to use their solvers.

ANSYS users of the world unite!  Load your multicore hardware with HPC packs, feed it with a GPU, and join the revolution!

Changing Results Values in ANSYS Mechanical APDL–DNSOL and *VPUT

So it is Friday afternoon and that big, involved, deep-dive into some arcane ANSYS capability is still not written.  So, time for plan B and the list of not so involved but still somewhat arcane capabilities that we like to expose in The Focus.  At the top of that list right now is how to change the results in an ANSYS Mechanical APDL (MAPDL) solve. 

One might ask why you would want to do this.  Well, the most common usage is that you want to use APDL to calculate some result value and then display it in a plot.  Similarly, you may want to do some sort of calculation or post processing on MAPDL results using an external piece of code, but still show the results in ANSYS.  Another common usage is to use MAPDL as a post processor for some external solver. 

And, it turns out, it is pretty easy.  And, as you probably have learned by now if you use MAPDL a lot, there is more than one way to do it.

The “Database”

Before we get into how to do this, we need to talk about the “database” in MAPDL. If you read through the documentation on the commands we will use, it will talk about the database.  This is not the jobname.db file.  That is the db file.  The database refers to the representation of your model, including results and the currently active loads, in memory when you are running MAPDL. 

When you do a RESUME command, MAPDL reads the DB file and stores the model, including geometry, mesh, loads, and run parameters, in memory.  When you do a SET command, it then adds the results and related information into memory. 

So when we use the commands we will talk about next, you are changing what is in the database, not what is in the DB file on your disk.  And you are storing it temporarily.  Many different commands cause the database to go back to its original values.  So you need to be very careful in how you use these tools, and don’t assume that once you have used them, the changes are permanent.  


DNSOL

The simplest way to do it is with the DNSOL command: DNSOL, NODE, Item, Comp, V1, V2, V3, V4, V5, V6

So, if you do dnsol,32,u,x,3.145, the value in memory for the X-direction deflection of node 32 will be changed to 3.145.  dnsol,32,s,x,1.1,2.2,3.3 will change the stress on node 32 in the X direction to 1.1, in the Y direction to 2.2, and in the Z direction to 3.3. 

The second argument can also be a component, so you can assign a uniform result value to many nodes at the same time. 

Here is a very simple example of a block where we set the deflection on the top nodes to 1 inch.

   finish
   /clear
   /prep7
   blc4,-2,-2,4,4,20
   et,1,185
   mptemp,1,70
   mpdata,ex,1,1,20e6
   mpdata,nuxy,1,1,.23
   mpdata,dens,1,1,.001

   vmesh,all
   /view,1,1,1,1
   /vup,1,z
   /RGB,INDEX,100,100,100, 0
   /RGB,INDEX, 80, 80, 80,13
   /RGB,INDEX, 60, 60, 60,14
   /RGB,INDEX, 0, 0, 0,15
   /show,png
   eplot

   nsel,s,loc,z,0
   cm,nbt,node
   d,all,all
   nsel,s,loc,z,20
   cm,ntp,node
   f,all,fx,10
   f,all,fy,12
   allsel
   save

   /solu
   solve
   finish
   /post1
   plnsol,u,sum

   dnsol,ntp,u,y,1
   plnsol,u,sum

   /show,close



The Fancy Mesh

The Normal Solution

Solution with 1" deflection DNSOL'd onto the top nodes

Pretty simple. 

NOTE: One key thing to remember is that you cannot use this with PowerGraphics. You must have /graph,full turned on.


*VPUT

DNSOL is great for a few nodes here and there, or a constant value. But it makes for a big nasty *do loop if you want to do a lot of nodes. So ANSYS, Inc. gives us the *VPUT command:

*VPUT, ParR, Entity, ENTNUM, Item1, IT1NUM, Item2, IT2NUM, KLOOP

As you can see, this command has a lot of options, so read the Help before you use it. Most of the time you have an array (ParR) that stores the values you want placed on your model.  Then you specify what MAPDL result you want to overwrite and it takes care of it for you.

*vput,nws1(1),node,1,s,1 will place new values for maximum principal stress on all the nodes covered by the array, starting at node 1. 

Here is an example of the code, after the same solve as above, to do a *vput instead of a DNSOL:

   finish
   /post1
   plnsol,u,sum

   *get,nmnd,node,,num,max
   *dim,nwux,,nmnd
   *do,i,1,nmnd
       nwux(i) = i
   *enddo

   *vput,nwux(1),node,1,u,x
   plnsol,u,sum

The code places a displacement in the X direction equal to its node number.  So node 80 has a UX of 80. 

A key thing to note with *VPUT is that it is much more transient. Pretty much any command that involves anything other than plotting or listing blows away the values you assigned with *VPUT.  So we recommend that you do a *VPUT right before every plot or listing you do.

Thoughts, Questions, and Conclusion

Of course you should never use this to fudge results. 

You can get very fancy with this. When I use it I often turn off all the plot controls and then create my own legend. That way I can put hours of life on as temperature or SX and then make my own legend that says LIFE or something of the sort. 

Another thing to note is that if you DNSOL or *VPUT a displacement, then ANSYS will distort your plot by that much.  That is OK if you are changing deflection, but not so good if you are plotting life or some esoteric stress value.

A common question when you play with these commands is whether you can store the modified results in the RST file.  You can for degree of freedom results, but not for stresses.  You use LCDEF and RAPPND. 
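A hedged sketch of the append, assuming the ntp node component from the earlier example exists; the load step and time identifiers here are arbitrary, so check the RAPPND documentation before relying on this:

```apdl
/post1
set,last                ! read an existing result set into the database
dnsol,ntp,u,y,1         ! modify the DOF results in memory
rappnd,10,10.0          ! append the database results to jobname.rst
                        ! as load step 10, time 10.0
```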

And what about ANSYS Mechanical?  Well, it works totally differently, and better. You can do all of this using Design Assessment, which was covered in a webinar and a Focus article you can find here.

The key to using these two commands is pretty much the same as any APDL command: Start on a small test model, write a macro to do it, and keep things simple. And, read the manual.  Everything you need to know is in there.

Workshops for “Intro to Workbench Framework Scripting”


At noon Phoenix time today we will be presenting the Webinar “Intro to Workbench Scripting: Controlling Projects, Materials, and Solution Execution with Python”  This is a very high level, and probably short, introduction to the basics of using the python scripting in Workbench.

To support the talk, I've put together four workshops, mostly based on ANSYS material or examples we have presented before here on The Focus.  They should be enough for anyone who is a good programmer or better to customize the Workbench Framework.  We also present it here as a tool for those who didn't attend the webinar (and as this week's Focus posting, because we didn't have time to write one…)

Warning: These were done in a hurry between meetings and some travel… so they are not great from a grammar or typo standpoint. Regardless, we hope you find them useful.

The files you need to run them are in this zip file:

The Workshop Document is here: 

We hope to add to this document over the next year or so and provide it as a more or less complete tutorial for those who want to automate their analysis.

Starting ANSYS Products From the Command Line

Sometimes you just get tired of clicking on icons.  Sometimes you just need to feel the control and power of launching your applications from the command line.  You type it in, you hit the enter key, and sometimes you can actually hear the disk spin up or the fan run faster to cool the processor as the program you asked for, the program you took time to type out, leaps to life in front of you. Satisfaction.

OK, maybe not.  More often you are scripting some batch solves.  Or maybe you are using the graphical user interface in Workbench but you need to set options for the solvers you are running from within workbench.  Because most of the solvers in the ANSYS family of products predate such new-fangled concepts as GUI’s, and because they are often run remotely on other machines, they have command line interfaces. And that gives the knowledgeable user more power and control.

General Concepts for Launching from the Command Line

Although the number of options available changes from application to application, there are a few common things you should know. 


The first and most important concept is to be aware of the path.  This is where most errors happen. One of the big changes over the years is that as software gets more complicated, the executable program or script that you use to launch a solver is now buried deep down inside a directory structure.  Since we never run in that actual directory we need to tell the operating system where the executable is. You can do this by including the full directory path in your command line argument, or by adding it to your path. 

On Linux follow the directions in the help:

// Installation and Licensing Documentation // Linux Installation Guide // 5. Post-Installation Instructions

for each of the products you want to run. Generally, you need to set the PATH environmental variable in your .cshrc, .login, or .profile. 

On Windows it is not documented, the assumption being that you will be clicking on icons and not typing into a command window.  So a little detective work is needed. Use a file explorer and the Linux documentation on launching to locate the executable for solvers you want to use:

// Installation and Licensing Documentation // Linux Installation Guide // 5. Post-Installation Instructions // 5.3. Launching ANSYS, Inc. Products

The /ansys_inc part is usually replaced with c:\Program Files\ANSYS Inc.  The rest of the path is pretty much the same, swapping forward slashes with backward slashes.  Use these paths in your command line or add to your path by:

  • From the desktop, right-click My Computer and click Properties.
  • In the System Properties window, click on the Advanced tab.
  • In the Advanced section, click the Environment Variables button.
  • Finally, in the Environment Variables window (as shown below), highlight the Path variable in the Systems Variable section and click the Edit button. Add or modify the path lines with the paths you wish the computer to access. Each different directory is separated with a semicolon as shown below.

    Windows environmental path settings

    Important note for Windows:  If you are typing the path in on the command line, you need to put it in double quotes.  The convention on Windows is to specify directories with spaces in the name.  But the convention is not to have a command line parser that recognizes this.  So you will get an error if you type:

    C:\Program Files\ANSYS Inc\v140\ansys\bin\winx64\ansys140.exe

    But if you put it in quotes, it works fine:

    "C:\Program Files\ANSYS Inc\v140\ansys\bin\winx64\ansys140.exe"

Version Numbering

If you look at the example for launching MAPDL above you will notice that 140 is used in the directory path and in the name of the executable.  This will change with the version: v130, v145, etc.  Just be aware of that if you are reading this blog posting in 3 years and we are on release 16.5.  You would use:

    "C:\Program Files\ANSYS Inc\v165\ansys\bin\winx64\ansys165.exe"

Where do you Launch From?

You of course need a command line to launch a solver.  This is usually a window that lets you type operating system commands: called a Command Prompt on Windows or a shell on Linux.  On Linux it can be an xterm window, a console window, or some other terminal window you have opened. 

But you can also launch from a script, and that script can be launched from a command prompt or shell, or it can be launched by an icon.  All that needs to happen is that the script is executed with the environment variables required for the command prompt/script or the GUI.  If you don't know how to make that happen, contact your IT support or someone who understands your operating system and how it runs processes.

    ANSYS Mechanical APDL

    The solver with the most options and capabilities from the command line is Mechanical APDL. So we will start there.  It is important to know these even if you use Mechanical most of the time.  That is because you can set these, and better control your solves, under Tools->Options->Mechanical APDL.  Here is what that dialog looks like:


    The most common settings have their own widgets, and the others can all be accessed by using “-string” command line style arguments in the text widget aptly named Command Line Options.

    Here are the options, grouped for your studying pleasure:






    -ansexe executable


    In the ANSYS Workbench environment, activates a custom ANSYS executable.



    -custom executable


    Calls a custom ANSYS executable. See the help on running custom executables for more information.

    -acc device


    Enables the use of a GPU compute accelerator. As this is written, NVIDIA is the only option, but as other cards become available look for this to have other options. Check the help.



    -dis


    Enables Distributed ANSYS. See the Parallel Processing Guide for more information.



    -machines list


    Specifies the machines on which to run a Distributed ANSYS analysis. See Starting Distributed ANSYS in the Parallel Processing Guide for more information.



    -mpi value


    Specifies the type of MPI to use. See the Parallel Processing Guide for more information.



    -mpifile appfile name


    Specifies an existing MPI file (appfile) to be used in a Distributed ANSYS run. See Using MPI appfiles in the Parallel Processing Guide for more information.

    -np value


    Specifies the number of processors to use when running Distributed ANSYS or Shared-memory ANSYS.

    -d device


    Specifies the type of graphics device. This option applies only to interactive mode. For Linux systems, graphics device choices are X11, X11C, or 3D. For Windows systems, graphics device options are WIN32, WIN32C, or 3D.



    -g


    Launches the ANSYS program with the Graphical User Interface (GUI) on. Linux only. On Windows, issue /SHOW and /MENU,ON to bring up the GUI.

    -l language


    Specifies a language file to use other than US English.

    -b [list | nolist]


    Activates the ANSYS program in batch mode. The options -b list or -b by itself cause the input listing to be included in the output. The -b nolist option causes the input listing not to be included.

    -i inputname


    Specifies the name of the file containing the input to be read into ANSYS for batch processing. On Linux, the preferred method to indicate an input file is <. Required with the -b option.

    -j Jobname


    Specifies the initial jobname, a name assigned to all files generated by the program for a specific model. If you omit the -j option, the jobname is assumed to be file.

    -o outputname


    Specifies the name of the file to store the output from a batch execution of ANSYS. On Linux, the preferred method to indicate an output file is >.

    -p productname


    Defines which ANSYS product will run during the session (ANSYS Multiphysics, ANSYS Structural, etc.). This is how you pull a different license from the default. Very handy if you have multiple licenses to choose from.

    -s [read | noread]


    Specifies whether the program reads the start140.ans file at start-up. If you omit the -s option, ANSYS reads the start140.ans file in interactive mode and not in batch mode.



    -dir directory


    Defines the initial working directory. Remember to use double quotes if you have spaces in your directory path name. Using the -dir option overrides the ANSYS140_WORKING_DIRECTORY environment variable.

    -db value


    Defines the portion of workspace (memory) to be used as the initial allocation for the database. This and -m are the two most important options. If you ever find that ANSYS is writing a *.PAGE file, up this number.

    -m value


    Defines the total memory to reserve for the program. It is always better to reserve it up front rather than letting ANSYS grab it as it needs it.

    -schost host name


    Specifies the host machine on which the coupling service is running (to which the co-simulation participant/solver must connect) in a System Coupling analysis.

    -scname name of the solver


    Specifies the unique name used by the co-simulation participant to identify itself to the coupling service in a System Coupling analysis. For Linux systems, you need to escape the quotes or escape the space to have a name with a space recognized.

    -scport port number


    Specifies the port on the host machine upon which the coupling service is listening for connections from co-simulation participants in a System Coupling analysis.



    Specifies the master field name in an ANSYS Multi-field solver – MFX analysis. See Starting and Stopping an MFX Analysis in the Coupled-Field Analysis Guide for more information.

    -ser value


    Specifies the communication port number between ANSYS and CFX in an MFX analysis.



    Enables ANSYS DesignXplorer advanced task (add-on).



    -dyn


    Enables LS-DYNA.



    -v


    Returns the ANSYS release number, update number, copyright date, customer number, and license manager version number. Does not actually run ANSYS MAPDL.

    -name value


    Defines ANSYS parameters at program start-up. The parameter name must be at least two characters long. These get passed into ANSYS and are used by any APDL scripts you run.

    The ones that everyone should know about are: -p, -m, -db.  We find that not using these to define what license to use (-p) or to control how memory is pre-allocated (-m, -db) generates the most tech support questions.  The next most important is the -np option. Use this to define more processors if you have HPC licenses.
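    As an example, a bare-bones batch solve that sets all of these explicitly might look like the following (the license name comes from the example further down; the memory sizes and file names are placeholders):

```
"C:\Program Files\ANSYS Inc\v140\ansys\bin\winx64\ansys140.exe" -b -p ane3flds -np 4 -m 4000 -db 1000 -i input.inp -o run.out
```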

    Cheating – Use the Launcher

    Sometimes the options can get long and confusing.  So what I do is use the “ANSYS Mechanical APDL Launcher” and fill in all the forms, then go to Tools->View Display Command Line to see all the options.


    Here is a fancy command line that got generated that way:

    "C:\Program Files\ANSYS Inc\v140\ANSYS\bin\winx64\ansys140.exe"  -g -p ane3flds -np 2 -acc nvidia -dir "c:\temp" -j "grgewrt1" -s read -m 5000 -db 1000 -l en-us -lstp1 32 -t -d win32 -custom "/temp/myansys.exe"  

    It runs interactive (-g), uses a Multiphysics license (-p ane3flds), grabs two processors (-np 2), uses an NVIDIA GPU (-acc nvidia (I don’t have one…)), runs in my temp directory (-dir), uses a jobname of grgewrt1 (-j), reads the start.ans file (-s), grabs 5000 MB and 1000 MB of memory (-m, -db), uses US English (-l en-us), passes in a parameter called lstp1 and sets it to 32 (-lstp1 32), uses the win32 graphics driver (-d), and runs my custom ANSYS executable (-custom).

    I have no idea what -t is.  Some undocumented option I guess…

    ANSYS Workbench

    ANSYS Workbench also has some command line arguments. They are not as rich as what is available in MAPDL, but still powerful. They allow you to run Workbench in batch or interactive mode and supply Python commands as needed.  The key thing to remember is that the Workbench interface is not Mechanical or FLUENT; it is the infrastructure that other programs run on. Scripting in Workbench allows you to control material properties, parameters, and how systems are created and executed.

    Here are the options:




    -B

    Run ANSYS Workbench in batch mode. In this mode, the user interface is not displayed and a console window is opened. The functionality of the console window is the same as the ANSYS Workbench Command Window.

    -R <ANSYS Workbench script file>

    Replay the specified ANSYS Workbench script file on start-up. If specified in conjunction with -B, ANSYS Workbench will start in batch mode, execute the specified script, and shut down at the completion of script execution.


    -I

    Run ANSYS Workbench in interactive mode. This is typically the default, but if specified in conjunction with -B, both the user interface and console window are opened.


    -X

    Run ANSYS Workbench interactively and then exit upon completion of script execution. Typically used in conjunction with -R.

    -F <ANSYS Workbench project file>

    Load the specified ANSYS Workbench project file on start-up.

    -E <command>

    Execute the specified ANSYS Workbench scripting command on start-up. You can issue multiple commands, separated with a semicolon (;), or specify this argument multiple times and the commands will be executed in order.

    The big deal in this list is the -B argument.  This allows you to run Workbench, and applications controlled by the project page, in batch mode.  You will usually use the -R argument to specify the IronPython script you want to run.  In most cases you will also want to throw in a -X to make sure it exits when the script is done. 
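    Put together, a batch replay run might look something like this (we are assuming the Workbench executable is RunWB2.exe under the Framework\bin directory of your install, so check your own installation, and that myscript.wbjn is your IronPython script):

```
"C:\Program Files\ANSYS Inc\v140\Framework\bin\Win64\RunWB2.exe" -B -R "C:\work\myscript.wbjn" -X
```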

    Other ANSYS Products

    Here is where it gets boring.  The other products just don’t have all those options. At least not documented ones.  So you simply find the executable and run it.  Here is the list for Linux. Use it to find the location on Windows.



    Mechanical APDL

    ANSYS Workbench

    ANSYS CFD-Post

    ANSYS Icepak

    ANSYS TurboGrid


    APDL Math–Access to the ANSYS Solver Matrices with APDL

    APDL Math.  It is one of the most powerful, uber-user, deep-down-under-the-hood, wrapping-your-hands-around-the-neck-of-what-FEA-is capabilities in the ANSYS Mechanical APDL (MAPDL) solver.  And most users don’t even know it is there.  It kind of snuck in over time, with the developers adding more and more capability each release.  Now it gives you access that used to require custom FORTRAN code… or NASTRAN DMAP, which is really something none of us ever wants to do.

    There is a lot of capability in this tool. This posting is just going to cover the basics so that you know what the tool can be used for in case you need it in the future, and hopefully motivate some of you to take a long look at the help. 

    What is APDL Math?

    It is an extension to the APDL command language that drives MAPDL.  Although it runs in a different workspace (chunk of memory in the ANSYS database), it talks to standard APDL by importing and exporting APDL arrays (vectors or matrices).  It consists, at R14, of 18 commands that can be executed at the /SOLU level at any time.  All of the commands start with a * character and look and act like standard APDL commands.

    APDL Math is a tool that lets users do two things: 1) view, export, or modify matrices and vectors created by the solver, and 2) create, import, or modify matrices and vectors, then solve them.  The most common use we have seen is exporting a matrix from ANSYS for use in some other program, usually MATLAB.  The other is working with substructure matrices.

    The entire tool is documented in the Mechanical APDL section of the help under  // ANSYS Parametric Design Language Guide // 4. APDL Math.

    The Commands

    Below is a list of the APDL Math commands.  As usual, you really need to read the manual entries to get the full functionality explained.  Just like parameters and arrays in APDL, the matrices and vectors in APDL Math use names to identify them.  Note that you have a set of commands to create the matrix/vector you want, which includes reading from a file or importing an APDL array.  Then you have commands to do basic matrix/vector math like multiply, find dot products, and do fast Fourier transforms.  Then there are solver commands.

    Commands to create and delete matrices and vectors

    *DMAT, Matrix, Type, Method, Val1, Val2, Val3, Val4, Val5

    Creates a dense matrix that is complex, double or integer. You can allocate it, resize an existing matrix, copy a matrix, or link to a portion of a matrix. You can also import from a file or an APDL variable.

    *SMAT, Matrix, Type, Method, Val1, Val2, Val3

    Creates a sparse matrix. Double or Complex, copied or imported.

    *VEC, Vector, Type, Method, Val1, Val2, Val3, Val4

    Creates a vector. Double, complex or integer. Similar arguments to *DMAT.

    *FREE, Name,

    Deletes a matrix or a solver object and frees its memory allocation. Important to remember to do this.


    Commands to manipulate matrices

    *AXPY, vr, vi, M1, wr, wi, M2

    Performs the matrix operation M2 = v*M1 + w*M2, where v and w are complex scalars given by their real and imaginary parts (vr, vi and wr, wi).

    *DOT, Vector1, Vector2, Par_Real, Par_Imag

    Computes the dot (or inner) product of two vectors.

    *FFT, Type, InputData, OutputData, DIM1, DIM2, ResultFormat

    Computes the fast Fourier transformation of the specified matrix or vector.

    *INIT, Name, Method, Val1, Val2, Val3

    Initializes a vector or dense matrix. Used to fill vectors or matrices with zeros, constant values, random values, or values on the diagonal.

    *MULT, M1, T1, M2, T2, M3

    Performs the matrix multiplication M3 = M1(T1)*M2(T2).

    *NRM, Name, NormType, ParR, Normalize

    Computes the norm of the specified vector or matrix.

    *COMP, Matrix, Algorithm, Threshold

    Compresses the columns of a matrix using the singular value decomposition algorithm (default) or the modified Gram-Schmidt algorithm.
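    To see how a few of these fit together, here is a small sketch (the names A, X, Y, and ynorm are ours, and the values are made up for illustration):

```
*DMAT,A,D,ALLOC,3,3    ! allocate a 3x3 dense, double precision matrix
*INIT,A,CONST,2.0      ! fill every entry of A with 2.0
*VEC,X,D,ALLOC,3       ! allocate a double precision vector of length 3
*INIT,X,CONST,1.0      ! fill it with ones
*MULT,A,,X,,Y          ! Y = A*X
*NRM,Y,NRM2,ynorm      ! put the 2-norm of Y in the parameter ynorm
*FREE,A
*FREE,X
*FREE,Y
```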


    Commands to perform solutions

    *LSENGINE, Type, EngineName, Matrix, Option

    Creates a linear solver engine and assigns a name to be used when you want to execute the solve. Supports the Boeing sparse, MKL sparse, LAPACK, and distributed sparse solvers.

    *LSFACTOR, EngineName, Option

    Performs the numerical factorization of a linear solver system.

    *LSBAC, EngineName, RhsVector, SolVector

    Performs the solve (forward/backward substitution) of a factorized linear system.

    *ITENGINE, Type, EngineName, PrecondName, Matrix, RhsVector, SolVector, MaxIter, Toler

    Performs a solution using an iterative solver.

    *EIGEN, Kmatrix, Mmatrix, Cmatrix, Evals, Evects

    Performs a modal solution with unsymmetric or damping matrices.
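    A bare-bones direct solve with these commands looks something like this sketch (it assumes a matrix MatK and a right-hand-side vector FVec already exist in the APDL Math workspace; creating them is covered below):

```
*LSENGINE,BCS,MyEng,MatK    ! create a Boeing sparse solver engine for MatK
*LSFACTOR,MyEng             ! numerically factor the matrix
*VEC,XVec,D,COPY,FVec       ! make a solution vector the same size as FVec
*LSBAC,MyEng,FVec,XVec      ! forward/backward substitution: MatK*XVec = FVec
*FREE,MyEng                 ! release the solver engine when done
```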


    Commands to output matrices

    *EXPORT, Matrix, Format, Fname, Val1, Val2, Val3

    Exports a matrix to a file in the specified format. Supports Matrix Market, ANSYS SUB, DMIG, Harwell-Boeing, and ANSYS EMAT formats. Also used to put values into an APDL array or print them in a formatted way to a PostScript file.

    *PRINT, Matrix, Fname

    Prints the non-zero matrix values to a text file.


    Useful APDL Commands

    /CLEAR, Read

    Wipes all APDL and APDL Math values from memory.

    WRFULL, Ldstep

    Stops the solution after assembling the global matrices. Use this to make the matrices you need when you don’t want a full solve.

    /SYS, String
    Executes an operating system command. This can be used in APDL Math to do some sort of an external operation on a matrix you wrote out to a file, like running a matlab script. After execution the matrix can be read back in and used.
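    As an example of WRFULL in action, this snippet assembles the matrices and writes the .FULL file without doing the actual solve (it assumes a meshed, constrained model is already in the database):

```
/SOLU
ANTYPE,STATIC    ! a linear static analysis
WRFULL,1         ! stop load step 1 after the global matrices are assembled
SOLVE            ! writes jobname.full and stops; no equation solving is done
FINISH
```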


    How You Use APDL Math

    When using APDL math you should follow some basic steps.  I’m always a big proponent of the crawl-walk-run approach to anything, so I also recommend that you start with small, simple models to figure stuff out.

    First Step: Know the Math

    The first step, or maybe the zeroth, is to understand your math.  If you charge in and start grabbing the stiffness matrix from the .FULL file and changing values, who knows what you will end up with. Chart out the math you want to do on paper or in a tool like Mathematica. 

    Then make sure that you understand the math in ANSYS, and that includes the files being used by ANSYS.  A good place to look is the Programmer’s Manual; Chapter 1 lists the various files and what is in them.  It might also not be a bad idea for you to crack open the theory manual.  We all know that ANSYS solves Kx = F, but how, and with what matrices and vectors?  Section 17.1 explains the static analysis approach used, with lots of links to more detailed explanations.

    Second Step: Create your Matrices/Vectors

    Since the whole point of using APDL Math is to do stuff with matrices/vectors, you need to start by creating them.  Note that we are not doing anything with APDL Math yet.  We are using APDL, ANSYS, or an external program to get our matrix/vector so that we can then get it into APDL Math.  There are three types of sources you can get matrices/vectors from:

    1. Use APDL to create an array.  *DIM, *SET, *VREAD, *MOPER, etc… 
    2. Use ANSYS to make them as part of a solve, or as part of an “almost solve” using WRFULL.  You can read the .FULL, .EMAT, .SUB, .MODE or .RST files.
    3. Get a file from some other source and put it into a format that APDL Math can read. It supports Harwell-Boeing, Matrix Market, or NASTRAN DMIG format as well as APDL Math’s own format.

    Third Step: Get the Matrices/Vectors into APDL Math

    Using *DMAT, *SMAT, and *VEC you convert the APDL array, ANSYS file, or external format file into a matrix or a vector.  You can also use *INIT to make one from scratch, filling it with constants, zeros, random numbers, or by setting the diagonal or anti-diagonal values.
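    A few sketches of what that looks like (the matrix, vector, and array names are ours; file.full is whatever .FULL file your solve or WRFULL run produced):

```
*SMAT,MatK,D,IMPORT,FULL,file.full,STIFF   ! sparse stiffness matrix from a .FULL file
*VEC,FVec,D,IMPORT,FULL,file.full,RHS      ! load vector from the same file
*DMAT,MyMat,D,IMPORT,APDL,myarray          ! dense matrix from an existing APDL array
```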

    Fourth Step: Manipulate the Matrices/Vectors

    In this step you can modify matrices in a lot of different ways.  The simplest is to use *MULT or *AXPY to do math with several matrices/vectors to get a new matrix/vector. 

    Another simple way to change things is to simply refer to the entries in the matrix using APDL.  As an example, to change the damping at I=4 and J=5 in a damping matrix called dmpmtrx, just use dmpmtrx(4,5) = 124.321e4.

    You can take that one step further and use the APDL operators that work on arrays, like *SET, *MOPER, *VFUN, and whatever *DO loops you need. 

    If you can’t do the modification you need in APDL or APDL Math, then you can use *EXPORT to write the matrices out and use an external tool like MATLAB or Mathematica to do your manipulation. Of course you then use *DMAT or *SMAT with the IMPORT method to read the modified matrix/vector back in.
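    That round trip might look like the following sketch (the file name and the external script are placeholders for whatever tool you use):

```
*EXPORT,MatK,MMF,matk.mmf          ! write MatK out in Matrix Market format
/SYS,python fix_matk.py            ! hypothetical external script that edits matk.mmf
*FREE,MatK                         ! drop the old copy
*SMAT,MatK,D,IMPORT,MMF,matk.mmf   ! read the modified matrix back in
```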

    Fifth Step: Use the Matrix

    Now it is finally time to use your matrices/vectors.  The most common use is to bundle it up in a substructure matrix (*EXPORT,,SUB) and use it in a solve.  What is great about this is that you can also (I know, we don’t want to use the N-word) save the file as a NASTRAN substructure and give it to that annoying DMAP guru who insists that NASTRAN is the only structural analysis code in the world.  He gets his file, and you get to use ANSYS.

    You can also solve using APDL Math.  This can require multiple steps depending on what you want to do. A typical solve involves using the *LSENGINE command to define how you want to solve, then factoring your matrix with *LSFACTOR, then solving with *LSBAC.

    There are also ways to continue a solve in ANSYS after changing the EMAT file.  Unfortunately my plans to do an example of this were thwarted and ANSYS did not provide one, so I’m not 100% sure of the steps required.  But an LSSOLVE that does not force ANSYS to recreate the matrices should work.  Maybe a topic for a future posting. 

    Other Stuff to Know About

    There is a lot more to this, and the help is where you can learn more.  But a few things everyone needs to be aware of are listed here.

    DOF Order

    A key area where people have problems is understanding how the DOF’s in your matrix or vector are ordered. This is because ANSYS reorders things for efficiency of memory and solve.  The nice thing is that ANSYS stores a map in the full file that you can use to convert back and forth using the *MULT command. 

    Please read section 4.4 ( // ANSYS Parametric Design Language Guide // 4. APDL Math // 4.4. Degree of Freedom Ordering) in the manual. They have a great description and some examples.

    Just remember to deal with DOF ordering.


    Limitations

    This is still a new tool set, and as users apply it to real-world problems the developers are adding functionality.  Right now there are some limitations. 

    • The biggest is that all of this works on linear matrices.  So you have to be working on a linear problem. Material or geometric non-linearities just don’t work.  This makes perfect sense but may be one of those things some users might not figure out till they have invested some serious time.
    • You cannot modify a sparse matrix in APDL Math.  You have to write it out using *EXPORT, modify it with something like MATLAB, then read it back in with *SMAT.
    • *MULT cannot be used to multiply two sparse matrices. One or both must be dense.  The result is always dense.
    • The BCS, DSS, and DSP solvers only work with sparse matrices. Only the LAPACK solver works for a dense matrix.

    Real and Imaginary

    Most of the features in APDL Math work with matrices that have imaginary terms in them. Be careful and read the help if you are using complex math; it can get tricky.  Especially read section 4.3 on how to specify a position in a matrix for real or imaginary numbers.


    Examples

    There are not a lot out there.  The manual has some in section 4.7.  Take a look at these before you do anything. If you cannot find a specific example, contact your technical support provider and see if they have one, or if they can ask development for one.

    Give it a Shot, and Share

    This is a new feature, and a power user feature.  So what is happening is some very smart people are doing some very cool things with it, and not sharing.  It is very important that you share your effort with your Channel Partner or ANSYS, Inc so that they can not only see how people use this tool, but also modify the tool to do even more cool things. 

    It would also be cool if you posted what you do on XANSYS for the masses to see.  Very cool.

    Dean Kamen Visits Phoenix

    Inventor of medical devices, the man behind the Segway, and FIRST backer Dean Kamen visited the Phoenix area yesterday and today, and we were lucky enough to have people from PADT invited to two different events at which he spoke.  For engineers involved in product development, this is like a visit from an NFL quarterback for most people.  He turned out to be open, engaging, and a very good speaker. 

    We could go on in adoration and explore the guilt and envy we feel after seeing all that he has done.  Instead we thought we would highlight two things we learned from his visit:

    1. The FIRST program that he started and still heads is making a huge difference in this country and around the world.  PADT has been peripherally involved, focusing instead more on the underwater robot scholastic competitions that are very popular here in Phoenix.  But FIRST is now huge, and is still growing.  What we learned is the positive impact it is having: students who participate in FIRST are 3 times more likely to become engineers, 30% more likely to attend college, and twice as likely to volunteer in their communities. Those are some positive numbers.  Those of us in the engineering world should take advantage of that and support FIRST.


    2. He offered a unique perspective on how engineers see the world.  When he was young he heard the story of David and Goliath.  Most people see a religious message in this story; there are various interpretations. But as a child, Dean Kamen did not see those messages.  What he saw was that David won because he had better technology: he had a slingshot.  That is how he beat the giant.  I found that a very interesting point of view. If you don’t get it, ask an engineer.

    If you ever have the chance to explore what his company, DEKA, is currently doing with a revolutionary power generation and water purification solution for areas of the globe without power or clean water, do so.  It is very leading-edge Stirling engine and distillation technology.

    ANSYS Training Face to Face

    This week’s Focus posting is not going to be very technical. In fact, it is a bit of an editorial.

    Over the past five years or so we have seen a lot of companies who use ANSYS, Inc products move away from traditional face-to-face training with instructors in a classroom.  There are a lot of reasons for this.  The two most common are that 1) the company does not have a travel budget and 2) that training labor hours are considered overhead and managers all have very strict overhead restrictions.  On top of these two, many companies are just plain trying to save money on the cost of training or limiting their overall training budgets.

    What we have seen is a larger number of users either trying to train themselves from manuals or downloaded training material, or people trying to do web-based training.  One can certainly learn how to use an FEA or CFD tool this way, but through our tech support we are starting to see the negative side of this shift: users only understand some of the aspects of the tools and do not have a depth of knowledge that goes beyond the basics.  So when they run into a problem that requires a move beyond those basics, or that might require a more nuanced approach, they struggle or they call tech support for on-the-spot training.


    Even though we engineers are not the most social sub-species of humans, we can still heavily benefit from face-to-face interaction during training.  When PADT teaches a training class we find that a small portion of the time is spent lecturing on and doing workshops for the basics.  Most of the time is spent answering questions that occur to students while they take these basics in.  Some are industry or user specific, some delve deeper into the tool than the training material does.  But they all provide an education to the whole class that never occurs otherwise.

    We have taught, and been students in, web-based training classes. The interaction is just not the same.  There are not as many questions, and the instructor is not able to use body language cues to see if the class is really getting what they are saying.  In fact, we feel this is the biggest issue. When you are on the phone and sharing a screen you cannot even tell if the students are listening.  So the instructor pushes on, the students drift further away, and the true benefits of the class are lost.

    Make a Case for Classroom Training

    The point of all of this is that we feel users out there need to make a case for real classroom training.  When your boss says that there is no travel budget, not enough overhead allocation, or just not enough money, argue strongly that the savings from online or self-directed training are not that significant when compared to the productivity cost of not having deep, interactive training.  If you are a boss, admit it, you know we are right.  You should fight a bit harder for the budget because in the long run you will save money.

    Another way to look at it is the relative cost of classroom training versus how much you will use the ANSYS tools you are trained on.  Even if we assume that the company you work for kind of sucks and most engineers move on in five years, one to three weeks of training is nothing when compared to five years as a user.  If your productivity is just 5% higher during that time, the savings are significant. 

    Do the full classroom training.  You will not regret it.

    As a full disclosure:
    We are partly motivated to express this opinion by the fact that we make money doing such training classes, but in reality very few of you reading this will do training with us (although you could use us if you wanted to… hint, hint).  Most of you do your training through other Channel Partners or ANSYS, Inc.  So this posting is not entirely self-serving.

    About the pictures:
    I find the stock photographs of what are basically models so contrived and stereotypical that they are hilarious.  So I grabbed a few of my favorites to share.  I love how they always have someone crossing their arms, looking thoughtful. 

    Reducing the Size of your RST File–OUTRES is your Friend

    One of the most common questions on XANSYS, and a common tech support question for us is: “Is there any way to make my result file smaller?”  In fact, we just got a support call last week on that very topic.  So we thought it would be a good topic for The Focus, and besides the standard quick answer, we can go a bit deeper and explain why.

    Why so Big?

    The first thing you need to understand is why the result file is so big. One of the fundamental strengths of the ANSYS solver, back to the days when John Swanson and his team were writing the early versions, is that the wealth of information available from an FEA solution is not hidden from the user.  They took the approach that whatever can be calculated will be calculated, so the user can get to it without having to solve again.  Most other tools out there do the most common calculations, average them, and then store that in the results file. 

    If you go back to your FEA fundamentals and take a look at our sample 2D mesh, you will quickly get an idea of what is going on.


    When an FEA mesh is solved, the program calculates the degree of freedom (DOF) results at each node of each element.  This DOF solution is then used to calculate the derived values, usually stress and strain, for each node of each element based upon the solution at the element’s integration points.  Then, during post processing, the user decides how they want to average those element nodal results.

    So the stress in the X direction for Node 2 in our example is different for element 1 and element 2.  Element 1 uses the results it calculated with its shape functions to get stresses and strains for node 2, and element 2 does the same thing. 

    Most other programs average these values at solve time and store either an element average or a nodal average.  But ANSYS does not.  It stores the values for each node on each element in its results file.

    By default, node 5 in our example will be stored four times in the ANSYS RST file, and each instance will contain 12 result items for a linear run (SX, SY, SXY, S1, S2, S3, SINT, SEQV, EPELX, EPELY, EPELXY, EPELEQV) and even more for 3D, non-linear, and special capability elements.

    And Now if you want, let’s digress into the nitty gritty:

    You can actually see what is in the RST file because ANSYS Mechanical APDL has awesome documentation.  Modern programs don’t even come close to the level of detail you can find in the ANSYS MAPDL help.  Go to the Mechanical APDL section of the help and then follow the path:

    // Programmer’s Manual // I. Guide to Interfacing with ANSYS // 1. Format of Binary Data Files // 1.2. Description of the Results File. 

    Look at the file format in section 1.2.3.  Lots of headers and such, but if you take the time to study it you will be able to see how much space each type of thing stored in the result file uses.

    First it stores model information: everything that describes the model but the geometry and loads.  If you scroll down to the “Solution data header” (use find to jump there) you have the actual results. This is the big part.

    Scroll down past the constants to the NSL, or the node solution list: 

    C * NSL    dp       1  nnod*Sumdof  The DOF Solution for each node in the nodal coordinate system.

    This says that the size of this record is the number of nodes (nnod) times the number of Degrees of Freedom (Sumdof).  So for a structural solid model with 10,000 nodes and only UX, UY, and UZ DOF’s, it would be 30,000 double precision numbers long.
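    Since the record scales linearly with node count and DOF count, the arithmetic is easy to sketch (a Python back-of-the-envelope, assuming 8 bytes per double precision value and ignoring record headers):

```python
# Size of the NSL (DOF solution) record: nnod * Sumdof double precision values.
# Assumes 8 bytes per value; record headers are ignored.
def nsl_record_bytes(nnod, sumdof, bytes_per_value=8):
    return nnod * sumdof * bytes_per_value

# 10,000 structural solid nodes with UX, UY, UZ:
print(nsl_record_bytes(10_000, 3))   # 240000 bytes, about 234 KB per result set
```

    And remember, that is just one record for one result set; a transient run with many substeps stores one of these per set.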

    After the DOF solutions we have velocity, acceleration, reaction forces, masters, and then boundary conditions and some loads. Even some crack tip info.

    Then comes the big part: the element solution information.  Take a look at it. You have miscellaneous data, element forces, ETABLE type data, etc.  Then there is the ENS record, the Element Nodal Component Stresses.  The size is marked as “varies” because there are so many variables that define how big this record is. Read the comments; it is long, but it explains the huge amount of information stored here. 

    Study this and you will know more than you ever wanted to about what is in the RST file!

    Storing all of this information is the safest option. Everything a user might need during post processing is stored so that they do not have to rerun if they realize they need some other bit of information. But, as is usual with MAPDL, the user is given the option to change away from those defaults, and only store what they want. OUTRES really is your friend.

    You Are in Control: OUTRES

    OUTRES is a unique command.  It is cumulative.  Every time you issue the command, it adds or removes what you specify from what is stored in the RST file.  The basics of the command are shown below, and more info can be found in the online help.  Use OUTRES,STAT to see what the current state is at any point.  Always start with OUTRES,ERASE to make sure you have erased any previous settings, including the defaults.

    Remember that this command affects the current load step. The settings get written to the load step file when you do an LSWRITE.  So if you have multiple load steps and you want the same settings for each, set them on the first one and they will carry through to all the others.  But if you want to change them for a given load step, you can.
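    A short sketch of the cumulative behavior (the items and frequencies here are arbitrary examples, not a recommendation):

        OUTRES,ERASE              ! back to the defaults
        OUTRES,ALL,NONE           ! then turn everything off
        OUTRES,NSOL,ALL           ! add the DOF solution at every substep
        OUTRES,RSOL,LAST          ! add reactions, last substep only
        OUTRES,STAT               ! list what will now be stored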

    If you are a GUI person you can access this from four different branches in the menus:

    Main Menu>Preprocessor>Loads>Analysis Type>Sol’n Controls>Basic
    Main Menu>Preprocessor>Loads>Load Step Opts>Output Ctrls>DB/Results File
    Main Menu>Solution>Analysis Type>Sol’n Controls>Basic
    Main Menu>Solution>Load Step Opts>Output Ctrls>DB/Results File

    The first thing to play with on the command is the first argument: Item. 

    The default is to write everything.  What we recommend is that for most runs, OUTRES,BASIC is good enough.  It tells the program to store displacements, reaction forces, nodal loads, and element stresses. The big thing it skips is the strains. Unless you are looking at strains, why store them?  Same with the MISC values; most users don’t ever access these.

    The next thing you can do to reduce file size is to not store the results on every element.  Use the Cname argument to specify a component you want results for.  Maybe you have a huge model but you really care about the stress over time at a few key locations.  So use node and element components to specify which results you want for which components.  Note: you can’t use ALL, BASIC, or RSOL with this option; you need to specify a specific type of result for each component.  Remember, the command is cumulative, so use a series of OUTRES commands to control this.
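    For example, to keep full results only at a few key locations, something like this sketch would work (the component names are made up for illustration):

        CM,KEYNODES,NODE          ! nodes at the locations of interest (currently selected)
        CM,KEYELEMS,ELEM          ! the elements attached to them (currently selected)
        OUTRES,ERASE              ! clear everything, including the defaults
        OUTRES,ALL,NONE           ! start from nothing
        OUTRES,NSOL,ALL,KEYNODES  ! displacements over time at the key nodes
        OUTRES,STRS,ALL,KEYELEMS  ! stresses over time on the key elements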

    OUTRES, Item, Freq, Cname

    Item specifies the results item for database and file write control:

    ALL — All solution items except SVAR and LOCI. This value is the default.
    CINT — J-integral results.
    ERASE — Resets OUTRES specifications to their default values.
    STAT — Lists the current OUTRES specifications.
    BASIC — Write only NSOL, RSOL, NLOAD, STRS, FGRAD, and FFLUX records to the results file and database.
    NSOL — Nodal DOF solution.
    RSOL — Nodal reaction loads.
    V — Nodal velocity (applicable to structural full transient analysis only (ANTYPE,TRANS)).
    A — Nodal acceleration (applicable to structural full transient analysis only (ANTYPE,TRANS)).
    ESOL — Element solution (includes all of the items that follow):
        NLOAD — Element nodal, input constraint, and force loads (also used with the /POST1 commands PRRFOR, NFORCE, and FSUM to calculate reaction loads).
        STRS — Element nodal stresses.
        EPEL — Element elastic strains.
        EPTH — Element thermal, initial, and swelling strains.
        EPPL — Element plastic strains.
        EPCR — Element creep strains.
        EPDI — Element diffusion strains.
        FGRAD — Element nodal gradients.
        FFLUX — Element nodal fluxes.
        LOCI — Integration point locations.
        SVAR — State variables (used only by USERMAT).
        MISC — Element miscellaneous data (SMISC and NMISC items of the ETABLE command).

    Freq specifies how often (that is, at which substeps) to write the specified solution results item. The following values are valid:

    n — Writes the specified results item every nth (and the last) substep of each load step.
    -n — Writes up to n equally spaced solutions (for automatic loading).
    NONE — Suppresses writing of the specified results item for all substeps.
    ALL — Writes the solution of the specified solution results item for every substep. This value is the default for a harmonic analysis (ANTYPE,HARMIC) and for any expansion pass (EXPASS,ON).
    LAST — Writes the specified solution results item only for the last substep of each load step. This value is the default for a static (ANTYPE,STATIC) or transient (ANTYPE,TRANS) analysis.
    %array% — Where array is the name of an n x 1 x 1 dimensional array parameter defining n key times, the data for the specified solution results item is written at those key times. Key times in the array parameter must appear in ascending order, and values must be greater than or equal to the beginning time value of the load step and less than or equal to the ending time value of the load step. For multiple-load-step problems, either change the parameter values to fall between the beginning and ending time values of the load step or erase the current settings and reissue the command with a new array parameter. For more information about defining array parameters, see the *DIM command documentation.
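    A sketch of the %array% form, assuming three key times that fall inside a load step ending at a time of 1.0:

        *DIM,KEYT,ARRAY,3         ! an n x 1 x 1 array of key times
        KEYT(1) = 0.25            ! must be in ascending order
        KEYT(2) = 0.50
        KEYT(3) = 1.00
        OUTRES,NSOL,%KEYT%        ! write the DOF solution at those key times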

    Cname is the name of the component, created with the CM command, that defines the selected set of elements or nodes for which this specification is active. If blank, the set is all entities. A component name is not allowed with the ALL, BASIC, or RSOL items.



    Use What Works for You

    The help on the OUTRES command has a nice example where the user specifies different solutions for different sub steps. Check it out to get your head around what is happening:  // Command Reference // XVI. O Commands // OUTRES

    Next time you run a small but typical model, play with the options.  Most of the time, when I have a lot of load steps or a large model, I use the following in my input deck:
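    A minimal sketch of such a deck (the exact item and frequency are a judgment call, not a prescription):

        OUTRES,ERASE              ! clear any previous specifications, including the defaults
        OUTRES,BASIC,LAST         ! displacements, reactions, nodal loads, and stresses,
                                  ! last substep of each load step only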


    Sometimes I only care about surface stresses, so I use the following (of course, LAST can be replaced with ALL or any other frequency):
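    One way to set that up is sketched below; the component names are made up, and pairing NSOL with a nodal component and STRS with an element component is one reasonable arrangement:

        CM,SURFNODES,NODE         ! nodal component of the selected surface nodes (assumed name)
        CM,SURFELEMS,ELEM         ! element component of the attached elements (assumed name)
        OUTRES,ERASE
        OUTRES,ALL,NONE           ! start from nothing
        OUTRES,NSOL,LAST,SURFNODES   ! DOF solution on the surface nodes
        OUTRES,STRS,LAST,SURFELEMS   ! element nodal stresses on the surface elements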


    Use OUTRES on the next couple of runs you do. Try BASIC, try some other things and see if you can save some disk space.

    Bringing the Value of Simulation into Perspective

    Those of us who do simulation for a living spend a lot of time focusing on faster, cheaper, better.  But we rarely deal with the hard and cold reality that better often means safer.  Please take some time to read the blog entry below from an ANSYS employee who survived an insane crash, because a bunch of nerds at Nissan ran simulations over and over again on his car so that it would protect him.  Warning: it may make you mist up a bit.