Quick Tip: Tool for Putting LaTeX Equations in PowerPoint

I’m not a big fan of LaTeX. That is probably more a reflection of the fact that I don’t have an advanced degree and didn’t have to write a dissertation than anything else. But one thing that is worse than LaTeX is the equation editor in PowerPoint. If you are like us, you use PowerPoint as your primary reporting tool and dread putting equations in.

Matt Sutton was doing just that the other day and thought “there has got to be a better way!”  He found one. A tool called IguanaTex.

http://www.technion.ac.il/~zvikabh/software/iguanatex/download.html

There is not much to it, it is free, and it works well.

IguanaTex screenshot

Example of IguanaTex output

Here are some equations from a presentation Matt just did:

Equations

Making APDL Parameters Available in the ANSYS Parameter Manager or DesignXplorer: Prep, Solve, and Post

This is one of those questions that comes up every once in a while that is not so obvious at first glance, but that is simple once you understand how ANSYS Mechanical interacts with ANSYS Mechanical APDL.  After a couple of email exchanges around a tech support question, we thought it would be good to share with everyone.

Before we get started, if you need a refresher on Command Objects in ANSYS Mechanical, the way in which you send APDL commands to the ANSYS Mechanical APDL solver, here is a seminar from a couple of years ago that covers the whole deal:

The basic problem is this: you have an APDL script you execute as a command object that does some sort of model interrogation or stores the result of some calculation, and you want to use that parameter in the parameter manager or in DesignXplorer.  If you look at the details view for a command object you will notice that it only supports input parameters: ARG1-ARG9.

image

If you look at the example (silly) macro you will see that it:

  1. Grabs component (named selection) END1
  2. Figures out how many nodes are attached to END1 (NMND)
  3. Takes ARG1 as the total applied load
  4. Calculates the per-node load by dividing the total load by the number of nodes.
  5. Applies that per node load
  6. Reselects all the nodes
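Putting those steps together, here is a minimal sketch of what such a command object macro could look like. The component name END1 and the parameter NMND come from the description above; the load direction and the PER_NODE name are assumptions for illustration:

cmsel,s,END1              ! select the nodes in named selection END1
*get,NMND,node,0,count    ! NMND = number of selected nodes
PER_NODE = ARG1/NMND      ! per-node load from the total load in ARG1
f,all,fy,PER_NODE         ! apply the per-node force (FY direction assumed)
allsel                    ! reselect all nodes and elements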

If I want to know how many nodes I put the load on and what the per-node load is, I’m kind of stuck here.  Any command object you add to the tree above the Solution branch only allows input parameters.

But a command snippet applied in the Solution branch is different: it allows you to pull parameters back and share them through the parameter manager.

When you first insert a command object you only get input parameters (ARG1-ARG9) as usual, and an empty section called “Results.”

image

The way you get result parameters, or what I think should be called “output parameters,” is to create a parameter in the command object’s APDL script that starts with “my_”. When you click outside the text input window, the program parses your script, and if it finds any “my_” parameters in the text, it sticks them in the Results section:

image
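As a hedged example, a post-processing command object like the following would return two output parameters; the names after “my_” and the reuse of the END1 component are assumptions for illustration:

set,last                         ! read the last result set
cmsel,s,END1                     ! reselect the loaded nodes
*get,my_node_count,node,0,count  ! becomes an output parameter
my_load_per_node = ARG1/my_node_count
allsel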

Note, the default prefix is “my_” but you can change it in the “Output Search Prefix” line in the Definition block.

Initially they will show up pinkish because the model has not been run and they are not defined. Click on the box to make them parameters that get passed outside of the program and then run:

image

If you pop back out to the project view you will see that we now have a Parameter Set bar with both input and output parameters:

image

And if you open the parameter manager up you can see the input and output parameters:

image

This works because all ANSYS Mechanical is doing is making one big batch input file for ANSYS MAPDL.  That file contains any command objects you insert into the tree, and after the solve any parameters you tagged in a post-processing command object are extracted and returned to ANSYS Mechanical.

ANSYS FLUENT Performance Comparison: AMD Opteron vs. Intel XEON

AMD Opteron 6308 & INTEL XEON e5-2690 Comparison using ANSYS FLUENT 14.5.7

Note: The information and data contained in this article was compiled and generated on September 12, 2013 by PADT, Inc. on CUBE HVPC hardware using FLUENT 14.5.7.  Please remember that hardware and software change with new releases and you should always try to run your own benchmarks, on your own typical problems, to understand how performance will impact you.

A potential customer of ours was interested in a CUBE HVPC mini-cluster. They requested that I run benchmarks and garner some data on two CPUs. The CPUs were benchmarked on two of our CUBE HVPC systems: one mini-cluster has dual INTEL® XEON e5-2690 CPUs and the other has quad AMD® Opteron 6308 CPUs. The benchmarking was only run on a single server using a total of 16 cores on each machine. The same DDR3-1600 ECC Reg RAM, Supermicro LSI 2208 RAID controller, and Hitachi SAS2 15k RPM hard drives were used on each system.


CUBE HVPC Test configurations:

Server 1: CUBE HVPC c16
  • CPU: 4, AMD Opteron 6308 @ 3.5GHz (Quad Core)
  • Memory: 256GB (32x8G) DDR3-1600 ECC Reg. RAM (1600MHz)
  • Hardware RAID Controller: Supermicro AOC-S2208L-H8iR 6Gbps, PCI-e x 8 Gen3
  • Hard Drives: Supermicro HDD-A0600-HUS156060VLS60 – Hitachi 600G SAS2.0 15K RPM 3.5″
  • OS: Linux 64-bit / Kernel 2.6.32-358.18.1.el6.x86_64
  • App: ANSYS FLUENT 14.5.7
  • MPI: Platform MPI
  • HCA: SMC AOC-UIBQ-M2 – QDR Infiniband
    • The IB card was installed; however, solves were run locally on the node
  • Stack: RDMA 3.6-1.el6
  • Switch: MELLANOX IS5023 Non-Blocking 18-port switch
Server 2: CUBE HVPC c16i
  • CPU: 2, INTEL XEON e5-2690 @ 2.9GHz (Octa Core)
  • Memory: 128GB (16x8G) DDR3-1600 ECC Reg. RAM (1600MHz)
  • RAID Controller: Supermicro AOC-S2208L-H8iR 6Gbps, PCI-e x 8 Gen3
  • Hard Drives: Supermicro HDD-A0600-HUS156060VLS60 – Hitachi 600G SAS2.0 15K RPM 3.5″
  • OS: Windows 7 Professional 64-bit
  • App: ANSYS FLUENT 14.5.7
  • MPI: Platform MPI

ANSYS FLUENT 14.5.7 Performance using the ANSYS FLUENT Benchmark suite provided by ANSYS, Inc.

The models we used can be downloaded from the ANSYS Fluent Benchmark page link: http://www.ansys.com/Support/Platform+Support/Benchmarks+Overview/ANSYS+Fluent+Benchmarks

Release ANSYS FLUENT 14.5.7 Test Cases  (20 Iterations each):
  • Reacting Flow with Eddy Dissipation Model (eddy_417k)
  • Single-stage Turbomachinery Flow (turbo_500k)
  • External Flow Over an Aircraft Wing (aircraft_2m)
  • External Flow Over a Passenger Sedan (sedan_4m)
  • External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m)
  • External Flow Over a Truck Body 14m (truck_14m)
Chart 1: Total Wall Clock Time in seconds: (smaller bar is better)


Chart 2: Average wall-clock time per iteration in seconds: (smaller bar is better)


Summary:

Are you sure?

That was the question Eric posed to me after he reviewed the data and read this blog article before posting. I told him, “Yes, I am sure; data is data, and I even triple-checked.” I basically re-ran several of the benchmarks to see if the solve times came out the same on these two CUBE HVPC workstations. I went on to tell Eric, “For example, let’s dig into the data for the External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m) benchmark and see what we find.”

Quad socket Supermicro motherboard: 4 x 4c AMD Opteron 6308 @ 3.5GHz

Dual socket Supermicro motherboard: 2 x 8c INTEL XEON e5-2690 @ 2.9GHz

The dual socket INTEL XEON e5-2690 motherboard is impressive; ten years ago it might have made the Top500 list of the fastest computers in the world. Anyways, after each solve I captured the solve data, and as you can see below, the AMD Opteron wall clock time was faster than the INTEL XEON wall clock time.

So why did the AMD Opteron 6308 CPU pull away from the INTEL for the ANSYS FLUENT solve times? Let’s take a look at a couple of reasons why this happened. I will let you make your own conclusions.

  • Clock speed? But would a 9.6GHz difference in total theoretical CPU speed make a 100% speedup in ANSYS FLUENT wall-clock times?
    • AMD® OPTERON 6308: theoretical total of 16 x 3.5GHz = 56.0 GHz
    • INTEL® XEON e5-2690: theoretical total of 16 x 2.9GHz = 46.4 GHz
  • The floating point argument? The tic and tock of the great CPU saga continues. At this moment in eternity, it is a known fact that the AMD Opteron 6308, like many of its brothers, has one floating point unit per two integer cores, while INTEL has one floating point unit per integer core. What this means to ANSYS CFD users, in my MIS/IT simpleton terms, is that the AMD CPU was simply able to handle and process more data in this example.
  • Were more integer calculations required than floating point? If that is the case, then the AMD CPU would have had eight pipelines for integer calculations while processing four floating point pipelines, whereas the INTEL CPU can process eight floating point pipelines.

Let us look at the details of what is on the motherboards as well; four data paths vs. two can make a difference:

Dual socket Supermicro motherboard (2 x 8c INTEL e5-2690 @2.9GHz) vs. quad socket Supermicro motherboard (4 x 4c AMD Opteron 6308 @3.5GHz):

  • Processor Technology: 32-nanometer (INTEL); 32-nanometer SOI (silicon-on-insulator) technology (AMD)
  • Socket-to-Socket Interconnect: Quick Path Interconnect links, two links at up to 8GT/s per link with up to 16 GB/s peak bandwidth per direction per port (INTEL); HyperTransport™ Technology links, four x16 links at up to 6.4GT/s per link (AMD)
  • Memory: integrated DDR3 memory controller with up to 51.2 GB/s memory bandwidth per socket (both)
  • Number of Channels and Types of Memory: quad channel support (both)
  • Packaging: LGA2011-0 (INTEL); Socket G34 – 1944-pin organic Land Grid Array (LGA) (AMD)
Current pricing of the CPUs

Here is the up-to-the-minute pricing for each CPU. I took these prices off of the NewEgg and Ingram Micro websites; the values were captured on September 12, 2013.

  • AMD Opteron 6308 Abu Dhabi 3.5GHz 4MB L2 Cache 16MB L3 Cache Socket G34 115W Quad-Core Server Processor OS6308WKT4GHKWOF
    • $499.99 x 4 = $1999.96
  • Intel Xeon E5-2690 2.90 GHz Processor – Socket LGA-2011, L2 Cache 2MB, L3 Cache 20 MB, 8 GT/s QPI,
    • $2010.02 x 2 = $4020.04

STEP OUT OF THE BOX,
STEP INTO A CUBE

PADT offers a line of high performance computing (HPC) systems specifically designed for CFD and FEA number crunching aimed at a balance between cost and performance. We call this concept High Value Performance Computing, or HVPC. These systems have allowed PADT and our customers to carry out larger simulations, with greater accuracy, in less time, at a lower cost than name-brand solutions. This leaves you more cash to buy more hardware or software.

Let CUBE HVPC by PADT, Inc. quote you a configuration today!

Efficient Engineering Data, Part 1: Creating and Importing Material Properties in Workbench

Note: This is part 1 of a two-part series on Engineering Data customization and default settings. This article essentially serves as a foundation for my next one, which will cover how to set up default material choices and assignments in Workbench.

As you’ve probably noticed, the Workbench installation comes with an extensive set of material libraries. If you haven’t noticed, then open a Workbench session, go into Engineering Data, and click that button on the upper right that looks like a stack of books: image

Click on one of the libraries, say, General Materials, and take a look at the selection of materials.

image

So you see things like Stainless Steel, Aluminum Alloy, Titanium Alloy, etc., but which alloys exactly? 301 1/2-hard steel? 17-4PH? 6061-T6 aluminum? Or cast C355? Titanium 6-4? Or 6-2-4-2? Obviously you’re going to have your own material properties in mind, and you’ll probably use them frequently enough that you’d like to have them readily accessible. Maybe store them in a library, or something.

As it turns out, you’re not confined to the libraries ANSYS provides with the Workbench installation. You can create your own libraries too. To start this off, first click in the first blank line in the top Engineering Data Sources section, where it says, “Click here to add a new library” (seems pretty straightforward, doesn’t it?) and type a unique name for the library. I’ll call mine “Jeff’s Materials” because I’m incredibly original that way. Then hit Enter.

image

image

You’ll be prompted for a location and xml file name for the library. Specify these and click Save. All of your material names and properties will be stored in this file.

image

Notice that the new library is checked. That means it is unlocked and able to be edited.

image

At that point you can add material names, insert properties from the left side Toolbox, etc.

image

Type in some material names

image

Then define their properties

Once you’re finished adding and editing materials, uncheck the column B box of the library to lock it up. Click Yes to accept changes. If you want to add or edit materials in your library at a later date, simply unlock it by checking the column B check box again.

image

Now, let’s say you want to share your awesome material library with your co-workers, or maybe you’ve installed a new version of ANSYS and you want to include it, or maybe your library was deleted by gnomes during the night. How do you bring it back into Workbench? Simple. First make sure the xml file is available (you’ll want to email it to your co-workers and have them save it to their disks if you’re sharing it with them). Toggle the libraries on by clicking on the stack of books button. Then simply click the little ellipsis button on the “Click here to add a new library” line.

image

Browse to the appropriate xml file and open it.

image

And now you have your library back.

image

I was too lazy to define all the materials for this article, hence the question marks

This is all well and good, but wouldn’t it be nice if we could change the materials that are immediately available in Engineering Data upon opening Workbench, and set the default material assignment to something besides Structural Steel? As it turns out, you can do both of these, and I’ll show you how in the next installment of Efficient Engineering Data.

Questions Decision Makers Should Ask About Computer Simulations

‘TRUST BUT VERIFY’

A guest posting from Jack Thornton, MINDFEED Marcomm, Santa Fe, NM

The computerization of engineering (and everything else) has imposed new burdens on managers and executives who must make critical decisions. Where once they struggled with too little information, they now struggle with too much. Until roughly three decades ago, time and money were plowed into searching for more and better information. Today, time and money are plowed into making sense of myriad computer simulations.

For all but the best-organized decision makers, these opposite situations have proven equally frustrating. For nearly all of engineering history, critical decisions were based on a few pieces of seemingly credible data, a handful of measurements, and hand-drawn sketches a la Leonardo DaVinci—leavened with hands-on experience and large dollops of intuition.

Computer simulations are now everywhere in engineering. They have greatly speeded up searches for information, as well as creating it in the first place, and endlessly multiplying it. What has been lost are transparency and traceability—what was done when, by whom and why. Since transparency and traceability are vital to making sound engineering decisions in today’s intensely collaborative technical environments, decision makers and managers say this loss is a big one.

This is not some arcane, hidden war waged by experts, geeks and professors. This is about designing machinery, components, physical systems and assemblies that are globally competitive—and turn a profit doing so. The complexity of modern components, assemblies and systems has been exhaustively and repeatedly described.

Nor is this something engineers and first-line managers can afford to ignore. Given the shortages of engineering talent, relatively inexperienced engineers are constantly being handed responsibility for making key decisions.

Users of computerized simulation systems continually seek ways to answer the inevitable question, “How do we know this or that or whatever to be true?” Several expert users of finite element analysis (FEA), the basic computational toolset of engineering simulation and analysis, were interviewed for this article. Each interviewee is a licensed professional engineer (PE) and each has been recommended by a leading FEA software vendor.

For decision makers, a simulation, FEA or otherwise, really presents only three options:

  • Signing off on the production of a component or assembly. If it proves to be flawed, warranty claims, recalls, and perhaps much worse may result.
  • Shelving a promising new product, perhaps at the behest of fretful engineers. The investment is written off or expensed as R&D. The marketplace opportunity (and its revenue) may be lost forever.
  • Remanding the project to the analysts even while knowing that “paralysis by analysis” will push development costs too high or cause too big a delay in getting to market.

Since executives and other upper-echelon corporate decision makers rarely possess much understanding of FEA, let alone have time to develop it, a “trust but verify” strategy is the only reasonable approach.

The verify part is easy. FEA modelers and solvers have been well wrung-out over the past 10 to 20 years. All of the FEA software vendors will share details of their in-house tests of their commercial code, the experiences of customers doing similar work, and investigations by reviewers who are often on engineering-school faculties. The same is true for industry-specific “home grown” code.

It’s the trust part that’s so challenging, as in FEA trust depends on understanding some very complicated matters.

Analysis experts note that unless the builders of FEA models are questioned, they rarely spell out the model’s underlying assumptions. Even less frequently (and clearly) described is the reasoning behind the dozens or hundreds of choices they made that are dictated by those assumptions.

And worse, these choices are not always clarified when model builders do provide this detail—quite the opposite, in fact. When pressed for explanations, model builders may simply present the mathematical formulas they use to characterize the physics of their work.

Analysis experts are quick to point out that these equations often confuse and intimidate. Decision makers should insist on commonsense explanations and not equations. And every FEA model builder will try earnestly to explain (often at great length) the model’s implications to anyone who takes the time to look.

In the context of FEA and other simulations, “physics” means the real-world forces to be withstood by a printed circuit board, a pump, an engine mount, a turbine, an aircraft wing or engine nacelle, the energy-absorbing structure of a car, or anything else that is mechanically complex and highly stressed.

This is why transparency and traceability are so important in FEA. Analysts note that some of this is codified in the guidelines for simulation and computational analysis in the ASME / ANSI verification and validation standards. Further support comes from company best practices developed by FEA users and managers, although enforcement is rarely consistent, and voluntary industry standards whose applicability varies widely.

The transparency and traceability challenge is that building a model—again, a subset of the real world—requires dozens of assumptions about the mechanical capabilities that the object or assembly must have to meet its requirements. After these basic assumptions have been coded into the model, hundreds of follow-on choices are needed to represent the physical phenomena in the model.

Analysts urge decision makers to question the stated values and ranges of any of the model’s parameters—and in particular values and ranges that have been estimated. Decision makers are routinely urged to probe whether these parameters’ values are statistically significant, and whether those values are even needed in the model.

A survey of experts turns up numerous aspects of FEA and other computerized simulations that decision makers should probe as part of a trust-but-verify approach. Among many examples:

  • Incoming geometry—usually from solid modeling systems used by product designers—and the topologies and boundaries they have chosen.
  • The numerical values representing physical properties such as yield strengths of the chosen materials.
  • Mechanical components and assemblies. How accurately represented are the bolts and welds that hold the assemblies together?
  • The stiffness of structures.
  • The number of load steps. Is the range broad enough? Are there enough intermediate steps so nothing will be missed? How true-to-life are the load vectors?
  • The accuracy of modal analyses. Resonating harmonic frequencies—vibration—can shake things apart and lead to catastrophic failures.
  • Boundary conditions, or where the object being modeled meets “the rest of the world” in the analysis. Are the specifics of the object’s physical and mechanical requirements—the geometry—accurately represented and, again, how do we know?
  • Types of analysis, which range from small, simple linear static to large, highly complex nonlinear dynamic. Should a smaller simpler analysis have been used? Could physical measurements suffice instead of analyses?
  • In fluid dynamics, how well characterized are the flows, volumes, and turbulence? How do we know? In fluid dynamics, representations of flows, volumes, and turbulence are the numerical counterparts of the finite elements used in analyses of solids.
  • Post-processing the results, i.e., making the numerical outputs, the results of the analysis, comprehensible to non-experts.

Underlying all these are the geometric and analytical components that are found in all simulations. In FEA, this means the mesh of elements that embodies the physics of the component or assembly being modeled. Decision makers should always question the choice of elements as there are hundreds to pick from.

Some models use only a handful of elements while a few use tens of millions. Also to be questioned is the sensitivity of those elements to the forces, or loads, that push or pull on the model. A caveat: this gets deeply into the inner workings of FEA, e.g. explanations of the points or nodes where adjacent elements connect, the tallies of degrees of freedom (DOFs) represented by each pair of nodes, and the huge number of partial differential equations required.

The trust-but-verify approach is valuable in all of the engineering disciplines—mechanical, structural, electrical / electronic, nuclear, fluid dynamics, heat transfer, aerodynamics, noise / vibration / harshness, as well as for sensors, controls, systems, and any embedded software.

Developers of FEA and other simulation systems are working hard to simplify finding these answers or at least make trust-but-verify determinations less taxing. See Sidebar, “Software Vendors Tackle Transparency and Traceability in FEA.”

Proven approaches

A proven approach to understanding FEA models is offered by Eric Miller, co-owner of Phoenix Analysis & Design Technologies (PADT) in Tempe, Ariz. “A decision maker with some understanding of the management of the data in an FEA analysis will ask about how specific inputs affect the results. Such a decision maker will lead the model builder and analyst to think more deeply about those inputs. Ultimately a more accurate simulation will be created.”

Miller offers a caveat: “This questioning should be approached as an additional set of eyes looking at the problem from the outside to determine the accuracy of results. The key is to not become adversarial and question the integrity or knowledge of the analyst.”

Jeffrey Crompton, principal of AltaSim Technologies, Columbus, Ohio, goes straight to the heart of the matter: “Let’s start out with the truth – all models are wrong until proven otherwise. Despite all the best attempts of engineers, scientists and computer code developers,” he explained, “a computational model does not give the right answer until you can categorically demonstrate its agreement with reality.”

“Categorically” is a high standard, a term with almost no wiggle room. Unfortunately, given the complexity of simulations, agreement with reality is often not easy to demonstrate. Hence the probing and questioning recommended by FEA experts and engineers.

Secondly, despite tsunamis of data cascading from one engineering department to another, a great deal of the physical world still remains imprecisely quantified. Demonstrating agreement with reality “becomes increasingly difficult,” Crompton added, “when you may not know the value of some parameters, or lack real-world measurements to compare against, or are uncertain exactly how to set up the physics of the problem.”

The challenge for decision makers uncomfortable with the results of FEA analyses is neatly summed up by Gene Mannella, vice president and FEA expert at GB Tubulars Inc. in Houston. “Without a basic understanding of what FEA is, what it can and cannot do, and how to interpret its results, one can easily make bad and costly decisions,” he points out. “FEA results are at best indicators. They were never intended to be accepted” at face value.

As Mannella, Crompton and other FEA consultants regularly remind their clients, an analysis is an approximation. It is an abstraction, a forecast, a prediction. There will always be some margin of error, some irreducible risk. This is the unsettling truth behind the gibe that “all models are wrong but some are useful.” No FEA model or analysis can ever be treated as “gospel.” And this is why analysts strive ceaselessly to minimize margins of error, to make sure that every remaining risk is pointed out, and to clearly explain the ramifications.

“To be understood, FEA results must be supplemented by the professional judgment of qualified personnel,” Mannella added. His point is that decision makers relying on the results of FEA analyses should never forget that what they “see” on a computer monitor, no matter how visually impressive, is an abstraction of reality. Every analysis is a small subset of one small part of the real world, and it is constrained by deadlines, budgets, and the boundaries of human comprehension.

Mannella’s work differs from that of most other FEA shops: it is highly specialized. GB Tubulars makes connectors for drilling and producing oil and gas in extreme environments. Its products go into oil and gas projects several miles underground and also often beneath a mile or more of seawater. Pressures are extreme, bordering on the incalculable. The risk of a blowout with massive damage to equipment and the environment is ever-present.

The analysts also stressed probing the correlation with the results of physical experiments. Tests in properly equipped laboratories by qualified experimentalists are the single best way to ensure that the model actually does reflect physical reality. Which brings us to the FEA challenge of extrapolations.

Often the most relevant test data is not available because physical testing is slow and costly. The absence of relevant data makes it necessary to extrapolate among the results of similar experiments. Extrapolations can have large impacts on models, so they too should be questioned and understood.

To deal with these difficulties, Crompton and the other analysts recommend, first, managing the numbers with statistical process control (SPC) methods and, second, devising the best ways to set up the model and its analyses with design-of-experiments simulations. Both should be reviewed by decision makers—ideally with a qualified engineer looking over their shoulders.

“Our mantra in this situation is ‘start simple and gradually add complexity,’” Crompton said. “Consider starting with a [relatively simple] closed-form analytical solution. The equation’s results will help foster an understanding of how the physics and boundary conditions need to be implemented for your particular problem.” [A closed-form solution is an equation with a single variable, such as stress equals force divided by area, as opposed to a model; even the simplest simulation and analysis models have several variables.]

Peter Barrett, principal of CAE Associates in Middlebury, Conn., noted that, “the most experienced analysts start with the simple models that can be compared to a closed-form solution or are models so simple that errors are minimized and can be safely ignored.” He commented that the two acronyms that best apply to FEA are KISS (“Keep It Simple, Stupid”) and “garbage in, garbage out,” or GIGO. In other words, probe for the unneeded complexity and bad data.

Model builders are always advised by FEA experts to start by modeling the simplest example of the problem and then build upward and outward until the model reflects all the relevant physics. Decision makers should determine whether this sensible practice was followed.

When pressed for time, “some analysts will try to skip the simple-example problem and analysis,” Barrett said. “They may claim they don’t have time” for that fundamental step, i.e., the analyst thinks the problem is easily understood. Decision makers should insist that analysts take the extra time. “The analysis always benefits from starting as simply as possible,” he continued. “Decision makers will reap the rewards of more accurate analysis, which is a driver for projects being on time and under budget.”

Ken Perry, principal at Echobio LLC, Bainbridge Island, Wash., concurred. “The first general principle of modeling is KISS. Worried decision makers should verify that KISS was applied from the very beginning,” he said. “KISS is also an optimal tool to pick apart existing models that are inflated and overburdened with unnecessary complexity,” Perry added.

A favorite quote of Perry’s comes from mathematician R.W. Hamming: “The purpose of computing is insight, not numbers.” Perry elaborated: “Decision makers should guard against the all-too-human tendency to default to the more complicated explanation when we don’t understand something.  Instead, apply Occam’s razor.  Chop the model down to bite-sized chunks for questioning.” [Occam’s razor is an axiom of logic that says in cases of uncertainty the best solution is the one requiring the fewest assumptions.]

Questioning is especially important, Perry added, “whenever the decision maker’s probing questions evoke hints of voodoo, magic or engineers shaking their head in vague, fuzzy clouds of deference to increasingly specialized disciplines.”  Each of these is a warning flag that the model or analysis has shortcomings.

Perry works in the tightly regulated field of implantable medical and cardiovascular devices. He has one such device himself, a heart valve, and has pictures to prove it on his Web site. Tellingly, Perry began his career not in FEA but as an experimentalist. He worked with interferometry to physically test advanced metal alloys.

Perry is living proof that FEA experts and experimentalists could understand one another if they tried. But often they don’t try, which is another challenge for decision makers.

The last and most cautionary words are from Barrett at CAE Associates. More than anyone else, he was concerned about the risks of inexperienced engineers making critical decisions. Such responsibility often comes with an unlooked-for promotion to a product manager’s job, for example. Unexpected increases in responsibility also can arrive with attrition, departmental shakeups, and corporate acquisitions and divestitures.

“In our introductory FEA training classes we often have engineers signed up who have no prior experience with FEA. They sign up for the intro class,” he said, “because they are expected to review results of analyses that have been outsourced and/or performed overseas.”

Barrett saw this as “very dangerous. These engineers often do not know what to look for. Without knowing how to check, they may assume that the calculations in the analysis were done correctly.  It is virtually impossible to look at a bunch of PowerPoint images of post-processed analysis results and see if the modeling was done correctly. Yet this is often the case.”

20 APDL Commands Every ANSYS Mechanical User Should Know

One of the most powerful things about ANSYS Mechanical is the fact that it creates an input file that is sent to ANSYS Mechanical APDL (MAPDL) to solve. This is awesome because you as a user have complete and full access to the huge breadth and depth available in the MAPDL program.  MAPDL is a good old-fashioned command driven program that takes in text commands one line at a time and executes them. So to access all those features, you just need to enter in the commands you want.

For many older users this is not a problem because we grew up using the text commands. But new users did not get the chance to be exposed to the power of APDL (ANSYS Parametric Design Language) so getting access to those advanced capabilities can be tough. 

In fact, I was in a room next to one of our support engineers while they were showing a customer how to change the elements that the solver would solve (Mechanical defaults to the most common formulation, but you can change them to whatever still makes sense) and the user had to admit he had never really used or even seen APDL commands before. 

So, as a way to get ANSYS Mechanical users out there started down the road of loving APDL commands, we got together and came up with a list of 20 APDL commands that every user should know.  Well, actually, it is more than 20 because we grouped some of them together.  We are not going to give too much detail on their usage, the APDL help is fantastic and it explains everything.  In fact, if you use a copy of PeDAL you can get the help right there live as you type (yes, that was a plug for you to go out and plop down $49 and buy PeDAL).

Also note that we are not getting in to how to script with APDL. It is a truly parametric command language in that you can replace most values in commands with parameters. It also has control logic, functions and other capabilities that you find in most scripting languages.  We will focus on actual commands you use to do things in the program here. If you want to learn more about how to program with APDL, you can purchase a copy of our “Introduction to the ANSYS Parametric Design Language” book. (another plug)

Some APDL Basics

APDL was developed back in the day of punch cards.  It was much easier to use than the other programs out there because the commands you entered didn’t have to be formatted in columns.  Instead, arguments for commands are separated by commas.  So, instead of defining a node in your model as:

345   12.456    17.4567   0.0034 

(where the column position of each value is critical), you create the node with a single line:

N,345,12.456,17.4567,0.0034

Trust me, that was a big deal. But what you need to know now is that all APDL commands start with a keyword and are followed by arguments. The arguments are explained in the Command Reference in the help.  So the entry for creating a node looks like this:

image

The documentation is very consistent and you will quickly get the hang of how to get what you need out of it.  The layout is explained in the help:  // Command Reference // 3. Command Dictionary

Another key thing to know about commands in MAPDL is that most entities you create (not loads and boundary conditions) have an ID number. You refer to entities by their ID number.  This is a key concept that gets lost if you grew up using GUIs.  So if you want to make a coordinate system and use it, you define an ID for it and then refer to that ID. Same thing goes for element definitions (Element Types), material properties, etc…  Remember this, it hangs up a lot of newer users.

To use MAPDL commands you simply enter each command on a line in a command object that you place in your model tree. We did a seminar on this very subject about two years ago that you can watch here.

The idea of entity selection is fundamental to APDL.  Above we point out that all entities have an ID.  You can interact with each entity by specifying its ID.  But when you have a lot of them, like nodes and elements, that would be a pain.  So APDL deals with this by letting you select entities of a given type and making them “selected” or “unselected.”  Then when you execute commands, instead of specifying an ID, you can specify “ALL” and all of the selected entities are used for that command.  Sometimes we refer to entities as being selected, and sometimes we refer to them as “active.”  The basic concept is that any entity in ANSYS Mechanical APDL can be in one of two states: active/selected or inactive/unselected.  Inactive/unselected entities are not used by whatever command you might be executing.

If you want to see all of the APDL commands that ANSYS Mechanical writes out, simply select the setup branch of your model tree and choose Tools->Write Input File.  You can view it in a text editor, or even better, in PeDAL.

image

One last important note before we go through our list of commands: the old MAPDL GUI can be used to modify or create models, just as ANSYS Mechanical can. Every action you take in the old GUI is converted into a command and stored in the jobname.log file.  Many users will carry out the actions they want in an interactive session, then save the commands they need from the log file.

Wait, one more thing: right now you need these commands, but at every release more and more of the solver is exposed in the ANSYS Mechanical GUI, and we end up using fewer and fewer APDL scripts.  So before you write a script, make sure that ANSYS Mechanical can’t already do what you want.

The Commands

1. !

An exclamation point is a comment in APDL. Any characters to the right of one are ignored by the program. Use them often and add great comments to help you and others remember what the heck you were trying to do.

2. /PREP7 – /SOLU – /POST1 – FINISH

The MAPDL program consists of a collection of 10 processors (there were more, but they are now undocumented). Commands only work in some processors, and most work in only one.  If you enter a preprocessor command when you are in the postprocessor, you will get an error.

When you create a command object in your ANSYS Mechanical model, it will be executed in either the preprocessor, the solution processor, or the postprocessor, depending on where in the model tree you insert the command object.  If you need to go into another processor you can; simply issue the proper command to change processors.  JUST REMEMBER TO GO BACK TO THE PROCESSOR YOU STARTED IN when you are done with your commands.

/PREP7 – goes to the preprocessor. Use this to change elements, create things, or modify your mesh in any way.

/SOLU – goes to the solution processor.  Most of the time you will start there, so you will most often use this command when you have gone into /PREP7 and need to get back. Modify loads, boundary conditions, and solver settings in this processor.

/POST1 – goes to the post processor. This is where you can play with your results, make your own plots, and do some very sophisticated post-processing.

FINISH – goes to the begin level. You will need to go there if you are going to play with file names.
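Put together, a Solution-branch command object that needs to tweak the mesh might look like this sketch; the EMODIF arguments are placeholders and assume a material ID 2 already exists:

fini                 ! leave the solution processor
/prep7               ! enter the preprocessor
emodif,all,mat,2     ! example change: assign material ID 2 to all elements
fini                 ! leave the preprocessor
/solu                ! return to the processor we started in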

3. TYPE – MAT – REAL – SECNUM

You only really need to know these commands if you will be making your own elements… but they are among those things everyone should know, because the assignment of element attributes is fundamental to the way APDL works. So read on even if you don’t need to make your own elements.

Every element in your model is assigned properties that define the element.  When you define an element, instead of specifying all of its properties for each element, you create definitions, give them numbers, and then assign the number to each element.  The simplest example is material properties. You define a set of material properties, give it a number, then assign that number to all the elements in your model that you want to solve with those properties.

But you do not specify the IDs when you create the elements; that would be a pain. Instead, you make the ID for each property type “active,” and every element you create will be assigned the active IDs.

The commands are self-explanatory: TYPE sets the element type, MAT sets the material ID, REAL sets the real constant number, and SECNUM sets the active section number.

So, if  you do the following:

type,4
real,2
mat,34
secnum,112
e,1,2,3,4,11,12,13,14

you get:

     ELEM MAT TYP REL ESY SEC        NODES
      1  34   4   2   0 112      1     2     3     4    11    12    13    14
      2   3   4   4   0 200    101   102   103   104   111   112   113   114

4. ET

The MAPDL solver supports hundreds of elements.   ANSYS Mechanical picks what is generally the best element for whatever simulation you want to do, but that may not be the best for your model. In such cases, you can redefine the element definition that ANSYS Mechanical used.

Note: The new element must have the same topology. You can’t change a 4-noded shell into an 8-noded hex.  But if the node ordering is the same (the topology), then you can make that change using the ET command.
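For example, if element type 1 is already a 4-noded shell, a sketch like this swaps in SHELL181 with full integration; the type number and the KEYOPT choice are assumptions for illustration:

/prep7
et,1,181,,,2    ! redefine type 1 as SHELL181 with KEYOPT(3)=2 (full integration)
fini
/solu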

5. EMODIF

If you define a real constant, element type, or material ID in APDL and you want to change a bunch of elements to those new IDs, you use EMODIF.  This is the fastest way to change an element’s definition.
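A hedged example, assuming a material ID 2 has already been defined:

esel,s,mat,,1       ! select all elements currently using material 1
emodif,all,mat,2    ! reassign them to material 2
allsel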

6. MP – MPDATA – MPTEMP –TB – TBDATA – TBTEMP

Probably the most commonly needed APDL commands for ANSYS Mechanical users are the basic material property commands. Linear properties are defined with the MP command for a polynomial vs. temperature fit, or with MPDATA and MPTEMP for a piece-wise linear temperature response.  Nonlinear material properties are defined with the TB, TBDATA, and TBTEMP commands.

It is always a good idea to stick your material definitions in a text file so you 1) have a record of what you used, and 2) can reuse the material model on other simulation jobs.
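Here is a sketch of a typical definition block you might keep in such a text file; the material ID and all of the values are placeholders, and the units assume an mm-tonne-second system:

mp,ex,2,2.0e5       ! Young's modulus for material 2
mp,prxy,2,0.3       ! Poisson's ratio
mp,dens,2,7.85e-9   ! density
tb,biso,2           ! bilinear isotropic hardening for material 2
tbdata,1,250,1000   ! yield stress, tangent modulus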

7. R – RMODIF

If you define an element’s formulation with options on the ET command, and the material properties with the material commands, where do you specify other stuff like shell thickness, contact parameters, or hourglass stiffness?  You put them in real constants.  If you are new to the MAPDL solver, the idea of real constants is a bit hard to get used to.

The official explanation is:

Data required for the calculation of the element matrices and load vectors, but which cannot be determined by other means, are input as real constants. Typical real constants include hourglass stiffness, contact parameters, stranded coil parameters, and plane thicknesses.

It really is a place to put stuff that has no other place.  R creates a real constant, and RMODIF can be used to change them.
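A small sketch; the set number and values are made up:

r,3,2.5            ! real constant set 3, e.g. a shell thickness of 2.5
rmodif,3,1,3.0     ! change the first constant of set 3 to 3.0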

8. NSEL – ESEL

As mentioned, selection logic is a huge part of how MAPDL works.  You never want to work on each individual object you want to view, change, or load. Instead, you want to place entities of a given type into an “active” group and then operate on all “active” entities. (You can group them and give them names as well; see CM – CMSEL below to learn about components.)

When accessing MAPDL from ANSYS Mechanical you are most often working with either nodes or elements.  NSEL and ESEL are used to manage what nodes and elements are active. These commands have a lot of options, so review the help.
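Two hedged examples of the basic patterns; the locations and type number are arbitrary:

nsel,s,loc,z,10      ! select nodes at Z = 10 in the active coordinate system
nsel,r,loc,x,0,5     ! reselect only those with X between 0 and 5
esel,s,type,,2       ! separately, select all elements of element type 2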

9. NSLE – ESLN

You often select nodes and then need the elements attached to those nodes. Or you select elements and you need the nodes on those elements.  NSLE and ESLN do that.  NSLE selects all of the nodes on the currently active elements and ESLN does the opposite.
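For example, assuming a material ID 3 exists:

esel,s,mat,,3    ! elements that use material 3
nsle             ! plus all the nodes on those elements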

10. ALLSEL

A very common mistake for people writing little scripts in APDL for ANSYS Mechanical is to use selection logic to select the things they want to operate on, and then forget to reselect all the nodes and elements.  Say you issue an NSEL to get the nodes on the top of your part so you can apply a load to them. If you just stop there, the solver will generate errors because those will be the only active nodes in the model.

ALLSEL fixes this. It simply makes everything active. It is a good idea to just stick it at the end of your scripts if you do any selecting.

11. CM – CMSEL

If you use ANSYS Mechanical you should be very familiar with the concept of Named Selections. These are groups of entities (nodes, elements, surfaces, edges, vertices) that you have put into a group so you can scope based on them rather than selecting each time. In ANSYS MAPDL these are called components, and the commands that work with them start with CM.

Any named selection you create for geometry in ANSYS Mechanical gets turned into a nodal component – all of the nodes that touch the geometry in the Named Selection get thrown into the component. You can also create your own node or element Named Selections and those also get created as components in MAPDL. 

You can use CM to create your own components in your APDL scripts.  You give it a name and operate away.  You can also select components with the CMSEL command.
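A minimal sketch with a made-up component name:

nsel,s,loc,y,0           ! grab some nodes
cm,bottom_nodes,node     ! store them as a nodal component
allsel
cmsel,s,bottom_nodes     ! later, reselect just that component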

12. *GET

This is the single most awesomely useful command in APDL.  It is a way to interrogate your model to find out all sorts of useful information: number of nodes, largest Z value for node position, if a node is selected, loads on a node, result information, etc… 

Check out the help on the command. If you ever find yourself writing a script and going “if I only knew blah de blah blah about my model…” then you probably need to use *get.
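A few hedged examples of the pattern (the parameter names and the node number are made up):

*get,ncount,node,0,count    ! number of currently selected nodes
*get,zmax,node,0,mxloc,z    ! largest Z coordinate among selected nodes
*get,sx42,node,42,s,x       ! X stress at node 42 (in /POST1)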

13. CSYS – LOCAL – RSYS

Coordinate systems are very important in ANSYS Mechanical and ANSYS MAPDL.  In most cases you should create any coordinate systems you need in ANSYS Mechanical. They will be available to you in ANSYS MAPDL, but ANSYS Mechanical normally assigns their IDs for you. To use a coordinate system in MAPDL, specify its number in the details for the given coordinate system by changing “Coordinate System” from “Program Defined” to “Manual” and then entering a number for “Coordinate System ID.”

image

If you need to make a coordinate system in your APDL script, use the LOCAL command. 

When you want to use a coordinate system, use CSYS to make a given coordinate system active.

Note: Coordinate system 0 is the global Cartesian system. If you change the active coordinate system make sure you set it back to the global system with CSYS,0

RSYS is like CSYS but for results. If you want to plot or list result information in a coordinate system other than the global Cartesian, use RSYS to make the coordinate system you want active.
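A sketch tying these together; the system ID, type, and location are placeholders:

local,11,1,25,0,0    ! define CS 11: cylindrical, origin at X = 25
csys,11              ! make it active; LOC,X now means radius
nsel,s,loc,x,10      ! nodes at radius 10
csys,0               ! back to the global Cartesian system
rsys,11              ! report results in CS 11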

14. NROTATE

One thing to be very aware of is that each node in a model has a rotation associated with it. By default, the UX, UY, and UZ degrees of freedom are oriented with the global Cartesian coordinate system. In ANSYS Mechanical, when you specify a load or a boundary condition as normal or tangent to a surface, the program actually rotates all of those nodes so a degree of freedom is normal to that surface.

If you need to do that yourself because you want to apply a load or boundary condition in a certain direction besides the global Cartesian, use NROTATE.  You basically select the nodes you want to rotate, specify the active coordinate system with CSYS, then issue NROTATE,ALL to rotate them.

Be careful though. You don’t want to screw with any rotations that ANSYS Mechanical specified.
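Here is a sketch of the pattern, reusing the hypothetical cylindrical system 11 from the previous section:

csys,11          ! make the target coordinate system active
nsel,s,loc,x,10  ! nodes at radius 10
nrotat,all       ! rotate their DOFs so UX is radial
d,all,ux,0       ! constrain the radial direction
allsel
csys,0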

15. D

The most common boundary condition is a prescribed displacement, and the same command covers other degrees of freedom such as temperature.  To specify these in an ANSYS MAPDL script, use the D command.  Most people use nodal selection or components to apply displacements to multiple nodes.

In its simplest form you apply a single value for displacement to one node in one degree of freedom.  But you can specify multiple nodes, multiple degrees of freedom, and more powerfully, the value for deflection can be a table. Learn about tables here.
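A minimal sketch; the location, directions, and values are assumed:

nsel,s,loc,x,0   ! nodes on the X = 0 face
d,all,ux,0       ! fix UX
d,all,uy,0.5     ! prescribe a 0.5 displacement in UY
allsel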

16. F

The F command is the same as the D command, except it defines forces instead of displacements.  Know it, use it.
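For example (the node number, direction, and value are made up):

f,451,fy,-100    ! 100 force units downward at node 451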

17. SF – SFE

If you need to apply a pressure load, you use either SF to apply it to nodes or SFE to apply it to elements. It works a lot like the D and F commands.
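A hedged node-based example; the location and value are placeholders:

nsel,s,loc,z,20    ! nodes on the loaded face
sf,all,pres,1.5    ! apply 1.5 pressure units
allsel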

18. /OUTPUT

When the ANSYS MAPDL solver is solving away, it writes bits and pieces of information to a file called jobname.out, where jobname is the name of your solver job.  Sometimes you may want to write specific information, say the stresses for all the currently selected nodes, to a file. Use /OUTPUT,filename to redirect output to a file. When you are done, specify /OUTPUT with no options and output will go back to the standard output.
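For example, to dump nodal stresses to a file (the file name is made up):

/output,my_stress,txt   ! redirect text output to my_stress.txt
prnsol,s,comp           ! list component stresses for selected nodes
/output                 ! back to standard output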

19. /SHOW

ANSYS MAPDL has some very sophisticated plotting capabilities.  There are a ton of commands and options used to set up and create a plot, but the most important is /SHOW,png.  This tells ANSYS MAPDL that all plots from now on will be written in PNG format to a file. Read all about how to use this command, and how to control your plots, here.

image
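A minimal plotting sketch:

/show,png       ! subsequent plots go to PNG files (named like jobname000.png)
plnsol,s,eqv    ! write a von Mises stress contour plot
/show,close     ! close the PNG file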

20. ETABLE

The ANSYS MAPDL solver solves for a lot of values. The more complex the element you are using, the greater the number of values you can store.  But how do you get access to the more obscure ones? ETABLE.  Issue 38 of The Focus from 2005 goes into some of the things you can do with ETABLE.
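As a sketch, here is how you might pull element volumes and strain energy in /POST1; the column names are made up:

etable,evol,volu     ! element volumes into a column named evol
etable,esen,sene     ! element strain energy into a column named esen
pretab,evol,esen     ! list both columns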

Where to go From Here

This is certainly not the definitive list.  Ask 20 ANSYS MAPDL users what APDL commands all ANSYS Mechanical users should know, and you might get five or six in common. But based on the support calls we get and the scripts we write, these 20 are the most common ones that we use.

Command help is your friend here.  Use it a lot.

The other thing you should do is open up ANSYS MAPDL interactively and play with these commands. See what happens when you execute them.

Video Tips: Section Planes in ANSYS 14.5

A quick video showing a new way to create section planes by using coordinate systems.

Submodeling in ANSYS Mechanical: Easy, Efficient, and Accurate

Back “in the day” when we rode horses into work as Finite Element Analysis Engineers, we had somewhat limited compute capacity.  70,000 elements was a hard and fast limit.  But we still needed accurate results with local refinement in areas of concern.  The way we accomplished that was with a  process called submodeling where you make a refined local model just of the area you care about, and a coarse mesh that modeled the whole part but still fit on the computer.  The displacement field from the coarse model was then applied as a boundary condition on the refined model.

We called the refined model a zoom model or a submodel.  It worked very well for many years. Then computers got bigger and we just started meshing the heck out of those areas of interest in the full part model.  And in many cases that is still the best solution for an accurate localized stress: localized refinement.

Submodeling is one of those “tricks” in stress analysis that used to be used all the time. But until recently it was a bit of a pain to do in ANSYS Mechanical so it fell out of use.  Now, the process of doing submodeling is easy, efficient, and accurate.  The purpose of this posting is to introduce the concept to newer users who have not used it before, and show experienced (old) users how much easier it is to do in ANSYS Mechanical vs. Mechanical APDL.

What is Submodeling?

The best description of submodeling is the illustration that has been in the ANSYS help system, in one form or another, for over 25 years:

image

The basic idea is that you have a coarse model of your whole part or assembly.  You ignore small features in the mesh that don’t have an impact on the overall response of the system – the local stiffness does not have influence on the strain beyond that local region. You then make a very refined model, the submodel, of the region of interest. You use the displacement field (and temperature if you have a temperature gradient) from the coarse model and apply it to the submodel as a boundary condition to get the accurate highly-refined response in the area of interest.

The process is based on St. Venant’s principle: “… the difference between the effects of two different but statically equivalent loads becomes very small at sufficiently large distances from load.”

An aside:
What a cool name this guy had:
Adhémar Jean Claude Barré de Saint-Venant.  To top it off he was not just a mathematician, but he was given the title of Count as well… a count mathematician. And, I have to say, I have serious beard envy.  He had some very nice facial hair, I can’t even grow thick stubble.

Anyhow, what he showed was that if you are looking at the stresses in a part far away from where loads are applied, how those loads are applied does not matter. So we can replace the forces/pressures/etc… from our coarse model with an equivalent static deflection load and the stress field will be the same.

The way this is done in a finite element model is to determine which faces in your submodel are “inside” your coarse model. These are called the cut boundary faces, and the nodes on those faces are the cut boundary nodes. You apply the displacement field from the coarse model onto those nodes.
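For old times’ sake, here is roughly what the classic MAPDL cut-boundary sequence looked like, as a sketch; the file names and the component name are made up:

! In the submodel: write the cut-boundary nodes
cmsel,s,cut_nodes        ! hypothetical component of cut-boundary nodes
nwrite,subcut,node       ! write them to subcut.node
allsel
! With the coarse model results, in /POST1:
/post1
file,coarse,rst          ! point at the coarse model result file
cbdof,subcut,node,,subcut,cbdo   ! interpolate cut-boundary DOFs to subcut.cbdo
fini
! Back in the submodel, in /SOLU:
/input,subcut,cbdo       ! read the generated D commands
solve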

The most common use is to add mesh refinement in an area without having to solve the whole model. Another common usage is to actually mesh small features like fillets, holes, and grooves that were left out of or under-meshed in the full model.  It can also be used to capture material non-linearities if that behavior is highly localized.

But probably the most beneficial use today is to study the parametric variation of small features like the size of a fillet or a hole.  If changing the size of such features does not change the overall response of the system, then you only need to do a parametric study on the submodel – as the guy with the great beard proved, if the static load does not change with your geometric variations, you don’t have to look at the whole structure.

And don’t forget the new crack growth capabilities. You will probably want to do that on a submodel around your crack and not on your whole geometry.

Here is a more modern version of the original example geometry:

image

The red highlight shows the cut boundaries. This is where you need to apply the displacement field.

image

This is the nasty coarse mesh. Now if you were modeling a single part, you would just mesh the fillets and be done with it.  But assume this is in a large assembly.

image

The Submodel. Nice elements in the key area.

You can even set up the radius as a parameter and do a study, where only the Submodel is modified and updated.

image

 

The Process

The process is fairly simple:

  1. Make and solve your full model
  2. Make a geometry model of the area you want a submodel in
  3. Attach the submodel to the engineering data and solution of the full model
  4. Set up and solve the submodel

Before we get started, here is an ANSYS 14.5 archived project for both models we will discuss in this posting:  PADT-Focus-Submodeling-2013_08_14.wbpz

For the sample geometry we showed above, the system looks like this:

image

When you go into ANSYS Mechanical for the sample model, you have a new model branch:

image

When you first get in there, the branch is empty, you have to insert Body Temperature and/or Displacement:

image

The Details for the Displacement object are as follows:

image

There are a lot of options here. It is basically using the external load mapper to map the displacements. Consult the help and just play around with the options to understand them better. In most cases, all you need to do is specify the faces that you want the displacement field applied to for the Scope section.

A cool feature is that once you have specified the faces, you can “Import Load” and then view them by clicking on the object. Graphics Control –>Data = All shows vectors. Total/X/Y/Z shows the applied displacement field as a contour:

image

image

Now you just need to make sure your Submodel is set up correctly, you have the mesh you want, and any other loads that are applied directly to the Submodel are the same as the loads in the full model (see next section).  Run and you get your refined results.

Here is that same process with a more realistic model of a beam with a tube welded on it.  The welds are not modeled in the full model and the fillets in the beam are very coarse.

So here is the geometry. Imagine that these two parts are actually part of a very large assembly so we really can’t refine them the way we want.

image

This is what the systems look like. Note that the geometry comes from one source. I made the submodel in the same solid model in DesignModeler and just suppress the parts I don’t want in each Mechanical model.

image

The loading is simple. I fix one end and put a force on the top of the tube.

image

And here is my coarse mesh. I could probably mesh the tube with a lot more elements, especially along the axis.

image

The results: not too useful from a stress standpoint. Deflections are good, but the fillet is missing and the beam mesh is too coarse.

image

So here is the submodel.  All the fillets are in there and it is just the area around the connection.

image

I used advanced meshing to get a really nice refined mesh. It only solves in about 20 seconds so I can really refine it.

image

Here are the cut boundaries. The bottom of the beam ribs are also selected.

image

And here is the result. A really accurate look at the stresses in the fillet.  I could even put a probe in there and do some nice fatigue or crack growth.

image

The other thing that showed up was some stress problems on the bottom of the beam.  Those could be an issue under a high load. The fillet stress on top may yield out, but these stresses under the beam could be a fatigue problem.

image

Tips and Hints

In most cases, doing a submodel is pretty simple. But there is a lot more to it than what we covered here.  Because I need to get back to some very pressing HR tasks, I’ll just list the finer points so you are aware of them:

  1. Label your systems in the project page with some sort of “full” and “sub” terminology. Things get really confusing fast if you don’t.
  2. You can do submodeling with a transient or multiple-substep model. In your Imported Displacement/Body Temperature, specify which load step to grab the loads from.
  3. Don’t forget temperature. One of the most common problems is a user who applies temperature, and therefore gets thermal stress, in the full model, then forgets to apply that temperature to the submodel; everything after that is wrong.
  4. Make sure you don’t change material properties. Remember, these models are statically identical; you are just looking at a chunk with greater refinement.
  5. Remember that loads need to be away from the area you are zooming in on.  Don’t cut where a load is applied, or even near where one is applied. The exception is temperature. (Sometimes you can get away with pressure loads too, but you have to be very careful to get the same load over the area.)
  6. You can’t have geometry in the submodel sticking too far out of the coarse mesh. The displacement is interpolated onto the fine mesh, and if a node on the fine mesh is outside the coarse mesh, the program extrapolates, which can sometimes induce errors. If you see spotty or high stresses on your cut boundaries, that is why.  There are tools in the Submodeling details to help diagnose and fix that.
  7. If you are going to do a parametric study on geometry changes in the submodel, use a separate geometry file to create that model (I just duplicate the original and suppress the full geometry in DM).  Why? Because if you change a parameter in your geometry model, both models will need to re-solve since they both use the same geometry file, even if the geometry change occurs on a part that is suppressed in the full model.
  8. You can do submodels of submodels, as many levels down as you want.
  9. You can have multiple submodels in one system.
  10. Read the help; it is fairly detailed.

That is about all for now. As always: crawl, walk, run.  Start with a very simple submodel with obvious cut boundaries and build up experience.

ANSYS Updates in New Mexico

Los-Alamos-Balcony-1

Clinton, Bob, Patrick, and Eric are on a trip to New Mexico to do ANSYS updates in Albuquerque and Los Alamos. The groups have been great: lots of deep questions and further insight into how everyone can get greater value out of their ANSYS Mechanical, FLUENT, CFX, and Maxwell usage.

The Los Alamos session is being held at the Holiday Inn Express as you drive into town.  The view out the meeting room window is fantastic.  Kind of hard to pay attention to the PowerPoint slide on “New compound observables for the Adjoint Solver.”  The pictures do not do the sky justice.

Los-Alamos-Balcony-panorama

Columbia: PADT’s Killer Kilo-Core CUBE Cluster is Online

In the back of PADT’s product development lab is a closet.  Yesterday afternoon PADT’s tireless IT team crammed themselves into the back of that closet and powered up our new cluster, bringing 1104 connected cores online.  It sounded like a jet taking off when we submitted a test FLUENT solve across all the cores.  Music to our ears.

We have recently become slammed with benchmarks for ANSYS and CUBE customers as well as our normal load of services work, so we decided it was time to pull the trigger and double the size of our cluster while adding a storage node.  And of course, we needed it yesterday.  So the IT team rolled up their sleeves, configured a design, ordered hardware, built it up, tested it all, and got it on line, in less than two weeks.  This was while they did their normal IT work and dealt with a steady stream of CUBE sales inquiries.  But it was a labor of love. We have all dreamed about breaking that thousand core barrier on one system, and this was our chance to make it happen.

If you need more horsepower and are looking for a solution that hits that sweet spot between cost and performance, visit our CUBE page at www.cube-hvpc.com and learn more about our workstations, servers, and clusters.  Our team (after they get a little rest) will be more than happy to work with you to configure the right system for your real world needs.

Now that the sales plug is done, let’s take a look at the stats on this bad boy:

Name: Columbia (after the class of battlestars in Battlestar Galactica)
Brand: CUBE High Value Performance Compute Cluster, by PADT
Nodes: 18 (17 compute, 1 storage/control; 4 CPUs per node)
Cores: 1104 (AMD Opteron: 4 x 6308 @ 3.5 GHz, 32 x 6278 @ 2.4 GHz, 36 x 6380 @ 2.5 GHz)
Interconnect: 18-port MELLANOX 4X QDR Infiniband switch
Memory: 4.864 Terabytes
Solve Disk: 43.5 TB, RAID 0
Storage Disk: 64 TB, RAID 50

Here are some pictures of the build and the final product:

a
A huge delivery from our supplier, Supermicro, started the process. This was the first pallet.

b
The build included installing the largest power strip any of us had ever seen.

c
Building a cluster consists of doing the same thing, over and over and over again.

f
We took over PADT’s clean room because it turns out you need a lot of space to build something this big.

g
It is fun to get the chance to build the machine you always wanted to build

h
2AM Selfie: Still going strong!

d
Almost there. After blowing a breaker, we needed to wait for some more
power to be routed to the closet.

e
Up and running!
Ratchet and Clank providing cooling air containment.

David, Sam, and Manny deserve a big shout-out for doing such a great job getting this thing up and running so fast!

When I logged on to my first computer, a TRS-80, in my high-school computer lab, I never, ever thought I would be running on a machine this powerful.  And I would have told people they were crazy if they said a machine with this much throughput would cost less than $300,000.  It is a good time to be a simulation user!

Now I just need to find a bigger closet for when we double the size again…

CUBE-HVPC-Logo-wide

Utilizing a Thermal Contact Conductance Table in ANSYS Mechanical

We recently had a tech support request from a customer, asking for the ability to define a spatially varying thermal contact conductance (TCC) on a contact region in ANSYS Mechanical. We came up with a solution for ANSYS 14.5 via an example which includes a couple of verification plots.

The test model consists of two solids, connected via a contact region. The thermal contact conductance at the contact region was defined as a table, with the rows and columns of the table corresponding to local coordinates within the plane of the contact surface. The table was defined and implemented using Mechanical APDL commands in the Mechanical tree.

image

Low values of TCC were used for testing purposes. This helped verify that the tabular values were actually being used as intended. A constant temperature was applied to the face at one end of the model, while a different constant temperature was applied to the face at the extreme other end of the model. This temperature differential caused heat to flow through the contact region, subject to the resistance defined via TCC values.
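
For reference, the conductance enters the solution as a simple flux relation across the gap, which is why a low TCC produces a large temperature jump at the interface:

\[ q = \mathrm{TCC} \left( T_{\mathrm{contact}} - T_{\mathrm{target}} \right) \]

where q is the heat flux per unit area through the contact pair.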

The coordinates in the plane of the contact surface were Y and Z. Thus, the table of TCC values varied in the Y and Z directions, as shown here:

           Z = 0.0    Z = 1.0
Y = 0.0    0.0001     0.0005
Y = 1.0    0.0005     0.0002

Three ANSYS Mechanical APDL command objects were inserted into the tree in the Mechanical editor. The first command object simply added a scalar parameter to keep track of the contact element type/real constant set number for use later:

image
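
That snippet can literally be a one-liner. Assuming the command object lives under the contact region, where Mechanical pre-defines the parameter cid as the contact element type number (which also matches the real constant set number), something along these lines is all it takes (tcc_id is just my name for it):

tcc_id = cid        ! save the contact type/real constant set ID for later command objects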

The second command object was placed in the analysis type branch, meaning this set of commands would be executed just prior to the Solve command. This command object does three things:

1. Defines the TCC table vs. Y and Z coordinates.

2. Reads the table in as an MAPDL real constant for the contact elements identified in the first command object.

3. Issues the command, “rstsuppress,none”. More on this later.

This is how the second command object was defined:

image
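
In case the screen shot is hard to read, here is a sketch of what that command object contains. The table name tcc_tab is my own; TCC is the 14th real constant for the CONTA17x contact elements:

! 1. Define the TCC table vs. Y and Z coordinates
*dim,tcc_tab,table,2,2,1,Y,Z
tcc_tab(1,0) = 0.0            ! row (Y) index values
tcc_tab(2,0) = 1.0
tcc_tab(0,1) = 0.0            ! column (Z) index values
tcc_tab(0,2) = 1.0
tcc_tab(1,1) = 0.0001         ! TCC at (Y=0.0, Z=0.0)
tcc_tab(2,1) = 0.0005         ! TCC at (Y=1.0, Z=0.0)
tcc_tab(1,2) = 0.0005         ! TCC at (Y=0.0, Z=1.0)
tcc_tab(2,2) = 0.0002         ! TCC at (Y=1.0, Z=1.0)

! 2. Point real constant 14 (TCC) of the contact elements at the table
rmodif,tcc_id,14,%tcc_tab%

! 3. Keep contact results in the thermal results file
rstsuppress,none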

That third step was the key to getting this technique to work in 14.5. The rstsuppress command is not currently documented, but Al Hanq of ANSYS, Inc. has told me that it will be in the future. By default, contact results are suppressed from being written to the results file in a thermal analysis. The idea is to keep results file sizes from getting excessively large, especially for transient thermal runs. In this case we actually wanted the thermal contact results in the results file, so we issued “rstsuppress,none” to keep them from being suppressed.

The final command object was for verification of the applied TCC values. This set of commands generates two plots using MAPDL postprocessing commands. The first plot is of heat flux going through the contact elements. The second plot displays the TCC values for node ‘i’ of each contact element (averaged).

Here is the third command object:

image

Both of these plots show up in the tree, labeled as Post Output and Post Output 2 in the image above.

This is the resulting thermal flux at the contact surface:

image

Here is the applied thermal contact conductance, as mapped from the table defined in the second command object:

image

In summary, we took advantage of Mechanical APDL command objects to apply thermal contact conductance values that vary along the contact region. We also used MAPDL commands to create two plots that help verify that the TCC values were applied as intended. Hopefully this is a helpful example.

Corrupt ANSYS Mechanical Database? You Might Be Able to Recover

Most of the time ANSYS Mechanical does a great job of keeping track of all our input and output files needed for a particular simulation. Every once in a while though, a glitch can happen which could lead to a corrupt database that gives you errors, say, if you try to reopen the ANSYS Mechanical editor. If you suspect that somehow your project database for a Mechanical model (or any other model that uses the same interface as ANSYS Mechanical) has been corrupted, you just might be able to recover it using these steps:

1. Copy any .mechdb files from the project directory to a different location. Rename them to a .mechdat extension. These will be named SYS.mechdb, SYS-1.mechdb, etc. The easiest way to find these files is to click on View > Files from the Workbench window, then scroll through the list until you find the .mechdb file or files. Then right click on each one and select “Open Containing Folder.” This will open Windows Explorer in the directory in which the file resides. You can then copy the files to a new location and rename them to .mechdat extensions.

image

image

2. Copy any .agdb (DesignModeler) files or other geometry files from the project directory to a different location. These will be named SYS.agdb , SYS-1.agdb, etc. (for DesignModeler) and can be found using View > Files as I described above. No need to rename these.

image

3. Start a new Workbench session.

4. Click File > Import. Set the type of file to import to “Importable Mechanical File”. Browse to the two .mechdat files created in step 1 (by renaming the copied .mechdb files) and import each.

image

5. If needed for geometry files, in the resulting Project Schematic in the Workbench window, right click on the first block’s geometry cell and select Replace Geometry > Browse. Browse to the copied SYS.agdb file or other geometry file from step 2. Repeat for any additional analysis blocks in similar fashion.

image

6. Then save the project with a new name and directory. 

This should allow you to recreate a Workbench project that allows you to continue working. We hope this suggestion is helpful if the need ever arises to use it.

image

(Artwork by Eric… Ted does much nicer smiley faces)

Linearized Stress – Using Nodal Locations for Path Results in Workbench Mechanical 14.5

Postprocessing results along a path has been part of the Workbench Mechanical capability for several revs now. We need to define a path as construction geometry on which to map the results, unless we happen to have an edge in the model exactly where we want the path to be, or can use an X axis intersection with our model. You have the option to ‘snap’ the path results to nodal locations, but what if you want to use nodal locations to define the path in the first place? We’ll see how to do this below.

For more information on “picking your nodes”, see the Focus blog entry written by Jeff Strain last year: http://www.padtinc.com/blog/the-focus/node-interaction-in-mechanical-part-1-picking-your-nodes

The top-level process for postprocessing results along a path is:

  • Define a Path as construction geometry
  • Insert a Linearized Stress result
  • Calculate the desired results along the path using the Linearized Stress item

The key here is to define the path using existing nodes. Why do that? Sometimes it’s easier to figure out where the path should start and stop using nodal locations rather than figure out the coordinates some other way. So, let’s see how we might do that.

  • First, turn on the mesh via the “Show Mesh” button so that it’s visible for the path creation

image

  • From the Model branch in Mechanical, insert Construction Geometry
  • From the new Construction Geometry branch, insert a Path

image

  • Note that the Path must be totally contained by the finite element model, unlike in MAPDL.
  • If you know the starting and ending points of the path, enter them in the Start and End fields in the Details view for the Path.
  • Otherwise, click on the “Hit Point Coordinate” button:

image

  • Pick the node location for the start point, click apply

image

  • Pick the node location for the end point, click apply

image

  • In the Solution branch, insert Linearized Stress (Normal Stress in this case); set the details:
  • Scoping method=Path
  • Select the Path just created
  • Set the Orientation and Coordinate System values as needed
  • Define Time value for results if needed

image

Results are displayed graphically along the path…

image

…as well as in an X-Y plot and a table

image

Besides normal stresses, membrane and bending results, among others, can be accessed using these techniques. So, the next time you need to list or plot results along a path, remember that it can be done in Mechanical, and that you can use nodal locations to define the starting and ending points of the path.
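
As a footnote for the MAPDL old-timers, the same extraction has long been available in /POST1. A minimal sketch, with the node numbers as placeholders:

/post1
set,last
path,cutline,2,30,20    ! 2-point path named cutline, 30 data sets, 20 divisions
ppath,1,1234            ! path point 1 taken from the location of node 1234
ppath,2,5678            ! path point 2 taken from the location of node 5678
prsect                  ! print linearized membrane/bending/peak stresses
plsect,s,x              ! plot linearized SX along the path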

Duh! Three ANSYS Mechanical Features I Should Know But Didn’t

Selection Information, Manage Views, and Changing Settings on Multiple Load Steps

There is no way to hide the embarrassing reality. I am supposed to be an expert. I am introduced to people as such. People all over the world read stuff I write about how to use ANSYS products more effectively.  But last week and this week, humility struck a devastating blow to my ego.  I found three very useful things in ANSYS Mechanical that I either didn’t know about or had forgotten. I even mentioned one of them (Manage Views) in an update presentation as a “cool and very important feature,” then promptly forgot it was there.

As payment for my sins, I will share a brief description of each with all of you, in the hopes that I will: 1) make you feel better about yourself because you already knew this stuff, or 2) give you the knowledge you need to avoid the embarrassment, and lost productivity, that my ignorance has brought me. 

Selection Information

I mention this one first because it was pointed out to me by no less than the ANSYS Mechanical product manager at ANSYS, Inc. Yikes.  I believe he actually did a face palm when I asked him “What is Selection Information? There is an Icon with an i on the toolbar? Really?”

image

There it is, right next to the Worksheet icon, an icon I use all the time.  What it does is give you information about the geometry (CAD) and nodes in your model.  There are three ways to get to it, not just the icon on the toolbar:

  1. Click the Icon
  2. In the menu go to View>Windows>Selection Information
  3. Double-click on the Selection details at the bottom of the ANSYS Mechanical Window

image

However you invoke it, you will get a new window, embedded with the existing windows, that shows you information about the geometry entity or entities that you select. Normal selection options apply: you can pick vertices, edges, surfaces, or bodies. I like to drag it out as its own window so I can see it all.  (Notice how I talk like I do this all the time… yeah, whatever.  I just figured out that it is a lot better if I drag it out and look at it by itself.)

My sample model is just a cylinder, so if I pick the end face and the cylindrical face I get:

image

See how it lists the two faces, plus a summary. There is some internal info in there as well, like the IDs that ANSYS Mechanical uses under the hood. The toolbar across the top lets you select a coordinate system to do the calculations in, set options (the green checkbox), or control whether you want individual info, summary info, or both.

The options are useful because by default, everything is on. Turning some stuff off can reduce the clutter.

image

For nodes, I can get location, node number, and body information:

image

When you are in the window there are some useful things you can do with the list. The first is sorting by clicking on the column headers.  What node is at your max X position in your cylindrical coordinate system?  Just set the Coordinate System and click on X(in) twice to sort from max to min:

image

If you select any of the cells, you can right mouse click and get a context menu that lets you reselect the entities being listed, export to a text or Excel file, Refresh, or copy to the clipboard:

image

Give it a shot next time you’re in a model and want to know some stuff.

Manage Views

One of the more useful capabilities in ANSYS Mechanical APDL is the ability to define views in a macro and call them back up again, getting the same standard views every time. Well, you have been able to do that in Workbench since they introduced the “Scary Eye” icon at, I think, 14.5 (maybe 14):

image

Although it looks like a secret Masonic symbol, the icon actually represents a handy tool for saving views not only in your model but to files.  It is also available in View->Windows->Manage Views.

Not only that, it lets you save the view commands to an external file that you can use with other models or even go in and edit to create a very specific view.

When you start it up, it brings up its own little window with eye-themed icons to control your view saving/recall experience.

image

  • “Spooky Eye Box with a Plus Sign” creates a view from the current view you are seeing
  • “X” deletes the currently selected view or views
  • “Guy with 80’s hair looking at a box” applies the currently selected view. Double-clicking on the view does the same thing.
  • “A-bar-B” is used to rename the selected view
  • “Spooky Eye Box with Green Blob” redefines the currently selected view with whatever the current view settings are in the graphics window. Think of it as an overwrite.
  • “Disk with arrow out” reads in a saved view file from disk.
  • “Disk with arrow in” saves the currently selected view to disk.

So, get your model positioned the way you want it using the mouse to control the view, then click the first icon to save it.  The program puts the window into “rename” mode so you can give the view a descriptive name right away. Just keep doing that till you have all your views defined.

If at some point you want to change a view, there is no need to delete and recreate it. Simply click on the view you want to redefine and then click on “Spooky Eye Box with Green Blob.”

Note: selecting more than one view at a time only works for deleting.  None of the other commands work on multiple views, although the save-to-disk command saves all the views regardless of how many you have selected.

Here are some views I created:

image

image

image

image

Now it gets cool.  Click on a view and then click on the “Save” (last) icon.  It will save the views as an XML file.  Pop that into your handy-dandy XML editor and you can check out the view definitions:

image

This is where I get excited. Now you can go into this file and create your own view, or modify a view to be very specific.  I haven’t had time to figure out what all the options do, but if you get a view that is close to what you want, you should be able to modify it from there.

The last thing to talk about is what happens when you right-click on a view.  You get:

image

Yes, Copy as MAPDL!  Not only is this useful for us old guys who just like to look at MAPDL, it lets you use the same view in any plots you make with a code snippet as you used for the plots in ANSYS Mechanical.  So your views are consistent across all your plots!

image
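
For reference, what lands on the clipboard is just a small stack of MAPDL graphics commands, something along these lines (the numbers here are illustrative, not from a real model):

/VIEW,1,0.58,0.35,0.73      ! view direction vector for window 1
/ANG,1,-32.5                ! view rotation angle
/DIST,1,0.08                ! zoom (distance from the model)
/FOCUS,1,1.25,0.40,-0.85    ! focus point, the center of the view
/REPLOT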

Modifying Multiple Load Steps

This was one of those “there has to be a way to do this” moments. We were talking about different ways to speed up the solution of a transient thermal model, and I suggested that instead of using automatic time step controls they put in some values. But for the life of me I couldn’t figure out how to change a bunch of load step settings at the same time, so I was changing them one at a time. For every step: change the step number, then change the value:

image

Yawn!  This started off a “well in ANSYS classic, I could write a script that would… blah… blah… blah…”
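
(For the record, that classic script really is only a few lines. A sketch, with made-up step numbers and time step values:

*do,ls,3,7,2              ! loop over load steps 3, 5, and 7
  lsread,ls               ! read load step file ls
  autots,off              ! turn off automatic time stepping
  deltim,0.01,0.001,0.1   ! starting, minimum, and maximum time step
  lswrite,ls              ! write the modified load step file back out
*enddo

But that does not help you inside ANSYS Mechanical.)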

There has got to be a better way.  There is.  In the Graph window the load steps are shown on the X-axis. Simply multi-select the steps you want to change there:

image

In the example above I CTRL-Clicked steps 3, 5, and 7. Now my Analysis Settings details view looks like:

image

See how Current Step Number and Step End Time are “Multi Step”.  Any change I make to settings will now be applied to the selected steps.  A huge time savings.  And a big “Duh, I should have known that!”

Why do my ANSYS jobs take days and weeks to finish? Well it depends…

Real World Lessons on How to Minimize Run Time for ANSYS HPC

Recently a VP of Engineering started a phone conversation with me that went something like this: “Well Dave, you see, this is how it is. We just spent a truckload of money on a 256 core cluster and our solve times are slower now than with our previous 128 core cluster. What the *&(( is going on here?!”

I imagine many of us have heard similar stories or received the same questions from our co-workers, CEOs, and directors. I immediately had my concerns, and I thought carefully about what I should say next. I recalled a conversation with one of my college professors, who told me that when I find myself stepping into gray areas, a good start to the conversation is to say, “Well, it depends…”

Guess what, that is exactly what I said. I said “Well, it depends…” and followed up by explaining two fundamental pillars of computer science that have plagued most of us since computers were created: you may be CPU bound (compute bound), or you may be I/O bound. He told me that they had paid a premium for the best CPUs on the market, along with some other details about the HPC cluster. Garnering a few more details about the cluster, my hunch was that it was actually I/O bound.

I/O Bound

Basically, being I/O bound means that your cluster’s $2,000 worth of CPUs are stalled out and sitting idle, waiting for new data to process and move on. I also briefly explained that his HPC cluster might instead be compute bound, though I quickly reassured him that this was maybe a 10% possibility, very unlikely. I knew the specifications of the CPUs in this HPC cluster, and the chance that they were the cause of his slow ANSYS run times was low on my radar; these literally were the latest and greatest CPUs ever to hit this planet (at that moment in time). So, let me step back a minute to refresh our memories on what it means when a system is compute bound.

Compute Bound

Being compute bound means that the HPC cluster’s CPUs are sitting at 99 or 100% utilization for long periods of time. When this happens, very bad things begin to happen to your HPC cluster: CPU requests to peripherals are delayed or lost to the ether, and the cluster may become unresponsive or even lock up.

All I could hear was silence on the other end. “Dave, I get it, I understand. Please find the problem and fix our HPC cluster for us.” I happily agreed to help out! I concluded our phone conversation by asking that he send me the specific details, down to the nuts and bolts of the hardware, along with the operating system and software installed and used on the 256 core HPC cluster.

What NOT to do when configuring an ANSYS Distributed HPC cluster.

Seeking that perfect balance!

After a quick NDA signing, a few dollars exchanged, and a sprinkle of some other legal things that lawyers get excited about, I set out to discover the cause. After reviewing the information provided to me, I almost immediately saw three concerns:

To interconnect what?

Let Merriam-Webster describe it:

Definition of INTERCONNECT

transitive verb
: to connect with one another
intransitive verb
: to be or become mutually connected

— in·ter·con·nec·tion noun
— in·ter·con·nec·tiv·i·ty noun

1. The systems are interconnected with a series of wires.
2. The lessons are designed to show students how the two subjects interconnect
3.  A series of interconnecting stories

First Known Use of INTERCONNECT: 1865

Concern numero uno!!! Interconnect me

The company’s 256 core HPC cluster did have a second, dedicated GigE interconnect. But Distributed ANSYS is highly bandwidth and latency bound, often requiring more bandwidth than a single dedicated NIC (Network Interface Card) can provide. Yes, the dedicated second GigE card was much better than trying to use a single NIC for all of the network traffic, including the CPU interconnect. I did have a few of the customer’s MAPDL output files to take a peek at, and after reviewing them it became fairly clear that the interconnect communication speed between the sixteen 16-core servers in the cluster was not adequate. The master Message Passing Interface (MPI) process that Distributed ANSYS uses requires high bandwidth and low latency for proper distributed scaling to the other processes. The data bandwidth between cores solving locally on one machine will always be higher than the bandwidth traveling across the various interconnect methods (see below). ANSYS, Inc. recommends Infiniband for CPU interconnect traffic, and here are a couple of reasons why: see how the theoretical data limits increase going from Gigabit Ethernet up to FDR Infiniband.

Theoretical lane bandwidth limits for:

  • Gigabit Ethernet (GigE): ~128 MB/s
  • Single Data Rate (SDR): ~328 MB/s
  • Double Data Rate (DDR): ~640 MB/s
  • Quad Data Rate (QDR): ~1,280 MB/s
  • Fourteen Data Rate (FDR): ~1,800 MB/s

GEEK CRED: A few years ago, companies such as MELLANOX started aggregating Infiniband lanes, with typical aggregation factors of 4X or even 12X. So, for example, the 4X QDR Infiniband switch and cards I use at PADT, and recommended to this customer, have (4 x 10 Gbit/s) or 5,120 MB/s of throughput! Here is a quick video that I made of a MELLANOX IS5023 18-port 4X QDR full bi-directional switch in action:
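
If you want to check that math, it is straight multiplication on the raw signaling rate (real-world application throughput will be somewhat lower once protocol overhead is accounted for):

\[ 4~\text{lanes} \times 10~\text{Gbit/s} = 40~\text{Gbit/s} = \frac{40 \times 1024~\text{Mbit/s}}{8~\text{bits/byte}} = 5{,}120~\text{MB/s} \]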

This is how you do it with a CUBE HVPC! Below is an MAPDL output file from our CUBE HVPC w16i-GPU workstation running the ANSYS industry benchmark V14sp-5. I wanted to show the communication speeds between the master MPI process and the other solver processes, to see just how fast the solvers can communicate. With a peak communication speed of 9593 MB/s, this CUBE HVPC workstation rocks!

Chassis Profile: 4U standard depth, rackmountable
CPU: 1 x dual-socket motherboard
Chipset: INTEL 602 chipset
Processors: 2 x INTEL e5-2690 @ 2.9 GHz
Cores: 2 x 8
Memory: 128 GB DDR3-1600 ECC Reg RAM
OS Drives: 2 x 2.5″ SATA III 256 GB SSD, RAID 0
DATA/HOME Drives: 4 x 3.5″ SAS2 600 GB 15k RPM, RAID 0
SAS RAID (onboard, optional): RAID 0 (OS RAID)
SAS RAID (RAID card, optional): LSI 2208 (DATA VOL RAID)
Networking (onboard): dual GigE (Intel i350)
Video: NVIDIA QUADRO K5000
GPU (optional): NVIDIA TESLA K20
Operating System: Windows 7 Professional 64-bit
Optional Installed Software: ANSYS 14.5 Release

image

Stats for CUBE HVPC Model Number: w16i-KGPU

Learn more about this and other CUBE HVPC systems here.

Concern #2: Using a RAID 5 Array for the Solving Disk Volume

The hard drives used for I/O during a solve, the solving volume, were configured in a RAID 5 array. The sample data below shows the read and write speeds of a similar RAID 5 array; these are speeds better suited to your long-term storage volume, not your solving/working directory.

Controller / Drives: LSI 2008 / HITACHI ULTRASTAR 15K600
Qty / Type / Size / RAID: 8 x 3.5″ SAS2 15k 600 GB, RAID 5
Test #: p1
Min Read: 204 MB/s
Max Read: 395 MB/s
Avg Read: N/A
Min Write: 106 MB/s
Max Write: 243.5 MB/s
Avg Write: N/A
Access Time: N/A

Concern #3: Using RAID 1 for the Operating System

The hard drive array for the OS was configured as RAID 1. For a number-crunching server, RAID 1 is not necessary. If you absolutely have to have RAID 1, please spend the extra money and go to a RAID 10 configuration.

I really don’t want to get into the seemingly infinite details of hard drive speeds and latency, or begin to explain whether you should be using an onboard RAID controller, a dedicated RAID controller, or a software RAID configuration within the OS. There is so much information available on the web that a person gets overloaded. When it comes to Distributed ANSYS, think fast hard drives and fast RAID controllers. Start researching your hard drives and RAID controllers using the list provided below. Again, only as a suggestion! I have listed the drives in order based on a very scientific and nerdy method: if I saw a pile of hard drives, which hard drive would I reach for first?

  1. I prefer SEAGATE SAVVIO or HITACHI enterprise class drives: (Serial Attached SCSI) SAS2 6 Gbit/s 3.5” 15,000 RPM spindle drives (best bang for your dollar of space, and more read & write heads than a 2.5” spindle hard drive).
  2. I prefer Micron or INTEL enterprise class SSDs: SATA III Solid State Drives at 6 Gbit/s (SSD sizes have increased, but you will need more of them for an effective solving array and they still are not cheap).
  3. I prefer the SEAGATE SAVVIO 2.5” enterprise class spindle drives: SAS2 6 Gbit/s 2.5” 15,000 RPM spindle drives, if you need a small form factor, speed, and additional storage in a situation where you have to slam 4 or 8 drives into a tight location. Note that the 2.5” drives do not have as many read & write heads as a 3.5” drive.
    Right now, SEAGATE SAVVIO 2.5” drives are the way to go!  Here is a link to a data sheet.
    Another similar option is the HITACHI ULTRASTAR 15k600.  Its spec sheet is here.
  4. SATA II 3 Gbit/s 3.5” 7,200 RPM spindle drives are also a good option.  I prefer Western Digital RE4 1 TB or 2 TB drives. Their spec sheet is here.

LSI 2108 RAID Controller and Hard Drive data/details:

image

How would a CUBE HVPC system from PADT, Inc. balance out this configuration, and how much would it cost?

I quoted the items below, installed and out the door (including my travel expenses, etc.), at $30,601.

The company ended up going with their own preferred hardware vendor. Understandable; one good thing is that we are now on their preferred purchasing supplier list. They were greatly appreciative of my consulting time and indicated that they will request a “must have” quote for a CUBE HVPC system at the next refresh, in a year. They want to go over 1,000 cores at that point.

I recommended that they install the following into the HPC cluster (note: they already had blazing fast hard drives):

  • 16 – Supermicro AOC-S2208L-H8iR LSI 2208 RAID controller cards.
  • 32 – Supermicro CBL-0294L-01 cabling to connect the LSI RAID cards to the SAS2 hard drives.
  • 1 – MELLANOX IS5023 18-port 4X QDR Infiniband switch
  • 16 – Supermicro AOC-UIBQ-M2 Dual port 4X QDR Infiniband card
  • 16 – Supermicro QSFP Infiniband cables in a couple different lengths

A special thanks and shout out to Sheldon Imaoka of ANSYS, Inc. for inspiring me to write this blog article!