Great Showing at Sandia Technology Showcase

PADT is attending this year's Sandia Technology Showcase for the first time, and it was a great turnout.

The purpose of the showcase is:

The 2nd Annual Sandia Research & Technology Showcase presents cutting edge research and technology development taking place at Sandia National Laboratories. The 2013 Showcase will focus on four themes: bioscience, computing & information science, energy & climate, and nanodevices & microsystems. The event will also provide information on doing business with Sandia National Laboratories through licensing, partnerships, procurement, and economic development programs.

We are very excited to see if any of these technologies fit PADT as new products, and we are ready and waiting to help others turn the innovation coming from the labs into viable commercial products.

#SRTSC

 

 

Questions Decision Makers Should Ask About Computer Simulations

‘TRUST BUT VERIFY’

A guest posting from Jack Thornton, MINDFEED Marcomm, Santa Fe, NM

The computerization of engineering (and everything else) has imposed new burdens on managers and executives who must make critical decisions. Where once they struggled with too little information, they now struggle with too much. Until roughly three decades ago, time and money were plowed into searching for more and better information. Today, time and money are plowed into making sense of myriad computer simulations.

For all but the best-organized decision makers, these opposite situations have proven equally frustrating. For nearly all of engineering history, critical decisions were based on a few pieces of seemingly credible data, a handful of measurements, and hand-drawn sketches a la Leonardo da Vinci—leavened with hands-on experience and large dollops of intuition.

Computer simulations are now everywhere in engineering. They have greatly sped up the search for information, the creation of it in the first place, and its endless multiplication. What has been lost are transparency and traceability—what was done when, by whom, and why. Since transparency and traceability are vital to making sound engineering decisions in today’s intensely collaborative technical environments, decision makers and managers say this loss is a big one.

This is not some arcane, hidden war waged by experts, geeks and professors. This is about designing machinery, components, physical systems and assemblies that are globally competitive—and turn a profit doing so. The complexity of modern components, assemblies and systems has been exhaustively and repeatedly described.

Nor is this something engineers and first-line managers can afford to ignore. Given the shortages of engineering talent, relatively inexperienced engineers are constantly being handed responsibility for making key decisions.

Users of computerized simulation systems continually seek ways to answer the inevitable question, “How do we know this or that or whatever to be true?” Several expert users of finite element analysis (FEA), the basic computational toolset of engineering simulation and analysis, were interviewed for this article. Each interviewee is a licensed professional engineer (PE) and each has been recommended by a leading FEA software vendor.

For decision makers, a simulation, FEA or otherwise, really presents only three options:

  • Signing off on the production of a component or assembly. If it proves to be flawed, warranty claims, recalls, and perhaps much worse may result.
  • Shelving a promising new product, perhaps at the behest of fretful engineers. The investment is written off or expensed as R&D. The marketplace opportunity (and its revenue) may be lost forever.
  • Remanding the project to the analysts even while knowing that “paralysis by analysis” will push development costs too high or cause too big a delay in getting to market.

Since executives and other upper-echelon corporate decision makers rarely possess much understanding of FEA, let alone have time to develop it, a “trust but verify” strategy is the only reasonable approach.

The verify part is easy. FEA modelers and solvers have been well wrung-out over the past 10 to 20 years. All of the FEA software vendors will share details of their in-house tests of their commercial code, the experiences of customers doing similar work, and investigations by reviewers who are often on engineering-school faculties. The same is true for industry-specific “home grown” code.

It’s the trust part that’s so challenging, since in FEA trust depends on understanding some very complicated matters.

Analysis experts note that unless the builders of FEA models are questioned, they rarely spell out the model’s underlying assumptions. Even less frequently (and clearly) described is the reasoning behind the dozens or hundreds of choices they made that are dictated by those assumptions.

And worse, these choices are not always clarified when model builders do provide this detail—quite the opposite, in fact. When pressed for explanations, model builders may simply present the mathematical formulas they use to characterize the physics of their work.

Analysis experts are quick to point out that these equations often confuse and intimidate. Decision makers should insist on commonsense explanations and not equations. And every FEA model builder will try earnestly to explain (often at great length) the model’s implications to anyone who takes the time to look.

In the context of FEA and other simulations, “physics” means the real-world forces to be withstood by a printed circuit board, a pump, an engine mount, a turbine, an aircraft wing or engine nacelle, the energy-absorbing structure of a car, or anything else that is mechanically complex and highly stressed.

This is why transparency and traceability are so important in FEA. Analysts note that some of this is codified in the guidelines for simulation and computational analysis in the ASME/ANSI verification and validation standards. Further support comes from company best practices developed by FEA users and managers (although enforcement is rarely consistent) and from voluntary industry standards, whose applicability varies widely.

The transparency and traceability challenge is that building a model—again, a subset of the real world—requires dozens of assumptions about the mechanical capabilities that the object or assembly must have to meet its requirements. After these basic assumptions have been coded into the model, hundreds of follow-on choices are needed to represent the physical phenomena in the model.

Analysts urge decision makers to question the stated values and ranges of any of the model’s parameters—and in particular values and ranges that have been estimated. Decision makers are routinely urged to probe whether these parameters’ values are statistically significant, and whether those values are even needed in the model.

A survey of experts turns up numerous aspects of FEA and other computerized simulations that decision makers should probe as part of a trust-but-verify approach. Among many examples:

  • Incoming geometry—usually from solid modeling systems used by product designers—and the topologies and boundaries the designers have chosen.
  • The numerical values representing physical properties such as yield strengths of the chosen materials.
  • Mechanical components and assemblies. How accurately represented are the bolts and welds that hold the assemblies together?
  • The stiffness of structures.
  • The number of load steps. Is the range broad enough? Are there enough intermediate steps so nothing will be missed? How true-to-life are the load vectors?
  • The accuracy of modal analyses. Resonating harmonic frequencies—vibration—can shake things apart and lead to catastrophic failures.
  • Boundary conditions, or where the object being modeled meets “the rest of the world” in the analysis. Are the specifics of the object’s physical and mechanical requirements—the geometry—accurately represented and, again, how do we know?
  • Types of analysis, which range from small, simple linear static to large, highly complex nonlinear dynamic. Should a smaller simpler analysis have been used? Could physical measurements suffice instead of analyses?
  • In fluid dynamics, how well characterized are the flows, volumes, and turbulence? How do we know? These representations are the numerical counterparts of the finite elements used in analyses of solids.
  • Post-processing the results, i.e., making the numerical outputs, the results of the analysis, comprehensible to non-experts.

Underlying all these are the geometric and analytical components that are found in all simulations. In FEA, this means the mesh of elements that embodies the physics of the component or assembly being modeled. Decision makers should always question the choice of elements as there are hundreds to pick from.

Some models use only a handful of elements while a few use tens of millions. Also to be questioned is the sensitivity of those elements to the forces, or loads, that push or pull on the model. A caveat: this gets deeply into the inner workings of FEA, e.g. explanations of the points or nodes where adjacent elements connect, the tallies of degrees of freedom (DOFs) represented by each pair of nodes, and the huge number of partial differential equations required.

The trust-but-verify approach is valuable in all of the engineering disciplines—mechanical, structural, electrical/electronic, nuclear, fluid dynamics, heat transfer, aerodynamics, noise/vibration/harshness, as well as for sensors, controls, systems, and any embedded software.

Developers of FEA and other simulation systems are working hard to simplify finding these answers or at least make trust-but-verify determinations less taxing. See Sidebar, “Software Vendors Tackle Transparency and Traceability in FEA.”

Proven approaches

A proven approach to understanding FEA models is offered by Eric Miller, co-owner of Phoenix Analysis & Design Technologies (PADT) in Tempe, Ariz. “A decision maker with some understanding of the management of the data in an FEA analysis will ask about how specific inputs affect the results. Such a decision maker will lead the model builder and analyst to think more deeply about those inputs. Ultimately a more accurate simulation will be created.”

Miller offers a caveat: “This questioning should be approached as an additional set of eyes looking at the problem from the outside to determine the accuracy of results. The key is to not become adversarial and question the integrity or knowledge of the analyst.”

Jeffrey Crompton, principal of AltaSim Technologies, Columbus, Ohio, goes straight to the heart of the matter: “Let’s start out with the truth – all models are wrong until proven otherwise. Despite all the best attempts of engineers, scientists and computer code developers,” he explained, “a computational model does not give the right answer until you can categorically demonstrate its agreement with reality.”

“Categorically” is a high standard, a term with almost no wiggle room. Unfortunately, given the complexity of simulations, agreement with reality is often not easy to demonstrate. Hence the probing and questioning recommended by FEA experts and engineers.

Secondly, despite tsunamis of data cascading from one engineering department to another, a great deal of the physical world still remains imprecisely quantified. Demonstrating agreement with reality “becomes increasingly difficult,” Crompton added, “when you may not know the value of some parameters, or lack real-world measurements to compare against, or are uncertain exactly how to set up the physics of the problem.”

The challenge for decision makers uncomfortable with the results of FEA analyses is neatly summed up by Gene Mannella, vice president and FEA expert at GB Tubulars Inc. in Houston. “Without a basic understanding of what FEA is, what it can and cannot do, and how to interpret its results, one can easily make bad and costly decisions,” he points out. “FEA results are at best indicators. They were never intended to be accepted” at face value.

As Mannella, Crompton and other FEA consultants regularly remind their clients, an analysis is an approximation. It is an abstraction, a forecast, a prediction. There will always be some margin of error, some irreducible risk. This is the unsettling truth behind the gibe that “all models are wrong but some are useful.” No FEA model or analysis can ever be treated as “gospel.” And this is why analysts strive ceaselessly to minimize margins of error, to make sure that every remaining risk is pointed out, and to clearly explain the ramifications.

“To be understood, FEA results must be supplemented by the professional judgment of qualified personnel,” Mannella added. His point is that decision makers relying on the results of FEA analyses should never forget that what they “see” on a computer monitor, no matter how visually impressive, is an abstraction of reality. Every analysis is a small subset of one small part of the real world, and it is constrained by deadlines, budgets, and the boundaries of human comprehension.

Mannella’s work differs from that of most other FEA shops: it is highly specialized. GB Tubulars makes connectors for drilling and producing oil and gas in extreme environments. Its products go into oil and gas projects several miles underground and also often beneath a mile or more of seawater. Pressures are extreme, bordering on the incalculable. The risk of a blowout with massive damage to equipment and the environment is ever-present.

The analysts also stressed probing the correlation of model results with physical experiments. Tests in properly equipped laboratories by qualified experimentalists are the single best way to ensure that the model actually does reflect physical reality. Which brings us to the FEA challenge of extrapolations.

Often the most relevant test data is not available because physical testing is slow and costly. The absence of relevant data makes it necessary to extrapolate among the results of similar experiments. Extrapolations can have large impacts on models, so they too should be questioned and understood.

To deal with these difficulties, Crompton and the other analysts recommend, first, managing the numbers with statistical process control (SPC) methods and, second, devising the best ways to set up the model and its analyses with design-of-experiments simulations. Both should be reviewed by decision makers—ideally with a qualified engineer looking over their shoulders.

“Our mantra in this situation is ‘start simple and gradually add complexity,’” Crompton said. “Consider starting with a [relatively simple] closed-form analytical solution. The equation’s results will help foster an understanding of how the physics and boundary conditions need to be implemented for your particular problem.” [A closed-form solution is an equation that can be evaluated directly, such as stress equals force divided by area, as opposed to a model; even the simplest simulation and analysis models have several variables.]

Peter Barrett, principal of CAE Associates in Middlebury, Conn., noted that “the most experienced analysts start with the simple models that can be compared to a closed-form solution, or models so simple that errors are minimized and can be safely ignored.” He commented that the two acronyms that best apply to FEA are KISS (“Keep It Simple, Stupid”) and “garbage in, garbage out,” or GIGO. In other words, probe for unneeded complexity and bad data.

Model builders are always advised by FEA experts to start by modeling the simplest example of the problem and then build upward and outward until the model reflects all the relevant physics. Decision makers should determine whether this sensible practice was followed.

When pressed for time, “some analysts will try to skip the simple-example problem and analysis,” Barrett said. “They may claim they don’t have time” for that fundamental step, i.e., the analyst thinks the problem is easily understood. Decision makers should insist that analysts take the extra time. “The analysis always benefits from starting as simply as possible,” he continued. “Decision makers will reap the rewards of more accurate analysis, which are a driver for projects being on time and under budget.”

Ken Perry, principal at Echobio LLC, Bainbridge Island, Wash., concurred. “The first general principle of modeling is KISS. Worried decision makers should verify that KISS was applied from the very beginning,” he said. “KISS is also an optimal tool to pick apart existing models that are inflated and overburdened with unnecessary complexity,” Perry added.

A favorite quote of Perry’s comes from mathematician Richard W. Hamming: “The purpose of computing is insight, not numbers.” Perry elaborated: “Decision makers should guard against the all-too-human tendency to default to the more complicated explanation when we don’t understand something. Instead, apply Occam’s razor. Chop the model down to bite-sized chunks for questioning.” [Occam’s razor is an axiom of logic that says in cases of uncertainty the best solution is the one requiring the fewest assumptions.]

Questioning is especially important, Perry added, “whenever the decision maker’s probing questions evoke hints of voodoo, magic or engineers shaking their heads in vague, fuzzy clouds of deference to increasingly specialized disciplines.” Each of these is a warning flag that the model or analysis has shortcomings.

Perry works in the tightly regulated field of implantable medical and cardiovascular devices. He has one such device himself, a heart valve, and has pictures to prove it on his Web site. Tellingly, Perry began his career not in FEA but as an experimentalist. He worked with interferometry to physically test advanced metal alloys.

Perry is living proof that FEA experts and experimentalists could understand one another if they tried. But often they don’t try, which is another challenge for decision makers.

The last and most cautionary words are from Barrett at CAE Associates. More than anyone else, he was concerned about the risks of inexperienced engineers making critical decisions. Such responsibility often comes with an unlooked-for promotion to a product manager’s job, for example. Unexpected increases in responsibility also can arrive with attrition, departmental shakeups, and corporate acquisitions and divestitures.

“In our introductory FEA training classes we often have engineers signed up who have no prior experience with FEA. They sign up for the intro class,” he said, “because they are expected to review results of analyses that have been outsourced and/or performed overseas.”

Barrett saw this as “very dangerous. These engineers often do not know what to look for. Without knowing how to check, they may assume that the calculations in the analysis were done correctly.  It is virtually impossible to look at a bunch of PowerPoint images of post-processed analysis results and see if the modeling was done correctly. Yet this is often the case.”

Presentation: Realizing Your Invention in Plastic, 3D Printing to Production


PADT was honored to be invited to present to the Inventors Association of Arizona on September 4th, 2013. This well-attended event focused on giving an overview of plastic parts, their design, and their manufacture, including a quick look at additive manufacturing.

Here is a link to a PDF of the presentation:
IAA-Realizing-Invention-Plastic-2013_09_04-1

Also, during the presentation some animations showing the various additive manufacturing (3D Printing) processes didn’t play. You can find them in a previous blog post.

 

Job Opening at PADT: Part Time Human Resources Professional

PADT is looking for an experienced Human Resources professional who is seeking 10 to 20 hours of work per week with flexible hours. We are almost 20 years old and around 75 employees strong, with a very low attrition rate, a strong company culture, and a very casual approach to HR. It is time to take HR management duties away from one of the owners and hand them over to a professional.

We do not want to outsource this to another company; we want someone who will become part of our culture, part of our family. We are also not using placement companies for this position.

Applicants should have the following skills and experience:

  • 5 or more years in HR for a high technology company where the majority of the employees were engineers
  • Understands why Dilbert is funny
  • Can differentiate between and choose HR activities that are high value added versus those that are done as cover or as the latest trend
  • Experience managing/conducting:
    • The employee review process
    • Health care/dental/401k/529/insurance plan sign-up and maintenance for employees
    • HR training
    • New employee setup and processing
    • Out-processing of departing employees
    • Advertising job openings, setting up interviews, and negotiating offers
    • Terminating employees
    • Gathering and maintaining employee information
    • Maintaining an employee manual
    • Assisting the legal team in obtaining H-1B visas and permanent resident status
    • Serving as the primary point of contact for employees with benefit and policy questions
    • Serving as the primary contact for the benefit broker
    • Reviewing compliance with federal and state labor laws and carrying out required tasks or recommending changes
    • Writing (with the help of management) job descriptions and keeping them up to date
    • Building and strengthening a well-defined company culture through training, activities, and events

The responsibilities for this job are basically the experience items requested above. Hours are flexible and many of the tasks can be conducted from home. The person in this position needs to be available by email or phone to answer questions or deal with issues during normal work hours, but may only need to actually work an average of 10 or so hours a week, peaking at 20 a week. Administrative staff will be available to help with tasks and this position will work closely with our existing accounting and legal staff.

If you are interested, visit http://www.padtinc.com/about/careers.html and follow the directions for submitting a resume.

Construction Started on PADT’s new High Speed Fiber Connection

 


We were very excited to find a construction crew outside of PADT’s Tempe building this morning. After months of negotiations and permitting, construction has begun on laying fiber optic cable to the PADT Innovation Center.

That is one big Interweb Pipe!  Can’t wait for the bandwidth.

20 APDL Commands Every ANSYS Mechanical User Should Know

One of the most powerful things about ANSYS Mechanical is the fact that it creates an input file that is sent to ANSYS Mechanical APDL (MAPDL) to solve. This is awesome because you, as a user, have complete and full access to the huge breadth and depth of the MAPDL program. MAPDL is a good old-fashioned command-driven program that takes in text commands one line at a time and executes them. So to access all those features, you just need to enter the commands you want.

For many older users this is not a problem because we grew up using the text commands. But new users did not get the chance to be exposed to the power of APDL (ANSYS Parametric Design Language) so getting access to those advanced capabilities can be tough. 

In fact, I was in a room next to one of our support engineers while they were showing a customer how to change the element types the solver uses (Mechanical defaults to the most common formulation, but you can change them to whatever still makes sense), and the user had to admit he had never really used or even seen APDL commands before.

So, as a way to get ANSYS Mechanical users out there started down the road of loving APDL commands, we got together and came up with a list of 20 APDL commands that every user should know.  Well, actually, it is more than 20 because we grouped some of them together.  We are not going to give too much detail on their usage, the APDL help is fantastic and it explains everything.  In fact, if you use a copy of PeDAL you can get the help right there live as you type (yes, that was a plug for you to go out and plop down $49 and buy PeDAL).

Also note that we are not getting into how to script with APDL. It is a truly parametric command language in that you can replace most values in commands with parameters. It also has control logic, functions, and other capabilities that you find in most scripting languages. We will focus here on actual commands you use to do things in the program. If you want to learn more about how to program with APDL, you can purchase a copy of our “Introduction to the ANSYS Parametric Design Language” book. (Another plug.)

Some APDL Basics

APDL was developed back in the day of punch cards. It was much easier to use than the other programs out there because the commands you entered didn’t have to be formatted in columns. Instead, arguments for commands are separated by commas. Therefore, instead of defining a node in your model as:

345   12.456    17.4567   0.0034 

(note that the column position of each value is critical), you create the same node with:

N,345,12.456,17.4567,0.0034

Trust me, that was a big deal. But what you need to know now is that all APDL commands start with a keyword and are followed by arguments. The arguments are explained in the Command Reference in the help.  So the entry for creating a node looks like this:

[Image: the N command entry from the Command Reference]

The documentation is very consistent and you will quickly get the hang of how to get what you need out of it.  The layout is explained in the help:  // Command Reference // 3. Command Dictionary

Another key thing to know about commands in MAPDL is that most entities you create (not loads and boundary conditions) have an ID number, and you refer to entities by their ID number. This is a key concept that gets lost if you grew up using GUIs. So if you want to make a coordinate system and use it, you define an ID for it and then refer to that ID. The same goes for element definitions (element types), material properties, etc. Remember this; it hangs up a lot of newer users.

To use MAPDL commands you simply enter each command on a line in a command object that you place in your model tree. We did a seminar on this very subject about two years ago that you can watch here.

The idea of entity selection is fundamental to APDL. Above we point out that all entities have an ID. You can interact with each entity by specifying its ID, but when you have a lot of them, like nodes and elements, that would be a pain. So APDL deals with this by letting you select entities of a given type, making them “selected” or “unselected.” Then when you execute commands, instead of specifying an ID, you can specify “ALL” and all of the selected entities are used for that command. Sometimes we refer to entities as being selected, and sometimes we refer to them as “active.” The basic concept is that any entity in ANSYS Mechanical APDL can be in one of two states: active/selected or inactive/unselected. Inactive/unselected entities are not used by whatever command you might be executing.

If you want to see all of the APDL commands that ANSYS Mechanical writes out, simply select the setup branch of your model tree and choose Tools->Write Input File. You can view it in a text editor or, even better, in PeDAL.


One last important note before we go through our list of commands: models can be modified or created in the old MAPDL GUI as well as in ANSYS Mechanical. Every action you take in the old GUI is converted into a command and stored in the jobname.log file. Many users will carry out the actions they want in an interactive session, then save the commands they need from the log file.

Wait, one more thing: right now you need these commands, but at every release more and more of the solver is exposed in the ANSYS Mechanical GUI, and we end up using fewer and fewer APDL scripts. So before you write a script, make sure that ANSYS Mechanical can’t already do what you want.

The Commands

1. !

An exclamation point is a comment in APDL. Any characters to the right of one are ignored by the program. Use them often and add great comments to help you and others remember what the heck you were trying to do.

2. /PREP7 – /SOLU – /POST1 – FINISH

The MAPDL program consists of a collection of 10 processors (there were more, but they are now undocumented). Commands only work in some processors, and most work in only one. If you enter a preprocessor command when you are in the postprocessor, you will get an error.

When you create a command object in your ANSYS Mechanical model, it will be executed in either the preprocessor, the solution processor, or the postprocessor, depending on where in the model tree you insert the command object. If you need to go into another processor you can; simply issue the proper command to change processors. JUST REMEMBER TO GO BACK TO THE PROCESSOR YOU STARTED IN when you are done with your commands.

/PREP7 – goes to the preprocessor. Use this to change elements, create things, or modify your mesh in any way.

/SOLU – goes to the solution processor. Most of the time you will start there, so you will most often use this command if you went into /PREP7 and need to get back. Modify loads, boundary conditions, and solver settings in this processor.

/POST1 – goes to the postprocessor. This is where you can play with your results, make your own plots, and do some very sophisticated post-processing.

FINISH – goes to the begin level. You will need to go there if you are going to play with file names.
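
As a minimal sketch of what that looks like inside a command object (the element type number and the change itself are made up for illustration):

/prep7              ! leave the solution processor for the preprocessor
et,2,solid187       ! example: redefine element type 2 while we are here
/solu               ! return to the processor we started in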

3. TYPE – MAT – REAL – SECNUM

You only really need to know these commands if you will be making your own elements… but this is one of those things everyone should know, because the assignment of element attributes is fundamental to the way APDL works. So read on even if you don’t need to make your own elements.

Every element in your model is assigned properties that define the element. When you define an element, instead of specifying all of its properties for each element, you create definitions, give them numbers, and then assign the appropriate number to each element. The simplest example is material properties. You define a set of material properties, give it a number, then assign that number to all the elements in your model that you want to solve with those properties.

But you do not specify the IDs when you create the elements; that would be a pain. Instead, you make the ID for each property type “active,” and every element you create will be assigned the active IDs.

The commands are self-explanatory: TYPE sets the element type ID, MAT sets the material ID, REAL sets the real constant set number, and SECNUM sets the active section number.

So, if you do the following:

type,4
real,2
mat,34
secnum,112
e,1,2,3,4,11,12,13,14

you get:

     ELEM MAT TYP REL ESY SEC        NODES
      1  34   4   2   0 112      1     2     3     4    11    12    13    14
      2   3   4   4   0 200    101   102   103   104   111   112   113   114

4. ET

The MAPDL solver supports hundreds of elements. ANSYS Mechanical picks the best element for whatever simulation you want to do in a general sense, but that may not be the best for your model. In such cases, you can redefine the element definition that ANSYS Mechanical used.

Note: The new element must have the same topology. You can’t change a 4-noded shell into an 8-noded hex. But if the node ordering (the topology) is the same, then you can make that change using the ET command.
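
For example, a one-line sketch (the type number and element name are made up; check your input file to see which type number your body actually uses):

et,2,solid186       ! redefine element type 2 as SOLID186, keeping the same topology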

5. EMODIF

If you define a real constant, element type, or material ID in APDL and you want to change a bunch of elements to those new IDs, use EMODIF. This is the fastest way to change an element’s definition.
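
A quick sketch of how that might look (the IDs are made up):

esel,s,type,,2        ! select all elements that use element type 2
emodif,all,mat,34     ! point them at material ID 34 instead
allsel                ! reselect everything when done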

6. MP – MPDATA – MPTEMP –TB – TBDATA – TBTEMP

Probably the most commonly needed APDL commands for ANSYS Mechanical users are the basic material property commands. Linear properties are defined with the MP command for a polynomial vs. temperature, or with MPDATA and MPTEMP for a piece-wise linear temperature response. Nonlinear material properties are defined with the TB, TBDATA, and TBTEMP commands.

It is always a good idea to stick your material definitions in a text file so you 1) have a record of what you used, and 2) can reuse the material model on other simulation jobs.
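
Here is a minimal sketch of a material definition (material ID 2 and all of the values are made up):

mp,ex,2,30e6          ! material 2: elastic modulus
mp,prxy,2,0.3         ! material 2: Poisson's ratio
tb,biso,2             ! bilinear isotropic hardening for material 2
tbdata,1,36e3,100e3   ! yield stress, then tangent modulus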

7. R – RMODIF

If you define an element’s formulation with options on the ET command, and the material properties with the material commands, where do you specify other stuff like shell thickness, contact parameters, or hourglass stiffness? You put them in real constants. If you are new to the MAPDL solver, the idea of real constants is a bit hard to get used to.

The official explanation is:

Data required for the calculation of the element matrices and load vectors, but which cannot be determined by other means, are input as real constants. Typical real constants include hourglass stiffness, contact parameters, stranded coil parameters, and plane thicknesses.

It really is a place to put stuff that has no other place.  R creates a real constant, and RMODIF can be used to change them.
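
A small sketch (the set number and values are made up):

r,3,0.125           ! real constant set 3: for example, a shell thickness
rmodif,3,1,0.25     ! change the first value in set 3 to 0.25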

8. NSEL – ESEL

As mentioned, selection logic is a huge part of how MAPDL works. You never want to work on each individual object you want to view, change, or load. Instead, you want to place entities of a given type into an “active” group and then operate on all “active” entities. (You can group them and give them names as well; see CM – CMSEL below to learn about components.)

When accessing MAPDL from ANSYS Mechanical you are most often working with either nodes or elements.  NSEL and ESEL are used to manage what nodes and elements are active. These commands have a lot of options, so review the help.
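
For example (the locations and material ID are made up):

nsel,s,loc,z,0         ! select the nodes sitting at z = 0
nsel,r,loc,x,0,1.5     ! keep only those that also have x between 0 and 1.5
esel,s,mat,,34         ! separately, select the elements that use material 34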

9. NSLE – ESLN

You often select nodes and then need the elements attached to those nodes. Or you select elements and you need the nodes on those elements.  NSLE and ESLN do that.  NSLE selects all of the nodes on the currently active elements and ESLN does the opposite.
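
A two-line sketch:

esel,s,mat,,34      ! select elements by some criterion (material 34 here)
nsle                ! then grab all of the nodes on those elements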

10. ALLSEL

A very common mistake for people writing little scripts in APDL for ANSYS Mechanical is to use selection logic to select the things they want to operate on, and then forget to reselect all the nodes and elements. If you issue an NSEL to get, say, the nodes on the top of your part so you can apply a load to them, and you just stop there, the solver will generate errors because those will be the only active nodes in the model.

ALLSEL fixes this. It simply makes everything active. It is a good idea to just stick it at the end of your scripts if you do any selecting.
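
The pattern looks like this (the location and load values are made up):

nsel,s,loc,y,10     ! select just the nodes on the top face
f,all,fy,-250       ! apply a force to each selected node
allsel              ! make everything active again before solving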

11. CM – CMSEL

If you use ANSYS Mechanical you should be very familiar with the concept of Named Selections. These are groups of entities (nodes, elements, surfaces, edges, vertices) that you have put into a group so you can scope based on them rather than selecting each time. In ANSYS MAPDL these are called components and commands that work with them start with CM.

Any named selection you create for geometry in ANSYS Mechanical gets turned into a nodal component – all of the nodes that touch the geometry in the Named Selection get thrown into the component. You can also create your own node or element Named Selections and those also get created as components in MAPDL. 

You can use CM to create your own components in your APDL scripts.  You give it a name and operate away.  You can also select components with the CMSEL command.
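
A sketch of both commands (the component name is made up):

nsel,s,loc,x,0            ! select some nodes
cm,my_nodes,node          ! store the current node selection as a component
allsel                    ! ...go off and do other things...
cmsel,s,my_nodes          ! later, make just that component active again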

12. *GET

This is the single most awesomely useful command in APDL. It is a way to interrogate your model to find out all sorts of useful information: number of nodes, largest Z value for node position, whether a node is selected, loads on a node, result information, etc.

Check out the help on the command. If you ever find yourself writing a script and thinking “if I only knew blah de blah blah about my model…” then you probably need to use *GET.
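
Two examples of the kind of thing it can grab (the parameter names are made up):

*get,ncount,node,0,count      ! ncount = how many nodes are currently selected
*get,zmax,node,0,mxloc,z      ! zmax = largest Z location among the selected nodes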

13. CSYS – LOCAL – RSYS

Coordinate systems are very important in ANSYS Mechanical and ANSYS MAPDL. In most cases you should create any coordinate systems you need in ANSYS Mechanical. They will be available to you in ANSYS MAPDL, but by default ANSYS Mechanical assigns them an ID. To use a coordinate system in MAPDL, specify its number in the details for the given coordinate system by changing “Coordinate System” from “Program Defined” to “Manual” and then entering a number for “Coordinate System ID.”


If you need to make a coordinate system in your APDL script, use the LOCAL command. 

When you want to use a coordinate system, use CSYS to make a given coordinate system active.

Note: Coordinate system 0 is the global Cartesian system. If you change the active coordinate system, make sure you set it back to the global system with CSYS,0.

RSYS is like CSYS but for results. If you want to plot or list result information in a coordinate system other than the global Cartesian, use RSYS to make the coordinate system you want active.
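
Putting them together (system number 11 is arbitrary; user-defined systems must be numbered 11 or higher):

local,11,1,0,0,0    ! define cylindrical system 11 at the global origin
csys,11             ! make it the active coordinate system
csys,0              ! back to global Cartesian when done creating things
rsys,11             ! review results in system 11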

14. NROTATE

One thing to be very aware of is that each node in a model has a rotation associated with it. By default, the UX, UY, and UZ degrees of freedom are oriented with the global Cartesian coordinate system. In ANSYS Mechanical, when you specify a load or a boundary condition as normal or tangent to a surface, the program actually rotates all of those nodes so a degree of freedom is normal to that surface.

If you need to do that yourself because you want to apply a load or boundary condition in a certain direction besides the global Cartesian, use NROTATE.  You basically select the nodes you want to rotate, specify the active coordinate system with CSYS, then issue NROTATE,ALL to rotate them.
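
In sketch form (the coordinate system and node selection are made up; the documented command name is NROTAT, and the longer spelling is typically accepted as well):

csys,11             ! make the target coordinate system active
nsel,s,loc,x,5      ! select the nodes whose DOFs you want rotated
nrotat,all          ! rotate their nodal systems into the active system
allsel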

Be careful though. You don’t want to screw with any rotations that ANSYS Mechanical specified.

15. D

The most common boundary condition is a specified displacement; the same command even handles temperature constraints. To specify these in an ANSYS MAPDL script, use the D command. Most people use nodal selection or components to apply displacements to multiple nodes.

In its simplest form you apply a single value for displacement to one node in one degree of freedom.  But you can specify multiple nodes, multiple degrees of freedom, and more powerfully, the value for deflection can be a table. Learn about tables here.
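
A minimal sketch:

nsel,s,loc,x,0      ! select the nodes to constrain
d,all,ux,0.0        ! fix UX on every selected node
d,all,uy,0.0        ! and UY as well
allsel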

16. F

The F command is the same as the D command, except it defines forces instead of displacements. Know it, use it.
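
For example (the node number and value are made up):

f,45,fz,-100.0      ! apply a force of -100 in the Z direction at node 45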

17. SF – SFE

If you need to apply a pressure load, you use either SF to apply it to nodes or SFE to apply it to elements. It works a lot like the D and F commands.
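
A sketch (the location and pressure value are made up):

nsel,s,loc,y,2.0    ! select the nodes on the loaded face
sf,all,pres,150.0   ! apply a pressure of 150 to them
allsel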

18. /OUTPUT

When the ANSYS MAPDL solver is solving away, it writes bits and pieces of information to a file called jobname.out, where jobname is the name of your solver job. Sometimes you may want to write specific information, say a listing of the stresses for all the currently selected nodes, to a file. Use /OUTPUT,filename to redirect output to a file. When you are done, issue /OUTPUT with no arguments and output goes back to the standard output.
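
Like so (the file name is made up):

/output,mystress,txt    ! redirect printed output to mystress.txt
prnsol,s,comp           ! list component stresses for the selected nodes
/output                 ! back to the standard output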

19. /SHOW

ANSYS MAPDL has some very sophisticated plotting capabilities. There are a ton of commands and options used to set up and create a plot, but the most important is /SHOW,png. This tells ANSYS MAPDL that all plots from now on will be written in PNG format to a file. Read all about how to use this command, and how to control your plots, here.
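
The basic pattern looks like this:

/show,png           ! route all subsequent plots to PNG files
eplot               ! an element plot, written to the first PNG file
plnsol,s,eqv        ! a von Mises stress contour goes to the next one
/show,close         ! close PNG output when finished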


20. ETABLE

The ANSYS MAPDL solver solves for a lot of values. The more complex the element you are using, the more values there are to store. But how do you get access to the more obscure ones? ETABLE. Issue 38 of The Focus from 2005 goes into some of the things you can do with ETABLE.
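
A taste of what it looks like (the table labels are made up):

etable,evol,volu    ! store each element's volume in a table named evol
etable,seqv,s,eqv   ! store each element's von Mises stress as seqv
pretab,evol,seqv    ! list both columns side by side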

Where to go From Here

This is certainly not the definitive list. Ask 20 ANSYS MAPDL users what APDL commands all ANSYS Mechanical users should know, and you might get five or six in common. But based on the support calls we get and the scripts we write, these 20 are the ones we use most.

Command help is your friend here.  Use it a lot.

The other thing you should do is open up ANSYS MAPDL interactively and play with these commands. See what happens when you execute them.

PADT’s Tempe Open House and AZ Tech Council Progress Forum – 2 Weeks Away

Two Events for the Price of Free!

Just a quick reminder, because the Facebook posts, emails, and calls from our sales people may not be getting through.

Sept 10 starting at 5, going till 8 or whenever people get tired of networking and taking tours.

Register with PADT, Inc.:

Or with the Arizona Technology Council

If you live anywhere near the Phoenix area, we expect to see you there.

Submodeling in ANSYS Mechanical: Easy, Efficient, and Accurate

Back “in the day,” when we rode horses into work as Finite Element Analysis engineers, we had somewhat limited compute capacity. 70,000 elements was a hard and fast limit. But we still needed accurate results with local refinement in areas of concern. The way we accomplished that was with a process called submodeling, where you make a refined local model of just the area you care about and a coarse mesh that models the whole part but still fits on the computer. The displacement field from the coarse model was then applied as a boundary condition on the refined model.

We called the refined model a zoom model or a submodel.  It worked very well for many years. Then computers got bigger and we just started meshing the heck out of those areas of interest in the full part model.  And in many cases that is still the best solution for an accurate localized stress: localized refinement.

Submodeling is one of those “tricks” in stress analysis that used to be used all the time. But until recently it was a bit of a pain to do in ANSYS Mechanical so it fell out of use.  Now, the process of doing submodeling is easy, efficient, and accurate.  The purpose of this posting is to introduce the concept to newer users who have not used it before, and show experienced (old) users how much easier it is to do in ANSYS Mechanical vs. Mechanical APDL.

What is Submodeling?

The best description of submodeling is the illustration that has been in the ANSYS help system, in one form or another, for over 25 years:

[Illustration: the classic submodeling figure from the ANSYS help system]

The basic idea is that you have a coarse model of your whole part or assembly.  You ignore small features in the mesh that don’t have an impact on the overall response of the system – the local stiffness does not have influence on the strain beyond that local region. You then make a very refined model, the submodel, of the region of interest. You use the displacement field (and temperature if you have a temperature gradient) from the coarse model and apply it to the submodel as a boundary condition to get the accurate highly-refined response in the area of interest.

The process is based on St. Venant’s principle: “… the difference between the effects of two different but statically equivalent loads becomes very small at sufficiently large distances from load.”

An aside:
What a cool name this guy had:
Adhémar Jean Claude Barré de Saint-Venant.  To top it off he was not just a mathematician, but he was given the title of Count as well… a count mathematician. And, I have to say, I have serious beard envy.  He had some very nice facial hair, I can’t even grow thick stubble.

Anyhow, what he showed was that if you are looking at the stresses in a part far away from where loads are applied, how those loads are applied does not matter. So we can replace the forces/pressures/etc. from our coarse model with an equivalent static deflection load and the stress field will be the same.

The way this is done in a finite element model is that you determine which faces in your submodel are “inside” your coarse model. These are called the cut boundary faces, and the nodes on those faces are the cut boundary nodes. You then apply the displacement field from the coarse model onto those nodes.
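
For the curious, here is roughly what that classic workflow looked like in Mechanical APDL, built around the CBDOF command. This is just a sketch; the file names, and the selection that defines the cut boundary, are made up:

! in the submodel database: write out the cut boundary nodes
nsel,s,loc,x,2.0                 ! select the cut boundary nodes (your selection will differ)
nwrite,subcut,node               ! write them to subcut.node
! in the coarse model, with its results loaded in /POST1:
/post1
set,last
cbdof,subcut,node,,subcut,cbdo   ! interpolate displacements, writing D commands to subcut.cbdo
! back in the submodel: apply the interpolated displacements and solve
/solu
/input,subcut,cbdo
solve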

The most common use is to add mesh refinement in an area without having to solve the whole model. Another common use is to actually mesh small features like fillets, holes, and grooves that were left out of, or under-meshed in, the full model. It can also be used to capture material nonlinearities when that behavior is highly localized.

But probably the most beneficial use today is to study the parametric variation of small features like the size of a fillet or a hole.  If changing the size of such features does not change the overall response of the system, then you only need to do a parametric study on the submodel – as the guy with the great beard proved, if the static load does not change with your geometric variations, you don’t have to look at the whole structure.

And don’t forget the new crack growth capabilities. You will probably want to do that on a submodel around your crack and not on your whole geometry.

Here is a more modern version of the original example geometry:


The red highlight shows the cut boundaries. This is where you need to apply the displacement field.


This is the nasty coarse mesh. Now if you were modeling a single part, you would just mesh the fillets and be done with it.  But assume this is in a large assembly.


The Submodel. Nice elements in the key area.

You can even set up the radius as a parameter and do a study, where only the Submodel is modified and updated.


 

The Process

The process is fairly simple:

  1. Make and solve your full model
  2. Make a geometry model of the area you want a submodel in
  3. Attach the submodel to the engineering data and solution of the full model
  4. Set up and solve the submodel

Before we get started, here is an ANSYS 14.5 archived project for both models we will discuss in this posting: PADT-Focus-Submodeling-2013_08_14.wbpz

For the sample geometry we showed above, the system looks like this:

[Image: the project schematic for the sample model]

When you go into ANSYS Mechanical for the sample model, you have a new model branch:

[Image: the new branch in the model tree]

When you first get in there, the branch is empty, you have to insert Body Temperature and/or Displacement:

[Image: inserting a Body Temperature or Displacement object]

The Details for the Displacement object are as follows:

[Image: the Details for the Displacement object]

There are a lot of options here. It is basically using the external load mapper to map the displacements. Consult the help and just play around with the options to understand them better. In most cases, all you need to do is specify the faces that you want the displacement field applied to for the Scope section.

A cool feature is that once you have specified the faces, you can “Import Load” and then view the result by clicking on the object. Graphics Control -> Data = All shows vectors; Total/X/Y/Z shows the applied displacement field as a contour:

[Images: the imported displacement field shown as vectors and as a contour]

Now you just need to make sure your submodel is set up correctly: you have the mesh you want, and any other loads applied directly to the submodel are the same as the loads in the full model (see next section). Run, and you get your refined results.

Here is that same process with a more realistic model of a beam with a tube welded on it.  The welds are not modeled in the full model and the fillets in the beam are very coarse.

So here is the geometry. Imagine that these two parts are actually part of a very large assembly so we really can’t refine them the way we want.


This is what the systems look like. Note that the geometry comes from one source. I made the submodel in the same solid model in DesignModeler and just suppress the parts I don’t want in each Mechanical model.


The loading is simple. I fix one end and put a force on the top of the tube.


And here is my coarse mesh. I could probably mesh the tube with a lot more elements, especially along the axis.


The results are not too useful from a stress standpoint. Deflections are good, but the fillet is missing and the beam is too coarse.


So here is the submodel.  All the fillets are in there and it is just the area around the connection.


I used advanced meshing to get a really nice refined mesh. It only solves in about 20 seconds so I can really refine it.


Here are the cut boundaries. The bottom of the beam ribs are also selected.


And here is the result. A really accurate look at the stresses in the fillet.  I could even put a probe in there and do some nice fatigue or crack growth.


The other thing that showed up was some stress problems on the bottom of the beam. Those could be an issue under a high load. The fillet stress on top may yield out, but these stresses under the beam could be a fatigue problem.


Tips and Hints

In most cases, doing a submodel is pretty simple, but there is a lot more to it than what we covered here. Because I need to get back to some very pressing HR tasks, I’ll just list the other considerations here so you are aware of them:

  1. Label your systems in the project page with some sort of “full” and “sub” terminology. Things get really confusing fast if you don’t.
  2. You can do submodeling with a transient or multiple substep model. In your Imported Displacement/Body Temperature, specify what load step to grab the loads from.
  3. Don’t forget temperature. One of the most common problems is when a user applies temperature and therefore gets thermal stress.  They then forget to apply that to their submodel and everything is wrong.
  4. Make sure you don’t change material properties. Remember, these models are statically identical, you are just looking at a chunk with greater refinement.
  5. Remember that loads need to be away from the area you are zooming in on.  Don’t cut where a load is applied, or even near where one is applied. The exception is temperature. (Sometimes you can get away with pressure loads too, but you have to be very careful to get the same load over the area)
  6. You can’t have geometry in the submodel sticking too far out of the coarse mesh. The displacement is interpolated onto the fine mesh, and if a node on the fine mesh is outside the coarse mesh, the program extrapolates, which can sometimes induce errors. If you see spotty or high stresses on your cut boundaries, that is why. There are tools in the Submodeling details to help diagnose and fix that.
  7. If you are going to do a parametric study on geometry changes in the submodel, use a separate geometry file to create that model (I just duplicate the original and suppress the full geometry in DM).  Why? Because if you change a parameter in your geometry model, both models will need to resolve since they both use the same geometry file, even if the geometry change occurs on a part that is suppressed in the full model.
  8. You can do submodels of submodels as many levels down as you want.
  9. You can have multiple submodels in one system
  10. Read the help, it is fairly detailed

That is about all for now. As always: crawl, walk, run.  Start with a very simple sub model with obvious cut boundaries and get experienced.

PADT’s Albuquerque Open House a Big Success

PADT was pleased to hold our first Open House at our New Mexico office this Tuesday (8/13/2013). We had a great crowd show up to see what we are up to in Albuquerque and around the state, learn about the latest in 3D Printing, and even sneak in some ANSYS technical support.

Missed it?  Don’t worry, we have an Open House in Tempe in September and in Colorado in October.

The thing we learned quickly is that our customers here are smart, friendly, and knowledgeable.  Even though many had never met before, it didn’t take long for small clusters to form where people shared their background, the issues they faced, and solutions that worked for them.  Seeing that type of highly technical interaction between people who had just met was great.  Here are just a few pictures from the event:


The new Objet30 Pro was the big hit. So small, but so capable. Many of the attendees are existing FDM users, so they enjoyed learning about the different advantages of Polyjet 3D Printing.


Lots of great conversations took place in the entry way.


With an expert like Jeff Strain in town for the day, a couple of customers got in some one-on-one technical support for ANSYS products.  This showed that we definitely need to set up some standard office hours in Albuquerque for the user community.


We just could not resist playing with the new cleaning station for the Polyjet parts.  Just like Homer Simpson.  Note our special clock for the New Mexico Office, made on our Stratasys FDM machines.

 

Polyjet 3D Printers Up and Running in Denver and Albuquerque Offices


With all the opening and moving of offices, we failed to notice that our crack sales team sold all of our demonstration 3D Printing and rapid manufacturing machines out from underneath us. This made it easier to move, but hard on customers who wanted to see these systems in action. So we took the opportunity to not only replace the FDM systems in our offices, but also to add Objet30 Pro desktop printers in our New Mexico and Colorado offices. In the past we only had Polyjet systems in our Tempe facility.

If you are not familiar with the advantages of Polyjet 3D Printing compared to FDM or other technologies, contact us to arrange a visit to our Littleton, Albuquerque, or Tempe offices to see these machines in action and to look at sample parts we have made on them.

 

 

 

PADT Sponsoring 2013 Desert Vista Thunder Speech, Theater & Debate Team


PADT is pleased to be one of the sponsors of Desert Vista’s 2013 Thunder Speech, Theater & Debate Team. They just kicked off their new season and were invited to a Diamondbacks game as special guests. They showed off the new sponsorship poster, and PADT was pleased to be on the board.

The TSTDC team is a 10-time state champion, focused on offering “intense training in acting, debate, research and rhetoric, public speaking, as well as interpersonal communication and teamwork skills. Our students also gain valuable skills needed for high school, college and the professional world.”

We hope to help them make it to their 11th championship this year!

 

ANSYS Updates in New Mexico

Clinton, Bob, Patrick, and Eric are on a trip to New Mexico to do ANSYS updates in Albuquerque and Los Alamos. The groups have been great: lots of deep questions and further insight into how everyone can get greater value out of their ANSYS Mechanical, FLUENT, CFX, and Maxwell usage.

The Los Alamos session is being held at the Holiday Inn Express as you drive into town. The view from the meeting room window is fantastic. Kind of hard to pay attention to the PowerPoint slide on “New compound observables for the Adjoint Solver.” The pictures do not do the sky justice.

[Photo: Los Alamos balcony panorama]

Columbia: PADT’s Killer Kilo-Core CUBE Cluster is Online

In the back of PADT’s product development lab is a closet. Yesterday afternoon PADT’s tireless IT team crammed themselves into the back of that closet and powered up our new cluster, bringing 1104 connected cores online. It sounded like a jet taking off when we submitted a test FLUENT solve across all the cores. Music to our ears.

We have recently become slammed with benchmarks for ANSYS and CUBE customers as well as our normal load of services work, so we decided it was time to pull the trigger and double the size of our cluster while adding a storage node. And of course, we needed it yesterday. So the IT team rolled up their sleeves, configured a design, ordered hardware, built it up, tested it all, and got it online, in less than two weeks. This was while they did their normal IT work and dealt with a steady stream of CUBE sales inquiries. But it was a labor of love. We have all dreamed about breaking that thousand-core barrier on one system, and this was our chance to make it happen.

If you need more horsepower and are looking for a solution that hits that sweet spot between cost and performance, visit our CUBE page at www.cube-hvpc.com and learn more about our workstations, servers, and clusters.  Our team (after they get a little rest) will be more than happy to work with you to configure the right system for your real world needs.

Now that the sales plug is done, let’s take a look at the stats on this bad boy:

Name: Columbia (after the class of battlestars in Battlestar Galactica)
Brand: CUBE High Value Performance Compute Cluster, by PADT
Nodes: 18 (17 compute, 1 storage/control; 4 CPUs per node)
Cores: 1104 (AMD Opteron: 4 x 6308 3.5 GHz, 32 x 6278 2.4 GHz, 36 x 6380 2.5 GHz)
Interconnect: 18-port Mellanox 4X QDR InfiniBand switch
Memory: 4.864 Terabytes
Solve disk: 43.5 TB RAID 0
Storage disk: 64 TB RAID 50

Here are some pictures of the build and the final product:

A huge delivery from our supplier, Supermicro, started the process. This was the first pallet.

The build included installing the largest power strip any of us had ever seen.

Building a cluster consists of doing the same thing, over and over and over again.

We took over PADT’s clean room because it turns out you need a lot of space to build something this big.

It is fun to get the chance to build the machine you always wanted to build.

2AM Selfie: Still going strong!

Almost there. After blowing a breaker, we needed to wait for some more
power to be routed to the closet.

Up and running!
Ratchet and Clank providing cooling air containment.

David, Sam, and Manny deserve a big shout-out for doing such a great job getting this thing up and running so fast!

When I logged on to my first computer, a TRS-80 in my high school computer lab, I never, ever thought I would be running on a machine this powerful. And I would have told people they were crazy if they said a machine with this much throughput would cost less than $300,000. It is a good time to be a simulation user!

Now I just need to find a bigger closet for when we double the size again…


Settling in at our New Colorado Office

One of the cooler features (or is it kewler?) of our new digs in Littleton is that the balcony on the front of the office has flag poles. So we went out and got a US and a Colorado flag, and had a PADT flag made. The sun came out, the wind picked up, and I have to say, it looked pretty good. Then a rainbow came out. #goodstuff


 

It’s Open House Season at PADT!

With the opening of our new office in Albuquerque, our move to a larger office in Colorado, and a whole boat load of new stuff going on at the main office in the Phoenix area, there are a lot of reasons to come visit PADT during one of our upcoming Open Houses.
  • Albuquerque, New Mexico – August 13, 2013, 4:00 PM – 7:00 PM
  • Tempe, Arizona – September 10, 2013, 5:00 PM – 10:00 PM
  • Littleton, Colorado – October 16, 2013, 4:00 PM – 8:00 PM
All of our Open House events are a great opportunity for you to meet PADT’s staff, get to know what we do a bit better, and network with other PADT customers and vendors.  We provide food and drink as well as enough technical information to make it an enjoyable, but justifiable use of your time.
We will be giving tours of each facility including some in-depth information about 3D Printing, with demos on our Stratasys prototyping systems.
Don’t miss out on “the” technology social events of the year!
These events are crowded, so we would really appreciate it if you would help us get a head count by registering for the open house you plan on attending: