Congratulations to Orbital Sciences on Successful Docking

Another great customer success to report:

Orbital Sciences Corporation, a PADT customer and former employer of several staff members, was the second company to commercially dock with the International Space Station.  Those other guys in California owned by the Internet Billionaire always get the bigger press, so we wanted to give a shout-out to the OSC team and let them know we are proud of them and all they accomplish, often out of the media spotlight.

Read all about it in this Wall Street Journal Article:

http://online.wsj.com/article/SB10001424052702303918804579104920967608730.html

We started working with the Arizona group back when they were Space Data Corporation, and we have seen success follow success as they prove to be the less flashy leader in commercial space. Most people don’t know that OSC had its 500th mission back in 2006. We are proud to support them as one of their suppliers and wish them further luck in this and other programs.

 

 

Four PADT Customers Named Finalists for MD+DI’s 2013 Medical Device Manufacturer of the Year

Last week we found out that PADT’s longtime co-located customer, Orthosensor, was named a finalist in MD+DI’s 2013 Medical Device Manufacturer of the Year competition.  PADT has worked very closely with Orthosensor for many years; they have actually placed a team inside PADT’s offices. We know they deserve recognition for the advances they have made. Congratulations!  This recognition not only underscores Orthosensor’s technical and clinical successes, it also highlights the commercial success they have had in partnering with industry-leading orthopedic firms.

You can learn more about what PADT has done with Orthosensor by reading this case study.

The competition is pretty significant in the medical device industry, and finalists and winners are chosen by the editorial staff:

Each year, MD+DI recognizes one or more medical device companies that have risen above the crowd to advance medical device manufacturing. In looking at the field this year, we realized that the firms influencing the medical device business the most come from both within and outside the industry.

Some of our 10 finalists for the 2013 Medical Device Manufacturer of the Year are traditional device companies making waves with novel products and innovative business strategies; others are outsiders that are pushing boundaries by changing the definition of medical device manufacturing. We believe all of them are helping to evolve the industry.

– http://www.mddionline.com/article/2013-medical-device-manufacturer-year-finalists

There is a reader’s poll.  (Hint.) We encourage everyone to take a look at the finalists and voice their opinion (hint, hint) on who should get the award. And if they vote for Orthosensor, they will know they voted for a quality firm that has a close and long relationship with PADT. (Hint, hint, hint.)

But wait, there is more! While getting the link for the Orthosensor mention, we were even more pleased to see first one, then two, then three other PADT customers listed. 40% of this year’s finalists are PADT customers.  That is something we are very proud of because it shows that we are working with customers that are really making a difference in people’s health:

  • Medtronic has been a longtime prototyping and simulation services customer of PADT, and we know that their wide array of life-saving products really makes a difference.
  • When Roche Diagnostics purchased longtime customer Ventana Medical Systems, we knew it would lead to great things. Now their tissue diagnostic systems are evolving faster, and a wider range of customers has access to this very important tool in the daily struggle to battle cancer.  They also have one of the most beautiful campus locations of any of our customers. And since all the work we do for them is confidential, a picture of the campus will have to do.
  • Stratasys.  Yes, that Stratasys. The company that PADT not only sells for but that is also a customer. You didn’t know they were also a customer? Stratasys purchases and bundles PADT’s SCA cleaning system for their Fused Deposition Modeling systems. To see Stratasys listed in this competition is a big deal for us, having used their technology for years to help our medical device customers.  We love the recognition that Rapid Prototyping (even if we have to call it 3D Printing) is getting these days for the real and substantial contribution it is making across industries. What is kind of cool in a rapid-prototyping-links-everything sort of way is that we have used Stratasys hardware to support all three of the device companies listed above.

With four horses in this race, we feel confident we will be congratulating one of them as this year’s winner!

ANSYS FLUENT Performance Comparison: AMD Opteron vs. Intel XEON

AMD Opteron 6308 & INTEL XEON e5-2690 Comparison using ANSYS FLUENT 14.5.7

Note: The information and data contained in this article were compiled and generated on September 12, 2013 by PADT, Inc. on CUBE HVPC hardware using ANSYS FLUENT 14.5.7.  Please remember that hardware and software change with new releases, and you should always try to run your own benchmarks, on your own typical problems, to understand how performance will impact you.

A potential customer of ours was interested in a CUBE HVPC mini-cluster. They requested that I run benchmarks and gather data on two CPUs. The CPUs were benchmarked on two of our CUBE HVPC systems: one mini-cluster has dual INTEL® XEON e5-2690 CPUs, and the other has quad AMD® Opteron 6308 CPUs. The benchmarking was run on a single server, using a total of 16 cores on each machine. The same DDR3-1600 ECC Reg RAM, Supermicro LSI 2208 RAID controller, and Hitachi SAS2 15k RPM hard drives were used on each system.


CUBE HVPC Test configurations:

Server 1: CUBE HVPC c16
  • CPU: 4, AMD Opteron 6308 @ 3.5GHz (Quad Core)
  • Memory: 256GB (32x8G) DDR3-1600 ECC Reg. RAM (1600MHz)
  • Hardware RAID Controller: Supermicro AOC-S2208L-H8iR 6Gbps, PCI-e x 8 Gen3
  • Hard Drives: Supermicro HDD-A0600-HUS156060VLS60 – Hitachi 600G SAS2.0 15K RPM 3.5″
  • OS: Linux 64-bit / Kernel 2.6.32-358.18.1.el6.x86_64
  • App: ANSYS FLUENT 14.5.7
  • MPI: Platform MPI
  • HCA: SMC AOC-UIBQ-M2 – QDR Infiniband
    • The IB card is installed; however, solves were run distributed locally
  • Stack: RDMA 3.6-1.el6
  • Switch: MELLANOX IS5023 Non-Blocking 18-port switch
Server 2: CUBE HVPC c16i
  • CPU: 2, INTEL XEON e5-2690 @ 2.9GHz (Octa Core)
  • Memory: 128GB (16x8G) DDR3-1600 ECC Reg. RAM (1600MHz)
  • RAID Controller: Supermicro AOC-S2208L-H8iR 6Gbps, PCI-e x 8 Gen3
  • Hard Drives: Supermicro HDD-A0600-HUS156060VLS60 – Hitachi 600G SAS2.0 15K RPM 3.5″
  • OS: Windows 7 Professional 64-bit
  • App: ANSYS FLUENT 14.5.7
  • MPI: Platform MPI

ANSYS FLUENT 14.5.7 Performance using the ANSYS FLUENT Benchmark suite provided by ANSYS, Inc.

The models we used can be downloaded from the ANSYS Fluent Benchmark page link: http://www.ansys.com/Support/Platform+Support/Benchmarks+Overview/ANSYS+Fluent+Benchmarks

Release ANSYS FLUENT 14.5.7 Test Cases  (20 Iterations each):
  • Reacting Flow with Eddy Dissipation Model (eddy_417k)
  • Single-stage Turbomachinery Flow (turbo_500k)
  • External Flow Over an Aircraft Wing (aircraft_2m)
  • External Flow Over a Passenger Sedan (sedan_4m)
  • External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m)
  • External Flow Over a Truck Body 14m (truck_14m)
Chart 1: Total Wall Clock Time in seconds: (smaller bar is better)


Chart 2: Average wall-clock time per iteration in seconds: (smaller bar is better)


 

Summary:

Are you sure?

That was the question Eric posed to me after he reviewed the data and read this blog article before posting. I told him, “Yes, I am sure. Data is data, and I even triple-checked.” I re-ran several of the benchmarks to verify that the solve times came out the same on these two CUBE HVPC systems. I went on to tell Eric, “For example, let’s dig into the data for the External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m) benchmark and see what we find.”

[Pictured: the quad-socket Supermicro motherboard with 4 x 4-core AMD Opteron 6308 CPUs @ 3.5GHz, and the dual-socket Supermicro motherboard with 2 x 8-core INTEL XEON e5-2690 CPUs @ 2.9GHz]

The dual-socket INTEL XEON e5-2690 motherboard is impressive; it may well have been on the Top500 list of the fastest computers in the world ten years ago. After each solve I captured the solve data, and as you can see from that data, the AMD Opteron wall-clock time was faster than the INTEL XEON wall-clock time.

So why did the AMD Opteron 6308 CPU pull away from the INTEL for the ANSYS FLUENT solve times? Let’s take a look at a couple of reasons why this might have happened. I will let you make your own conclusions.

  • Clock speed. But would a 9.6GHz difference in total theoretical CPU speed make a 100% speedup in ANSYS FLUENT wall-clock times?
    • Theoretical totals:
      • AMD® Opteron 6308 = 16 x 3.5GHz = 56.0 GHz
      • INTEL® XEON e5-2690 = 16 x 2.9GHz = 46.4 GHz
  • The floating-point argument? The tick and tock of the great CPU saga continues.
    • At this moment in eternity, it is a known fact that the AMD Opteron 6308, and many of its brothers, has one floating-point unit per two integer cores, while INTEL has one floating-point unit per integer core. What this means to ANSYS CFD users, in my MIS/IT simpleton terms, is that the AMD CPU was simply able to handle and process more data in this example.
    • It is possible that more integer calculations were required than floating-point ones. If that is the case, then the AMD CPU would have had eight pipelines for integer calculations. The AMD Opteron is able to process four floating-point pipelines, while the INTEL CPU can process eight.

Let us look at the details of what is on the motherboards as well.  Four data paths vs. two can make a difference:

|  | INTEL XEON e5-2690 (dual-socket Supermicro motherboard, 2 x 8c @ 2.9GHz) | AMD Opteron 6308 (quad-socket Supermicro motherboard, 4 x 4c @ 3.5GHz) |
| --- | --- | --- |
| Processor technology | 32-nanometer | 32-nanometer SOI (silicon-on-insulator) |
| Interconnect | Quick Path Interconnect links: two links at up to 8GT/s per link, up to 16 GB/s peak bandwidth per direction per port | HyperTransport™ technology links: four x16 links at up to 6.4GT/s per link |
| Memory | Integrated DDR3 memory controller, up to 51.2 GB/s memory bandwidth per socket | Integrated DDR3 memory controller, four links at up to 51.2GB/s per link |
| Memory channels | Quad-channel support | Quad-channel support |
| Packaging | LGA2011-0 socket | Socket G34, 1944-pin organic Land Grid Array (LGA) |
Current pricing of the CPUs

Here is the pricing for each CPU, taken from NewEgg’s and Ingram Micro’s websites. The prices were captured on September 12, 2013.

  • AMD Opteron 6308 Abu Dhabi 3.5GHz 4MB L2 Cache 16MB L3 Cache Socket G34 115W Quad-Core Server Processor OS6308WKT4GHKWOF
    • $499.99 x 4 = $1999.96
  • Intel Xeon E5-2690 2.90 GHz Processor – Socket LGA-2011, L2 Cache 2MB, L3 Cache 20 MB, 8 GT/s QPI,
    • $2010.02 x 2 = $4020.04

STEP OUT OF THE BOX,
STEP INTO A CUBE

PADT offers a line of high performance computing (HPC) systems specifically designed for CFD and FEA number crunching aimed at a balance between cost and performance. We call this concept High Value Performance Computing, or HVPC. These systems have allowed PADT and our customers to carry out larger simulations, with greater accuracy, in less time, at a lower cost than name-brand solutions. This leaves you more cash to buy more hardware or software.

Let CUBE HVPC by PADT, Inc. quote you a configuration today!

PADT Hosts 300 Guests for Open House

Sometimes you get lucky.  It was 95F or so, humid as heck, and we had hundreds of people coming for a combined Arizona Technology Council and PADT Open House event.  The good news is that one of our longtime tenants had just moved out to a smaller space across the lake, so we had their former bullpen area open and, most importantly, air conditioned. Cancel the tents, break out the vacuums!


Everything came together and we had a great event. Around 300 guests checked in, and we suspect a few more snuck in and out before we could grab their contact information.


The evening started with drinks and food and a lot of networking.  Eight participants in the AZTC partner program were there to talk about the programs and discounts they offer council members.  The Falcon robotics team from Carl Hayden High School was also able to come and show off two of their robots.

After some brief remarks from Steven Zylstra, the AZTC President, and Eric Miller, one of PADT’s owners (me), everyone got back to some serious networking and Mexican-food eating before the tours of PADT’s facility began at 6:00.

The best part of the networking was watching PADT’s customers, vendors, and friends mingle and get to know each other. Connections were being made all over the room.  Even PADT’s telepresence robot made an appearance and wandered around the room.

During the tours, PADT employees showed off their work (well, the stuff we can show) in Simulation, Product Development, and Rapid Prototyping. As expected, the 3D Printers were the big hit, but we heard back from many attendees that they found the talk on Simulation very interesting.  Us old FEA guys like to get some attention.


 

In the end we found that we had a problem. With 30-some PADT employees in attendance, all with smartphones in their pockets, we only took one picture.  The bottom line was that we were all having such a good time interacting with people that we forgot to snap some shots.  Fortunately, Russ Olinsky, one of the guests, was kind enough to send us a picture from the tour.  (If you have any pictures you can share, please email me at eric.miller@padtinc.com.)

Things wrapped up around 9:00, with most people gone by 8:00.  We hope that all of you who came enjoyed it as much as we did.

We hope to see all of you, and those who could not make it, at our 20th anniversary bash in the spring!

Efficient Engineering Data, Part 1: Creating and Importing Material Properties in Workbench

Note: This is part 1 of a two-part series in Engineering Data customization and default settings. This article essentially serves as a foundation for my next one, which will cover how to set up default material choices and assignments in Workbench.

As you’ve probably noticed, the Workbench installation comes with an extensive set of material libraries. If you haven’t noticed, then open a Workbench session, go into Engineering Data, and click the button on the upper right that looks like a stack of books.

Click on one of the libraries, say, General Materials, and take a look at the selection of materials.


So you see things like Stainless Steel, Aluminum Alloy, Titanium Alloy, etc. but which alloys exactly? 301 1/2-hard steel? 17-4PH? 6061-T6 aluminum? Or cast C355? Titanium 6-4? Or 6-2-4-2? Obviously you’re going to have your own material properties in mind, and you’ll probably use them frequently enough to where you’d like to have them readily accessible. Maybe store them in a library, or something.

As it turns out, you’re not confined to the libraries ANSYS provides with the Workbench installation. You can create your own libraries too. To start this off, first click in the first blank line in the top Engineering Data Sources section, where it says, “Click here to add a new library” (seems pretty straightforward, doesn’t it?) and type a unique name for the library. I’ll call mine “Jeff’s Materials” because I’m incredibly original that way. Then hit Enter.


You’ll be prompted for a location and xml file name for the library. Specify these and click Save. All of your material names and properties will be stored in this file.


Notice that the new library is checked. That means it is unlocked and able to be edited.


At that point you can add material names, insert properties from the left-side Toolbox, and so on: type in some material names, then define their properties.

Once you’re finished adding and editing materials, uncheck the column B box of the library to lock it up. Click Yes to accept the changes. If you want to add or edit materials in your library at a later date, simply unlock it by checking the column B check box again.


Now, let’s say you want to share your awesome material library with your co-workers, or maybe you’ve installed a new version of ANSYS and you want to include it, or maybe your library was deleted by gnomes during the night. How do you bring it back into Workbench? Simple. First make sure the xml file is available (you’ll want to email it to your co-workers and have them save it to their disks if you’re sharing it with them). Toggle the libraries on by clicking on the stack of books button. Then simply click the little ellipsis button on the “Click here to add a new library” line.


Browse to the appropriate xml file and open it.


And now you have your library back.

(I was too lazy to define all the materials for this article, hence the question marks in the screenshot.)

This is all well and good, but wouldn’t it be nice if we could change the materials that are immediately available in Engineering Data upon opening Workbench, and set the default material assignment to something besides Structural Steel? As it turns out, you can do both of these, and I’ll show you how in the next installment of Efficient Engineering Data.

Great Showing at Sandia Technology Showcase

PADT is attending this year’s Sandia Technology Showcase for the first time.  A great turnout:


The purpose of the showcase is:

The 2nd Annual Sandia Research & Technology Showcase presents cutting edge research and technology development taking place at Sandia National Laboratories. The 2013 Showcase will focus on four themes: bioscience, computing & information science, energy & climate, and nanodevices & microsystems. The event will also provide information on doing business with Sandia National Laboratories through licensing, partnerships, procurement, and economic development programs.

We are very excited to look and see if any of these technologies fit PADT as a new product for us, and we are ready and waiting to help others turn the innovation coming from the labs into viable commercial products.

#SRTSC

 

 

Questions Decision Makers Should Ask About Computer Simulations

‘TRUST BUT VERIFY’

A guest post from Jack Thornton, MINDFEED Marcomm, Santa Fe, NM

The computerization of engineering (and everything else) has imposed new burdens on managers and executives who must make critical decisions. Where once they struggled with too little information, they now struggle with too much. Until roughly three decades ago, time and money were plowed into searching for more and better information. Today, time and money are plowed into making sense of myriad computer simulations.

For all but the best-organized decision makers, these opposite situations have proven equally frustrating. For nearly all of engineering history, critical decisions were based on a few pieces of seemingly credible data, a handful of measurements, and hand-drawn sketches a la Leonardo da Vinci—leavened with hands-on experience and large dollops of intuition.

Computer simulations are now everywhere in engineering. They have greatly sped up searches for information, as well as creating it in the first place and endlessly multiplying it. What has been lost are transparency and traceability—what was done when, by whom, and why. Since transparency and traceability are vital to making sound engineering decisions in today’s intensely collaborative technical environments, decision makers and managers say this loss is a big one.

This is not some arcane, hidden war waged by experts, geeks and professors. This is about designing machinery, components, physical systems and assemblies that are globally competitive—and turn a profit doing so. The complexity of modern components, assemblies and systems has been exhaustively and repeatedly described.

Nor is this something engineers and first-line managers can afford to ignore. Given the shortages of engineering talent, relatively inexperienced engineers are constantly being handed responsibility for making key decisions.

Users of computerized simulation systems continually seek ways to answer the inevitable question, “How do we know this or that or whatever to be true?” Several expert users of finite element analysis (FEA), the basic computational toolset of engineering simulation and analysis, were interviewed for this article. Each interviewee is a licensed professional engineer (PE) and each has been recommended by a leading FEA software vendor.

For decision makers, a simulation, FEA or otherwise, really presents only three options:

  • Signing off on the production of a component or assembly. If it proves to be flawed, warranty claims, recalls, and perhaps much worse may result.
  • Shelving a promising new product, perhaps at the behest of fretful engineers. The investment is written off or expensed as R&D. The marketplace opportunity (and its revenue) may be lost forever.
  • Remanding the project to the analysts even while knowing that “paralysis by analysis” will push development costs too high or cause too big a delay in getting to market.

Since executives and other upper-echelon corporate decision makers rarely possess much understanding of FEA, let alone have time to develop it, a “trust but verify” strategy is the only reasonable approach.

The verify part is easy. FEA modelers and solvers have been well wrung-out over the past 10 to 20 years. All of the FEA software vendors will share details of their in-house tests of their commercial code, the experiences of customers doing similar work, and investigations by reviewers who are often on engineering-school faculties. The same is true for industry-specific “home grown” code.

It’s the trust part that’s so challenging, as in FEA trust depends on understanding some very complicated matters.

Analysis experts note that unless the builders of FEA models are questioned, they rarely spell out the model’s underlying assumptions. Even less frequently (and clearly) described is the reasoning behind the dozens or hundreds of choices they made that are dictated by those assumptions.

And worse, these choices are not always clarified when model builders do provide this detail—quite the opposite, in fact. When pressed for explanations, model builders may simply present the mathematical formulas they use to characterize the physics of their work.

Analysis experts are quick to point out that these equations often confuse and intimidate. Decision makers should insist on commonsense explanations and not equations. And every FEA model builder will try earnestly to explain (often at great length) the model’s implications to anyone who takes the time to look.

In the context of FEA and other simulations, “physics” means the real-world forces to be withstood by a printed circuit board, a pump, an engine mount, a turbine, an aircraft wing or engine nacelle, the energy-absorbing structure of a car, or anything else that is mechanically complex and highly stressed.

This is why transparency and traceability are so important in FEA. Analysts note that some of this is codified in the guidelines for simulation and computational analysis in the ASME / ANSI verification and validation standards. Further support comes from company best practices developed by FEA users and managers, although enforcement is rarely consistent, and voluntary industry standards whose applicability varies widely.

The transparency and traceability challenge is that building a model—again, a subset of the real world—requires dozens of assumptions about the mechanical capabilities that the object or assembly must have to meet its requirements. After these basic assumptions have been coded into the model, hundreds of follow-on choices are needed to represent the physical phenomena in the model.

Analysts urge decision makers to question the stated values and ranges of any of the model’s parameters—and in particular values and ranges that have been estimated. Decision makers are routinely urged to probe whether these parameters’ values are statistically significant, and whether those values are even needed in the model.

A survey of experts turns up numerous aspects of FEA and other computerized simulations that decision makers should probe as part of a trust-but-verify approach. Among many examples:

  • Incoming geometry—usually from solid modeling systems used by product designers— and the topologies and boundaries they have chosen.
  • The numerical values representing physical properties such as yield strengths of the chosen materials.
  • Mechanical components and assemblies. How accurately represented are the bolts and welds that hold the assemblies together?
  • The stiffness of structures.
  • The number of load steps. Is the range broad enough? Are there enough intermediate steps so nothing will be missed? How true-to-life are the load vectors?
  • The accuracy of modal analyses. Resonating harmonic frequencies—vibration—can shake things apart and lead to catastrophic failures.
  • Boundary conditions, or where the object being modeled meets “the rest of the world” in the analysis. Are the specifics of the object’s physical and mechanical requirements—the geometry—accurately represented and, again, how do we know?
  • Types of analysis, which range from small, simple linear static to large, highly complex nonlinear dynamic. Should a smaller simpler analysis have been used? Could physical measurements suffice instead of analyses?
  • In fluid dynamics, how well characterized are the flows, volumes, and turbulence? How do we know? In fluid dynamics, representations of flows, volumes, and turbulence are the numerical counterparts of the finite elements used in analyses of solids.
  • Post-processing the results, i.e., making the numerical outputs, the results of the analysis, comprehensible to non-experts.

Underlying all these are the geometric and analytical components that are found in all simulations. In FEA, this means the mesh of elements that embodies the physics of the component or assembly being modeled. Decision makers should always question the choice of elements as there are hundreds to pick from.

Some models use only a handful of elements while a few use tens of millions. Also to be questioned is the sensitivity of those elements to the forces, or loads, that push or pull on the model. A caveat: this gets deeply into the inner workings of FEA, e.g. explanations of the points or nodes where adjacent elements connect, the tallies of degrees of freedom (DOFs) represented by each pair of nodes, and the huge number of partial differential equations required.

The trust-but-verify approach is valuable in all of the engineering disciplines—mechanical, structural, electrical / electronic, nuclear, fluid dynamics, heat transfer, aerodynamics, noise / vibration / harshness—as well as for sensors, controls, systems, and any embedded software.

Developers of FEA and other simulation systems are working hard to simplify finding these answers or at least make trust-but-verify determinations less taxing. See Sidebar, “Software Vendors Tackle Transparency and Traceability in FEA.”

Proven approaches

A proven approach to understanding FEA models is offered by Eric Miller, co-owner of Phoenix Analysis & Design Technologies, or PADT, in Tempe, Ariz. “A decision maker with some understanding of the management of the data in an FEA analysis will ask about how specific inputs affect the results. Such a decision maker will lead the model builder and analyst to think more deeply about those inputs. Ultimately a more accurate simulation will be created.”

Miller offers a caveat: “This questioning should be approached as an additional set of eyes looking at the problem from the outside to determine the accuracy of results. The key is to not become adversarial and question the integrity or knowledge of the analyst.”

Jeffrey Crompton, principal of AltaSim Technologies, Columbus, Ohio, goes straight to the heart of the matter: “Let’s start out with the truth – all models are wrong until proven otherwise. Despite all the best attempts of engineers, scientists and computer code developers,” he explained, “a computational model does not give the right answer until you can categorically demonstrate its agreement with reality.”

“Categorically” is a high standard, a term with almost no wiggle room. Unfortunately, given the complexity of simulations, agreement with reality is often not easy to demonstrate. Hence the probing and questioning recommended by FEA experts and engineers.

Secondly, despite tsunamis of data cascading from one engineering department to another, a great deal of the physical world still remains imprecisely quantified. Demonstrating agreement with reality “becomes increasingly difficult,” Crompton added, “when you may not know the value of some parameters, or lack real-world measurements to compare against, or are uncertain exactly how to set up the physics of the problem.”

The challenge for decision makers uncomfortable with the results of FEA analyses is neatly summed up by Gene Mannella, vice president and FEA expert at GB Tubulars Inc. in Houston. “Without a basic understanding of what FEA is, what it can and cannot do, and how to interpret its results, one can easily make bad and costly decisions,” he points out. “FEA results are at best indicators. They were never intended to be accepted” at face value.

As Mannella, Crompton and other FEA consultants regularly remind their clients, an analysis is an approximation. It is an abstraction, a forecast, a prediction. There will always be some margin of error, some irreducible risk. This is the unsettling truth behind the gibe that “all models are bad but some are useful.” No FEA model or analysis can ever be treated as “gospel.” And this is why analysts strive ceaselessly to minimize margins of error, to make sure that every remaining risk is pointed out, and to clearly explain the ramifications.

“To be understood, FEA results must be supplemented by the professional judgment of qualified personnel,” Mannella added. His point is that decision makers relying on the results of FEA analyses should never forget that what they “see” on a computer monitor, no matter how visually impressive, is an abstraction of reality. Every analysis is a small subset of one small part of the real world, and it is constrained by deadlines, budgets, and the boundaries of human comprehension.

Mannella’s work differs from that of most other FEA shops: it is highly specialized. GB Tubulars makes connectors for drilling and producing oil and gas in extreme environments. Its products go into oil and gas projects several miles underground and also often beneath a mile or more of seawater. Pressures are extreme, bordering on the incalculable. The risk of a blowout with massive damage to equipment and the environment is ever-present.

The analysts also stressed probing the correlation with the results of physical experiments. Tests in properly equipped laboratories by qualified experimentalists are the single best way to ensure that the model actually does reflect physical reality. Which brings us to the FEA challenge of extrapolations.

Often the most relevant test data is not available because physical testing is slow and costly. The absence of relevant data makes it necessary to extrapolate among the results of similar experiments. Extrapolations can have large impacts on models, so they too should be questioned and understood.

To deal with these difficulties, Crompton and the other analysts recommend, first, managing the numbers with statistical process control (SPC) methods and, second, devising the best ways to set up the model and its analyses with design-of-experiments simulations. Both should be reviewed by decision makers—ideally with a qualified engineer looking over their shoulders.

“Our mantra in this situation is ‘start simple and gradually add complexity,’” Crompton said. “Consider starting with a [relatively simple] closed-form analytical solution. The equation’s results will help foster an understanding of how the physics and boundary conditions need to be implemented for your particular problem.” [A closed-form solution is an equation with a single variable, such as stress equals force divided by area, as opposed to a model; even the simplest simulation and analysis models have several variables.]

Peter Barrett, principal of CAE Associates in Middlebury, Conn., noted that, “the most experienced analysts start with the simple models that can be compared to a closed-form solution or are models so simple that errors are minimized and can be safely ignored.” He commented that the two acronyms that best apply to FEA are KISS (“Keep It Simple, Stupid”) and “garbage in, garbage out,” or GIGO. In other words, probe for the unneeded complexity and bad data.

Model builders are always advised by FEA experts to start by modeling the simplest example of the problem and then build upward and outward until the model reflects all the relevant physics. Decision makers should determine whether this sensible practice was followed.

When pressed for time, “some analysts will try to skip the simple-example problem and analysis,” Barrett said. “They may claim they don’t have time” for that fundamental step, i.e., the analyst thinks the problem is easily understood. Decision makers should insist that analysts take the extra time. “The analysis always benefits from starting as simply as possible,” he continued. “Decision makers will reap the rewards of more accurate analysis, which is a driver for projects being on time and under budget.”

Ken Perry, principal at Echobio LLC, Bainbridge Island, Wash., concurred. “The first general principle of modeling is KISS. Worried decision makers should verify that KISS was applied from the very beginning,” he said. “KISS is also an optimal tool to pick apart existing models that are inflated and overburdened with unnecessary complexity,” Perry added.

A favorite quote of Perry’s comes from statistician R.W. Hamming: “The purpose of computing is insight, not numbers.” Perry elaborated: “Decision makers should guard against the all-too-human tendency to default for the more complicated explanation when we don’t understand something.  Instead, apply Occam’s razor.  Chop the model down to bite-sized chunks for questioning.” [Occam’s Razor is an axiom of logic that says in cases of uncertainty the best solution is the one requiring the fewest assumptions.]

Questioning is especially important, Perry added, “whenever the decision maker’s probing questions evoke hints of voodoo, magic or engineers shaking their head in vague, fuzzy clouds of deference to increasingly specialized disciplines.”  Each of these is a warning flag that the model or analysis has shortcomings.

Perry works in the tightly regulated field of implantable medical and cardiovascular devices. He has one such device himself, a heart valve, and has pictures to prove it on his Web site. Tellingly, Perry began his career not in FEA but as an experimentalist. He worked with interferometry to physically test advanced metal alloys.

Perry is living proof that FEA experts and experimentalists could understand one another if they tried. But often they don’t try, which is another challenge for decision makers.

The last and most cautionary words are from Barrett at CAE Associates. More than anyone else, he was concerned about the risks of inexperienced engineers making critical decisions. Such responsibility often comes with an unlooked-for promotion to a product manager’s job, for example. Unexpected increases in responsibility also can arrive with attrition, departmental shakeups, and corporate acquisitions and divestitures.

“In our introductory FEA training classes we often have engineers signed up who have no prior experience with FEA. They sign up for the intro class,” he said, “because they are expected to review results of analyses that have been outsourced and/or performed overseas.”

Barrett saw this as “very dangerous. These engineers often do not know what to look for. Without knowing how to check, they may assume that the calculations in the analysis were done correctly.  It is virtually impossible to look at a bunch of PowerPoint images of post-processed analysis results and see if the modeling was done correctly. Yet this is often the case.”

Presentation: Realizing Your Invention in Plastic, 3D Printing to Production

plastics-cover-1

PADT was honored to be invited to present to the Inventors Association of Arizona on September 4th, 2013. This well-attended event focused on giving an overview of plastic parts, their design, and their manufacture, including a quick look at additive manufacturing.

Here is a link to a PDF of the presentation:
IAA-Realizing-Invention-Plastic-2013_09_04-1

Also, during the presentation some animations showing the various additive manufacturing (3D Printing) processes didn’t play. You can find them here on a previous blog post.

 

Job Opening at PADT: Part Time Human Resources Professional

PADT is looking for an experienced Human Resources professional who is seeking 10 to 20 hours of work per week with flexible hours. We are almost 20 years old and around 75 employees strong, with a very low attrition rate, a strong company culture, and a very casual approach to HR. It is time to take HR management duties away from one of the owners and hand them over to a professional.

We do not want to outsource this to another company; we want someone who will become part of our culture, part of our family.  We are also not using placement companies for this position.

Applicants should have the following skills and experience:

  • 5 or more years in HR for a high technology company where the majority of the employees were engineers
  • Understands why Dilbert is funny
  • Can differentiate between and choose HR activities that are high value added versus those that are done as cover or as the latest trend
  • Experience managing/conducting:
    • The employee review process
    • Health care/dental/401k/529/insurance plan sign-up and maintenance for employees
    • HR training
    • New employee setup and processing
    • Out-processing of departing employees
    • Advertising job openings, setting up interviews, and negotiating offers
    • Terminating employees
    • Gathering and maintaining employee information
    • Maintaining an employee manual
    • Assisting the legal team in the process of obtaining H1b visas and permanent resident status
    • Being the primary point of contact for employees for benefit and policy questions
    • Being the primary contact for our benefits broker
    • Reviewing compliance with federal and state labor laws and carrying out required tasks or recommending changes
    • Writing (with the help of management) job descriptions and keeping them up to date
    • Building and strengthening a well-defined company culture through training, activities, and events

The responsibilities for this job are basically the experience items requested above. Hours are flexible and many of the tasks can be conducted from home. The person in this position needs to be available by email or phone to answer questions or deal with issues during normal work hours, but may only need to actually work an average of 10 or so hours a week, peaking at 20 a week. Administrative staff will be available to help with tasks and this position will work closely with our existing accounting and legal staff.

If you are interested, visit http://www.padtinc.com/about/careers.html and follow the directions for submitting a resume.

Construction Started on PADT’s new High Speed Fiber Connection

 


We were very excited to find a construction crew outside of PADT’s Tempe building this morning. After months of negotiations and permitting, construction has begun on laying fiber optic cable to the PADT Innovation Center.

That is one big Interweb Pipe!  Can’t wait for the bandwidth.

20 APDL Commands Every ANSYS Mechanical User Should Know

One of the most powerful things about ANSYS Mechanical is the fact that it creates an input file that is sent to ANSYS Mechanical APDL (MAPDL) to solve. This is awesome because you as a user have complete and full access to the huge breadth and depth available in the MAPDL program.  MAPDL is a good old-fashioned command driven program that takes in text commands one line at a time and executes them. So to access all those features, you just need to enter in the commands you want.

For many older users this is not a problem because we grew up using the text commands. But new users did not get the chance to be exposed to the power of APDL (ANSYS Parametric Design Language) so getting access to those advanced capabilities can be tough. 

In fact, I was in a room next to one of our support engineers while they were showing a customer how to change the elements that the solver would use (Mechanical defaults to the most common formulation, but you can change them to whatever still makes sense), and the user had to admit he had never really used or even seen APDL commands before.

So, as a way to get ANSYS Mechanical users out there started down the road of loving APDL commands, we got together and came up with a list of 20 APDL commands that every user should know.  Well, actually, it is more than 20 because we grouped some of them together.  We are not going to give too much detail on their usage; the APDL help is fantastic and explains everything.  In fact, if you use a copy of PeDAL you can get the help right there live as you type (yes, that was a plug for you to go out and plop down $49 and buy PeDAL).

Also note that we are not getting into how to script with APDL. It is a truly parametric command language in that you can replace most values in commands with parameters. It also has control logic, functions, and other capabilities that you find in most scripting languages.  Here we will focus on the actual commands you use to do things in the program. If you want to learn more about how to program with APDL, you can purchase a copy of our “Introduction to the ANSYS Parametric Design Language” book. (Another plug.)

Some APDL Basics

APDL was developed back in the day of punch cards.  It was much easier to use than the other programs out there because the commands you entered didn’t have to be formatted in columns.  Instead arguments for commands are separated by commas.  Therefore, instead of defining a Node in your model as:

345   12.456    17.4567   0.0034 

(note that the location of each decimal point is critical), you create the same node with a line like:

N,345,12.456,17.4567, 0.0034

Trust me, that was a big deal. But what you need to know now is that all APDL commands start with a keyword and are followed by arguments. The arguments are explained in the Command Reference in the help.  So the entry for creating a node looks like this:

[Screenshot: the help entry for the N command]

The documentation is very consistent and you will quickly get the hang of how to get what you need out of it.  The layout is explained in the help:  // Command Reference // 3. Command Dictionary

Another key thing to know about commands in MAPDL is that most entities you create (not loads and boundary conditions) have an ID number. You refer to entities by their ID number.  This is a key concept that gets lost if you grew up using GUIs.  So if you want to make a coordinate system and use it, you define an ID for it and then refer to that ID. The same thing goes for element definitions (Element Types), material properties, etc.  Remember this; it hangs up a lot of newer users.

To use MAPDL commands you simply enter each command on a line in a command object that you place in your model tree. We did a seminar on this very subject about two years ago that you can watch here.

The idea of entity selection is fundamental to APDL.  Above we pointed out that all entities have an ID.  You can interact with each entity by specifying its ID.  But when you have a lot of them, like nodes and elements, that would be a pain.  So APDL deals with this by letting you select entities of a given type, making them “selected” or “unselected.”  Then when you execute commands, instead of specifying an ID you can specify “ALL,” and all of the selected entities are used for that command.  Sometimes we refer to entities as being selected, and sometimes we refer to them as “active.”  The basic concept is that any entity in ANSYS Mechanical APDL can be in one of two states: active/selected or inactive/unselected.  Inactive/unselected entities are not used by whatever command you might be executing.

If you want to see all of the APDL commands that ANSYS Mechanical writes out, simply select the setup branch of your model tree and choose Tools->Write Input File.  You can view it in a text editor or, even better, in PeDAL.


One last important note before we go through our list of commands: the old GUI for MAPDL can be used to modify or create models as well as ANSYS Mechanical. Every action you take in the old GUI is converted into a command and stored in the jobname.log file.  Many users will carry out the actions they want in an interactive session, then save the commands they need from the log file.

Wait, one more thing:  right now you need these commands, but at every release more and more of the solver is exposed in the ANSYS Mechanical GUI, and we end up using fewer and fewer APDL scripts.  So before you write a script, make sure that ANSYS Mechanical can’t already do what you want.

The Commands

1. !

An exclamation point is a comment in APDL. Any characters to the right of one are ignored by the program. Use them often and add great comments to help you and others remember what the heck you were trying to do.
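For example (this snippet is made up for illustration):

! select the nodes on the symmetry face at x = 0
nsel,s,loc,x,0  ! everything to the right of the ! is ignored by MAPDL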

2. /PREP7 – /SOLU – /POST1 – FINISH

The MAPDL program consists of a collection of 10 processors (there used to be more, but they have been undocumented). Commands only work in some processors, and most work in only one.  If you enter a preprocessor command when you are in the postprocessor, you will get an error.

When you create a command object in your ANSYS Mechanical model, it will be executed in either the preprocessor, the solution processor, or the postprocessor, depending on where in the model tree you insert the command object.   If you need to go into another processor you can; you simply issue the proper command to change processors.  JUST REMEMBER TO GO BACK TO THE PROCESSOR YOU STARTED IN when you are done with your commands.

/PREP7 – goes to the pre processor. Use this to change elements, create things, or modify your mesh in any way.

/SOLU – goes to the solution processor.  Most of the time you will start there, so you will most often use this command if you went into /PREP7 and need to get back. Modify loads, boundary conditions, and solver settings in this processor.

/POST1 – goes to the post processor. This is where you can play with your results, make your own plots, and do some very sophisticated post-processing.

FINISH – goes to the begin level. You will need to go there if you are going to play with file names.
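For example, a command object under the analysis environment executes in the solution processor; a minimal, made-up sketch of hopping out and back might look like:

/prep7    ! jump to the preprocessor to tweak the mesh or elements
! ... preprocessing commands go here ...
/solu     ! return to the solution processor we started in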

3. TYPE – MAT – REAL – SECNUM

You only really need to know these commands if you will be making your own elements… but this is one of those things everyone should know, because the assignment of element attributes is fundamental to the way APDL works… so read on even if you don’t need to make your own elements.

Every element in your model is assigned properties that define the element.  When you define an element, instead of specifying all of its properties for each element, you create definitions, give them numbers, then assign the number to each element.  The simplest example is material properties. You define a set of material properties, give it a number, then assign that number to all the elements in your model that you want to solve with those properties.

But you do not specify the IDs when you create the elements; that would be a pain. Instead, you make the ID for each property type “active,” and every element you create will be assigned the active IDs.

The commands are self-explanatory: TYPE sets the element type ID, MAT sets the material ID, REAL sets the real constant number, and SECNUM sets the active section number.

So, if  you do the following:

type,4
real,2
mat,34
secnum,112
e,1,2,3,4,11,12,13,14

you get:

     ELEM MAT TYP REL ESY SEC        NODES
      1  34   4   2   0 112      1     2     3     4    11    12    13    14
      2   3   4   4   0 200    101   102   103   104   111   112   113   114

4. ET

The MAPDL solver supports hundreds of elements.   ANSYS Mechanical picks the best element for whatever simulation you want to do in a general sense.  But that may not be the best element for your model. In such cases, you can redefine the element definition that ANSYS Mechanical used.

Note: The new element must have the same topology. You can’t change a 4-noded shell into an 8-noded hex.  But if the node ordering is the same (that is the topology), then you can make that change using the ET command.
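As a hedged sketch, assuming your body’s element type ID is 1 and it meshed with 20-node bricks (check the input file for the real type number):

et,1,solid95   ! swap element type 1 to SOLID95, a legacy 20-node brick
               ! with the same topology as the default SOLID186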

5. EMODIF

If you define a real constant, element type, or material ID in APDL and you want to change a bunch of elements to those new ID’s, you use EMODIF.  This is the fastest way to change an elements definition.
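A quick sketch of how that might look (the IDs are invented):

esel,s,type,,1     ! select every element of element type 1
emodif,all,mat,7   ! reassign those elements to material ID 7
allsel             ! reselect everything when done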

6. MP – MPDATA – MPTEMP –TB – TBDATA – TBTEMP

Probably the most commonly needed APDL commands for ANSYS Mechanical users are the basic material property commands. Linear properties are defined with the MP command for a polynomial vs. temperature, or with MPDATA and MPTEMP for a piecewise linear temperature response.  Nonlinear material properties are defined with the TB, TBDATA, and TBTEMP commands.

It is always a good idea to stick your material definitions in a text file so you 1) have a record of what you used, and 2) can reuse the material model on other simulation jobs.
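Here is a minimal sketch; the values are placeholders, and matid is the parameter ANSYS Mechanical defines for command objects inserted under a body (in a standalone script, use your own material number):

mp,ex,matid,30e6      ! linear: Young's modulus (placeholder value)
mp,prxy,matid,0.3     ! linear: Poisson's ratio
tb,biso,matid         ! nonlinear: bilinear isotropic hardening
tbdata,1,45e3,100e3   ! yield stress, then tangent modulus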

7. R – RMODIF

If you define an element’s formulation with options on the ET command, and its material properties with the material commands, where do you specify other stuff like shell thickness, contact parameters, or hourglass stiffness?  You put them in real constants.  If you are new to the MAPDL solver, the idea of real constants is a bit hard to get used to.

The official explanation is:

Data required for the calculation of the element matrices and load vectors, but which cannot be determined by other means, are input as real constants. Typical real constants include hourglass stiffness, contact parameters, stranded coil parameters, and plane thicknesses.

It really is a place to put stuff that has no other place.  R creates a real constant, and RMODIF can be used to change them.
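For element types that take, say, a thickness as a real constant, a made-up example looks like:

r,2,0.125         ! create real constant set 2 with a thickness of 0.125
rmodif,2,1,0.25   ! later, change the first value in set 2 to 0.25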

8. NSEL – ESEL

As mentioned, selection logic is a huge part of how MAPDL works.  You never want to have to work on each individual object you want to view, change, load, etc. Instead you want to place entities of a given type into an “active” group and then operate on all “active” entities. (You can group them and give them names as well; see CM-CMSEL below to learn about components.)

When accessing MAPDL from ANSYS Mechanical you are most often working with either nodes or elements.  NSEL and ESEL are used to manage what nodes and elements are active. These commands have a lot of options, so review the help.
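A few made-up examples of the pattern:

nsel,s,loc,x,0     ! select the nodes at x = 0
nsel,r,loc,y,0,2   ! then reselect only the ones with 0 <= y <= 2
esel,s,mat,,3      ! separately, select the elements that use material 3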

9. NSLE – ESLN

You often select nodes and then need the elements attached to those nodes. Or you select elements and you need the nodes on those elements.  NSLE and ESLN do that.  NSLE selects all of the nodes on the currently active elements and ESLN does the opposite.
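For example:

esel,s,type,,2   ! select the elements of element type 2
nsle             ! now select all of the nodes on those elements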

10. ALLSEL

A very common mistake for people writing little scripts in APDL for ANSYS Mechanical is that they use selection logic to select the things they want to operate on, and then they don’t remember to reselect all the nodes and elements.  Say you issue an NSEL to get the nodes on the top of your part so you can apply a load to them. If you just stop there, the solver will generate errors because those will be the only active nodes in the model.

ALLSEL fixes this. It simply makes everything active. It is a good idea to just stick it at the end of your scripts if you do any selecting.
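The classic pattern, with an invented load and location:

nsel,s,loc,z,10   ! select just the nodes to be loaded
f,all,fz,-100     ! apply the load to them
allsel            ! then make everything active again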

11. CM – CMSEL

If you use ANSYS Mechanical you should be very familiar with the concept of Named Selections. These are groups of entities (nodes, elements, surfaces, edges, vertices) that you have put into a group so you can scope based on them rather than selecting each time. In ANSYS MAPDL these are called components and commands that work with them start with CM.

Any named selection you create for geometry in ANSYS Mechanical gets turned into a nodal component—all of the nodes that touch the geometry in the Named Selection get thrown into the component. You can also create your own node or element Named Selections, and those also get created as components in MAPDL.

You can use CM to create your own components in your APDL scripts.  You give it a name and operate away.  You can also select components with the CMSEL command.
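A small sketch (the component name is up to you):

nsel,s,loc,y,5      ! select some nodes
cm,top_nodes,node   ! save the selection as a component named TOP_NODES
allsel              ! go do other things...
cmsel,s,top_nodes   ! ...then make just that component active again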

12. *GET

This is the single most awesomely useful command in APDL.  It is a way to interrogate your model to find out all sorts of useful information: the number of nodes, the largest Z value for a node position, whether a node is selected, loads on a node, result information, etc.

Check out the help on the command. If you ever find yourself writing a script and going “if I only knew blah de blah blah about my model…” then you probably need to use *get.
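Two quick, hedged examples (the parameter names are arbitrary):

*get,ncount,node,0,count   ! ncount = how many nodes are currently selected
*get,zmax,node,0,mxloc,z   ! zmax = largest z coordinate among selected nodes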

13. CSYS – LOCAL – RSYS

Coordinate systems are very important in ANSYS Mechanical and ANSYS MAPDL.  In most cases you should create any coordinate systems you need in ANSYS Mechanical. They will be available to you in ANSYS MAPDL, but by default ANSYS Mechanical assigns the ID for you. To use a coordinate system in MAPDL, you should specify the coordinate system number yourself in the details for that coordinate system by changing “Coordinate System” from “Program Defined” to “Manual” and then specifying a number for “Coordinate System ID.”


If you need to make a coordinate system in your APDL script, use the LOCAL command. 

When you want to use a coordinate system, use CSYS to make a given coordinate system active.

Note: Coordinate system 0 is the global Cartesian system. If you change the active coordinate system make sure you set it back to the global system with CSYS,0

RSYS is like CSYS but for results. If you want to plot or list result information in a coordinate system other than the global Cartesian, use RSYS to make the coordinate system you want active.
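Putting those together in a made-up sketch (user coordinate system numbers must be 11 or greater):

local,12,1,0,0,0   ! define system 12: cylindrical, centered at the global origin
csys,12            ! make it the active coordinate system
! ... create things or apply loads in the cylindrical system here ...
csys,0             ! go back to global Cartesian when done
rsys,12            ! and/or report results in system 12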

14. NROTAT

One thing to be very aware of is that each node in a model has a rotation associated with it. By default, the UX, UY, and UZ degrees of freedom are oriented with the global Cartesian coordinate system. In ANSYS Mechanical, when you specify a load or a boundary condition as normal or tangent to a surface, the program actually rotates all of those nodes so a degree of freedom is normal to that surface.

If you need to do that yourself because you want to apply a load or boundary condition in a certain direction besides the global Cartesian, use NROTAT.  You basically select the nodes you want to rotate, specify the active coordinate system with CSYS, then issue NROTAT,ALL to rotate them.

Be careful though. You don’t want to screw with any rotations that ANSYS Mechanical specified.
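The pattern, with invented numbers:

csys,12          ! make the coordinate system you want the nodes rotated into active
nsel,s,loc,x,5   ! select the nodes to rotate
nrotat,all       ! rotate their nodal coordinate systems into system 12
allsel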

15. D

The most common boundary condition is displacement, and the D command handles other degrees of freedom, even temperature, the same way.  To specify these in an ANSYS MAPDL script, use the D command.  Most people use nodal selection or components to apply displacements to multiple nodes.

In its simplest form you apply a single value for displacement to one node in one degree of freedom.  But you can specify multiple nodes, multiple degrees of freedom, and more powerfully, the value for deflection can be a table. Learn about tables here.
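In a simple made-up form:

nsel,s,loc,x,0   ! select the nodes on the face at x = 0
d,all,ux,0       ! hold them in x
d,all,uy,0.01    ! and displace them 0.01 in y
allsel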

16. F

The F command is the same as the D command, except it defines forces instead of displacements.  Know it, use it.
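Same idea as D, just with a force (node and value invented):

f,345,fy,-250   ! apply 250 units of force in -y to node 345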

17. SF – SFE

If you need to apply a pressure load, you use either SF to apply it to nodes or SFE to apply it to elements. They work a lot like the D and F commands.
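For example (face location and pressure value invented):

nsel,s,loc,z,10   ! select the nodes on the pressurized face
sf,all,pres,100   ! apply a pressure of 100 (in your units) over them
allsel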

18. /OUTPUT

When the ANSYS MAPDL solver is solving away it writes bits and pieces of information to a file called jobname.out, where jobname is the name of your solver job.  Sometimes you may want to write specific information, say a listing of the stresses for all the currently selected nodes, to its own file. Use /OUTPUT,filename to redirect output to that file. When you are done, issue /OUTPUT with no options and output goes back to the standard output.
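For example, to send a stress listing to its own file (the file name is made up; PRNSOL assumes you are in /POST1 with a result set read in):

    /output,my_stresses,txt    ! redirect printed output to my_stresses.txt
    prnsol,s,comp              ! list component stresses for the selected nodes
    /output                    ! back to the standard output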

19. /SHOW

ANSYS MAPDL has some very sophisticated plotting capabilities.  There are a ton of commands and options used to set up and create a plot, but the most important is /SHOW,png.  This tells ANSYS MAPDL that all plots from now on will be written in PNG format to a file. Read all about how to use this command, and how to control your plots, here.
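A minimal sketch:

    /show,png       ! send all subsequent plots to PNG files (jobname000.png, jobname001.png, ...)
    eplot           ! element plot goes to the first file
    plnsol,s,eqv    ! equivalent stress contour goes to the next
    /show,close     ! close the file and return to interactive graphics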

image

20. ETABLE

The ANSYS MAPDL solver solves for a lot of values. The more complex the element you are using, the more values there are to store.  But how do you get access to the more obscure ones? ETABLE.  Issue 38 of The Focus from 2005 goes into some of the things you can do with ETABLE.
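As a taste, here is a sketch that pulls element volume and element equivalent stress into the table (run in /POST1 after solving; the column labels are arbitrary):

    etable,e_vol,volu      ! one column: element volumes
    etable,e_seqv,s,eqv    ! another: element equivalent stress
    pretab,e_vol,e_seqv    ! list both columns
    ssum                   ! sum each column (handy for total volume)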

Where to go From Here

This is certainly not the definitive list.  Ask 20 ANSYS MAPDL users which APDL commands all ANSYS Mechanical users should know, and you might only get five or six in common. But based on the support calls we get and the scripts we write, these 20 are the ones we use most often.

Command help is your friend here.  Use it a lot.

The other thing you should do is open up ANSYS MAPDL interactively and play with these commands. See what happens when you execute them.

Video Tips: Section Planes in ANSYS 14.5

A quick video showing a new way to create section planes by using coordinate systems.

PADT’s Tempe Open House and AZ Tech Council Progress Forum – 2 Weeks Away

Two Events for the Price of Free!

Just a quick reminder, because the Facebook posts, emails, and calls from our sales people may not be getting through.

Sept 10, starting at 5 and going till 8, or whenever people get tired of networking and taking tours.

Register with PADT, Inc.:

Or with the Arizona Technology Council

If you live anywhere near the Phoenix area, we expect to see you there.

Submodeling in ANSYS Mechanical: Easy, Efficient, and Accurate

Back “in the day,” when we rode horses into work as Finite Element Analysis Engineers, we had somewhat limited compute capacity; 70,000 elements was a hard and fast limit.  But we still needed accurate results with local refinement in areas of concern.  The way we accomplished that was with a process called submodeling: you make a refined local model of just the area you care about, plus a coarse mesh that models the whole part but still fits on the computer.  The displacement field from the coarse model is then applied as a boundary condition on the refined model.

We called the refined model a zoom model or a submodel.  It worked very well for many years. Then computers got bigger and we just started meshing the heck out of those areas of interest in the full part model.  And in many cases that is still the best solution for an accurate localized stress: localized refinement.

Submodeling is one of those “tricks” in stress analysis that used to be used all the time. But until recently it was a bit of a pain to do in ANSYS Mechanical, so it fell out of use.  Now the process of doing submodeling is easy, efficient, and accurate.  The purpose of this posting is to introduce the concept to newer users who have not used it before, and to show experienced (old) users how much easier it is to do in ANSYS Mechanical vs. Mechanical APDL.

What is Submodeling?

The best description of submodeling is the illustration that has been in the ANSYS help system, in one form or another, for over 25 years:

image

The basic idea is that you have a coarse model of your whole part or assembly.  You ignore small features in the mesh that don’t have an impact on the overall response of the system – the local stiffness does not have influence on the strain beyond that local region. You then make a very refined model, the submodel, of the region of interest. You use the displacement field (and temperature if you have a temperature gradient) from the coarse model and apply it to the submodel as a boundary condition to get the accurate highly-refined response in the area of interest.

The process is based on St. Venant’s principle: “… the difference between the effects of two different but statically equivalent loads becomes very small at sufficiently large distances from load.”

An aside:
What a cool name this guy had:
Adhémar Jean Claude Barré de Saint-Venant.  To top it off he was not just a mathematician, but he was given the title of Count as well… a count mathematician. And, I have to say, I have serious beard envy.  He had some very nice facial hair, I can’t even grow thick stubble.

Anyhow, what he showed was that if you are looking at the stresses in a part far away from where loads are applied, how those loads are applied does not matter. So we can replace the forces/pressures/etc… from our coarse model with a statically equivalent deflection load and the stress field will be the same.

The way this is done in a finite element model is that you determine which faces in your submodel are “inside” your coarse model. These are called the cut boundary faces, and the nodes on those faces are the cut boundary nodes. You then apply the displacement field from the coarse model onto those nodes.

The most common use is to add mesh refinement in an area without having to solve the whole model. Another common usage is to actually mesh small features like fillets, holes, and grooves that were left out of, or under-meshed in, the full model.  It can also be used to capture material nonlinearities if that behavior is highly localized.

But probably the most beneficial use today is to study the parametric variation of small features like the size of a fillet or a hole.  If changing the size of such features does not change the overall response of the system, then you only need to do a parametric study on the submodel – as the guy with the great beard proved, if the static load does not change with your geometric variations, you don’t have to look at the whole structure.

And don’t forget the new crack growth capabilities. You will probably want to do that on a submodel around your crack and not on your whole geometry.

Here is a more modern version of the original example geometry:

image

The red highlight shows the cut boundaries. This is where you need to apply the displacement field.

image

This is the nasty coarse mesh. Now if you were modeling a single part, you would just mesh the fillets and be done with it.  But assume this is in a large assembly.

image

The Submodel. Nice elements in the key area.

You can even set up the radius as a parameter and do a study, where only the Submodel is modified and updated.

image

 

The Process

The process is fairly simple:

  1. Make and solve your full model
  2. Make a geometry model of the area you want a submodel in
  3. Attach the submodel to the engineering data and solution of the full model
  4. Set up and solve the submodel

Before we get started, here is an ANSYS 14.5 archived project for both models we will discuss in this posting:  PADT-Focus-Submodeling-2013_08_14.wbpz

For the sample geometry we showed above, the system looks like this:

image

When you go into ANSYS Mechanical for the sample model, you have a new model branch:

image

When you first get in there, the branch is empty, you have to insert Body Temperature and/or Displacement:

image

The Details for the Displacement object are as follows:

image

There are a lot of options here. It is basically using the external load mapper to map the displacements. Consult the help and just play around with the options to understand them better. In most cases, all you need to do is specify, in the Scope section, the faces that you want the displacement field applied to.

A cool feature is that once you have specified the faces, you can “Import Load” and then view the loads by clicking on the object. Graphics Controls -> Data = All shows vectors; Total/X/Y/Z shows the applied displacement field as a contour:

image

image

Now you just need to make sure your Submodel is set up correctly, you have the mesh you want, and any other loads that are applied directly to the Submodel are the same as the loads in the full model (see next section).  Run and you get your refined results.

Here is that same process with a more realistic model of a beam with a tube welded on it.  The welds are not modeled in the full model and the fillets in the beam are very coarse.

So here is the geometry. Imagine that these two parts are actually part of a very large assembly so we really can’t refine them the way we want.

image

This is what the systems look like. Note that the geometry comes from one source. I made the submodel in the same solid model in DesignModeler and just suppress the parts I don’t want in each Mechanical model.

image

The loading is simple. I fix one end and put a force on the top of the tube.

image

And here is my coarse mesh. I could probably mesh the tube with a lot more elements, especially along the axis.

image

The results. Not too useful from a stress standpoint. The deflections are good, but the fillet is missing and the beam mesh is too coarse.

image

So here is the submodel.  All the fillets are in there and it is just the area around the connection.

image

I used advanced meshing to get a really nice refined mesh. It only solves in about 20 seconds so I can really refine it.

image

Here are the cut boundaries. The bottoms of the beam ribs are also selected.

image

And here is the result. A really accurate look at the stresses in the fillet.  I could even put a probe in there and do some nice fatigue or crack growth.

image

The other thing that showed up was some stress problems on the bottom of the beam.  Those could be an issue under a high load. The fillet stress on top may yield out, but these stresses under the beam could be a fatigue problem.

image

Tips and Hints

In most cases, doing a submodel is pretty simple. But there is a lot more to it than what we covered here.  Because I need to get back to some very pressing HR tasks, I’ll just list these items so you are aware of them:

  1. Label your systems in the project page with some sort of “full” and “sub” terminology. Things get really confusing fast if you don’t.
  2. You can do submodeling with a transient or multiple substep model. In your Imported Displacement/Body Temperature, specify what load step to grab the loads from.
  3. Don’t forget temperature. One of the most common problems is when a user applies temperature and therefore gets thermal stress.  They then forget to apply that to their submodel and everything is wrong.
  4. Make sure you don’t change material properties. Remember, these models are statically identical, you are just looking at a chunk with greater refinement.
  5. Remember that loads need to be away from the area you are zooming in on.  Don’t cut where a load is applied, or even near where one is applied. The exception is temperature. (Sometimes you can get away with pressure loads too, but you have to be very careful to apply the same load over the area.)
  6. You can’t have geometry in the submodel sticking too far out of the coarse mesh. The displacement is interpolated onto the fine mesh, and if a node on the fine mesh is outside the coarse mesh, the program extrapolates, which can sometimes induce errors. If you see spotty or high stresses on your cut boundaries, that is why.  There are tools in the Submodeling details to help diagnose and fix that.
  7. If you are going to do a parametric study on geometry changes in the submodel, use a separate geometry file to create that model (I just duplicate the original and suppress the full geometry in DM).  Why? Because if you change a parameter in your geometry model, both models will need to resolve since they both use the same geometry file, even if the geometry change occurs on a part that is suppressed in the full model.
  8. You can do submodels of submodels as many levels down as you want.
  9. You can have multiple submodels in one system.
  10. Read the help; it is fairly detailed.

That is about all for now. As always: crawl, walk, run.  Start with a very simple submodel with obvious cut boundaries and get some experience.

PADT’s Albuquerque Open House a Big Success

a1a

PADT was pleased to hold our first Open House in our New Mexico office this Tuesday (8/13/2013).  We had a great crowd show up to see what we are up to in Albuquerque and around the state, learn about the latest in 3D Printing, and even sneak some ANSYS technical support in.

Missed it?  Don’t worry, we have an Open House in Tempe in September and in Colorado in October.

The thing we learned quickly is that our customers here are smart, friendly, and knowledgeable.  Even though many had never met before, it didn’t take long for small clusters to form where people shared their background, the issues they faced, and solutions that worked for them.  Seeing that type of highly technical interaction between people who had just met was great.  Here are just a few pictures from the event:

a5a

The new Polyjet 30Pro was the big hit.  So small, but so capable. Many of the attendees are existing FDM users so they enjoyed learning about the different advantages of Polyjet 3D Printing.

a3a

Lots of great conversations took place in the entry way.

a2a

With an expert like Jeff Strain in town for the day, a couple of customers got in some one-on-one technical support for ANSYS products.  This showed that we definitely need to set up some standard office hours in Albuquerque for the user community.

a4a

We just could not resist playing with the new cleaning station for the Polyjet parts.  Just like Homer Simpson.  Note our special clock for the New Mexico Office, made on our Stratasys FDM machines.