The Focus

Ansys Sherlock: A Comprehensive Electronics Reliability Tool

Posted on March 24, 2020, by: Josh Stout

As systems become more complex, the introduction and adoption of detailed Multiphysics/Multidomain tools are becoming more commonplace. Oftentimes, these tools serve as preprocessors and specialized interfaces for linking together other base-level tools or models in a meaningful way. This is what Ansys Sherlock does for Circuit Card Assemblies (CCAs), with a heavy emphasis on product reliability through detailed life cycle definitions.

In an ideal scenario, the user will have already compiled a detailed ODB++ archive containing all the relevant model information. For Sherlock, this includes .odb files for each PCB layer, the silkscreens, component lists, component locations separated by top/bottom surface, drilled locations, solder mask maps, mounting points, and test points. This provides the most streamlined experience, from CCA design through reliability analysis, though any of these items can also be imported individually.

These definitions, in combination with an extensive library of package geometries, allow Sherlock to generate a 3D model consisting of components that can be checked against accepted parts lists and material properties. The inclusion of solder mask and silkscreen layers also makes for convenient spot-checking of component location and orientation. If any of these things deviate from the expected or if basic design variation and optimization studies need to be conducted, new components can be added and existing components can be removed, exchanged, or edited entirely within Sherlock.

Figure 1: Sherlock's 2D layer viewer and editor. Each layer can be toggled on/off, and components can be rearranged.

While a few of the available analyses depend on just the component definitions and geometries (Part Validation, DFMEA, and CAF Failure), the rest are in some way connected to the concept of life cycle definitions. The overall life cycle can be organized into life phases, e.g., an operating phase, packaging phase, transport phase, or idle phase, which can then contain any number of unique event definitions. Sherlock provides support for vibration events (random and harmonic), mechanical shock events, and thermal events. At each level, these phases and events can be prescribed a total duration, cycle count, or duty cycle relative to their parent definition. On the Life Cycle definition itself, the total lifespan and accepted failure probability within that lifespan are defined for the generation of final reliability metrics. Figure 2 demonstrates an example layout for a CCA that may be part of a vehicle system, containing both high cycle fatigue thermal and vibration events and low cycle fatigue shock events.

Figure 2: Product life cycles are broken down into life phases that contain life events. Each event is customizable through its duration, frequency, and profile.

The remaining analysis types can be divided into two categories: FEA-based and part specification-based. The FEA-based tests function by generating a 3D model with detail and mesh criteria determined within Sherlock, which is then passed over to an Ansys Mechanical session for analysis. Sherlock provides quite a lot of customization at the pre-processing level; the menu options include different methods and resolutions for the PCB, explicit modeling of traces, and inclusion or exclusion of part leads, mechanical parts, and potting regions, among others.

Figure 3: Left shows the 3D model options, the middle shows part leads modeled, and right shows a populated board.

Each of the FEA tests (Random Vibration, Harmonic Vibration, Mechanical Shock, and Natural Frequency) corresponds to an analysis block within Ansys Workbench. Once these simulations are completed, the results file is read back into Sherlock, and strain values for each component are extracted and applied to either Basquin or Coffin-Manson fatigue models as appropriate for each included life cycle event.
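
For reference, the textbook forms of these two models are shown below. This is orientation only; Sherlock's exact formulation and material constants may differ. Basquin's relation covers high cycle, stress-driven fatigue, while Coffin-Manson covers low cycle, plastic-strain-driven fatigue:

\sigma_a = \sigma_f' \,(2N_f)^b \qquad \text{(Basquin)}

\frac{\Delta\varepsilon_p}{2} = \varepsilon_f' \,(2N_f)^c \qquad \text{(Coffin-Manson)}

Here N_f is the number of cycles to failure (2N_f the number of reversals), \sigma_f' and \varepsilon_f' are the fatigue strength and fatigue ductility coefficients, and b and c are the corresponding exponents.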

Part specification tests include Component Failure Analysis for electrolytic and ceramic capacitors, Semiconductor Wearout for semiconductor devices, and CTE mismatch analyses for Plated Through-Hole (PTH) and solder fatigue. These analyses are much more component-specific, in the sense that an electrolytic capacitor has completely different failure modes than a semiconductor device, and including them allows a broad range of physics to be accounted for across the CCA.

The result from each type of analysis is ultimately a life prediction for each component in terms of a failure probability curve alongside a time to failure estimate. The curves for every component are then combined into a life prediction for the entire CCA under one failure analysis.

Figure 4: Analysis results for Solder Fatigue including an overview for quantity of parts in each score range along with a detailed breakdown of score for each board component.

Taking it one step further, the results from each analysis are then combined into an overall life prediction for the CCA that encompasses all the defined life events. From Figure 5, we can see that the life prediction for this CCA does not quite meet its 5-year requirement, and that the most troublesome analyses are Solder Fatigue and PTH Fatigue. Since Sherlock makes it easy to identify these as problem areas, we could then iterate on this design by reexamining the severity or frequency of applied thermal cycles or adjusting some of the board material choices to minimize CTE mismatch.

Figure 5: Combined life predictions for all failure analyses and life events.

Sherlock’s convenience for defining life cycle phases and events, alongside the wide variety of component definitions and failure analyses available, cements Sherlock’s role as a comprehensive electronics reliability tool. As in most analyses, the quality of the results is still dependent on the quality of the input, but the checks and cross-validations between components and life events that come along with Sherlock’s preprocessing toolset really assist with this, too.

ANSYS Discovery Live: A Focus on Topology Optimization

Posted on March 10, 2020, by: Josh Stout

For those who are not already familiar with it, Discovery Live is a rapid design tool that shares the Discovery SpaceClaim environment. It is capable of near real-time simulation of basic structural, modal, fluid, electronic, and thermal problems. This is done through leveraging the computational power of a dedicated GPU, though because of the required speed it will necessarily have somewhat less fidelity than the corresponding full Ansys analyses. Even so, the ability to immediately see the effects of modifying, adding, or rearranging geometry through SpaceClaim’s operations provides a tremendous value to designers.

One of the most interesting features within Discovery Live is the ability to perform Topology Optimization for reducing the quantity of material in a design while maintaining optimal stiffness for a designated loading condition. This can be particularly appealing given the rapid adoption of 3D printing and other additive manufacturing techniques where reducing the total material used saves both time and material cost. These also allow the production of complex organic shapes that were not always feasible with more traditional techniques like milling.

With these things in mind, we have recently received requests to demonstrate Discovery Live’s capabilities and provide some training in its use, especially for topology optimization. Given that Discovery Live is amazingly straightforward in its application, this also seems like an ideal topic to expand on in blog form alongside our general Discovery Live workshops!

For this example, we have chosen to work with a generic “engine mount” geometry that was saved in .stp format. The overall dimensions are about 10 cm wide x 5 cm tall x 5 cm deep, and we assume it is made out of stainless steel (though this is not terribly important for this demonstration).

Figure 1: Starting engine mount geometry with fixed supports and a defined load.

The three bolt holes around the perimeter are fixed in position, as if they were firmly clamped to a surface, while a total load of 9069 N (-9000 N in X, 1000 N in Y, and 500 N in Z) is applied to the cylindrical surfaces on the front. From here, we simply tell Discovery Live that we would like to add a topology optimization calculation onto our structural analysis. This opens up the ability to specify a couple more options: the way we define how much material to remove and the amount of material around boundary conditions to preserve. For removing material, we can choose to either reduce the total volume by a percent of the original or to remove material until we reach a specific model volume. For the area around boundary conditions, this is an “inflation” length measured as a normal distance from these surfaces, easily visualizable when highlighting the condition on the solution tree.
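
As a quick sanity check on that resultant, the total magnitude follows directly from the three components:

\sqrt{(-9000)^2 + 1000^2 + 500^2}\ \text{N} = \sqrt{82{,}250{,}000}\ \text{N} \approx 9069\ \text{N}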

Figure 2: Inflation zone shown around each fixed support and load surface.

Since I have already planned out what kind of comparisons I want to make in this analysis, I chose to set the final model volume to 30 cm³. After hitting the simulate button, we get to watch the optimization happen alongside a rough structural analysis. By default, we are provided with a result chart showing the model's volume, which pretty quickly converges on our target volume. As with any analysis, the duration of this process is fairly sensitive to the fidelity specified, but with default settings this took all of 7 minutes and 50 seconds to complete on my desktop with a Quadro K4000.

Figure 3: Mid-optimization on the top, post-optimization on the bottom.

Once optimization is complete, there are several more operations that become available. In order to gain access to the optimized structure, we need to convert it into a model body. Both options for this result in faceted bodies with the click of a button located in the solution tree; the difference is just that the second has also had a smoothing operation applied to it. One or the other may be preferable, depending on your application.

Figure 4: Converting results to faceted geometry

Figure 5: Faceted body post-optimization

Figure 6: Smoothed faceted body post-optimization

Though some rough stress calculations were made throughout the optimization process, the next step is typically a validation. Discovery Live makes this as simple as right-clicking on the optimized result in the solution tree and selecting the “Create Validation Solution” button. This essentially copies the newly generated geometry into a new structural analysis while preserving the previously applied supports and loads. This allows for finer control over the fidelity of our validation while still providing a very fast confirmation of our results. Using maximum fidelity on our faceted body, we find that the resulting maximum stress is about 360 MPa, as compared to our unoptimized structure’s stress of 267 MPa, though of course our new material volume is less than half the original.

Figure 7: Optimized structure validation. Example surfaces that are untouched by optimization are boxed.

It may be that our final stress value is higher than what we find acceptable. At this point, it is important to note one of the limitations in version 2019R3: Discovery Live can only remove material from the original geometry; it does not add any. What this means is that any surfaces remaining unchanged throughout the process are important in maintaining structural integrity for the specified load. So, if we really want to optimize our structure, we should start with additional material in these regions to allow for more optimization flexibility.

In this case, we can go back to our original engine mount model in Discovery Live and use the integrated SpaceClaim tools to thicken our backplate and expand the fillets around the load surfaces.

Figure 8: Modified engine mount geometry with a thicker backplate and larger fillets.

We can then run back through the same analysis, specifying the same target volume, to improve the performance of our final component. Indeed, we find that after optimizing back down to a material volume of 30 cm³, our new maximum stress has decreased to 256 MPa. Keep in mind that this is very doable within Discovery Live, as the entire modification and simulation process can be done in under 10 minutes for this model.

Figure 9: Validated results from the modified geometry post-optimization.

Of course, once a promising solution has been attained in Discovery Live, we should then export the model to run a more thorough analysis in Ansys Mechanical, but hopefully this provides a useful example of how to leverage this amazing tool!

One final comment is that while this example was performed in the 2019R3 version, 2020R1 has expanded Discovery Live’s optimization capability somewhat. Instead of only being allowed to specify a target volume or percent reduction, you can choose to allow a specified increase in structure compliance while minimizing the volume. In addition to this, there are a couple more knobs to turn for better control over the manufacturability of the result, such as specifying the maximum thickness of any region and preventing any internal overhangs in a specified direction. It is now also possible to link topology optimization to a general-purpose modal analysis, either on its own or coupled to a structural analysis. These continued improvements are great news for users, and we hope that even more features continue to roll out.

Icepak in Ansys Electronic Desktop – Why should you know about it?

Posted on March 5, 2020, by: Josh Stout

The role of Ansys Electronics Desktop Icepak (hereafter referred to as Icepak, not to be confused with Classic Icepak) is in an interesting place. On the back end, it is a tremendously capable CFD solver through the use of the Ansys Fluent code. On the front end, it is an all-in-one pre- and post-processor that is streamlined for electronics thermal management, including the explicit simulation and effects of fluid convection. In this regard, Icepak can be thought of as a system-level Multiphysics simulation tool.

One of the advantages of Icepak is in its interface consistency with the rest of the Electronic Desktop (EDT) products. This not only results in a slick modern appearance but also provides a very familiar environment for the electrical engineers and designers who typically use the other EDT tools. While they may not already be intimately familiar with the physics and setup process for CFD/thermal simulations, being able to follow a very similar workflow certainly lowers the barrier to entry for accessing useful results. Even if complete adoption by these users is not practical, this same environment can serve as a happy medium for collaboration with thermal and fluids experts.

Figure 1: AEDT Icepak interface. The same ribbon menus, project manager, history tree, and display window as other EDT products.

So, beyond these generalities, what does Icepak actually offer for an optimized user experience over other tools, and what kinds of problems/applications are best suited for it?

The first thing that comes to mind for both of these questions is a PCB with attached components. In the real world, anyone who has looked inside a computer is likely familiar with motherboards covered with all kinds of little chips and capacitors and often dominated by a CPU mounted with a heatsink and fan. In most cases, this motherboard is enclosed within some kind of box (a computer case) with vents/filters/fans on at least some of the sides to facilitate controlled airflow. This is an ideal scenario for Icepak. The geometries of the board and its components are typically well represented by rectangular prisms and cylinders, and the thermal management of the system is strongly related to the physics of conjugate heat transfer. For the case geometry, it may be more convenient to import this from a more comprehensive modeler like SpaceClaim and then take advantage of the tools built into Icepak to quickly process the important features.

Figure 2: A computer case with motherboard imported from SpaceClaim. The front and back have vents/fans while the side has a rectangular patterned grille.

For a CAD model like the one above, we may want to include some additional items like heatsinks, fan models, or simple PCB components. Icepak’s geometry tools include some very convenient parameterized functions for quickly constructing and positioning fans and heatsinks, in addition to the basic ability to create and manipulate simple volumes. There are also routines for extracting openings on surfaces, such as the rectangular vent arrays on the front and back as well as the patterned grille on the side. So, not only can you import detailed CAD from external sources, you can mix, match, and simplify it with Icepak’s geometry tools, which streamlines the entire design and setup process. For an experienced user, the above model can be prepared for a basic simulation within just a matter of minutes. The resulting configuration, with an added heatsink, some RAM, and boundary conditions, could look something like this:

Figure 3: The model from Figure 2 after Icepak processing. Boundary conditions for the fans, vents, and grille have been defined. Icepak primitives have also been added in the form of a heatsink and RAM modules.

Monitor points can then be assigned to surfaces or bodies as desired; chances are that for a simulation like this, the temperature within the CPU is the most important. Additional temperature points for each RAM module or flow measurements for the fans and openings can also be defined. These points can all be tracked as the simulation proceeds to ensure that convergence is actually attained.

Figure 4: Monitoring chosen solution variables to ensure convergence.

For this simple system containing a 20 W CPU and 8 RAM modules at 2 W each, quite a few of our components are toasty and potentially problematic from a thermal standpoint.

Figure 5: Post-processing with Icepak. Temperature contours are overlaid with flow velocities to better understand the behavior of the system.

With the power of a simulation environment in Icepak at our fingertips, we can now play around with our design parameters to improve the thermal management of this system! Want to see what happens when you block the outlet vents? Easy, select and delete them! Want to use a more powerful fan or try a new material for the motherboard or heatsink? Just edit their properties in the history tree. Want to spin around the board or try changing the number of fins on the heatsink? Also straightforward, although you will have to remesh the model. While these are the kinds of things that are certainly possible in other tools, they are exceptionally easy to do within an all-in-one interface like Icepak.

The physics involved in this example are pretty standard: solid body conduction with conjugate heat transfer to a turbulent K-Omega fluid model. Where Icepak really shines is its ability to integrate with the other tools in the EDT environment. While we assumed that the motherboard was nothing more than a solid chunk of FR-4, this board could have been designed and simulated in detail with another tool like HFSS. The board, along with all of the power losses calculated during the HFSS analysis, could have then been directly imported into the Icepak project. This would allow for each layer to be modeled with its own spatially varying thermal properties according to trace locations as well as a very accurate spatial mapping of heat generation.

This is not at all to say that Icepak is limited to these kinds of PCB and CCA examples. These just tend to be convenient to think about and relatively easy to represent geometrically. Using Fluent as the solver provides a lot of flexibility, and there are many more classes of problems that could benefit from Icepak. On the low frequency side, electric motors are a good example of a problem where electronic and thermal behavior are intertwined. As voltage is applied to the windings, currents are induced and heat is generated. For larger motors, these currents, and consequently the associated thermal losses, can be significant. Maxwell is used to model the electronic side for these types of problems, and the results can then be easily brought into an Icepak simulation. I have gone through just such an example rotor/stator/winding motor assembly model in Maxwell, where I then copied everything into an Icepak project to simulate the resulting steady temperature profile in a box of naturally convecting air.

Figure 6: An example half-motor that was solved in Maxwell as a magnetostatic problem and then copied over to Icepak for thermal analysis.

If it is found that better thermal management is needed, then extra features could be added on the Icepak side as desired, such as a dedicated heatsink or external fan. Only the components with loads mapped over from Maxwell need to remain unmodified.

On the high frequency side, you may care about the performance of an antenna. HFSS can be used for the electromagnetic side, while Icepak can once again be brought in to analyze the thermal behavior. For a high-powered antenna, some components could very easily get hot enough for the material properties to appreciably change and for thermal radiation to become a dominant mode of heat transport. A 2-way automatic Icepak coupling is an excellent way to model this. Thermal modifiers may be defined for material properties in HFSS, and radiation is a supported physics model in Icepak. HFSS and Icepak can then be set up to alternately solve and automatically feed each other new loads and boundary conditions until a converged result is attained.

What all of this really comes down to is the question: how easy is it for the user to set up a model that will produce the information they need? For these kinds of electronics questions, I believe the answer for Icepak is “extraordinarily easy”. While functional on its own merit, Icepak really shines when it comes to the ease of coupling thermal management analysis with the EM family of tools.

ANSYS Mechanical: Mesh Time Metric Display

Posted on February 24, 2020, by: Joe Woodward

The things you find out from poking around the Enhancement Request list…

Did you know that you can get ANSYS Mechanical to report the amount of time that the meshing takes? I didn’t until I stumbled across this little gem on the request to show mesh time metrics.

This option has been available for many releases now. Users can turn on performance diagnostics by setting Tools -> Options -> Miscellaneous -> "Report Performance Diagnostics in Messages" to Yes inside Mechanical.

So, of course, I tried it out.

This was in version 2020R1, but it says that the option has been there since R19.0.  Now they just need to add it to the Statistics section of the Mesh Details so that we can use it as an output parameter.

Reduce EMI with Good Signal Integrity Habits

Posted on January 14, 2020, by: Aleksandr Gafarov

Recently the Signal Integrity Journal posted their ‘Top 10 Articles’ of 2019. All of the articles included were incredible; however, one stood out to me from the rest: ‘Seven Habits of Successful 2-Layer Board Designers’ by Dr. Eric Bogatin (https://www.signalintegrityjournal.com/blogs/12-fundamentals/post/1207-seven-habits-of-successful-2-layer-board-designers). In this work, Dr. Bogatin and his students were developing a 2-layer printed circuit board (PCB) while trying to minimize signal and power integrity issues as much as possible. As a result, they developed a board and described seven ‘golden habits’ for its development. These are fantastic habits that I’m confident we can all agree with. In particular, there was one habit at which I wanted to take a deeper look:

“…Habit 4: When you need to route a cross-under on the bottom layer, make it short. When you can’t make it short, add a return strap over it…”

Generally speaking, this habit suggests being very careful with the routing of signal traces over a gap in the ground plane. From the signal integrity point of view, Dr. Bogatin explained it perfectly: “…The signal traces routed above this gap will see a gap in the return path and generate cross talk to other signals also crossing the gap…”. On one hand, crosstalk won’t be a problem if there are no other nets around, so the layout might work just fine in that case. However, crosstalk is not the only risk. Fundamentally, crosstalk is an EMI problem. So, I wanted to explore what happens when this habit is ignored and there are no nearby nets to worry about.

To investigate, I created a simple 2-layer board with a signal trace, connected to a 5 V voltage source, going over an air gap. Then I observed the near field and far field results using the ANSYS SIwave solution. Here is what I found.

Near and Far Field Analysis

Typically, near and far fields are characterized by the solved E and H fields around the model. This feature in ANSYS SIwave gives the engineer the ability to simulate both E and H fields for near field analysis, and the E field for far field analysis.
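
As a quick refresher on the terminology (this is standard antenna theory rather than anything SIwave-specific): the far field is conventionally taken to begin at the Fraunhofer distance,

d = \frac{2D^2}{\lambda}

where D is the largest dimension of the radiating structure and \lambda is the wavelength. Within that distance, E and H are not simply related by the free-space impedance, which is why a near field characterization needs both fields while a far field characterization can get away with E alone.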

First and foremost, we can see, as expected, that both the near and far fields have resonances at the same frequencies. Additionally, we can observe from Figure 1 that both the E and H near fields have their largest radiation spikes at the 786.3 MHz and 2.349 GHz resonant frequencies.

Figure 1. Plotted E and H fields for both Near and Far Field solutions

If we plot the E and H fields for the near field, we can see at which physical locations we have the maximum radiation.

Figure 2. Plotted E and H fields for Near field simulations

As expected, we see the maximum radiation occurring over the air gap, where there is no return path for the current. Since we know that current is directly related to electromagnetic fields, we can also compute AC current to better understand the flow of the current over the air gap.

Compute AC Currents (PSI)

This feature has a very simple setup interface. The user only needs to make sure that the excitation sources are read correctly and that the frequency range is properly indicated. A few minutes after setting up the simulation, we get frequency dependent results for current. We can review the current flow at any simulated frequency point or view the current flow dynamically by animating the plot.

Figure 3. Computed AC currents

As seen in Figure 3, we observe the current being transferred from the energy source, along the transmission line, to the open end of the trace. On the ground layer, we see the return current directed back to the source. However, at the location of the air gap there is no metal for the return current to flow through; therefore, we see an unwanted concentration of energy along the plane edges. This energy may cause electromagnetic radiation and potential problems with emission.

If we have a very complicated multi-layer board design, it won’t be easy to simulate the current flow and near and far fields for the whole board. It is possible, but the engineer will need either extra computing time or extra computing power. To address this issue, SIwave has a feature called the EMI Scanner, which helps identify problematic areas on the board without running full simulations.

EMI Scanner

The ANSYS EMI Scanner, which is based on geometric rule checks, identifies design issues that might result in electromagnetic interference problems during operation. So, I ran the EMI Scanner to quickly identify areas on the board which may create unwanted EMI effects. After finding all potentially problematic areas on the board with the EMI Scanner, engineers are advised to run more detailed analyses on those areas using other SIwave features or HFSS.

Currently the EMI Scanner contains 17 rules, which are categorized as ‘Signal Reference’, ‘Wiring/Crosstalk’, ‘Decoupling’ and ‘Placement’. For this project, I focused on the ‘Signal Reference’ rules group, to find violations for ‘Net Crossing Split’ and ‘Net Near Edge of Reference’. I will discuss other EMI Scanner rules in more detail in a future blog (so be sure to check back for updates).

Figure 4. Selected rules in EMI Scanner (left) and predicted violations in the project (right)

As expected, the EMI Scanner properly identified 3 violations, as highlighted in Figure 4. We can either review or export the report, or analyze the violations with iQ-Harmony. With this feature, besides generating a user-friendly report with graphical explanations, we are also able to run ‘what-if’ scenarios to see the possible results of geometrical optimization.

Figure 5. Generated report in iQ-Harmony with ‘What-If’ scenario

Based on these quick EMI Scanner results, the engineer would need to either redesign the board right away or run more analysis using a more accurate approach.

Conclusion

In this blog, we were able to successfully run simulations using the ANSYS SIwave solution to understand the effect of not following Dr. Bogatin’s advice on routing a signal trace over a gap on a 2-layer board. We were also able to use four different features in SIwave, each of which delivered the correct, expected results.

Overall, it is not easy to think about all possible SI/PI/EMI issues while developing a complex board. In these modern times, engineers don’t need to manufacture a physical board to evaluate EMI problems. A lot of developmental steps can now be performed during simulations, and ANSYS SIwave tool in conjunction with HFSS Solver can help to deliver the right design on the first try.

If you would like more information or have any questions please reach out to us at info@padtinc.com.

Defining Antenna Array Excitations with Nested-If Statements in HFSS

Posted on January 7, 2020, by: Sima Noghanian

HFSS offers various methods to define array excitations. For a large array, you may take advantage of the “Load from File” option to load the magnitude and phase of each port. However, in many situations you may have specific cases of array excitation to examine; for example, changing the amplitude tapering, or the phase variations that happen due to frequency changes. In this blog we will look at using the “Edit Sources” method to change the magnitude and phase of each excitation. There are cases that might not be easily automated using a parametric sweep. If the array is relatively small and there are not many individual cases to examine, you may set up the cases using “array parameters” and “nested-if” statements.

In the following example, I used nested-if statements to parameterize the excitations of the pre-built example “planar_flare_dipole_array”, which can be found by choosing File->Open Examples->HFSS->Antennas (Fig. 1) so you can follow along. The file was then saved as “planar_flare_dipole_array_if”. Then one project was copied to create two examples (Phase Variations, Amplitude Variations).

Fig. 1. Planar_flare_dipole_array with 5 antenna elements (HFSS pre-built example).

Phase Variation for Selected Frequencies

In this example, I assumed there were three different frequencies that each had a set of coefficients for the phase shift. Therefore, three array parameters were created. Each array parameter has 5 elements, because the array has 5 excitations:

A1: [0, 0, 0, 0, 0]

A2: [0, 1, 2, 3, 4]

A3: [0, 2, 4, 6, 8]

Then 5 coefficients were created using a nested-if statement. “Freq” is one of the built-in HFSS variables that refers to frequency. The simulation was set up for a discrete sweep of 3 frequencies (1.8, 1.9 and 2.0 GHz) (Fig. 2). The coefficients were defined as (Fig. 3):

E1: if(Freq==1.8GHz,A1[0],if(Freq==1.9GHz,A2[0],if(Freq==2.0GHz,A3[0],0)))

E2: if(Freq==1.8GHz,A1[1],if(Freq==1.9GHz,A2[1],if(Freq==2.0GHz,A3[1],0)))

E3: if(Freq==1.8GHz,A1[2],if(Freq==1.9GHz,A2[2],if(Freq==2.0GHz,A3[2],0)))

E4: if(Freq==1.8GHz,A1[3],if(Freq==1.9GHz,A2[3],if(Freq==2.0GHz,A3[3],0)))

E5: if(Freq==1.8GHz,A1[4],if(Freq==1.9GHz,A2[4],if(Freq==2.0GHz,A3[4],0)))

Please note that the last case is the default, so if the frequency is none of the three frequencies given in the nested-if, the default phase coefficient is chosen (“0” in this case).

Fig. 2. Analysis Setup.

Fig. 3. Parameter definitions for the phase variation case.

By selecting the menu item HFSS ->Fields->Edit Sources, I defined E1-E5 as coefficients for the phase shift. Note that phase_shift is a variable defined to control the phase, and E1-E5 are meant to be coefficients (Fig. 4):

Fig. 4. Edit sources using the defined variables.

The radiation pattern can now be plotted at each frequency for the phase shifts that were defined (A1 for 1.8 GHz, A2 for 1.9 GHz and A3 for 2.0 GHz) (Figs 5-6):

 Fig. 5. Settings for radiation pattern plots.

Fig. 6. Radiation pattern for phi=90 degrees at different frequencies; the variation of phase shifts shows how the main beam has shifted for each frequency.

Amplitude Variation for Selected Cases

In the second example I created three cases that were controlled using the variable “CN”. CN is simply the case number with no units.

The variable definition was similar to the first case. I defined 3 array parameters and 5 coefficients. This time the coefficients were used for the magnitude, and the variable in the nested-if was CN. That means 3 cases and a default case were created. The default coefficient here was chosen as “1” (Figs. 7-8).

A1: [1, 1.5, 2, 1.5, 1]

A2: [1, 1, 1, 1, 1]

A3: [2, 1, 0, 1, 2]

E1: if(CN==1,A1[0],if(CN==2,A2[0],if(CN==3,A3[0],1)))*1W

E2: if(CN==1,A1[1],if(CN==2,A2[1],if(CN==3,A3[1],1)))*1W

E3: if(CN==1,A1[2],if(CN==2,A2[2],if(CN==3,A3[2],1)))*1W

E4: if(CN==1,A1[3],if(CN==2,A2[3],if(CN==3,A3[3],1)))*1W

E5: if(CN==1,A1[4],if(CN==2,A2[4],if(CN==3,A3[4],1)))*1W

Fig. 7. Parameter definitions for the amplitude variation case.

Fig. 8. Excitation settings for the amplitude variation case.

Notice that CN in the parametric definition has the value of “1”. To create the solution for all three cases I used a parametric sweep definition by selecting the menu item Optimetrics->Add->Parametric. In the Add/Edit Sweep dialog I chose the variable “CN”, Start: 1, Stop: 3, Step: 1. Also, in the Options tab I chose “Save Fields and Mesh”, “Copy geometrically equivalent meshes”, and “Solve with copied meshes only”. This selection avoids redoing the adaptive meshing, since the geometry is not changed (Fig. 9). In plotting the patterns I could now choose the parameter CN, and the results of plotting for CN=1, 2, and 3 are shown in Fig. 10. You can see how the tapering of amplitude has affected the side lobe level.

Fig. 9. Parametric sweep settings for the amplitude variation case.

Fig. 10. Radiation pattern for phi=90 degrees and different cases of amplitude tapering; the variation of amplitude tapering has caused a change in the beamwidth and side lobe levels.

Drawback

The drawback of this method is that array parameters are not post-processing variables. This means changing them requires re-running the simulations. Therefore, all possible cases need to be defined before running the simulation.

If you would like more information or have any questions please reach out to us at info@padtinc.com.

Getting Bulk Properties for Repeated Structures in ANSYS Mechanical with Material Designer

Posted on January 2, 2020, by: Alex Grishin

Using Material Designer To Perform Homogenization Studies

Editor's Note:

3D Printing and other advanced manufacturing methods are driving the increased use of lattice-type structures in structural designs. This is great for reducing mass and increasing the stiffness of components, but it can be a real pain for those of us doing simulation. A model containing all of those tiny features across a part is difficult to mesh and takes forever to solve.

PADT has been doing a bit of R&D in this area recently, including a recent PHASE II NASA STTR with ASU and KSU. We see a lot of potential in combining generative design and 3D Printing to drive better structures. The key to this effort is efficient and accurate simulation.

The good news is that we do not have to model every unit cell. Instead, we can do some simulation on a single representative chunk and use the ANSYS Material Designer feature to create an approximate material property that we can use to represent the lattice volume as a homogeneous material.

In the post below, PADT's Alex Grishin explains it all with theory, examples, and a clear step-by-step process that you can use for your lattice filled geometry.

PADT-ANSYS-Lattice-Material_Homogenization

Join PADT in Welcoming Jeff Wells, Business Development Manager, Engineering Services

Posted on November 12, 2019, by: Eric Miller

Here at PADT, we pride ourselves on our ability to make our customers’ ideas for innovation practical and get them to market. No matter how complex the challenge is, we have the engineering expertise and technology tools to work with our customers and deliver tailored solutions to meet their needs. And for every solution we create, there’s a business development team leading the partnership with our customers. We’re excited to welcome the newest leader of this team, Business Development Manager for Engineering Services, Jeff Wells.

“PADT’s engineering services are thriving behind the work of our outstanding team,” said Eric Miller, co-founder and principal, PADT. “Jeff adds a tremendous amount of experience as both an engineer and a business development leader. His knowledge of the industry and the community will elevate our ability to attract new and innovative customers.”

To help PADT improve its market position in engineering services and product development, Wells will be responsible for building new customer relationships and seeking new opportunities to solve complex challenges. His focus will be on serving customers in a wide variety of industries, including aerospace and defense, medical, and industrial.

“Throughout my many years in engineering here in Arizona, I’ve been keenly aware of the outstanding services provided by PADT,” said Wells. “The company’s reputation and the wonderful people I’ve gotten to know over the years made it an easy decision to join the team. I look forward to contributing to the company’s strategy for growing its engineering services department.”

Jeff and his Family in New Zealand

Wells brings nearly 30 years of engineering, business development, and sales experience to the position. He joins PADT after spending the past five years in the director of business development role at CollabraTech Solutions. Wells joined CollabraTech early in the company’s lifecycle and helped grow the gas and chemical delivery product company from a few million dollars in revenue to over $14 million, by diversifying their customer base, the markets they served and the projects they pursued.

Early in his career, Wells worked as an engineer designing a wide variety of products from parts for Airbus aircraft engines to laser part marking kiosks and semiconductor capital equipment. He quickly realized his propensity for combining his engineering expertise with his communication skills, and in the late ‘90s, he began his career in business development. Wells worked at Advanced Integration Technologies for 10 years as a business development engineer and business development manager. He later worked closely with senior leadership on business development operations at Ultra Clean Technology and led business development for Foresight Processing.

Wells holds a Bachelor of Science in Aerospace Engineering from Arizona State University (ASU). He and his wife, Kate Wells, CEO of the Phoenix Children’s Museum, have been married for 27 years and have two daughters who attend school at Massachusetts Institute of Technology and Barrett, the Honors College at ASU. In their free time, Wells and his family enjoy traveling. A decade ago, Jeff and his wife took their two daughters out of school for 14 months backpacking around the globe, visiting 22 countries. Wells also enjoys being outdoors hiking, playing sports, snowboarding and water skiing.

You can find a writeup in the Phoenix Business Journal here and his LinkedIn profile is here.

To learn more about PADT’s engineering service capabilities and to connect with Jeff Wells, please visit www.padtinc.com/services  or call us at 1-800-293-7238.

Property Controllers in ANSYS ACT

Posted on October 31, 2019, by: Matt Sutton

Customizations developed using ANSYS ACT adhere closely to the user experience that is native to Mechanical and other Workbench apps.  Obviously, this is to be expected, but sometimes it can be a little challenging to fit a particular workflow into the “tree object” plus “object properties” model.   One way to broaden the available set of user experiences from which to construct a customized behavior is to use what are known as property controllers.

Property controllers are classes, implemented in either C# or Python, that are associated with a given property in the details pane of a given ACT object.  These classes allow the programmer to specialize the functionality and behavior of the particular property to which they are associated.  The association between a given property and its property controller is made in the ACT XML definition file.  For this article, like most of the ones I write on ACT, I will be using C# as the implementation language.

The degree to which any given property can be specialized by a property controller is quite vast.  Therefore, I won’t be able to touch on all of the possible combinations.  However, I will demonstrate two that I have found particularly useful in various ACT apps I’ve written.

The first is a custom “select” controller that allows the user to pick one of a set of given Mechanical objects.  Perhaps the canonical example is a select controller that allows a user to pick a particular coordinate system out of all of the defined coordinate systems in the model.  Yes, there is a template for this that ships with Mechanical, but I will show a full implementation.  Understanding how the controller works will enable you to apply the same technique to other object types, even other ACT objects within the same app.

The second is a way to “fly out” a dialog box that can contain additional custom controls, and that is “anchored” to the side of the given property box within the details pane.  This is useful for scenarios when we can’t easily fit a particular data entry within a given property field.  Tabular data is a prime example.  Again, there are some templates for this in Mechanical, but understanding how to build it up from scratch will allow you to apply the same principles to more complex dialogs.  This second example will be covered in a subsequent blog post.

Foundations

Before we dive into the individual examples above, let’s understand some of the basics of property controllers in general.  First, how do we associate a given property controller class with a particular property?  This is accomplished by using the “class” attribute in the property tag within the XML definition.  So, here is an example from an extension XML file:

<property name="EngineAxis" 
  caption="Coordinate System" 
  control="select"                         
  class="PADT.PropertyControllers.CoordinateSystemSelectController">
  <attributes type_filter="cylindrical"/>  
</property> 

You can see that we’ve added a “class” attribute inside the property tag and set it equal to a fully qualified class name.  All other property attributes are the same as with a typical property.  In order for this to work, however, we will need to implement a class called “CoordinateSystemSelectController” in the PADT.PropertyControllers namespace.

You will also notice that there is a nested <attributes> tag inside the <property> tag.  This can be used to pass additional configuration data to the controller as we will see.  Clearly, in this case, the additional data is designed to constrain the types of coordinate systems that will be populated within the control.

Example 1: Coordinate System Select Property Controller

The way behavior is customized for a given property is by implementing overrides for a series of virtual functions defined on the property itself.  These virtual functions allow us to hook into various points within the property’s lifetime and operation.  The names of these virtual functions correspond to the callbacks listed in the ANSYS help for the <property> tag in the ACT XML reference manual.  The names are always lowercase.  Common ones that I use are functions such as “onactivate”, “onshow”, “isvalid”, “value2string” and “getvalue”.  Except for the “value2string” function, most of these are probably self-explanatory as to when they would fire.  For this controller, I’ll demonstrate a few of these functions, including when and how to use “value2string”.

Let’s begin with the “onactivate” function.  This function is called when the user “selects” or activates the property.  So, within this function is a good place to populate the list of currently available coordinate systems.  It is tempting to cache this list so that it doesn’t have to be recomputed.  However, if the user deletes, or adds a coordinate system after we have cached the list, we would not display it as an option the next time they activated this control.  Therefore, on each activation, we build up a list of available coordinate systems.  Here is the code:
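
The code itself appears only as a screenshot in the original post, so here is a minimal C# sketch of the same logic to make the following discussion easier to follow.  The type and member names (IUserObject, ISimProperty, the ExtAPI lookups, and the ParseTypeFilter helper) are placeholders standing in for the actual ACT API, and the line numbers quoted in the discussion below refer to the original screenshot, not to this condensed listing:

public class CoordinateSystemSelectController
{
    // Fired when the user activates (clicks into) the property.
    // treeObject: the ACT object whose details pane owns this property.
    // property:   the SimProperty this controller is attached to.
    public void onactivate(IUserObject treeObject, ISimProperty property)
    {
        // Rebuild the drop down from scratch on every activation -- caching
        // would miss coordinate systems added or deleted since the last time.
        property.Options.Clear();

        // Read the "type_filter" attribute from the XML definition (e.g.
        // "cylindrical") and map it to one or more CSYS type enum values.
        // ParseTypeFilter is a small helper, not shown here.
        var allowedTypes = ParseTypeFilter(property.Attributes["type_filter"]);

        // Walk the coordinate systems currently defined in the session and
        // keep the ones that pass the filter.  We store the object Id (which
        // is unique) rather than the name (which Mechanical does not force
        // to be unique).
        foreach (var csys in ExtAPI.DataModel.Project.Model.CoordinateSystems.Children)
        {
            if (allowedTypes.Contains(csys.CoordinateSystemType))
                property.Options.Add(csys.ObjectId.ToString());
        }

        // If the user hasn't picked a CSYS yet, default to the first valid
        // entry so they aren't forced to make the same selection on every
        // new object.  A later call will not clobber an existing choice,
        // because this null check fails once Value is populated.
        if (property.Value == null && property.Options.Count > 0)
            property.Value = property.Options[0];
    }
}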

You can see that the parameters to this function are a parameter representing the “tree object” whose details pane contains this property, and a parameter representing the “property” itself.  The second parameter might seem counterintuitive.  You might think that we are subclassing the property itself and thus this parameter would be redundant (i.e., it would be equivalent to the “this” object).  However, we are not subclassing the property per se, but rather implementing a controller object that the property itself makes calls against to modify its own behavior.  Sounds convoluted, I understand, but my guess is that this is what allows us to specify all of this within the extension XML file.  So, it’s a good thing.

Once we get into the function proper, on line 55 we clear out all of the items within this property’s associated drop down control.  Then, in lines 56-58, we figure out which enum constants represent the types of coordinate systems (cylindrical, Cartesian, etc…) that we would like to present to the user.  Note, our attribute “type_filter” could contain multiple types.  Then, in lines 59-67, we iterate over all of the coordinate systems currently defined in the Mechanical session and pick out the ones that are of the right type.  We then add them to the “options” property of this SimProperty object.  Note, however, that we don’t add the coordinate system objects themselves, but rather string representations of the object Ids.  This is important.  The reason we don’t add the name of the coordinate system is that names (or labels) in Mechanical are not required to be unique.  You can create five coordinate systems and name all of them “Bob”; Mechanical doesn’t care and will treat them as unique.  So, we need a unique attribute of the coordinate system to store in our list of options.  The object Id is guaranteed to be unique, so we store this instead.

The final bit of code in this function just makes sure to default-select one coordinate system if the user hasn’t already selected one.  That functionality is on lines 69-83.  If the Value property is null, then the user (or this code) has not populated the select with a given coordinate system.  So, if there are any appropriate coordinate systems, we just find the first one and select it automatically.  Note, if the user later changes this to a different CSYS and this function fires a second time, it will not overwrite their choice, because the null check will fail on line 69.  The reason for this behavior is that the extension for which this code was written made extensive use of a single cylindrical coordinate system in a number of different objects.  Typically the user would add just this one coordinate system in addition to the default global Cartesian system.  So, with this code, the user is not required to select this coordinate system each time they add a new object; the tool does it for them.

The next bit of code to examine is the “value2string” function, which is shown below:
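
Rendered again as a hedged C# sketch, since the original is a screenshot (same placeholder type names as before):

    // Translates the stored value (an object Id string) into the text the
    // details pane displays.  This is purely a display-side transformation;
    // the value stored in the property is never modified.
    public string value2string(IUserObject treeObject, ISimProperty property)
    {
        int id;
        if (!int.TryParse(property.Value as string, out id))
            return string.Empty;          // nothing selected, or a bad value

        // Find the tree object carrying this Id and show its name.
        // GetObjectById is an illustrative lookup, not necessarily the
        // actual ACT call.
        var csys = ExtAPI.DataModel.GetObjectById(id);
        return csys != null ? csys.Name : string.Empty;
    }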

You may recall from above that the data we stored in the options property was a list of string representations of the various coordinate system Ids.  Now, if we didn’t implement this function, when the user interacted with the drop down control, what they would see would be a list of numbers.  They might see a list like “42”, “87” and “94”.  Clearly, it’s not very intuitive which coordinate systems these numbers refer to.

So, what the “value2string” function allows us to do is transform the data that the property actually stores into a visual representation that is meaningful to the user.  However, this is purely a stateless transformation.  The actual data stored in the property always remains the string representation of the object’s Id.  So, you can think of this function as sitting between the internal code that pulls a value out of the property and the internal code that renders that value to the screen.  In between these two calls, we have the opportunity to transform what gets rendered.

So, essentially what we do inside this function is parse the Id string back into an integer.  If that succeeds, we then look up the particular Mechanical tree object that has this given Id.  Finally, if everything is kosher with this object, we return the name of the object we looked up.  If at any point something goes wrong, we just return an empty string.

Now, when the user interacts with the property controller, they will see a list of names corresponding to the coordinate systems of the appropriate type.  If they sadistically named all of these coordinate systems the same name, then they will see a list with multiple entries of the same name.  However, each one in the list is a unique coordinate system.  How they figure out which one is the one they actually want is now their problem…

Finally, the last function we will look at is the “getvalue” function.  As the “value2string” function made the experience of the end user more palatable, so too the “getvalue” function makes the experience of the developer more palatable.  Essentially what it does is analogous to the “value2string” function, but rather than returning a string, it returns an actual coordinate system object that can be used in other places in the system.  It looks like the following:
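
Here is the corresponding hedged sketch, with the same caveats about placeholder names:

    // Returns the coordinate system tree object itself rather than its
    // string Id, so other extension code can consume the selection directly.
    // The caller casts the returned object to the appropriate type.
    public object getvalue(IUserObject treeObject, ISimProperty property)
    {
        int id;
        if (!int.TryParse(property.Value as string, out id))
            return null;

        return ExtAPI.DataModel.GetObjectById(id);   // illustrative lookup
    }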

As you can see, it is very similar to the “value2string” function, but instead of returning a string, it returns the actual tree object itself.  Note, you have to cast the return value at the caller site to the appropriate type, but meh… it’s still nice to have.

Finally, to see this property controller in action, I’ve taken a quick screen grab of the properties pane of an ACT object I’ve implemented.  This is a little symmetry object that implements a homebrewed CPCYC, but you can see the coordinate system object.

That’s all for this post.  Next time we’ll look at how to implement the flyout feature.  Good luck with your ACT programming needs.  Oh, and if you need some help, or ever want to have some ANSYS customization done for you, let us know.  We do all sorts of customization work, from the more run-of-the-mill to the highly complex.

ANSYS Mechanical – Overcoming Convergence Difficulties with the Semi-Implicit Method

Posted on October 9, 2019, by: Ted Harris

In our last blog, we discussed using Nonlinear Adaptive Region to overcome convergence difficulties by having the solver automatically trigger a remesh when elements have become excessively distorted.  You can read it here:  http://www.padtinc.com/blog/ansys-mechanical-overcoming-convergence-difficulties-with-automatic-remeshing-nonlinear-adaptive-region/

This time we look at another tool for overcoming convergence difficulties, the Semi-Implicit method.  ANSYS, Inc. describes the semi-implicit method as a hybrid, combining features of both implicit and explicit finite element methods.

In highly nonlinear problems involving significant deformations we may get a solver error like this one: 

*** ERROR ***                           CP =   18110.688   TIME= 11:58:42
Solution not converged at time 0.921 (load step 1 substep 185).           Run terminated. 

Like it does with other problems that lead to convergence failures, the Solution branch will have telltale red lightning bolts, indicating the solution was not able to complete due to nonconvergence.

In this case, it can be difficult to determine from the error message in the solution output exactly what the problem is.  Plotting the Newton-Raphson residuals can be a good starting point.  In order to plot the Newton-Raphson residuals, though, we need to turn them on prior to solving.  See this older Focus blog for instructions on how to do that:

http://www.padtinc.com/blog/overcoming-convergence-difficulties-in-ansys-workbench-mechanical-part-i-using-newton-raphson-residual-information/

A plot of the Newton-Raphson residuals shows us where the highest force imbalance is in the model:

That’s a nice-looking plot, but it doesn’t tell us much without knowing more about the simulation.  The model is of a plastic bottle, subject to a force load tending to ‘crush’ the bottle from top to bottom.  There is a slight off-center load as well, so that the force is not purely in the downward direction.

The bottle is constrained with a fixed support on the bottom flat surface, and contact elements between the outer surface of the bottle and a fixed surface representing a table or floor.  This is to prevent the bottle from deflecting below the plane of that surface.

The material used is a polyethylene plastic, from the ANSYS Granta Materials Data for Simulation add-on, which is a great tool to get access to hundreds of materials for ANSYS simulations.  The geometry of the bottle was created in SpaceClaim as a surface body and meshed with shell elements in ANSYS Mechanical. 

The solution was run as nonlinear static, with large deflection effects turned on.  Automatic Time Stepping was manually activated with a starting and minimum number of substeps set to 200 and a maximum number of substeps set to 1000.

With these settings, the solution ran to about 92% of the full load, where it failed to solve after bisecting to the maximum number of substeps (minimum ‘time’ step).  The force convergence plots showed the bisections and failed convergence attempts started at about iteration 230 and ‘time’ 0.92.  (If you are not familiar with the convergence plots from a Newton-Raphson method solution, please see our Focus archives for an article on the topic – look for the link to the GST Plot:  http://www.padtinc.com/blog/wp-content/uploads/oldblog/PADT_TheFocus_08.pdf).
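
For readers who want the one-line version of what those force convergence plots track: at each equilibrium iteration, the solver forms the Newton-Raphson residual, i.e., the out-of-balance load

\{R\} = \{F^{a}\} - \{F^{nr}\}

(applied loads minus the internal restoring loads), and force convergence is declared when a norm of \{R\} falls below the plotted criterion. Bisection kicks in when the iterations stop driving that residual down quickly enough.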

Even though our solution has not converged, it is probably helpful to view the deformation results for substeps which did converge (at partial load) as well as the unconverged results which will be written as the last set of results.

This plot shows the total deformation at the last converged substep (time value 0.92):

This plot shows the unconverged solution, ‘extrapolated’ to time 1.0:

From the unconverged deformation plot we can see that the top of the bottle is tending to experience very large deformations.  It’s not surprising that convergence difficulties are being encountered.

One of the techniques we can utilize to get past this problem is the Semi-Implicit method in ANSYS Mechanical.  As of 2019 R2, this needs to be activated using a Mechanical APDL command object, but it can be as simple as adding a single word within the Static Structural branch:

SEMIIMPLICIT

There are some optional fields on that command, but minimally just the one word command is needed.

Once the semi-implicit method is activated, if the solver detects that the default implicit solver is having trouble, it automatically switches to the semi-implicit solving scheme.  Like a traditional explicit solver, the semi-implicit method can better handle very large deformations and transitory-like effects.  The method can switch back to implicit if conditions warrant a more efficient solution, and in fact it can switch back and forth between the two schemes.
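
A rough intuition for why the time increments become so small during the explicit-like phase (this is the standard stability argument for explicit integration, not a statement of the exact limit the semi-implicit solver enforces): a conditionally stable explicit scheme must resolve a stress wave crossing the smallest element,

\Delta t \lesssim \frac{L_e}{c}, \qquad c = \sqrt{E/\rho}

where L_e is a characteristic element length and c is the material wave speed. This is consistent with the very small TIME INC values visible in the solver output below.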

The solver output will tell us if the semi-implicit scheme has been activated:

EQUIL ITER  26 COMPLETED.  NEW TRIANG MATRIX.  MAX DOF INC=  0.9526
     NONLINEAR DIAGNOSTIC DATA HAS BEEN WRITTEN TO  FILE: file.nd004
     DISP CONVERGENCE VALUE   =  0.3918      CRITERION=   1.448     <<< CONVERGED
     LINE SEARCH PARAMETER =  0.4113     SCALED MAX DOF INC =  0.3918
     FORCE CONVERGENCE VALUE  =   44.44      CRITERION=  0.9960
     MOMENT CONVERGENCE VALUE =   3.263      CRITERION=  0.1423
    Writing NEWTON-RAPHSON residual forces to file: file.nr001
    >>> TRANSITIONING TO SEMI-IMPLICIT METHOD
     NONLINEAR DIAGNOSTIC DATA HAS BEEN WRITTEN TO  FILE: file.nd001
    EQUIL ITER   1 COMPLETED.  NEW TRIANG MATRIX.  MAX DOF INC=  0.8788E-04
     NONLINEAR DIAGNOSTIC DATA HAS BEEN WRITTEN TO  FILE: file.nd002
 *** LOAD STEP     1   SUBSTEP   185  COMPLETED.    CUM ITER =    284
 *** TIME =  0.920010         TIME INC =  0.100000E-04
    Kinetic Energy = 0.2157        Potential Energy =  60.59
 *** AUTO STEP TIME:  NEXT TIME INC = 0.10000E-04  UNCHANGED
     NONLINEAR DIAGNOSTIC DATA HAS BEEN WRITTEN TO  FILE: file.nd003

There are some ‘symptoms’ of the switch from the implicit to the semi-implicit scheme.  The most obvious is probably that the force convergence plot stops updating. 

Changing the Solution Output to the Solver Output will show the explicit-like scheme being used in that case.  The telltale is the information on Response Frequency and Period (the example shown is from a static structural solution).

Deformation plot trackers and contact trackers continue to work as expected during the solution, however.

Using the semi-implicit method, the solution was able to successfully converge to the full load, and converged results are available at the last time point:

We also used the new keyframe animation technique to animate the results time history.

The semi-implicit method is well documented within the Mechanical APDL 2019 R2 Help, in the Advanced Analysis Guide, chapter 3 on Semi-Implicit Method.  We suggest reviewing that information to get a much better handle on the technique.

We hope this is helpful in getting your nonlinear solutions to converge to the full value of the applied loads.

Video Interview: Topology Optimization versus Generative Design

Posted on September 4, 2019, by: Eric Miller

While attending the 2019 RAPID + TCT conference in Detroit this year, I was honored to be interviewed by Stephanie Hendrixson, the Senior Editor of Additive Manufacturing magazine and website. We had a great chat, covering a lot of topics. I do tend to go on, so it turned into two videos.

The first video is about the use of simulation in AM. You should watch that one first, here, because we refer back to some of the basics when we zoom in on optimization.

Generative design is the use of a variety of tools to drive the design of components and systems to directly meet requirements. One of those tools, and the most commonly used, is Topology Optimization. Stephanie and I explore what it is all about, and the power of using these technologies, in this video:

https://youtu.be/QLA92V_85_I

You can view the full article on the Additive Manufacturing website here.

If you have any questions about how you can leverage simulation to add value to your AM processes, contact PADT or shoot me an email at eric.miller@padtinc.com.

Video Interview: 3 Roles for Simulation in Additive Manufacturing

Posted on September 4, 2019, by: Eric Miller

While attending the 2019 RAPID + TCT conference in Detroit this year, I was honored to be interviewed by Stephanie Hendrixson, the Senior Editor of Additive Manufacturing magazine and website. We had a great chat, covering a lot of topics. I do tend to go on, so it turned into two videos.

In the first video, we chat about how simulation can improve the use of Additive Manufacturing for production hardware. We go over the three uses: optimizing the part geometry to take advantage of AM's freedom, verifying that the part you are about to create will survive and perform as expected, and modeling the build process itself.

You can read the article and watch the video here on the Additive Manufacturing website. Or you can watch it here:

https://youtu.be/X5NfOJP_ivo

If you have any questions about how you can leverage simulation to add value to your AM processes, contact PADT or shoot me an email at eric.miller@padtinc.com.

For the second interview, we focus on Topological Optimization, Generative design, and the difference between the two. Check that out here.

ANSYS Mechanical – Overcoming Convergence Difficulties with Automatic Remeshing (Nonlinear Adaptive Region)

Posted on August 19, 2019, by: Ted Harris

One of the problems we can encounter in a nonlinear structural analysis in ANSYS Mechanical is that elements become so distorted that the solver cannot continue.  We get messages saying the solver was unable to complete, and the solver output will contain a message like this one:

 *** ERROR ***                           CP =      37.969   TIME= 14:40:06
 Element 2988 (type = 1, SOLID187) (and maybe other elements) has become
 highly distorted.  Excessive distortion of elements is usually a       
 symptom indicating the need for corrective action elsewhere.  Try      
 incrementing the load more slowly (increase the number of substeps or  
 decrease the time step size).  You may need to improve your mesh to    
 obtain elements with better aspect ratios.  Also consider the behavior 
 of materials, contact pairs, and/or constraint equations.  Please rule 
 out other root causes of this failure before attempting rezoning or    
 nonlinear adaptive solutions.  If this message appears in the first    
 iteration of first substep, be sure to perform element shape checking. 

The Solution branch will have the telltale red lightning bolts, indicating the solution was not able to complete due to nonconvergence.

If you are not aware, one technique we can use to get past this problem of excessive element distortion is to have ANSYS automatically remesh the model or a portion of the model while the solution is progressing.  The current state of the model is then mapped onto the new mesh, in the currently deflected state.  In this manner we can automatically continue with the solution after a slight pause for this remeshing to occur.  Minimally, all we need to do as users is insert a Nonlinear Adaptive Region under the Static Structural branch, and review and specify a few settings (more on this later).

Let’s take a look at a simple example.  This is a wedge portion of a circular hyperelastic part, subject to a pressure load on the top surface.  Other boundary conditions include a fixed support on the bottom and frictionless supports on the two cut faces of the wedge.

For this case, the nonlinear adaptive region is the entire part. 

The initial mesh was set up as a default mesh, although note that for 3D models the nonlinear adaptive capability requires a tetrahedral mesh up through the current version, 2019 R2.

Prior to solving with the nonlinear adaptive region included, this model fails to converge at about 56% of the total load.  With the addition of the nonlinear adaptive region, the model is automatically remeshed at the point of excessive element distortion, and the solution is able to proceed until the full load is applied.  The force convergence graph has a solid vertical orange line at the point where remeshing occurred.  The method can result in multiple remeshing steps, although in the sample model shown here only one remesh was needed.

The image on the left, below, shows the original mesh at the last converged substep before remeshing occurred.  The image on the right is the first result set after remeshing was completed.

The tabular view of a result item will show in the last column if remeshing has occurred during the solution.

Here is the final deformation, for the full amount of pressure load applied on the top surface.

Next, let’s take a look at the nonlinear adaptive region capability in more detail.

First, multiple substeps must be used for the solution.  If we are performing a nonlinear analysis, this will be the case anyway.  Second, Large Deflection needs to be turned on in the Analysis Settings branch.  Also, results must be stored at all time points (note that time is a tracking parameter in a static analysis, but all static as well as transient results in ANSYS Mechanical are associated with a value of ‘time’).
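Expressed as Mechanical APDL commands, those prerequisites look something like the following (a sketch of the equivalent settings, not a complete input deck; the substep values are placeholders):

NLGEOM,ON            ! large deflection effects on
AUTOTS,ON            ! automatic time stepping
NSUBST,20,1000,10    ! multiple substeps (placeholder values)
OUTRES,ALL,ALL       ! store results at all time points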

There are several restrictions on features that CAN’T be in the model, such as cyclic symmetry (hence the frictionless support BCs on the simple model shown above), Auto Asymmetric Contact, Joints, Springs, Remote Forces and Displacements, etc.  Certain material properties are also excluded, such as Cast Iron plasticity and Shape Memory Alloy.  And as mentioned above, for 3D models the mesh must be tetrahedral.  For a full listing of these restrictions, refer to the ANSYS Mechanical User’s Guide.  A search on ‘nonlinear adaptive’ will take you to the right location in the Help.

Nonlinear Adaptive Regions can be scoped to 3D solid and 2D bodies, or to elements via a Named Selection. 

In the Details view for the Nonlinear Adaptive Region, the main option to be defined is the Criterion by which remeshing will be initiated.  There are three options available in Mechanical:  Energy, Box, and Mesh.

The Energy criterion checks the strain energy of each element within the Nonlinear Adaptive Region.  If an element’s strain energy exceeds the criterion value, remeshing is triggered.  The input is an energy coefficient between zero and one, which multiplies the ratio of the total strain energy of the component to the number of elements in the component; in other words, an element triggers remeshing when its strain energy exceeds coefficient × (total strain energy / element count).  Recommended values are 0.85-0.9.  A lower coefficient makes remeshing more likely.

The Box criterion defines a geometry region based on a coordinate system and bounds relative to that coordinate system.  Elements in the Nonlinear Adaptive Region whose nodes have all moved within the box will be remeshed.  The idea is that if it’s known that elements will be highly distorted as they move into a certain region, we can ensure that remeshing will occur there.

The Mesh criterion allows us to specify that remeshing will occur if mesh quality measures drop below certain levels as the mesh distorts.  For 3D models, the available measures are Jacobian Ratio and Skewness.  These are described in the Mechanical User’s Guide in the section on Nonlinear Adaptive Region.

In the example shown above, the Energy criterion was used with an energy coefficient of 0.85.
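For reference, the underlying Mechanical APDL command for this criterion is NLADAPTIVE (a minimal sketch, assuming the criterion is scoped to all selected elements; Mechanical generates the full command sequence, including activation, automatically):

NLADAPTIVE,ALL,ADD,ENERGY,MEAN,0.85      ! Energy criterion with coefficient 0.85
! NLADAPTIVE,ALL,ADD,MESH,SKEWNESS,0.9   ! Mesh-quality alternative for 3D tet meshes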

There are some things to be aware of when you are trying to implement a Nonlinear Adaptive Region to help overcome convergence difficulties.  First, if any of the restricted features mentioned above, such as remote displacements, are included in the model, it’s not going to work.  Therefore, it’s important to review the list of restrictions in the Help and make sure none of those are applied in your model.  Second, ‘buckling’ or element distortion due to an unstable structure is not a behavior that Nonlinear Adaptive Regions can help with.  The Nonlinear Adaptive Region capability is better suited to problems like hyperelastic seals being compressed or objects undergoing a high degree of bending (but not snapping through). 

Also, a coarse mesh that distorts may not produce a usable remesh.  The remeshing step may occur, but the simulation may not be able to proceed beyond it and stops with an element formulation error.  More mesh refinement may be needed in this case.

As a further word of caution, self-contact problems may not work very well within the context of Nonlinear Adaptive Regions.  If self-contact is needed, consider splitting the bodies into multiple parts to avoid it. 

There are some other considerations for the method as discussed in the Help, but hopefully the guidelines and recommendations presented here will allow you to filter potential applications appropriately and set up models that can take advantage of the Nonlinear Adaptive Region capability.  We have a short animation which shows the remeshing step in the sample model. 

If you have nonlinear static structural models with convergence difficulties due to excessive element distortion, please consider using this method to help you get a fully converged solution.

Here is a video to help everyone visualize:

Press Release: PADT Awarded U.S. Army Phase I SBIR Grant for Combustor Geometry Research Using 3D Printing, Simulation, and Product Development

Posted on August 15, 2019, by: Eric Miller

We are pleased to announce that the US Army has awarded PADT a Phase I SBIR Grant to explore novel geometries for combustor cooling holes. This is our 15th SBIR/STTR win.

We are excited about this win because it is a project that combines Additive Manufacturing, CFD and Thermal Simulation, and Design in one project. And to make it even better, the work is being done in conjunction with our largest customer, Honeywell Aerospace.

We look forward to getting started on this first phase, where we will explore options, and then applying for a larger Phase II grant to conduct more thorough simulation and then build and test the options we uncover in this phase.

Read more below. The official press release is here for HTML and here for PDF.

If you have any needs to explore new solutions or new geometries using Additive Manufacturing or applying advanced simulation to drive new and unique designs, please contact us at 480.813.4884 or info@padtinc.com.


PADT Awarded U.S. Army Phase I SBIR Grant for Combustor Geometry Research Using 3D Printing, Simulation, and Product Development

The Project Involves the Development of Sand-Plugging Resistant Metallic Combustor Liners

TEMPE, Ariz., August 15, 2019 ─ In recognition of its continued excellence and expertise in 3D printing, simulation, and product development, PADT announced today it has been awarded a $107,750 U.S. Army Phase I Small Business Innovation Research (SBIR) grant. With the support of Honeywell Aerospace, PADT’s research will focus on the development of gas turbine engine combustor liners that are resistant to being clogged with sand.  The purpose of this research is to reduce downtime and improve the readiness of the U.S. Army’s critical helicopters operating in remote locations where dirt and sand can enter their engines.  

“PADT has supported advanced research in a wide variety of fields which have centered around various applications of our services,” said Eric Miller, co-founder and principal, PADT. “We’re especially proud of this award because it requires the use of our three main areas of expertise, 3D printing, simulation and product development. Our team is uniquely capable of combining these three disciplines to develop a novel solution to a problem that impacts the readiness of our armed forces.”

The challenge PADT will be solving is that when helicopters are exposed to environments with high concentrations of dust, they can accumulate micro-particles in the engine that clog the metal liner of the engine’s combustor. Combustors are where fuel is burned to produce the heat that powers the gas turbine engine. To cool the combustor, thousands of small holes are drilled in the wall, or liner, and cooling air is forced through them. If these holes become blocked, the combustor overheats and can be damaged.  Blockage can only be remedied by taking the engine apart to replace the combustor. These repairs cause long-term downtime and significantly reduce the readiness of the Army’s fleets.

PADT will design various cooling hole geometries and simulate how susceptible they are to clogging using advanced computational fluid dynamics (CFD) simulation tools. Once the most-promising designs have been identified through simulation, sample coupons will be metal 3D printed and sent to a test facility to verify their effectiveness.  Additionally, PADT will experiment with ceramic coating processes on the test coupons to determine the best way to thermally protect the 3D printed geometries.

“When we developed new shapes for holes in the past, we had no way to make them using traditional manufacturing,” said Sina Ghods, principal investigator, PADT. “The application of metal additive manufacturing gives PADT an opportunity to create shapes we could never consider to solve a complex challenge for the U.S. Army. It also gives us a chance to demonstrate the innovation and growth of the 3D printing industry and its applications for harsh, real-world environments.”

Honeywell joined PADT to support this research because it is well aligned with the company’s Gas Turbine Engine products. The outcome of this research has the potential to significantly improve the performance of the company’s engines operating in regions with high dust concentrations.

This will be PADT’s 15th SBIR/Small Business Technology Transfer (STTR) award since the company was founded in 1994. In August 2018, the company, in partnership with Arizona State University, was awarded a $127,000 STTR Phase I Grant from NASA to accelerate biomimicry research, the study of 3D printing objects that resemble strong and light structures found in nature such as honeycombs or bamboo.

To learn more about PADT and its advanced capabilities, please visit www.padtinc.com.

About Phoenix Analysis and Design Technologies

Phoenix Analysis and Design Technologies, Inc. (PADT) is an engineering product and services company that focuses on helping customers who develop physical products by providing Numerical Simulation, Product Development, and 3D Printing solutions. PADT’s worldwide reputation for technical excellence and experienced staff is based on its proven record of building long-term win-win partnerships with vendors and customers. Since its establishment in 1994, companies have relied on PADT because “We Make Innovation Work.” With over 80 employees, PADT services customers from its headquarters at the Arizona State University Research Park in Tempe, Arizona, and from offices in Torrance, California, Littleton, Colorado, Albuquerque, New Mexico, Austin, Texas, and Murray, Utah, as well as through staff members located around the country. More information on PADT can be found at www.PADTINC.com.

# # #

Five Takeaways from the New User Interface in ANSYS Mechanical 2019 R2

Posted on July 12, 2019, by: Ted Harris

Ten years is a long time in the life of a software product.  While ANSYS itself has been around since the early 1970s and what is now known as ANSYS Mechanical is approaching 20 years old, the user interface for ANSYS Mechanical maintained the same look and feel from version 12.0 in 2009 through version 2019 R1 in 2019.  That’s 10 years.  Certainly, there were many, many enhancements over that 10-year period, but the look and feel of the Mechanical window remained the same.

With the release of version 2019 R2, the Mechanical user interface has changed to a more modern ‘ribbon’ window, as shown in the red region here:

After having used the new interface for a while, here are 5 takeaways that are hopefully useful:

  1. It’s easy to use.  Sure, it’s different, but the overall process is the same: a simulation tree on the left, details to enter and adjust at lower left, graphics in the middle, messages and graphs at the bottom, and the main menus across the top.
  2. ANSYS, Inc. has helped by providing a 12-slide (some animated) usage tips guide which pops up automatically when you launch ANSYS Mechanical 2019 R2.
  3. As in the old menu, the ‘Context’ menu changes based on what you have clicked on in the tree.  For example, if you have clicked on the Mesh branch, the Context menu will display meshing controls across the top of the window.
  4. As intuitive as the new ribbon interface is, there are some functionalities that you may have trouble finding.  Not to worry, though, as there is a new Search field at upper right that will likely take you to the right place.  Here I am interested in making a section plane for plotting purposes.  My first thought was that it would appear in the Display menu.  When I didn’t find it there, I simply typed ‘section’ in the search field and the first hit was the right one.

After clicking “Take me there”:

And the resulting section plot:

  5. Some capabilities show up in the File menu beyond the expected Save and Save As functionality.  For example, Solve Process Settings is now in the File menu.  However, the main solution functionality, such as using distributed solutions and specifying the number of cores, remains in the Solution context menu.

In short, the 2019 R2 improvements to ANSYS Mechanical allow for easier and faster setup of our simulations.  If you haven’t given it a try, we encourage you to do so.