The Focus


 

Alternating Stresses in Ansys Mechanical – Part 1: Principal Stresses

Posted on June 22, 2020, by: Alex Grishin

Editor's Note:
The following PowerPoint is from one of PADT's in-house experts on linear dynamics, Alex Grishin.

One of the most valuable results that can come from a harmonic response analysis is the predicted alternating stress in the part. This feeds fatigue and other downstream calculations, as well as predicting maximum possible values. Because of the math involved, derived stresses like principal stresses can be calculated in several ways. This post shows how Ansys Mechanical does it and offers an alternative that is considered more accurate for some cases.
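
To make the idea concrete, here is a minimal sketch of one common approach to alternating principal stresses (an illustration only, not necessarily the method covered in the PowerPoint, and the stress numbers are made up): sweep the response phase, form the real stress tensor at each phase angle, and take its principal values there.

```python
# Hypothetical sketch: alternating principal stress from a harmonic result by
# sweeping the excitation phase and taking principal values of the real tensor.
import numpy as np

def max_principal_over_phase(sig_real, sig_imag, n_phase=360):
    """sig_real, sig_imag: 3x3 real/imaginary parts of the complex stress tensor
    at a node. Returns the largest first principal stress over one cycle and the
    phase angle (deg) at which it occurs."""
    best_s1, best_phase = -np.inf, 0.0
    for theta in np.linspace(0.0, 360.0, n_phase, endpoint=False):
        t = np.radians(theta)
        sig = sig_real * np.cos(t) - sig_imag * np.sin(t)  # Re{sigma * e^{j*t}}
        s1 = np.linalg.eigvalsh(sig)[-1]                   # first principal stress
        if s1 > best_s1:
            best_s1, best_phase = s1, theta
    return best_s1, best_phase

# Made-up MPa-scale numbers, purely for illustration:
sig_re = np.array([[120.0, 30.0, 0.0], [30.0, 80.0, 0.0], [0.0, 0.0, 10.0]])
sig_im = np.array([[ 40.0, 10.0, 0.0], [10.0, 20.0, 0.0], [0.0, 0.0,  5.0]])
print(max_principal_over_phase(sig_re, sig_im))
```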

Alex also created an example Ansys Workbench archive that goes with the PowerPoint.

What I use Most from my Engineering Management Masters Degree

Posted on June 12, 2020, by: Nathan Huber

Even before finishing my undergraduate degree in Mechanical Engineering at the University of Colorado, Boulder in 2010, I had an interest in furthering my education. The decision I had at that point was whether the next step would be a graduate degree on the technical side or something more like an MBA. I would end up with the chance to study at the University of Denver (DU), focusing on Computational Fluid Dynamics (CFD), and if that field does not make it clear, my first stint in grad school was technical.

At DU, we sourced our Ansys simulation software from a company called, you guessed it, PADT. After finishing this degree, and while working at PADT, the desire to further my education cropped up again after seeing the need for a well-rounded understanding of the technical and business/management side of engineering work. After some research, I decided that a Master’s in Engineering Management program made more sense than an MBA, and I started the program back at my original alma mater, CU Boulder.

Throughout the program, I would find myself using the skills I was learning during lectures immediately in my work at PADT. It is difficult to boil down everything learned in a 10-course program to one skill that is used most often, and as I think about it, I think that what is used most frequently is the new perspective, the new lens through which I can now view situations. It’s taking a step back from the technical work and viewing a given project or situation from a perspective shaped by the curriculum as a whole with courses like EMEN 5020 – Finance and Accounting for Engineers, EMEN 5030/5032 – Fundamentals/Advanced Topics of Project Management, EMEN 5050 – Leading Oneself, EMEN 5080 – Ethical Decision Making, EMEN 5500 – Lean and Agile Management, and more. It is the creation of this new perspective that has been most valuable and influential to my work as an engineer and comes from the time spent completing the full program.

Okay okay, but what is the one thing that I use most often, besides this new engineering management perspective? If I had to boil it down to one skill, it would be the ‘pull’ method for feedback. During the course Leading Oneself, we read Thanks for the Feedback: The Science and Art of Receiving Feedback Well, Even When it is Off Base, Unfair, Poorly Delivered, and, Frankly, You’re Not In The Mood (Douglas Stone and Sheila Heen, 2014), where this method was introduced. By taking an active role in asking for feedback, it has been possible to head-off issues while they remain small, understand where I can do better in my current responsibilities, and grow to increase my value to my group and PADT as a whole.

A Simple Adjustment to Fix a Contact Convergence Problem in Ansys Mechanical

Posted on June 4, 2020, by: Ted Harris

As I write this from home during the Covid-19 crisis, I want to assure you that PADT is conscious of the many others working from home while using Ansys software as well. We're trying to help those who may be struggling with certain types of models. In this posting, I'll talk about a contact convergence problem in Ansys Mechanical. I'll discuss steps we can take to identify the problem and overcome it, as well as a simple setting which dramatically helped in this case.

The geometry in use here is a fairly simple assembly from an old training class.  It’s a wheel or roller held by a housing, which is in turn bolted to something to hold it in place.

The materials used are linear properties for structural steel.  The loading consists of a bearing load representing a downward force applied by some kind of strap or belt looped over the wheel, along with displacement constraints on the back surfaces and around the bolt holes, as shown in the image below.  The flat faces on the back side have a frictionless support applied (allows in plane sliding only), while the circular faces where bolt heads and washers would be are fully constrained with fixed supports.

As is always the case in Ansys Mechanical, contact pairs are created wherever touching surfaces in the assembly are detected.  The default behavior for those contact pairs is bonded, meaning the touching surfaces can neither slide nor separate.  We will make a change to the default for the wheel on its shaft, though, changing the contact behavior from bonded to frictional.  The friction coefficient defined was 0.2.  This represents some resistance to sliding.  Unlike bonded contact in which the status of the contact pair cannot change during the analysis, frictional contact is truly nonlinear behavior, as the stiffness of the contact pair can change as deflection changes. 

This shows the basic contact settings for the frictional contact pair:

At this point, we attempt a solve.  After a while, we get an error message stating, “An internal solution magnitude limit was exceeded,” as shown below.  What this means is that our contact elements are not working as expected, and part of our structure is trying to fly off into space.  Keep in mind in a static analysis there are no inertia effects, so an unconstrained body is truly unconstrained.

At this point, the user may be tempted to start turning multiple knobs to take care of the situation.  Typical things to adjust for contact convergence problems are adding more substeps, reducing contact stiffness, and possibly switching to the unsymmetric solver option when frictional contact is involved.  In this case, a simple adjustment is all it takes to get the solution to easily converge. 

Another thing we might do to help us is to insert a Contact Tool in the Connections branch and interrogate the initial contact status:

This shows us that our frictional contact region is actually not in initial contact but has a gap.  There are multiple techniques available for handling this situation, such as adding weak springs, running a transient solution (computationally expensive), starting with a displacement as a load and then switching to a force load, etc.  However, if we are confident that these parts actually SHOULD be initially touching but are not due to some slop in the CAD geometry, there is a very easy adjustment to handle this.

The Simple Adjustment That Gets This Model to Solve Successfully

Knowing that the parts should be initially in contact, one simple adjustment is all that is needed to close the initial gap and allow the simulation to successfully solve.  The adjustment is to set the Interface Treatment in the Contact Details for the contact region in question to Adjust to Touch:

This change automatically closes the initial gap and, in this case, allows the solution to successfully solve very quickly. 

For your models, if you are confident that parts should be in initial contact, you may also find that this adjustment is a great aid in closing gaps due to small problems in the CAD geometry.  We encourage you to test it out.

An Ansys optiSLang Overview and Optimization Example with Ansys Icepak

Posted on June 3, 2020, by: Josh Stout

Ansys optiSLang is one of the newer pieces of software in the Ansys toolkit, recently acquired along with the company Dynardo. Functionally, optiSLang provides a flexible top-level platform for all kinds of optimization. It is solver agnostic, in that as long as you can run a solver through batch files and produce text-readable result files, you can use that solver with optiSLang. There are also some very convenient integrations with many of the Ansys toolkit solvers in addition to other popular programs. This includes AEDT, Workbench, LS-DYNA, Python, MATLAB, and Excel, among many others.

While the ultimate objective is often to simply minimize or maximize a system output according to a set of inputs, the complexity of the problem can increase dramatically by introducing constraints and multiple optimization goals. And of course, the more complicated the relationships between variables are, the harder it gets to adequately describe them for optimization purposes.

Much of what optiSLang can do is a result of fitting the input data to a Metamodel of Optimal Prognosis (MOP), which is a categorical description for the specific metamodels that optiSLang uses. A user can choose one of the included models (Polynomial, Moving Least Squares, and Ordinary Kriging), define their own model, and/or allow optiSLang to compare the resulting Coefficients of Prognosis (COP) from each model to choose the most appropriate approach.

The COP is calculated in a similar manner as the more common COD or R2 values, except that it is calculated through a cross-validation process where the data is partitioned into subsets that are each used only for the MOP calculation or the COP calculation, not both. For this reason, it is preferred as a measure for how effective the model is at predicting unknown data points, which is particularly valuable in this kind of MOP application.
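
As a rough illustration of the cross-validation idea (optiSLang's actual COP definition is more involved than this, so treat the sketch below as a conceptual analogy only), a simple polynomial metamodel can be scored on data it was never fitted to:

```python
# Minimal sketch of a cross-validated, COP-like quality score: fit on training
# folds, predict on the held-out fold, then compute an R^2 from those predictions.
import numpy as np

def cv_prognosis_coefficient(x, y, degree=2, k=5, seed=0):
    """k-fold cross-validated R^2 of a simple polynomial metamodel."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    y_pred = np.empty_like(y)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)   # fit on training folds only
        y_pred[test] = np.polyval(coeffs, x[test])        # predict held-out points
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: noisy quadratic response of one input variable.
x = np.linspace(0.0, 1.0, 100)
y = 3.0 * x**2 - x + np.random.default_rng(1).normal(0.0, 0.05, x.size)
print(f"COP-like score: {cv_prognosis_coefficient(x, y):.3f}")
```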

This whole process really shows where optiSLang’s functionality shines: workflow automation. Not only does optiSLang intelligently select the metamodel based on its applicability to the data, but it can also apply an adaptive method for the improvement of the MOP. It will suggest an automatic sampling method based on the number of problem variables involved, which can then be applied towards refining either the global COP or the minimum local COP. The automation of this process means that once the user has linked optiSLang to a solver with appropriate inputs/outputs and defined the necessary run methodology for optimization, all that is left is to click a button and wait.

As an example of this, we will run through a test case that utilizes the ability to interface with Ansys EDT Icepak.

Figure 1: The EDT Icepak project model.

For our setup, we have a simple board mounted with bodies representative of 3 x 2 watt RAM modules, and 2 x 10 watt CPUs with attached heatsinks. The entire board is contained within an air enclosure, where boundary conditions are defined as walls with two parametrically positioned circular inlets/outlets. The inlet is a fixed mass flow rate surface and the outlet is a zero-pressure boundary. In our design, we permit the y and z coordinates for the inlet and outlet to vary, and we will be searching for the configuration that minimizes the resulting CPU and RAM temperatures.

The optiSLang process generally follows a series of drag-and-drop wizards. We start with the Solver Wizard which guides us through the options for which kind of solver is being used: text-based, direct integrations, or interfaces. In this case, the Icepak project is part of the AEDT interface, so optiSLang will identify any of the parameters defined within EDT as well as the resulting report definitions.  The Parametric Solver System created through the solver wizard then provides the interfacing required to adjust inputs while reading outputs as designs are tested and an MOP is generated.

Figure 2: Resulting block from the Solver wizard with parameters read in from EDT.

Once the parametric solver is defined, we drag and drop in a sensitivity wizard, which starts the AMOP study.  We will start with a total of 100 samples; 40 will be initial designs, and 60 will be across 3 stages of COP refinement with all parameter sets sampled according to the Advanced Latin Hypercube Sampling method.

Figure 3: Resulting block from the Sensitivity wizard with Advanced Latin Hypercube Sampling.
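
For readers who want a feel for what a Latin Hypercube design looks like, here is a small sketch using SciPy. optiSLang's Advanced Latin Hypercube Sampling adds its own refinements (such as correlation control), and the parameter names and bounds below are made up for illustration:

```python
# Plain Latin Hypercube design for the four vent-position parameters.
# Parameter names and bounds are hypothetical, not taken from the Icepak model.
from scipy.stats import qmc

params = ["inlet_y", "inlet_z", "outlet_y", "outlet_z"]
lower  = [10.0, 10.0, 10.0, 10.0]   # mm, assumed bounds
upper  = [90.0, 90.0, 90.0, 90.0]

sampler = qmc.LatinHypercube(d=len(params), seed=42)
unit_samples = sampler.random(n=40)                 # 40 initial designs
designs = qmc.scale(unit_samples, lower, upper)     # map to physical bounds

for row in designs[:3]:
    print(dict(zip(params, row.round(1))))
```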

The results of individual runs are tabulated and viewable as the study is conducted, and at the conclusion, a description of the AMOP is provided with response surfaces, residual plots, and variable sensitivities. For instance, we can see that by using these first 100 samples, a decent metamodel with a COP of 90% is generated for the CPU temperature near the inlet. We also note that optiSLang has determined that none of the responses are sensitive to the ‘y’ position of the outlet, so this variable is automatically freed from further analysis.

Figure 4: MOP surface for the temperature of Chip1, resulting from the first round of sampling.

If we decide that this COP, or that of any of our other outputs, is not good enough for our purposes, optiSLang makes it very easy to add on to our study. All that is required is dragging and dropping a new sensitivity wizard onto our previous study, which will automatically load the previous results in as starting values. This makes a copy of, and visually connects, an output from the previous solver block to a new sensitivity analysis on the diagram, which can then be adjusted independently.

For simplicity and demonstration’s sake, we will add on two more global refinement iterations of 50 samples each. By doing this and then excluding 8 of our 200 total samples that appear as outliers, our “Chip1” CoP can be improved to 97%.

Figure 5: A refined MOP generated by including a new Sensitivity wizard.

Now that we have an MOP of suitable predictive power for our outputs of interest, we can perform some fast optimization. By initially building an MOP based on the overall system behavior, we are now afforded some flexibility in our optimization criteria. As in the previous steps, all that is needed at this point is to drag and drop an optimization wizard onto our “AMOP Addition” system, and optiSLang will guide us through the options with recommendations based on the number of criteria and initial conditions.

In this case, we will define three optimization criteria for thoroughness: a sum of both chip temperatures, a sum of all RAM temperatures, and an average temperature rise from ambient for all components with double weighting applied to the chips. Following the default optimization settings, we end up with an evolutionary algorithm that iterates through 9300 samples in about 14 minutes – far and away faster than directly optimizing the Icepak project. What’s more, if we decide to adjust the optimization criteria, we’ll only need to rerun this ~14 minute evolutionary algorithm.

What we are most interested in for this example are the resulting Pareto fronts which give us a clear view of the tradeoffs between each of our optimization criteria. Each of the designs on this front can easily be selected through the interface, and their corresponding input parameters can be accessed.

Figure 6: Pareto front of the "Chipsum" and "TotalAve" optimization criteria.

Scanning through some of these designs also provides a very convenient way to identify which of our parameters are limiting the design criteria. Two distinct regions can be identified here: the left region is limited by how close we are allowing the inlet fan to be to the board, and the right region is limited by how close to the +xz corner of our domain the outlet vent can be placed. In a situation where these parameters were not physically constrained by geometry, this would be a good opportunity to consider relaxing parameter constraints to further improve our optimization criteria. 

As it is, we can now choose a design based on this Pareto front to verify with the full solver. After choosing a point in the middle of the “Limited by outlet ‘z’” zone, we find that our actual “ChipSum” is 73.33 vs. the predicted 72.78 and the actual “TotalAve” is 17.82 vs. the predicted 17.42. For this demonstration, we consider this small error as satisfactory, and a snapshot of the corresponding Icepak solution is shown below.

Figure 7: The Icepak solution of the final design. The inlet vent is aligned with the outlet side's heatsink, and the outlet vent is in the corner nearest the heatsink. Primary flow through the far heatsink is maximized, while a strong recirculating flow is produced around the front heatsink.

The accuracy of these results is of course dependent not only on how thoroughly we constructed the MOP, but also on the accuracy of the 3D solution; creating mesh definitions that remain consistently accurate through parameterized geometry changes can be particularly tricky. With all of this considered, though, optiSLang provides a great environment not only for managing optimization studies, but also for displaying the results in such a way that you can gain an improved understanding of the interaction between input/output variables and their optimization criteria.

Advanced Capabilities to Consider when Simulating Blow Molding in Ansys Polyflow or Discovery AIM

Posted on May 21, 2020, by: Daniel Chaparro

Ansys Polyflow is a Finite Element CFD solver with unique capabilities that enable simulation of complex non-Newtonian flows seen in the polymer processing industry. In recent releases, Polyflow has included templates to streamline two of its most common use cases: blow molding and extrusion. Similarly, Ansys Discovery AIM offers a modern user interface that guides users through blow molding and extrusion workflows while still using the proven Polyflow solver under the hood. It is not uncommon for engineers to be unsure about which tool to pursue for their specific application. In this article, I will focus on the blow molding workflow. More specifically, I will point out three features in Polyflow that have not yet been incorporated into Discovery AIM:

  1. The PolyMat curve fitting tool to derive viscoelasticity model input parameters from test data
  2. Automatic parison thickness mapping onto an Ansys Mechanical shell mesh
  3. Parison Programming to optimize parison thickness subject to final part thickness constraints

Keep in mind that either tool will get the job done in most applications, so let us first quickly review some of the core features of blow molding simulations that are common to Polyflow and AIM:

  • Parison/Mold contact detection
  • 3-D Shell Lagrangian automatic remeshing
  • Generalized Newtonian viscosity models
  • Temperature dependent and multi-mode integral viscoelastic models
  • Time dependent mold pressure boundary conditions
  •  Isothermal or non-isothermal conditions

For demonstration purposes, I modeled a sweet submarine toy in SpaceClaim. Unfortunately, I think it will float, but let’s move past that for now.  

Figure 1: Final Submarine shape (Left), Top View of Mold+ Parison (Top Left), Side View of Mold+Parison (Bottom Right)

At this point, you could proceed with Discovery AIM or with Polyflow without any re-work. I'll proceed with the Polyflow blow molding workflow to point out the features currently only available in Polyflow.

PolyMat Curve Fitting Tool

With the blow molding template, you can select whether to treat the parison as isothermal or non-isothermal and whether to model it as generalized Newtonian or viscoelastic. Suppose we would like to model viscoelasticity with the KBKZ integral viscoelastic model because we are interested in capturing strain hardening as the parison is stretched. The inputs to the KBKZ model are a viscosity and relaxation time for each mode. If they are known, the user can input the values directly. This is possible in Discovery AIM as well. However, the PolyMat tool is unique to Polyflow. PolyMat is a built-in curve fitting tool that helps generate input parameters for the various viscosity models available in Polyflow using material data. This is particularly useful when you do not explicitly have the inputs for a viscoelastic model, but you do have other test data such as oscillatory and capillary rheometry data. In this case, I have the loss modulus, storage modulus, and shear viscosity for a generic high-density polyethylene (HDPE) material. For this material, four modes are enough to anchor the KBKZ model to the data, as shown below. We can then load the viscosity/relaxation time pairs into Polyflow and continue.

Figure 2: Curve Fitting of G′(ω), G′′(ω), η(γ̇) (Left), KBKZ Viscoelastic Model inputs (Right)
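
For reference, the per-mode viscosities and relaxation times that PolyMat fits are the same quantities that appear in the standard multi-mode (discrete Maxwell spectrum) expressions below. These are textbook forms, quoted here only to show how the measured moduli relate to the per-mode viscosity and relaxation time pairs; the post does not spell out PolyMat's internal formulation.

```latex
% Standard multi-mode relations (textbook forms, not taken from the post):
% each mode k is described by a partial viscosity \eta_k and relaxation time \lambda_k.
% Linear-viscoelastic moduli used for the curve fit:
G'(\omega)  = \sum_{k=1}^{N} \frac{\eta_k \lambda_k \omega^2}{1 + \lambda_k^2 \omega^2},
\qquad
G''(\omega) = \sum_{k=1}^{N} \frac{\eta_k \omega}{1 + \lambda_k^2 \omega^2}
% Memory function entering the KBKZ integral model:
m(s) = \sum_{k=1}^{N} \frac{\eta_k}{\lambda_k^{2}} \, e^{-s/\lambda_k}
```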

The main output of the simulation is the final parison thickness distribution. For this sweet submarine, the initial parison thickness is set to 3mm and the final thickness distribution is shown in the contour plot below.

Figure 3a: Animation of blow molding process

Figure 3b: Final Part Thickness Distribution

Thickness Mapping to Ansys Mechanical

The second Polyflow capability I’d like to point out is the ability to easily map the thickness distribution onto an Ansys Mechanical shell mesh. You can do this by connecting the Polyflow solution component to a structural model in Workbench, as shown below. The analogous workflow in AIM would be to create a second simulation for the structural analysis, but you would be confined to specifying a constant thickness.

Figure 4: Polyflow – Ansys Mechanical Parison Thickness Mapping

In Ansys Mechanical, the mapping comes through within the geometry tree as shown below. The Imported Data Transfer Summary is a good way to ensure the mapping behaves as expected. In this case we can see that 100% of the nodes were mapped and the thickness contours qualitatively match the Polyflow results in CFD-Post.

Figure 5: Imported Thickness in Ansys Mechanical

Figure 6: Thickness Data Transfer Summary

A force is applied normal to the front face of the sail and simulated in Mechanical. The peak stress and deformation are shown below. The predicted stresses are likely acceptable for a toy, especially since my toy is a sweet submarine. Nonetheless, suppose that I was interested in reducing the deformation in the sail under this load condition by thickening the extruded parison. A logical approach would be to increase the initial parison thickness from 3mm to 4mm, for example. Polyflow’s parison programming feature takes the guesswork out of the process.

Figure 7: Clockwise from Top Left: Applied Load on Sail, Stress Distribution, total Deformation, Thickness Distribution

Parison Programming

Parison programming is an iterative optimization workflow within Polyflow for determining the extruded thickness distribution required to meet the final part thickness constraints. To activate it, you create a new post-processor sub-task of type Parison Programming.

Figure 8: Parison Programming Setup

The inputs to the optimization are straightforward. The only inputs that you typically need to modify are the direction of optimization, the width of stripes, and the list of (X,h) pairs. The direction of optimization is the direction of extrusion, which is X in this case. If the extruder can vary parison thickness along “stripes” of the parison, then Polyflow can optimize each stripe’s thickness. The list of (X,h) pairs serves as a list of constraints for the final part thickness, where X is the location on the parison along the direction of extrusion and h is the final part thickness constraint.

Figure 9: Thickness Constraints for Parison Programming

In our scenario, the X,h pairs form a piecewise linear thickness distribution that constrains the area around the sail to a 3.5mm thickness and 2mm everywhere else. After the simulation, Polyflow will write a csv file to the output directory containing the initial thickness for each node for the next iteration. You will need to copy the csv file from the output directory of iteration N to the input directory of iteration N+1. The good news is that the optimization converges within 3-5 iterations.

Figure 10: Defining the Initial Thickness for the Next Parison Programming Iteration
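
Because that output-to-input copy is a manual step, a short script can shuttle the file between iteration folders. The sketch below is hypothetical: the folder layout and csv file name are assumptions, not Polyflow's documented naming.

```python
# Hypothetical helper for parison programming iterations: copy the thickness
# csv written by iteration N into the input directory of iteration N+1.
# Folder layout and file name are assumptions, not Polyflow's documented names.
import shutil
from pathlib import Path

def carry_over_thickness(run_dir, iteration, csv_name="parison_thickness.csv"):
    src = Path(run_dir) / f"iter_{iteration}" / "output" / csv_name
    dst = Path(run_dir) / f"iter_{iteration + 1}" / "input" / csv_name
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    print(f"Copied {src} -> {dst}")

# Example: prepare iteration 2 from the results of iteration 1.
# carry_over_thickness(r"C:\sims\submarine_blow_mold", iteration=1)
```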

Polyflow will print the parison strip thickness distribution for the next iteration in the .lst file. The plot below shows the thickness distribution from the first 3 iterations. Note from the charts below that the distribution converged by iteration 2; thus iteration 3 was not actually simulated. The optimized parison thickness distribution is also plotted in the contour plot below.

Figure 11: Optimized Parison Thickness (Top), Final Part Thickness (Bottom)

Figure 12: % of Elements At or Above Thickness Criteria

As a final check, we can evaluate how the modification to the parison thickness reduced the deformation of the submarine. The total deformation contour plot below confirms that the peak deformation decreased from 2mm to 0.8mm.

Figure 13: Total Deformation in Ansys Mechanical After Parison Programming

Summary

Ansys Discovery AIM is a versatile platform with an intuitive and modern user interface. While AIM has incorporated most of the blow molding simulation capabilities from Polyflow, some advanced functionality has not yet been brought into AIM. This article simulated the blow molding process of a toy submarine to demonstrate three capabilities currently only available in Polyflow: the PolyMat curve fitting tool, automatic parison thickness mapping to Ansys Mechanical, and parison programming. Engineers should consider whether any of these capabilities are needed in their application next time they are faced with the decision to create a blow mold simulation using Ansys Discovery AIM or Polyflow.

Changes to Licensing at ANSYS 2020R1

Posted on May 7, 2020, by: Ahmed Fayed

Here are the main goals of the licensing changes in the latest release of ANSYS:

  • Deliver Ansys licensing using the FlexLM industry standard
  • Eliminate the Ansys licensing interconnect
  • Provide modular licensing options that are easier to understand
  • Finally – and this is the whopper (or Double Double if you’re an In-N-Out kind of analogy person) – this new licensing model eliminates the need to upgrade the Ansys License Manager with every software update. (Please pause for shock recovery.)

If you’re still shocked and would like to see a “shocked groundhog” compilation, check this out.

Why is this significant? Well, this was always a sticking point for our customers when upgrading from one version to the next.

Here’s how that usually plays out:

  1. Engineers, eager to try out new features or overcome software defects, download the software and install it on their workstations.
  2. Surprise – the software throws an obscure licensing error.
  3. The engineer notifies IT or their Ansys Channel Partner of the issue.
  4. After a few calls, and maybe a screenshare or two, it’s determined that the license server needs to be upgraded.
  5. The best-case scenario: IT or PADT Support can get the upgrade installed in a few minutes and the engineer can be on their way.
  6. The usual scenario: it will take a week to schedule downtime on the server and notify all stakeholders, and the engineer is left to simmer on medium-low until those important safeguards are handled.

What does this all mean?

  • Starting in January 2020, all new Ansys keys issued will be in the new format and will require upgrading to the 2020R1 License manager. This should be the last mandatory license server upgrade for a while.
  • Your Ansys Channel Partner will contact you ahead of your next renewal to discuss new license increments and if there are any expected changes.
  • Your IT and Ansys support teams will be celebrating in the back office – this should be the last mandatory Ansys License Manager upgrade for a while.

How to upgrade the Ansys License Manager?

Download the latest license manager through the Ansys customer portal:

Follow installation instructions and add the latest license file:

  • Ansys has a handy video on this here
  • Make sure that you run the installer as an administrator for best results.

Make sure license server is running and has the correct licenses queued:

  • Look for the green checkmark in the license management center window.
  • Start your application and make sure everything looks good.

This was a high-level flyover of the new Ansys licensing released with version 2020R1. For specifics, contact your PADT Account Manager or support@padtinc.com.

Making Sense of DC IR Results in Ansys SIwave

Posted on April 14, 2020, by: Aleksandr Gafarov

In this article I will cover a Voltage Drop (DC IR) simulation in SIwave, applying a realistic power delivery setup to a simple 4-layer PCB design. The main goal of this project is to understand what data we receive from a DC IR simulation, how to verify it, and the best way to use it.

And before I open my tools and start diving deep into this topic, I would like to thank Zachary Donathan for asking the right questions and having deep meaningful technical discussions with me on some related subjects. He may not have known, but he was helping me to shape up this article in my head!

Design Setup

There are many different power nets present on the board under test; however, I will be focusing on two widely used nets, +1.2V and +3.3V. Both nets are supplied through a Voltage Regulator Module (VRM), which will be assigned as a Voltage Source in our analysis. After careful assessment of the board design, I identified the most critical components for power delivery to include in the analysis as Current Sources (also known as ‘sinks’). Two DRAM small-outline integrated circuit (SOIC) components, D1 and D2, are supplied with +1.2V, while power net +3.3V provides voltage to two quad flat package (QFP) microcontrollers U20 and U21, a mini PCIe connector, and a hex Schmitt-trigger inverter U1.

Fig. 1. Power Delivery Network setting for a DC IR analysis

Figure 1 shows the ‘floor plan’ of the DC IR analysis setup with 1.2V voltage path highlighted in yellow and 3.3V path highlighted in light blue.

Before we assign any Voltage and Current sources, we need to define pin groups on the +1.2V, +3.3V, and GND nets for all of the PDN components mentioned above. Having pin groups will significantly simplify the review of the results. Also, it is generally good practice to start the DC IR analysis from the ‘big picture’ to understand whether a certain component gets enough power from the VRM. If a given IC reports an acceptable level of voltage being delivered with a good margin, then we don’t need to dig deeper; we can instead focus on those which may not have good enough margins.

Once we have created all the necessary pin groups, we can assign voltage and current sources. There are several ways of doing that (using a wizard or manually); for this project we will use the ‘Generate Circuit Element on Components’ feature to manually define all sources. Knowing all the components and having pin groups already created makes the assignment very straightforward. All current sources draw different amounts of current, as indicated in our settings; however, all current sources have the same Parasitic Resistance (a very large value) and all voltage sources also have the same Parasitic Resistance (a very small value). This is shown in Figure 2 and Figure 3.

Note: The type of the current source, ‘Constant Voltage’ or ‘Distributed Current’, matters only if you are assigning a current source to a component with multiple pins on the same net, and since in this project we are working with pin groups, this setting doesn’t make a difference in the final results.

Fig. 2. Voltage and Current sources assigned
Fig. 3. Parasitic Resistance assignments for all voltage and current sources

For each power net we have created a voltage source on the VRM and multiple current sources on the ICs and the connector. All sources have their negative node on the GND net, so we have a good common return path. In addition, we have assigned the negative node of each voltage source (one for +1.2V and one for +3.3V) as the reference point for our analysis, so reported voltage values will be referenced to that node as absolute 0V.

At this point, the DC IR setup is complete and ready for simulation.

Results overview and validation

When the DC IR simulation is finished, a large amount of data is generated; therefore, there are different ways of viewing the results, all of which are presented in Figure 4. In this article I will primarily focus on ‘Power Tree’ and ‘Element Data’. As an additional source of validation, we may review the currents and voltages overlaid on the design to help us visualize the current flow and power distribution. Most of the time this helps us understand whether our pin grouping assumptions are accurate.

Fig. 4. Options to view different aspects of DC IR simulated data

Power Tree

First let’s look at the Power Tree, presented on Figure 5. Two different power nets were simulated, +1.2V and +3.3V, each of which has specified Current Sources where the power gets delivered. Therefore, when we analyze DC IR results in the Power tree format, we see two ‘trees’, one for each power net. Since we don’t have any pins, which would get both 1.2V and 3.3V at the same time (not very physical example), we don’t have ‘common branches’ on these two ‘trees’.

Now, let’s dissect all the information present in this power tree (taking into consideration only one ‘branch’ for simplicity, although the logic is applicable to all ‘branches’):

  • We were treating both power nets +1.2V and +3.3V as separate voltage loops, so we have assigned negative nodes of each Voltage Source as a reference point. Therefore, we see the ‘GND’ symbol ((1) and (2)) for each voltage source. Now all voltage calculations will be referenced to that node as 0V for its specific tree.
  • Then we see the path from Voltage Source to Current Source; the value ΔV shows the Voltage Drop along that path (3). Ultimately, this is the main value power engineers are usually interested in during this type of analysis. If we subtract ΔV from Vout, we get the ‘Actual Voltage’ delivered to the positive pin of the specific current source (1.2V – 0.22246V = 0.977V). That value is reported in the box for the Current Source (4). Technically, the same voltage drop value is reported in the column ‘IR Drop’, but in this column we get more details – we see what percentage of Vout is being dropped. Engineers usually specify the margin for the acceptable voltage drop as a percentage of Vout; in our experiment we have specified 15%, as reported in the column ‘Specification’. And we see that 18.5% is greater than 15%, therefore we get ‘Fail_I_&_V’ results (6) for that Current Source.
  • Regarding the current – we have manually specified the current value for each Current Source. Current values in Figure 2 are the same as in Figure 5. Also, we can specify the margin for the current to report pass or fail. In our example we assigned 108A as a current at the Current Source (5), while 100A is our current limit (4). Therefore, we also got failed results for the current as well.
  • As mentioned earlier, we assigned current values for each Current Source, but we didn’t set any current values for the Voltage Source. This is because the tool calculates how much current needs to be assigned for the Voltage Source, based on the value at the Current Sources. In our case we have 3 Current Sources 108A, 63A, 63A (5). The sum of these three values is 234A, which is reported as a current at the Voltage Source (7). Later we will see that this value is being used to calculate output power at the Voltage Source.  
Fig. 5. DC IR simulated data viewed as a ‘Power Tree’

Element Data

This option shows us results in the tabular representation. It lists many important calculated data points for specific objects, such as bondwire, current sources, all vias associated with the power distribution network, voltage probes, voltage sources.

Let’s continue reviewing the same power net +1.2V and the power distribution to the CPU1 component, as we did for the Power Tree (Figure 5). In the same way, we will go over the details point by point:

  • First and foremost, when we look at the information for Current Sources, we see a ‘Voltage’ value, which may be confusing. The value reported in this table is 0.7247V (8), which is different from the reported value of 0.977V in the Power Tree in Figure 5 (4). The reason for the difference is that the reported voltage values were calculated at different locations. As mentioned earlier, the reported voltage in the Power Tree is the voltage at the positive pin of the Current Source. The voltage reported in Element Data is the voltage at the negative pin of the Current Source, which doesn’t include the voltage drop across the ground plane of the return path.

To verify the reported voltage values, we can place Voltage Probes (under circuit elements). Once we do that, we will need to rerun the simulation in order to get the results for the probes:

  1. Two terminals of ‘VPROBE_1’ are attached at the positive pin of the Voltage Source and at the positive pin of the Current Source. This probe should show us the voltage difference between the VRM and the IC, which is also the same as the reported Voltage Drop ΔV. As we can see, ‘VPROBE_1’ = 222.4637mV (13), while ΔV = 222.464mV (3). They correlate perfectly!
  2. Two terminals of ‘VPROBE_GND’ are attached to the negative pin of the Current Source and the negative pin of the Voltage Source. The voltage shown by this probe is the voltage drop across the ground plane.

If we have 1.2V at the positive pin of the VRM, the voltage drops 222.464mV across the power plane, so the positive pin of the IC is supplied with 0.977V. The Current Source then draws 0.724827V (8), leaving us with (1.2V – 0.222464V – 0.724827V) = 0.252709V at the negative pin of the Current Source. On the return path the voltage drops again across the ground plane by 252.4749mV (14), delivering (0.252709V – 0.252475V) = 234uV back at the negative pin of the VRM. This is the internal voltage drop in the Voltage Source, calculated as the output current at the VRM, 234A (7), multiplied by the Parasitic Resistance at the VRM, 1E-6 Ohm (Figure 3). This is the Series R Voltage (11).

  • The Parallel R Current of the Current Source is calculated as the Voltage, 724.82mV (8), divided by the Parasitic Resistance of the Current Source, 5E+7 Ohm (Figure 3): 724.82mV / 5E+7 Ohm = 1.44965E-8 A (9).
  • The Current of the Voltage Source reported in the Element Data, 234A (10), is the same value as reported in the Power Tree (the sum of all currents of the Current Sources for the +1.2V power net) = 234A (7). Knowing this value of the current, we can multiply it by the Parasitic Resistance of the Voltage Source (Figure 3), 1E-6 Ohm: (234A * 1E-6 Ohm) = 234E-6V, which is equal to the reported Series R Voltage (11). And considering that 234A is the output current of the Voltage Source, we can multiply it by the output voltage Vout = 1.2V to get a Power Output = (234A * 1.2V) = 280.85W (12).
Fig. 6. DC IR simulated data viewed in the table format as ‘Element Data’
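
As a quick sanity check, the loop bookkeeping described above can be reproduced with a few lines of arithmetic using the values reported in Figures 5 and 6:

```python
# Sanity check of the voltage bookkeeping around the +1.2V loop using the
# values reported in the article (all numbers come from Figures 5 and 6).
v_out          = 1.2        # V, VRM output
dv_power_plane = 0.222464   # V, drop across the power plane (Power Tree)
v_current_src  = 0.724827   # V, across the Current Source (Element Data)
dv_gnd_plane   = 0.2524749  # V, drop across the ground return (VPROBE_GND)
i_vrm          = 234.0      # A, sum of all Current Sources on the net
r_vrm          = 1e-6       # Ohm, Voltage Source parasitic resistance

v_ic_pin   = v_out - dv_power_plane                      # ~0.978 V at the IC positive pin
v_neg_pin  = v_out - dv_power_plane - v_current_src      # ~0.2527 V at the IC negative pin
v_series_r = v_neg_pin - dv_gnd_plane                    # left over for the source's series R
print(f"IC positive pin: {v_ic_pin:.4f} V")
print(f"Residual at VRM negative pin: {v_series_r*1e6:.0f} uV")
print(f"I*R of the Voltage Source: {i_vrm * r_vrm * 1e6:.0f} uV")  # should match ~234 uV
```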

In addition to all the calculations and explanations provided above, the video below in Figure 7 highlights the key points of this article.

Fig. 7. Difference between reporting Voltage values in Power Tree and Element Data

Conclusion

By carefully reviewing the Power Tree and Element Data reporting options, we can answer many important questions about the power delivery network quality, such as how much voltage gets delivered to the Current Source, how much voltage drop occurs on the power net and on the ground net, etc. More valuable information can be extracted from other DC IR results options, such as ‘Loop Resistance’, ‘Path Resistance’, ‘RL table’, ‘Spice Netlist’, and the full ‘Report’. However, all these features deserve a separate topic.

As always, if you would like to receive more information related to this topic or have any questions please reach out to us at info@padtinc.com.

Efficient and Accurate Simulation of Antenna Arrays in HFSS

Posted on April 2, 2020, by: Sima Noghanian

Unit-cell in HFSS

HFSS offers different methods of creating and simulating a large array. The explicit method, shown in Figure 1(a), might be the first method that comes to mind. This is where you create the exact CAD of the array and solve it. While this is the most accurate method of simulating an array, it is computationally expensive and may not be feasible for the initial design of a large array. The use of a unit cell (Figure 1(b)) and array theory lets us start with an estimate of the array performance under a few assumptions. Finite Array Domain Decomposition (or FADDM) takes advantage of the unit cell's simplicity and creates a full model using the meshing information generated in a unit cell. In this blog we will review the creation of a unit cell. In the next blog we will explain how a unit cell can be used to simulate a large array and FADDM.

Fig. 1 (a) Explicit Array
Fig. 1 (b) Unit Cell
Fig. 1 (c) Finite Array Domain Decomposition (FADDM)

In a unit cell, the following assumptions are made:

  • The pattern of each element is identical.
  • The array is uniformly excited in amplitude, but not necessarily in phase.
  • Edge effects and mutual coupling are ignored
Fig. 2 An array consisting of elements with different amplitudes and phases can be estimated with array theory, assuming all elements have the same amplitude and element radiation pattern. In a unit cell simulation it is assumed that all magnitudes (An’s) are equal (A) and the far field of each single element is the same.
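
For context, the array theory referred to in Fig. 2 is the standard pattern-multiplication relation below (textbook form, not something specific to HFSS): the total field is approximated as the common element pattern times the array factor, and the unit cell assumption sets all the An equal.

```latex
% Textbook pattern-multiplication relation behind Fig. 2 (standard array theory):
\vec{E}_{\mathrm{total}}(\theta,\phi) \approx
\vec{E}_{\mathrm{element}}(\theta,\phi)\, AF(\theta,\phi),
\qquad
AF(\theta,\phi) = \sum_{n=1}^{N} A_n\, e^{\,j\left(k\,\hat{r}\cdot\vec{r}_n + \beta_n\right)}
% In the unit-cell assumption all A_n are equal to a common amplitude A.
```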

A unit cell works based on Master/Slave (or Primary/Secondary) boundaries around the cell. Master/Slave boundaries are always paired. In a rectangular cell you may use the new Lattice Pair boundary that was introduced in Ansys HFSS 2020R1. These boundaries are a means of simulating an infinite array and estimating the performance of a relatively large array. The use of a unit cell reduces the required RAM and solve time.

Primary/Secondary (Master/Slave) (or P/S) boundaries can be combined with Floquet port, radiation or PML boundary to be used in an infinite array or large array setting, as shown in Figure 3.

Fig. 3 Unit cell can be terminated with (a) radiation boundary, (b) Floquet port, (c) PML boundary, or combination of them.

To create a unit cell with P/S boundaries, first start with a single element with the exact dimensions of the cell. The next step is creating a vacuum or airbox around the cell. For this step, set the padding in the location of the P/S boundaries to zero. For example, Figure 4 shows a microstrip patch antenna on which we intend to base a 2D array. The array is placed on the XY plane. An airbox is created around the unit cell with zero padding in the X and Y directions.

Fig. 4 (a) A unit cell starts with a single element with the exact dimensions as it appears in the lattice
Fig. 4 (b) A vacuum box is added around it

You will notice that in this example the vacuum box is larger than the quarter-wavelength size that is usually used when creating a vacuum region around an antenna. We will get to the calculation of this size in a bit; for now, let’s just assign a value or parameter to it, as it will be determined later. The next step is to define the P/S boundaries to generate the lattice. In AEDT 2020R1 this boundary is under “Coupled” boundaries. There are two methods to create P/S boundaries: (1) Lattice Pair, (2) Primary/Secondary boundary.

Lattice Pair

The Lattice Pair works best for square lattices. It automatically assigns the primary and secondary boundaries. To assign a lattice pair boundary select the two sides that are supposed to create infinite periodic cells, right-click->Assign Boundary->Coupled->Lattice Pair, choose a name and enter the scan angles. Note that scan angles can be assigned as parameters. This feature that is introduced in 2020R1 does not require the user to define the UV directions, they are automatically assigned.

Fig. 5 The lattice pair assignment (a) select two lattice walls
Fig. 5 (b) Assign the lattice pair boundary
Fig. 5 (c) After right-clicking and choosing Assign Boundary > Coupled, choose Lattice Pair
Fig. 5 (d) Phi and Theta scan angles can be assigned as parameters

Primary/Secondary

Primary/Secondary boundary is the same as what used to be called Master/Slave boundary. In this case, each Secondary (Slave) boundary should be assigned following the corresponding Primary (Master) boundary's UV directions. First choose the side of the cell for the Primary boundary: right-click->Assign Boundary->Coupled->Primary. In the Primary Boundary window, define the U vector. Next select the secondary wall: right-click->Assign Boundary->Coupled->Secondary, choose the Primary boundary, define the U vector in exactly the same direction as the Primary, and add the scan angles (the same as the Primary scan angles).

Fig. 6 Primary and secondary boundaries highlights.

Floquet Port and Modes Calculator

A Floquet port excites and terminates waves propagating down the unit cell. They are similar to waveguide modes. A Floquet port is always linked to P/S boundaries. A set of TE and TM modes travels inside the cell. However, keep in mind that the number of modes absorbed by the Floquet port is determined by the user. All the other modes are short-circuited back into the model. To assign a Floquet port, two major steps should be taken:

Defining Floquet Port

Select the face of the cell to which you would like to assign the Floquet port. This is determined by the location of the P/S boundaries. The directions of the lattice vectors A and B are defined by the direction of the lattice (Figure 7).

Fig. 7 Floquet port on top of the cell is defined based on UV direction of P/S pairs

The number of modes to be included is defined with the help of the Modes Calculator. In the Mode Setup tab of the Floquet Port window, choose a high number of modes (e.g. 20) and click on Modes Calculator. The Mode Table Calculator will request your input of Frequency and Scan Angles. After selecting those, a table of modes and their attenuation in dB/length units is created. This is your guide in selecting the height of the unit cell and vacuum box. The attenuation multiplied by the height of the unit cell (in the project units, defined in Modeler->Units) should be large enough to make sure the modes are attenuated enough that removing them from the calculation does not cause errors. If the unit cell is too short, then you will see that many modes are not attenuated enough. The product of the attenuation and the height of the airbox should be at least 50 dB. After the correct size for the airbox is calculated and entered, the modes with high attenuation can be removed from the Floquet port definition.

The 3D Refinement tab is used to control the inclusion of the modes in the 3D refinement of the mesh. It is recommended not to select them for the antenna arrays.

Fig. 8 (Left) Determining the scan angles for the unit cell, (Right) Modes Calculator showing the Attenuation

In our example, Figure 8 shows that the 5th mode has an attenuation of 2.59dB/length. The height of the airbox is around 19.5mm, providing 19.5mm*2.59dB/mm=50.505dB of attenuation for the 5th mode. Therefore, only the first 4 modes are kept for the calculations. If the height of the airbox were less than 19.5mm, we would need to increase the height accordingly for an attenuation of at least 50dB.
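
That height check is easy to script. The helper below just implements the 50 dB rule of thumb described above, using the numbers from this example:

```python
# Helper for the >= 50 dB rule of thumb described above: given the Modes
# Calculator attenuation (dB per model unit) of the first mode you want to
# discard, find the minimum airbox height that justifies discarding it.
def min_airbox_height(attenuation_db_per_unit, target_db=50.0):
    return target_db / attenuation_db_per_unit

# Example from the text: the 5th mode attenuates at 2.59 dB/mm.
att_5th = 2.59                          # dB/mm
h_min = min_airbox_height(att_5th)      # ~19.3 mm
h_box = 19.5                            # mm, actual airbox height in the model
print(f"Minimum height: {h_min:.1f} mm, actual: {h_box} mm, "
      f"attenuation achieved: {att_5th * h_box:.1f} dB")
```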

Radiation Boundary

A simpler alternative to the Floquet port is a radiation boundary. It is important to note that the airbox should still be kept around the same size that was calculated for the Floquet port, so that higher order modes are sufficiently attenuated. In this case the traditional quarter-wavelength padding might not be adequate.

Fig. 9 Radiation boundary on top of the unit cell

Perfectly Matched Layer

Although using a radiation boundary is much simpler than a Floquet port, it is not accurate for large scan angles. It can be a good alternative to the Floquet port only if the beam scanning is limited to small angles. Another alternative to the Floquet port is to cover the cell with a layer of PML. This is a good compromise and provides results very similar to Floquet port models. However, the P/S boundaries need to surround the PML layer as well, which means a few additional steps are required. Here is how you can do it:

  1. Reduce the size of the airbox* slightly, so that after adding the PML layer, the unit cell height is the same as the one that was generated using the Modes Calculator. (For example, in our model the airbox height was 19mm + substrate thickness and the PML height was 3mm, so we reduced the airbox height to 16mm.)
  2. Choose the top face and add PML boundary.
  3. Select each side of the airbox and create an object from that face (Figure 10).
  4. Select each side of the PML and create objects from those faces (Figure 10).
  5. Select the two faces that are on the same plane from the faces created from airbox and PML and unite them to create a side wall (Figure 10).
  6. Then assign P/S boundary to each pair of walls (Figure 10).

*Please note that for this method an auto-size “region” cannot be used; instead, draw a box for the air/vacuum box. The region does not let you create the faces you need to combine with the PML faces.

Fig. 10 Selecting two faces created from airbox and PML and uniting them to assign P/S boundaries

The advantage of PML termination over a Floquet port is that it is simpler and sometimes faster to calculate. The advantage over radiation boundary termination is that it provides accurate results for large scan angles. For better accuracy, the mesh for the PML region can be defined as length-based.

Seed the Mesh

To improve the accuracy of the PML model further, an option is to use length-based mesh. To do this select the PML box, from the project tree in Project Manager window right-click on Mesh->Assign Mesh Operation->On Selection->Length Based. Select a length smaller than lambda/10.

Fig. 11 Using element length-based mesh refinement can improve the accuracy of PML design

Scanning the Angle

In phased array simulation, we are mostly interested in the performance of the unit cell and array at different scan angles. To add the scanning option, the phase of P/S boundary should be defined by project or design parameters. The parameters can be used to run a parametric sweep, like the one shown in Figure 12. In this example the theta angle is scanned from 0 to 60 degrees.

Fig. 12 Using a parametric sweep, the scanned patterns can be generated
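
For reference, the phasing that such a sweep applies across the cell follows the standard progressive-phase relations for a planar array (HFSS derives the P/S boundary phases from the scan angles you enter; the textbook form is shown below):

```latex
% Textbook progressive-phase relations for scanning a planar array to
% (theta_0, phi_0); d_x, d_y are the unit-cell periods and psi_x, psi_y
% the phase shifts across one cell in x and y.
\psi_x = -k_0\, d_x \sin\theta_0 \cos\phi_0,
\qquad
\psi_y = -k_0\, d_y \sin\theta_0 \sin\phi_0,
\qquad
k_0 = \frac{2\pi}{\lambda_0}
```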

Comparing PML and Floquet Port with Radiation Boundary

To see the accuracy of the radiation boundary vs. PML and Floquet Port, I ran the simulations for scan angles up to 60 degrees for a single element patch antenna. Figure 13 shows that the accuracy of the Radiation boundary drops after around 15 degrees scanning. However, PML and Floquet port show similar performance.

Fig. 13 Comparison of radiation patterns using PML (red), Floquet Port (blue), and Radiation boundary (orange).

S Parameters

To compare the accuracy, we can also check the S parameters. Figure 14 shows the comparison of active S at port 1 for PML and Floquet port models. Active S parameters were used since the unit cell antenna has two ports. Figure 15 shows how S parameters compare for the model with the radiation boundary and the one with the Floquet port.

Fig. 14 Active S parameter comparison for different scan angles, PML vs. Floquet Port model.
Fig. 15 Active S parameter comparison for different scan angles, Radiation Boundary vs. Floquet Port model.

Conclusion

The unit cell definition and options on terminating the cell were discussed here. Stay tuned. In the next blog we discuss how the unit cell is utilized in modeling antenna arrays.

Ansys Sherlock: A Comprehensive Electronics Reliability Tool

Posted on March 24, 2020, by: Josh Stout

As systems become more complex, the introduction and adoption of detailed Multiphysics / Multidomain tools is becoming more commonplace. Oftentimes, these tools serve as preprocessors and specialized interfaces for linking together other base level tools or models in a meaningful way. This is what Ansys Sherlock does for Circuit Card Assemblies (CCAs), with a heavy emphasis on product reliability through detailed life cycle definitions.

In an ideal scenario, the user will have already compiled a detailed ODB++ archive containing all the relevant model information. For Sherlock, this includes .odb files for each PCB layer, the silkscreens, component lists, component locations separated by top/bottom surface, drilled locations, solder mask maps, mounting points, and test points. This would provide the most streamlined experience from a CCA design through reliability analysis, though any of these components can be imported individually.

These definitions, in combination with an extensive library of package geometries, allow Sherlock to generate a 3D model consisting of components that can be checked against accepted parts lists and material properties. The inclusion of solder mask and silkscreen layers also makes for convenient spot-checking of component location and orientation. If any of these things deviate from the expected or if basic design variation and optimization studies need to be conducted, new components can be added and existing components can be removed, exchanged, or edited entirely within Sherlock.

Figure 1: Sherlock's 2D layer viewer and editor. Each layer can be toggled on/off, and components can be rearranged.

While a few of the available analyses depend on just the component definitions and geometries (Part Validation, DFMEA, and CAF Failure), the rest are in some way connected to the concept of life cycle definitions. The overall life cycle can be organized into life phases, e.g. an operating phase, packaging phase, transport phase, or idle phase, which can then contain any number of unique event definitions. Sherlock provides support for vibration events (random and harmonic), mechanical shock events, and thermal events. At each level, these phases and events can be prescribed a total duration, cycle count, or duty cycle relative to their parent definition. On the Life Cycle definition itself, the total lifespan and accepted failure probability within that lifespan are defined for the generation of final reliability metrics. Figure 2 demonstrates an example layout for a CCA that may be part of a vehicle system containing both high cycle fatigue thermal and vibration events, and low cycle fatigue shock events.

Figure 2: Product life cycles are broken down into life phases that contain life events. Each event is customizable through its duration, frequency, and profile.

The remaining analysis types can be divided into two categories: FEA and part specification-based. The FEA based tests function by generating a 3D model with detail and mesh criteria determined within Sherlock, which is then passed over to an Ansys Mechanical session for analysis. Sherlock provides quite a lot of customization on the pre-processing level; the menu options include different methods and resolutions for the PCB, explicit modeling of traces, and inclusion or exclusion of part leads, mechanical parts, and potting regions, among others.

Figure 3: Left shows the 3D model options, the middle shows part leads modeled, and right shows a populated board.

Each of the FEA tests, Random Vibration, Harmonic Vibration, Mechanical Shock, and Natural Frequency, corresponds to an analysis block within Ansys Workbench. Once these simulations are completed, the results file is read back into Sherlock, and strain values for each component are extracted and applied to either Basquin or Coffin-Manson fatigue models as appropriate for each included life cycle event.
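
For reference, the standard forms of those two fatigue models are shown below (generic textbook versions; the material constants Sherlock actually uses are not reproduced here):

```latex
% Basquin (high-cycle, stress-based):
\frac{\Delta\sigma}{2} = \sigma'_f \,(2N_f)^{b}
% Coffin-Manson (low-cycle, plastic-strain-based):
\frac{\Delta\varepsilon_p}{2} = \varepsilon'_f \,(2N_f)^{c}
```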

Part specification tests include Component Failure Analysis for electrolytic and ceramic capacitors, Semiconductor Wearout for semiconductor devices, and CTE mismatch issues for Plated Through-Hole and solder fatigue. These analyses are much more component-specific in the sense that an electrolytic capacitor has some completely different failure modes than a semiconductor device and including them allows for a broad range of physics to be accounted for across the CCA.

The result from each type of analysis is ultimately a life prediction for each component in terms of a failure probability curve alongside a time to failure estimate. The curves for every component are then combined into a life prediction for the entire CCA under one failure analysis.

Figure 4: Analysis results for Solder Fatigue including an overview for quantity of parts in each score range along with a detailed breakdown of score for each board component.

Taking it one step further, the results from each analysis are then combined into an overall life prediction for the CCA that encompasses all the defined life events. From Figure 5, we can see that the life prediction for this CCA does not quite meet its 5-year requirement, and that the most troublesome analyses are Solder Fatigue and PTH Fatigue. Since Sherlock makes it easy to identify these as problem areas, we could then iterate on this design by reexamining the severity or frequency of applied thermal cycles or adjusting some of the board material choices to minimize CTE mismatch.

Figure 5: Combined life predictions for all failure analyses and life events.

Sherlock’s convenience for defining life cycle phases and events, alongside the wide variety of component definitions and failure analyses available, cements its role as a comprehensive electronics reliability tool. As with most analyses, the quality of the results is still dependent on the quality of the input, but the checks and cross-validations between components and life events built into Sherlock’s preprocessing toolset help a great deal there, too.

ANSYS Discovery Live: A Focus on Topology Optimization

Posted on March 10, 2020, by: Josh Stout

For those who are not already familiar with it, Discovery Live is a rapid design tool that shares the Discovery SpaceClaim environment. It is capable of near real-time simulation of basic structural, modal, fluid, electronic, and thermal problems. It does this by leveraging the computational power of a dedicated GPU, though the speed requirement means it necessarily has somewhat less fidelity than the corresponding full Ansys analyses. Even so, the ability to immediately see the effects of modifying, adding, or rearranging geometry through SpaceClaim’s operations provides tremendous value to designers.

One of the most interesting features within Discovery Live is the ability to perform Topology Optimization, reducing the quantity of material in a design while maintaining optimal stiffness for a designated loading condition. This is particularly appealing given the rapid adoption of 3D printing and other additive manufacturing techniques, where reducing the total material used saves both time and material cost. These processes also allow the production of complex organic shapes that were not always feasible with more traditional techniques like milling.

With these things in mind, we have recently received requests to demonstrate Discovery Live’s capabilities and provide some training in its use, especially for topology optimization. Given that Discovery Live is amazingly straightforward in its application, this also seems like an ideal topic to expand on in blog form alongside our general Discovery Live workshops!

For this example, we have chosen to work with a generic “engine mount” geometry that was saved in .stp format. The overall dimensions are about 10 cm wide x 5 cm tall x 5 cm deep, and we assume it is made out of stainless steel (though this is not terribly important for this demonstration).

Figure 1: Starting engine mount geometry with fixed supports and a defined load.

The three bolt holes around the perimeter are fixed in position, as if they were firmly clamped to a surface, while a total load of 9069 N (-9000 N in X, 1000 N in Y, and 500 N in Z) is applied to the cylindrical surfaces on the front. From here, we simply tell Discovery Live that we would like to add a topology optimization calculation onto our structural analysis. This opens up a couple more options: how we define the amount of material to remove and how much material around boundary conditions to preserve. For removing material, we can choose either to reduce the total volume by a percentage of the original or to remove material until we reach a specific model volume. The preserved area around boundary conditions is an "inflation" length measured as a normal distance from those surfaces, which is easy to visualize by highlighting the condition in the solution tree.
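As a quick check on that total, the resultant of the three load components is

|F| = sqrt((-9000 N)^2 + (1000 N)^2 + (500 N)^2) = sqrt(82,250,000) N ≈ 9069 N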

Figure 2: Inflation zone shown around each fixed support and load surface.

Since I have already planned out what kind of comparisons I want to make in this analysis, I chose to set the final model volume to 30 cm³. After hitting the simulate button, we get to watch the optimization happen alongside a rough structural analysis. By default, we are provided with a result chart showing the model’s volume, which quickly converges on our target. As with any analysis, the duration of this process is fairly sensitive to the fidelity specified, but with default settings it took all of 7 minutes and 50 seconds to complete on my desktop with a Quadro K4000.

Figure 3: Mid-optimization on the top, post-optimization on the bottom.

Once optimization is complete, there are several more operations that become available. In order to gain access to the optimized structure, we need to convert it into a model body. Both options for this result in faceted bodies with the click of a button located in the solution tree; the difference is just that the second has also had a smoothing operation applied to it. One or the other may be preferable, depending on your application.

Figure 4: Converting results to faceted geometry


Figure 5: Faceted body post-optimization

Figure 6: Smoothed faceted body post-optimization

Though some rough stress calculations were made throughout the optimization process, the next step is typically a validation. Discovery Live makes this as simple as right-clicking on the optimized result in the solution tree and selecting the “Create Validation Solution” button. This essentially copies the newly generated geometry into a new structural analysis while preserving the previously applied supports and loads, allowing finer control over the fidelity of our validation while still giving a very fast confirmation of our results. Using maximum fidelity on our faceted body, we find that the resulting maximum stress is about 360 MPa, compared to our unoptimized structure’s 267 MPa, though of course our new material volume is less than half the original.

Figure 7: Optimized structure validation. Example surfaces that are untouched by optimization are boxed.

It may be that our final stress value is higher than what we find acceptable. At this point, it is important to note one of the limitations in version 2019R3: Discovery Live can only remove material from the original geometry; it does not add any. This means that any surfaces remaining unchanged throughout the process are important in maintaining structural integrity for the specified load. So, if we really want to optimize our structure, we should start with additional material in these regions to allow for more optimization flexibility.

In this case, we can go back to our original engine mount model in Discovery Live and use the integrated SpaceClaim tools to thicken our backplate and expand the fillets around the load surfaces.

Figure 8: Modified engine mount geometry with a thicker backplate and larger fillets.

We can then run back through the same analysis, specifying the same target volume, to improve the performance of our final component. Indeed, we find that after optimizing back down to a material volume of 30 cm³, our new maximum stress has decreased to 256 MPa. Keep in mind that this is very doable within Discovery Live, as the entire modification and simulation process can be completed in under 10 minutes for this model.

Figure 9: Validated results from the modified geometry post-optimization.

Of course, once a promising solution has been attained in Discovery Live, we should export the model and run a more thorough analysis in Ansys Mechanical, but hopefully this provides a useful example of how to leverage this amazing tool!

One final comment is that while this example was performed in the 2019R3 version, 2020R1 has expanded Discovery Live’s optimization capability somewhat. Instead of only being allowed to specify a target volume or percent reduction, you can choose to allow a specified increase in structure compliance while minimizing the volume. In addition to this, there are a couple more knobs to turn for better control over the manufacturability of the result, such as specifying the maximum thickness of any region and preventing any internal overhangs in a specified direction. It is now also possible to link topology optimization to a general-purpose modal analysis, either on its own or coupled to a structural analysis. These continued improvements are great news for users, and we hope that even more features continue to roll out.
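For readers who like to see that new option written out, one generic way to state it (my notation, not anything taken from the Ansys documentation) is:

minimize    V(design)
subject to  C(design) <= (1 + a) * C0

where C0 is the compliance of the starting structure under the specified loads and a is the allowed fractional increase in compliance. This is essentially the flip side of the workflow above, which kept the structure as stiff as possible at a fixed target volume.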

Icepak in Ansys Electronic Desktop – Why should you know about it?

Posted on March 5, 2020, by: Josh Stout

The role of Ansys Electronics Desktop Icepak (hereafter referred to as Icepak, not to be confused with Classic Icepak) is in an interesting place. On the back end, it is a tremendously capable CFD solver that uses the Ansys Fluent code. On the front end, it is an all-in-one pre- and post-processor streamlined for electronics thermal management, including the explicit simulation of fluid convection and its effects. In this regard, Icepak can be thought of as a system-level multiphysics simulation tool.

One of the advantages of Icepak is its interface consistency with the rest of the Electronics Desktop (EDT) products. This not only results in a slick, modern appearance but also provides a very familiar environment for the electrical engineers and designers who typically use the other EDT tools. While they may not already be intimately familiar with the physics and setup process for CFD/thermal simulations, being able to follow a very similar workflow certainly lowers the barrier to entry for getting useful results. Even if complete adoption by these users is not practical, this same environment can serve as a happy medium for collaboration with thermal and fluids experts.

Figure 1: AEDT Icepak interface. The same ribbon menus, project manager, history tree, and display window as other EDT products.

So, beyond these generalities, what does Icepak actually offer for an optimized user experience over other tools, and what kinds of problems/applications are best suited for it?

The first thing that comes to mind for both of these questions is a PCB with attached components. Anyone who has looked at the inside of a computer is likely familiar with a motherboard covered with all kinds of little chips and capacitors and often dominated by a CPU mounted with a heatsink and fan. In most cases, this motherboard is enclosed within some kind of box (a computer case) with vents/filters/fans on at least some of the sides to facilitate controlled airflow. This is an ideal scenario for Icepak. The geometries of the board and its components are typically well represented by rectangular prisms and cylinders, and the thermal management of the system is strongly governed by the physics of conjugate heat transfer. For the case geometry, it may be more convenient to import it from a more comprehensive modeler like SpaceClaim and then take advantage of the tools built into Icepak to quickly process the important features.

Figure 2: A computer case with motherboard imported from SpaceClaim. The front and back have vents/fans while the side has a rectangular patterned grille.

For a CAD model like the one above, we may want to include some additional items like heatsinks, fan models, or simple PCB components. Icepak’s geometry tools include some very convenient parameterized functions for quickly constructing and positioning fans and heatsinks, in addition to the basic ability to create and manipulate simple volumes. There are also routines for extracting openings on surfaces, such as the rectangular vent arrays on the front and back as well as the patterned grille on the side. So not only can you import detailed CAD from external sources, but you can also mix, match, and simplify it with Icepak’s native geometry, which streamlines the entire design and setup process. For an experienced user, the above model can be prepared for a basic simulation in just a matter of minutes. The resulting configuration, with an added heatsink, some RAM, and boundary conditions, could look something like this:

Figure 3: The model from Figure 2 after Icepak processing. Boundary conditions for the fans, vents, and grille have been defined. Icepak primitives have also been added in the form of a heatsink and RAM modules.

Monitor points can then be assigned to surfaces or bodies as desired; chances are that for a simulation like this, the temperature within the CPU is the most important. Additional temperature points for each RAM module or flow measurements for the fans and openings can also be defined. These points can all be tracked as the simulation proceeds to ensure that convergence is actually attained.

Figure 4: Monitoring chosen solution variables to ensure convergence.
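As an aside, the kind of check these monitors support is simple: watch the last several iterations of a monitored quantity and call it converged when the relative change is small. Here is a rough sketch of that logic in plain Python operating on exported monitor data; it is illustrative only and not an Icepak API.

def is_converged(history, window=20, rel_tol=1e-3):
    # Converged when the spread of the last `window` samples is small
    # relative to their mean value.
    if len(history) < window:
        return False
    recent = history[-window:]
    mean = sum(recent) / window
    return (max(recent) - min(recent)) <= rel_tol * abs(mean)

# Example: a CPU temperature trace (deg C) that settles out near 75.6
cpu_temp_history = [85.0, 79.2, 76.5, 75.8] + [75.6 - 0.001 * i for i in range(30)]
print(is_converged(cpu_temp_history))   # True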

For this simple system containing a 20 W CPU and 8 RAM modules at 2 W each, quite a few of our components are toasty and potentially problematic from a thermal standpoint.

Figure 5: Post-processing with Icepak. Temperature contours are overlaid with flow velocities to better understand the behavior of the system.
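It also helps to keep a simple energy balance in mind alongside contour plots like these. The sketch below is a back-of-the-envelope estimate of the airflow needed to carry the 36 W of heat out of the case, using standard air properties and an assumed 10 °C allowable rise in bulk air temperature; none of it is Icepak-specific, and the allowable rise is my own assumption.

q_total = 20.0 + 8 * 2.0      # W: the CPU plus eight RAM modules
rho, cp = 1.2, 1005.0         # kg/m^3 and J/(kg*K) for air near room temperature
delta_t = 10.0                # K: assumed allowable rise in bulk air temperature

vol_flow = q_total / (rho * cp * delta_t)   # m^3/s of air needed to carry the heat away
print(vol_flow, vol_flow * 2118.88)         # ~0.003 m^3/s, roughly 6 CFM

If the chosen fans cannot comfortably deliver that flow through the case, no amount of heatsink tweaking will keep the bulk air temperature down.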

With the power of a simulation environment in Icepak at our fingertips, we can now play around with our design parameters to improve the thermal management of this system! Want to see what happens when you block the outlet vents? Easy, select and delete them! Want to use a more powerful fan or try a new material for the motherboard or heatsink? Just edit their properties in the history tree. Want to spin around the board or try changing the number of fins on the heatsink? Also straightforward, although you will have to remesh the model. While these are the kinds of things that are certainly possible in other tools, they are exceptionally easy to do within an all-in-one interface like Icepak.

The physics involved in this example are pretty standard: solid body conduction with conjugate heat transfer to a turbulent flow using a k-omega model. Where Icepak really shines is in its ability to integrate with the other tools in the EDT environment. While we assumed that the motherboard was nothing more than a solid chunk of FR-4, this board could have been designed and simulated in detail with another tool like HFSS. The board, along with all of the power losses calculated during the HFSS analysis, could then have been imported directly into the Icepak project. This would allow each layer to be modeled with its own spatially varying thermal properties according to trace locations, as well as a very accurate spatial mapping of heat generation.

This is not at all to say that Icepak is limited to these kinds of PCB and CCA examples; they just tend to be convenient to think about and relatively easy to represent geometrically. Using Fluent as the solver provides a lot of flexibility, and there are many more classes of problems that could benefit from Icepak. On the low frequency side, electric motors are a good example of a problem where electromagnetic and thermal behavior are intertwined. As voltage is applied to the windings, currents are induced and heat is generated. For larger motors, these currents, and consequently the associated thermal losses, can be significant. Maxwell is used to model the electromagnetic side of these problems, and the results can then be easily brought into an Icepak simulation. I have gone through just such an example: a rotor/stator/winding motor assembly modeled in Maxwell, then copied into an Icepak project to simulate the resulting steady-state temperature profile in a box of naturally convecting air.

Figure 6: An example half-motor that was solved in Maxwell as a magnetostatic problem and then copied over to Icepak for thermal analysis.

If it is found that better thermal management is needed, extra features can then be added on the Icepak side as desired, such as a dedicated heatsink or an external fan. Only the components with loads mapped over from Maxwell need to remain unmodified.

On the high frequency side, you may care about the performance of an antenna. HFSS can be used for the electromagnetic side, while Icepak can once again be brought in to analyze the thermal behavior. For a high-powered antenna, some components could very easily get hot enough for the material properties to change appreciably and for thermal radiation to become a dominant mode of heat transport. A two-way automatic Icepak coupling is an excellent way to model this. Thermal modifiers may be defined for material properties in HFSS, and radiation is a supported physics model in Icepak. HFSS and Icepak can then be set up to alternately solve and automatically feed each other new loads and boundary conditions until a converged result is attained.
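Conceptually, that two-way coupling is just a fixed-point iteration between the two solvers. Below is a toy Python sketch of the loop structure; the one-line "solvers" are stand-ins invented purely to show the idea and have nothing to do with the actual AEDT automation interface.

def solve_em(temp_c):
    # Toy EM "solve": losses rise mildly with temperature (think conductor resistance).
    return 5.0 * (1.0 + 0.004 * (temp_c - 25.0))   # W

def solve_thermal(loss_w):
    # Toy thermal "solve": a fixed 8 C/W resistance above a 25 C ambient.
    return 25.0 + 8.0 * loss_w                     # deg C

temp = 25.0
for _ in range(50):
    loss = solve_em(temp)            # EM pass with temperature-dependent behavior
    new_temp = solve_thermal(loss)   # thermal pass with the updated heat load
    converged = abs(new_temp - temp) < 0.1
    temp = new_temp
    if converged:                    # stop when another exchange no longer changes the answer
        break
print(round(temp, 1), "deg C at", round(loss, 2), "W")   # settles near 72.6 deg C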

What all of this really comes down to is the question: how easy is it for the user to set up a model that will produce the information they need? For these kinds of electronics questions, I believe the answer for Icepak is “extraordinarily easy”. While functional on its own merit, Icepak really shines when it comes to the ease of coupling thermal management analysis with the EM family of tools.

ANSYS Mechanical: Mesh Time Metric Display

Posted on February 24, 2020, by: Joe Woodward

The things you find out from poking around the Enhancement Request list…

Did you know that you can get ANSYS Mechanical to report the amount of time that the meshing takes? I didn’t until I stumbled across this little gem on the request to show mesh time metrics.

This option has actually been available for many releases. Users can turn on performance diagnostics inside Mechanical by setting Tools -> Options -> Miscellaneous -> "Report Performance Diagnostics in Messages" to Yes.

So, of course, I tried it out.

This was in version 2020R1, but it says that the option has been there since R19.0. Now they just need to add it to the Statistics section of the Mesh Details so that we can use it as an output parameter.

Reduce EMI with Good Signal Integrity Habits

Posted on January 14, 2020, by: Aleksandr Gafarov

Recently the ‘Signal Integrity Journal’ posted their ‘Top 10 Articles’ of 2019. All of the articles included were incredible; however, one stood out to me from the rest: ‘Seven Habits of Successful 2-Layer Board Designers’ by Dr. Eric Bogatin (https://www.signalintegrityjournal.com/blogs/12-fundamentals/post/1207-seven-habits-of-successful-2-layer-board-designers). In this work, Dr. Bogatin and his students developed a 2-layer printed circuit board (PCB) while trying to minimize signal and power integrity issues as much as possible. As a result, they produced a board and described seven ‘golden habits’ for its development. These are fantastic habits that I’m confident we can all agree with. In particular, there was one habit at which I wanted to take a deeper look:

“…Habit 4: When you need to route a cross-under on the bottom layer, make it short. When you can’t make it short, add a return strap over it..”

Generally speaking, this habit suggests being very careful when routing signal traces over a gap in the ground plane. From the signal integrity point of view, Dr. Bogatin explained it perfectly: “..The signal traces routed above this gap will see a gap in the return path and generate cross talk to other signals also crossing the gap..”. On one hand, crosstalk won’t be a problem if there are no other nets around, so the layout might work just fine in that case. However, crosstalk is not the only risk. Fundamentally, crosstalk is an EMI problem. So, I wanted to explore what happens when this habit is ignored and there are no nearby nets to worry about.

To investigate, I created a simple 2-layer board with a signal trace, connected to a 5 V voltage source, routed over an air gap. I then examined the near field and far field results using the ANSYS SIwave solution. Here is what I found.

Near and Far Field Analysis

Typically, near and far fields are characterized by the solved E and H fields around the model. This feature in ANSYS SIwave gives the engineer the ability to simulate both E and H fields for near field analysis, and the E field for far field analysis.

First and foremost, we can see, as expected, that the near and far fields have resonances at the same frequencies. Additionally, we can observe from Figure 1 that the near field E and H fields have their largest radiation spikes at the 786.3 MHz and 2.349 GHz resonant frequencies.

Figure 1. Plotted E and H fields for both Near and Far Field solutions

If we plot the E and H fields for the near field, we can see at which physical locations the radiation is strongest.

Figure 2. Plotted E and H fields for Near field simulations

As expected, we see the maximum radiation occurring over the air gap, where there is no return path for the current. Since we know that current is directly related to electromagnetic fields, we can also compute AC current to better understand the flow of the current over the air gap.

Compute AC Currents (PSI)

This feature has a very simple setup interface. The user only needs to make sure that the excitation sources are read correctly and that the frequency range is properly indicated. A few minutes after setting up the simulation, we get frequency dependent results for current. We can review the current flow at any simulated frequency point or view the current flow dynamically by animating the plot.

Figure 3. Computed AC currents

As seen in Figure 3, we observe the current being transferred from the energy source, along the transmission line, to the open end of the trace. On the ground layer, we see the return current directed back to the source. However, at the location of the air gap there is no metal for the return current to flow through, so we see an unwanted concentration of energy along the plane edges. This energy may cause electromagnetic radiation and potential emission problems.

For a very complicated multi-layer board design, it won’t be easy to simulate the current flow and the near and far fields for the whole board. It is possible, but the engineer will need either extra computing time or extra computing power. To address this issue, SIwave has a feature called EMI Scanner, which helps identify problematic areas on the board without running full simulations.

EMI Scanner

ANSYS EMI Scanner, which is based on geometric rule checks, identifies design issues that might result in electromagnetic interference problems during operation. So, I ran the EMI Scanner to quickly identify areas on the board which may create unwanted EMI effects. After finding all potentially problematic areas with the EMI Scanner, it is recommended that engineers run more detailed analyses on those areas using other SIwave features or HFSS.

Currently the EMI Scanner contains 17 rules, which are categorized as ‘Signal Reference’, ‘Wiring/Crosstalk’, ‘Decoupling’ and ‘Placement’. For this project, I focused on the ‘Signal Reference’ rules group, to find violations for ‘Net Crossing Split’ and ‘Net Near Edge of Reference’. I will discuss other EMI Scanner rules in more detail in a future blog (so be sure to check back for updates).

Figure 4. Selected rules in EMI Scanner (left) and predicted violations in the project (right)

As expected, the EMI Scanner properly identified 3 violations, as highlighted in Figure 4. We can either review or export the report, or analyze the violations with iQ-Harmony. With this feature, besides generating a user-friendly report with graphical explanations, we are also able to run ‘What-if’ scenarios to see the possible results of geometric optimization.

Figure 5. Generated report in iQ-Harmony with ‘What-If’ scenario

Based on these quick EMI Scanner results, the engineer would either redesign the board right away or run further analysis using a more accurate approach.

Conclusion

In this blog, we were able to successfully run simulations using the ANSYS SIwave solution to understand the effect of not following Dr. Bogatin’s advice about routing a signal trace over a gap on a 2-layer board. We were also able to use four different features in SIwave, each of which delivered the correct, expected results.

Overall, it is not easy to think about all possible SI/PI/EMI issues while developing a complex board. These days, engineers don’t need to manufacture a physical board to evaluate EMI problems. A lot of development steps can now be performed in simulation, and the ANSYS SIwave tool, in conjunction with the HFSS solver, can help deliver the right design on the first try.

If you would like more information or have any questions please reach out to us at info@padtinc.com.

Defining Antenna Array Excitations with Nested-If Statements in HFSS

Posted on January 7, 2020, by: Sima Noghanian

HFSS offers various methods of defining array excitations. For a large array, you may take advantage of the “Load from File” option to load the magnitude and phase of each port. In many situations, however, you may have specific cases of array excitation to examine, for example, a change in amplitude tapering or the phase variation that comes with a change in frequency. In this blog we will look at using the “Edit Sources” method to change the magnitude and phase of each excitation. Some cases are not easily automated with a parametric sweep, but if the array is relatively small and there are not many individual cases to examine, you can set them up using array parameters and nested-if statements.

In the following example, I used nested-if statements to parameterize the excitations of the pre-built example “planar_flare_dipole_array”, which can be found by choosing File->Open Examples->HFSS->Antennas (Fig. 1), so you can follow along. The file was saved as “planar_flare_dipole_array_if”, and the project was then copied to create two examples (Phase Variations and Amplitude Variations).

Fig. 1. Planar_flare_dipole_array with 5 antenna elements (HFSS pre-built example).

Phase Variation for Selected Frequencies

In this example, I assumed there were three different frequencies that each had a set of coefficients for the phase shift. Therefore, three array parameters were created. Each array parameter has 5 elements, because the array has 5 excitations:

A1: [0, 0, 0, 0, 0]

A2: [0, 1, 2, 3, 4]

A3: [0, 2, 4, 6, 8]

Then 5 coefficients were created using nested-if statements. “Freq” is one of the built-in HFSS variables and refers to frequency. The simulation was set up for a discrete sweep of 3 frequencies (1.8, 1.9, and 2.0 GHz) (Fig. 2). The coefficients were defined as (Fig. 3):

E1: if(Freq==1.8GHz,A1[0],if(Freq==1.9GHz,A2[0],if(Freq==2.0GHz,A3[0],0)))

E2: if(Freq==1.8GHz,A1[1],if(Freq==1.9GHz,A2[1],if(Freq==2.0GHz,A3[1],0)))

E3: if(Freq==1.8GHz,A1[2],if(Freq==1.9GHz,A2[2],if(Freq==2.0GHz,A3[2],0)))

E4: if(Freq==1.8GHz,A1[3],if(Freq==1.9GHz,A2[3],if(Freq==2.0GHz,A3[3],0)))

E5: if(Freq==1.8GHz,A1[4],if(Freq==1.9GHz,A2[4],if(Freq==2.0GHz,A3[4],0)))

Please note that the last case is the default, so if the frequency is none of the three frequencies given in the nested-if, the default phase coefficient (“0” in this case) is used.
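Typing these expressions by hand gets tedious and error-prone as the number of elements or frequencies grows, so it can be convenient to generate the strings with a small script and paste them into HFSS. Here is a rough Python sketch that reproduces the expressions above; the script itself is just a convenience, not part of HFSS.

freqs = ["1.8GHz", "1.9GHz", "2.0GHz"]   # the discrete sweep points
arrays = ["A1", "A2", "A3"]              # one array parameter per frequency
n_elements = 5
default = "0"                            # phase coefficient used at any other frequency

for i in range(n_elements):
    expr = default
    # Build from the innermost if outward so the first frequency ends up outermost.
    for freq, arr in reversed(list(zip(freqs, arrays))):
        expr = f"if(Freq=={freq},{arr}[{i}],{expr})"
    print(f"E{i + 1}: {expr}")

The same loop works for the amplitude case later in this post by swapping Freq for CN, changing the default to 1, and appending *1W to each line.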

Fig. 2. Analysis Setup.

Fig. 3. Parameter definitions for the phase variation case.

By selecting the menu item HFSS->Fields->Edit Sources, I defined E1-E5 as coefficients for the phase shift. Note that phase_shift is a variable defined to control the phase, and E1-E5 are meant to be coefficients (Fig. 4):

Fig. 4. Edit sources using the defined variables.

The radiation pattern can now be plotted at each frequency for the phase shifts that were defined (A1 for 1.8 GHz, A2 for 1.9 GHz and A3 for 2.0 GHz) (Figs 5-6):

 Fig. 5. Settings for radiation pattern plots.

Fig. 6. Radiation pattern at phi = 90 degrees for the different frequencies; the variation in phase shifts shows how the main beam has shifted at each frequency.

Amplitude Variation for Selected Cases

In the second example I created three cases that were controlled using the variable “CN”. CN is simply the case number with no units.

The variable definitions were similar to the first case. I defined 3 array parameters and 5 coefficients, but this time the coefficients were used for the magnitude, and the variable in the nested-if was CN. That means 3 cases and a default case were created; the default coefficient here was chosen as “1” (Figs. 7-8).

A1: [1, 1.5, 2, 1.5, 1]

A2: [1, 1, 1, 1, 1]

A3: [2, 1, 0, 1, 2]

E1: if(CN==1,A1[0],if(CN==2,A2[0],if(CN==3,A3[0],1)))*1W

E2: if(CN==1,A1[1],if(CN==2,A2[1],if(CN==3,A3[1],1)))*1W

E3: if(CN==1,A1[2],if(CN==2,A2[2],if(CN==3,A3[2],1)))*1W

E4: if(CN==1,A1[3],if(CN==2,A2[3],if(CN==3,A3[3],1)))*1W

E5: if(CN==1,A1[4],if(CN==2,A2[4],if(CN==3,A3[4],1)))*1W

Fig. 7. Parameter definitions for the amplitude variation case.

Fig. 8. Excitation settings for the amplitude variation case.

Notice that CN in the parameter definition has a value of “1”. To create the solution for all three cases, I used a parametric sweep by selecting the menu item Optimetrics->Add->Parametric. In the Add/Edit Sweep dialog I chose the variable “CN” with Start: 1, Stop: 3, Step: 1. Also, in the Options tab I chose “Save Fields and Mesh”, “Copy geometrically equivalent meshes”, and “Solve with copied meshes only”. This avoids redoing the adaptive meshing, since the geometry does not change (Fig. 9). When plotting the patterns, I could then choose the parameter CN, and the results for CN = 1, 2, and 3 are shown in Fig. 10. You can see how the amplitude tapering has affected the side lobe level.

Fig. 9. Parametric sweep setup for the amplitude variation case.

Fig. 10. Radiation pattern at phi = 90 degrees for the different amplitude tapering cases; the variation in tapering changes the beamwidth and side lobe levels.

Drawback

The drawback of this method is that array parameters are not post-processing variables, which means changing them requires re-running the simulation. Therefore, all of the possible cases need to be defined before running the simulation.

If you would like more information or have any questions please reach out to us at info@padtinc.com.

Getting Bulk Properties for Repeated Structures in ANSYS Mechanical with Material Designer

Posted on January 2, 2020, by: Alex Grishin

Using Material Designer To Perform Homogenization Studies

Editor's Note:

3D Printing and other advanced manufacturing methods are driving the increased use of lattice-type structures in structural designs. This is great for reducing mass and increasing the stiffness of components but can be a real pain for those of us doing simulation. Modeling all of those tiny features across a part makes meshing difficult and solving painfully slow.

PADT has been doing a bit of R&D in this area recently, including a recent PHASE II NASA STTR with ASU and KSU. We see a lot of potential in combining generative design and 3D Printing to drive better structures. The key to this effort is efficient and accurate simulation.

The good news is that we do not have to model every unit cell. Instead, we can do some simulation on a single representative chunk and use the ANSYS Material Designer feature to create an approximate material property that we can use to represent the lattice volume as a homogeneous material.
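The core idea in miniature: load a representative unit cell to a known average strain, volume-average the resulting stress over the whole cell (voids included, carrying zero stress), and the ratio gives an effective stiffness. The Python below is a toy illustration of that averaging step with made-up numbers; it is not Material Designer's actual algorithm.

def effective_modulus(solid_stresses, solid_volumes, cell_volume, applied_strain):
    # Volume-average the axial stress over the FULL unit cell (the void space simply
    # carries zero stress), then divide by the applied average strain.
    avg_stress = sum(s * v for s, v in zip(solid_stresses, solid_volumes)) / cell_volume
    return avg_stress / applied_strain

# Made-up numbers: a steel-like lattice cell about 30% solid, loaded to 0.1% average strain
stresses = [195e6, 210e6, 201e6]   # Pa in each solid element
volumes = [1e-10, 1e-10, 1e-10]    # m^3 of each solid element
cell_vol = 1e-9                    # m^3 total cell volume, empty space included
print(effective_modulus(stresses, volumes, cell_vol, 1e-3) / 1e9, "GPa")   # about 60 GPa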

In the post below, PADT's Alex Grishin explains it all with theory, examples, and a clear step-by-step process that you can use for your lattice filled geometry.

PADT-ANSYS-Lattice-Material_Homogenization