Welcome to a New Era in Electronics Reliability Simulation

Simulation itself is no longer a new concept in engineering, but individual fields, applications, and physics are continually improved upon and integrated into the engineer’s toolbox. Often these are incremental additions to a particular solver’s capabilities or a more specialized post-processing method; occasionally they take the form of new cross-connections between separate tools, or even an entirely new piece of software. As a result of all this, Ansys has now reached critical mass in its solution space surrounding Electronics Reliability. That is, we can approach an electronics reliability problem from essentially any major physics perspective we like.

So, what is Electronics Reliability and what physics am I referring to? Great question, and I’m glad you asked – I’d like to run through some examples of each physics and their typical use-case / importance, as well as where Ansys fits in. Of course, real life is a convoluted Multiphysics problem in most cases, so having the capability to accommodate and link many different physics together is also an important piece of this puzzle.

Running down the list, we should perhaps start with the most obvious category given the name – Electrical Reliability. In a broad sense, this encompasses all things related to electromagnetic fields as they pertain to the transmission of both power and signals. While the electrical side of this topic is not typically in my wheelhouse, it is relatively straightforward to understand the basics of a couple of key concepts: Power Integrity and Signal Integrity.

Power integrity, as its name suggests, is the idea that we need to maintain certain standards of quality for the electrical power in a device/board/system. While some kinds of electronics are robust enough to keep functioning even under large variations in supplied voltage or current, many others rely on extremely regular power supplies whose voltage and current vary only within narrow bounds. Even if we’re looking at a single PCB (as in the image below), in today’s technological environment it will no doubt have electrical traces routed all throughout it, as well as multiple devices that operate under their own specified electrical conditions.

Figure 1: An example PCB with complex trace and via layouts, courtesy of Ansys

If we were determined to do so, we could certainly measure trace lengths, widths, thicknesses, etc., and make some educated guesses for the resulting voltage drops to individual components. However, considerably more effort would be needed to account for bends, corners, or variable widths, and even then we would be completely neglecting environmental effects and potential interactions between traces. It is much better to represent and solve the entire geometry at once using a dedicated field solver – this is where Ansys SIwave or Ansys HFSS typically come in, giving us the flexibility to accurately determine electrical reliability whether we’re talking about AC or DC power sources.
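For a sense of why even the simplest case gets tedious, a back-of-the-envelope version of that hand calculation can be sketched in a few lines. The trace dimensions and current below are invented for illustration; a real board has bends, necking, planes, and thermal effects that only a field solver captures.

```python
# Hedged sketch: DC IR drop for a single straight trace, using placeholder
# geometry and current (not data from any real board).

RHO_CU = 1.68e-8  # copper resistivity, ohm*m, near room temperature

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """DC resistance of a uniform rectangular trace: R = rho * L / (W * t)."""
    return rho * length_m / (width_m * thickness_m)

# Assumed example: 50 mm long, 0.25 mm wide, 35 um (1 oz) copper, carrying 2 A
r = trace_resistance(0.050, 0.25e-3, 35e-6)
v_drop = 2.0 * r  # Ohm's law: V = I * R
print(f"R = {r*1e3:.1f} mOhm, IR drop = {v_drop*1e3:.1f} mV")
```

One bent or tapered trace already breaks the uniform-cross-section assumption behind this formula, which is exactly where a field solver earns its keep.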

Signal integrity is very much related, except that “signals” in this context often involve different pathways, less energy, and a different set of regulations and tolerances. Common applications include chip-signal modeling and DDRx virtual compliance; these involve not only the general stability and reliability concerns above, but also adherence to specific standards (e.g., JEDEC) through virtual compliance tests. After all, inductive electromagnetic effects can still occur across nonconductive gaps, and this can be a significant source of noise and instability where conductive paths (like board traces or external connections) cross or run very near each other.

Figure 2: Example use-cases in virtual compliance testing, courtesy of Ansys

Whether we are looking at timings between components, transition times, jitter, or even just noise, HFSS and SIwave can both play roles here. In either case, being able to use a simulation environment to confirm whether a design will meet certain standards provides invaluable feedback to the design process.

Other relevant topics in Electrical Reliability include Electromagnetic Interference (EMI) analysis, antenna performance, and Electrostatic Discharge (ESD) analysis. While I will not expand on these in great detail here, it is enough to realize that an excellent electrical design (such as for an antenna) requires some awareness of the operational environment. For instance, we might want to ensure that a chosen or designed component will function adequately in the presence of some radiation environment, or we might like to test the effectiveness of the environmental shielding on a region of our board. Maybe there is some concern about the propagation of an ESD event through a PCB, and we would like to see how vulnerable certain components are. Ansys tools provide the capabilities needed to do all of this.

The second area of primary interest is Thermal Reliability. As just about anyone who has worked with or even used electronics knows, they generate some amount of heat while in use. The quantity, density, and distribution of that heat can vary tremendously depending on the exact device or system in question, but it will ultimately produce a rise in temperature somewhere. Thermal reliability boils down to recognizing that the performance and function of many electrical components depend on their temperature. Whether it is simply a matter of accounting for a change in electrical conductivity as temperature rises, or a hard functional limit for a particular transistor at 150 °C, acknowledging and accounting for these thermal effects is critical when considering electronics reliability.

This is a problem with several potential solutions depending on the scale of interest, but generally we cover the package/chip, board, and full-system levels. At the component/chip level, a designer will often want to provide package-level specs to OEMs so that a component can be properly scoped into a larger design. Ansys Icepak has toolkits available to help with this process, whether that is simplifying a 3D package down to a detailed network thermal model or identifying the most critical hot spot within a package for a particular heat distribution. Typically, network models are generated from temperature measurements taken on a sample in a standardized JEDEC test chamber, but Icepak can assist by automatically generating these test environments, as below, and then using simulation results to extract well-defined θJB and θJC values for the package under test.
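As a rough illustration of what such a network model buys you, here is a minimal sketch of the junction-temperature estimate a single thermal resistance enables. The θJC value, case temperature, and power below are placeholder numbers, not data for any real package.

```python
# Hedged sketch: estimating junction temperature from a package thermal
# resistance (theta-JC here), the kind of value a JEDEC-style
# characterization provides. All numbers are illustrative assumptions.

def junction_temp(power_w, t_ref_c, theta_c_per_w):
    """Single-resistor path: T_junction = T_reference + theta * P."""
    return t_ref_c + theta_c_per_w * power_w

theta_jc = 2.5  # junction-to-case thermal resistance, degC/W (assumed)
t_case = 85.0   # case temperature from measurement or simulation, degC (assumed)
p_diss = 3.0    # dissipated power, W (assumed)

t_j = junction_temp(p_diss, t_case, theta_jc)
print(f"Estimated junction temperature: {t_j:.1f} degC")  # 92.5 degC
```

An OEM integrating the package only needs this handful of numbers, not the full 3D model, which is exactly why extracting them is worth automating.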

Figure 3: Automatically generated JEDEC test chambers created by Ansys Icepak, courtesy of Ansys

On the PCB level of detail, we are likely interested in how heat moves across the entire board from component to component or out to the environment. Ansys Icepak lets us read in a detailed ECAD description for said PCB and process its trace and via definitions into an accurate thermal conductivity map that will improve our simulation accuracy. After all, two boards with identical sizing and different copper trace layouts may conduct heat very differently from each other.
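The core idea behind such a conductivity map can be sketched with a simple rule-of-mixtures blend: for each tile of a layer, combine copper and FR-4 conductivities by the local copper area fraction (a parallel mix in-plane, a series mix through-plane). This is only a conceptual sketch; Icepak’s actual trace-mapping algorithm is more sophisticated.

```python
# Hedged sketch: effective thermal conductivity of a PCB tile from its local
# copper fraction. Material values are typical textbook numbers.

K_CU, K_FR4 = 400.0, 0.3  # W/(m*K): copper and FR-4 (approximate)

def effective_k(copper_fraction):
    """Return (in-plane, through-plane) effective conductivity for one tile."""
    f = copper_fraction
    k_inplane = f * K_CU + (1 - f) * K_FR4           # parallel heat paths
    k_through = 1.0 / (f / K_CU + (1 - f) / K_FR4)   # series heat paths
    return k_inplane, k_through

for frac in (0.1, 0.5, 0.9):
    kip, kth = effective_k(frac)
    print(f"{frac:.0%} copper: in-plane {kip:.1f}, through-plane {kth:.2f} W/(m*K)")
```

Note how strongly in-plane conduction tracks copper content while through-plane conduction stays dominated by the dielectric, which is why two boards of identical size but different trace layouts spread heat so differently.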

Figure 4: Converting ECAD information into thermal conductivity maps using Ansys Icepak, courtesy of Ansys

On the system level of thermal reliability, we are likely looking at the effectiveness of a particular cooling solution on our electronic design. Icepak makes it easy to include the effects of a heat exchanger (like a coldplate) without having to explicitly model its computationally expensive geometry by using a flow network model. Also, many of today’s electronics are expected to constantly run right up against their limit and are kept within thermal spec by using software to throttle their input power in conjunction with an existing cooling strategy. We can use Icepak to implement and test these dynamic thermal management algorithms so that we can track and evaluate their performance across a range of environmental conditions.
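To make the idea of a dynamic thermal management algorithm concrete, here is a minimal sketch of a threshold-based throttle acting on a crude first-order lumped thermal model. All parameters are invented, and a real evaluation would use a full Icepak model rather than this one-node stand-in.

```python
# Hedged sketch: software throttling keeps a one-node "device" near its
# thermal limit by cutting input power when temperature exceeds the limit.

def simulate(t_ambient=25.0, t_limit=85.0, p_full=10.0, p_throttled=4.0,
             r_th=8.0, c_th=20.0, dt=1.0, steps=300):
    temp, power = t_ambient, p_full
    history = []
    for _ in range(steps):
        # first-order lumped model: C * dT/dt = P - (T - T_amb) / R
        temp += dt * (power - (temp - t_ambient) / r_th) / c_th
        # throttle rule: cut power above the limit, restore it below
        power = p_throttled if temp > t_limit else p_full
        history.append((temp, power))
    return history

hist = simulate()
peak = max(t for t, _ in hist)
print(f"Peak temperature: {peak:.1f} degC")
```

At full power this model would settle well above its 85 °C limit, so the throttle engages and the temperature ends up oscillating just around the limit, exactly the behavior one would want to verify across a range of ambient conditions.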

The next topic to consider is Mechanical Reliability. Mechanical concepts tend to be a little more intuitive and relatable than the other two due to their hands-on nature, though the exact details behind the cause and significance of stresses in materials are of course more involved. In the most general sense, stress is the result of applying force to an object. If that stress is high compared to what the material allows, bad things tend to happen – permanent deformation or fracture, for example. For electronic devices consisting of many materials, small structures, and particularly delicate components, we have once again surpassed what can reasonably be accomplished with hand calculations. Whether we are looking at an individual package, the integrity of an entire PCB, or the stability a rigid housing provides to a set of PCBs, Ansys has a solution. We might use Ansys Mechanical to look at manufacturing allowances for the permissible force used while mounting a complicated leaded component onto a board, as seen below. Or we might use mechanical simulation to find the optimal positioning of leads on a new package so that its natural vibration frequencies fall outside normal ambient excitations.

Figure 5: A surface component with discretely modeled leads, courtesy of Ansys

At the PCB level, we face many of the same detail-oriented challenges around representing traces and vias that were mentioned for the electrical applications. They may be less critical and more easily approximated here, but copper traces are still mechanically quite different from the resin composites often used as the substrate (like FR-4). Ansys Sherlock provides best-in-class PCB modeling on this front, allowing us to bring ECAD models in directly with full trace and component detail, and then model them mechanically at several different levels depending on the need. An automated material-property averaging scheme based on the local density of traces may be sufficient if we are looking at the general bending behavior of a board, but we can take it a step further by explicitly modeling traces as “reinforcement” elements. This level of detail lets us look much more reliably at the stresses in individual traces, so we can make good design decisions to reduce the risk of traces peeling or delaminating from the surface.

Figure 6: Example trace mapping workflow and methods, courtesy of Ansys

Beyond just looking at possible improvements in the design process, we can also use Ansys tools like LS-DYNA or Mechanical to replicate testing or accident conditions that an existing design could be subjected to. As a real-world example, many of us are all too familiar with the occasional consequences of accidentally dropping a smartphone – Ansys is used to test designs against these kinds of shock events, where impact against a hard surface can produce high stresses in key locations. This helps us understand where to reinforce a design to protect against the worst damage, or even which angle of impact is most likely to cause an operational failure.

As the finale for all of this, I come back to the first comment about reality being a complex Multiphysics problem. Many of the previous topics are not truly isolated to their respective physics (as much as we often simplify them as such), and this is one of the big ways in which the Ansys ecosystem shines: comprehensive Multiphysics. For the topic of thermal reliability, I simply stated that electronics give off heat. This may be obvious, but that heat is not some magical result of the device being turned on; it is a physical and calculable consequence of the actual electrical behavior. Indeed, this is the exact kind of result that we can extract from one of the relevant electronics tools. An HFSS solution provides not only the electrical performance of an antenna but also the three-dimensional distribution of heat that is consequently produced. Ansys lets us very easily feed this information into an Icepak simulation, which can then give us far more accurate results than a typical uniform heat load assumption provides.

Figure 7: Coupled electrical-thermal simulation between HFSS and Icepak, courtesy of Ansys

If we find that our temperatures are particularly high, we might then decide to bring these results back into HFSS to locally change material properties as a function of temperature to get an even more accurate set of electrical results. It could be that this results in an appreciable shift in our antenna’s frequency, or perhaps the efficiency has decreased, and aspects of the design need to be revisited. These are some of the things that we would likely miss without a comprehensive Multiphysics environment.
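The two-way coupling loop described above can be sketched abstractly: iterate between an “electrical” solve whose losses depend on temperature and a “thermal” solve whose temperature depends on those losses, until the answer stops changing. The one-line stand-ins below are not HFSS or Icepak, and every number is a placeholder.

```python
# Hedged sketch: fixed-point iteration between a temperature-dependent
# electrical loss model (copper resistivity rises with T) and a lumped
# thermal model. All values are invented for illustration.

ALPHA = 0.0039            # copper temperature coefficient of resistance, 1/degC
R20, CURRENT = 0.1, 5.0   # resistance at 20 degC (ohm) and current (A), assumed
R_TH, T_AMB = 10.0, 25.0  # thermal resistance (degC/W) and ambient (degC), assumed

temp = T_AMB
for it in range(50):
    r = R20 * (1 + ALPHA * (temp - 20.0))  # "electrical" solve: R depends on T
    p_loss = CURRENT ** 2 * r              # Joule heating from that solve
    t_new = T_AMB + R_TH * p_loss          # "thermal" solve: T depends on P
    if abs(t_new - temp) < 1e-6:
        break
    temp = t_new

print(f"Converged in {it} iterations: {temp:.2f} degC, {p_loss:.2f} W loss")
```

The converged answer sits noticeably above what a single one-way pass would predict, which is the whole argument for closing the loop between the electrical and thermal solvers.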

On the more mechanical side, the effects of thermal conditions on stress and strain are very well known and understood at this point, but there is no reason we could not use Ansys to bring the electrical alongside this established thermal-mechanical behavior. After all, what is a better representation of the real physics involved than using SIwave or HFSS to model the electrical behavior of a PCB, bringing those results into an Icepak simulation as a heat load to test the performance of a cooling loop or heat sink, and then using those thermal results to look at stresses not only through the PCB as a whole but also through individual traces? Not a whole lot at this moment in time, I would say.

By and large, the examples so far have been representative cases of how an electronic device responds to a particular event or condition, judging its reliability metrics on that one set of results, however many physics might be involved. There is one more piece of the puzzle that interweaves itself throughout the Multiphysics domain: Reliability Physics. For electronics reliability, this mostly means considering how different events, or even repetitions of the same event, stack together and accumulate toward some failure in the future. An easy example is a plastic hinge or clip of the sort found on any number of inexpensive products – flexing a thin piece of plastic like this provides a very convenient method of motion for quite some time, but the hinge gradually accumulates damage until it inevitably cracks and fails. Every connection within a PCB is susceptible to the same kind of behavior, whether it is the laminations of the PCB itself, the components soldered to the surface, or the individual leads on a component. If our PCB is mounted on the control board of a bus, satellite, or boat, there will be vibrations and thermal cycles associated with its life. A single one of these events may be much smaller in magnitude, seemingly negligible compared to something dramatic like a drop test, and yet they can still add up to significance over months or years.
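The accumulation idea here is commonly formalized as linear (Miner’s rule) damage summation: each event type contributes its cycle count divided by the cycles-to-failure at that severity, and failure is predicted when the sum reaches 1. The event counts and cycle limits below are invented purely for illustration.

```python
# Hedged sketch: Miner's-rule bookkeeping over a few hypothetical life events.
# Every number here is made up; real values come from fatigue curves and
# per-event stress results.

life_events = {
    # event: (cycles experienced per year, cycles to failure at that severity)
    "daily thermal cycle": (365, 8_000),
    "transport vibration": (50_000, 2_000_000),
    "accidental shock": (2, 500),
}

damage_per_year = sum(n / N for n, N in life_events.values())
years_to_failure = 1.0 / damage_per_year
print(f"Damage per year: {damage_per_year:.3f}")
print(f"Predicted life: {years_to_failure:.1f} years")
```

Even though no single event here is anywhere near failure on its own, the ledger still predicts a finite life, which is precisely the kind of accounting a lifecycle tool automates across a whole board.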

This is exactly the kind of thing that Ansys Sherlock proves invaluable for: letting us define and track the effect of events that may occur over a PCB’s entire lifecycle. Many of these will revolve around mechanical concepts of fatigue accumulating as a result of material stresses, but it is still important to consider the potential Multiphysics origins of stress. Different simulations will be required for each of mechanical bending during assembly, vibration during transport, and thermal cycling during operation, yet each of these contributes towards the final objective of electronics reliability. Sherlock will bring each of these and more together in a clear description of which components on a board are most likely to fail, how likely they are to fail as a function of time, and which life events are the most impactful.

Figure 8: Example failure predictions over the life cycle of a PCB using Ansys Sherlock, courtesy of Ansys

Really, what all of this comes down to is that when we design and create products, we want to make sure they function the way we intend. Whether the motivation is personal pride in our profession or simply maximizing profit by minimizing the costs associated with component failure, in the end it just makes sense to anticipate and prevent the failures that might occur under normal operating conditions.

For complex problems like electronics devices, there are many physics all intimately tied together in the consideration of overall reliability, but the Ansys ecosystem of tools allows us to approach these problems in a realistic way. Whether we’re looking at the electrical reliability of a circuit or antenna, the thermal performance of a cooling solution or algorithm, or the mechanical resilience of a PCB mounted on a bracket, Ansys provides a path forward.

If you have any questions or would like to learn more, please contact us at info@padtinc.com or visit www.padtinc.com.

Ansys – Software for Electric Machine Design

Electric propulsion systems have drawn more and more attention over the last decade, with substantial development of the electric machines used in automotive and aerospace applications. It is essential for engineers to develop electric machines with high efficiency, high power density, low noise, and low cost.

Simulation tools are therefore needed to design electric machines that meet these requirements, and they can significantly reduce product launch time. The design process for an electric machine spans electromagnetics, mechanical, thermal, and fluids, which makes Ansys, as a multiphysics simulation platform, a natural fit. Ansys offers a complete workflow from electromagnetics to thermal and mechanical that supports accurate and robust electric machine designs.

To design high performance, more compact and reliable electric machines, design engineers can start with three Ansys tools: RMxprt, Maxwell and Motor-CAD. The capabilities and differences of these three tools will be compared and discussed.

1. Ansys RMxprt

Ansys RMxprt is a template-based tool for the electromagnetic design of electric machines. It covers almost all conventional radial types of electric machines, and starting with Ansys 2020 R2, some axial types (IM, PMSM, BLDC) have also been included.

Fig. 1. Electric machine types in Ansys RMxprt.

Users only need to input the geometry parameters and materials for the machine. Performance data and curves can then be obtained for different load types. Since RMxprt uses analytical approaches, it generates results very quickly. It is also capable of fast coupling/system simulations with Simplorer/Twin Builder, and ready-to-run Maxwell 2D/3D models can be created from RMxprt automatically.

2. Ansys Maxwell

Ansys Maxwell is an FEA simulation tool for low-frequency electromagnetic applications. Maxwell can solve static, frequency-domain, and time-varying electromagnetic and electric fields. Its applications include, but are not limited to, electric machines, transformers, sensors, wireless charging, busbars, and biomedical devices.

Unlike RMxprt, which uses analytical methods, Maxwell uses an FEA approach that allows high-accuracy field simulations. Engineers can either import geometry or create their own models in Maxwell, so there is no limit on the types of machines that can be modeled. It can handle all types of electromagnetic rotary devices, including multi-rotor and multi-stator designs.

Fig. 2. Electric machine detailed model in Ansys Maxwell.

Maxwell can do more detailed electromagnetic simulations for electric machines, for example demagnetization of the permanent magnets, end-winding effects, and magnetostrictive effects. With Maxwell, engineers can run parametric sweeps over design variables and perform optimizations to reach an optimal design. Maxwell is also capable of equivalent circuit extraction (ECE). ECE is one of the reduced-order modeling (ROM) techniques, and it automatically generates an efficient system-level model. Several Ansys customization toolkits (ACTs) are available for Maxwell to quickly create efficiency maps and to simulate the impact of eccentricity. Furthermore, Maxwell can be coupled with Ansys Mechanical/Fluent/Icepak for thermal and mechanical analysis.
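As a conceptual sketch of what an efficiency map represents (not the Maxwell ACT itself), efficiency can be evaluated over a torque-speed grid from simple loss models. The loss coefficients below are invented; a real map would be built from Maxwell field solutions.

```python
# Hedged sketch: efficiency at one torque-speed operating point from crude
# loss models. Coefficients are placeholders, not measured machine data.
import math

def efficiency(torque_nm, speed_rpm, r_loss=0.2, iron_k=0.01):
    omega = speed_rpm * 2 * math.pi / 60.0   # mechanical speed, rad/s
    p_out = torque_nm * omega                # mechanical output power, W
    p_copper = r_loss * torque_nm ** 2       # copper loss ~ current^2 ~ torque^2
    p_iron = iron_k * speed_rpm ** 1.5       # crude speed-dependent iron loss
    return p_out / (p_out + p_copper + p_iron) if p_out > 0 else 0.0

eff = efficiency(50.0, 3000.0)
print(f"Efficiency at 50 Nm, 3000 rpm: {eff:.1%}")
```

Sweeping this over a grid of torque and speed values is what produces the familiar contour-style efficiency map used to compare machine designs.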

3. Ansys Motor-CAD

Ansys Motor-CAD is well suited to making design decisions in the early design phase of an electric machine. It includes four modules: electromagnetic, thermal, lab, and mechanical. Motor-CAD can perform multiphysics simulations of electric machines across the full torque-speed range. It uses a combination of analytical methods and FEA, and it can quickly evaluate motor topologies and optimize designs for performance, efficiency, and size.

Motor-CAD can simulate the radial types of electric machines. With its lab module, it can run duty-cycle simulations to analyze the electromagnetic, mechanical, and thermal performance of a machine. The thermal module is an industry-standard tool that provides fast thermal analysis with insight into each thermal node, pressure drops, and losses. The mechanical module uses 2D FEA to calculate stress and deformation. Engineers can also manually correlate the models in Motor-CAD against manufacturing impacts or test data.

Fig. 3. Ansys Motor-CAD GUI and machine types.

Motor-CAD can provide links to Ansys Maxwell, Mechanical, Icepak and Fluent for more detailed analysis in the later phases of motor designs.

  • What to use?

RMxprt and Motor-CAD can both handle most radial types of electric machines, and RMxprt can also model some conventional axial-flux machines. RMxprt models only the electromagnetic performance of a machine, while Motor-CAD can simulate electromagnetic, thermal, and mechanical performance.

Maxwell can simulate any type of machine (radial, axial, linear, hybrid, etc.) since it can import or draw arbitrary geometry. Both static and transient analyses can be conducted in Maxwell.

  • When to use?

RMxprt and Motor-CAD are most suitable in the early design stages of the electric machines. Engineers can get fast results about the machine performance and sizing which can be used as a guideline in the later design phase.

Maxwell can also be used in the early design stages for more advanced types of electric machines. It is capable of more detailed electromagnetic design in the later stages and can be used for system-level transient-transient co-simulation (coupled with Ansys Simplorer/Twin Builder). More detailed geometries, advanced materials, and complex electromagnetic phenomena can be modeled in Maxwell. In the final stages, for more advanced CFD and NVH analysis, Maxwell can be linked with Ansys Fluent/Icepak/Mechanical to ensure the design robustness of the machine before going into prototyping/production.

  • Who can benefit?

RMxprt and Motor-CAD do not require strong FEA simulation skills as no boundary conditions or solution domain need to be set. Engineers with basic knowledge of electric machines can get familiar with the tools and get results very quickly.

Maxwell requires users to set up the mesh, boundaries, and excitations, as it uses the FEA method. Engineers need not only the basic concepts of machines but also some FEA simulation skills in order to get reasonable results.

Summary

RMxprt: It is a template-based tool for initial electric machine designs that uses an analytical approach.

Maxwell: It uses the FEA approach to model both 2D and 3D designs. It is capable of simulating either simple or more advanced electromagnetics in electric machines.

Motor-CAD: It is suitable for initial machine designs and uses both analytical and FEA methods. It can do electromagnetic, thermal, and initial mechanical analysis.

If you would like more information related to this topic or have any questions, please reach out to us at info@padtinc.com.

Signal & Power Integrity Updates in Ansys 2021 R1 – Webinar

The use of Ansys Electronics solutions minimizes testing costs, ensures regulatory compliance, improves reliability, and drastically reduces your product development time, all while helping you build best-in-class, cutting-edge products.

With signal and power integrity (SI & PI) analysis products, users can mitigate many electrical and thermal issues affecting printed circuit boards such as electromagnetic interference, crosstalk, overheating, etc. Ansys integrated electromagnetics and circuit simulation tools are essential for designing high-speed serial channels, parallel buses, and complete power delivery systems found in modern high-speed electronic devices.

Leverage the simulation capability from Ansys to solve the most critical aspects of your designs. Join PADT’s Electronics expert and application engineer Aleksandr Gafarov for a detailed look at what is new for SI & PI in Ansys 2021 R1, including updates available within the following tools:

• SIwave – Granta support & differential time domain crosstalk

• Q3D – Uniform current terminals

• Circuits – Network data explorer & SPISim

• HFSS 3D – Parallel meshing, encrypted 3D components & IC workflow improvements

• Electronics Desktop – Ansys cloud, Minerva & optiSLang integration

• And much more

Register Here

If this is your first time registering for one of our Bright Talk webinars, simply click the link and fill out the attached form. We promise that the information you provide will only be shared with those promoting the event (PADT).

You will only have to do this once! For all future webinars, you can simply click the link, add the reminder to your calendar and you’re good to go!

All Things Ansys 082: High Frequency Updates on Ansys 2021 R1

 

Published on: February 26th, 2021
With: Eric Miller & Aleksandr Gafarov
Description:  

In this episode your host and Co-Founder of PADT, Eric Miller is joined by PADT’s Electronics Application Engineer Aleksandr Gafarov for a look at what’s new in this electromagnetics release.

When it comes to high frequency electromagnetics, Ansys 2021 R1 delivers a plethora of groundbreaking enhancements. Ansys HFSS Mesh Fusion enables simulation of large, never before possible electromagnetic systems with efficiency and scalability.

If you have any questions, comments, or would like to suggest a topic for the next episode, shoot us an email at podcast@padtinc.com we would love to hear from you!


High Frequency Updates in Ansys 2021 R1 – Webinar

Whether leveraging improved workflows or leading-edge capabilities with Ansys 2021 R1, teams are tackling design challenges head on, eliminating the need to make costly workflow tradeoffs, developing next-generation innovations with increased speed and significantly enhancing productivity, all in order to deliver high-quality products to market faster than ever.

When it comes to high frequency electromagnetics, Ansys 2021 R1 delivers a plethora of groundbreaking enhancements. Ansys HFSS Mesh Fusion enables simulation of large, never before possible electromagnetic systems with efficiency and scalability. This release also allows for encrypted 3D components supported in HFSS 3D Layout for PCBs, IC packages and IC designs to enable suppliers to share detailed 3D component designs for creating highly accurate simulations.

Join PADT’s Lead Electromagnetics Engineer and high frequency expert Michael Griesi for a presentation on updates made to the Ansys HF suite in the 2021 R1 release, including advancements for:

  • Electronics Desktop
  • HFSS
  • Circuits
  • EMIT
  • And Much More

Register Here

If this is your first time registering for one of our Bright Talk webinars, simply click the link and fill out the attached form. We promise that the information you provide will only be shared with those promoting the event (PADT).

You will only have to do this once! For all future webinars, you can simply click the link, add the reminder to your calendar and you’re good to go!

All Things Ansys 080: 2020 Wrap-up & Predictions for Ansys in the New Year

 

Published on: January 25th, 2021
With: Eric Miller & PADT’s Ansys Support Team
Description:  

In this episode your host and Co-Founder of PADT, Eric Miller is joined by the simulation support team to look back at the past year of Ansys technology and make some predictions regarding what may happen in the year to come.

If you have any questions, comments, or would like to suggest a topic for the next episode, shoot us an email at podcast@padtinc.com we would love to hear from you!


Revolutionizing the Way Data Moves Through Space with Ansys Simulation – Webinar

Ever since NASA began its race to space, U.S. technology companies have searched for solutions to a variety of challenges designed to push us further in our exploration of the stars. Whether the purpose is space travel or launching satellites that track weather patterns, space innovation is gaining momentum. One of the most critical challenges is optimizing communication with moving spacecraft. Tucson, Arizona’s FreeFall Aerospace has an answer: developing unique antenna systems for both space and ground use.

When working to develop this technology, FreeFall ran into a number of roadblocks due to limitations in its engineering software toolset. The company was able to bypass these hurdles and successfully optimize development thanks to the introduction of Ansys HFSS, a specialized 3D electromagnetic software tool used for designing and simulating high-frequency electronic products such as antennas, antenna arrays, RF/microwave components, and much more. Because of the speed of this tool and its ability to solve multiple simulation challenges in different domains, FreeFall is able to make design changes more quickly and with better data.

Join PADT’s Lead Electromagnetics Engineer Michael Griesi and President of FreeFall, Doug Stetson for a discussion on Ansys electromagnetics offerings, and how FreeFall is able to take advantage of them for their unique application.

Register Here

If this is your first time registering for one of our Bright Talk webinars, simply click the link and fill out the attached form. We promise that the information you provide will only be shared with those promoting the event (PADT).

You will only have to do this once! For all future webinars, you can simply click the link, add the reminder to your calendar and you’re good to go!

Making Sense of DC IR Results in Ansys SIwave

In this article I will cover a Voltage Drop (DC IR) simulation in SIwave, applying a realistic power-delivery setup to a simple 4-layer PCB design. The main goal of this project is to understand what data we receive from a DC IR simulation, how to verify it, and the best way to use it.

And before I open my tools and start diving deep into this topic, I would like to thank Zachary Donathan for asking the right questions and having deep meaningful technical discussions with me on some related subjects. He may not have known, but he was helping me to shape up this article in my head!

Design Setup

There are many different power nets on the board under test; however, I will focus on two widely spread nets, +1.2V and +3.3V. Both nets are supplied through a Voltage Regulator Module (VRM), which will be assigned as a Voltage Source in our analysis. After careful assessment of the board design, I identified the most critical components in the power delivery to include in the analysis as Current Sources (also known as ‘sinks’). Two DRAM small-outline integrated circuit (SOIC) components, D1 and D2, are supplied with +1.2V, while power net +3.3V provides voltage to two quad flat package (QFP) microcontrollers U20 and U21, a mini PCIE connector, and a hex Schmitt-trigger inverter U1.

Fig. 1. Power Delivery Network setting for a DC IR analysis

Figure 1 shows the ‘floor plan’ of the DC IR analysis setup, with the 1.2V path highlighted in yellow and the 3.3V path in light blue.

Before we assign any Voltage and Current sources, we need to define pin groups on the +1.2V, +3.3V, and GND nets for all of the PDN components mentioned above. Having pin groups will significantly simplify the review of the results. Also, it is generally good practice to start the DC IR analysis from the ‘big picture’ to understand whether a given component gets enough power from the VRM. If a given IC reports an acceptable delivered voltage with a good margin, then we don’t need to dig deeper; we can instead focus on those that may not have sufficient margins.

Once we have created all the necessary pin groups, we can assign voltage and current sources. There are several ways of doing this (using the wizard or manually); for this project we will use the ‘Generate Circuit Element on Components’ feature to manually define all sources. Knowing all the components and having the pin groups already created makes the assignment very straightforward. The current sources draw different amounts of current, as indicated in our settings; however, all current sources share the same Parasitic Resistance (a very large value) and all voltage sources share the same Parasitic Resistance (a very small value). This is shown in Figure 2 and Figure 3.

Note: The type of the current source, ‘Constant Voltage’ or ‘Distributed Current’, matters only if you are assigning a current source to a component with multiple pins on the same net. Since in this project we are working with pin groups, this setting doesn’t make a difference in the final results.

Fig. 2. Voltage and Current sources assigned
Fig. 3. Parasitic Resistance assignments for all voltage and current sources

For each power net we have created a voltage source on the VRM and multiple current sources on the ICs and the connector. All sources have their negative node on the GND net, so we have a good common return path. In addition, we have assigned the negative node of each voltage source (one for +1.2V and one for +3.3V) as the reference point for our analysis, so reported voltage values will be referenced to that node as an absolute 0V.

At this point, the DC IR setup is complete and ready for simulation.

Results overview and validation

When the DC IR simulation is finished, a large amount of data is generated, so there are several ways of viewing the results; all options are presented in Figure 4. In this article I will focus primarily on ‘Power Tree’ and ‘Element Data’. As an additional source of validation, we may review the currents and voltages overlaid on the design to help visualize the current flow and power distribution. Most of the time this helps us understand whether our pin-grouping assumption is accurate.

Fig. 4. Options to view different aspects of DC IR simulated data

Power Tree

First, let’s look at the Power Tree, presented in Figure 5. Two different power nets were simulated, +1.2V and +3.3V, each of which has specified Current Sources where the power gets delivered. Therefore, when we analyze the DC IR results in the Power Tree format, we see two ‘trees’, one for each power net. Since no pin receives both 1.2V and 3.3V at the same time (which would not be very physical), the two trees have no ‘common branches’.

Now, let’s dissect all the information present in this power tree (considering only one ‘branch’ for simplicity, although the logic applies to all of them):

  • We treated both power nets, +1.2V and +3.3V, as separate voltage loops, so we assigned the negative node of each Voltage Source as a reference point. Therefore, we see the ‘GND’ symbol ((1) and (2)) for each voltage source. All voltage calculations will now be referenced to that node as 0V for its specific tree.
  • Then we see the path from Voltage Source to Current Source; the value ΔV shows the Voltage Drop along that path (3). Ultimately, this is the main value power engineers are usually interested in during this type of analysis. If we subtract ΔV from Vout, we get the ‘Actual Voltage’ delivered to the positive pin of the specific current source (1.2V – 0.22246V = 0.977V). That value is reported in the box for the Current Source (4). Technically, the same voltage drop value is reported in the column ‘IR Drop’, but in this column we get more detail – we see what percentage of Vout is being dropped. Engineers usually specify the acceptable voltage drop margin as a percentage of Vout; in our experiment we specified 15%, as reported in the column ‘Specification’. Since 18.5% is greater than 15%, we get a ‘Fail_I_&_V’ result (6) for that Current Source.
  • Regarding the current – we manually specified the current value for each Current Source, so the current values in Figure 2 are the same as in Figure 5. We can also specify a current margin to report pass or fail. In our example we assigned 108A to the Current Source (5), while 100A is our current limit (4). Therefore, the current check fails as well.
  • As mentioned earlier, we assigned current values for each Current Source, but we didn’t set any current value for the Voltage Source. This is because the tool calculates how much current the Voltage Source must supply based on the values at the Current Sources. In our case we have 3 Current Sources drawing 108A, 63A, and 63A (5). The sum of these three values is 234A, which is reported as the current at the Voltage Source (7). Later we will see that this value is used to calculate the output power at the Voltage Source.
Fig. 5. DC IR simulated data viewed as a ‘Power Tree’
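The Power Tree arithmetic can be spot-checked in a few lines of Python, using the values reported in Figure 5 (plain arithmetic only; this is not a SIwave API call):

```python
# Spot-check the Power Tree numbers for the +1.2V net (values from Fig. 5).
v_out = 1.2      # VRM output voltage [V]
dv = 0.22246     # reported voltage drop from VRM to current source [V]

v_delivered = v_out - dv          # 'Actual Voltage' at the positive pin
ir_drop_pct = dv / v_out * 100    # 'IR Drop' as a percentage of Vout

print(round(v_delivered, 3))      # ~0.978 V (the tree reports 0.977 V)
print(round(ir_drop_pct, 1))      # 18.5 %, exceeding the 15 % specification

# The VRM current is just the sum of the current-source draws:
i_vrm = 108 + 63 + 63
print(i_vrm)                      # 234 A
```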

Element Data

This option shows the results in tabular form. It lists many important calculated data points for specific objects, such as bondwires, current sources, all vias associated with the power distribution network, voltage probes, and voltage sources.

Let’s continue reviewing the same +1.2V power net and the power distribution to the CPU1 component, as we did for the Power Tree (Figure 5). As before, we will go over the details point by point:

  • First and foremost, when we look at the information for Current Sources, we see a ‘Voltage’ value, which may be confusing. The value reported in this table is 0.7247V (8), which differs from the 0.977V reported in the Power Tree in Figure 5 (4). The reason for the difference is that the reported voltage values are calculated at different locations. As mentioned earlier, the voltage reported in the Power Tree is the voltage at the positive pin of the Current Source, referenced to the Voltage Source negative node. The voltage reported in Element Data is the voltage across the Current Source – between its positive and negative pins – which therefore excludes the voltage drop across the ground plane of the return path.

To verify the reported voltage values, we can place Voltage Probes (under circuit elements). Once we do that, we will need to rerun the simulation in order to get the results for the probes:

  1. The two terminals of ‘VPROBE_1’ are attached to the positive pin of the Voltage Source and the positive pin of the Current Source. This probe should show us the voltage difference between the VRM and the IC, which is the same as the reported Voltage Drop ΔV. And indeed, ‘VPROBE_1’ = 222.4637mV (13), while ΔV = 222.464mV (3). Correlated perfectly!
  2. The two terminals of ‘VPROBE_GND’ are attached to the negative pin of the Current Source and the negative pin of the Voltage Source. The voltage shown by this probe is the voltage drop across the ground plane.

If we have 1.2V at the positive pin of the VRM, the voltage drops 222.464mV across the power plane, so the positive pin of the IC is supplied with 0.977V. Then 0.724827V (8) is dropped across the Current Source, leaving us with (1.2V – 0.222464V – 0.724827V) = 0.252709V at the negative pin of the Current Source. On the return path the voltage drops again across the ground plane by 252.4749mV (14), arriving back at the negative pin of the VRM at (0.252709V – 0.252475V) = 234uV. This is the internal voltage drop in the Voltage Source, calculated as the output current at the VRM, 234A (7), multiplied by the Parasitic Resistance of 1E-6 Ohm at the VRM (Figure 3). This is the Series R Voltage (11).

  • The Parallel R Current of the Current Source is calculated as the Voltage, 724.82mV (8), divided by the Parasitic Resistance of the Current Source (Figure 3), 5E+7 Ohm, = 1.44965E-8 A (9).
  • The Current of the Voltage Source reported in the Element Data, 234A (10), is the same value as reported in the Power Tree (the sum of all Current Source currents for the +1.2V power net) = 234A (7). Knowing this current, we can multiply it by the Parasitic Resistance of the Voltage Source (Figure 3), 1E-6 Ohm: (234A * 1E-6Ohm) = 234E-6V, which is equal to the reported Series R Voltage (11). And since 234A is the output current of the Voltage Source, we can multiply it by the output voltage Vout = 1.2V to get the Power Output = (234A * 1.2V) = 280.85W (12).
Fig. 6. DC IR simulated data viewed in the table format as ‘Element Data’
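As a sanity check, the entire +1.2V loop described above can be walked in a short Python sketch (plain arithmetic on the reported values; the variable names are mine, not SIwave’s):

```python
# Walk the +1.2V loop using the values reported in the Element Data table.
v_vrm = 1.2              # VRM output [V]
dv_power = 0.222464      # drop across the power plane (VPROBE_1) [V]
dv_source = 0.724827     # drop across the current source [V]
dv_ground = 0.252475     # drop across the ground return (VPROBE_GND) [V]

v_neg_pin = v_vrm - dv_power - dv_source   # current-source negative pin
v_series_r = v_neg_pin - dv_ground         # what's left: VRM internal drop

print(round(v_neg_pin, 6))        # 0.252709 V
print(round(v_series_r * 1e6))    # 234 uV = 234 A * 1E-6 Ohm (Series R Voltage)

# Derived quantities from the same table:
parallel_r_current = 0.72482 / 5e7   # ~1.44964E-8 A (Parallel R Current)
power_out = 234 * 1.2                # 280.8 W (tool shows 280.85 W, from the unrounded current)
```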

In addition to all the calculations and explanations provided above, the video below in Figure 7 highlights the key points of this article.

Fig. 7. Difference between reporting Voltage values in Power Tree and Element Data

Conclusion

By carefully reviewing the Power Tree and Element Data reporting options, we can determine many important characteristics of the power delivery network, such as how much voltage is delivered to each Current Source, how much voltage drops across the power and ground nets, etc. More valuable information can be extracted from the other DC IR results options, such as ‘Loop Resistance’, ‘Path Resistance’, ‘RL table’, ‘Spice Netlist’, and the full ‘Report’. However, all these features deserve a separate article.

As always, if you would like to receive more information related to this topic or have any questions please reach out to us at info@padtinc.com.

Efficient and Accurate Simulation of Antenna Arrays in Ansys HFSS

Unit-cell in HFSS

HFSS offers different methods of creating and simulating a large array. The explicit method, shown in Figure 1(a), might be the first that comes to mind. This is where you create the exact CAD of the array and solve it. While this is the most accurate method of simulating an array, it is computationally expensive and may be infeasible for the initial design of a large array. The use of a unit cell (Figure 1(b)) and array theory lets us start with an estimate of the array performance under a few assumptions. Finite Array Domain Decomposition (FADDM) takes advantage of the unit cell’s simplicity and creates a full model using the meshing information generated in a unit cell. In this blog we will review the creation of a unit cell. In the next blog we will explain how a unit cell can be used to simulate a large array with FADDM.

Fig. 1 (a) Explicit Array
Fig. 1 (b) Unit Cell
Fig. 1 (c) Finite Array Domain Decomposition (FADDM)

In a unit cell, the following assumptions are made:

  • The pattern of each element is identical.
  • The array is uniformly excited in amplitude, but not necessarily in phase.
  • Edge effects and mutual coupling are ignored.
Fig. 2 An array consisting of elements with arbitrary amplitudes and phases can be estimated with array theory, assuming all elements have the same amplitude and the same element radiation pattern. In a unit cell simulation it is assumed that all magnitudes (An’s) are equal (A) and the far field of each single element is identical.

A unit cell works based on Master/Slave (or Primary/Secondary) boundaries around the cell. Master/Slave boundaries are always paired. In a rectangular cell you may use the new Lattice Pair boundary introduced in Ansys HFSS 2020R1. These boundaries are a means of simulating an infinite array and estimating the performance of a relatively large array. The use of a unit cell reduces the required RAM and solve time.

Primary/Secondary (Master/Slave, or P/S) boundaries can be combined with a Floquet port, radiation boundary, or PML boundary for use in an infinite-array or large-array setting, as shown in Figure 3.

Fig. 3 A unit cell can be terminated with (a) a radiation boundary, (b) a Floquet port, (c) a PML boundary, or a combination of them.

To create a unit cell with P/S boundaries, start with a single element with the exact dimensions of the cell. The next step is creating a vacuum or air box around the cell. For this step, set the padding to zero in the directions where the P/S boundaries will be placed. For example, Figure 4 shows a microstrip patch antenna from which we intend to create a 2D array. The array is placed on the XY plane, so an air box is created around the unit cell with zero padding in the X and Y directions.

Fig. 4 (a) A unit cell starts with a single element with the exact dimensions as it appears in the lattice
Fig. 4 (b) A vacuum box is added around it

You may notice that in this example the vacuum box is larger than the quarter-wavelength padding usually used when creating a vacuum region around an antenna. We will get to the calculation of this size in a bit; for now, let’s just assign a value or parameter to it, as it will be determined later. The next step is to define the P/S boundaries to generate the lattice. In AEDT 2020R1 this boundary is found under “Coupled” boundaries. There are two methods to create P/S boundaries: (1) Lattice Pair, (2) Primary/Secondary.

Lattice Pair

The Lattice Pair boundary works best for square lattices. It automatically assigns the primary and secondary boundaries. To assign a lattice pair boundary, select the two sides that are supposed to create the infinite periodic cells, right-click->Assign Boundary->Coupled->Lattice Pair, choose a name, and enter the scan angles. Note that scan angles can be assigned as parameters. This feature, introduced in 2020R1, does not require the user to define the UV directions; they are assigned automatically.

Fig. 5 The lattice pair assignment (a) select two lattice walls
Fig. 5 (b) Assign the lattice pair boundary
Fig. 5 (c) Right-click and choose Assign Boundary > Coupled > Lattice Pair
Fig. 5 (d) Phi and Theta scan angles can be assigned as parameters
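For reference, the scan angles entered on these boundaries translate into an inter-cell phase shift through standard array theory. The sketch below (my own illustration of that formula, not an HFSS feature) computes the phase delay across one lattice period:

```python
import math

def ps_phase_deg(d_mm, freq_ghz, theta_deg, phi_deg=0.0):
    """Inter-cell phase delay (degrees) across one lattice period d along x
    for a beam scanned to (theta, phi): k0 * d * sin(theta) * cos(phi)."""
    lam_mm = 299.792458 / freq_ghz   # free-space wavelength [mm]
    k0 = 2 * math.pi / lam_mm        # wavenumber [rad/mm]
    rad = k0 * d_mm * math.sin(math.radians(theta_deg)) * math.cos(math.radians(phi_deg))
    return math.degrees(rad)

# Half-wavelength spacing at 10 GHz, scanned 30 degrees off broadside:
d = 299.792458 / 10.0 / 2            # lambda/2 in mm
print(round(ps_phase_deg(d, 10.0, 30.0), 1))   # 90.0 degrees
```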

Primary/Secondary

The Primary/Secondary boundary is the same as what used to be called the Master/Slave boundary. In this case, each Secondary (Slave) boundary must be assigned to follow a Primary (Master) boundary’s UV directions. First choose the side of the cell that will carry the Primary boundary, then right-click->Assign Boundary->Coupled->Primary. In the Primary Boundary window, define the U vector. Next select the secondary wall, right-click->Assign Boundary->Coupled->Secondary, choose the Primary boundary, define the U vector in exactly the same direction as the Primary, and add the scan angles (the same as the Primary scan angles).

Fig. 6 Primary and secondary boundaries highlighted.

Floquet Port and Modes Calculator

A Floquet port excites and terminates waves propagating down the unit cell. Its modes are similar to waveguide modes. A Floquet port is always linked to P/S boundaries. A set of TE and TM modes travels inside the cell; however, keep in mind that the number of modes absorbed by the Floquet port is determined by the user. All the other modes are short-circuited back into the model. To assign a Floquet port, two major steps should be taken:

Defining Floquet Port

Select the face of the cell to which you would like to assign the Floquet port. This is determined by the location of the P/S boundaries. The directions of the lattice vectors A and B are defined by the direction of the lattice (Figure 7).

Fig. 7 Floquet port on top of the cell is defined based on UV direction of P/S pairs

The number of modes to be included is defined with the help of the Modes Calculator. In the Mode Setup tab of the Floquet Port window, choose a high number of modes (e.g. 20) and click on Modes Calculator. The Mode Table Calculator will request your input of Frequency and Scan Angles. After selecting those, a table of modes and their attenuation in dB/length units is created. This is your guide in selecting the height of the unit cell and vacuum box. The attenuation multiplied by the height of the unit cell (in the project units, defined in Modeler->Units) should be large enough to ensure the modes are attenuated enough that removing them from the calculation does not cause errors. If the unit cell is too short, you will see that many modes are not attenuated enough. The product of the attenuation and the height of the airbox should be at least 50 dB. After the correct size for the airbox is calculated and entered, the modes with high attenuation can be removed from the Floquet port definition.

The 3D Refinement tab is used to control the inclusion of the modes in the 3D refinement of the mesh. It is recommended not to select them for the antenna arrays.

Fig. 8 (Left) Determining the scan angles for the unit cell, (Right) Modes Calculator showing the Attenuation

In our example, Figure 8 shows that the 5th mode has an attenuation of 2.59 dB/length. The height of the airbox is around 19.5mm, providing 19.5mm * 2.59dB/mm = 50.505dB of attenuation for the 5th mode. Therefore, only the first 4 modes are kept for the calculations. If the height of the airbox were less than 19.5mm, we would need to increase it accordingly to reach an attenuation of at least 50dB.
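This height check generalizes to a one-line rule: the minimum airbox height is the 50 dB target divided by the per-length attenuation reported by the Modes Calculator. A minimal sketch of that arithmetic (not an HFSS API call):

```python
def min_airbox_height(atten_db_per_mm, target_db=50.0):
    """Smallest airbox height (mm) at which a mode decaying at
    atten_db_per_mm accumulates at least target_db of attenuation."""
    return target_db / atten_db_per_mm

# The 5th mode in Fig. 8 attenuates at 2.59 dB/mm:
h = min_airbox_height(2.59)
print(round(h, 1))   # 19.3 mm; the 19.5 mm box gives 19.5 * 2.59 = 50.505 dB
```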

Radiation Boundary

A simpler alternative to the Floquet port is a radiation boundary. It is important to note that the size of the airbox should still be kept around the same size calculated for the Floquet port, so that higher-order modes are sufficiently attenuated. In this case the traditional quarter-wavelength padding might not be adequate.

Fig. 9 Radiation boundary on top of the unit cell

Perfectly Matched Layer

Although using a radiation boundary is much simpler than a Floquet port, it is not accurate for large scan angles. It can be a good alternative to the Floquet port only if the beam scanning is limited to small angles. Another alternative is to cover the cell with a layer of PML. This is a good compromise and provides results very similar to Floquet port models. However, the P/S boundaries need to surround the PML layer as well, which means a few additional steps are required. Here is how you can do it:

  1. Reduce the size of the airbox* slightly, so that after adding the PML layer, the unit cell height is the same as the one generated using the Modes Calculator. (For example, in our model the airbox height was 19mm + substrate thickness, the PML height was 3mm, so we reduced the airbox height to 16mm.)
  2. Choose the top face and add PML boundary.
  3. Select each side of the airbox and create an object from that face (Figure 10).
  4. Select each side of the PML and create objects from those faces (Figure 10).
  5. Select the two faces that are on the same plane from the faces created from airbox and PML and unite them to create a side wall (Figure 10).
  6. Then assign P/S boundary to each pair of walls (Figure 10).

*Please note that for this method an auto-size “region” cannot be used; instead, draw a box for the air/vacuum box. The region does not let you create the faces you need to combine with the PML faces.

Fig. 10 Selecting two faces created from airbox and PML and uniting them to assign P/S boundaries

The advantage of PML termination over a Floquet port is that it is simpler and sometimes faster to calculate. The advantage over radiation-boundary termination is that it provides accurate results at large scan angles. For better accuracy, the mesh for the PML region can be defined as length-based.

Seed the Mesh

To improve the accuracy of the PML model further, an option is to use a length-based mesh. To do this, select the PML box; then, from the project tree in the Project Manager window, right-click on Mesh->Assign Mesh Operation->On Selection->Length Based. Select a length smaller than lambda/10.

Fig. 11 Using element length-based mesh refinement can improve the accuracy of PML design

Scanning the Angle

In a phased array simulation, we are mostly interested in the performance of the unit cell and array at different scan angles. To add the scanning option, the phases of the P/S boundaries should be defined by project or design parameters. The parameters can then be used to run a parametric sweep, like the one shown in Figure 12. In this example the theta angle is scanned from 0 to 60 degrees.

Fig. 12 Using a parametric sweep, the scanned patterns can be generated

Comparing PML and Floquet Port with Radiation Boundary

To assess the accuracy of the radiation boundary vs. PML and Floquet port, I ran the simulations for scan angles up to 60 degrees for a single-element patch antenna. Figure 13 shows that the accuracy of the radiation boundary drops after around 15 degrees of scanning, while PML and Floquet port show similar performance.

Fig. 13 Comparison of radiation patterns using PML (red), Floquet Port (blue), and Radiation boundary (orange).

S Parameters

To compare the accuracy, we can also check the S parameters. Figure 14 shows the comparison of the active S parameter at port 1 for the PML and Floquet port models. Active S parameters were used since the unit cell antenna has two ports. Figure 15 shows how the S parameters compare for the model with the radiation boundary and the one with the Floquet port.

Fig. 14 Active S parameter comparison for different scan angles, PML vs. Floquet Port model.
Fig. 15 Active S parameter comparison for different scan angles, Radiation Boundary vs. Floquet Port model.
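As a reminder, the active S parameter at port i folds the coupling from all simultaneously driven ports into one reflection-like quantity: active S_i = (Σ_j S_ij·a_j) / a_i. A small sketch of that definition (the S values below are made up for illustration, not taken from the model):

```python
import cmath

def active_s(s_row, a, i):
    """Active S parameter at port i: sum_j S[i][j] * a[j], divided by a[i],
    where a is the vector of incident-wave excitations."""
    return sum(sij * aj for sij, aj in zip(s_row, a)) / a[i]

# Hypothetical row of a 2-port S-matrix (S11, S12) and excitations:
S1 = [complex(-0.15, 0.05), complex(-0.30, 0.10)]
a = [1.0, cmath.exp(1j * cmath.pi / 4)]   # port 2 driven 45 degrees ahead
print(abs(active_s(S1, a, 0)))            # reflection seen at port 1 when both ports are driven
```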

Conclusion

The unit cell definition and the options for terminating the cell were discussed here. Stay tuned. In the next blog, we will discuss how the unit cell is utilized in modeling antenna arrays.

All Things ANSYS 056: A Unique Perspective on a Unique Solution – PADT Sales Talks ANSYS Applications

 

Published on: February 10th, 2020
With: Eric Miller, Bob Calvin, Dan Christensen, Brian Benbow, Heather Dean, Ian Scott & Will Kruspe
Description:  

In this episode your host and Co-Founder of PADT, Eric Miller is joined by Bob Calvin, Dan Christensen, Brian Benbow, Heather Dean, Ian Scott, and Will Kruspe from PADT’s ANSYS sales team to discuss the benefits they see in ANSYS as a solution for their unique customer bases, as well as for manufacturers and engineers as a whole. With a combination of technical know-how and knowledge of positioning within different industries, the PADT sales team shares a unique perspective on the value of the various tools that make up the ANSYS suite and how users can best take advantage of them in order to help them succeed.

If you have any questions, comments, or would like to suggest a topic for the next episode, shoot us an email at podcast@padtinc.com we would love to hear from you!


Reduce EMI with Good Signal Integrity Habits

Recently, the Signal Integrity Journal posted its ‘Top 10 Articles’ of 2019. All of the articles included were excellent; however, one stood out to me from the rest – ‘Seven Habits of Successful 2-Layer Board Designers’ by Dr. Eric Bogatin (https://www.signalintegrityjournal.com/blogs/12-fundamentals/post/1207-seven-habits-of-successful-2-layer-board-designers). In this work, Dr. Bogatin and his students developed a 2-layer printed circuit board (PCB) while trying to minimize signal and power integrity issues as much as possible. Along the way, they described seven ‘golden habits’ for this board development. These are fantastic habits that I’m confident we can all agree with. In particular, there was one habit at which I wanted to take a deeper look:

“…Habit 4: When you need to route a cross-under on the bottom layer, make it short. When you can’t make it short, add a return strap over it…”

Generally speaking, this habit advises being very careful when routing signal traces over a gap in the ground plane. From the signal integrity point of view, Dr. Bogatin explained it perfectly: “…The signal traces routed above this gap will see a gap in the return path and generate cross talk to other signals also crossing the gap…”. On one hand, crosstalk won’t be a problem if there are no other nets around, so the layout might work just fine in that case. However, crosstalk is not the only risk; fundamentally, crosstalk is an EMI problem. So, I wanted to explore what happens when this habit is ignored and there are no nearby nets to worry about.

To investigate, I created a simple 2-layer board with a signal trace, connected to a 5V voltage source, routed over an air gap. Then I observed the near-field and far-field results using the ANSYS SIwave solution. Here is what I found.

Near and Far Field Analysis

Typically, near and far fields are characterized by the solved E and H fields around the model. This feature in ANSYS SIwave gives the engineer the ability to simulate both E and H fields for near-field analysis, and the E field for far-field analysis.

First and foremost, we can see, as expected, that both the near and far fields have resonances at the same frequencies. Additionally, we can observe from Figure 1 that both the E and H near fields have their largest radiation spikes at the 786.3 MHz and 2.349 GHz resonant frequencies.

Figure 1. Plotted E and H fields for both Near and Far Field solutions

If we plot E and H fields for Near Field, we can see at which physical locations we have the maximum radiation.

Figure 2. Plotted E and H fields for Near field simulations

As expected, we see the maximum radiation occurring over the air gap, where there is no return path for the current. Since we know that current is directly related to electromagnetic fields, we can also compute AC current to better understand the flow of the current over the air gap.

Compute AC Currents (PSI)

This feature has a very simple setup interface. The user only needs to make sure that the excitation sources are read correctly and that the frequency range is properly indicated. A few minutes after setting up the simulation, we get frequency dependent results for current. We can review the current flow at any simulated frequency point or view the current flow dynamically by animating the plot.

Figure 3. Computed AC currents

As seen in Figure 3, we observe the current being transferred from the energy source along the transmission line to the open end of the trace. On the ground layer, we see the return current directed back to the source. However, at the location of the air gap there is no metal for the return current to flow through; therefore, we see an unwanted concentration of energy along the plane edges. This energy may cause electromagnetic radiation and potential emission problems.

If we have a very complicated multi-layer board design, it won’t be easy to simulate the current flow and the near and far fields for the whole board. It is possible, but the engineer will need either extra computing time or extra computing power. To address this issue, SIwave has a feature called the EMI Scanner, which helps identify problematic areas on the board without running full simulations.

EMI Scanner

The ANSYS EMI Scanner, which is based on geometric rule checks, identifies design issues that might result in electromagnetic interference problems during operation. So, I ran the EMI Scanner to quickly identify areas on the board which may create unwanted EMI effects. It is recommended that engineers, after finding all potentially problematic areas with the EMI Scanner, run more detailed analyses on those areas using other SIwave features or HFSS.

Currently the EMI Scanner contains 17 rules, which are categorized as ‘Signal Reference’, ‘Wiring/Crosstalk’, ‘Decoupling’ and ‘Placement’. For this project, I focused on the ‘Signal Reference’ rules group, to find violations for ‘Net Crossing Split’ and ‘Net Near Edge of Reference’. I will discuss other EMI Scanner rules in more detail in a future blog (so be sure to check back for updates).

Figure 4. Selected rules in EMI Scanner (left) and predicted violations in the project (right)

As expected, the EMI Scanner properly identified 3 violations, as highlighted in Figure 4. We can either review or export the report, or analyze the violations with iQ-Harmony. With this feature, besides generating a user-friendly report with graphical explanations, we are also able to run ‘What-if’ scenarios to see the possible results of geometrical optimization.

Figure 5. Generated report in iQ-Harmony with ‘What-If’ scenario

Based on these quick EMI Scanner results, the engineer can either redesign the board right away or run more analysis using a more accurate approach.

Conclusion

In this blog, we successfully ran simulations using the ANSYS SIwave solution to understand the effect of not following Dr. Bogatin’s advice on routing a signal trace over a gap on a 2-layer board. We also used 4 different features in SIwave, each of which delivered the correct, expected results.

Overall, it is not easy to think about all possible SI/PI/EMI issues while developing a complex board. In these modern times, engineers don’t need to manufacture a physical board to evaluate EMI problems. Many developmental steps can now be performed in simulation, and the ANSYS SIwave tool in conjunction with the HFSS solver can help deliver the right design on the first try.

If you would like more information or have any questions please reach out to us at info@padtinc.com.

Defining Antenna Array Excitations with Nested-If Statements in HFSS

HFSS offers various methods of defining array excitations. For a large array, you may take advantage of the “Load from File” option to load the magnitude and phase of each port. However, in many situations you may have specific cases of array excitation, for example, changing amplitude tapering, or phase variations that happen due to frequency change. In this blog we will look at using the “Edit Sources” method to change the magnitude and phase of each excitation. Such cases might not be easily automated using a parametric sweep. If the array is relatively small and there are not many individual cases to examine, you may set up the cases using array parameters and nested-if statements.

In the following example, I used nested-if statements to parameterize the excitations of the pre-built example “planar_flare_dipole_array”, which can be found by choosing File->Open Examples->HFSS->Antennas (Fig. 1), so you can follow along. The file was then saved as “planar_flare_dipole_array_if”, and the project was copied to create two examples (Phase Variations, Amplitude Variations).

Fig. 1. Planar_flare_dipole_array with 5 antenna elements (HFSS pre-built example).

Phase Variation for Selected Frequencies

In this example, I assumed there were three different frequencies, each of which had a set of coefficients for the phase shift. Therefore, three array parameters were created. Each array parameter has 5 elements, because the array has 5 excitations:

A1: [0, 0, 0, 0, 0]

A2: [0, 1, 2, 3, 4]

A3: [0, 2, 4, 6, 8]

Then 5 coefficients were created using nested-if statements. “Freq” is one of the built-in HFSS variables and refers to frequency. The simulation was set up for a discrete sweep of 3 frequencies (1.8, 1.9 and 2.0 GHz) (Fig. 2). The coefficients were defined as (Fig. 3):

E1: if(Freq==1.8GHz,A1[0],if(Freq==1.9GHz,A2[0],if(Freq==2.0GHz,A3[0],0)))

E2: if(Freq==1.8GHz,A1[1],if(Freq==1.9GHz,A2[1],if(Freq==2.0GHz,A3[1],0)))

E3: if(Freq==1.8GHz,A1[2],if(Freq==1.9GHz,A2[2],if(Freq==2.0GHz,A3[2],0)))

E4: if(Freq==1.8GHz,A1[3],if(Freq==1.9GHz,A2[3],if(Freq==2.0GHz,A3[3],0)))

E5: if(Freq==1.8GHz,A1[4],if(Freq==1.9GHz,A2[4],if(Freq==2.0GHz,A3[4],0)))

Please note that the last case is the default, so if the frequency is none of the three frequencies given in the nested-if, the default phase coefficient (“0” in this case) is chosen.
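The nested-if chain behaves like an if/elif ladder with a default case; in Python terms (a sketch of the logic only, not HFSS syntax):

```python
# Phase-coefficient tables, one row per frequency (the A1-A3 arrays above).
A = {1.8: [0, 0, 0, 0, 0],
     1.9: [0, 1, 2, 3, 4],
     2.0: [0, 2, 4, 6, 8]}

def coeff(freq_ghz, n):
    """Mirror of E(n+1): pick the phase coefficient for element n,
    falling back to the default 0 when no frequency case matches."""
    return A.get(freq_ghz, [0] * 5)[n]

print(coeff(1.9, 4))   # 4 -> E5 evaluates to A2[4] at 1.9 GHz
print(coeff(2.1, 4))   # 0 -> default branch of the nested-if
```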

Fig. 2. Analysis Setup.

Fig. 3. Parameter definitions for the phase variation case.

By selecting the menu item HFSS->Fields->Edit Sources, I defined E1-E5 as the coefficients for the phase shift. Note that phase_shift is a variable defined to control the phase, and E1-E5 are its per-element coefficients (Fig. 4):

Fig. 4. Edit sources using the defined variables.

The radiation pattern can now be plotted at each frequency for the phase shifts that were defined (A1 for 1.8 GHz, A2 for 1.9 GHz and A3 for 2.0 GHz) (Figs 5-6):

 Fig. 5. Settings for radiation pattern plots.

Fig. 6. Radiation pattern for phi=90 degrees at the different frequencies; the variation of phase shifts shows how the main beam is steered at each frequency.
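The beam shift seen in the pattern plots follows directly from array theory: a progressive per-element phase beta steers a uniform linear array so that sin(theta0) = -beta/(k*d). The sketch below checks this numerically. The element count matches the example, but the half-wavelength spacing is an assumed value for illustration, not taken from the example model.

```python
# Sanity check of the beam shift in Fig. 6: for an N-element uniform linear
# array, a progressive per-element phase beta steers the main beam to
# sin(theta0) = -beta/(k*d). Spacing d = 0.5*lambda is an assumption here.
import numpy as np

def main_beam_deg(beta_deg, d_over_lambda=0.5, n=5):
    """Peak direction (degrees from broadside) of the array factor."""
    theta = np.linspace(-np.pi/2, np.pi/2, 2001)
    k_d = 2*np.pi*d_over_lambda
    beta = np.radians(beta_deg)
    elem = np.arange(n)
    # Array factor: sum over elements of exp(j*n*(k*d*sin(theta) + beta))
    af = np.abs(np.exp(1j*np.outer(k_d*np.sin(theta) + beta, elem)).sum(axis=1))
    return np.degrees(theta[np.argmax(af)])

for beta in (0, -30, -60):   # assumed per-element phase shifts, degrees
    print(f"beta = {beta:4d} deg/element -> main beam at {main_beam_deg(beta):6.1f} deg")
```

With beta = 0 the beam sits at broadside, and increasing the per-element phase shift walks the main lobe away from broadside, which is the same trend the A1/A2/A3 coefficient sets produce across the three frequencies.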

Amplitude Variation for Selected Cases

In the second example I created three cases that were controlled using the variable “CN”. CN is simply the case number with no units.

The variable definitions were similar to the first case: 3 array parameters and 5 coefficients. This time the coefficients were used for the magnitude, and the variable in the nested-if was CN, giving 3 cases plus a default. The default coefficient here was chosen as "1" (Figs. 7-8).

A1: [1, 1.5, 2, 1.5, 1]

A2: [1, 1, 1, 1, 1]

A3: [2, 1, 0, 1, 2]

E1: if(CN==1,A1[0],if(CN==2,A2[0],if(CN==3,A3[0],1)))*1W

E2: if(CN==1,A1[1],if(CN==2,A2[1],if(CN==3,A3[1],1)))*1W

E3: if(CN==1,A1[2],if(CN==2,A2[2],if(CN==3,A3[2],1)))*1W

E4: if(CN==1,A1[3],if(CN==2,A2[3],if(CN==3,A3[3],1)))*1W

E5: if(CN==1,A1[4],if(CN==2,A2[4],if(CN==3,A3[4],1)))*1W

Fig. 7. Parameter definitions for the amplitude variation case.

Fig. 8. Excitation settings for the amplitude variation case.

Notice that CN in the parametric definition has the value of "1". To create the solution for all three cases, I used a parametric sweep by selecting the menu item Optimetrics->Add->Parametric. In the Add/Edit Sweep dialog I chose the variable "CN" with Start: 1, Stop: 3, Step: 1. In the Options tab I checked "Save Fields and Mesh", "Copy geometrically equivalent meshes", and "Solve with copied meshes only"; since the geometry does not change, this avoids repeating the adaptive meshing for each case (Fig. 9). When plotting the patterns I could then choose the parameter CN; the results for CN=1, 2, and 3 are shown in Fig. 10. You can see how the amplitude tapering has affected the side lobe level.

Fig. 9. Parametric sweep definition for the amplitude variation case.

Fig. 10. Radiation pattern for phi=90 degrees for the different cases of amplitude tapering; the change in tapering alters both the beamwidth and the side lobe levels.
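The trade-off visible in the plots, where tapering lowers the side lobes while widening the main beam, can be checked with a simple array-factor calculation. The weights below mirror the A1-A3 arrays defined above; the half-wavelength element spacing is an assumption for illustration, not taken from the model.

```python
# Rough check of the trend in Fig. 10: amplitude tapering across the five
# elements trades beamwidth against side-lobe level. Weights mirror A1-A3;
# d = 0.5*lambda spacing is an assumed value.
import numpy as np

def sidelobe_level_db(weights, d_over_lambda=0.5):
    """Highest side-lobe level (dB relative to main beam) of a broadside array."""
    theta = np.linspace(-np.pi/2, np.pi/2, 4001)
    k_d = 2*np.pi*d_over_lambda
    n = np.arange(len(weights))
    af = np.abs(np.exp(1j*np.outer(k_d*np.sin(theta), n)) @ np.array(weights))
    af_db = 20*np.log10(af/af.max() + 1e-12)
    # Find local maxima away from the main beam (broadside, theta = 0).
    peaks = [af_db[i] for i in range(1, len(theta) - 1)
             if af_db[i] > af_db[i-1] and af_db[i] > af_db[i+1]
             and abs(theta[i]) > 0.2]
    return max(peaks)

for name, w in [("A1 taper", [1, 1.5, 2, 1.5, 1]),
                ("A2 uniform", [1, 1, 1, 1, 1]),
                ("A3 inverted", [2, 1, 0, 1, 2])]:
    print(f"{name:11s}: SLL = {sidelobe_level_db(w):6.1f} dB")
```

The uniform case lands near the classic -12 dB first side lobe of a 5-element array, the A1 taper pushes the side lobes well below that at the cost of a wider main lobe, and the inverted A3 taper raises them, consistent with Fig. 10.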

Drawback

The drawback of this method is that array parameters are not post-processing variables, so changing them requires re-running the simulation. All of the possible cases therefore need to be defined before the simulation is run.

If you would like more information or have any questions please reach out to us at info@padtinc.com.

All Things ANSYS 053: 2019 Wrap-up & Predictions for ANSYS in the New Year

 

Published on: December 20th, 2019
With: Eric Miller, Tom Chadwick, Ted Harris, Sina Ghods & Ahmed Fayad
Description:  

In this episode your host and Co-Founder of PADT, Eric Miller is joined by PADT’s Simulation Support Team, including Tom Chadwick, Ted Harris, Sina Ghods, and Ahmed Fayad for a round-table discussion of their favorite ANSYS features released in 2019, along with predictions on what has yet to come.

If you have any questions, comments, or would like to suggest a topic for the next episode, shoot us an email at podcast@padtinc.com we would love to hear from you!


New 3D Design Capabilities Available in ANSYS 2019 R3 – Webinar

The ANSYS 3D Design family of products enables CAD modeling and simulation for all design engineers. Since the demands on today’s design engineer to build optimized, lighter and smarter products are greater than ever, using the appropriate design tools is more important than ever.

Rapidly explore ideas, iterate, and innovate with ANSYS Discovery 3D design software. Evaluate more concepts and rapidly gauge design performance through virtual design testing as you delve deeper into your design's details, with the same results accuracy as ANSYS flagship products, when and where you need it.

Join PADT’s Training & Support Application Engineer, Robert McCathren for a look at the new 3D design capabilities available in ANSYS 2019 R3 for ANSYS Discovery AIM, Live, and SpaceClaim. These new updates include:

Mass flow outlets

Transient studies with time varying inputs

Structural beam support

Linear buckling support

Physics-aware meshing improvements

Mesh failure localization and visualization improvements

And much more

Register Here

If this is your first time registering for one of our BrightTALK webinars, simply click the link and fill out the attached form. We promise that the information you provide will only be shared with those promoting the event (PADT).

You will only have to do this once! For all future webinars, you can simply click the link, add the reminder to your calendar and you’re good to go!

Frequency Dependent Material Definition in ANSYS HFSS

Electromagnetic models, especially those covering a frequency bandwidth, require a frequency dependent definition of dielectric materials. Material definitions in ANSYS Electronics Desktop can include frequency dependent curves for use in tools such as HFSS and Q3D. However, there are 5 different models to choose from, so you may be asking: What’s the difference?

In this blog, I will cover each of the options in detail. At the end, I will also show how to activate the automatic setting for applying a frequency dependent model that satisfies the Kramers-Kronig conditions for causality and requires a single frequency definition.

Background

Recall that the dielectric properties of a material arise from the material's polarization:

D = ε0E + P (1)

where D is the electric flux density, E is the electric field intensity, and P is the polarization vector. The material polarization can be written as the convolution of a general dielectric response (pGDR) and the electric field intensity.

P(t) = ∫ pGDR(t - t') E(t') dt' (2)

The dielectric polarization spectrum is characterized by three dispersion (relaxation) regions α, β, and γ at low (Hz), medium (kHz to MHz), and high (GHz and above) frequencies. For example, in the case of human tissue, tissue permittivity decreases and effective conductivity increases as frequency increases [1].

Fig. 1. α, β and γ regions of dielectric permittivity

Each of these regions can be modeled with a relaxation time constant

(3)

where τ is the relaxation time.

(4)

The well-known Debye expression is found from the spectral representation of the complex permittivity ε(ω), and is given as:

ε(ω) = ε∞ + (εs - ε∞)/(1 + jωτ) (5)

ε(ω) = ε′(ω) - jε″(ω) (6)

where ε∞ is the permittivity at frequencies where ωτ >> 1, εs is the static permittivity at ωτ << 1, and j² = -1. The magnitude of the dispersion is ∆ε = εs - ε∞.
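As a quick numerical illustration of the Debye expression (5)-(6), the sketch below evaluates the complex permittivity across frequency. The εs, ε∞, and τ values are invented for illustration, not taken from any HFSS material.

```python
# Numerical sketch of the single-pole Debye expression (5)-(6):
# eps(w) = eps_inf + (eps_s - eps_inf)/(1 + j*w*tau).
# The eps_s, eps_inf, tau values below are illustrative assumptions.
import numpy as np

def debye(freq_hz, eps_s, eps_inf, tau):
    """Single-pole Debye complex relative permittivity."""
    w = 2*np.pi*np.asarray(freq_hz)
    return eps_inf + (eps_s - eps_inf)/(1 + 1j*w*tau)

eps_s, eps_inf, tau = 4.4, 4.0, 1.0e-9      # assumed example values
f = np.logspace(6, 11, 5)                    # 1 MHz .. 100 GHz
for fi, ei in zip(f, debye(f, eps_s, eps_inf, tau)):
    # tan(delta) = eps''/eps' = -Im/Re for the e^{+jwt} convention used here
    print(f"f = {fi:9.3e} Hz  eps' = {ei.real:5.3f}  tan_d = {-ei.imag/ei.real:7.1e}")
```

At low frequency the real part approaches εs, at high frequency it approaches ε∞, and the loss peaks near ωτ = 1, which is exactly the relaxation behavior described above.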

The multiple-pole Debye dispersion equation has also been used to characterize dispersive dielectric properties [2]:

ε(ω) = ε∞ + Σn ∆εn/(1 + jωτn) (7)

In particular, the complexity of the structure and composition of biological materials may cause each dispersion region to be broadened by multiple contributions. In that case a distribution parameter is introduced, and the Debye model is modified into what is known as the Cole-Cole model:

ε(ω) = ε∞ + Σn ∆εn/(1 + (jωτn)^(1-αn)) (8)

where αn, the distribution parameter, is a measure of the broadening of the dispersion.
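A numerical sketch of the Cole-Cole expression (8) follows. The pole parameters are placeholders rather than real tissue data, and the static ionic-conductivity term σs/(jωε0) used in the tissue models of [3] is included as an optional extra.

```python
# Sketch of the Cole-Cole expression (8): each Debye pole broadened by a
# distribution parameter alpha_n, with an optional static-conductivity term
# as used in tissue models. Parameter values below are placeholders.
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def cole_cole(freq_hz, eps_inf, poles, sigma_s=0.0):
    """poles: list of (delta_eps, tau_s, alpha) triples."""
    w = 2*math.pi*freq_hz
    eps = eps_inf + (sigma_s/(1j*w*EPS0) if sigma_s else 0.0)
    for d_eps, tau, alpha in poles:
        eps += d_eps/(1 + (1j*w*tau)**(1 - alpha))
    return eps

# One broadened pole; alpha = 0 reduces to the plain Debye pole.
f = 1.0e9
broad = cole_cole(f, 4.0, [(40.0, 8.0e-12, 0.1)])
plain = cole_cole(f, 4.0, [(40.0, 8.0e-12, 0.0)])
print(f"Cole-Cole eps' = {broad.real:.2f}, Debye eps' = {plain.real:.2f}")
```

Setting every αn to zero recovers the multipole Debye model (7), which is why Cole-Cole data can be represented in HFSS through the generic Frequency Dependent option described later in this post.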

Gabriel et al. [3] measured a number of human tissues in the range of 10 Hz to 100 GHz at body temperature (37 °C). This data is freely available to the public through IFAC [4].

Frequency Dependent Material Definition in HFSS and Q3D

In HFSS you can assign conductivity either directly, as bulk conductivity, or through a loss tangent. This provides flexibility, but you should only provide the loss once; the solver uses the loss values exactly as they are entered.

To define a user-defined material, choose Tools->Edit Libraries->Materials (Fig. 2). In the Edit Libraries window, either find your material in the library or choose "Add Material".

Fig. 2. Edit Libraries screen shot.

To add frequency dependence information, choose "Set Frequency Dependency" from the "View/Edit Material" window. This opens the "Frequency Dependent Material Setup Option" dialog, which provides five different ways of defining the material properties (Fig. 3).

Fig. 3. (Left) View/Edit Material window, (Right) Frequency Dependent Material Setup Option.

Before choosing a method of defining the material please note [5]:

  • The Piecewise Linear and Frequency Dependent Data Points models apply to both the electric and magnetic properties of the material. However, they do not guarantee that the material satisfies causality conditions, and so they should only be used for frequency-domain applications.
  • The Debye, Multipole Debye and Djordjevic-Sarkar models apply only to the electrical properties of dielectric materials. These models satisfy the Kramers-Kronig conditions for causality, and so are preferred for applications (such as TDR or Full-Wave SPICE) where time-domain results are needed. They also include an automatic Djordjevic-Sarkar model to ensure causal solutions when solving frequency sweeps for simple constant material properties.
  • HFSS and Q3D can interpolate the property’s values at the desired frequencies during solution generation.

Piecewise Linear

This option is the simplest way to define frequency dependence. It divides the frequency band into three regions, so two frequencies, a Lower Frequency and an Upper Frequency, are needed as input; for each of them the Relative Permittivity, Relative Permeability, Dielectric Loss Tangent, and Magnetic Loss Tangent are entered. Between these corner frequencies, both HFSS and Q3D linearly interpolate the material properties; above and below the corner frequencies, HFSS and Q3D extrapolate the property values as constants (Fig. 4).

Fig. 4. Piecewise Linear Frequency Dependent Material Input window.

Once these values are entered, 4 different data sets are created ($ds_epsr1, $ds_mur1, $ds_tande1, $ds_tandm1). These data sets can now be edited: choose Project->Data sets, select the data set you would like to edit, and click Edit (Fig. 5). The data set can then be modified with additional points if desired (Fig. 6).

Fig. 5. (Left) Project data set selection, (right) defined data set for the material.
Fig. 6. A sample data set.
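The interpolation behavior described above is easy to mimic outside HFSS, for example to sanity-check what property values the solver will use at a given frequency. The corner frequencies and permittivity values below are invented for illustration.

```python
# Sketch of what the Piecewise Linear model does to a property: linear
# interpolation between the two corner frequencies, constant extrapolation
# outside them. The corner values below are invented for illustration.
import numpy as np

def piecewise_linear(freq_hz, f_lo, v_lo, f_hi, v_hi):
    """Property value vs frequency under the piecewise-linear model."""
    # np.interp clamps to the end values outside [f_lo, f_hi], which matches
    # the constant extrapolation below/above the corner frequencies.
    return np.interp(freq_hz, [f_lo, f_hi], [v_lo, v_hi])

f_lo, f_hi = 1e9, 10e9            # lower/upper corner frequencies, assumed
eps_lo, eps_hi = 4.4, 4.1         # relative permittivity at the corners
for f in (1e8, 1e9, 5.5e9, 10e9, 1e11):
    print(f"f = {f:8.2e} Hz -> eps_r = {piecewise_linear(f, f_lo, eps_lo, f_hi, eps_hi):.3f}")
```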

Frequency Dependent

The Frequency Dependent material definition is similar to the Piecewise Linear method, with one difference. After selecting this option, the Enter Frequency Dependent Data Point window opens, which lets you choose which material properties are defined as datasets; a dataset must then be defined for each chosen property. The datasets can be defined ahead of time or on the fly, and any number of data points may be entered. There is also the option of importing or editing frequency dependent data sets for each material property (Fig. 7).

Fig. 7. This window provides the option of choosing which material properties are frequency dependent and entering the data set associated with each.

Djordjevic-Sarkar

This model was developed initially for FR-4, commonly used in printed circuit boards and packages [6]. In fact, it uses an infinite distribution of poles to model the frequency response, and in particular the nearly constant loss tangent, of these materials.

ε(ω) = ε∞ + (∆ε/ln(ωB/ωA)) ln((ωB + jω)/(ωA + jω)) + σDC/(jωε0) (9)

where ε∞ is the permittivity at very high frequency, σDC is the conductivity at low (DC) frequency, j² = -1, ωA is the lower angular frequency (below this frequency the permittivity approaches its DC value), and ωB is the upper angular frequency (above this frequency the permittivity quickly approaches its high-frequency value). The magnitude of the dispersion is ∆ε = εs - ε∞.
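To see the characteristic nearly constant loss tangent, the sketch below evaluates the Djordjevic-Sarkar expression (9) directly. All parameter values are illustrative assumptions chosen to give FR-4-like numbers, not values extracted from HFSS.

```python
# Numerical sketch of the Djordjevic-Sarkar form in (9): an infinite pole
# distribution between wA and wB yields a nearly constant loss tangent in
# between. All parameter values here are illustrative assumptions.
import cmath
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def djordjevic_sarkar(freq_hz, eps_inf, d_eps, f_a, f_b, sigma_dc=0.0):
    w = 2*math.pi*freq_hz
    w_a, w_b = 2*math.pi*f_a, 2*math.pi*f_b
    eps = eps_inf + (d_eps/math.log(w_b/w_a))*cmath.log((w_b + 1j*w)/(w_a + 1j*w))
    if sigma_dc:
        eps += sigma_dc/(1j*w*EPS0)
    return eps

# FR-4-like numbers (assumed): eps' ~ 4.4 and tan(delta) ~ 0.02 mid-band.
for f in (1e8, 1e9, 1e10):
    e = djordjevic_sarkar(f, 4.0, 1.2, 1e3, 1e12)
    print(f"f = {f:.0e} Hz  eps' = {e.real:.3f}  tan_d = {-e.imag/e.real:.4f}")
```

Over the two decades printed, the real permittivity drifts slowly while the loss tangent stays almost flat, which is exactly the wideband behavior this model was built to capture for FR-4.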

Both HFSS and Q3D allow the user to enter the relative permittivity and loss tangent at a single measurement frequency; the relative permittivity and conductivity at DC may optionally be entered. Writing the permittivity in its complex form [7]:

(10)
(11)

Therefore, at the measurement frequency one can separate real and imaginary parts

(12)
(13)

where

(14)

Therefore, the parameters of the Djordjevic-Sarkar model can be extracted if the DC conductivity is known

(15)

If the DC conductivity is not known, then the heuristic approximation ∆ε = 10 εr tan δ is used.

The window shown in Fig. 8 is used to enter the measurement values.

Fig. 8. The required values to calculate permittivity using Djordjevic-Sarkar model.

Debye Model

As explained in the Background section, the single-pole Debye model is a good approximation of lossy dispersive dielectric materials within a limited frequency range. In some materials, up to about 10 GHz, ion and dipole polarization dominate and a single-pole Debye model is adequate.

(16)
(17)
(18)
(19)
(20)

The Debye parameters can be calculated from the two measurements [7]

(21)

Both HFSS and Q3D allow you to specify upper and lower measurement frequencies, and the loss tangent and relative permittivity values at these frequencies. You may optionally enter the permittivity at high frequency, the DC conductivity, and a constant relative permeability (Fig. 9).

Fig. 9. The required values for Single Pole Debye model.

Multipole Debye Model

For the Multipole Debye model, measurements at multiple frequencies are required. The input window provides entry points for relative permittivity and loss tangent versus frequency. Based on this data, the software dynamically generates frequency dependent expressions for relative permittivity and loss tangent using the Multipole Debye model, and the input dialog plots these expressions together with your input data using linear interpolation (Fig. 10).

Fig. 10. The required values for Multipole Debye model.
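One common way to build such a multipole model from tabulated data is to fix a set of pole relaxation times and solve for the pole strengths by linear least squares. The sketch below demonstrates the idea on synthetic data; this is a generic fitting approach, not necessarily the algorithm HFSS uses internally.

```python
# Sketch of fitting a multipole Debye model (7) to tabulated permittivity
# data: fix the pole relaxation times tau_n and solve for the strengths
# delta_eps_n by linear least squares. Generic approach, not HFSS's own.
import numpy as np

def fit_multipole_debye(freq_hz, eps_complex, taus, eps_inf):
    """Return delta_eps_n minimizing ||eps_inf + sum_n d_n/(1+jw*tau_n) - eps||."""
    w = 2*np.pi*np.asarray(freq_hz)
    basis = 1.0/(1.0 + 1j*np.outer(w, taus))          # shape (n_freq, n_poles)
    # Stack real and imaginary parts so the least-squares problem stays real.
    a = np.vstack([basis.real, basis.imag])
    b = np.concatenate([(eps_complex - eps_inf).real, eps_complex.imag])
    d, *_ = np.linalg.lstsq(a, b, rcond=None)
    return d

# Synthetic "measured" data from a known two-pole model, then recover it.
f = np.logspace(8, 11, 40)
w = 2*np.pi*f
true = 3.5 + 0.6/(1 + 1j*w*5e-11) + 0.3/(1 + 1j*w*2e-12)
d = fit_multipole_debye(f, true, taus=[5e-11, 2e-12], eps_inf=3.5)
print("recovered pole strengths:", np.round(d, 3))
```

Because the synthetic data comes from the same two-pole model, the least-squares solve recovers the pole strengths essentially exactly; with real measured data the residual indicates how many poles are needed.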

Cole Cole Material Model

The Cole-Cole model is not an option in the material definition; however, it is possible to generate the frequency dependent datasets and use the Frequency Dependent option to upload these values. In fact, the ANSYS human body models are built from the IFAC database data [4] using the Frequency Dependent option.

Visualization

Frequency-dependent properties can be plotted in a few different ways. In View/Edit Material dialog right-click and choose View Property vs. Frequency. In addition, the dialogs for each of the frequency dependent material setup options contain plots displaying frequency dependence of the properties.

You can also double-click the material property name to view the plot.

Automatically use causal materials

As mentioned at the beginning, there is a simple automatic method for applying a frequency dependent model in HFSS. Select the menu item HFSS->Design Settings, and check the box next to "Automatically use causal materials" under the Lossy Dielectrics tab.

Fig. 11. Causal material can be enforced in HFSS Design Settings.

This option automatically applies the Djordjevic-Sarkar model described above to objects with constant material permittivity greater than 1 and dielectric loss tangent greater than 0. Keep in mind that not only is this feature simple to use, the Djordjevic-Sarkar model also satisfies the Kramers-Kronig conditions for causality, which is particularly preferred for wideband applications and cases where time-domain results are needed. Please note that if the assigned material is already frequency dependent, the automatic creation of frequency dependent lossy materials is ignored.

If you would like more information or have any questions about ANSYS products please email info@padtinc.com

References

[1] D. T. Price, "MEMS and electrical impedance spectroscopy (EIS) for non-invasive measurement of cells," in MEMS for Biomedical Applications, 2012, https://www.sciencedirect.com/topics/materials-science/electrical-impedance
[2] W. D. Hurt, "Multiterm Debye dispersion relations for permittivity of muscle," IEEE Trans. Biomed. Eng., vol. 32, pp. 60-64, 1985.
[3] S. Gabriel, R. W. Lau, and C. Gabriel, "The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues," Physics in Medicine & Biology, vol. 41, no. 11, p. 2271, 1996.
[4] Dielectric Properties of Body Tissues in the Frequency Range 10 Hz - 100 GHz, IFAC-CNR, http://niremf.ifac.cnr.it/tissprop/.
[5] ANSYS HFSS Online Help, Nov. 2013, "Assigning Materials."
[6] A. R. Djordjevic, R. D. Biljic, V. D. Likar-Smiljanic, and T. K. Sarkar, "Wideband frequency-domain characterization of FR-4 and time-domain causality," IEEE Trans. Electromagnetic Compatibility, vol. 43, no. 4, pp. 662-667, Nov. 2001.
[7] ANSYS HFSS Online Help, 2019, "Materials Technical Notes."

Useful Links

Piecewise Linear Input

Debye Model Input

Multipole Debye Model Input

Djordjevic-Sarkar

Enter Frequency Dependent Data Points

Modifying Datasets.