Thermal Results Visualization – Ansys SIwave Icepak and Ansys Electronics Desktop Icepak

As a primarily mechanical/systems engineer, I am not exactly qualified to go through and list exactly what SIwave does and why you need it for any given situation (shoutout to Aleksandr, our actual expert, whose assistance has been invaluable for my simple example case). However, what I think I have grasped is that SIwave is one of those Ansys tools where, if you need it, you probably really need it. Where this becomes relevant to me is, of course, PCB thermal analysis. DCIR is typically the electrical half of this problem, and it sits within SIwave's expansive toolkit, but SIwave also contains some very easy-to-use thermal options for co-simulation with Icepak. I'll admit that I have tended to somewhat dismiss this on my end, as I am already familiar with a couple of more advanced thermal analysis tools, so why wouldn't I just use those if I wanted to look at the thermal response of a PCB? Despite this, I have recently (begrudgingly) taken a more in-depth look at the thermal side of SIwave, and what I have found is that even if the available settings are a little more simplistic than I might always like, it really is incredibly accessible and provides some nice visualization capability. What's more, it provides not only an easy path to view your existing thermal results in a full Icepak interface, but also serves as a great starting point if you need to analyze more complex setups in Icepak.

So, having just been through much of this on my own, it seems like a great opportunity to share some tips and tricks for thermal visualization in both Ansys SIWave and Ansys Electronics Desktop (EDT) Icepak, see where each is strong relative to the other, and then perhaps even share some suggestions for using the SIWave solution as a starting point to take an Icepak PCB simulation to the next level!

To start with, we need a SIwave DCIR project; the DCIR solution is what provides the thermal loads for the thermal solution. I am glossing over this, but basically, you need a PCB definition, a voltage source, and a current source. In the model I borrowed from Aleks, I am using these sources to push some current through one section of my PCB's power layer and then referencing them to the ground layer to complete the loop. This means that there are EM losses on both the power layer and the ground layer.

For the first simulation, we’ll want to set a baseline temperature for our electrical material properties and make sure the toggle for “Export power dissipation for use in ANSYS Icepak and Mechanical” is enabled.

Now, we can set up an Icepak simulation! As I alluded to, the settings available within SIwave are somewhat basic, although they do a good overall job of adhering to typical best practices. Our choices essentially come down to: a board model without components that strictly models thermal conduction within the board, or a board model with components that includes explicit thermal convection to the environment; a mesh-detail slider bar; and the cooling regime used (natural vs. forced convection). For this model, I'll be using forced convection with surface components and "Detailed" meshing so that I have the most to look at, but obviously the exact settings will vary somewhat depending on your use-case. In 2021 R2, the default SIwave-Icepak behavior is to use EDT Icepak as the solver; however, we can choose to specify "Use Classic Icepak" in the simulation setup window. This determines which version of Icepak we have to use for additional postprocessing as well, so I will leave "Use Classic Icepak" turned off.

The first method of visualization in SIwave is to simply right-click an Icepak simulation definition in the "Results" window and select "Display temperature".

This gives us a nice temperature contour on the outer surface of all the solid bodies considered during the simulation. If we stick with the top-down view, we can make use of a nice temperature probe that automatically displays at the mouse location. Once we rotate into a 3D view with the middle mouse button or other view options, we lose this probe but, of course, gain a nice graphical representation of the full geometry.

The second method is to use the View > Temperature Plots toolbar option, which gives us some more flexibility for viewing temperature through each layer.

Most commonly, we will probably be working with the XY cutting plane and then selecting the layer of interest from the drop-down menu so that we can see a plane through the entire PCB. For more precise control, we can also use the slider bar or input the exact plane-normal location to use for plotting.

One of the benefits of this approach is that we can use the other cutting plane definitions to get a cross-section view, along with whatever ECAD board elements we would like to plot. For instance, if we’d like to see more clearly how the temperature varies with depth underneath active components, or around via definitions, we can easily explore this, as in the image below.

Depending on your needs, this may be sufficient flexibility for observing the temperatures of interest, and the smoothly moving cut plane tied to the slider-bar position is certainly an easy way to get a sense of the board's behavior. However, SIwave only gives us access to temperature within the solid bodies of our PCB and components, and we can free ourselves from this limitation by moving into EDT Icepak. There are a few ways to do this: one is to right-click the Icepak simulation definition in Results and choose "Open project in Icepak," another is to use the same option from the "Results" section of the top toolbar, and the more manual method is to directly open the .aedt file that gets generated alongside the SIwave project file.

Much like SIwave, EDT Icepak primarily displays temperatures on cut planes or object surfaces. Three-dimensional contour plots are also available but tend to be less clear, especially on very thin bodies (like the layers of a PCB). For a cut plane, the most straightforward option is to directly draw a plane or create a new coordinate system (a coordinate system will automatically create the three component planes), both of which can be done through the top toolbar.

Personally, I find it easiest to quickly create the objects in the graphical window and then select them in the model tree to fine-tune their locations through the properties display, as above. I do think this is one of the places where SIwave has an edge in ease of use – having that slider-bar definition for a plane is much nicer. That said, this method in Icepak also lets us angle the plane however we like, so there are still trade-offs to be considered.

Once we have a plane defined, it is then very easy to select this plane in the model tree and right-click > Plot Fields > Temperature > Temperature to create a temperature plot.

One of the immediately observable differences is that we can now view temperature contours throughout the volume of air surrounding our PCB in addition to the PCB itself. So, if we were trying to compare against something like an experimental setup with a thermocouple placed in-air near the board, this would be the way to do it!

If we’re not interested in quite so large of a plot, we can also limit it to a certain model volume by choosing one of the objects in the “In Volume” list of the plot properties. In this case, Box1 and Box2 are smaller volumes enclosing the PCB that were automatically generated for mesh controls, which we can easily reuse for trimming down our temperature plot.

To instead plot on the surface of an object, we can select that object in the model tree (for the whole PCB, it is convenient to right-click it in the tree and use the “Select All” option), follow the same Plot Fields > Temperature > Temperature as before, and then make sure to enable “Plot on surface only”.

This should produce a plot that is very similar to what we obtained in SIwave. Another advantage of doing this in Icepak should now become clear: we have the capability to stack multiple field plots! As below, we can see the solid-body surface temperatures alongside our cut-plane temperature down the center.

We can get as creative with this as we’d like, plotting on many different cut planes simultaneously, or even combining types of plots. Since we have access to the air volume solution, we can even do things like plot velocity vectors around the PCB for more insight into the overall system.

Having access to the full solution field (fluid and solids) means we can also visualize some other helpful values. The surface heat transfer coefficients can help us understand how to improve our setup in some cases, for instance. In the below plot, we can see some clear shadowing behind surface components which is indicative of the primary flow separating from the surface of the PCB. This certainly explains why the back end of the board is so hot – the components in the back are somewhat hidden from the flow field by those in the front. Since component (and component power) density is higher in the back, we might choose to reverse the direction of flow so that the particularly dense section of components receives the brunt of the airflow, or maybe we might explore angling the board relative to the inlet such that the entire top receives more direct flow.

While we might reach the same or similar conclusions by looking at data through SIWave’s interface, we certainly wouldn’t have access to the tools necessary to actually implement all these changes to the simulation.

As an example, I can pretty easily create a new coordinate system, rotate it by 11° from the original, and then assign my air box to the rotated reference. In effect, this angles all of the PCB-related volumes with respect to the flow field in just a couple of steps.
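For intuition, the geometric effect of that rotated reference can be sketched as a simple rotation of the board's flow-facing normal. The axis choice and vectors below are illustrative, not taken from the actual model:

```python
import math

def rotate_about_z(v, deg):
    """Rotate a 3D vector about the z-axis by deg degrees (right-handed)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

# Hypothetical setup: the board face originally points straight into
# the inlet flow along +x. Tilting the reference coordinate system by
# 11 degrees carries every PCB-related volume with it, changing the
# board's angle of attack relative to the flow.
normal = (1.0, 0.0, 0.0)
tilted = rotate_about_z(normal, 11.0)
print(tilted)  # the normal picks up a small +y component
```

The solver sees exactly this kind of reorientation when the air box is assigned to the rotated coordinate system, with no need to redraw any geometry.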

After solving, I can then compare the new temperature fields to the old and pretty quickly find that the hotspot on the top surface has been greatly reduced and that the maximum temperature of the system has dropped by about 9 °C. Not too bad! Of course, since I have modified at least one of the simulation bodies, we do have to remesh and solve from scratch; however, we already have an existing DCIR simulation to make use of, and it was much easier getting to this point having started in SIwave.

For my last set of tips: the visualization of the PCB itself in Icepak has been rudimentary so far, but we can adjust this as well. Much like in SIwave, we can toggle the visualization of features for individual layers independently of everything else. These visualization settings are accessible by selecting our board in the 3D components list and then looking at the properties section.

Since these settings are independent of the 3D geometry visualization, we can selectively hide our model objects in order to isolate the detailed ECAD features. In my test case, the dielectric “Unnamed” layers include via definitions – so I can turn on visualization of these layers, hide the geometry for every layer except the bottom, and plot a temperature cut plane to get a nice visualization of how temperature varies around particular vias.

We could do the same for a temperature cut plane through the width/length of the board as well or even look at heat transfer coefficients on the PCB surface in regions of high via density. As is often the case with Ansys tools, the sky is the limit here.  

In summary, the SIwave interface can be both a great starting point and a great ending point for thermal simulation, depending on your needs. It makes setting up a complicated simulation very easy, albeit at the cost of some user flexibility, and it allows for several methods of viewing thermal results. These include a smooth slider-bar visualization for cut-plane temperatures and a dynamic mouse probe for checking temperature values in the top-down 2D view. Since SIwave makes use of the full Icepak solver in the background, we can also access a whole lot of additional information by simply opening the existing Icepak solution in the full EDT Icepak interface after a solution has been generated. This gives us access to new thermal solution variables, variables from the fluid portion of our solutions, and new ways to plot and visualize all this information. The combination of SIwave and EDT Icepak also provides us with the opportunity to run an initial set of thermal simulations for relatively simple setups and then build on top of those with more complex boundary conditions or geometry configurations, whether we need greater detail or want to try out some more advanced cooling scenarios.

An Ansys Licensing Tip – ANSYSLMD_LICENSE_FILE

Most Ansys users make use of floating licensing setups, and I would say the majority of those actually use licenses that are hosted remotely on their network. Within this licensing scheme, there are quite a few different tools and utilities that we can use to specify where we pull our licenses from. One of the methods that is making a comeback (in my recent experience), in terms of troubleshooting success and overall reliability, is specifying the environment variable ANSYSLMD_LICENSE_FILE.

This variable allows you to point directly towards one or more license servers using a port@address definition for the FlexNet port. With just this defined, the interconnect port will default to 2325, but if your server setup requires another interconnect port then you can also specify this using the ANSYSLI_SERVERS environment variable with the same format.
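As a concrete sketch, the variables could be set like this (shown here from Python for the current process, though a system-wide environment variable is the more typical choice). The server hostnames are placeholders, and 1055 is the default FlexNet port for the Ansys license manager:

```python
import os

# Placeholder hostnames; separate multiple servers with ";" on Windows
# or ":" on Linux (a single port@host entry needs no separator).
os.environ["ANSYSLMD_LICENSE_FILE"] = "1055@licserv1;1055@licserv2"

# Only needed when the Licensing Interconnect uses a non-default port;
# with ANSYSLMD_LICENSE_FILE alone, the interconnect defaults to 2325.
os.environ["ANSYSLI_SERVERS"] = "2325@licserv1;2325@licserv2"

print(os.environ["ANSYSLMD_LICENSE_FILE"])  # 1055@licserv1;1055@licserv2
```

Because this bypasses ansyslmd.ini entirely, remember that nothing set this way will show up in the "Ansys Client License Settings" utility.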

The downside is that this is a completely separate license server specification from the typical ansyslmd.ini approach, so any values specified this way will not be visible in the “Ansys Client License Settings” utility. On the upside, this is a completely separate license server specification! Meaning, if there are permission issues associated with ansyslmd.ini, or the other license utilities experienced some unknown errors on installation, this may be able to circumvent those issues entirely.

Also, for more advanced setups this can be used to assign specific license servers to individual users on a machine or to potentially help with controlling the priority of license access if multiple license servers are present. Anyway, this may be worth looking into if you encounter issues with client-side licensing!

Welcome to a New Era in Electronics Reliability Simulation

Simulation itself is no longer a new concept in engineering, but individual fields, applications, and physics are continually improved upon and integrated into the engineer's toolbox. Many times, these are incremental additions to a particular solver's capabilities or a more specialized method of postprocessing; occasionally, however, they take the form of new cross-connections between separate tools or even an entirely new piece of software. As a result of all this, Ansys has now reached critical mass for its solution space surrounding Electronics Reliability. That is, we can essentially approach an electronics reliability problem from any major physics perspective that we like.

So, what is Electronics Reliability and what physics am I referring to? Great question, and I’m glad you asked – I’d like to run through some examples of each physics and their typical use-case / importance, as well as where Ansys fits in. Of course, real life is a convoluted Multiphysics problem in most cases, so having the capability to accommodate and link many different physics together is also an important piece of this puzzle.

Running down the list, we should perhaps start with the most obvious category given the name – Electrical Reliability. In a broad sense, this encompasses all things related to electromagnetic fields as they pertain to the transmission of both power and signals. While the electrical side of this topic is not typically in my wheelhouse, it is relatively straightforward to understand the basics around a couple of key concepts: Power Integrity and Signal Integrity.

Power integrity, as its name suggests, is the idea that we need to maintain certain standards of quality for the electrical power in a device/board/system. While some kinds of electronics are robust enough that they will continue to function even under large variations in supplied voltage or current, there are also many that rely on extremely regular power supplies that vary only within narrow bounds. Even if we're looking at a single PCB (as in the image below), in today's technological environment it will no doubt have electrical traces mapped throughout it, as well as multiple devices that operate under their own specified electrical conditions.

Figure 1: An example PCB with complex trace and via layouts, courtesy of Ansys

If we were determined to do so, we could certainly measure trace lengths, widths, thicknesses, etc., and make some educated guesses for the resulting voltage drops to individual components. However, considerably more effort would need to be made to account for bends, corners, or variable widths, and that would still completely neglect any environmental effects or potential interactions between traces. It is much better to be able to represent and solve for the entire geometry at once using a dedicated field solver – this is where Ansys SIwave or Ansys HFSS typically come in, giving us the flexibility to accurately determine the electrical reliability, whether we’re talking about AC or DC power sources.
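To make that "educated guess" concrete, here is the kind of back-of-the-envelope IR-drop estimate a field solver replaces, assuming a straight trace of uniform cross-section with purely illustrative dimensions:

```python
# Hand-calculated IR drop for one straight trace; a field solver like
# SIwave additionally accounts for bends, width changes, and
# trace-to-trace coupling that this simple estimate ignores.
RHO_CU = 1.68e-8  # copper resistivity at room temperature, ohm*m

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """R = rho * L / A for a rectangular cross-section."""
    return rho * length_m / (width_m * thickness_m)

# Illustrative trace: 50 mm long, 0.2 mm wide, 35 um (1 oz copper) thick.
r = trace_resistance(0.050, 0.2e-3, 35e-6)
print(f"R = {r * 1e3:.0f} mOhm, drop at 2 A = {2.0 * r:.2f} V")
# -> R = 120 mOhm, drop at 2 A = 0.24 V
```

Even this idealized case shows how quickly a narrow trace can eat into the voltage budget of a low-voltage rail, before any geometric complexity is considered.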

Signal integrity is very much related, except that “signals” in this context often involve different pathways, less energy, and a different set of regulations and tolerances. Common applications involve Chip-signal modeling and DDRx virtual compliance – these have to do with not only the previous general concerns regarding stability and reliability, but also adherence to specific standards (JEDEC) through virtual compliance tests. After all, inductive electromagnetic effects can still occur over nonconductive gaps, and this can be a significant source of noise and instability in cases where conductive paths (like board traces or external connections) cross or run very near each other.

Figure 2: Example use-cases in virtual compliance testing, courtesy of Ansys

Whether we are looking at timings between components, transition times, jitter, or even just noise, HFSS and SIWave can both play roles here. In either case, being able to use a simulation environment to confirm that a certain design will or will not meet certain standards can provide invaluable feedback to the design process.

Other relevant topics to Electrical Reliability may include Electromagnetic Interference (EMI) analysis, antenna performance, and Electrostatic Discharge (ESD) analysis. While I will not expand on these in great detail here, I think it is enough to realize that an excellent electrical design (such as for an antenna) requires some awareness of the operational environment. For instance, we might want to ensure that our chosen or designed component will adequately function while in the presence of some radiation environment, or maybe we would like to test the effectiveness of the environmental shielding on a region of our board. Maybe, there is some concern about the propagation of an ESD through a PCB, and we would like to see how vulnerable certain components are. Ansys tools provide us the capabilities needed to do all of this.

The second area of primary interest is Thermal Reliability. As just about anyone who has worked with or even used electronics knows, electronics generate some amount of heat while in use. Of course, the quantity, density, and distribution of that heat can vary tremendously depending on the exact device or system in question, but this heat will ultimately result in a rise in temperature somewhere. Thermal reliability basically boils down to realizing that the performance and function of many electrical components depend on their temperature. Whether it is simply a matter of accounting for a change in electrical conductivity as temperature rises or a hard limit of functionality for a particular transistor at 150 °C, acknowledging and accounting for these thermal effects is critical when considering electronics reliability.

This is a problem with several potential solutions depending on the scale of interest, but generally we cover the package/chip, board, and full-system levels. At the component/chip level, a designer will often want to provide some package-level specs for OEMs so that a component can be properly scoped in a larger design. Ansys Icepak has toolkits available to help with this process, whether it is simplifying a 3D package down to a detailed network thermal model or identifying the most critical hot spot within a package based on a particular heat distribution. Typically, network models are generated from temperature measurements taken on a sample in a standardized JEDEC test chamber, but Icepak can assist by automatically generating these test environments, as below, and then using simulation results to extract well-defined θJB and θJC values for the package under test.

Figure 3: Automatically generated JEDEC test chambers created by Ansys Icepak, courtesy of Ansys
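The network-model values extracted from such a chamber follow the standard JEDEC definitions. As a minimal sketch with made-up readings:

```python
def junction_resistance(t_junction_c, t_ref_c, power_w):
    """Thermal resistance from junction to a reference point, in degC/W.

    With the case temperature as the reference this is theta-JC; with
    the board temperature it is theta-JB, matching the values extracted
    from a (real or simulated) JEDEC test chamber.
    """
    return (t_junction_c - t_ref_c) / power_w

# Made-up chamber readings for a 5 W part:
tj, tc, tb = 85.0, 60.0, 70.0  # junction, case, board temperatures, degC
print(junction_resistance(tj, tc, 5.0))  # theta-JC = 5.0 degC/W
print(junction_resistance(tj, tb, 5.0))  # theta-JB = 3.0 degC/W
```

The simulation route simply replaces the physical thermocouple readings with field-solution temperatures at the same reference locations.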

On the PCB level of detail, we are likely interested in how heat moves across the entire board from component to component or out to the environment. Ansys Icepak lets us read in a detailed ECAD description for said PCB and process its trace and via definitions into an accurate thermal conductivity map that will improve our simulation accuracy. After all, two boards with identical sizing and different copper trace layouts may conduct heat very differently from each other.

Figure 4: Converting ECAD information into thermal conductivity maps using Ansys Icepak, courtesy of Ansys
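The idea behind such a conductivity map can be sketched with a simple rule-of-mixtures estimate for a single tile of a single layer. Icepak's actual mapping is resolved tile-by-tile from the real artwork; the material values below are rough textbook numbers:

```python
def layer_conductivity(metal_fraction, k_cu=400.0, k_diel=0.3):
    """Rule-of-mixtures conductivity estimate for one tile of a PCB layer.

    In-plane, copper and dielectric conduct in parallel; through-plane,
    they act in series. Conductivities are rough textbook values in
    W/(m*K) for copper (~400) and FR-4 (~0.3).
    """
    phi = metal_fraction
    k_inplane = phi * k_cu + (1.0 - phi) * k_diel
    k_through = 1.0 / (phi / k_cu + (1.0 - phi) / k_diel)
    return k_inplane, k_through

# A tile that is 40% copper by area conducts hundreds of times better
# in-plane than through-plane, which is why trace layout matters so much.
k_ip, k_tp = layer_conductivity(0.4)
print(f"in-plane ~{k_ip:.1f} W/(m*K), through-plane ~{k_tp:.2f} W/(m*K)")
```

Two boards with the same stack-up but different copper fractions per tile end up with very different conductivity maps, which is exactly the effect the ECAD import captures.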

On the system level of thermal reliability, we are likely looking at the effectiveness of a particular cooling solution on our electronic design. Icepak makes it easy to include the effects of a heat exchanger (like a coldplate) without having to explicitly model its computationally expensive geometry by using a flow network model. Also, many of today’s electronics are expected to constantly run right up against their limit and are kept within thermal spec by using software to throttle their input power in conjunction with an existing cooling strategy. We can use Icepak to implement and test these dynamic thermal management algorithms so that we can track and evaluate their performance across a range of environmental conditions.

The next topic that we should consider is that of Mechanical Reliability. Mechanical concepts tend to be a little more intuitive and relatable than the other two due to their hands-on nature, though the exact details behind the cause and significance of stresses in materials are of course more involved. In the most general sense, stress is a result of applying force to an object. If this stress is high compared to what the material allows, then bad things tend to happen – like permanent deformation or fracture. For electronic devices consisting of many materials, small structures, and particularly delicate components, we have once again surpassed what can be reasonably accomplished with hand calculations. Whether we are looking at an individual package, the integrity of an entire PCB, or the stability that a rigid housing will provide to a set of PCBs, Ansys has a solution. We might use Ansys Mechanical to look at manufacturing allowances for the permissible force used while mounting a complicated leaded component onto a board, as seen below. Or maybe we will use mechanical simulation to find the optimal positioning of leads on a new package such that its natural vibrational frequencies fall outside the range of normal ambient excitation.

Figure 5: A surface component with discretely modeled leads, courtesy of Ansys

At the PCB level, we face many of the same detail-oriented challenges around representing traces and vias that have been mentioned for the electrical applications. They may not be quite as critical and are more easily approximated in some ways, but that does not change the fact that copper traces are mechanically quite different from the resin composites often used as the substrate (like FR-4). Ansys tools like Sherlock provide best-in-class PCB modeling on this front, allowing us to directly bring in ECAD models with full trace and component detail and then model them mechanically at several different levels depending on the exact need. Automating a material-property averaging scheme based on the local density of traces may be sufficient if we are looking at the general bending behavior of a board, but we can take it to the next level by explicitly modeling traces as "reinforcement" elements. This brings us to a level of detail where we can much more reliably look at the stresses present in individual traces, so that we can make good design decisions to reduce the risk of traces peeling or delaminating from the surface.

Figure 6: Example trace mapping workflow and methods, courtesy of Ansys
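The "material property averaging scheme" mentioned above can be sketched with the classical Voigt/Reuss bounds. The moduli here are rough textbook values, and Sherlock's actual trace mapping is considerably more detailed:

```python
def effective_modulus(copper_fraction, e_cu=110e9, e_fr4=22e9):
    """Voigt (uniform-strain) and Reuss (uniform-stress) bounds, in Pa.

    The true stiffness of a mixed trace/substrate region lies between
    these bounds; e_cu and e_fr4 are rough textbook elastic moduli for
    copper and FR-4.
    """
    phi = copper_fraction
    voigt = phi * e_cu + (1.0 - phi) * e_fr4
    reuss = 1.0 / (phi / e_cu + (1.0 - phi) / e_fr4)
    return voigt, reuss

# A board region with 30% copper coverage:
upper, lower = effective_modulus(0.3)
print(f"effective modulus between {lower / 1e9:.1f} and {upper / 1e9:.1f} GPa")
```

Density-based averaging of this sort is cheap and good enough for global bending behavior; the reinforcement-element approach takes over when individual trace stresses are the question.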

Beyond just looking at possible improvements in the design process, we can also make use of Ansys tools like LS-DYNA or Mechanical to replicate testing or accident conditions that an existing design could be subjected to. As a real-world example, many of us are all too familiar with the occasional consequences of accidentally dropping our smartphones – Ansys is used to test designs against these kinds of shock events, where impact against a hard surface can result in high stresses in key locations. This helps us understand where to reinforce a design to protect against the worst damage, or even which angle of impact is most likely to cause an operational failure.

As the finale for all of this, I come back to my first comment about reality being a complex Multiphysics problem. Many of the previous topics are not truly isolated to their respective physics (as much as we often simplify them as such), and this is one of the big ways in which the Ansys ecosystem shines: comprehensive Multiphysics. For the topic of thermal reliability, I simply stated that electronics give off heat. This may be obvious, but that heat is not just a magical result of the device being turned on; it is a physical and calculable result of the actual electrical behavior. Indeed, this is the exact kind of result that we can extract from one of the relevant electronics tools. An HFSS solution will provide us with not only the electrical performance of an antenna but also the three-dimensional distribution of heat that is consequently produced. Ansys lets us very easily feed this information into an Icepak simulation, which can then give us far more accurate results than a typical uniform-heat-load assumption provides.

Figure 7: Coupled electrical-thermal simulation between HFSS and Icepak, courtesy of Ansys

If we find that our temperatures are particularly high, we might then decide to bring these results back into HFSS to locally change material properties as a function of temperature to get an even more accurate set of electrical results. It could be that this results in an appreciable shift in our antenna’s frequency, or perhaps the efficiency has decreased, and aspects of the design need to be revisited. These are some of the things that we would likely miss without a comprehensive Multiphysics environment.

On the more mechanical side, the effects of thermal conditions on stress and strain are very well understood at this point, but there is no reason we could not use Ansys to bring the electrical physics alongside this established thermal-mechanical behavior. After all, what is a better representation of the real physics involved than using SIwave or HFSS to model the electrical behavior of a PCB, bringing those results into an Icepak simulation as a heat load to test the performance of a cooling loop or heat sink, and then using at least some of those thermal results to look at stresses not only through the PCB as a whole but also through individual traces? Not a whole lot at this moment in time, I would say.

The extension we can make on these examples is that they have, by and large, been representative cases of how an electronics device responds to a particular event or condition, with reliability judged from that single set of results, however many physics might be involved. There is one more piece of the puzzle we have access to that interweaves itself throughout the Multiphysics domain: Reliability Physics. This is mostly relevant to electronics reliability when considering how different events, or even just repetitions of the same event, stack together and accumulate to contribute towards some failure in the future. An easy example of this is a plastic hinge or clip that you might find on any number of inexpensive products: flexing a thin piece of plastic like in these hinges can provide a very convenient method of motion for quite some time, but that hinge will gradually accumulate damage until it inevitably cracks and fails. Every connection within a PCB is susceptible to this same kind of behavior, whether it is the laminations of the PCB itself, the components soldered to the surface, or even the individual leads on a component. If our PCB is mounted on the control board of a bus, satellite, or boat, there will be some vibrations and thermal cycles associated with its life. A single one of these events may be of much smaller magnitude, and seemingly negligible compared to something dramatic like a drop test, and yet they can still add up to the point of being significant over a period of months or years.
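The bookkeeping behind this kind of damage accumulation is classically expressed with Miner's rule. The event counts below are invented for illustration, and real fatigue models (like those used in lifecycle tools) are far more involved:

```python
def miner_damage(events):
    """Linear (Miner's rule) damage: the sum of n_i / N_i over event types.

    Each entry pairs the cycles actually experienced with the cycles to
    failure at that load level; failure is predicted once damage >= 1.0.
    """
    return sum(n / n_fail for n, n_fail in events)

# Invented life profile for one solder joint:
life = [
    (200, 5_000),  # thermal cycles in service
    (1e6, 5e7),    # vibration cycles during transport
    (2, 400),      # handling/assembly bend events
]
print(f"accumulated damage: {miner_damage(life):.3f}")  # 0.065
```

No single entry looks alarming on its own, yet together they consume a meaningful fraction of the joint's life, which is exactly the point of tracking events across the whole lifecycle.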

This is exactly the kind of thing that Ansys Sherlock proves invaluable for: letting us define and track the effect of events that may occur over a PCB’s entire lifecycle. Many of these will revolve around mechanical concepts of fatigue accumulating as a result of material stresses, but it is still important to consider the potential Multiphysics origins of stress. Different simulations will be required for each of mechanical bending during assembly, vibration during transport, and thermal cycling during operation, yet each of these contributes towards the final objective of electronics reliability. Sherlock will bring each of these and more together in a clear description of which components on a board are most likely to fail, how likely they are to fail as a function of time, and which life events are the most impactful.

Figure 8: Example failure predictions over the life cycle of a PCB using Ansys Sherlock, courtesy of Ansys

Really, what all of this comes down to is that when we design and create products, we generally want to make sure that they function in the way we intend them to. That might be due to personal pride in our profession or just the desire to maximize profit by minimizing the costs associated with a component failure; either way, in the end it makes sense to anticipate and try to prevent the failures that might occur under normal operating conditions.

For complex problems like electronics devices, there are many physics all intimately tied together in the consideration of overall reliability, but the Ansys ecosystem of tools allows us to approach these problems in a realistic way. Whether we’re looking at the electrical reliability of a circuit or antenna, the thermal performance of a cooling solution or algorithm, or the mechanical resilience of a PCB mounted on a bracket, Ansys provides a path forward.

If you have any questions or would like to learn more, please contact us at info@padtinc.com or visit www.padtinc.com.

Setting up and Solving a PCB and Enclosure for Thermal Simulation in Ansys Icepak Electronic Desktop

The thought of setting up and running a complex PCB and Enclosure thermal model was something that used to strike fear in the heart of engineers. That is no longer true. In this video, we step through the process of importing, setting up, and solving a PCB thermal simulation.

If you have any questions or would like to learn more, please contact us at info@padtinc.com or visit www.padtinc.com.

Using Ansys Icepak Results in Ansys Mechanical

With Icepak now falling under the umbrella of Electronics products in the Ansys Pro Premium Enterprise licensing scheme, it is easier than ever to obtain conjugate heat transfer simulation results without a dedicated Fluids license. Because of this, we have received multiple requests regarding methods to transfer Icepak’s results to a Workbench environment for more accurate thermal and mechanical results. So, without further ado, I will outline the procedure for four different methods along with their general use-cases.

1: Temperature from Classic Icepak

The first, and most straightforward, method is simply transferring body temperature directly from the Icepak (Classic) Workbench application. This may be the preferred method for the majority of use-cases where getting thermal CHT results into a mechanical project is the goal. The Icepak node needs to be solved as normal, and then the solution can simply be dragged over to the setup node of another project, such as steady state thermal or static structural. Once this has been linked and updated, the transferred body temperatures are accessed through an “Imported Load” folder where the temperatures for individual bodies can be mapped over. The benefits are that, as long as the Icepak simulation is set up as needed, you won’t need to re-solve anything on the thermal side, and no extra data manipulation is required on the user’s end.

2: Heat Transfer Coefficients from Classic Ansys Icepak

The second method that sits natively within Workbench involves mapping heat transfer coefficients onto surfaces. This of course means that the thermal problem must be solved again, but it does provide extra accuracy over uniform HTC approximations, and some extra flexibility for recalculating body temperatures that result from changing power input conditions. This might be the desired approach if you are working with a forced flow and are looking at thermal stress results across a range of CPU loads, for example. HTC coordinate maps can be exported from Classic Icepak through the “Full Report” command with “Only summary information” disabled. 

The complicating factor for this method is that the file format and information is not compatible with Workbench for External Data mapping in its default form.

I wrote a simple Python script for this purpose – it reads in the HTC coordinate data, makes all the values positive, rewrites it as a CSV, and adds the necessary reference (ambient) temperature column. It is important to note here that the HTC sign reported by Icepak can be in error. The sign is determined by the direction of heat transfer, which is reported without consideration of the solid body’s surface normal direction. So, for entirely convex shapes the sign will be correct, but for more complicated structures like heatsinks, with surfaces facing every which way, the signs will be inconsistent. Once this is done, each column needs to be correctly associated in the External Data definition and then mapped to the setup of your thermal simulation. In Mechanical, this causes an Imported Load to show up under Analysis, into which you then insert a Convection Coefficient. This can be scoped to individual faces, which should of course be among those chosen when exporting from Icepak.

For reference, the Python script may look something like:

############################################
import sys

import numpy as np

# Usage: python HTCCleanup.py inputfilepath AmbientTemperature
inputfile = sys.argv[1]
temperature = float(sys.argv[2])

# Bring in the Icepak full-report data (the first 25 lines are header text)
data = np.loadtxt(inputfile, skiprows=25)

# Make all HTCs positive (column 4 holds the HTC values)
data[:, 4] = np.abs(data[:, 4])

# Create and append a reference (ambient) temperature column
temparray = np.ones([len(data), 1]) * temperature
data = np.append(data, temparray, axis=1)

# Write out a CSV that Workbench External Data can read
np.savetxt('ProcessedReport.csv', data, delimiter=',', fmt='%.5e',
           header='Node#, x, y, z, HTC, TRef')
############################################

3: Temperatures from EDT Icepak

The Electronics Desktop version of Icepak is a newer and, in my opinion, more user-friendly environment for Icepak simulations. However, since it does not integrate directly with Workbench, mapping over result data for further structural simulation is not as straightforward. Luckily for us, other users have already addressed this obstacle via an ACT extension!

This is the “Write Thermal Loads” extension that can be downloaded for free from the Ansys App Store (https://catalog.ansys.com).

Once loaded, the interface looks like this:

Basically, this is a guided wizard that will export an external data file with coordinate defined temperatures according to the EDT bodies you select with the Wizard. The wizard also generates some workbench script files that can be used to automate the import process, but the most important part to know is that the temperature data file is brought in through External Data in essentially the same way as the aforementioned HTC file. For those who are familiar with the EDT environment and want to take thermal results straight into a structural analysis, this is the preferred approach.

4: HTCs from EDT Icepak

This is perhaps the most awkward (and advanced) workflow, but it provides the same flexibility as with Classic Icepak HTCs, without the potential error in HTC sign, and with the benefit of working in the EDT environment. The portion of this flow most likely to contain errors is generating the HTC data file, as we must make use of a normally inaccessible operation in the Field Calculator. After solving an Icepak project and generating results, we should first create a face list including all of the convection faces of interest – this is done by selecting those faces in the GUI and then using Modeler > List > Create > Face List to generate the list. Once the list is created, open the field calculator (Icepak > Fields > Calculator) and perform the following steps:

  1. Input > Quantity > Heat Transfer Coefficient
  2. Input > Geometry > Surface > Face List
  3. Scalar > Mean > Undo (ONE TIME)
  4. Output > Write

The single undo operation grants us access to the intermediate step where HTC data is accessible as a “SclSrf: SurfaceValue(Surface,HTC)” datatype; the same data can be reached by performing an undo after any other scalar operation on a scalar field definition (such as integration over a surface or body, or a min/max calculation).

The .fld file produced by the write operation is close to usable in Workbench, but it still must be slightly reformatted and appended with a reference temperature column. I would suggest a Python script very similar to the one used for Classic HTCs.
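As a rough illustration, such a cleanup script might look like the sketch below. It makes assumptions about the export: that the .fld data reduces to whitespace-separated x, y, z, HTC rows after a couple of header lines, so inspect your own file and adjust the header length and column indices to match.

```python
import numpy as np


def process_fld(inputfile, ref_temp, outputfile, header_rows=2):
    """Reformat an exported .fld file into a Workbench External Data CSV.

    Assumption: the file holds whitespace-separated "x y z HTC" rows
    after `header_rows` lines of text header. Adjust both to match
    your actual export.
    """
    data = np.loadtxt(inputfile, skiprows=header_rows)

    # Force HTCs positive, as with the Classic Icepak script
    data[:, 3] = np.abs(data[:, 3])

    # Append the reference (ambient) temperature column External Data expects
    tref = np.full((data.shape[0], 1), ref_temp)
    data = np.hstack([data, tref])

    np.savetxt(outputfile, data, delimiter=',', fmt='%.5e',
               header='x, y, z, HTC, TRef')
    return data
```

Wrapping the logic in a function (rather than reading sys.argv as before) makes it easy to batch-process several face lists from one driver script.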

One thing to note is that these files generated by EDT can end up being much larger than you may expect. This is because the field calculator essentially forms a list of all the surface elements on the surfaces you have specified, decomposes them into triangular elements if necessary, and then reports the HTC value of that triangular element at each connected corner node. So, you end up with 3 times as many data entries as you have surface elements, multiple HTCs reported for each node that touches more than one surface element, and a correspondingly large file for fine meshes on complicated geometries. Still, Workbench will interpret this whole thing fairly well, and you should end up with a good HTC map to make use of in Mechanical. 
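If file size becomes a problem, one optional preprocessing step is to average the duplicate entries down to a single HTC per node before mapping. This is not required, since Workbench handles the raw file, but a space-saving sketch might look like:

```python
import numpy as np


def collapse_duplicates(xyz, htc, decimals=6):
    """Average the HTC values reported at repeated coordinates.

    The field calculator writes one entry per element corner, so nodes
    shared by several surface elements appear multiple times. Rounding
    the coordinates before grouping guards against floating-point noise.
    """
    key = np.round(np.asarray(xyz, dtype=float), decimals)
    uniq, inverse = np.unique(key, axis=0, return_inverse=True)
    inverse = np.asarray(inverse).reshape(-1)  # 1-D across NumPy versions

    # Accumulate sums and counts per unique coordinate, then average
    sums = np.zeros(len(uniq))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, htc)
    np.add.at(counts, inverse, 1.0)
    return uniq, sums / counts
```

The averaged arrays can then be written out with the same CSV layout as before.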

An Ansys optiSLang Overview and Optimization Example with Ansys Icepak

Ansys optiSLang is one of the newer pieces of software in the Ansys toolkit, acquired along with the company Dynardo. Functionally, optiSLang provides a flexible top-level platform for all kinds of optimization. It is solver agnostic: as long as you can run a solver through batch files and produce text-readable result files, you can use that solver with optiSLang. There are also some very convenient integrations with many of the Ansys toolkit solvers in addition to other popular programs, including AEDT, Workbench, LS-DYNA, Python, MATLAB, and Excel, among many others.

While the ultimate objective is often to simply minimize or maximize a system output according to a set of inputs, the complexity of the problem can increase dramatically by introducing constraints and multiple optimization goals. And of course, the more complicated the relationships between variables are, the harder it gets to adequately describe them for optimization purposes.

Much of what optiSLang can do is a result of fitting the input data to a Metamodel of Optimal Prognosis (MOP) which is a categorical description for the specific metamodels that optiSLang uses. A user can choose one of the included models (Polynomial, Moving Least Squares, and Ordinary Kriging), define their own model, and/or allow optiSLang to compare the resulting Coefficients of Prognosis (COP) from each model to choose the most appropriate approach.

The COP is calculated in a similar manner as the more common COD or R2 values, except that it is calculated through a cross-validation process where the data is partitioned into subsets that are each used only for the MOP calculation or the COP calculation, not both. For this reason, it is preferred as a measure for how effective the model is at predicting unknown data points, which is particularly valuable in this kind of MOP application.
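To make the distinction concrete, here is a minimal sketch of the cross-validation idea for a one-variable polynomial metamodel. This is purely illustrative and is not optiSLang's actual MOP/COP implementation; the model type, fold count, and function names are my own assumptions.

```python
import numpy as np


def cop(x, y, degree=2, k=5, seed=0):
    """Cross-validated coefficient of prognosis for a 1D polynomial fit.

    Each fold is predicted by a model trained only on the other folds,
    so the score measures prediction of unseen points rather than fit
    quality (which is what the ordinary COD/R^2 measures).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)

    # Predict every sample using a model that never saw it
    pred = np.empty_like(np.asarray(y, dtype=float))
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)
        pred[fold] = np.polyval(coeffs, x[fold])

    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```

On data the model family can represent well, this score approaches 1; on data it cannot predict, it drops sharply even if the in-sample fit looks good.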

This whole process really shows where optiSLang’s functionality shines: workflow automation. Not only does optiSLang intelligently select the metamodel based on its applicability to the data, but it can also apply an adaptive method for the improvement of the MOP. It will suggest an automatic sampling method based on the number of problem variables involved, which can then be applied towards refining either the global COP or the minimum local COP. The automation of this process means that once the user has linked optiSLang to a solver with appropriate inputs/outputs and defined the necessary run methodology for optimization, all that is left is to click a button and wait.

As an example of this, we will run through a test case that utilizes the ability to interface with Ansys EDT Icepak.

Figure 1: The EDT Icepak project model.

For our setup, we have a simple board mounted with bodies representing three 2 W RAM modules and two 10 W CPUs with attached heatsinks. The entire board is contained within an air enclosure, whose boundary conditions are defined as walls with two parametrically positioned circular inlets/outlets. The inlet is a fixed mass flow rate surface, and the outlet is a zero-pressure boundary. In our design, we permit the y and z coordinates of the inlet and outlet to vary, and we will be searching for the configuration that minimizes the resulting CPU and RAM temperatures.

The optiSLang process generally follows a series of drag-and-drop wizards. We start with the Solver Wizard which guides us through the options for which kind of solver is being used: text-based, direct integrations, or interfaces. In this case, the Icepak project is part of the AEDT interface, so optiSLang will identify any of the parameters defined within EDT as well as the resulting report definitions.  The Parametric Solver System created through the solver wizard then provides the interfacing required to adjust inputs while reading outputs as designs are tested and an MOP is generated.

Figure 2: Resulting block from the Solver wizard with parameters read in from EDT.

Once the parametric solver is defined, we drag and drop in a sensitivity wizard, which starts the AMOP study.  We will start with a total of 100 samples; 40 will be initial designs, and 60 will be across 3 stages of COP refinement with all parameter sets sampled according to the Advanced Latin Hypercube Sampling method.
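For reference, the plain (non-advanced) Latin hypercube idea can be sketched in a few lines: each variable's range is cut into as many bins as there are samples, and each bin is used exactly once. optiSLang's Advanced variant additionally minimizes spurious correlation between columns, which this sketch does not attempt.

```python
import numpy as np


def latin_hypercube(n_samples, n_vars, rng=None):
    """Basic Latin hypercube sample on the unit cube.

    Each variable's range is split into n_samples equal bins, and each
    bin receives exactly one sample.
    """
    rng = np.random.default_rng(rng)

    # Row i starts in bin i for every column
    samples = (rng.random((n_samples, n_vars)) +
               np.arange(n_samples)[:, None]) / n_samples

    # Shuffle each column independently so bins aren't correlated row-wise
    for j in range(n_vars):
        rng.shuffle(samples[:, j])
    return samples
```

Scaling each column from [0, 1) to a parameter's actual range is then a simple linear map.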

Figure 3: Resulting block from the Sensitivity wizard with Advanced Latin Hypercube Sampling.

The results of individual runs are tabulated and viewable as the study is conducted, and at the conclusion, a description of the AMOP is provided with response surfaces, residual plots, and variable sensitivities. For instance, we can see that by using these first 100 samples, a decent metamodel with a COP of 90% is generated for the CPU temperature near the inlet. We also note that optiSLang has determined that none of the responses are sensitive to the ‘y’ position of the outlet, so this variable is automatically freed from further analysis.

Figure 4: MOP surface for the temperature of Chip1, resulting from the first round of sampling.

 If we decide that this CoP, or that from any of our other outputs, is not good enough for our purposes, optiSLang makes it very easy to add on to our study. All that is required is dragging and dropping a new sensitivity wizard onto our previous study, which will automatically load the previous results in as starting values. This copies and visually connects an output from the previous solver block to a new sensitivity analysis on the diagram, which can then be adjusted independently.

For simplicity and demonstration’s sake, we will add on two more global refinement iterations of 50 samples each. By doing this and then excluding 8 of our 200 total samples that appear as outliers, our “Chip1” CoP can be improved to 97%.

Figure 5: A refined MOP generated by including a new Sensitivity wizard.

Now that we have an MOP of suitable predictive power for our outputs of interest, we can perform some fast optimization. By initially building an MOP based on the overall system behavior, we are now afforded some flexibility in our optimization criteria. As in the previous steps, all that is needed at this point is to drag and drop an optimization wizard onto our “AMOP Addition” system, and optiSLang will guide us through the options with recommendations based on the number of criteria and initial conditions.

In this case, we will define three optimization criteria for thoroughness: a sum of both chip temperatures, a sum of all RAM temperatures, and an average temperature rise from ambient for all components with double weighting applied to the chips. Following the default optimization settings, we end up with an evolutionary algorithm that iterates through 9300 samples in about 14 minutes – far and away faster than directly optimizing the Icepak project. What’s more, if we decide to adjust the optimization criteria, we’ll only need to rerun this ~14 minute evolutionary algorithm.
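As a small illustration of how the third, weighted criterion can be constructed, here is one possible reading scripted with hypothetical temperatures standing in for the actual Icepak report values:

```python
# Hypothetical component temperatures (deg C) and ambient, purely to
# illustrate the criteria definitions -- the real values come from the
# Icepak report definitions in EDT.
ambient = 20.0
chips = {'Chip1': 56.7, 'Chip2': 58.1}
ram = {'RAM1': 41.2, 'RAM2': 40.8, 'RAM3': 42.5}

# Criterion 1: sum of both chip temperatures
chip_sum = sum(chips.values())

# Criterion 2: sum of all RAM temperatures
ram_sum = sum(ram.values())

# Criterion 3: average temperature rise from ambient, with the chip
# rises counted twice (double weighting)
rises = ([2.0 * (t - ambient) for t in chips.values()] +
         [t - ambient for t in ram.values()])
total_weight = 2 * len(chips) + len(ram)
weighted_ave = sum(rises) / total_weight
```

Each expression maps directly onto a criterion field in the optimization wizard; the weighting simply biases the optimizer toward cooling the chips.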

What we are most interested in for this example are the resulting Pareto fronts which give us a clear view of the tradeoffs between each of our optimization criteria. Each of the designs on this front can easily be selected through the interface, and their corresponding input parameters can be accessed.

Figure 6: Pareto front of the “Chipsum” and “TotalAve” optimization criteria.

Scanning through some of these designs also provides a very convenient way to identify which of our parameters are limiting the design criteria. Two distinct regions can be identified here: the left region is limited by how close we are allowing the inlet fan to be to the board, and the right region is limited by how close to the +xz corner of our domain the outlet vent can be placed. In a situation where these parameters were not physically constrained by geometry, this would be a good opportunity to consider relaxing parameter constraints to further improve our optimization criteria. 

As it is, we can now choose a design based on this Pareto front to verify with the full solver. After choosing a point in the middle of the “Limited by outlet ‘z’” zone, we find that our actual “ChipSum” is 73.33 vs. the predicted 72.78 and the actual “TotalAve” is 17.82 vs. the predicted 17.42. For this demonstration, we consider this small error as satisfactory, and a snapshot of the corresponding Icepak solution is shown below.

Figure 7: The Icepak solution of the final design. The inlet vent is aligned with the outlet side’s heatsink, and the outlet vent is in the corner nearest the heatsink. Primary flow through the far heatsink is maximized, while a strong recirculating flow is produced around the front heatsink.

The accuracy of these results is of course dependent not only on how thoroughly we constructed the MOP, but also on the accuracy of the 3D solution; creating mesh definitions that remain consistently accurate through parameterized geometry changes can be particularly tricky. Still, with all of this considered, optiSLang provides a great environment not only for managing optimization studies, but for displaying the results in such a way that you gain an improved understanding of the interaction between input/output variables and their optimization criteria.

Ansys Sherlock: A Comprehensive Electronics Reliability Tool

As systems become more complex, the introduction and adoption of detailed Multiphysics / Multidomain tools is becoming more commonplace. Oftentimes, these tools serve as preprocessors and specialized interfaces for linking together other base level tools or models in a meaningful way. This is what Ansys Sherlock does for Circuit Card Assemblies (CCAs), with a heavy emphasis on product reliability through detailed life cycle definitions.

In an ideal scenario, the user will have already compiled a detailed ODB++ archive containing all the relevant model information. For Sherlock, this includes .odb files for each PCB layer, the silkscreens, component lists, component locations separated by top/bottom surface, drilled locations, solder mask maps, mounting points, and test points. This would provide the most streamlined experience from a CCA design through reliability analysis, though any of these components can be imported individually.

These definitions, in combination with an extensive library of package geometries, allow Sherlock to generate a 3D model consisting of components that can be checked against accepted parts lists and material properties. The inclusion of solder mask and silkscreen layers also makes for convenient spot-checking of component location and orientation. If any of these things deviate from the expected or if basic design variation and optimization studies need to be conducted, new components can be added and existing components can be removed, exchanged, or edited entirely within Sherlock.

Figure 1: Sherlock’s 2D layer viewer and editor. Each layer can be toggled on/off, and components can be rearranged.

While a few of the available analyses depend on just the component definitions and geometries (Part Validation, DFMEA, and CAF Failure), the rest are in some way connected to the concept of life cycle definitions. The overall life cycle can be organized into life phases, e.g. an operating phase, packaging phase, transport phase, or idle phase, which can then contain any number of unique event definitions. Sherlock provides support for vibration events (random and harmonic), mechanical shock events, and thermal events. At each level, these phases and events can be prescribed a total duration, cycle count, or duty cycle relative to their parent definition. On the Life Cycle definition itself, the total lifespan and accepted failure probability within that lifespan are defined for the generation of final reliability metrics. Figure 2 demonstrates an example layout for a CCA that may be part of a vehicle system containing both high cycle fatigue thermal and vibration events, and low cycle fatigue shock events.

Figure 2: Product life cycles are broken down into life phases that contain life events. Each event is customizable through its duration, frequency, and profile.

The remaining analysis types can be divided into two categories: FEA and part specification-based. The FEA based tests function by generating a 3D model with detail and mesh criteria determined within Sherlock, which is then passed over to an Ansys Mechanical session for analysis. Sherlock provides quite a lot of customization on the pre-processing level; the menu options include different methods and resolutions for the PCB, explicit modeling of traces, and inclusion or exclusion of part leads, mechanical parts, and potting regions, among others.

Figure 3: Left shows the 3D model options, the middle shows part leads modeled, and right shows a populated board.

Each of the FEA tests (Random Vibration, Harmonic Vibration, Mechanical Shock, and Natural Frequency) corresponds to an analysis block within Ansys Workbench. Once these simulations are completed, the results file is read back into Sherlock, and strain values for each component are extracted and applied to either Basquin or Coffin-Manson fatigue models as appropriate for each included life cycle event.
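For context, the two fatigue laws named above relate load amplitude to cycles to failure. The sketch below inverts each law for the cycle count; the material constants are placeholders for illustration only, since Sherlock manages the real per-material values internally.

```python
def basquin_cycles(stress_amp, sigma_f=900.0, b=-0.1):
    """High-cycle fatigue (Basquin): stress_amp = sigma_f * (2N)^b.

    Solved for N. sigma_f (fatigue strength coefficient, MPa) and the
    exponent b are hypothetical placeholder values.
    """
    return 0.5 * (stress_amp / sigma_f) ** (1.0 / b)


def coffin_manson_cycles(plastic_strain_amp, eps_f=0.3, c=-0.5):
    """Low-cycle fatigue (Coffin-Manson): strain_amp = eps_f * (2N)^c.

    Solved for N. eps_f (fatigue ductility coefficient) and the
    exponent c are hypothetical placeholder values.
    """
    return 0.5 * (plastic_strain_amp / eps_f) ** (1.0 / c)
```

The negative exponents capture the key behavior: small reductions in cyclic stress or plastic strain buy large increases in predicted life.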

Part specification tests include Component Failure Analysis for electrolytic and ceramic capacitors, Semiconductor Wearout for semiconductor devices, and CTE mismatch issues for Plated Through-Hole and solder fatigue. These analyses are much more component-specific, in the sense that an electrolytic capacitor has completely different failure modes than a semiconductor device, and including them allows a broad range of physics to be accounted for across the CCA.

The result from each type of analysis is ultimately a life prediction for each component in terms of a failure probability curve alongside a time to failure estimate. The curves for every component are then combined into a life prediction for the entire CCA under one failure analysis.

Figure 4: Analysis results for Solder Fatigue including an overview for quantity of parts in each score range along with a detailed breakdown of score for each board component.

Taking it one step further, the results from each analysis are then combined into an overall life prediction for the CCA that encompasses all the defined life events. From Figure 5, we can see that the life prediction for this CCA does not quite meet its 5-year requirement, and that the most troublesome analyses are Solder Fatigue and PTH Fatigue. Since Sherlock makes it easy to identify these as problem areas, we could then iterate on this design by reexamining the severity or frequency of applied thermal cycles or adjusting some of the board material choices to minimize CTE mismatch.

Figure 5: Combined life predictions for all failure analyses and life events.

Sherlock’s convenience for defining life cycle phases and events, alongside the wide variety of component definitions and failure analyses available, really cements its role as a comprehensive electronics reliability tool. As in most analyses, the quality of the results still depends on the quality of the input, but all the checks and cross-validations between components and life events that come with Sherlock’s preprocessing toolset really assist with this, too.

ANSYS Discovery Live: A Focus on Topology Optimization

For those who are not already familiar with it, Discovery Live is a rapid design tool that shares the Discovery SpaceClaim environment. It is capable of near real-time simulation of basic structural, modal, fluid, electronic, and thermal problems. This is done through leveraging the computational power of a dedicated GPU, though because of the required speed it will necessarily have somewhat less fidelity than the corresponding full Ansys analyses. Even so, the ability to immediately see the effects of modifying, adding, or rearranging geometry through SpaceClaim’s operations provides a tremendous value to designers.

One of the most interesting features within Discovery Live is the ability to perform Topology Optimization for reducing the quantity of material in a design while maintaining optimal stiffness for a designated loading condition. This can be particularly appealing given the rapid adoption of 3D printing and other additive manufacturing techniques where reducing the total material used saves both time and material cost. These also allow the production of complex organic shapes that were not always feasible with more traditional techniques like milling.

With these things in mind, we have recently received requests to demonstrate Discovery Live’s capabilities and provide some training in its use, especially for topology optimization. Given that Discovery Live is amazingly straightforward in its application, this also seems like an ideal topic to expand on in blog form alongside our general Discovery Live workshops!

For this example, we have chosen to work with a generic “engine mount” geometry that was saved in .stp format. The overall dimensions are about 10 cm wide x 5 cm tall x 5 cm deep, and we assume it is made out of stainless steel (though this is not terribly important for this demonstration).

Figure 1: Starting engine mount geometry with fixed supports and a defined load.

The three bolt holes around the perimeter are fixed in position, as if they were firmly clamped to a surface, while a total load of 9069 N (-9000 N in X, 1000 N in Y, and 500 N in Z) is applied to the cylindrical surfaces on the front. From here, we simply tell Discovery Live that we would like to add a topology optimization calculation onto our structural analysis. This opens up the ability to specify a couple more options: the way we define how much material to remove and the amount of material around boundary conditions to preserve. For removing material, we can choose to either reduce the total volume by a percent of the original or to remove material until we reach a specific model volume. For the area around boundary conditions, this is an “inflation” length measured as a normal distance from these surfaces, easily visualizable when highlighting the condition on the solution tree.
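As a quick sanity check, the quoted 9069 N total is just the vector magnitude of the three load components:

```python
import math

# Load components from the setup above (N)
fx, fy, fz = -9000.0, 1000.0, 500.0

# Resultant magnitude: sqrt(fx^2 + fy^2 + fz^2) ~ 9069 N
magnitude = math.sqrt(fx**2 + fy**2 + fz**2)
```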

Figure 2: Inflation zone shown around each fixed support and load surface.

Since I have already planned out what kind of comparisons I want to make in this analysis, I chose to set the final model volume to 30 cm³. After hitting the simulate button, we get to watch the optimization happen alongside a rough structural analysis. By default, we are provided with a result chart showing the model’s volume, which pretty quickly converges on our target volume. As with any analysis, the duration of this process is fairly sensitive to the fidelity specified, but with default settings this took all of 7 minutes and 50 seconds to complete on my desktop with a Quadro K4000.

Figure 3: Mid-optimization on the top, post-optimization on the bottom.

Once optimization is complete, there are several more operations available. To gain access to the optimized structure, we need to convert it into a model body. Either option for this produces a faceted body with the click of a button in the solution tree; the difference is just that the second also has a smoothing operation applied to it. One or the other may be preferable, depending on your application.

Figure 4: Converting results to faceted geometry

Figure 5: Faceted body post-optimization

Figure 6: Smoothed faceted body post-optimization

Though some rough stress calculations were made throughout the optimization process, the next step is typically a validation. Discovery Live makes this as simple as right-clicking on the optimized result in the solution tree and selecting the “Create Validation Solution” button. This essentially copies the newly generated geometry into a new structural analysis while preserving the previously applied supports and loads, allowing finer control over the fidelity of our validation while still giving a very fast confirmation of our results. Using maximum fidelity on our faceted body, we find that the resulting maximum stress is about 360 MPa, compared to our unoptimized structure’s 267 MPa, though of course our new material volume is less than half the original.

Figure 7: Optimized structure validation. Example surfaces that are untouched by optimization are boxed.

It may be that our final stress value is higher than what we find acceptable. At this point, it is important to note one of the limitations in version 2019R3: Discovery Live can only remove material from the original geometry; it does not add any. What this means is that any surfaces remaining unchanged throughout the process are important for maintaining structural integrity under the specified load. So, if we really want to optimize our structure, we should start with additional material in these regions to allow for more optimization flexibility.

In this case, we can go back to our original engine mount model in Discovery Live and use the integrated SpaceClaim tools to thicken our backplate and expand the fillets around the load surfaces.

Figure 8: Modified engine mount geometry with a thicker backplate and larger fillets.

We can then run back through the same analysis, specifying the same target volume, to improve the performance of our final component. Indeed, we find that after optimizing back down to a material volume of 30 cm³, our new maximum stress has been decreased to 256 MPa. Keep in mind that this is very doable within Discovery Live, as the entire modification and simulation process can be done in <10 minutes for this model.

Figure 9: Validated results from the modified geometry post-optimization.

Of course, once a promising solution has been attained in Discovery Live, we should export the model to run a more thorough analysis in Ansys Mechanical, but hopefully this provides a useful example of how to leverage this amazing tool!

One final comment is that while this example was performed in the 2019R3 version, 2020R1 has expanded Discovery Live’s optimization capability somewhat. Instead of only being allowed to specify a target volume or percent reduction, you can choose to allow a specified increase in structure compliance while minimizing the volume. In addition to this, there are a couple more knobs to turn for better control over the manufacturability of the result, such as specifying the maximum thickness of any region and preventing any internal overhangs in a specified direction. It is now also possible to link topology optimization to a general-purpose modal analysis, either on its own or coupled to a structural analysis. These continued improvements are great news for users, and we hope that even more features continue to roll out.

Icepak in Ansys Electronic Desktop – Why should you know about it?

The role of Ansys Electronics Desktop Icepak (hereafter referred to as Icepak, not to be confused with Classic Icepak) is in an interesting place. On the back end, it is a tremendously capable CFD solver built on the Ansys Fluent code. On the front end, it is an all-in-one pre- and post-processor streamlined for electronics thermal management, including the explicit simulation of fluid convection and its effects. In this regard, Icepak can be thought of as a system-level Multiphysics simulation tool.

One of the advantages of Icepak is its interface consistency with the rest of the Electronics Desktop (EDT) products. This not only results in a slick modern appearance but also provides a very familiar environment for the electrical engineers and designers who typically use the other EDT tools. While they may not already be intimately familiar with the physics and setup process for CFD/thermal simulations, being able to follow a very similar workflow certainly lowers the barrier to entry for accessing useful results. Even if complete adoption by these users is not practical, this same environment can serve as a happy medium for collaboration with thermal and fluids experts.

Figure 1: AEDT Icepak interface. The same ribbon menus, project manager, history tree, and display window as other EDT products.

So, beyond these generalities, what does Icepak actually offer for an optimized user experience over other tools, and what kinds of problems/applications are best suited for it?

The first thing that comes to mind for both of these questions is a PCB with attached components. Anyone who has looked inside a computer is likely familiar with motherboards covered in all kinds of little chips and capacitors and often dominated by a CPU mounted with a heatsink and fan. In most cases, this motherboard is enclosed within some kind of box (a computer case) with vents/filters/fans on at least some of the sides to facilitate controlled airflow. This is an ideal scenario for Icepak. The geometries of the board and its components are typically well represented by rectangular prisms and cylinders, and the thermal management of the system is strongly governed by the physics of conjugate heat transfer. For the case geometry, it may be more convenient to import this from a more comprehensive modeler like SpaceClaim and then take advantage of the tools built into Icepak to quickly process the important features.

Figure 2: A computer case with motherboard imported from SpaceClaim. The front and back have vents/fans while the side has a rectangular patterned grille.

For a CAD model like the one above, we may want to include some additional items like heatsinks, fan models, or simple PCB components. Icepak’s geometry tools include some very convenient parameterized functions for quickly constructing and positioning fans and heatsinks, in addition to the basic ability to create and manipulate simple volumes. There are also routines for extracting openings on surfaces, such as the rectangular vent arrays on the front and back as well as the patterned grille on the side. So, not only can you import detailed CAD from external sources, but you can also mix, match, and simplify it with Icepak’s native geometry, which streamlines the entire design and setup process. For an experienced user, the above model can be prepared for a basic simulation within just a matter of minutes. The resulting configuration, with an added heatsink, some RAM, and boundary conditions, could look something like this:

Figure 3: The model from Figure 2 after Icepak processing. Boundary conditions for the fans, vents, and grille have been defined. Icepak primitives have also been added in the form of a heatsink and RAM modules.
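Before committing to a full CFD run, a parameterized heatsink like the one added above also invites a quick back-of-envelope sanity check. The sketch below estimates the thermal resistance of a plate-fin heatsink from its fin parameters; all dimensions and the convection coefficient are illustrative assumptions of mine, not values taken from the model shown in the figures.

```python
# Rough plate-fin heatsink check, treating the fins as 100% efficient and
# ignoring spreading resistance in the base. All numbers are illustrative.

def plate_fin_resistance(n_fins, fin_h, fin_t, base_w, base_l, h_conv):
    """Thermal resistance (K/W) of a plate-fin heatsink to the local air."""
    fin_area = n_fins * 2 * fin_h * base_l                  # both sides of each fin
    gap_area = base_w * base_l - n_fins * fin_t * base_l    # exposed base between fins
    total_area = fin_area + max(gap_area, 0.0)
    return 1.0 / (h_conv * total_area)

# 20 fins, 30 mm tall, 1 mm thick, on a 60 x 60 mm base,
# with an assumed forced-convection coefficient of ~50 W/m^2K:
r = plate_fin_resistance(20, 0.030, 0.001, 0.060, 0.060, 50.0)
print(f"Estimated resistance: {r:.2f} K/W")
print(f"CPU rise at 20 W:     {20 * r:.1f} K above local air")
```

A number like this is only a starting point; the whole reason to run Icepak is that the actual convection coefficient and fin efficiency fall out of the CFD solution rather than being guessed.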

Monitor points can then be assigned to surfaces or bodies as desired; chances are that for a simulation like this, the temperature within the CPU is the most important. Additional temperature points for each RAM module or flow measurements for the fans and openings can also be defined. These points can all be tracked as the simulation proceeds to ensure that convergence is actually attained.

Figure 4: Monitoring chosen solution variables to ensure convergence.
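The idea behind watching these monitors is simple: a quantity is considered converged once it stops moving appreciably over a window of iterations. A minimal sketch of that logic, with an arbitrary window size and tolerance of my own choosing:

```python
# Sketch of a monitor-point convergence check: converged once the last
# `window` samples vary by less than rel_tol relative to the latest value.
# Window size and tolerance are illustrative choices, not Icepak defaults.

def is_converged(history, window=5, rel_tol=1e-4):
    if len(history) < window:
        return False
    recent = history[-window:]
    spread = max(recent) - min(recent)
    return spread <= rel_tol * abs(recent[-1])

# A CPU temperature monitor settling toward ~68.2 degC:
temps = [95.0, 80.1, 72.4, 69.5, 68.6, 68.3, 68.202, 68.201, 68.200, 68.200, 68.200]
print(is_converged(temps))       # True  - flat over the last five samples
print(is_converged(temps[:8]))   # False - still drifting at iteration 8
```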

For this simple system containing a 20 W CPU and 8 RAM modules at 2 W each, quite a few of our components are toasty and potentially problematic from a thermal standpoint.
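Those heat loads also set a first-order requirement on the case airflow before any CFD is involved: the fans must move enough air to carry 36 W at an acceptable bulk temperature rise. The allowed rise below is my own assumption, with air properties taken at roughly room conditions.

```python
# First-order airflow check for the article's 20 W CPU + 8 x 2 W RAM = 36 W.
# The permitted air temperature rise is an illustrative assumption.

CP_AIR = 1005.0   # J/(kg K), specific heat of air
RHO_AIR = 1.18    # kg/m^3 at ~25 degC

def required_flow_cfm(q_watts, delta_t):
    """Volumetric flow (CFM) needed to carry q_watts at bulk rise delta_t (K)."""
    m_dot = q_watts / (CP_AIR * delta_t)   # kg/s from Q = m_dot * cp * dT
    vol = m_dot / RHO_AIR                  # m^3/s
    return vol * 2118.88                   # m^3/s -> CFM

q_total = 20.0 + 8 * 2.0
for dt in (5.0, 10.0):
    print(f"dT = {dt:>4.1f} K -> {required_flow_cfm(q_total, dt):.1f} CFM")
```

Of course, this says nothing about whether that air actually reaches the hot components, which is exactly the question the Icepak solution answers.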

Figure 5: Post-processing with Icepak. Temperature contours are overlaid with flow velocities to better understand the behavior of the system.

With the power of a simulation environment in Icepak at our fingertips, we can now play around with our design parameters to improve the thermal management of this system! Want to see what happens when you block the outlet vents? Easy, select and delete them! Want to use a more powerful fan or try a new material for the motherboard or heatsink? Just edit their properties in the history tree. Want to spin around the board or try changing the number of fins on the heatsink? Also straightforward, although you will have to remesh the model. While these are the kinds of things that are certainly possible in other tools, they are exceptionally easy to do within an all-in-one interface like Icepak.

The physics involved in this example are pretty standard: solid-body conduction with conjugate heat transfer to a fluid domain using a k-omega turbulence model. Where Icepak really shines is in its ability to integrate with the other tools in the EDT environment. While we assumed that the motherboard was nothing more than a solid chunk of FR-4, this board could have been designed and simulated in detail with another tool like HFSS. The board, along with all of the power losses calculated during the HFSS analysis, could then have been directly imported into the Icepak project. This would allow each layer to be modeled with its own spatially varying thermal properties according to trace locations, as well as a very accurate spatial mapping of heat generation.
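To see why that layer-by-layer detail matters, consider the usual rule-of-mixtures estimate for a board's effective conductivity: in-plane, the copper and FR-4 conduct in parallel; through-plane, they act in series. The layer stack and copper coverage fractions below are illustrative assumptions, not outputs of any HFSS analysis.

```python
# Rule-of-mixtures effective conductivity of a layered PCB.
# Stack thicknesses and copper fractions are made-up illustrative values.

K_CU, K_FR4 = 385.0, 0.3   # W/(m K)

# (layer thickness in m, fraction of that layer which is copper)
stack = [
    (35e-6, 0.80),   # top signal layer
    (1.0e-3, 0.00),  # FR-4 core
    (35e-6, 0.95),   # ground plane
    (35e-6, 0.60),   # power layer with partial coverage
]

def layer_k(frac_cu):
    """In-plane conductivity of one layer: area-weighted parallel paths."""
    return frac_cu * K_CU + (1 - frac_cu) * K_FR4

def effective_k(stack):
    total_t = sum(t for t, _ in stack)
    k_in = sum(t * layer_k(f) for t, f in stack) / total_t        # layers in parallel
    k_thru = total_t / sum(t / layer_k(f) for t, f in stack)      # layers in series
    return k_in, k_thru

k_in, k_thru = effective_k(stack)
print(f"In-plane:      {k_in:.1f} W/mK")
print(f"Through-plane: {k_thru:.3f} W/mK")
```

The two orders of magnitude between the in-plane and through-plane values is exactly the anisotropy that a "solid chunk of FR-4" assumption throws away, and that a trace-aware import recovers.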

This is not at all to say that Icepak is limited to these kinds of PCB and CCA examples. These just tend to be convenient to think about and relatively easy to represent geometrically. Using Fluent as the solver provides a lot of flexibility, and there are many more classes of problems that could benefit from Icepak. On the low-frequency side, electric motors are a good example of a problem where electromagnetic and thermal behavior are intertwined. As voltage is applied to the windings, currents are induced and heat is generated. For larger motors, these currents, and consequently the associated thermal losses, can be significant. Maxwell is used to model the electromagnetic side for these types of problems, and the results can then be easily brought into an Icepak simulation. I have gone through just such an example rotor/stator/winding motor assembly model in Maxwell, where I then copied everything into an Icepak project to simulate the resulting steady temperature profile in a box of naturally convecting air.

Figure 6: An example half-motor that was solved in Maxwell as a magnetostatic problem and then copied over to Icepak for thermal analysis.

If better thermal management is found to be needed, extra features can then be added on the Icepak side as desired, such as a dedicated heatsink or an external fan. Only the components with loads mapped over from Maxwell need to remain unmodified.

On the high-frequency side, you may care about the performance of an antenna. HFSS can be used for the electromagnetic side, while Icepak can once again be brought in to analyze the thermal behavior. For a high-powered antenna, some components could very easily get hot enough for the material properties to appreciably change and for thermal radiation to become a dominant mode of heat transport. A 2-way automatic Icepak coupling is an excellent way to model this. Thermal modifiers may be defined for material properties in HFSS, and radiation is a supported physics model in Icepak. HFSS and Icepak can then be set up to solve alternately, automatically feeding each other new loads and boundary conditions until a converged result is attained.
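Conceptually, that 2-way coupling is a fixed-point iteration: the EM solve maps temperature to dissipated power, the thermal solve maps power back to temperature, and the pair is repeated until neither changes. The toy sketch below captures only the structure of that loop; the two "solvers" are lumped stand-ins with made-up coefficients, not calls into HFSS or Icepak.

```python
# Toy fixed-point analogue of a 2-way EM/thermal coupling. Conductor loss
# rises with temperature through copper resistivity; temperature rises with
# loss through a lumped thermal resistance. All numbers are illustrative.

ALPHA_CU = 0.0039   # 1/K, copper resistivity temperature coefficient
P_REF = 5.0         # W dissipated at the reference temperature (assumed)
T_REF = 20.0        # degC
T_AMB = 20.0        # degC
R_TH = 8.0          # K/W, lumped resistance to ambient (assumed)

def em_solve(temp):
    """Stand-in EM side: loss grows with temperature via resistivity."""
    return P_REF * (1 + ALPHA_CU * (temp - T_REF))

def thermal_solve(power):
    """Stand-in thermal side: lumped steady-state temperature for a loss."""
    return T_AMB + R_TH * power

temp = T_AMB
for it in range(50):
    power = em_solve(temp)
    new_temp = thermal_solve(power)
    if abs(new_temp - temp) < 1e-6:   # both fields stopped changing
        break
    temp = new_temp

print(f"Converged after {it} iterations: {temp:.2f} degC, {power:.3f} W")
```

Note that the converged loss is noticeably higher than the cold-start value, which is precisely the effect a one-way handoff would miss.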

What all of this really comes down to is the question: how easy is it for the user to set up a model that will produce the information they need? For these kinds of electronics questions, I believe the answer for Icepak is “extraordinarily easy”. While functional on its own merit, Icepak really shines when it comes to the ease of coupling thermal management analysis with the EM family of tools.