Managing a Subscription List with a Flow in Microsoft Power Automate

This is an unusual HOW-TO post for our blog. Most of the time, we post useful technical content about Ansys, Flownex, 3D Printing, Scanning, and product development. Lately, though, I’ve been on a no-code kick using Microsoft Power Automate and the flows you can create there. As I’ve learned the tool, I’ve found a lack of good resources similar to the type of content we like to do for our Ansys users, so I thought I’d break the mold and post about a simple flow that shows how to add and modify data in Microsoft Excel using the results of a Microsoft Form.

It all started with a virtual happy hour I started back at the beginning of the pandemic. I invited a handful of people that I’m used to seeing at Arizona tech community events. Over time, I invited more people, and the regulars invited their friends. The invite list got long. I also found that no one was asking to be taken off the list, even though many people never showed up.

I needed a subscribe and unsubscribe form that updated my list.

Rather than using a perfectly good and free online tool to manage the list, I decided to use this need as a reason to learn more about flows in MS Power Automate.

Here is what I wanted (the same logic is sketched in code right after this list):

  • Subscribe
    • Person goes to form, enters their email
    • Email is checked against list
    • If the email was on the list:
      • If the email is flagged as unsubscribed
        • Flip flag to subscribed
        • Send a success email
      • If the email was flagged as subscribed
        • Send an email to the person letting them know they are already subscribed
    • If the email was not on the list:
      • Add them to the bottom of the list
      • Send an email letting them know they were added
  • Unsubscribe
    • Person goes to form, enters their email
    • Email is located in list
    • If the email was on the list
      • The email is flagged as Unsubscribed (TRUE in second column)
    • If it was not on the list
      • Send an email letting them know that their email was not found.
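
Before we build it, here is that logic written out as a short Python sketch. This is purely illustrative: the list, lookup, and email functions below are placeholders that stand in for the Excel table, the “Get/Update a row” blocks, and the email blocks in the flow; they are not any Power Automate API.

```python
# Illustrative sketch of the subscribe/unsubscribe logic the flow implements.
# The "rows" list stands in for the Excel table; send_email() is a placeholder.

rows = [{"Email": "someone@example.com", "Unsubscribe": "false"}]

def send_email(to, subject):
    print(f"To: {to} | {subject}")  # placeholder for the flow's email block

def find_row(email):
    # "Get a row" / "Update a row" match the first row whose key column equals the value
    return next((r for r in rows if r["Email"] == email), None)

def subscribe(email):
    row = find_row(email)
    if row is None:                      # "Get a row" failed -> add to bottom of list
        rows.append({"Email": email, "Unsubscribe": "false"})
        send_email(email, "You have been added to the list")
    elif row["Unsubscribe"] == "true":   # Excel returns lowercase true/false
        row["Unsubscribe"] = "false"
        send_email(email, "You have been re-subscribed")
    else:
        send_email(email, "You are already subscribed")

def unsubscribe(email):
    row = find_row(email)
    if row is None:                      # "Update a row" failed -> run-after branch
        send_email(email, "We could not find your email on the list")
    else:
        row["Unsubscribe"] = "true"
        send_email(email, "You have been unsubscribed")
```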

So how do we do this in MS Flow? It is actually pretty simple. The basic concept behind flows is that you set a watch on some sort of event on a document or a form. Then you take the information from that event and do something with it in other Microsoft tools, and some third-party tools. All with no writing of code! You set up a flow chart and fill in forms.

The other thing I like about this example is it shows how to deal with errors and branch when something doesn’t go right.

I’m going to assume if you are reading this, you have a basic familiarity with the tool. If not, run through some basic tutorials and come back.

Before you start doing the flow, you need to create a subscribe form, an unsubscribe form, and an Excel spreadsheet. The forms just ask for an email.

The Subscribe Microsoft Form
The Unsubscribe Microsoft Form

Because flows work on tables, create a table with two columns. The first is for emails, and the second is for a flag on if they have asked to unsubscribe. You can have other fields on your forms and other columns in your table if you want more information, like company or names. For my happy hour, I just want emails. You can start with dummy emails or just your own. Save the file to a SharePoint site that you are part of.

The Table in Microsoft Excel

Unsubscribe

The unsubscribe is simpler, so let’s start there. My flow looks like this:

Let’s look at each block to understand how things work:

I start the flow when my Unsubscribe form is submitted. (If you have Office 365 and you are using a different form tool, stop and check out MS Forms. We have been very happy with it.) All you need to do is pick the form you want. Note, I changed the title with … > Rename so when I come back in 6 months, I can remember what is going on.

Each block creates output that can go to the next block. All that the form trigger does is return the ID for the response. So we need to now get the information that was submitted with a “Get response details” block:

Notice that you have to re-identify the form. It does not assume that the previous block is where the information is. So select the form again.

For the Response ID value, we will use the results from the trigger block. Any time you fill in a field that is not a dropdown, you get a popup that shows you information passed down from previous blocks. At this point, all we have is the response ID. Click on that to fill the form out. These chunks of information are called Dynamic Content and will have an icon next to their label that reflects the application the information came from.

Now that we have the submitted email address, we need to find it in our Excel table and update it. We use an Update a row block for that. Our goal is to set the value to TRUE for the unsubscribe flag.

Flows use files stored in SharePoint. So you first need to specify the site where you stored your Excel file, then the folder, then the file. All of these self-populate as you go.

Now, pick the table you want to update. The way this works is you specify a “key column” and a value to look for. The first row that has the supplied value in it gets updated. So we need to specify our “email” column and then the submitted email from the dynamic content.
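
If you were doing the same lookup in code, the key-column behavior would look something like this pandas sketch (the file name and columns are just examples that mirror my table, not anything the flow requires):

```python
import pandas as pd

# Hypothetical workbook with an "Email" key column and an "Unsubscribe" flag column
df = pd.read_excel("happy_hour_list.xlsx")

submitted = "guest@example.com"
matches = df.index[df["Email"] == submitted]

if len(matches) == 0:
    # Equivalent to the "Update a row" block failing: branch to the "not found" email
    print("Email not found")
else:
    # Only the first matching row is updated, mirroring the flow's behavior
    df.loc[matches[0], "Unsubscribe"] = "true"
    df.to_excel("happy_hour_list.xlsx", index=False)
```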

It auto-populates with the columns in the table, so we can see our two columns that can be updated. We will leave email alone and set Unsubscribe to TRUE.

Now, if that all works just fine, we want to send a confirmation email. If it doesn’t, because the flow could not find the email given, we want to send an email letting the person know that it didn’t work.

We use the failure of the “update a row” block as a way to decide which way to branch. First, we need to make the branch. Add the success email:

Put the submitted email in the To: box and put in a descriptive subject. I then explain what is going on in the body and include the email so they can see what they submitted. I also put a link to the subscribe form in case they want to get back on the list at some point.

So that is great; if all goes well, they are marked as unsubscribed and get an email. But if their email was not on the list already, we need to let them know. To do this, you create a parallel branch and set “Configure Run After” to branch for an error.

Click on the + and choose Add a parallel branch:

Do another email for that second option. I add in the body that they should use the email address that was in the last invite they got.

Now is the branching part. If you leave it like it is, the flow will send both emails if the update is successful and nothing if it fails. We need to tell the “fail email” to only send on a failure.

Do this by clicking on the … then choosing “Configure run after.”

That brings up a form that lets you specify when the block should be run based on the exit status of the previous block. Check only “has failed” and click Done.

Notice how the down arrow leading to the block is reddish. This tells you that it only runs if the previous block did not run successfully.

And that is a simple unsubscribe flow! I tried it out by unsubscribing myself and then using an email that is not in the list.

Subscribe: More logic and branching

For subscribing, we are going to add a row to our table, and we also need to check that the email is not already on the list, which lets us use some “has failed” branching. But we also want to flip Unsubscribed = TRUE back to FALSE for anyone who is already in the list and wants to re-subscribe.

Here is the flow:

The first two blocks are the same. But the third block is a Get a row block. It grabs the contents of the first row that matches the supplied Key Value for the Key Column. The input is the same, but the output is a list of the row values rather than an updated row. So we supply the Email column and the email address given.

For the case where it finds the row (we will come back and branch on the failure), we need to first check to see if the Unsubscribe flag is TRUE. So we insert a Condition Block. We put the returned value for Unsubscribe in the first field, set the condition to “is equal to,” and set the third field to true. See in the Dynamic Content dialog how the row results show up?

Note: Excel returns all lower case “true” or “false.” That tripped me up. So use all lower case.

That block generates an If yes and an If no branch.

For the If yes branch, we need to change the value of the row to FALSE and then send an email saying that the person has been resubscribed. So in the If yes block, we first add an Update Row block:

We do everything just like the unsubscribe changing of the row, except the value is now FALSE.

Then we add a new email, letting them know they were turned back on:

Now, if someone tried to subscribe and was already on the list and was subscribed, we should let them know with an email. So we add another email block into the If no branch.

Next, we need to go back to handle the case when looking for the row of data showed that they were not already in the list.

We add a parallel Branch that points to an “Add a Row into a Table” block.

The block looks a lot like the other two blocks we have used for Excel, except there is no Key Column or Value. You point to the table, then supply the values you want added. For our flow, that is the email and FALSE for Unsubscribe.

Remember, we only add the row when it was not already there, that is, when the “get a row” block failed. So use “… > Configure run after” and set it to “has failed” only.

Then add a success email after that block:

I have also added an email to me if the attempt to add a row failed. That is not necessary; if a flow fails, you get an email. But I thought it was the right thing to do. So I added one more email block parallel to the success email:

Remember to set its “Configure run after” to only execute on a failure.

And it all works! Or seems to so far. And not one line of code.

Final Thoughts

One thing I didn’t do was BCC or CC myself on the emails. If you click “Show advanced options” at the bottom of the email blocks, they let you do a lot more, including BCC and CC addresses.

I could have also created a single form and had a check box for subscribing or unsubscribing. Then added a Condition block to branch based on that value.

As mentioned above, I could have done this with a dozen different free or paid tools. But this was a great way to up my Flow skills for something more serious, like the tool we are building to manage NDA agreements or our project numbers. Powerful stuff.

Or you can build your own list as an excuse to start your own Happy Hour.

PADT has developed expertise in many areas since our founding in 1994, and automating processes and integrating different tools are two areas demonstrated in this example. Please reach out if you need to make your workflows more efficient or need simulation, design, or 3D Printing tools, training, consulting, or services.

Cheers!

Press Release: Expanding its Product Development Expertise, PADT Adds Dr. Tyler Shaw, Former Head of Advanced Manufacturing at PING, as Director of Engineering

Change is an important part of growth. Our mission within the Engineering Services team at PADT is:

Delivering Premier Engineering Services to Enable World-Changing Product Development.

To do that, we need a world class leader. And when our long-time Director of Engineering decided to move to something different, we searched high and low for a new person. The ability and experience of the applicants were amazing, and making a decision was difficult. In the end we were fortunate to have Dr. Tyler Shaw join PADT.

Read the official press release below to learn more. We are excited about this new phase for our consulting offering. Tyler’s background and knowledge open new and exciting doors.

If you would like to explore how PADT can provide product development or simulation assistance to your organization, contact us, and Tyler along with the rest of the team will be eager to learn more.


Expanding its Product Development Expertise, PADT Adds Dr. Tyler Shaw, Former Head of Advanced Manufacturing at PING, as Director of Engineering

Shaw Tapped to Lead PADT’s Simulation and Product Development Team, Which Provides Services Across Industries Worldwide

TEMPE, Ariz., December 3, 2020 PADT, a globally recognized provider of numerical simulation, product development, and 3D printing products and services, today announced it has hired Dr. Tyler Shaw as its Director of Engineering to oversee the company’s simulation and product development consulting team effective immediately. Shaw most recently served as the head of Advanced Manufacturing and Innovation at PING golf, and has worked as an engineer, product manager, and educator across a diverse range of industries for more than 20 years.

“PADT’s ability to help our customers solve tough problems is a key industry differentiator, and we’re thrilled to welcome Tyler as a leader to oversee our team of simulation and design experts,” said Eric Miller, co-founder and principal of PADT. “His experience and impressive technical background will enable us to continue our high-quality service while providing fresh, innovative ideas for developing products to their full potential.”

Dr. Shaw replaces Rob Rowan as the director of Engineering. Rowan spent nearly 20 years with PADT and is credited for driving the growth of PADT’s engineering services and capabilities. “We owe a tremendous debt of gratitude to Rob for his dedication and leadership,” said Miller. “He was greatly admired for his broad engineering knowledge and business acumen and we wish him the best in his future endeavors.”

After a comprehensive search, Dr. Shaw emerged as the most technically advanced, skilled, and capable candidate to assume the role as PADT’s engineering leader. Dr. Shaw will focus on setting strategy, managing resources, and providing technical expertise to solve customer challenges. Prior to working at PADT and PING, Dr. Shaw served as a product manager for Vestas where he led customer-specific technical and commercial solutions for wind turbine sales across North, Central, and South America. He was also a principal systems engineer for Orbital Sciences Corporation, now Northrop Grumman, where he managed projects related to the development of world-class rockets, satellites, and other space systems.

“I am thrilled to join PADT and am ready for the challenge of taking its engineering services to the next level,” said Dr. Shaw. “I’ve worked with PADT in my previous post and was impressed with their capabilities and portfolio of clients, which covers a diverse set of industries. My background and technical knowledge across many of these sectors will serve PADT’s customers well.”

To learn more about Dr. Shaw and PADT’s simulation and product development services, please visit www.padtinc.com.

About PADT

PADT is an engineering product and services company that focuses on helping customers who develop physical products by providing Numerical Simulation, Product Development, and 3D Printing solutions. PADT’s worldwide reputation for technical excellence and experienced staff is based on its proven record of building long-term win-win partnerships with vendors and customers. Since its establishment in 1994, companies have relied on PADT because “We Make Innovation Work.” With over 90 employees, PADT services customers from its headquarters at the Arizona State University Research Park in Tempe, Arizona, and from offices in Torrance, California, Littleton, Colorado, Albuquerque, New Mexico, Austin, Texas, and Murray, Utah, as well as through staff members located around the country. More information on PADT can be found at www.PADTINC.com.

# # #

More formal versions of this Press Release are available here in PDF and here in HTML.

Building a System Model of the RL-10 Rocket Engine in Flownex

When we engineers are building a new system or iterating on an existing design, it can be expensive.  Simulating a full system-level model in a 3D CFD program can take days.  Making iterative changes to an existing system can be costly or even impossible. Utilizing a one-dimensional system modeler like Flownex allows us to analyze many different designs very quickly, on the order of seconds or minutes.

Flownex is a thermal-fluid network modeler.  It is a simulation tool that allows for 1D fluid modeling and 2D heat transfer.  It uses a variety of flow components, nodes, and heat transfer elements to model the entire system we are interested in analyzing.  It solves conservation of mass, momentum, and energy to obtain the mass flow, pressure, and temperature of fluids and solids throughout the complete network.  Because of this approach we can analyze large, complex networks very quickly, iterate on designs, and even run short or long transient simulations with ease.
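
As a toy illustration of the network idea (this is not Flownex’s actual solver, just the concept), consider two pipes in series between fixed-pressure boundaries; enforcing conservation of mass at the internal node fixes the node pressure and the flow:

```python
import numpy as np
from scipy.optimize import brentq

# Toy two-pipe series network: P_in -- pipe1 -- node -- pipe2 -- P_out
# Each pipe obeys a simple m = C * sqrt(dP) relation; conservation of mass at
# the node requires the flow in pipe 1 to equal the flow in pipe 2.
# All values below are made up for illustration.
P_in, P_out = 500e3, 100e3          # boundary pressures, Pa
C1, C2 = 2.0e-3, 1.5e-3             # pipe flow coefficients, kg/s per sqrt(Pa)

def mass_imbalance(P_node):
    m1 = C1 * np.sqrt(P_in - P_node)   # flow into the node
    m2 = C2 * np.sqrt(P_node - P_out)  # flow out of the node
    return m1 - m2

P_node = brentq(mass_imbalance, P_out + 1.0, P_in - 1.0)
m = C1 * np.sqrt(P_in - P_node)
print(f"Node pressure: {P_node/1e3:.1f} kPa, mass flow: {m:.3f} kg/s")
```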

In the example today we are looking at a version of the RL-10 rocket engine, which has been a staple in the delivery of satellites into orbit and an essential part of many spacecraft. The specific iteration of the RL-10 we will be using for building our network model is the RL10A-3-3A. A good place to begin with any system model is a system schematic:

Figure 2: RL10A-3-3A Engine System Schematic – Image from https://ntrs.nasa.gov/citations/19970010379

In Flownex we can assign an image (could be from a P&ID diagram, a CAD cross-section, or even a satellite image!) as the background for our drawing canvas. We simply need to right-click on the drawing canvas and select Edit Page to bring up the drawing canvas properties.

Clicking on the action button under Appearance Style brings up the Styles Editor.  Here we can change the fill style to Image and select the appropriate image for our background.

In the case of the RL-10 we can use the image from figure 2 as our background image.  We may want to consider adjusting the opacity of the image so that it blends into the background a little bit more.

In Flownex building a system model is as simple as drag and drop.  We can build our rocket engine using a variety of flow components from the Flow Solver library. To build the RL-10 system model we will be using the following components:

  • CEA Adiabatic Flame component to model combustion.
  • Composite Heat Transfer component to model thermal transport through pipe-walls to ambient and to model the regen.
  • Boundary Conditions to constrain our system at the inlets and outlet.
  • Basic Valves to model the different valves in the system.
  • Flow Resistances to model specified losses where appropriate.
  • Flow Interfaces to model the fluids entering the combustion chamber (to transfer fluid properties as we switch from two-phase O2 and H2 to gaseous fluids for modeling combustion).
  • Pipes for modeling various flow-paths.
  • Restrictors with Discharge Coefficient for our injection ports to the combustion chamber.
  • Restrictors with Loss Coefficient to model both the Calibrated Orifice and the Venturi contraction/expansion.
  • Basic Centrifugal Pumps for our Fuel and LOX pumps.
  • Simple Turbine to model the Fuel Turbine.
  • Shafts to connect our different pumps mechanically.
  • Gearbox to connect the shafts between the LOX pump and the Fuel Pump.
  • Exit Thrust Nozzle to determine total thrust.
  • Script for assigning O2 properties prior to combustion.

The components may be dragged and dropped from the component library onto the drawing canvas to build our system model. We can also copy and paste components that are already on the canvas into different locations. This can be especially useful when the same inputs for, say, a pipe are used consistently throughout the model. All components have both Inputs and Results associated with them, as seen in the figure below. This is how we will define our flow components.

The completed model of the RL-10 Rocket Engine can be seen below. There are a few simplifications; we are using composite heat transfer components to model free convection to a specified ambient temperature (as though this was a land-based test). Rather than tie the actual temperatures and flow conditions in the nozzle to the regen we are using assumed temperatures and convective heat transfer coefficients. For additional fidelity we could model the heat transfer between these two flow paths with calculated convective heat transfer coefficients and we could model cross-conduction along the pipes which deliver the fuel and oxidizer to the combustion chamber. With additional effort, more complex use cases could also be simulated.

For the sake of demonstration, we set up a transient action to slowly vary the oxidizer control valve fraction open, starting at 30% and ending at 100% open, and observe the change in thrust at the nozzle as a function of this changing transient action.

Plots may be easily added by dragging a Line Graph from the Visualization > Graphs section of the component library onto our canvas. To choose the characteristics we would like plotted against time we simply need to drag and drop the desired inputs or results onto our newly placed line graph.

RL-10 Transient Thrust Plot

We can plot both the oxidizer control valve fraction open and the thrust versus time to observe the thrust reaction to the opening of the valve. The thrust plot has some jumps that are likely due to numerical singularities – with additional work this could be improved.

As can be seen, setting up complex system models in Flownex is relatively simple with most operations being drag and drop. For ease of sharing models with colleagues or customers adding a background image makes it very easy to see how the flow components in the model correspond with a system schematic. Setting up and plotting the effects of operational transients is a breeze!

For more information on Flownex please reach out to Dan Christensen at dan.christensen@padtinc.com.

Meshing in the New Ansys Fluent Task-based Workflows

Working with a variety of users with different levels of CFD (Computational Fluid Dynamics) backgrounds, I have to admit that Fluent meshing used to be a challenging and confusing task for beginners and even intermediate users.

Ansys has addressed this challenge by redesigning the Fluent user interface to provide a task-based workflow for meshing that enables engineers to do more and solve more complex problems than ever before in less time. The new Fluent task-based workflow streamlines the user experience by providing a single window that offers only relevant choices and options and prompts the user with best practices that deliver better simulation results.

Best practices are embedded into the workflow in the form of defaults and messages to the user. This reduces the amount of training required to start using the software and makes it easier for occasional users to return to the software.

How to Mesh Watertight CFD Geometry in the New Ansys Fluent Task-based Workflow

In order to use this workflow, you need relatively clean, watertight solid and/or fluid regions that can be meshed by surface meshing and then volume filling (no wrapping required). Geometry can consist of single or multiple bodies.

Going through the task-based workflow is straightforward. You are presented with several steps, like:

  • Surface mesh.
  • Describe geometry. (Fluid and/or solid)
  • Capping. (If you are creating an internal flow volume, the capping tools in Fluent make extraction easy)
  • Volume meshing. (If you wish to use the latest Mosaic meshing technology, select “Poly-hexcore”)

Mosaic Meshing Technology

Now, click on “Switch-to-Solution” to bring the mesh into the familiar Fluent interface.

Fault-Tolerant Workflow for Ansys Fluent Meshing Wraps and Seals Leaks and Gaps

Sometimes CFD simulations involve dirty, non-watertight geometries, such as 3D scanned or manufacturing geometry files. These geometries may contain missing faces, gaps, holes, overlaps, and other issues. As a result, they require extensive cleanup before simulation.

To overcome this obstacle, Ansys offers a new Fluent meshing workflow that wraps dirty geometry without cleanup.

The workflow for non-watertight geometry offers distinct advantages over other meshing technologies such as:

Part management:

Users can perform CAD-level changes to any geometry or assembly, including dragging and dropping objects from the CAD model into the simulation model.

Leakages and overlaps:

The fault-tolerant workflow seals leakages caused by gaps and misalignments between solid bodies. This significantly reduces the manual efforts required to clean up geometry.

The fault-tolerant workflow can easily wrap leaky geometry

STL file input

The workflow can create fluid regions directly from STL files or scanned data. This eliminates the need to convert STL files into solid geometry for the biomedical, oil and gas, automotive and other industries.

Imported STL File

2020R2 updates:

There are a few important improvements both in Watertight meshing (WTM) and Fault-Tolerant meshing (FTM) workflows in the 2020R2 release.

FTM/WTM: Wild card selection in lists

The Meshing Workflows now have an option to use a persistent Wildcard string for selecting labels or zones. This is in addition to the Filter Text option previously available. The new Use WildCard option stores the wildcard string itself in recorded workflows instead of an explicit list of locations so that when they are played back with new geometries, the matching will be performed again and pick up any matching zones/labels that were not in the earlier geometry.

WTM: Support of Region-specific Sizing 

You can specify region-specific Max Size and Growth Rates during the Volume Meshing task.  If you enable Region-based Sizing, Fluent will compute default sizing specifications for each region.  These can then be adjusted as required for each region.

WTM: Start From Imported Surface Mesh

This is useful if you have an established surface-meshing workflow or if you already have a mesh generated (perhaps from another preprocessor or an existing Fluent case) and want to use that as a starting point for Fluent meshing. Once you import the surface mesh you have the option of using it as it is, or selectively adding additional Local Size controls and/or remeshing particular surfaces as needed.

FTM: Continuous prism layers for Poly and Poly-Hexcore for Fluids

For the Fault-Tolerant Meshing Workflow you can now create continuous prism layers without stair-stepping within poly and poly-hexcore fluid regions.  Note that this will apply in all zones of the region.

WTM: Support of Local Sizing on Labeled Edges

Once you have labeled the edges, you can select Edge Size in Add Local Sizing to prescribe a target size on the selected edge(s).

Efficient and Accurate Simulation of Antenna Arrays in Ansys HFSS – Part 2

Finite Arrays

In addition to explicit modeling of finite arrays in Ansys HFSS, there are three other methods based on using unit cells. To learn more about the unit cell, please see part 1 of this blog.

In part 2, I will introduce and compare these three methods: (1) a finite array defined using the array setting on a unit cell, (2) a Finite Array using the Domain Decomposition Method (FADDM), and (3) 3D component arrays (Figure 1). Please note that method 3 requires HFSS 2020R1 or newer.

(a)

(b)

(c)

Figure 1. (a) Unit cell, (b) FADDM, (c) 3D component array.

Finite Array using Unit Cell

After defining a unit cell (Figure 1a), you may simply define the number of elements, the spacing between them, and the scan angle. The assumption is that there is no mutual coupling, and every element has the same radiation pattern and the same excitation. This is a good approximation for large arrays (10×10 or larger). This method may not be accurate enough in some cases, for example for a small number of elements, or when the antenna elements have main beams at angles close to the plane of the array and toward the other elements, causing a higher level of mutual coupling. However, smaller arrays won’t require as large a compute resource as a large array.
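
Under that approximation, the finite-array pattern is simply the unit-cell (element) pattern multiplied by an array factor. Here is a small numpy sketch of the array factor for a uniformly excited linear array; the element count, spacing, and scan angle are arbitrary examples, not values from an HFSS model:

```python
import numpy as np

# Array factor of an N-element uniform linear array, i.e. the same assumption the
# unit-cell "Antenna Array Setup" makes: identical excitation, no mutual coupling.
N = 10                           # number of elements (example)
d = 0.5                          # element spacing in wavelengths (example)
theta_scan = np.deg2rad(30.0)    # scan angle (example)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
k = 2 * np.pi                    # wavenumber with wavelength normalized to 1
psi = k * d * (np.sin(theta) - np.sin(theta_scan))

# Sum of N phasors, normalized so the scanned main beam peaks at 0 dB
af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(N), psi)), axis=0)) / N
af_db = 20 * np.log10(np.maximum(af, 1e-6))
print(f"Main beam at {np.rad2deg(theta[np.argmax(af_db)]):.1f} deg")
```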

The advantage of this method is its simulation speed. It requires the minimum memory and time to provide a quick array simulation. To define the array (after running the analysis for unit cell), right-click on Radiation from the Project Manager window. Select Antenna Array Setup, and then Regular Array Setup. In the Antenna Array Setup under Regular Array type, define the location of the first cell, the direction, the distance between the cells and number of cells in each direction.

(a)  

(b)

(c)

Figure 2. Steps to define a finite array, (a) Antenna array setup, (b) & (c) Regular array setup.

Finite Array using Domain Decomposition Method

General Domain Decomposition (DDM) for a single domain provides a way to reduce the memory requirement; however, it does not reduce the meshing time for large explicit arrays. Using Finite Array DDM (FADDM) addresses this shortcoming. The FADDM bypasses the adaptive meshing stage by duplicating the mesh that was generated for a unit cell. While the unit cell is used to create the mesh, the assumption of uniform excitation is no longer present. Each element in the FADDM can have a different magnitude and phase and is individually modeled; however, the mesh created in the unit cell is used to generate the overall mesh, so no CPU time is spent on generating the mesh. This can be seen in Figure 3: by linking the mesh of the unit cell to the FADDM, the mesh is copied, and no mesh refinement is needed. You may compare it with the explicit array of the same size (Figure 3(c)), where the entire array has to be meshed and mesh refinement is necessary during adaptive meshing. This can be a huge simulation time saving when the array size is large.

(a)

(b)

(c)

Figure 3. (a) Mesh from a unit cell, (b) mesh linked to FADDM, (c) explicit array mesh.

To create the FADDM from the unit cell, create a new HFSS design. Then copy the unit cell into the new design. In the Project Manager window, right-click on Model and choose Create Array, as shown in Figure 4. This opens a window that allows the user to define the number of elements of the array along the lattice directions. By selecting the “Active Cells” tab, the user can define where active, passive, and padding cells are located. This gives the user a means of creating different lattice shapes.

Please note that the padding cells defined in the General tab represent the size of the vacuum buffer surrounding the array. They are not visible to the user but are included in the FADDM simulation. The same mesh from the unit cell simulation is duplicated to the padding cells (Figure 5). It is also possible to add padding cells in the Active Cells tab; those cells are also invisible but can be used to create an array lattice of the desired shape (Figure 6).

Figure 4. The steps to create a DDM array, the Padding Cells are used to create the vacuum box and are invisible to the user.

Figure 5. The FADDM needs a padding cell to create a vacuum box around the design. The padding cells are invisible to the user.

(a)

(b)

Figure 6. (a) Padding cells can be used to create a lattice, (b) the lattice created does not show the padding cells.

The next step is to link the mesh to the unit cell. First, an analysis setup should be created. Choose Advanced Solution Setup. In the Driven Solution Setup General tab, reduce the Maximum Number of Passes to 1, as shown in Figure 7(a), then choose the Advanced tab and click on Import Mesh (Figure 7(b)). Click on Setup Link. This links the simulation to the mesh of the unit cell. There are two steps needed here: first, choose the file or design that contains the mesh information (Figure 7(c)); second, map the variables (Figure 7(d)). The last step in setting up the analysis is selecting the Advanced Mesh Operation tab and selecting “Ignore mesh operation in target design” (Figure 7(e)). Now the array is ready and the simulation can be run. You will notice that adaptive meshing goes through only one pass. If the option “Simulate source design as needed” is checked in the Setup Link window (Figure 7(c)), then if a design variable that affects the geometry is changed, the meshing of the unit cell is repeated as needed. After the simulation is completed, the element magnitudes and phases can be changed as a post-processing step with “Edit Sources” (right-click on Excitations). The source names provided in Edit Sources are slightly different than in an explicit array (Figure 8).

(a)

(b)

(c)

(d)

(e)

Figure 7. Different windows related to setting up a linked mesh in FADDM.

Figure 8. Edit sources gives the ability to change the magnitude and phase of each element.

To compare the run time and array patterns, an example of a circularly polarized microstrip patch antenna in a 5 × 5 element array is shown in Figure 9 and Figure 10. The differences can be seen at angles away from the broadside angle. This shows how the edge effects are ignored in the unit cell approximation. Table 1 shows the comparison of memory and runtime for the three methods.

Table 1. Comparing run time and memory needed for a 5 x 5 array, explicit array vs FADDM.

Method          Elapsed Time (min:sec)   Memory (MB)
Unit cell       01:06                    83.1
FADDM           03:17                    81.4
Explicit Array  25:06                    94.7

Figure 9. Comparison of the far-field patterns for LHCP (co-polarization) created using FADDM and unit cell array.

Figure 10. Comparison of the far-field patterns for RHCP (cross-polarization) created using FADDM and unit cell array.

Finally, we compare the co-polarization pattern with an explicit 5 x 5 element array for scan angles of 0 and 30 degrees in  Figure 11 and Figure 12, respectively.

Figure 11. Comparison of far-field LHCP created by explicit array vs. FADDM.

Figure 12. Comparison of scanned far-field LHCP, scan angle of 30 degrees.

3D Component Array

In 2020R1, the option of a 3D component array was added. This option provides a means of combining different unit cells in one array. The unit cells are defined and imported as 3D components. To create a 3D component unit cell, define each type of cell in a separate HFSS design, run the analysis, then select all objects in the model. In the Model ribbon, click on Create 3D Component, assign a name (no spaces are allowed in the name), add any information you would like to include, such as owner, email, and company, then click OK. Once all the 3D component cells are created, create a new HFSS design for the 3D component FADDM.

The next step is to create Relative CS for each of the 3D component elements in the HFSS design that will contain the array. For this step you need to plan the array lattice ahead of time, so the components are placed in the proper locations (Figure 13). Overlap is not allowed.

The unit cells should have the following:

  • Identical dimensions of the bounding boxes
  • Identical Primary/Secondary (Master/Slave) boundary on the unit cells.

The generation of the FADDM is similar to that of a single unit cell array, except that when you select Model > Create Array, the window shown is 3D Component Array Properties (Figure 14). After choosing the number of elements and the number of padding cells, the Unit Cells window (Figure 15) gives you the option of choosing one of the 3D component unit cells for each element location in the lattice. The cells can be color coded. In the example shown in Figure 16 there are three components: the blank cell, the vertical cell, and the horizontal cell. The sources under the Edit Sources window are also arranged based on the names of the 3D component cells. At this point there is no option of linking the mesh. Therefore, the number of passes for adaptive meshing should be set to a number that is appropriate for achieving convergence.

Figure 13. 3D component unit cells are arranged to create a 3D component array.

Figure 14. The 3D component array can be created the same way as creating FADDM using a unit cell.

Figure 15. The unit cells are color coded for easier lattice creation.

Figure 16. The result of lattice created using 3D component array.

Conclusion

Unit cell and Finite Array Domain Decomposition are excellent options for simulating large finite arrays with reasonable runtime and memory requirements. The 3D component finite array is a nice feature added in 2020R1 that provides a way to combine unit cells with different geometries in one array.

If you would like more information related to this topic or have any questions, please reach out to us at info@padtinc.com.

SPISim – New addition to the Ansys Electronics family

In this article, I would like to introduce a new addition to the Ansys Electronics Solution 2020R2 release called SPISim. Since this is a new tool, I’ll focus on describing its capabilities as well as some possible applications.

What is SPISim?

Signal, Power Integrity and Simulation (SPISim) focuses on system-level and on-chip SI/PI modeling, simulation, and analysis. The tool presents a variety of different features, which are split into separate modules shown below.

Let us look at each module individually and highlight the key functionality.

There are two main modules, VPro and MPro. All the other features (sub-modules) are split between these two.

VPro Core

This is a versatile GUI for viewing waveforms. It supports a wide variety of formats including .tr0, .ac0, .ibis, .csv, .mat, .raw, .snp, .citi, and more. Besides simple viewing capabilities, VPro can also be used for waveform analysis:

  • Overshoot and Undershoot (for Peaks and Valleys)
  • Threshold Crossings
  • Min/Max Peak-2-Peak
  • Root-Mean-Square Value 
  • FFT, iFFT
  • Correlation
  • Pulse to PDA

Using the information about waveforms, this tool can also plot an eye diagram, perform simple correlations, and run measurements. The viewer also supports framework scripting in JavaScript, Ruby, TCL, etc.
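
A few of the measurements listed above are simple enough to sketch in numpy. This is only meant to show what such analysis functions compute, not how VPro implements them; the waveform below is synthetic:

```python
import numpy as np

# Synthetic waveform: a noisy 1 GHz square wave between ~0 V and ~1.2 V (illustrative)
t = np.linspace(0, 10e-9, 2001)
v = 0.6 * np.sign(np.sin(2 * np.pi * 1e9 * t)) + 0.6 + 0.02 * np.random.randn(t.size)

peak_to_peak = v.max() - v.min()          # Min/Max Peak-2-Peak
rms = np.sqrt(np.mean(v**2))              # Root-Mean-Square value

# Threshold crossings: sample indices where the waveform crosses a mid-level threshold
threshold = 0.6
above = v > threshold
crossings = np.where(above[1:] != above[:-1])[0]

# Single-sided FFT magnitude
spectrum = np.abs(np.fft.rfft(v)) / v.size
freqs = np.fft.rfftfreq(v.size, d=t[1] - t[0])

print(f"Vpp = {peak_to_peak:.3f} V, RMS = {rms:.3f} V, crossings = {crossings.size}")
```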

DPro Unit (VPro Module)

DPro (short for DDR Pro) provides comprehensive DDR related post-processing analysis. Key functionalities of this tool:

  • Batch mode of processing one or more waveform files
  • Support of multiple receiver processing
  • Built-in and customizable derating table and derating processing
  • Built-in 100+ measurement functions for typical DDR signal analysis
  • Results cross-probing with automatic display of problematic locations

The feature is organized in a wizard-like style. The user simply needs to fill out information in 6 tabs and click the ‘Run’ button. Overall, it is very intuitive to use, but like any new feature, there is a learning curve for a new user.

TPro Unit (VPro Module)

It provides comprehensive transmission line related modeling, analysis, post-processing, and viewing capabilities. Here are several main functionalities offered by this add-on:

  • Comprehensive stackup planner to model t-lines’ performance in different stackup configurations
  • Advanced t-line modeling viewer for rapid analysis such as impedance, crosstalk, or propagation delay analysis
  • A table viewer for RLCG frequency content
  • What-if analysis for quick impedance/crosstalk calculation, and data processing, such as trimming and merging of frequency points
  • Batch mode processing and measurements for one or more t-line model files, result is a plain .csv file ready for further modeling or analysis

This feature helps the user run pre-layout ‘what-if’ analysis. Both the ‘transmission line analyzer’ and the ‘layer stackup planner’ give the user a flexible way of understanding potential design constraints and guidelines.

SPro Unit (VPro Module)

This module is similar to TPro in terms of capabilities. However, it is geared toward viewing and analyzing S-parameters instead of tabular transmission line data. Also, in contrast to TPro, this feature has a separate ‘S-Param’ tab with all the features listed there.

Here are major capabilities of SPro:

  • Advanced s-parameter viewer for speedy analysis such as TDR/TDT or PDA analysis
  • Table viewer for frequency content; export s-parameter data to matlab .mat format and more
  • 20+ advanced analysis functions such as mixed-mode conversion, cascading and renormalization
  • Batch mode processing and measurements for one or more s-parameter files
  • Supports customizable s-parameter report generation for lab automation and beyond

Beyond the conceptual similarities with TPro, the S-parameter waveform viewer is based on the VPro waveform viewer. Therefore, all operations available in VPro can also be found in the S-parameter viewer.

Signal Generator Unit

This tool allows the user to generate a signal and use it in a future analysis. The generator offers a wide variety of signal patterns (such as PRBS, Pulse, Sine, Square, Sawtooth, etc.) in combination with the PAM4 and NRZ modulation schemes. The user only needs to specify parameters for the signal and then create it.

This simple but very powerful feature helps the engineer save time.
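
To show what such a generator produces, here is a small numpy sketch of a PRBS-7 bit stream mapped to NRZ levels. The LFSR polynomial and voltage levels are standard textbook choices, not SPISim settings:

```python
import numpy as np

def prbs7(n_bits, seed=0x7F):
    """PRBS-7 sequence from the textbook x^7 + x^6 + 1 linear-feedback shift register."""
    state, bits = seed & 0x7F, []
    for _ in range(n_bits):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        bits.append(state & 1)
        state = ((state << 1) | new_bit) & 0x7F
    return np.array(bits)

bits = prbs7(127)                                # one full 127-bit period
nrz = np.where(bits == 1, 0.5, -0.5)             # NRZ: one voltage level per bit
samples_per_bit = 16
waveform = np.repeat(nrz, samples_per_bit)       # ideal, zero-rise-time waveform

print(bits[:16], waveform.shape)
```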

MPro Core

By definition, MPro is a modeling unit that helps the user work with data. However, modeling can mean different things. The main advantage of MPro is that it provides the user with a simple environment for data manipulation. Here are the main functionalities of this core module:

  • Table data processing: combine, extract, summarize statistically, etc
  • Plan sampling with design of experiments, full factorial, Monte Carlo, etc
  • Simulate or collect data using customizable scripts, supporting multi-CPU/multi-thread
  • Visualize data in statistical, 2D or 3D plots
  • Model data using response surface modeling, neural network (feed forward and radial basis), etc
  • Optimization using linear, nonlinear, or genetic algorithm methods

BPro IBIS and AMI Unit (MPro Module)

BPro is one unit; however, in this description I have purposefully separated it into two, BPro IBIS and BPro AMI, because the functionality of BPro is very broad. It is easier to focus on one thing at a time.

Generally, BPro brings comprehensive IBIS-related modeling, analysis, post-processing, and viewing capabilities to the user. In more detail:

  • An inspector to view an IBIS model’s textual content and easily visualize the various waveform/current tables. The tool also allows manual editing of model data with a simple mouse click and drag
  • A built-in, advanced IBIS model generation flow from either scratch or existing simulation data. The tool guides the user from modeling setup, spice deck generation, simulation, modeling, and syntax checking with the golden parser, through validation and final figure of merit (FOM) reporting
  • Batch-mode generation of performance reports for one or more model files. Results are in .csv file format and can be used for further analysis
  • IBIS model generation from a spec or data sheet without performing any simulation. The generated model will also have two sets of waveforms under different loading conditions

Under ‘IBIS’ menu tab, the user will find separate sets of commands for both IBIS and AMI, as well as commands for IBIS-AMI in general.

Summary

This new addition to the Ansys Electronics Solution brings a very wide variety of features to engineers. The Waveform Viewer, Signal Generator, IBIS-AMI modeling, DDR analysis, data optimization, and transmission line planner are all united under one tool: SPISim. We can launch this tool from within either Ansys 3D Layout or SIwave, and in 2020R2 it is accessible through the Electronics Enterprise license.

Here is an overview of the SPISim functionality:

Besides developing the help documentation and video demos, the SPISim engineering team provides users with detailed information about the tool on their blog – http://www.spisim.com/blogs/blog-articles-index/ – and helps to fill in the technical ‘gaps’ by sharing reference material – http://www.spisim.com/products/ami-spisims-ibis-ami/academic-serdes-ami-reference/

If you would like more information related to this topic or have any questions, please reach out to us at info@padtinc.com.

Windows Update KB4571756 Triggers Error 3221227010 for Ansys Electronics Products

On September 7, 2020, Microsoft released Windows update KB4571756, which may cause Ansys Electronics products to fail with the error:

3221227010 at ‘reg_ansysedt.exe’ and ‘reg_siwave.exe’ registration.

This is the error message users would see if they right-mouse-click and run the following file as administrator:

C:\Program Files\AnsysEM\AnsysEM20.2\Win64\config\ConfigureThisMachine.exe

To resolve this issue, here are the steps we recommend users take:

  1. Revert the updates.
    1. If the issue is not resolved, or reverting is something your IT department won’t let you do, continue to the next steps.
  2. Set an environment variable that turns off the driver that is causing the error. (A scripted way to do the same thing is sketched after this list.)
    1. Use Windows search, type “system environment”, and click on “Edit the system environment variables”
    2. This opens the “System Properties” tool
    3. Go to the “Advanced” tab
    4. Click on “Environment Variables…” at the bottom
    5. In the System Variables window click on “New…”
    6. Create the following variable:

      Variable Name: ANSYS_EM_DONOT_PRELOAD_3DDRIVER_DLL
      Variable Value: 1
    7. Click OK 3 times to exit out of the tool and save your changes. 
  3. If the issue is still not resolved, there is one more step:
    1. Go to C:\Program Files\AnsysEM\AnsysEM20.2\Win64\config\
    2. Right-Mouse-Click on “ConfigureThisMachine.exe” and run as Admin. 
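
If you administer several machines, the variable from step 2 can also be set from a script. Here is a minimal Python sketch that calls the standard Windows setx command; the /M switch writes a machine-level variable and requires an elevated (administrator) prompt:

```python
import subprocess

# Sets the workaround variable machine-wide; run from an elevated (admin) prompt.
# This is equivalent to the manual Environment Variables steps in item 2 above.
subprocess.run(
    ["setx", "ANSYS_EM_DONOT_PRELOAD_3DDRIVER_DLL", "1", "/M"],
    check=True,
)
```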

If these steps helped to resolve the issue, you will see the following info message when ‘ConfigureThisMachine.exe’ is run:

If this does not work, please contact your Ansys support provider. 

Alternating Stresses in Ansys Mechanical – Part 1: Principal Stresses

Editor’s Note:
The following PowerPoint is from one of PADT’s in-house experts on linear dynamics, Alex Grishin.

One of the most valuable results that can come from a harmonic response analysis is the predicted alternating stress in the part. This feeds fatigue and other downstream calculations, as well as predicting maximum possible values. Because of the math involved, calculating derived stresses, like principal stresses, can be done in several ways. This post shows how Ansys Mechanical does it and offers an alternative that is considered more accurate for some cases.
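
To make the underlying issue concrete, here is a small numpy sketch of the phase-sweep idea: harmonic results are complex stress tensors, and the principal stresses can be evaluated at each phase angle over the cycle rather than from the real and imaginary parts alone. This is a generic illustration of the math only, not a reproduction of Alex’s slides or of Mechanical’s internal method, and the stress values are made up.

```python
import numpy as np

# Complex (harmonic) stress tensor at one node, in Pa (made-up values):
# real part = response at 0 degrees of phase, imaginary part = response at 90 degrees.
sigma = np.array([[120e6 + 30e6j, 20e6 + 5e6j,  0.0],
                  [20e6 + 5e6j,   80e6 - 10e6j, 0.0],
                  [0.0,           0.0,          10e6 + 2e6j]])

# Sweep the phase angle, take the principal stresses of the instantaneous (real)
# tensor at each phase, and keep the largest value found over the cycle.
max_s1 = -np.inf
for phi in np.deg2rad(np.arange(0, 360, 5)):
    s_t = np.real(sigma * np.exp(1j * phi))       # instantaneous stress tensor
    principals = np.linalg.eigvalsh(s_t)          # symmetric -> real eigenvalues
    max_s1 = max(max_s1, principals.max())

print(f"Maximum first principal stress over the cycle: {max_s1/1e6:.1f} MPa")
```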

Part 2 on von Mises Stress can be found here.

ANSYS-PADT-Alternating-Principal-Stresses-1

Alex also created an example Ansys Workbench archive that goes with the PowerPoint.

What I use Most from my Engineering Management Masters Degree

Even before finishing my mechanical engineering degree at the University of Colorado, Boulder in 2010, I had an interest in furthering my education. The decision I had at that point was whether the next step would be a graduate degree on the technical side or something more like an MBA. I would end up with the chance to study at the University of Denver (DU), focusing on Computational Fluid Dynamics (CFD), and if that field does not make it clear, my first stint in grad school was technical.

At DU, we sourced our Ansys simulation software from a company called, you guessed it, PADT. After finishing this degree, and while working at PADT, the desire to further my education cropped up again after seeing the need for a well-rounded understanding of the technical and business/management side of engineering work. After some research, I decided that a Master’s in Engineering Management program made more sense than an MBA, and I started the program back at my original alma mater, CU Boulder.

Throughout the program, I would find myself using the skills I was learning during lectures immediately in my work at PADT. It is difficult to boil down everything learned in a 10-course program to one skill that is used most often, and as I think about it, I think that what is used most frequently is the new perspective, the new lens through which I can now view situations. It’s taking a step back from the technical work and viewing a given project or situation from a perspective shaped by the curriculum as a whole with courses like EMEN 5020 – Finance and Accounting for Engineers, EMEN 5030/5032 – Fundamentals/Advanced Topics of Project Management, EMEN 5050 – Leading Oneself, EMEN 5080 – Ethical Decision Making, EMEN 5500 – Lean and Agile Management, and more. It is the creation of this new perspective that has been most valuable and influential to my work as an engineer and comes from the time spent completing the full program.

Okay okay, but what is the one thing that I use most often, besides this new engineering management perspective? If I had to boil it down to one skill, it would be the ‘pull’ method for feedback. During the course Leading Oneself, we read Thanks for the Feedback: The Science and Art of Receiving Feedback Well, Even When it is Off Base, Unfair, Poorly Delivered, and, Frankly, You’re Not In The Mood (Douglas Stone and Sheila Heen, 2014), where this method was introduced. By taking an active role in asking for feedback, it has been possible to head-off issues while they remain small, understand where I can do better in my current responsibilities, and grow to increase my value to my group and PADT as a whole.

A Simple Adjustment to Fix a Contact Convergence Problem in Ansys Mechanical

As I write this from home during the Covid-19 crisis, I want to assure you that PADT is conscious that many others are working from home while using Ansys software as well.  We’re trying to help those who may be struggling with certain types of models.  In this posting, I’ll talk about a contact convergence problem in Ansys Mechanical.  I’ll discuss steps we can take to identify the problem and overcome it, as well as a simple setting that dramatically helped in this case.

The geometry in use here is a fairly simple assembly from an old training class.  It’s a wheel or roller held by a housing, which is in turn bolted to something to hold it in place.


The materials used are linear properties for structural steel.  The loading consists of a bearing load representing a downward force applied by some kind of strap or belt looped over the wheel, along with displacement constraints on the back surfaces and around the bolt holes, as shown in the image below.  The flat faces on the back side have a frictionless support applied (allows in plane sliding only), while the circular faces where bolt heads and washers would be are fully constrained with fixed supports.


As is always the case in Ansys Mechanical, contact pairs are created wherever touching surfaces in the assembly are detected.  The default behavior for those contact pairs is bonded, meaning the touching surfaces can neither slide nor separate.  We will make a change to the default for the wheel on its shaft, though, changing the contact behavior from bonded to frictional.  The friction coefficient defined was 0.2.  This represents some resistance to sliding.  Unlike bonded contact in which the status of the contact pair cannot change during the analysis, frictional contact is truly nonlinear behavior, as the stiffness of the contact pair can change as deflection changes. 

This shows the basic contact settings for the frictional contact pair:


At this point, we attempt a solve.  After a while, we get an error message stating, “An internal solution magnitude limit was exceeded,” as shown below.  What this means is that our contact elements are not working as expected, and part of our structure is trying to fly off into space.  Keep in mind in a static analysis there are no inertia effects, so an unconstrained body is truly unconstrained.

At this point, the user may be tempted to start turning multiple knobs to take care of the situation.  Typical things to adjust for contact convergence problems are adding more substeps, reducing contact stiffness, and possibly switching to the unsymmetric solver option when frictional contact is involved.  In this case, a simple adjustment is all it takes to get the solution to easily converge. 

Another thing we might do to help us is to insert a Contact Tool in the Connections branch and interrogate the initial contact status:

This shows us that our frictional contact region is actually not in initial contact but has a gap.  There are multiple techniques available for handling this situation, such as adding weak springs, running a transient solution (computationally expensive), starting with a displacement as a load and then switching to a force load, etc.  However, if we are confident that these parts actually SHOULD be initially touching but are not due to some slop in the CAD geometry, there is a very easy adjustment to handle this.

The Simple Adjustment That Gets This Model to Solve Successfully

Knowing that the parts should be initially in contact, one simple adjustment is all that is needed to close the initial gap and allow the simulation to successfully solve.  The adjustment is to set the Interface Treatment in the Contact Details for the contact region in question to Adjust to Touch:

This change automatically closes the initial gap and, in this case, allows the solution to successfully solve very quickly. 

For your models, if you are confident that parts should be in initial contact, you may also find that this adjustment is a great aid in closing gaps due to small problems in the CAD geometry.  We encourage you to test it out.

An Ansys optiSLang Overview and Optimization Example with Ansys Icepak

Ansys optiSLang is one of the newer pieces of software in the Ansys toolkit, recently acquired along with the company Dynardo. Functionally, optiSLang provides a flexible top-level platform for all kinds of optimization. It is solver agnostic, in that as long as you can run a solver through batch files and produce text-readable result files, you can use said solver with optiSLang. There are also some very convenient integrations with many of the Ansys toolkit solvers in addition to other popular programs. This includes AEDT, Workbench, LS-DYNA, Python, MATLAB, and Excel, among many others.

While the ultimate objective is often to simply minimize or maximize a system output according to a set of inputs, the complexity of the problem can increase dramatically by introducing constraints and multiple optimization goals. And of course, the more complicated the relationships between variables are, the harder it gets to adequately describe them for optimization purposes.

Much of what optiSLang can do is a result of fitting the input data to a Metamodel of Optimal Prognosis (MOP) which is a categorical description for the specific metamodels that optiSLang uses. A user can choose one of the included models (Polynomial, Moving Least Squares, and Ordinary Kriging), define their own model, and/or allow optiSLang to compare the resulting Coefficients of Prognosis (COP) from each model to choose the most appropriate approach.

The COP is calculated in a similar manner to the more common COD or R² values, except that it is calculated through a cross-validation process where the data is partitioned into subsets that are each used only for the MOP calculation or the COP calculation, not both. For this reason, it is preferred as a measure of how effective the model is at predicting unknown data points, which is particularly valuable in this kind of MOP application.
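
The cross-validated flavor of the metric can be sketched in a few lines of numpy. This is a generic k-fold version to show the idea, not optiSLang’s exact formulation, and the quadratic polynomial metamodel is just an illustrative stand-in:

```python
import numpy as np

def cop_kfold(x, y, fit, predict, k=5):
    """Cross-validated coefficient of prognosis: 1 - SSE(held-out) / SST."""
    n = len(y)
    idx = np.random.permutation(n)
    folds = np.array_split(idx, k)
    y_pred = np.empty(n)
    for f in folds:
        train = np.setdiff1d(idx, f)
        model = fit(x[train], y[train])        # fit on the training partition only
        y_pred[f] = predict(model, x[f])       # predict the held-out partition
    return 1.0 - np.sum((y - y_pred)**2) / np.sum((y - y.mean())**2)

# Example with a simple quadratic polynomial metamodel (illustrative only)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 80)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.1 * rng.standard_normal(80)

fit = lambda xt, yt: np.polyfit(xt, yt, deg=2)
predict = lambda coeffs, xv: np.polyval(coeffs, xv)
print(f"COP = {cop_kfold(x, y, fit, predict):.3f}")
```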

This whole process really shows where optiSLang’s functionality shines: workflow automation. Not only does optiSLang intelligently select the metamodel based on its applicability to the data, but it can also apply an adaptive method for the improvement of the MOP. It will suggest an automatic sampling method based on the number of problem variables involved, which can then be applied towards refining either the global COP or the minimum local COP. The automation of this process means that once the user has linked optiSLang to a solver with appropriate inputs/outputs and defined the necessary run methodology for optimization, all that is left is to click a button and wait.

As an example of this, we will run through a test case that utilizes the ability to interface with Ansys EDT Icepak.

Figure 1: The EDT Icepak project model.

For our setup, we have a simple board mounted with bodies representative of 3 x 2 watt RAM modules, and 2 x 10 watt CPUs with attached heatsinks. The entire board is contained within an air enclosure, where boundary conditions are defined as walls with two parametrically positioned circular inlets/outlets. The inlet is a fixed mass flow rate surface and the outlet is a zero-pressure boundary. In our design, we permit the y and z coordinates for the inlet and outlet to vary, and we will be searching for the configuration that minimizes the resulting CPU and RAM temperatures.

The optiSLang process generally follows a series of drag-and-drop wizards. We start with the Solver Wizard which guides us through the options for which kind of solver is being used: text-based, direct integrations, or interfaces. In this case, the Icepak project is part of the AEDT interface, so optiSLang will identify any of the parameters defined within EDT as well as the resulting report definitions.  The Parametric Solver System created through the solver wizard then provides the interfacing required to adjust inputs while reading outputs as designs are tested and an MOP is generated.

Figure 2: Resulting block from the Solver wizard with parameters read in from EDT.

Once the parametric solver is defined, we drag and drop in a sensitivity wizard, which starts the AMOP study.  We will start with a total of 100 samples; 40 will be initial designs, and 60 will be across 3 stages of COP refinement with all parameter sets sampled according to the Advanced Latin Hypercube Sampling method.

Figure 3: Resulting block from the Sensitivity wizard with Advanced Latin Hypercube Sampling.
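For intuition, here is a minimal Python sketch of plain Latin Hypercube Sampling: each variable's range is split into as many equal-probability bins as there are samples, and each bin is used exactly once. The Advanced variant used by optiSLang additionally optimizes the pairing to reduce spurious correlations between inputs, and the variable bounds below are made-up values for illustration only.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Plain Latin Hypercube Sampling: one sample per equal-probability bin per variable."""
    rng = np.random.default_rng(seed)
    n_vars = len(bounds)
    # a random point inside each of n_samples stratified bins, in [0, 1)
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):          # shuffle the bin order independently per variable
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. 40 initial designs over hypothetical inlet/outlet y and z position ranges (mm)
designs = latin_hypercube(40, [(10, 90), (10, 90), (10, 90), (10, 90)], seed=0)
print(designs.shape)   # (40, 4)
```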

The results of individual runs are tabulated and viewable as the study is conducted, and at the conclusion, a description of the AMOP is provided with response surfaces, residual plots, and variable sensitivities. For instance, we can see that by using these first 100 samples, a decent metamodel with a COP of 90% is generated for the CPU temperature near the inlet. We also note that optiSLang has determined that none of the responses are sensitive to the ‘y’ position of the outlet, so this variable is automatically freed from further analysis.

Figure 4: MOP surface for the temperature of Chip1, resulting from the first round of sampling.

If we decide that this COP, or that from any of our other outputs, is not good enough for our purposes, optiSLang makes it very easy to add on to our study. All that is required is dragging and dropping a new sensitivity wizard onto our previous study, which will automatically load the previous results in as starting values. This makes a copy of and visually connects an output from the previous solver block to a new sensitivity analysis on the diagram, which we can then adjust independently.

For simplicity and demonstration’s sake, we will add on two more global refinement iterations of 50 samples each. By doing this and then excluding 8 of our 200 total samples that appear as outliers, our “Chip1” COP can be improved to 97%.

Figure 5: A refined MOP generated by including a new Sensitivity wizard.

Now that we have an MOP of suitable predictive power for our outputs of interest, we can perform some fast optimization. By initially building an MOP based on the overall system behavior, we are now afforded some flexibility in our optimization criteria. As in the previous steps, all that is needed at this point is to drag and drop an optimization wizard onto our “AMOP Addition” system, and optiSLang will guide us through the options with recommendations based on the number of criteria and initial conditions.

In this case, we will define three optimization criteria for thoroughness: a sum of both chip temperatures, a sum of all RAM temperatures, and an average temperature rise from ambient for all components with double weighting applied to the chips. Following the default optimization settings, we end up with an evolutionary algorithm that iterates through 9300 samples in about 14 minutes – far and away faster than directly optimizing the Icepak project. What’s more, if we decide to adjust the optimization criteria, we’ll only need to rerun this ~14 minute evolutionary algorithm.
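To make these criteria concrete, here is a small sketch of how such objectives could be evaluated for a single design point; the component temperatures and the factor-of-two chip weighting below are illustrative, not values taken from the actual project.

```python
def objectives(chip_temps, ram_temps, ambient=22.0):
    """Evaluate three example optimization criteria for one design point (temperatures in degC)."""
    chip_sum = sum(chip_temps)        # sum of both chip temperatures
    ram_sum = sum(ram_temps)          # sum of all RAM temperatures
    # weighted-average temperature rise above ambient, with the chips counted twice
    weighted_rises = [2.0 * (t - ambient) for t in chip_temps] + [t - ambient for t in ram_temps]
    total_ave = sum(weighted_rises) / (2 * len(chip_temps) + len(ram_temps))
    return chip_sum, ram_sum, total_ave

# two CPUs and three RAM modules with made-up temperatures
print(objectives([68.2, 71.5], [45.1, 46.3, 44.8]))
```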

What we are most interested in for this example are the resulting Pareto fronts which give us a clear view of the tradeoffs between each of our optimization criteria. Each of the designs on this front can easily be selected through the interface, and their corresponding input parameters can be accessed.

Figure 6: Pareto front of the “Chipsum” and “TotalAve” optimization criteria.

Scanning through some of these designs also provides a very convenient way to identify which of our parameters are limiting the design criteria. Two distinct regions can be identified here: the left region is limited by how close we are allowing the inlet fan to be to the board, and the right region is limited by how close to the +xz corner of our domain the outlet vent can be placed. In a situation where these parameters were not physically constrained by geometry, this would be a good opportunity to consider relaxing parameter constraints to further improve our optimization criteria. 

As it is, we can now choose a design based on this Pareto front to verify with the full solver. After choosing a point in the middle of the “Limited by outlet ‘z’” zone, we find that our actual “ChipSum” is 73.33 vs. the predicted 72.78 and the actual “TotalAve” is 17.82 vs. the predicted 17.42. For this demonstration, we consider this small error as satisfactory, and a snapshot of the corresponding Icepak solution is shown below.

Figure 7: The Icepak solution of the final design. The inlet vent is aligned with the outlet side’s heatsink, and the outlet vent is in the corner nearest the heatsink. Primary flow through the far heatsink is maximized, while a strong recirculating flow is produced around the front heatsink.

The accuracy of these results is of course dependent not only on how thoroughly we constructed the MOP, but also on the accuracy of the 3D solution; creating mesh definitions that remain consistently accurate through parameterized geometry changes can be particularly tricky. With all of this considered, optiSLang provides a great environment not only for managing optimization studies, but for displaying the results in such a way that you gain an improved understanding of the interaction between input/output variables and their optimization criteria.

Advanced Capabilities to Consider when Simulating Blow Molding in Ansys Polyflow or Discovery AIM

Ansys Polyflow is a Finite Element CFD solver with unique capabilities that enable simulation of complex non-Newtonian flows seen in the polymer processing industry. In recent releases, Polyflow has included templates to streamline two of its most common use cases: blow molding and extrusion. Similarly, Ansys Discovery AIM offers a modern user interface that guides users through blow molding and extrusion workflows while still using the proven Polyflow solver under the hood. It is not uncommon for engineers to be unsure about which tool to pursue for their specific application. In this article, I will focus on the blow molding workflow. More specifically, I will point out three features in Polyflow that have not yet been incorporated into Discovery AIM:

  1. The PolyMat curve fitting tool to derive viscoelasticity model input parameters from test data
  2. Automatic parison thickness mapping onto an Ansys Mechanical shell mesh
  3. Parison Programming to optimize parison thickness subject to final part thickness constraints

Keep in mind that either tool will get the job done in most applications, so let us first quickly review some of the core features of blow molding simulations that are common to Polyflow and AIM:

  • Parison/Mold contact detection
  • 3-D Shell Lagrangian automatic remeshing
  • Generalized Newtonian viscosity models
  • Temperature dependent and multi-mode integral viscoelastic models
  • Time dependent mold pressure boundary conditions
  •  Isothermal or non-isothermal conditions

For demonstration purposes, I modeled a sweet submarine toy in SpaceClaim. Unfortunately, I think it will float, but let’s move past that for now.  

Figure 1: Final Submarine shape (Left), Top View of Mold+ Parison (Top Left), Side View of Mold+Parison (Bottom Right)

At this point, you could proceed with Discovery AIM or with Polyflow without any re-work. I’ll proceed with the Polyflow blow molding workflow to point out the features currently only available in Polyflow.

PolyMat Curve Fitting Tool

With the blow molding template, you can select whether to treat the parison as isothermal or non-isothermal and whether to model it as generalized Newtonian or viscoelastic. Suppose we would like to model viscoelasticity with the KBKZ integral viscoelastic model because we are interested in capturing strain hardening as the parison is stretched. The inputs to the KBKZ model are a viscosity and a relaxation time for each mode. If they are known, the user can input the values directly. This is possible in Discovery AIM as well. However, the PolyMat tool is unique to Polyflow. PolyMat is a built-in curve fitting tool that helps generate input parameters for the various viscosity models available in Polyflow using material data. This is particularly useful when you do not explicitly have the inputs for a viscoelastic model, but you do have other test data, such as oscillatory and capillary rheometry data. In this case I have the loss modulus, storage modulus, and shear viscosity for a generic high-density polyethylene (HDPE) material. For this material, four modes are enough to anchor the KBKZ model to the data as shown below. We can then load the viscosity/relaxation time pairs into Polyflow and continue.

Figure 2: Curve Fitting of G′(ω), G″(ω), and η (Left); KBKZ Viscoelastic Model Inputs (Right)
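PolyMat handles this fit for you, but to illustrate the underlying idea, here is a rough Python sketch of fitting a four-mode Maxwell spectrum (the linear-viscoelastic backbone of a KBKZ-type model) to storage and loss modulus data with SciPy. The data arrays and the initial guess are placeholders; the per-mode viscosity that Polyflow expects is the modulus of that mode times its relaxation time.

```python
import numpy as np
from scipy.optimize import least_squares

def maxwell_moduli(omega, g, lam):
    """Storage and loss moduli of a multi-mode Maxwell spectrum (moduli g, relaxation times lam)."""
    wl = np.outer(omega, lam)                            # shape (n_freq, n_modes)
    G_storage = (g * wl**2 / (1.0 + wl**2)).sum(axis=1)  # G'(omega)
    G_loss = (g * wl / (1.0 + wl**2)).sum(axis=1)        # G''(omega)
    return G_storage, G_loss

def residuals(x, omega, Gp_data, Gpp_data):
    n = x.size // 2
    g, lam = np.exp(x[:n]), np.exp(x[n:])                # log parameterization keeps values positive
    Gp, Gpp = maxwell_moduli(omega, g, lam)
    return np.concatenate([np.log(Gp / Gp_data), np.log(Gpp / Gpp_data)])

# omega, Gp_data, Gpp_data would come from oscillatory rheometry of the HDPE sample
# x0 = np.log(np.r_[g_guess, lam_guess])                 # initial guess for 4 modes
# fit = least_squares(residuals, x0, args=(omega, Gp_data, Gpp_data))
# eta_i = np.exp(fit.x[:4]) * np.exp(fit.x[4:])          # per-mode viscosities = g_i * lam_i
```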

The main output of the simulation is the final parison thickness distribution. For this sweet submarine, the initial parison thickness is set to 3mm and the final thickness distribution is shown in the contour plot below.

Figure 3a: Animation of blow molding process

Figure 3b: Final Part Thickness Distribution

Thickness Mapping to Ansys Mechanical

The second Polyflow capability I’d like to point out is the ability to easily map the thickness distribution onto an Ansys Mechanical shell mesh. You do this by connecting the Polyflow solution component to a structural model in Workbench as shown below. The analogous workflow in AIM would be to create a second simulation for the structural analysis, but you would be confined to specifying a constant thickness.

Figure 4: Polyflow – Ansys Mechanical Parison Thickness Mapping

In Ansys Mechanical, the mapping comes through within the geometry tree as shown below. The Imported Data Transfer Summary is a good way to ensure the mapping behaves as expected. In this case we can see that 100% of the nodes were mapped and the thickness contours qualitatively match the Polyflow results in CFD-Post.

Figure 5: Imported Thickness in Ansys Mechanical

Figure 6: Thickness Data Transfer Summary

A force is applied normal to the front face of the sail and simulated in Mechanical. The peak stress and deformation are shown below. The predicted stresses are likely acceptable for a toy, especially since my toy is a sweet submarine. Nonetheless, suppose that I was interested in reducing the deformation in the sail under this load condition by thickening the extruded parison. A logical approach would be to increase the initial parison thickness from 3mm to 4mm, for example. Polyflow’s parison programming feature takes the guesswork out of the process.

Figure 7: Clockwise from Top Left: Applied Load on Sail, Stress Distribution, total Deformation, Thickness Distribution

Parison Programming

Parison programming is an iterative optimization work flow within Polyflow for determining the extruded thickness distribution required to meet the final part thickness constraints. To activate it, you create a new post processor sub-task of type parison programming.   

Figure 8: Parison Programming Setup

The inputs to the optimization are straightforward. The only inputs that you typically would need to modify are the direction of optimization, the width of stripes, and the list of (X,h) pairs. The direction of optimization is the direction of extrusion, which is X in this case. If the extruder can vary parison thickness along “stripes” of the parison, then Polyflow can optimize each stripe thickness. The list of (X,h) pairs serves as a list of constraints for the final part thickness, where X is the location on the parison along the direction of extrusion and h is the final part thickness constraint.

Figure 9: Thickness Constraints for Parison Programming

In our scenario, the (X,h) pairs form a piecewise linear thickness distribution that constrains the area around the sail to a 3.5mm thickness and 2mm everywhere else. After the simulation, Polyflow will write a csv file to the output directory containing the initial thickness for each node for the next iteration. You will need to copy the csv file from the output directory of iteration N to the input directory of iteration N+1, as sketched after Figure 10. The good news is the optimization converges within 3-5 iterations.

Figure 10: Defining the Initial Thickness for the Next Parison Programming Iteration
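Until that hand-off is automated, a small helper script can shuttle the file between iterations. The directory and file names below are hypothetical and should be replaced with your project’s actual paths.

```python
import shutil
from pathlib import Path

def carry_thickness_forward(prev_output_dir, next_input_dir, filename="parison_thickness.csv"):
    """Copy the nodal thickness csv written by iteration N into the input folder of iteration N+1."""
    src = Path(prev_output_dir) / filename
    dst = Path(next_input_dir) / filename
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return dst

# e.g. carry_thickness_forward("parison_iter2/output", "parison_iter3/input")
```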

Polyflow will print the parison strip thickness distribution for the next iteration in the .lst file. The plot below shows the thickness distribution from the first 3 iterations. Note from the charts below that the distribution converged by iteration 2; thus iteration 3 was not actually simulated. The optimized parison thickness distribution is also plotted in the contour plot below.

Figure 11: Optimized Parison Thickness (Top), Final Part Thickness (Bottom)

Figure 12: % of Elements At or Above Thickness Criteria

As a final check, we can evaluate how the modification to the parison thickness reduced the deformation of the submarine. The total deformation contour plot below confirms that the peak deformation decreased from 2mm to 0.8mm.

Figure 13: Total Deformation in Ansys Mechanical After Parison Programming

Summary

Ansys Discovery AIM is a versatile platform with an intuitive and modern user interface. While AIM has incorporated most of the blow molding simulation capabilities from Polyflow, some advanced functionality has not yet been brought into AIM. This article simulated the blow molding process of a toy submarine to demonstrate three capabilities currently only available in Polyflow: the PolyMat curve fitting tool, automatic parison thickness mapping to Ansys Mechanical, and parison programming. Engineers should consider whether any of these capabilities are needed in their application next time they are faced with the decision to create a blow molding simulation using Ansys Discovery AIM or Polyflow.

Changes to Licensing at ANSYS 2020R1

There are three main goals of the licensing changes in the latest release of ANSYS, plus one big bonus:

  • Deliver Ansys licensing using the FlexLM industry standard
  • Eliminate the Ansys licensing interconnect
  • Provide modular licensing options that are easier to understand
  • Finally – and this is the whopper (or Double Double if you’re an In-N-Out kind of analogy person) – this new licensing model eliminates the need for upgrading the Ansys License Manager with every software update. (please pause for shock recovery)
If you’re still shocked and would like to see a “shocked groundhog” compilation, check this out.

Why is this significant? Well, this was always a sticking point for our customers when upgrading from one version to the next.

Here’s how that usually plays out:

  1. Engineers, eager to try out new features or overcome software defects, download the software and install it on their workstations.
  2. Surprise – the software throws an obscure licensing error.
  3. The engineer notifies IT or their Ansys channel partner of the issue.
  4. After a few calls, and maybe a screenshare or two, it’s determined that the license server needs to be upgraded.
  5. The best-case scenario – IT or PADT support gets it installed in a few minutes and the engineer is on their way.
  6. The usual scenario – it takes a week to schedule downtime on the server and notify all stakeholders, and the engineer is left to simmer on medium-low until those important safeguards are handled.

What does this all mean?

  • Starting in January 2020, all new Ansys keys issued will be in the new format and will require upgrading to the 2020R1 License manager. This should be the last mandatory license server upgrade for a while.
  • Your Ansys Channel Partner will contact you ahead of your next renewal to discuss new license increments and if there are any expected changes.
  • Your IT and Ansys support team will be celebrating in the back office the last mandatory Ansys License Manager upgrade for a while.

How to upgrade the Ansys License Manager?

Download the latest license manager through the Ansys customer portal:

Follow installation instructions and add the latest license file:

  • Ansys has a handy video on this here
  • Make sure that you run the installer as an administrator for best results.

Make sure license server is running and has the correct licenses queued:

  • Look for the green checkmark in the license management center window.
  • Start your application and make sure everything looks good.

This was a high-level flyover of the new Ansys Licensing released with version 2020R1. For specifics contact your PADT Account manager or support@padtinc.com .

Making Sense of DC IR Results in Ansys SIwave

In this article I will cover a Voltage Drop (DC IR) simulation in SIwave, applying a realistic power delivery setup to a simple 4-layer PCB design. The main goal for this project is to understand what data we receive by running a DC IR simulation, how to verify it, and the best way to use it.

And before I open my tools and start diving deep into this topic, I would like to thank Zachary Donathan for asking the right questions and having deep meaningful technical discussions with me on some related subjects. He may not have known, but he was helping me to shape up this article in my head!

Design Setup

There are many different power nets present on the board under test; however, I will be focusing on two widely distributed nets, +1.2V and +3.3V. Both nets are supplied through a Voltage Regulator Module (VRM), which will be assigned as a Voltage Source in our analysis. After careful assessment of the board design, I identified the most critical components for power delivery to include in the analysis as Current Sources (also known as ‘sinks’). Two DRAM small outline integrated circuit (SOIC) components, D1 and D2, are supplied with +1.2V, while power net +3.3V provides voltage to two quad flat package (QFP) microcontrollers, U20 and U21, a mini PCIE connector, and a hex Schmitt-Trigger inverter, U1.

Fig. 1. Power Delivery Network setting for a DC IR analysis

Figure 1 shows the ‘floor plan’ of the DC IR analysis setup with 1.2V voltage path highlighted in yellow and 3.3V path highlighted in light blue.

Before we assign any Voltage and Current sources, we need to define pin groups on nets +1.2V, +3.3V, and GND for all of the PDN components mentioned above. Having pin groups will significantly simplify the review of the results. Also, it is generally good practice to start the DC IR analysis from the ‘big picture’ to understand whether a certain component gets enough power from the VRM. If a given IC reports an acceptable level of voltage being delivered with a good margin, then we don’t need to dig deeper; we can instead focus on those which may not have good enough margins.

Once we have created all the necessary pin groups, we can assign voltage and current sources. There are several ways of doing this (using the wizard or manually); for this project we will use the ‘Generate Circuit Element on Components’ feature to manually define all sources. Knowing all the components and having pin groups already created makes the assignment very straightforward. All current sources draw different amounts of current, as indicated in our settings; however, all current sources have the same Parasitic Resistance (a very large value) and all voltage sources also have the same Parasitic Resistance (a very small value). This is shown in Figure 2 and Figure 3.

Note: The type of the current source, ‘Constant Voltage’ or ‘Distributed Current’, matters only if you are assigning a current source to a component with multiple pins on the same net, and since in this project we are working with pin groups, this setting doesn’t make a difference in the final results.

Fig. 2. Voltage and Current sources assigned
Fig. 3. Parasitic Resistance assignments for all voltage and current sources

For each power net we have created a voltage source on the VRM and multiple current sources on the ICs and the connector. All sources have their negative node on the GND net, so we have a good common return path. In addition, we have assigned the negative node of both voltage sources (one for +1.2V and one for +3.3V) as the reference points for our analysis, so reported voltage values will be referenced to that node as absolute 0V.

At this point, the DC IR setup is complete and ready for simulation.

Results overview and validation

When the DC IR simulation is finished, a large amount of data is generated, so there are different ways of viewing the results; all of the options are presented in Figure 4. In this article I will be primarily focusing on ‘Power Tree’ and ‘Element Data’. As an additional source of validation, we may review the currents and voltages overlaid on the design to help us visualize the current flow and power distribution. Most of the time this helps us understand whether our pin grouping assumptions are accurate.

Fig. 4. Options to view different aspects of DC IR simulated data

Power Tree

First let’s look at the Power Tree, presented in Figure 5. Two different power nets were simulated, +1.2V and +3.3V, each of which has specified Current Sources where the power gets delivered. Therefore, when we analyze the DC IR results in the Power Tree format, we see two ‘trees’, one for each power net. Since we don’t have any pins which would get both 1.2V and 3.3V at the same time (not a very physical example), we don’t have ‘common branches’ on these two ‘trees’.

Now, let’s dissect all the information present in this power tree (taking into consideration only one ‘branch’ for simplicity, although the logic is applicable to all ‘branches’):

  • We were treating both power nets +1.2V and +3.3V as separate voltage loops, so we have assigned negative nodes of each Voltage Source as a reference point. Therefore, we see the ‘GND’ symbol ((1) and (2)) for each voltage source. Now all voltage calculations will be referenced to that node as 0V for its specific tree.
  • Then we see the path from the Voltage Source to the Current Source; the value ΔV shows the Voltage Drop along that path (3). Ultimately, this is the main value power engineers are usually interested in during this type of analysis. If we subtract ΔV from Vout we get the ‘Actual Voltage’ delivered to the positive pin of the specific current source (1.2V – 0.22246V = 0.977V). That value is reported in the box for the Current Source (4). Technically, the same voltage drop value is reported in the column ‘IR Drop’, but in this column we get more detail – we see what percentage of Vout is being dropped. Engineers usually specify the margin for the acceptable voltage drop as a percentage of Vout, and in our experiment we have specified 15%, as reported in the column ‘Specification’. And we see that 18.5% is greater than 15%, therefore we get a ‘Fail_I_&_V’ result (6) for that Current Source.
  • Regarding the current – we have manually specified the current value for each Current Source. The current values in Figure 2 are the same as in Figure 5. We can also specify a margin for the current to report pass or fail. In our example we assigned 108A as the current at the Current Source (5), while 100A is our current limit (4). Therefore, we also get a failed result for the current.
  • As mentioned earlier, we assigned current values for each Current Source, but we didn’t set any current value for the Voltage Source. This is because the tool calculates how much current needs to be assigned to the Voltage Source based on the values at the Current Sources. In our case we have 3 Current Sources: 108A, 63A, and 63A (5). The sum of these three values is 234A, which is reported as the current at the Voltage Source (7). Later we will see that this value is used to calculate the output power at the Voltage Source.
Fig. 5. DC IR simulated data viewed as a ‘Power Tree’
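As a quick sanity check of the Power Tree numbers above, the delivered voltage, IR-drop percentage, and source current can be reproduced with a few lines of arithmetic (the values are copied from Figure 5; this is only a cross-check, not something SIwave needs):

```python
v_out = 1.2               # VRM output voltage, V
dv = 0.222464             # voltage drop along the +1.2V power path, V
i_sinks = [108, 63, 63]   # currents drawn by the three Current Sources, A

v_delivered = v_out - dv            # voltage at the Current Source positive pin
ir_drop_pct = 100 * dv / v_out      # IR drop as a percentage of Vout
i_vrm = sum(i_sinks)                # current the tool assigns to the Voltage Source

print(f"{v_delivered:.4f} V delivered, {ir_drop_pct:.1f} % IR drop, {i_vrm} A at the VRM")
# -> 0.9775 V delivered, 18.5 % IR drop, 234 A at the VRM
```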

Element Data

This option shows us results in the tabular representation. It lists many important calculated data points for specific objects, such as bondwire, current sources, all vias associated with the power distribution network, voltage probes, voltage sources.

Let’s continue reviewing the same power net, +1.2V, and the power distribution to the CPU1 component, as we did for the Power Tree (Figure 5). In the same way, we will go over the details point by point:

  • First and foremost, when we look at the information for the Current Sources, we see a ‘Voltage’ value, which may be confusing. The value reported in this table is 0.7247V (8), which is different from the reported value of 0.977V in the Power Tree in Figure 5 (4). The reason for the difference is that the reported voltage values were calculated at different locations. As mentioned earlier, the reported voltage in the Power Tree is the voltage at the positive pin of the Current Source. The voltage reported in Element Data is the voltage at the negative pin of the Current Source, which doesn’t include the voltage drop across the ground plane of the return path.

To verify the reported voltage values, we can place Voltage Probes (under circuit elements). Once we do that, we will need to rerun the simulation in order to get the results for the probes:

  1. Two terminals of ‘VPROBE_1’ are attached at the positive pin of the Voltage Source and at the positive pin of the Current Source. This probe should show us the voltage difference between the VRM and the IC, which is also the same as the reported Voltage Drop ΔV. And as we can see, ‘VPROBE_1’ = 222.4637mV (13), while ΔV = 222.464mV (3). Correlated perfectly!
  2. Two terminals of ‘VPROBE_GND’ are attached to the negative pin of the Current Source and the negative pin of the Voltage Source. The voltage shown by this probe is the voltage drop across the ground plane.

If we have 1.2V at the positive pin of the VRM, the voltage drops 222.464mV across the power plane, so the positive pin of the IC is supplied with 0.977V. Then 0.724827V (8) is dropped across the Current Source itself, leaving us with (1.2V – 0.222464V – 0.724827V) = 0.252709V at the negative pin of the Current Source. On the return path the voltage drops again across the ground plane by 252.4749mV (14), delivering (0.252709V – 0.252475V) = 234uV back at the negative pin of the VRM. This is the internal voltage drop in the Voltage Source, calculated as the output current at the VRM, 234A (7), multiplied by the Parasitic Resistance at the VRM, 1E-6 Ohm (Figure 3). This is the Series R Voltage (11).

  • The Parallel R Current of the Current Source is calculated as its Voltage, 724.82mV (8), divided by the Parasitic Resistance of the Current Source (Figure 3), 5E+7 Ohm, giving 1.44965E-8 A (9).
  • The current of the Voltage Source reported in the Element Data, 234A (10), is the same value as reported in the Power Tree (the sum of all currents of the Current Sources for the +1.2V power net) = 234A (7). Knowing this value of the current, we can multiply it by the Parasitic Resistance of the Voltage Source (Figure 3), 1E-6 Ohm: (234A * 1E-6 Ohm) = 234E-6V, which is equal to the reported Series R Voltage (11). And considering that 234A is the output current of the Voltage Source, we can multiply it by the output voltage Vout = 1.2V to get a Power Output of (234A * 1.2V) = 280.85W (12).
Fig. 6. DC IR simulated data viewed in the table format as ‘Element Data’
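Here is a compact sketch of the full voltage-loop bookkeeping described above, using the values reported in Figures 5 and 6; again, it is only a cross-check of the reported numbers:

```python
v_out = 1.2            # VRM output, V
dv_power = 0.222464    # drop across the power plane (VPROBE_1), V
v_sink = 0.724827      # voltage across the Current Source (Element Data), V
dv_gnd = 0.2524749     # drop across the ground plane (VPROBE_GND), V
i_vrm = 234.0          # total current through the Voltage Source, A
r_vrm = 1e-6           # Voltage Source parasitic resistance, Ohm
r_sink = 5e7           # Current Source parasitic resistance, Ohm

v_neg_pin = v_out - dv_power - v_sink     # voltage at the Current Source negative pin
v_back_at_vrm = v_neg_pin - dv_gnd        # voltage returned to the VRM negative pin
series_r_voltage = i_vrm * r_vrm          # internal drop inside the Voltage Source
parallel_r_current = v_sink / r_sink      # leakage through the Current Source's parallel R
power_out = i_vrm * v_out                 # output power at the Voltage Source

print(v_neg_pin, v_back_at_vrm, series_r_voltage, parallel_r_current, power_out)
# -> roughly 0.2527 V, 234 uV, 234 uV, 1.45e-8 A, 280.8 W
```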

In addition to all of the calculations and explanations provided above, the video below in Figure 7 highlights the key points of this article.

Fig. 7. Difference between reporting Voltage values in Power Tree and Element Data

Conclusion

By carefully reviewing the Power Tree and Element Data reporting options, we can make many important decisions about the quality of the power delivery network, such as how much voltage gets delivered to each Current Source, how much voltage drop there is on the power net and on the ground net, etc. More valuable information can be extracted from the other DC IR results options, such as ‘Loop Resistance’, ‘Path Resistance’, ‘RL table’, ‘Spice Netlist’, and the full ‘Report’. However, all of these features deserve a separate topic.

As always, if you would like to receive more information related to this topic or have any questions please reach out to us at info@padtinc.com.

Efficient and Accurate Simulation of Antenna Arrays in Ansys HFSS

Unit-cell in HFSS

HFSS offers different methods of creating and simulating a large array. The explicit method, shown in Figure 1(a), might be the first method that comes to mind. This is where you create the exact CAD of the array and solve it. While this is the most accurate method of simulating an array, it is computationally expensive. This method may not be feasible for the initial design of a large array. The use of a unit cell (Figure 1(b)) and array theory helps us start with an estimate of the array performance under a few assumptions. Finite Array Domain Decomposition (or FADDM) takes advantage of the unit cell’s simplicity and creates a full model using the meshing information generated in a unit cell. In this blog we will review the creation of a unit cell. In the next blog we will explain how a unit cell can be used to simulate a large array with FADDM.

Fig. 1 (a) Explicit Array
Fig. 1 (b) Unit Cell
Fig. 1 (c) Finite Array Domain Decomposition (FADDM)

In a unit cell, the following assumptions are made:

  • The pattern of each element is identical.
  • The array is uniformly excited in amplitude, but not necessarily in phase.
  • Edge effects and mutual coupling are ignored
Fig. 2 An array consisting of elements with varying amplitudes and phases can be approximated with array theory, assuming all elements have the same amplitude and the same element radiation pattern. In a unit cell simulation it is assumed that all magnitudes (An’s) are equal (A) and that the far field of each single element is the same.
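To make the assumption explicit, the classical array-factor picture from textbook antenna theory (not an HFSS-specific formula) is:

\[
E_{\mathrm{total}}(\theta,\phi) \;\approx\; EP(\theta,\phi)\sum_{n=1}^{N} A_n\, e^{\,j\left(k\,\hat{r}\cdot\mathbf{r}_n + \beta_n\right)}
\]

where \(EP(\theta,\phi)\) is the element pattern (assumed identical for every element), \(\mathbf{r}_n\) and \(\beta_n\) are the position and excitation phase of element \(n\), and in the unit cell approximation every amplitude \(A_n\) is set to the same value \(A\).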

A unit cell works based on Master/Slave (or Primary/Secondary) boundaries around the cell. Master/Slave boundaries are always paired. In a rectangular cell you may use the new Lattice Pair boundary that was introduced in Ansys HFSS 2020R1. These boundaries are a means of simulating an infinite array and estimating the performance of a relatively large array. The use of a unit cell reduces the required RAM and solve time.

Primary/Secondary (Master/Slave) (or P/S) boundaries can be combined with Floquet port, radiation or PML boundary to be used in an infinite array or large array setting, as shown in Figure 3.

Fig. 3 Unit cell can be terminated with (a) radiation boundary, (b) Floquet port, (c) PML boundary, or combination of them.

To create a unit cell with P/S boundaries, first start with a single element with the exact dimensions of the cell. The next step is creating a vacuum or air box around the cell. For this step, set the padding in the location of the P/S boundaries to zero. For example, Figure 4 shows a microstrip patch antenna that we intend to use as the basis for a 2D array. The array is placed on the XY plane. An air box is created around the unit cell with zero padding in the X and Y directions.

Fig. 4 (a) A unit cell starts with a single element with the exact dimensions as it appears in the lattice
Fig. 4 (b) A vacuum box is added around it

You will notice that in this example the vacuum box is larger than the usual quarter-wavelength size that is typically used when creating a vacuum region around an antenna. We will get to the calculation of this size in a bit; for now, let’s just assign a value or parameter to it, as it will be determined later. The next step is to define the P/S boundaries to generate the lattice. In AEDT 2020R1 this boundary is under the “Coupled” boundary group. There are two methods to create P/S boundaries: (1) Lattice Pair, (2) Primary/Secondary boundary.

Lattice Pair

The Lattice Pair works best for square lattices. It automatically assigns the primary and secondary boundaries. To assign a lattice pair boundary, select the two sides that are supposed to create the infinite periodic cells, right-click->Assign Boundary->Coupled->Lattice Pair, choose a name, and enter the scan angles. Note that scan angles can be assigned as parameters. This feature, introduced in 2020R1, does not require the user to define the UV directions; they are automatically assigned.

Fig. 5 The lattice pair assignment (a) select two lattice walls
Fig. 5 (b) Assign the lattice pair boundary
Fig. 5 (c) After right-clicking and choosing Assign Boundary, choose Lattice Pair
Fig. 5 (d) Phi and Theta scan angles can be assigned as parameters

Primary/Secondary

Primary/Secondary boundaries are the same as what used to be called Master/Slave boundaries. In this case, each Secondary (Slave) boundary should be assigned following a Primary (Master) boundary’s UV directions. First choose the side of the cell for the Primary boundary, then right-click->Assign Boundary->Coupled->Primary. In the Primary Boundary window, define the U vector. Next select the secondary wall, right-click->Assign Boundary->Coupled->Secondary, choose the Primary Boundary, and define the U vector in exactly the same direction as the Primary, then add the scan angles (the same as the Primary scan angles).

Fig. 6 Primary and secondary boundaries highlighted.

Floquet Port and Modes Calculator

A Floquet port excites and terminates waves propagating down the unit cell. These waves are similar to waveguide modes. The Floquet port is always linked to P/S boundaries. A set of TE and TM modes travels inside the cell. However, keep in mind that the number of modes absorbed by the Floquet port is determined by the user. All the other modes are short-circuited back into the model. To assign a Floquet port, two major steps should be taken:

Defining Floquet Port

Select the face of the cell to which you would like to assign the Floquet port. This is determined by the location of the P/S boundaries. The directions of the lattice vectors A and B are defined by the direction of the lattice (Figure 7).

Fig. 7 Floquet port on top of the cell is defined based on UV direction of P/S pairs

The number of modes to be included is defined with the help of the Modes Calculator. In the Mode Setup tab of the Floquet Port window, choose a high number of modes (e.g. 20) and click on Modes Calculator. The Mode Table Calculator will request your input of Frequency and Scan Angles. After selecting those, a table of modes and their attenuation in dB/length units is created. This is your guide in selecting the height of the unit cell and vacuum box. The attenuation multiplied by the height of the unit cell (in the project units, defined in Modeler->Units) should be large enough to make sure the modes are attenuated enough that removing them from the calculation does not cause errors. If the unit cell is too short, then you will see that many modes are not attenuated enough. The product of the attenuation and the height of the airbox should be at least 50 dB. After the correct size for the airbox is calculated and entered, the modes with high attenuation can be removed from the Floquet port definition.

The 3D Refinement tab is used to control the inclusion of the modes in the 3D refinement of the mesh. It is recommended not to select them for the antenna arrays.

Fig. 8 (Left) Determining the scan angles for the unit cell, (Right) Modes Calculator showing the Attenuation

In our example, Figure 8 shows that the 5th mode has an attenuation of 2.59dB/length. The height of the airbox is around 19.5mm, providing 19.5mm * 2.59dB/mm = 50.505dB of attenuation for the 5th mode. Therefore, only the first 4 modes are kept for the calculations. If the height of the airbox were less than 19.5mm, we would need to increase it accordingly to reach an attenuation of at least 50dB.
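A quick back-of-the-envelope check of this rule is sketched below; the 50 dB target and the 2.59 dB/mm attenuation come straight from the text, and the helper function itself is only illustrative.

```python
def min_airbox_height(attenuation_db_per_mm, target_db=50.0):
    """Smallest airbox height that attenuates a given Floquet mode by the target amount."""
    return target_db / attenuation_db_per_mm

print(f"need at least {min_airbox_height(2.59):.1f} mm of airbox")  # -> about 19.3 mm
print(f"19.5 mm gives {19.5 * 2.59:.1f} dB")    # -> about 50.5 dB, so 19.5 mm is enough
```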

Radiation Boundary

A simpler alternative to the Floquet port is a radiation boundary. It is important to note that the size of the airbox should still be kept around the same size that was calculated for the Floquet port, so that higher-order modes are sufficiently attenuated. In this case the traditional quarter-wavelength padding might not be adequate.

Fig. 9 Radiation boundary on top of the unit cell

Perfectly Matched Layer

Although using a radiation boundary is much simpler than a Floquet port, it is not accurate for large scan angles. It can be a good alternative to the Floquet port only if the beam scanning is limited to small angles. Another alternative to the Floquet port is to cover the cell with a layer of PML. This is a good compromise and provides very similar results to Floquet port models. However, the P/S boundaries need to surround the PML layer as well, which means a few additional steps are required. Here is how you can do it:

  1. Reduce the size of the airbox* slightly, so that after adding the PML layer, the unit cell height is the same as the one that was generated using the Modes Calculator. (For example, in our model the airbox height was 19mm + substrate thickness, the PML height was 3mm, so we reduced the airbox height to 16mm.)
  2. Choose the top face and add PML boundary.
  3. Select each side of the airbox and create an object from that face (Figure 10).
  4. Select each side of the PML and create objects from those faces (Figure 10).
  5. Select the two faces that are on the same plane from the faces created from airbox and PML and unite them to create a side wall (Figure 10).
  6. Then assign P/S boundary to each pair of walls (Figure 10).

*Please note that for this method an auto-sized “region” cannot be used; instead, draw a box for the air/vacuum box. A region does not let you create the faces you need to combine with the PML faces.

Fig. 10 Selecting two faces created from airbox and PML and uniting them to assign P/S boundaries

The advantage of PML termination over the Floquet port is that it is simpler and sometimes faster to compute. The advantage over radiation boundary termination is that it provides accurate results for large scan angles. For better accuracy, the mesh for the PML region can be defined as length-based.

Seed the Mesh

To improve the accuracy of the PML model further, an option is to use a length-based mesh. To do this, select the PML box; then, from the project tree in the Project Manager window, right-click on Mesh->Assign Mesh Operation->On Selection->Length Based and select a length smaller than lambda/10.

Fig. 11 Using element length-based mesh refinement can improve the accuracy of PML design

Scanning the Angle

In phased array simulations, we are mostly interested in the performance of the unit cell and the array at different scan angles. To add the scanning option, the phase of the P/S boundaries should be defined by project or design parameters. The parameters can then be used to run a parametric sweep, like the one shown in Figure 12. In this example the theta angle is scanned from 0 to 60 degrees.

Fig. 12 Using a parametric sweep, the scanned patterns can be generated
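As a rough illustration of what the scan-angle parameters translate to physically, the progressive phase across one lattice period follows the standard phased-array relation; HFSS applies the equivalent phasing internally when you enter theta and phi, and the cell size and frequency below are hypothetical.

```python
import math

def lattice_phase_deg(dx_mm, dy_mm, freq_ghz, theta_deg, phi_deg):
    """Progressive phase across one unit cell in x and y for a beam scanned to (theta, phi)."""
    lam_mm = 299.792458 / freq_ghz        # free-space wavelength in mm
    k = 2 * math.pi / lam_mm              # wavenumber, rad/mm
    st = math.sin(math.radians(theta_deg))
    cphi = math.cos(math.radians(phi_deg))
    sphi = math.sin(math.radians(phi_deg))
    phase_x = -k * dx_mm * st * cphi      # phase delay across the x lattice pair, rad
    phase_y = -k * dy_mm * st * sphi      # phase delay across the y lattice pair, rad
    return math.degrees(phase_x), math.degrees(phase_y)

# e.g. a hypothetical 15 mm x 15 mm cell at 10 GHz scanned to theta = 30 deg, phi = 0 deg
print(lattice_phase_deg(15.0, 15.0, 10.0, 30.0, 0.0))   # -> approximately (-90, 0) degrees
```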

Comparing PML and Floquet Port with Radiation Boundary

To see the accuracy of the radiation boundary vs. PML and Floquet Port, I ran the simulations for scan angles up to 60 degrees for a single element patch antenna. Figure 13 shows that the accuracy of the Radiation boundary drops after around 15 degrees scanning. However, PML and Floquet port show similar performance.

Fig. 13 Comparison of radiation patterns using PML (red), Floquet Port (blue), and Radiation boundary (orange).

S Parameters

To compare the accuracy, we can also check the S parameters. Figure 14 shows the comparison of active S at port 1 for PML and Floquet port models. Active S parameters were used since the unit cell antenna has two ports. Figure 15 shows how S parameters compare for the model with the radiation boundary and the one with the Floquet port.

Fig. 14 Active S parameter comparison for different scan angles, PML vs. Floquet Port model.
Fig. 15 Active S parameter comparison for different scan angles, Radiation Boundary vs. Floquet Port model.

Conclusion

The unit cell definition and the options for terminating the cell were discussed here. Stay tuned. In the next blog we will discuss how the unit cell is utilized in modeling antenna arrays.