The Focus

How-To: Connecting Shell Elements in Surface Models with ANSYS SpaceClaim and ANSYS Mechanical

Posted on April 27, 2017, by: Ted Harris

By using the power of ANSYS SpaceClaim to quickly modify geometry, you can set up your surface models in ANSYS Mechanical so they connect easily.  Take a look at this How-To slide deck to see how easy it is to extend geometry and intersect surfaces. PADT-ANSYS-Connecting-Shells-SpaceClaim-Mechanical

Coupling ANSYS Mechanical and Flownex

Posted on April 26, 2017, by: Stephen Theron

The example below demonstrates how to couple Flownex and ANSYS Mechanical using the Mechanical Generic Interface component.

For those that don't know, Flownex is a thermal-fluid system modeling tool that is great for modeling heat transfer, flow, pressure, and more in systems.  At PADT we often connect it to ANSYS Mechanical to do more detailed component-level simulation when needed.

Why the need for the link in the first place?
  • It is an automated workflow to couple Flownex and ANSYS through direct mapping of Flownex results (HTC and bulk temperatures) as boundary conditions to an ANSYS thermal analysis.
  • It represents a conjugate heat transfer model with the fluid calculations handled in Flownex.
  • It allows one to easily and quickly investigate fluid flow and heat transfer properties under a wide range of operating conditions.
First we will discuss the steady-state thermal ANSYS Mechanical model that will be linked to Flownex. We have a pipe with arbitrary geometry and material properties. Convection boundary conditions have been applied to both the internal and external pipe walls. The internal bulk temperature will be supplied by Flownex.
  • External BC
    • HTC 100 W/m²K
    • Bulk temperature 22°C
  • Internal BC
    • HTC 1500 W/m²K
    • Bulk temperature will be supplied by Flownex
A command snippet, which calculates the total heat flow through the inner wall surface and writes the value out to a text file called d_result, has been included in the ANSYS Mechanical model. In order to achieve a bidirectional coupling, Flownex will execute the Mechanical APDL batch file. We can generate the Mechanical APDL batch file (ds.dat) from within Mechanical. The solution procedure is as follows:
  1. Flownex modifies the ds.dat file
  2. Flownex executes the modified ds.dat file
  3. The modified ds.dat file generates the d_result.txt file
  4. Flownex reads the d_result.txt file
  5. Flownex executes an iteration, using value from d_result.txt
  6. Repeat until the solutions are converged (a rough sketch of this loop is shown below).
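Flownex's Mechanical Generic Interface component performs these steps for you, but to make the data flow concrete, here is a minimal Python sketch of one pass through the loop. The working directory and the format of d_result.txt are assumptions for illustration; the executable path and command line flags are the ones described below.

import os
import subprocess

ANSYS_EXE = r"C:\Program Files\ANSYS Inc\v180\ansys\bin\winx64\Ansys180.exe"
WORK_DIR = r"C:\FlownexProject\AnsysMechanical_Files"   # assumed project location

def run_coupled_iteration():
    # Steps 1-2: Flownex writes ModifiedData.dat from ds.dat (with the current
    # bulk temperature substituted in) and executes it in batch mode.
    subprocess.check_call(
        [ANSYS_EXE, "-b", "-i", "ModifiedData.dat", "-o", "results"],
        cwd=WORK_DIR)
    # Steps 3-4: the command snippet in the Mechanical model writes d_result.txt;
    # read the heat flow back (the exact file format depends on your snippet).
    with open(os.path.join(WORK_DIR, "d_result.txt")) as f:
        heat_flow = float(f.read().split()[0])
    # Steps 5-6: Flownex uses this heat flow in its next iteration and the loop
    # repeats until the coupled solutions converge.
    return heat_flow
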
The next step after creating the ds.dat file is to set up your Flownex model. The Flownex model comprises a pipe component with arbitrary geometry, filled with air with an inlet temperature and pressure of 500˚C and 120 kPa respectively and a flow rate of approximately 1 kg/s. We have connected the pipe component to the Mechanical Generic Interface using data transfer links. The data transfer links pass the bulk fluid temperature from the pipe to the Mechanical Generic Interface component, and return the heat flow value calculated using ANSYS to the pipe. Next we need to place the ds.dat file in the AnsysMechanical_Files folder, which is located in the Flownex project folder. It is necessary to create a copy of the ds.dat called ModifiedData.dat in the same location. Let's go over the inputs to the Mechanical Generic Interface component in Flownex:

1) Executable location

C:\Program Files\ANSYS Inc\v180\ansys\bin\winx64\Ansys180.exe

This is the path to the ANSYS executable. Pay particular attention to the version number (e.g., 180, 172), as this will be different depending on the version of ANSYS you have installed.

2) Command line parameters

-b -i ModifiedData.dat -o results

Flownex will launch ANSYS and execute the ModifiedData.dat Mechanical APDL batch file from the command line using the above command. A detailed description of command line options can be found in another blog post here.

3) Project files folder, Data file name and Modified data file name

Here we specify the location of the Mechanical APDL batch files.

4) Inputs

Here we will define where in ModifiedData.dat the value from Flownex, fluid temperature in this case, will be placed. This is done by determining what the boundary condition variable and ID are, and finding the prefix before the boundary condition value in the ds.dat file. Typically the variable for temperature is _loadvari and for HTC it is _convari. You can find the boundary condition ID by enabling the display of Beta options in Workbench.
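As a rough illustration of what that substitution amounts to (this is not Flownex's internal implementation, and the boundary condition ID 52 used here is hypothetical), a simple text replacement keyed on the _loadvari prefix might look like:

import re

def set_boundary_value(ds_text, prefix, new_value):
    # Replace the number that follows "<prefix> =" with new_value; the exact
    # spacing of the assignment line in ds.dat may differ, so allow whitespace.
    pattern = re.compile(r"(%s\s*=\s*)\S+" % re.escape(prefix))
    return pattern.sub(lambda m: m.group(1) + str(new_value), ds_text)

with open("ds.dat") as f:
    text = f.read()
# Insert the bulk fluid temperature coming from Flownex (value and ID illustrative).
text = set_boundary_value(text, "_loadvari52", 500.0)
with open("ModifiedData.dat", "w") as f:
    f.write(text)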

5) Outputs

Here we will specify the location of the d_result.txt file that ANSYS generates. It should appear in the same folder as the Mechanical APDL batch files after successful execution.

Flownex and ANSYS will pass data back and forth at every time step of a transient Flownex run. The simulation should continue to run up to, and beyond, the point where the Flownex and ANSYS solutions have converged. If we plot the heat flow or temperature value versus time we should be able to visualize convergence, akin to residual plots when running a CFD simulation, and then manually stop the simulation after the values have stabilized. Below we increase the fluid inlet temperature from 500˚C to 1000˚C after 10 iterations and observe an increase in heat flow from ~1.4 kW to ~2.8 kW.
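For example, if you record the heat flow value returned from ANSYS at every Flownex iteration, a few lines of Python (illustrative only; nothing here comes from the post itself) will give you a convergence plot to watch:

import matplotlib.pyplot as plt

def plot_convergence(heat_flow_history):
    # heat_flow_history: the d_result.txt value recorded after each iteration.
    iterations = range(1, len(heat_flow_history) + 1)
    plt.plot(iterations, heat_flow_history, marker="o")
    plt.xlabel("Flownex iteration")
    plt.ylabel("Heat flow through inner pipe wall [W]")
    plt.title("Coupled Flownex/ANSYS convergence history")
    plt.show()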

Making Thermal Contact Conductance a Parameter in ANSYS Mechanical 18.0 and Earlier with an APDL Command Object

Posted on April 6, 2017, by: Ted Harris

A recent support request from one of our customers was for the ability to make Thermal Contact Conductance, which is sort of a reciprocal of thermal resistance at the contact interface, a parameter so it can be varied in a parametric study.  Unfortunately, this property of contact regions is not exposed as a parameter in the ANSYS Mechanical window like many other quantities are. Fortunately, with ANSYS there is almost always a way. In this case we use the capability of an APDL (ANSYS Parametric Design Language) command object within ANSYS Mechanical, which allows us to access additional functionality that isn't exposed in the Mechanical menus.  Needing to do this is a rare occurrence in recent versions of ANSYS, but I thought this was a good example to explain how it is done, including verifying that it works.

A key capability is that user-defined parameters within a command object have a 'magic' set of parameter names.  These names are ARG1, ARG2, ARG3, etc.  Eric Miller of PADT explained their use in a good PADT Focus blog posting back in 2013.

In this application, we want to be able to vary the value of thermal contact conductance.  A low value means less heat will flow across the boundary between parts, while a high value means more heat will flow.  The default value is a calculated high value of conductance, meaning there is little to no resistance to heat flow across the contact boundary.

In order to make this work, we need to know how the thermal contact conductance is applied.  In fact, it is a property of the contact elements.  A quick look at the ANSYS Help for the CONTA174 or similar contact elements shows that the 14th field in the Real Constants is the defined value of TCC, the thermal contact conductance.  Real Constants are properties of elements that may need to be defined or may be optional values that can be defined.  Knowing that TCC is the 14th field in the real constant set, we can now build our APDL command object.

This is what the command object looks like, including some explanatory comments.  Everything after a "!" is a comment:
! Command object to parameterize thermal contact conductance
! by Ted Harris, PADT, Inc., 3/31/2017
! Note: This is just an example. It is up to the user to create and verify
! the concept for their own application.
! From the ANSYS help, we can see that real constant TCC is the 14th real constant for
! the 17X contact elements. Therefore, we can define an APDL parameter with the desired
! TCC value and then assign that parameter to the 14th real constant value.
!
! We use ARG1 in the Details view for this command snippet to define and enable the
! parameter to be used for TCC.
r,cid        ! tells ANSYS we are defining real constants for this contact pair
             ! any values left blank will not be overwritten from defaults or those
             ! assigned by Mechanical. R command is used for values 1-6 of the real constants
rmore,,,,,,  ! values 7-12 for this real constant set
rmore,,arg1  ! This assigns the value of arg1 to the 14th field of the real constant set
! Now repeat for target side to cover symmetric contact case
r,tid        ! tells ANSYS we are defining real constants for this contact pair
             ! any values left blank will not be overwritten from defaults or those
             ! assigned by Mechanical. R command is used for values 1-6 of the real constants
rmore,,,,,,  ! values 7-12 for this real constant set
rmore,,arg1  ! This assigns the value of arg1 to the 14th field of the real constant set
You may have noticed the ‘cid’ and ‘tid’ labels in the command object.  These identify the integer ‘pointers’ for the contact and target element types, respectively.  They also identify the contact and target real constant set number and material property number.  So how do we know what values of integers are used by ‘cid’ and ‘tid’ for a given contact region?  That’s part of the beauty of the command object: you don’t know the values of the cid and tid variables, but you also don’t need to know them.  ANSYS automatically plugs in the correct integer values for each contact pair simply by us putting the magic ‘cid’ and ‘tid’ labels in the command snippet.  The top of a command object within the contact branch will automatically contain these comments at the top, which explain it:
!   Commands inserted into this file will be executed just after the contact region definition.
!   The type number for the contact type is equal to the parameter "cid".
!   The type number for the target type is equal to the parameter "tid".
!   The real and mat number for the asymmetric contact pair is equal to the parameter "cid".
!   The real and mat number for the symmetric contact pair (if it exists)
!   is equal to the parameter "tid".
Next, we need to know how to implement this in ANSYS Mechanical.  We start with a model of a ball valve assembly, using some simple geometry from one of our training classes.  The idea is that hot water passes through the valve, represented by a constant temperature of 125 F.  There is a heat sink represented at the OD of the ends of the valve at a constant 74 degrees.  There is also some convection on most of the outer surfaces carrying some heat away. The ball valve and the valve housing are separate parts, and contact is used to allow heat to flow from the hotter ball valve into the cooler valve assembly.

Here is the command snippet associated with that contact region.  The ‘magic’ is the ARG1 parameter, which is given an initial value in the Details view BEFORE the P box is checked to make it a parameter.  Wherever we need to define the value of TCC in the command object, we use the ARG1 parameter name, as shown here.

Next, we verify that it actually works as expected.  Here I have set up a table of design points with increasing values of TCC (ARG1).  The output parameter that is tracked is the minimum temperature on the inner surface of the valve housing, where it makes contact with the ball.  If conductance is low, little heat should flow, so the housing remains cool.  If the conductance is high, more heat should flow into the housing, making it hotter.  After solving all the design points in the Workbench window, we see that indeed that’s what happens.  And here is a log scale plot showing temperature rise with increasing TCC:

So, excluding the comments, our command object is six lines long.  With those six lines of text as well as knowledge of how to use the ARG1 parameter, we now have thermal contact conductance that varies as a parameter.  This is a simple case and you will certainly want to test and verify for your own use.  Hopefully this helps with explaining the process and how it is done, including verification.

Active Solution Monitoring during a solve in ANSYS CFX

Posted on April 4, 2017, by: Clinton Smith

One of the cool new features in CFX 18 is the ability to actively review results while the calculation is running. It is supported for steady state and transient calculations, and now includes support for rotating reference frames as well. What follows are some tutorial-esque steps to get you started. crm_10275-active-solution-monitoring-CFDPostR18

On the Functions of Cellular Structures in Nature

Posted on April 3, 2017, by: Dhruv Bhate, PhD

WHY did nature evolve cellular structures? In a previous post, I laid out a structural classification of cellular structures in nature, proposing that they fall into 6 categories. I argued that it is not always apparent to a designer what the best unit cell choice for a given application is. While most mechanical engineers have a feel for what structure to use for high stiffness or energy absorption, we cannot easily address multi-objective problems or apply these to complex geometries with spatially varying requirements (and therefore locally optimum cellular designs). However, nature is full of examples where cellular structures possess multi-objective functionality: bone is one such well-known example. To be able to assign structure to a specific function requires us to connect the two, and to do that, we must identify all the functions in play. In this post, I attempt to do just that and develop a classification of the functions of cellular structures. Any discussion of structure in nature has to contend with a range of drivers and constraints that are typically not part of an engineer's concern. In my discussions with biologists (including my biochemist wife), I quickly run into justified skepticism about whether generalized models associating structure and function can address the diversity and nuance in nature - and I (tend to) agree. However, my attempt here is not to be biologically accurate - it is merely to construct something that is useful and relevant enough for an engineer to use in design. But we must begin with a few caveats to ensure our assessments consider the correct biological context.

1. Uniquely Biological Considerations

Before I attempt to propose a structure-function model, there are some legitimate concerns many have made in the literature that I wish to recap in the context of cellular structures. Three of these in particular are relevant to this discussion and I list them below.

1.1 Design for Growth

Engineers are familiar with "design for manufacturing," where design considers not just the final product but also aspects of its manufacturing, which often place constraints on said design. Nature's "manufacturing" method involves, at the global level of structure, highly complex growth - these natural growth mechanisms have no parallel in most manufacturing processes. Take for example the flower stalk in Fig 1, which is from a Yucca tree that I found in a parking lot in Arizona.

Figure 1. The flower stalk (before bloom) of a Yucca plant in Arizona with overlapping surface cellular structure (Author's image)

At first glance, this looks like a good example of overlapping surfaces, one of the 6 categories of cellular structures I covered before. But when you pause for a moment and query the function of this packing of cells (WHY this shape, size, packing?), you realize there is a powerful growth motive for this design. A few weeks later when I returned to the parking lot, I found many of the Yucca stems simultaneously in various stages of bloom - and captured them in a collage shown in Fig 2. This is a staggering level of structural complexity, including integration with the environment (sunlight, temperature, pollinators) that is both wondrous and for an engineer, very humbling.

Figure 2. From flower stalk to seed pods, with some help from pollinators. Form in nature is often driven by demands of growth. (Author's images)

The lesson here is to recognize growth as a strong driver in every natural structure - the tricky part is determining when the design is constrained by growth as the primary force and when growth can be treated as incidental to achieving an optimum functional objective.

1.2 Multi-functionality

Even setting aside the growth driver mentioned previously, structure in nature is often serving multiple functions at once - and this is true of cellular structures as well. Consider the tessellation of "scutes" on the alligator. If you were tasked with designing armor for a structure, you may be tempted to mimic the alligator skin as shown in Fig. 3.

Figure 3. The cellular scutes on the alligator serve more than just one function: thermal regulation, bio-protection, mobility, and fluid loss mitigation are some of the multiple underlying objectives that have been proposed (CC0 public domain, Attr. Republica)

As you begin to study the skin, you see it is composed of multiple scutes that have varying shape, size and cross-sections - see Fig 4 for a close-up.

Figure 4. Close-up of alligator scutes (Attr: Hans Hillewaert, Flickr, Creative Commons)

The pattern varies spatially, but you notice some trends: there exists a pattern on the top but it is different from the sides and the bottom (not pictured here). The only way to make sense of this variation is to ask what functions these scutes serve. Luckily for us, biologists have given this a great deal of thought and it turns out there are several: bio-protection, thermoregulation, fluid loss mitigation and unrestricted mobility are some of the functions discussed in the literature [1, 2]. So whereas you were initially concerned only with protection (armor), the alligator seeks to accomplish much more - this means the designer needs to either de-confound the various functional aspects spatially or expand the search to other examples of natural armor to develop a common principle that emerges independent of multi-functionality specific to each species.

1.3 Sub-Optimal Design

This is an aspect for which I have not (yet) found an example in the field of cellular structures, so I will borrow a well-known (and somewhat controversial) example [3] to make this point: the giraffe's Recurrent Laryngeal Nerve (RLN), which connects the Vagus Nerve to the larynx as shown in Figure 5 and, it is argued, takes an unnecessarily long, circuitous route to connect these two points.

Figure 5. Observe how the RLN in the giraffe emerges from the Vagus Nerve far away from the thorax: a sub-optimal design that was likely carried along through the generations in aid of prioritizing neck growth (Attr: Vladimir V. Medeyko)

We know that from a design standpoint, this is sub-optimal because we have an axiom that states the shortest distance between two points is a straight line. And therefore, the long detour the RLN makes in the giraffe's neck must have some other evolutionary and/or developmental basis (fish do not have this detour) [3]. However, in the case of other entities such as the cellular structures we are focusing on, the complexity of the underlying design principles makes it hard to identify cases where nature has found a sub-optimal design space for the function of interest to us, in favor of other pressing needs determined by selection. What is sufficient for the present moment is to appreciate that such cases may exist and to bear them in mind when studying structure in nature.

2. Classifying Functions

Given the above challenges, the engineer may well ask: why even consider natural form in making determinations involving the design of engineering structures? The biomimic responds by reminding us that nature has had 3.8 billion years to develop a "design guide" and we would be wise to learn from it. Importantly, natural and engineering structures both exist in the same environment and are subject to identical physics and, further, are both often tasked with performing similar functions. In the context of cellular structures, we may thus ask: what are the functions of interest to engineers and designers that nature has addressed through cellular design? Through my reading [1-4], I have compiled the classification of functions in Figure 6, though this is likely to grow over time.

Figure 6. A proposed classification of functions of cellular structures in nature (subject to constant change!)

This broad classification into structural and transport may seem a little contrived, but it emerges from an analyst's view of the world. There are two reasons why I propose this separation:
  1. Structural functions involve the spatial allocation of materials in the construction of the cellular structures, while transport functions involve the structure AND some other entity and their interactions (fluid or light for example) - thus additional physics needs to be comprehended for transport functions
  2. Secondly, structural performance needs to be comprehended independent of any transport function: a cellular structure must retain its integrity over the intended lifetime in addition to performing any additional function
Each of these functions is a fascinating case study in its own right and I highly recommend the site AskNature.org [1] as a way to learn more on a specific application, but this is beyond the scope of the current post. More relevant to our high-level discussion is that having listed the various reasons WHY cellular structures are found in nature, the next question is can we connect the structures described in the previous post to the functions tabulated above? This will be the attempt of my next post. Until then, as always, I welcome all inputs and comments, which you can send by messaging me on LinkedIn. Thank you for reading!

References

  1. AskNature.org
  2. Foy (1983), The grand design: Form and colour in animals, Prentice-Hall, 1st edition
  3. Dawkins (2010), The greatest show on earth: the evidence for evolution, Free Press, Reprint of 1st edition
  4. Gibson, Ashby, Harley (2010), Cellular Materials in Nature and Medicine, Cambridge University Press; 1st edition
  5. Ashby, Evans, Fleck, Gibson, Hutchinson, Wadley (2000), Metal Foams: A Design Guide, Butterworth-Heinemann, 1st edition

Making Solids Water Tight in ANSYS SpaceClaim for ANSYS Workbench Meshing

Posted on March 30, 2017, by: Tom Chadwick

Occasionally when solid geometry is imported from CAD into ANSYS SpaceClaim the geometry will come in as solids, but when a mesh is generated on the solids the mesh will appear to “leak” into the surrounding space. Below is an assembly that was imported from CAD into SpaceClaim. In the SpaceClaim Structure Window all of the parts can be seen to be solid components. When the mesh is generated in ANSYS Mechanical it appears that the assembly has been successfully meshed. However, when you look at the mesh a little closer, the mesh can be missing from some of the surfaces and not displayed correctly on others. Additionally, if you create a cross-section through the mesh, the mesh on some of the parts will “leak” outside of the part boundaries and will look like the image below. Based on the mesh color, the mesh of the part in the center of the assembly has grown outside of the surfaces of the part.

To repair the part you need to go back to SpaceClaim and rebuild it. First you need to hide the rest of the parts. Next, create a sketch plane that passes through the problem part. In the sketch mode create a rectangle that surrounds the part. When you return to 3D mode in SpaceClaim, that rectangle will become a surface that passes through the part. Now use the Pull tool in SpaceClaim to turn that surface into a part that completely surrounds the part to be repaired, making sure to turn on the “No Merge” option for the pull before you begin. After you have pulled the surface into a solid, it should look like the image below, where the original part is completely buried inside the new part.

Now you will use the Combine tool to divide the box with the original part. Select Combine from the Tool Bar, then select the box that you created in the previous step. The cutter will be activated and you will move the cursor around until the original part is highlighted inside the box. Select it with the left mouse button. The Combine tool will then give you the option to select the part of the box that you want to remove. Select the part that surrounds the original part. After it is finished, close the Combine tool and the Structure Tree and 3D window will now look like the following:

Now move the new solid that was created with the Combine tool into the location of the original part, turn off the original one, and re-activate the other parts of the assembly. The assembly and Structure Tree should now look like the pictures below. Now save the project, re-open the meshing tool, and re-generate the mesh. The mesh should now be correct and not “leaking” beyond the part boundaries.

Cellular Design Strategies in Nature: A Classification

Posted on March 28, 2017, by: Dhruv Bhate, PhD

What types of cellular designs do we find in nature? Cellular structures are an important area of research in Additive Manufacturing (AM), including work we are doing here at PADT. As I described in a previous blog post, the research landscape can be broadly classified into four categories: application, design, modeling and manufacturing. In the context of design, most of the work today is primarily driven by software tools that represent complex cellular structures efficiently, as well as analysis tools that enable optimization of these structures in response to environmental conditions and some desired objective. In most of these software tools, the designer is given a choice of selecting a specific unit cell to construct the entity being designed. However, it is not always apparent what the best unit cell choice is, and this is where I think a biomimetic approach can add much value. As with most biomimetic approaches, the first step is to frame a question and observe nature as a student. And the first question I asked is the one described at the start of this post: what types of cellular designs do we find in the natural world around us? In this post, I summarize my findings.

Design Strategies

In a previous post, I classified cellular structures into 4 categories. However, this only addressed "volumetric" structures where the objective of the cellular structure is to fill three-dimensional space. Since then, I have decided to frame things a bit differently based on my studies of cellular structures in nature and the mechanics around these structures. First is the need to allow for the discretization of surfaces as well: nature does this often (animal armor or the wings of a dragonfly, for example). Secondly, a simple but important distinction from a modeling standpoint is whether the cellular structure in question uses beam- or shell-type elements in its construction (or a combination of the two). This has led me to expand my 4 categories into 6, which I now present in Figure 1 below.

Figure 1. Classification of cellular structures in nature: Volumetric - Beam: Honeycomb in bee construction (Richard Bartz, Munich Makro Freak & Beemaster Hubert Seibring), Lattice structure in the Venus flower basket sea sponge (Neon); Volumetric - Shell: Foam structure in Douglas fir wood (U.S. National Archives and Records Administration), Periodic Surface similar to what is seen in sea urchin skeletal plates (Anders Sandberg); Surface: Tessellation on glyptodon shell (Author's image), Scales on a pangolin (Red Rocket Photography for The Children's Museum of Indianapolis)

Setting aside the "why" of these structures for a future post, here I wish to only present these 6 strategies from a structural design standpoint.
  1. Volumetric - Beam: These are cellular structures that fill space predominantly with beam-like elements. Two sub-categories may be further defined:
    • Honeycomb: Honeycombs are prismatic, 2-dimensional cellular designs extruded in the 3rd dimension, like the well-known hexagonal honeycomb shown in Fig 1. All cross-sections through the 3rd dimension are thus identical. Though the hexagonal honeycomb is most well known, the term applies to all designs that have this prismatic property, including square and triangular honeycombs.
    • Lattice and Open Cell Foam: Freeing up the prismatic requirement on the honeycomb brings us to a fully 3-dimensional lattice or open-cell foam. Lattice designs tend to embody higher stiffness levels while open cell foams enable energy absorption, which is why these may be further separated, as I have argued before. Nature tends to employ both strategies at different levels. One example of a predominantly lattice based strategy is the Venus flower basket sea sponge shown in Fig 1, trabecular bone is another example.
  2. Volumetric - Shell:
    • Closed Cell Foam: Closed cell foams are open-cell foams with enclosed cells. This typically involves a membrane-like structure that may be of different thickness from the strut-like members. Plant sections often reveal a closed cell foam, such as the Douglas fir wood structure shown in Fig 1.
    • Periodic Surface: Periodic surfaces are fascinating mathematical structures that often have multiple orders of symmetry similar to crystalline groups (but on a macro-scale) that make them strong candidates for design of stiff engineering structures and for packing high surface areas in a given volume while promoting flow or exchange. In nature, these are less commonly observed, but seen for example in sea urchin skeletal plates.
  3. Surface:
    • Tessellation: Tessellation describes covering a surface with non-overlapping cells (as we do with tiles on a floor). Examples of tessellation in nature include the armored shells of several animals including the extinct glyptodon shown in Fig 1 and the pineapple and turtle shell shown in Fig 2 below.
    • Overlapping Surface: Overlapping surfaces are a variation on tessellation where the cells are allowed to overlap (as we do with tiles on a roof). The most obvious example of this in nature is scales - including those of the pangolin shown in Fig 1.

Figure 2. Tessellation design strategies on a pineapple and the map turtle shell [Scans conducted at PADT by Ademola Falade]

What about Function then?

This separation into 6 categories is driven from a designer's and an analyst's perspective - designers tend to think in volumes and surfaces and the analyst investigates how these are modeled (beam and shell elements are the first level of classification used here). However, this is not sufficient since it ignores the function of the cellular design, which both designer and analyst also need to consider. In the case of tessellation on the skin of an alligator, for example, as shown in Fig 3, was it selected for protection, ease of motion, or for controlling temperature and fluid loss?

Figure 3. Varied tessellation on an alligator conceals a range of possible functions (CC0 public domain)

In a future post, I will attempt to develop an approach to classifying cellular structures that derives not from its structure or mechanics as I have here, but from its function, with the ultimate goal of attempting to reconcile the two approaches. This is not a trivial undertaking since it involves de-confounding multiple functional requirements, accounting for growth (nature's "design for manufacturing") and unwrapping what is often termed as "evolutionary baggage," where the optimum solution may have been sidestepped by natural selection in favor of other, more pressing needs. Despite these challenges, I believe some first-order themes can be discerned that can in turn be of use to the designer in selecting a particular design strategy for a specific application.

References

This is by no means the first attempt at a classification of cellular structures in nature. While the specific 6-part separation proposed in this post was developed by me, it combines ideas from a lot of previous work; three of the best sources, which I strongly recommend as further reading on this subject, are listed below.
  1. Gibson, Ashby, Harley (2010), Cellular Materials in Nature and Medicine, Cambridge University Press; 1st edition
  2. Naleway, Porter, McKittrick, Meyers (2015), Structural Design Elements in Biological Materials: Application to Bioinspiration. Advanced Materials, 27(37), 5455-5476
  3. Pearce (1980), Structure in Nature is a Strategy for Design, The MIT Press; Reprint edition
As always, I welcome all inputs and comments - if you have an example that does not fit into any of the 6 categories mentioned above, please let me know by messaging me on LinkedIn and I shall include it in the discussion with due credit. Thanks!

License Usage and Reporting with ANSYS License Manager Release 18.0

Posted on March 14, 2017, by: Manoj Mahendran

Remember the good old days of having to peruse hundreds and thousands of lines of text in multiple files to see ANSYS license usage information?  Trying to hit Ctrl+F and search for license names?  Well, those days were only about a couple of months ago and they are over…well, for the most part. With the ANSYS License Manager Release 18.0, we have some pretty nifty built-in license reporting tools that help to extract information from the log files so the administrator can see anything from current license usage to peak usage and even any license denials that occur.  Let’s take a look at how to do this. First thing is to open up the License Management Center:
  • In Windows you can find this by going to Start>Programs>ANSYS Inc License Manager>ANSYS License Management Center
  • On Linux you can find this in the ansys directory /ansys_inc/shared_files/licensing/start_lmcenter
This will open up your License Manager in your default browser as shown below.   For the reporting just take a look at the Reporting Section.  We’ll cover each of these 4 options below.

License Management Center at Release 18.0

License Reporting Options

VIEW CURRENT LICENSE USAGE

As the title says, this is where you’ll go to see a breakdown of the current license usage.  What is great is that you can see all the licenses that you have on the server, how many licenses of each are being used, and who is using them (through the color of the bars).  Please note that PADT has access to several ANSYS licenses.  Your list will only include the licenses available for use on your server.

Scrolling page that shows Current License Usage and Color Coded Usernames

You can also click on Show Tabular Data to see a table view that you can then export to excel if you wanted to do your own manipulation of the data.

Tabular Data of Current License Usage – easy to export

VIEW LICENSE USAGE HISTORY

In this section you can not only isolate the license usage to a specific time period, but also filter by license type.  You can use the first drop-down to define a time range, whether that is the previous 1 month, 1 year, all available, or even your own custom time range.

Isolate License Usage to Specific Time Period

Once you hit Generate you will be able to then isolate by license name as shown below.  I’ve outlined some examples below as well.  The axis on the left shows number of licenses used.

Filter Time History by License Name

1 month history of ANSYS Mechanical Enterprise

 1 month history of ANSYS CFD

Custom Date Range history of ANSYS SpaceClaim Direct Modeler

VIEW PEAK LICENSE USAGE

This section will allow you to see the peak usage of a particular license during a particular time period and filter it based on date range.  The first step is to isolate to a date range as before, for example 1 month.  Then you can select which month you want to look at data for.

Selecting specific month to look at Peak License Usage

Then you can isolate the data to whether or not you want to look at an operational period of 24/7, Monday to Friday 24/5 or even Monday to Friday 9am-5pm.  This way you can isolate license usage between every day of the week, working week or normal working hours in a week. Again, axis on left shows number of licenses.

Isolating data to 24/7, Weekdays or Weekday Working Hours

 Peak License Usage in March 2017 of ANSYS Mechanical Enterprise (24/7)

Peak License Usage in February 2017 of ANSYS CFD (Weekdays Only)

VIEW LICENSE DENIALS

If any of the users who are accessing the License Manager get license denials due to insufficient licenses or for any other reason, this will be displayed in this section.  Since PADT rarely, if ever, gets license denials, this section is blank for us.  The procedure is identical to the above sections – it involves isolating the data to a time period and filtering the data to your interested quantities.

Isolate data with Time Period as other sections

Although these 4 options don’t include every conceivable filtering method, this should allow managers and administrators to filter through the license usage in many different ways without needing to manually go through all the log files.  This is a very convenient and easy set of options for extracting the information. Please let us know if you have any questions on this or anything else with ANSYS.

DesignCon 2017 Trends in Chip, Board, and System Design

Posted on March 14, 2017, by: Eric Miller

Considered the “largest gathering of chip, board, and systems designers in the country,” with over 5,000 attendees this year and over 150 technical presentations and workshops, DesignCon exhibits state of the art trends in high-speed communications and semiconductor communities. Here are the top 5 trends I noticed while attending DesignCon 2017:

1. Higher data rates and power efficiency.

This is of course a continuing trend and the most obvious. Still, I like to see this trend alive and well because I think this gets a bit trickier every year. Aiming towards 400 Gbps solutions, many vendors and papers were demonstrating 56 Gbps and 112 Gbps channels, with no less than 19 sessions with 56 Gbps or more in the title. While IC manufacturers continue to develop low-power chips, connector manufacturers are offering more vented housings as well as integrated sinks to address thermal challenges.

2. More conductor-based signaling.

PAM4 was everywhere on the exhibition floor and there were 11 sessions with PAM4 in the title. Shielded twinaxial cable was the predominant conductor-based technology, with offerings such as Samtec’s Twinax Flyover and Molex’s BiPass. A touted feature of twinax is the ability to route over components and free up PCB real estate (but there is still concern for enclosing the cabling). My DesignCon 2017 session, titled Replacing High-Speed Bottlenecks with PCB Superhighways, would also fall into this category. Instead of using twinax, I explored the idea of using rectangular waveguides (along with coax feeds), which you can read more about here. I also offered a modular concept that reflects similar routing and real estate advantages.

3. Less optical-based signaling.

Don’t get me wrong, optical-based signaling is still a strong solution for high-speed channels. Many of the twinax solutions are being designed to be compatible with fiber connections and, as Teledyne put it in their QPHY-56G-PAM4 option release at DesignCon, Optical Internetworking Forum (OIF) and IEEE are both rapidly standardizing PAM4-based interfaces. Still, the focus from the vendors was on lower cost conductor-based solutions. So, I think the question of when a full optical transition will be necessary still stands. With that in mind, this trend is relative to what I saw only a couple years back. At DesignCon 2015, it looked as if the path forward was going to be fully embracing optical-based signaling. This year, I saw only one session on fiber and, as far as I could tell, none on photonic devices. That’s compared to DesignCon 2015 with at least 5 sessions on fiber and photonics, as well as a keynote session on silicon photonics from Intel Fellow Dr. Mario Paniccia.

4. More Physics-based Simulations.

As margins continue to shrink, the demand for accurate simulation grows. Dr. Zoltan Cendes, founder of Ansoft, shared the difficulties of electromagnetic simulation over the past 40+ years and how Ansoft (now ANSYS) has improved accuracy, simplified the simulation process, and significantly reduced simulation time. To my personal delight, he also had a rectangular waveguide in his presentation (and I think we were the only two). Dr. Cendes sees high-speed electrical design at a transition point, where engineers have been or will ultimately need to place physics-based simulations at the forefront of the design process, or as he put it, “turning signal integrity simulation inside out.” A closer look at Dr. Cendes’ keynote presentation can be found in DesignNews.

5. More Detailed IC Models.

This may or may not be a trend yet, but improving IC models (including improved data sheet details) was a popular topic among presenters and attendees alike; so if nothing else it was a trend of camaraderie. There were 12 sessions with IBIS-AMI in the title. In truth, I don’t typically attend these sessions, but since behavioral models (such as IBIS-AMI) impact everyone at DesignCon, this topic came up in several sessions that I did attend even though they weren’t focused on this topic. Perhaps with continued development of simulation solutions like ANSYS’ Chip-Package-System, Dr. Cendes’ prediction will one day make comprehensive physics-based design (including IC models) a practical reality. Until then, I would like to share an interesting quote from George E. P. Box that was restated in one of the sessions: “Essentially all models are wrong, but some are useful.” I think this is good advice that I use for clarity in the moment and excitement for the future.

By the way, the visual notes shown above were created by Kelly Kingman from kingmanink.com on the spot during presentations. As an engineer, I was blown away by this. I have a tendency to obsess over details but she somehow captured all of the critical points on the fly with great graphics that clearly relay the message. Amazing!

How To Update The Firmware Of An Intel® Solid-State Drive DC P3600

Posted on March 10, 2017, by: David Mastel

How To Update The Firmware Of An Intel® Solid-State Drive DC P3600 in four easy steps!

The Dr. says to keep that firmware fresh! So in this How-To blog post I show you how to verify and/or update the firmware on a 1.2TB Intel® Solid-State Drive DC P3600 Series NVMe MLC card.

CUBE Workstation Specifications - The Tester

PADT, Inc. – CUBE w32i Numerical Simulation Workstation

  • 2 x 16c @2.6GHz/ea. (INTEL XEON e5-2697A V4 CPU), 40M Cache, 9.6GT, 145 Watt/each
  • Dual Socket Super Micro X10DAi motherboard
  • 8 x 32GB DDR4-2400MHz ECC REG DIMM
  • 1 x NVIDIA QUADRO M2000 - 4GB GDDR5
  • 1 x  Intel® DC P3600 1.2TB, NVMe PCIe 3.0, MLC AIC 20nm
  • Windows 7 Ultimate Edition 64-bit

Step 1: Prepping

Check for and download the latest downloads for the Intel® Solid-State Drive DC P3600 Series here: https://downloadcenter.intel.com/product/81000/Intel-SSD-DC-P3600-Series You will need the latest versions of the following:
  • Intel® Solid State Drive Toolbox
  • Intel® SSD Data Center Tool
  • Intel® SSD Data Center Family for NVMe Drivers

Step 2: Installation

After installing the Intel® Solid State Drive Toolbox and the Intel® SSD Data Center Tool, reboot the workstation and move on to the next step.
INTEL SSD Toolbox

INTEL SSD Toolbox Install

Step 3: Trust But Verify

Check the status of the 1.2TB NVMe card by running the Intel® SSD Data Center Tool. I am using Windows 7 Ultimate 64-bit as the operating system and running the Intel® Data Center Tool within an elevated command prompt (Right-Click --> Run As... Administrator). Command line text: isdct show -intelssd
INTEL DATA Center Command Line Tool

As the image below indicates, the firmware for this 1.2TB NVMe card is happy and its firmware is up to date! Yay! If you have more than one SSD, take note of the Drive Number.
  • Pro Tip - In this example the INTEL DC P3600 is Drive number zero. You can gather this information from the output syntax. --> Index : 0
Below is what the command line output text looks like while the firmware process is running.

C:\isdct>isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): y
Updating firmware...
The selected Intel SSD contains current firmware as of this tool release.

isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): n
Canceled.

isdct.exe load -f -intelssd 0
Updating firmware...
The selected Intel SSD contains current firmware as of this tool release.

isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): y
Updating firmware...
Firmware update successful.

Step 4: Reboot Workstation

The firmware update process has been completed. Reboot the workstation, for example with: shutdown /r

Using External Data in ANSYS Mechanical to Tabular Loads with Multiple Variables

Posted on March 9, 2017, by: Alex Grishin

ANSYS Mechanical is great at applying tabular loads that vary with an independent variable, say time or Z.  What if you want a tabular load that varies in multiple directions and time? You can use the External Data tool to do just that. You can also create a table with a single variable and modify it in the Command Editor. In the presentation below, I show how to do all of this in a step-by-step description. PADT-ANSYS-Tabular-Loading-ANSYS-18
You can also download the presentation here.

Experiences with Developing a “Somewhat Large” ACT Extension in ANSYS

Posted on March 7, 2017, by: Matt Sutton

With each release of ANSYS the customization toolkit continues to evolve and grow.  Recently I developed what I would categorize as a decent sized ACT extension.    My purpose in this post is to highlight a few of the techniques and best practices that I learned along the way.

Why I chose C#

Most ACT extensions are written in Python.  Python is a wonderfully useful language for quickly prototyping and building applications of all shapes and sizes.  Its weaker type system, plethora of libraries, large ecosystem, and native support directly within the ACT console make it a natural choice for most ACT work.  So, why choose to move to C#? The primary reasons I chose to use C# instead of Python for my ACT work were the following:
  1. I prefer the slightly stronger type safety afforded by the more strongly typed language. Having a definitive compilation step forces me to show my code first to a compiler.  Only if and when the compiler can generate an assembly for my source do I get to move to the next step of trying to run/debug.  Bugs caught at compile time are the cheapest and generally easiest bugs to fix.  And, by definition, they are the most likely to be fixed.  (You’re stuck until you do…)
  2. The C# development experience is deeply integrated into the Visual Studio developer tool. This affords not only a great editor in which to write the code, but more importantly perhaps the world’s best debugger to figure out when and how things went wrong.   While it is possible to both edit and debug python code in Visual Studio, the C# experience is vastly superior.

The Cost of Doing ACT Business in C#

Unfortunately, writing an ACT extension in C# does incur some development cost in terms of setting up the development environment to support the work.  When writing an extension solely in Python you really only need a decent text editor.  Once you set up your ACT extension according to the documented directory structure protocol, you can just edit the Python script files directly within that directory structure.  If you recall, ACT requires an XML file to define the extension and then a directory with the same name that contains all of the assets defining the extension like scripts, images, etc…  This “defines” the extension.

When it comes to laying out the requisite ACT extension directory structure on disk, C# complicates things a bit.  As mentioned earlier, C# involves a compilation step that produces a DLL.  This DLL must then somehow be loaded into Mechanical to be used within the extension.  To complicate things a little further, Visual Studio uses a predefined project directory structure that places the build products (DLLs, etc…) within specific directories of the project depending on what type of build you are performing.  Therefore the compiled DLL may end up in any number of different directories depending on how you decide to build the project.  Finally, I have found that the debugging experience within Visual Studio is best served by leaving the DLL located precisely wherever Visual Studio created it. Here is a summary list of the requirements/problems I encountered when building an ACT extension using C#:
  1. I need to somehow load the produced DLL into Mechanical so my extension can use it.
  2. The DLL that is produced during compilation may end up in any number of different directories on disk.
  3. An ACT Extension must conform to a predefined structural layout on the filesystem. This layout does not map cleanly to the Visual studio project layout.
  4. The debugging experience in Visual Studio is best served by leaving the produced DLL exactly where Visual Studio left it.
The solution that I came up with to solve these problems was twofold. First, the issue of loading the proper DLL into Mechanical was solved by using a combination of environment variables on my development machine in conjunction with some Python programming within the ACT main Python script.  Yes, even though the bulk of the extension is written in C#, there is still a Python script to sort of boot-load the extension into Mechanical.  More on that below.

Second, I decided to completely rebuild the ACT extension directory structure on my local filesystem every time I built the project in C#.  To accomplish this, I created in Visual Studio what are known as post-build events that allow you to specify an action to occur automatically after the project is successfully built.  This action can be quite generic.  In my case, the “action” was to locally run a Python script and provide it with a few arguments on the command line.  More on that below.

Loading the Proper DLL into Mechanical

As I mentioned above, even an ACT extension written in C# requires a bit of Python code to bootstrap it into Mechanical.  It is within this bit of Python that I chose to tackle the problem of deciding which DLL to actually load.  The code I came up with looks like the following: Essentially what I am doing above is querying for the presence of a particular environment variable that is on my machine.  (The assumption is that it wouldn’t randomly show up on an end user’s machine…) If that variable is found and its value is 1, then I determine whether or not to load a debug or release version of the DLL depending on the type of build.  I use two additional environment variables to specify where the debug and release directories for my Visual Studio project exist.  Finally, if I determine that I’m running on a user’s machine, I simply look for the DLL in the proper location within the extension directory.  Setting up my Python script in this way enables me to forget about having to edit it once I’m ready to share my extension with someone else.  It just works.
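The original post showed this boot-loader script as a screenshot, so the listing below is only a rough sketch of the idea rather than the author's exact code; the environment-variable names, DLL name, and directory layout are all hypothetical.

# Runs inside Mechanical's IronPython interpreter, where the 'clr' module is
# available for loading .NET assemblies into the ACT extension.
import os
import clr

def load_extension_dll(extension_dir, debug_build=True):
    # On a developer machine (flagged by an environment variable that end users
    # will not have), load the DLL straight from the Visual Studio build output
    # so breakpoints and symbols line up for debugging.
    if os.environ.get("MYEXT_DEV_MACHINE") == "1":              # hypothetical variable
        build_dir = os.environ.get(
            "MYEXT_DEBUG_DIR" if debug_build else "MYEXT_RELEASE_DIR")
        dll_path = os.path.join(build_dir, "MyExtension.dll")   # hypothetical DLL name
    else:
        # Deployed layout: the DLL ships inside the extension directory itself.
        dll_path = os.path.join(extension_dir, "bin", "MyExtension.dll")
    clr.AddReferenceToFileAndPath(dll_path)
    return dll_path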

Rebuilding the ACT Extension Directory Structure

The final piece of the puzzle involves rebuilding the ACT extension directory structure upon the completion of a successful build.  I do this for a few different reasons.
  1. I always want to have a pristine copy of my extension laid out on disk in a manner that could be easily shared with others.
  2. I like to store all of the various extension assets, like images, XML files, python files, etc… within the Visual Studio Project. In this way, I can force the project to be out of date and in need of a rebuild if any of these files change.  I find this particularly useful for working with the XML definition file for the extension.
  3. Having all of these files within the Visual Studio Project makes tracking thing within a version control system like SVN or git much easier.
As I mentioned before, to accomplish this task I use a combination of local Python scripting and post-build events in Visual Studio.  I won’t show the entire Python code, but essentially what it does is programmatically work through my local file system where the C# code is built and extract all of the files needed to form the ACT extension.  It then deletes any old extension files that might exist from a previous build and lays down a completely new ACT extension directory structure in the specified location.  The definition of the post-build event is specified within the project settings in Visual Studio as follows: As you can see, all I do is call out to the system Python interpreter and pass it a script with some arguments.  Visual Studio provides a great number of predefined variables that you can use to build up the command line for your script.  So, for example, I pass in a string that specifies what type of build I am currently performing, either “Debug” or “Release”.  Other strings are passed in to represent directories, etc…
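The post-build command line and script were shown as screenshots in the original post; as a minimal sketch of the approach (the script name, extension name, and output location are hypothetical, while $(ConfigurationName), $(TargetDir), and $(ProjectDir) are standard Visual Studio macros), it might look something like this:

# rebuild_extension.py -- illustrative post-build helper, invoked from a Visual
# Studio post-build event along the lines of:
#   python "$(ProjectDir)rebuild_extension.py" "$(ConfigurationName)" "$(TargetDir)" "$(ProjectDir)"
import os
import shutil
import sys

EXTENSION_NAME = "MyExtension"                    # hypothetical extension name
OUTPUT_ROOT = r"C:\ACT\extensions"                # hypothetical deployment folder

def rebuild_extension(config, target_dir, project_dir):
    ext_dir = os.path.join(OUTPUT_ROOT, EXTENSION_NAME)
    # Delete any old extension files left over from a previous build.
    if os.path.isdir(ext_dir):
        shutil.rmtree(ext_dir)
    os.makedirs(os.path.join(ext_dir, "bin"))
    # Copy the assets stored in the Visual Studio project (XML definition,
    # Python scripts, images) into the documented ACT directory structure.
    shutil.copy(os.path.join(project_dir, EXTENSION_NAME + ".xml"), OUTPUT_ROOT)
    for asset in ("scripts", "images"):
        src = os.path.join(project_dir, asset)
        if os.path.isdir(src):
            shutil.copytree(src, os.path.join(ext_dir, asset))
    # Copy the DLL that this build just produced (Debug or Release).
    shutil.copy(os.path.join(target_dir, EXTENSION_NAME + ".dll"),
                os.path.join(ext_dir, "bin"))
    print("Rebuilt %s (%s build)" % (EXTENSION_NAME, config))

if __name__ == "__main__":
    rebuild_extension(*sys.argv[1:4])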

The Synergies of Using Both Approaches

Finally, I will conclude with a note on the synergies you can achieve by using both of the approaches mentioned above.  One of the final enhancements I made to my post build script was to allow it to “edit” some of the text based assets that are used to define the ACT extension.  A text based asset is something like an XML file or python script.  What I came to realize is that certain aspects of the XML file that define the extension need to be different depending upon whether or not I wish to debug the extension locally or release the extension for an end user to consume.  Since I didn’t want to have to remember to make those modifications before I “released” the extension for someone else to use, I decided to encode those modifications into my post build script.  If the post build script was run after a “debug” build, I coded it to configure the extension for optimal debugging on my local machine.  However, if I built a “release” version of the extension, the post build script would slightly alter the XML definition file and the main python file to make it more suitable for running on an end user machine.   By automating it in this way, I could easily build for either scenario and confidently know that the resulting extension would be optimally configured for the particular end use.

Conclusions

Now that I have some experience in writing ACT extensions in C# I must honestly say that I prefer it over Python.  Much of the “extra plumbing” that one must invest in in order to get a C# extension up and running can be automated using the techniques described within this post.  After the requisite automation is set up, the development process is really straightforward.  From that point onward, the increased debugging fidelity, added type safety, and familiarity of a C-based language make the development experience that much better!  Also, there are some cool things you can do in C# that I’m not 100% sure you can accomplish in Python alone.  More on that in later posts! If you have ideas for an ACT extension to better serve your business needs and would like to speak with someone who has developed some extensions, please drop us a line.  We’d be happy to help out however we can!

Connection Groups and Your Sanity in ANSYS Mechanical

Posted on March 2, 2017, by: Doug Oatis

You kids don’t know how good you have it with automatic contact creation in Mechanical.  Back in my day, I’d have to use the contact wizard in MAPDL or show off my mastery of the ESURF command to define contacts between parts.  Sure, there were some macros somewhere on the interwebs that would go through and loop for surfaces within a particular offset, but for the sake of this stereotypical “old-tyme” rant, I didn’t use them (I actually didn’t, I was just TOO good at using ESURF to need anyone else’s help).

Image result for old tyme

Hey, it gets me from point A to B

In Mechanical, contact is automatically generated based on a set of rules contained in the ‘Connection Group’ object:

image

It might look a little overwhelming, but really the only thing you’ll need to play around with is the ‘Tolerance Type’.  This can be either ‘Slider’ or ‘Value’ (or use sheet thickness if you’re working with shells).  What this controls is the face offset value for which Mechanical will automatically build contact.  So in the picture shown above, faces that are 5.9939E-3 in apart will automatically have contact created.  You can play around with the slider value to change what the tolerance is:
[Images: the auto-detection tolerance at three different slider settings]
As you can see, the smaller the tolerance slider, the larger the ‘acceptable’ gap becomes.  If you change the Tolerance Type to ‘Value’ then you can just directly type in a number.  Typically the default values do a pretty good job automatically defining contact.  However, what happens if you have a large assembly with a lot of thin parts?  Then what you run into is nonsensical contact between parts that don’t actually touch (full disclosure, I actually had to modify the contact settings to have the auto-generated contact do something like this…but I have seen this in other assemblies with very thin/slender parts stacked on top of each other):

[Image: auto-generated contact between a bolt head and a plate, skipping the washer in between]

In the image above, we see that contact has been defined between the bolt head and a plate when there is clearly a washer present.  We can fix this by going in and specifying a value of 0, meaning that only surfaces that are touching will have contact defined.  But now let’s say that some parts of your assembly aren’t touching (maybe it’s bad CAD, maybe it’s a welded assembly, maybe you suppressed parts that weren’t important).

[Image: manual contact definition options]

The brute-force way to handle this would be to set the auto-detection value to 0 and then go back and manually define the missing contacts using the options shown in the image above.  Or, what we could do is break the auto-contact up into groups and apply appropriate rules to each group as necessary.  The other benefit to this is that if you’re working in large assemblies, you can retain your sanity by having contact generated region by region.  In the words of the original FE-guru, Honest Abe, it’s easier to manage things when they’re logically broken up into chunks.

[Image: Abraham Lincoln quote meme]

Said No One Ever

Sorry...that was bad.  I figured in the new alt-fact world with falsely-attributed quotes to historical leaders, I might as well make something up for the oft-overlooked FE-crowd.

So, how do you go about implementing this?  Easy: first, just delete the default connection group (right-mouse-click on it and select Delete).  Next, just select a group of bodies and click the ‘Connection Group’ button:

[Images: selecting the bolts and washers and creating a connection group for them]
In the image series above, I selected all the bolts and washers and clicked the Connection Group button, and now I have a connection group that will only automatically generate contact between the bolts and washers.  I don’t have to worry about contact being generated between the bolt and plate.  Rinse, lather, and repeat the process until you’ve created all the groups you want:

[Image: the Connections branch showing all the connection groups]

ALL the Connection Groups!

Now that you have all these connection groups, you can fine-tune the auto-detection rules to meet the ‘needs’ of those individual body groups.  Just zooming in on one of the groups:

[Image: one of the connection groups]

By default, when I generate contact for this group I’ll get two contact pairs:

[Images: the two auto-generated contact pairs]

While this may work, let’s say I don’t want a single contact pair for the two dome-like structures, but two.  That way I can just change the behavior on the outer ‘ring’ to be frictionless and force the top to be bonded:

[Image: the modified auto-detection settings for the group]

I modified the auto-detection tolerance to be a user-defined distance (note that when you type in a number and move your mouse over into the graphics window you will see a bulls-eye that indicates the search radius you just defined).  Next, I told the auto-detection not to group any auto-detected contacts together.  The result is I now get 3 contact pairs defined:

[Images: the three resulting contact pairs]
Now I can just modify the middle contact pair shown in the series above to be frictionless.  I could certainly just manually define the contact regions, but if you have an assembly of dozens or hundreds of parts it’s significantly easier to have Mechanical build up all the contact regions and then just modify individual contact pairs to have the type/behavior/etc. you want (bonded, frictionless, symmetric, asymmetric, custom pinball radius, etc.).  This is also useful if you have bodies that need to be connected via face-to-edge or edge-to-edge contact (you can then set the appropriate priority as to which, if any, of those types should be preserved over others).

The plus side to doing all of this is that after any kind of geometry update you shouldn’t have much, if any, contact ‘repair’ to do.  All the bodies/rules have already been fine-tuned to automatically build what you want/need.  You also know where to look to modify contacts (although using the ‘go to’ functionality makes that pretty easy as well).  That way you can define all these connection groups, leave everything as bonded, and do a preliminary solve to ensure things look ‘okay’.  Then go back and start introducing some more reality into the simulation by allowing certain regions to move relative to each other.

The downside to doing your contacts this way is that you risk missing an interface because you’re now defining the load path.  To deal with that, you can just insert a dummy modal environment into your project, solve, and check that you don’t have any 0-Hz modes.
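As a side note, if you find yourself setting up the same groups on assembly after assembly, the same idea can be scripted through the ACT automation API.  Treat the sketch below as exactly that, a sketch: the method and property names (Connections, AddConnectionGroup, CreateAutomaticConnections, ToleranceValue) are my assumptions based on the ACT automation reference and should be checked against the documentation for your version, and the body IDs are made up.

# Hedged sketch (IronPython, ACT console/extension): create a connection group
# scoped to a few bodies and auto-generate contact inside it.  All API names
# and the IDs below are assumptions -- verify against the ACT automation
# reference for your release before relying on this.
model = ExtAPI.DataModel.Project.Model

# Scope the group to specific bodies (IDs made up for illustration)
sel = ExtAPI.SelectionManager.CreateSelectionInfo(SelectionTypeEnum.GeometryEntities)
sel.Ids = [101, 102, 103]

group = model.Connections.AddConnectionGroup()   # assumed method name
group.Name = "Bolts and Washers"
group.Location = sel
group.ToleranceValue = Quantity("0 [in]")        # assumed: only touching faces connect
group.CreateAutomaticConnections()               # assumed: builds the contact pairs for this group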

Exploring High-Frequency Electromagnetic Theory with ANSYS HFSS

Posted on February 28, 2017, by: Michael Griesi

I recently had the opportunity to present an interesting experimental research paper at DesignCon 2017, titled Replacing High-Speed Bottlenecks with PCB Superhighways.  The motivation behind the research was to develop a new high-speed signaling system using rectangular waveguides, but the most exciting aspect for me personally was salvaging a (perhaps contentious) 70-year-old first-principles electromagnetic model.  While it took some time to really understand how to apply the mathematics to design, their application led to an exciting convergence of theory, simulation, and measurement.

One of the most critical aspects of the design was exciting the waveguide with a monopole probe antenna.  Many different techniques have been developed to match the antenna impedance to the waveguide impedance at the desired frequency, as well as to increase the bandwidth.  Yet all of them rely on assumptions and empirical measurement studies.  Optimizing a design to nanometer precision empirically would be difficult at best, and even if the answer was found it wouldn’t inherently reveal the physics.  To solve this problem, we needed a first-principles model, a simulation tool that could quickly iterate designs accurately, and some measurements to validate the simulation methodology.

A rigorous first-principles model was developed by Robert Collin in 1960, but this solution has since been forgotten and replaced by simplified rules.  Unfortunately, these simplified rules are unable to deliver an optimal design or offer any useful insight into the critical parameters.  In fairness, Collin’s equations are difficult to implement in design, and validating them with measurement would be tedious and expensive.  Because of this, empirical measurements have been considered a faster and cheaper alternative.  However, we wanted the best of both worlds… we wanted the best design, for the lowest cost, and we wanted the results quickly.

For this study, we used ANSYS HFSS to simulate our designs.  Before exploring new designs, we first wanted to validate our simulation methodology by correlating results with available measurements.  We were able to demonstrate strong agreement between Collin’s theory, ANSYS HFSS simulation, and VNA measurement.

Red simulated S-parameters strongly correlated with blue measurements.

To perform a series of parametric studies, we swept thousands of antenna design iterations across a wide frequency range of 50 GHz for structures ranging from 50-100 guide wavelengths long. High-performance computing gave us the ability to solve return loss and insertion loss S-parameters within just a few minutes for each design iteration by distributing across 48 cores.

Sample Parametric Design Sweep

Finally, we used the lessons we learned from Collin’s equations and the parametric study to develop a new signaling system with probe antenna performance never before demonstrated.  You can read the full DesignCon paper here.  The outcome also pertains to RF applications, in addition to potentially addressing Signal Integrity concerns for future high-speed communication channels.

Rules of thumb are important for fast and practical design, but their application can often be limited.  Competitive innovation demands that we explore beyond these limitations, but the only way to match the speed and accuracy of design rules is to use simulations capable of offering fast design exploration with the same reliability as measurement.  ANSYS HFSS gave us the ability not only to optimize our design, but also to learn about the physics that explains our design and to accurately predict the behavior of new, innovative designs.

Importing Material Properties from Solidworks into ANSYS Mechanical…Finally!

Posted on February 27, 2017, by: Manoj Mahendran

Finally!  One of the most common questions we get from our customers who use Solidworks is “Why can't I transfer my materials from Solidworks? I have to type in the values all over again every time."  Unfortunately, until now, ANSYS has not been able to read the Solidworks material library and pull in that information.

There is great news with ANSYS 18: ANSYS can now import the material properties from Solidworks and use them in an analysis within Workbench.  Let’s see how it works.

I have a Solidworks assembly that I downloaded from GrabCAD.  The creator had pre-defined all the materials for this model, as you can see below.  Once you bring the geometry into Workbench, just ensure that the Material Properties item is checked under the Geometry cell’s properties.  If you don’t see the panel, just right-click on the Geometry cell and click on Properties.  Once you are in ANSYS Mechanical, for example, you will see that the parts are already pre-defined with the material specified in Solidworks.

The trick now is to find out where this material is getting stored.  If we go to Engineering Data, the only thing we will see is Structural Steel.  However, when we go to Engineering Data Sources, that is where we see a new material library called CADMaterials.  That will show you a list of all the materials and their properties that were imported from a CAD tool such as Solidworks, Creo, NX, etc.  You can of course copy the material and store it for future use in ANSYS like any other material.  This will save you from having to manually define all the materials for a part or assembly from scratch within ANSYS.

Please let us know if you have any questions and we’ll be happy to answer them for you.