Video Tips – Two-way connection between Solidworks and ANSYS HFSS

This video will show you how you can set up a two-way connection between Solidworks and ANSYS HFSS so you can modify dimensions from within HFSS itself as you iterate through designs. This eliminates the need to create several different CAD model iterations in Solidworks and allows for a more seamless workflow. Note that this process also works for the other ANSYS electromagnetic tools, such as ANSYS Maxwell.

Assembly Modeling with ANSYS

In my previous article, I wrote about how you get what you pay for with your analysis package. Well, buckle up for some more…but this time we'll just focus on handling assemblies in your structural/thermal simulations. If all you're working on are single components, count yourself lucky. Almost every simulation deals with one part interacting with another. You can simplify your boundary conditions a bit to make them equivalent, but if you have significant bearing stresses, misalignments, etc., you need to include the supporting parts. Better hope your analysis package can handle contact…

First off, contact isn't just for structural simulations. Contact allows you to pass loads across different meshes, meaning you don't need to create a conformal mesh between two parts in order to simulate something. Here's a quick listing of the degrees of freedom supported in ANSYS (don't worry…you don't need to know how to set these options, as ANSYS does it for you when you're in Workbench):

[Image: degrees of freedom supported by contact in ANSYS]

You can use contact for structural, thermal, electrical, porous domain, diffusion, or any combination of those. The rest of this article is going to focus on the structural side of things, but realize that the same concepts apply to essentially any analysis you can do within ANSYS Mechanical.

First, it's incredibly easy to create contact in your assembly. Mechanical automatically looks for surfaces within a certain distance of one another and builds contact. You can further customize the automated process by defining your own connection groups, as I previously wrote about. These connection groups can create contact between faces, edges, solid bodies, shell bodies, and line bodies.

[Image]

Second, not only can you create contact to transfer loads across different parts, but you can also automatically create joints to simulate linkages or 'linearize' complicated contacts (e.g. cylindrical-to-cylindrical contact for pin joints). With these joints you can also specify stops and locks to simulate other components not explicitly modeled. If you want to really model a threaded connection, you can specify the pitch diameter and actually 'turn' your screw to properly develop the shear stress under the bolt head for a bolted joint simulation, without actually needing to model the physical threads (this can also be done using contact geometry corrections).

[Image: Look ma, no threads (modeled)!]

[Image]

If you're *just* defining contact between two surfaces, there's still a lot you can simulate. The default behavior is to bond the surfaces together, essentially welding them so they transmit both tensile and compressive loads. You also have the ability to let the surfaces move relative to each other by defining the contact as frictionless, frictional, rough (infinite coefficient of friction), or no-separation (surfaces don't transmit shear load but will not separate).

[Image]

Some other 'fancy' things you can do with contact: simulate delamination by specifying adhesive properties (mode I, II, or III failure); add a wear model to capture surface degradation due to the normal stress and tangential velocity of your moving surfaces; simulate a critical bonding temperature by specifying at what temperature your contacts 'stick' together instead of sliding; or specify a 'wetted' contact region and see if the applied fluid pressure (not actually solving a CFD simulation, just applying a pressure to open areas of the contact interface) causes your seal to open up.

[Image]

Now, it's one thing to be able to simulate all of these behaviors, but the reason you're running a finite element simulation is that you need to make some kind of engineering judgment. You need to know how the force/heat/etc. transfers through your assembly. Within Mechanical you can easily look at the force for each contact pair by dragging and dropping the connection object (contact or joint) into the solution. This automatically creates a reaction probe that tells you the forces/moments going through that interface. You can also create detailed contour plots of the contact status, pressure, sliding distance, gap, or penetration (depending on the formulation used).

[Image]

[Image]

Again, you can generate all of that information for surface-to-surface, surface-to-edge, or edge-to-edge contact. This allows you to use solids, shells, beams, or any combination you want, for any physics you want, to simulate essentially any real-world application. No need to buy additional modules, pay for special solvers, or fight through meshing issues by trying to 'fake' an assembly through a conformal mesh. Just import the geometry, simplify as necessary (SpaceClaim is pretty awesome if you haven't heard), and simulate it.

For a more detailed, step-by-step look at the process, check out the following video!


Experiences with Developing a “Somewhat Large” ACT Extension in ANSYS

With each release of ANSYS, the customization toolkit (ACT) continues to evolve and grow. Recently I developed what I would categorize as a decent-sized ACT extension. My purpose in this post is to highlight a few of the techniques and best practices that I learned along the way.

Why I Chose C#

Most ACT extensions are written in Python. Python is a wonderfully useful language for quickly prototyping and building applications of all shapes and sizes. Its weaker type system, plethora of libraries, large ecosystem, and native support directly within the ACT console make it a natural choice for most ACT work. So, why choose to move to C#?

The primary reasons I chose to use C# instead of Python for my ACT work were the following:

  1. I prefer the slightly stronger type safety afforded by a more strongly typed language. Having a definitive compilation step forces me to show my code first to a compiler. Only if and when the compiler can generate an assembly for my source do I get to move on to running and debugging. Bugs caught at compile time are the cheapest and generally the easiest bugs to fix. And, by definition, they are the most likely to be fixed. (You're stuck until you fix them…)
  2. The C# development experience is deeply integrated into the Visual Studio developer tool. This affords not only a great editor in which to write the code, but, more importantly, perhaps the world's best debugger for figuring out when and how things went wrong. While it is possible to both edit and debug Python code in Visual Studio, the C# experience is vastly superior.

The Cost of Doing ACT Business in C#

Unfortunately, writing an ACT extension in C# does incur some development cost in terms of setting up the development environment to support the work. When writing an extension solely in Python, you really only need a decent text editor. Once you set up your ACT extension according to the documented directory structure protocol, you can just edit the Python script files directly within that directory structure. If you recall, ACT requires an XML file to define the extension and then a directory with the same name that contains all of the assets defining the extension, like scripts, images, etc. Together these "define" the extension.
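
For a hypothetical extension named MyExtension, that layout looks something like this:

```
MyExtension.xml      <- XML definition file
MyExtension/         <- assets directory with the same name as the XML file
    main.py          <- extension scripts
    bin/             <- compiled DLL (this particular location is illustrative)
    images/          <- icons and other assets
```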

When it comes to laying out the requisite ACT extension directory structure on disk, C# complicates things a bit. As mentioned earlier, C# involves a compilation step that produces a DLL. This DLL must then somehow be loaded into Mechanical to be used by the extension. To complicate things a little further, Visual Studio uses a predefined project directory structure that places the build products (DLLs, etc.) within specific directories of the project, depending on what type of build you are performing. Therefore, the compiled DLL may end up in any number of different directories depending on how you decide to build the project. Finally, I have found that the debugging experience within Visual Studio is best served by leaving the DLL located precisely wherever Visual Studio created it.

Here is a summary of the requirements/problems I encountered when building an ACT extension using C#:

  1. I need to somehow load the produced DLL into Mechanical so my extension can use it.
  2. The DLL that is produced during compilation may end up in any number of different directories on disk.
  3. An ACT extension must conform to a predefined structural layout on the filesystem. This layout does not map cleanly to the Visual Studio project layout.
  4. The debugging experience in Visual Studio is best served by leaving the produced DLL exactly where Visual Studio left it.

The solution that I came up with to solve these problems was twofold.

First, the issue of loading the proper DLL into Mechanical was solved by using a combination of environment variables on my development machine in conjunction with some Python programming within the ACT main Python script. Yes, even though the bulk of the extension is written in C#, there is still a Python script to bootstrap the extension into Mechanical. More on that below.

Second, I decided to completely rebuild the ACT extension directory structure on my local filesystem every time I built the project in C#. To accomplish this, I created in Visual Studio what are known as post-build events, which allow you to specify an action to occur automatically after the project is successfully built. This action can be quite generic. In my case, the "action" was to locally run a Python script and provide it with a few arguments on the command line. More on that below.

Loading the Proper DLL into Mechanical

As I mentioned above, even an ACT extension written in C# requires a bit of Python code to bootstrap it into Mechanical. It is within this bit of Python that I chose to tackle the problem of deciding which DLL to actually load. The code I came up with looked something like the following sketch (the environment variable and file names here are illustrative, not the originals):
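
```python
# main.py -- a sketch of the bootstrap logic; all names are illustrative
import os
import clr  # .NET interop module available to ACT's IronPython

def find_assembly(extension_dir):
    """Return the path of the extension DLL to load."""
    if os.environ.get("MYEXT_DEV_MACHINE") == "1":
        # Development machine: load the DLL straight out of the Visual
        # Studio build output so the debugger sees the freshly built binary.
        if os.environ.get("MYEXT_BUILD_CONFIG", "Debug") == "Debug":
            build_dir = os.environ["MYEXT_DEBUG_DIR"]
        else:
            build_dir = os.environ["MYEXT_RELEASE_DIR"]
        return os.path.join(build_dir, "MyExtension.dll")
    # End-user machine: the DLL ships inside the extension directory itself.
    return os.path.join(extension_dir, "bin", "MyExtension.dll")

# Assumes this script knows its own location inside the extension directory.
clr.AddReferenceToFileAndPath(find_assembly(os.path.dirname(__file__)))
```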

Essentially what I am doing above is querying for the presence of a particular environment variable on my machine. (The assumption is that it wouldn't randomly show up on an end user's machine…) If that variable is found and its value is 1, then I determine whether to load a debug or release version of the DLL, depending on the type of build. I use two additional environment variables to specify where the debug and release directories for my Visual Studio project exist. Finally, if I determine that I'm running on a user's machine, I simply look for the DLL in the proper location within the extension directory. Setting up my Python script in this way enables me to forget about having to edit it once I'm ready to share my extension with someone else. It just works.

Rebuilding the ACT Extension Directory Structure

The final piece of the puzzle involves rebuilding the ACT extension directory structure upon the completion of a successful build.  I do this for a few different reasons.

  1. I always want to have a pristine copy of my extension laid out on disk in a manner that could be easily shared with others.
  2. I like to store all of the various extension assets, like images, XML files, Python files, etc., within the Visual Studio project. In this way, I can force the project to be out of date and in need of a rebuild if any of these files change. I find this particularly useful for working with the XML definition file for the extension.
  3. Having all of these files within the Visual Studio project makes tracking things within a version control system like SVN or git much easier.

As I mentioned before, to accomplish this task I use a combination of local Python scripting and post-build events in Visual Studio. I won't show the entire Python code, but essentially what it does is programmatically work through my local file system where the C# code is built and extract all of the files needed to form the ACT extension. It then deletes any old extension files that might exist from a previous build and lays down a completely new ACT extension directory structure in the specified location. The post-build event itself is defined within the project settings in Visual Studio.
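
A condensed sketch of the approach (the script name and directories are illustrative; $(ConfigurationName), $(TargetDir), and $(ProjectDir) are Visual Studio's predefined build macros):

```python
# rebuild_extension.py -- hypothetical post-build script.
# Invoked from the post-build event defined in the project settings, e.g.:
#   python "$(ProjectDir)rebuild_extension.py" $(ConfigurationName) "$(TargetDir)" "C:\MyExtensions"
import os
import shutil
import sys

def rebuild(configuration, build_dir, dest_root, name="MyExtension"):
    project_dir = os.path.dirname(os.path.abspath(__file__))
    ext_dir = os.path.join(dest_root, name)
    # Delete any extension files left over from a previous build.
    if os.path.isdir(ext_dir):
        shutil.rmtree(ext_dir)
    # Lay down a fresh copy of the documented ACT directory structure.
    os.makedirs(os.path.join(ext_dir, "bin"))
    shutil.copy(os.path.join(project_dir, name + ".xml"), dest_root)
    shutil.copy(os.path.join(project_dir, "main.py"), ext_dir)
    shutil.copy(os.path.join(build_dir, name + ".dll"),
                os.path.join(ext_dir, "bin"))
    # "configuration" ("Debug" or "Release") drives the text-asset tweaks
    # described later in this post.

if __name__ == "__main__":
    rebuild(sys.argv[1], sys.argv[2], sys.argv[3])
```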

As you can see, all I do is call out to the system Python interpreter and pass it a script with some arguments. Visual Studio provides a great number of predefined variables that you can use to build up the command line for your script. So, for example, I pass in a string that specifies what type of build I am currently performing, either "Debug" or "Release". Other strings are passed in to represent directories, etc.

The Synergies of Using Both Approaches

Finally, I will conclude with a note on the synergies you can achieve by using both of the approaches mentioned above. One of the final enhancements I made to my post-build script was to allow it to "edit" some of the text-based assets that define the ACT extension, such as the XML file or the Python scripts. What I came to realize is that certain aspects of the XML definition file need to be different depending on whether I wish to debug the extension locally or release it for an end user to consume. Since I didn't want to have to remember to make those modifications before I "released" the extension for someone else to use, I decided to encode them into my post-build script. If the post-build script was run after a "debug" build, I coded it to configure the extension for optimal debugging on my local machine. However, if I built a "release" version of the extension, the post-build script would slightly alter the XML definition file and the main Python file to make them more suitable for running on an end user's machine. By automating it in this way, I could easily build for either scenario and confidently know that the resulting extension would be optimally configured for the particular end use.
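
As a sketch of that idea, continuing the hypothetical post-build script from above (the XML attribute being edited is illustrative):

```python
def configure_assets(dest_root, name, configuration):
    """Tweak text-based assets to match the intended use of this build."""
    xml_path = os.path.join(dest_root, name + ".xml")
    with open(xml_path) as f:
        text = f.read()
    if configuration == "Release":
        # Illustrative edit: turn off a debug-only setting for end users.
        text = text.replace('debug="True"', 'debug="False"')
    with open(xml_path, "w") as f:
        f.write(text)
    # Called from rebuild() with the same configuration argument.
```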

Conclusions

Now that I have some experience in writing ACT extensions in C#, I must honestly say that I prefer it over Python. Much of the "extra plumbing" that one must invest in to get a C# extension up and running can be automated using the techniques described in this post. After the requisite automation is set up, the development process is really straightforward. From that point onward, the increased debugging fidelity, added type safety, and familiarity of a C-based language make the development experience that much better! Also, there are some cool things you can do in C# that I'm not 100% sure you can accomplish in Python alone. More on that in later posts!

If you have ideas for an ACT extension to better serve your business needs and would like to speak with someone who has developed some extensions, please drop us a line.  We’d be happy to help out however we can!

Linearized Stress – Using Nodal Locations for Path Results in Workbench Mechanical 14.5

Postprocessing results along a path has been part of the Workbench Mechanical capability for several revisions now. We need to define a path as construction geometry on which to map the results, unless we happen to have an edge in the model exactly where we want the path to be, or can use an X-axis intersection with our model. You have the option to 'snap' the path results to nodal locations, but what if you want to use nodal locations to define the path in the first place? We'll see how to do this below.

For more information on “picking your nodes”, see the Focus blog entry written by Jeff Strain last year: http://www.padtinc.com/blog/the-focus/node-interaction-in-mechanical-part-1-picking-your-nodes

The top-level process for postprocessing results along a path is:

  • Define a Path as construction geometry
  • Insert a Linearized Stress result
  • Calculate the desired results along the path using the Linearized Stress item

The key here is to define the path using existing nodes. Why do that? Sometimes it’s easier to figure out where the path should start and stop using nodal locations rather than figure out the coordinates some other way. So, let’s see how we might do that.

  • First, turn on the mesh via the “Show Mesh” button so that it’s visible for the path creation

[Image: the Show Mesh button]

  • From the Model branch in Mechanical, insert Construction Geometry
  • From the new Construction Geometry branch, insert a Path

[Image: inserting a Path under Construction Geometry]

  • Note that the Path must be totally contained by the finite element model, unlike in MAPDL.
  • If you know the starting and ending points of the path, enter them in the Start and End fields in the Details view for the Path.
  • Otherwise, click on the “Hit Point Coordinate” button:

[Image: the Hit Point Coordinate button]

  • Pick the node location for the start point, click apply

[Image: picking the start node]

  • Pick the node location for the end point, click apply

[Image: picking the end node]

  • In the Solution branch, insert Linearized Stress (Normal Stress in this case); set the details:
  • Scoping method=Path
  • Select the Path just created
  • Set the Orientation and Coordinate System values as needed
  • Define Time value for results if needed

[Image: Linearized Stress details]

Results are displayed graphically along the path…

[Image: results displayed along the path]

…as well as in an X-Y plot and a table

[Image: X-Y plot and table of path results]

Besides normal stresses, membrane, bending, and other results can be accessed using these techniques. So, the next time you need to list or plot results along a path, remember that it can be done in Mechanical, and that you can use nodal locations to define the starting and ending points of the path.

ICEM CFD as a Data Compliant System in ANSYS Workbench

ICEM CFD is probably the most capable mesher on the planet. Not only do we here at PADT use it as our preferred tool for creating complex hex meshes, it has a whole host of other capabilities and controls that make it the power user's choice. But one thing that has been frustrating for some time is that we could not easily add it into a project that automatically updates. At 14.5, ICEM CFD is now data compliant, and you can use it in a project with parameters.

[Image: the ICEM CFD system in the ANSYS Workbench project schematic]

If you know ICEM CFD well, you know that there are many aspects of it that do not fit into a project flow, but the most commonly used capabilities do: read in geometry, mesh it, and output nodes and elements to a solver or a node/element-based pre-processor. Because it is node/element based, it does not work with ANSYS Mechanical or other tools that require surface or solid geometry, but it does work with FLUENT, CFX, ANSYS Mechanical APDL (MAPDL), and Polyflow, the ANSYS solvers that can work directly with nodes and meshes. Once it is put into your system, you can modify geometry or ICEM CFD parameters and then update your system to get a new solution.

In this article we will focus on using ICEM CFD with ANSYS MAPDL. That is because 1) most of our readers are ANSYS Mechanical/MAPDL users and 2) it is what I know best. But almost everything we are talking about will work with FLUENT, CFX, and Polyflow.

Why is this a Big Deal?

For the vast majority of users, this is not such a big deal, because they can do all their meshing with ANSYS MAPDL, ANSYS Mechanical, ANSYS Meshing, or FLUENT (with TGrid meshing). But if you cannot, then this is an awesome new capability. This is especially true if you need to use the blocking-based hex meshing built into ICEM CFD.

Getting Started and Things to Know

The first thing we recommend you do is read the help on the ICEM CFD system:

Workbench User Guide // User’s Guide // Systems // Component Systems

Click on ANSYS ICEM CFD and read the whole thing. There are lots of little details that you should be aware of.

The first thing you should note is that if you want to use it with Mechanical APDL, you need to turn on beta features: under Tools > Options > Appearance, scroll down and check "Beta Options".

The next thing to realize is that, from a project standpoint, you can feed an ICEM CFD system with any system that has a geometry cell. Although ICEM CFD itself can read a mesh in and use the external surface of that mesh as geometry, that capability is not currently implemented in Workbench. This means that if the source "geometry" mesh changes, your mesh cannot automatically update. See below for a workaround.

You do need to make sure that your ICEM CFD model is set up to output to your solver type. Make sure you check this when you are setting up your mesh.

If you have worked in Workbench with legacy meshes, you know that named selections can be very important. I did not have enough time to play with all the different options, but it looks like named selections come in from DesignModeler, and if they define a solid, the nodes in that solid get written as a component that goes to the MAPDL solver. However, surface, edge, and vertex named selections do not seem to get passed over at this time. I am contacting ANSYS, Inc. to see if there is a way to turn that on.

It also looks like, if you are using blocking, only the solid elements are written; no corner, edge, or surface elements are output. I will also be checking on this.

The last, and most important, thing to know is that your ICEM CFD model needs to be robust. Anyone who spends a lot of time in ICEM CFD already knows this. If you make a change to geometry or a parameter, the model needs to update reliably. The key to success here is to do your meshing with updates in mind and make it as simple and flexible as possible, especially if you are blocking with Hexa.

A Simple Example

I made a very silly model (these Focus articles are always about silly models) that sort of shows the process you can use. It is not a flat plate with a hole in it, but a block with a cylinder on top.

[Image: the block-and-cylinder geometry]

Nothing too fancy. I made the block dimensions, the cylinder diameter, and its offset into parameters.

This system feeds the ICEM CFD system, where the geometry comes in as points, lines, and surfaces.

[Image: the geometry in ICEM CFD as points, lines, and surfaces]

I then blocked it out:

[Image: the blocking]

And specified meshing sizes:

[Image: mesh sizing settings]

And generated the mesh:

[Image: the generated mesh]

Like I said, a simple model.

Parameters are supported for meshing controls, for any user parameters you want to make and use in Tcl scripts, and for meshing diagnostics.

I made the number of nodes across the width a parameter:

[Image: making the node count a parameter]

Values that you can make into parameters have little white boxes next to them. To make one a Workbench parameter, click on the box and you get the "Blue P" that everyone should know and love from all of the other ANSYS, Inc. applications.

I also wanted mesh output parameters, so I went to Settings > Workbench Parameters > Workbench Output Parameters and set some of those:

[Image: Workbench output parameter settings]

Now when I go back to my project and check out the parameters for my ICEM CFD system I get:

[Image: parameters for the ICEM CFD system]

Now it is time to add the ANSYS Mechanical APDL system. You will want to write a macro that defines material properties, constraints, and loads. Mine also has some output parameters and makes some PNG plots.

This is the mesh I get in MAPDL:

[Image: the mesh in MAPDL]

and here are the results. Exciting:

[Image: the MAPDL results]

To try the whole thing out I made a design study:

[Image: the design point study]

Everything updated just fine, and I got all my output parameters and my plots in my MAPDL directory for each design point (remember to tell it to save all the design points or it deletes them, or use a macro like the one discussed in the bonus article from this posting).

I made an animated GIF of the different meshes for fun:

[Image: animated GIF of the meshes for each design point]

Here is a link to an archive of the project I used:  ICEM-wb-1.wbpz

Doing more with ICEM CFD in a Project

This was a basic example, but the cool thing about the implementation is that it will do much more. If there is a replay file, ICEM CFD will execute it and run whatever scripts you specify in the file. This is how you can get it to work with existing meshes as geometry. And you can do whatever else you want to do.

On an update ICEM CFD does the following:

  1. Updates the geometry if the Tetin file has changed
  2. Runs default Tetra meshing if there is no blocking file and no replay file
  3. Runs the replay file if one exists
  4. Runs default Hexa meshing if a blocking file exists
  5. Converts any blocked mesh to an unstructured mesh file
  6. Converts the unstructured mesh file to a solver input file
  7. Saves the project

So you just need to be aware of this order and plan accordingly. There really is no limit to what you can do.

Next Steps

If there was ever a place to use crawl-walk-run, this is it. Make yourself a very simple model and get a feel for things. Then work with your real geometry doing some simple meshing, maybe just blowing a Tetra mesh on it, then set up your full run. Also, keep the simple model around to try stuff out while you are working with the big model.

The help is very helpful; I recommend that you read it once, then reread it after you have played around with this feature a bit.

The Files View in ANSYS Workbench

[Image: the Files View in ANSYS Workbench]

When you watch someone work with a tool as complex as ANSYS Workbench, you quickly realize that they use different tools and features than you do. One thing I noticed the other day was someone really using the Files View. So I thought I should really make sure I know what is there and take advantage of it. In looking into it I found a few things I was not aware of, and I needed an article, so here we are.

Philosophy of Files in Workbench

Before we get started, you have to realize that the way ANSYS Workbench thinks about files is unique, and you should understand it. The idea originally was that the program itself would manage all your files; you just had to worry about the project file and the directory tree it points to. Therefore, the directory structure in that tree is pretty complex, and the user cannot change the name of a file being used. That is all managed by the program. Times have changed, and there are now a lot of programs that run in Workbench that require the user to know about the files, especially some of the legacy solvers. So we have the Files View to help us with that.

It is very important that you do not go in and rename, delete, or move files around. ANSYS Workbench has no way of knowing that you have done that. You should just use the Files View to find files, edit their content, and deal with files that non-Workbench solvers (FLUENT, MAPDL, etc.) use that are not managed by Workbench.

The Files View

You show your files by toggling the view on and off. Under the View menu there is a Files item. Click on it to turn on the Files View, and click on it again to make it go away.

[Image: the Files item in the View menu]

If you see the check mark but not the view, use View > Reset Window Layout.

[Image: Reset Window Layout in the View menu]

As with any window in the ANSYS Workbench GUI, you can drag the bar at the top of the view, or click the thumbtack in the upper right corner, to break it out as its own window and drag it anywhere you want. I have two monitors, so I like to do that and have a full-size graphics window.

If you look at what is in the view, there are no real surprises. Like a lot of Workbench applications, the information is presented in spreadsheet form. If we take a look at each column we can learn some things:

[Image: the columns of the Files View]

Name:
Nothing spectacular here. The icons are kind of nice to let you know what type of file you are dealing with.

Cell ID:
This one is kind of handy.  It shows you where in your project the file in question is used.  This helps with complex models where you have multiple systems.  If you don’t change the names on your files, then things get confusing quickly.  The Cell ID helps sort it out.

[Image: Cell ID column and the associated project schematic]

Take a look at the Cell ID and the associated project schematic. You can see that the geometry is used in two systems, and that the material properties are used in the Static Structural system.  As you review this, you can see how useful these references can be.

Also notice how some of the files only have a letter for the Cell ID. These are usually solver related files that really apply to the whole system, and not to any one particular cell in the system.

Size:
Not much to say here.  One nice use is to see if your result files are large enough to indicate a successful solve.

Type:
This tells you what type of file you are dealing with, often including the tool that uses it.  What is cool about it is that you can sort on it and you can filter on the file type.  More on that below.

Date Modified:
Always useful for finding out what files were, or were not, created, and what the most recent work is.

Location:
Again, not much to say here. This is where your files are.  Sometimes you can tell a bit more about where the file is used by looking at what directory it is in.

Interacting with the Files View

You can do some cool stuff in the Files View. The most obvious is that you can click on the upside-down triangles and sort by any of the columns, ascending or descending.

[Image: column sorting triangles]

You can also choose Sort Settings… and specify multiple columns to sort on.

[Image: the Sort Settings dialog]

Just add columns and set the Ascending flag as needed. Delete by clicking the X or Remove All.

Notice how the triangle now shows the columns that are being used to sort.

[Image: sort indicators in the column headers]

When you are done sorting, you can click on any of the columns being used in the sort and choose Cancel sorting.

If you click the right mouse button (RMB) on any of the cells, you get two options. They both do what they say: open the folder that contains the file, or bring up the File Type Filter.

[Image: the right-mouse-button menu]

Note: just because you can open the folder does not mean you can go messing around with file names and locations. Only do that with files that are not managed by Workbench.

The File Type Filter will list all of your file types and let you turn on or off the visibility of any of them.

[Image: the File Type Filter dialog]

This can be very useful for a very complicated project.

Some Suggested Uses

So using this tool is not that hard.  A better question than how is why?  Here are some suggestions:

Finding Output Files
Many of the solvers in the ANSYS family create log, error, journal, and output files. Instead of poking around and trying to find them through the operating system, you can quickly use the type filter and maybe sort by Date Modified to find the files you need. Then open up the folder containing them and view the contents.

Extracting a Solve
Sometimes you need to get into the lower levels of the directory structure and gather all the files associated with a particular solve so that you can run them outside of Workbench, or give them to a user who does not use Workbench. Using this tool, you can quickly sort by directory, find the one you need, then bring up the OS file browser.

Managing Macros and Input files
If I’m writing macros or input files, I really don’t want to dig around through directories. So when I’m ready to save my macro, I copy the directory that my solver uses out of the cell in the Files View, then paste it into my text editor’s Save As… dialog.

Making a File Table
Because the information is presented like a spreadsheet, you can copy and paste any of the columns you want right into Excel. This comes in handy for reports, because you can add a column with your own descriptions or notes. To copy, hold down the CTRL key and click on the column label of any columns you want.

Get to Know your Files View

We recommend that you use the Files View all the time, not just when you have to. The more familiar you are with the files the program is using, the better you will understand what is going on when you use the program. Black boxes are fine and dandy when you are learning or in a hurry, but if you are going to be spending a good chunk of your life alone with one of the ANSYS, Inc. products, you should be spending some time looking at what files are created and where they are stored.