With each release of ANSYS the customization toolkit continues to evolve and grow. Recently I developed what I would categorize as a decent sized ACT extension. My purpose in this post is to highlight a few of the techniques and best practices that I learned along the way.
Why I Chose C#
Most ACT extensions are written in Python. Python is a wonderfully useful language for quickly prototyping and building applications of all shapes and sizes. Its dynamic type system, plethora of libraries, large ecosystem, and native support directly within the ACT console make it a natural choice for most ACT work. So, why choose to move to C#?
The primary reasons I chose to use C# instead of Python for my ACT work were the following:
- I prefer the slightly stronger type safety afforded by the more strongly typed language. Having a definitive compilation step forces me to show my code first to a compiler. Only if and when the compiler can generate an assembly for my source do I get to move to the next step of trying to run/debug. Bugs caught at compile time are the cheapest and generally easiest bugs to fix. And, by definition, they are the most likely to be fixed. (You’re stuck until you do…)
- The C# development experience is deeply integrated into the Visual Studio developer tool. This affords not only a great editor in which to write the code, but more importantly perhaps the world’s best debugger to figure out when and how things went wrong. While it is possible to both edit and debug python code in Visual Studio, the C# experience is vastly superior.
The Cost of Doing ACT Business in C#
Unfortunately, writing an ACT extension in C# does incur some development cost in terms of setting up the development environment to support the work. When writing an extension solely in Python you really only need a decent text editor. Once you set up your ACT extension according to the documented directory structure protocol, you can just edit the Python script files directly within that directory structure. If you recall, ACT requires an XML file to define the extension and then a directory with the same name that contains all of the assets defining the extension: scripts, images, etc. This “defines” the extension.
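For reference, that layout looks something like this (the extension name and asset names here are illustrative):

```
ExampleExt.xml        <- XML file defining the extension
ExampleExt/           <- directory with the same name
    main.py           <- boot-load script
    images/           <- toolbar icons, etc.
    bin/              <- where the compiled DLL lives for end users
```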
When it comes to laying out the requisite ACT extension directory structure on disk, C# complicates things a bit. As mentioned earlier, C# involves a compilation step that produces a DLL. This DLL must then somehow be loaded into Mechanical to be used within the extension. To complicate things a little further, Visual Studio uses a predefined project directory structure that places the build products (DLLs, etc…) within specific directories of the project depending on what type of build you are performing. Therefore the compiled DLL may end up in any number of different directories depending on how you decide to build the project. Finally, I have found that the debugging experience within Visual Studio is best served by leaving the DLL located precisely wherever Visual Studio created it.
Here is a summary of the requirements/problems I encountered when building an ACT extension using C#:
- I need to somehow load the produced DLL into Mechanical so my extension can use it.
- The DLL that is produced during compilation may end up in any number of different directories on disk.
- An ACT extension must conform to a predefined structural layout on the filesystem. This layout does not map cleanly to the Visual Studio project layout.
- The debugging experience in Visual Studio is best served by leaving the produced DLL exactly where Visual Studio left it.
The solution that I came up with to solve these problems was twofold.
First, the issue of loading the proper DLL into Mechanical was solved by using a combination of environment variables on my development machine in conjunction with some Python programming within the ACT main python script. Yes, even though the bulk of the extension is written in C#, there is still a python script to sort of boot-load the extension into Mechanical. More on that below.
Second, I decided to completely rebuild the ACT extension directory structure on my local filesystem every time I built the project in C#. To accomplish this, I created in Visual Studio what are known as post-build events, which allow you to specify an action that occurs automatically after the project is successfully built. This action can be quite generic. In my case, the “action” was to locally run a Python script and provide it with a few arguments on the command line. More on that below.
Loading the Proper DLL into Mechanical
As I mentioned above, even an ACT extension written in C# requires a bit of Python code to bootstrap it into Mechanical. It is within this bit of Python that I chose to tackle the problem of deciding which DLL to actually load. The code I came up with looks like the following:
Essentially what I am doing above is querying for the presence of a particular environment variable on my machine. (The assumption is that it wouldn’t randomly show up on an end user’s machine…) If that variable is found and its value is 1, then I determine whether to load a debug or release version of the DLL depending on the type of build. I use two additional environment variables to specify where the debug and release directories for my Visual Studio project exist. Finally, if I determine that I’m running on a user’s machine, I simply look for the DLL in the proper location within the extension directory. Setting up my Python script in this way enables me to forget about having to edit it once I’m ready to share my extension with someone else. It just works.
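The logic described above can be sketched in plain Python like this. Note that the environment variable names, DLL name, and debug flag here are illustrative assumptions, not the exact code from my extension:

```python
import os

# Hedged sketch of the boot-load path logic. MYEXT_DEV, MYEXT_DEBUG_DIR,
# MYEXT_RELEASE_DIR, and MyExtension.dll are invented names for illustration.
def resolve_dll_path(extension_dir, debug_build, environ=os.environ):
    """Return the path of the DLL that should be loaded into Mechanical."""
    if environ.get("MYEXT_DEV") == "1":
        # Developer machine: point at whichever directory Visual Studio
        # used for this build configuration.
        key = "MYEXT_DEBUG_DIR" if debug_build else "MYEXT_RELEASE_DIR"
        return os.path.join(environ[key], "MyExtension.dll")
    # End-user machine: the DLL lives inside the extension directory.
    return os.path.join(extension_dir, "bin", "MyExtension.dll")

# Inside the extension's main.py (IronPython), the resolved path would then
# be loaded with clr.AddReferenceToFileAndPath(resolve_dll_path(...)).
```

Because the environment variables only exist on the development machine, the same script works unchanged when shipped to an end user.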
Rebuilding the ACT Extension Directory Structure
The final piece of the puzzle involves rebuilding the ACT extension directory structure upon the completion of a successful build. I do this for a few different reasons.
- I always want to have a pristine copy of my extension laid out on disk in a manner that could be easily shared with others.
- I like to store all of the various extension assets, like images, XML files, python files, etc… within the Visual Studio Project. In this way, I can force the project to be out of date and in need of a rebuild if any of these files change. I find this particularly useful for working with the XML definition file for the extension.
- Having all of these files within the Visual Studio project makes tracking things within a version control system like SVN or git much easier.
As I mentioned before, to accomplish this task I use a combination of local python scripting and post build events in Visual Studio. I won’t show the entire python code, but essentially what it does is programmatically work through my local file system where the C# code is built and extract all of the files needed to form the ACT extension. It then deletes any old extension files that might exist from a previous build and lays down a completely new ACT extension directory structure in the specified location. The definition of the post build event is specified within the project settings in Visual Studio as follows:
As you can see, all I do is call out to the system python interpreter and pass it a script with some arguments. Visual Studio provides a great number of predefined variables that you can use to build up the command line for your script. So, for example, I pass in a string that specifies what type of build I am currently performing, either “Debug” or “Release”. Other strings are passed in to represent directories, etc…
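A hedged sketch of what such a rebuild script might look like; the directory names, asset list, and extension name are invented for illustration, not taken from my actual project:

```python
import os
import shutil

# Invoked from the Visual Studio post-build event with something like:
#   python rebuild_extension.py $(ConfigurationName) $(TargetDir) <dest>
def rebuild_extension(build_type, target_dir, dest_root, name="MyExtension"):
    """Delete any stale extension layout and rebuild it from build output."""
    ext_dir = os.path.join(dest_root, name)
    # Wipe old extension files from a previous build.
    if os.path.isdir(ext_dir):
        shutil.rmtree(ext_dir)
    os.makedirs(os.path.join(ext_dir, "bin"))
    # Lay down the assets that define the extension. build_type ("Debug"
    # or "Release") is available here for build-specific tweaks.
    shutil.copy(os.path.join(target_dir, name + ".xml"), dest_root)
    shutil.copy(os.path.join(target_dir, "main.py"), ext_dir)
    shutil.copy(os.path.join(target_dir, name + ".dll"),
                os.path.join(ext_dir, "bin"))
    return ext_dir
```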
The Synergies of Using Both Approaches
Finally, I will conclude with a note on the synergies you can achieve by using both of the approaches mentioned above. One of the final enhancements I made to my post build script was to allow it to “edit” some of the text based assets that are used to define the ACT extension. A text based asset is something like an XML file or python script. What I came to realize is that certain aspects of the XML file that define the extension need to be different depending upon whether or not I wish to debug the extension locally or release the extension for an end user to consume. Since I didn’t want to have to remember to make those modifications before I “released” the extension for someone else to use, I decided to encode those modifications into my post build script. If the post build script was run after a “debug” build, I coded it to configure the extension for optimal debugging on my local machine. However, if I built a “release” version of the extension, the post build script would slightly alter the XML definition file and the main python file to make it more suitable for running on an end user machine. By automating it in this way, I could easily build for either scenario and confidently know that the resulting extension would be optimally configured for the particular end use.
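As a sketch of that kind of build-dependent edit, here is one way a post-build script could flip a marker in a text asset. The DEBUG_MODE marker is an invented example, not the actual flag in my extension:

```python
# Hedged illustration of "editing" a text asset per build type.
def configure_for_build(script_text, build_type):
    """Set an assumed DEBUG_MODE marker to match the build type."""
    flag = "True" if build_type == "Debug" else "False"
    return (script_text
            .replace("DEBUG_MODE = True", "DEBUG_MODE = " + flag)
            .replace("DEBUG_MODE = False", "DEBUG_MODE = " + flag))
```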
Now that I have some experience in writing ACT extensions in C# I must honestly say that I prefer it over Python. Much of the “extra plumbing” that one must invest in to get a C# extension up and running can be automated using the techniques described within this post. After the requisite automation is set up, the development process is really straightforward. From that point onward, the increased debugging fidelity, added type safety, and familiarity of a C-based language make the development experience that much better! Also, there are some cool things you can do in C# that I’m not 100% sure you can accomplish in Python alone. More on that in later posts!
If you have ideas for an ACT extension to better serve your business needs and would like to speak with someone who has developed some extensions, please drop us a line. We’d be happy to help out however we can!
What is Topological Optimization? If you’re not familiar with the concept, in finite element terms it means performing a shape optimization utilizing mesh information to achieve a goal such as minimizing volume subject to certain loads and constraints. Unlike parameter optimization such as with ANSYS DesignXplorer, we are not varying geometry parameters. Rather, we’re letting the program decide on an optimal shape based on the removal of material, accomplished by deactivating mesh elements. If the mesh is fine enough, we are left with an ‘organic’ sculpted shape. Ideally we can then create CAD geometry from this organic looking mesh shape. ANSYS SpaceClaim has tools available to facilitate doing this.
Topological optimization has seen a return to prominence in the last couple of years due to advances in additive manufacturing. With additive manufacturing, it has become much easier to make parts with the organic shapes resulting from topological optimization. ANSYS has had topological optimization capability both in Mechanical APDL and Workbench in the past, but the capabilities as well as the applications at the time were limited, so those tools eventually died off. New to the fold are ANSYS ACT Extensions for Topological Optimization in ANSYS Mechanical for versions 17.0, 17.1, and 17.2. These are free to customers with current maintenance and are available on the ANSYS Customer Portal.
In deciding to write this piece, I decided an interesting example would be the brace that is part of all curved saxophones. This brace connects the bell to the rest of the saxophone body, and provides stiffness and strength to the instrument. Various designs of this brace have been used by different manufacturers over the years. Since saxophone manufacturers, like those in other industries, are often looking for product differentiation, the use of an optimized organic shape in this structural component could be a nice marketing advantage.
This article is not intended to be a technical discourse on the principles behind topological optimization, nor is it intended to show expertise in saxophone design. Rather, the intent is to show an example of the kind of work that can be done using topological optimization and will hopefully get the creative juices flowing for lots of ANSYS users who now have access to this capability.
That being said, here are some images of example bell-to-body braces in vintage and modern saxophones. Like anything collectible, saxophones have fans of various manufacturers over the years, and horns produced as early as the 1920s are still being used by some players. The older designs tend to have a simple thin brace connecting two pads soldered to the bell and body on each end. Newer designs can include rings with pivot connections between the brace and soldered pads.
Hopefully those examples show there can be variation in the design of this brace, while not largely tampering with the musical performance of the saxophone in general. The intent was to pick a saxophone part that could undergo topological optimization which would not significantly alter the musical characteristics of the instrument.
The first step was to obtain a CAD model of a saxophone body. Since I was not able to easily find one freely available on the internet that looked accurate enough to be useful, I created my own in ANSYS SpaceClaim using some basic measurements of an example instrument. I then modeled a ‘blob’ of material at the brace location. The idea is that the topological optimization process will remove non-needed material from this blob, leaving an optimized shape after a certain level of volume reduction.
In ANSYS Mechanical, the applied boundary conditions consisted of frictionless support constraints at the thumb rest locations and a vertical displacement constraint at the attachment point for the neck strap. Acceleration due to gravity was applied as well. Other loads, such as sideways inertial acceleration, could have been considered but were ignored for the sake of simplicity in this article. The material property used was brass, with values taken from Shigley and Mitchell’s Mechanical Engineering Design text, 1983 edition.
This plot shows the resulting displacement distribution due to the gravity load:
Now that things are looking as I expect, the next step is performing the topological optimization.
Once the topological optimization ACT Extension has been downloaded from the ANSYS Customer Portal and installed, ANSYS Mechanical will automatically include a Topological Optimization menu:
I set the Design Region to be the blob of material that I want to end up as the optimized brace. I did a few trials with varying mesh refinement. Obviously, the finer the mesh, the smoother the surface of the optimized shape as elements that are determined to be unnecessary are removed from consideration. The optimization Objective was set to minimize compliance (maximize stiffness). The optimization Constraint was set to volume at 30%, meaning reduce the volume to 30% of the current value of the ‘blob’.
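Conceptually, the volume constraint works by discarding the least effective elements. Here is a toy, non-ANSYS sketch of that idea (real topological optimization solvers use iterative, sensitivity-based schemes, not a one-shot cut like this):

```python
def reduce_volume(element_energies, element_volumes, target_fraction=0.30):
    """Toy illustration: keep the highest strain-energy elements until
    only target_fraction of the total volume remains."""
    total = sum(element_volumes)
    # Rank elements by how much strain energy they carry.
    order = sorted(range(len(element_energies)),
                   key=lambda i: element_energies[i], reverse=True)
    kept, kept_volume = [], 0.0
    for i in order:
        if kept_volume + element_volumes[i] > target_fraction * total:
            break
        kept.append(i)
        kept_volume += element_volumes[i]
    return sorted(kept)  # indices of elements that survive
```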
After running the solution and plotting Averaged Node Values, we can see the ANSYS-determined optimized shape:
What is apparent when looking at these shapes is that the ‘solder patch’ where the brace attaches to the bell on one end and the body on the other end was allowed to be reduced. For example, in the left image we can see that a hole has been ‘drilled’ through the patch that would connect the brace to the body. On the other end, the patch has been split through the middle, making it look something like an alligator clip.
Another optimization run was performed in which the solder pads were held as surfaces that were not to be changed by the optimization. The resulting optimized shape is shown here:
Noticing that my optimized shape seemed on the thick side when compared to production braces, I then changed the ‘blob’ in ANSYS SpaceClaim so that it was thinner to start with. With ANSYS it’s very easy to propagate geometry changes, as all of the simulation and topological optimization settings stay tied to the geometry as long as the topology of those items stays the same.
Here is the thinner chunk after making a simple change in ANSYS SpaceClaim:
And here is the result of the topological optimization using the thinner blob as the starting point:
Using the ANSYS SpaceClaim Direct Modeler, the faceted STL file that results from the ANSYS topological optimization can be converted into a geometry file. This can be done in a variety of ways, including a ‘shrink wrap’ onto the faceted geometry as well as surfaces fit onto the facets. Another option is to fit geometry in a more general way in and around the faceted result. These methods can also be combined. SpaceClaim is really a great tool for this. Using SpaceClaim and the topological optimization (faceted) result, I came up with three different ‘looks’ of the optimized part.
Using ANSYS Workbench, it’s very easy to plug the new geometry component into the simulation model that I already had setup and run in ANSYS Mechanical using the ‘blob’ as the brace in the original model. I then checked the displacement and stress results to see how they compared.
First, we have an organic looking shape that is mostly faithful to the results from the topological optimization run. This image is from ANSYS SpaceClaim, after a few minutes of ‘digital filing and sanding’ work on the STL faceted geometry output from ANSYS Mechanical.
This shows the resulting deflection from this first, ‘organic’ candidate:
The next candidate is one where more traditional looking solid geometry was created in SpaceClaim, using the topological optimization result as a guide. This is what it looks like:
This is the same configuration, but showing it in place within the saxophone bell and body model in ANSYS SpaceClaim:
Next, here is the deformation result for our simple loading condition using this second geometry configuration:
The third and final design candidate uses the second set of geometry as a starting point, and then adds a bit of style while still maintaining the topological optimization shape as an overall guide. Here is this third candidate in ANSYS SpaceClaim:
Here is the resulting displacement distribution using this design:
This shows the maximum principal stress distribution within the brace for this candidate:
Again, I want to emphasize that this was a simple example and there are other considerations that could have been included, such as loading conditions other than acceleration due to gravity. Also, while it’s simple to include modal analysis results, in the interest of brevity I have not included them here. The main point is that topological optimization is a tool available within ANSYS Mechanical using the ACT extension that’s available for download on the customer portal. This is yet another tool available to us within our ANSYS simulation suite. It is my hope that you will also explore what can be done with this tool.
Regarding this effort, clearly a next step would be to 3D print one or more of these designs and test it out for real. Time permitting, we’ll give that a try at some point in the future.
This is a common question that we get, particularly from those coming from APDL – how to get nodal and element IDs exposed in ANSYS Mechanical. Whether that’s for troubleshooting or information gathering, this information was not readily available before. This video shows how an ANSYS-developed extension accomplishes that pretty easily.
The extension can be found by downloading “FE Info XX” for the version XX of ANSYS you are using at https://support.ansys.com/AnsysCustom…
Some of you have probably already noticed, but ANSYS Mechanical licenses have some changes at version 17. First, the license that for years has been known as ANSYS Mechanical is now known as ANSYS Mechanical Enterprise. Further, ANSYS, Inc. has enabled significantly more functionality with this license at version 17 than was available in prior versions. Note that the license task in the ANSYS license files, ‘ansys’ has not changed.
| 16.2 and Older (task) | 17.0 (task) |
| --- | --- |
| ANSYS Mechanical (ansys) | ANSYS Mechanical Enterprise (ansys) |
The 17.0 ANSYS License Manager unlocks additional capability with this license, in addition to the existing Mechanical structural/thermal abilities. Previously, each of these tools was an additional cost. The change includes other “Mechanical-” licenses as well, e.g. Mech-EMAG and Mech CFD. The new tools enabled with ANSYS Mechanical Enterprise licenses at version 17.0 are:
- Fatigue Module
- Rigid Body Dynamics
- Explicit STR
- Composite PrepPost (ACP)
- SpaceClaim
- DesignXplorer
- ANSYS Customization Suite
- AQWA
Additionally, at version 17.1 these tools are included as well:
These changes do not apply to the lower level licenses, such as ANSYS Structural and Professional. In fact, these licenses are moving to ‘legacy’ mode at version 17. Two newer products now slot below Mechanical Enterprise. These newer products are ANSYS Mechanical Premium and ANSYS Mechanical Pro. We won’t explain those products here, but your local ANSYS provider can give you more information on these two if needed.
Getting back to the additional capabilities with Mechanical Enterprise, these become available once the ANSYS 17.0 and/or the ANSYS 17.1 license manager is installed. This assumes you have a license file that is current on TECS (enhancements and support). Also, a new license task is needed to enable Simplorer Entry.
Ignoring Simplorer Entry for the moment, once the 17.0/17.1 license manager is installed, the single Mechanical Enterprise license task (ansys) now enables several different tools. Note that:
- Multiple tool windows can be open at once
  - e.g. ANSYS Mechanical and SpaceClaim
- Only one can be “active” at a time
  - If solving, can’t edit geometry in SpaceClaim
- Capabilities are then available in older versions, where applicable, once the 17.0/17.1 license manager is installed
Here is a very brief summary of these newly available capabilities:
Fatigue Module:
- Runs in the Mechanical window
- Can calculate fatigue lives for ‘simple’ products (linear static analysis)
- Stress-life for
  - Constant amplitude, proportional loading
  - Variable amplitude, proportional loading
  - Constant amplitude, non-proportional loading
- Strain-life for
  - Constant amplitude, proportional loading
- Activated by inserting the Fatigue Tool in the Mechanical Solution branch
- Postprocess fatigue lives as contour plots, etc.
- Requires fatigue life data as material properties
Rigid Body Dynamics:
- Runs in the Mechanical window
- ANSYS, Inc.-developed solver using explicit time integration, energy conservation
- Use when only concerned about motion due to joints and contacts
  - To determine forces and moments
- Activated via Rigid Dynamics analysis system in the Workbench window
Explicit STR:
- Runs in the Mechanical window
- Utilizes the Autodyn solver
- For highly nonlinear, short-duration structural transient problems
  - Drop test simulations, e.g.
- Activated via Explicit Dynamics analysis system in the Workbench window
Composite PrepPost (ACP):
- Tools for preparing composites models and postprocessing composites solutions
- Define composite layup
  - Fiber directions and orientations
- Optimize composite design
- Results evaluation
  - Layer stresses
  - Failure criteria
- Activated via ACP (Pre) and ACP (Post) component systems in the Workbench window
SpaceClaim:
- Geometry creation/preparation/repair/defeaturing tool
- Try it, learn it, love it
- A direct modeler, so no history tree
  - Just create/modify on the fly
- Import from CAD or create in SpaceClaim
- Can be an incredible time saver in preparing geometry for simulation
- Activated by right clicking on the Geometry cell in the Workbench project schematic
DesignXplorer:
- Design of Experiments/Design Optimization/Robust Design tool
- Allows for variation of input parameters
  - Geometric dimensions, including from external CAD, license permitting
  - Material property values
  - Mesh quantities such as shell thickness, element size specifications
- Track or optimize on results parameters
  - Max or min stress
  - Max or min temperature
  - Max or min displacement
  - Mass or volume
- Create design of experiments
- Fit response surfaces
- Perform goal-driven optimizations
  - Reduce mass
  - Drive toward a desired temperature
- Understand sensitivities among parameters
- Perform a Design for Six Sigma study to determine probabilities
- Activated by inserting Design Exploration components into the Workbench project schematic
ANSYS Customization Suite:
- Toolkit for customization of ANSYS Workbench tools
- Includes tools for several ANSYS products
- Top level Workbench
- Based on Python and XML
- Wizards and documentation included
AQWA:
- Offshore tool for ship, floating platform simulation
- Uses hydrodynamic diffraction for calculations
- Model up to 50 structures
- Include effects of moorings, fenders, articulated connectors
- Solve in static, frequency, and time domains
- Transfer motion and pressure info to Mechanical
- Activated via Hydrodynamic Diffraction analysis system in the Workbench window
ANSYS AIM:
- New, common user interface for multiphysics simulations
- Capabilities expanding with each ANSYS release (was new at 16.0)
- Uses SpaceClaim as geometry tool
- Single window
- Easy to follow workflow
- Activated from the ANSYS 17.0/17.1 Start menu
Simplorer Entry:
- System-level simulation tool
- Simulate interactions such as between
  - Structural Reduced Order Models
  - Simple circuitry
- Optimize complex system performance
- Understand interactions and trade-offs
- Entry-level tool, limited to 30 models (Simplorer Advanced enables more)
- Activated from the ANSYS Electromagnetics tools (separate download)
- Requires an additional license task from ANSYS, Inc.
Where to get more information:
- Your local ANSYS provider
- ANSYS Help System
- ANSYS Customer Portal
PADT is pleased to announce that we have uploaded a new ACT Extension to the ANSYS ACT App Store. This new extension implements a PID based thermostat boundary condition that can be used within a transient thermal simulation. This boundary condition is quite general purpose in nature. For example, it can be set up to use any combination of (P)roportional, (I)ntegral, or (D)erivative control. It supports locally monitoring the instantaneous temperature of any piece of geometry within the model. For a piece of geometry that is associated with more than one node, such as an edge or a face, it uses a novel averaging scheme implemented using constraint equations so that the control law references a single temperature value regardless of the reference geometry.
The set-point value for the controller can be specified in one of two ways. First, it can be specified as a simple table that is a function of time. In this scenario, the PID ACT Extension will attempt to inject or remove energy from some location on the model such that a potentially different location of the model tracks the tabular values. Alternatively, the PID thermostat boundary condition can be set up to “follow” the temperature value of a portion of the model. This location again can be a vertex, edge or face and the ACT extension uses the same averaging scheme mentioned above for situations in which more than one node is associated with the reference geometry. Finally, an offset value can be specified so that the set point temperature tracks a given location in the model with some nonzero offset.
For thermal models that require some notion of control the PID thermostat element can be used effectively. Please do note, however, that the extension works best with the SI units system (m-kg-s).
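For readers unfamiliar with PID control, the control law can be sketched in a few lines. This is the generic textbook form, not the extension's actual internal implementation, and the gains are illustrative:

```python
# Minimal discrete PID controller sketch for a thermostat-style load.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """Return the heat load to apply for this time step."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Any combination of P, I, or D control falls out of zeroing gains.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```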
With the introduction of ACT, the ANSYS Workbench editors have gained capabilities and shortcuts at a much faster rate than what can be introduced in a development cycle. One of the first and most far-reaching extensions is the acoustics extension. Inevitably I was called on by one of our customers to show them how to do a vibro-acoustics analysis (harmonic with acoustic excitation), which I did. Since the need for this type of analysis is quite broad, I’ll share it here too.
There was an extra level of excitement with this, in that I’m a structures specialist with no prior acoustics experience. So, I did my own self-training on this topic. I have to give tons of credit to Sheldon Imaoka of ANSYS Inc., who took the time to thoroughly answer the questions I had. That being said, this article will be from the standpoint of a structures engineer who’s just recently learned acoustics.
It’s at the very top, under ‘A’ for “Acoustics”
One thing you’ll notice when you unzip the Acoustics Extension package is that it contains an entire acoustics training course. Take advantage of this freebie when learning acoustics analysis. I’ll note that most of the process outlined in this article comes from the Submarine workshop in the acoustics training course.
Once you’ve installed and turned on the Acoustics extension, insert a Harmonic Analysis system into the project schematic, link to the solid geometry file, and specify the material properties for the solid. You’ll specify the properties for the acoustic region in Mechanical under the appropriate Acoustics extension objects.
Rename as you see fit
Assuming you just have the geometry for the solid and not the acoustics domain, create two acoustics regions around the solid. The first region, surrounding the solid, will function as the fluid region itself, through which the acoustic waves travel and interact with the structure. The second region, surrounding the first acoustics region, will function as the Perfectly Matched Layer (PML). The PML essentially acts as the infinite boundary of the system. (If you’re an electromagnetics expert, you already know this and I’m boring you.) You can easily create these domains using the enclosure tool in DesignModeler.
Now we’re ready for the analysis. Open up Mechanical. Look at all those buttons on the Acoustics toolbar! Yikes! Fortunately we just need a few of them.
Here they are
Insert an Acoustic Body and scope it to the acoustic region surrounding the structural solid. In the Details, enter the density and speed of sound for the fluid. Also set the Acoustic-Structural Coupled Body Options to Coupled With Symmetric Algorithm.
Pay attention to the menu picks, Details, and geometry scoping here and in the rest of the image captures
“Coupled” refers to coupled-field behavior, i.e. the mutual interaction between the structure and the fluid. You’re probably familiar with this. You need that, otherwise the acoustic waves are just bouncing off the structure and the structure isn’t doing anything. Regarding the Symmetric Algorithm: The degrees of freedom for the acoustic system consist of both structural displacements and fluid pressures, giving you an asymmetric stiffness matrix. However, ANSYS has incorporated a symmetrization algorithm to convert the asymmetric stiffness matrix to a symmetric matrix, resulting in half as many equations that need to be solved and thus a faster solution time yadda yadda yadda, so go with that.
Now insert another Acoustic Body, this time scoped to the outer acoustic region (body). This is your Perfectly Matched Layer. Specify fluid density and speed of sound as before. This time, leave the Coupled Body Option as Uncoupled. But, set Perfectly Matched Layers to On.
Apply an Acoustic Pressure of zero to the outer faces of the PML body (Boundary Conditions > Acoustic Pressure). As you may have guessed from the menu pick, this is your acoustics boundary condition.
Now we’ll apply some acoustic wave excitation to this thing. From the Excitation menu, select Wave Sources (Harmonic). In the Details, set the Excitation Type to either Pressure or Velocity, set the Source Location and specify the excitation pressure or velocity value. In this example, I went with Pressure since that’s what MIL-STD-810 specifies, but this option will be based on your customer requirements. I also assumed an external acoustic source (hence, Outside the Model), but again, that will be based on your particular project. You also need to specify the vector of the wave source, via rotations about the Z and Y axes (φ and θ). In this case I chose 30 and 60 degrees, respectively, to make it interesting. Once again, enter the density and speed of sound for the fluid.
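If you want to sanity-check what direction those two angles produce, you can compute the unit vector by hand. This helper assumes a starting direction of +X rotated first about Z, then about Y; the convention is illustrative and not necessarily the exact one the acoustics extension uses:

```python
import math

# Illustrative wave-source direction helper (assumed convention:
# start along +X, rotate about Z by phi, then about Y by theta).
def wave_direction(phi_deg, theta_deg):
    p, t = math.radians(phi_deg), math.radians(theta_deg)
    x, y, z = 1.0, 0.0, 0.0            # start along +X
    # Rotate about Z by phi ...
    x, y = x * math.cos(p) - y * math.sin(p), x * math.sin(p) + y * math.cos(p)
    # ... then about Y by theta.
    x, z = x * math.cos(t) + z * math.sin(t), -x * math.sin(t) + z * math.cos(t)
    return (x, y, z)
```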
Insert Scattering Controls under the Analysis Settings menu and specify whether the Field Output should be Total or Scattered. Total gives you constant pressure waves that interact with the solid but not with each other. Scattered gives you waves that interact and interfere with each other as well as with the solid.
Set up the Fluid-Structural Interaction boundary condition where the structural faces are “wetted” by the acoustic domain. The FSI Interface is found under the Boundary Conditions menu.
Apply structural constraints and specify harmonic analysis settings just like you would with a standard harmonic analysis. Make sure you request Stresses under the Output Controls. Solve the model.
Plot your structural results as you would for a typical harmonic analysis. Acoustic Pressure wave results may be found under the Results menu in the Acoustics toolbar. If you used Total field output for the scattering option, you can verify your wave source direction by looking at the Acoustic Pressure Contours. Keep in mind that the contours will be orthogonal to the axis of the sine wave; you may need to put some extra spatial thought into it to fully understand what’s going on.
Acoustic Pressures: Field Output = Total
Acoustic Pressures: Field Output = Scattered
von Mises Stresses, Max Over Phase: Field Output = Scattered
As you’ll note in the training course, there are a number of design questions that can be answered with acoustics analysis. In this article, I’ve addressed what I thought would be one of the more popular applications of acoustics simulation. If the demand is there, I’ll research and compose more articles on various acoustics applications in the future. For instance, another area I’ve examined is natural frequencies of a structure that’s submerged in a fluid. If there’s another acoustics topic you’d like us to write about, please let us know in the comments.
A short video showing how ACT (ANSYS Customization Toolkit) can be used to change Default Settings for analyses done in ANSYS Mechanical. This is a very small subset of the capabilities that ACT can provide. Stay tuned for other videos showing further customization examples.
The example .xml and Python files are located below. Please bear in mind that to use these “scripted” ACT extension files you will need to have an ACT license. Compiled versions of extensions don’t require any licenses to use. Please send me an email (firstname.lastname@example.org) if you are wondering how to translate this example into your own needs.
(View part one of this series here.)
So, I’ve done a little of this Workbench customization stuff in a past life. My customizations involved lots of Jscript, some APDL, sweat and tears. I literally would bang my head against Eric Miller’s office door jamb wondering (sometimes out loud) what I had done to be picked as the Workbench customization guy. Copious amounts of alcohol on weeknights helped some, but honestly it still royally sucked. Because of these early childhood scars, I’ve cringed at the thought of this ACT thingy until now. I figured I’d been there, done that and I had zero, and I mean zero desire to relive any of it.
So, I resisted ACT even after multiple “suggestions” from upper management that I figure out something to do with it. That was wrong of me; I should have been quicker to give ACT a fair shot. ACT is a whole bunch better than the bad ol’ days of JScript. How is it better? Well, it has documentation… Also, there are multiple helpful folks back at Canonsburg and elsewhere that know a few things about it. This is opposed to the days when just one (brilliant) guy in India named Rajiv Rath had ever done anything of consequence with JScript. (Without him, my previous customization efforts would simply have put me in the mad house. Oh, and he happens to know a thing or two about ACT as well…)
Look Ma! My First ACT Extension!
In this post we are going to rig up the PID thermostat boundary condition as a new boundary condition type in Mechanical. In ACT jargon, this is known as adding a pre-processing feature. I’m going to refer you primarily to the training material and documentation for ACT that you can obtain from the ANSYS website on their customer portal. I strongly suggest you log on to the portal and download the training material for version 15.0 now.
Planning the Extension
When we create an ACT extension we need to lay things out on the file system in a certain manner. At a high level, there are three categories of file system data that we will use to build our extension. These types of data are:
- Code. This will be comprised of Python scripts and in our case APDL scripts.
- XML. XML files are primarily used for plumbing and for making the rest of the world aware of who we are and what we do.
- Resources. These files are typically images, icons, etc… that spice things up a little bit.
Any single extension will use all three of these categories of files, and so organizing this mess becomes priority number one when building the extension. Fortunately, there is a convention that ACT imposes on us to organize the files. The following image depicts the structure graphically.
We will call our extension PIDThermostat. Therefore, everywhere ExtensionName appears in the image above, we will replace that with PIDThermostat in our file structure.
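Concretely, our PIDThermostat extension will be laid out like this. The script and image file names here are illustrative (taken from the XML later in this post); the only hard rule is that the directory name matches the base name of the XML file:

```
PIDThermostat.xml           <- defines the extension
PIDThermostat/              <- directory with the SAME base name as the XML
    thermaltools.py         <- Python callbacks (and APDL-writing code)
    images/
        ThermostatGray.png  <- icons referenced by name from the XML
        ThermostatWhite.png
```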
Creating a UI for our Load
The beauty of ACT is that it allows us to easily create professional looking user experiences for custom loads and results. Let’s start by creating a user interface for our load that looks like the following:
As you can see in the above user interface, all of the controls and inputs for our little PID controller that we designed in Part 1 of this blog series are present. Before we discuss how we create these user elements, let’s start with a description of what they each mean.
The first item in the UI is named Heat Source/Sink Location. This UI element presents to the user a choice between picking a piece of geometry from the model and specifying a named selection. Internal to the PID controller, this location represents where in the model we will attach the control elements such that they supply or remove energy from this location. ACT provides us two large areas of functionality here. First, it provides a way to graphically pick a geometric item from the model. Second, it provides routines to query the underlying mesh associated with this piece of geometry so that we can reference the nodes or elements in our APDL code. Each of these pieces of functionality is substantial in its size and complexity, yet we get it for free with ACT.
The second control is named Temperature Monitor Location. It functions similarly to the heat source/sink location. It gives the user the ability to specify where on the model the control element should monitor, or sense, the temperature. So, our PID controller will add or remove energy from the heat sink/source location to try to keep the monitor location at the specified set point temperature.
The third control group is named Thermostat Control Properties. This group aggregates the various parameters that control the functionality of the thermostat. That is, the user is allowed to specify gain values, and also what type of control is implemented.
The fourth control group is named Thermostat Setpoint Properties. This group allows the user to specify how the setpoint changes with time. An interesting ACT feature is implemented with this control group. Based on the selection that the user makes in the “Setpoint Type” dropdown, a different control is presented below for the thermostat setpoint temperature. When the user selects “User Specified Setpoint”, a control that provides a tabular input is presented. In this case, the user can directly input a table of temperature vs time data that specifies how the setpoint changes with time. However, if the user specifies “Use Model Entity as Setpoint” then the user is presented a geometry picker similar to the controls above to select a location in the model that will define the setpoint temperature. When this option is selected, the PID controller will function more like a follower element. That is, it will try to cause the monitor location to “follow” another location in the model by adding or removing energy from the heat source/sink location. The offset value allows the user to specify a DC offset that they would like to apply to the setpoint value. Internally, this offset term will be incorporated into the constraint equation averaging method to add in the DC offset.
Finally, the last control group allows the user to visualize plots of computed information regarding the PID controller after the solution is finished. Normally this would be presented in the results branch of the tree; however, the results I would like to present for these elements don’t map cleanly to the results objects in ACT. (At least, I can’t map them cleanly in my mind…) More detail on the results will be presented below.
So, now that we know what the control UI does, let’s look at how to specify it in ACT
Specifying the UI in XML
As mentioned at the beginning, ACT relies on XML to specify the UI for controls. The following XML snippet describes the entire UI.
<extension version="1" name="ThermalTools">
  <script src="thermaltools.py" />
  <toolbar name="thermtools" caption="Thermal Tools">
    <entry name="PIDThermostatLoad" icon="ThermostatGray"
           caption="PID Thermostat Control">
      <load name="pidthermostat" version="1" caption="PID Thermostat"
            icon="ThermostatWhite" issupport="false" isload="true" color="#0000FF">
        <property name="ConnectionGeo" caption="Heat Source/Sink Location"
                  control="scoping">
          <attributes selection_filter="vertex|edge|face" />
        </property>
        <property name="MonitorGeo" caption="Temperature Monitor Location"
                  control="scoping">
          <attributes selection_filter="vertex|edge|face|body" />
        </property>
        <propertygroup name="ControlProperties" caption="Thermostat Control Properties">
          <property name="ControlType" caption="Control Type"
                    control="select" default="Both Heat Source and Sink">
            <attributes options="Heat Source,Heat Sink,Both Heat Source and Sink"/>
          </property>
          <property name="ProportionalGain" caption="Proportional Gain" control="float"/>
          <property name="IntegralGain" caption="Integral Gain" control="float"/>
          <property name="DerivativeGain" caption="Derivative Gain" control="float"/>
        </propertygroup>
        <propertygroup name="SetpointProperties" caption="Thermostat Setpoint Properties">
          <propertygroup name="SetpointType" display="property" caption="Setpoint Type"
                         control="select" default="User Specified Setpoint">
            <attributes options="User Specified Setpoint,Use Model Entity as Setpoint"/>
            <propertytable name="SetPointTemp" caption="Thermostat Set Point Temperature"
                           display="worksheet" visibleon="User Specified Setpoint">
              <property name="Time" caption="Time" unit="Time" control="float"></property>
              <property name="SetPoint" caption="Set Point Temperature"
                        unit="Temperature" control="float"></property>
            </propertytable>
            <property name="SetpointGeo" caption="Setpoint Geometry"
                      visibleon="Use Model Entity as Setpoint" control="scoping">
              <attributes selection_filter="vertex|edge|face|body" />
            </property>
            <property name="SetpointOffset" caption="Offset" control="float" default="0"/>
          </propertygroup>
        </propertygroup>
        <propertygroup name="Results" caption="Thermostat Results" display="caption">
          <property name="ViewResults" caption="View Results?" control="select" default="No">
            <attributes options="No,Yes"/>
          </property>
        </propertygroup>
      </load>
    </entry>
  </toolbar>
</extension>
Describing this in detail would take far longer than I have time for now, so I’m going to direct you to the ACT documentation. The gist of it is fairly simple though. XML provides a structured, hierarchical mechanism for describing the layout of the UI. Nested structures appear as child widgets of their parents. Callbacks are used within ACT to provide the hooks into the UI events so that we can respond to various user interactions. Beyond that, read the docs!! And, hey, before I hear any whining remember that in the old days of JScript customization there wasn’t any documentation! When I was a Workbench Customization Kid we had to walk uphill, both ways, barefoot, in 8’ of snow for 35 miles… So shut it!
Making the Magic Happen
So, the UI is snazzy and all, but the heavy lifting really happens under the hood. Ultimately, what ACT provides us, when we are creating new BCs for ANSYS, is a clever way to insert APDL commands into the ds.dat input stream. Remember that at its core all Mechanical is, is a glorified APDL generator. I’m sure the developers love me reducing their hard labor to such mundane terms, but it is what it is… So, at the end of the day, our little ACT load objects are nothing more than miniature APDL writers. You thought we were doing something cool…
So, the magic happens when we collect up all of the input data the user specifies in our snazzy UI and turn it into APDL code that implements the PID controller. This is obviously why I started by developing the APDL code first, outside of WB. The APDL code is the true magic.

Collecting up the user inputs and writing them to the ds.dat file occurs inside the getcommands callback. If you look closely at the XML code, you will notice two getcommands callbacks. The first one calls a python function named writePIDThermostatLoad. This callback is scheduled to fire when Mechanical is finished writing all of the standard loads and boundary conditions that it implements natively and is about to write the first APDL solve command. Our commands will end up in the ds.dat file right at this location.

I chose this location for the following reason. Our APDL code for the PID thermostat will be generating new element types and new nodes and elements not generated by Workbench. Workbench sometimes implements certain boundary conditions using surface effect element types, so these native loads and boundary conditions may themselves generate new elements and element types. Workbench knows about those, because it’s generating them directly; however, it has no idea what I might be up to in my PID thermostat load. So, if it were to write additional boundary conditions after my PID load, it very well might reuse some of my element type ids, and even node/element ids. The safer thing to do is to write our APDL code so that it is robust in the presence of an unknown maximum element type id, real constant set id, node number, etc., and then insert it right before the solve command, after WB has written all of its loads and boundary conditions. Thus, the likelihood of id collisions is greatly reduced or eliminated.
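To make that concrete, here is a sketch of what the solve-time callback might look like. The ACT object access (`load.Properties[...]`, the `stream.Write` convention) reflects my reading of the docs and should be treated as an assumption; the important part is the id-safe APDL pattern, which is plain APDL:

```python
def build_pid_commands(kp, ki, kd):
    """Return an APDL block that is robust to unknown maximum ids.

    Rather than hard-coding ids, we ask MAPDL for the current maxima
    at solve time (*GET) and build on top of them, so nothing that
    Workbench wrote earlier in ds.dat can collide with our elements."""
    lines = [
        "! --- PID thermostat: inserted just before the first SOLVE ---",
        "_kp={0} $ _ki={1} $ _kd={2}".format(kp, ki, kd),
        "*get,_etmax,etyp,,num,max    ! highest element type id in use",
        "*get,_rcmax,rcon,,num,max    ! highest real constant set id",
        "et,_etmax+1,combin37         ! proportional element",
        "keyopt,_etmax+1,1,1          ! Cpar = error term",
        "et,_etmax+2,combin37         ! integral element",
        "keyopt,_etmax+2,1,4          ! Cpar = integral of error",
        "et,_etmax+3,combin37         ! derivative element",
        "keyopt,_etmax+3,1,2          ! Cpar = derivative of error",
        "! gains (_kp, _ki, _kd) go into each element's real constant",
        "! set as C1; slot positions per the COMBIN37 element reference",
    ]
    return "\n".join(lines) + "\n"


def writePIDThermostatLoad(load, stream):
    """The solve-time getcommands callback named in the XML.

    Property names match the XML above; the callback signature and
    stream API are assumptions -- check the ACT Developer's Guide."""
    kp = float(load.Properties["ProportionalGain"].Value)
    ki = float(load.Properties["IntegralGain"].Value)
    kd = float(load.Properties["DerivativeGain"].Value)
    stream.Write(build_pid_commands(kp, ki, kd))
```

The `build_pid_commands` helper is deliberately separated from the ACT plumbing so the generated APDL can be inspected (or unit tested) without launching Mechanical.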
Note, too, that ACT provides some utility functions to generate a new element type id and increment the internal counter within Workbench; however, I have found that these functions do not account for loads and boundary conditions. Therefore, in my testing thus far, I haven’t found them safe to use.
The second getcommands callback is set up to fire when Workbench finishes writing all of the solve commands and has moved to the post-processing part of the input stream. I chose to implement a graphing functionality for displaying the relevant output data from the PID elements. Thus, I needed to retrieve this data from ANSYS after the solution is complete so that I can present it to the user. I accomplished this by writing a little bit of APDL code to enter /post26 and export all of the data I wish to plot to a CSV file. By specifying this second getcommands callback, I could instruct Workbench to insert the APDL commands after the solve completed.
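The export block generated by that post callback could look something like the following sketch. The SMISC item for the COMBIN37 heat flow, the `*GET` path for the point count, and the `*VWRITE` format are assumptions from my reading of the docs; verify them against the COMBIN37 output table before trusting the chart:

```python
def build_post26_export(csv_base, elem_id):
    """Return APDL that dumps one COMBIN37 time history to a CSV file.

    This is a sketch, not production code: it defines one /POST26
    variable for the element heat flow, copies time and heat flow
    into array parameters, and writes them out comma-separated."""
    return "\n".join([
        "/post26",
        "*get,_npts,vari,,nsets       ! number of result points (assumed *GET path)",
        "esol,2,{0},,smisc,1,hflow    ! element heat flow vs time (item assumed)".format(elem_id),
        "*dim,_tm,array,_npts",
        "*dim,_hf,array,_npts",
        "vget,_tm(1),1                ! variable 1 is always time",
        "vget,_hf(1),2",
        "*cfopen,{0},csv".format(csv_base),
        "*vwrite,_tm(1),_hf(1)",
        "(F12.5,',',E16.8)",
        "*cfclos",
    ]) + "\n"
```

The result viewer can then read the CSV back with Python's `csv` module and hand the columns to the chart widget.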
Viewing the Results
Once the solution has completed, clicking on the “View Results?” dropdown and choosing “Yes” will bring up the following result viewer I implemented for the PID controller. All of the graphing functionality is provided by ACT in an import module called “chart”. This result viewer is simply implemented as a dialog with a single child control that is the ACT chart widget. This widget also allows you to layout multiple charts in a grid, as we have here. As you can see, we can display all of the relevant output data for the controls cleanly and efficiently using ACT! While this can also be accomplished in ANSYS Mechanical APDL, I think we would all agree that the results are far more pleasing visually when implemented in ACT.
Where Do We Go from Here?
Now that I’ve written an ACT module, my next steps are to clean it up and try to make it a little more production ready. Once I’m satisfied with it, I’ll publish it on this blog and on the appropriate ANSYS library. Look for more posts along the way if I uncover additional insights or gotchas with ACT programming. I will leave you with this, however. If you have put off ACT programming you really should reconsider! Being mostly new to ACT, I was able to get this little boundary condition hooked up and functioning in less than a week’s time. Given the way the user interface turned out and the flexibility thus far of the control, I’m quite pleased with that. Without the documentation and general availability of ACT, this effort would have been far more intense. So, try out ACT! You won’t be disappointed.
I’m going to embark on a multipart blog series chronicling my efforts in writing a PID Thermostat control boundary condition for Workbench. I picked this boundary condition for a few reasons:
- As far as I know, it doesn’t exist in WB proper.
- It involves some techniques and element types in ANSYS Mechanical APDL that are not immediately intuitive to most users. Namely, we will be using the Combin37 element type to manage the control.
- There are a number of different options and parameters that will be used to populate the boundary condition with data, and this affords an opportunity to explore many of the GUI items exposed in ACT.
This first posting goes over how to model a PID controller in ANSYS Mechanical APDL. In future articles I will share my efforts to refine the model and use ACT to include it in ANSYS Workbench.
PID Controller Background
Let’s begin with a little background on PID controllers. Full disclaimer: I’m not a controls engineer, so take this info for what it is worth. PID stands for Proportional-Integral-Derivative controller. The idea is fairly simple. Assume you have some output quantity you wish to control by varying some input value. That is, you have a known curve in time that represents what you would like the output to look like. For example:
The trick is to figure out what the input needs to look like in time so that you get the desired output. One way to do that is to use feedback. That is, you measure the current output value at some time, t, and you compare that to what the desired output should be at that time, t. If there is no difference in the measured value and the desired value, then you know whatever you have set the input to be, it is correct at least for this point in time. So, maybe it will be correct for the next moment in time. Let’s all hope…
However, chances are, there is some difference between what is measured and what is desired. For future reference we will call this the error term. The secret sauce is deciding what to do with that information. To make things more concrete, we will ground our discussion in the thermal world and imagine we are trying to maintain something at a prescribed temperature. When the actual temperature of the device is lower than the desired temperature, we will define that as a positive error. Thus: I’m cold; I want to be warmer; that equals positive error. The converse is true: I’m hot; I want to be colder; that equals negative error.
One simple way we could try to control the input would be to say, “Let’s make the input proportional to the error term.” So, when the error term is positive, and thus I’m cold and wish to be warmer, we will add energy proportionate to the magnitude of the error term. Obviously the flip side is also true: if I’m hot and I wish to be cooler, my negative error term means we remove energy proportionate to the magnitude of the error term. This sounds great! What more do you need? Well, what happens if I’m trying to hold a fixed temperature for a long time? If the system is not perfectly adiabatic, we still have to supply some energy to make up for whatever the system is losing to the surroundings. Obviously, this energy loss occurs even when the system is in a steady-state configuration and at the prescribed temperature! But, if the system is exactly at the prescribed temperature, then the error term is zero. Anything proportionate to zero is… zero. That’s a bummer. I need something that won’t go to zero when my error term goes to zero.
What if I could keep a record of what I’ve done in the past? What if I accumulated all of the past error from forever? Obviously, this has the chance of being nonzero even if instantaneously my error term is zero. This sounds promising. Integrating a function of time with respect to time is analogous to accumulating the function values from history past. Thus, what if I integrated my error term and then made my input also proportional to that value? Wouldn’t that help the steady-state issue above? Sure it would. Unfortunately, it also means I might go racing right on by my set point, and it might take a while for that “mistake” to wash out of the system. Nothing is free. So, now that I have kept a record of my entire past and used that to help me in the present, what if I could read the future? What if I could extrapolate out in time?
Derivatives allow us to make a local extrapolation (in either direction) about a curve at a fixed point. So, if my curve is a function of time, which in our case the curves are, forward extrapolation is basically jumping ahead into the future. However, we can’t truly predict the future; we can only extrapolate on what has recently happened and make the leap of faith that it will continue to happen just as it has. So, if I take the derivative of my error term with respect to time, I can roll the dice a little and make some of my input proportional to this derivative term. That is, I can extrapolate out in time. If I do it right, I can actually make the system settle out a little faster. Remember that when the error term goes to zero and stays there, the derivative of the error term also goes to zero. So, when we are right on top of our prescribed value this term has no bearing on our input.
So, a PID controller simply takes the three concepts of how to specify an input value based on a function of the error term and mixes them together with differing proportions to arrive at the new value for the input. By “tuning” the system we can make it such that it responds quickly to change and it doesn’t wildly overshoot or oscillate.
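None of this is specific to ANSYS, so it is easy to convince yourself with a toy example. Below is a minimal discrete PID loop driving a lumped thermal mass toward a setpoint; all of the plant constants and gains are made up for illustration:

```python
def simulate_pid(kp, ki, kd, setpoint=100.0, t0=20.0, steps=2000, dt=0.01):
    """Drive a lumped thermal mass toward `setpoint` with a PID law.

    The plant is dT/dt = (q_in - h*(T - T_amb)) / C: it leaks heat to
    ambient, so a purely proportional controller (ki=0) stalls below
    the setpoint -- exactly the steady-state argument made above."""
    C, h, t_amb = 10.0, 1.0, 20.0       # heat capacity, loss coeff, ambient
    T, integral, prev_err = t0, 0.0, setpoint - t0
    for _ in range(steps):
        err = setpoint - T               # positive error = too cold
        integral += err * dt             # the "record of the past"
        deriv = (err - prev_err) / dt    # the "local extrapolation"
        q = kp * err + ki * integral + kd * deriv
        T += (q - h * (T - t_amb)) * dt / C
        prev_err = err
    return T

# P-only control settles short of the setpoint (near 87 here);
# adding the integral term removes that steady-state offset.
p_only = simulate_pid(kp=5.0, ki=0.0, kd=0.0)
pid    = simulate_pid(kp=5.0, ki=2.0, kd=0.5)
```

Playing with the three gains in this little loop is a cheap way to build intuition for the "tuning" mentioned above before doing it inside a finite element model.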
Implementing a PID controller in ANSYS MAPDL
We will begin by implementing a PID controller in MAPDL before moving on to implementing the boundary condition in ANSYS Workbench via the ACT. We would like the boundary condition to have the following features:
- Ultimately we would like to “connect” this boundary condition to any number of nodes in our model. That is, we may want to have our energy input occur on a vertex, edge or face of the model in Workbench. So, we would like the boundary condition to support connecting to any number of nodes in the model.
- Likewise, we would like the “measured output” to be influenced by any number of nodes in our model. That is, if more than one node is included in the “measured value” set, we would like ANSYS to use the average temperature of the nodes in that set as our “measured output”. Again, this will allow us to specify a vertex, edge, face or body of the model to function as our measurement location. The measured value should be the average temperature on this entity. Averaging needs to be intelligent. We need to weight the average based on some measure that accounts for the relative influence of a node attached to large elements vs one attached to small elements.
- We would like to be able to independently control the proportional, integral and derivative components of the control algorithm.
- It would be nice to be able to specify whether this boundary condition can only add energy, only remove energy or if it can do both.
- We would like to allow the set point value to also be a function of time so that it too can change with time.
- Finally, it would be nice to be able to post process some of the heat flow quantities, temperature values, etc… associated with this boundary condition.
This is a pretty exhaustive list of requirements for the boundary condition. Fortunately, ANSYS MAPDL has built into it an element type that is perfectly suited for this type of control. That element type is the combin37.
Introducing the Combin37 Element Type
Understanding the combin37 element in ANSYS MAPDL takes a bit of a Zen state of mind… It’s, well, an element only a mother could love. Here is a picture lifted from the help:
OK. Clear as mud right? Actually, this thing can act as a thermostat whether you believe me from the picture or not. Like most/all ANSYS elements that can function in multiple roles, the combin37 is expressed in its structural configuration. It is up to you and me to mentally map it to a different set of physics. So, just trust me that you can forget the damping and FSLIDE and little springy looking thing in the picture. All we are going to worry about is the AFORCE thing. Mentally replace AFORCE with heat flow.
Notice those two little nodes hanging out there all by their lonesome selves labeled “control nodes”. I think they should have joysticks beside them personally, but ANSYS didn’t ask me. Those little guys are appropriately named. One of them, NODE K actually, will function as our set point node. That is, whatever temperature value we specify in time for NODE K, that same value represents the set point temperature we would like our “measured” location take on in time as well. So, that means we need to drive NODE K with our set point curve. That should be easy enough. Just apply a temperature boundary condition that is a function of time to that node and we’re good to go. Likewise, NODE L represents the “measured” temperature somewhere else in the model. So, we need to somehow hook NODE L up to our set of measurement nodes so that it magically takes on the average value of those nodes. More on that trick later.
Now, internally the combin37 subtracts the temperature at NODE K from NODE L to obtain an error term. Moreover, it allows us to specify different mathematical operations we can perform on the error term, and it allows us to take the output from those mathematical operations and drive the magical AFORCE thingy, which is heat flow. Guess what those mathematical operations are? If you guessed simply making the heat flow through the element proportional to the error, proportional to the time integral of the error and proportional to the time derivative of the error you would be right. Sounds like a PID controller doesn’t it? Now, the hard part is making sense of all the options and hooking it all up correctly. Let’s focus on the options first.
Key Option One and the Magic Control Value
Key option 1 for the combin37 controls what mathematical operation we are going to perform on the error term. In order to implement a full PID controller, we are going to need three combin37 elements in our model with each one keyed to a different mathematical operation. ANSYS calls the result of the mathematical operation, Cpar. So, we have the following:
|KEYOPT(1) Value|Mathematical Operation (Cpar)|
|---|---|
|1|Cpar = error term (proportional)|
|4|Cpar = time integral of the error term|
|2|Cpar = time derivative of the error term|
Thus, for our purposes, we need to set keyopt(1) equal to 1,4 and 2 for each of the three elements respectively.
Feedback is realized by taking the control parameter Cpar and using it to modify the heat flow through the element, which is called AFORCE. The AFORCE value is specified as a real constant for the element; however, you can also rig up the element so that the value of AFORCE changes with respect to the control parameter. You do this by setting keyopt(6)=6. The manner in which ANSYS adjusts the AFORCE value, which again is heat flow, is described by the following equation:

AFORCE = RCONST + C1·|Cpar|^C2 + C3·|Cpar|^C4

Thus, the proportionality constant for the Proportional, Integral and Derivative components is specified with the C1 variable. RCONST, C3 and C4 are all set to zero, and C2 is set to 1, so the heat flow reduces to C1·|Cpar|. Also note that ANSYS first takes the absolute value of the control parameter Cpar before plugging it into this equation. Furthermore, the direction of the AFORCE component is important. A positive value for AFORCE means that the element generates an element force (heat flow) in the direction specified in the diagram; that is, it acts as a heat sink. So, assuming the model is attached to node J, the element acts as a heat sink when AFORCE is positive. Conversely, when AFORCE is negative, the element acts like a heat source. However, due to the absolute value, Cpar can never take on a negative value. Thus, when this element needs to act as an energy source to add heat to our model, the coefficient C1 must be negative. The opposite is true when the element needs to act as an energy sink.
Key Option Four and Five and when is it Alive?
If things weren’t confusing enough already, hold on as we discuss Keyopt 4 and 5. Consider the figure below, again lifted straight from the help.
The combination of these two key options controls when the element switches on and becomes “alive”. Let’s take the simple case first. Let’s assume that we are adding energy to the model in order to bring it up to a given temperature. In this case, Cpar will be positive because our set point is higher than our current value. If the element is functioning as a heat source we would like it to be on in this condition. Furthermore, we would like it to stay on as long as our error is positive so that we continue adding energy to bring the system up to temperature. Consider the diagram in the upper left. Imagine that we set ONVAL = 0 and OFFVAL = 0.001. The element is alive whenever Cpar is greater than ONVAL, which sounds like exactly what we want when the element is functioning as a heat source. Thus, keyopt(4)=0 and keyopt(5)=0 with ONVAL=0 and OFFVAL=0.001 is what we want when the element needs to function as a heat source.
What about when it is a heat sink? In this case we want the element to be active when the error term is negative; that is, when the current temperature is higher than the set point temperature. Consider the diagram in the middle left. This time let ONVAL=0 and OFFVAL=-0.001. In this case, whenever Cpar is negative (less than OFFVAL) the element will be active. Thus, keyopt(4)=0 and keyopt(5)=1 with OFFVAL=-0.001 and ONVAL=0 is what we want when the element needs to function as a heat sink. Notice that if you set ONVAL=OFFVAL then the element will always stay on; thus, we need to provide the small window to activate the switching nature of the element.
Thus, we see that we need six different combin37 elements, three for a PID controlled heat sink and three for a PID controlled heat source, to fully specify a PID controlled thermal boundary condition. Phew… However, if we set all of the proportionality constants for either set of elements defining the heat sink or heat source to zero, we can effectively turn the boundary condition into only a heat source or only a heat sink, thus meeting requirement four listed above. While we’re marking off requirements, we can also mark off requirements three and five. That is, with this combination of elements we can independently control the P, I and D proportionality constants for the controller. Likewise, by putting a time varying temperature constraint on control node K, we can effectively cause the set point value to change in time. Let’s see if we can now address requirements one and two.
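To keep those six elements straight, here is the bookkeeping from the last few paragraphs collected in one place. This is just a data sketch mirroring the keyopt values and the ±0.001 on/off window discussed above, not an ANSYS API:

```python
def pid_element_plan():
    """One entry per COMBIN37 element in the boundary condition.

    keyopt(1): 1 = proportional, 4 = integral, 2 = derivative.
    Source elements are alive for positive error and need C1 < 0
    (negative AFORCE pumps heat in); sink elements are alive for
    negative error (keyopt(5)=1) and need C1 > 0 to pull heat out."""
    plan = []
    for term, k1 in (("P", 1), ("I", 4), ("D", 2)):
        plan.append({"role": "source", "term": term, "keyopt1": k1,
                     "keyopt5": 0, "onval": 0.0, "offval": 0.001, "c1_sign": -1})
        plan.append({"role": "sink", "term": term, "keyopt1": k1,
                     "keyopt5": 1, "onval": 0.0, "offval": -0.001, "c1_sign": +1})
    return plan
```

Zeroing the C1 gains for either the source or sink triplet is then all it takes to turn the boundary condition into a pure heat source or pure heat sink, as noted above.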
How do we Hook the Combin37 to the Rest of the Model?
We will address this question in two parts. First, how do we hook the “business” end of the combin37 to the part of the model to which we are going to add or remove energy? Second, how do we hook the “control” end of the combin37 to the nodes we want to monitor?
Hooking to the Combin37 to the Nodes that Add or Remove Energy
To hook the combin37 to the model so that we can add or remove energy we will use the convection link elements, link34. These elements effectively act like little thermal resistors, with the resistance equation being specified as:

R = 1 / (h · A)

where h is the film coefficient and A is the area assigned to the link.
In order to make things nice, we need to “match” the resistances so that each node effectively sees the same resistance back to the combin37 element. We do this by varying the “area” associated with each of these convective links. To get the area associated with a node we use the arnode() intrinsic function. See the listing for details.
Hooking the Combin37 to the Nodes that Function as the Measured Value
As we mentioned in our requirements, we would like to be able to specify one or more nodes to function as the measured control value for our boundary condition. More precisely, if more than one node is included in the measurement node set, we would like ANSYS to average the temperatures at those nodes and use that average value as the measurement temperature. This will allow us to specify, for example, the average temperature of a body as the measurement value, not just one node on the body somewhere. However, we would also like the scheme to work seamlessly if only one node is specified. So, how can we accomplish this? Constraint equations come to our rescue.
Remember that a constraint equation is defined as:
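That is, a linear relation among selected degrees of freedom \(u_i\) with user-chosen coefficients \(C_i\) and a fixed constant term:

```latex
\text{Constant} = \sum_{i=1}^{N} C_i \, u_i
```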
How can we use this to compute the average temperature of a set of nodes, and tie the control node of the combin37 to this average? Let’s begin by formulating an equation for the average temperature of a set of nodes. We would like this average to not be simply a uniform average, but rather be weighted by the relative contribution a given node should play in the overall average of a geometric entity. For example, assume we are interested in calculating the average temperature of a surface in our model. Obviously this surface will have associated with it many nodes connected to many different elements. Assume for the moment that we are interested in one node on this face that is connected to many large elements that span most of the area of this face. Shouldn’t this node’s temperature have a larger contribution to the “average” temperature of the face than, say, a node connected to a few tiny elements? If we just add up the temperature values and divide by the number of nodes, each node’s temperature has equal weight in the average. A better solution would be to area weight the nodal temperatures based on the area associated with each individual node. Something like:
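With \(A_i\) the area associated with node \(i\) (from arnode()) and \(A_T\) the total face area, the area-weighted average is:

```latex
T_{avg} \;=\; \frac{\sum_i A_i \, T_i}{A_T},
\qquad A_T = \sum_i A_i
```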
That looks a little like our constraint equation. However, in the constraint equation I have to specify the constant term, whereas in the equation above, that is the value (Tavg) that I am interested in computing. What can I do? Well, let’s add in another node to our constraint equation that represents the “answer”. For convenience, we’ll make this the control node on our combin37 elements since we need the average temperature of the face to be driving that node anyway. Consider:
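Bringing \(T_{avg}\) into the sum as the control-node temperature \(T_{ctrl}\) turns it into a proper constraint equation with a zero constant term:

```latex
0 \;=\; \sum_i \frac{A_i}{A_T} \, T_i \;-\; T_{ctrl}
```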
Now, our constant term is zero, and our Ci’s are Ai/AT and -1 for the control node. Voila! With this one constraint equation we’ve computed an area-weighted average of the temperature over a set of nodes and assigned that value to our control node. CE’s rock!
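A quick numeric sanity check of the constraint-equation trick, with made-up nodal areas and temperatures (in the real model, arnode() supplies the areas from the mesh):

```python
# Made-up data for three measured nodes (not from the example model).
areas = [3.0, 1.0, 2.0]       # A_i: area associated with each node
temps = [100.0, 40.0, 70.0]   # T_i: temperature at each node
A_T = sum(areas)

# CE coefficients: A_i/A_T on each measured node, -1 on the control
# node, zero constant term. Solving the CE for the control node:
T_ctrl = sum((A / A_T) * T for A, T in zip(areas, temps))

# ...which matches the area-weighted average temperature:
T_avg = sum(A * T for A, T in zip(areas, temps)) / A_T
assert abs(T_ctrl - T_avg) < 1e-9
```

Note that the hot node with the large associated area dominates the result, exactly the mesh-independent weighting we were after.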
An Example Model
This post is already way too long, so let’s wrap things up with a little example model. This model will illustrate a simple PI heat source attached to an edge of a plate with a hole. The other outer edges of the plate are given a convective boundary condition to simulate removing heat. The initial condition of the plate is set to 20C. The set point for the thermostat is set to 100C. No attempt is made to tune the PI controller in this example, so you can clearly see the effects of the overshoot due to the integral component being large. However, you can also see how the average temperature eventually settles down to exactly the set point value.
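The overshoot-then-settle behavior can be reproduced with a toy lumped-mass PI simulation. This is plain Python with invented constants, not the APDL model below, but it shows the same character: an untuned integral term overshoots, then the temperature settles onto the set point:

```python
# Toy model: one thermal mass heated by a PI controller, losing heat
# by convection to ambient. All values are illustrative.
C_th  = 10.0    # lumped thermal capacitance
hA    = 1.0     # convective conductance to ambient
T_amb = 20.0    # ambient / initial temperature
T_set = 100.0   # thermostat set point
Kp, Ki = 1.0, 2.0        # deliberately untuned gains -> overshoot
dt, steps = 0.01, 10_000  # explicit Euler time marching

T, integ = T_amb, 0.0
T_max = T_amb
for _ in range(steps):
    err = T_set - T            # set point error
    integ += err * dt          # integral of the error
    q = Kp * err + Ki * integ  # PI heat input (negative = acts as sink)
    T += dt * (q - hA * (T - T_amb)) / C_th
    T_max = max(T_max, T)

print(round(T_max, 1), round(T, 2))  # peak overshoots 100, then settles near it
```

Shrinking Ki (or raising Kp) damps the overshoot, which is the same tuning game played with the control values in the second run described below.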
The red squiggly represents where heat is being added with the PI controller. The blue squiggly represents where heat is being removed due to convection. Here is a plot of the average temperature of the body with respect to time where you can see the response of the system to the PI control.
Here is another run, where the set point value ramps up as well. I’ve also tweaked the control values a little to mitigate some of the overshoot. This is looking kind of promising, and it is fun to play with. Next time we will look to integrate it into the workbench environment via an actual ACT extension.
Part 2 is here
I’ve included the model listing below so that you can play with this yourself. In future posts, I will elaborate more on this technique and also look to integrate it into an ACT module.
keyopt,P_et,1,0 ! Control on UK-UL
keyopt,P_et,2,8 ! Control node DOF is Temp
keyopt,P_et,3,8 ! Active node DOF is Temp
keyopt,P_et,4,0 ! Weirdness for the ON/OFF range
keyopt,P_et,5,0 ! More weirdness for the ON/OFF range
keyopt,P_et,6,6 ! Use the force, Luke (aka Combin37)
keyopt,P_et,9,0 ! Use the equation, Duke (where is Daisy…)
keyopt,I_et,1,4 ! Control on integral wrt time
keyopt,I_et,2,8 ! Control node DOF is Temp
keyopt,I_et,3,8 ! Active node DOF is Temp
keyopt,I_et,4,0 ! Weirdness for the ON/OFF range
keyopt,I_et,5,0 ! More weirdness for the ON/OFF range
keyopt,I_et,6,6 ! Use the force, Luke (aka Combin37)
keyopt,I_et,9,0 ! Use the equation, Duke (where is Daisy…)
keyopt,D_et,1,2 ! Control on first derivative wrt time
keyopt,D_et,2,8 ! Control node DOF is Temp
keyopt,D_et,3,8 ! Active node DOF is Temp
keyopt,D_et,4,0 ! Weirdness for the ON/OFF range
keyopt,D_et,5,0 ! More weirdness for the ON/OFF range
keyopt,D_et,6,6 ! Use the force, Luke (aka Combin37)
keyopt,D_et,9,0 ! Use the equation, Duke (where is Daisy…)
keyopt,mass_et,3,1 ! Interpret real constant as DENS*C*Volume
!! S M A L L T E S T M O D E L !!
! Thickness of plate
! Plane55 element
! Make a block
! Make a hole
! Punch a hole
! create a nodal component for the
! ‘attachment’ location
! create a nodal component for the
! ‘monitor’ location
!! B E G I N P I D M O D E L !!
! Real constant and mat prop for the mass element
mp,qrate,mass_et,0 ! Zero heat generation rate for the element
r,mass_et,1e-10 ! Extremely small thermal capacitance
! Material properties for convection element
! make the convection “large”
! Real constant for the combin37 elements
! that act as heaters
! build the PID elements
! Create the nodes. They can be all coincident
! as we will refer to them solely by their number.
! They will be located at the origin
! Put a thermal mass on the K and L nodes
! for each control element to give them
! thermal DOFs
! Proportional element
! Integral element
! Derivative Element
! Ground the base node
! Get a list of the attachment nodes
! Hook the attachment nodes to the
! end of the control element with
! convection links
! Hook up the monitor nodes
! We are going to need these areas
! so, hold on to them
! Write the constraint equations
! Create a transient setpoint temperature
! Constrain the temperature node to be
! at the setpoint value
! Apply an initial condition of
! 20 C to everything
! Plot the response temperature
! and the setpoint temperature
My mother-in-law is still getting used to the concept of a smart phone.
MIL: “Do you have a GPS so you know how to get there?”
Me: “There’s an App for that.”
MIL: “Do you have a flashlight?”
Me: “There’s an App for that.”
MIL: “Do you have a chromatic tuner?”
Me: “There’s an App for that.”
OK, maybe my mother-in-law didn’t ask about the tuner, but there is in fact an app for that.
In similar fashion, now that ACT (ANSYS Customization Toolkit) is a reality, we can start answering questions with, “There’s an Extension for that.” What is an extension? It’s a bit of customized software that you can integrate with ANSYS Workbench to have it do things that aren’t built in to the current menus.
We’ll leave the nuts and bolts of how Extensions work for another article, but please be aware that current ANSYS customers can now download several Extensions from the ANSYS Customer Portal. We’ll take a look at one of these in this blog entry.
To access the currently available extensions, you must have a login to the ANSYS Customer Portal and be current on maintenance (TECS). Within the customer portal, the Extensions are available by clicking on Downloads > Extension Library; then click on ACT Library.
As of this writing there are 12 extensions available for download. These vary from the sophisticated Acoustics Extension for 14.5 to simpler extensions such as the one we’ll look at here which allows you to change the material property numbers of entities in Workbench Mechanical.
Once you have downloaded the desired extension, you’ll need to install it. For use in the current project, you click on Extensions in the menu near the top of the Workbench window and click on Install Extension.
After clicking on Install Extension, you browse to the folder in which you have saved the downloaded extension. The Extension file extension (I’m not making this up) is .wbex. Here is what it looks like when loading the material change extension:
Next you must click on Extensions again in the Workbench window, and click on Manage Extensions. That will bring up this window.
Check the box next to any extensions you want to load, then click Close. If you have already launched the Mechanical editor, you will probably need to exit Workbench and get back in or at least click on File > New and reload for the new extension to show up.
When you open the Mechanical editor, the new extension should show up in the menus. Here is what the material change button looks like after the extension has been loaded:
Each time you open a new Workbench session, you’ll need to click on Extensions > Manage Extensions if you want an extension to be loaded into the Mechanical editor.
Alternatively, you can have an extension load every time by clicking on Tools > Options from the Workbench window, followed by a click on Extensions. Enter the name of the desired extension in the box, as shown here.
After clicking OK, any new Mechanical editor session will have the material change extension loaded.
So, what good is it? I will now show a simple example using the material change extension. The idea here is that we have a bolted connection and we want to look at two different conditions by changing the material properties of the washers to see what effect that has on the results. Using the material change extension, I can force the washers (and nuts and bolts too) to have a specific material number rather than the default value assigned by Workbench. The material number is used in the Mechanical APDL batch input file created by Workbench to identify which elements have which material properties.
Now before you APDL gurus get all riled up, yes, I know this can be done with the magic ‘matid’ parameter. That’s how we’ve been doing things like this for years. The material number extension is nicer since it’s an actual button built into the GUI. We’re really trying to show how extensions work here, not necessarily the best way to simulate a model with changing material properties.
That all being said, here is what it looks like. Clicking on the ‘matchange’ button in the menus inserts a new matchange object in the tree under the analysis type branch. In this example, the matchange button has been clicked three times, resulting in three matchange objects.
The matchange functionality requires that we create Named Selections for any entities for which we want to change material property numbers. How do I know that? When I downloaded the extension from the ANSYS Customer Portal, a nice readme .pdf file came along with it.
Here I have clicked on matchange 2 in the tree and identified the Named Selection for the entities I want to change, in this case the named selection Washers. I then entered my desired integer material number for these entities, 102.
Finally, in order to demonstrate that it works, I added a command snippet under the Static Structural branch, containing these APDL commands:
esel,s,mat,,102 ! select material 102 – washers.
Those commands select the washers by my user-defined material number (I could have also selected by named selection). The commands then define new material properties for material 102. Again, there are other ways to do this, but this shows the effect of the extension. Note that this command snippet is set in the details view to only be active for load step number 3. Load step one applies bolt pretension. Load step 2 solves for the operating load with the original material properties and load step 3 solves for the same loads but with the modified material properties for the washers.
This plot shows von Mises stress in the washers vs. loadstep/substep. As you can see in the graph below the stress plot, indeed the von Mises stress is changing due to the material change from step 2 to step 3. This was a nonlinear analysis with large deflection turned on.
So, this should give you a taste of what extensions are and what can be done with them. The next time you are asked to do something in Workbench for which there isn’t a built-in menu, you may be able to say, “There’s an extension for that!”