All Things ANSYS Episode 007 – ANSYS Training and Customization

Published on: October 23, 2017
With: Ted Harris, Matt Sutton, Eric Miller
Description: In this episode your host and PADT Co-Founder Eric Miller is joined by PADT’s Simulation Support Manager Ted Harris and Senior Analyst and Lead Software Developer Matt Sutton for an introduction to the advantages of training, along with a discussion of the increased functionality available through the customization of ANSYS software.

 

Standard Roof Rack Fairing Mount Getting In Your Way?! Engineer it better and 3D Print it!

It is no mystery that I love my Subaru. I bought it with the intention of using it and I have continually made modifications with a focus on functionality.

When I bought my roof crossbars in order to mount ski and/or bike racks, I quickly realized I needed to get a fairing in order to reduce drag and wind noise. The fairing functions as designed, and looks great as well. However, when I went to install my bike rack, I noticed that the fairing mount was in the way of mounting at the tower. As a result, I had to mount the rack inboard of the tower by a few inches. This mounting position had a few negative results:

  • The bike was slightly harder to load/unload
  • The additional distance from the tower resulted in additional crossbar flex and bike movement
  • Additional interference between bikes when two racks are installed

These issues could all be solved if the fairing mount was simply inboard a few more inches. If only I had access to the resources to make such a concept a reality…. oh wait, PADT has all the capabilities needed to take this from concept to reality, what a happy coincidence!

First, we used our in-house ZEISS Comet L3D scanner to get a digital version of the standard left fairing mount bracket. The original bracket was coated with talcum powder to aid in the scanning process.

The output from the scanning software is a faceted model in *.STL format. I imported this faceted geometry into ANSYS SpaceClaim to use as a template for creating editable CAD geometry, which served as the basis for my revised design. The standard mounting bracket is an injection molded part and is hollow with the exception of a couple of ribs. I made sure to capture all of this geometry to carry forward into my redesigned parts, which would make the move to scaled manufacturing of this design easy.

Continuing in ANSYS SpaceClaim, which is a direct modeling tool rather than traditional feature-based CAD, I was able to split the bracket’s two functional ends, the crossbar end and the fairing end, and offset them by 4.5 inches to allow the bike rack to mount right at the crossbar tower. I used the geometry from the center section to create my offset structure. A mirrored version allows both the driver and passenger side fairing mounts to be moved inboard, enabling two bike racks to be mounted in optimal positions. The next step was to turn my CAD geometry back into faceted *.STL format for printing, which can be done directly within ANSYS SpaceClaim.

 

After the design was completed, I spoke with our 3D printing group to discuss what technology and material would be good for these brackets, as the parts will be installed on the car during the Colorado summer and winter. For this application, we decided on our in-house Selective Laser Sintering (SLS) SINTERSTATION 2500 PLUS and glass-filled nylon material. As this process uses a powder bed when building the parts, no support is needed for overhanging geometry, so the part can be built fully featured. Find out more about the 3D printing technologies available at PADT here.

Finally, it was time to see the results. The new fairing mount offset brackets installed just like the factory pieces, but allowed the installation of the bike rack right at the tower, reducing the movement that was present when mounted inboard, as well as making it easier to load and unload bikes!!

I am very happy with the end result. The new parts assembled perfectly, just as the factory pieces did, and I have increased the functionality of my vehicle yet again. Stay tuned for some additional work featuring these brackets, and, I’m sure, for the next thing I find that can be engineered better! You can find the files on GrabCAD here.

 

All Things ANSYS EP003: Awesome Additions to ANSYS Mechanical 18.2 and a Look at Scripting with the ANSYS Customization Toolkit

Published on: August 28, 2017
With: Joe Woodward, Ted Harris, Eric Miller
Description: Ted and Joe join Eric to talk about the recent release of ANSYS 18.2, including a look at the enhancements in ANSYS Mechanical that we will use right away. Our regular look at news and events brackets a fantastic discussion on ANSYS ACT and how to use it to script and build your own applications on top of ANSYS products.

 

Experiences with Developing a “Somewhat Large” ACT Extension in ANSYS

With each release of ANSYS, the customization toolkit continues to evolve and grow.  Recently I developed what I would categorize as a decent-sized ACT extension.  My purpose in this post is to highlight a few of the techniques and best practices that I learned along the way.

Why I Chose C#

Most ACT extensions are written in Python.  Python is a wonderfully useful language for quickly prototyping and building applications, frankly of all shapes and sizes.  Its weaker type system, plethora of libraries, large ecosystem and native support directly within the ACT console make it a natural choice for most ACT work.  So, why choose to move to C#?

The primary reasons I chose to use C# instead of Python for my ACT work were the following:

  1. I prefer the slightly stronger type safety afforded by the more strongly typed language. Having a definitive compilation step forces me to show my code first to a compiler.  Only if and when the compiler can generate an assembly for my source do I get to move to the next step of trying to run/debug.  Bugs caught at compile time are the cheapest and generally easiest bugs to fix.  And, by definition, they are the most likely to be fixed.  (You’re stuck until you do…)
  2. The C# development experience is deeply integrated into the Visual Studio developer tool. This affords not only a great editor in which to write the code, but more importantly perhaps the world’s best debugger to figure out when and how things went wrong.   While it is possible to both edit and debug python code in Visual Studio, the C# experience is vastly superior.

The Cost of Doing ACT Business in C#

Unfortunately, writing an ACT extension in C# does incur some development cost in terms of setting up the development environment to support the work.  When writing an extension solely in Python you really only need a decent text editor.  Once you set up your ACT extension according to the documented directory structure protocol, you can just edit the Python script files directly within that directory structure.  If you recall, ACT requires an XML file to define the extension and then a directory with the same name that contains all of the assets defining the extension like scripts, images, etc…  This “defines” the extension.

When it comes to laying out the requisite ACT extension directory structure on disk, C# complicates things a bit.  As mentioned earlier, C# involves a compilation step that produces a DLL.  This DLL must then somehow be loaded into Mechanical to be used within the extension.  To complicate things a little further, Visual Studio uses a predefined project directory structure that places the build products (DLLs, etc…) within specific directories of the project depending on what type of build you are performing.   Therefore the compiled DLL may end up in any number of different directories depending on how you decide to build the project.  Finally, I have found that the debugging experience within Visual Studio is best served by leaving the DLL located precisely wherever Visual Studio created it.

Here is a summary list of the requirements/problems I encountered when building an ACT extension using C#:

  1. I need to somehow load the produced DLL into Mechanical so my extension can use it.
  2. The DLL that is produced during compilation may end up in any number of different directories on disk.
  3. An ACT Extension must conform to a predefined structural layout on the filesystem. This layout does not map cleanly to the Visual Studio project layout.
  4. The debugging experience in Visual Studio is best served by leaving the produced DLL exactly where Visual Studio left it.

The solution that I came up with to solve these problems was twofold.

First, the issue of loading the proper DLL into Mechanical was solved by using a combination of environment variables on my development machine in conjunction with some Python programming within the ACT main Python script.  Yes, even though the bulk of the extension is written in C#, there is still a Python script that bootstraps the extension into Mechanical.  More on that below.

Second, I decided to completely rebuild the ACT extension directory structure on my local filesystem every time I built the project in C#.  To accomplish this, I created in Visual Studio what are known as post-build events that allow you to specify an action to occur automatically after the project is successfully built.  This action can be quite generic.  In my case, the “action” was to locally run a Python script and provide it with a few arguments on the command line.  More on that below.

Loading the Proper DLL into Mechanical

As I mentioned above, even an ACT extension written in C# requires a bit of Python code to bootstrap it into Mechanical.  It is within this bit of Python that I chose to tackle the problem of deciding which dll to actually load.  The code I came up with looks like the following:
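A minimal sketch of that bootstrapping logic is shown below.  The environment variable and file names here are made up for illustration, and the clr module is the IronPython .NET bridge available inside Mechanical:

import os
import clr   # IronPython .NET bridge available inside Mechanical

def load_extension_dll(extension_dir):
    """Decide which copy of the C# assembly to load into Mechanical."""
    if os.environ.get("MYEXT_DEV_MACHINE", "0") == "1":
        # Developer machine: point at the Visual Studio build output
        if os.environ.get("MYEXT_BUILD_TYPE", "Release") == "Debug":
            dll_dir = os.environ["MYEXT_DEBUG_DIR"]
        else:
            dll_dir = os.environ["MYEXT_RELEASE_DIR"]
    else:
        # End user machine: the DLL ships inside the extension directory itself
        dll_dir = os.path.join(extension_dir, "bin")
    clr.AddReferenceToFileAndPath(os.path.join(dll_dir, "MyExtension.dll"))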

Essentially what I am doing above is querying for the presence of a particular environment variable that is on my machine.  (The assumption is that it wouldn’t randomly show up on end user’s machine…) If that variable is found and its value is 1, then I determine whether or not to load a debug or release version of the DLL depending on the type of build.  I use two additional environment variables to specify where the debug and release directories for my Visual Studio project exist.  Finally, if I determine that I’m running on a user’s machine, I simply look for the DLL in the proper location within the extension directory.  Setting up my python script in this way enables me to forget about having to edit it once I’m ready to share my extension with someone else.  It just works.

Rebuilding the ACT Extension Directory Structure

The final piece of the puzzle involves rebuilding the ACT extension directory structure upon the completion of a successful build.  I do this for a few different reasons.

  1. I always want to have a pristine copy of my extension laid out on disk in a manner that could be easily shared with others.
  2. I like to store all of the various extension assets, like images, XML files, python files, etc… within the Visual Studio Project. In this way, I can force the project to be out of date and in need of a rebuild if any of these files change.  I find this particularly useful for working with the XML definition file for the extension.
  3. Having all of these files within the Visual Studio Project makes tracking things within a version control system like SVN or git much easier.

As I mentioned before, to accomplish this task I use a combination of local python scripting and post build events in Visual Studio.  I won’t show the entire python code, but essentially what it does is programmatically work through my local file system where the C# code is built and extract all of the files needed to form the ACT extension.  It then deletes any old extension files that might exist from a previous build and lays down a completely new ACT extension directory structure in the specified location.  The definition of the post build event is specified within the project settings in Visual Studio as follows:

As you can see, all I do is call out to the system python interpreter and pass it a script with some arguments.  Visual Studio provides a great number of predefined variables that you can use to build up the command line for your script.  So, for example, I pass in a string that specifies what type of build I am currently performing, either “Debug” or “Release”.  Other strings are passed in to represent directories, etc…
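A stripped-down sketch of such a rebuild script is shown below.  All of the names and paths are illustrative, and the post-build command in the comment is just one way it could be wired up using Visual Studio’s predefined macros:

# Hypothetical Visual Studio post-build event command:
#   python rebuild_extension.py $(ConfigurationName) $(TargetDir)
import os
import shutil
import sys

EXT_NAME = "MyExtension"                       # illustrative extension name
PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))
OUTPUT_DIR = r"C:\ACT"                         # where the deployable extension goes

def rebuild(build_type, dll_dir):
    ext_dir = os.path.join(OUTPUT_DIR, EXT_NAME)
    # Blow away the old copy and lay the ACT directory structure down fresh
    if os.path.isdir(ext_dir):
        shutil.rmtree(ext_dir)
    os.makedirs(os.path.join(ext_dir, "bin"))
    # The XML definition file sits next to the folder of the same name
    shutil.copy(os.path.join(PROJECT_DIR, EXT_NAME + ".xml"), OUTPUT_DIR)
    # Copy the script and image assets that are stored in the Visual Studio project
    shutil.copy(os.path.join(PROJECT_DIR, "main.py"), ext_dir)
    shutil.copytree(os.path.join(PROJECT_DIR, "images"),
                    os.path.join(ext_dir, "images"))
    # Grab the freshly built DLL (Debug or Release) from wherever VS put it
    shutil.copy(os.path.join(dll_dir, EXT_NAME + ".dll"),
                os.path.join(ext_dir, "bin"))
    print("Rebuilt %s for a %s build" % (EXT_NAME, build_type))

if __name__ == "__main__":
    rebuild(sys.argv[1], sys.argv[2])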

The Synergies of Using Both Approaches

Finally, I will conclude with a note on the synergies you can achieve by using both of the approaches mentioned above.  One of the final enhancements I made to my post build script was to allow it to “edit” some of the text based assets that are used to define the ACT extension.  A text based asset is something like an XML file or python script.  What I came to realize is that certain aspects of the XML file that define the extension need to be different depending upon whether or not I wish to debug the extension locally or release the extension for an end user to consume.  Since I didn’t want to have to remember to make those modifications before I “released” the extension for someone else to use, I decided to encode those modifications into my post build script.  If the post build script was run after a “debug” build, I coded it to configure the extension for optimal debugging on my local machine.  However, if I built a “release” version of the extension, the post build script would slightly alter the XML definition file and the main python file to make it more suitable for running on an end user machine.   By automating it in this way, I could easily build for either scenario and confidently know that the resulting extension would be optimally configured for the particular end use.
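As a sketch of what that asset editing could look like (the marker line and file names are again made up), the post-build script might include a step like this:

# Adjust a text asset depending on the build type (marker and names are illustrative)
import os

def configure_for_build(ext_dir, build_type):
    main_py = os.path.join(ext_dir, "main.py")
    with open(main_py) as f:
        text = f.read()
    # Flip a developer-only flag in the boot script for debug vs. release
    flag = "True" if build_type == "Debug" else "False"
    text = text.replace("DEV_MODE = True", "DEV_MODE = " + flag)
    with open(main_py, "w") as f:
        f.write(text)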

Conclusions

Now that I have some experience in writing ACT extensions in C# I must honestly say that I prefer it over Python.  Much of the “extra plumbing” that one must invest in in order to get a C# extension up and running can be automated using the techniques described within this post.  After the requisite automation is set up, the development process is really straightforward.  From that point onward, the increased debugging fidelity, added type safety and familiarity of a C-based language make the development experience that much better!  Also, there are some cool things you can do in C# that I’m not 100% sure you can accomplish in Python alone.  More on that in later posts!

If you have ideas for an ACT extension to better serve your business needs and would like to speak with someone who has developed some extensions, please drop us a line.  We’d be happy to help out however we can!

 

PID Thermostat Boundary Condition ACT Extension for ANSYS Mechanical

PADT is pleased to announce that we have uploaded a new ACT Extension to the ANSYS ACT App Store.  This new extension implements a PID based thermostat boundary condition that can be used within a transient thermal simulation.  This boundary condition is quite general purpose in nature.  For example, it can be set up to use any combination of (P)roportional, (I)ntegral or (D)erivative control.   It supports locally monitoring the instantaneous temperature of any piece of geometry within the model.  For a piece of geometry that is associated with more than one node, such as an edge or a face, it uses a novel averaging scheme implemented using constraint equations so that the control law references a single temperature value regardless of the reference geometry.


The set-point value for the controller can be specified in one of two ways.  First, it can be specified as a simple table that is a function of time.  In this scenario, the PID ACT Extension will attempt to inject or remove energy from some location on the model such that a potentially different location of the model tracks the tabular values.   Alternatively, the PID thermostat boundary condition can be set up to “follow” the temperature value of a portion of the model.  This location again can be a vertex, edge or face and the ACT extension uses the same averaging scheme mentioned above for situations in which more than one node is associated with the reference geometry.  Finally, an offset value can be specified so that the set point temperature tracks a given location in the model with some nonzero offset.


For thermal models that require some notion of control, the PID thermostat element can be used effectively.  Please do note, however, that the extension works best with the SI units system (m-kg-s).

Serial and Parallel ANSYS Mechanical APDL Simulations

There are times when you want to study the effects of varying parameters.  If you have an existing MAPDL script that is parameterized, the following procedure will allow you to easily run many variations in an organized manner.

Let’s assume a parameterized MAPDL macro called build_solve that does something you want to simulate many times and has two variables called power and scale, which are set with arguments 1 and 2, respectively.  Running this macro in the classic interface with power=30 and scale=2.5 would look like this:

build_solve,30,2.5

Next, create a MAPDL macro to launch all of the simulations.  This script could be named control.mac.  The first thing to do here is to create arrays of your parameters and assign values to them.  This example will vary power and scale.  Here are the arrays of values that will be passed to build_solve:

*dim,power,array,4
power(1)=10,20,40,80
*dim,scale,array,6
scale(1)=1,2,3,5,10,20

Most of the control.mac commands will be put inside of nested *do loops.  There will be a *do loop for each of the parameters being varied.

*do,ii,1,4
*do,jj,1,6

Next, use *cfopen to set up the arguments to be passed to build_solve.  Each pass through the *do loops will create a new run1.mac:

*cfopen,run1,mac
  a=power(ii)
  b=scale(jj)
  *vwrite,a,b
  build_solve,%G,%G
*cfclose

One of the key features of this approach is that it can run anywhere and builds directories below the working directory.  Use the /inquire command to store the current directory name.

/INQUIRE,dir_,DIRECTORY

Use *cfopen to create a string that will be used for the directory name.  By using the variables as part of the string, the directories will have unique names.  A time or date stamp could also be included in this string.  This macro is executed immediately to create the string dirnam for use in subsequent commands.

*cfopen,temp1,mac
*vwrite,a,b
dirnam='power_%G_scale_%G'
*cfclose
/input,temp1,mac

Eventually, the resulting directory structure will look something like the image below.  Each directory will contain a separate simulation with the arguments of power and scale set respectively.

[Image: resulting directory structure, one folder per power/scale combination]

The last *cfopen creates a Windows batch file which will (when executed):

  1. Create the new directory

  2. Copy all of the macro files from the working directory into the new directory (including run1.mac)

  3. Change into the new directory using CD

  4. Launch ANSYS in batch mode, in this case using a GPU and 12 CPUs, with run1.mac as the input and f.out as the output

  5. Change back to the working directory (ready to do it all again)

The code for the windows batch file is:

*cfopen,rfile,bat
*vwrite,dir_(1),dirnam
MKDIR "%C\%S"
*vwrite,dir_(1),dirnam
COPY *.mac "%C\%S"
*vwrite,dir_(1),dirnam
CD "%C\%S"
*vwrite,
"C:\Program Files\ANSYS Inc\v150\ansys\bin\winx64\ansys150" -b -acc nvidia -np 12 -i run1.mac -o f.out
*vwrite,dir_(1)
CD "%C"
*cfclose

The last step is to run the windows batch file.  /sys is used to make this system call.  If the simulation is not well parallelized and you have enough licenses available, run the simulations in low priority mode immediately.  This will launch all of your simulations in parallel:

  • /sys,start /b /low rfile.bat

If the model is well parallelized (in other words, it will use your system’s GPU/CPUs/RAM efficiently) or you only have 1 license available, launch the batch files in high priority mode and use the /wait option, which will ensure that Windows waits for the job to finish before launching the next simulation.

  • /sys,start /b /high /wait rfile.bat

You can download and view the examples control.mac and build_solve.mac from this zip file: build_solve-control-macros.zip

Video Tips: Using ACT to change Default Settings in ANSYS Mechanical

A short video showing how ACT (ANSYS Customization Toolkit) can be used to change Default Settings for analyses done in ANSYS Mechanical.  This is a very small subset of the capabilities that ACT can provide.  Stay tuned for other videos showing further customization examples.

The example .xml and Python files are located below.  Please bear in mind that to use these “scripted” ACT extension files you will need to have an ACT license.  Compiled versions of extensions don’t require any licenses to use.  Please send me an email (manoj@padtinc.com) if you are wondering how to translate this example into your own needs.

NLdefaults

ACT Extension for a PID Thermostat Controller (PART 2)

(View part one of this series here.)

So, I’ve done a little of this Workbench customization stuff in a past life. My customizations involved lots of Jscript, some APDL, sweat and tears. I literally would bang my head against Eric Miller’s office door jamb wondering (sometimes out loud) what I had done to be picked as the Workbench customization guy. Copious amounts of alcohol on weeknights helped some, but honestly it still royally sucked. Because of these early childhood scars, I’ve cringed at the thought of this ACT thingy until now. I figured I’d been there, done that and I had zero, and I mean zero desire to relive any of it.

So, I resisted ACT even after multiple “suggestions” from upper management that I figure out something to do with it. That was wrong of me; I should have been quicker to give ACT a fair shot. ACT is a whole bunch better than the bad ol’ days of JScript. How is it better? Well, it has documentation… Also, there are multiple helpful folks back at Canonsburg and elsewhere that know a few things about it. This is opposed to the days when just one (brilliant) guy in India named Rajiv Rath had ever done anything of consequence with JScript. (Without him, my previous customization efforts would simply have put me in the mad house. Oh, and he happens to know a thing or two about ACT as well…)

Look Ma! My First ACT Extension!

In this post we are going to rig up the PID thermostat boundary condition as a new boundary condition type in Mechanical. In ACT jargon, this is known as adding a pre-processing feature. I’m going to refer you primarily to the training material and documentation for ACT that you can obtain from the ANSYS website on their customer portal. I strongly suggest you log on to the portal and download the training material for version 15.0 now.

Planning the Extension

When we create an ACT extension we need to lay things out on the file system in a certain manner. At a high level, there are three categories of file system data that we will use to build our extension. These types of data are:

  1. Code. This will be comprised of Python scripts and in our case APDL scripts.
  2. XML. XML files are primarily used for plumbing and for making the rest of the world aware of who we are and what we do.
  3. Resources. These files are typically images, icons, etc… that spice things up a little bit.

Any single extension will use all three of these categories of files, and so organizing this mess becomes priority number one when building the extension. Fortunately, there is a convention that ACT imposes on us to organize the files. The following image depicts the structure graphically.

[Image: ACT extension directory structure – roughly:]
  ExtensionName.xml
  ExtensionName\
    images\     (icons and other resources)
    *.py        (the extension’s Python scripts)

We will call our extension PIDThermostat. Therefore, everywhere ExtensionName appears in the image above, we will replace that with PIDThermostat in our file structure.

Creating a UI for our Load

The beauty of ACT is that it allows us to easily create professional looking user experiences for custom loads and results. Let’s start by creating a user interface for our load that looks like the following:

[Screenshot: the PID Thermostat load user interface in Mechanical]

As you can see in the above user interface, all of the controls and inputs for our little PID controller that we designed in Part 1 of this blog series are present. Before we discuss how we create these user elements, let’s start with a description of what they each mean.

The first item in the UI is named Heat Source/Sink Location. This UI element presents to the user a choice between picking a piece of geometry from the model and specifying a named selection. Internal to the PID controller, this location represents where in the model we will attach the control elements such that they supply or remove energy from this location. ACT provides us two large areas of functionality here. First, it provides a way to graphically pick a geometric item from the model. Second, it provides routines to query the underlying mesh associated with this piece of geometry so that we can reference the nodes or elements in our APDL code. Each of these pieces of functionality is substantial in its size and complexity, yet we get it for free with ACT.

The second control is named Temperature Monitor Location. It functions similarly to the heat source/sink location. It gives the user the ability to specify where on the model the control element should monitor, or sense, the temperature. So, our PID controller will add or remove energy from the heat sink/source location to try to keep the monitor location at the specified set point temperature.

The third control group is named Thermostat Control Properties. This group aggregates the various parameters that control the functionality of the thermostat. That is, the user is allowed to specify gain values, and also what type of control is implemented.

The fourth control group is named Thermostat Setpoint Properties. This group allows the user to specify how the setpoint changes with time. An interesting ACT feature is implemented with this control group. Based on the selection that the user makes in the “Setpoint Type” dropdown, a different control is presented below for the thermostat setpoint temperature. When the user selects “User Specified Setpoint”, a control that provides a tabular input is presented. In this case, the user can directly input a table of temperature vs time data that specifies how the setpoint changes with time. However, if the user specifies “Use Model Entity as Setpoint” then the user is presented a geometry picker similar to the controls above to select a location in the model that will define the setpoint temperature. When this option is selected, the PID controller will function more like a follower element. That is, it will try to cause the monitor location to “follow” another location in the model by adding or removing energy from the heat source/sink location. The offset value allows the user to specify a DC offset that they would like to apply to the setpoint value. Internally, this offset term will be incorporated into the constraint equation averaging method to add in the DC offset.

Finally, the last control group allows the user to visualize plots of computed information regarding the PID controller after the solution is finished. Normally this would be presented in the results branch of the tree; however, the results I would like to present for these elements don’t map cleanly to the results objects in ACT. (At least, I can’t map them cleanly in my mind…) More detail on the results will be presented below.

So, now that we know what the control UI does, let’s look at how to specify it in ACT.

Specifying the UI in XML

As mentioned at the beginning, ACT relies on XML to specify the UI for controls. The following XML snippet describes the entire UI.

<extension version="1" name="ThermalTools">
  <guid shortid="thermaltools">852acb16-4731-4e91-bd00-b464be02b361</guid>
  <script src="thermaltools.py" />

  <interface context="Mechanical">
    <images>images</images>
    <callbacks>
      <oninit>initThermalToolsToolbar</oninit>
    </callbacks>
    <toolbar name="thermtools" caption="Thermal Tools">
      <entry name="PIDThermostatLoad" icon="ThermostatGray"
             caption="PID Thermostat Control">
        <callbacks>
          <onclick>createPIDThermostatLoad</onclick>
        </callbacks>
      </entry>
    </toolbar>
  </interface>

  <simdata context="Mechanical">
    <load name="pidthermostat" version="1" caption="PID Thermostat"
          icon="ThermostatWhite" issupport="false" isload="true" color="#0000FF">
      <callbacks>
        <getcommands location="solve">writePIDThermostatLoad</getcommands>
        <getcommands location="post">writePIDThermostatPost</getcommands>
        <onadd>addPIDThermostat</onadd>
        <onremove>removePIDThermostat</onremove>
        <onsuppress>suppressPIDThermostat</onsuppress>
        <onunsuppress>unsuppressPIDThermostat</onunsuppress>
        <canadd>canaddPIDThermostat</canadd>
        <oncleardata>cleardataPIDThermostat</oncleardata>
      </callbacks>
      <property name="ConnectionGeo" caption="Heat Source/Sink Location"
                control="scoping">
        <attributes selection_filter="vertex|edge|face" />
      </property>
      <property name="MonitorGeo" caption="Temperature Monitor Location"
                control="scoping">
        <attributes selection_filter="vertex|edge|face|body" />
      </property>
      <propertygroup name="ControlProperties" caption="Thermostat Control Properties"
                     display="caption">
        <property name="ControlType" caption="Control Type"
                  control="select" default="Both Heat Source and Sink">
          <attributes options="Heat Source,Heat Sink,Both Heat Source and Sink"/>
        </property>
        <property name="ProportionalGain" caption="Proportional Gain"
                  control="float" default="0"/>
        <property name="IntegralGain" caption="Integral Gain"
                  control="float" default="0"/>
        <property name="DerivativeGain" caption="Derivative Gain"
                  control="float" default="0"/>
      </propertygroup>
      <propertygroup name="SetpointProperties" caption="Thermostat Setpoint Properties"
                     display="caption">
        <propertygroup name="SetpointType" display="property" caption="Setpoint Type"
                       control="select" default="User Specified Setpoint">
          <attributes options="User Specified Setpoint,Use Model Entity as Setpoint"/>
          <propertytable name="SetPointTemp" caption="Thermostat Set Point Temperature"
                         display="worksheet" visibleon="User Specified Setpoint"
                         control="applycancel" class="Worksheet.PropertyGroupEditor.PGEditor">
            <property name="Time" caption="Time" unit="Time" control="float"></property>
            <property name="SetPoint" caption="Set Point Temperature"
                      unit="Temperature" control="float"></property>
          </propertytable>
          <property name="SetpointGeo" caption="Setpoint Geometry"
                    visibleon="Use Model Entity as Setpoint" control="scoping">
            <attributes selection_filter="vertex|edge|face|body" />
          </property>
        </propertygroup>
        <property name="SetpointOffset" caption="Offset" control="float" default="0"/>
      </propertygroup>
      <propertygroup name="Results" caption="Thermostat Results" display="caption">
        <property name="ViewResults" caption="View Results?" control="select" default="No">
          <attributes options="Yes,No"/>
          <callbacks>
            <onvalidate>onViewPIDResults</onvalidate>
          </callbacks>
        </property>
      </propertygroup>
    </load>
  </simdata>
</extension>

Describing this in detail would take far longer than I have time for now, so I’m going to direct you to the ACT documentation. The gist of it is fairly simple though. XML provides a structured, hierarchical mechanism for describing the layout of the UI. Nested structures appear as child widgets of their parents. Callbacks are used within ACT to provide the hooks into the UI events so that we can respond to various user interactions. Beyond that, read the docs!! And, hey, before I hear any whining remember that in the old days of Jscript customization there wasn’t any documentation! When I was a Workbench Customization Kid we had to walk uphill, both ways, barefoot, in 8’ of snow for 35 miles… So shut it!

Making the Magic Happen

So, the UI is snazzy and all, but the heavy lifting really happens under the hood. Ultimately, what ACT provides us, when we are creating new BCs for ANSYS, is a clever way to insert APDL commands into the ds.dat input stream. Remember that at its core all Mechanical is, is a glorified APDL generator. I’m sure the developers love me reducing their hard labor to such mundane terms, but it is what it is… So, at the end of the day, our little ACT load objects are nothing more than miniature APDL writers. You thought we were doing something cool…

So, the magic happens when we collect up all of the input data the user specifies in our snazzy UI and turn that into APDL code that implemented the PID controller. This is obviously why I started by developing the APDL code first outside of WB. The APDL code is the true magic. Collecting up the user inputs and writing them to the ds.dat file occurs inside the getcommands callback. If you look closely at the XML code, you will notice two getcommands callbacks. The first one calls a python function named: writePIDThermostatLoad. This callback is scheduled to fire when Mechanical is finished writing all of the standard loads and boundary conditions that it implements natively and is about to write the first APDL solve command. Our commands will end up in the ds.dat file right at this location. I chose this location for the following reason. Our APDL code for the PID thermostat will be generating new element types and new nodes and elements not generated by Workbench. Sometimes workbench implements certain boundary conditions using surface effect element types. So, these native loads and boundary conditions themselves may generate new elements and element types. Workbench knows about those, because it’s generating them directly; however, it has no idea what I might be up to in my PID thermostat load. So, if it were to write additional boundary conditions after my PID load, it very well might reuse some of my element type ids, and even node/element ids. The safer thing to do is to write our APDL code so that it is robust in the presence of an unknown maximum element type/real constant set/node number/etc… Then, we insert it right before the solve command, after WB has written all of its loads and boundary conditions. Thus, the likelihood of id collisions is greatly reduced or eliminated.
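Stripped of the actual ACT callback plumbing (the real getcommands signature is described in the ACT documentation), the heart of that callback is just string building. A hypothetical helper along these lines assembles the APDL that lands in ds.dat; the component names and the elided middle section are placeholders:

# Hypothetical helper: build the APDL text that gets injected just ahead of SOLVE.
def build_pid_commands(kp, ki, kd, attach_component, monitor_component):
    cmds = [
        "! --- PID thermostat commands generated by the ACT extension ---",
        "*get,etmax_,etyp,0,num,max   ! query the current max element type id",
        "*get,nmax_,node,0,num,max    ! query the current max node number",
        "! new combin37/link34/mass71 entities start at etmax_+1, nmax_+1, ...",
        "cmsel,s,%s   ! nodes that receive/remove energy" % attach_component,
        "cmsel,a,%s   ! nodes whose average temperature is monitored" % monitor_component,
        "! ... element, real constant and constraint equation generation",
        "!     continues here, as in the MAPDL listing in Part 1 ...",
        "allsel,all",
    ]
    return "\n".join(cmds)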

Note, too, that ACT provides some utility functions to generate a new element type id and increment the internal counter within Workbench; however, I have found that these functions do not account for loads and boundary conditions. Therefore, in my testing thus far, I haven’t found them safe to use.

The second getcommands callback is set up to fire when Workbench finishes writing all of the solve commands and has moved to the post-processing part of the input stream. I chose to implement a graphing functionality for displaying the relevant output data from the PID elements. Thus, I needed to retrieve this data from ANSYS after the solution is complete so that I can present it to the user. I accomplished this by writing a little bit of APDL code to enter /post26 and export all of the data I wish to plot to a CSV file. By specifying this second getcommands callback, I could instruct Workbench to insert the APDL commands after the solve completed.
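Reading that exported file back in on the Python side is then straightforward. Here is a small sketch, assuming the /post26 commands wrote a simple time/temperature/heat-flow CSV (the file layout and names are assumptions):

# Sketch: read the /post26 export back in for charting (file layout is assumed).
import csv

def read_pid_history(csv_path):
    """Return time, temperature and heat flow lists from the exported CSV."""
    times, temps, flows = [], [], []
    with open(csv_path) as f:
        for row in csv.reader(f):
            try:
                t, temp, q = (float(v) for v in row[:3])
            except (ValueError, TypeError):
                continue   # skip headers, blanks or anything non-numeric
            times.append(t)
            temps.append(temp)
            flows.append(q)
    return times, temps, flows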

Viewing the Results

Once the solution has completed, clicking on the “View Results?” dropdown and choosing “Yes” will bring up the following result viewer I implemented for the PID controller. All of the graphing functionality is provided by ACT in an import module called “chart”. This result viewer is simply implemented as a dialog with a single child control that is the ACT chart widget. This widget also allows you to layout multiple charts in a grid, as we have here. As you can see, we can display all of the relevant output data for the controls cleanly and efficiently using ACT! While this can also be accomplished in ANSYS Mechanical APDL, I think we would all agree that the results are far more pleasing visually when implemented in ACT.

[Screenshot: the PID controller result viewer, showing multiple charts laid out in a grid]

Where Do We Go from Here?

Now that I’ve written an ACT module, my next steps are to clean it up and try to make it a little more production ready. Once I’m satisfied with it, I’ll publish it on this blog and on the appropriate ANSYS library. Look for more posts along the way if I uncover additional insights or gotchas with ACT programming. I will leave you with this, however. If you have put off ACT programming you really should reconsider! Being mostly new to ACT, I was able to get this little boundary condition hooked up and functioning in less than a week’s time. Given the way the user interface turned out and the flexibility thus far of the control, I’m quite pleased with that. Without the documentation and general availability of ACT, this effort would have been far more intense. So, try out ACT! You won’t be disappointed.

ACT Extension for a PID Thermostat Controller (PART 1)

I’m going to embark on a multipart blog series chronicling my efforts in writing a PID Thermostat control boundary condition for Workbench. I picked this boundary condition for a few reasons:

  1. As far as I know, it doesn’t exist in WB proper.
  2. It involves some techniques and element types in ANSYS Mechanical APDL that are not immediately intuitive to most users. Namely, we will be using the Combin37 element type to manage the control.
  3. There are a number of different options and parameters that will be used to populate the boundary condition with data, and this affords an opportunity to explore many of the GUI items exposed in ACT.

This first posting goes over how to model a PID controller in ANSYS Mechanical APDL.  In future articles I will share my efforts to refine the model and use ACT to include it in ANSYS Workbench.

PID Controller Background

Let’s begin with a little background on PID controllers. Full disclaimer: I’m not a controls engineer, so take this info for what it is worth. PID stands for Proportional Integral Derivative controller. The idea is fairly simple. Assume you have some output quantity you wish to control by varying some input value. That is, you have a known curve in time that represents what you would like the output to look like. For example:

[Plot: an example desired output value as a function of time]

The trick is to figure out what the input needs to look like in time so that you get the desired output. One way to do that is to use feedback. That is, you measure the current output value at some time, t, and you compare that to what the desired output should be at that time, t. If there is no difference in the measured value and the desired value, then you know whatever you have set the input to be, it is correct at least for this point in time. So, maybe it will be correct for the next moment in time. Let’s all hope…

However, chances are, there is some difference between what is measured and what is desired. For future reference we will call this the error term. The secret sauce is what to do with that information? To make things more concrete, we will ground our discussion in the thermal world and imagine we are trying to maintain something at a prescribed temperature. When the actual temperature of the device is lower than the desired temperature, we will define that as a positive error. Thus, I’m cold; I want to be warmer: that equals positive error. The converse is true. I’m hot; I want to be colder: that equals negative error.

One simple way we could try to control the input would be to say, “Let’s make the input proportional to the error term.” So, when the error term is positive, and thus I’m cold and wish to be warmer, we will add energy proportionate to the magnitude of the error term. Obviously the flip side is also true. If I’m hot and I wish to be cooler, my negative error term would mean that we remove energy proportionate to the magnitude of the error term. This sounds great! What more do you need? Well, what happens if I’m trying to hold a fixed temperature for a long time? If the system is not perfectly adiabatic, we still have to supply some energy to make up for whatever the system is losing to the surroundings. Obviously, this energy loss occurs even when the system is in a steady state configuration and at the prescribed temperature! But, if the system is exactly at the prescribed temperature, then the error term is zero. Anything proportionate to zero is… zero. That’s a bummer. I need something that won’t go to zero when my error term goes to zero.

What if I could keep a record of what I’ve done in the past? What if I accumulated all of the past error from forever? Obviously, this has the chance of being nonzero even if instantaneously my error term is zero. This sounds promising. Integrating a function of time with respect to time is analogous to accumulating the function values from history past. Thus, what if I integrated my error term and then made my input also proportional to that value? Wouldn’t that help the steady state issue above? Sure it would. Unfortunately, it also means I might go racing right on by my set point and it might take a while for that “mistake” to wash out of the system. Nothing is free. So, now that I have kept a record of my entire past and used that to help me in the present, what if I could read the future? What if I could extrapolate out in time?

Derivatives allow us to make a local extrapolation (in either direction) about a curve at a fixed point. So, if my curve is a function of time, which in our case the curves are, forward extrapolation is basically jumping ahead into the future. However, we can’t truly predict the future; we can only extrapolate on what has recently happened and make the leap of faith that it will continue to happen just as it has. So, if I take the derivative of my error term with respect to time, I can roll the dice a little and make some of my input proportional to this derivative term. That is, I can extrapolate out in time. If I do it right, I can actually make the system settle out a little faster. Remember that when the error term goes to zero and stays there, the derivative of the error term also goes to zero. So, when we are right on top of our prescribed value this term has no bearing on our input.

So, a PID controller simply takes the three concepts of how to specify an input value based on a function of the error term and mixes them together with differing proportions to arrive at the new value for the input. By “tuning” the system we can make it such that it responds quickly to change and it doesn’t wildly overshoot or oscillate.
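As a toy illustration of how the three terms work together, here is a small Python sketch of a PID loop driving a made-up lumped thermal mass (none of the numbers represent a real system):

def simulate_pid(kp, ki, kd, setpoint=100.0, t_end=200.0, dt=0.1):
    """Drive a lumped thermal mass toward the set point with a PID control law."""
    T, integral = 20.0, 0.0                  # initial temperature, accumulated error
    C, hA, T_amb = 500.0, 2.0, 20.0          # made-up capacitance, loss coefficient, ambient
    prev_err = setpoint - T
    history = []
    t = 0.0
    while t < t_end:
        err = setpoint - T                   # positive error = too cold
        integral += err * dt
        deriv = (err - prev_err) / dt
        q = kp * err + ki * integral + kd * deriv   # PID control law: heater power
        T += (q - hA * (T - T_amb)) / C * dt        # explicit Euler on the thermal mass
        prev_err = err
        history.append((t, T))
        t += dt
    return history

# For example, simulate_pid(50.0, 5.0, 0.0) overshoots a bit, then settles near 100.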

Implementing a PID controller in ANSYS MAPDL

We will begin by implementing a PID controller in MAPDL before moving on to implementing the boundary condition in ANSYS Workbench via the ACT. We would like the boundary condition to have the following features:

  1. Ultimately we would like to “connect” this boundary condition to any number of nodes in our model. That is, we may want to have our energy input occur on a vertex, edge or face of the model in Workbench. So, we would like the boundary condition to support connecting to any number of nodes in the model.
  2. Likewise, we would like the “measured output” to be influenced by any number of nodes in our model. That is, if more than one node is included in the “measured value” set, we would like ANSYS to use the average temperature of the nodes in that set as our “measured output”. Again, this will allow us to specify a vertex, edge, face or body of the model to function as our measurement location. The measured value should be the average temperature on this entity. Averaging needs to be intelligent. We need to weight the average based on some measure that accounts for the relative influence of a node attached to large elements vs one attached to small elements.
  3. We would like to be able to independently control the proportional, integral and derivative components of the control algorithm.
  4. It would be nice to be able to specify whether this boundary condition can only add energy, only remove energy or if it can do both.
  5. We would like to allow the set point value to also be a function of time so that it too can change with time.
  6. Finally, it would be nice to be able to post process some of the heat flow quantities, temperature values, etc… associated with this boundary condition.

This is a pretty exhaustive list of requirements for the boundary condition. Fortunately, ANSYS MAPDL has built into it an element type that is perfectly suited for this type of control. That element type is the combin37.

Introducing the Combin37 Element Type

Understanding the combin37 element in ANSYS MAPDL takes a bit of a Zen state of mind… It’s, well, an element only a mother could love. Here is a picture lifted from the help:

[Figure: COMBIN37 element schematic, from the ANSYS help]

OK. Clear as mud right? Actually, this thing can act as a thermostat whether you believe me from the picture or not. Like most/all ANSYS elements that can function in multiple roles, the combin37 is expressed in its structural configuration. It is up to you and me to mentally map it to a different set of physics. So, just trust me that you can forget the damping and FSLIDE and little springy looking thing in the picture. All we are going to worry about is the AFORCE thing. Mentally replace AFORCE with heat flow.

Notice those two little nodes hanging out there all by their lonesome selves labeled “control nodes”. I think they should have joysticks beside them personally, but ANSYS didn’t ask me. Those little guys are appropriately named. One of them, NODE K actually, will function as our set point node. That is, whatever temperature value we specify in time for NODE K, that same value represents the set point temperature we would like our “measured” location take on in time as well. So, that means we need to drive NODE K with our set point curve. That should be easy enough. Just apply a temperature boundary condition that is a function of time to that node and we’re good to go. Likewise, NODE L represents the “measured” temperature somewhere else in the model. So, we need to somehow hook NODE L up to our set of measurement nodes so that it magically takes on the average value of those nodes. More on that trick later.

Now, internally the combin37 subtracts the temperature at NODE K from NODE L to obtain an error term. Moreover, it allows us to specify different mathematical operations we can perform on the error term, and it allows us to take the output from those mathematical operations and drive the magical AFORCE thingy, which is heat flow. Guess what those mathematical operations are? If you guessed simply making the heat flow through the element proportional to the error, proportional to the time integral of the error and proportional to the time derivative of the error you would be right. Sounds like a PID controller doesn’t it? Now, the hard part is making sense of all the options and hooking it all up correctly. Let’s focus on the options first.

Key Option One and the Magic Control Value

Key option 1 for the combin37 controls what mathematical operation we are going to perform on the error term. In order to implement a full PID controller, we are going to need three combin37 elements in our model with each one keyed to a different mathematical operation. ANSYS calls the result of the mathematical operation Cpar. So, we have the following:

KEYOPT(1) Value     Mathematical Operation (Cpar)
0 or 1              Cpar = UK - UL (the error term itself)
2                   Cpar = first time derivative of (UK - UL)
3                   Cpar = second time derivative of (UK - UL)
4                   Cpar = time integral of (UK - UL)
5                   Cpar = time

Thus, for our purposes, we need to set keyopt(1) equal to 1,4 and 2 for each of the three elements respectively.

Feedback is realized by taking the control parameter Cpar and using it to modify the heat flow through the element, which is called AFORCE. The AFORCE value is specified as a real constant for the element; however, you can also rig up the element so that the value of AFORCE changes with respect to the control parameter. You do this by setting keyopt(6)=6. The manner in which ANSYS adjusts the AFORCE value, which again is heat flow, is described by the following equation:

AFORCE = RCONST + C1*|Cpar|^C2 + C3*|Cpar|^C4

Thus, the proportionality constants for the Proportional, Integral and Derivative components are specified with the C1 variable. RCONST, C3 and C4 are all set to zero. C2 is set to 1. Also note that ANSYS first takes the absolute value of the control parameter Cpar before plugging it into this equation. Furthermore, the direction of the AFORCE component is important. A positive value for AFORCE means that the element generates an element force (heat flow) in the direction specified in the diagram.  That is, it acts as a heat sink. So, assuming the model is attached to node J, the element acts as a heat sink when AFORCE is positive. Conversely, when AFORCE is negative, the element acts like a heat source. However, due to the absolute value, Cpar can never take on a negative value. Thus, when this element needs to act as an energy source to add heat to our model, the coefficient C1 must be negative. The opposite is true when the element needs to act as an energy sink.
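As a quick worked example: with C2 = 1 and RCONST = C3 = C4 = 0, the element heat flow is simply C1*|Cpar|. If the monitored temperature is 5 degrees below the set point, Cpar = 5, and a heat-source element with C1 = -Kp produces AFORCE = -5*Kp; the negative sign means the element pumps 5*Kp units of heat into the model.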

Key Option Four and Five and when is it Alive?

If things weren’t confusing enough already, hold on as we discuss Keyopt 4 and 5. Consider the figure below, again lifted straight from the help.

[Figure: COMBIN37 on/off behavior for the various KEYOPT(4) and KEYOPT(5) combinations, from the ANSYS help]

The combination of these two key options controls when the element switches on and becomes “alive”. Let’s take the simple case first. Let’s assume that we are adding energy to the model in order to bring it up to a given temperature. In this case, Cpar will be positive because our set point is higher than our current value. If the element is functioning as a heat source we would like it to be on in this condition. Furthermore, we would like it to stay on as long as our error is positive so that we continue adding energy to bring the system up to temperature. Consider the diagram in the upper left. Imagine that we set ONVAL = 0 and OFFVAL = 0.001. Whenever Cpar is greater than ONVAL, the element is on.  So this sounds like exactly what we want when the element is functioning as a heat source. Thus, keyopt(4)=0 and keyopt(5)=0 with ONVAL=0 and OFFVAL=0.001 is what we want when the element needs to function as a heat source.

What about when it is a heat sink?  In this case we want the element to be active when the error term is negative; that is, when the current temperature is higher than the set point temperature.  Consider the diagram in the middle left.  This time let ONVAL=0 and OFFVAL=-0.001.  In this case, whenever Cpar is negative (less than OFFVAL) then the element will be active.  Thus, keyopt(4)=0 and keyopt(5)=1 with ONVAL=0 and OFFVAL=-0.001 is what we want when the element needs to function as a heat sink.  Notice that if you set ONVAL=OFFVAL then the element will always stay on; thus, we need to provide the small window to activate the switching nature of the element.

Thus, we see that we need six different combin37 elements, three for a PID controlled heat sink and three for a PID controlled heat source, to fully specify a PID controlled thermal boundary condition. Phew… However, if we set all of the proportionality constants for either set of elements defining the heat sink or heat source to zero, we can effectively turn the boundary condition into only a heat source or only a heat sink, thus meeting requirement four listed above. While we’re marking off requirements, we can also mark off requirements three and five. That is, with this combination of elements we can independently control the P, I and D proportionality constants for the controller. Likewise, by putting a time varying temperature constraint on control node K, we can effectively cause the set point value to change in time. Let’s see if we can now address requirements one and two.

How do we Hook the Combin37 to the Rest of the Model?

We will address this question in two parts. First, how do we hook the “business” end of the combin37 to the part of the model to which we are going to add or remove energy? Second, how do we hook the “control” end of the combin37 to the nodes we want to monitor?

Hooking to the Combin37 to the Nodes that Add or Remove Energy

To hook the combin37 to the model so that we can add or remove energy we will use the convection link elements, link34. These elements effectively act like little thermal resistors with the resistance equation being specified as:

R = 1 / (h * A)   (h = film coefficient, A = area associated with the link)

In order to make things nice, we need to “match” the resistances so that each node effectively sees the same resistance back to the combin37 element. We do this by varying the “area” associated with each of these convective links. To get the area associated with a node we use the arnode() intrinsic function. See the listing for details.

Hooking the Combin37 to the Nodes that Function as the Measured Value

As we mentioned in our requirements, we would like to be able to specify one or more nodes to function as the measured control value for our boundary condition. More precisely, if more than one node is included in the measurement node set, we would like ANSYS to average the temperatures at those nodes and use that average value as the measurement temperature. This will allow us to specify, for example, the average temperature of a body as the measurement value, not just one node on the body somewhere. However, we would also like for the scheme to work seamlessly if only one node is specified. So, how can we accomplish this? Constraint equations come to our rescue.

Remember that a constraint equation is defined as:

Constant = C1*U1 + C2*U2 + … + CN*UN

How can we use this to compute the average temperature of a set of nodes, and tie the control node of the combin37 to this average? Let’s begin by formulating an equation for the average temperature of a set of nodes. We would like this average to not be simply a uniform average, but rather be weighted by the relative contribution a given node should play in the overall average of a geometric entity. For example, assume we are interested in calculating the average temperature of a surface in our model. Obviously this surface will have associated with it many nodes connected to many different elements. Assume for the moment that we are interested in one node on this face that is connected to many large elements that span most of the area of this face. Shouldn’t this node’s temperature have a larger contribution to the “average” temperature of the face than, say, a node connected to a few tiny elements? If we just add up the temperature values and divide by the number of nodes, each node’s temperature has equal weight in the average. A better solution would be to area weight the nodal temperatures based on the area associated with each individual node. Something like:

Tavg = (A1*T1 + A2*T2 + … + AN*TN) / AT = Σ (Ai/AT)*Ti,   where AT is the total area of the face

That looks a little like our constraint equation. However, in the constraint equation I have to specify the constant term, whereas in the equation above, that is the value (Tavg) that I am interested in computing. What can I do? Well, let’s add in another node to our constraint equation that represents the “answer”. For convenience, we’ll make this the control node on our combin37 elements since we need the average temperature of the face to be driving that node anyway. Consider:

0 = (A1/AT)*T1 + (A2/AT)*T2 + … + (AN/AT)*TN - 1*Tcontrol

Now, our constant term is zero, and our Ci’s are Ai/AT and -1 for the control node. Voila! With this one constraint equation we’ve computed an area weighted average of the temperature over a set of nodes and assigned that value to our control node. CE’s rock!
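As a quick example, if the monitored face had three nodes with associated areas of 1, 2 and 1 (so AT = 4), the single constraint equation would be 0 = 0.25*T1 + 0.5*T2 + 0.25*T3 - 1.0*Tcontrol, issued in MAPDL as one CE command with the control node carrying a coefficient of -1.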

An Example Model

This post is already way too long, so let’s wrap things up with a little example model. This model will illustrate a simple PI heat source attached to an edge of a plate with a hole. The other outer edges of the plate are given a convective boundary condition to simulate removing heat. The initial condition of the plate is set to 20C. The set point for the thermostat is set to 100C. No attempt is made to tune the PI controller in this example, so you can clearly see the effects of the overshoot due to the integral component being large. However, you can also see how the average temperature eventually settles down to exactly the set point value.

[Image: the example model – a plate with a hole, with the PI heat source on one edge and convection on the other outer edges]

The red squiggly represents where heat is being added with the PI controller. The blue squiggly represents where heat is being removed due to convection. Here is a plot of the average temperature of the body with respect to time where you can see the response of the system to the PI control.

clip_image027

Here is another run, where the set point value ramps up as well (a sketch of that kind of ramped table follows the plot below). I’ve also tweaked the control values a little to mitigate some of the overshoot. This is looking kind of promising, and it is fun to play with. Next time we will look to integrate it into the Workbench environment via an actual ACT extension.

clip_image028
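The exact values used for that second run aren’t given, but a ramped set point is just a different transient table. A minimal sketch, assuming a hypothetical ramp from the 20 C initial temperature up to 100 C over the first 500 seconds, held constant afterward:

! Hypothetical ramped set point table (values are illustrative only)
*dim,setpoint_,table,3,,,time
setpoint_(1,0)=0       ! time = 0 s
setpoint_(2,0)=500     ! time = 500 s
setpoint_(3,0)=3600    ! time = 3600 s
setpoint_(1,1)=20      ! set point = 20 C at t = 0
setpoint_(2,1)=100     ! set point = 100 C at t = 500 s
setpoint_(3,1)=100     ! held at 100 C thereafter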

Part 2 is here

Model Listing

I’ve included the model listing below so that you can play with this yourself. In future posts, I will elaborate more on this technique and also look to integrate it into an ACT module.

 

finish

/clear

 

/prep7

*get,etmax_,etyp,0,num,max

P_et=etmax_+1

I_et=etmax_+2

D_et=etmax_+3

Link_et=etmax_+4

mass_et=etmax_+5

 

 

 

et,P_et,combin37

et,I_et,combin37

et,D_et,combin37

et,Link_et,link34

et,mass_et,mass71

Kp=1

Ki=2

Kd=0

 

keyopt,P_et,1,0    ! Control on UK-UL

keyopt,P_et,2,8    ! Control node DOF is Temp

keyopt,P_et,3,8    ! Active node DOF is Temp

keyopt,P_et,4,0    ! Weirdness for the ON/OFF range

keyopt,P_et,5,0    ! More weirdness for the ON/OFF range

keyopt,P_et,6,6    ! Use the force, Luke (aka Combin37)

keyopt,P_et,9,0    ! Use the equation, Duke (where is Daisy…)

 

 

keyopt,I_et,1,4    ! Control on integral wrt time

keyopt,I_et,2,8    ! Control node DOF is Temp

keyopt,I_et,3,8    ! Active node DOF is Temp

keyopt,I_et,4,0    ! Weirdness for the ON/OFF range

keyopt,I_et,5,0    ! More weirdness for the ON/OFF range

keyopt,I_et,6,6    ! Use the force, Luke (aka Combin37)

keyopt,I_et,9,0    ! Use the equation, Duke (where is Daisy…)

 

 

keyopt,D_et,1,2    ! Control on first derivative wrt time

keyopt,D_et,2,8    ! Control node DOF is Temp

keyopt,D_et,3,8    ! Active node DOF is Temp

keyopt,D_et,4,0    ! Weirdness for the ON/OFF range

keyopt,D_et,5,0    ! More weirdness for the ON/OFF range

keyopt,D_et,6,6    ! Use the force, Luke (aka Combin37)

keyopt,D_et,9,0    ! Use the equation, Duke (where is Daisy…)

 

keyopt,mass_et,3,1 ! Interpret real constant as DENS*C*Volume

 

 

 

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!        S M A L L   T E S T   M O D E L       !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

test_et=etmax_+10

et,test_et,plane55

 

mp,kxx,test_et,70

mp,dens,test_et,8050

mp,c,test_et,0.4

! Thickness of plate

r,test_et,0.1

 

! Plane55 element

keyopt,test_et,3,3

 

! Make a block

k,1

k,2,1,0

k,3,1,1

k,4,0,1

a,1,2,3,4

! Make a hole

cyl4,0.5,0.5,0.25

! Punch a hole

asba,1,2

 

type,test_et

mat,test_et

real,test_et

esize,0.025

amesh,all

 

! create a nodal component for the

! ‘attachment’ location

lsel,s,loc,x,0

nsll,s,1

cm,pid_attach_n,node

 

! create a nodal component for the

! ‘monitor’ location

allsel,all
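! (all nodes are currently selected, so the monitor component created below
!  contains every node in the plate, i.e. the measurement is the average
!  temperature of the whole plate)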

cm,pid_monitor_n,node

 

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!        B E G I N   P I D   M O D E L         !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

 

! Real constant and mat prop for the mass element

mp,qrate,mass_et,0 ! Zero heat generation rate for the element

r,mass_et,1e-10  ! Extremely small thermal capacitance

 

! Material properties for convection element

! make the convection “large”

mp,hf,Link_et,1e5

 

! Real constant for the combin37 elements

! that act as heaters

on_val_=0

off_val_=1e-3

r,P_et,0,0,0,on_val_,off_val_,0

rmore,0,1,-Kp,1

 

r,I_et,0,0,0,on_val_,off_val_,0

rmore,0,1,-Ki,1

 

r,D_et,0,0,0,on_val_,off_val_,0

rmore,0,1,-Kd,1

 

! build the PID elements

*get,nmax_,node,0,num,max

BASE_NODE=nmax_+1

P_NODE_J=nmax_+2

I_NODE_J=nmax_+3

D_NODE_J=nmax_+4

NODE_K=nmax_+5

NODE_L=nmax_+6

! Create the nodes.  They can be all coincident

! as we will refer to them solely by their number.

! They will be located at the origin

*do,i_,1,6

    n,nmax_+i_

*enddo

 

! Put a thermal mass on the K and L nodes

! for each control element to give them

! thermal DOFs

type,mass_et

mat,mass_et

real,mass_et

e,NODE_K

e,NODE_L

 

! Proportional element

type,P_et

mat,P_et

real,P_et

e,BASE_NODE,P_NODE_J,NODE_K,NODE_L

 

! Integral element

type,I_et

mat,I_et

real,I_et

e,BASE_NODE,I_NODE_J,NODE_K,NODE_L

 

! Derivative Element

type,D_et

mat,D_et

real,D_et

e,BASE_NODE,D_NODE_J,NODE_K,NODE_L

 

 

 

! Ground the base node

d,BASE_NODE,temp,0

 

! Get a list of the attachment nodes

cmsel,s,pid_attach_n

*get,numnod_,node,0,count

attlist_=

*dim,attlist_,,numnod_

*vget,attlist_(1),node,0,nlist

*get,rmax_,rcon,0,num,max

 

! Hook the attachment nodes to the

! end of the control element with

! convection links

*do,i_,1,numnod_

    n1_=attlist_(i_)

    a1_=arnode(n1_)

    r1_=rmax_+i_

    r,r1_,a1_

    type,link_et

    mat,link_et

    real,r1_

    e,P_NODE_J,n1_

    e,I_NODE_J,n1_

    e,D_NODE_J,n1_

*enddo

 

! Hook up the monitor nodes

cmsel,s,pid_monitor_n

*get,numnod_,node,0,count

monlist_=

monarea_=

*dim,monlist_,,numnod_

*dim,monarea_,,numnod_

*vget,monlist_(1),node,0,nlist

! We are going to need these areas

! so, hold on to them

*do,i_,1,numnod_

    monarea_(i_)=arnode(monlist_(i_))  ! area associated with each monitor node

*enddo

*vscfun,totarea_,sum,monarea_(1)

! Write the constraint equations

/pbc,ce,,0

ce,next,0,NODE_L,temp,-1

*do,i_,1,numnod_

    ce,high,0,monlist_(i_),temp,monarea_(i_)/totarea_

*enddo

 

! Create a transient setpoint temperature

*dim,setpoint_,table,2,,,time

setpoint_(1,0)=0

setpoint_(2,0)=3600

setpoint_(1,1)=100

setpoint_(2,1)=100

 

! Constrain the temperature node to be

! at the setpoint value

allsel,all

d,NODE_K,temp,%setpoint_%

 

 

! Apply an initial condition of

! 20 C to everything

allsel,all

ic,all,temp,20

 

/solu

antype,trans,new

time,1000

deltim,1,1,1

lsel,s,loc,x,0.1,1

nsll,s,1

sf,all,conv,20,20

 

allsel,all

outres,all,all

solve

/post26

! Plot the response temperature

! and the setpoint temperature

nsol,2,NODE_L,temp,,temp_r

nsol,3,NODE_K,temp,,temp_s

xvar,1

plvar,2,3

Be a View Master: Customizing and Managing Views in ANSYS Mechanical

Accessing various predefined views in Mechanical is easy. You can click on the triad axes (including the negative sides of the axes) to view the model down those axes, or click the turquoise isometric ball for an isometric view. Or you can right-click the graphics area and select from a variety of views (top, back, left, etc.) in the View menu.

But what if you want a predefined view that has the model rotated “just so” and zoomed out “just so?” What if you want to store these settings not just in your current model, but bring them into other models as well? Starting in R14.5 you can do this, using the Manage Views window.

To open the Manage Views window, click on the eye-in-a-box icon that looks like it was designed by Freemasons. image The Manage Views window appears at the lower left of the GUI. The window consists of the following: image

The labels are pretty self-explanatory, but let’s delve into a couple of examples. As you can see by observing the triad, the model viewpoint shown here does not coincide with any pre-defined view.

image

Click the Create a View button image and give the view a name (defaults to View 1 but any name can be given):

image

After rotating, panning, and zooming, you can return to this view by clicking the Apply a View button. image

image

As mentioned before, you can apply the same view in different models by using the View Export/Import capabilities. To do this, simply highlight the named view to be exported in the originating model and click the Export button. image Specify the xml file to which the view is to be stored. In another model, click the Import button image and browse to the xml file containing the view to be imported. This is basically the Mechanical equivalent of an APDL file containing /VIEW and /ZOOM commands; a rough APDL sketch and a Mechanical walk-through example follow.
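For reference, here is a minimal sketch of what such an APDL view macro might contain (the window number and values are arbitrary examples, not anything exported from Mechanical):

/view,1,1,2,3        ! view direction vector for window 1
/ang,1,30            ! rotate 30 degrees about the screen Z axis
/dist,1,0.75         ! "zoom" by setting the viewing distance
/focus,1,0.5,0.5,0   ! focus (center) point of the view
/replot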

The following view is to be stored and exported to another model. Highlight the view name (“Sulk”) and click the Export button.

image

Frankie the Frowning Finite Element Model worries that views can’t be shared between models.

Specify the xml file name and click Save.

image

In a different model, click the Import button, browse to the xml stored in the previous step, and click open.

image

Highlight the imported view name and click the Apply a View button.

image

Sammy the Smiling Simulation Model is happy that views can be transferred between models.

The Manage Views window provides a significant amount of viewing versatility beyond the standard predefined views.

Legacy Training Material: Tcl/Tk for ANSYS Mechanical APDL

The Graphical User Interface (GUI) for ANSYS Mechanical APDL is written in a toolset called Tcl/Tk. This is actually the same GUI toolset that ICEM CFD uses. Way back in the days when dinosaurs roamed the earth and the .com bubble was bursting, PADT wrote an Advanced Customization class for what was then just called ANSYS. We still use a large portion of that class today, but one area that has really been mothballed is the chapter on Tcl/Tk.

But some users may still find value there, so we present it here, in its unedited and unverified totality, as a resource for the community.

ANSYS Mechanical APDL Tcl/TK Legacy Training

Use it with success, but at your own risk.

Tcl-Tk-ANSYS-Code-Examples

Tcl-Tk-ANSYS-Examples