“Getting to know ANSYS” Video series

The ANSYS Product Suite contains a large number of modules, each tailored to a particular area of the simulation and analysis world. We at PADT realize that many of our customers are not aware of, or are confused about, where each of these modules fits into the analysis spectrum.

The “Getting to know ANSYS” videos will hopefully help everyone to understand these modules a little better.  Each video will focus on one module and will showcase the following in a mixture of presentations and mini-demos: 

  • What each module is
  • What its capabilities are
  • Why it is useful
  • Who can benefit from using it

The videos will be in the “Getting to know ANSYS” playlist on PADT’s YouTube channel.

Please feel free to let us know how the videos are, and definitely let us know which module you are interested in and would like to see next. That will help us plan future videos accordingly.

You can reach out to me directly at manoj.mahendran@padtinc.com with questions or follow-ups to these or the “Focus Video Tips” videos.

Thanks!

ACT Extension for a PID Thermostat Controller (PART 1)

I’m going to embark on a multipart blog series chronicling my efforts in writing a PID thermostat control boundary condition for Workbench. I picked this boundary condition for a few reasons:

  1. As far as I know, it doesn’t exist in WB proper.
  2. It involves some techniques and element types in ANSYS Mechanical APDL that are not immediately intuitive to most users. Namely, we will be using the Combin37 element type to manage the control.
  3. There are a number of different options and parameters that will be used to populate the boundary condition with data, and this affords an opportunity to explore many of the GUI items exposed in ACT.

This first posting goes over how to model a PID controller in ANSYS Mechanical APDL. In future articles I will share my efforts to refine the model and use ACT to include it in ANSYS Workbench.

PID Controller Background

Let’s begin with a little background on PID controllers. Full disclaimer: I’m not a controls engineer, so take this info for what it is worth. PID stands for Proportional-Integral-Derivative controller. The idea is fairly simple. Assume you have some output quantity you wish to control by varying some input value. That is, you have a known curve in time that represents what you would like the output to look like. For example:

[Figure: the desired output value as a function of time]

The trick is to figure out what the input needs to look like in time so that you get the desired output. One way to do that is to use feedback. That is, you measure the current output value at some time, t, and you compare that to what the desired output should be at that time, t. If there is no difference between the measured value and the desired value, then you know that whatever you have set the input to be, it is correct, at least for this point in time. So, maybe it will be correct for the next moment in time. Let’s all hope…

However, chances are, there is some difference between what is measured and what is desired. For future reference we will call this the error term. The secret sauce is what to do with that information? To make things more concrete, we will ground our discussion in the thermal world and imagine we are trying to maintain something at a prescribed temperature. When the actual temperature of the device is lower than the desired temperature, we will define that as a positive error. Thus, I’m cold; I want to be warmer: that equals positive error. The converse is true. I’m hot; I want to be colder: that equals negative error.

One simple way we could try to control the input would be to say, “Let’s make the input proportional to the error term.” So, when the error term is positive, and thus I’m cold and wish to be warmer, we will add energy proportionate to the magnitude of the error term. Obviously the flip side is also true. If I’m hot and I wish to be cooler, the negative error term would mean that we remove energy proportionate to the magnitude of the error term. This sounds great! What more do you need? Well, what happens if I’m trying to hold a fixed temperature for a long time? If the system is not perfectly adiabatic, we still have to supply some energy to make up for whatever the system is losing to the surroundings. Obviously, this energy loss occurs even when the system is in a steady-state configuration and at the prescribed temperature! But, if the system is exactly at the prescribed temperature, then the error term is zero. Anything proportionate to zero is… zero. That’s a bummer. I need something that won’t go to zero when my error term goes to zero.

What if I could keep a record of what I’ve done in the past? What if I accumulated all of the past error from forever? Obviously, this has the chance of being nonzero even if instantaneously my error term is zero. This sounds promising. Integrating a function of time with respect to time is analogous to accumulating the function values from history past. Thus, what if I integrated my error term and then made my input also proportional to that value? Wouldn’t that help the steady state issue above? Sure it would. Unfortunately, it also means I might go racing right on by my set point and it might take a while for that “mistake” to wash out of the system. Nothing is free. So, now that I have kept a record of my entire past and used it to help me in the present, what if I could read the future? What if I could extrapolate out in time?

Derivatives allow us to make a local extrapolation (in either direction) about a curve at a fixed point. So, if my curve is a function of time, which in our case the curves are, forward extrapolation is basically jumping ahead into the future. However, we can’t truly predict the future; we can only extrapolate on what has recently happened and make the leap of faith that it will continue to happen just as it has. So, if I take the derivative of my error term with respect to time, I can roll the dice a little and make some of my input proportional to this derivative term. That is, I can extrapolate out in time. If I do it right, I can actually make the system settle out a little faster. Remember that when the error term goes to zero and stays there, the derivative of the error term also goes to zero. So, when we are right on top of our prescribed value this term has no bearing on our input.

So, a PID controller simply takes the three concepts of how to specify an input value based on a function of the error term and mixes them together with differing proportions to arrive at the new value for the input. By “tuning” the system we can make it such that it responds quickly to change and it doesn’t wildly overshoot or oscillate.
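Outside of ANSYS, the three terms are easy to see in code. Here is a minimal, illustrative Python sketch of a discrete PID loop driving a crude first-order thermal model toward a set point; the gains and the toy plant constants are made-up values, not anything from the model later in this post.

```python
# Minimal discrete PID sketch (illustrative only; gains and the toy
# "plant" constants are invented for this example).
def pid_step(error, integral, prev_error, dt, kp, ki, kd):
    """Return (controller output, updated integral) for one time step."""
    integral += error * dt                   # I: accumulated past error
    derivative = (error - prev_error) / dt   # D: local extrapolation in time
    return kp * error + ki * integral + kd * derivative, integral

# Drive a lumped thermal mass from 20 degrees toward a 100-degree set point.
temp, integral, prev_err, dt = 20.0, 0.0, 0.0, 0.1
for _ in range(3000):
    err = 100.0 - temp                       # positive error = too cold
    power, integral = pid_step(err, integral, prev_err, dt, 2.0, 0.5, 0.1)
    prev_err = err
    # Toy plant: heat added by the controller minus losses to a 20-degree room.
    temp += dt * (0.05 * power - 0.02 * (temp - 20.0))
```

Tuning kp, ki, and kd trades responsiveness against overshoot and oscillation, which is exactly the behavior the example model at the end of this post exhibits.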

Implementing a PID controller in ANSYS MAPDL

We will begin by implementing a PID controller in MAPDL before moving on to implementing the boundary condition in ANSYS Workbench via the ACT. We would like the boundary condition to have the following features:

  1. Ultimately we would like to “connect” this boundary condition to any number of nodes in our model. That is, we may want to have our energy input occur on a vertex, edge or face of the model in Workbench. So, we would like the boundary condition to support connecting to any number of nodes in the model.
  2. Likewise, we would like the “measured output” to be influenced by any number of nodes in our model. That is, if more than one node is included in the “measured value” set, we would like ANSYS to use the average temperature of the nodes in that set as our “measured output”. Again, this will allow us to specify a vertex, edge, face or body of the model to function as our measurement location. The measured value should be the average temperature on this entity. Averaging needs to be intelligent. We need to weight the average based on some measure that accounts for the relative influence of a node attached to large elements vs one attached to small elements.
  3. We would like to be able to independently control the proportional, integral and derivative components of the control algorithm.
  4. It would be nice to be able to specify whether this boundary condition can only add energy, only remove energy or if it can do both.
  5. We would like to allow the set point value to also be a function of time so that it too can change with time.
  6. Finally, it would be nice to be able to post process some of the heat flow quantities, temperature values, etc… associated with this boundary condition.

This is a pretty exhaustive list of requirements for the boundary condition. Fortunately, ANSYS MAPDL has built into it an element type that is perfectly suited for this type of control. That element type is the combin37.

Introducing the Combin37 Element Type

Understanding the combin37 element in ANSYS MAPDL takes a bit of a Zen state of mind… It’s, well, an element only a mother could love. Here is a picture lifted from the help:

[Figure: the combin37 element, from the ANSYS help]

OK. Clear as mud right? Actually, this thing can act as a thermostat whether you believe me from the picture or not. Like most/all ANSYS elements that can function in multiple roles, the combin37 is expressed in its structural configuration. It is up to you and me to mentally map it to a different set of physics. So, just trust me that you can forget the damping and FSLIDE and little springy looking thing in the picture. All we are going to worry about is the AFORCE thing. Mentally replace AFORCE with heat flow.

Notice those two little nodes hanging out there all by their lonesome selves labeled “control nodes”. I think they should have joysticks beside them personally, but ANSYS didn’t ask me. Those little guys are appropriately named. One of them, NODE K actually, will function as our set point node. That is, whatever temperature value we specify in time for NODE K, that same value represents the set point temperature we would like our “measured” location take on in time as well. So, that means we need to drive NODE K with our set point curve. That should be easy enough. Just apply a temperature boundary condition that is a function of time to that node and we’re good to go. Likewise, NODE L represents the “measured” temperature somewhere else in the model. So, we need to somehow hook NODE L up to our set of measurement nodes so that it magically takes on the average value of those nodes. More on that trick later.

Now, internally the combin37 subtracts the temperature at NODE L from the temperature at NODE K to obtain an error term. Moreover, it allows us to specify different mathematical operations we can perform on the error term, and it allows us to take the output from those mathematical operations and drive the magical AFORCE thingy, which is heat flow. Guess what those mathematical operations are? If you guessed simply making the heat flow through the element proportional to the error, proportional to the time integral of the error and proportional to the time derivative of the error you would be right. Sounds like a PID controller doesn’t it? Now, the hard part is making sense of all the options and hooking it all up correctly. Let’s focus on the options first.

Key Option One and the Magic Control Value

Key option 1 for the combin37 controls what mathematical operation we are going to perform on the error term. In order to implement a full PID controller, we are going to need three combin37 elements in our model, with each one keyed to a different mathematical operation. ANSYS calls the result of the mathematical operation Cpar. So, we have the following:

KEYOPT(1) Value    Mathematical Operation
0,1                Cpar = UK - UL
2                  Cpar = d(UK - UL)/dt (first derivative)
3                  Cpar = d²(UK - UL)/dt² (second derivative)
4                  Cpar = ∫(UK - UL) dt (integral)
5                  Cpar = ∫∫(UK - UL) dt dt (double integral)

Thus, for our purposes, we need to set keyopt(1) equal to 1, 4, and 2 for each of the three elements, respectively.

Feedback is realized by taking the control parameter Cpar and using it to modify the heat flow through the element, which is called AFORCE. The AFORCE value is specified as a real constant for the element; however, you can also rig up the element so that the value of AFORCE changes with respect to the control parameter. You do this by setting keyopt(6)=6. The manner in which ANSYS adjusts the AFORCE value, which again is heat flow, is described by the following equation:

AFORCE = RCONST + C1·|Cpar|^C2 + C3·|Cpar|^C4

Thus, the proportionality constants for the Proportional, Integral and Derivative components are specified with the C1 variable. RCONST, C3 and C4 are all set to zero. C2 is set to 1. Also note that ANSYS first takes the absolute value of the control parameter Cpar before plugging it into this equation. Furthermore, the direction of the AFORCE component is important. A positive value for AFORCE means that the element generates an element force (heat flow) in the direction specified in the diagram. That is, it acts as a heat sink. So, assuming the model is attached to node J, the element acts as a heat sink when AFORCE is positive. Conversely, when AFORCE is negative, the element acts like a heat source. However, due to the absolute value, the |Cpar| term can never be negative. Thus, when this element needs to act as an energy source to add heat to our model, the coefficient C1 must be negative. The opposite is true when the element needs to act as an energy sink.
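As a sanity check on the sign logic, here is a small Python sketch of my reading of that AFORCE relation; the gain values are arbitrary examples, not recommendations.

```python
# My reading of the combin37 feedback relation:
#   AFORCE = RCONST + C1*|Cpar|**C2 + C3*|Cpar|**C4
# With RCONST = C3 = C4 = 0 and C2 = 1, heat flow is simply C1*|Cpar|.
def aforce(cpar, rconst=0.0, c1=-2.0, c2=1.0, c3=0.0, c4=0.0):
    return rconst + c1 * abs(cpar) ** c2 + c3 * abs(cpar) ** c4

heater_flow = aforce(5.0, c1=-2.0)  # negative C1 -> negative AFORCE -> heat source
sink_flow = aforce(5.0, c1=2.0)     # positive C1 -> positive AFORCE -> heat sink
```

Note that because of the absolute value, flipping the sign of the error does not flip the sign of the heat flow; only the sign of C1 decides source vs. sink, which is why separate elements are needed for heating and cooling.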

Key Option Four and Five and when is it Alive?

If things weren’t confusing enough already, hold on as we discuss Keyopt 4 and 5. Consider the figure below, again lifted straight from the help.

[Figure: combin37 on/off behavior for combinations of KEYOPT(4) and KEYOPT(5), from the ANSYS help]

The combination of these two key options controls when the element switches on and becomes “alive”. Let’s take the simple case first. Let’s assume that we are adding energy to the model in order to bring it up to a given temperature. In this case, Cpar will be positive because our set point is higher than our current value. If the element is functioning as a heat source we would like it to be on in this condition. Furthermore, we would like it to stay on as long as our error is positive so that we continue adding energy to bring the system up to temperature. Consider the diagram in the upper left. Imagine that we set ONVAL = 0 and OFFVAL = 0.001. Whenever Cpar is greater than ONVAL, the element is on. So this sounds like exactly what we want when the element is functioning as a heat source. Thus, keyopt(4)=0 and keyopt(5)=0 with ONVAL=0 and OFFVAL=0.001 is what we want when the element needs to function as a heat source.

What about when it is a heat sink?  In this case we want the element to be active when the error term is negative; that is, when the current temperature is higher than the set point temperature.  Consider the diagram in the middle left.  This time let ONVAL=0 and OFFVAL=-0.001.  In this case, whenever Cpar is negative (less than OFFVAL) the element will be active.  Thus, keyopt(4)=0 and keyopt(5)=1 with OFFVAL=-0.001 and ONVAL=0 is what we want when the element needs to function as a heat sink.  Notice that if you set ONVAL=OFFVAL then the element will always stay on; thus, we need to provide the small window to activate the switching nature of the element.
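Paraphrasing that switching logic in Python (this is a sketch of the behavior described above, not the element's exact internals; the 0.001 window value comes from the text):

```python
# Heat-source flavor: alive while the error (set point minus measured
# temperature) is above ONVAL, i.e. while we are colder than the set point.
def heater_alive(cpar, onval=0.0):
    return cpar > onval

# Heat-sink flavor: alive while the error is below OFFVAL, i.e. while we
# are hotter than the set point. OFFVAL = -0.001 provides the small window.
def sink_alive(cpar, offval=-0.001):
    return cpar < offval
```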

Thus, we see that we need six different combin37 elements, three for a PID controlled heat sink and three for a PID controlled heat source, to fully specify a PID controlled thermal boundary condition. Phew… However, if we set all of the proportionality constants for either set of elements defining the heat sink or heat source to zero, we can effectively turn the boundary condition into only a heat source or only a heat sink, thus meeting requirement four listed above. While we’re marking off requirements, we can also mark off requirements three and five. That is, with this combination of elements we can independently control the P, I and D proportionality constants for the controller. Likewise, by putting a time varying temperature constraint on control node K, we can effectively cause the set point value to change in time. Let’s see if we can now address requirements one and two.

How do we Hook the Combin37 to the Rest of the Model?

We will address this question in two parts. First, how do we hook the “business” end of the combin37 to the part of the model to which we are going to add or remove energy? Second, how do we hook the “control” end of the combin37 to the nodes we want to monitor?

Hooking the Combin37 to the Nodes that Add or Remove Energy

To hook the combin37 to the model so that we can add or remove energy we will use the convection link elements, link34. These elements effectively act like little thermal resistors with the resistance equation being specified as:

R = 1 / (h · A), where h is the film coefficient and A is the area assigned to the link.

In order to make things nice, we need to “match” the resistances so that each node effectively sees the same resistance back to the combin37 element. We do this by varying the “area” associated with each of these convective links. To get the area associated with a node we use the arnode() intrinsic function. See the listing for details.
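To see why varying the link areas does the job, here is a hedged Python sketch: it assumes each link34 behaves like a simple convective conductor, q = h·A·ΔT, with the area taken from arnode(). The area numbers are hypothetical stand-ins for arnode() values.

```python
h = 1.0e5  # large film coefficient, as in the listing below

# Heat carried by one convection link for a temperature difference dT,
# assuming simple convective conduction q = h*A*dT.
def link_flow(area, dT, h=h):
    return h * area * dT

# Hypothetical arnode() areas for two attachment nodes: the node backed by
# more element area carries proportionally more of the control element's heat.
areas = [0.01, 0.03]
flows = [link_flow(a, 2.0) for a in areas]
```

Because each node's link area matches its arnode() area, the heat flow splits in proportion to the mesh area each node represents, rather than dumping equal heat into every node.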

Hooking the Combin37 to the Nodes that Function as the Measured Value

As we mentioned in our requirements, we would like to be able to specify one or more nodes to function as the measured control value for our boundary condition. More precisely, if more than one node is included in the measurement node set, we would like ANSYS to average the temperatures at those nodes and use that average value as the measurement temperature. This will allow us to specify, for example, the average temperature of a body as the measurement value, not just one node on the body somewhere. However, we would also like for the scheme to work seamlessly if only one node is specified. So, how can we accomplish this? Constraint equations come to our rescue.

Remember that a constraint equation is defined as:

Constant = C1·U1 + C2·U2 + … + Cn·Un

How can we use this to compute the average temperature of a set of nodes, and tie the control node of the combin37 to this average? Let’s begin by formulating an equation for the average temperature of a set of nodes. We would like this average to not be simply a uniform average, but rather be weighted by the relative contribution a given node should play in the overall average of a geometric entity. For example, assume we are interested in calculating the average temperature of a surface in our model. Obviously this surface will have associated with it many nodes connected to many different elements. Assume for the moment that we are interested in one node on this face that is connected to many large elements that span most of the area of this face. Shouldn’t this node’s temperature have a larger contribution to the “average” temperature of the face than, say, a node connected to a few tiny elements? If we just add up the temperature values and divide by the number of nodes, each node’s temperature has equal weight in the average. A better solution would be to area-weight the nodal temperatures based on the area associated with each individual node. Something like:

Tavg = (A1·T1 + A2·T2 + … + An·Tn) / AT, where AT = A1 + A2 + … + An

That looks a little like our constraint equation. However, in the constraint equation I have to specify the constant term, whereas in the equation above, that is the value (Tavg) that I am interested in computing. What can I do? Well, let’s add in another node to our constraint equation that represents the “answer”. For convenience, we’ll make this the control node on our combin37 elements since we need the average temperature of the face to be driving that node anyway. Consider:

0 = (A1/AT)·T1 + (A2/AT)·T2 + … + (An/AT)·Tn + (-1)·Tcontrol

Now, our constant term is zero, and our Ci’s are Ai/AT, with -1 for the control node. Voila! With this one constraint equation we’ve computed an area-weighted average of the temperature over a set of nodes and assigned that value to our control node. CE’s rock!
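The whole trick fits in a few lines of Python (the node areas and temperatures below are made-up numbers, just to exercise the arithmetic):

```python
# Area-weighted average temperature and its constraint-equation form.
areas = [1.0, 2.0, 2.0]       # hypothetical arnode() areas
temps = [90.0, 100.0, 110.0]  # hypothetical nodal temperatures
total = sum(areas)            # AT

# Direct weighted average: Tavg = sum(Ai*Ti)/AT
t_avg = sum(a * t for a, t in zip(areas, temps)) / total

# Same relation rearranged as the CE: 0 = sum((Ai/AT)*Ti) - Tcontrol,
# so a control node constrained this way takes on exactly t_avg.
t_control = t_avg
residual = sum((a / total) * t for a, t in zip(areas, temps)) - t_control
```

The residual of the constraint equation is zero exactly when the control node holds the area-weighted average, which is why driving NODE_L through this CE works for one node or a thousand.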

An Example Model

This post is already way too long, so let’s wrap things up with a little example model. This model will illustrate a simple PI heat source attached to an edge of a plate with a hole. The other outer edges of the plate are given a convective boundary condition to simulate removing heat. The initial condition of the plate is set to 20C. The set point for the thermostat is set to 100C. No attempt is made to tune the PI controller in this example, so you can clearly see the effects of the overshoot due to the integral component being large. However, you can also see how the average temperature eventually settles down to exactly the set point value.

[Figure: the example plate-with-hole model showing the boundary conditions]

The red squiggly represents where heat is being added with the PI controller. The blue squiggly represents where heat is being removed due to convection. Here is a plot of the average temperature of the body with respect to time where you can see the response of the system to the PI control.

[Figure: average temperature of the body vs. time, showing the PI control response]

Here is another run, where the set point value ramps up as well. I’ve also tweaked the control values a little to mitigate some of the overshoot. This is looking kind of promising, and it is fun to play with. Next time we will look to integrate it into the workbench environment via an actual ACT extension.

[Figure: response with a ramped set point and adjusted control values]

Part 2 is here

Model Listing

I’ve included the model listing below so that you can play with this yourself. In future posts, I will elaborate more on this technique and also look to integrate it into an ACT module.

 

finish
/clear

/prep7
*get,etmax_,etyp,0,num,max
P_et=etmax_+1
I_et=etmax_+2
D_et=etmax_+3
Link_et=etmax_+4
mass_et=etmax_+5

et,P_et,combin37
et,I_et,combin37
et,D_et,combin37
et,Link_et,link34
et,mass_et,mass71

Kp=1
Ki=2
Kd=0

keyopt,P_et,1,0    ! Control on UK-UL
keyopt,P_et,2,8    ! Control node DOF is Temp
keyopt,P_et,3,8    ! Active node DOF is Temp
keyopt,P_et,4,0    ! Weirdness for the ON/OFF range
keyopt,P_et,5,0    ! More weirdness for the ON/OFF range
keyopt,P_et,6,6    ! Use the force, Luke (aka Combin37)
keyopt,P_et,9,0    ! Use the equation, Duke (where is Daisy…)

keyopt,I_et,1,4    ! Control on integral wrt time
keyopt,I_et,2,8    ! Control node DOF is Temp
keyopt,I_et,3,8    ! Active node DOF is Temp
keyopt,I_et,4,0    ! Weirdness for the ON/OFF range
keyopt,I_et,5,0    ! More weirdness for the ON/OFF range
keyopt,I_et,6,6    ! Use the force, Luke (aka Combin37)
keyopt,I_et,9,0    ! Use the equation, Duke (where is Daisy…)

keyopt,D_et,1,2    ! Control on first derivative wrt time
keyopt,D_et,2,8    ! Control node DOF is Temp
keyopt,D_et,3,8    ! Active node DOF is Temp
keyopt,D_et,4,0    ! Weirdness for the ON/OFF range
keyopt,D_et,5,0    ! More weirdness for the ON/OFF range
keyopt,D_et,6,6    ! Use the force, Luke (aka Combin37)
keyopt,D_et,9,0    ! Use the equation, Duke (where is Daisy…)

keyopt,mass_et,3,1 ! Interpret real constant as DENS*C*Volume

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!        S M A L L   T E S T   M O D E L       !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
test_et=etmax_+10
et,test_et,plane55

mp,kxx,test_et,70
mp,dens,test_et,8050
mp,c,test_et,0.4
! Thickness of plate
r,test_et,0.1

! Plane55 element
keyopt,test_et,3,3

! Make a block
k,1
k,2,1,0
k,3,1,1
k,4,0,1
a,1,2,3,4
! Make a hole
cyl4,0.5,0.5,0.25
! Punch a hole
asba,1,2

type,test_et
mat,test_et
real,test_et
esize,0.025
amesh,all

! Create a nodal component for the
! 'attachment' location
lsel,s,loc,x,0
nsll,s,1
cm,pid_attach_n,node

! Create a nodal component for the
! 'monitor' location
allsel,all
cm,pid_monitor_n,node

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!        B E G I N   P I D   M O D E L         !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

! Real constant and mat prop for the mass element
mp,qrate,mass_et,0 ! Zero heat generation rate for the element
r,mass_et,1e-10    ! Extremely small thermal capacitance

! Material properties for convection element
! make the convection 'large'
mp,hf,Link_et,1e5

! Real constants for the combin37 elements
! that act as heaters
on_val_=0
off_val_=1e-3
r,P_et,0,0,0,on_val_,off_val_,0
rmore,0,1,-Kp,1

r,I_et,0,0,0,on_val_,off_val_,0
rmore,0,1,-Ki,1

r,D_et,0,0,0,on_val_,off_val_,0
rmore,0,1,-Kd,1

! Build the PID elements
*get,nmax_,node,0,num,max
BASE_NODE=nmax_+1
P_NODE_J=nmax_+2
I_NODE_J=nmax_+3
D_NODE_J=nmax_+4
NODE_K=nmax_+5
NODE_L=nmax_+6
! Create the nodes.  They can be all coincident
! as we will refer to them solely by their number.
! They will be located at the origin.
*do,i_,1,6
    n,nmax_+i_
*enddo

! Put a thermal mass on the K and L nodes
! for each control element to give them
! thermal DOFs
type,mass_et
mat,mass_et
real,mass_et
e,NODE_K
e,NODE_L

! Proportional element
type,P_et
mat,P_et
real,P_et
e,BASE_NODE,P_NODE_J,NODE_K,NODE_L

! Integral element
type,I_et
mat,I_et
real,I_et
e,BASE_NODE,I_NODE_J,NODE_K,NODE_L

! Derivative element
type,D_et
mat,D_et
real,D_et
e,BASE_NODE,D_NODE_J,NODE_K,NODE_L

! Ground the base node
d,BASE_NODE,temp,0

! Get a list of the attachment nodes
cmsel,s,pid_attach_n
*get,numnod_,node,0,count
attlist_=
*dim,attlist_,,numnod_
*vget,attlist_(1),node,0,nlist
*get,rmax_,rcon,0,num,max

! Hook the attachment nodes to the
! end of the control elements with
! convection links
*do,i_,1,numnod_
    n1_=attlist_(i_)
    a1_=arnode(n1_)
    r1_=rmax_+i_
    r,r1_,a1_
    type,Link_et
    mat,Link_et
    real,r1_
    e,P_NODE_J,n1_
    e,I_NODE_J,n1_
    e,D_NODE_J,n1_
*enddo

! Hook up the monitor nodes
cmsel,s,pid_monitor_n
*get,numnod_,node,0,count
monlist_=
monarea_=
*dim,monlist_,,numnod_
*dim,monarea_,,numnod_
*vget,monlist_(1),node,0,nlist
! We are going to need these areas
! so, hold on to them
*do,i_,1,numnod_
    monarea_(i_)=arnode(monlist_(i_))
*enddo
*vscfun,totarea_,sum,monarea_(1)
! Write the constraint equations
/pbc,ce,,0
ce,next,0,NODE_L,temp,-1
*do,i_,1,numnod_
    ce,high,0,monlist_(i_),temp,monarea_(i_)/totarea_
*enddo

! Create a transient setpoint temperature
*dim,setpoint_,table,2,,,time
setpoint_(1,0)=0
setpoint_(2,0)=3600
setpoint_(1,1)=100
setpoint_(2,1)=100

! Constrain the setpoint node to be
! at the setpoint value
allsel,all
d,NODE_K,temp,%setpoint_%

! Apply an initial condition of
! 20 C to everything
allsel,all
ic,all,temp,20

/solu
antype,trans,new
time,1000
deltim,1,1,1
lsel,s,loc,x,0.1,1
nsll,s,1
sf,all,conv,20,20

allsel,all
outres,all,all
solve

/post26
! Plot the response temperature
! and the setpoint temperature
nsol,2,NODE_L,temp,,temp_r
nsol,3,NODE_K,temp,,temp_s
xvar,1
plvar,2,3

ANSYS, Inc. Unleashes New ACT Extensions for Version 15.0

If you haven’t noticed, ANSYS, Inc. has been making quite a few ACT extensions available for ANSYS 15.0 on the ANSYS Customer Portal. If you are not familiar with ACT (ANSYS Customization Toolkit) Extensions, please see our earlier blog entry, “There’s an Extension for That,” here.

As of this writing, there are 20 ACT extensions available for download from the Customer Portal for version 15.0. There is also a set of training files available from the same link.

Among the new additions is an extension allowing the use of Mechanical APDL User Programmable Features in the Mechanical editor. Previously this could only be done in MAPDL. You will still need to install the customization files as part of the ANSYS installation, and you will still need the proper versions of the FORTRAN compiler and Visual Studio. However, this extension unleashes a significant capability within the Workbench Mechanical tool that wasn’t there previously: access to UPFs.

The documentation states that it works with both versions 14.5 and 15.0 of ANSYS.

To get to the 15.0 ACT Extensions download area, log in to the ANSYS Customer Portal and navigate through Downloads > Extension Library > 15.0. We urge you to browse the list of extensions available from the Customer Portal to see which might have benefits for your simulations.

Here is a list of all of the current extensions:

  • ACT Intro Training_R150_v1: ACT Introductory Training Materials
  • ACT Templates for DM_R150_v1: Templates for educational purposes; demonstrates the most common scenarios of ACT-based development needs in DesignModeler
  • ACT Templates for DX_R150_v1: Templates for educational purposes; demonstrate integration of external optimizers
  • ACT Templates_R150_v1: Templates for educational purposes; cover the most common ACT-based development needs
  • Acoustics Extension R150_v42: Expose 3D acoustics solver capabilities
  • Convection Extension_R150_v4: Expose convection with pilot node capability in steady-state and transient thermal analyses
  • Coupled Diffusion_R150_v3: Introduce coupled diffusion analysis (structural diffusion, thermal diffusion, and structural thermal diffusion) in both static and full transient analyses
  • Coupled Field Physics Extension_R150_v1: Expose piezoelectric, thermal-piezoelectric and thermal-structural solver capabilities
  • DDAM_R150_v2: Expose the Dynamic Design Analysis Method (DDAM) in the Mechanical interface
  • Design Modeler Utility_R150_v1: Expose some useful functions in the DM interface
  • Distributed Mass_R150_v2: Add distributed mass (rather than a point mass) to a surface as either “total mass” or “mass per unit area”
  • Enforced Motion_R150_v3: In mode-superposition harmonic and transient analyses, allows applying base excitation (displacement or acceleration); excitation can be either constant or frequency/time dependent
  • FE Info Extension_R150_v9: Expose node and element related information
  • FSI_Transient_Load_Mapping_R150_v4: Map temperature and pressure loads (from a CFD calculation) to a multi-step Mechanical analysis for transient one-way FSI; includes CFD-Post macros
  • Follower Loads Extension_R150_v2: Create follower forces and moments to follow geometric deformation
  • Hydrostatic Fluid_R150_v5: Expose hydrostatic fluid elements in Mechanical
  • MATLAB_Optimizers_for_DX_R150_v1: Expose MATLAB optimization algorithms and user programs in the Optimization component of ANSYS DesignXplorer
  • MatChange_R150_v2: Change material ID to a user-specified value for the selected bodies
  • Offshore_R150_v4: Expose the MAPDL offshore features in Mechanical
  • Piezo Extension_R150_v8: Expose piezoelectric solver capabilities
  • UPF Extension_R150_v1: Allow for the use of User Programmable Features (UPF) within Workbench

Be a View Master: Customizing and Managing Views in ANSYS Mechanical

Accessing various predefined views in Mechanical is easy. You can click on the triad axes (including the negative sides of the axes) and view the model down those axes, or click the turquoise isometric ball for an isometric view. Or you can right click the graphics area and select from a variety of views (top, back, left, etc.) from the View menu.

But what if you want a predefined view that has the model rotated “just so” and zoomed out “just so?” What if you want to store these settings not just in your current model, but bring them into other models as well? Starting in R14.5 you can do this, using the Manage Views window.

To open the Manage Views window, click on the eye-in-a-box icon that looks like it was designed by Freemasons. The Manage Views window appears at the lower left of the GUI. The window consists of the following:

[Figure: the Manage Views window and its buttons]

The labels are pretty self-explanatory, but let’s delve into a couple of examples. As you can see by observing the triad, the model viewpoint shown here does not coincide with any pre-defined view.

image

Click the Create a View button and give the view a name (it defaults to View 1, but any name can be given):

image

After rotating, panning, and zooming, you can return to this view by clicking the Apply a View button.

image

As mentioned before, you can apply the same view between different models by using the View Export/Import capabilities. To do this, simply highlight the named view to be exported in the originating model and click the Export button, then specify the xml file to which the view is to be stored. In another model, click the Import button and browse to the xml file containing the view to be imported. This is basically the Mechanical equivalent of an APDL file containing /VIEW and /ZOOM commands. An example follows.

The following view is to be stored and exported to another model. Highlight the view name (“Sulk”) and click the Export button.

image

Frankie the Frowning Finite Element Model worries that views can’t be shared between models.

Specify the xml file name and click Save.

image

In a different model, click the Import button, browse to the xml stored in the previous step, and click open.

image

Highlight the imported view name and click the Apply a View button.

image

Sammy the Smiling Simulation Model is happy that views can be transferred between models.

The Manage Views window provides a significant amount of viewing versatility over the standard viewing definitions.

Named Selections + Object Generator = Awesome

Guess who’s back…back again.  Yes, just like Slim Shady, I’m back (returned to PADT and writing Focus blogs).

So run and go tell your friends that horrible pop culture references have returned to ANSYS blog posts.  It’s been too long.

Getting back on track, the object generator debuted in R14.5 Mechanical.  You can access this feature in the toolbar (image below taken from R15):

How the pros generate objects

What exactly does the object generator do?  Simple answer…it makes your life better.  It uses named selections and a single instance of an object (joint, spring, bolt pretension, etc) and replicates it across all entities in the named selection.  Let’s play around with this feature on the following (dummy) assembly:

image

Above is a t-pipe with three covers; one of them has bolt ‘bodies’ modeled.  We’ll use fixed-fixed joints to connect the two ‘bolt-less’ bodies together, and then define bolt preloads on the bolt pattern.  To get started, we need to build up the named selections.

I’m planning on defining the fixed-fixed joint between the two cylindrical surfaces:

image

This is a pretty simple assembly, and I could easily just manually select them all, right-mouse-click, and generate the named selection.  In the real world, things aren’t always so easy, so we’ll get a little fancy.  First, create a named selection of the bodies that contain faces we want to joint together:

image

I’ve created two named selections, called ‘joint_cover’ and ‘joint_pipe’ and utilized the ‘random colors’ option to display them in different colors.  Next, I insert a named selection but set the scoping method to be ‘by worksheet’:

image

I’ll then use selection logic (MAPDL hipsters will recognize the logic as the xSEL commands):

image

Now, order is important here, as the selection logic ‘flows’ from top to bottom.  First, this named selection selects the bodies contained in the existing named selection ‘joint_cover’ (note:  this object MUST exist above the worksheet-created named selection in the tree).  At this point in time, we have two bodies selected.  Next, it converts my body selection to faces belonging to those bodies.  Finally, it filters out any face that has a radius less than .05m (units are set by the ‘units’ drop-down menu, values entered in worksheet scale when units are changed).  Hit ‘generate’ and you get the following:

image

You may need to switch to the ‘graphics’ tab (circled in red in the above image).  This is great; we now have all of our faces highlighted.  Next, we need to reproduce this behavior on the pipe.  Rather than redo all of this work, just right-mouse-click on our new named selection and select ‘duplicate’.

image image

Select the duplicated named selection, and edit the first line to use a different named selection.  Hit generate:

image

Perfect.  We can go back, add or remove bodies in the existing named selections, and re-generate the worksheet-based named selections to automatically pick up the changes.
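For the curious, the worksheet’s top-to-bottom selection flow (select bodies from a named selection, convert to faces, filter by radius) can be mimicked in a few lines of Python. This is only an illustration of the logic; the body/face data below is entirely made up and has nothing to do with Mechanical’s actual internals:

```python
# Hypothetical stand-in data: each face records its owning body and,
# for cylindrical faces, its radius in meters.
faces = [
    {"body": "cover", "radius": 0.06},  # the cylindrical joint face we want
    {"body": "cover", "radius": 0.01},  # small fillet: filtered out by radius
    {"body": "pipe",  "radius": 0.06},  # belongs to the other named selection
]

# Step 1: start from the bodies in the existing named selection 'joint_cover'.
selected_bodies = {"cover"}
# Step 2: convert the body selection into the faces of those bodies.
selection = [f for f in faces if f["body"] in selected_bodies]
# Step 3: filter out faces with radius < 0.05 (units follow the worksheet).
selection = [f for f in selection if f["radius"] >= 0.05]

print(len(selection))  # -> 1 face survives the three-step flow
```

Note that, just as in the worksheet, reordering the steps changes the result, which is why the order of the rows matters.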

Next, we’ll create the original ‘joint’ we want to re-create across the two flanges. 

image

After making the joint, make note of which part is the ‘reference’ and ‘mobile’.  For the image above, the cover is the ‘reference’ while the pipe is the ‘mobile’.  Highlight this joint and select the object generator:

image

If we use the object generator on a joint, it will ask us to define the named selections that contain the reference and mobile faces.  From above, we know that the cover faces are contained in the ‘cover_faces’ named selection.  We then duplicated that and swapped the body selection, meaning the faces for the pipe are contained in ‘cover_faces 2’ (I’m lazy and didn’t rename it…sorry).  Next, we define the minimum distance between centroids, which acts as a filter for re-creating each joint.  When we hit ‘generate’, the object generator looks at the distance between the centroids of each face in the two named selections.  If it finds ‘matching’ faces within that distance, it creates the joint.

image

In the image above, if I use a distance equal to the red line, I will get incorrect joints defined.  I’ll get the following (a=cover, b=pipe): 1a-1b, 1a-2b, 2a-2b, 2a-1b…

What I need to do is limit the distance to the blue line, which is big enough to find the correct pairs but filter out the wrong ones.  To figure out a proper distance, you can use the ‘selection information’ window to figure out the centroid information:

image

Once you’re set, hit ‘generate’:

image

What a time to be alive!  It’s always a good idea to go through joint-by-joint to make sure everything is correct…or you can always just count the number of joints created and confirm that the number is correct (I have 15 total faces in the cover_faces named selection…so I should have 15 joints…and I do).
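To make the centroid-distance filter concrete, here is a small Python sketch of the matching logic as I understand it. The centroids and tolerances are invented for illustration (the real object generator works on actual geometry), but the failure mode is the same: a tolerance as long as the red line pairs every face with its neighbors, while the blue-line tolerance keeps only the true pairs.

```python
from math import dist  # Euclidean distance, Python 3.8+

def pair_faces(ref_centroids, mob_centroids, max_dist):
    """Pair each reference face with every mobile face whose centroid
    lies within max_dist of it -- a sketch of how the object generator
    filters candidate joints by centroid distance."""
    pairs = []
    for i, ref in enumerate(ref_centroids):
        for j, mob in enumerate(mob_centroids):
            if dist(ref, mob) <= max_dist:
                pairs.append((i, j))
    return pairs

cover = [(0.0, 0.0, 0.00), (0.1, 0.0, 0.00)]  # hypothetical cover-face centroids
pipe  = [(0.0, 0.0, 0.02), (0.1, 0.0, 0.02)]  # matching pipe faces 0.02 m away

print(pair_faces(cover, pipe, 0.05))  # 'blue line': [(0, 0), (1, 1)]
print(pair_faces(cover, pipe, 0.50))  # 'red line': four pairs, two of them wrong
```

Counting the resulting pairs against the number of faces in the named selection is the same sanity check described above: 15 faces should yield exactly 15 joints.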

Next, let’s look at the bolt pretension definition.  We start with a named selection of the face where the bolt pretension will be applied:

image

Next, we create our original bolt pretension load:

image

I’ve setup my bolt pretension to solve for a 100N axial load in load step 1 and then lock the solved-for relative displacement in for load step 2.  We select the bolt pretension in the tree, then select the object generator:

image

Select the named selection that contains the bolt faces, and hit generate:

image

This is incredibly useful for bolt pretension for two reasons.  The first reason is obvious…it significantly cuts down on the amount of work you need to do for large bolt patterns.  The second reason…you can only make changes to bolt pretension objects one at a time.  By that, I mean you cannot multi-select all your bolt pretensions and change the load and step behavior (e.g. change load to 200N, open in load step 2, etc). 

image

If you select all the bolt pretensions, the changes you make in the tabular data window are only applied to the first selected object.  All other bolt pretensions are kept the same.  So if you suddenly realize the pretension was set up incorrectly, it’s best to delete all but one of the pretension objects, make the necessary changes, then duplicate it.  That way you can be sure all the bolt pretensions are correct (unless you’re simulating a bolt opening up…then ignore).

One very important thing to note is that the object generator is not parametrically linked to anything.  If I go back and change the number of holes/bolts/etc in my model, I may need to re-generate the duplicated joints/bolts/etc.  The named selections should update just fine, assuming you didn’t open the hole up bigger than the selection tolerance.  I would recommend deleting all but the original joint/bolt pretension and just re-create everything after the CAD update (this may actually speed up the CAD transfer as it’s not trying to link up incorrect entity IDs).

Hopefully this will save you some time/frustration in your next analysis.  The documentation in R15 can be accessed here:  help/wb_sim/ds_object_generator.html

Introduction to APDL Book Turns One

We got our monthly report from Amazon on our book “Introduction to the ANSYS Parametric Design Language (APDL)” and we noticed that it has been one year since we published it.  This was our first foray into self-publishing, so we thought it was worth noting that it has been a year.

Being engineers, we are kind of obsessed with numbers.  The first number is a bit discouraging, 194 units sold.  That is not going to make any best seller lists (more on lessons learned below).  51% were sold on Amazon.com, 19% by Amazon Europe, and 16% on Amazon UK, with 13% sold by non-Amazon affiliates.  

Lessons Learned

This being our first time self-publishing, we have learned some lessons worth sharing:

  1. You can’t publish a Word document as an e-book.  
    We figured we would format it for a paper book, then just publish the same file as an e-book.  WRONG.  The formatting didn’t translate at all. If it was a novel, it would have worked fine, but with all the figures and code, it was a mess. So we took it off the site.  We have received feedback that this has kept some people from buying the book.
  2. Reviews matter.
    We got one review, and it was not good because they bought the e-book (see 1). We have resisted the temptation to publish our own review… everyone does it… It would be great if anyone reading this could put up a review.
  3. We should have done this 5 years ago.
    The reality is that APDL usage is down as ANSYS Mechanical keeps getting better and better.  So the need to do advanced APDL scripting is not what it used to be. Plus, many new users are never exposed to APDL.
  4. Amazon fiddles with your price.
    It may or may not be a bad thing, but Amazon lowers your price if their affiliates start selling a book for less than you originally set the price at.  So the initial $75 price has gone as low as $55 when demand was high (several copies a week!).  In that the whole thing is an experiment, this has caused no grief but it is something to be aware of.
  5. Overall, the whole process was easy and a nice business model
    Let’s be honest, there is not a huge demand for a book like this. The CreateSpace.com (owned by Amazon) model is a great model for niche publishing like this. It was easy to upload, easy to monitor, and those fat royalty checks (what is the emoticon for sarcasm?) come in once a month. The best part is that because it is print-on-demand, there is no need to pay for an inventory up front.

If you don’t have a copy (and only 190-some of you do, so I’m guessing you don’t), head on over to our page on Amazon and check it out.  You can spin it around and see the front and back cover!

If you are one of the select few, maybe write a review and help us out a bit?

ANSYS 15.0: Summary of Available Updates as of 3/11/14

Since the release of ANSYS 15.0 in December 2013, ANSYS, Inc. has released four updates.  Here are details on each, so you can decide if you need to install them or not:

15.0.1

This update fixes a defect related to CFD models with zero thickness walls (baffles).  The problem in the initial 15.0 release was that baffles did not display properly in ANSYS (Workbench) Meshing when viewing the mesh on Named Selections, and the baffles were not output correctly to CFX, Fluent, or Polyflow.  This update is available for Windows 32 bit, Windows 64 bit, and Linux 64 bit.

15.0.3

This update fixes a problem in the initial 15.0 release in which ANSYS LS-DYNA could fail with a stack overflow problem on Windows machines.  This update is available for Windows 32 bit and Windows 64 bit.

15.0.4

This update fixes a license problem with the initial ANSYS 15.0 TurboGrid tool, in which TurboGrid on 64 bit Windows systems could check out a license for both TurboGrid and ICEM CFD.  This update is available for Windows 64 bit.

15.0.5

This update addresses problems in the initial ANSYS 15.0 release with ANSYS Mechanical Rigid Body Dynamics as well as any mechanical models that include joints.  Data from the Mechanical Redundancy Analysis tool was not being updated after redundancy analyses were performed, so the code could not identify redundant or inconsistent constraints.  This update is available for Windows 32 bit, Windows 64 bit, and Linux 64 bit.

Video Tips: Drop Impact using ANSYS AUTODYN

This is a quick video showing an example of doing an impact study using a steel slug and a reinforced concrete block.


Caps and Limits on Hardware Resources in Microsoft Windows and Red Hat Enterprise Linux

(Revised and updated February 10, 2014, to include pertinent Windows Server 2012 information as it relates to the world of numerical simulation)

Hi – this is one of our more popular blog articles, originally from January 14, 2011. It has been over three years now, and the article needs a refresh. It seems that every time an operating system provider releases a new OS iteration, whether for Windows or Linux, it adds to the confusion of selecting the proper licensing for a numerical simulation computer’s physical hardware.

Hopefully this updated blog article will assist you in making sure your numerical simulation machines are licensed properly.

Sometime around 3am in October 2010, I found myself beating my head against a server rack, frustrated with trying to figure out what was limiting my server hardware. I was aware of a couple of limits that Microsoft had placed into its OS software, but I had no idea how far-reaching the limits were. So I researched the two most used operating system families on the planet, figuring it would be best to understand the physical socket and memory caps they place on hardware: Microsoft Windows 7, Windows Server 2008 R2, Windows Server 2012, and Red Hat Enterprise Linux.

So now let us fast-forward three years. The new Windows Server 2012 changes up the naming convention on us IT geeks, so pay attention, because the Windows Server Standard or Enterprise edition you may have been used to has changed.

Limits on Cores, RAM, and USERS by Operating System

  • Microsoft Windows Operating Systems
    • Windows 7
      • Professional / Enterprise / Ultimate
        • Processor: 2 Socket limit (many cores)
        • Core limits:
          • 64-bit: up to 256 cores in one physical processor
          • 32-bit: up to 32 cores in one physical processor
        • RAM: 192 GB limit on accessible memory
      • Home Premium
        • RAM: 16GB
      • Home Basic
        • RAM: 8GB
      • Starter Edition
        • RAM: 2 GB
    • Windows Server 2008
      • Standard & R2
        • Processor: 4 socket limit – (many cores)
          • (e.g. 4 sockets x 12 cores = 48 cores)
        • RAM: 32 GB
      • Windows Server 2008 R2 Foundation  (R2 releases are 64-bit only)
        • RAM: 128 GB
      • HPC Edition 2008 R2 (R2 releases are 64-bit only)
        • RAM: 128 GB
      • Windows Server 2008 R2 Datacenter (R2 releases are 64-bit only)
        • Processor: 8 socket limit
        • RAM: 2TB
      • Windows Server 2008 R2 Enterprise (R2 releases are 64-bit only)
        • Processor: 8 socket limit
        • RAM: 2TB
    • Windows Server 2012
      • Foundation
        • Processor: 1 socket licensed – (many cores)
        • RAM: 32 GB
        • User Limit: 15 users
      • Essentials
        • Processor: 2 socket licensed – (many cores)
        • RAM: 64 GB
        • User Limit: 25 users
      • Standard
        • Processor:  4 socket licensed* – (many cores)
        • RAM: 4TB
        • User Limit: unlimited
      • Datacenter
        • Processor: 4 socket licensed* – (many cores)
        • RAM: 4TB
        • User Limit: unlimited
      • R2
        • Processor: 4 socket licensed* – (many cores)
        • RAM: 4TB
        • User Limit: unlimited
  • Red Hat Enterprise Linux – 64-bit
    • Red Hat defines a logical CPU as any schedulable entity. So every core/thread in a multi-core/thread processor is a logical CPU
    • This information reflects the product defaults, not the maximums of a fully licensed/subscribed RHEL product.
    • Desktop
      • Processor: 1-2 CPU
      • RAM: 64 GB
    • Basic
      • Processor: 1-2 CPU
      • RAM: 16 GB
    • Enterprise
      • Processor: 1-8 CPU
      • RAM: 64 GB
    • NOTE: Red Hat would be happy to create custom subscriptions with yearly fees for other configurations to fit your specific environment. Please contact Red Hat to check on costs.
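To keep these caps straight when spec’ing a machine, the list above can be boiled down to a simple lookup table. The figures below are copied from the list above (and will age; always verify against current Microsoft/Red Hat documentation before buying):

```python
# (socket limit, max RAM in GB) per edition, copied from the list above
LIMITS = {
    "Windows 7 Professional":       (2, 192),
    "Windows Server 2008 Standard": (4, 32),
    "Windows Server 2012 Standard": (4, 4096),
    "RHEL Enterprise (default)":    (8, 64),
}

def fits(edition, sockets, ram_gb):
    """Return True if the proposed hardware stays within the OS caps."""
    max_sockets, max_ram = LIMITS[edition]
    return sockets <= max_sockets and ram_gb <= max_ram

print(fits("Windows 7 Professional", 2, 128))       # -> True
print(fits("Windows Server 2008 Standard", 4, 64))  # -> False (RAM cap is 32 GB)
```

The second check is exactly the trap this article is about: a quad-socket, 64 GB box running Server 2008 Standard would leave half its memory invisible to the OS.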

Okay, great, but what operating system platforms can I use with ANSYS R15?

ANSYS 15.0 Supported Platforms

ANSYS 15.0 is the currently released version. The specific operating system versions supported by ANSYS 15.0 products and License Manager are documented and posted at: 
   www.ansys.com/Support/Platform+Support.

ANSYS 15.0 includes support for the following:

  • Windows XP and Windows 7 (32-bit and 64-bit Professional and Enterprise versions)
  • Windows 8 (64-bit Professional and Enterprise versions)
  • Windows Server 2008 R2 Enterprise
  • Windows HPC Server 2008 R2 (64-bit)
  • Windows Server 2012 Standard version
  • Red Hat Enterprise Linux (RHEL) 5.7-5.9 and 6.2-6.4 (64-bit)
  • SUSE Enterprise Linux Server and Desktop (SLES / SLED) 11 SP1-SP2 (64-bit)

Not all applications are supported on all of these platforms. See detailed information, by product, at the URL noted above.

Final Thoughts

Approximate additional licensing cost to License Windows Server 2012 for a Quad Socket CPU motherboard:

  • Windows Server 2012 Foundation: Please call your OEM partner
  • Windows Server 2012 Essentials: $429 + User Client Access Licensing $$$
  • Windows Server 2012 Standard:  $ 1,500  + User Client Access Licensing $$$
  • Windows Server 2012 Datacenter: $ 10,500 + User Client Access Licensing $$$


Help! My New HPC System is not High Performance!

It is an all too common feeling, that sinking feeling that leads to the phrase “Oh Crap” being muttered under your breath. You just spent almost a year getting management to pay for a new compute workstation, server or cluster. You did the ROI and showed an eight-month payback because of how much faster your team’s runs will be. But now you have the benchmark data on real models, and they are not good. “Oh Crap”

Although this is a frequent problem, and the root causes are often the same, the solutions can vary. In this posting I will try to share what our IT and ANSYS technical support staff here at PADT have learned.

Hopefully this article can help you avoid or circumvent pitfalls, current or future, when you order an HPC system. PADT loves numerical simulation; we have been doing this for twenty years now. We enjoy helping, and if you are stuck in this situation, let us know.

Wall Clock Time

It is very easy to get excited about clock speeds, bus bandwidth, and disk access latency. But if you are solving large FEA or CFD models you really only care about one thing. Wall Clock Time. We cannot tell you how many times we have worked with customers, hardware vendors, and sometimes developers, who get all wrapped up in the optimization of one little aspect of the solving process. The problem with this is that high performance computing is about working in a system, and the system is only as good as its weakest link.

We see people spend thousands on disk drives and high-speed disk controllers, only to discover that their solves are CPU bound and that better disk drives make no difference. We also see people blow their budget on the very best CPUs but not invest in enough memory to solve their problems in-core. This often happens because, when they look at benchmark data, they focus on one small portion and maximize that measurement, even when that measurement doesn’t really matter.

The fundamental thing that you need to keep in mind while ordering or fixing an HPC system for numerical simulation is this: all that matters is how long it takes in the real world from when you click “Solve” till your job is finished. I bring this up first because it is so fundamental, and so often ignored.

The Causes

As mentioned above, an HPC server or cluster is a system made up of hardware, software, and the people who support it. And it is only as good as its weakest link. The key to designing or fixing your HPC system is to look at it as a system, find the weakest links, and improve each link’s performance. (OK, who remembers the “Weakest Link” lady? You know you kind of miss her…)

In our experience we have found that the cause for most poorly performing systems can be grouped into one of these categories:

  • Unbalanced System for the Problems Being Solved:

    One of the components in the system cannot keep up with the others. This can be hardware or software. More often than not it is the hardware being used. Let’s take a quick look at several gotchas in a misconfigured numerical simulation machine.

  • I/O is a Bottleneck
    Number crunching, memory, and storage are only as fast as the devices that transfer data between them.
  • Configured Wrong

    Out of simple lack of experience the wrong hardware is used, the OS settings are wrong, or drivers are not configured properly.

  • Unnecessary Stuff Added out of Fear

    People tend to overcompensate out of fear that something bad might happen, so they burden a system with software and redundant hardware to avoid a one in a hundred chance of failure, and slow down the other ninety-nine runs in the process.

Avoiding an Expensive Medium Performance Computing (MPC) System

The key to avoiding these situations is to work with an expert who knows the hardware AND the software, or become that expert yourself. That starts with reading the ANSYS documentation, which is fairly complete and detailed.

Oftentimes your hardware provider will present themselves as the expert, and their heart may be in the right place. But only a handful of hardware providers really understand HPC for simulation. Most simply try to sell you the “best” configuration you can afford and don’t understand the causes of poor performance listed above. More often than we would like, they sell a system that is great for databases, web serving, or virtual machines. That is not what you need.

A true numerical simulation hardware or software expert should ask you questions about the following; if they don’t, you should move on:

  • What solver will you use the most?
  • What is more important, cost or performance? Or better: Where do you want to be on the cost vs. performance curve?
  • How much scratch space do you need during a solve? How much storage do you need for the files you keep from a run?
  • How will you be accessing the systems, sending data back and forth, and managing your runs?

Another good test of an expert is if you have both FEA and CFD needs, they should not recommend a single system for you. You may be constrained by budget, but an expert should know the difference between the two solvers vis-à-vis HPC and design separate solutions for each.

If they push virtual machines on you, show them the door.

The next thing you should do is step back and take the advice of writing instructors: start cutting stuff. (I know, if you have read my blog posts for a while, you know I’m not practicing what I preach. But you should see the first drafts…) You really don’t need the huge, costly UPS, the expensive archival backup system, or some arctic-chill bubbling-liquid-nitrogen cooling system. Think of it as a race car: if it doesn’t make the car go faster or keep the driver safe, you don’t need it.

A hard but important step in cutting things down to the basics is to try to let go of the emotional aspect. It is in many ways like picking out a car: the truth is, the red paint job doesn’t make it go any faster, and the fancy tail pipes look good but also don’t help. Don’t design for the worst-case model either. If 90% of your models run in 32GB of RAM, don’t buy a 128GB system for that one run a year that is that big. Suffer a slow solve on that one and use the money to get a faster CPU, a better disk array, or maybe a second box.

Pull back, be an engineer, and just get what you need. Tape robots look cool, and blinky lights and flashy plastic case covers look even cooler. Do you really need that? Most of the time the numerical simulation cruncher is locked up in a cold dark room. Having an intern move data to USB drives once a month may be a more practical solution.

Another aspect of cutting back is dealing with that fear thing. The most common mistake we see is people using RAID configurations for redundant data storage rather than read/write speed. Turn off that redundant writing and stripe across as many drives as you can in parallel: RAID 0. Yes, you may lose a drive. Yes, that means you lose a run. But if that happens once every six months, which is very unlikely, the lost productivity from those lost runs is small compared to the lost productivity of solving all those other runs on a slow disk array.

Lastly, benchmark. This is obvious but often hard to do right. The key is to find real problems that represent a spectrum of the runs you plan on doing. Often different runs, even within the same solver, have different HPC needs; it is a good idea to understand which are more common and bias your design toward those. Do not benchmark with generic standard benchmarks; use industry-accepted benchmarks for numerical simulation. Yes, it’s an amazing feeling knowing that your new cluster is number 500 on the Top 500 list, but if it is number 5000 on the ANSYS numerical simulation benchmark list, nobody wins.

Fixing the System You Have

As of late we have started tearing down clusters at numerous companies around the US. Of course we would love to sell you new hardware; however, at PADT, as mentioned before, we love numerical simulation. Fixing your current system may allow you to stretch that investment another year or more. As a co-owner of a twenty-year-old company, this makes me feel good about that initial investment. When we sic our IT team on extending the life of one of our systems, I start thinking about and planning for that next $150k investment we will need to make in a year or more.

Breathing new life into your existing hardware requires almost the same steps as avoiding a bad system in the first place. PADT has sent our team around the country helping companies breathe new life into their existing infrastructure. The steps they use are the same, but instead of designing things, we change things: work with an expert, start cutting stuff out, avoid fear- and “cool factor”-based choices, and verify everything.

Take a look at and understand the output from your solvers; there is a lot of data in there. As an example, here is an article we wrote describing some of those hidden gems within your numerical simulation outputs: http://www.padtinc.com/blog/the-focus/ansys-mechanical-io-bound-cpu-bound

Play with things, see what helps and what hurts. It may be time to bring in an outside expert to look at things with fresh eyes.

Do not be afraid to push back against what IT is suggesting; unless you are very fortunate, they probably don’t have the same understanding as you do when it comes to numerical simulation computing. They care about security and minimizing the cost of maintaining systems. They may not be risk takers, and they don’t like non-standard solutions. All of this can often result in a system that is configured for IT, and not for fast numerical simulation solves. You may have to bring in senior management to solve this issue.

PADT is Here to Help

The easiest way to avoid all of this is to simply purchase your HPC hardware from PADT.  We know simulation, we know HPC, and we can translate between engineers and IT.  This is simply because simulation is what we do, and have done since 1994.   We can configure the right system to meet your needs, at that point on the price performance curve you want.  Our CUBE systems also come preloaded and tested with your simulation software, so you don’t have to worry about getting things to work once the hardware shows up.

If you already have a system or are locked in to a provider, we are still here to help.  Our system architects can consult over the phone or in person, bringing their expertise to the table on fixing existing systems or spec’ing new ones.  In fact, the idea for this article came when our IT manager was reconfiguring a customer’s “name brand” cluster here in Phoenix, and he got a call from a user in the Midwest that had the exact same problem.  Lots of expensive hardware, and disappointing performance. They both had the wrong hardware for their problems, system bottlenecks, and configuration issues.

Learn more on our HPC Server and Cluster Performance Tuning page, or by contacting us. We would love to help out. It is what we like to do and we are good at it.

I’m All Bound Up! : A Brief Discussion on What It Means To Be Compute Bound or I/O Bound

We often get questions from our customers, both ANSYS product and CUBE HVPC users, on how to get their jobs to run faster. Should they get better disk drives or focus on CPU performance? We have found that disk drive performance often gets the blame when it is undeserved. To help figure this out, the first thing we do is look at the output from an ANSYS Mechanical/Mechanical APDL run. Here is an email, slightly modified to keep the user anonymous, that shows our most recent case of this:

From: David Mastel – PADT, Inc.
To: John Engineering

Subject: Re: Re: Re: Relatively no difference between SSD vs. SAS2 15k RPM solve times?

Hi John, so I took a look at your ANSYS Mechanical output files – Based on the problem you are running the machine is Compute Bound. Here is the process on how I came to that conclusion. Additionally, at the end of this email I have included a few recommendations.

All the best,
David

Example 1:

The bottom section of an ANSYS out file for a 2 x 240GB
Samsung 843 SSD RAID0 array:

Total CPU time for main thread                    :      105.9 seconds
Total CPU time summed for all threads             :      119.1 seconds

Elapsed time spent pre-processing model (/PREP7)  :        0.0 seconds
Elapsed time spent solution - preprocessing       :       10.3 seconds
Elapsed time spent computing solution             :       83.5 seconds
Elapsed time spent solution - postprocessing      :        3.9 seconds
Elapsed time spent post-processing model (/POST1) :        0.0 seconds

Equation solver computational rate                :   319444.9 Mflops
Equation solver effective I/O rate                :    26540.1 MB/sec

Maximum total memory used                         :    48999.0 MB
Maximum total memory allocated                    :    54896.0 MB
Maximum total memory available                    :        128 GB

+------ E N D   D I S T R I B U T E D   A N S Y S   S T A T I S T I C S -------+

*---------------------------------------------------------------------------*
|                                                                           |
|                       DISTRIBUTED ANSYS RUN COMPLETED                     |
|                                                                           |
|---------------------------------------------------------------------------|
|                                                                           |
|            Release 14.5.7         UP20130316         WINDOWS x64          |
|                                                                           |
|---------------------------------------------------------------------------|
|                                                                           |
| Database Requested(-db)   512 MB    Scratch Memory Requested       512 MB |
| Maximum Database Used     447 MB    Maximum Scratch Memory Used   4523 MB |
|                                                                           |
|---------------------------------------------------------------------------|
|                                                                           |
|        CP Time      (sec) =        119.01       Time  =  15:41:54         |
|        Elapsed Time (sec) =        117.000       Date  =  10/21/2013      |
|                                                                           |
*---------------------------------------------------------------------------*

For a quick refresher on what it means to be compute bound or I/O bound, let’s review what ANSYS Mechanical APDL tells you.

When looking at your ANSYS Mechanical APDL out files (this file is created during the solve in ANSYS Mechanical, since ANSYS Mechanical is just running ANSYS Mechanical APDL behind the scenes), I/O bound and compute bound essentially mean the following:

I/O Bound:

  1. The Elapsed Time is noticeably greater than the main thread CPU time

Compute Bound:

  1. The Elapsed Time is approximately equal to the main thread CPU time

Example 2:
CUBE HVPC – Samsung 843 – 240GB SATA III SSD 6Gbps – RAID 0
Total CPU time for main thread :  105.9  seconds
Elapsed Time (sec) :   117.000        seconds

CUBE HVPC – Hitachi 600GB SAS2 15k RPM – RAID 0
Total CPU time for main thread  :  109.0 seconds
Elapsed Time (sec) :   120.000       seconds

Recommendations for a CPU compute bound ANSYS server or workstation:

When computers are compute bound I normally recommend the following.

  1. Add faster processors – check!
  2. Use more cores for solve – I think you are in the process of doing this now?
  3. Instead of running SMP go DMP – unable to use DMP with your solve
  4. Add an accelerator card (NVIDIA Tesla K20X), which unfortunately does not help in your particular solving situation.

Please let me know if you need any further information:

David

David Mastel
Manager, Information Technology

Phoenix Analysis & Design Technologies
7755 S. Research Dr, Suite 110
Tempe, AZ  85284
David.Mastel@PADTINC.com

The hardware we have available changes every couple of months or so, but right now, for a user who is running this type of ANSYS Mechanical/Mechanical APDL model, we are recommending the following configuration:

CUBE HVPC Recommended Workstation for ANSYS Mechanical: CUBE HVPC w16-KGPU

CUBE HVPC MODEL:   CUBE HVPC w16i-kgpu
COST:              $16,164.00
CHASSIS:           Mid-Tower (black quiet edition)
PROCESSOR:         Dual socket Intel Xeon E5-2687 v2, 16 cores @ 3.4GHz
CORES:             16 = 2 x 8
MEMORY:            128GB DDR3-1866 ECC REG
STORAGE:           4 x 240GB SATA III SSD 6Gbps, 2 x 600GB SAS2 6Gbps (2.1 TB)
RAID CONTROLLER:   SMC LSI 2208 6Gbps
GRAPHICS:          NVIDIA Quadro K5000
ACCELERATOR:       NVIDIA Tesla K20
OS:                Microsoft Windows 7 Professional 64-bit
OTHER:             ANSYS R14.5.7, ANSYS R15


Here are some references to some basic information and to the systems we recommend:
http://www.intel.com
http://en.wikipedia.org/wiki/CPU_bound
http://en.wikipedia.org/wiki/I/O_bound
http://www.supermicro.com
http://www.cube-hvpc.com
http://www.ansys.com
http://www.nvidia.com

ANSYS & 3D Printing: Converting your ANSYS Mechanical or MAPDL Model into an STL File

3D printing is all the rage these days.  PADT has been involved in what should be called Additive Manufacturing since our founding twenty years ago.  So people in the ANSYS world often come to us for advice on things 3D Printer’ish.  And last week we got an email asking if we had a way to convert a deformed mesh into an STL file that can be used to print that deformed geometry.  This email caused neurons to fire that had not fired in some time. I remembered writing something, but it was a long time ago.

Fortunately I have Google Desktop on my computer so I searched for ans2stl, knowing that I always called my translators ans2nnn of some kind. There it was.  Last updated in 2001, written in maybe 1995. C.  I guess I shouldn’t complain, it could have been FORTRAN. The notes say that the program has been successfully tested on Windows NT. That was a long time ago.

So I dusted it off and present it here as a way to get results from your ANSYS Mechanical or ANSYS Mechanical APDL model as a deformed STL file.

UPDATE – 7/8/2014

Since this article was written, we have done some more work with STL files. The macro below works fine on a tetrahedral mesh, but if you have hex elements it won’t work – it assumes triangles on the face.  It also requires a macro and some ‘C’ code, which is an extra pain. So we wrote a more generic macro that works with hex or tet meshes and writes the file directly. It can be a bit slow, but not annoyingly slow.  We recommend you use this method instead of the ones outlined below.

Here is the macro:  writstl.zip

The Process

An STL file is basically a faceted representation of geometry: triangles on the surface of your model. So to get an STL file of an FEA model, you simply need to generate triangles on your mesh faces, write them out to a file, and convert them to the STL format.  If you want deformed geometry, simply use the UPGEOM command to move your nodes to their deformed positions.

The Program

Here is the source code for the windows version of the program:

/*
---------------------------------------------------------------------------

 PADT--------------------------------------------------- Phoenix Analysis &
                                                        Design Technologies

---------------------------------------------------------------------------
                             www.padtinc.com
---------------------------------------------------------------------------

       Package: ans2stl

          File: ans2stl.c
          Args: rootname
        Author: Eric Miller, PADT
		(480) 813-4884 
		eric.miller@padtinc.com

	Simple program that takes the nodes and elements from the
	surface of an ANSYS FE model and converts it to a binary
	STL file.

	USAGE:
		Create an ANSYS surface mesh one of two ways:
			1: amesh the surface with triangles
			2: esurf an existing mesh with triangles
         	Write the triangle surface mesh out with nwrite/ewrite
		Run ans2stl with the rootname of the *.node and *.elem files
		   as the only argument
		This should create a binary STL file

	ASSUMPTIONS:
		The ANSYS elements are 4 noded shells (MESH200 is suggested)
		in triangular format (nodes 3 and 4 the same)

		This code has been successfully compiled and tested
		on WindowsNT

		NOTE: There is a known issue on UNIX with byte order
				Please contact me if you need a UNIX version

	COMPILE:
		gcc -o ans2stl_win ans2stl_win.c

       10/31/01:       Cleaned up for release to XANSYS and such
       1/13/2014:	Yikes, its been 12+ years. A little update 
       			and publish on The Focus blog
			Checked it to see if it works with Windows 7. 
			It still compiles with GCC just fine.

---------------------------------------------------------------------------
PADT, Inc. provides this software to the general public as a courtesy.
Neither the company nor its employees are responsible for the use or
accuracy of this software.  In short, it is free, and you get what
you pay for.
---------------------------------------------------------------------------
*/
/*======================================================

   SAMPLE ANSYS INPUT DECK THAT SHOWS USAGE

finish
/clear
/file,a2stest
/PREP7  
!----------
! Build silly geometry
BLC4,-0.6,0.35,1,-0.75,0.55 
SPH4,-0.8,-0.4,0.45 
CON4,-0.15,-0.55,0.05,0.35,0.55 
VADD,all
!------------------------
! Mesh surface with non-solved (MESH200) triangles
et,1,200,4
MSHAPE,1,2D   ! Use triangles for Areas
MSHKEY,0      ! Free mesh
SMRTSIZE,,,,,5
AMESH,all
!----------------------
! Write out nodes and elements
nwrite,a2stest,node
ewrite,a2stest,elem
!--------------------
! Execute the ans2stl program
/sys,ans2stl_win.exe a2stest

======================================================= */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct vertStruct *vert;
typedef struct facetStruct *facets;
typedef struct facetListStruct *facetList;

        int     ie[8][999999];
        float   coord[3][999999];
        int	np[999999];

struct vertStruct {
  float	x,y,z;
  float	nx,ny,nz;
  int  ivrt;
  facetList	firstFacet;
};

struct facetListStruct {
  facets	facet;
  facetList	next;
};

struct facetStruct {
  float	xn,yn,zn;
  vert	v1,v2,v3;
};

facets	theFacets;
vert	theVerts;

char	stlInpFile[80];
float	xmin,xmax,ymin,ymax,zmin,zmax;
float   ftrAngle;
int	nf,nv;  

void swapit();
void readBin();
void getnorm();
long readnodes();
long readelems();

/*--------------------------------*/
main(argc,argv)
     int argc;
     char *argv[];
{
  char nfname[255];
  char efname[255];
  char sfname[255];
  char s4[4];
  FILE	*sfile;
  int	nnode,nelem,i,i1,i2,i3;
  float	xn,yn,zn;

  if(argc <= 1){
        puts("Usage:  ans2stl file_root");
        exit(1);
  }
  sprintf(nfname,"%s.node",argv[1]);
  sprintf(efname,"%s.elem",argv[1]);
  sprintf(sfname,"%s.stl",argv[1]);

  nnode = readnodes(nfname);
  nelem = readelems(efname);
  nf = nelem;

  sfile = fopen(sfname,"wb");
  {
    /* the binary STL header must be exactly 80 bytes; a zero-padded
       buffer avoids reading past the end of the short string literal */
    char header[80] = "PADT STL File, Solid Binary";
    fwrite(header,80,1,sfile);
  }
  swapit(&nelem,s4);    fwrite(s4,4,1,sfile);

  for(i=0;i<nelem;i++){ 
      i1 = np[ie[0][i]];
      i2 = np[ie[1][i]];
      i3 = np[ie[2][i]];
      getnorm(&xn,&yn,&zn,i1,i2,i3);

      swapit(&xn,s4);	fwrite(s4,4,1,sfile);
      swapit(&yn,s4);	fwrite(s4,4,1,sfile);
      swapit(&zn,s4);	fwrite(s4,4,1,sfile);

      swapit(&coord[0][i1],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[1][i1],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[2][i1],s4);	fwrite(s4,4,1,sfile);

      swapit(&coord[0][i2],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[1][i2],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[2][i2],s4);	fwrite(s4,4,1,sfile);

      swapit(&coord[0][i3],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[1][i3],s4);	fwrite(s4,4,1,sfile);
      swapit(&coord[2][i3],s4);	fwrite(s4,4,1,sfile);
      { char attr2[2] = {0,0}; fwrite(attr2,2,1,sfile); } /* zeroed 2-byte attribute count */
  }
  fclose(sfile);
    puts(" ");
  printf("  STL Data Written to %s.stl \n",argv[1]);
    puts("  Done!!!!!!!!!");
  exit(0);
}

void  getnorm(xn,yn,zn,i1,i2,i3)
	float	*xn,*yn,*zn;
	int	i1,i2,i3;
{
	float	v1[3],v2[3];
	int	i;

        for(i=0;i<3;i++){
	  v1[i] = coord[i][i3] - coord[i][i2];
	  v2[i] = coord[i][i1] - coord[i][i2];
	}

	*xn = (v1[1]*v2[2]) - (v1[2]*v2[1]);
	*yn = (v1[2]*v2[0]) - (v1[0]*v2[2]);
	*zn = (v1[0]*v2[1]) - (v1[1]*v2[0]);
}
long readelems(fname)
        char    *fname;
{
        long num,i;
        FILE *nfile;
        char    string[256],s1[7];

        num = 0;
        nfile = fopen(fname,"r");
		if(!nfile){
			puts(" error on element file open, bye!");
			exit(1);
		}
        while(fgets(string,86,nfile)){
          for(i=0;i<8;i++){
            strncpy(s1,&string[6*i],6);
            s1[6] = '\0';
            sscanf(s1,"%d",&ie[i][num]);
          }
          num++;
        }

        printf("Number of element read: %d\n",num);
        return(num);
}

long readnodes(fname)
        char	*fname;
{
        FILE    *nfile;
        long     num,typeflag,nval,ifoo;
        char    string[256];

        num = 0;
        nfile = fopen(fname,"r");
		if(!nfile){
			puts(" error on node file open, bye!");
			exit(1);

		}
        while(fgets(string,100,nfile)){
          sscanf(string,"%d ",&nval);
          switch(nval){
            case(-888):
                typeflag = 1;
            break;
            case(-999):
                typeflag = 0;
            break;
            default:
                np[nval] = num;
                if(typeflag){
                        sscanf(string,"%d %g %g %g",
                           &ifoo,&coord[0][num],&coord[1][num],&coord[2][num]);
                }else{
                        sscanf(string,"%d %g %g %g",
                           &ifoo,&coord[0][num],&coord[1][num],&coord[2][num]);
                        fgets(string,81,nfile);
                }
                num++;
            break;
        }

        }
        printf("Number of nodes read %d\n",num);
        return(num);

}

/* A little ditty for the byte order: STL files are little-endian, so on
   Windows this is just a straight 4-byte copy; a big-endian UNIX build
   would need a real byte swap */
void swapit(s1,s2)
     char s1[4],s2[4];
{
  s2[0] = s1[0];
  s2[1] = s1[1];
  s2[2] = s1[2];
  s2[3] = s1[3];
}

ans2stl_win_2014_01_28.zip

Creating the Nodes and Elements

I’ve created a little example macro that can be used to make an STL file of your deformed geometry.  If you do not want the deformed geometry, simply remove or comment out the UPGEOM command.  This macro is good for MAPDL or ANSYS Mechanical; just comment out the last line to use it with MAPDL:

finish                  ! Exit whatever preprocessor you're in

! Move the RST file to a temp file for the UPGEOM. Comment out if you want
! the original geometry
/copy,file,rst,,stl_temp,rst

/prep7                  ! Go in to PREP7
et,999,200,4            ! Create a dummy triangle element type, non-solved (200)
type,999                ! Make it the active type
esurf,all               ! Surface mesh your model

! Update the geometry to the deformed shape
! The first argument is the scale factor, adjust to the appropriate level
! Comment this line out if you don't want deformed geometry
upgeom,1000,,,stl_temp,rst

esel,type,999           ! Select those new elements
nelem                   ! Select the nodes associated with them
nwrite,stl_temp,node    ! Write the node file
ewrite,stl_temp,elem    ! Write the element file

! Run the program to convert
! This assumes your executable is in c:\temp. If not, change to the proper
! location
/sys,c:\temp\ans2stl_win.exe stl_temp

! If this is an ANSYS Mechanical code snippet, then copy the resulting STL
! file up to the root directory for the project.
! For MAPDL, comment this line out.
/copy,stl_temp,stl,,stl_temp,stl,..\..

An Example

To prove this out using modern computing technology (remember, last time I used this was in 2001) I brought up my trusty valve body model and slammed 5000 lbs on one end, holding it on the top flange.  I then inserted the Commands object into the post processing branch:

image

When the model is solved, that command object will get executed after ANSYS is done doing all of its post processing, creating an STL of the deformed geometry. Here is what it looks like in the output file. You can see what it looks like when APDL executes the various commands:

/COPY FILE FROM FILE= file.rst

TO FILE= stl_temp.rst

FILE file.rst COPIED TO stl_temp.rst


***** ANSYS – ENGINEERING ANALYSIS SYSTEM RELEASE 15.0 *****

ANSYS Multiphysics

65420042 VERSION=WINDOWS x64 08:39:44 JAN 14, 2014 CP= 22.074

valve_stl–Static Structural (A5)

Note – This ANSYS version was linked by Licensee

***** ANSYS ANALYSIS DEFINITION (PREP7) *****

ELEMENT TYPE 999 IS MESH200 3-NODE TRIA MESHING FACET

KEYOPT( 1- 6)= 4 0 0 0 0 0

KEYOPT( 7-12)= 0 0 0 0 0 0

KEYOPT(13-18)= 0 0 0 0 0 0

CURRENT NODAL DOF SET IS UX UY UZ

THREE-DIMENSIONAL MODEL

ELEMENT TYPE SET TO 999

GENERATE ELEMENTS ON SURFACE DEFINED BY SELECTED NODES

TYPE= 999 REAL= 1 MATERIAL= 1 ESYS= 0

NUMBER OF ELEMENTS GENERATED= 13648

USING FILE stl_temp.rst

THE SCALE FACTOR HAS BEEN SET TO 1000.0

USING FILE stl_temp.rst

ESEL FOR LABEL= TYPE FROM 999 TO 999 BY 1

13648 ELEMENTS (OF 43707 DEFINED) SELECTED BY ESEL COMMAND.

SELECT ALL NODES HAVING ANY ELEMENT IN ELEMENT SET.

6814 NODES (OF 53895 DEFINED) SELECTED FROM

13648 SELECTED ELEMENTS BY NELE COMMAND.

WRITE ALL SELECTED NODES TO THE NODES FILE.

START WRITING AT THE BEGINNING OF FILE stl_temp.node

6814 NODES WERE WRITTEN TO FILE= stl_temp.node

WRITE ALL SELECTED ELEMENTS TO THE ELEMENT FILE.

START WRITTING AT THE BEGINNING OF FILE stl_temp.elem

Using Format = 14(I6)

13648 ELEMENTS WERE WRITTEN TO FILE= stl_temp.elem

SYSTEM=

c:\temp\ans2stl_win.exe stl_temp

Number of nodes read 6814

Number of element read: 13648

STL Data Written to stl_temp.stl

Done!!!!!!!!!

/COPY FILE FROM FILE= stl_temp.stl

TO FILE= ..\..\stl_temp.stl

FILE stl_temp.stl COPIED TO ..\..\stl_temp.stl

image

The resulting STL file looks great:

image

I use MeshLab to view my STL files because… well, it is free.  Do note that the mesh looks coarser.  This is because the ANSYS mesh uses tets with midside nodes.  When those faces get converted to triangles the midside nodes are removed, so you do get a coarser-looking model.

And after getting bumped from the queue a couple of times by “paying” jobs, our RP group printed up a nice FDM version for me on one of our Stratasys uPrint Plus machines:

image

It’s kind of hard to see, so I went out to the parking lot and recorded a short video of the part, twisting it around a bit:

Here is the ANSYS Mechanical project archive if you want to play with it yourself.

Other Things to Consider

Using FE Modeler

You can use FE Modeler in a couple of different ways with STL files. First off, you can read an STL file made using the method above. If you don’t have an STL preview tool, it is an easy way to check your distorted mesh.  Just choose STL as the input file format:

image

You get this:

image

If you look back up at the open dialog you will notice that it reads a bunch of mesh formats. So one thing you could do, instead of using my little program, is use FE Modeler to make your STL.  Instead of executing the program with a /SYS command, simply use a CDWRITE,DB command and then read the resulting *.cdb file into FE Modeler.  To write out the STL, just set the “Target System” to STL and then click “Write Solver File”.

image

You may know, or may have noticed in the image above, that FE Modeler can read other FEA meshes.  So if you are using some other FEA package, which you should not, then you can make an STL file in FE Modeler as well.

Color Contours

The next obvious question is how to get the color contours onto the printed part. Right now we don’t have that type of printer here at PADT, but I believe that the dominant color 3D printers out there, the former Z-Corp and now 3D Systems machines, will read ANSYS results files. Stratasys JUST announced a new color 3D printer that makes usable parts. Right now they don’t have a way to do contours, but as soon as they do we will publish something.

Another option is to use a /SHOW,vrml option and then convert that to STL with the color information.

Scaling

Scaling is something you should think about. Not only the scaling on your deformed geometry, but the scaling on your model for printing.  Units can be tricky with STL files so make sure you check your model size before you print.

Smoother STL Surfaces

Your FEA mesh may be kind of coarse, and the resulting STL file is even coarser because of the whole midside node thing.  Most of the smoothing tools out there will also get rid of sharp edges, and you don’t want that. Your best bet is to refine your mesh or use a tool like Geomagic.

Making a CAD Model from my Deformed Mesh

Perhaps you stumbled on this posting not wanting to print your model. Maybe you want a CAD model of your deformed geometry.  You would use the same process, and then use Geomagic Studio.  It actually works very well and gives you a usable CAD model when you are done.

Efficient Engineering Data, Part 2: Setting Default Materials and Assignments aka No, You’re Not Stuck with Structural Steel for the Rest of Your Life

Longer ago than I care to admit, I wrote an article about creating and using your own material libraries in Workbench. This is the long-awaited follow-up, which covers setting the default Engineering Data materials and default material assignments in Mechanical and other analysis editors.

Note:
Part of the reason it’s taken me this long is that I moved to New Mexico to help staff PADT’s new office there, and to shadow Walter White. It has been a hectic, exhausting endeavor but I’m here and I’m finally settled in. If you’re in New Mexico and are interested in ANSYS, engineering services, product development, or rapid prototyping (e.g. 3D printing), please feel free to contact me.

In order to make the best use of the procedures here, you will probably want to know how to create your own material libraries. Part 1 describes how to do this. This will also work with the material libraries that come with the ANSYS installation, though.

Pick Favorites

The first step is to get into Engineering Data and expose the material libraries by clicking on the book stack button ( image ). Then, drag the materials of your choice from the appropriate library(ies) to the Favorites Data Source. These can include materials you want to have available in Mechanical by default as well as materials that you would like to consolidate into a single location for quick access. At this point, the default material availability and assignments have not been altered. These will be handled in the next couple of steps.

image

Drag and Drop Materials to Favorites

Set Default Material Availability

To specify which materials will be immediately available for assignment in future analyses, go to the Favorites Data Source and check all applicable materials in column D. Though not assigned to the immediate set of engineering data, these will be on the default list of available materials in subsequent analyses, i.e. when you create a new analysis in the same project schematic or when you exit and reopen Workbench.

image

Check to Add to Default List of Available Materials

image

Materials Immediately Available Inside Mechanical

Set Default Material Assignment

Now our most commonly used materials are immediately available in our analysis editor. But Structural Steel still lingers. In many, if not most, cases, we would prefer our default assignment to be something else.

The fix is easy. Once again, go to the Favorites Data Source, right click the material you wish to have as your default material, and select Default Solid Material (and if you’re doing Emag or CFD, you can set your default fluid or field material with the right-click menu too). Your default solid material will now replace Structural Steel in subsequent analyses.

image

Example: Aluminum 6061-T651 Set as Default Material Assignment

image

Becomes Default Material Assignment in Analysis

Note that you can stop at any step in this process. If you want to consolidate favorite materials, but don’t want to have them immediately in your analysis editor, you can do that. If you want a default list of materials to select from without specifying a default material assignment, you can do that too. More than likely, though, you’ll want to do all three.

Press Release: PADT and M-Tech Industries to Highlight Fluid-Thermal System Modeling for Mining with Flownex at 2014 SME Annual Meeting and Exhibit

We are very excited about the upcoming 2014 SME Annual Meeting and Exhibit in Salt Lake City, Utah.  Not only is this in our very own back yard (or is it our front or side yard?), it is a great place for us to show off the Flownex Simulation Environment and how useful it is for simulating mining systems. Besides promoting Flownex, we will have a booth in the exhibit area and we will be presenting a paper on some work we did with ANSYS software for mining.  Last year’s show in Denver was a great experience and we know this year’s will be as well.

To promote the event and Flownex usage in the industry, we just published the following press release:

image

The release is accompanied by two great videos that Stephen did showing the usage of Flownex on some real mining problems.

Part 1

Part 2

Also, don’t forget that we still have room in our free Denver, Colorado Introduction to Flownex Class.

As always with Flownex, contact Roy Haynie (roy.haynie@padtinc.com) to learn more.

Video Tips: Multiphysics Simulation with ANSYS Maxwell and ANSYS Mechanical – Part 2

This is Part 2 of our two-part video series showing you a multiphysics simulation with ANSYS Maxwell and ANSYS Mechanical. In this video we take the results from ANSYS Maxwell and use them to compute the temperature distribution and, finally, the structural deformation due to the current through the parts.

The Part 1 video can be found here