ANSYS Mechanical Scripting: HOWTO Part 5

Where we Left Off Last Time…

In the last post of this series, I left you with an introduction to the Microsoft Visual Studio debugger and a description of how to debug scripts that we write.  In this post, I’m going to illustrate how to write what might actually turn out to be a useful script!  The idea for this script came from a request from one of our users, Craig Hildreth, who has always wished that there was a better way to define slice planes in Mechanical. 

Problem Description

Let’s say you want to create a slice plane in ANSYS Mechanical that cuts precisely through a certain location in your model and that is precisely oriented to be normal to a certain direction.  How would you do it?  If you’ve used the slice tool in Mechanical before, you know that you basically yank a line segment with the mouse across the screen.  The resulting slice plane is defined such that if you were to extend the line segment you created with the mouse to infinity and then extrude the line in the direction of the current view, the resulting plane formed by this operation would be the slice plane.  Could it be more intuitive?  So, off you go spinning and panning your model around until you think you are looking right down the slice plane, then with the steadiness of a surgeon you try with all your might to draw a straight line with the mouse.  (Good luck!)  Now, to ANSYS’ credit, this is a really cool tool if you just want to whip a quick slice plane on the model to look inside somewhere, especially with that cool little drag widget.  However, if you are trying to use the slice tool for any kind of meaningful post processing, you’re probably out of luck.  What would be awfully darn useful would be to create or use a Cartesian coordinate system that would then define the slice plane.  That sounds kind of intuitive, right?  You might optimistically think that functionality should already be there, but alas, to my knowledge it is not.  So let’s see if we can fake it.  (If you just want the code and don’t care how we write it, check out this post where we give the code.  This is a long post because it turns out this is trickier than it looks…)

How the Heck Do You Create a Slice Plane… Programmatically that is?

So, step one is to figure out what Mechanical does under the hood when we create a slice plane.  If you haven’t read Part 2 of this series, I suggest you do so now; we’re going to use the techniques described in that post to do our digging.

Searching the ANSYS Source Files for Info on Creating a Slice Plane

If you haven’t started up Mechanical, do so now and make sure that the section plane window is visible by ensuring that the menu item View->Windows->Section Planes has a check mark beside it.  In the section plane window, hover your mouse over the create section plane button.  You should see a tooltip like the following:

image

We see that the tooltip is called “New Section Plane”.  This is a string we can search for.  So, search the entire directory tree C:\Program Files\ANSYS Inc\v130\aisol\DesignSpace\ for the string “New Section Plane”.  We find it in C:\Program Files\ANSYS Inc\v130\aisol\DesignSpace\DSPages\Language\en-us\xml\dsstringtable.xml.  Remember that this is where you will normally find GUI-related strings.  This string has the string id ID_NewSectionPlane, so let’s search for that next.  One place this id shows up is the file C:\Program Files\ANSYS Inc\v130\aisol\DesignSpace\DSPages\scripts\DSSectionPlanesScript.js.  That file name sounds promising.  Here is some code from that file showing where the id appears.

 

btnAdd = toolbar.Buttons.AddButton( TBT_BUTTON,
    "NewSectionPlane", ID_ADD,
    "NewSectionPlane", false, "",
    localString("ID_NewSectionPlane") );

btnAdd.OnClick.AddCOM(g_sectionPaneObjects,
    "SectionPane_Add");

This code is reformatted to fit better in the blog.  You can see that the code appears to be adding a button to a toolbar and also registering a callback for when the button is clicked.  The callback is what we’re interested in, so let’s search for “SectionPane_Add”.  That search leads us to the following code:

this.SectionPane_Add = function sectionPane_Add(sender, args)
{
    if (!AllowSlicing())  return;
    doAddSectionPlane();
}

We’re getting closer!  It looks like we need to peek into doAddSectionPlane(), so let’s search for that.  It turns out that function is a bit of a dead end, but luckily for us, right below it we see the following code:

function doDeleteSectionPlane()
{
    var sliceTool  = ds.Graphics.SliceTool;
    if (sliceTool == null)
        return;
   ...
}

Hey!  There is something called a SliceTool that is a member of the Graphics object associated with the global DesignSpace object.  Let’s quit grepping and slogging through the ANSYS Mechanical code base and fire up the debugger to see what we can learn about the SliceTool.

Using the Visual Studio Debugger to Learn About the Slice Tool

Now that we’ve figured out that there is an object called SliceTool hanging off the Graphics object, let’s use the debugger to interrogate it.  Create the following JavaScript code and save it somewhere on your hard drive.

function slice_tool_interrogate() {
    debugger;
    var slice_tool = DS.Graphics.SliceTool;
    // Useless code to give us a place
    // to stop
    var i = 1;
}

slice_tool_interrogate();

The code above will allow us to stop in the debugger and look at the slice tool object.  The following screen shot shows the slice_tool object’s properties and methods in the debugger.

image

Hmm… Well, there is one function there called SetPlane(ix1, iy1, ix2, iy2) that looks kind of promising.  Unfortunately, I don’t see any function that would seem to hint at defining the plane with a point and a normal, for example.  It looks like the programmatic interface mimics the user interface.  Arg…  So, let’s just guess that given two points in screen coordinates, (x1, y1) and (x2, y2), we can create a slice plane programmatically.  What should x1, y1 and x2, y2 be?  Are screen coordinates normalized?  Where is the screen origin: top left, bottom right, or the middle of the screen?  The fact that the arguments have an “i” in front of them hints that perhaps they are integers, which would rule out normalized values.  What to do?  Well, let’s try this.  Let’s orient some geometry so that it is mostly centered on the screen and, using the immediate window in the debugger, pass in some values to see what happens.  Here is a screen shot prior to using the immediate window to test things out.

image

Now, I’m going to run our little test script again under the debugger, but after the slice_tool object is created, I’m going to enter the following into the debugger’s immediate window:

slice_tool.SetPlane(0,0.5,1,0.5)

When I do that, I get the following:

image

So, we created a slice plane!!!  Unfortunately, it doesn’t appear that it cut through anything.  We learned a couple of things.

  1. The function DS.Graphics.SliceTool.SetPlane(x1, y1, x2, y2) will indeed create a slice plane object.
  2. The coordinates don’t appear to be normalized.  (We can’t rule this out definitively, but let’s try something else.)

Delete this plane, and let’s work with the hypothesis that we need to pass in the screen coordinates as integer values.  If I were a programmer, I would choose pixels as my unit of measure for screen coordinates; things like mouse clicks can then be translated directly.  So, now the question is how big is the screen?  Or, more precisely, how big is the graphics window?  This DS.Graphics object might hold the answers.  If you poke around this object inside the debugger you’ll notice that it has a property called GfxWindow.  That sounds promising!  If we look inside that object, we see:

image 

Ah ha!  This thing has a height and a width property associated with it, which, judging by the values, are in units of pixels.  So let’s try the following code.  You can modify your little test script as follows:

function slice_tool_interrogate() {
    debugger;
    var slice_tool = DS.Graphics.SliceTool;
    var height = DS.Graphics.GfxWindow.height;
    var width = DS.Graphics.GfxWindow.width;
    var slice_plane = 
        slice_tool.SetPlane(0, height / 2, width, height / 2);
    // Useless code to give us a place
    // to stop
    var i = 1;
}
slice_tool_interrogate();

When I run this little script, my “after photo” looks like:

image

Ha!  We got ourselves a genuine slice plane.  And we got it where we want it.  Well, sort of…  OK, so forgive me for celebrating a minor victory, but we were able to create a slice plane and draw a perfectly straight line right across the middle of the screen.  Now, if only we could programmatically rotate and pan the model around so that we are centered right on the origin of the plane we are interested in and are looking right down parallel to the plane.  If we can do that, then we just draw a quick line with the SetPlane function and, voila, we’re done.

How the Heck Do You Rotate and Pan the View… Programmatically that is?

You know, this Graphics object is pretty handy.  Let’s see what other goodies it might hold.  If we poke around in it a bit, we see that it has a property called Camera.  A camera sounds like something that would help us look at things from a certain point of view.  Here is a peek into the camera object inside the debugger:

image

This is looking promising.  You’ll notice that we’ve got a pan and a rotate function.  And it looks like we’ve got a focus point and a view direction vector.  Maybe we can write a function that will pan the view around until the origin of our coordinate system is centered where we want it, and simultaneously rotate the view around until our view direction is oriented how we want it.  One other thing we’ll have to handle is rotation about the view direction vector itself, so that the view spins around the axis we are looking down.  Think of a drill bit spinning.  In that case neither the view direction nor the focus point would likely change, but there is that rotateZ() function that might bail us out.  Let’s see if we can figure out what all these things do.  Let’s start with the following test functions:

function do_pan(up, right) {
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    var a = 0;
    for (var i = 0; i < 50; i++) {
        camera.pan(up, right);
        // Just spin her so that the "animation" is more smooth.
        for (var j = 0; j < 1000; j++) {
            a += Math.sin(j);
        }
        graphics.Refresh();
    }
    // I don't know if Javascript does dead code removal, but
    // try to trick any optimizations the javascript interpreter
    // might do to our busy loop.
    return a;
}

function do_rotate(up, right) {
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    var a = 0;
    for (var i = 0; i < 50; i++) {
        camera.rotate(up, right);
        // Just spin her so that the "animation" is more smooth.
        for (var j = 0; j < 1000; j++) {
            a += Math.sin(j);
        }
        graphics.Refresh();
    }
    // I don't know if Javascript does dead code removal, but
    // try to trick any optimizations the javascript interpreter
    // might do to our busy loop.
    return a;
}

These two functions basically call the Camera pan and rotate functions repeatedly with some values passed in.  Note that if you include these in your little test script, once you stop in the debugger, you can call these functions inside the immediate window and watch what happens.  Just type do_pan(5,0) for example.  If you play around with these functions, you’ll realize that the rotate function seems to rotate the model about either the screen vertical axis or the screen horizontal axis.  Interestingly, the pan function seems to be backwards.  That is, the amountUp argument seems to move the model to the right and the amountRight argument seems to move the model up.  Ah well…  We’ll just make note of it.  How do we make sense of all these things?
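Since the pan arguments appear to be swapped, a tiny wrapper can hide the quirk.  This is just a sketch based on the behavior observed above; pan_model is our own name (not part of the ANSYS API), and the camera object is passed in as a parameter so the helper is easy to exercise outside of Mechanical:

```javascript
// Hypothetical helper (not part of the ANSYS API): compensate for the
// swapped pan arguments we observed.  Here "amount_up" really moves the
// model up and "amount_right" really moves it right.
function pan_model(camera, amount_up, amount_right) {
    // camera.pan(a, b): a appears to move the model right and b appears
    // to move it up, so swap the arguments here.
    camera.pan(amount_right, amount_up);
}
```

Inside Mechanical you would call it as pan_model(DS.Graphics.Camera, 5, 0).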

Math, Math… Wonderful Math! Vectors and Rotations Galore!

A word of warning: if math makes your head spin, this section will make it spin.  We’re going to take dot products and cross products and do all kinds of other gyrations, so feel free to skip this section if you are not interested.  However, this section constitutes the meat of figuring out what Mechanical is doing with the graphics, given the few bits of information we have highlighted above.

Let’s start with rotating the view so that we’re looking where we want to look.  Our rotate function spins the model about the screen vertical and horizontal axes, so I’m going to propose an iterative technique that sequentially rotates up and rotates right until the view vector is parallel to a supplied orientation vector.  Remember that we need to look edge-on at the plane we wish to use as a cut plane so that we can draw a straight line across our screen and create the cut plane.  So, our view vector needs to be “in the plane” of our cut plane.  Consider the image below:

view_direction

The red vector represents the current view direction.  Now, assume that we call the rotate function associated with the camera object and request an up rotation of 15 degrees.  The resulting new view direction is represented by the green vector above; let’s call this the test vector.  Taking the cross product of these two vectors (green and red) gives us the normal vector, shown in black above, that represents the axis about which our requested rotation occurred.  We now know that if we call the rotate function with a value for the up rotation given the current view vector, we will rotate about this normal vector.  So, now rotate the view back to the original view direction.

Let’s assume that our desired view direction is shown above as the blue vector.  In general, it will not lie in the plane of our current rotation.  However, if we project it onto the plane defined by the normal vector and the origin, we obtain the purple vector above.  I know that I can rotate the view in the “up” direction by some angle and get to the purple vector.  It’s not the blue vector that I want, but it is the projection of the desired view vector onto the plane in which my rotations are occurring.

How big should that rotation be?  The 15 degree trial rotation swept the view through some actual angle, namely the angle between the green and red vectors, and the ratio of that swept angle to the 15 degrees I requested tells me how requested rotations map to actual rotations.  So, if I take the angle between the purple vector and the red vector and divide by this ratio, I get a good guess for the rotation I should request to bring my view to the purple vector.  If it works according to plan, this will get me closer to my desired view direction.  Now, I repeat this whole process rotating in the other direction, that is, rotating to the right.  Between each of these update rotations, I check to see if my current view vector is parallel to my desired view vector within some small tolerance.  If things go well, I should walk my way around to the desired view vector.  Consider the code below.

// Calculate the dot product of two vectors
function dot(va, vb) {
    return va[0] * vb[0] + va[1] * vb[1] + va[2] * vb[2];
}

// Calculate the length of a vector
function norm(va) {
    return Math.sqrt(va[0] * va[0] + va[1] * va[1] + va[2] * va[2]);
}

// Normalize a vector
function normalize(va) {
    var len = norm(va);
    if (len != 0.0) {
        return [va[0] / len, va[1] / len, va[2] / len];
    }
}

// Calculate the cross product of two vectors
function cross(va, vb) {
    return [va[1] * vb[2] - va[2] * vb[1],
            va[2] * vb[0] - va[0] * vb[2], 
            va[0] * vb[1] - va[1] * vb[0]];
}

// Scale a vector by a scalar
function scale(va, sc) {
    return [va[0] * sc, va[1] * sc, va[2] * sc];
}

// Add two vectors
function add(va, vb) {
    return [va[0] + vb[0],
            va[1] + vb[1],
            va[2] + vb[2]];
}

// Subtract vector b from vector a
function sub(va, vb) {
    return [va[0] - vb[0],
            va[1] - vb[1],
            va[2] - vb[2]];
}

function are_parallel(va, vb, tol) {
    return 1.0 - (Math.abs(dot(va, vb)) / (norm(va) * norm(vb))) < tol ? true : false;
}

function are_perpendicular(va, vb, tol) {
    return (Math.abs(dot(va, vb)) / (norm(va) * norm(vb))) < tol ? true : false;
}
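Before using these helpers in anger, we can sanity-check the trial-rotation idea with some made-up vectors (no ANSYS objects involved): start looking down -Z, pitch the view “up” by 15 degrees about the +X axis, and verify that the cross product recovers the rotation axis and the dot product recovers the 15 degrees.

```javascript
// Quick numeric sanity check of the trial-rotation idea using made-up
// vectors.  dot and cross are the same helpers defined above, repeated
// here so the snippet stands alone.
function dot(va, vb) {
    return va[0] * vb[0] + va[1] * vb[1] + va[2] * vb[2];
}
function cross(va, vb) {
    return [va[1] * vb[2] - va[2] * vb[1],
            va[2] * vb[0] - va[0] * vb[2],
            va[0] * vb[1] - va[1] * vb[0]];
}

var rad = 15 * Math.PI / 180;
// "Red" vector: the current view, looking down -Z
var red = [0, 0, -1];
// "Green" vector: the view after pitching up 15 degrees about +X
var green = [0, Math.sin(rad), -Math.cos(rad)];
// The cross product recovers the axis of the trial rotation (+X here)
var axis = cross(red, green);
// The dot product recovers the actual angle swept by the trial rotation
var swept = Math.acos(dot(red, green)) * 180 / Math.PI;  // ~15 degrees
```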

// Rotate the view around so that we are looking in the 
// direction of the desired view
function rotate_view(desired_view) {
    // Get the camera and graphics objects
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    // This is our tolerance for being parallel to the desired view
    var eps = 1e-7;
    // This is the maximum number of iterations we'll try.  
    // While loops with a tolerance check only scare me...
    var max_iter = 200;
    // Get the current view
    var view_c = normalize([camera.ViewDirectionX,
                            camera.ViewDirectionY,
                            camera.ViewDirectionZ]);
    // Make sure the desired view direction is normalized
    var dvd = normalize(desired_view);

    var cnt = 1;
    var b_first_arg = true;
    var normal = null;
    var trial = null;
    var view_p = null;
    var trial_ang = 15;
    var applied_rotation = null;
    var previous_up = 0;
    var previous_right = 0;
    var right_factor = -1.0;
    var up_factor = -1.0;
    // Loop until we're parallel, or we give up trying
    do {
        var factor = cnt / max_iter;
        factor *= factor;
        factor *= factor;
        // Get the current view direction
        var vd = normalize([camera.ViewDirectionX,
                            camera.ViewDirectionY,
                            camera.ViewDirectionZ]);

        if (b_first_arg) {
            camera.rotate(trial_ang, 0);
        } else {
            camera.rotate(0, trial_ang);
        }
        trial = normalize([camera.ViewDirectionX,
                           camera.ViewDirectionY,
                           camera.ViewDirectionZ]);
        var n = normalize(cross(vd, trial));
        // Rotate back
        if (b_first_arg) {
            camera.rotate(-trial_ang, 0);
        } else {
            camera.rotate(0, -trial_ang);
        }
        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);
        // Only do this rotation if our desired view vector
        // is not nearly parallel to our current axis of rotation
        // If it is, then the code inside the if statement will
        // be skipped.
        if (!are_parallel(dvd, n, eps)) {
            // Project the desired view vector onto our plane of rotation
            var vp = normalize(cross(n, normalize(cross(dvd, n))));
            // Calculate the angle between the projected vector
            // and our current view direction
            var needed_rotation = Math.acos(dot(vp, vd)) * 180 / Math.PI;
            
            // Knock it down by a factor associated with our iteration
            // scheme.  This helps prevent spurious jittering as we 
            // make our way there.
            needed_rotation *= (1.0 - factor);
            if (b_first_arg) {
                // If we start to diverge, try rotating back the other way
                if(previous_up < needed_rotation) {
                    up_factor = (up_factor < 0.0) ? 1.0 : -1.0;
                    previous_up = needed_rotation;
                }
                camera.rotate(up_factor * needed_rotation, 0.0);
                
                
            } else {
                if (previous_right < needed_rotation) {
                    right_factor = (right_factor < 0.0) ? 1.0 : -1.0;
                    previous_right = needed_rotation;
                }
                camera.rotate(0.0, right_factor * needed_rotation);
            }
        }
        // See if we are there yet
        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);
        var b_rotate_converged = are_parallel(vd, dvd, eps);
        if (b_rotate_converged) break;
        // Flip directions
        b_first_arg = !b_first_arg;
//        DS.Graphics.Refresh();
    } while (cnt++ < max_iter);
}

You can try this function out with some test vectors to see if indeed it rotates the view into the desired orientation.  One thing you will notice, however, is that the view sometimes is spun at a funny angle as we look down this axis.  So all we’ve done is ensured that the camera is pointing in a certain direction, but there still is one degree of freedom, so to speak, left unspecified with the camera.  That is, you could spin the camera any way you want around this view direction and still technically be looking down the view direction.  So, now how do we figure out what “spin” has been applied to the camera?  Moreover, what spin do we want applied?

The answer to the second question above is related to the coordinate system that will define our cut plane.  Let’s assume that we want to look down the Y axis of the coordinate system that defines our cutting plane, and we want the Z axis of this coordinate system to be pointing straight up on the screen.  If we do this, then when we draw a straight cut line horizontally across the middle of the screen, we will be defining a cut plane in the XY plane of the specified coordinate system.  That is exactly what we want.  In the code above we’ve got the part that will enable us to look down the Y axis; now we just need to figure out how to make the Z axis point straight up.

In most computer graphics implementations there is the notion of an “up vector”, which is defined to be perpendicular to the line of sight and is used to determine which way is up.  That is, the up vector points along the positive Y axis of the screen.  If you look back up at the screen shot of the camera object displayed in the debugger, you will indeed see that there is an UpVector member on that object!  Yippee!!!  Unfortunately, it says “Bad Variable Type”.  Bummer.  That’s code for “this variable exists but you can’t access it from the scripting language”.  We do have this variable, AngleRotationZ, which looks promising, but what is the reference for measuring this angle?  We also have a function called rotateZ(), which again looks promising, but how are we going to know how much to rotate?  We need to know which way is currently up, and how much to rotate around the camera’s Z axis, so that we can orient our view such that when we draw a straight line across the middle of the screen, we’re cutting the model exactly on our desired plane.
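One wrinkle to be aware of before we get to the code: the approach we are about to use assumes the desired up vector is exactly perpendicular to the current view direction.  If the vector you have in hand is not, a quick projection fixes it.  The helper below is a hypothetical utility of our own (perpendicular_up is not part of the script or the ANSYS API), and it repeats the vector helpers from earlier so the snippet stands alone:

```javascript
// Hypothetical utility (not part of the original script): force a
// desired up vector to be exactly perpendicular to the view direction
// by subtracting off its component along the view.
function dot(va, vb) {
    return va[0] * vb[0] + va[1] * vb[1] + va[2] * vb[2];
}
function norm(va) {
    return Math.sqrt(dot(va, va));
}
function normalize(va) {
    var len = norm(va);
    return [va[0] / len, va[1] / len, va[2] / len];
}
function scale(va, sc) {
    return [va[0] * sc, va[1] * sc, va[2] * sc];
}
function sub(va, vb) {
    return [va[0] - vb[0], va[1] - vb[1], va[2] - vb[2]];
}

function perpendicular_up(desired_up, view_dir) {
    var v = normalize(view_dir);
    // Component of the desired up vector along the view direction
    var along = scale(v, dot(desired_up, v));
    // Remove it, leaving only the part perpendicular to the view
    return normalize(sub(desired_up, along));
}
```

For example, if the view direction is [0, 0, 1] and the requested up vector is [0, 1, 1], the helper returns [0, 1, 0].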

Remember that the first argument to the function camera.rotate(amountUp, amountRight) will rotate the view about the Up Vector.  Consider the image below:

rotate_z

Let’s assume that we start with our current view direction, represented in the image above by the red vector.  Next, we perform a trial rotation of 15 degrees about the unknown up vector using camera.rotate(15, 0).  This moves our view direction vector to the green vector shown above.  By taking the cross product of these two vectors, we can determine which way is “up” in the world coordinate system.  We then rotate the view back to where we started.  Now that we know what the current up vector is, we can take its dot product with the desired up vector, shown in purple above, and determine the angle about the camera Z axis we need to rotate so that up is truly up.  Note that this assumes the desired up vector is perpendicular to the current view direction.  Consider the code below.

function rotate_up(desired_up_vector) {
    
    // Get the camera and graphics objects
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    var trial = null;
    var trial_ang = 15;
    var theta1;
    var theta2;
    // Make sure our passed in value is normalized
    var duv = normalize(desired_up_vector);

    // Which way are we currently looking now?
    var vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);

    // Perform our trial rotation about the screen vertical axis
    camera.rotate(trial_ang, 0);
    trial = normalize([camera.ViewDirectionX,
                       camera.ViewDirectionY,
                       camera.ViewDirectionZ]);
    var up = normalize(cross(trial, vd));
    // Rotate back so we don't lose our place
    camera.rotate(-trial_ang, 0);
    // How much do we need to rotate?
    theta1 = Math.acos(dot(duv, up)) * 180 / Math.PI;
    // Rotate assuming a positive rotation
    camera.rotateZ(theta1);
    DS.Graphics.Refresh();
    // Now we need to test to see if this is indeed the proper direction
    // Which way are we currently looking now?
    vd = normalize([camera.ViewDirectionX,
                    camera.ViewDirectionY,
                    camera.ViewDirectionZ]);
    // Perform our trial rotation about the screen vertical axis
    camera.rotate(trial_ang, 0);
    trial = normalize([camera.ViewDirectionX,
                       camera.ViewDirectionY,
                       camera.ViewDirectionZ]);
    up = normalize(cross(trial, vd));
    // Rotate back so we don't lose our place
    camera.rotate(-trial_ang, 0);
    // How much do we need to rotate?
    theta2 = Math.acos(dot(duv, up)) * 180 / Math.PI;
    if (theta2 > theta1) {
        // Rotate backwards if our second guess is larger than our first
        camera.rotateZ(-theta2);
//        DS.Graphics.Refresh();
    }
}

In this code we implement the algorithm sketched out above.  The only little trick is that we might end up rotating in the wrong direction.  We can detect this by performing the same test again after the first rotation; if we are now further off, we rotate back in the other direction.
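The direction test can be seen in isolation with a toy model.  Here angle_to_up() is a made-up stand-in for the measure-the-up-vector-and-take-the-arccosine steps above, rigged so that the true correction is -30 degrees:

```javascript
// Toy model of the direction test in rotate_up: we need a net Z
// rotation of -30 degrees, but we first guess the positive direction.
// angle_to_up() stands in for the trial-rotation measurement above.
function angle_to_up(net_rotation_z) {
    // True misalignment if the required correction is -30 degrees
    return Math.abs(-30 - net_rotation_z);
}

var applied = 0;
var theta1 = angle_to_up(applied);   // 30 degrees off
applied += theta1;                   // guess the positive direction
var theta2 = angle_to_up(applied);   // now 60 degrees off: worse!
if (theta2 > theta1) {
    applied -= theta2;               // rotate back the other way
}
// applied is now -30, so the toy view is correctly aligned
```

Note how rotating by +theta1 and then by -theta2 nets out to -theta1, exactly the correction we needed.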

Now that we can look in a particular direction and we can deduce which way is up, all that is left is to pan the view around so that the location of our cut plane is right in the center of the screen.  This amounts to figuring out how much to pan in the up and right directions.

If you play with the camera.pan() function you will notice that it appears to use screen coordinates as its arguments.  This makes sense from a GUI standpoint: when the user pans the view, they typically do so with the mouse, so the start and end screen coordinates of a mouse movement are the most natural inputs to a pan routine.  Unfortunately, that makes panning more of a challenge for us.  Nonetheless, let’s approach the problem from an iterative viewpoint similar to what we did with the rotations.

We’re going to attack the panning problem using a similar approach to the one we used for the rotate view function.  That is, we’re going to start with a trial pan in one of the directions, look at what that does in world coordinates, then perform a real pan in that direction, or its direct opposite, to try to get closer to our target.  In a similar fashion to the rotate code, we project the pan direction vector in world coordinates onto a plane defined by our current view vector.  At each iteration we test the vector from our focus point to the desired point to see if it is parallel to the view vector.  If it is, then our desired point lies right down the axis of the view vector passing through the focus point, and thus it is in the center of the screen.  The following code implements this technique.

function pan_view(desired_point) {
    // Get the camera and graphics objects
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    // Maximum amount to pan by
    var height = DS.Graphics.GfxWindow.height;
    var width = DS.Graphics.GfxWindow.width;
    // The current focus point
    var fp = null;
    // The trial amount to pan
    var trial_pan = (height + width) / 40;  // Units are pixels...
    // Vector from the focus point to the desired point
    var fp_to_dp = null;
    // The amount we are going to pan
    var pan_amount = null;
    // Tolerance for determining if the view vector is parallel
    // to the focus point to desired point vector
    var eps = 1e-4;
    // We'll flip back and forth between panning up and panning right
    var b_first_arg = true;
    // The current count of our iteration
    var cnt = 0;
    // The maximum number of iterations to perform
    var max_iter = 200;
    // Convergence flag
    var b_pan_converged = false;

    var look_at = desired_point;
    do {
        fp = [camera.FocusPointX,
              camera.FocusPointY,
              camera.FocusPointZ];
        var vd = normalize([camera.ViewDirectionX,
                            camera.ViewDirectionY,
                            camera.ViewDirectionZ]);

        if (b_first_arg == true) {
            camera.pan(trial_pan, 0);
        } else {
            camera.pan(0, trial_pan);
        }
        var trial = [camera.FocusPointX,
                     camera.FocusPointY,
                     camera.FocusPointZ];
        // In world coordinates, see which direction this pan took us
        var pd = sub(trial, fp);
        // Project this onto a plane defined by the current view 
        // vector
        var pd_in_view = cross(vd, cross(pd, vd));

        // Move back to our original location
        if (b_first_arg == true) {
            camera.pan(-trial_pan, 0);
        } else {
            camera.pan(0, -trial_pan);
        }
        fp = [camera.FocusPointX,
              camera.FocusPointY,
              camera.FocusPointZ];
        // Take a vector from the current focus point to the
        // desired center point
        fp_to_dp = normalize(sub(look_at, fp));
        // Only try to pan the view if the vector between our focus
        // point and view vector are not parallel.
        if (Math.abs(1.0 - Math.abs(dot(fp_to_dp, vd))) > eps) {
            // Project this onto a plane defined by the view direction
            var fp_to_dp_in_view = cross(vd, cross(fp_to_dp, vd));

            // Normalize the pan test vector so we can walk our way there
            pd_in_view = normalize(pd_in_view);

            // Figure out how much we need to pan

            pan_amount = trial_pan * dot(fp_to_dp_in_view, pd_in_view);
            // Knock it down by a factor associated with our iteration
            // scheme.  This helps prevent spurious jittering as we 
            // make our way there.
            var factor = cnt / max_iter;
            pan_amount *= (1.0 - factor * factor * factor);

            if (b_first_arg == true) {
                camera.pan(pan_amount, 0);
            } else {
                camera.pan(0, pan_amount);
            }
            b_pan_converged = false;
        } else {
            b_pan_converged = true;
        }
        // Flip the direction
        b_first_arg = !b_first_arg;
//        graphics.Refresh();
    } while (cnt++ < max_iter && !b_pan_converged);
}

If you’ve hung with me this far, you know that we’ve got a function that will rotate our view so that we’re looking down a particular axis.  We’ve also got a function that will rotate the view about our line of sight so that we can make sure a particular direction is up.  Finally, we’ve got a function that will pan the view around so that a particular point in space shows up right in the middle of the screen.  All of this was accomplished with the limited set of functions and properties we could deduce from the Camera object using the debugger.  We also demonstrated how to create a slice plane by “drawing” a line straight across the middle of the screen.  With these pieces we should have all we need conceptually to create a slice plane right where we want it, oriented exactly as we want it.
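Conceptually, the whole recipe boils down to four calls.  The sketch below is not the script’s actual implementation (that follows in the listing): the csys argument is a plain object with made-up fields for the coordinate system’s origin and axes, and the helpers are passed in as parameters just to make the flow explicit.

```javascript
// A conceptual sketch (NOT the script's actual implementation) of how
// the pieces fit together.  The csys argument is a plain object with
// hypothetical fields for the coordinate system origin and axes; the
// graphics argument would be DS.Graphics inside Mechanical, and the
// three helper functions are the ones developed above.
function create_slice_plane_sketch(graphics, rotate_view, rotate_up,
                                   pan_view, csys) {
    // 1. Look down the Y axis of the coordinate system...
    rotate_view(csys.y_axis);
    // 2. ...with its Z axis pointing straight up on the screen...
    rotate_up(csys.z_axis);
    // 3. ...and its origin dead center in the graphics window.
    pan_view(csys.origin);
    // 4. Finally, "draw" a horizontal line across the middle of the
    //    window, which extrudes into the XY plane of the system.
    var h = graphics.GfxWindow.height;
    var w = graphics.GfxWindow.width;
    graphics.SliceTool.SetPlane(0, h / 2, w, h / 2);
}
```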

Implementation

The following code implements these ideas and creates a little dialog box that is presented to the user.  I’ll cover the implementation of a dialog box in a separate post; however, you may be able to decode the secret sauce here anyway.   The user interface works as follows: you pick from the drop-down list the coordinate system for which you would like to create a slice plane.  You can then either create a slice plane from this coordinate system, or have the code automatically reorient the view so that you are looking down the Z axis of this coordinate system.   Below is a screen shot of the dialog box that is presented and, finally, a listing of the entire code for the script.

image

Listing of the Code

Here is a listing of the entire script.

// This string contains the HTML code for the dialog box
// we will present to the user
var html_text = 
'<html xmlns="http://www.w3.org/1999/xhtml" >\n'+
'<head>\n'+
'<title>Slice Plane Creation Tool</title>\n'+
'<style type="text/css">\n'+
'    body\n'+
'    {\n'+
'        background: buttonface;\n'+
'        font: messagebox;\n'+
'    }\n'+
'    table\n'+
'    {\n'+
'        border-collapse:collapse;\n'+
'        font: messagebox;\n'+
'    }\n'+
'    select\n'+
'    {\n'+
'        margin: 2px 8px 2px 4px;\n'+
'        width: 250px;    \n'+
'        height: 28px;\n'+
'    }\n'+
'    input\n'+
'    {\n'+
'        margin: 0px 4px 0px 0px;\n'+
'    }\n'+
'</style>\n'+
'<script type="text/javascript">\n'+
'    var wb, xfer;\n'+
'    function window_onload() {\n'+
'        xfer = window.external;\n'+
'        wb = xfer("wb");\n'+
'        var cs_group = xfer(\'csys_group\');\n'+
'        var cs_select = xfer(\'csys_select\');\n'+
'        if (cs_group != null) {\n'+
'            populate_coordinate_systems(cs_group, cs_select);\n'+
'        }\n'+
'    }\n'+
'    function window_onunload() {\n'+
'        xfer("CSYS") = parseInt(csys_select.value);\n'+
'        xfer("CreateSlicePlane") = create_plane.checked;\n'+
'        xfer("ViewPlane") = view_plane.checked;\n'+
'    }\n'+
'    function populate_coordinate_systems(cs_group, cs_select) {\n'+
'        csys_select.options.length=0;\n'+
'        for (var i = 1; i <= cs_group.Children.Count; i++) {\n'+
'            var csys = cs_group.Children.Item(i);\n'+
'            var csys_name = csys.Name;\n'+
'            var csys_id = csys.ID;\n'+
'            if (csys_id == cs_select) {\n'+
'                csys_select.options[i - 1] = new Option(csys_name, csys_id, true, true);\n'+
'            } else {\n'+
'                csys_select.options[i - 1] = new Option(csys_name, csys_id, false, false);\n'+
'            }\n'+
'        }\n'+
'    }\n'+
'</script>\n'+
'</head>\n'+
'<body onload="window_onload()" onunload="window_onunload()" scroll="no">\n'+
'<table>\n'+
'<tr>\n'+
'<td>\n'+
'<table>\n'+
'<tr>\n'+
'<td nowrap="true">Coordinate System: </td>\n'+
'<td><select id="csys_select"></select></td>\n'+
'</tr>\n'+
'</table>\n'+
'</td>\n'+
'</tr>\n'+
'<tr>\n'+
'<td>\n'+
'<input id="create_plane" type="checkbox" checked="checked" />Create Slice Plane\n'+
'</td>\n'+
'</tr>\n'+
'<tr>\n'+
'<td>\n'+
'<input id="view_plane" type="checkbox"  />Look at Plane\n'+
'</td>\n'+
'</tr>\n'+
'</table>\n'+
'</body>\n'+
'</html>\n';


var SC = DS.Script;

// Entry point for the whole script.
function main() {
 
    // Create the dialog
    var slice_plane_dialog =
        SC.CreateActiveXObject(
            SC.GenWBProgId('WBControls.WBHTMLDialog')
            );
    var dlg_OK = 1;
    var dlg_Cancel = 2;
    var dlg_VScroll = 4;
    var flags = dlg_OK | dlg_Cancel | dlg_VScroll;
    var caption = 'Slice Plane From Coordinate System';
    var width = 410;
    var height = 110;
    var b_modal = false;
    // Create the html code from a temporary file
    var fso = SC.fso;
    var tfolder = fso.GetSpecialFolder(2);
    var tname = fso.GetTempName();
    var tfile = tfolder.CreateTextFile(tname, true);
    tfile.WriteLine(html_text);
    tfile.Close();
    var path = fso.BuildPath(tfolder.Path, tname);
    var xfer =
        SC.CreateActiveXObject(
            SC.GenWBProgId('WBControls.DlgArgs')
            );

    xfer('wb') = WB;
    xfer('dlg') = slice_plane_dialog;
    var csys_group = get_coordinate_system_group();
    
    if(csys_group == null) {
        SC.WBScript.Out('Cannot find the coordinate system group in the tree.' +
                        ' Please create a coordinate system first', true);
        return;
    }
    // Pass over to the dialog the coordinate system group.
    xfer('csys_group') = csys_group;
    var active_obj = DS.Tree.FirstActiveObject;
    if (active_obj.Class == SC.id_CoordinateSystem) {
        xfer('csys_select') = active_obj.ID;
    } else {
        xfer('csys_select') = -1;
    }
    // Show the dialog
    var ret = slice_plane_dialog.DoDialog(WB.hWnd,
                                          path,
                                          xfer,
                                          b_modal,
                                          width,
                                          height,
                                          flags, 
                                          caption);
    if (ret == 1) {
        // We get here if the user presses OK on the dialog, 
        // so lets grab the user's choices back from the selection.
        var selected_csys = xfer('CSYS');
        var create_plane = xfer('CreateSlicePlane');
        var look_at = xfer('ViewPlane');
        // Do the work here
        var origin = get_coordinate_system_origin(selected_csys);
        var z_axis = get_coordinate_system_z_axis(selected_csys);
        var y_axis = get_coordinate_system_y_axis(selected_csys);
        // Bail out if we can't create the vectors for the coordinate system
        if (origin == null || z_axis == null || y_axis == null) {
            SC.WBScript.Out('Cannot determine the coordinate system vectors.' + 
                            ' Unable to create slice plane', true);
            return;
        }
        if (create_plane == true) {
            create_slice_plane(origin, z_axis, y_axis);
        }
        if (look_at == true) {
            look_at_plane(origin, z_axis, y_axis);
        }
    }

    // Delete the html file to clean up
    fso.DeleteFile(path);
}

// Very simple utility function that is used to write out an html
// file to another file in javascript string format.  This is useful
// for pasting into a source file to package the html file into
// a single source script for distribution purposes.
function write_html(path) {
    var fso = SC.fso;
    var input = fso.OpenTextFile(path, 1, false);
    var output = fso.CreateTextFile(path + '.out', true);

    while (!input.AtEndOfStream) {
        var line = input.ReadLine();
        output.WriteLine('\'' + line + '\\' + 'n\'+');

    }
    output.WriteLine('\'\';');
    output.Close();
    input.Close();
}    

// Return the origin of a given coordinate system
function get_coordinate_system_origin(csys_id) {
    var csys = get_coordinate_system(csys_id);
    if (csys == null) {
        return null;
    }
    return [csys.OriginXLocation, 
            csys.OriginYLocation,
            csys.OriginZLocation];
}

// Return a vector oriented along the z axis of a given
// coordinate system
function get_coordinate_system_z_axis(csys_id) {
    var csys = get_coordinate_system(csys_id);
    if (csys == null) {
        return null;
    }
    return normalize([csys.ZDirectionXValue,
                      csys.ZDirectionYValue,
                      csys.ZDirectionZValue]);
}

// Return a vector oriented along the y axis of a given
// coordinate system
function get_coordinate_system_y_axis(csys_id) {
    var csys = get_coordinate_system(csys_id);
    if (csys == null) {
        return null;
    }
    return normalize([csys.YDirectionXValue,
                      csys.YDirectionYValue,
                      csys.YDirectionZValue]);
}

// Return an actual coordinate system object given
// a tree ID.
function get_coordinate_system(csys_id) {
    var group = get_coordinate_system_group();
    if (group == null) {
        return null;
    }
    for (var i = 1; i <= group.Children.Count; i++) {
        var child = group.Children.Item(i);
        if (child.ID == csys_id) {
            return child;
        }
    }
    return null;
}

// Return the coordinate system group in the tree.
function get_coordinate_system_group() {
    var branch = SC.getActiveBranch();
    if (branch == null) {
        return null;
    }
    return branch.CoordinateSystemGroup;
}

// Create a slice plane given a point, plane normal and up
// vector.  This function uses the orientation functions
// below
function create_slice_plane(point, plane_normal, inplane_up) {
    // Get a handle to the slicetool
    var slice_tool = DS.Graphics.SliceTool;

    // Orient the view using our view orientation functions
    for (var i = 0; i < 3; i++) {
        rotate_view(inplane_up);
        pan_view(point);
        rotate_up(plane_normal);
        DS.Graphics.Refresh();
    }

    // Get the window's dimensions
    var height = DS.Graphics.GfxWindow.height;
    var width = DS.Graphics.GfxWindow.width;

    // Create the slice plane
    var slice_plane =
        slice_tool.SetPlane(0, height / 2, width, height / 2);
    DS.Graphics.Refresh();
}

// Look at a particular point along an axis.  This is useful
// for looking at a slice plane after it has been created.
function look_at_plane(point, normal, up) {
    // Orient the view using our view orientation functions
    for (var i = 0; i < 3; i++) {
        rotate_view(normal);
        pan_view(point);
        rotate_up(up);
        DS.Graphics.Refresh();
    }

}

// Calculate the dot product of two vectors
function dot(va, vb) {
    return va[0] * vb[0] + va[1] * vb[1] + va[2] * vb[2];
}

// Calculate the length of a vector
function norm(va) {
    return Math.sqrt(va[0] * va[0] + va[1] * va[1] + va[2] * va[2]);
}

// Normalize a vector
function normalize(va) {
    var len = norm(va);
    if (len != 0.0) {
        return [va[0] / len, va[1] / len, va[2] / len];
    }
    // A zero-length vector cannot be normalized; return it unchanged
    // rather than silently returning undefined.
    return va;
}

// Calculate the cross product of two vectors
function cross(va, vb) {
    return [va[1] * vb[2] - va[2] * vb[1],
            va[2] * vb[0] - va[0] * vb[2], 
            va[0] * vb[1] - va[1] * vb[0]];
}

// Scale a vector by a scalar
function scale(va, sc) {
    return [va[0] * sc, va[1] * sc, va[2] * sc];
}

// Add two vectors
function add(va, vb) {
    return [va[0] + vb[0],
            va[1] + vb[1],
            va[2] + vb[2]];
}

// Subtract vector b from vector a
function sub(va, vb) {
    return [va[0] - vb[0],
            va[1] - vb[1],
            va[2] - vb[2]];
}

// Test whether two vectors are parallel (or anti-parallel) to within
// a tolerance
function are_parallel(va, vb, tol) {
    return 1.0 - (Math.abs(dot(va, vb)) / (norm(va) * norm(vb))) < tol;
}

// Test whether two vectors are perpendicular to within a tolerance
function are_perpendicular(va, vb, tol) {
    return (Math.abs(dot(va, vb)) / (norm(va) * norm(vb))) < tol;
}

// Rotate the view around so that we are looking in the 
// direction of the desired view
function rotate_view(desired_view) {
    // Get the camera and graphics objects
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    // This is our tolerance for being parallel to the desired view
    var eps = 1e-7;
    // This is the maximum number of iterations we'll try.  
    // While loops with a tolerance check only scare me...
    var max_iter = 200;
    // Get the current view
    var view_c = normalize([camera.ViewDirectionX,
                            camera.ViewDirectionY,
                            camera.ViewDirectionZ]);
    // Make sure we're normalized
    var dvd = normalize(desired_view);

    var cnt = 1;
    var b_first_arg = true;
    var normal = null;
    var trial = null;
    var view_p = null;
    var trial_ang = 15;
    var applied_rotation = null;
    var previous_up = 0;
    var previous_right = 0;
    var right_factor = -1.0;
    var up_factor = -1.0;
    // Loop until we're parallel, or we give up trying
    do {
        var factor = cnt / max_iter;
        factor *= factor;
        factor *= factor;
        // Get the view direction
        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);
        // Get the current up vector
        camera.rotate(trial_ang, 0);
        trial = normalize([camera.ViewDirectionX,
                       camera.ViewDirectionY,
                       camera.ViewDirectionZ]);
        current_up = normalize(cross(trial, vd));
        // Rotate back so we don't lose our place
        camera.rotate(-trial_ang, 0);
        

        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);

        if (b_first_arg) {
            camera.rotate(trial_ang, 0);
        } else {
            camera.rotate(0, trial_ang);
        }
        trial = normalize([camera.ViewDirectionX,
                           camera.ViewDirectionY,
                           camera.ViewDirectionZ]);
        n = normalize(cross(vd, trial));
        // Rotate back
        if (b_first_arg) {
            camera.rotate(-trial_ang, 0);
        } else {
            camera.rotate(0, -trial_ang);
        }
        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);
        // Only do this rotation if our desired view vector
        // is not nearly parallel to our current axis of rotation
        // If it is, then the code inside the if statement will
        // be skipped.
        if (!are_parallel(dvd, n, eps)) {
            // Project the desired view vector onto our plane of rotation
            vp = normalize(cross(n, normalize(cross(dvd, n))));
            // Calculate the angle between the projected vector
            // and our current view direction
            var dot_product = dot(vp, vd);
            // Clamp to [-1, 1] to guard against round-off before acos
            if (dot_product > 1.0) dot_product = 1.0;
            if (dot_product < -1.0) dot_product = -1.0;
            needed_rotation = Math.acos(dot_product) * 180 / Math.PI;
            
            // Knock it down by a factor associated with our iteration
            // scheme.  This helps prevent spurious jittering as we 
            // make our way there.
            needed_rotation *= (1.0 - factor);
            if (b_first_arg) {
                // If we start to diverge, try rotating back the other way
                if(previous_up < needed_rotation) {
                    up_factor = (up_factor < 0.0) ? 1.0 : -1.0;
                    previous_up = needed_rotation;
                }
                camera.rotate(up_factor * needed_rotation, 0.0);
                
                
            } else {
                if (previous_right < needed_rotation) {
                    right_factor = (right_factor < 0.0) ? 1.0 : -1.0;
                    previous_right = needed_rotation;
                }
                camera.rotate(0.0, right_factor * needed_rotation);
            }
        }
        // See if we are there yet
        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);
        b_rotate_converged = are_parallel(vd, dvd, eps);
        if (b_rotate_converged) break;
        // Flip directions
        b_first_arg = !b_first_arg;
//        DS.Graphics.Refresh();
    } while (cnt++ < max_iter);
        
}

function rotate_up(desired_up_vector) {
    
    // Get the camera and graphics objects
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    var view_c = null;
    var trial = null;
    var current_up = null;
    var theta = null;
    var trial_ang = 15;
    var theta1;
    var theta2;
    var eps = 0.01;
    // Make sure our passed in value is normalized
    duv = normalize(desired_up_vector);

    // Which way are we currently looking now?
    vd = normalize([camera.ViewDirectionX,
                    camera.ViewDirectionY,
                    camera.ViewDirectionZ]);

    // Perform our trial rotation about the screen vertical axis
    camera.rotate(trial_ang, 0);
    trial = normalize([camera.ViewDirectionX,
                       camera.ViewDirectionY,
                       camera.ViewDirectionZ]);
    up = normalize(cross(trial, vd));
    // Rotate back so we don't lose our place
    camera.rotate(-trial_ang, 0);
    // How much do we need to rotate?
    theta1 = Math.acos(dot(duv, up)) * 180 / Math.PI;
    // Rotate assuming a positive rotation
    camera.rotateZ(theta1);
    DS.Graphics.Refresh();
    // Now we need to test to see if this is indeed the proper direction
    // Which way are we currently looking now?
    vd = normalize([camera.ViewDirectionX,
                    camera.ViewDirectionY,
                    camera.ViewDirectionZ]);
    // Perform our trial rotation about the screen vertical axis
    camera.rotate(trial_ang, 0);
    trial = normalize([camera.ViewDirectionX,
                       camera.ViewDirectionY,
                       camera.ViewDirectionZ]);
    up = normalize(cross(trial, vd));
    // Rotate back so we don't lose our place
    camera.rotate(-trial_ang, 0);
    // How much do we need to rotate?
    theta2 = Math.acos(dot(duv, up)) * 180 / Math.PI;
    if (theta2 > theta1) {
        // Rotate backwards if our second guess is larger than our first
        camera.rotateZ(-theta2);
//        DS.Graphics.Refresh();
    }
}

function pan_view(desired_point) {
    // Get the camera and graphics objects
    var camera = DS.Graphics.Camera;
    var graphics = DS.Graphics;
    // Maximum amount to pan by
    var height = DS.Graphics.GfxWindow.height;
    var width = DS.Graphics.GfxWindow.width;
    // The current focus point
    var fp = null;
    // The trial focus point
    var fp_trial = null;
    // The trial amount to pan
    var trial_pan = (height + width) / 40;  //Unit is pixels...
    // View direction
    var view_v = null;
    // Vector from the focus point to the desired point
    var fp_to_dp = null;
    // Vector of pan direction
    var pan_dir = null;
    // The amount we are going to pan
    var pan_amount = null;
    // Tolerance for determining if the view vector is parallel
    // to the focus point to desired point vector
    var eps = 1e-4;
    // We'll flip back and forth between panning up and panning right
    var b_first_arg = true;
    // The current count of our iteration
    var cnt = 0;
    // The maximum number of iterations to perform
    var max_iter = 200;

    var look_at = desired_point;
    var max_pan = (height + width) / 20;
    do {
        fp = [camera.FocusPointX,
              camera.FocusPointY,
              camera.FocusPointZ];
        vd = normalize([camera.ViewDirectionX,
                        camera.ViewDirectionY,
                        camera.ViewDirectionZ]);

        if (b_first_arg == true) {
            camera.pan(trial_pan, 0);
        } else {
            camera.pan(0, trial_pan);
        }
        trial = [camera.FocusPointX,
                 camera.FocusPointY,
                 camera.FocusPointZ];
        // In world coordinates, see which direction this pan took us
        pd = sub(trial, fp);
        // Project this onto a plane defined by the current view 
        // vector
        pd_in_view = cross(vd, cross(pd, vd));

        // Move back to our original location
        if (b_first_arg == true) {
            camera.pan(-trial_pan, 0);
        } else {
            camera.pan(0, -trial_pan);
        }
        fp = [camera.FocusPointX,
              camera.FocusPointY,
              camera.FocusPointZ];
        // Take a vector from the current focus point to the
        // desired center point
        fp_to_dp = normalize(sub(look_at, fp));
        // Only try to pan the view if the vector between our focus
        // point and view vector are not parallel.
        if (Math.abs(1.0 - Math.abs(dot(fp_to_dp, vd))) > eps) {
            // Project this onto a plane defined by the view direction
            fp_to_dp_in_view = cross(vd, cross(fp_to_dp, vd));

            // Normalize the pan test vector so we can walk our way there
            pd_in_view = normalize(pd_in_view);

            // Figure out how much we need to pan

            pan_amount = trial_pan * dot(fp_to_dp_in_view, pd_in_view);
            // Knock it down by a factor associated with our iteration
            // scheme.  This helps prevent spurious jittering as we 
            // make our way there.
            var factor = cnt / max_iter;
            pan_amount *= (1.0 - factor * factor * factor);

            if (b_first_arg == true) {
                camera.pan(pan_amount, 0);
            } else {
                camera.pan(0, pan_amount);
            }
            b_pan_converged = false;
        } else {
            b_pan_converged = true;
        }
        // Flip the direction
        b_first_arg = !b_first_arg;
//        graphics.Refresh();
    } while (cnt++ < max_iter && !b_pan_converged);
}

main();
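One subtlety in the listing is worth a closing note: because are_parallel and are_perpendicular compare the absolute value of the normalized dot product against a tolerance, anti-parallel vectors count as parallel, and neither function requires its inputs to be pre-normalized. Here is a quick standalone check (the two helpers are reproduced so the snippet runs on its own, outside Mechanical):

```javascript
// Reproduced from the listing so this snippet is self-contained.
function dot(va, vb) { return va[0] * vb[0] + va[1] * vb[1] + va[2] * vb[2]; }
function norm(va) { return Math.sqrt(dot(va, va)); }
function are_parallel(va, vb, tol) {
    return 1.0 - (Math.abs(dot(va, vb)) / (norm(va) * norm(vb))) < tol;
}
function are_perpendicular(va, vb, tol) {
    return (Math.abs(dot(va, vb)) / (norm(va) * norm(vb))) < tol;
}
```

The direction-insensitivity is exactly what the iteration loops need when testing how close the current view is to a target axis, without caring which way along the axis they approach it.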

ANSYS 13.0 Enhanced Modal Analyses with Linear Perturbation

“Rest, rest, perturbed spirit!”  – William Shakespeare

If you have ever performed a large deflection prestressed modal analysis in ANSYS Mechanical APDL prior to version 13.0, your spirit might have been perturbed as well.  The procedure was not very user friendly, to sum it up.  For example, unless you were careful, the modal results would over-write the static preload results.  Thankfully, at 13.0 we have a smoother and more capable tool for handling large deflection prestressed modal analyses.  This new procedure is called Linear Perturbation.

We’ll focus on modal analyses in this article, but be aware that linear perturbation also applies to linear buckling analyses at 13.0, though only following a linear preload solution and only in Workbench.  The capability for modal analyses is supported in both Workbench and Mechanical APDL.  Also, the preload, or ‘base’, analysis has to have multiframe restart capability turned on.  This happens by default for a nonlinear analysis but needs to be manually activated in MAPDL for a linear analysis by issuing the command RESCONTROL,LINEAR.

In fairly simple terms, the prestress effects are included in a modal analysis via the change in the stiffness matrix that occurs during the prestress (typically nonlinear static) analysis.  This is the method that has been used in ANSYS for years.  What’s new at 13.0 is that the program keeps track of different components of the augmented tangent stiffness matrix.  The five possible contributing components are material property effects, stress stiffening effects, load stiffening effects, contact element effects, and spin softening effects.

While the material effects must remain linear, the contact stiffness can be altered, if desired, in the subsequent modal analysis.  More on that later.

The typical Mechanical APDL procedure to perform a nonlinear static structural prestress run followed by a modal analysis which utilizes those prestress effects is as follows:

! With static model prepared

/solu                            ! enter solution module
antype,0                         ! specify static analysis type
nlgeom,1                         ! turn on large deflection effects (nonlinear)
pstres,on                        ! turn on prestress effects for subsequent modal
nsub,10,10,10                    ! specify substep range
save                             ! save the database
solve                            ! solve the nonlinear static prestress case
finish                           ! leave the solution module
/solu                            ! re-enter solution so we can do a new analysis
antype,,restart,1,10,perturb     ! specify restart option for linear perturbation
! from last substep in this case
perturb,modal                    ! specify modal as next analysis
solve,elform                     ! calculate element formulation with solve command
modopt,lanb,12                   ! specify modal options for solution
mxpand,12                        ! specify number of modes for results calc
solve                            ! solve the prestress modal analysis
/post1                           ! enter general postprocessor
INRES,ALL                        ! make sure we read in all results from file
FILE,'nonlinear_static','rstp'   ! specify special results file for modal results
set,first                        ! read in results for first mode
plns,u,sum                       ! plot mode shape
set,next                         ! read in results for next mode
/repl                            ! plot mode shape, etc.

 

Note that the linear perturbation (modal) analysis has its own results file, the .rstp file.  Because of this, the preload results are still available in their own .rst file as it does not get overwritten by the modal step.

Here is a table of frequency results for a simple test case.  Three modal analyses were run:

1.  No prestress at all.

2.  With a linear static prestress state.

3.  With a nonlinear static prestress state.

image

Here is the model used for these runs in its initial configuration.  The block at the base was fixed in all DOF’s and the preload applied was a pressure load on one side of the vane.

undeformed

Here is the model with the deformed mesh due to the nonlinear prestress:

nonlin_prestress_deformed

Here is a mode shape plot for mode 12:

nonlin_prestress_mode12

The above example is all well and good but could have been done in prior versions of ANSYS using the old partial solve method.  What’s nice about the newer linear perturbation method is that it’s easy to get the mode shape plots relative to the deformed mesh from the prior prestress run, and you don’t need to worry about over-writing the prestress results with the modal results, since the corresponding results files are different.

Further, we can now perform modal analyses using different restart points in the static prestress run, assuming multiple restart points are available.

Finally, we can actually change some contact options between the static prestress solution and the modal solution.  For example, if the prestress analysis was run using frictional contact, the subsequent modal analysis can be run utilizing the prestressed state of the structure but with one of three contact states for the modal analysis: true status (that of the prior static analysis), force (to be) sticking, or  force (to be) bonded. The sticking option applies only to contact elements with a nonzero coefficient of friction. The bonded option will force contact pairs that are in contact in the static analysis to be bonded in the modal analysis.

The Mechanical APDL command sequence for this procedure would be something like this:

 

! first perform nonlinear static prestress run, then

/solu

antype,,restart,,,perturbation

perturb,modal,,BONDED,PARDELE    ! prestress modal analysis; switch contact to
                                 ! bonded; delete loads in case of future MSUP

solve                            ! generate matrices needed for perturbation analysis

! Next perform modal analysis

modopt,lanb,6

mxpand                    ! default expand in case of complex solution

solve

! modal results are now available for postprocessing

In Workbench Mechanical, the appropriate command sequence is sent to the solver when we link a modal analysis to a prior prestress analysis.  If the model involves contact, then in the modal analysis we’ll have choices for how the contact should be treated in the Pre-Stress branch under the Modal branch in the Outline Tree.  For frictional contact in the static prestress analysis, the choices in the Details view for the Pre-Stress branch in the modal analysis will be Use True Stress, Force Sticking, or Force Bonded as described above.

Here are some example plots for this scenario:

Two 3D plates subject to in plane bending, fixed at right ends, frictional contact between them.

image

Resulting contact status for static run (sliding is occurring)

image

Resulting static deformation:

def_c

 

Mode 6 result, “true” contact behavior:

mode6_nonlin_true_c.

 

Mode 6 result, “force bonded” contact behavior:

mode6_nonlin_force_bonded_c

Those last two images show a dramatic difference in modal results simply by changing the contact status behavior in the modal analysis.  In the first of those images, the contact status is set to ‘true’, meaning essentially the same as in the prestress analysis, subject to the linear nature of the modal analysis.  In this example, the frictional behavior in the static prestress run becomes no separation in the modal analysis, so the two plates can have mode shapes in which the plates slide relative to each other.  In the last plot, the contact status has been changed to ‘force bonded’ for the modal solution.  As the plot shows, mode shapes can only exist in which the two plates are bonded together.  Both modal analyses have the same prestress condition however.

Here is a frequency table comparing the first six modes of the two modal analyses.  Note that with the contact forced to be bonded we get a stiffening response as we might expect.

WB_freq_compare

So, although on the surface it might initially appear to be a black art, linear perturbation is a nice enhancement in ANSYS 13.0 that gives us a more robust and capable method for performing modal analyses with prestress effects included.  The prestress run is typically a linear or nonlinear static analysis, but it will also work with a full transient analysis to define the prestress state.  The ANSYS 13.0 Help has more information (see section 9.2 of the Mechanical APDL Structural Analysis Guide and section 17.8 of the Theory Reference).  We also recommend you try out the procedure on your own models.

Making Pretty Plots in ANSYS Mechanical and Mechanical APDL

image

I was looking for some new images for our web page and all of the plots I had were pretty much screen grabs.  Not the high quality I was looking for.  And putting a picture of Angelina behind a mode plot probably won’t go over too well, or will go over too well and will distract everyone.  So I brought up the original models and started playing around trying to get a better quality image, and did pretty well.  And, needing an article for this week, I thought I’d share what I found.  If you don’t use MAPDL, skip on down to the ANSYS Mechanical bit.

ANSYS Mechanical APDL

For my MAPDL plots I actually referred back to an old “The Focus” article I did in 2002. Yes, 2002.  You can go back to that article for the details but here are the important bits below:

  • Invest in a good image editing tool.  If you are ultra-cheap, use the open-source GIMP, but Adobe Photoshop and Corel Paint Shop Pro are worth the investment, with Paint Shop Pro having the price/performance advantage if you don’t already have Photoshop.
  • You don’t want to do screen grabs.  MAPDL has great plotting to file abilities and you want to use those.
  • Use the /show command to output high quality graphics and skip using the pixels on your screen. You have two great options:
    • Use /SHOW,PNG to tell MAPDL to write to a PNG file when you issue plot commands
    • PNG is the best choice for smaller files and high resolution.  JPG may be more common but you will find that the images can be fuzzy, especially around edges.  Go PNG.

image

  • If you are really hard-core and you want an exact representation of your model that can be scaled, then do a /show,PSCR,,,8.  This makes an encapsulated PostScript file.
    • If you don’t know what that is, we advise you don’t bother.
    • But if you do, pop it into Illustrator, group it and now you have a vector graphic you can scale, trim, edit and really do cool things with.
  • Get to know how to use the /ERASE, /NOERASE button for making multiple plots on a page.  You can use /win with the standard view commands to show results from different modes, different load steps, or if you get real clever, even different models.   Here is an example for plotting 4 different mode shapes:

! Plots first 4 modes
/erase
/win,1,ltop $/win,2,rtop              ! define four windows
/win,3,lbot $/win,4,rbot
/view,all,1,1,1,1                     ! same view settings in all windows
/vup,all,Z
/angle,all,0
/dist,all,1.3
/focus,all,.2,.08,.05
/win,2,off $/win,3,off $/win,4,off    ! start with only window 1 active
/plopt,info,0                         ! turn off the legend column
/annot,delete
/TSPEC,15,0.700,1,0,0                 ! annotation text style
*dim,anx,,4                           ! annotation x and y locations
*dim,any,,4
*dim,frq,,4                           ! frequencies for the labels
anx(1) = -.98, .4, -.98, .4
any(1) =  .01, .01, -.92,-.92
*do,i,1,4
set,1,i                               ! read mode i
*get,frq(i),mode,i,freq               ! grab its frequency
plnsol,u,sum                          ! plot in the active window
/win,i,off                            ! deactivate this window,
/win,i+1,on                           !   activate the next one
/noerase                              ! keep earlier windows on screen
*enddo
/tlab,anx(1),any(1),Mode 1: %frq(1)% Hz
/tlab,anx(2),any(2),Mode 2: %frq(2)% Hz
/tlab,anx(3),any(3),Mode 3: %frq(3)% Hz
/tlab,anx(4),any(4),Mode 4: %frq(4)% Hz
/replot
/erase
anx=                                  ! blank assignment deletes the arrays
any=
image
  • Get rid of that black or blue background by changing your screen to reverse video.
    • Through the GUI: PlotCtrls->Style->Colors->Reverse Video
    • Or put the following in a macro:
      /RGB,INDEX,100,100,100, 0
      /RGB,INDEX, 80, 80, 80,13
      /RGB,INDEX, 60, 60, 60,14
      /RGB,INDEX, 0, 0, 0,15

image

  • Use /TYPE to turn on slower, but higher quality, hidden-line removal.
    • The default is Z-buffer, which is fast but not as accurate.
    • Precise is the best: /TYPE,,4 for precise hidden.
  • Get smoother curved surfaces by increasing the number of facets on your polygons with /EFACET.
    • /GRAPHICS,POWER is needed for it to work (it is the default).
    • /EFACET,4 puts 16 facets (4 per edge) on every element face.

image

  • Getting lots of pixels is the most important thing.  Just make a big file with lots of pixels by using /GFILE,2400.
    • This gives you a file that is 2400 pixels tall.
    • You can then scale it down when you want to use it and get nice clean images.
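Here is one way to collect the settings above into a single macro for a high quality plot.  The specific values (2400 pixels, 4 facets per edge) are just the examples from this article, so adjust to taste:

```
/show,png                  ! send plots to a PNG file instead of the screen
/gfile,2400                ! make the file 2400 pixels tall
/graphics,power            ! PowerGraphics, needed for /efacet (the default)
/efacet,4                  ! 4 facets per element edge for smooth surfaces
/type,,4                   ! precise hidden-line removal: slower but better
/rgb,index,100,100,100,0   ! reverse video: white background...
/rgb,index,80,80,80,13
/rgb,index,60,60,60,14
/rgb,index,0,0,0,15        ! ...black text and edges
plnsol,u,sum               ! your plot command of choice
/show,close                ! close the PNG file
```

From there you can pull the PNG into your image editing tool and scale it down.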

There is a lot more you can do, like annotation, backgrounds, changing fonts, etc…  But this covers the basics that everyone should know.  Take a look at the original article and the help on the following commands for more: /PLOPT, /EDGE, /TRIAD, /LIGHT, /DEV,FONT, /TXTRE, /GFORMAT, /ANNOT.

ANSYS Mechanical

Now things get a little different.  MAPDL is old, older than some of you reading this article, so it has some wicked cool tools that were put in there back when many of us didn't even have a color monitor.  Now that the world is online and everything is a 200-pixel-wide JPEG file to be uploaded to Facebook, you don't have as many options for getting high quality images out of the Workbench interface.  But the good news is that the default image is better than most of the high quality ones you have to work to get out of MAPDL.

The Basics for ANSYS Mechanical

So, for a lot of cases, just doing a screen grab of your graphics window in ANSYS Mechanical is good enough. But if you want nicer plots, here are some suggestions:

  • If you are on Windows Vista or 7, then you need to turn off Windows Aero.  This is the transparent fancy-pants option for how windows are displayed, but the screen capture used in ANSYS Mechanical doesn't work if it is turned on.  So if you are going to be doing images or animations, you need to turn it off.
  • Beyond the basic screen grab you should become familiar with the options under the little “New Figure or Image” Icon on the main icon bar

image

  • If you choose Figure or Image it adds a Figure or Image object under whatever object you have selected.
    • A figure is a 3D image that updates every time you click on it.  It stores the view settings when it is made, but you can rotate and zoom at any time.  It is a 3D picture
    • An image is a screen capture that is stored in the tree.  You can also import an image from your hard drive.
  • The easiest way to get a nice plot is to get what you want on the screen and use “Image to File”.
    • By default this will make a PNG file that is the resolution of the image on your screen.
  • For a higher quality image, make your graphics window bigger.  Remember, all ANSYS is doing is grabbing your screen and saving it to a file.
  • The default background is a bit annoying for plots, especially if you want just your model.  To turn it off go back to your project page and use Tools->Options->Appearance and set Background Style to Solid and Background color to white.
    • If you want to go old school and have white text and black background, you can set that here as well.

bg1bg2bg3

  • If you want to make an image that can float in your PowerPoint (no background, transparent) then I like to turn off all the decorations.  Do this by making sure your background is solid (white is usually good enough) and then go to View in the menu and uncheck Ruler, Legend, and Triad.
    • Once in PowerPoint, use the “Set Transparent Color” feature to set the white bits to clear.
    • Or, use your fancy image editing software to remove the background (and put a cheesy one in if you want, as shown here:)
      image
  • Play with the options to get the image you want.  The contour style, if you want your elements shown or not, etc…
  • Another thing to know is how to change the default format for saving the images.  The default is PNG, which we recommend.  But if you need JPEG or BMP you can change it in Mechanical under Tools->Options->Mechanical->Miscellaneous->Image->Image Transfer Type.

Getting Better Resolution on a Single Plot in ANSYS Mechanical

The above will work for most plots, but what to do if the image you get from screen grabs is just not good enough?  Maybe you are on a laptop and you can’t get the resolution you need. I especially find this true when I want to view my mesh.  This is the default I get with mesh on:

sc1

By using the Print Preview tab, I can get a very high resolution image:

  • Get the view you want in the graphics window.
  • Click on the Print Preview tab at the bottom of the graphics window
  • You should see what you saw on the screen.  Don’t panic.
  • On the Print Preview tool bar click on the little picture and choose High Resolution
  • It should update with a very nice high-rez image.
  • This is an HTML page, so you can right click on the image and copy it, or save it to disk.

preview

Getting Better Resolution on Multiple Plots in ANSYS Mechanical

The above will work for a single image but can be a pain if you want a bunch.  For that we recommend that you leverage the Report Writer capability, even if you don’t want the report.  The Report Writer makes an HTML report of everything in your Model tree.  So if you put Figures in your tree, it will make those for you when you do a Report Preview, all in one nice little directory.  Here is how to use it:

  • For every plot you want, click on the object you want the plot of and then use the "New Figure or Image" drop down to insert figures for every view you want.
    • Don't use Images; remember, those are just screen grabs and won't give you better resolution.
  • When you are done, click on the “Report Preview” tab under the Graphics Window.
  • ANSYS Mechanical will go off and think for a while, then it will show you your HTML report.
  • You can cut and paste from the report if you don’t have a lot of images.
  • If you do have a lot of images, you will want to click on the Publish button.
    • This brings up a dialog.  Don't use the default Single File format; change it to "Page, Figures in Subfolder (*.html)" and pick a directory.
    • The files will be sitting in the Project_Files directory.
  • The default images are 600 pixels wide, which may be OK.  But if not, then you need to go into Options and tell ANSYS to make bigger, better images.
  • Go to Tools->Options->Mechanical->Report->Figure Dimensions
    • Set the Graphics Height and Width higher.  I like 1800 x 1500.  That is three times more than the default.
    • You can get even more resolution by changing the Graphics Resolution to 2:1 or 4:1.  This makes the pictures 2 or 4 times larger.  But that can take a while…
  • Now go back and redo the Report Preview and you should see much larger images, and much higher quality.
  • To save to a folder, publish them.
  • Click on the images below to see them in their full resolution:

Figure0008
Default 600px wide image

Figure0008
1800px wide image

Figure0008
3600px wide image

Conclusion

And those are the high points.  Like everything else, the help is your friend and has much more detailed information.  And don't be afraid to just try stuff out.  Explore the options and settings, make lots of images.  The worst thing that can happen is that you will create some files you need to delete.

We hope this will help you make pretty pictures for your next report so that project guy won’t grill you on stuff he really doesn’t understand…

To 40 Gb/s Infiniband QDR and Beyond! HOW-TO

In this article I will show you how to install and configure two 64-bit Linux servers with a 40Gb/s Infiniband interconnect. The two servers are connected directly together with a 7mm QDR Cable. No switch is used in this configuration or setup document.

The Infiniband cards used for this How-To have the Mellanox® ConnectX®-2 IB QDR chip MT26428.  They have dual Infiniband QSFP ports with a 40Gb/s data rate per port.  The cards use a PCI-E 2.0 x8 (5GT/s) slot in a UIO low-profile, half-length form factor designed by SuperMicro.

Step 1: Install OpenFabric software – CentOS

  • Select OpenFabrics Enterprise Distribution

f1

  • Next, install the openib and mstflint
    • Select Optional Packages

f2

      • Select openib
      • Select mstflint
      • Click Close
    • Click Apply
    • Allow download and install to complete

Step 2: Verify that you have the most recent firmware for the Infiniband cards.

  • Go to the manufacturer's website and locate the firmware update .bin file for your Infiniband card.
    • While you are at the site you may also want to download the latest software drivers for your specific Infiniband card.
    • www.mellanox.com – Mellanox Technologies
  • Next, update the firmware for our Mellanox Technologies MT26428 Infiniband card
  • Open a Terminal session and type the following command (your output will look similar to the screen shot below): lspci

f3

Please note the device id of your Infiniband card.  In the above screen shot you will see that the PCI device id for the card is 41:00.0.

 

  • Next, begin the firmware update:
    • Make sure that your terminal window session is in your firmware update location. I saved the update.bin file on the Desktop of the root user.
    • Next type: mstflint -d 41:00.0 -i AOC.bin b
      • Check out the screen shot below, and make sure there are no dashes in the file name of the update .bin file.  If in the below command line entry the bin file read AOC-1.bin, the firmware update would fail.

f4

  • After successful firmware update, restart the server and move on to Step 3.

Step 3: Configuring the network-scripts on Linux

  • In a terminal window change your path location to /etc/sysconfig/network-scripts/
  • Type vi ifcfg-ib0 to enter into text editor mode.

f5

  • Next, enter in the following text for the Infiniband card.

DEVICE=ib0
ONBOOT=yes
IPADDR=192.168.0.x
NETMASK=255.255.255.0
TYPE=Ethernet

PLEASE NOTE: Pertaining to the IPADDR entry above, a good rule of thumb that I use is to add one to the first octet of the server's IP address.  For example, if your server IP is 10.0.0.100, I would make the Infiniband IP address for IB port 0 11.0.0.100, using the correct subnet mask for your IP address range.

  • Finally, save the ifcfg-ib0 file by performing the following keystrokes:
    • Press Esc, then type :wq and press Enter.
      • Here is a link to a blog article that I wrote listing several vi commands that I use the most.  (You will need to scroll down; they are under Step 2.)

http://www.padtinc.com/blog/post/2011/01/07/IT-Tips-Setting-Up-FreeNX-for-Remote-Access-to-Linux-Machines-from-Windows.aspx

Step 4: Verify that the following Infiniband services startup on reboot:

  • Open the Service Configuration manager: from the menu click System > Administration > Services.
    • Select start on boot.
  • Select opensm & openib on the MASTER node.
    • Please note: only one subnet manager needs to be running on any given Infiniband interconnect network.
  • Select openib on each additional server within the Infiniband interconnect network.

f6

  • Select the check box. Click Start and click Save
  • Restart servers

Done – The installation and configuration of the Infiniband card is now completed.

Important Linux Infiniband commands that I used during the installations

  • lspci – lists all of the device ids
  • ibv_devinfo – checks that the IB driver is running on all nodes and shows port status
  • sminfo – checks to see if the subnet manager is running
  • ibchecknet – checks the network connectivity status
  • ibdiagnet – performs a set of tests on the IB network
  • ibhosts – simple discovery of IB hosts
  • ibstat – checks the state of the local IB port
  • ibnodes – discovery of nodes

ANSYS Mechanical Scripting: HOWTO Part 4

Where we left Off Last Time…

In the third part of this series, I left you with an introduction to the ANSYS Selection Manager and Geometry B-Rep.  We also created a sample script that iterated over all of the faces in a selected body and created a named selection for each face.   We built on our technique described in Part 2 for discovering ANSYS functionality. 

Introducing the Microsoft Visual Studio Script Debugger

In this post I want to introduce you to the Visual Studio script debugger.  Running ANSYS Mechanical underneath the debugger is definitely the most powerful tool we have at our disposal when it comes to creating custom scripts.  If you can learn to use this tool effectively, you will be able to watch the inner workings of Mechanical interact with your scripts in a precisely controlled environment.  Furthermore, the rich debugger displays will allow you to see almost all of the Mechanical objects in suspended animation and will also allow you to investigate their properties and methods in real time.  Finally, the "Immediate Window" in the debugger will allow you to execute arbitrary pieces of code that you type in on the fly while Mechanical is suspended in the debugger.  This allows you to test out various techniques and code sequences in one debugging session.  Simply put, you must become proficient at using the debugger to have any real hope of successfully scripting the ANSYS Mechanical tool.

Some Preliminaries

There are a number of steps you need to take to actually get the debugger environment set up properly.  Here is a list of the steps you need to take:

  1. You obviously need to install Visual Studio.  I use Visual Studio Professional 2010, but I think any version including 2005 and above will probably be sufficient.  I cannot speak to whether or not the Express Editions will work for this task, so if someone has experience with one of these editions and can shed some light, please post a comment.
  2. In the Internet Explorer “Internet Options” dialog (arrived at through Tools->Internet Options…) you need to make sure that “Disable Script Debugging (Other)” is unchecked.  See image below.
  3. Finally, you need to adjust the registry setting HKEY_CURRENT_USER\Software\Microsoft\Windows Script\Settings\JITDebug = 1

 

image

 

Debugging Code

The easiest way to stop Mechanical in the debugger is to insert the JavaScript debugger command wherever you would like to stop execution and look around.  So, for example, let's assume we have the following script:

function my_func() {
    debugger;
    var my_array = new Array();
    for(var i = 0; i < 10; i++) {
        my_array[i] = 2*i;
    }
}

my_func();

As you can see above, this simple script consists of one function called my_func that just populates an array.  The first statement in this function is the keyword debugger.   If all works according to plan, when we execute this script, we’ll end up in the Microsoft Debugger.  To make this all happen, you need to follow a ten step program:

  1. Start MS Visual Studio
  2. Start ANSYS Mechanical
  3. Copy the above script to a file named debug_test.js anywhere on your harddrive.
  4. Inside MS Visual Studio, navigate to Debug->Attach to Process…
     image
  5. In the dialog that pops up, look for the AnsysWBU.exe executable in the list.
     image
  6. Select AnsysWBU.exe and press the Attach button.
  7. Switch to the Mechanical Application
  8. From the tools Menu, choose Tools->Run Macro…
  9. Navigate to the location of debug_test.js and click open.
  10. Start debugging…

At this point you can set breakpoints, step into and out of functions, interrogate variable values, etc…  The debugger is truly a remarkable tool, as we shall briefly touch on next.

Whirlwind Tour of the MS Debugger

Below is a screen shot of our little script inside the visual studio debugger.

image

As you can see, the current execution line is located over the debugger statement.  Pressing F10 will advance one line at a time.  Also, you will notice that there is a window below the code window with a tab called "Locals".  This tab shows the local variables defined in this function.  At this point in the execution, they are not defined.  However, as we step through the function, these variables will come into play.  You can watch what happens to your variables as you step through your code here.

Also, you will notice a second window called the "Immediate Window".  This window may not be visible in your default debugging layout.  If you don't see it, navigate to Debug->Windows->Immediate Window.

image

In the immediate window you can enter code that will be executed "immediately" while the program is stopped in the debugger.  So, for example, if you see a problem in your script that might be fixed with a code change, you may be tempted to leave the debugger, fix it, and then re-run.  However, if you are unsure of your fix, or if you want to try a number of alternatives, you can just type them into the immediate window and see what happens.  It works as if you were executing new code at the current breakpoint location that isn't in your script.  You can then step forward in your script to execute a few lines, enter something in the immediate window, and then continue on stepping.  It is a very powerful technique.

Let's step through a few lines.  I'm going to press F10 four or five times.  At that point I get the following screen.

image

You can see that my_array now has an ellipsis next to it and i is equal to 1.  If I click on the plus symbol next to my_array in the Locals pane, I get the following:

image

You can see that we've filled the zero-th item in the array with the value zero.  Let's hit F10 some more…  Note, you can also hover your mouse over the array in the code window.  You can see I've filled in a few more values.

image

The print screen function doesn't capture it well, but you can see the values in the array have changed.  Now, let's say I just realized that for some code below to work properly, the fifth value in this array needs to be less than 10.  (We don't have any code below, and this is quite contrived, but bear with me and use your imagination…)  So, I just pop down into the immediate window and type:

my_array[5] = 8;

image

When I press return, this will execute that single line of code. You can see that my_array[5] now holds the value 8 instead of 10.

image

 

You can also set breakpoints in your code.  Just right click on the line and choose Breakpoint->Insert Breakpoint.

image

Breakpoints work kind of like the debugger command, only they don't litter your code with extra statements.  I usually insert one debugger command at the top of my script and then breakpoints where needed.  Breakpoints can also be conditional.  That is, they won't break unless some condition is true.  This is great for executing most of a loop, but breaking on the last or next to last item.  After you have inserted a breakpoint, just right click on the line and choose Breakpoint->Condition…
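If you would rather keep the condition in the script itself, you can get the same effect by wrapping a debugger statement in an if.  This is just an illustrative sketch built on the toy script from earlier; the debugger line is a no-op when no debugger is attached:

```javascript
// A conditional breakpoint written directly in script: wrap a debugger
// statement in an if so the debugger only breaks when the condition holds.
function fill_array() {
    var my_array = new Array();
    for (var i = 0; i < 10; i++) {
        if (i === 8) {
            debugger; // acts like a breakpoint with the condition i == 8
        }
        my_array[i] = 2 * i;
    }
    return my_array;
}

var result = fill_array();
```

The downside is that, like the bare debugger keyword, you have to remember to pull these lines back out when the script is done being debugged.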

image

You can insert a condition like:

image

From here, press F5, which is the Run command, and the code will execute inside the loop until i == 8.

Conclusion

The debugger is your friend; plain and simple.  Use it wisely and use it often.  In fact, if you want to script in Mechanical, you are going to have to become intimately familiar with the debugger.  In a future article, I will show you more powerful debugging tips when the debugger by itself lets you down. Stay tuned…

IT Tips – Is SP1 for Windows 7 Going to Slow Down my Machine?

ANSYS R13 Lemans CPU Wall Clock Benchmark Results Before and After a Windows 7 64-bit SP1 Update

I was curious what sort of impact the Windows 7 SP1 upgrade would have on a new install of ANSYS R13.  Would the Service Pack upgrade have a positive or negative impact on ANSYS R13, and on the Lemans benchmark in particular?  Here is what we found:

  • Hardware test machine: Single socket 6 core AMD C32 running 2.6GHz, 8GB of RAM, 2 x 160GB Intel SSD drives in RAID0, Windows 7 Professional
  • Benchmarks completed by Clinton Smith, Consulting Mechanical Engineer CFD and Thermal Analysis at PADT, Inc.

Results:

  • The benchmark results appear approximately the same before and after the install of the Service Pack.
  • No apparent issues or crashes with ANSYS R13 after installing the Windows 7 Service Pack 1 update
  • Six core and one core benchmarks are slightly faster after the SP1 upgrade.
  • Four core and two core benchmarks are slightly slower after the SP1 upgrade:

ANSYS Le Mans Model, 1,864,025 Nodes

Cores   Wall Time (s)   Wall Time (s)   Diff (s)   % Diff
        Pre SP1         With SP1
1       7,220           7,190             -30      -0.4%
2       3,970           3,980              10       0.3%
4       2,300           2,320              20       0.9%
6       1,850           1,840             -10      -0.5%
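For reference, the "% Diff" column is just the signed change in wall time relative to the pre-SP1 run; a quick sketch of the arithmetic:

```javascript
// Percent difference of the post-SP1 wall clock relative to the pre-SP1 run;
// a negative value means the run got faster after the Service Pack.
function percentDiff(preSeconds, postSeconds) {
    return 100 * (postSeconds - preSeconds) / preSeconds;
}

// Single core: percentDiff(7220, 7190) is about -0.4 (slightly faster)
// Four core:   percentDiff(2300, 2320) is about  0.9 (slightly slower)
```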

Conclusion:

It appears that overall the impact of the upgrade is positive, with no system stability or ANSYS R13 issues related to the upgrade.

FE Modeler Part 3: Morphing Meshes

Morph-lady-cat

In the first two parts of this series we talked about how to use FE Modeler to translate meshes and how to create geometry from meshes.  The third "big" thing that this tool does is allow you to morph a mesh to a new shape.  There are a few less important things that FE Modeler does, and we will cover those in the last article of this series.

First, let’s take a good long look at the picture to the left.  Mesh morphing is nothing like what the image shows, but it is pretty creepy to look at so I thought I’d throw it up there.

Mesh morphing in FE Modeler works by taking the faceted surface geometry you can create, covered in the previous article, and allowing the user to apply transformations or a projection to that geometry.  You would use this if you want to change your geometry while keeping the same basic mesh.  When you are doing an optimization or parametric study and changing the CAD geometry, you force a remesh every time, and sometimes the change in mesh is large enough to affect your study.  You would also need to use this method if all you had was an FE mesh.

Basic Process

Let’s start with a very simple example to show how it is done. 

Figure 1 shows a piece of example geometry.  It has some nice features that we will do more complex morphs on.

f1_geometry
Figure 1: Simple Example Geometry, Itching to be Morphed

To start off we do the same geometry creation steps (covered in detail in Part 2 of this series):

  1. Read in geometry
  2. Select Geometry Synthesis in the tree and make sure Use Node components is Yes
  3. Click on the Initial Geometry Icon
  4. If everything works you should have a solid that you can now work with

Now you need to define how you want to modify the faceted geometry.  Do this by selecting Initial Geometry in the tree and then clicking on the “New Target Configuration” Icon.  See Figure 2.

 

f2_MakeTargetConfiguration
Figure 2: Creating the Target Configuration

When you do this the program will go out and configure some things internally and then add two new things to the tree: 1) a “Target Configuration” that is a child of Initial Geometry and 2) a Parameterized configuration branch. Figure 3 shows the result in the tree.

 

f3_Tree_After_Target_Config
Figure 3: Model Tree After Creation of Target Configuration

So, at this point you might be trying to figure out why there are branches and children where they are.  The way I think about it is that the Geometry Synthesis listing is not a branch and is not a child of something; it is its own "tree".  So everything on the first level under it, its children, are different things you can do with Geometry Synthesis: find skins, define working geometry, set up mesh morphing (Initial Geometry), and view what the current mesh looks like based on the current parameters (Parameterized Configuration).  I feel that the Initial Geometry branch is mislabeled and should say Mesh Morphing, but it is too late to change the name now.

Back to morphing meshes…  The way you specify what you want morphed is to define Design Points that have some sort of translation or projection.  Then you specify how far to translate with a parameter that you control in the Project Schematic.  For our first run through, we are going to offset a surface. 

Do this by clicking on Design Point and then on the Transformation icon and selecting "Face Offset" (or RMB->Insert->Face Offset), then fill out the Details view.  For this example I used the end face and set the maximum offset value to 30 mm.  Note, even though it doesn't say it in the GUI, the help points out that you should put in the maximum value that you might use.  The program then scales from the original to this value as you change the offset parameter.  Figure 4 shows the setup for this example.

f4_Offset_face
Figure 4: Specifying the Face Offset, Before Generation

Click on Generate the Design Point to see what the offset looks like:

f5_Offset_Face_2
Figure 5: The Offset Face after Generation

Here is where it gets a little tricky.  All you did was define a potential geometry change.  That change is not applied to the model until you actually set parameters and apply them.  And the parameter is controlled at the Project Schematic level, not within FE Modeler.  The value you put in under Design Point is simply a maximum value that the program uses to figure out how to do the morphing.  The actual morphing uses the parameter as defined in the Project Schematic.

To see this click on Parameterized Configuration and you will see that there is 1 parameter, it is called Mesh.Morpher.1, and its value is 0.  (Figure 6).

f6_ParamValues
Figure 6: Configuration Information after Offset is Defined

Therefore, your next step is to go to the project schematic and change the offset value to some new value you want to morph the mesh to (remember, the value we put in for the Design Point is our guess at the maximum value, we can use any value we want for the parameter value).  To change the value in the Project Schematic, bring up the schematic and click on the parameter bar.  You should now see the parameter in the Outline view. For this example, I changed it to 17.354.  Now exit back to the parameter manager and click on Update Project.  Go back to your FE Modeler window, click on Parameterized Mesh, and you should see the morphed mesh:
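The way I think of the scaling, sketched below, is purely illustrative: the names are made up and this is not actual FE Modeler code.  The Design Point value is the full-scale morph, and the Project Schematic parameter picks off a proportional fraction of each node's precomputed displacement:

```javascript
// Illustrative sketch of the parameter scaling (not FE Modeler internals):
// a node that moves maxDisplacement at the full Design Point value moves
// proportionally less for smaller parameter values.
function morphedDisplacement(paramValue, designPointMax, maxDisplacement) {
    return (paramValue / designPointMax) * maxDisplacement;
}

// With our 30 mm Design Point and the parameter set to 17.354, the offset
// face itself moves morphedDisplacement(17.354, 30, 30) = 17.354 mm.
```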

f7_morphed_mesh
Figure 7: Morphed Mesh

Figure 8 shows an animation from an offset of 0 to 30: 

SimpleMorph1

Figure 8: Mesh Morphing from an Offset of 0mm to 30mm

To get the mesh out and usable in your model you have one more step.  If you click on Write Solver File now, you will get your original geometry.  You need to click on Parameterized Mesh and then click on the Update FE Modeler Mesh icon.  Now set your Target System and choose Write Solver File.  Or, if you are staying in Workbench, drag a new system onto the Model in the Project Schematic.  The cool thing about this is that you now have a parametric model that is linked to an analysis in Workbench.  It is therefore easy to do design studies, optimization, and the rest of the cool things Workbench is great at, and without CAD geometry!  Figure 9 shows an example of using the mesh for a modal analysis.

f9_Use_Morphed_Mesh
Figure 9: Using the Morphed Mesh in a Modal Analysis

Getting More Complicated

Although the process is the same, you can get a lot more complicated.  The program allows for five different translations, offsets, or projections, and some of them have multiple options within.  They are all kind of self-explanatory.

  f10_tranlation_options
Figure 10: Morphing Options

Here is the description from the help as a reference:

  • Translation (of vertices, edges, surfaces, or parts): A translation is given in the global Cartesian coordinate system or by the definition of a translation vector between two points.
  • Rotation (of vertices, edges, surfaces, or parts): You must define a rotation axis between two points or a point and a vector and then give a rotation angle in degrees or radians.
  • Face (Surface) Offset – a Face Offset can be:
    • Uniform – Enter a negative or positive Offset Value to move the face inward or outward.
    • Non Uniform – Enter a negative or positive Offset Value to move the face inward or outward. With this transformation, you can offset a surface with a nonlinear curve. In addition, a Non Uniform surface offset includes the following options:
      • Distance to the edges – Define the distance from the edges to the maximum displacement of the transformed face.
      • Function type – Select a function type based on the shape you want to obtain, options include: Linear, Double Tangent, Linear-tangent, Tangent-linear.
      • Immobile edges – By default, all of the edges for the target surface are selected. You can de-select edges if desired.
  • Edge Offset: An offset of one edge along a face by a specified distance; always with a given sign depending on the edge normal.
  • Projection: a projection of a face, an edge, or a vertex onto a face, edge, or vertex or a group of faces or a group of edges. The Projection transformation works in tandem with the Working Geometries feature. Using an imported Working Geometry, you can project the entities of a Target Configuration onto the entities of the imported (Working) geometry.

For this example I added a rotation for the block sticking up, a translation for the top circular face of the cylinder, and an edge offset for the lower right edge.  The result is shown in Figure 11.

 f11_Morphed-Unmorphed
Figure 11: Part with Offset, Rotation, Translation and Edge Offset Applied. Overlaid on Original Mesh

f12_Morphed_With_labels

Figure 12: Part with Offset, Rotation, Translation and Edge Offset Applied. Labeled. 

Mesh_Morph2

Figure 13: Animation of Morphing

Whew, that is a lot of material, and we are out of room for this article.  Look for a Part 3.5 in the near future filling in some missing pieces and a few more examples.

If you want to try this yourself, ask your sales professional for a temp key for the ANSYS Mesh Morpher.  And as always, start simple and work your way up to more complex parts.

Mapped Face Meshing in ANSYS Workbench

In the era of automatic tetrahedral meshing, many have lost their way.  Wandering analysts simply read in their geometry, specify a few sizing controls, hit a mesh button, and get a mesh.  But in the end, they receive a mesh that is not ideal, especially on the surface.  What these meandering meshers need is a map… a mapped mesh.

OK, that is a pretty lame introduction, but I’ve run out of Monty Python references and I don’t have Doug’s B movie knowledge. 

Bad intro aside, many users are not aware of the strong capabilities available in Workbench meshing for creating really nice mapped meshes on surfaces.  Once created, these mapped meshes can be used to mesh 2D and shell models as well as to extrude a mesh (sweep method) for a 3D Hex mesh or a ‘seed’ mesh for a tetrahedral mesh (patch conforming algorithm).  By using a controlled mapped mesh, users can ‘find’ their way to a better mesh.

Mapped Mesh Method

The Mapped Mesh Method is a Mesh Control that has been in ANSYS for a while.  It is found in the Mesh Control menu item when you are in the Mesh part of the tree, or by RMB->Insert->Mapped Face Meshing.  For simple geometry, and some not so simple, you simply slap the control on a face or faces and let ANSYS figure out the best way to make a mapped mesh.

F1-Add_Method

Figure 1: Insert Mapped Face Meshing

In the past, the area had to be a pretty “square” topology: four easily identified edges.  But over the last couple of releases more intelligence has been added to recognize complex geometries as mappable. Figure 2 shows several different topologies and how the mesher does a nice job of mapping them with no user modification.

F2-MapExamples

Figure 2: Automatic Meshing of Complex Topologies with Mapped Meshes

Note that the last topology did not map mesh.  We will come back to that.  But the other five all meshed with a nice looking mapped mesh. So how does ANSYS figure out how to do this? 

What the mesher does is take the geometry and break it up into four-sided chunks.  They call this making Submaps.  To do this it needs to identify the outside edges in terms of squarish topology. The secret is classifying the vertices (points where edges connect) as either an End, a Side, or a Corner.  Figure 3 (stolen from the user manual) shows an example of each:

F3-Vertex_Types
Figure 3: Vertex Types

The algorithm identifies the vertex type by looking at the inside (mesh side) angle formed by the two edges attached to the vertex, based on the following table:

Vertex Type | Angle Range | Number of Elements Connected | Image
End         | 0°-135°     | 1                            | T1-EndVertex
Side        | 136°-224°   | 2                            | T2-SideVertex
Corner      | 225°-314°   | 3                            | T3-CornerVertex
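The classification in the table can be sketched in a few lines of code. This is only an illustration of the thresholds above, not the actual mesher logic (which, as noted below, is far more complex):

```python
def classify_vertex(angle_deg):
    """Classify a vertex by the inside (mesh-side) angle between its
    two attached edges, using the thresholds from the table above."""
    if 0 <= angle_deg <= 135:
        return "End"      # e.g. a right-angle outside corner; 1 element attaches
    elif angle_deg <= 224:
        return "Side"     # nearly straight through; 2 elements attach
    elif angle_deg <= 314:
        return "Corner"   # reentrant corner; 3 elements attach
    else:
        raise ValueError("angle outside the mappable classification range")

# Typical vertices on an L-shaped face:
print(classify_vertex(90))   # outside corner -> End
print(classify_vertex(180))  # straight-through vertex -> Side
print(classify_vertex(270))  # reentrant corner -> Corner
```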

You can get a feel for the algorithm (though not the actual one, which is much more complex) by stepping through the breakup of the geometry in Figure 3 in your head:

  1. Mesh the edges, creating nodes on the edges based on local and global meshing settings.
  2. March around the edges, combining any edges linked by Side vertices into virtual edges (the only one is F-G-A in this example).
  3. Go to the first corner you encountered.  Shoot a line along the vector from B-C until you intersect with an edge. Find F-E.
  4. Count the number of nodes from A-B; call it N. Then count N nodes from F toward E and make the Nth node a virtual vertex H.
  5. Draw a virtual edge from C-H. Note that B-C-H is now one virtual edge, because in B-C-H, C is now a Side vertex.
  6. Keep marching around from H until you get back to A.  You end up with 4 corners, so now you have a sub map.
  7. Go back to where you broke off, C, and march around back to C (remember H-C is now an edge and C looks like an End in this topology) until you get another corner.  You now have the second sub map.

Pretty cool, huh? I wish I had thought of that.  The actual method is of course much more complex and has all sorts of checks and “if this, then do that” logic in it.  

Helping the Algorithm Out

Sometimes the angles just don’t work out so you need to go in and tell the mesher what is a corner, side, or end.  You do this in the Details view for the Mapped Face Meshing method you attach to the face.

F4-DetailsArea
Figure 4: Details View for Mapped Face Meshing Method

You basically click on the cell for Sides, Corners, or Ends and then identify the vertices for each. The last example in Figure 2 has rounded corners, so the algorithm identifies the vertices on the rounds as Sides because the edges meet at nearly 180°. The result is the free mesh shown in Figure 5. If you want a mapped mesh, you need to specify the vertices on the rounds yourself; Figure 6 shows the result. Well, maybe sometimes the free mesh is better.

F5_RoundedCornerFreeMesh
Figure 5: Resulting Free Mesh with Automatic Identification of Vertices

f6-RoundedCornerMapMeshed
Figure 6: Mapped Mesh after Manual Identification of Vertices

The other option on really nasty geometry is to take it into DesignModeler and imprint the surfaces to create your own sub maps.

Conclusions

In most cases, just putting a Mapped Face Meshing control on a surface will give you a nice mapped mesh.  The mapped mesh is usually more uniform, has fewer distorted elements, no triangles, and usually fewer nodes. But a mapped mesh is not always better; you need to use your engineering judgment to decide which is best in each application.  I like to use this control on fillets and on blocky parts.

So, when you don’t like the look of the default surface mesh, even if you are not hex meshing your model, turn off the autopilot and try a Mapped Face Meshing control. 

Mid-side nodes: Do they really help?

Do those pesky mid-side nodes really do anything other than increase my node count and runtime?  Over the years, I have heard that question over and over: “Do I really need them?”  If you are using tetra elements, then the answer is absolutely, “Yes!”  They do even more than just give you nice curvature to follow your geometry.  For tetras, the mid-side nodes are necessary to predict proper stresses.  Lower-order tetras are overly stiff and will under-predict deflection and stresses.  Let’s look at the underlying equations that make up the shape functions in both element types.

image

Yeah, the equations for tetras are a little more complicated than that, but I really didn’t think you wanted to go down that road. I will, however, remind you that low-order elements have linear shape functions, while high-order elements have quadratic shape functions. So let’s look at an example instead, to show that lower-order tetras give unconservative results.
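For reference, here are the one-dimensional versions of the two shape-function families; these are the standard textbook forms, and the tetra equivalents are the same idea written in volume coordinates:

```latex
% Linear (2-node) element, natural coordinate \xi \in [-1, 1]:
u(\xi) = N_1 u_1 + N_2 u_2, \qquad
N_1 = \tfrac{1}{2}(1-\xi), \quad N_2 = \tfrac{1}{2}(1+\xi)

% Quadratic (3-node) element, mid-side node at \xi = 0:
u(\xi) = N_1 u_1 + N_2 u_2 + N_3 u_3, \qquad
N_1 = \tfrac{1}{2}\xi(\xi - 1), \quad
N_2 = \tfrac{1}{2}\xi(\xi + 1), \quad
N_3 = 1 - \xi^2
```

The linear element can only represent a constant strain within the element (its derivative is constant), which is exactly why it is too stiff in bending.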

For the example, I made a simply-supported, rectangular-cross-section plank.  It’s 100 inches long, by 10 inches wide, by 1 inch thick, with a 1 psi pressure load. I set the mesh size at 1 inch and solved with low-order tetras, high-order tetras, and also with low and high-order hex elements.  The results are tabulated below along with the hand-calculated solution from the 7th Edition of Roark’s Formulas for Stress and Strain, Table 8.1, case 2e.
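As a sanity check on the hand solution, here is the simple beam-theory calculation for the same simply-supported, uniformly-loaded plank. Note the elastic modulus is my assumption (about 29e6 psi, i.e. steel); the article does not state the material:

```python
# Simply supported plank: 100 in span, 10 in wide, 1 in thick, 1 psi pressure.
L, b, t, p = 100.0, 10.0, 1.0, 1.0
E = 29.0e6                              # psi; assumed steel, not stated above
w = p * b                               # line load, lb/in
I = b * t**3 / 12.0                     # second moment of area, in^4
sigma = (w * L**2 / 8.0) * (t / 2.0) / I  # max bending stress at midspan
delta = 5.0 * w * L**4 / (384.0 * E * I)  # max deflection at midspan
print(f"stress = {sigma:.1f} psi")      # 7500.0 psi, matching the table
print(f"deflection = {delta:.4f} in")   # ~0.539 in, close to Roark's 0.53864
```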

image

image

              | Deflection | Def. % Diff | Stress     | Stress % Diff | Nodes  | Elements
Roark’s       | -0.53864   |             | 7500.0 psi |               |        |
20-Node Hex   | -0.54159   | 0.55%       | 7527.0 psi | 0.36%         | 7,553  | 1,000
8-Node Hex    | -0.54142   | 0.52%       | 7529.5 psi | 0.39%         | 2,222  | 1,000
10-Node Tetra | -0.54158   | 0.55%       | 7525.9 psi | 0.35%         | 20,661 | 11,998
4-Node Tetra* | -0.27151   | -49.6%      | 2249.3 psi | -70.0%        | 3,222  | 11,999

As you can see, the hex elements along with the 10-node tetras get close to the solution and provide conservative results.  The 4-node tetras, however, which are actually degenerate 8-node hex elements because ANSYS removed its 4-node tetra elements a long time ago, show only half the deflection under the same load.   The extra stiffness also causes the modal frequencies to be higher; in this case, the frequencies of the low-order tetras were about 40% higher than those of the high-order tetras.  When you’re looking at operating ranges, this could also lead to unconservative conclusions.

Of course this is a pretty coarse mesh with only one element through the thickness. So how do the results change as the mesh is refined? We should only need to cut the element size by half to get the extra nodes and the same results, right? 

Element Size | Deflection | Def. % Diff | Stress     | Stress % Diff | Nodes     | Elements
Roark’s      | -0.53864”  |             | 7500.0 psi |               |           |
1”           | -0.27151”  | -49.6%      | 2249.3 psi | -70.0%        | 3,222     | 11,999
0.5”         | -0.39768”  | -26.2%      | 4879.1 psi | -34.9%        | 17,624    | 77,612
0.3”         | -0.47582”  | -11.7%      | 6069.5 psi | -19.1%        | 69,040    | 334,433
0.2”         | -0.51023”  | -5.3%       | 6760.9 psi | -9.8%         | 213,122   | 1,105,641
0.1”         | -0.53345”  | -0.96%      | 7309.6 psi | -2.5%         | 1,599,327 | 8,877,326

As you can see, the answer is, “No.”  Because the high-order elements have quadratic shape functions, it takes far more linear elements to make up the difference. Here is a little animation to show why: you can see the error left by each set of linear elements when they try to match a quadratic function.

JoeAnim1
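You can see the same effect numerically. The worst-case error of piecewise-linear interpolation of a quadratic shrinks only as the square of the element size, so halving the element size buys a factor of four, not an immediate match. A quick sketch (not ANSYS, just interpolation arithmetic):

```python
def max_interp_error(n_elems):
    """Worst-case error when piecewise-linear segments interpolate
    f(x) = x**2 on [0, 1] using n_elems equal elements."""
    h = 1.0 / n_elems
    errs = []
    for i in range(n_elems):
        a, b = i * h, (i + 1) * h
        mid = 0.5 * (a + b)
        chord = 0.5 * (a**2 + b**2)   # linear interpolant at the midpoint
        # For f = x^2 the error peaks at the element midpoint: h^2 / 4.
        errs.append(chord - mid**2)
    return max(errs)

for n in (1, 2, 4, 8):
    print(n, max_interp_error(n))  # error drops by 4x each time n doubles
```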

For another example, I made a long square-cross-section bar with holes in it.  It is cantilevered with a shear load at the other end.  I set the Physics preference to Mechanical and the Relevance Center to ‘Fine’ and I got the mesh that you see below. (I’d say it’s not too bad for 15 seconds of work.)

image

   imageimage

Just to make sure that my mesh was the same in both cases, I told Mechanical to drop the mid-side nodes and then added a Command Object in the high-order case to convert the elements to SOLID187 tetras during the run. Since no geometry is sent from Mechanical, the mid-side nodes were added with straight edges, so no further curvature was picked up.  Here’s the command object to try out yourself.

imageimage
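The command object itself is only shown as an image above. A minimal sketch of what such a snippet typically looks like follows; treat the element type number as an assumption and check it against your version’s element reference:

```
! Command object inserted under the part's branch in Mechanical.
! matid is the material/type number Mechanical assigns to the scoped body.
et,matid,187    ! redefine the body's element type as SOLID187,
                ! the 10-node quadratic tetrahedron
```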

 

By scoping the results to just the hole closest to the fixed end we can see the difference in the stress values for the two runs.  The low-order tetras yield results that are 12% lower for deflection and over 19% lower for stress than the same mesh with midside nodes.  (But look, Boss! I saved 4 seconds on the solve time!!!)

           | Elements | Nodes  | Total Deflection | SEQV       | Solver Runtime
Low-order  | 18,383   | 4,854  | 0.28744”         | 177.51 Ksi | 6.677 Sec.
High-order | 18,383   | 31,332 | 0.32633”         | 219.46 Ksi | 10.951 Sec.

 

imageimage

imageimage

So how many low-order tetras are needed to get the same accuracy?   This time I used the Convergence Tool in Workbench to find out. The Convergence Tool refines the mesh only for the scoped region of the results plot, so the resultant mesh looks like this.

image

imageimage

 

clip_image001[6]

           | Elements  | Nodes   | SEQV       | Solver Runtime
Low-order  | 1,271,619 | 234,484 | 21.951 Ksi | 349.785 Sec.
High-order | 18,383    | 31,332  | 21.946 Ksi | 10.951 Sec.

The last pass took just under six minutes to run (349.8 sec). The stress finally gets as high as it is with the high-order elements, but the deflection results are still incorrect because the model was only refined at the hole.  And say it with me: “If the deflection is wrong, then the stress ain’t gonna be right!!!”  So basically I’ve wasted half an hour getting the wrong solution so that you won’t have to do the same. Just remember, mid-side nodes are your friends. And just like any good friend, take advantage of them when you can!

Using the Solar Load Model in Fluent

One of the useful features in ANSYS Fluent is the solar load model. This capability can be used to calculate the effective radiation load based on the position on the earth’s surface (latitude and longitude), the model orientation with respect to North, the time of day, the season, and established conditions for clear or cloudy weather. The applications are widespread, but readily apparent are calculations of radiative heating on automotive cabins and buildings.

Two options are available for the solar load model: solar ray tracing and discrete ordinates. Solar ray tracing is coupled to Fluent by computing heat fluxes from incident solar radiation using a ray tracing and shading algorithm, and then applying the calculated heat flux on element faces of semi-transparent walls. Solar ray tracing presents less computational overhead than discrete ordinates, as it calculates the solar loads once at the beginning of a steady-state simulation. However, it uses some simplifying assumptions to do so (another example of the accuracy/solution-time tradeoff that simulation analysts often encounter): it does not calculate the emission from surfaces, and the reflected component of the incident load is distributed uniformly across all participating surfaces rather than retained locally at the surfaces it reflects to. However, solar ray tracing can be coupled with one of the supported radiation models (P-1, Rosseland, Surface-to-Surface) if surface emission effects are important.

The discrete ordinates method is coupled to Fluent by calculating radiation heat fluxes on semi-transparent walls. The discrete ordinates approach directly calculates the irradiation flux as a boundary condition for the solution of the radiative transfer equation.

A key part of the Solar Load Model in Fluent is the Solar Calculator, which calculates the solar beam direction and irradiation for a known time and position. The Solar Calculator inputs are displayed in Figure 1, followed by an explanation of each.

Solar_calc_GUI

Figure 1 – Solar Calculator Panel for Fluent’s Solar Load Model

The inputs for the Solar Calculator are:

· Global position (latitude, longitude, and time zone). The time zone is measured as a plus or minus from Greenwich Mean Time (GMT).

· Starting date and time

· Orientation of the model with respect to North/East

· Solar irradiation method

· Sunshine factor

There are two choices for the irradiation method: Fair Weather Conditions (ASHRAE) and Theoretical Maximum (NREL). These methods are similar, but the Fair Weather Conditions method applies greater attenuation to the radiative load. The sunshine factor is simply a linear multiplier that allows the incident load to be reduced to account for cloud cover. Once the irradiation and solar beam direction are known, they are applied as inputs to the solar ray tracing algorithm or the discrete ordinates method.

After selecting the appropriate input conditions for the Solar Calculator, you can click “Apply” to generate the irradiation and solar beam direction information. This information is printed to the Fluent TUI (text user interface) window and typically looks something like:

Theoretical Maximum:
Sun Direction Vector: X: -0.0722563, Y: 0.799654, Z: -0.596097
Sunshine Fraction: 1
Direct Normal Solar Irradiation (at Earth’s surface) [W/m^2]: 1327.15
Diffuse Solar Irradiation – vertical surface: [W/m^2]: 105.195
Diffuse Solar Irradiation – horizontal surface [W/m^2]: 113.693
Ground Reflected Solar Irradiation – vertical surface [W/m^2]: 117.495

Figure 2 – Printed result in the Fluent interface after applying the Solar Calculator

This information is printed so that it can be used in the specification of boundary conditions. The Solar Load Model (for both Ray Tracing and Discrete Ordinates) applies a calculated solar flux on semi-transparent surfaces only, so if there are opaque surfaces in the model, the solar load will have to be applied as a boundary condition on the opaque surfaces explicitly.

The explicit application of the solar load to the opaque surfaces in the model requires the calculation of the heat flux on that surface. The solar load is a vector quantity applied to the model along the “Sun Direction Vector” (Figure 2). The incident solar load on a particular surface will be the component of this vector projected into the direction of the local surface normal vector. The result of the projection is then multiplied by the “Direct Normal Solar Irradiation” shown in the printed output in Figure 2. This value represents the heat flux which must be applied as a boundary condition on the opaque surfaces of the model. Intuitively, if the result of the dot product between the “Sun Direction Vector” and the unit normal on the surface is a negative scalar, then the surface in question is not irradiated by the sun.
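Worked out in code, the projection described above looks like the sketch below. This is not Fluent’s implementation; the sun vector and irradiation are the values printed in Figure 2, and the surface normals are made-up examples:

```python
def incident_flux(sun_dir, surf_normal, dni, sunshine_factor=1.0):
    """Direct solar flux on an opaque surface: project the Sun Direction
    Vector onto the outward unit normal and scale by the Direct Normal
    Solar Irradiation. A non-positive dot product means the surface
    faces away from the sun and receives no direct load."""
    dot = sum(s * n for s, n in zip(sun_dir, surf_normal))
    return max(0.0, dot) * dni * sunshine_factor

# Values from the printed Solar Calculator output in Figure 2:
sun = (-0.0722563, 0.799654, -0.596097)
dni = 1327.15  # W/m^2

# Hypothetical surface normals, for illustration only:
print(incident_flux(sun, (0.0, 1.0, 0.0), dni))   # ~1061 W/m^2
print(incident_flux(sun, (0.0, -1.0, 0.0), dni))  # 0.0, faces away from the sun
```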

The Solar Load Model is applied to calculate the load on a simple model of two solid aluminum blocks subjected to solar heating in Phoenix on a summer day at 1 pm with an ambient temperature of 100 degrees Fahrenheit (see Figure 3). The view in Figure 3 is from the South (positive y direction).

fluent01

Figure 3 – Simple geometry for coupled thermal and fluid solution with solar loading

The grid contains ~200e3 cells, and uses hex elements to resolve the interior of the blocks and tetra elements to resolve the air volume of the surroundings. Inflation layers are used at the interfaces between the air volume and the solids. The solar loads on each surface and contributions from natural convection (in the form of heat transfer coefficients) are applied as heat flux conditions on the walls of the aluminum blocks. In this case, the Solar Calculator is used to get the sun direction vector and applied load, which are applied as individual boundary conditions. Solar ray tracing and the coupling of the Solar Load Model with the radiation models are not utilized in this example, primarily because the boundaries on which the solar loads are applied are all opaque, and the re-radiation of energy is not accounted for. The coupled solver with pseudo-transient relaxation is applied for the solution of the momentum, energy, and turbulence equations. A few representative plots are shown in Figure 4 a-c.

 

fluent03fluent04fluent06

(a)                                                                (b)                                                              (c)

Figure 4 – Simulation results: (a) Temperature contours on aluminum blocks; (b) Temperature contours in a plane; (c) Vertical velocity contours in a plane

In summary, the Solar Load Model in Fluent presents a way to easily calculate details about the solar heat flux conditions given a certain geographical location, season, and time of day. Furthermore, Fluent’s Solar Load model has the capability to resolve the physics associated with solar heating of transparent media and re-radiation problems.

FE Modeler Part 2: Converting a Mesh to Geometry

HolyGrailofFEA_thumb1

One of the “Holy Grails” of simulation is the ability to take a finite element mesh and convert it into geometry that you can then use to create a new mesh or import into a CAD system as CAD geometry.  What most users do not know is that ANSYS, Inc. has a product that can do this: FE Modeler.  With the addition of a very affordable Mesh Morpher license, users have access to a straightforward process to convert a mesh into geometry.

People usually want to do such a conversion for one of two reasons.  The first is the “legacy mesh”: they have an FEA mesh from the past that has no geometry associated with it, and they want to make a new mesh or use it to create a mesh around it for CFD.  (This can also be an STL file from somewhere.)  The second is that they want to do an analysis on, or get a CAD part of, a part that is distorted in some way by a load.  They may want to do a metal forming simulation and get the final shape for their assembly model, or they may need to do a flow analysis on a piece of geometry that gets highly distorted.

The Basics:

Mesh_Witch_thumb

If you have ever tried to take a mesh to geometry yourself using other tools or a CAD tool, you know it is very hard to do unless you are working with a part made of nothing but flat planes.  This is because a tool like this must:

  1. Take a 3D FEA model and find only the external faces.
  2. Find edges between faces that can be considered geometric edges.  This is done by looking at the angle between the normals of adjacent faces; if it is above a certain criterion, the edge is tagged.
  3. Gather the edges into loops and find all the element faces enclosed by those loops.
  4. Fit a cylinder, plane, cone, or NURBS surface to the faces and bound them with the edge loops.
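Step 2, tagging an edge wherever adjacent face normals differ by more than a threshold, is the heart of the method. A simplified sketch follows; the real tool works on element faces, two flat facets stand in for them here, and the 30° threshold is an arbitrary choice:

```python
import math

def angle_between(n1, n2):
    """Angle in degrees between two unit normal vectors."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def is_feature_edge(n1, n2, threshold_deg=30.0):
    """Tag the shared edge of two facets as a geometric edge when
    their normals deviate by more than the threshold."""
    return angle_between(n1, n2) > threshold_deg

# Two facets meeting at a right angle form a real edge; two coplanar
# facets are one smooth surface with no edge between them.
print(is_feature_edge((0, 0, 1), (1, 0, 0)))  # True  (90 degrees)
print(is_feature_edge((0, 0, 1), (0, 0, 1)))  # False (0 degrees)
```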

If you have a simple, blocky part where all the faces meet at right angles, no problem. But add a few fillets and boom, the tools can never sort out the loops. So what to do?

You need to help the algorithm out by splitting up your faces into components (or Named Selections in Workbench speak) and make sure that those components define a chunk of external element faces that are fairly easy for the program to work with. Figures 1-4 show this for some simple blocks.

Block 1 has right-angle corners, so the algorithm makes quick work of it and finds six surfaces and twelve edges.  Unfortunately, real parts rarely have all right angles.

Block 2 has fillets and a fine enough mesh that the algorithm can’t find any edges.  It tries to turn the entire outer surface of the mesh into one big surface, and fails.  No closed loops; bummer.

Block 3 is the same as Block 2, except that the top surface patch is defined by a nodal component.  This gives the algorithm enough information to create two loops and therefore a valid solid. But it may not be what you want, because the larger surface wraps around most of the block, and when you try to turn that into a Parasolid it usually fails, as shown in Figure 4.

Block 4 has a component defined for each surface in the original CAD model.  The algorithm loves this and makes each patch into a nice surface that then converts to usable CAD geometry.  This is fairly easy (if a bit tedious) to do if you are starting with CAD geometry, meshing it, and using the deformed shape to create a new mesh.  But if you have just a mesh, you will need to be a bit of a Mechanical APDL selection guru to make it happen; more on that below.

p2_f1_Mesh_thumb

Figure 1: Mesh from *.CDB File

p2_f2_NodalComponents_thumb

Figure 2: Nodal Components

p2_f3_Surfaces_thumb

Figure 3: Surfaces After Detection

p2_f4_CAD_Geometry_thumb

Figure 4: CAD Surfaces

The Process:

To illustrate the process, we will assume we have a real world problem:  You have a piece of tubing as shown in Figure 5 that you want to install into place, but to do so you really have to distort it (Figure 6 is the stress analysis results in the installed shape).  As you can see, there is significant ovalization and you need to figure out what the flow will be in this distorted shape.

p2_f5_TubeCADGeometry_thumb

p2_f6_TubeDeflectedShape_thumb

Figure 5: CAD Geometry

Figure 6: Installed Shape

p2_f7_SystemSchematic_thumb
Figure 7: Project Schematic of Stress and Fluid Runs

So the first step is really in the CAD area.  You need to use ANSYS DesignModeler to split your geometry up into surfaces that can be easily found and refit by FE Modeler.  One thing that always helps is to split any cylinders in half.  Break up long, nasty surfaces and merge together any small and complex ones.  You can create your components (Named Selections) here or in ANSYS Mechanical, your call.  Either way, every surface that is not a plane needs to be given an individual name.  If your CAD system supports it, you can also do this in the CAD system before you slice and prep in ANSYS DesignModeler.

While I was writing this article Matt Sutton was doing Part 3 of his Mechanical Scripting HOWTO.  I asked him if he could make an example script that would automatically create a Named Selection for each surface in a part.  You can find it here:

http://www.padtinc.com/blog/post/2011/01/05/An-ANSYS-Mechanical-Scripting-Example.aspx

If you have complex geometry, this script is highly recommended.

Next, move into ANSYS Mechanical and set up your analysis like you normally would.  If you didn’t bring over Named Selections, define them now.  Then insert a simple code snippet into the Solution branch with two commands:

      upcoord, 1   ! move the nodes to the calculated deflected position
      cdwrite      ! write the model, deflected nodes and all, to a *.cdb file

p2_f8_command_snippet_thumb2
Figure 8: Command Snippet

The UPCOORD command tells MAPDL to take the current solution (by default the last load step; if you want a different point in the solution, add a SET command first) and move the nodes to the deflected shape that has been calculated. After this command your nodes are in the deformed position. The CDWRITE command simply writes the model to a text file, and because your nodes are in the deflected position, that file contains your distorted geometry.  Run your analysis, and when it is done, the snippet is executed and the *.cdb file is written.  If you already have a solution and do not want to rerun, just go into ANSYS MAPDL and type the commands in /POST1.

Now that you have the distorted mesh, you need to get it into FE Modeler.  Create a new standalone FE Modeler system and RMB on Model and choose Add Input Mesh…  Your *.cdb file will be in your project directory in the dp0/sys/Mech directory (this assumes that you only have the one model in your project; use the appropriate dp* and sys* to identify your file if you are using something other than the first model in your project).

If you are not using SI units, you will need to set the properties for the FE Modeler system and for the file in the Project Schematic, or things will get all messed up. To do this, RMB on the Model cell and choose Properties, and set the units for FE Modeler there.  Then RMB again on the cell and choose Manage Input Meshes.  This shows the file in the outline view, and when you click on it you can change the Unit System property.

Up2_f8p5_SettingUnits_thumb

Figure 8.5: Setting Units on FE Modeler and on your input mesh file

Launch FE Modeler and make sure your mesh came in without any problems.  Check the bodies, the element types, etc.  Then look at your components in the tree.  Note that it captures not only the surface components you defined; any nodal constraint or load that you defined will show up as well.  You may need to delete some components. You can also use the selection capabilities in FE Modeler to make new components.

Once you are happy with that, it is time to let the program find faces.  Select Geometry Synthesis in the tree and then in the Details make sure that “Use Node Components” is set to Yes!  This is very important and the most common mistake users make. 

p2_f9_components_in_FEM_thumb

Figure 9: Nodal Components in FE Modeler

Now click on the Initial Geometry icon and it will go do its thing.

If you get bad surfaces, you need to go back, add more components, and break things up a bit.  But if it works, you should see a solid of your part.  You are not done yet: these are not “CAD” surfaces but rather faceted (a bunch of triangles) surfaces. 

p2_f10_InitialGeometry_thumb

Figure 10: Deformed Faceted Surface

Select Initial Geometry in the tree and you should see a “Convert to Parasolid” icon on the ribbon bar.  Click that and it will use the Parasolid modeling kernel to try to fit your surfaces.  This can take a while on a complex part.  If you split things up well enough and your mesh was fine enough, you should now have a surface model of your geometry! 

p2_f11_ParasolidSurfaces_thumb

Figure 11: Parasolid Surface Geometry in FE Modeler

If your geometry is simple, it will even sew it into a solid.  But if the little icon next to your part in the tree is a surface and not a block, you still need to sew things together.  Do this by choosing the Add a Sew Tool icon or RMB->Add a Sew Tool.  Select the surfaces you want to sew (the tree is the easiest way to select all of them), then hit the Generate button and see what you get.  If it does not make a solid, make the tolerance a bit bigger.  If that still does not work, you may need to export the surfaces to a CAD tool with good surfacing and sew them there. 

To export your distorted geometry, select Parasolid Geometry in the tree and choose Export to a Parasolid File or RMB-> Export to a Parasolid File.

You have done it: you have real CAD geometry of your deformed shape!

p2_f12_SolidWorksDistortedModel_thum

Figure 12: The Holy Grail: Your Distorted Geometry in Your CAD System

Getting the distorted shape into CAD is pretty critical; at least now your drawings are realistic instead of some guess at the installed shape.  But if you need to do more analysis, that proceeds like any other simulation.  You read in your CAD geometry (CAD geometry you made from distorted nodes, yea, that is what I’m talking about! Hoooo haaaa!) and set it up for your analysis and run it.  In our example we took the Parasolid file into DM and created a fluid volume inside the pipe (which is way easy to do in DM) and then did a CFD run.

p2_f13_CFD_Solution_thumb1

All in ANSYS products, all within the Workbench Environment. 

Starting with a Legacy Mesh or STL file:

The reason the above method worked so well is that we had original CAD geometry that we could make components for.  When you start with just a legacy mesh without components, you need to get in there and make nodal components on the surface.  FE Modeler has some basic node picking capabilities that you can use, but for real parts you really need to get into ANSYS Mechanical APDL and use the following method:

    1. Use ESURF with MESH200 elements to create a surface mesh from your solid mesh. 
    2. Delete the solid mesh.
    3. Use picking, selection commands, and the like to break those elements into surfaceable chunks.
    4. When you have things split up fairly well, use CDWRITE to make a *.cdb file for FE Modeler.
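A hedged APDL sketch of steps 1, 2, and 4 follows. The element type number and KEYOPT are illustrative assumptions; check the MESH200 entry in the element reference for the shape key that matches the faces of your solid mesh:

```
! Skin the solid mesh with MESH200 "mesh-only" elements.
et,2,mesh200,6        ! MESH200, keyopt(1)=6: 3-D quadrilateral facets (assumed)
type,2
esurf                 ! generate surface elements on the free faces
esel,s,type,,1        ! select the original solid elements...
edele,all             ! ...and delete them, leaving only the skin
allsel
cdwrite,db,skin,cdb   ! write the surface mesh out for FE Modeler
```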

This can be pretty tedious, but a bit of up-front work will result in nicer surfaces, and may determine whether you even get a solid model.

Comments:

The guts of this product have been around for many releases under a couple of different names, ParaMesh being the most common.  At R13 we have really found that it is robust and usable for simple to moderately complex geometry. The key is the nodal components: set those up right and it should work out.

We have also found that working with a refined mesh is much better than a coarse one.  So if you are running an analysis to get a distorted shape, you may want more refinement than the analysis actually requires, in order to get good surfaces for a distorted solid model.

It should be noted that although this is a pretty amazing tool, it cannot do miracles.  Really complex, distorted, or coarse meshes just will not work.  But do not give up: get what surfaces you can into your CAD system and patch them up there.  

Bring_me_a_Shrubbery_thumb

You can download the project that has the blocks and the distorted tube and take a look at this yourself, or try it with your own geometry.  You will need to ask your ANSYS sales professional for a temp key of ANSYS Mesh Morpher to use this tool.  Play with it, get a feel for it.

So, next time your boss says “Bring me…. a…… CAD file of a Deflected Shrubbery!”  you know what to do.

Direct Modeling with SpaceClaim

Those keeping up with ANSYS, Inc. update presentations may have seen mention of a new product called ANSYS SpaceClaim Direct Modeler, a slightly modified version of the CAD package SpaceClaim (from the SpaceClaim Corporation).  It is sold by ANSYS, Inc. and can operate either as a stand-alone application or integrated into the Workbench environment.  It’s a nice combination: a slick CAD tool for use with a high-powered FEA program.

scdm

In case you’re keeping count, SpaceClaim brings the total number of geometry processing modules ANSYS offers up to 3:

The very first one is /prep7, available within ANSYS MAPDL.  If you’re using this for serious CAD work, you have way too much time on your hands.  It’s good at doing basic modifications (surface imprints, solid gluing, surface stitching) but not much else.  The nice thing about it is that you can harness APDL to create batch jobs.  I’ve seen this used heavily in board and package level semiconductor analyses for creating BGAs and all the other small parts that need to be considered.

The next module is DesignModeler.  This is the default CAD creation/manipulation/repair module within ANSYS Workbench (it requires an additional license).  DesignModeler (or DM, as the cool kids call it) does everything except create detailed drawings.  It’s extremely useful for creating fluid domains, gluing bodies for conformal meshes in Workbench, and repairing/modifying geometry, and is overall an indispensable tool for creating efficient models within Workbench. 

As I mentioned at the top, there is a new module called ANSYS SpaceClaim Direct Modeler (still waiting for confirmation that the cool kids call it SCDM, as the product is fairly new) which is an alternate CAD module within Workbench.  So what’s the difference between DesignModeler and SpaceClaim?  Glad you asked!  Here’s a 30k-ft top-level view:

 

DesignModeler                                   | SpaceClaim
History-Based CAD Creation                      | Direct (Explicit) CAD Creation
Uses CAD Plug-Ins to Import Geometry            | Uses CAD Translators to Import Geometry
Assembling Multiple Parts is ‘Character Building’ | Assembling Multiple Parts is Easy

So let’s dig in to these differences.

History vs Direct CAD Creation:

Regardless of how you create geometry, under the hood there’s some fairly complicated math going on to build NURBS surfaces.  The overall shape of the geometry is defined by the order, control points, and knot vector, in addition to all the continuity conditions that are enforced.  Now that I’ve lost your attention, the difference between history-based and direct CAD modeling is in how all these controls are defined.  A history-based modeler has (as you guessed) a history, typically in the form of a tree.  This means you need to have some design intent in place so that downstream features are created relative to existing objects.  Long story short, this makes it easier to tweak designs while still maintaining the design intent, and it can aid CNC machining operations (hopefully your history matches what the machinist would actually do).

Direct modeling, on the other hand, doesn’t have a history.  It just is.  So unlike history-based modeling, you won’t run into the issue of deleting a fillet/chamfer and having downstream operations fail.  You don’t have to worry about implied datums or anything else like that.  Where the wheels fall off the direct modeling method is when you need to repeat a bunch of downstream operations.  For example, I had just completed a major defeaturing effort on a sheet-metal riveted nightmare.  The analysis showed that the current design was inadequate, so more rivets were added.  Using SpaceClaim, I couldn’t just go back to an upstream import and redefine a few downstream operations.  I wound up repeating the same defeaturing operations on the negative-margin portion of the model, then slapped it into the existing model.  Direct modeling really shines if you’re trying to tweak the shape of a part by just pulling/pushing certain faces together; with a history-based modeler you would have to base all of your operations on existing objects, which can be extremely tedious and/or impossible.  An example is shown below, where a tiny step is removed by drafting (through a ‘Pull’ operation) a face to an existing edge:

example1

Long story short: history-based modeling offers good ‘turn the crank’ capabilities for recreating entire assemblies based off a few minor tweaks, while direct modeling skips the hassle of accidental datums and other parent/child relationships.  Direct modeling may feel like CAD without a safety net; I prefer to view it as more of a Buddhist-esque CAD system.  Stop grasping after the history tree and you’ll find enlightenment (in most situations).

CAD Plug-ins vs Translators:

This one is simple.  Plug-ins require the CAD system to be installed locally on the machine.  Translators do not.  Both SC and DM will import SolidWorks, Pro/E, and all your other favorite CAD systems.  The difference is that if you’re using DM, you’ll need to have the source-CAD tool installed as well.  If you’re using SC you can free up that disk space. 

Creating Assemblies:

 

For anyone who’s ever used DM to assemble multiple parts, I feel your pain.  It essentially uses the same methodology as /prep7’s VTRAN command, translating a selected part from one coordinate system into another.  Starting at R12, you could specify translations and rotations, but you still had to know how far to move the part to place it properly.  Once you get the hang of it, it’s not hard (just tedious).  Throw in the frozen vs. active (aka unfrozen) behavior, and it can make for some choice words after hitting ‘Generate’.

Oh, how I’ve wished there was a way to specify a mate/align/insert constraint.  In comes SpaceClaim, like a knight in shining armor (or like Will Ferrell as a lumberjack in Step Brothers… I’ll let you look it up on YouTube).  It’s super easy to assign constraints and assemble your parts.  Like most other operations in SC, it’s intuitive.  Pick two cylindrical faces and hit ‘Align’ and BAM, you’ve got a coaxial constraint.  Pick two flat faces and hit ‘Align’, KAPOW, you’ve got an align/mate constraint.

 


 

On top of these things, here’s a quick list of other features.

Cross-Section Mode: Easy to create cross-section views.  Changes made to the cross-section view propagate through the entire solid (awesomely awesome if you ask me)

Keyboard Shortcuts: Basic keyboard shortcuts for most common operations

Midsurfacing (DM): Extract a midsurface representation and thickness for use in shell meshing

Shared Topology (DM): Ability to define multi-body parts for a conformal mesh

Interference Detection: Find (and fix) overlapping regions in an assembly

Multiple Tabs: Ability to open a single part in its own tab.  Changes made on a separate tab propagate back to the assembly

(DM) indicates features that already exist within DesignModeler

So, now that you’re tripping over yourself to get your hands on this new tool, where do you go to learn more?  SpaceClaim has some excellent (and free) video tutorials complete with input files.  You can access these by going to:

www.spaceclaim.com

Then go to Support > Tutorials, and start downloading.

Finally, in an effort to leave you ‘wanting more’, here’s a sneak peek at what I’ll be doing during the Webinar:

example

Caps and Limits on Hardware Resources in Microsoft Windows

Sometime around 3 a.m. last October I found myself beating my head against a server rack, frustrated trying to figure out what was limiting my server hardware. I was aware of a couple of limits that Microsoft had placed in its OS software, but I had no idea how far-reaching those limits were. I figured I would be better off understanding them, so I researched the caps placed on hardware by two of the most popular operating system families on the planet: Microsoft Windows (the Windows 7 and Windows Server 2008 R2 editions) and Red Hat Enterprise Linux (RHEL).

I have compiled this easy-to-read, superlative analysis of my facts and findings. Read on; I know you are curious to find out what I’ve uncovered.

Enjoy!


“They capped and limited US!”

Microsoft Windows Operating Systems

·  Windows 7

    Professional / Enterprise / Ultimate
        Processor: 2-socket limit (many cores per socket)
            64-bit: up to 256 cores
            32-bit: up to 32 cores
        RAM: 192 GB
    Home Premium
        RAM: 16 GB
    Home Basic
        RAM: 8 GB
    Starter Edition
        RAM: 2 GB

·  Windows Server 2008 Standard & R2

    Processor: 4-socket limit (many cores per socket; e.g. 4 sockets x 12 cores = 48 cores)
    RAM: 32 GB

·  Windows Server 2008 R2 Foundation (R2 releases are 64-bit only)

    RAM: 128 GB

·  HPC Edition 2008 R2 (R2 releases are 64-bit only)

    RAM: 128 GB

·  Windows Server 2008 R2 Datacenter (R2 releases are 64-bit only)

    Processor: 8-socket limit
    RAM: 2 TB

·  Windows Server 2008 R2 Enterprise (R2 releases are 64-bit only)

    Processor: 8-socket limit
    RAM: 2 TB

Red Hat Enterprise Linux – 64-bit

·  Red Hat defines a logical CPU as any schedulable entity, so every core/thread in a multi-core/multi-thread processor is a logical CPU.

·  These figures are the product defaults, not the maximums of a fully licensed/subscribed RHEL product.

    Desktop
        Processor: 1-2 CPUs
        RAM: 64 GB
    Basic
        Processor: 1-2 CPUs
        RAM: 16 GB
    Enterprise
        Processor: 1-8 CPUs
        RAM: 64 GB

Note: Red Hat would be happy to create custom subscriptions with yearly fees for other configurations to fit your specific environment.  Please contact Red Hat to check on costs.
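If you are sizing a box against these caps, the figures above drop naturally into a small lookup table.  Here is a minimal Python sketch (the table holds the socket and GB numbers quoted in this post; check Microsoft’s and Red Hat’s current documentation before buying hardware):

```python
# Socket and RAM caps quoted in this post, as a quick lookup table.
# Treat the vendor documentation as the authority, not this dict.
OS_LIMITS = {
    "Windows 7 Professional":            {"sockets": 2, "ram_gb": 192},
    "Windows 7 Home Premium":            {"sockets": 2, "ram_gb": 16},
    "Windows Server 2008 Standard":      {"sockets": 4, "ram_gb": 32},
    "Windows Server 2008 R2 Datacenter": {"sockets": 8, "ram_gb": 2048},
    "RHEL Desktop":                      {"sockets": 2, "ram_gb": 64},
    "RHEL Enterprise":                   {"sockets": 8, "ram_gb": 64},
}

def fits(os_name, sockets, ram_gb):
    """True if a proposed machine stays within the quoted OS caps."""
    cap = OS_LIMITS[os_name]
    return sockets <= cap["sockets"] and ram_gb <= cap["ram_gb"]

if __name__ == "__main__":
    # A 4-socket, 64 GB box blows past the Server 2008 Standard RAM cap:
    print(fits("Windows Server 2008 Standard", 4, 64))        # False
    print(fits("Windows Server 2008 R2 Datacenter", 8, 512))  # True
```

Nothing fancy, but it beats discovering a RAM cap at 3 a.m. next to a server rack.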

References

http://msdn.microsoft.com/en-us/library/aa366778(VS.85).aspx#memory_limits

http://www.microsoft.com/hpc/en/us/default.aspx

http://www.redhat.com/rhel/compare/

AMD Opteron Magny-Cours Processor Wins InfoWorld Best CPU (Parallel Processing)

It was nice to see that the chip PADT has been using to build very inexpensive compute clusters just got recognition from an industry leading magazine:

Read the InfoWorld article here.

When we talk about these systems some folks look at us like we are a bit insane: “AMD can’t be a better solution than Intel.”  It is good to see that some experts are agreeing with us.  It makes us feel less lonely out there waving the Opteron flag.

ANSYS CFX: Going 2D in CFX

If you have had the chance to work with CFX, most likely you have been working only with three-dimensional models. You usually receive a CAD model or create your own geometry and then use one of the many meshing tools available in ANSYS to mesh your 3D geometry. However, what if you are tasked with solving a simple two-dimensional model? Without thinking twice about it you create a 2D planar or axisymmetric geometry and generate a mesh. Unfortunately, when you try to read this simple 2D mesh into CFX you get an error. Unlike FLUENT, CFX does not have separate 2D and 3D solvers and does not read in simple 2D meshes. This may at first appear to be a big problem, but the work-around is rather simple. This article will give you the information needed to solve “2D” problems in CFX.

Since CFX is naturally a 3D solver, using a planar or axisymmetric 2D mesh is not possible. Instead, you will need to create a thin 3D volume or a thin 3D wedge, respectively. You can call these thin geometries “quasi-2D.” For example, consider the 2D planar geometry shown in Fig. 1 that represents a channel with flow entering through the boundary on the left and exiting on the right boundary. The upper and lower boundaries are no-slip walls. To solve this type of problem in CFX, you need to give the planar geometry some thickness in the 3rd dimension.

clip_image002

Figure 1: Simple 2D planar geometry representing a channel.

The first question is how thick or thin you should make the volume.  You need to consider the quality of the cells, so creating a thin volume usually works well.  For a 2D planar geometry, a thickness of approximately 1/100th the length of the largest dimension in the model generally provides a nice mesh with good-quality cells.  Depending on possible smaller features in the geometry, this rule-of-thumb value can be adjusted.  Thus, when you create your geometry, you can build a volume with the proper thickness or simply extrude an existing 2D planar model.
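That 1/100th rule of thumb is trivial to encode.  Here is a minimal Python sketch (the function name and the millimeter example are mine, for illustration only):

```python
# Rule of thumb from this article: extrude a 2D planar mesh by roughly
# 1/100th of the model's largest in-plane dimension.

def extrusion_thickness(length, height, fraction=0.01):
    """Suggested quasi-2D extrusion depth for a planar CFX model."""
    return fraction * max(length, height)

if __name__ == "__main__":
    # A 200 mm x 50 mm channel gets a 2 mm extrusion:
    print(extrusion_thickness(200.0, 50.0))  # 2.0
```

Adjust the fraction downward only with care; as noted above, an extremely thin volume hurts cell quality and makes the side faces hard to pick.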

Figure 2 shows the channel geometry from Fig. 1 extruded in the 3rd dimension using the rule-of-thumb value from above. Note that if you create an extremely thin volume it may be difficult to select and see the thin side areas when naming the boundaries. A very thin model is also not necessary and may hurt the quality of the cells. The CFX solver is very robust, so the thickness can vary quite a bit without any problems; however, it is best to stay consistent, and following the suggestions outlined in this article is recommended.

clip_image004

Figure 2: Quasi-2D thin volume representing a channel.

Since the model is a 2D representation, you can just use a single cell thickness in the 3rd dimension. You can mesh the 2D face the same way you would mesh a regular 2D planar model. Once meshed and loaded into CFX, you can easily apply your inlet boundary, outlet boundary and upper and lower wall boundaries. The question now is what to do with the faces in the 3rd dimension. In the example above and with any geometry that represents a 2D planar model you will need to use symmetry boundaries.

Creating a “quasi-2D” model for a simple 2D planar geometry is easy enough, but what about a 2D axisymmetric geometry?  In this case, we need to extrude a wedge.  Again, the question is how many degrees the wedge should span.

clip_image006

Figure 3: Quasi-2D thin wedge representing an axisymmetric pipe.

For a 2D axisymmetric geometry, a wedge having a thickness of about 5 degrees works just fine. An example of a 5 degree extrusion for an axisymmetric geometry representing a pipe is shown in Fig. 3. Once again, you only need a single cell in the 3rd dimension. There is no need for any special meshing near the axis. You can mesh the planar face just as you would a normal 2D geometry and take into consideration the need to resolve gradients and other important features in the flow. After loading the mesh into CFX, you apply your inlet boundary, outlet boundary and the outer wall boundary. There is no need to apply any boundary at the axis. There are now two options for the boundary condition on the faces of the 3rd dimension. If there is no flow in the 3rd direction, then you apply a symmetry boundary just as is done in the 2D planar case. If you are trying to model an axisymmetric problem with swirl, then you will need to apply a periodic boundary condition on the faces. Remember to use a periodic boundary only if you have swirling flow.
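The boundary-condition recipe above boils down to one simple decision, sketched here in Python (the function name is mine, purely for illustration):

```python
# Recipe from this article for the two out-of-plane faces of a quasi-2D
# CFX model: planar models and non-swirling axisymmetric models get
# symmetry boundaries; axisymmetric models with swirl need periodic ones.

def side_face_boundary(axisymmetric, has_swirl=False):
    """Boundary type for the two faces created by the extrusion."""
    if axisymmetric and has_swirl:
        return "periodic"
    return "symmetry"

if __name__ == "__main__":
    print(side_face_boundary(axisymmetric=False))                 # symmetry
    print(side_face_boundary(axisymmetric=True, has_swirl=True))  # periodic
```

Everything else (inlet, outlet, walls) is applied exactly as you would for any 3D CFX model.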

 

Summary

This article shows that solving a “2D” model in CFX is relatively straightforward.  Except for the steps of creating a thin volume or wedge and applying the extra boundary conditions (symmetry or periodic) as described above, setting up these models is the same as in any other 2D solver, and the results are essentially identical.  One helpful hint: the CFX solver requires that symmetry boundaries be planar to within a certain tolerance, and sometimes a symmetry boundary mesh fails to meet it.  In that case you will need to go back, remesh the model, and ensure that the symmetry planes are essentially flat.  If the solver still complains, you can change the offending symmetry boundary to a wall with a slip condition; the resulting solution will be the same as with a symmetry boundary.

Questions, comments and suggestions for future thermal/CFD FOCUS articles can be sent to J. Luis Rosales at the following email address: luis.rosales@padtinc.com.