Keyless SSH in two easy steps. Wait, What?!

Within the ANSYS community, users apply a wide range of numerical simulation techniques to solve ever larger and more complex problems.

One of the most powerful approaches to overcoming the limitations of these complex problems is to take multiple CPUs and link them together over a distributed network of computers. Unpacking this a bit further for you the reader, one critical piece of parallel processing is a quality high-performance message passing interface (MPI). The latest IBM® Platform™ MPI, version 9.1.3 Fix Pack 1, is provided with your ANSYS 17.0 release.

When solving a model using distributed parallel algorithms, the machines in the cluster need to authenticate with one another, and the protocol that makes this login process seamless is known as Secure Shell, or SSH. SSH is a cryptographic (encrypted) network protocol that allows remote login and other network services to operate securely over an unsecured network.

Today let us all take the mystery and hocus pocus out of setting up your keyless (passwordless) SSH keys. As you will see, this is a very easy process to complete.

I begin my voyage into keyless freedom by first logging into one of our CUBE Linux servers.


STEP 1 – Create the key

  • Type ssh-keygen -t rsa
  • Press the enter key three times
    •  (In some instances, as shown in the screen capture below, you may see a prompt asking you to overwrite. In that case, type y.)


STEP 2 – Apply the key

  • Type ssh-copy-id -i ~/.ssh/id_rsa.pub mastel@cs0.padtinc.com
  • Type ssh-copy-id -i ~/.ssh/id_rsa.pub mastel@cs1.padtinc.com
  • Enter your current password that you would use to log in to cs1.padtinc.com


All Done!

Now give it a try and verify that it works.
Log in to the first server you set up, in my case cs0.
At the terminal command prompt, type ssh cs1


BEST PRACTICE TIP:

I find it is best practice to also repeat the ssh-copy-id command using the short host name at the same time on each of the servers.

That command would look like this:
1. After you have completed Step 2 above, perform the same command locally using the short host name.
a. ssh-copy-id -i ~/.ssh/id_rsa.pub mastel@cs0
b. Enter your current password and press Enter.
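
If you maintain more than a couple of compute nodes, the same two steps can be scripted. Below is a minimal Python sketch, not part of the original write-up; the user name and host list are placeholders for your own, and it assumes ssh-keygen and ssh-copy-id are available on the PATH.

```python
# Minimal sketch: automate Step 1 and Step 2 for a list of servers.
# The user name and host names below are placeholders -- replace with your own.
import os
import subprocess

user = "mastel"                                   # placeholder user name
hosts = ["cs0.padtinc.com", "cs1.padtinc.com",    # long names (Step 2)
         "cs0", "cs1"]                            # short names (best practice tip)
key = os.path.expanduser("~/.ssh/id_rsa")

# Step 1 - create the key if it does not already exist (empty passphrase,
# equivalent to pressing Enter three times)
if not os.path.exists(key + ".pub"):
    subprocess.run(["ssh-keygen", "-t", "rsa", "-N", "", "-f", key], check=True)

# Step 2 - copy the public key to each server; you will be prompted for your
# current password once per host
for host in hosts:
    subprocess.run(["ssh-copy-id", "-i", key + ".pub", f"{user}@{host}"],
                   check=True)
```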

Done, Done!

Additional Links:

10x with ANSYS 17.0 – Get an Order of Magnitude Impact

The ANSYS 17.0 release improves the impact of driving design with simulation by a factor of 10.  This 10x jump spans the physics and delivers real step-change enhancements both in how simulation is done and in the improvements that can be realized in products.

Unless you were disconnected from the simulation world last week, you should be aware of the fact that ANSYS, Inc. released their latest version of the entire product suite.  We wanted to let the initial announcement get out there and spread the word, then come back and talk a little about the details.  This blog post is the start of what should be a long line of discussions on how you can realize 10x impact from your investment in ANSYS tools.

As you may have noticed, the theme for this release is 10x. A 10x improvement in speed, efficiency, capability, and impact.  Watch this short video to get an idea of what we are talking about.

Where is the Meat?

We are already seeing this type of improvement here at PADT and with our customers. There is some great stuff in this release that delivers some real game-changing efficiency and/or capability.  That is fine and dandy, but how is this 10x achieved?  There are a lot of little changes and enhancements, but they can mostly be summed up with the following four things:

Tighter Integration of Multiphysics

Having the best-in-breed simulation tools is worth a lot, and the ANSYS suite leads in almost every physics.  But real power comes when these products can easily work together.  At ANSYS 17.0 almost all of the various tools that ANSYS, Inc. has written or acquired can be used together. Multiphysics simulation allows you to remove assumptions and approximations and get a more accurate simulation of your products.

And Multiphysics is about more than doing bi-directional simulation, which ANSYS is very good at. It is about being able to transfer loads, properties, and even geometry between different software tools. It is about being able to look at your full design space across multiple physics and getting more accurate answers in less time.  You can take heat loads generated in ANSYS HFSS and use them in ANSYS Mechanical or ANSYS FLUENT.  You can take the temperatures from ANSYS FLUENT and use them with ANSYS SiWave.  And you can run a full bidirectional fluid-solid model with all the bells and whistles and without the hassles of hooking together other packages.

To top it all off, the system level modeler ANSYS Simplorer has been improved and integrated further, allowing for true system level Multiphysics virtual prototyping of your entire system.  One of the changes we are most excited about is full support for Modelica models – allowing you to stay in Simplorer to model your entire system.

Improved Usability

Speed is always good, and we have come to expect 10%-30% increases in productivity at almost every release. A new feature here, a new module there. This time the developers went a lot further and across the product lines.

The closer integration of ANSYS SpaceClaim really delivers on a 10x or better speedup for geometry creation and cleanup when compared to other methods. We love SpaceClaim here at PADT and have been using it for some time.  Version 17 is not only integrated more tightly, it also introduces scripting that allows users to bring processes they have automated in older and clunkier interfaces into this new, more powerful tool.

One of our other favorites is the new interface in ANSYS Fluent, just making things faster and easier. More capability in the ANSYS Customization Toolkit (ACT) also allows users to get 10x or better improvements in productivity.  And for those who work with electronics, a host of ECAD geometry import tools are making that whole process an order of magnitude faster.

Industry Specific Workflows

Many of the past releases have been focused on establishing underlying technology, integration, and adding features. This has all paid off and at 17.0 we are starting to see some industry specific workflows that get models done faster and produce more accurate results.

The workflow for semiconductor packaging, the Chip Package System or CPS, is the best example of this. Here is a video showing how power integrity, signal integrity, and thermal modeling come together with integration across the tools:

A similar effort was released in Turbomachinery with improvements to advanced blade row simulation, meshing, and HPC performance.

Overall Capability Enhancements

A large portion of the improvements at 17.0 are made up of relatively small enhancements that add up to some big benefits.  The largest development team in simulation has not been sitting around for a year; they have been hard at work adding and improving functionality.  We will cover a lot of these in coming posts, but some of our favorites are:

  1. Improvements to distributed solving in ANSYS Mechanical that show good scaling on dozens of cores
  2. Enhancements to ACT allowing for greater automation in ANSYS Mechanical
  3. ACT is now available to automate your CFD processes
  4. Significant improvements in meshing robustness, accuracy and speed (if you are using that other CFD package because of meshing, it's time to look at ANSYS Fluent again)
  5. Fracture mechanics
  6. ECAD import in electromagnetic, fluids, and mechanical products.
  7. A new solver in ANSYS Maxwell that solves more than 10x faster for transient runs
  8. ANSYS AIM just keeps gaining functionality and getting easier to use
  9. A pile of SpaceClaim new and improved features that greatly speed up geometry repair and modification
  10. Improved rigid body dynamics in ANSYS Mechanical

More to Come

And a ton more. It may take us all of the time we have before ANSYS 18.0 comes out to go over all of the great new stuff here in The Focus, but we will give it a try in the coming weeks and months. ANSYS, Inc. will be hosting some great webinars as well.

If you see something that interests you or something you would like to see that was not there, shoot us an email at support@padtinc.com or call 480.813.4884.

Key Process Phenomena in the Laser Fusion of Metals

Metal 3D printing involves a combination of complex interacting phenomena at a range of length and time scales. In this blog post, I discuss three of these that lie at the core of the laser fusion of metals: phase changes, residual stresses and solidification structure (see Figure 1). I describe each phenomenon briefly and then why understanding it matters. In future posts I will dive deeper into each one of these areas and review what work is being done to advance our understanding of them.

Fig. 1 Schematic showing the process of laser fusion of metals and the three key phenomena of phase changes, residual stresses and solidification structure

Phase Changes

Fig. 2 Phases and the mechanisms by which they transition from one to the other

Phase changes describe the transition from one phase to another, as shown in Figure 2. All phases are present in the process of laser fusion of metals. Metal in powder form (solid) is heated by means of a laser beam with spot sizes on the order of tens of microns. The powder then melts to form a melt pool (liquid) and then solidifies to form a portion of a layer of the final part (solid). During this process, there is visible gas and smoke, some of which ionizes to plasma.

The transition from powder to melt pool to solid part, as shown in Figure 3, is the essence of this process, and understanding it is of vital importance. For example, if the laser fluence is too high, defects such as balling or discontinuous welds are possible, while for low laser fluence a full melt may not be obtained, leading to voids. Selecting the right laser, material and build parameters is thus essential to optimize the size and depth of the liquid melt pool, which in turn governs the density and structure of the final part. Finally, and this is more true of high power lasers, excessive gas and plasma generation can interfere with the incident laser fluence and reduce its effectiveness.

Fig. 3 Primary phase changes from powder to melt pool to solid part

Residual Stresses

Residual stresses are stresses that exist in a structure after it reaches equilibrium with its environment. In the laser metal fusion process, residual stresses arise due to two related mechanisms [Mercelis & Kruth, 2006]:

  • Thermal Gradient: A steep temperature gradient develops during laser heating, with higher temperatures on the surface driving expansion against the cooler underlying layers and thereby introducing thermal stresses that could lead to plastic deformation.
  • Volume Shrinkage: Shrinkage in volume in the laser metal fusion process occurs due to several reasons: shrinkage from a powder to a liquid, shrinkage as the liquid itself cools, shrinkage during phase transition from liquid to solid and final shrinkage as the solid itself cools. These shrinkage events occur to a greater extent at the top layer, and reduce as one goes to lower layers.
Fig. 4 Residual stresses resulting from thermal gradients and volume changes

After cooling, these two mechanisms together have the effect of creating compressive stresses on the top layers of the part, and tensile stresses on the bottom layers as shown in Figure 4. Since parts are held down by supports, these stresses could have the effect of peeling off supports from the build plate, or breaking off the supports from the part itself as shown in Figure 4. Thus, managing residual stresses is essential to ensuring a built part stays secured on the base plate and also for minimizing the amount of supports needed. A range of strategies are employed to mitigate residual stresses including laser rastering strategies, heated build plates and post-process thermal stress-relieving.
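
As a back-of-the-envelope scaling for the thermal gradient mechanism (my addition, not from the post or its references), if the heated surface layer is assumed to be fully constrained by the cooler material beneath it, the biaxial thermal stress it develops is approximately:

```latex
% Fully constrained surface layer heated \Delta T above the underlying material:
% E = elastic modulus, \alpha = coefficient of thermal expansion, \nu = Poisson's ratio
\sigma_{\mathrm{th}} \;\approx\; \frac{E\,\alpha\,\Delta T}{1-\nu}
```

For typical metal values of E and α and the hundreds of degrees of temperature difference near the melt pool, this estimate readily exceeds yield, which is consistent with the plastic deformation mentioned above.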

Solidification Structure

Solidification structure refers to the material structure of the resulting part that arises due to the solidification of the metal from a molten state, as is accomplished in the laser fusion of metals. It is well known that the structure of a metal alloy strongly influences its properties and further, that solidification process history has a strong influence on this structure, as does any post processing such as a thermal exposure. The wide range of materials and processing equipment in the laser metal fusion process makes it challenging to develop a cohesive theory on the nature of structure for these metals, but one approach is to study this on four length scales as shown in Figure 5. As an example, I have summarized the current understanding of each of these structures specifically for Ti-6Al-4V, which is one of the more popular alloys used in metal additive manufacturing. Of greatest interest are the macro-, meso- and microstructure, all of which influence mechanical properties of the final part. Understanding the nature of this structure, and correlating it to measured properties is a key step in certifying these materials and structures for end-use application.

Fig. 5 Four levels of solidification structure and the typical observations for Ti-6Al-4V

Discussion

Phase changes, residual stresses and solidification structure are three areas where an understanding of the fundamentals is crucial to solve problems and explore new opportunities that can accelerate the adoption of metal additive manufacturing. Over the past decade, most of this work has been, and continues to be, experimental in nature. However, in the last few years, progress has been made in deriving this understanding through simulation, but significant challenges remain, making this an exciting area of research in additive manufacturing to watch in the coming years.

References

  1. Mercelis, P., & Kruth, J. (2006). Residual stresses in selective laser sintering and selective laser melting. Rapid Prototyping Journal, 12(5), 254-265.
  2. Simonelli, M., Tse, Y. Y., & Tuck, C. (2012). Further understanding of Ti-6Al-4V selective laser melting using texture analysis. SFF Symposium.
  3. King, W. E., Anderson, A. T., Ferencz, R. M., Hodge, N. E., Kamath, C., Khairallah, S. A., & Rubenchik, A. M. (2015). Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges. Applied Physics Reviews, 2, 041304.

ANSYS, Inc. Launches New Magazine, Dimensions

There are so many aspects to numerical simulation worth talking about these days, and a lot of resources to get that information.  Applications, theory, how-to, and where it fits into the business of making stuff. Here on The Focus we tend to concentrate on practical how-to things, and the ANSYS Advantage magazine has focused on the application stories along with some how-to. What has been missing is a resource for how simulation impacts business, and how users of simulation are making other improvements in their business.

Enter "Dimensions."  This new e-publication is from the same team that does the ANSYS Blog and ANSYS Advantage, but it has a decided business slant – WAIT!!!  I know, you're an engineer, the word "business" scares you.  Don't worry, this is value-added info, not a bunch of fluff.

Take a look at the first issue here.  I'll be honest, I kind of opened it up expecting to page through going "whatever," "right, no one does that," and "who cares."  But I found myself skimming all of the articles with interest, and reading a couple completely.  There is some good stuff in here.  Like an interview with Airbus engineers about the challenges they face in designing their products. Or how Whirlpool uses social networking to facilitate communication between their users around the world. There is some simulation stuff in there, like how Siemens Power leverages simulation to make better power generation products.  And a lot more.

Take a look, it won’t hurt, I promise.  If you want something more technical, forward the link to your boss at least.


Constitutive Modeling of 3D Printed FDM Parts: Part 2 (Approaches)

In part 1 of this two-part post, I reviewed the challenges in the constitutive modeling of 3D printed parts made with the Fused Deposition Modeling (FDM) process. In this second part, I discuss some of the approaches that may be used to enable analyses of FDM parts even in the presence of these challenges. I present them below in increasing order of the detail captured by the model.

  • Conservative Value: The simplest method is to represent the material with an isotropic material model using the most conservative value of the three directions specified in the material datasheet, such as the one from Stratasys shown below for ULTEM-9085, with the lower of the two moduli selected. The conservative value can be selected based on the desired risk assessment (e.g. the lower modulus if maximum deflection is the key concern). This simplification brings with it a few problems:
    • The material property reported is only good for the specific build parameters, stacking and layer thickness used in the creation of the samples used to collect the data
    • This gives no insight into build orientation or processing conditions that can be improved, and as such has limited value to an analyst seeking to use simulation to improve part design and performance
    • Finally, in terms of failure prediction, the conservative value approach disregards inter-layer effects and defects described in the previous blog post and is not recommended to be used for this reason
ULTEM-9085 datasheet from Stratasys – selecting the conservative value is the easiest way to enable preliminary analysis
  • Orthotropic Properties: A significant improvement over an isotropic assumption is to develop a constitutive model with orthotropic properties, which has properties defined in all three directions. Solid mechanicians will recognize the equation below as the compliance matrix representation of Hooke's law for an orthotropic material, with the strain vector on the left equal to the compliance matrix multiplied by the stress vector on the right. The large compliance matrix in the middle is composed of three elastic moduli (E), Poisson's ratios (ν) and shear moduli (G) that need to be determined experimentally.
Hooke's Law for Orthotropic Materials (Compliance Form)
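
The equation in the figure is the standard compliance form and can be written out as follows (my reconstruction in Voigt notation, with 1, 2, 3 denoting the three material directions):

```latex
\begin{bmatrix} \varepsilon_{1}\\ \varepsilon_{2}\\ \varepsilon_{3}\\ \gamma_{23}\\ \gamma_{31}\\ \gamma_{12} \end{bmatrix}
=
\begin{bmatrix}
 1/E_{1} & -\nu_{21}/E_{2} & -\nu_{31}/E_{3} & 0 & 0 & 0\\
-\nu_{12}/E_{1} &  1/E_{2} & -\nu_{32}/E_{3} & 0 & 0 & 0\\
-\nu_{13}/E_{1} & -\nu_{23}/E_{2} &  1/E_{3} & 0 & 0 & 0\\
 0 & 0 & 0 & 1/G_{23} & 0 & 0\\
 0 & 0 & 0 & 0 & 1/G_{31} & 0\\
 0 & 0 & 0 & 0 & 0 & 1/G_{12}
\end{bmatrix}
\begin{bmatrix} \sigma_{1}\\ \sigma_{2}\\ \sigma_{3}\\ \tau_{23}\\ \tau_{31}\\ \tau_{12} \end{bmatrix}
```

Symmetry of the compliance matrix (ν_ij/E_i = ν_ji/E_j) leaves nine independent constants: the three E's, three G's and three Poisson's ratios the text refers to, all of which must be measured for a given set of build parameters.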

Good agreement between numerical and experimental results can be achieved using orthotropic properties when the structures being modeled are simple rectangular structures under uniaxial loading. In addition to requiring extensive testing to collect this data set (as shown in this 2007 Master's thesis), this approach does have a few limitations. Like the isotropic assumption, it is only valid for the specific set of build parameters that were used to manufacture the test samples from which the data was initially obtained. Additionally, since the model has no explicit sense of layers and inter-layer effects, it is unlikely to perform well at stresses leading up to failure, especially for complex loading conditions.  This was shown in a 2010 paper that demonstrated these limitations in the analysis of a bracket that was itself built in three different orientations. The authors concluded, however, that there was good agreement at low loads and deflections for all build directions, and that the margin of error as load increased varied across the three build orientations.

An FDM bracket modeled with Orthotropic properties compared to experimentally observed results
  • Laminar Composite Theory: The FDM process results in structures that are very similar to laminar composites, with a stack of plies consisting of individual fibers/filaments laid down next to each other. The only difference is the absence of a matrix binder – in the FDM process, the filaments fuse with neighboring filaments to form a meso-structure. As shown in this 2014 project report, a laminar approach allows one to model different ply raster angles that are not possible with the orthotropic approach. This is exciting because it could expand insight into optimizing raster angles for optimum performance of a part, and in theory reduce the experimental datasets needed to develop models. At this time however, there is very limited data validating predicted values against experiments. ANSYS and other software that have been designed for composite modeling (see image below from ANSYS Composite PrepPost) can be used as starting points to explore this space.
Schematic of a laminate build-up as analyzed in ANSYS Composite PrepPost
  • Hybrid Tool-path Composite Representation: One of the limitations of the above approach is that it does not model any of the details within the layer. As we saw in part 1 of this post, each layer is composed of tool-paths that leave behind voids and curvature errors that could be significant in simulation, particularly in failure modeling. Perhaps the most promising approach to modeling FDM parts is to explicitly link tool-path information in the build software to the analysis software. Coupling this with existing composite simulation is another potential idea that would help reduce computational expense. This is an idea I have captured below in the schematic that shows one possible way this could be done, using ANSYS Composite PrepPost as an example platform.
Potential approach to blending toolpath information with composite analysis software

Discussion: At the present moment, the orthotropic approach is perhaps the most appropriate method for modeling FDM parts, since it allows some level of build orientation optimization, as well as meaningful design comparisons and comparison to the bulk properties one may expect from alternative technologies such as injection molding. However, as the application of FDM in end-use parts increases, the demands on simulation are also likely to increase, one of which will involve representing these materials more accurately than as continuum solids.

Constitutive Modeling of 3D Printed FDM Parts: Part 1 (Challenges)

As I showed in a prior blog post, Fused Deposition Modeling (FDM) is increasingly being used to make functional plastic parts in the aerospace industry. All functional parts have an expected performance that they must sustain during their lifetime. Ensuring this performance is attained is crucial for aerospace components, but important in all applications. Finite Element Analysis (FEA) is an important predictor of part performance in a wide range of industries, but this is not straightforward for the simulation of FDM parts due to difficulties in accurately representing the material behavior in a constitutive model. In part 1 of this article, I list some of the challenges in the development of constitutive models for FDM parts. In part 2, I will discuss possible approaches to addressing these challenges while developing constitutive models that offer some value to the analyst.

It helps to first take a look at the fundamental multi-scale structure of an FDM part. A 2002 paper by Li et al. details the multi-scale structure of an FDM part as it is built up from individually deposited filaments all the way to a three-dimensional part, as shown in the image below.

Multiscale structure of an FDM part

This multi-scale structure, and the deposition process inherent to FDM, make for four challenges that need to be accounted for in any constitutive modeling effort.

  • Anisotropy: The first challenge is clear from the above image – FDM parts have different structure depending on which direction you look at the part from. Their layered structure is more akin to composites than traditional plastics from injection molding. For ULTEM-9085, which is one of the high temperature polymers available from Stratasys, the datasheets clearly show a difference in properties depending on the orientation the part was built in, as seen in the table below with some select mechanical properties.
Stratasys ULTEM 9085 datasheet material properties showing anisotropy
  • Toolpath Definition: In addition to the variation in material properties that arises from the layered approach in the FDM process, there is significant variation possible within a layer in terms of how toolpaths are defined: this is essentially the layout of how the filament is deposited. Specifically, there are at least four parameters in a layer, as shown in the image below (filament width, raster-to-raster air gap, perimeter-to-raster air gap and the raster angle). I compiled data from two sources (Stratasys' data sheet and a 2011 paper by Bagsik et al.) that show how, for ULTEM 9085, the ultimate tensile strength varies as a function of not just build orientation, but also of the parameter settings – the yellow bars show the best condition the authors were able to achieve against the orange and gray bars that represent the default settings in the tool.  The blue bar represents the value reported for injection molded ULTEM 9085.
Ultimate Tensile Strength of FDM ULTEM 9085 for three different build orientations, compared to injection molded value (84 MPa) for two different data sources, and two different process parameter settings from the same source. On the right are shown the different orientations and process parameters varied.
  • Layer Thickness: Most FDM tools offer a range of layer thicknesses, typical values ranging from 0.005″ to 0.013″. It is well known that thicker layers have greater strength than thinner ones. Thinner layers are generally used when finer feature detail or smoother surfaces are prioritized over out-of-plane strength of the part. In fact, Stratasys’s values above are specified for the default 0.010″ thickness layer only.
  • Defects: As with all manufacturing processes, improper material or machine performance, setup and other conditions may lead to process defects, but those are not ones that constitutive models typically account for. Additionally, and somewhat unique to 3D printing technologies, interactions of the build sheet and support structures can also influence properties, though there is little understanding of how significant these are. There are additional defects that arise from purely geometric limitations of the FDM process and may influence properties of parts, particularly relating to crack initiation and propagation. These were classified by Huang in a 2014 Ph.D. thesis as surface and internal defects.
    • Surface defects include the staircase error shown below, but can also come from curve-approximation errors in the originating STL file.
    • Internal defects include voids just inside the perimeter (at the contour-raster intersection) as well as within rasters. Voids around the perimeter occur either due to normal raster curvature or are attributable to raster discontinuities.
FDM Defects: Staircase error (top), Internal defects (bottom)

Thus, any constitutive model for FDM that is to accurately predict a part’s response needs to account for its anisotropy, be informed by the specifics of the process parameters that were involved in creating the part and ensure that geometric non-idealities are comprehended or shown to be insignificant. In my next blog post, I will describe a few ways these challenges can be addressed, along with the pros and cons of each approach.

Click here to see part 2 of this post

Activating Hyperdrive in ANSYS Simulations

With PADT and the rest of the world getting ready to pile into dark rooms to watch a saga that we've been waiting for 10 years to see, I figured I'd take this opportunity to address a common, yet simple, question that we get:

“How do I turn on HPC to use multiple cores when running an analysis?”

For those that don't know, ANSYS spends a significant amount of resources on making its various solvers utilize multiple CPU cores more efficiently than before.  By default, depending on the solver, you are able to use 1 or 2 cores without needing HPC licenses.

With the utilization of HPC licenses, users can unlock hyperdrive in ANSYS.  If you are equipped with HPC licenses it’s just a matter of where to look for each of the ANSYS products to activate it.

ANSYS Mechanical

Whether you are performing a structural, thermal or explicit simulation, the process to activate multiple cores is identical.

  1. Go to Tools > Solve Process Settings
  2. The Solve Process Settings Window will pop up
  3. Click on Advanced to open up the Advanced Settings window
  4. You will see an option for Max number of utilized cores
  5. Simply change the value to your desired core count
  6. You will see below an option to allow for GPU acceleration (if your computer is equipped with the appropriate hardware)
  7. Select the GPU type from the dropdown and choose how many GPUs you want to utilize
  8. Click Ok and close
Go to the proper settings dialog
Choose Advanced…
Specify the number of cores to use

Distributed Solve in ANSYS Mechanical

One other thing you’ll notice in the Advanced Settings Window is the option to turn “Distributed” On or Off using the checkbox.

In many cases a distributed solution can be significantly faster than the alternative (shared memory parallel).  It requires that MPI be configured properly (PADT can help guide you through those steps).  Please see this article by Eric Miller that references GPU usage and distributed solve in ANSYS Mechanical.

Turn on Distributed Solve if MPI is Configured
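
If you run Mechanical APDL in batch rather than from Workbench, the same distributed-memory and core-count choices are made on the command line. The sketch below is only an illustration, not an official PADT or ANSYS script; the executable name (ansys170 is typical for a Linux 17.0 install), input deck name, and core count are assumptions you would replace with your own.

```python
# Minimal sketch: launch a distributed-memory MAPDL batch solve on 4 cores.
# Executable name, input deck, and core count are placeholders for this example.
import subprocess

cmd = [
    "ansys170",          # assumed MAPDL 17.0 executable name on this system
    "-dis",              # distributed-memory parallel (requires configured MPI)
    "-np", "4",          # number of cores (HPC licenses needed beyond the included cores)
    "-b",                # batch mode
    "-i", "model.dat",   # hypothetical input deck
    "-o", "model.out",   # output listing
]
subprocess.run(cmd, check=True)
```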

ANSYS Fluent

Whether launching Fluent through Workbench or standalone, you will first see the Fluent Launcher window.  It has several options regarding the project.

  1. Under the Processing Options you will see 2 options: Serial and Parallel
  2. Simply select Parallel and you will see 2 new dropdowns
  3. The first dropdown lets you select the number of processes (equal to the number of cores) to use, not only during Fluent's calculations but also during pre-processing
Default Settings in Fluent Launch Window
Options When Parallel is Picked

ANSYS CFX

For CFX simulations through Workbench, the option to activate HPC exists in the CFX Solver Manager.

  1. Open the CFX Solver Manager
  2. You will see a dropdown for Run Mode
  3. Rather than the default “Serial” option choose from one of the available “Parallel” options.
  4. For example, if running on the same machine select Platform MPI Local Parallel
  5. Once selected in the section below you will see the name of the computer and a column called Partitions
  6. Simply type the desired number of cores under the Partitions column and then either click “Save Settings” or “Start Run”
Change the Run Mode
Specify number of cores for each machine

ANSYS Electronics Desktop/HFSS/Maxwell

Regardless of which electromagnetic solver you are using, HFSS or Maxwell, you can change the number of cores by going to the HPC and Analysis Options.

  1. Go to Tools > Options > HPC and Analysis Options.
  2. In the window that pops up you will see a summary of the HPC configuration
  3. Click on Edit and you will see a column for Tasks and a column for Cores.
  4. Tasks relate to job distribution utilizing Optimetrics and DSO licenses
  5. To simply increase the number of cores you want to run the simulation on, change the cores column to your desired value
  6. Click OK on all windows
Select the proper settings dialog
Select Edit to change the configuration
Specify Tasks and Cores

There you have it.  That’s how easy it is to turn on Hyperdrive in the flagship ANSYS products to advance your simulations and get to your endpoint faster than before.

If you have any questions or would like to discuss the possibility of upgrading your ship with Hyperdrive (HPC capabilities) please feel free to call us at 1-800-293-PADT or email us at support@padtinc.com.

New Enhancements to Flownex 2015: Even Better Fluid-Thermal Simulation

The developers of Flownex have been hard at work again and have put out a fantastic update to Flownex 2015.  These additions go far beyond what most simulation programs include in an update, so we thought it was worth a bit of a blog article to share it with everyone.  You can also download the full release notes here: FlownexSE 2015 Update 1 – Enhancements and Fixes

What is Flownex?

Some of you may not be familiar with Flownex. It is a simulation tool that models Fluid-Thermal networks.  It is a 1-D tool that is very easy to use, powerful, and comprehensive. The technology advancements delivered by Flownex offer a fast, reliable and accurate total system and subsystem approach to simulation that complements component level simulation in tools like ANSYS Fluent, ANSYS CFX, and ANSYS Mechanical.  We use it to model everything from turbine engine combustors to water treatment plants. Learn more here

Major Enhancements

A lot went in to this update, much hidden behind the scenes in the forms of code improvements and fixes.  There are also a slew of major new or enhanced features worth mentioning.

Shared Company Database

One of the great things about Flownex is that you can create modeling objects that you drag and drop into your system model. Now you can share those components, fluids, charts, compounds, and default settings across your company, department, or group.    There is no limit on the number of databases that are shared and access can be controlled. This will allow users to reuse information across your company.

Shared Database

Static Pressure Boundary Conditions

In the past Flownex always used a total pressure boundary condition. Based on user requests, this update includes a new boundary condition object that allows the user to specify the static pressure as a boundary condition. This is useful because many tests of real hardware only provide static pressure. It is also a common boundary condition in typical rotational flow fields in turbo machinery secondary flow.

Subdivided Cavities

Another turbo machinery request was the ability to break cavities up into several radial zones, giving a more accurate pressure distribution in secondary flow applications for Rotor-Rotor and Rotor-Stator cavities.  These subdivisions can be automatically created in the radial direction by Flownex.

Subdivided Cavity Input Dialog

Excel Input Sheets and Parameter Tables

The connection between Microsoft Excel and Flownex has always been strong and useful, and it just got even better.  So many people were connecting cells to their Flownex model parameters that the developers decided to directly connect the two programs, so the user no longer has to establish data connection links.  Now any property in Flownex can be hooked to a cell in Excel.

The next thing users wanted was the ability to work with tables of parameters, so that was added as well.  The user can hook a table of values in Excel to Flownex parameters and then have Flownex solve for the whole table, even returning resulting parameters.  This makes parametric studies driven from Excel simple and powerful.

Excel Parameter Tables

Component Enhancements

Users can now create component defaults and save them in a library. This saves time because in the past the user had to specify the parameters for a given component. Now they just drag and drop the existing defaults into their model.

Compound components have also been enhanced by the development team, so you no longer have to restart Flownex when you move, export, or import a compound component.

Find Based on Property Values

Users can now search through properties on all the objects in their model based on the value assigned to those properties.  As an example, you can type > 27.35 to get a list of all properties with an assigned value that is larger than 27.35.  This saves time because the user no longer has to look through properties or remember what properties were assigned.

Network Creation through Programming

Users can now write programs through the API or scripting tool to build their network models. This will allow companies to create vertical applications or automate the creation of complex networks based on user input. Of all the enhancements in this update, this improvement has the potential to deliver the greatest productivity improvements.

Automatic Elevations Importing in GIS

Users who are specifying flow networks over real terrain can now pull elevation data from the internet, rather than requiring that the data be defined when the network is specified. This enhancement will greatly speed up the modeling of large fluid-thermal systems, especially when part of the simulation process is moving components of the system over terrain.

Multiple Fluid Interface Component

A very common requirement in fluid-thermal systems is the ability to model different fluids or fluid types and how they interact. With this update users can now model two separate fluid networks and define a coupling between the two. The mass balance and resulting pressure at the interface is maintained.

Static Condition Calculation Improvements

Many simulations require an accurate calculation of static pressures. To do this, the upstream and downstream areas and equivalent pipe diameters are needed to obtain the proper values.  Many components now allow upstream and downstream areas to be defined, including restrictors and nozzles.

Dialog for upstream and downstream area specification

Scaled Drawing

The ability to create a scale 2-Dimensional drawing was added to Flownex. The user can easily add components onto an existing scaled drawing that is used as a background image in Flownex. These components will automatically detect and input lengths based on the drawing scale and distance between nodes. This results in much less time and effort spent setting up larger models where actual geometric sizes are important.

Scaled Drawing Tools

How do I Try this Out?

As you can see by the breadth and depth of enhancements, Flownex is a very capable tool that delivers on user needs.  Written and maintained by a consulting company that uses the tool every day, it has that rare mix of detailed theory and practical application that most simulation engineers crave.  If you model fluid-thermal systems, or feel you should be simulating your systems, contact Brian Duncan at 480.813.4884 or brain.duncan@padtinc.com. We can do a quick demo over the internet and learn more about what your simulation needs are.  Even if you are using a different tool, you should look at Flownex; it is a great tool.

PID Thermostat Boundary Condition ACT Extension for ANSYS Mechanical

PADT is pleased to announce that we have uploaded a new ACT Extension to the ANSYS ACT App Store.  This new extension implements a PID-based thermostat boundary condition that can be used within a transient thermal simulation.  This boundary condition is quite general purpose in nature.  For example, it can be set up to use any combination of (P)roportional, (I)ntegral or (D)erivative control.   It supports locally monitoring the instantaneous temperature of any piece of geometry within the model.  For a piece of geometry that is associated with more than one node, such as an edge or a face, it uses a novel averaging scheme implemented using constraint equations so that the control law references a single temperature value regardless of the reference geometry.


The set-point value for the controller can be specified in one of two ways.  First, it can be specified as a simple table that is a function of time.  In this scenario, the PID ACT Extension will attempt to inject or remove energy from some location on the model such that a potentially different location of the model tracks the tabular values.   Alternatively, the PID thermostat boundary condition can be set up to “follow” the temperature value of a portion of the model.  This location again can be a vertex, edge or face and the ACT extension uses the same averaging scheme mentioned above for situations in which more than one node is associated with the reference geometry.  Finally, an offset value can be specified so that the set point temperature tracks a given location in the model with some nonzero offset.
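
To make the control law itself concrete, here is a small, generic PID sketch in Python. It is purely illustrative and is not the extension's source code; the gains, set point, time step, and toy plant model are all made-up values for the example.

```python
# Generic PID control law (illustrative only, not the ACT extension's code):
# u = Kp*e + Ki*integral(e dt) + Kd*de/dt, where e = set point - measured value.
def pid_step(set_point, measured, state, kp=50.0, ki=5.0, kd=1.0, dt=0.1):
    error = set_point - measured
    state["integral"] += error * dt
    derivative = (error - state["previous"]) / dt
    state["previous"] = error
    # The returned value plays the role of the heat flow injected or removed
    return kp * error + ki * state["integral"] + kd * derivative

# Toy usage: drive a lumped thermal mass toward a 350 K set point
state = {"integral": 0.0, "previous": 0.0}
temperature = 300.0                       # K, made-up starting temperature
for _ in range(200):
    q = pid_step(350.0, temperature, state)
    # Crude lumped "plant": heating from q, losses back toward 300 K ambient
    temperature += 0.001 * q - 0.01 * (temperature - 300.0)
```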


For thermal models that require some notion of control, the PID thermostat boundary condition can be used effectively.  Please do note, however, that the extension works best with the SI unit system (m-kg-s).

A Guide to Crawling, Walking, and Running with ANSYS Structural Analysis

At PADT, we apply a Crawl, Walk, Run philosophy to just about everything we do. Start with the basics, build knowledge and capability on that, and then continue to develop your skills throughout your career. Unfortunately, all too often I run across some poor new grad, two weeks out of school, contending with a problem that's more befitting someone with about a decade of experience under his or her belt.

Now, the point of this article isn’t to call anyone out. Rather, I sincerely hope that managers and supervisors see this and use it as a guideline in assigning tasks to their direct reports. Note that the recommendations are relative and general. Some people may be quite competent in the “run” categories after just a few months of usage and study while others may have been using the software for a decade and still have trouble figuring out how to even start it. It’s also possible that, for certain projects, the “crawl” categories may actually end up being more difficult to contend with than the “run” categories.

With those caveats in mind, here is our list of recommendations for Crawling, Walking, and Running with ANSYS. Note that these apply to structural analysis. I fully plan to hit up my colleagues for similar blog posts about heat transfer, CFD, and electrical simulation.

Crawl

  • Linear static
  • Basic modal
  • Eigenvalue (linear) buckling, but don’t forget to apply a knock-down factor

Walk

  • Nonlinearities
    • Large Deflection
    • Rate-independent plasticity
    • Nonlinear contact (frictionless and frictional)
  • Dynamics
    • Modal with linear perturbation
    • Spectrum analyses (running the analysis is easy; understanding what you’re doing and interpreting results correctly is hard)
      • Shock/Single point response
      • Random Vibration (PSD)
    • Harmonic analysis
  • Fatigue

Run

  • Nonlinearities
    • Advanced element options
    • Hyperelasticity
    • Rate-dependent phenomena
      • Creep
      • Viscoelasticity
      • Viscoplasticity
    • Other advanced material models such as shape memory alloy and gaskets
    • Element birth and death
  • Dynamics
    • Transient dynamics (implicit)
    • Explicit dynamics (e.g. LS-Dyna and Autodyn)
    • Rotordynamics
  • Fracture and crack growth

So what’s the best, quickest way to move from crawling to walking or walking to running? Invest in general or consultative (or even better, both) ANSYS training with PADT. We’ll help you get to where you need to be.

Be a Pinball Wizard with Contact Regions in ANSYS Mechanical

A pinball machine based on The Who's Tommy

I had a very cool music teacher back in 6th or 7th grade in the 1970's in upstate New York.  Today we'd probably say she was eclectic.  In that class we listened to and discussed fairly recent songs in addition to general music studies.  Two songs I remember in particular are 'Hurdy Gurdy Man' by Donovan and 'Pinball Wizard' by The Who.  If you're not familiar with Pinball Wizard, it's from The Who's rock opera Tommy, and is about a deaf, mute, blind young man who happens to be adept at the game of pinball.  Yes, he is a Pinball Wizard.  This song popped into my head recently when we had some customer questions here at PADT regarding the pinball region concept as it pertains to ANSYS contact regions.

I’m not sure if the developers at ANSYS, Inc. had this song in mind when they came up with the nomenclature for the 17X (latest and greatest) series of contact elements in ANSYS, but regardless, you too can be a pinball wizard when it comes to understanding contact elements in ANSYS Mechanical and MAPDL.

Fans of this blog may remember one of my prior posts on contact regions in ANSYS that also had a musical theme (bringing to mind Peter Gabriel's song "I Have the Touch").

In this current entry we will go more in depth on the pinball region, also known as the pinball radius.  The pinball region is involved with the distance from contact element to target element in a given contact region.  Outside the pinball region, ANSYS doesn’t bother to check to see if the elements on opposite sides of the contact region are touching or not.  The program assumes they are far away from each other and doesn’t worry about any additional calculations for the most part.

Here is an illustration.  The gray elements on the left represent the contact body and the red elements on the right represent the target body (assuming asymmetric contact).  Target elements outside the pinball radius will not be checked for contact.  The contact and target elements actually ‘coat’ the underlying solid elements so they are shown as dashed lines slightly offset from the solid elements for the sake of visibility.  Here the pinball radius is displayed as a dashed blue circle, centered on the contact elements, with a radius of 2X the depth of the underlying solid elements.


So, outside the pinball region, we know ANSYS doesn't check to see if the contact and target are actually in contact.  It just assumes they are far away and not in contact.  What happens if the contact and target are inside the pinball region?  The answer to that question depends on which contact type we have selected.

For frictionless contact (aka standard contact in MAPDL) and frictional contact, the program will then check to see if the contact and target are truly touching.  If they are touching, the program will check to see if they are sliding or possibly separating.  If they are touching and penetrating, the program will check to see if the penetration exceeds the allowable amount and will make adjustments, etc.  In other words, for frictionless and frictional contact, if the contact and target elements are close enough to be inside the pinball region, the program will make all sorts of checks and adjustments to make sure the contact behavior is adequately captured.

The other scenario is for bonded and no separation contact.  With these contact types, the program’s behavior when the contact and target elements are within the pinball region is different.  For these types, as long as the contact and target are close enough to be within the pinball region, the program considers the contact region to be closed.  So, for bonded and no separation, your contact and target elements do not need to be line on line touching in order for contact to be recognized.  The contact and target pairs just need to be inside the pinball region.  This can be good, in that it allows for some ‘slop’ in the geometry to be automatically ignored, but it also can have a downside if we have a curved surface touching a flat surface for example.  In that case, more of the curved surface may be considered in contact than would be the case if the pinball region was smaller.  This effect is shown in the image below.  Reducing the pinball radius to an appropriate smaller amount would be the fix for eliminating this ‘overconstraint’ if desired.


There is a default value for the pinball region/radius.  It can be changed if needed.  We’ll add more details in a moment.  First, why is it called the “pinball” region?  I like to think it’s because when it’s visualized in the Mechanical window, it looks like a blue pinball from an actual pinball arcade game, but I’ll admit that the ANSYS terminology may predate the Mechanical interface.  The image below shows what I mean.  The blue balls are the different pinball radii for different contact regions.

Note that you don’t see the pinball region displayed as shown in the above image unless you have manually changed the pinball size in Mechanical.  The pinball region can be changed in the Mechanical window in the details view for each contact region by changing Pinball Region from Program Controlled to Radius, like this:


In MAPDL, the pinball radius value can be changed by defining or editing the real constant labeled PINB.

By now you're probably wondering: what is the default value for the pinball radius?  The good news is that it is intelligently decided by the program for each contact region.  The default is always a scale factor on the depth of the underlying elements of each contact region.  In the first pinball region image shown near the beginning of this article, the example plot shows the pinball region/radius as two times the depth of the underlying elements.

The table below summarizes the default pinball radius values for most circumstances for 2D and 3D solid element models.  More detailed information is available in the ANSYS Help.

Default Pinball Radius Values (Flexible-Flexible Contact)

Contact Type                | Large Deflection Off            | Large Deflection On
Frictionless and Frictional | 1 * Underlying Element Depth    | 2 * Underlying Element Depth
Bonded and No Separation    | 0.25 * Underlying Element Depth | 0.5 * Underlying Element Depth

Rigid-Flexible Contact: typically the default values are doubled.
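
If you like to sanity-check a contact definition in a script, the defaults in the table are easy to capture. The helper below is just the table expressed as a Python function (it is not an ANSYS API call), and the example values are arbitrary.

```python
# The default pinball radius rules from the table above, expressed as a function.
# This is not an ANSYS API -- just a convenience sketch for sanity checks.
def default_pinball_radius(element_depth, contact_type,
                           large_deflection=False, rigid_flexible=False):
    factors = {
        "frictionless":  (1.0, 2.0),    # (large deflection off, large deflection on)
        "frictional":    (1.0, 2.0),
        "bonded":        (0.25, 0.5),
        "no separation": (0.25, 0.5),
    }
    factor = factors[contact_type.lower()][1 if large_deflection else 0]
    if rigid_flexible:
        factor *= 2.0                    # rigid-flexible defaults are typically doubled
    return factor * element_depth

# Example: bonded contact, large deflection on, 2 mm deep underlying elements
print(default_pinball_radius(2.0, "bonded", large_deflection=True))   # prints 1.0
```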

Summing it all up:  we have seen how the default values are calculated and also how to change them.  We have seen what they look like as blue balls in a plot of contact regions in Mechanical if the pinball radius has been explicitly defined.  We also discussed what the pinball radius does and how it’s different for frictionless/frictional contact and bonded/no separation contact.

You should be well on your way to becoming a pinball wizard at this point.

Does performing simulation in ANSYS make you think of certain songs, or are there songs you like to listen to while working away on your simulations, in addition to The Who's "Pinball Wizard" and Peter Gabriel's "I Have the Touch"?  If so, we'd love to hear about your song preferences in the comments below.

7 Reasons why ANSYS AIM Will Change the Way Simulation is Done

When ANSYS, Inc. released their ANSYS AIM product they didn't just introduce a better way to do simulation, they introduced a tool that will change the way we all do simulation.  A bold statement, but after PADT has used the tool here, and worked with customers who are using it, we feel confident that this is a software package that will drive that level of change.   It enables the type of change that will drive down schedule time and cost for product development, and allow companies to use simulation more effectively to drive their product development towards better performance and robustness.

It’s Time for a Productivity Increase

If you have been doing simulation as long as I have (29 years for me), you have heard it before. And sometimes it was true.  GUIs on solvers were the first big change I saw. Then came robust 3D tetrahedral meshing, which we coasted on for a while until fully associative and parametric CAD connections made another giant step forward in productivity and simulation accuracy. Then more recently, robust CFD meshing of dirty geometry. And of course HPC improvements on the solver side.

That was then.  Right now everyone is happily working away in their tool of choice, simulating their physics of choice.  ANSYS Mechanical for structural, ANSYS Fluent for fluids, and maybe ANSYS HFSS for electromagnetics. Insert your tool of choice, it doesn’t really matter. They are all best-in-breed advanced tools for doing a certain type of physical simulation.  Most users are actually pretty happy. But if you talk to their managers or methods engineers, you find less happiness. Why? They want more engineers to have access to these great tools and they also want people to be working together more with less specialization.

Putting it all Together in One Place

ANSYS AIM is, among many other things, an answer to this need.  Instead of one new way of doing something or a new breakthrough feature, it is more of a product that puts everything together to deliver a step change in productivity. It is built on top of these same world class best-in-breed solvers. But from the ground up it is an environment that enables productivity, processes, ease-of-use, collaboration, and automation. All in one tool, with one interface.

Changing the Way Simulation is Done

Before we list where we see things changing, let’s repeat that list of what AIM brings to the table, because those key deliverables in the software are what are driving the change:

  • Improved Productivity
  • Standardized Processes
  • True Ease-of-Use
  • Inherent Collaboration
  • Intuitive Automation
  • Single Interface

Each of these on their own would be good, but together they allow a fundamental shift in how a simulation tool can be used. And here are the seven ways we predict you will be doing things differently.

1) Standardized processes across an organization

The workflow in ANSYS AIM is process oriented from the beginning, which is a key step in standardizing processes.  This is amplified by tools that allow users, not just programmers, to create templates, capturing the preferred steps for a given type of simulation.  Others have tried this in the past, but the workflows were either too rigid or not able to capture complex simulations.  This experience was used to make sure the same thing does not happen in ANSYS AIM.

2) No more “good enough” simulation done by Design Engineers

Ease-of-use and training issues have kept robust simulation tools out of the hands of design engineers.  Programs for that group of users have usually been so watered down, or lack so much functionality, that they simply deliver a quick answer. The math is the same, but it is not as detailed or accurate.  ANSYS AIM solves this by giving the design engineer a tool they can pick up and use, but that also gives them access to the most capable solvers on the market.

3) Multiphysics by one user

Multiphysics simulation often involves the use of multiple simulation tools.  Say a CFD Solver and a Thermal Solver. The problem is that very few users have the time to learn two or more tools, and to learn how to hook them together. So some Multiphysics is done with several experts working together, some in tools that do multiple physics, but none well, or by a rare expert that has multi-tool expertise.  Because ANSYS AIM is a Multiphysics tool from the ground up, built on high-power physics solvers, the limitations go away and almost any engineer can now do Multiphysics simulation.

AIM-7-study

4) True collaboration

The issues discussed above, where multiphysics requires multiple users in most tools, also inhibit true collaboration. Using one user's model is difficult when another user works in a different tool, and collaboration gets harder still when processes differ as much as the tools do.  The workflow-driven approach in ANSYS AIM lends itself to collaboration, and the consistent look-and-feel makes it happen.

5) Enables use when you need it

This is a huge one.  Many engineers do not use simulation tools because they are occasional users.  They feel that the time required to re-familiarize themselves with their tools is longer than it takes to do the simulation. The combination of features unique to ANSYS AIM deals with this effectively, making accurate simulation something a user can pick up when they need it, use to drive their design, and then set down until the next task.

6) Stepping away from CAD embedded Simulation

The growth of CAD embedded simulation tools, programs that are built into a CAD product, has been driven by the need to tightly integrate with geometry and provide ease of use for users who only occasionally need to do simulation. Although the geometry integration was solved years ago, the ease of use and process control needed are only now becoming available in a dedicated simulation tool, with ANSYS AIM.

7) A Return to home-grown automation for simulation

AIM-7-script

If you have been doing simulation since the '80s like I have, you probably remember a day when every company had scripts and tools they used to automate their simulation process. They were extremely powerful and delivered huge productivity gains. But as tools got more powerful and user interfaces became more mature, the ability to create your own automation tools faded.  You needed to be a programmer. ANSYS AIM brings this back with recording and scripting for every feature in the tool, using a common and easy-to-use language, Python.

How Does this Impact Me and/or My Company?

It is kind of fun to play prognosticator and try to figure out how a revolutionary advance will impact our industry. But in the end it really does not matter unless the changes improve the product development process. We feel pretty strongly that they do.  Because of the changes in how simulation is done, brought about by ANSYS AIM, we expect that more companies will use simulation to drive their product development, more users within a company will have access to those tools, and the impact of simulation will be greater.

AIM-f1_car_pressure_ui

To fully grasp the impact you need to step back and ponder why you do simulation.  The fast cars and crazy parties are just gravy. The core reason is to quickly and effectively test your designs.  By using virtual testing, you can explore how your product behaves early in the design process and answer those questions that always come up.  The sooner, faster, and more accurately you answer those questions, the lower the cost of your product development and the better your final product.

Along comes a product like ANSYS AIM.  It is designed by the largest simulation software company in the world to give the users of today and tomorrow access to the power they need. It enables that “sooner, faster, and more accurately” by allowing us to change, for the better, the way we do virtual testing.

The best way to see this for yourself is to explore ANSYS AIM.  Sign up for our AIM Resource Kit here or contact us and we will be more than happy to show it to you.

AIM_City_CFD

Video Tips: Fluid Volume Extraction

This video shows a really quick and easy way to extract a fluid domain from a structural model without having to do any Boolean subtract operations.

Free ANSYS AIM Resource Kit — Expert Advice, Insights and Best Practices for Multiphysics Simulation

ANSYS-AIM-Icon1

We have been talking a lot about ANSYS AIM lately.  Mostly because we really like ANSYS AIM and we think a large number of engineers out there need to know more about it and understand its advantages.  And the way we do that is through blog posts, emails, seminars, and training sessions.  A new tool that we have started using is the "Resource and Productivity Kit," a collection of information that users can download.

Earlier in the year we introduced several kits, including ANSYS Structural, ANSYS Fluids, and ANSYS ElectroMechanical.  Now we are pleased to offer up a collection of useful information on ANSYS AIM.  This kit includes:

  • “Getting to know ANSYS AIM,” a video by PADT application engineer Manoj Mahendran
  • “What I like about ANSYS AIM,” a video featuring insights on the tool
  • Six ANSYS AIM demonstration videos, including simulations and a custom template demonstration
  • Five slide decks that provide an overview of ANSYS AIM and describe its new features
  • An exclusive whitepaper on effectively training product development engineers in simulation.

You can download the kit here.

If you need more info, view the ANSYS AIM Overview video or read about it on our ANSYS AIM page.

Watch this blog for more useful content on AIM in the future.



To Use Large Deflection or Not, That Is the Question

Hamlet-Large-Deflection

It seems like I've been explaining large deflection effects a lot recently. Between co-teaching an engineering class at nearby Arizona State University and also having a couple of customer issues regarding the concept, large deflection in structural analyses has been on my mind.

Before I explain any further, the thing you should note if you are an ANSYS Mechanical simulation user is this: If you don’t know if you need large deflection or not, you should turn it on. There is really no way to know for certain if it’s needed or not unless you perform a comparison study with and without it.

So, what are large deflection effects? In simple terms the inclusion of large deflection means that ANSYS accounts for changes in stiffness due to changes in shape of the parts you are simulating. The classic case to consider is the loaded fishing rod.

In its undeflected state, the fishing rod is very flexible at the tip. With a heavy fish on the end of the line, the rod deflects downward and it is then easy to observe that the stiffness of the rod has increased. In other words, when the rod is lightly loaded, a small amount of force will cause a certain downward deflection at the tip. When the rod is heavily loaded, however, a much larger force will be needed to cause the tip to deflect downward by that same amount.

This change in the force amount required to achieve the same change in displacement implies that we do not have a linear relationship between force and displacement.
Consider Hooke’s law, also known as the spring equation:

F = Kx

Where F is the force applied, K is the stiffness of the structure, and x is the deflection. In a linear system, doubling the force doubles the displacement. In our fishing rod case, though, we have a nonlinear system. Depending on how heavily the rod is loaded relative to its size and other properties, we might need to triple the force to double the displacement, and then to double the displacement again we might need four times that force. Those numbers are invented for illustration, but they capture the idea that K is no longer a constant.
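To put a few numbers on that, here is a tiny Python sketch comparing a linear spring with a stiffening one. The stiffness values and the cubic stiffening term are invented purely for illustration; they are not derived from any rod model.

```python
# Linear spring vs. a stiffening (nonlinear) spring -- numbers invented for illustration.
# Linear:     F = K * x
# Stiffening: F = K * x + c * x**3   (restoring force grows faster than linearly)

K = 10.0   # base stiffness, force per unit deflection
c = 2.0    # made-up cubic stiffening coefficient

def force_linear(x):
    return K * x

def force_stiffening(x):
    return K * x + c * x**3

for x in (1.0, 2.0, 4.0):
    print(f"x = {x:3.1f}   linear F = {force_linear(x):6.1f}   stiffening F = {force_stiffening(x):6.1f}")

# Doubling x from 1 to 2 doubles the linear force (10 -> 20) but triples the
# stiffening force (12 -> 36): the effective stiffness grows with deflection,
# which is what large deflection effects are there to capture.
```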

Ted-rod-fishing1

So, in the case of the fishing rod, Hooke’s law in a linear form does not apply. In order to capture the nonlinear effect we need a way for the stiffness to change as the shape of the rod changes. In our finite element solution in ANSYS, it means that we want to recalculate the stiffness as the structure deflects.

This recalculation of the stiffness as the structure deflects is activated by turning on large deflection effects. Without large deflection turned on, we are constrained to using the linear equation, and no matter how much the structure deflects we are still using the original stiffness.

So, why not just have large deflection on by default and use it all the time? My understanding is that it is off by default because it adds computational expense. It's the same story as for a lot of advanced options, such as frictionless or frictional contact vs. the default bonded (simpler) behavior. In other words, turning on large deflection triggers a nonlinear solution, meaning multiple passes through the solver using the Newton-Raphson method instead of the single pass needed for a linear problem.
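To show what those multiple passes actually do, here is a bare-bones Newton-Raphson iteration on the same kind of made-up stiffening spring, where each pass re-evaluates the tangent stiffness before updating the deflection. A real finite element solution does this with matrices over many degrees of freedom, but the idea is the same.

```python
# Newton-Raphson on a one-DOF nonlinear spring: find the deflection x where the
# internal force K*x + c*x**3 balances an applied load. Each pass re-evaluates
# the tangent stiffness dF/dx, the scalar analogue of re-forming the stiffness
# matrix each iteration. Numbers are invented for illustration.

K, c = 10.0, 2.0          # made-up linear stiffness and cubic stiffening term
F_applied = 100.0         # external load to equilibrate

def internal_force(x):
    return K * x + c * x**3

def tangent_stiffness(x):
    return K + 3.0 * c * x**2

x = 0.0                                        # start from the undeformed state
for i in range(1, 26):
    residual = F_applied - internal_force(x)   # out-of-balance (residual) force
    if abs(residual) < 1e-8:
        break
    x += residual / tangent_stiffness(x)       # one Newton-Raphson pass
    print(f"pass {i}: x = {x:.6f}, residual = {F_applied - internal_force(x):+.3e}")

print(f"Nonlinear deflection x = {x:.6f}; a single linear pass would give {F_applied / K:.1f}")
```

A linear run stops after the first pass; the nonlinear run keeps iterating until the out-of-balance force is driven to (nearly) zero, which is where the extra solve time goes.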

Here is an example of a simplified fishing rod. The image shows the undeflected rod (top), which is held fixed on the left side and has a downward force load applied on the right end. The bottom image shows the final deflected shape, with large deflection effects included. The deflection at the tip in this case is 34 inches.

Undeformed_deformed_rod

In comparison, running the same load with large deflection turned off resulted in a tip deflection of 40 inches. Thus, the calculated tip deflection is 15% less with large deflection turned on ((40 − 34)/40 = 0.15), since we are now accounting for the change in stiffness as the rod changes shape while it deflects.

Below we have a force (horizontal axis) vs. deflection (vertical axis) plot for a nonlinear simulation of a fishing rod with large deflection turned on. The fact that the curve is not a straight line confirms that this is a nonlinear problem; the stiffness, which here is the inverse of the curve's slope since force is on the horizontal axis, is not constant. We can also see that as the force gets higher the curve becomes more horizontal, meaning that more force is needed for each additional increment of displacement. This matches our observations of the fishing rod's behavior.

Force_vs_Deflection
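If you export a force-deflection history like this one, a quick way to see the changing stiffness numerically is to difference the data. The sample values below are made up for illustration, not read off the plot above.

```python
# Estimate tangent stiffness (dF/dx) from a sampled force-deflection history
# using finite differences. The sample values are invented for illustration;
# substitute the load steps exported from your own run.

force      = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]     # applied force at each load step
deflection = [0.0, 8.0, 14.0, 18.5, 22.0, 24.8]  # resulting tip deflection

for i in range(1, len(force)):
    dF = force[i] - force[i - 1]
    dx = deflection[i] - deflection[i - 1]
    print(f"step {i}: tangent stiffness ~ {dF / dx:.3f} (force per unit deflection)")

# The stiffness estimate climbs from step to step, which is the numerical
# signature of the stiffening behavior a single linear solve would miss.
```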

So, getting back to our original point, it’s often the case that we don’t know if we need to include large deflection effects or not. When in doubt, run cases with and without. If you don’t see a change in your key results, you can probably do without large deflection.

Here is an example using an idealized compressor vane. In this case, the deflections and stresses with and without large deflection effects are nearly the same (the stress difference is about 0.2%).

Large Deflection On:
blade_large_defl

Small Deflection:
blade_small_defl

Bottom line: when in doubt, try it out, with and without large deflection. In ANSYS Mechanical, Large Deflection effects are turned on or off in the details of the Analysis Settings branch.

It’s worth noting that turning on large deflection in ANSYS actually activates four different behaviors, collectively referred to as large deflection effects: large rotation, large strain, stress stiffening, and spin softening. All of these involve a change in stiffness due to deformation in one way or another.

If you like this kind of info, or find it useful, we cover topics like this in our training classes. For more info, check out our training pages at http://www.padtinc.com/support/software/training.html.