Home Grown HPC on CUBE Systems


A Little Project Background

Recently I've been working on developing a computer vision system for a long-standing customer. We are developing software that enables them to use computers to "see" where a particular object is in space, and to determine its precise location with respect to the camera. From that information, they can do all kinds of useful things.

In order to figure out where something is in 3D space from a 2D image you have to perform what is commonly referred to as pose estimation. It’s a highly interesting problem by itself, but it’s not something I want to focus on in detail here. If you are interested in obtaining more information, you can Google pose estimation or PnP problems. There are, however, a couple of aspects of that problem that do pertain to this blog article. First, pose estimation is typically a nonlinear, iterative process. (Not all algorithms are iterative, but the ones I’m using are.) Second, like any algorithm, its output is dependent upon its input; namely, the accuracy of its pose estimate is dependent upon the accuracy of the upstream image processing techniques. Whatever error happens upstream of this algorithm typically gets magnified as the algorithm processes the input.

The Problem I Wish to Solve

You might be wondering where we are going with HPC given all this talk about computer vision. It’s true that computer vision, especially image processing, is computationally intensive, but I’m not going to focus on that aspect. The problem I wanted to solve was this: Is there a particular kind of pattern that I can use as a target for the vision system such that the pose estimation is less sensitive to the input noise? In order to quantify “less sensitive” I needed to do some statistics. Statistics is almost-math, but just a hair shy. You can translate that statement as: My brain neither likes nor speaks statistics… (The probability of me not understanding statistical jargon is statistically significant. I took a p-test in a cup to figure that out…) At any rate, one thing that ALL statistics requires is a data set. A big data set. Making big data sets sounds like an HPC problem, and hence it was time to roll my own HPC.

The Toolbox and the Solution

My problem reduced to a classic Monte Carlo-type simulation. This particular type of problem maps very nicely onto a parallel processing paradigm known as Map-Reduce. The concept is shown below:
[Diagram: the Map-Reduce concept]

The idea is pretty simple. You break the problem into chunks and you "Map" those chunks onto available processors. The processors do some work and then you "Reduce" the solution from each chunk into a single answer. This algorithm is recursive: any single "Chunk" can itself become a new "Problem" that can be subdivided again. As you can see, you can get explosive parallelism.
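
To make the pattern concrete, here is a toy, purely serial sketch of Map-Reduce in plain JavaScript. Everything in it is made up for illustration (the chunk sizes, the runChunk function, even the "work" inside each chunk), and in the real system each chunk is handed off to a different machine rather than run in a loop:

    // A toy, serial sketch of Map-Reduce. All numbers are illustrative.
    function runChunk(samples) {
      // Stand-in "work": in the real study each sample would be a noisy
      // pose estimate; here we just average random numbers.
      var sum = 0;
      for (var i = 0; i < samples; i++) sum += Math.random();
      return { samples: samples, sum: sum };
    }

    // Map: split the problem into chunks and do the work on each one.
    var chunks = [250000, 250000, 250000, 250000];
    var partials = chunks.map(runChunk);

    // Reduce: fold the per-chunk answers into a single answer.
    var total = partials.reduce(function (acc, p) {
      return { samples: acc.samples + p.samples, sum: acc.sum + p.sum };
    }, { samples: 0, sum: 0 });

    console.log('Mean of ' + total.samples + ' samples: ' +
                total.sum / total.samples);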

Now, there are tools that exist for this kind of thing. Hadoop is one such tool. I’m sure it is vastly superior to what I ended up using and implementing. However, I didn’t want to invest at this time in learning a specialized tool for this particular problem. I wanted to investigate a lower level tool on which this type of solution can be built. The tool I chose was node.js (www.nodejs.org).

I'm finding Node to be an awesome tool for hooking computers together in new and novel ways. It acts kind of like the post office: you can send letters and messages, and receive letters and messages, all while going about your normal day. Node handles all of the coordinating and transporting. It basically sends out a helpful postman who taps you on the shoulder and says, "Hey, here's a letter." You are expected to do something (quickly) and maybe send back a letter to the original sender or someone else. More specifically, node turns everything that a computer can do into a "tap on the shoulder", or an event. A request like "Hey, go read this file for me" turns into "OK, happy to. I'll tap you on the shoulder when I'm done. No need to wait for me." So, instead of twiddling your thumbs while the computer spins up the hard drive, finds the file, and reads it, you get to go do something else you need to do.

As you can imagine, this is a really awesome way of doing things when network latency, spinning hard drives, and little child processes doing useful work are all chewing up valuable time, time that you could be using to get someone else started on some useful work. Also, like all children, these helpful little child processes never seem to take the same time to do the same task twice. However, simply being notified when they are done allows the coordinator to move on to other children. Think of a teacher in a classroom. Everyone is doing work, but not at the same pace. Imagine if the teacher could only focus on one child at a time until that child fully finished. Nothing would ever get done!
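
As a tiny, concrete example of that event style, here is roughly what the "go read this file for me" conversation looks like in node. This is just a sketch; the file name is made up:

    // The "tap on the shoulder" in node: ask for a file, keep working,
    // and handle the result whenever it shows up.
    var fs = require('fs');

    fs.readFile('results.csv', 'utf8', function (err, data) {
      // This callback runs later, once the read has finished.
      if (err) {
        console.error('Could not read the file:', err.message);
        return;
      }
      console.log('Done! Got ' + data.length + ' characters.');
    });

    // Execution continues immediately; we are free to do other work
    // while the disk does its thing.
    console.log('...meanwhile, off to do something else.');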

Here is a little graph of our internal cluster at PADT cranking away on my Monte Carlo simulation.
[Graph: CPU load across PADT's internal cluster while it cranks on the Monte Carlo simulation]

It’s probably impossible to read the axes, but that’s 1200+ cores cranking away. Now, here is the real kicker. All of the machines have an instance of node running on them, but one machine is coordinating the whole thing. The CPU on the master node barely nudges above idle. That is, this computer can manage and distribute all this work by barely lifting a finger.
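
To give a flavor of how that coordination can work, here is a toy sketch, and only a sketch (this is not the actual simulator code), of a node coordinator that forks child processes, hands each one a chunk, and reduces the answers as the "message" events arrive. The coordinator never waits on any one child, which is why its own CPU load stays so low:

    // coordinator.js - run with: node coordinator.js
    var cp = require('child_process');

    if (process.argv[2] === 'worker') {
      // Child mode: wait for a chunk description, do the "work", report back.
      process.on('message', function (chunk) {
        var sum = 0;
        for (var i = 0; i < chunk.samples; i++) sum += Math.random();
        process.send({ samples: chunk.samples, sum: sum });
        process.exit(0);
      });
    } else {
      // Coordinator mode: fork a few children and hand each one a chunk.
      var pending = 4;
      var totals = { samples: 0, sum: 0 };
      for (var n = 0; n < 4; n++) {
        var child = cp.fork(__filename, ['worker']);
        child.send({ samples: 250000 });
        child.on('message', function (result) {
          // Reduce step: fold each child's answer in as it arrives.
          totals.samples += result.samples;
          totals.sum += result.sum;
          if (--pending === 0) {
            console.log('Mean of ' + totals.samples + ' samples: ' +
                        totals.sum / totals.samples);
          }
        });
      }
    }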

Conclusion

There are a couple of things I want to draw your attention to as I wrap this up.

  1. CUBE systems aren’t only useful for CAE simulation HPC! They can be used for a wide range of HPC needs.
  2. PADT has a great deal of experience in software development both within the CAE ecosystem and outside of this ecosystem. This is one of the more enjoyable aspects of my job in particular.
  3. Learning new things is a blast and can have benefit in other aspects of life. Thinking about how to structure a problem as a series of events rather than a sequential series of steps has been very enlightening. In more ways than one, it is also why this blog article exists. My Monte Carlo simulator is running right now. I’m waiting on it to finish. My natural tendency is to busy wait. That is, spin brain cycles watching the CPU graph or the status counter tick down. However, in the time I’ve taken to write this article, my simulator has proceeded in parallel to my effort by eight steps. Each step represents generating and reducing a sample of 500,000,000 pose estimates! That is over 4 billion pose estimates in a little under an hour. I’ve managed to write 1,167 words…


Continue a Workbench Analysis in ANSYS MAPDL R15

This article outlines the steps required to continue a partially solved Workbench-based analysis using a Multi-Frame Restart and MAPDL Batch mode.

In this article you will learn:

  • Some ways to interface between ANSYS Workbench and ANSYS MAPDL
  • How to re-launch a run using a Multi-Frame Restart in ANSYS Batch mode
  • The value of the jobname.abt functionality for Static Structural and Transient Structural analyses

Recently I was working in the Mechanical application within ANSYS Workbench, running a Transient Structural analysis. I began my run thinking that my workstation had the necessary resources to complete the analysis in a reasonable amount of time. As the analysis slowly progressed, I began to realize that I needed to make a change and switch to a computer that had more resources. But some of my analysis was already complete and I did not want to lose that progress. In addition, I wanted to be sure that I could monitor the analysis at intermediate points to ensure that it was advancing as I would like. This meant that, however I decided to proceed, I needed to make sure that I could still read my results back into Mechanical and retain the capability to restart again from a later point. Here were my options.

1: I could use the Remote Solve Manager (RSM) to continue running my analysis on a compute server machine. Check out this article for more on that.

I did use RSM in part, but perhaps you do not have RSM configured, or your compute resources are not connected through a network. In that case, here is the other option you can use.

2: A Multi-Frame Restart using MAPDL in ANSYS Batch mode

Here’s the process:

1. Make note of the current load step and last converged substep that your analysis completed when you hit the Interrupt Solution button

2. Copy the *.rdb, *.ldhi, *.Rnnn files from the Solver Files Directory on the local machine to the Working Directory on the computing machine

You can find your Solver Files Directory by right-clicking on the Solution branch in the Model Tree and selecting Open Solver Files Directory.

3. Write an MAPDL input file with the commands to launch a restart and save it in the Working Directory on the computing machine (save with extension *.inp)

An input that works well for restarting an analysis can be quite short: it essentially enters the solution processor, flags the run as a restart from the last converged load step and substep, and solves. Feel free to adjust it with the understanding that the ANSYS Parametric Design Language (APDL) is a sophisticated language with a vast array of capability.

4. Start the MAPDL Product Launcher interface on the computing machine and:
    a. Set Simulation Environment to ANSYS Batch
    b. Navigate to your Working Directory
    c. Set the jobname to the same name as that of the *.rdb file
    d. Browse to the input file you generated in Step 3
    e. Give your output file a descriptive name
    f. Adjust parallel processing and memory settings as desired
    g. Run


5. Look at the output file to see progress and monitor the run

6. Write “nonlinear” in a text file and save it as jobname.abt inside the Working Directory to cleanly interrupt the run and generate restart files when desired

The jobname.abt file will appear briefly in the Working Directory before the solver detects it, and the output file will reflect the interruption.

Note that the jobname.abt interruption process is the exact process that ANSYS uses in the background when the Interrupt Solution button is pressed interactively in Mechanical.

Read more about the jobname.abt functionality in the Help Documentation links at the end of this article.

7. Copy all newly created files in the Working Directory on the computing machine to the Solver Files Directory on the local machine

8. Back in the Mechanical application, highlight the Solution branch of the model tree, select Tools menu>Read Results Files… and navigate to the Solver Files Directory and read the updated *.rst file

After you have read in the results file, notice that the restart point generated from the interruption through the jobname.abt process appears as an option within the Mechanical interface under Analysis Settings.

9. Review intermediate results to determine whether the analysis should continue or adjustments need to be made

10. Repeat the entire process to continue the analysis from the new current load step and substep

Happy solving!

Here are some useful Help Documentation sections in ANSYS 15 for your reference:

  • Understanding Solving:
    • help/wb_sim/ds_Solving.html
  • Mechanical APDL: Multiframe Restart:
    • help/ans_bas/Hlp_G_BAS3_12.html#BASmultrestmap52199

And, as always, please contact PADT with your questions!

Video Tips: Create and Display Custom Units in ANSYS CFD-Post

By: Susanna Young

ANSYS CFD-Post is a powerful tool capable of post-processing results from multiple ANSYS tools including FLUENT, CFX, and Icepak. There are almost endless customizable options in ANSYS CFD-Post. This is a short video demonstrating how to create and display a set of custom units within the tool. Stay tuned for additional videos on tips for more effective post-processing in ANSYS CFD-Post.

ANSYS Remote Solve Manager (RSM): Answers to Some Frequently Asked Questions

For you readers out there who use the ANSYS Remote Solve Manager (RSM) and have had one or all of the below questions, this post might just be for you!

  1. What actually happens after I submit my job to RSM?
  2. Where do the files needed to run the solve go?
  3. How do the files get returned to the client machine, or do they?
  4. If something goes wrong with my solve or in the RSM file downloading process, is there any hope of recovery?
  5. Are there any recommendations out there for how best to use RSM?

If your question is "How do I set up RSM as a user?", your answers are here in a post by Ted Harris. The post today is a deeper dive into RSM.

The answers to questions 1 through 3 above are really only necessary if you would like to know the answer to question 4. My reason for giving you a greater understanding of the RSM process is so that you can do a better job of troubleshooting should your RSM job run into an issue.  Also, please note that this process is specifically for an RSM job submitted for ANSYS Mechanical. I have not tested this yet for a fluid flow run.

What happens when a job gets submitted to RSM?

The following will answer questions 1-3 above.

When a job is run locally (on your machine), ANSYS uses the Solver Files Directory to store and update data. That folder can be found by right clicking on the Solution branch in the Model tree and selecting Open Solver Files Directory.

The project directory will be opened and you can see all of the existing files stored for your particular solution.

When a job gets submitted to RSM, the files that are stored in the above folder will be transferred to a series of two temporary directories. One temporary directory on the client side (where you launched the job from) and one temporary directory on the compute server side (where the numbers get crunched).

After you hit solve for a remote solve, you will notice that your project solver directory gets emptied. Those files are transferred to a temporary directory under the _ProjectScratch directory.

Next, these files get transferred to a temporary directory on the compute server. The files in the _ProjectScratch directory will remain there but the folder will not be updated again until the solve is interrupted or finished.

You can find the location of the compute server temporary directory by looking at the output log in the RSM queueing interface.

If you navigate to that directory on your compute server, you will see all of the necessary files needed to run. Depending on your IT structure, you may or may not have access to this directory, but it is there.

To summarize the route your files take during the RSM solve process: local Solver Files Directory → _ProjectScratch temporary directory → compute server temporary directory, and then back again when the results are downloaded.

Once your run is completed or you have interrupted it to review intermediate results and your results have been downloaded and transferred to the solver files folder, both of the temporary directories get cleaned up and removed. I have just outlined the basic process that goes on behind the scenes when you have submitted a job to RSM.

What if something goes wrong with my RSM job? Can I recover my data and re-read it into Workbench?

Recently, I ran into a problem with one of my RSM jobs that resulted in me losing all of the data that had been generated during a two-day run. I haven't determined the exact cause of the problem, but it did force me to dive into the RSM process and discover what I am sharing with you today. By pinpointing and understanding what goes on after the job is submitted to RSM, I determined that it can be possible to recover data, but only under certain circumstances and setup.

First, if you have the "Delete Job Files in Working Directory" box checked in the compute server properties menu accessed from the RSM queue interface, and RSM sees your job as being completed, the answer to the above question is no: you will not be able to recover your data. Essentially, because the compute server is cleaned up and the temporary directory gets deleted, the files are lost.

To avoid lost data and prepare for such a catastrophe, my recommendation is that you or your IT department uncheck the "Delete Job Files in Working Directory" box. That way, you have a backup copy of your files stored on the server that you can delete later, when you are sure you have all of your files safely transferred to your solver files folder within your project directory structure.

The downside to having this box unchecked is that you have to manually clean up your server. Your IT department might not like this, or even allow it, because it could clutter your server if you do not stay on top of things. But it could be worth the safety net.

As for getting your data back into Workbench, you will need to manually copy the files on the compute server to your solver files folder in your Workbench project directory structure. I explained how to access this folder at the beginning of this post. Once you have copied those files, back in the Mechanical application, with the Solution branch of your model tree highlighted, select Tools>Read Results Files…, navigate to your solver files directory, select the *.rst file, and read it in.

Once the results file is read in, you should see whatever information is available.

Recommendations

  • Though it is possible to run concurrent RSM jobs from the same project, my recommendation is to only run one RSM job at a time from the same project in order to avoid communication or licensing holdups

  • Unless you are confident that you will not ever need to recover files, consider unchecking the “Delete Job Files in Working Directory” box in the compute server properties menu.

    • Note: if you are not allowed access to your compute server temporary directories, you should probably consult your IT department to get approval for this action.

    • Caution: if you uncheck this box, be sure that you stay on top of cleaning up your compute server once you have your files successfully downloaded

  • Depending on your network speed, when your results files get large (>15 GB), be prepared to wait for upload and download times. There is likely activity, but you might not be able to "see" it in the progress information on the RSM output feed. Be patient or work outside of RSM using a batch MAPDL process.

  • Avoid hitting the "Interrupt Solution" command more than once. I have not verified this, but I believe it can cause miscommunication between the compute server and local machine temporary directories, which can cause RSM to think that there are no files associated with your run to be transferred.


Default Contact Stiffness Behavior for Bonded Contact

It recently came to my attention that the default contact stiffness factor for bonded contact can change based on other contact regions in a model. This applies both to Mechanical as well as Mechanical APDL. If all contacts are bonded, the default contact stiffness factor is 10.0. This means that in our bonded region, the stiffness tending to hold the two sides of contact together is 10 times the stiffness of the underlying solid or shell elements.

However, if there is at least one other contact region that has a type set to anything other than bonded, then the default contact stiffness for ALL contact pairs becomes 1.0. This is the default behavior as documented in the ANSYS Mechanical APDL Help, in section 3.9 of the Contact Technology Guide in the notes for Table 3.1:

“FKN = 10 for bonded. For all other, FKN = 1.0, but if bonded and other contact behavior exists, FKN = 1 for all.”

So, why should we care about this? If you are relying on bonded contact to simulate a connection between one part and another, the resulting stress in those parts could be different in a run with all bonded contact vs. a run in which one or more contact pairs are set to a type other than bonded. In the latter case, the default contact stiffness is less than it would be if all the contact regions were set to bonded.

This can occur even if the non-bonded contact is in a region of the model that is in no way connected to the bonded region of interest. Simply the presence of any non-bonded contact region causes the contact stiffness factor for all contact pairs to have a default value of 1.0 rather than the 10.0 value you might expect.

Here is an example, consisting of a simple static structural model. In this model, we have an inner column with a disk on top. There are also two blocks supporting a ring. The inner column and disk are completely separate from the blocks and ring, sharing no load path or other interaction. Initially all contact pairs are set to bonded for the contact type. All default settings are used for contact.

Loading consists of a uniform temperature differential as well as a bearing load on the disk at the top. Both blocks as well as the column have their bases constrained in all degrees of freedom.

After solving, we check the calculated maximum principal stress distribution in the ring. The max value is 41,382.

Next, to demonstrate the behavior described above, we changed the contact type for the connection between the column and the disk from bonded to rough, all else remaining the same.

After solving, we check the stresses in the ring again. The max stress in the ring has dropped to 15,277. Again, the only change that was made was in a part of the model that was in no way connected to the ring for which we are checking stresses. The change in stress is due solely to a change in the contact type setting in a different part of the model. The reason the stress has decreased is that the stiffness of the bonded connection is less by a factor of 10, so the bonded region is a softer connection than it was in the original run.


So, what do we as analysts need to do in light of this information? A good practice would be to manually specify the contact stiffness factor for all contact pairs. This behavior only crops up when the default values for contact stiffness factor are utilized. We can define these stiffness factors easily in ANSYS Mechanical in the details view for each contact region. Further, we need to always remember that ANSYS as well as other analytical tools are just that – tools. It’s up to us to ensure that the results of interest we are getting are not sensitive to factors we can adjust, such as mesh density, contact stiffness, weak spring stiffness, stabilization factors, etc.

Learn Linux on edX

The balance of Linux vs. Windows for simulation users is always in flux. For some time it was predicted that Windows would win the battle, but in recent years Linux has made a resurgence, especially on clusters and in the cloud.  We strongly recommend that ANSYS users who want to be power users gain a good understanding of Linux from a user and sysadmin perspective. This is especially true for CFD users, since they are the most likely to be solving on Linux machines.  Too many of the people we interface with are left at the mercy of an IT support team that doesn't know, or even fears, Linux.

The best way to solve this problem is to learn Linux yourself. To help people get there, we have recommended a few books and "learn by doing." Now we have a better option.

edX offers an Introduction to Linux class that looks outstanding, and you can audit it for free or take the course for real for a $250 minimum contribution.  The quality of these courses is fantastic. The material is thorough and practical.

If you do take the class, give us some feedback when you finish in the comments below.

Here is the video describing the course.  

Using Probes to Obtain Contact Forces in ANSYS Mechanical

Recently we have had a few questions on obtaining contact results in ANSYS Mechanical. A lot of contact results can be accessed using the Contact Tool, but to obtain contact forces we use Probes. Since not everyone is familiar with how it’s done, we’ll explain the basics here.

Our example is a Mechanical model involving two parts. One part has a load that causes it to be deflected into the other part.


We are interested in obtaining the total force that is being transmitted across the contact elements as the analysis progresses. Fortunately this is easy to do using Probes in Mechanical.

The first thing we do is click on the Solution branch in the tree so we can see the Probes button in the context toolbar. We then click on the Probe drop-down button and select Force Reaction.


Next, we click on the resulting Force Reaction result item under the Solution branch to continue with the configuration. We first change the Location Method from Boundary Condition to Contact Region.


We then specify the desired contact region for the force calculation from the Contact Region dropdown.


Note that the coordinate system for the force calculation can be either Cartesian or Cylindrical. You can set up a coordinate system wherever you need it, selectable via the Orientation dropdown.

There is also an Extraction dropdown with various options for using the contact elements themselves, the elements underlying the contact elements, or the elements underlying the target elements (target elements themselves have no reaction forces or other results calculated). Care must be taken when using underlying elements to make sure we’re not also calculating forces from other contact regions that are part of the same elements, or from applied loads or constraints. In most cases you will want to use either Contact (Underlying Element) or Target (Underlying Element). If contact is non-symmetric, only one of these will have non zero values.

In this case, the setting Contact (Contact Element) was a choice that gave us appropriate results, based on our contact behavior method of Asymmetric.


The details view for the probe then includes the contact force results, along with graphs and a table of force vs. 'time' (this was a static structural analysis with a varying pressure load). For reference, here is a force and moment summation from the solver output:


***** SUMMATION OF TOTAL FORCES AND MOMENTS IN THE GLOBAL COORDINATE SYSTEM *****

FX = -0.4640219E-04
FY = -251.1265
FZ = -0.1995618E-06
MX = 62.78195
MY = -0.1096794E-04
MZ = -688.9742
SUMMATION POINT= 0.0000 0.0000 0.0000

We hope this information is useful to you in being able to quickly and easily obtain your contact forces.

Video Tips: Using ACT to change Default Settings in ANSYS Mechanical

A short video showing how ACT (ANSYS Customization Toolkit) can be used to change Default Settings for analyses done in ANSYS Mechanical.  This is a very small subset of the capabilities that ACT can provide.  Stay tuned for other videos showing further customization examples.

The example .xml and Python files are located below.  Please bear in mind that to use these "scripted" ACT extension files you will need to have an ACT license.  Compiled versions of extensions don't require any licenses to use.  Please send me an email (manoj@padtinc.com) if you are wondering how to translate this example into your own needs.

NLdefaults

Flownex and PADT Sponsor University of Houston’s Rankin Rollers Team

A group of enthusiastic students at the University of Houston are doing their part to solve that age-old academia problem: not enough hands-on experience.  They are designing and building a working steam turbine for the school's Thermodynamics lab so students can experiment with a Rankine cycle, learn how to take meaningful measurements, and study how to control a real thermodynamic system.

Look! Flownex and PADT on Social Media! Thanks for the plug guys.
After meeting a team member at the 2014 Houston ANSYS User conference, PADT saw a great opportunity to help the team by providing them with access to a full seat of Flownex SE so that they can create a virtual prototype of their steam turbine and the control system they are developing. 

The four team members have the following goals for their project:

    1. Create a fully automated system control
    2. Create system with rolling frame for ease of transport
    3. Create system with dimensions of 4x2x3.5 ft
    4. Deliver pre-made lab experiments
    5.  Produce an aesthetically pleasing product

    Flownex should be a great tool for them, allowing the team to simulate the thermodynamics and flow in the system as well as the system controls before committing to hardware. 

    You can learn more about the team on their Facebook page here, or on their website here

    We hope to share their models and what they have learned when their project is complete. If you are interested in using Flownex for your work or school project, contact PADT.

    This is the Team’s proposed configuration for the final test bench.
    We can’t wait to see this flow diagram translated into Flownex.

    A 3D Mouse Testimonial

    The following is from an email that I received from Johnathon Wright.  I think he likes his brand new 3Dconnexion SpacePilot PRO.
    -David Mastel
      IT Manager
      PADT, Inc.

    ——————-

    Recently PADT became a certified reseller for 3Dconnexion. Shortly following the agreement, a sleek and elegant SpacePilot PRO landed on my desk. Immediately the ergonomic design, LCD display, and blue LED under the space ball appealed to the techie inside of me. As a new 3D mouse user I was a little skeptical about the effectiveness of this little machine, yet it has quickly gained my trust as an invaluable tool for any Designer or Engineer. On a daily basis it allows me to seamlessly transition from CAD to 3D printing software and then to Geomagic Scanning software, allowing dynamic control of my models, screen views, hotkeys and shortcuts.

    Outside of its consistency as an exceptional 3D modeling aid, the SpacePilot PRO also has a configurable home screen that allows quick navigation of email, calendar or tasks. This ensures that I can keep in touch with my team without having to ever leave my engineering programs, which is invaluable to my production on a daily basis. Whether you are looking to try out a 3D mouse for the first time or you are an experienced 3D mouse user looking to upgrade, you need to check out the SpacePilot PRO. I can't imagine returning to producing CAD models or manipulating scan data without one. Combine the SpacePilot PRO's cross-compatibility with its programmability and ease of use, and you have a quality computer tool that applies to a wide range of users who are looking at new ways to increase productivity.

    Link to YouTube video – watch it do its thing along with a look at my 3D scanning workstation, the GEOCUBE: http://youtu.be/fsfkLPaZJe4

    Johnathon Wright
    Applications Engineer,
    Hardware Solutions
    PADT, Inc.

    ———————————————————————————————-
    Editor's Note:

    Not familiar with what a 3D Mouse is?  It is a device that lets a user control 3D objects on their computer in an intuitive manner. Just as you move a 2D mouse on the plane of your desk, you spin a 3D Mouse in all three dimensions.  Learn more here


    Integrating ANSYS Fluent and Mechanical with Flownex

    Component boundary conditions generated in Flownex are useful in CFD simulation (inlet velocities, pressures, temperatures, mass flow). Fluid and surface temperature distribution results from Flownex can also be useful in many FEA simulations. For this reason, the latest release of Flownex SE was enhanced to include several levels of integration with ANSYS.

    ANF Import

    By simply clicking on an Import ANF icon on the Flownex Ribbon bar, users can select the file that they want to import. The user will be requested to select whether the file must be imported as 3D geometry, which conserves the coordinate system, or as an isometric drawing.

    The user can also select the type of component from the Flownex library that should be created on import. Since the import only supports lines and line-related items, this will typically be a pipe component.

    Following a similar procedure, a DXF importer allows users to import files from AutoCAD.

    This rapid model construction gives Flownex users the ability to create and simulate networks more quickly, so they can get to results sooner and spend less time building models.


    ANSYS Flow Solver Coupling and Generic Interface

    The Flownex library was extended to include components for co-simulation with ANSYS Fluent and ANSYS Mechanical.

    These include a flow solver coupling, which checks combined convergence and exchanges data on each iteration, and a generic coupling that can be used for cases when convergence between the two software programs is not necessary.

    The general procedure for both the Fluent and Mechanical co-simulation is the same:

    1. By identifying specified named selections, Flownex will replace values in a Fluent journal file, or in the ds.dat file in the case of Mechanical.
    2. From Flownex, Fluent/Mechanical will then be run in batch mode.
    3. The ANSYS results are then written into text files that are used as inputs to Flownex.
    4. When applicable, specified convergence criteria will be checked and the procedure repeated if necessary.


    Learn More

    To learn more about Flownex, or about how Flownex and ANSYS Mechanical can work together, contact PADT at 480.813.4884 or roy.haynie@padtinc.com.  You can also learn more about Flownex at www.flownex.com.

    FDA Opening to Simulation Supported Verification and Validation for Medical Devices

    Bringing new medical device products to market requires verification and validation (V&V) of the product's safety and efficacy. V&V is required by the FDA as part of their submission/approval process. The overall product development process is illustrated in the chart below; phases 4 and 5 show where verification is used to prove the device meets the design inputs (requirements) and where validation is used to prove the device's efficacy. Historically, the V&V processes have required extensive and expensive testing. However, recently, the FDA's Center for Devices and Radiological Health (CDRH) has issued a guidance document that helps companies use computational modeling (e.g., FEA and CFD) to support the medical device submission/approval process.

    Phases and Controls of Medical Device Development Process, Including Verification and Validation
    The draft guidance document, "Reporting of Computational Modeling Studies in Medical Device Submissions," was issued on January 17, 2014. The guidance document specifically addresses the use of computation in the following areas for verification and/or validation:

    1. Computational Fluid Dynamics and Mass Transport
    2. Computational Solid Mechanics
    3. Computational Electromagnetics and Optics
    4. Computational Ultrasound
    5. Computational Heat Transfer

    The guidance specifically outlines what form reports need to take if a device developer is going to use simulation for V&V.  By following the guidance, a device sponsor can be assured that all the information required by the FDA is included. The FDA can also work with a consistent set of input from various applicants. 

    CFD Simulation of a Drug Delivery System. Used to Verify Uniform Distribution of Drug

    Computational Modeling & Simulation, or what we usually call simulation, has always been an ideal tool for reducing the cost of V&V by allowing virtual testing on the computer before physical testing. This reduces the number of iterations on physical testing and avoids the discovery of design problems during testing, which is usually late in the development process and when making changes is the most expensive. But in the past, you had to still conduct the physical testing. With these new guidelines, you may now be able to submit simulation results to reduce the amount of required testing.
    Simulation to Identify Stresses and Loads on Critical Components While Manipulating a Surgical Device

    Validation and verification using simulation has been part of the product development process in the aerospace industry for decades and has been very successful in increasing product performance and safety while reducing development costs.  It has proven to be a very effective tool, when applied properly.  Just as with physical testing, it is important that the virtual test be designed to verify and validate specific items in the design, and that the simulation makes the right assumptions and that the results are meaningful and accurate.

    PADT is somewhat unique because we have broad experience with product development, various types of computational modeling and simulation, and the process of submission/approval with the FDA. In addition, we are ISO 13485 certified. We can provide the testing that is needed for the V&V process and employ simulation to accelerate and support that testing to help our medical device customers get their products to market faster and with less testing cost.  We can also work with customers to help them understand the proper application of simulation in their product development process while operating within their quality system.

    Flownex 2014 Released and Webinars Announced

    The June release of Flownex SE software includes numerous updates for companies that model thermal fluid systems; videos and webinars are available to showcase the impact of these enhancements.

    Flownex SE has increased the ability of engineers to accurately model their fluid-thermal systems with the release of Flownex 2014 on June 19th, 2014. The program is known for its ease of use, breadth of capability, and depth of functionality.  With enhancements in turbomachinery modeling, support for 3D networks, GIS data import, heat transfer, and a myriad of additional new features impacting efficiency, integration, and automation, this release expands the industries that can take advantage of it, and will help current users model their systems more accurately with greater ease.


    To help the user community understand the impact of these significant enhancements, PADT is offering two webinars. Both webinars will include a brief introduction to the tool, so if you are new to Flownex SE you will have a good foundation to get started.

    Webinar Sign-Up:

    Overview webinar: July 24, 2014, 1:00-2:00 PM MST

    This webinar will focus on all of the new features in Flownex SE 8.3.6.
    Register here

    Turbomachinery webinar: August 7, 2014, 1:00-2:00 PM MST

    This webinar will be a deep dive into the extensive turbomachinery capabilities added in this release, and will be of interest to anyone simulating turbine engines, pumps, blowers, or other rotating machinery that involves fluids.
    Register here

    All registrants will be sent links to recordings so they can view the presentations even if they cannot attend live.

    Video Resources:

    A video is also available that hits on the important new capabilities.

    If you are new to Flownex SE, visit PADT's Flownex page to learn more.

    Key Features:

    The key features introduced in Flownex 2014 (Flownex SE 8.3.6) are:  

    1. Rotating components, Swirl Boundary, and General Turbine and Compressor Models
    2. Importing of Geometries
    3. GIS File Support
    4. Connections to ANSYS Products
    5. Link to Mathcad
    6. Graphical Script Generation Tool
    7. New Designer Tools to Quickly Model Common Systems.
    8. Five Additional Convection Models
    9. Exit Thrust Nozzle Added
    10. Additional Enhancements ranging from 3D Graphs to Support for Miter Bends in Piping


    Visit here to see a detailed list of these key features, or download the complete release notes here.

    These additional features reflect the growing diversity of industries that are using Flownex SE to model their systems.  Users in oil and gas, mining, chemical processing, and turbomachinery will all see additional accuracy, functionality, and efficiency from this release. Built on an existing strong foundation that offers unparalleled capability with intuitive ease of use, a short look at Flownex SE will show you why so many users around the world are choosing it as their thermo-fluid modeling tool.

    PADT is the distributor of Flownex SE in the United States.  Our experienced staff is eager to discuss your system modeling needs and is ready to show you how Flownex SE can start delivering value almost immediately. Contact us today to meet with our experts.

    Video Tips: Workflow for Designing Electric Motors in ANSYS

    A quick video showing you a great workflow for designing electric motors. It shows going from a quick template-based design tool to a full 3D analysis tool.

    Top 10 New Thermal Fluid Modeling Capabilities in Flownex 2014

    We are pleased to announce the release of Flownex SE 2014.  This is a very exciting release for all of us involved in Flownex because it introduces a mix of advanced features and usability enhancements – we love better and easier.  We will be publishing more information about this release, as well as videos and webinars. While we set all of that up, we wanted to whet everyone's appetite and give you a list of what we feel are the 10 most important enhancements.

    1. Rotating components, Swirl Boundary, and General Turbine and Compressor Models 
      A new library has been added which models rotating flow on a system level. Focusing on secondary flow and heat transfer in turbine engines, it includes all the components needed, including compressors, turbines, seals, gaps, nozzles, and cavities. A complete library for Steam Turbine modeling was also added.
    2. Importing of Geometries
      Users can read in 2D and 3D layout files in common formats and directly create Flownex models from the geometry. The model and results can then be visualized with the 3D geometry.
    3. GIS File Support
      When modeling systems that cover a large area, such as water or gas pipelines, the geographical data can be imported for display and to automatically include altitude into the model. 
    4. Connections to ANSYS Products
      Users can import 3D Pipe geometry as an ANF file, and connect to ANSYS Mechanical and ANSYS Fluent for co-simulation.
    5. Link to Mathcad
      Users can transfer parametric data to and from Mathcad worksheets
    6. Graphical Script Generation Tool
      Users can use Quick Script to create complex scripts to customize their processes or models without having to learn the full scripting language
    7. New Designer Tools to Quickly Model Common Systems.
      Designer tools automatically iterate on a user's model to calculate unknown values for them. This release includes tools for calculating mass flow when only pressure is known at a boundary, automatically calculating steady state conditions in a two-phase tank, and a component designer that calculates input parameters for common components so that those components deliver the user's requested mass flow.
    8. Five Additional Convection Models 
      Based on user input, five new convection models were added alongside the Dittus-Boelter correlation for calculating heat transfer coefficients: tube, shell-side single phase, shell-side horizontal tube condensation, ribbed wall channel, and channel with pedestals.
    9. Exit Thrust Nozzle Added
      A new model for subsonic and supersonic flow at the outlet of a flow network with gases and superheated fluids.
    10. Additional Enhancements:
      Support for miter bends in piping
      3D graphs
      Radiation supports multiple surface enclosures
      The range of the methane two-phase fluid was increased
      Support for 64-bit
      Several more values can be changed during a transient solution

    The best way to learn more about these additions, or anything about Flownex, is to contact Roy Haynie at roy.haynie@padtinc.com or 480-813-4884.  
    There is also some more detailed material here: