All Things ANSYS 011 – New Year's Resolutions & How the Meltdown and Spectre Hardware Vulnerabilities Impact Simulation Users


Published on: January 15, 2018
With: Ted Harris, Joe Woodward, Doug Oatis, Jim Peters, Ahmed Fayed, Eric Miller
Description: In this episode, your host and Co-Founder of PADT, Eric Miller, is joined by PADT's Jim Peters, Joe Woodward, Ahmed Fayed, Doug Oatis, and Ted Harris for a discussion on new and ongoing simulation-related resolutions, as well as a look at how the newly discovered Meltdown and Spectre hardware vulnerabilities impact simulation users.


Phoenix Business Journal: Exploring Easy – Software tools can deliver a huge productivity punch

Software is part of our everyday life. The problem with having all this software around is that we forget it is there. In "Software tools can deliver a huge productivity punch" I take a look at how, with a little bit of attention to what you use, you can make your business day much easier and more productive.

What we Learned at the Geomagic Conference about Design X and Control X

On September 11th and 12th, Mario Vargas (Hardware Manager for PADT Inc.) and I (James Barker, Application Engineer for PADT Inc.) attended Convergence 2017 in Los Angeles, CA.  This event is held by 3D Systems and is the Americas Software Partner Meeting.  Many strategic partners were in attendance from across the USA, Canada, and Latin America.  We were able to learn about some new enhancements to Geomagic that will help you with inspection or reverse engineering BIG time!

On the first day of meetings we heard from Vyomesh Joshi (CEO of 3D Systems, referred to as VJ).  He mentioned that 3D Systems has committed 17% to R&D, and after going to this event, it is apparent!  VJ briefly talked about each of their software options.  The first was Control X: Polyworks currently has the edge in inspection software, but after the next release, he and other 3D Systems employees seemed confident they could surpass Polyworks.  The second was Freeform, which allows users to freely design parts using a haptic device.  This software would be great for creating custom shapes on a whim.  If you haven't tried a haptic device, you need to!  The freedom you get as a designer from a haptic device paired with Freeform will blow your mind.  The third was Cimatron, which aids in mold and die design; of the 10 largest USA mold makers, 7 use Cimatron.  The fourth is something new that will be released later this month.  I would love to tell you more about it but can't...  sorry!

A little about why Mario and I attended this convention: PADT Inc. offers 3D scanning both as a service and as hardware and software you can buy.  We use both Geomagic Design X and Geomagic Control X, and our experts scan parts for customers for inspection or reverse engineering at our Tempe, AZ office.  The scanner we use is a CMM-quality scanner from Zeiss, capable of capturing 5 million points per scan!  We also offer the 3D Systems Capture and Capture Mini scanners, which are great tools for reverse engineering and capture about 1 million points per scan.  I am located in the Salt Lake City, Utah office and have a Capture Mini scanner that anyone wanting to see and demo can come evaluate at our office.  The same holds true for the Capture scanner and Zeiss scanner at our Tempe, AZ headquarters.  Since we offer these services, we love knowing what new tools come with each product release.

Jumping back to the conference: on September 12th there were breakout sessions.  We chose the Geomagic Design X session to see what enhancements have been made.  This software is the industry's preferred software for reverse engineering parts.  There were many different vendors and partners in the room, including a rep from Faro who prefers to sell Geomagic Design X with each Faro Arm he sells because the software is so powerful.  The neat thing about this software is all of the improvements that have been made to it.  If you are accustomed to designing parts with SOLIDWORKS, Solid Edge, NX, CATIA, Pro/ENGINEER, or any other CAD software, you will be able to use this software with ease.  Every command you execute within Design X is editable, just like in the major CAD packages.  You can create sketches on planes, or, to make life even easier, use wizards that automatically create sketches and perform a command like an extrusion or revolve that remains editable after the wizard completes.  After you have finished reverse engineering your parts within Design X, you can live-transfer your new CAD data to the CAD software mentioned above.  Once you have imported this data into NX or SOLIDWORKS, you can again edit any of the sketches that were created within Design X, but now in your software of choice!  I would love to show you how powerful this software is.  There is a reason it is the industry's preferred reverse engineering software.

The Geomagic Control X session was next; it also happened to be the last session of the day.  To be honest, I have only used Design X, so I was looking forward to learning more about this software.  From all the demos I have seen in the past, it appeared really hard to use.  That is all changing with this new release, which is why VJ is confident it will compete with, and could exceed, Polyworks as the preferred inspection software.  The biggest thing that stuck out to me was the ability to set up a workflow for scanned data so that you can create your inspection reports.  The idea is that if you have a part that needs to be inspected for quality, you 3D scan the part and then import the CAD file.  By overlaying the scanned data on the CAD data, you can show the deviation between the two parts, and you can use different views in a 3D PDF to share the actual quality of the part with others.  As you assign your GD&T to this first inspection file, you are creating the first steps of the workflow.  There are many options for the workflow you can create, and 3D Systems has made it easy.  I feel the real power of this software shows when you open the results of the first inspection report and split your screen to show the 100th or 1,000th part side by side and see how that part deviates from the first.

I had a great time in California at this event even though all of our time was spent at the hotel.  The streets looked nice from the window on the 11th floor.  Maybe next time we will venture out!  If anyone from 3D Systems is reading this, let’s go out to eat next time instead of eating at the hotel for breakfast, lunch, and dinner!  Although the view from the dining room was nice!

If you have any questions about 3D scanning whether it is for Inspection or for Reverse Engineering, let us know at PADT Inc.  We look forward to helping you.

Phoenix Business Journal: Why it may be time to rethink how we think about apps

Apps have been around for almost 10 years now (I know!), and when you take a step back and look at them, they often reflect the thinking of those early days.  That is why "Why it may be time to rethink how we think about apps" is worth a read if your tech company uses apps in any way.  The post talks about what makes a good app and what we should be looking for as what's next in mobile applications.

The Additive Manufacturing Software Conundrum

Why are there so many different software solutions in Additive Manufacturing and which ones do I really need?

This was a question I was asked at lunch during the recently concluded RAPID 3D printing conference by a manager at an aerospace company. I gave her my thoughts as I was stuffing down my very average panini, but the question lingered long after the conference was over. Several weeks later, I decided to expand on my response in this blog post.

There are over 25 software solutions available and in use for different aspects of Additive Manufacturing (AM). To answer the question above, I found it best to classify these solutions into four main categories based on their purpose and to allow sub-categories to emerge as appropriate. This classification is shown in Figure 1 below, and each of the 7 sub-categories is discussed in more detail in this post.

Figure 1. Seven sub-categories of software that are applicable to Additive Manufacturing, sorted by need into four main categories

Basic Requirements

1. Design Modeler

You need this if you intend to create or modify designs

Most designs are created in CAD software such as SOLIDWORKS, CATIA, and SpaceClaim (now ANSYS SpaceClaim).  These have been in use since long before the recent rise of interest in AM, and most companies already have access to some CAD software internally. Wikipedia has a comparison of different CAD software that is a good starting point for getting a sense of the wide range of CAD solutions out there.

2. Build Preparation

You need this if you plan on using any AM technology yourself (as opposed to sending your designs outside for manufacturing)

Once you have a CAD file, you need to ensure you get the best print possible with the printer you have available. Most equipment suppliers will provide associated software with their machines that enable this. Stand-alone software packages do exist, such as the one developed by Materialise called Magics, which is a preferred solution for Stereolithography (SLA) and metal powder bed fusion in particular – some of the features of Magics are shown in the video below.

Scanning & File Transfer

3. Geometry Repair

You need this if you deal with low-quality geometries, either from scans or because you work with customers with poor CAD-generation capabilities

Geomagic Design X is arguably the industry’s most comprehensive reverse engineering software which combines history-based CAD with 3D scan data processing so you can create feature-based, editable solid models compatible with your existing CAD software. If you are using ANSYS, their SpaceClaim has a powerful repair solution as well, as demonstrated in the video below.

Improving Performance Through Analysis

4. Topology Optimization

You need this if you stand to benefit from designing towards a specific objective, like reducing mass or increasing stiffness, such as the control arm shown in Figure 2

Figure 2. Topology optimization applied to the design of an automobile upper control-arm done with GENESIS Topology for ANSYS Mechanical (GTAM) from Vanderplaats Research & Development and ANSYS SpaceClaim

Of all the ways design freedom can be meaningfully exploited, topology optimization is arguably the most promising. The ability to bring analysis up front in the design cycle and design towards a certain objective (such as maximizing stiffness-to-weight) is compelling, particularly for high-performance, material-usage-sensitive applications like aerospace. The most visible commercial solutions in the AM space come from Altair, with their OptiStruct solution (for advanced users) and SolidThinking Inspire (a more user-friendly solution that uses Altair's solver). ANSYS and Autodesk 360 Inventor also offer optimization solutions. A complete list, including freeware, is available at this link.

5. Lattice Generation

You need this if you wish to take advantage of cellular/lattice structure properties for applications such as lightweight structural panels, energy absorption devices, and thermal insulation, as well as medical applications like porous implants with optimal bone integration and stiffness, and scaffolds for tissue engineering.

Broadly speaking, there are 3 different approaches that have been taken to lattice design software:

  • Generative design (the approach taken by Autodesk's Within)
  • Lattice generation through topology optimization
  • Infill/conformal lattice generation

I will cover the differences between these approaches in detail in a future blog post. A general guideline is that the generative design approach taken by Autodesk's Within is well suited to medical applications, while lattice generation through topology optimization seems a sensible next step for those already performing topology optimization, as is the case with most aerospace companies pursuing AM technology. The infill/conformal approach is limiting in that it does not enable optimization of lattice structures in response to an objective function, and it typically involves an a priori definition of a lattice density and type that cannot then be modified locally. This is a fast-evolving field; between new software and updates to existing ones, there is a new release on an almost quarterly, if not monthly, basis. Some recent examples are nTopology and the open source IntraLattice.

Below is a short video demo of Autodesk’s Within:

6. Analysis

You need this if you do topology optimization or lattice design, or if you need to simulate part performance

Basic mechanical FE analysis solvers are integrated into most topology optimization and lattice generation software. For topology optimization, the digitally represented part at the end of the optimization typically has jarring surfaces that are smoothed and then need to be reanalyzed to ensure that the design changes have not shifted the part's performance outside the required window. Beyond topology optimization and lattice design, analysis has a major role to play in simulating performance; this is also true for those seeking to compare performance between traditionally manufactured and 3D printed parts. The key challenge is the availability of valid constitutive and failure material models for AM, which need to be sourced through independent testing, from the Senvol database, or from publications.

Process Development

7. Process Simulation

You need this if you would like to simulate the actual process to allow for improved part and process parameter selection, or to assess how changes in parameters influence part behavior

The real benefit for process simulation has been seen for metal AM. In this space, there are broadly speaking two approaches: simulating at the level of the part, or at the level of the powder.

  • Part Level Simulation: This involves either the use of stand-alone AM-specific solutions like 3DSIM and Pan Computing (acquired by Autodesk in March 2016), or the use of commercially available FE software such as ANSYS and ABAQUS. The focus of these efforts is on intelligent support design, accounting for residual stresses and part distortion, and simulating thermal gradients in the part during the process. ANSYS recently announced a new effort with the University of Pittsburgh in this regard.
  • Powder Level Simulation: R&D efforts in this space are led by Lawrence Livermore National Laboratory (LLNL), and the focus here is on fundamental understanding to explain observed defects, and on process optimization to accelerate new materials and process research.

Part level simulation is of great interest for companies seeking to go down a production route with metal AM. In particular, there is a need to predict part distortion and correct for it in the design, since this distortion can be unacceptable in many geometries. One such example is shown in the Pan Computing (now Autodesk) video below.

A Note on Convergence

Some companies own more than one of the 7 categories represented above and are seeking to converge them, either by enabling smooth handshakes between tools or by truly integrating them into one platform. In fact, Stratasys announced their GrabCAD solution at the RAPID conference, which aims to do some of this (minus the analysis aspects, and limited at the moment to their printers, all of which are for polymers only). Companies like Autodesk, Dassault Systemes, and ANSYS have many elements of the 7 software solutions listed above, and while it is not clear what level of convergence they have in mind, all have recognized the potential for a solution that can address the AM design community's needs. Autodesk, for example, has in the past 2 years acquired Within (for lattice generation), netfabb (for build preparation), and Pan Computing (for simulation), to go with their existing design suite.

Conclusion: So what do I need again?

What you need depends primarily on what you are using AM technologies for. I recommend the following approach:

  • Identify which of the 4 main categories apply to you
  • Enumerate existing capabilities: this is a simple task of listing the software you already have access to that has capabilities described in the sub-categories
  • Assess gaps in the software relative to meeting your requirements
  • Develop an efficient road map to get there: be aware that some software only makes sense (or is available) for certain processes

In the end, one of the things AM enables is design freedom, and to quote the novelist Toni Morrison: "Freedom is not having no responsibilities; it is choosing the ones you want."  At PADT, we work with design and analysis software as well as AM machines on a daily basis and would love to discuss choosing the appropriate software solutions for your needs in greater detail. Send us a note and cite this blog post, or contact me directly on LinkedIn.

Thank you for reading!

Phoenix Business Journal: Businesses should not worry about the software they need and use what they have

This week's TECHFLASH blog post in the Phoenix Business Journal takes a look at what companies can do to maximize the software investments they have already made. In "Businesses should not worry about the software they need and use what they have," I offer up some best practices for maximizing your company's software investment.

Programming a Simple Polygon Editor

Part of my job at PADT is writing custom software for our various clients.  We focus primarily on developing technical software for the engineering community, with a particular emphasis on tools that integrate with the ANSYS suite of simulation tools.  Frankly, writing software is my favorite thing to do at PADT, simply because software development is all about problem solving.

This morning I got to work on a fairly simple feature of a much larger tool that I am currently developing.  The feature I'm working on involves graphically editing polygons.  Why, you ask, am I doing this?  Well, that I can't say, but nonetheless I can share a particularly interesting problem (to me at least) that I got to take a swing at solving.  The problem is this:

When a user is editing a node in the polygon by dragging it around on the screen, how do you handle the case when they drop it on an existing node?

Consider this polygon I sketched out in a prototype of the tool.


What should happen if the user drags this node over on top of that node?

Well, I think the most logical thing to do is to merge the two nodes together.  Implementing that is pretty easy.  The slightly harder question is what to do with the remaining structure of the polygon.  For my use case, polygons have to be manifold, in that no vertex is connected to more than two edges. (The polygons can be open and thus have two end vertices connected to only one edge.)  So, what part do you delete?  Well, my solution is that you delete the "smaller" part, where "smaller" is defined as the part that has the fewest nodes.  So, for example, this is what my polygon looks like after the "drop":

Conceptually, this sounds pretty simple, but how do you do it programmatically?  To give some background, note that the nodes in my polygon class are stored in a simple, ordered C++ std::list<>.

Now, I use a std::list<> simply because I know I'm going to be inserting and deleting nodes at random places.  Linked lists are great for that, and for rendering, I have to walk the whole list anyway, so there's no performance hit there.  Graphically, my data structure looks something like this:

Pretty simple.  For a closed polygon, my class maintains a flag and simply draws an edge from the last node to the first node.

The rub comes when you start to realize that there are tons of different ways a user might try to merge nodes together in either an open or closed polygon.  I’ve illustrated a few below along with what nodes would need to be merged in the corresponding data structure.  In the data structure pictures, the red node is the target (the node on which the user will be dropping) and the green node is the one they are manipulating (the source node).

Here is one example:


Here is another example:


Finally, here is one more:

In all these examples, we have different "cases" that we need to handle.  For instance, in the first example, the portion of the data structure we want to keep is the stuff between the source and target nodes, so the stuff at the "ends" of the list needs to be deleted.  In the middle case, we just need to merge the source and target together.  Finally, in the last case, the nodes between the source and target need to be deleted, whereas the stuff at the "ends" of the list needs to be kept.

This type of problem causes shivers in many programmers, and I'll admit, I was nervous at first that it was going to lead to a solution that handled each individual case separately.  Nothing in all of programming is more hideous than that.  So, there had to be a simple way to figure out what part of the list to keep and what part to throw away.

Now, I’m sure this problem has been solved numerous times before, but I wanted to take a shot at it without googling.  (I still haven’t googled, yet… so if this is similar to any other approach, they get the credit and I just reinvented the wheel…)  I remember a long time ago listening to a C++ programmer espouse the wonders of the standard library’s algorithm section.  I vaguely remember him droning on about how wonderful the std::rotate algorithm is.  At the time, I didn’t see what all the fuss was about.  Now, I’m right there with him.  std::rotate is pretty awesome!

std::rotate is a simple algorithm.  Essentially what it does is it takes the first element in a list, pops it off the list and moves it to the rear of the list.  Everything else in the list shifts up one spot.  This is called a left rotate, because you can imagine the items in the list rotating to the left until they get to the front of the line, at which point they fall off and are put back on the end of the list.  (Using reverse iterators you can effectively perform a right rotate as well.)  So, how can we take advantage of this to simplify figuring out what needs to be deleted from our list of nodes?

Well, the answer is remarkably simple.  Once we locate the source and target nodes in the list, regardless of their relative position with respect to one another or to the ends of the list, we simply left rotate the list until the target becomes the head of the list.  That is, if we start with this:

We left rotate until we have this:

That's great, but what does that buy us?  Well, now that one of the participating nodes is at the head of the list, our problem is much simpler, because all of the nodes that we need to delete are now at one end of the list or the other.  The only question left to answer is: which end of the list do we trim off?  The answer is trivial.  We simply trim off the shorter end of the list with respect to the source node (the green node in the diagram).  The "lengths" of the two sections are defined as follows.  For the head section, it's the number of nodes up to, but not including, the source.  (This section includes the target node.)  For the tail, it's the number of nodes from the source to the end, including the source.  Since we define the two sections this way, we are guaranteed to delete either the source or the target, but not both.  It's fine to delete either one of them, because at this point we've deemed them geometrically coincident, but we must not accidentally delete both!!

In the example just given, after the rotate, we would delete the head of the list.  However, let’s take a look at our first example.  Here is the original list:

Here is the rotated list:

So, in this case, the "end" of the list (including the source) is the shortest.  If it is a tie, then it doesn't matter; just pick one.  Interestingly enough, if the two nodes are adjacent in the original list, then the rotated list will look like either this:

Or this, if the source is "before" the target in the original list:

In either case, the algorithm works unchanged, and we only delete one node.  It's beautiful! (At least in my opinion...)  Modern C++ makes this type of code really clean and easy to write.  Here is the entire thing, including the search to locate geometrically adjacent nodes as well as the merge.  The standard library algorithms really help out!

// Search lambda function for looking for any other node in the list that is
// coincident to this node, except this node.
auto searchAdjacentFun = [this, pNode](const NodeListTool::AdjustNodePtrT &pOtherNode) -> bool {
    if (pNode->tag() == pOtherNode->tag()) return false;
    return (QVector2D(pNode->pos() - pOtherNode->pos()).length() < m_snapTolerance);
};
auto targetLoc = std::find_if(m_nodes.begin(), m_nodes.end(), searchAdjacentFun);
// If we don't find an adjacent node within the tolerance, then we can't merge.
if (targetLoc == m_nodes.end()) {
    return false;
}
// Tidy things up so that the source has exactly the same position as the target.
pNode->setPos((*targetLoc)->pos());
// Begin the merge by left rotating the target so that it is at the
// beginning of the list.
std::rotate(m_nodes.begin(), targetLoc, m_nodes.end());
// Find this (source) node in the list.
auto searchThis = [pNode](const NodeListTool::AdjustNodePtrT &pOtherNode) -> bool {
    return (pNode->tag() == pOtherNode->tag());
};
auto sourceLoc = std::find_if(m_nodes.begin(), m_nodes.end(), searchThis);
// Now, figure out which nodes we are going to delete.
auto distToBeg = std::distance(m_nodes.begin(), sourceLoc);
auto distToEnd = std::distance(sourceLoc, m_nodes.end());
if (distToBeg < distToEnd) {
    // If our source is closer to the beginning (which is the target)
    // than it is to the end of the list, then we need to delete
    // the nodes at the front of the list.
    m_nodes.erase(m_nodes.begin(), sourceLoc);
} else {
    // Otherwise, delete the nodes at the end of the list.
    m_nodes.erase(sourceLoc, m_nodes.end());
}
// Now, see if we still have more than 2 vertices.
if (m_nodes.size() > 2) {
    m_bClosed = true;
} else {
    m_bClosed = false;
}
return true;