Press Release: New Expansion into Texas Grows PADT’s ANSYS Sales & Support Across the Entire Southwest

When people look at PADT and where we are located, they almost always say "You should open an office in Austin; the tech community there is a perfect fit for your skills and culture." We finally listened and are proud to announce that our newest location is in Austin, Texas. This new office will initially focus on ANSYS sales and support across the great state of Texas. We have had customers for other products and services in the state for decades and are pleased to now have a permanent local presence.

As an Elite ANSYS Channel Partner, we provide sales of the complete ANSYS product suite to any and all entities that can benefit from the application of numerical simulation. Across industries, we bring a unique technical approach to both sales and support, focused on identifying need and then selecting the right toolset, training, and support to deliver a return on the customer's investment as soon as possible. And the initial product purchase is just the start. Our ANSYS customers are partners that we grow with, and we are always ready to help them be better at whatever it is they do. Customers in Southern California, Nevada, Arizona, Utah, New Mexico, and Colorado already know this, and it is time for the engineering community in Texas to benefit from that experience.

Because we will be there for the long term, we are taking our time to look around the area. Our new salesperson, Ian Scott, is an Austin native who has worked in the engineering software space for some time. He will be working with existing customers and partners in the area to find the right long-term location for us. But we are already putting plans in place to deliver outstanding training, hold meetings, and maybe even have a celebration or two while we settle in.

Over time we will add local engineers and additional sales staff to meet the needs of the state, which, as you know, is big. We have big plans for PADT in Texas; this ANSYS sales and support role is just the beginning.

Make sure you watch this blog, social media, or our newsletter for announcements on a celebration for our new office as well as technical events we will start holding very soon.

We look forward to reconnecting with old friends and making new ones.  If you are in Texas, please reach out to us and send us any suggestions or recommendations you may have.  We are really looking forward to growing in Austin and across the Lone Star State.

Please find the official press release on this expansion below as well as versions in PDF and HTML.

Press Release:

Simulation, Product Development and 3D Printing Services Leader, PADT, Opens New Office in Austin, Texas

PADT Becomes the Only ANSYS Elite Channel Partner to Serve the Entire Southwest Region

TEMPE, Ariz., Austin, Texas, February 6, 2018

Phoenix Analysis and Design Technologies (PADT), the Southwest’s largest provider of simulation, product development, and rapid prototyping services and products, today announced it has opened an office in Austin, Texas. With this move, PADT is expanding its sales and support for ANSYS simulation software, becoming the only ANSYS Elite Channel Partner to cover the entire Southwest region.

"This is a major expansion for PADT with the opportunity to significantly grow our customer base," said Ward Rand, co-owner and principal, PADT. "We have worked with Texas companies on and off since we founded the company in 1994, and our success over the last decade has provided the opportunity to become a full-time resident of the vibrant and growing Austin business and technology community."

Although the initial focus for the PADT Austin office will be on ANSYS sales and support, the company plans to offer its wide array of other products and services in the future. PADT will host a grand opening celebration for customers, partners, and media in March 2018. Ian Scott, an Austin native, will launch the new office and lead the sales effort in the region.

“PADT’s expertise in simulation-driven product development will be a welcome addition to the Austin community,” said Scott. “Our focus at launch will be on educating the Austin technology scene on how to derive the best value from their engineering simulation software investment and building stronger relationships with our new neighbors.”

In 2017, PADT experienced a very successful year in terms of growing its capabilities as well as public recognition. PADT was named an ANSYS Elite Channel Partner for North America, partnered with Desktop Metal and Carbon to upgrade its 3D printing capabilities and services, and was named to Entrepreneur Magazine's list of the top small businesses in the nation, the Entrepreneur 360 List. The company's success has enabled PADT to take this step toward further expansion.

To learn more about PADT, visit or call 1-800-293-7238.

About Phoenix Analysis and Design Technologies
Phoenix Analysis and Design Technologies, Inc. (PADT) is an engineering product and services company that focuses on helping customers who develop physical products by providing Numerical Simulation, Product Development, and 3D Printing solutions. PADT’s worldwide reputation for technical excellence and experienced staff is based on its proven record of building long-term win-win partnerships with vendors and customers. Since its establishment in 1994, companies have relied on PADT because “We Make Innovation Work.” With over 80 employees, PADT services customers from its headquarters at the Arizona State University Research Park in Tempe, Arizona, and from offices in Torrance, California; Littleton, Colorado; Albuquerque, New Mexico; Murray, Utah, and Austin, Texas, as well as through staff members located around the country. More information on PADT can be found at

Media Contact
Alec Robertson
TechTHiNQ on behalf of PADT
PADT Contact
Eric Miller
PADT, Inc.
Principal & Co-Owner

Finding curve directions in ANSYS SpaceClaim   

As it so often does, another blog article idea came from a tech support question that I received the other day: "How do you view edge directions in ANSYS SpaceClaim?"

You can do it in Mechanical, on the Edge Graphics Options Toolbar:

This will turn on arrows so that you can see the edge directions. The direction of an edge or curve affects things like mesh biasing factors and mass flow rate boundary conditions. You need to make sure that all of your pipes in a thermal analysis, for instance, are flowing in the same direction. The little MAPDL sketch below shows the mesh-bias side of this.
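
For readers who think in Mechanical APDL terms, here is a minimal sketch of why line direction matters for mesh bias. This is my own illustration, not from the support case; the element type and dimensions are arbitrary.

! A 10-unit line meshed with a 5:1 spacing ratio; which end gets the
! small elements depends on the line's direction (keypoint order).
/PREP7
ET,1,LINK33          ! any line element will do for this illustration
K,1,0,0
K,2,10,0
L,1,2                ! line running from keypoint 1 to keypoint 2
LESIZE,1,,,10,5      ! 10 divisions; last division 5x the size of the first
LMESH,1              ! small elements near KP 1, large near KP 2
! Defining the line as L,2,1 instead would flip the bias.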

(I have also had three tech support calls about weird spikes showing up in customers' geometry. The Display Edge Direction toggle is also how you turn those off.)

In ANSYS SpaceClaim, there is no way to simply display the edge directions. The directions are controlled by which point you pick first while sketching, so if you are careful, you can make sure they are all consistent. But that doesn't help when you read in CAD files. So I thought I would share what I found after a little digging and playing: the Move tool behaves in a very specific way, a way we can use for our needs.

When you pick on the edge of a surface or solid, or even a straight sketched line, the red arrow of the Move Tool will point in the direction of the curve. These directions match what gets shown in Mechanical.

For splines, it’s a little bit different. If you just pick a spline with the Move Tool, the triad will align with the global coordinate system.

To see the spline direction, you first have to hover over the spline to show its vertices.

Then you can pick an interior vertex, and the Blue arrow of the Move Tool will follow the spline direction.

This only works at the interior vertices, and not at the ends. At the ends, the Blue tool arrow will always point outward from the spline endpoints, so you won’t really know which is the correct spline direction.

I have also found that this technique does not work on sketched circles or arcs, because the tool always anchors to the center of the curve, not to the curve itself. You can, however, use the Repair > Fit Curves tool to convert arcs to splines, using only the Spline option. The Move tool will then show those directions as described above. For circles, you have to take one more step and first use the Split tool to split the circle into two arcs. All of that, though, is, in my opinion, more work than it's worth.

I hope this helps make your lives just a little easier. Have a great day.

Spectre Side-Channel and Meltdown – How will living in this new reality affect the world of numerical simulation?

I was literally in the middle of sorting, running, and prepping data for a post originally titled "ANSYS Release 18.2 Ball Grid Array Benchmark Using Two Sixteen-Core INTEL® XEON® Gold 6130 CPUs" when I noticed that my news feeds had started to blow up with late-breaking HPC news. The news, as you may have guessed, was the recently published Spectre and Meltdown flaws.

I thought to myself, "Well, this is just great. The benchmarks that I just ran are no longer relevant." My next thought was, "Wait, now I can show a real-world example with exact percentage changes. I have waited this long to run the ANSYS numerical simulation benchmarks on this new CPU architecture; I can wait a little longer to post my findings." What now? Oh my, more late-breaking news! Research findings! Out-of-order execution, no barriers! Side channels used to get access to private address areas of the hardware! Wow, this is a bad day. I sat reading more news, drifted off daydreaming, came back to my screen, then looked at the clock on the wall. Great, it is 2 am already; just go home... Then my thoughts immediately shifted back to how these hardware flaws would impact the missing middle market and HPC numerical simulation. I dug in deep and pressed forward, content with starting over on the benchmarks, knowing that after the patches released around January 9th it would be a whole new world.

I will spare you the ugly details of the Spectre array-bounds/branch-prediction attack flaws and the out-of-order-execution Meltdown vulnerability. UGH! I seriously believe that someone has AI writing news articles five or six different ways, each somehow saying the same thing. Below are links to information and legal statements directly from a who's who list of accountable parties:

Executive Summary:

  • Remember: every case is different, so please run your own tests to verify how this new reality affects your hardware and software environment.
  • Due to cost, this machine has a single NVMe M.2 primary drive, with a single 2TB SATA drive for its mid-term storage area.
  • What was the impact on my benchmark?
    • Positive takeaways:
      • In all my years of running the sp5 benchmark, I recorded the fastest time yet using this CUBE w32s dual INTEL® XEON® Gold 6130 CPU workstation: 125.7 seconds of Solution Time (Time Spent Computing Solution) using all thirty-two cores.
      • Coming in at 135.7 seconds of Solution Time, the run after the OS patches is my second-fastest data point for the ANSYS sp5 benchmark (out of the sp5 data PADT, Inc. has collected from 2005 until now).
      • Solution Times continued to drop with each bump in cores.
      • Performance per dollar was maximized in this configuration.
    • The impact depends on the number of cores used for the ANSYS sp5 benchmark. The actual data below shows the percentage differences before and after. The largest percentage differences were:
      • Solution Time: -9.81% using four CPU cores.
      • Total Time: -7.87% using two CPU cores.
  • The time to turn the security screws down within your corporate enterprise network is now.
    • A rogue malicious agent needs to be inside your corporate network to execute any sort of crafted attack. Many of these details are outlined in the Project Zero abstract.
    • Pay extra attention to just who you let on your internal network. I reiterate the recommendation of many security professionals: you should already be restricting your internal company network and workstations to employee use. If you are not sure, ask again.
  • Spectre flaw: INTEL, ARM, and AMD CPUs are affected by the Spectre array-bounds hardware attacks.
  • Meltdown flaw: INTEL CPUs and some high-performance ARM CPUs are affected by the "easier to exploit" Meltdown vulnerability.

I am also interested to see how the continued insertion of code barriers and changed memory mappings affect my gaming performance. Haha! No, I am just kidding: my numerical simulation performance benchmarks.

Clarifications & Definitions:

  • Unpatched Benchmark Data – No mitigation patches from Microsoft or NVIDIA addressing the Spectre and Meltdown flaws have been applied to the Windows 10 Professional OS running on the CUBE w32s used in this benchmark.
  • Patched Benchmark Data – I installed the batch of patches released by Microsoft as well as the graphics card driver update released by NVIDIA. NVIDIA's advisory indicates that their GPU hardware is not affected, but that they are updating their drivers to help mitigate the CPU security issue. Huh? Installing now…
  • Solution Time – The amount of time, in seconds, that the CPUs spent computing the solution ("Time Spent Computing Solution").
  • Total Time – The total time, in seconds, that the entire process took; how the solve felt to the user, also known as wall clock time.

The CUBE machine that I used in this ANSYS test case represents a fine balance of price, performance, and ANSYS HPC licenses used.

  • CUBE w32s, INTEL® XEON® Gold 6130 CPU, 128GB’s DDR4-2667MHz (1Rx4) ECC REG DIMM, Windows 10 Professional, ANSYS Release 18.2, INTEL MPI, 32 Total Cores, NVIDIA QUADRO P4000, Samsung EVO 960 Pro NVMe M.2, Toshiba 2TB 7200 RPM SATA 3 Drive.
  • Other notables (are you still paying attention?):
    • My Supermicro X11Dai-N BIOS Settings:
      • BIOS Version: 2.0a
      • Execute Disable Bit: DISABLE
      • Hyper threading: ON
      • Intel Virtualization Technology: DISABLE
      • Core Enabled: 0
      • Power Technology: CUSTOM
      • Energy Performance Tuning: DISABLE
      • Energy performance BIAS setting: PERFORMANCE
      • P-State Coordination: HW_ALL
      • Package C-State Limit: C0/C1 State
      • CPU C3 Report: DISABLE
      • CPU C6 Report: DISABLE
      • Enhanced Halt State: DISABLE
    • With a read performance of up to 3,200 MB/s and a write performance of up to 1,900 MB/s, the Samsung NVMe M.2 drive was too tempting to pass up as my solve and temp-file location. The bandwidth from the little feller was impressive and continued to impress throughout the numerical simulation benchmarks.

My first overall impression of this configuration is: Wow! This workstation is fast and quiet, and as you will see, it number-crunches its way right on through to being my fastest documented workstation benchmark in this class. This extremely challenging and I/O-intensive ANSYS benchmark is no match for this solver! Thumbs up and cheers to happy solving!

  • Cube w32s by PADT, Inc. ANSYS Release 18.2 FEA Benchmark
  • BGA (V18sp-5)
  • Transient nonlinear structural analysis of an electronic ball grid array
  • Analysis Type: Static Nonlinear Structural
  • Number of Degrees of Freedom: 6,000,000
  • Matrix: Symmetric

It Is All About The Data:

Benchmark data from before and after the industry software patches for Spectre and Meltdown on the CUBE w32s.

Table 1 – ANSYS sp5 Benchmark – Unpatched Windows 10 Professional

ANSYS sp5 Benchmark – Unpatched Windows 10 Professional for the Spectre and Meltdown hardware vulnerabilities – CUBE w32s
CPUs Solution Time Total Time
2 631.3 671
4 366.8 422
8 216 259
12 193 235
16 144.3 185
20 143.9 187
24 131.9 175
28 137.4 185
31 142.4 185
32 125.7 171
ANSYS Release 18.2 – SP5 Benchmark – Unpatched Windows 10 Professional CUBE w32s Solution and Total Time Values

Table 1.1 – ANSYS sp5 Benchmark  – Patched Windows 10 Professional

ANSYS sp5 Benchmark  – Patched Windows 10 Professional – CUBE w32s
CPUs Solution Time Total Time
2 683 726
4 405.5 446
8 235.8 277
12 209.2 251
16 148.8 191
20 145.7 189
24 136.3 182
28 138.7 186
31 134.6 179
32 135.7 179
ANSYS Release 18.2 – SP5 Benchmark – Patched Windows 10 Professional for the Spectre and Meltdown hardware flaws – Solution and Total Time Values

Table 2 – ANSYS sp5 Benchmark – The Before and After, in Percentage Difference

Percentage Difference – Unpatched vs. Patched for Spectre and Meltdown
CPUs Solution Time Total Time
2 -7.94 -7.87
4 -9.81 -5.53
8 -8.34 -6.72
12 -7.57 -6.58
16 -2.73 -3.19
20 -1.09 -1.06
24 -2.87 -3.92
28 -0.81 -0.54
31 4.76 3.30
32 -6.74 -4.57

Fig 2.a

Percentage impact for this example; a negative value means the patched Windows 10 Professional CUBE w32s is taking a performance hit. Note the interesting blip of positive percentage at 31 cores. The data from this Windows 10 Professional CUBE w32s with INTEL® XEON® Gold 6130 CPUs shows an impact related to the Spectre and Meltdown patches.

Fig 2.b

Percentage impact on Total Time for this example; a negative value means the patched Windows 10 Professional CUBE w32s will feel slower to solve, judging by the clock on the wall.
CUBE w32s in action – January 2018

Please contact your local ANSYS Software Sales Representative for more information on purchasing ANSYS HPC Packs. You too may be able to speed up your solve times by unlocking more compute power!

What the heck is a CUBE? For more information regarding our Numerical Simulation workstations and clusters please contact our CUBE Hardware Sales Representative at SALES@PADTINC.COM

Designed, tested and configured within your budget. We are happy to help and to listen to your specific needs.

CUBE w32s in action – January 2018

Save, Save, Save! Setting up a Save after Solution in ANSYS Mechanical


Is this the reaction you have when you come in on Monday morning and realize that another Windows update has, once again, rebooted your PC before you had a chance to save the 30-hour run that should have finished over the weekend? There is a Workbench setting that can help relieve some of that stress.

The "Save Project After Solution" option (in recent versions you will find it in the Workbench Tools > Options dialog) will save the entire project as soon as the solution has finished. So when your model runs for 30 hours over the weekend, it gets saved before a Windows update shuts everything down. These settings are persistent, so once you've changed them to 'Yes', you are all set for next time. You just need to make sure that you change them for each ANSYS version if you have more than one installed.

Now on to my next blog… “How to recover a run if you forgot to change the settings above.” (Grumble Grumble!)

Transitioning to ANSYS

Before joining PADT last July, I had worked on FEA and CFD analyses, but my exposure to ANSYS was limited, and I was concerned about the transition. To my delight, the software was very easy to learn: more often than not intuitive and self-explanatory (e.g. the Mechanical wizard), with setup time minimized after learning a couple of simple features (e.g. named selections, the object generator, etc.), and the resources on the ANSYS portal were instrumental in the learning process. Furthermore, my colleagues at PADT proved to be very knowledgeable and experienced and, more importantly, responsive and eager to jump in and help.

One of the features that most caught my attention was the streamlined multiphysics nature of ANSYS. I have been satisfied with the performance of standalone CFD packages in the past, and the same goes for structural ones, but never had I dealt with a suite this extensive that maintained the quality of a specialized tool. The importance of this attribute is showing its power more and more in recent years, given the development of new, convoluted products of a multiphysics nature, e.g. medical applications.

Using ANSYS to simulate medical applications is one of the most rewarding experiences I personally enjoy. It is definitely satisfying to help accelerate innovation in aerospace, automotive, and a myriad of other industrial areas, but the experience in the medical area has a more refreshing taste, probably due to the clear and direct link to human lives. From intravascular procedures to shoulder implants and microdevices, there is one common factor: ANSYS is decreasing the risk of catastrophic failures, improving product capabilities, and shortening the innovation cycle.

Editor's Note: Ziad is part of PADT's team in Southern California. He is a graduate of USC and worked at Boeing, Meggitt, and Pacific Consolidated Industries before joining PADT. He works with the rest of our ANSYS technical staff to make sure our users are getting the most from their ANSYS investment.



Without Risk There Can Be No Progress

I'm sure most people don't know the name George M. Low. He was an early employee at NASA, serving as Chief of Manned Space Flight and later as a leader in NASA's Apollo moon program in the late 1960s. In fact, he was named Manager of the Apollo Spacecraft Program after the deadly Apollo 1 fire in 1967 and helped the program move forward to the successful moon landings starting in 1969.

As most alumni of Rensselaer Polytechnic Institute know, he returned to Rensselaer, his alma mater, serving as president from 1976 until his death in 1984. I still recall the rousing speech he gave to us incoming freshmen at the Troy Music Hall on a hot September afternoon. On our class rings is his quote: "Without risk there can be no progress."

I’ve pondered that quote many times in the years since.  It’s easy to coast along in many facets of life and accept and even embrace the status quo.  Over the years, though, I have observed that George Low was right, and the truth is that risk is required to move forward and improve.  The hard part is determining the level of risk that is appropriate, but it’s a sure bet that by not taking any risk, we will lag behind.

How is that realization applicable to our world of engineering simulation? Surely those already doing simulation have moved from the old process of design > test > break > redesign > test > produce to embrace the faster and more efficient simulate > test > produce, right? Perhaps, but even if they have, that doesn't mean there can't be progress with some additional risk.

Let’s look at a couple of examples in the simulation world where some risk taking can have significant payoffs.

First, transitioning from ANSYS Mechanical APDL to ANSYS Mechanical (Workbench). Most have already made the switch. I'll allow that there are still some applications that can be completely scripted within the old Mechanical APDL (ANSYS Parametric Design Language) in an incredibly efficient manner. However, if you are dealing with geometry that's even remotely complex, I'll wager that your simulation preparation time will be much faster using the improved CAD import and geometry manipulation capabilities within the ANSYS Workbench Mechanical workflow. And that's to say nothing of meshing: meshing is lights-out faster, more robust, and of better quality in modern versions of Mechanical than anything we can do in the older Mechanical APDL mesher.

Second, using ANSYS SpaceClaim to clean up, modify, create, and otherwise manipulate geometry. It doesn't matter what the source of the geometry is; SpaceClaim is an incredible tool for quickly making it usable for simulation as well as lots of other purposes. I recently used the SpaceClaim tools within ANSYS Discovery Live to combine assemblies from Inventor and SolidWorks into one model, seamlessly, and was able to move, rotate, orient, and modify the geometry to what I needed in a matter of minutes (see the Discovery Live image at the bottom). The cleanup tools are amazing as well.

Third, looking into ANSYS Discovery Live. Most of us can benefit from quick feedback on design ideas and changes. The new Discovery Live tool makes that a reality. Currently in technology demonstration mode, it's free to download and try from ANSYS, Inc. through early 2018. I'm utterly amazed by how fast it can read in a complex assembly and start generating results for basic structural, CFD, and thermal simulations. What used to take weeks or months can now be done in a few minutes.

Credits:  Motorcycle geometry downloaded from GrabCAD, model by Shashikant Soren.  Human figure geometry downloaded from GrabCAD, model by Jari Ikonen.  Models combined and manipulated within ANSYS Discovery Live. George M. Low image from

I encourage you to take some risks for the sake of progress.

Estimating Structural Response to Random Vibration in ANSYS Mechanical: Reaction Forces

One of the key outputs from any random vibration analysis is the response of the object you are analyzing in terms of reaction forces. In the presentation below, Alex Grishin shares the theory behind getting accurate forces and then shows how to do so in ANSYS Mechanical; the note just below states the statistical fact at the heart of it.
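
As general background (this is the standard textbook result, not a summary of Alex's slides): variances of correlated random quantities add with a covariance term,

σ²(A+B) = σ²(A) + σ²(B) + 2·cov(A,B)

so the RMS (1-sigma) values reported for individual quantities in a PSD analysis cannot simply be summed. Getting an accurate combined reaction force requires accounting for the cross-correlation between the contributing terms, which is exactly the kind of bookkeeping the presentation walks through.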


As always, please contact PADT for your ANSYS simulation, training, and customization needs.



IEEE Day 2017: Smart Antennas for IoT and 5G

IEEE Day celebrates the first time in history, back in 1884, when engineers worldwide gathered to share their technical ideas. Events were held around the world by 846 IEEE chapters this year. So, to celebrate, I attended a joint chapter meeting at The Museum of Flight in Seattle, with technical presentations focused on "Smart Antennas for IoT and 5G". There were approximately 60 people in attendance; assuming this was the average attendance globally, over 50,000 engineers celebrated IEEE Day worldwide!

The Seattle seminar featured three speakers that spanned theory, design, test, integration, and application of smart antennas. There was much discussion about the complexity and challenges of meeting the ambitious goals of 5G, which extend beyond mobile broadband data access. Some key objectives of 5G are to increase capacity, increase data rates, reduce latency, increase availability, and improve spectral and energy efficiency by 2020. A critical technology behind achieving these goals is beamforming antenna arrays, which were at the forefront of each presentation.

Anil Kumar from Boeing focused on the application of mmWave technology on aircraft. Test data was used to analyze EM radiation leakage through coated and uncoated aircraft windows. However, since existing regulations don’t consider the increased path loss associated with such high frequencies, the integration of 5G wireless applications may be restricted or delayed. Beyond this regulatory challenge, Anil discussed how multipath reflectors and absorbers will present significant challenges to successful integration inside the cabin. Although testing is always required for validation, designing the layout of the onboard transceivers may be impractical to optimize without an asymptotic EM simulation tool that can account for creeping waves, diffraction, and multi-bounce.

Considering the test and measurement perspective, Jari Vikstedt from ETS-Lindgren focused on the challenges of testing smart antenna systems. Smart or adaptive antenna systems will not likely perform the same in an anechoic chamber test as they would in real systems. Of particular difficulty, radiation null placement is just as critical as beam placement. This poses a difficult challenge for the number and location of probes in a test environment. Not only would a large number of probes become impractical, but there is also significant shadowing at mmWave frequencies, which can negatively impact the measurement. Furthermore, compact ranges can significantly impact testing, and line-of-sight measurements become particularly challenging. While not a purely test-oriented observation, this led to considering the challenge of tower hand-off. If a handset and tower use beamforming to maintain a link, it is difficult for an approaching tower to even sense the handset to negotiate the hand-off.

In contrast, if the handset were continuously scanning, the approaching tower could sense it and negotiate the hand-off before the link is jeopardized.

The keynote speaker, who also traveled from Phoenix to Seattle, was ASU Professor Dr. Constantine Balanis. Dr. Balanis opened his presentation by making a distinction between conventional "dumb antennas" and "smart antennas". In reality, there are no smart antennas, only smart antenna systems. This is a critical point from an engineering perspective since it highlights the complexity and challenge of designing modern communication systems. The focus of his presentation was using an adaptive system to steer null points, in addition to the beam, in an antenna array using a least mean squares (LMS) algorithm, sketched below. He began with a simple linear patch array with fixed uniform amplitude weights, since an analytic solution was practical and could be used to validate a simulation setup. Once the simulation results were verified for confidence, designing a more complex array with weighted amplitudes accompanying the element phase shift was only practical through simulation. While beam steering will create a device-centric system by targeting individual users on massive multiple-input, multiple-output (MIMO) networks, null steering can improve efficiency by minimizing interference to other devices.
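
For reference, in its textbook form (not specific to Dr. Balanis's implementation), the LMS algorithm adapts the complex array weights w by stepping them along the error gradient at each snapshot k:

e(k) = d(k) − wᴴ(k)·x(k)
w(k+1) = w(k) + μ·x(k)·e*(k)

where x(k) is the vector of element signals, d(k) is the reference (desired) signal, and μ is the step size controlling convergence. Driving e(k) toward zero is what steers the beam toward the desired signal and the nulls toward interferers.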

Whether or not spatial processing is truly the "last frontier in the battle for cellular system capacity", 5G technology will most certainly usher in a new era of high-capacity, high-speed, efficient, and ubiquitous communication. If you would like to learn more about how PADT approaches antenna simulation, you can read about it here or contact us directly.

Parameterizing Solid Models for ANSYS HFSS

ANSYS HFSS features an integrated “history-based modeler”. This means that an object’s final shape is dependent on each and every operation performed on that object. History-based modelers are a perfect choice for analysis since they naturally support parameterization for design exploration and optimization. However, editing imported solid 3D Mechanical CAD (or MCAD) models can sometimes be challenging with a history-based modeler since there are no imported parameters, the order of operation is important, and operational dependencies can sometimes lead to logic errors. Conversely, direct modelers are not bound by previous operations which can offer more freedom to edit geometry in any order without historic logic errors. This makes direct modelers a popular choice for CAD software but, since dependencies are not maintained, they are not typically the natural choice for parametric analysis. If only there was a way to leverage the best of both worlds… Well, with ANSYS, there is a way.

As discussed in a previous blog post, since the release of ANSYS 18.1, ANSYS SpaceClaim Direct Modeler (SCDM) and the MCAD translator used to import geometry from third-party CAD tools are now packaged together. The post also covered a few simple procedures to import and prepare a solid model for electromagnetic analysis. However, this blog post will demonstrate how to define parameters in SCDM, directly link the model in SCDM to HFSS, and drive a parametric sweep from HFSS. This link unites the geometric flexibility of a direct modeler to the parametric flexibility of a history-based modeler.

You can download a copy of this model here to follow along. If you need access to SCDM, you can contact us. It's also worth noting that the processes discussed throughout this article work the same for HFSS-IE, Q3D, and Maxwell designs.

[1] To begin, open ANSYS SpaceClaim and select File > Open to import the step file.

[2] Split the patch antenna and reference plane from the dielectric. Click here for steps to splitting geometry. Notice the objects can be renamed and colors can be changed under the Display tab.


[1] Click and hold the center mouse button to rotate the model, zoom into the microstrip feed using the mouse scroll, then select the side of the trace.

[2] Rotate to the other side of the microstrip feed, hold the Ctrl key, and select the other side of the trace. Note the distance between the faces is shown as 3mm in the Status Bar at the bottom of the screen, which is the initial trace width.

[3] Select Design > Edit > Pull and select No merge under Options – Pull.

[4] Click the yellow arrow in the model, and drag the side of the trace. Notice how both faces move in or out to change the trace width. After releasing the mouse, a P will appear next to the measurement box. Click this P to create a parameter.

[5] Select the Groups panel under the Structure tree. Change “Group1” to “traceWidth” and reset the Ruler dimension to 0mm. Then, save the project as UWB_Patch_Antenna_PCB.scdoc and leave SCDM open.


[1] Open ANSYS Electronics Desktop (AEDT), insert a new HFSS Design, and select the menu item Modeler > SpaceClaim Link > Connect to Active Session… Notice that there is an option to browse and open any SCDM project if the session is not currently active (or open).

[2] Select the active UWB_Patch_Antenna_PCB session and click Connect.

[3] The geometry from SCDM is automatically imported into HFSS.


[1] Double-click the SpaceClaim1 model in the HFSS modeler tree and select the Parameters tab in the pop-up dialogue box. Notice the SCDM parameter can now be controlled within HFSS. Change the Value of traceWidth to SCDM_traceWidth to create a local variable and set SCDM_traceWidth equal to -1mm. Then click OK. Notice a lightning bolt over the SpaceClaim1 model to indicate changes have been made.

[2] Right-click SpaceClaim1 in the modeler tree and select Send Parameters and Generate.

[3] Notice how the HFSS geometry reflects the changes.

[4] Notice how SCDM also reflects the changes. In practice, it is generally recommended to browse to unopened SCDM projects (rather than connecting to an active session) to avoid accidentally editing the same geometry in two places.


At this point, not only can the geometry in SCDM be controlled by variables in HFSS, but a parametric analysis can now be performed on geometry within a direct modeler. The best of both worlds!

Use the typical steps within HFSS to set up a parametric sweep or optimization. When performing a parametric analysis, the link between HFSS and SCDM updates the geometry automatically, so step [2] above does not need to be performed manually. Be sure to follow the typical HFSS setup procedures, such as assigning materials, defining ports and boundaries, and creating a solution setup, before solving.

Here are some additional pro-tips:

  1. Create local variables in HFSS that can be used for both local and linked geometry. For example, create a variable in HFSS for traceWidth = 3mm (the previously noted width) and define SCDM_traceWidth = (traceWidth - 3mm)/2. Now the port width can scale with the trace width; the worked numbers below show where the divide-by-two comes from.
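
Recall from the Pull step earlier that dragging one side moved both faces symmetrically, so the pull distance is half the width change. Using a hypothetical target width of 4mm:

SCDM_traceWidth = (4mm − 3mm)/2 = 0.5mm

Each of the two faces moves outward 0.5mm, adding the full 1mm of width.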


  2. Link to multiple SCDM projects. Either move and rotate parts as needed or create a separate coordinate system for each component. For example, link an SMA end connector to the same HFSS project to analyze both components. Notice that each component has variables, and the substrate thickness changes both SCDM projects.


  3. Design other objects in the native HFSS history-based modeler that are dependent on the SCDM design variables. For example, the void in an enclosure could be a function of SCDM_dielectricHeight, so that the enclosure void follows the SCDM dielectric height.


Using External Data in ANSYS Mechanical to Apply Tabular Loads with Multiple Variables, Part 2

ANSYS Mechanical is great at applying tabular loads that vary with a single independent variable, say time or Z. But what if you want a tabular load that varies in multiple directions and time? In part one of this series, I covered how you can use the External Data tool to do just that. In this second part, I show how you can alternatively use APDL commands added to your ANSYS Mechanical model to define the tabular loading; a small sketch of that approach follows.
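
To give a flavor of the APDL route before you watch the presentation, here is a minimal sketch of a pressure load that varies with X, Y, and TIME, defined in a command snippet. The table name, dimensions, and values are illustrative, not taken from the presentation.

! Pressure table with three primary variables: X, Y, and TIME
*DIM,press,TAB3,2,2,2,X,Y,TIME
! Zeroth-index entries hold the primary-variable values
press(1,0,0)=0.0  $ press(2,0,0)=1.0    ! X locations
press(0,1,0)=0.0  $ press(0,2,0)=1.0    ! Y locations
press(0,0,1)=0.0  $ press(0,0,2)=1.0    ! TIME points
! Table data: press(i,j,k) = pressure at (X_i, Y_j, TIME_k)
press(1,1,1)=0    $ press(2,1,1)=50
press(1,2,1)=25   $ press(2,2,1)=75
press(1,1,2)=100  $ press(2,1,2)=150
press(1,2,2)=125  $ press(2,2,2)=175
! Apply to the currently selected nodes; %...% forces table evaluation
SF,ALL,PRES,%press%

ANSYS then interpolates the applied load in space and time from the table values.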


Working Wonders with APDL Math, Illustrated: Thermal Modal Analysis

Guest Blogger

We are pleased to publish this very useful post from Nicolas Jobert from Synchrotron SOLEIL in France. Nicolas is a Mechanical Engineer with more than 20 years of experience using ANSYS for engineering design and analysis in academia and industry. He currently is Senior Mechanical Engineer at Synchrotron SOLEIL, the French synchrotron radiation facility. He also teaches various courses on Design and Validation in the field of structural and optomechanics. He graduated from the Ecole Centrale Marseille, France, and is a EUSPEN member.

As Time Goes By

Do you remember the moment you first heard about ANSYS introducing APDL Math?

I, for one, do, and I have a vivid memory of thinking, "Wow, that could be a powerful tool. I'm dead sure it won't be long before an opportunity arises and I start developing pretty useful procedures and tools." Well, that was half a decade ago, and to my great shame, nothing quite like that had happened. The reasons are obvious and probably the same for most of us: lack of time and energy to learn yet another set of commands; fear of the ever-present risk of developing procedures that are eventually rejected as nonstandard use of the software and therefore error-prone (those of you working under quality assurance, raise your hands!); and the anxiety of working directly under the hood on real projects with little means to double-check your results, to name a few.

That said, an opportunity finally presented itself, and before I knew it, I was up and running with APDL Math. The objective of this article is to showcase some simple yet insightful applications and, hopefully, remove whatever reservations you may have about using these additional capabilities.

For the sake of demonstration, I will begin with a somewhat uncommon analysis that should nevertheless ring a bell for most of you: modal analysis (and yes, the pun is intended). You may wonder what the purpose is of using APDL Math to perform a task that has been a standard ANSYS capability since, say, revision 2.0, 40 years ago. But wait, did I mention that by modal analysis, I mean thermal modal analysis?

Thermal Modal Analysis at a Glance

Although scarcely used, thermal modal analysis is both an analysis and a design validation tool, mostly used in the fields of precision engineering and/or optomechanics. Specifically, it can serve a number of purposes, such as:

Q: Will my system settle fast enough to fulfill design requirements?
A: Compute the system's thermal time constants.

Q: Where should I place sensors to get information-rich, robust measurements?
A: Compute the thermal modes and place your sensors away from large thermal gradients.

Q: Can I develop a reduced model to solve large transient thermal-mechanical problems?
A: A modal basis allows the construction of such a reduced problem, effectively converting a high-order coupled system into a low-order, uncoupled set of equations (sketched just below).

Q: How do I develop a reduced-order state-space matrix representation of my thermal system (equivalent to the SPMWRITE command)?
A: Modal analysis provides every result needed to build those matrices directly within ANSYS.
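
As a sketch of the reduction mentioned above (this is standard modal-superposition algebra, not input from this article): expanding the temperature field on the modal basis, {T} = [Φ]{q}, and premultiplying the forced equation [C]{Ṫ} + [K]{T} = {Q} by [Φ]ᵀ decouples the system. With the modes normalized so that [Φ]ᵀ[C][Φ] = [I] (one of several possible normalizations; the example later in this article instead normalizes to unit maximum amplitude), each modal coordinate obeys its own scalar first-order equation:

q̇ᵢ + (1/τᵢ)·qᵢ = {φᵢ}ᵀ{Q}

which is exactly the low-order, uncoupled set of equations referred to in the answer above.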

Although you might be only vaguely familiar with many or all of these topics, the idea behind this article is really to show that APDL Math does exactly what you need it to do: allow the user to efficiently address specific needs with a minimal amount of additional work. Minimal? Let's see what it looks like in reality, and you will soon be in a position to form your own opinion on the matter.

Thermal Modal Analysis using APDL Math

To begin with, it is worth underlining the similarities and differences between structural (vibration) modes and thermal modes.

Mathematically, both look very much the same, i.e. modes are solutions of the dynamics equation in the absence of a forcing (external) term:


Structural: [M]{ü} + [K]{u} = {0}. Substituting {u} = {φ}e^(iωt) gives ([K] − ω²[M]){φ} = {0},
where [K] is the stiffness matrix and [M] is the mass matrix.

Thermal: [C]{Ṫ} + [K]{T} = {0}. Substituting {T} = {φ}e^(−t/τ) gives ([K] − (1/τ)[C]){φ} = {0},
where [K] is the conductivity matrix and [C] is the capacitance matrix.

Now, the fundamental difference is that the eigenvalues have completely different physical interpretations. (This is a direct consequence of the fact that dynamical systems are 2nd-order systems, whereas thermal systems are 1st-order systems. While the former will oscillate around an equilibrium position after being disturbed, the latter will return to its initial state via exponential decay. Mind you, there is no such thing as a thermal resonance!)

  • Structural: λ = ω², i.e. the square of a circular frequency
  • Thermal: λ = 1/τ, i.e. the inverse of a time constant

No big deal, right? Hence, the APDL Math code for thermal modal analysis should be a straightforward adaptation of the original. As it turns out, the modifications are quite small. Below is a comparison of the input outlines for both types of analysis using APDL Math.




Structural modal analysis (outline):

! Setup model

! Make ONE dummy transient solve
! to write the stiffness and mass
! matrices to the .FULL file

! Get stiffness and mass matrices

! Eigenvalue extraction

! No need to convert eigenvalues
! to frequencies; ANSYS does
! it automatically

! Done!

Thermal modal analysis (outline):

! Setup model

! Make TWO dummy transient solves
! to separately write the conductivity
! and capacitance matrices to .FULL files

! Zero out capacitance terms
! Get conductivity matrix
! Restore capacitance and zero out
! conductivity terms
! Get capacitance matrix

! Eigenvalue extraction

! Convert eigenvalues from frequencies
! to thermal time constants

! Done!

The only data requested from the user is the number of requested modes (NRM) as well as an upper frequency (or, for that matter, the shortest time constant of interest). Also, note that in the thermal case, one needs to perform two separate dummy analyses to store the conductivity and capacitance matrices, since internally these are merged into a single equivalent stiffness (conductivity) matrix. A minimal sketch of these steps follows.
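
Here is a minimal sketch of the core thermal-side commands. The file names, mode count, and bookkeeping are my illustration of the outline above, not Nicolas's exact input; it assumes the two dummy transient solves wrote cond.full (capacitance zeroed) and capa.full (conductivity zeroed), and that *EIGEN reports eigenvalues as frequencies, as in the structural case.

! Import the matrices written by the two dummy transient solves
*SMAT,MatK,D,IMPORT,FULL,cond.full,STIFF   ! conductivity [K]
*SMAT,MatC,D,IMPORT,FULL,capa.full,STIFF   ! capacitance [C]

NRM = 8                        ! number of requested modes
/SOLU
ANTYPE,MODAL
MODOPT,LANB,NRM                ! Block Lanczos, NRM modes
*EIGEN,MatK,MatC,,EiV,MatPhi   ! EiV and MatPhi are created automatically

! Convert each reported frequency f (Hz) to a thermal time constant:
! lambda = (2*pi*f)^2  ->  tau = 1/lambda
pi = ACOS(-1)
*DO,i,1,NRM
   Tau_%i% = 1/((2*pi*EiV(i))**2)
*ENDDO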

If you are familiar with APDL, some important differences are apparent here:

  • The eigenvalue results are stored in a vector (EiV) and a matrix (MatPhi), which need not be declared beforehand; they are created when the *EIGEN command executes (no *DIM required).
  • For each APDL Math entity, ANSYS automatically maintains variables named Param_rowDim and Param_colDim, removing the burden of keeping track of dimensions.

But where on Earth is my eye candy?

Now that we have a procedure and results, we would like to be able to show them to the outside world (and, to be honest, some graphical results would also help build confidence in the numbers).

The additional work required is really minimal. We simply need to put those numerical results back into the ANSYS database so that we can use all the conventional post-processing capabilities. This can be done using the appropriate POST1 commands, essentially DNSOL. And while we are at it, why not do a hardcopy to an image file? Here is the corresponding input, reconstructed as a hedged sketch: the component and file names are assumptions, and the original's two-step solver-to-BCS-to-user mapping is compressed here into a single solver-to-node mapping.

! Assumptions for this reconstructed sketch: all nodes with
! non-prescribed temperatures are in a component named MyNodeComponent,
! and a result set has already been read into the database (SET) so
! that DNSOL can overwrite it.
/POST1
ind_mode = 1
! Convert the eigenvector from solver ordering back to node ordering
*SMAT,Nod2Solv,D,IMPORT,FULL,cond.full,NOD2SOLV
*VEC,PhiI,D,LINK,MatPhi,ind_mode       ! ind_mode-th column of MatPhi
*MULT,Nod2Solv,TRANS,PhiI,,PhiUsr      ! solver -> user (node) ordering
! Put the results into the ANSYS database
CMSEL,S,MyNodeComponent
nd = 0
*DO,j,1,PhiUsr_rowDim
   nd = NDNEXT(nd)
   DNSOL,nd,TEMP,,PhiUsr(j)
*ENDDO
ALLSEL
Tau = 1/((2*ACOS(-1))**2*EiV(ind_mode)**2)
To = NINT(Tau*10)/10                   ! compress to 1 digit after the decimal point
/TITLE,Mode #%ind_mode% - Tau=%To%s
/SHOW,PNG                              ! hardcopy to an image file
PLNSOL,TEMP
/SHOW,CLOSE

This way, modes can be displayed, or even written to a conventional .RTH file (using RAPPND), and used like any regular ANSYS solver result.

Nice, but an actual example wouldn’t hurt, would it?

Now you may wonder what the results look like in practice. To remain within the field of precision engineering, let's use a support structure typically designed for high-stability positioning. From a structural point of view, it must have a high dynamic stiffness and a low total mass, so a delta-shaped bracket is appropriate. Since we want the system to rapidly evacuate any heat load, we choose aluminum as the candidate material. We know from first principles that any applied disturbance will exponentially vanish and the system will return to its equilibrium state. But what will be the time constants of this decay?

For the sake of simplicity we restrict the analysis to a highly simplified, 2D model of such a support. PLANE55 elements are used to model the structural part while the heat sink is accounted for using SURF151. Boundary conditions are enforced using an extra node.

After applying boundary conditions, we execute the modal solution to obtain, say, the first 8 modes.

Index Time Constant [s] Comment
1 535.9 Quasi-uniform temperature field (i.e. "rigid body" mode)
2 32.1 1st order (one wavelength along the perimeter)
3 23.8 1st order (one wavelength along the perimeter)
4 8.1 2nd order (two wavelengths along the perimeter)
5 6.8 2nd order (two wavelengths along the perimeter)
6 3.5 3rd order (three wavelengths along the perimeter)
7 3.1 3rd order (three wavelengths along the perimeter)
8 2.2 4th order (four wavelengths along the perimeter)

The output is strictly the same as that of a standard modal analysis, except for two additional lines at the end of the solving sequence:

Allocate a [8] Vector : EIV

Allocate a [227][8] Dense Matrix : MatPhi

Please note that the solution has 227 DOFs whereas the entire problem has 228 DOFs. This is the consequence of having introduced the boundary condition as an enforced temperature on a node, whose DOF is therefore removed from the set to be obtained by the solver.

Also, we might want to use the modal shape information to decide which locations are best suited to capture the entire temperature field on the structure. Without knowledge of the excitation source, one straightforward way to do this is to retain, for each mode, the node that has the largest amplitude. This is made even easier here: since we have normalized each mode to unit maximum amplitude, we just need to select nodes having a modal amplitude equal to 1 (or -1), as in the sketch below. In the figure below, each temperature sensor location is marked with a 'TSm' label, where m is the mode index.
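
A minimal sketch of that selection, reusing the ordering machinery from the earlier listing (the loop-and-compare logic here is my illustration, not the article's input):

! For each mode, map to node ordering and find the largest |amplitude|
*DO,m,1,NRM
   *FREE,PhiUsr                      ! discard the previous mode's copy
   *VEC,PhiI,D,LINK,MatPhi,m
   *MULT,Nod2Solv,TRANS,PhiI,,PhiUsr
   jmax = 1
   amax = ABS(PhiUsr(1))
   *DO,j,2,PhiUsr_rowDim
      a = ABS(PhiUsr(j))
      *IF,a,GT,amax,THEN
         amax = a
         jmax = j
      *ENDIF
   *ENDDO
   ! Row jmax marks the node that gets sensor label TS%m%
*ENDDO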

Doing so, we reach a pretty satisfactory distribution of sensor locations, completely consistent with intuition. In numerical terms, we can also check that the modal matrix [Φ]_sensors, i.e. the full modal matrix restricted to the selected DOFs, has an excellent condition number. And there are many other things we could do from here. For example, with additional information, such as the location and frequency content of the temperature fluctuations, one could further restrict the set of needed temperature sensors by running a dummy transient analysis and choosing locations where the correlation between sensor readings is as low as possible (using *MOPER,,,CORR). Even better, one can estimate the thermally induced displacements and select the locations best suited to build an empirical model (typically using AR or ARMA), allowing one to predict structural displacements induced by temperature fluctuations using just a couple of sensors. This, in turn, can be used to select control strategies, check modal controllability… all within ANSYS.


APDL Math was presented as an alternate route for users who need to include specialized steps in an otherwise standard FE process, and in my opinion it does just that. The benefits can be immense, and the learning curve is steep but short. As long as users know what they are doing, there is little chance of getting lost: after all, APDL Math comprises only 18 additional commands.

What hindered me so far was the need to account for internal, BCS, and user orderings, but it really is not a big deal, as seen in the example above.

What is more, the ability to store the created results in the Mechanical APDL database (DNSOL and RAPPND are your friends!) provides every means to check your results and, finally, build confidence in your developments.

And for those of us who prefer to stay within the Workbench environment, there is nothing preventing you from including APDL Math procedures in Workbench command snippets.

This was just an introductory example; many other applications could be found. To name a few, just in the fields of precision engineering and/or opto-mechanics:

  • Speed up transient thermal mechanical analyses
  • Perform harmonic analysis of thermal models
  • Virtual testing of physical setup, including real-time control systems (model based)
  • Modal testing, error localization, automated model updating

Let us know your opinion on the matter, and whether further introductory articles on APDL Math would be of use to the ANSYS user community.

Six Very Useful Enhancements in ANSYS Mechanical 18

By now you've probably heard that ANSYS versions 18.0, 18.1, and 18.2 were all released in 2017. While 18.0 was the 'point' release in January, it should be noted that 18.1 and 18.2 are not 'patches' or service packs, but full releases, each with significant enhancements to the code. We'll present some significant and useful enhancements from each.


Number 1: First and foremost – info on the new features is more readily accessible with the Mechanical Highlights list. The first time you launch Mechanical, you’ll see a hyperlinked list of new release highlights.

Once you actually do something in Mechanical, though, that list goes away. There is a simple way to get it back: click on the Project branch in the Mechanical tree, then click on the Worksheet button in the menu near the top of the window.
Clicking on the hyperlinks in the list, or simply scrolling down, gives more information on each of the listed enhancements. Keep in mind the list contains only highlights and by no means all of the new features. A more detailed list can be found in the Release Notes in the ANSYS Help.

Number 2: A major new feature that became available in 18.0 is Topology Optimization. We've written more about Topology Optimization here.

Number 3: Another really useful enhancement in 18.0 is the ability to define a beam connection as a pretensioned bolt. This means we no longer need a geometric representation of a bolt if we want a simpler model. We can simply insert a beam connection between the two sides of the bolted geometry and define the pretension on the resulting beam.
Beam connections are inserted in the Connections branch in Mechanical. Once the beam is fully defined, it can have a bolt pretension load applied to it, just as if the bolt were defined as a solid or beam in your geometry tool. Here you can see a beam connection used for bolt pretension on the left, with a traditional geometric representation of a pretensioned bolt on the right:


Number 4: A very nice capability added in version 18.1 is drag-and-drop of contact regions for contact sizing in the Mesh branch. Contact elements work best when the element sizes on both sides of the interface are similar, especially for nonlinear contact. ANSYS Mechanical has had Contact Sizing available as a mesh control for a long time; it allows us to specify an element size or relevance level once, for both sides of one or more contact regions.

What’s new in 18.1 is the ability to drag and drop selected contacts from the Connections branch into the Mesh branch. Just select the desired contact regions with the mouse, then drag that selection into the Mesh branch. Then specify the desired mesh sizing controls for contact.
This is what the dragging and dropping looks like:

After dropping into the Mesh branch, we can specify the element size for the contact regions:

This shows the effect of the contact sizing specification on the mesh:


Number 5: An awesome new feature in 18.2 is element face selection, and what you can do with it. There is a new selection filter just for element face selection, shown here in the red box:

Once the element face select button is clicked, element faces can be individually selected, box selected, or paint selected simply by holding down the left mouse button and dragging. The green element faces on the near side have been selected this way:

The selected faces can then be converted to a Named Selection, or items such as results plots can be scoped to the face selection:

Number 6: Finally, to finish up, some new hotkeys were added in 18.2. A few really handy ones are:

  • Z = zoom fit, or zoom to the current selection of entities
  • Ctrl+K = activate element face selection
  • F11 = make the graphics window full screen (press F11 again to toggle back to normal size)

Please realize that this list is just a tiny subset of the new features in ANSYS 18. We encourage you to try them out on your own and investigate others that may be of benefit to you. Keep the Mechanical Highlights list from Number 1 in mind as a good source of info on new capabilities.

ANSYS Discovery Live: Observations on What it Is and Suggestions for Trying it Out

Yesterday ANSYS, Inc. held a webinar about a technology that was going to "change the way simulation is done." If you have been around the world of FEA and CFD for the 30+ years I have, you have heard that statement before, and rarely does the actual product match the hype. Not so with ANSYS Discovery Live. If anything, I think they are holding back. This is disruptive; this is a tool that will change how people do simulation. In this post I'll share my thoughts on what it is and why I think it is so transformative, and then in the second half (go ahead, if you don't want to listen to me go on and on about how much I like this tool, skip ahead) there are some tips on how to get your hands on it and see for yourself.

What is ANSYS Discovery Live?

ANSYS Discovery Live is a new multiple-physics simulation platform that combines several key ingredients to produce a software tool that engineers can use to create almost instantaneous virtual prototypes of the behavior of their designs, directly from their solid models. The developers at ANSYS, Inc. have combined their knowledge of advanced solver technology, parallelization of solvers for Graphics Processing Units (GPUs, high-end graphics cards), direct solid modeling (SpaceClaim), and some advanced stuff on the discretization side I don't think I can talk about. All of those things embedded inside SpaceClaim make ANSYS Discovery Live.

Once you have a solid model in the tool, you simply define what physics you want to solve and some boundary conditions; then it solves, in almost real time, right there in front of you. The equivalent steps of meshing, building the model, solving it, extracting results, and displaying the results are done automatically. It may iterate a few times to converge on a solution, but in a few seconds you will have an answer good enough to give you insight into your design.

And that is the key point. This is not a replacement for ANSYS Mechanical, FLUENT, or HFSS. It is a tool for exploring your designs and gaining insight into their behavior. It allows the design engineer, with very little training or expertise, to exercise their design and see what happens.

The product lives inside ANSYS SpaceClaim and can be installed on its own. It runs on Windows and requires an NVIDIA graphics card with a newer GPU (see below for more on that). Right now the product is in pre-release mode and anyone, yes anyone, can download it and try it out. And please, share your feedback. Expect the product to be released in the first quarter of 2018. Pricing and bundling have not been firmed up yet, but from what we have seen the plans are reasonable and make sense.

Why is it Unique in the Industry?

Some of the first comments I saw on social media about ANSYS Discovery Live after the webinar were that it is not a unique tool. There are other GPU-based solvers out there. That is true. But even though those tools are super fast at solving, they have not been widely adopted. The ANSYS product is unique because it: 1) combines GPU-based solvers for multiple physics, and 2) is built into a fully functioning solid modeling tool. A third reason might be that it is an ANSYS product, which means it will be backed technically and supported well.

Why is the Simple Fact that it Exists Important?

During an interview this week for a magazine article about innovation in product development, I was asked what keeps innovation from happening more often.  My answer was that most companies with the resources, both money and people, to innovate are choosing to acquire rather than innovate internally.  They let others raise the money, take all the risk, work out all the problems, and deal with all the issues of trying to make something new. And then, when those others succeed, they buy them. There is nothing morally wrong with that approach; it is just an inefficient and imprecise way to innovate.  Every innovation has to survive not only its technical challenges, it has to survive being a startup.

What ANSYS, Inc. has done is the opposite. They could have purchased a GPU-based solver startup and checked the box. But instead, they took people from different business units, several of which were acquired, put them together, and said: “innovate… but make it something very useful.”  And they did.  The fact that they executed on the logistics of a new product that used new and old technology, across physics and across software development realms, is fantastic.  It makes me feel good about ANSYS, Inc.’s true dedication to improving their products.

How will it Change Simulation?

In my career, I have had the same conversation dozens of times: “Let me go out to the lab and tinker with it; I’ll figure out what is going on.” That was the way you had to explore your product to get a “feel” for its behavior. Simulation took too long, and you became so wrapped up in the process of building and running a model that you could not really explore the behavior of your product. Now we can.

ANSYS Discovery Live is called Discovery Live not because anyone at ANSYS is a marketing genius (sorry guys…) but because that is what it lets you do: discover the behavior of your product, live. You simply play with it and see what happens. And this will change simulation because we now can move from verification or optimization to simply experimenting and gaining a deeper understanding, early in the design process. We will still do what I guess is now called traditional simulation when we need more accuracy and more complex physics, loads, and behavior.  But early on, we can learn so much by virtually experimenting.

Is it the Perfect Tool Right out of the Box?

This is not a perfect, does-everything tool.  First off, it is a pre-release.  The basic functionality to make it useful is there, more than I thought would be available in a first release. But there are limitations, some because it is new and some because of the approach.  It is not as accurate as more traditional approaches; the way it works takes some shortcuts on geometry and can’t include some behaviors. This should improve over time, but it will never be as accurate as the more time-consuming approaches that simply have more functionality.

Over the next two to three years we will see it mature, adding functionality and accuracy. The GPUs the tool depends on will offer more performance for less money as well. This is a journey, but right now everyone I have talked to who has actually played with the pre-release is very happy with the functionality and accuracy that are there now, because they are sufficient for the experimentation and exploration the tool was designed to allow.

How do you Try it Out?

ANSYS, Inc. realized that this type of tool demos so well, and is so different, that a skeptical group of engineers will not accept what they see in a webinar as accurate.  So they have made the pre-release available for everyone to use. You can download and install it, or explore with it in the cloud through your browser.

  • To get started, go to the ANSYS Discovery Live site and look around. The videos are awesome!  When you are ready to try it out, click on Download Now and fill out the form. Don’t complain.  Yes, you will get a few emails and a salesperson (gasp!) may call you. It’s worth some emails and maybe a phone call.
  • Set yourself up there.  There is a verification-code step; once you enter the code and create your login, you have to click through some legal agreements, including export controls.  Save your login info, you will need it to get back in.
  • After that, either start the download or use the Cloud Trial option.  The cloud trial didn’t work for me; read below for how I got to that function.
  • If you choose to download, you will get a big ZIP file, over 1 GB. It is a full solid modeler and CFD/structural/thermal solver… so it is big.
  • Once it is there, unzip it and run Setup.exe. Follow the steps and you will be there.
  • If you don’t have a graphics card that will run this, then use the cloud demo.  Like I said above, the button didn’t work for me.  If you have that problem, or you want to use it after your first login, go to the Discovery Live forum and click on “Getting Started.”
  • Scroll down a bit and find the “Cloud Trial” post. That one takes you to the page where you can find a server near you to try things out on. It’s pretty slick.
  • If you need to get back in later, log in with the email and password you gave at registration.
Here is a PDF Guide with even more details and a quick start.

Hardware Requirements

The only sticky bit about this whole thing is that it runs on only a subset of NVIDIA graphics cards, so you have to have one of those cards. According to the information in the forum:

ANSYS Discovery Live relies on the latest GPU technology to provide its computation and visual experience.  To run the software, you will require:

– A dedicated NVIDIA GPU card based on the Kepler, Maxwell or Pascal architecture. Most dedicated NVIDIA GPU cards produced in 2013 or later will be based on one of these architectures.
– At least 4GB of video RAM (8GB preferred) on the GPU.

Also, please ensure you have the latest driver for your graphics card, available from NVIDIA Driver Downloads.  You can also refer to the post on Graphics Performance Benchmarks. Performance of Discovery Live is less dependent on machine CPU and RAM.  A recent-generation 64-bit CPU running Windows and at least 4GB of RAM will be sufficient. If you do not have a graphics card that meets these specifications, the software will not run. However, you can try ANSYS Discovery Live through an online cloud-based trial, which requires only an internet browser and a reasonably fast internet connection.

I didn’t know if the GPU in my laptop would work, so I looked up my card model (NVIDIA M500M), which told me it is Maxwell technology.
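
If you would rather check from a script than a lookup page, you can query the card name and video RAM from the driver itself. This is just a minimal sketch, assuming the nvidia-smi utility that ships with the NVIDIA driver is on your PATH:

import subprocess

# Ask the NVIDIA driver for the GPU name and total video memory.
# nvidia-smi ships with the driver; we assume it is on the PATH.
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"]
)
for line in out.decode().splitlines():
    name, mem = [s.strip() for s in line.split(",")]
    # Discovery Live wants at least 4GB of video RAM (8GB preferred)
    print("GPU: %s  Video RAM: %s" % (name, mem))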

Go Forth and Discover, and Share

Don’t hesitate: download this and try it out.  Even if you are a high-end combustion simulation expert who will never need it, if you are interested in simulation you should still try it out.  Use the forum to share your thoughts and questions.  The gallery is already filling up with some fantastic real-world examples.

ANSYS ACT Console Snippets

So this is just a quick post to point out a handy feature in ANSYS Workbench: the ACT Console. There are times when you want some functionality in Mechanical that just is not there yet. In this example, a customer wanted the ability to get a text list of all the Named Selections in his model.  A quick Python script does just that.

import re

a = ExtAPI.DataModel.AnalysisList[0]   # Get the first Analysis if multiple are present

# Put the output file in the "user_files" directory for the project.
# NOTE: the project directory from the ACT API is used here as a
# stand-in for that folder; point userdir wherever you want the file.
userdir = ExtAPI.DataModel.Project.ProjectDirectory

# Use the name of the system in case the snippet is
# used on multiple independent systems in the project.
system_name = re.sub(" ", "_", a.Name)
model = ExtAPI.DataModel.Project.Model
nsels = model.NamedSelections                  # Get the list of Named Selections

if nsels:    # Do this if there are any Named Selections
    f = open("%s\\%s_named_selections_checked.txt" % (userdir, system_name), "w")
    for child in nsels.Children:
        f.write(child.Name + "\n")             # One Named Selection name per line
    f.close()

So to use a piece of Python code like this, we use the ACT Console in Mechanical. To access the ACT Console in Mechanical 17.0 or later, just hit this icon in the toolbar.

The Console allows you to type, or paste, text directly into the black command line at the bottom.  But if we are going to reuse this code, then Snippets are the way to go. In R17.0 they were called ‘Bookmarks’, but they worked the same way.
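
For a quick one-off, you can type a single line straight into that command line. A trivial example (the Name property is standard on ACT data model objects):

# Typed directly at the console prompt: echo the model's name
print(ExtAPI.DataModel.Project.Model.Name)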

When you add a Snippet, a new window allows you to name the snippet and type in, or paste in, your code.

When you hit Apply, your named snippet is added to the list.

From then on, to use the snippet you just click on it and hit ‘Enter’. The text is basically re-pasted into the command window, so you can set any variables needed prior to hitting your snippet, as the example below shows.
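
For instance, a snippet can rely on a variable you define at the command line just before running it. Here is a minimal sketch; ns_filter is a hypothetical variable of my own, not part of the ACT API:

# Step 1: type this at the console command line first
ns_filter = "bolt"        # hypothetical filter text the snippet expects

# Step 2: then run a snippet like this one, which prints only the
# Named Selections whose names contain the filter text
model = ExtAPI.DataModel.Project.Model
for ns in model.NamedSelections.Children:
    if ns_filter.lower() in ns.Name.lower():
        print(ns.Name)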

The snippets are persistent and remain in the console, so they are available for all new projects. Using snippets is a great way to reduce the time spent on repetitive tasks without having to create a full-blown ACT extension.

Happy coding!