The Focus


 

What is New and Exciting in Mechanical Simulation – Webinar Recording

Posted on April 25, 2018, by: Eric Miller

The use of FEA and CFD techniques to simulate the behavior of structures, fluids, and electromagnetic fields has gone from an occasional task done by experts to a standard method for driving product development. The webinar below is a presentation by PADT's Co-Owner and Principal, Eric Miller, discussing recent advances in simulation that are pushing the technology toward covering more phenomena, faster run times, and greater accuracy. From up-front real-time stress and fluid flow to massive combustion models with chemistry, fluid flow, thermal effects, and turbulence, simulation is how products are designed. The talk covers:
  • What is Simulation and How did we Get Where we are
  • Five Current Technology Trends in Simulation
  • Business Trends to be Aware Of
  • What Is Next?
  • How to Keep Up
If you would like to learn more, especially how simulation can drive your company's product development, please contact PADT.

ANSYS Licensing FAQ

Posted on April 24, 2018, by: Ziad Melhem

Were you so excited to jump on your analysis only to have a “server is down or not responsive” message pop up and shut you out of the fun, the way an exclusive club makes its patrons wait at the door? It might have been your manager running a reverse-psychology trick on you, or maybe not. If it is the latter, you are not alone. As a matter of fact, licensing questions come to us on a regular basis. And even though there is plenty of information on the web, we figured it would be beneficial to have the most frequent answers gathered in one place: an FAQ document (attached to this blog post). The Table of Contents includes the following topics:
  1. Server down or not responsive
  2. Installation/Migration
  3. VPN
  4. TECS and license expiry
  5. Versions compatibility
  6. Overuse of licenses
  7. Include list
  8. HPC
  9. Virtual server
ANSYS LICENSING FAQ
Download the PDF here. The document was written assuming the reader has no prior experience with ANSYS or licensing in general. It is laid out in an easy, step-by-step format with photos. The table of contents has hyperlinks embedded in it and can be used to easily navigate to the relevant sections. We do hope that this document will bring value in solving your licensing issues, and we are always here to help if it doesn’t:

1-800-293-PADT or 480-813-4884

support@padtinc.com

Extracting Relative Displacements in ANSYS Mechanical

Posted on April 4, 2018, by: Alex Grishin

A recurring theme in ANSYS Technical Support queries involves separating rigid-body motion from material deformation without performing an additional analysis. Many users simply assume this capability should exist as a simple post-processing query (or that, in any case, it shouldn’t be a difficult operation). “Rigid-body” displacement implies a transient dynamic analysis (such displacements should not occur in static analyses), but as we’ll see, there are contexts within static structural environments where this concept DOES play an important engineering role. In static structural contexts, such rigid-body motion implies motion transmitted across multiple bodies. There are two important and loosely related contexts we’ll look at: zero-strain rotations of the CG, and those rotations combined with strain-based displacement. The following presentation explains the issues, including the math behind them, offers solutions including useful APDL macros, and then gives examples. The models and macros used are in this zip file: PADT-ANSYS-Extract-Dsp-Files PADT-ANSYS-Mechanical-Extracting-Relative-Displacements-20180404
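To give a flavor of what the macros automate, here is a minimal sketch (the node numbers are hypothetical, and only the rigid-body translation is removed; the attached macros also handle the rotation terms) that estimates the relative displacement of one node with respect to a reference node:

! Minimal sketch: subtract a reference node's displacement from a target
! node's displacement to estimate the relative (strain-driven) motion.
! refNode and tgtNode are hypothetical; rotation is not handled here.
/POST1
SET,LAST
refNode = 1001                 ! node taken as the rigid-body reference
tgtNode = 2002                 ! node whose relative motion is of interest
*GET,ux_ref,NODE,refNode,U,X
*GET,uy_ref,NODE,refNode,U,Y
*GET,uz_ref,NODE,refNode,U,Z
*GET,ux_tgt,NODE,tgtNode,U,X
*GET,uy_tgt,NODE,tgtNode,U,Y
*GET,uz_tgt,NODE,tgtNode,U,Z
ux_rel = ux_tgt - ux_ref       ! relative displacement components
uy_rel = uy_tgt - uy_ref
uz_rel = uz_tgt - uz_ref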
You can also download the PDF here. Find this interesting? This is just a small sample of PADT’s deep and practical understanding of the entire ANSYS suite of products. Please consider us for your training, mentoring, and outsourced simulation services needs.

Press Release: New Expansion into Texas Grows PADT’s ANSYS Sales & Support Across the Entire Southwest

Posted on February 6, 2018, by: Eric Miller

When people look at PADT and where we are located, they almost always say "You should open an office in Austin; the tech community there is a perfect fit for your skills and culture." We finally listened and are proud to announce that our newest location is in Austin, Texas. This new office will initially focus on ANSYS sales and support across the great state of Texas. We have had customers for other products and services in the state for decades and are pleased to have a permanent local presence now. As an Elite ANSYS Channel Partner, we provide sales of the complete ANSYS product suite to any and all entities that can benefit from the application of numerical simulation. Across industries, we bring a unique technical approach to both sales and support that is focused on identifying need and then selecting the right toolset, training, and support to deliver a return on the customer's investment as soon as possible. And the initial product purchase is just the start. Our ANSYS customers are partners that we grow with, and we are always ready to help them be better at whatever it is they do. Customers in Southern California, Nevada, Arizona, Utah, New Mexico, and Colorado already know this, and it is time for the engineering community in Texas to benefit from that experience. Because we will be there for the long term, we are taking our time to look around the area. Our new salesperson, Ian Scott, is an Austin native who has worked in the engineering software space for some time. He will be working with existing customers and partners in the area to find the right location for us long-term. But we are already putting plans in place to deliver outstanding training, hold meetings, and maybe even have a celebration or two while we settle in. Over time we will add local engineers and additional sales staff to meet the needs of the state, which, as you know, is big. We have big plans for PADT and Texas, and this ANSYS sales and support role is just the beginning. Make sure you watch this blog, social media, or our newsletter for announcements on a celebration for our new office as well as technical events we will start holding very soon. We look forward to reconnecting with old friends and making new ones. If you are in Texas, please reach out to us and send us any suggestions or recommendations you may have. We are really looking forward to growing in Austin and across the Lone Star State. Please find the official press release on this expansion below, as well as versions in PDF and HTML.

Press Release:

Simulation, Product Development and 3D Printing Services Leader, PADT, Opens New Office in Austin, Texas

PADT Becomes the Only ANSYS Elite Channel Partner to Serve the Entire Southwest Region

TEMPE, Ariz., and Austin, Texas, February 6, 2018 – Phoenix Analysis and Design Technologies (PADT), the Southwest’s largest provider of simulation, product development, and rapid prototyping services and products, today announced it has opened an office in Austin, Texas. With this move, PADT is expanding its sales and support for ANSYS simulation software, becoming the only ANSYS Elite Channel Partner to cover the entire Southwest region. “This is a major expansion for PADT with the opportunity to significantly grow our customer base,” said Ward Rand, co-owner and principal, PADT. “We have worked with Texas companies on and off since we founded the company in 1994, and our success over the last decade has provided the opportunity to become a full-time resident in the vibrant and growing Austin business and technology community.” Although the initial focus for the PADT Austin office will be on ANSYS sales and support, the company plans to offer its wide array of other products and services in the future. PADT will host a grand opening celebration for customers, partners, and media in March 2018. Ian Scott, an Austin native, will be launching the new office and leading the sales effort in the region. “PADT’s expertise in simulation-driven product development will be a welcome addition to the Austin community,” said Scott. “Our focus at launch will be on educating the Austin technology scene on how to derive the best value from their engineering simulation software investment and building stronger relationships with our new neighbors.” In 2017, PADT had a very successful year in terms of growing its capabilities as well as public recognition. PADT was named an ANSYS Elite Channel Partner for North America, partnered with Desktop Metal and Carbon to upgrade its 3D printing capabilities and services, and was named to Entrepreneur Magazine’s list of the top small businesses in the nation, the Entrepreneur 360 List. The success of the company has enabled PADT to take this step toward further expansion. To learn more about PADT, visit www.padtinc.com or call 1-800-293-7238. About Phoenix Analysis and Design Technologies Phoenix Analysis and Design Technologies, Inc. (PADT) is an engineering product and services company that focuses on helping customers who develop physical products by providing Numerical Simulation, Product Development, and 3D Printing solutions. PADT’s worldwide reputation for technical excellence and experienced staff is based on its proven record of building long-term win-win partnerships with vendors and customers. Since its establishment in 1994, companies have relied on PADT because “We Make Innovation Work.” With over 80 employees, PADT services customers from its headquarters at the Arizona State University Research Park in Tempe, Arizona, and from offices in Torrance, California; Littleton, Colorado; Albuquerque, New Mexico; Murray, Utah; and Austin, Texas, as well as through staff members located around the country. More information on PADT can be found at www.PADTINC.com.
Media Contact Alec Robertson TechTHiNQ on behalf of PADT 585-281-6399 alec.robertson@techthinq.com PADT Contact Eric Miller PADT, Inc. Principal & Co-Owner 480.813.4884 eric.miller@padtinc.com

Finding curve directions in ANSYS SpaceClaim   

Posted on January 25, 2018, by: Joe Woodward

As it so often does, another blog article idea came from a tech support question that I received the other day: “How do you view edge directions in ANSYS SpaceClaim?”

You can do it in Mechanical, on the Edge Graphics Options Toolbar. This will turn on arrows so that you can see the edge directions. The directions of the edges or curves affect things like mesh biasing factors and mass flow rate boundary conditions. You need to make sure that all your pipes in a thermal analysis, for instance, are flowing in the same direction. (I have also had three tech support calls about weird spikes showing up in customers’ geometry. The Display Edge Direction option is also how you turn those off.)

In ANSYS SpaceClaim, there is no way to simply display the edge directions. The directions are controlled by which point you pick first while sketching, so if you are careful, you can make sure they are all consistent. But that doesn’t help when you read in CAD files. So I thought I would share with you what I found after a little bit of digging and playing.

I discovered that the Move Tool behaves in a very specific way, a way that we can use for our need. When you pick the edge of a surface or solid, or even a straight sketched line, the red arrow of the Move Tool will point in the direction of the curve. These directions match what gets shown in Mechanical.

For splines, it’s a little bit different. If you just pick a spline with the Move Tool, the triad will align with the global coordinate system. To see the spline direction, you first have to hover over the spline to show its vertices. Then you can pick an interior vertex, and the blue arrow of the Move Tool will follow the spline direction. This only works at the interior vertices, and not at the ends. At the ends, the blue tool arrow will always point outward from the spline endpoints, so you won’t really know which is the correct spline direction.

I have also found that this technique does not work on sketched circles or arcs, because the tool always anchors to the center of the curve and not to the curve itself. You can, however, use the Repair > Fit Curves tool to convert arcs to splines, using only the Spline option. Then the Move Tool will show those directions as described above. For circles, you have to take one more step and first use the Split tool to split the circle into two arcs. All of that, though, is in my opinion more work than it’s worth.

I hope this helps make your lives just a little easier. Have a great day.

PADT Dives into ANSYS EKM

Posted on January 24, 2018, by: Ahmed Fayed

Part of PADT’s core philosophy is to “Provide flexible solutions of higher quality in less time for less money.” This part of the philosophy also applies to how we design and build PADT’s internal structure, the tools we use, and the processes we adopt. Among the growing pains of most engineering and simulation organizations are the constantly growing demands for storage capacity, data management and protection, and BOATLOADS of computing power. Sadly, PADT engineers have yet to develop near-infinite storage capacity (like DNA for storage) or a working quantum computer that can run ANSYS. So we’re in the same boat as everyone else. We have been exploring what our major pain points are and what optimizations can be made to our simulation environment (about 2,000 cores of Cube Simulation Cluster Appliances), as well as a structured, controlled solution for engineering data management. As always, we started by looking inwards:
  • What skills are available, or learnable within PADT that can help address the need?
  •  What tools & resources do we have access to?
  • What do we need to acquire or buy?
The immediate and most obvious answer was to utilize PADT’s internal pool of knowledge and an ANSYS product called Engineering Knowledge Manager (EKM for short). ANSYS EKM is a tool purpose-built to provide a turnkey solution for simulation process and data management. This means that users can – through a single interface – manage a full simulation lifecycle. In the next few paragraphs, I will briefly go over some of the main features of ANSYS EKM, with a couple of screenshots for good measure.
Figure 1. ANSYS EKM Architecture

Interactive and batch submission to high-performance computing resources

For PADT, a very practical feature of EKM is the ability to easily interface to existing High-Performance Computing (HPC) infrastructure. By communicating through ANSYS Remote Solve Manager (RSM), EKM is able to effortlessly interface to most HPC schedulers and resource managers for both the Windows and Linux worlds. This feature is huge because analysts can seamlessly upload their models into the secure EKM repository, submit the jobs to the HPC cluster/s, monitor their runs, and upload their choice of results directly into EKM for review and post-processing. EKM works hard to keep the interface familiar to flatten the learning curve and keep things simple by making the batch submission menus as close as possible to the local ones.
Figure 2. Simplicity of Batch Jobs
At PADT, whenever we are debugging models or application behavior, we want to have an interactive session to have the most control and visibility of the environment. With EKM, we can utilize the remote visualization & acceleration tool Nice – DCV. DCV is launched from within EKM and provides access to an interactive desktop on a cluster target while also accelerating OpenGL graphics for visually intensive programs.
Figure 3. EKM & Nice-DCV provide a Full Featured & Accelerated Interactive Desktop

Storage and archiving of simulation data with built-in version control, data aging, and expiry.

ANSYS EKM provides a comprehensive data management toolset that is derived from real-world needs. Features like highly granular access control, file and folder sharing and collaboration, version control, check-out and check-in procedures, and many more are enabled and very powerful out of the box. Other more advanced features, such as data aging, auto-archiving, and automatic unpacking of zip files, are also very useful.
Figure 4. Data Management Interface
The capabilities don’t end here as EKM integrates directly with ANSYS Workbench. Analysts can seamlessly access their EKM repository from Workbench to perform any modifications and directly save back to EKM without the need to switch applications. Check-outs are automatically checked back in and new version numbers can be created automatically as well.

Metadata extraction

An extremely powerful piece of EKM is the metadata extraction engine that is baked into the core. EKM stores each file as two entries: the original file and its metadata. EKM goes beyond the basic filename, date, and owner metadata and digs deeper. It digs into the CAE-meaningful metadata of the model: setup, material properties, element counts, mesh type, and so on. It also extracts snapshots of the geometry and contours, and in some cases even provides a 3D model that can be directly manipulated by the user. A sample of the metadata for an ANSYS Fluent case is shown below.
Figure 5. Metadata Sample
Another feature of metadata extraction is the ability to take a quick look at simulation results, perform cutplanes, pan, tilt, and zoom as well as add comments and even capture and share snapshots without leaving the browser window.
Figure 6. Metadata Includes Interactive 3D Models
Metadata extraction is supported for ANSYS data types and the ability to define new data types is straightforward and easy to do for any other CAE data types or in-house codes.

A rich search capability that goes beyond filename, owner and timestamps.

How many times have I kicked myself for not using meaningful file names with versions and useful time stamps and ended up spending hours opening a file for a quick peek to find that it isn’t the file I am looking for? Too many. CAE models have hundreds of variables and parameters that are embedded in them. Wouldn’t it be useful if someone came up with a system to store CAE models where an analyst can simply type a search variable and it would search not only name and timestamps but actually dig into the guts of the model and search those? Well EKM is one such system. Analysts can search using thousands of field combinations that encompass everything from material properties to partitioning methods, boundary conditions to cell counts, you get the idea, it's pretty awesome!
Figure 7. Advanced Search Option

Simulation process and workflow management

In EKM, administrators can create simulation workflows and lifecycles that manage all of the different steps that go into creating, running, and concluding a simulation while ensuring that proper reviews and approvals are handled. In addition to documenting the workflows, some of the underlying work can be automated as well. As noted earlier, batch submission is baked right into the EKM capabilities, and workflows can automatically launch batch submission scripts to a cluster and get the simulation going as soon as the proper files are loaded and that stage in the process is released.
Figure 8. Workflow Tasks View
Workflow processes are defined in a simple XML format or created using a dedicated mini-tool and uploaded into EKM ready to roll. Email notifications are preset and will shoot out whenever progress is made on a step in the workflow or an approval is needed. A nifty process chart is also built into the EKM processes interface that shows the workflow structure and current progress.
Figure 9. Process Charts in EKM

Conclusion

In conclusion, ANSYS EKM is awesome! (Seriously now.) PADT has invested a lot of time and resources in implementing EKM, and in the coming months we will be transitioning all of our engineering knowledge into it. It is already integrated with our HPC cluster and will be our central repository for engineering data. In this article, I have really only skimmed the surface of what EKM can do and what it currently does for us here at PADT. If you are interested in checking out ANSYS EKM or have any questions or thoughts, please reach out to us with a comment, an email, or just give us a call.

Spectre Side-Channel and Meltdown – How will living in this new reality affect the world of numerical simulation?

Posted on January 17, 2018, by: David Mastel

Literally while I was sorting and running benchmarks and prepping new benchmark data, originally titled “ANSYS Release 18.2 Ball Grid Array Benchmark information using two sixteen-core INTEL® XEON® Gold 6130 CPUs,” I noticed that my news feeds had started to blow up with late-breaking HPC news. The news, as you may have guessed, was the recently published Spectre and Meltdown flaws. I thought to myself, “Well, this is just great; the benchmarks that I just ran are no longer relevant. My next thought was, wait, now I can show a real-world example of the exact percentage change. I have waited this long to run the ANSYS numerical simulation benchmarks on this new CPU architecture; I can wait a little longer to post my findings.” What now? Oh my, more late-breaking news! Research findings, execution reordering with no barriers! Side channels used to get access to private address areas of the hardware! Wow, this is a bad day. As I sat reading more news I drifted off daydreaming, then back to my screen, then the clock on the wall; great, it is 2am already, just go home... Then my thoughts immediately shifted and I was back to thinking about how these hardware flaws impact the missing middle market and HPC numerical simulation! I dug in deep and pressed forward, content with starting over on the benchmarks, knowing that after the patches released around January 9th it would be a whole new world. I decided to spare you the ugly details of the Spectre array-bounds/branch-prediction attack flaws and the out-of-order Meltdown vulnerabilities. UGH! I seriously believe that someone has AI writing news articles five or six different ways, with each somehow saying the same thing. I also provide links to the information and legal statements directly from a who's who list of accountable parties. Executive Summary:
  • *Remember: every case is different, so please do run your own tests to verify how this new reality affects your hardware and software environment.*
    • Due to cost, this machine has a single NVMe M.2 primary drive with a single 2TB SATA drive for its mid-term storage area.
  • What was the impact for my benchmark?
    • Positive takeaway:
      • In all my years of running the sp5 benchmark, I recorded the fastest benchmark time using this CUBE w32s, dual INTEL® XEON® Gold 6130 CPU workstation.
      • Using all thirty-two cores: 125.7 seconds for Solution Time (Time Spent Computing Solution).
        • Coming in at 135.7 seconds, the Solution Time after running the OS patches is my second fastest data point for the ANSYS sp5 benchmark.
          • ANSYS sp5 benchmark data collected at PADT, Inc. from 2005 until now.
      • The Solution Times continued to solve faster with each bump in cores.
      • Performance per dollar was maximized in this configuration.
    • The impact depends on the number of cores I used for the ANSYS sp5 benchmark. I give the actual data below showing the percentage differences before and after:
      • Largest percentage difference:
        • Solution Time: -9.81% using four CPU cores.
        • Total Time: -7.87% using two CPU cores.
  • Now is the time to turn down the security screws within your corporate enterprise network.
  • A rogue malicious agent needs to be on the inside of your corporate network to execute any sort of crafted attack. Many of these details are outlined in the Project Zero abstract.
  • Pay extra attention to just who you let on your internal network.
    • I reiterate the recommendations of many security professionals that you should already be restricting your internal company network and workstations to employee use. If you are not sure ask again.
  1. Spectre flaw:
    1. INTEL, ARM, and AMD CPUs are affected by the Spectre array-bounds hardware attacks.
  2. Meltdown flaw:
    1. INTEL CPUs and some high-performance ARM CPUs are affected by the “easier to exploit” Meltdown vulnerability.
I am also interested to see how the continued insertion of code barriers and changed memory mappings affects my gaming performance. Haha! No, I am just kidding: my numerical simulation performance benchmarks. Clarifications & Definitions:
  • Unpatched Benchmark Data – No mitigation patches from Microsoft or NVIDIA addressing the Spectre and Meltdown flaws have been applied to the Windows 10 Professional OS running on the CUBE w32s that I use in this benchmark.
  • Patched Benchmark Data – I installed the batch of patches released by Microsoft as well as the graphics card driver update released by NVIDIA. NVIDIA indicates in their advisory that their GPU hardware is not affected, but that they are updating their drivers to help mitigate the CPU security issue. Huh? Installing now…
  • Solution Time – The amount of time, in seconds, that the CPUs spent computing the solution ("Time Spent Computing Solution").
  • Total Time – The total time, in seconds, that the entire process took; how the solve felt to the user, also known as wall-clock time.
The CUBE machine that I used in this ANSYS test case represents a fine balance of price, performance, and ANSYS HPC licenses used.
  • CUBE w32s, INTEL® XEON® Gold 6130 CPU, 128GB’s DDR4-2667MHz (1Rx4) ECC REG DIMM, Windows 10 Professional, ANSYS Release 18.2, INTEL MPI 5.0.1.3, 32 Total Cores, NVIDIA QUADRO P4000, Samsung EVO 960 Pro NVMe M.2, Toshiba 2TB 7200 RPM SATA 3 Drive.
  • Other notables, are you still paying attention?
    • My Supermicro X11Dai-N BIOS Settings:
      • BIOS Version: 2.0a
      • Execute Disable Bit: DISABLE
      • Hyper threading: ON
      • Intel Virtualization Technology: DISABLE
      • Core Enabled: 0
      • Power Technology: CUSTOM
      • Energy Performance Tuning: DISABLE
      • Energy performance BIAS setting: PERFORMANCE
      • P-State Coordination: HW_ALL
      • Package C-State Limit: C0/C1 State
      • CPU C3 Report: DISABLE
      • CPU C6 Report: DISABLE
      • Enhanced Halt State: DISABLE
    • With read performance of up to 3,200 MB/s and write performance of up to 1,900 MB/s, the Samsung NVMe M.2 drive was too tempting to pass up as my solve and temporary-file location. The bandwidth from the little feller was too impressive to ignore and continued to impress throughout the numerical simulation benchmarks.
My first overall impression of this configuration is: Wow! This workstation is fast and quiet, and as you will see it number-crunches its way right on through to being my fastest documented workstation benchmark in this class. This extremely challenging and I/O-intensive ANSYS benchmark is no match for this solver! Thumbs up and cheers to happy solving!
  • Cube w32s by PADT, Inc. ANSYS Release 18.2 FEA Benchmark
  • BGA (V18sp-5)
  • Transient nonlinear structural analysis of an electronic ball grid array
  • Analysis Type: Static Nonlinear Structural
  • Number of Degrees of Freedom: 6,000,000
  • Matrix: Symmetric
It Is All About The Data: Benchmark data related to pre- and post-Spectre/Meltdown industry software patches on the CUBE w32s. Table 1 - ANSYS sp5 Benchmark - Unpatched Windows 10 Professional
ANSYS sp5 Benchmark - Unpatched Windows 10 Professional for the Spectre and Meltdown hardware vulnerabilities - CUBE w32s
CPUs Solution Time Total Time
2 631.3 671
4 366.8 422
8 216 259
12 193 235
16 144.3 185
20 143.9 187
24 131.9 175
28 137.4 185
31 142.4 185
32 125.7 171
ANSYS Release 18.2 - SP5 Benchmark - Unpatched Windows 10 Professional CUBE w32s Solution and Total Time Values
Table 1.1 - ANSYS sp5 Benchmark  - Patched Windows 10 Professional
ANSYS sp5 Benchmark  - Patched Windows 10 Professional - CUBE w32s
CPUs Solution Time Total Time
2 683 726
4 405.5 446
8 235.8 277
12 209.2 251
16 148.8 191
20 145.7 189
24 136.3 182
28 138.7 186
31 134.6 179
32 135.7 179
ANSYS Release 18.2 - SP5 Benchmark - Patched Windows 10 Professional for the Spectre and Meltdown hardware flaws - Solution and Total Time Values
Table 2 - ANSYS sp5 Benchmark  - The Before and After In Percentage Difference.
Percentage Difference - Not Patched vs. Patched for Spectre, Meltdown
CPUs Solution Time Total Time
2 -7.94 -7.87
4 -9.81 -5.53
8 -8.34 -6.72
12 -7.57 -6.58
16 -2.73 -3.19
20 -1.09 -1.06
24 -2.87 -3.92
28 -0.81 -0.54
31 4.76 3.30
32 -6.74 -4.57
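As a quick check on how to read Table 2 (assuming the symmetric percent-difference formula, i.e. the change divided by the average of the unpatched and patched values), the two-core Total Time entry works out to:

(671 - 726) / ((671 + 726) / 2) x 100 = -55 / 698.5 x 100 ≈ -7.87%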
Fig 2.a
Percentage of impact for this example. A negative value means the patched Windows 10 Professional CUBE w32s is taking a performance hit. Notice the very interesting blip of positive percentage at 31 cores. The data from this Windows 10 Professional CUBE w32s with INTEL® XEON® Gold 6130 CPUs shows an impact related to the patches.
Fig 2.b
Percentage of impact for this example. A negative value means there is some sort of impact; the patched Windows 10 Professional CUBE w32s will feel slower to solve, going by the clock on the wall.
CUBE w32s in action - January 2018
Please contact your local ANSYS software sales representative for more information on purchasing ANSYS HPC Packs. You too may be able to speed up your solve times by unlocking more compute power! What the heck is a CUBE? For more information regarding our numerical simulation workstations and clusters, please contact our CUBE hardware sales representative at SALES@PADTINC.COM. Designed, tested, and configured within your budget; we are happy to help and to listen to your specific needs.
CUBE w32s in action - January 2018

Save, Save, Save! Setting up a Save after Solution in ANSYS Mechanical

Posted on December 19, 2017, by: Joe Woodward

NOOOOOOOOOOOOOOOOOOOOOOOOOOOO!!!! Is this the reaction you have when you come in on Monday morning and realize that another Windows update has, once again, rebooted your PC before you had a chance to save the 30-hour run that should have finished over the weekend? There is a Workbench setting that can help relieve some of that stress. The “Save Project After Solution” option will save the entire project as soon as the solution has finished. So when your model runs for 30 hours over the weekend, it gets saved before a Windows update shuts everything down. These settings are persistent, so once you’ve changed them to ‘Yes’, you are all set for next time. You just need to make sure that you change them for each ANSYS version if you have more than one installed. Now on to my next blog… “How to recover a run if you forgot to change the settings above.” (Grumble, grumble!)

Transitioning to ANSYS

Posted on December 11, 2017, by: Ziad Melhem

Before joining PADT last July, I had worked on FEA and CFD analyses, but my exposure to ANSYS was limited and I was concerned about the transition. To my delight, the software was very easy to learn; more often than not it was intuitive and self-explanatory (e.g. the Mechanical wizard), the setup time was minimized after learning a couple of simple features (e.g. named selections, the object generator, etc.), and the resources on the ANSYS portal were instrumental in the learning process. Furthermore, my colleagues at PADT proved to be very knowledgeable and experienced and, more importantly, responsive and eager to jump in and help. One of the features that most caught my attention was how streamlined the Multiphysics nature of ANSYS is. I have been satisfied with the performance of standalone CFD packages in the past, and the same goes for structural ones, but never had I dealt with such an extensive suite that maintained the quality of a specialized tool. The importance of this attribute has shown itself more and more in recent years, given the development of new, convoluted products of a Multiphysics nature, e.g. medical applications. Using ANSYS to simulate medical applications is one of the most rewarding experiences I personally enjoy. Even though it is definitely satisfying to be able to help accelerate innovation in aerospace, automotive, and a myriad of other industrial areas, the experience in the medical area has a more refreshing taste, probably due to the clear and direct link to human lives. From intravascular procedures to shoulder implants and microdevices, there is one common factor: ANSYS is decreasing the risks of catastrophic failures, improving product capabilities, and shortening the innovation cycle. Editor's Note: Ziad is part of PADT's team in Southern California. He is a graduate of USC and worked at Boeing, Meggitt, and Pacific Consolidated Industries before joining PADT. He works with the rest of our ANSYS technical staff to make sure our users are getting the most from their ANSYS investment.

Without Risk There Can Be No Progress

Posted on December 5, 2017, by: Ted Harris

I’m sure most people don’t know the name George M. Low. He was an early employee at NASA, serving as Chief of Manned Space Flight and later as a leader in NASA’s Apollo moon program in the late 1960’s. In fact, he was named Manager of the Apollo Spacecraft Program after the deadly Apollo 1 fire in 1967, and helped the program move forward to the successful moon landings starting in 1969. As most alumni of Rensselaer Polytechnic Institute know, he returned to Rensselaer, his alma mater, serving as president from 1976 until his death in the 1980’s. I still recall the rousing speech he gave to us incoming freshmen at the Troy Music Hall on a hot September afternoon. On our class rings is his quote, “Without risk there can be no progress.”

I’ve pondered that quote many times in the years since. It’s easy to coast along in many facets of life and accept, and even embrace, the status quo. Over the years, though, I have observed that George Low was right, and the truth is that risk is required to move forward and improve. The hard part is determining the level of risk that is appropriate, but it’s a sure bet that by not taking any risk, we will lag behind.

How is that realization applicable to our world of engineering simulation? Surely those already doing simulation have moved from the old process of design > test > break > redesign > test > produce to embrace the faster and more efficient simulate > test > produce, right? Perhaps, but even if they have, that doesn’t mean there can’t be progress with some additional risk. Let’s look at a couple of examples in the simulation world where some risk taking can have significant payoffs.

First, transitioning from ANSYS Mechanical APDL to ANSYS Mechanical (Workbench). Most have already made the switch. I’ll allow there are still some applications that can be completely scripted within the old Mechanical APDL (ANSYS Parametric Design Language) in an incredibly efficient manner. However, if you are dealing with geometry that’s even remotely complex, I’ll wager that your simulation preparation time will be much faster using the improved CAD import and geometry manipulation capabilities within the ANSYS Workbench Mechanical workflow. Let alone meshing. Meshing is lights-out faster, more robust, and better quality in modern versions of Mechanical than anything we can do in the older Mechanical APDL mesher.

Second, using ANSYS SpaceClaim to clean up, modify, create, and otherwise manipulate geometry. It doesn’t matter what the source of the geometry is, SpaceClaim is an incredible tool for quickly making it usable for simulation as well as lots of other purposes. I recently used the SpaceClaim tools within ANSYS Discovery Live to combine assemblies from Inventor and SolidWorks into one model, seamlessly, and was able to move, rotate, orient, and modify the geometry to what I needed in a matter of minutes (see the Discovery Live image at the bottom). The cleanup tools are amazing as well.

Third, looking into ANSYS Discovery Live. Most of us can benefit from quick feedback on design ideas and changes. The new Discovery Live tool makes that a reality. Currently in technology demonstration mode, it’s free to download and try from ANSYS, Inc. through early 2018. I’m utterly amazed by how fast it can read in a complex assembly and start generating results for basic structural, CFD, and thermal simulations. What used to take weeks or months can now be done in a few minutes.
Credits:  Motorcycle geometry downloaded from GrabCAD, model by Shashikant Soren.  Human figure geometry downloaded from GrabCAD, model by Jari Ikonen.  Models combined and manipulated within ANSYS Discovery Live. George M. Low image from www.nasa.gov. I encourage you to take some risks for the sake of progress.

Estimating Structural Response to Random Vibration in ANSYS Mechanical: Reaction Forces

Posted on November 27, 2017, by: Alex Grishin

One of the key outputs from any random vibration analysis is the response of the object you are analyzing in terms of reaction forces. In the presentation below, Alex Grishin shares the theory behind getting accurate reaction forces and then shows how to do so in ANSYS Mechanical. PADT-ANSYS-Random-Vib-Reaction-Forces-2017_11_22-1
As always, please contact PADT for your ANSYS simulation, training, and customization needs.    

IEEE Day 2017: Smart Antennas for IoT and 5G

Posted on November 7, 2017, by: Michael Griesi

IEEE Day celebrates the first time in history, in 1884, when engineers worldwide and IEEE members gathered to share their technical ideas. Events were held around the world by 846 IEEE chapters this year. So, to celebrate, I attended a joint chapter meeting at The Museum of Flight in Seattle with technical presentations focused on “Smart Antennas for IoT and 5G”. There were approximately 60 in attendance, so assuming this was the average attendance globally, over 50,000 engineers celebrated IEEE Day worldwide! The Seattle seminar featured three speakers who spanned theory, design, test, integration, and application of smart antennas. There was much discussion about the complexity and challenges of meeting the ambitious goals of 5G, which extend beyond mobile broadband data access. Some key objectives of 5G are to increase capacity, increase data rates, reduce latency, increase availability, and improve spectral and energy efficiency by 2020. A critical technology behind achieving these goals is beamforming antenna arrays, which were at the forefront of each presentation.

Anil Kumar from Boeing focused on the application of mmWave technology on aircraft. Test data was used to analyze EM radiation leakage through coated and uncoated aircraft windows. However, since existing regulations don’t consider the increased path loss associated with such high frequencies, the integration of 5G wireless applications may be restricted or delayed. Beyond this regulatory challenge, Anil discussed how multipath reflectors and absorbers will present significant challenges to successful integration inside the cabin. Although testing is always required for validation, designing the layout of the onboard transceivers may be impractical to optimize without an asymptotic EM simulation tool that can account for creeping waves, diffraction, and multi-bounce.

Considering the test and measurement perspective, Jari Vikstedt from ETS-Lindgren focused on the challenges of testing smart antenna systems. Smart or adaptive antenna systems will not likely perform the same in an anechoic chamber test as they would in real systems. Of particular difficulty, radiation null placement is just as critical as beam placement. This poses a difficult challenge to the number and location of probes in a test environment. Not only would a large number of probes be impractical, but there is also significant shadowing at mmWave frequencies, which can negatively impact the measurement. Furthermore, compact ranges can significantly impact testing, and line-of-sight measurements become particularly challenging. While not a purely test-oriented observation, this led to considering the challenge of tower hand-off. If a handset and tower use beamforming to maintain a link, it is difficult for an approaching tower to even sense the handset to negotiate the hand-off. In contrast, if the handset were continuously scanning, the approaching tower could be sensed to negotiate the hand-off before the link is jeopardized.

The keynote speaker, who also traveled from Phoenix to Seattle, was ASU Professor Dr. Constantine Balanis. Dr. Balanis opened his presentation by making a distinction between conventional “dumb antennas” and “smart antennas”. In reality, there are no smart antennas, only smart antenna systems. This is a critical point from an engineering perspective since it highlights the complexity and challenge of designing modern communication systems.
The focus of his presentation was using an adaptive system to steer null points, in addition to the beam, in an antenna array using a least mean square (LMS) algorithm. He began with a simple linear patch array with fixed uniform amplitude weights, since an analytic solution was practical and could be used to validate a simulation setup. However, once the simulation results were verified for confidence, designing a more complex array with weighted amplitudes accompanying the element phase shift was only practical through simulation. While beam steering will create a device-centric system by targeting individual users on massive multiple-input multiple-output (MIMO) networks, null steering can improve efficiency by minimizing interference to other devices. Whether or not spatial processing is truly the “last frontier in the battle for cellular system capacity”, 5G technology will most certainly usher in a new era of high-capacity, high-speed, efficient, and ubiquitous means of communication. If you would like to learn more about how PADT approaches antenna simulation, you can read about it here and contact us directly at info@padtinc.com.

Parameterizing Solid Models for ANSYS HFSS

Posted on October 23, 2017, by: Michael Griesi

ANSYS HFSS features an integrated “history-based modeler”. This means that an object’s final shape is dependent on each and every operation performed on that object. History-based modelers are a perfect choice for analysis since they naturally support parameterization for design exploration and optimization. However, editing imported solid 3D Mechanical CAD (or MCAD) models can sometimes be challenging with a history-based modeler since there are no imported parameters, the order of operation is important, and operational dependencies can sometimes lead to logic errors. Conversely, direct modelers are not bound by previous operations which can offer more freedom to edit geometry in any order without historic logic errors. This makes direct modelers a popular choice for CAD software but, since dependencies are not maintained, they are not typically the natural choice for parametric analysis. If only there was a way to leverage the best of both worlds… Well, with ANSYS, there is a way. As discussed in a previous blog post, since the release of ANSYS 18.1, ANSYS SpaceClaim Direct Modeler (SCDM) and the MCAD translator used to import geometry from third-party CAD tools are now packaged together. The post also covered a few simple procedures to import and prepare a solid model for electromagnetic analysis. However, this blog post will demonstrate how to define parameters in SCDM, directly link the model in SCDM to HFSS, and drive a parametric sweep from HFSS. This link unites the geometric flexibility of a direct modeler to the parametric flexibility of a history-based modeler. You can download a copy of this model here to follow along. If you need access to SCDM, you can contact us at info@padtinc.com. It’s also worth noting that the processes discussed throughout this article work the same for HFSS-IE, Q3D, and Maxwell designs as well. [1] To begin, open ANSYS SpaceClaim and select File > Open to import the step file. [2] Split the patch antenna and reference plane from the dielectric. Click here for steps to splitting geometry. Notice the objects can be renamed and colors can be changed under the Display tab.


[1] Click and hold the center mouse button to rotate the model, zoom into the microstrip feed using the mouse scroll, then select the side of the trace. [2] Rotate to the other side of the microstrip feed, hold the Ctrl key, and select the other side of the trace. Note the distance between the faces is shown as 3mm in the Status Bar at the bottom of the screen, which is the initial trace width. [3] Select Design > Edit > Pull and select No merge under Options - Pull. [4] Click the yellow arrow in the model, and drag the side of the trace. Notice how both faces move in or out to change the trace width. After releasing the mouse, a P will appear next to the measurement box. Click this P to create a parameter. [5] Select the Groups panel under the Structure tree. Change “Group1” to “traceWidth” and reset the Ruler dimension to 0mm. Then, save the project as UWB_Patch_Antenna_PCB.scdoc and leave SCDM open.


[1] Open ANSYS Electronics Desktop (AEDT), insert a new HFSS Design, and select the menu item Modeler > SpaceClaim Link > Connect to Active Session… Notice that there is an option to browse and open any SCDM project if the session is not currently active (or open). [2] Select the active UWB_Patch_Antenna_PCB session and click Connect. [3] The geometry from SCDM is automatically imported into HFSS.


[1] Double-click the SpaceClaim1 model in the HFSS modeler tree and select the Parameters tab in the pop-up dialogue box. Notice the SCDM parameter can now be controlled within HFSS. Change the Value of traceWidth to SCDM_traceWidth to create a local variable and set SCDM_traceWidth equal to -1mm. Then click OK. Notice a lightning bolt over the SpaceClaim1 model to indicate changes have been made. [2] Right-click SpaceClaim1 in the modeler tree and select Send Parameters and Generate. [3] Notice how the HFSS geometry reflects the changes. [4] Notice how the SCDM also reflects the changes. In practice, it is generally recommended to browse to unopen SCDM projects (rather than connecting to an active session) to avoid accidentally editing the same geometry in two places.


At this point, not only can the geometry in SCDM be controlled by variables in HFSS, but a parametric analysis can now be performed on geometry within a direct modeler. The best of both worlds! Use the typical steps within HFSS to setup a parametric sweep or optimization. When performing a parametric analysis, the geometry will automatically update the link between HFSS and SCDM, so step [2] above does not need to be performed manually. Be sure to follow the typical HFSS setup procedures such as assigning materials, defining ports and boundaries, and creating a solution setup before solving. Here are some additional pro-tips:
  1. Create local variables in HFSS that can be used for both local and linked geometry. For example, create a variable in HFSS for traceWidth = 3mm (which was the previously noted width). Define SCDM_traceWidth = (traceWidth-3mm)/2. Now the port width can scale with the trace width.


  2. Link to multiple SCDM projects. Either move and rotate parts as needed or create a separate coordinate system for each component. For example, link an SMA end connector to the same HFSS project to analyze both components. Notice that each component has variables and the substrate thickness changes both SCDM projects.


  3. Design other objects in the native HFSS history-based modeler that are dependent on the SCDM design variables. For example, the void in an enclosure could be a function of SCDM_dielectricHeight. Notice that the enclosure void is dependent on the SCDM dielectric height.


Using External Data in ANSYS Mechanical to Tabular Loads with Multiple Variables, Part 2

Posted on October 5, 2017, by: Alex Grishin

ANSYS Mechanical is great at applying tabular loads that vary with an independent variable, say time or Z. But what if you want a tabular load that varies in multiple directions and time? In part one of this series, I covered how you can use the External Data tool to do just that. In this second part, I show how you can alternatively use APDL commands added to your ANSYS Mechanical model to define the tabular loading. PADT-ANSYS-Tabular-loads-2
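To give a flavor of the approach (the table name, grid, and load values below are made up purely for illustration), a command snippet along these lines defines a pressure that varies with X, Y, and TIME and applies it to whatever nodes are currently selected, for example via a nodal named selection scoped in Mechanical:

! Hypothetical 2 x 2 x 2 table: pressure as a function of X, Y, and TIME
*DIM,ptab,TABLE,2,2,2,X,Y,TIME
! Index values for each independent variable
ptab(1,0,1) = 0.0      ! X values (rows)
ptab(2,0,1) = 1.0
ptab(0,1,1) = 0.0      ! Y values (columns)
ptab(0,2,1) = 1.0
ptab(0,0,1) = 0.0      ! TIME values (planes)
ptab(0,0,2) = 1.0
! Pressure values at each (X, Y, TIME) grid point
ptab(1,1,1) = 100
ptab(2,1,1) = 150
ptab(1,2,1) = 120
ptab(2,2,1) = 180
ptab(1,1,2) = 200
ptab(2,1,2) = 250
ptab(1,2,2) = 220
ptab(2,2,2) = 280
! Apply as a tabular surface load to the currently selected nodes
SF,ALL,PRES,%ptab%

In ANSYS Mechanical, something like this would typically live in a Commands object under the analysis environment, with the target nodes selected first (for example with CMSEL on a named selection) and ALLSEL issued afterwards; the full details are in the linked presentation.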

Working Wonders with APDL Math Illustrated: Thermal Modal Analysis

Posted on September 12, 2017, by: Nicolas Jobert

Guest Blogger

We are pleased to publish this very useful post from Nicolas Jobert from Synchrotron SOLEIL in France. Nicolas is a Mechanical Engineer with more than 20 years of experience using ANSYS for engineering design and analysis in academia and industry. He currently is Senior Mechanical Engineer at Synchrotron SOLEIL, the French synchrotron radiation facility. He also teaches various courses on Design and Validation in the field of structural and optomechanics. He graduated from the Ecole Centrale Marseille, France, and is a EUSPEN member.

As Time Goes By

Do you remember the moment you first heard about ANSYS introducing APDL Math? I, for one, do, and I have a vivid memory of thinking “Wow, that can be a powerful tool. I’m dead sure it won’t be long before an opportunity arises and I’ll start developing pretty useful procedures and tools.” Well, that was half a decade ago, and to my great shame, nothing quite like that has happened so far. The reasons for this are obvious and probably the same for most of us: lack of time and energy to learn yet another set of commands; fear of the ever-present risk of developing procedures that are eventually rejected as nonstandard use of the software and therefore error-prone (those of you working under quality assurance, raise your hand!); anxiety about working directly under the hood on real projects with little means to double-check your results, to name a few. That said, an opportunity finally presented itself, and before I knew it, I was up and running with APDL Math. The objective of this article is to showcase some simple yet insightful applications and hopefully remove the reservations one can have about using these additional capabilities. For the sake of demonstration, I will begin with a somewhat uncommon analysis tool that should nevertheless ring a bell for most of you, that is: modal analysis (and yes, the pun is intended). You may wonder what the purpose is of using APDL Math to perform a task that has been a standard ANSYS capability since, say, revision 2.0, 40 years ago? But wait, did I mention that by modal analysis, I mean thermal modal analysis?

Thermal Modal Analysis at a Glance

Although scarcely used, thermal modal analysis is both an analysis and a design validation tool, mostly used in the field of precision engineering and or optomechanics. Specifically, it can serve a number of purposes such as:

Q: Will my system settle fast enough to fulfill design requirements? A: Compute the system Thermal Time Constants

Q: Where should I place sensors to get information-rich / robust measurements? A: Compute thermal modes and place your sensors away from large thermal gradients

Q: Can I develop a reduced model to solve large transient thermal-mechanical problems? A: A modal basis allows the construction of such a reduced problem, effectively converting a high-order coupled system to a low-order, uncoupled set of equations.

Q: How to develop a reduced order state-space matrices representation of my thermal system (equivalent to SPMWRITE command)? A: Modal analysis provides every result needed to build those matrices directly within ANSYS.

Although you might only be vaguely familiar with some or all of those topics, the idea behind this article is really to show that APDL Math does exactly what you need it to do: allow the user to efficiently address specific needs, with a minimal amount of additional work. Minimal? Let’s see what it looks like in reality, and you will soon enough be in a position to form your own opinion on the matter.

Thermal Modal Analysis using APDL Math

To begin with, it is worth underlining the similarities and differences between structural (vibration) modes and thermal modes. Mathematically, both look very much the same, i.e. modes are solutions of the dynamics equation in the absence of a forcing (external) term:

Domain       Equation solved           Terms explained
Structural   ([K] - λ[M]){φ} = {0}     [K] is the stiffness matrix, [M] is the mass matrix
Thermal      ([K] - λ[C]){φ} = {0}     [K] is the conductivity matrix, [C] is the capacitance matrix
Now, the fundamental difference is that the eigenvalues have completely different physical interpretations (this is a direct consequence of the fact that dynamical systems are 2nd order systems, whereas thermal systems are 1st order systems: while the former, after being disturbed, will oscillate around an equilibrium position, the latter will return to its initial state via exponential decay. Mind you, there is no such thing as a thermal resonance!):
  • Structural : λ=ω², i.e. the square of a circular frequency
  • Thermal: λ=1/τ, i.e. the inverse of time constant
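Concretely, since the solver reports each eigenvalue λ as a frequency f = √λ/(2π), a thermal eigenvalue maps back to a time constant as

τ = 1/λ = 1/(2πf)²

which is exactly the conversion applied at the end of the thermal listing below, and the reason the upper frequency limit passed to MODOPT is written as 1/(2π√Tmin) for the shortest time constant of interest, Tmin.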
No big deal, right? Hence, the APDL Math code for thermal modal analysis should be a straightforward adaptation of the original. As it turns out, the modifications are quite small. Below is a comparison of the input to perform both types of analyses using APDL Math.

Structural:

! Setup Model
 …

! Make ONE dummy transient solve
! to write the stiffness and mass
! matrices to the .FULL file
 /SOLU
 ANTYPE,TRANSIENT
 TIME,1
 WRFULL,1
 SOLVE

! Get stiffness and mass matrices
 *SMAT,MatK,D,IMPORT,FULL,,STIFF
 *SMAT,MatM,D,IMPORT,FULL,,MASS

! Eigenvalue extraction
 ANTYPE,MODAL
 MODOPT,LANB,NRM,0,Fmax
 *EIGEN,MatK,MatM,,EiV,MatPhi

! No need to convert eigenvalues to
! frequencies, ANSYS does it automatically

! Done!

Thermal:

! Setup Model
 …

! Make TWO dummy transient solves
! to separately write the conductivity
! and capacitance matrices to the .FULL file
 /SOLU
 ANTYPE,TRANSIENT
 TIME,1
 NSUB,1,1,1
 TINTP,,,,1
 WRFULL,1

! Zero out capacitance terms
 …
 SOLVE
! Get conductivity matrix
 *SMAT,MatK,D,IMPORT,FULL,Jobname.full,STIFF

! Restore capacitance and zero out
! conductivity terms
 …
 SOLVE
! Get capacitance matrix
 *SMAT,MatC,D,IMPORT,FULL,,STIFF

! Eigenvalue extraction
 ANTYPE,MODAL
 MODOPT,LANB,NRM,0,1/(2*PI*SQRT(Tmin))
 *EIGEN,MatK,MatC,,EiV,MatPhi

! Convert eigenvalues from frequencies
! to thermal time constants
 *DO,i,1,EiV_rowDim
    EiV(i)=1/(2*PI*EiV(i))**2
 *ENDDO

! Done!
The only data requested from the user is the number of requested modes (NRM) as well as the upper frequency (or, for that matter, the shortest time constant of interest). Also, note that in the thermal case one needs to perform two separate dummy analyses to store the conductivity and capacitance matrices, since internally those are merged into an equivalent stiffness (conductivity) matrix (with θ = 1 and a unit time step, the matrix written to the .FULL file is effectively [K] + [C], which is why one of the two is zeroed out before each dummy solve). If you are familiar with APDL, some important differences are apparent here:
  • Results of the eigensolution are stored in a vector (EiV) and a matrix (MatPhi), which need not be declared but are created when executing the *EIGEN command (no *DIM required).
  • For each APDL Math entity, ANSYS automatically maintains variables named Param_rowDim and Param_colDim, removing the burden of keeping track of dimensions.

But where on Earth is my eye candy?

Now that we have a procedure and results, we would like to be able to show them to the outside world (and, to be honest, some graphical results would also help build confidence in the results). The additional work to do so is really minimal. We simply need to put those numerical results back into the ANSYS database so that we can use all the conventional post-processing capabilities. This can be done using the appropriate POST1 commands, essentially DNSOL. And, while we are at it, why not do a hardcopy to an image file? Here is the corresponding input.
! … The user should place all nodes with non-prescribed temperatures
! in a component named MyNodeComponent

! … First, convert eigenvectors from solver to BCS ordering
! (conversion needed)
 *SMAT,Nod2Bcs,D,IMPORT,FULL,Jobname.full,NOD2BCS
 *MULT,Nod2Bcs,TRAN,MatPhi,,MatPhi

! Then, read in the mapping vector to convert to user ordering
 *VEC,MapForward,I,IMPORT,FULL,Jobname.full,FORWARD

! Put the results in the ANSYS database
 /POST1
 *do,ind_mode,1,NRM
    cmsel,s,MyNodeComponent
    curr_node=0
    *do,i,1,ndinqr(0,13)             ! loop over the selected nodes
       curr_node=ndnext(curr_node)
       curr_temp=MatPhi(MapForward(curr_node),ind_mode)
       dnsol,curr_node,TEMP,,curr_temp
    *enddo
    Tau=1/(2*3.14*EiV(ind_mode))**2
    To=NINT(Tau*10)/10               ! round to 1 digit after the decimal point
    /title,Mode #%ind_mode% - Tau=%To%s
    plnsol,temp
    ! Hardcopy to BMP file
    /image,SAVE,JobName_Mode%ind_mode%,bmp
 *enddo
This way, modes can be displayed, or even written to a conventional .RTH file (using RAPPND), and used as any regular ANSYS solver result.
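For instance, a minimal sketch of the latter (appending each mode already placed in the database with DNSOL as its own load step on the results file) could look like this, placed inside the *do,ind_mode loop above after the DNSOL calls:

! Sketch: persist the mode currently in the database to the results file
! so it can be re-read later like an ordinary load step.
 RAPPND,ind_mode,ind_mode   ! load step number and "time" value both set
                            ! to the mode index here for convenience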

Nice, but an actual example wouldn’t hurt, would it?

Now you may wonder what the results look like in reality. To remain within the field of precision engineering, let’s use a support structure typically designed for high-stability positioning. From a structural point of view, it must have high dynamic stiffness and low total mass, so a Delta-shaped bracket is appropriate. Since we want the system to rapidly evacuate any heat load, we choose aluminum as the candidate material. We know from first principles that any applied disturbance will exponentially vanish and the system will return to its equilibrium state. Now, what will be the time constants of this decay? For the sake of simplicity we restrict the analysis to a highly simplified 2D model of such a support. PLANE55 elements are used to model the structural part while the heat sink is accounted for using SURF151. Boundary conditions are enforced using an extra node. After applying boundary conditions, we execute the modal solution to obtain, say, the first 8 modes.
Index Time Constant [s] Comment
1 535.9 Quasi-uniform temperature field (i.e. “rigid body” mode)
2 32.1 1st order (one wavelength along perimeter)
3 23.8 1st order (one wavelength along perimeter)
4 8.1 2nd order (two wavelengths  along perimeter)
5 6.8 2nd order (two wavelengths  along perimeter)
6 3.5 3rd order (three wavelengths  along perimeter)
7 3.1 3rd order (three wavelengths  along perimeter)
8 2.2 4th order (four wavelengths along perimeter)
The output is strictly the same as that of a standard modal analysis, except for the two additional lines at the end of the solution sequence:
Allocate a [8] Vector : EIV

Allocate a [227][8] Dense Matrix : MatPhi
Please note that the solution has 227 DOFs whereas the entire problem has 228 DOFs. This is the consequence of having introduced the boundary condition as an enforced temperature on a node, whose DOF is therefore removed from the DOF set to be obtained by the solver.

Also, we might want to use the modal shape information to decide which locations are best suited to capture the entire temperature field on the structure. Without knowledge of the excitation source, one straightforward way to do so is to retain, for each mode, the node that has the largest amplitude. This is made even easier in this situation: since we have normalized each mode to unit maximum amplitude, we just need to select nodes having a modal amplitude equal to 1 (or -1). In the figure below, each temperature sensor location is marked with a ‘TSm’ label, where m is the mode index. Doing so, we reach a pretty satisfactory distribution of sensor locations, completely consistent with intuition. In numerical terms, we can also check that the modal matrix [Φ]_sensors, i.e. the original full matrix restricted to the selected DOFs, has an excellent condition number.

But there are many other things we could do starting from this. For example, with additional information, such as the location and the frequency content of the temperature fluctuations, one could further restrict the set of needed temperature sensors by running a dummy transient analysis and choosing locations where the correlation between sensor readings is as low as possible (using *MOPER,,,CORR). Even better, one can estimate the thermally induced displacements and select locations best suited to building an empirical model (typically using AR or ARMA), allowing one to predict structural displacements induced by temperature fluctuations using just a couple of sensors. This in turn can be used to select control strategies, check modal controllability… all within ANSYS.

Conclusion

APDL Math was presented as an alternate route for users who need to include specialized steps in an otherwise standard FE process, and in my opinion it does just that. The benefits can be immense, and the learning curve is steep but short. As long as the user knows what he or she is doing, there is little chance of getting lost: after all, APDL Math comprises only 18 additional commands. What hindered me the most was the need to account for internal, BCS, and user orderings, but it really is not a big deal, as seen from the example above. What is more, the ability to store the created results in the Mechanical APDL database (DNSOL and RAPPND are your friends!) provides every means to check your results and finally to build confidence in your developments. And for those of us who prefer to stay within the Workbench environment, there is nothing preventing you from including APDL Math procedures in Workbench command snippets. This was just an introductory example; many other applications could be found, to name a few just in the fields of precision engineering and/or opto-mechanics:
  • Speed up transient thermal mechanical analyses
  • Perform harmonic analysis of thermal models
  • Virtual testing of physical setup, including real-time control systems (model based)
  • Modal testing, error localization, automated model updating
Let us know your opinion on the matter, and if further introductory articles on APDL Math could be of use to the ANSYS users community.