Posted on April 25, 2018, by: Eric Miller
The use of FEA and CFD techniques to simulate the behavior of structures, fluids, and electromagnetic fields has gone from an occasional task done by experts to a standard method for driving product development. The webinar below is a presentation by PADT's Co-Owner and Principal, Eric Miller, discussing recent advances in simulation that are pushing the technology toward covering more phenomena, faster run times, and greater accuracy. From up-front real-time stress and fluid flow to massive combustion models with chemistry, fluid flow, thermals, and turbulence, simulation is how products are designed. The talk covers:
- What is Simulation and How did we Get Where we are
- Five Current Technology Trends in Simulation
- Business Trends to be Aware Of
- What Is Next?
- How to Keep Up
Posted on April 24, 2018, by: Ziad Melhem
Were you ever excited to jump into your analysis, only to have a "server is down or not responsive" message pop up and shut you out of the fun, the way an exclusive club makes its patrons wait at the door? It might have been your manager running a reverse psychology trick on you, or maybe not. If it is the latter, you are not alone. As a matter of fact, licensing questions come to us on a regular basis. And even though there is plenty of information on the web, we figured it would be beneficial to have the most frequent answers gathered in one place: an FAQ document (attached to this blog). The Table of Contents includes the following topics:
Download the PDF here. The document was written with the assumption that the reader has no prior experience with ANSYS or licensing in general. It is laid out in an easy step-by-step format with photos. The table of contents has hyperlinks embedded in it and can be used to easily navigate to the relevant sections. We hope this document will bring value in solving your licensing issues, and we are always here to help if it doesn't:
1-800-293-PADT or 480-813-4884
Posted on April 4, 2018, by: Alex Grishin
A recurring theme in ANSYS Technical Support queries involves the separation of rigid-body from material deformations without performing an additional analysis. Many users simply assume this capability should exist as a simple post-processing query (or that, in any case, this shouldn't be a difficult operation). "Rigid-body" displacements imply a transient dynamic analysis (as such displacements should not occur in static analyses), but as we'll see, there are contexts within static structural environments where this concept DOES play an important engineering role. In static structural contexts, such rigid-body motion implies motion transmitted across multiple bodies. There are two important and loosely related contexts we'll look at: zero-strain rotations about the CG, and those rotations combined with strain-based displacement. The following presentation explains the issues, including the math behind them, offers solutions including useful APDL macros, and then gives examples. The models and macros used are in this zip file: PADT-ANSYS-Extract-Dsp-Files PADT-ANSYS-Mechanical-Extracting-Relative-Displacements-20180404
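The core operation behind the approach, subtracting a best-fit rigid-body motion from the computed nodal displacements, can be sketched in plain Python. This is a conceptual 2D, small-rotation illustration of my own construction, not the APDL macros from the zip file:

```python
# Conceptual sketch: separate rigid-body motion from material deformation
# for 2D nodal data, assuming small (linearized) rotations as in a linear
# static analysis. Illustration only; not the APDL macros from the zip file.

def remove_rigid_body(coords, disp):
    """Return (translation, rotation, relative_disp) for 2D nodal data."""
    n = len(coords)
    cgx = sum(x for x, _ in coords) / n
    cgy = sum(y for _, y in coords) / n
    # Best-fit translation is the mean displacement
    tx = sum(ux for ux, _ in disp) / n
    ty = sum(uy for _, uy in disp) / n
    # Best-fit small rotation about the CG (least squares):
    # theta = sum(r x (u - t)) / sum(|r|^2)
    num = den = 0.0
    for (x, y), (ux, uy) in zip(coords, disp):
        rx, ry = x - cgx, y - cgy
        num += rx * (uy - ty) - ry * (ux - tx)
        den += rx * rx + ry * ry
    theta = num / den
    # Subtract the rigid-body part to get the material (relative) displacement
    rel = []
    for (x, y), (ux, uy) in zip(coords, disp):
        rx, ry = x - cgx, y - cgy
        rel.append((ux - tx + theta * ry, uy - ty - theta * rx))
    return (tx, ty), theta, rel

# Demo: a pure rigid-body motion (translate by (1, 2), rotate 0.01 rad about the CG)
coords = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
disp = [(1.0 - 0.01 * (y - 0.5), 2.0 + 0.01 * (x - 1.0)) for x, y in coords]
(tx, ty), theta, rel = remove_rigid_body(coords, disp)
# rel is ~0 here: the motion was entirely rigid, so no material deformation remains
```

For a pure rigid-body input, the recovered relative displacements are zero to machine precision; whatever remains after the subtraction is the strain-producing deformation.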
You can also download the PDF here. Find this interesting? This is just a small sample of PADT's deep and practical understanding of the entire ANSYS suite of products. Please consider us for your training, mentoring, and outsourced simulation services needs.
Press Release: New Expansion into Texas Grows PADT’s ANSYS Sales & Support Across the Entire Southwest
Posted on February 6, 2018, by: Eric Miller
When people look at PADT and where we are located, they almost always say, "You should open an office in Austin; the tech community there is a perfect fit for your skills and culture." We finally listened and are proud to announce that our newest location is in Austin, Texas. This new office will initially focus on ANSYS sales and support across the great state of Texas. We have had customers for other products and services in the state for decades and are pleased to have a permanent local presence now. As an Elite ANSYS Channel Partner, we provide sales of the complete ANSYS product suite to any and all entities that can benefit from the application of numerical simulation. Across industries, we bring a unique technical approach to both sales and support that is focused on identifying need and then selecting the right toolset, training, and support to deliver a return on the customer's investment as soon as possible. And the initial product purchase is just the start. Our ANSYS customers are partners that we grow with, and we are always ready to help them be better at whatever it is they do. Customers in Southern California, Nevada, Arizona, Utah, New Mexico, and Colorado already know this, and it is time for the engineering community in Texas to benefit from the same experience. Because we will be there for the long term, we are taking our time to look around the area. Our new salesperson, Ian Scott, is an Austin native who has worked in the engineering software space for some time. He will be working with existing customers and partners in the area to find the right location for us long-term. But we are already putting plans in place to deliver outstanding training, hold meetings, and maybe even throw a celebration or two while we settle in. Over time we will add local engineers and additional sales staff to meet the needs of the state, which, as you know, is big.
We have big plans for PADT and Texas; this ANSYS sales and support role is just the beginning. Make sure you watch this blog, social media, or our newsletter for announcements about a celebration for our new office as well as technical events we will start holding very soon. We look forward to reconnecting with old friends and making new ones. If you are in Texas, please reach out and send us any suggestions or recommendations you may have. We are really looking forward to growing in Austin and across the Lone Star State. Please find the official press release on this expansion below, as well as versions in PDF and HTML.
Posted on January 25, 2018, by: Joe Woodward
As it so often does, another blog article idea came from a tech support question that I received the other day: "How do you view edge directions in ANSYS SpaceClaim?" You can do it in Mechanical, on the Edge Graphics Options toolbar. This will turn on arrows so that you can see the edge directions. The directions of the edges or curves affect things like mesh biasing factors and mass flow rate boundary conditions. You need to make sure that all your pipes in a thermal analysis, for instance, are flowing in the same direction. (I have also had three tech support calls about weird spikes showing up in customers' geometry. The Display Edge Direction option is also how you turn those off.) In ANSYS SpaceClaim, there is no way to just display the edge directions. The directions are controlled by which point you pick first while sketching, so if you are careful, you can make sure they are all consistent. But that doesn't help when you read in CAD files. So I thought I would share with you what I found after a little bit of digging and playing. I discovered that the Move tool behaves in a very specific way, a way that we can use for our need. When you pick the edge of a surface or solid, or even a straight sketched line, the red arrow of the Move tool will point in the direction of the curve. These directions match what gets shown in Mechanical. For splines, it's a little bit different. If you just pick a spline with the Move tool, the triad will align with the global coordinate system. To see the spline direction, you first have to hover over the spline to show its vertices. Then you can pick an interior vertex, and the blue arrow of the Move tool will follow the spline direction. This only works at the interior vertices, and not at the ends. At the ends, the blue tool arrow will always point outward from the spline endpoints, so you won't really know which is the correct spline direction.
I have also found that this technique does not work on sketched circles or arcs, because the tool always anchors to the center of the curve, and not to the curve itself. You can, however, use the Repair > Fit Curves tool to convert arcs to splines, using only the Spline option. Then the Move tool will show those directions as described above. For circles, you have to take one more step and first use the Split tool to split the circle into two arcs. All that, though, is in my opinion more work than it's worth. I hope this helps make your lives just a little easier. Have a great day.
Posted on January 24, 2018, by: Ahmed Fayed
Part of the PADT core philosophy is to "provide flexible solutions of higher quality in less time for less money." This part of the philosophy also applies to how we design and build PADT's internal structure, the tools we use, and the processes we adopt. Among the growing pains of most engineering and simulation organizations is the constantly growing demand for storage capacity, data management and protection, and BOATLOADS of computing power. Sadly, PADT engineers have yet to develop near-infinite storage capacity (like DNA for storage) or a working quantum computer that can run ANSYS. So we're in the same boat as everyone else. We have been exploring what our major pains are and what optimizations can be made to our simulation environment (about 2,000 cores of CUBE Simulation Cluster Appliances), as well as a structured, controlled solution for engineering data management. As always, we started by looking inward:
- What skills are available, or learnable within PADT that can help address the need?
- What tools & resources do we have access to?
- What do we need to acquire or buy?
Interactive and batch submission to high-performance computing resources
For PADT, a very practical feature of EKM is the ability to easily interface with existing High-Performance Computing (HPC) infrastructure. By communicating through ANSYS Remote Solve Manager (RSM), EKM is able to effortlessly interface with most HPC schedulers and resource managers in both the Windows and Linux worlds. This feature is huge because analysts can seamlessly upload their models into the secure EKM repository, submit the jobs to the HPC cluster(s), monitor their runs, and upload their choice of results directly into EKM for review and post-processing. EKM works hard to keep the interface familiar, flattening the learning curve and keeping things simple by making the batch submission menus as close as possible to the local ones. At PADT, whenever we are debugging models or application behavior, we want an interactive session so we have the most control over and visibility into the environment. With EKM, we can utilize the remote visualization and acceleration tool NICE DCV. DCV is launched from within EKM and provides access to an interactive desktop on a cluster target while also accelerating OpenGL graphics for visually intensive programs.
Storage and archiving of simulation data with built-in version control, data aging, and expiry
ANSYS EKM provides a comprehensive data management toolset that is derived from real-world needs. Features like highly granular access control, file and folder sharing and collaboration, version control, check-out and check-in procedures, and many more are enabled and very powerful out of the box. Other more advanced features, such as data aging, auto-archiving, and an auto-unpacking option for zip files, are also very useful. The capabilities don't end here, as EKM integrates directly with ANSYS Workbench. Analysts can seamlessly access their EKM repository from Workbench to make modifications and save directly back to EKM without needing to switch applications. Check-outs are automatically checked back in, and new version numbers can be created automatically as well.
Metadata extraction
An extremely powerful piece of EKM is the metadata extraction engine that is baked into the core. EKM stores files as two entries: the original file and the file metadata. EKM goes beyond the basic filename, date, and owner metadata and digs deeper, into the CAE-meaningful metadata of the model: setup, material properties, element counts, mesh type, and so on. It also extracts snapshots of the geometry and contours, and in some cases even provides a 3D model that can be directly manipulated by the user. A sample of ANSYS Fluent case metadata is below. Another feature of metadata extraction is the ability to take a quick look at simulation results, perform cutplanes, pan, tilt, and zoom, as well as add comments and even capture and share snapshots, all without leaving the browser window. Metadata extraction is supported for ANSYS data types, and defining new data types for other CAE data types or in-house codes is straightforward and easy to do.
A rich search capability that goes beyond filename, owner, and timestamps
How many times have I kicked myself for not using meaningful file names with versions and useful time stamps, and ended up spending hours opening files for a quick peek only to find each one wasn't the file I was looking for? Too many. CAE models have hundreds of variables and parameters embedded in them. Wouldn't it be useful if someone came up with a system for storing CAE models in which an analyst could simply type a search variable and it would search not only names and timestamps but actually dig into the guts of the model and search those? Well, EKM is one such system. Analysts can search using thousands of field combinations that encompass everything from material properties to partitioning methods, boundary conditions to cell counts; you get the idea, it's pretty awesome!
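To make the idea concrete, here is a toy sketch of field-based search over extracted metadata records. The record fields and the `search` helper are invented for illustration; this is not EKM's actual interface:

```python
# Toy sketch of field-based search over extracted metadata records.
# The record fields and the search() helper are invented for
# illustration; this is not EKM's actual interface.
records = [
    {"file": "wing_v1.cas", "solver": "Fluent", "cell_count": 12_500_000},
    {"file": "wing_v2.cas", "solver": "Fluent", "cell_count": 48_000_000},
    {"file": "bracket.db", "solver": "Mechanical", "cell_count": 250_000},
]

def search(records, **criteria):
    """Keep records matching every criterion (an exact value or a predicate)."""
    def matches(rec):
        for key, want in criteria.items():
            have = rec.get(key)
            if callable(want):
                if not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [r for r in records if matches(r)]

# All Fluent cases with more than 20 million cells
big = search(records, solver="Fluent", cell_count=lambda n: n > 20_000_000)
```

The point is that the query runs against model-level fields (solver, cell counts, material properties), not against filenames, which is what makes this kind of search useful at all.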
Simulation process and workflow management
In EKM, administrators can create simulation workflows and lifecycles that manage all of the different steps that go into creating, running, and concluding a simulation while ensuring that proper reviews and approvals are handled. In addition to documenting and automating the workflows, some of the underlying work can be automated as well. As noted above, batch submission is baked right into EKM's capabilities, and workflows can automatically launch batch submission scripts to a cluster and get the simulation going as soon as the proper files are loaded and that stage in the process is released. Workflow processes are defined in a simple XML format, or created using a dedicated mini-tool, and uploaded into EKM ready to roll. Email notifications are preset and shoot out whenever progress is made on a step in the workflow or an approval is needed. A nifty process chart is also built into the EKM processes interface, showing the workflow structure and current progress.
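The workflow idea, ordered stages with gated releases and notifications, can be illustrated with a minimal, invented state machine. This is a conceptual sketch only, not EKM's workflow engine or its XML schema:

```python
# Minimal, invented workflow sketch (conceptual only; this is not
# EKM's workflow engine or its XML schema).
class Workflow:
    def __init__(self, steps, notify=print):
        self.steps = list(steps)   # ordered lifecycle stages
        self.done = []             # stages released so far
        self.notify = notify       # e.g. an email hook in a real system

    def release(self, step):
        """Release the next stage; out-of-order releases are rejected."""
        expected = self.steps[len(self.done)]
        if step != expected:
            raise ValueError(f"expected {expected!r}, got {step!r}")
        self.done.append(step)
        self.notify(f"step complete: {step}")

    @property
    def finished(self):
        return len(self.done) == len(self.steps)

# Walk a simulation lifecycle through its stages in order
wf = Workflow(["setup", "review", "solve", "approve"], notify=lambda m: None)
for stage in ["setup", "review", "solve", "approve"]:
    wf.release(stage)
```

In a real system the "solve" stage is where a batch submission script would fire, and the notify hook is where the email notifications described above come from.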
Conclusion
In conclusion, ANSYS EKM is awesome! (Seriously now:) PADT has invested a lot of time and resources in implementing EKM, and in the coming months we will be transitioning all of our engineering knowledge into it. It is already integrated with our HPC cluster and will be our central repository for engineering data. In this article, I have really only skimmed the surface of what EKM can do and what it currently does for us here at PADT. If you are interested in checking out ANSYS EKM or have any questions or thoughts, please reach out to us with a comment or email, or just give us a call.
Spectre Side-Channel and Meltdown – How will living in this new reality affect the world of numerical simulation?
Posted on January 17, 2018, by: David Mastel
Literally while I was sorting, running, and prepping the new benchmark data, originally titled "ANSYS Release 18.2 Ball Grid Array Benchmark information using two sixteen-core INTEL® XEON® Gold 6130 CPUs," I noticed that my news feeds had started to blow up with late-breaking HPC news. The news, as you may have guessed, was the Spectre and Meltdown flaws that had just been published. I thought to myself, "Well, this is just great; the benchmarks that I just ran are no longer relevant." My next thought was, "Wait, now I can show a real-world example of the exact percentage change. I have waited this long to run the ANSYS numerical simulation benchmarks on this new CPU architecture; I can wait a little longer to post my findings." What now? Oh my, more late-breaking news! Research findings! Execution reordering with no barriers! Side channels used to get access to private address areas of the hardware! Wow, this is a bad day. As I sat reading more news, I drifted off daydreaming, then back to my screen, then to the clock on the wall; great, it is 2am already, just go home... Then my thoughts immediately shifted back to how these hardware flaws impact the missing-middle market: HPC numerical simulation! I dug in deep and pressed forward, content with starting over on the benchmarks, knowing that after the patches released around January 9th it would be a whole new world. I decided to spare you the ugly details of the Spectre array-bounds and branch-prediction attack flaws, and the out-of-order Meltdown vulnerabilities. UGH! I seriously believe that someone has an AI writing each news article five or six different ways, with each somehow saying the same thing. I also provide links to the information and legal statements directly from a who's who list of accountable parties. Executive Summary:
- *Remember, every case is different, so please do run your own tests to verify how this new reality affects your hardware and software environment.*
- Due to costs, this machine has a single NVMe M.2 for the primary drive, with a single 2TB SATA drive for its mid-term storage area.
- What was the impact for my benchmark?
- Positive takeaway:
- In all of the years of running the sp5 benchmark, I recorded the fastest benchmark time using this CUBE w32s, dual INTEL® XEON® Gold 6130 CPU workstation.
- Using all thirty-two cores: 125.7 seconds Solution Time (Time Spent Computing Solution).
- Coming in at 135.7 seconds, the Solution Time after applying the OS patches is my second-fastest data point for the ANSYS sp5 benchmark.
- ANSYS sp5 benchmark data collected at PADT, Inc. from 2005 to the present.
- The Solution Times continued to drop with each bump in cores.
- Performance per dollar was maximized in this configuration.
- The impact depends on the number of cores used for the ANSYS sp5 benchmark. The actual data below shows the percentage differences before and after:
- Largest percentage difference:
- Solution Time: -9.81% using four CPU cores.
- Total Time: -7.87% using two CPU cores.
- Positive takeaway:
- The time to turn the security screws down within your corporate enterprise network is now.
- A rogue malicious agent needs to be on the inside of your corporate network to execute any sort of crafted attack. Many of these details are outlined in the Project Zero abstract.
- Pay extra attention to just who you let on your internal network.
- I reiterate the recommendations of many security professionals: you should already be restricting your internal company network and workstations to employee use. If you are not sure, ask.
- Spectre flaw:
- INTEL, ARM & AMD CPUs are affected by the Spectre array-bounds hardware attacks.
- Meltdown flaw:
- INTEL CPUs and some ARM high-performance CPUs are affected by the "easier to exploit" Meltdown vulnerability.
- Reading privileged memory with a side-channel - Project Zero
- INTEL Responds To Security Research Findings
- INTEL Analysis of Speculative Execution Side Channels
- ARM Vulnerability of Speculative Processors to Cache Timing Side-Channel Mechanism
- AMD Processors: Google Project Zero, Spectre and Meltdown – An Update on AMD Processor Security
- Security Bulletin: NVIDIA Driver Security Updates for CPU Speculative Side Channel Vulnerabilities. NVIDIA DRIVER RESPONSE TO CPU SPECULATIVE SIDE CHANNEL VULNERABILITIES - CVE-2017-5753, CVE-2017-5715, CVE-2017-5754
- Unpatched Benchmark Data – No mitigation patches from Microsoft or NVIDIA addressing the Spectre and Meltdown flaws have been applied to the Windows 10 Professional OS running on the CUBE w32s that I use in this benchmark.
- Patched Benchmark Data – I installed the batch of patches released by Microsoft, as well as the graphics card driver update released by NVIDIA. NVIDIA indicates in their advisory that their GPU hardware is not affected, but they are updating their drivers to help mitigate the CPU security issue. Huh? Installing now…
- Solution Time – The amount of time in seconds that the CPUs spent computing the solution ("The Time Spent Computing Solution").
- Total Time – Total time in seconds that the entire process took; how the solve felt to the user, also known as wall clock time.
- CUBE w32s: INTEL® XEON® Gold 6130 CPUs, 128GB DDR4-2667MHz (1Rx4) ECC REG DIMM, Windows 10 Professional, ANSYS Release 18.2, INTEL MPI 184.108.40.206, 32 total cores, NVIDIA QUADRO P4000, Samsung EVO 960 Pro NVMe M.2, Toshiba 2TB 7200 RPM SATA 3 drive.
- Other notables (are you still paying attention?):
- My Supermicro X11Dai-N BIOS Settings:
- BIOS Version: 2.0a
- Execute Disable Bit: DISABLE
- Hyper threading: ON
- Intel Virtualization Technology: DISABLE
- Core Enabled: 0
- Power Technology: CUSTOM
- Energy Performance Tuning: DISABLE
- Energy performance BIAS setting: PERFORMANCE
- P-State Coordination: HW_ALL
- Package C-State Limit: C0/C1 State
- CPU C3 Report: DISABLE
- CPU C6 Report: DISABLE
- Enhanced Halt State: DISABLE
- With read performance of up to 3,200MB/s and write performance of up to 1,900MB/s, the Samsung NVMe M.2 drive was too tempting to pass up as my solve and temp solve location. The bandwidth from the little feller was just too impressive and continued to impress throughout the numerical simulation benchmarks.
- Cube w32s by PADT, Inc. ANSYS Release 18.2 FEA Benchmark
- BGA (V18sp-5)
- Transient nonlinear structural analysis of an electronic ball grid array
- Analysis Type: Static Nonlinear Structural
- Number of Degrees of Freedom: 6,000,000
- Matrix: Symmetric
|ANSYS sp5 Benchmark - Unpatched Windows 10 Professional for the Spectre and Meltdown hardware vulnerabilities - CUBE w32s|
|CPUs|Solution Time|Total Time|
|ANSYS sp5 Benchmark - Patched Windows 10 Professional - CUBE w32s|
|CPUs|Solution Time|Total Time|
|Percentage Difference - Not Patched vs. Patched for Spectre, Meltdown|
|Solution Time|Total Time|
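For reference, the percentage differences in the tables are simple before-and-after comparisons. Using the 32-core Solution Times quoted earlier (125.7 seconds unpatched, 135.7 seconds patched), a quick check looks like this:

```python
# Before/after comparison of solve times, using the 32-core Solution
# Times quoted above: 125.7 s unpatched, 135.7 s patched.
def percent_change(before, after):
    """Percent change relative to the unpatched time; positive = slower."""
    return (after - before) / before * 100.0

penalty = percent_change(125.7, 135.7)
# About an 8% Solution Time penalty on this 32-core run
```

The sign convention is arbitrary; the tables above report the differences as negative numbers, i.e., the decline relative to the unpatched runs.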
Posted on December 19, 2017, by: Joe Woodward
NOOOOOOOOOOOOOOOOOOOOOOOOOOOO!!!! Is this the reaction you have when you come in on Monday morning and realize that another Windows update has, once again, rebooted your PC before you had a chance to save the 30-hour run that should have finished over the weekend? There is a Workbench setting that can help relieve some of that stress. The "Save Project After Solution" option will save the entire project as soon as the solution has finished. So when your model runs for 30 hours over the weekend, it gets saved before a Windows update shuts everything down. These settings are persistent, so once you've changed them to 'Yes', you are all set for next time. You just need to make sure that you change them for each ANSYS version if you have more than one installed. Now on to my next blog… "How to recover a run if you forgot to change the settings above." (Grumble grumble!)
Posted on December 11, 2017, by: Ziad Melhem
Before joining PADT last July, I had worked on FEA and CFD analyses, but my exposure to ANSYS was limited and I was concerned about the transition. To my delight, the software was very easy to learn: more often than not it is intuitive and self-explanatory (e.g., the Mechanical wizard), setup time was minimized after learning a couple of simple features (e.g., named selections, the object generator, etc.), and the resources on the ANSYS portal were instrumental in the learning process. Furthermore, my colleagues at PADT proved to be very knowledgeable and experienced and, more importantly, responsive and eager to jump in and help. One of the most attractive features that caught my attention was the streamlined Multiphysics nature of ANSYS. I have been satisfied with the performance of standalone CFD packages in the past, and the same goes for structural ones, but never had I dealt with such an extensive software suite that maintained the quality of a specialized one. The importance of this attribute has become more and more apparent in recent years, given the development of new, intricate products of a Multiphysics nature, e.g., medical applications. Using ANSYS to simulate medical applications is one of the most rewarding experiences I personally enjoy. Even though it is definitely satisfying to be able to help accelerate innovation in the aerospace, automotive, and a myriad of other industrial areas, the experience in the medical area has a more refreshing taste, probably due to the clear and direct link to human lives. From intravascular procedures to shoulder implants and microdevices, there is one common factor: ANSYS is decreasing the risks of catastrophic failures, improving product capabilities, and shortening the innovation cycle. Editor's Note: Ziad is part of PADT's team in Southern California. He is a graduate of USC and worked at Boeing, Meggitt, and Pacific Consolidated Industries before joining PADT.
He works with the rest of our ANSYS technical staff to make sure our users are getting the most from their ANSYS investment.
Posted on December 5, 2017, by: Ted Harris
I'm sure most people don't know the name George M. Low. He was an early employee at NASA, serving as Chief of Manned Space Flight and later as a leader in NASA's Apollo moon program in the late 1960s. In fact, he was named Manager of the Apollo Spacecraft Program after the deadly Apollo 1 fire in 1967, and helped the program move forward to the successful moon landings starting in 1969. As most alumni of Rensselaer Polytechnic Institute know, he returned to Rensselaer, his alma mater, serving as president from 1976 until his death in the 1980s. I still recall the rousing speech he gave to us incoming freshmen at the Troy Music Hall on a hot September afternoon. On our class rings is his quote, "Without risk there can be no progress." I've pondered that quote many times in the years since. It's easy to coast along in many facets of life and accept and even embrace the status quo. Over the years, though, I have observed that George Low was right: the truth is that risk is required to move forward and improve. The hard part is determining the level of risk that is appropriate, but it's a sure bet that by not taking any risk, we will lag behind. How is that realization applicable to our world of engineering simulation? Surely those already doing simulation have moved from the old process of design > test > break > redesign > test > produce to embrace the faster and more efficient simulate > test > produce, right? Perhaps, but even if they have, that doesn't mean there can't be progress with some additional risk. Let's look at a couple of examples in the simulation world where some risk taking can have significant payoffs. First, transitioning from ANSYS Mechanical APDL to ANSYS Mechanical (Workbench). Most have already made the switch. I'll allow that there are still some applications that can be completely scripted within the old Mechanical ANSYS Parametric Design Language in an incredibly efficient manner.
However, if you are dealing with geometry that's even remotely complex, I'll wager that your simulation preparation time will be much faster using the improved CAD import and geometry manipulation capabilities within the ANSYS Workbench Mechanical workflow. And that's to say nothing of meshing: meshing is lights-out faster, more robust, and better quality in modern versions of Mechanical than anything we can do with the older Mechanical APDL mesher. Second, using ANSYS SpaceClaim to clean up, modify, create, and otherwise manipulate geometry. It doesn't matter what the source of the geometry is; SpaceClaim is an incredible tool for quickly making it usable for simulation as well as lots of other purposes. I recently used the SpaceClaim tools within ANSYS Discovery Live to combine assemblies from Inventor and SolidWorks into one model, seamlessly, and was able to move, rotate, orient, and modify the geometry to what I needed in a matter of minutes (see the Discovery Live image at the bottom). The cleanup tools are amazing as well. Third, looking into ANSYS Discovery Live. Most of us can benefit from quick feedback on design ideas and changes. The new Discovery Live tool makes that a reality. Currently in technology demonstration mode, it's free to download and try from ANSYS, Inc. through early 2018. I'm utterly amazed by how fast it can read in a complex assembly and start generating results for basic structural, CFD, and thermal simulations. What used to take weeks or months can now be done in a few minutes. Credits: Motorcycle geometry downloaded from GrabCAD, model by Shashikant Soren. Human figure geometry downloaded from GrabCAD, model by Jari Ikonen. Models combined and manipulated within ANSYS Discovery Live. George M. Low image from www.nasa.gov. I encourage you to take some risks for the sake of progress.
Posted on November 27, 2017, by: Alex Grishin
One of the key outputs from any random vibration analysis is the response of the object you are analyzing in terms of reaction forces. In the presentation below, Alex Grishin shares the theory behind getting accurate reaction forces and then shows how to do so in ANSYS Mechanical. PADT-ANSYS-Random-Vib-Reaction-Forces-2017_11_22-1
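As a small taste of the theory, the 1-sigma (RMS) reaction force comes from integrating the reaction-force response PSD over frequency. Here is a minimal sketch with made-up PSD values, not numbers from the presentation:

```python
import math

# RMS (1-sigma) level from a response PSD via trapezoidal integration.
# The PSD values below are made up for illustration, not taken from
# the presentation.
def rms_from_psd(freqs, psd):
    """freqs in Hz, psd in units^2/Hz; returns the 1-sigma RMS value."""
    area = 0.0
    for i in range(len(freqs) - 1):
        df = freqs[i + 1] - freqs[i]
        area += 0.5 * (psd[i] + psd[i + 1]) * df  # trapezoid panel
    return math.sqrt(area)

# Flat 10 N^2/Hz reaction-force PSD from 20 Hz to 200 Hz
force_rms = rms_from_psd([20.0, 50.0, 100.0, 200.0], [10.0] * 4)
# sqrt(10 * 180) ~= 42.4 N one-sigma reaction force
```

The hard part, which the presentation covers, is getting an accurate reaction-force PSD out of the solver in the first place; the integration step itself is this simple.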
As always, please contact PADT for your ANSYS simulation, training, and customization needs.
Posted on November 7, 2017, by: Michael Griesi
IEEE Day celebrates the first time in history, in 1884, when engineers worldwide and IEEE members gathered to share their technical ideas. Events were held around the world by 846 IEEE chapters this year. So, to celebrate, I attended a joint chapter meeting at The Museum of Flight in Seattle with technical presentations focused on "Smart Antennas for IoT and 5G." There were approximately 60 in attendance; assuming this was the average attendance globally, that works out to over 50,000 engineers celebrating IEEE Day worldwide! The Seattle seminar featured three speakers who spanned the theory, design, test, integration, and application of smart antennas. There was much discussion about the complexity and challenges of meeting the ambitious goals of 5G, which extend beyond mobile broadband data access. Some key objectives of 5G are to increase capacity, increase data rates, reduce latency, increase availability, and improve spectral and energy efficiency by 2020. A critical technology behind achieving these goals is beamforming antenna arrays, which were at the forefront of each presentation. Anil Kumar from Boeing focused on the application of mmWave technology on aircraft. Test data was used to analyze EM radiation leakage through coated and uncoated aircraft windows. However, since existing regulations don't consider the increased path loss associated with such high frequencies, the integration of 5G wireless applications may be restricted or delayed. Beyond this regulatory challenge, Anil discussed how multipath reflectors and absorbers will present significant challenges to successful integration inside the cabin. Although testing is always required for validation, designing the layout of the onboard transceivers may be impractical to optimize without an asymptotic EM simulation tool that can account for creeping waves, diffraction, and multi-bounce.
Considering the test and measurement perspective, Jari Vikstedt from ETS-Lindgren focused on the challenges of testing smart antenna systems. Smart or adaptive antenna systems will not likely perform the same in an anechoic chamber test as they would in real systems. Of particular difficulty, radiation null placement is just as critical as beam placement. This poses a difficult challenge for the number and location of probes in a test environment. Not only does a large number of probes become impractical, but there is also significant shadowing at mmWave frequencies, which can negatively impact the measurement. Furthermore, compact ranges can significantly impact testing, and line-of-sight measurements become particularly challenging. While not a purely test-oriented observation, this led to considering the challenge of tower hand-off. If a handset and tower use beamforming to maintain a link, it is difficult for an approaching tower to even sense the handset to negotiate the hand-off. In contrast, if the handset were continuously scanning, the approaching tower could be sensed and the hand-off negotiated before the link is jeopardized. The keynote speaker, who also traveled from Phoenix to Seattle, was ASU Professor Dr. Constantine Balanis. Dr. Balanis opened his presentation by making a distinction between conventional "dumb antennas" and "smart antennas." In reality, there are no smart antennas, only smart antenna systems. This is a critical point from an engineering perspective, since it highlights the complexity and challenge of designing modern communication systems. The focus of his presentation was using an adaptive system to steer null points, in addition to the beam, in an antenna array using a least mean square (LMS) algorithm. He began with a simple linear patch array with fixed uniform amplitude weights, since an analytic solution was practical and could be used to validate a simulation setup.
However, once the simulation results were verified for confidence, designing a more complex array with weighted amplitudes accompanying the element phase shift was only practical through simulation. While beam steering will create a device centric system by targeting individual users on massive multiple input multiple output (MIMO) networks, null steering can improve efficiency by minimizing interference to other devices. Whether spatial processing is truly the “last frontier in the battle for cellular system capacity”, 5G technology will most certainly usher in a new era of high capacity, high speed, efficient, and ubiquitous means of communication. If you would like to learn more about how PADT approaches antenna simulation, you can read about it here and contact us directly at firstname.lastname@example.org.
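The LMS-driven beam and null steering Dr. Balanis described can be sketched numerically. The following Python example is a minimal illustration, not the presented work: the array size, signal model, step size, and all numeric values are assumptions. An 8-element uniform linear array adapts its weights against a known reference signal, so the response toward the desired direction is preserved while the interferer direction is suppressed:

```python
import numpy as np

def steering_vector(n_elements, theta_deg, spacing=0.5):
    """Array response of a uniform linear array (element spacing in wavelengths)."""
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.radians(theta_deg)))

def lms_beamformer(n_elements=8, theta_signal=0.0, theta_interf=40.0,
                   n_snapshots=2000, mu=0.01, seed=0):
    """Complex LMS: w <- w + mu * conj(e) * x, with error e = d - w^H x."""
    rng = np.random.default_rng(seed)
    a_s = steering_vector(n_elements, theta_signal)   # desired direction
    a_i = steering_vector(n_elements, theta_interf)   # interferer direction
    w = np.zeros(n_elements, dtype=complex)
    for _ in range(n_snapshots):
        s = rng.standard_normal() + 1j * rng.standard_normal()  # desired symbol
        u = rng.standard_normal() + 1j * rng.standard_normal()  # interferer symbol
        noise = 0.1 * (rng.standard_normal(n_elements)
                       + 1j * rng.standard_normal(n_elements))
        x = s * a_s + u * a_i + noise      # snapshot received across the array
        e = s - np.vdot(w, x)              # error against the known reference
        w = w + mu * np.conj(e) * x        # LMS weight update
    return w, a_s, a_i

w, a_s, a_i = lms_beamformer()
print("gain toward desired   :", abs(np.vdot(w, a_s)))
print("gain toward interferer:", abs(np.vdot(w, a_i)))
```

After convergence, the array response stays near unity toward the desired direction while a null forms toward the interferer, which is the behavior adaptive null steering is meant to provide.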
Posted on October 23, 2017, by: Michael Griesi

ANSYS HFSS features an integrated “history-based modeler”. This means that an object’s final shape is dependent on each and every operation performed on that object. History-based modelers are a perfect choice for analysis since they naturally support parameterization for design exploration and optimization. However, editing imported solid 3D Mechanical CAD (or MCAD) models can sometimes be challenging with a history-based modeler since there are no imported parameters, the order of operations is important, and operational dependencies can sometimes lead to logic errors. Conversely, direct modelers are not bound by previous operations, which offers more freedom to edit geometry in any order without historic logic errors. This makes direct modelers a popular choice for CAD software but, since dependencies are not maintained, they are not typically the natural choice for parametric analysis. If only there were a way to leverage the best of both worlds… Well, with ANSYS, there is a way. As discussed in a previous blog post, since the release of ANSYS 18.1, ANSYS SpaceClaim Direct Modeler (SCDM) and the MCAD translator used to import geometry from third-party CAD tools are now packaged together. The post also covered a few simple procedures to import and prepare a solid model for electromagnetic analysis. However, this blog post will demonstrate how to define parameters in SCDM, directly link the model in SCDM to HFSS, and drive a parametric sweep from HFSS. This link unites the geometric flexibility of a direct modeler with the parametric flexibility of a history-based modeler. You can download a copy of this model here to follow along. If you need access to SCDM, you can contact us at email@example.com. It’s also worth noting that the processes discussed throughout this article work the same for HFSS-IE, Q3D, and Maxwell designs as well. To begin, open ANSYS SpaceClaim and select File > Open to import the step file.
Split the patch antenna and reference plane from the dielectric. Click here for steps on splitting geometry. Notice that objects can be renamed and colors can be changed under the Display tab.
- Create local variables in HFSS that can be used for both local and linked geometry. For example, create a variable in HFSS for traceWidth = 3mm (which was the previously noted width). Define SCDM_traceWidth = (traceWidth-3mm)/2. Now the port width can scale with the trace width.
- Link to multiple SCDM projects. Either move and rotate parts as needed or create a separate coordinate system for each component. For example, link an SMA end connector to the same HFSS project to analyze both components. Notice that each component has variables and the substrate thickness changes both SCDM projects.
- Design other objects in the native HFSS history-based modeler that are dependent on the SCDM design variables. For example, the void in an enclosure could be a function of SCDM_dielectricHeight. Notice that the enclosure void is dependent on the SCDM dielectric height.
Posted on October 5, 2017, by: Alex Grishin

ANSYS Mechanical is great at applying tabular loads that vary with a single independent variable, say time or Z. But what if you want a tabular load that varies in multiple directions and time? In part one of this series, I covered how you can use the External Data tool to do just that. In this second part, I show how you can alternatively use APDL commands added to your ANSYS Mechanical model to define the tabular loading. PADT-ANSYS-Tabular-loads-2
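To give a flavor of the APDL approach covered in the attached presentation, here is a minimal sketch of a multi-dimensional tabular load defined in a command snippet with *DIM. The table name, dimensions, and values below are hypothetical, purely for illustration: the pressure varies with both Z and TIME, and ANSYS interpolates between the defined points.

! Hypothetical pressure table: rows indexed by Z, columns by TIME
*DIM,MYPRES,TABLE,3,2,1,Z,TIME
! Z values (index column 0)
MYPRES(1,0)=0.0
MYPRES(2,0)=0.5
MYPRES(3,0)=1.0
! TIME values (index row 0)
MYPRES(0,1)=0.0
MYPRES(0,2)=1.0
! Pressure values at each (Z,TIME) pair
MYPRES(1,1)=100
MYPRES(2,1)=150
MYPRES(3,1)=200
MYPRES(1,2)=200
MYPRES(2,2)=300
MYPRES(3,2)=400
! Apply the table as a pressure on the selected faces
SF,ALL,PRES,%MYPRES%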
Posted on September 12, 2017, by: Nicolas Jobert
Guest Blogger
We are pleased to publish this very useful post from Nicolas Jobert of Synchrotron SOLEIL in France. Nicolas is a Mechanical Engineer with more than 20 years of experience using ANSYS for engineering design and analysis in academia and industry. He is currently Senior Mechanical Engineer at Synchrotron SOLEIL, the French synchrotron radiation facility. He also teaches various courses on design and validation in the fields of structural mechanics and optomechanics. He graduated from the Ecole Centrale Marseille, France, and is a EUSPEN member.
As Time Goes By
Do you remember the moment you first heard about ANSYS introducing APDL Math? I, for one, do, and I have a vivid memory of thinking “Wow, that can be a powerful tool. I’m dead sure it won’t be long before an opportunity arises and I’ll start developing pretty useful procedures and tools.” Well, that was half a decade ago, and to my great shame, nothing quite like that has happened so far. The reasons for this are obvious and probably the same for most of us: lack of time and energy to learn yet another set of commands, fear of the ever-present risk of developing procedures that are eventually rejected as nonstandard use of the software and therefore error-prone (those of you working under quality assurance, raise your hands!), and anxiety about working directly under the hood on real projects with little means to double-check your results, to name a few. That said, an opportunity finally presented itself, and before I knew it, I was up and running with APDL Math. The objective of this article is to showcase some simple yet insightful applications and hopefully remove the reservations one may have about using these additional capabilities. For the sake of demonstration, I will begin with a somewhat uncommon analysis tool that should nevertheless ring a bell for most of you: modal analysis (and yes, the pun is intended). You may wonder what the purpose is of using APDL Math to perform a task that has been a standard ANSYS capability since, say, revision 2.0, 40 years ago. But wait, did I mention that by modal analysis, I mean thermal modal analysis?
Thermal Modal Analysis at a Glance
Although scarcely used, thermal modal analysis is both an analysis and a design validation tool, mostly used in the fields of precision engineering and/or optomechanics. Specifically, it can serve a number of purposes, such as:
Q: Will my system settle fast enough to fulfill design requirements? A: Compute the system Thermal Time Constants
Q: Where should I place sensors to get information-rich, robust measurements? A: Compute thermal modes and place your sensors away from large thermal gradients
Q: Can I develop a reduced model to solve large transient thermal-mechanical problems? A: A modal basis allows for the construction of such a reduced problem, effectively converting a high-order coupled system into a low-order, uncoupled set of equations.
Q: How do I develop a reduced-order state-space matrix representation of my thermal system (equivalent to the SPMWRITE command)? A: Modal analysis provides every result needed to build those matrices directly within ANSYS.

Although you might be only vaguely familiar with some or all of those topics, the idea behind this article is really to show that APDL Math does exactly what you need it to do: allow the user to efficiently address specific needs, with a minimal amount of additional work. Minimal? Let’s see what it looks like in reality, and you will soon enough be in a position to form your own opinion on the matter.
Thermal Modal Analysis using APDL Math
To begin with, it is worth underlining the similarities and differences between structural (vibration) modes and thermal modes. Mathematically, both look very much the same, i.e. modes are solutions of the dynamics equation in the absence of a forcing (external) term. The structural problem, [M]{ẍ} + [K]{x} = {0}, leads to the eigenproblem ([K] − λ[M]){φ} = {0}, where:
- [K] is the stiffness matrix
- [M] is the mass matrix
The thermal problem, [C]{Ṫ} + [K]{T} = {0}, leads to the analogous eigenproblem ([K] − λ[C]){φ} = {0}, where:
- [K] is the conductivity matrix
- [C] is the capacitance matrix
Because the structural equation is second order in time while the thermal equation is first order, the eigenvalues have different physical meanings:
- Structural: λ = ω², i.e. the square of a circular frequency
- Thermal: λ = 1/τ, i.e. the inverse of a time constant
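Because the eigensolver treats the problem as structural, extracted eigenvalues come back as pseudo-frequencies f, with λ = (2πf)²; recovering the thermal time constant is a one-line conversion. A small Python check of the relationship (the function names here are mine, for illustration only):

```python
import math

def tau_from_pseudo_frequency(f_hz):
    """Thermal time constant tau = 1/lambda, where the eigenvalue is
    reported as a pseudo-frequency f with lambda = (2*pi*f)**2."""
    return 1.0 / (2.0 * math.pi * f_hz) ** 2

def pseudo_frequency_from_tau(tau_s):
    """Inverse mapping: the pseudo-frequency corresponding to a time
    constant tau, useful for setting an extraction frequency bound."""
    return 1.0 / (2.0 * math.pi * math.sqrt(tau_s))

# Round trip: a 535.9 s time constant maps to a tiny pseudo-frequency and back
f = pseudo_frequency_from_tau(535.9)
print(tau_from_pseudo_frequency(f))
```

This is exactly the conversion the APDL listings below perform after *EIGEN.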
! Setup Model …
! Make ONE dummy transient solve
! to write stiffness and mass
! matrices to .FULL file
/SOLU
ANTYPE,TRANSIENT
TIME,1
WRFULL,1
SOLVE
! Get Stiffness and Mass
*SMAT,MatK,D,IMPORT,FULL,,STIFF
*SMAT,MatM,D,IMPORT,FULL,,MASS
! Eigenvalue Extraction
ANTYPE,MODAL
MODOPT,LANB,NRM,0,Fmax
*EIGEN,MatK,MatM,,EiV,MatPhi
! No need to convert eigenvalues
! to frequencies, ANSYS does
! it automatically
! Done!
! Setup Model …
! Make TWO dummy transient solves
! to separately write conductivity
! and capacitance matrices to .FULL file
/SOLU
ANTYPE,TRANSIENT
TIME,1
NSUB,1,1,1
TINTP,,,,1
WRFULL,1
! Zero out capacitance terms …
SOLVE
! Get Conductivity Matrix
*SMAT,MatK,D,IMPORT,FULL,Jobname.full,STIFF
! Restore capacitance and zero out
! conductivity terms …
SOLVE
! Get Capacitance Matrix
*SMAT,MatC,D,IMPORT,FULL,,STIFF
! Eigenvalue Extraction
ANTYPE,MODAL
MODOPT,LANB,NRM,0,1/(2*PI*SQRT(Tmin))
*EIGEN,MatK,MatC,,EiV,MatPhi
! Convert eigenvalues from frequency
! to thermal time constants
*DO,i,1,EiV_rowDim
  EiV(i)=1/(2*PI*EiV(i))**2
*ENDDO
! Done!
- Results from the eigensolution are stored in a vector (EiV) and a matrix (MatPhi), which need not be declared beforehand but are created when executing the *EIGEN command (no *DIM required).
- For each APDL Math entity, ANSYS automatically maintains variables named Param_rowDim and Param_colDim, hence removing the burden to keep track of dimensions.
But where on Earth is my eye candy?
Now that we have a procedure and results, we would like to be able to show this to the outside world (and to be honest, some graphical results would also help build confidence in the results). The additional task to do so is really minimal. What we need to do is simply put those numerical results back into the ANSYS database so that we can use all the conventional post-processing capabilities. This can be done using the appropriate POST1 command, essentially DNSOL. And, while we are at it, why not do a hardcopy to an image file? Here is the corresponding input.
! User should place all nodes with non-prescribed
! temperatures in a component named MyNodeComponent
! First, convert eigenvectors from solver to BCS ordering
*SMAT,Nod2Bcs,D,IMPORT,FULL,Jobname.full,NOD2BCS
*MULT,Nod2Bcs,TRAN,MatPhi,,MatPhi
! Then, read in the mapping vector to convert to user ordering
*VEC,MapForward,I,IMPORT,FULL,Jobname.full,FORWARD
! Put the results in the ANSYS database
/POST1
*DO,ind_mode,1,NRM
  CMSEL,S,MyNodeComponent
  curr_node=0
  *DO,i,1,ndinqr(0,13)
    curr_node=ndnext(curr_node)
    curr_temp=MatPhi(MapForward(curr_node),ind_mode)
    DNSOL,curr_node,TEMP,,curr_temp
  *ENDDO
  Tau=1/(2*PI*EiV(ind_mode))**2
  To=NINT(Tau*10)/10   ! round to 1 digit after the decimal point
  /TITLE,Mode #%ind_mode% - Tau=%To%s
  PLNSOL,TEMP
  ! Hardcopy to BMP file
  /IMAGE,SAVE,JobName_Mode%ind_mode%,bmp
*ENDDO

This way, modes can be displayed, or even written to a conventional .RTH file (using RAPPND), and used like any regular ANSYS solver result.
Nice, but an actual example wouldn’t hurt, would it?
Now you may wonder what the results look like in reality. To remain within the field of precision engineering, let’s use a support structure typically designed for high-stability positioning. From a structural point of view, it must have a high dynamic stiffness and a low total mass, so a Delta-shaped bracket is appropriate. Since we want the system to rapidly evacuate any heat load, we choose aluminum as the candidate material. We know from first principles that any applied disturbance will exponentially vanish and the system will return to its equilibrium state. Now, what will the time constants of this decay be? For the sake of simplicity, we restrict the analysis to a highly simplified 2D model of such a support. PLANE55 elements are used to model the structural part, while the heat sink is accounted for using SURF151. Boundary conditions are enforced using an extra node. After applying boundary conditions, we execute the modal solution to obtain, say, the first 8 modes.
| Index | Time Constant [s] | Comment |
| --- | --- | --- |
| 1 | 535.9 | Quasi-uniform temperature field (i.e. “rigid body” mode) |
| 2 | 32.1 | 1st order (one wavelength along perimeter) |
| 3 | 23.8 | 1st order (one wavelength along perimeter) |
| 4 | 8.1 | 2nd order (two wavelengths along perimeter) |
| 5 | 6.8 | 2nd order (two wavelengths along perimeter) |
| 6 | 3.5 | 3rd order (three wavelengths along perimeter) |
| 7 | 3.1 | 3rd order (three wavelengths along perimeter) |
| 8 | 2.2 | 4th order (four wavelengths along perimeter) |
When the *EIGEN command executes, ANSYS echoes the allocation of the output objects:

Allocate a Vector : EIV
Allocate a Dense Matrix : MatPhi

Please note that the solution has 227 DOFs whereas the entire problem has 228 DOFs. This is the consequence of having introduced the boundary condition as an enforced temperature on a node, whose DOF is therefore removed from the DOF set to be obtained by the solver.

Also, we might want to use the modal shape information to decide which locations are best suited to capture the entire temperature field on the structure. Without knowledge of the excitation source, one straightforward way to do so is to retain, for each mode, the node that has the largest amplitude. This is made even easier in this situation: since we have normalized each mode to have unit maximum amplitude, we just need to select nodes having a modal amplitude equal to 1 (or -1). On the figure below, each temperature sensor location is marked with a ‘TSm’ label, where m is the mode index. Doing so, we reach a pretty satisfactory distribution of sensor locations, completely consistent with intuition. In numerical terms, we can also check that the modal matrix [Φ]_sensors, i.e. the original full matrix restricted to the selected DOFs, has an excellent condition number.

But there are many other things we could do starting from this. For example, with additional information, such as the location and the frequency content of the temperature fluctuations, one could further restrict the set of needed temperature sensors by running a dummy transient analysis and choosing locations where the correlation between sensor readings is as low as possible (using *MOPER,,,CORR). Even better, one can estimate the thermally induced displacements and select the locations best suited to build an empirical model (typically AR or ARMA), allowing one to predict structural displacements induced by temperature fluctuations using just a couple of sensors. This in turn can be used to select control strategies, check modal controllability… all within ANSYS.
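The sensor-placement rule described above (keep, for each mode, the node with the largest modal amplitude, then check the condition number of the restricted modal matrix) is easy to prototype outside ANSYS. The Python sketch below uses a small synthetic modal matrix for illustration, not the bracket model’s actual modes:

```python
import numpy as np

def pick_sensor_nodes(phi):
    """phi: modal matrix (rows = nodes, columns = modes), each mode scaled
    to unit maximum amplitude. Keep, for each mode, the node with the largest
    absolute amplitude; report the condition number of the restricted matrix."""
    sensors = np.unique(np.argmax(np.abs(phi), axis=0))
    cond = np.linalg.cond(phi[sensors, :])
    return sensors, cond

# Synthetic 6-node, 3-mode example (values are illustrative only)
phi = np.array([[1.0, 0.2, 0.1],
                [0.8, 1.0, 0.3],
                [0.5, 0.7, 1.0],
                [0.3, 0.4, 0.8],
                [0.2, 0.1, 0.5],
                [0.1, 0.0, 0.2]])
sensors, cond = pick_sensor_nodes(phi)
print("sensor nodes:", sensors)
print("condition number of restricted modal matrix:", round(cond, 2))
```

A low condition number of the restricted matrix means the selected sensors can distinguish the retained modes from each other, which is the same check performed on [Φ]_sensors above.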
Conclusion
APDL Math was presented as an alternate route for users who need to include specialized steps in an otherwise standard FE process, and in my opinion it does just that. The benefits can be immense, and the learning curve is steep but short. As long as the user knows what he or she is doing, there is little chance of getting lost: after all, APDL Math comprises only 18 additional commands. What had hindered me so far was the necessity to account for internal, BCS, and user orderings, but it really is not a big deal, as seen from the above example. What is more, the possibility of storing the created results in the Mechanical APDL database (DNSOL and RAPPND are your friends!) provides every means to control your results and finally build confidence in your developments. And for those of us who prefer to stay within the Workbench environment, there is nothing preventing you from including APDL Math procedures in Workbench command snippets. This was just an introductory example; many other applications could be found. To name a few, just in the fields of precision engineering and/or opto-mechanics:
- Speed up transient thermal mechanical analyses
- Perform harmonic analysis of thermal models
- Virtual testing of physical setups, including real-time control systems (model-based)
- Modal testing, error localization, automated model updating