Some Revolutionary HPC Talk: 208 Cores + 896 GB < $60k, GPUs, & ANSYS Distributed

Over the last couple of weeks a lot has been going on in the area of High Performance Computing, or HPC, so we thought we would pull it all into one Focus posting and share some things we have learned, along with some advice, with the greater ANSYS user community.

There is a bit of a revolution going on on both the FEA and CFD simulation sides among users of ANSYS, Inc. products. For a while now, customers with large numbers of users and big, nasty problems to solve have been buying lots of parallel licenses and big monster clusters. The problems these firms, mostly in turbomachinery and aerospace, solve have been growing and growing, especially on the CFD side, but also in FEA for HFSS and ANSYS Mechanical/Mechanical APDL. That is where the revolution started.

But where it is gaining momentum, and where the greater impact on how simulation is used around the world is being seen, is with smaller customers: people with one to five seats of ANSYS products. In the past they were happy with their two “included” Mechanical shared-memory parallel tasks, or they might spring for three or four CFD parallel licenses. But as 4-, 6-, and 8-core CPUs become mainstream, even on the desktop, and as ANSYS delivers greater value from parallel, we are seeing more and more people invest in high performance computing. And they are getting a quick return on that investment.

Affordable, High-Value Hardware


208 Cores + 896 GB + 25 TB + InfiniBand + Mobile Rack = $58k = HOT DAMN!

Yes, this is a commercial for PADT’s CUBE machines (www.CUBE-HVPC.com). Even if you would rather be an ALGOR user than purchase hardware from a lowly ANSYS Channel Partner, keep reading. Even if you would rather go to an ANSYS meeting at HQ in the winter than brave asking your IT department if you can buy a machine not made by a major computer manufacturer, keep reading.

Because what we do with CUBE hardware is something you can do yourself, or that you can pressure your name brand hardware provider into doing.

We just got a very large CFD benchmark job for a customer: they want multiple runs on a piece of “rotating machinery” to generate performance curves. Lots of runs, and each run can take 4 or 5 days on one of our 32-core machines. So we put together a 208-core cluster and maxed out the RAM on each node to get just under 900 GB. Here are the details:

Cores: 208 total
    3 servers × 48 cores @ 2.3 GHz
    2 servers × 32 cores @ 3.0 GHz
RAM: 896 GB
    3 servers × 128 GB DDR3 1333 MHz ECC
    2 servers × 256 GB DDR3 1600 MHz ECC
Data array: ~25 TB
Interconnect: 5 × InfiniBand 40 Gbps QDR
InfiniBand switch: 8-port, 40 Gbps QDR
Mobile departmental cluster rack: 12U

All of this cost us around $58,000, adding up what we spent on the various parts over the past year or so. That much horsepower for $58,000. If you look at your hourly burden rate and the impact of schedule slip on project cost, spending $60k on hardware has a quick payback.

You do need to purchase parallel licenses. But if you go with this type of hardware configuration, what you save over a high-priced solution will go a long way toward paying for those licenses. Even if you do spend $150k on hardware, the payback on the hardware plus licenses is still pretty quick.
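To put numbers on that payback claim, here is a minimal sketch in Python. The burden rate and hours saved per week are made-up placeholders, not PADT data; plug in your own numbers.

```python
# Minimal payback estimate for an HPC purchase.
# burden_rate and hours_saved_per_week are assumed placeholder values.

hardware_cost = 58_000.0       # the cluster described above
license_cost = 0.0             # add your parallel license cost here
burden_rate = 120.0            # fully burdened cost, $/hour (assumed)
hours_saved_per_week = 20.0    # engineer/schedule hours recovered (assumed)

weekly_savings = burden_rate * hours_saved_per_week
payback_weeks = (hardware_cost + license_cost) / weekly_savings
print(f"Payback in roughly {payback_weeks:.0f} weeks")  # ~24 weeks here
```

With those assumed numbers the cluster pays for itself in under six months; a higher burden rate or more saved hours shortens that further.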

Now, this is old hardware (six months to a year old, which is dinosaur bones in this business). How much would a mini-cluster departmental server cost now, and what would it have inside?

Cores: 320 total
    5 servers × 64 cores @ 2.3 GHz
RAM: 2.56 TB
    5 servers × 512 GB DDR3
Data array: ~50 TB
Interconnect: 5 × InfiniBand 40 Gbps QDR
InfiniBand switch: 8-port, 40 Gbps QDR
Mobile departmental cluster rack: 12U

The cost? Around $80,000. That is $250 per core. Of course, you need big problems to take advantage of that many cores. If your problems are not that big, just drop the number of servers in the mini-cluster, and the price drops proportionally.

The same goes if you are a Mechanical user. The matrices in FEA just don’t scale in parallel the way CFD does, so a 300+ core machine won’t be 300 times faster; it might even be slower than, say, 32 cores. But the cost drop is the same, and the toy model below shows why the speedup flattens out.
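Here is a toy Amdahl’s-law model in Python. The serial fraction and per-core communication penalty are illustrative guesses, not measured ANSYS numbers.

```python
# Toy scaling model: Amdahl's law plus a per-core communication penalty.
# serial_frac and comm_cost are illustrative guesses, not measured data.

def speedup(cores, serial_frac=0.02, comm_cost=0.001):
    # Serial part + parallel part + overhead that grows with core count
    # (interconnect traffic, synchronization between domains).
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cores + comm_cost * cores)

for n in (1, 8, 32, 64, 128, 320):
    print(f"{n:4d} cores -> {speedup(n):5.1f}x")
# With these made-up numbers, speedup peaks near 32 cores (~12x) and then
# declines: 320 cores comes out slower than 8.
```

A solver with a smaller serial fraction and lighter communication overhead, which is typical of CFD, peaks much later, which is why the same 300-core box makes more sense for a CFD job than for a single Mechanical solve.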

Bottom line: hardware cost is now in a range where you will see payback quickly through increased productivity.

GPUs


NVIDIA’s Tesla GPU. We think they should have some ’80s-era supermodel draped over the front, like those Ferrari posters we had in our dorm rooms.

For you Mechanical/Mechanical APDL users, let’s talk GPUs.

We just got an NVIDIA Tesla C2075 GPU. We are not done testing, but our basic results show that no matter how we pair it with cores and solvers, it speeds things up: anywhere from 50% faster to 3 times faster, depending on the problem, shared vs. distributed memory, and how many cores we throw in with the GPU.

That is the fundamental problem with answering “How much faster?”: it depends a lot on the problem and the hardware, so you need to try it on your problems on your hardware. But we feel comfortable saying that if you buy an HPC Pack and run on 8 cores with a GPU, the GPU should roughly double your speed relative to running on 8 cores alone. On some problems it could do even better.

For some, that is a disappointment: “The GPU has hundreds of processors, why isn’t it 10 or 50 times faster?” Well, breaking the problem up and getting it running on all of those processors takes time. Still, twice as fast for $2,000 to $3,000 is a bargain. We don’t know what your burden rate is, but it doesn’t take many hours of saved run time to recover that cost. And there is no additional license cost, because a GPU license comes with the HPC Pack.
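As a sanity check on that claim, here is the break-even arithmetic; the card price and burden rate are assumptions, not quotes.

```python
# Break-even run time for a GPU purchase.
# gpu_cost and burden_rate are assumed placeholder values.

gpu_cost = 2_500.0    # mid-range Tesla-class card (assumed)
burden_rate = 120.0   # fully burdened cost, $/hour (assumed)

breakeven_hours = gpu_cost / burden_rate
print(f"Card pays for itself after ~{breakeven_hours:.0f} saved hours")  # ~21
```

If the GPU really does halve your solve times, an analyst running a few long jobs a week hits that break-even point within a month or two.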

Plus, at R14 the solver supports a GPU with Distributed ANSYS, so there are even more improvements. Add to that support for the unsymmetric and damped eigensolvers, plus general GPU performance increases at R14.

PADT’s advice? If you are running ANSYS Mechanical or Mechanical APDL, get an HPC Pack and a GPU along with a 12-core machine with gobs of RAM (PADT’s 12-core AMD system with 64 GB of RAM and 3 TB of disk costs only $6,300 without the GPU, $8,500 with it). You can solve on 8 cores and play Angry Birds on the remaining 4.

Distributed ANSYS

For a long time, many users out there avoided Distributed ANSYS. It was hard to install, and unless you had the right problem you didn’t see much benefit in the early releases. Shared Memory Parallel, or SMP, was dirt easy: get an HPC license, tell it to run on 8 cores, and go.

Well, at R14 of ANSYS Mechanical APDL it is time to go distributed. First off, they made the install much easier; to be honest, we found the install was the biggest deterrent for many small-company users.

Second, at R14 a lot more features are supported in Distributed ANSYS. This has been improving for some time, so most of what people use is now supported. At this release they added the subspace eigensolver, the TRANS and INFIN element families, the PLANE121/230 electrostatic elements, and SURF251/252.

Some “issues” have been fixed, like restart robustness, and you now have control over when and how multiple restart files are combined after the solve.

All in all, if you have R14, you are solving mechanical problems, and you have an HPC Pack, you should be using Distributed ANSYS most of the time.
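For reference, here is a minimal sketch of kicking off a Distributed ANSYS batch solve, assuming an R14 install with the ansys140 launcher on the PATH; the input and output file names are placeholders, and the Python wrapper is just a convenient place to script parameter sweeps.

```python
# Minimal sketch: launch a Distributed ANSYS (MAPDL) batch solve.
# Assumes "ansys140" (R14) is on the PATH; file names are placeholders.
import subprocess

cmd = [
    "ansys140",
    "-b",               # batch mode
    "-dis",             # Distributed ANSYS instead of shared-memory
    "-np", "8",         # core count; 8 matches one HPC Pack
    "-i", "model.dat",  # input deck (placeholder)
    "-o", "model.out",  # output listing (placeholder)
]
subprocess.run(cmd, check=True)
```

The same flags work from a plain command prompt if you would rather skip Python entirely.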

Conclusions

We get a ton of questions from people about what they should buy and how much. Every situation is different, but for small and medium-sized companies the HPC revolution is here, and everyone should be budgeting to take advantage of it:

    • At least one HPC Pack
    • Some new, faster/cheaper multicore hardware (CUBE, anyone?)
    • A GPU

STOP! I know you were scrolling down to the comments section to write some tirade about how ANSYS, Inc. overcharges for parallel, how that is morally equivalent to drowning puppies, and how much more reasonable competitor A, B, or C is on parallel costs. Let me save you the time.

HPC delivers huge value: big productivity improvements. And it does not write itself. It is not a trivial enhancement to existing software. Scores of developers go into the solver code and implement the complex logic that delivers efficiency across older hardware, shared memory, distributed memory, and GPUs. It has to be written, tested, fixed, tested again, and back and forth every release. That value is real, and there is a cost associated with it.

And the competitors’ pricing model? The only thing they can do to differentiate themselves is to charge nothing or very little. They have not put in the effort, or do not have the expertise, to deliver the kind of parallel performance that the ANSYS, Inc. solvers do, so they give parallel away to compete. Trust me, they are not being nice because they like you; they have the same business drivers as ANSYS, Inc. They price the way they do because they did not incur as much cost, and they know that if they charged for it you would have no reason to use their solvers.

ANSYS users of the world, unite! Load your multicore hardware with HPC Packs, feed it with a GPU, and join the revolution!
