Greetings from the HPC numerical simulation proving grounds of PADT, Inc. in Tempe, Arizona. While benchmarking the very latest version of ANSYS® Mechanical™, I learned something significant, and I need to share it with you right now. As I gazed down at the data in the new solve.out files, I began to notice something. Yes, a change indeed: something was different, something had changed.
A brief pause for emphasis: for more on overall ANSYS® productivity and its amazing improvements, please read this post.
For this blog post, however, I am focusing on one HPC performance metric that is very important to me. It is one of the many HPC performance metrics I use when designing a balanced HPC server for engineering simulation. But wait, there is more! Please bear with me just a little longer, for very soon I will post even more juicy pieces of data garnered from these new ANSYS® benchmark solver files.
To recap, in all its bullet-pointed glory:
- For today, and just for today, we are focusing on a single performance metric:
- The time spent computing the solution!
- This 1.3x speedup in solve times was achieved on just one CUBE workstation, with just one click!
- Open ANSYS®, and while setting up your solve:
- Select, with just one click, either INTEL MPI or IBM Platform MPI.
- Next, run your test. Repeat as necessary using whichever MPI version you did not start with.
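For readers who launch distributed solves from the command line rather than the GUI, the same MPI switch can be made with the MAPDL launcher's `-mpi` option. A minimal sketch, assuming a 17.x-era executable name, core count, and input file name that are placeholders you would match to your own installation and model:

```shell
# Hypothetical sketch: build the launch line for a distributed (-dis) solve,
# switching MPI flavors via -mpi instead of the one-click GUI selection.
# Executable name (ansys172), -np 16, and the input file are placeholders.
build_solve_cmd() {
  # $1 = MPI flavor: intelmpi (Intel MPI) or pcmpi (IBM Platform MPI)
  echo "ansys172 -dis -np 16 -mpi $1 -b -i v17sp-5.dat -o solve_$1.out"
}

build_solve_cmd pcmpi      # prints the IBM Platform MPI launch line
build_solve_cmd intelmpi   # prints the Intel MPI launch line
```

Running the job once per flavor and comparing the two solve.out files reproduces the back-to-back test described above.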
The ANSYS® Mechanical™ Benchmark Description:
- Sparse solver, symmetric matrix, 6 million (6000k) DOFs, transient, nonlinear, structural analysis with 1 iteration
- GPU Accelerator or Co-Processor enabled for: NVIDIA and Intel Phi
- A large job for direct solvers; it should run in-core on machines with 128 GB or more of memory. A good test of processor flop speed when running in-core, and of I/O when running out-of-core.
CUBE ANSYS Numerical Simulation Appliance Used:
- CUBE w16i-v4
The ANSYS® Mechanical™ Benchmark Results:
Time Spent Computing the Solution:

| Cores | IBM Platform MPI (2016 CUBE w16i-v4) | INTEL MPI (2016 CUBE w16i-v4) | This speedup is …x faster! |
| --- | --- | --- | --- |
Wow! Using these latest 14nm INTEL® XEON® CPUs, phew, I have been forever changed! As you can see from the data above, with just one simple click, changing from IBM Platform MPI to INTEL MPI makes the time spent computing the solution faster: a 1.3x speedup!
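The speedup quoted above is simply the ratio of the two elapsed solve times reported in the solve.out files. A quick sketch of the arithmetic, using hypothetical placeholder times rather than measured results:

```shell
# Speedup = (slower MPI elapsed time) / (faster MPI elapsed time).
# The values below are hypothetical placeholders, not measured data;
# read the real elapsed times from the "time spent computing the solution"
# lines in your own solve.out files.
T_PCMPI=130.0      # elapsed solve time with IBM Platform MPI (s), placeholder
T_INTELMPI=100.0   # elapsed solve time with Intel MPI (s), placeholder

awk -v slow="$T_PCMPI" -v fast="$T_INTELMPI" \
    'BEGIN { printf "%.1fx speedup\n", slow / fast }'
# prints: 1.3x speedup
```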
In this specific benchmark example, with the latest ANSYS® Mechanical, achieving a 1.3x speedup without spending another penny is very wise indeed.
Disclaimer: Please check with your ANSYS software sales representative for the very latest solver updates and information, because model and MPI compatibility can vary. You may need to use MS-MPI, INTEL-MPI, or IBM Platform MPI for your distributed solving. If you are not sure, please contact the local ANSYS® corporate software sales office or ANSYS® software channel partner assigned specifically to you and/or your company.