This post is the twelfth installment in our review of all the different products and services PADT offers our customers. As we add more, they will be available here. As always, if you have any questions don’t hesitate to reach out to email@example.com or give us a call at 1-800-293-PADT.
The fact of the matter is, to be blunt, that building, maintaining, and optimizing systems for high-performance computing (HPC) is different from any other part of the IT world. That is why most companies engaged in simulation who use HPC struggle with their computers and networks. There is nothing wrong with their IT departments; they simply don’t have the manpower or the experience to support HPC systems. And that is why PADT offers IT support services tailored to the needs of simulation users. We bridge the gap between the unique needs of HPC for simulation and the customer’s existing IT infrastructure.
We started offering this service simply because customers asked us to. As part of our ANSYS technical support duties, we kept getting calls from customers who were just not getting good performance out of some very expensive computer hardware. After looking into it, we often found that they had the wrong hardware, it was configured incorrectly for HPC, or there was unnecessary overhead on the system. In each case, we got together with the customer’s IT department and the users to understand the problem and implement fixes.
PADT’s IT team can offer a variety of services, including, but not limited to:
Implementing data management software
Troubleshooting and debugging systems
High-performance network design and installation
Making things run faster
System design and configuration
Upgrading existing hardware
Installing software packages
Setting up queuing and monitoring tools
Focused on Performance
Most computer systems, and the IT infrastructure that supports them, in the commercial world are focused on security first, operating cost second, and performance third. And those are the right priorities. But in the HPC world, those same systems and infrastructure have to be focused on performance first, with robustness second. Security is still important, but you solve that problem by isolating the systems and controlling access. PADT’s IT team gets this, and because we run a large HPC infrastructure for our own simulation consulting business, we know how to set things up right and keep them tuned for performance.
A Partnership Between Users, IT, and PADT
Our deep knowledge of scientific computing and the hardware it runs on is only one part of our success in this area. The other is knowing how to be the translator between IT and the users of demanding numerical software packages. We understand why IT can or cannot open certain ports, and why the user needs those ports. We get the desire to establish a company-wide policy of remote drives in RAID 1. We also know that doing that kills HPC performance, and that starts the conversation on why users need local drives in RAID 0 for their number crunching.
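The RAID 1 versus RAID 0 trade-off above comes down to simple arithmetic. Here is a back-of-envelope sketch of it; the per-drive rate of 200 MB/s is an assumption for illustration, not a measured number, and real arrays never hit the ideal scaling shown here.

```python
# Back-of-envelope model of the RAID discussion above. Assumes ideal
# striping/mirroring; the 200 MB/s per-drive rate is made up.

def raid_throughput(n_drives, per_drive_mb_s, level):
    """Approximate sequential write throughput for a simple RAID level."""
    if level == "raid0":      # striping: work is spread across all drives
        return n_drives * per_drive_mb_s
    if level == "raid1":      # mirroring: writes limited to one drive's rate
        return per_drive_mb_s
    raise ValueError(f"unsupported level: {level}")

if __name__ == "__main__":
    drives, rate = 4, 200
    print(f"RAID 0, {drives} drives: ~{raid_throughput(drives, rate, 'raid0')} MB/s")
    print(f"RAID 1 mirror:           ~{raid_throughput(drives, rate, 'raid1')} MB/s")
```

The gap is why a blanket RAID 1 policy that is perfectly sensible for office file servers can quietly strangle a scratch disk that a solver is hammering with sequential I/O.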
The examples go on and on, and involve memory, network fabric, different versions of MPI for each solver, and much more. Oftentimes, just being able to explain these issues in terms everyone can understand is the greatest value PADT can add. And then our IT experts will be there with the customer’s IT experts at 11:30 PM installing those GPU cards and configuring them.
The cold hard reality is that companies spend large sums on advanced hardware, software, and infrastructure. Why not spend a little more to make sure you are getting the most from that sizeable investment? PADT is here to help, to work with your IT and users, to get the most out of the tools you have.
Part of the PADT core Philosophy is to “Provide flexible solutions of higher quality in less time for less money”. This part of the philosophy also applies to how we design and build PADT’s internal structure, tools we use, and processes we adopt.
Among the growing pains of most engineering and simulation organizations is the constantly growing demand for storage capacity, data management and protection, and BOATLOADS of computing power. Sadly, PADT engineers have yet to develop near-infinite storage capacity (like DNA storage) or a working quantum computer that can run ANSYS, so we’re in the same boat as everyone else. We have been exploring what our major pains are and what optimizations can be made to our simulation environment (about 2,000 cores of CUBE Simulation Cluster Appliances), as well as a structured, controlled solution for engineering data management.
As always we started by looking inwards:
What skills are available, or learnable within PADT that can help address the need?
What tools & resources do we have access to?
What do we need to acquire or buy?
The immediate and most obvious answer was to utilize PADT’s internal pool of knowledge and an ANSYS product called Engineering Knowledge Manager (EKM for short).
ANSYS EKM is a tool purpose-built to provide a turnkey solution for simulation process and simulation data management. This means that users can, through a single interface, manage the full simulation lifecycle. In the next few paragraphs, I will briefly go over some of the main features of ANSYS EKM, with a couple of screenshots for good measure.
Interactive and batch submission to high-performance computing resources
For PADT, a very practical feature of EKM is the ability to easily interface with existing High-Performance Computing (HPC) infrastructure. By communicating through ANSYS Remote Solve Manager (RSM), EKM can interface with most HPC schedulers and resource managers in both the Windows and Linux worlds.
This feature is huge because analysts can seamlessly upload their models into the secure EKM repository, submit jobs to the HPC cluster(s), monitor their runs, and upload their choice of results directly into EKM for review and post-processing.
EKM works hard to keep the interface familiar, flattening the learning curve by making the batch submission menus as close as possible to the local ones.
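EKM and RSM hide the scheduler details, but it can help to see what a batch submission looks like underneath. Here is a minimal sketch that builds the kind of script a scheduler consumes, assuming a Slurm cluster; the solver executable name, file names, and time limit are hypothetical placeholders, not something EKM itself emits.

```python
# Sketch of a scheduler batch script like the ones generated behind the
# scenes for a distributed solve. Assumes Slurm; the solver command
# (ansys150), input/output names, and time limit are placeholders.

def make_slurm_script(job_name, cores, input_file, output_file):
    """Build a Slurm batch script for a distributed MAPDL-style solve."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --ntasks={cores}",
        "#SBATCH --time=04:00:00",
        # -b batch mode, -dis distributed, -np core count, -i/-o files
        f"ansys150 -b -dis -np {cores} -i {input_file} -o {output_file}",
    ])

if __name__ == "__main__":
    print(make_slurm_script("v15sp5", 4, "v15sp5.dat", "v15sp5.out"))
```

The point of RSM is that the analyst never has to write or even see this file; the submission menu fills in the equivalent for whichever scheduler the cluster runs.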
At PADT, whenever we are debugging models or application behavior, we want an interactive session so we have the most control over, and visibility into, the environment. With EKM, we can utilize the remote visualization and acceleration tool NICE DCV. DCV is launched from within EKM and provides access to an interactive desktop on a cluster target while also accelerating OpenGL graphics for visually intensive programs.
Storage and archiving of simulation data with built-in version control, data aging, and expiry.
ANSYS EKM provides a comprehensive data management toolset derived from real-world needs. Features like highly granular access control, file and folder sharing and collaboration, version control, check-out and check-in procedures, and many more are powerful out of the box. Other advanced features, such as data aging, auto-archiving, and automatic unpacking of zip files, are also very useful.
The capabilities don’t end here as EKM integrates directly with ANSYS Workbench. Analysts can seamlessly access their EKM repository from Workbench to perform any modifications and directly save back to EKM without the need to switch applications. Check-outs are automatically checked back in and new version numbers can be created automatically as well.
An extremely powerful piece of EKM is the metadata extraction engine baked into its core. EKM stores files as two entries: the original file and its metadata. EKM goes beyond the basic filename, date, and owner metadata and digs deeper, into the CAE-meaningful metadata of the model: setup, material properties, element counts, mesh type, and so on. It also extracts snapshots of the geometry and contours, and in some cases even provides a 3D model that can be directly manipulated by the user. A sample of ANSYS Fluent case metadata is below.
Another benefit of metadata extraction is the ability to take a quick look at simulation results, perform cutplanes, pan, tilt, and zoom, as well as add comments and even capture and share snapshots, all without leaving the browser window.
Metadata extraction is supported for ANSYS data types, and defining new data types for other CAE formats or in-house codes is straightforward.
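To make the idea concrete, here is a toy extractor for an imaginary in-house text deck, illustrating the pattern of storing a file alongside its extracted metadata. The "KEY = value" deck format and the sidecar-file convention are assumptions for the sketch; EKM's own extractors and repository storage work differently.

```python
import json
import pathlib
import re

# Illustrative only: a toy extractor for an imaginary "KEY = value" text
# deck, showing the file-plus-metadata idea behind EKM-style extraction.

def extract_metadata(path):
    """Pull simple KEY = value pairs out of a text input deck."""
    meta = {}
    for line in pathlib.Path(path).read_text().splitlines():
        m = re.match(r"\s*(\w+)\s*=\s*(.+?)\s*$", line)
        if m:
            meta[m.group(1).lower()] = m.group(2)
    return meta

def write_sidecar(path):
    """Store the extracted metadata next to the original file as JSON."""
    meta = extract_metadata(path)
    sidecar = pathlib.Path(path).with_suffix(".meta.json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar
```

Once metadata lives in a structured form like this, everything downstream (search, previews, reports) can work from the small sidecar instead of opening the full model.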
A rich search capability that goes beyond filename, owner and timestamps.
How many times have I kicked myself for not using meaningful file names with versions and useful time stamps, then ended up spending hours opening files for a quick peek only to find each one isn’t the file I am looking for? Too many.
CAE models have hundreds of variables and parameters that are embedded in them. Wouldn’t it be useful if someone came up with a system to store CAE models where an analyst can simply type a search variable and it would search not only name and timestamps but actually dig into the guts of the model and search those? Well EKM is one such system. Analysts can search using thousands of field combinations that encompass everything from material properties to partitioning methods, boundary conditions to cell counts, you get the idea, it’s pretty awesome!
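Field-based search like this boils down to matching on extracted metadata instead of file names. The sketch below shows the concept with a tiny in-memory index; the model names and field names are made up for illustration and are not EKM's actual schema.

```python
# Toy version of field-based model search: match on metadata fields
# instead of file names. Index entries and field names are invented.

def search(index, **criteria):
    """Return names of models whose metadata matches every criterion."""
    return [name for name, meta in index.items()
            if all(meta.get(k) == v for k, v in criteria.items())]

if __name__ == "__main__":
    index = {
        "wing_v3.cas": {"solver": "fluent", "cells": 2_000_000, "turbulence": "k-epsilon"},
        "wing_v4.cas": {"solver": "fluent", "cells": 5_000_000, "turbulence": "sst"},
        "bracket.db":  {"solver": "mapdl",  "elements": 150_000},
    }
    print(search(index, solver="fluent", turbulence="sst"))  # -> ['wing_v4.cas']
```

Scale that idea up to thousands of fields extracted from every model in the repository and you get the "dig into the guts of the model" search described above.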
Simulation process and workflow management
In EKM, administrators can create simulation workflows and lifecycles that manage all of the different steps that go into creating, running, and concluding a simulation, while ensuring that proper reviews and approvals are handled.
In addition to documenting the workflows, some of the underlying work can be automated as well. Batch submission is baked right into EKM’s capabilities, and workflows can automatically launch batch submission scripts to a cluster, getting the simulation going as soon as the proper files are loaded and that stage in the process is released.
Workflow processes are defined in a simple XML format, or created using a dedicated mini-tool, and then uploaded into EKM ready to roll. Email notifications are preset and go out whenever progress is made on a step in the workflow or an approval is needed. A nifty process chart built into the EKM process interface shows the workflow structure and current progress.
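To give a feel for XML-defined workflows, here is a small sketch that parses a hypothetical workflow definition and lists its steps and approval gates. The element and attribute names here are invented for illustration; EKM's actual workflow schema differs.

```python
import xml.etree.ElementTree as ET

# Hypothetical workflow definition in the spirit of the XML-defined EKM
# processes described above. Element/attribute names are invented.
WORKFLOW_XML = """
<workflow name="standard_simulation">
  <step name="setup"   notify="analyst@example.com"/>
  <step name="review"  notify="lead@example.com" approval="true"/>
  <step name="solve"   notify="analyst@example.com"/>
  <step name="signoff" notify="manager@example.com" approval="true"/>
</workflow>
"""

def load_steps(xml_text):
    """Parse the workflow and return (step name, needs approval) pairs."""
    root = ET.fromstring(xml_text)
    return [(s.get("name"), s.get("approval") == "true")
            for s in root.findall("step")]

if __name__ == "__main__":
    for name, needs_approval in load_steps(WORKFLOW_XML):
        print(name, "(approval gate)" if needs_approval else "")
```

A definition like this is easy to version, review, and hand to a notification or submission engine, which is the appeal of keeping workflows in plain XML.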
In conclusion, ANSYS EKM is awesome!
(Serious now), PADT invested a lot of time and resources in implementing EKM and in the coming months, we will be transitioning all of our engineering knowledge into it. It is already integrated with our HPC cluster and will be our central repository for engineering data.
In this article, I have only skimmed the surface of what EKM can do and what it currently does for us here at PADT.
If you are interested in checking out ANSYS EKM or have any questions or thoughts please reach out to us with a comment, email or just give us a call.
Welcome to the PADT IT Department: Now Build Your Own PC
[Editors Note: Ahmed has been here a lot longer than 2 weeks, but we have been keeping him busy so he is just now finding the time to publish this. ]
I have been working at PADT for a little over two weeks now. After taking the ceremonial office tour that left a fine white powder all over my shoes (it’s a PADT Inc. special treat), I was taken to meet my team: David Mastel, “My Boss” for short, who is the IT commander-in-chief at PADT Inc., and Sam Goff, the all-knowing systems administrator.
I was shown to a cubicle that reminded me of the shady computer “recycling” outfits you’d see on a news report highlighting the vast amounts of abandoned hardware; except there were no CRT (tube) screens or little children working as slave labor.
This tradition started with Sam, then Manny, and now it was my turn to take this rite of passage. As part of the PADT IT department, I am required by sacred tradition to build my own desktop with my bare hands. Then I was handed a screwdriver.
My background is mixed and diverse, but it mostly has one thing in common: we usually depended on pre-built servers, systems, and packages. Branded machines carry an embedded promise of reliability, support, and superiority over custom-built machines.
What most people don’t know about branded machines is that they carry two pretty heavy tariffs.
First, you are paying upfront for the support structure, development, R&D, supply chains that are required to pump out thousands of machines.
Second, because these large companies are trying to maximize their margins, they will look for a proprietary, cost-effective configuration that will:
Most likely fail or become obsolete as close as possible to the 3-year “expected” lifespan of computers.
Lock users into buying any subsequent upgrade or spare part from them.
Long story short, the last time I fully built a desktop computer was back in college, when a 2GB hard disk was a technological breakthrough and we could only imagine how many MP3s we could store on it.
There were two computer cases on the ground. One resembled a 1990 Mercury Sable, at most tolerable even as a new car. The other looked more like a 1990 BMW 325ci: a little old, but carrying a heritage and the potential to be great once again.
So, with my obvious choice of case, I began to collect parts from the different bins and drawers, and I was immediately shocked at how “organized” this room really was. I picked up the following:
A few things I would have chosen differently, had they been available at the time of the build or not been ridiculous for a work desktop:
Replacing two drives with SSDs to hold the OS and applications
Exploring a more powerful Nvidia card (not really required, but desired)
So, after a couple of hours of fidgeting and checking manuals, this is what the build looks like.
(The case above was the first prototype ANSYS Numerical Simulation workstation in 2010. It has a special place in David’s Heart)
Now to the Good STUFF! – Benchmarking the rebuilt CUBE prototype
ANSYS R15.0.7 FEA Benchmarks
Below are the results for the v15sp5 benchmark running distributed parallel on 4 cores.
ANSYS R15.0.7 CFD Benchmarks
Below are the results for the aircraft_2m benchmark using parallel processing on 4 cores.
This machine is a really cool sleeper computer that is more than capable of handling whatever I throw at it.
The only thing that worries me is that when Sam handed me the case to get started, David was trying, but failing, to hide a smile, which makes me feel there is something obviously wrong with my first build that I failed to catch. I guess I will just wait and see.
“Launch, Leave & Forget” is a phrase first introduced in the 1960s. Basically, the US Government was developing missiles that, once fired, no longer needed to be guided or watched by the pilot. Before that, the fighter pilot directed the missile mostly by line of sight and calculated guesswork toward a target in the distance, and often would be shot down or would break away too early from guiding the launch vehicle. Hope and guesswork are not things we strive for when lives are at stake.
So I say all of that to say this: as it relates to virtual prototyping, Launch, Leave & Forget for numerical simulation is something I have been striving for at PADT, Inc., both internally and for our 1,800 unique customers who really need our help. We are passionate about empowering our customers to become comfortable, feel free to be creative, and be able to step back and let it go! Many of us have a unique and rewarding opportunity to work with customers from the point of design, or even from the first phone call, onward to virtual prototyping, product development, rapid manufacturing, and lastly to something you can bring into the physical world: a physical prototype that has already gone through 5,000 numerical simulations. Unlike the engineers of the 1960s, who would maybe get one, two, or three shots at a working prototype, I think it is amazing that a company can go through 5,000 different prototypes before finally introducing one into the real world.
At PADT I continue to look for new ways to Launch, Leave & Forget. One passion of mine is computers. I first started using a computer when I was nine years old, and I was programming in BASIC, creating complex little FOR/NEXT statements, before I was in seventh grade.

Let’s fast forward: I arrived at PADT in 2005. I was amazed at the small company I had landed at; creativity and innovation were bouncing off the ceiling. I had never seen anything like it! I was humbled on more than one occasion, as most of the ANSYS CFD analysts knew as much about computers as I did. No, not the menial IT tasks like networking, domain user creation, and backups. What the PADT CFD/FEA analysts communicated, sometimes loudly, was that their computers were slow! Humbled again, I would retort: but you have the fastest machine in the building. How could it be slow? Your machine is faster than our web server; in fact, it was going to be our new web server. At a stalemate, we would walk away, both wondering why the solve was so slow.

Over the years I observed numerous issues. I remember spending hours using the ANSYS numerical simulation software. It was new to me, and it was complicated! I would often knock on an analyst’s door and ask if they had a couple of minutes to show me how to run a simulation. For some of the programs I would have to ask two or three times: ANSYS FEA, ANSYS CFX, FLUENT, on and on, often using a round-robin approach because I didn’t want to inconvenience the analysts. Then, probably some early morning around 3 am, the various ANSYS programs and the hardware all clicked with me. I was off and running ANSYS benchmarks on my own. Freedom! Now I could experiment with the hardware configs. Armed with the ANSYS Fluent and ANSYS FEA benchmark suites, I wanted to make the numerical simulations run as fast as, or faster than, they ever imagined possible. I wanted to please these ANSYS guys. Why? Because I had never met anyone like them.
I wanted to give them the power they deserved.
“What is the secret sauce or recipe for creating an effective numerical simulation?”
This is a comment I hear often. It could be on a conference call with a new customer, or internally from our own ANSYS CFD and FEA analysts: “David, all I really care about is, when I click ‘Calculate Run’ within ANSYS, when is it going to complete?” Or, “How can we make this solver run faster?”
The secret sauce recipe? Have we signed an NDA yet? Just kidding. I have had the unique opportunity to observe not just ANSYS but other CFD/FEA codes running on compute hardware, learning better ways of optimizing hardware and software. Here is how a fairly typical process for architecting hardware for use with ANSYS software goes.
Getting Involved Early
When the sales guys let me, I am often involved at the very beginning of a qualifying lead opportunity. My favorite time to talk to a customer is when a new customer calls me directly at the office.
Nothing but the facts sir!
I have years’ worth of benchmarking data. Do your users have any? Quickly have them run one of the standard ANSYS benchmarks. Just one benchmark can reveal a wealth of information about their current IT infrastructure.
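One simple thing to pull out of a benchmark run is scaling: wall-clock time at a few core counts tells you quickly whether the hardware is delivering. Here is a minimal sketch of that calculation; the elapsed times in the example are made-up numbers, not measured results.

```python
# Quick scaling check from benchmark wall-clock times. The example
# timings below are hypothetical, not measured benchmark results.

def scaling(times):
    """Speedup and parallel efficiency relative to the smallest core count."""
    base_cores = min(times)
    base_time = times[base_cores]
    return {c: (base_time / t, (base_time / t) / (c / base_cores))
            for c, t in sorted(times.items())}

if __name__ == "__main__":
    elapsed = {1: 1200.0, 2: 650.0, 4: 360.0}   # seconds, hypothetical
    for cores, (speedup, eff) in scaling(elapsed).items():
        print(f"{cores} cores: speedup {speedup:.2f}x, efficiency {eff:.0%}")
```

Efficiency well below 100% at low core counts is exactly the kind of red flag that points at memory bandwidth, interconnect, or I/O problems rather than the solver itself.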
Get your IT team onboard early!
This is a huge challenge! In general here are a few roadblocks that smart IT people have in place:
IT MANAGER RULES 101
1) No talking to sales people.
2) No talking to sales people on the phone.
3) No talking to sales people via email.
4) No talking to sales people at seminars.
5) If your boss emails or calls and says, “Please talk to this sales person @vulture & hawk,” wait about a week. Then, if the boss emails back and says, “Did you talk to this salesperson yet?” pick up the phone and call sales rep @vulture & hawk.
What is this, a joke? Nope. Most IT groups operate like this. Many are understaffed and in constant fix-it mode. Most think along these lines: “I would appreciate it if you sat in my chair for one day. My phone constantly rings, so I don’t pick it up, or I let it go to voicemail (until the voicemail box fills up). Email constantly swoops in, so it goes to junk mail. Seminar invites and meet-and-greets keep coming in; nope, won’t go. Ultimately, I know you are going to try to sell me something.”
Who have they been talking to? Do they even know what ANSYS is? I have been humbled over the years when it comes to hardware. I seriously believed the fastest web server at that moment in time would make a fast numerical simulation server.
If I can get on the phone with another IT manager, 90% of the time the walls come down and we can talk our own language. What do they say to me? Well, I have had IT managers and directors tell me they would never buy a compute cluster or compute workstation from me: “Oh, our policy states that we only buy from big boy pants Computer, Inc.,” or “mom & pop shop #343,” or, the best one, “the owner’s nephew; he builds computers on the side.” They stand behind their walls of policy and circumstance. But at the end of the calls, they are normally asking us to send them a quote.
So, now what?
Well, do you really know your software? Have you spent hours running different hardware configurations of the same workstation? Observing the reads and writes of an eight-drive 600GB SAS3 15k RPM 12Gbps RAID 0 configuration? Is three drives for the OS and five drives for the solving array the best configuration for the hardware and software? Huh? What’s that? Oh boy…
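For anyone who wants to start observing those reads and writes themselves, here is a crude sequential-write probe in that spirit: time a large write and report MB/s. The block and total sizes are arbitrary choices, and a single pass like this is only a rough indicator next to a real I/O benchmark.

```python
import os
import tempfile
import time

# Crude sequential-write probe: time a large write and report MB/s.
# Sizes are arbitrary; this is a rough indicator, not a real benchmark.

def write_throughput(target_dir, total_mb=64, block_kb=1024):
    """Write total_mb of zeros in block_kb chunks and return MB/s."""
    path = os.path.join(target_dir, "scratch.bin")
    block = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    # Point this at the array you want to check, e.g. the solving array.
    print(f"~{write_throughput(tempfile.gettempdir()):.0f} MB/s")
```

Run it against the OS array and the solving array separately and the difference between a 3-drive and a 5-drive stripe shows up immediately.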