There are three main goals of the licensing changes in the latest release of ANSYS:
Deliver Ansys licensing using the FlexLM industry standard
Eliminate the Ansys licensing interconnect
Provide modular licensing options that are easier to understand
Finally – and this is the whopper (or Double Double if you’re an In-N-Out kind of analogy person) – this new licensing model eliminates the need to upgrade the Ansys License Manager with every software update. (Please pause for shock recovery.)
If you’re still shocked and would like to see a “shocked groundhog” compilation, check this out.
Why is this significant? Well, this was always a sticking point for our customers when upgrading from one version to the next.
Here’s how that usually plays out:
Engineers, eager to try out new features or get past software defects, download the software and install it on their workstations.
Surprise – the software throws an obscure licensing error.
The engineer notifies IT or their Ansys channel partner of the issue.
After a few calls, and maybe a screenshare or two, it’s determined that the license server needs to be upgraded.
The best-case scenario – IT or PADT support can get it installed in a few minutes and the engineer can be on their way.
The usual scenario – it takes a week to schedule downtime on the server and notify all stakeholders, and the engineer is left to simmer on medium-low until those important safeguards are handled.
What does this all mean?
Starting in January 2020, all new Ansys keys issued will be in the new format and will require upgrading to the 2020R1 license manager. This should be the last mandatory license server upgrade for a while.
Your Ansys channel partner will contact you ahead of your next renewal to discuss new license increments and any expected changes.
Your IT and Ansys support teams will be celebrating in the back office: this is the last mandatory Ansys License Manager upgrade for a while.
How to upgrade the Ansys License Manager
Download the latest license manager through the Ansys Customer Portal.
Make sure that you run the installer as an administrator for best results.
Make sure the license server is running and has the correct licenses queued:
Look for the green checkmark in the license management center window.
Start your application and make sure everything looks good.
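If you prefer to verify the server from a command line, the lmutil utility that ships with FlexNet (FlexLM) license managers can query server status directly. Here is a minimal sketch, assuming lmutil is on the PATH and your server (hostname "licserver" here is a placeholder) listens on the default Ansys FlexNet port 1055:

```python
# Minimal sketch: query a FlexNet (FlexLM) license server for status.
# Assumes lmutil is on the PATH; "licserver" and the default Ansys
# FlexNet port 1055 are placeholders -- substitute your own host/port.
import subprocess

def check_license_server(host: str = "licserver", port: int = 1055) -> bool:
    """Return True if lmstat reports the license server UP."""
    result = subprocess.run(
        ["lmutil", "lmstat", "-a", "-c", f"{port}@{host}"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    # FlexNet prints "license server UP" when the daemon is healthy.
    return "license server UP" in result.stdout

if __name__ == "__main__":
    check_license_server()
```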
This was a high-level flyover of the new Ansys licensing released with version 2020R1. For specifics, contact your PADT account manager or support@padtinc.com.
Part of the PADT core philosophy is to “Provide flexible solutions of higher quality in less time for less money”. That philosophy also applies to how we design and build PADT’s internal infrastructure, the tools we use, and the processes we adopt.
Among the growing pains of most engineering and simulation organizations are the constantly growing demands for storage capacity, data management and protection, and BOATLOADS of computing power. Sadly, PADT engineers have yet to develop near-infinite storage capacity (like DNA storage) or a working quantum computer that can run ANSYS, so we’re in the same boat as everyone else. We have been exploring our major pain points and the optimizations that can be made to our simulation environment (about 2,000 cores of CUBE Simulation Cluster appliances), as well as a structured, controlled solution for engineering data management.
As always, we started by looking inwards:
What skills are available, or learnable within PADT that can help address the need?
What tools & resources do we have access to?
What do we need to acquire or buy?
The immediate and most obvious answer was to utilize PADT’s internal pool of knowledge and an ANSYS product called Engineering Knowledge Manager (EKM for short).
ANSYS EKM is a tool purpose-built to provide a turnkey solution for simulation process and data management. This means that users can, through a single interface, manage the full simulation lifecycle. In the next few paragraphs, I will briefly go over some of the main features of ANSYS EKM, with a couple of screenshots for good measure.
Figure 1. ANSYS EKM Architecture
Interactive and batch submission to high-performance computing resources
For PADT, a very practical feature of EKM is the ability to easily interface with existing High-Performance Computing (HPC) infrastructure. By communicating through ANSYS Remote Solve Manager (RSM), EKM can effortlessly interface with most HPC schedulers and resource managers in both the Windows and Linux worlds.
This feature is huge because analysts can seamlessly upload their models into the secure EKM repository, submit jobs to the HPC cluster(s), monitor their runs, and upload their choice of results directly into EKM for review and post-processing.
EKM works hard to keep the interface familiar, flattening the learning curve and keeping things simple by making the batch submission menus as close as possible to the local ones.
Figure 2. Simplicity of Batch Jobs
At PADT, whenever we are debugging models or application behavior, we want an interactive session for the most control and visibility into the environment. With EKM, we can utilize the remote visualization and acceleration tool NICE DCV. DCV is launched from within EKM and provides access to an interactive desktop on a cluster target while also accelerating OpenGL graphics for visually intensive programs.
Figure 3. EKM & Nice-DCV provide a Full Featured & Accelerated Interactive Desktop
Storage and archiving of simulation data with built-in version control, data aging, and expiry
ANSYS EKM provides a comprehensive data management toolset derived from real-world needs. Features like highly granular access control, file and folder sharing and collaboration, version control, and check-out/check-in procedures are enabled and very powerful out of the box. Other, more advanced features such as data aging, auto-archiving, and automatic unpacking of zip files are also very useful.
Figure 4. Data Management Interface
The capabilities don’t end there: EKM integrates directly with ANSYS Workbench. Analysts can seamlessly access their EKM repository from Workbench, make modifications, and save directly back to EKM without switching applications. Check-outs are automatically checked back in, and new version numbers can be created automatically as well.
Metadata extraction
An extremely powerful piece of EKM is the metadata extraction engine baked into its core. EKM stores each file as two entries: the original file and its metadata. EKM goes beyond basic filename, date, and owner metadata and digs deeper, into the CAE-meaningful metadata of the model: setup, material properties, element counts, mesh type, and so on. It also extracts snapshots of the geometry and contours, and in some cases even provides a 3D model that can be directly manipulated by the user. A sample of ANSYS Fluent case metadata is shown below.
Figure 5. Metadata Sample
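To make the "file plus metadata record" pattern concrete, here is a toy sketch of the idea. This is not EKM's internal implementation — the field names are hypothetical and the "extraction" is deliberately trivial — but it shows how a searchable record can live alongside the original file:

```python
# Toy illustration of the "original file + metadata record" pattern
# described above. NOT EKM's implementation; field names are hypothetical.
import hashlib
import os
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    filename: str
    size_bytes: int
    checksum: str
    cae_fields: dict = field(default_factory=dict)  # e.g. solver, cell count

def extract_metadata(path: str) -> MetadataRecord:
    """Build a searchable metadata record alongside the original file."""
    with open(path, "rb") as f:
        data = f.read()
    record = MetadataRecord(
        filename=os.path.basename(path),
        size_bytes=len(data),
        checksum=hashlib.sha256(data).hexdigest(),
    )
    # A real extractor would parse solver-specific structure here:
    # mesh type, element counts, material properties, and so on.
    record.cae_fields["solver"] = "fluent" if path.endswith(".cas") else "unknown"
    return record
```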
Another feature of metadata extraction is the ability to take a quick look at simulation results, perform cut planes, pan, tilt, and zoom, as well as add comments and even capture and share snapshots, all without leaving the browser window.
Figure 6. Metadata Includes Interactive 3D Models
Metadata extraction is supported for ANSYS data types, and defining new data types for other CAE formats or in-house codes is straightforward.
A rich search capability that goes beyond filename, owner, and timestamps
How many times have I kicked myself for not using meaningful file names with versions and useful timestamps, then spent hours opening files for a quick peek only to find it isn’t the one I’m looking for? Too many.
CAE models have hundreds of variables and parameters embedded in them. Wouldn’t it be useful if someone came up with a system for storing CAE models where an analyst could simply type a search variable and it would search not only names and timestamps but actually dig into the guts of the model? Well, EKM is one such system. Analysts can search using thousands of field combinations covering everything from material properties to partitioning methods, boundary conditions to cell counts. You get the idea: it’s pretty awesome!
Figure 7. Advanced Search Option
Simulation process and workflow management
In EKM, administrators can create simulation workflows and lifecycles that manage all of the steps that go into creating, running, and concluding a simulation while ensuring that proper reviews and approvals are handled.
In addition to documenting and automating the workflows, some of the underlying work can be automated as well. As noted earlier, batch submission is baked right into EKM’s capabilities, and workflows can automatically launch batch submission scripts to a cluster, getting the simulation going as soon as the proper files are loaded and that stage in the process is released.
Figure 8. Workflow Tasks View
Workflow processes are defined in a simple XML format or created using a dedicated mini-tool, then uploaded into EKM ready to roll. Email notifications are preset and go out whenever progress is made on a workflow step or an approval is needed. A nifty process chart is also built into the EKM processes interface, showing the workflow structure and current progress.
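As an illustration of what such an XML workflow definition might look like, here is a hypothetical sketch built with Python's standard xml.etree. The element and attribute names are invented for illustration — they are not EKM's actual schema:

```python
# Hypothetical sketch of a simulation workflow definition. The element
# and attribute names are invented -- they are NOT EKM's real schema.
import xml.etree.ElementTree as ET

workflow = ET.Element("workflow", name="StructuralSignoff")
for step_name, assignee in [
    ("UploadModel", "analyst"),
    ("RunSimulation", "cluster"),      # could trigger a batch script
    ("ReviewResults", "lead-engineer"),
    ("Approve", "manager"),            # email notification on this step
]:
    ET.SubElement(workflow, "step", name=step_name, assignee=assignee)

print(ET.tostring(workflow, encoding="unicode"))
```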
Figure 9. Process Charts in EKM
Conclusion
In conclusion, ANSYS EKM is awesome!
(Seriously now.) PADT invested a lot of time and resources in implementing EKM, and in the coming months we will be transitioning all of our engineering knowledge into it. It is already integrated with our HPC cluster and will be our central repository for engineering data.
In this article, I have really only skimmed the surface of what EKM can do and what it already does for us here at PADT.
If you are interested in checking out ANSYS EKM, or have any questions or thoughts, please reach out to us with a comment or an email, or just give us a call.
In a previous post, I argued that engineers do magic (read it here). And to help them do their magic better, PADT Inc. introduced CoresOnDemand.com.
Among the magical skills engineers use in their daily awesomeness is the ability to bend the time fabric of the universe and hit almost impossible deadlines. It’s as if engineers work long hours, work from home, while commuting, and even at the coffee shop. Wait, is that what they actually do?
Among the myriad tools available for remote access and desktop redirection, one stands out with distinction. NICE Software developed a tool called Desktop Cloud Visualization (DCV for short). DCV has numerous advantages that we will get into shortly. The videos below give a general idea of what can be achieved with NICE DCV.
Here is a video from the people at NICE:
And here is one of two PADT employees using an iPhone to check their CFD results:
Advantages of Nice-DCV
The physical location of the cluster, workstation, or engineer becomes irrelevant
Because engineers have fast, efficient, and secure access to their workstations and clusters, they no longer need to be in the same office, or on the same network segment, to utilize the available compute resources. They can use NICE DCV to create a fast, efficient, encrypted connection to their resources to submit and monitor jobs and process results. DCV clients are supported on Windows, Linux, and iOS, and there is even a stand-alone Windows client that can be run on shared or public computers. In a recent live test, one of our engineers was traveling on a shuttle bus to a tiny ski town in Colorado; over the courtesy WiFi, he was able to check the status of his jobs and visualize some of the results.
A powerful laptop or remote workstation is no longer the only way to enable offsite work
There is no need for offsite engineers to lug around a giant laptop in order to efficiently launch and modify their designs or perform simulation runs. Users launch the DCV client, connect to their workstation or cluster, and are immediately given access to their desktop. No need to copy files, borrow licenses, or transfer data. Engineers don’t need to create copies of files and carry them around on laptops or external storage, which is an unnecessary security risk.
“If it ain’t broken don’t fix it!”
Every engineer uses ANSYS in their own special way. Some prefer the good old command line for everything, even when a flashy GUI option is available. Others are comfortable with the Windows-like GUI interface and would rather keep it that way. Nice-DCV lets both groups work exactly as they always have.
Opens the door for GUI-only users to utilize large cluster resources without a steep learning curve or specialized tools
Nice-DCV puts ANSYS on large HPC clusters within reach for everyone. Engineers can log into pre-configured environments with all of the variables needed for parallel ANSYS runs already defined. Users can have their favorite ANSYS software added to the desktop as shortcuts, or system admins can write small scripts or programs that serve as answer files for custom job scripts.
From 0-60 in about…10 Minutes
For an engineer with even a small amount of system administration skills, it takes about 10 minutes to install the Nice-DCV server and launch the first connection. It’s surprisingly simple and straightforward on both the server and the client side. The benefits of Nice-DCV are immediately realized in simplified cluster administration and peace of mind for both the engineers and the system admins.
PADT’s CoresOnDemand and Nice-DCV
The CoresOnDemand service that PADT introduced last year utilizes the Nice-DCV tool to simplify and enhance the user experience. If you are interested in a live demo of Nice-DCV or the CoresOnDemand environment, contact us by phone at 480-813-4884 or by email at cod@padtinc.com. For more information, please visit CoresOnDemand.com.
(Note: some of the social media posts had a typo in the title, that was my fault (Eric) not Ahmed’s…)
In the world of simulation there are two facts of life. First, the deadline of “yesterday would be good” is not uncommon. Funding deadlines, product roll-out dates, and unexpected project requirements are all reliable sources of last-minute changes. Engineers are required to do quality work and deliver reliable results with limited time and resources. In essence, to perform sorcery.
Second, the size and complexity of models can vary wildly. Anything from fasteners and gaskets to complete systems or structures can be in the pipeline. Engineers can be looking at any combination of hundreds of variables that impact the resources required for a successful simulation.
Required CPU cores, RAM per core, interconnect speeds, available disk space, operating system and ANSYS version all vary depending on the model files, simulation type, size, run-time and target date for the results.
Engineers usually do magic. But sometimes limited time, or resources that are out of reach, can delay on-time delivery of project tasks.
At PADT, We Can Help
PADT Inc. has been nostrils-deep in engineering services and simulation products for over 20 years. We know engineering, we know how to simulate it, and we know ANSYS very well. To address the challenges our customers are facing, PADT introduced CoresOnDemand to the engineering community in 2015.
CoresOnDemand combines our proven CUBE clusters, ANSYS simulation tools, and PADT’s experience and support into an on-demand simulation resource. By focusing on the specific needs of ANSYS users, CoresOnDemand was built to deliver performance and flexibility for the full range of applications. Specifics about the clusters and their configurations can be found at CoresOnDemand.com.
CoresOnDemand is a high-performance computing environment purpose-built to help customers address numerical simulation needs that require compute power that isn’t available in-house or is needed only on a temporary basis.
Call Us, We’re Nice
CoresOnDemand is a new service in the world of on-demand computing. Prospective customers just need to give us a call or send us an inquiry here to get all of their questions answered. The engineers behind CoresOnDemand have a deep understanding of the ANSYS tools and distributed computing, and are able to assess and properly size a compute environment that matches the needed resources.
Call us, we’re nice!
Two Halves of the Nutshell
The process for executing a lease on a CoresOnDemand cluster is quite straightforward. There are two parts to a lease:
Part 1: How many cores & how long is the lease for?
By working with the PADT engineers – and possibly benchmarking their models – customers can set a realistic estimate of how many cores are required and how long their models need to run on the CoresOnDemand clusters. Normally, leases are in one-week blocks, with incentives for longer or recurring lease requirements.
Clusters are leased in one-week blocks, but we’re flexible.
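To give a flavor of that sizing exercise, here is a back-of-envelope sketch using Amdahl's law: only the parallel fraction of a run speeds up as cores are added. The baseline runtime and serial fraction below are made-up placeholders; real estimates come from benchmarking your actual model:

```python
# Back-of-envelope core-count sizing using Amdahl's law. The inputs are
# made-up placeholders -- replace them with your own benchmark numbers.
def estimated_runtime_hours(single_core_hours: float,
                            cores: int,
                            serial_fraction: float = 0.05) -> float:
    """Amdahl's law: only the parallel fraction speeds up with cores."""
    return single_core_hours * (serial_fraction + (1 - serial_fraction) / cores)

baseline = 120.0  # hypothetical single-core runtime in hours
for n in (16, 32, 64, 80):
    print(f"{n:3d} cores -> ~{estimated_runtime_hours(baseline, n):6.1f} h")
```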
Part 2: How will ANSYS be licensed?
An ANSYS license is required in order to run in the CoresOnDemand environment. A license lease can be generated by contacting any ANSYS channel partner; PADT can generate license leases in Arizona, Colorado, New Mexico, Utah, and Nevada. Licenses can also be borrowed from the customer’s existing license pool.
An ANSYS license may be leased from an ANSYS channel partner or borrowed from the customer’s existing license pool.
Using the Cluster
Once the CoresOnDemand team has completed the cluster setup and user creation (a couple of hours in most cases), customers can log in and begin using the cluster. The CoresOnDemand clusters allow customers to use the connection method they are comfortable with. All connections to CoresOnDemand are encrypted and protected by a firewall and an isolated network environment.
Step 1: Transfer files to the cluster
Files can be transferred to the cluster using Secure Copy Protocol (SCP), which creates an encrypted tunnel for copying files. A graphical tool is also available for Windows users (and it’s free!). Larger files can also be loaded onto the cluster by sending a DVD, Blu-ray disc, or external storage device to PADT; the CoresOnDemand team will mount the volume and can assist in copying the data.
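For reference, here is a minimal sketch of the SCP route from any machine with a terminal, wrapped in Python. The username, hostname, and paths are placeholders for the credentials issued with your lease:

```python
# Minimal sketch: push model files to the cluster over SCP. The user,
# host, and paths are placeholders for the credentials of your lease.
import subprocess

def upload(local_path: str, user: str = "acme",
           host: str = "cod.example.com",
           remote_dir: str = "~/models") -> None:
    """Copy a file to the cluster through an encrypted SCP tunnel."""
    subprocess.run(["scp", local_path, f"{user}@{host}:{remote_dir}"],
                   check=True)

upload("bracket_assembly.cdb")
```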
Step 2: Connect to the cluster and start jobs
Customers can connect to the cluster through an SSH connection. This is the most basic interface, where users can launch interactive or batch processing jobs on the cluster. SSH is secure, fast, and very stable. The downside of SSH is that it has limited graphical capabilities.
Another option is to use the Nice Software Desktop Cloud Visualization (DCV) interface. DCV provides enhanced interactive 2D/3D access over a standard network. It enables users to access the cluster from anywhere, on virtually any device with a screen and an internet connection. The main advantage of DCV is the ability to start interactive ANSYS jobs and monitor them without the need for a continuous connection. For example, a user can connect from his laptop to launch the job and later use his iPad to monitor progress.
Figure 1. 12 Million cell model simulated on CoresOnDemand
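As a sketch of the plain-SSH route described above, here is how a batch ANSYS Fluent run might be kicked off remotely. The host, journal file, and core count are placeholders; the flags shown are Fluent's standard batch options (-g for no GUI, -t for processor count, -i for a journal file):

```python
# Sketch: launch a batch ANSYS Fluent run over SSH. Host, journal file,
# and core count are placeholders. Flags: -g (no GUI), -t8 (8 cores),
# -i (journal file); nohup keeps the job alive after we disconnect.
import subprocess

remote_cmd = "nohup fluent 3ddp -g -t8 -i run.jou > run.log 2>&1 &"
subprocess.run(["ssh", "acme@cod.example.com", remote_cmd], check=True)
```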
The CoresOnDemand environment also implements the Torque resource manager, where customers can submit multiple jobs to a queue and run them in sequence without any manual intervention, as in the sketch below.
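Here is a minimal sketch of that Torque route. The job name, resource requests, and solver command line are placeholders to adapt to your lease:

```python
# Sketch: queue a job through the Torque resource manager. The resource
# requests and solver command are placeholders to adapt to your lease.
import subprocess

PBS_SCRIPT = """#!/bin/bash
#PBS -N bracket_run
#PBS -l nodes=2:ppn=8
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
fluent 3ddp -g -t16 -i run.jou > run.log 2>&1
"""

with open("job.pbs", "w") as f:
    f.write(PBS_SCRIPT)

# qsub prints the new job ID; qstat can then be used to monitor it.
subprocess.run(["qsub", "job.pbs"], check=True)
```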
Customers can use SCP or ship external storage to get data on the cluster. SSH or DCV can be used to access the cluster. Batch, interactive or Torque scheduler can be used to submit and monitor jobs.
All Done?
Once the simulation runs are complete, customers usually choose one of two methods to transfer data back: downloading the results over the internet using SCP (mentioned earlier), or having external media shipped back (encrypted if needed).
After the customer receives the data and confirms that all useful data was recovered from the cluster, CoresOnDemand engineers re-image the cluster to remove all user data, accounts, and logs. This marks the end of the lease engagement, and customers can rest assured that CoresOnDemand is available to help…and it’s pretty fast too.
At the end of the lease customers can download their data or have it shipped on external media. The cluster is later re-imaged and all user data, accounts & logs are also deleted in preparation for the next customer.
In a recent press release, PADT Inc. announced the launch of CoresOnDemand.com. CoresOnDemand offers CUBE simulation clusters for customers’ ANSYS numerical simulation needs. The clusters are designed from the ground up for running ANSYS numerical simulation codes and are tested and proven to deliver performance results.
POWERFUL CLUSTER INFRASTRUCTURE
The current clusters available as part of the CoresOnDemand offering are:
1- CoresOnDemand – Paris:
An 80-core Intel-based cluster. Built on Intel Xeon E5-2667 v2 3.30GHz CPUs, the cluster utilizes a 56Gbps InfiniBand interconnect and runs a modified version of CentOS 6.6.
2- CoresOnDemand – Athena:
A 544-core AMD-based cluster. Built on AMD Opteron 6380 2.50GHz CPUs, the cluster utilizes a 40Gbps InfiniBand interconnect and runs a modified version of CentOS 6.6.
Five Key Differentiators
The things that make CoresOnDemand different from most other cloud computing providers are:
CoresOnDemand is a non-traditional cloud. It is not an instance-based cluster: there is no hypervisor or virtualization layer. Users know exactly what resources are assigned exclusively to them, every time. No layers, no emulation, no delay, and no surprises.
CoresOnDemand runs the standard software stack designed to make full use of hardware features and the interconnect. There are no layers between the hardware and the operating system.
CoresOnDemand utilizes hardware that is purpose-built and benchmarked to maximize the performance of simulation tools, instead of a general-purpose server on caffeine.
CoresOnDemand provides the ability to complete high-performance runs on specialized compute nodes and later perform post-processing on a node suited to post-processing.
CoresOnDemand is a way to lease compute nodes completely and exclusively for a specified duration, including software licenses, compute power, and the hardware interconnect.
CoresOnDemand is backed by over 20 years of PADT Inc. experience and engineering know-how. Looking at these differentiating features, it becomes apparent that the performance and flexibility of this solution are great advantages for addressing numerical simulation requirements of any type.
What goes into managing a Linux HPC (High Performance Computing) cluster?
There is an endless list of software, tools and configurations that are required or recommended for efficiently managing a shared HPC cluster environment.
A shared HPC cluster typically has many layers that together deliver a usable environment, one that doesn’t depend on users coordinating closely or on system administrators being superheroes of late-night patching and just-in-time recovery.
Figure 1. Typical layers of a shared HPC cluster.
For each layer in the diagram above there are numerous open-source and paid software tools to choose from. The thing to note is that it’s not just a matter of picking one: system administrators have to weigh user requirements, compatibility tweaks, and ease of implementation and use to come up with the perfect recipe (much like carrot cake). Once the choices have been made, users and system administrators have to train, learn, and start utilizing these tools.
HPC @ PADT Inc.
At PADT Inc. we have several Linux-based HPC clusters that are in high demand. Our clusters are based on the CUBE High Value Performance Computing (HVPC) systems and are designed to optimize the performance of numerical simulation software. We were facing several challenges common to building and maintaining HPC clusters, mainly in the areas of security, imaging and deployment, resource management, monitoring, and maintenance.
To solve these challenges there is an endless list of software tools and packages, both open-source and commercial. Each one comes with its own steep learning curve and a mounting investment of time to test and implement.
Enter – Bright Computing
After testing several tools, we came across Bright Computing’s Bright Cluster Manager (Bright CM). Bright CM eliminates the need for system administrators to manually install and configure the most common HPC cluster components. On top of that, it provides the majority of HPC software packages, tools, and libraries in its default software image.
A Bright CM cluster installation starts with an extremely useful wizard that asks all of the right questions while giving the user full control to customize the installation. With a notepad, a couple of hours, and a basic understanding of HPC clusters, you are ready to install your applications.
Figure 2. Installation Wizard
An all-knowing dashboard helps system admins master and monitor the cluster(s), or, if you prefer the CLI, the CM shell provides full functionality from the command line. From the dashboard, system admins can manage multiple clusters down to the finest details.
Figure 3. Cluster Management Interface.
An extensive cluster monitoring interface allows system admins, users, and key stakeholders to generate and view detailed reports on the different cluster components.
Figure 4. Cluster Monitoring Interface.
Bright CM has proven to be a valuable tool in managing and optimizing our HPC environment. For further information and a demo of Bright Cluster Manager please contact sales@padtinc.com.
Welcome to the PADT IT Department: Now Build Your Own PC
[Editor’s Note: Ahmed has been here a lot longer than 2 weeks, but we have been keeping him busy so he is just now finding the time to publish this.]
I have been working at PADT for a little over 2 weeks now. After taking the ceremonial office tour that left a fine white powder all over my shoes (it’s a PADT Inc. special treat), I was taken to meet my team: David Mastel – “My Boss” for short – the IT commander-in-chief at PADT Inc., and Sam Goff – the all-knowing systems administrator.
I was shown to a cubicle that reminded me of the shady computer “recycling” outfits you’d see in a news report highlighting the vast amounts of abandoned hardware, except there were no CRT (tube) screens or little children working as slave labor.
Sacred Tradition
This tradition started with Sam, then Manny, and now it was my turn to take this rite of passage. As part of the PADT IT department, I am required by sacred tradition to build my own desktop with my bare hands. Then I was handed a screwdriver.
My background is mixed and diverse, but most of it has one thing in common: we depended on pre-built servers, systems, and packages. Branded machines carry an embedded promise of reliability, support, and superiority over custom-built machines.
What most people don’t know about branded machines is that they carry two pretty heavy tariffs.
First, you are paying upfront for the support structure, development, R&D, and supply chains required to pump out thousands of machines.
Second, because these large companies are trying to maximize their margins, they will look for a proprietary, cost-effective configuration that will:
Most probably fail or become obsolete as close as possible to the 3-year “expected” lifespan of computers.
Lock users into buying any subsequent upgrade or spare part from them.
Long story short, the last time I fully built a desktop computer was back in college, when a 2GB hard disk was a technological breakthrough and we could only imagine how many MP3s we could store on it.
The Build
There were two computer cases on the ground. One resembled a 1990 Mercury Sable that was, at best, tolerable as a new car; the other looked more like a 1990 BMW 325ci: a little old, but carrying a heritage and the potential to be great once again.
So, with my obvious choice of case, I began to collect parts from the different bins and drawers, and I was immediately shocked at how “organized” this room really was. I picked up the following:
There are a few things I would have chosen differently, but they were either not available at the time of the build or ridiculous for a work desktop:
Replaced 2 drives with SSD disks to hold OS and applications
Explored a more powerful Nvidia card (not really required but desired)
So, after a couple of hours of fidgeting and checking manuals, this is what the build looks like.
(The case above was the first prototype ANSYS numerical simulation workstation in 2010. It has a special place in David’s heart.)
Now to the Good STUFF! – Benchmarking the rebuilt CUBE prototype
ANSYS R15.0.7 FEA Benchmarks
Below are the results for the v15sp5 benchmark running distributed parallel on 4 cores.
ANSYS R15.0.7 CFD Benchmarks
Below are the results for the aircraft_2m benchmark using parallel processing on 4 cores.
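For anyone wanting to run something similar, here is a hedged sketch of typical batch invocations for these two benchmarks. The input file names and the ansys150 launcher name are assumptions based on the v15.0 release; adapt paths and core counts to your own machine:

```python
# Sketch of typical batch invocations for the two benchmarks above.
# Input file names and the ansys150 launcher are assumptions based on
# the v15.0 release; adapt paths and core counts to your machine.
import subprocess

# ANSYS Mechanical APDL benchmark, distributed (-dis) on 4 cores (-np):
subprocess.run(["ansys150", "-b", "-dis", "-np", "4",
                "-i", "v15sp-5.dat", "-o", "v15sp-5.out"], check=True)

# ANSYS Fluent benchmark on 4 cores (-t4), no GUI (-g), journal-driven:
subprocess.run(["fluent", "3d", "-g", "-t4", "-i", "aircraft_2m.jou"],
               check=True)
```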
This machine is a really cool sleeper computer that is more than capable of handling whatever I throw at it.
The only thing that worries me: when Sam handed me the case to get started, David was trying – but failing – to hide a smile, which makes me feel there is something obviously wrong with my first build that I failed to catch. I guess I will just wait and see.