NICE Desktop Cloud Visualization


In a previous post I argued that engineers do magic (read it here). And to help them do their magic better, PADT Inc. introduced CoresOnDemand.com.

Among the magical skills engineers use in their daily awesomeness is their ability to bend the time fabric of the universe and deliver on almost impossible deadlines. It’s as if engineers work long hours, work from home, work while commuting and even work at the coffee shop. Wait, is that what they actually do?

Among the myriad of tools available for remote access and desktop redirection, one stands out with distinction: Desktop Cloud Visualization (DCV for short), developed by NICE Software. DCV has numerous advantages that we will get into shortly. The videos below give a general idea of what can be achieved with NICE-DCV.

Here is a video from the people at NICE:

And here is one of two PADT employees using an iPhone to check their CFD results:

Advantages of NICE-DCV

The physical location of the cluster, workstation or engineer becomes irrelevant

Because engineers have fast, efficient and secure access to their workstations and clusters, they no longer need to be in the same office, or even on the same network segment, to utilize the available compute resources. They can use NICE-DCV to create a fast, efficient and encrypted connection to those resources to submit jobs, monitor them and process results. DCV clients are supported on Windows, Linux and iOS, and there is even a stand-alone Windows client that can be run on shared or public computers. In a recent live test, one of our engineers was travelling on a shuttle bus to a tiny ski town in Colorado; over the courtesy Wi-Fi he was able to connect, check the status of his jobs and visualize some of the results.


A powerful laptop or remote workstation is no longer the only way to enable offsite work

There is no need for offsite engineers to lug around a giant laptop in order to efficiently launch and modify their designs or perform simulation runs. Users launch the DCV client, connect to their workstation or cluster and are immediately given access to their desktop. There is no need to copy files, borrow licenses or transfer data, and engineers don’t have to carry copies of files around on laptops or external storage, which is an unnecessary security risk.


“If it ain’t broke, don’t fix it!”

Every engineer uses ANSYS in their own special way. Some prefer the good old command line for everything, even when a flashy GUI option is available. Others are comfortable with the Windows-like GUI interface and would rather not change a workflow that already works. Because NICE-DCV presents the full remote desktop, engineers can keep working exactly the way they always have.


Opens the door for GUI-only users to utilize large cluster resources without a steep learning curve or specialized tools.

NICE-DCV puts the use of ANSYS on large HPC clusters within reach of everyone. Engineers can log into pre-configured environments with all of the variables needed for parallel ANSYS runs already defined. Users can have their favorite ANSYS software added to the desktop as shortcuts, or system admins can write small scripts or programs that act as an answer file for custom job scripts, as in the sketch below.
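For illustration, here is a minimal Python sketch of that kind of “answer file” launcher. The executable name (ansys150), the default core count and the file names are assumptions for the example, not PADT’s actual scripts; a real cluster would point them at its own install and environment.

```python
#!/usr/bin/env python3
"""Minimal sketch of an "answer file" style launcher for a distributed
ANSYS Mechanical APDL run. Executable name, core count and file names
are assumptions for illustration only."""
import subprocess

def launch_mapdl(input_file, output_file, cores=16, ansys_bin="ansys150"):
    # Typical Mechanical APDL batch flags: -b batch mode, -dis distributed
    # solve, -np number of cores, -i/-o input and output files.
    cmd = [ansys_bin, "-b", "-dis", "-np", str(cores),
           "-i", input_file, "-o", output_file]
    return subprocess.call(cmd)

if __name__ == "__main__":
    # Ask only the questions an engineer cares about; paths, MPI settings
    # and license variables are assumed to be pre-configured on the cluster.
    inp = input("Input file: ").strip()
    cores = int(input("Number of cores [16]: ") or 16)
    raise SystemExit(launch_mapdl(inp, inp + ".out", cores))
```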

From 0-60 in about…10 Minutes

For an engineer with even a modest amount of system administration skill, it takes about 10 minutes to install the NICE-DCV server and launch the first connection. It’s surprisingly simple and straightforward on both the server and the client side, and the benefits are realized immediately: simplified cluster administration and peace of mind for both the engineers and the system admins.

PADT’s CoresOnDemand and NICE-DCV

The CoresOnDemand service that PADT introduced last year utilizes NICE-DCV to simplify and enhance the user experience. If you are interested in a live demo of NICE-DCV or the CoresOnDemand environment, contact us by phone at 480-813-4884 or by email at cod@padtinc.com. For more information, please visit CoresOnDemand.com.

(Note: some of the social media posts had a typo in the title; that was my fault (Eric), not Ahmed’s…)

CoresOnDemand: Helping Engineers Do Their Magic

Engineers Do Magic

In the world of simulation there are two facts of life. First, the deadline of “yesterday would be good” is not too uncommon. Funding deadlines, product roll-out dates and unexpected project requirements are all reliable sources of last-minute changes. Engineers are required to do quality work and deliver reliable results with limited time and resources. In essence, to perform sorcery.


Second, the size and complexity of models can vary wildly. Anything from fasteners and gaskets to complete systems or structures can be in the pipeline. Engineers can be looking at any combination of hundreds of variables that impact the resources required for a successful simulation.

Required CPU cores, RAM per core, interconnect speeds, available disk space, operating system and ANSYS version all vary depending on the model files, simulation type, size, run-time and target date for the results.

Engineers usually do magic, but sometimes limited time or out-of-reach resources can delay the on-time delivery of project tasks.

At PADT, We Can Help

PADT Inc. has been nostrils deep in engineering services and simulation products for over 20 years. We know engineering, we know how to simulate engineering and we know ANSYS very well. To address the challenges our customers are facing, in 2015 PADT introduced CoresOnDemand to the engineering community.


CoresOnDemand offers the combination of our proven CUBE cluster, ANSYS simulation tools and PADT’s experience and support as an on-demand simulation resource. By focusing on the specific needs of ANSYS users, CoresOnDemand was built to deliver performance and flexibility for the full range of applications. Specifics about the clusters and their configurations can be found at CoresOnDemand.com.

CoresOnDemand is a high-performance computing environment purpose-built to help customers address numerical simulation needs that call for compute power they don’t have in-house, or only need on a temporary basis.

Call Us, We’re Nice

CoresOnDemand is a new service in the world of on-demand computing. Prospective customers just need to give us a call or send us an inquiry here to get all of their questions answered. The engineers behind CoresOnDemand have a deep understanding of the ANSYS tools and distributed computing and are able to assess and properly size a compute environment that matches the needed resources.

Call us, we’re nice!

Two Halves of the Nutshell

The process for executing a lease on a CoresOnDemand cluster is quite straightforward. There are two parts to a lease:

Part 1: How many cores and how long is the lease for?

By working with the PADT engineers – and possibly benchmarking their models – customers can set a realistic estimate of how many cores are required and how long their models need to run on the CoresOnDemand clusters. Normally, leases are in one-week blocks, with incentives for longer or recurring lease requirements.

Clusters are leased in one-week blocks, but we’re flexible.

Part 2: How will ANSYS be licensed?

An ANSYS license is required to run in the CoresOnDemand environment. A license lease can be generated by contacting any ANSYS channel partner. PADT can generate license leases in Arizona, Colorado, New Mexico, Utah and Nevada. Licenses can also be borrowed from the customer’s existing license pool.

An ANSYS license may be leased from an ANSYS channel partner or borrowed from the customer’s existing license pool.

Using the Cluster

Once the CoresOnDemand team has completed the cluster setup and user creation (a couple of hours in most cases), customers can log in and begin using the cluster. The CoresOnDemand clusters allow customers to use the connection method they are comfortable with, and all connections to CoresOnDemand are encrypted and protected by a firewall and an isolated network environment.

Step 1: Transfer files to the cluster

Files can be transferred to the cluster using the Secure Copy Protocol (SCP), which creates an encrypted tunnel for copying files. A graphical tool is also available for Windows users (and it’s free!). Larger files can also be loaded onto the cluster manually by sending a DVD, Blu-ray disc or external storage device to PADT; the CoresOnDemand team will mount the volume and can assist in copying the data.
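For customers who prefer to script their transfers, below is a minimal sketch using the Python paramiko library, which rides the same encrypted SSH channel that SCP uses. The hostname, username, key path and file paths are placeholders, not actual CoresOnDemand details.

```python
#!/usr/bin/env python3
"""Sketch of an encrypted file upload to the cluster over SSH/SFTP.
Hostname, username, key and paths below are placeholders."""
import os
import paramiko

def upload(local_path, remote_path,
           host="cluster.example.com", user="engineer",
           key_file="~/.ssh/id_rsa"):
    client = paramiko.SSHClient()
    client.load_system_host_keys()                      # trust known hosts only
    client.connect(host, username=user,
                   key_filename=os.path.expanduser(key_file))
    sftp = client.open_sftp()
    sftp.put(local_path, remote_path)                   # encrypted transfer
    sftp.close()
    client.close()

if __name__ == "__main__":
    upload("model.cas.gz", "/home/engineer/runs/model.cas.gz")
```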

Step 2: Connect to the cluster and start jobs

Customers can connect to the cluster through an SSH connection. This is the most basic interface, where users can launch interactive or batch processing jobs on the cluster. SSH is secure, fast and very stable; its downside is that it has limited graphical capabilities.
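As a rough sketch of what a scripted batch launch over SSH could look like, the snippet below opens an SSH session and starts a Fluent run in the background. The host, credentials and journal file name are placeholders, and the solver command line is just one common form (3ddp double-precision solver, -g for no GUI, -t for the process count).

```python
#!/usr/bin/env python3
"""Sketch of launching a batch solver run over SSH. Host, user, key and
the journal file name are placeholders for illustration."""
import os
import paramiko

HOST, USER, KEY = "cluster.example.com", "engineer", "~/.ssh/id_rsa"

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(HOST, username=USER, key_filename=os.path.expanduser(KEY))

# Run Fluent in batch: 3ddp solver, no GUI (-g), 16 processes (-t16),
# reading commands from a journal file. nohup keeps it running after logout.
cmd = "cd ~/runs && nohup fluent 3ddp -g -t16 -i run.jou > run.log 2>&1 &"
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode(), stderr.read().decode())
client.close()
```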

Another option is to use the NICE Software Desktop Cloud Visualization (DCV) interface. DCV provides enhanced interactive 2D/3D access over a standard network. It enables users to access the cluster from anywhere, on virtually any device with a screen and an internet connection. The main advantage of DCV is the ability to start interactive ANSYS jobs and monitor them without the need for a continuous connection. For example, a user can connect from a laptop to launch the job and later use an iPad to monitor its progress.


Figure 1. A 12-million-cell model simulated on CoresOnDemand.

The CoresOnDemand environment also has the Torque resource manager installed, so customers can submit multiple jobs to a queue and run them in sequence without any manual intervention.
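As an illustration of a queued run, the sketch below writes a simple Torque/PBS job script and submits it with qsub. The queue name, node counts, walltime and solver line are example values, not a prescribed CoresOnDemand configuration.

```python
#!/usr/bin/env python3
"""Sketch of a queued submission to the Torque resource manager.
Queue name, resources and the solver command are example values only."""
import subprocess
import textwrap

pbs_script = textwrap.dedent("""\
    #PBS -N cfd_run
    #PBS -q batch
    #PBS -l nodes=2:ppn=16
    #PBS -l walltime=24:00:00
    cd $PBS_O_WORKDIR
    # Example solver line; Torque supplies the host list in $PBS_NODEFILE.
    fluent 3ddp -g -t32 -cnf=$PBS_NODEFILE -i run.jou
    """)

with open("cfd_run.pbs", "w") as f:
    f.write(pbs_script)

# qsub prints the new job ID; queued jobs run in sequence as nodes free up.
subprocess.call(["qsub", "cfd_run.pbs"])
```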

Customers can use SCP or ship external storage to get data onto the cluster, connect over SSH or DCV, and submit and monitor jobs interactively, in batch or through the Torque scheduler.

All Done?

Once the simulation runs are completed, customers usually choose one of two methods to transfer data back: download the results over the internet using SCP (mentioned earlier), or have external media shipped back (the media can be encrypted if needed).

After the customer receives the data and confirms that all useful data was recovered from the cluster, CoresOnDemand engineers re-image the cluster to remove all user data, user accounts and logs. This marks the end of the lease engagement and customers can rest assured that CoresOnDemand is available to help…and it’s pretty fast too.

At the end of the lease customers can download their data or have it shipped on external media. The cluster is later re-imaged and all user data, accounts & logs are also deleted in preparation for the next customer.


Five Ways CoresOnDemand is Different than the Cloud

In a recent press release, PADT Inc. announced the launch of CoresOnDemand.com. CoresOnDemand offers CUBE simulation clusters for customers’ ANSYS numerical simulation needs. The clusters are designed from the ground up for running ANSYS numerical simulation codes and are tested and proven to deliver performance results.


Powerful Cluster Infrastructure

The current clusters available as part of the CoresOnDemand offering are:
1- CoresOnDemand – Paris:

An 80-core Intel-based cluster. Built on Intel Xeon E5-2667 v2 3.30 GHz CPUs, it utilizes a 56 Gbps InfiniBand interconnect and runs a modified version of CentOS 6.6.


2- CoresOnDemand – Athena:

A 544-core AMD-based cluster. Built on AMD Opteron 6380 2.50 GHz CPUs, it utilizes a 40 Gbps InfiniBand interconnect and runs a modified version of CentOS 6.6.


Five Key Differentiators

The things that make CoresOnDemand different from most other cloud computing providers are:

  1. CoresOnDemand is a non-traditional cloud. It is not an instance-based cluster: there is no hypervisor or virtualization layer. Users know exactly what resources are assigned exclusively to them every time. No layers, no emulation, no delay and no surprises.
  2. CoresOnDemand runs the standard software stack, configured to make full use of the hardware features and interconnect. There are no layers between the hardware and the operating system.
  3. CoresOnDemand utilizes hardware that is purpose-built and benchmarked to maximize the performance of simulation tools, rather than a general-purpose server on caffeine.
  4. CoresOnDemand provides the ability to complete high-performance runs on specialized compute nodes and later perform post-processing on a node suited to post-processing.
  5. CoresOnDemand is a way to lease compute nodes completely and exclusively for a specified duration, including software licenses, compute power and hardware interconnect.

CoresOnDemand is backed by over 20 years of PADT Inc. experience and engineering know-how. Looking at its differentiating features, it becomes apparent that the performance and flexibility of this solution are great advantages for addressing numerical simulation requirements of any type.

To learn more visit www.coresondemand.com or fill out our request form.

Or contact our experts at coresondemand@padtinc.com or 480.813.4884 to schedule a demo or to discuss your requirements.


Using Bright CM to Manage a Linux Cluster

What goes into managing a Linux HPC (High Performance Computing) cluster?

There is an endless list of software, tools and configurations that are required or recommended for efficiently managing a shared HPC cluster environment.

A shared HPC cluster typically has many layers that together deliver a usable environment, one that doesn’t depend on users coordinating closely or on system administrators being superheroes of late-night patching and just-in-time recovery.


Figure 1. Typical layers of a shared HPC cluster.

For each layer in the diagram above there are numerous open-source and paid software tools to choose from. The thing to note is that it’s not just a matter of picking one: system administrators have to weigh user requirements, compatibility tweaks and ease of implementation and use to come up with the perfect recipe (much like carrot cake). Once the choices have been made, users and system administrators have to train on, learn and start utilizing these tools.

HPC @ PADT Inc.

At PADT Inc. we have several Linux-based HPC clusters that are in high demand. Our clusters are based on the CUBE High Value Performance Computing (HVPC) systems and are designed to optimize the performance of numerical simulation software. We were facing several challenges that are common when building and maintaining HPC clusters, mainly in the areas of security, imaging and deployment, resource management, monitoring and maintenance.

To solve these challenges there is an endless list of software tools and packages, both open-source and commercial, and each one comes with its own steep learning curve and mounting time to test and implement.

Enter – Bright Computing

After testing several tools we came across Bright Cluster Manager (Bright CM) from Bright Computing. Bright CM eliminates the need for system administrators to manually install and configure the most common HPC cluster components. On top of that, it provides the majority of the common HPC software packages, tools and libraries in its default software image.

A Bright CM cluster installation starts off with an extremely useful installation wizard that asks all of the right questions while giving the user full control to customize the installation. With a notepad, a couple of hours and a basic understanding of HPC clusters, you are ready to install your applications.


Figure 2. Installation Wizard

An all-knowing dashboard helps system admins manage and monitor the cluster(s), or, if you prefer the CLI, the CM shell (cmsh) provides full functionality through the command line. From the dashboard, system admins can manage multiple clusters down to the finest details.
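For admins who like to script against the shell, here is a rough Python sketch that drives cmsh non-interactively. The exact command strings (the “mode; command” form such as “device; list”) vary with the Bright CM version, so treat them as assumptions to be checked against the documentation.

```python
#!/usr/bin/env python3
"""Sketch of driving the Bright CM shell (cmsh) non-interactively.
The command strings are assumptions; verify them against your Bright CM
version's documentation before relying on them."""
import subprocess

def cmsh(command):
    # cmsh -c runs a command string without opening an interactive shell.
    result = subprocess.run(["cmsh", "-c", command],
                            capture_output=True, text=True, check=False)
    return result.stdout

if __name__ == "__main__":
    print(cmsh("device; list"))    # assumed: enumerate head and compute nodes
    print(cmsh("device; status"))  # assumed: show each node's current state
```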


Figure 3. Cluster Management Interface.

An extensive cluster monitoring interface allows system admins, users and key stakeholders to generate and view detailed reports about the different cluster components.


Figure 4. Cluster Monitoring Interface.

Bright CM has proven to be a valuable tool in managing and optimizing our HPC environment. For further information and a demo of Bright Cluster Manager please contact sales@padtinc.com.

From Piles to Power – My First PADT PC Build

Welcome to the PADT IT Department: now build your own PC

[Editor’s Note: Ahmed has been here a lot longer than two weeks, but we have been keeping him busy, so he is just now finding the time to publish this.]

I have been working for PADT for a little over two weeks now. After taking the ceremonial office tour that left a fine white powder all over my shoes (it’s a PADT Inc. special treat), I was taken to meet my team: David Mastel, “My Boss” for short, who is the IT commander-in-chief at PADT Inc., and Sam Goff, the all-knowing systems administrator.

I was shown to a cubicle that reminded me of the shady computer “recycling” outfits you’d see on a news report highlighting vast amounts of abandoned hardware, except there were no CRT (tube) screens or little children working as slave labor.

Sacred Tradition

This tradition started with Sam, then Manny, and now it was my turn to take this rite of passage. As part of the PADT IT department, I am required by sacred tradition to build my own desktop with my bare hands – then I was handed a screwdriver.

My background is mixed and diverse, but it mostly has one thing in common: we usually depended on pre-built servers, systems and packages. Branded machines carry an implied promise of reliability, support and superiority over custom-built machines.

What most people don’t know about branded machines is that they carry two pretty heavy tariffs:

  1. First, you are paying upfront for the support structure, development, R&D and supply chains required to pump out thousands of machines.
  2. Second, because these large companies are trying to maximize their margins, they will look for a proprietary, cost-effective configuration that will:
    1. Most probably fail or become obsolete as close as possible to the 3-year “expected” lifespan of computers.
    2. Lock users into buying any subsequent upgrade or spare part from them.

Long story short, the last time I fully built a desktop computer was back in college, when a 2 GB hard disk was a technological breakthrough and we could only imagine how many MP3s we could store on it.

The Build

There were two computer cases on the ground: one resembled a 1990 Mercury Sable that was, at best, tolerable as a new car, and the other looked more like a 1990 BMW 325ci, a little old but carrying a heritage and the potential to be great once again.

So, with my obvious choice of case, I began to collect parts from the different bins and drawers, and I was immediately shocked at how “organized” this room really was. I picked up the following:

There are a few things I would have chosen differently, but they were either not available at the time of the build or overkill for a work desktop:

  • Replacing two drives with SSDs to hold the OS and applications
  • Exploring a more powerful Nvidia card (not really required, but desired)

So after a couple of hours of fiddling and checking manuals, this is what the build looks like.

(The case above was the first prototype ANSYS Numerical Simulation workstation, built in 2010. It has a special place in David’s heart.)

Now to the Good STUFF! – Benchmarking the rebuilt CUBE prototype

ANSYS R15.0.7 FEA Benchmarks

Below are the results for the v15sp5 benchmark running distributed parallel on 4 cores.

ANSYS R15.0.7 CFD Benchmarks

Below are the results for the aircraft_2m benchmark using parallel processing on 4 cores.

This machine is a really cool sleeper computer that is more than capable of handling whatever I throw at it.

The only thing that worries me is that when Sam handed me the case to get started, David was trying – but failing – to hide a smile, which makes me feel there is something obviously wrong with my first build that I failed to catch. I guess I will just wait and see.